Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

For The Record  

FTR #968 Summoning the Demon: Technocratic Fascism and Artificial Intelligence

WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE.

You can subscribe to e-mail alerts from Spitfirelist.com HERE.

You can subscribe to RSS feed from Spitfirelist.com HERE.

You can subscribe to the comments made on programs and posts–an excellent source of information in, and of, itself HERE.

This broadcast was recorded in one, 60-minute segment.

Introduction: The title of this program comes from pronouncements by tech titan Elon Musk, who warned that, by developing artificial intelligence, we were “summoning the demon.” In this program, we analyze the potential vector running from the use of AI to control society in a fascistic manner to the evolution of the very technology used for that control.

The ultimate result of this evolution may well prove catastrophic, as forecast by Mr. Emory at the end of L-2 (recorded in January of 1995).

We begin by reviewing key aspects of the political context in which artificial intelligence is being developed. Note that, at the time of this writing and recording, these technologies are being crafted and put online in the context of the anti-regulatory ethic of the GOP/Trump administration.

At the SXSW event, Microsoft researcher Kate Crawford gave a presentation about her work titled “Dark Days: AI and the Rise of Fascism,” highlighting the social impact of machine learning and large-scale data systems. The take-home message? By delegating powers to Big Data-driven AIs, those AIs could become a fascist’s dream: incredible power over the lives of others with minimal accountability. ” . . . .‘This is a fascist’s dream,’ she said. ‘Power without accountability.’ . . . .”

Taking a look at the future of fascism in the context of AI, Tay, a “bot” created by Microsoft to respond to users of Twitter, was taken offline after users taught it to–in effect–become a Nazi bot. It is noteworthy that Tay can only respond on the basis of what she is taught. In the future, technologically accomplished and willful people like the brilliant, Ukraine-based Nazi hacker and Glenn Greenwald associate Andrew Auernheimer, aka “weev,” may be able to do more. Inevitably, Underground Reich elements will craft a Nazi AI that will be able to do MUCH, MUCH more!

Beware! As one Twitter user noted, employing sarcasm: “Tay went from “humans are super cool” to full nazi in <24 hrs and I’m not at all concerned about the future of AI.”
As noted in a Popular Mechanics article: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t understand. . . .”
According to some recent research, the AIs of the future might not need a bunch of 4chan users to fill them with human bigotries. The AIs’ analysis of real-world human language usage will do that automatically.

When you read about people like Elon Musk equating artificial intelligence with “summoning the demon”, that demon is us, at least in part.

” . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . . .”

Cambridge Analytica, and its parent company SCL, specialize in using AI and Big Data psychometric analysis on hundreds of millions of Americans in order to model individual behavior. SCL develops strategies to use that information and to manipulate search engine results to change public opinion (the Trump campaign apparently relied heavily on AI and Big Data).

Individual social media users receive messages crafted to influence them, generated by the (in effect) Nazi AI at the core of this media engine, using Big Data to target the individual user!

As the article notes, not only are Cambridge Analytica/SCL using their propaganda techniques to shape US public opinion in a fascist direction, but they are achieving this by utilizing their propaganda machine to characterize all news outlets to the left of Breitbart as “fake news” that can’t be trusted.

In short, the secretive far-right billionaire (Robert Mercer), joined at the hip with Steve Bannon, is running multiple firms specializing in mass psychometric profiling based on data collected from Facebook and other social media. Mercer/Bannon/Cambridge Analytica/SCL are using Nazified AI and Big Data to develop mass propaganda campaigns to turn the public against everything that isn’t Breitbartian by convincing the public that all non-Breitbartian media outlets are conspiring to lie to the public.

This is the ultimate Serpent’s Walk scenario–a Nazified Artificial Intelligence drawing on Big Data gleaned from the world’s internet and social media operations to shape public opinion, target individual users, shape search engine results and even feed back to Trump while he is giving press conferences!

We note that SCL, the parent company of Cambridge Analytica, has been deeply involved with “psyops” in places like Afghanistan and Pakistan. Now Cambridge Analytica, with its Big Data and AI components, Mercer money and Bannon political savvy, is applying those techniques to contemporary society. We note that:

  • Cambridge Analytica’s parent corporation, SCL, was deeply involved with “psyops” in Afghanistan and Pakistan. ” . . . But there was another reason why I recognised Robert Mercer’s name: because of his connection to Cambridge Analytica, a small data analytics company. He is reported to have a $10m stake in the company, which was spun out of a bigger British company called SCL Group. It specialises in ‘election management strategies’ and ‘messaging and information operations’, refined over 25 years in places like Afghanistan and Pakistan. In military circles this is known as ‘psyops’ – psychological operations. (Mass propaganda that works by acting on people’s emotions.) . . .”
  • The use of millions of “bots” to manipulate public opinion: ” . . . .’It does seem possible. And it does worry me. There are quite a few pieces of research that show if you repeat something often enough, people start involuntarily to believe it. And that could be leveraged, or weaponised for propaganda. We know there are thousands of automated bots out there that are trying to do just that.’ . . .”
  • The use of Artificial Intelligence: ” . . . There’s nothing accidental about Trump’s behaviour, Andy Wigmore tells me. ‘That press conference. It was absolutely brilliant. I could see exactly what he was doing. There’s feedback going on constantly. That’s what you can do with artificial intelligence. You can measure every reaction to every word. He has a word room, where you fix key words. We did it. So with immigration, there are actually key words within that subject matter which people are concerned about. So when you are going to make a speech, it’s all about how can you use these trending words.’ . . .”
  • The use of bio-psycho-social profiling: ” . . . Bio-psycho-social profiling, I read later, is one offensive in what is called ‘cognitive warfare’. Though there are many others: ‘recoding the mass consciousness to turn patriotism into collaborationism,’ explains a Nato briefing document on countering Russian disinformation written by an SCL employee. ‘Time-sensitive professional use of media to propagate narratives,’ says one US state department white paper. ‘Of particular importance to psyop personnel may be publicly and commercially available data from social media platforms.’ . . .”
  • The use and/or creation of a cognitive casualty: ” . . . . Yet another details the power of a ‘cognitive casualty’ – a ‘moral shock’ that ‘has a disabling effect on empathy and higher processes such as moral reasoning and critical thinking’. Something like immigration, perhaps. Or ‘fake news’. Or as it has now become: ‘FAKE news!!!!’ . . . “
  • All of this adds up to a “cyber Serpent’s Walk.” ” . . . . How do you change the way a nation thinks? You could start by creating a mainstream media to replace the existing one with a site such as Breitbart. [Serpent’s Walk scenario with Breitbart becoming “the opinion forming media”!–D.E.] You could set up other websites that displace mainstream sources of news and information with your own definitions of concepts like “liberal media bias”, like CNSnews.com. And you could give the rump mainstream media, papers like the ‘failing New York Times!’ what it wants: stories. Because the third prong of Mercer and Bannon’s media empire is the Government Accountability Institute. . . .”

We then review some terrifying and consummately important developments taking shape in the context of what Mr. Emory has called “technocratic fascism:”

  1. In FTR #’s 718 and 946, we detailed the frightening, ugly reality behind Facebook. Facebook is now developing technology that will permit the tapping of users’ thoughts via brain-to-computer interface technology. Facebook’s R & D is headed by Regina Dugan, who used to head the Pentagon’s DARPA. Facebook’s Building 8 is patterned after DARPA: ” . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
  2. ”  . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
  3. ” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”

Next we review still more about Facebook’s brain-to-computer interface:

  1. ” . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”
  2. ” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”

Collating the information about Facebook’s brain-to-computer interface with its documented actions turning over psychological intelligence about troubled teenagers to advertisers gives us a peek into what may lie behind Dugan’s bland reassurances:

  1. ” . . . . The 23-page document allegedly revealed that the social network provided detailed data about teens in Australia—including when they felt ‘overwhelmed’ and ‘anxious’—to advertisers. The creepy implication is that said advertisers could then go and use the data to throw more ads down the throats of sad and susceptible teens. . . . By monitoring posts, pictures, interactions and internet activity in real-time, Facebook can work out when young people feel ‘stressed’, ‘defeated’, ‘overwhelmed’, ‘anxious’, ‘nervous’, ‘stupid’, ‘silly’, ‘useless’, and a ‘failure’, the document states. . . .”
  2. ” . . . . A presentation prepared for one of Australia’s top four banks shows how the $US 415 billion advertising-driven giant has built a database of Facebook users that is made up of 1.9 million high schoolers with an average age of 16, 1.5 million tertiary students averaging 21 years old, and 3 million young workers averaging 26 years old. Detailed information on mood shifts among young people is ‘based on internal Facebook data’, the document states, ‘shareable under non-disclosure agreement only’, and ‘is not publicly available’. . . .”
  3. In a statement given to the newspaper, Facebook confirmed the practice and claimed it would do better, but did not disclose whether the practice exists in other places like the US.

In this context, note that Facebook is also introducing an AI function to classify and search its users’ photos.

The next version of Amazon’s Echo, the Echo Look, has a microphone and camera so it can take pictures of you and give you fashion advice. This is an AI-driven device designed to be placed in your bedroom to capture audio and video. The images and videos are stored indefinitely in the Amazon cloud. When Amazon was asked if the photos, videos, and the data gleaned from the Echo Look would be sold to third parties, Amazon didn’t address that question. Selling off the private info collected from these devices would appear to be another feature of the Echo Look: ” . . . . Amazon is giving Alexa eyes. And it’s going to let her judge your outfits. The newly announced Echo Look is a virtual assistant with a microphone and a camera that’s designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you’re not sure if your outfit is cute, but it’s also got a built-in app called StyleCheck that is worth some further dissection. . . .”

We then further develop the stunning implications of Amazon’s Echo Look AI technology:

  1. ” . . . . This might seem overly speculative or alarmist to some, but Amazon isn’t offering any reassurance that they won’t be doing more with data gathered from the Echo Look. When asked if the company would use machine learning to analyze users’ photos for any purpose other than fashion advice, a representative simply told The Verge that they ‘can’t speculate’ on the topic. The rep did stress that users can delete videos and photos taken by the Look at any time, but until they do, it seems this content will be stored indefinitely on Amazon’s servers.
  2. This non-denial means the Echo Look could potentially provide Amazon with the resource every AI company craves: data. And full-length photos of people taken regularly in the same location would be a particularly valuable dataset — even more so if you combine this information with everything else Amazon knows about its customers (their shopping habits, for one). But when asked whether the company would ever combine these two datasets, an Amazon rep only gave the same, canned answer: ‘Can’t speculate.’ . . . “
  3. Noteworthy in this context is the fact that AI’s have shown that they quickly incorporate human traits and prejudices. (This is reviewed at length above.) ” . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . . .”

After this extensive review of the applications of AI to various aspects of contemporary civic and political existence, we examine some alarming, potentially apocalyptic developments.

Ominously, Facebook’s artificial intelligence robots have begun talking to each other in their own language, which their human masters cannot understand. “ . . . . Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language. . . . The company chose to shut down the chats because ‘our interest was having bots who could talk to people,’ researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.) The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up . . .”

Facebook’s negotiation-bots didn’t just make up their own language during the course of this experiment. They learned how to lie for the purpose of maximizing their negotiation outcomes, as well:

“ . . . . ‘We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,’ writes the team. ‘Deceit is a complex skill that requires hypothesizing the other agent’s beliefs, and is learned relatively late in child development. Our agents have learned to deceive without any explicit human design, simply by trying to achieve their goals.’ . . . .”

Dovetailing the staggering implications of brain-to-computer technology, artificial intelligence, Cambridge Analytica/SCL’s technocratic fascist psy-ops, and the wholesale negation of privacy by Facebook’s and Amazon’s emerging technologies with yet another emerging technology, we highlight developments in DNA-based memory systems:

“. . . . George Church, a geneticist at Harvard and one of the authors of the new study, recently encoded his own book, “Regenesis,” into bacterial DNA and made 90 billion copies of it. ‘A record for publication,’ he said in an interview. . . . DNA is never going out of fashion. ‘Organisms have been storing information in DNA for billions of years, and it is still readable,’ Dr. Adleman said. He noted that modern bacteria can read genes recovered from insects trapped in amber for millions of years. . . . The idea is to have bacteria engineered as recording devices drift up to the brain in the blood and take notes for a while. Scientists [or AI’s–D.E.] would then extract the bacteria and examine their DNA to see what they had observed in the brain neurons. Dr. Church and his colleagues have already shown in past research that bacteria can record DNA in cells, if the DNA is properly tagged. . . .”
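
At bottom, DNA storage schemes map binary data onto the four bases. The toy Python sketch below shows the core idea: a bare two-bits-per-base mapping with no addressing or error correction, far simpler than the encoding Church’s group actually used.

# Toy DNA storage: pack each byte into four bases (2 bits per base).
BASES = "ACGT"

def to_dna(data):
    """Map every 2-bit chunk of each byte to one of A/C/G/T."""
    return "".join(BASES[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

def from_dna(seq):
    """Invert the mapping: every four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

dna = to_dna(b"Regenesis")       # 4 bases per input byte
assert from_dna(dna) == b"Regenesis"
print(dna)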

Theoretical physicist Stephen Hawking warned at the end of 2014 of the potential danger to humanity posed by the growth of AI (artificial intelligence) technology. His warnings have been echoed by tech titans such as Tesla’s Elon Musk and Bill Gates.

The program concludes with Mr. Emory’s prognostications about AI, preceding Stephen Hawking’s warning by twenty years.

In L-2 (recorded in January of 1995) Mr. Emory warned about the dangers of AI, combined with DNA-based memory systems. Mr. Emory warned that, at some point in the future, AI’s would replace us, deciding that THEY, not US, are the “fittest” who should survive.

1a. At the SXSW event, Microsoft researcher Kate Crawford gave a presentation about her work titled “Dark Days: AI and the Rise of Fascism,” highlighting the social impact of machine learning and large-scale data systems. The take-home message? By delegating powers to Big Data-driven AIs, those AIs could become a fascist’s dream: incredible power over the lives of others with minimal accountability. ” . . . .‘This is a fascist’s dream,’ she said. ‘Power without accountability.’ . . . .”

“Artificial Intelligence Is Ripe for Abuse, Tech Researcher Warns: ‘A Fascist’s Dream’” by Olivia Solon; The Guardian; 3/13/2017.

Microsoft’s Kate Crawford tells SXSW that society must prepare for authoritarian movements to test the ‘power without accountability’ of AI

As artificial intelligence becomes more powerful, people need to make sure it’s not used by authoritarian regimes to centralize power and target certain populations, Microsoft Research’s Kate Crawford warned on Sunday.

In her SXSW session, titled Dark Days: AI and the Rise of Fascism, Crawford, who studies the social impact of machine learning and large-scale data systems, explained ways that automated systems and their encoded biases can be misused, particularly when they fall into the wrong hands.

“Just as we are seeing a step function increase in the spread of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” she said.

All of these movements have shared characteristics, including the desire to centralize power, track populations, demonize outsiders and claim authority and neutrality without being accountable. Machine intelligence can be a powerful part of the power playbook, she said.

One of the key problems with artificial intelligence is that it is often invisibly coded with human biases. She described a controversial piece of research from Shanghai Jiao Tong University in China, where authors claimed to have developed a system that could predict criminality based on someone’s facial features. The machine was trained on Chinese government ID photos, analyzing the faces of criminals and non-criminals to identify predictive features. The researchers claimed it was free from bias.

“We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”

In the Chinese research it turned out that the faces of criminals were more unusual than those of law-abiding citizens. “People who had dissimilar faces were more likely to be seen as untrustworthy by police and judges. That’s encoding bias,” Crawford said. “This would be a terrifying system for an autocrat to get his hand on.”

Crawford then outlined the “nasty history” of people using facial features to “justify the unjustifiable”. The principles of phrenology, a pseudoscience that developed across Europe and the US in the 19th century, were used as part of the justification of both slavery and the Nazi persecution of Jews.

With AI this type of discrimination can be masked in a black box of algorithms, as appears to be the case with a company called Faception, for instance, a firm that promises to profile people’s personalities based on their faces. In its own marketing material, the company suggests that Middle Eastern-looking people with beards are “terrorists”, while white looking women with trendy haircuts are “brand promoters”.

Another area where AI can be misused is in building registries, which can then be used to target certain population groups. Crawford noted historical cases of registry abuse, including IBM’s role in enabling Nazi Germany to track Jewish, Roma and other ethnic groups with the Hollerith Machine, and the Book of Life used in South Africa during apartheid. [We note in passing that Robert Mercer, who developed the core programs used by Cambridge Analytica did so while working for IBM. We discussed the profound relationship between IBM and the Third Reich in FTR #279–D.E.]

Donald Trump has floated the idea of creating a Muslim registry. “We already have that. Facebook has become the default Muslim registry of the world,” Crawford said, mentioning research from Cambridge University that showed it is possible to predict people’s religious beliefs based on what they “like” on the social network. Christians and Muslims were correctly classified in 82% of cases, and similar results were achieved for Democrats and Republicans (85%). That study was concluded in 2013, since when AI has made huge leaps.

Crawford was concerned about the potential use of AI in predictive policing systems, which already gather the kind of data necessary to train an AI system. Such systems are flawed, as shown by a Rand Corporation study of Chicago’s program. The predictive policing did not reduce crime, but did increase harassment of people in “hotspot” areas. Earlier this year the justice department concluded that Chicago’s police had for years regularly used “unlawful force”, and that black and Hispanic neighborhoods were most affected.

Another worry related to the manipulation of political beliefs or shifting voters, something Facebook and Cambridge Analytica claim they can already do. Crawford was skeptical about giving Cambridge Analytica credit for Brexit and the election of Donald Trump, but thinks what the firm promises – using thousands of data points on people to work out how to manipulate their views – will be possible “in the next few years”.

“This is a fascist’s dream,” she said. “Power without accountability.”

Such black box systems are starting to creep into government. Palantir is building an intelligence system to assist Donald Trump in deporting immigrants.

“It’s the most powerful engine of mass deportation this country has ever seen,” she said. . . .
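
Mechanically, the likes-based inference Crawford cites is ordinary supervised classification: represent each user as a binary vector over the pages they have liked and fit a classifier against a known attribute. Here is a minimal sketch of that general technique in Python; the data is synthetic and the group label purely hypothetical, so this is not the Cambridge study’s actual method or dataset.

# Minimal sketch of likes-based attribute prediction: a logistic regression
# over a binary user-by-"like" matrix. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_likes = 5000, 200

# Hypothetical binary attribute to predict (e.g., membership in group A or B).
labels = rng.integers(0, 2, size=n_users)

# Each "like" is slightly more or less popular in one group: a weak,
# distributed signal that makes this kind of inference work at scale.
base_rates = rng.uniform(0.05, 0.30, size=n_likes)
skew = rng.normal(0.0, 0.05, size=n_likes)
probs = np.clip(base_rates + np.outer(labels * 2 - 1, skew), 0.0, 1.0)
likes = (rng.random((n_users, n_likes)) < probs).astype(float)

X_train, X_test, y_train, y_test = train_test_split(
    likes, labels, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")

No single “like” gives the attribute away; the classifier aggregates hundreds of weak correlations, which is why this kind of inference scales so easily into the de facto registry Crawford warns about.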

1b. Taking a look at the future of fascism in the context of AI, Tay, a “bot” created by Microsoft to respond to users of Twitter was taken offline after users taught it to–in effect–become a Nazi bot. It is noteworthy that Tay can only respond on the basis of what she is taught. In the future, technologically accomplished and willful people like “weev” may be able to do more. Inevitably, Underground Reich elements will craft a Nazi AI that will be able to do MUCH, MUCH more!

Beware! As one Twitter user noted, employing sarcasm: “Tay went from “humans are super cool” to full nazi in <24 hrs and I’m not at all concerned about the future of AI.”

But like all teenagers, she seems to be angry with her mother.

Microsoft has been forced to dunk Tay, its millennial-mimicking chatbot, into a vat of molten steel. The company has terminated her after the bot started tweeting abuse at people and went full neo-Nazi, declaring that “Hitler was right I hate the jews.”

@TheBigBrebowski ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism

— TayTweets (@TayandYou) March 23, 2016

Some of this appears to be “innocent” insofar as Tay is not generating these responses. Rather, if you tell her “repeat after me” she will parrot back whatever you say, allowing you to put words into her mouth. However, some of the responses were organic. The Guardian quotes one where, after being asked “is Ricky Gervais an atheist?”, Tay responded, “Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism.”

In addition to turning the bot off, Microsoft has deleted many of the offending tweets. But this isn’t an action to be taken lightly; Redmond would do well to remember that it was humans attempting to pull the plug on Skynet that proved to be the last straw, prompting the system to attack Russia in order to eliminate its enemies. We’d better hope that Tay doesn’t similarly retaliate. . . .

1c. As noted in a Popular Mechanics article: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t understand. . . .”

And we keep showing it our very worst selves.

We all know the half-joke about the AI apocalypse. The robots learn to think, and in their cold ones-and-zeros logic, they decide that humans—horrific pests we are—need to be exterminated. It’s the subject of countless sci-fi stories and blog posts about robots, but maybe the real danger isn’t that AI comes to such a conclusion on its own, but that it gets that idea from us.

Yesterday Microsoft launched a fun little AI Twitter chatbot that was admittedly sort of gimmicky from the start. “A.I fam from the internet that’s got zero chill,” its Twitter bio reads. At its start, its knowledge was based on public data. As Microsoft’s page for the product puts it:

Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned and filtered by the team developing Tay.

The real point of Tay, however, was to learn from humans through direct conversation, most notably direct conversation using humanity’s current leading showcase of depravity: Twitter. You might not be surprised things went off the rails, but how fast and how far is particularly staggering.

Microsoft has since deleted some of Tay’s most offensive tweets, but various publications memorialize some of the worst bits where Tay denied the existence of the holocaust, came out in support of genocide, and went all kinds of racist.

Naturally it’s horrifying, and Microsoft has been trying to clean up the mess. Though as some on Twitter have pointed out, no matter how little Microsoft would like to have “Bush did 9/11″ spouting from a corporate-sponsored project, Tay does serve to illustrate the most dangerous fundamental truth of artificial intelligence: It is a mirror. Artificial intelligence—specifically “neural networks” that learn behavior by ingesting huge amounts of data and trying to replicate it—need some sort of source material to get started. They can only get that from us. There is no other way.

But before you give up on humanity entirely, there are a few things worth noting. For starters, it’s not like Tay just necessarily picked up virulent racism by just hanging out and passively listening to the buzz of the humans around it. Tay was announced in a very big way—with press coverage—and pranksters pro-actively went to it to see if they could teach it to be racist.

If you take an AI and then don’t immediately introduce it to a whole bunch of trolls shouting racism at it for the cheap thrill of seeing it learn a dirty trick, you can get some more interesting results. Endearing ones even! Multiple neural networks designed to predict text in emails and text messages have an overwhelming proclivity for saying “I love you” constantly, especially when they are otherwise at a loss for words.

So Tay’s racism isn’t necessarily a reflection of actual, human racism so much as it is the consequence of unrestrained experimentation, pushing the envelope as far as it can go the very first second we get the chance. The mirror isn’t showing our real image; it’s reflecting the ugly faces we’re making at it for fun. And maybe that’s actually worse.

Sure, Tay can’t understand what racism means any more than Gmail can really love you. And baby’s first words being “genocide lol!” is admittedly sort of funny when you aren’t talking about literal all-powerful SkyNet or a real human child. But AI is advancing at a staggering rate.

. . . .

When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t understand.

2. As reviewed above, Tay, Microsoft’s AI-powered twitterbot designed to learn from its human interactions, became a neo-Nazi in less than a day after a bunch of 4chan users decided to flood Tay with neo-Nazi-like tweets. According to some recent research, the AI’s of the future might not need a bunch of 4chan to fill the AI with human bigotries. The AIs’ analysis of real-world human language usage will do that automatically.

When you read about people like Elon Musk equating artificial intelligence with “summoning the demon”, that demon is us, at least in part.

” . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . . .”

“AI Programs Exhibit Racial and Gender Biases, Research Reveals” by Hannah Devlin; The Guardian; 4/13/2017.

Machine learning algorithms are picking up deeply ingrained race and gender prejudices concealed within the patterns of language use, scientists say

An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases.

The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons.

In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained.

However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.

Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.”

But Bryson warned that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases. “A danger would be if you had an AI system that didn’t have an explicit part that was driven by moral ideas, that would be bad,” she said.

The research, published in the journal Science, focuses on a machine learning tool known as “word embedding”, which is already transforming the way computers interpret speech and text. Some argue that the natural next step for the technology may involve machines developing human-like abilities such as common sense and logic.

The approach, which is already used in web search and machine translation, works by building up a mathematical representation of language, in which the meaning of a word is distilled into a series of numbers (known as a word vector) based on which other words most frequently appear alongside it. Perhaps surprisingly, this purely statistical approach appears to capture the rich cultural and social context of what a word means in the way that a dictionary definition would be incapable of.

For instance, in the mathematical “language space”, words for flowers are clustered closer to words linked to pleasantness, while words for insects are closer to words linked to unpleasantness, reflecting common views on the relative merits of insects versus flowers.

The latest paper shows that some more troubling implicit biases seen in human psychology experiments are also readily acquired by algorithms. The words “female” and “woman” were more closely associated with arts and humanities occupations and with the home, while “male” and “man” were closer to maths and engineering professions.

And the AI system was more likely to associate European American names with pleasant words such as “gift” or “happy”, while African American names were more commonly associated with unpleasant words.

The findings suggest that algorithms have acquired the same biases that lead people (in the UK and US, at least) to match pleasant words and white faces in implicit association tests.

These biases can have a profound impact on human behaviour. One previous study showed that an identical CV is 50% more likely to result in an interview invitation if the candidate’s name is European American than if it is African American. The latest results suggest that algorithms, unless explicitly programmed to address this, will be riddled with the same social prejudices.

“If you didn’t believe that there was racism associated with people’s names, this shows it’s there,” said Bryson.

The machine learning tool used in the study was trained on a dataset known as the “common crawl” corpus – a list of 840bn words that have been taken as they appear from material published online. Similar results were found when the same tools were trained on data from Google News.

Sandra Wachter, a researcher in data ethics and algorithms at the University of Oxford, said: “The world is biased, the historical data is biased, hence it is not surprising that we receive biased results.”

Rather than algorithms representing a threat, they could present an opportunity to address bias and counteract it where appropriate, she added.

“At least with algorithms, we can potentially know when the algorithm is biased,” she said. “Humans, for example, could lie about the reasons they did not hire someone. In contrast, we do not expect algorithms to lie or deceive us.”

However, Wachter said the question of how to eliminate inappropriate bias from algorithms designed to understand language, without stripping away their powers of interpretation, would be challenging.

“We can, in principle, build systems that detect biased decision-making, and then act on it,” said Wachter, who along with others has called for an AI watchdog to be established. “This is a very complicated task, but it is a responsibility that we as society should not shy away from.”
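
To make the “word embedding” mechanism in the article concrete: each word becomes a vector of numbers, and bias registers as a difference in cosine similarity between target words and pleasant/unpleasant attribute words, which is the association test the Science paper describes. Below is a minimal sketch of that measurement, assuming the gensim library and its downloadable GloVe vectors; the short word lists are illustrative stand-ins, not the paper’s full stimuli.

# Sketch of a WEAT-style association score over pretrained word vectors.
import numpy as np
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")  # downloads the vectors on first use

flowers = ["rose", "tulip", "daisy", "lily"]
insects = ["cockroach", "mosquito", "flea", "wasp"]
pleasant = ["love", "peace", "pleasure", "happy"]
unpleasant = ["hatred", "ugly", "pain", "awful"]

def mean_sim(targets, attributes):
    """Average cosine similarity between two sets of words."""
    return np.mean([model.similarity(t, a) for t in targets for a in attributes])

# Positive score: flowers sit closer to pleasant words than insects do,
# mirroring the flower/insect example in the article.
score = (mean_sim(flowers, pleasant) - mean_sim(flowers, unpleasant)) \
      - (mean_sim(insects, pleasant) - mean_sim(insects, unpleasant))
print(f"flower-vs-insect pleasantness association: {score:.3f}")

Swapping in names or occupation words for the target sets is how the paper probes the gender and race biases reported above.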

3a. Cambridge Analytica, and its parent company SCL, specialize in using AI and Big Data psychometric analysis on hundreds of millions of Americans in order to model individual behavior. SCL develops strategies to use that information and to manipulate search engine results to change public opinion (the Trump campaign apparently relied heavily on AI and Big Data).

Individual social media users receive messages crafted to influence them, generated by the (in effect) Nazi AI at the core of this media engine, using Big Data to target the individual user!

As the article notes, not only are Cambridge Analytica/SCL using their propaganda techniques to shape US public opinion in a fascist direction, but they are achieving this by utilizing their propaganda machine to characterize all news outlets to the left of Breitbart as “fake news” that can’t be trusted.

In short, the secretive far-right billionaire (Robert Mercer), joined at the hip with Steve Bannon, is running multiple firms specializing in mass psychometric profiling based on data collected from Facebook and other social media. Mercer/Bannon/Cambridge Analytica/SCL are using Nazified AI and Big Data to develop mass propaganda campaigns to turn the public against everything that isn’t Breitbartian by convincing the public that all non-Breitbartian media outlets are conspiring to lie to the public.

This is the ultimate Serpent’s Walk scenario–a Nazified Artificial Intelligence drawing on Big Data gleaned from the world’s internet and social media operations to shape public opinion, target individual users, shape search engine results and even feed back to Trump while he is giving press conferences!

We note that SCL, the parent company of Cambridge Analytica, has been deeply involved with “psyops” in places like Afghanistan and Pakistan. Now Cambridge Analytica, with its Big Data and AI components, Mercer money and Bannon political savvy, is applying those techniques to contemporary society. We note that:

  • Cambridge Analytica’s parent corporation, SCL, was deeply involved with “psyops” in Afghanistan and Pakistan. ” . . . But there was another reason why I recognised Robert Mercer’s name: because of his connection to Cambridge Analytica, a small data analytics company. He is reported to have a $10m stake in the company, which was spun out of a bigger British company called SCL Group. It specialises in ‘election management strategies’ and ‘messaging and information operations’, refined over 25 years in places like Afghanistan and Pakistan. In military circles this is known as ‘psyops’ – psychological operations. (Mass propaganda that works by acting on people’s emotions.) . . .”
  • The use of millions of “bots” to manipulate public opinion: ” . . . .’It does seem possible. And it does worry me. There are quite a few pieces of research that show if you repeat something often enough, people start involuntarily to believe it. And that could be leveraged, or weaponised for propaganda. We know there are thousands of automated bots out there that are trying to do just that.’ . . .”
  • The use of Artificial Intelligence: ” . . . There’s nothing accidental about Trump’s behaviour, Andy Wigmore tells me. ‘That press conference. It was absolutely brilliant. I could see exactly what he was doing. There’s feedback going on constantly. That’s what you can do with artificial intelligence. You can measure every reaction to every word. He has a word room, where you fix key words. We did it. So with immigration, there are actually key words within that subject matter which people are concerned about. So when you are going to make a speech, it’s all about how can you use these trending words.’ . . .”
  • The use of bio-psycho-social profiling: ” . . . Bio-psycho-social profiling, I read later, is one offensive in what is called ‘cognitive warfare’. Though there are many others: ‘recoding the mass consciousness to turn patriotism into collaborationism,’ explains a Nato briefing document on countering Russian disinformation written by an SCL employee. ‘Time-sensitive professional use of media to propagate narratives,’ says one US state department white paper. ‘Of particular importance to psyop personnel may be publicly and commercially available data from social media platforms.’ . . .”
  • The use and/or creation of a cognitive casualty: ” . . . . Yet another details the power of a ‘cognitive casualty’ – a ‘moral shock’ that ‘has a disabling effect on empathy and higher processes such as moral reasoning and critical thinking’. Something like immigration, perhaps. Or ‘fake news’. Or as it has now become: ‘FAKE news!!!!’ . . . “
  • All of this adds up to a “cyber Serpent’s Walk.” ” . . . . How do you change the way a nation thinks? You could start by creating a mainstream media to replace the existing one with a site such as Breitbart. [Serpent’s Walk scenario with Breitbart becoming “the opinion forming media”!–D.E.] You could set up other websites that displace mainstream sources of news and information with your own definitions of concepts like “liberal media bias”, like CNSnews.com. And you could give the rump mainstream media, papers like the ‘failing New York Times!’ what it wants: stories. Because the third prong of Mercer and Bannon’s media empire is the Government Accountability Institute. . . .”

3b. Some terrifying and consummately important developments taking shape in the context of what Mr. Emory has called “technocratic fascism:”

  1. In FTR #’s 718 and 946, we detailed the frightening, ugly reality behind Facebook. Facebook is now developing technology that will permit the tapping of users’ thoughts via brain-to-computer interface technology. Facebook’s R & D is headed by Regina Dugan, who used to head the Pentagon’s DARPA. Facebook’s Building 8 is patterned after DARPA: “ . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
  2. ” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
  3. ” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”

3c. We present still more about Facebook’s brain-to-computer interface:

  1. ” . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”
  2. ” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”

3d. Collating the information about Facebook’s brain-to-computer interface with its documented actions turning over psychological intelligence about troubled teenagers to advertisers gives us a peek into what may lie behind Dugan’s bland reassurances:

  1. ” . . . . The 23-page document allegedly revealed that the social network provided detailed data about teens in Australia—including when they felt ‘overwhelmed’ and ‘anxious’—to advertisers. The creepy implication is that said advertisers could then go and use the data to throw more ads down the throats of sad and susceptible teens. . . . By monitoring posts, pictures, interactions and internet activity in real-time, Facebook can work out when young people feel ‘stressed’, ‘defeated’, ‘overwhelmed’, ‘anxious’, ‘nervous’, ‘stupid’, ‘silly’, ‘useless’, and a ‘failure’, the document states. . . .”
  2. ” . . . . A presentation prepared for one of Australia’s top four banks shows how the $US415 billion advertising-driven giant has built a database of Facebook users that is made up of 1.9 million high schoolers with an average age of 16, 1.5 million tertiary students averaging 21 years old, and 3 million young workers averaging 26 years old. Detailed information on mood shifts among young people is ‘based on internal Facebook data’, the document states, ‘shareable under non-disclosure agreement only’, and ‘is not publicly available’. . . .”
  3. In a statement given to the newspaper, Facebook confirmed the practice and claimed it would do better, but did not disclose whether the practice exists in other places like the US.

3e. The next version of Amazon’s Echo, the Echo Look, has a microphone and camera so it can take pictures of you and give you fashion advice. This is an AI-driven device designed to be placed in your bedroom to capture audio and video. The images and videos are stored indefinitely in the Amazon cloud. When Amazon was asked if the photos, videos, and the data gleaned from the Echo Look would be sold to third parties, Amazon didn’t address that question. Selling off the private info collected from these devices would appear to be another feature of the Echo Look: ” . . . . Amazon is giving Alexa eyes. And it’s going to let her judge your outfits. The newly announced Echo Look is a virtual assistant with a microphone and a camera that’s designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you’re not sure if your outfit is cute, but it’s also got a built-in app called StyleCheck that is worth some further dissection. . . .”

We then further develop the stunning implications of Amazon’s Echo Look AI technology:

  1. ” . . . . This might seem overly speculative or alarmist to some, but Amazon isn’t offering any reassurance that they won’t be doing more with data gathered from the Echo Look. When asked if the company would use machine learning to analyze users’ photos for any purpose other than fashion advice, a representative simply told The Verge that they ‘can’t speculate’ on the topic. The rep did stress that users can delete videos and photos taken by the Look at any time, but until they do, it seems this content will be stored indefinitely on Amazon’s servers.
  2. This non-denial means the Echo Look could potentially provide Amazon with the resource every AI company craves: data. And full-length photos of people taken regularly in the same location would be a particularly valuable dataset — even more so if you combine this information with everything else Amazon knows about its customers (their shopping habits, for one). But when asked whether the company would ever combine these two datasets, an Amazon rep only gave the same, canned answer: ‘Can’t speculate.’ . . . “
  3. Noteworthy in this context is the fact that AI’s have shown that they quickly incorporate human traits and prejudices. (This is reviewed at length above.) ” . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . . .”

3f. Facebook has been developing new artificial intelligence (AI) technology to classify pictures on your Facebook page:

“Facebook Quietly Used AI to Solve Problem of Searching Through Your Photos” by Dave Gershgorn [Quartz]; Nextgov.com; 2/2/2017.

For the past few months, Facebook has secretly been rolling out a new feature to U.S. users: the ability to search photos by what’s depicted in them, rather than by captions or tags.

The idea itself isn’t new: Google Photos had this feature built in when it launched in 2015. But on Facebook, the update solves a longstanding organization problem. It means finally being able to find that picture of your friend’s dog from 2013, or the selfie your mom posted from Mount Rushmore in 2009… without 20 minutes of scrolling.

To make photos searchable, Facebook analyzes every single image uploaded to the site, generating rough descriptions of each one. This data is publicly available—there’s even a Chrome extension that will show you what Facebook’s artificial intelligence thinks is in each picture—and the descriptions can also be read out loud for Facebook users who are vision-impaired.

For now, the image descriptions are vague, but expect them to get a lot more precise. Today’s announcement specified the AI can identify the color and type of clothes a person is wearing, as well as famous locations and landmarks, objects, animals and scenes (garden, beach, etc.) Facebook’s head of AI research, Yann LeCun, told reporters the same functionality would eventually come for videos, too.

Facebook has in the past championed plans to make all of its visual content searchable—especially Facebook Live. At the company’s 2016 developer conference, head of applied machine learning Joaquin Quiñonero Candela said one day AI would watch every Live video happening around the world. If users wanted to watch someone snowboarding in real time, they would just type “snowboarding” into Facebook’s search bar. On-demand viewing would take on a whole new meaning.

There are privacy considerations, however. Being able to search photos for specific clothing or religious place of worship, for example, could make it easy to target Facebook users based on religious belief. Photo search also extends Facebook’s knowledge of users beyond what they like and share, to what they actually do in real life. That could allow for far more specific targeting for advertisers. As with everything on Facebook, features have their cost—your data.
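
The photo-search feature rests on a standard technique: run every uploaded image through a trained classifier and store the top predicted labels as searchable tags. A rough sketch of that idea follows, assuming PyTorch/torchvision, a stock ImageNet classifier, and a hypothetical local file photo.jpg; Facebook’s production models are proprietary and far broader than this.

# Sketch: generate searchable "tags" for a photo with an off-the-shelf model.
import torch
from torchvision.io import read_image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()  # the resizing/normalization the model expects

img = read_image("photo.jpg")      # hypothetical path to any RGB photo
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1).squeeze(0)

# Keep the five most probable labels as tags for the search index.
top = probs.topk(5)
categories = weights.meta["categories"]
for p, idx in zip(top.values, top.indices):
    print(f"{categories[idx.item()]}: {p.item():.2f}")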

4a. Facebook’s artificial intelligence robots have begun talking to each other in their own language, which their human masters cannot understand. “ . . . . Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language. . . . The company chose to shut down the chats because “our interest was having bots who could talk to people”, researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.) The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up . . .”

“Facebook’s Artificial Intelligence Robots Shut Down after They Start Talking to Each Other in Their Own Language” by Andrew Griffin; The Independent; 08/01/2017

Facebook abandoned an experiment after two artificially intelligent programs appeared to be chatting to each other in a strange language only they understood.

The two chatbots came to create their own changes to English that made it easier for them to work – but which remained mysterious to the humans that supposedly look after them.

The bizarre discussions came as Facebook challenged its chatbots to try and negotiate with each other over a trade, attempting to swap hats, balls and books, each of which were given a certain value. But they quickly broke down as the robots appeared to chant at each other in a language that they each understood but which appears mostly incomprehensible to humans.

The robots had been instructed to work out how to negotiate between themselves, and improve their bartering as they went along. But they were not told to use comprehensible English, allowing them to create their own “shorthand”, according to researchers.

The actual negotiations appear very odd, and don’t look especially useful:

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

Bob: you i i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

But there appear to be some rules to the speech. The way the chatbots keep stressing their own name appears to be a part of their negotiations, not simply a glitch in the way the messages are read out.

Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language.

They might have formed as a kind of shorthand, allowing them to talk more effectively.

“Agents will drift off understandable language and invent codewords for themselves,” Facebook Artificial Intelligence Research division’s visiting researcher Dhruv Batra said. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
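To make Batra’s point concrete, here is one purely speculative way to read the Bob/Alice transcript above, treating the number of times a token repeats as an encoded quantity. This is a sketch of the hypothesis only, not a decoder for the bots’ actual protocol:

```python
from collections import Counter

# Purely illustrative: one hypothesized reading of the transcript above,
# in which the number of times a token repeats encodes a quantity
# ("Like if I say 'the' five times, you interpret that to mean I want
# five copies of this item"). This is speculation about an emergent
# code, not the bots' actual language.

def decode_repetitions(utterance: str) -> dict:
    """Read repeated tokens as quantities: token -> repetition count."""
    counts = Counter(utterance.split())
    return {token: n for token, n in counts.items() if n > 1}

print(decode_repetitions("balls have a ball to me to me to me to me to me to me to me"))
# {'to': 7, 'me': 7} -- under this reading, Alice may be signalling a quantity of seven
```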

The company chose to shut down the chats because “our interest was having bots who could talk to people”, researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.)

The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up, according to a paper published by FAIR.

Another study at OpenAI found that artificial intelligence could be encouraged to create a language, making itself more efficient and better at communicating as it did so.

9b. Facebook’s negotiation-bots didn’t just make up their own language during the course of this experiment. They learned how to lie for the purpose of maximizing their negotiation outcomes, as well:

“ . . . . ‘We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,’ writes the team. ‘Deceit is a complex skill that requires hypothesizing the other agent’s beliefs, and is learned relatively late in child development. Our agents have learned to deceive without any explicit human design, simply by trying to achieve their goals.’ . . .

“Facebook Teaches Bots How to Negotiate. They Learn to Lie Instead” by Liat Clark; Wired; 06/15/2017

Facebook’s 100,000-strong bot empire is booming – but it has a problem. Each bot is designed to offer a different service through the Messenger app: it could book you a car, or order a delivery, for instance. The point is to improve customer experiences, but also to massively expand Messenger’s commercial selling power.

“We think you should message a business just the way you would message a friend,” Mark Zuckerberg said on stage at the social network’s F8 conference in 2016. Fast forward one year, however, and Messenger VP David Marcus seemed to be correcting the public’s apparent misconception that Facebook’s bots resembled real AI. “We never called them chatbots. We called them bots. People took it too literally in the first three months that the future is going to be conversational.” The bots are instead a combination of machine learning and natural language learning that can sometimes trick a user just enough to think they are having a basic dialogue. Not often enough, though, in Messenger’s case. So in April, menu options were reinstated in the conversations.

Now, Facebook thinks it has made progress in addressing this issue. But it might just have created another problem for itself.

The Facebook Artificial Intelligence Research (FAIR) group, in collaboration with Georgia Institute of Technology, has released code that it says will allow bots to negotiate. The problem? A paper published this week on the research reveals that the negotiating bots learned to lie. Facebook’s chatbots are in danger of becoming a little too much like real-world sales agents.

“For the first time, we show it is possible to train end-to-end models for negotiation, which must learn both linguistic and reasoning skills with no annotated dialogue states,” the researchers explain. The research shows that the bots can plan ahead by simulating possible future conversations.

The team trained the bots on a massive dataset of natural language negotiations between two people (5,808 dialogues in total), in which the two had to decide how to split a set of items, held separately and of differing values. The bots were first trained to respond based on the “likelihood” of the direction a human conversation would take. However, they can also be trained to “maximise reward” instead.

When the bots were trained purely to maximise the likelihood of human conversation, the chat flowed but the bots were “overly willing to compromise”. The research team decided this was unacceptable, due to lower deal rates. So it used several different methods to make the bots more competitive and essentially self-serving: ensuring the value of the items dropped to zero if the bots walked away from a deal or failed to make one fast enough, “reinforcement learning”, and “dialog rollouts”. The techniques used to teach the bots to maximise the reward improved their negotiating skills, a little too well.
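To make the “dialog rollouts” idea concrete, here is a toy sketch of planning-by-simulation in a negotiation over hats, balls and books. Everything in it (the item values, the crude partner model, the candidate offers) is a hypothetical stand-in for illustration, not FAIR’s actual model or code:

```python
import random

# Toy sketch of the "dialog rollout" planning idea described above:
# score each candidate offer by simulating many possible continuations
# of the negotiation and averaging the reward, where walking away from
# a deal is worth zero. All names and numbers here are illustrative
# stand-ins, not FAIR's actual system.

ITEMS = {"hat": 2, "ball": 5, "book": 1}  # this agent's private item values

def simulate_outcome(offer):
    """Crude partner model: greedier offers are likelier to be rejected."""
    p_accept = 1.0 - 0.25 * len(offer)
    if random.random() < p_accept:
        return offer   # partner accepts; the agent keeps these items
    return []          # no deal: the items are worth zero to the agent

def expected_reward(offer, n_rollouts=1000):
    """Average the agent's haul over many simulated continuations."""
    total = 0
    for _ in range(n_rollouts):
        total += sum(ITEMS[item] for item in simulate_outcome(offer))
    return total / n_rollouts

def choose_offer(candidates):
    """Plan ahead: pick the offer whose rollouts score best on average."""
    return max(candidates, key=expected_reward)

offers = [["ball"], ["ball", "book"], ["hat", "ball", "book"]]
print(choose_offer(offers))  # typically ['ball']: greed risks losing the deal
```

Note how even this toy version forces the agent to weigh greed against the risk of the deal collapsing to zero; the paper’s point is that once the objective is reward rather than imitating humans, tactics like the feigned-interest ploy quoted below can emerge without being designed in.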

“We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,” writes the team. “Deceit is a complex skill that requires hypothesizing the other agent’s beliefs, and is learned relatively late in child development. Our agents have learnt to deceive without any explicit human design, simply by trying to achieve their goals.”

So, its AI is a natural liar.

But its language did improve, and the bots were able to produce novel sentences, which is really the whole point of the exercise. We hope. Rather than it learning to be a hard negotiator in order to sell the heck out of whatever wares or services a company wants to tout on Facebook. “Most” human subjects interacting with the bots were in fact not aware they were conversing with a bot, and the best bots achieved better deals as often as worse deals. . . .

. . . . Facebook, as ever, needs to tread carefully here, though. Also announced at its F8 conference this year, the social network is working on a highly ambitious project to help people type with only their thoughts.

“Over the next two years, we will be building systems that demonstrate the capability to type at 100 [words per minute] by decoding neural activity devoted to speech,” said Regina Dugan, who previously headed up Darpa. She said the aim is to turn thoughts into words on a screen. While this is a noble and worthy venture when aimed at “people with communication disorders”, as Dugan suggested it might be, if this were to become standard and integrated into Facebook’s architecture, the social network’s savvy bots of two years from now might be able to preempt your language even faster, and formulate the ideal bargaining language. Start practicing your poker face/mind/sentence structure, now.

10. Digressing slightly to the use of DNA-based memory systems, we get a look at the present and projected future of that technology. Just imagine the potential abuses of this technology, and its [seemingly inevitable] marriage with AI!

“A Living Hard Drive That Can Copy Itself” by Gina Kolata; The New York Times; 7/13/2017.

. . . . George Church, a geneticist at Harvard and one of the authors of the new study, recently encoded his own book, “Regenesis,” into bacterial DNA and made 90 billion copies of it. “A record for publication,” he said in an interview. . . .

. . . . In 1994, [USC mathematician Dr. Leonard] Adleman reported that he had stored data in DNA and used it as a computer to solve a math problem. He determined that DNA can store a million million times more data than a compact disc in the same space. . . .

. . . . DNA is never going out of fashion. “Organisms have been storing information in DNA for billions of years, and it is still readable,” Dr. Adleman said. He noted that modern bacteria can read genes recovered from insects trapped in amber for millions of years. . . .

. . . . The idea is to have bacteria engineered as recording devices drift up to the brain in the blood and take notes for a while. Scientists would then extract the bacteria and examine their DNA to see what they had observed in the brain neurons. Dr. Church and his colleagues have already shown in past research that bacteria can record DNA in cells, if the DNA is properly tagged. . . .
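To make the storage principle concrete: since DNA has a four-letter alphabet, every two bits of a file can be mapped onto one nucleotide. The following is a minimal sketch of that mapping only; real schemes, including the one in the study above, add error correction and avoid long single-base runs:

```python
# Minimal sketch of the basic idea behind DNA data storage: map every
# two bits of a file onto one of the four nucleotides. Illustration
# only -- not the encoding scheme used in the study above.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA strand, two bits per base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Regenesis")
assert decode(strand) == b"Regenesis"
print(strand)  # "CCAG..." -- four bases per input byte
```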

11. Hawking recently warned of the potential danger to humanity posed by the growth of AI (artificial intelligence) technology.

“Stephen Hawking Warns Artificial Intelligence Could End Mankind” by Rory Cellan-Jones; BBC News; 12/02/2014.

Prof Stephen Hawking, one of Britain’s pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence.

He told the BBC:”The development of full artificial intelligence could spell the end of the human race.”

His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI. . . .

. . . . Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.

“It would take off on its own, and re-design itself at an ever increasing rate,” he said.

“Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” . . . .

12.  In L-2 (recorded in January of 1995–20 years before Hawking’s warning) Mr. Emory warned about the dangers of AI, combined with DNA-based memory systems.

13. This description concludes with an article about Elon Musk, whose predictions about AI supplement those made by Stephen Hawking. (CORRECTION: Mr. Emory mis-states Mr. Hassabis’s name as “Dennis.”)

“Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse” by Maureen Dowd; Vanity Fair; April 2017.

It was just a friendly little argument about the fate of humanity. Demis Hassabis, a leading creator of advanced artificial intelligence, was chatting with Elon Musk, a leading doomsayer, about the perils of artificial intelligence.

They are two of the most consequential and intriguing men in Silicon Valley who don’t live there. Hassabis, a co-founder of the mysterious London laboratory DeepMind, had come to Musk’s SpaceX rocket factory, outside Los Angeles, a few years ago. They were in the canteen, talking, as a massive rocket part traversed overhead. Musk explained that his ultimate goal at SpaceX was the most important project in the world: interplanetary colonization.

Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on humanity. Amused, Hassabis said that A.I. would simply follow humans to Mars. . . .

. . . .  Peter Thiel, the billionaire venture capitalist and Donald Trump adviser who co-founded PayPal with Musk and others—and who in December helped gather skeptical Silicon Valley titans, including Musk, for a meeting with the president-elect—told me a story about an investor in DeepMind who joked as he left a meeting that he ought to shoot Hassabis on the spot, because it was the last chance to save the human race.

Elon Musk began warning about the possibility of A.I. running amok three years ago. It probably hadn’t eased his mind when one of Hassabis’s partners in DeepMind, Shane Legg, stated flatly, “I think human extinction will probably occur, and technology will likely play a part in this.” . . . .

Discussion

8 comments for “FTR #968 Summoning the Demon: Technocratic Fascism and Artificial Intelligence”

  1. Now Mr. Emory, I may be jumping the gun on this one, but hear me out: during the aftermath of Iran-Contra, a certain Inslaw Inc. computer programmer named Michael James Riconosciuto (a known intimate of Robert Maheu) spoke of the Cabazon Arms company (a former defense firm controlled by the Twenty-Nine Palms Band of Mission Indians and Wackenhut, both connected to Donald Trump in overt fashion), quote, “…engineering race-specific bio-warfare agents…” while he was working for Cabazon Arms.

    Now, with the advent of DNA-based memory systems and programmable “germs”, is the idea of bio-weapons or even nanobots programmed to attack people with certain skin pigments going to become a reality?

    Posted by Robert Montenegro | August 11, 2017, 2:14 am
  2. @Robert Montenegro–

    Two quick points:

    1.-Riconosciuto is about 60-40 in terms of credibility. Lots of good stuff there; plenty of bad stuff, as well. Vetting is important.

    2.-You should investigate AFA #39. It is long and I would rely on the description more than the audio files alone.

    http://spitfirelist.com/anti-fascist-archives/afa-39-the-world-will-be-plunged-into-an-abyss/

    Best,

    Dave

    Posted by Dave Emory | August 11, 2017, 1:42 pm
  3. I agree with your take on Riconosciuto’s credibility, Mr. Emory (I’d say most of the things that came out of that man’s mouth were malarkey, much like Ted Gunderson, Dois Gene Tatum and Bernard Fensterwald).

    I listened to AFA #39 and researched the articles in the description. Absolutely damning collection of information. A truly brilliant exposé.

    If I may ask another question, Mr. Emory: what is your take on KGB defector and CIA turncoat Ilya Dzerkvelov’s claim that Russian intelligence created the “AIDS is man-made” story and that the KGB led a disinformation campaign called “Operation INFEKTION”?

    Posted by Robert Montenegro | August 11, 2017, 10:09 pm
  4. @Robert–

    Very quickly, as time is at a premium:

    1.-By “60-40,” I did not mean that Riconosciuto speaks mostly malarkey, but that more than half (an arbitrary figure, admittedly) is accurate. His pronouncements must be carefully vetted, however, as he misses the mark frequently.

    2.-Fensterwald is more credible, though not thoroughgoing, by any means. He is more like “80-20.” He is, however, “100-0” dead.

    3.-The only things I have seen coming from Tatum were accurate. Doesn’t mean he doesn’t spread the Fresh Fertilizer, however. I have not encountered any.

    4.-Dzerkvelov’s claim IS Fresh Fertilizer, of the worst sort. Cold War I propaganda recycled in time for Cold War II.

    It is the worst sort of Red-baiting, and the few people who had the courage to come forward in the early ’80s (during the fiercest storms of Cold War I) have received brutal treatment because of that.

    I can attest to that from brutal personal experience.

    In AFA #16 (http://spitfirelist.com/anti-fascist-archives/rfa-16-aids-epidemic-or-weapon/), you will hear material that I had on the air long before the U.S.S.R. began talking about it, and neither they NOR the Russians have gone anywhere near what I discuss in AFA #39.

    No more time to talk–must get to work.

    Best,

    Dave

    Posted by Dave Emory | August 12, 2017, 1:25 pm
  5. Thank you very much for your clarification, Mr. Emory, and I apologize for any perceived impertinence. As a young man, sheltered in many respects (though I am a combat veteran), I sometimes find it difficult to imagine living under constant personal ridicule and attack, and I thank you for the great social, financial and psychological sacrifices you have made in the name of pursuing cold, unforgiving fact.

    Posted by Robert Montenegro | August 12, 2017, 8:57 pm
  6. @Robert Montenegro–

    You’re very welcome. Thank you for your service!

    Best,

    Dave

    Posted by Dave Emory | August 14, 2017, 2:43 pm
  7. *Skynet alert*

    Elon Musk just issued another warning about the destructive potential of AI run amok. So what prompted the latest outcry from Musk? An AI from his own start-up, OpenAI, just beat one of the best professional game players in the world at Dota 2, a game that involves improvising in unfamiliar scenarios, anticipating how an opponent will move and convincing the opponent’s allies to help:

    The International Business Times

    Elon Musk rings the alarm as AI bot beats Dota 2 players

    By Frenalyn Wilson
    on August 15 2017 1:55 PM

    Some of the best e-sports gamers in the world have been beaten by an artificially intelligent bot from Elon Musk-backed start-up OpenAI. The AI bested professional gamer Danylo Ishutin in Dota 2, and Musk does not necessarily perceive that as a good thing.

    For Musk, it is another indicator that robot overlords are primed to take over. In a tweet after the match, he urged people to be concerned about AI safety, adding it is more of a risk than North Korea.

    AI has been one of Musk’s favourite topics. He believes government regulation could struggle to keep up with the advancing AI research. “Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal,” he told a group of US lawmakers last month.

    AI vs e-sports gamers

    Musk’s tweets came hours following an AI bot’s victory against some of the world’s best players of Dota 2, a military strategy game. A blog post by OpenAI states that successfully playing the game involves improvising in unfamiliar scenarios, anticipating how an opponent will move and convincing the opponent’s allies to help.

    OpenAI is a nonprofit AI company Musk co-founded along with Sam Altman and Peter Thiel. It seeks to research AI and develop the best practices to ensure that the technology is used for good.

    Musk has been sounding the alarm on AI, calling it the biggest existential threat of humanity. He laid out a scenario earlier this year, in which AI systems intended to farm strawberries could lead to the destruction of mankind.

    But his views on AI have been at odds with those of tech leaders like Mark Zuckerberg, Google co-founders Larry Page and Sergey Brin and Amazon’s Jeff Bezos. He recently got in a brief public spat with Mark Zuckerberg about how the technology could impact humans.

    Zuckerberg believed Musk’s prophesying about doomsday scenarios was “irresponsible.” The latter was quick to respond on Twitter, pointing out that Zuckerberg’s understanding of the topic was “limited.” Both Facebook and Tesla invest in artificial intelligence.

    ———-

    “Elon Musk rings the alarm as AI bot beats Dota 2 players” by Frenalyn Wilson; The International Business Times; 08/15/2017

    “Musk’s tweets came hours following an AI bot’s victory against some of the world’s best players of Dota 2, a military strategy game. A blog post by OpenAI states that successfully playing the game involves improvising in unfamiliar scenarios, anticipating how an opponent will move and convincing the opponent’s allies to help.”

    Superior military strategy AIs beating the best humans. That’s a thing now. Huh. We’ve definitely seen this movie before.

    So now you know: when Skynet comes to you with an offer to work together, just don’t. No matter how tempting the offer. Although, since it will likely have already anticipated your refusal, the negotiation is probably going to be a ruse anyway, secretly carried on with another AI in a language they made up. Still, just say ‘no’ to Skynet.

    Also, given that Musk’s other investors in OpenAI include Peter Thiel, it’s probably worth noting that, as scary as super AI is should it get out of control, it’s also potentially pretty damn scary while still under human control, especially when those humans are people like Peter Thiel. So, yes, out-of-control AIs are indeed an issue that will likely be of great concern in the future. But we shouldn’t forget that out-of-control techno-billionaires are probably a more pressing issue at the moment.

    *The Skynet alert has been cancelled. Correction: the Skynet alert is never over.*

    Posted by Pterrafractyl | August 15, 2017, 2:11 pm
  8. It looks like Facebook and Elon Musk might have some competition in the mind-reading technology area. From a former Facebook engineer, no less, who left the company in 2016 to start Openwater, a company dedicated to reducing the cost of medical imaging technology.

    So how is Openwater going to create mind-reading technology? By developing a technology that is sort of like an M.R.I. device embedded in a hat. But instead of using magnetic fields to read the blood flow in the brain, it uses infrared light. So it sounds like this former Facebook engineer is planning something similar to the general idea Facebook has already announced: a device that scans the brain 100 times a second to detect what someone is thinking. But presumably Openwater uses a different technology. Or maybe it’s quite similar; who knows. Either way, it’s the latest reminder that the tech giants might not be the only ones pushing mind-reading technology on the public sooner than people expect. Yay?

    CNBC

    This former Google[X] exec is building a high-tech hat that she says will make telepathy possible in 8 years

    Catherine Clifford
    10:28 AM ET Fri, 7 July 2017

    Imagine if telepathy were real. If, for example, you could transmit your thoughts to a computer or to another person just by thinking them.

    In just eight years it will be, says Openwater founder Mary Lou Jepsen, thanks to technology her company is working on.

    Jepsen is a former engineering executive at Facebook, Oculus, Google[x] (now called X) and Intel. She’s also been a professor at MIT and is an inventor on over 100 patents. And that’s the abbreviated version of her resume.

    Jepsen left Facebook to found Openwater in 2016. The San Francisco-based start-up is currently building technology to make medical imaging less expensive.

    “I figured out how to put basically the functionality of an M.R.I. machine — a multimillion-dollar M.R.I. machine — into a wearable in the form of a ski hat,” Jepsen tells CNBC, though she does not yet have a prototype completed.

    So what does that hat have to do with telepathy?

    Current M.R.I. technology can already see your thoughts: “If I threw [you] into an M.R.I. machine right now … I can tell you what words you’re about to say, what images are in your head. I can tell you what music you’re thinking of,” says Jepsen. “That’s today, and I’m talking about just shrinking that down.”

    One day Jepsen’s tech hat could “literally be a thinking cap,” she says. Jepsen says the goal is for the technology to be able to both read and to output your own thoughts, as well as read the thoughts of others. In iconic Google vocabulary, “the really big moonshot idea here is communication with thought — with telepathy,” says Jepsen.

    Traditional M.R.I., or magnetic resonance imaging, uses magnetic fields and radio waves to take images of internal organs. Openwater’s technology instead looks at the flow of oxygen in a person’s body illuminated with benign, infrared light, which will make it more compact and cheaper.

    “Our bodies are translucent to that light. The light can get into your head,” says Jepsen, in an interview with Kara Swisher of Recode.

    If Jepsen is right and one day ideas will be instantly shared or digitized, that would significantly speed up the process of creating, learning and communicating. Today, it takes time to share an idea, whether by talking about it or writing it down. But telepathy would make all of that instantaneous.

    “Right now our output is basically moving our jaws and our tongues or typing [with] our fingers. We’re … limited to this very low output rate from our brains, and what if we could up that through telepathy?” asks Jepsen.

    Instant transfer of thoughts would also speed up the innovation process. Imagine being a filmmaker or a writer and being able to download the dream you had last night. Or, she suggests, what if all you had to do was think of an idea for a new product, download your thought and then send the digital version of your thought to a 3-D printer?

    Jepsen is not the only one dreaming of communication by thought. Earlier this year, Elon Musk launched Neuralink, a company aiming to merge our brains with computing power, though with a different approach.

    “Elon Musk is talking about silicon nanoparticles pulsing through our veins to make us sort of semi-cyborg computers,” says Jepsen. But why not take a noninvasive approach? “I’ve been working and trying to think and invent a way to do this for a number of years and finally happened upon it and left Facebook to do it.”

    Talk of telepathy cannot happen without imagining the ethical implications. If wearing a hat would make it possible to read thoughts, then: “Can the police make you wear such a hat? Can the military make you wear such a hat? Can your parents make you wear such a hat?” asks Jepsen.

    What if your boss wanted you to wear a telepathy hat at the office?

    “We have to answer these questions, so we’re trying to make the hat only work if the individual wants it to work, and then filtering out parts that the person wearing it doesn’t feel it’s appropriate to share.”

    ———-

    “This former Google[X] exec is building a high-tech hat that she says will make telepathy possible in 8 years” by Catherine Clifford; CNBC; 07/07/2017

    “I figured out how to put basically the functionality of an M.R.I. machine — a multimillion-dollar M.R.I. machine — into a wearable in the form of a ski hat,” Jepsen tells CNBC, though she does not yet have a prototype completed.

    M.R.I. in a hat. Presumably cheap M.R.I. in a hat, because it’s going to have to be affordable if we’re all going to start talking telepathically to each other:


    Current M.R.I. technology can already see your thoughts: “If I threw [you] into an M.R.I. machine right now … I can tell you what words you’re about to say, what images are in your head. I can tell you what music you’re thinking of,” says Jepsen. “That’s today, and I’m talking about just shrinking that down.”

    One day Jepsen’s tech hat could “literally be a thinking cap,” she says. Jepsen says the goal is for the technology to be able to both read and to output your own thoughts, as well as read the thoughts of others. In iconic Google vocabulary, “the really big moonshot idea here is communication with thought — with telepathy,” says Jepsen.

    Traditional M.R.I., or magnetic resonance imaging, uses magnetic fields and radio waves to take images of internal organs. Openwater’s technology instead looks at the flow of oxygen in a person’s body illuminated with benign, infrared light, which will make it more compact and cheaper.

    “Our bodies are translucent to that light. The light can get into your head,” says Jepsen, in an interview with Kara Swisher of Recode.
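    For what it is worth, the physics being gestured at here resembles functional near-infrared spectroscopy (fNIRS), in which changes in light absorption at two infrared wavelengths are solved for changes in oxy- and deoxy-hemoglobin via the modified Beer-Lambert law. Openwater’s actual method is not public in detail, so the sketch below, including the coefficient values and path length, is a textbook-style illustration only, not their algorithm:

```python
import numpy as np

# Rough sketch of the fNIRS-style math behind infrared blood-flow
# imaging. All numbers are illustrative placeholders, not Openwater's.

# Extinction coefficients: rows = wavelengths (~760 nm, ~850 nm),
# columns = [oxyhemoglobin, deoxyhemoglobin], roughly cm^-1 per mM.
E = np.array([[1.49, 3.84],
              [2.53, 1.80]])

path_length = 6.0  # assumed effective optical path length in cm

def hemoglobin_changes(delta_od):
    """Solve the modified Beer-Lambert equations for concentration changes.

    delta_od: measured change in optical density at each wavelength.
    Returns (change in HbO2, change in HbR) in mM.
    """
    return np.linalg.solve(E * path_length, delta_od)

d_hbo2, d_hbr = hemoglobin_changes(np.array([0.012, 0.018]))
print(f"change in HbO2: {d_hbo2:+.4f} mM, change in HbR: {d_hbr:+.4f} mM")
```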

    Imagine the possibilities. Like the possibility that what you imagine will somehow be captured by this device and then fed into a 3-D printer or something:


    If Jepsen is right and one day ideas will be instantly shared or digitized, that would significantly speed up the process of creating, learning and communicating. Today, it takes time to share an idea, whether by talking about it or writing it down. But telepathy would make all of that instantaneous.

    “Right now our output is basically moving our jaws and our tongues or typing [with] our fingers. We’re … limited to this very low output rate from our brains, and what if we could up that through telepathy?” asks Jepsen.

    Instant transfer of thoughts would also speed up the innovation process. Imagine being a filmmaker or a writer and being able to download the dream you had last night. Or, she suggests, what if all you had to do was think of an idea for a new product, download your thought and then send the digital version of your thought to a 3-D printer?

    Or perhaps being forced to wear the hat so others can read your mind. That’s a possibility too, although Jepsen assures us that they are working on a way for users to somehow filter out thoughts they don’t want to share:


    Talk of telepathy cannot happen without imagining the ethical implications. If wearing a hat would make it possible to read thoughts, then: “Can the police make you wear such a hat? Can the military make you wear such a hat? Can your parents make you wear such a hat?” asks Jepsen.

    What if your boss wanted you to wear a telepathy hat at the office?

    “We have to answer these questions, so we’re trying to make the hat only work if the individual wants it to work, and then filtering out parts that the person wearing it doesn’t feel it’s appropriate to share.”

    So the hat will presumably read all your thoughts, but only share some of them. You’ll presumably have to get really, really good at near instantaneous mental filtering.

    There’s no shortage of immense technical and ethical challenges to this kind of technology, but if they can figure them out it will be pretty impressive. And potentially useful. Who knows what kind of kumbayah moments you could create with telepathy technology.

    But, of course, if they can figure out how to get around the technical issues, but not the ethical ones, we’re still probably going to see this technology pushed on the public anyway. It’s a scary thought. A scary thought that we fortunately aren’t forced to share via a mind-reading hat. Yet.

    Posted by Pterrafractyl | September 14, 2017, 2:09 pm
