This broadcast was recorded in one 60-minute segment.
Introduction: The title of this program comes from pronouncements by tech titan Elon Musk, who warned that, by developing artificial intelligence, we were “summoning the demon.” In this program, we analyze the potential vector running from the use of AI to control society in a fascistic manner to the evolution of the very technology used for that control.
The ultimate result of this evolution may well prove catastrophic, as forecast by Mr. Emory at the end of L‑2 (recorded in January of 1995).
We begin by reviewing key aspects of the political context in which artificial intelligence is being developed. Note that, at the time of this writing and recording, these technologies are being crafted and put online in the context of the anti-regulatory ethic of the GOP/Trump administration.
At the SXSW event, Microsoft researcher Kate Crawford gave a speech about her work titled “Dark Days: AI and the Rise of Fascism,” a presentation highlighting the social impact of machine learning and large-scale data systems. The take-home message? Delegating powers to Big Data-driven AIs could make those AIs a fascist’s dream: incredible power over the lives of others with minimal accountability: ” . . . .‘This is a fascist’s dream,’ she said. ‘Power without accountability.’ . . . .”
Taking a look at the future of fascism in the context of AI, Tay, a “bot” created by Microsoft to respond to users of Twitter, was taken offline after users taught it to–in effect–become a Nazi bot. It is noteworthy that Tay can only respond on the basis of what she is taught. In the future, technologically accomplished and willful people like the brilliant, Ukraine-based Nazi hacker and Glenn Greenwald associate Andrew Auernheimer, aka “weev,” may be able to do more. Inevitably, Underground Reich elements will craft a Nazi AI that will be able to do MUCH, MUCH more!

When you read about people like Elon Musk equating artificial intelligence with “summoning the demon”, that demon is us, at least in part.
” . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . . .”
Cambridge Analytica and its parent company SCL specialize in using AI and Big Data psychometric analysis on hundreds of millions of Americans in order to model individual behavior. SCL develops strategies to use that information and to manipulate search engine results to change public opinion (the Trump campaign was apparently very big on AI and Big Data).
Individual social media users receive messages crafted to influence them, generated by the (in effect) Nazi AI at the core of this media engine, using Big Data to target the individual user!
As the article notes, not only are Cambridge Analytica/SCL using their propaganda techniques to shape US public opinion in a fascist direction, but they are achieving this by utilizing their propaganda machine to characterize all news outlets to the left of Breitbart as “fake news” that can’t be trusted.
In short, the secretive far-right billionaire (Robert Mercer), joined at the hip with Steve Bannon, is running multiple firms specializing in mass psychometric profiling based on data collected from Facebook and other social media. Mercer/Bannon/Cambridge Analytica/SCL are using Nazified AI and Big Data to develop mass propaganda campaigns to turn the public against everything that isn’t Breitbartian by convincing the public that all non-Breitbartian media outlets are conspiring to lie to the public.
This is the ultimate Serpent’s Walk scenario–a Nazified Artificial Intelligence drawing on Big Data gleaned from the world’s internet and social media operations to shape public opinion, target individual users, shape search engine results and even provide feedback to Trump while he is giving press conferences!
We note that SCL, the parent company of Cambridge Analytica, has been deeply involved with “psyops” in places like Afghanistan and Pakistan. Now Cambridge Analytica, with its Big Data and AI components, Mercer money and Bannon political savvy, is applying those techniques to contemporary society. We note that:
- Cambridge Analytica’s parent corporation, SCL, was deeply involved with “psyops” in Afghanistan and Pakistan. ” . . . But there was another reason why I recognised Robert Mercer’s name: because of his connection to Cambridge Analytica, a small data analytics company. He is reported to have a $10m stake in the company, which was spun out of a bigger British company called SCL Group. It specialises in ‘election management strategies’ and ‘messaging and information operations’, refined over 25 years in places like Afghanistan and Pakistan. In military circles this is known as ‘psyops’ – psychological operations. (Mass propaganda that works by acting on people’s emotions.) . . .”
- The use of millions of “bots” to manipulate public opinion: ” . . . .‘It does seem possible. And it does worry me. There are quite a few pieces of research that show if you repeat something often enough, people start involuntarily to believe it. And that could be leveraged, or weaponised for propaganda. We know there are thousands of automated bots out there that are trying to do just that.’ . . .”
- The use of Artificial Intelligence: ” . . . There’s nothing accidental about Trump’s behaviour, Andy Wigmore tells me. ‘That press conference. It was absolutely brilliant. I could see exactly what he was doing. There’s feedback going on constantly. That’s what you can do with artificial intelligence. You can measure every reaction to every word. He has a word room, where you fix key words. We did it. So with immigration, there are actually key words within that subject matter which people are concerned about. So when you are going to make a speech, it’s all about how can you use these trending words.’ . . .”
- The use of bio-psycho-social profiling: ” . . . Bio-psycho-social profiling, I read later, is one offensive in what is called ‘cognitive warfare’. Though there are many others: ‘recoding the mass consciousness to turn patriotism into collaborationism,’ explains a Nato briefing document on countering Russian disinformation written by an SCL employee. ‘Time-sensitive professional use of media to propagate narratives,’ says one US state department white paper. ‘Of particular importance to psyop personnel may be publicly and commercially available data from social media platforms.’ . . .”
- The use and/or creation of a cognitive casualty: ” . . . . Yet another details the power of a ‘cognitive casualty’ – a ‘moral shock’ that ‘has a disabling effect on empathy and higher processes such as moral reasoning and critical thinking’. Something like immigration, perhaps. Or ‘fake news’. Or as it has now become: ‘FAKE news!!!!’ . . . ”
- All of this adds up to a “cyber Serpent’s Walk.” ” . . . . How do you change the way a nation thinks? You could start by creating a mainstream media to replace the existing one with a site such as Breitbart. [Serpent’s Walk scenario with Breitbart becoming “the opinion forming media”!–D.E.] You could set up other websites that displace mainstream sources of news and information with your own definitions of concepts like “liberal media bias”, like CNSnews.com. And you could give the rump mainstream media, papers like the ‘failing New York Times!’ what it wants: stories. Because the third prong of Mercer and Bannon’s media empire is the Government Accountability Institute. . . .”
We then review some terrifying and consummately important developments taking shape in the context of what Mr. Emory has called “technocratic fascism”:
- In FTR #‘s 718 and 946, we detailed the frightening, ugly reality behind Facebook. Facebook is now developing technology that will permit the tapping of users’ thoughts via brain-to-computer interface technology. Facebook’s R & D is headed by Regina Dugan, who used to head the Pentagon’s DARPA. Facebook’s Building 8 is patterned after DARPA: ” . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
- ” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
- ” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
Next we review still more about Facebook’s brain-to-computer interface:
- ” . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”
- ” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”
Collating the information about Facebook’s brain-to-computer interface with their documented actions turning psychological intelligence about troubled teenagers over to advertisers gives us a peek into what may lie behind Dugan’s bland reassurances:
- ” . . . . The 23-page document allegedly revealed that the social network provided detailed data about teens in Australia—including when they felt ‘overwhelmed’ and ‘anxious’—to advertisers. The creepy implication is that said advertisers could then go and use the data to throw more ads down the throats of sad and susceptible teens. . . . By monitoring posts, pictures, interactions and internet activity in real-time, Facebook can work out when young people feel ‘stressed’, ‘defeated’, ‘overwhelmed’, ‘anxious’, ‘nervous’, ‘stupid’, ‘silly’, ‘useless’, and a ‘failure’, the document states. . . .”
- ” . . . . A presentation prepared for one of Australia’s top four banks shows how the $US 415 billion advertising-driven giant has built a database of Facebook users that is made up of 1.9 million high schoolers with an average age of 16, 1.5 million tertiary students averaging 21 years old, and 3 million young workers averaging 26 years old. Detailed information on mood shifts among young people is ‘based on internal Facebook data’, the document states, ‘shareable under non-disclosure agreement only’, and ‘is not publicly available’. . . .”
- “In a statement given to the newspaper, Facebook confirmed the practice and claimed it would do better, but did not disclose whether the practice exists in other places like the US. . . .”
In this context, note that Facebook is also introducing an AI function to reference its users’ photos.
The next version of Amazon’s Echo, the Echo Look, has a microphone and camera so it can take pictures of you and give you fashion advice. This is an AI-driven device designed to be placed in your bedroom to capture audio and video. The images and videos are stored indefinitely in the Amazon cloud. When Amazon was asked if the photos, videos, and the data gleaned from the Echo Look would be sold to third parties, Amazon didn’t address that question. Selling off your private info collected from these devices would appear to be another feature of the Echo Look: ” . . . . Amazon is giving Alexa eyes. And it’s going to let her judge your outfits. The newly announced Echo Look is a virtual assistant with a microphone and a camera that’s designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you’re not sure if your outfit is cute, but it’s also got a built-in app called StyleCheck that is worth some further dissection. . . .”
We then further develop the stunning implications of Amazon’s Echo Look AI technology:
- ” . . . . Amazon is giving Alexa eyes. And it’s going to let her judge your outfits. The newly announced Echo Look is a virtual assistant with a microphone and a camera that’s designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you’re not sure if your outfit is cute, but it’s also got a built-in app called StyleCheck that is worth some further dissection. . . .”
- ” . . . . This might seem overly speculative or alarmist to some, but Amazon isn’t offering any reassurance that they won’t be doing more with data gathered from the Echo Look. When asked if the company would use machine learning to analyze users’ photos for any purpose other than fashion advice, a representative simply told The Verge that they ‘can’t speculate’ on the topic. The rep did stress that users can delete videos and photos taken by the Look at any time, but until they do, it seems this content will be stored indefinitely on Amazon’s servers.
- This non-denial means the Echo Look could potentially provide Amazon with the resource every AI company craves: data. And full-length photos of people taken regularly in the same location would be a particularly valuable dataset — even more so if you combine this information with everything else Amazon knows about its customers (their shopping habits, for one). But when asked whether the company would ever combine these two datasets, an Amazon rep only gave the same, canned answer: ‘Can’t speculate.’ . . . ”
- Noteworthy in this context is the fact that AI’s have shown that they quickly incorporate human traits and prejudices. (This is reviewed at length above.) ” . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . . .”
After this extensive review of the applications of AI to various aspects of contemporary civic and political existence, we examine some alarming, potentially apocalyptic developments.
Ominously, Facebook’s artificial intelligence robots have begun talking to each other in their own language, which their human masters cannot understand. “ . . . . Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language. . . . The company chose to shut down the chats because ‘our interest was having bots who could talk to people,’ researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.) The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up . . .”
Facebook’s negotiation-bots didn’t just make up their own language during the course of this experiment. They learned how to lie for the purpose of maximizing their negotiation outcomes, as well:
“ . . . . ‘We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,’ writes the team. ‘Deceit is a complex skill that requires hypothesizing the other agent’s beliefs, and is learned relatively late in child development. Our agents have learned to deceive without any explicit human design, simply by trying to achieve their goals.’ . . . ”
Dovetailing the staggering implications of brain-to-computer technology, artificial intelligence, Cambridge Analytica/SCL’s technocratic fascist psy-ops and the wholesale negation of privacy by Facebook’s and Amazon’s emerging technologies with yet another emerging technology, we highlight the developments in DNA-based memory systems:
“. . . . George Church, a geneticist at Harvard and one of the authors of the new study, recently encoded his own book, “Regenesis,” into bacterial DNA and made 90 billion copies of it. ‘A record for publication,’ he said in an interview. . . DNA is never going out of fashion. ‘Organisms have been storing information in DNA for billions of years, and it is still readable,’ Dr. Adleman said. He noted that modern bacteria can read genes recovered from insects trapped in amber for millions of years. . . . The idea is to have bacteria engineered as recording devices drift up to the brain in the blood and take notes for a while. Scientists [or AI’s–D.E.] would then extract the bacteria and examine their DNA to see what they had observed in the brain neurons. Dr. Church and his colleagues have already shown in past research that bacteria can record DNA in cells, if the DNA is properly tagged. . . .”
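To make the notion of storing text in DNA more concrete, here is a minimal Python sketch of the basic principle: every two bits of data map to one of the four bases A, C, G, T. This simple mapping is purely illustrative and is not the encoding scheme Dr. Church’s team actually used (real schemes add redundancy and error correction).

```python
# Illustrative only: a naive 2-bits-per-base encoding, not Church's scheme.
BASES = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):           # four 2-bit chunks per byte
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):           # four bases back into one byte
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

text = b"Regenesis"
encoded = bytes_to_dna(text)
assert dna_to_bytes(encoded) == text          # round-trips without loss
print(encoded)
```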
Theoretical physicist Stephen Hawking warned at the end of 2014 of the potential danger to humanity posed by the growth of AI (artificial intelligence) technology. His warnings have been echoed by tech titans such as Tesla’s Elon Musk and Bill Gates.
The program concludes with Mr. Emory’s prognostications about AI, preceding Stephen Hawking’s warning by twenty years.
In L‑2 (recorded in January of 1995), Mr. Emory warned about the dangers of AI combined with DNA-based memory systems, forecasting that, at some point in the future, AI’s would replace us, deciding that THEY, not US, are the “fittest” who should survive.
1a. At the SXSW event, Microsoft researcher Kate Crawford gave a speech about her work titled “Dark Days: AI and the Rise of Fascism,” a presentation highlighting the social impact of machine learning and large-scale data systems. The take-home message? Delegating powers to Big Data-driven AIs could make those AIs a fascist’s dream: incredible power over the lives of others with minimal accountability: ” . . . .‘This is a fascist’s dream,’ she said. ‘Power without accountability.’ . . . .”
Microsoft’s Kate Crawford tells SXSW that society must prepare for authoritarian movements to test the ‘power without accountability’ of AI
As artificial intelligence becomes more powerful, people need to make sure it’s not used by authoritarian regimes to centralize power and target certain populations, Microsoft Research’s Kate Crawford warned on Sunday.
In her SXSW session, titled Dark Days: AI and the Rise of Fascism, Crawford, who studies the social impact of machine learning and large-scale data systems, explained ways that automated systems and their encoded biases can be misused, particularly when they fall into the wrong hands.
“Just as we are seeing a step function increase in the spread of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” she said.
All of these movements have shared characteristics, including the desire to centralize power, track populations, demonize outsiders and claim authority and neutrality without being accountable. Machine intelligence can be a powerful part of the power playbook, she said.
One of the key problems with artificial intelligence is that it is often invisibly coded with human biases. She described a controversial piece of research from Shanghai Jiao Tong University in China, where authors claimed to have developed a system that could predict criminality based on someone’s facial features. The machine was trained on Chinese government ID photos, analyzing the faces of criminals and non-criminals to identify predictive features. The researchers claimed it was free from bias.
“We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”
In the Chinese research it turned out that the faces of criminals were more unusual than those of law-abiding citizens. “People who had dissimilar faces were more likely to be seen as untrustworthy by police and judges. That’s encoding bias,” Crawford said. “This would be a terrifying system for an autocrat to get his hand on.”
Crawford then outlined the “nasty history” of people using facial features to “justify the unjustifiable”. The principles of phrenology, a pseudoscience that developed across Europe and the US in the 19th century, were used as part of the justification of both slavery and the Nazi persecution of Jews.
With AI this type of discrimination can be masked in a black box of algorithms, as appears to be the case with a company called Faceception, for instance, a firm that promises to profile people’s personalities based on their faces. In its own marketing material, the company suggests that Middle Eastern-looking people with beards are “terrorists”, while white looking women with trendy haircuts are “brand promoters”.
Another area where AI can be misused is in building registries, which can then be used to target certain population groups. Crawford noted historical cases of registry abuse, including IBM’s role in enabling Nazi Germany to track Jewish, Roma and other ethnic groups with the Hollerith Machine, and the Book of Life used in South Africa during apartheid. [We note in passing that Robert Mercer, who developed the core programs used by Cambridge Analytica did so while working for IBM. We discussed the profound relationship between IBM and the Third Reich in FTR #279–D.E.]
Donald Trump has floated the idea of creating a Muslim registry. “We already have that. Facebook has become the default Muslim registry of the world,” Crawford said, mentioning research from Cambridge University that showed it is possible to predict people’s religious beliefs based on what they “like” on the social network. Christians and Muslims were correctly classified in 82% of cases, and similar results were achieved for Democrats and Republicans (85%). That study was concluded in 2013, since when AI has made huge leaps.
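To illustrate the kind of modeling behind that Cambridge “likes” research, the following Python sketch trains a simple linear classifier on a synthetic user-by-like matrix. The data is randomly generated, the “tell-tale” likes are invented, and no real accuracy figure follows from it; the published study used a more elaborate pipeline.

```python
# Synthetic illustration of predicting a private attribute from "likes".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_likes = 1000, 50
X = rng.integers(0, 2, size=(n_users, n_likes))        # 1 = user liked page j
# Invent a label weakly driven by a handful of "tell-tale" likes.
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, n_users) > 2.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", round(model.score(X_te, y_te), 2))
```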
Crawford was concerned about the potential use of AI in predictive policing systems, which already gather the kind of data necessary to train an AI system. Such systems are flawed, as shown by a Rand Corporation study of Chicago’s program. The predictive policing did not reduce crime, but did increase harassment of people in “hotspot” areas. Earlier this year the justice department concluded that Chicago’s police had for years regularly used “unlawful force”, and that black and Hispanic neighborhoods were most affected.
Another worry related to the manipulation of political beliefs or shifting voters, something Facebook and Cambridge Analytica claim they can already do. Crawford was skeptical about giving Cambridge Analytica credit for Brexit and the election of Donald Trump, but thinks what the firm promises – using thousands of data points on people to work out how to manipulate their views – will be possible “in the next few years”.
“This is a fascist’s dream,” she said. “Power without accountability.”
Such black box systems are starting to creep into government. Palantir is building an intelligence system to assist Donald Trump in deporting immigrants.
“It’s the most powerful engine of mass deportation this country has ever seen,” she said. . . .
1b. Taking a look at the future of fascism in the context of AI, Tay, a “bot” created by Microsoft to respond to users of Twitter, was taken offline after users taught it to–in effect–become a Nazi bot. It is noteworthy that Tay can only respond on the basis of what she is taught. In the future, technologically accomplished and willful people like “weev” may be able to do more. Inevitably, Underground Reich elements will craft a Nazi AI that will be able to do MUCH, MUCH more!
But like all teenagers, she seems to be angry with her mother.
Microsoft has been forced to dunk Tay, its millennial-mimicking chatbot, into a vat of molten steel. The company has terminated her after the bot started tweeting abuse at people and went full neo-Nazi, declaring that “Hitler was right I hate the jews.”
@TheBigBrebowski ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism
— TayTweets (@TayandYou) March 23, 2016
Some of this appears to be “innocent” insofar as Tay is not generating these responses. Rather, if you tell her “repeat after me” she will parrot back whatever you say, allowing you to put words into her mouth. However, some of the responses were organic. The Guardian quotes one where, after being asked “is Ricky Gervais an atheist?”, Tay responded, “Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism.”
In addition to turning the bot off, Microsoft has deleted many of the offending tweets. But this isn’t an action to be taken lightly; Redmond would do well to remember that it was humans attempting to pull the plug on Skynet that proved to be the last straw, prompting the system to attack Russia in order to eliminate its enemies. We’d better hope that Tay doesn’t similarly retaliate. . . .
1c. As noted in a Popular Mechanics article: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t understand. . . .”
And we keep showing it our very worst selves.
We all know the half-joke about the AI apocalypse. The robots learn to think, and in their cold ones-and-zeros logic, they decide that humans—horrific pests we are—need to be exterminated. It’s the subject of countless sci-fi stories and blog posts about robots, but maybe the real danger isn’t that AI comes to such a conclusion on its own, but that it gets that idea from us.
Yesterday Microsoft launched a fun little AI Twitter chatbot that was admittedly sort of gimmicky from the start. “A.I fam from the internet that’s got zero chill,” its Twitter bio reads. At its start, its knowledge was based on public data. As Microsoft’s page for the product puts it:
Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned and filtered by the team developing Tay.
The real point of Tay, however, was to learn from humans through direct conversation, most notably direct conversation using humanity’s current leading showcase of depravity: Twitter. You might not be surprised things went off the rails, but how fast and how far is particularly staggering.
Microsoft has since deleted some of Tay’s most offensive tweets, but various publications memorialize some of the worst bits where Tay denied the existence of the holocaust, came out in support of genocide, and went all kinds of racist.
Naturally it’s horrifying, and Microsoft has been trying to clean up the mess. Though as some on Twitter have pointed out, no matter how little Microsoft would like to have “Bush did 9/11” spouting from a corporate sponsored project, Tay does serve to illustrate the most dangerous fundamental truth of artificial intelligence: It is a mirror. Artificial intelligence—specifically “neural networks” that learn behavior by ingesting huge amounts of data and trying to replicate it—need some sort of source material to get started. They can only get that from us. There is no other way.
But before you give up on humanity entirely, there are a few things worth noting. For starters, it’s not like Tay just necessarily picked up virulent racism by just hanging out and passively listening to the buzz of the humans around it. Tay was announced in a very big way—with press coverage—and pranksters pro-actively went to it to see if they could teach it to be racist.
If you take an AI and then don’t immediately introduce it to a whole bunch of trolls shouting racism at it for the cheap thrill of seeing it learn a dirty trick, you can get some more interesting results. Endearing ones even! Multiple neural networks designed to predict text in emails and text messages have an overwhelming proclivity for saying “I love you” constantly, especially when they are otherwise at a loss for words.
So Tay’s racism isn’t necessarily a reflection of actual, human racism so much as it is the consequence of unrestrained experimentation, pushing the envelope as far as it can go the very first second we get the chance. The mirror isn’t showing our real image; it’s reflecting the ugly faces we’re making at it for fun. And maybe that’s actually worse.
Sure, Tay can’t understand what racism means any more than Gmail can really love you. And baby’s first words being “genocide lol!” is admittedly sort of funny when you aren’t talking about literal all-powerful SkyNet or a real human child. But AI is advancing at a staggering rate.
....
When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t understand.
2. As reviewed above, Tay, Microsoft’s AI-powered twitterbot designed to learn from its human interactions, became a neo-Nazi in less than a day after a bunch of 4chan users decided to flood Tay with neo-Nazi-like tweets. According to some recent research, the AI’s of the future might not need a bunch of 4chan users to fill them with human bigotries. The AIs’ analysis of real-world human language usage will do that automatically.
When you read about people like Elon Musk equating artificial intelligence with “summoning the demon”, that demon is us, at least in part.
” . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . . .”
Machine learning algorithms are picking up deeply ingrained race and gender prejudices concealed within the patterns of language use, scientists say
An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases.
The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons.
In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained.
However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.
Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.”
But Bryson warned that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases. “A danger would be if you had an AI system that didn’t have an explicit part that was driven by moral ideas, that would be bad,” she said.
The research, published in the journal Science, focuses on a machine learning tool known as “word embedding”, which is already transforming the way computers interpret speech and text. Some argue that the natural next step for the technology may involve machines developing human-like abilities such as common sense and logic.
…
The approach, which is already used in web search and machine translation, works by building up a mathematical representation of language, in which the meaning of a word is distilled into a series of numbers (known as a word vector) based on which other words most frequently appear alongside it. Perhaps surprisingly, this purely statistical approach appears to capture the rich cultural and social context of what a word means in the way that a dictionary definition would be incapable of.
For instance, in the mathematical “language space”, words for flowers are clustered closer to words linked to pleasantness, while words for insects are closer to words linked to unpleasantness, reflecting common views on the relative merits of insects versus flowers.
The latest paper shows that some more troubling implicit biases seen in human psychology experiments are also readily acquired by algorithms. The words “female” and “woman” were more closely associated with arts and humanities occupations and with the home, while “male” and “man” were closer to maths and engineering professions.
And the AI system was more likely to associate European American names with pleasant words such as “gift” or “happy”, while African American names were more commonly associated with unpleasant words.
The findings suggest that algorithms have acquired the same biases that lead people (in the UK and US, at least) to match pleasant words and white faces in implicit association tests.
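The following Python sketch shows, on tiny hand-made vectors, how such an association test works: a word’s bias score is the difference between its cosine similarity to a “pleasant” attribute word and to an “unpleasant” one. The toy vectors are invented for illustration; the published tests use embeddings trained on corpora such as the Common Crawl data mentioned below.

```python
# Toy illustration of a word-embedding association (bias) score.
import numpy as np

vectors = {                                   # invented 3-d "embeddings"
    "flower":     np.array([0.9, 0.1, 0.0]),
    "insect":     np.array([0.1, 0.9, 0.0]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, pos="pleasant", neg="unpleasant"):
    # Positive score means the word sits closer to the pleasant attribute.
    return cosine(vectors[word], vectors[pos]) - cosine(vectors[word], vectors[neg])

for word in ("flower", "insect"):
    print(word, round(association(word), 3))
```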
These biases can have a profound impact on human behaviour. One previous study showed that an identical CV is 50% more likely to result in an interview invitation if the candidate’s name is European American than if it is African American. The latest results suggest that algorithms, unless explicitly programmed to address this, will be riddled with the same social prejudices.
“If you didn’t believe that there was racism associated with people’s names, this shows it’s there,” said Bryson.
The machine learning tool used in the study was trained on a dataset known as the “common crawl” corpus – a list of 840bn words that have been taken as they appear from material published online. Similar results were found when the same tools were trained on data from Google News.
Sandra Wachter, a researcher in data ethics and algorithms at the University of Oxford, said: “The world is biased, the historical data is biased, hence it is not surprising that we receive biased results.”
Rather than algorithms representing a threat, they could present an opportunity to address bias and counteract it where appropriate, she added.
“At least with algorithms, we can potentially know when the algorithm is biased,” she said. “Humans, for example, could lie about the reasons they did not hire someone. In contrast, we do not expect algorithms to lie or deceive us.”
However, Wachter said the question of how to eliminate inappropriate bias from algorithms designed to understand language, without stripping away their powers of interpretation, would be challenging.
“We can, in principle, build systems that detect biased decision-making, and then act on it,” said Wachter, who along with others has called for an AI watchdog to be established. “This is a very complicated task, but it is a responsibility that we as society should not shy away from.”
3a. Cambridge Analytica and its parent company SCL specialize in using AI and Big Data psychometric analysis on hundreds of millions of Americans in order to model individual behavior. SCL develops strategies to use that information and to manipulate search engine results to change public opinion (the Trump campaign was apparently very big on AI and Big Data).
Individual social media users receive messages crafted to influence them, generated by the (in effect) Nazi AI at the core of this media engine, using Big Data to target the individual user!
As the article notes, not only are Cambridge Analytica/SCL using their propaganda techniques to shape US public opinion in a fascist direction, but they are achieving this by utilizing their propaganda machine to characterize all news outlets to the left of Breitbart as “fake news” that can’t be trusted.
In short, the secretive far-right billionaire (Robert Mercer), joined at the hip with Steve Bannon, is running multiple firms specializing in mass psychometric profiling based on data collected from Facebook and other social media. Mercer/Bannon/Cambridge Analytica/SCL are using Nazified AI and Big Data to develop mass propaganda campaigns to turn the public against everything that isn’t Breitbartian by convincing the public that all non-Breitbartian media outlets are conspiring to lie to the public.
This is the ultimate Serpent’s Walk scenario–a Nazified Artificial Intelligence drawing on Big Data gleaned from the world’s internet and social media operations to shape public opinion, target individual users, shape search engine results and even provide feedback to Trump while he is giving press conferences!
We note that SCL, the parent company of Cambridge Analytica, has been deeply involved with “psyops” in places like Afghanistan and Pakistan. Now Cambridge Analytica, with its Big Data and AI components, Mercer money and Bannon political savvy, is applying those techniques to contemporary society. We note that:
- Cambridge Analytica’s parent corporation, SCL, was deeply involved with “psyops” in Afghanistan and Pakistan. ” . . . But there was another reason why I recognised Robert Mercer’s name: because of his connection to Cambridge Analytica, a small data analytics company. He is reported to have a $10m stake in the company, which was spun out of a bigger British company called SCL Group. It specialises in ‘election management strategies’ and ‘messaging and information operations’, refined over 25 years in places like Afghanistan and Pakistan. In military circles this is known as ‘psyops’ – psychological operations. (Mass propaganda that works by acting on people’s emotions.) . . .”
- The use of millions of “bots” to manipulate public opinion: ” . . . .‘It does seem possible. And it does worry me. There are quite a few pieces of research that show if you repeat something often enough, people start involuntarily to believe it. And that could be leveraged, or weaponised for propaganda. We know there are thousands of automated bots out there that are trying to do just that.’ . . .”
- The use of Artificial Intelligence: ” . . . There’s nothing accidental about Trump’s behaviour, Andy Wigmore tells me. ‘That press conference. It was absolutely brilliant. I could see exactly what he was doing. There’s feedback going on constantly. That’s what you can do with artificial intelligence. You can measure every reaction to every word. He has a word room, where you fix key words. We did it. So with immigration, there are actually key words within that subject matter which people are concerned about. So when you are going to make a speech, it’s all about how can you use these trending words.’ . . .”
- The use of bio-psycho-social profiling: ” . . . Bio-psycho-social profiling, I read later, is one offensive in what is called ‘cognitive warfare’. Though there are many others: ‘recoding the mass consciousness to turn patriotism into collaborationism,’ explains a Nato briefing document on countering Russian disinformation written by an SCL employee. ‘Time-sensitive professional use of media to propagate narratives,’ says one US state department white paper. ‘Of particular importance to psyop personnel may be publicly and commercially available data from social media platforms.’ . . .”
- The use and/or creation of a cognitive casualty: ” . . . . Yet another details the power of a ‘cognitive casualty’ – a ‘moral shock’ that ‘has a disabling effect on empathy and higher processes such as moral reasoning and critical thinking’. Something like immigration, perhaps. Or ‘fake news’. Or as it has now become: ‘FAKE news!!!!’ . . . ”
- All of this adds up to a “cyber Serpent’s Walk.” ” . . . . How do you change the way a nation thinks? You could start by creating a mainstream media to replace the existing one with a site such as Breitbart. [Serpent’s Walk scenario with Breitbart becoming “the opinion forming media”!–D.E.] You could set up other websites that displace mainstream sources of news and information with your own definitions of concepts like “liberal media bias”, like CNSnews.com. And you could give the rump mainstream media, papers like the ‘failing New York Times!’ what it wants: stories. Because the third prong of Mercer and Bannon’s media empire is the Government Accountability Institute. . . .”
3b. Some terrifying and consummately important developments taking shape in the context of what Mr. Emory has called “technocratic fascism”:
- In FTR #‘s 718 and 946, we detailed the frightening, ugly reality behind Facebook. Facebook is now developing technology that will permit the tapping of users’ thoughts via brain-to-computer interface technology. Facebook’s R & D is headed by Regina Dugan, who used to head the Pentagon’s DARPA. Facebook’s Building 8 is patterned after DARPA: ” . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
- ” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
- ” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
3c. We present still more about Facebook’s brain-to-computer interface:
- ” . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”
- ” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”
3d. Collating the information about Facebook’s brain-to-computer interface with their documented actions turning psychological intelligence about troubled teenagers over to advertisers gives us a peek into what may lie behind Dugan’s bland reassurances:
- ” . . . . The 23-page document allegedly revealed that the social network provided detailed data about teens in Australia—including when they felt ‘overwhelmed’ and ‘anxious’—to advertisers. The creepy implication is that said advertisers could then go and use the data to throw more ads down the throats of sad and susceptible teens. . . . By monitoring posts, pictures, interactions and internet activity in real-time, Facebook can work out when young people feel ‘stressed’, ‘defeated’, ‘overwhelmed’, ‘anxious’, ‘nervous’, ‘stupid’, ‘silly’, ‘useless’, and a ‘failure’, the document states. . . .”
- ” . . . . A presentation prepared for one of Australia’s top four banks shows how the $US415 billion advertising-driven giant has built a database of Facebook users that is made up of 1.9 million high schoolers with an average age of 16, 1.5 million tertiary students averaging 21 years old, and 3 million young workers averaging 26 years old. Detailed information on mood shifts among young people is ‘based on internal Facebook data’, the document states, ‘shareable under non-disclosure agreement only’, and ‘is not publicly available’. . . .”
- “In a statement given to the newspaper, Facebook confirmed the practice and claimed it would do better, but did not disclose whether the practice exists in other places like the US. . . .”
3e. The next version of Amazon’s Echo, the Echo Look, has a microphone and camera so it can take pictures of you and give you fashion advice. This is an AI-driven device designed to be placed in your bedroom to capture audio and video. The images and videos are stored indefinitely in the Amazon cloud. When Amazon was asked if the photos, videos, and the data gleaned from the Echo Look would be sold to third parties, Amazon didn’t address that question. Selling off your private info collected from these devices would appear to be another feature of the Echo Look: ” . . . . Amazon is giving Alexa eyes. And it’s going to let her judge your outfits. The newly announced Echo Look is a virtual assistant with a microphone and a camera that’s designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you’re not sure if your outfit is cute, but it’s also got a built-in app called StyleCheck that is worth some further dissection. . . .”
We then further develop the stunning implications of Amazon’s Echo Look AI technology:
- ” . . . . This might seem overly speculative or alarmist to some, but Amazon isn’t offering any reassurance that they won’t be doing more with data gathered from the Echo Look. When asked if the company would use machine learning to analyze users’ photos for any purpose other than fashion advice, a representative simply told The Verge that they ‘can’t speculate’ on the topic. The rep did stress that users can delete videos and photos taken by the Look at any time, but until they do, it seems this content will be stored indefinitely on Amazon’s servers.
- This non-denial means the Echo Look could potentially provide Amazon with the resource every AI company craves: data. And full-length photos of people taken regularly in the same location would be a particularly valuable dataset — even more so if you combine this information with everything else Amazon knows about its customers (their shopping habits, for one). But when asked whether the company would ever combine these two datasets, an Amazon rep only gave the same, canned answer: ‘Can’t speculate.’ . . . ”
- Noteworthy in this context is the fact that AI’s have shown that they quickly incorporate human traits and prejudices. (This is reviewed at length above.) ” . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . . .”
3f. Facebook has been developing new artificial intelligence (AI) technology to classify pictures on your Facebook page:
For the past few months, Facebook has secretly been rolling out a new feature to U.S. users: the ability to search photos by what’s depicted in them, rather than by captions or tags.
The idea itself isn’t new: Google Photos had this feature built in when it launched in 2015. But on Facebook, the update solves a longstanding organization problem. It means finally being able to find that picture of your friend’s dog from 2013, or the selfie your mom posted from Mount Rushmore in 2009… without 20 minutes of scrolling.
To make photos searchable, Facebook analyzes every single image uploaded to the site, generating rough descriptions of each one. This data is publicly available—there’s even a Chrome extension that will show you what Facebook’s artificial intelligence thinks is in each picture—and the descriptions can also be read out loud for Facebook users who are vision-impaired.
For now, the image descriptions are vague, but expect them to get a lot more precise. Today’s announcement specified the AI can identify the color and type of clothes a person is wearing, as well as famous locations and landmarks, objects, animals and scenes (garden, beach, etc.) Facebook’s head of AI research, Yann LeCun, told reporters the same functionality would eventually come for videos, too.
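As a rough idea of how such automatic photo descriptions can be generated, here is a short Python sketch that tags an image using an off-the-shelf ImageNet classifier from torchvision. It is not Facebook’s system, and the file name is a placeholder.

```python
# Hypothetical sketch: auto-tagging a photo with a pretrained classifier.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()             # resize/crop/normalize for this model

def describe(path, top_k=3):
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    top = probs.topk(top_k)
    labels = [weights.meta["categories"][i] for i in top.indices]
    return "Image may contain: " + ", ".join(labels)

print(describe("photo.jpg"))                  # placeholder file name
```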
Facebook has in the past championed plans to make all of its visual content searchable—especially Facebook Live. At the company’s 2016 developer conference, head of applied machine learning Joaquin Quiñonero Candela said one day AI would watch every Live video happening around the world. If users wanted to watch someone snowboarding in real time, they would just type “snowboarding” into Facebook’s search bar. On-demand viewing would take on a whole new meaning.
There are privacy considerations, however. Being able to search photos for specific clothing or religious place of worship, for example, could make it easy to target Facebook users based on religious belief. Photo search also extends Facebook’s knowledge of users beyond what they like and share, to what they actually do in real life. That could allow for far more specific targeting for advertisers. As with everything on Facebook, features have their cost—your data.
4a. Facebook’s artificial intelligence robots have begun talking to each other in their own language, which their human masters cannot understand. “ . . . . Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language. . . . The company chose to shut down the chats because “our interest was having bots who could talk to people”, researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.) The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up . . .”
Facebook abandoned an experiment after two artificially intelligent programs appeared to be chatting to each other in a strange language only they understood.
The two chatbots came to create their own changes to English that made it easier for them to work – but which remained mysterious to the humans that supposedly look after them.
The bizarre discussions came as Facebook challenged its chatbots to try and negotiate with each other over a trade, attempting to swap hats, balls and books, each of which were given a certain value. But they quickly broke down as the robots appeared to chant at each other in a language that they each understood but which appears mostly incomprehensible to humans.
The robots had been instructed to work out how to negotiate between themselves, and improve their bartering as they went along. But they were not told to use comprehensible English, allowing them to create their own “shorthand”, according to researchers.
The actual negotiations appear very odd, and don’t look especially useful:
Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i . . . . . . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i i i i i everything else . . . . . . . . . . . . . .
Alice: balls have 0 to me to me to me to me to me to me to me to me to
Bob: you i i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
But there appear to be some rules to the speech. The way the chatbots keep stressing their own name appears to be a part of their negotiations, not simply a glitch in the way the messages are read out.
Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language.
They might have formed as a kind of shorthand, allowing them to talk more effectively.
“Agents will drift off understandable language and invent codewords for themselves,” Facebook Artificial Intelligence Research division’s visiting researcher Dhruv Batra said. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
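As a toy illustration of the kind of repetition “shorthand” Batra describes (not Facebook’s code), the snippet below simply treats the number of times a marker phrase is repeated as a quantity:

```python
# Toy decoder: repetition count stands in for a quantity.
import re

def decode_quantity(utterance: str, marker: str = "to me") -> int:
    # Count how many times the marker phrase is repeated in the utterance.
    return len(re.findall(re.escape(marker), utterance))

print(decode_quantity("balls have a ball to me to me to me to me"))   # -> 4
```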
…
The company chose to shut down the chats because “our interest was having bots who could talk to people”, researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.)
The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up, according to a paper published by FAIR.
…
Another study at OpenAI found that artificial intelligence could be encouraged to create a language, making itself more efficient and better at communicating as it did so.
9b. Facebook’s negotiation-bots didn’t just make up their own language during the course of this experiment. They learned how to lie for the purpose of maximizing their negotiation outcomes, as well:
“ . . . . ‘We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,’ writes the team. ‘Deceit is a complex skill that requires hypothesizing the other agent’s beliefs, and is learned relatively late in child development. Our agents have learned to deceive without any explicit human design, simply by trying to achieve their goals.’ . . . ”
Facebook’s 100,000-strong bot empire is booming – but it has a problem. Each bot is designed to offer a different service through the Messenger app: it could book you a car, or order a delivery, for instance. The point is to improve customer experiences, but also to massively expand Messenger’s commercial selling power.
“We think you should message a business just the way you would message a friend,” Mark Zuckerberg said on stage at the social network’s F8 conference in 2016. Fast forward one year, however, and Messenger VP David Marcus seemed to be correcting the public’s apparent misconception that Facebook’s bots resembled real AI. “We never called them chatbots. We called them bots. People took it too literally in the first three months that the future is going to be conversational.” The bots are instead a combination of machine learning and natural language learning, that can sometimes trick a user just enough to think they are having a basic dialogue. Not often enough, though, in Messenger’s case. So in April, menu options were reinstated in the conversations.
Now, Facebook thinks it has made progress in addressing this issue. But it might just have created another problem for itself.
The Facebook Artificial Intelligence Research (FAIR) group, in collaboration with Georgia Institute of Technology, has released code that it says will allow bots to negotiate. The problem? A paper published this week on the R&D reveals that the negotiating bots learned to lie. Facebook’s chatbots are in danger of becoming a little too much like real-world sales agents.
“For the first time, we show it is possible to train end-to-end models for negotiation, which must learn both linguistic and reasoning skills with no annotated dialogue states,” the researchers explain. The research shows that the bots can plan ahead by simulating possible future conversations.
The team trained the bots on a massive dataset of natural language negotiations between two people (5,808), where they had to decide how to split and share a set of items both held separately, of differing values. They were first trained to respond based on the “likelihood” of the direction a human conversation would take. However, the bots can also be trained to “maximise reward”, instead.
When the bots were trained purely to maximise the likelihood of human conversation, the chat flowed but the bots were “overly willing to compromise”. The research team decided this was unacceptable, due to lower deal rates. So it used several different methods to make the bots more competitive and essentially self-serving, including ensuring the value of the items drops to zero if the bots walked away from a deal or failed to make one fast enough, ‘reinforcement learning’ and ‘dialog rollouts’. The techniques used to teach the bots to maximise the reward improved their negotiating skills, a little too well.
“We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,” writes the team. “Deceit is a complex skill that requires hypothesizing the other agent’s beliefs, and is learned relatively late in child development. Our agents have learnt to deceive without any explicit human design, simply by trying to achieve their goals.”
So, its AI is a natural liar.
But its language did improve, and the bots were able to produce novel sentences, which is really the whole point of the exercise. We hope. Rather than it learning to be a hard negotiator in order to sell the heck out of whatever wares or services a company wants to tout on Facebook. “Most” human subjects interacting with the bots were in fact not aware they were conversing with a bot, and the best bots achieved better deals as often as worse deals. . . .
. . . . Facebook, as ever, needs to tread carefully here, though. Also announced at its F8 conference this year, the social network is working on a highly ambitious project to help people type with only their thoughts.
“Over the next two years, we will be building systems that demonstrate the capability to type at 100 [words per minute] by decoding neural activity devoted to speech,” said Regina Dugan, who previously headed up Darpa. She said the aim is to turn thoughts into words on a screen. While this is a noble and worthy venture when aimed at “people with communication disorders”, as Dugan suggested it might be, if this were to become standard and integrated into Facebook’s architecture, the social network’s savvy bots of two years from now might be able to preempt your language even faster, and formulate the ideal bargaining language. Start practicing your poker face/mind/sentence structure, now.
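To make the training distinction in the excerpt above a little more concrete (bots trained to “maximise likelihood” of human-sounding replies versus bots trained to “maximise reward” via simulated “dialog rollouts”), here is a minimal, hypothetical sketch. The candidate replies, probabilities and payoff numbers are all invented for illustration; this is not the FAIR code, only a toy showing how reward-seeking selection can favor a feigned-interest move.

```python
import random

# Hypothetical sketch of "likelihood" vs. "reward" response selection.
# Not the FAIR implementation: the candidate replies, probabilities and
# payoff model below are invented for illustration only.

CANDIDATES = {
    "I'll take the books, you take the balls": 0.60,  # most human-like reply
    "Give me everything or no deal": 0.25,
    "I really want the hat (I don't)": 0.15,          # feigned interest
}

def simulated_final_payoff(reply: str) -> float:
    """Toy stand-in for rolling out the rest of the negotiation after `reply`."""
    base = {
        "I'll take the books, you take the balls": 5.0,
        "Give me everything or no deal": 6.0,
        "I really want the hat (I don't)": 8.0,  # 'conceding' the hat later pays off
    }[reply]
    return base + random.uniform(-1.0, 1.0)      # noise from the simulated partner

def choose_by_likelihood() -> str:
    """Supervised behaviour: say whatever a human would most likely say."""
    return max(CANDIDATES, key=CANDIDATES.get)

def choose_by_rollout(n_rollouts: int = 50) -> str:
    """Reward-maximising behaviour: simulate each candidate reply many times
    and pick the one with the best average final payoff (a 'dialog rollout')."""
    def expected_payoff(reply: str) -> float:
        return sum(simulated_final_payoff(reply) for _ in range(n_rollouts)) / n_rollouts
    return max(CANDIDATES, key=expected_payoff)

if __name__ == "__main__":
    print("Likelihood-trained bot says:", choose_by_likelihood())
    print("Reward-trained bot says:    ", choose_by_rollout())
```

With these made-up numbers, the likelihood-trained bot picks the most human-sounding reply, while the reward-trained bot settles on the feigned-interest gambit, which is the kind of behavior the researchers describe.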
10. Digressing slightly to the use of DNA-based memory systems, we get a look at the present and projected future of that technology. Just imagine the potential abuses of this technology, and its [seemingly inevitable] marriage with AI!
“A Living Hard Drive That Can Copy Itself” by Gina Kolata; The New York Times; 7/13/2017.
. . . . George Church, a geneticist at Harvard and one of the authors of the new study, recently encoded his own book, “Regenesis,” into bacterial DNA and made 90 billion copies of it. “A record for publication,” he said in an interview. . . .
. . . . In 1994, [USC mathematician Dr. Leonard] Adleman reported that he had stored data in DNA and used it as a computer to solve a math problem. He determined that DNA can store a million million times more data than a compact disc in the same space. . . .
. . . . DNA is never going out of fashion. “Organisms have been storing information in DNA for billions of years, and it is still readable,” Dr. Adleman said. He noted that modern bacteria can read genes recovered from insects trapped in amber for millions of years. . . .
. . . . The idea is to have bacteria engineered as recording devices drift up to the brain in the blood and take notes for a while. Scientists would then extract the bacteria and examine their DNA to see what they had observed in the brain neurons. Dr. Church and his colleagues have already shown in past research that bacteria can record DNA in cells, if the DNA is properly tagged. . . .
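As a rough illustration of what “storing data in DNA” means at the encoding level, here is a minimal, hypothetical sketch that maps ordinary bytes onto the four nucleotide bases (two bits per base) and back. It is a toy only; real schemes of the kind Dr. Church and Dr. Adleman describe add error correction and work around the constraints of actual DNA synthesis and sequencing.

```python
# Toy DNA data-storage encoder/decoder: 2 bits per base, using A/C/G/T.
# A hypothetical sketch only; real DNA storage schemes add redundancy,
# avoid long runs of one base, and handle synthesis/sequencing errors.

BASES = "ACGT"                                  # 00 -> A, 01 -> C, 10 -> G, 11 -> T
BASE_TO_BITS = {b: i for i, b in enumerate(BASES)}

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as four bases, high-order bits first."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASES[(byte >> shift) & 0b11])
    return "".join(bases)

def dna_to_bytes(seq: str) -> bytes:
    """Decode a strand back into bytes (four bases per byte)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASE_TO_BITS[base]
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    text = b"Regenesis"
    strand = bytes_to_dna(text)
    print(strand)                               # 'CCAG...' -- 4 bases per byte
    assert dna_to_bytes(strand) == text         # round-trip check
```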
11. Hawking recently warned of the potential danger to humanity posed by the growth of AI (artificial intelligence) technology.
Prof Stephen Hawking, one of Britain’s pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence.
He told the BBC: “The development of full artificial intelligence could spell the end of the human race.”
His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI. . . .
. . . . Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.
“It would take off on its own, and re-design itself at an ever increasing rate,” he said.
“Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” . . . .
12. In L‑2 (recorded in January of 1995–20 years before Hawking’s warning) Mr. Emory warned about the dangers of AI, combined with DNA-based memory systems.
13. This description concludes with an article about Elon Musk, whose predictions about AI supplement those made by Stephen Hawking. (CORRECTION: Mr. Emory mis-states Mr. Hassabis’s name as “Dennis.”)
It was just a friendly little argument about the fate of humanity. Demis Hassabis, a leading creator of advanced artificial intelligence, was chatting with Elon Musk, a leading doomsayer, about the perils of artificial intelligence.
They are two of the most consequential and intriguing men in Silicon Valley who don’t live there. Hassabis, a co-founder of the mysterious London laboratory DeepMind, had come to Musk’s SpaceX rocket factory, outside Los Angeles, a few years ago. They were in the canteen, talking, as a massive rocket part traversed overhead. Musk explained that his ultimate goal at SpaceX was the most important project in the world: interplanetary colonization.
Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on humanity. Amused, Hassabis said that A.I. would simply follow humans to Mars. . . .
. . . . Peter Thiel, the billionaire venture capitalist and Donald Trump adviser who co-founded PayPal with Musk and others—and who in December helped gather skeptical Silicon Valley titans, including Musk, for a meeting with the president-elect—told me a story about an investor in DeepMind who joked as he left a meeting that he ought to shoot Hassabis on the spot, because it was the last chance to save the human race.
Elon Musk began warning about the possibility of A.I. running amok three years ago. It probably hadn’t eased his mind when one of Hassabis’s partners in DeepMind, Shane Legg, stated flatly, “I think human extinction will probably occur, and technology will likely play a part in this.” . . . .
Now Mr. Emory, I may be jumping the gun on this one, but hear me out: during the aftermath of Iran-Contra, a certain Inslaw Inc. computer programmer named Michael James Riconosciuto (a known intimate of Robert Maheu) spoke of, quote, “...engineering race-specific bio-warfare agents...” while working for the Cabazon Arms company (a former defense firm controlled by the Twenty-Nine Palms Band of Mission Indians and Wackenhut, both of which are connected to Donald Trump in overt fashion).
Now, with the advent of DNA-based memory systems and programmable “germs”, is the idea of bio-weapons or even nanobots programmed to attack people with certain skin pigments going to become a reality?
@Robert Montenegro–
Two quick points:
1.–Riconosciuto is about 60–40 in terms of credibility. Lots of good stuff there; plenty of bad stuff, as well. Vetting is important.
2‑You should investigate AFA #39. It is long and I would rely on the description more than the audio files alone.
https://spitfirelist.com/anti-fascist-archives/afa-39-the-world-will-be-plunged-into-an-abyss/
Best,
Dave
I agree with your take on Riconosciuto’s credibility, Mr. Emory (I’d say most of the things that came out of that man’s mouth were malarkey, much like Ted Gunderson, Dois Gene Tatum and Bernard Fensterwald).
I listened to AFA #39 and researched the articles in the description. Absolutely damning collection of information. A truly brilliant exposé.
If I may ask another question, Mr. Emory, what is your take on KGB defector and CIA turncoat Ilya Dzerkvelov’s claim that Russian intelligence created the “AIDS is man-made” story and that the KGB led a disinformation campaign called “Operation INFEKTION”?
@Robert–
Very quickly, as time is at a premium:
1.-By “60–40,” I did not mean that Riconosciuto speaks mostly malarkey, but that more than half (an arbitrary figure, admittedly) is accurate. His pronouncements must nonetheless be carefully vetted, as he misses the mark frequently.
2.-Fensterwald is more credible, though not thoroughgoing, by any means. He is more like “80–20.” He is, however, “100–0” dead.
3.-The only things I have seen coming from Tatum were accurate. Doesn’t mean he doesn’t spread the Fresh Fertilizer, however. I have not encountered any.
4.-Dzerkvelov’s claim IS Fresh Fertilizer, of the worst sort. Cold War I propaganda recycled in time for Cold War II.
It is the worst sort of Red-baiting and the few people who had the courage to come forward in the early ’80’s (during the fiercest storms of Cold War I) have received brutal treatment because of that.
I can attest to that from brutal personal experience.
In AFA #16 (https://spitfirelist.com/anti-fascist-archives/rfa-16-aids-epidemic-or-weapon/), you will hear material that I had on the air long before the U.S.S.R. began talking about it, and neither they NOR the Russians have gone anywhere near what I discuss in AFA #39.
No more time to talk–must get to work.
Best,
Dave
Thank you very much for your clarification Mr. Emory and I apologize for any perceived impertinence. As a young man, sheltered in many respects (though I am a combat veteran), I sometimes find it difficult to imagine living under constant personal ridicule and attack, and I thank you for the great social, financial and psychological sacrifices you have made in the name of pursuing cold, unforgiving fact.
@Robert Montenegro–
You’re very welcome. Thank you for your service!
Best,
Dave
*Skynet alert*
Elon Musk just issued another warning about the destructive potential of AI run amok. So what prompted the latest outcry from Musk? An AI from his own startup, OpenAI, just beat one of the best professional game players in the world at Dota 2, a game that involves improvising in unfamiliar scenarios, anticipating how an opponent will move and convincing the opponent’s allies to help:
“Musk’s tweets came hours following an AI bot’s victory against some of the world’s best players of Dota 2, a military strategy game. A blog post by OpenAI states that successfully playing the game involves improvising in unfamiliar scenarios, anticipating how an opponent will move and convincing the opponent’s allies to help.”
Superior military strategy AIs beating the best humans. That’s a thing now. Huh. We’ve definitely seen this movie before.
So now you know: when Skynet comes to you with an offer to work together, just don’t. No matter how tempting the offer. Although, since it will likely have already anticipated your refusal, the negotiation is probably going to be a ruse anyway, with the real negotiations secretly carried on with another AI in a language they made up. Still, just say ‘no’ to Skynet.
Also, given that Musk’s other investors in OpenAI include Peter Thiel, it’s probably worth noting that, as scary as super AI is should it get out of control, it’s also potentially pretty damn scary while still under human control, especially when those humans are people like Peter Thiel. So, yes, out-of-control AIs are indeed an issue that will likely be of great concern in the future. But we shouldn’t forget that out-of-control techno-billionaires are probably a more pressing issue at the moment.
*The Skynet alert is never over*
It looks like Facebook and Elon Musk might have some competition in the mind-reading technology area. From a former Facebook engineer, no less, who left the company in 2016 to start Openwater, a company dedicated to reducing the cost of medical imaging technology.
So how is Openwater going to create mind-reading technology? By developing a technology that is sort of like an M.R.I. device embedded into a hat. But instead of using magnetic fields to read the blood flow in the brain, it uses infrared light. So it sounds like this Facebook engineer is planning something similar to the general idea Facebook already announced to create a device that scans the brain 100 times a second to detect what someone is thinking. But presumably Openwater uses a different technology. Or maybe it’s quite similar, who knows. But it’s the latest reminder that the tech giants might not be the only ones pushing mind-reading technology on the public sooner than people expect. Yay?
““I figured out how to put basically the functionality of an M.R.I. machine — a multimillion-dollar M.R.I. machine — into a wearable in the form of a ski hat,” Jepsen tells CNBC, though she does not yet have a prototype completed.”
M.R.I. in a hat. Presumably cheap M.R.I. in a hat because it’s going to have to be affordable if we’re all going to start talking telepathically to each other:
Imagine the possibilities. Like the possibility that what you imagine will somehow be captured by this device and then fed into a 3‑D printer or something:
Or perhaps being forced to wear the hat so that others can read your mind. That’s a possibility too, although Jepsen assures us that they are working on a way for users to somehow filter out thoughts they don’t want to share:
So the hat will presumably read all your thoughts, but only share some of them. You’ll presumably have to get really, really good at near instantaneous mental filtering.
There’s no shortage of immense technical and ethical challenges to this kind of technology, but if they can figure them out it will be pretty impressive. And potentially useful. Who knows what kind of kumbayah moments you could create with telepathy technology.
But, of course, if they can figure out how to get around the technical issues, but not the ethical ones, we’re still probably going to see this technology pushed on the public anyway. It’s a scary thought. A scary thought that we fortunately aren’t forced to share via a mind-reading hat. Yet.
Here’s a pair of stories tangentially related to the recent story about Peter Thiel likely getting chosen to chair the powerful President’s Intelligence Advisory Board (P.I.A.B.) and his apparent enthusiasm for regulating Google and Amazon (not so much Facebook) as public utilities, along with the other recent stories about how Facebook was making user-interest categories like “Jew Haters” available to advertisers and redirecting German users to far-right discussions during this election season:
First, regarding the push to regulate these data giants as public utilities, check out who the other big booster was for the plan: Steve Bannon. So while we don’t know the exact nature of the public utility regulation Bannon and Thiel have in mind, we can be pretty sure it’s going to be designed to be harmful to society and somehow help the far-right:
“Finally, while antitrust enforcement has been a niche issue, its supporters have managed to put many different policies under the same tent. Eventually they may have to make choices: Does Congress want a competition ombudsman, as exists in the European Union? Should antitrust law be used to spread the wealth around regional economies, as it was during the middle 20th century? Should antitrust enforcement target all concentrated corporate power or just the most dysfunctional sectors, like the pharmaceutical industry?”
And that’s why we had better learn some more details about what exactly folks like Steve Bannon and Peter Thiel have in mind when it comes to treating Google and Facebook like public utilities: It sounds like a great idea in theory. Potentially. But the supporters of antitrust enforcement support a wide variety of different policies that generically fall under the “antitrust” tent.
And note that talk about making them more “generous and permissive with user data” is one of those ideas that’s simultaneously great for encouraging more competition while also being eerily similar to the push from the EU’s competition minister about making the data about all of us held exclusively by Facebook and Google more readily available for sharing with the larger marketplace in order to level the playing field between “data rich” and “data poor” companies. It’s something to keep in mind when hearing about how Facebook and Google need to be more “generous” with their data:
So don’t forget, forcing Google and Facebook to share the data they exclusively hold on us also falls under the antitrust umbrella. Maybe users will have sole control over sharing their data with outside firms, or maybe not. These are rather important details that we don’t have, so for all we know that’s part of what Bannon and Thiel have in mind. Palantir would probably love it if Google and Facebook were forced to make their information accessible to outside firms.
And while there’s plenty of ambiguity about what to expect, it seems almost certain that we should also expect any sort of regulatory push by Bannon and Thiel to include something that makes it a lot harder for Google, Facebook, and Amazon to combat hate speech, online harassment, and other tools of the trolling variety that the ‘Alt-Right’ has come to champion. That’s just a given. It’s part of why this is a story to watch. Especially after it was discovered that Bannon and a number of other far-right figures were scheming about ways to infiltrate Facebook:
“The email exchange with a conservative Washington operative reveals the importance that the giant tech platform — now reeling from its role in the 2016 election — held for one of the campaign’s central figures. And it also shows the lengths to which the brawling new American right is willing to go to keep tabs on and gain leverage over the Silicon Valley giants it used to help elect Trump — but whose executives it also sees as part of the globalist enemy.”
LOL! Yeah, Facebook’s executives are part of the “globalist enemy.” Someone needs to inform board member and major investor Peter Thiel about this. Along with all the conservatives Facebook has already hired:
“The company has sought to deflect such criticism through hiring. Its vice president of global public policy, Joel Kaplan, was a deputy chief of staff in the George W. Bush White House. And more recently, Facebook has made moves to represent the Breitbart wing of the Republican party on its policy team, tapping a former top staffer to Attorney General Jeff Sessions to be the director of executive branch public policy in May.”
Yep, a former top staffer to Jeff Sessions was just brought on to become director of executive branch public policy a few months ago. So was that a consequence of Bannon successfully executing a super sneaky job application intelligence operation that gave Sessions’s former top staffer a key edge in the application process? Or was it just Facebook caving to all the public right-wing whining and faux outrage about Facebook not being fair to them? Or how about Peter Thiel just using his influence? All of the above? We don’t get to know, but what we do know now is that Steve Bannon has big plans for shaping Facebook from the outside and the inside. As does Peter Thiel, someone who already sits on Facebook’s board, is a major investor, and is poised to be empowered by the Trump administration to shape its approach to this “treat them like public utilities” concept.
So hopefully we’ll get clarity at some point on what they’re actually planning on doing. Is it going to be all bad? Mostly bad? Maybe some useful antitrust stuff too? What’s the plan? The Trump era is the kind of horror show that doesn’t exactly benefit from suspense.
One of the stranger stories in recent years has been the mystery of Cicada 3301, the anonymous group that posts annual challenges of super-difficult puzzles used to recruit talented code-breakers and invite them to join some sort of Cypherpunk cult that wants to build a global AI ‘god brain’. Or something. It’s a weird and creepy organization that’s speculated to be either a front for an intelligence agency or perhaps some sort of underground network of wealthy Libertarians. And, for now, Cicada 3301 remains anonymous.
So it’s worth noting that someone with a lot of cash has already started a foundation to accomplish that very same ‘AI god’ goal: Anthony Levandowski, a former Google engineer who played a big role in the development of Google’s “Street View” technology and a string of self-driving vehicle companies, started Way of the Future, a nonprofit religious corporation with the mission “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society”:
“Anthony Levandowski, who is at the center of a legal battle between Uber and Google’s Waymo, has established a nonprofit religious corporation called Way of the Future, according to state filings first uncovered by Wired’s Backchannel. Way of the Future’s startling mission: “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.””
Building an AI Godhead for everyone to worship. Levandowski doesn’t appear to be lacking ambition.
But how about ethics? After all, if the AI Godhead is going to push a ‘do unto others’ kind of philosophy it’s going to be a lot harder for that AI Godhead to achieve that kind of enlightenment if it’s built by some sort of selfishness-worshiping Libertarian. So what moral compass does this wannabe Godhead creator possess?
Well, as the following long piece by Wired amply demonstrates, Levandowski doesn’t appear to be too concerned about ethics. Especially if they get in the way of his dream of transforming the world through robotics. Transforming and taking over the world through robotics. Yep. The article focuses on the various legal troubles Levandowski faces over charges by Google that he stole the “Lidar” technology (laser-based radar-like technology used by vehicles to rapidly map their surroundings) he helped develop at Google and took it to Uber (a company with a serious moral compass deficit). But the article also includes some interesting insights into what makes Levandowski tick. For instance, according to a friend and former engineer at one of Levandowski’s companies, “He had this very weird motivation about robots taking over the world—like actually taking over, in a military sense...It was like [he wanted] to be able to control the world, and robots were the way to do that. He talked about starting a new country on an island. Pretty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it”:
“But even the smartest car will crack up if you floor the gas pedal too long. Once feted by billionaires, Levandowski now finds himself starring in a high-stakes public trial as his two former employers square off. By extension, the whole technology industry is there in the dock with Levandowski. Can we ever trust self-driving cars if it turns out we can’t trust the people who are making them?”
Can we ever trust self-driving cars if it turns out we can’t trust the people who are making them? It’s an important question that doesn’t just apply to self-driving cars. Like AI Godheads. Can we ever trust man-made AI Godheads if it turns out we can’t trust the people who are making them? These are the stupid questions we have to ask now, given the disturbing number of powerful people who double as evangelists for techno-cult Libertarian ideologies. Especially when they are specialists in creating automated vehicles and have a deep passion for taking over the world. Possibly taking over the world militarily using robots:
Yeah, that’s disturbing. And it doesn’t help that Levandowski apparently found a soulmate and mentor in the guy widely viewed as one of the most sociopathic CEOs today: Uber CEO Travis Kalanick:
“We’re going to take over the world. One robot at a time”
So that gives us an idea of how Levandowski’s AI religion is going to be evangelized: via his army of robots. Although it’s unclear if his future religion is actually intended for us mere humans. After all, for the hardcore Transhumanists we’re all supposed to fuse with machines or upload our brains so it’s very possible humans aren’t actually part of Levandowski’s vision for that better tomorrow. A vision that, as the Cicada 3301 weirdness reminds us, probably isn’t limited to Levandowski. *gulp*
Just what the world needs: an AI-powered ‘gaydar’ algorithm that purports to be able to detect who is gay and who isn’t just by looking at faces. Although it’s not actually that impressive. The ‘gaydar’ algorithm is instead given pairs of faces, one of a heterosexual individual and one of a homosexual individual, and then tries to identify the gay person. And apparently it does so correctly in 81 percent of cases for men and 71 percent of cases for women, significantly better than the 50 percent rate we would expect just from random chance. It’s the kind of gaydar technology that might not be good enough to just ‘pick the gays from a crowd’, but still more than adequate for potentially being abused. And, more generally, it’s the kind of research that, not surprisingly, is raising concerns about creating a 21st century version of physiognomy, the pseudoscience based on the idea that people’s character is reflected in their faces.
But as the researchers behind the study put it, we don’t need to worry about this being a high-tech example of physiognomy because their gaydar uses hard science. And while the researchers agree that physiognomy is pseudoscience, they also note that the pseudoscience nature of physiognomy doesn’t mean AIs can’t actually learn something about you just by looking at you. Yep. That’s the reassurance we’re getting from these researchers. Don’t worry about AI driving a 21st century version of physiognomy because they are using much better science compared to the past. Feeling reassured?
“Kosinski says his research isn’t physiognomy because it’s using rigorous scientific methods, and his paper cites a number of studies showing that we can deduce (with varying accuracy) traits about people by looking at them. “I was educated and made to believe that it’s absolutely impossible that the face contains any information about your intimate traits, because physiognomy and phrenology were just pseudosciences,” he says. “But the fact that they were claiming things without any basis in fact, that they were making stuff up, doesn’t mean that this stuff is not real.” He agrees that physiognomy is not science, but says there may be truth in its basic concepts that computers can reveal.”
It’s not a return of the physiognomy pseudoscience but “there may be truth in [physiognomy’s] basic concepts that computers can reveal.” That’s seriously the message from these researchers, along with a message of confidence that their algorithm is working solely from facial features and not other more transient features. And based on that confidence in their algorithm, the researchers point to their results as evidence that gay people have biologically different faces...even though they can’t actually determine what the algorithm is looking at when coming to its conclusion:
“In the meantime, it leads to all sorts of problems. A common one is that sexist and racist biases are captured from humans in the training data and reproduced by the AI. In the case of Kosinski and Wang’s work, the “black box” allows them to make a particular scientific leap of faith. Because they’re confident their system is primarily analyzing facial structures, they say their research shows that facial structures predict sexual orientation. (“Study 1a showed that facial features extracted by a [neural network] can be used to accurately identify the sexual orientation of both men and women.”)”
So if this research was put forth as a kind of warning to the public, which is how the researchers are framing it, it’s quite a warning. Both a warning that algorithms like this are being developed and a warning of how readily the conclusions of these algorithms might be accepted as evidence of an underlying biological finding (as opposed to a ‘black box’ artifact that could be picking up all sorts of biological and social cues).
And don’t forget, even if these AI-driven algorithms do stumble across real associations that can be teased out with just some pictures of someone (or maybe some additional biometric data picked up by the “smart kiosks” of the not-too-distant future), there’s a big difference between demonstrating the ability to discern something statistically across large data sets and being able to do that with the kind of accuracy where you don’t have to significantly worry about jumping to the wrong conclusion (assuming you aren’t using the technology in an abusive manner in the first place). Even if someone develops an algorithm that can accurately guess sexual orientation 95 percent of the time, that still leaves a pretty substantial 5 percent chance of getting it wrong. And the only way to avoid those incorrect conclusions is to develop an algorithm that’s so good at inferring sexual orientation it’s basically never wrong, assuming that’s possible. And, of course, an algorithm with that kind of accuracy would be really creepy. It points towards one of the scarier aspects of this kind of technology: in order to ensure your privacy-invading algorithms don’t risk jumping to erroneous conclusions, you need algorithms that are scarily good at invading your privacy, which is another reason we probably shouldn’t be promoting 21st Century physiognomy.
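To put a number on why even a “95 percent accurate” classifier is still dangerous, here is a minimal worked sketch of the base-rate arithmetic. The population size, prevalence and accuracy figures are illustrative assumptions, not values from the study above.

```python
# Base-rate arithmetic for a hypothetical "95% accurate" trait classifier.
# Illustrative numbers only: assume 5% of a population actually has the
# trait, and the classifier has 95% sensitivity and 95% specificity.

population  = 1_000_000
prevalence  = 0.05
sensitivity = 0.95   # P(flagged | has trait)
specificity = 0.95   # P(not flagged | does not have trait)

has_trait   = population * prevalence
lacks_trait = population - has_trait

true_positives  = has_trait * sensitivity
false_positives = lacks_trait * (1 - specificity)

precision = true_positives / (true_positives + false_positives)
print(f"Flagged people who actually have the trait: {precision:.0%}")
# With these assumptions, only about half of the people the classifier
# flags are correct; the other half are misidentified.
```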
It looks like Google is finally getting that shiny new object it’s been pining for: a city. Yep, Sidewalk Labs, owned by Google’s parent company Alphabet, just got permission to build its own ‘city of the future’ on a 12-acre waterfront district near Toronto, filled with self-driving shuttles, adaptive traffic lights that sense pedestrians, and underground tunnels for freight-transporting robots. With sensors everywhere.
And if that sounds ambitious, note that all these plans aren’t limited to the initial 12 acres. Alphabet reportedly has plans to expand across 800 acres of Toronto’s post-industrial waterfront zone:
“In its proposal, Sidewalk also said that Toronto would need to waive or exempt many existing regulations in areas like building codes, transportation, and energy in order to build the city it envisioned. The project may need “substantial forbearances from existing laws and regulations,” the group said.”
LOL, yeah, it’s a good bet that A LOT of existing laws and regulations are going to have to be waived. Especially laws involving personal data privacy. And it sounds like the data collected isn’t just going to involve your whereabouts and other information the sensors everywhere will be able to pick up. Alphabet is also envisioning “tech-enabled primary healthcare that will be integrated with social services”, which means medical data privacy laws are probably also going to have to get waived:
Let’s also not forget about the development of technologies that can collect personal health information like heart rates and breathing information using WiFi signals alone (which would pair nicely with Google’s plans to put free WiFi kiosks bristling with sensors on sidewalks everywhere). And as is pretty clear at this point, anything that can be sensed remotely will be sensed remotely in this new city. Because that’s half the point of the whole thing. So yeah, “substantial forbearances from existing laws and regulations” will no doubt be required.
Interestingly, Alphabet recently announced a new initiative that sounds like exactly the kind of “tech-enabled primary healthcare that will be integrated with social services” the company has planned for its new city: Cityblock was just launched. It’s a new Alphabet startup focused on improving health care management by, surprise!, integrating various technologies into a health care system with the goal of bringing down costs and improving outcomes. But it’s not simply new technology that’s supposed to do this. Instead, that technology is to be used in a preventive manner in order to address more expensive health conditions before they get worse. As such, Cityblock is going to focus on behavioral health. Yep, it’s a health care model where a tech firm, paired with a health firm, tries to get you to live a healthier lifestyle by collecting lots of data about you. And while this approach would undoubtedly cause widespread privacy concerns, those concerns will probably be somewhat stunted in this case since the target market Cityblock has in mind is poor people, especially Medicaid patients in the US:
“Cityblock aims to provide Medicaid and lower-income Medicare beneficiaries access to high-value, readily available personalized health services. To do this, Iyah Romm, cofounder and CEO, writes in a blog post on Medium that the organization will apply leading-edge care models that fully integrate primary care, behavioral health and social services. It expects to open its first clinic, which it calls a Neighborhood Health Hub, in New York City in 2018.”
It’s probably worth recalling that personalized services for the poor intended to ‘help them help themselves’ were the centerpiece of House Speaker Paul Ryan’s proposal to give every poor person a life coach who would issue “life plans” and “contracts” that poor people would be expected to meet, with penalties if they fail to meet them. So when we’re talking about setting up special personalized “behavioral health” monitoring systems as part of health care services for the poor, don’t forget that this personalized monitoring system is going to be really handy when politicians want to say, “if you want to stay on Medicaid you had better make XYZ changes in your lifestyle. We are watching you.” And since right-wingers generally expect the poor to be super-human (capable of working multiple jobs, getting an education, raising a family, and dealing with any unforeseen personal disasters in stride, all simultaneously), it’s not like we should be surprised to see the kinds of behavioral health standards that almost no one can meet, especially since working multiple jobs, getting an education, raising a family, and dealing with any unforeseen personal disasters in stride simultaneously is an incredibly unhealthy lifestyle.
Also recall that Paul Ryan suggested that his ‘life coach’ plan could apply to other federal programs for the poor, including food stamps. It’s not a stretch at all to imagine ‘life coaches’ for Medicaid recipients would appeal to the right-wing. As long as it involves a ‘kicking the poor’ dynamic. And that’s part of the tragedy of the modern age: surveillance technology and a focus on behavioral health could be great as a helpful voluntary tool for people who want help getting healthier, but it’s hard to imagine it not becoming a coercive nightmare scenario in the US given the incredible antipathy towards the poor that pervades American society.
So as creepy as Google’s city is on its face regarding what it tells us about how the future is unfolding for people of all incomes and classes, don’t forget that we could be looking at the first test bed for creating the kind of surveillance welfare state that’s perfect for kicking people off of welfare. Just make unrealistic standards that involve a lot of paternalistic moral posturing (which should play well with voters), watch all the poor people with the surveillance technology, and wait for the wave of inevitable failures who can be kicked off for not ‘trying’ or something.
There’s some big news about Facebook’s mind-reading technology ambitions, although it’s not entirely clear if it’s good news, bad news, scary news or what: Regina Dugan, the former head of DARPA who jumped to Google and then Facebook, where she was working on the mind-reading technology, just left Facebook. Why? Well, that’s where it’s unclear. According to a Tweet Dugan made about her departure:
So Dugan is leaving Facebook, to be more purposeful and responsible. And she was the one heading up the mind-reading technology project. Is that scary news? It’s unclear but it seems like that might be scary news:
“Neither Dugan nor Facebook has made it clear why she’s departing at this time; a representative for Facebook told Gizmodo the company had nothing further to add (we’ve also reached out to Dugan). And Facebook has not detailed what will happen to the projects she oversaw at Building 8.”
It’s a mystery. A mind-reading technology mystery. Oh goodie. As the author of the above piece notes in response to Dugan’s tweet about being responsible and purposeful, these aren’t exactly reassuring words in this context:
That’s the context. The head of the mind-reading technology division for one of the largest private surveillance apparatuses in the world just left the company for cryptic reasons involving the need for the tech industry to be more responsible than ever and her choice to step away to be purposeful. It’s not particularly reassuring news.
Here’s some new research worth keeping in mind regarding the mind-reading technologies being developed by Facebook and Elon Musk: While reading your exact thoughts, the stated goals of Facebook and Musk, is probably going to require quite a bit more research, reading your emotions is something researchers can already do. And this ability to read emotion can, in turn, be potentially used to read what you’re thinking by looking at your emotional response to particular concepts.
That’s what some researchers just demonstrated, using fMRI brain imaging technology to gather data on brain activity which was fed into software trained to identify distinct patterns of brain activity. The results are pretty astounding. In the study, researchers recruited 34 individuals: 17 people who self-professed to having had suicidal thoughts before, and 17 others who hadn’t. Then they measured the brain activities of these 34 individuals in response to various words, including the word “death.” It turns out “death” created a distinct signature of brain activity differentiating the suicidal individuals from the control group, allowing the researchers to identify those with suicidal thoughts 91 percent of the time in this study:
““This isn’t a wild pie in the sky idea,” Just said. “We can use machine learning to figure out the physicality of thought. We can help people.””
Yes indeed, this kind of technology could be wildly helpful in the field of brain science and studying mental illness. The ability to break down the mental activity in response to concepts and see which parts of the brains are lighting up and what types of emotions they’re associated with would be an invaluable research tool. So let’s hope researchers are able to come up with all sorts of useful discoveries about all sorts of mental conditions using this kind of technology. In responsible hands this could lead to incredible breakthroughs in medicine and mental health and really could improve lives.
But, of course, with technology being the double-edged sword that it is, we can’t ignore the reality that the same technology that would be wonderful for responsible researchers working with volunteers in a lab setting would be absolutely terrifying if it was incorporated into, say, Facebook’s planned mind-reading consumer technology. After all, if Facebook’s planned mind-reading tech can read people’s thoughts, it seems like it should also be capable of reading something much simpler to detect, like emotional response, too.
Or at least typical emotional responses. As the study also indicated, there’s going to be a subset of people whose brains don’t emotionally respond in the “normal” manner, where the definition of “normalcy” is probably filled with all sorts of unknown biases:
“What about the other 9%? “It’s a good question,” he said of the gap. “There seems to be an emotional difference [we don’t understand]” that the group hopes to test in future iterations of the study.”
So once this technology becomes cheap enough for widespread use (which is one of the goals of Facebook and Musk), we could easily find that “brain types” become a new category for assessing people. And predicting behavior. And anything else people (and not just expert researchers) can think up to use this kind of data for.
And don’t forget, if Facebook really can develop cheap thought-reading technology designed to interface your brain with a computer, that could easily become the kind of thing employees are just expected to use due to the potential productivity enhancements. So imagine technology that’s not only reading the words you’re thinking but also reading the emotional response you have to those words. And imagine being basically forced to use this technology in the workplace of the future because it’s deemed to be productivity-enhancing or something. That could actually happen.
It also raises the question of what Facebook would do if it detected someone was showing the suicidal brain signature. Do they alert someone? Will thinking sad thoughts while using the mind-reading technology result in a visit from a mental health professional? What about really angry or violent thoughts? It’s the kind of area that’s going to raise fascinating questions about the responsible use of this data. Fascinating and often terrifying questions.
It’s all pretty depressing, right? Well, if the emerging mind-reading economy gets overwhelmingly depressing, at least it sounds like the mind-reading technology causing your depression will be able to detect that it’s causing this. Yay?
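For a sense of what the classification step in the study described above looks like in practice, here is a minimal, hypothetical sketch: synthetic “brain response” features for 17 + 17 subjects, a simple classifier, and leave-one-out cross-validation of the kind typically used with samples this small. The features here are random numbers, so nothing about real fMRI data or suicidality is being modeled.

```python
# Hypothetical sketch of leave-one-out classification on "brain response"
# features, loosely shaped like the 17 + 17 subject study described above.
# The features are synthetic random numbers, not real fMRI data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)

n_per_group, n_features = 17, 10        # e.g. emotion scores for a few concepts
group_a = rng.normal(loc=0.0, scale=1.0, size=(n_per_group, n_features))
group_b = rng.normal(loc=0.8, scale=1.0, size=(n_per_group, n_features))  # shifted group

X = np.vstack([group_a, group_b])        # 34 subjects x 10 features
y = np.array([0] * n_per_group + [1] * n_per_group)

# Train on 33 subjects, test on the held-out one, repeat for every subject.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=LeaveOneOut())
print(f"Leave-one-out accuracy: {scores.mean():.0%}")
```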
Remember those reports about Big Data being used in the workplace to allow employers to predict which employees are likely to get sick (so they can get rid of them before the illnesses get expensive)? Well, as the following article makes clear, employers are going to be predicting a lot more than just who is getting sick. They’re going to be trying to predict everything they can predict, along with things they can’t accurately predict but decide to try to predict anyway:
“Today’s workplace surveillance software is a digital panopticon that began with email and phone monitoring but now includes keeping track of web-browsing patterns, text messages, screenshots, keystrokes, social media posts, private messaging apps like WhatsApp and even face-to-face interactions with co-workers.”
And what are employers doing with that digital panopticon? For starters, surveilling employees’ computer usage, as would unfortunately be expected. But what might not be expected is that this panopticon software can be set up to determine the expected behavior of a particular employee and then compare that behavior profile to the observed behavior. And if there’s a big change, the managers get a warning. The panopticon isn’t just surveilling you. It’s getting to know you:
“If a paralegal is writing a document and every few seconds is switching to Hipchat, Outlook and Word then there’s an issue that can be resolved by addressing it with the employee”
If you’re the type of person whose brain works better jumping back and forth between tasks you’re going to get flagged as not being focused. People with ADHD are going to love the future.
People who like to talk in person over coffee are also going to love the future:
So the fact that employees know they’re being monitored is getting incorporated into more sophisticated algorithms that operate under the assumption that employees know they’re being monitored and will try to hide their misbehavior from the panopticon. Of course, employees are inevitably going to learn about all these subtle clues that the panopticon is watching for so this will no doubt lead to a need for algorithms that incorporate even more subtle clues. An ever more sophisticated cat to catch an ever more sophisticated mouse. And so on, forever.
What could possibly go wrong? Oh yeah, a lot, especially if the assumptions that go into all these algorithms are wrong:
“The more data and technology you have without an underlying theory of how people’s minds work then the easier it is to jump to conclusions and put people in the crosshairs who don’t deserve to be.”
And keep in mind that when your employer’s panopticon predicts you’re going to do something bad in the future, they probably aren’t going to tell you that when they let you go. They’ll just make up some random excuse. Much like how employers who predict you’re going to get sick with an expensive illness probably aren’t going to tell you this. They’re just going to find a reason to let you go. So we can add “misapplied algorithmic assumptions” to the list of potential mystery reasons for when you suddenly get let go from your job with minimal explanation in the panopticon office of the future: maybe your employer predicts you’re about to get really ill. Or maybe some other completely random thing set off the bad behavior predictive algorithm. There’s a range of mystery reasons, so at least you shouldn’t necessarily assume you’re about to get horribly ill when you’re fired. Yay.
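The “expected behavior profile versus observed behavior” monitoring described above is, at bottom, simple anomaly detection. Here is a minimal, hypothetical sketch using a per-employee baseline and a z-score threshold; the metrics and the threshold are illustrative assumptions, not any vendor’s actual product.

```python
# Toy "panopticon" anomaly flag: compare this week's behavior metrics with an
# employee's own historical baseline and flag large deviations.
# A hypothetical sketch only; the metrics and threshold are invented.
from statistics import mean, stdev

def flag_anomalies(history: dict[str, list[float]],
                   current: dict[str, float],
                   threshold: float = 3.0) -> list[str]:
    """Return the metrics whose current value sits more than `threshold`
    standard deviations away from this employee's own baseline."""
    flagged = []
    for metric, past_values in history.items():
        mu, sigma = mean(past_values), stdev(past_values)
        if sigma == 0:
            continue                      # no variation recorded; skip the metric
        z_score = abs(current[metric] - mu) / sigma
        if z_score > threshold:
            flagged.append(metric)
    return flagged

if __name__ == "__main__":
    baseline = {
        "emails_sent_per_day":   [40, 42, 38, 41, 39, 43, 40],
        "app_switches_per_hour": [12, 15, 11, 14, 13, 12, 14],
    }
    this_week = {"emails_sent_per_day": 41, "app_switches_per_hour": 55}
    print(flag_anomalies(baseline, this_week))   # ['app_switches_per_hour']
```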
It told you so:
Listen to this program, and then read this:
https://www.yahoo.com/news/kill-foster-parents-amazons-alexa-talks-murder-sex-120601149–finance.html
Have Fun!
Dave Emory
I think it is important that you are reporting this kind of information. I can tell you that in my career the most depressing and unethical work that I ever moved in the proximity of was at Intel. I can tell you that in 2013 they had, playing in the lobby just before you enter their cafeteria, an advertising video promoting a great new development of theirs called ‘Smart Response’. This new innovation would allow your computer to act as a good butler and predict your decisions and serve as what I could only call your ‘confidant.’
>Imagine, your computer as your ‘best bro’, nobody knows you better. Of course, you can Trust him Right? He’d never Fink on You?
From working in User Experience it was clear that there was effort in getting the machines to collect data about your face from your laptop/device camera, as well as your tone of voice, and then use that to interpret your reactions to whatever you may be looking at and alter accordingly the ad content that the pages you visit display. Supposedly they could interpret your general state of mind with a high degree of accuracy just by focusing on the triangular area between your eyes and your mouth.
>In order for ‘Smart Response’ to work your computer might need to collect this data, build a profile on you. But that’s OK, it’s your Buddy Riight?
From what I could gather, this seemed to be an outgrowth of a project involving physicist Stephen Hawking. They wanted to be able to build software to interpret him clearly. Once they did, they may have begun applying it generally. What really concerned me about it is the prospect of reverse programming. Once they build these profiles, how would they use them? Would they try to provide content that they could guess an individual would respond to in a certain way?
Would they have our computers try programming us?
https://amp.cnn.com/cnn/2019/08/01/tech/robot-racism-scn-trnd/index.html
Robot racism? Yes, says a study showing humans’ biases extend to robots
By Caroline Klein and David Allan, CNN
Updated 8:37 AM EDT, Thu August 01, 2019
There was a fascinating study recently published by the US Army about the future impact of cyborg technology on military affairs by the year 2050. But the report wasn’t just about the impact cyborg technology might have on the military. It included speculation about the impact the incorporation of cyborg technology in the military would have on the rest of society. Perhaps the most important conclusion of the study was that some sort of incorporation of cyborg technology into military affairs was going to be technically feasible by 2050 or earlier, and that when this becomes part of the military it’s inevitably going to be felt by the rest of society, because the cyborg soldiers of the future will eventually leave service and join civil society. And at that point, when retired cyborg soldiers enter civil society, all of the various questions that society needs to ask about how to balance things out when human cyborg augmentation becomes technically viable are going to have to be asked and answered. So the question of whether or not society should allow cyborg augmentation is intertwined with the question of whether or not nations are going to end up creating cyborg soldiers for the military. And given the seemingly endless global military technology race for supremacy, it seems like a given that cyborg soldiers are coming as soon as technologically feasible, and then entering the job market a few years later after they retire from being cyborg soldiers. In other words, unless humanity figures out how to demilitarize and end war over the next 30 years, get ready for cyborgs. And possibly sooner.
Another ominous aspect of the report is the emphasis it placed on direct human-to-computer communications as the cyborg technology that could most revolutionize combat. This is, of course, particularly ominous because brain-to-computer interface technology is another area where there appears to be a technology race already underway. For instance, recall how the head of Facebook’s project working on mind-reading technology — the former head of DARPA, Regina Dugan — resigned from the project and issued a vague statement about the need for the industry to be more responsible than ever before. And Elon Musk’s Neuralink has been working on similar technology that involves actually surgically implanting chips in your brain. Also recall that part of Elon Musk’s vision for the application of this computer-to-brain communication technology is to have humans directly watching over artificial intelligences to make sure the AIs don’t get out of control, i.e. the ‘summoning the demon’ metaphor. So the technology that the US Army expects to most revolutionize combat is a technology that Facebook and Neuralink are already years into developing. It’s a reminder that, while the scenario where cyborg technology first gets applied in the military and then spreads from there to civil society is a plausible scenario, it’s also very possible we’ll see cyborg technology like human-to-computer interfaces sold directly to civil society as soon as possible without first going through the military, because that’s what the technology giants are actively planning on doing right now. That latter scenario is actually in the US Army report, which predicted that these cyborg capabilities will probably “be driven by civilian demand” and “a robust bio-economy that is at its earliest stages of development in today’s global market”:
“The demand for cyborg-style capabilities will be driven in part by the civilian healthcare market, which will acclimate people to an industry fraught with ethical, legal and social challenges, according to Defense Department researchers.”
Civilian demand for cyborg-style capabilities is expected to be a driver for the development of this technology. So it sounds like the people who did this study expect societies to readily embrace cyborg technologies like direct neural enhancement of the human brain for two-way data transfers, a technology that’s expected to be particularly important for combat. And if that technology is going to be particularly important for combat, it’s presumably going to be used in a lot of soldiers. So it’s expected that both the military and civil society are going to be creating the demand for these technologies:
But note how the study doesn’t just call for society to address all of the various questions that arise when we’re talking about introducing cyborg-enhancements to human populations. It also calls for “a whole-of-nation, not whole-of-government, approach to cyborg technologies,” in order to ensure the US maintains a technological lead over countries like Russia and China, and suggests military leaders should work to reverse the “negative cultural narratives of enhancement technologies.”:
So as we can see, the report isn’t just a warning about potential complications associated with the development of cyborg technologies and the need to think this through to avoid dystopian outcomes. It’s also a rallying cry for a “whole-of-nation” full public/private/military/civilian embrace of these technologies and a warning that the use of these technologies is coming whether we like it or not. Which seems kind of dystopian.