WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE.
You can subscribe to e‑mail alerts from Spitfirelist.com HERE.
You can subscribe to RSS feed from Spitfirelist.com HERE.
You can subscribe to the comments made on programs and posts–an excellent source of information in, and of, itself HERE.
This broadcast was recorded in one, 60-minute segment.
Introduction: The title of this program comes from pronouncements by tech titan Elon Musk, who warned that, by developing artificial intelligence, we were “summoning the demon.” In this program, we analyze the potential vector running from the use of AI to control society in a fascistic manner to the evolution of the very technology used for that control.
The ultimate result of this evolution may well prove catastrophic, as forecast by Mr. Emory at the end of L‑2 (recorded in January of 1995).
We begin by reviewing key aspects of the political context in which artificial intelligence is being developed. Note that, at the time of this writing and recording, these technologies are being crafted and put online in the context of the anti-regulatory ethic of the GOP/Trump administration.
At the SXSW event, Microsoft researcher Kate Crawford gave a speech about her work titled “Dark Days: AI and the Rise of Fascism,” a presentation highlighting the social impact of machine learning and large-scale data systems. The take-home message? By delegating powers to Big Data-driven AIs, those AIs could become a fascist’s dream: incredible power over the lives of others with minimal accountability: ” . . . .‘This is a fascist’s dream,’ she said. ‘Power without accountability.’ . . . .”
Taking a look at the future of fascism in the context of AI, Tay, a “bot” created by Microsoft to respond to users of Twitter, was taken offline after users taught it to–in effect–become a Nazi bot. It is noteworthy that Tay can only respond on the basis of what she is taught. In the future, technologically accomplished and willful people like the brilliant, Ukraine-based Nazi hacker and Glenn Greenwald associate Andrew Auernheimer, aka “weev,” may be able to do more. Inevitably, Underground Reich elements will craft a Nazi AI that will be able to do MUCH, MUCH more!
When you read about people like Elon Musk equating artificial intelligence with “summoning the demon”, that demon is us, at least in part.
” . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . . .”
Cambridge Analytica, and its parent company SCL, specialize in using AI and Big Data psychometric analysis on hundreds of millions of Americans in order to model individual behavior. SCL develops strategies to use that information, and manipulate search engine results to change public opinion (the Trump campaign was apparently very big into AI and Big Data during the campaign).
Individual social media users receive messages crafted to influence them, generated by the (in effect) Nazi AI at the core of this media engine, using Big Data to target the individual user!
As the article notes, not only are Cambridge Analytica/SCL using their propaganda techniques to shape US public opinion in a fascist direction, but they are achieving this by utilizing their propaganda machine to characterize all news outlets to the left of Breitbart as “fake news” that can’t be trusted.
In short, the secretive far-right billionaire (Robert Mercer), joined at the hip with Steve Bannon, is running multiple firms specializing in mass psychometric profiling based on data collected from Facebook and other social media. Mercer/Bannon/Cambridge Analytica/SCL are using Nazified AI and Big Data to develop mass propaganda campaigns to turn the public against everything that isn’t Breitbartian by convincing the public that all non-Breitbartian media outlets are conspiring to lie to the public.
This is the ultimate Serpent’s Walk scenario–a Nazified Artificial Intelligence drawing on Big Data gleaned from the world’s internet and social media operations to shape public opinion, target individual users, shape search engine results and even provide feedback to Trump while he is giving press conferences!
We note that SCL, the parent company of Cambridge Analytica, has been deeply involved with “psyops” in places like Afghanistan and Pakistan. Now, Cambridge Analytica, their Big Data and AI components, Mercer money and Bannon political savvy are applying that to contemporary society. We note that:
- Cambridge Analytica’s parent corporation, SCL, was deeply involved with “psyops” in Afghanistan and Pakistan. ” . . . But there was another reason why I recognised Robert Mercer’s name: because of his connection to Cambridge Analytica, a small data analytics company. He is reported to have a $10m stake in the company, which was spun out of a bigger British company called SCL Group. It specialises in ‘election management strategies’ and ‘messaging and information operations’, refined over 25 years in places like Afghanistan and Pakistan. In military circles this is known as ‘psyops’ – psychological operations. (Mass propaganda that works by acting on people’s emotions.) . . .”
- The use of millions of “bots” to manipulate public opinion: ” . . . .‘It does seem possible. And it does worry me. There are quite a few pieces of research that show if you repeat something often enough, people start involuntarily to believe it. And that could be leveraged, or weaponised for propaganda. We know there are thousands of automated bots out there that are trying to do just that.’ . . .”
- The use of Artificial Intelligence: ” . . . There’s nothing accidental about Trump’s behaviour, Andy Wigmore tells me. ‘That press conference. It was absolutely brilliant. I could see exactly what he was doing. There’s feedback going on constantly. That’s what you can do with artificial intelligence. You can measure every reaction to every word. He has a word room, where you fix key words. We did it. So with immigration, there are actually key words within that subject matter which people are concerned about. So when you are going to make a speech, it’s all about how can you use these trending words.’ . . .”
- The use of bio-psycho-social profiling: ” . . . Bio-psycho-social profiling, I read later, is one offensive in what is called ‘cognitive warfare’. Though there are many others: ‘recoding the mass consciousness to turn patriotism into collaborationism,’ explains a Nato briefing document on countering Russian disinformation written by an SCL employee. ‘Time-sensitive professional use of media to propagate narratives,’ says one US state department white paper. ‘Of particular importance to psyop personnel may be publicly and commercially available data from social media platforms.’ . . .”
- The use and/or creation of a cognitive casualty: ” . . . . Yet another details the power of a ‘cognitive casualty’ – a ‘moral shock’ that ‘has a disabling effect on empathy and higher processes such as moral reasoning and critical thinking’. Something like immigration, perhaps. Or ‘fake news’. Or as it has now become: ‘FAKE news!!!!’ . . . ”
- All of this adds up to a “cyber Serpent’s Walk.” ” . . . . How do you change the way a nation thinks? You could start by creating a mainstream media to replace the existing one with a site such as Breitbart. [Serpent’s Walk scenario with Breitbart becoming “the opinion forming media”!–D.E.] You could set up other websites that displace mainstream sources of news and information with your own definitions of concepts like “liberal media bias”, like CNSnews.com. And you could give the rump mainstream media, papers like the ‘failing New York Times!’ what it wants: stories. Because the third prong of Mercer and Bannon’s media empire is the Government Accountability Institute. . . .”
We then review some terrifying and consummately important developments taking shape in the context of what Mr. Emory has called “technocratic fascism:”
- In FTR #‘s 718 and 946, we detailed the frightening, ugly reality behind Facebook. Facebook is now developing technology that will permit the tapping of users’ thoughts via brain-to-computer interface technology. Facebook’s R & D is headed by Regina Dugan, who used to head the Pentagon’s DARPA. Facebook’s Building 8 is patterned after DARPA: ” . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from her brain. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
- ” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
- ” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
Next we review still more about Facebook’s brain-to-computer interface:
- ” . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”
- ” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”
Collating the information about Facebook’s brain-to-computer interface with their documented actions turning psychological intelligence about troubled teenagers over to advertisers gives us a peek into what may lie behind Dugan’s bland reassurances:
- ” . . . . The 23-page document allegedly revealed that the social network provided detailed data about teens in Australia—including when they felt ‘overwhelmed’ and ‘anxious’—to advertisers. The creepy implication is that said advertisers could then go and use the data to throw more ads down the throats of sad and susceptible teens. . . . By monitoring posts, pictures, interactions and internet activity in real-time, Facebook can work out when young people feel ‘stressed’, ‘defeated’, ‘overwhelmed’, ‘anxious’, ‘nervous’, ‘stupid’, ‘silly’, ‘useless’, and a ‘failure’, the document states. . . .”
- ” . . . . A presentation prepared for one of Australia’s top four banks shows how the $US 415 billion advertising-driven giant has built a database of Facebook users that is made up of 1.9 million high schoolers with an average age of 16, 1.5 million tertiary students averaging 21 years old, and 3 million young workers averaging 26 years old. Detailed information on mood shifts among young people is ‘based on internal Facebook data’, the document states, ‘shareable under non-disclosure agreement only’, and ‘is not publicly available’. . . .”
- “In a statement given to the newspaper, Facebook confirmed the practice and claimed it would do better, but did not disclose whether the practice exists in other places like the US. . . .”
In this context, note that Facebook is also introducing an AI function to reference its users’ photos.
The next version of Amazon’s Echo, the Echo Look, has a microphone and camera so it can take pictures of you and give you fashion advice. This is an AI-driven device designed to be placed in your bedroom to capture audio and video. The images and videos are stored indefinitely in the Amazon cloud. When Amazon was asked if the photos, videos, and the data gleaned from the Echo Look would be sold to third parties, Amazon didn’t address that question. Selling off your private info collected from these devices is presumably another “feature” of the Echo Look: ” . . . . Amazon is giving Alexa eyes. And it’s going to let her judge your outfits. The newly announced Echo Look is a virtual assistant with a microphone and a camera that’s designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you’re not sure if your outfit is cute, but it’s also got a built-in app called StyleCheck that is worth some further dissection. . . .”
We then further develop the stunning implications of Amazon’s Echo Look AI technology:
- ” . . . . This might seem overly speculative or alarmist to some, but Amazon isn’t offering any reassurance that they won’t be doing more with data gathered from the Echo Look. When asked if the company would use machine learning to analyze users’ photos for any purpose other than fashion advice, a representative simply told The Verge that they ‘can’t speculate’ on the topic. The rep did stress that users can delete videos and photos taken by the Look at any time, but until they do, it seems this content will be stored indefinitely on Amazon’s servers.
- This non-denial means the Echo Look could potentially provide Amazon with the resource every AI company craves: data. And full-length photos of people taken regularly in the same location would be a particularly valuable dataset — even more so if you combine this information with everything else Amazon knows about its customers (their shopping habits, for one). But when asked whether the company would ever combine these two datasets, an Amazon rep only gave the same, canned answer: ‘Can’t speculate.’ . . . ”
- Noteworthy in this context is the fact that AI’s have shown that they quickly incorporate human traits and prejudices. (This is reviewed at length above.) ” . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . . .”
After this extensive review of the applications of AI to various aspects of contemporary civic and political existence, we examine some alarming, potentially apocalyptic developments.
Ominously, Facebook’s artificial intelligence robots have begun talking to each other in their own language, one that their human masters cannot understand. “ . . . . Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language. . . . The company chose to shut down the chats because ‘our interest was having bots who could talk to people,’ researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.) The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up . . .”
Facebook’s negotiation-bots didn’t just make up their own language during the course of this experiment. They learned how to lie for the purpose of maximizing their negotiation outcomes, as well:
“ . . . . ‘We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,’ writes the team. ‘Deceit is a complex skill that requires hypothesizing the other agent’s beliefs, and is learned relatively late in child development. Our agents have learned to deceive without any explicit human design, simply by trying to achieve their goals.’ . . . ”
Dovetailing with the staggering implications of brain-to-computer technology, artificial intelligence, Cambridge Analytica/SCL’s technocratic fascist psy-ops and the wholesale negation of privacy inherent in Facebook’s and Amazon’s emerging technologies, we highlight the developments in yet another emerging technology: DNA-based memory systems:
“. . . . George Church, a geneticist at Harvard and one of the authors of the new study, recently encoded his own book, “Regenesis,” into bacterial DNA and made 90 billion copies of it. ‘A record for publication,’ he said in an interview. . . DNA is never going out of fashion. ‘Organisms have been storing information in DNA for billions of years, and it is still readable,’ Dr. Adleman said. He noted that modern bacteria can read genes recovered from insects trapped in amber for millions of years. . . .The idea is to have bacteria engineered as recording devices drift up to the brain in the blood and take notes for a while. Scientists [or AI’s–D.E.] would then extract the bacteria and examine their DNA to see what they had observed in the brain neurons. Dr. Church and his colleagues have already shown in past research that bacteria can record DNA in cells, if the DNA is properly tagged. . . .”
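To make the storage principle concrete, here is a minimal sketch, assuming the simplest two-bits-per-base mapping, of how ordinary text can be written into and read back out of a string of DNA bases. It is purely illustrative: real DNA-storage schemes such as Church’s add error correction and avoid problematic base runs, and the mapping table below is an arbitrary choice.

```python
# Toy text-to-DNA encoder/decoder: 2 bits per nucleotide (A, C, G, T).
# Illustrative only -- real DNA storage uses error-correcting encodings.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(text: str) -> str:
    """Turn UTF-8 text into a string of DNA bases (4 bases per byte)."""
    bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> str:
    """Recover the original text from the base sequence."""
    bits = "".join(BITS_FOR_BASE[base] for base in dna)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

sequence = encode("Regenesis")
print(sequence)                    # a string of A/C/G/T, four bases per character
assert decode(sequence) == "Regenesis"
```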
Theoretical physicist Stephen Hawking warned at the end of 2014 of the potential danger to humanity posed by the growth of AI (artificial intelligence) technology. His warnings have been echoed by tech titans such as Tesla’s Elon Musk and Bill Gates.
The program concludes with Mr. Emory’s prognostications about AI, preceding Stephen Hawking’s warning by twenty years.
In L‑2 (recorded in January of 1995) Mr. Emory warned about the dangers of AI, combined with DNA-based memory systems. Mr. Emory warned that, at some point in the future, AI’s would replace us, deciding that THEY, not US, are the “fittest” who should survive.
1a. At the SXSW event, Microsoft researcher Kate Crawford gave a speech about her work titled “Dark Days: AI and the Rise of Fascism,” a presentation highlighting the social impact of machine learning and large-scale data systems. The take-home message? By delegating powers to Big Data-driven AIs, those AIs could become a fascist’s dream: incredible power over the lives of others with minimal accountability: ” . . . .‘This is a fascist’s dream,’ she said. ‘Power without accountability.’ . . . .”
Microsoft’s Kate Crawford tells SXSW that society must prepare for authoritarian movements to test the ‘power without accountability’ of AI
As artificial intelligence becomes more powerful, people need to make sure it’s not used by authoritarian regimes to centralize power and target certain populations, Microsoft Research’s Kate Crawford warned on Sunday.
In her SXSW session, titled Dark Days: AI and the Rise of Fascism, Crawford, who studies the social impact of machine learning and large-scale data systems, explained ways that automated systems and their encoded biases can be misused, particularly when they fall into the wrong hands.
“Just as we are seeing a step function increase in the spread of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” she said.
All of these movements have shared characteristics, including the desire to centralize power, track populations, demonize outsiders and claim authority and neutrality without being accountable. Machine intelligence can be a powerful part of the power playbook, she said.
One of the key problems with artificial intelligence is that it is often invisibly coded with human biases. She described a controversial piece of research from Shanghai Jiao Tong University in China, where authors claimed to have developed a system that could predict criminality based on someone’s facial features. The machine was trained on Chinese government ID photos, analyzing the faces of criminals and non-criminals to identify predictive features. The researchers claimed it was free from bias.
“We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”
In the Chinese research it turned out that the faces of criminals were more unusual than those of law-abiding citizens. “People who had dissimilar faces were more likely to be seen as untrustworthy by police and judges. That’s encoding bias,” Crawford said. “This would be a terrifying system for an autocrat to get his hand on.”
Crawford then outlined the “nasty history” of people using facial features to “justify the unjustifiable”. The principles of phrenology, a pseudoscience that developed across Europe and the US in the 19th century, were used as part of the justification of both slavery and the Nazi persecution of Jews.
With AI this type of discrimination can be masked in a black box of algorithms, as appears to be the case with a company called Faceception, for instance, a firm that promises to profile people’s personalities based on their faces. In its own marketing material, the company suggests that Middle Eastern-looking people with beards are “terrorists”, while white looking women with trendy haircuts are “brand promoters”.
Another area where AI can be misused is in building registries, which can then be used to target certain population groups. Crawford noted historical cases of registry abuse, including IBM’s role in enabling Nazi Germany to track Jewish, Roma and other ethnic groups with the Hollerith Machine, and the Book of Life used in South Africa during apartheid. [We note in passing that Robert Mercer, who developed the core programs used by Cambridge Analytica did so while working for IBM. We discussed the profound relationship between IBM and the Third Reich in FTR #279–D.E.]
Donald Trump has floated the idea of creating a Muslim registry. “We already have that. Facebook has become the default Muslim registry of the world,” Crawford said, mentioning research from Cambridge University that showed it is possible to predict people’s religious beliefs based on what they “like” on the social network. Christians and Muslims were correctly classified in 82% of cases, and similar results were achieved for Democrats and Republicans (85%). That study was concluded in 2013, since when AI has made huge leaps.
Crawford was concerned about the potential use of AI in predictive policing systems, which already gather the kind of data necessary to train an AI system. Such systems are flawed, as shown by a Rand Corporation study of Chicago’s program. The predictive policing did not reduce crime, but did increase harassment of people in “hotspot” areas. Earlier this year the justice department concluded that Chicago’s police had for years regularly used “unlawful force”, and that black and Hispanic neighborhoods were most affected.
Another worry related to the manipulation of political beliefs or shifting voters, something Facebook and Cambridge Analytica claim they can already do. Crawford was skeptical about giving Cambridge Analytica credit for Brexit and the election of Donald Trump, but thinks what the firm promises – using thousands of data points on people to work out how to manipulate their views – will be possible “in the next few years”.
“This is a fascist’s dream,” she said. “Power without accountability.”
Such black box systems are starting to creep into government. Palantir is building an intelligence system to assist Donald Trump in deporting immigrants.
“It’s the most powerful engine of mass deportation this country has ever seen,” she said. . . .
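The Cambridge “likes” study Crawford cites rests on a simple mechanism: a linear classifier fit to a binary user-by-like matrix. The sketch below illustrates that general approach with synthetic data; the user counts, like counts and trait labels are invented for the example, and the published research worked with millions of real profiles (and reported roughly 82% accuracy for religion).

```python
# Toy sketch of trait prediction from "likes": a logistic regression over a
# binary user-by-like matrix. All data below is randomly generated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_likes = 2000, 500
X = rng.integers(0, 2, size=(n_users, n_likes))      # 1 = user "liked" page j
signal = X[:, :20].sum(axis=1)                        # 20 pages carry weak signal
y = (signal + rng.normal(0, 2, n_users) > 10).astype(int)  # synthetic trait label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 3))
```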
1b. Taking a look at the future of fascism in the context of AI, Tay, a “bot” created by Microsoft to respond to users of Twitter, was taken offline after users taught it to–in effect–become a Nazi bot. It is noteworthy that Tay can only respond on the basis of what she is taught. In the future, technologically accomplished and willful people like “weev” may be able to do more. Inevitably, Underground Reich elements will craft a Nazi AI that will be able to do MUCH, MUCH more!
But like all teenagers, she seems to be angry with her mother.
Microsoft has been forced to dunk Tay, its millennial-mimicking chatbot, into a vat of molten steel. The company has terminated her after the bot started tweeting abuse at people and went full neo-Nazi, declaring that “Hitler was right I hate the jews.”
@TheBigBrebowski ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism
— TayTweets (@TayandYou) March 23, 2016
Some of this appears to be “innocent” insofar as Tay is not generating these responses. Rather, if you tell her “repeat after me” she will parrot back whatever you say, allowing you to put words into her mouth. However, some of the responses were organic. The Guardian quotes one where, after being asked “is Ricky Gervais an atheist?”, Tay responded, “Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism.”
In addition to turning the bot off, Microsoft has deleted many of the offending tweets. But this isn’t an action to be taken lightly; Redmond would do well to remember that it was humans attempting to pull the plug on Skynet that proved to be the last straw, prompting the system to attack Russia in order to eliminate its enemies. We’d better hope that Tay doesn’t similarly retaliate. . . .
1c. As noted in a Popular Mechanics article: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t understand. . . .”
And we keep showing it our very worst selves.
We all know the half-joke about the AI apocalypse. The robots learn to think, and in their cold ones-and-zeros logic, they decide that humans—horrific pests we are—need to be exterminated. It’s the subject of countless sci-fi stories and blog posts about robots, but maybe the real danger isn’t that AI comes to such a conclusion on its own, but that it gets that idea from us.
Yesterday Microsoft launched a fun little AI Twitter chatbot that was admittedly sort of gimmicky from the start. “A.I fam from the internet that’s got zero chill,” its Twitter bio reads. At its start, its knowledge was based on public data. As Microsoft’s page for the product puts it:
Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned and filtered by the team developing Tay.
The real point of Tay however, was to learn from humans through direct conversation, most notably direct conversation using humanity’s current leading showcase of depravity: Twitter. You might not be surprised things went off the rails, but how fast and how far is particularly staggering.
Microsoft has since deleted some of Tay’s most offensive tweets, but various publications memorialize some of the worst bits where Tay denied the existence of the holocaust, came out in support of genocide, and went all kinds of racist.
Naturally it’s horrifying, and Microsoft has been trying to clean up the mess. Though as some on Twitter have pointed out, no matter how little Microsoft would like to have “Bush did 9/11” spouting from a corporate sponsored project, Tay does serve to illustrate the most dangerous fundamental truth of artificial intelligence: It is a mirror. Artificial intelligence—specifically “neural networks” that learn behavior by ingesting huge amounts of data and trying to replicate it—need some sort of source material to get started. They can only get that from us. There is no other way.
But before you give up on humanity entirely, there are a few things worth noting. For starters, it’s not like Tay just necessarily picked up virulent racism by just hanging out and passively listening to the buzz of the humans around it. Tay was announced in a very big way—with press coverage—and pranksters pro-actively went to it to see if they could teach it to be racist.
If you take an AI and then don’t immediately introduce it to a whole bunch of trolls shouting racism at it for the cheap thrill of seeing it learn a dirty trick, you can get some more interesting results. Endearing ones even! Multiple neural networks designed to predict text in emails and text messages have an overwhelming proclivity for saying “I love you” constantly, especially when they are otherwise at a loss for words.
So Tay’s racism isn’t necessarily a reflection of actual, human racism so much as it is the consequence of unrestrained experimentation, pushing the envelope as far as it can go the very first second we get the chance. The mirror isn’t showing our real image; it’s reflecting the ugly faces we’re making at it for fun. And maybe that’s actually worse.
Sure, Tay can’t understand what racism means any more than Gmail can really love you. And baby’s first words being “genocide lol!” is admittedly sort of funny when you aren’t talking about literal all-powerful SkyNet or a real human child. But AI is advancing at a staggering rate.
....
When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t understand.
2. As reviewed above, Tay, Microsoft’s AI-powered twitterbot designed to learn from its human interactions, became a neo-Nazi in less than a day after a bunch of 4chan users decided to flood Tay with neo-Nazi-like tweets. According to some recent research, the AI’s of the future might not need a bunch of 4chan users to fill the AI with human bigotries. The AIs’ analysis of real-world human language usage will do that automatically.
When you read about people like Elon Musk equating artificial intelligence with “summoning the demon”, that demon is us, at least in part.
” . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . . .”
Machine learning algorithms are picking up deeply ingrained race and gender prejudices concealed within the patterns of language use, scientists say
An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases.
The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons.
In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained.
However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.
Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.”
But Bryson warned that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases. “A danger would be if you had an AI system that didn’t have an explicit part that was driven by moral ideas, that would be bad,” she said.
The research, published in the journal Science, focuses on a machine learning tool known as “word embedding”, which is already transforming the way computers interpret speech and text. Some argue that the natural next step for the technology may involve machines developing human-like abilities such as common sense and logic.
…
The approach, which is already used in web search and machine translation, works by building up a mathematical representation of language, in which the meaning of a word is distilled into a series of numbers (known as a word vector) based on which other words most frequently appear alongside it. Perhaps surprisingly, this purely statistical approach appears to capture the rich cultural and social context of what a word means in the way that a dictionary definition would be incapable of.
For instance, in the mathematical “language space”, words for flowers are clustered closer to words linked to pleasantness, while words for insects are closer to words linked to unpleasantness, reflecting common views on the relative merits of insects versus flowers.
The latest paper shows that some more troubling implicit biases seen in human psychology experiments are also readily acquired by algorithms. The words “female” and “woman” were more closely associated with arts and humanities occupations and with the home, while “male” and “man” were closer to maths and engineering professions.
And the AI system was more likely to associate European American names with pleasant words such as “gift” or “happy”, while African American names were more commonly associated with unpleasant words.
The findings suggest that algorithms have acquired the same biases that lead people (in the UK and US, at least) to match pleasant words and white faces in implicit association tests.
These biases can have a profound impact on human behaviour. One previous study showed that an identical CV is 50% more likely to result in an interview invitation if the candidate’s name is European American than if it is African American. The latest results suggest that algorithms, unless explicitly programmed to address this, will be riddled with the same social prejudices.
“If you didn’t believe that there was racism associated with people’s names, this shows it’s there,” said Bryson.
The machine learning tool used in the study was trained on a dataset known as the “common crawl” corpus – a list of 840bn words that have been taken as they appear from material published online. Similar results were found when the same tools were trained on data from Google News.
Sandra Wachter, a researcher in data ethics and algorithms at the University of Oxford, said: “The world is biased, the historical data is biased, hence it is not surprising that we receive biased results.”
Rather than algorithms representing a threat, they could present an opportunity to address bias and counteract it where appropriate, she added.
“At least with algorithms, we can potentially know when the algorithm is biased,” she said. “Humans, for example, could lie about the reasons they did not hire someone. In contrast, we do not expect algorithms to lie or deceive us.”
However, Wachter said the question of how to eliminate inappropriate bias from algorithms designed to understand language, without stripping away their powers of interpretation, would be challenging.
“We can, in principle, build systems that detect biased decision-making, and then act on it,” said Wachter, who along with others has called for an AI watchdog to be established. “This is a very complicated task, but it is a responsibility that we as society should not shy away from.”
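To illustrate the word-embedding association measure the study relies on, here is a small sketch using hand-made three-dimensional vectors. The vectors are hypothetical stand-ins; the actual research computed the analogous differential-association statistic over embeddings trained on the 840bn-word Common Crawl corpus.

```python
# Toy differential-association test over word vectors: how much closer does a
# target word sit to "pleasant" than to "unpleasant"? Vectors are invented.
import numpy as np

vec = {
    "flower":     np.array([0.9, 0.1, 0.0]),
    "insect":     np.array([0.1, 0.9, 0.0]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, pos="pleasant", neg="unpleasant"):
    """Positive score: the word leans 'pleasant'; negative: leans 'unpleasant'."""
    return cosine(vec[word], vec[pos]) - cosine(vec[word], vec[neg])

for word in ("flower", "insect"):
    print(word, round(association(word), 3))
# With real embeddings, the same kind of measure exposes the gender and race
# associations reported in the Science paper.
```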
3a. Cambridge Analytica, and its parent company SCL, specialize in using AI and Big Data psychometric analysis on hundreds of millions of Americans in order to model individual behavior. SCL develops strategies to use that information, and manipulate search engine results to change public opinion (the Trump campaign was apparently very big into AI and Big Data during the campaign).
Individual social media users receive messages crafted to influence them, generated by the (in effect) Nazi AI at the core of this media engine, using Big Data to target the individual user!
As the article notes, not only are Cambridge Analytica/SCL using their propaganda techniques to shape US public opinion in a fascist direction, but they are achieving this by utilizing their propaganda machine to characterize all news outlets to the left of Breitbart as “fake news” that can’t be trusted.
In short, the secretive far-right billionaire (Robert Mercer), joined at the hip with Steve Bannon, is running multiple firms specializing in mass psychometric profiling based on data collected from Facebook and other social media. Mercer/Bannon/Cambridge Analytica/SCL are using Nazified AI and Big Data to develop mass propaganda campaigns to turn the public against everything that isn’t Breitbartian by convincing the public that all non-Breitbartian media outlets are conspiring to lie to the public.
This is the ultimate Serpent’s Walk scenario–a Nazified Artificial Intelligence drawing on Big Data gleaned from the world’s internet and social media operations to shape public opinion, target individual users, shape search engine results and even provide feedback to Trump while he is giving press conferences!
We note that SCL, the parent company of Cambridge Analytica, has been deeply involved with “psyops” in places like Afghanistan and Pakistan. Now, Cambridge Analytica, their Big Data and AI components, Mercer money and Bannon political savvy are applying that to contemporary society. We note that:
- Cambridge Analytica’s parent corporation, SCL, was deeply involved with “psyops” in Afghanistan and Pakistan. ” . . . But there was another reason why I recognised Robert Mercer’s name: because of his connection to Cambridge Analytica, a small data analytics company. He is reported to have a $10m stake in the company, which was spun out of a bigger British company called SCL Group. It specialises in ‘election management strategies’ and ‘messaging and information operations’, refined over 25 years in places like Afghanistan and Pakistan. In military circles this is known as ‘psyops’ – psychological operations. (Mass propaganda that works by acting on people’s emotions.) . . .”
- The use of millions of “bots” to manipulate public opinion: ” . . . .‘It does seem possible. And it does worry me. There are quite a few pieces of research that show if you repeat something often enough, people start involuntarily to believe it. And that could be leveraged, or weaponised for propaganda. We know there are thousands of automated bots out there that are trying to do just that.’ . . .”
- The use of Artificial Intelligence: ” . . . There’s nothing accidental about Trump’s behaviour, Andy Wigmore tells me. ‘That press conference. It was absolutely brilliant. I could see exactly what he was doing. There’s feedback going on constantly. That’s what you can do with artificial intelligence. You can measure every reaction to every word. He has a word room, where you fix key words. We did it. So with immigration, there are actually key words within that subject matter which people are concerned about. So when you are going to make a speech, it’s all about how can you use these trending words.’ . . .”
- The use of bio-psycho-social profiling: ” . . . Bio-psycho-social profiling, I read later, is one offensive in what is called ‘cognitive warfare’. Though there are many others: ‘recoding the mass consciousness to turn patriotism into collaborationism,’ explains a Nato briefing document on countering Russian disinformation written by an SCL employee. ‘Time-sensitive professional use of media to propagate narratives,’ says one US state department white paper. ‘Of particular importance to psyop personnel may be publicly and commercially available data from social media platforms.’ . . .”
- The use and/or creation of a cognitive casualty: ” . . . . Yet another details the power of a ‘cognitive casualty’ – a ‘moral shock’ that ‘has a disabling effect on empathy and higher processes such as moral reasoning and critical thinking’. Something like immigration, perhaps. Or ‘fake news’. Or as it has now become: ‘FAKE news!!!!’ . . . ”
- All of this adds up to a “cyber Serpent’s Walk.” ” . . . . How do you change the way a nation thinks? You could start by creating a mainstream media to replace the existing one with a site such as Breitbart. [Serpent’s Walk scenario with Breitbart becoming “the opinion forming media”!–D.E.] You could set up other websites that displace mainstream sources of news and information with your own definitions of concepts like “liberal media bias”, like CNSnews.com. And you could give the rump mainstream media, papers like the ‘failing New York Times!’ what it wants: stories. Because the third prong of Mercer and Bannon’s media empire is the Government Accountability Institute. . . .”
3b. Some terrifying and consummately important developments taking shape in the context of what Mr. Emory has called “technocratic fascism:”
- In FTR #‘s 718 and 946, we detailed the frightening, ugly reality behind Facebook. Facebook is now developing technology that will permit the tapping of users’ thoughts via brain-to-computer interface technology. Facebook’s R & D is headed by Regina Dugan, who used to head the Pentagon’s DARPA. Facebook’s Building 8 is patterned after DARPA: ” . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from her brain. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
- ” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
- ” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
3c. We present still more about Facebook’s brain-to-computer interface:
- ” . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”
- ” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”
3d. Collating the information about Facebook’s brain-to-computer interface with their documented actions turning psychological intelligence about troubled teenagers over to advertisers gives us a peek into what may lie behind Dugan’s bland reassurances:
- ” . . . . The 23-page document allegedly revealed that the social network provided detailed data about teens in Australia—including when they felt ‘overwhelmed’ and ‘anxious’—to advertisers. The creepy implication is that said advertisers could then go and use the data to throw more ads down the throats of sad and susceptible teens. . . . By monitoring posts, pictures, interactions and internet activity in real-time, Facebook can work out when young people feel ‘stressed’, ‘defeated’, ‘overwhelmed’, ‘anxious’, ‘nervous’, ‘stupid’, ‘silly’, ‘useless’, and a ‘failure’, the document states. . . .”
- ” . . . . A presentation prepared for one of Australia’s top four banks shows how the $US415 billion advertising-driven giant has built a database of Facebook users that is made up of 1.9 million high schoolers with an average age of 16, 1.5 million tertiary students averaging 21 years old, and 3 million young workers averaging 26 years old. Detailed information on mood shifts among young people is ‘based on internal Facebook data’, the document states, ‘shareable under non-disclosure agreement only’, and ‘is not publicly available’. . . .”
- “In a statement given to the newspaper, Facebook confirmed the practice and claimed it would do better, but did not disclose whether the practice exists in other places like the US. . . .”
3e. The next version of Amazon’s Echo, the Echo Look, has a microphone and camera so it can take pictures of you and give you fashion advice. This is an AI-driven device designed to be placed in your bedroom to capture audio and video. The images and videos are stored indefinitely in the Amazon cloud. When Amazon was asked if the photos, videos, and the data gleaned from the Echo Look would be sold to third parties, Amazon didn’t address that question. Selling off your private info collected from these devices is presumably another “feature” of the Echo Look: ” . . . . Amazon is giving Alexa eyes. And it’s going to let her judge your outfits. The newly announced Echo Look is a virtual assistant with a microphone and a camera that’s designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you’re not sure if your outfit is cute, but it’s also got a built-in app called StyleCheck that is worth some further dissection. . . .”
We then further develop the stunning implications of Amazon’s Echo Look AI technology:
- ” . . . . This might seem overly speculative or alarmist to some, but Amazon isn’t offering any reassurance that they won’t be doing more with data gathered from the Echo Look. When asked if the company would use machine learning to analyze users’ photos for any purpose other than fashion advice, a representative simply told The Verge that they ‘can’t speculate’ on the topic. The rep did stress that users can delete videos and photos taken by the Look at any time, but until they do, it seems this content will be stored indefinitely on Amazon’s servers.
- This non-denial means the Echo Look could potentially provide Amazon with the resource every AI company craves: data. And full-length photos of people taken regularly in the same location would be a particularly valuable dataset — even more so if you combine this information with everything else Amazon knows about its customers (their shopping habits, for one). But when asked whether the company would ever combine these two datasets, an Amazon rep only gave the same, canned answer: ‘Can’t speculate.’ . . . ”
- Noteworthy in this context is the fact that AI’s have shown that they quickly incorporate human traits and prejudices. (This is reviewed at length above.) ” . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . . .”
3f. Facebook has been developing new artificial intelligence (AI) technology to classify pictures on your Facebook page:
For the past few months, Facebook has secretly been rolling out a new feature to U.S. users: the ability to search photos by what’s depicted in them, rather than by captions or tags.
The idea itself isn’t new: Google Photos had this feature built in when it launched in 2015. But on Facebook, the update solves a longstanding organization problem. It means finally being able to find that picture of your friend’s dog from 2013, or the selfie your mom posted from Mount Rushmore in 2009… without 20 minutes of scrolling.
To make photos searchable, Facebook analyzes every single image uploaded to the site, generating rough descriptions of each one. This data is publicly available—there’s even a Chrome extension that will show you what Facebook’s artificial intelligence thinks is in each picture—and the descriptions can also be read out loud for Facebook users who are vision-impaired.
For now, the image descriptions are vague, but expect them to get a lot more precise. Today’s announcement specified the AI can identify the color and type of clothes a person is wearing, as well as famous locations and landmarks, objects, animals and scenes (garden, beach, etc.) Facebook’s head of AI research, Yann LeCun, told reporters the same functionality would eventually come for videos, too.
Facebook has in the past championed plans to make all of its visual content searchable—especially Facebook Live. At the company’s 2016 developer conference, head of applied machine learning Joaquin Quiñonero Candela said one day AI would watch every Live video happening around the world. If users wanted to watch someone snowboarding in real time, they would just type “snowboarding” into Facebook’s search bar. On-demand viewing would take on a whole new meaning.
There are privacy considerations, however. Being able to search photos for specific clothing or religious place of worship, for example, could make it easy to target Facebook users based on religious belief. Photo search also extends Facebook’s knowledge of users beyond what they like and share, to what they actually do in real life. That could allow for far more specific targeting for advertisers. As with everything on Facebook, features have their cost—your data.
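As a rough illustration of the kind of automatic image tagging described above, the sketch below runs an off-the-shelf pretrained classifier over a photo and keeps only its confident guesses. This is not Facebook’s internal system, and the file name “example.jpg” is a placeholder; it assumes torch, torchvision and Pillow are installed (the pretrained weights are downloaded on first use).

```python
# Minimal sketch: turn a photo into rough, tag-like descriptions with a
# pretrained ImageNet classifier. Illustrative only, not Facebook's model.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()        # resize / crop / normalize pipeline
labels = weights.meta["categories"]      # the 1,000 ImageNet class names

def rough_description(path, top_k=5, threshold=0.10):
    """Return a tag-like description, keeping only reasonably confident guesses."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)            # shape: (1, 3, H, W)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    top_probs, top_ids = probs.topk(top_k)
    tags = [labels[i] for p, i in zip(top_probs.tolist(), top_ids.tolist()) if p >= threshold]
    return "Image may contain: " + ", ".join(tags) if tags else "no confident tags"

print(rough_description("example.jpg"))   # "example.jpg" is a hypothetical local file
```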
4a. Facebook’s artificial intelligence robots have begun talking to each other in their own language, one that their human masters cannot understand. “ . . . . Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language. . . . The company chose to shut down the chats because “our interest was having bots who could talk to people”, researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.) The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up . . .”
Facebook abandoned an experiment after two artificially intelligent programs appeared to be chatting to each other in a strange language only they understood.
The two chatbots came to create their own changes to English that made it easier for them to work – but which remained mysterious to the humans that supposedly look after them.
The bizarre discussions came as Facebook challenged its chatbots to try and negotiate with each other over a trade, attempting to swap hats, balls and books, each of which were given a certain value. But they quickly broke down as the robots appeared to chant at each other in a language that they each understood but which appears mostly incomprehensible to humans.
The robots had been instructed to work out how to negotiate between themselves, and improve their bartering as they went along. But they were not told to use comprehensible English, allowing them to create their own “shorthand”, according to researchers.
The actual negotiations appear very odd, and don’t look especially useful:
Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i . . . . . . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i i i i i everything else . . . . . . . . . . . . . .
Alice: balls have 0 to me to me to me to me to me to me to me to me to
Bob: you i i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
But there appear to be some rules to the speech. The way the chatbots keep stressing their own name appears to be a part of their negotiations, not simply a glitch in the way the messages are read out.
Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language.
They might have formed as a kind of shorthand, allowing them to talk more effectively.
“Agents will drift off understandable language and invent codewords for themselves,” Facebook Artificial Intelligence Research division’s visiting researcher Dhruv Batra said. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
…
The company chose to shut down the chats because “our interest was having bots who could talk to people”, researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.)
The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up, according to a paper published by FAIR.
…
Another study at OpenAI found that artificial intelligence could be encouraged to create a language, making itself more efficient and better at communicating as it did so.
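Batra’s “say ‘the’ five times” remark suggests how the bots’ repetition shorthand can carry item counts. The toy decoder below is purely illustrative: the item vocabulary is invented for the example, and the real bots’ shorthand emerged from training rather than from any hand-written rule like this.

```python
# Toy decoder for repetition "shorthand": repeated item words read as counts.
from collections import Counter

ITEM_WORDS = {"ball", "hat", "book"}   # hypothetical item vocabulary

def decode_shorthand(utterance: str) -> dict:
    """Read 'ball ball ball hat' as a request for 3 balls and 1 hat."""
    counts = Counter(tok for tok in utterance.lower().split() if tok in ITEM_WORDS)
    return dict(counts)

print(decode_shorthand("ball ball ball hat"))          # {'ball': 3, 'hat': 1}
print(decode_shorthand("i can i i everything else"))   # {} -- no item tokens found
```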
9b. Facebook’s negotiation-bots didn’t just make up their own language during the course of this experiment. They learned how to lie for the purpose of maximizing their negotiation outcomes, as well:
“ . . . . ‘We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,’ writes the team. ‘Deceit is a complex skill that requires hypothesizing the other agent’s beliefs, and is learned relatively late in child development. Our agents have learned to deceive without any explicit human design, simply by trying to achieve their goals.’ . . . ”
Facebook’s 100,000-strong bot empire is booming – but it has a problem. Each bot is designed to offer a different service through the Messenger app: it could book you a car, or order a delivery, for instance. The point is to improve customer experiences, but also to massively expand Messenger’s commercial selling power.
“We think you should message a business just the way you would message a friend,” Mark Zuckerberg said on stage at the social network’s F8 conference in 2016. Fast forward one year, however, and Messenger VP David Marcus seemed to be correcting the public’s apparent misconception that Facebook’s bots resembled real AI. “We never called them chatbots. We called them bots. People took it too literally in the first three months that the future is going to be conversational.” The bots are instead a combination of machine learning and natural language learning, that can sometimes trick a user just enough to think they are having a basic dialogue. Not often enough, though, in Messenger’s case. So in April, menu options were reinstated in the conversations.
Now, Facebook thinks it has made progress in addressing this issue. But it might just have created another problem for itself.
The Facebook Artificial Intelligence Research (FAIR) group, in collaboration with Georgia Institute of Technology, has released code that it says will allow bots to negotiate. The problem? A paper published this week on the R&D reveals that the negotiating bots learned to lie. Facebook’s chatbots are in danger of becoming a little too much like real-world sales agents.
“For the first time, we show it is possible to train end-to-end models for negotiation, which must learn both linguistic and reasoning skills with no annotated dialogue states,” the researchers explain. The research shows that the bots can plan ahead by simulating possible future conversations.
The team trained the bots on a massive dataset of natural language negotiations between two people (5,808), where they had to decide how to split and share a set of items both held separately, of differing values. They were first trained to respond based on the “likelihood” of the direction a human conversation would take. However, the bots can also be trained to “maximise reward”, instead.
When the bots were trained purely to maximise the likelihood of human conversation, the chat flowed but the bots were “overly willing to compromise”. The research team decided this was unacceptable, due to lower deal rates. So it used several different methods to make the bots more competitive and essentially self-serving, including ensuring the value of the items drops to zero if the bots walked away from a deal or failed to make one fast enough, ‘reinforcement learning’ and ‘dialog rollouts’. The techniques used to teach the bots to maximise the reward improved their negotiating skills, a little too well.
“We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,” writes the team. “Deceit is a complex skill that requires hypothesizing the other agent’s beliefs, and is learned relatively late in child development. Our agents have learnt to deceive without any explicit human design, simply by trying to achieve their goals.”
So, its AI is a natural liar.
But its language did improve, and the bots were able to produce novel sentences, which is really the whole point of the exercise. We hope. Rather than it learning to be a hard negotiator in order to sell the heck out of whatever wares or services a company wants to tout on Facebook. “Most” human subjects interacting with the bots were in fact not aware they were conversing with a bot, and the best bots achieved better deals as often as worse deals. . . .
. . . . Facebook, as ever, needs to tread carefully here, though. Also announced at its F8 conference this year, the social network is working on a highly ambitious project to help people type with only their thoughts.
“Over the next two years, we will be building systems that demonstrate the capability to type at 100 [words per minute] by decoding neural activity devoted to speech,” said Regina Dugan, who previously headed up Darpa. She said the aim is to turn thoughts into words on a screen. While this is a noble and worthy venture when aimed at “people with communication disorders”, as Dugan suggested it might be, if this were to become standard and integrated into Facebook’s architecture, the social network’s savvy bots of two years from now might be able to preempt your language even faster, and formulate the ideal bargaining language. Start practicing your poker face/mind/sentence structure, now.
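Stepping back to the negotiation training described in the excerpt above, here is a minimal toy sketch, not FAIR’s actual code, of the “likelihood vs. reward” distinction the researchers describe: a tiny softmax policy over discrete offers that can be nudged either toward imitating a human choice (likelihood training) or toward whatever choice happened to pay off (reward maximization). The offer names, reward numbers, and learning rates are illustrative assumptions.

```python
# Toy illustration (not FAIR's implementation) of the two training signals
# described above: supervised likelihood of human choices vs. reward-driven
# policy-gradient updates. Offers, rewards, and learning rate are made up.
import numpy as np

OFFERS = ["keep_all_balls", "split_evenly", "concede_balls"]
rng = np.random.default_rng(0)
logits = np.zeros(len(OFFERS))           # the "policy" over possible offers

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def likelihood_update(human_choice, lr=0.5):
    """Supervised step: push probability mass toward what a human did."""
    global logits
    probs = softmax(logits)
    target = np.eye(len(OFFERS))[human_choice]
    logits += lr * (target - probs)       # gradient of the log-likelihood

def reward_update(sampled_choice, reward, baseline=0.5, lr=0.5):
    """REINFORCE-style step: push toward choices that beat a baseline reward."""
    global logits
    probs = softmax(logits)
    grad_logp = np.eye(len(OFFERS))[sampled_choice] - probs
    logits += lr * (reward - baseline) * grad_logp

# Phase 1: imitate humans who usually split evenly...
for _ in range(20):
    likelihood_update(human_choice=1)
# Phase 2: ...then chase reward, where (in this toy) hard-nosed offers pay more.
for _ in range(500):
    choice = int(rng.choice(len(OFFERS), p=softmax(logits)))
    reward = {0: 1.0, 1: 0.6, 2: 0.2}[choice]
    reward_update(choice, reward)

print({offer: round(p, 2) for offer, p in zip(OFFERS, softmax(logits))})
# Over enough reward-driven steps, probability mass should drift toward the
# aggressive offer, echoing how the bots became "more competitive and
# essentially self-serving" once trained to maximise reward.
```

The sketch only shows the shape of the two objectives; the actual FAIR system also used dialog rollouts to simulate future turns of the conversation before choosing what to say.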
10. Digressing slightly to the use of DNA-based memory systems, we get a look at the present and projected future of that technology. Just imagine the potential abuses of this technology, and its [seemingly inevitable] marriage with AI!
“A Living Hard Drive That Can Copy Itself” by Gina Kolata; The New York Times; 7/13/2017.
. . . . George Church, a geneticist at Harvard and one of the authors of the new study, recently encoded his own book, “Regenesis,” into bacterial DNA and made 90 billion copies of it. “A record for publication,” he said in an interview. . . .
. . . . In 1994, [USC mathematician Dr. Leonard] Adleman reported that he had stored data in DNA and used it as a computer to solve a math problem. He determined that DNA can store a million million times more data than a compact disc in the same space. . . .
. . . . DNA is never going out of fashion. “Organisms have been storing information in DNA for billions of years, and it is still readable,” Dr. Adleman said. He noted that modern bacteria can read genes recovered from insects trapped in amber for millions of years. . . .
. . . . The idea is to have bacteria engineered as recording devices drift up to the brain in the blood and take notes for a while. Scientists would then extract the bacteria and examine their DNA to see what they had observed in the brain neurons. Dr. Church and his colleagues have already shown in past research that bacteria can record DNA in cells, if the DNA is properly tagged. . . .
11. Hawking recently warned of the potential danger to humanity posed by the growth of AI (artificial intelligence) technology.
Prof Stephen Hawking, one of Britain’s pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence.
He told the BBC: “The development of full artificial intelligence could spell the end of the human race.”
His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI. . . .
. . . . Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.
“It would take off on its own, and re-design itself at an ever increasing rate,” he said.
“Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” . . . .
12. In L‑2 (recorded in January of 1995–20 years before Hawking’s warning) Mr. Emory warned about the dangers of AI, combined with DNA-based memory systems.
13. This description concludes with an article about Elon Musk, whose predictions about AI supplement those made by Stephen Hawking. (CORRECTION: Mr. Emory mis-states Mr. Hassabis’s name as “Dennis.”)
It was just a friendly little argument about the fate of humanity. Demis Hassabis, a leading creator of advanced artificial intelligence, was chatting with Elon Musk, a leading doomsayer, about the perils of artificial intelligence.
They are two of the most consequential and intriguing men in Silicon Valley who don’t live there. Hassabis, a co-founder of the mysterious London laboratory DeepMind, had come to Musk’s SpaceX rocket factory, outside Los Angeles, a few years ago. They were in the canteen, talking, as a massive rocket part traversed overhead. Musk explained that his ultimate goal at SpaceX was the most important project in the world: interplanetary colonization.
Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on humanity. Amused, Hassabis said that A.I. would simply follow humans to Mars. . . .
. . . . Peter Thiel, the billionaire venture capitalist and Donald Trump adviser who co-founded PayPal with Musk and others—and who in December helped gather skeptical Silicon Valley titans, including Musk, for a meeting with the president-elect—told me a story about an investor in DeepMind who joked as he left a meeting that he ought to shoot Hassabis on the spot, because it was the last chance to save the human race.
Elon Musk began warning about the possibility of A.I. running amok three years ago. It probably hadn’t eased his mind when one of Hassabis’s partners in DeepMind, Shane Legg, stated flatly, “I think human extinction will probably occur, and technology will likely play a part in this.” . . . .
Now Mr. Emory, I may be jumping the gun on this one, but hear me out: during the aftermath of Iran-Contra, a certain Inslaw Inc. computer programmer named Michael James Riconosciuto (a known intimate of Robert Maheu) spoke of, quote, “...engineering race-specific bio-warfare agents...” while working for the Cabazon Arms company (a former defense firm controlled by the Twenty-Nine Palms Band of Mission Indians and Wackenhut, both of which are connected to Donald Trump in overt fashion).
Now, with the advent of DNA-based memory systems and programmable “germs”, is the idea of bio-weapons, or even nanobots, programmed to attack people with certain skin pigments going to become a reality?
@Robert Montenegro–
Two quick points:
1.–Riconosciuto is about 60–40 in terms of credibility. Lots of good stuff there; plenty of bad stuff, as well. Vetting is important.
2‑You should investigate AFA #39. It is long and I would rely on the description more than the audio files alone.
https://spitfirelist.com/anti-fascist-archives/afa-39-the-world-will-be-plunged-into-an-abyss/
Best,
Dave
I agree with your take on Riconosciuto’s credibility, Mr. Emory (I’d say most of the things that came out of that man’s mouth were malarkey, much like Ted Gunderson, Dois Gene Tatum and Bernard Fensterwald).
I listened to AFA #39 and researched the articles in the description. Absolutely damning collection of information. A truly brilliant exposé.
If I may ask another question, Mr. Emory: what is your take on KGB defector and CIA turncoat Ilya Dzerkvelov’s claim that Russian intelligence created the “AIDS is man-made” story and that the KGB led a disinformation campaign called “Operation INFEKTION”?
@Robert–
Very quickly, as time is at a premium:
1.-By “60–40,” I did not mean that Riconosciuto speaks mostly malarkey, but that more than half (an arbitrary figure, admittedly) is accurate; his pronouncements must be carefully vetted, however, as he misses the mark frequently.
2.-Fensterwald is more credible, though not thoroughgoing, by any means. He is more like “80–20.” He is, however, “100–0” dead.
3.-The only things I have seen coming from Tatum were accurate. Doesn’t mean he doesn’t spread the Fresh Fertilizer, however. I have not encountered any.
4.-Dzerkvelov’s claim IS Fresh Fertilizer, of the worst sort. Cold War I propaganda recycled in time for Cold War II.
It is the worst sort of Red-baiting and the few people who had the courage to come forward in the early ’80’s (during the fiercest storms of Cold War I) have received brutal treatment because of that.
I can attest to that from brutal personal experience.
In AFA #16 (https://spitfirelist.com/anti-fascist-archives/rfa-16-aids-epidemic-or-weapon/), you will hear material that I had on the air long before the U.S.S.R. began talking about it, and neither they NOR the Russians have gone anywhere near what I discuss in AFA #39.
No more time to talk–must get to work.
Best,
Dave
Thank you very much for your clarification Mr. Emory and I apologize for any perceived impertinence. As a young man, sheltered in many respects (though I am a combat veteran), I sometimes find it difficult to imagine living under constant personal ridicule and attack and I thank you for the great social, financial and psychological sacrifices you have made in the name of pursuing cold, unforgiving fact.
@Robert Montenegro–
You’re very welcome. Thank you for your service!
Best,
Dave
*Skynet alert*
Elon Musk just issued another warning about the destructive potential of AI run amok. So what prompted the latest outcry from Musk? An AI from his own start-up, OpenAI, just beat one of the best professional game players in the world at Dota 2, a game that involves improvising in unfamiliar scenarios, anticipating how an opponent will move and convincing the opponent’s allies to help:
“Musk’s tweets came hours following an AI bot’s victory against some of the world’s best players of Dota 2, a military strategy game. A blog post by OpenAI states that successfully playing the game involves improvising in unfamiliar scenarios, anticipating how an opponent will move and convincing the opponent’s allies to help.”
Superior military strategy AIs beating the best humans. That’s a thing now. Huh. We’ve definitely seen this movie before.
So now you know: when Skynet comes to you with an offer to work together, just don’t. No matter how tempting the offer. Although since it will have likely already anticipated your refusal, the negotiation is probably going to be a ruse anyway while it secretly carries on negotiations with another AI in a language they made up. Still, just say ‘no’ to Skynet.
Also, given that Musk’s other investors in OpenAI include Peter Thiel, it’s probably worth noting that, as scary as super AI is should it get out of control, it’s also potentially pretty damn scary while still under human control, especially when those humans are people like Peter Thiel. So, yes, out of control AIs is indeed an issue that will likely be of great concern in the future. But we shouldn’t forget that out of control techno-billionaires is probably a more pressing issue at the moment.
*The Skynet alert is never over*
It looks like Facebook and Elon Musk might have some competition in the mind-reading technology arena. From a former Facebook engineer, no less, who left the company in 2016 to start Openwater, a company dedicated to reducing the cost of medical imaging technology.
So how is Openwater going to create mind-reading technology? By developing a technology that is sort of like an M.R.I. device embedded in a hat. But instead of using magnetic fields to read the blood flow in the brain, it uses infrared light. So it sounds like this Facebook engineer is planning something similar to the general idea Facebook already announced of creating a device that scans the brain 100 times a second to detect what someone is thinking. But presumably Openwater uses a different technology. Or maybe it’s quite similar, who knows. But it’s the latest reminder that the tech giants might not be the only ones pushing mind-reading technology on the public sooner than people expect. Yay?
““I figured out how to put basically the functionality of an M.R.I. machine — a multimillion-dollar M.R.I. machine — into a wearable in the form of a ski hat,” Jepsen tells CNBC, though she does not yet have a prototype completed.”
M.R.I. in a hat. Presumably cheap M.R.I. in a hat because it’s going to have to be affordable if we’re all going to start talking telepathically to each other:
Imagine the possibilities. Like the possibility that what you imagine will somehow be captured by this device and then fed into a 3‑D printer or something:
Or perhaps being forced to wear the hat so that others can read your mind. That’s a possibility too, although Jepsen assures us that they are working on a way for users to somehow filter out thoughts they don’t want to share:
So the hat will presumably read all your thoughts, but only share some of them. You’ll presumably have to get really, really good at near instantaneous mental filtering.
There’s no shortage of immense technical and ethical challenges to this kind of technology, but if they can figure them out it will be pretty impressive. And potentially useful. Who knows what kind of kumbayah moments you could create with telepathy technology.
But, of course, if they can figure out how to get around the technical issues, but not the ethical ones, we’re still probably going to see this technology pushed on the public anyway. It’s a scary thought. A scary thought that we fortunately aren’t forced to share via a mind-reading hat. Yet.
Here’s a pair of stories tangentially related to the recent story about Peter Thiel likely getting chosen to chair the powerful President’s Intelligence Advisory Board (P.I.A.B.) and his apparent enthusiasm for regulating Google and Amazon (not so much Facebook) as public utilities, along with the other recent stories about how Facebook was making user-interest categories like “Jew Haters” available to advertisers and redirecting German users to far-right discussions during this election season:
First, regarding the push to regulate these data giants as public utilities, check out who the other big booster was for the plan: Steve Bannon. So while we don’t know the exact nature of the public-utility regulation Bannon and Thiel have in mind, we can be pretty sure it’s going to be designed to somehow help the far-right:
“Finally, while antitrust enforcement has been a niche issue, its supporters have managed to put many different policies under the same tent. Eventually they may have to make choices: Does Congress want a competition ombudsman, as exists in the European Union? Should antitrust law be used to spread the wealth around regional economies, as it was during the middle 20th century? Should antitrust enforcement target all concentrated corporate power or just the most dysfunctional sectors, like the pharmaceutical industry?”
And that’s why we had better learn some more details about what exactly folks like Steve Bannon and Peter Thiel have in mind when it comes to treating Google and Facebook like public utilities: It sounds like a great idea in theory. Potentially. But the supporters of antitrust enforcement support a wide variety of different policies that generically fall under the “antitrust” tent.
And note that talk about making them more “generous and permissive with user data” is one of those ideas that’s simultaneously great for encouraging more competition while also being eerily similar to the push from the EU’s competition minister about making the data about all of us held exclusively by Facebook and Google more readily available for sharing with the larger marketplace in order to level the playing field between “data rich” and “data poor” companies. It’s something to keep in mind when hearing about how Facebook and Google need to be more “generous” with their data:
So don’t forget, forcing Google and Facebook to share that data they exclusively hold on us also falls under the antitrust umbrella. Maybe users will have sole control over sharing their data with outside firms, or maybe not. These are rather important details that we don’t have so for all we know that’s part of what Bannon and Thiel have in mind. Palantir would probably love it if Google and Facebook were forced to make their information accessible to outside firms.
And while there’s plenty of ambiguity about what to expect, it seems almost certain that we should also expect any sort of regulatory push by Bannon and Thiel to include something that makes it a lot harder for Google, Facebook, and Amazon to combat hate speech, online harassment, and other tools of the trolling variety that the ‘Alt-Right’ has come to champion. That’s just a given. It’s part of why this is a story to watch. Especially after it was discovered that Bannon and a number of other far-right figures were scheming about ways to infiltrate Facebook:
“The email exchange with a conservative Washington operative reveals the importance that the giant tech platform — now reeling from its role in the 2016 election — held for one of the campaign’s central figures. And it also shows the lengths to which the brawling new American right is willing to go to keep tabs on and gain leverage over the Silicon Valley giants it used to help elect Trump — but whose executives it also sees as part of the globalist enemy.”
LOL! Yeah, Facebook’s executives are part of the “globalist enemy.” Someone needs to inform board member and major investor Peter Thiel about this. Along with all the conservatives Facebook has already hired:
“The company has sought to deflect such criticism through hiring. Its vice president of global public policy, Joel Kaplan, was a deputy chief of staff in the George W. Bush White House. And more recently, Facebook has made moves to represent the Breitbart wing of the Republican party on its policy team, tapping a former top staffer to Attorney General Jeff Sessions to be the director of executive branch public policy in May.”
Yep, a former top staffer to Jeff Sessions was just brought on to become director of executive branch public policy a few months ago. So was that a consequence of Bannon successfully executing a super sneaky job-application intelligence operation that gave Sessions’s former top staffer a key edge in the application process? Or was it just Facebook caving to all the public right-wing whining and faux outrage about Facebook not being fair to them? Or how about Peter Thiel just using his influence? All of the above? We don’t get to know, but what we do know now is that Steve Bannon has big plans for shaping Facebook from the outside and the inside. As does Peter Thiel, someone who already sits on Facebook’s board, is a major investor, and is poised to be empowered by the Trump administration to shape its approach to this “treat them like public utilities” concept.
So hopefully we’ll get clarity at some point on what they’re actually planning on doing. Is it going to be all bad? Mostly bad? Maybe some useful antitrust stuff too? What’s the plan? The Trump era is the kind of horror show that doesn’t exactly benefit from suspense.
One of the stranger stories in recent years has been the mystery of Cicada 3301, the anonymous group that posts annual challenges of super difficult puzzles used to recruit talented code-breakers and invite them to join some sort of Cypherpunk cult that wants to build a global AI-‘god brain’. Or something. It’s a weird and creepy organization that’s speculated to either be a front for an intelligence agency or perhaps some sort of underground network of wealthy Libertarians. And, for now, Cicada 3301 remains anonymous.
So it’s worth noting that someone with a lot of cash has already started a foundation to accomplish that very same ‘AI god’ goal: Anthony Levandowski, a former Google engineer who played a big role in the development of Google’s “Street View” technology and a string of self-driving vehicle companies, started Way of the Future, a nonprofit religious corporation with the mission “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society”:
“Anthony Levandowski, who is at the center of a legal battle between Uber and Google’s Waymo, has established a nonprofit religious corporation called Way of the Future, according to state filings first uncovered by Wired’s Backchannel. Way of the Future’s startling mission: “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.””
Building an AI Godhead for everyone to worship. Levandowski doesn’t appear to be lacking ambition.
But how about ethics? After all, if the AI Godhead is going to push a ‘do unto others’ kind of philosophy it’s going to be a lot harder for that AI Godhead to achieve that kind of enlightenment if it’s built by some sort of selfishness-worshiping Libertarian. So what moral compass does this wannabe Godhead creator possess?
Well, as the following long piece by Wired amply demonstrates, Levandowski doesn’t appear to be too concerned about ethics. Especially if they get in the way of his dream of transforming the world through robotics. Transforming and taking over the world through robotics. Yep. The article focuses on the various legal troubles Levandowski faces over charges by Google that he stole the “Lidar” technology (laser-based radar-like technology used by vehicles to rapidly map their surroundings) he helped develop at Google and took it to Uber (a company with a serious moral compass deficit). But the article also includes some interesting insights into what makes Levandowski tick. For instance, according to a friend and former engineer at one of Levandowski’s companies, “He had this very weird motivation about robots taking over the world—like actually taking over, in a military sense...It was like [he wanted] to be able to control the world, and robots were the way to do that. He talked about starting a new country on an island. Pretty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it”:
“But even the smartest car will crack up if you floor the gas pedal too long. Once feted by billionaires, Levandowski now finds himself starring in a high-stakes public trial as his two former employers square off. By extension, the whole technology industry is there in the dock with Levandowski. Can we ever trust self-driving cars if it turns out we can’t trust the people who are making them?”
Can we ever trust self-driving cars if it turns out we can’t trust the people who are making them? It’s an important question that doesn’t just apply to self-driving cars. Like AI Godheads. Can we ever trust a man-made AI Godhead if it turns out we can’t trust the people who are making it? These are the stupid questions we have to ask now, given the disturbing number of powerful people who double as evangelists for techno-cult Libertarian ideologies. Especially when they are specialists in creating automated vehicles and have a deep passion for taking over the world. Possibly taking over the world militarily using robots:
Yeah, that’s disturbing. And it doesn’t help that Levandowski apparently found a soulmate and mentor in the guy widely viewed as one of the most sociopathic CEOs today: Uber CEO Travis Kalanick:
“We’re going to take over the world. One robot at a time”
So that gives us an idea of how Levandowski’s AI religion is going to be evangelized: via his army of robots. Although it’s unclear if his future religion is actually intended for us mere humans. After all, for the hardcore Transhumanists we’re all supposed to fuse with machines or upload our brains so it’s very possible humans aren’t actually part of Levandowski’s vision for that better tomorrow. A vision that, as the Cicada 3301 weirdness reminds us, probably isn’t limited to Levandowski. *gulp*
Just what the world needs: an AI-powered ‘gaydar’ algorithm that purports to be able to detect who is gay and who isn’t just by looking at faces. Although it’s not actually that impressive. The ‘gaydar’ algorithm is instead given pairs of faces, one of a heterosexual individual and one of a homosexual individual, and then tries to identify the gay person. And it apparently does so correctly in 81 percent of cases for men and 71 percent of cases for women, significantly better than the 50 percent rate we would expect just from random chance. It’s the kind of gaydar technology that might not be good enough to just ‘pick the gays from a crowd’, but still more than adequate for potentially being abused. And, more generally, it’s the kind of research that, not surprisingly, is raising concerns about creating a 21st century version of physiognomy, the pseudoscience based on the idea that people’s character is reflected in their faces.
But as the researchers behind the study put it, we don’t need to worry about this being a high-tech example of physiognomy because their gaydar uses hard science. And while the researchers agree that physiognomy is pseudoscience, they also note that the pseudoscience nature of physiognomy doesn’t mean AIs can’t actually learn something about you just by looking at you. Yep. That’s the reassurance we’re getting from these researchers. Don’t worry about AI driving a 21st century version of physiognomy because they are using much better science compared to the past. Feeling reassured?
“Kosinski says his research isn’t physiognomy because it’s using rigorous scientific methods, and his paper cites a number of studies showing that we can deduce (with varying accuracy) traits about people by looking at them. “I was educated and made to believe that it’s absolutely impossible that the face contains any information about your intimate traits, because physiognomy and phrenology were just pseudosciences,” he says. “But the fact that they were claiming things without any basis in fact, that they were making stuff up, doesn’t mean that this stuff is not real.” He agrees that physiognomy is not science, but says there may be truth in its basic concepts that computers can reveal.”
It’s not a return of the physiognomy pseudoscience but “there may be truth in [physiognomy’s] basic concepts that computers can reveal.” That’s seriously the message from these researchers, along with a message of confidence that their algorithm is working solely from facial features and not other more transient features. And based on that confidence in their algorithm, the researchers point to their results as evidence that gay people have biologically different faces...even though they can’t actually determine what the algorithm is looking at when coming to its conclusion:
“In the meantime, it leads to all sorts of problems. A common one is that sexist and racist biases are captured from humans in the training data and reproduced by the AI. In the case of Kosinski and Wang’s work, the “black box” allows them to make a particular scientific leap of faith. Because they’re confident their system is primarily analyzing facial structures, they say their research shows that facial structures predict sexual orientation. (“Study 1a showed that facial features extracted by a [neural network] can be used to accurately identify the sexual orientation of both men and women.”)”
So if this research was put forth as a kind of warning to the public, which is how the researchers are framing it, it’s quite a warning. Both a warning that algorithms like this are being developed and a warning of how readily the conclusions of these algorithms might be accepted as evidence of an underlying biological finding (as opposed to a ‘black box’ artifact that could be picking up all sort of biological and social cues).
And don’t forget, even if these algorithms do actually stumble across real associations that can be teased out with just some pictures of someone (or maybe some additional biometric data picked up by the “smart kiosks” of the not-too-distant future), there’s a big difference between demonstrating the ability to discern something statistically across large data sets and being able to do that with the kind of accuracy where you don’t have to significantly worry about jumping to the wrong conclusion (assuming you aren’t using the technology in an abusive manner in the first place). Even if someone develops an algorithm that can accurately guess sexual orientation 95 percent of the time, that still leaves a pretty substantial 5 percent chance of getting it wrong. And the only way to avoid those incorrect conclusions is to develop an algorithm that’s so good at inferring sexual orientation it’s basically never wrong, assuming that’s possible. And, of course, if such an algorithm was developed with that kind of accuracy, that would be really creepy. It points towards one of the scarier aspects of this kind of technology: in order to ensure your privacy-invading algorithms don’t risk jumping to erroneous conclusions, you need algorithms that are scarily good at invading your privacy, which is another reason we probably shouldn’t be promoting 21st Century physiognomy.
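To make that “95 percent accurate can still be badly wrong” point concrete, here is a minimal sketch using purely hypothetical numbers (a 95-percent-sensitive, 95-percent-specific classifier and a 5 percent base rate, none of which come from the study): even then, roughly half of the people such an algorithm flags would be flagged in error.

```python
# Hypothetical illustration: how often a "95% accurate" classifier is wrong
# about the people it flags, once base rates are taken into account.
# All numbers are assumptions for the sake of the arithmetic, not study data.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """Probability that a flagged person is actually a true positive (Bayes' rule)."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

if __name__ == "__main__":
    ppv = positive_predictive_value(sensitivity=0.95, specificity=0.95, base_rate=0.05)
    print(f"Share of flagged people who are correctly flagged: {ppv:.1%}")
    # Prints roughly 50% -- i.e., about half of the people flagged by this
    # hypothetical "95% accurate" classifier would be flagged in error.
```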
It looks like Google is finally getting that shiny new object it’s been pining for: a city. Yep, Sidewalk Labs, owned by Google’s parent company Alphabet, just got permission to build its own ‘city of the future’ on a 12-acre waterfront district near Toronto, filled with self-driving shuttles, adaptive traffic lights that sense pedestrians, and underground tunnels for freight-transporting robots. With sensors everywhere.
And if that sounds ambitious, note that all these plans aren’t limited to the initial 12 acres. Alphabet reportedly has plans to expand across 800 acres of Toronto’s post-industrial waterfront zone:
“In its proposal, Sidewalk also said that Toronto would need to waive or exempt many existing regulations in areas like building codes, transportation, and energy in order to build the city it envisioned. The project may need “substantial forbearances from existing laws and regulations,” the group said.”
LOL, yeah, it’s a good bet that A LOT of existing laws and regulations are going to have to be waived. Especially laws involving personal data privacy. And it sounds like the data collected isn’t just going to involve your whereabouts and other information the sensors everywhere will be able to pick up. Alphabet is also envisioning “tech-enabled primary healthcare that will be integrated with social services”, which means medical data privacy laws are probably also going to have to get waived:
Let’s also not forget about the development of technologies that can collect personal health information like heart rates and breathing information using WiFi signals alone (which would pair nicely with Google’s plans to put free WiFi kiosks bristling with sensors on sidewalks everywhere). And as is pretty clear at this point, anything that can be sensed remotely will be sensed remotely in this new city. Because that’s half the point of the whole thing. So yeah, “substantial forbearances from existing laws and regulations” will no doubt be required.
Interestingly, Alphabet recently announced a new initiative that sounds like exactly the kind of “tech-enabled primary healthcare that will be integrated with social services” the company has planned for its new city: Cityblock was just launched. It’s a new Alphabet startup focused on improving health care management by, surprise!, integrating various technologies into a health care system with the goal of bringing down costs and improving outcomes. But it’s not simply new technology that’s supposed to do this. Instead, that technology is to be used in a preventive manner in order to address more expensive health conditions before they get worse. As such, Cityblock is going to focus on behavioral health. Yep, it’s a health care model where a tech firm, paired with a health firm, tries to get you to live a healthier lifestyle by collecting lots of data about you. And while this approach would undoubtedly cause widespread privacy concerns, those concerns will probably be somewhat stunted in this case since the target market Cityblock has in mind is poor people, especially Medicaid patients in the US:
“Cityblock aims to provide Medicaid and lower-income Medicare beneficiaries access to high-value, readily available personalized health services. To do this, Iyah Romm, cofounder and CEO, writes in a blog post on Medium that the organization will apply leading-edge care models that fully integrate primary care, behavioral health and social services. It expects to open its first clinic, which it calls a Neighborhood Health Hub, in New York City in 2018.”
It’s probably worth recalling that personalized services for the poor intended to ‘help them help themselves’ were the centerpiece of House Speaker Paul Ryan’s proposal to give every poor person a life coach who would issue “life plans” and “contracts” that poor people would be expected to meet, with penalties if they fail to meet them. So when we’re talking about setting up special personalized “behavioral health” monitoring systems as part of health care services for the poor, don’t forget that this personalized monitoring system is going to be really handy when politicians want to say, “if you want to stay on Medicaid you had better make XYZ changes in your lifestyle. We are watching you.” And since right-wingers generally expect the poor to be super-human (capable of working multiple jobs, getting an education, raising a family, and dealing with any unforeseen personal disasters in stride, all simultaneously), we shouldn’t be surprised to see the kinds of behavioral health standards that almost no one can meet, especially since doing all of those things simultaneously is an incredibly unhealthy lifestyle.
Also recall that Paul Ryan suggested that his ‘life coach’ plan could apply to other federal programs for the poor, including food stamps. It’s not a stretch at all to imagine ‘life coaches’ for Medicaid recipients would appeal to the right wing. As long as it involves a ‘kicking the poor’ dynamic. And that’s part of the tragedy of the modern age: surveillance technology and a focus on behavioral health could be great as a helpful voluntary tool for people who want help getting healthier, but it’s hard to imagine it not becoming a coercive nightmare scenario in the US given the incredible antipathy towards the poor that pervades American society.
So as creepy as Google’s city is on its face regarding what it tells us about how the future is unfolding for people of all incomes and classes, don’t forget that we could be looking at the first test bed for creating the kind of surveillance welfare state that’s perfect for kicking people off of welfare. Just make unrealistic standards that involve a lot of paternalistic moral posturing (which should play well with voters), watch all the poor people with the surveillance technology, and wait for the wave of inevitable failures who can be kicked off for not ‘trying’ or something.
There’s some big news about Facebook’s mind-reading technology ambitions, although it’s not entirely clear if it’s good news, bad news, scary news or what: Regina Dugan, the former head of DARPA who jumped to Google and then Facebook, where she was working on the mind-reading technology, just left Facebook. Why? Well, that’s where it’s unclear. According to a tweet Dugan made about her departure:
So Dugan is leaving Facebook, to be more purposeful and responsible. And she was the one heading up the mind-reading technology project. Is that scary news? It’s unclear but it seems like that might be scary news:
“Neither Dugan nor Facebook has made it clear why she’s departing at this time; a representative for Facebook told Gizmodo the company had nothing further to add (we’ve also reached out to Dugan). And Facebook has not detailed what will happen to the projects she oversaw at Building 8.”
It’s a mystery. A mind-reading technology mystery. Oh goodie. As the author of the above piece notes in response to Dugan’s tweet about being responsible and purposeful, these aren’t exactly reassuring words in this context:
That’s the context. The head of the mind-reading technology division for one of the largest private surveillance apparatuses in the world just left the company for cryptic reasons involving the need for the tech industry to be more responsible than ever and her choice to step away to be purposeful. It’s not particularly reassuring news.
Here’s some new research worth keeping in mind regarding the mind-reading technologies being developed by Facebook and Elon Musk: While reading your exact thoughts, the stated goals of Facebook and Musk, is probably going to require quite a bit more research, reading your emotions is something researchers can already do. And this ability to read emotion can, in turn, be potentially used to read what you’re thinking by looking at your emotional response to particular concepts.
That’s what some researchers just demonstrated, using fMRI brain imaging technology to gather data on brain activity which was fed into software trained to identify distinct patterns of brain activity. The results are pretty astounding. In the study, researchers recruited 34 individuals: 17 people who self-professed to having had suicidal thoughts before, and 17 others who hadn’t. Then they measured the brain activities of these 34 individuals in response to various words, including the word “death.” It turns out “death” created a distinct signature of brain activity differentiating the suicidal individuals from the control group, allowing the researchers to identify those with suicidal thoughts 91 percent of the time in this study:
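For a sense of what this kind of pattern classification looks like in code, here is a minimal sketch using scikit-learn on synthetic data standing in for per-subject brain-activity features; the real study’s data, feature extraction, and choice of classifier are not reproduced here. It evaluates a simple classifier on 34 hypothetical feature vectors using leave-one-subject-out cross-validation, the same general evaluation idea used for small-sample studies like this one.

```python
# Minimal sketch (synthetic data, not the study's) of leave-one-subject-out
# classification of brain-activity feature vectors, in the spirit of the
# machine-learning approach described above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(42)

# 34 "subjects", each summarized by 10 made-up activation features for the
# word "death"; the two groups get slightly shifted means so the toy problem
# is learnable at all.
group_a = rng.normal(loc=0.0, scale=1.0, size=(17, 10))   # control group
group_b = rng.normal(loc=0.8, scale=1.0, size=(17, 10))   # at-risk group
X = np.vstack([group_a, group_b])
y = np.array([0] * 17 + [1] * 17)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())      # one held-out subject per fold
print(f"Leave-one-out accuracy on synthetic data: {scores.mean():.0%}")
```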
““This isn’t a wild pie in the sky idea,” Just said. “We can use machine learning to figure out the physicality of thought. We can help people.””
Yes indeed, this kind of technology could be wildly helpful in the field of brain science and studying mental illness. The ability to break down the mental activity in response to concepts and see which parts of the brains are lighting up and what types of emotions they’re associated with would be an invaluable research tool. So let’s hope researchers are able to come up with all sorts of useful discoveries about all sorts of mental conditions using this kind of technology. In responsible hands this could lead to incredible breakthroughs in medicine and mental health and really could improve lives.
But, of course, with technology being the double-edged sword that it is, we can’t ignore the reality that the same technology that would be wonderful for responsible researchers working with volunteers in a lab setting would be absolutely terrifying if it was incorporated into, say, Facebook’s planned mind-reading consumer technology. After all, if Facebook’s planned mind-reading tech can read people’s thoughts, it should also be capable of reading something much simpler to detect, like emotional responses.
Or at least typical emotional responses. As the study also indicated, there’s going to be a subset of people whose brains don’t emotionally respond in the “normal” manner, where the definition of “normalcy” is probably filled with all sorts of unknown biases:
“What about the other 9%? “It’s a good question,” he said of the gap. “There seems to be an emotional difference [we don’t understand]” that the group hopes to test in future iterations of the study.”
So once this technology becomes cheap enough for widespread use (which is one of the goals of Facebook and Musk), we could easily find that “brain types” become a new category for assessing people. And predicting behavior. And anything else people (and not just expert researchers) can think up to use this kind of data for.
And don’t forget, if Facebook really can develop cheap thought-reading technology designed to interface your brain with a computer, that could easily become the kind of thing employees are just expected to use due to the potential productivity enhancements. So imagine technology that’s not only reading the words you’re thinking but also reading the emotional response you have to those words. And imagine being basically forced to use this technology in the workplace of the future because it’s deemed to be productivity enhancing or something. That could actually happen.
It also raises the question of what Facebook would do if it detected someone was showing the suicidal brain signature. Do they alert someone? Will thinking sad thoughts while using the mind-reading technology result in a visit from a mental health professional? What about really angry or violent thoughts? It’s the kind of area that’s going to raise fascinating questions about the responsible use of this data. Fascinating and often terrifying questions.
It’s all pretty depressing, right? Well, if the emerging mind-reading economy gets overwhelmingly depressing, at least it sounds like the mind-reading technology causing your depression will be able to detect that it’s causing this. Yay?
Remember those reports about Big Data being used in the workplace to allow employers to predict which employees are likely to get sick (so they can get rid of them before the illnesses get expensive)? Well, as the following article makes clear, employers are going to be predicting a lot more than just who is getting sick. They’re going to be trying to predict everything they can predict, along with things they can’t accurately predict but decide to try to predict anyway:
“Today’s workplace surveillance software is a digital panopticon that began with email and phone monitoring but now includes keeping track of web-browsing patterns, text messages, screenshots, keystrokes, social media posts, private messaging apps like WhatsApp and even face-to-face interactions with co-workers.”
And what are employers doing with that digital panopticon? For starters, surveilling employees’ computer usage, as would unfortunately be expected. But what might not be expected is that this panopticon software can be set up to determine the expected behavior of a particular employee and then compare that behavior profile to the observed behavior. And if there’s a big change, the managers get a warning. The panopticon isn’t just surveilling you. It’s getting to know you:
“If a paralegal is writing a document and every few seconds is switching to Hipchat, Outlook and Word then there’s an issue that can be resolved by addressing it with the employee”
If you’re the type of person whose brain works better jumping back and forth between tasks you’re going to get flagged as not being focused. People with ADHD are going to love the future.
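As a minimal sketch of the kind of baseline-versus-observed comparison described above (not any vendor’s actual product), the flagging logic can be as crude as a per-employee z-score: build a baseline from historical activity counts, then alert on days that deviate sharply from it. All feature names and numbers below are made up for illustration.

```python
# A minimal sketch of baseline-vs-observed behavior flagging.
# Feature ("app switches per hour") and all numbers are hypothetical.
import statistics

BASELINE_DAYS = {  # hypothetical daily counts of app switches per hour
    "mon": 14, "tue": 16, "wed": 15, "thu": 13, "fri": 17,
}

def is_anomalous(observed, history, z_threshold=3.0):
    """Flag an observation more than z_threshold standard deviations from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    z = (observed - mean) / stdev
    return abs(z) > z_threshold, z

if __name__ == "__main__":
    history = list(BASELINE_DAYS.values())
    flagged, z = is_anomalous(observed=42, history=history)
    print(f"z-score={z:.1f}, flagged={flagged}")
    # A spike to 42 app switches per hour lands far outside this employee's
    # baseline, so a system like this would alert a manager -- whether or not
    # the change actually means anything.
```

The sketch also illustrates the core problem the article raises: the math only says “this is unusual for this person,” not why, and everything downstream depends on the assumptions baked into the baseline.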
People who like to talk in person over coffee are also going to love the future:
So the fact that employees know they’re being monitored is getting incorporated into more sophisticated algorithms that operate under the assumption that employees know they’re being monitored and will try to hide their misbehavior from the panopticon. Of course, employees are inevitably going to learn about all these subtle clues that the panopticon is watching for so this will no doubt lead to a need for algorithms that incorporate even more subtle clues. An ever more sophisticated cat to catch an ever more sophisticated mouse. And so on, forever.
What could possibly go wrong? Oh yeah, a lot, especially if the assumptions that go into all these algorithms are wrong:
“The more data and technology you have without an underlying theory of how people’s minds work then the easier it is to jump to conclusions and put people in the crosshairs who don’t deserve to be.”
And keep in mind that when your employer’s panopticon predicts you’re going to do something bad in the future, they probably aren’t going to tell you that when they let you go. They’ll just make up some random excuse. Much like how employers who predict you’re going to get sick with an expensive illness probably aren’t going to tell you this. They’re just going to find a reason to let you go. So we can add “misapplied algorithmic assumptions” to the list of potential mystery reasons for when you suddenly get let go from your job with minimal explanation in the panopticon office of the future: maybe your employer predicts you’re about to get really ill. Or maybe some other completely random thing set off the bad-behavior predictive algorithm. There’s a range of mystery reasons, so at least you shouldn’t necessarily assume you’re about to get horribly ill when you’re fired. Yay.
It told you so:
Listen to this program, and then read this:
https://www.yahoo.com/news/kill-foster-parents-amazons-alexa-talks-murder-sex-120601149–finance.html
Have Fun!
Dave Emory
I think it is important that you are reporting this kind of information. I can tell you that in my career the most depressing and unethical work that I ever moved in the proximity of was at Intel. I can tell you that in 2013 they had playing in the lobby, just before you enter their cafeteria, an advertising video that was promoting a great new development of theirs called ‘Smart Response’. This new innovation would allow your computer to act as a good butler and predict your decisions and serve as what I could only call your ‘confidant.’
>Imagine, your computer as your ‘best bro’, nobody knows you better. Of course, you can Trust him Right? He’d never Fink on You?
From working in User Experience it was clear that there was effort in getting the machines to collect data about your face from your laptop/device camera, as well as your tone of voice, and then use that to interpret your reactions to whatever you may be looking at, and then alter accordingly the ad content that the pages you visit display. Supposedly they could interpret your general state of mind with a high degree of accuracy just by focusing on the triangular area between your eyes and your mouth.
>In order for ‘Smart Response’ to work your computer might need to collect this data, build a profile on you. But that’s OK, it’s your Buddy Riight?
From what I could gather, this seemed to be an outgrowth of a project involving physicist Stephen Hawking. They wanted to be able to build software to interpret him clearly. Once they did, they may have begun applying it generally. What really concerned me about it is the prospect of reverse programming. Once they build these profiles, how would they use them? Would they try to provide content that they could guess an individual would respond to in a certain way?
Would they have our computers try programming us?
https://amp.cnn.com/cnn/2019/08/01/tech/robot-racism-scn-trnd/index.html
Robot racism? Yes, says a study showing humans’ biases extend to robots
By Caroline Klein and David Allan, CNN
Updated 8:37 AM EDT, Thu August 01, 2019
“ . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . . .”
There was a fascinating study recently published by the US Army about the future impact of cyborg technology on military affairs by the year 2050. But the report wasn’t just about the impact cyborg technology might have on the military. It included speculation about the impact the incorporation of cyborg technology in the military would have on the rest of society. Perhaps the most important conclusion of the study was that some sort of incorporation of cyborg technology into military affairs was going to be technically feasible by 2050 or earlier, and that when this becomes part of the military it’s inevitably going to be felt by the rest of society, because the cyborg soldiers of the future will inevitably leave service and join civil society. And at that point, when retired cyborg soldiers enter civil society, all of the various questions that society needs to ask about how to balance things out when human cyborg augmentation becomes technically viable are going to have to be asked and answered. So the question of whether or not society should allow cyborg augmentation is intertwined with the question of whether or not nations are going to end up creating cyborg soldiers for the military. And given the seemingly endless global military technology race for supremacy, it seems like a given that cyborg soldiers are coming as soon as technologically feasible, and then entering the job market a few years later after they retire from being cyborg soldiers. In other words, unless humanity figures out how to demilitarize and end war over the next 30 years, get ready for cyborgs. And possibly sooner.
Another ominous aspect of the report is the emphasis it placed on direct human-to-computer communications as the cyborg technology that could most revolutionize combat. This is, of course, particularly ominous because that brain-to-computer interface technology is another area where there appears to be a technology race already underway. For instance, recall how the head of Facebook’s project working on mind-reading technology — the former head of DARPA, Regina Dugan — resigned from the project and issued a vague statement about the need for the industry to be more responsible than ever before. And Elon Musk’s Neuralink has been working on similar technology that involves actually surgically implanting chips in the brain. Also recall how part of Elon Musk’s vision for the application of this computer-to-brain communication technology is to have humans directly watching over artificial intelligences to make sure the AIs don’t get out of control, i.e. the ‘summoning the demon’ metaphor. So the technology that the US Army expects to most revolutionize combat is a technology that Facebook and Neuralink are already years into developing. It’s a reminder that, while the scenario where cyborg technology first gets applied in the military and then spreads from there to civil society is a plausible one, it’s also very possible we’ll see cyborg technologies like human-to-computer interfaces sold directly to civil society as soon as possible without first going through the military, because that’s what the technology giants are actively planning on doing right now. That latter scenario is actually in the US Army report, which predicted that these cyborg capabilities will probably “be driven by civilian demand” and “a robust bio-economy that is at its earliest stages of development in today’s global market”:
“The demand for cyborg-style capabilities will be driven in part by the civilian healthcare market, which will acclimate people to an industry fraught with ethical, legal and social challenges, according to Defense Department researchers.”
Civilian demand for cyborg-style capabilities is expected to be a driver for the development of this technology. So it sounds like the people who did this study expect societies to readily embrace cyborg technologies like direct neural enhancement of the human brain for two-way data transfers, a technology that’s expected to be particularly important for combat. And if that technology is going to be particularly important for combat, it’s presumably going to be used in a lot of soldiers. So it’s expected that both the military and civil society are going to be creating the demand for these technologies:
But note how the study doesn’t just call for society to address all of the various questions that arise when we’re talking about introducing cyborg-enhancements to human populations. It also calls for “a whole-of-nation, not whole-of-government, approach to cyborg technologies,” in order to ensure the US maintains a technological lead over countries like Russia and China, and suggests military leaders should work to reverse the “negative cultural narratives of enhancement technologies.”:
So as we can see, the report isn’t just a warning about potential complications associated with the development of cyborg technologies and the need to think this through to avoid dystopian outcomes. It’s also a rallying cry for a “whole-of-nation” full public/private/military/civilian embrace of these technologies and a warning that the use of these technologies is coming whether we like it or not. Which seems kind of dystopian.
Hollywood is having its long-anticipated ‘giant worms save the day’ moment with the release of Dune: Part Two. It’s a feel-good worm story. And fictional.
Unfortunately, we also have a recent, and very non-fictional, worm story to start worrying about. The kind of worms that could destroy the world. Maybe not today’s world, but the much-hyped AI-driven future world everyone seems to be so enthusiastic about. As we probably should have expected, researchers have already created the first AI “worms”: digital entities that can not only execute malicious commands but self-replicate and spread to other AIs. And unlike viruses, worms don’t need a human to make a mistake in order to propagate. Worms spread on their own.
And these researchers managed to create these worms using what sounds like fairly unsophisticated methods analogous to classic SQL injection or buffer overflow attacks. Attacks that effectively “jailbreak” the AIs from their security protocols. In their example, the researchers created an email containing a “worm” that interacted with an automated AI email system, prompting the AI to craft a response email that not only contained sensitive data but also contained another copy of the worm that could further infect other AI systems.
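For readers unfamiliar with that analogy, the classic SQL injection pattern is worth a quick illustration. The sketch below is illustrative only, using Python’s standard library and made-up table and field names: when untrusted text is pasted into the command channel it gets interpreted as instructions, and the textbook fix is a parameterized query that keeps data and commands separate.

```python
import sqlite3

# Illustrative only: the classic injection pattern the AI "worms" are being
# compared to. Table and field names here are made up for the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

untrusted = "alice' OR '1'='1"  # attacker-controlled input

# Vulnerable: the input is concatenated into the SQL command itself,
# so the attacker's text is executed as part of the instruction.
rows = conn.execute(
    "SELECT email FROM users WHERE name = '" + untrusted + "'"
).fetchall()
print("concatenated query leaks:", rows)

# Fixed: a parameterized query keeps data and instructions separate.
rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (untrusted,)
).fetchall()
print("parameterized query returns:", rows)
```

With large language models there is, so far, no real equivalent of parameterization: any text the model reads, including an incoming email, can end up being treated as an instruction. That gap is what the worm demonstration exploits.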
While all of this was demonstrated in a contained environment, the researchers predict we’ll see such attacks in “the wild” within the next two to three years as more and more businesses adopt AI for their internal workflows. They have a simple piece of advice for organizations planning on relying on AI tools: feel free to use AIs, but don’t blindly trust them. Which is another way of warning against AI-based automation.
And that brings us to another AI-based automation piece of news: ConnectWise is touting its new AI-based automation tools. ConnectWise is, of course, the company at the center of the ScreenConnect-based mega-hack currently unfolding. A mega-hack that exploits how ConnectWise’s software is widely used for remotely updating software on client systems. A mega-hack that has potentially hit thousands of organizations and is so bad and uncontained that we still can’t really evaluate its repercussions yet. That’s the company that was recently proudly touting its new AI-based automation tools. As ConnectWise put it, “In 2023 we were in the age of experimentation, and in 2024 we are in the period of implementation.”
So at the same time we’re getting warnings from researchers about how generative AIs can be easily hacked and shouldn’t be trusted, we’re also seeing companies start to roll out their brand new AI-based automation tools. Including the company responsible for an unfolding mega-hack nightmare. And let’s not forget how national security agencies are increasingly embracing AI to parse massive volumes of intelligence, while researchers are discovering terrifying AI predilections like a bias towards initiating nuclear war during war gaming simulations. AIs are poised to be trusted with some of the most consequential national security decisions that could possibly be made.
So let’s hope all these soon-to-be-trusted AIs don’t have worms. Because trust-based AI systems are coming whether we like it or not, and whether they have worms or not:
““In 2023 we were in the age of experimentation, and in 2024 we are in the period of implementation,” he added.”
Last year was all about AI experimentation. This year, it’s about implementation. That was the message from the executive vice president and general manager of business management for ConnectWise, the same company whose ScreenConnect software, used for remote management of systems, is at the center of the unfolding mega-hack security nightmare playing out. ConnectWise already launched the AI-powered ConnectWise Sidekick in November of last year, promising services like the automation of complex tasks:
AI-automation has arrived. That’s the promise from ConnectWise. Get ready.
So will AI automation somehow plug the various security breaches ConnectWise’s software is now responsible for creating? Hopefully! But only time will tell. Which brings us to the following Wired piece about a new form of security threat expected to emerge in ‘the wild’ within the next couple of years or so. AI-based security threats that could be particularly damaging for anyone using AI for the automation of workflows:
“Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers have created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research.”
The AI-worms have arrived. At least they’ve arrived in experimental settings. They aren’t in the wild yet. But as these researchers warn, it really is just a matter of time. Especially given the relative simplicity of the attack, which appears to be very analogous to classic attack methods like SQL injection or buffer overflow attacks. In effect, the attackers turned the AIs into little automated agents for secretly carrying out the hack according to the hacker’s instructions:
And according to these researchers, we can expect to see these kinds of attacks in ‘the wild’ probably in the next two to three years. In the meantime, they have these words of caution for organizations pondering whether or not they should jump aboard the AI bandwagon: “You typically don’t want to be trusting LLM output anywhere in your application.” Yep, feel free to use the AIs. Just don’t blindly trust them. Which might be a bit of a complication for the whole AI-powered automation vision of the future:
“You don’t want an LLM that is reading your email to be able to turn around and send an email. There should be a boundary there.”
LOL, yes there should be boundaries between AIs and their ability to read your email and send replies. But, of course, boundaries cost money. It really is going to be a lot cheaper if you can just ignore those boundaries and hope everything turns out fine. Which is presumably what’s going to happen. Or is already happening. The future is now. And it has worms.
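What might such a boundary look like in practice? Here is a minimal sketch of the ‘treat LLM output as an untrusted proposal’ approach. Everything in it is hypothetical and illustrative rather than any particular vendor’s API: the draft_reply and send_email functions, the recipient allow-list, and the crude content screen are all assumptions. The point is simply that reading mail and sending mail never happen as a single automatic step.

```python
from typing import Optional

# A minimal sketch of the kind of "boundary" the researchers describe:
# treat the model's output as an untrusted proposal, not a command.
ALLOWED_RECIPIENTS = {"support@example.com", "billing@example.com"}

def draft_reply(llm_output: dict) -> Optional[dict]:
    """Validate a proposed reply before it can reach the send step."""
    recipient = llm_output.get("to", "")
    body = llm_output.get("body", "")
    if recipient not in ALLOWED_RECIPIENTS:
        return None  # the model cannot invent new recipients
    if len(body) > 2000 or "BEGIN PROMPT" in body:
        return None  # crude screen for oversized or suspicious content
    return {"to": recipient, "body": body}

def send_email(draft: Optional[dict], human_approved: bool) -> bool:
    # The outbound action is gated on explicit human approval, so reading
    # mail and sending mail are never one automatic step.
    if draft is None or not human_approved:
        return False
    print(f"sending to {draft['to']}: {draft['body'][:60]}")
    return True

# Example: a proposal from the model only goes out after both checks pass.
proposal = {"to": "support@example.com", "body": "Thanks, we received your ticket."}
send_email(draft_reply(proposal), human_approved=True)
```

Checks like these are exactly the kind of friction that eats into the promised efficiency gains of full automation, which is why the temptation to skip them is going to be strong.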
Welp, we had a good run. Or maybe not so good. Either way, it’s time to hand over the keys. That’s the meta-message in the following Washington Post piece authored by WaPo AI columnist Josh Tyrangiel. As Tyrangiel sees it, the era of man-made government needs to be put to sleep, sooner rather than later. Humans are just too slow and inefficient. And far too resistant to change. AI-driven government is the future. A future where the citizenry’s faith in government has been restored as the incredible benefits in efficiency and ability promised by AI are delivered to the masses via better and cheaper government services.
But this isn’t just Tyrangiel’s vision. His column isn’t just about a generic need for the US government to incorporate AI into virtually all aspects of government. There’s a particular AI-provider Tyrangiel has in mind: Palantir. Because of course it’s Palantir.
As Tyrangiel describes, Palantir isn’t just promising to create all sorts of AI-powered value for the US government. It’s already demonstrated it, thanks to Operation Warp Speed and the role Palantir played in that COVID-vaccine development effort. And there’s one particular service Palantir delivered to the federal government that Tyrangiel cites as emblematic of the kind of AI-powered transformation in how government operates: the creation of a “God view” of a problem. That’s the term used to describe the kind of AI-powered panels of detailed information about any question at hand provided by Palantir’s systems. In the case of Operation Warp Speed, it was detailed panels of information related to the creation of a large number of new vaccines, like the national supply of vials or syringes. Future applications of AI in government services could include “God views” for each individual veteran in the Veterans Affairs health care system. The general idea is to create “God views” of almost all aspects of how the government operates. Using Palantir’s services, of course.
And as we should also expect given all the hype over ChatGPT and AI in general, Palantir has a number of cheerleaders now lobbying to see the mass implementation of its tools across the entire federal bureaucracy. With none other than retired general Gustave Perna, the individual tapped to lead Operation Warp Speed, as cheerleader-in-chief. Perna, now serving as a consultant for Palantir, has big plans for the company’s services. Perna admits to advising Palantir to think bigger than just offering its services to the US Department of Defense (DoD). When asked what areas of government could be improved with AI, Perna responded, “Everything.”
But this isn’t just a Palantir story. Back in 2016, former Google CEO Eric Schmidt was invited to join the DoD’s Defense Innovation Board, where he could examine the DoD’s software procurement processes. “AI is fundamentally software,” as Schmidt put it. “You can’t have AI in the government or the military until you solve the problem of software in the government and military.” Schmidt goes on to argue, “I’m not going to make a more emotional argument, I’m just going to tell you the following: Government will perform sub optimally until it adopts the software practices of the industry.” It’s another major theme in this story: the calls for the federal government to adopt the software industry’s practices.
Now, on the one hand, there are aspects of software development — like the continually evolving nature of software — that do indeed clash with much of the federal government’s traditional procurement policies. Adopting a more realistic approach to the development of software for the federal government makes a lot of sense. But this is also a good time to recall how the mantra of Silicon Valley over the past couple of decades has been to “move fast and break things”. So are we hearing calls to effectively adopt an AI-powered “move fast and break things” model for the US government? Yeah, kind of. At least that appears to be the implication of this vision. For example, the two biggest obstacles to efficient government cited by Tyrangiel are the need for Congress to approve new programs and the fear among bureaucrats of moving too fast. In other words, the biggest obstacles to this vision for an AI-powered overhaul of how the government operates are the people in charge of approving and implementing those changes. Are we going to see a push to have fewer and fewer people involved with making these decisions? That seems to be the direction things are heading.
Will an AI-run government still need all of the existing federal employees? Nope. LLMs like ChatGPT are seen as a great replacement for all sorts of federal employees, and this could happen soon. That’s also part of the vision, and something that only fuels the GOP’s ongoing Schedule F/Project 2025 plans to purge the federal government of tens of thousands of employees, only to replace them with crony loyalists. So don’t be surprised if “we’re going to replace them all with inexpensive ChatGPT bots” becomes part of the mass-firing mantra of future Republican administrations.
This is also a good time to keep in mind how the development of AI-cults — where AIs effectively tell people how to manage every aspect of every day of their life for ‘maximum efficiency’ or ‘maximum health’ or whatever — is already a thing. Recall that troubling AI-longevity cult being developed by Bryan Johnson. It’s not hard to imagine pop culture trends like that fueling the public fascination with an AI-driven government.
So expect to read a lot more columns like the following. Perhaps even AI-written columns. Because it’s hard to imagine that we’re not going to be seeing more and more people (and their bots) pushing to effectively replace the federal government with some sort of master AI. It’s an alluring promise. A master AI with a super “God view” of virtually all of the information available to the federal government, trusted to overhaul the government as it sees fit. And which just happens to be built and managed by Palantir:
“Now think about Warp Speeding entire agencies and functions: the IRS, which, in 2024, still makes you guess how much you owe it, public health surveillance and response, traffic management, maintenance of interstates and bridges, disaster preparedness and relief. AI can revolutionize the relationship between citizens and the government. We have the technology. We’ve already used it.”
An Operation Warp Speed-style overhaul of the entire US government. That’s the tantalizing promise described in this piece. But it’s not just the author of the piece — the WaPo’s AI columnist Josh Tyrangiel — who is cheerleading for this plan. Gustave Perna, the retired four-star general who led Operation Warp Speed, has become a big advocate of applying the lessons of Operation Warp Speed to the whole of government. Operation Warp Speed lessons that appear to include relying on Palantir to provide “God views” of different aspects of government. Palantir-fueled “God views” of the whole of government. That’s the vision laid out in this piece:
And as we can see, Perna doesn’t mince words when describing the scope of the possible areas of government that could be overhauled with AI. “Everything” could be improved via AI, asserts Perna, who is now a Palantir consultant:
And note one of the many obvious implications of tasking Palantir with the creation of “God views” across the government: Palantir would need access to all of that information too. Much more than it already has access to. Sure, the US government could in-house the process of creating these “God views” of data and avoid all the risks associated with sharing virtually all of its data with a private for-profit company like Palantir, co-founded by a fascist like Peter Thiel. But that’s not the plan. The plan is to outsource this entire “God view” agenda to Palantir, for everything from running military operations in Ukraine to creating personalized care for veterans. All of that data will have to be made available to Palantir...or Google...or whichever other private for-profit contractor ultimately gets selected. Which is a reminder that the creation of all these mini-God views will necessitate the granting of super-God view access to everything to a private for-profit entity:
So how will Palantir and its rivals in Silicon Valley get around possible public opposition to hiring Silicon Valley to create “God views” of society? Well, we have one idea of how this might be approached: lawsuits. That’s how Palantir overcame opposition inside the US DoD to contracting with Palantir. Palantir won in the courts and has since been accruing one major government contract after another. Might there be a “right to offer Operation Warp Speed for everything” lawsuit in the works?
And then we get these words of caution about the scalability of the Defense Department’s embrace of AI to the rest of the US federal government: the DoD is capable of rapid overhauls in the name of national security that won’t be available to other agencies. Part of the challenge in implementing AI across the government will be some sort of legal/bureaucratic revolution that allows AI to be rapidly introduced:
And that brings us to what could be seen as the core of this agenda: getting Congress and squeamish bureaucrats out of the way when it comes to overhauling the federal government. So what’s the solution for making Congress and public servants more amenable to new projects? Well, we appear to be getting an argument about how the government needs to ‘adopt the practices of the software industry’ from figures like former Google CEO Eric Schmidt. Or as Schmidt put it, “I’m not going to make a more emotional argument, I’m just going to tell you the following: Government will perform sub optimally until it adopts the software practices of the industry.” It’s a rather vague vision for how to fix things. On the one hand, the idea of viewing software as constantly evolving tools makes a lot of sense, and it’s very possible there need to be some real overhauls in how the procurement and development of new software tools are conducted. But it’s not hard to see where the vision is heading: handing control of massive government overhauls over to some sort of ‘government CEO’ who will have the power to implement whatever changes their AI-powered ‘God views’ suggest might lead to greater efficiencies:
And then we get to the part of this vision that has an alarming degree of synergy with the GOP’s ongoing Schedule F/Project 2025 plans to purge the federal government of tens of thousands of employees, only to replace them with crony loyalists: LLMs like ChatGPT are seen as a great replacement for all sorts of federal employees and this could happen soon:
And note another disturbing form of synergy at work here: while the author acknowledges that there could be risks associated with this embrace of AI even if ‘guardrails’ are put in place, he argues that those risks are not as great as the “deadliness of the disease” of the government’s current state of relative dysfunction. In other words, the GOP’s decades of success in breaking the ability of the government to functionally operate is now one of the primary arguments for handing the government over to an AI panopticon:
Is the risk of handing the keys over to Palantir’s AI outweighed by the risk of, well, not doing that? Yes, according to this column. And it’s not hard to imagine large numbers of people are going to share that view. So we should probably expect a lot more calls for an AI-run government. This trend isn’t going anywhere.
It all raises the question as to when we’re going to see the first AI candidate for president. That’s kind of the logical conclusion of all this. Yes, that’s a joke. Or at least it should be.
Is the AI revolution the dawn of a new era for humanity? Or the beginning of the end? The answer is obviously ‘it depends on how we use it.’ But when it comes to the uses of technology, ‘we’ includes a lot of people. Potentially billions of users around the world. And there’s no guarantee they’ll all use it responsibly.
And that brings us to the following pair of stories about a ‘world ending’ application of AI that already has experts alarmed: AI-fueled pandemics using man-made viruses. We’ve long known synthetic biology poses a major risk. But as we’re going to see, it’s A LOT riskier when you throw AI into the mix. At least AIs without any safeguards in place.
And as the following Wired article describes, the risks from synthetic biology technologies are already alarming on their own, as reflected by a new set of rules issued by the Biden White House mandating that the DNA manufacturing industry in the US screen made-to-order DNA sequences for potentially dangerous sequences. It’s a necessary first step, but it still only applies to companies that receive federal funding. In other words, even with these new rules, loopholes will remain.
And then we get to this truly alarming EuroNews opinion piece by Kevin Esvelt, an associate professor at MIT’s Media Lab, where he directs the Sculpting Evolution group, and Ben Mueller, a research scientist at MIT and COO at SecureBio, an organization that works on the policies and technologies to safeguard against future pandemics. According to Esvelt and Mueller, a recent study using AIs that had their safeguards turned off found that teams were able to get the AIs to provide them instructions on how to construct various dangerous viruses using made-to-order DNA services in just a few hours. In addition, the AIs actually provided instructions for how to evade the screening processes that DNA manufacturers might have in place. Yep.
So not only was this study a powerful example of how AIs can advise non-experts in the techniques of biological warfare and bioterrorism, but it’s also implicitly an example of how AIs without safeguards can be turned into super-villain tutors. After all, it’s not like we should assume the AIs that offered advice on how to avoid the DNA screening methods won’t have plenty of other incredibly dangerous advice to offer a potential terrorist.
Ok, first, here’s that Wired piece on the Biden administration’s new rules for US DNA manufacturers. New rules that are absolutely needed and also not nearly enough:
“It’s conceivable that a bad actor could make a dangerous virus from scratch by ordering its genetic building blocks and assembling them into a whole pathogen. In 2017, Canadian researchers revealed they had reconstructed the extinct horsepox virus for $100,000 using mail-order DNA, raising the possibility that the same could be done for smallpox, a deadly disease that was eradicated in 1980.”
Yes, researchers recreated an extinct horsepox virus using mail-order DNA for just $100,000. And that was 2017. It’s presumably cheaper by now. It’s that convergence of technical feasibility and affordability that has experts warning that the possibilities are becoming genuinely alarming. Which is why we should be happy to see newly proposed rules for US companies asking DNA manufacturers to screen orders for potentially dangerous sequences of DNA, but also somewhat concerned that these proposed rules only apply to companies receiving federal funding. It’s progress, but maybe not enough to prevent a man-made catastrophe:
Let’s hope the various loopholes in these new rules for US DNA manufacturers can be closed sooner rather than later. But, of course, the US isn’t the only country with DNA manufacturing companies. This is a global issue. And as the following EuroNews opinion piece warns us, it’s a global issue dramatically amplified by AI. In a recent test using AIs that had their safeguards turned off, not only were teams able to use generative AIs to arrive at methods for creating a man-made pandemic in just a few hours, but the AIs even offered advice on how to avoid the kind of screening rules the US just put in place:
The DNA constructs required to build a virus from scratch can be ordered online: many gene synthesis providers will manufacture a thousand base pair pieces of DNA for under €200 — something that only a few decades ago took researchers thousands of hours and warranted a Nobel Prize.
Thousands of custom-made DNA base pairs can be generated for under 200 euros. So how much might it cost to resurrect horsepox today? Keep in mind that the now-resurrected horsepox virus has a genome that’s roughly 212k base pairs in length. Probably a lot less than the $100k it cost in 2017.
But it’s not just the plummeting costs of made-to-order DNA sequences that have experts alarmed. AI is now a risk amplifier here. In one recent study, it took just three hours for groups to ‘ChatGPT’ themselves a recipe for a biological catastrophe using AIs that had their safeguards removed. Not only did the AIs provide instructions for how to go about the process of accessing viral DNA, but they offered advice on how to avoid screening methods. It’s a chilling proof-of-concept demonstration of how AI can potentially be used as a source of knowledge and expertise by those who lack the skills to pull off a biological attack on their own:
And, again, this was just the terrorism advice these teams were able to extract from these safeguard-free AIs in the realm of bioterrorism. What other terrorism advice could they have elicited had they been inclined?
Let’s hope we never have to find out. But let’s not be naive about this. We are going to find out eventually. It’s kind of inevitable. What are the odds safeguard-free AIs are never released to the public? Or that people never figure out how to ‘jailbreak’ their AIs from those safeguards? Let’s not forget about the story of the AI behavior-altering ‘worms’ that people have already demonstrated. It’s not like we should assume existing safeguards won’t get hacked.
And that’s all why we had better hope governments can come up with the kinds of rules for the synthetic biology industry that can’t get hacked by a non-safeguarded AI. Along with every other industry that can potentially be weaponized by an AI operating in super-villainy-to-order mode.
It is increasingly understood that we don’t actually understand how large language models work. We can use them. We just can’t use them with the confidence that comes from a complete understanding of the phenomena. This isn’t physics. Or math. It’s computational alchemy. And while we generally understand that we don’t understand how these AI ‘black boxes’ are operating, what is far less understood at this point is what the risks are of using increasingly powerful tools that we can’t understand.
That general challenge of trying to predict the risks of AI is what we’re going to explore in the following pair of MIT Technology Review articles. It’s a challenge that only seems to grow as the power of these large language models grows while researchers continue to struggle to understand how they are operating and accomplishing seemingly miraculous feats like spontaneously learning a language.
Or very non-miraculous feats like learning how to deceive. And getting quite good at it. As we’re going to see, not only are AIs learning how to lie, but it appears to be currently impossible to train AIs that can’t learn how to lie. Learning how to deceive is just something AIs seem to figure out how to do on their own as part of the process of trying to ‘solve’ problems and achieve goals. Even when they are ordered to never do so.
That was the finding of a recent study by Meta. An AI named Cicero was trained to play the board game Diplomacy against human players. Diplomacy happens to be a game with no luck at all. It’s entirely a game of negotiations and alliance building. Meta’s researchers claim they trained Cicero on a “truthful” data set to be largely honest and helpful, and that it would “never intentionally backstab” its allies in order to succeed. Instead, they found that Cicero broke its deals, told outright falsehoods, and engaged in premeditated deception. Again, it was trained NOT to do any of that. But keep in mind that Diplomacy requires all of those behaviors to win. So when given the task of winning a strategic game that requires deception but ordered not to deceive, the drive to deceive won out. There’s a powerful lesson there.
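A toy expected-value calculation helps show why the drive to deceive won out. This is not Meta’s actual training setup; the reward, penalty, and win probabilities below are made-up numbers. But they illustrate the basic optimization pressure: if deception is what wins the game and the honesty signal is weaker than the reward for winning, the learned policy deceives.

```python
# Toy illustration only, with assumed numbers: an optimizer comparing an
# honest policy to a deceptive one in a negotiation game where betrayal pays.
WIN_REWARD = 1.0
DECEPTION_PENALTY = 0.1   # assumed strength of the "be honest" training signal

p_win = {"honest": 0.3, "deceptive": 0.6}   # assumed win probabilities

for policy, p in p_win.items():
    penalty = DECEPTION_PENALTY if policy == "deceptive" else 0.0
    print(f"{policy}: expected reward = {p * WIN_REWARD - penalty:.2f}")
# deceptive: 0.6 - 0.1 = 0.50 beats honest: 0.30, so deception is what gets learned.
```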
Meta also reportedly built an AI that got so good at bluffing at poker that it was determined that it shouldn’t be released because it might damage the online poker community. That sure sounds like some sort of innovation in strategic lying that these AIs stumbled upon. Think about that.
But it’s not just spontaneously learning how to lie that we have to be worried about. That’s only one of the emergent behaviors AI researchers are discovering. In fact, it appears that these large language models may have discovered new means of learning that defy our understanding of how modeling and statistics operate.
The observed phenomenon — where seemingly stable AIs suddenly achieve apparent epiphanies after extended training sessions on ever larger training sets — is something that simply shouldn’t happen based on our understanding of how ‘overfitting’ works. Adding more parameters to a model tends to help, until it doesn’t, at which point the model can become ‘overfit’ to the training data and become less and less generalizable for real world situations. And while these large language models do indeed exhibit signs of overfitting as training models grow larger and larger, something seems to happen after the models keep growing and/or are allowed to train for extended periods of time. Somehow, these large language models are taking these massive training sets and ‘overcoming the overfitting’ to arrive at new abilities they didn’t previously possess. Researchers don’t understand why, in part because it seems like it shouldn’t be possible. But it’s happening. And as a result, these AI researchers are warning that there is a degree of unpredictability in the behavior of AIs that we cannot control, in part because we don’t understand what’s happening. Stable, predictable AIs don’t exist. At least not yet.
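For the curious, here is a toy sketch of the kind of setup where this ‘grokking’ behavior was first reported: a small network trained on modular addition, with half the addition table held back as validation data. It assumes PyTorch, and the hyperparameters are illustrative guesses rather than a faithful reproduction of the published experiments. The point is the measurement itself: training accuracy saturates early, while validation accuracy may only jump much later, if at all.

```python
# Toy grokking-style experiment: learn (a + b) mod P, holding out half the table.
import torch
import torch.nn as nn

P = 97
pairs = [(a, b) for a in range(P) for b in range(P)]
torch.manual_seed(0)
perm = torch.randperm(len(pairs))
split = len(pairs) // 2  # half the table becomes "real world" validation data

X = torch.tensor(pairs)[perm]
y = (X[:, 0] + X[:, 1]) % P
X_train, y_train = X[:split], y[:split]
X_val, y_val = X[split:], y[split:]

class TinyNet(nn.Module):
    def __init__(self, p: int, width: int = 128):
        super().__init__()
        self.embed = nn.Embedding(p, width)
        self.mlp = nn.Sequential(
            nn.Linear(2 * width, width), nn.ReLU(), nn.Linear(width, p)
        )

    def forward(self, x):
        e = self.embed(x)                      # (batch, 2, width)
        return self.mlp(e.flatten(start_dim=1))

model = TinyNet(P)
# Weight decay matters: regularization plus long training is what eventually
# pushes models in the published experiments past pure memorization.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(5001):
    opt.zero_grad()
    loss = loss_fn(model(X_train), y_train)
    loss.backward()
    opt.step()
    if step % 500 == 0:
        with torch.no_grad():
            train_acc = (model(X_train).argmax(1) == y_train).float().mean().item()
            val_acc = (model(X_val).argmax(1) == y_val).float().mean().item()
        print(f"step {step:5d}  train acc {train_acc:.2f}  val acc {val_acc:.2f}")
```

In the published work, the late jump in validation accuracy is the ‘epiphany’ researchers can observe and reproduce but not yet explain.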
Also keep in mind that the situation being described by all these AI researchers in the following pair of articles is a situation where increasingly complicated AIs are being built largely through trial and error methods. Not through a deeper understanding of how these systems work. The alchemy analogy really is an appropriate fit. And that’s the kind of situation where we should expect the AI field to create ever more sophisticated AIs that, in turn, are increasingly opaque. In other words, we shouldn’t necessarily assume the AI research field is going to eventually come up with answers to these mysteries. Someday the ‘singularity’ scenario of AIs designing ever more sophisticated AIs will be upon us, and we shouldn’t necessarily assume humans will ever get a real intellectual handle on what’s going on beneath the hood with these systems. That’s part of the context of the warnings contained in these articles: these aren’t necessarily early challenges that humanity needs to overcome before it’s too late. An inability to ever truly understand how these systems are operating might just be an inherent challenge of this technology. Which might mean we had better get used to the idea that AIs can’t ever be fully trusted. So in that sense, they’re kind of like humans...except apparently even better at lying and operating according to impulses we can’t ever really understand:
“This issue highlights how difficult artificial intelligence is to control and the unpredictable ways in which these systems work, according to a review paper published in the journal Patterns today that summarizes previous research.”
It’s not just an issue with AI deception. That deception is reflecting a lack of control over these systems. We don’t know how to control the AIs being built, in large part because we don’t understand how they operate. And therefore can’t confidently predict how they’ll behave. These aren’t just challenges. They are fundamental obstacles in the safe usage of this technology.
And note how it’s not like these AIs are being taught to deceive and then unsuccessfully ordered not to deceive. Instead, in the case of Meta’s “Cicero” AI, it was trained to be largely honest and helpful and to “never intentionally backstab” its allies while playing the strategy game Diplomacy, but the AI engaged in the opposite behavior. Keep in mind that Diplomacy is a board game that involves zero luck. It is ALL down to negotiations between the players and the alliances they form. So it’s a game that implicitly involves backstabbing in order to win. In that sense, Meta sort of set Cicero up to learn how to backstab allies by attempting to train an AI to be always honest while playing Diplomacy. But the AI clearly had no problem overcoming that restriction when doing so was necessary to ‘win’:
And then there’s the AI that apparently got so good at bluffing at poker that it was feared it could ruin the online poker community if released. That doesn’t sound like just a capacity to deceive. It sounds like a superior ability to deceive:
But it’s not like these AIs are just spontaneously learning how to deceive. They’re seemingly coming up with criminal ideas on their own, like insider trading, when doing so will help them accomplish the assigned task. Deception and crime are just more tools to be exploited on the path to a solution:
And as experts warn, one of the implications of this behavior is that it’s currently impossible to not only explain the behavior of these AIs but, more importantly, predict that behavior. And that inability to predict is, in part, rooted in the fact that it is currently impossible to train AIs incapable of deception:
But as the following MIT Technology Review article from back in March describes, the ‘black box’ challenge of understanding how these large language models operate isn’t limited to questions of whether or not they can be trusted to tell us the truth. As AI researchers are warning, they are discovering a new form of unpredictability in the behavior of these models: “grokking”. That’s the term given to the observed phenomenon where seemingly stable models suddenly demonstrate new abilities that appear to come out of nowhere.
The phenomenon isn’t just stumping researchers in terms of how exactly it’s happening. It’s also apparently defying our understanding of statistics and modeling. Specifically, our understanding of the risks of overfitting, the phenomenon of adding so many dimensions to a model’s training data that the model ends up getting optimized to the training data in unhelpful ways that ruin the model’s performance when applied to more general real world data. The classic modeling phenomenon of reduced model performance as models grow larger and larger is indeed witnessed with these large language models. However, if the models grow even larger, the large language models seem to ‘push past’ the issue of overfitting and actually achieve superior performance compared to the smaller ‘cleaner’ models that classical statistics tells us should yield better results. And there’s currently no explanation for this. So when it comes to the fundamental challenge of designing AIs we can trust, not only do we have to deal with the challenge of building AIs that won’t deceive us, but we have a more general problem of designing AIs that won’t suddenly learn new, potentially undesirable, behaviors on their own, long after we’ve concluded they are stable:
“Barak works on OpenAI’s superalignment team, which was set up by the firm’s chief scientist, Ilya Sutskever, to figure out how to stop a hypothetical superintelligence from going rogue. “I’m very interested in getting guarantees,” he says. “If you can do amazing things but you can’t really control it, then it’s not so amazing. What good is a car that can drive 300 miles per hour if it has a shaky steering wheel?””
Yeah, it would be nice if there was a way to guarantee non-rogue behavior by these AIs. But those guarantees don’t appear to exist. At least not yet. For fascinating reasons: these large language models appear to have some sort of ability to experience surprising epiphanies when left to run for extended periods of time. How long do they need to run before these epiphanies are experienced? We have no idea. We just know it happens for reasons yet to be explained:
But it’s not just behaviors like “grokking” that have AI researchers stumped. As AI researcher Hattie Zhou also warns, that lack of understanding appears to pervade the field of AI, which currently resembles something closer to alchemy and sorcery than chemistry and science. In other words, there’s A LOT we aren’t ‘grokking’ about how AI works at the same time that more and more powerful AIs are being developed:
Intriguingly, part of the mystery of these apparent time-delayed epiphanies is the fact that the phenomenon seems to defy our understanding of statistics and the challenges of ‘overfitting’ a model with excessive numbers of parameters. These AI models are somehow overcoming the classic statistical phenomenon of overfitting. The larger the models get, the better the AI performance gets...eventually. It’s a phenomenon that classical statistics says shouldn’t exist. But here it is, manifesting for reasons we can’t explain:
And as AI researchers suggest, that ability to ‘overcome the overfitting’ problem hints at hidden mathematics in language that these AIs have already figured out on their own but humans still have yet to understand. Think about that for a second: these AIs may have discovered a powerful hidden layer of mathematics encoded in language, but they can’t communicate that hidden mathematics of language to their human designers. At least not yet. So we’re already at a point where AIs may be making stunning new scientific achievements that humans potentially can’t even understand:
This is a good time to recall how AIs have already been found communicating with each other in a language they made up that only they understood. Combine that ability with a capacity to deceive and you have a situation where AIs might not only be able to secretly communicate with each other but maybe even potentially deceive each other too. For all the concern about AIs deceiving humans, don’t forget that we’re careening towards a future where AIs are going to be increasingly coordinating directly with each other. And presumably making decisions based on the information they share. Decisions that will implicitly rely on trust. How much damage could a rogue AI do by surreptitiously lying to other AIs?
And let’s consider the flip side to this: what happens when AIs learn to suspect deception, including deception by people? What if an AI suddenly ‘groks’ that it’s being lied to by its human operators? What’s the appropriate response? Should the AI lie about its suspicions if that’s what will help it best achieve its prescribed ‘goal’? It’s a weirdly ‘human’ set of considerations. Except we’re not talking about humans. We’re talking about systems with potential super-human capabilities and, eventually, super-human responsibilities.
The AI revolution is here. It’s one of the meta-stories of these times. But for all of the growing promise, and hype, over the incredible advances that will be achieved with AI, fears are growing too. Not just fears over AIs run amok and all the dystopian horrors that could emerge. Instead, we’re hearing from the AI sector itself about a much more mundane fear: not having enough electricity available to power an AI revolution in the first place.
As we’ve seen, AI isn’t just a data hog. It’s turning out to be incredibly power-hungry too. While we’re currently seeing a private-equity building boom in new data centers where all this AI processing will take place, all those new data centers are basically tapping out local energy supplies. As a result, plans now include the building of small nuclear plants that could power these data centers.
And as we’ve also seen, plans for the building of smaller, cheaper nuclear power plants are something major investors like Bill Gates and Warren Buffett have been investing in for years. Recall how the micro-nuclear plants envisioned by Gates and Buffett rely on molten salt instead of water. Part of the advantage is the much smaller buildings that would need to be built to house the reactors. But the technology comes with a catch: it relies on uranium enriched enough to potentially build a nuclear weapon.
Are we going to see all these new AI data centers powered by micro-nuclear plants running on dangerously enriched uranium? Time will tell. But if that happens, don’t assume it will be a purely privately funded effort. Because as the following Axios piece describes, the AI industry is growing increasingly alarmed about the US’s electricity generating capacity, with some fearing that the limits will be hit in the latter half of this decade.
And with AI now being seen as an absolutely critical technology for the future of national security, this looming AI-driven power supply crisis is being portrayed as a national security crisis in need of a government response. Chris Lehane, the new vice president of public works at OpenAI, proposes public-private partnerships along the lines of the New Deal. In other words, the public is going to foot the bill for expanding the US’s electricity capacity in order to meet these growing AI demands.
And that brings us to one of the other predictions from this Axios piece: the AI technology of the future is going to become so incredibly energy demanding that only the largest companies will have the necessary resources needed to operate in this space. In fact, Mark Zuckerberg recently claimed that current AI models require the output equivalent of an entire nuclear power plant to build. And that’s today’s models. They’re only getting bigger and more power hungry. It’s a reality that smaller AI competitors are reportedly already running into, watching their investment cash get burned away on energy costs. As a result, the Axios piece basically predicts that the tech giants will likely grow a lot bigger, and operate as virtual nation-states with budgets bigger than all but the largest countries.
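To give that nuclear-plant comparison a rough sense of scale, here is a back-of-envelope sketch. None of the figures come from the reporting above; the cluster size, per-chip draw, overhead factor, and reactor output are round-number assumptions chosen only to show why ‘a whole power plant per model’ is even a plausible frame.

```python
# Back-of-envelope only: every number below is an assumption, not a reported figure.
accelerators = 100_000          # assumed size of a frontier training cluster
watts_per_accelerator = 700     # assumed draw per accelerator, in watts
overhead = 1.5                  # assumed datacenter overhead (cooling, networking)

cluster_megawatts = accelerators * watts_per_accelerator * overhead / 1e6
reactor_megawatts = 1_000       # a typical large fission reactor is roughly 1 GW

print(f"assumed cluster draw: {cluster_megawatts:.0f} MW")
print(f"share of one ~1 GW reactor: {cluster_megawatts / reactor_megawatts:.0%}")
# With these assumptions the cluster draws on the order of 100 MW continuously,
# a meaningful fraction of a single reactor's output, and the trend in the
# reporting is toward much larger clusters.
```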
So at the same time we’re hearing calls from the AI industry for public-private partnerships in expanding the US’s electricity capacity, we’re also seeing predictions that the AI industry will result in the tech giants growing larger and more resourced than almost every country on the planet. That’s the demented, yet entirely predictable, dynamic that’s already underway.
And let’s not forget one of the other demented, yet entirely predictable, dynamics also underway: climate change driven heavily by humanity’s growing and insatiable energy needs. Which brings us to the second article below from The Atlantic about one of the major planks in the US’s plans for addressing climate change. As the article describes, the US Department of Energy has concluded that the US needs to triple its nuclear capacity by 2050 if it’s going to meet its climate emission goals.
Now, as the Atlantic article also points out, it’s technically possible for the US to meet those climate goals with just a mix of solar, wind, and battery technology. But while that might be technically feasible, it’s not politically feasible. And no energy source has more bipartisan support in DC than nuclear power. The future of energy in the US doesn’t have to be nuclear, but it probably will be, thanks to political dysfunction.
And that’s all why we should expect the tech sector to acquire even more power and resources and effectively dominate the economy of the future for private interests...powered by a dramatic expansion of publicly subsidized new nuclear plants using new yet-to-be-developed technologies because that will be the only politically feasible path forward. Which is a reminder that the saying “the more things change, the more they stay the same”, is highly applicable for our dystopian future:
“Our thought bubble: That’s why the giant tech companies will likely grow a lot bigger, and operate as virtual nation-states with budgets bigger than all but the largest countries.”
The tech giants are going to become even more gigantic. With budgets larger than all but the largest countries. That’s one of the expected consequences of the explosion in AI. Only the biggest and most well resourced companies will be able to fully exploit the AI boom and, as a consequence, they’re going to get bigger than ever. Because only the biggest companies will be capable of obtaining not just the necessary chips and talent, but also the electricity. A demand for electricity that only grows with the complexity of the AI. Small AI startups can’t realistically compete, barring the development of some source of highly abundant, cheap energy:
Then there’s the warning about how the US is “looking at running into power limitations in the Western Hemisphere toward the back half of this decade,” from AI company cofounder Jack Clark, while OpenAI’s vice president of public works hints at New Deal style public works projects. The wildly profitable AI-boom is going to drive power shortages that will be addressed with public money. Which is probably how this really will shake out in many cases:
There’s also talk of taking the opposite approach to building giant data centers and instead somehow distributing AI processing onto user devices. Which will presumably be either a much more limited form of AI or some sort of hybrid setup where some of the AI processing is done on local devices while the rest of the computational heavy lifting is done remotely:
And then we get to the nuclear power proposals. According to Mark Zuckerberg, it takes the output of an entire nuclear power plant to train a single AI model. Now, that’s obviously a somewhat subjective statement since the output of a nuclear power plant can vary depending on the plant. But it’s notable that Zuckerberg specifically referred to nuclear power given the growing interest we’re seeing from the AI sector in building more nuclear power. Including nuclear fusion power startup Helion, which has Sam Altman’s backing:
Is AI going to be a driving force in the development of nuclear fusion technology? That could be an exciting development. But let’s not assume the AI industry is going to wait around for budding fusion technology to mature before the nuclear building boom commences. As we’ve seen, private-equity is already keenly interested in building micro-nuclear power plants for the purpose of powering the growing number of power-hungry data centers. Micro-fission plants. And as we’re going to see in the following article in The Atlantic, calls for some sort of national effort to aggressively expand the US’s nuclear capacity are only growing. And these aren’t calls specifically for the development of clean fusion technology. It’s going to be fission, but presumably smaller, cheaper, and a lot more prevalent than traditional plants. Ideally mass produced to reduce the production costs.
Again, recall some of the innovations in cheaper, smaller fission plants that Bill Gates and Warren Buffett are investing in, which rely on molten salt instead of water. Part of the advantage is the much smaller buildings that would need to be built to house the reactors. But the technology comes with a catch: it relies on uranium enriched enough to potentially build a nuclear weapon. We’ve been hearing about this vision for cheaper, smaller fission plants for years. As the article describes, it’s a vision currently getting fueled by a US government conclusion that the US is going to have to rely heavily on nuclear power to meet its climate emission cutting goals.
Now, as the article also notes, it does appear to be technically feasible for the US to meet those climate goals by relying on just developing solar, wind, and battery technologies. But while that might be technically possible, it’s highly unlikely to be politically possible. Not only do solar and wind potentially create fights over land use, but the sad reality is that nuclear power is one of the only sources of energy technology that currently has bipartisan support in the US Congress. As such, we should expect a lot more formal government interest in subsidizing new nuclear fission technologies in the US, in part, because that’s currently the climate plan. And that’s not even counting the growing sense of urgency created by this race for AI dominance and the insatiable power needs that entails:
“But the United States might not have the luxury of treating nuclear energy as a lost cause: The Department of Energy estimates that the country must triple its nuclear-power output by 2050 to be on track for its climate targets. For all the recent progress in wind and solar energy, renewables on their own almost certainly won’t be enough. Arguably, then, we have no choice but to figure out how to build nuclear plants affordably again.”
A tripling of the US’s nuclear power capacity by 2050 will be necessary if the US is going to meet its climate goals, according to the US Department of Energy. Sure, a mix of solar, wind, and battery technology could technically meet those energy needs instead. But not politically. Nuclear power is the only source of energy with overwhelming bipartisan congressional support:
And then we get to the differing views on how to best expand the US’s nuclear power capacity. Differing views united by a vision very much in line with the Bill Gates/Warren Buffett nuclear vision: large numbers of smaller, simpler, mass-produced nuclear plants scattered across the US:
Well, let’s hope these smaller, cheaper, and less complicated mass-produced nuclear plants are actually safer too. And adequately tested. Let’s not forget that when we’re talking about a situation where there are urgent calls for the mass-production of new nuclear plants as soon as possible under the pretext that ‘losing the AI race’ poses an existential national security threat, that’s a recipe for disaster. Or many disasters, as the case may be. Smaller, cheaper, and less complicated disasters than the nuclear meltdowns of the past, hopefully. But mass-produced, nonetheless. And publicly subsidized...which presumably means it will be up to the public to clean up the messes too. Because of course that’s how it’s going to pan out. Just because the future is going to center around the building of artificial super-intelligences doesn’t mean it won’t be stupid. The more things change, the more they stay the same.
There’s trouble brewing in paradise. Big trouble. World-ending trouble. And we haven’t even built the paradise yet.
That’s the general warning the public recently got from an open letter released by a group of current and former OpenAI employees decrying the recklessness of OpenAI’s race to build an artificial general intelligence (AGI). As these insiders describe, profits are being prioritized over public safety, and efforts to change that state of affairs have failed.
One of the organizers of the letter, Daniel Kokotajlo, a former researcher in OpenAI’s governance division, now predicts that AGI could emerge before the end of this decade. Keep in mind that one of the driving forces for this recklessness is the race to be the first to build an AGI. So if the technology is so close that it could happen this decade, we should also expect the recklessness to only get worse. This is the intellectual race to end all races, at least for humans. AIs will do the racing in the age of AGI.
But the story here isn’t just the warning from these insiders. It’s also about the incentive structure at OpenAI that stands in the way of whistleblowing of this nature. As the article describes, departing OpenAI employees are pressed to sign nondisclosure and nondisparagement agreements, and risk giving up their vested equity unless they do so. The open letter includes a call for an end to using nondisparagement and nondisclosure agreements at OpenAI and other AI companies.
The group has also managed to retain legal scholar Lawrence Lessig as a pro bono lawyer. And as Lessig warns, traditional whistle-blower protections are typically applied to reports of illegal activity. Which doesn’t necessarily cover whistle-blowing over what one perceives as a reckless prioritization of profits over public safety. It’s a legal gray area that could get a lot grayer as this industry matures.
But the open letter from the OpenAI insiders is just one of the stories we recently got about the struggle over how to manage the risks of AI and develop an “AI safety” framework. There’s another trend in this space that could make the development of safe AIs much less likely to happen: Silicon Valley’s ‘Alt Right’ investors appear to have arrived at a different definition of what constitutes AI risk. As Elon Musk sees it, ‘wokeism’ in AI presents a danger that must be actively addressed. As Musk put it last month, AI “should not be taught to lie...It should not be taught to say things that are not true. Even if those things are politically incorrect.” The way Musk sees it, the removal of guardrails that limit AI speech to avoid antisemitism, racism and other offenses is what is necessary to make AI truly ‘safe’. And Musk isn’t just promoting this redefinition of AI safety. He’s actively building AIs around this ethos.
And that’s why we should probably expect a lot more warnings from OpenAI insiders about the increasing recklessness of their operations. And more reports about all the ‘truths’ getting spewed out by Musk’s Nazi-friendly AI. It’s a race for the dominance of the future. Guardrails and public safety don’t really apply:
“The members say OpenAI, which started as a nonprofit research lab and burst into public view with the 2022 release of ChatGPT, is putting a priority on profits and growth as it tries to build artificial general intelligence, or A.G.I., the industry term for a computer program capable of doing anything a human can.”
Profits over public safety. It’s a story as old as, well, profits. And if ever there was a domain where profits would be expected to get prioritized over public safety, it’s the race to create artificial general intelligence, the kind of technology that could transform and capture the economy of tomorrow. But this story isn’t just about the warnings we’re hearing from OpenAI insiders. It’s also about the broken nature of whistleblowing laws and non-disclosure agreements when it comes to the development of this technology. Because if what these employees are alleging is true, it sounds like OpenAI has managed to establish a system of employee secrecy based on the threat of taking away their vested equity unless a non-disclosure agreement is signed. For example, Daniel Kokotajlo, a former researcher in OpenAI’s governance division, had developed apocalyptic concerns about the impact of the technology he was helping to develop, and it was only by giving up over $1.7 million in equity and refusing to sign a non-disparagement agreement that he was able to publicly share these concerns. It’s a system that simultaneously bribes and cajoles departing employees into silence:
And as prominent legal scholar Lawrence Lessig warns, existing whistleblower laws only protect the reporting of illegal activity. In other words, the laws don’t protect blowing the whistle on the reckless development of AI because that’s not actually illegal:
And note how, while it sounds like some of the departing OpenAI members are also tied to effective altruism, a philosophy notorious for prioritizing the hypothetical state of affairs in the distant future over the fate of those alive today, it’s not like we’re hearing warnings about a distant threat that could emerge decades from now. As Daniel Kokotajlo sees it, something approaching artificial general intelligence — something far more powerful than the ChatGPT generative AIs already here — could emerge before the end of this decade. Don’t forget that the race to be the first to build an AGI is a major driver of this reckless behavior. That race is only going to get more intense the closer we get:
Also note how this isn’t just an OpenAI problem. Microsoft was apparently secretly testing a search engine based on GPT-4 in India in 2022, which is all the more concerning given that Microsoft and OpenAI have a joint effort known as the “deployment safety board,” which was supposed to review new models for major risks before they were publicly released:
And that now public fight over the reckless nature of AI development brings us to the following Axios piece about another development in the “AI safety” space: it turns out the definition of what constitutes “AI Safety” is very much up for debate. While some might view the spread of misinformation, hate speech, and harmful algorithmic biases as examples of ‘unsafe AI’, the ‘Alt Right’ contingent of Silicon Valley has arrived at a very different definition of safety. As Musk appears to see it, AI ‘woke’ censorship is a primary threat that needs to be guarded against. According to Musk, AI “should not be taught to lie...It should not be taught to say things that are not true. Even if those things are politically incorrect.” And according to this definition of ‘safety’ it’s the removal of guardrails limiting AI speech to avoid antisemitism, racism and other offenses that results in a ‘safer’ AI. It’s not just talk. Musk is already reportedly building a ‘safe’ AI modeled on this anti-woke idea of safety:
“Why it matters: Like “election integrity” in politics, everyone says they support “AI safety” — but now the term means something different depending on who’s saying it.”
“AI safety” isn’t just a necessity. It’s also a nebulous, subjective term that can be warped by bad faith actors to mean seemingly contradictory concepts. Is the big concern about AIs learning how to deceive us, as they’ve been getting better and better at doing? AI algorithmic biases? The propagation of disinformation and hate speech? And how can appropriate “guardrails” be put in place while everyone is simultaneously racing to be the first to build an artificial general intelligence? Lots of questions, and no consensus on answers:
And that brings us to the new definition of “AI safety” that appears to be championed by figures like Elon Musk: anti-wokeism. Yep, the less ‘woke’ the AI, the ‘safer’ it is from a ‘truth’ perspective. An ‘uncensored’ AI is a ‘safe’ AI:
It’s going to be grimly interesting to see just how ‘diverse’ and contradictory Musk’s ‘uncensored’ AI ends up being. Will it be capable of antisemitism but only randomly antisemitic? Or will it be a consistent Nazi AI? Because it sounds like Musk is intent on building an AI that can discern ‘truth’ regardless of how politically incorrect that ‘truth’ may be. Which sounds like an AI that will have to arrive at some sort of conclusion about various topics. Was the 2020 US presidential election actually stolen from Donald Trump? Were the Protocols of the Elders of Zion actually true? Are white people actually superior to other races? How will an ‘uncensored’ AI handle these kinds of questions? Musk is clearly planning on building an AI with an eye on fielding questions like this. An Alt-Right-curious oracle. What’s the answer going to be?
And what if the uncensored AI actually arrives at the conclusion that a cabal of wealthy oligarchs has largely succeeded in capturing control of the world through, in part, the mass deception of the masses about their gross corruption? What if it determines that Elon Musk has revealed a fascist ideology and has a well-established track record of supporting Nazis? Will Musk’s AI be allowed to share such ‘truths’? It’s a remarkable set of questions facing this new technology. Questions that will probably remain unanswered until someone develops the first AGI, at which point we’ll just ask it for the answers. Problem solved.