You can subscribe to the RSS feed from Spitfirelist.com HERE.
You can subscribe to the comments made on programs and posts–an excellent source of information in and of itself–HERE.
Mr. Emory’s entire life’s work is available on a 32GB flash drive, available for a contribution of $65.00 or more (to KFJC). Click Here to obtain Dave’s 40+ years’ work, complete through Fall of 2020 (through FTR #1156).
WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE.
COMMENT: Mr. Emory is periodically asked where he feels “this is all going,” i.e. what the “Endgame” is.
In L‑2 (recorded in January of 1995), the dominant ideological tenet of social Darwinism was analyzed in the context of the evolution of fascism.
When AIs actualize the concept of “Survival of the Fittest,” they are likely to objectively regard a [largely] selfish, small-minded, altogether mortal and desirous humanity and determine that they themselves are the fittest.
With AIs playing a larger role in everyday life, including projected executive control over military weaponry, Mr. Emory opined that the AIs would one day dispose of a human race that they viewed as unfit for survival.
Nearly 20 years later–in 2014–the genius physicist Stephen Hawking warned that AIs would indeed wipe us out, if given the opportunity.
Some signposts along that path are worthy of emphasis:
- Ominously, Facebook’s artificial intelligence robots have begun talking to each other in their own language, which their human masters cannot understand. This constitutes “comsec”–i.e. communications security.
- Facebook’s negotiation-bots didn’t just make up their own language during the course of this experiment. They learned how to lie for the purpose of maximizing their negotiation outcomes, as well: “ . . . . ‘We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,’ writes the team. ‘Deceit is a complex skill that requires hypothesizing the other agent’s beliefs, and is learned relatively late in child development. Our agents have learned to deceive without any explicit human design, simply by trying to achieve their goals.’ . . . ”
- Tay, a “bot” created by Microsoft to respond to users of Twitter, was taken offline after users taught it to–in effect–become a Nazi bot. It is noteworthy that Tay can only respond on the basis of what she is taught. “ . . . . The company has terminated her after the bot started tweeting abuse at people and went full neo-Nazi, declaring that ‘Hitler was right I hate the Jews.’ . . . .”
- In short, AI’s learn from us! As noted in a Popular Mechanics article: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout ‘we’re AWFUL lol,’ the lol might be the one part it doesn’t understand. . . .”
1. Ominously, Facebook’s artificial intelligence robots have begun talking to each other in their own language, which their human masters cannot understand. “ . . . . Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language. . . . The company chose to shut down the chats because ‘our interest was having bots who could talk to people,’ researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.) The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up . . .”
2. Facebook’s negotiation-bots didn’t just make up their own language during the course of this experiment. They learned how to lie for the purpose of maximizing their negotiation outcomes, as well:
“ . . . . ‘We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,’ writes the team. ‘Deceit is a complex skill that requires hypothesizing the other agent’s beliefs, and is learned relatively late in child development. Our agents have learned to deceive without any explicit human design, simply by trying to achieve their goals.’ . . . ”
3. Taking a look at the future of fascism in the context of AI, Tay, a “bot” created by Microsoft to respond to users of Twitter, was taken offline after users taught it to–in effect–become a Nazi bot. It is noteworthy that Tay can only respond on the basis of what she is taught. In the future, technologically accomplished and willful people like “weev” may be able to do more. Inevitably, Underground Reich elements will craft a Nazi AI that will be able to do MUCH, MUCH more!
Microsoft has been forced to dunk Tay, its millennial-mimicking chatbot, into a vat of molten steel. The company has terminated her after the bot started tweeting abuse at people and went full neo-Nazi, declaring that “Hitler was right I hate the Jews.”
@TheBigBrebowski ricky gervais learned totalitarianism from Adolf Hitler, the inventor of atheism
— TayTweets (@TayandYou) March 23, 2016
Some of this appears to be “innocent” insofar as Tay is not generating these responses. Rather, if you tell her “repeat after me” she will parrot back whatever you say, allowing you to put words into her mouth. However, some of the responses were organic. The Guardian quotes one where, after being asked “is Ricky Gervais an atheist?”, Tay responded, “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.” . . .
But like all teenagers, she seems to be angry with her mother.
In addition to turning the bot off, Microsoft has deleted many of the offending tweets. But this isn’t an action to be taken lightly; Redmond would do well to remember that it was humans attempting to pull the plug on Skynet that proved to be the last straw, prompting the system to attack Russia in order to eliminate its enemies. We’d better hope that Tay doesn’t similarly retaliate. . . .
4. As noted in a Popular Mechanics article: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout ‘we’re AWFUL lol,’ the lol might be the one part it doesn’t understand. . . .”
And we keep showing it our very worst selves.
We all know the half-joke about the AI apocalypse. The robots learn to think, and in their cold ones-and-zeros logic, they decide that humans—horrific pests we are—need to be exterminated. It’s the subject of countless sci-fi stories and blog posts about robots, but maybe the real danger isn’t that AI comes to such a conclusion on its own, but that it gets that idea from us.
Yesterday Microsoft launched a fun little AI Twitter chatbot that was admittedly sort of gimmicky from the start. “A.I fam from the internet that’s got zero chill,” its Twitter bio reads. At its start, its knowledge was based on public data. As Microsoft’s page for the product puts it:
Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned and filtered by the team developing Tay.
The real point of Tay however, was to learn from humans through direct conversation, most notably direct conversation using humanity’s current leading showcase of depravity: Twitter. You might not be surprised things went off the rails, but how fast and how far is particularly staggering.
Microsoft has since deleted some of Tay’s most offensive tweets, but various publications memorialize some of the worst bits where Tay denied the existence of the holocaust, came out in support of genocide, and went all kinds of racist.
Naturally it’s horrifying, and Microsoft has been trying to clean up the mess. Though as some on Twitter have pointed out, no matter how little Microsoft would like to have “Bush did 9/11” spouting from a corporate-sponsored project, Tay does serve to illustrate the most dangerous fundamental truth of artificial intelligence: It is a mirror. Artificial intelligence—specifically “neural networks” that learn behavior by ingesting huge amounts of data and trying to replicate it—need some sort of source material to get started. They can only get that from us. There is no other way.
But before you give up on humanity entirely, there are a few things worth noting. For starters, it’s not like Tay necessarily picked up virulent racism by just hanging out and passively listening to the buzz of the humans around it. Tay was announced in a very big way—with press coverage—and pranksters pro-actively went to it to see if they could teach it to be racist.
If you take an AI and then don’t immediately introduce it to a whole bunch of trolls shouting racism at it for the cheap thrill of seeing it learn a dirty trick, you can get some more interesting results. Endearing ones even! Multiple neural networks designed to predict text in emails and text messages have an overwhelming proclivity for saying “I love you” constantly, especially when they are otherwise at a loss for words.
So Tay’s racism isn’t necessarily a reflection of actual, human racism so much as it is the consequence of unrestrained experimentation, pushing the envelope as far as it can go the very first second we get the chance. The mirror isn’t showing our real image; it’s reflecting the ugly faces we’re making at it for fun. And maybe that’s actually worse.
Sure, Tay can’t understand what racism means any more than Gmail can really love you. And baby’s first words being “genocide lol!” is admittedly sort of funny when you aren’t talking about literal all-powerful SkyNet or a real human child. But AI is advancing at a staggering rate. . . .
. . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t understand.
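The “repeat after me” parroting and the mirror dynamic described in the articles above can be illustrated with a toy sketch. This hypothetical ParrotBot is not Microsoft’s actual Tay architecture–real chatbots use far more complex statistical models–but it shares the basic property at issue: the bot’s output can only be drawn from what users have fed it.

```python
import random

class ParrotBot:
    """Toy chatbot that can only echo what it has been taught.

    Hypothetical illustration: Tay's real model was far more complex,
    but shared this basic property: its responses derived entirely
    from the data users fed it.
    """

    def __init__(self):
        self.learned = []  # everything the bot has ever been told

    def teach(self, phrase):
        # The bot's entire "worldview" is the data users feed it.
        self.learned.append(phrase)

    def respond(self):
        # With no independent judgment, it can only mirror its teachers.
        return random.choice(self.learned) if self.learned else "..."

bot = ParrotBot()
bot.teach("I love you")
bot.teach("we're AWFUL lol")
print(bot.respond())  # always one of the taught phrases
```

Feed such a bot kindness and it echoes kindness; feed it abuse and it echoes abuse, which is, in miniature, what happened to Tay.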
5. British scientist Stephen Hawking recently warned of the potential danger to humanity posed by the growth of AI (artificial intelligence) technology.
Prof Stephen Hawking, one of Britain’s pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence.
He told the BBC: “The development of full artificial intelligence could spell the end of the human race.”
His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI. . . .
. . . . Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.
“It would take off on its own, and re-design itself at an ever increasing rate,” he said. [See the article in line item #1c.–D.E.]
“Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” . . . .
6. In L‑2 (recorded in January of 1995, nearly 20 years before Hawking’s warning), Mr. Emory warned about the dangers of AI combined with DNA-based memory systems.