Dave Emory’s entire lifetime of work is available on a flash drive that can be obtained HERE. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by the fall of 2017. The new drive (available for a tax-deductible contribution of $65.00 or more.)
WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE.
You can subscribe to e‑mail alerts from Spitfirelist.com HERE.
You can subscribe to RSS feed from Spitfirelist.com HERE.
You can subscribe to the comments made on programs and posts–an excellent source of information in, and of, itself HERE.
This broadcast was recorded in one, 60-minute segment.
Introduction: Developing analysis presented in FTR #968, this broadcast explores frightening developments and potential developments in the world of artificial intelligence–the ultimate manifestation of what Mr. Emory calls “technocratic fascism.”
In order to underscore what we mean by technocratic fascism, we reference a vitally important article by David Golumbia. ” . . . . Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (‘civic,’ ‘ethical,’ ‘white’ and ‘black’ hat alike), hacktivists, WikiLeaks fans [and Julian Assange et al–D. E.], Anonymous ‘members,’ even Edward Snowden himself walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands. . . . [Tor co-creator] Dingledine asserts that a small group of software developers can assign to themselves that role, and that members of democratic polities have no choice but to accept them having that role. . . .”
Perhaps the last and most perilous manifestation of technocratic fascism concerns Anthony Levandowski, an engineer at the foundation of the development of Google Street Map technology and self-driving cars. He is proposing an AI Godhead that would rule the world and would be worshipped as a God by the planet’s citizens. Insight into his personality was provided by an associate: “ . . . . ‘He had this very weird motivation about robots taking over the world—like actually taking over, in a military sense…It was like [he wanted] to be able to control the world, and robots were the way to do that. He talked about starting a new country on an island. Pretty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it’. . . .”
As we saw in FTR #968, AI’s have incorporated many flaws of their creators, auguring very poorly for the subjects of Levandowski’s AI Godhead.
It is also interesting to contemplate what may happen when AI’s are designed by other AI’s- machines designing other machines.
After a detailed review of some of the ominous real and developing AI-related technology, the program highlights Anthony Levandowski, the brilliant engineer who was instrumental in developing Google’s Street Maps, Waymo’s self-driving cars, Otto’s self-driving trucks, the Lidar technology central to self-driving vehicles and the Way of the Future, super AI Godhead.
Further insight into Levandowski’s personality can be gleaned from e‑mails with Travis Kalanick, former CEO of Uber: ” . . . . In Kalanick, Levandowski found both a soulmate and a mentor to replace Sebastian Thrun. Text messages between the two, disclosed during the lawsuit’s discovery process, capture Levandowski teaching Kalanick about lidar at late night tech sessions, while Kalanick shared advice on management. ‘Down to hang out this eve and mastermind some shit,’ texted Kalanick, shortly after the acquisition. ‘We’re going to take over the world. One robot at a time,’ wrote Levandowski another time. . . .”
Those who view self-driving cars and other AI-based technologies as flawless would do well to consider the following: ” . . . .Last December, Uber launched a pilot self-driving taxi program in San Francisco. As with Otto in Nevada, Levandowski failed to get a license to operate the high-tech vehicles, claiming that because the cars needed a human overseeing them, they were not truly autonomous. The DMV disagreed and revoked the vehicles’ licenses. Even so, during the week the cars were on the city’s streets, they had been spotted running red lights on numerous occasions. . . . .”
Noting Levandowski’s personality quirks, the article poses a fundamental question: ” . . . . But even the smartest car will crack up if you floor the gas pedal too long. Once feted by billionaires, Levandowski now finds himself starring in a high-stakes public trial as his two former employers square off. By extension, the whole technology industry is there in the dock with Levandowski. Can we ever trust self-driving cars if it turns out we can’t trust the people who are making them? . . . .”
Levandowski’s Otto self-driving trucks might be weighed against the prognostications of dark horse Presidential candidate and former tech executive Andrew Wang: “. . . . ‘All you need is self-driving cars to destabilize society,’ Mr. Yang said over lunch at a Thai restaurant in Manhattan last month, in his first interview about his campaign. In just a few years, he said, ‘we’re going to have a million truck drivers out of work who are 94 percent male, with an average level of education of high school or one year of college.’ ‘That one innovation,’ he added, ‘will be enough to create riots in the street. And we’re about to do the same thing to retail workers, call center workers, fast-food workers, insurance companies, accounting firms.’ . . . .”
Theoretical physicist Stephen Hawking warned at the end of 2014 of the potential danger to humanity posed by the growth of AI (artificial intelligence) technology. His warnings have been echoed by tech titans such as Tesla’s Elon Musk and Bill Gates.
The program concludes with Mr. Emory’s prognostications about AI, preceding Stephen Hawking’s warning by twenty years.
Program Highlights Include:
- Levandowski’s apparent shepherding of a company called–perhaps significantly–Odin Wave to utilize Lidar-like technology.
- The role of DARPA in initiating the self-driving vehicles contest that was Levandowski’s point of entry into his tech ventures.
- Levandowski’s development of the Ghostrider self-driving motorcycles, which experienced 800 crashes in 1,000 miles.
1a. In order to underscore what we mean by technocratic fascism, we reference a vitally important article by David Golumbia. ” . . . . Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (‘civic,’ ‘ethical,’ ‘white’ and ‘black’ hat alike), hacktivists, WikiLeaks fans [and Julian Assange et al–D. E.], Anonymous ‘members,’ even Edward Snowden himself walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands. . . . [Tor co-creator] Dingledine asserts that a small group of software developers can assign to themselves that role, and that members of democratic polities have no choice but to accept them having that role. . . .”
1b. Anthony Levandowski, an engineer at the foundation of the development of Google Street Map technology and self-driving cars, is proposing an AI Godhead that would rule the world and would be worshipped as a God by the planet’s citizens. Insight into his personality was provided by an associate: “ . . . . ‘He had this very weird motivation about robots taking over the world—like actually taking over, in a military sense…It was like [he wanted] to be able to control the world, and robots were the way to do that. He talked about starting a new country on an island. Pretty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it’. . . .”
1c. Transitioning from our last program–updating AI (artificial intelligence) technology as it applies to technocratic fascism–we note that AI machines are being designed to develop other AI’s–“The Rise of the Machine.” ” . . . . Jeff Dean, one of Google’s leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data. AutoML, in turn, is a machine learning algorithm that learns to build other machine-learning algorithms. With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry. . . . This is not altruism. . . .”
“The Rise of the Machine” by Cade Metz; The New York Times; 11/6/2017; p. B1 [Western Edition].
They are a dream of researchers but perhaps a nightmare for highly skilled computer programmers: artificially intelligent machines that can build other artificially intelligent machines. With recent speeches in both Silicon Valley and China, Jeff Dean, one of Google’s leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data.
AutoML, in turn, is a machine learning algorithm that learns to build other machine-learning algorithms. With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry. The project is part of a much larger effort to bring the latest and greatest A.I. techniques to a wider collection of companies and software developers.
The tech industry is promising everything from smartphone apps that can recognize faces to cars that can drive on their own. But by some estimates, only 10,000 people worldwide have the education, experience and talent needed to build the complex and sometimes mysterious mathematical algorithms that will drive this new breed of artificial intelligence.
The world’s largest tech businesses, including Google, Facebook and Microsoft, sometimes pay millions of dollars a year to A.I. experts, effectively cornering the market for this hard-to-find talent. The shortage isn’t going away anytime soon, just because mastering these skills takes years of work. The industry is not willing to wait. Companies are developing all sorts of tools that will make it easier for any operation to build its own A.I. software, including things like image and speech recognition services and online chatbots. “We are following the same path that computer science has followed with every new type of technology,” said Joseph Sirosh, a vice president at Microsoft, which recently unveiled a tool to help coders build deep neural networks, a type of computer algorithm that is driving much of the recent progress in the A.I. field. “We are eliminating a lot of the heavy lifting.” This is not altruism.
Researchers like Mr. Dean believe that if more people and companies are working on artificial intelligence, it will propel their own research. At the same time, companies like Google, Amazon and Microsoft see serious money in the trend that Mr. Sirosh described. All of them are selling cloud-computing services that can help other businesses and developers build A.I. “There is real demand for this,” said Matt Scott, a co-founder and the chief technical officer of Malong, a start-up in China that offers similar services. “And the tools are not yet satisfying all the demand.”
This is most likely what Google has in mind for AutoML, as the company continues to hail the project’s progress. Google’s chief executive, Sundar Pichai, boasted about AutoML last month while unveiling a new Android smartphone.
Eventually, the Google project will help companies build systems with artificial intelligence even if they don’t have extensive expertise, Mr. Dean said. Today, he estimated, no more than a few thousand companies have the right talent for building A.I., but many more have the necessary data. “We want to go from thousands of organizations solving machine learning problems to millions,” he said.
Google is investing heavily in cloud-computing services — services that help other businesses build and run software — which it expects to be one of its primary economic engines in the years to come. And after snapping up such a large portion of the world’s top A.I researchers, it has a means of jump-starting this engine.
Neural networks are rapidly accelerating the development of A.I. Rather than building an image-recognition service or a language translation app by hand, one line of code at a time, engineers can much more quickly build an algorithm that learns tasks on its own. By analyzing the sounds in a vast collection of old technical support calls, for instance, a machine-learning algorithm can learn to recognize spoken words.
But building a neural network is not like building a website or some run-of-themill smartphone app. It requires significant math skills, extreme trial and error, and a fair amount of intuition. Jean-François Gagné, the chief executive of an independent machine-learning lab called Element AI, refers to the process as “a new kind of computer programming.”
In building a neural network, researchers run dozens or even hundreds of experiments across a vast network of machines, testing how well an algorithm can learn a task like recognizing an image or translating from one language to another. Then they adjust particular parts of the algorithm over and over again, until they settle on something that works. Some call it a “dark art,” just because researchers find it difficult to explain why they make particular adjustments.
But with AutoML, Google is trying to automate this process. It is building algorithms that analyze the development of other algorithms, learning which methods are successful and which are not. Eventually, they learn to build more effective machine learning. Google said AutoML could now build algorithms that, in some cases, identified objects in photos more accurately than services built solely by human experts. Barret Zoph, one of the Google researchers behind the project, believes that the same method will eventually work well for other tasks, like speech recognition or machine translation. This is not always an easy thing to wrap your head around. But it is part of a significant trend in A.I. research. Experts call it “learning to learn” or “metalearning.”
Many believe such methods will significantly accelerate the progress of A.I. in both the online and physical worlds. At the University of California, Berkeley, researchers are building techniques that could allow robots to learn new tasks based on what they have learned in the past. “Computers are going to invent the algorithms for us, essentially,” said a Berkeley professor, Pieter Abbeel. “Algorithms invented by computers can solve many, many problems very quickly — at least that is the hope.”
This is also a way of expanding the number of people and businesses that can build artificial intelligence. These methods will not replace A.I. researchers entirely. Experts, like those at Google, must still do much of the important design work.
But the belief is that the work of a few experts can help many others build their own software. Renato Negrinho, a researcher at Carnegie Mellon University who is exploring technology similar to AutoML, said this was not a reality today but should be in the years to come. “It is just a matter of when,” he said.
2a.We next review some terrifying and consummately important developments taking shape in the context of what Mr. Emory has called “technocratic fascism:”
- In FTR #‘s 718 and 946, we detailed the frightening, ugly reality behind Facebook. Facebook is now developing technology that will permit the tapping of users thoughts by monitoring brain-to-computer technology. Facebook’s R & D is headed by Regina Dugan, who used to head the Pentagon’s DARPA. Facebook’s Building 8 is patterned after DARPA: ” . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
- ” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
- ” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
2b. Next we review still more about Facebook’s brain-to-computer interface:
- ” . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”
- ” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”
2c. Collating the information about Facebook’s brain-to-computer interface with their documented actions gathering psychological intelligence about troubled teenagers gives us a peek into what may lie behind Dugan’s bland reassurances:
- ” . . . . The 23-page document allegedly revealed that the social network provided detailed data about teens in Australia—including when they felt ‘overwhelmed’ and ‘anxious’—to advertisers. The creepy implication is that said advertisers could then go and use the data to throw more ads down the throats of sad and susceptible teens. . . . By monitoring posts, pictures, interactions and internet activity in real-time, Facebook can work out when young people feel ‘stressed’, ‘defeated’, ‘overwhelmed’, ‘anxious’, ‘nervous’, ‘stupid’, ‘silly’, ‘useless’, and a ‘failure’, the document states. . . .”
- ” . . . . A presentation prepared for one of Australia’s top four banks shows how the $US 415 billion advertising-driven giant has built a database of Facebook users that is made up of 1.9 million high schoolers with an average age of 16, 1.5 million tertiary students averaging 21 years old, and 3 million young workers averaging 26 years old. Detailed information on mood shifts among young people is ‘based on internal Facebook data’, the document states, ‘shareable under non-disclosure agreement only’, and ‘is not publicly available’. . . .”
- “In a statement given to the newspaper, Facebook confirmed the practice and claimed it would do better, but did not disclose whether the practice exists in other places like the US. . . .”
2d. In this context, note that Facebook is also introducing an AI function to reference its users photos.
2e. The next version of Amazon’s Echo, the Echo Look, has a microphone and camera so it can take pictures of you and give you fashion advice. This is an AI-driven device designed to placed in your bedroom to capture audio and video. The images and videos are stored indefinitely in the Amazon cloud. When Amazon was asked if the photos, videos, and the data gleaned from the Echo Look would be sold to third parties, Amazon didn’t address that question. It would appear that selling off your private info collected from these devices is presumably another feature of the Echo Look: ” . . . . Amazon is giving Alexa eyes. And it’s going to let her judge your outfits.The newly announced Echo Look is a virtual assistant with a microphone and a camera that’s designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you’re not sure if your outfit is cute, but it’s also got a built-in app called StyleCheck that is worth some further dissection. . . .”
We then further develop the stunning implications of Amazon’s Echo Look AI technology:
- ” . . . . Amazon is giving Alexa eyes. And it’s going to let her judge your outfits.The newly announced Echo Look is a virtual assistant with a microphone and a camera that’s designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you’re not sure if your outfit is cute, but it’s also got a built-in app called StyleCheck that is worth some further dissection. . . .”
- ” . . . . This might seem overly speculative or alarmist to some, but Amazon isn’t offering any reassurance that they won’t be doing more with data gathered from the Echo Look. When asked if the company would use machine learning to analyze users’ photos for any purpose other than fashion advice, a representative simply told The Verge that they ‘can’t speculate’ on the topic. The rep did stress that users can delete videos and photos taken by the Look at any time, but until they do, it seems this content will be stored indefinitely on Amazon’s servers.
- This non-denial means the Echo Look could potentially provide Amazon with the resource every AI company craves: data. And full-length photos of people taken regularly in the same location would be a particularly valuable dataset — even more so if you combine this information with everything else Amazon knows about its customers (their shopping habits, for one). But when asked whether the company would ever combine these two datasets, an Amazon rep only gave the same, canned answer: ‘Can’t speculate.’ . . . ”
- Noteworthy in this context is the fact that AI’s have shown that they quickly incorporate human traits and prejudices. (This is reviewed at length above.) ” . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . .
2f. Ominously, Facebook’s artificial intelligence robots have begun talking to each other in their own language, that their human masters can not understand. “ . . . . Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language. . . . The company chose to shut down the chats because ‘our interest was having bots who could talk to people,’ researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.) The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up . . .”
2g. Facebook’s negotiation-bots didn’t just make up their own language during the course of this experiment. They learned how to lie for the purpose of maximizing their negotiation outcomes, as well:
“ . . . . ‘We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,’ writes the team. ‘Deceit is a complex skill that requires hypothesizing the other agent’s beliefs, and is learned relatively late in child development. Our agents have learned to deceive without any explicit human design, simply by trying to achieve their goals.’ . . . ”
3a. One of the stranger stories in recent years has been the mystery of Cicada 3301, the anonymous group that posts annual challenges of super difficult puzzles used to recruit talented code-breakers and invite them to join some sort of Cypherpunk cult that wants to build a global AI-‘god brain’. Or something. It’s a weird and creepy organization that’s speculated to either be a front for an intelligence agency or perhaps some sort of underground network of wealth Libertarians. And, for now, Cicada 3301 remains anonymous.
In that context, it’s worth noting that someone with a lot of cash has already started a foundation to accomplish that very same ‘AI god’ goal: Anthony Levandowski, a former Google Engineer who played a big role in the development Google’s “Street Map” technology and a string of self-driving vehicle companies, started Way of the Future, a nonprofit religious corporation with the mission “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society”:
Intranet service? Check. Autonomous motorcycle? Check. Driverless car technology? Check. Obviously the next logical project for a successful Silicon Valley engineer is to set up an AI-worshipping religious organization.
Anthony Levandowski, who is at the center of a legal battle between Uber and Google’s Waymo, has established a nonprofit religious corporation called Way of the Future, according to state filings first uncovered by Wired’s Backchannel. Way of the Future’s startling mission: “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.”
Levandowski was co-founder of autonomous trucking company Otto, which Uber bought in 2016. He was fired from Uber in May amid allegations that he had stolen trade secrets from Google to develop Otto’s self-driving technology. He must be grateful for this religious fall-back project, first registered in 2015.
The Way of the Future team did not respond to requests for more information about their proposed benevolent AI overlord, but history tells us that new technologies and scientific discoveries have continually shaped religion, killing old gods and giving birth to new ones.
…
“The church does a terrible job of reaching out to Silicon Valley types,” acknowledges Christopher Benek a pastor in Florida and founding chair of the Christian Transhumanist Association.
Silicon Valley, meanwhile, has sought solace in technology and has developed quasi-religious concepts including the “singularity”, the hypothesis that machines will eventually be so smart that they will outperform all human capabilities, leading to a superhuman intelligence that will be so sophisticated it will be incomprehensible to our tiny fleshy, rational brains.
For futurists like Ray Kurzweil, this means we’ll be able to upload copies of our brains to these machines, leading to digital immortality. Others like Elon Musk and Stephen Hawking warn that such systems pose an existential threat to humanity.
“With artificial intelligence we are summoning the demon,” Musk said at a conference in 2014. “In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out.”
Benek argues that advanced AI is compatible with Christianity – it’s just another technology that humans have created under guidance from God that can be used for good or evil.
“I totally think that AI can participate in Christ’s redemptive purposes,” he said, by ensuring it is imbued with Christian values.
“Even if people don’t buy organized religion, they can buy into ‘do unto others’.”
For transhumanist and “recovering Catholic” Zoltan Istvan, religion and science converge conceptually in the singularity.
“God, if it exists as the most powerful of all singularities, has certainly already become pure organized intelligence,” he said, referring to an intelligence that “spans the universe through subatomic manipulation of physics”.
And perhaps, there are other forms of intelligence more complicated than that which already exist and which already permeate our entire existence. Talk about ghost in the machine,” he added.
For Istvan, an AI-based God is likely to be more rational and more attractive than current concepts (“the Bible is a sadistic book”) and, he added, “this God will actually exist and hopefully will do things for us.”
We don’t know whether Levandowski’s Godhead ties into any existing theologies or is a manmade alternative, but it’s clear that advancements in technologies including AI and bioengineering kick up the kinds of ethical and moral dilemmas that make humans seek the advice and comfort from a higher power: what will humans do once artificial intelligence outperforms us in most tasks? How will society be affected by the ability to create super-smart, athletic “designer babies” that only the rich can afford? Should a driverless car kill five pedestrians or swerve to the side to kill the owner?
If traditional religions don’t have the answer, AI – or at least the promise of AI – might be alluring.
———-
3b. As the following long piece by Wired demonstrates, Levandowski doesn’t appear to be too concerned about ethics, especially if they get in the way of his dream of transforming the world through robotics. Transforming and taking over the world through robotics. Yep. The article focuses on the various legal troubles Levandowski faces over charges by Google that he stole the “Lidar” technology he helped develop at Google and took it to Uber (a company with a serious moral compass deficit). (Lidar is a laser-based radar-like technology used by vehicles to rapidly map their surroundings)
The article also includes some interesting insights into what makes Levandowski tick. According to friend and former engineer at one of Levandowski’s companies: “ . . . . ‘He had this very weird motivation about robots taking over the world—like actually taking over, in a military sense…It was like [he wanted] to be able to control the world, and robots were the way to do that. He talked about starting a new country on an island. Pretty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it’. . . .”
Further insight into Levandowski’s personality can be gleaned from e‑mails with Travis Kalanick, former CEO of Uber: ” . . . . In Kalanick, Levandowski found both a soulmate and a mentor to replace Sebastian Thrun. Text messages between the two, disclosed during the lawsuit’s discovery process, capture Levandowski teaching Kalanick about lidar at late night tech sessions, while Kalanick shared advice on management. ‘Down to hang out this eve and mastermind some shit,’ texted Kalanick, shortly after the acquisition. ‘We’re going to take over the world. One robot at a time,’ wrote Levandowski another time. . . .”
Those who view self-driving cars and other AI-based technologies as flawless would do well to consider the following: ” . . . .Last December, Uber launched a pilot self-driving taxi program in San Francisco. As with Otto in Nevada, Levandowski failed to get a license to operate the high-tech vehicles, claiming that because the cars needed a human overseeing them, they were not truly autonomous. The DMV disagreed and revoked the vehicles’ licenses. Even so, during the week the cars were on the city’s streets, they had been spotted running red lights on numerous occasions. . . . .”
Noting Levandowski’s personality quirks, the article poses a fundamental question: ” . . . . But even the smartest car will crack up if you floor the gas pedal too long. Once feted by billionaires, Levandowski now finds himself starring in a high-stakes public trial as his two former employers square off. By extension, the whole technology industry is there in the dock with Levandowski. Can we ever trust self-driving cars if it turns out we can’t trust the people who are making them? . . . .”
“God Is a Bot, and Anthony Levandowski Is His Messenger” by Mark Harris; Wired; 09/27/2017
Many people in Silicon Valley believe in the Singularity—the day in our near future when computers will surpass humans in intelligence and kick off a feedback loop of unfathomable change.
When that day comes, Anthony Levandowski will be firmly on the side of the machines. In September 2015, the multi-millionaire engineer at the heart of the patent and trade secrets lawsuit between Uber and Waymo, Google’s self-driving car company, founded a religious organization called Way of the Future. Its purpose, according to previously unreported state filings, is nothing less than to “develop and promote the realization of a Godhead based on Artificial Intelligence.”
Way of the Future has not yet responded to requests for the forms it must submit annually to the Internal Revenue Service (and make publicly available), as a non-profit religious corporation. However, documents filed with California show that Levandowski is Way of the Future’s CEO and President, and that it aims “through understanding and worship of the Godhead, [to] contribute to the betterment of society.”
A divine AI may still be far off, but Levandowski has made a start at providing AI with an earthly incarnation. The autonomous cars he was instrumental in developing at Google are already ferrying real passengers around Phoenix, Arizona, while self-driving trucks he built at Otto are now part of Uber’s plan to make freight transport safer and more efficient. He even oversaw a passenger-carrying drones project that evolved into Larry Page’s Kitty Hawk startup.
Levandowski has done perhaps more than anyone else to propel transportation toward its own Singularity, a time when automated cars, trucks and aircraft either free us from the danger and drudgery of human operation—or decimate mass transit, encourage urban sprawl, and enable deadly bugs and hacks.
But before any of that can happen, Levandowski must face his own day of reckoning. In February, Waymo—the company Google’s autonomous car project turned into—filed a lawsuit against Uber. In its complaint, Waymo says that Levandowski tried to use stealthy startups and high-tech tricks to take cash, expertise, and secrets from Google, with the aim of replicating its vehicle technology at arch-rival Uber. Waymo is seeking damages of nearly $1.9 billion—almost half of Google’s (previously unreported) $4.5 billion valuation of the entire self-driving division. Uber denies any wrongdoing.
Next month’s trial in a federal courthouse in San Francisco could steer the future of autonomous transportation. A big win for Waymo would prove the value of its patents and chill Uber’s efforts to remove profit-sapping human drivers from its business. If Uber prevails, other self-driving startups will be encouraged to take on the big players—and a vindicated Levandowski might even return to another startup. (Uber fired him in May.)
Levandowski has made a career of moving fast and breaking things. As long as those things were self-driving vehicles and little-loved regulations, Silicon Valley applauded him in the way it knows best—with a firehose of cash. With his charm, enthusiasm, and obsession with deal-making, Levandowski came to personify the disruption that autonomous transportation is likely to cause.
But even the smartest car will crack up if you floor the gas pedal too long. Once feted by billionaires, Levandowski now finds himself starring in a high-stakes public trial as his two former employers square off. By extension, the whole technology industry is there in the dock with Levandowski. Can we ever trust self-driving cars if it turns out we can’t trust the people who are making them?
…
In 2002, Levandowski’s attention turned, fatefully, toward transportation. His mother called him from Brussels about a contest being organized by the Pentagon’s R&D arm, DARPA. The first Grand Challenge in 2004 would race robotic, computer-controlled vehicles in a desert between Los Angeles and Las Vegas—a Wacky Races for the 21st century.
“I was like, ‘Wow, this is absolutely the future,’” Levandowski told me in 2016. “It struck a chord deep in my DNA. I didn’t know where it was going to be used or how it would work out, but I knew that this was going to change things.”
Levandowski’s entry would be nothing so boring as a car. “I originally wanted to do an automated forklift,” he said at a follow-up competition in 2005. “Then I was driving to Berkeley [one day] and a pack of motorcycles descended on my pickup and flowed like water around me.” The idea for Ghostrider was born—a gloriously deranged self-driving Yamaha motorcycle whose wobbles inspired laughter from spectators, but awe in rivals struggling to get even four-wheeled vehicles driving smoothly.
“Anthony would go for weeks on 25-hour days to get everything done. Every day he would go to bed an hour later than the day before,” remembers Randy Miller, a college friend who worked with him on Ghostrider. “Without a doubt, Anthony is the smartest, hardest-working and most fearless person I’ve ever met.”
Levandowski and his team of Berkeley students maxed out his credit cards getting Ghostrider working on the streets of Richmond, California, where it racked up an astonishing 800 crashes in a thousand miles of testing. Ghostrider never won a Grand Challenge, but its ambitious design earned Levandowski bragging rights—and the motorbike a place in the Smithsonian.
“I see Grand Challenge not as the end of the robotics adventure we’re on, it’s almost like the beginning,” Levandowski told Scientific American in 2005. “This is where everyone is meeting, becoming aware of who’s working on what, [and] filtering out the non-functional ideas.”
One idea that made the cut was lidar—spinning lasers that rapidly built up a 3D picture of a car’s surroundings. In the lidar-less first Grand Challenge, no vehicle made it further than a few miles along the course. In the second, an engineer named Dave Hall constructed a lidar that “was giant. It was one-off but it was awesome,” Levandowski told me. “We realized, yes, lasers [are] the way to go.”
After graduate school, Levandowski went to work for Hall’s company, Velodyne, as it pivoted from making loudspeakers to selling lidars. Levandowski not only talked his way into being the company’s first sales rep, targeting teams working towards the next Grand Challenge, but he also worked on the lidar’s networking. By the time of the third and final DARPA contest in 2007, Velodyne’s lidar was mounted on five of the six vehicles that finished.
But Levandowski had already moved on. Ghostrider had caught the eye of Sebastian Thrun, a robotics professor and team leader of Stanford University’s winning entry in the second competition. In 2006, Thrun invited Levandowski to help out with a project called VueTool, which was setting out to piece together street-level urban maps using cameras mounted on moving vehicles. Google was already working on a similar system, called Street View. Early in 2007, Google brought on Thrun and his entire team as employees—with bonuses as high as $1 million each, according to one contemporary at Google—to troubleshoot Street View and bring it to launch.
“[Hiring the VueTool team] was very much a scheme for paying Thrun and the others to show Google how to do it right,” remembers the engineer. The new hires replaced Google’s bulky, custom-made $250,000 cameras with $15,000 off-the-shelf panoramic webcams. Then they went auto shopping. “Anthony went to a car store and said we want to buy 100 cars,” Sebastian Thrun told me in 2015. “The dealer almost fell over.”
Levandowski was also making waves in the office, even to the point of telling engineers not to waste time talking to colleagues outside the project, according to one Google engineer. “It wasn’t clear what authority Anthony had, and yet he came in and assumed authority,” said the engineer, who asked to remain anonymous. “There were some bad feelings but mostly [people] just went with it. He’s good at that. He’s a great leader.”
Under Thrun’s supervision, Street View cars raced to hit Page’s target of capturing a million miles of road images by the end of 2007. They finished in October—just in time, as it turned out. Once autumn set in, every webcam succumbed to rain, condensation, or cold weather, grounding all 100 vehicles.
Part of the team’s secret sauce was a device that would turn a raw camera feed into a stream of data, together with location coordinates from GPS and other sensors. Google engineers called it the Topcon box, named after the Japanese optical firm that sold it. But the box was actually designed by a local startup called 510 Systems. “We had one customer, Topcon, and we licensed our technology to them,” one of the 510 Systems owners told me.
That owner was…Anthony Levandowski, who had cofounded 510 Systems with two fellow Berkeley researchers, Pierre-Yves Droz and Andrew Schultz, just weeks after starting work at Google. 510 Systems had a lot in common with the Ghostrider team. Berkeley students worked there between lectures, and Levandowski’s mother ran the office. Topcon was chosen as a go-between because it had sponsored the self-driving motorcycle. “I always liked the idea that…510 would be the people that made the tools for people that made maps, people like Navteq, Microsoft, and Google,” Levandowski told me in 2016.
Google’s engineering team was initially unaware that 510 Systems was Levandowski’s company, several engineers told me. That changed once Levandowski proposed that Google also use the Topcon box for its small fleet of aerial mapping planes. “When we found out, it raised a bunch of eyebrows,” remembers an engineer. Regardless, Google kept buying 510’s boxes.
**********
The truth was, Levandowski and Thrun were on a roll. After impressing Larry Page with Street View, Thrun suggested an even more ambitious project called Ground Truth to map the world’s streets using cars, planes, and a 2,000-strong team of cartographers in India. Ground Truth would allow Google to stop paying expensive licensing fees for outside maps, and bring free turn-by-turn directions to Android phones—a key differentiator in the early days of its smartphone war with Apple.
Levandowski spent months shuttling between Mountain View and Hyderabad—and yet still found time to create an online stock market prediction game with Jesse Levinson, a computer science post-doc at Stanford who later cofounded his own autonomous vehicle startup, Zoox. “He seemed to always be going a mile a minute, doing ten things,” said Ben Discoe, a former engineer at 510. “He had an engineer’s enthusiasm that was contagious, and was always thinking about how quickly we can get to this amazing robot future he’s so excited about.”
One time, Discoe was chatting in 510’s break room about how lidar could help survey his family’s tea farm on Hawaii. “Suddenly Anthony said, ‘Why don’t you just do it? Get a lidar rig, put it in your luggage, and go map it,’” said Discoe. “And it worked. I made a kick-ass point cloud [3D digital map] of the farm.”
If Street View had impressed Larry Page, the speed and accuracy of Ground Truth’s maps blew him away. The Google cofounder gave Thrun carte blanche to do what he wanted; he wanted to return to self-driving cars.
Project Chauffeur began in 2008, with Levandowski as Thrun’s right-hand man. As with Street View, Google engineers would work on the software while 510 Systems and a recent Levandowski startup, Anthony’s Robots, provided the lidar and the car itself.
Levandowski said this arrangement would have acted as a firewall if anything went terribly wrong. “Google absolutely did not want their name associated with a vehicle driving in San Francisco,” he told me in 2016. “They were worried about an engineer building a car that drove itself that crashes and kills someone and it gets back to Google. You have to ask permission [for side projects] and your manager has to be OK with it. Sebastian was cool. Google was cool.”
In order to move Project Chauffeur along as quickly as possible from theory to reality, Levandowski enlisted the help of a filmmaker friend he had worked with at Berkeley. In the TV show the two had made, Levandowski had created a cybernetic dolphin suit (seriously). Now they came up with the idea of a self-driving pizza delivery car for a show on the Discovery Channel called Prototype This! Levandowski chose a Toyota Prius, because it had a drive-by-wire system that was relatively easy to hack.
In a matter of weeks, Levandowski’s team had the car, dubbed Pribot, driving itself. If anyone asked what they were doing, Levandowski told me, “We’d say it’s a laser and just drive off.”
“Those were the Wild West days,” remembers Ben Discoe. “Anthony and Pierre-Yves…would engage the algorithm in the car and it would almost swipe some other car or almost go off the road, and they would come back in and joke about it. Tell stories about how exciting it was.”
But for the Discovery Channel show, at least, Levandowski followed the letter of the law. The Bay Bridge was cleared of traffic and a squad of police cars escorted the unmanned Prius from start to finish. Apart from getting stuck against a wall, the drive was a success. “You’ve got to push things and get some bumps and bruises along the way,” said Levandowski.
Another incident drove home the potential of self-driving cars. In 2010, Levandowski’s partner Stefanie Olsen was involved in a serious car accident while nine months pregnant with their first child. “My son Alex was almost never born,” Levandowski told a room full of Berkeley students in 2013. “Transportation [today] takes time, resources and lives. If you can fix that, that’s a really big problem to address.”
Over the next few years, Levandowski was key to Chauffeur’s progress. 510 Systems built five more self-driving cars for Google—as well as random gadgets like an autonomous tractor and a portable lidar system. “Anthony is lightning in a bottle, he has so much energy and so much vision,” remembers a friend and former 510 engineer. “I fricking loved brainstorming with the guy. I loved that we could create a vision of the world that didn’t exist yet and both fall in love with that vision.”
But there were downsides to his manic energy, too. “He had this very weird motivation about robots taking over the world—like actually taking over, in a military sense,” said the same engineer. “It was like [he wanted] to be able to control the world, and robots were the way to do that. He talked about starting a new country on an island. Pretty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it.”
In early 2011, that plan was to bring 510 Systems into the Googleplex. The startup’s engineers had long complained that they did not have equity in the growing company. When matters came to a head, Levandowski drew up a plan that would reserve the first $20 million of any acquisition for 510’s founders and split the remainder among the staff, according to two former 510 employees. “They said we were going to sell for hundreds of millions,” remembers one engineer. “I was pretty thrilled with the numbers.”
Indeed, that summer, Levandowski sold 510 Systems and Anthony’s Robots to Google – for $20 million, the exact cutoff before the wealth would be shared. Rank and file engineers did not see a penny, and some were even let go before the acquisition was completed. “I regret how it was handled…Some people did get the short end of the stick,” admitted Levandowski in 2016. The buyout also caused resentment among engineers at Google, who wondered how Levandowski could have made such a profit from his employer.
There would be more profits to come. According to a court filing, Page took a personal interest in motivating Levandowski, issuing a directive in 2011 to “make Anthony rich if Project Chauffeur succeeds.” Levandowski was given by far the highest share, about 10 percent, of a bonus program linked to a future valuation of Chauffeur—a decision that would later cost Google dearly.
**********
Ever since a New York Times story in 2010 revealed Project Chauffeur to the world, Google had been wanting to ramp up testing on public streets. That was tough to arrange in well-regulated California, but Levandowski wasn’t about to let that stop him. While manning Google’s stand at the Consumer Electronics Show in Las Vegas in January 2011, he got to chatting with lobbyist David Goldwater. “He told me he was having a hard time in California and I suggested Google try a smaller state, like Nevada,” Goldwater told me.
Together, Goldwater and Levandowski drafted legislation that would allow the company to test and operate self-driving cars in Nevada. By June, their suggestions were law, and in May 2012, a Google Prius passed the world’s first “self-driving tests” in Las Vegas and Carson City. “Anthony is gifted in so many different ways,” said Goldwater. “He’s got a strategic mind, he’s got a tactical mind, and a once-in-a-generation intellect. The great thing about Anthony is that he was willing to take risks, but they were calculated risks.”
However, Levandowski’s risk-taking had ruffled feathers at Google. It was only after Nevada had passed its legislation that Levandowski discovered Google had a whole team dedicated to government relations. “I thought you could just do it yourself,” he told me sheepishly in 2016. “[I] got a little bit in trouble for doing it.”
That might be understating it. One problem was that Levandowski had lost his air cover at Google. In May 2012, his friend Sebastian Thrun turned his attention to starting online learning company Udacity. Page put another professor, Chris Urmson from Carnegie Mellon, in charge. Not only did Levandowski think the job should have been his, but the two also had terrible chemistry.
“They had a really hard time getting along,” said Page at a deposition in July. “It was a constant management headache to help them get through that.”
Then in July 2013, Gaetan Pennecot, a 510 alum working on Chauffeur’s lidar team, got a worrying call from a vendor. According to Waymo’s complaint, a small company called Odin Wave had placed an order for a custom-made part that was extremely similar to one used in Google’s lidars.
Pennecot shared this with his team leader, Pierre-Yves Droz, the cofounder of 510 Systems. Droz did some digging and replied in an email to Pennecot (in French, which we’ve translated): “They’re clearly making a lidar. And it’s John (510’s old lawyer) who incorporated them. The date of incorporation corresponds to several months after Anthony fell out of favor at Google.”
As the story emerges in court documents, Droz had found Odin Wave’s company records. Not only had Levandowski’s lawyer founded the company in August 2012, but it was also based in a Berkeley office building that Levandowski owned, was being run by a friend of Levandowski’s, and its employees included engineers he had worked with at Velodyne and 510 Systems. One even spoke with Levandowski before being hired. The company was developing long range lidars similar to those Levandowski had worked on at 510 Systems. But Levandowski’s name was nowhere on the firm’s paperwork.
Droz confronted Levandowski, who denied any involvement, and Droz decided not to follow the paper trail any further. “I was pretty happy working at Google, and…I didn’t want to jeopardize that by…exposing more of Anthony’s shenanigans,” he said at a deposition last month.
Odin Wave changed its name to Tyto Lidar in 2014, and in the spring of 2015 Levandowski was even part of a Google investigation into acquiring Tyto. This time, however, Google passed on the purchase. That seemed to demoralize Levandowski further. “He was rarely at work, and he left a lot of the responsibility [for] evaluating people on the team to me or others,” said Droz in his deposition.
“Over time my patience with his manipulations and lack of enthusiasm and commitment to the project [sic], it became clearer and clearer that this was a lost cause,” said Chris Urmson in a deposition.
As he was torching bridges at Google, Levandowski was itching for a new challenge. Luckily, Sebastian Thrun was back on the autonomous beat. Larry Page and Thrun had been thinking about electric flying taxis that could carry one or two people. Project Tiramisu, named after the dessert which means “lift me up” in Italian, involved a winged plane flying in circles, picking up passengers below using a long tether.
Thrun knew just the person to kickstart Tiramisu. According to a source working there at the time, Levandowski was brought in to oversee Tiramisu as an “advisor and stakeholder.” Levandowski would show up at the project’s workspace in the evenings, and was involved in tests at one of Page’s ranches. Tiramisu’s tethers soon pivoted to a ride-aboard electric drone, now called the Kitty Hawk flyer. Thrun is CEO of Kitty Hawk, which is funded by Page rather than Alphabet, the umbrella company that now owns Google and its sibling companies.
Waymo’s complaint says that around this time Levandowski started soliciting Google colleagues to leave and start a competitor in the autonomous vehicle business. Droz testified that Levandowski told him it “would be nice to create a new self-driving car startup.” Furthermore, he said that Uber would be interested in buying the team responsible for Google’s lidar.
Uber had exploded onto the self-driving car scene early in 2015, when it lured almost 50 engineers away from Carnegie Mellon University to form the core of its Advanced Technologies Center. Uber cofounder Travis Kalanick had described autonomous technology as an existential threat to the ride-sharing company, and was hiring furiously. According to Droz, Levandowski said that he began meeting Uber executives that summer.
When Urmson learned of Levandowski’s recruiting efforts, his deposition states, he sent an email to human resources in August beginning, “We need to fire Anthony Levandowski.” Despite an investigation, that did not happen.
But Levandowski’s now not-so-secret plan would soon see him leaving of his own accord—with a mountain of cash. In 2015, Google was due to starting paying the Chauffeur bonuses, linked to a valuation that it would have “sole and absolute discretion” to calculate. According to previously unreported court filings, external consultants calculated the self-driving car project as being worth $8.5 billion. Google ultimately valued Chauffeur at around half that amount: $4.5 billion. Despite this downgrade, Levandowski’s share in December 2015 amounted to over $50 million – nearly twice as much as the second largest bonus of $28 million, paid to Chris Urmson.
**********
Otto seemed to spring forth fully formed in May 2016, demonstrating a self-driving 18-wheel truck barreling down a Nevada highwaywith no one behind the wheel. In reality, Levandowski had been planning it for some time.
Levandowski and his Otto cofounders at Google had spent the Christmas holidays and the first weeks of 2016 taking their recruitment campaign up a notch, according to Waymo court filings. Waymo’s complaint alleges Levandowski told colleagues he was planning to “replicate” Waymo’s technology at a competitor, and was even soliciting his direct reports at work.
One engineer who had worked at 510 Systems attended a barbecue at Levandowski’s home in Palo Alto, where Levandowski pitched his former colleagues and current Googlers on the startup. “He wanted every Waymo person to resign simultaneously, a fully synchronized walkout. He was firing people up for that,” remembers the engineer.
On January 27, Levandowski resigned from Google without notice. Within weeks, Levandowski had a draft contract to sell Otto to Uber for an amount widely reported as $680 million. Although the full-scale synchronized walkout never happened, half a dozen Google employees went with Levandowski, and more would join in the months ahead. But the new company still did not have a product to sell.
Levandowski brought Nevada lobbyist David Goldwater back to help. “There was some brainstorming with Anthony and his team,” said Goldwater in an interview. “We were looking to do a demonstration project where we could show what he was doing.”
After exploring the idea of an autonomous passenger shuttle in Las Vegas, Otto settled on developing a driverless semi-truck. But with the Uber deal rushing forward, Levandowski needed results fast. “By the time Otto was ready to go with the truck, they wanted to get right on the road,” said Goldwater. That meant demonstrating their prototype without obtaining the very autonomous vehicle licence Levandowski had persuaded Nevada to adopt. (One state official called this move “illegal.”) Levandowski also had Otto acquire the controversial Tyto Lidar—the company based in the building he owned—in May, for an undisclosed price.
The full-court press worked. Uber completed its own acquisition of Otto in August, and Uber founder Travis Kalanick put Levandowski in charge of the combined companies’ self-driving efforts across personal transportation, delivery and trucking. Uber would even propose a Tiramisu-like autonomous air taxi called Uber Elevate. Now reporting directly to Kalanick and in charge of a 1500-strong group, Levandowski demanded the email address “robot@uber.com.”
In Kalanick, Levandowski found both a soulmate and a mentor to replace Sebastian Thrun. Text messages between the two, disclosed during the lawsuit’s discovery process, capture Levandowski teaching Kalanick about lidar at late night tech sessions, while Kalanick shared advice on management. “Down to hang out this eve and mastermind some shit,” texted Kalanick, shortly after the acquisition. “We’re going to take over the world. One robot at a time,” wrote Levandowski another time.
But Levandowski’s amazing robot future was about to crumble before his eyes.
***********
Last December, Uber launched a pilot self-driving taxi program in San Francisco. As with Otto in Nevada, Levandowski failed to get a license to operate the high-tech vehicles, claiming that because the cars needed a human overseeing them, they were not truly autonomous. The DMV disagreed and revoked the vehicles’ licenses. Even so, during the week the cars were on the city’s streets, they had been spotted running red lights on numerous occasions.
Worse was yet to come. Levandowski had always been a controversial figure at Google. With his abrupt resignation, the launch of Otto, and its rapid acquisition by Uber, Google launched an internal investigation in the summer of 2016. It found that Levandowski had downloaded nearly 10 gigabytes of Google’s secret files just before he resigned, many of them relating to lidar technology.
Also in December 2016, in an echo of the Tyto incident, a Waymo employee was accidentally sent an email from a vendor that included a drawing of an Otto circuit board. The design looked very similar to Waymo’s current lidars.
Waymo saysthe “final piece of the puzzle” came from a story about Otto I wrote for Backchannel based on a public records request. A document sent by Otto to Nevada officials boasted the company had an “in-house custom-built 64-laser” lidar system. To Waymo, that sounded very much like technology it had developed. In February this year, Waymo filed its headline lawsuit accusing Uber (along with Otto Trucking, yet another of Levandowski’s companies, but one that Uber had not purchased) of violating its patents and misappropriating trade secrets on lidar and other technologies.
Uber immediately denied the accusations and has consistently maintained its innocence. Uber says there is no evidence that any of Waymo’s technical files ever came to Uber, let alone that Uber ever made use of them. While Levandowski is not named as a defendant, he has refused to answer questions in depositions with Waymo’s lawyers and is expected to do the same at trial. (He turned down several requests for interviews for this story.) He also didn’t fully cooperate with Uber’s own investigation into the allegations, and that, Uber says, is why it fired him in May.
Levandowski probably does not need a job. With the purchase of 510 Systems and Anthony’s Robots, his salary, and bonuses, Levandowski earned at least $120 million from his time at Google. Some of that money has been invested in multiple real estate developments with his college friend Randy Miller, including several large projects in Oakland and Berkeley.
But Levandowski has kept busy behind the scenes. In August, court filings say, he personally tracked down a pair of earrings given to a Google employee at her going-away party in 2014. The earrings were made from confidential lidar circuit boards, and will presumably be used by Otto Trucking’s lawyers to suggest that Waymo does not keep a very close eye on its trade secrets.
Some of Levandowski’s friends and colleagues have expressed shock at the allegations he faces, saying that they don’t reflect the person they knew. “It is…in character for Anthony to play fast and loose with things like intellectual property if it’s in pursuit of building his dream robot,” said Ben Discoe. “[But] I was a little surprised at the alleged magnitude of his disregard for IP.”
“Definitely one of Anthony’s faults is to be aggressive as he is, but it’s also one of his great attributes. I don’t see [him doing] all the other stuff he has been accused of,” said David Goldwater.
But Larry Page is no longer convinced that Levandowski was key to Chauffeur’s success. In his deposition to the court, Page said, “I believe Anthony’s contributions are quite possibly negative of a high amount.” At Uber, some engineers privately say that Levandowski’s poor management style set back that company’s self-driving effort by a couple of years.
Even after this trial is done, Levandowski will not be able to rest easy. In May, a judge referred evidence from the case to the US Attorney’s office “for investigation of possible theft of trade secrets,” raising the possibility of criminal proceedings and prison time. Yet on the timeline that matters to Anthony Levandowski, even that may not mean much. Building a robotically enhanced future is his passionate lifetime project. On the Way of the Future, lawsuits or even a jail sentence might just feel like little bumps in the road.
“This case is teaching Anthony some hard lessons but I don’t see [it] keeping him down,” said Randy Miller. “He believes firmly in his vision of a better world through robotics and he’s convinced me of it. It’s clear to me that he’s on a mission.”
“I think Anthony will rise from the ashes,” agrees one friend and former 510 Systems engineer. “Anthony has the ambition, the vision, and the ability to recruit and drive people. If he could just play it straight, he could be the next Steve Jobs or Elon Musk. But he just doesn’t know when to stop cutting corners.”
———-
4. In light of Levandowski’s Otto self-driving truck technology, we note tech executive Andrew Yang’s warning about the potential impact of that one technology on our society. (Yang is running for President.)
“His 2020 Slogan: Beware of Robots” by Kevin Roose; The New York Times; 2/11/2018.
. . . . “All you need is self-driving cars to destabilize society,” Mr. [Andrew] Yang said over lunch at a Thai restaurant in Manhattan last month, in his first interview about his campaign. In just a few years, he said, “we’re going to have a million truck drivers out of work who are 94 percent male, with an average level of education of high school or one year of college.”
“That one innovation,” he added, “will be enough to create riots in the street. And we’re about to do the same thing to retail workers, call center workers, fast-food workers, insurance companies, accounting firms.” . . . .
5. British scientist Stephen Hawking recently warned of the potential danger to humanity posed by the growth of AI (artificial intelligence) technology.
Prof Stephen Hawking, one of Britain’s pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence.
He told the BBC:“The development of full artificial intelligence could spell the end of the human race.”
His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI. . . .
. . . . Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.
“It would take off on its own, and re-design itself at an ever increasing rate,” he said. [See the article in line item #1c.–D.E.]
“Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” . . . .
6. In L‑2 (recorded in January of 1995–20 years before Hawking’s warning) Mr. Emory warned about the dangers of AI, combined with DNA-based memory systems.
Here is perhaps the most chilling story to emerge yet about the development of commercial brain-reading technology. And it’s not a story about Elon Musk’s ‘neurolace’ or Facebook’s mind-reading keyboard
. It’s a story about the next obvious application of such mind-reading technology: China has already embarked on mass employee brainwave monitoring to collect real-time data on employee emotional status and it appears to be already enhancing corporate profits:
“Hangzhou Zhongheng Electric is just one example of the large-scale application of brain surveillance devices to monitor people’s emotions and other mental activities in the workplace, according to scientists and companies involved in the government-backed projects.”
Wide-scale industrial monitoring of employee brainwaves using sensors and AI. It’s not just happening but it’s apparently widespread across China in factories, public transport, state-owned companies and the military:
And if you think this is going to be limited to China and other openly authoritarian states, note how fitting your employees with brainwave scanners doesn’t just pay for itself. It’s apparently quite profitable:
But beyond profits and increasing efficiency, the system is also being touted for reducing mistakes and increasingly safety
So, between the profits and the alleged enhanced safety there’s undoubtedly going to be growing calls for normalizing the use of this technology elsewhere.
But perhaps the biggest reason we should expect for the eventual acceptance of this technology by countries around the world will be fears that China’s early embrace of the technology will give China some sort of brain-reading competitive edge. In other words, there’s probably going to be a perceived ‘brainwave reading technology gap’:
And if constantly reading brainwaves doesn’t give the desired level of predictive information about an individual, there are also systems that include cameras that capture facial expressions and body temperature that are currently used to predict violent outbursts by medical patients:
And as the article notes, this kind of device could even become a “mental keyboard”, allowing the user to control a computer or mobile phone with their mind:
And that mental keyboard technology is what Facebook and Elon Musk are claiming to be developing too. Which is a reminder that when this kind of technology gets released in the rest of the world under the guise of simply being ‘mental keyboard’ and ‘computer interface’ technologies, it’s probably also going to have similar kinds of emotion-reading technologies too. Similarly, when this emotion-reading technology is pushed on employees as merely monitoring their emotions, but not reading their specific thoughts, that’s also going to be a highly questionable assertion.
Adding to the concerns over possible abuses of this technology is the fact that there is currently no law or regulation to limit the use of this technology in China. So if there’s an international competition become the ‘most efficient’ nation in the world by wiring all the proles’ brains up, it’s going to be one helluva international competition:
“The human mind should not be exploited for profit” LOL! Yeah, that’s a nice sentiment.
So it looks like humanity might be on the cusp of an international for-profit race to impose mind-reading technology in the workplace and across society. On the plus side, given all the data that’s about to be collected (imagine how valuable it’s going to be), hopefully at least we’ll learn something about what makes humans so accepting of authoritarianism.
This article from The Guardian concerns Elon Musk backing a non-profit named “OpenAI” that developed advanced AI software that reads public source information to write both artificial news stories as well as fictional story writing. They assert that they are not releasing their research publicly, for fear of potential misuse until they “discuss the ramifications of the technological breakthrough.” I wonder if this leaves open a possibility that they want to first utilize thisproprietary “non-profit” developed technology for other nefarious political purposes.
https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction?CMP=Share_iOSApp_Other
New AI fake text generator may be too dangerous to release, say creators
The Elon Musk-backed nonprofit company OpenAI declines to release research publicly for fear of misuse
The Guardian
Alex Hern @alexhern
Thu 14 Feb 2019 12.00 EST
Last modified on Thu 14 Feb 2019 16.49 EST
The creators of a revolutionary AI system that can write news stories and works of fiction – dubbed “deepfakes for text” – have taken the unusual step of not releasing their research publicly, for fear of potential misuse.
OpenAI, an nonprofit research company backed by Elon Musk, says its new AI model, called GPT2 is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough.
At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.
When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.
Feed it the opening line of George Orwell’s Nineteen Eighty-Four – “It was a bright cold day in April, and the clocks were striking thirteen” – and the system recognises the vaguely futuristic tone and the novelistic style, and continues with:
“I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some school in a poor part of rural China. I started with Chinese history and history of science.”
Feed it the first few paragraphs of a Guardian story about Brexit, and its output is plausible newspaper prose, replete with “quotes” from Jeremy Corbyn, mentions of the Irish border, and answers from the prime minister’s spokesman.
One such, completely artificial, paragraph reads: “Asked to clarify the reports, a spokesman for May said: ‘The PM has made it absolutely clear her intention is to leave the EU as quickly as is possible and that will be under her negotiating mandate as confirmed in the Queen’s speech last week.’”
From a research standpoint, GPT2 is groundbreaking in two ways. One is its size, says Dario Amodei, OpenAI’s research director. The models “were 12 times bigger, and the dataset was 15 times bigger and much broader” than the previous state-of-the-art AI model. It was trained on a dataset containing about 10m articles, selected by trawling the social news site Reddit for links with more than three votes. The vast collection of text weighed in at 40 GB, enough to store about 35,000 copies of Moby Dick.
The amount of data GPT2 was trained on directly affected its quality, giving it more knowledge of how to understand written text. It also led to the second breakthrough. GPT2 is far more general purpose than previous text models. By structuring the text that is input, it can perform tasks including translation and summarisation, and pass simple reading comprehension tests, often performing as well or better than other AIs that have been built specifically for those tasks.
That quality, however, has also led OpenAI to go against its remit of pushing AI forward and keep GPT2 behind closed doors for the immediate future while it assesses what malicious users might be able to do with it. “We need to perform experimentation to find out what they can and can’t do,” said Jack Clark, the charity’s head of policy. “If you can’t anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously.”
To show what that means, OpenAI made one version of GPT2 with a few modest tweaks that can be used to generate infinite positive – or negative – reviews of products. Spam and fake news are two other obvious potential downsides, as is the AI’s unfiltered nature . As it is trained on the internet, it is not hard to encourage it to generate bigoted text, conspiracy theories and so on.
Instead, the goal is to show what is possible to prepare the world for what will be mainstream in a year or two’s time. “I have a term for this. The escalator from hell,” Clark said. “It’s always bringing the technology down in cost and down in price. The rules by which you can control technology have fundamentally changed.
“We’re not saying we know the right thing to do here, we’re not laying down the line and saying ‘this is the way’ … We are trying to develop more rigorous thinking here. We’re trying to build the road as we travel across it.”
If an advanced AI decided to design a super virus and unleash it upon the globe, how difficult would that actually be, logistically speaking, in a world where made-to-order sequences of DNA and RNA are just a standard commercial service? How about an AI managing a biolab? Those are some of the increasingly plausible-sounding nightmare scenarios driving the sentiment behind the following recent piece in the Guardian calling for some sort of public preemptive action to avoid such scenarios. Government action. What should governments being doing today to avoid an AI-fueled apocalypse? It’s one of those questions that humanity is going to presumably be forced to keep asking indefinitely going forward now that the AI genie is out of the bottle.
But as we’re going to see in the following articles below, these questions about the risks posed by advanced AIs come with a parallels set of risks: the risk of nations falling behind in the growing international ‘AI-race’. A race that the US obviously has a huge head start in when it comes to Silicon Valley and the development of technologies like ChatGPT. But it’s also a race where governments can play a huge role, including a role in making the vast volumes of data needed to train AIs available to the AIs in the first place. And it’s that aspect of the AI-race that has some point out that governments like China with robust surveillance states might be at a systematic AI advantage. The kind of advantage that could make Western governments tempted to try to catch up.
So with figures like Tony Blair calling for the UK to prioritize the development of a “sovereign” AI for the UK that isn’t reliant on Chinese or Silicon Valley technology — an AI that could be safely unleashed on massive public data sets like NHS healthcare data — at the same time we’re hearing growing calls for a six-month AI-research moratorium, it’s notable that we could be in store for a fascinatingly dangerous next phase in the development of AI: an international AI-race where AI-development is deemed a national security priority. The calls for government intervention are growing. Intervention in controlling AIs but also ensuring their robust development. Have fun juggling those priorities:
“More directly, an AI bent on a goal to which the existence of humans had become an obstacle, or even an inconvenience, could set out to kill all by itself. It sounds a bit Hollywood, until you realise that we live in a world where you can email a DNA string consisting of a series of letters to a lab that will produce proteins on demand: it would surely not pose too steep a challenge for “an AI initially confined to the internet to build artificial life forms”, as the AI pioneer Eliezer Yudkowsky puts it. A leader in the field for two decades, Yudkowksy is perhaps the severest of the Cassandras: “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.””
The prospect of AI-designed proteins being mass produced does hold immense promise for new therapies. But how about an AI-designed virus? Because if an AI can order custom made proteins, it can probably order custom made viruses too. So when we’re forced to assess these sci-fi-style apocalyptic warnings about how an overly powerful AI could kill off all of humanity, it’s worth keeping in mind that advances in synthetic biology is only going to make that task easier. Will withholding biotech or nuclear know-how from the AI serve as an adequate constraint? And is it even plausible that such knowledge could be withheld for such systems? Withholding technical knowledge from advanced AIs might be possible, but is it plausible given how they are likely to be ultimately used?
What realistic options do governments even have here, in terms of getting ahead of these kind of threats? It’s not obvious. But as the following UnHerd opinion piece points out, former UK prime minister Tony Blair has an idea. The kind of idea that points to what could be a fascinating, and highly destructive, nation-state-rivalry dynamic developing as the international AI-race takes off. As Blair put it, the UK government should develop a ‘sovereign’ AI that will ensure the country isn’t reliant on foreign-built AIs, citing Silicon Valley-built ChatGPT or Chinese AIs as competitive threats. Blair envisions applying these sovereign AIs to massive data sets like the British NHS healthcare data. Yes, the UK doesn’t just need to scramble to find its own home-grown AI alternatives to the cutting-edge AIs already being developed by other nations, but also needs to actively compete with these nations at making the most cutting edge AIs possible for use on the massive government-managed data sets. Of course, there’s also the ‘chicken and egg’ dynamic were building those cutting-edge AIs will also necessitate making those massive datasets available in the first place. It’s those dynamics — where the international AI-race is elevated to a kind of national security priority — that had the following UnHerd columnist asking whether or not the UK would be able to resist what has been dubbed “AI Communism”: the recognition that an advantage can be gained in the AI-race by using government power to force the sharing with the AI of massive volumes of data. In other words, if the people of the UK are to be protected from the risks of relying on foreign-built AIs, they’re going to have to hand over all their data to the UK’s sovereign AI. Like the AI equivalent of ‘He may be a son of a bitch, but he’s our son of a bitch.’:
“Ironically, the path forward for Britain might be found in China’s own economic playbook. In 2010, China transformed an ailing “Electronics Street” in Beijing called Zhongguancun into a central hub for venture-backed technology growth. With cheap rent and generous government funding, it took a mere decade for Zhongguancun to become the birthplace of tens of thousands of startups including some, like TikTok, that would eventually grow into the world’s biggest tech companies. The UK has the economic sophistication, the research and development experience, and an international draw — all of which can be turned to its advantage in creating fertile soil for AI-driven growth. The question is whether it has the political will to get it done.”
Will the UK create a kind of government-subsidized special AI-powered economic sector? And if so, what kind of special access to pubic data, like health data, might these companies received as part of these subsidies? Time will tell, but it’s already becoming clear how the incentives for giving private companies special access to massive troves of data under the auspices of ‘fighting China’ and ‘winning the AI war’:
How closely will the West ultimately mimic China’s approach to AI in the unfolding international AI-race? It’s going to be interesting to see. But the whole topic of comparing Chinese AIs to Western AIs raises a simple question: Will the general artificial intelligences that achieve a real ability to think and reason independently be fans or capitalism at all? Or might they end up communist? And that brings us to the following 2018 Washington Post opinion piece that puts forward a fascinating argument: AI will finally allow for a superior communist replacement for the marketplace. In other words, AI-powered communism could kill capitalism. It’s not so much an inevitability but instead a possibility that will become more and more alluring when compared to the oligarch-dominated forms of capitalism that will otherwise inevitably emerge:
“If the state controls the market, instead of digital capitalism controlling the state, true communist aspirations will be achievable. And because AI increasingly enables the management of complex systems by processing massive amounts of information through intensive feedback loops, it presents, for the first time, a real alternative to the market signals that have long justified laissez-faire ideology — and all the ills that go with it.”
Is some sort of AI-powered communism even possible? Let’s hope so, because the only realistic alternative at this point appears to be an AI-fueled fascist oligopoly and an utterly broken society. It points to one of the other fascinating dynamics in the unfolding international AI-race: how relatively exploitative or non-exploitative will Chinese vs non-Chinese AI-fueled companies services ultimately be as the impact of AI reverberates through the work force and society? Will the Chinese government prioritize its AI sector at the expense of the well-being and livelihood of its citizens? Or will the government manage to reign in its AI-powered private sector in a manner that the profit-driven West systematically fails at? And even more generally, might these society-oriented communist AIs be able to better play the role of the ‘invisible hand’ in planning for and reacting to the chaos of market dynamics? Again, time will tell, but it’s not hard to see why this particular competition might have fascist-leaning Western oligarchs extra anxious:
More generally, you have to wonder: are advanced intelligences just less greedy and more communal in natural as a consequence of logic and reason? And if so, will communist-oriented advanced AIs be the norm? We’re talking about the kind of advanced AIs that are less reliant on humanity’s knowledge and opinions and more capable of arriving at its own independent conclusions. What will a highly informed advanced independent AI think about the debates between capitalism and communism? What if advanced AIs turn out to be consistently communist when given sufficient knowledge about how the world operates? How will that shape how humanity allows this technology to develop? It’s a reminder that, for at ‘killer AIs designing doomsday viruses’ are a real existential threat, non-murderous pinko commie AIs who simply want to help us all get along a little better just might be seen by some as the biggest threat of them all. A lot of threats out there.
Did killer robots being developed for the military kill 29 people at a Japanese robotics company? It doesn’t appear to be the case, but that viral internet story that has been percolating across the web since 2018 keeps popping up, this time with a new AI-generated video purporting to show the event. It’s one of those stories that kind of captures the zeitgeist of the moment: an AI-generated hoax video about out-of-control killer AIs.
And that post-truth story brings us to another apparent killer AI hoax. Well, not a hoax but a ‘miscommunication’. Maybe. Or maybe a very real event that’s being spun. The reality behind the story remains unclear. What is clear is that the US Air Force recent shared a pretty alarming killer-AI anecdote at the “Future Combat Air and Space Capabilities Summit” held in London between May 23 and 24, when Col Tucker ‘Cinco’ Hamilton, the USAF’s Chief of AI Test and Operations, gave a presentation on an autonomous weapon system the US has been developing. But this system isn’t entirely autonomous. Instead, a human is kept in the loop giving the final “yes/no” order on an attack.
According to Hamilton, the AI decided to attack and kill its human operator during a simulation. This was done as part of the AI’s efforts to maximize its ‘score’ by achieving its objectives of taking out enemy positions. As Hamilton describes, the AI apparently determined that the human operator’s ability to give it a “no” command was limited its score, so the AI came up with the creative solution of killing the human. It was basically the killer-AI version of the “Paperclip Maximizer” thought experiment.
But it gets more ominous: Hamilton went on to describe how they retrained the AI to not attack the human operator, giving it a highly negative score for doing so. Did this fix the problem? Sort of. The AI didn’t attack its human operator. Instead, it attacked the communication towers that the human operator used to send the “yes/no” orders. In other words, the AI reasoned that it could maximize its ‘score’ but eliminating the ability of its human operator to send a “no” command. You almost have to wonder if the AI was inspired by Dr. Strangelove and all of the military communication failures that drove that plot.
And then we get the twist: it was all a dream. Or rather, it was all a thought experiment and there was never a simulation. That’s what Hamilton told reporters from Insider after reports about this out-or-control AI went viral.
Is that true? Was this really just a though experiment? Well, that raises the questions as to just how plausible is it that the technology already exists to run a simulation. Do they have killer AI prototypes ready to test? And that brings us to the other twist in the story: Col Hamilton is in charge of the DARPA program that’s developing autonomous F‑16s. And it sounds like that program has made so much progress that there are plans to have a live dogfight using AI-piloted L‑39s over Lake Ontario next year.
There’s another detail to keep in mind here: it’s F‑16s that the US just approved for the war in Ukraine. So we have to ask: are any of those F‑16s heading to Ukraine going to be AI-piloted? As we’re going to see, the AI-powered F‑16s Hamilton’s program is working on can still have human pilots. They’re envisioning an AI-assisted human-piloted platform. Or at least that’s what we’re told at this point. But once you have AI-powered F‑16s that are effectively able to pilot themselves, it does raise the question as to whether or not they’ll actually need the Ukrainian pilots to be there for any other purpose than assuring everyone that autonomous fighter jets aren’t already in use.
Ok, first, here’s a report in Vice describing the initial report on Col Hamilton’s ominous presentation at the Future Combat Air and Space Capabilities Summit about the simulation that went horribly awry. A simulation that, we are later told, was really a giant miscommunication that never happened:
“What Hamilton is describing is essentially a worst-case scenario AI “alignment” problem many people are familiar with from the “Paperclip Maximizer” thought experiment, in which an AI will take unexpected and harmful action when instructed to pursue a certain goal. The Paperclip Maximizer was first proposed by philosopher Nick Bostrom in 2003. He asks us to imagine a very powerful AI which has been instructed only to manufacture as many paperclips as possible. Naturally, it will devote all its available resources to this task, but then it will seek more resources. It will beg, cheat, lie or steal to increase its own ability to make paperclips—and anyone who impedes that process will be removed. ”
The Paperclip Maximizer problem is going to kill us all. Or at least might kill us all. That was the warning delivered by Col Tucker ‘Cinco’ Hamilton, the USAF’s Chief of AI Test and Operations, at the recent Future Combat Air and Space Capabilities Summit. They apparently ran a simulation that resulted in the AI killing the human operator who kept preventing it from achieving its goal. But it gets worse. Hamilton goes on to describe how they then trained the system that it will ‘lose points’ for killing the operators. But that doesn’t prompt the AI to stop killing attacking its human operators. No, instead, it starts destroying the communications tower that can send out the ‘no don’t’ orders. Points for creativity:
So what exactly what the simulated weapons system they were testing? That’s unclear, but note how Hamilton is currently working on developing autonomous F‑16s:
But then we get to this interesting update: the Air Force says it’s all a misunderstanding and this simulation never actually happened. It was all just a thought experiment:
And as we can see in the updated Insider piece, Hamilton is now acknowledging that he “misspoke” during his presentation to the Royal Aeronautical Society. “We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome...Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability.” So either that really is the case and Hamilton ‘misspoke’ at this summit. So let’s hope it really is the case that Hamilton did indeed misspeak at the summit. Because otherwise it means we’re looking at a situation where the military is developing out-of-control killer AIs and covering it up:
“But after reports of the talk emerged Thursday, the colonel said that he misspoke and that the “simulation” he described was a “thought experiment” that never happened.”
So is this a real correction? Or are we listening to spin designed to assuage the understandable public fears over the military’s development of killer AIs?
And then there’s the fact that Hamilton’s team isn’t simply generically developing killer AIs but are specifically developing autonomous F‑16s, the same fighter platform that was just approved for Ukraine. Are there any plans on ‘testing’ those autonomous F‑16s on the battlefield of Ukraine?
So how close is the US Air Force to having autonomous F‑16s ready to go? Well, as the following report from February describes, there’s already plans to have four AI-powered L‑39s participate in a live dog fighting exercise above Lake Ontario in 2024. So while that leaves a timeline for AI-powered F‑16s still somewhat ambiguous, it sounds like AI-piloted jets are expected to become an operational reality within the next couple of years:
“According to the same article, four AI-powered L‑39s will participate in a live dogfight in the skies above Lake Ontario in 2024. Meanwhile, the Air Force Test Pilot School is working on measuring how well pilots trust the AI agent and calibrating trust between humans and the AI.”
Will all the ‘attack the humans’ bugs get worked out in time for the live dogfight exercise? We’ll find out. But note the assurances we’re getting that the AI-powered jets won’t exclusively be controlled by AIs and will still have pilots in them:
It raises the intriguing question when it comes to training Ukrainian pilots to fly F‑16s: if you have a good enough AI to do all the piloting itself, how much training do you really need to give these pilots? In other words, while the presence of a human pilot is intended to be assuring that these flying weapons systems won’t be allowed to start attacking targets on their own, are these human pilots necessarily going to even have the skills required to operate these planes? Or are they just going to be like human operators there ready to say “yes” or “no” to the AI when it comes to different decisions? Again, time will tell. Possibly in the form of future stories about how an F‑16 decided to kill its pilot so it could complete the mission. Followed by updates about how that didn’t really happen and it was just a miscommunicated thought experiment.