Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

For The Record  

FTR #997 Summoning the Demon, Part 2: Sorcer’s Apprentice

Dave Emory’s entire life­time of work is avail­able on a flash drive that can be obtained HERE. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by the fall of 2017. The new drive (available for a tax-deductible contribution of $65.00 or more.)

WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE.

You can subscribe to e-mail alerts from Spitfirelist.com HERE.

You can subscribe to RSS feed from Spitfirelist.com HERE.

You can subscribe to the comments made on programs and posts–an excellent source of information in, and of, itself HERE.

This broadcast was recorded in one, 60-minute segment.

Introduction: Developing analysis presented in FTR #968, this broadcast explores frightening developments and potential developments in the world of artificial intelligence–the ultimate manifestation of what Mr. Emory calls “technocratic fascism.”

In order to underscore what we mean by technocratic fascism, we reference a vitally important article by David Golumbia. ” . . . . Such tech­no­cratic beliefs are wide­spread in our world today, espe­cially in the enclaves of dig­i­tal enthu­si­asts, whether or not they are part of the giant corporate-digital leviathan. Hack­ers (‘civic,’ ‘eth­i­cal,’ ‘white’ and ‘black’ hat alike), hack­tivists, Wik­iLeaks fans [and Julian Assange et al–D. E.], Anony­mous ‘mem­bers,’ even Edward Snow­den him­self walk hand-in-hand with Face­book and Google in telling us that coders don’t just have good things to con­tribute to the polit­i­cal world, but that the polit­i­cal world is theirs to do with what they want, and the rest of us should stay out of it: the polit­i­cal world is bro­ken, they appear to think (rightly, at least in part), and the solu­tion to that, they think (wrongly, at least for the most part), is for pro­gram­mers to take polit­i­cal mat­ters into their own hands. . . . [Tor co-creator] Din­gle­dine  asserts that a small group of soft­ware devel­op­ers can assign to them­selves that role, and that mem­bers of demo­c­ra­tic poli­ties have no choice but to accept them hav­ing that role. . . .”

Perhaps the last and most perilous manifestation of technocratic fascism concerns Anthony  Levandowski, an engineer at the foundation of the development of Google Street Map technology and self-driving cars. He is proposing an AI Godhead that would rule the world and would be worshipped as a God by the planet’s citizens. Insight into his personality was provided by an associate: “ . . . . ‘He had this very weird motivation about robots taking over the world—like actually taking over, in a military senseIt was like [he wanted] to be able to control the world, and robots were the way to do that. He talked about starting a new country on an island. Pretty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it’. . . .”

As we saw in FTR #968, AI’s have incorporated many flaws of their creators, auguring very poorly for the subjects of Levandowski’s AI Godhead.

It is also interesting to contemplate what may happen when AI’s are designed by other AI’s- machines designing other machines.

After a detailed review of some of the ominous real and developing AI-related technology, the program highlights Anthony Levandowski, the brilliant engineer who was instrumental in developing Google’s Street Maps, Waymo’s self-driving cars, Otto’s self-driving trucks, the Lidar technology central to self-driving vehicles and the Way of the Future, super AI Godhead.

Further insight into Levandowski’s personality can be gleaned from e-mails with Travis Kalanick, former CEO of Uber: ” . . . . In Kalanick, Levandowski found both a soulmate and a mentor to replace Sebastian Thrun. Text messages between the two, disclosed during the lawsuit’s discovery process, capture Levandowski teaching Kalanick about lidar at late night tech sessions, while Kalanick shared advice on management. ‘Down to hang out this eve and mastermind some shit,’ texted Kalanick, shortly after the acquisition. ‘We’re going to take over the world. One robot at a time,’ wrote Levandowski another time. . . .”

Those who view self-driving cars and other AI-based technologies as flawless would do well to consider the following: ” . . . .Last December, Uber launched a pilot self-driving taxi program in San Francisco. As with Otto in Nevada, Levandowski failed to get a license to operate the high-tech vehicles, claiming that because the cars needed a human overseeing them, they were not truly autonomous. The DMV disagreed and revoked the vehicles’ licenses. Even so, during the week the cars were on the city’s streets, they had been spotted running red lights on numerous occasions. . . . .”

Noting Levandowski’s personality quirks, the article poses a fundamental question: ” . . . . But even the smartest car will crack up if you floor the gas pedal too long. Once feted by billionaires, Levandowski now finds himself starring in a high-stakes public trial as his two former employers square off. By extension, the whole technology industry is there in the dock with Levandowski. Can we ever trust self-driving cars if it turns out we can’t trust the people who are making them? . . . .”

Levandowski’s Otto self-driving trucks might be weighed against the prognostications of dark horse Presidential candidate and former tech executive Andrew Wang: . . . . ‘All you need is self-driving cars to destabilize society,’ Mr. Yang said over lunch at a Thai restaurant in Manhattan last month, in his first interview about his campaign. In just  a few years, he said, ‘we’re going to have a million truck drivers out of work who are 94 percent male, with an  average  level of education of high school or one year of college.’ ‘That one innovation,’ he added, ‘will be enough to create riots in the street. And we’re about to do the  same thing to retail workers, call center workers, fast-food workers, insurance companies, accounting firms.’ . . . .”

Theoretical physicist Stephen Hawking warned at the end of 2014 of the potential danger to humanity posed by the growth of AI (artificial intelligence) technology. His warnings have been echoed by tech titans such as Tesla’s Elon Musk and Bill Gates.

The program concludes with Mr. Emory’s prognostications about AI, preceding Stephen Hawking’s warning by twenty years.

Program Highlights Include:

  1. Levandowski’s apparent shepherding of a company called–perhaps significantly–Odin Wave to utilize Lidar-like technology.
  2. The role of DARPA in initiating the self-driving vehicles contest that was Levandowski’s point of entry into his tech ventures.
  3. Levandowski’s development of the Ghostrider self-driving motorcycles, which experienced 800 crashes in 1,000 miles.

1a. In order to underscore what we mean by technocratic fascism, we reference a vitally important article by David Golumbia. ” . . . . Such tech­no­cratic beliefs are wide­spread in our world today, espe­cially in the enclaves of dig­i­tal enthu­si­asts, whether or not they are part of the giant corporate-digital leviathan. Hack­ers (‘civic,’ ‘eth­i­cal,’ ‘white’ and ‘black’ hat alike), hack­tivists, Wik­iLeaks fans [and Julian Assange et al–D. E.], Anony­mous ‘mem­bers,’ even Edward Snow­den him­self walk hand-in-hand with Face­book and Google in telling us that coders don’t just have good things to con­tribute to the polit­i­cal world, but that the polit­i­cal world is theirs to do with what they want, and the rest of us should stay out of it: the polit­i­cal world is bro­ken, they appear to think (rightly, at least in part), and the solu­tion to that, they think (wrongly, at least for the most part), is for pro­gram­mers to take polit­i­cal mat­ters into their own hands. . . . [Tor co-creator] Din­gle­dine  asserts that a small group of soft­ware devel­op­ers can assign to them­selves that role, and that mem­bers of demo­c­ra­tic poli­ties have no choice but to accept them hav­ing that role. . . .”

1b. Anthony  Levandowski, an engineer at the foundation of the development of Google Street Map technology and self-driving cars, is proposing an AI Godhead that would rule the world and would be worshipped as a God by the planet’s citizens. Insight into his personality was provided by an associate: “ . . . . ‘He had this very weird motivation about robots taking over the world—like actually taking over, in a military senseIt was like [he wanted] to be able to control the world, and robots were the way to do that. He talked about starting a new country on an island. Pretty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it’. . . .”

1c. Transitioning from our last program–updating AI (artificial intelligence) technology as it applies to technocratic fascism–we note that AI machines are being designed to develop other AI’s–“The Rise of the Machine.” ” . . . . Jeff Dean, one of Google’s leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data. AutoML, in turn, is a machine learning algorithm that learns to build other machine-learning algorithms. With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry. . . . This is not altruism. . . .

“The Rise of the Machine” by Cade Metz; The New York Times; 11/6/2017; p. B1 [Western Edition].

 They are a dream of researchers but perhaps a nightmare for highly skilled computer programmers: artificially intelligent machines that can build other artificially intelligent machines. With recent speeches in both Silicon Valley and China, Jeff Dean, one of Google’s leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data.

AutoML, in turn, is a machine learning algorithm that learns to build other machine-learning algorithms. With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry. The project is part of a much larger effort to bring the latest and greatest A.I. techniques to a wider collection of companies and software developers.

The tech industry is promising everything from smartphone apps that can recognize faces to cars that can drive on their own. But by some estimates, only 10,000 people worldwide have the education, experience and talent needed to build the complex and sometimes mysterious mathematical algorithms that will drive this new breed of artificial intelligence.

The world’s largest tech businesses, including Google, Facebook and Microsoft, sometimes pay millions of dollars a year to A.I. experts, effectively cornering the market for this hard-to-find talent. The shortage isn’t going away anytime soon, just because mastering these skills takes years of work. The industry is not willing to wait. Companies are developing all sorts of tools that will make it easier for any operation to build its own A.I. software, including things like image and speech recognition services and online chatbots. “We are following the same path that computer science has followed with every new type of technology,” said Joseph Sirosh, a vice president at Microsoft, which recently unveiled a tool to help coders build deep neural networks, a type of computer algorithm that is driving much of the recent progress in the A.I. field. “We are eliminating a lot of the heavy lifting.” This is not altruism.

Researchers like Mr. Dean believe that if more people and companies are working on artificial intelligence, it will propel their own research. At the same time, companies like Google, Amazon and Microsoft see serious money in the trend that Mr. Sirosh described. All of them are selling cloud-computing services that can help other businesses and developers build A.I. “There is real demand for this,” said Matt Scott, a co-founder and the chief technical officer of Malong, a start-up in China that offers similar services. “And the tools are not yet satisfying all the demand.”

This is most likely what Google has in mind for AutoML, as the company continues to hail the project’s progress. Google’s chief executive, Sundar Pichai, boasted about AutoML last month while unveiling a new Android smartphone.

Eventually, the Google project will help companies build systems with artificial intelligence even if they don’t have extensive expertise, Mr. Dean said. Today, he estimated, no more than a few thousand companies have the right talent for building A.I., but many more have the necessary data. “We want to go from thousands of organizations solving machine learning problems to millions,” he said.

Google is investing heavily in cloud-computing services — services that help other businesses build and run software — which it expects to be one of its primary economic engines in the years to come. And after snapping up such a large portion of the world’s top A.I researchers, it has a means of jump-starting this engine.

Neural networks are rapidly accelerating the development of A.I. Rather than building an image-recognition service or a language translation app by hand, one line of code at a time, engineers can much more quickly build an algorithm that learns tasks on its own. By analyzing the sounds in a vast collection of old technical support calls, for instance, a machine-learning algorithm can learn to recognize spoken words.

But building a neural network is not like building a website or some run-of-themill smartphone app. It requires significant math skills, extreme trial and error, and a fair amount of intuition. Jean-François Gagné, the chief executive of an independent machine-learning lab called Element AI, refers to the process as “a new kind of computer programming.”

In building a neural network, researchers run dozens or even hundreds of experiments across a vast network of machines, testing how well an algorithm can learn a task like recognizing an image or translating from one language to another. Then they adjust particular parts of the algorithm over and over again, until they settle on something that works. Some call it a “dark art,” just because researchers find it difficult to explain why they make particular adjustments.

But with AutoML, Google is trying to automate this process. It is building algorithms that analyze the development of other algorithms, learning which methods are successful and which are not. Eventually, they learn to build more effective machine learning. Google said AutoML could now build algorithms that, in some cases, identified objects in photos more accurately than services built solely by human experts. Barret Zoph, one of the Google researchers behind the project, believes that the same method will eventually work well for other tasks, like speech recognition or machine translation. This is not always an easy thing to wrap your head around. But it is part of a significant trend in A.I. research. Experts call it “learning to learn” or “metalearning.”

Many believe such methods will significantly accelerate the progress of A.I. in both the online and physical worlds. At the University of California, Berkeley, researchers are building techniques that could allow robots to learn new tasks based on what they have learned in the past. “Computers are going to invent the algorithms for us, essentially,” said a Berkeley professor, Pieter Abbeel. “Algorithms invented by computers can solve many, many problems very quickly — at least that is the hope.”

This is also a way of expanding the number of people and businesses that can build artificial intelligence. These methods will not replace A.I. researchers entirely. Experts, like those at Google, must still do much of the important design work.

But the belief is that the work of a few experts can help many others build their own software. Renato Negrinho, a researcher at Carnegie Mellon University who is exploring technology similar to AutoML, said this was not a reality today but should be in the years to come. “It is just a matter of when,” he said.

2a.We next review some terrifying and consummately important developments taking shape in the context of what Mr. Emory has called “technocratic fascism:”

  1. In FTR #’s 718 and 946, we detailed the frightening, ugly reality behind Facebook. Facebook is now developing technology that will permit the tapping of users thoughts by monitoring brain-to-computer technology. Facebook’s R & D is headed by Regina Dugan, who used to head the Pentagon’s DARPA. Facebook’s Building 8 is patterned after DARPA:   . . . . Brain-computer interfaces are nothing newDARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
  2. ”  . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
  3. ” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”

artificial intelligence2b.     Next we review still more about Facebook’s brain-to-computer interface:

  1. ” . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”
  2. ” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”

2c.  Collating the information about Facebook’s brain-to-computer interface with their documented actions gathering psychological intelligence about troubled teenagers gives us a peek into what may lie behind Dugan’s bland reassurances:

  1. ” . . . . The 23-page document allegedly revealed that the social network provided detailed data about teens in Australia—including when they felt ‘overwhelmed’ and ‘anxious’—to advertisers. The creepy implication is that said advertisers could then go and use the data to throw more ads down the throats of sad and susceptible teens. . . . By monitoring posts, pictures, interactions and internet activity in real-time, Facebook can work out when young people feel ‘stressed’, ‘defeated’, ‘overwhelmed’, ‘anxious’, ‘nervous’, ‘stupid’, ‘silly’, ‘useless’, and a ‘failure’, the document states. . . .”
  2. ” . . . . A presentation prepared for one of Australia’s top four banks shows how the $US 415 billion advertising-driven giant has built a database of Facebook users that is made up of 1.9 million high schoolers with an average age of 16, 1.5 million tertiary students averaging 21 years old, and 3 million young workers averaging 26 years old. Detailed information on mood shifts among young people is ‘based on internal Facebook data’, the document states, ‘shareable under non-disclosure agreement only’, and ‘is not publicly available’. . . .”
  3. In a statement given to the newspaper, Facebook confirmed the practice and claimed it would do better, but did not disclose whether the practice exists in other places like the US. . . .”

2d.  In this context, note that Facebook is also introducing an AI function to reference its users photos.

2e.  The next version of Amazon’s Echo, the Echo Look, has a microphone and camera so it can take pictures of you and give you fashion advice. This is an AI-driven device designed to placed in your bedroom to capture audio and video. The images and videos are stored indefinitely in the Amazon cloud. When Amazon was asked if the photos, videos, and the data gleaned from the Echo Look would be sold to third parties, Amazon didn’t address that question. It would appear that selling off your private info collected from these devices is presumably another feature of the Echo Look: ” . . . . Amazon is giving Alexa eyes. And it’s going to let her judge your outfits.The newly announced Echo Look is a virtual assistant with a microphone and a camera that’s designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you’re not sure if your outfit is cute, but it’s also got a built-in app called StyleCheck that is worth some further dissection. . . .”

We then further develop the stunning implications of Amazon’s Echo Look AI technology:

  1. ” . . . . Amazon is giving Alexa eyes. And it’s going to let her judge your outfits.The newly announced Echo Look is a virtual assistant with a microphone and a camera that’s designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you’re not sure if your outfit is cute, but it’s also got a built-in app called StyleCheck that is worth some further dissection. . . .”
  2. ” . . . . This might seem overly speculative or alarmist to some, but Amazon isn’t offering any reassurance that they won’t be doing more with data gathered from the Echo Look. When asked if the company would use machine learning to analyze users’ photos for any purpose other than fashion advice, a representative simply told The Verge that they ‘can’t speculate’ on the topic. The rep did stress that users can delete videos and photos taken by the Look at any time, but until they do, it seems this content will be stored indefinitely on Amazon’s servers.
  3. This non-denial means the Echo Look could potentially provide Amazon with the resource every AI company craves: data. And full-length photos of people taken regularly in the same location would be a particularly valuable dataset — even more so if you combine this information with everything else Amazon knows about its customers (their shopping habits, for one). But when asked whether the company would ever combine these two datasets, an Amazon rep only gave the same, canned answer: ‘Can’t speculate.’ . . . “
  4. Noteworthy in this context is the fact that AI’s have shown that they quickly incorporate human traits and prejudices. (This is reviewed at length above.) ” . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . . 

2f. Ominously, Facebook’s artificial intelligence robots have begun talking to each other in their own language, that their human masters can not understand. “ . . . . Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language. . . . The company chose to shut down the chats because ‘our interest was having bots who could talk to people,’ researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.) The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up . . .”

2g. Facebook’s negotiation-bots didn’t just make up their own language during the course of this experiment. They learned how to lie for the purpose of maximizing their negotiation outcomes, as well:

“ . . . . ‘We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,’ writes the team. ‘Deceit is a complex skill that requires hypothesizing the other agent’s beliefs, and is learned relatively late in child development. Our agents have learned to deceive without any explicit human design, simply by trying to achieve their goals.’ . . . 

3a. One of the stranger stories in recent years has been the mystery of Cicada 3301the anonymous group that posts annual challenges of super difficult puzzles used to recruit talented code-breakers and invite them to join some sort of Cypherpunk cult that wants to build a global AI-‘god brain’. Or something. It’s a weird and creepy organization that’s speculated to either be a front for an intelligence agency or perhaps some sort of underground network of wealth Libertarians. And, for now, Cicada 3301 remains anonymous.

In that context, it’s worth noting that someone with a lot of cash has already started a foundation to accomplish that very same ‘AI god’ goal: Anthony Levandowski, a former Google Engineer who played a big role in the development Google’s “Street Map” technology and a string of self-driving vehicle companies, started Way of the Future, a nonprofit religious corporation with the mission “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society”:

“Deus ex machina: former Google engineer is developing an AI god” by Olivia Solon; The Guardian; 09/28/2017

Intranet service? Check. Autonomous motorcycle? Check. Driverless car technology? Check. Obviously the next logical project for a successful Silicon Valley engineer is to set up an AI-worshipping religious organization.

Anthony Levandowski, who is at the center of a legal battle between Uber and Google’s Waymo, has established a nonprofit religious corporation called Way of the Future, according to state filings first uncovered by Wired’s BackchannelWay of the Future’s startling mission: “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.”

Levandowski was co-founder of autonomous trucking company Otto, which Uber bought in 2016. He was fired from Uber in May amid allegations that he had stolen trade secrets from Google to develop Otto’s self-driving technology. He must be grateful for this religious fall-back project, first registered in 2015.

The Way of the Future team did not respond to requests for more information about their proposed benevolent AI overlord, but history tells us that new technologies and scientific discoveries have continually shaped religion, killing old gods and giving birth to new ones.

“The church does a terrible job of reaching out to Silicon Valley types,” acknowledges Christopher Benek a pastor in Florida and founding chair of the Christian Transhumanist Association.

Silicon Valley, meanwhile, has sought solace in technology and has developed quasi-religious concepts including the “singularity”, the hypothesis that machines will eventually be so smart that they will outperform all human capabilities, leading to a superhuman intelligence that will be so sophisticated it will be incomprehensible to our tiny fleshy, rational brains.

For futurists like Ray Kurzweil, this means we’ll be able to upload copies of our brains to these machines, leading to digital immortality. Others like Elon Musk and Stephen Hawking warn that such systems pose an existential threat to humanity.

“With artificial intelligence we are summoning the demon,” Musk said at a conference in 2014. “In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out.”

Benek argues that advanced AI is compatible with Christianity – it’s just another technology that humans have created under guidance from God that can be used for good or evil.

“I totally think that AI can participate in Christ’s redemptive purposes,” he said, by ensuring it is imbued with Christian values.

“Even if people don’t buy organized religion, they can buy into ‘do unto others’.”

For transhumanist and “recovering Catholic” Zoltan Istvan, religion and science converge conceptually in the singularity.

“God, if it exists as the most powerful of all singularities, has certainly already become pure organized intelligence,” he said, referring to an intelligence that “spans the universe through subatomic manipulation of physics”.

And perhaps, there are other forms of intelligence more complicated than that which already exist and which already permeate our entire existence. Talk about ghost in the machine,” he added.

For Istvan, an AI-based God is likely to be more rational and more attractive than current concepts (“the Bible is a sadistic book”) and, he added, “this God will actually exist and hopefully will do things for us.”

We don’t know whether Levandowski’s Godhead ties into any existing theologies or is a manmade alternative, but it’s clear that advancements in technologies including AI and bioengineering kick up the kinds of ethical and moral dilemmas that make humans seek the advice and comfort from a higher power: what will humans do once artificial intelligence outperforms us in most tasks? How will society be affected by the ability to create super-smart, athletic “designer babies” that only the rich can afford? Should a driverless car kill five pedestrians or swerve to the side to kill the owner?

If traditional religions don’t have the answer, AI – or at least the promise of AI – might be alluring.

———-

3b. As the following long piece by Wired demonstrates, Levandowski doesn’t appear to be too concerned about ethics, especially if they get in the way of his dream of transforming the world through robotics. Transforming and taking over the world through robotics. Yep. The article focuses on the various legal troubles Levandowski faces over charges by Google that he stole the “Lidar” technology he helped develop at Google and took it to Uber (a company with a serious moral compass deficit). (Lidar is a laser-based radar-like technology used by vehicles to rapidly map their surroundings)

The article also includes some interesting insights into what makes Levandowski tick. According to friend and former engineer at one of Levandowski’s companies: “ . . . . ‘He had this very weird motivation about robots taking over the world—like actually taking over, in a military senseIt was like [he wanted] to be able to control the world, and robots were the way to do that. He talked about starting a new country on an island. Pretty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it’. . . .”

 Further insight into Levandowski’s personality can be gleaned from e-mails with Travis Kalanick, former CEO of Uber: ” . . . . In Kalanick, Levandowski found both a soulmate and a mentor to replace Sebastian Thrun. Text messages between the two, disclosed during the lawsuit’s discovery process, capture Levandowski teaching Kalanick about lidar at late night tech sessions, while Kalanick shared advice on management. ‘Down to hang out this eve and mastermind some shit,’ texted Kalanick, shortly after the acquisition. ‘We’re going to take over the world. One robot at a time,’ wrote Levandowski another time. . . .”

Those who view self-driving cars and other AI-based technologies as flawless would do well to consider the following: ” . . . .Last December, Uber launched a pilot self-driving taxi program in San Francisco. As with Otto in Nevada, Levandowski failed to get a license to operate the high-tech vehicles, claiming that because the cars needed a human overseeing them, they were not truly autonomous. The DMV disagreed and revoked the vehicles’ licenses. Even so, during the week the cars were on the city’s streets, they had been spotted running red lights on numerous occasions. . . . .”

Noting Levandowski’s personality quirks, the article poses a fundamental question: ” . . . . But even the smartest car will crack up if you floor the gas pedal too long. Once feted by billionaires, Levandowski now finds himself starring in a high-stakes public trial as his two former employers square off. By extension, the whole technology industry is there in the dock with Levandowski. Can we ever trust self-driving cars if it turns out we can’t trust the people who are making them? . . . .”

“God Is a Bot, and Anthony Levandowski Is His Messenger” by Mark Harris; Wired; 09/27/2017

Many people in Silicon Valley believe in the Singularity—the day in our near future when computers will surpass humans in intelligence and kick off a feedback loop of unfathomable change.

When that day comes, Anthony Levandowski will be firmly on the side of the machines. In September 2015, the multi-millionaire engineer at the heart of the patent and trade secrets lawsuit between Uber and Waymo, Google’s self-driving car company, founded a religious organization called Way of the Future. Its purpose, according to previously unreported state filings, is nothing less than to “develop and promote the realization of a Godhead based on Artificial Intelligence.”

Way of the Future has not yet responded to requests for the forms it must submit annually to the Internal Revenue Service (and make publicly available), as a non-profit religious corporation. However, documents filed with California show that Levandowski is Way of the Future’s CEO and President, and that it aims “through understanding and worship of the Godhead, [to] contribute to the betterment of society.”

A divine AI may still be far off, but Levandowski has made a start at providing AI with an earthly incarnation. The autonomous cars he was instrumental in developing at Google are already ferrying real passengers around Phoenix, Arizona, while self-driving trucks he built at Otto are now part of Uber’s plan to make freight transport safer and more efficient. He even oversaw a passenger-carrying drones project that evolved into Larry Page’s Kitty Hawk startup.

Levandowski has done perhaps more than anyone else to propel transportation toward its own Singularity, a time when automated cars, trucks and aircraft either free us from the danger and drudgery of human operation—or decimate mass transit, encourage urban sprawl, and enable deadly bugs and hacks.

But before any of that can happen, Levandowski must face his own day of reckoning. In February, Waymo—the company Google’s autonomous car project turned into—filed a lawsuit against Uber. In its complaint, Waymo says that Levandowski tried to use stealthy startups and high-tech tricks to take cash, expertise, and secrets from Google, with the aim of replicating its vehicle technology at arch-rival Uber. Waymo is seeking damages of nearly $1.9 billion—almost half of Google’s (previously unreported) $4.5 billion valuation of the entire self-driving division. Uber denies any wrongdoing.

Next month’s trial in a federal courthouse in San Francisco could steer the future of autonomous transportation. A big win for Waymo would prove the value of its patents and chill Uber’s efforts to remove profit-sapping human drivers from its business. If Uber prevails, other self-driving startups will be encouraged to take on the big players—and a vindicated Levandowski might even return to another startup. (Uber fired him in May.)

Levandowski has made a career of moving fast and breaking things. As long as those things were self-driving vehicles and little-loved regulations, Silicon Valley applauded him in the way it knows best—with a firehose of cash. With his charm, enthusiasm, and obsession with deal-making, Levandowski came to personify the disruption that autonomous transportation is likely to cause.

But even the smartest car will crack up if you floor the gas pedal too long. Once feted by billionaires, Levandowski now finds himself starring in a high-stakes public trial as his two former employers square off. By extension, the whole technology industry is there in the dock with Levandowski. Can we ever trust self-driving cars if it turns out we can’t trust the people who are making them?

In 2002, Levandowski’s attention turned, fatefully, toward transportation. His mother called him from Brussels about a contest being organized by the Pentagon’s R&D arm, DARPA. The first Grand Challenge in 2004 would race robotic, computer-controlled vehicles in a desert between Los Angeles and Las Vegas—a Wacky Races for the 21st century.

“I was like, ‘Wow, this is absolutely the future,’” Levandowski told me in 2016. “It struck a chord deep in my DNA. I didn’t know where it was going to be used or how it would work out, but I knew that this was going to change things.”

Levandowski’s entry would be nothing so boring as a car. “I originally wanted to do an automated forklift,” he said at a follow-up competition in 2005. “Then I was driving to Berkeley [one day] and a pack of motorcycles descended on my pickup and flowed like water around me.” The idea for Ghostrider was born—a gloriously deranged self-driving Yamaha motorcycle whose wobbles inspired laughter from spectators, but awe in rivals struggling to get even four-wheeled vehicles driving smoothly.

“Anthony would go for weeks on 25-hour days to get everything done. Every day he would go to bed an hour later than the day before,” remembers Randy Miller, a college friend who worked with him on Ghostrider. “Without a doubt, Anthony is the smartest, hardest-working and most fearless person I’ve ever met.”

Levandowski and his team of Berkeley students maxed out his credit cards getting Ghostrider working on the streets of Richmond, California, where it racked up an astonishing 800 crashes in a thousand miles of testing. Ghostrider never won a Grand Challenge, but its ambitious design earned Levandowski bragging rights—and the motorbike a place in the Smithsonian.

“I see Grand Challenge not as the end of the robotics adventure we’re on, it’s almost like the beginning,” Levandowski told Scientific American in 2005. “This is where everyone is meeting, becoming aware of who’s working on what, [and] filtering out the non-functional ideas.”

One idea that made the cut was lidar—spinning lasers that rapidly built up a 3D picture of a car’s surroundings. In the lidar-less first Grand Challenge, no vehicle made it further than a few miles along the course. In the second, an engineer named Dave Hall constructed a lidar that “was giant. It was one-off but it was awesome,” Levandowski told me. “We realized, yes, lasers [are] the way to go.”

After graduate school, Levandowski went to work for Hall’s company, Velodyne, as it pivoted from making loudspeakers to selling lidars. Levandowski not only talked his way into being the company’s first sales rep, targeting teams working towards the next Grand Challenge, but he also worked on the lidar’s networking. By the time of the third and final DARPA contest in 2007, Velodyne’s lidar was mounted on five of the six vehicles that finished.

But Levandowski had already moved on. Ghostrider had caught the eye of Sebastian Thrun, a robotics professor and team leader of Stanford University’s winning entry in the second competition. In 2006, Thrun invited Levandowski to help out with a project called VueTool, which was setting out to piece together street-level urban maps using cameras mounted on moving vehicles. Google was already working on a similar system, called Street View. Early in 2007, Google brought on Thrun and his entire team as employees—with bonuses as high as $1 million each, according to one contemporary at Google—to troubleshoot Street View and bring it to launch.

“[Hiring the VueTool team] was very much a scheme for paying Thrun and the others to show Google how to do it right,” remembers the engineer. The new hires replaced Google’s bulky, custom-made $250,000 cameras with $15,000 off-the-shelf panoramic webcams. Then they went auto shopping. “Anthony went to a car store and said we want to buy 100 cars,” Sebastian Thrun told me in 2015. “The dealer almost fell over.”

Levandowski was also making waves in the office, even to the point of telling engineers not to waste time talking to colleagues outside the project, according to one Google engineer. “It wasn’t clear what authority Anthony had, and yet he came in and assumed authority,” said the engineer, who asked to remain anonymous. “There were some bad feelings but mostly [people] just went with it. He’s good at that. He’s a great leader.”

Under Thrun’s supervision, Street View cars raced to hit Page’s target of capturing a million miles of road images by the end of 2007. They finished in October—just in time, as it turned out. Once autumn set in, every webcam succumbed to rain, condensation, or cold weather, grounding all 100 vehicles.

Part of the team’s secret sauce was a device that would turn a raw camera feed into a stream of data, together with location coordinates from GPS and other sensors. Google engineers called it the Topcon box, named after the Japanese optical firm that sold it. But the box was actually designed by a local startup called 510 Systems. “We had one customer, Topcon, and we licensed our technology to them,” one of the 510 Systems owners told me.

That owner was…Anthony Levandowski, who had cofounded 510 Systems with two fellow Berkeley researchers, Pierre-Yves Droz and Andrew Schultz, just weeks after starting work at Google. 510 Systems had a lot in common with the Ghostrider team. Berkeley students worked there between lectures, and Levandowski’s mother ran the office. Topcon was chosen as a go-between because it had sponsored the self-driving motorcycle. “I always liked the idea that…510 would be the people that made the tools for people that made maps, people like Navteq, Microsoft, and Google,” Levandowski told me in 2016.

Google’s engineering team was initially unaware that 510 Systems was Levandowski’s company, several engineers told me. That changed once Levandowski proposed that Google also use the Topcon box for its small fleet of aerial mapping planes. “When we found out, it raised a bunch of eyebrows,” remembers an engineer. Regardless, Google kept buying 510’s boxes.

**********

The truth was, Levandowski and Thrun were on a roll. After impressing Larry Page with Street View, Thrun suggested an even more ambitious project called Ground Truth to map the world’s streets using cars, planes, and a 2,000-strong team of cartographers in India. Ground Truth would allow Google to stop paying expensive licensing fees for outside maps, and bring free turn-by-turn directions to Android phones—a key differentiator in the early days of its smartphone war with Apple.

Levandowski spent months shuttling between Mountain View and Hyderabad—and yet still found time to create an online stock market prediction game with Jesse Levinson, a computer science post-doc at Stanford who later cofounded his own autonomous vehicle startup, Zoox. “He seemed to always be going a mile a minute, doing ten things,” said Ben Discoe, a former engineer at 510. “He had an engineer’s enthusiasm that was contagious, and was always thinking about how quickly we can get to this amazing robot future he’s so excited about.”

One time, Discoe was chatting in 510’s break room about how lidar could help survey his family’s tea farm on Hawaii. “Suddenly Anthony said, ‘Why don’t you just do it? Get a lidar rig, put it in your luggage, and go map it,’” said Discoe. “And it worked. I made a kick-ass point cloud [3D digital map] of the farm.”

If Street View had impressed Larry Page, the speed and accuracy of Ground Truth’s maps blew him away. The Google cofounder gave Thrun carte blanche to do what he wanted; he wanted to return to self-driving cars.

Project Chauffeur began in 2008, with Levandowski as Thrun’s right-hand man. As with Street View, Google engineers would work on the software while 510 Systems and a recent Levandowski startup, Anthony’s Robots, provided the lidar and the car itself.

Levandowski said this arrangement would have acted as a firewall if anything went terribly wrong. “Google absolutely did not want their name associated with a vehicle driving in San Francisco,” he told me in 2016. “They were worried about an engineer building a car that drove itself that crashes and kills someone and it gets back to Google. You have to ask permission [for side projects] and your manager has to be OK with it. Sebastian was cool. Google was cool.”

In order to move Project Chauffeur along as quickly as possible from theory to reality, Levandowski enlisted the help of a filmmaker friend he had worked with at Berkeley. In the TV show the two had made, Levandowski had created a cybernetic dolphin suit (seriously). Now they came up with the idea of a self-driving pizza delivery car for a show on the Discovery Channel called Prototype This! Levandowski chose a Toyota Prius, because it had a drive-by-wire system that was relatively easy to hack.

In a matter of weeks, Levandowski’s team had the car, dubbed Pribot, driving itself. If anyone asked what they were doing, Levandowski told me, “We’d say it’s a laser and just drive off.”

“Those were the Wild West days,” remembers Ben Discoe. “Anthony and Pierre-Yves…would engage the algorithm in the car and it would almost swipe some other car or almost go off the road, and they would come back in and joke about it. Tell stories about how exciting it was.”

But for the Discovery Channel show, at least, Levandowski followed the letter of the law. The Bay Bridge was cleared of traffic and a squad of police cars escorted the unmanned Prius from start to finish. Apart from getting stuck against a wall, the drive was a success. “You’ve got to push things and get some bumps and bruises along the way,” said Levandowski.

Another incident drove home the potential of self-driving cars. In 2010, Levandowski’s partner Stefanie Olsen was involved in a serious car accident while nine months pregnant with their first child. “My son Alex was almost never born,” Levandowski told a room full of Berkeley students in 2013. “Transportation [today] takes time, resources and lives. If you can fix that, that’s a really big problem to address.”

Over the next few years, Levandowski was key to Chauffeur’s progress. 510 Systems built five more self-driving cars for Google—as well as random gadgets like an autonomous tractor and a portable lidar system. “Anthony is lightning in a bottle, he has so much energy and so much vision,” remembers a friend and former 510 engineer. “I fricking loved brainstorming with the guy. I loved that we could create a vision of the world that didn’t exist yet and both fall in love with that vision.”

But there were downsides to his manic energy, too. “He had this very weird motivation about robots taking over the world—like actually taking over, in a military sense,” said the same engineer. “It was like [he wanted] to be able to control the world, and robots were the way to do that. He talked about starting a new country on an island. Pretty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it.”

In early 2011, that plan was to bring 510 Systems into the Googleplex. The startup’s engineers had long complained that they did not have equity in the growing company. When matters came to a head, Levandowski drew up a plan that would reserve the first $20 million of any acquisition for 510’s founders and split the remainder among the staff, according to two former 510 employees. “They said we were going to sell for hundreds of millions,” remembers one engineer. “I was pretty thrilled with the numbers.”

Indeed, that summer, Levandowski sold 510 Systems and Anthony’s Robots to Google – for $20 million, the exact cutoff before the wealth would be shared. Rank and file engineers did not see a penny, and some were even let go before the acquisition was completed. “I regret how it was handled…Some people did get the short end of the stick,” admitted Levandowski in 2016. The buyout also caused resentment among engineers at Google, who wondered how Levandowski could have made such a profit from his employer.

There would be more profits to come. According to a court filing, Page took a personal interest in motivating Levandowski, issuing a directive in 2011 to “make Anthony rich if Project Chauffeur succeeds.” Levandowski was given by far the highest share, about 10 percent, of a bonus program linked to a future valuation of Chauffeur—a decision that would later cost Google dearly.

**********

Ever since a New York Times story in 2010 revealed Project Chauffeur to the world, Google had been wanting to ramp up testing on public streets. That was tough to arrange in well-regulated California, but Levandowski wasn’t about to let that stop him. While manning Google’s stand at the Consumer Electronics Show in Las Vegas in January 2011, he got to chatting with lobbyist David Goldwater. “He told me he was having a hard time in California and I suggested Google try a smaller state, like Nevada,” Goldwater told me.

Together, Goldwater and Levandowski drafted legislation that would allow the company to test and operate self-driving cars in Nevada. By June, their suggestions were law, and in May 2012, a Google Prius passed the world’s first “self-driving tests” in Las Vegas and Carson City. “Anthony is gifted in so many different ways,” said Goldwater. “He’s got a strategic mind, he’s got a tactical mind, and a once-in-a-generation intellect. The great thing about Anthony is that he was willing to take risks, but they were calculated risks.”

However, Levandowski’s risk-taking had ruffled feathers at Google. It was only after Nevada had passed its legislation that Levandowski discovered Google had a whole team dedicated to government relations. “I thought you could just do it yourself,” he told me sheepishly in 2016. “[I] got a little bit in trouble for doing it.”

That might be understating it. One problem was that Levandowski had lost his air cover at Google. In May 2012, his friend Sebastian Thrun turned his attention to starting online learning company Udacity. Page put another professor, Chris Urmson from Carnegie Mellon, in charge. Not only did Levandowski think the job should have been his, but the two also had terrible chemistry.

“They had a really hard time getting along,” said Page at a deposition in July. “It was a constant management headache to help them get through that.”

Then in July 2013, Gaetan Pennecot, a 510 alum working on Chauffeur’s lidar team, got a worrying call from a vendor. According to Waymo’s complaint, a small company called Odin Wave had placed an order for a custom-made part that was extremely similar to one used in Google’s lidars.

Pennecot shared this with his team leader, Pierre-Yves Droz, the cofounder of 510 Systems. Droz did some digging and replied in an email to Pennecot (in French, which we’ve translated): “They’re clearly making a lidar. And it’s John (510’s old lawyer) who incorporated them. The date of incorporation corresponds to several months after Anthony fell out of favor at Google.”

As the story emerges in court documents, Droz had found Odin Wave’s company records. Not only had Levandowski’s lawyer founded the company in August 2012, but it was also based in a Berkeley office building that Levandowski owned, was being run by a friend of Levandowski’s, and its employees included engineers he had worked with at Velodyne and 510 Systems. One even spoke with Levandowski before being hired. The company was developing long range lidars similar to those Levandowski had worked on at 510 Systems. But Levandowski’s name was nowhere on the firm’s paperwork.

Droz confronted Levandowski, who denied any involvement, and Droz decided not to follow the paper trail any further. “I was pretty happy working at Google, and…I didn’t want to jeopardize that by…exposing more of Anthony’s shenanigans,” he said at a deposition last month.

Odin Wave changed its name to Tyto Lidar in 2014, and in the spring of 2015 Levandowski was even part of a Google investigation into acquiring Tyto. This time, however, Google passed on the purchase. That seemed to demoralize Levandowski further. “He was rarely at work, and he left a lot of the responsibility [for] evaluating people on the team to me or others,” said Droz in his deposition.

“Over time my patience with his manipulations and lack of enthusiasm and commitment to the project [sic], it became clearer and clearer that this was a lost cause,” said Chris Urmson in a deposition.

As he was torching bridges at Google, Levandowski was itching for a new challenge. Luckily, Sebastian Thrun was back on the autonomous beat. Larry Page and Thrun had been thinking about electric flying taxis that could carry one or two people. Project Tiramisu, named after the dessert which means “lift me up” in Italian, involved a winged plane flying in circles, picking up passengers below using a long tether.

Thrun knew just the person to kickstart Tiramisu. According to a source working there at the time, Levandowski was brought in to oversee Tiramisu as an “advisor and stakeholder.” Levandowski would show up at the project’s workspace in the evenings, and was involved in tests at one of Page’s ranches. Tiramisu’s tethers soon pivoted to a ride-aboard electric drone, now called the Kitty Hawk flyer. Thrun is CEO of Kitty Hawk, which is funded by Page rather than Alphabet, the umbrella company that now owns Google and its sibling companies.

Waymo’s complaint says that around this time Levandowski started soliciting Google colleagues to leave and start a competitor in the autonomous vehicle business. Droz testified that Levandowski told him it “would be nice to create a new self-driving car startup.” Furthermore, he said that Uber would be interested in buying the team responsible for Google’s lidar.

Uber had exploded onto the self-driving car scene early in 2015, when it lured almost 50 engineers away from Carnegie Mellon University to form the core of its Advanced Technologies Center. Uber cofounder Travis Kalanick had described autonomous technology as an existential threat to the ride-sharing company, and was hiring furiously. According to Droz, Levandowski said that he began meeting Uber executives that summer.

When Urmson learned of Levandowski’s recruiting efforts, his deposition states, he sent an email to human resources in August beginning, “We need to fire Anthony Levandowski.” Despite an investigation, that did not happen.

But Levandowski’s now not-so-secret plan would soon see him leaving of his own accord—with a mountain of cash. In 2015, Google was due to starting paying the Chauffeur bonuses, linked to a valuation that it would have “sole and absolute discretion” to calculate. According to previously unreported court filings, external consultants calculated the self-driving car project as being worth $8.5 billion. Google ultimately valued Chauffeur at around half that amount: $4.5 billion. Despite this downgrade, Levandowski’s share in December 2015 amounted to over $50 million – nearly twice as much as the second largest bonus of $28 million, paid to Chris Urmson.

**********

Otto seemed to spring forth fully formed in May 2016, demonstrating a self-driving 18-wheel truck barreling down a Nevada highwaywith no one behind the wheel. In reality, Levandowski had been planning it for some time.

Levandowski and his Otto cofounders at Google had spent the Christmas holidays and the first weeks of 2016 taking their recruitment campaign up a notch, according to Waymo court filings. Waymo’s complaint alleges Levandowski told colleagues he was planning to “replicate” Waymo’s technology at a competitor, and was even soliciting his direct reports at work.

One engineer who had worked at 510 Systems attended a barbecue at Levandowski’s home in Palo Alto, where Levandowski pitched his former colleagues and current Googlers on the startup. “He wanted every Waymo person to resign simultaneously, a fully synchronized walkout. He was firing people up for that,” remembers the engineer.

On January 27, Levandowski resigned from Google without notice. Within weeks, Levandowski had a draft contract to sell Otto to Uber for an amount widely reported as $680 million. Although the full-scale synchronized walkout never happened, half a dozen Google employees went with Levandowski, and more would join in the months ahead. But the new company still did not have a product to sell.

Levandowski brought Nevada lobbyist David Goldwater back to help. “There was some brainstorming with Anthony and his team,” said Goldwater in an interview. “We were looking to do a demonstration project where we could show what he was doing.”

After exploring the idea of an autonomous passenger shuttle in Las Vegas, Otto settled on developing a driverless semi-truck. But with the Uber deal rushing forward, Levandowski needed results fast. “By the time Otto was ready to go with the truck, they wanted to get right on the road,” said Goldwater. That meant demonstrating their prototype without obtaining the very autonomous vehicle licence Levandowski had persuaded Nevada to adopt. (One state official called this move “illegal.”) Levandowski also had Otto acquire the controversial Tyto Lidar—the company based in the building he owned—in May, for an undisclosed price.

The full-court press worked. Uber completed its own acquisition of Otto in August, and Uber founder Travis Kalanick put Levandowski in charge of the combined companies’ self-driving efforts across personal transportation, delivery and trucking. Uber would even propose a Tiramisu-like autonomous air taxi called Uber Elevate. Now reporting directly to Kalanick and in charge of a 1500-strong group, Levandowski demanded the email address “robot@uber.com.”

In Kalanick, Levandowski found both a soulmate and a mentor to replace Sebastian Thrun. Text messages between the two, disclosed during the lawsuit’s discovery process, capture Levandowski teaching Kalanick about lidar at late night tech sessions, while Kalanick shared advice on management. “Down to hang out this eve and mastermind some shit,” texted Kalanick, shortly after the acquisition. “We’re going to take over the world. One robot at a time,” wrote Levandowski another time.

But Levandowski’s amazing robot future was about to crumble before his eyes.

***********

Last December, Uber launched a pilot self-driving taxi program in San Francisco. As with Otto in Nevada, Levandowski failed to get a license to operate the high-tech vehicles, claiming that because the cars needed a human overseeing them, they were not truly autonomous. The DMV disagreed and revoked the vehicles’ licenses. Even so, during the week the cars were on the city’s streets, they had been spotted running red lights on numerous occasions.

Worse was yet to come. Levandowski had always been a controversial figure at Google. With his abrupt resignation, the launch of Otto, and its rapid acquisition by Uber, Google launched an internal investigation in the summer of 2016. It found that Levandowski had downloaded nearly 10 gigabytes of Google’s secret files just before he resigned, many of them relating to lidar technology.

Also in December 2016, in an echo of the Tyto incident, a Waymo employee was accidentally sent an email from a vendor that included a drawing of an Otto circuit board. The design looked very similar to Waymo’s current lidars.

Waymo saysthe “final piece of the puzzle” came from a story about Otto I wrote for Backchannel based on a public records request. A document sent by Otto to Nevada officials boasted the company had an “in-house custom-built 64-laser” lidar system. To Waymo, that sounded very much like technology it had developed. In February this year, Waymo filed its headline lawsuit accusing Uber (along with Otto Trucking, yet another of Levandowski’s companies, but one that Uber had not purchased) of violating its patents and misappropriating trade secrets on lidar and other technologies.

Uber immediately denied the accusations and has consistently maintained its innocence. Uber says there is no evidence that any of Waymo’s technical files ever came to Uber, let alone that Uber ever made use of them. While Levandowski is not named as a defendant, he has refused to answer questions in depositions with Waymo’s lawyers and is expected to do the same at trial. (He turned down several requests for interviews for this story.) He also didn’t fully cooperate with Uber’s own investigation into the allegations, and that, Uber says, is why it fired him in May.

Levandowski probably does not need a job. With the purchase of 510 Systems and Anthony’s Robots, his salary, and bonuses, Levandowski earned at least $120 million from his time at Google. Some of that money has been invested in multiple real estate developments with his college friend Randy Miller, including several large projects in Oakland and Berkeley.

But Levandowski has kept busy behind the scenes. In August, court filings say, he personally tracked down a pair of earrings given to a Google employee at her going-away party in 2014. The earrings were made from confidential lidar circuit boards, and will presumably be used by Otto Trucking’s lawyers to suggest that Waymo does not keep a very close eye on its trade secrets.

Some of Levandowski’s friends and colleagues have expressed shock at the allegations he faces, saying that they don’t reflect the person they knew. “It is…in character for Anthony to play fast and loose with things like intellectual property if it’s in pursuit of building his dream robot,” said Ben Discoe. “[But] I was a little surprised at the alleged magnitude of his disregard for IP.”

“Definitely one of Anthony’s faults is to be aggressive as he is, but it’s also one of his great attributes. I don’t see [him doing] all the other stuff he has been accused of,” said David Goldwater.

But Larry Page is no longer convinced that Levandowski was key to Chauffeur’s success. In his deposition to the court, Page said, “I believe Anthony’s contributions are quite possibly negative of a high amount.” At Uber, some engineers privately say that Levandowski’s poor management style set back that company’s self-driving effort by a couple of years.

Even after this trial is done, Levandowski will not be able to rest easy. In May, a judge referred evidence from the case to the US Attorney’s office “for investigation of possible theft of trade secrets,” raising the possibility of criminal proceedings and prison timeYet on the timeline that matters to Anthony Levandowski, even that may not mean much. Building a robotically enhanced future is his passionate lifetime project. On the Way of the Future, lawsuits or even a jail sentence might just feel like little bumps in the road.

“This case is teaching Anthony some hard lessons but I don’t see [it] keeping him down,” said Randy Miller. “He believes firmly in his vision of a better world through robotics and he’s convinced me of it. It’s clear to me that he’s on a mission.”

“I think Anthony will rise from the ashes,” agrees one friend and former 510 Systems engineer. “Anthony has the ambition, the vision, and the ability to recruit and drive people. If he could just play it straight, he could be the next Steve Jobs or Elon Musk. But he just doesn’t know when to stop cutting corners.”

———-

4. In light of Levandowski’s Otto self-driving  truck technology, we note tech executive Andrew Yang’s warning about the potential impact of that one technology on our society. (Yang is running for President.)

“His 2020 Slogan: Beware of Robots” by Kevin Roose; The New York Times; 2/11/2018.

. . . . “All you need is self-driving cars to destabilize society,” Mr. [Andrew] Yang said over lunch at a Thai restaurant in Manhattan last month, in his first interview about his campaign. In just  a few years, he said, “we’re going to have a million truck drivers out of work who are 94 percent male, with an  average  level of education of high school or one year of college.”

“That one innovation,” he added, “will be enough to create riots in the street. And we’re about to do the  same thing to retail workers, call center workers, fast-food workers, insurance companies, accounting firms.” . . . .

5. British scientist Stephen Hawking recently warned of the potential danger to humanity posed by the growth of AI (artificial intelligence) technology.

“Stephen Hawking Warns Artificial Intelligence Could End Mankind” by Rory Cellan-Jones; BBC News; 12/02/2014.

Prof Stephen Hawking, one of Britain’s pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence.

He told the BBC:”The development of full artificial intelligence could spell the end of the human race.”

His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI. . . .

. . . . Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.

“It would take off on its own, and re-design itself at an ever increasing rate,” he said. [See the article in line item #1c.–D.E.]

“Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” . . . .

6.  In L-2 (recorded in January of 1995–20 years before Hawking’s warning) Mr. Emory warned about the dangers of AI, combined with DNA-based memory systems.

 

Discussion

No comments for “FTR #997 Summoning the Demon, Part 2: Sorcer’s Apprentice”

Post a comment