Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

For The Record  

FTR #996 Civilization’s Twilight: Update on Technocratic Fascism

Dave Emory’s entire lifetime of work is available on a flash drive that can be obtained HERE. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by the fall of 2017. The new drive is available for a tax-deductible contribution of $65.00 or more.

WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE.

You can subscribe to e-mail alerts from Spitfirelist.com HERE.

You can subscribe to RSS feed from Spitfirelist.com HERE.

You can subscribe to the comments made on programs and posts–an excellent source of information in, and of, itself HERE.

This broadcast was recorded in one 60-minute segment.

Introduction: Updating our ongoing analysis of what Mr. Emory calls “technocratic fascism,” we examine how existing technologies are neutralizing and/or rendering obsolete foundational elements of our civilization and democratic governmental systems.

For purposes of refreshing the line of argument presented here, we reference a vitally important article by David Golumbia. ” . . . . Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (‘civic,’ ‘ethical,’ ‘white’ and ‘black’ hat alike), hacktivists, WikiLeaks fans [and Julian Assange et al–D. E.], Anonymous ‘members,’ even Edward Snowden himself walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands. . . . [Tor co-creator] Dingledine asserts that a small group of software developers can assign to themselves that role, and that members of democratic polities have no choice but to accept them having that role. . . .”

Beginning with a chilling opinion piece in the New York Times, we note that technological development threatens to super-charge the Big Lies that drive our world. As anyone who saw the Star Wars film “Rogue One” knows, the technology required to create a nearly life-like, computer-generated video of a real person is already a reality. Once the province of movie studios and other firms with millions to spend, the technology is now available for free download.

” . . . . In 2016 Gareth Edwards, the director of the Star Wars film ‘Rogue One,’ was able to create a scene featuring a young Princess Leia by manipulating images of Carrie Fisher as she looked in 1977. Mr. Edwards had the best hardware and software a $200 million Hollywood budget could buy. Less than two years later, images of similar quality can be created with software available for free download on Reddit. That was how a faked video supposedly of the actress Emma Watson in a shower with another woman ended up on the website Celeb Jihad. . . .”

The technology has already rendered obsolete selective editing such as that performed by James O’Keefe: ” . . . . as the novelist William Gibson once said, ‘The street finds its own uses for things.’ So do rogue political actors. The implications for democracy are eye-opening. The conservative political activist James O’Keefe has created a cottage industry manipulating political perceptions by editing footage in misleading ways. In 2018, low-tech editing like Mr. O’Keefe’s is already an anachronism: Imagine what even less scrupulous activists could do with the power to create ‘video’ framing real people for things they’ve never actually done. One harrowing potential eventuality: Fake video and audio may become so convincing that it can’t be distinguished from real recordings, rendering audio and video evidence inadmissible in court. . . .”

After highlighting a story about AI-generated “deepfake” pornography with people’s faces superimposed on others’ bodies in pornographic layouts, we note how robots have altered our political and commercial landscapes through cyber technology: ” . . . . Robots are getting better, every day, at impersonating humans. When directed by opportunists, malefactors and sometimes even nation-states, they pose a particular threat to democratic societies, which are premised on being open to the people. Robots posing as people have become a menace. . . . In coming years, campaign finance limits will be (and maybe already are) evaded by robot armies posing as ‘small’ donors. And actual voting is another obvious target — perhaps the ultimate target. . . .”

Before the actual replacement of manual labor by robots, devices to technocratically “improve”–read “coercively engineer”–workers have been patented by Amazon and used on workers in some of its facilities. ” . . . . What if your employer made you wear a wristband that tracked your every move, and that even nudged you via vibrations when it judged that you were doing something wrong? What if your supervisor could identify every time you paused to scratch or fidget, and for how long you took a bathroom break? What may sound like dystopian fiction could become a reality for Amazon warehouse workers around the world. The company has won two patents for such a wristband. . . .”

For some U.K. Amazon warehouse workers, the future is now: ” . . . . Max Crawford, a former Amazon warehouse worker in Britain, said in a phone interview, ‘After a year working on the floor, I felt like I had become a version of the robots I was working with.’ He described having to process hundreds of items in an hour — a pace so extreme that one day, he said, he fell over from dizziness. ‘There was no time to go to the loo,’ he said, using the British slang for toilet. ‘You had to process the items in seconds and then move on. If you didn’t meet targets, you were fired.’

“He worked back and forth at two Amazon warehouses for more than two years and then quit in 2015 because of health concerns, he said: ‘I got burned out.’ Mr. Crawford agreed that the wristbands might save some time and labor, but he said the tracking was ‘stalkerish’ and feared that workers might be unfairly scrutinized if their hands were found to be ‘in the wrong place at the wrong time.’ ‘They want to turn people into machines,’ he said. ‘The robotic technology isn’t up to scratch yet, so until it is, they will use human robots.’ . . . .”

Some tech workers, well placed at R & D pacesetters and giants such as Facebook and Google, have done an about-face on the impact of their earlier efforts and are now struggling against the misuse of the technologies they helped to launch:

” . . . . A group of Silicon Valley technologists who were early employees at Facebook and Google, alarmed over the ill effects of social networks and smartphones, are banding together to challenge the companies they helped build. . . . ‘The largest supercomputers in the world are inside of two companies — Google and Facebook — and where are we pointing them?’ Mr. [Tristan] Harris said. ‘We’re pointing them at people’s brains, at children.’ . . . . Mr. [Roger] McNamee said he had joined the Center for Humane Technology because he was horrified by what he had helped enable as an early Facebook investor. ‘Facebook appeals to your lizard brain — primarily fear and anger,’ he said. ‘And with smartphones, they’ve got you for every waking moment.’ . . . .”

Transitioning to our next program–updating AI (artificial intelligence) technology as it applies to technocratic fascism–we note that AI machines are being designed to develop other AIs–“The Rise of the Machine.” ” . . . . Jeff Dean, one of Google’s leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data. AutoML, in turn, is a machine learning algorithm that learns to build other machine-learning algorithms. With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry. . . .”

Pro and Con on the subject of Artificial Intelligence

1. There was a chilling recent opinion piece in the New York Times: technological development threatens to super-charge the Big Lies that drive our world. As anyone who saw the Star Wars film “Rogue One” knows well, the technology required to create a nearly life-like, computer-generated video of a real person is already a reality. Once the province of movie studios and other firms with millions to spend, the technology is now available for free download.

” . . . . In 2016 Gareth Edwards, the director of the Star Wars film ‘Rogue One,’ was able to create a scene featuring a young Princess Leia by manipulating images of Carrie Fisher as she looked in 1977. Mr. Edwards had the best hardware and software a $200 million Hollywood budget could buy. Less than two years later, images of similar quality can be created with software available for free download on Reddit. That was how a faked video supposedly of the actress Emma Watson in a shower with another woman ended up on the website Celeb Jihad. . . .”

The technology has already rendered obsolete selective editing such as that performed by James O’Keefe: ” . . . . as the novelist William Gibson once said, ‘The street finds its own uses for things.’ So do rogue political actors. The implications for democracy are eye-opening. The conservative political activist James O’Keefe has created a cottage industry manipulating political perceptions by editing footage in misleading ways. In 2018, low-tech editing like Mr. O’Keefe’s is already an anachronism: Imagine what even less scrupulous activists could do with the power to create ‘video’ framing real people for things they’ve never actually done. One harrowing potential eventuality: Fake video and audio may become so convincing that it can’t be distinguished from real recordings, rendering audio and video evidence inadmissible in court. . . .”

“Our Hackable Political Future” by Henry J. Farrell and Rick Perlstein; The New York Times; 02/04/2018.

Imagine it is the spring of 2019. A bottom-feeding website, perhaps tied to Russia, “surfaces” video of a sex scene starring an 18-year-old Kirsten Gillibrand. It is soon debunked as a fake, the product of a user-friendly video application that employs generative adversarial network technology to convincingly swap out one face for another.

It is the summer of 2019, and the story, predictably, has stuck around — part talk-show joke, part right-wing talking point. “It’s news,” political journalists say in their own defense. “People are talking about it. How can we not?”

Then it is fall. The junior senator from New York State announces her campaign for the presidency. At a diner in New Hampshire, one “low information” voter asks another: “Kirsten What’s-her-name? She’s running for president? Didn’t she have something to do with pornography?”

Welcome to the shape of things to come. In 2016 Gareth Edwards, the director of the Star Wars film “Rogue One,” was able to create a scene featuring a young Princess Leia by manipulating images of Carrie Fisher as she looked in 1977. Mr. Edwards had the best hardware and software a $200 million Hollywood budget could buy. Less than two years later, images of similar quality can be created with software available for free download on Reddit. That was how a faked video supposedly of the actress Emma Watson in a shower with another woman ended up on the website Celeb Jihad.

Programs like these have many legitimate applications. They can help computer-security experts probe for weaknesses in their defenses and help self-driving cars learn how to navigate unusual weather conditions. But as the novelist William Gibson once said, “The street finds its own uses for things.” So do rogue political actors. The implications for democracy are eye-opening.

The conservative political activist James O’Keefe has created a cottage industry manipulating political perceptions by editing footage in misleading ways. In 2018, low-tech editing like Mr. O’Keefe’s is already an anachronism: Imagine what even less scrupulous activists could do with the power to create “video” framing real people for things they’ve never actually done. One harrowing potential eventuality: Fake video and audio may become so convincing that it can’t be distinguished from real recordings, rendering audio and video evidence inadmissible in court.

A program called Face2Face, developed at Stanford, films one person speaking, then manipulates that person’s image to resemble someone else’s. Throw in voice manipulation technology, and you can literally make anyone say anything — or at least seem to.
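
[The “generative adversarial network technology” the authors mention can be sketched in a few lines of code. What follows is a minimal, illustrative training loop in Python/PyTorch, assuming toy multi-layer perceptrons and random vectors standing in for real face images; it is not the software discussed in the article, only the generator-versus-discriminator pattern that underlies such tools.]

    import torch
    import torch.nn as nn

    # Toy stand-ins: 64-dim vectors play the role of face images.
    DIM, NOISE = 64, 16
    gen = nn.Sequential(nn.Linear(NOISE, 128), nn.ReLU(), nn.Linear(128, DIM))
    disc = nn.Sequential(nn.Linear(DIM, 128), nn.ReLU(), nn.Linear(128, 1))
    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()
    real_data = torch.randn(512, DIM) * 0.5 + 1.0  # hypothetical "real" samples

    for step in range(200):
        # Discriminator: learn to score real samples high, generated ones low.
        real = real_data[torch.randint(0, 512, (32,))]
        fake = gen(torch.randn(32, NOISE)).detach()
        d_loss = bce(disc(real), torch.ones(32, 1)) + bce(disc(fake), torch.zeros(32, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator: learn to produce samples the discriminator scores as real.
        fake = gen(torch.randn(32, NOISE))
        g_loss = bce(disc(fake), torch.ones(32, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
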

The technology isn’t quite there; Princess Leia was a little wooden, if you looked carefully. But it’s closer than you might think. And even when fake video isn’t perfect, it can convince people who want to be convinced, especially when it reinforces offensive gender or racial stereotypes.

In 2007, Barack Obama’s political opponents insisted that footage existed of Michelle Obama ranting against “whitey.” In the future, they may not have to worry about whether it actually existed. If someone called their bluff, they may simply be able to invent it, using data from stock photos and pre-existing footage.

The next step would be one we are already familiar with: the exploitation of the algorithms used by social media sites like Twitter and Facebook to spread stories virally to those most inclined to show interest in them, even if those stories are fake.

It might be impossible to stop the advance of this kind of technology. But the relevant algorithms here aren’t only the ones that run on computer hardware. They are also the ones that undergird our too easily hacked media system, where garbage acquires the perfumed scent of legitimacy with all too much ease. Editors, journalists and news producers can play a role here — for good or for bad.

Outlets like Fox News spread stories about the murder of Democratic staff members and F.B.I. conspiracies to frame the president. Traditional news organizations, fearing that they might be left behind in the new attention economy, struggle to maximize “engagement with content.”

This gives them a built-in incentive to spread informational viruses that enfeeble the very democratic institutions that allow a free media to thrive. Cable news shows consider it their professional duty to provide “balance” by giving partisan talking heads free rein to spout nonsense — or amplify the nonsense of our current president.

It already feels as though we are living in an alternative science-fiction universe where no one agrees on what is true. Just think how much worse it will be when fake news becomes fake video. Democracy assumes that its citizens share the same reality. We’re about to find out whether democracy can be preserved when this assumption no longer holds.

2. Both Twitter and PornHub, the online pornography giant, are taking action to remove the numerous “deepfake” videos of celebrities being super-imposed onto porn actors, in response to the flood of such videos already being generated.

“PornHub, Twitter Ban ‘Deepfake’ AI-Modified Porn” by Angela Moscaritolo; PC Magazine; 02/07/2018.

It might be kind of comical to see Nicolas Cage’s face on the body of a woman, but expect to see less of this type of content floating around on PornHub and Twitter in the future.

As Motherboard first reported, both sites are taking action against artificial intelligence-powered pornography, known as “deepfakes.”

Deepfakes, for the uninitiated, are porn videos created by using a machine learning algorithm to match someone’s face to another person’s body. Loads of celebrities have had their faces used in porn scenes without their consent, and the results are almost flawless. Check out the SFW example below for a better idea of what we’re talking about.
[see chillingly realistic video of Nicolas Cage’s head on a woman’s body]
In a statement to PCMag on Wednesday, PornHub Vice President Corey Price said the company in 2015 introduced a submission form, which lets users easily flag nonconsensual content like revenge porn for removal. People have also started using that tool to flag deepfakes, he said.

The company still has a lot of cleaning up to do. Motherboard reported there are still tons of deepfakes on PornHub.

“I was able to easily find dozens of deepfakes posted in the last few days, many under the search term ‘deepfakes’ or with deepfakes and the name of celebrities in the title of the video,” Motherboard’s Samantha Cole wrote.

Over on Twitter, meanwhile, users can now be suspended for posting deepfakes and other nonconsensual porn.

“We will suspend any account we identify as the original poster of intimate media that has been produced or distributed without the subject’s consent,” a Twitter spokesperson told Motherboard. “We will also suspend any account dedicated to posting this type of content.”

The site reported that Discord and Gfycat take a similar stance on deepfakes. For now, these types of videos appear to be primarily circulating via Reddit, where the deepfake community currently boasts around 90,000 subscribers.

3. No “ifs,” “ands,” or “bots!”  ” . . . . Robots are getting better, every day, at impersonating humans. When directed by opportunists, malefactors and sometimes even nation-states, they pose a particular threat to democratic societies, which are premised on being open to the people. Robots posing as people have become a menace. . . . In coming years, campaign finance limits will be (and maybe already are) evaded by robot armies posing as ‘small’ donors. And actual voting is another obvious target — perhaps the ultimate target. . . .”

“Please Prove You’re Not a Robot” by Tim Wu; The New York Times; 7/16/2017; p. 8 (Review Section).

 When science fiction writers first imagined robot invasions, the idea was that bots would become smart and powerful enough to take over the world by force, whether on their own or as directed by some evildoer. In reality, something only slightly less scary is happening.

Robots are getting better, every day, at impersonating humans. When directed by opportunists, malefactors and sometimes even nation-states, they pose a particular threat to democratic societies, which are premised on being open to the people.

Robots posing as people have become a menace. For popular Broadway shows (need we say “Hamilton”?), it is actually bots, not humans, who do much and maybe most of the ticket buying. Shows sell out immediately, and the middlemen (quite literally, evil robot masters) reap millions in ill-gotten gains.

Philip Howard, who runs the Computational Propaganda Research Project at Oxford, studied the deployment of propaganda bots during voting on Brexit, and the recent American and French presidential elections. Twitter is particularly distorted by its millions of robot accounts; during the French election, it was principally Twitter robots who were trying to make #MacronLeaks into a scandal. Facebook has admitted it was essentially hacked during the American election in November. In Michigan, Mr. Howard notes, “junk news was shared just as widely as professional news in the days leading up to the election.”

Robots are also being used to attack the democratic features of the administrative state. This spring, the Federal Communications Commission put its proposed revocation of net neutrality up for public comment. In previous years such proceedings attracted millions of (human) commentators. This time, someone with an agenda but no actual public support unleashed robots who impersonated (via stolen identities) hundreds of thousands of people, flooding the system with fake comments against federal net neutrality rules.

To be sure, today’s impersonation-bots are different from the robots imagined in science fiction: They aren’t sentient, don’t carry weapons and don’t have physical bodies. Instead, fake humans just have whatever is necessary to make them seem human enough to “pass”: a name, perhaps a virtual appearance, a credit-card number and, if necessary, a profession, birthday and home address. They are brought to life by programs or scripts that give one person the power to imitate thousands.

The problem is almost certain to get worse, spreading to even more areas of life as bots are trained to become better at mimicking humans. Given the degree to which product reviews have been swamped by robots (which tend to hand out five stars with abandon), commercial sabotage in the form of negative bot reviews is not hard to predict.

In coming years, campaign finance limits will be (and maybe already are) evaded by robot armies posing as “small” donors. And actual voting is another obvious target — perhaps the ultimate target. So far, we’ve been content to leave the problem to the tech industry, where the focus has been on building defenses, usually in the form of Captchas (“completely automated public Turing test to tell computers and humans apart”), those annoying “type this” tests to prove you are not a robot. But leaving it all to industry is not a long-term solution.
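
[A Captcha is, at bottom, a challenge-response check: pose a question that is easy for a person and, ideally, hard to automate, then verify the reply. The Python sketch below is a toy stand-in, using a random arithmetic question in place of the distorted-image tests real systems employ; the function names are invented for illustration.]

    import random

    def make_challenge():
        # Issue a question that is trivial for a person to answer.
        a, b = random.randint(2, 9), random.randint(2, 9)
        return f"What is {a} plus {b}?", a + b

    def verify(reply, expected):
        # Admit the visitor only if the reply matches the expected answer.
        try:
            return int(reply.strip()) == expected
        except ValueError:
            return False

    question, expected = make_challenge()
    print(question)                # shown to the visitor
    print(verify("11", expected))  # True only when the reply is correct
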

For one thing, the defenses don’t actually deter impersonation bots, but perversely reward whoever can beat them. And perhaps the greatest problem for a democracy is that companies like Facebook and Twitter lack a serious financial incentive to do anything about matters of public concern, like the millions of fake users who are corrupting the democratic process.

Twitter estimates at least 27 million probably fake accounts; researchers suggest the real number is closer to 48 million, yet the company does little about the problem. The problem is a public as well as private one, and impersonation robots should be considered what the law calls “hostis humani generis”: enemies of mankind, like pirates and other outlaws. That would allow for a better offensive strategy: bringing the power of the state to bear on the people deploying the robot armies to attack commerce or democracy.

The ideal anti-robot campaign would employ a mixed technological and legal approach. Improved robot detection might help us find the robot masters or potentially help national security unleash counterattacks, which can be necessary when attacks come from overseas. There may be room for deputizing private parties to hunt down bad robots. A simple legal remedy would be a “Blade Runner” law that makes it illegal to deploy any program that hides its real identity to pose as a human. Automated processes should be required to state, “I am a robot.” When dealing with a fake human, it would be nice to know.

Using robots to fake support, steal tickets or crash democracy really is the kind of evil that science fiction writers were warning about. The use of robots takes advantage of the fact that political campaigns, elections and even open markets make humanistic assumptions, trusting that there is wisdom or at least legitimacy in crowds and value in public debate. But when support and opinion can be manufactured, bad or unpopular arguments can win not by logic but by a novel, dangerous form of force — the ultimate threat to every democracy.

4. Before the actual replacement of manual labor by robots, devices to technocratically “improve”–read “coercively engineer”–workers have been patented by Amazon and used on workers in some of its facilities.

” . . . . What if your employer made you wear a wristband that tracked your every move, and that even nudged you via vibrations when it judged that you were doing something wrong? What if your supervisor could identify every time you paused to scratch or fidget, and for how long you took a bathroom break? What may sound like dystopian fiction could become a reality for Amazon warehouse workers around the world. The company has won two patents for such a wristband. . . .”

For some U.K. Amazon warehouse workers, the future is now: ” . . . . Max Crawford, a former Amazon warehouse worker in Britain, said in a phone interview, ‘After a year working on the floor, I felt like I had become a version of the robots I was working with.’ He described having to process hundreds of items in an hour — a pace so extreme that one day, he said, he fell over from dizziness. ‘There was no time to go to the loo,’ he said, using the British slang for toilet. ‘You had to process the items in seconds and then move on. If you didn’t meet targets, you were fired.’

He worked back and forth at two Amazon warehouses for more than two years and then quit in 2015 because of health concerns, he said: ‘I got burned out.’ Mr. Crawford agreed that the wristbands might save some time and labor, but he said the tracking was ‘stalkerish’ and feared that workers might be unfairly scrutinized if their hands were found to be ‘in the wrong place at the wrong time.’ ‘They want to turn people into machines,’ he said. ‘The robotic technology isn’t up to scratch yet, so until it is, they will use human robots.’ . . . .”

“Track Hands of Workers? Amazon Has Patents for It” by Ceylan Yeginsu; The New York Times; 2/2/2018; p. B3 [Western Edition].

 What if your employer made you wear a wristband that tracked your every move, and that even nudged you via vibrations when it judged that you were doing something wrong? What if your supervisor could identify every time you paused to scratch or fidget, and for how long you took a bathroom break?

What may sound like dystopian fiction could become a reality for Amazon warehouse workers around the world. The company has won two patents for such a wristband, though it was unclear if Amazon planned to actually manufacture the tracking device and have employees wear it.

The online retail giant, which plans to build a second headquarters and recently shortlisted 20 potential host cities for it, has also been known to experiment in-house with new technology before selling it worldwide.

Amazon, which rarely discloses information on its patents, could not immediately be reached for comment on Thursday. But the patent disclosure goes to the heart of a global debate about privacy and security. Amazon already has a reputation for a workplace culture that thrives on a hard-hitting management style, and has experimented with how far it can push white-collar workers in order to reach its delivery targets.

Privacy advocates, however, note that a lot can go wrong even with everyday tracking technology. On Monday, the tech industry was jolted by the discovery that Strava, a fitness app that allows users to track their activities and compare their performance with other people running or cycling in the same places, had unwittingly highlighted the locations of United States military bases and the movements of their personnel in Iraq and Syria.

The patent applications, filed in 2016, were published in September, and Amazon won them this week, according to GeekWire, which reported the patents’ publication on Tuesday. In theory, Amazon’s proposed technology would emit ultrasonic sound pulses and radio transmissions to track where an employee’s hands were in relation to inventory bins, and provide “haptic feedback” to steer the worker toward the correct bin.
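
[The behavior the patent describes can be imagined as a simple feedback loop: estimate the hand’s position from the ultrasonic pulses, compare it with the target bin, and vibrate when they diverge. The Python sketch below is purely hypothetical; Amazon has not published an implementation, and every name and threshold here is invented for illustration.]

    import math

    def haptic_signal(hand_pos, bin_pos, tolerance=0.15):
        """Return a vibration intensity in [0, 1]: silent when the estimated
        hand position is within `tolerance` meters of the correct bin,
        stronger the farther it strays."""
        distance = math.dist(hand_pos, bin_pos)
        if distance <= tolerance:
            return 0.0
        return min(1.0, distance - tolerance)

    # Example: a hand roughly 0.6 m from the assigned bin triggers a buzz.
    print(haptic_signal(hand_pos=(1.0, 0.5, 1.2), bin_pos=(1.4, 0.9, 1.0)))
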

The aim, Amazon says in the patent, is to streamline “time consuming” tasks, like responding to orders and packaging them for speedy delivery. With guidance from a wristband, workers could fill orders faster. Critics say such wristbands raise concerns about privacy and would add a new layer of surveillance to the workplace, and that the use of the devices could result in employees being treated more like robots than human beings.

Current and former Amazon employees said the company already used similar tracking technology in its warehouses and said they would not be surprised if it put the patents into practice.

Max Crawford, a former Amazon warehouse worker in Britain, said in a phone interview, “After a year working on the floor, I felt like I had become a version of the robots I was working with.” He described having to process hundreds of items in an hour — a pace so extreme that one day, he said, he fell over from dizziness. “There was no time to go to the loo,” he said, using the British slang for toilet. “You had to process the items in seconds and then move on. If you didn’t meet targets, you were fired.”

He worked back and forth at two Amazon warehouses for more than two years and then quit in 2015 because of health concerns, he said: “I got burned out.” Mr. Crawford agreed that the wristbands might save some time and labor, but he said the tracking was “stalkerish” and feared that workers might be unfairly scrutinized if their hands were found to be “in the wrong place at the wrong time.” “They want to turn people into machines,” he said. “The robotic technology isn’t up to scratch yet, so until it is, they will use human robots.”

Many companies file patents for products that never see the light of day. And Amazon would not be the first employer to push boundaries in the search for a more efficient, speedy work force. Companies are increasingly introducing artificial intelligence into the workplace to help with productivity, and technology is often used to monitor employee whereabouts.

One company in London is developing artificial intelligence systems to flag unusual workplace behavior, while another used a messaging application to track its employees. In Wisconsin, a technology company called Three Square Market offered employees an opportunity to have microchips implanted under their skin in order, it said, to be able to use its services seamlessly. Initially, more than 50 out of 80 staff members at its headquarters in River Falls, Wis., volunteered.

5. Some tech workers, well placed at R & D pacesetters and giants such as Facebook and Google, have done an about-face on the impact of their earlier efforts and are now struggling against the misuse of the technologies they helped to launch:

” . . . . A group of Silicon Valley technologists who were early employees at Facebook and Google, alarmed over the ill effects of social networks and smartphones, are banding together to challenge the companies they helped build. . . . ‘The largest supercomputers in the world are inside of two companies — Google and Facebook — and where are we pointing them?’ Mr. [Tristan] Harris said. ‘We’re pointing them at people’s brains, at children.’ . . . . Mr. [Roger] McNamee said he had joined the Center for Humane Technology because he was horrified by what he had helped enable as an early Facebook investor. ‘Facebook appeals to your lizard brain — primarily fear and anger,’ he said. ‘And with smartphones, they’ve got you for every waking moment.’ . . . .”

“Early Facebook and Google Employees Join Forces to Fight What They Built” by Nellie Bowles; The New York Times; 2/5/2018; p. B6 [Western Edition].

A group of Silicon Valley technologists who were early employees at Facebook and Google, alarmed over the ill effects of social networks and smartphones, are banding together to challenge the companies they helped build. The cohort is creating a union of concerned experts called the Center for Humane Technology. Along with the nonprofit media watchdog group Common Sense Media, it also plans an anti-tech addiction lobbying effort and an ad campaign at 55,000 public schools in the United States.

The campaign, titled The Truth About Tech, will be funded with $7 million from Common Sense and capital raised by the Center for Humane Technology. Common Sense also has $50 million in donated media and airtime from partners including Comcast and DirecTV. It will be aimed at educating students, parents and teachers about the dangers of technology, including the depression that can come from heavy use of social media.

“We were on the inside,” said Tristan Harris, a former in-house ethicist at Google who is heading the new group. “We know what the companies measure. We know how they talk, and we know how the engineering works.”

The effect of technology, especially on younger minds, has become hotly debated in recent months. In January, two big Wall Street investors asked Apple to study the health effects of its products and to make it easier to limit children’s use of iPhones and iPads. Pediatric and mental health experts called on Facebook last week to abandon a messaging service the company had introduced for children as young as 6.

Parenting groups have also sounded the alarm about YouTube Kids, a product aimed at children that sometimes features disturbing content. “The largest supercomputers in the world are inside of two companies — Google and Facebook — and where are we pointing them?” Mr. Harris said. “We’re pointing them at people’s brains, at children.” Silicon Valley executives for years positioned their companies as tight-knit families and rarely spoke publicly against one another.

That has changed. Chamath Palihapitiya, a venture capitalist who was an early employee at Facebook, said in November that the social network was “ripping apart the social fabric of how society works.” The new Center for Humane Technology includes an unprecedented alliance of former employees of some of today’s biggest tech companies.

Apart from Mr. Harris, the center includes Sandy Parakilas, a former Facebook operations manager; Lynn Fox, a former Apple and Google communications executive; Dave Morin, a former Facebook executive; Justin Rosenstein, who created Facebook’s Like button and is a co-founder of Asana; Roger McNamee, an early investor in Facebook; and Renée DiResta, a technologist who studies bots. The group expects its numbers to grow.

Its first project to reform the industry will be to introduce a Ledger of Harms — a website aimed at guiding rank-and-file engineers who are concerned about what they are being asked to build. The site will include data on the health effects of different technologies and ways to make products that are healthier.

Jim Steyer, chief executive and founder of Common Sense, said the Truth About Tech campaign was modeled on antismoking drives and focused on children because of their vulnerability. That may sway tech chief executives to change, he said. Already, Apple’s chief executive, Timothy D. Cook, told The Guardian last month that he would not let his nephew on social media, while the Facebook investor Sean Parker also recently said of the social network that “God only knows what it’s doing to our children’s brains.”

Mr. Steyer said, “You see a degree of hypocrisy with all these guys in Silicon Valley.” The new group also plans to begin lobbying for laws to curtail the power of big tech companies. It will initially focus on two pieces of legislation: a bill being introduced by Senator Edward J. Markey, Democrat of Massachusetts, that would commission research on technology’s impact on children’s health, and a bill in California by State Senator Bob Hertzberg, a Democrat, which would prohibit the use of digital bots without identification.

Mr. McNamee said he had joined the Center for Humane Technology because he was horrified by what he had helped enable as an early Facebook investor. “Facebook appeals to your lizard brain — primarily fear and anger,” he said. “And with smartphones, they’ve got you for every waking moment.” He said the people who made these products could stop them before they did more harm. “This is an opportunity for me to correct a wrong,” Mr. McNamee said.

6. Transitioning to our next program–updating AI (artificial intelligence) technology as it applies to technocratic fascism–we note that AI machines are being designed to develop other AIs–“The Rise of the Machine.” ” . . . . Jeff Dean, one of Google’s leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data. AutoML, in turn, is a machine learning algorithm that learns to build other machine-learning algorithms. With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry. . . .”

“The Rise of the Machine” by Cade Metz; The New York Times; 11/6/2017; p. B1 [Western Edition].

 They are a dream of researchers but perhaps a nightmare for highly skilled computer programmers: artificially intelligent machines that can build other artificially intelligent machines. With recent speeches in both Silicon Valley and China, Jeff Dean, one of Google’s leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data.

AutoML, in turn, is a machine learning algorithm that learns to build other machine-learning algorithms. With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry. The project is part of a much larger effort to bring the latest and greatest A.I. techniques to a wider collection of companies and software developers.

The tech industry is promising everything from smartphone apps that can recognize faces to cars that can drive on their own. But by some estimates, only 10,000 people worldwide have the education, experience and talent needed to build the complex and sometimes mysterious mathematical algorithms that will drive this new breed of artificial intelligence.

The world’s largest tech businesses, including Google, Facebook and Microsoft, sometimes pay millions of dollars a year to A.I. experts, effectively cornering the market for this hard-to-find talent. The shortage isn’t going away anytime soon, just because mastering these skills takes years of work. The industry is not willing to wait. Companies are developing all sorts of tools that will make it easier for any operation to build its own A.I. software, including things like image and speech recognition services and online chatbots. “We are following the same path that computer science has followed with every new type of technology,” said Joseph Sirosh, a vice president at Microsoft, which recently unveiled a tool to help coders build deep neural networks, a type of computer algorithm that is driving much of the recent progress in the A.I. field. “We are eliminating a lot of the heavy lifting.” This is not altruism.

Researchers like Mr. Dean believe that if more people and companies are working on artificial intelligence, it will propel their own research. At the same time, companies like Google, Amazon and Microsoft see serious money in the trend that Mr. Sirosh described. All of them are selling cloud-computing services that can help other businesses and developers build A.I. “There is real demand for this,” said Matt Scott, a co-founder and the chief technical officer of Malong, a start-up in China that offers similar services. “And the tools are not yet satisfying all the demand.”

This is most likely what Google has in mind for AutoML, as the company continues to hail the project’s progress. Google’s chief executive, Sundar Pichai, boasted about AutoML last month while unveiling a new Android smartphone.

Eventually, the Google project will help companies build systems with artificial intelligence even if they don’t have extensive expertise, Mr. Dean said. Today, he estimated, no more than a few thousand companies have the right talent for building A.I., but many more have the necessary data. “We want to go from thousands of organizations solving machine learning problems to millions,” he said.

Google is investing heavily in cloud-computing services — services that help other businesses build and run software — which it expects to be one of its primary economic engines in the years to come. And after snapping up such a large portion of the world’s top A.I. researchers, it has a means of jump-starting this engine.

Neural networks are rapidly accelerating the development of A.I. Rather than building an image-recognition service or a language translation app by hand, one line of code at a time, engineers can much more quickly build an algorithm that learns tasks on its own. By analyzing the sounds in a vast collection of old technical support calls, for instance, a machine-learning algorithm can learn to recognize spoken words.
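
[The shift described above is easy to demonstrate. The short Python sketch below, assuming scikit-learn is installed, trains a small neural network to recognize handwritten digits from examples rather than hand-written rules; the library’s bundled digits dataset stands in for the article’s speech-recognition workload.]

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # No hand-coded recognition rules: the network infers them from the data.
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
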

But building a neural network is not like building a website or some run-of-the-mill smartphone app. It requires significant math skills, extreme trial and error, and a fair amount of intuition. Jean-François Gagné, the chief executive of an independent machine-learning lab called Element AI, refers to the process as “a new kind of computer programming.”

In building a neural network, researchers run dozens or even hundreds of experiments across a vast network of machines, testing how well an algorithm can learn a task like recognizing an image or translating from one language to another. Then they adjust particular parts of the algorithm over and over again, until they settle on something that works. Some call it a “dark art,” just because researchers find it difficult to explain why they make particular adjustments.

But with AutoML, Google is trying to automate this process. It is building algorithms that analyze the development of other algorithms, learning which methods are successful and which are not. Eventually, they learn to build more effective machine learning. Google said AutoML could now build algorithms that, in some cases, identified objects in photos more accurately than services built solely by human experts. Barret Zoph, one of the Google researchers behind the project, believes that the same method will eventually work well for other tasks, like speech recognition or machine translation. This is not always an easy thing to wrap your head around. But it is part of a significant trend in A.I. research. Experts call it “learning to learn” or “metalearning.”
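
[Google has not published AutoML’s internals, but the idea of automating this trial-and-error can be suggested with a toy search loop. The hedged Python sketch below simply tries random configurations of a small network and keeps the best performer; the real project reportedly uses far more sophisticated, learned search strategies over architectures.]

    import random
    from sklearn.datasets import load_digits
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    best_score, best_cfg = 0.0, None

    for trial in range(10):
        # Sample a candidate configuration instead of hand-tuning it.
        cfg = {"hidden_layer_sizes": (random.choice([16, 32, 64, 128]),),
               "learning_rate_init": random.choice([1e-2, 1e-3, 1e-4])}
        model = MLPClassifier(max_iter=200, random_state=0, **cfg)
        score = cross_val_score(model, X, y, cv=3).mean()
        if score > best_score:
            best_score, best_cfg = score, cfg

    print("best configuration found:", best_cfg, round(best_score, 3))
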

Many believe such methods will significantly accelerate the progress of A.I. in both the online and physical worlds. At the University of California, Berkeley, researchers are building techniques that could allow robots to learn new tasks based on what they have learned in the past. “Computers are going to invent the algorithms for us, essentially,” said a Berkeley professor, Pieter Abbeel. “Algorithms invented by computers can solve many, many problems very quickly — at least that is the hope.”

This is also a way of expanding the number of people and businesses that can build artificial intelligence. These methods will not replace A.I. researchers entirely. Experts, like those at Google, must still do much of the important design work.

But the belief is that the work of a few experts can help many others build their own software. Renato Negrinho, a researcher at Carnegie Mellon University who is exploring technology similar to AutoML, said this was not a reality today but should be in the years to come. “It is just a matter of when,” he said.


Discussion

8 comments for “FTR #996 Civilization’s Twilight: Update on Technocratic Fascism”

  1. After what might be the shortest stint ever for a New York Times op-ed columnist, the Times has a new job opening in its opinion section. After announcing the hiring of Quinn Norton as a new technology/hacker culture columnist Tuesday afternoon, the Times let her go later that evening. Why the sudden cold feet? A tweet. Or, rather, a number of Norton’s tweets that were widely pointed out after Norton’s hiring.

    The numerous tweets where she called people “fag” and “faggot” or used the N-word certainly didn’t help. But it was her tweets about Nazis that appear to be what really sank her employment prospects. So what did Quinn Norton tweet about Nazis that got her fired? That she has a Nazi friend. She doesn’t agree with her Nazi friend’s racist views, but they’re still friends and still talk sometimes.

    And who is this Nazi friend of hers? Neo-Nazi hacker Andrew ‘weev’ Auernheimer, of course. And as the following article points out, while Norton’s friendship with Auernheimer – who waged a death threat campaign against the employees of CNN, let’s not forget – is indeed troubling, she is not the only tech/privacy journalist who considers weev both a friend and a hero:

    Slate

    Why Would a Tech Journalist Be Friends With a Neo-Nazi Troll?
    Quinn Norton’s friendship with the notorious Weev helped lose her a job at the New York Times. She wasn’t his only unlikely pal.

    By April Glaser
    Feb 14, 2018 1:42 PM

    The New York Times opinion section announced a new hire Tuesday afternoon: Quinn Norton, a longtime journalist covering (and traveling in) the technology industry and adjacent hacker subculture, would become the editorial board’s lead opinion writer on the “power, culture, and consequences of technology.” Hours later, the job offer was gone.

    The sharp turn occurred soon after Norton shared her job news on Twitter, where it didn’t take long for people to surface tweets that, depending on how you interpret the explanations Norton tweeted Tuesday night, were either outright vile or at minimum colossal acts of bad judgment, no matter what online subcultures Norton was navigating when she wrote them. Between 2013 and 2014, she repeatedly used the slurs fag and faggot in public conversations on Twitter. A white woman, she used the N-word in a botched retweet of internet freedom pioneer John Perry Barlow and once jokingly responded to a thread on tone policing with “what’s up my nigga.” Then there was a Medium post from 2013 in which she meditated on and praised the life of John Rabe, a Nazi leader who also helped to save thousands of Chinese people during World War II. She called him her “personal patron saint of moral complexity.”

    And then, arguably most shocking of all, there were tweets in which Norton defended her long friendship with one of the most famous neo-Nazis in America, Andrew Auernheimer, known by his internet pseudonym Weev. Among his many lowlights, Weev co-ran the website the Daily Stormer, a hub for neo-Nazis and white supremacists.

    In a statement, the New York Times’ opinion editor, James Bennet, said, “Despite our review of Quinn Norton’s work and our conversations with her previous employers, this was new information to us.” On Twitter Tuesday night, Norton wrote, “I’m sorry I can’t do the work I wanted to do with them. I wish there had been a way, but ultimately, they need to feel safe with how the net will react to their opinion writers.” But it shouldn’t have taken a public outcry for the Times to realize that Norton, despite her impressive background covering the tech industry and some of the subcultures in its underbelly, was likely a poor fit for the job.

    Lots of us have friends, acquaintances, and relatives with opinions that are controversial yet not so vile we need to eject them from our lives. Outright Nazism is something else. So how could a self-described “queer-activist” with progressive bona fides and an apparent dedication to outing abusive figures in the tech industry be friends with a Nazi? For one thing, as Norton explained, she sometimes tried to speak the language of some of the more outré communities she covered, like Anons and trolls. Friend can mean a lot of different things, and her motives in speaking with Weev may have been admirable, if possibly misguided. But when you look back at the history of the internet freedom community with which she associated, her embrace of Weev fits into an ugly pattern. She was part of a community that supported Weev and his right to free expression, often while failing to denounce his values and everything his white nationalism, sexism, and anti-Semitism stood for. Anyone who thinks seriously about the web—and hires people to cover it—ought to reckon with why.

    Some background: In October, Norton reminded her followers that “Weev is a terrible person, & an old friend of mine,” as she wrote in one of the tweets that surfaced Tuesday night. “I’ve been very clear on this. Some of my friend are terrible people, & also my friends.” Weev has said that Jewish children “deserve to die,” encouraged death threats against his targets—often Jewish people and women—and released their addresses and contact information onto the internet, causing them to be so flooded with hate speech and threats of violence that some fled their homes. Yet Norton still found value in the friendship. “Weev doesn’t talk to me much anymore, but we talk about the racism whenever he does,” Norton explained in a tweet Tuesday night. She explained that her “door is open when he, or anyone, wants to talk” and clarified that she would always make their conversations “about the stupidity of racism” when they did get a chance to catch up.

    That Norton would keep her door open to a man who harms lives does not make her an outlier within parts of the hacker and digital rights community, which took up arms to defend Weev in 2010 after he worked with a team to expose a hole in AT&T’s security system that allowed the email addresses of about 114,000 iPad owners to be revealed—which he then shared with journalists. For that, Weev was sentenced to three years in jail for identity fraud and accessing a computer without the proper authorization. Despite being known as a particularly terrifying internet troll and anti-Semite, the Electronic Frontier Foundation (where I used to work), celebrated technology law professor Orin Kerr, and others in the internet freedom community came to Weev’s defense, arguing that when a security researcher finds a hole in a company’s system, it doesn’t mean the hacking was malicious and deserving of prosecution. They were right. Outside security researchers should be able to find and disclose vulnerabilities in order to keep everyone else safe without breaking a law.

    But the broader hacker community didn’t defend Weev on the merits of this particular case while simultaneously denouncing his hateful views. Instead it lionized him in keeping with its opposition to draconian computer crime laws. Artist Molly Crabapple painted a portrait of Weev. There was a “Free Weev” website; the slogan was printed on T-shirts. The charges were eventually overturned 28 months before the end of Weev’s sentence, and when a journalist accompanied his lawyer to pick Weev up from prison, he reportedly blasted a white power song on the drive home. During and after his imprisonment, Weev and Norton kept in touch.

    And during his time in jail, Norton appeared to pick up some trolling tendencies of her own. “Here’s the deal, faggot,” she wrote in a tweet from 2013. “Free speech comes with responsibility. not legal, but human. grown up. you can do this.” Norton defended herself Tuesday night, saying this language was only ever used in the context of her work with Anonymous, where that particular slur is a kind of shibboleth, but still, she was comfortable enough to use the word a lot, and on a public platform.

    Norton, like so many champions of internet freedom, is a staunch advocate of free speech. That was certainly the view that allowed so much of the internet freedom and hacker community to overlook Weev’s ardent anti-Semitism when he was on trial for breaking into AT&T’s computers. The thinking is that this is what comes with defending people’s civil liberties: Sometimes you’re going to defend a massive racist. That’s true for both internet activists and the ACLU. It’s also totally possible to defend someone’s right to say awful things and not become their “friend,” however you define the term. But that’s something Quinn didn’t do. And it’s something that many of Weev’s defenders didn’t do, either.

    When civil liberties are defended without adjacent calls for social and economic justice, the values that undergird calls for, say, free speech or protection from government search and seizure can collapse. This is why neo-Nazis feel emboldened to hold “free speech” rallies across the country. It is why racist online communities are able to rail against the monopolistic power of companies like Facebook and Google when they get booted off their platforms. Countless activists, engineers, and others have agitated for decades for an open web—but in the process they’ve too often neglected to fight for social and economic justice at the same time. They’ve defended free speech above all else, which encouraged platforms to allow racists and bigots and sexists and anti-Semites to gather there without much issue.

    In a way, Norton’s friendship with Weev can be made sense of through the lens of the communities that they both traveled through. They belonged to a group that had the prescient insight that the internet was worth fighting for. Those fights were often railing against the threat of censorship, in defense of personal privacy, and thus in defense of hackers who found security holes, and the ability to use the internet as freely as possible, without government meddling. It’s a train of thought that preserved free speech but didn’t simultaneously work as hard to defend communities that were ostracized on the internet because so much of that speech was harmful. Norton’s reporting has been valuable; her contribution to the #MeToo moment in the tech industry was, too. But what’s really needed to make sense of technology at our current juncture probably isn’t someone so committed to one of the lines of thought that helped get us here. Let’s hope the New York Times’ next pick for the job Norton would have had exerts some fresher thinking.

    ———-

    “Why Would a Tech Journalist Be Friends With a Neo-Nazi Troll?” by April Glaser; Slate; 02/14/2018

    “And then, arguably most shocking of all, there were tweets in which Norton defended her long friendship with one of the most famous neo-Nazis in America, Andrew Auernheimer, known by his internet pseudonym Weev. Among his many lowlights, Weev co-ran the website the Daily Stormer, a hub for neo-Nazis and white supremacists.”

    Yeah, there’s nothing quite like a tweet history defending your friendship with the guy who co-ran the Daily Stormer to spruce up one’s resume…assuming you’re applying for a job at Breitbart. But that might be a bit too far for the New York Times.

    And yet, as the article notes, Norton was far from alone in not just defending Auernheimer when he was facing prosecution for hacking AT&T (a prosecution that legitimately was overly harsh) but in remaining friends with him despite the horrific Nazi views he openly stands for:


    Lots of us have friends, acquaintances, and relatives with opinions that are controversial yet not so vile we need to eject them from our lives. Outright Nazism is something else. So how could a self-described “queer-activist” with progressive bona fides and an apparent dedication to outing abusive figures in the tech industry be friends with a Nazi? For one thing, as Norton explained, she sometimes tried to speak the language of some of the more outré communities she covered, like Anons and trolls. Friend can mean a lot of different things, and her motives in speaking with Weev may have been admirable, if possibly misguided. But when you look back at the history of the internet freedom community with which she associated, her embrace of Weev fits into an ugly pattern. She was part of a community that supported Weev and his right to free expression, often while failing to denounce his values and everything his white nationalism, sexism, and anti-Semitism stood for. Anyone who thinks seriously about the web—and hires people to cover it—ought to reckon with why.

    Now, it’s not that Norton never criticizes Auernheimer’s views. It’s that she appears to still be friends and talk with him despite the fact that he really is a leading neo-Nazi who really does call for mass murder. Which, again, is something that goes far beyond Norton:


    Some background: In October, Norton reminded her followers that “Weev is a terrible person, & an old friend of mine,” as she wrote in one of the tweets that surfaced Tuesday night. “I’ve been very clear on this. Some of my friend are terrible people, & also my friends.” Weev has said that Jewish children “deserve to die,” encouraged death threats against his targets—often Jewish people and women—and released their addresses and contact information onto the internet, causing them to be so flooded with hate speech and threats of violence that some fled their homes. Yet Norton still found value in the friendship. “Weev doesn’t talk to me much anymore, but we talk about the racism whenever he does,” Norton explained in a tweet Tuesday night. She explained that her “door is open when he, or anyone, wants to talk” and clarified that she would always make their conversations “about the stupidity of racism” when they did get a chance to catch up.

    That Norton would keep her door open to a man who harms lives does not make her an outlier within parts of the hacker and digital rights community, which took up arms to defend Weev in 2010 after he worked with a team to expose a hole in AT&T’s security system that allowed the email addresses of about 114,000 iPad owners to be revealed—which he then shared with journalists. For that, Weev was sentenced to three years in jail for identity fraud and accessing a computer without the proper authorization. Despite being known as a particularly terrifying internet troll and anti-Semite, the Electronic Frontier Foundation (where I used to work), celebrated technology law professor Orin Kerr, and others in the internet freedom community came to Weev’s defense, arguing that when a security researcher finds a hole in a company’s system, it doesn’t mean the hacking was malicious and deserving of prosecution. They were right. Outside security researchers should be able to find and disclose vulnerabilities in order to keep everyone else safe without breaking a law.

    But the broader hacker community didn’t defend Weev on the merits of this particular case while simultaneously denouncing his hateful views. Instead it lionized him in keeping with its opposition to draconian computer crime laws. Artist Molly Crabapple painted a portrait of Weev. There was a “Free Weev” website; the slogan was printed on T-shirts. The charges were eventually overturned 28 months before the end of Weev’s sentence, and when a journalist accompanied his lawyer to pick Weev up from prison, he reportedly blasted a white power song on the drive home. During and after his imprisonment, Weev and Norton kept in touch.

    “But the broader hacker community didn’t defend Weev on the merits of this particular case while simultaneously denouncing his hateful views. Instead it lionized him in keeping with its opposition to draconian computer crime laws.”

    And that is the much bigger story within the story of Quinn Norton’s half-day as a New York Times technology columnist: within much of the digital privacy community, Norton’s acceptance of Auernheimer despite his open, aggressive neo-Nazi views isn’t the exception. It’s the rule.

    There was unfortunately no mention in the article of how Auernheimer partied with Glenn Greenwald and Laura Poitras in 2014 after his release from prison (when he was already sporting a giant swastika on his chest). Neither was there any mention of the fact that Auernheimer appears to have been involved with both the ‘Macron hacks’ in France’s elections last year and possibly the DNC hacks. But the article does make the important point that the story of Quinn Norton’s firing is really just a sub-story in the much larger story of the remarkably widespread popularity of Andrew ‘weev’ Auernheimer within the tech and digital privacy communities and the roles he may have played in some of the biggest hacks of our times. And the story of tech’s ‘Nazi friend’ is, itself, just a sub-story in the much larger story of how pervasive far-right ideals and assumptions are in all sorts of tech sectors and technologies, whether it’s Bitcoin, the Cypherpunk movement’s extensive history of far-right thought, or the fascist roots behind Wikileaks. Hopefully the New York Times’ next pick for tech columnist will actually address these topics.

    Posted by Pterrafractyl | February 14, 2018, 3:33 pm
  2. Here’s another example of how the libertarian dream of internet platforms so secure that the companies themselves can’t even monitor what’s taking place on them turns out to be a dream platform for far-right misinformation: WhatsApp – the Facebook-owned messaging service that uses strong end-to-end encryption so that, in theory, no one can crack the messages and no one, including WhatsApp itself, can monitor how the platform is used – is wildly popular in Brazil. 120 million of the country’s roughly 200 million people use the app, and many of them rely on it as their primary news source.
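
    For a sense of why “not even WhatsApp can read it” is technically plausible, here’s a minimal sketch of public-key end-to-end encryption using the PyNaCl library. To be clear, this is an illustration of the general idea, not WhatsApp’s actual implementation (WhatsApp runs the far more elaborate Signal protocol on top of primitives like this), and the names and message text are placeholders:

        # Minimal end-to-end encryption sketch (pip install pynacl).
        # Illustrative only: WhatsApp actually uses the Signal protocol,
        # which layers key ratcheting, group chats, etc. on this idea.
        from nacl.public import PrivateKey, Box

        # Each user generates a key pair on their own device; private keys
        # never leave the device, so the server never sees them.
        alice_key = PrivateKey.generate()
        bob_key = PrivateKey.generate()

        # Only the public keys are exchanged (e.g., via the server).
        sender_box = Box(alice_key, bob_key.public_key)
        ciphertext = sender_box.encrypt(b"placeholder message")

        # The server relays only this ciphertext. Without a private key it
        # cannot read, filter, or even classify the message, which is why
        # content moderation on such a platform is impossible by design.
        receiver_box = Box(bob_key, alice_key.public_key)
        assert receiver_box.decrypt(ciphertext) == b"placeholder message"

    The inability to moderate, in other words, isn’t an oversight; it’s the guarantee the system is built to provide.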

    So what kind of news are people getting on WhatsApp? Well, we don’t really know, because it can’t be monitored. But we do get a hint of the kind of news people are getting on encrypted services like WhatsApp when those stories spread to other platforms like Facebook or Youtube. And with Brazil facing an explosion of yellow fever and struggling to get people vaccinated, we got a particularly nasty hint of the kind of ‘news’ people are getting on WhatsApp: dangerous, professionally produced videos pushing an Alex Jones-style message that the yellow fever vaccine campaign is part of a secret government depopulation scheme. That’s the kind of ‘news’ people in Brazil are getting from WhatsApp. At least, that’s the ‘news’ we know about so far, since the full content is an encrypted mystery:

    Wired

    When WhatsApp’s Fake News Problem Threatens Public Health

    Megan Molteni
    03.09.18 03:14 pm

    In remote areas of Brazil’s Amazon basin, yellow fever used to be a rare, if regular visitor. Every six to ten years, during the hot season, mosquitoes would pick it up from infected monkeys and spread it to a few loggers, hunters, and farmers at the forests’ edges in the northwestern part of the country. But in 2016, perhaps driven by climate change or deforestation or both, the deadly virus broke its pattern.

    Yellow fever began expanding south, even through the winter months, infecting more than 1,500 people and killing nearly 500. The mosquito-borne virus attacks the liver, causing its signature jaundice and internal hemorrhaging (the Mayans called it xekik, or “blood vomit”). Today, that pestilence is racing toward Rio de Janeiro and Sao Paulo at the rate of more than a mile a day, turning Brazil’s coastal megacities into mega-ticking-timebombs. The only thing spreading faster is misinformation about the dangers of a yellow fever vaccine—the very thing that could halt the virus’s advance. And nowhere is it happening faster than on WhatsApp.

    In recent weeks, rumors of fatal vaccine reactions, mercury preservatives, and government conspiracies have surfaced with alarming speed on the Facebook-owned encrypted messaging service, which is used by 120 million of Brazil’s roughly 200 million residents. The platform has long incubated and proliferated fake news, in Brazil in particular. With its modest data requirements, WhatsApp is especially popular among middle and lower income individuals there, many of whom rely on it as their primary news consumption platform. But as the country’s health authorities scramble to contain the worst outbreak in decades, WhatsApp’s misinformation trade threatens to go from destabilizing to deadly.

    On January 25, Brazilian health officials launched a mass campaign to vaccinate 95 percent of residents in the 69 municipalities directly in the disease’s path—a total of 23 million people. A yellow fever vaccine has been mandatory since 2002 for any Brazilian born in regions where the virus is endemic. But in the last two years the disease has pushed beyond its normal range into territories where fewer than a quarter of people are immune, including the urban areas of Rio and Sao Paulo.

    By the time of the announcement, the fake news cycle was already underway. Earlier in the month an audio message from a woman claiming to be a doctor at a well-known research institute began circulating on WhatsApp, warning that the vaccine is dangerous. (The institute denied that the recording came from any of its employees). A few weeks later it was a story linking the death of a university student to the vaccine. (That too proved to be a false report). In February, Igor Sacramento’s mother-in-law messaged him a pair of videos suggesting that the yellow fever vaccine was actually a scam aimed at reducing the world population. A health communication researcher at Fiocruz, one of Brazil’s largest scientific institutions, Sacramento recognized a scam when he saw one. And no, it wasn’t a global illuminati plot to kill off his countrymen. But he could understand why people would be taken in by it.

    “These videos are very sophisticated, with good editing, testimonials from experts, and personal experiences,” Sacramento says. It’s the same journalistic format people see on TV, so it bears the shape of truth. And when people share these videos or news stories within their social networks as personal messages, it changes the calculus of trust. “We are transitioning from a society that experienced truth based on facts to a society based on its experience of truth in intimacy, in emotion, in closeness.”

    People are more likely to believe rumours from family and friends. There’s no algorithm mediating the experience. And when that misinformation comes in the form of forwarded texts and videos—which look the same as personal messages in WhatsApp—they’re lent another layer of legitimacy. Then you get the network compounding effect; if you’re in multiple group chats that all receive the fake news, the repetition makes them more believable still.

    Of course, these are all just theories. Because of WhatsApp’s end-to-end encryption and the closed nature of its networks, it’s nearly impossible to study how misinformation moves through it. For users in countries with a history of state-sponsored violence, like Brazil, that secrecy is a feature. But it’s a bug for anyone trying to study them. “I think WhatsApp hoaxes and disinformation campaigns are a bit more pernicious [than Facebook] because their diffusion cannot be monitored,” says Pablo Ortellado, a fake news researcher and professor of public policy at the University of Sao Paulo. Misinformation on WhatsApp can only be identified when it jumps to other social media sites or bleeds into the real world.

    In Brazil, it’s starting to do both. One of the videos Sacramento received from his mother-in-law is still up on YouTube, where it’s been viewed over a million times. Other stories circulated on WhatsApp are now being shared in Facebook groups with thousands of users, mostly worried mothers exchanging stories and fears. And in the streets of Rio and Sao Paulo, some people are staying away from the health workers in white coats. As of February 27, only 5.5 million people had received the shot, though it’s difficult to say how much of the slow start is due to fake news as opposed to logistical delays. A spokeswoman for the Brazilian Ministry of Health said in an email that the agency has seen an uptick in concern from residents regarding post-vaccination adverse events since the start of the year and acknowledged that the spread of false news through social media can interfere with vaccination coverage, but did not comment on its specific impact on this latest campaign.

    While the Ministry of Health has engaged in a very active pro-vaccine education operation—publishing weekly newsletters, posting on social media, and getting people on the ground at churches, temples, trade unions, and clinics—health communication researchers like Sacramento say health officials made one glaring mistake. They didn’t pay close enough attention to language.

    You see, on top of all this, there’s a global yellow fever vaccine shortage going on at the moment. The vaccine is available at a limited number of clinics in the US, but it’s only used here as a travel shot. So far this year, the Centers for Disease Control and Prevention has registered no cases of the virus within US borders, though in light of the outbreak it did issue a Level 2 travel notice in January, urging all Americans traveling to the affected states in Brazil to get vaccinated first.

    Because it’s endemic in the country, Brazil makes its own vaccine, and is currently ramping up production from 5 million to 10 million doses per month by June. But in the interim, authorities are administering smaller doses of what they have on hand, known as a “fractional dose.” It’s a well-demonstrated emergency maneuver, which staved off a yellow fever outbreak in the Democratic Republic of the Congo in 2016. According to the WHO, it’s “the best way to stretch vaccine supplies and protect against as many people as possible.” But a partial dose, one that’s guaranteed for only 12 months, has been met by mistrust in Brazil, where a single vaccination had always been good for a lifetime of protection.

    “The population in general understood the wording of ‘fractionated’ to mean weak,” says Sacramento. Although technically correct, the word took on a more sinister meaning as it spread through social media circles. Some videos even claimed the fractionated vaccine could cause renal failure. And while they may be unscientific, they’re not completely wrong.

    Like any medicine, the yellow fever vaccine can cause side effects. Between 2 and 4 percent of people experience mild headaches, low-grade fevers, or pain at the site of injection. But there have also been rare reports of life-threatening allergic reactions and damage to the nervous system and other internal organs. According to the Health Ministry, six people died in 2017 on account of an adverse reaction to the vaccine. The agency estimates that one in 76,000 will have an anaphylactic reaction, one in 125,000 will experience a severe nervous system reaction, and one in 250,000 will suffer a life-threatening illness with organ failure. Which means that if 5 million people get vaccinated, you’ll wind up with about 20 organ failures, 50 nervous system issues, and 70 allergic shocks. Of course, if yellow fever infected 5 million people, 333,000 people could die.

    Not every fake news story is 100 percent false. But they are out of proportion with reality. That’s the thing about social media. It can amplify real but statistically unlikely things just as much as it spreads totally made up stuff. What you wind up with is a murky mix of information that has just enough truth to be credible.

    And that makes it a whole lot harder to fight. You can’t just start by shouting it all down. Sacramento says too often health officials opt to frame these rumors as a dichotomy: “Is this true or is this a myth?” That alienates people from the science. Instead, the institution where he works has begun to produce social media-specific videos that start a dialogue about the importance of vaccines, while remaining open to people’s fears. “Brazil is a country full of social inequalities and contradictions,” he says. “The only way to understand what is happening is to talk to people who are different from you.” Unfortunately, that’s the one thing WhatsApp is designed not to let you do.

    ———-

    “When WhatsApp’s Fake News Problem Threatens Public Health” by Megan Molteni; Wired; 03/09/2018

    “Yellow fever began expanding south, even through the winter months, infecting more than 1,500 people and killing nearly 500. The mosquito-borne virus attacks the liver, causing its signature jaundice and internal hemorrhaging (the Mayans called it xekik, or “blood vomit”). Today, that pestilence is racing toward Rio de Janeiro and Sao Paulo at the rate of more than a mile a day, turning Brazil’s coastal megacities into mega-ticking-timebombs. The only thing spreading faster is misinformation about the dangers of a yellow fever vaccine—the very thing that could halt the virus’s advance. And nowhere is it happening faster than on WhatsApp.”

    As the saying goes, a lie can travel halfway around the world before the truth can get its boots on. Especially in the age of the internet when random videos on messaging services like WhatsApp are treated as reliable news sources:


    In recent weeks, rumors of fatal vaccine reactions, mercury preservatives, and government conspiracies have surfaced with alarming speed on the Facebook-owned encrypted messaging service, which is used by 120 million of Brazil’s roughly 200 million residents. The platform has long incubated and proliferated fake news, in Brazil in particular. With its modest data requirements, WhatsApp is especially popular among middle and lower income individuals there, many of whom rely on it as their primary news consumption platform. But as the country’s health authorities scramble to contain the worst outbreak in decades, WhatsApp’s misinformation trade threatens to go from destabilizing to deadly.

    So by the time the government announced its big campaign to vaccinate 95 percent of residents in vulnerable areas, a fake news campaign about the vaccine was already underway, using professional-quality videos: fake doctors, fake stories of deaths from the vaccine, and the kind of production quality people expect from a news broadcast:


    On January 25, Brazilian health officials launched a mass campaign to vaccinate 95 percent of residents in the 69 municipalities directly in the disease’s path—a total of 23 million people. A yellow fever vaccine has been mandatory since 2002 for any Brazilian born in regions where the virus is endemic. But in the last two years the disease has pushed beyond its normal range into territories where fewer than a quarter of people are immune, including the urban areas of Rio and Sao Paulo.

    By the time of the announcement, the fake news cycle was already underway. Earlier in the month an audio message from a woman claiming to be a doctor at a well-known research institute began circulating on WhatsApp, warning that the vaccine is dangerous. (The institute denied that the recording came from any of its employees). A few weeks later it was a story linking the death of a university student to the vaccine. (That too proved to be a false report). In February, Igor Sacramento’s mother-in-law messaged him a pair of videos suggesting that the yellow fever vaccine was actually a scam aimed at reducing the world population. A health communication researcher at Fiocruz, one of Brazil’s largest scientific institutions, Sacramento recognized a scam when he saw one. And no, it wasn’t a global illuminati plot to kill off his countrymen. But he could understand why people would be taken in by it.

    “These videos are very sophisticated, with good editing, testimonials from experts, and personal experiences,” Sacramento says. It’s the same journalistic format people see on TV, so it bears the shape of truth. And when people share these videos or news stories within their social networks as personal messages, it changes the calculus of trust. “We are transitioning from a society that experienced truth based on facts to a society based on its experience of truth in intimacy, in emotion, in closeness.”

    “These videos are very sophisticated, with good editing, testimonials from experts, and personal experiences,” Sacramento says. It’s the same journalistic format people see on TV, so it bears the shape of truth. And when people share these videos or news stories within their social networks as personal messages, it changes the calculus of trust. “We are transitioning from a society that experienced truth based on facts to a society based on its experience of truth in intimacy, in emotion, in closeness.””

    So how widespread is the problem of high-quality literal fake news content getting propagated on WhatsApp? Well, again, we don’t know, because you can’t monitor how WhatsApp is used. Even the company can’t. It’s one of its ‘features’:


    People are more likely to believe rumours from family and friends. There’s no algorithm mediating the experience. And when that misinformation comes in the form of forwarded texts and videos—which look the same as personal messages in WhatsApp—they’re lent another layer of legitimacy. Then you get the network compounding effect; if you’re in multiple group chats that all receive the fake news, the repetition makes them more believable still.

    Of course, these are all just theories. Because of WhatsApp’s end-to-end encryption and the closed nature of its networks, it’s nearly impossible to study how misinformation moves through it. For users in countries with a history of state-sponsored violence, like Brazil, that secrecy is a feature. But it’s a bug for anyone trying to study them. “I think WhatsApp hoaxes and disinformation campaigns are a bit more pernicious [than Facebook] because their diffusion cannot be monitored,” says Pablo Ortellado, a fake news researcher and professor of public policy at the University of Sao Paulo. Misinformation on WhatsApp can only be identified when it jumps to other social media sites or bleeds into the real world.

    “Of course, these are all just theories. Because of WhatsApp’s end-to-end encryption and the closed nature of its networks, it’s nearly impossible to study how misinformation moves through it.”

    Yep, we have no idea what kinds of other high-quality misinformation videos are getting produced. Of course, it’s not like there aren’t plenty of misinformation videos readily available on Youtube and Facebook, so we do have some idea of the general type of misinformation and far-right memes that are going to flourish on platforms like WhatsApp. But for a very time-sensitive story, like getting people vaccinated before the killer virus turns into a pandemic, the inability to identify and combat disinformation like this really is quite dangerous.
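
    And note the article’s point that these scare stories aren’t 100 percent false, just wildly out of proportion. Running the quoted Health Ministry rates explicitly (a back-of-the-envelope sketch in Python; the figures are the article’s own estimates):

        # Back-of-the-envelope check of the adverse-event numbers quoted
        # above, using the Health Ministry rate estimates from the article.
        vaccinated = 5_000_000

        allergic_shock = vaccinated / 76_000    # ~66; the article rounds to ~70
        nervous_system = vaccinated / 125_000   # 40; the article says about 50
        organ_failure = vaccinated / 250_000    # 20, matching the article

        # Against that, the article's infection scenario: roughly 333,000
        # deaths if yellow fever itself reached those same 5 million people.
        deaths_if_infected = 333_000

        severe_reactions = allergic_shock + nervous_system + organ_failure
        print(deaths_if_infected / severe_reactions)   # roughly 2,600x worse

    However the article’s rounding was done, the proportion is the point: the vaccine’s worst-case harms are about three orders of magnitude smaller than the death toll of the disease itself.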

    It’s a reminder that if humanity wants to embrace the cypherpunk revolution of ubiquitous strong encryption and a truly anonymous, untrackable internet, humanity is going to have to get a lot wiser. Wise enough to at least have some sort of reasonable social immune system against mind viruses like bogus news and far-right memes. Wise enough to identify and reject the many problems with ideologies like digital libertarianism. In other words, if humanity wants to safely embrace the cypherpunk revolution, it needs to be savvy enough to reject the cypherpunk revolution. It’s a bit of a paradox and a recurring theme with technology and power in general: if you want this kind of power without destroying yourself, you have to be wise enough to use it very carefully or reject it outright, collectively and individually.

    But for now, we have literal fake news videos pushing anti-vaccine misinformation quietly ‘going viral’ on encrypted social media in order to promote the spread of a deadly biological virus. It seems like a milestone of self-destructive behavior was just reached by humanity. It was a group effort. Go team.

    Posted by Pterrafractyl | March 12, 2018, 10:46 am
  3. A great new book is out on the history of the internet: Surveillance Valley by Yasha Levine. Here is a link to a long interview:
    http://mediaroots.org/surveillance-valley-the-secret-military-history-of-the-internet-with-yasha-levine/

    Posted by Hugo Turner | March 23, 2018, 11:37 am
  4. Here’s a quick update on the development of the ‘deepfake’ technology that can create realistic-looking videos of anyone saying anything: according to experts, it should be advanced enough to cause major problems for things like political elections within a couple of years. So if you were wondering what kind of ‘fake news’ nightmare is in store for the US 2020 election, it’s going to be the kind of nightmare that includes one fake video after another that looks completely real:

    Associated Press

    I never said that! High-tech deception of ‘deepfake’ videos

    By DEB RIECHMANN
    07/02/2018

    WASHINGTON (AP) — Hey, did my congressman really say that? Is that really President Donald Trump on that video, or am I being duped?

    New technology on the internet lets anyone make videos of real people appearing to say things they’ve never said. Republicans and Democrats predict this high-tech way of putting words in someone’s mouth will become the latest weapon in disinformation wars against the United States and other Western democracies.

    We’re not talking about lip-syncing videos. This technology uses facial mapping and artificial intelligence to produce videos that appear so genuine it’s hard to spot the phonies. Lawmakers and intelligence officials worry that the bogus videos — called deepfakes — could be used to threaten national security or interfere in elections.

    So far, that hasn’t happened, but experts say it’s not a question of if, but when.

    “I expect that here in the United States we will start to see this content in the upcoming midterms and national election two years from now,” said Hany Farid, a digital forensics expert at Dartmouth College in Hanover, New Hampshire. “The technology, of course, knows no borders, so I expect the impact to ripple around the globe.”

    When an average person can create a realistic fake video of the president saying anything they want, Farid said, “we have entered a new world where it is going to be difficult to know how to believe what we see.” The reverse is a concern, too. People may dismiss as fake genuine footage, say of a real atrocity, to score political points.

    Realizing the implications of the technology, the U.S. Defense Advanced Research Projects Agency is already two years into a four-year program to develop technologies that can detect fake images and videos. Right now, it takes extensive analysis to identify phony videos. It’s unclear if new ways to authenticate images or detect fakes will keep pace with deepfake technology.

    Deepfakes are so named because they utilize deep learning, a form of artificial intelligence. They are made by feeding a computer an algorithm, or set of instructions, lots of images and audio of a certain person. The computer program learns how to mimic the person’s facial expressions, mannerisms, voice and inflections. If you have enough video and audio of someone, you can combine a fake video of the person with a fake audio and get them to say anything you want.

    So far, deepfakes have mostly been used to smear celebrities or as gags, but it’s easy to foresee a nation state using them for nefarious activities against the U.S., said Sen. Marco Rubio, R-Fla., one of several members of the Senate intelligence committee who are expressing concern about deepfakes.

    A foreign intelligence agency could use the technology to produce a fake video of an American politician using a racial epithet or taking a bribe, Rubio says. They could use a fake video of a U.S. soldier massacring civilians overseas, or one of a U.S. official supposedly admitting a secret plan to carry out a conspiracy. Imagine a fake video of a U.S. leader — or an official from North Korea or Iran — warning the United States of an impending disaster.

    “It’s a weapon that could be used — timed appropriately and placed appropriately — in the same way fake news is used, except in a video form, which could create real chaos and instability on the eve of an election or a major decision of any sort,” Rubio told The Associated Press.

    Deepfake technology still has a few hitches. For instance, people’s blinking in fake videos may appear unnatural. But the technology is improving.

    “Within a year or two, it’s going to be really hard for a person to distinguish between a real video and a fake video,” said Andrew Grotto, an international security fellow at the Center for International Security and Cooperation at Stanford University in California.

    “This technology, I think, will be irresistible for nation states to use in disinformation campaigns to manipulate public opinion, deceive populations and undermine confidence in our institutions,” Grotto said. He called for government leaders and politicians to clearly say it has no place in civilized political debate.

    Rubio noted that in 2009, the U.S. Embassy in Moscow complained to the Russian Foreign Ministry about a fake sex video it said was made to damage the reputation of a U.S. diplomat. The video showed the married diplomat, who was a liaison to Russian religious and human rights groups, making telephone calls on a dark street. The video then showed the diplomat in his hotel room, scenes that apparently were shot with a hidden camera. Later, the video appeared to show a man and a woman having sex in the same room with the lights off, although it was not at all clear that the man was the diplomat.

    John Beyrle, who was the U.S. ambassador in Moscow at the time, blamed the Russian government for the video, which he said was clearly fabricated.

    Michael McFaul, who was American ambassador in Russia between 2012 and 2014, said Russia has engaged in disinformation videos against various political actors for years and that he too had been a target. He has said that Russian state propaganda inserted his face into photographs and “spliced my speeches to make me say things I never uttered and even accused me of pedophilia.”

    ———-

    “I never said that! High-tech deception of ‘deepfake’ videos” by DEB RIECHMANN; Associated Press; 07/02/2018

    “I expect that here in the United States we will start to see this content in the upcoming midterms and national election two years from now,” said Hany Farid, a digital forensics expert at Dartmouth College in Hanover, New Hampshire. “The technology, of course, knows no borders, so I expect the impact to ripple around the globe.””

    Yep, the way Hany Farid, a digital forensics expert at Dartmouth College, sees it, we might even see ‘deepfakes’ impact the US midterms this year. The technology is basically ready to go.

    And while DARPA is reportedly already working on techniques for identifying fake images and videos, it’s still unclear if even an agency like DARPA will be able to keep up with advances in the technology. In other words, even after detection technology has been developed, there’s still ALWAYS going to be the potential for cutting-edge ‘deepfakes’ that can fool that detection technology. It’s just part of our technological landscape:


    Realizing the implications of the technology, the U.S. Defense Advanced Research Projects Agency is already two years into a four-year program to develop technologies that can detect fake images and videos. Right now, it takes extensive analysis to identify phony videos. It’s unclear if new ways to authenticate images or detect fakes will keep pace with deepfake technology.

    Deepfakes are so named because they utilize deep learning, a form of artificial intelligence. They are made by feeding a computer an algorithm, or set of instructions, lots of images and audio of a certain person. The computer program learns how to mimic the person’s facial expressions, mannerisms, voice and inflections. If you have enough video and audio of someone, you can combine a fake video of the person with a fake audio and get them to say anything you want.

    Deepfake technology still has a few hitches. For instance, people’s blinking in fake videos may appear unnatural. But the technology is improving.

    “Within a year or two, it’s going to be really hard for a person to distinguish between a real video and a fake video,” said Andrew Grotto, an international security fellow at the Center for International Security and Cooperation at Stanford University in California.

    “This technology, I think, will be irresistible for nation states to use in disinformation campaigns to manipulate public opinion, deceive populations and undermine confidence in our institutions,” Grotto said. He called for government leaders and politicians to clearly say it has no place in civilized political debate.
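
    To make the ‘deep learning’ part of that description a little more concrete, here’s a heavily simplified sketch of the shared-encoder/two-decoder face-swap setup behind the early deepfake tools, written in PyTorch. This is a toy illustration, not any particular tool’s code: the layer sizes and function names are invented for the example, and real systems add face detection and alignment, deep convolutional networks, and often GAN losses:

        # Toy sketch of the classic deepfake face-swap training scheme:
        # one shared encoder, one decoder per identity. (PyTorch)
        import torch
        import torch.nn as nn

        IMG = 64 * 64 * 3  # toy resolution for flattened face crops

        # The shared encoder learns face structure common to both people...
        encoder = nn.Sequential(nn.Linear(IMG, 256), nn.ReLU(),
                                nn.Linear(256, 128))

        # ...while each person gets a private decoder rendering their face.
        def make_decoder():
            return nn.Sequential(nn.Linear(128, 256), nn.ReLU(),
                                 nn.Linear(256, IMG), nn.Sigmoid())

        decoder_a, decoder_b = make_decoder(), make_decoder()

        params = (list(encoder.parameters()) + list(decoder_a.parameters())
                  + list(decoder_b.parameters()))
        opt = torch.optim.Adam(params, lr=1e-3)
        loss_fn = nn.MSELoss()

        def train_step(faces_a, faces_b):
            """faces_a / faces_b: (batch, IMG) tensors of each person's faces."""
            opt.zero_grad()
            loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
                    + loss_fn(decoder_b(encoder(faces_b)), faces_b))
            loss.backward()
            opt.step()
            return loss.item()

        # The "swap": route person A's expression through person B's decoder,
        # frame by frame, to get B's face mimicking A's performance.
        def swap_a_to_b(faces_a):
            with torch.no_grad():
                return decoder_b(encoder(faces_a))

    The shared encoder is the trick: because it has to serve both decoders, it is pushed toward encoding expression and pose in a person-independent way, which is why a pile of ordinary footage of the target is all the training data an attacker needs.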

    And while we’re guaranteed that any deepfakes introduced into the US elections will almost reflexively be blamed on Russia, the reality is that any intelligence agency on the planet (even a private intelligence agency) is going to be extremely tempted to develop these kinds of videos for propaganda purposes. And the 4Chan trolls and Alt Right are going to be investing massive amounts of time and energy into this, if they aren’t already. The list of suspects is inherently going to be massive:


    So far, deepfakes have mostly been used to smear celebrities or as gags, but it’s easy to foresee a nation state using them for nefarious activities against the U.S., said Sen. Marco Rubio, R-Fla., one of several members of the Senate intelligence committee who are expressing concern about deepfakes.

    A foreign intelligence agency could use the technology to produce a fake video of an American politician using a racial epithet or taking a bribe, Rubio says. They could use a fake video of a U.S. soldier massacring civilians overseas, or one of a U.S. official supposedly admitting a secret plan to carry out a conspiracy. Imagine a fake video of a U.S. leader — or an official from North Korea or Iran — warning the United States of an impending disaster.

    “It’s a weapon that could be used — timed appropriately and placed appropriately — in the same way fake news is used, except in a video form, which could create real chaos and instability on the eve of an election or a major decision of any sort,” Rubio told The Associated Press.

    Finally, let’s not forget about one of the more bizarre potential consequences of the emergence of deepfake technology: it’s going to be easier than ever for Republicans to decry ‘fake news!’ when they are confronted with a true but politically inconvenient story. Remember when Trump’s ambassador to the Netherlands, Pete Hoekstra, cried ‘fake news!’ when shown a video of his own comments? Well, that’s going to be a very common thing going forward. So when the inevitable montages of Trump saying one horrible thing after another get rolled out for voters in 2020, it’s going to be easier than ever for people to dismiss them as ‘fake news!’:


    When an average person can create a realistic fake video of the president saying anything they want, Farid said, “we have entered a new world where it is going to be difficult to know how to believe what we see.” The reverse is a concern, too. People may dismiss as fake genuine footage, say of a real atrocity, to score political points.

    Welcome to the world where you really can’t believe your lying eyes. Except when you can and should.
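
    On the detection side, the ‘unnatural blinking’ tell mentioned in the AP piece is a real research direction: early deepfake generators were trained mostly on open-eyed photos, so their subjects blink far less than real people do. Here’s a minimal sketch of that heuristic, assuming a facial-landmark detector (dlib’s, for instance) has already produced the six standard eye-contour points for every frame; the thresholds are illustrative guesses, not a validated forensic tool:

        # Toy blink-rate heuristic for flagging early-generation deepfakes.
        # Input: per-frame arrays of the six (x, y) eye-contour landmarks.
        import numpy as np

        def eye_aspect_ratio(eye):  # eye: array of shape (6, 2)
            # Vertical eyelid distances over horizontal eye width; the
            # ratio collapses toward zero when the eye closes.
            v1 = np.linalg.norm(eye[1] - eye[5])
            v2 = np.linalg.norm(eye[2] - eye[4])
            h = np.linalg.norm(eye[0] - eye[3])
            return (v1 + v2) / (2.0 * h)

        def blinks_per_minute(eye_frames, fps, closed_thresh=0.2):
            ears = np.array([eye_aspect_ratio(e) for e in eye_frames])
            closed = ears < closed_thresh
            # An open-to-closed transition counts as one blink.
            blinks = np.count_nonzero(closed[1:] & ~closed[:-1])
            minutes = len(eye_frames) / fps / 60.0
            return blinks / minutes

        def looks_synthetic(eye_frames, fps):
            # People blink roughly 15-20 times a minute; early deepfakes
            # often showed almost none. Flag suspiciously low rates.
            return blinks_per_minute(eye_frames, fps) < 5.0

    And this particular tell is exactly the kind of thing the improving generators patch out, which is the cat-and-mouse problem DARPA’s detection program is up against.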

    So how will humanity handle a world where any random troll can create convincing fake videos? Well, based on our track record with other sources of information that can potentially be faked and that require a degree of wisdom and discernment to navigate: not well. Not well at all.

    Posted by Pterrafractyl | July 3, 2018, 2:47 pm
  5. Here’s the latest reminder that when the ‘deepfake’ video technology develops to the point of being indistinguishable from real videos, the far right is going to go into overdrive creating videos purporting to prove virtually every far-right fantasy you can imagine. Especially the ‘PizzaGate’ conspiracy theory pushed by the right wing in the final weeks of the 2016 campaign, alleging that Hillary Clinton and a number of other prominent Democrats are part of a Satanist child abuse ring:

    Far-right crackpot ‘journalist’ Liz Crokin is repeating her assertion that a video of Hillary Clinton – specifically, of Hillary sexually abusing a child and then cutting the child’s face off and eating it – is floating around on the dark web and, according to her sources, is definitely real. And now Crokin warns that reports about ‘deepfake’ technology are just disinformation stories being preemptively put out by the Deep State to make the public skeptical when the videos of Hillary cutting the face off of a child come to light:

    Right Wing Watch

    Liz Crokin: Trump Confirmed The Existence Of A Video Showing Hillary Clinton Torturing A Child

    By Kyle Mantyla
    July 12, 2018 2:00 pm

    Right-wing “journalist” Liz Crokin appeared on Sheila Zilinsky’s podcast earlier this week, where the two unhinged conspiracy theorists discussed Crokin’s assertion that a video exists showing Hillary Clinton sexually abusing and torturing a child.

    “I understand that there is a video circulating on the dark web of [Clinton] eating the face of a child,” Zilinsky said. “Does this video exist?”

    “Yes,” responded Crokin. “There are videos that prove that Hillary Clinton is involved in child sex trafficking and pedophilia. I have sources that have told me that; I trust these sources, so there is evidence that exists that proves that she is involved in this stuff … I believe with all my heart that this is true.”

    After claiming that “the deep state” targeted former White House national security adviser Michael Flynn for destruction because he and his son “were exposing Pizzagate,” Crokin insisted that media reports warning about the ability to use modern technology to create fake videos that make it appear as if famous people are saying or doing things are secretly a part of an effort to prepare the public to dismiss the Clinton video when it is finally released.

    “Based off of what lies they report, I can tell what they’re afraid of, I can tell what’s really going on behind the scenes,” she said. “So the fact that they are saying, ‘Oh, if a tape comes out involving Hillary or Obama doing like a sex act or x, y, z, it’s fake news,’ that tells me that there are tapes that incriminate Obama and Hillary.”

    As further proof that such tapes exist, Crokin repeated her claim that President Trump’s tweet of a link to a fringe right-wing conspiracy website that featured a video of her discussing this supposed Clinton tape was confirmation of the veracity of her claims.

    “When President Trump retweeted MAGA Pill, MAGA Pill’s tweet was my video talking about Hillary Clinton’s sex tape,” she insisted. “I know President Trump, I’ve met him, I’ve studied him, I’ve reported on him … I’ve known him and of him and reported on him for a very long time. I understand how his brain works, I understand how he thinks, I understand ‘The Art of War,’ his favorite book, I understand this man. And I know that President Trump—there’s no way that he didn’t know when he retweeted MAGA Pill that my video talking about Hillary Clinton’s sex tape was MAGA Pill’s pinned tweet. There is no way that President Trump didn’t know that.”

    ———-

    “Liz Crokin: Trump Confirmed The Existence Of A Video Showing Hillary Clinton Torturing A Child” by Kyle Mantyla; Right Wing Watch; 07/12/2018

    ““I understand that there is a video circulating on the dark web of [Clinton] eating the face of a child,” Zilinsky said. “Does this video exist?””

    Welcome to our dystopian near-future. Does video of [insert horror scenario here] actually exist? Oh yes, we will be assured, it’s definitely totally real and you can totally trust my source!


    “Yes,” responded Crokin. “There are videos that prove that Hillary Clinton is involved in child sex trafficking and pedophilia. I have sources that have told me that; I trust these sources, so there is evidence that exists that proves that she is involved in this stuff … I believe with all my heart that this is true.”

    And someday, with advances in deepfake video technology and special effects they might actually produce such a video. It’s really just a matter of time, and at this point we have to just hope that they use special effects to fake a child having their face cut off and eaten and don’t actually do that to a kid (you never know when you’re dealing with Nazis and their fellow travelers).

    Crokin then went on to insist that the various news articles warning about the advances in deepfake technology were all part of a secret effort to protect Hillary when the video is finally released:


    After claiming that “the deep state” targeted former White House national security adviser Michael Flynn for destruction because he and his son “were exposing Pizzagate,” Crokin insisted that media reports warning about the ability to use modern technology to create fake videos that make it appear as if famous people are saying or doing things are secretly a part of an effort to prepare the public to dismiss the Clinton video when it is finally released.

    “Based off of what lies they report, I can tell what they’re afraid of, I can tell what’s really going on behind the scenes,” she said. “So the fact that they are saying, ‘Oh, if a tape comes out involving Hillary or Obama doing like a sex act or x, y, z, it’s fake news,’ that tells me that there are tapes that incriminate Obama and Hillary.”

    And as further evidence of her claims, Crokin points to President Trump’s retweet of a link to a website featuring a video of Crokin discussing this alleged Hillary-child-face-eating video:


    As further proof that such tapes exist, Crokin repeated her claim that President Trump’s tweet of a link to a fringe right-wing conspiracy website that featured a video of her discussing this supposed Clinton tape was confirmation of the veracity of her claims.

    “When President Trump retweeted MAGA Pill, MAGA Pill’s tweet was my video talking about Hillary Clinton’s sex tape,” she insisted. “I know President Trump, I’ve met him, I’ve studied him, I’ve reported on him … I’ve known him and of him and reported on him for a very long time. I understand how his brain works, I understand how he thinks, I understand ‘The Art of War,’ his favorite book, I understand this man. And I know that President Trump—there’s no way that he didn’t know when he retweeted MAGA Pill that my video talking about Hillary Clinton’s sex tape was MAGA Pill’s pinned tweet. There is no way that President Trump didn’t know that.”

    Yep, the president is promoting this lady. And that, right there, summarizes the next stage of America’s nightmare descent into neo-Nazi fantasies that’s just around the corner (not to mention the impact on the rest of the world).

    Of course, the denials that deepfake technology exists will start getting rather amusing once people use that same technology to create deepfakes of Trump, Crokin, and anyone else in the public eye (since you need lots of video of someone to make the technology work).

    Also keep in mind that Crokin claims the child-face-eating video is merely one of the videos of Hillary Clinton floating around. There are many videos of Hillary that Crokin claims to be aware of. So when the child-face-eating video emerges, it’s probably going to just be a preview of what’s to come:

    Right Wing Watch

    Liz Crokin Claims That She Knows ‘One Hundred Percent’ That ‘A Video Of Hillary Clinton Sexually Abusing A Child Exists’

    By Kyle Mantyla | April 13, 2018 1:07 pm

    Fringe right-wing conspiracy theorist Liz Crokin posted a video on YouTube last night in which she declared that a gruesome video showing Hillary Clinton cutting the face off of a living child exists and will soon be released for all the world to see.

    “I know with absolute certainty that there is a tape that exists that involves Hillary Clinton sexually abusing a child,” Crokin said. “I have gotten this confirmed from very respectable and high-level sources.”

    Crokin said that reports that Russian-linked accounts posted a fake Clinton sex tape during the 2016 election are false, saying that no such fake video exists and that the stories about it are simply an effort to confuse the public “so when and if the actual video of Hillary Clinton sexually abusing a child comes out, the seeds of doubt are already planted in people’s heads.”

    “All I know is that, one hundred percent, a video of Hillary Clinton sexually abusing a child exists,” she said. “I know there’s many videos incriminating her, I just don’t know which one they are going to release. But there are people, there are claims that this sexual abuse video is on the dark web and I know that some people have seen it, some in law enforcement, the NYPD law enforcement, some NYPD officers have seen it and it made them sick, it made them cry, it made them vomit, some of them had to seek psychological counseling after this.”

    “I’m not going to go into too much detail because it’s so disgusting, but in this video, they cut off a child’s face as the child is alive,” Crokin claimed. “I’m just going to leave it at that.”

    ———-

    “Liz Crokin Claims That She Knows ‘One Hundred Percent’ That ‘A Video Of Hillary Clinton Sexually Abusing A Child Exists’” by Kyle Mantyla; Right Wing Watch; 04/13/2018

    “I’m not going to go into too much detail because it’s so disgusting, but in this video, they cut off a child’s face as the child is alive…I’m just going to leave it at that.”

    The child was alive when Hillary cut its face off and ate it, after sexually abusing it. That’s what Crokin has spent months assuring her audience is a real thing, and Donald Trump appears to be promoting her. Of course.

    And that’s just one of the alleged Hillary-as-beyond-evil-witch videos Crokin claims with certainty are real and in the possession of some law enforcement officials (this is what the whole ‘QAnon’ obsession on the right is about):


    “All I know is that, one hundred percent, a video of Hillary Clinton sexually abusing a child exists,” she said. “I know there’s many videos incriminating her, I just don’t know which one they are going to release. But there are people, there are claims that this sexual abuse video is on the dark web and I know that some people have seen it, some in law enforcement, the NYPD law enforcement, some NYPD officers have seen it and it made them sick, it made them cry, it made them vomit, some of them had to seek psychological counseling after this.”

    Also notice how Crokin acts like she doesn’t want to go into the details, and yet gives all sorts of details that hint at something so horrific that the alleged NYPD officers who have seen the video needed psychological counseling. And that points towards one of the other aspects of this looming nightmare phase of America’s intellectual descent: the flood of far-right fake videos that is inevitably going to be produced is probably going to be designed to psychologically traumatize the viewer. The global public is about to be exposed to one torture/murder/porn video of famous people after another, because if you’re trying to impact your audience, visually traumatizing them is an effective way to do it.

    It’s no accident that much of far-right conspiracy culture has a fixation on child sex crimes. Yes, some of that fixation is due to real cases of elite-protected child abuse, like ‘The Franklin cover-up’ or Jimmy Savile, and to the profound moral gravity such crimes would carry if organized elite sex abuse rings actually exist. The visceral revulsion these crimes provoke makes them inherently impactful. But in the right-wing conspiracy worldview pedophilia tends to play a central role (as anyone familiar with Alex Jones can attest). Crokin is merely one example of that.

    And that’s exactly why we should expect the slew of fake videos that are inevitably going to be produced in droves for political gain to involve images that truly psychologically scar the viewer. It’s just more impactful that way.

    So whether you’re a fan of Hillary Clinton or loathe her, get ready to have her seared into your memory forever. Probably eating the face of a child or something like that.

    Posted by Pterrafractyl | July 13, 2018, 2:39 pm
  6. If you didn’t think access to a gun could get much easier in America, it’s time to rethink that proposition: starting on August 1st, it will be legal under US law to distribute instructions over the internet for creating 3D-printable guns. Recall that 3D-printable guns were first developed in 2013 by far-right crypto-anarchist Cody Wilson, the guy also behind the untraceable Bitcoin Dark Wallet and Hatreon, a crowdfunding platform for neo-Nazis and others who got kicked off of Patreon.

    Wilson first put instructions for 3D-printable guns online back in 2013, but the US State Department forced him to take them down, arguing that posting them amounted to exporting weapons. Wilson sued, and the case was stuck in the courts for years. Flash forward to April of 2018, and the US government suddenly decided to reverse course and settle. August 1 was declared the day 3D-printable gun instructions would flood the internet.

    So it looks like the cypherpunk approach of using technology to get around political realities you don’t like is about to be applied to gun control laws, with untraceable guns for potentially anyone as one of the immediate consequences:

    Quartz

    The age of 3D-printed guns in America is here

    Hanna Kozlowska
    July 28, 2018

    A last-ditch effort to block a US organization from making instructions to 3D-print a gun available to download has failed. The template will be posted online on Wednesday (Aug. 1).

    From then, anyone with access to a 3D printer will be able to create their own firearm out of the same kind of material that’s used for Lego blocks. The guns are untraceable, and require no background check to make or own.

    “The age of the downloadable gun formally begins,” states the website of Defense Distributed, the non-profit defense firm that has fought for years to make this “age” possible.

    In April, Defense Distributed reached a settlement with the US State Department in a federal lawsuit that allowed publishing the plans on printing a gun to proceed, which took effect in June. On July 26, gun-control advocates asked a federal court in Texas to block the decision, but the judge decided not to intervene. Lawmakers in Washington also tried in the past week to mobilize against the development, but it’s likely all too late (paywall).

    The first of this kind of gun—designed by the founder of Defense Distributed Cody Wilson, a self-described crypto-anarchist—was “The Liberator,” a single shot pistol, which had a metal part that made it compliant with a US gun-detection law. When the plans were first released in 2013, Wilson claimed they were downloaded more than 100,000 times in the first couple of days. Shortly thereafter, the government said the enterprise was illegal.

    Defense Distributed sued the federal government in 2015 after it was blocked from publishing the 3D-printing plans online. With the April 2018 settlement, the government reversed its position. David Chipman, former agent at the Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF) and current adviser to Giffords, a gun control organization run by former congresswoman Gabby Giffords (who was infamously shot in the head), blames the about-face on the change in presidential administrations.

    The decision means that people who can’t pass a standard background check “like terrorists, convicted felons, and domestic abusers” will be able to print out a gun without a serial number, Chipman wrote in a blog post. “This could have severe repercussions a decade from now if we allow weapons of this kind to multiply.”

    ———-

    “The age of 3D-printed guns in America is here” by Hanna Kozlowska; Quartz; 07/28/2018

    “A last-ditch effort to block a US organization from making instructions to 3D-print a gun available to download has failed. The template will be posted online on Wednesday (Aug. 1).”

    In just a couple more days, anyone with access to a 3D printer will be able to create as many untraceable guns as they desire. This is thanks to a settlement reached in April between Cody Wilson’s company, Defense Distributed, and the federal government:


    From then, anyone with access to a 3D printer will be able to create their own firearm out of the same kind of material that’s used for Lego blocks. The guns are untraceable, and require no background check to make or own.

    “The age of the downloadable gun formally begins,” states the website of Defense Distributed, the non-profit defense firm that has fought for years to make this “age” possible.

    In April, Defense Distributed reached a settlement with the US State Department in a federal lawsuit that allowed publishing the plans on printing a gun to proceed, which took effect in June. On July 26, gun-control advocates asked a federal court in Texas to block the decision, but the judge decided not to intervene. Lawmakers in Washington also tried in the past week to mobilize against the development, but it’s likely all too late (paywall).

    The first of this kind of gun—designed by the founder of Defense Distributed Cody Wilson, a self-described crypto-anarchist—was “The Liberator,” a single shot pistol, which had a metal part that made it compliant with a US gun-detection law. When the plans were first released in 2013, Wilson claimed they were downloaded more than 100,000 times in the first couple of days. Shortly thereafter, the government said the enterprise was illegal.

    So why exactly did the US government suddenly drop the lawsuit in April and pave the way for the distribution of 3D printed gun instructions? Well, the answer appears to be that the Trump administration wanted the case dropped. At least that’s how gun control advocates interpreted it:


    Defense Distributed sued the federal government in 2015 after it was blocked from publishing the 3D-printing plans online. With the April 2018 settlement, the government reversed its position. David Chipman, former agent at the Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF) and current adviser to Giffords, a gun control organization run by former congresswoman Gabby Giffords (who was infamously shot in the head), blames the about-face on the change in presidential administrations.

    The decision means that people who can’t pass a standard background check “like terrorists, convicted felons, and domestic abusers” will be able to print out a gun without a serial number, Chipman wrote in a blog post. “This could have severe repercussions a decade from now if we allow weapons of this kind to multiply.”

    Although, as the following article notes, one of the reasons for the change in the federal government’s attitude towards the case may have been a desire to revise the rules on gun exports in general, something US gun manufacturers have long wanted. The Trump administration proposed revising and streamlining the process for exporting consumer firearms and related technical information, including tutorials for 3D-printed guns. The rule change would also shift jurisdiction over some items from the State Department to the Commerce Department (don’t forget that it was the State Department that imposed the initial injunction on the distribution of the 3D-printing instructions). So it sounds like the Trump administration’s move to legalize the distribution of 3D-printable gun instructions may have been part of a broader effort to export more guns in general:

    The New York Times

    ‘Downloadable Gun’ Clears a Legal Obstacle, and Activists Are Alarmed

    By Tiffany Hsu and Alan Feuer
    July 13, 2018

    Learning to make a so-called ghost gun — an untraceable, unregistered firearm without a serial number — could soon become much easier.

    The United States last month agreed to allow a Texas man to distribute online instruction manuals for a pistol that could be made by anyone with access to a 3-D printer. The man, Cody Wilson, had sued the government after the State Department forced him to take down the instructions because they violated export laws.

    Mr. Wilson, who is well known in anarchist and gun-rights communities, complained that his right to free speech was being stifled and that he was sharing computer code, not actual guns.

    The case was settled on June 29, and Mr. Wilson gave The New York Times a copy of the agreement this week. The settlement states that 3-D printing tutorials are approved “for public release (i.e. unlimited distribution) in any form.”

    The government also agreed to pay nearly $40,000 of Mr. Wilson’s legal fees.

    The willingness to resolve the case — after the government had won some lower court judgments — has raised alarms among gun-control advocates, who said it would make it easier for felons and others to get firearms. Some critics said it suggested close ties between the Trump administration and gun-ownership advocates, this week filing requests for documents that might explain why the government agreed to settle.

    The administration “capitulated in a case it had won at every step of the way,” said J. Adam Skaggs, the chief counsel for the Giffords Law Center to Prevent Gun Violence. “This isn’t a case where the underlying facts of the law changed. The only thing that changed was the administration.”

    Mr. Wilson’s organization, Defense Distributed, will repost its online guides on Aug. 1, when “the age of the downloadable gun formally begins,” according to its website. The files will include plans to make a variety of firearms using 3-D printers, including for AR-15-style rifles, which have been used in several mass shootings.

    Mr. Wilson said the settlement would allow gunmaking enthusiasts to come out from the shadows. Copies of his plans have circulated on the so-called dark web since his site went down.

    “I can see how it would attract more people and maybe lessen the tactic of having to hide your identity,” Mr. Wilson said of the settlement in an interview. “It’s not a huge space right now, but I do know that it’s only going to accelerate things.”

    But as the “landmark settlement” brings ghost gun instructions out into the open, it could also give felons and domestic abusers access to firearms that background checks would otherwise block them from owning, said Adam Winkler, a law professor at the University of California, Los Angeles.

    “The current laws are already difficult to enforce — they’re historically not especially powerful, and they’re riddled with loopholes — and this will just make those laws easier to evade,” Mr. Winkler said. “It not only allows this tech to flourish out of the underground but gives it legal sanction.”

    Some saw the settlement as proof that the Trump administration wanted to further deregulate the gun industry and increase access to firearms. This year, the administration proposed a rule change that would revise and streamline the process for exporting consumer firearms and related technical information, including tutorials for 3-D printed designs.

    The change, long sought by firearms manufacturers, would shift jurisdiction of certain items from the State Department to the Commerce Department, which uses a simpler licensing procedure for exports.

    On Thursday and Friday, the Brady Center to Prevent Gun Violence filed requests under the Freedom of Information Act for any documents showing how the government decided on the settlement over printable firearms, and whether organizations like the National Rifle Association or the National Shooting Sports Foundation were involved.

    Neither trade group commented for this article, but some gun advocates said Mr. Trump has been less helpful toward the firearms industry than he had suggested he would be.

    Mr. Wilson also said that “there has not been a pro-gun streak” under Mr. Trump’s Justice Department, though he praised the nomination of Judge Brett M. Kavanaugh, who is seen as a champion of Second Amendment rights, to the Supreme Court.

    “Trump will go to the N.R.A. and be like, ‘I’m your greatest friend,’ but unfortunately his D.O.J. has fought gun cases tooth and nail in the courts,” he said.

    Mr. Wilson clashed with the government in 2013 after he successfully printed a mostly plastic handgun — a tech-focused twist on a longstanding and generally legal tradition of do-it-yourself gunmaking that has included AR-15 crafting parties in enthusiasts’ garages. His creation inspired Philadelphia to pass legislation banning the use of 3-D printers to manufacture firearms.

    After Mr. Wilson posted online blueprints for the gun, they were downloaded more than 100,000 times within a few days, and later appeared on other websites and file-sharing services.

    The State Department quickly caught wind of the files and demanded that Mr. Wilson remove them, saying that they violated export regulations dealing with sensitive military hardware and technology.

    Mr. Wilson capitulated, and after two years paired up with the Second Amendment Foundation to file his lawsuit. A Federal District Court judge denied his request for a preliminary injunction against the State Department, a decision that an appellate court upheld. The Supreme Court declined to hear the case.

    In a statement, the State Department said the settlement with Mr. Wilson was voluntary and “entered into following negotiations,” adding that “the court did not rule in favor of the plaintiffs in this case.”

    To raise money for his legal defense, he said, he sold milling machines that can read digital design files and stamp out metal gun parts. Proceeds from the so-called Ghost Gunner machines, which cost $1,675 each, are used to run his organization, he said.

    Ghost guns, by their nature, are difficult to track.

    Guns manufactured for sale feature a serial number on the receiver, which houses the firing mechanism. But unfinished frames known as “80 percent” receivers can be easily purchased, completed with machinery like the Ghost Gunner and then combined with the remaining parts of the firearm, which are readily available online and at gun shows.

    But with the government adjusting the export rules that first sparked the case, Mr. Wilson will be able to freely publish blueprints for 3-D printers, said Alan M. Gottlieb, the founder of the Second Amendment Foundation, in a statement.

    “Not only is this a First Amendment victory for free speech,” he said, “it also is a devastating blow to the gun prohibition lobby.”

    ———-

    “Learning to make a so-called ghost gun — an untraceable, unregistered firearm without a serial number — could soon become much easier.”

    DIY untraceable firearms are about to be a thing, and the US government has legally sanctioned them. And that sudden change of heart, combined with the legal victories the government previously enjoyed in this case, is what left so many gun control advocates assuming that the Trump administration had decided to promote 3D-printable guns:


    The government also agreed to pay nearly $40,000 of Mr. Wilson’s legal fees.

    The willingness to resolve the case — after the government had won some lower court judgments — has raised alarms among gun-control advocates, who said it would make it easier for felons and others to get firearms. Some critics said it suggested close ties between the Trump administration and gun-ownership advocates, this week filing requests for documents that might explain why the government agreed to settle.

    The administration “capitulated in a case it had won at every step of the way,” said J. Adam Skaggs, the chief counsel for the Giffords Law Center to Prevent Gun Violence. “This isn’t a case where the underlying facts of the law changed. The only thing that changed was the administration.”

    And note how a Federal District Court judge had denied Wilson’s request for a preliminary injunction against the State Department, a decision that an appellate court upheld, and the Supreme Court declined to hear the case. The State Department also issued a statement saying the settlement with Wilson was voluntary and “entered into following negotiations,” and that “the court did not rule in favor of the plaintiffs in this case.” That sure doesn’t sound like a government that was on the verge of losing its case:


    The State Department quickly caught wind of the files and demanded that Mr. Wilson remove them, saying that they violated export regulations dealing with sensitive military hardware and technology.

    Mr. Wilson capitulated, and after two years paired up with the Second Amendment Foundation to file his lawsuit. A Federal District Court judge denied his request for a preliminary injunction against the State Department, a decision that an appellate court upheld. The Supreme Court declined to hear the case.

    In a statement, the State Department said the settlement with Mr. Wilson was voluntary and “entered into following negotiations,” adding that “the court did not rule in favor of the plaintiffs in this case.”

    But that apparent desire by the Trump administration to promote 3D-printable guns might be less a reflection of a specific interest in printable guns and more a reflection of the administration’s desire to promote gun exports in general:


    Some saw the settlement as proof that the Trump administration wanted to further deregulate the gun industry and increase access to firearms. This year, the administration proposed a rule change that would revise and streamline the process for exporting consumer firearms and related technical information, including tutorials for 3-D printed designs.

    The change, long sought by firearms manufacturers, would shift jurisdiction of certain items from the State Department to the Commerce Department, which uses a simpler licensing procedure for exports.

    Regardless, we are just a couple of days away from 3D-printable gun instructions being legally distributed online. And it’s not just single-shot pistols. It’s going to include AR-15-style rifles:


    Mr. Wilson’s organization, Defense Distributed, will repost its online guides on Aug. 1, when “the age of the downloadable gun formally begins,” according to its website. The files will include plans to make a variety of firearms using 3-D printers, including for AR-15-style rifles, which have been used in several mass shootings.

    Mr. Wilson said the settlement would allow gunmaking enthusiasts to come out from the shadows. Copies of his plans have circulated on the so-called dark web since his site went down.

    And this probably also means Wilson is going to be selling a lot more of those gun-milling machines that allow anyone to create the metal components of an untraceable ghost gun:


    To raise money for his legal defense, he said, he sold milling machines that can read digital design files and stamp out metal gun parts. Proceeds from the so-called Ghost Gunner machines, which cost $1,675 each, are used to run his organization, he said.

    Ghost guns, by their nature, are difficult to track.

    Guns manufactured for sale feature a serial number on the receiver, which houses the firing mechanism. But unfinished frames known as “80 percent” receivers can be easily purchased, completed with machinery like the Ghost Gunner and then combined with the remaining parts of the firearm, which are readily available online and at gun shows.

    And keep in mind that, while this is a story about a US legal case, it’s effectively a global story. There’s undoubtedly going to be an explosion of 3D-printing blueprints for pretty much any tool of violence one can imagine, accessible to anyone with an internet connection. People won’t need to go scouring the Dark Web or find some illicit dealer in 3D-printer instructions. They’ll just go to one of the many websites brimming with a growing library of sophisticated 3D-printable weaponry.

    So now we get to watch this grim experiment unfold. And who knows, it might actually reduce US gun exports by preemptively flooding export markets with guns. Because why pay for an expensive imported gun when you can just print a cheap one?

    More generally, what’s going to happen as 3D-printable guns become accessible in every corner of the globe? What kinds of conflicts might pop up that simply wouldn’t have been possible before? We’re long familiar with conflicts fueled by large numbers of small arms flooding into a country, but someone has to be willing to supply those arms; there’s generally some sort of state sponsor. What happens when any random group or movement can effectively arm itself? Is humanity going to get even more violent as the cost of guns plummets and accessibility explodes? Probably. We’ll be collectively confirming that soon.

    Posted by Pterrafractyl | July 30, 2018, 3:43 pm
  7. Remember that report from November 2017 about how Google was secretly gathering surprisingly precise location information from Android smartphones, using data from cell towers, even when people turned off “Location services”? Well, here’s a follow-up report: Google claims it changed that policy, but it’s somehow still collecting very precise location data from Android phones and iPhones (if you use Google’s apps) even when you turn off “location services” (surprise!):

    Associated Press

    AP Exclusive: Google tracks your movements, like it or not

    By RYAN NAKASHIMA
    08/15/2018

    SAN FRANCISCO (AP) — Google wants to know where you go so badly that it records your movements even when you explicitly tell it not to.

    An Associated Press investigation found that many Google services on Android devices and iPhones store your location data even if you’ve used a privacy setting that says it will prevent Google from doing so.

    Computer-science researchers at Princeton confirmed these findings at the AP’s request.

    For the most part, Google is upfront about asking permission to use your location information. An app like Google Maps will remind you to allow access to location if you use it for navigating. If you agree to let it record your location over time, Google Maps will display that history for you in a “timeline” that maps out your daily movements.

    Storing your minute-by-minute travels carries privacy risks and has been used by police to determine the location of suspects — such as a warrant that police in Raleigh, North Carolina, served on Google last year to find devices near a murder scene. So the company lets you “pause” a setting called Location History.

    Google says that will prevent the company from remembering where you’ve been. Google’s support page on the subject states: “You can turn off Location History at any time. With Location History off, the places you go are no longer stored.”

    That isn’t true. Even with Location History paused, some Google apps automatically store time-stamped location data without asking. (It’s possible, although laborious, to delete it.)

    For example, Google stores a snapshot of where you are when you merely open its Maps app. Automatic daily weather updates on Android phones pinpoint roughly where you are. And some searches that have nothing to do with location, like “chocolate chip cookies,” or “kids science kits,” pinpoint your precise latitude and longitude — accurate to the square foot — and save it to your Google account.

    The privacy issue affects some two billion users of devices that run Google’s Android operating software and hundreds of millions of worldwide iPhone users who rely on Google for maps or search.

    Storing location data in violation of a user’s preferences is wrong, said Jonathan Mayer, a Princeton computer scientist and former chief technologist for the Federal Communications Commission’s enforcement bureau. A researcher from Mayer’s lab confirmed the AP’s findings on multiple Android devices; the AP conducted its own tests on several iPhones that found the same behavior.

    “If you’re going to allow users to turn off something called ‘Location History,’ then all the places where you maintain location history should be turned off,” Mayer said. “That seems like a pretty straightforward position to have.”

    Google says it is being perfectly clear.

    “There are a number of different ways that Google may use location to improve people’s experience, including: Location History, Web and App Activity, and through device-level Location Services,” a Google spokesperson said in a statement to the AP. “We provide clear descriptions of these tools, and robust controls so people can turn them on or off, and delete their histories at any time.”

    Google’s explanation did not convince several lawmakers.

    Sen. Mark Warner of Virginia told the AP it is “frustratingly common” for technology companies “to have corporate practices that diverge wildly from the totally reasonable expectations of their users,” and urged policies that would give users more control of their data. Rep. Frank Pallone of New Jersey called for “comprehensive consumer privacy and data security legislation” in the wake of the AP report.

    To stop Google from saving these location markers, the company says, users can turn off another setting, one that does not specifically reference location information. Called “Web and App Activity” and enabled by default, that setting stores a variety of information from Google apps and websites to your Google account.

    When paused, it will prevent activity on any device from being saved to your account. But leaving “Web & App Activity” on and turning “Location History” off only prevents Google from adding your movements to the “timeline,” its visualization of your daily travels. It does not stop Google’s collection of other location markers.

    You can delete these location markers by hand, but it’s a painstaking process since you have to select them individually, unless you want to delete all of your stored activity.

    You can see the stored location markers on a page in your Google account at myactivity.google.com, although they’re typically scattered under several different headers, many of which are unrelated to location.

    To demonstrate how powerful these other markers can be, the AP created a visual map of the movements of Princeton postdoctoral researcher Gunes Acar, who carried an Android phone with Location history off, and shared a record of his Google account.

    The map includes Acar’s train commute on two trips to New York and visits to The High Line park, Chelsea Market, Hell’s Kitchen, Central Park and Harlem. To protect his privacy, The AP didn’t plot the most telling and frequent marker — his home address.

    Huge tech companies are under increasing scrutiny over their data practices, following a series of privacy scandals at Facebook and new data-privacy rules recently adopted by the European Union. Last year, the business news site Quartz found that Google was tracking Android users by collecting the addresses of nearby cellphone towers even if all location services were off. Google changed the practice and insisted it never recorded the data anyway.

    Critics say Google’s insistence on tracking its users’ locations stems from its drive to boost advertising revenue.

    “They build advertising information out of data,” said Peter Lenz, the senior geospatial analyst at Dstillery, a rival advertising technology company. “More data for them presumably means more profit.”

    The AP learned of the issue from K. Shankari, a graduate researcher at UC Berkeley who studies the commuting patterns of volunteers in order to help urban planners. She noticed that her Android phone prompted her to rate a shopping trip to Kohl’s, even though she had turned Location History off.

    “So how did Google Maps know where I was?” she asked in a blog post.

    The AP wasn’t able to recreate Shankari’s experience exactly. But its attempts to do so revealed Google’s tracking. The findings disturbed her.

    “I am not opposed to background location tracking in principle,” she said. “It just really bothers me that it is not explicitly stated.”

    Google offers a more accurate description of how Location History actually works in a place you’d only see if you turn it off — a popup that appears when you “pause” Location History on your Google account webpage. There the company notes that “some location data may be saved as part of your activity on other Google services, like Search and Maps.”

    Google offers additional information in a popup that appears if you re-activate the “Web & App Activity” setting — an uncommon action for many users, since this setting is on by default. That popup states that, when active, the setting “saves the things you do on Google sites, apps, and services … and associated information, like location.”

    Warnings when you’re about to turn Location History off via Android and iPhone device settings are more difficult to interpret. On Android, the popup explains that “places you go with your devices will stop being added to your Location History map.” On the iPhone, it simply reads, “None of your Google apps will be able to store location data in Location History.”

    The iPhone text is technically true if potentially misleading. With Location History off, Google Maps and other apps store your whereabouts in a section of your account called “My Activity,” not “Location History.”

    Since 2014, Google has let advertisers track the effectiveness of online ads at driving foot traffic, a feature that Google has said relies on user location histories.

    The company is pushing further into such location-aware tracking to drive ad revenue, which rose 20 percent last year to $95.4 billion. At a Google Marketing Live summit in July, Google executives unveiled a new tool called “local campaigns” that dynamically uses ads to boost in-person store visits. It says it can measure how well a campaign drove foot traffic with data pulled from Google users’ location histories.

    Google also says location records stored in My Activity are used to target ads. Ad buyers can target ads to specific locations — say, a mile radius around a particular landmark — and typically have to pay more to reach this narrower audience.

    While disabling “Web & App Activity” will stop Google from storing location markers, it also prevents Google from storing information generated by searches and other activity. That can limit the effectiveness of the Google Assistant, the company’s digital concierge.

    ———-

    “AP Exclusive: Google tracks your movements, like it or not” by RYAN NAKASHIMA; Associated Press; 08/15/2018

    “An Associated Press investigation found that many Google services on Android devices and iPhones store your location data even if you’ve used a privacy setting that says it will prevent Google from doing so.”

    Yes, it turns out when you turn off location services on your Android smartphone, you’re merely turning off your own access to that location history. Google will still collect and keep that data for its own purposes.

    And while some commonly used apps that you would expect to require location data, like Google Maps or daily weather updates, automatically collect location data without ever asking, it sounds like certain searches that should have nothing to do with your location also trigger automatic location collection. And the search-triggered locations Google obtains apparently include your precise latitude and longitude, accurate to the square foot:


    For the most part, Google is upfront about asking permission to use your location information. An app like Google Maps will remind you to allow access to location if you use it for navigating. If you agree to let it record your location over time, Google Maps will display that history for you in a “timeline” that maps out your daily movements.

    Storing your minute-by-minute travels carries privacy risks and has been used by police to determine the location of suspects — such as a warrant that police in Raleigh, North Carolina, served on Google last year to find devices near a murder scene. So the company lets you “pause” a setting called Location History.

    Google says that will prevent the company from remembering where you’ve been. Google’s support page on the subject states: “You can turn off Location History at any time. With Location History off, the places you go are no longer stored.”

    That isn’t true. Even with Location History paused, some Google apps automatically store time-stamped location data without asking. (It’s possible, although laborious, to delete it.)

    For example, Google stores a snapshot of where you are when you merely open its Maps app. Automatic daily weather updates on Android phones pinpoint roughly where you are. And some searches that have nothing to do with location, like “chocolate chip cookies,” or “kids science kits,” pinpoint your precise latitude and longitude — accurate to the square foot — and save it to your Google account.
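
    To put “accurate to the square foot” in perspective, here is a quick back-of-the-envelope sketch (my own illustration, not from the article) of roughly how much ground distance each decimal place of a stored latitude/longitude coordinate covers:

        # Rough figures: one degree of latitude spans about 111,320 meters
        # (Earth's ~40,008 km pole-to-pole circumference divided by 360).
        # Longitude spacing shrinks with cos(latitude), so this is an
        # upper bound for the east-west direction.
        METERS_PER_DEGREE_LAT = 111_320

        for places in range(4, 8):
            resolution = METERS_PER_DEGREE_LAT * 10 ** -places
            print(f"{places} decimal places -> ~{resolution:.2f} m")
        # 4 -> ~11.13 m, 5 -> ~1.11 m, 6 -> ~0.11 m, 7 -> ~0.01 m

    A foot is about 0.3 meters, so “square foot” accuracy corresponds to roughly six decimal places of stored coordinates.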

    Recall that the initial story from November about Google’s use of cellphone-tower triangulation to surreptitiously collect location data indicated that this type of location data was very accurate and could determine whether or not you set foot in a given retail store (it was being used for location-targeted ads). So if this latest report indicates that Google can get your location down to the nearest square foot, that sounds like a reference to that cell-tower triangulation technique.
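
    For a rough sense of how tower data can pin down a handset, here is a minimal 2-D trilateration sketch. This is purely illustrative: Google’s actual pipeline is not public, and real systems fit noisy signal-strength and timing measurements with a least-squares estimate rather than exact distances.

        # Minimal 2-D trilateration: recover a position from known distances
        # to three fixed towers by subtracting the circle equations pairwise,
        # which leaves a 2x2 linear system solvable by Cramer's rule.
        def trilaterate(p1, p2, p3, d1, d2, d3):
            (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
            a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
            c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
            a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
            c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
            det = a1 * b2 - a2 * b1  # zero if the towers are collinear
            return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

        # Towers at the corners of a 1 km grid; the true position is
        # 400 m east and 300 m north of tower 1.
        print(trilaterate((0, 0), (1000, 0), (0, 1000), 500.0, 670.8, 806.2))
        # -> approximately (400.0, 300.0)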

    The article goes on to reference that report from November, noting that “Google changed the practice and insisted it never recorded the data anyway.” So Google apparently admitted to stopping something it says it was never doing in the first place. It’s the kind of nonsense corporate response that suggests the program never ended; it was merely “changed”:


    Huge tech companies are under increasing scrutiny over their data practices, following a series of privacy scandals at Facebook and new data-privacy rules recently adopted by the European Union. Last year, the business news site Quartz found that Google was tracking Android users by collecting the addresses of nearby cellphone towers even if all location services were off. Google changed the practice and insisted it never recorded the data anyway.

    And Google does indeed admit that this data is being used for location-based ad targeting, so it sure sounds like nothing has changed since that initial report in November:


    Since 2014, Google has let advertisers track the effectiveness of online ads at driving foot traffic, a feature that Google has said relies on user location histories.

    The company is pushing further into such location-aware tracking to drive ad revenue, which rose 20 percent last year to $95.4 billion. At a Google Marketing Live summit in July, Google executives unveiled a new tool called “local campaigns” that dynamically uses ads to boost in-person store visits. It says it can measure how well a campaign drove foot traffic with data pulled from Google users’ location histories.

    Google also says location records stored in My Activity are used to target ads. Ad buyers can target ads to specific locations — say, a mile radius around a particular landmark — and typically have to pay more to reach this narrower audience.
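
    That kind of radius targeting is straightforward once location markers are stored as coordinates. Here is a hedged sketch of the underlying distance check (my illustration, not Google’s ad-serving code; the landmark and marker coordinates are made up for the example):

        import math

        def haversine_m(lat1, lon1, lat2, lon2):
            # Great-circle distance between two lat/lon points, in meters.
            R = 6_371_000  # mean Earth radius in meters
            phi1, phi2 = math.radians(lat1), math.radians(lat2)
            dphi = math.radians(lat2 - lat1)
            dlam = math.radians(lon2 - lon1)
            a = (math.sin(dphi / 2) ** 2
                 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
            return 2 * R * math.asin(math.sqrt(a))

        MILE_M = 1609.34
        landmark = (40.7580, -73.9855)  # a hypothetical landmark
        marker = (40.7484, -73.9857)    # a stored location marker nearby
        # Serve the location-targeted ad if the marker falls within a mile.
        print(haversine_m(*landmark, *marker) <= MILE_M)  # True (~1.07 km)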

    So is there any way to stop Google from collecting your location history, other than using a non-Android phone with no Google apps? Well, yes, there is a way. It’s just seemingly designed to be super confusing and counterintuitive:


    To stop Google from saving these location markers, the company says, users can turn off another setting, one that does not specifically reference location information. Called “Web and App Activity” and enabled by default, that setting stores a variety of information from Google apps and websites to your Google account.

    When paused, it will prevent activity on any device from being saved to your account. But leaving “Web & App Activity” on and turning “Location History” off only prevents Google from adding your movements to the “timeline,” its visualization of your daily travels. It does not stop Google’s collection of other location markers.

    You can delete these location markers by hand, but it’s a painstaking process since you have to select them individually, unless you want to delete all of your stored activity.

    You can see the stored location markers on a page in your Google account at myactivity.google.com, although they’re typically scattered under several different headers, many of which are unrelated to location.

    To demonstrate how powerful these other markers can be, the AP created a visual map of the movements of Princeton postdoctoral researcher Gunes Acar, who carried an Android phone with Location history off, and shared a record of his Google account.

    The map includes Acar’s train commute on two trips to New York and visits to The High Line park, Chelsea Market, Hell’s Kitchen, Central Park and Harlem. To protect his privacy, The AP didn’t plot the most telling and frequent marker — his home address.
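
    To make the counterintuitive interplay of the two settings concrete, here is a toy model of the behavior the AP describes. The two setting names are real Google account controls, but the logic below is my reconstruction from the article, not Google’s actual code:

        # Toy model (reconstructed from the AP's description, not Google code).
        def stores_timeline(location_history_on: bool) -> bool:
            # "Location History" only governs the Timeline visualization.
            return location_history_on

        def stores_location_markers(web_and_app_activity_on: bool) -> bool:
            # Per the AP, location markers ride along with "Web & App
            # Activity", which is on by default and whose name never
            # mentions location.
            return web_and_app_activity_on

        # The surprising case: Location History off, Web & App Activity on.
        print(stores_timeline(False))          # False -- no Timeline entries
        print(stores_location_markers(True))   # True  -- markers still saved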

    Google of course counters that it’s been clear all along:


    Google says it is being perfectly clear.

    “There are a number of different ways that Google may use location to improve people’s experience, including: Location History, Web and App Activity, and through device-level Location Services,” a Google spokesperson said in a statement to the AP. “We provide clear descriptions of these tools, and robust controls so people can turn them on or off, and delete their histories at any time.”

    And while it’s basically trolling the public at this point for Google to act like its location data policies have been anything other than opaque and confusing, that trollish response and those opaque policies make one thing increasingly clear: Google has no intention of stopping this kind of data collection. If anything, we should expect it to increase, given the plans for more location-based services. Resistance is still futile. And not just because of Google, even if it’s one of the biggest offenders. It’s a group effort.

    Posted by Pterrafractyl | August 15, 2018, 2:48 pm
  8. Here’s one of those articles that’s surprising in one sense and completely predictable in another: Yuval Noah Harari is a futurist philosopher and author of a number of popular books about where humanity is heading. Harari appears to be a largely dystopian futurist, envisioning a future where democracy is seen as obsolete and a techno-elite ruling class runs companies with the technological capacity to essentially control the minds of the masses. Masses that will increasingly be seen as obsolete and useless. Harari even gave a recent TED Talk called “Why fascism is so tempting — and how your data could power it.” So how do Silicon Valley’s CEOs view Mr. Harari? They apparently can’t get enough of him:

    The New York Times

    Tech C.E.O.s Are in Love With Their Principal Doomsayer

    The futurist philosopher Yuval Noah Harari thinks Silicon Valley is an engine of dystopian ruin. So why do the digital elite adore him so?

    By Nellie Bowles
    Nov. 9, 2018

    The futurist philosopher Yuval Noah Harari worries about a lot.

    He worries that Silicon Valley is undermining democracy and ushering in a dystopian hellscape in which voting is obsolete.

    He worries that by creating powerful influence machines to control billions of minds, the big tech companies are destroying the idea of a sovereign individual with free will.

    He worries that because the technological revolution’s work requires so few laborers, Silicon Valley is creating a tiny ruling class and a teeming, furious “useless class.”

    But lately, Mr. Harari is anxious about something much more personal. If this is his harrowing warning, then why do Silicon Valley C.E.O.s love him so?

    “One possibility is that my message is not threatening to them, and so they embrace it?” a puzzled Mr. Harari said one afternoon in October. “For me, that’s more worrying. Maybe I’m missing something?”

    When Mr. Harari toured the Bay Area this fall to promote his latest book, the reception was incongruously joyful. Reed Hastings, the chief executive of Netflix, threw him a dinner party. The leaders of X, Alphabet’s secretive research division, invited Mr. Harari over. Bill Gates reviewed the book (“Fascinating” and “such a stimulating writer”) in The New York Times.

    “I’m interested in how Silicon Valley can be so infatuated with Yuval, which they are — it’s insane he’s so popular, they’re all inviting him to campus — yet what Yuval is saying undermines the premise of the advertising- and engagement-based model of their products,” said Tristan Harris, Google’s former in-house design ethicist and the co-founder of the Center for Humane Technology.

    Part of the reason might be that Silicon Valley, at a certain level, is not optimistic on the future of democracy. The more of a mess Washington becomes, the more interested the tech world is in creating something else, and it might not look like elected representation. Rank-and-file coders have long been wary of regulation and curious about alternative forms of government. A separatist streak runs through the place: Venture capitalists periodically call for California to secede or shatter, or for the creation of corporate nation-states. And this summer, Mark Zuckerberg, who has recommended Mr. Harari to his book club, acknowledged a fixation with the autocrat Caesar Augustus. “Basically,” Mr. Zuckerberg told The New Yorker, “through a really harsh approach, he established 200 years of world peace.”

    Mr. Harari, thinking about all this, puts it this way: “Utopia and dystopia depends on your values.”

    Mr. Harari, who has a Ph.D. from Oxford, is a 42-year-old Israeli philosopher and a history professor at Hebrew University of Jerusalem. The story of his current fame begins in 2011, when he published a book of notable ambition: to survey the whole of human existence. “Sapiens: A Brief History of Humankind,” first released in Hebrew, did not break new ground in terms of historical research. Nor did its premise — that humans are animals and our dominance is an accident — seem a likely commercial hit. But the casual tone and smooth way Mr. Harari tied together existing knowledge across fields made it a deeply pleasing read, even as the tome ended on the notion that the process of human evolution might be over. Translated into English in 2014, the book went on to sell more than eight million copies and made Mr. Harari a celebrity intellectual.

    He followed up with “Homo Deus: A Brief History of Tomorrow,” which outlined his vision of what comes after human evolution. In it, he describes Dataism, a new faith based around the power of algorithms. Mr. Harari’s future is one in which big data is worshiped, artificial intelligence surpasses human intelligence, and some humans develop Godlike abilities.

    Now, he has written a book about the present and how it could lead to that future: “21 Lessons for the 21st Century.” It is meant to be read as a series of warnings. His recent TED Talk was called “Why fascism is so tempting — and how your data could power it.”

    His prophecies might have made him a Cassandra in Silicon Valley, or at the very least an unwelcome presence. Instead, he has had to reconcile himself to the locals’ strange delight. “If you make people start thinking far more deeply and seriously about these issues,” he told me, sounding weary, “some of the things they will think about might not be what you want them to think about.”

    ‘Brave New World’ as Aspirational Reading

    Mr. Harari agreed to let me tag along for a few days on his travels through the Valley, and one afternoon in September, I waited for him outside X’s offices, in Mountain View, while he spoke to the Alphabet employees inside. After a while, he emerged: a shy, thin, bespectacled man with a dusting of dark hair. Mr. Harari has a sort of owlish demeanor, in that he looks wise and also does not move his body very much, even while glancing to the side. His face is not particularly expressive, with the exception of one rogue eyebrow. When you catch his eye, there is a wary look — like he wants to know if you, too, understand exactly how bad the world is about to get.

    At the Alphabet talk, Mr. Harari had been accompanied by his publisher. They said that the younger employees had expressed concern about whether their work was contributing to a less free society, while the executives generally thought their impact was positive.

    Some workers had tried to predict how well humans would adapt to large technological change based on how they have responded to small shifts, like a new version of Gmail. Mr. Harari told them to think more starkly: If there isn’t a major policy intervention, most humans probably will not adapt at all.

    It made him sad, he told me, to see people build things that destroy their own societies, but he works every day to maintain an academic distance and remind himself that humans are just animals. “Part of it is really coming from seeing humans as apes, that this is how they behave,” he said, adding, “They’re chimpanzees. They’re sapiens. This is what they do.”

    He was slouching a little. Socializing exhausts him.

    As we boarded the black gull-wing Tesla Mr. Harari had rented for his visit, he brought up Aldous Huxley. Generations have been horrified by his novel “Brave New World,” which depicts a regime of emotion control and painless consumption. Readers who encounter the book today, Mr. Harari said, often think it sounds great. “Everything is so nice, and in that way it is an intellectually disturbing book because you’re really hard-pressed to explain what’s wrong with it,” he said. “And you do get today a vision coming out of some people in Silicon Valley which goes in that direction.”

    An Alphabet media relations manager later reached out to Mr. Harari’s team to tell him to tell me that the visit to X was not allowed to be part of this story. The request confused and then amused Mr. Harari. It is interesting, he said, that unlike politicians, tech companies do not need a free press, since they already control the means of message distribution.

    He said he had resigned himself to tech executives’ global reign, pointing out how much worse the politicians are. “I’ve met a number of these high-tech giants, and generally they’re good people,” he said. “They’re not Attila the Hun. In the lottery of human leaders, you could get far worse.”

    Some of his tech fans, he thinks, come to him out of anxiety. “Some may be very frightened of the impact of what they are doing,” Mr. Harari said.

    Still, their enthusiastic embrace of his work makes him uncomfortable. “It’s just a rule of thumb in history that if you are so much coddled by the elites it must mean that you don’t want to frighten them,” Mr. Harari said. “They can absorb you. You can become the intellectual entertainment.”

    Dinner, With a Side of Medically Engineered Immortality

    C.E.O. testimonials to Mr. Harari’s acumen are indeed not hard to come by. “I’m drawn to Yuval for his clarity of thought,” Jack Dorsey, the head of Twitter and Square, wrote in an email, going on to praise a particular chapter on meditation.

    And Mr. Hastings wrote: “Yuval’s the anti-Silicon Valley persona — he doesn’t carry a phone and he spends a lot of time contemplating while off the grid. We see in him who we wish we were.” He added, “His thinking on A.I. and biotech in his new book pushes our understanding of the dramas to unfold.”

    At the dinner Mr. Hastings co-hosted, academics and industry leaders debated the dangers of data collection, and to what degree longevity therapies will extend the human life span. (Mr. Harari has written that the ruling class will vastly outlive the useless.) “That evening was small, but could be magnified to symbolize his impact in the heart of Silicon Valley,” said Dr. Fei-Fei Li, an artificial intelligence expert who pushed internally at Google to keep secret the company’s efforts to process military drone footage for the Pentagon. “His book has that ability to bring these people together at a table, and that is his contribution.”

    A few nights earlier, Mr. Harari spoke to a sold-out theater of 3,500 in San Francisco. One ticket-holder walking in, an older man, told me it was brave and honest for Mr. Harari to use the term “useless class.”

    The author was paired for discussion with the prolific intellectual Sam Harris, who strode onstage in a gray suit and well-starched white button-down. Mr. Harari was less at ease, in a loose suit that crumpled around him, his hands clasped in his lap as he sat deep in his chair. But as he spoke about meditation — Mr. Harari spends two hours each day and two months each year in silence — he became commanding. In a region where self-optimization is paramount and meditation is a competitive sport, Mr. Harari’s devotion confers hero status.

    He told the audience that free will is an illusion, and that human rights are just a story we tell ourselves. Political parties, he said, might not make sense anymore. He went on to argue that the liberal world order has relied on fictions like “the customer is always right” and “follow your heart,” and that these ideas no longer work in the age of artificial intelligence, when hearts can be manipulated at scale.

    Everyone in Silicon Valley is focused on building the future, Mr. Harari continued, while most of the world’s people are not even needed enough to be exploited. “Now you increasingly feel that there are all these elites that just don’t need me,” he said. “And it’s much worse to be irrelevant than to be exploited.”

    The useless class he describes is uniquely vulnerable. “If a century ago you mounted a revolution against exploitation, you knew that when bad comes to worse, they can’t shoot all of us because they need us,” he said, citing army service and factory work.

    Now it is becoming less clear why the ruling elite would not just kill the new useless class. “You’re totally expendable,” he told the audience.

    This, Mr. Harari told me later, is why Silicon Valley is so excited about the concept of universal basic income, or stipends paid to people regardless of whether they work. The message is: “We don’t need you. But we are nice, so we’ll take care of you.”

    On Sept. 14, he published an essay in The Guardian assailing another old trope — that “the voter knows best.”

    “If humans are hackable animals, and if our choices and opinions don’t reflect our free will, what should the point of politics be?” he wrote. “How do you live when you realize … that your heart might be a government agent, that your amygdala might be working for Putin, and that the next thought that emerges in your mind might well be the result of some algorithm that knows you better than you know yourself? These are the most interesting questions humanity now faces.”

    ‘O.K., So Maybe Humankind Is Going to Disappear’

    Mr. Harari and his husband, Itzik Yahav, who is also his manager, rented a small house in Mountain View for their visit, and one morning I found them there making oatmeal. Mr. Harari observed that as his celebrity in Silicon Valley has risen, tech fans have focused on his lifestyle.

    “Silicon Valley was already kind of a hotbed for meditation and yoga and all these things,” he said. “And one of the things that made me kind of more popular and palatable is that I also have this bedrock.” He was wearing an old sweatshirt and denim track pants. His voice was quiet, but he gestured widely, waving his hands, hitting a jar of spatulas.

    Mr. Harari grew up in Kiryat Ata, near Haifa, and his father worked in the arms industry. His mother, who worked in office administration, now volunteers for her son handling his mail; he gets about 1,000 messages a week. Mr. Yahav’s mother is their accountant.

    Most days, Mr. Harari doesn’t use an alarm clock, and wakes up between 6:30 and 8:30 a.m., then meditates and has a cup of tea. He works until 4 or 5 p.m., then does another hour of meditation, followed by an hourlong walk, maybe a swim, and then TV with Mr. Yahav.

    The two met 16 years ago through the dating site Check Me Out. “We are not big believers in falling in love,” Mr. Harari said. “It was more a rational choice.”

    “We met each other and we thought, ‘O.K., we’re — O.K., let’s move in with each other,’” Mr. Yahav said.

    Mr. Yahav became Mr. Harari’s manager. During the period when English-language publishers were cool on the commercial viability of “Sapiens” — thinking it too serious for the average reader and not serious enough for the scholars — Mr. Yahav persisted, eventually landing the Jerusalem-based agent Deborah Harris. One day when Mr. Harari was away meditating, Mr. Yahav and Ms. Harris finally sold it at auction to Random House in London.

    Today, they have a team of eight based in Tel Aviv working on Mr. Harari’s projects. The director Ridley Scott and documentarian Asif Kapadia are adapting “Sapiens” into a TV show, and Mr. Harari is working on children’s books to reach a broader audience.

    ———-

    “Tech C.E.O.s Are in Love With Their Principal Doomsayer” by Nellie Bowles; The New York Times; 11/09/2018

    “Part of the reason might be that Silicon Valley, at a certain level, is not optimistic on the future of democracy. The more of a mess Washington becomes, the more interested the tech world is in creating something else, and it might not look like elected representation. Rank-and-file coders have long been wary of regulation and curious about alternative forms of government. A separatist streak runs through the place: Venture capitalists periodically call for California to secede or shatter, or for the creation of corporate nation-states. And this summer, Mark Zuckerberg, who has recommended Mr. Harari to his book club, acknowledged a fixation with the autocrat Caesar Augustus. “Basically,” Mr. Zuckerberg told The New Yorker, “through a really harsh approach, he established 200 years of world peace.””

    A guy who specializes in worrying about techno elites destroying democracy and turning the masses into a ‘useless’ class is extra worried about the fact that those techno elites appear to love him. Hmmm…might that have something to do with the fact that the dystopian future he’s predicting assumes the tech elites completely dominate humanity? And, who knows, he’s probably giving them ideas for how to accomplish that domination. So of course they love him:


    Now, he has written a book about the present and how it could lead to that future: “21 Lessons for the 21st Century.” It is meant to be read as a series of warnings. His recent TED Talk was called “Why fascism is so tempting — and how your data could power it.”

    His prophecies might have made him a Cassandra in Silicon Valley, or at the very least an unwelcome presence. Instead, he has had to reconcile himself to the locals’ strange delight. “If you make people start thinking far more deeply and seriously about these issues,” he told me, sounding weary, “some of the things they will think about might not be what you want them to think about.”

    Plus, Harari appears to view rule by tech executives as preferable to rule by politicians, because he views the executives as ‘generally good people.’ So, again, of course the tech elite love the guy. He’s predicting they’ll dominate the future, and he doesn’t see that as all that bad:


    ‘Brave New World’ as Aspirational Reading

    Mr. Harari agreed to let me tag along for a few days on his travels through the Valley, and one afternoon in September, I waited for him outside X’s offices, in Mountain View, while he spoke to the Alphabet employees inside. After a while, he emerged: a shy, thin, bespectacled man with a dusting of dark hair. Mr. Harari has a sort of owlish demeanor, in that he looks wise and also does not move his body very much, even while glancing to the side. His face is not particularly expressive, with the exception of one rogue eyebrow. When you catch his eye, there is a wary look — like he wants to know if you, too, understand exactly how bad the world is about to get.

    At the Alphabet talk, Mr. Harari had been accompanied by his publisher. They said that the younger employees had expressed concern about whether their work was contributing to a less free society, while the executives generally thought their impact was positive.

    An Alphabet media relations manager later reached out to Mr. Harari’s team to tell him to tell me that the visit to X was not allowed to be part of this story. The request confused and then amused Mr. Harari. It is interesting, he said, that unlike politicians, tech companies do not need a free press, since they already control the means of message distribution.

    He said he had resigned himself to tech executives’ global reign, pointing out how much worse the politicians are. “I’ve met a number of these high-tech giants, and generally they’re good people,” he said. “They’re not Attila the Hun. In the lottery of human leaders, you could get far worse.”

    Some of his tech fans, he thinks, come to him out of anxiety. “Some may be very frightened of the impact of what they are doing,” Mr. Harari said.

    Still, their enthusiastic embrace of his work makes him uncomfortable. “It’s just a rule of thumb in history that if you are so much coddled by the elites it must mean that you don’t want to frighten them,” Mr. Harari said. “They can absorb you. You can become the intellectual entertainment.”

    He’s also predicting that these tech executives will use longevity technology to ‘vastly outlive the useless,’ which clearly implies he’s predicting that longevity technology gets developed but not shared with ‘the useless’ (the rest of us):


    Dinner, With a Side of Medically Engineered Immortality

    C.E.O. testimonials to Mr. Harari’s acumen are indeed not hard to come by. “I’m drawn to Yuval for his clarity of thought,” Jack Dorsey, the head of Twitter and Square, wrote in an email, going on to praise a particular chapter on meditation.

    And Mr. Hastings wrote: “Yuval’s the anti-Silicon Valley persona — he doesn’t carry a phone and he spends a lot of time contemplating while off the grid. We see in him who we wish we were.” He added, “His thinking on A.I. and biotech in his new book pushes our understanding of the dramas to unfold.”

    At the dinner Mr. Hastings co-hosted, academics and industry leaders debated the dangers of data collection, and to what degree longevity therapies will extend the human life span. (Mr. Harari has written that the ruling class will vastly outlive the useless.) “That evening was small, but could be magnified to symbolize his impact in the heart of Silicon Valley,” said Dr. Fei-Fei Li, an artificial intelligence expert who pushed internally at Google to keep secret the company’s efforts to process military drone footage for the Pentagon. “His book has that ability to bring these people together at a table, and that is his contribution.”

    Harari has even gone on to question whether humans have any free will at all, and to explore the implications of the possibility that technology will allow the tech giants to essentially control what people think, effectively bio-hacking the human mind. One implication he sees in this hijacking of human will is that political parties might not make sense anymore and that human rights are just a story we tell ourselves. So, again, it’s not exactly hard to see why the tech elites love the guy. He’s basically making the case for why we should just accept this dystopian future:


    A few nights earlier, Mr. Harari spoke to a sold-out theater of 3,500 in San Francisco. One ticket-holder walking in, an older man, told me it was brave and honest for Mr. Harari to use the term “useless class.”

    The author was paired for discussion with the prolific intellectual Sam Harris, who strode onstage in a gray suit and well-starched white button-down. Mr. Harari was less at ease, in a loose suit that crumpled around him, his hands clasped in his lap as he sat deep in his chair. But as he spoke about meditation — Mr. Harari spends two hours each day and two months each year in silence — he became commanding. In a region where self-optimization is paramount and meditation is a competitive sport, Mr. Harari’s devotion confers hero status.

    He told the audience that free will is an illusion, and that human rights are just a story we tell ourselves. Political parties, he said, might not make sense anymore. He went on to argue that the liberal world order has relied on fictions like “the customer is always right” and “follow your heart,” and that these ideas no longer work in the age of artificial intelligence, when hearts can be manipulated at scale.

    Everyone in Silicon Valley is focused on building the future, Mr. Harari continued, while most of the world’s people are not even needed enough to be exploited. “Now you increasingly feel that there are all these elites that just don’t need me,” he said. “And it’s much worse to be irrelevant than to be exploited.”

    The useless class he describes is uniquely vulnerable. “If a century ago you mounted a revolution against exploitation, you knew that when bad comes to worse, they can’t shoot all of us because they need us,” he said, citing army service and factory work.

    Now it is becoming less clear why the ruling elite would not just kill the new useless class. “You’re totally expendable,” he told the audience.

    This, Mr. Harari told me later, is why Silicon Valley is so excited about the concept of universal basic income, or stipends paid to people regardless of whether they work. The message is: “We don’t need you. But we are nice, so we’ll take care of you.”

    On Sept. 14, he published an essay in The Guardian assailing another old trope — that “the voter knows best.”

    “If humans are hackable animals, and if our choices and opinions don’t reflect our free will, what should the point of politics be?” he wrote. “How do you live when you realize … that your heart might be a government agent, that your amygdala might be working for Putin, and that the next thought that emerges in your mind might well be the result of some algorithm that knows you better than you know yourself? These are the most interesting questions humanity now faces.”

    So, as we can see, it’s abundantly clear why Mr. Harari is suddenly the go-to guru for Silicon Valley’s elites: he’s depicting a dystopian future that’s utopian if you happen to be an authoritarian tech elite who wants to dominate it. And he’s portraying this future as something we should simply accept. Sure, we in the useless class should be plenty anxious about becoming useless, but don’t bother trying to organize against this future, especially since democratic politics is becoming pointless in an age when mass opinion can be hacked and manipulated. Just accept that this is the future and worry about adapting to it. That, more or less, appears to be Harari’s dystopian message. Which is as much a message about a dystopian present as about a dystopian future: a present in which humanity is already so helpless that nothing can be done to prevent what’s coming.

    And that all points toward one obvious area of futurism Harari could engage in that might actually turn off some of his Silicon Valley fan base: exploring the phase of the future after the fascist techno elites have seized complete control of humanity and start warring among themselves. Don’t forget that one feature of democracy is that it creates a unifying force for all the brutal wannabe fascist oligarchs: they all have a common enemy, the people. But what happens when they’ve truly won and subjugated humanity, or exterminated most of it? Won’t they proceed to go to war with each other at that point? If you’re a brutal cutthroat fascist oligarch of the future sharing power with other brutal cutthroat fascist oligarchs, are you really going to trust that they aren’t plotting to take you down and take over your neo-feudal personal empire? Does anyone doubt that if Peter Thiel managed to obtain longevity technology and cloning technology there wouldn’t someday be a clone army of Nazi Peter Thiels warring against rival fascist elites?

    These fascist overlords are also presumably going to be highly reliant on private robot armies. What kind of future should these fascist tech elites expect when they are all sporting rival private robot armies? That might sound fun at first, but do they really want to live in a world where their rivals also have private robot armies? Or advanced biowarfare arsenals? And what about AI going to war with these fascist elites? Does Mr. Harari have any opinions on the probability of Skynet emerging and the robots rebelling against their fascist human overlords? Perhaps if he explored how dystopian this tech-elite-dominated future could be for the tech elites themselves, and further explored the inherent dangers that a high-tech society run by and for competing authoritarian personalities presents to those same personalities, maybe they wouldn’t love him so much.

    Posted by Pterrafractyl | November 20, 2018, 4:21 pm
