Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

For The Record  

FTR #996 Civilization’s Twilight: Update on Technocratic Fascism

Dave Emory’s entire lifetime of work is available on a flash drive that can be obtained HERE. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by the fall of 2017. The new drive is available for a tax-deductible contribution of $65.00 or more.

WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE.

You can subscribe to e-mail alerts from Spitfirelist.com HERE.

You can subscribe to RSS feed from Spitfirelist.com HERE.

You can subscribe to the comments made on programs and posts–an excellent source of information in, and of, itself HERE.

This broadcast was recorded in one, 60-minute segment.

Introduction: Updating our ongoing analysis of what Mr. Emory calls “technocratic fascism,” we examine how existing technologies are neutralizing and/or rendering obsolete foundational elements of our civilization and democratic governmental systems.

For purposes of refreshing the line of argument presented here, we reference a vitally important article by David Golumbia. ” . . . . Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (‘civic,’ ‘ethical,’ ‘white’ and ‘black’ hat alike), hacktivists, WikiLeaks fans [and Julian Assange et al–D. E.], Anonymous ‘members,’ even Edward Snowden himself walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands. . . . [Tor co-creator] Dingledine asserts that a small group of software developers can assign to themselves that role, and that members of democratic polities have no choice but to accept them having that role. . . .”

Beginning with a chilling opinion piece in the New York Times, we note that technological development threatens to super-charge the Big Lies that drive our world. As anyone who saw the Star Wars film “Rogue One” knows, the technology required to create nearly life-like computer-generated videos of a real person is already a reality. Once the province of movie studios and other firms with millions to spend, the technology is now available for free download.

” . . . . In 2016 Gareth Edwards, the director of the Star Wars film ‘Rogue One,’ was able to create a scene featuring a young Princess Leia by manipulating images of Carrie Fisher as she looked in 1977. Mr. Edwards had the best hardware and software a $200 million Hollywood budget could buy. Less than two years later, images of similar quality can be created with software available for free download on Reddit. That was how a faked video supposedly of the actress Emma Watson in a shower with another woman ended up on the website Celeb Jihad. . . .”

The technology has already rendered obsolete selective editing such as that performed by James O’Keefe: ” . . . . as the novelist William Gibson once said, ‘The street finds its own uses for things.’ So do rogue political actors. The implications for democracy are eye-opening. The conservative political activist James O’Keefe has created a cottage industry manipulating political perceptions by editing footage in misleading ways. In 2018, low-tech editing like Mr. O’Keefe’s is already an anachronism: Imagine what even less scrupulous activists could do with the power to create ‘video’ framing real people for things they’ve never actually done. One harrowing potential eventuality: Fake video and audio may become so convincing that it can’t be distinguished from real recordings, rendering audio and video evidence inadmissible in court. . . .”

After highlighting a story about AI-generated “deepfake” pornography with people’s faces superimposed on others’ bodies in pornographic layouts, we note how robots have altered our political and commercial landscapes, through cyber technology: ” . . . . Robots are getting better, every day, at impersonating humans. When directed by opportunists, malefactors and sometimes even nation-states, they pose a particular threat to democratic societies, which are premised on being open to the people. Robots posing as people have become a menace. . . . In coming years, campaign finance limits will be (and maybe already are) evaded by robot armies posing as ‘small’ donors. And actual voting is another obvious target — perhaps the ultimate target. . . .”

Before the actual replacement of manual labor by robots, devices to technocratically “improve”–read: “coercively engineer”–workers have been patented by Amazon and used on workers in some of its facilities. ” . . . . What if your employer made you wear a wristband that tracked your every move, and that even nudged you via vibrations when it judged that you were doing something wrong? What if your supervisor could identify every time you paused to scratch or fidget, and for how long you took a bathroom break? What may sound like dystopian fiction could become a reality for Amazon warehouse workers around the world. The company has won two patents for such a wristband. . . .”

For some U.K. Amazon warehouse workers, the future is now: ” . . . . Max Crawford, a former Amazon warehouse worker in Britain, said in a phone interview, ‘After a year working on the floor, I felt like I had become a version of the robots I was working with.’ He described having to process hundreds of items in an hour — a pace so extreme that one day, he said, he fell over from dizziness. ‘There was no time to go to the loo,’ he said, using the British slang for toilet. ‘You had to process the items in seconds and then move on. If you didn’t meet targets, you were fired.’

“He worked back and forth at two Amazon warehouses for more than two years and then quit in 2015 because of health concerns, he said: ‘I got burned out.’ Mr. Crawford agreed that the wristbands might save some time and labor, but he said the tracking was ‘stalkerish’ and feared that workers might be unfairly scrutinized if their hands were found to be ‘in the wrong place at the wrong time.’ ‘They want to turn people into machines,’ he said. ‘The robotic technology isn’t up to scratch yet, so until it is, they will use human robots.’ . . . .”

Some tech workers, well placed at R & D pacesetters and giants such as Facebook and Google, have done an about-face on the impact of their earlier efforts and are now struggling against the misuse of the technologies they helped to launch:

” . . . . A group of Silicon Valley technologists who were early employees at Facebook and Google, alarmed over the ill effects of social networks and smartphones, are banding together to challenge the companies they helped build. . . . ‘The largest supercomputers in the world are inside of two companies — Google and Facebook — and where are we pointing them?’ Mr. [Tristan] Harris said. ‘We’re pointing them at people’s brains, at children.’ . . . . Mr. [Roger McNamee] said he had joined the Center for Humane Technology because he was horrified by what he had helped enable as an early Facebook investor. ‘Facebook appeals to your lizard brain — primarily fear and anger,’ he said. ‘And with smartphones, they’ve got you for every waking moment.’ . . . .”

Transitioning to our next program–updating AI (artificial intelligence) technology as it applies to technocratic fascism–we note that AI machines are being designed to develop other AIs–“The Rise of the Machine.” ” . . . . Jeff Dean, one of Google’s leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data. AutoML, in turn, is a machine learning algorithm that learns to build other machine-learning algorithms. With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry. . . .”

Pro and Con on the subject of Artificial Intelligence

1. There was a chilling recent opinion piece in the New York Times warning that technological development threatens to super-charge the Big Lies that drive our world. As anyone who saw the Star Wars film “Rogue One” knows well, the technology required to create nearly life-like computer-generated videos of a real person is already a reality. Once the province of movie studios and other firms with millions to spend, the technology is now available for free download.

” . . . . In 2016 Gareth Edwards, the director of the Star Wars film ‘Rogue One,’ was able to create a scene featuring a young Princess Leia by manipulating images of Carrie Fisher as she looked in 1977. Mr. Edwards had the best hardware and software a $200 million Hollywood budget could buy. Less than two years later, images of similar quality can be created with software available for free download on Reddit. That was how a faked video supposedly of the actress Emma Watson in a shower with another woman ended up on the website Celeb Jihad. . . .”

The technology has already rendered obsolete selective editing such as that performed by James O’Keefe: ” . . . . as the novelist William Gibson once said, ‘The street finds its own uses for things.’ So do rogue political actors. The implications for democracy are eye-opening. The conservative political activist James O’Keefe has created a cottage industry manipulating political perceptions by editing footage in misleading ways. In 2018, low-tech editing like Mr. O’Keefe’s is already an anachronism: Imagine what even less scrupulous activists could do with the power to create ‘video’ framing real people for things they’ve never actually done. One harrowing potential eventuality: Fake video and audio may become so convincing that it can’t be distinguished from real recordings, rendering audio and video evidence inadmissible in court. . . .”

“Our Hackable Political Future” by Henry J. Farrell and Rick Perlstein; The New York Times; 02/04/2018

Imagine it is the spring of 2019. A bottom-feeding website, perhaps tied to Russia, “surfaces” video of a sex scene starring an 18-year-old Kirsten Gillibrand. It is soon debunked as a fake, the product of a user-friendly video application that employs generative adversarial network technology to convincingly swap out one face for another.
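
For readers who want a concrete sense of the mechanics behind that phrase, a generative adversarial network pits two models against each other: a generator that fabricates images and a discriminator that tries to distinguish the fakes from real photographs. A minimal sketch of the training loop, assuming PyTorch, with tiny fully-connected networks standing in as illustrative placeholders for the convolutional models real systems use:

```python
import torch
import torch.nn as nn

# Toy stand-ins: any convolutional generator/discriminator would slot in here.
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):            # real_images: (batch, 784) tensor
    batch = real_images.size(0)
    noise = torch.randn(batch, 100)

    # 1. Train the discriminator to score real images high and fakes low.
    fakes = G(noise).detach()           # detach: don't update G on this pass
    d_loss = bce(D(real_images), torch.ones(batch, 1)) + \
             bce(D(fakes), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator to fool the discriminator.
    g_loss = bce(D(G(noise)), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The adversarial pressure is the point: every improvement in the discriminator forces the generator to produce more convincing fakes, which is why this class of technique no longer requires a studio budget.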

It is the summer of 2019, and the story, predictably, has stuck around — part talk-show joke, part right-wing talking point. “It’s news,” political journalists say in their own defense. “People are talking about it. How can we not?”

Then it is fall. The junior senator from New York State announces her campaign for the presidency. At a diner in New Hampshire, one “low information” voter asks another: “Kirsten What’s-her-name? She’s running for president? Didn’t she have something to do with pornography?”

Welcome to the shape of things to come. In 2016 Gareth Edwards, the director of the Star Wars film “Rogue One,” was able to create a scene featuring a young Princess Leia by manipulating images of Carrie Fisher as she looked in 1977. Mr. Edwards had the best hardware and software a $200 million Hollywood budget could buy. Less than two years later, images of similar quality can be created with software available for free download on Reddit. That was how a faked video supposedly of the actress Emma Watson in a shower with another woman ended up on the website Celeb Jihad.

Programs like these have many legitimate applications. They can help computer-security experts probe for weaknesses in their defenses and help self-driving cars learn how to navigate unusual weather conditions. But as the novelist William Gibson once said, “The street finds its own uses for things.” So do rogue political actors. The implications for democracy are eye-opening.

The conservative political activist James O’Keefe has created a cottage industry manipulating political perceptions by editing footage in misleading ways. In 2018, low-tech editing like Mr. O’Keefe’s is already an anachronism: Imagine what even less scrupulous activists could do with the power to create “video” framing real people for things they’ve never actually done. One harrowing potential eventuality: Fake video and audio may become so convincing that it can’t be distinguished from real recordings, rendering audio and video evidence inadmissible in court.

A program called Face2Face, developed at Stanford, films one person speaking, then manipulates that person’s image to resemble someone else’s. Throw in voice manipulation technology, and you can literally make anyone say anything — or at least seem to.

The technology isn’t quite there; Princess Leia was a little wooden, if you looked carefully. But it’s closer than you might think. And even when fake video isn’t perfect, it can convince people who want to be convinced, especially when it reinforces offensive gender or racial stereotypes.

In 2007, Barack Obama’s political opponents insisted that footage existed of Michelle Obama ranting against “whitey.” In the future, they may not have to worry about whether it actually existed. If someone called their bluff, they may simply be able to invent it, using data from stock photos and pre-existing footage.

The next step would be one we are already familiar with: the exploitation of the algorithms used by social media sites like Twitter and Facebook to spread stories virally to those most inclined to show interest in them, even if those stories are fake.
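
The “algorithms” in question are engagement-ranking systems, and their exploitability is easy to see even in caricature. The following is a hypothetical sketch only–the real platform rankers are proprietary and vastly more elaborate, and every name and weight below is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Story:
    topic: str
    shares: int
    age_hours: float
    outrage: float              # predicted emotional response, 0..1

def rank_feed(stories, interest_in):
    """Order stories by predicted engagement for one user.

    `interest_in` maps a topic to the user's affinity for it (0..1).
    Nothing here checks whether a story is true: a fabricated story
    tuned to the user's interests outranks a sober correction.
    """
    def score(s):
        virality = s.shares / max(s.age_hours, 1.0)
        return interest_in.get(s.topic, 0.1) * (1.0 + virality) * (1.0 + s.outrage)

    return sorted(stories, key=score, reverse=True)
```

A scorer like this rewards affinity, velocity and outrage, not accuracy–which is precisely the opening for fake stories that the authors describe.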

It might be impossible to stop the advance of this kind of technology. But the relevant algorithms here aren’t only the ones that run on computer hardware. They are also the ones that undergird our too easily hacked media system, where garbage acquires the perfumed scent of legitimacy with all too much ease. Editors, journalists and news producers can play a role here — for good or for bad.

Outlets like Fox News spread stories about the murder of Democratic staff members and F.B.I. conspiracies to frame the president. Traditional news organizations, fearing that they might be left behind in the new attention economy, struggle to maximize “engagement with content.”

This gives them a built-in incentive to spread informational viruses that enfeeble the very democratic institutions that allow a free media to thrive. Cable news shows consider it their professional duty to provide “balance” by giving partisan talking heads free rein to spout nonsense — or amplify the nonsense of our current president.

It already feels as though we are living in an alternative science-fiction universe where no one agrees on what is true. Just think how much worse it will be when fake news becomes fake video. Democracy assumes that its citizens share the same reality. We’re about to find out whether democracy can be preserved when this assumption no longer holds.

2. Both Twitter and PornHub, the online pornography giant, are already taking action to remove numerous “Deepfake” videos of celebrities being super-imposed onto porn actors in response to the flood of such videos that are already being generated.

“PornHub, Twitter Ban ‘Deepfake’ AI-Modified Porn” by Angela Moscaritolo; PC Magazine; 02/07/2018.

It might be kind of comical to see Nicolas Cage’s face on the body of a woman, but expect to see less of this type of content floating around on PornHub and Twitter in the future.

As Motherboard first reported, both sites are taking action against artificial intelligence-powered pornography, known as “deepfakes.”

Deepfakes, for the uninitiated, are porn videos created by using a machine learning algorithm to match someone’s face to another person’s body. Loads of celebrities have had their faces used in porn scenes without their consent, and the results are almost flawless. Check out the SFW example below for a better idea of what we’re talking about.
[see chillingly realistic video of Nicolas Cage’s head on a woman’s body]
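
The face-swap technique behind these videos is commonly described as a pair of autoencoders that share a single encoder: each decoder learns to reconstruct one person's face from the shared encoding, and the swap consists of routing person A's encoding through person B's decoder. A minimal sketch under that assumption, in PyTorch, with toy fully-connected layers in place of the convolutional ones real tools use (all names illustrative):

```python
import torch
import torch.nn as nn

LATENT = 128
encoder = nn.Sequential(nn.Flatten(), nn.Linear(64*64*3, 512), nn.ReLU(),
                        nn.Linear(512, LATENT))
decoder_a = nn.Sequential(nn.Linear(LATENT, 512), nn.ReLU(),
                          nn.Linear(512, 64*64*3), nn.Sigmoid())
decoder_b = nn.Sequential(nn.Linear(LATENT, 512), nn.ReLU(),
                          nn.Linear(512, 64*64*3), nn.Sigmoid())

mse = nn.MSELoss()
opt = torch.optim.Adam(list(encoder.parameters()) +
                       list(decoder_a.parameters()) +
                       list(decoder_b.parameters()), lr=1e-4)

def train_step(faces_a, faces_b):       # tensors of shape (batch, 3, 64, 64)
    # Each decoder rebuilds its own person's face from the shared encoding,
    # so the encoding ends up capturing pose and expression, not identity.
    loss = mse(decoder_a(encoder(faces_a)), faces_a.flatten(1)) + \
           mse(decoder_b(encoder(faces_b)), faces_b.flatten(1))
    opt.zero_grad(); loss.backward(); opt.step()

def swap(face_a):
    # The "deepfake" step: person A's pose, rendered with person B's face.
    return decoder_b(encoder(face_a))
```
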
In a statement to PCMag on Wednesday, PornHub Vice President Corey Price said the company in 2015 introduced a submission form, which lets users easily flag nonconsensual content like revenge porn for removal. People have also started using that tool to flag deepfakes, he said.

The company still has a lot of cleaning up to do. Motherboard reported there are still tons of deepfakes on PornHub.

“I was able to easily find dozens of deepfakes posted in the last few days, many under the search term ‘deepfakes’ or with deepfakes and the name of celebrities in the title of the video,” Motherboard’s Samantha Cole wrote.

Over on Twitter, meanwhile, users can now be suspended for posting deepfakes and other nonconsensual porn.

“We will suspend any account we identify as the original poster of intimate media that has been produced or distributed without the subject’s consent,” a Twitter spokesperson told Motherboard. “We will also suspend any account dedicated to posting this type of content.”

The site reported that Discord and Gfycat take a similar stance on deepfakes. For now, these types of videos appear to be primarily circulating via Reddit, where the deepfake community currently boasts around 90,000 subscribers.

3. No “ifs,” “ands,” or “bots!”  ” . . . . Robots are getting better, every day, at impersonating humans. When directed by opportunists, malefactors and sometimes even nation-states, they pose a particular threat to democratic societies, which are premised on being open to the people. Robots posing as people have become a menace. . . . In coming years, campaign finance limits will be (and maybe already are) evaded by robot armies posing as ‘small’ donors. And actual voting is another obvious target — perhaps the ultimate target. . . .”

“Please Prove You’re Not a Robot” by Tim Wu; The New York Times; 7/16/2017; p. 8 (Review Section).

 When science fiction writers first imagined robot invasions, the idea was that bots would become smart and powerful enough to take over the world by force, whether on their own or as directed by some evildoer. In reality, something only slightly less scary is happening.

Robots are getting better, every day, at impersonating humans. When directed by opportunists, malefactors and sometimes even nation-states, they pose a particular threat to democratic societies, which are premised on being open to the people.

Robots posing as people have become a menace. For popular Broadway shows (need we say “Hamilton”?), it is actually bots, not humans, who do much and maybe most of the ticket buying. Shows sell out immediately, and the middlemen (quite literally, evil robot masters) reap millions in ill-gotten gains.

Philip Howard, who runs the Computational Propaganda Research Project at Oxford, studied the deployment of propaganda bots during voting on Brexit, and the recent American and French presidential elections. Twitter is particularly distorted by its millions of robot accounts; during the French election, it was principally Twitter robots who were trying to make #MacronLeaks into a scandal. Facebook has admitted it was essentially hacked during the American election in November. In Michigan, Mr. Howard notes, “junk news was shared just as widely as professional news in the days leading up to the election.”

Robots are also being used to attack the democratic features of the administrative state. This spring, the Federal Communications Commission put its proposed revocation of net neutrality up for public comment. In previous years such proceedings attracted millions of (human) commentators. This time, someone with an agenda but no actual public support unleashed robots who impersonated (via stolen identities) hundreds of thousands of people, flooding the system with fake comments against federal net neutrality rules.

To be sure, today’s impersonation-bots are different from the robots imagined in science fiction: They aren’t sentient, don’t carry weapons and don’t have physical bodies. Instead, fake humans just have whatever is necessary to make them seem human enough to “pass”: a name, perhaps a virtual appearance, a credit-card number and, if necessary, a profession, birthday and home address. They are brought to life by programs or scripts that give one person the power to imitate thousands.

The problem is almost certain to get worse, spreading to even more areas of life as bots are trained to become better at mimicking humans. Given the degree to which product reviews have been swamped by robots (which tend to hand out five stars with abandon), commercial sabotage in the form of negative bot reviews is not hard to predict.
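
Detection typically starts with crude behavioral statistics of exactly the kind Wu hints at. A hypothetical flagging heuristic, with thresholds invented purely for illustration–real systems combine many weak signals like these with network- and content-level features:

```python
from datetime import timedelta

def looks_like_review_bot(reviews):
    """Flag an account whose review behavior is implausibly uniform.

    `reviews` is a list of (timestamp, star_rating) pairs for one account,
    with datetime timestamps. Thresholds are illustrative only.
    """
    if len(reviews) < 10:
        return False                     # too little history to judge

    timestamps = sorted(t for t, _ in reviews)
    ratings = [r for _, r in reviews]

    five_star_share = ratings.count(5) / len(ratings)
    burst = timestamps[-1] - timestamps[0] < timedelta(hours=1)

    return five_star_share > 0.95 or burst   # all fives, or 10+ reviews in an hour
```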

In coming years, campaign finance limits will be (and maybe already are) evaded by robot armies posing as “small” donors. And actual voting is another obvious target — perhaps the ultimate target. So far, we’ve been content to leave the problem to the tech industry, where the focus has been on building defenses, usually in the form of Captchas (“completely automated public Turing test to tell computers and humans apart”), those annoying “type this” tests to prove you are not a robot. But leaving it all to industry is not a long-term solution.

For one thing, the defenses don’t actually deter impersonation bots, but perversely reward whoever can beat them. And perhaps the greatest problem for a democracy is that companies like Facebook and Twitter lack a serious financial incentive to do anything about matters of public concern, like the millions of fake users who are corrupting the democratic process.

Twitter estimates at least 27 million probably fake accounts; researchers suggest the real number is closer to 48 million, yet the company does little about the problem. The problem is a public as well as private one, and impersonation robots should be considered what the law calls “hostis humani generis”: enemies of mankind, like pirates and other outlaws. That would allow for a better offensive strategy: bringing the power of the state to bear on the people deploying the robot armies to attack commerce or democracy.

The ideal anti-robot campaign would employ a mixed technological and legal approach. Improved robot detection might help us find the robot masters or potentially help national security unleash counterattacks, which can be necessary when attacks come from overseas. There may be room for deputizing private parties to hunt down bad robots. A simple legal remedy would be a “Blade Runner” law that makes it illegal to deploy any program that hides its real identity to pose as a human. Automated processes should be required to state, “I am a robot.” When dealing with a fake human, it would be nice to know.
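
There is already an informal version of Wu's disclosure rule on the web: well-behaved crawlers announce themselves in the User-Agent header and honor robots.txt. A sketch of what a mandated “I am a robot” declaration might look like for an automated client, using Python's requests library–the disclosure header itself is hypothetical, as no such standard exists:

```python
import requests

# A compliant automated agent announces itself on every request.
headers = {
    "User-Agent": "ExampleBot/1.0 (automated; contact: ops@example.org)",
    "X-Automated-Agent": "I am a robot",   # hypothetical mandated disclosure
}

response = requests.get("https://example.org/comments", headers=headers)
```

The enforcement gap Wu identifies is exactly that a malicious operator will simply omit such a header, which is why he pairs the legal rule with detection and state action.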

Using robots to fake support, steal tickets or crash democracy really is the kind of evil that science fiction writers were warning about. The use of robots takes advantage of the fact that political campaigns, elections and even open markets make humanistic assumptions, trusting that there is wisdom or at least legitimacy in crowds and value in public debate. But when support and opinion can be manufactured, bad or unpopular arguments can win not by logic but by a novel, dangerous form of force — the ultimate threat to every democracy.

4. Before the actual replacement of manual labor by robots, devices to technocratically “improve”–read: “coercively engineer”–workers have been patented by Amazon and used on workers in some of its facilities.

” . . . . What if your employer made you wear a wristband that tracked your every move, and that even nudged you via vibrations when it judged that you were doing something wrong? What if your supervisor could identify every time you paused to scratch or fidget, and for how long you took a bathroom break? What may sound like dystopian fiction could become a reality for Amazon warehouse workers around the world. The company has won two patents for such a wristband. . . .”

For some U.K. Amazon warehouse workers, the future is now: ” . . . . Max Crawford, a former Amazon warehouse worker in Britain, said in a phone interview, ‘After a year working on the floor, I felt like I had become a version of the robots I was working with.’ He described having to process hundreds of items in an hour — a pace so extreme that one day, he said, he fell over from dizziness. ‘There was no time to go to the loo,’ he said, using the British slang for toilet. ‘You had to process the items in seconds and then move on. If you didn’t meet targets, you were fired.’

He worked back and forth at two Amazon warehouses for more than two years and then quit in 2015 because of health concerns, he said: ‘I got burned out.’ Mr. Crawford agreed that the wristbands might save some time and labor, but he said the tracking was ‘stalkerish’ and feared that workers might be unfairly scrutinized if their hands were found to be ‘in the wrong place at the wrong time.’ ‘They want to turn people into machines,’ he said. ‘The robotic technology isn’t up to scratch yet, so until it is, they will use human robots.’ . . . .”

“Track Hands of Workers? Amazon Has Patents for It” by Ceylan Yeginsu; The New York Times; 2/2/2018; p. B3 [Western Edition].

 What if your employer made you wear a wristband that tracked your every move, and that even nudged you via vibrations when it judged that you were doing something wrong? What if your supervisor could identify every time you paused to scratch or fidget, and for how long you took a bathroom break?

What may sound like dystopian fiction could become a reality for Amazon warehouse workers around the world. The company has won two patents for such a wristband, though it was unclear if Amazon planned to actually manufacture the tracking device and have employees wear it.

The online retail giant, which plans to build a second headquarters and recently shortlisted 20 potential host cities for it, has also been known to experiment in-house with new technology before selling it worldwide.

Amazon, which rarely discloses information on its patents, could not immediately be reached for comment on Thursday. But the patent disclosure goes to the heart of a global debate about privacy and security. Amazon already has a reputation for a workplace culture that thrives on a hard-hitting management style, and has experimented with how far it can push white-collar workers in order to reach its delivery targets.

Privacy advocates, however, note that a lot can go wrong even with everyday tracking technology. On Monday, the tech industry was jolted by the discovery that Strava, a fitness app that allows users to track their activities and compare their performance with other people running or cycling in the same places, had unwittingly highlighted the locations of United States military bases and the movements of their personnel in Iraq and Syria.

The patent applications, filed in 2016, were published in September, and Amazon won them this week, according to GeekWire, which reported the patents’ publication on Tuesday. In theory, Amazon’s proposed technology would emit ultrasonic sound pulses and radio transmissions to track where an employee’s hands were in relation to inventory bins, and provide “haptic feedback” to steer the worker toward the correct bin.
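
Mechanically, the patent's description amounts to estimating the hand's distance from a target bin from pulse time-of-flight and buzzing when the hand strays. A toy sketch of that control loop–entirely hypothetical, since the patent publishes no code:

```python
SPEED_OF_SOUND = 343.0  # meters per second, in air

def distance_from_echo(round_trip_seconds):
    """Ultrasonic ranging: the pulse travels out and back, so halve it."""
    return SPEED_OF_SOUND * round_trip_seconds / 2.0

def haptic_intensity(hand_pos, target_bin_pos, tolerance_m=0.15):
    """Return a vibration level (0..1) steering the wrist toward the bin.

    Hypothetical control logic: silent near the correct bin, vibrating
    harder the farther the hand wanders, saturating about a meter off.
    """
    error = sum((h - t) ** 2 for h, t in zip(hand_pos, target_bin_pos)) ** 0.5
    if error <= tolerance_m:
        return 0.0                        # hand at the right bin: stay quiet
    return min(1.0, (error - tolerance_m) / 1.0)
```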

The aim, Amazon says in the patent, is to streamline “time consuming” tasks, like responding to orders and packaging them for speedy delivery. With guidance from a wristband, workers could fill orders faster. Critics say such wristbands raise concerns about privacy and would add a new layer of surveillance to the workplace, and that the use of the devices could result in employees being treated more like robots than human beings.

Current and former Amazon employees said the company already used similar tracking technology in its warehouses and said they would not be surprised if it put the patents into practice.

Max Crawford, a former Amazon warehouse worker in Britain, said in a phone interview, “After a year working on the floor, I felt like I had become a version of the robots I was working with.” He described having to process hundreds of items in an hour — a pace so extreme that one day, he said, he fell over from dizziness. “There was no time to go to the loo,” he said, using the British slang for toilet. “You had to process the items in seconds and then move on. If you didn’t meet targets, you were fired.”

He worked back and forth at two Amazon warehouses for more than two years and then quit in 2015 because of health concerns, he said: “I got burned out.” Mr. Crawford agreed that the wristbands might save some time and labor, but he said the tracking was “stalkerish” and feared that workers might be unfairly scrutinized if their hands were found to be “in the wrong place at the wrong time.” “They want to turn people into machines,” he said. “The robotic technology isn’t up to scratch yet, so until it is, they will use human robots.”

Many companies file patents for products that never see the light of day. And Amazon would not be the first employer to push boundaries in the search for a more efficient, speedy work force. Companies are increasingly introducing artificial intelligence into the workplace to help with productivity, and technology is often used to monitor employee whereabouts.

One company in London is developing artificial intelligence systems to flag unusual workplace behavior, while another used a messaging application to track its employees. In Wisconsin, a technology company called Three Square Market offered employees an opportunity to have microchips implanted under their skin in order, it said, to be able to use its services seamlessly. Initially, more than 50 out of 80 staff members at its headquarters in River Falls, Wis., volunteered.

5. Some tech workers, well placed at R & D pacesetters and giants such as Facebook and Google, have done an about-face on the impact of their earlier efforts and are now struggling against the misuse of the technologies they helped to launch:

” . . . . A group of Silicon Valley technologists who were early employees at Facebook and Google, alarmed over the ill effects of social networks and smartphones, are banding together to challenge the companies they helped build. . . . ‘The largest supercomputers in the world are inside of two companies — Google and Facebook — and where are we pointing them?’ Mr. [Tristan] Harris said. ‘We’re pointing them at people’s brains, at children.’ . . . . Mr. [Roger McNamee] said he had joined the Center for Humane Technology because he was horrified by what he had helped enable as an early Facebook investor. ‘Facebook appeals to your lizard brain — primarily fear and anger,’ he said. ‘And with smartphones, they’ve got you for every waking moment.’ . . . .”

“Early Facebook and Google Employees Join Forces to Fight What They Built” by Nellie Bowles; The New York Times; 2/5/2018; p. B6 [Western Edition].

A group of Silicon Valley technologists who were early employees at Facebook and Google, alarmed over the ill effects of social networks and smartphones, are banding together to challenge the companies they helped build. The cohort is creating a union of concerned experts called the Center for Humane Technology. Along with the nonprofit media watchdog group Common Sense Media, it also plans an anti-tech addiction lobbying effort and an ad campaign at 55,000 public schools in the United States.

The campaign, titled The Truth About Tech, will be funded with $7 million from Common Sense and capital raised by the Center for Humane Technology. Common Sense also has $50 million in donated media and airtime from partners including Comcast and DirecTV. It will be aimed at educating students, parents and teachers about the dangers of technology, including the depression that can come from heavy use of social media.

“We were on the inside,” said Tristan Harris, a former in-house ethicist at Google who is heading the new group. “We know what the companies measure. We know how they talk, and we know how the engineering works.”

The effect of technology, especially on younger minds, has become hotly debated in recent months. In January, two big Wall Street investors asked Apple to study the health effects of its products and to make it easier to limit children’s use of iPhones and iPads. Pediatric and mental health experts called on Facebook last week to abandon a messaging service the company had introduced for children as young as 6.

Parenting groups have also sounded the alarm about YouTube Kids, a product aimed at children that sometimes features disturbing content. “The largest supercomputers in the world are inside of two companies — Google and Facebook — and where are we pointing them?” Mr. Harris said. “We’re pointing them at people’s brains, at children.” Silicon Valley executives for years positioned their companies as tight-knit families and rarely spoke publicly against one another.

That has changed. Chamath Palihapitiya, a venture capitalist who was an early employee at Facebook, said in November that the social network was “ripping apart the social fabric of how society works.” The new Center for Humane Technology includes an unprecedented alliance of former employees of some of today’s biggest tech companies.

Apart from Mr. Harris, the center includes Sandy Parakilas, a former Facebook operations manager; Lynn Fox, a former Apple and Google communications executive; Dave Morin, a former Facebook executive; Justin Rosenstein, who created Facebook’s Like button and is a co-founder of Asana; Roger McNamee, an early investor in Facebook; and Renée DiResta, a technologist who studies bots. The group expects its numbers to grow.

Its first project to reform the industry will be to introduce a Ledger of Harms — a website aimed at guiding rank-and-file engineers who are concerned about what they are being asked to build. The site will include data on the health effects of different technologies and ways to make products that are healthier.

Jim Steyer, chief executive and founder of Common Sense, said the Truth About Tech campaign was modeled on antismoking drives and focused on children because of their vulnerability. That may sway tech chief executives to change, he said. Already, Apple’s chief executive, Timothy D. Cook, told The Guardian last month that he would not let his nephew on social media, while the Facebook investor Sean Parker also recently said of the social network that “God only knows what it’s doing to our children’s brains.”

Mr. Steyer said, “You see a degree of hypocrisy with all these guys in Silicon Valley.” The new group also plans to begin lobbying for laws to curtail the power of big tech companies. It will initially focus on two pieces of legislation: a bill being introduced by Senator Edward J. Markey, Democrat of Massachusetts, that would commission research on technology’s impact on children’s health, and a bill in California by State Senator Bob Hertzberg, a Democrat, which would prohibit the use of digital bots without identification.

Mr. McNamee said he had joined the Center for Humane Technology because he was horrified by what he had helped enable as an early Facebook investor. “Facebook appeals to your lizard brain — primarily fear and anger,” he said. “And with smartphones, they’ve got you for every waking moment.” He said the people who made these products could stop them before they did more harm. “This is an opportunity for me to correct a wrong,” Mr. McNamee said.

6. Transitioning to our next program–updating AI (artificial intelligence) technology as it applies to technocratic fascism–we note that AI machines are being designed to develop other AIs–“The Rise of the Machine.” ” . . . . Jeff Dean, one of Google’s leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data. AutoML, in turn, is a machine learning algorithm that learns to build other machine-learning algorithms. With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry. . . .”

“The Rise of the Machine” by Cade Metz; The New York Times; 11/6/2017; p. B1 [Western Edition].

 They are a dream of researchers but perhaps a nightmare for highly skilled computer programmers: artificially intelligent machines that can build other artificially intelligent machines. With recent speeches in both Silicon Valley and China, Jeff Dean, one of Google’s leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data.

AutoML, in turn, is a machine learning algorithm that learns to build other machine-learning algorithms. With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry. The project is part of a much larger effort to bring the latest and greatest A.I. techniques to a wider collection of companies and software developers.

The tech industry is promising everything from smartphone apps that can recognize faces to cars that can drive on their own. But by some estimates, only 10,000 people worldwide have the education, experience and talent needed to build the complex and sometimes mysterious mathematical algorithms that will drive this new breed of artificial intelligence.

The world’s largest tech businesses, including Google, Facebook and Microsoft, sometimes pay millions of dollars a year to A.I. experts, effectively cornering the market for this hard-to-find talent. The shortage isn’t going away anytime soon, just because mastering these skills takes years of work. The industry is not willing to wait. Companies are developing all sorts of tools that will make it easier for any operation to build its own A.I. software, including things like image and speech recognition services and online chatbots. “We are following the same path that computer science has followed with every new type of technology,” said Joseph Sirosh, a vice president at Microsoft, which recently unveiled a tool to help coders build deep neural networks, a type of computer algorithm that is driving much of the recent progress in the A.I. field. “We are eliminating a lot of the heavy lifting.” This is not altruism.

Researchers like Mr. Dean believe that if more people and companies are working on artificial intelligence, it will propel their own research. At the same time, companies like Google, Amazon and Microsoft see serious money in the trend that Mr. Sirosh described. All of them are selling cloud-computing services that can help other businesses and developers build A.I. “There is real demand for this,” said Matt Scott, a co-founder and the chief technical officer of Malong, a start-up in China that offers similar services. “And the tools are not yet satisfying all the demand.”

This is most likely what Google has in mind for AutoML, as the company continues to hail the project’s progress. Google’s chief executive, Sundar Pichai, boasted about AutoML last month while unveiling a new Android smartphone.

Eventually, the Google project will help companies build systems with artificial intelligence even if they don’t have extensive expertise, Mr. Dean said. Today, he estimated, no more than a few thousand companies have the right talent for building A.I., but many more have the necessary data. “We want to go from thousands of organizations solving machine learning problems to millions,” he said.

Google is investing heavily in cloud-computing services — services that help other businesses build and run software — which it expects to be one of its primary economic engines in the years to come. And after snapping up such a large portion of the world’s top A.I. researchers, it has a means of jump-starting this engine.

Neural networks are rapidly accelerating the development of A.I. Rather than building an image-recognition service or a language translation app by hand, one line of code at a time, engineers can much more quickly build an algorithm that learns tasks on its own. By analyzing the sounds in a vast collection of old technical support calls, for instance, a machine-learning algorithm can learn to recognize spoken words.

But building a neural network is not like building a website or some run-of-the-mill smartphone app. It requires significant math skills, extreme trial and error, and a fair amount of intuition. Jean-François Gagné, the chief executive of an independent machine-learning lab called Element AI, refers to the process as “a new kind of computer programming.”

In building a neural network, researchers run dozens or even hundreds of experiments across a vast network of machines, testing how well an algorithm can learn a task like recognizing an image or translating from one language to another. Then they adjust particular parts of the algorithm over and over again, until they settle on something that works. Some call it a “dark art,” just because researchers find it difficult to explain why they make particular adjustments.

But with AutoML, Google is trying to automate this process. It is building algorithms that analyze the development of other algorithms, learning which methods are successful and which are not. Eventually, they learn to build more effective machine learning.

Google said AutoML could now build algorithms that, in some cases, identified objects in photos more accurately than services built solely by human experts. Barret Zoph, one of the Google researchers behind the project, believes that the same method will eventually work well for other tasks, like speech recognition or machine translation. This is not always an easy thing to wrap your head around. But it is part of a significant trend in A.I. research. Experts call it “learning to learn” or “meta-learning.”
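
Stripped to its simplest form, the outer loop AutoML automates can be sketched as a search that proposes candidate model configurations, trains each one briefly, and keeps what works. Google's actual controllers are learned rather than random, but a random-search sketch conveys the shape of the idea; everything below is illustrative:

```python
import random

def propose_architecture():
    """The outer loop's move: sample a candidate network configuration."""
    return {"layers": random.randint(2, 8),
            "width": random.choice([64, 128, 256, 512]),
            "learning_rate": 10 ** random.uniform(-4, -2)}

def train_and_score(config):
    """The inner loop: train the candidate briefly, return validation accuracy.

    Stubbed here; in a real system this builds and trains an actual network,
    which is what makes architecture search so computationally expensive.
    """
    return random.random()    # placeholder for a measured validation score

best_config, best_score = None, float("-inf")
for trial in range(100):      # the "dozens or even hundreds of experiments"
    candidate = propose_architecture()
    accuracy = train_and_score(candidate)
    if accuracy > best_score:
        best_config, best_score = candidate, accuracy

print("best architecture found:", best_config)
```

Replace propose_architecture with a model that learns from the scores of past trials, and you have the “learning to learn” step the researchers describe.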

Many believe such methods will significantly accelerate the progress of A.I. in both the online and physical worlds. At the University of California, Berkeley, researchers are building techniques that could allow robots to learn new tasks based on what they have learned in the past. “Computers are going to invent the algorithms for us, essentially,” said a Berkeley professor, Pieter Abbeel. “Algorithms invented by computers can solve many, many problems very quickly — at least that is the hope.”

This is also a way of expanding the number of people and businesses that can build artificial intelligence. These methods will not replace A.I. researchers entirely. Experts, like those at Google, must still do much of the important design work.

But the belief is that the work of a few experts can help many others build their own software. Renato Negrinho, a researcher at Carnegie Mellon University who is exploring technology similar to AutoML, said this was not a reality today but should be in the years to come. “It is just a matter of when,” he said.


Discussion

5 comments for “FTR #996 Civilization’s Twilight: Update on Technocratic Fascism”

  1. After what might be the shortest stint ever for a New York Times op-ed columnist, the Times has a new job opening in its opinion section. After announcing the hiring of Quinn Norton as a new technology/hacker culture columnist Tuesday afternoon, the Times let her go later that evening. Why the sudden cold feet? A tweet. Or, rather, a number of Norton’s tweets that were widely pointed out after her hiring.

    The numerous tweets where she would call people “fag” and “faggot” or use the N-word certainly didn’t help. But it was her tweets about Nazis that appear to be what really sank her employment prospects. So what did Quinn Norton tweet about Nazis that got her fired? That she has a Nazi friend. She doesn’t agree with her Nazi friend’s racist views, but they’re still friends and still talk sometimes.

    And who is this Nazi friend of hers? Neo-Nazi hacker Andrew ‘weev’ Auernheimer, of course. And as the following article points out, while Norton’s friendship with Auernheimer – who waged a death threat campaign against the employees of CNN, let’s not forget – is indeed troubling, it’s not like Norton is the only tech/privacy journalist who considers weev both a friend and a hero:

    Slate

    Why Would a Tech Journalist Be Friends With a Neo-Nazi Troll?
    Quinn Norton’s friendship with the notorious Weev helped lose her a job at the New York Times. She wasn’t his only unlikely pal.

    By April Glaser
    Feb 14, 2018 1:42 PM

    The New York Times opinion section announced a new hire Tuesday afternoon: Quinn Norton, a longtime journalist covering (and traveling in) the technology industry and adjacent hacker subculture, would become the editorial board’s lead opinion writer on the “power, culture, and consequences of technology.” Hours later, the job offer was gone.

    The sharp turn occurred soon after Norton shared her job news on Twitter, where it didn’t take long for people to surface tweets that, depending on how you interpret the explanations Norton tweeted Tuesday night, were either outright vile or at minimum colossal acts of bad judgment, no matter what online subcultures Norton was navigating when she wrote them. Between 2013 and 2014, she repeatedly used the slurs fag and faggot in public conversations on Twitter. A white woman, she used the N-word in a botched retweet of internet freedom pioneer John Perry Barlow and once jokingly responded to a thread on tone policing with “what’s up my nigga.” Then there was a Medium post from 2013 in which she meditated on and praised the life of John Rabe, a Nazi leader who also helped to save thousands of Chinese people during World War II. She called him her “personal patron saint of moral complexity.”

    And then, arguably most shocking of all, there were tweets in which Norton defended her long friendship with one of the most famous neo-Nazis in America, Andrew Auernheimer, known by his internet pseudonym Weev. Among his many lowlights, Weev co-ran the website the Daily Stormer, a hub for neo-Nazis and white supremacists.

    In a statement, the New York Times’ opinion editor, James Bennet, said, “Despite our review of Quinn Norton’s work and our conversations with her previous employers, this was new information to us.” On Twitter Tuesday night, Norton wrote, “I’m sorry I can’t do the work I wanted to do with them. I wish there had been a way, but ultimately, they need to feel safe with how the net will react to their opinion writers.” But it shouldn’t have taken a public outcry for the Times to realize that Norton, despite her impressive background covering the tech industry and some of the subcultures in its underbelly, was likely a poor fit for the job.

    Lots of us have friends, acquaintances, and relatives with opinions that are controversial yet not so vile we need to eject them from our lives. Outright Nazism is something else. So how could a self-described “queer-activist” with progressive bona fides and an apparent dedication to outing abusive figures in the tech industry be friends with a Nazi? For one thing, as Norton explained, she sometimes tried to speak the language of some of the more outré communities she covered, like Anons and trolls. Friend can mean a lot of different things, and her motives in speaking with Weev may have been admirable, if possibly misguided. But when you look back at the history of the internet freedom community with which she associated, her embrace of Weev fits into an ugly pattern. She was part of a community that supported Weev and his right to free expression, often while failing to denounce his values and everything his white nationalism, sexism, and anti-Semitism stood for. Anyone who thinks seriously about the web—and hires people to cover it—ought to reckon with why.

    Some background: In October, Norton reminded her followers that “Weev is a terrible person, & an old friend of mine,” as she wrote in one of the tweets that surfaced Tuesday night. “I’ve been very clear on this. Some of my friend are terrible people, & also my friends.” Weev has said that Jewish children “deserve to die,” encouraged death threats against his targets—often Jewish people and women—and released their addresses and contact information onto the internet, causing them to be so flooded with hate speech and threats of violence that some fled their homes. Yet Norton still found value in the friendship. “Weev doesn’t talk to me much anymore, but we talk about the racism whenever he does,” Norton explained in a tweet Tuesday night. She explained that her “door is open when he, or anyone, wants to talk” and clarified that she would always make their conversations “about the stupidity of racism” when they did get a chance to catch up.

    That Norton would keep her door open to a man who harms lives does not make her an outlier within parts of the hacker and digital rights community, which took up arms to defend Weev in 2010 after he worked with a team to expose a hole in AT&T’s security system that allowed the email addresses of about 114,000 iPad owners to be revealed—which he then shared with journalists. For that, Weev was sentenced to three years in jail for identity fraud and accessing a computer without the proper authorization. Despite being known as a particularly terrifying internet troll and anti-Semite, the Electronic Frontier Foundation (where I used to work), celebrated technology law professor Orin Kerr, and others in the internet freedom community came to Weev’s defense, arguing that when a security researcher finds a hole in a company’s system, it doesn’t mean the hacking was malicious and deserving of prosecution. They were right. Outside security researchers should be able to find and disclose vulnerabilities in order to keep everyone else safe without breaking a law.

    But the broader hacker community didn’t defend Weev on the merits of this particular case while simultaneously denouncing his hateful views. Instead it lionized him in keeping with its opposition to draconian computer crime laws. Artist Molly Crabapple painted a portrait of Weev. There was a “Free Weev” website; the slogan was printed on T-shirts. The charges were eventually overturned 28 months before the end of Weev’s sentence, and when a journalist accompanied his lawyer to pick Weev up from prison, he reportedly blasted a white power song on the drive home. During and after his imprisonment, Weev and Norton kept in touch.

    And during his time in jail, Norton appeared to pick up some trolling tendencies of her own. “Here’s the deal, faggot,” she wrote in a tweet from 2013. “Free speech comes with responsibility. not legal, but human. grown up. you can do this.” Norton defended herself Tuesday night, saying this language was only ever used in the context of her work with Anonymous, where that particular slur is a kind of shibboleth, but still, she was comfortable enough to use the word a lot, and on a public platform.

    Norton, like so many champions of internet freedom, is a staunch advocate of free speech. That was certainly the view that allowed so much of the internet freedom and hacker community to overlook Weev’s ardent anti-Semitism when he was on trial for breaking into AT&T’s computers. The thinking is that this is what comes with defending people’s civil liberties: Sometimes you’re going to defend a massive racist. That’s true for both internet activists and the ACLU. It’s also totally possible to defend someone’s right to say awful things and not become their “friend,” however you define the term. But that’s something Quinn didn’t do. And it’s something that many of Weev’s defenders didn’t do, either.

    When civil liberties are defended without adjacent calls for social and economic justice, the values that undergird calls for, say, free speech or protection from government search and seizure can collapse. This is why neo-Nazis feel emboldened to hold “free speech” rallies across the country. It is why racist online communities are able to rail against the monopolistic power of companies like Facebook and Google when they get booted off their platforms. Countless activists, engineers, and others have agitated for decades for an open web—but in the process they’ve too often neglected to fight for social and economic justice at the same time. They’ve defended free speech above all else, which encouraged platforms to allow racists and bigots and sexists and anti-Semites to gather there without much issue.

    In a way, Norton’s friendship with Weev can be made sense of through the lens of the communities that they both traveled through. They belonged to a group that had the prescient insight that the internet was worth fighting for. Those fights were often railing against the threat of censorship, in defense of personal privacy, and thus in defense of hackers who found security holes, and the ability to use the internet as freely as possible, without government meddling. It’s a train of thought that preserved free speech but didn’t simultaneously work as hard to defend communities that were ostracized on the internet because so much of that speech was harmful. Norton’s reporting has been valuable; her contribution to the #MeToo moment in the tech industry was, too. But what’s really needed to make sense of technology at our current juncture probably isn’t someone so committed to one of the lines of thought that helped get us here. Let’s hope the New York Times’ next pick for the job Norton would have had exerts some fresher thinking.

    ———-

    “Why Would a Tech Journalist Be Friends With a Neo-Nazi Troll?” by April Glaser; Slate; 02/14/2018

    “And then, arguably most shocking of all, there were tweets in which Norton defended her long friendship with one of the most famous neo-Nazis in America, Andrew Auernheimer, known by his internet pseudonym Weev. Among his many lowlights, Weev co-ran the website the Daily Stormer, a hub for neo-Nazis and white supremacists.”

    Yeah, there’s nothing quite like your tweet history defending your friendship with the guy who co-ran the Daily Stormer to spruce up your resume…assuming you’re applying for a job at Breitbart. But that might be a bit too far for the New York Times.

    And yet, as the article notes, Norton was far from alone in not just defending Auernheimer when he was facing prosecution for hacking AT&T (and that legitimately was an overly harsh prosecution) but in remaining friends with him despite the horrific Nazi views he openly stands for:


    Lots of us have friends, acquaintances, and relatives with opinions that are controversial yet not so vile we need to eject them from our lives. Outright Nazism is something else. So how could a self-described “queer-activist” with progressive bona fides and an apparent dedication to outing abusive figures in the tech industry be friends with a Nazi? For one thing, as Norton explained, she sometimes tried to speak the language of some of the more outré communities she covered, like Anons and trolls. Friend can mean a lot of different things, and her motives in speaking with Weev may have been admirable, if possibly misguided. But when you look back at the history of the internet freedom community with which she associated, her embrace of Weev fits into an ugly pattern. She was part of a community that supported Weev and his right to free expression, often while failing to denounce his values and everything his white nationalism, sexism, and anti-Semitism stood for. Anyone who thinks seriously about the web—and hires people to cover it—ought to reckon with why.

    Now, it’s not that Norton never criticizes Auernheimer’s views. It’s that she appears to still be friends with him, and to talk with him, despite the fact that he really is a leading neo-Nazi who really does call for mass murder. Which, again, is something that goes far beyond Norton:


    Some background: In October, Norton reminded her followers that “Weev is a terrible person, & an old friend of mine,” as she wrote in one of the tweets that surfaced Tuesday night. “I’ve been very clear on this. Some of my friend are terrible people, & also my friends.” Weev has said that Jewish children “deserve to die,” encouraged death threats against his targets—often Jewish people and women—and released their addresses and contact information onto the internet, causing them to be so flooded with hate speech and threats of violence that some fled their homes. Yet Norton still found value in the friendship. “Weev doesn’t talk to me much anymore, but we talk about the racism whenever he does,” Norton explained in a tweet Tuesday night. She explained that her “door is open when he, or anyone, wants to talk” and clarified that she would always make their conversations “about the stupidity of racism” when they did get a chance to catch up.

    That Norton would keep her door open to a man who harms lives does not make her an outlier within parts of the hacker and digital rights community, which took up arms to defend Weev in 2010 after he worked with a team to expose a hole in AT&T’s security system that allowed the email addresses of about 114,000 iPad owners to be revealed—which he then shared with journalists. For that, Weev was sentenced to three years in jail for identity fraud and accessing a computer without the proper authorization. Despite being known as a particularly terrifying internet troll and anti-Semite, the Electronic Frontier Foundation (where I used to work), celebrated technology law professor Orin Kerr, and others in the internet freedom community came to Weev’s defense, arguing that when a security researcher finds a hole in a company’s system, it doesn’t mean the hacking was malicious and deserving of prosecution. They were right. Outside security researchers should be able to find and disclose vulnerabilities in order to keep everyone else safe without breaking a law.

    But the broader hacker community didn’t defend Weev on the merits of this particular case while simultaneously denouncing his hateful views. Instead it lionized him in keeping with its opposition to draconian computer crime laws. Artist Molly Crabapple painted a portrait of Weev. There was a “Free Weev” website; the slogan was printed on T-shirts. The charges were eventually overturned 28 months before the end of Weev’s sentence, and when a journalist accompanied his lawyer to pick Weev up from prison, he reportedly blasted a white power song on the drive home. During and after his imprisonment, Weev and Norton kept in touch.

    “But the broader hacker community didn’t defend Weev on the merits of this particular case while simultaneously denouncing his hateful views. Instead it lionized him in keeping with its opposition to draconian computer crime laws.”

    And that is the much bigger story within the story of Quinn Norton’s half-day as a New York Times technology columnist: within much of the digital privacy community, Norton’s acceptance of Auernheimer despite his open, aggressive neo-Nazi views isn’t the exception. It’s the rule.

    There was unfortunately no mention in the article of how Auernheimer partied with Glenn Greenwald and Laura Poitras in 2014 after his release from prison (when he was already sporting a giant swastika on his chest). Neither was there any mention of the fact that Auernheimer appears to have been involved with the ‘Macron hacks’ in France’s elections last year and possibly the DNC hacks. But the article does make the important point that the story of Quinn Norton’s firing is really just a sub-story in the much larger story of the remarkably widespread popularity of Andrew ‘weev’ Auernheimer within the tech and digital privacy communities and the roles he may have played in some of the biggest hacks of our times. And the story of tech’s ‘Nazi friend’ is, itself, just a sub-story in the much larger story of how pervasive far-right ideals and assumptions are in all sorts of tech sectors and technologies, whether it’s Bitcoin, the Cypherpunk movement’s extensive history of far-right thought, or the fascist roots behind WikiLeaks. Hopefully the New York Times’s next pick for tech columnist will actually address these topics.

    Posted by Pterrafractyl | February 14, 2018, 3:33 pm
  2. Here’s another example of how the libertarian dream of internet platforms so secure that the companies themselves can’t even monitor what’s taking place on them turns out to be a dream platform for far-right misinformation: WhatsApp – the Facebook-owned messaging platform that uses end-to-end strong encryption so that, in theory, no one can crack the messages and no one, including WhatsApp itself, can monitor how the platform is used – is wildly popular in Brazil. 120 million of the country’s roughly 200 million people use the app, and many of them rely on it as their primary news source.
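
    To make the core mechanic concrete, here is a toy sketch in Python of the public-key, end-to-end idea, using the PyNaCl library. It shows why the relay server only ever handles opaque ciphertext. To be clear, this is a conceptual stand-in under simplified assumptions, not WhatsApp’s actual implementation (which uses the far more elaborate Signal protocol, with rotating “ratchet” keys):

        # Toy sketch of end-to-end encryption with PyNaCl (pip install pynacl).
        # Illustrative only -- NOT WhatsApp's actual protocol.
        from nacl.public import PrivateKey, Box

        # Each user generates a keypair on their own device; the server
        # only ever learns the public halves.
        alice_key = PrivateKey.generate()
        bob_key = PrivateKey.generate()

        # Alice encrypts directly to Bob's public key...
        sending_box = Box(alice_key, bob_key.public_key)
        ciphertext = sending_box.encrypt(b"that vaccine rumor video")

        # ...the server relays `ciphertext`, which is opaque bytes to it,
        # and only Bob's private key (never uploaded) can open it.
        receiving_box = Box(bob_key, alice_key.public_key)
        assert receiving_box.decrypt(ciphertext) == b"that vaccine rumor video"

    The moderation problem follows directly: since the platform never holds the keys, it has nothing to scan, and neither do researchers.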

    So what kind of news are people getting on WhatsApp? Well, we don’t really know, because it can’t be monitored. But we do get a hint of the kind of news people are getting on encrypted services like WhatsApp when those stories spread to other platforms like Facebook or YouTube. And with Brazil facing an explosion of yellow fever and struggling to get people vaccinated, we got a particularly nasty hint of the kind of ‘news’ people are getting on WhatsApp: dangerous, professionally produced videos pushing an Alex Jones-style message that the yellow fever vaccine campaign is part of a secret government depopulation scheme. That’s the kind of ‘news’ people in Brazil are getting from WhatsApp. At least, that’s the ‘news’ we know about so far, since the full content is an encrypted mystery:

    Wired

    When WhatsApp’s Fake News Problem Threatens Public Health

    Megan Molteni
    03.09.18 03:14 pm

    In remote areas of Brazil’s Amazon basin, yellow fever used to be a rare, if regular visitor. Every six to ten years, during the hot season, mosquitoes would pick it up from infected monkeys and spread it to a few loggers, hunters, and farmers at the forests’ edges in the northwestern part of the country. But in 2016, perhaps driven by climate change or deforestation or both, the deadly virus broke its pattern.

    Yellow fever began expanding south, even through the winter months, infecting more than 1,500 people and killing nearly 500. The mosquito-borne virus attacks the liver, causing its signature jaundice and internal hemorrhaging (the Mayans called it xekik, or “blood vomit”). Today, that pestilence is racing toward Rio de Janeiro and Sao Paulo at the rate of more than a mile a day, turning Brazil’s coastal megacities into mega-ticking-timebombs. The only thing spreading faster is misinformation about the dangers of a yellow fever vaccine—the very thing that could halt the virus’s advance. And nowhere is it happening faster than on WhatsApp.

    In recent weeks, rumors of fatal vaccine reactions, mercury preservatives, and government conspiracies have surfaced with alarming speed on the Facebook-owned encrypted messaging service, which is used by 120 million of Brazil’s roughly 200 million residents. The platform has long incubated and proliferated fake news, in Brazil in particular. With its modest data requirements, WhatsApp is especially popular among middle and lower income individuals there, many of whom rely on it as their primary news consumption platform. But as the country’s health authorities scramble to contain the worst outbreak in decades, WhatsApp’s misinformation trade threatens to go from destabilizing to deadly.

    On January 25, Brazilian health officials launched a mass campaign to vaccinate 95 percent of residents in the 69 municipalities directly in the disease’s path—a total of 23 million people. A yellow fever vaccine has been mandatory since 2002 for any Brazilian born in regions where the virus is endemic. But in the last two years the disease has pushed beyond its normal range into territories where fewer than a quarter of people are immune, including the urban areas of Rio and Sao Paulo.

    By the time of the announcement, the fake news cycle was already underway. Earlier in the month an audio message from a woman claiming to be a doctor at a well-known research institute began circulating on WhatsApp, warning that the vaccine is dangerous. (The institute denied that the recording came from any of its employees). A few weeks later it was a story linking the death of a university student to the vaccine. (That too proved to be a false report). In February, Igor Sacramento’s mother-in-law messaged him a pair of videos suggesting that the yellow fever vaccine was actually a scam aimed at reducing the world population. A health communication researcher at Fiocruz, one of Brazil’s largest scientific institutions, Sacramento recognized a scam when he saw one. And no, it wasn’t a global illuminati plot to kill off his countrymen. But he could understand why people would be taken in by it.

    “These videos are very sophisticated, with good editing, testimonials from experts, and personal experiences,” Sacramento says. It’s the same journalistic format people see on TV, so it bears the shape of truth. And when people share these videos or news stories within their social networks as personal messages, it changes the calculus of trust. “We are transitioning from a society that experienced truth based on facts to a society based on its experience of truth in intimacy, in emotion, in closeness.”

    People are more likely to believe rumours from family and friends. There’s no algorithm mediating the experience. And when that misinformation comes in the form of forwarded texts and videos—which look the same as personal messages in WhatsApp—they’re lent another layer of legitimacy. Then you get the network compounding effect; if you’re in multiple group chats that all receive the fake news, the repetition makes them more believable still.

    Of course, these are all just theories. Because of WhatsApp’s end-to-end encryption and the closed nature of its networks, it’s nearly impossible to study how misinformation moves through it. For users in countries with a history of state-sponsored violence, like Brazil, that secrecy is a feature. But it’s a bug for anyone trying to study them. “I think WhatsApp hoaxes and disinformation campaigns are a bit more pernicious [than Facebook] because their diffusion cannot be monitored,” says Pablo Ortellado, a fake news researcher and professor of public policy at the University of Sao Paulo. Misinformation on WhatsApp can only be identified when it jumps to other social media sites or bleeds into the real world.

    In Brazil, it’s starting to do both. One of the videos Sacramento received from his mother-in-law is still up on YouTube, where it’s been viewed over a million times. Other stories circulated on WhatsApp are now being shared in Facebook groups with thousands of users, mostly worried mothers exchanging stories and fears. And in the streets of Rio and Sao Paulo, some people are staying away from the health workers in white coats. As of February 27, only 5.5 million people had received the shot, though it’s difficult to say how much of the slow start is due to fake news as opposed to logistical delays. A spokeswoman for the Brazilian Ministry of Health said in an email that the agency has seen an uptick in concern from residents regarding post-vaccination adverse events since the start of the year and acknowledged that the spread of false news through social media can interfere with vaccination coverage, but did not comment on its specific impact on this latest campaign.

    While the Ministry of Health has engaged in a very active pro-vaccine education operation—publishing weekly newsletters, posting on social media, and getting people on the ground at churches, temples, trade unions, and clinics—health communication researchers like Sacramento say health officials made one glaring mistake. They didn’t pay close enough attention to language.

    You see, on top of all this, there’s a global yellow fever vaccine shortage going on at the moment. The vaccine is available at a limited number of clinics in the US, but it’s only used here as a travel shot. So far this year, the Centers for Disease Control and Prevention has registered no cases of the virus within US borders, though in light of the outbreak it did issue a Level 2 travel notice in January, urging all Americans traveling to the affected states in Brazil to get vaccinated first.

    Because it’s endemic in the country, Brazil makes its own vaccine, and is currently ramping up production from 5 million to 10 million doses per month by June. But in the interim, authorities are administering smaller doses of what they have on hand, known as a “fractional dose.” It’s a well-demonstrated emergency maneuver, which staved off a yellow fever outbreak in the Democratic Republic of the Congo in 2016. According to the WHO, it’s “the best way to stretch vaccine supplies and protect against as many people as possible.” But a partial dose, one that’s guaranteed for only 12 months, has been met by mistrust in Brazil, where a single vaccination had always been good for a lifetime of protection.

    “The population in general understood the wording of ‘fractionated’ to mean weak,” says Sacramento. Although technically correct, the word took on a more sinister meaning as it spread through social media circles. Some videos even claimed the fractionated vaccine could cause renal failure. And while they may be unscientific, they’re not completely wrong.

    Like any medicine, the yellow fever vaccine can cause side effects. Between 2 and 4 percent of people experience mild headaches, low-grade fevers, or pain at the site of injection. But there have also been rare reports of life-threatening allergic reactions and damage to the nervous system and other internal organs. According to the Health Ministry, six people died in 2017 on account of an adverse reaction to the vaccine. The agency estimates that one in 76,000 will have an anaphylactic reaction, one in 125,000 will experience a severe nervous system reaction, and one in 250,000 will suffer a life-threatening illness with organ failure. Which means that if 5 million people get vaccinated, you’ll wind up with about 20 organ failures, 50 nervous system issues, and 70 allergic shocks. Of course, if yellow fever infected 5 million people, 333,000 people could die.

    Not every fake news story is 100 percent false. But they are out of proportion with reality. That’s the thing about social media. It can amplify real but statistically unlikely things just as much as it spreads totally made up stuff. What you wind up with is a murky mix of information that has just enough truth to be credible.

    And that makes it a whole lot harder to fight. You can’t just start by shouting it all down. Sacramento says too often health officials opt to frame these rumors as a dichotomy: “Is this true or is this a myth?” That alienates people from the science. Instead, the institution where he works has begun to produce social media-specific videos that start a dialogue about the importance of vaccines, while remaining open to people’s fears. “Brazil is a country full of social inequalities and contradictions,” he says. “The only way to understand what is happening is to talk to people who are different from you.” Unfortunately, that’s the one thing WhatsApp is designed not to let you do.

    ———-

    “When WhatsApp’s Fake News Problem Threatens Public Health” by Megan Molteni; Wired; 03/09/2018

    “Yellow fever began expanding south, even through the winter months, infecting more than 1,500 people and killing nearly 500. The mosquito-borne virus attacks the liver, causing its signature jaundice and internal hemorrhaging (the Mayans called it xekik, or “blood vomit”). Today, that pestilence is racing toward Rio de Janeiro and Sao Paulo at the rate of more than a mile a day, turning Brazil’s coastal megacities into mega-ticking-timebombs. The only thing spreading faster is misinformation about the dangers of a yellow fever vaccine—the very thing that could halt the virus’s advance. And nowhere is it happening faster than on WhatsApp.”

    As the saying goes, a lie can travel halfway around the world before the truth can get its boots on. Especially in the age of the internet when random videos on messaging services like WhatsApp are treated as reliable news sources:


    In recent weeks, rumors of fatal vaccine reactions, mercury preservatives, and government conspiracies have surfaced with alarming speed on the Facebook-owned encrypted messaging service, which is used by 120 million of Brazil’s roughly 200 million residents. The platform has long incubated and proliferated fake news, in Brazil in particular. With its modest data requirements, WhatsApp is especially popular among middle and lower income individuals there, many of whom rely on it as their primary news consumption platform. But as the country’s health authorities scramble to contain the worst outbreak in decades, WhatsApp’s misinformation trade threatens to go from destabilizing to deadly.

    So by the time the government began its big campaign to vaccinate 95 percent of residents in vulnerable areas, there was already a fake news campaign against the vaccine using professional-quality videos: fake doctors, fake stories of deaths from the vaccine, and the kind of production quality people expect from a news broadcast:


    On January 25, Brazilian health officials launched a mass campaign to vaccinate 95 percent of residents in the 69 municipalities directly in the disease’s path—a total of 23 million people. A yellow fever vaccine has been mandatory since 2002 for any Brazilian born in regions where the virus is endemic. But in the last two years the disease has pushed beyond its normal range into territories where fewer than a quarter of people are immune, including the urban areas of Rio and Sao Paulo.

    By the time of the announcement, the fake news cycle was already underway. Earlier in the month an audio message from a woman claiming to be a doctor at a well-known research institute began circulating on WhatsApp, warning that the vaccine is dangerous. (The institute denied that the recording came from any of its employees). A few weeks later it was a story linking the death of a university student to the vaccine. (That too proved to be a false report). In February, Igor Sacramento’s mother-in-law messaged him a pair of videos suggesting that the yellow fever vaccine was actually a scam aimed at reducing the world population. A health communication researcher at Fiocruz, one of Brazil’s largest scientific institutions, Sacramento recognized a scam when he saw one. And no, it wasn’t a global illuminati plot to kill off his countrymen. But he could understand why people would be taken in by it.

    “These videos are very sophisticated, with good editing, testimonials from experts, and personal experiences,” Sacramento says. It’s the same journalistic format people see on TV, so it bears the shape of truth. And when people share these videos or news stories within their social networks as personal messages, it changes the calculus of trust. “We are transitioning from a society that experienced truth based on facts to a society based on its experience of truth in intimacy, in emotion, in closeness.”

    “These videos are very sophisticated, with good editing, testimonials from experts, and personal experiences,” Sacramento says. It’s the same journalistic format people see on TV, so it bears the shape of truth. And when people share these videos or news stories within their social networks as personal messages, it changes the calculus of trust. “We are transitioning from a society that experienced truth based on facts to a society based on its experience of truth in intimacy, in emotion, in closeness.””

    So how widespread is the problem of high quality literal fake news content getting propagated on WhatsApp? Well, again, we don’t know. Because you can’t monitor how WhatsApp is used. Even the company can’t. It’s one of its ‘features’:


    People are more likely to believe rumours from family and friends. There’s no algorithm mediating the experience. And when that misinformation comes in the form of forwarded texts and videos—which look the same as personal messages in WhatsApp—they’re lent another layer of legitimacy. Then you get the network compounding effect; if you’re in multiple group chats that all receive the fake news, the repetition makes them more believable still.

    Of course, these are all just theories. Because of WhatsApp’s end-to-end encryption and the closed nature of its networks, it’s nearly impossible to study how misinformation moves through it. For users in countries with a history of state-sponsored violence, like Brazil, that secrecy is a feature. But it’s a bug for anyone trying to study them. “I think WhatsApp hoaxes and disinformation campaigns are a bit more pernicious [than Facebook] because their diffusion cannot be monitored,” says Pablo Ortellado, a fake news researcher and professor of public policy at the University of Sao Paulo. Misinformation on WhatsApp can only be identified when it jumps to other social media sites or bleeds into the real world.

    “Of course, these are all just theories. Because of WhatsApp’s end-to-end encryption and the closed nature of its networks, it’s nearly impossible to study how misinformation moves through it.”

    Yep, we have no idea what kinds of other high-quality misinformation videos are getting produced. Of course, it’s not like there aren’t plenty of misinformation videos readily available on YouTube and Facebook, so we do have some idea of the general type of misinformation and far-right memes that are going to flourish on platforms like WhatsApp. But for a very time-sensitive story, like getting people vaccinated before the killer virus turns into a pandemic, the inability to identify and combat disinformation like this really is quite dangerous.

    It’s a reminder that if humanity wants to embrace the cypherpunk revolution of ubiquitous strong encryption and a truly anonymous, untrackable internet, humanity is going to have to get a lot wiser. Wise enough to at least have some sort of reasonable social immune system against mind viruses like bogus news and far-right memes. Wise enough to identify and reject the many problems with ideologies like digital libertarianism. In other words, if humanity wants to safely embrace the cypherpunk revolution, it needs to be savvy enough to reject the cypherpunk revolution. It’s a bit of a paradox, and a recurring theme with technology and power in general: if you want this kind of power without destroying yourself, you have to be wise enough to use it very carefully or reject it outright, collectively and individually.

    But for now, we have literal fake news videos pushing anti-vaccine misinformation quietly ‘going viral’ on encrypted social media in order to promote the spread of a deadly biological virus. It seems like a milestone of self-destructive behavior was just reached by humanity. It was a group effort. Go team.

    Posted by Pterrafractyl | March 12, 2018, 10:46 am
  3. A great new book is out on the history of the internet: Surveillance Valley by Yasha Levine. Here is a link to a long interview:
    http://mediaroots.org/surveillance-valley-the-secret-military-history-of-the-internet-with-yasha-levine/

    Posted by Hugo Turner | March 23, 2018, 11:37 am
  4. Here’s a quick update on the development of the ‘deepfake’ technology that can create realistic-looking videos of anyone saying anything: according to experts, it should be advanced enough to cause major problems for things like political elections within a couple of years. So if you were wondering what kind of ‘fake news’ nightmare is in store for the US 2020 election, it’s going to be the kind of nightmare that includes one fake video after another that looks completely real:

    Associated Press

    I never said that! High-tech deception of ‘deepfake’ videos

    By DEB RIECHMANN
    07/02/2018

    WASHINGTON (AP) — Hey, did my congressman really say that? Is that really President Donald Trump on that video, or am I being duped?

    New technology on the internet lets anyone make videos of real people appearing to say things they’ve never said. Republicans and Democrats predict this high-tech way of putting words in someone’s mouth will become the latest weapon in disinformation wars against the United States and other Western democracies.

    We’re not talking about lip-syncing videos. This technology uses facial mapping and artificial intelligence to produce videos that appear so genuine it’s hard to spot the phonies. Lawmakers and intelligence officials worry that the bogus videos — called deepfakes — could be used to threaten national security or interfere in elections.

    So far, that hasn’t happened, but experts say it’s not a question of if, but when.

    “I expect that here in the United States we will start to see this content in the upcoming midterms and national election two years from now,” said Hany Farid, a digital forensics expert at Dartmouth College in Hanover, New Hampshire. “The technology, of course, knows no borders, so I expect the impact to ripple around the globe.”

    When an average person can create a realistic fake video of the president saying anything they want, Farid said, “we have entered a new world where it is going to be difficult to know how to believe what we see.” The reverse is a concern, too. People may dismiss as fake genuine footage, say of a real atrocity, to score political points.

    Realizing the implications of the technology, the U.S. Defense Advanced Research Projects Agency is already two years into a four-year program to develop technologies that can detect fake images and videos. Right now, it takes extensive analysis to identify phony videos. It’s unclear if new ways to authenticate images or detect fakes will keep pace with deepfake technology.

    Deepfakes are so named because they utilize deep learning, a form of artificial intelligence. They are made by feeding a computer an algorithm, or set of instructions, lots of images and audio of a certain person. The computer program learns how to mimic the person’s facial expressions, mannerisms, voice and inflections. If you have enough video and audio of someone, you can combine a fake video of the person with a fake audio and get them to say anything you want.

    So far, deepfakes have mostly been used to smear celebrities or as gags, but it’s easy to foresee a nation state using them for nefarious activities against the U.S., said Sen. Marco Rubio, R-Fla., one of several members of the Senate intelligence committee who are expressing concern about deepfakes.

    A foreign intelligence agency could use the technology to produce a fake video of an American politician using a racial epithet or taking a bribe, Rubio says. They could use a fake video of a U.S. soldier massacring civilians overseas, or one of a U.S. official supposedly admitting a secret plan to carry out a conspiracy. Imagine a fake video of a U.S. leader — or an official from North Korea or Iran — warning the United States of an impending disaster.

    “It’s a weapon that could be used — timed appropriately and placed appropriately — in the same way fake news is used, except in a video form, which could create real chaos and instability on the eve of an election or a major decision of any sort,” Rubio told The Associated Press.

    Deepfake technology still has a few hitches. For instance, people’s blinking in fake videos may appear unnatural. But the technology is improving.

    “Within a year or two, it’s going to be really hard for a person to distinguish between a real video and a fake video,” said Andrew Grotto, an international security fellow at the Center for International Security and Cooperation at Stanford University in California.

    “This technology, I think, will be irresistible for nation states to use in disinformation campaigns to manipulate public opinion, deceive populations and undermine confidence in our institutions,” Grotto said. He called for government leaders and politicians to clearly say it has no place in civilized political debate.

    Rubio noted that in 2009, the U.S. Embassy in Moscow complained to the Russian Foreign Ministry about a fake sex video it said was made to damage the reputation of a U.S. diplomat. The video showed the married diplomat, who was a liaison to Russian religious and human rights groups, making telephone calls on a dark street. The video then showed the diplomat in his hotel room, scenes that apparently were shot with a hidden camera. Later, the video appeared to show a man and a woman having sex in the same room with the lights off, although it was not at all clear that the man was the diplomat.

    John Beyrle, who was the U.S. ambassador in Moscow at the time, blamed the Russian government for the video, which he said was clearly fabricated.

    Michael McFaul, who was American ambassador in Russia between 2012 and 2014, said Russia has engaged in disinformation videos against various political actors for years and that he too had been a target. He has said that Russian state propaganda inserted his face into photographs and “spliced my speeches to make me say things I never uttered and even accused me of pedophilia.”

    ———-

    “I never said that! High-tech deception of ‘deepfake’ videos” by DEB RIECHMANN; Associated Press; 07/02/2018

    “I expect that here in the United States we will start to see this content in the upcoming midterms and national election two years from now,” said Hany Farid, a digital forensics expert at Dartmouth College in Hanover, New Hampshire. “The technology, of course, knows no borders, so I expect the impact to ripple around the globe.””

    Yep, the way Hany Farid, a digital forensics expert at Dartmouth College, sees it, we might even see ‘deepfakes’ impact the US midterms this year. The technology is basically ready to go.
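
    For a sense of how accessible the underlying trick has become, here is a rough, untested sketch of the shared-encoder/twin-decoder autoencoder idea that early face-swap tools were built around, written in PyTorch. Every name, layer size, and training detail below is an illustrative assumption, not the code of any actual deepfake package:

        # Rough sketch of the shared-encoder / twin-decoder idea behind
        # early face-swap deepfakes. Illustrative assumptions throughout.
        import torch
        import torch.nn as nn

        class Encoder(nn.Module):
            """Compress a 64x64 RGB face crop into a latent code that
            captures pose, lighting, and expression."""
            def __init__(self, latent=256):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),    # 64 -> 32
                    nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),   # 32 -> 16
                    nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),  # 16 -> 8
                    nn.Flatten(),
                    nn.Linear(128 * 8 * 8, latent))
            def forward(self, x):
                return self.net(x)

        class Decoder(nn.Module):
            """Render a face from the latent code; one decoder per person."""
            def __init__(self, latent=256):
                super().__init__()
                self.fc = nn.Linear(latent, 128 * 8 * 8)
                self.net = nn.Sequential(
                    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
                    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
                    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid())
            def forward(self, z):
                return self.net(self.fc(z).view(-1, 128, 8, 8))

        # One shared encoder, two decoders: the encoder learns what the two
        # faces have in common, each decoder learns to paint one identity.
        enc, dec_a, dec_b = Encoder(), Decoder(), Decoder()
        params = (list(enc.parameters()) + list(dec_a.parameters())
                  + list(dec_b.parameters()))
        opt = torch.optim.Adam(params, lr=1e-4)
        recon = nn.L1Loss()

        def train_step(faces_a, faces_b):
            """faces_a, faces_b: batches of face crops of person A / B."""
            opt.zero_grad()
            loss = (recon(dec_a(enc(faces_a)), faces_a)
                    + recon(dec_b(enc(faces_b)), faces_b))
            loss.backward()
            opt.step()
            return loss.item()

        # The "swap": after training, decode A's latent code with B's
        # decoder, i.e. B's face wearing A's expression, frame by frame:
        # fake_b = dec_b(enc(faces_a))

    That the whole scheme fits in a few dozen lines is exactly why this stopped being a Hollywood monopoly: the only real inputs are lots of face images of the target and a commodity GPU.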

    And while DARPA is reportedly already working on techniques for identifying fake images and videos, it’s still unclear if even an agency like DARPA will be able to keep up with advances in the technology. In other words, even after detection technology has been developed there’s still ALWAYS going to be the potential for cutting edge ‘deepfakes’ that can fool that detection technology. It’s just part of our technological landscape:


    Realizing the implications of the technology, the U.S. Defense Advanced Research Projects Agency is already two years into a four-year program to develop technologies that can detect fake images and videos. Right now, it takes extensive analysis to identify phony videos. It’s unclear if new ways to authenticate images or detect fakes will keep pace with deepfake technology.

    Deepfakes are so named because they utilize deep learning, a form of artificial intelligence. They are made by feeding a computer an algorithm, or set of instructions, lots of images and audio of a certain person. The computer program learns how to mimic the person’s facial expressions, mannerisms, voice and inflections. If you have enough video and audio of someone, you can combine a fake video of the person with a fake audio and get them to say anything you want.

    Deepfake technology still has a few hitches. For instance, people’s blinking in fake videos may appear unnatural. But the technology is improving.

    “Within a year or two, it’s going to be really hard for a person to distinguish between a real video and a fake video,” said Andrew Grotto, an international security fellow at the Center for International Security and Cooperation at Stanford University in California.

    “This technology, I think, will be irresistible for nation states to use in disinformation campaigns to manipulate public opinion, deceive populations and undermine confidence in our institutions,” Grotto said. He called for government leaders and politicians to clearly say it has no place in civilized political debate.

    And while we’re guaranteed that any deepfakes introduced into the US elections will almost reflexively be blamed on Russia, the reality is that every intelligence agency on the planet (even private intelligence agencies) is going to be extremely tempted to develop these kinds of videos for propaganda purposes. And the 4chan trolls and Alt Right are going to be investing massive amounts of time and energy into this, if they aren’t already. The list of suspects is inherently going to be massive:


    So far, deepfakes have mostly been used to smear celebrities or as gags, but it’s easy to foresee a nation state using them for nefarious activities against the U.S., said Sen. Marco Rubio, R-Fla., one of several members of the Senate intelligence committee who are expressing concern about deepfakes.

    A foreign intelligence agency could use the technology to produce a fake video of an American politician using a racial epithet or taking a bribe, Rubio says. They could use a fake video of a U.S. soldier massacring civilians overseas, or one of a U.S. official supposedly admitting a secret plan to carry out a conspiracy. Imagine a fake video of a U.S. leader — or an official from North Korea or Iran — warning the United States of an impending disaster.

    “It’s a weapon that could be used — timed appropriately and placed appropriately — in the same way fake news is used, except in a video form, which could create real chaos and instability on the eve of an election or a major decision of any sort,” Rubio told The Associated Press.

    Finally, let’s not forget about one of the more bizarre potential consequences of the emergence of deepfake technology: it’s going to be easier than ever for Republicans to decry ‘fake news!’ when they are confronted with a true but politically inconvenient story. Remember when Trump’s ambassador to the Netherlands, Pete Hoekstra, cried ‘fake news!’ when shown a video of his own comments? Well, that’s going to be a very common thing going forward. So when the inevitable montages of Trump saying one horrible thing after another get rolled out for voters in 2020, it’s going to be easier than ever for people to dismiss them as ‘fake news!’:


    When an average person can create a realistic fake video of the president saying anything they want, Farid said, “we have entered a new world where it is going to be difficult to know how to believe what we see.” The reverse is a concern, too. People may dismiss as fake genuine footage, say of a real atrocity, to score political points.

    Welcome to the world where you really can’t believe your lying eyes. Except when you can and should.

    So how will humanity handle a world where any random troll can create convincing fake videos? Well, based on our track record with how we handle other sources of information that can potentially be faked and require a degree of wisdom and discernment to navigate, not well. Not well at all.
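
    In the meantime, detection is going to be an arms race of crude tells. As a taste of how crude, here is a minimal sketch of the blink-rate heuristic the article alludes to (early deepfakes often blink unnaturally). It assumes you already have a per-frame eye-openness score from some facial-landmark tracker (producing that score is outside this sketch), and every threshold below is an arbitrary illustrative choice:

        # Naive blink-rate check, inspired by the observation that early
        # deepfakes blinked unnaturally. All thresholds are illustrative.
        from typing import List

        def count_blinks(openness: List[float],
                         closed_thresh: float = 0.2,
                         min_closed_frames: int = 2) -> int:
            """Count a blink whenever the per-frame eye-openness score
            dips below closed_thresh for at least min_closed_frames."""
            blinks, run = 0, 0
            for value in openness:
                if value < closed_thresh:
                    run += 1
                else:
                    if run >= min_closed_frames:
                        blinks += 1
                    run = 0
            if run >= min_closed_frames:
                blinks += 1
            return blinks

        def looks_suspicious(openness: List[float], fps: float = 30.0) -> bool:
            """People blink roughly 15-20 times a minute; a clip with far
            fewer is a weak red flag, not proof of fakery."""
            minutes = len(openness) / fps / 60.0
            if minutes <= 0:
                return False
            return count_blinks(openness) / minutes < 5.0

    And the obvious catch, which the article itself flags: once a tell like this is published, the next generation of fakes simply trains it away.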

    Posted by Pterrafractyl | July 3, 2018, 2:47 pm
  5. Here’s the latest reminder that when ‘deepfake’ video technology develops to the point of being indistinguishable from real video, the far right is going to go into overdrive creating videos purporting to prove virtually every far-right fantasy you can imagine. Especially the ‘PizzaGate’ conspiracy theory pushed by the right wing in the final weeks of the 2016 campaign, alleging that Hillary Clinton and a number of other prominent Democrats are part of a Satanist child abuse ring:

    Far-right crackpot ‘journalist’ Liz Crokin is repeating her assertion that a video of Hillary Clinton – specifically, of Hillary sexually abusing a child and then cutting the child’s face off and eating it – is floating around on the Dark Web and, according to her sources, is definitely real. And now Crokin warns that reports about ‘deepfake’ technology are just disinformation stories being preemptively put out by the Deep State to make the public skeptical when the videos of Hillary cutting the face off of a child come to light:

    Right Wing Watch

    Liz Crokin: Trump Confirmed The Existence Of A Video Showing Hilary Clinton Torturing A Child

    By Kyle Mantyla
    July 12, 2018 2:00 pm

    Right-wing “journalist” Liz Crokin appeared on Sheila Zilinsky’s podcast earlier this week, where the two unhinged conspiracy theorists discussed Crokin’s assertion that a video exists showing Hillary Clinton sexually abusing and torturing a child.

    “I understand that there is a video circulating on the dark web of [Clinton] eating the face of a child,” Zilinsky said. “Does this video exist?”

    “Yes,” responded Crokin. “There are videos that prove that Hillary Clinton is involved in child sex trafficking and pedophilia. I have sources that have told me that; I trust these sources, so there is evidence that exists that proves that she is involved in this stuff … I believe with all my heart that this is true.”

    After claiming that “the deep state” targeted former White House national security adviser Michael Flynn for destruction because he and his son “were exposing Pizzagate,” Crokin insisted that media reports warning about the ability to use modern technology to create fake videos that make it appear as if famous people are saying or doing things are secretly a part of an effort to prepare the public to dismiss the Clinton video when it is finally released.

    “Based off of what lies they report, I can tell what they’re afraid of, I can tell what’s really going on behind the scenes,” she said. “So the fact that they are saying, ‘Oh, if a tape comes out involving Hillary or Obama doing like a sex act or x, y, z, it’s fake news,’ that tells me that there are tapes that incriminate Obama and Hillary.”

    As further proof that such tapes exist, Crokin repeated her claim that President Trump’s tweet of a link to a fringe right-wing conspiracy website that featured a video of her discussing this supposed Clinton tape was confirmation of the veracity of her claims.

    “When President Trump retweeted MAGA Pill, MAGA Pill’s tweet was my video talking about Hillary Clinton’s sex tape,” she insisted. “I know President Trump, I’ve met him, I’ve studied him, I’ve reported on him … I’ve known him and of him and reported on him for a very long time. I understand how his brain works, I understand how he thinks, I understand ‘The Art of War,’ his favorite book, I understand this man. And I know that President Trump—there’s no way that he didn’t know when he retweeted MAGA Pill that my video talking about Hillary Clinton’s sex tape was MAGA Pill’s pinned tweet. There is no way that President Trump didn’t know that.”

    ———-

    “Liz Crokin: Trump Confirmed The Existence Of A Video Showing Hilary Clinton Torturing A Child” by Kyle Mantyla; Right Wing Watch; 07/12/2018

    ““I understand that there is a video circulating on the dark web of [Clinton] eating the face of a child,” Zilinsky said. “Does this video exist?””

    Welcome to our dystopian near-future. Does video of [insert horror scenario here] actually exist? Oh yes, we will be assured, it’s definitely totally real and you can totally trust my source!


    “Yes,” responded Crokin. “There are videos that prove that Hillary Clinton is involved in child sex trafficking and pedophilia. I have sources that have told me that; I trust these sources, so there is evidence that exists that proves that she is involved in this stuff … I believe with all my heart that this is true.”

    And someday, with advances in deepfake video technology and special effects they might actually produce such a video. It’s really just a matter of time, and at this point we have to just hope that they use special effects to fake a child having their face cut off and eaten and don’t actually do that to a kid (you never know when you’re dealing with Nazis and their fellow travelers).

    Crokin then went on to insist that the various news articles warning about the advances in deepfake technology were all part of a secret effort to protect Hillary when the video is finally released:


    After claiming that “the deep state” targeted former White House national security adviser Michael Flynn for destruction because he and his son “were exposing Pizzagate,” Crokin insisted that media reports warning about the ability to use modern technology to create fake videos that make it appear as if famous people are saying or doing things are secretly a part of an effort to prepare the public to dismiss the Clinton video when it is finally released.

    “Based off of what lies they report, I can tell what they’re afraid of, I can tell what’s really going on behind the scenes,” she said. “So the fact that they are saying, ‘Oh, if a tape comes out involving Hillary or Obama doing like a sex act or x, y, z, it’s fake news,’ that tells me that there are tapes that incriminate Obama and Hillary.”

    And as further evidence of her claims, Crokin points to President Trump retweeting a tweet linking to a website that featured a video of Crokin discussing this alleged Hillary-child-face-eating video:


    As further proof that such tapes exist, Crokin repeated her claim that President Trump’s tweet of a link to a fringe right-wing conspiracy website that featured a video of her discussing this supposed Clinton tape was confirmation of the veracity of her claims.

    “When President Trump retweeted MAGA Pill, MAGA Pill’s tweet was my video talking about Hillary Clinton’s sex tape,” she insisted. “I know President Trump, I’ve met him, I’ve studied him, I’ve reported on him … I’ve known him and of him and reported on him for a very long time. I understand how his brain works, I understand how he thinks, I understand ‘The Art of War,’ his favorite book, I understand this man. And I know that President Trump—there’s no way that he didn’t know when he retweeted MAGA Pill that my video talking about Hillary Clinton’s sex tape was MAGA Pill’s pinned tweet. There is no way that President Trump didn’t know that.”

    Yep, the president is promoting this lady. And that, right there, summarizes the next stage of America’s nightmare descent into neo-Nazi fantasies that’s just around the corner (not to mention the impact on the rest of the world).

    Of course, the denials that deepfake technology exists will start getting rather amusing after people use that same technology to create deepfakes of Trump, Crokin, and anyone else in the public eye (since you need lots of video of someone to make the technology work).

    Also keep in mind that Crokin claims the child-face-eating video is merely one of the videos of Hillary Clinton floating around. There are many videos of Hillary that Crokin claims to be aware of. So when the child-face-eating video emerges, it’s probably going to just be a preview of what’s to come:

    Right Wing Watch

    Liz Crokin Claims That She Knows ‘One Hundred Percent’ That ‘A Video Of Hillary Clinton Sexually Abusing A Child Exists’

    By Kyle Mantyla | April 13, 2018 1:07 pm

    Fringe right-wing conspiracy theorist Liz Crokin posted a video on YouTube last night in which she declared that a gruesome video showing Hillary Clinton cutting the face off of a living child exists and will soon be released for all the world to see.

    “I know with absolute certainty that there is a tape that exists that involves Hillary Clinton sexually abusing a child,” Crokin said. “I have gotten this confirmed from very respectable and high-level sources.”

    Crokin said that reports that Russian-linked accounts posted a fake Clinton sex tape during the 2016 election are false, saying that no such fake video exists and that the stories about it are simply an effort to confuse the public “so when and if the actual video of Hillary Clinton sexually abusing a child comes out, the seeds of doubt are already planted in people’s heads.”

    “All I know is that, one hundred percent, a video of Hillary Clinton sexually abusing a child exists,” she said. “I know there’s many videos incriminating her, I just don’t know which one they are going to release. But there are people, there are claims that this sexual abuse video is on the dark web and I know that some people have seen it, some in law enforcement, the NYPD law enforcement, some NYPD officers have seen it and it made them sick, it made them cry, it made them vomit, some of them had to seek psychological counseling after this.”

    “I’m not going to go into too much detail because it’s so disgusting, but in this video, they cut off a child’s face as the child is alive,” Crokin claimed. “I’m just going to leave it at that.”

    ———-

    “Liz Crokin Claims That She Knows ‘One Hundred Percent’ That ‘A Video Of Hillary Clinton Sexually Abusing A Child Exists’” by Kyle Mantyla; Right Wing Watch; 04/13/2018

    “I’m not going to go into too much detail because it’s so disgusting, but in this video, they cut off a child’s face as the child is alive…I’m just going to leave it at that.”

    The child was alive when Hillary cut its face off and ate it after sexually abusing it. That’s what Crokin has spent months assuring her audience is a real thing and Donald Trump appears to be promoting her. Of course.

    And that’s just one of the alleged Hillary-as-beyond-evil-witch videos Crokin claims with certainty are real and in the possession of some law enforcement officials (this is what the whole ‘QAnon’ obsession on the right is about):


    “All I know is that, one hundred percent, a video of Hillary Clinton sexually abusing a child exists,” she said. “I know there’s many videos incriminating her, I just don’t know which one they are going to release. But there are people, there are claims that this sexual abuse video is on the dark web and I know that some people have seen it, some in law enforcement, the NYPD law enforcement, some NYPD officers have seen it and it made them sick, it made them cry, it made them vomit, some of them had to seek psychological counseling after this.”

    Also notice how Crokin acts like she doesn’t want to go into the details, and yet gives all sorts of details that hint at something so horrific that the alleged NYPD officers who have seen the video needed psychological counseling. And that points towards one of the other aspects of this looming nightmare phase of America’s intellectual descent: the flood of far-right fake videos that is going to be produced is probably going to be designed to psychologically traumatize the viewer. The global public is about to be exposed to one torture/murder/porn video of famous people after another, because if you’re trying to impact your audience, visually traumatizing them is an effective way to do it.

    It’s no accident that much of far-right conspiracy culture has a fixation on child sex crimes. Yes, some of that fixation is due to real cases of elite-protected child abuse, like the ‘Franklin cover-up’ or Jimmy Savile, and the profound moral gravity such crimes would carry if organized elite sex abuse rings actually exist. The visceral revulsion crimes of this nature provoke makes them inherently impactful. But in the right-wing conspiracy worldview pedophilia tends to play a central role (as anyone familiar with Alex Jones can attest). Crokin is merely one example of that.

    And that’s exactly why we should expect the slew of fake videos that are inevitably going to be produced in droves for political gain to involve images that truly psychologically scar the viewer. It’s just more impactful that way.

    So whether you’re a fan of Hillary Clinton or loathe her, get ready to have her seared into your memory forever. Probably eating the face of a child or something like that.

    Posted by Pterrafractyl | July 13, 2018, 2:39 pm
