Dave Emory’s entire lifetime of work is available on a flash drive that can be obtained HERE. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by the fall of 2017. The new drive is available for a tax-deductible contribution of $65.00 or more.
WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE.
You can subscribe to e‑mail alerts from Spitfirelist.com HERE.
You can subscribe to RSS feed from Spitfirelist.com HERE.
You can subscribe to the comments made on programs and posts–an excellent source of information in, and of, itself HERE.
This broadcast was recorded in one, 60-minute segment.
Introduction: Updating our ongoing analysis of what Mr. Emory calls “technocratic fascism,” we examine how existing technologies are neutralizing and/or rendering obsolete foundational elements of our civilization and democratic governmental systems.
For purposes of refreshing the line of argument presented here, we reference a vitally important article by David Golumbia. ” . . . . Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (‘civic,’ ‘ethical,’ ‘white’ and ‘black’ hat alike), hacktivists, WikiLeaks fans [and Julian Assange et al–D. E.], Anonymous ‘members,’ even Edward Snowden himself walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands. . . . [Tor co-creator] Dingledine asserts that a small group of software developers can assign to themselves that role, and that members of democratic polities have no choice but to accept them having that role. . . .”
Beginning with a chilling opinion piece in the New York Times, we note that technological development threatens to super-charge the Big Lies that drive our world. As anyone who saw the Star Wars film “Rogue One” knows, the technology required to create nearly life-like computer-generated videos of a real person is already a reality. Once the province of movie studios and other firms with millions to spend, the technology is now available for download for free.
” . . . . In 2016 Gareth Edwards, the director of the Star Wars film ‘Rogue One,’ was able to create a scene featuring a young Princess Leia by manipulating images of Carrie Fisher as she looked in 1977. Mr. Edwards had the best hardware and software a $200 million Hollywood budget could buy. Less than two years later, images of similar quality can be created with software available for free download on Reddit. That was how a faked video supposedly of the actress Emma Watson in a shower with another woman ended up on the website Celeb Jihad. . . .”
The technology has already rendered obsolete selective editing such as that performed by James O’Keefe: ” . . . . as the novelist William Gibson once said, ‘The street finds its own uses for things.’ So do rogue political actors. The implications for democracy are eye-opening. The conservative political activist James O’Keefe has created a cottage industry manipulating political perceptions by editing footage in misleading ways. In 2018, low-tech editing like Mr. O’Keefe’s is already an anachronism: Imagine what even less scrupulous activists could do with the power to create ‘video’ framing real people for things they’ve never actually done. One harrowing potential eventuality: Fake video and audio may become so convincing that it can’t be distinguished from real recordings, rendering audio and video evidence inadmissible in court. . . .”
After highlighting a story about AI-generated “deepfake” pornography with people’s faces superimposed on others’ bodies in pornographic layouts, we note how robots have altered our political and commercial landscapes, through cyber technology: ” . . . . Robots are getting better, every day, at impersonating humans. When directed by opportunists, malefactors and sometimes even nation-states, they pose a particular threat to democratic societies, which are premised on being open to the people. Robots posing as people have become a menace. . . . In coming years, campaign finance limits will be (and maybe already are) evaded by robot armies posing as ‘small’ donors. And actual voting is another obvious target — perhaps the ultimate target. . . .”
Before the actual replacement of manual labor by robots, devices to technocratically “improve”–read “coercively engineer”–workers have been patented by Amazon, and similar tracking technology has already been used on workers in some of its facilities. ” . . . . What if your employer made you wear a wristband that tracked your every move, and that even nudged you via vibrations when it judged that you were doing something wrong? What if your supervisor could identify every time you paused to scratch or fidget, and for how long you took a bathroom break? What may sound like dystopian fiction could become a reality for Amazon warehouse workers around the world. The company has won two patents for such a wristband. . . .”
For some U.K. Amazon warehouse workers, the future is now: ” . . . . Max Crawford, a former Amazon warehouse worker in Britain, said in a phone interview, ‘After a year working on the floor, I felt like I had become a version of the robots I was working with.’ He described having to process hundreds of items in an hour — a pace so extreme that one day, he said, he fell over from dizziness. ‘There was no time to go to the loo,’ he said, using the British slang for toilet. ‘You had to process the items in seconds and then move on. If you didn’t meet targets, you were fired.’
“He worked back and forth at two Amazon warehouses for more than two years and then quit in 2015 because of health concerns, he said: ‘I got burned out.’ Mr. Crawford agreed that the wristbands might save some time and labor, but he said the tracking was ‘stalkerish’ and feared that workers might be unfairly scrutinized if their hands were found to be ‘in the wrong place at the wrong time.’ ‘They want to turn people into machines,’ he said. ‘The robotic technology isn’t up to scratch yet, so until it is, they will use human robots.’ . . . .”
Some tech workers, well placed at R & D pacesetters and giants such as Facebook and Google, have done an about-face on the impact of their earlier efforts and are now struggling against the misuse of the technologies they helped to launch:
” . . . . A group of Silicon Valley technologists who were early employees at Facebook and Google, alarmed over the ill effects of social networks and smartphones, are banding together to challenge the companies they helped build. . . . ‘The largest supercomputers in the world are inside of two companies — Google and Facebook — and where are we pointing them?’ Mr. [Tristan] Harris said. ‘We’re pointing them at people’s brains, at children.’ . . . . Mr. [Roger McNamee] said he had joined the Center for Humane Technology because he was horrified by what he had helped enable as an early Facebook investor. ‘Facebook appeals to your lizard brain — primarily fear and anger,’ he said. ‘And with smartphones, they’ve got you for every waking moment.’ . . . .”
Transitioning to our next program–updating AI (artificial intelligence) technology as it applies to technocratic fascism–we note that AI machines are being designed to develop other AIs–“The Rise of the Machine.” ” . . . . Jeff Dean, one of Google’s leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data. AutoML, in turn, is a machine learning algorithm that learns to build other machine-learning algorithms. With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry. . . .”
1. There was a chilling recent opinion piece in the New York Times. Technological development threatens to super-charge the Big Lies that drive our world. As anyone who saw the Star Wars film “Rogue One” knows well, the technology required to create nearly life-like computer-generated videos of a real person is already a reality. Once the province of movie studios and other firms with millions to spend, the technology is now available for download for free.
” . . . . In 2016 Gareth Edwards, the director of the Star Wars film ‘Rogue One,’ was able to create a scene featuring a young Princess Leia by manipulating images of Carrie Fisher as she looked in 1977. Mr. Edwards had the best hardware and software a $200 million Hollywood budget could buy. Less than two years later, images of similar quality can be created with software available for free download on Reddit. That was how a faked video supposedly of the actress Emma Watson in a shower with another woman ended up on the website Celeb Jihad. . . .”
The technology has already rendered obsolete selective editing such as that performed by James O’Keefe: ” . . . . as the novelist William Gibson once said, ‘The street finds its own uses for things.’ So do rogue political actors. The implications for democracy are eye-opening. The conservative political activist James O’Keefe has created a cottage industry manipulating political perceptions by editing footage in misleading ways. In 2018, low-tech editing like Mr. O’Keefe’s is already an anachronism: Imagine what even less scrupulous activists could do with the power to create ‘video’ framing real people for things they’ve never actually done. One harrowing potential eventuality: Fake video and audio may become so convincing that it can’t be distinguished from real recordings, rendering audio and video evidence inadmissible in court. . . .”
Imagine it is the spring of 2019. A bottom-feeding website, perhaps tied to Russia, “surfaces” video of a sex scene starring an 18-year-old Kirsten Gillibrand. It is soon debunked as a fake, the product of a user-friendly video application that employs generative adversarial network technology to convincingly swap out one face for another.
It is the summer of 2019, and the story, predictably, has stuck around — part talk-show joke, part right-wing talking point. “It’s news,” political journalists say in their own defense. “People are talking about it. How can we not?”
Then it is fall. The junior senator from New York State announces her campaign for the presidency. At a diner in New Hampshire, one “low information” voter asks another: “Kirsten What’s‑her-name? She’s running for president? Didn’t she have something to do with pornography?”
Welcome to the shape of things to come. In 2016 Gareth Edwards, the director of the Star Wars film “Rogue One,” was able to create a scene featuring a young Princess Leia by manipulating images of Carrie Fisher as she looked in 1977. Mr. Edwards had the best hardware and software a $200 million Hollywood budget could buy. Less than two years later, images of similar quality can be created with software available for free download on Reddit. That was how a faked video supposedly of the actress Emma Watson in a shower with another woman ended up on the website Celeb Jihad.
Programs like these have many legitimate applications. They can help computer-security experts probe for weaknesses in their defenses and help self-driving cars learn how to navigate unusual weather conditions. But as the novelist William Gibson once said, “The street finds its own uses for things.” So do rogue political actors. The implications for democracy are eye-opening.
The conservative political activist James O’Keefe has created a cottage industry manipulating political perceptions by editing footage in misleading ways. In 2018, low-tech editing like Mr. O’Keefe’s is already an anachronism: Imagine what even less scrupulous activists could do with the power to create “video” framing real people for things they’ve never actually done. One harrowing potential eventuality: Fake video and audio may become so convincing that it can’t be distinguished from real recordings, rendering audio and video evidence inadmissible in court.
A program called Face2Face, developed at Stanford, films one person speaking, then manipulates that person’s image to resemble someone else’s. Throw in voice manipulation technology, and you can literally make anyone say anything — or at least seem to.
The technology isn’t quite there; Princess Leia was a little wooden, if you looked carefully. But it’s closer than you might think. And even when fake video isn’t perfect, it can convince people who want to be convinced, especially when it reinforces offensive gender or racial stereotypes.
…
In 2007, Barack Obama’s political opponents insisted that footage existed of Michelle Obama ranting against “whitey.” In the future, they may not have to worry about whether it actually existed. If someone called their bluff, they may simply be able to invent it, using data from stock photos and pre-existing footage.
The next step would be one we are already familiar with: the exploitation of the algorithms used by social media sites like Twitter and Facebook to spread stories virally to those most inclined to show interest in them, even if those stories are fake.
It might be impossible to stop the advance of this kind of technology. But the relevant algorithms here aren’t only the ones that run on computer hardware. They are also the ones that undergird our too easily hacked media system, where garbage acquires the perfumed scent of legitimacy with all too much ease. Editors, journalists and news producers can play a role here — for good or for bad.
Outlets like Fox News spread stories about the murder of Democratic staff members and F.B.I. conspiracies to frame the president. Traditional news organizations, fearing that they might be left behind in the new attention economy, struggle to maximize “engagement with content.”
This gives them a built-in incentive to spread informational viruses that enfeeble the very democratic institutions that allow a free media to thrive. Cable news shows consider it their professional duty to provide “balance” by giving partisan talking heads free rein to spout nonsense — or amplify the nonsense of our current president.
It already feels as though we are living in an alternative science-fiction universe where no one agrees on what is true. Just think how much worse it will be when fake news becomes fake video. Democracy assumes that its citizens share the same reality. We’re about to find out whether democracy can be preserved when this assumption no longer holds.
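For readers who want a concrete sense of the “generative adversarial network” technique the op-ed refers to, here is a minimal, hypothetical sketch in Python (using PyTorch). It is a toy: a generator learns to produce samples that a discriminator can no longer tell apart from “real” data. The data, network sizes, and training settings below are assumptions for illustration only; this is not the face-swapping software described in the article.

```python
# Toy generative adversarial network (GAN) training loop; an illustrative sketch only.
# Real deepfake tools apply the same adversarial idea to images with far larger networks.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 8, 16, 64

generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(batch, data_dim) + 2.0      # stand-in for "real" samples (e.g. image features)
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator step: learn to score real samples high and generated samples low.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(batch, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to produce samples the discriminator accepts as real.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```

The adversarial back-and-forth is the point: as the discriminator gets better at spotting fakes, the generator gets better at producing them, which is why the resulting forgeries can become so convincing.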
It might be kind of comical to see Nicolas Cage’s face on the body of a woman, but expect to see less of this type of content floating around on PornHub and Twitter in the future.
As Motherboard first reported, both sites are taking action against artificial intelligence-powered pornography, known as “deepfakes.”
Deepfakes, for the uninitiated, are porn videos created by using a machine learning algorithm to match someone’s face to another person’s body. Loads of celebrities have had their faces used in porn scenes without their consent, and the results are almost flawless. Check out the SFW example below for a better idea of what we’re talking about.
[see chillingly realistic video of Nicolas Cage’s head on a woman’s body]
In a statement to PCMag on Wednesday, PornHub Vice President Corey Price said the company in 2015 introduced a submission form, which lets users easily flag nonconsensual content like revenge porn for removal. People have also started using that tool to flag deepfakes, he said.…
The company still has a lot of cleaning up to do. Motherboard reported there are still tons of deepfakes on PornHub.
“I was able to easily find dozens of deepfakes posted in the last few days, many under the search term ‘deepfakes’ or with deepfakes and the name of celebrities in the title of the video,” Motherboard’s Samantha Cole wrote.
Over on Twitter, meanwhile, users can now be suspended for posting deepfakes and other nonconsensual porn.
“We will suspend any account we identify as the original poster of intimate media that has been produced or distributed without the subject’s consent,” a Twitter spokesperson told Motherboard. “We will also suspend any account dedicated to posting this type of content.”
The site reported that Discord and Gfycat take a similar stance on deepfakes. For now, these types of videos appear to be primarily circulating via Reddit, where the deepfake community currently boasts around 90,000 subscribers.
3. No “ifs,” “ands,” or “bots!” ” . . . . Robots are getting better, every day, at impersonating humans. When directed by opportunists, malefactors and sometimes even nation-states, they pose a particular threat to democratic societies, which are premised on being open to the people. Robots posing as people have become a menace. . . . In coming years, campaign finance limits will be (and maybe already are) evaded by robot armies posing as ‘small’ donors. And actual voting is another obvious target — perhaps the ultimate target. . . .”
“Please Prove You’re Not a Robot” by Tim Wu; The New York Times; 7/16/2017; p. 8 (Review Section).
When science fiction writers first imagined robot invasions, the idea was that bots would become smart and powerful enough to take over the world by force, whether on their own or as directed by some evildoer. In reality, something only slightly less scary is happening.
Robots are getting better, every day, at impersonating humans. When directed by opportunists, malefactors and sometimes even nation-states, they pose a particular threat to democratic societies, which are premised on being open to the people.
Robots posing as people have become a menace. For popular Broadway shows (need we say “Hamilton”?), it is actually bots, not humans, who do much and maybe most of the ticket buying. Shows sell out immediately, and the middlemen (quite literally, evil robot masters) reap millions in ill-gotten gains.
Philip Howard, who runs the Computational Propaganda Research Project at Oxford, studied the deployment of propaganda bots during voting on Brexit, and the recent American and French presidential elections. Twitter is particularly distorted by its millions of robot accounts; during the French election, it was principally Twitter robots who were trying to make #MacronLeaks into a scandal. Facebook has admitted it was essentially hacked during the American election in November. In Michigan, Mr. Howard notes, “junk news was shared just as widely as professional news in the days leading up to the election.”
Robots are also being used to attack the democratic features of the administrative state. This spring, the Federal Communications Commission put its proposed revocation of net neutrality up for public comment. In previous years such proceedings attracted millions of (human) commentators. This time, someone with an agenda but no actual public support unleashed robots who impersonated (via stolen identities) hundreds of thousands of people, flooding the system with fake comments against federal net neutrality rules.
To be sure, today’s impersonation-bots are different from the robots imagined in science fiction: They aren’t sentient, don’t carry weapons and don’t have physical bodies. Instead, fake humans just have whatever is necessary to make them seem human enough to “pass”: a name, perhaps a virtual appearance, a credit-card number and, if necessary, a profession, birthday and home address. They are brought to life by programs or scripts that give one person the power to imitate thousands.
The problem is almost certain to get worse, spreading to even more areas of life as bots are trained to become better at mimicking humans. Given the degree to which product reviews have been swamped by robots (which tend to hand out five stars with abandon), commercial sabotage in the form of negative bot reviews is not hard to predict.
In coming years, campaign finance limits will be (and maybe already are) evaded by robot armies posing as “small” donors. And actual voting is another obvious target — perhaps the ultimate target. So far, we’ve been content to leave the problem to the tech industry, where the focus has been on building defenses, usually in the form of Captchas (“completely automated public Turing test to tell computers and humans apart”), those annoying “type this” tests to prove you are not a robot. But leaving it all to industry is not a long-term solution.
For one thing, the defenses don’t actually deter impersonation bots, but perversely reward whoever can beat them. And perhaps the greatest problem for a democracy is that companies like Facebook and Twitter lack a serious financial incentive to do anything about matters of public concern, like the millions of fake users who are corrupting the democratic process.
Twitter estimates at least 27 million probably fake accounts; researchers suggest the real number is closer to 48 million, yet the company does little about the problem. The problem is a public as well as private one, and impersonation robots should be considered what the law calls “hostis humani generis”: enemies of mankind, like pirates and other outlaws. That would allow for a better offensive strategy: bringing the power of the state to bear on the people deploying the robot armies to attack commerce or democracy.
The ideal anti-robot campaign would employ a mixed technological and legal approach. Improved robot detection might help us find the robot masters or potentially help national security unleash counterattacks, which can be necessary when attacks come from overseas. There may be room for deputizing private parties to hunt down bad robots. A simple legal remedy would be a “Blade Runner” law that makes it illegal to deploy any program that hides its real identity to pose as a human. Automated processes should be required to state, “I am a robot.” When dealing with a fake human, it would be nice to know.
Using robots to fake support, steal tickets or crash democracy really is the kind of evil that science fiction writers were warning about. The use of robots takes advantage of the fact that political campaigns, elections and even open markets make humanistic assumptions, trusting that there is wisdom or at least legitimacy in crowds and value in public debate. But when support and opinion can be manufactured, bad or unpopular arguments can win not by logic but by a novel, dangerous form of force — the ultimate threat to every democracy.
4. Before the actual replacement of manual labor by robots, devices to technocratically “improve”–read “coercively engineer”–workers have been patented by Amazon, and similar tracking technology has already been used on workers in some of its facilities.
” . . . . What if your employer made you wear a wristband that tracked your every move, and that even nudged you via vibrations when it judged that you were doing something wrong? What if your supervisor could identify every time you paused to scratch or fidget, and for how long you took a bathroom break? What may sound like dystopian fiction could become a reality for Amazon warehouse workers around the world. The company has won two patents for such a wristband. . . .”
For some U.K. Amazon warehouse workers, the future is now: ” . . . . Max Crawford, a former Amazon warehouse worker in Britain, said in a phone interview, ‘After a year working on the floor, I felt like I had become a version of the robots I was working with.’ He described having to process hundreds of items in an hour — a pace so extreme that one day, he said, he fell over from dizziness. ‘There was no time to go to the loo,’ he said, using the British slang for toilet. ‘You had to process the items in seconds and then move on. If you didn’t meet targets, you were fired.’
He worked back and forth at two Amazon warehouses for more than two years and then quit in 2015 because of health concerns, he said: ‘I got burned out.’ Mr. Crawford agreed that the wristbands might save some time and labor, but he said the tracking was ‘stalkerish’ and feared that workers might be unfairly scrutinized if their hands were found to be ‘in the wrong place at the wrong time.’ ‘They want to turn people into machines,’ he said. ‘The robotic technology isn’t up to scratch yet, so until it is, they will use human robots.’ . . . .”
What if your employer made you wear a wristband that tracked your every move, and that even nudged you via vibrations when it judged that you were doing something wrong? What if your supervisor could identify every time you paused to scratch or fidget, and for how long you took a bathroom break?
What may sound like dystopian fiction could become a reality for Amazon warehouse workers around the world. The company has won two patents for such a wristband, though it was unclear if Amazon planned to actually manufacture the tracking device and have employees wear it.
The online retail giant, which plans to build a second headquarters and recently shortlisted 20 potential host cities for it, has also been known to experiment in-house with new technology before selling it worldwide.
Amazon, which rarely discloses information on its patents, could not immediately be reached for comment on Thursday. But the patent disclosure goes to the heart of a global debate about privacy and security. Amazon already has a reputation for a workplace culture that thrives on a hard-hitting management style, and has experimented with how far it can push white-collar workers in order to reach its delivery targets.
Privacy advocates, however, note that a lot can go wrong even with everyday tracking technology. On Monday, the tech industry was jolted by the discovery that Strava, a fitness app that allows users to track their activities and compare their performance with other people running or cycling in the same places, had unwittingly highlighted the locations of United States military bases and the movements of their personnel in Iraq and Syria.
The patent applications, filed in 2016, were published in September, and Amazon won them this week, according to GeekWire, which reported the patents’ publication on Tuesday. In theory, Amazon’s proposed technology would emit ultrasonic sound pulses and radio transmissions to track where an employee’s hands were in relation to inventory bins, and provide “haptic feedback” to steer the worker toward the correct bin.
The aim, Amazon says in the patent, is to streamline “time consuming” tasks, like responding to orders and packaging them for speedy delivery. With guidance from a wristband, workers could fill orders faster. Critics say such wristbands raise concerns about privacy and would add a new layer of surveillance to the workplace, and that the use of the devices could result in employees being treated more like robots than human beings.
Current and former Amazon employees said the company already used similar tracking technology in its warehouses and said they would not be surprised if it put the patents into practice.
Max Crawford, a former Amazon warehouse worker in Britain, said in a phone interview, “After a year working on the floor, I felt like I had become a version of the robots I was working with.” He described having to process hundreds of items in an hour — a pace so extreme that one day, he said, he fell over from dizziness. “There was no time to go to the loo,” he said, using the British slang for toilet. “You had to process the items in seconds and then move on. If you didn’t meet targets, you were fired.”
He worked back and forth at two Amazon warehouses for more than two years and then quit in 2015 because of health concerns, he said: “I got burned out.” Mr. Crawford agreed that the wristbands might save some time and labor, but he said the tracking was “stalkerish” and feared that workers might be unfairly scrutinized if their hands were found to be “in the wrong place at the wrong time.” “They want to turn people into machines,” he said. “The robotic technology isn’t up to scratch yet, so until it is, they will use human robots.”
Many companies file patents for products that never see the light of day. And Amazon would not be the first employer to push boundaries in the search for a more efficient, speedy work force. Companies are increasingly introducing artificial intelligence into the workplace to help with productivity, and technology is often used to monitor employee whereabouts.
One company in London is developing artificial intelligence systems to flag unusual workplace behavior, while another used a messaging application to track its employees. In Wisconsin, a technology company called Three Square Market offered employees an opportunity to have microchips implanted under their skin in order, it said, to be able to use its services seamlessly. Initially, more than 50 out of 80 staff members at its headquarters in River Falls, Wis., volunteered.
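The patent language quoted above describes, in effect, a closed feedback loop: estimate where the hand is from ultrasonic and radio pulses, compare that to the assigned bin, and buzz the wrist when the two diverge. The following is a minimal, hypothetical sketch of such a loop in Python; the function names, units, and thresholds are assumptions for illustration, since Amazon has published nothing beyond the patent text.

```python
# Hypothetical haptic-guidance loop of the kind described in the patent summary above.
# All hardware callbacks are stubs; nothing here reflects Amazon's actual implementation.
import math
import time

def distance(a, b):
    """Straight-line distance between two (x, y, z) points, in meters."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def guidance_loop(read_hand_position, target_bin, vibrate, tolerance=0.15):
    """Nudge the wearer via vibration whenever the tracked hand drifts off the assigned bin."""
    while True:
        hand = read_hand_position()            # position estimate from ultrasonic/radio pulses
        error = distance(hand, target_bin)
        if error > tolerance:                  # hand appears to be at the wrong bin
            vibrate(strength=min(1.0, error))  # stronger buzz the farther off it is
        time.sleep(0.1)                        # poll ten times per second

# Example wiring with stubbed hardware callbacks (illustrative only):
# guidance_loop(lambda: (1.0, 0.2, 0.9), target_bin=(1.0, 0.0, 1.0),
#               vibrate=lambda strength: print(f"buzz {strength:.2f}"))
```

Seen this way, the privacy objection is easy to state: the same loop that steers a hand toward the right bin also knows where that hand is at every moment of the shift, which is exactly what the workers quoted above call “stalkerish.”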
5. Some tech workers, well placed at R & D pacesetters and giants such as Facebook and Google, have done an about-face on the impact of their earlier efforts and are now struggling against the misuse of the technologies they helped to launch:
” . . . . A group of Silicon Valley technologists who were early employees at Facebook and Google, alarmed over the ill effects of social networks and smartphones, are banding together to challenge the companies they helped build. . . . ‘The largest supercomputers in the world are inside of two companies — Google and Facebook — and where are we pointing them?’ Mr. [Tristan] Harris said. ‘We’re pointing them at people’s brains, at children.’ . . . . Mr. [Roger McNamee] said he had joined the Center for Humane Technology because he was horrified by what he had helped enable as an early Facebook investor. ‘Facebook appeals to your lizard brain — primarily fear and anger,’ he said. ‘And with smartphones, they’ve got you for every waking moment.’ . . . .”
A group of Silicon Valley technologists who were early employees at Facebook and Google, alarmed over the ill effects of social networks and smartphones, are banding together to challenge the companies they helped build. The cohort is creating a union of concerned experts called the Center for Humane Technology. Along with the nonprofit media watchdog group Common Sense Media, it also plans an anti-tech addiction lobbying effort and an ad campaign at 55,000 public schools in the United States.
The campaign, titled The Truth About Tech, will be funded with $7 million from Common Sense and capital raised by the Center for Humane Technology. Common Sense also has $50 million in donated media and airtime from partners including Comcast and DirecTV. It will be aimed at educating students, parents and teachers about the dangers of technology, including the depression that can come from heavy use of social media.
“We were on the inside,” said Tristan Harris, a former in-house ethicist at Google who is heading the new group. “We know what the companies measure. We know how they talk, and we know how the engineering works.”
The effect of technology, especially on younger minds, has become hotly debated in recent months. In January, two big Wall Street investors asked Apple to study the health effects of its products and to make it easier to limit children’s use of iPhones and iPads. Pediatric and mental health experts called on Facebook last week to abandon a messaging service the company had introduced for children as young as 6.
Parenting groups have also sounded the alarm about YouTube Kids, a product aimed at children that sometimes features disturbing content. “The largest supercomputers in the world are inside of two companies — Google and Facebook — and where are we pointing them?” Mr. Harris said. “We’re pointing them at people’s brains, at children.” Silicon Valley executives for years positioned their companies as tight-knit families and rarely spoke publicly against one another.
That has changed. Chamath Palihapitiya, a venture capitalist who was an early employee at Facebook, said in November that the social network was “ripping apart the social fabric of how society works.” The new Center for Humane Technology includes an unprecedented alliance of former employees of some of today’s biggest tech companies.
Apart from Mr. Harris, the center includes Sandy Parakilas, a former Facebook operations manager; Lynn Fox, a former Apple and Google communications executive; Dave Morin, a former Facebook executive; Justin Rosenstein, who created Facebook’s Like button and is a co-founder of Asana; Roger McNamee, an early investor in Facebook; and Renée DiResta, a technologist who studies bots. The group expects its numbers to grow.
Its first project to reform the industry will be to introduce a Ledger of Harms — a website aimed at guiding rank-and-file engineers who are concerned about what they are being asked to build. The site will include data on the health effects of different technologies and ways to make products that are healthier.
Jim Steyer, chief executive and founder of Common Sense, said the Truth About Tech campaign was modeled on antismoking drives and focused on children because of their vulnerability. That may sway tech chief executives to change, he said. Already, Apple’s chief executive, Timothy D. Cook, told The Guardian last month that he would not let his nephew on social media, while the Facebook investor Sean Parker also recently said of the social network that “God only knows what it’s doing to our children’s brains.”
Mr. Steyer said, “You see a degree of hypocrisy with all these guys in Silicon Valley.” The new group also plans to begin lobbying for laws to curtail the power of big tech companies. It will initially focus on two pieces of legislation: a bill being introduced by Senator Edward J. Markey, Democrat of Massachusetts, that would commission research on technology’s impact on children’s health, and a bill in California by State Senator Bob Hertzberg, a Democrat, which would prohibit the use of digital bots without identification.
Mr. McNamee said he had joined the Center for Humane Technology because he was horrified by what he had helped enable as an early Facebook investor. “Facebook appeals to your lizard brain — primarily fear and anger,” he said. “And with smartphones, they’ve got you for every waking moment.” He said the people who made these products could stop them before they did more harm. “This is an opportunity for me to correct a wrong,” Mr. McNamee said.
6. Transitioning to our next program–updating AI (artificial intelligence) technology as it applies to technocratic fascism–we note that AI machines are being designed to develop other AIs–“The Rise of the Machine.” ” . . . . Jeff Dean, one of Google’s leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data. AutoML, in turn, is a machine learning algorithm that learns to build other machine-learning algorithms. With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry. . . .”
“The Rise of the Machine” by Cade Metz; The New York Times; 11/6/2017; p. B1 [Western Edition].
They are a dream of researchers but perhaps a nightmare for highly skilled computer programmers: artificially intelligent machines that can build other artificially intelligent machines. With recent speeches in both Silicon Valley and China, Jeff Dean, one of Google’s leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data.
AutoML, in turn, is a machine learning algorithm that learns to build other machine-learning algorithms. With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry. The project is part of a much larger effort to bring the latest and greatest A.I. techniques to a wider collection of companies and software developers.
The tech industry is promising everything from smartphone apps that can recognize faces to cars that can drive on their own. But by some estimates, only 10,000 people worldwide have the education, experience and talent needed to build the complex and sometimes mysterious mathematical algorithms that will drive this new breed of artificial intelligence.
The world’s largest tech businesses, including Google, Facebook and Microsoft, sometimes pay millions of dollars a year to A.I. experts, effectively cornering the market for this hard-to-find talent. The shortage isn’t going away anytime soon, just because mastering these skills takes years of work. The industry is not willing to wait. Companies are developing all sorts of tools that will make it easier for any operation to build its own A.I. software, including things like image and speech recognition services and online chatbots. “We are following the same path that computer science has followed with every new type of technology,” said Joseph Sirosh, a vice president at Microsoft, which recently unveiled a tool to help coders build deep neural networks, a type of computer algorithm that is driving much of the recent progress in the A.I. field. “We are eliminating a lot of the heavy lifting.” This is not altruism.
Researchers like Mr. Dean believe that if more people and companies are working on artificial intelligence, it will propel their own research. At the same time, companies like Google, Amazon and Microsoft see serious money in the trend that Mr. Sirosh described. All of them are selling cloud-computing services that can help other businesses and developers build A.I. “There is real demand for this,” said Matt Scott, a co-founder and the chief technical officer of Malong, a start-up in China that offers similar services. “And the tools are not yet satisfying all the demand.”
This is most likely what Google has in mind for AutoML, as the company continues to hail the project’s progress. Google’s chief executive, Sundar Pichai, boasted about AutoML last month while unveiling a new Android smartphone.
Eventually, the Google project will help companies build systems with artificial intelligence even if they don’t have extensive expertise, Mr. Dean said. Today, he estimated, no more than a few thousand companies have the right talent for building A.I., but many more have the necessary data. “We want to go from thousands of organizations solving machine learning problems to millions,” he said.
Google is investing heavily in cloud-computing services — services that help other businesses build and run software — which it expects to be one of its primary economic engines in the years to come. And after snapping up such a large portion of the world’s top A.I. researchers, it has a means of jump-starting this engine.
Neural networks are rapidly accelerating the development of A.I. Rather than building an image-recognition service or a language translation app by hand, one line of code at a time, engineers can much more quickly build an algorithm that learns tasks on its own. By analyzing the sounds in a vast collection of old technical support calls, for instance, a machine-learning algorithm can learn to recognize spoken words.
But building a neural network is not like building a website or some run-of-the-mill smartphone app. It requires significant math skills, extreme trial and error, and a fair amount of intuition. Jean-François Gagné, the chief executive of an independent machine-learning lab called Element AI, refers to the process as “a new kind of computer programming.”
In building a neural network, researchers run dozens or even hundreds of experiments across a vast network of machines, testing how well an algorithm can learn a task like recognizing an image or translating from one language to another. Then they adjust particular parts of the algorithm over and over again, until they settle on something that works. Some call it a “dark art,” just because researchers find it difficult to explain why they make particular adjustments.
But with AutoML, Google is trying to automate this process. It is building algorithms that analyze the development of other algorithms, learning which methods are successful and which are not. Eventually, they learn to build more effective machine learning. Google said AutoML could now build algorithms that, in some cases, identified objects in photos more accurately than services built solely by human experts. Barret Zoph, one of the Google researchers behind the project, believes that the same method will eventually work well for other tasks, like speech recognition or machine translation. This is not always an easy thing to wrap your head around. But it is part of a significant trend in A.I. research. Experts call it “learning to learn” or “metalearning.”
Many believe such methods will significantly accelerate the progress of A.I. in both the online and physical worlds. At the University of California, Berkeley, researchers are building techniques that could allow robots to learn new tasks based on what they have learned in the past. “Computers are going to invent the algorithms for us, essentially,” said a Berkeley professor, Pieter Abbeel. “Algorithms invented by computers can solve many, many problems very quickly — at least that is the hope.”
This is also a way of expanding the number of people and businesses that can build artificial intelligence. These methods will not replace A.I. researchers entirely. Experts, like those at Google, must still do much of the important design work.
But the belief is that the work of a few experts can help many others build their own software. Renato Negrinho, a researcher at Carnegie Mellon University who is exploring technology similar to AutoML, said this was not a reality today but should be in the years to come. “It is just a matter of when,” he said.
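To make the article’s “learning to learn” idea concrete, here is a minimal, hypothetical sketch in Python of an automated search loop: a controller proposes candidate network configurations, each candidate is trained and scored, and the best one is kept. Google’s actual AutoML uses far more sophisticated controllers than random sampling, so treat the search space, helper names, and dummy scoring below purely as illustrative assumptions.

```python
# Illustrative "AutoML-style" random search over network configurations; a sketch, not Google's method.
import random

SEARCH_SPACE = {
    "layers": [1, 2, 3, 4],
    "units": [32, 64, 128, 256],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def propose_candidate():
    """Stand-in for the controller: sample one configuration from the search space."""
    return {key: random.choice(values) for key, values in SEARCH_SPACE.items()}

def train_and_evaluate(config):
    """Placeholder: in a real system this would build, train, and validate a network."""
    return random.random()  # dummy validation score for the sketch

best_config, best_score = None, float("-inf")
for trial in range(50):   # the automated version of the manual trial and error described above
    candidate = propose_candidate()
    score = train_and_evaluate(candidate)
    if score > best_score:
        best_config, best_score = candidate, score

print("best configuration found:", best_config)
```

The division of labor is the point: a human specifies the search space once, and the machine performs the trial-and-error adjustment that, per the article, researchers currently do by hand.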
After what might be the shortest stint ever for a New York Times op-ed columnist, the Times has a new job opening in its opinion section. After announcing the hiring of Quinn Norton as a new technology/hacker culture columnist Tuesday afternoon, the Times let her go later that evening. Why the sudden cold feet? A tweet. Or, rather, a number of Norton’s tweets that were widely pointed out after her hiring.
The numerous tweets where she would call people “fag” and “faggot” or use the N‑word certainly didn’t help. But it was her tweets about Nazis that appear to be what really sank her employment prospects. So what did Quinn Norton tweet about Nazis that got her fired? That she has a Nazi friend. She doesn’t agree with her Nazi friend’s racist views, but they’re still friends and still talk sometimes.
And who is this Nazi friend of hers? Neo-Nazi hacker Andrew ‘weev’ Auernheimer, of course. And as the following article points out, while Norton’s friendship with Auernheimer — who waged a death threat campaign against the employees of CNN, let’s not forget — is indeed troubling, she is hardly the only tech/privacy journalist who considers weev both a friend and a hero:
“And then, arguably most shocking of all, there were tweets in which Norton defended her long friendship with one of the most famous neo-Nazis in America, Andrew Auernheimer, known by his internet pseudonym Weev. Among his many lowlights, Weev co-ran the website the Daily Stormer, a hub for neo-Nazis and white supremacists.”
Yeah, there’s nothing quite like a tweet history defending your friendship with the guy who co-ran the Daily Stormer to spruce up your resume...assuming you’re applying for a job at Breitbart. But that might be a bit too far for the New York Times.
And yet, as the article notes, Norton was far from alone in not just defending Auernheimer when he was facing prosecution for hacking AT&T (and that prosecution legitimately was overly harsh) but also in remaining friends with him despite the horrific Nazi views he openly stands for:
Now, it’s not that Norton never criticizes Auernheimer’s views. It’s that she appears to still be friends and talk with him despite the fact that he really is a leading neo-Nazi who really does call for mass murder. Which, again, is something that goes far beyond Norton:
“But the broader hacker community didn’t defend Weev on the merits of this particular case while simultaneously denouncing his hateful views. Instead it lionized him in keeping with its opposition to draconian computer crime laws.”
And that is the much bigger story behind Quinn Norton’s half-day as a New York Times technology columnist: within much of the digital privacy community, Norton’s acceptance of Auernheimer despite his open, aggressive neo-Nazi views isn’t the exception. It’s the rule.
There was unfortunately no mention in the article of how Auernheimer partied with Glenn Greenwald and Laura Poitras in 2014 after his release from prison (when he was already sporting a giant swastika on his chest). Neither was there any mention of the fact that Auernheimer appears to have been involved with both the ‘Macron hacks’ in France’s elections last year and possibly the DNC hacks. But the article does make the important point that the story of Quinn Norton’s firing is really just a sub-story in the much larger story of the remarkably widespread popularity of Andrew ‘weev’ Auernheimer within the tech and digital privacy communities and the roles he may have played in some of the biggest hacks of our times. And the story of tech’s ‘Nazi friend’ is, itself, just a sub-story in the much larger story of how pervasive far-right ideals and assumptions are in all sorts of tech sectors and technologies, whether it’s Bitcoin, the Cypherpunk movement’s extensive history of far-right thought, or the fascist roots behind WikiLeaks. Hopefully the New York Times’s next pick for tech columnist will actually address these topics.
Here’s another example of how the libertarian dream of internet platforms so secure that the companies themselves can’t even monitor what’s taking place on them turns out to be a dream platform for far-right misinformation: WhatsApp — the Facebook-owned messaging platform that uses end-to-end strong encryption so that, in theory, no one can crack the messages and no one, including WhatsApp itself, can monitor how the platform is used — is wildly popular in Brazil. 120 million of the country’s 200 million people use the app, and many of them use it as their primary news source.
So what kind of news are people getting on WhatsApp? Well, we don’t really know, because it can’t be monitored. But we do get a hint of the kind of news people are getting on encrypted services like WhatsApp when those stories spread to other platforms like Facebook or YouTube. And with Brazil facing an explosion of yellow fever and struggling to get people vaccinated, we got a particularly nasty hint of the kind of ‘news’ people are getting on WhatsApp: dangerous, professionally produced videos telling people, in Alex Jones style, that the yellow fever vaccine campaign is part of a secret government depopulation scheme. That’s the kind of ‘news’ people in Brazil are getting from WhatsApp. At least, that’s the ‘news’ we know about so far, since the full content is an encrypted mystery:
“Yellow fever began expanding south, even through the winter months, infecting more than 1,500 people and killing nearly 500. The mosquito-borne virus attacks the liver, causing its signature jaundice and internal hemorrhaging (the Mayans called it xekik, or “blood vomit”). Today, that pestilence is racing toward Rio de Janeiro and Sao Paulo at the rate of more than a mile a day, turning Brazil’s coastal megacities into mega-ticking-timebombs. The only thing spreading faster is misinformation about the dangers of a yellow fever vaccine—the very thing that could halt the virus’s advance. And nowhere is it happening faster than on WhatsApp.”
As the saying goes, a lie can travel halfway around the world before the truth can get its boots on. Especially in the age of the internet when random videos on messaging services like WhatsApp are treated as reliable news sources:
So by the time the government began its big campaign to vaccinate 95 percent of residents in vulnerable areas, there was already a fake news campaign about the vaccine using professional-quality videos: fake doctors. Fake stories of deaths from the vaccine. And the kind of production quality people expect from a news broadcast:
““These videos are very sophisticated, with good editing, testimonials from experts, and personal experiences,” Sacramento says. It’s the same journalistic format people see on TV, so it bears the shape of truth. And when people share these videos or news stories within their social networks as personal messages, it changes the calculus of trust. “We are transitioning from a society that experienced truth based on facts to a society based on its experience of truth in intimacy, in emotion, in closeness.””
So how widespread is the problem of high quality literal fake news content getting propagated on WhatsApp? Well, again, we don’t know. Because you can’t monitor how WhatsApp is used. Even the company can’t. It’s one of its ‘features’:
“Of course, these are all just theories. Because of WhatsApp’s end-to-end encryption and the closed nature of its networks, it’s nearly impossible to study how misinformation moves through it.”
Yep, we have no idea what kinds of other high-quality misinformation videos are getting produced. Of course, it’s not like there aren’t plenty of misinformation videos readily available on YouTube and Facebook, so we do have some idea of the general type of misinformation and far-right memes that are going to flourish on platforms like WhatsApp. But for a very time-sensitive story, like getting people vaccinated before the killer virus turns into a pandemic, the inability to identify and combat disinformation like this really is quite dangerous.
It’s a reminder that if humanity wants to embrace the cypherpunk revolution of ubiquitous strong encryption and a truly anonymous, untrackable internet, humanity is going to have to get a lot wiser. Wise enough to at least have some sort of reasonable social immune system against mind viruses like bogus news and far-right memes. Wise enough to identify and reject the many problems with ideologies like digital libertarianism. In other words, if humanity wants to safely embrace the cypherpunk revolution, it needs to be savvy enough to reject the cypherpunk revolution. It’s a bit of a paradox and a recurring theme with technology and power in general: if you want this power without destroying yourself, you are going to have to be wise enough to use it very carefully or to reject it outright, collectively and individually.
But for now, we have literal fake news videos pushing anti-vaccine misinformation quietly ‘going viral’ on encrypted social media in order to promote the spread of a deadly biological virus. It seems like a milestone of self-destructive behavior was just reached by humanity. It was a group effort. Go team.
A great new book is out on the history of the internet: Surveillance Valley by Yasha Levine. Here is a link to a long interview:
http://mediaroots.org/surveillance-valley-the-secret-military-history-of-the-internet-with-yasha-levine/
Here’s a quick update on the development of the ‘deepfake’ technology that can create realistic-looking videos of anyone saying anything: according to experts, it should be advanced enough to cause major problems for things like political elections within a couple of years. So if you were wondering what kind of ‘fake news’ nightmare is in store for the US 2020 election, it’s going to be the kind of nightmare that includes one fake video after another that looks completely real:
““I expect that here in the United States we will start to see this content in the upcoming midterms and national election two years from now,” said Hany Farid, a digital forensics expert at Dartmouth College in Hanover, New Hampshire. “The technology, of course, knows no borders, so I expect the impact to ripple around the globe.””
Yep, the way Hany Farid, a digital forensics expert at Dartmouth College, sees it, we might even see ‘deepfakes’ impact the US midterms this year. The technology is basically ready to go.
And while DARPA is reportedly already working on techniques for identifying fake images and videos, it’s still unclear if even an agency like DARPA will be able to keep up with advances in the technology. In other words, even after detection technology has been developed there’s still ALWAYS going to be the potential for cutting edge ‘deepfakes’ that can fool that detection technology. It’s just part of our technological landscape:
And while we’re guaranteed that any deepfakes introduced into the US elections will almost reflexively be blamed on Russia, the reality is that any intelligence agency on the planet (even private intelligence agencies) is going to be extremely tempted to develop these kinds of videos for propaganda purposes. And the 4Chan trolls and Alt Right are going to be investing massive amounts of time and energy into this, if they aren’t already. The list of suspects is inherently going to be massive:
Finally, let’s not forget about one of the more bizarre potential consequences of the emergence of deepfake technology: it’s going to be easier than ever for Republicans to decry ‘fake news!’ when they are confronted with a true but politically inconvenient story. Remember when Trump’s ambassador to the Netherlands, Pete Hoekstra, cried ‘fake news!’ when shown a video of his own comments? Well, that’s going to be a very common thing going forward. So when the inevitable montages of Trump saying one horrible thing after another get rolled out for voters in 2020, it’s going to be easier than ever for people to dismiss them as ‘fake news!’:
Welcome to the world where you really can’t believe your lying eyes. Except when you can and should.
So how will humanity handle a world where any random troll can create convincing fake videos? Well, based on our track record with how we handle other sources of information that can potentially be faked and requires a degree of wisdom and discernment to navigate, not well. Not well at all.
Here’s the latest reminder that when ‘deepfake’ video technology develops to the point of being indistinguishable from real videos, the far right is going to go into overdrive creating videos purporting to prove virtually every far-right fantasy you can imagine. Especially the ‘PizzaGate’ conspiracy theory pushed by the right wing in the final weeks of the 2016 campaign, alleging that Hillary Clinton and a number of other prominent Democrats are part of a Satanist child abuse ring:
Far-right crackpot ‘journalist’ Liz Crokin is repeating her assertion that a video of Hillary Clinton — specifically, of Hillary sexually abusing a child and then cutting the child’s face off and eating it — floating around on the Dark Web is, according to her sources, definitely real. And now Crokin warns that reports about ‘deepfake’ technology are just disinformation stories being preemptively put out by the Deep State to make the public skeptical when the videos of Hillary cutting the face off of a child come to light:
““I understand that there is a video circulating on the dark web of [Clinton] eating the face of a child,” Zilinsky said. “Does this video exist?””
Welcome to our dystopian near-future. Does video of [insert horror scenario here] actually exist? Oh yes, we will be assured, it’s definitely totally real and you can totally trust my source!
And someday, with advances in deepfake video technology and special effects they might actually produce such a video. It’s really just a matter of time, and at this point we have to just hope that they use special effects to fake a child having their face cut off and eaten and don’t actually do that to a kid (you never know when you’re dealing with Nazis and their fellow travelers).
Crokin then went on to insist that the various news articles warning about the advances in deepfake technology were all part of a secret effort to protect Hillary when the video is finally released:
And as further evidence of her claims, Crokin points to President Trump retweeting a tweet linking to a website featuring a video of Crokin discussing this alleged Hillary-child-face-eating video:
Yep, the president is promoting this lady. And that, right there, summarizes the next stage of America’s nightmare descent into neo-Nazi fantasies that’s just around the corner (not to mention the impact on the rest of the world).
Of course, the denials that deepfake technology exists will start getting rather amusing after people use that same technology to create deepfakes of Trump, Crokin, and anyone else in the public eye (since you need lots of videos of someone to make the technology work).
Also keep in mind that Crokin claims the child-face-eating video is merely one of the videos of Hillary Clinton floating around. There are many videos of Hillary that Crokin claims to be aware of. So when the child-face-eating video emerges, it’s probably going to just be a preview of what’s to come:
“I’m not going to go into too much detail because it’s so disgusting, but in this video, they cut off a child’s face as the child is alive...I’m just going to leave it at that.”
The child was alive when Hillary cut its face off and ate it after sexually abusing it. That’s what Crokin has spent months assuring her audience is a real thing, and Donald Trump appears to be promoting her. Of course.
And that’s just one of the alleged Hillary-as-beyond-evil-witch videos Crokin claims with certainty is real and in the possession of some law enforcement officials (this is what the whole ‘QAnon’ obsession on the right is about):
Also notice how Crokin acts like she doesn’t want to go into the details, and yet gives all sorts of details that hint at something so horrific that the alleged NYPD officers who have seen the video needed psychological counseling. And that points towards one of the other aspects of this looming nightmare phase of America’s intellectual descent: the flood of far right fake videos that are going to be produced are probably going to be designed to psychologically traumatize the viewer. The global public is about to be exposed to one torture/murder/porn video of famous people after another, because if you’re trying to impact your audience, visually traumatizing them is an effective way to do it.
It’s no accident that much of the far right conspiracy culture has a fixation on child sex crimes. Yes, some of that fixation is due to real cases of elite-protected child abuse, like ‘The Franklin cover-up’ or Jimmy Savile, and the profound moral gravity such crimes would carry if organized elite sex abuse rings actually exist. The visceral revulsion crimes of this nature provoke makes them inherently impactful. But in the right-wing conspiracy worldview pedophilia tends to play a central role (as anyone familiar with Alex Jones can attest). Crokin is merely one example of that.
And that’s exactly why we should expect the slew of fake videos that are inevitably going to be produced in droves for political gain to involve images that truly psychologically scar the viewer. It’s just more impactful that way.
So whether you’re a fan of Hillary Clinton or loathe her, get ready to have her seared into your memory forever. Probably eating the face of a child or something like that.
If you didn’t think access to a gun could get much easier in America, it’s time to rethink that proposition: starting on August 1st, it will be legal under US law to distribute instructions over the internet for creating 3D-printable guns. Recall how 3D-printable guns were first developed in 2013 by far right crypto-anarchist Cody Wilson, the guy also behind the untraceable Bitcoin Dark Wallet and Hatreon, a crowdfunding platform for neo-Nazis and other people who got kicked off of Patreon.
Wilson first put instructions for 3D-printable guns online back in 2013, but the US State Department forced him to take them down, arguing that posting them amounted to exporting weapons. Wilson sued and the case was stuck in the courts. Flash forward to April of 2018, and the US government suddenly decided to reverse course and settle the case. A settlement was reached and August 1 was declared the day 3D-printable gun instructions will flood the internet.
So it looks like the cypherpunk approach of using technology to get around political realities you don’t like is about to be applied to gun control laws, with untraceable guns for potentially anyone as one of the immediate consequences:
“A last-ditch effort to block a US organization from making instructions to 3D-print a gun available to download has failed. The template will be posted online on Wednesday (Aug. 1).”
In just a couple more days, anyone with access to a 3D printer will be able to create as many untraceable guns as they desire. This is thanks to a settlement reached in April between Cody Wilson’s company, Defense Distributed, and the federal government:
So why exactly did the US government suddenly settle the case in April and pave the way for the distribution of 3D-printed gun instructions? Well, the answer appears to be that the Trump administration wanted the case dropped. At least that’s how gun control advocates interpreted it:
Although, as the following article notes, it appears that one of the reasons for the change in the federal government’s attitude towards the case may have been a desire to change the rules governing the export of guns in general, a change US gun manufacturers have long wanted. The Trump administration proposed revising and streamlining the process for exporting consumer firearms and related technical information, including tutorials for 3D-printed guns. The rule change would also shift jurisdiction over some items from the State Department to the Commerce Department (don’t forget that it was the State Department that imposed the initial injunction on the distribution of the 3D-printing instructions). So it sounds like the Trump administration’s move to legalize the distribution of 3D-printable gun instructions may have been part of a broader effort to export more guns in general:
“Learning to make a so-called ghost gun — an untraceable, unregistered firearm without a serial number — could soon become much easier.”
DIY untraceable firearms. That’s about to be a thing, and the US government legally sanctioned it. And that sudden change of heart, combined with the legal victories the government previously enjoyed in this case, is what left so many gun control advocates assuming that the Trump administration decided to promote 3D-printable guns:
And note how a Federal District Court judge had denied Wilson’s request for a preliminary injunction against the State Department, a decision that an appellate court upheld. And the Supreme Court declined to hear the case. And the State Department issued a statement about how the settlement with Wilson was voluntarily “entered into following negotiations,” and that “the court did not rule in favor of the plaintiffs in this case.” That sure doesn’t sound like a government that was on the verge of losing its case:
But that apparent desire by the Trump administration to promote 3D printable guns might be less a reflection of a specific desire to promote 3D printable guns and more a reflection of the Trump administration’s desire to promote gun exports in general:
Regardless, we are just a couple days away from 3D printable gun instructions legally being distributed online. And it’s not just the 1‑shot pistols. It’s going to include AR-15-style rifles:
And this probably also means Wilson is going to be selling a lot more of those gun milling machines that allow anyone to create the metal components of their untraceable Ghost Guns:
And keep in mind that, while this is a story about a US legal case, it’s effectively a global story. There’s undoubtedly going to be an explosion of 3D blueprints for pretty much any tool of violence one can imagine and it’s going to be accessible to anyone with an internet connection. They won’t need to go scouring the Dark Web or finding some illicit 3D printer instructions dealer. They’ll just go to one of the many websites that will be brimming with a growing library of all sorts of 3D printable sophisticated weaponry.
So now we get to watch this grim experiment unfold. And who knows, it might actually reduce US gun exports by preemptively flooding export markets with guns. Because why pay for an expensive imported gun when you can just print a cheap one?
More generally, what’s going to happen as 3D-printable guns become a thing accessible to every corner of the globe? What kinds of conflicts might pop up that simply wouldn’t have been possible before? We’re long familiar with conflicts fueled by large numbers of small arms flooding into a country, but someone has to be willing to supply those arms. There’s generally some sort of state sponsor for conflicts. What happens when any random group or movement can effectively arm themselves? Is humanity going to get even more violent as the cost of guns plummets and accessibility explodes? Probably. We’ll be collectively confirming that soon.
Remember that report from November 2017 about how Google’s Android smartphones were secretly gathering surprisingly precise location information, using data from cell towers, even when people turned off “Location services”? Well, here’s a follow-up report: Google claims it changed that policy, but it’s somehow still collecting very precise location data from Android phones and iPhones (if you use Google’s apps) even when you turn off “location services” (surprise!):
“An Associated Press investigation found that many Google services on Android devices and iPhones store your location data even if you’ve used a privacy setting that says it will prevent Google from doing so.”
Yes, it turns out when you turn off location services on your Android smartphone, you’re merely turning off your own access to that location history. Google will still collect and keep that data for its own purposes.
And while some commonly used apps that you would expect to require location data, like Google Maps or daily weather updates, automatically collect location data without ever asking, it sounds like certain searches that should have nothing to do with your location also trigger automatic location collection. And the search-triggered locations obtained by Google apparently give your precise latitude and longitude, accurate to the square foot:
Recall that the initial story from November about Google using cellphone tower triangulation to surreptitiously collect location data indicated that this type of location data was very accurate and could determine whether or not you set foot in a given retail store (it was being used for location-targeted ads). So when this latest report indicates that Google can get your location down to the nearest square foot, that sounds like a reference to that cell tower triangulation technique.
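To make that triangulation point concrete, here’s a minimal sketch of how a handset’s position can be estimated from a few towers, assuming the tower coordinates are known and rough phone-to-tower distances have already been derived from signal timing or strength. The function and numbers are illustrative assumptions; Google’s and the carriers’ actual pipelines are proprietary and far more sophisticated:

```python
# A minimal, illustrative trilateration sketch (NOT Google's or any carrier's
# actual method). Assumes we already know each tower's coordinates and have
# rough phone-to-tower distance estimates from signal timing/strength.
import numpy as np

def trilaterate(towers, distances):
    """Least-squares estimate of a phone's 2D position (meters).

    towers:    (N, 2) array of known tower coordinates
    distances: (N,) array of estimated phone-to-tower distances
    """
    towers = np.asarray(towers, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtracting the first circle equation from the others turns the
    # nonlinear system into a linear one we can solve with least squares.
    A = 2.0 * (towers[1:] - towers[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(towers[1:] ** 2, axis=1)
         - np.sum(towers[0] ** 2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Three hypothetical towers; the phone is actually at (400, 250).
towers = [(0, 0), (1000, 0), (0, 1000)]
distances = [471.7, 650.0, 850.0]
print(trilaterate(towers, distances))  # ~[400. 250.]
```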
The article references that report from November, noting that “Google changed the practice and insisted it never recorded the data anyway”. So Google apparently admitted to stopping something it says it was never doing in the first place. It’s the kind of nonsense corporate response that suggests this program never ended; it was merely “changed”:
And Google does indeed admit that this data is being used for location-based ad targeting, so it sure sounds like nothing changed since that initial report in November:
So is there any way to stop Google from collecting your location history other than using a non-Android phone with no Google apps? Well, yes, there is a way. It’s just seemingly designed to be super confusing and counterintuitive:
Google of course counters that it’s been clear all along:
And while it’s basically trolling the public at this point for Google to act like its location data policies have been anything other than opaque and confusing, that trollish response and those opaque policies do make one thing increasingly clear: Google has no intention of stopping this kind of data collection. If anything, we should expect it to increase given the plans for more location-based services. Resistance is still futile. And not just because of Google, even if it’s one of the biggest offenders. It’s more of a group effort.
Here’s one of those articles that’s surprising in one sense and completely predictable in another: Yuval Noah Harari is a futurist philosopher and author of a number of popular books about where humanity is heading. Harari appears to be a largely dystopian futurist, envisioning a future where democracy is seen as obsolete and a techno-elite ruling class runs companies with the technological capacity to essentially control the minds of the masses. Masses that will increasingly be seen as obsolete and useless. Harari even gave a recent TED Talk called “Why fascism is so tempting — and how your data could power it.” So how do Silicon Valley’s CEOs view Mr. Harari? They apparently can’t get enough of him:
“Part of the reason might be that Silicon Valley, at a certain level, is not optimistic on the future of democracy. The more of a mess Washington becomes, the more interested the tech world is in creating something else, and it might not look like elected representation. Rank-and-file coders have long been wary of regulation and curious about alternative forms of government. A separatist streak runs through the place: Venture capitalists periodically call for California to secede or shatter, or for the creation of corporate nation-states. And this summer, Mark Zuckerberg, who has recommended Mr. Harari to his book club, acknowledged a fixation with the autocrat Caesar Augustus. “Basically,” Mr. Zuckerberg told The New Yorker, “through a really harsh approach, he established 200 years of world peace.””
A guy who specializes in worrying about techno elites destroying democracy and turning the masses into the ‘useless’ class is extra worried about the fact that those techno elites appear to love him. Hmmm...might that have something to do with the fact that the dystopian future he’s predicting assumes the tech elites completely dominate humanity? And, who knows, he’s probably giving them ideas for how to accomplish this domination. So of course they love him:
Plus, Harari appears to view rule by tech executives as preferable to rule by politicians because he sees them as ‘generally good people’. So, again, of course the tech elite love the guy. He’s predicting they dominate the future and doesn’t see that as all that bad:
He’s also predicting that these tech executives will use longevity technology to ‘vastly outlive the useless’, which clearly implies he’s predicting longevity technology gets developed but not shared with ‘the useless’ (the rest of us):
Harari has even gone on to question whether or not humans have any free will at all and to explore the implications of the possibility that technology will allow the tech giants to essentially control what people think, effectively bio-hacking the human mind. And one of the implications he sees from this hijacking of human will is that political parties might not make sense anymore and that human rights are just a story we tell ourselves. So, again, it’s not exactly hard to see why tech elites love the guy. He’s basically making the case for why we should just accept this dystopian future:
So as we can see, it’s abundantly clear why Mr. Harari is suddenly the go-to guru for Silicon Valley’s elites: he’s depicting a dystopian future that’s utopian if you happen to be an authoritarian tech elite who wants to dominate the future. And he’s kind of portraying this future as just something we should accept. Sure, we in the useless class should be plenty anxious about becoming useless, but don’t bother trying to organize against this future, especially since democratic politics is becoming pointless in an age when mass opinion can be hacked and manipulated. Just accept that this is the future and worry about adapting to it. That more or less appears to be Harari’s dystopian message. Which is as much a message about a dystopian present as it is about a dystopian future. A dystopian present where humanity is already so helpless that nothing can be done to prevent this dystopian future.
And that all points towards one obvious area of futurism Harari could engage in that might actually turn off some of his Silicon Valley fan base: exploring the phase of the future after the fascist techno elites have seized complete control of humanity and start warring amongst themselves. Don’t forget that one feature of democracy is that it sort of creates a unifying force for all the brutal wannabe fascist oligarchs. They all have a common enemy. The people. But what happens when they’ve truly won and subjugated humanity or exterminated most of it? Won’t they proceed to go to war with each other at that point? If you’re a brutal cutthroat fascist oligarch of the future sharing power with other brutal cutthroat fascist oligarchs, are you really going to trust that they aren’t plotting to take you down and take over your neo-feudal personal empire? Does anyone doubt that, if Peter Thiel managed to obtain longevity technology and cloning technology, there would someday be a clone army of Nazi Peter Thiels warring against rival fascist elites?
These fascist overlords are also presumably going to be highly reliant on private robot armies. What kind of future should these fascist tech elites expect when they are all sporting rival competing private robot armies? That might sound fun at first, but do they really want to live in a world where their rivals also have private robot armies? Or advanced biowarfare arsenals? And what about AI going to war with these fascist elites? Does Mr. Harari have any opinions on the probability of Skynet emerging and the robots rebelling against their fascist human overlords? Perhaps if he explored how dystopian this tech elite-dominated future could be for the tech elites themselves, and further explored the inherent dangers a high-tech society run by and for competing authoritarian personalities presents to those same competing authoritarian personalities, maybe they wouldn’t love him so much.
You know that scene in the Batman movie The Dark Knight, where Bruce Wayne’s company turns every cellphone in Gotham into a little sonar device used to map out the physical locations of people and objects all across the city? Well, in the future, Batman won’t need to rely on such technological trickery. He’ll just need to be a major investor in Google:
“We’ve seen a number of Soli’s technological breakthroughs since then, from being able to identify objects to reducing the radar sensor’s power consumption.. Most recently, a regulatory order is set to move it into a more actionable phase. The U.S. Federal Communications Commission said earlier this week that it would grant Project Soli a waiver to operate at higher power levels than currently allowed. The government agency also said users can operate the sensor aboard a plane because the device poses “minimal potential of causing harmful interference to other spectrum users.””
A tiny radar-based system for detecting hand gestures. It’s pretty neat! And it’s also apparently able to identify objects too. Pretty neat too! And also a wonderful new source of data for Google since it sounds like Soli can identify the shapes of objects as well as their inner structures:
“The device is called RadarCat (or Radar Categorization for Input and Interaction), and works the way any radar system does. A base unit fires electromagnetic waves at a target, some of which bounce off and return to base. The system times how long it takes for them to come back and uses this information to work out the shape of the object and how far away it is. But because Google’s Soli radars are so accurate, they can not only detect the exterior of an object, but also its internal structure and rear surface.”
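For what it’s worth, here’s a minimal sketch of the time-of-flight arithmetic the excerpt describes, i.e. converting a measured round-trip time into a distance. It’s only the basic ranging idea; the actual Soli signal processing is far more involved than simple pulse timing:

```python
# Basic radar ranging: time how long a pulse takes to bounce back and
# convert the round trip into a one-way distance. (Illustrative only.)
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the target, given the measured round-trip time."""
    # The pulse travels out and back, so halve the total path length.
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# An object ~30 cm away returns an echo after roughly 2 nanoseconds.
print(range_from_round_trip(2e-9))  # ~0.3 meters
```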
That’s how powerful (and invasive) Google’s new radar system is. Not only can it detect the exterior of an object but also its rear surface and internal structure. And it’s even possible it will be able to determine things like the contents of a glass:
So your fancy new Soli-powered Google smartwatch will not only be able to detect whether or not you’re drinking something but might be able to determine what you’re drinking. And that’s what they could do two years ago. It’s presumably much more advanced by this point. You can watch a video here of Soli detecting all sorts of objects, and even colors. The video shows people putting objects right on the radar plate to detect it. But according to a report from 2016, JBL, the speaker manufacturer, was working on a speaker with a built-in Soli sensor that could sense your finger motions up to 15 meters away. In other words, we shouldn’t assume this technology will be used for only detecting objects and motions very close to the sensors. And that’s something that makes the FCC ruling allowing for higher-power devices much more relevant. More power means greater distances.
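On the power-versus-distance point, the classic radar range equation says maximum detection range grows only with the fourth root of transmitted power, so a power increase buys a real but sub-linear range increase. Here’s a tiny illustration of that scaling; the ratios are generic assumptions, not Soli’s actual operating levels:

```python
# Radar range equation scaling: max detection range grows with the fourth
# root of transmitted power, everything else held equal. (Illustrative only.)
def relative_range(power_ratio: float) -> float:
    """Factor by which max range grows when transmit power is multiplied
    by power_ratio."""
    return power_ratio ** 0.25

for ratio in (2, 4, 10):
    print(f"{ratio}x power -> {relative_range(ratio):.2f}x range")
# 2x power -> 1.19x range
# 4x power -> 1.41x range
# 10x power -> 1.78x range
```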
And while the idea of an object dictionary sounds like the kind of thing that would be limited to objects like cups or whatever, don’t forget that Google is probably going to have the kind of information necessary to create an object dictionary of people (height, weight, body type, etc). Also keep in mind that Google already has powerful location tracking services built into the Android smartphone operating system, so Google is already going to know about at least some of the people standing next to your Soli-powered device. So if Google doesn’t already have enough information about you to create a record of you in its object dictionary system, it will probably be able to create that record pretty easily. And if this radar technology can detect sub-millimeter motions of your fingers, it can presumably pick up things like facial expressions, or perhaps even facial structures too. That will presumably also be useful for identifying who happens to be in the vicinity of your Soli device.
It’s all a reminder that motion-based sensor technology might be sensing a lot more than just motion.
Here’s another story about the abuse of smartphone location data. This time it doesn’t involve the smartphone manufacturers or the creators of the operating systems, like Google. Instead, it’s about the sale of your location data by your cellphone service provider. And it appears to include virtually all of the major cellphone operators in the US, which makes this the kind of data abuse that consumers can’t do much about. Because while you could opt not to get an Android-based phone if you don’t like Google’s extensive location data collection, it’s possible there simply isn’t a cellular service provider who doesn’t sell your location data.
And the fact that consumers can’t do much about this issue adds to its scandalous nature, because there is one thing consumers can do: get the government to regulate cellular service providers and stop them from selling this data. And the US government did actually warn the telecom industry about this practice and got assurances that it would end. But as the following article makes clear, that hasn’t happened. Instead, it appears that virtually all cellular providers are making location data available to a broad array of businesses. And some of those businesses are reselling the data to anyone for a substantial profit:
“Although many users may be unaware of the practice, telecom companies in the United States sell access to their customers’ location data to other companies, called location aggregators, who then sell it to specific clients and industries. Last year, one location aggregator called LocationSmart faced harsh criticism for selling data that ultimately ended up in the hands of Securus, a company which provided phone tracking to low level enforcement without requiring a warrant. LocationSmart also exposed the very data it was selling through a buggy website panel, meaning anyone could geolocate nearly any phone in the United States at a click of a mouse.”
So the telecoms sell your location data to “location aggregators”, who are supposed to just sell the data to specific clients and industries. But as a report from last year revealed, that data from one aggregator, LocationSmart, was getting resold to companies like Securus, which was reselling to pretty much anyone online.
This time, Vice Motherboard discovered that Securus was not an anomaly. Microbilt is engaging in a similar practice, buying this data from the location aggregator Zumigo and just reselling it to pretty much anyone:
And the services offered by these companies aren’t limited to the location of the smartphone at the time you request it. You can get continuous tracking services too, for as little as $12.95 per phone for real-time updates:
And, of course, the telecom companies assure us that they are looking into this and will cut off access to this kind of information to irresponsible location aggregators. Which should sound familiar since that’s exactly what they said after Securus’s practices were revealed last year in the face of pressure from Congress:
So what’s to be done? Well, since this industry is apparently going to keep doing this as long as it’s allowed to do so and is profitable, regulating the industry seems like the obvious answer:
“If there is money to be made they will keep selling the data.” And “it’s part of a bigger problem; the US has a completely unregulated data ecosystem.” Those seem like two pretty compelling reasons for serious new regulations.
Following up on the creepy story about the growing industry of selling location data collected from cellphone service providers like T‑Mobile and AT&T, here’s a New York Times article from last month that addresses the extensive amount of data collected by smartphone apps. Among the many fun facts in the article, we learn that Google’s Android operating system allows apps that are not in use to still collect location data “a few times an hour”, rather than only when the app is running. So if you are assuming that your smartphone apps are only spying on you when they’re actually running, you might want to change those assumptions.
The Times discovered at least 75 companies that receive anonymous app location data. And we learn that Peter Thiel is an investor in one of those companies. As we’re going to see, that company, SafeGraph, has plans to offer its location services to government agencies. So given that Thiel’s Palantir is already a major national security contractor specializing in Big Data analysis for a variety of corporate and government clients, the question of whether or not all of this location data is being fed into privatized intelligence databases like Palantir’s is a pretty big question.
We also learn the approximate costs location aggregators are paying for access to user location data:
about half a cent to two cents per user per month. That’s the price of one of your most intimate types of data:
“These companies sell, use or analyze the data to cater to advertisers, retail outlets and even hedge funds seeking insights into consumer behavior. It’s a hot market, with sales of location-targeted advertising reaching an estimated $21 billion this year. IBM has gotten into the industry, with its purchase of the Weather Channel’s apps. The social network Foursquare remade itself as a location marketing company. Prominent investors in location start-ups include Goldman Sachs and Peter Thiel, the PayPal co-founder.”
$21 billion a year in location-targeted advertising. That’s the size of the industry that relies on location data. So it should probably come as no surprise that apps were found to be collecting data on individuals as often as every two seconds, making the deanonymization of this data trivial in many cases:
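As a rough illustration of why dense location traces de-anonymize so easily, here’s a toy sketch: the most frequent nighttime cell is usually home and the most frequent weekday-daytime cell is usually work, and that pair alone narrows things down to a handful of people. The trace, grid resolution and time windows below are made-up assumptions, not anything from the Times’ data:

```python
# Toy home/work inference from a dense location trace (made-up data; real
# de-anonymization work is more careful, but the idea is this simple).
from collections import Counter
from datetime import datetime

def infer_home_and_work(trace):
    """trace: list of (ISO timestamp, grid_cell) pairs, where grid_cell is a
    (lat, lon) tuple already rounded to a coarse grid by the caller."""
    night, workday = Counter(), Counter()
    for ts, cell in trace:
        t = datetime.fromisoformat(ts)
        if t.hour >= 22 or t.hour < 6:               # overnight points
            night[cell] += 1
        elif t.weekday() < 5 and 9 <= t.hour < 17:   # weekday working hours
            workday[cell] += 1
    home = night.most_common(1)[0][0] if night else None
    work = workday.most_common(1)[0][0] if workday else None
    return home, work

trace = [
    ("2019-01-07T23:15:00", (40.713, -74.006)),  # night, at "home"
    ("2019-01-08T02:40:00", (40.713, -74.006)),
    ("2019-01-08T10:05:00", (40.754, -73.984)),  # weekday, at "work"
    ("2019-01-08T14:30:00", (40.754, -73.984)),
]
print(infer_home_and_work(trace))  # ((40.713, -74.006), (40.754, -73.984))
```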
And of the 75 companies the Times found that received this kind of data, several of them claimed to track up to 200 million mobile devices in the US alone, about half of those in use last year. So for some of these location aggregators, a massive portion of all the smartphones in the US were getting tracked by them:
Apps for Google’s Android OS were particularly grabby, which is no surprise given that Android appears to be built to facilitate this kind of data collection. In the most recent version of Android, apps that are not in use can still collect locations “a few times an hour”:
And with no federal US law limiting the collection or use of this kind of data, the market is only going to continue to explode. Especially since access to this kind of data appears to cost just pennies per user per month:
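Some back-of-the-envelope arithmetic, using the per-user prices cited above and the 200 million tracked devices some aggregators claim, shows how cheap population-scale tracking is for the buyer (actual contract terms aren’t public, so treat these as illustrative assumptions):

```python
# Illustrative revenue math only; actual contract terms aren't public.
devices = 200_000_000  # devices some aggregators claim to track
for price_per_user_month in (0.005, 0.02):  # half a cent to two cents
    monthly = devices * price_per_user_month
    print(f"${price_per_user_month:.3f}/user/month x {devices:,} devices = "
          f"${monthly:,.0f}/month (${monthly * 12:,.0f}/year)")
# $0.005/user/month x 200,000,000 devices = $1,000,000/month ($12,000,000/year)
# $0.020/user/month x 200,000,000 devices = $4,000,000/month ($48,000,000/year)
```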
And while targeted advertising is the most common use of this kind of information, keep in mind that this is a spy’s dream market. For example, two different companies, Fysical and SafeGraph, mapped people attending the 2017 Presidential inauguration, with Trump’s phone and those around him pinging away the entire time:
SafeGraph happens to be one of the location aggregators Peter Thiel has invested in. And given that Thiel’s Palantir is exactly the kind of company that would want to use this kind of data for all sorts of intelligence uses — for its many corporate and government clients — the fact that SafeGraph was monitoring the presidential inauguration raises the question of whether or not it was using that data as part of some sort of security service for the government or some sort of intelligence collection service for private clients. It’s a question without an obvious answer because, as the following article about SafeGraph notes, in addition to its many corporate clients, SafeGraph “plans to wade into some thorny business categories, like government agency work and heavily regulated spaces like city development, driverless cars and traffic management.” And Peter Thiel happens to be one of SafeGraph’s early investors:
“And SafeGraph does have plans to wade into some thorny business categories, like government agency work and heavily regulated spaces like city development, driverless cars and traffic management.”
City development and government agency work. Yeah, that’s going to be a ‘thorny’ issue for a location aggregator. But it will probably be a lot less thorny for SafeGraph with someone like Peter Thiel as one of its early investors:
Keep in mind that we’ve already learned about Palantir incorporating GPS location data into the services it provides clients when we learned about the insider-threat assessment program Palantir was running for JP Morgan, although in that case it was location data from the company cellphones JP Morgan provided its employees. But if Palantir’s internal algorithms are already set up to incorporate location data, it’s hard to see why the company wouldn’t be utilizing the vast location data brokerage industry that’s offering this data for pennies a month.
So as we can see, the thorny issue of location aggregator companies like SafeGraph selling their data to government agencies is probably going to be a lot less thorny than it should be, due to the fact that Thiel is one of the early investors in this space and Palantir has already had so much success in these kinds of thorny commercial areas.
The fact that there are no federal laws regulating the collection and use of this kind of data presumably helps with the thorniness.
This Daily Mail article, based on an investigation by Motherboard, shows that 250 bounty hunters were able to obtain precise location data for cell phones from at least 3 cellular providers (AT&T, T‑Mobile and Sprint). In one case a bail bond firm requested location data 18,000 times. One company, CerCareOne, operated in secret from 2012 to 2017, requiring a confidentiality agreement from its paying clients so it could perform these tasks without public scrutiny. This violates users’ rights and raises serious confidentiality concerns for customers of those cellular services.
Although not stated in the article, the potential to blackmail politicians by knowing their specific whereabouts, or to track suspected leakers or other contacts, is very concerning.
https://www.dailymail.co.uk/sciencetech/article-6679889/Bombshell-report-finds-cellphone-carriers-sell-location-data-bounty-hunters.html
Nearly every major US cellphone carrier sold precise location data to BOUNTY HUNTERS via a ‘secret phone tracking service’ for years, bombshell report finds
• AT&T, Sprint, T‑Mobile promised to stop selling user data to location aggregators
• But a new investigation has discovered the firms were selling location data more broadly than previously understood, with hundreds of bounty hunters using it
• One company used ‘A‑GPS’ data to locate where users are inside of a building
By ANNIE PALMER FOR DAILYMAIL.COM
PUBLISHED: 15:45 EST, 7 February 2019 | UPDATED: 16:11 EST, 7 February 2019
A shocking new report has found hundreds of bounty hunters had access to highly sensitive user data — and it was sold to them by almost every major U.S. wireless carrier.
The practice was first revealed last month and, at the time, telecom firms claimed they were isolated incidents.
However, a Motherboard investigation has since discovered that’s far from the case. About 250 bounty hunters were able to access users’ precise location data.
In one case, a bail bond firm requested location data some 18,000 times.
AT&T, T‑Mobile and Sprint sold the sensitive data, which was meant for use by 911 operators and emergency services, to location aggregators, who then sold it to bounty hunters, according to Motherboard.
HOW WERE THEY ABLE TO TRACK A USER’S LOCATION?
Motherboard first reported how bounty hunters were selling access to users’ real-time location data for only a few hundred dollars.
Bounty hunters obtained the data from location aggregators, who have partnerships with AT&T, Sprint and T‑Mobile.
They were able to estimate and track a user’s location by looking at ‘pings’ from phones to nearby cell towers.
But companies could also collect assisted-GPS, or A‑GPS, which could even guess a user’s location inside a building.
The companies pledged last month to stop selling users’ location data to aggregators.
Location aggregators collect and sell user location data, sometimes to power services like bank fraud prevention and emergency roadside assistance, as well as online ads and marketing deals, which depend on knowing your whereabouts.
Motherboard discovered last month that bounty hunters were using the data to estimate a user’s location by looking at ‘pings’ sent from phones to nearby cell towers.
But it appears that the data was even more detailed than previously thought.
CerCareOne, a shadowy company that sold location data to bounty hunters, even claimed to collect assisted-GPS, or A‑GPS, data.
This A‑GPS data was able to pinpoint a person’s device so accurately that it could see where they are in a building.
Telecom companies began collecting this data in order to give 911 operators a more accurate location for users when they’re both indoors and outdoors.
Instead, it was being sold to aggregators, who then sold it to bail bondsmen, bounty hunters, landlords and other groups.
A bail agent in Georgia told Motherboard it was ‘solely used’ to locate ‘fugitives who have jumped bond.’
Neither AT&T, T‑Mobile nor Sprint explicitly denied selling A‑GPS data, according to Motherboard.
CerCareOne was essentially cloaked in secrecy when it operated between 2012 and 2017, requiring its customers to agree to ‘keep the existence of CerCareOne.com confidential,’ Motherboard said.
Location aggregators use the data from carriers to estimate a user’s location by looking at ‘pings’ sent to cell towers. But they’ve also been found to sell assisted-GPS, or A‑GPS, data, which can pinpoint a person’s device so accurately they can see where they are in a building
The company often charged up to $1,100 every time a customer requested a user’s location data.
CerCareOne said it required clients to obtain written consent if they wanted to track a user, but Motherboard found that several users received no warning they were being tracked, resulting in the practice often occurring without their knowledge or agreement.
While CerCareOne is no longer operational, its prior use and existence by location aggregators raises serious concerns about how users’ data is being utilized by these companies.
AT&T and other telecoms sought to minimize the use of CerCareOne.
‘We are not aware of any misuse of this service which ended two years ago,’ the firm told Motherboard.
‘We’ve already decided to eliminate all location aggregation services—including those with clear consumer benefits—after reports of misuse by other location services involving aggregators.’
At least 15 U.S. senators have urged the FCC and the FTC to take action on shadowy data broker businesses, according to Motherboard.
‘This scandal keeps getting worse,’ Democratic U.S. Senator Ron Wyden told Motherboard.
‘Carriers assured customers location tracking abuses were isolated incidents. Now it appears that hundreds of people could track our phones, and they were doing it for years before anyone at the wireless companies took action.
‘That’s more than an oversight — that’s flagrant, wilful disregard for the safety and security of Americans,’ he added.
Here’s an alarming update on the growing sophistication of ‘deepfake’ video technology: Samsung just figured out how to create deepfake videos using a single photo of a person. Previously, deepfake software required a large number of photos of someone from different angles but with Samsung’s approach a single photo is all that is required. Samsung literally deepfaked the Mona Lisa as a demonstration.
The new approach isn’t perfect when only a single photo is available and will still generate small noticeable errors. But as the article notes, those errors may not really matter when it comes to the potential propaganda uses of this technology, because propaganda doesn’t necessarily have to be realistic to be effective. That was underscored over the past day when a crudely altered video of House Speaker Nancy Pelosi, slowed down to make her look drunk, went viral on social media and was treated as real:
“Software for creating deepfakes — fabricated clips that make people appear to do or say things they never did — usually requires big data sets of images in order to create a realistic forgery. Now Samsung has developed a new artificial intelligence system that can generate a fake clip by feeding it as little as one photo.”
Just one photo. That’s the new threshold for a deepfake, albeit a somewhat crude deepfake. But when a crudely doctored video of House Speaker Nancy Pelosi can go viral, it’s hard to see why crudely done deepfakes aren’t going to have similar propaganda value:
And keep in mind that for someone like Nancy Pelosi, there’s no need for a deepfake to be based on a single photo. There’s plenty of footage of her available for anyone who wants to create a more realistic deepfake of her. And as the following article makes clear, when those realistic deepfakes are created we should have absolutely no expectation that the right-wing media won’t fully jump on board promoting them as real. Because it turns out that blatantly doctored video of Nancy Pelosi didn’t simply go viral on social media. It was also aggressively pushed by President Trump and Fox News. Beyond that, the timing of the doctored video is rather suspicious: On Thursday, Trump held a press conference where he claimed that Pelosi was “deteriorating” and has “lost it”. It was right around this time that the doctored video started getting disseminated on social media. Then, by Thursday evening — which is AFTER this video was already identified as being modified — Trump proxies Rudy Giuliani and Corey Lewandowski showed up on Fox News to further push the meme that Pelosi is mentally losing it. Trump also retweeted a Fox Business compilation of Pelosi tripping over her words during a press conference. Giuliani actually tweeted out the fake video. So we appear to be looking at a coordinated GOP effort to knowingly use a fake video to score political points:
“Around the same time, a doctored video of Pelosi with the audio slowed to make it seem like she was drunkenly slurring her words started disseminating through different social media platforms. That video has racked up millions of views.”
Yep, around the same time Trump gives a press conference where he asserts that Pelosi is “deteriorating” and has “lost it”, we find this fake video suddenly getting pushed on social media, and then Trump proxies show up on Fox News and Fox Business Thursday evening to continue pushing the meme. Giuliani even pushed the doctored video on Twitter.
And as the following article notes, this meme WAS THE MAIN TOPIC on Laura Ingraham’s prime time Fox News evening program and Lou Dobbs’ Fox Business show. This wasn’t just a joke Fox was briefly covering. It was a central prime-time topic Fox was taking seriously:
“But Trump’s suggestion that Pelosi, 79 — six years older than himself — is getting too old for her job seems part of a larger campaign. It was a main theme Thursday night on Laura Ingraham’s Fox News program and Lou Dobbs’ show on Fox Business — Trump tweeted a Dobbs clip featuring selectively edited video of Pelosi, plus GOP strategist Ed Rollins saying Pelosi appears addled by age. Trump loyalist Corey Lewandowski was also on Dobbs, alluding to a different, doctored video of Pelosi spreading around the internet. Trump’s lawyer Rudy Giuliani tweeted, then deleted, that video Thursday night, with the comment: “What is wrong with Nancy Pelosi? Her speech pattern is bizarre.””
Fox News’s main theme was a big blatant lie. On one level, that’s just a typical evening of Fox programming. But the fact that there was an aggressive propaganda campaign using an easily falsifiable video and this campaign got even more aggressive even after the media debunked the video demonstrates exactly the kind of mentality that will find the use of deepfakes irresistible. Also keep in mind that, as we’ve seen in elections in Brazil and India, encrypted messaging apps like WhatsApp are increasingly the medium of choice for disseminating misinformation and it’s extremely difficult to identify and respond to disinformation spread over these encrypted platforms. Will 2020 be the first deepfake election for the United States? We’ll find out soon, but the fact that American politics is already marinating in deepfake versions of reality suggests that the only thing preventing deepfakes from being used in politics is access to the technology. Technology that’s advancing so fast even the Mona Lisa can be deepfaked.
With a vote on impeachment slated for tomorrow in the House of Representatives, largely over a scheme to extort the Ukrainian government into ginning up an investigation into Joe Biden, and with Rudy Giuliani now claiming he has a whole new round of dubiously sourced criminal allegations against the Bidens, it’s pretty clear that political dirty tricks, hit jobs, and smears are going to remain central to the GOP’s political strategy for the 2020 election.
But as the following article reminds us, there’s no reason to assume those dirty tricks, hit jobs, and smears have to rely on figures like Giuliani soliciting/extorting ‘evidence’ from foreign governments and oligarchs. In the age of ‘deepfake’ videos, the dirty trick smear can be generated by some anonymous operative with a simple application of deepfake software. All you need is some footage of your target.
And that’s where the new technology described in the following article comes in. Computer scientist Hany Farid is working on an algorithm for detecting deepfakes. Critically, the algorithm promises to allow for rapid detection: deepfakes can go viral within hours, so rapid identification of fake video is crucial to minimizing the damage to elections.
Unfortunately, it sounds like Farid’s system can only detect deepfakes of figures for whom a large amount of real footage already exists, but political candidates tend to have a lot of existing footage, so the system can potentially work for protecting elections. It works by taking the hours of real footage of someone, analyzing it to detect subtle verbal and facial tics, and then trying to detect those quirks in the potential deepfake video. For example, Elizabeth Warren has a habit of moving her head from left to right, so Farid’s software would first learn that signature head motion from hours of real Elizabeth Warren footage and then try to spot that same unique quirk in the footage to be tested. Farid calls these tics “soft biometrics”.
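To give a sense of the ‘soft biometric’ idea, here’s a minimal sketch under the assumption that per-frame behavioral measurements (say, head yaw angles from some face-tracking tool) are already available: build a statistical profile from hours of authentic footage and flag clips whose motion statistics fall far outside it. This is only the gist; Farid’s actual system is far more sophisticated:

```python
# Sketch of the "soft biometric" idea: profile a person's habitual head
# motion from authentic footage, then score how far a suspect clip deviates.
# Assumes per-frame head yaw angles were already extracted by some
# face-tracking tool (not shown); everything here is illustrative.
import numpy as np

def build_profile(authentic_yaw):
    """Mean and std of frame-to-frame head motion in authentic footage."""
    deltas = np.diff(np.asarray(authentic_yaw, dtype=float))
    return deltas.mean(), deltas.std()

def suspicion_score(profile, suspect_yaw):
    """z-score of the suspect clip's average head motion against the
    authentic profile; the larger the score, the less the clip moves
    like the real person."""
    mean, std = profile
    deltas = np.diff(np.asarray(suspect_yaw, dtype=float))
    stderr = std / np.sqrt(len(deltas))
    return abs(deltas.mean() - mean) / (stderr + 1e-9)

# Made-up example: the real person habitually sweeps their head left-to-right;
# the suspect clip is unusually static.
rng = np.random.default_rng(0)
authentic = np.cumsum(rng.normal(0.3, 0.5, 5000))  # drifting yaw, in degrees
suspect = np.cumsum(rng.normal(0.0, 0.05, 300))
print(f"{suspicion_score(build_profile(authentic), suspect):.1f} sigma")
```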
Will Farid’s software be ready in time for 2020? Let’s hope so. But as the growing number of deepfake stories has already made clear, deepfake technology will definitely be ready in time for 2020 because it’s already here, so the deepfakes are coming in 2020 whether we’re ready or not:
““What we are concerned about is a video of a candidate, 24-to-48 hours before election night,” Farid said. “It gets viral in a matter of minutes, and you have a disruption of a global election. You can steal an election.””
A matter of minutes. That’s all it takes for videos to go viral. And if it’s an embarrassing or damning fake video of a presidential candidate in a compromising position released at the last minute right before voting, that’s potentially enough to steal an election in a matter of minutes. So whether or not we see an election stolen with deepfakes might depend on how obvious the “soft biometric” tells are for the eventual nominee. Will the eventual Democratic nominee have a distinct enough style of talking to thwart deepfakes? Let’s hope so. But according to Farid, everyone has their little tells, so hopefully this really is an approach that can work for virtually any public figure:
Also recall that when the distorted video of Nancy Pelosi went viral, the problem wasn’t just that Facebook refused to pull the video. The problem was also that the Trump team was aggressively promoting the video as if it were real and began pushing the meme that Nancy Pelosi was getting too old to do her job. So earlier this year we had the Trump team actively and openly exploiting a fake video against a political rival:
But it’s also going to be important to keep in mind that, even if Farid’s system works for public figures, it’s not foolproof. After all, it sounds a lot like the ‘pattern recognition’ approach taken to assigning responsibility for computer hacks. As we’ve seen, the pattern recognition approach is unfortunately vulnerable to spoofing, where someone who knows what the expected ‘clues’ are that a hack was done by Russians can intentionally leave those ‘clues’ in a hack to mislead security analysts. As long as there’s a known pattern for people to look for, it’s hard to see what’s to stop bad actors from trying to match those known patterns, including with deepfakes. After all, what’s to stop the Trump team from getting their hands on software similar to Farid’s, running it to identify those distinct verbal and facial tics for the Democratic nominee, and then hiring an impersonator to make deepfakes that include those personal ‘tells’?
So while it’s great to hear that people are working on tools to prevent bad actors from stealing elections with the strategic use of deepfakes, keep in mind that the bad actors are going to have their own tricks. Tricks that include hiring literal bad actors. ‘Bad’ in the moral sense. They’ll presumably have to be pretty good at acting to trick the deepfake detection systems.
Have we finally entered the era of deepfake politics? That’s the disturbing prospect raised by a story that, on its surface, appears to be kind of a bad joke:
The New York Post published a story that is purportedly based on the contents of three laptops Hunter Biden dropped off at a computer repair shop in April of 2019 but never paid for the service and never tried to pick up again. The laptops apparently contained incriminating emails that demonstrate that Joe Biden met with a Burisma executive in 2015, something that wouldn’t actually be scandalous if it happened and instead would be entirely consistent with US foreign policy towards Ukraine at that point. As we should expect, much of the right-wing media hoopla over this has focused on that aspect of the story to try to suggest that it is a validation of all of the various Burisma-related allegations that were at the heart of the #UkraineGate impeachment scandal.
But there’s another part of this story that raises the intriguing possibility that we’re seeing an early use of deepfake technology for political purposes: the laptops also allegedly include a number of incriminating photos of Hunter Biden like photos of him with a crack pipe in his mouth and a video of him with an unidentified woman. So if we take this story at face value, we are supposed to believe that Hunter Biden took three of his laptops filled with incriminating videos and emails to a computer repair store in 2019 and never bothered to even try to pay for the services or pick them up again.
Oh, and it turns out the owner of the computer repair store not only contacted the FBI about his concerns — he claims to have been fearful for his life after determining the contents, citing the Seth Rich story — but he also got in touch with Rudy Giuliani and handed copies of the contents to Giuliani’s attorney Robert Costello in December of 2019. At some point Steve Bannon got his hands on the data, and Bannon has apparently been telling news outlets about these laptops since September of this year, before Giuliani gave the documents to the New York Post. Giuliani is hinting there’s more to come. So the chain of custody for these laptops runs from the computer repair shop owner to Rudy Giuliani and Steve Bannon.
Oh, and as we’ll see in the second article excerpt below, it turns out the computer repair shop owner, John Paul Mac Isaac, is a big Trump fan who insists Trump’s impeachment was a sham. He’s far less sure about the details of the story, giving reporters inconsistent answers about the timeline and about his interactions with the FBI. When asked if he was sure that it was Hunter who dropped the laptops off, Mac Isaac claimed he was unable to see who dropped them off due to a medical condition but inferred it was Hunter because of a Beau Biden Foundation sticker on one of the laptops. And he refused to answer questions about whether or not he was in contact with Rudy Giuliani before the laptops were dropped off at his shop. This is the guy who is the sole source for the story of the laptops’ origin.
The whole story looks so sloppily shady you have to wonder why they decided to go with it...except for the fact that the New York Post story had what appeared to be photos of Hunter with a crack pipe dangling out of his mouth and a longer sex tape. So given that this whole story has the appearance of being a sloppy hoax intended to titillate right-wing audiences and little more, we have to ask: are those photos and videos real and somehow stolen? Or are we looking at the long-awaited ‘deepfake’ October Surprise? If so, it’s a rather ironic deepfake October Surprise, since it’s being promoted by an obviously fake cover story:
“What adds newness to the mess is that the emails uncovered by the New York Post supposedly came from the younger Biden’s laptop itself, theoretically piercing the veil of the Biden family and offering a look within, much in the same way that Wikileaks offered an inside look at the Clinton campaign via the Podesta emails.”
It’s the crux of the story: is this really Hunter Biden’s laptop and really his emails? Did he actually keep videos of himself smoking crack on this laptop and then take it to the repair shop? And did Hunter then neglect to ever return to the shop to try to pick the laptops up? And did this shop owner hand the copy of the laptop contents over to Rudy Giuliani and Steve Bannon, who didn’t modify them at all? All of that has to be believed if we’re to believe this story:
And now here’s an interview the shop owner, John Paul Mac Isaac, gave where he claims to be fearing for his life, citing Seth Rich, at the same time he can’t even get his story straight as to whether he reached out to the FBI or the FBI reached out to him. And when asked whether or not he was in contact with Rudy Giuliani before the laptops were allegedly dropped off by Hunter Biden, Mac Isaac refused to answer, other than to refer to Giuliani as his “life guard”:
“Mac Isaac appeared nervous throughout. Several times, he said he was scared for his life and for the lives of those he loved. He appeared not to have a grasp on the timeline of the laptop arriving at his shop and its disappearance from it. He also said the impeachment of President Trump was a “sham.” Social media postings indicate that Mac Isaac is an avid Trump supporter and voted for him in the 2016 election.”
A Trump super fan. That’s the single source the entire story is based on. A guy who calls Giuliani his “life guard” but refused to answer any questions as to whether or not he was in contact with Giuliani before the laptops were dropped off:
Keep in mind that if Mac Isaac first handed this over to Giuliani in December of 2019 that means they’ve had 10 months to concoct something more impressive than this. This was apparently the best they could do. It’s so sloppy we have to wonder if Jacob Wohl is going to be revealed to be involved.
But while the cover story is garbage, it’s possible the actual deepfakes are of much higher quality, and those are the items that purport to lend authenticity to the rest of the contents of the laptop, like the emails. It points towards an application of deepfake technology that’s going to be important to keep in mind going forward: using deepfakes as a means of seemingly validating the content of other files, like emails, found on a hard drive. It’s an application of deepfakes that’s going to be especially important to keep in mind now that the GOP under Trump has embraced hacking your opponent as a valid re-election strategy.
Here’s a pair of articles about two different stories that are really part of the same larger story we’ve heard before: the story of how our location information is being routinely collected by smartphones and sent off to advertisers. As we’ve seen, smartphone operating system makers like Google have already been caught collecting location data and presumably incorporating it into their advertising algorithms, even when location-tracking is turned off. Then there are the stories about cellphone service providers like Sprint, AT&T, and T‑Mobile getting caught selling location information on the open market to entities that include bounty hunters. So it should come as little surprise to learn that the apps running on smartphones are also collecting location information and sending it off to advertisers too. Those were the findings of two separate reports in the last week. The first covers a recent study of the information collected by apps targeting kids. And while the study found that only around 7 percent of the child-targeted apps it examined were collecting location information, it also found that almost all of the most popular apps were indeed collecting location information. In other words, almost every kid with a smartphone is getting cyberstalked by the advertising industry without their parents realizing it. And as the article reminds us, many children can’t distinguish ads from content. So when you hand over powerful information to advertisers that allows for ever more effective microtargeting, you’re effectively setting up a system for maximally persuasive micro-targeted messaging aimed at children.
And all of this is happening despite a US law — the 1998 Children’s Online Privacy Protection Act, or COPPA — which prevents companies from gathering personal information about kids under 13 without parental permission. So how do Google and Apple deal with these risks? By largely ignoring them and relying on a loophole that allows app developers to declare that their child-targeting apps are actually for “all ages” and pretend it’s actually adults using these apps. Yep.
But mass location tracking via apps isn't just for kids. As we'll see in the second article below, it's also for Canadians. That's what was just revealed in a shocking new report by Canadian federal privacy investigators into a massive privacy violation carried out by an app created by none other than Tim Hortons. The app, released in 2017, was supposed to be a standard restaurant promotional app. It originally tracked individual users in order to give them offers at nearby locations. But in 2019, the company apparently decided to drop the individual location tracking and engage in aggregate tracking to look at broader user patterns. But in making this shift, the company didn't just start mass-aggregating the location data it was collecting. It also began collecting location data 24/7, whether or not the app was open and even during overseas trips. It's the kind of story that, on its own, is rather shocking, but it also shows what is possible. Because this means both Apple and Google built smartphone operating systems that allow app developers to unilaterally start collecting this kind of information on phones where the app is installed, without telling anyone. Tim Hortons just decided they wanted 24/7 location data and, voila, they had it. What are the odds Tim Hortons was the only company on the planet that decided to do this?
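To make that distinction concrete, here's a minimal sketch of the difference between the “aggregate” analysis the company says it wanted and the raw data that has to be collected to produce it. Everything in it is hypothetical and illustrative; it is not Tim Hortons' actual pipeline or code:

```python
# Illustrative sketch only -- hypothetical data, not any company's actual pipeline.
# The point: "aggregate" analysis still starts from raw, per-user, timestamped pings.
from collections import Counter

# Each ping is what a background-enabled app could log: (user_id, lat, lon, timestamp)
raw_pings = [
    ("user_001", 43.6532, -79.3832, "2019-06-01T08:15:00"),  # Toronto, app open
    ("user_001", 43.6532, -79.3832, "2019-06-01T22:40:00"),  # still Toronto, app closed
    ("user_002", 45.5019, -73.5674, "2019-06-01T09:05:00"),  # Montreal
]

def coarse_cell(lat, lon, precision=1):
    """Round coordinates into a coarse grid cell for 'aggregate' reporting."""
    return (round(lat, precision), round(lon, precision))

# The aggregate view analysts see: visit counts per coarse cell...
aggregate = Counter(coarse_cell(lat, lon) for _, lat, lon, _ in raw_pings)
print(aggregate)

# ...but the detailed per-user trail still exists upstream, which is the privacy problem.
per_user_trail = {}
for user, lat, lon, ts in raw_pings:
    per_user_trail.setdefault(user, []).append((ts, lat, lon))
print(per_user_trail["user_001"])
```

The aggregate view is derived after the fact, so the detailed per-user trail still has to be collected and stored somewhere upstream.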
Ok, first, here's the report on the mass spying on children. Mass spying that appears to be enabled by a loophole that allows app developers to create apps for kids and then pretend they don't realize kids are using them:
“Apps are spying on our kids at a scale that should shock you. More than two-thirds of the 1,000 most popular iPhone apps likely to be used by children collect and send their personal information out to the advertising industry, according to a major new study shared with me by fraud and compliance software company Pixalate. On Android, 79 percent of popular kids apps do the same.”
The vast majority of the most popular apps used by kids are collecting information and sending it back to advertisers every single time kids open these apps. Information that includes the locations of these phones and other information that potentially helps advertisers micro-target their ads to young minds that often can’t distinguish ads from content. And while only around 7 percent of apps studied by Pixalate were sending location information back to advertisers, almost all of the most popular apps did so. In other words, children across the world are being systematically cyberstalked by the advertising industry:
So what are Google and Apple doing about the pervasive child-stalking by the apps in their app stores? Providing the loopholes being exploited by the app developers and denying there’s a problem, as we should expect:
And there we have it: every kid with a smartphone is being legally cyberstalked by the advertising industry. Every time they open popular apps, their whereabouts are being sent to an advertising industry focused on building personal profiles on everyone. That's part of the context of this story: it's not just that these apps are sending location information to advertisers. That location information is then going to be merged with all of the other data points in the privately owned personal profiles already collected on these kids. The synergistic potential of this information is a big part of this.
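For a sense of what “sending it back to the advertising industry” can look like in practice, here's a hypothetical sketch of the kind of fields an in-app ad SDK might bundle into a single ad request. The field names loosely follow OpenRTB-style conventions, but nothing here is taken from the Pixalate study or from any specific app:

```python
# Illustrative only: a hypothetical ad request of the kind an in-app SDK might send.
# Field names loosely follow OpenRTB conventions; all values are made up.
import json

ad_request = {
    "app": {"bundle": "com.example.kidsgame"},           # which app the child is using
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",   # resettable advertising ID
        "geo": {"lat": 40.7128, "lon": -74.0060},        # device location, if granted
    },
    "user": {"yob": 2014},                                # declared or inferred year of birth
}

# Once serialized and POSTed to an ad exchange, these fields can be joined with every
# other request carrying the same advertising ID -- that is the profile-building step.
print(json.dumps(ad_request, indent=2))
```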
But as the following NY Times piece about another massive app-based geolocation privacy violation warns us, the problem with app-based location tracking isn't just an issue of apps sending location information when you open the app and are using it. It turns out app developers can build their apps to collect location information 24/7, even when the app is closed. This was just one of the stunning details revealed in an explosive new story out of Canada about the mass spying that was taking place via the popular Tim Hortons app. The app, which was first released in 2017, was supposed to just be a standard promotional app that would provide people with discounts and promos. But in 2019, the company quietly decided to start tracking user location data. And not just the location data but also whether that location was a house, factory or office. In many cases, the name of the building was included. And while the app notified users that their location information was being collected while the app was open, it turns out it was collecting that information all the time. Even when the app was closed. And even when users traveled overseas. This was global 24/7 location tracking, secretly incorporated into a wildly popular app:
“Despite being foreign owned since 2014, Tim Hortons still waves the Canadian flag as vigorously as it can. But last week, a scathing report by the federal privacy commissioner and three of his provincial counterparts laid out in great detail how Tim Hortons ignored a wide array of laws to spy on Canadians, creating “a mass invasion of Canadians’ privacy.””
A mass donut-based invasion of privacy across Canada. That's what Tim Hortons was just caught engaging in for years. It started with the rollout of the Tim Hortons app in 2017. Initially, it was just a promo app. But in 2019, the company decided to just collect location data 24/7. But this story isn't just about the mass privacy violation that took place. It's also about how these apps are apparently able to track user locations even when the app is closed. So Apple and Google built smartphone operating systems that allow any random app developer to incorporate these kinds of 24/7‑snooping features without telling anyone. It's the kind of revelation that suggests this mass collection of data by closed apps is probably ubiquitous. Google and Apple clearly aren't doing anything about it:
And just in case it wasn’t clear that Google and Apple don’t actually care about these kinds of violations, the complete lack of any response to this story should make it clear:
So if Tim Hortons unilaterally decided to start tracking everyone's location 24/7 without telling anyone, how many other apps are doing it? Was Tim Hortons uniquely reckless in this manner? That's hard to imagine. So how many other popular apps are just mass-tracking everyone's location 24/7? Back in 2016, Uber announced it was going to start tracking user locations even after the app is closed, which was a rather chilling update from a company that had been caught tracking user locations 24/7 in “God Mode” just two years earlier. The functionality of secret 24/7 tracking is clearly something Apple and Google want their smartphones to be capable of. So how widespread is this? We have no idea, but if you're a kid in Canada who loves donuts, you can be pretty confident your whereabouts are extremely well known to strangers looking to profit off of you someday.
Is the ‘ChatGPT boom’ over already? It's a question that's been bouncing around the media lately, at the same time the simultaneous WGA and SAG strikes drag on with no clear end in sight. Strikes that place the future potential of ChatGPT-like technology at the heart of the seemingly unresolvable disputes between content creators on one side and studio managers and investors on the other.
So with questions about whether or not ChatGPT has been overhyped now being raised, here's a piece that points out one of the most important details in this whole ‘will ChatGPT change the world, or not?’ debate: the ChatGPT that the public has been allowed to see is a dumbed-down version of what already exists behind the scenes. And those unthrottled versions are already really good at what they do. Arguably good enough to replace most of the writers in Hollywood, if not now, then soon:
“When most people think about artificial intelligence, they think about ChatGPT. What they don’t know is that way more powerful AI programs already exist. My friend from OpenAI (hey Dan) has shown me some that are not available to the public and they have absolutely scared the hell out of me.”
Yes, it turns out the waves of ChatGPT-fueled content that everyone has been marveling over this year have been based on a dumbed-down version of what already exists behind the scenes. And there's no way the studios currently hoping to break the back of the writers union don't already know this:
It’s worth keeping in mind that it’s not just the writers and actors who are obviously threatened by this technology. Advances in AI are presumably going to encroach on all aspects of motion picture production, from AI-generated actors to post-production editing. It’s only a matter of time for an AI directed film gets made.
And, of course, eventually, the studios themselves won’t really be needed. There’s nothing preventing a future where freely available AIs are generating personalized content, a scenario ironically hinted at in the ‘Joan is Awful’ episode of Netflix’s Black Mirror. But unlike in ‘Joan is Awful’, it’s not like Netflix or any other studio will necessarily be the ones delivering that personalized AI-generated content. Why would they be necessary once the technology is advanced enough?
Once you’ve replaced all of the various creative and technical aspects of TV and film making with AI, the only real ‘value’ left for humans to ‘produce’ will be when the owners of intellectual property make that property available for the creation of more content. Content that will include the likeness of actors, should the studios win out. It’s all a reminder that, for all of the very valid concern about AIs replacing the work Hollywood does, one of the biggest applications for AI in Hollywood’s future will probably be AI-powered intellectual property lawsuits protecting the content no human actually made.
If someone handed you a list of 1 million people with Ashkenazi DNA, what could you do with that data? It's a question we're all forced to ask thanks to the latest mass data breach. This time from consumer genetics company 23andMe. No genetic data was stolen, thankfully, but plenty of other data was taken on roughly half of the company's 14 million users. Data that included usernames, regional locations, profile photos, and birth years. And ancestry, like whether or not you have Ashkenazi DNA. Someone stole that data on roughly 7 million people, and just put a subset of that data up for sale: roughly 1 million people who have Ashkenazi DNA. Prices ranged from $1,000 for 100 profiles up to $100,000 for 100,000. Notably, the username of the person offering this data happens to be “Golem”, a reference to Jewish mythology.
According to 23andMe, that list can include people who have as little as 1% Ashkenazi DNA. It's not clear if that percentage is included in the scraped profile data, which adds another twist to this story: a giant list of ‘Ashkenazi Jews’ is up for sale by ‘Golem’, but it likely includes a large number of people with just a trace of that ancestry. Which, again, raises the question: what can bad actors actually do with this data?
As we’re also going to see, there appears to be another smaller data set available: roughly 300,000 people with Chinese ancestry. So the two datasets this hacker is leading with target Jewish and Chinese 23andMe users.
So how did this happen? Well, that's part of what makes this a significant story: it appears the hackers exploited the ‘DNA Relative match’ feature, where users could allow other users who might be related to view their basic profiles. In other words, if you manage to hack a relatively small number of 23andMe accounts, you can potentially scrape the basic profile info of ALL their potential relatives too, in a story that has echoes of the Cambridge Analytica mass-scraping scandal. And that's exactly what happened, with the hackers managing to hack a relatively small number of accounts and scrape the info on the rest of the 7 million accounts. Because it turns out we're all pretty related, which is kind of the ‘One Love’ silver lining here.
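A minimal sketch of that fan-out, using entirely made-up numbers, shows how a comparatively tiny set of hijacked logins can expose the basic profiles of a vastly larger pool of ‘relatives’ who were never themselves hacked:

```python
# Hypothetical illustration of the scraping fan-out described above. All numbers are
# made up; only the general mechanism (compromise a few accounts, then harvest each
# account's relative-match list) mirrors what was reported.
import random

random.seed(0)
opted_in_users = 700_000        # users who enabled relative matching (illustrative)
hijacked_logins = 1_000         # accounts the attackers managed to log into (illustrative)
matches_per_account = 500       # hypothetical average size of a relative-match list

exposed_profiles = set()
for _ in range(hijacked_logins):
    # Each hijacked account can view the basic profiles of all of its matches,
    # so every match gets scraped even though that user was never directly hacked.
    exposed_profiles.update(random.sample(range(opted_in_users), matches_per_account))

print(f"{hijacked_logins:,} hijacked logins exposed {len(exposed_profiles):,} profiles")
```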
So how did the hackers hack that relatively small number of accounts in the first place? Well, it appears they just used usernames and passwords released from previous hacks. You know all those reports over the years about large numbers of email addresses and passwords that get leaked or stolen? That info was what was apparently used. It’s potentially one of the biggest angles of this story: the leaked passwords of yesteryear from completely different hacks got used for this massive hack. It’s like a mega hack cascade:
“The data does not include genomic details, which are especially sensitive, but does include usernames, regional locations, profile photos, and birth years. The usernames are often something other than full legal names.”
A trove of names, birth dates, and geographic locations. On the surface, it could obviously be much worse as far as mass data breaches go. Especially for a company storing sensitive genetic information, which fortunately wasn't affected by this breach. Instead, it appears a relative handful of accounts were genuinely hacked — and presumably had much more information stolen — allowing for the mass scraping of the general profile data on all of the potential relatives of those hacked users. At least for the users who agreed to the ‘DNA Relative matches’ data-sharing feature, which appears to be roughly half of the 14 million users:
But while the general hack itself appears to have been relatively limited in terms of the sensitivity of the information stolen, that doesn't mean the data doesn't have potential value for bad actors. Like bad actors who specifically want a list of everyone with any hint of Ashkenazi Jewish ancestry. The fact that the people selling this data started off with an offer of lists of Ashkenazi Jewish users under the username “Golem” makes clear that even lists of relatively generic data can be used for malicious intent:
It’s an awful story that could have been much worse. And as the following article suggests, it might actually be worse. For starters, the leaked data doesn’t just include a list of roughly 1 million people with Ashkenazi DNA. There was also a list of 300,000 people with Chinese DNA. And according to an anonymous research who has been examining the leaked data, the 23andMe website allows users to take the leaded profileIDs to access additional basic profile information. So in the wake of this data breach facilitated by one form of data-scraping, an anonymous research found another different data-scraping vulnerability on the 23andMe website:
“The information of nearly 7 million 23andMe users was offered for sale on a cybercriminal forum this week. The information included origin estimation, phenotype, health information, photos, identification data and more. 23andMe processes saliva samples submitted by customers to determine their ancestry.”
Yikes, so were phenotype and health information also included in this breach? Or was that just what the seller was advertising? It's not entirely clear at this point, but keep in mind that a relative handful of users appear to have had their full accounts hacked, so it's not hard to imagine those users did actually have phenotype, health information, and anything else available through the user profiles stolen. In other words, while most of the stolen profiles are probably limited to data like names, ancestry, and geographic location, there is probably a subset of profiles with a lot more information. Which means this story could get a lot worse for that subset:
And while we’re still waiting to get a better idea of the full scope of damage, note this ominous warning: an anonymous researcher who examined the leaked data also found a file of more than 300,000 people of Chinese heritage on top of the 1 million users with Ashkenazi ancestry. Jewish and Chinese lists. That’s how this data first hit the dark web marketplace. It’s a further hint that whoever took this data has far right buyers in mind for their potential customer base:
And then we get this additional warning from the anonymous researcher: the profile IDs in the leaked data can be plugged into the 23andMe website to scrape basic data like photos, names, birth years and location. In other words, the 23andMe website is even more vulnerable to data-scraping than previously realized. And according to the anonymous researcher, 23andMe refuses to acknowledge this vulnerability:
Keep in mind that only half of the 14 million 23andMe accounts were affected, because only people who signed up for the “DNA Relative match” feature were vulnerable. But the vulnerability described above will potentially make ANY profile scrape-able, at least if you know the profile IDs. Or can guess them. In other words, try not to be shocked if we later learn that all of 23andMe's profiles got scraped. We don't know that this happened, but this is clearly a website with security issues.
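As a purely illustrative bit of arithmetic (not a claim about how 23andMe actually formats its profile IDs), here's why it matters whether identifiers are guessable or long and random:

```python
# Illustrative arithmetic: why guessable IDs matter for scraping.
# This is NOT a claim about 23andMe's actual ID format.
profiles = 14_000_000

# Case 1: sequential or small-range numeric IDs -- an attacker who can query a
# profile endpoint just walks the range; if roughly half the ID space is in use,
# nearly every other request yields a real profile.
sequential_space = 30_000_000
print(f"Sequential IDs: ~{sequential_space / profiles:.1f} requests per scraped profile")

# Case 2: random 128-bit IDs -- blind guessing is hopeless; scraping then requires
# already knowing valid IDs (e.g., from leaked data), which is exactly the scenario
# the anonymous researcher described.
random_space = 2 ** 128
print(f"Random 128-bit IDs: ~{random_space / profiles:.2e} requests per scraped profile")
```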
And, more generally, don’t forget that this is far from a ‘23andMe’ story. The fact that the hackers were able to hit a number of accounts using old passwords from previous hacks is a warning that there’s probably A LOT more hacking going on based on old reused passwords than anyone realizes. It’s the fact that the relatively small number of hacked accounts were allowed to scrape the data on a much larger number of ‘relative’ accounts that turned this into a mega-hack. But that doesn’t mean the story of the reused old passwords isn’t a major story. How many 23andMe accounts were there that could be hacked with old leaked passwords? What other websites did the hackers try those passwords on and how much success did they have? How common is this problem?
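For individual users, one practical defense against this kind of password-reuse cascade is simply to check whether a password has already shown up in known breach corpora before reusing it anywhere. Here's a minimal sketch using the Have I Been Pwned “Pwned Passwords” range API, which only ever sees the first five characters of the password's SHA-1 hash, never the password itself:

```python
# Minimal sketch: check a candidate password against known breach corpora using the
# Have I Been Pwned "Pwned Passwords" range API (k-anonymity: only the first 5 hex
# characters of the SHA-1 hash leave the machine).
import hashlib
import urllib.request

def times_seen_in_breaches(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-reuse-check-example"},
    )
    with urllib.request.urlopen(req) as resp:
        for line in resp.read().decode("utf-8").splitlines():
            candidate_suffix, _, count = line.partition(":")
            if candidate_suffix == suffix:
                return int(count)
    return 0

if __name__ == "__main__":
    hits = times_seen_in_breaches("correct horse battery staple")
    print("seen in known breaches" if hits else "not found in known breaches", hits)
```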
Finally, don’t forget: you can change your password. You can’t change your DNA. This hack could have been much worse.