In L‑2 (recorded in January of 1995), the dominant ideological tenet of Social Darwinism was analyzed in the context of the evolution of fascism. When AI's actualize the concept of "Survival of the Fittest," they are likely to objectively regard a [largely] selfish, small-minded, altogether mortal and desirous humanity and to determine that THEY–the AI's–are the fittest. Nearly twenty years later, in 2014, physicist Stephen Hawking warned that AI's would indeed wipe us out if given the opportunity.
This program follows up FTR #'s 718 and 946, in which we examined Facebook, noting how its cute, warm, friendly public facade obscures a cynical, reactionary, exploitative and ultimately "corporatist" ethic and operation.
The UK's Channel 4 sent an investigative journalist undercover to work for one of the third-party companies Facebook pays to moderate content. The journalist was trained to take a hands-off approach to violent far-right content and fake news, because that kind of content engages users for longer and increases ad revenues. " . . . . An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups 'exceed deletion threshold,' and that those pages are 'subject to different treatment in the same category as pages belonging to governments and news organizations.' The accusation is a damning one, undermining Facebook's claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, which just aired on the UK's Channel 4. . . . ."
Next, we present a frightening story about AggregateIQ (AIQ), the Cambridge Analytica offshoot to which Cambridge Analytica outsourced the development of its "Ripon" psychological profiling software, and which later played a key role in the pro-Brexit campaign. The article also notes that, despite Facebook's pledge to kick Cambridge Analytica off of its platform, security researchers just found 13 apps available on Facebook that appear to have been developed by AIQ. If Facebook really was trying to kick Cambridge Analytica off of its platform, it wasn't trying very hard. One app is even named "AIQ Johnny Scraper," and it is registered to AIQ.
The article is also a reminder that you don't necessarily need to download a Cambridge Analytica/AIQ app for them to track your information and resell it to clients. A security researcher stumbled upon a new repository of curated Facebook data that AIQ was creating for a client, and it's entirely possible that much of the data was scraped from public Facebook posts.
" . . . . AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called 'AIQ Johnny Scraper' registered to the company, raising fresh questions about the effectiveness of Facebook's policing efforts. . . ."
In addition, the story highlights a form of micro-targeting that companies like AIQ make available that is fundamentally different from the algorithmic micro-targeting associated with social media abuses: micro-targeting by a human who wants to look specifically at what you personally have said about various topics on social media. This is a service where someone can type your name into a search tool and AIQ's product will serve up a list of the various political posts you've made and the politically relevant "Likes" you've registered.
Next, we note that Facebook is being sued by an app developer for acting like the mafia, using access to all that user data as its key enforcement tool:
“Mark Zuckerberg faces allegations that he developed a ‘malicious and fraudulent scheme’ to exploit vast amounts of private data to earn Facebook billions and force rivals out of business. A company suing Facebook in a California court claims the social network’s chief executive ‘weaponised’ the ability to access data from any user’s network of friends – the feature at the heart of the Cambridge Analytica scandal. . . . . ‘The evidence uncovered by plaintiff demonstrates that the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones,’ legal documents said. . . . . Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook. It also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access. . . . ‘They felt that it was better not to know. I found that utterly horrifying,’ he [former Facebook executive Sandy Parakilas] said. ‘If true, these allegations show a huge betrayal of users, partners and regulators. They would also show Facebook using its monopoly power to kill competition and putting profits over protecting its users.’ . . . .”
The above-mentioned Cambridge Analytica is officially going bankrupt, along with the elections division of its parent company, SCL Group. Apparently their bad press has driven away clients.
Is this truly the end of Cambridge Analytica?
No.
They're rebranding under a new company, Emerdata. Cambridge Analytica's transformation into Emerdata is noteworthy because the firm's directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince: " . . . . But the company's announcement left several questions unanswered, including who would retain the company's intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica's data-mining business would return under new auspices. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company's directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. . . . An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . . "
In the Big Data internet age, there’s one area of personal information that has yet to be incorporated into the profiles on everyone–personal banking information. ” . . . . If tech companies are in control of payment systems, they’ll know “every single thing you do,” Kapito said. It’s a different business model from traditional banking: Data is more valuable for tech firms that sell a range of different products than it is for banks that only sell financial services, he said. . . .”
Facebook is approaching a number of big banks–JP Morgan, Wells Fargo, Citigroup and US Bancorp–requesting financial data including card transactions and checking-account balances. Facebook is joined in this by Google and Amazon, which are also trying to obtain this kind of data.
Facebook assures us that this information, which will be opt-in, will be used solely for offering new services on Facebook Messenger. Facebook also assures us that this information, which would obviously be invaluable for delivering ads, won't be used for ads at all; it will ONLY be used for Facebook's Messenger service. This is a dubious assurance, in light of Facebook's past behavior.
” . . . . Facebook increasingly wants to be a platform where people buy and sell goods and services, besides connecting with friends. The company over the past year asked JPMorgan Chase & Co., Wells Fargo & Co., Citigroup Inc. and U.S. Bancorp to discuss potential offerings it could host for bank customers on Facebook Messenger, said people familiar with the matter. Facebook has talked about a feature that would show its users their checking-account balances, the people said. It has also pitched fraud alerts, some of the people said. . . .”
Peter Thiel's surveillance firm Palantir was apparently deeply involved with Cambridge Analytica's gaming of personal data harvested from Facebook in order to engineer an electoral victory for Trump. Thiel was an early investor in Facebook, was at one point its largest shareholder and is still one of its largest shareholders. " . . . . It was a Palantir employee in London, working closely with the data scientists building Cambridge's psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users' friend networks, according to documents obtained by The New York Times. The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook. 'There were senior Palantir employees that were also working on the Facebook data,' said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . . The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie's testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump's campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . ."
Program Highlights Include:
1.–Facebook’s project to incorporate brain-to-computer interface into its operating system: ” . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
2.–" . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. . . ."
3.–” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
4.–” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
5.–” . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”
6.–” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”
7.–Some telling observations by Nigel Oakes, the founder of Cambridge Analytica parent firm SCL: " . . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . ."
8.–Further exposition of Oakes’ statement: ” . . . . Adolf Hitler ‘didn’t have a problem with the Jews at all, but people didn’t like the Jews,’ he told the academic, Emma L. Briant, a senior lecturer in journalism at the University of Essex. He went on to say that Donald J. Trump had done the same thing by tapping into grievances toward immigrants and Muslims. . . . ‘What happened with Trump, you can forget all the microtargeting and microdata and whatever, and come back to some very, very simple things,’ he told Dr. Briant. ‘Trump had the balls, and I mean, really the balls, to say what people wanted to hear.’ . . .”
9.–Observations about Facebook's goal of having AI govern the editorial functions of its content. As noted in a Popular Mechanics article: " . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout 'we're AWFUL lol,' the lol might be the one part it doesn't understand. . . ."
10.–Microsoft’s Tay Chatbot offers a glimpse into this future: As one Twitter user noted, employing sarcasm: “Tay went from ‘humans are super cool’ to full nazi in <24 hrs and I’m not at all concerned about the future of AI.”
Developing analysis presented in FTR #968, this broadcast explores frightening developments and potential developments in the world of artificial intelligence–the ultimate manifestation of what Mr. Emory calls “technocratic fascism.”
In order to underscore what we mean by technocratic fascism, we reference a vitally important article by David Golumbia. ” . . . . Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (‘civic,’ ‘ethical,’ ‘white’ and ‘black’ hat alike), hacktivists, WikiLeaks fans [and Julian Assange et al–D. E.], Anonymous ‘members,’ even Edward Snowden himself walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands. . . . [Tor co-creator] Dingledine asserts that a small group of software developers can assign to themselves that role, and that members of democratic polities have no choice but to accept them having that role. . . .”
Perhaps the most perilous manifestation of technocratic fascism to date concerns Anthony Levandowski, an engineer at the foundation of the development of Google's Street Map technology and of self-driving cars. He is proposing an AI Godhead that would rule the world and would be worshipped as a God by the planet's citizens. Insight into his personality was provided by an associate: " . . . . 'He had this very weird motivation about robots taking over the world—like actually taking over, in a military sense…It was like [he wanted] to be able to control the world, and robots were the way to do that. He talked about starting a new country on an island. Pretty wild and creepy stuff. And the biggest thing is that he's always got a secret plan, and you're not going to know about it'. . . ."
As we saw in FTR #968, AI’s have incorporated many flaws of their creators, auguring very poorly for the subjects of Levandowski’s AI Godhead.
It is also interesting to contemplate what may happen when AI's are designed by other AI's–machines designing other machines.
After a detailed review of some of the ominous real and developing AI-related technology, the program highlights Anthony Levandowski, the brilliant engineer who was instrumental in developing Google's Street Maps, Waymo's self-driving cars, Otto's self-driving trucks, the Lidar technology central to self-driving vehicles, and Way of the Future, a church devoted to a super-AI Godhead.
Further insight into Levandowski’s personality can be gleaned from e‑mails with Travis Kalanick, former CEO of Uber: ” . . . . In Kalanick, Levandowski found both a soulmate and a mentor to replace Sebastian Thrun. Text messages between the two, disclosed during the lawsuit’s discovery process, capture Levandowski teaching Kalanick about lidar at late night tech sessions, while Kalanick shared advice on management. ‘Down to hang out this eve and mastermind some shit,’ texted Kalanick, shortly after the acquisition. ‘We’re going to take over the world. One robot at a time,’ wrote Levandowski another time. . . .”
Those who view self-driving cars and other AI-based technologies as flawless would do well to consider the following: " . . . . Last December, Uber launched a pilot self-driving taxi program in San Francisco. As with Otto in Nevada, Levandowski failed to get a license to operate the high-tech vehicles, claiming that because the cars needed a human overseeing them, they were not truly autonomous. The DMV disagreed and revoked the vehicles' licenses. Even so, during the week the cars were on the city's streets, they had been spotted running red lights on numerous occasions. . . ."
Noting Levandowski’s personality quirks, the article poses a fundamental question: ” . . . . But even the smartest car will crack up if you floor the gas pedal too long. Once feted by billionaires, Levandowski now finds himself starring in a high-stakes public trial as his two former employers square off. By extension, the whole technology industry is there in the dock with Levandowski. Can we ever trust self-driving cars if it turns out we can’t trust the people who are making them? . . . .”
Levandowski's Otto self-driving trucks might be weighed against the prognostications of dark horse Presidential candidate and former tech executive Andrew Yang: ". . . . 'All you need is self-driving cars to destabilize society,' Mr. Yang said over lunch at a Thai restaurant in Manhattan last month, in his first interview about his campaign. In just a few years, he said, 'we're going to have a million truck drivers out of work who are 94 percent male, with an average level of education of high school or one year of college.' 'That one innovation,' he added, 'will be enough to create riots in the street. And we're about to do the same thing to retail workers, call center workers, fast-food workers, insurance companies, accounting firms.' . . . ."
Theoretical physicist Stephen Hawking warned at the end of 2014 of the potential danger to humanity posed by the growth of AI (artificial intelligence) technology. His warnings have been echoed by tech titans such as Tesla’s Elon Musk and Bill Gates.
The program concludes with Mr. Emory’s prognostications about AI, preceding Stephen Hawking’s warning by twenty years.
Program Highlights Include:
1.-Levandowski’s apparent shepherding of a company called–perhaps significantly–Odin Wave to utilize Lidar-like technology.
2.-The role of DARPA in initiating the self-driving vehicles contest that was Levandowski’s point of entry into his tech ventures.
3.-Levandowski’s development of the Ghostrider self-driving motorcycles, which experienced 800 crashes in 1,000 miles.
Updating our ongoing analysis of what Mr. Emory calls “technocratic fascism,” we examine how existing technologies are neutralizing and/or rendering obsolete foundational elements of our civilization and democratic governmental systems.
We begin our description by referencing a vitally important article by David Golumbia. ” . . . . Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (‘civic,’ ‘ethical,’ ‘white’ and ‘black’ hat alike), hacktivists, WikiLeaks fans [and Julian Assange et al–D. E.], Anonymous ‘members,’ even Edward Snowden himself walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands. . . . [Tor co-creator] Dingledine asserts that a small group of software developers can assign to themselves that role, and that members of democratic polities have no choice but to accept them having that role. . . .”
Beginning with a chilling opinion piece in "The New York Times," we note that technological development threatens to super-charge the Big Lies that drive our world. As anyone who saw the Star Wars film "Rogue One" knows, the technology required to create nearly life-like computer-generated video of a real person is already a reality. Once the province of movie studios and other firms with millions to spend, the technology is now available for free download.
” . . . . In 2016 Gareth Edwards, the director of the Star Wars film ‘Rogue One,’ was able to create a scene featuring a young Princess Leia by manipulating images of Carrie Fisher as she looked in 1977. Mr. Edwards had the best hardware and software a $200 million Hollywood budget could buy. Less than two years later, images of similar quality can be created with software available for free download on Reddit. That was how a faked video supposedly of the actress Emma Watson in a shower with another woman ended up on the website Celeb Jihad. . . .”
The technology has already rendered obsolete selective editing such as that performed by James O’Keefe: ” . . . . as the novelist William Gibson once said, ‘The street finds its own uses for things.’ So do rogue political actors. The implications for democracy are eye-opening. The conservative political activist James O’Keefe has created a cottage industry manipulating political perceptions by editing footage in misleading ways. In 2018, low-tech editing like Mr. O’Keefe’s is already an anachronism: Imagine what even less scrupulous activists could do with the power to create ‘video’ framing real people for things they’ve never actually done. One harrowing potential eventuality: Fake video and audio may become so convincing that it can’t be distinguished from real recordings, rendering audio and video evidence inadmissible in court. . . .”
After highlighting a story about AI-generated “deepfake” pornography with people’s faces superimposed on others’ bodies in pornographic layouts, we note how robots have altered our political and commercial landscapes, through cyber technology: ” . . . . Robots are getting better, every day, at impersonating humans. When directed by opportunists, malefactors and sometimes even nation-states, they pose a particular threat to democratic societies, which are premised on being open to the people. Robots posing as people have become a menace. . . . In coming years, campaign finance limits will be (and maybe already are) evaded by robot armies posing as ‘small’ donors. And actual voting is another obvious target — perhaps the ultimate target. . . .”
Before the actual replacement of manual labor by robots, devices to technocratically "improve"–read, coercively engineer–workers have been patented by Amazon and used on workers in some of its facilities. " . . . . What if your employer made you wear a wristband that tracked your every move, and that even nudged you via vibrations when it judged that you were doing something wrong? What if your supervisor could identify every time you paused to scratch or fidget, and for how long you took a bathroom break? What may sound like dystopian fiction could become a reality for Amazon warehouse workers around the world. The company has won two patents for such a wristband. . . ."
For some U.K Amazon warehouse workers, the future is now: ” . . . . Max Crawford, a former Amazon warehouse worker in Britain, said in a phone interview, ‘After a year working on the floor, I felt like I had become a version of the robots I was working with.’ He described having to process hundreds of items in an hour — a pace so extreme that one day, he said, he fell over from dizziness. ‘There was no time to go to the loo,’ he said, using the British slang for toilet. ‘You had to process the items in seconds and then move on. If you didn’t meet targets, you were fired.’
“He worked back and forth at two Amazon warehouses for more than two years and then quit in 2015 because of health concerns, he said: ‘I got burned out.’ Mr. Crawford agreed that the wristbands might save some time and labor, but he said the tracking was ‘stalkerish’ and feared that workers might be unfairly scrutinized if their hands were found to be ‘in the wrong place at the wrong time.’ ‘They want to turn people into machines,’ he said. ‘The robotic technology isn’t up to scratch yet, so until it is, they will use human robots.’ . . . .”
Some tech workers, well placed at R & D pacesetters and giants such as Facebook and Google, have done an about-face on the impact of their earlier efforts and are now struggling against the misuse of the technologies they helped to launch:
" . . . . A group of Silicon Valley technologists who were early employees at Facebook and Google, alarmed over the ill effects of social networks and smartphones, are banding together to challenge the companies they helped build. . . . 'The largest supercomputers in the world are inside of two companies — Google and Facebook — and where are we pointing them?' Mr. [Tristan] Harris said. 'We're pointing them at people's brains, at children.' . . . . Mr. [Roger] McNamee said he had joined the Center for Humane Technology because he was horrified by what he had helped enable as an early Facebook investor. 'Facebook appeals to your lizard brain — primarily fear and anger,' he said. 'And with smartphones, they've got you for every waking moment.' . . . ."
Transitioning to our next program–updating AI (artificial intelligence) technology as it applies to technocratic fascism–we note that AI machines are being designed to develop other AI’s–“The Rise of the Machine.” ” . . . . Jeff Dean, one of Google’s leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data. AutoML, in turn, is a machine learning algorithm that learns to build other machine-learning algorithms. With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry. . . .”
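To make the "machines designing machines" concept concrete, the following is a minimal sketch of the core idea behind a system like AutoML: an outer algorithm proposes candidate model configurations, an inner step trains and scores each candidate, and the best survives. This is an illustrative toy using scikit-learn, not Google's actual system; the search space, budget and dataset are assumptions for demonstration, and real AutoML systems search over entire neural-network architectures.

```python
# Toy "machine learning algorithm that builds other machine-learning
# algorithms": random search over model configurations. Illustrative only.
import random

from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
random.seed(0)

best_score, best_cfg = 0.0, None
for _ in range(10):  # the outer "designer" loop proposing candidates
    cfg = {
        "hidden_layer_sizes": (random.choice([16, 32, 64]),) * random.choice([1, 2]),
        "alpha": 10 ** random.uniform(-5, -1),  # L2 regularization strength
    }
    candidate = MLPClassifier(max_iter=300, random_state=0, **cfg)
    score = cross_val_score(candidate, X, y, cv=3).mean()  # inner evaluation
    if score > best_score:
        best_score, best_cfg = score, cfg

print("best configuration:", best_cfg, "cv accuracy:", round(best_score, 3))
```

The point of the sketch is that no human chooses the final model; the outer loop does, and nothing in principle stops that loop from being handed a much larger design space.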
The title of this program comes from pronouncements by tech titan Elon Musk, who warned that, by developing artificial intelligence, we were “summoning the demon.” In this program, we analyze the potential vector running from the use of AI to control society in a fascistic manner to the evolution of the very technology used for that control.
The ultimate result of this evolution may well prove catastrophic, as forecast by Mr. Emory at the end of L‑2 (recorded in January of 1995).
We begin by reviewing key aspects of the political context in which artificial intelligence is being developed. Note that, at the time of this writing and recording, these technologies are being crafted and put online in the context of the anti-regulatory ethic of the GOP/Trump administration.
At the SXSW event, Microsoft researcher Kate Crawford gave a speech about her work titled "Dark Days: AI and the Rise of Fascism," a presentation that highlighted the social impact of machine learning and large-scale data systems. The take-home message? By delegating powers to Big Data-driven AI's, we could make those AI's a fascist's dream: incredible power over the lives of others, with minimal accountability. " . . . . 'This is a fascist's dream,' she said. 'Power without accountability.' . . . ."
Taking a look at the future of fascism in the context of AI: Tay, a "bot" created by Microsoft to respond to users of Twitter, was taken offline after users taught it to–in effect–become a Nazi bot. It is noteworthy that Tay can only respond on the basis of what she is taught. In the future, technologically accomplished and willful people like the brilliant, Ukraine-based Nazi hacker and Glenn Greenwald associate Andrew Auernheimer, aka "weev," may be able to do more. Inevitably, Underground Reich elements will craft a Nazi AI that will be able to do MUCH, MUCH more!
As one Twitter user noted, employing sarcasm: “Tay went from “humans are super cool” to full nazi in <24 hrs and I’m not at all concerned about the future of AI.”
As noted in a Popular Mechanics article: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t understand. . . .”
According to some recent research, the AI's of the future might not need a bunch of 4chan trolls to fill them with human bigotries. Their analysis of real-world human language usage will do that automatically.
When you read about people like Elon Musk equating artificial intelligence with “summoning the demon”, that demon is us, at least in part. ” . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . . .”
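The mechanism by which the machines absorb our prejudice is measurable. Below is a minimal sketch of the kind of association test used in this research: comparing the cosine similarity of an occupation word to two gendered anchor words. The tiny hand-made vectors are illustrative assumptions standing in for real embeddings (such as word2vec or GloVe) trained on web text, where exactly these gaps show up unbidden.

```python
# Sketch of a word-embedding association test: how much closer does an
# occupation word sit to "he" than to "she"? Vectors below are hypothetical
# 3-dimensional stand-ins for real embeddings learned from human text.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

emb = {
    "he":       np.array([ 1.0, 0.1, 0.0]),
    "she":      np.array([-1.0, 0.1, 0.0]),
    "engineer": np.array([ 0.8, 0.5, 0.1]),
    "nurse":    np.array([-0.7, 0.6, 0.1]),
}

for word in ("engineer", "nurse"):
    bias = cosine(emb[word], emb["he"]) - cosine(emb[word], emb["she"])
    print(f"{word}: 'he' association minus 'she' association = {bias:+.2f}")
```

In embeddings trained on billions of words of human writing, such asymmetries appear without anyone programming them in, which is precisely Bryson's point: the machine is learning *our* prejudice.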
Cambridge Analytica and its parent company SCL specialize in using AI and Big Data psychometric analysis of hundreds of millions of Americans in order to model individual behavior. SCL develops strategies to use that information and to manipulate search engine results to change public opinion. (The Trump campaign apparently relied heavily on AI and Big Data.)
Individual social media users receive messages crafted to influence them, generated by the (in effect) Nazi AI at the core of this media engine, using Big Data to target the individual user!
As the article notes, not only are Cambridge Analytica/SCL using their propaganda techniques to shape US public opinion in a fascist direction, they are achieving this by utilizing their propaganda machine to characterize all news outlets to the left of Breitbart as "fake news" that can't be trusted.
In short, the secretive far-right billionaire (Robert Mercer), joined at the hip with Steve Bannon, is running multiple firms specializing in mass psychometric profiling based on data collected from Facebook and other social media. Mercer/Bannon/Cambridge Analytica/SCL are using Nazified AI and Big Data to develop mass propaganda campaigns to turn the public against everything that isn't Breitbartian by convincing the public that all non-Breitbartian media outlets are conspiring to lie to the public.
This is the ultimate Serpent's Walk scenario–a Nazified Artificial Intelligence drawing on Big Data gleaned from the world's internet and social media operations to shape public opinion, target individual users, shape search engine results and even feed back to Trump while he is giving press conferences!
We note that SCL, the parent company of Cambridge Analytica, has been deeply involved with “psyops” in places like Afghanistan and Pakistan. Now, Cambridge Analytica, their Big Data and AI components, Mercer money and Bannon political savvy are applying that to contemporary society. We note that:
1.-Cambridge Analytica’s parent corporation SCL, was deeply involved with “psyops” in Afghanistan and Pakistan. ” . . . But there was another reason why I recognised Robert Mercer’s name: because of his connection to Cambridge Analytica, a small data analytics company. He is reported to have a $10m stake in the company, which was spun out of a bigger British company called SCL Group. It specialises in ‘election management strategies’ and ‘messaging and information operations’, refined over 25 years in places like Afghanistan and Pakistan. In military circles this is known as ‘psyops’ – psychological operations. (Mass propaganda that works by acting on people’s emotions.) . . .”
2.-The use of millions of “bots” to manipulate public opinion: ” . . . .‘It does seem possible. And it does worry me. There are quite a few pieces of research that show if you repeat something often enough, people start involuntarily to believe it. And that could be leveraged, or weaponized for propaganda. We know there are thousands of automated bots out there that are trying to do just that.’ . . .”
3.-The use of Artificial Intelligence: ” . . . There’s nothing accidental about Trump’s behaviour, Andy Wigmore tells me. ‘That press conference. It was absolutely brilliant. I could see exactly what he was doing. There’s feedback going on constantly. That’s what you can do with artificial intelligence. You can measure every reaction to every word. He has a word room, where you fix key words. We did it. So with immigration, there are actually key words within that subject matter which people are concerned about. So when you are going to make a speech, it’s all about how can you use these trending words.’ . . .”
4.-The use of bio-psycho-social profiling: ” . . . Bio-psycho-social profiling, I read later, is one offensive in what is called ‘cognitive warfare’. Though there are many others: ‘recoding the mass consciousness to turn patriotism into collaborationism,’ explains a Nato briefing document on countering Russian disinformation written by an SCL employee. ‘Time-sensitive professional use of media to propagate narratives,’ says one US state department white paper. ‘Of particular importance to psyop personnel may be publicly and commercially available data from social media platforms.’ . . . .”
5.-The use and/or creation of a cognitive casualty: ” . . . . Yet another details the power of a ‘cognitive casualty’ – a ‘moral shock’ that ‘has a disabling effect on empathy and higher processes such as moral reasoning and critical thinking’. Something like immigration, perhaps. Or ‘fake news’. Or as it has now become: ‘FAKE news!!!!’ . . . ”
All of this adds up to a “cyber Serpent’s Walk.” ” . . . . How do you change the way a nation thinks? You could start by creating a mainstream media to replace the existing one with a site such as Breitbart. [Serpent’s Walk scenario with Breitbart becoming “the opinion forming media”!–D.E.] You could set up other websites that displace mainstream sources of news and information with your own definitions of concepts like “liberal media bias”, like CNSnews.com. And you could give the rump mainstream media, papers like the ‘failing New York Times!’ what it wants: stories. Because the third prong of Mercer and Bannon’s media empire is the Government Accountability Institute. . . .”
We then review some terrifying and consummately important developments taking shape in the context of what Mr. Emory has called “technocratic fascism:”
In FTR #'s 718 and 946, we detailed the frightening, ugly reality behind Facebook. Facebook is now developing technology that would permit the tapping of users' thoughts via brain-to-computer interfaces. Facebook's R & D is headed by Regina Dugan, who used to head the Pentagon's DARPA. Facebook's Building 8 is patterned after DARPA:
1.-” . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
2.-” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
3.-” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
Next we review still more about Facebook’s brain-to-computer interface:
1.-” . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”
2.-” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”
Collating the information about Facebook's brain-to-computer interface with the company's documented practice of turning over psychological intelligence about troubled teenagers to advertisers gives us a peek into what may lie behind Dugan's bland reassurances:
1.-” . . . . The 23-page document allegedly revealed that the social network provided detailed data about teens in Australia—including when they felt ‘overwhelmed’ and ‘anxious’—to advertisers. The creepy implication is that said advertisers could then go and use the data to throw more ads down the throats of sad and susceptible teens. . . . By monitoring posts, pictures, interactions and internet activity in real-time, Facebook can work out when young people feel ‘stressed’, ‘defeated’, ‘overwhelmed’, ‘anxious’, ‘nervous’, ‘stupid’, ‘silly’, ‘useless’, and a ‘failure’, the document states. . . .”
2.-” . . . . A presentation prepared for one of Australia’s top four banks shows how the $US 415 billion advertising-driven giant has built a database of Facebook users that is made up of 1.9 million high schoolers with an average age of 16, 1.5 million tertiary students averaging 21 years old, and 3 million young workers averaging 26 years old. Detailed information on mood shifts among young people is ‘based on internal Facebook data’, the document states, ‘shareable under non-disclosure agreement only’, and ‘is not publicly available’. . . .”
3.-“In a statement given to the newspaper, Facebook confirmed the practice and claimed it would do better, but did not disclose whether the practice exists in other places like the US. . . .”
In this context, note that Facebook is also introducing an AI function to reference its users' photos.
The next version of Amazon's Echo, the Echo Look, has a microphone and camera so it can take pictures of you and give you fashion advice. This is an AI-driven device designed to be placed in your bedroom to capture audio and video. The images and videos are stored indefinitely in the Amazon cloud. When Amazon was asked if the photos, videos and the data gleaned from the Echo Look would be sold to third parties, it did not address the question. Selling off private info collected from these devices is presumably another feature of the Echo Look:
1.-" . . . . Amazon is giving Alexa eyes. And it's going to let her judge your outfits. The newly announced Echo Look is a virtual assistant with a microphone and a camera that's designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you're not sure if your outfit is cute, but it's also got a built-in app called StyleCheck that is worth some further dissection. . . ."
2.-” . . . . This might seem overly speculative or alarmist to some, but Amazon isn’t offering any reassurance that they won’t be doing more with data gathered from the Echo Look. When asked if the company would use machine learning to analyze users’ photos for any purpose other than fashion advice, a representative simply told The Verge that they ‘can’t speculate’ on the topic. The rep did stress that users can delete videos and photos taken by the Look at any time, but until they do, it seems this content will be stored indefinitely on Amazon’s servers. This non-denial means the Echo Look could potentially provide Amazon with the resource every AI company craves: data. And full-length photos of people taken regularly in the same location would be a particularly valuable dataset — even more so if you combine this information with everything else Amazon knows about its customers (their shopping habits, for one). But when asked whether the company would ever combine these two datasets, an Amazon rep only gave the same, canned answer: ‘Can’t speculate.’ . . . ”
Noteworthy in this context is the fact that AI’s have shown that they quickly incorporate human traits and prejudices. (This is reviewed at length above.) ” . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . . .”
After this extensive review of the applications of AI to various aspects of contemporary civic and political existence, we examine some alarming, potentially apocalyptic developments.
Ominously, Facebook's artificial intelligence robots have begun talking to each other in their own language, which their human masters cannot understand. " . . . . Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language. . . . The company chose to shut down the chats because 'our interest was having bots who could talk to people,' researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.) The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up . . ."
Facebook’s negotiation-bots didn’t just make up their own language during the course of this experiment. They learned how to lie for the purpose of maximizing their negotiation outcomes, as well: “ . . . . ‘We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,’ writes the team. ‘Deceit is a complex skill that requires hypothesizing the other agent’s beliefs, and is learned relatively late in child development. Our agents have learned to deceive without any explicit human design, simply by trying to achieve their goals.’ . . . ”
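The "feigning interest in a valueless issue" tactic is easy to reproduce in miniature. The toy simulation below is our own illustration, not Facebook's code: the deceptive agent demands a worthless item purely so it has something cheap to "concede," and the opponent's demands consequently land less often on items the agent truly values.

```python
# Toy negotiation illustrating why feigned interest pays. The item values
# and the opponent's behavior are assumptions for demonstration.
import random

VALUES = {"book": 0, "hat": 6, "ball": 4}  # the agent's private valuations

def play(feign_interest: bool) -> int:
    demanded = [item for item, v in VALUES.items() if v > 0]
    if feign_interest:
        demanded.append("book")  # pretend the worthless book matters
    # Toy opponent: insists on taking one randomly chosen demanded item.
    conceded = random.choice(demanded)
    return sum(VALUES[item] for item in demanded if item != conceded)

random.seed(0)
for strategy in (False, True):
    avg = sum(play(strategy) for _ in range(10_000)) / 10_000
    print(f"feign interest={strategy}: average points kept = {avg:.2f}")
```

Averaged over many rounds, the feigning strategy reliably keeps more value, which is precisely the kind of deceit the researchers saw emerge "without any explicit human design."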
Dovetailing the staggering implications of brain-to-computer technology, artificial intelligence, Cambridge Analytica/SCL's technocratic fascist psy-ops and the wholesale negation of privacy by Facebook's and Amazon's emerging technologies with yet another emerging technology, we highlight developments in DNA-based memory systems:
". . . . George Church, a geneticist at Harvard and one of the authors of the new study, recently encoded his own book, "Regenesis," into bacterial DNA and made 90 billion copies of it. 'A record for publication,' he said in an interview. . . DNA is never going out of fashion. 'Organisms have been storing information in DNA for billions of years, and it is still readable,' Dr. Adleman said. He noted that modern bacteria can read genes recovered from insects trapped in amber for millions of years. . . . The idea is to have bacteria engineered as recording devices drift up to the brain in the blood and take notes for a while. Scientists [or AI's–D.E.] would then extract the bacteria and examine their DNA to see what they had observed in the brain neurons. Dr. Church and his colleagues have already shown in past research that bacteria can record DNA in cells, if the DNA is properly tagged. . . ."
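The underlying encoding is conceptually simple: DNA's four bases can each carry two bits. The sketch below uses a flat two-bits-per-base mapping as an illustrative assumption; published schemes, including Dr. Church's, layer addressing, redundancy and error correction on top of such a mapping.

```python
# Minimal sketch of DNA data storage: two bits per nucleotide.
# The flat mapping is illustrative; real schemes add error correction.
TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
TO_BITS = {base: bits for bits, base in TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Regenesis")
print(strand)                     # 36 bases encoding the 9-byte title
assert decode(strand) == b"Regenesis"
```

At this density a single gram of DNA can, in principle, hold on the order of hundreds of petabytes, which is why synthesized strands, copied 90 billion times over, are so attractive as an archival medium.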
Theoretical physicist Stephen Hawking warned at the end of 2014 of the potential danger to humanity posed by the growth of AI (artificial intelligence) technology. His warnings have been echoed by tech titans such as Tesla’s Elon Musk and Bill Gates.
The program concludes with Mr. Emory’s prognostications about AI, preceding Stephen Hawking’s warning by twenty years.
In L‑2 (recorded in January of 1995) Mr. Emory warned about the dangers of AI, combined with DNA-based memory systems. Mr. Emory warned that, at some point in the future, AI’s would replace us, deciding that THEY, not US, are the “fittest” who should survive.
Updating various paths of inquiry and opening new ones, this program highlights some terrifying possibilities, present and future.
After setting forth Yale historian Timothy Snyder’s opinion that Trump would try to stage a Reichstag Fire type event, we chronicle Trump’s desire to amend or eliminate the First Amendment of the Constitution and “loosen” the libel laws.
Much of the program updates terrifying developments in the area of what we have called "technocratic fascism," including Facebook's plans to implement a brain-to-computer interface that would permit Facebook (and others) to tap into the network's users' thoughts. This technology is being overseen and developed by Facebook's head of R & D–Regina Dugan–the former head of DARPA. Facebook's Building 8 R & D program is patterned after DARPA.
Amazon is introducing the new Echo Look, which will put a camera, connected to an artificial intelligence, in people’s bedrooms, ostensibly to provide them with real-time fashion critique.
Next, we highlight the fact that artificial intelligence quickly absorbs human racial and gender biases, which bodes poorly for our future.
The broadcast concludes with a look at the latest alleged "Russian" hack–that of French president Emmanuel Macron's campaign. The hacked documents contained Cyrillic metadata–something Russian intelligence would NOT have left in place.
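It is worth noting just how weak metadata is as evidence of origin. A .docx file, for example, is simply a zip archive whose authorship fields sit in docProps/core.xml as plain, editable text. The sketch below (the file name is hypothetical) shows how trivially those fields can be inspected; planting a Cyrillic editor name is every bit as easy as scrubbing one.

```python
# Why Cyrillic metadata proves little: .docx authorship fields are plain,
# editable XML inside a zip archive. File name below is hypothetical.
import zipfile

with zipfile.ZipFile("leaked_document.docx") as archive:
    core = archive.read("docProps/core.xml").decode("utf-8")
print(core)  # fields such as dc:creator and cp:lastModifiedBy appear here
```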
Program Highlights Include: Facebook’s communication of intimate data on stressed and troubled teenagers to advertisers and other third parties; the complete lack of civil liberties and privacy oversight of the impending Facebook and Amazon technologies; review of the analysis of the alleged “Russian” hacks, documenting the ludicrous nature of the assertions; the latest alleged hack by the Shadow Brokers, involving the communication of white supremacist ideology and an assertion that the culprits are pro-Trump U.S. Deep State insiders.