Dave Emory’s entire lifetime of work is available on a flash drive that can be obtained HERE. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by the fall of 2017. The new drive is available for a tax-deductible contribution of $65.00 or more.
WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE.
You can subscribe to e-mail alerts from Spitfirelist.com HERE.
You can subscribe to RSS feed from Spitfirelist.com HERE.
You can subscribe to the comments made on programs and posts–an excellent source of information in, and of, itself HERE.
This broadcast was recorded in one, 60-minute segment.
Introduction: This program follows up FTR #’s 718 and 946, in which we examined Facebook, noting how its cute, warm, friendly public facade obscured a cynical, reactionary, exploitative and, ultimately, “corporatist” ethic and operation.
The UK’s Channel 4 sent an investigative journalist undercover to work for one of the third-party companies Facebook pays to moderate content. This investigative journalist was trained to take a hands-off approach to far-right violent content and fake news because that kind of content engages users for longer and increases ad revenues. ” . . . . An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups ‘exceed deletion threshold,’ and that those pages are ‘subject to different treatment in the same category as pages belonging to governments and news organizations.’ The accusation is a damning one, undermining Facebook’s claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. . . . .”
Next, we present a frightening story about AggregateIQ (AIQ), the Cambridge Analytica offshoot to which Cambridge Analytica outsourced the development of its “Ripon” psychological profiling software, and which later played a key role in the pro-Brexit campaign. The article also notes that, despite Facebook’s pledge to kick Cambridge Analytica off of its platform, security researchers just found 13 apps on Facebook that appear to have been developed by AIQ. If Facebook really was trying to kick Cambridge Analytica off of its platform, it’s not trying very hard. One app is even named “AIQ Johnny Scraper,” and it’s registered to AIQ.
The article is also a reminder that you don’t necessarily need to download a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. A security researcher stumbled upon a new repository of curated Facebook data AIQ was creating for a client, and it’s entirely possible that much of the data was scraped from public Facebook posts.
” . . . . AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called ‘AIQ Johnny Scraper’ registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts. . . .”
In addition, the story highlights a form of micro-targeting companies like AIQ make available that’s fundamentally different from the algorithmic micro-targeting associated with social media abuses: micro-targeting by a human who wants to look specifically at what you personally have said about various topics on social media. This is a service where someone can type your name into a search engine and AIQ’s product will serve up a list of the various political posts you’ve made and the politically relevant “Likes” you’ve made.
Next, we note that Facebook is getting sued by an app developer for acting like the mafia, using access to all that user data as its key enforcement tool:
“Mark Zuckerberg faces allegations that he developed a ‘malicious and fraudulent scheme’ to exploit vast amounts of private data to earn Facebook billions and force rivals out of business. A company suing Facebook in a California court claims the social network’s chief executive ‘weaponised’ the ability to access data from any user’s network of friends – the feature at the heart of the Cambridge Analytica scandal. . . . . ‘The evidence uncovered by plaintiff demonstrates that the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones,’ legal documents said. . . . . Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook. It also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access. . . . ‘They felt that it was better not to know. I found that utterly horrifying,’ he [former Facebook executive Sandy Parakilas] said. ‘If true, these allegations show a huge betrayal of users, partners and regulators. They would also show Facebook using its monopoly power to kill competition and putting profits over protecting its users.’ . . . .”
The above-mentioned Cambridge Analytica is officially going bankrupt, along with the elections division of its parent company, SCL Group. Apparently, the bad press has driven away clients.
Is this truly the end of Cambridge Analytica?
No.
They’re rebranding under a new company, Emerdata. Intriguingly, the new firm’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince: ” . . . . But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. . . . An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . .”
In the Big Data internet age, there’s one area of personal information that has yet to be incorporated into the profiles on everyone–personal banking information. ” . . . . If tech companies are in control of payment systems, they’ll know “every single thing you do,” Kapito said. It’s a different business model from traditional banking: Data is more valuable for tech firms that sell a range of different products than it is for banks that only sell financial services, he said. . . .”
Facebook is approaching a number of big banks – JPMorgan, Wells Fargo, Citigroup, and US Bancorp – requesting financial data including card transactions and checking-account balances. Facebook is joined in this by Google and Amazon, which are also trying to get this kind of data.
Facebook assures us that this information, which will be opt-in, is to be used solely for offering new services on Facebook Messenger. Facebook also assures us that this information, which would obviously be invaluable for delivering ads, won’t be used for ads at all. It will ONLY be used for Facebook’s Messenger service. This is a dubious assurance, in light of Facebook’s past behavior.
” . . . . Facebook increasingly wants to be a platform where people buy and sell goods and services, besides connecting with friends. The company over the past year asked JPMorgan Chase & Co., Wells Fargo & Co., Citigroup Inc. and U.S. Bancorp to discuss potential offerings it could host for bank customers on Facebook Messenger, said people familiar with the matter. Facebook has talked about a feature that would show its users their checking-account balances, the people said. It has also pitched fraud alerts, some of the people said. . . .”
Peter Thiel’s surveillance firm Palantir was apparently deeply involved with Cambridge Analytica’s gaming of personal data harvested from Facebook in order to engineer an electoral victory for Trump. Thiel was an early investor in Facebook, at one point was its largest shareholder and is still one of its largest shareholders. ” . . . . It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times. The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook. ‘There were senior Palantir employees that were also working on the Facebook data,’ said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . . The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .”
Program Highlights Include:
- Facebook’s project to incorporate a brain-to-computer interface into its operating system: “ . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from her brain. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
- ” . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
- ” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
- ” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
- ” . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”
- ” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”
- Some telling observations by Nigel Oakes, the founder of Cambridge Analytica parent firm SCL: ” . . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . .”
- Further exposition of Oakes’ statement: ” . . . . Adolf Hitler ‘didn’t have a problem with the Jews at all, but people didn’t like the Jews,’ he told the academic, Emma L. Briant, a senior lecturer in journalism at the University of Essex. He went on to say that Donald J. Trump had done the same thing by tapping into grievances toward immigrants and Muslims. . . . ‘What happened with Trump, you can forget all the microtargeting and microdata and whatever, and come back to some very, very simple things,’ he told Dr. Briant. ‘Trump had the balls, and I mean, really the balls, to say what people wanted to hear.’ . . .”
- Observations about Facebook’s goal of having AI govern the editorial functions of its content, as noted in a Popular Mechanics article: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout ‘we’re AWFUL lol,’ the lol might be the one part it doesn’t understand. . . .”
- Microsoft’s Tay Chatbot offers a glimpse into this future: As one Twitter user noted, employing sarcasm: “Tay went from ‘humans are super cool’ to full nazi in <24 hrs and I’m not at all concerned about the future of AI.”
1. The UK’s Channel 4 sent an investigative journalist undercover to work for one of the third-party companies Facebook pays to moderate content. This investigative journalist was trained to take a hands-off approach to far-right violent content and fake news because that kind of content engages users for longer and increases ad revenues. ” . . . . An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups ‘exceed deletion threshold,’ and that those pages are ‘subject to different treatment in the same category as pages belonging to governments and news organizations.’ The accusation is a damning one, undermining Facebook’s claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. . . . .”
An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups “exceed deletion threshold,” and that those pages are “subject to different treatment in the same category as pages belonging to governments and news organizations.” The accusation is a damning one, undermining Facebook’s claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. The investigation outlines questionable practices on behalf of CPL Resources, a third-party content moderator firm based in Dublin that Facebook has worked with since 2010.
Those questionable practices primarily involve a hands-off approach to flagged and reported content like graphic violence, hate speech, and racist and other bigoted rhetoric from far-right groups. The undercover reporter says he was also instructed to ignore users who looked as if they were under 13 years of age, which is the minimum age requirement to sign up for Facebook in accordance with the Child Online Protection Act, a 1998 privacy law passed in the US designed to protect young children from exploitation and harmful and violent content on the internet. The documentary insinuates that Facebook takes a hands-off approach to such content, including blatantly false stories parading as truth, because it engages users for longer and drives up advertising revenue. . . .
. . . . And as the Channel 4 documentary makes clear, that threshold appears to be an ever-changing metric that has no consistency across partisan lines and from legitimate media organizations to ones that peddle in fake news, propaganda, and conspiracy theories. It’s also unclear how Facebook is able to enforce its policy with third-party moderators all around the world, especially when they may be incentivized by any number of performance metrics and personal biases. . . . .
Meanwhile, Facebook is ramping up efforts in its artificial intelligence division, with the hope that one day algorithms can solve these pressing moderation problems without any human input. Earlier today, the company said it would be accelerating its AI research efforts to include more researchers and engineers, as well as new academia partnerships and expansions of its AI research labs in eight locations around the world. . . . .The long-term goal of the company’s AI division is to create “machines that have some level of common sense” and that learn “how the world works by observation, like young children do in the first few months of life.” . . . .
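To make the documentary’s claims concrete: the moderation regime it describes amounts to a tiered rule in which ordinary pages are deleted past a report threshold while “shielded” pages are escalated to internal review instead. What follows is a minimal sketch, in Python, of how such a tiered rule might work; the threshold value, the field names, and the function are hypothetical illustrations, not Facebook’s or CPL’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Page:
    name: str
    upheld_reports: int   # reports confirmed as policy violations
    shielded: bool        # hypothetical flag for governments, news outlets, popular pages

DELETION_THRESHOLD = 5    # hypothetical: ordinary pages are removed past this point

def moderate(page: Page) -> str:
    """Tiered rule of the kind the documentary describes: shielded pages
    can 'exceed deletion threshold' and get escalated instead of deleted."""
    if page.upheld_reports <= DELETION_THRESHOLD:
        return "leave up"
    if page.shielded:
        return "escalate to internal review"   # the 'different treatment' for protected pages
    return "delete"

print(moderate(Page("ordinary page", 9, shielded=False)))         # delete
print(moderate(Page("far-right fringe page", 9, shielded=True)))  # escalate to internal review
```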
2. Next, we present a frightening story about AggregateIQ (AIQ), the Cambridge Analytica offshoot to which Cambridge Analytica outsourced the development of its “Ripon” psychological profiling software, and which later played a key role in the pro-Brexit campaign. The article also notes that, despite Facebook’s pledge to kick Cambridge Analytica off of its platform, security researchers just found 13 apps on Facebook that appear to have been developed by AIQ. If Facebook really was trying to kick Cambridge Analytica off of its platform, it’s not trying very hard. One app is even named “AIQ Johnny Scraper,” and it’s registered to AIQ.
The following article is also a reminder that you don’t necessarily need to download a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. A security researcher stumbled upon a new repository of curated Facebook data AIQ was creating for a client, and it’s entirely possible that much of the data was scraped from public Facebook posts.
” . . . . AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called ‘AIQ Johnny Scraper’ registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts. . . .”
Additionally, the story highlights a form of micro-targeting companies like AIQ make available that’s fundamentally different from the algorithmic micro-targeting we typically associate with social media abuses: micro-targeting by a human who wants to look specifically at what you personally have said about various topics on social media. It is a service where someone can type your name into a search engine and AIQ’s product will serve up a list of the various political posts you’ve made and the politically relevant “Likes” you’ve made.
It’s also worth noting that this service would be perfect for accomplishing the right wing’s long-standing goal of purging the federal government of liberal employees, a goal that ‘Alt-Right’ neo-Nazi troll Charles C. Johnson and ‘Alt-Right’ neo-Nazi billionaire Peter Thiel were reportedly helping the Trump team accomplish during the transition period. An ideological purge of the State Department is reportedly already underway.
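To illustrate the kind of person-level lookup described above, here is a minimal sketch in Python that queries a curated repository of scraped posts and “Likes” by name. The records, field names, and schema are invented for illustration; this is not AIQ’s actual product.

```python
# Minimal sketch of a person-level lookup over a curated repository of
# scraped public posts and "Likes". All records and fields are invented.
repository = [
    {"person": "Jane Doe", "kind": "post", "text": "Like if you agree that government is the problem"},
    {"person": "Jane Doe", "kind": "like", "text": "Page: Lower Taxes Now"},
    {"person": "John Roe", "kind": "post", "text": "Great weather for the county fair"},
]

def lookup(person: str) -> list[dict]:
    """Return every recorded post or 'Like' for the named person."""
    return [r for r in repository if r["person"] == person]

for record in lookup("Jane Doe"):
    print(record["kind"], "->", record["text"])
```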
AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called “AIQ Johnny Scraper” registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts.
The technology group now says it shut down the Johnny Scraper app this week along with 13 others that could be related to AggregateIQ, with a total of 1,000 users.
Ime Archibong, vice-president of product partnerships, said the company was investigating whether there had been any misuse of data. “We have suspended an additional 14 apps this week, which were installed by around 1,000 people,” he said. “They were all created after 2014 and so did not have access to friends’ data. However, these apps appear to be linked to AggregateIQ, which was affiliated with Cambridge Analytica. So we have suspended them while we investigate further.”
According to files seen by the Financial Times, AggregateIQ had stored a list of 759,934 Facebook users in a table that recorded home addresses, phone numbers and email addresses for some profiles.
Jeff Silvester, AggregateIQ chief operating officer, said the file came from software designed for a particular client, which tracked which users had liked a particular page or were posting positive and negative comments.
“I believe as part of that the client did attempt to match people who had liked their Facebook page with supporters in their voter file [online electoral records],” he said. “I believe the result of this matching is what you are looking at. This is a fairly common task that voter file tools do all of the time.”
He added that the purpose of the Johnny Scraper app was to replicate Facebook posts made by one of AggregateIQ’s clients into smartphone apps that also belonged to the client.
AggregateIQ has sought to distance itself from an international privacy scandal engulfing Facebook and Cambridge Analytica, despite allegations from Christopher Wylie, a whistleblower at the now-defunct UK firm, that it had acted as the Canadian branch of the organisation.
The files do not indicate whether users had given permission for their Facebook “Likes” to be tracked through third-party apps, or whether they were scraped from publicly visible pages. Mr Vickery, who analysed AggregateIQ’s files after uncovering a trove of information online, said that the company appeared to have gathered data from Facebook users despite telling Canadian MPs “we don’t really process data on folks”.
The files also include posts that focus on political issues with statements such as: “Like if you agree with Reagan that ‘government is the problem’,” but it is not clear if this information originated on Facebook. Mr Silvester said the software AggregateIQ had designed allowed its client to browse public comments. “It is possible that some of those public comments or posts are in the file,” he said. . . .
. . . . “The overall theme of these companies and the way their tools work is that everything is reliant on everything else, but has enough independent operability to preserve deniability,” said Mr Vickery. “But when you combine all these different data sources together it becomes something else.” . . . .
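Silvester’s description of the client file, matching people who “Liked” a Facebook page against a voter file, is essentially a join on identifying fields. Here is a minimal sketch of that matching step; the records, keys, and fields are invented for illustration, not AIQ’s actual tooling.

```python
# Minimal sketch of the voter-file matching Silvester describes: join the
# people who "Liked" a client's page against electoral records.
# All names, fields, and records here are invented.
page_likers = [
    {"name": "Jane Doe", "email": "jane@example.com"},
    {"name": "John Roe", "email": "john@example.com"},
]

voter_file = {  # keyed by email for a simple exact-match join
    "jane@example.com": {"address": "1 Main St", "phone": "555-0100"},
}

matched = [
    {**liker, **voter_file[liker["email"]]}
    for liker in page_likers
    if liker["email"] in voter_file
]
print(matched)  # enriched supporter records: "Likes" joined to address and phone
```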
3. Facebook is getting sued by an app developer for acting like the mafia, using access to all that user data as its key enforcement tool:
“Mark Zuckerberg faces allegations that he developed a ‘malicious and fraudulent scheme’ to exploit vast amounts of private data to earn Facebook billions and force rivals out of business. A company suing Facebook in a California court claims the social network’s chief executive ‘weaponised’ the ability to access data from any user’s network of friends – the feature at the heart of the Cambridge Analytica scandal. . . . . ‘The evidence uncovered by plaintiff demonstrates that the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones,’ legal documents said. . . . . Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook. It also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access. . . . ‘They felt that it was better not to know. I found that utterly horrifying,’ he [former Facebook executive Sandy Parakilas] said. ‘If true, these allegations show a huge betrayal of users, partners and regulators. They would also show Facebook using its monopoly power to kill competition and putting profits over protecting its users.’ . . . .”
Mark Zuckerberg faces allegations that he developed a “malicious and fraudulent scheme” to exploit vast amounts of private data to earn Facebook billions and force rivals out of business. A company suing Facebook in a California court claims the social network’s chief executive “weaponised” the ability to access data from any user’s network of friends – the feature at the heart of the Cambridge Analytica scandal.
A legal motion filed last week in the superior court of San Mateo draws upon extensive confidential emails and messages between Facebook senior executives including Mark Zuckerberg. He is named individually in the case and, it is claimed, had personal oversight of the scheme.
Facebook rejects all claims, and has made a motion to have the case dismissed using a free speech defence.
It claims the first amendment protects its right to make “editorial decisions” as it sees fit. Zuckerberg and other senior executives have asserted that Facebook is a platform not a publisher, most recently in testimony to Congress.
Heather Whitney, a legal scholar who has written about social media companies for the Knight First Amendment Institute at Columbia University, said, in her opinion, this exposed a potential tension for Facebook.
“Facebook’s claims in court that it is an editor for first amendment purposes and thus free to censor and alter the content available on its site is in tension with their, especially recent, claims before the public and US Congress to be neutral platforms.”
The company that has filed the case, a former startup called Six4Three, is now trying to stop Facebook from having the case thrown out and has submitted legal arguments that draw on thousands of emails, the details of which are currently redacted. Facebook has until next Tuesday to file a motion requesting that the evidence remains sealed, otherwise the documents will be made public.
The developer alleges the correspondence shows Facebook paid lip service to privacy concerns in public but behind the scenes exploited its users’ private information.
It claims internal emails and messages reveal a cynical and abusive system set up to exploit access to users’ private information, alongside a raft of anti-competitive behaviours. . . .
. . . . The papers submitted to the court last week allege Facebook was not only aware of the implications of its privacy policy, but actively exploited them, intentionally creating and effectively flagging up the loophole that Cambridge Analytica used to collect data on up to 87 million American users.
The lawsuit also claims Zuckerberg misled the public and Congress about Facebook’s role in the Cambridge Analytica scandal by portraying it as a victim of a third party that had abused its rules for collecting and sharing data.
“The evidence uncovered by plaintiff demonstrates that the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones,” legal documents said.
The lawsuit claims to have uncovered fresh evidence concerning how Facebook made decisions about users’ privacy. It sets out allegations that, in 2012, Facebook’s advertising business, which focused on desktop ads, was devastated by a rapid and unexpected shift to smartphones.
Zuckerberg responded by forcing developers to buy expensive ads on the new, underused mobile service or risk having their access to data at the core of their business cut off, the court case alleges.
“Zuckerberg weaponised the data of one-third of the planet’s population in order to cover up his failure to transition Facebook’s business from desktop computers to mobile ads before the market became aware that Facebook’s financial projections in its 2012 IPO filings were false,” one court filing said.
In its latest filing, Six4Three alleges Facebook deliberately used its huge amounts of valuable and highly personal user data to tempt developers to create platforms within its system, implying that they would have long-term access to personal information, including data from subscribers’ Facebook friends.
Once their businesses were running, and reliant on data relating to “likes”, birthdays, friend lists and other Facebook minutiae, the social media company could and did target any that became too successful, looking to extract money from them, co-opt them or destroy them, the documents claim.
Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook. It also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access.
The lawsuit alleges that Facebook initially focused on kickstarting its mobile advertising platform, as the rapid adoption of smartphones decimated the desktop advertising business in 2012.
It later used its ability to cut off data to force rivals out of business, or coerce owners of apps Facebook coveted into selling at below the market price, even though they were not breaking any terms of their contracts, according to the documents. . . .
. . . . David Godkin, Six4Three’s lead counsel said: “We believe the public has a right to see the evidence and are confident the evidence clearly demonstrates the truth of our allegations, and much more.”
Sandy Parakilas, a former Facebook employee turned whistleblower who has testified to the UK parliament about its business practices, said the allegations were a “bombshell”. He claimed to MPs Facebook’s senior executives were aware of abuses of friends’ data back in 2011-12 and he was warned not to look into the issue.
“They felt that it was better not to know. I found that utterly horrifying,” he said. “If true, these allegations show a huge betrayal of users, partners and regulators. They would also show Facebook using its monopoly power to kill competition and putting profits over protecting its users.” . . .
4. Cambridge Analytica is officially going bankrupt, along with the elections division of its parent company, SCL Group. Apparently, the bad press has driven away clients.
Is this truly the end of Cambridge Analytica?
No.
They’re rebranding under a new company, Emerdata. Intriguingly, the new firm’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince: ” . . . . But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. . . . An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . .”
. . . . In a statement posted to its website, Cambridge Analytica said the controversy had driven away virtually all of the company’s customers, forcing it to file for bankruptcy in both the United States and Britain. The elections division of Cambridge’s British affiliate, SCL Group, will also shut down, the company said.
But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices. . . .
. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. Mr. Prince founded the private security firm Blackwater, which was renamed Xe Services after Blackwater contractors were convicted of killing Iraqi civilians.
Cambridge and SCL officials privately raised the possibility that Emerdata could be used for a Blackwater-style rebranding of Cambridge Analytica and the SCL Group, according to two people with knowledge of the companies, who asked for anonymity to describe confidential conversations. One plan under consideration was to sell off the combined company’s data and intellectual property.
An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . .
5. In the Big Data internet age, there’s one area of personal information that has yet to be incorporated into the profiles on everyone–personal banking information. ” . . . . If tech companies are in control of payment systems, they’ll know “every single thing you do,” Kapito said. It’s a different business model from traditional banking: Data is more valuable for tech firms that sell a range of different products than it is for banks that only sell financial services, he said. . . .”
The president of BlackRock, the world’s biggest asset manager, is among those who think big technology firms could invade the financial industry’s turf. Google and Facebook have thrived by collecting and storing data about consumer habits—our emails, search queries, and the videos we watch. Understanding of our financial lives could be an even richer source of data for them to sell to advertisers.
“I worry about the data,” said BlackRock president Robert Kapito at a conference in London today (Nov. 2). “We’re going to have some serious competitors.”
If tech companies are in control of payment systems, they’ll know “every single thing you do,” Kapito said. It’s a different business model from traditional banking: Data is more valuable for tech firms that sell a range of different products than it is for banks that only sell financial services, he said.
Kapito is worried because the effort to win control of payment systems is already underway—Apple will allow iMessage users to send cash to each other, and Facebook is integrating person-to-person PayPal payments into its Messenger app.
As more payments flow through mobile phones, banks are worried they could get left behind, relegated to serving as low-margin utilities. To fight back, they’ve started initiatives such as Zelle to compete with payment services like PayPal.
…
Barclays CEO Jes Staley pointed out at the conference that banks probably have the “richest data pool” of any sector, and he said some 25% of the UK’s economy flows through Barclays’ payment systems. The industry could use that information to offer better services. Companies could alert people that they’re not saving enough for retirement, or suggest ways to save money on their expenses. The trick is accessing that data and analyzing it like a big technology company would.
And banks still have one thing going for them: There’s a massive fortress of rules and regulations surrounding the industry. “No one wants to be regulated like we are,” Staley said.
6. Facebook is approaching a number of big banks – JPMorgan, Wells Fargo, Citigroup, and US Bancorp – requesting financial data including card transactions and checking-account balances. Facebook is joined in this by Google and Amazon, which are also trying to get this kind of data.
Facebook assures us that this information, which will be opt-in, is to be used solely for offering new services on Facebook Messenger. Facebook also assures us that this information, which would obviously be invaluable for delivering ads, won’t be used for ads at all. It will ONLY be used for Facebook’s Messenger service. This is a dubious assurance, in light of Facebook’s past behavior.
” . . . . Facebook increasingly wants to be a platform where people buy and sell goods and services, besides connecting with friends. The company over the past year asked JPMorgan Chase & Co., Wells Fargo & Co., Citigroup Inc. and U.S. Bancorp to discuss potential offerings it could host for bank customers on Facebook Messenger, said people familiar with the matter. Facebook has talked about a feature that would show its users their checking-account balances, the people said. It has also pitched fraud alerts, some of the people said. . . .”
Facebook Inc. wants your financial data.
The social-media giant has asked large U.S. banks to share detailed financial information about their customers, including card transactions and checking-account balances, as part of an effort to offer new services to users.
Facebook increasingly wants to be a platform where people buy and sell goods and services, besides connecting with friends. The company over the past year asked JPMorgan Chase & Co., Wells Fargo & Co., Citigroup Inc. and U.S. Bancorp to discuss potential offerings it could host for bank customers on Facebook Messenger, said people familiar with the matter.
Facebook has talked about a feature that would show its users their checking-account balances, the people said. It has also pitched fraud alerts, some of the people said.
Data privacy is a sticking point in the banks’ conversations with Facebook, according to people familiar with the matter. The talks are taking place as Facebook faces several investigations over its ties to political analytics firm Cambridge Analytica, which accessed data on as many as 87 million Facebook users without their consent.
One large U.S. bank pulled away from talks due to privacy concerns, some of the people said.
Facebook has told banks that the additional customer information could be used to offer services that might entice users to spend more time on Messenger, a person familiar with the discussions said. The company is trying to deepen user engagement: Investors shaved more than $120 billion from its market value in one day last month after it said its growth is starting to slow.
Facebook said it wouldn’t use the bank data for ad-targeting purposes or share it with third parties. . . .
. . . . Alphabet Inc.’s Google and Amazon.com Inc. also have asked banks to share data if they join with them, in order to provide basic banking services on applications such as Google Assistant and Alexa, according to people familiar with the conversations. . . .
7. In FTR #946, we examined Cambridge Analytica, the Trump and Steve Bannon-linked tech firm that harvested Facebook data on behalf of the Trump campaign.
Peter Thiel’s surveillance firm Palantir was apparently deeply involved with Cambridge Analytica’s gaming of personal data harvested from Facebook in order to engineer an electoral victory for Trump. Thiel was an early investor in Facebook, at one point was its largest shareholder and is still one of its largest shareholders. ” . . . . It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times. The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook. ‘There were senior Palantir employees that were also working on the Facebook data,’ said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . . The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .”
As a start-up called Cambridge Analytica sought to harvest the Facebook data of tens of millions of Americans in summer 2014, the company received help from at least one employee at Palantir Technologies, a top Silicon Valley contractor to American spy agencies and the Pentagon. It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times.
Cambridge ultimately took a similar approach. By early summer, the company found a university researcher to harvest data using a personality questionnaire and Facebook app. The researcher scraped private data from over 50 million Facebook users — and Cambridge Analytica went into business selling so-called psychometric profiles of American voters, setting itself on a collision course with regulators and lawmakers in the United States and Britain.
The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook.
“There were senior Palantir employees that were also working on the Facebook data,” said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . .
. . . .The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .
. . . . Documents and interviews indicate that starting in 2013, Mr. Chmieliauskas began corresponding with Mr. Wylie and a colleague from his Gmail account. At the time, Mr. Wylie and the colleague worked for the British defense and intelligence contractor SCL Group, which formed Cambridge Analytica with Mr. Mercer the next year. The three shared Google documents to brainstorm ideas about using big data to create sophisticated behavioral profiles, a product code-named “Big Daddy.”
A former intern at SCL — Sophie Schmidt, the daughter of Eric Schmidt, then Google’s executive chairman — urged the company to link up with Palantir, according to Mr. Wylie’s testimony and a June 2013 email viewed by The Times.
“Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?” one SCL employee wrote to a colleague in the email.
. . . . But he [Wylie] said some Palantir employees helped engineer Cambridge’s psychographic models.
“There were Palantir staff who would come into the office and work on the data,” Mr. Wylie told lawmakers. “And we would go and meet with Palantir staff at Palantir.” He did not provide an exact number for the employees or identify them.
Palantir employees were impressed with Cambridge’s backing from Mr. Mercer, one of the world’s richest men, according to messages viewed by The Times. And Cambridge Analytica viewed Palantir’s Silicon Valley ties as a valuable resource for launching and expanding its own business.
In an interview this month with The Times, Mr. Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014, according to Mr. Wylie.
Mr. Wylie said that he and Mr. Nix visited Palantir’s London office on Soho Square. One side was set up like a high-security office, Mr. Wylie said, with separate rooms that could be entered only with particular codes. The other side, he said, was like a tech start-up — “weird inspirational quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”
Mr. Chmieliauskas continued to communicate with Mr. Wylie’s team in 2014, as the Cambridge employees were locked in protracted negotiations with a researcher at Cambridge University, Michal Kosinski, to obtain Facebook data through an app Mr. Kosinski had built. The data was crucial to efficiently scale up Cambridge’s psychometrics products so they could be used in elections and for corporate clients. . . .
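The scale of the harvest follows from the friend-network mechanic the suggested app exploited: each quiz taker exposed their entire friend list. A back-of-envelope calculation, using the widely reported figure of roughly 270,000 app installers and an assumed average number of distinct friends reached per installer:

```python
# Back-of-envelope: how a quiz app with a few hundred thousand installers
# yields tens of millions of profiles once friends' data is included.
# ~270,000 installers is the widely reported figure for the quiz app;
# the average number of distinct friends reached is an assumption.
installers = 270_000
avg_distinct_friends_reached = 200  # assumed round number

profiles = installers * avg_distinct_friends_reached
print(f"{profiles:,} profiles")  # 54,000,000: the same order as the 50M+ reported above
```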
8a. Some terrifying and consummately important developments taking shape in the context of what Mr. Emory has called “technocratic fascism:”
Facebook wants to read your thoughts.
- ” . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from her brain. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
- ” . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
- ” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
- ” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
“Facebook Literally Wants to Read Your Thoughts” by Kristen V. Brown; Gizmodo; 4/19/2017.
At Facebook’s annual developer conference, F8, on Wednesday, the group unveiled what may be Facebook’s most ambitious—and creepiest—proposal yet. Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer.
“What if you could type directly from your brain?” Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from her brain. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute.
“That’s five times faster than you can type on your smartphone, and it’s straight from your brain,” she said. “Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.”
Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone.
“Our world is both digital and physical,” she said. “Our goal is to create and ship new, category-defining consumer products that are social first, at scale.”
She also showed a video that demonstrated a second technology that showed the ability to “listen” to human speech through vibrations on the skin. This tech has been in development to aid people with disabilities, working a little like a Braille that you feel with your body rather than your fingers. Using actuators and sensors, a connected armband was able to convey to a woman in the video a tactile vocabulary of nine different words.
Dugan adds that it’s also possible to “listen” to human speech by using your skin. It’s like using braille but through a system of actuators and sensors. Dugan showed a video example of how a woman could figure out exactly what objects were selected on a touchscreen based on inputs delivered through a connected armband.
Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. Brain-computer interface technology is still in its infancy. So far, researchers have been successful in using it to allow people with disabilities to control paralyzed or prosthetic limbs. But stimulating the brain’s motor cortex is a lot simpler than reading a person’s thoughts and then translating those thoughts into something that might actually be read by a computer.
The end goal is to build an online world that feels more immersive and real—no doubt so that you spend more time on Facebook.
“Our brains produce enough data to stream 4 HD movies every second. The problem is that the best way we have to get information out into the world — speech — can only transmit about the same amount of data as a 1980s modem,” CEO Mark Zuckerberg said in a Facebook post. “We’re working on a system that will let you type straight from your brain about 5x faster than you can type on your phone today. Eventually, we want to turn it into a wearable technology that can be manufactured at scale. Even a simple yes/no ‘brain click’ would help make things like augmented reality feel much more natural.”
…
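Zuckerberg’s throughput comparison can be sanity-checked with rough arithmetic. Assuming about six characters per word (five letters plus a space) and 8 bits per uncompressed ASCII character, the quoted rates work out as follows; both figures are back-of-envelope assumptions, not measurements from Facebook’s demo.

```python
# Back-of-envelope check of the throughput claims, assuming ~6 characters
# per word (5 letters plus a space) and 8 bits per uncompressed ASCII character.
BITS_PER_WORD = 6 * 8  # 48 bits

def bits_per_second(words_per_minute: float) -> float:
    return words_per_minute * BITS_PER_WORD / 60

print(bits_per_second(8))     # 6.4 bit/s: the eight-words-per-minute demo
print(bits_per_second(100))   # 80 bit/s: the stated goal, 1980s-modem territory (300-2,400 bit/s)
print(bits_per_second(100) / bits_per_second(20))  # 5.0: "five times faster" implies ~20 wpm phone typing
```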
8b. More about Facebook’s brain-to-computer interface:
- ” . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”
- ” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”
Facebook will assemble an independent Ethical, Legal and Social Implications (ELSI) panel to oversee its development of a direct brain-to-computer typing interface it previewed today at its F8 conference. Facebook’s R&D department Building 8’s head Regina Dugan tells TechCrunch, “It’s early days . . . we’re in the process of forming it right now.”
Meanwhile, much of the work on the brain interface is being conducted by Facebook’s university research partners like UC Berkeley and Johns Hopkins. Facebook’s technical lead on the project, Mark Chevillet, says, “They’re all held to the same standards as the NIH or other government bodies funding their work, so they already are working with institutional review boards at these universities that are ensuring that those standards are met.” Institutional review boards ensure test subjects aren’t being abused and research is being done as safely as possible.
Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on “skin-hearing” that could translate sounds into haptic feedback that people can learn to understand like braille. Dugan insists, “None of the work that we do that is related to this will be absent of these kinds of institutional review boards.”
So at least there will be independent ethicists working to minimize the potential for malicious use of Facebook’s brain-reading technology to steal or police people’s thoughts.
During our interview, Dugan showed her cognizance of people’s concerns, repeating the start of her keynote speech today saying, “I’ve never seen a technology that you developed with great impact that didn’t have unintended consequences that needed to be guardrailed or managed. In any new technology you see a lot of hype talk, some apocalyptic talk and then there’s serious work which is really focused on bringing successful outcomes to bear in a responsible way.”
In the past, she says the safeguards have been able to keep up with the pace of invention. “In the early days of the Human Genome Project there was a lot of conversation about whether we’d build a super race or whether people would be discriminated against for their genetic conditions and so on,” Dugan explains. “People took that very seriously and were responsible about it, so they formed what was called a ELSI panel . . . By the time that we got the technology available to us, that framework, that contractual, ethical framework had already been built, so that work will be done here too. That work will have to be done.” . . . .
Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, “The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.”
Facebook’s domination of social networking and advertising gives it billions in profit per quarter to pour into R&D. But its old “Move fast and break things” philosophy is a lot more frightening when it’s building brain scanners. Hopefully Facebook will prioritize the assembly of the ELSI ethics board Dugan promised and be as transparent as possible about the development of this exciting-yet-unnerving technology. . . .
- In FTR #’s 718 and 946, we detailed the frightening, ugly reality behind Facebook. Facebook is now developing technology that will permit the tapping of users’ thoughts via brain-to-computer interface technology. Facebook’s R&D is headed by Regina Dugan, who used to head the Pentagon’s DARPA. Facebook’s Building 8 is patterned after DARPA: “ . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
- ” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
- ” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
9a. Nigel Oakes is the founder of SCL, the parent company of Cambridge Analytica. His comments are related in a New York Times article. ” . . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . .”
. . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . .
9b. Mr. Oakes’ comments are related in detail in another Times article. ” . . . . Adolf Hitler ‘didn’t have a problem with the Jews at all, but people didn’t like the Jews,’ he told the academic, Emma L. Briant, a senior lecturer in journalism at the University of Essex. He went on to say that Donald J. Trump had done the same thing by tapping into grievances toward immigrants and Muslims. . . . ‘What happened with Trump, you can forget all the microtargeting and microdata and whatever, and come back to some very, very simple things,’ he told Dr. Briant. ‘Trump had the balls, and I mean, really the balls, to say what people wanted to hear.’ . . .”
. . . . Adolf Hitler “didn’t have a problem with the Jews at all, but people didn’t like the Jews,” he told the academic, Emma L. Briant, a senior lecturer in journalism at the University of Essex. He went on to say that Donald J. Trump had done the same thing by tapping into grievances toward immigrants and Muslims.
This sort of campaign, he continued, did not require bells and whistles from technology or social science.
“What happened with Trump, you can forget all the microtargeting and microdata and whatever, and come back to some very, very simple things,” he told Dr. Briant. “Trump had the balls, and I mean, really the balls, to say what people wanted to hear.” . . .
9c. Taking a look at the future of fascism in the context of AI, Tay, a “bot” created by Microsoft to respond to users of Twitter, was taken offline after users taught it to–in effect–become a Nazi bot. It is noteworthy that Tay can only respond on the basis of what she is taught. In the future, technologically accomplished and willful people like “weev” may be able to do more. Inevitably, Underground Reich elements will craft a Nazi AI that will be able to do MUCH, MUCH more!
But like all teenagers, she seems to be angry with her mother.
Microsoft has been forced to dunk Tay, its millennial-mimicking chatbot, into a vat of molten steel. The company has terminated her after the bot started tweeting abuse at people and went full neo-Nazi, declaring that “Hitler was right I hate the jews.”
@TheBigBrebowski ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism
— TayTweets (@TayandYou) March 23, 2016
Some of this appears to be “innocent” insofar as Tay is not generating these responses. Rather, if you tell her “repeat after me” she will parrot back whatever you say, allowing you to put words into her mouth. However, some of the responses were organic. The Guardian quotes one where, after being asked “is Ricky Gervais an atheist?”, Tay responded, “Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism.”
In addition to turning the bot off, Microsoft has deleted many of the offending tweets. But this isn’t an action to be taken lightly; Redmond would do well to remember that it was humans attempting to pull the plug on Skynet that proved to be the last straw, prompting the system to attack Russia in order to eliminate its enemies. We’d better hope that Tay doesn’t similarly retaliate. . . .
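To make concrete just how trivial the “repeat after me” hole described above is, here is a minimal, hypothetical sketch in Python. This is not Microsoft's code; it simply shows why a bot with an unfiltered echo command will say anything its users feed it:

# Hypothetical sketch of an echo-command chatbot -- not Tay's actual code.
def respond(message: str) -> str:
    prefix = "repeat after me "
    if message.lower().startswith(prefix):
        # The bot repeats whatever follows the command, unfiltered.
        return message[len(prefix):]
    return "tell me more!"  # placeholder small talk

print(respond("repeat after me anything a troll wants the bot to say"))
# -> prints the troll's text back verbatim

Any bot with an open echo path like this is a standing invitation for content injection, which is exactly how users put words in Tay's mouth.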
9d. As noted in a Popular Mechanics article: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t understand. . . .”
And we keep showing it our very worst selves.
We all know the half-joke about the AI apocalypse. The robots learn to think, and in their cold ones-and-zeros logic, they decide that humans—horrific pests we are—need to be exterminated. It’s the subject of countless sci-fi stories and blog posts about robots, but maybe the real danger isn’t that AI comes to such a conclusion on its own, but that it gets that idea from us.
Yesterday Microsoft launched a fun little AI Twitter chatbot that was admittedly sort of gimmicky from the start. “A.I fam from the internet that’s got zero chill,” its Twitter bio reads. At its start, its knowledge was based on public data. As Microsoft’s page for the product puts it:
Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned and filtered by the team developing Tay.
The real point of Tay, however, was to learn from humans through direct conversation, most notably direct conversation using humanity’s current leading showcase of depravity: Twitter. You might not be surprised things went off the rails, but how fast and how far is particularly staggering.
Microsoft has since deleted some of Tay’s most offensive tweets, but various publications memorialize some of the worst bits where Tay denied the existence of the holocaust, came out in support of genocide, and went all kinds of racist.
Naturally it’s horrifying, and Microsoft has been trying to clean up the mess. Though as some on Twitter have pointed out, no matter how little Microsoft would like to have “Bush did 9/11” spouting from a corporate sponsored project, Tay does serve to illustrate the most dangerous fundamental truth of artificial intelligence: It is a mirror. Artificial intelligence—specifically “neural networks” that learn behavior by ingesting huge amounts of data and trying to replicate it—needs some sort of source material to get started. They can only get that from us. There is no other way.
But before you give up on humanity entirely, there are a few things worth noting. For starters, it’s not like Tay just necessarily picked up virulent racism by just hanging out and passively listening to the buzz of the humans around it. Tay was announced in a very big way—with press coverage—and pranksters pro-actively went to it to see if they could teach it to be racist.
If you take an AI and then don’t immediately introduce it to a whole bunch of trolls shouting racism at it for the cheap thrill of seeing it learn a dirty trick, you can get some more interesting results. Endearing ones even! Multiple neural networks designed to predict text in emails and text messages have an overwhelming proclivity for saying “I love you” constantly, especially when they are otherwise at a loss for words.
So Tay’s racism isn’t necessarily a reflection of actual, human racism so much as it is the consequence of unrestrained experimentation, pushing the envelope as far as it can go the very first second we get the chance. The mirror isn’t showing our real image; it’s reflecting the ugly faces we’re making at it for fun. And maybe that’s actually worse.
Sure, Tay can’t understand what racism means any more than Gmail can really love you. And baby’s first words being “genocide lol!” is admittedly sort of funny when you aren’t talking about literal all-powerful SkyNet or a real human child. But AI is advancing at a staggering rate. . . .
. . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t understand.
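The article’s “mirror” point, that a learning system can only reproduce what it ingests, can be illustrated with a toy text generator. The sketch below is a simple bigram Markov chain in Python; it is emphatically not Tay's actual architecture (which Microsoft has not published), just a demonstration of the general “learn by ingesting data” dynamic:

# Toy bigram Markov chain -- an illustration of learning-by-ingestion,
# not Tay's architecture. The model can only recombine what it was fed.
import random
from collections import defaultdict

def train(corpus_lines):
    model = defaultdict(list)
    for line in corpus_lines:
        words = line.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)  # record which word follows which
    return model

def generate(model, start, max_words=10):
    word, output = start, [start]
    for _ in range(max_words):
        if word not in model:
            break
        word = random.choice(model[word])
        output.append(word)
    return " ".join(output)

# The "training data" is just whatever the users typed at the bot.
user_input = ["the bot learns whatever we say", "we say awful things for fun"]
print(generate(train(user_input), "we"))  # e.g. "we say awful things for fun"

Feed such a model pleasant conversation and it produces pleasant conversation; feed it coordinated racism and it produces racism. The mirror has no opinions of its own.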
Oh look, Facebook actually banned someone for posting neo-Nazi content on their platform. But there’s a catch: They banned Ukrainian activist Eduard Dolinsky for 30 days because he was posting examples of antisemitic graffiti. Dolinsky is the director of the Ukrainian Jewish Committee. According to Dolinsky, his far right opponents have a history of reporting his posts to Facebook in order to get him suspended. And this time it worked. Dolinsky appealed the ban, but to no avail.
So that happened. But first let’s take a quick look at an article from back in April that highlights how absurd this action was. The article is about a Ukrainian school teacher in Lviv, Marjana Batjuk, who posted birthday greetings to Adolf Hitler on her Facebook page on April 20 (Hitler’s birthday). She also taught her students the Nazi salute and even took some of her students to meet far right activists who had participated in a march wearing the uniform of the 14th Waffen Grenadier Division of the SS.
Batjuk, who is a member of Svoboda, later claimed her Facebook account was hacked, but a news organization found that she has a history of posting Nazi imagery on social media networks. And there’s no mention in this report of Batjuk getting banned from Facebook:
“Marjana Batjuk, who teaches at a school in Lviv and also is a councilwoman, posted her greeting on April 20, the Nazi leader’s birthday, Eduard Dolinsky, director of the Ukrainian Jewish Committee, told JTA. He called the incident a “scandal.””
She’s not just a teacher. She’s also a councilwoman. A teacher and councilwoman who likes to post positive things about Hitler on her Facebook page. And it was Eduard Dolinsky who was talking to the international media about this.
But Batjuk doesn’t just post pro-Nazi things on her Facebook page. She also takes her students to meet far right activists:
Batjuk later claimed that her Facebook page was hacked, and yet a media organization was able to find plenty of previous examples of similar posts on social media:
And if you look at that Strana news summary of her social media posts, a number of them are clearly Facebook posts. So if the Strana news organization was able to find these old posts, that’s a pretty clear indication Facebook wasn’t removing them.
That was back in April. Flash forward to today and we find a sudden willingness to ban people for posting Nazi content…except it’s Eduard Dolinsky getting banned for making people aware of the pro-Nazi graffiti that has become rampant in Ukraine:
“Dolinsky, the director of the Ukrainian Jewish Committee, said he was blocked by the social media giant for posting a photo. “I had posted the photo which says in Ukrainian ‘kill the yid’ about a month ago,” he says. “I use my Facebook account for distributing information about antisemitic incidents and hate speech and hate crimes in Ukraine.””
The director of the Ukrainian Jewish Committee gets banned for posting antisemitic content. That’s some world class trolling by Facebook.
And while it’s only a 30 day ban, that’s 30 days where Ukraine’s media and law enforcement won’t be getting Dolinsky’s updates. So it’s not just a morally absurd banning, it’s also effectively going to promote pro-Nazi graffiti in Ukraine by silencing one of the key figures covering it:
And this isn’t the first time Dolinsky has been banned from Facebook for posting this kind of content. But it’s the longest he’s been banned. And the fact that this isn’t the first time he’s been banned suggests this isn’t just an ‘oops!’ genuine mistake:
Dolinsky also notes that he has people trying to silence him precisely because of the job he does highlighting Ukraine’s official embrace of Nazi-collaborating historical figures:
So we likely have a situation where antisemites successfully got Dolinsky silenced, with Facebook ‘playing dumb’ the whole time. And as a consequence, Ukraine is facing a month without Dolinsky’s reports. Except it’s not even clear that Dolinsky is going to be allowed to clarify the situation and continue posting updates of Nazi graffiti after this month-long ban is up, because he says he’s been trying to appeal the ban, but with no success:
Given Dolinsky’s powerful criticisms of Ukraine’s embrace and historic whitewashing of the far right, it would be interesting to learn if the decision to ban Dolinsky originally came from the Atlantic Council, which is one of the main organizations to which Facebook has outsourced its troll-hunting duties.
So for all we know, Dolinsky is effectively going to be banned permanently from using Facebook to make Ukraine and the rest of the world aware of the epidemic of pro-Nazi antisemitic graffiti in Ukraine. Maybe if he sets up a pro-Nazi Facebook persona he’ll be allowed to keep doing his work.
It looks like we’re in for another round of right-wing complaints about Big Tech political bias designed to pressure companies into pushing right-wing content onto users. Recall how complaints about Facebook suppressing conservatives in the Facebook News Feed resulted in a change in policy in 2016 that unleashed a flood of far right disinformation on the platform. This time, it’s Google’s turn to face the right-wing faux-outrage machine and it’s President Trump leading it:
Trump just accused Google of biasing the search results in its search engine to give negative stories about him. Apparently he googled himself and didn’t like the results. His tweet came after a Fox Business report on Monday evening that made the claim that 96 percent of Google News results for “Trump” came from the “national left-wing media.” The report was based on some ‘analysis’ by right-wing media outlet PJ Media.
Later, during a press conference, Trump declared that Google, Facebook, and Twitter “are treading on very, very troubled territory,” and his economic advisor Larry Kudlow told the press that the issue is being investigated by the White House. And as Facebook already demonstrated, while it seems highly unlikely that the Trump administration will actually take some sort of government action to force Google to promote positive stories about Trump, it’s not like loudly complaining can’t get the job done:
“Trump told reporters in the Oval Office Tuesday that the three technology companies “are treading on very, very troubled territory,” as he added his voice to a growing chorus of conservatives who claim internet companies favor liberal viewpoints.”
The Trumpian warning shots have been fired: feed the public positive news about Trump, or else…
“Republican/Conservative & Fair Media is shut out. Illegal.”
And he literally charged Google with illegality over allegedly shutting out “Republican/Conservative & Fair Media.” Which is, of course, an absurd charge for anyone familiar with Google’s news portal. But that was part of what made the tweet so potentially threatening to these companies since it implied there was a role the government should be playing to correct this perceived law-breaking.
At the same time, it’s unclear what, legally speaking, Trump could actually do. But that didn’t stop him from issuing such threats, as he’s done in the past:
Ironically, when Trump muses about reinstating long-ended rules requiring equal time for opposing views (the “Fairness Doctrine” overturned by Reagan in 1987), he’s musing about doing something that would effectively destroy the right-wing media model, a model that is predicated on feeding the audience exclusively right-wing content. As many have noted, the demise of the Fairness Doctrine – which led to the explosion of right-wing talk radio hosts like Rush Limbaugh – probably played a big role in intellectually neutering the American public, paving the way for someone like Trump to eventually come along.
And yet, as unhinged as this latest threat may be, the administration is actually going to do “investigations and analysis” into the issue, according to Larry Kudlow:
And as we should expect, this all appears to have been triggered by a Fox Business piece on Monday night that covered a ‘study’ done by PJ Media (a right-wing media outlet) that found 96 percent of Google News results for “Trump” come from the “national left-wing media”:
Putting aside general questions about the scientific veracity of this PJ Media ‘study’, it’s kind of amusing to realize that it was a study conducted specifically on a search for “Trump” on Google News. And if you had to choose a single topic that is going to inevitably have an abundance of negative news written about it, that would be the topic of “Trump”. In other words, if you were to actually conduct a real study that attempts to assess the political bias of Google News’s search results, you almost couldn’t have picked a worse search term to test that theory on than “Trump”.
Google, not surprisingly, rejects these charges. But it’s the people who work for companies dedicated to improving how their clients fare in Google’s search results who give the most convincing responses, since their businesses literally depend on understanding Google’s algorithms:
All that said, it’s not like the blackbox nature of the algorithms behind things like Google’s search engine isn’t a legitimate topic of public interest. And that’s part of why these farcical tweets are so dangerous: the Big Tech giants like Google, Facebook, and Twitter know that it’s not impossible that they’ll be subject to algorithmic regulation someday. And they’re going to want to push that day off for as long as possible. So when Trump makes these kinds of complaints, it’s not at all inconceivable that he’s going to get the response he wants as these companies attempt to placate him. It’s also highly likely that if these companies do decide to placate him, they’re not going to publicly announce it. Instead they’ll just start rigging their algorithms to serve up more pro-Trump content and more right-wing content in general.
Also keep in mind that, despite the reputation of Silicon Valley as being run by a bunch of liberals, the reality is Silicon Valley has a strong right-wing libertarian faction, and there’s going to be no shortage of people at these companies that would love to inject a right-wing bias into their services. Trump’s stunt gives that right-wing faction of Silicon Valley leadership an excuse to do exactly that from a business standpoint.
So if you use Google News to see what the latest news is on “Trump” and you suddenly find that it’s mostly good news, keep in mind that that’s actually really, really bad news, because it means this stunt worked.
The New York Times published a big piece on the inner workings of Facebook’s response to the array of scandals that have enveloped the company in recent years, from the charges of Russian operatives using the platform to spread disinformation to the Cambridge Analytica scandal. Much of the story focuses on the actions of Sheryl Sandberg, who appears to be the top person at Facebook overseeing the company’s response to these scandals. It describes a general pattern of Facebook’s executives first ignoring problems and then using various public relations strategies to deal with the problems when they are no longer able to ignore them. And it’s the choice of public relations firms that is perhaps the biggest scandal revealed in this story: In October of 2017, Facebook hired Definers Public Affairs, a DC-based firm founded by veterans of Republican presidential politics that specializes in applying the tactics of political races to corporate public relations.
And one of the political strategies employed by Definers was simply putting out articles that put their clients in a positive light while simultaneously attacking their clients’ enemies. That’s what Definers did for Facebook, utilizing an affiliated conservative news site, NTK Network. NTK shares offices and staff with Definers, and many NTK stories are written by Definers staff and are basically attack ads on Definers’ clients’ enemies. So how does NTK get anyone to read its propaganda articles? By getting them picked up by other popular conservative outlets, including Breitbart.
Perhaps most controversially, Facebook had Definers attempt to tie various groups that are critical of Facebook to George Soros, implicitly harnessing the existing right-wing meme that George Soros is a super wealthy Jew who secretly controls almost everything. This attack by Definers centered around the Freedom from Facebook coalition. Back in July, the group had crashed the House Judiciary Committee hearings when a Facebook executive was testifying, holding up signs depicting Sheryl Sandberg and Mark Zuckerberg as two heads of an octopus stretching around the globe. The group claimed the sign was a reference to old cartoons about the Standard Oil monopoly. But such imagery also evokes classic anti-Semitic tropes, made more acute by the fact that both Sandberg and Zuckerberg are Jewish. So Facebook enlisted the ADL to condemn Freedom from Facebook over the imagery.
But charging Freedom from Facebook with anti-Semitism isn’t the only strategy Facebook used to address its critics. After the protest in Congress, Facebook had Definers basically accuse the groups behind Freedom from Facebook of being puppets of George Soros and encouraged reporters to investigate the groups’ financial ties with Soros. And this was part of a broader push by Definers to cast Soros as the man behind all of the anti-Facebook sentiment that has popped up in recent years. This, of course, plays right into the growing right-wing meme that Soros, a billionaire Jew, is behind almost everything bad in the world. And it’s a meme that also happens to be exceptionally popular with the ‘Alt Right’ neo-Nazi wing of contemporary conservatism. So Facebook dealt with its critics by first charging them with indirect anti-Semitism and then having its hired Republican public relations firm make indirect anti-Semitic attacks on those same critics:
“While Mr. Zuckerberg has conducted a public apology tour in the last year, Ms. Sandberg has overseen an aggressive lobbying campaign to combat Facebook’s critics, shift public anger toward rival companies and ward off damaging regulation. Facebook employed a Republican opposition-research firm to discredit activist protesters, in part by linking them to the liberal financier George Soros. It also tapped its business relationships, lobbying a Jewish civil rights group to cast some criticism of the company as anti-Semitic.”
Imagine if your job was to handle Facebook’s bad press. That was apparently Sheryl Sandberg’s job behind the scenes while Mark Zuckerberg was acting as the apologetic public face of Facebook.
But both Zuckerberg and Sandberg appeared to have largely the same response to the scandals involving Facebook’s growing use as a platform for spreading hate and extremism: keep Facebook out of those disputes by arguing that it’s just a platform, not a publisher:
Sandberg also appears to have increasingly relied on Joel Kaplan, Facebook’s vice president of global public policy, for advice on how to handle these issues and scandals. Kaplan previously served in the George W. Bush administration. When Donald Trump, running for president in 2015, announced his plan for a “total and complete shutdown” on Muslims entering the United States, and that message was shared more than 15,000 times on Facebook, Zuckerberg raised the question of whether or not Trump had violated the platform’s terms of service. Sandberg turned to Kaplan for advice. Kaplan, unsurprisingly, warned that any sort of crackdown on Trump’s use of Facebook would be seen as obstructing free speech and would prompt a conservative backlash. Kaplan’s advice was taken:
And note how, after Trump won, Facebook hired a former aide to Jeff Sessions and lobbying firms linked to Republican lawmakers who had jurisdiction over internet companies. Facebook was making pleasing Republicans in Washington a top priority:
Kaplan also encouraged Facebook to avoid investigating the alleged Russian troll campaigns too closely. This was his advice both in 2016, while the campaign was ongoing, and after the campaign in 2017. Interestingly, Facebook apparently found accounts linked to ‘Russian hackers’ that were using Facebook to look up information on presidential campaigns. This was in the spring of 2016. Keep in mind that the initial reports of the hacked emails didn’t start until mid-June of 2016. Summer technically started about a week later. So how did Facebook’s internal team know these accounts were associated with Russian hackers before the ‘Russian hacker’ scandal erupted? That’s unclear. But the article goes on to say that this same team also found accounts linked with the Russian hackers messaging journalists to share contents of the hacked emails. Was “Guccifer 2.0” using Facebook to talk with journalists? That’s also unclear. But it sounds like Facebook was indeed actively observing what it thought were Russian hackers using the platform:
Alex Stamos, Facebook’s head of security, directed a team to examine the Russian activity on Facebook. And yet Zuckerberg and Sandberg apparently never learned about its findings until December of 2016, after the election. And when they did learn, Sandberg got angry at Stamos for not getting approval before looking into this, because it could leave the company legally exposed, highlighting again how not knowing about the abuses on its platform is a legal strategy of the company. By January of 2017, Stamos wanted to issue a public paper on the findings, but Joel Kaplan shot down the idea, arguing that doing so would cause Republicans to turn on the company. Sandberg again agreed with Kaplan:
“Mr. Stamos, acting on his own, then directed a team to scrutinize the extent of Russian activity on Facebook. In December 2016, after Mr. Zuckerberg publicly scoffed at the idea that fake news on Facebook had helped elect Mr. Trump, Mr. Stamos — alarmed that the company’s chief executive seemed unaware of his team’s findings — met with Mr. Zuckerberg, Ms. Sandberg and other top Facebook leaders.”
Both Zuckerberg and Sandberg were apparently unaware of the findings of Stamos’s team that had been looking into Russian activity since the spring of 2016 and found early signs of the ‘Russian hacking teams’ setting up Facebook pages to distribute the emails. Huh.
And then we get to Definers Public Affairs, the company founded by Republican political operatives and specializing in bringing political tactics to corporate public relations. In October of 2017, Facebook appears to have decided to double down on the Definers strategy, a strategy that revolves around simultaneously pushing out positive Facebook coverage while attacking Facebook’s opponents and critics to muddy the waters:
Then, in March of this year, the Cambridge Analytica scandal blew open. In response, Kaplan convinced Sandberg to promote another Republican to help deal with the damage. Kevin Martin, a former FCC chairman and a Bush administration veteran, was chosen to lead Facebook’s US lobbying efforts. Definers was also tapped to deal with the scandal. And as part of that response, Definers used its affiliated NTK network to pump out waves of articles slamming Google and Apple for various reasons:
Finally, in July of this year, we find Facebook accusing its critics of anti-Semitism at the same time Definers was making an arguably anti-Semitic attack on those exact same critics, as part of its general strategy of defining Facebook’s critics as puppets of George Soros:
So as we can see, Facebook’s response to scandals appears to fall into the following pattern:
1. Intentionally ignore the scandal.
2. When it’s no longer possible to ignore, try to get ahead of it by going public with a watered down admission of the problem.
3. When getting ahead of the story doesn’t work, attack Facebook’s critics (like suggesting they are all pawns of George Soros).
4. Don’t piss off Republicans.
Also, regarding the discovery of Russian hackers setting up Facebook accounts in the spring of 2016 to distribute the hacked emails, here’s a Washington Post article from September of 2017 that talks about this. And according to the article, Facebook discovered these alleged Russian hacker accounts in June of 2016 (technically still spring) and promptly informed the FBI. The Facebook cybersecurity team was reportedly tracking APT28 (Fancy Bear) as just part of its normal work and discovered this activity in the course of that work. It told the FBI, and then shortly afterwards discovered that pages for Guccifer 2.0 and DCLeaks were being set up to promote the stolen emails. And recall in the above article that the Facebook team apparently discovered messages from these accounts to journalists.
Interestingly, while the article says this was in June of 2016, it doesn’t say when in June of 2016. And that timing is rather important, since the first Washington Post article on the hack of the DNC appeared on June 14, and Guccifer 2.0 popped up and went public just a day later. So did Facebook discover this activity before the reports about the hacked emails? That remains unclear, but it sounds like Facebook knows how to track APT28/Fancy Bear’s activity on its platform and just routinely does so, and that’s how it discovered the email-distribution operation. And that implies that if APT28/Fancy Bear really did run this operation, they did it in a manner that allowed cybersecurity researchers to track their activity all over the web and on sites like Facebook, which would be one more example of the inexplicably poor operational security of these elite Russian hackers:
“It turned out that Facebook, without realizing it, had stumbled into the Russian operation as it was getting underway in June 2016.”
It’s kind of an amazing story. Just by accident, Facebook’s cybersecurity experts were already tracking APT28 somehow and noticed a bunch of activity by the group on Facebook. They alert the FBI. This is in June of 2016. “Soon thereafter”, Facebook finds evidence that members of APT28 were setting up accounts for Guccifer 2.0 and DCLeaks. Facebook again informed the FBI:
So Facebook allegedly detected APT28/Fancy Bear activity in the spring of 2016. It’s unclear how they knew these were APT28/Fancy Bear hackers and unclear how they were tracking their activity. And then they discovered these APT28 hackers were setting up pages for Guccifer 2.0 and DCLeaks. And as we saw in the above article, they also found messages from these accounts to journalists discussing the emails.
It’s a remarkable story, in part because it’s almost never told. We learn that Facebook apparently has the ability to track exactly the same Russian hacker group that’s accused of carrying out these hacks, and we learn that Facebook watched these same hackers set up the Facebook pages for Guccifer 2.0 and DCLeaks. And yet this is almost never mentioned as evidence that Russian government hackers were indeed behind the hacks. Thus far, the attribution of these hacks to APT28/Fancy Bear has relied on CrowdStrike, the US government, and the direct investigation of the hacked Democratic Party servers. But here we’re learning that Facebook apparently has its own pool of evidence that can tie APT28 to Facebook accounts set up for Guccifer 2.0 and DCLeaks. A pool of evidence that’s almost never mentioned.
And, again, as we saw in the above article, Facebook’s chief of security, Alex Stamos, was alarmed in December of 2016 that Mark Zuckerberg and Sheryl Sandberg didn’t know about the findings of his team looking into this alleged ‘Russian’ activity. So Facebook discovered the Guccifer 2.0 and DCLeaks accounts getting set up, and Zuckerberg and Sandberg didn’t know or care about this during the 2016 election season. It all highlights one of the meta-problems facing Facebook, a meta-problem we saw on display with the Cambridge Analytica scandal and the charges by former executive Sandy Parakilas that Facebook’s management warned him not to look into problems because they determined that knowing about a problem could make the company liable if the problem is exposed. It’s a meta-problem of top management’s apparent desire not to face problems. Or at least to pretend not to face problems while they knowingly ignore them, and then unleash companies like Definers Public Affairs to clean up the mess after the fact.
And in related news, both Zuckerberg and Sandberg claim they had no idea who at Facebook hired Definers, or even that the company had hired Definers at all, until that New York Times report. In other words, Facebook’s upper management is claiming they had no idea about this latest scandal. Of course.
Now that the UK parliament’s seizure of internal Facebook documents from the Six4Three lawsuit threatens to expose what Six4Three argues was an app-developer extortion scheme personally managed by Mark Zuckerberg – a bait-and-switch scheme that enticed app developers with offers of a wealth of access to user information and then extorted the most successful apps with threats of cutting off access to the user data unless they gave Facebook a bigger cut of their profits – the question of just how many high-level Facebook scandals have yet to be revealed to the public is now much more topical. Because based on what we know so far about Facebook’s out-of-control behavior, which appears to have been sanctioned by the company’s executives, there’s no reason to assume there isn’t plenty of scandalous behavior yet to be revealed.
So in the spirit of speculating about just how corrupt Mark Zuckerberg might truly be, here’s an article that gives us some insight into the kinds of historical figures Zuckerberg spends time thinking about: Surprise! He really looks up to Caesar Augustus, the Roman emperor who took “a really harsh approach” and “had to do certain things” to achieve his grand goals:
“Powerful men do love a transhistorical man-crush – fixating on an ancestor figure, who can be venerated, perhaps surpassed. Facebook’s Mark Zuckerberg has told the New Yorker about his particular fascination with the Roman emperor, Augustus – he and his wife, Priscilla Chan, have even called one of their children August.”
He literally named his daughter after the Roman emperor. That hints at more than just a casual historical interest.
So what is it about Caesar Augustus’s rule that Zuckerberg is so enamored with? Well, based on Zuckerberg’s own words, it sounds like it was the way Augustus took a “really harsh approach” to making decisions with difficult trade-offs in order to achieve Pax Romana, 200 years of peace for the Roman empire:
And while focusing on 200 years of peace puts an obsession with Augustus in the most positive possible light, it’s hard to ignore the fact that Augustus was still a master of propaganda and the man who presided over the end of the Roman Republic and the imposition of an imperial model of government:
And that’s a little peek into Mark Zuckerberg’s mind that gives us a sense of what he spends time thinking about: historic figures who did a lot of harsh things to achieve historic ‘greatness’. That’s not a scary red flag or anything.