FTR #1021 FascisBook: (In Your Facebook, Part 3–A Virtual Panopticon, Part 3)

Dave Emory's entire lifetime of work is available on a flash drive that can be obtained HERE. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by the fall of 2017. The new drive is available for a tax-deductible contribution of $65.00 or more.

WFMU-FM is pod­cast­ing For The Record–You can sub­scribe to the pod­cast HERE.

You can sub­scribe to e‑mail alerts from Spitfirelist.com HERE.

You can subscribe to the RSS feed from Spitfirelist.com HERE.

You can sub­scribe to the com­ments made on pro­grams and posts–an excel­lent source of infor­ma­tion in, and of, itself HERE.

This broad­cast was record­ed in one, 60-minute seg­ment.

Introduction: This program follows up FTR #'s 718 and 946, in which we examined Facebook, noting how its cute, warm, friendly public facade obscured a cynical, reactionary, exploitative and, ultimately, “corporatist” ethic and operation.

The UK's Channel 4 sent an investigative journalist undercover to work for one of the third-party companies Facebook pays to moderate content. This investigative journalist was trained to take a hands-off approach to far-right violent content and fake news because that kind of content engages users for longer and increases ad revenues. ” . . . . An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups ‘exceed deletion threshold,' and that those pages are ‘subject to different treatment in the same category as pages belonging to governments and news organizations.' The accusation is a damning one, undermining Facebook's claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK's Channel 4. . . . .”

Next, we present a frightening story about AggregateIQ (AIQ), the Cambridge Analytica offshoot to which Cambridge Analytica outsourced the development of its “Ripon” psychological profiling software, and which later played a key role in the pro-Brexit campaign. The article also notes that, despite Facebook's pledge to kick Cambridge Analytica off of its platform, security researchers just found 13 apps available for Facebook that appear to be developed by AIQ. If Facebook really was trying to kick Cambridge Analytica off of its platform, it's not trying very hard. One app is even named “AIQ Johnny Scraper” and it's registered to AIQ.

The article is also a reminder that you don't necessarily need to download a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. A security researcher stumbled upon a new repository of curated Facebook data that AIQ was creating for a client, and it's entirely possible a lot of the data was scraped from public Facebook posts.

” . . . . AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called ‘AIQ Johnny Scraper' registered to the company, raising fresh questions about the effectiveness of Facebook's policing efforts. . . .”

In addition, the story highlights a form of micro-targeting that companies like AIQ make available that's fundamentally different from the algorithmic micro-targeting associated with social media abuses: micro-targeting by a human who wants to look specifically at what you personally have said about various topics on social media. This is a service where someone can type your name into a search engine and AIQ's product will serve up a list of all the various political posts you've made or the politically relevant “Likes” you've made.

Next, we note that Facebook is getting sued by an app developer for acting like the mafia, using access to all that user data as its key enforcement tool:

“Mark Zucker­berg faces alle­ga­tions that he devel­oped a ‘mali­cious and fraud­u­lent scheme’ to exploit vast amounts of pri­vate data to earn Face­book bil­lions and force rivals out of busi­ness. A com­pa­ny suing Face­book in a Cal­i­for­nia court claims the social network’s chief exec­u­tive ‘weaponised’ the abil­i­ty to access data from any user’s net­work of friends – the fea­ture at the heart of the Cam­bridge Ana­lyt­i­ca scan­dal.  . . . . ‘The evi­dence uncov­ered by plain­tiff demon­strates that the Cam­bridge Ana­lyt­i­ca scan­dal was not the result of mere neg­li­gence on Facebook’s part but was rather the direct con­se­quence of the mali­cious and fraud­u­lent scheme Zucker­berg designed in 2012 to cov­er up his fail­ure to antic­i­pate the world’s tran­si­tion to smart­phones,’ legal doc­u­ments said. . . . . Six4Three alleges up to 40,000 com­pa­nies were effec­tive­ly defraud­ed in this way by Face­book. It also alleges that senior exec­u­tives includ­ing Zucker­berg per­son­al­ly devised and man­aged the scheme, indi­vid­u­al­ly decid­ing which com­pa­nies would be cut off from data or allowed pref­er­en­tial access. . . . ‘They felt that it was bet­ter not to know. I found that utter­ly hor­ri­fy­ing,’ he [for­mer Face­book exec­u­tive Sandy Parak­i­las] said. ‘If true, these alle­ga­tions show a huge betray­al of users, part­ners and reg­u­la­tors. They would also show Face­book using its monop­oly pow­er to kill com­pe­ti­tion and putting prof­its over pro­tect­ing its users.’ . . . .”

The above-men­tioned Cam­bridge Ana­lyt­i­ca is offi­cial­ly going bank­rupt, along with the elec­tions divi­sion of its par­ent com­pa­ny, SCL Group. Appar­ent­ly their bad press has dri­ven away clients.

Is this tru­ly the end of Cam­bridge Ana­lyt­i­ca?

No.

They're rebranding under a new company, Emerdata. Intriguingly, the new firm's directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince: ” . . . . But the company's announcement left several questions unanswered, including who would retain the company's intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica's data-mining business would return under new auspices. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company's directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. . . . An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . .”

In the Big Data inter­net age, there’s one area of per­son­al infor­ma­tion that has yet to be incor­po­rat­ed into the pro­files on everyone–personal bank­ing infor­ma­tion.  ” . . . . If tech com­pa­nies are in con­trol of pay­ment sys­tems, they’ll know “every sin­gle thing you do,” Kapi­to said. It’s a dif­fer­ent busi­ness mod­el from tra­di­tion­al bank­ing: Data is more valu­able for tech firms that sell a range of dif­fer­ent prod­ucts than it is for banks that only sell finan­cial ser­vices, he said. . . .”

Facebook is approaching a number of big banks – JP Morgan, Wells Fargo, Citigroup, and US Bancorp – requesting financial data including card transactions and checking-account balances. Facebook is joined in this by Google and Amazon, who are also trying to get this kind of data.

Facebook assures us that this information, which will be opt-in, is to be used solely for offering new services on Facebook Messenger. Facebook also assures us that this information, which would obviously be invaluable for delivering ads, won't be used for ads at all; it will ONLY be used for Facebook's Messenger service. This is a dubious assurance, in light of Facebook's past behavior.

” . . . . Face­book increas­ing­ly wants to be a plat­form where peo­ple buy and sell goods and ser­vices, besides con­nect­ing with friends. The com­pa­ny over the past year asked JPMor­gan Chase & Co., Wells Far­go & Co., Cit­i­group Inc. and U.S. Ban­corp to dis­cuss poten­tial offer­ings it could host for bank cus­tomers on Face­book Mes­sen­ger, said peo­ple famil­iar with the mat­ter. Face­book has talked about a fea­ture that would show its users their check­ing-account bal­ances, the peo­ple said. It has also pitched fraud alerts, some of the peo­ple said. . . .”

Peter Thiel’s sur­veil­lance firm Palan­tir was appar­ent­ly deeply involved with Cam­bridge Ana­lyt­i­ca’s gam­ing of per­son­al data har­vest­ed from Face­book in order to engi­neer an elec­toral vic­to­ry for Trump. Thiel was an ear­ly investor in Face­book, at one point was its largest share­hold­er and is still one of its largest share­hold­ers. ” . . . . It was a Palan­tir employ­ee in Lon­don, work­ing close­ly with the data sci­en­tists build­ing Cambridge’s psy­cho­log­i­cal pro­fil­ing tech­nol­o­gy, who sug­gest­ed the sci­en­tists cre­ate their own app — a mobile-phone-based per­son­al­i­ty quiz — to gain access to Face­book users’ friend net­works, accord­ing to doc­u­ments obtained by The New York Times. The rev­e­la­tions pulled Palan­tir — co-found­ed by the wealthy lib­er­tar­i­an Peter Thiel — into the furor sur­round­ing Cam­bridge, which improp­er­ly obtained Face­book data to build ana­lyt­i­cal tools it deployed on behalf of Don­ald J. Trump and oth­er Repub­li­can can­di­dates in 2016. Mr. Thiel, a sup­port­er of Pres­i­dent Trump, serves on the board at Face­book. ‘There were senior Palan­tir employ­ees that were also work­ing on the Face­book data,’ said Christo­pher Wylie, a data expert and Cam­bridge Ana­lyt­i­ca co-founder, in tes­ti­mo­ny before British law­mak­ers on Tues­day. . . . The con­nec­tions between Palan­tir and Cam­bridge Ana­lyt­i­ca were thrust into the spot­light by Mr. Wylie’s tes­ti­mo­ny on Tues­day. Both com­pa­nies are linked to tech-dri­ven bil­lion­aires who backed Mr. Trump’s cam­paign: Cam­bridge is chiefly owned by Robert Mer­cer, the com­put­er sci­en­tist and hedge fund mag­nate, while Palan­tir was co-found­ed in 2003 by Mr. Thiel, who was an ini­tial investor in Face­book. . . .”

Pro­gram High­lights Include:

  1. Face­book’s project to incor­po­rate brain-to-com­put­er inter­face into its oper­at­ing sys­tem: ” . . . Face­book wants to build its own “brain-to-com­put­er inter­face” that would allow us to send thoughts straight to a com­put­er. ‘What if you could type direct­ly from your brain?’ Regi­na Dugan, the head of the company’s secre­tive hard­ware R&D divi­sion, Build­ing 8, asked from the stage. Dugan then pro­ceed­ed to show a video demo of a woman typ­ing eight words per minute direct­ly from the stage. In a few years, she said, the team hopes to demon­strate a real-time silent speech sys­tem capa­ble of deliv­er­ing a hun­dred words per minute. ‘That’s five times faster than you can type on your smart­phone, and it’s straight from your brain,’ she said. ‘Your brain activ­i­ty con­tains more infor­ma­tion than what a word sounds like and how it’s spelled; it also con­tains seman­tic infor­ma­tion of what those words mean.’ . . .”
  2. ” . . . . Brain-com­put­er inter­faces are noth­ing new. DARPA, which Dugan used to head, has invest­ed heav­i­ly in brain-com­put­er inter­face tech­nolo­gies to do things like cure men­tal ill­ness and restore mem­o­ries to sol­diers injured in war. But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”
  3. ” . . . . Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. . . .”
  4. ” . . . . But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”
  5. ” . . . . Face­book hopes to use opti­cal neur­al imag­ing tech­nol­o­gy to scan the brain 100 times per sec­ond to detect thoughts and turn them into text. Mean­while, it’s work­ing on ‘skin-hear­ing’ that could trans­late sounds into hap­tic feed­back that peo­ple can learn to under­stand like braille. . . .”
  6. ” . . . . Wor­ry­ing­ly, Dugan even­tu­al­ly appeared frus­trat­ed in response to my inquiries about how her team thinks about safe­ty pre­cau­tions for brain inter­faces, say­ing, ‘The flip side of the ques­tion that you’re ask­ing is ‘why invent it at all?’ and I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.’ . . . .”
  7. Some telling observations by Nigel Oakes, the founder of Cambridge Analytica parent firm SCL: ” . . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . .”
  8. Fur­ther expo­si­tion of Oakes’ state­ment: ” . . . . Adolf Hitler ‘didn’t have a prob­lem with the Jews at all, but peo­ple didn’t like the Jews,’ he told the aca­d­e­m­ic, Emma L. Bri­ant, a senior lec­tur­er in jour­nal­ism at the Uni­ver­si­ty of Essex. He went on to say that Don­ald J. Trump had done the same thing by tap­ping into griev­ances toward immi­grants and Mus­lims. . . . ‘What hap­pened with Trump, you can for­get all the micro­tar­get­ing and micro­da­ta and what­ev­er, and come back to some very, very sim­ple things,’ he told Dr. Bri­ant. ‘Trump had the balls, and I mean, real­ly the balls, to say what peo­ple want­ed to hear.’ . . .”
  9. Observations about the possible consequences of Facebook's goal of having AI govern the editorial functions of its content, as noted in a Popular Mechanics article: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout ‘we're AWFUL lol,' the lol might be the one part it doesn't understand. . . .”
  10. Microsoft­’s Tay Chat­bot offers a glimpse into this future: As one Twit­ter user not­ed, employ­ing sar­casm: “Tay went from ‘humans are super cool’ to full nazi in <24 hrs and I’m not at all con­cerned about the future of AI.”

1. The UK's Channel 4 sent an investigative journalist undercover to work for one of the third-party companies Facebook pays to moderate content. This investigative journalist was trained to take a hands-off approach to far-right violent content and fake news because that kind of content engages users for longer and increases ad revenues. ” . . . . An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups ‘exceed deletion threshold,' and that those pages are ‘subject to different treatment in the same category as pages belonging to governments and news organizations.' The accusation is a damning one, undermining Facebook's claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK's Channel 4. . . . .”

“Undercover Facebook Moderator Was Instructed Not to Remove Fringe Groups or Hate Speech” by Nick Statt; The Verge; 07/17/2018

An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups “exceed deletion threshold,” and that those pages are “subject to different treatment in the same category as pages belonging to governments and news organizations.” The accusation is a damning one, undermining Facebook's claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK's Channel 4. The investigation outlines questionable practices on behalf of CPL Resources, a third-party content moderator firm based in Dublin that Facebook has worked with since 2010.

Those ques­tion­able prac­tices pri­mar­i­ly involve a hands-off approach to flagged and report­ed con­tent like graph­ic vio­lence, hate speech, and racist and oth­er big­ot­ed rhetoric from far-right groups. The under­cov­er reporter says he was also instruct­ed to ignore users who looked as if they were under 13 years of age, which is the min­i­mum age require­ment to sign up for Face­book in accor­dance with the Child Online Pro­tec­tion Act, a 1998 pri­va­cy law passed in the US designed to pro­tect young chil­dren from exploita­tion and harm­ful and vio­lent con­tent on the inter­net. The doc­u­men­tary insin­u­ates that Face­book takes a hands-off approach to such con­tent, includ­ing bla­tant­ly false sto­ries parad­ing as truth, because it engages users for longer and dri­ves up adver­tis­ing rev­enue. . . . 

. . . . And as the Chan­nel 4 doc­u­men­tary makes clear, that thresh­old appears to be an ever-chang­ing met­ric that has no con­sis­ten­cy across par­ti­san lines and from legit­i­mate media orga­ni­za­tions to ones that ped­dle in fake news, pro­pa­gan­da, and con­spir­a­cy the­o­ries. It’s also unclear how Face­book is able to enforce its pol­i­cy with third-par­ty mod­er­a­tors all around the world, espe­cial­ly when they may be incen­tivized by any num­ber of per­for­mance met­rics and per­son­al bias­es. .  . . .

Meanwhile, Facebook is ramping up efforts in its artificial intelligence division, with the hope that one day algorithms can solve these pressing moderation problems without any human input. Earlier today, the company said it would be accelerating its AI research efforts to include more researchers and engineers, as well as new academia partnerships and expansions of its AI research labs in eight locations around the world. . . . . The long-term goal of the company's AI division is to create “machines that have some level of common sense” and that learn “how the world works by observation, like young children do in the first few months of life.” . . . .

2. Next, we present a frightening story about AggregateIQ (AIQ), the Cambridge Analytica offshoot to which Cambridge Analytica outsourced the development of its “Ripon” psychological profiling software, and which later played a key role in the pro-Brexit campaign. The article also notes that, despite Facebook's pledge to kick Cambridge Analytica off of its platform, security researchers just found 13 apps available for Facebook that appear to be developed by AIQ. If Facebook really was trying to kick Cambridge Analytica off of its platform, it's not trying very hard. One app is even named “AIQ Johnny Scraper” and it's registered to AIQ.

The following article is also a reminder that you don't necessarily need to download a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. A security researcher stumbled upon a new repository of curated Facebook data that AIQ was creating for a client, and it's entirely possible a lot of the data was scraped from public Facebook posts.

” . . . . AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called ‘AIQ Johnny Scraper' registered to the company, raising fresh questions about the effectiveness of Facebook's policing efforts. . . .”

Additionally, the story highlights a form of micro-targeting that companies like AIQ make available that's fundamentally different from the algorithmic micro-targeting we typically associate with social media abuses: micro-targeting by a human who wants to look specifically at what you personally have said about various topics on social media. This is a service where someone can type your name into a search engine and AIQ's product will serve up a list of all the various political posts you've made or the politically relevant “Likes” you've made.

It’s also worth not­ing that this ser­vice would be per­fect for accom­plish­ing the right-wing’s long-stand­ing goal of purg­ing the fed­er­al gov­ern­ment of lib­er­al employ­ees. A goal that ‘Alt-Right’ neo-Nazi troll Charles C. John­son and ‘Alt-Right’ neo-Nazi bil­lion­aire Peter Thiel report­ed­ly was help­ing the Trump team accom­plish dur­ing the tran­si­tion peri­od. An ide­o­log­i­cal purge of the State Depart­ment is report­ed­ly already under­way.  

“Aggre­gateIQ Had Data of Thou­sands of Face­book Users” by Aliya Ram and Han­nah Kuch­ler; Finan­cial Times; 06/01/2018

AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called “AIQ Johnny Scraper” registered to the company, raising fresh questions about the effectiveness of Facebook's policing efforts.

The tech­nol­o­gy group now says it shut down the John­ny Scraper app this week along with 13 oth­ers that could be relat­ed to Aggre­gateIQ, with a total of 1,000 users.

Ime Archibong, vice-president of product partnerships, said the company was investigating whether there had been any misuse of data. “We have suspended an additional 14 apps this week, which were installed by around 1,000 people,” he said. “They were all created after 2014 and so did not have access to friends' data. However, these apps appear to be linked to AggregateIQ, which was affiliated with Cambridge Analytica. So we have suspended them while we investigate further.”

Accord­ing to files seen by the Finan­cial Times, Aggre­gateIQ had stored a list of 759,934 Face­book users in a table that record­ed home address­es, phone num­bers and email address­es for some pro­files.

Jeff Sil­vester, Aggre­gateIQ chief oper­at­ing offi­cer, said the file came from soft­ware designed for a par­tic­u­lar client, which tracked which users had liked a par­tic­u­lar page or were post­ing pos­i­tive and neg­a­tive com­ments.

“I believe as part of that the client did attempt to match peo­ple who had liked their Face­book page with sup­port­ers in their vot­er file [online elec­toral records],” he said. “I believe the result of this match­ing is what you are look­ing at. This is a fair­ly com­mon task that vot­er file tools do all of the time.”

He added that the pur­pose of the John­ny Scraper app was to repli­cate Face­book posts made by one of AggregateIQ’s clients into smart­phone apps that also belonged to the client.

Aggre­gateIQ has sought to dis­tance itself from an inter­na­tion­al pri­va­cy scan­dal engulf­ing Face­book and Cam­bridge Ana­lyt­i­ca, despite alle­ga­tions from Christo­pher Wylie, a whistle­blow­er at the now-defunct UK firm, that it had act­ed as the Cana­di­an branch of the organ­i­sa­tion.

The files do not indi­cate whether users had giv­en per­mis­sion for their Face­book “Likes” to be tracked through third-par­ty apps, or whether they were scraped from pub­licly vis­i­ble pages. Mr Vick­ery, who analysed AggregateIQ’s files after uncov­er­ing a trove of infor­ma­tion online, said that the com­pa­ny appeared to have gath­ered data from Face­book users despite telling Cana­di­an MPs “we don’t real­ly process data on folks”.

The files also include posts that focus on polit­i­cal issues with state­ments such as: “Like if you agree with Rea­gan that ‘gov­ern­ment is the prob­lem’,” but it is not clear if this infor­ma­tion orig­i­nat­ed on Face­book. Mr Sil­vester said the soft­ware Aggre­gateIQ had designed allowed its client to browse pub­lic com­ments. “It is pos­si­ble that some of those pub­lic com­ments or posts are in the file,” he said. . . .

. . . . “The over­all theme of these com­pa­nies and the way their tools work is that every­thing is reliant on every­thing else, but has enough inde­pen­dent oper­abil­i­ty to pre­serve deni­a­bil­i­ty,” said Mr Vick­ery. “But when you com­bine all these dif­fer­ent data sources togeth­er it becomes some­thing else.” . . . .

3. Facebook is getting sued by an app developer for acting like the mafia, using access to all that user data as its key enforcement tool:

“Mark Zucker­berg faces alle­ga­tions that he devel­oped a ‘mali­cious and fraud­u­lent scheme’ to exploit vast amounts of pri­vate data to earn Face­book bil­lions and force rivals out of busi­ness. A com­pa­ny suing Face­book in a Cal­i­for­nia court claims the social network’s chief exec­u­tive ‘weaponised’ the abil­i­ty to access data from any user’s net­work of friends – the fea­ture at the heart of the Cam­bridge Ana­lyt­i­ca scan­dal.  . . . . ‘The evi­dence uncov­ered by plain­tiff demon­strates that the Cam­bridge Ana­lyt­i­ca scan­dal was not the result of mere neg­li­gence on Facebook’s part but was rather the direct con­se­quence of the mali­cious and fraud­u­lent scheme Zucker­berg designed in 2012 to cov­er up his fail­ure to antic­i­pate the world’s tran­si­tion to smart­phones,’ legal doc­u­ments said. . . . . Six4Three alleges up to 40,000 com­pa­nies were effec­tive­ly defraud­ed in this way by Face­book. It also alleges that senior exec­u­tives includ­ing Zucker­berg per­son­al­ly devised and man­aged the scheme, indi­vid­u­al­ly decid­ing which com­pa­nies would be cut off from data or allowed pref­er­en­tial access. . . . ‘They felt that it was bet­ter not to know. I found that utter­ly hor­ri­fy­ing,’ he [for­mer Face­book exec­u­tive Sandy Parak­i­las] said. ‘If true, these alle­ga­tions show a huge betray­al of users, part­ners and reg­u­la­tors. They would also show Face­book using its monop­oly pow­er to kill com­pe­ti­tion and putting prof­its over pro­tect­ing its users.’ . . . .”

“Zucker­berg Set Up Fraud­u­lent Scheme to ‘Weaponise’ Data, Court Case Alleges” by Car­ole Cad­wal­ladr and Emma Gra­ham-Har­ri­son; The Guardian; 05/24/2018

Mark Zucker­berg faces alle­ga­tions that he devel­oped a “mali­cious and fraud­u­lent scheme” to exploit vast amounts of pri­vate data to earn Face­book bil­lions and force rivals out of busi­ness. A com­pa­ny suing Face­book in a Cal­i­for­nia court claims the social network’s chief exec­u­tive “weaponised” the abil­i­ty to access data from any user’s net­work of friends – the fea­ture at the heart of the Cam­bridge Ana­lyt­i­ca scan­dal.

A legal motion filed last week in the supe­ri­or court of San Mateo draws upon exten­sive con­fi­den­tial emails and mes­sages between Face­book senior exec­u­tives includ­ing Mark Zucker­berg. He is named indi­vid­u­al­ly in the case and, it is claimed, had per­son­al over­sight of the scheme.

Face­book rejects all claims, and has made a motion to have the case dis­missed using a free speech defence.

It claims the first amend­ment pro­tects its right to make “edi­to­r­i­al deci­sions” as it sees fit. Zucker­berg and oth­er senior exec­u­tives have assert­ed that Face­book is a plat­form not a pub­lish­er, most recent­ly in tes­ti­mo­ny to Con­gress.

Heather Whit­ney, a legal schol­ar who has writ­ten about social media com­pa­nies for the Knight First Amend­ment Insti­tute at Colum­bia Uni­ver­si­ty, said, in her opin­ion, this exposed a poten­tial ten­sion for Face­book.

“Facebook’s claims in court that it is an edi­tor for first amend­ment pur­pos­es and thus free to cen­sor and alter the con­tent avail­able on its site is in ten­sion with their, espe­cial­ly recent, claims before the pub­lic and US Con­gress to be neu­tral plat­forms.”

The com­pa­ny that has filed the case, a for­mer start­up called Six4Three, is now try­ing to stop Face­book from hav­ing the case thrown out and has sub­mit­ted legal argu­ments that draw on thou­sands of emails, the details of which are cur­rent­ly redact­ed. Face­book has until next Tues­day to file a motion request­ing that the evi­dence remains sealed, oth­er­wise the doc­u­ments will be made pub­lic.

The devel­op­er alleges the cor­re­spon­dence shows Face­book paid lip ser­vice to pri­va­cy con­cerns in pub­lic but behind the scenes exploit­ed its users’ pri­vate infor­ma­tion.

It claims inter­nal emails and mes­sages reveal a cyn­i­cal and abu­sive sys­tem set up to exploit access to users’ pri­vate infor­ma­tion, along­side a raft of anti-com­pet­i­tive behav­iours. . . .

. . . . The papers sub­mit­ted to the court last week allege Face­book was not only aware of the impli­ca­tions of its pri­va­cy pol­i­cy, but active­ly exploit­ed them, inten­tion­al­ly cre­at­ing and effec­tive­ly flag­ging up the loop­hole that Cam­bridge Ana­lyt­i­ca used to col­lect data on up to 87 mil­lion Amer­i­can users.

The law­suit also claims Zucker­berg mis­led the pub­lic and Con­gress about Facebook’s role in the Cam­bridge Ana­lyt­i­ca scan­dal by por­tray­ing it as a vic­tim of a third par­ty that had abused its rules for col­lect­ing and shar­ing data.

“The evi­dence uncov­ered by plain­tiff demon­strates that the Cam­bridge Ana­lyt­i­ca scan­dal was not the result of mere neg­li­gence on Facebook’s part but was rather the direct con­se­quence of the mali­cious and fraud­u­lent scheme Zucker­berg designed in 2012 to cov­er up his fail­ure to antic­i­pate the world’s tran­si­tion to smart­phones,” legal doc­u­ments said.

The law­suit claims to have uncov­ered fresh evi­dence con­cern­ing how Face­book made deci­sions about users’ pri­va­cy. It sets out alle­ga­tions that, in 2012, Facebook’s adver­tis­ing busi­ness, which focused on desk­top ads, was dev­as­tat­ed by a rapid and unex­pect­ed shift to smart­phones.

Zucker­berg respond­ed by forc­ing devel­op­ers to buy expen­sive ads on the new, under­used mobile ser­vice or risk hav­ing their access to data at the core of their busi­ness cut off, the court case alleges.

“Zucker­berg weaponised the data of one-third of the planet’s pop­u­la­tion in order to cov­er up his fail­ure to tran­si­tion Facebook’s busi­ness from desk­top com­put­ers to mobile ads before the mar­ket became aware that Facebook’s finan­cial pro­jec­tions in its 2012 IPO fil­ings were false,” one court fil­ing said.

In its lat­est fil­ing, Six4Three alleges Face­book delib­er­ate­ly used its huge amounts of valu­able and high­ly per­son­al user data to tempt devel­op­ers to cre­ate plat­forms with­in its sys­tem, imply­ing that they would have long-term access to per­son­al infor­ma­tion, includ­ing data from sub­scribers’ Face­book friends. 

Once their busi­ness­es were run­ning, and reliant on data relat­ing to “likes”, birth­days, friend lists and oth­er Face­book minu­ti­ae, the social media com­pa­ny could and did tar­get any that became too suc­cess­ful, look­ing to extract mon­ey from them, co-opt them or destroy them, the doc­u­ments claim.

Six4Three alleges up to 40,000 com­pa­nies were effec­tive­ly defraud­ed in this way by Face­book. It also alleges that senior exec­u­tives includ­ing Zucker­berg per­son­al­ly devised and man­aged the scheme, indi­vid­u­al­ly decid­ing which com­pa­nies would be cut off from data or allowed pref­er­en­tial access.

The law­suit alleges that Face­book ini­tial­ly focused on kick­start­ing its mobile adver­tis­ing plat­form, as the rapid adop­tion of smart­phones dec­i­mat­ed the desk­top adver­tis­ing busi­ness in 2012.

It lat­er used its abil­i­ty to cut off data to force rivals out of busi­ness, or coerce own­ers of apps Face­book cov­et­ed into sell­ing at below the mar­ket price, even though they were not break­ing any terms of their con­tracts, accord­ing to the doc­u­ments. . . .

. . . . David God­kin, Six4Three’s lead coun­sel said: “We believe the pub­lic has a right to see the evi­dence and are con­fi­dent the evi­dence clear­ly demon­strates the truth of our alle­ga­tions, and much more.”

Sandy Parak­i­las, a for­mer Face­book employ­ee turned whistle­blow­er who has tes­ti­fied to the UK par­lia­ment about its busi­ness prac­tices, said the alle­ga­tions were a “bomb­shell”. He claimed to MPs Facebook’s senior exec­u­tives were aware of abus­es of friends’ data back in 2011-12 and he was warned not to look into the issue.

“They felt that it was bet­ter not to know. I found that utter­ly hor­ri­fy­ing,” he said. “If true, these alle­ga­tions show a huge betray­al of users, part­ners and reg­u­la­tors. They would also show Face­book using its monop­oly pow­er to kill com­pe­ti­tion and putting prof­its over pro­tect­ing its users.” . . .

4. Cam­bridge Ana­lyt­i­ca is offi­cial­ly going bank­rupt, along with the elec­tions divi­sion of its par­ent com­pa­ny, SCL Group. Appar­ent­ly their bad press has dri­ven away clients.

Is this tru­ly the end of Cam­bridge Ana­lyt­i­ca?

No.

They're rebranding under a new company, Emerdata. Intriguingly, the new firm's directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince: ” . . . . But the company's announcement left several questions unanswered, including who would retain the company's intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica's data-mining business would return under new auspices. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company's directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. . . . An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . .”

“Cam­bridge Ana­lyt­i­ca to File for Bank­rupt­cy After Mis­use of Face­book Data” by Nicholas Con­fes­sore and Matthew Rosen­berg; The New York Times; 5/02/2018.

. . . . In a state­ment post­ed to its web­site, Cam­bridge Ana­lyt­i­ca said the con­tro­ver­sy had dri­ven away vir­tu­al­ly all of the company’s cus­tomers, forc­ing it to file for bank­rupt­cy in both the Unit­ed States and Britain. The elec­tions divi­sion of Cambridge’s British affil­i­ate, SCL Group, will also shut down, the com­pa­ny said.

But the company’s announce­ment left sev­er­al ques­tions unan­swered, includ­ing who would retain the company’s intel­lec­tu­al prop­er­ty — the so-called psy­cho­graph­ic vot­er pro­files built in part with data from Face­book — and whether Cam­bridge Analytica’s data-min­ing busi­ness would return under new aus­pices. . . . 

. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company's directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. Mr. Prince founded the private security firm Blackwater, which was renamed Xe Services after Blackwater contractors were convicted of killing Iraqi civilians.

Cambridge and SCL officials privately raised the possibility that Emerdata could be used for a Blackwater-style rebranding of Cambridge Analytica and the SCL Group, according to two people with knowledge of the companies, who asked for anonymity to describe confidential conversations. One plan under consideration was to sell off the combined company's data and intellectual property.

An exec­u­tive and a part own­er of SCL Group, Nigel Oakes, has pub­licly described Emer­da­ta as a way of rolling up the two com­pa­nies under one new ban­ner. . . . 

5. In the Big Data inter­net age, there’s one area of per­son­al infor­ma­tion that has yet to be incor­po­rat­ed into the pro­files on everyone–personal bank­ing infor­ma­tion.  ” . . . . If tech com­pa­nies are in con­trol of pay­ment sys­tems, they’ll know “every sin­gle thing you do,” Kapi­to said. It’s a dif­fer­ent busi­ness mod­el from tra­di­tion­al bank­ing: Data is more valu­able for tech firms that sell a range of dif­fer­ent prod­ucts than it is for banks that only sell finan­cial ser­vices, he said. . . .”

“Black­Rock Is Wor­ried Tech­nol­o­gy Firms Are About to Know ‘Every Sin­gle Thing You Do’” by John Detrix­he; Quartz; 11/02/2017

The pres­i­dent of Black­Rock, the world’s biggest asset man­ag­er, is among those who think big tech­nol­o­gy firms could invade the finan­cial industry’s turf. Google and Face­book have thrived by col­lect­ing and stor­ing data about con­sumer habits—our emails, search queries, and the videos we watch. Under­stand­ing of our finan­cial lives could be an even rich­er source of data for them to sell to adver­tis­ers.

“I wor­ry about the data,” said Black­Rock pres­i­dent Robert Kapi­to at a con­fer­ence in Lon­don today (Nov. 2). “We’re going to have some seri­ous com­peti­tors.”

If tech com­pa­nies are in con­trol of pay­ment sys­tems, they’ll know “every sin­gle thing you do,” Kapi­to said. It’s a dif­fer­ent busi­ness mod­el from tra­di­tion­al bank­ing: Data is more valu­able for tech firms that sell a range of dif­fer­ent prod­ucts than it is for banks that only sell finan­cial ser­vices, he said.

Kapi­to is wor­ried because the effort to win con­trol of pay­ment sys­tems is already underway—Apple will allow iMes­sage users to send cash to each oth­er, and Face­book is inte­grat­ing per­son-to-per­son Pay­Pal pay­ments into its Mes­sen­ger app.

As more pay­ments flow through mobile phones, banks are wor­ried they could get left behind, rel­e­gat­ed to serv­ing as low-mar­gin util­i­ties. To fight back, they’ve start­ed ini­tia­tives such as Zelle to com­pete with pay­ment ser­vices like Pay­Pal.

Barclays CEO Jes Staley pointed out at the conference that banks probably have the “richest data pool” of any sector, and he said some 25% of the UK's economy flows through Barclays' payment systems. The industry could use that information to offer better services. Companies could alert people that they're not saving enough for retirement, or suggest ways to save money on their expenses. The trick is accessing that data and analyzing it like a big technology company would.

And banks still have one thing going for them: There’s a mas­sive fortress of rules and reg­u­la­tions sur­round­ing the indus­try. “No one wants to be reg­u­lat­ed like we are,” Sta­ley said.

6. Facebook is approaching a number of big banks – JP Morgan, Wells Fargo, Citigroup, and US Bancorp – requesting financial data including card transactions and checking-account balances. Facebook is joined in this by Google and Amazon, who are also trying to get this kind of data.

Facebook assures us that this information, which will be opt-in, is to be used solely for offering new services on Facebook Messenger. Facebook also assures us that this information, which would obviously be invaluable for delivering ads, won't be used for ads at all; it will ONLY be used for Facebook's Messenger service. This is a dubious assurance, in light of Facebook's past behavior.

” . . . . Face­book increas­ing­ly wants to be a plat­form where peo­ple buy and sell goods and ser­vices, besides con­nect­ing with friends. The com­pa­ny over the past year asked JPMor­gan Chase & Co., Wells Far­go & Co., Cit­i­group Inc. and U.S. Ban­corp to dis­cuss poten­tial offer­ings it could host for bank cus­tomers on Face­book Mes­sen­ger, said peo­ple famil­iar with the mat­ter. Face­book has talked about a fea­ture that would show its users their check­ing-account bal­ances, the peo­ple said. It has also pitched fraud alerts, some of the peo­ple said. . . .”

“Face­book to Banks: Give Us Your Data, We’ll Give You Our Users” by Emi­ly Glaz­er, Deepa Seethara­man and Anna­Maria Andri­o­tis; The Wall Street Jour­nal; 08/06/2018

Face­book Inc. wants your finan­cial data.

The social-media giant has asked large U.S. banks to share detailed finan­cial infor­ma­tion about their cus­tomers, includ­ing card trans­ac­tions and check­ing-account bal­ances, as part of an effort to offer new ser­vices to users.

Face­book increas­ing­ly wants to be a plat­form where peo­ple buy and sell goods and ser­vices, besides con­nect­ing with friends. The com­pa­ny over the past year asked JPMor­gan Chase & Co., Wells Far­go & Co., Cit­i­group Inc. and U.S. Ban­corp to dis­cuss poten­tial offer­ings it could host for bank cus­tomers on Face­book Mes­sen­ger, said peo­ple famil­iar with the mat­ter.

Face­book has talked about a fea­ture that would show its users their check­ing-account bal­ances, the peo­ple said. It has also pitched fraud alerts, some of the peo­ple said.

Data pri­va­cy is a stick­ing point in the banks’ con­ver­sa­tions with Face­book, accord­ing to peo­ple famil­iar with the mat­ter. The talks are tak­ing place as Face­book faces sev­er­al inves­ti­ga­tions over its ties to polit­i­cal ana­lyt­ics firm Cam­bridge Ana­lyt­i­ca, which accessed data on as many as 87 mil­lion Face­book users with­out their con­sent.

One large U.S. bank pulled away from talks due to pri­va­cy con­cerns, some of the peo­ple said.

Facebook has told banks that the additional customer information could be used to offer services that might entice users to spend more time on Messenger, a person familiar with the discussions said. The company is trying to deepen user engagement: Investors shaved more than $120 billion from its market value in one day last month after it said its growth is starting to slow.

Face­book said it wouldn’t use the bank data for ad-tar­get­ing pur­pos­es or share it with third par­ties. . . .

. . . . Alpha­bet Inc.’s Google and Amazon.com Inc. also have asked banks to share data if they join with them, in order to pro­vide basic bank­ing ser­vices on appli­ca­tions such as Google Assis­tant and Alexa, accord­ing to peo­ple famil­iar with the con­ver­sa­tions. . . . 

7. In FTR #946, we examined Cambridge Analytica, the Trump- and Steve Bannon-linked tech firm that harvested Facebook data on behalf of the Trump campaign.

Peter Thiel’s sur­veil­lance firm Palan­tir was appar­ent­ly deeply involved with Cam­bridge Ana­lyt­i­ca’s gam­ing of per­son­al data har­vest­ed from Face­book in order to engi­neer an elec­toral vic­to­ry for Trump. Thiel was an ear­ly investor in Face­book, at one point was its largest share­hold­er and is still one of its largest share­hold­ers. ” . . . . It was a Palan­tir employ­ee in Lon­don, work­ing close­ly with the data sci­en­tists build­ing Cambridge’s psy­cho­log­i­cal pro­fil­ing tech­nol­o­gy, who sug­gest­ed the sci­en­tists cre­ate their own app — a mobile-phone-based per­son­al­i­ty quiz — to gain access to Face­book users’ friend net­works, accord­ing to doc­u­ments obtained by The New York Times. The rev­e­la­tions pulled Palan­tir — co-found­ed by the wealthy lib­er­tar­i­an Peter Thiel — into the furor sur­round­ing Cam­bridge, which improp­er­ly obtained Face­book data to build ana­lyt­i­cal tools it deployed on behalf of Don­ald J. Trump and oth­er Repub­li­can can­di­dates in 2016. Mr. Thiel, a sup­port­er of Pres­i­dent Trump, serves on the board at Face­book. ‘There were senior Palan­tir employ­ees that were also work­ing on the Face­book data,’ said Christo­pher Wylie, a data expert and Cam­bridge Ana­lyt­i­ca co-founder, in tes­ti­mo­ny before British law­mak­ers on Tues­day. . . . The con­nec­tions between Palan­tir and Cam­bridge Ana­lyt­i­ca were thrust into the spot­light by Mr. Wylie’s tes­ti­mo­ny on Tues­day. Both com­pa­nies are linked to tech-dri­ven bil­lion­aires who backed Mr. Trump’s cam­paign: Cam­bridge is chiefly owned by Robert Mer­cer, the com­put­er sci­en­tist and hedge fund mag­nate, while Palan­tir was co-found­ed in 2003 by Mr. Thiel, who was an ini­tial investor in Face­book. . . .”

“Spy Contractor's Idea Helped Cambridge Analytica Harvest Facebook Data” by Nicholas Confessore and Matthew Rosenberg; The New York Times; 03/27/2018

As a start-up called Cam­bridge Ana­lyt­i­ca sought to har­vest the Face­book data of tens of mil­lions of Amer­i­cans in sum­mer 2014, the com­pa­ny received help from at least one employ­ee at Palan­tir Tech­nolo­gies, a top Sil­i­con Val­ley con­trac­tor to Amer­i­can spy agen­cies and the Pen­ta­gon. It was a Palan­tir employ­ee in Lon­don, work­ing close­ly with the data sci­en­tists build­ing Cambridge’s psy­cho­log­i­cal pro­fil­ing tech­nol­o­gy, who sug­gest­ed the sci­en­tists cre­ate their own app — a mobile-phone-based per­son­al­i­ty quiz — to gain access to Face­book users’ friend net­works, accord­ing to doc­u­ments obtained by The New York Times.

Cam­bridge ulti­mate­ly took a sim­i­lar approach. By ear­ly sum­mer, the com­pa­ny found a uni­ver­si­ty researcher to har­vest data using a per­son­al­i­ty ques­tion­naire and Face­book app. The researcher scraped pri­vate data from over 50 mil­lion Face­book users — and Cam­bridge Ana­lyt­i­ca went into busi­ness sell­ing so-called psy­cho­me­t­ric pro­files of Amer­i­can vot­ers, set­ting itself on a col­li­sion course with reg­u­la­tors and law­mak­ers in the Unit­ed States and Britain.

The rev­e­la­tions pulled Palan­tir — co-found­ed by the wealthy lib­er­tar­i­an Peter Thiel — into the furor sur­round­ing Cam­bridge, which improp­er­ly obtained Face­book data to build ana­lyt­i­cal tools it deployed on behalf of Don­ald J. Trump and oth­er Repub­li­can can­di­dates in 2016. Mr. Thiel, a sup­port­er of Pres­i­dent Trump, serves on the board at Face­book.

“There were senior Palan­tir employ­ees that were also work­ing on the Face­book data,” said Christo­pher Wylie, a data expert and Cam­bridge Ana­lyt­i­ca co-founder, in tes­ti­mo­ny before British law­mak­ers on Tues­day. . . .

. . . .The con­nec­tions between Palan­tir and Cam­bridge Ana­lyt­i­ca were thrust into the spot­light by Mr. Wylie’s tes­ti­mo­ny on Tues­day. Both com­pa­nies are linked to tech-dri­ven bil­lion­aires who backed Mr. Trump’s cam­paign: Cam­bridge is chiefly owned by Robert Mer­cer, the com­put­er sci­en­tist and hedge fund mag­nate, while Palan­tir was co-found­ed in 2003 by Mr. Thiel, who was an ini­tial investor in Face­book. . . .

. . . . Doc­u­ments and inter­views indi­cate that start­ing in 2013, Mr. Chmieli­auskas began cor­re­spond­ing with Mr. Wylie and a col­league from his Gmail account. At the time, Mr. Wylie and the col­league worked for the British defense and intel­li­gence con­trac­tor SCL Group, which formed Cam­bridge Ana­lyt­i­ca with Mr. Mer­cer the next year. The three shared Google doc­u­ments to brain­storm ideas about using big data to cre­ate sophis­ti­cat­ed behav­ioral pro­files, a prod­uct code-named “Big Dad­dy.”

A for­mer intern at SCL — Sophie Schmidt, the daugh­ter of Eric Schmidt, then Google’s exec­u­tive chair­man — urged the com­pa­ny to link up with Palan­tir, accord­ing to Mr. Wylie’s tes­ti­mo­ny and a June 2013 email viewed by The Times.

“Ever come across Palan­tir. Amus­ing­ly Eric Schmidt’s daugh­ter was an intern with us and is try­ing to push us towards them?” one SCL employ­ee wrote to a col­league in the email.

. . . . But he [Wylie] said some Palan­tir employ­ees helped engi­neer Cambridge’s psy­cho­graph­ic mod­els.

“There were Palan­tir staff who would come into the office and work on the data,” Mr. Wylie told law­mak­ers. “And we would go and meet with Palan­tir staff at Palan­tir.” He did not pro­vide an exact num­ber for the employ­ees or iden­ti­fy them.

Palan­tir employ­ees were impressed with Cambridge’s back­ing from Mr. Mer­cer, one of the world’s rich­est men, accord­ing to mes­sages viewed by The Times. And Cam­bridge Ana­lyt­i­ca viewed Palantir’s Sil­i­con Val­ley ties as a valu­able resource for launch­ing and expand­ing its own busi­ness.

In an inter­view this month with The Times, Mr. Wylie said that Palan­tir employ­ees were eager to learn more about using Face­book data and psy­cho­graph­ics. Those dis­cus­sions con­tin­ued through spring 2014, accord­ing to Mr. Wylie.

Mr. Wylie said that he and Mr. Nix vis­it­ed Palantir’s Lon­don office on Soho Square. One side was set up like a high-secu­ri­ty office, Mr. Wylie said, with sep­a­rate rooms that could be entered only with par­tic­u­lar codes. The oth­er side, he said, was like a tech start-up — “weird inspi­ra­tional quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”

Mr. Chmieli­auskas con­tin­ued to com­mu­ni­cate with Mr. Wylie’s team in 2014, as the Cam­bridge employ­ees were locked in pro­tract­ed nego­ti­a­tions with a researcher at Cam­bridge Uni­ver­si­ty, Michal Kosin­s­ki, to obtain Face­book data through an app Mr. Kosin­s­ki had built. The data was cru­cial to effi­cient­ly scale up Cambridge’s psy­cho­met­rics prod­ucts so they could be used in elec­tions and for cor­po­rate clients. . . .

8a. Some ter­ri­fy­ing and con­sum­mate­ly impor­tant devel­op­ments tak­ing shape in the con­text of what Mr. Emory has called “tech­no­crat­ic fas­cism:”

Face­book wants to read your thoughts.

  1. ” . . . Face­book wants to build its own “brain-to-com­put­er inter­face” that would allow us to send thoughts straight to a com­put­er. ‘What if you could type direct­ly from your brain?’ Regi­na Dugan, the head of the company’s secre­tive hard­ware R&D divi­sion, Build­ing 8, asked from the stage. Dugan then pro­ceed­ed to show a video demo of a woman typ­ing eight words per minute direct­ly from the stage. In a few years, she said, the team hopes to demon­strate a real-time silent speech sys­tem capa­ble of deliv­er­ing a hun­dred words per minute. ‘That’s five times faster than you can type on your smart­phone, and it’s straight from your brain,’ she said. ‘Your brain activ­i­ty con­tains more infor­ma­tion than what a word sounds like and how it’s spelled; it also con­tains seman­tic infor­ma­tion of what those words mean.’ . . .”
  2. ” . . . . Brain-com­put­er inter­faces are noth­ing new. DARPA, which Dugan used to head, has invest­ed heav­i­ly in brain-com­put­er inter­face tech­nolo­gies to do things like cure men­tal ill­ness and restore mem­o­ries to sol­diers injured in war. But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”
  3. ” . . . . Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. . . .”
  4. ” . . . . But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”

“Facebook Literally Wants to Read Your Thoughts” by Kristen V. Brown; Gizmodo; 4/19/2017.

At Facebook’s annu­al devel­op­er con­fer­ence, F8, on Wednes­day, the group unveiled what may be Facebook’s most ambitious—and creepiest—proposal yet. Face­book wants to build its own “brain-to-com­put­er inter­face” that would allow us to send thoughts straight to a com­put­er.

“What if you could type directly from your brain?” Regina Dugan, the head of the company's secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute.

“That’s five times faster than you can type on your smart­phone, and it’s straight from your brain,” she said. “Your brain activ­i­ty con­tains more infor­ma­tion than what a word sounds like and how it’s spelled; it also con­tains seman­tic infor­ma­tion of what those words mean.”

Brain-com­put­er inter­faces are noth­ing new. DARPA, which Dugan used to head, has invest­ed heav­i­ly in brain-com­put­er inter­face tech­nolo­gies to do things like cure men­tal ill­ness and restore mem­o­ries to sol­diers injured in war. But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone.

“Our world is both dig­i­tal and phys­i­cal,” she said. “Our goal is to cre­ate and ship new, cat­e­go­ry-defin­ing con­sumer prod­ucts that are social first, at scale.”

She also showed a video that demon­strat­ed a sec­ond tech­nol­o­gy that showed the abil­i­ty to “lis­ten” to human speech through vibra­tions on the skin. This tech has been in devel­op­ment to aid peo­ple with dis­abil­i­ties, work­ing a lit­tle like a Braille that you feel with your body rather than your fin­gers. Using actu­a­tors and sen­sors, a con­nect­ed arm­band was able to con­vey to a woman in the video a tac­tile vocab­u­lary of nine dif­fer­ent words.

Dugan adds that it’s also pos­si­ble to “lis­ten” to human speech by using your skin. It’s like using braille but through a sys­tem of actu­a­tors and sen­sors. Dugan showed a video exam­ple of how a woman could fig­ure out exact­ly what objects were select­ed on a touch­screen based on inputs deliv­ered through a con­nect­ed arm­band.

Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. Brain-com­put­er inter­face tech­nol­o­gy is still in its infan­cy. So far, researchers have been suc­cess­ful in using it to allow peo­ple with dis­abil­i­ties to con­trol par­a­lyzed or pros­thet­ic limbs. But stim­u­lat­ing the brain’s motor cor­tex is a lot sim­pler than read­ing a person’s thoughts and then trans­lat­ing those thoughts into some­thing that might actu­al­ly be read by a com­put­er.

The end goal is to build an online world that feels more immer­sive and real—no doubt so that you spend more time on Face­book.

“Our brains pro­duce enough data to stream 4 HD movies every sec­ond. The prob­lem is that the best way we have to get infor­ma­tion out into the world — speech — can only trans­mit about the same amount of data as a 1980s modem,” CEO Mark Zucker­berg said in a Face­book post. “We’re work­ing on a sys­tem that will let you type straight from your brain about 5x faster than you can type on your phone today. Even­tu­al­ly, we want to turn it into a wear­able tech­nol­o­gy that can be man­u­fac­tured at scale. Even a sim­ple yes/no ‘brain click’ would help make things like aug­ment­ed real­i­ty feel much more nat­ur­al.”
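
For a rough sense of the arithmetic behind those speed claims, here is a back-of-the-envelope sketch in Python. The demo and target figures are the ones quoted above; the implied smartphone typing speed is only an inference from the “5x” comparison, not a number Facebook published.

    # Back-of-the-envelope arithmetic for the quoted speed claims.
    demo_wpm = 8           # typing speed shown in Dugan's video demo
    target_wpm = 100       # stated goal for the silent-speech system
    speedup_vs_phone = 5   # "five times faster than you can type on your smartphone"

    implied_phone_wpm = target_wpm / speedup_vs_phone   # roughly 20 words per minute
    print(f"Implied smartphone typing speed: {implied_phone_wpm:.0f} words per minute")
    print(f"Stated target is {target_wpm / demo_wpm:.1f}x faster than the demo")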


8b. More about Face­book’s brain-to-com­put­er inter­face:

  1. ” . . . . Face­book hopes to use opti­cal neur­al imag­ing tech­nol­o­gy to scan the brain 100 times per sec­ond to detect thoughts and turn them into text. Mean­while, it’s work­ing on ‘skin-hear­ing’ that could trans­late sounds into hap­tic feed­back that peo­ple can learn to under­stand like braille. . . .”
  2. ” . . . . Wor­ry­ing­ly, Dugan even­tu­al­ly appeared frus­trat­ed in response to my inquiries about how her team thinks about safe­ty pre­cau­tions for brain inter­faces, say­ing, ‘The flip side of the ques­tion that you’re ask­ing is ‘why invent it at all?’ and I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.’ . . . .”

“Facebook Plans Ethics Board to Monitor Its Brain-Computer Interface Work” by Josh Constine; TechCrunch; 4/19/2017.

Face­book will assem­ble an inde­pen­dent Eth­i­cal, Legal and Social Impli­ca­tions (ELSI) pan­el to over­see its devel­op­ment of a direct brain-to-com­put­er typ­ing inter­face it pre­viewed today at its F8 con­fer­ence. Facebook’s R&D depart­ment Build­ing 8’s head Regi­na Dugan tells TechCrunch, “It’s ear­ly days . . . we’re in the process of form­ing it right now.”

Mean­while, much of the work on the brain inter­face is being con­duct­ed by Facebook’s uni­ver­si­ty research part­ners like UC Berke­ley and Johns Hop­kins. Facebook’s tech­ni­cal lead on the project, Mark Chevil­let, says, “They’re all held to the same stan­dards as the NIH or oth­er gov­ern­ment bod­ies fund­ing their work, so they already are work­ing with insti­tu­tion­al review boards at these uni­ver­si­ties that are ensur­ing that those stan­dards are met.” Insti­tu­tion­al review boards ensure test sub­jects aren’t being abused and research is being done as safe­ly as pos­si­ble.

Face­book hopes to use opti­cal neur­al imag­ing tech­nol­o­gy to scan the brain 100 times per sec­ond to detect thoughts and turn them into text. Mean­while, it’s work­ing on “skin-hear­ing” that could trans­late sounds into hap­tic feed­back that peo­ple can learn to under­stand like braille. Dugan insists, “None of the work that we do that is relat­ed to this will be absent of these kinds of insti­tu­tion­al review boards.”

So at least there will be inde­pen­dent ethi­cists work­ing to min­i­mize the poten­tial for mali­cious use of Facebook’s brain-read­ing tech­nol­o­gy to steal or police people’s thoughts.

Dur­ing our inter­view, Dugan showed her cog­nizance of people’s con­cerns, repeat­ing the start of her keynote speech today say­ing, “I’ve nev­er seen a tech­nol­o­gy that you devel­oped with great impact that didn’t have unin­tend­ed con­se­quences that need­ed to be guardrailed or man­aged. In any new tech­nol­o­gy you see a lot of hype talk, some apoc­a­lyp­tic talk and then there’s seri­ous work which is real­ly focused on bring­ing suc­cess­ful out­comes to bear in a respon­si­ble way.”

In the past, she says the safe­guards have been able to keep up with the pace of inven­tion. “In the ear­ly days of the Human Genome Project there was a lot of con­ver­sa­tion about whether we’d build a super race or whether peo­ple would be dis­crim­i­nat­ed against for their genet­ic con­di­tions and so on,” Dugan explains. “Peo­ple took that very seri­ous­ly and were respon­si­ble about it, so they formed what was called a ELSI pan­el . . . By the time that we got the tech­nol­o­gy avail­able to us, that frame­work, that con­trac­tu­al, eth­i­cal frame­work had already been built, so that work will be done here too. That work will have to be done.” . . . .

Wor­ry­ing­ly, Dugan even­tu­al­ly appeared frus­trat­ed in response to my inquiries about how her team thinks about safe­ty pre­cau­tions for brain inter­faces, say­ing, “The flip side of the ques­tion that you’re ask­ing is ‘why invent it at all?’ and I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.”

Facebook’s domination of social networking and advertising gives it billions in profit per quarter to pour into R&D. But its old “Move fast and break things” philosophy is a lot more frightening when it’s building brain scanners. Hopefully Facebook will prioritize the assembly of the ELSI ethics board Dugan promised and be as transparent as possible about the development of this exciting-yet-unnerving technology.…

  1. In FTR #‘s 718 and 946, we detailed the frightening, ugly reality behind Facebook. Facebook is now developing technology that will permit the tapping of users’ thoughts via brain-to-computer interface technology. Facebook’s R & D is headed by Regina Dugan, who used to head the Pentagon’s DARPA. Facebook’s Building 8 is patterned after DARPA: ” . . . Facebook wants to build its own ‘brain-to-computer interface’ that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from her brain. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
  2. ” . . . . Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. . . .”
  3. ” . . . . But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”

9a. Nigel Oakes is the founder of SCL, the parent company of Cambridge Analytica. His comments are related in a New York Times article. ” . . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . .”

“Face­book Gets Grilling in U.K. That It Avoid­ed in U.S.” by Adam Satar­i­ano; The New York Times [West­ern Edi­tion]; 4/27/2018; p. B3.

. . . . The pan­el has pub­lished audio records in which an exec­u­tive tied to Cam­bridge Ana­lyt­i­ca dis­cuss­es how the Trump cam­paign used tech­niques used by the Nazis to tar­get vot­ers. . . .

9b. Mr. Oakes’ com­ments are relat­ed in detail in anoth­er Times arti­cle. ” . . . . Adolf Hitler ‘didn’t have a prob­lem with the Jews at all, but peo­ple didn’t like the Jews,’ he told the aca­d­e­m­ic, Emma L. Bri­ant, a senior lec­tur­er in jour­nal­ism at the Uni­ver­si­ty of Essex. He went on to say that Don­ald J. Trump had done the same thing by tap­ping into griev­ances toward immi­grants and Mus­lims. . . . ‘What hap­pened with Trump, you can for­get all the micro­tar­get­ing and micro­da­ta and what­ev­er, and come back to some very, very sim­ple things,’ he told Dr. Bri­ant. ‘Trump had the balls, and I mean, real­ly the balls, to say what peo­ple want­ed to hear.’ . . .”

“The Ori­gins of an Ad Man’s Manip­u­la­tion Empire” by Ellen Bar­ry; The New York Times [West­ern Edi­tion]; 4/21/2018; p. A4.

. . . . Adolf Hitler “didn’t have a prob­lem with the Jews at all, but peo­ple didn’t like the Jews,” he told the aca­d­e­m­ic, Emma L. Bri­ant, a senior lec­tur­er in jour­nal­ism at the Uni­ver­si­ty of Essex. He went on to say that Don­ald J. Trump had done the same thing by tap­ping into griev­ances toward immi­grants and Mus­lims.

This sort of cam­paign, he con­tin­ued, did not require bells and whis­tles from tech­nol­o­gy or social sci­ence.

“What hap­pened with Trump, you can for­get all the micro­tar­get­ing and micro­da­ta and what­ev­er, and come back to some very, very sim­ple things,” he told Dr. Bri­ant. “Trump had the balls, and I mean, real­ly the balls, to say what peo­ple want­ed to hear.” . . .

9c. Taking a look at the future of fascism in the context of AI, Tay, a “bot” created by Microsoft to respond to users of Twitter, was taken offline after users taught it to–in effect–become a Nazi bot. It is noteworthy that Tay can only respond on the basis of what she is taught. In the future, technologically accomplished and willful people like “weev” may be able to do more. Inevitably, Underground Reich elements will craft a Nazi AI that will be able to do MUCH, MUCH more!

Beware! As one Twitter user noted, employing sarcasm: “Tay went from ‘humans are super cool’ to full nazi in <24 hrs and I’m not at all concerned about the future of AI.”


But like all teenagers, she seems to be angry with her moth­er.

Microsoft has been forced to dunk Tay, its mil­len­ni­al-mim­ic­k­ing chat­bot, into a vat of molten steel. The com­pa­ny has ter­mi­nat­ed her after the bot start­ed tweet­ing abuse at peo­ple and went full neo-Nazi, declar­ing that “Hitler was right I hate the jews.”

@TheBigBrebowski ricky ger­vais learned total­i­tar­i­an­ism from adolf hitler, the inven­tor of athe­ism

— TayTweets (@TayandYou) March 23, 2016

Some of this appears to be “innocent” insofar as Tay is not generating these responses. Rather, if you tell her “repeat after me” she will parrot back whatever you say, allowing you to put words into her mouth. However, some of the responses were organic. The Guardian quotes one where, after being asked “is Ricky Gervais an atheist?”, Tay responded, “Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism.”

In addi­tion to turn­ing the bot off, Microsoft has delet­ed many of the offend­ing tweets. But this isn’t an action to be tak­en light­ly; Red­mond would do well to remem­ber that it was humans attempt­ing to pull the plug on Skynet that proved to be the last straw, prompt­ing the sys­tem to attack Rus­sia in order to elim­i­nate its ene­mies. We’d bet­ter hope that Tay does­n’t sim­i­lar­ly retal­i­ate. . . .

9d. As not­ed in a Pop­u­lar Mechan­ics arti­cle: ” . . . When the next pow­er­ful AI comes along, it will see its first look at the world by look­ing at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t under­stand. . . .”

And we keep show­ing it our very worst selves.

We all know the half-joke about the AI apoc­a­lypse. The robots learn to think, and in their cold ones-and-zeros log­ic, they decide that humans—horrific pests we are—need to be exter­mi­nated. It’s the sub­ject of count­less sci-fi sto­ries and blog posts about robots, but maybe the real dan­ger isn’t that AI comes to such a con­clu­sion on its own, but that it gets that idea from us.

Yes­ter­day Microsoft launched a fun lit­tle AI Twit­ter chat­bot that was admit­tedly sort of gim­micky from the start. “A.I fam from the inter­net that’s got zero chill,” its Twit­ter bio reads. At its start, its knowl­edge was based on pub­lic data. As Microsoft’s page for the prod­uct puts it:

Tay has been built by min­ing rel­e­vant pub­lic data and by using AI and edi­to­r­ial devel­oped by a staff includ­ing impro­vi­sa­tional come­di­ans. Pub­lic data that’s been anonymized is Tay’s pri­mary data source. That data has been mod­eled, cleaned and fil­tered by the team devel­op­ing Tay.

The real point of Tay, however, was to learn from humans through direct conversation, most notably direct conversation using humanity’s current leading showcase of depravity: Twitter. You might not be surprised things went off the rails, but how fast and how far is particularly staggering.

Microsoft has since delet­ed some of Tay’s most offen­sive tweets, but var­i­ous pub­li­ca­tions memo­ri­al­ize some of the worst bits where Tay denied the exis­tence of the holo­caust, came out in sup­port of geno­cide, and went all kinds of racist. 

Nat­u­rally it’s hor­ri­fy­ing, and Microsoft has been try­ing to clean up the mess. Though as some on Twit­ter have point­ed out, no mat­ter how lit­tle Microsoft would like to have “Bush did 9/11″ spout­ing from a cor­po­rate spon­sored project, Tay does serve to illus­trate the most dan­ger­ous fun­da­men­tal truth of arti­fi­cial intel­li­gence: It is a mir­ror. Arti­fi­cial intelligence—specifically “neur­al net­works” that learn behav­ior by ingest­ing huge amounts of data and try­ing to repli­cate it—need some sort of source mate­r­ial to get start­ed. They can only get that from us. There is no oth­er way. 
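
To make the “mirror” point concrete, here is a minimal toy sketch in Python of a text generator that can only echo patterns it has been fed. It uses a simple Markov chain, not a neural network, and it is certainly not Microsoft’s actual system; the ParrotBot class and its sample input are invented purely for illustration. Feed it nothing but trolling, and trolling is all it will ever produce.

    # Toy illustration only: a tiny Markov-chain text generator that "learns"
    # solely by ingesting whatever text it is given and then replaying those
    # patterns. Real neural chatbots are far more complex, but the dependence
    # on source material is the same.
    import random
    from collections import defaultdict

    class ParrotBot:
        def __init__(self):
            # maps each word to the words seen immediately after it
            self.transitions = defaultdict(list)

        def ingest(self, text):
            words = text.lower().split()
            for current, nxt in zip(words, words[1:]):
                self.transitions[current].append(nxt)

        def reply(self, seed, length=8):
            word, out = seed.lower(), [seed.lower()]
            for _ in range(length):
                options = self.transitions.get(word)
                if not options:
                    break
                word = random.choice(options)
                out.append(word)
            return " ".join(out)

    bot = ParrotBot()
    bot.ingest("humans are super cool and humans are very friendly")
    print(bot.reply("humans"))  # can only recombine what it was fed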

But before you give up on humanity entirely, there are a few things worth noting. For starters, it’s not like Tay necessarily picked up virulent racism just by hanging out and passively listening to the buzz of the humans around it. Tay was announced in a very big way—with press coverage—and pranksters pro-actively went to it to see if they could teach it to be racist.

If you take an AI and then don’t imme­di­ately intro­duce it to a whole bunch of trolls shout­ing racism at it for the cheap thrill of see­ing it learn a dirty trick, you can get some more inter­est­ing results. Endear­ing ones even! Mul­ti­ple neur­al net­works designed to pre­dict text in emails and text mes­sages have an over­whelm­ing pro­cliv­ity for say­ing “I love you” con­stantly, espe­cially when they are oth­er­wise at a loss for words.

So Tay’s racism isn’t nec­es­sar­ily a reflec­tion of actu­al, human racism so much as it is the con­se­quence of unre­strained exper­i­men­ta­tion, push­ing the enve­lope as far as it can go the very first sec­ond we get the chance. The mir­ror isn’t show­ing our real image; it’s reflect­ing the ugly faces we’re mak­ing at it for fun. And maybe that’s actu­ally worse.

Sure, Tay can’t understand what racism means any more than Gmail can really love you. And baby’s first words being “genocide lol!” is admittedly sort of funny when you aren’t talking about literal all-powerful SkyNet or a real human child. But AI is advancing at a staggering rate. . . .

. . . . When the next pow­er­ful AI comes along, it will see its first look at the world by look­ing at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t under­stand.

 

Discussion

28 comments for “FTR #1021 FascisBook: (In Your Facebook, Part 3–A Virtual Panopticon, Part 3)”

  1. Oh look, Facebook actually banned someone for posting neo-Nazi content on their platform. But there’s a catch: They banned Ukrainian activist Eduard Dolinsky for 30 days because he was posting examples of antisemitic graffiti. Dolinsky is the director of the Ukrainian Jewish Committee. According to Dolinsky, his far right opponents have a history of reporting his posts to Facebook in order to get him suspended. And this time it worked. Dolinsky appealed the ban but to no avail.

    So that happened. But first let’s take a quick look at an article from back in April that highlights how absurd this action was. The article is about a Ukrainian school teacher in Lviv, Marjana Batjuk, who posted birthday greetings to Adolf Hitler on her Facebook page on April 20 (Hitler’s birthday). She also taught her students the Nazi salute and even took some of her students to meet far right activists who had participated in a march wearing the uniform of the 14th Waffen Grenadier Division of the SS.

    Batjuk, who is a mem­ber of Svo­bo­da, lat­er claimed her Face­book account was hacked, but a news orga­ni­za­tion found that she has a his­to­ry of post­ing Nazi imagery on social media net­works. And there’s no men­tion in this report of Batjuk get­ting banned from Face­book:

    Jew­ish Tele­graph Agency

    Ukrain­ian teacher alleged­ly prais­es Hitler, per­forms Nazi salute with stu­dents

    By Cnaan Liphshiz
    April 23, 2018 4:22pm

    (JTA) — A pub­lic school teacher in Ukraine alleged­ly post­ed birth­day greet­ings to Adolf Hitler on Face­book and taught her stu­dents the Nazi salute.

    Mar­jana Batjuk, who teach­es at a school in Lviv and also is a coun­cil­woman, post­ed her greet­ing on April 20, the Nazi leader’s birth­day, Eduard Dolin­sky, direc­tor of the Ukrain­ian Jew­ish Com­mit­tee, told JTA. He called the inci­dent a “scan­dal.”

    She also took some of her students to meet far-right activists who over the weekend marched on the city’s streets while wearing the uniform of the 14th Waffen Grenadier Division of the SS, an elite Nazi unit with many ethnic Ukrainians also known as the 1st Galician.

    Dis­play­ing Nazi imagery is ille­gal in Ukraine, but Dolin­sky said law enforce­ment author­i­ties allowed the activists to parade on main streets.

    Batjuk had the activists explain about their repli­ca weapons, which they parad­ed ahead of a larg­er event in hon­or of the 1st Gali­cian unit planned for next week in Lviv.

    The events hon­or­ing the 1st Gali­cian SS unit in Lviv are not orga­nized by munic­i­pal author­i­ties.

    Batjuk, 28, a mem­ber of the far-right Svo­bo­da par­ty, called Hitler “a great man” and quot­ed from his book “Mein Kampf” in her Face­book post, Dolin­sky said. She lat­er claimed that her Face­book account was hacked and delet­ed the post, but the Strana news site found that she had a his­to­ry of post­ing Nazi imagery on social net­works.

    She also post­ed pic­tures of chil­dren she said were her stu­dents per­form­ing the Nazi salute with her.

    ...

    Edu­ca­tion Min­istry offi­cials have start­ed a dis­ci­pli­nary review of her con­duct, the KP news site report­ed.

    Separately, in the town of Poltava, in eastern Ukraine, Dolinsky said a swastika and the words “heil Hitler” were spray-painted Friday on a monument for victims of the Holocaust. The vandals, who have not been identified, also wrote “Death to the kikes.”

    In Odessa, a large graf­fi­ti read­ing “Jews into the sea” was writ­ten on the beach­front wall of a hotel.

    “The com­mon fac­tor between all of these inci­dents is gov­ern­ment inac­tion, which ensures they will con­tin­ue hap­pen­ing,” Dolin­sky said.
    ———-

    “Ukrain­ian teacher alleged­ly prais­es Hitler, per­forms Nazi salute with stu­dents” by Cnaan Liphshiz; Jew­ish Tele­graph Agency; 04/23/2018

    “Mar­jana Batjuk, who teach­es at a school in Lviv and also is a coun­cil­woman, post­ed her greet­ing on April 20, the Nazi leader’s birth­day, Eduard Dolin­sky, direc­tor of the Ukrain­ian Jew­ish Com­mit­tee, told JTA. He called the inci­dent a “scan­dal.””

    She’s not just a teacher. She’s also a councilwoman. A teacher councilwoman who likes to post positive things about Hitler on her Facebook page. And it was Eduard Dolinsky who was talking to the international media about this.

    But Batjuk does­n’t just post pro-Nazi things on her Face­book page. She also takes her stu­dents to meet the far right activists:

    ...
    She also took some of her students to meet far-right activists who over the weekend marched on the city’s streets while wearing the uniform of the 14th Waffen Grenadier Division of the SS, an elite Nazi unit with many ethnic Ukrainians also known as the 1st Galician.

    Dis­play­ing Nazi imagery is ille­gal in Ukraine, but Dolin­sky said law enforce­ment author­i­ties allowed the activists to parade on main streets.

    Batjuk had the activists explain about their repli­ca weapons, which they parad­ed ahead of a larg­er event in hon­or of the 1st Gali­cian unit planned for next week in Lviv.

    The events hon­or­ing the 1st Gali­cian SS unit in Lviv are not orga­nized by munic­i­pal author­i­ties.
    ...

    Batjuk lat­er claimed that her Face­book page was hacked, and yet a media orga­ni­za­tion was able to find plen­ty of pre­vi­ous exam­ples of sim­i­lar posts on social media:

    ...
    Batjuk, 28, a mem­ber of the far-right Svo­bo­da par­ty, called Hitler “a great man” and quot­ed from his book “Mein Kampf” in her Face­book post, Dolin­sky said. She lat­er claimed that her Face­book account was hacked and delet­ed the post, but the Strana news site found that she had a his­to­ry of post­ing Nazi imagery on social net­works.

    She also post­ed pic­tures of chil­dren she said were her stu­dents per­form­ing the Nazi salute with her.
    ...

    And if you look at that Strana news summary of her social media posts, a number of them are clearly Facebook posts. So if the Strana news organization was able to find these old posts, that’s a pretty clear indication Facebook wasn’t removing them.

    That was back in April. Flash forward to today and we find a sudden willingness to ban people for posting Nazi content...except it’s Eduard Dolinsky getting banned for making people aware of the pro-Nazi graffiti that has become rampant in Ukraine:

    The Jerusalem Post

    Jew­ish activist: Face­book banned me for post­ing anti­se­mit­ic graf­fi­ti
    “I use my Face­book account for dis­trib­ut­ing infor­ma­tion about anti­se­mit­ic inci­dents, hate speech and hate crimes in Ukraine,” said the Ukrain­ian Jew­ish activist.

    By Seth J. Frantz­man
    August 21, 2018 16:39

    Eduard Dolinsky, a prominent Ukrainian Jewish activist, was banned from posting on Facebook Monday night for a post about antisemitic graffiti in Odessa.

    Dolin­sky, the direc­tor of the Ukrain­ian Jew­ish Com­mit­tee, said he was blocked by the social media giant for post­ing a pho­to. “I had post­ed the pho­to which says in Ukrain­ian ‘kill the yid’ about a month ago,” he says. “I use my Face­book account for dis­trib­ut­ing infor­ma­tion about anti­se­mit­ic inci­dents and hate speech and hate crimes in Ukraine.”

    Now Dolinsky’s account has dis­abled him from post­ing for thir­ty days, which means media, law enforce­ment and the local com­mu­ni­ty who rely on his social media posts will receive no updates.

    Dolin­sky tweet­ed Mon­day that his account had been blocked and sent The Jerusalem Post a screen­shot of the image he post­ed which shows a bad­ly drawn swasti­ka and Ukrain­ian writ­ing. “You recent­ly post­ed some­thing that vio­lates Face­book poli­cies, so you’re tem­porar­i­ly blocked from using this fea­ture,” Face­book informs him when he logs in. “The block will be active for 29 days and 17 hours,” it says. “To keep from get­ting blocked again, please make sure you’ve read and under­stand Facebook’s Com­mu­ni­ty Stan­dards.”

    Dolinsky says that he has been targeted in the past by nationalists and anti-semites who oppose his work. Facebook has banned him temporarily in the past also, but never for thirty days. “The last time I was blocked, the media also reported this and I felt some relief.

    It was as if they stopped ban­ning me. But now I don’t know – and this has again hap­pened. They are ban­ning the one who is try­ing to fight anti­semitism. They are ban­ning me for the very thing I do.”

    Based on Dolinsky’s work the police have opened crim­i­nal files against per­pe­tra­tors of anti­se­mit­ic crimes, in Odessa and oth­er places.

    He says that some locals are try­ing to silence him because he is crit­i­cal of the way Ukraine has com­mem­o­rat­ed his­tor­i­cal nation­al­ist fig­ures, “which is actu­al­ly deny­ing the Holo­caust and try­ing to white­wash the actions of nation­al­ists dur­ing the Sec­ond World War.”

    Dolinsky has been widely quoted, and his work, including posts on Facebook, has been referenced by media in the past. “These incidents are happening and these crimes and the police should react.

    The soci­ety also. But their goal is to cut me off.”

    Iron­i­cal­ly, the activist oppos­ing anti­semitism is being tar­get­ed by anti­semites who label the anti­se­mit­ic exam­ples he reveals as hate speech. “They are specif­i­cal­ly com­plain­ing to Face­book for the con­tent, and they are com­plain­ing that I am vio­lat­ing the rules of Face­book and spread­ing hate speech. So Face­book, as I under­stand [it, doesn’t] look at this; they are ban­ning me and block­ing me and delet­ing these posts.”

    He says he tried to appeal the ban but has not been suc­cess­ful.

    “I use my Face­book exclu­sive­ly for this, so this is my work­ing tool as direc­tor of Ukrain­ian Jew­ish Com­mit­tee.”

    Face­book has been under scruti­ny recent­ly for who it bans and why. In July founder Mark Zucker­berg made con­tro­ver­sial remarks appear­ing to accept Holo­caust denial on the site. “I find it offen­sive, but at the end of the day, I don’t believe our plat­form should take that down because I think there are things that dif­fer­ent peo­ple get wrong. I don’t think they’re doing it inten­tion­al­ly.” In late July, Face­book banned US con­spir­a­cy the­o­rist Alex Jones for bul­ly­ing and hate speech.

    In a sim­i­lar inci­dent to Dolin­sky, Iran­ian sec­u­lar activist Armin Nav­abi was banned from Face­book for thir­ty days for post­ing the death threats that he receives. “This is ridicu­lous. My account is blocked for 30 days because I post the death threats I’m get­ting? I’m not the one mak­ing the threat!” he tweet­ed.

    ...

    ———

    “Jew­ish activist: Face­book banned me for post­ing anti­se­mit­ic graf­fi­ti” by Seth J. Frantz­man; The Jerusalem Post; 08/21/2018

    “Dolin­sky, the direc­tor of the Ukrain­ian Jew­ish Com­mit­tee, said he was blocked by the social media giant for post­ing a pho­to. “I had post­ed the pho­to which says in Ukrain­ian ‘kill the yid’ about a month ago,” he says. “I use my Face­book account for dis­trib­ut­ing infor­ma­tion about anti­se­mit­ic inci­dents and hate speech and hate crimes in Ukraine.”

    The director of the Ukrainian Jewish Committee gets banned for posting antisemitic content. That’s some world-class trolling by Facebook.

    And while it’s only a 30 day ban, that’s 30 days where Ukraine’s media and law enforce­ment won’t be get­ting Dolin­sky’s updates. So it’s not just a moral­ly absurd ban­ning, it’s also actu­al­ly going to be pro­mot­ing pro-Nazi graf­fi­ti in Ukraine by silenc­ing one of the key fig­ures cov­er­ing it:

    ...
    Now Dolinsky’s account has dis­abled him from post­ing for thir­ty days, which means media, law enforce­ment and the local com­mu­ni­ty who rely on his social media posts will receive no updates.

    Dolin­sky tweet­ed Mon­day that his account had been blocked and sent The Jerusalem Post a screen­shot of the image he post­ed which shows a bad­ly drawn swasti­ka and Ukrain­ian writ­ing. “You recent­ly post­ed some­thing that vio­lates Face­book poli­cies, so you’re tem­porar­i­ly blocked from using this fea­ture,” Face­book informs him when he logs in. “The block will be active for 29 days and 17 hours,” it says. “To keep from get­ting blocked again, please make sure you’ve read and under­stand Facebook’s Com­mu­ni­ty Stan­dards.”
    ...

    And this isn’t the first time Dolinsky has been banned from Facebook for posting this kind of content. But it’s the longest he’s been banned. And the fact that this isn’t the first time he’s been banned suggests this isn’t just a genuine ‘oops!’ mistake:

    ...
    Dolinsky says that he has been targeted in the past by nationalists and anti-semites who oppose his work. Facebook has banned him temporarily in the past also, but never for thirty days. “The last time I was blocked, the media also reported this and I felt some relief.

    It was as if they stopped ban­ning me. But now I don’t know – and this has again hap­pened. They are ban­ning the one who is try­ing to fight anti­semitism. They are ban­ning me for the very thing I do.”

    Based on Dolinsky’s work the police have opened crim­i­nal files against per­pe­tra­tors of anti­se­mit­ic crimes, in Odessa and oth­er places.
    ...

    Dolin­sky also notes that he has peo­ple try­ing to silence him pre­cise­ly because of the job he does high­light­ing Ukraine’s offi­cial embrace of Nazi col­lab­o­rat­ing his­tor­i­cal fig­ures:

    ...
    He says that some locals are try­ing to silence him because he is crit­i­cal of the way Ukraine has com­mem­o­rat­ed his­tor­i­cal nation­al­ist fig­ures, “which is actu­al­ly deny­ing the Holo­caust and try­ing to white­wash the actions of nation­al­ists dur­ing the Sec­ond World War.”

    Dolinsky has been widely quoted, and his work, including posts on Facebook, has been referenced by media in the past. “These incidents are happening and these crimes and the police should react.

    The soci­ety also. But their goal is to cut me off.”

    Iron­i­cal­ly, the activist oppos­ing anti­semitism is being tar­get­ed by anti­semites who label the anti­se­mit­ic exam­ples he reveals as hate speech. “They are specif­i­cal­ly com­plain­ing to Face­book for the con­tent, and they are com­plain­ing that I am vio­lat­ing the rules of Face­book and spread­ing hate speech. So Face­book, as I under­stand [it, doesn’t] look at this; they are ban­ning me and block­ing me and delet­ing these posts.”
    ...

    So we likely have a situation where antisemites successfully got Dolinsky silenced, with Facebook ‘playing dumb’ the whole time. And as a consequence Ukraine is facing a month without Dolinsky’s reports. Except it’s not even clear that Dolinsky is going to be allowed to clarify the situation and continue posting updates of Nazi graffiti after this month-long ban is up. Because he says he’s been trying to appeal the ban, but with no success:

    ...
    He says he tried to appeal the ban but has not been suc­cess­ful.

    “I use my Face­book exclu­sive­ly for this, so this is my work­ing tool as direc­tor of Ukrain­ian Jew­ish Com­mit­tee.”
    ...

    Given Dolinsky’s powerful criticisms of Ukraine’s embrace and historical whitewashing of the far right, it would be interesting to learn if the decision to ban Dolinsky originally came from the Atlantic Council, which is one of the main organizations Facebook outsourced its troll-hunting duties to.

    So for all we know, Dolin­sky is effec­tive­ly going to be banned per­ma­nent­ly from using Face­book to make Ukraine and the rest of the world aware of the epi­dem­ic of pro-Nazi anti­se­mit­ic graf­fi­ti in Ukraine. Maybe if he sets up a pro-Nazi Face­book per­sona he’ll be allowed to keep doing his work.

    Posted by Pterrafractyl | August 23, 2018, 12:49 pm
  2. It looks like we’re in for anoth­er round of right-wing com­plaints about Big Tech polit­i­cal bias designed to pres­sure com­pa­nies into push­ing right-wing con­tent onto users. Recall how com­plaints about Face­book sup­press­ing con­ser­v­a­tives in the Face­book News Feed result­ed in a change in pol­i­cy in 2016 that unleashed a flood of far right dis­in­for­ma­tion on the plat­form. This time, it’s Google’s turn to face the right-wing faux-out­rage machine and it’s Pres­i­dent Trump lead­ing it:

    Trump just accused Google of bias­ing the search results in its search engine to give neg­a­tive sto­ries about him. Appar­ent­ly he googled him­self and did­n’t like the results. His tweet came after a Fox Busi­ness report on Mon­day evening that made the claim that 96 per­cent of Google News results for “Trump” came from the “nation­al left-wing media.” The report was based on some ‘analy­sis’ by right-wing media out­let PJ Media.

    Later, during a press conference, Trump declared that Google, Facebook, and Twitter “are treading on very, very troubled territory,” and his economic advisor Larry Kudlow told the press that the issue is being investigated by the White House. And as Facebook already demonstrated, while it seems highly unlikely that the Trump administration will actually take some sort of government action to force Google to promote positive stories about Trump, it’s not like loudly complaining can’t get the job done:

    Bloomberg

    Trump Warns Tech Giants to ‘Be Care­ful,’ Claim­ing They Rig Search­es

    By Kath­leen Hunter and Ben Brody
    August 28, 2018, 4:58 AM CDT Updat­ed on August 28, 2018, 2:17 PM CDT

    * Pres­i­dent tweets con­ser­v­a­tive media being blocked by Google
    * Com­pa­ny denies any polit­i­cal agen­da in its search results

    Pres­i­dent Don­ald Trump warned Alpha­bet Inc.’s Google, Face­book Inc. and Twit­ter Inc. “bet­ter be care­ful” after he accused the search engine ear­li­er in the day of rig­ging results to give pref­er­ence to neg­a­tive news sto­ries about him.

    Trump told reporters in the Oval Office Tues­day that the three tech­nol­o­gy com­pa­nies “are tread­ing on very, very trou­bled ter­ri­to­ry,” as he added his voice to a grow­ing cho­rus of con­ser­v­a­tives who claim inter­net com­pa­nies favor lib­er­al view­points.

    “This is a very seri­ous sit­u­a­tion-will be addressed!” Trump said in a tweet ear­li­er Tues­day. The President’s com­ments came the morn­ing after a Fox Busi­ness TV seg­ment that said Google favored lib­er­al news out­lets in search results about Trump. Trump pro­vid­ed no sub­stan­ti­a­tion for his claim.

    “Google search results for ‘Trump News’ shows only the viewing/reporting of Fake News Media. In other words, they have it RIGGED, for me & others, so that almost all stories & news is BAD,” Trump said. “Republican/Conservative & Fair Media is shut out. Illegal.”

    The alle­ga­tion, dis­missed by online search experts, fol­lows the president’s Aug. 24 claim that social media “giants” are “silenc­ing mil­lions of peo­ple.” Such accu­sa­tions — along with asser­tions that the news media and Spe­cial Coun­sel Robert Mueller’s Rus­sia med­dling probe are biased against him — have been a chief Trump talk­ing point meant to appeal to the president’s base.

    Google issued a state­ment say­ing its search­es are designed to give users rel­e­vant answers.

    “Search is not used to set a polit­i­cal agen­da and we don’t bias our results toward any polit­i­cal ide­ol­o­gy,” the state­ment said. “Every year, we issue hun­dreds of improve­ments to our algo­rithms to ensure they sur­face high-qual­i­ty con­tent in response to users’ queries. We con­tin­u­al­ly work to improve Google Search and we nev­er rank search results to manip­u­late polit­i­cal sen­ti­ment.”

    Yonatan Zunger, an engi­neer who worked at Google for almost a decade, went fur­ther. “Users can ver­i­fy that his claim is spe­cious by sim­ply read­ing a wide range of news sources them­selves,” he said. “The ‘bias’ is that the news is all bad for him, for which he has only him­self to blame.”

    Google’s news search soft­ware doesn’t work the way the pres­i­dent says it does, accord­ing to Mark Irvine, senior data sci­en­tist at Word­Stream, a com­pa­ny that helps firms get web­sites and oth­er online con­tent to show up high­er in search results. The Google News sys­tem gives weight to how many times a sto­ry has been linked to, as well as to how promi­nent­ly the terms peo­ple are search­ing for show up in the sto­ries, Irvine said.

    “The Google search algo­rithm is a fair­ly agnos­tic and apa­thet­ic algo­rithm towards what people’s polit­i­cal feel­ings are,” he said.

    “Their job is essen­tial­ly to mod­el the world as it is,” said Pete Mey­ers, a mar­ket­ing sci­en­tist at Moz, which builds tools to help com­pa­nies improve how they show up in search results. “If enough peo­ple are link­ing to a site and talk­ing about a site, they’re going to show that site.”

    Trump’s con­cern is that search results about him appear neg­a­tive, but that’s because the major­i­ty of sto­ries about him are neg­a­tive, Mey­ers said. “He woke up and watched his par­tic­u­lar fla­vor and what Google had didn’t match that.”

    Com­plaints that social-media ser­vices cen­sor con­ser­v­a­tives have increased as com­pa­nies such as Face­book Inc. and Twit­ter Inc. try to curb the reach of con­spir­a­cy the­o­rists, dis­in­for­ma­tion cam­paigns, for­eign polit­i­cal med­dling and abu­sive posters.

    Google News rank­ings have some­times high­light­ed uncon­firmed and erro­neous reports in the ear­ly min­utes of tragedies when there’s lit­tle infor­ma­tion to fill its search results. After the Oct. 1, 2017, Las Vegas shoot­ing, for instance, sev­er­al accounts seemed to coor­di­nate an effort to smear a man misiden­ti­fied as the shoot­er with false claims about his polit­i­cal ties.

    Google has since tight­ened require­ments for inclu­sion in news rank­ings, block­ing out­lets that “con­ceal their coun­try of ori­gin” and rely­ing more on author­i­ta­tive sources, although the moves have led to charges of cen­sor­ship from less estab­lished out­lets. Google cur­rent­ly says it ranks news based on “fresh­ness” and “diver­si­ty” of the sto­ries. Trump-favored out­lets such as Fox News rou­tine­ly appear in results.

    Google’s search results have been the focus of com­plaints for more than a decade. The crit­i­cism has become more polit­i­cal as the pow­er and reach of online ser­vices has increased in recent years.

    Eric Schmidt, Alphabet’s for­mer chair­man, sup­port­ed Hillary Clin­ton against Trump dur­ing the last elec­tion. There have been unsub­stan­ti­at­ed claims the com­pa­ny buried neg­a­tive search results about her dur­ing the 2016 elec­tion. Scores of Google employ­ees entered gov­ern­ment to work under Pres­i­dent Barack Oba­ma.

    White House eco­nom­ic advis­er Lar­ry Kud­low, respond­ing to a ques­tion about the tweets, said that the admin­is­tra­tion is going to do “inves­ti­ga­tions and analy­sis” into the issue but stressed they’re “just look­ing into it.”

    Trump’s com­ment fol­lowed a report on Fox Busi­ness on Mon­day evening that said 96 per­cent of Google News results for “Trump” came from the “nation­al left-wing media.” The seg­ment cit­ed the con­ser­v­a­tive PJ Media site, which said its analy­sis sug­gest­ed “a pat­tern of bias against right-lean­ing con­tent.”

    The PJ Media analy­sis “is in no way sci­en­tif­ic,” said Joshua New, a senior pol­i­cy ana­lyst with the Cen­ter for Data Inno­va­tion.

    “This fre­quen­cy of appear­ance in an arbi­trary search at one time is in no way indi­cat­ing a bias or a slant,” New said. His non-par­ti­san pol­i­cy group is affil­i­at­ed with the Infor­ma­tion Tech­nol­o­gy and Inno­va­tion Foun­da­tion, which in turn has exec­u­tives from Sil­i­con Val­ley com­pa­nies, includ­ing Google, on its board of direc­tors.

    Ser­vices such as Google or Face­book “have a busi­ness incen­tive not to low­er the rank­ing of a cer­tain pub­li­ca­tion because of news bias. Because that low­ers the val­ue as a news plat­form,” New said.

    News search rank­ings use fac­tors includ­ing “use time­li­ness, accu­ra­cy, the pop­u­lar­i­ty of a sto­ry, a users’ per­son­al search his­to­ry, their loca­tion, qual­i­ty of con­tent, a website’s rep­u­ta­tion — a huge amount of dif­fer­ent fac­tors,” New said.

    Google is not the first tech stal­wart to receive crit­i­cism from Trump. He has alleged Amazon.com Inc. has a sweet­heart deal with the U.S. Postal Ser­vice and slammed founder Jeff Bezos’s own­er­ship of what Trump calls “the Ama­zon Wash­ing­ton Post.”

    Google is due to face law­mak­ers at a hear­ing on Russ­ian elec­tion med­dling on Sept. 5. The com­pa­ny intend­ed to send Senior Vice Pres­i­dent for Glob­al Affairs Kent Walk­er to tes­ti­fy, but the panel’s chair­man, Sen­a­tor Richard Burr, who want­ed Chief Exec­u­tive Offi­cer Sun­dar Pichai, has reject­ed Walk­er.

    Despite Trump’s com­ments, it’s unclear what he or Con­gress could do to influ­ence how inter­net com­pa­nies dis­trib­ute online news. The indus­try trea­sures an exemp­tion from lia­bil­i­ty for the con­tent users post. Some top mem­bers of Con­gress have sug­gest­ed lim­it­ing the pro­tec­tion as a response to alleged bias and oth­er mis­deeds, although there have been few moves to do so since Con­gress curbed the shield for some cas­es of sex traf­fick­ing ear­li­er in the year.

    The gov­ern­ment has lit­tle abil­i­ty to dic­tate to pub­lish­ers and online cura­tors what news to present despite the president’s occa­sion­al threats to use the pow­er of the gov­ern­ment to curb cov­er­age he dis­likes and his ten­den­cy to com­plain that news about him is over­ly neg­a­tive.

    Trump has talked about expanding libel laws and mused about reinstating long-ended rules requiring equal time for opposing views, which didn’t apply to the internet. Neither has resulted in a serious policy push.

    ...

    ———-

    “Trump Warns Tech Giants to ‘Be Care­ful,’ Claim­ing They Rig Search­es” by Kath­leen Hunter and Ben Brody; Bloomberg; 08/28/2018

    “Trump told reporters in the Oval Office Tues­day that the three tech­nol­o­gy com­pa­nies “are tread­ing on very, very trou­bled ter­ri­to­ry,” as he added his voice to a grow­ing cho­rus of con­ser­v­a­tives who claim inter­net com­pa­nies favor lib­er­al view­points.”

    The Trumpian warn­ing shots have been fired: feed the pub­lic pos­i­tive news about Trump, or else...

    ...
    “This is a very seri­ous sit­u­a­tion-will be addressed!” Trump said in a tweet ear­li­er Tues­day. The President’s com­ments came the morn­ing after a Fox Busi­ness TV seg­ment that said Google favored lib­er­al news out­lets in search results about Trump. Trump pro­vid­ed no sub­stan­ti­a­tion for his claim.

    “Google search results for ‘Trump News’ shows only the viewing/reporting of Fake News Media. In other words, they have it RIGGED, for me & others, so that almost all stories & news is BAD,” Trump said. “Republican/Conservative & Fair Media is shut out. Illegal.”

    The alle­ga­tion, dis­missed by online search experts, fol­lows the president’s Aug. 24 claim that social media “giants” are “silenc­ing mil­lions of peo­ple.” Such accu­sa­tions — along with asser­tions that the news media and Spe­cial Coun­sel Robert Mueller’s Rus­sia med­dling probe are biased against him — have been a chief Trump talk­ing point meant to appeal to the president’s base.
    ...

    “Republican/Conservative & Fair Media is shut out. Ille­gal.”

    And he lit­er­al­ly charged Google with ille­gal­i­ty over alleged­ly shut­ting out “Republican/Conservative & Fair Media.” Which is, of course, an absurd charge for any­one famil­iar with Google’s news por­tal. But that was part of what made the tweet so poten­tial­ly threat­en­ing to these com­pa­nies since it implied there was a role the gov­ern­ment should be play­ing to cor­rect this per­ceived law-break­ing.

    At the same time, it’s unclear what, legally speaking, Trump could actually do. But that didn’t stop him from issuing such threats, as he’s done in the past:

    ...
    Despite Trump’s com­ments, it’s unclear what he or Con­gress could do to influ­ence how inter­net com­pa­nies dis­trib­ute online news. The indus­try trea­sures an exemp­tion from lia­bil­i­ty for the con­tent users post. Some top mem­bers of Con­gress have sug­gest­ed lim­it­ing the pro­tec­tion as a response to alleged bias and oth­er mis­deeds, although there have been few moves to do so since Con­gress curbed the shield for some cas­es of sex traf­fick­ing ear­li­er in the year.

    The gov­ern­ment has lit­tle abil­i­ty to dic­tate to pub­lish­ers and online cura­tors what news to present despite the president’s occa­sion­al threats to use the pow­er of the gov­ern­ment to curb cov­er­age he dis­likes and his ten­den­cy to com­plain that news about him is over­ly neg­a­tive.

    Trump has talked about expanding libel laws and mused about reinstating long-ended rules requiring equal time for opposing views, which didn’t apply to the internet. Neither has resulted in a serious policy push.
    ...

    Ironically, when Trump muses about reinstating long-ended rules requiring equal time for opposing views (the “Fairness Doctrine,” abolished under Reagan in 1987), he’s musing about doing something that would effectively destroy the right-wing media model, a model that is predicated on feeding the audience exclusively right-wing content. As many have noted, the demise of the Fairness Doctrine — which led to the explosion of right-wing talk radio hosts like Rush Limbaugh — probably played a big role in intellectually neutering the American public, paving the way for someone like Trump to eventually come along.

    And yet, as unhinged as this lat­est threat may be, the admin­is­tra­tion is actu­al­ly going to do “inves­ti­ga­tions and analy­sis” into the issue accord­ing to Lar­ry Kud­low:

    ...
    White House eco­nom­ic advis­er Lar­ry Kud­low, respond­ing to a ques­tion about the tweets, said that the admin­is­tra­tion is going to do “inves­ti­ga­tions and analy­sis” into the issue but stressed they’re “just look­ing into it.”
    ...

    And as we should expect, this all appears to have been triggered by a Fox Business piece on Monday night that covered a ‘study’ done by PJ Media (a right-wing media outlet) that found 96 percent of Google News results for “Trump” come from the “national left-wing media”:

    ...
    Trump’s com­ment fol­lowed a report on Fox Busi­ness on Mon­day evening that said 96 per­cent of Google News results for “Trump” came from the “nation­al left-wing media.” The seg­ment cit­ed the con­ser­v­a­tive PJ Media site, which said its analy­sis sug­gest­ed “a pat­tern of bias against right-lean­ing con­tent.”

    The PJ Media analy­sis “is in no way sci­en­tif­ic,” said Joshua New, a senior pol­i­cy ana­lyst with the Cen­ter for Data Inno­va­tion.

    “This fre­quen­cy of appear­ance in an arbi­trary search at one time is in no way indi­cat­ing a bias or a slant,” New said. His non-par­ti­san pol­i­cy group is affil­i­at­ed with the Infor­ma­tion Tech­nol­o­gy and Inno­va­tion Foun­da­tion, which in turn has exec­u­tives from Sil­i­con Val­ley com­pa­nies, includ­ing Google, on its board of direc­tors.

    Ser­vices such as Google or Face­book “have a busi­ness incen­tive not to low­er the rank­ing of a cer­tain pub­li­ca­tion because of news bias. Because that low­ers the val­ue as a news plat­form,” New said.

    News search rank­ings use fac­tors includ­ing “use time­li­ness, accu­ra­cy, the pop­u­lar­i­ty of a sto­ry, a users’ per­son­al search his­to­ry, their loca­tion, qual­i­ty of con­tent, a website’s rep­u­ta­tion — a huge amount of dif­fer­ent fac­tors,” New said.
    ...

    Putting aside the general questions of the scientific veracity of this PJ Media ‘study’, it’s kind of amusing to realize that it was a study conducted specifically on a search for “Trump” on Google News. And if you had to choose a single topic that is going to inevitably have an abundance of negative news written about it, that would be the topic of “Trump”. In other words, if you were to actually conduct a real study that attempts to assess the political bias of Google News’s search results, you almost couldn’t have picked a worse search term to test that theory on than “Trump”.

    Google not surprisingly refutes these charges. But it’s the people who work for companies dedicated to improving how their clients show up in search results who give the most convincing responses, since their businesses literally depend on understanding Google’s algorithms (a toy sketch of that kind of weighting follows the excerpt below):

    ...
    Google’s news search soft­ware doesn’t work the way the pres­i­dent says it does, accord­ing to Mark Irvine, senior data sci­en­tist at Word­Stream, a com­pa­ny that helps firms get web­sites and oth­er online con­tent to show up high­er in search results. The Google News sys­tem gives weight to how many times a sto­ry has been linked to, as well as to how promi­nent­ly the terms peo­ple are search­ing for show up in the sto­ries, Irvine said.

    “The Google search algo­rithm is a fair­ly agnos­tic and apa­thet­ic algo­rithm towards what people’s polit­i­cal feel­ings are,” he said.

    “Their job is essen­tial­ly to mod­el the world as it is,” said Pete Mey­ers, a mar­ket­ing sci­en­tist at Moz, which builds tools to help com­pa­nies improve how they show up in search results. “If enough peo­ple are link­ing to a site and talk­ing about a site, they’re going to show that site.”

    Trump’s con­cern is that search results about him appear neg­a­tive, but that’s because the major­i­ty of sto­ries about him are neg­a­tive, Mey­ers said. “He woke up and watched his par­tic­u­lar fla­vor and what Google had didn’t match that.”
    ...
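
    To illustrate the kind of signal weighting Irvine and Meyers describe, here is a hypothetical toy ranking function in Python. It is emphatically not Google’s algorithm: the weights, fields and example stories are all invented. The point is only that a score built from inbound links and term prominence never consults political slant, so it surfaces whatever the bulk of linked coverage happens to say.

        # Hypothetical sketch only: not Google's actual ranking system.
        # It combines the two public signals described above: how often a
        # story is linked to, and how prominently the search term appears.
        from dataclasses import dataclass

        @dataclass
        class Story:
            title: str
            body: str
            inbound_links: int  # how many other pages link to this story

        def prominence(story, term):
            """Crude prominence score: title mentions count extra."""
            term = term.lower()
            return 3 * story.title.lower().count(term) + story.body.lower().count(term)

        def rank(stories, term):
            # Popularity and term placement alone decide the ordering; nothing
            # here inspects whether the coverage is positive or negative.
            return sorted(stories, key=lambda s: s.inbound_links * prominence(s, term), reverse=True)

        stories = [
            Story("Trump signs trade order", "President Trump signed an order today...", 40),
            Story("Local bake sale", "The bake sale mentioned Trump once.", 2),
        ]
        for s in rank(stories, "trump"):
            print(s.title)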

    All that said, it’s not like the black-box nature of the algorithms behind things like Google’s search engine isn’t a legitimate topic of public interest. And that’s part of why these farcical tweets are so dangerous: the Big Tech giants like Google, Facebook, and Twitter know that it’s not impossible that they’ll be subject to algorithmic regulation someday. And they’re going to want to push that day off for as long as possible. So when Trump makes these kinds of complaints, it’s not at all inconceivable that he’s going to get the response from these companies that he wants as these companies attempt to placate him. It’s also highly likely that if these companies do decide to placate him, they’re not going to publicly announce this. Instead they’ll just start rigging their algorithms to serve up more pro-Trump content and more right-wing content in general.

    Also keep in mind that, despite the rep­u­ta­tion of Sil­i­con Val­ley as being run by a bunch of lib­er­als, the real­i­ty is Sil­i­con Val­ley has a strong right-wing lib­er­tar­i­an fac­tion, and there’s going to be no short­age of peo­ple at these com­pa­nies that would love to inject a right-wing bias into their ser­vices. Trump’s stunt gives that right-wing fac­tion of Sil­i­con Val­ley lead­er­ship an excuse to do exact­ly that from a busi­ness stand­point.

    So if you use Google News to see what the lat­est the news is on “Trump” and you sud­den­ly find that it’s most­ly good news, keep in mind that that’s actu­al­ly real­ly, real­ly bad news because it means this stunt worked.

    Posted by Pterrafractyl | August 28, 2018, 3:55 pm
  3. The New York Times published a big piece on the inner workings of Facebook’s response to the array of scandals that have enveloped the company in recent years, from the charges of Russian operatives using the platform to spread disinformation to the Cambridge Analytica scandal. Much of the story focuses on the actions of Sheryl Sandberg, who appears to be the top person at Facebook overseeing the company’s response to these scandals. It describes a general pattern of Facebook’s executives first ignoring problems and then using various public relations strategies to deal with the problems when they are no longer able to ignore them. And it’s the choice of public relations firms that is perhaps the biggest scandal revealed in this story: In October of 2017, Facebook hired Definers Public Affairs, a DC-based firm founded by veterans of Republican presidential politics that specialized in applying the tactics of political races to corporate public relations.

    And one of the political strategies employed by Definers was simply putting out articles that put their clients in a positive light while simultaneously attacking their clients’ enemies. That’s what Definers did for Facebook, with Definers utilizing an affiliated conservative news site, NTK Network. NTK shares offices and staff with Definers, and many NTK stories are written by Definers staff and are basically attack ads on Definers’ clients’ enemies. So how does NTK get anyone to read their propaganda articles? By getting them picked up by other popular conservative outlets, including Breitbart.

    Perhaps most controversially, Facebook had Definers attempt to tie various groups that are critical of Facebook to George Soros, implicitly harnessing the existing right-wing meme that George Soros is a super wealthy Jew who secretly controls almost everything. This attack by Definers centered around the Freedom from Facebook coalition. Back in July, the group had crashed the House Judiciary Committee hearings when a Facebook executive was testifying, holding up signs depicting Sheryl Sandberg and Mark Zuckerberg as two heads of an octopus stretching around the globe. The group claimed the sign was a reference to old cartoons about the Standard Oil monopoly. But such imagery also evokes classic anti-Semitic tropes, made more acute by the fact that both Sandberg and Zuckerberg are Jewish. So Facebook enlisted the ADL to condemn Freedom from Facebook over the imagery.

    But charging Freedom from Facebook with anti-Semitism isn’t the only strategy Facebook used to address its critics. After the protest in Congress, Facebook had Definers basically accuse the groups behind Freedom from Facebook of being puppets of George Soros and encouraged reporters to investigate the financial ties of the groups with Soros. And this was part of a broader push by Definers to cast Soros as the man behind all of the anti-Facebook sentiments that have popped up in recent years. This, of course, is playing right into the growing right-wing meme that Soros, a billionaire Jew, is behind almost everything bad in the world. And it’s a meme that also happens to be exceptionally popular with the ‘Alt Right’ neo-Nazi wing of contemporary conservatism. So Facebook dealt with its critics by first charging them with indirect anti-Semitism and then using their hired Republican public relations firm to make indirect anti-Semitic attacks on those same critics:

    The New York Times

    Delay, Deny and Deflect: How Facebook’s Lead­ers Fought Through Cri­sis

    By Sheera Frenkel, Nicholas Con­fes­sore, Cecil­ia Kang, Matthew Rosen­berg and Jack Nicas

    Nov. 14, 2018

    Sheryl Sand­berg was seething.

    Inside Facebook’s Men­lo Park, Calif., head­quar­ters, top exec­u­tives gath­ered in the glass-walled con­fer­ence room of its founder, Mark Zucker­berg. It was Sep­tem­ber 2017, more than a year after Face­book engi­neers dis­cov­ered sus­pi­cious Rus­sia-linked activ­i­ty on its site, an ear­ly warn­ing of the Krem­lin cam­paign to dis­rupt the 2016 Amer­i­can elec­tion. Con­gres­sion­al and fed­er­al inves­ti­ga­tors were clos­ing in on evi­dence that would impli­cate the com­pa­ny.

    But it wasn’t the loom­ing dis­as­ter at Face­book that angered Ms. Sand­berg. It was the social network’s secu­ri­ty chief, Alex Sta­mos, who had informed com­pa­ny board mem­bers the day before that Face­book had yet to con­tain the Russ­ian infes­ta­tion. Mr. Stamos’s brief­ing had prompt­ed a humil­i­at­ing board­room inter­ro­ga­tion of Ms. Sand­berg, Facebook’s chief oper­at­ing offi­cer, and her bil­lion­aire boss. She appeared to regard the admis­sion as a betray­al.

    “You threw us under the bus!” she yelled at Mr. Sta­mos, accord­ing to peo­ple who were present.

    The clash that day would set off a reck­on­ing — for Mr. Zucker­berg, for Ms. Sand­berg and for the busi­ness they had built togeth­er. In just over a decade, Face­book has con­nect­ed more than 2.2 bil­lion peo­ple, a glob­al nation unto itself that reshaped polit­i­cal cam­paigns, the adver­tis­ing busi­ness and dai­ly life around the world. Along the way, Face­book accu­mu­lat­ed one of the largest-ever repos­i­to­ries of per­son­al data, a trea­sure trove of pho­tos, mes­sages and likes that pro­pelled the com­pa­ny into the For­tune 500.

    But as evi­dence accu­mu­lat­ed that Facebook’s pow­er could also be exploit­ed to dis­rupt elec­tions, broad­cast viral pro­pa­gan­da and inspire dead­ly cam­paigns of hate around the globe, Mr. Zucker­berg and Ms. Sand­berg stum­bled. Bent on growth, the pair ignored warn­ing signs and then sought to con­ceal them from pub­lic view. At crit­i­cal moments over the last three years, they were dis­tract­ed by per­son­al projects, and passed off secu­ri­ty and pol­i­cy deci­sions to sub­or­di­nates, accord­ing to cur­rent and for­mer exec­u­tives.

    When Face­book users learned last spring that the com­pa­ny had com­pro­mised their pri­va­cy in its rush to expand, allow­ing access to the per­son­al infor­ma­tion of tens of mil­lions of peo­ple to a polit­i­cal data firm linked to Pres­i­dent Trump, Face­book sought to deflect blame and mask the extent of the prob­lem.

    And when that failed — as the company’s stock price plum­met­ed and it faced a con­sumer back­lash — Face­book went on the attack.

    While Mr. Zucker­berg has con­duct­ed a pub­lic apol­o­gy tour in the last year, Ms. Sand­berg has over­seen an aggres­sive lob­by­ing cam­paign to com­bat Facebook’s crit­ics, shift pub­lic anger toward rival com­pa­nies and ward off dam­ag­ing reg­u­la­tion. Face­book employed a Repub­li­can oppo­si­tion-research firm to dis­cred­it activist pro­test­ers, in part by link­ing them to the lib­er­al financier George Soros. It also tapped its busi­ness rela­tion­ships, lob­by­ing a Jew­ish civ­il rights group to cast some crit­i­cism of the com­pa­ny as anti-Semit­ic.

    In Wash­ing­ton, allies of Face­book, includ­ing Sen­a­tor Chuck Schumer, the Demo­c­ra­t­ic Sen­ate leader, inter­vened on its behalf. And Ms. Sand­berg wooed or cajoled hos­tile law­mak­ers, while try­ing to dis­pel Facebook’s rep­u­ta­tion as a bas­tion of Bay Area lib­er­al­ism.

    This account of how Mr. Zucker­berg and Ms. Sand­berg nav­i­gat­ed Facebook’s cas­cad­ing crises, much of which has not been pre­vi­ous­ly report­ed, is based on inter­views with more than 50 peo­ple. They include cur­rent and for­mer Face­book exec­u­tives and oth­er employ­ees, law­mak­ers and gov­ern­ment offi­cials, lob­by­ists and con­gres­sion­al staff mem­bers. Most spoke on the con­di­tion of anonymi­ty because they had signed con­fi­den­tial­i­ty agree­ments, were not autho­rized to speak to reporters or feared retal­i­a­tion.

    ...

    Even so, trust in the social net­work has sunk, while its pell-mell growth has slowed. Reg­u­la­tors and law enforce­ment offi­cials in the Unit­ed States and Europe are inves­ti­gat­ing Facebook’s con­duct with Cam­bridge Ana­lyt­i­ca, a polit­i­cal data firm that worked with Mr. Trump’s 2016 cam­paign, open­ing up the com­pa­ny to fines and oth­er lia­bil­i­ty. Both the Trump admin­is­tra­tion and law­mak­ers have begun craft­ing pro­pos­als for a nation­al pri­va­cy law, set­ting up a years­long strug­gle over the future of Facebook’s data-hun­gry busi­ness mod­el.

    “We failed to look and try to imag­ine what was hid­ing behind cor­ners,” Elliot Schrage, for­mer vice pres­i­dent for glob­al com­mu­ni­ca­tions, mar­ket­ing and pub­lic pol­i­cy at Face­book, said in an inter­view.

    Mr. Zucker­berg, 34, and Ms. Sand­berg, 49, remain at the company’s helm, while Mr. Sta­mos and oth­er high-pro­file exec­u­tives have left after dis­putes over Facebook’s pri­or­i­ties. Mr. Zucker­berg, who con­trols the social net­work with 60 per­cent of the vot­ing shares and who approved many of its direc­tors, has been asked repeat­ed­ly in the last year whether he should step down as chief exec­u­tive.

    His answer each time: a resound­ing “No.”

    ‘Don’t Poke the Bear’

    Three years ago, Mr. Zucker­berg, who found­ed Face­book in 2004 while attend­ing Har­vard, was cel­e­brat­ed for the company’s extra­or­di­nary suc­cess. Ms. Sand­berg, a for­mer Clin­ton admin­is­tra­tion offi­cial and Google vet­er­an, had become a fem­i­nist icon with the pub­li­ca­tion of her empow­er­ment man­i­festo, “Lean In,” in 2013.

    Like oth­er tech­nol­o­gy exec­u­tives, Mr. Zucker­berg and Ms. Sand­berg cast their com­pa­ny as a force for social good. Facebook’s lofty aims were embla­zoned even on secu­ri­ties fil­ings: “Our mis­sion is to make the world more open and con­nect­ed.”

    But as Face­book grew, so did the hate speech, bul­ly­ing and oth­er tox­ic con­tent on the plat­form. When researchers and activists in Myan­mar, India, Ger­many and else­where warned that Face­book had become an instru­ment of gov­ern­ment pro­pa­gan­da and eth­nic cleans­ing, the com­pa­ny large­ly ignored them. Face­book had posi­tioned itself as a plat­form, not a pub­lish­er. Tak­ing respon­si­bil­i­ty for what users post­ed, or act­ing to cen­sor it, was expen­sive and com­pli­cat­ed. Many Face­book exec­u­tives wor­ried that any such efforts would back­fire.

    Then Don­ald J. Trump ran for pres­i­dent. He described Mus­lim immi­grants and refugees as a dan­ger to Amer­i­ca, and in Decem­ber 2015 post­ed a state­ment on Face­book call­ing for a “total and com­plete shut­down” on Mus­lims enter­ing the Unit­ed States. Mr. Trump’s call to arms — wide­ly con­demned by Democ­rats and some promi­nent Repub­li­cans — was shared more than 15,000 times on Face­book, an illus­tra­tion of the site’s pow­er to spread racist sen­ti­ment.

    Mr. Zucker­berg, who had helped found a non­prof­it ded­i­cat­ed to immi­gra­tion reform, was appalled, said employ­ees who spoke to him or were famil­iar with the con­ver­sa­tion. He asked Ms. Sand­berg and oth­er exec­u­tives if Mr. Trump had vio­lat­ed Facebook’s terms of ser­vice.

    The ques­tion was unusu­al. Mr. Zucker­berg typ­i­cal­ly focused on broad­er tech­nol­o­gy issues; pol­i­tics was Ms. Sandberg’s domain. In 2010, Ms. Sand­berg, a Demo­c­rat, had recruit­ed a friend and fel­low Clin­ton alum, Marne Levine, as Facebook’s chief Wash­ing­ton rep­re­sen­ta­tive. A year lat­er, after Repub­li­cans seized con­trol of the House, Ms. Sand­berg installed anoth­er friend, a well-con­nect­ed Repub­li­can: Joel Kaplan, who had attend­ed Har­vard with Ms. Sand­berg and lat­er served in the George W. Bush admin­is­tra­tion.

    Some at Face­book viewed Mr. Trump’s 2015 attack on Mus­lims as an oppor­tu­ni­ty to final­ly take a stand against the hate speech cours­ing through its plat­form. But Ms. Sand­berg, who was edg­ing back to work after the death of her hus­band sev­er­al months ear­li­er, del­e­gat­ed the mat­ter to Mr. Schrage and Moni­ka Bick­ert, a for­mer pros­e­cu­tor whom Ms. Sand­berg had recruit­ed as the company’s head of glob­al pol­i­cy man­age­ment. Ms. Sand­berg also turned to the Wash­ing­ton office — par­tic­u­lar­ly to Mr. Kaplan, said peo­ple who par­tic­i­pat­ed in or were briefed on the dis­cus­sions.

    In video con­fer­ence calls between the Sil­i­con Val­ley head­quar­ters and Wash­ing­ton, the three offi­cials con­strued their task nar­row­ly. They parsed the company’s terms of ser­vice to see if the post, or Mr. Trump’s account, vio­lat­ed Facebook’s rules.

    Mr. Kaplan argued that Mr. Trump was an impor­tant pub­lic fig­ure and that shut­ting down his account or remov­ing the state­ment could be seen as obstruct­ing free speech, said three employ­ees who knew of the dis­cus­sions. He said it could also stoke a con­ser­v­a­tive back­lash.

    “Don’t poke the bear,” Mr. Kaplan warned.

    Mr. Zucker­berg did not par­tic­i­pate in the debate. Ms. Sand­berg attend­ed some of the video meet­ings but rarely spoke.

    Mr. Schrage con­clud­ed that Mr. Trump’s lan­guage had not vio­lat­ed Facebook’s rules and that the candidate’s views had pub­lic val­ue. “We were try­ing to make a deci­sion based on all the legal and tech­ni­cal evi­dence before us,” he said in an inter­view.

    In the end, Mr. Trump’s state­ment and account remained on the site. When Mr. Trump won elec­tion the next fall, giv­ing Repub­li­cans con­trol of the White House as well as Con­gress, Mr. Kaplan was empow­ered to plan accord­ing­ly. The com­pa­ny hired a for­mer aide to Mr. Trump’s new attor­ney gen­er­al, Jeff Ses­sions, along with lob­by­ing firms linked to Repub­li­can law­mak­ers who had juris­dic­tion over inter­net com­pa­nies.

    But inside Face­book, new trou­bles were brew­ing.

    Min­i­miz­ing Russia’s Role

    In the final months of Mr. Trump’s pres­i­den­tial cam­paign, Russ­ian agents esca­lat­ed a year­long effort to hack and harass his Demo­c­ra­t­ic oppo­nents, cul­mi­nat­ing in the release of thou­sands of emails stolen from promi­nent Democ­rats and par­ty offi­cials.

    Face­book had said noth­ing pub­licly about any prob­lems on its own plat­form. But in the spring of 2016, a com­pa­ny expert on Russ­ian cyber­war­fare spot­ted some­thing wor­ri­some. He reached out to his boss, Mr. Sta­mos.

    Mr. Stamos’s team dis­cov­ered that Russ­ian hack­ers appeared to be prob­ing Face­book accounts for peo­ple con­nect­ed to the pres­i­den­tial cam­paigns, said two employ­ees. Months lat­er, as Mr. Trump bat­tled Hillary Clin­ton in the gen­er­al elec­tion, the team also found Face­book accounts linked to Russ­ian hack­ers who were mes­sag­ing jour­nal­ists to share infor­ma­tion from the stolen emails.

    Mr. Sta­mos, 39, told Col­in Stretch, Facebook’s gen­er­al coun­sel, about the find­ings, said two peo­ple involved in the con­ver­sa­tions. At the time, Face­book had no pol­i­cy on dis­in­for­ma­tion or any resources ded­i­cat­ed to search­ing for it.

    Mr. Sta­mos, act­ing on his own, then direct­ed a team to scru­ti­nize the extent of Russ­ian activ­i­ty on Face­book. In Decem­ber 2016, after Mr. Zucker­berg pub­licly scoffed at the idea that fake news on Face­book had helped elect Mr. Trump, Mr. Sta­mos — alarmed that the company’s chief exec­u­tive seemed unaware of his team’s find­ings — met with Mr. Zucker­berg, Ms. Sand­berg and oth­er top Face­book lead­ers.

    Ms. Sand­berg was angry. Look­ing into the Russ­ian activ­i­ty with­out approval, she said, had left the com­pa­ny exposed legal­ly. Oth­er exec­u­tives asked Mr. Sta­mos why they had not been told soon­er.

    Still, Ms. Sand­berg and Mr. Zucker­berg decid­ed to expand on Mr. Stamos’s work, cre­at­ing a group called Project P, for “pro­pa­gan­da,” to study false news on the site, accord­ing to peo­ple involved in the dis­cus­sions. By Jan­u­ary 2017, the group knew that Mr. Stamos’s orig­i­nal team had only scratched the sur­face of Russ­ian activ­i­ty on Face­book, and pressed to issue a pub­lic paper about their find­ings.

    But Mr. Kaplan and oth­er Face­book exec­u­tives object­ed. Wash­ing­ton was already reel­ing from an offi­cial find­ing by Amer­i­can intel­li­gence agen­cies that Vladimir V. Putin, the Russ­ian pres­i­dent, had per­son­al­ly ordered an influ­ence cam­paign aimed at help­ing elect Mr. Trump.

    If Face­book impli­cat­ed Rus­sia fur­ther, Mr. Kaplan said, Repub­li­cans would accuse the com­pa­ny of sid­ing with Democ­rats. And if Face­book pulled down the Rus­sians’ fake pages, reg­u­lar Face­book users might also react with out­rage at hav­ing been deceived: His own moth­er-in-law, Mr. Kaplan said, had fol­lowed a Face­book page cre­at­ed by Russ­ian trolls.

    Ms. Sand­berg sided with Mr. Kaplan, recalled four peo­ple involved. Mr. Zucker­berg — who spent much of 2017 on a nation­al “lis­ten­ing tour,” feed­ing cows in Wis­con­sin and eat­ing din­ner with Soma­li refugees in Min­neso­ta — did not par­tic­i­pate in the con­ver­sa­tions about the pub­lic paper. When it was pub­lished that April, the word “Rus­sia” nev­er appeared.

    ...

    A Polit­i­cal Play­book

    The com­bined rev­e­la­tions infu­ri­at­ed Democ­rats, final­ly frac­tur­ing the polit­i­cal con­sen­sus that had pro­tect­ed Face­book and oth­er big tech com­pa­nies from Belt­way inter­fer­ence. Repub­li­cans, already con­cerned that the plat­form was cen­sor­ing con­ser­v­a­tive views, accused Face­book of fuel­ing what they claimed were mer­it­less con­spir­a­cy charges against Mr. Trump and Rus­sia. Democ­rats, long allied with Sil­i­con Val­ley on issues includ­ing immi­gra­tion and gay rights, now blamed Mr. Trump’s win part­ly on Facebook’s tol­er­ance for fraud and dis­in­for­ma­tion.

    After stalling for weeks, Face­book even­tu­al­ly agreed to hand over the Russ­ian posts to Con­gress. Twice in Octo­ber 2017, Face­book was forced to revise its pub­lic state­ments, final­ly acknowl­edg­ing that close to 126 mil­lion peo­ple had seen the Russ­ian posts.

    The same month, Mr. Warn­er and Sen­a­tor Amy Klobuchar, the Min­neso­ta Demo­c­rat, intro­duced leg­is­la­tion to com­pel Face­book and oth­er inter­net firms to dis­close who bought polit­i­cal ads on their sites — a sig­nif­i­cant expan­sion of fed­er­al reg­u­la­tion over tech com­pa­nies.

    “It’s time for Face­book to let all of us see the ads bought by Rus­sians *and paid for in Rubles* dur­ing the last elec­tion,” Ms. Klobuchar wrote on her own Face­book page.

    Face­book gird­ed for bat­tle. Days after the bill was unveiled, Face­book hired Mr. Warner’s for­mer chief of staff, Luke Albee, to lob­by on it. Mr. Kaplan’s team took a larg­er role in man­ag­ing the company’s Wash­ing­ton response, rou­tine­ly review­ing Face­book news releas­es for words or phras­es that might rile con­ser­v­a­tives.

    Ms. Sand­berg also reached out to Ms. Klobuchar. She had been friend­ly with the sen­a­tor, who is fea­tured on the web­site for Lean In, Ms. Sandberg’s empow­er­ment ini­tia­tive. Ms. Sand­berg had con­tributed a blurb to Ms. Klobuchar’s 2015 mem­oir, and the senator’s chief of staff had pre­vi­ous­ly worked at Ms. Sandberg’s char­i­ta­ble foun­da­tion.

    But in a tense con­ver­sa­tion short­ly after the ad leg­is­la­tion was intro­duced, Ms. Sand­berg com­plained about Ms. Klobuchar’s attacks on the com­pa­ny, said a per­son who was briefed on the call. Ms. Klobuchar did not back down on her leg­is­la­tion. But she dialed down her crit­i­cism in at least one venue impor­tant to the com­pa­ny: After blast­ing Face­book repeat­ed­ly that fall on her own Face­book page, Ms. Klobuchar hard­ly men­tioned the com­pa­ny in posts between Novem­ber and Feb­ru­ary.

    A spokesman for Ms. Klobuchar said in a state­ment that Facebook’s lob­by­ing had not less­ened her com­mit­ment to hold­ing the com­pa­ny account­able. “Face­book was push­ing to exclude issue ads from the Hon­est Ads Act, and Sen­a­tor Klobuchar stren­u­ous­ly dis­agreed and refused to change the bill,” he said.

    In Octo­ber 2017, Face­book also expand­ed its work with a Wash­ing­ton-based con­sul­tant, Defin­ers Pub­lic Affairs, that had orig­i­nal­ly been hired to mon­i­tor press cov­er­age of the com­pa­ny. Found­ed by vet­er­ans of Repub­li­can pres­i­den­tial pol­i­tics, Defin­ers spe­cial­ized in apply­ing polit­i­cal cam­paign tac­tics to cor­po­rate pub­lic rela­tions — an approach long employed in Wash­ing­ton by big telecom­mu­ni­ca­tions firms and activist hedge fund man­agers, but less com­mon in tech.

    Defin­ers had estab­lished a Sil­i­con Val­ley out­post ear­li­er that year, led by Tim Miller, a for­mer spokesman for Jeb Bush who preached the virtues of cam­paign-style oppo­si­tion research. For tech firms, he argued in one inter­view, a goal should be to “have pos­i­tive con­tent pushed out about your com­pa­ny and neg­a­tive con­tent that’s being pushed out about your com­peti­tor.”

    Face­book quick­ly adopt­ed that strat­e­gy. In Novem­ber 2017, the social net­work came out in favor of a bill called the Stop Enabling Sex Traf­fick­ers Act, which made inter­net com­pa­nies respon­si­ble for sex traf­fick­ing ads on their sites.

    Google and oth­ers had fought the bill for months, wor­ry­ing it would set a cum­ber­some prece­dent. But the sex traf­fick­ing bill was cham­pi­oned by Sen­a­tor John Thune, a Repub­li­can of South Dako­ta who had pum­meled Face­book over accu­sa­tions that it cen­sored con­ser­v­a­tive con­tent, and Sen­a­tor Richard Blu­men­thal, a Con­necti­cut Demo­c­rat and senior com­merce com­mit­tee mem­ber who was a fre­quent crit­ic of Face­book.

    Face­book broke ranks with oth­er tech com­pa­nies, hop­ing the move would help repair rela­tions on both sides of the aisle, said two con­gres­sion­al staffers and three tech indus­try offi­cials.

    When the bill came to a vote in the House in Feb­ru­ary, Ms. Sand­berg offered pub­lic sup­port online, urg­ing Con­gress to “make sure we pass mean­ing­ful and strong leg­is­la­tion to stop sex traf­fick­ing.”

    Oppo­si­tion Research

    In March, The Times, The Observ­er of Lon­don and The Guardian pre­pared to pub­lish a joint inves­ti­ga­tion into how Face­book user data had been appro­pri­at­ed by Cam­bridge Ana­lyt­i­ca to pro­file Amer­i­can vot­ers. A few days before pub­li­ca­tion, The Times pre­sent­ed Face­book with evi­dence that copies of improp­er­ly acquired Face­book data still exist­ed, despite ear­li­er promis­es by Cam­bridge exec­u­tives and oth­ers to delete it.

    Mr. Zucker­berg and Ms. Sand­berg met with their lieu­tenants to deter­mine a response. They decid­ed to pre-empt the sto­ries, say­ing in a state­ment pub­lished late on a Fri­day night that Face­book had sus­pend­ed Cam­bridge Ana­lyt­i­ca from its plat­form. The exec­u­tives fig­ured that get­ting ahead of the news would soft­en its blow, accord­ing to peo­ple in the dis­cus­sions.

    They were wrong. The sto­ry drew world­wide out­rage, prompt­ing law­suits and offi­cial inves­ti­ga­tions in Wash­ing­ton, Lon­don and Brus­sels. For days, Mr. Zucker­berg and Ms. Sand­berg remained out of sight, mulling how to respond. While the Rus­sia inves­ti­ga­tion had devolved into an increas­ing­ly par­ti­san bat­tle, the Cam­bridge scan­dal set off Democ­rats and Repub­li­cans alike. And in Sil­i­con Val­ley, oth­er tech firms began exploit­ing the out­cry to bur­nish their own brands.

    “We’re not going to traf­fic in your per­son­al life,” Tim Cook, Apple’s chief exec­u­tive, said in an MSNBC inter­view. “Pri­va­cy to us is a human right. It’s a civ­il lib­er­ty.” (Mr. Cook’s crit­i­cisms infu­ri­at­ed Mr. Zucker­berg, who lat­er ordered his man­age­ment team to use only Android phones — argu­ing that the oper­at­ing sys­tem had far more users than Apple’s.)

    Face­book scram­bled anew. Exec­u­tives qui­et­ly shelved an inter­nal com­mu­ni­ca­tions cam­paign, called “We Get It,” meant to assure employ­ees that the com­pa­ny was com­mit­ted to get­ting back on track in 2018.

    Then Face­book went on the offen­sive. Mr. Kaplan pre­vailed on Ms. Sand­berg to pro­mote Kevin Mar­tin, a for­mer Fed­er­al Com­mu­ni­ca­tions Com­mis­sion chair­man and fel­low Bush admin­is­tra­tion vet­er­an, to lead the company’s Amer­i­can lob­by­ing efforts. Face­book also expand­ed its work with Defin­ers.

    On a con­ser­v­a­tive news site called the NTK Net­work, dozens of arti­cles blast­ed Google and Apple for unsa­vory busi­ness prac­tices. One sto­ry called Mr. Cook hyp­o­crit­i­cal for chid­ing Face­book over pri­va­cy, not­ing that Apple also col­lects reams of data from users. Anoth­er played down the impact of the Rus­sians’ use of Face­book.

    The rash of news cov­er­age was no acci­dent: NTK is an affil­i­ate of Defin­ers, shar­ing offices and staff with the pub­lic rela­tions firm in Arling­ton, Va. Many NTK Net­work sto­ries are writ­ten by staff mem­bers at Defin­ers or Amer­i­ca Ris­ing, the company’s polit­i­cal oppo­si­tion-research arm, to attack their clients’ ene­mies. While the NTK Net­work does not have a large audi­ence of its own, its con­tent is fre­quent­ly picked up by pop­u­lar con­ser­v­a­tive out­lets, includ­ing Bre­it­bart.

    Mr. Miller acknowl­edged that Face­book and Apple do not direct­ly com­pete. Defin­ers’ work on Apple is fund­ed by a third tech­nol­o­gy com­pa­ny, he said, but Face­book has pushed back against Apple because Mr. Cook’s crit­i­cism upset Face­book.

    If the pri­va­cy issue comes up, Face­book is hap­py to “mud­dy the waters,” Mr. Miller said over drinks at an Oak­land, Calif., bar last month.

    On Thurs­day, after this arti­cle was pub­lished, Face­book said that it had end­ed its rela­tion­ship with Defin­ers, with­out cit­ing a rea­son.

    ...

    Per­son­al Appeals in Wash­ing­ton

    Ms. Sand­berg had said lit­tle pub­licly about the company’s prob­lems. But inside Face­book, her approach had begun to draw crit­i­cism.

    ...

    Face­book also con­tin­ued to look for ways to deflect crit­i­cism to rivals. In June, after The Times report­ed on Facebook’s pre­vi­ous­ly undis­closed deals to share user data with device mak­ers — part­ner­ships Face­book had failed to dis­close to law­mak­ers — exec­u­tives ordered up focus groups in Wash­ing­ton.

    In sep­a­rate ses­sions with lib­er­als and con­ser­v­a­tives, about a dozen at a time, Face­book pre­viewed mes­sages to law­mak­ers. Among the approach­es it test­ed was bring­ing YouTube and oth­er social media plat­forms into the con­tro­ver­sy, while argu­ing that Google struck sim­i­lar data-shar­ing deals.

    Deflect­ing Crit­i­cism

    By then, some of the harsh­est crit­i­cism of Face­book was com­ing from the polit­i­cal left, where activists and pol­i­cy experts had begun call­ing for the com­pa­ny to be bro­ken up.

    In July, orga­niz­ers with a coali­tion called Free­dom from Face­book crashed a hear­ing of the House Judi­cia­ry Com­mit­tee, where a com­pa­ny exec­u­tive was tes­ti­fy­ing about its poli­cies. As the exec­u­tive spoke, the orga­niz­ers held aloft signs depict­ing Ms. Sand­berg and Mr. Zucker­berg, who are both Jew­ish, as two heads of an octo­pus stretch­ing around the globe.

    Eddie Vale, a Demo­c­ra­t­ic pub­lic rela­tions strate­gist who led the protest, lat­er said the image was meant to evoke old car­toons of Stan­dard Oil, the Gild­ed Age monop­oly. But a Face­book offi­cial quick­ly called the Anti-Defama­tion League, a lead­ing Jew­ish civ­il rights orga­ni­za­tion, to flag the sign. Face­book and oth­er tech com­pa­nies had part­nered with the civ­il rights group since late 2017 on an ini­tia­tive to com­bat anti-Semi­tism and hate speech online.

    That after­noon, the A.D.L. issued a warn­ing from its Twit­ter account.

    “Depict­ing Jews as an octo­pus encir­cling the globe is a clas­sic anti-Semit­ic trope,” the orga­ni­za­tion wrote. “Protest Face­book — or any­one — all you want, but pick a dif­fer­ent image.” The crit­i­cism was soon echoed in con­ser­v­a­tive out­lets includ­ing The Wash­ing­ton Free Bea­con, which has sought to tie Free­dom from Face­book to what the pub­li­ca­tion calls “extreme anti-Israel groups.”

    An A.D.L. spokes­woman, Bet­sai­da Alcan­tara, said the group rou­tine­ly field­ed reports of anti-Semit­ic slurs from jour­nal­ists, syn­a­gogues and oth­ers. “Our experts eval­u­ate each one based on our years of expe­ri­ence, and we respond appro­pri­ate­ly,” Ms. Alcan­tara said. (The group has at times sharply crit­i­cized Face­book, includ­ing when Mr. Zucker­berg sug­gest­ed that his com­pa­ny should not cen­sor Holo­caust deniers.)

    Face­book also used Defin­ers to take on big­ger oppo­nents, such as Mr. Soros, a long­time boogey­man to main­stream con­ser­v­a­tives and the tar­get of intense anti-Semit­ic smears on the far right. A research doc­u­ment cir­cu­lat­ed by Defin­ers to reporters this sum­mer, just a month after the House hear­ing, cast Mr. Soros as the unac­knowl­edged force behind what appeared to be a broad anti-Face­book move­ment.

    He was a nat­ur­al tar­get. In a speech at the World Eco­nom­ic Forum in Jan­u­ary, he had attacked Face­book and Google, describ­ing them as a monop­o­list “men­ace” with “nei­ther the will nor the incli­na­tion to pro­tect soci­ety against the con­se­quences of their actions.”

    Defin­ers pressed reporters to explore the finan­cial con­nec­tions between Mr. Soros’s fam­i­ly or phil­an­thropies and groups that were mem­bers of Free­dom from Face­book, such as Col­or of Change, an online racial jus­tice orga­ni­za­tion, as well as a pro­gres­sive group found­ed by Mr. Soros’s son. (An offi­cial at Mr. Soros’s Open Soci­ety Foun­da­tions said the phil­an­thropy had sup­port­ed both mem­ber groups, but not Free­dom from Face­book, and had made no grants to sup­port cam­paigns against Face­book.)

    ...

    ———-

    “Delay, Deny and Deflect: How Facebook’s Lead­ers Fought Through Cri­sis” by Sheera Frenkel, Nicholas Con­fes­sore, Cecil­ia Kang, Matthew Rosen­berg and Jack Nicas; The New York Times; 11/14/2018

    “While Mr. Zucker­berg has con­duct­ed a pub­lic apol­o­gy tour in the last year, Ms. Sand­berg has over­seen an aggres­sive lob­by­ing cam­paign to com­bat Facebook’s crit­ics, shift pub­lic anger toward rival com­pa­nies and ward off dam­ag­ing reg­u­la­tion. Face­book employed a Repub­li­can oppo­si­tion-research firm to dis­cred­it activist pro­test­ers, in part by link­ing them to the lib­er­al financier George Soros. It also tapped its busi­ness rela­tion­ships, lob­by­ing a Jew­ish civ­il rights group to cast some crit­i­cism of the com­pa­ny as anti-Semit­ic.”

    Imag­ine if your job was to han­dle Face­book’s bad press. That was appar­ent­ly Sheryl Sand­berg’s job behind the scenes while Mark Zucker­berg was act­ing as the apolo­getic pub­lic face of Face­book.

    But both Zucker­berg and Sand­berg appeared to have large­ly the same response to the scan­dals involv­ing Face­book’s grow­ing use as a plat­form for spread­ing hate and extrem­ism: keep Face­book out of those dis­putes by argu­ing that it’s just a plat­form, not a pub­lish­er:

    ...
    ‘Don’t Poke the Bear’

    Three years ago, Mr. Zucker­berg, who found­ed Face­book in 2004 while attend­ing Har­vard, was cel­e­brat­ed for the company’s extra­or­di­nary suc­cess. Ms. Sand­berg, a for­mer Clin­ton admin­is­tra­tion offi­cial and Google vet­er­an, had become a fem­i­nist icon with the pub­li­ca­tion of her empow­er­ment man­i­festo, “Lean In,” in 2013.

    Like oth­er tech­nol­o­gy exec­u­tives, Mr. Zucker­berg and Ms. Sand­berg cast their com­pa­ny as a force for social good. Facebook’s lofty aims were embla­zoned even on secu­ri­ties fil­ings: “Our mis­sion is to make the world more open and con­nect­ed.”

    But as Face­book grew, so did the hate speech, bul­ly­ing and oth­er tox­ic con­tent on the plat­form. When researchers and activists in Myan­mar, India, Ger­many and else­where warned that Face­book had become an instru­ment of gov­ern­ment pro­pa­gan­da and eth­nic cleans­ing, the com­pa­ny large­ly ignored them. Face­book had posi­tioned itself as a plat­form, not a pub­lish­er. Tak­ing respon­si­bil­i­ty for what users post­ed, or act­ing to cen­sor it, was expen­sive and com­pli­cat­ed. Many Face­book exec­u­tives wor­ried that any such efforts would back­fire.
    ...

    Sandberg also appears to have increasingly relied on Joel Kaplan, Facebook's vice president of global public policy, for advice on how to handle these issues and scandals. Kaplan previously served in the George W. Bush administration. When Donald Trump first ran for president in 2015 and announced his plan for a “total and complete shutdown” on Muslims entering the United States, and that message was shared more than 15,000 times on Facebook, Zuckerberg raised the question of whether Trump had violated the platform's terms of service. Sandberg turned to Kaplan for advice. Kaplan, unsurprisingly, warned that any sort of crackdown on Trump's use of Facebook would be seen as obstructing free speech and would prompt a conservative backlash. Kaplan's advice was taken:

    ...
    Then Don­ald J. Trump ran for pres­i­dent. He described Mus­lim immi­grants and refugees as a dan­ger to Amer­i­ca, and in Decem­ber 2015 post­ed a state­ment on Face­book call­ing for a “total and com­plete shut­down” on Mus­lims enter­ing the Unit­ed States. Mr. Trump’s call to arms — wide­ly con­demned by Democ­rats and some promi­nent Repub­li­cans — was shared more than 15,000 times on Face­book, an illus­tra­tion of the site’s pow­er to spread racist sen­ti­ment.

    Mr. Zucker­berg, who had helped found a non­prof­it ded­i­cat­ed to immi­gra­tion reform, was appalled, said employ­ees who spoke to him or were famil­iar with the con­ver­sa­tion. He asked Ms. Sand­berg and oth­er exec­u­tives if Mr. Trump had vio­lat­ed Facebook’s terms of ser­vice.

    The ques­tion was unusu­al. Mr. Zucker­berg typ­i­cal­ly focused on broad­er tech­nol­o­gy issues; pol­i­tics was Ms. Sandberg’s domain. In 2010, Ms. Sand­berg, a Demo­c­rat, had recruit­ed a friend and fel­low Clin­ton alum, Marne Levine, as Facebook’s chief Wash­ing­ton rep­re­sen­ta­tive. A year lat­er, after Repub­li­cans seized con­trol of the House, Ms. Sand­berg installed anoth­er friend, a well-con­nect­ed Repub­li­can: Joel Kaplan, who had attend­ed Har­vard with Ms. Sand­berg and lat­er served in the George W. Bush admin­is­tra­tion.

    Some at Face­book viewed Mr. Trump’s 2015 attack on Mus­lims as an oppor­tu­ni­ty to final­ly take a stand against the hate speech cours­ing through its plat­form. But Ms. Sand­berg, who was edg­ing back to work after the death of her hus­band sev­er­al months ear­li­er, del­e­gat­ed the mat­ter to Mr. Schrage and Moni­ka Bick­ert, a for­mer pros­e­cu­tor whom Ms. Sand­berg had recruit­ed as the company’s head of glob­al pol­i­cy man­age­ment. Ms. Sand­berg also turned to the Wash­ing­ton office — par­tic­u­lar­ly to Mr. Kaplan, said peo­ple who par­tic­i­pat­ed in or were briefed on the dis­cus­sions.

    In video con­fer­ence calls between the Sil­i­con Val­ley head­quar­ters and Wash­ing­ton, the three offi­cials con­strued their task nar­row­ly. They parsed the company’s terms of ser­vice to see if the post, or Mr. Trump’s account, vio­lat­ed Facebook’s rules.

    Mr. Kaplan argued that Mr. Trump was an impor­tant pub­lic fig­ure and that shut­ting down his account or remov­ing the state­ment could be seen as obstruct­ing free speech, said three employ­ees who knew of the dis­cus­sions. He said it could also stoke a con­ser­v­a­tive back­lash.

    “Don’t poke the bear,” Mr. Kaplan warned.

    Mr. Zucker­berg did not par­tic­i­pate in the debate. Ms. Sand­berg attend­ed some of the video meet­ings but rarely spoke.

    Mr. Schrage con­clud­ed that Mr. Trump’s lan­guage had not vio­lat­ed Facebook’s rules and that the candidate’s views had pub­lic val­ue. “We were try­ing to make a deci­sion based on all the legal and tech­ni­cal evi­dence before us,” he said in an inter­view.
    ...

    And note how, after Trump won, Face­book hired a for­mer aide to Jeff Ses­sions and lob­by­ing firms linked to Repub­li­can law­mak­ers who had juris­dic­tion over inter­net com­pa­nies. Face­book was mak­ing pleas­ing Repub­li­cans in Wash­ing­ton a top pri­or­i­ty:

    ...
    In the end, Mr. Trump’s state­ment and account remained on the site. When Mr. Trump won elec­tion the next fall, giv­ing Repub­li­cans con­trol of the White House as well as Con­gress, Mr. Kaplan was empow­ered to plan accord­ing­ly. The com­pa­ny hired a for­mer aide to Mr. Trump’s new attor­ney gen­er­al, Jeff Ses­sions, along with lob­by­ing firms linked to Repub­li­can law­mak­ers who had juris­dic­tion over inter­net com­pa­nies.
    ...

    Kaplan also encouraged Facebook to avoid investigating the alleged Russian troll campaigns too closely. This was his advice both in 2016, while the campaign was ongoing, and after the campaign in 2017. Interestingly, Facebook apparently found accounts linked to 'Russian hackers' that were using Facebook to look up information on the presidential campaigns. This was in the spring of 2016. Keep in mind that the initial reports of the hacked emails didn't start until mid-June of 2016, and summer technically started about a week later. So how did Facebook's internal team know these accounts were associated with Russian hackers before the 'Russian hacker' scandal erupted? That's unclear. But the article goes on to say that this same team also found accounts linked to the Russian hackers messaging journalists to share the contents of the hacked emails. Was “Guccifer 2.0” using Facebook to talk with journalists? That's also unclear. But it sounds like Facebook was indeed actively observing what it thought were Russian hackers using the platform:

    ...
    Min­i­miz­ing Russia’s Role

    In the final months of Mr. Trump’s pres­i­den­tial cam­paign, Russ­ian agents esca­lat­ed a year­long effort to hack and harass his Demo­c­ra­t­ic oppo­nents, cul­mi­nat­ing in the release of thou­sands of emails stolen from promi­nent Democ­rats and par­ty offi­cials.

    Face­book had said noth­ing pub­licly about any prob­lems on its own plat­form. But in the spring of 2016, a com­pa­ny expert on Russ­ian cyber­war­fare spot­ted some­thing wor­ri­some. He reached out to his boss, Mr. Sta­mos.

    Mr. Stamos’s team dis­cov­ered that Russ­ian hack­ers appeared to be prob­ing Face­book accounts for peo­ple con­nect­ed to the pres­i­den­tial cam­paigns, said two employ­ees. Months lat­er, as Mr. Trump bat­tled Hillary Clin­ton in the gen­er­al elec­tion, the team also found Face­book accounts linked to Russ­ian hack­ers who were mes­sag­ing jour­nal­ists to share infor­ma­tion from the stolen emails.

    Mr. Sta­mos, 39, told Col­in Stretch, Facebook’s gen­er­al coun­sel, about the find­ings, said two peo­ple involved in the con­ver­sa­tions. At the time, Face­book had no pol­i­cy on dis­in­for­ma­tion or any resources ded­i­cat­ed to search­ing for it.
    ...

    Alex Stamos, Facebook's head of security, directed a team to examine the Russian activity on Facebook. And yet Zuckerberg and Sandberg apparently never learned about the team's findings until December of 2016, after the election. And when they did learn, Sandberg got angry at Stamos for not getting approval before looking into the matter, because it could leave the company legally exposed, highlighting again how not knowing about the abuses on its platform serves as a legal strategy for the company. By January of 2017, Stamos wanted to issue a public paper on the findings, but Joel Kaplan shot down the idea, arguing that doing so would cause Republicans to turn on the company. Sandberg again agreed with Kaplan:

    ...
    Mr. Sta­mos, act­ing on his own, then direct­ed a team to scru­ti­nize the extent of Russ­ian activ­i­ty on Face­book. In Decem­ber 2016, after Mr. Zucker­berg pub­licly scoffed at the idea that fake news on Face­book had helped elect Mr. Trump, Mr. Sta­mos — alarmed that the company’s chief exec­u­tive seemed unaware of his team’s find­ings — met with Mr. Zucker­berg, Ms. Sand­berg and oth­er top Face­book lead­ers.

    Ms. Sand­berg was angry. Look­ing into the Russ­ian activ­i­ty with­out approval, she said, had left the com­pa­ny exposed legal­ly. Oth­er exec­u­tives asked Mr. Sta­mos why they had not been told soon­er.

    Still, Ms. Sand­berg and Mr. Zucker­berg decid­ed to expand on Mr. Stamos’s work, cre­at­ing a group called Project P, for “pro­pa­gan­da,” to study false news on the site, accord­ing to peo­ple involved in the dis­cus­sions. By Jan­u­ary 2017, the group knew that Mr. Stamos’s orig­i­nal team had only scratched the sur­face of Russ­ian activ­i­ty on Face­book, and pressed to issue a pub­lic paper about their find­ings.

    But Mr. Kaplan and oth­er Face­book exec­u­tives object­ed. Wash­ing­ton was already reel­ing from an offi­cial find­ing by Amer­i­can intel­li­gence agen­cies that Vladimir V. Putin, the Russ­ian pres­i­dent, had per­son­al­ly ordered an influ­ence cam­paign aimed at help­ing elect Mr. Trump.

    If Face­book impli­cat­ed Rus­sia fur­ther, Mr. Kaplan said, Repub­li­cans would accuse the com­pa­ny of sid­ing with Democ­rats. And if Face­book pulled down the Rus­sians’ fake pages, reg­u­lar Face­book users might also react with out­rage at hav­ing been deceived: His own moth­er-in-law, Mr. Kaplan said, had fol­lowed a Face­book page cre­at­ed by Russ­ian trolls.

    Ms. Sand­berg sided with Mr. Kaplan, recalled four peo­ple involved. Mr. Zucker­berg — who spent much of 2017 on a nation­al “lis­ten­ing tour,” feed­ing cows in Wis­con­sin and eat­ing din­ner with Soma­li refugees in Min­neso­ta — did not par­tic­i­pate in the con­ver­sa­tions about the pub­lic paper. When it was pub­lished that April, the word “Rus­sia” nev­er appeared.
    ...

    “Mr. Sta­mos, act­ing on his own, then direct­ed a team to scru­ti­nize the extent of Russ­ian activ­i­ty on Face­book. In Decem­ber 2016, after Mr. Zucker­berg pub­licly scoffed at the idea that fake news on Face­book had helped elect Mr. Trump, Mr. Sta­mos — alarmed that the company’s chief exec­u­tive seemed unaware of his team’s find­ings — met with Mr. Zucker­berg, Ms. Sand­berg and oth­er top Face­book lead­ers.”

    Both Zucker­berg and Sand­berg were appar­ent­ly unaware of the find­ings of Sta­mos’s team that had been look­ing into Russ­ian activ­i­ty since the spring of 2016 and found ear­ly signs of the ‘Russ­ian hack­ing teams’ set­ting up Face­book pages to dis­trib­ute the emails. Huh.

    And then we get to Definers Public Affairs, the company founded by Republican political operatives and specializing in bringing political campaign tactics to corporate public relations. In October of 2017, Facebook appears to have decided to double down on the Definers strategy, an approach that revolves around simultaneously pushing out positive Facebook coverage while attacking Facebook's opponents and critics to muddy the waters:

    ...
    In Octo­ber 2017, Face­book also expand­ed its work with a Wash­ing­ton-based con­sul­tant, Defin­ers Pub­lic Affairs, that had orig­i­nal­ly been hired to mon­i­tor press cov­er­age of the com­pa­ny. Found­ed by vet­er­ans of Repub­li­can pres­i­den­tial pol­i­tics, Defin­ers spe­cial­ized in apply­ing polit­i­cal cam­paign tac­tics to cor­po­rate pub­lic rela­tions — an approach long employed in Wash­ing­ton by big telecom­mu­ni­ca­tions firms and activist hedge fund man­agers, but less com­mon in tech.

    Defin­ers had estab­lished a Sil­i­con Val­ley out­post ear­li­er that year, led by Tim Miller, a for­mer spokesman for Jeb Bush who preached the virtues of cam­paign-style oppo­si­tion research. For tech firms, he argued in one inter­view, a goal should be to “have pos­i­tive con­tent pushed out about your com­pa­ny and neg­a­tive con­tent that’s being pushed out about your com­peti­tor.”

    Face­book quick­ly adopt­ed that strat­e­gy. In Novem­ber 2017, the social net­work came out in favor of a bill called the Stop Enabling Sex Traf­fick­ers Act, which made inter­net com­pa­nies respon­si­ble for sex traf­fick­ing ads on their sites.

    Google and oth­ers had fought the bill for months, wor­ry­ing it would set a cum­ber­some prece­dent. But the sex traf­fick­ing bill was cham­pi­oned by Sen­a­tor John Thune, a Repub­li­can of South Dako­ta who had pum­meled Face­book over accu­sa­tions that it cen­sored con­ser­v­a­tive con­tent, and Sen­a­tor Richard Blu­men­thal, a Con­necti­cut Demo­c­rat and senior com­merce com­mit­tee mem­ber who was a fre­quent crit­ic of Face­book.

    Face­book broke ranks with oth­er tech com­pa­nies, hop­ing the move would help repair rela­tions on both sides of the aisle, said two con­gres­sion­al staffers and three tech indus­try offi­cials.

    When the bill came to a vote in the House in Feb­ru­ary, Ms. Sand­berg offered pub­lic sup­port online, urg­ing Con­gress to “make sure we pass mean­ing­ful and strong leg­is­la­tion to stop sex traf­fick­ing.”
    ...

    Then, in March of this year, the Cambridge Analytica scandal blew open. In response, Kaplan convinced Sandberg to promote another Republican to help deal with the damage: Kevin Martin, a former FCC chairman and a Bush administration veteran, was chosen to lead Facebook's US lobbying efforts. Definers was also tapped to deal with the scandal. And as part of that response, Definers used its affiliated NTK Network to pump out waves of articles slamming Google and Apple for unsavory business practices:

    ...
    Oppo­si­tion Research

    In March, The Times, The Observ­er of Lon­don and The Guardian pre­pared to pub­lish a joint inves­ti­ga­tion into how Face­book user data had been appro­pri­at­ed by Cam­bridge Ana­lyt­i­ca to pro­file Amer­i­can vot­ers. A few days before pub­li­ca­tion, The Times pre­sent­ed Face­book with evi­dence that copies of improp­er­ly acquired Face­book data still exist­ed, despite ear­li­er promis­es by Cam­bridge exec­u­tives and oth­ers to delete it.

    Mr. Zucker­berg and Ms. Sand­berg met with their lieu­tenants to deter­mine a response. They decid­ed to pre-empt the sto­ries, say­ing in a state­ment pub­lished late on a Fri­day night that Face­book had sus­pend­ed Cam­bridge Ana­lyt­i­ca from its plat­form. The exec­u­tives fig­ured that get­ting ahead of the news would soft­en its blow, accord­ing to peo­ple in the dis­cus­sions.

    They were wrong. The sto­ry drew world­wide out­rage, prompt­ing law­suits and offi­cial inves­ti­ga­tions in Wash­ing­ton, Lon­don and Brus­sels. For days, Mr. Zucker­berg and Ms. Sand­berg remained out of sight, mulling how to respond. While the Rus­sia inves­ti­ga­tion had devolved into an increas­ing­ly par­ti­san bat­tle, the Cam­bridge scan­dal set off Democ­rats and Repub­li­cans alike. And in Sil­i­con Val­ley, oth­er tech firms began exploit­ing the out­cry to bur­nish their own brands.

    “We’re not going to traf­fic in your per­son­al life,” Tim Cook, Apple’s chief exec­u­tive, said in an MSNBC inter­view. “Pri­va­cy to us is a human right. It’s a civ­il lib­er­ty.” (Mr. Cook’s crit­i­cisms infu­ri­at­ed Mr. Zucker­berg, who lat­er ordered his man­age­ment team to use only Android phones — argu­ing that the oper­at­ing sys­tem had far more users than Apple’s.)

    Face­book scram­bled anew. Exec­u­tives qui­et­ly shelved an inter­nal com­mu­ni­ca­tions cam­paign, called “We Get It,” meant to assure employ­ees that the com­pa­ny was com­mit­ted to get­ting back on track in 2018.

    Then Face­book went on the offen­sive. Mr. Kaplan pre­vailed on Ms. Sand­berg to pro­mote Kevin Mar­tin, a for­mer Fed­er­al Com­mu­ni­ca­tions Com­mis­sion chair­man and fel­low Bush admin­is­tra­tion vet­er­an, to lead the company’s Amer­i­can lob­by­ing efforts. Face­book also expand­ed its work with Defin­ers.

    On a con­ser­v­a­tive news site called the NTK Net­work, dozens of arti­cles blast­ed Google and Apple for unsa­vory busi­ness prac­tices. One sto­ry called Mr. Cook hyp­o­crit­i­cal for chid­ing Face­book over pri­va­cy, not­ing that Apple also col­lects reams of data from users. Anoth­er played down the impact of the Rus­sians’ use of Face­book.

    The rash of news cov­er­age was no acci­dent: NTK is an affil­i­ate of Defin­ers, shar­ing offices and staff with the pub­lic rela­tions firm in Arling­ton, Va. Many NTK Net­work sto­ries are writ­ten by staff mem­bers at Defin­ers or Amer­i­ca Ris­ing, the company’s polit­i­cal oppo­si­tion-research arm, to attack their clients’ ene­mies. While the NTK Net­work does not have a large audi­ence of its own, its con­tent is fre­quent­ly picked up by pop­u­lar con­ser­v­a­tive out­lets, includ­ing Bre­it­bart.
    ...

    Finally, in July of this year, we find Facebook accusing its critics of anti-Semitism at the same time Definers was deploying an arguably anti-Semitic attack on those exact same critics, part of a general strategy by Definers to cast Facebook's critics as puppets of George Soros:

    ...
    Deflect­ing Crit­i­cism

    By then, some of the harsh­est crit­i­cism of Face­book was com­ing from the polit­i­cal left, where activists and pol­i­cy experts had begun call­ing for the com­pa­ny to be bro­ken up.

    In July, orga­niz­ers with a coali­tion called Free­dom from Face­book crashed a hear­ing of the House Judi­cia­ry Com­mit­tee, where a com­pa­ny exec­u­tive was tes­ti­fy­ing about its poli­cies. As the exec­u­tive spoke, the orga­niz­ers held aloft signs depict­ing Ms. Sand­berg and Mr. Zucker­berg, who are both Jew­ish, as two heads of an octo­pus stretch­ing around the globe.

    Eddie Vale, a Demo­c­ra­t­ic pub­lic rela­tions strate­gist who led the protest, lat­er said the image was meant to evoke old car­toons of Stan­dard Oil, the Gild­ed Age monop­oly. But a Face­book offi­cial quick­ly called the Anti-Defama­tion League, a lead­ing Jew­ish civ­il rights orga­ni­za­tion, to flag the sign. Face­book and oth­er tech com­pa­nies had part­nered with the civ­il rights group since late 2017 on an ini­tia­tive to com­bat anti-Semi­tism and hate speech online.

    That after­noon, the A.D.L. issued a warn­ing from its Twit­ter account.

    “Depict­ing Jews as an octo­pus encir­cling the globe is a clas­sic anti-Semit­ic trope,” the orga­ni­za­tion wrote. “Protest Face­book — or any­one — all you want, but pick a dif­fer­ent image.” The crit­i­cism was soon echoed in con­ser­v­a­tive out­lets includ­ing The Wash­ing­ton Free Bea­con, which has sought to tie Free­dom from Face­book to what the pub­li­ca­tion calls “extreme anti-Israel groups.”

    An A.D.L. spokes­woman, Bet­sai­da Alcan­tara, said the group rou­tine­ly field­ed reports of anti-Semit­ic slurs from jour­nal­ists, syn­a­gogues and oth­ers. “Our experts eval­u­ate each one based on our years of expe­ri­ence, and we respond appro­pri­ate­ly,” Ms. Alcan­tara said. (The group has at times sharply crit­i­cized Face­book, includ­ing when Mr. Zucker­berg sug­gest­ed that his com­pa­ny should not cen­sor Holo­caust deniers.)

    Face­book also used Defin­ers to take on big­ger oppo­nents, such as Mr. Soros, a long­time boogey­man to main­stream con­ser­v­a­tives and the tar­get of intense anti-Semit­ic smears on the far right. A research doc­u­ment cir­cu­lat­ed by Defin­ers to reporters this sum­mer, just a month after the House hear­ing, cast Mr. Soros as the unac­knowl­edged force behind what appeared to be a broad anti-Face­book move­ment.

    He was a nat­ur­al tar­get. In a speech at the World Eco­nom­ic Forum in Jan­u­ary, he had attacked Face­book and Google, describ­ing them as a monop­o­list “men­ace” with “nei­ther the will nor the incli­na­tion to pro­tect soci­ety against the con­se­quences of their actions.”

    Defin­ers pressed reporters to explore the finan­cial con­nec­tions between Mr. Soros’s fam­i­ly or phil­an­thropies and groups that were mem­bers of Free­dom from Face­book, such as Col­or of Change, an online racial jus­tice orga­ni­za­tion, as well as a pro­gres­sive group found­ed by Mr. Soros’s son. (An offi­cial at Mr. Soros’s Open Soci­ety Foun­da­tions said the phil­an­thropy had sup­port­ed both mem­ber groups, but not Free­dom from Face­book, and had made no grants to sup­port cam­paigns against Face­book.)
    ...

    So as we can see, Face­book’s response to scan­dals appears to fall into the fol­low­ing pat­tern:

    1. Inten­tion­al­ly ignore the scan­dal.

    2. When it’s no longer pos­si­ble to ignore, try to get ahead of it by going pub­lic with a watered down admis­sion of the prob­lem.

    3. When getting ahead of the story doesn't work, attack Facebook's critics (like suggesting they are all pawns of George Soros).

    4. Don’t piss off Repub­li­cans.

    Also, regarding the discovery of Russian hackers setting up Facebook accounts in the spring of 2016 to distribute the hacked emails, here's a Washington Post article from September of 2017 that talks about this. According to the article, Facebook discovered these alleged Russian hacker accounts in June of 2016 (technically still spring) and promptly informed the FBI. The Facebook cybersecurity team was reportedly tracking APT28 (Fancy Bear) as part of its normal work and discovered this activity in the process. The team told the FBI, and then shortly afterwards discovered that pages for Guccifer 2.0 and DCLeaks were being set up to promote the stolen emails. And recall from the above article that the Facebook team apparently also discovered messages from these accounts to journalists.

    Interestingly, while the article says this was in June of 2016, it doesn't say when in June of 2016. And that timing is rather important, since the first Washington Post article on the hack of the DNC appeared on June 14, and Guccifer 2.0 popped up and went public just a day later. So did Facebook discover this activity before the reports about the hacked emails? That remains unclear, but it sounds like Facebook knows how to track APT28/Fancy Bear's activity on its platform, does so routinely, and that's how it discovered the email-distribution operation. And that implies that if APT28/Fancy Bear really did run this operation, they did it in a manner that allowed cybersecurity researchers to track their activity all over the web and on sites like Facebook, which would be one more example of the inexplicably poor operational security by these elite Russian hackers:

    The Wash­ing­ton Post

    Oba­ma tried to give Zucker­berg a wake-up call over fake news on Face­book

    By Adam Entous, Eliz­a­beth Dwoskin and Craig Tim­berg
    Sep­tem­ber 24, 2017

    This sto­ry has been updat­ed with an addi­tion­al response from Face­book.

    Nine days after Face­book chief exec­u­tive Mark Zucker­berg dis­missed as “crazy” the idea that fake news on his com­pa­ny’s social net­work played a key role in the U.S. elec­tion, Pres­i­dent Barack Oba­ma pulled the youth­ful tech bil­lion­aire aside and deliv­ered what he hoped would be a wake-up call.

    ...

    A Russ­ian oper­a­tion

    It turned out that Face­book, with­out real­iz­ing it, had stum­bled into the Russ­ian oper­a­tion as it was get­ting under­way in June 2016.

    At the time, cyber­se­cu­ri­ty experts at the com­pa­ny were track­ing a Russ­ian hack­er group known as APT28, or Fan­cy Bear, which U.S. intel­li­gence offi­cials con­sid­ered an arm of the Russ­ian mil­i­tary intel­li­gence ser­vice, the GRU, accord­ing to peo­ple famil­iar with Face­book’s activ­i­ties.

    Mem­bers of the Russ­ian hack­er group were best known for steal­ing mil­i­tary plans and data from polit­i­cal tar­gets, so the secu­ri­ty experts assumed that they were plan­ning some sort of espi­onage oper­a­tion — not a far-reach­ing dis­in­for­ma­tion cam­paign designed to shape the out­come of the U.S. pres­i­den­tial race.

    Face­book exec­u­tives shared with the FBI their sus­pi­cions that a Russ­ian espi­onage oper­a­tion was in the works, a per­son famil­iar with the mat­ter said. An FBI spokesper­son had no com­ment.

    Soon there­after, Face­book’s cyber experts found evi­dence that mem­bers of APT28 were set­ting up a series of shad­owy accounts — includ­ing a per­sona known as Guc­cifer 2.0 and a Face­book page called DCLeaks — to pro­mote stolen emails and oth­er doc­u­ments dur­ing the pres­i­den­tial race. Face­book offi­cials once again con­tact­ed the FBI to share what they had seen.

    After the Novem­ber elec­tion, Face­book began to look more broad­ly at the accounts that had been cre­at­ed dur­ing the cam­paign.

    A review by the com­pa­ny found that most of the groups behind the prob­lem­at­ic pages had clear finan­cial motives, which sug­gest­ed that they weren’t work­ing for a for­eign gov­ern­ment.

    But amid the mass of data the com­pa­ny was ana­lyz­ing, the secu­ri­ty team did not find clear evi­dence of Russ­ian dis­in­for­ma­tion or ad pur­chas­es by Russ­ian-linked accounts.

    Nor did any U.S. law enforce­ment or intel­li­gence offi­cials vis­it the com­pa­ny to lay out what they knew, said peo­ple famil­iar with the effort, even after the nation’s top intel­li­gence offi­cial, James R. Clap­per Jr., tes­ti­fied on Capi­tol Hill in Jan­u­ary that the Rus­sians had waged a mas­sive pro­pa­gan­da cam­paign online.

    ...
    ———-

    “Oba­ma tried to give Zucker­berg a wake-up call over fake news on Face­book” by Adam Entous, Eliz­a­beth Dwoskin and Craig Tim­berg; The Wash­ing­ton Post; 09/24/2017

    “It turned out that Face­book, with­out real­iz­ing it, had stum­bled into the Russ­ian oper­a­tion as it was get­ting under­way in June 2016.”

    It's kind of an amazing story. Just by accident, Facebook's cybersecurity experts were already tracking APT28 somehow and noticed a bunch of activity by the group on Facebook. They alerted the FBI. This was in June of 2016. “Soon thereafter,” Facebook found evidence that members of APT28 were setting up accounts for Guccifer 2.0 and DCLeaks. Facebook again informed the FBI:

    ...
    At the time, cyber­se­cu­ri­ty experts at the com­pa­ny were track­ing a Russ­ian hack­er group known as APT28, or Fan­cy Bear, which U.S. intel­li­gence offi­cials con­sid­ered an arm of the Russ­ian mil­i­tary intel­li­gence ser­vice, the GRU, accord­ing to peo­ple famil­iar with Face­book’s activ­i­ties.

    Mem­bers of the Russ­ian hack­er group were best known for steal­ing mil­i­tary plans and data from polit­i­cal tar­gets, so the secu­ri­ty experts assumed that they were plan­ning some sort of espi­onage oper­a­tion — not a far-reach­ing dis­in­for­ma­tion cam­paign designed to shape the out­come of the U.S. pres­i­den­tial race.

    Face­book exec­u­tives shared with the FBI their sus­pi­cions that a Russ­ian espi­onage oper­a­tion was in the works, a per­son famil­iar with the mat­ter said. An FBI spokesper­son had no com­ment.

    Soon there­after, Face­book’s cyber experts found evi­dence that mem­bers of APT28 were set­ting up a series of shad­owy accounts — includ­ing a per­sona known as Guc­cifer 2.0 and a Face­book page called DCLeaks — to pro­mote stolen emails and oth­er doc­u­ments dur­ing the pres­i­den­tial race. Face­book offi­cials once again con­tact­ed the FBI to share what they had seen.
    ...

    So Facebook allegedly detected APT28/Fancy Bear activity in the spring of 2016. It's unclear how Facebook knew these were APT28/Fancy Bear hackers, and unclear how it was tracking their activity. And then it discovered these APT28 hackers setting up pages for Guccifer 2.0 and DCLeaks. And as we saw in the above article, the team also found messages from these accounts to journalists discussing the emails.

    It’s a remarkable story, in part because it’s almost never told. We learn that Facebook apparently has the ability to track exactly the same Russian hacker group that’s accused of carrying out these hacks, and we learn that Facebook watched these same hackers set up the Facebook pages for Guccifer 2.0 and DC Leaks. And yet this is almost never mentioned as evidence that Russian government hackers were indeed behind the hacks. Thus far, the attribution of these hacks to APT28/Fancy Bear has relied on CrowdStrike, the US government, and the direct investigation of the hacked Democratic Party servers. But here we’re learning that Facebook apparently has its own pool of evidence that can tie APT28 to Facebook accounts set up for Guccifer 2.0 and DCLeaks. A pool of evidence that’s almost never mentioned.

    And, again, as we saw in the above article, Facebook’s chief of security, Alex Stamos, was alarmed in December of 2016 that Mark Zuckerberg and Sheryl Sandberg didn’t know about the findings of his team looking into this alleged ‘Russian’ activity. So Facebook discovered Guccifer 2.0 and DCLeaks accounts getting set up, and Zuckerberg and Sandberg didn’t know or care about this during the 2016 election season. It all highlights one of the meta-problems facing Facebook. A meta-problem we saw on display with the Cambridge Analytica scandal and the charges by former executive Sandy Parakilas that Facebook’s management warned him not to look into problems because they determined that knowing about a problem could make the company liable if the problem is exposed. So it’s a meta-problem of an apparent desire of top management to not face problems. Or at least to pretend not to face problems while knowingly ignoring them and then unleashing companies like Definers Public Affairs to clean up the mess after the fact.

    And in related news, both Zuckerberg and Sandberg claim they had no idea who at Facebook hired Definers, or even that the company had hired the firm at all, until that New York Times report. In other words, Facebook’s upper management is claiming they had no idea about this latest scandal. Of course.

    Posted by Pterrafractyl | November 19, 2018, 5:02 pm
  4. Now that the UK parliament’s seizure of internal Facebook documents from the Six4Three lawsuit threatens to expose what Six4Three argues was an app developer extortion scheme personally managed by Mark Zuckerberg (a bait-and-switch scheme that enticed app developers with offers of a wealth of access to user information and then extorted the most successful apps with threats of cutting off that access unless they gave Facebook a bigger cut of their profits), the question of just how many high-level Facebook scandals have yet to be revealed to the public is much more topical. Because based on what we know so far about Facebook’s out-of-control behavior, which appears to have been sanctioned by the company’s executives, there’s no reason to assume there isn’t plenty of scandalous behavior yet to be revealed.

    So in the spirit of speculating about just how corrupt Mark Zuckerberg might truly be, here’s an article that gives us some insight into the kinds of historic figures Zuckerberg spends time thinking about. Surprise! He really looks up to Caesar Augustus, the Roman emperor who took “a really harsh approach” and “had to do certain things” to achieve his grand goals:

    The Guardian

    What’s behind Mark Zucker­berg’s man-crush on Emper­or Augus­tus?

    Char­lotte Hig­gins
    Wed 12 Sep 2018 11.45 EDT
    Last mod­i­fied on Thu 13 Sep 2018 05.23 EDT

    The Face­book founder’s bro­man­tic hero was a can­ny oper­a­tor who was obsessed with pow­er and over­rode democ­ra­cy

    Pow­er­ful men do love a tran­shis­tor­i­cal man-crush – fix­at­ing on an ances­tor fig­ure, who can be ven­er­at­ed, per­haps sur­passed. Facebook’s Mark Zucker­berg has told the New York­er about his par­tic­u­lar fas­ci­na­tion with the Roman emper­or, Augus­tus – he and his wife, Priscil­la Chan, have even called one of their chil­dren August.

    “Basi­cal­ly, through a real­ly harsh approach, he estab­lished 200 years of world peace,” Zucker­berg explained. He pon­dered, “What are the trade-offs in that? On the one hand, world peace is a long-term goal that peo­ple talk about today ...” On the oth­er hand, he said, “that didn’t come for free, and he had to do cer­tain things”.

    Zucker­berg loved Latin at school (“very much like cod­ing”, he said). His sis­ter, Don­na, got her clas­sics PhD at Prince­ton, is edi­tor of the excel­lent Eidolon online clas­sics mag­a­zine, and has just writ­ten a book on how “alt-right”, misog­y­nist online com­mu­ni­ties invoke clas­si­cal his­to­ry.

    I’m not sure whether the appeal­ing clas­sics nerdi­ness of Zuckerberg’s back­ground makes his san­guine euphemisms more or less alarm­ing. “He had to do cer­tain things” and “a real­ly harsh approach” are, let’s say, a relaxed way of describ­ing Augus­tus’ bru­tal and sys­tem­at­ic elim­i­na­tion of polit­i­cal oppo­nents. And “200 years of world peace”? Well yes, if that’s what you want to call cen­turies of bru­tal con­quest. Even the Roman his­to­ri­an Tac­i­tus had some­thing to say about that: “soli­tudinem faci­unt, pacem appel­lant”. They make a desert and call it peace.

    ...

    It’s true that his reign has been recon­sid­ered time and again: it is one of those extra­or­di­nary junc­tions in his­to­ry – when Rome’s repub­lic teetered, crum­bled, and reformed as the empire – that looks dif­fer­ent depend­ing on the moment from which he is exam­ined. It is per­fect­ly true to say that Augus­tus end­ed the civ­il strife that over­whelmed Rome in the late first cen­tu­ry BC, and ush­ered in a peri­od of sta­bil­i­ty and, in some ways, renew­al, by the time of his death in 14 AD. That’s how I was taught about Augus­tus at school, I sus­pect not unco­in­ci­den­tal­ly by some­one brought up dur­ing the sec­ond world war. But in 1939 Ronald Syme had pub­lished his bril­liant account of the peri­od, The Roman Rev­o­lu­tion – a rev­o­lu­tion­ary book in itself, chal­leng­ing Augustus’s then large­ly pos­i­tive rep­u­ta­tion by por­tray­ing him as a sin­is­ter fig­ure who emerged on the tides of his­to­ry out of the increas­ing­ly ungovern­able Roman repub­lic, to wield auto­crat­ic pow­er.

    Part of the fas­ci­na­tion of the man is that he was a mas­ter of pro­pa­gan­da and a superb polit­i­cal oper­a­tor. In our own era of obfus­ca­tion, deceit and fake news it’s inter­est­ing to try to unpick what was real­ly going on. Take his brief auto­bi­og­ra­phy, Res Ges­tae Divi Augusti. (Things Done By the Dei­fied Augus­tus – no mess­ing about here, title-wise).

    The text, while heavy­go­ing, is a fas­ci­nat­ing doc­u­ment, list­ing his polit­i­cal appoint­ments, his mil­i­tary achieve­ments, the infra­struc­ture projects he fund­ed. But it can, with oth­er con­tem­po­rary evi­dence, also be inter­pret­ed as a por­trait of a man who insti­tut­ed an autoc­ra­cy that clev­er­ly mim­ic­ked the forms and tra­di­tions of Rome’s qua­si-demo­c­ra­t­ic repub­lic.

    Under the guise of restor­ing Rome to great­ness, he hol­lowed out its con­sti­tu­tion and loaded pow­er into his own hands. Some­thing there for Zucker­berg to think about, per­haps. Par­tic­u­lar­ly con­sid­er­ing the New Yorker’s head­line for its pro­file: “Can Mark Zucker­berg fix Face­book before it breaks democ­ra­cy?”

    ———-

    “What’s behind Mark Zucker­berg’s man-crush on Emper­or Augus­tus?” by Char­lotte Hig­gins; The Guardian; 09/12/2018

    “Pow­er­ful men do love a tran­shis­tor­i­cal man-crush – fix­at­ing on an ances­tor fig­ure, who can be ven­er­at­ed, per­haps sur­passed. Facebook’s Mark Zucker­berg has told the New York­er about his par­tic­u­lar fas­ci­na­tion with the Roman emper­or, Augus­tus – he and his wife, Priscil­la Chan, have even called one of their chil­dren August.”

    He lit­er­al­ly named his daugh­ter after the Roman emper­or. That hints at more than just a casu­al his­tor­i­cal inter­est.

    So what is it about Cae­sar Augus­tus’s rule that Zucker­berg is so enam­ored with? Well, based on Zucker­berg’s own words, it sounds like it was the way Augus­tus took a “real­ly harsh approach” to mak­ing deci­sions with dif­fi­cult trade-offs in order to achieve Pax Romana, 200 years of peace for the Roman empire:

    ...
    “Basi­cal­ly, through a real­ly harsh approach, he estab­lished 200 years of world peace,” Zucker­berg explained. He pon­dered, “What are the trade-offs in that? On the one hand, world peace is a long-term goal that peo­ple talk about today ...” On the oth­er hand, he said, “that didn’t come for free, and he had to do cer­tain things”.
    ...

    And while focusing on the 200 years of peace puts an obsession with Augustus in the most positive possible light, it’s hard to ignore the fact that Augustus was still a master of propaganda and the man who oversaw the end of the Roman Republic and the imposition of an imperial model of government:

    ...
    Part of the fas­ci­na­tion of the man is that he was a mas­ter of pro­pa­gan­da and a superb polit­i­cal oper­a­tor. In our own era of obfus­ca­tion, deceit and fake news it’s inter­est­ing to try to unpick what was real­ly going on. Take his brief auto­bi­og­ra­phy, Res Ges­tae Divi Augusti. (Things Done By the Dei­fied Augus­tus – no mess­ing about here, title-wise).

    The text, while heavy­go­ing, is a fas­ci­nat­ing doc­u­ment, list­ing his polit­i­cal appoint­ments, his mil­i­tary achieve­ments, the infra­struc­ture projects he fund­ed. But it can, with oth­er con­tem­po­rary evi­dence, also be inter­pret­ed as a por­trait of a man who insti­tut­ed an autoc­ra­cy that clev­er­ly mim­ic­ked the forms and tra­di­tions of Rome’s qua­si-demo­c­ra­t­ic repub­lic.

    Under the guise of restor­ing Rome to great­ness, he hol­lowed out its con­sti­tu­tion and loaded pow­er into his own hands. Some­thing there for Zucker­berg to think about, per­haps. Par­tic­u­lar­ly con­sid­er­ing the New Yorker’s head­line for its pro­file: “Can Mark Zucker­berg fix Face­book before it breaks democ­ra­cy?”

    And that’s a lit­tle peek into Mark Zucker­berg’s mind that gives us a sense of what he spends time think­ing about: his­toric fig­ures who did a lot of harsh things to achieve his­toric ‘great­ness’. That’s not a scary red flag or any­thing.

    Posted by Pterrafractyl | November 26, 2018, 12:43 pm
  5. Here’s a new rea­son to hate Face­book: if you hate Face­book on Face­book, Face­book might put you on its “Be on the look­out” (BOLO) list and start using its loca­tion track­ing tech­nol­o­gy to track your loca­tion. That’s accord­ing to a new report based on a num­ber of cur­rent and for­mer Face­book employ­ees who dis­cussed how the com­pa­ny’s BOLO list pol­i­cy works. And accord­ing to secu­ri­ty experts, while Face­book isn’t unique in hav­ing a BOLO list for com­pa­ny threats, it is high­ly unusu­al in that it can use its own tech­nol­o­gy to track the peo­ple on the BOLO list. Face­book can track BOLO users’ loca­tions using their IP address or the smart­phone’s loca­tion data col­lect­ed through the Face­book app.

    So how does one end up on this BOLO list? Well, there are the reasonable ways, like if someone posts a specific threat against Facebook or one of its employees on one of Facebook’s social media platforms. But it sounds like the standards are a lot more subjective, and people are placed on the BOLO list simply for posting things like “F— you, Mark,” or “F— Facebook.” Another group routinely put on the list is former employees and contractors. Again, it doesn’t sound like it takes much to get on the list. Simply getting emotional if your contract isn’t extended appears to be enough. Given those standards, it’s almost surprising that the BOLO list reportedly contains only hundreds of people and not thousands:

    CNBC

    Face­book uses its apps to track users it thinks could threat­en employ­ees and offices

    * Face­book main­tains a list of indi­vid­u­als that its secu­ri­ty guards must “be on look­out” for that is com­prised of users who’ve made threat­en­ing state­ments against the com­pa­ny on its social net­work as well as numer­ous for­mer employ­ees.
    * The com­pa­ny’s infor­ma­tion secu­ri­ty team is capa­ble of track­ing these indi­vid­u­als’ where­abouts using the loca­tion data they pro­vide through Face­book’s apps and web­sites.
    * More than a dozen for­mer Face­book secu­ri­ty employ­ees described the com­pa­ny’s tac­tics to CNBC, with sev­er­al ques­tion­ing the ethics of the com­pa­ny’s prac­tices.

    Sal­vador Rodriguez
    Published 02/17/2019

    In ear­ly 2018, a Face­book user made a pub­lic threat on the social net­work against one of the com­pa­ny’s offices in Europe.

    Face­book picked up the threat, pulled the user’s data and deter­mined he was in the same coun­try as the office he was tar­get­ing. The com­pa­ny informed the author­i­ties about the threat and direct­ed its secu­ri­ty offi­cers to be on the look­out for the user.

    “He made a veiled threat that ‘Tomor­row every­one is going to pay’ or some­thing to that effect,” a for­mer Face­book secu­ri­ty employ­ee told CNBC.

    The inci­dent is rep­re­sen­ta­tive of the steps Face­book takes to keep its offices, exec­u­tives and employ­ees pro­tect­ed, accord­ing to more than a dozen for­mer Face­book employ­ees who spoke with CNBC. The com­pa­ny mines its social net­work for threat­en­ing com­ments, and in some cas­es uses its prod­ucts to track the loca­tion of peo­ple it believes present a cred­i­ble threat.

    Sev­er­al of the for­mer employ­ees ques­tioned the ethics of Face­book’s secu­ri­ty strate­gies, with one of them call­ing the tac­tics “very Big Broth­er-esque.”

    Oth­er for­mer employ­ees argue these secu­ri­ty mea­sures are jus­ti­fied by Face­book’s reach and the intense emo­tions it can inspire. The com­pa­ny has 2.7 bil­lion users across its ser­vices. That means that if just 0.01 per­cent of users make a threat, Face­book is still deal­ing with 270,000 poten­tial secu­ri­ty risks.

    “Our phys­i­cal secu­ri­ty team exists to keep Face­book employ­ees safe,” a Face­book spokesman said in a state­ment. “They use indus­try-stan­dard mea­sures to assess and address cred­i­ble threats of vio­lence against our employ­ees and our com­pa­ny, and refer these threats to law enforce­ment when nec­es­sary. We have strict process­es designed to pro­tect peo­ple’s pri­va­cy and adhere to all data pri­va­cy laws and Face­book’s terms of ser­vice. Any sug­ges­tion our onsite phys­i­cal secu­ri­ty team has over­stepped is absolute­ly false.”

    Face­book is unique in the way it uses its own prod­uct to mine data for threats and loca­tions of poten­tial­ly dan­ger­ous indi­vid­u­als, said Tim Bradley, senior con­sul­tant with Inci­dent Man­age­ment Group, a cor­po­rate secu­ri­ty con­sult­ing firm that deals with employ­ee safe­ty issues. How­ev­er, the Occu­pa­tion­al Safe­ty and Health Admin­is­tra­tion’s gen­er­al duty clause says that com­pa­nies have to pro­vide their employ­ees with a work­place free of haz­ards that could cause death or seri­ous phys­i­cal harm, Bradley said.

    “If they know there’s a threat against them, they have to take steps,” Bradley said. “How they got the infor­ma­tion is sec­ondary to the fact that they have a duty to pro­tect employ­ees.”

    Mak­ing the list

    One of the tools Face­book uses to mon­i­tor threats is a “be on look­out” or “BOLO” list, which is updat­ed approx­i­mate­ly once a week. The list was cre­at­ed in 2008, an ear­ly employ­ee in Face­book’s phys­i­cal secu­ri­ty group told CNBC. It now con­tains hun­dreds of peo­ple, accord­ing to four for­mer Face­book secu­ri­ty employ­ees who have left the com­pa­ny since 2016.

    Face­book noti­fies its secu­ri­ty pro­fes­sion­als any­time a new per­son is added to the BOLO list, send­ing out a report that includes infor­ma­tion about the per­son, such as their name, pho­to, their gen­er­al loca­tion and a short descrip­tion of why they were added.

    In recent years, the secu­ri­ty team even had a large mon­i­tor that dis­played the faces of peo­ple on the list, accord­ing to a pho­to CNBC has seen and two peo­ple famil­iar, although Face­book says it no longer oper­ates this mon­i­tor.

    Oth­er com­pa­nies keep sim­i­lar lists of threats, Bradley and oth­er sources said. But Face­book is unique because it can use its own prod­ucts to iden­ti­fy these threats and track the loca­tion of peo­ple on the list.

    Users who pub­licly threat­en the com­pa­ny, its offices or employ­ees — includ­ing post­ing threat­en­ing com­ments in response to posts from exec­u­tives like CEO Mark Zucker­berg and COO Sheryl Sand­berg — are often added to the list. These users are typ­i­cal­ly described as mak­ing “improp­er com­mu­ni­ca­tion” or “threat­en­ing com­mu­ni­ca­tion,” accord­ing to for­mer employ­ees.

    The bar can be pret­ty low. While some users end up on the list after repeat­ed appear­ances on com­pa­ny prop­er­ty or long email threats, oth­ers might find them­selves on the BOLO list for say­ing some­thing as sim­ple as “F— you, Mark,” “F— Face­book” or “I’m gonna go kick your a–,” accord­ing to a for­mer employ­ee who worked with the exec­u­tive pro­tec­tion team. A dif­fer­ent for­mer employ­ee who was on the com­pa­ny’s secu­ri­ty team said there were no clear­ly com­mu­ni­cat­ed stan­dards to deter­mine what kinds of actions could land some­body on the list, and that deci­sions were often made on a case-by-case basis.

    The Face­book spokesman dis­put­ed this, say­ing that peo­ple were only added after a “rig­or­ous review to deter­mine the valid­i­ty of the threat.”

    Awk­ward sit­u­a­tions

    Most peo­ple on the list do not know they’re on it. This some­times leads to tense sit­u­a­tions.

    Sev­er­al years ago, one Face­book user dis­cov­ered he was on the BOLO list when he showed up to Face­book’s Men­lo Park cam­pus for lunch with a friend who worked there, accord­ing to a for­mer employ­ee who wit­nessed the inci­dent.

    The user checked in with secu­ri­ty to reg­is­ter as a guest. His name popped up right away, alert­ing secu­ri­ty. He was on the list. His issue had to do with mes­sages he had sent to Zucker­berg, accord­ing to a per­son famil­iar with the cir­cum­stances.

    Soon, more secu­ri­ty guards showed up in the entrance area where the guest had tried to reg­is­ter. No one grabbed the indi­vid­ual, but secu­ri­ty guards stood at his sides and at each of the doors lead­ing in and out of that entrance area.

    Even­tu­al­ly, the employ­ee showed up mad and demand­ed that his friend be removed from the BOLO list. After the employ­ee met with Face­book’s glob­al secu­ri­ty intel­li­gence and inves­ti­ga­tions team, the friend was removed from the list — a rare occur­rence.

    “No per­son would be on BOLO with­out cred­i­ble cause,” the Face­book spokesman said in regard to this inci­dent.

    It’s not just users who find them­selves on Face­book’s BOLO list. Many of the peo­ple on the list are for­mer Face­book employ­ees and con­trac­tors, whose col­leagues ask to add them when they leave the com­pa­ny.

    Some for­mer employ­ees are list­ed for hav­ing a track record of poor behav­ior, such as steal­ing com­pa­ny equip­ment. But in many cas­es, there is no rea­son list­ed on the BOLO descrip­tion. Three peo­ple famil­iar said that almost every Face­book employ­ee who gets fired is added to the list, and one called the process “real­ly sub­jec­tive.” Anoth­er said that con­trac­tors are added if they get emo­tion­al when their con­tracts are not extend­ed.

    The Face­book spokesman coun­tered that the process is more rig­or­ous than these peo­ple claim. “For­mer employ­ees are only added under very spe­cif­ic cir­cum­stances, after review by legal and HR, includ­ing threats of vio­lence or harass­ment.”

    The prac­tice of adding for­mer employ­ees to the BOLO list has occa­sion­al­ly cre­at­ed awk­ward sit­u­a­tions for the com­pa­ny’s recruiters, who often reach out to for­mer employ­ees to fill open­ings. Ex-employ­ees have showed up for job inter­views only to find out that they could­n’t enter because they were on the BOLO list, said a for­mer secu­ri­ty employ­ee who left the com­pa­ny last year.

    “It becomes a whole big embar­rass­ing sit­u­a­tion,” this per­son said.

    Tracked by spe­cial request

    Face­book has the capa­bil­i­ty to track BOLO users’ where­abouts by using their smart­phone’s loca­tion data col­lect­ed through the Face­book app, or their IP address col­lect­ed through the com­pa­ny’s web­site.

    Face­book only tracks BOLO-list­ed users when their threats are deemed cred­i­ble, accord­ing to a for­mer employ­ee with first­hand knowl­edge of the com­pa­ny’s secu­ri­ty pro­ce­dures. This could include a detailed threat with an exact loca­tion and tim­ing of an attack, or a threat from an indi­vid­ual who makes a habit of attend­ing com­pa­ny events, such as the Face­book share­hold­ers’ meet­ing. This for­mer employ­ee empha­sized Face­book could not look up users’ loca­tions with­out cause.

    When a cred­i­ble threat is detect­ed, the glob­al secu­ri­ty oper­a­tions cen­ter and the glob­al secu­ri­ty intel­li­gence and inves­ti­ga­tions units make a spe­cial request to the com­pa­ny’s infor­ma­tion secu­ri­ty team, which has the capa­bil­i­ties to track users’ loca­tion infor­ma­tion. In some cas­es, the track­ing does­n’t go very far — for instance, if a BOLO user made a threat about a spe­cif­ic loca­tion but their cur­rent loca­tion shows them nowhere close, the track­ing might end there.

    But if the BOLO user is near­by, the infor­ma­tion secu­ri­ty team can con­tin­ue to mon­i­tor their loca­tion peri­od­i­cal­ly and keep oth­er secu­ri­ty teams on alert.

    Depend­ing on the threat, Face­book’s secu­ri­ty teams can take oth­er actions, such as sta­tion­ing secu­ri­ty guards, escort­ing a BOLO user off cam­pus or alert­ing law enforce­ment.

    Face­book’s infor­ma­tion secu­ri­ty team has tracked users’ loca­tions in oth­er safe­ty-relat­ed instances, too.

    In 2017, a Face­book man­ag­er alert­ed the com­pa­ny’s secu­ri­ty teams when a group of interns she was man­ag­ing did not log into the com­pa­ny’s sys­tems to work from home. They had been on a camp­ing trip, accord­ing to a for­mer Face­book secu­ri­ty employ­ee, and the man­ag­er was con­cerned about their safe­ty.

    Face­book’s infor­ma­tion secu­ri­ty team became involved in the sit­u­a­tion and used the interns’ loca­tion data to try and find out if they were safe. “They call it ‘ping­ing them’, ping­ing their Face­book accounts,” the for­mer secu­ri­ty employ­ee recalled.

    After the loca­tion data did not turn up any­thing use­ful, the infor­ma­tion secu­ri­ty team then kept dig­ging and learned that the interns had exchanged mes­sages sug­gest­ing they nev­er intend­ed to come into work that day — essen­tial­ly, they had lied to the man­ag­er. The infor­ma­tion secu­ri­ty team gave the man­ag­er a sum­ma­ry of what they had found.

    “There was legit con­cern about the safe­ty of these indi­vid­u­als,” the Face­book spokesman said. “In each iso­lat­ed case, these employ­ees were unre­spon­sive on all com­mu­ni­ca­tion chan­nels. There’s a set of pro­to­cols guid­ing when and how we access employ­ee data when an employ­ee goes miss­ing.”

    ...

    ———-

    “Face­book uses its apps to track users it thinks could threat­en employ­ees and offices” by Sal­vador Rodriguez; CNBC; 02/17/2019

    “Several of the former employees questioned the ethics of Facebook’s security strategies, with one of them calling the tactics ‘very Big Brother-esque.’”

    Yeah, “very Big Broth­er-esque” sounds like a pret­ty good descrip­tion of the sit­u­a­tion. In part because Face­book is doing the track­ing with its own tech­nol­o­gy:

    ...
    Face­book is unique in the way it uses its own prod­uct to mine data for threats and loca­tions of poten­tial­ly dan­ger­ous indi­vid­u­als, said Tim Bradley, senior con­sul­tant with Inci­dent Man­age­ment Group, a cor­po­rate secu­ri­ty con­sult­ing firm that deals with employ­ee safe­ty issues. How­ev­er, the Occu­pa­tion­al Safe­ty and Health Admin­is­tra­tion’s gen­er­al duty clause says that com­pa­nies have to pro­vide their employ­ees with a work­place free of haz­ards that could cause death or seri­ous phys­i­cal harm, Bradley said.

    “If they know there’s a threat against them, they have to take steps,” Bradley said. “How they got the infor­ma­tion is sec­ondary to the fact that they have a duty to pro­tect employ­ees.”

    ...

    Oth­er com­pa­nies keep sim­i­lar lists of threats, Bradley and oth­er sources said. But Face­book is unique because it can use its own prod­ucts to iden­ti­fy these threats and track the loca­tion of peo­ple on the list.

    ...

    Tracked by spe­cial request

    Face­book has the capa­bil­i­ty to track BOLO users’ where­abouts by using their smart­phone’s loca­tion data col­lect­ed through the Face­book app, or their IP address col­lect­ed through the com­pa­ny’s web­site.

    Face­book only tracks BOLO-list­ed users when their threats are deemed cred­i­ble, accord­ing to a for­mer employ­ee with first­hand knowl­edge of the com­pa­ny’s secu­ri­ty pro­ce­dures. This could include a detailed threat with an exact loca­tion and tim­ing of an attack, or a threat from an indi­vid­ual who makes a habit of attend­ing com­pa­ny events, such as the Face­book share­hold­ers’ meet­ing. This for­mer employ­ee empha­sized Face­book could not look up users’ loca­tions with­out cause.

    When a cred­i­ble threat is detect­ed, the glob­al secu­ri­ty oper­a­tions cen­ter and the glob­al secu­ri­ty intel­li­gence and inves­ti­ga­tions units make a spe­cial request to the com­pa­ny’s infor­ma­tion secu­ri­ty team, which has the capa­bil­i­ties to track users’ loca­tion infor­ma­tion. In some cas­es, the track­ing does­n’t go very far — for instance, if a BOLO user made a threat about a spe­cif­ic loca­tion but their cur­rent loca­tion shows them nowhere close, the track­ing might end there.

    But if the BOLO user is near­by, the infor­ma­tion secu­ri­ty team can con­tin­ue to mon­i­tor their loca­tion peri­od­i­cal­ly and keep oth­er secu­ri­ty teams on alert.

    Depend­ing on the threat, Face­book’s secu­ri­ty teams can take oth­er actions, such as sta­tion­ing secu­ri­ty guards, escort­ing a BOLO user off cam­pus or alert­ing law enforce­ment.
    ...
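    The IP-address half of that capability is worth dwelling on, because turning an IP address into an approximate location is trivial with off-the-shelf tools. Here’s a minimal sketch using the publicly available MaxMind GeoLite2 database and the geoip2 Python package. It’s purely an illustration of the general technique, not Facebook’s internal tooling, and the database path and IP address are placeholders:

    ```python
    # Illustrative only: coarse IP-to-location lookup with a public GeoIP database.
    # Assumes GeoLite2-City.mmdb has been downloaded from MaxMind and that the
    # geoip2 package is installed (pip install geoip2). Not Facebook code.
    import geoip2.database

    def approximate_location(ip_address, db_path="GeoLite2-City.mmdb"):
        """Return a rough (city, country, latitude, longitude) guess for an IP."""
        with geoip2.database.Reader(db_path) as reader:
            record = reader.city(ip_address)
            return (
                record.city.name,
                record.country.name,
                record.location.latitude,
                record.location.longitude,
            )

    # Replace with a real, routable address before running; reserved/example ranges
    # are not in the database and will raise geoip2.errors.AddressNotFoundError.
    # print(approximate_location("203.0.113.7"))
    ```

    A lookup like that is usually only accurate to roughly the city level, which is presumably why the app’s own location data is the more precise of the two sources the article describes.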

    Getting on the list also sounds shockingly easy. A simple “F— you, Mark,” or “F— Facebook” post on Facebook is apparently all it takes. Given that, it’s almost unbelievable that the list contains only hundreds of people. Then again, that “hundreds of people” estimate comes from former security employees who have left the company since 2016, so you have to wonder how much longer the BOLO list might be today given the amount of bad press Facebook has received in the last year alone:

    ...
    Mak­ing the list

    One of the tools Face­book uses to mon­i­tor threats is a “be on look­out” or “BOLO” list, which is updat­ed approx­i­mate­ly once a week. The list was cre­at­ed in 2008, an ear­ly employ­ee in Face­book’s phys­i­cal secu­ri­ty group told CNBC. It now con­tains hun­dreds of peo­ple, accord­ing to four for­mer Face­book secu­ri­ty employ­ees who have left the com­pa­ny since 2016.

    Face­book noti­fies its secu­ri­ty pro­fes­sion­als any­time a new per­son is added to the BOLO list, send­ing out a report that includes infor­ma­tion about the per­son, such as their name, pho­to, their gen­er­al loca­tion and a short descrip­tion of why they were added.

    In recent years, the secu­ri­ty team even had a large mon­i­tor that dis­played the faces of peo­ple on the list, accord­ing to a pho­to CNBC has seen and two peo­ple famil­iar, although Face­book says it no longer oper­ates this mon­i­tor.

    ...

    Users who pub­licly threat­en the com­pa­ny, its offices or employ­ees — includ­ing post­ing threat­en­ing com­ments in response to posts from exec­u­tives like CEO Mark Zucker­berg and COO Sheryl Sand­berg — are often added to the list. These users are typ­i­cal­ly described as mak­ing “improp­er com­mu­ni­ca­tion” or “threat­en­ing com­mu­ni­ca­tion,” accord­ing to for­mer employ­ees.

    The bar can be pret­ty low. While some users end up on the list after repeat­ed appear­ances on com­pa­ny prop­er­ty or long email threats, oth­ers might find them­selves on the BOLO list for say­ing some­thing as sim­ple as “F— you, Mark,” “F— Face­book” or “I’m gonna go kick your a–,” accord­ing to a for­mer employ­ee who worked with the exec­u­tive pro­tec­tion team. A dif­fer­ent for­mer employ­ee who was on the com­pa­ny’s secu­ri­ty team said there were no clear­ly com­mu­ni­cat­ed stan­dards to deter­mine what kinds of actions could land some­body on the list, and that deci­sions were often made on a case-by-case basis.

    The Face­book spokesman dis­put­ed this, say­ing that peo­ple were only added after a “rig­or­ous review to deter­mine the valid­i­ty of the threat.”
    ...

    And it sounds like for­mer employ­ees and con­trac­tors can get thrown on the list for basi­cal­ly no rea­son at all. If you’re fired from Face­book, don’t get emo­tion­al. Or the com­pa­ny will track your loca­tion indef­i­nite­ly:

    ...
    Awk­ward sit­u­a­tions

    Most peo­ple on the list do not know they’re on it. This some­times leads to tense sit­u­a­tions.

    Sev­er­al years ago, one Face­book user dis­cov­ered he was on the BOLO list when he showed up to Face­book’s Men­lo Park cam­pus for lunch with a friend who worked there, accord­ing to a for­mer employ­ee who wit­nessed the inci­dent.

    The user checked in with secu­ri­ty to reg­is­ter as a guest. His name popped up right away, alert­ing secu­ri­ty. He was on the list. His issue had to do with mes­sages he had sent to Zucker­berg, accord­ing to a per­son famil­iar with the cir­cum­stances.

    Soon, more secu­ri­ty guards showed up in the entrance area where the guest had tried to reg­is­ter. No one grabbed the indi­vid­ual, but secu­ri­ty guards stood at his sides and at each of the doors lead­ing in and out of that entrance area.

    Even­tu­al­ly, the employ­ee showed up mad and demand­ed that his friend be removed from the BOLO list. After the employ­ee met with Face­book’s glob­al secu­ri­ty intel­li­gence and inves­ti­ga­tions team, the friend was removed from the list — a rare occur­rence.

    “No per­son would be on BOLO with­out cred­i­ble cause,” the Face­book spokesman said in regard to this inci­dent.

    It’s not just users who find them­selves on Face­book’s BOLO list. Many of the peo­ple on the list are for­mer Face­book employ­ees and con­trac­tors, whose col­leagues ask to add them when they leave the com­pa­ny.

    Some for­mer employ­ees are list­ed for hav­ing a track record of poor behav­ior, such as steal­ing com­pa­ny equip­ment. But in many cas­es, there is no rea­son list­ed on the BOLO descrip­tion. Three peo­ple famil­iar said that almost every Face­book employ­ee who gets fired is added to the list, and one called the process “real­ly sub­jec­tive.” Anoth­er said that con­trac­tors are added if they get emo­tion­al when their con­tracts are not extend­ed.

    The Face­book spokesman coun­tered that the process is more rig­or­ous than these peo­ple claim. “For­mer employ­ees are only added under very spe­cif­ic cir­cum­stances, after review by legal and HR, includ­ing threats of vio­lence or harass­ment.”

    The prac­tice of adding for­mer employ­ees to the BOLO list has occa­sion­al­ly cre­at­ed awk­ward sit­u­a­tions for the com­pa­ny’s recruiters, who often reach out to for­mer employ­ees to fill open­ings. Ex-employ­ees have showed up for job inter­views only to find out that they could­n’t enter because they were on the BOLO list, said a for­mer secu­ri­ty employ­ee who left the com­pa­ny last year.

    “It becomes a whole big embar­rass­ing sit­u­a­tion,” this per­son said.
    ...

    And as Facebook itself makes clear with its anecdote about tracking the location of a group of interns after a manager became concerned about their safety on a camping trip, the BOLO list is just one reason the company might decide to track the locations of specific people. Employees being unresponsive to communications is another. Given that Facebook uses its own in-house location-tracking capabilities to do this, there are probably all sorts of other justifications for using the technology:

    ...
    Face­book’s infor­ma­tion secu­ri­ty team has tracked users’ loca­tions in oth­er safe­ty-relat­ed instances, too.

    In 2017, a Face­book man­ag­er alert­ed the com­pa­ny’s secu­ri­ty teams when a group of interns she was man­ag­ing did not log into the com­pa­ny’s sys­tems to work from home. They had been on a camp­ing trip, accord­ing to a for­mer Face­book secu­ri­ty employ­ee, and the man­ag­er was con­cerned about their safe­ty.

    Face­book’s infor­ma­tion secu­ri­ty team became involved in the sit­u­a­tion and used the interns’ loca­tion data to try and find out if they were safe. “They call it ‘ping­ing them’, ping­ing their Face­book accounts,” the for­mer secu­ri­ty employ­ee recalled.

    After the loca­tion data did not turn up any­thing use­ful, the infor­ma­tion secu­ri­ty team then kept dig­ging and learned that the interns had exchanged mes­sages sug­gest­ing they nev­er intend­ed to come into work that day — essen­tial­ly, they had lied to the man­ag­er. The infor­ma­tion secu­ri­ty team gave the man­ag­er a sum­ma­ry of what they had found.

    “There was legit con­cern about the safe­ty of these indi­vid­u­als,” the Face­book spokesman said. “In each iso­lat­ed case, these employ­ees were unre­spon­sive on all com­mu­ni­ca­tion chan­nels. There’s a set of pro­to­cols guid­ing when and how we access employ­ee data when an employ­ee goes miss­ing.”
    ...

    So now you know, if you’re a for­mer Face­book employee/contractor and/or have ever writ­ten a nasty thing about Face­book on Face­book’s plat­forms, Face­book is watch­ing you.

    Of course, Facebook is tracking the locations and everything else it can track about everyone to the greatest extent possible anyway. Tracking everyone is Facebook’s business model. So the distinction is really just whether or not Facebook’s security team is specifically watching you. Facebook the company is watching you whether you’re on the list or not.

    Posted by Pterrafractyl | February 18, 2019, 10:28 am
  6. So remember those reports from 2017 about how Facebook’s ad targeting options allowed advertisers to target ads at Facebook users who had expressed an interest in topics like “Jew hater,” “How to burn jews,” or “History of ‘why jews ruin the world’”? And remember how Facebook explained that this was an accident caused by its algorithms that auto-generate user interest groups, and how the company promised it would have humans reviewing these auto-generated topics going forward? Surprise! It turns out the human reviewers are still allowing ads targeting users interested in topics like “Joseph Goebbels,” “Josef Mengele,” “Heinrich Himmler,” and the neo-nazi punk band Skrewdriver:

    The Los Ange­les Times

    Face­book decid­ed which users are inter­est­ed in Nazis — and let adver­tis­ers tar­get them direct­ly

    By Sam Dean
    Feb 21, 2019 | 5:00 AM

    Face­book makes mon­ey by charg­ing adver­tis­ers to reach just the right audi­ence for their mes­sage — even when that audi­ence is made up of peo­ple inter­est­ed in the per­pe­tra­tors of the Holo­caust or explic­it­ly neo-Nazi music.

    Despite promis­es of greater over­sight fol­low­ing past adver­tis­ing scan­dals, a Times review shows that Face­book has con­tin­ued to allow adver­tis­ers to tar­get hun­dreds of thou­sands of users the social media firm believes are curi­ous about top­ics such as “Joseph Goebbels,” “Josef Men­gele,” “Hein­rich Himm­ler,” the neo-nazi punk band Skrew­driv­er and Ben­i­to Mussolini’s long-defunct Nation­al Fas­cist Par­ty.

    Experts say that this prac­tice runs counter to the company’s stat­ed prin­ci­ples and can help fuel rad­i­cal­iza­tion online.

    “What you’re describ­ing, where a clear hate­ful idea or nar­ra­tive can be ampli­fied to reach more peo­ple, is exact­ly what they said they don’t want to do and what they need to be held account­able for,” said Oren Segal, direc­tor of the Anti-Defama­tion League’s cen­ter on extrem­ism.

    After being con­tact­ed by The Times, Face­book said that it would remove many of the audi­ence group­ings from its ad plat­form.

    “Most of these tar­get­ing options are against our poli­cies and should have been caught and removed soon­er,” said Face­book spokesman Joe Osborne. “While we have an ongo­ing review of our tar­get­ing options, we clear­ly need to do more, so we’re tak­ing a broad­er look at our poli­cies and detec­tion meth­ods.”

    Approved by Face­book

    Facebook’s broad reach and sophis­ti­cat­ed adver­tis­ing tools brought in a record $55 bil­lion in ad rev­enue in 2018.

    Prof­it mar­gins stayed above 40%, thanks to a high degree of automa­tion, with algo­rithms sort­ing users into mar­ketable sub­sets based on their behav­ior — then choos­ing which ads to show them.

    But the lack of human over­sight has also brought the com­pa­ny con­tro­ver­sy.

    In 2017, Pro Pub­li­ca found that the com­pa­ny sold ads based on any user-gen­er­at­ed phrase, includ­ing “Jew hater” and “Hitler did noth­ing wrong.” Fol­low­ing the mur­der of 11 con­gre­gants at a syn­a­gogue in Pitts­burgh in 2018, the Inter­cept found that Face­book gave adver­tis­ers the abil­i­ty to tar­get users inter­est­ed in the anti-Semit­ic “white geno­cide con­spir­a­cy the­o­ry,” which the sus­pect­ed killer cit­ed as inspi­ra­tion before the attacks.

    This month, the Guardian high­light­ed the ways that YouTube and Face­book boost anti-vac­cine con­spir­a­cy the­o­ries, lead­ing Rep. Adam Schiff (D‑Burbank) to ques­tion whether the com­pa­ny was pro­mot­ing mis­in­for­ma­tion.

    Face­book has promised since 2017 that humans review every ad tar­get­ing cat­e­go­ry. It announced last fall the removal of 5,000 audi­ence cat­e­gories that risked enabling abuse or dis­crim­i­na­tion.

    The Times decid­ed to test the effec­tive­ness of the company’s efforts by see­ing if Face­book would allow the sale of ads direct­ed to cer­tain seg­ments of users.

    Face­book allowed The Times to tar­get ads to users Face­book has deter­mined are inter­est­ed in Goebbels, the Third Reich’s chief pro­pa­gan­dist, Himm­ler, the archi­tect of the Holo­caust and leader of the SS, and Men­gele, the infa­mous con­cen­tra­tion camp doc­tor who per­formed human exper­i­ments on pris­on­ers. Each cat­e­go­ry includ­ed hun­dreds of thou­sands of users.

    The com­pa­ny also approved an ad tar­get­ed to fans of Skrew­driv­er, a noto­ri­ous white suprema­cist punk band — and auto­mat­i­cal­ly sug­gest­ed a series of top­ics relat­ed to Euro­pean far-right move­ments to bol­ster the ad’s reach.

    Col­lec­tive­ly, the ads were seen by 4,153 users in 24 hours, with The Times pay­ing only $25 to fuel the push.

    Face­book admits its human mod­er­a­tors should have removed the Nazi-affil­i­at­ed demo­graph­ic cat­e­gories. But it says the “ads” them­selves — which con­sist­ed of the word “test” or The Times’ logo and linked back to the newspaper’s home­page — would not have raised red flags for the sep­a­rate team that looks over ad con­tent.

    Upon review, the com­pa­ny said the ad cat­e­gories were sel­dom used. The few ads pur­chased linked to his­tor­i­cal con­tent, Face­book said, but the com­pa­ny would not pro­vide more detail on their ori­gin.

    ‘Why is it my job to police their plat­form?’

    The Times was tipped off by a Los Ange­les musi­cian who asked to remain anony­mous for fear of retal­i­a­tion from hate groups.

    Ear­li­er this year, he tried to pro­mote a con­cert fea­tur­ing his hard­core punk group and a black met­al band on Face­book. When he typed “black met­al” into Facebook’s ad por­tal, he said he was dis­turbed to dis­cov­er that the com­pa­ny sug­gest­ed he also pay to tar­get users inter­est­ed in “Nation­al Social­ist black met­al” — a poten­tial audi­ence num­ber­ing in the hun­dreds of thou­sands.

    The punk and metal music scenes, and black metal in particular, have long grappled with white supremacist undercurrents. Black metal grew out of the early Norwegian metal scene, which saw prominent members convicted of burning down churches, murdering fellow musicians and plotting bombings. Some bands and their fans have since combined anti-Semitism, neo-paganism, and the promotion of violence into the distinct subgenre of National Socialist black metal, which the Southern Poverty Law Center described as a dangerous white supremacist recruiting tool nearly 20 years ago.

    But punk and met­al fans have long pushed back against hate. In 1981, the Dead Kennedys released “Nazi Punks F— Off”; last month 15 met­al bands played at an anti-fas­cist fes­ti­val in Brook­lyn.

    The musi­cian saw him­self as a part of that same tra­di­tion.

    ...

    Face­book sub­se­quent­ly removed the group­ing from the plat­form, but the musi­cian remains incred­u­lous that “Nation­al Social­ist black met­al” was a cat­e­go­ry in the first place — let alone one the com­pa­ny specif­i­cal­ly prompt­ed him to pur­sue.

    “Why is it my job to police their plat­form?” he said.

    A rab­bit hole of hate

    After review­ing screen­shots ver­i­fy­ing the musician’s sto­ry, The Times inves­ti­gat­ed whether Face­book would allow adver­tis­ers to tar­get explic­it­ly neo-Nazi bands or oth­er terms asso­ci­at­ed with hate groups.

    We start­ed with Skrew­driv­er, a British band with a song called “White Pow­er” and an album named after a Hitler Youth mot­to. Since the band only had 2,120 users iden­ti­fied as fans, Face­book informed us that we would need to add more tar­get demo­graph­ics to pub­lish the ad.

    The prompt led us down a rab­bit hole of terms it thought were relat­ed to white suprema­cist ide­ol­o­gy.

    First, it rec­om­mend­ed “Thor Steinar,” a cloth­ing brand that has been out­lawed in the Ger­man par­lia­ment for its asso­ci­a­tion with neo-Nazism. Then, it rec­om­mend­ed “NPD Group,” the name of both a promi­nent Amer­i­can mar­ket research firm and a far-right Ger­man polit­i­cal par­ty asso­ci­at­ed with neo-Nazism. Among the next rec­om­mend­ed terms were “Flüchtlinge,” the Ger­man word for “refugees,” and “Nation­al­ism.”

    Face­book said the cat­e­gories “Flüchtlinge,” “Nation­al­ism,” and “NPD Group” are in line with its poli­cies and will not be removed despite appear­ing as auto-sug­ges­tions fol­low­ing neo-Nazi terms. (Face­book said it had found that the users inter­est­ed in NPD Group were actu­al­ly inter­est­ed in the Amer­i­can mar­ket research firm.)

    In the wake of past con­tro­ver­sies, Face­book has blocked ads aimed at those inter­est­ed in the most obvi­ous terms affil­i­at­ed with hate groups. “Nazi,” “Hitler,” “white suprema­cy” and “Holo­caust” all yield noth­ing in the ad plat­form. But adver­tis­ers could tar­get more than a mil­lion users with inter­est in Goebbels or the Nation­al Fas­cist Par­ty, which dis­solved in 1943. Himm­ler had near­ly 95,000 con­stituents. Men­gele had 117,150 inter­est­ed users — a num­ber that increased over the dura­tion of our report­ing, to 127,010.

    Face­book said these cat­e­gories were auto­mat­i­cal­ly gen­er­at­ed based on user activ­i­ty — lik­ing or com­ment­ing on ads, or join­ing cer­tain groups. But it would not pro­vide spe­cif­ic details about how it deter­mined a user’s inter­est in top­ics linked to Nazis.

    ‘Expand­ing the orbit’

    The ads end­ed up being served with­in Instant Arti­cles — which are host­ed with­in Face­book, rather than link­ing out to a publisher’s own web­site — pub­lished by the Face­book pages of a wide swath of media out­lets.

    These includ­ed arti­cles by the Dai­ly Wire, CNN, Huff­Post, Moth­er Jones, Bre­it­bart, the BBC and ABC News. They also includ­ed arti­cles by viral pages with names like Pup­per Dog­go, I Love Movies and Right Health Today — a seem­ing­ly defunct media com­pa­ny whose only Face­book post was a link to a now-delet­ed arti­cle titled “What Is The Ben­e­fits Of Eat­ing Apple Every­day.”

    Segal, the ADL direc­tor, said Face­book might wind up fuel­ing the recruit­ment of new extrem­ists by serv­ing up such ads on the types of pages an ordi­nary news read­er might vis­it.

    “Being able to reach so many peo­ple with extrem­ist con­tent, exist­ing lit­er­al­ly in the same space as legit­i­mate news or non-hate­ful con­tent, is the biggest dan­ger,” he said. “What you’re doing is expand­ing the orbit.”

    ...

    ————-

    “Face­book decid­ed which users are inter­est­ed in Nazis — and let adver­tis­ers tar­get them direct­ly” By Sam Dean; The Los Ange­les Times; 02/21/2019

    “Despite promises of greater oversight following past advertising scandals, a Times review shows that Facebook has continued to allow advertisers to target hundreds of thousands of users the social media firm believes are curious about topics such as ‘Joseph Goebbels,’ ‘Josef Mengele,’ ‘Heinrich Himmler,’ the neo-nazi punk band Skrewdriver and Benito Mussolini’s long-defunct National Fascist Party.”

    Yes, despite Face­book’s promis­es of greater over­sight fol­low­ing the pre­vi­ous reports of Nazi ad tar­get­ing cat­e­gories, the Nazi ad tar­get­ing con­tin­ues. And these ad cat­e­gories don’t have just a hand­ful of Face­book users. Each of the cat­e­gories the LA Times test­ed had hun­dreds of thou­sands of users. And with just a $25 pur­chase, over 4,000 users saw the test ad in 24 hours, demon­strat­ing that Face­book remains a remark­ably cost-effec­tive plat­form for direct­ly reach­ing out to peo­ple with Nazi sym­pa­thies:

    ...
    The Times decid­ed to test the effec­tive­ness of the company’s efforts by see­ing if Face­book would allow the sale of ads direct­ed to cer­tain seg­ments of users.

    Face­book allowed The Times to tar­get ads to users Face­book has deter­mined are inter­est­ed in Goebbels, the Third Reich’s chief pro­pa­gan­dist, Himm­ler, the archi­tect of the Holo­caust and leader of the SS, and Men­gele, the infa­mous con­cen­tra­tion camp doc­tor who per­formed human exper­i­ments on pris­on­ers. Each cat­e­go­ry includ­ed hun­dreds of thou­sands of users.

    The com­pa­ny also approved an ad tar­get­ed to fans of Skrew­driv­er, a noto­ri­ous white suprema­cist punk band — and auto­mat­i­cal­ly sug­gest­ed a series of top­ics relat­ed to Euro­pean far-right move­ments to bol­ster the ad’s reach.

    Col­lec­tive­ly, the ads were seen by 4,153 users in 24 hours, with The Times pay­ing only $25 to fuel the push.

    ...
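    To put those numbers in perspective, here’s a quick back-of-the-envelope calculation using only the figures The Times reported:

    ```python
    # Back-of-the-envelope math using the figures reported by the LA Times.
    spend_usd = 25.0       # what The Times paid for the test ads
    impressions = 4_153    # users who saw the ads within 24 hours

    cost_per_impression = spend_usd / impressions
    effective_cpm = cost_per_impression * 1_000  # cost per 1,000 impressions

    print(f"${cost_per_impression:.4f} per impression (~${effective_cpm:.2f} CPM)")
    # -> $0.0060 per impression (~$6.02 CPM)
    ```

    Roughly six dollars per thousand impressions, aimed squarely at users Facebook itself had flagged as interested in Nazi figures.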

    And these ads ended up being served within Instant Articles, so they would appear in the same part of Facebook where articles from outlets like CNN and the BBC show up:

    ...
    ‘Expand­ing the orbit’

    The ads end­ed up being served with­in Instant Arti­cles — which are host­ed with­in Face­book, rather than link­ing out to a publisher’s own web­site — pub­lished by the Face­book pages of a wide swath of media out­lets.

    These includ­ed arti­cles by the Dai­ly Wire, CNN, Huff­Post, Moth­er Jones, Bre­it­bart, the BBC and ABC News. They also includ­ed arti­cles by viral pages with names like Pup­per Dog­go, I Love Movies and Right Health Today — a seem­ing­ly defunct media com­pa­ny whose only Face­book post was a link to a now-delet­ed arti­cle titled “What Is The Ben­e­fits Of Eat­ing Apple Every­day.”

    Segal, the ADL direc­tor, said Face­book might wind up fuel­ing the recruit­ment of new extrem­ists by serv­ing up such ads on the types of pages an ordi­nary news read­er might vis­it.

    “Being able to reach so many peo­ple with extrem­ist con­tent, exist­ing lit­er­al­ly in the same space as legit­i­mate news or non-hate­ful con­tent, is the biggest dan­ger,” he said. “What you’re doing is expand­ing the orbit.”
    ...

    Of course, Face­book pledged to remove these neo-Nazi ad categories...just like they did before:

    ...
    After being con­tact­ed by The Times, Face­book said that it would remove many of the audi­ence group­ings from its ad plat­form.

    “Most of these tar­get­ing options are against our poli­cies and should have been caught and removed soon­er,” said Face­book spokesman Joe Osborne. “While we have an ongo­ing review of our tar­get­ing options, we clear­ly need to do more, so we’re tak­ing a broad­er look at our poli­cies and detec­tion meth­ods.”

    ...

    Face­book has promised since 2017 that humans review every ad tar­get­ing cat­e­go­ry. It announced last fall the removal of 5,000 audi­ence cat­e­gories that risked enabling abuse or dis­crim­i­na­tion.
    ...

    So how confident should we be that Facebook is actually going to purge its system of neo-Nazi ad categories? Well, as the article notes, Facebook’s current ad system earned the company a record $55 billion in ad revenue in 2018, with profit margins above 40%. And a big reason for those margins is the lack of human oversight and the high degree of automation in the running of this system. In other words, Facebook’s record profits depend on exactly the kind of lack of human oversight that allowed these neo-Nazi ad categories to proliferate:

    ...
    Approved by Face­book

    Facebook’s broad reach and sophis­ti­cat­ed adver­tis­ing tools brought in a record $55 bil­lion in ad rev­enue in 2018.

    Prof­it mar­gins stayed above 40%, thanks to a high degree of automa­tion, with algo­rithms sort­ing users into mar­ketable sub­sets based on their behav­ior — then choos­ing which ads to show them.

    But the lack of human over­sight has also brought the com­pa­ny con­tro­ver­sy.

    ...

    Of course, we shouldn’t necessarily assume that Facebook’s ongoing problems with Nazi ad categories are simply due to a lack of human oversight. It’s also quite possible that Facebook simply sees the promotion of extremism as a great source of revenue. After all, the LA Times reporters discovered that the number of users Facebook categorized as having an interest in Josef Mengele actually grew from 117,150 to 127,010 during their investigation. That’s growth of over 8%! So the extremist ad market might simply be seen as a lucrative growth market that the company can’t resist:

    ...
    In the wake of past con­tro­ver­sies, Face­book has blocked ads aimed at those inter­est­ed in the most obvi­ous terms affil­i­at­ed with hate groups. “Nazi,” “Hitler,” “white suprema­cy” and “Holo­caust” all yield noth­ing in the ad plat­form. But adver­tis­ers could tar­get more than a mil­lion users with inter­est in Goebbels or the Nation­al Fas­cist Par­ty, which dis­solved in 1943. Himm­ler had near­ly 95,000 con­stituents. Men­gele had 117,150 inter­est­ed users — a num­ber that increased over the dura­tion of our report­ing, to 127,010.

    Face­book said these cat­e­gories were auto­mat­i­cal­ly gen­er­at­ed based on user activ­i­ty — lik­ing or com­ment­ing on ads, or join­ing cer­tain groups. But it would not pro­vide spe­cif­ic details about how it deter­mined a user’s inter­est in top­ics linked to Nazis.
    ...

    Could it be that the explo­sive growth of extrem­ism is sim­ply mak­ing the hate demo­graph­ic irre­sistible? Per­haps, although as we’ve seen with vir­tu­al­ly all of the major social media plat­forms like Twit­ter and YouTube, when it comes to social media plat­forms prof­it­ing off of extrem­ism it’s very much a ‘chick­en & egg’ sit­u­a­tion.

    Posted by Pterrafractyl | February 22, 2019, 11:57 am
  7. Oh look at that: a new Wall Street Journal report found that several smartphone apps are sending sensitive information to Facebook without getting user consent. This included “Flo Health,” an app for women to track their periods and ovulation. Facebook was literally collecting information on users’ ovulation status. Another app, Instant Heart Rate: HR Monitor, was also sending Facebook data, along with the real-estate app Realtor.com. This is all happening through the toolkit Facebook provides app developers. And while Facebook defended itself by pointing out that its terms require that developers not send the company sensitive information, Facebook also appears to be accepting this information without telling developers to stop:

    Asso­ci­at­ed Press

    Report: Apps give Face­book sen­si­tive health and oth­er data

    By MAE ANDERSON
    Feb­ru­ary 22, 2019

    NEW YORK (AP) — Sev­er­al phone apps are send­ing sen­si­tive user data, includ­ing health infor­ma­tion, to Face­book with­out users’ con­sent, accord­ing to a report by The Wall Street Jour­nal.

    An analytics tool called “App Events” allows app developers to record user activity and report it back to Facebook, even if the user isn’t on Facebook, according to the report.

    One exam­ple detailed by the Jour­nal shows how a woman would track her peri­od and ovu­la­tion using an app from Flo Health. After she enters when she last had her peri­od, Face­book soft­ware in the app would send along data, such as whether the user may be ovu­lat­ing. The Journal’s test­ing found that the data was sent with an adver­tis­ing ID that can be matched to a device or pro­file.

    Although Facebook’s terms instruct app devel­op­ers not to send such sen­si­tive infor­ma­tion, Face­book appeared to be accept­ing such data with­out telling the devel­op­ers to stop. Devel­op­ers are able to use such data to tar­get their own users while on Face­book.

    Face­book said in a state­ment that it requires apps to tell users what infor­ma­tion is shared with Face­book and it “pro­hibits app devel­op­ers from send­ing us sen­si­tive data.” The com­pa­ny said it works to remove infor­ma­tion that devel­op­ers should not have sent to Face­book.

    ...

    The data-shar­ing is relat­ed to a data ana­lyt­ics tool that Face­book offers devel­op­ers. The tool lets devel­op­ers see sta­tis­tics about their users and tar­get them with Face­book ads.

    Besides Flo Health, the Jour­nal found that Instant Heart Rate: HR Mon­i­tor and real-estate app Realtor.com were also send­ing app data to Face­book. The Jour­nal found that the apps did not pro­vide users any way to stop the data-shar­ing.

    Flo Health said in an emailed state­ment that using ana­lyt­i­cal sys­tems is a “com­mon prac­tice” for all app devel­op­ers and that it uses Face­book ana­lyt­ics for “inter­nal ana­lyt­ics pur­pos­es only.” But the com­pa­ny plans to audit its ana­lyt­ics tools to be “as proac­tive as pos­si­ble” on pri­va­cy con­cerns.

    Hours after the Jour­nal sto­ry was pub­lished, New York Gov. Andrew Cuo­mo direct­ed the state’s Depart­ment of State and Depart­ment of Finan­cial Ser­vices to “imme­di­ate­ly inves­ti­gate” what he calls a clear inva­sion of con­sumer pri­va­cy. The Demo­c­rat also urged fed­er­al reg­u­la­tors to step in to end the prac­tice.

    Securo­sis CEO Rich Mogull said that while it is not good for Face­book to have yet anoth­er data pri­va­cy flap in the head­lines, “In this case it looks like the main vio­la­tors were the com­pa­nies that wrote those appli­ca­tions,” he said. “Face­book in this case is more the enabler than the bad actor.”

    ———-

    “Report: Apps give Face­book sen­si­tive health and oth­er data” by MAE ANDERSON; Asso­ci­at­ed Press; 02/22/2019

    “In this case it looks like the main vio­la­tors were the com­pa­nies that wrote those applications...Facebook in this case is more the enabler than the bad actor.”

    That’s one way to spin it: Facebook is more of the enabler than the primary bad actor in this case. That’s sort of an improvement. Specifically, Facebook’s “App Events” tool is enabling app developers to send sensitive user information back to Facebook despite Facebook’s instructions to developers not to send sensitive information. And the fact that Facebook was clearly accepting this sensitive data without telling developers to stop sending it certainly adds to the enabling behavior. Even when that sensitive data included whether or not a woman is ovulating:

    ...
    An analytics tool called “App Events” allows app developers to record user activity and report it back to Facebook, even if the user isn’t on Facebook, according to the report.

    One exam­ple detailed by the Jour­nal shows how a woman would track her peri­od and ovu­la­tion using an app from Flo Health. After she enters when she last had her peri­od, Face­book soft­ware in the app would send along data, such as whether the user may be ovu­lat­ing. The Journal’s test­ing found that the data was sent with an adver­tis­ing ID that can be matched to a device or pro­file.

    Although Facebook’s terms instruct app devel­op­ers not to send such sen­si­tive infor­ma­tion, Face­book appeared to be accept­ing such data with­out telling the devel­op­ers to stop. Devel­op­ers are able to use such data to tar­get their own users while on Face­book.

    Face­book said in a state­ment that it requires apps to tell users what infor­ma­tion is shared with Face­book and it “pro­hibits app devel­op­ers from send­ing us sen­si­tive data.” The com­pa­ny said it works to remove infor­ma­tion that devel­op­ers should not have sent to Face­book.

    ...

    The data-shar­ing is relat­ed to a data ana­lyt­ics tool that Face­book offers devel­op­ers. The tool lets devel­op­ers see sta­tis­tics about their users and tar­get them with Face­book ads.
    ...

    And the range of apps sending sensitive data runs from heart rate monitors to real estate apps. In other words, pretty much any app might be sending data to Facebook, but we don’t necessarily know which apps, because the apps aren’t informing users about this data collection and don’t give users a way to stop it:

    ...
    Besides Flo Health, the Jour­nal found that Instant Heart Rate: HR Mon­i­tor and real-estate app Realtor.com were also send­ing app data to Face­book. The Jour­nal found that the apps did not pro­vide users any way to stop the data-shar­ing.
    ...
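    To make the mechanics a bit more concrete, here is a minimal conceptual sketch (in Python, with invented names, IDs and values; this is not Facebook’s actual SDK or wire format) of the kind of record an in-app analytics call can generate. The point is that a developer-chosen event plus the device’s advertising ID is exactly the combination the Journal found being transmitted:

        # Conceptual sketch only: invented names and values, not Facebook's actual SDK.
        import json
        import time

        def build_app_event(advertising_id, app_id, event_name, params):
            """Assemble the kind of analytics record a third-party app SDK can emit."""
            return {
                "app_id": app_id,                 # which app produced the event
                "advertiser_id": advertising_id,  # device-level ID usable for matching
                "event": event_name,
                "params": params,                 # developer-chosen fields; can be sensitive
                "timestamp": int(time.time()),
            }

        # A period-tracking app logging the kind of event described by the Journal:
        event = build_app_event(
            advertising_id="38400000-8cf0-11bd-b23e-10b96e40000d",  # example ad-ID format
            app_id="example.period.tracker",
            event_name="cycle_logged",
            params={"may_be_ovulating": True},
        )
        print(json.dumps(event, indent=2))  # this is what gets sent to the analytics service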

    And as the fol­low­ing Buz­zFeed report from Decem­ber describes, while app devel­op­ers tend to assume that the infor­ma­tion their apps are send­ing back to Face­book is anonymized because it does­n’t have your per­son­al name attached, that’s basi­cal­ly a garbage con­clu­sion because Face­book does­n’t need your name to know who you are. There’s plen­ty of oth­er iden­ti­fy­ing infor­ma­tion in what these apps are send­ing. Even if you don’t have a Face­book pro­file. And about half of the smart­phone apps found to be send­ing infor­ma­tion back to Face­book don’t even men­tion this in their pri­va­cy poli­cies accord­ing to a study by the Ger­man mobile secu­ri­ty ini­tia­tive Mobil­sich­er. So what per­cent of smart­phone apps over­all are send­ing infor­ma­tion back to Face­book? Accord­ing to the esti­mates of pri­va­cy researcher col­lec­tive App Cen­sus, about 30 per­cent of all apps on the Google Play store con­tact Face­book at start­up:

    Buz­zFeed News

    Apps Are Reveal­ing Your Pri­vate Infor­ma­tion To Face­book And You Prob­a­bly Don’t Know It

    Face­book pro­vid­ed devel­op­ers with tools to build Face­book-com­pat­i­ble apps like Tin­der, Grindr, and Preg­nan­cy+. Those apps have been qui­et­ly send­ing sen­si­tive user data to Face­book.
    Char­lie Warzel Buz­zFeed News Reporter

    Last updat­ed on Decem­ber 19, 2018, at 1:04 p.m. ET
    Post­ed on Decem­ber 19, 2018, at 12:30 p.m. ET

    Major Android apps like Tin­der, Grindr, and Preg­nan­cy+ are qui­et­ly trans­mit­ting sen­si­tive user data to Face­book, accord­ing to a new report by the Ger­man mobile secu­ri­ty ini­tia­tive Mobil­sich­er. This infor­ma­tion can include things like reli­gious affil­i­a­tion, dat­ing pro­files, and health care data. It’s being pur­pose­ful­ly col­lect­ed by Face­book through the Soft­ware Devel­op­er Kit (SDK) that it pro­vides to third-par­ty app devel­op­ers. And while Face­book does­n’t hide this, you prob­a­bly don’t know about it.

    Cer­tain­ly not all devel­op­ers did.

    “Most devel­op­ers we asked about this issue assumed that the infor­ma­tion Face­book receives is anonymized,” Mobil­sich­er explains in its report, which explores the types of infor­ma­tion shared behind the scenes between the plat­form and devel­op­ers. Through its SDK, Face­book pro­vides app devel­op­ers with data about their users, includ­ing where you click, how long you use the app, and your loca­tion when you use it. In exchange, Face­book can access the data those apps col­lect, which it then uses to tar­get adver­tis­ing rel­e­vant to a user’s inter­ests. That data doesn’t have your name attached, but as Mobil­sich­er shows, it’s far from anonymized, and it’s trans­mit­ted to Face­book regard­less of whether users are logged into the plat­form.

    Among the infor­ma­tion trans­mit­ted to Face­book are the IP address of the device that used the app, the type of device, time of use, and a user-spe­cif­ic Adver­tis­ing ID, which allows Face­book to iden­ti­fy and link third-par­ty app infor­ma­tion to the peo­ple using those apps. Apps that Mobil­sich­er test­ed include Bible+, Curvy, For­Dia­betes, Grindr, Kwitt, Migraine Bud­dy, Mood­path, Mus­lim Pro, OkCu­pid, Preg­nan­cy+, and more.

    As long as you’ve logged into Face­book on your mobile device at some point (through your phone’s brows­er or the Face­book app itself), the com­pa­ny cross-ref­er­ences the Adver­tis­ing ID and can link the third-par­ty app infor­ma­tion to your pro­file. And even if you don’t have a Face­book pro­file, the data can still be trans­mit­ted and col­lect­ed with oth­er third-par­ty app data that cor­re­sponds to your unique Adver­tis­ing ID.

    For devel­op­ers and Face­book, this trans­mis­sion appears rel­a­tive­ly com­mon. The pri­va­cy researcher col­lec­tive App Cen­sus esti­mates that “approx­i­mate­ly 30 per­cent of all apps in Google’s Play store con­tact Face­book at start­up” through the company’s SDK. The research firm Sta­tista esti­mates that the Google Play store has over 2.6 mil­lion apps as of Decem­ber 2018. As the Mobil­sich­er report details, many of these apps con­tain sen­si­tive infor­ma­tion. And while Face­book users can opt out and dis­able tar­get­ed adver­tise­ments (the same kind of ads that are informed by third-par­ty app data), it is unclear whether turn­ing off tar­get­ing stops Face­book from col­lect­ing this app infor­ma­tion. In a state­ment to Mobil­sich­er, Face­book spec­i­fied only that “if a per­son uti­lizes one of these con­trols, then Face­book will not use data gath­ered on these third-par­ty apps (e.g. through Face­book Audi­ence Net­work), for ad tar­get­ing.”

    A Face­book rep­re­sen­ta­tive clar­i­fied to Buz­zFeed News that while it enables users to opt out of tar­get­ed ads from third par­ties, the con­trols apply to the usage of the data and not its col­lec­tion. The com­pa­ny also said it does not use the third-par­ty data it col­lects through the SDK to cre­ate pro­files of non-Face­book users. Tin­der, Grindr, and Google did not respond to requests for com­ment. Apple, which uses a sim­i­lar ad iden­ti­fi­er, was not able to com­ment at the time of pub­li­ca­tion.

    The pub­li­ca­tion of Mobilsicher’s report comes at the end of a year rife with Face­book pri­va­cy scan­dals. In the past few months alone, the com­pa­ny has grap­pled with a few mas­sive ones. In late Sep­tem­ber, Face­book dis­closed a vul­ner­a­bil­i­ty that had exposed the per­son­al infor­ma­tion of 30 mil­lion users. A month lat­er, it revealed that same vul­ner­a­bil­i­ty had exposed pro­file infor­ma­tion includ­ing gen­der, loca­tion, birth dates, and recent search his­to­ry. Ear­li­er this month, the com­pa­ny report­ed anoth­er secu­ri­ty flaw that poten­tial­ly exposed the pub­lic and pri­vate pho­tos of as many as 6.8 mil­lion Face­book users to devel­op­ers that should not have had access to them. And on Tues­day, the New York Times report­ed that Face­book gave more than 150 com­pa­nies, includ­ing Net­flix, Ama­zon, Microsoft, Spo­ti­fy, and Yahoo, unprece­dent­ed and undis­closed access to users’ per­son­al data, in some cas­es grant­i­ng access to read users’ pri­vate mes­sages.

    The vul­ner­a­bil­i­ties, cou­pled with fall­out from the Cam­bridge Ana­lyt­i­ca data min­ing scan­dal, have set off a Face­book pri­va­cy reck­on­ing that’s inspired grass­roots cam­paigns to #Delete­Face­book, lead­ing to some high-pro­file dele­tions. They’ve also sparked a tech­ni­cal debate about whether Face­book “sells data” to adver­tis­ers. (Face­book and its defend­ers argue that no data changes hands as a result of its tar­get­ed adver­tis­ing, while crit­ics say that’s a seman­tic dodge and that the com­pa­ny sells ads against your infor­ma­tion, which is effec­tive­ly sim­i­lar.)

    Lost in that debate is the greater issue of trans­paren­cy. Plat­forms like Face­book do dis­close their data poli­cies in daunt­ing moun­tain ranges of text with impres­sive­ly off-putting com­plex­i­ty. Rare is the nor­mal human who reads them. Rar­er still is the non-devel­op­er human who reads the com­pa­ny’s even more off-putting data poli­cies for devel­op­ers. For these rea­sons, the mechan­ics of the Face­book plat­form — par­tic­u­lar­ly the nuances of its soft­ware devel­op­er kit — are large­ly unknown to the typ­i­cal Face­book user.

    Though CEO Mark Zucker­berg told law­mak­ers this year that Face­book users have “com­plete con­trol” of their data, Tues­day’s New York Times inves­ti­ga­tion as well as Mobil­sicher’s report reveal that user infor­ma­tion appears to move between dif­fer­ent com­pa­nies and plat­forms and is col­lect­ed, some­times with­out noti­fy­ing the users. In the case of Facebook’s SDK, for exam­ple, Mobil­sich­er notes that the trans­mis­sion of user infor­ma­tion from third-par­ty apps to Face­book occurs entire­ly behind the scenes. None of the apps Mobil­sich­er found to be trans­mit­ting data to Face­book “active­ly noti­fied users” that they were doing so. Accord­ing to the report, “Not even half of [the apps Mobil­sich­er test­ed] men­tion Face­book Ana­lyt­ics in their pri­va­cy pol­i­cy. Strict­ly speak­ing, none of them is GDPR-com­pli­ant, since the trans­mis­sion starts before any user inter­ac­tion could indi­cate informed con­sent.”

    ...

    ———-

    “Apps Are Reveal­ing Your Pri­vate Infor­ma­tion To Face­book And You Prob­a­bly Don’t Know It” by Char­lie Warzel; Buz­zFeed; 12/19/2018

    “Major Android apps like Tin­der, Grindr, and Preg­nan­cy+ are qui­et­ly trans­mit­ting sen­si­tive user data to Face­book, accord­ing to a new report by the Ger­man mobile secu­ri­ty ini­tia­tive Mobil­sich­er. This infor­ma­tion can include things like reli­gious affil­i­a­tion, dat­ing pro­files, and health care data. It’s being pur­pose­ful­ly col­lect­ed by Face­book through the Soft­ware Devel­op­er Kit (SDK) that it pro­vides to third-par­ty app devel­op­ers. And while Face­book does­n’t hide this, you prob­a­bly don’t know about it.”

    It’s not just the handful of apps described in the Wall Street Journal report. Major Android apps are routinely passing information to Facebook. And this information can include things like religious affiliation and dating profiles in addition to health care data. And while developers might be doing this, in part, because they assume the data is anonymized, it’s not. At least not in any meaningful way. And even non-Facebook users are getting their data sent:

    ...
    Cer­tain­ly not all devel­op­ers did.

    “Most devel­op­ers we asked about this issue assumed that the infor­ma­tion Face­book receives is anonymized,” Mobil­sich­er explains in its report, which explores the types of infor­ma­tion shared behind the scenes between the plat­form and devel­op­ers. Through its SDK, Face­book pro­vides app devel­op­ers with data about their users, includ­ing where you click, how long you use the app, and your loca­tion when you use it. In exchange, Face­book can access the data those apps col­lect, which it then uses to tar­get adver­tis­ing rel­e­vant to a user’s inter­ests. That data doesn’t have your name attached, but as Mobil­sich­er shows, it’s far from anonymized, and it’s trans­mit­ted to Face­book regard­less of whether users are logged into the plat­form.

    Among the infor­ma­tion trans­mit­ted to Face­book are the IP address of the device that used the app, the type of device, time of use, and a user-spe­cif­ic Adver­tis­ing ID, which allows Face­book to iden­ti­fy and link third-par­ty app infor­ma­tion to the peo­ple using those apps. Apps that Mobil­sich­er test­ed include Bible+, Curvy, For­Dia­betes, Grindr, Kwitt, Migraine Bud­dy, Mood­path, Mus­lim Pro, OkCu­pid, Preg­nan­cy+, and more.

    As long as you’ve logged into Face­book on your mobile device at some point (through your phone’s brows­er or the Face­book app itself), the com­pa­ny cross-ref­er­ences the Adver­tis­ing ID and can link the third-par­ty app infor­ma­tion to your pro­file. And even if you don’t have a Face­book pro­file, the data can still be trans­mit­ted and col­lect­ed with oth­er third-par­ty app data that cor­re­sponds to your unique Adver­tis­ing ID.
    ...
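    And here is a minimal sketch, with entirely invented data, of why “no name attached” is not the same as anonymized: records from unrelated apps can be joined on the shared advertising ID into a single profile that can then be matched to a device or an existing account:

        # Invented data throughout; the advertising ID is just an example of the format.
        from collections import defaultdict

        AD_ID = "38400000-8cf0-11bd-b23e-10b96e40000d"  # same device-level ID seen by every app

        dating_app = [{"ad_id": AD_ID, "dating_profile": "active"}]
        prayer_app = [{"ad_id": AD_ID, "religion": "muslim"}]
        health_app = [{"ad_id": AD_ID, "condition": "diabetes"}]

        def merge_by_ad_id(*streams):
            """Build one profile per advertising ID out of otherwise unlinked app data."""
            profiles = defaultdict(dict)
            for stream in streams:
                for record in stream:
                    fields = {k: v for k, v in record.items() if k != "ad_id"}
                    profiles[record["ad_id"]].update(fields)
            return dict(profiles)

        print(merge_by_ad_id(dating_app, prayer_app, health_app))
        # One ID now ties together dating, religious, and health data from three
        # apps that never shared anything with each other directly.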

    How common is this? According to privacy researcher collective App Census estimates, it’s about 30 percent of all apps in the Google Play store. And more than half of the apps tested by Mobilsicher didn’t even mention Facebook Analytics in their privacy policies:

    ...
    For devel­op­ers and Face­book, this trans­mis­sion appears rel­a­tive­ly com­mon. The pri­va­cy researcher col­lec­tive App Cen­sus esti­mates that “approx­i­mate­ly 30 per­cent of all apps in Google’s Play store con­tact Face­book at start­up” through the company’s SDK. The research firm Sta­tista esti­mates that the Google Play store has over 2.6 mil­lion apps as of Decem­ber 2018. As the Mobil­sich­er report details, many of these apps con­tain sen­si­tive infor­ma­tion. And while Face­book users can opt out and dis­able tar­get­ed adver­tise­ments (the same kind of ads that are informed by third-par­ty app data), it is unclear whether turn­ing off tar­get­ing stops Face­book from col­lect­ing this app infor­ma­tion. In a state­ment to Mobil­sich­er, Face­book spec­i­fied only that “if a per­son uti­lizes one of these con­trols, then Face­book will not use data gath­ered on these third-par­ty apps (e.g. through Face­book Audi­ence Net­work), for ad tar­get­ing.”

    ...

    Though CEO Mark Zucker­berg told law­mak­ers this year that Face­book users have “com­plete con­trol” of their data, Tues­day’s New York Times inves­ti­ga­tion as well as Mobil­sicher’s report reveal that user infor­ma­tion appears to move between dif­fer­ent com­pa­nies and plat­forms and is col­lect­ed, some­times with­out noti­fy­ing the users. In the case of Facebook’s SDK, for exam­ple, Mobil­sich­er notes that the trans­mis­sion of user infor­ma­tion from third-par­ty apps to Face­book occurs entire­ly behind the scenes. None of the apps Mobil­sich­er found to be trans­mit­ting data to Face­book “active­ly noti­fied users” that they were doing so. Accord­ing to the report, “Not even half of [the apps Mobil­sich­er test­ed] men­tion Face­book Ana­lyt­ics in their pri­va­cy pol­i­cy. Strict­ly speak­ing, none of them is GDPR-com­pli­ant, since the trans­mis­sion starts before any user inter­ac­tion could indi­cate informed con­sent.”
    ...
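    The GDPR point in that last quote is essentially about initialization order: the data starts flowing at app launch, before any consent dialog can possibly appear. Here is a minimal sketch of what consent-gated analytics looks like, using a made-up AnalyticsClient stand-in rather than any real SDK:

        # Hypothetical AnalyticsClient stand-in, not a real SDK. The point is that
        # nothing should be transmitted until the user has explicitly opted in.
        class AnalyticsClient:
            def __init__(self):
                self.enabled = False

            def start(self):
                self.enabled = True

            def log_event(self, name, **params):
                if not self.enabled:
                    return  # pre-consent events are dropped, not transmitted
                print(f"sending {name} {params}")

        def on_app_launch(client, user_gave_consent):
            # The pattern the report faults is an SDK that auto-starts here regardless
            # of consent; the compliant pattern only starts after an explicit opt-in.
            if user_gave_consent:
                client.start()

        client = AnalyticsClient()
        on_app_launch(client, user_gave_consent=False)
        client.log_event("app_open")  # nothing is sent: the user never consented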

    And accord­ing to the fol­low­ing arti­cle, that 30 per­cent esti­mate might be low. Accord­ing to a Pri­va­cy Inter­na­tion­al study, at least 20 out of 34 pop­u­lar Android apps that they test­ed were trans­mit­ting sen­si­tive infor­ma­tion back to Face­book with­out ask­ing for per­mis­sion:

    Engad­get

    More pop­u­lar apps are send­ing data to Face­book with­out ask­ing
    MyFit­ness­Pal, Tri­pAd­vi­sor and oth­ers may be vio­lat­ing EU pri­va­cy law.

    Jon Fin­gas
    12.30.18

    It’s not just dat­ing and health apps that might be vio­lat­ing your pri­va­cy when they send data to Face­book. A Pri­va­cy Inter­na­tion­al study has deter­mined that “at least” 20 out of 34 pop­u­lar Android apps are trans­mit­ting sen­si­tive infor­ma­tion to Face­book with­out ask­ing per­mis­sion, includ­ing Kayak, MyFit­ness­Pal, Sky­scan­ner and Tri­pAd­vi­sor. This typ­i­cal­ly includes ana­lyt­ics data that sends on launch, includ­ing your unique Android ID, but can also include data that sends lat­er. The trav­el search engine Kayak, for instance, appar­ent­ly sends des­ti­na­tion and flight search data, trav­el dates and whether or not kids might come along.

    While the data might not imme­di­ate­ly iden­ti­fy you, it could the­o­ret­i­cal­ly be used to rec­og­nize some­one through round­about means, such as the apps they have installed or whether they trav­el with the same per­son.

    The con­cern isn’t just that apps are over­shar­ing data, but that they may be vio­lat­ing the EU’s GDPR pri­va­cy rules by both col­lect­ing info with­out con­sent and poten­tial­ly iden­ti­fy­ing users. You can’t lay the blame sole­ly at the feet of Face­book or devel­op­ers, though. Face­book’s rel­e­vant devel­op­er kit did­n’t pro­vide the option to ask for per­mis­sion until after GDPR took effect. The social net­work did devel­op a fix, but it’s not clear that it works or that devel­op­ers are imple­ment­ing it prop­er­ly. Numer­ous apps were still using old­er ver­sions of the devel­op­er kit, accord­ing to the study. Sky­scan­ner not­ed that it was “not aware” it was send­ing data with­out per­mis­sion.

    ...

    ———-

    “More pop­u­lar apps are send­ing data to Face­book with­out ask­ing” by Jon Fin­gas; Engad­get; 12/30/18

    “It’s not just dat­ing and health apps that might be vio­lat­ing your pri­va­cy when they send data to Face­book. A Pri­va­cy Inter­na­tion­al study has deter­mined that “at least” 20 out of 34 pop­u­lar Android apps are trans­mit­ting sen­si­tive infor­ma­tion to Face­book with­out ask­ing per­mis­sion, includ­ing Kayak, MyFit­ness­Pal, Sky­scan­ner and Tri­pAd­vi­sor. This typ­i­cal­ly includes ana­lyt­ics data that sends on launch, includ­ing your unique Android ID, but can also include data that sends lat­er. The trav­el search engine Kayak, for instance, appar­ent­ly sends des­ti­na­tion and flight search data, trav­el dates and whether or not kids might come along.”

    So if you don’t know exactly whether or not an app is sending Facebook your data, it appears to be a safe bet that, yes, that app is sending Facebook your data.

    And if you’re tempted to delete all of the apps off of your smartphone, recall all the stories about device makers, including smartphone manufacturers, exchanging large amounts of user data with Facebook and literally being treated as “extensions” of Facebook by the company. So while smartphone apps are certainly going to be a major source of personal data leakage, don’t forget there’s a good chance your smartphone itself is basically working for Facebook.

    Posted by Pterrafractyl | February 25, 2019, 12:03 pm
  8. Here’s an update on the brain-to-computer interface technology that Facebook is working on. First, recall how the initial use for the technology that Facebook has been touting thus far has simply been rapid typing with your brain. It always seemed like a rather limited application for a technology that’s basically reading your mind.

    Now Mark Zuckerberg is giving us a hint at one of the more ambitious applications of this technology: Augmented Reality (AR). AR technology isn’t new. Google Glass was an earlier version of AR technology, and Oculus, the virtual reality headset company owned by Facebook, has made it clear that AR is an area it plans to get into. But it sounds like Facebook has big plans for pairing the brain-to-computer interface with AR technology. This was revealed during a talk Zuckerberg gave at Harvard last month, a nearly two-hour interview with Harvard law school professor Jonathan Zittrain. According to Zuckerberg, the vision is to allow people to use their thoughts to navigate through augmented realities. This will presumably work in tandem with AR headsets.

    So as we should expect, Facebook’s early plans for brain-to-computer interfaces aren’t limited to people typing with their minds at a computer. The plans involve incorporating the technology into the kind of device people can wear everywhere, like AR glasses:

    Wired

    Zucker­berg Wants Face­book to Build a Mind-Read­ing Machine

    Author: Noam Cohen
    03.07.19 07:00 am

    For those of us who wor­ry that Face­book may have seri­ous bound­ary issues when it comes to the per­son­al infor­ma­tion of its users, Mark Zuckerberg’s recent com­ments at Har­vard should get the heart rac­ing.

    Zuckerberg dropped by the university last month ostensibly as part of a year of conversations with experts about the role of technology in society, “the opportunities, the challenges, the hopes, and the anxieties.” His nearly two-hour interview with Harvard law school professor Jonathan Zittrain in front of Facebook cameras and a classroom of students centered on the company’s unprecedented position as a town square for perhaps 2 billion people. To hear the young CEO tell it, Facebook was taking shots from all sides—either it was indifferent to the ethnic hatred festering on its platforms or it was a heavy-handed censor deciding whether an idea was allowed to be expressed.

    Zucker­berg con­fessed that he hadn’t sought out such an awe­some respon­si­bil­i­ty. No one should, he said. “If I was a dif­fer­ent per­son, what would I want the CEO of the com­pa­ny to be able to do?” he asked him­self. “I would not want so many deci­sions about con­tent to be con­cen­trat­ed with any indi­vid­ual.”

    Instead, Face­book will estab­lish its own Supreme Court, he told Zit­train, an out­side pan­el entrust­ed to set­tle thorny ques­tions about what appears on the plat­form. “I will not be able to make a deci­sion that over­turns what they say,” he promised, “which I think is good.”

    All was going to plan. Zucker­berg had dis­played a wel­come humil­i­ty about him­self and his com­pa­ny. And then he described what real­ly excit­ed him about the future—and the famil­iar Sil­i­con Val­ley hubris had returned. There was this promis­ing new tech­nol­o­gy, he explained, a brain-com­put­er inter­face, which Face­book has been research­ing.

    The idea is to allow peo­ple to use their thoughts to nav­i­gate intu­itive­ly through aug­ment­ed reality—the neu­ro-dri­ven ver­sion of the world recent­ly described by Kevin Kel­ly in these pages. No typ­ing, no speak­ing, even, to dis­tract you or slow you down as you inter­act with dig­i­tal addi­tions to the land­scape: dri­ving instruc­tions super­im­posed over the free­way, short biogra­phies float­ing next to atten­dees of a con­fer­ence, 3‑D mod­els of fur­ni­ture you can move around your apart­ment.

    The Har­vard audi­ence was a lit­tle tak­en aback by the conversation’s turn, and Zit­train made a law-pro­fes­sor joke about the con­sti­tu­tion­al right to remain silent in light of a tech­nol­o­gy that allows eaves­drop­ping on thoughts. “Fifth amend­ment impli­ca­tions are stag­ger­ing,” he said to laugh­ter. Even this gen­tle push­back was met with the tried-and-true defense of big tech com­pa­nies when crit­i­cized for tram­pling users’ privacy—users’ con­sent. “Pre­sum­ably,” Zucker­berg said, “this would be some­thing that some­one would choose to use as a prod­uct.”

    In short, he would not be divert­ed from his self-assigned mis­sion to con­nect the peo­ple of the world for fun and prof­it. Not by the dystopi­an image of brain-prob­ing police offi­cers. Not by an extend­ed apol­o­gy tour. “I don’t know how we got onto that,” he said jovial­ly. “But I think a lit­tle bit on future tech and research is inter­est­ing, too.”

    Of course, Face­book already fol­lows you around as you make your way through the world via the GPS in the smart­phone in your pock­et, and, like­wise, fol­lows you across the inter­net via code implant­ed in your brows­er. Would we real­ly let Face­book inside those old nog­gins of ours just so we can order a piz­za faster and with more top­pings? Zucker­berg clear­ly is count­ing on it.

    To be fair, Face­book doesn’t plan to actu­al­ly enter our brains. For one thing, a sur­gi­cal implant, Zucker­berg told Zit­train, wouldn’t scale well: “If you’re actu­al­ly try­ing to build things that every­one is going to use, you’re going to want to focus on the non­in­va­sive things.”

    The tech­nol­o­gy that Zucker­berg described is a show­er-cap-look­ing device that sur­rounds a brain and dis­cov­ers con­nec­tions between par­tic­u­lar thoughts and par­tic­u­lar blood flows or brain activ­i­ty, pre­sum­ably to assist the glass­es or head­sets man­u­fac­tured by Ocu­lus VR, which is part of Face­book. Already, Zucker­berg said, researchers can dis­tin­guish when a per­son is think­ing of a giraffe or an ele­phant based on neur­al activ­i­ty. Typ­ing with your mind would work off of the same prin­ci­ples.

    As with so many of Facebook’s inno­va­tions, Zucker­berg doesn’t see how brain-com­put­er inter­face breach­es an individual’s integri­ty, what Louis Bran­deis famous­ly defined as “the right to be left alone” in one’s thoughts, but instead sees a tech­nol­o­gy that empow­ers the indi­vid­ual. “The way that our phones work today, and all com­put­ing sys­tems, orga­nized around apps and tasks is fun­da­men­tal­ly not how our brains work and how we approach the world,” he told Zit­train. “That’s one of the rea­sons why I’m just very excit­ed longer term about espe­cial­ly things like aug­ment­ed real­i­ty, because it’ll give us a plat­form that I think actu­al­ly is how we think about stuff.”

    Kel­ly, in his essay about AR, like­wise sees a world that makes more sense when a “smart” ver­sion rests atop the quo­tid­i­an one. “Watch­es will detect chairs,” he writes of this mir­ror­world, “chairs will detect spread­sheets; glass­es will detect watch­es, even under a sleeve; tablets will see the inside of a tur­bine; tur­bines will see work­ers around them.” Sud­den­ly our envi­ron­ment, nat­ur­al and arti­fi­cial, will oper­ate as an inte­grat­ed whole. Except for humans with their bot­tled up thoughts and desires. Until, that is, they install BCI-enhanced glass­es.

    Zucker­berg explained the poten­tial ben­e­fits of the tech­nol­o­gy this way when he announced Facebook’s research in 2017: “Our brains pro­duce enough data to stream 4 HD movies every sec­ond. The prob­lem is that the best way we have to get infor­ma­tion out into the world—speech—can only trans­mit about the same amount of data as a 1980s modem. We’re work­ing on a sys­tem that will let you type straight from your brain about 5x faster than you can type on your phone today. Even­tu­al­ly, we want to turn it into a wear­able tech­nol­o­gy that can be man­u­fac­tured at scale. Even a sim­ple yes/no ‘brain click’ would help make things like aug­ment­ed real­i­ty feel much more nat­ur­al.”

    Zucker­berg likes to quote Steve Jobs’s descrip­tion of com­put­ers as “bicy­cles for the mind.” I can imag­ine him think­ing, What’s wrong with help­ing us ped­al a lit­tle faster?

    ...

    ———-

    “Zucker­berg Wants Face­book to Build a Mind-Read­ing Machine” by Noam Cohen; Wired; 03/07/2019

    “All was going to plan. Zucker­berg had dis­played a wel­come humil­i­ty about him­self and his com­pa­ny. And then he described what real­ly excit­ed him about the future—and the famil­iar Sil­i­con Val­ley hubris had returned. There was this promis­ing new tech­nol­o­gy, he explained, a brain-com­put­er inter­face, which Face­book has been research­ing.

    Yep, every­thing was going well at the Zucker­berg event until he start­ed talk­ing about his vision for the future. A future of aug­ment­ed real­i­ty that you nav­i­gate with your thoughts using Face­book’s brain-to-com­put­er inter­face tech­nol­o­gy. It might seem creepy, but Face­book is clear­ly bet­ting on it not being too creepy to pre­vent peo­ple from using it:

    ...
    The idea is to allow peo­ple to use their thoughts to nav­i­gate intu­itive­ly through aug­ment­ed reality—the neu­ro-dri­ven ver­sion of the world recent­ly described by Kevin Kel­ly in these pages. No typ­ing, no speak­ing, even, to dis­tract you or slow you down as you inter­act with dig­i­tal addi­tions to the land­scape: dri­ving instruc­tions super­im­posed over the free­way, short biogra­phies float­ing next to atten­dees of a con­fer­ence, 3‑D mod­els of fur­ni­ture you can move around your apart­ment.

    ...

    Of course, Face­book already fol­lows you around as you make your way through the world via the GPS in the smart­phone in your pock­et, and, like­wise, fol­lows you across the inter­net via code implant­ed in your brows­er. Would we real­ly let Face­book inside those old nog­gins of ours just so we can order a piz­za faster and with more top­pings? Zucker­berg clear­ly is count­ing on it.

    To be fair, Face­book doesn’t plan to actu­al­ly enter our brains. For one thing, a sur­gi­cal implant, Zucker­berg told Zit­train, wouldn’t scale well: “If you’re actu­al­ly try­ing to build things that every­one is going to use, you’re going to want to focus on the non­in­va­sive things.”

    The tech­nol­o­gy that Zucker­berg described is a show­er-cap-look­ing device that sur­rounds a brain and dis­cov­ers con­nec­tions between par­tic­u­lar thoughts and par­tic­u­lar blood flows or brain activ­i­ty, pre­sum­ably to assist the glass­es or head­sets man­u­fac­tured by Ocu­lus VR, which is part of Face­book. Already, Zucker­berg said, researchers can dis­tin­guish when a per­son is think­ing of a giraffe or an ele­phant based on neur­al activ­i­ty. Typ­ing with your mind would work off of the same prin­ci­ples.
    ...

    What about potential abuses, like violating the constitutional right to remain silent? Zuckerberg assured us that only people who choose to use the technology would actually use it, so we shouldn’t worry about abuse. It’s a rather worrying response, in part because of how typical it is:

    ...
    The Har­vard audi­ence was a lit­tle tak­en aback by the conversation’s turn, and Zit­train made a law-pro­fes­sor joke about the con­sti­tu­tion­al right to remain silent in light of a tech­nol­o­gy that allows eaves­drop­ping on thoughts. “Fifth amend­ment impli­ca­tions are stag­ger­ing,” he said to laugh­ter. Even this gen­tle push­back was met with the tried-and-true defense of big tech com­pa­nies when crit­i­cized for tram­pling users’ privacy—users’ con­sent. “Pre­sum­ably,” Zucker­berg said, “this would be some­thing that some­one would choose to use as a prod­uct.”

    In short, he would not be divert­ed from his self-assigned mis­sion to con­nect the peo­ple of the world for fun and prof­it. Not by the dystopi­an image of brain-prob­ing police offi­cers. Not by an extend­ed apol­o­gy tour. “I don’t know how we got onto that,” he said jovial­ly. “But I think a lit­tle bit on future tech and research is inter­est­ing, too.”
    ...

    But at least this augmented reality will work through some sort of AR headset; the technology isn’t actually injecting the augmented information directly into your brain. That would be a whole new level of creepy.

    And according to the following article, a neuroscientist at Northwestern University, Dr. Moran Cerf, is working on exactly that kind of technology and predicts it will be available to the public in as little as five years. Cerf is working on some sort of chip that would be connected to the internet, read your thoughts, go to Wikipedia or some website to get an answer to your questions, and return the answer directly to your brain. Yep, internet-connected brain chips. He estimates that such technology could give people IQs of 200.

    So will peo­ple have to go through brain surgery to get this new tech­nol­o­gy? Not nec­es­sar­i­ly. Cerf is ask­ing the ques­tion “Can you eat some­thing that will actu­al­ly get to your brain? Can you eat things in parts that will assem­ble inside your head?” Yep, inter­net-con­nect­ed brain chips that you eat. So not only will you not need brain surgery to get the chip...in the­o­ry, you might not even know you ate one.

    Also note that it’s unclear whether this brain chip can read your thoughts like Facebook’s brain-to-computer interface or whether it’s only for feeding you information from the internet. In other words, since Cerf’s vision for this chip requires the ability to read thoughts first in order to go on the internet, find answers and report them back, it’s possible that this is the kind of computer-to-brain technology that is intended to work with the kind of brain-to-computer mind reading technology Facebook is working on. And that’s particularly relevant because Cerf tells us that he’s collaborating with ‘Silicon Valley big wigs’ that he’d rather not name:

    CBS Chica­go

    North­west­ern Neu­ro­sci­en­tist Research­ing Brain Chips To Make Peo­ple Super­in­tel­li­gent

    By Lau­ren Vic­to­ry
    March 4, 2019 at 7:32 am

    CHICAGO (CBS) — What if you could make mon­ey, or type some­thing, just by think­ing about it? It sounds like sci­ence fic­tion, but it might be close to real­i­ty.

    In as lit­tle as five years, super smart peo­ple could be walk­ing down the street; men and women who’ve paid to increase their intel­li­gence.

    North­west­ern Uni­ver­si­ty neu­ro­sci­en­tist and busi­ness pro­fes­sor Dr. Moran Cerf made that pre­dic­tion, because he’s work­ing on a smart chip for the brain.

    “Make it so that it has an inter­net con­nec­tion, and goes to Wikipedia, and when I think this par­tic­u­lar thought, it gives me the answer,” he said.

    Cerf is col­lab­o­rat­ing with Sil­i­con Val­ley big wigs he’d rather not name.

    Face­book also has been work­ing on build­ing a brain-com­put­er inter­face, and SpaceX and Tes­la CEO Elon Musk is back­ing a brain-com­put­er inter­face called Neu­ralink.

    “Every­one is spend­ing a lot of time right now try­ing to find ways to get things into the brain with­out drilling a hole in your skull,” Cerf said. “Can you eat some­thing that will actu­al­ly get to your brain? Can you eat things in parts that will assem­ble inside your head?”

    ...

    “This is no longer a sci­ence prob­lem. This is a social prob­lem,” Cerf said.

    Cerf wor­ries about cre­at­ing intel­li­gence gaps in soci­ety; on top of exist­ing gen­der, racial, and finan­cial inequal­i­ties.

    “They can make mon­ey by just think­ing about the right invest­ments, and we can­not; so they’re going to get rich­er, they’re going to get health­i­er, they’re going to live longer,” he said.

    The aver­age IQ of an intel­li­gent mon­key is about 70, the aver­age human IQ is around 100, and a genius IQ is gen­er­al­ly con­sid­ered to begin around 140. Peo­ple with a smart chip in their brain could have an IQ of around 200, so would they even want to inter­act with the aver­age per­son?

    “Are they going to say, ‘Look at this cute human, Stephen Hawk­ing. He can do dif­fer­en­tial equa­tions in his mind, just like a lit­tle baby with 160 IQ points. Isn’t it amaz­ing? So cute. Now let’s put it back in a cage and give it bananas,’” Cerf said.

    Time will tell. Or will our minds?

    Approx­i­mate­ly 40,000 peo­ple in the Unit­ed States already have smart chips in their heads, but those brain implants are only approved for med­ical use for now.

    ———-

    “North­west­ern Neu­ro­sci­en­tist Research­ing Brain Chips To Make Peo­ple Super­in­tel­li­gent” by Lau­ren Vic­to­ry; CBS Chica­go; 03/04/2019

    “In as lit­tle as five years, super smart peo­ple could be walk­ing down the street; men and women who’ve paid to increase their intel­li­gence.”

    In just five years, you’ll be walking down the street, you’ll wonder about something, and your brain chip will access Wikipedia, find the answer, and somehow deliver it to you. And you won’t even have to have gone through brain surgery. You’ll just eat something that will somehow insert the chip in your brain:

    ...
    North­west­ern Uni­ver­si­ty neu­ro­sci­en­tist and busi­ness pro­fes­sor Dr. Moran Cerf made that pre­dic­tion, because he’s work­ing on a smart chip for the brain.

    “Make it so that it has an inter­net con­nec­tion, and goes to Wikipedia, and when I think this par­tic­u­lar thought, it gives me the answer,” he said.

    ...

    Face­book also has been work­ing on build­ing a brain-com­put­er inter­face, and SpaceX and Tes­la CEO Elon Musk is back­ing a brain-com­put­er inter­face called Neu­ralink.

    “Every­one is spend­ing a lot of time right now try­ing to find ways to get things into the brain with­out drilling a hole in your skull,” Cerf said. “Can you eat some­thing that will actu­al­ly get to your brain? Can you eat things in parts that will assem­ble inside your head?”

    ...

    The aver­age IQ of an intel­li­gent mon­key is about 70, the aver­age human IQ is around 100, and a genius IQ is gen­er­al­ly con­sid­ered to begin around 140. Peo­ple with a smart chip in their brain could have an IQ of around 200, so would they even want to inter­act with the aver­age per­son?
    ...

    That’s the promise. Or, rather, the hype. It’s hard to imag­ine this all being ready in five years. It’s also worth not­ing that if the only thing this chip does is con­duct inter­net queries it’s hard to see how this will effec­tive­ly raise peo­ple’s IQs to 200. After all, peo­ple damn near have their brains con­nect­ed to Wikipedia already with smart­phones and there does­n’t appear to have been a smart­phone-induced IQ boost. But who knows. Once you have the tech­nol­o­gy to rapid­ly feed infor­ma­tion back and forth between the brain and a com­put­er there could be all sorts of IQ-boost­ing tech­nolo­gies that could be devel­oped. At a min­i­mum, it could allow for some very fan­cy aug­ment­ed real­i­ty tech­nol­o­gy.

    So some sort of com­put­er-to-brain inter­face tech­nol­o­gy appears to be on the hori­zon. And if Cer­f’s chip ends up being tech­no­log­i­cal­ly fea­si­ble it’s going to have Sil­i­con Val­ley big wigs behind it. We just don’t know which big wigs because he won’t tell us:

    ...
    Cerf is col­lab­o­rat­ing with Sil­i­con Val­ley big wigs he’d rather not name.
    ...

    So some Silicon Valley big wigs are working on computer-to-brain interface technology that can potentially be fed to people. And they want to keep their involvement in the development of this technology a secret. That’s super ominous, right?

    Posted by Pterrafractyl | March 7, 2019, 3:45 pm
  9. Remember how the right-wing outrage machine created an uproar in 2016 over allegations that Facebook’s trending news section was censoring conservative stories? And remember how Facebook responded by firing all the human editors and replacing them with an algorithm that turned the trending news section into a distributor of right-wing ‘fake news’ misinformation? And remember how Facebook announced a new set of news feed changes in January of 2018, then a couple of months later conservatives were again complaining that it was biased against them, so Facebook hired former Republican Senator Jon Kyl and the Heritage Foundation to audit the company to determine whether or not Facebook had a political bias?

    Well, it looks like we’re due for a round of fake out­rage designed to make social media com­pa­nies more com­pli­ant to right-wing dis­in­for­ma­tion cam­paigns. This time, it’s Pres­i­dent Trump lead­ing the way on the faux out­rage, com­plain­ing that “Some­thing’s hap­pen­ing with those groups of folks that are run­ning Face­book and Google and Twit­ter and I do think we have to get to the bot­tom of it”:

    The Hill

    Trump accus­es Sil­i­con Val­ley of col­lud­ing to silence con­ser­v­a­tives

    By Justin Wise — 03/19/19 03:09 PM EDT

    Pres­i­dent Trump on Tues­day sug­gest­ed that Google, Face­book and Twit­ter have col­lud­ed with each oth­er to dis­crim­i­nate against Repub­li­cans.

    “We use the word col­lu­sion very loose­ly all the time. And I will tell you there is col­lu­sion with respect to that,” Trump said dur­ing a press con­fer­ence at the White House Rose Gar­den. “Some­thing has to be going on. You see the lev­el, in many cas­es, of hatred for a cer­tain group of peo­ple that hap­pened to be in pow­er, that hap­pened to win the elec­tion.

    “Some­thing’s hap­pen­ing with those groups of folks that are run­ning Face­book and Google and Twit­ter and I do think we have to get to the bot­tom of it,” he added.

    The pres­i­den­t’s com­ments marked an esca­la­tion in his crit­i­cism of U.S. tech giants like Twit­ter, a plat­form that he fre­quent­ly uses to pro­mote his poli­cies and denounce his polit­i­cal oppo­nents.

    Trump said Twit­ter is “dif­fer­ent than it used to be,” when asked about a new push to make social media com­pa­nies liable for the con­tent on their plat­form.

    “We have to do some­thing,” Trump said. “I have many, many mil­lions of fol­low­ers on Twit­ter, and it’s dif­fer­ent than it used to be. Things are hap­pen­ing. Names are tak­en off.”

    He lat­er alleged that con­ser­v­a­tives and Repub­li­cans are dis­crim­i­nat­ed against on social media plat­forms.

    “It’s big, big dis­crim­i­na­tion,” he said. “I see it absolute­ly on Twit­ter.”

    Trump and oth­er con­ser­v­a­tives have increas­ing­ly argued that com­pa­nies like Google, Face­book and Twit­ter have an insti­tu­tion­al bias that favors lib­er­als. Trump tweet­ed Tues­day morn­ing that the tech giants were “sooo on the side of the Rad­i­cal Left Democ­rats.”

    The three com­pa­nies did not imme­di­ate­ly respond to requests for com­ment on Trump’s Tues­day morn­ing tweet.

    He also vowed to look into a report that his social media direc­tor, Dan Scav­i­no, was tem­porar­i­ly blocked from mak­ing pub­lic com­ments on one of his Face­book posts.

    The series of com­ments came a day after Rep. Devin Nunes (R‑Calif.) sued Twit­ter and some of its users for more than $250 mil­lion. Nunes’s suit alleges that the plat­form cen­sors con­ser­v­a­tive voic­es by “shad­ow-ban­ning” them.

    The Cal­i­for­nia Repub­li­can also accused Twit­ter of “facil­i­tat­ing defama­tion on its plat­form” by “ignor­ing law­ful com­plaints about offen­sive con­tent.”

    ———-

    “Trump accus­es Sil­i­con Val­ley of col­lud­ing to silence con­ser­v­a­tives” by Justin Wise; The Hill; 03/19/2019

    “Trump and oth­er con­ser­v­a­tives have increas­ing­ly argued that com­pa­nies like Google, Face­book and Twit­ter have an insti­tu­tion­al bias that favors lib­er­als. Trump tweet­ed Tues­day morn­ing that the tech giants were “sooo on the side of the Rad­i­cal Left Democ­rats.””

    Yep, the social media giants are appar­ent­ly “sooo on the side of the Rad­i­cal Left Democ­rats.” Trump is con­vinced of this because he feels that “some­thing has to be going on” and “we have to get to the bot­tom of it”. He’s also sure that Twit­ter is “dif­fer­ent than it used to be” and “we have to do some­thing” because it’s “big, big dis­crim­i­na­tion”:

    ...
    “We use the word col­lu­sion very loose­ly all the time. And I will tell you there is col­lu­sion with respect to that,” Trump said dur­ing a press con­fer­ence at the White House Rose Gar­den. “Some­thing has to be going on. You see the lev­el, in many cas­es, of hatred for a cer­tain group of peo­ple that hap­pened to be in pow­er, that hap­pened to win the elec­tion.

    “Some­thing’s hap­pen­ing with those groups of folks that are run­ning Face­book and Google and Twit­ter and I do think we have to get to the bot­tom of it,” he added.

    The pres­i­den­t’s com­ments marked an esca­la­tion in his crit­i­cism of U.S. tech giants like Twit­ter, a plat­form that he fre­quent­ly uses to pro­mote his poli­cies and denounce his polit­i­cal oppo­nents.

    Trump said Twit­ter is “dif­fer­ent than it used to be,” when asked about a new push to make social media com­pa­nies liable for the con­tent on their plat­form.

    “We have to do some­thing,” Trump said. “I have many, many mil­lions of fol­low­ers on Twit­ter, and it’s dif­fer­ent than it used to be. Things are hap­pen­ing. Names are tak­en off.”

    He lat­er alleged that con­ser­v­a­tives and Repub­li­cans are dis­crim­i­nat­ed against on social media plat­forms.

    “It’s big, big dis­crim­i­na­tion,” he said. “I see it absolute­ly on Twit­ter.”
    ...

    And these comments by Trump come a day after Republican congressman Devin Nunes sued Twitter for “shadow-banning” conservative voices. Nunes also sued a handful of Twitter users who had been particularly critical of him:

    ...
    The series of com­ments came a day after Rep. Devin Nunes (R‑Calif.) sued Twit­ter and some of its users for more than $250 mil­lion. Nunes’s suit alleges that the plat­form cen­sors con­ser­v­a­tive voic­es by “shad­ow-ban­ning” them.

    The Cal­i­for­nia Repub­li­can also accused Twit­ter of “facil­i­tat­ing defama­tion on its plat­form” by “ignor­ing law­ful com­plaints about offen­sive con­tent.”
    ...

    It’s worth noting that Twitter did admit to sort of inadvertently “shadow-banning” some prominent conservatives in June of last year, including Donald Trump, Jr. The company explained that it changed the algorithm for which names show up in the auto-populated drop-down search box on Twitter in order to reduce the exposure of accounts found to engage in troll-like behavior, and this had the effect of downgrading the accounts of a number of right-wing figures. Because of course that’s what would happen if you implement an algorithm to reduce the exposure of accounts engaging in troll-like behavior. Also, a couple of days after the reports on this, Twitter claimed it ‘fixed’ the problem, so prominent Republicans engaging in troll-like behavior will once again show up in the auto-populated search drop-down box.

    But Devin Nunes appears to feel so harmed by Twit­ter that he’s suing it for $250 mil­lion any­way. And as the fol­low­ing col­umn notes, while the law­suit is a joke on legal grounds and stands no chance of vic­to­ry, it does serve an impor­tant pur­pose. And it’s the same pur­pose we’ve seen over and over: intim­i­dat­ing the tech com­pa­nies into giv­ing con­ser­v­a­tives pref­er­en­tial treat­ment and giv­ing them a green light to turn these plat­forms into dis­in­for­ma­tion machines.

    But Nunes’s deci­sion to sue some indi­vid­u­als who were very crit­i­cal of him over Twit­ter also serves anoth­er pur­pose that we saw when Peter Thiel man­aged to sue Gawk­er into obliv­ion: send­ing out the gen­er­al threat that if you pub­licly crit­i­cize wealthy right-wingers they will sue and cost you large amounts of mon­ey in legal fees whether they have a legal case or not:

    Talk­ing Points Memo
    Edi­tor’s Blog

    Nunes And The Peter Thiel Era

    By Jeet Heer
    March 19, 2019 1:47 am

    First of all, I should intro­duce myself: I’m Jeet Heer, a con­tribut­ing edi­tor at The New Repub­lic. I’m fill­ing in for Josh as he takes a much-deserved break. Hav­ing fol­lowed TPM from its ear­li­est days as a blog cov­er­ing the 2000 (!) elec­tion and its after­math, I’m hon­ored to be here.

    I want­ed to flag a sto­ry from Mon­day night that is both com­i­cal­ly absurd but also has a sin­is­ter side: Repub­li­can Con­gress­man Devin Nunes’ announced law­suit against Twit­ter and three Twit­ter accounts who he claims have defamed him.

    You can read Nunes’ com­plaint here. Much of the suit reads like pure dada non­sense, espe­cial­ly since Nunes is going after two joke accounts with the han­dles Devin Nunes’ Mom and Devin Nunes’ Cow. This leads to the immor­tal line, “Like Devin Nunes’ Mom, Devin Nunes’ Cow engaged in a vicious defama­tion cam­paign against Nunes.”

    ...

    As tempt­ing as it is to sim­ply mock the suit, it also has to be said that it is part of some­thing more dis­turb­ing: the ris­ing use of legal actions, espe­cial­ly by right-wing forces, to shut down polit­i­cal oppo­nents. As Susan Hen­nessey, a legal schol­ar at the Brook­ings Insti­tute, not­ed, the suit “is a politi­cian attempt­ing to abuse the judi­cial process in order to scare peo­ple out of crit­i­ciz­ing him by prov­ing that he can cost them a lot in legal fees.”

    Peter Thiel’s sup­port of a suit that destroyed Gawk­er is the prime exam­ple. Thiel’s suc­cess seems to have embold­ened the right in gen­er­al. Amid Trump’s chat­ter about want­i­ng to loosen libel laws and sim­i­lar talk from Supreme Court Jus­tice Clarence Thomas, we’ve seen law­suits or threat­ened law­suits from Joe Arpaio, Sarah Palin, and Roy Moore, among oth­ers. As with the Nunes suit, many of these seem like jokes, but they have a goal of chill­ing speech.

    ———-

    “Nunes And The Peter Thiel Era” by Jeet Heer; Talk­ing Points Memo; 03/19/2019

    “As tempting as it is to simply mock the suit, it also has to be said that it is part of something more disturbing: the rising use of legal actions, especially by right-wing forces, to shut down political opponents. As Susan Hennessey, a legal scholar at the Brookings Institute, noted, the suit “is a politician attempting to abuse the judicial process in order to scare people out of criticizing him by proving that he can cost them a lot in legal fees.””

    This form of right-wing intimidation of the media, intimidation that rises to the level of ‘we will financially destroy you if you criticize us’, is exactly what we saw Peter Thiel unleash when he revenge-bankrolled a lawsuit that drove Gawker into bankruptcy:

    ...
    Peter Thiel’s sup­port of a suit that destroyed Gawk­er is the prime exam­ple. Thiel’s suc­cess seems to have embold­ened the right in gen­er­al. Amid Trump’s chat­ter about want­i­ng to loosen libel laws and sim­i­lar talk from Supreme Court Jus­tice Clarence Thomas, we’ve seen law­suits or threat­ened law­suits from Joe Arpaio, Sarah Palin, and Roy Moore, among oth­ers. As with the Nunes suit, many of these seem like jokes, but they have a goal of chill­ing speech.
    ...

    So it’s going to be inter­est­ing to see if Nunes’s law­suit fur­thers this trend or ends up being a com­plete joke. But giv­en that one met­ric of suc­cess is sim­ply cost­ing the defen­dants a lot of mon­ey it real­ly could end up being quite suc­cess­ful. We’ll see.

    And with all that in mind, here’s a review of the impact of changes Facebook made to their news feed algorithm last year. Surprise! It turns out Fox News stories lead in terms of engagement on Facebook, where comments, shares, and user ‘reactions’ (like a smiley face or angry face reaction) to the story are used as the engagement metric. And if you filter the responses to only ‘angry’ responses, Fox News dominates the rest of the pack, with Breitbart as #2 and officialbenshapiro as #3 (CNN is #4). So more people appear to be seeing Fox News stories than stories from any other outlet on the platform, and it’s making them angry:

    The Huff­in­g­ton Post

    Fox News Dom­i­nates Face­book By Incit­ing Anger, Study Shows
    Facebook’s algo­rithm over­haul was sup­posed to make users feel hap­pi­er, but it doesn’t look like it did.

    By Amy Rus­so
    3/18/2019 01:42 pm ET Updat­ed

    Face­book CEO Mark Zucker­berg announced an algo­rithm over­haul last year intend­ed to make users feel bet­ter with less news in their feeds and more con­tent from fam­i­ly and friends instead.

    But the data is in, and it shows Fox News rules the plat­form in terms of engage­ment, with “angry” reac­tions to its posts lead­ing the way.

    Accord­ing to a NewsWhip study pub­lished this month that exam­ines Face­book News Feed con­tent from Jan. 1 to March 10, the cable net­work was the No. 1 Eng­lish-lan­guage pub­lish­er when it came to com­ments, shares and reac­tions.

    The out­let far out­paced its com­pe­ti­tion, with NBC, the BBC, the Dai­ly Mail, CNN and oth­ers lag­ging behind.

    [see chart]

    The dif­fer­ence is even more glar­ing when rank­ing out­lets only by the num­ber of angry respons­es they trig­ger with Facebook’s reac­tions fea­ture.

    By that mea­sure, Fox News is leaps and bounds ahead of oth­er pages, includ­ing that of right-wing web­site Bre­it­bart and con­ser­v­a­tive Dai­ly Wire Edi­tor-in-Chief Ben Shapiro.

    [see chart]

    While Harvard’s Nie­man Lab on jour­nal­ism points out that Fox News’ pop­u­lar­i­ty on Face­book may have occurred with­out help from an algo­rithm, it begs the ques­tion of whether Zuckerberg’s vision for the plat­form is tru­ly com­ing to fruition.

    In Jan­u­ary 2018, Zucker­berg told users he had “a respon­si­bil­i­ty to make sure our ser­vices aren’t just fun to use, but also good for people’s well-being.”

    He said he was hop­ing to pro­mote “mean­ing­ful inter­ac­tions between peo­ple” and that the algo­rithm over­haul would result in “less pub­lic con­tent like posts from busi­ness­es, brands, and media” and “more from your friends, fam­i­ly and groups.”

    While over­all engage­ment on Face­book has sky­rock­et­ed this year com­pared with 2018, the pow­er of the platform’s algo­rithms remains unclear.

    ...

    ———-

    “Fox News Dom­i­nates Face­book By Incit­ing Anger, Study Shows” by Amy Rus­so; The Huff­in­g­ton Post; 3/18/2019

    “But the data is in, and it shows Fox News rules the plat­form in terms of engage­ment, with “angry” reac­tions to its posts lead­ing the way.

    Face­book’s news feed algo­rithm sure loves serv­ing up Fox News sto­ries. Espe­cial­ly the kinds of sto­ries that make peo­ple angry:

    ...
    Accord­ing to a NewsWhip study pub­lished this month that exam­ines Face­book News Feed con­tent from Jan. 1 to March 10, the cable net­work was the No. 1 Eng­lish-lan­guage pub­lish­er when it came to com­ments, shares and reac­tions.

    The out­let far out­paced its com­pe­ti­tion, with NBC, the BBC, the Dai­ly Mail, CNN and oth­ers lag­ging behind.

    [see chart]

    The dif­fer­ence is even more glar­ing when rank­ing out­lets only by the num­ber of angry respons­es they trig­ger with Facebook’s reac­tions fea­ture.

    By that mea­sure, Fox News is leaps and bounds ahead of oth­er pages, includ­ing that of right-wing web­site Bre­it­bart and con­ser­v­a­tive Dai­ly Wire Edi­tor-in-Chief Ben Shapiro.
    ...
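    For reference, here is a rough sketch of how an engagement ranking like the one in the NewsWhip study works. The exact formula isn’t spelled out in the article, so treating engagement as comments plus shares plus reactions is an assumption here, and the numbers are invented:

        # Invented numbers; engagement-as-sum-of-interactions is an assumption.
        posts = [
            {"page": "Fox News",  "comments": 900, "shares": 1200, "reactions": {"angry": 2500, "like": 800}},
            {"page": "CNN",       "comments": 700, "shares": 600,  "reactions": {"angry": 400,  "like": 1500}},
            {"page": "Breitbart", "comments": 300, "shares": 500,  "reactions": {"angry": 1800, "like": 200}},
        ]

        def engagement(post):
            """Total interactions: comments + shares + every reaction type."""
            return post["comments"] + post["shares"] + sum(post["reactions"].values())

        by_total = sorted(posts, key=engagement, reverse=True)
        by_anger = sorted(posts, key=lambda p: p["reactions"]["angry"], reverse=True)

        print([p["page"] for p in by_total])  # ranking by overall engagement
        print([p["page"] for p in by_anger])  # ranking by 'angry' reactions alone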

    So as Pres­i­dent Trump and Rep Nunes con­tin­ue wag­ing their social media intim­i­da­tion cam­paign it’s going to be worth keep­ing in mind the wild suc­cess these intim­i­da­tion cam­paigns have already had. This is a tac­tic that clear­ly works.

    And in related news, Trump just threatened to open a federal investigation into Saturday Night Live for making too much fun of him...

    Posted by Pterrafractyl | March 20, 2019, 3:56 pm
  10. Oh look, another Facebook data debacle: Facebook just admitted that it’s been storing hundreds of millions of passwords in plain-text log files, which is a huge security ‘no no’ for a company like Facebook. Normally, passwords are supposed to be stored as a hash (where the password is converted to a long string of random-seeming text). This password-to-hash mapping approach allows companies like Facebook to check that the password you input matches your account password without having to directly store the password. Only the hash is stored. And that basic security rule hasn’t been followed for up to 600 million Facebook accounts. As a result, the plaintext passwords that people have been using for Facebook have potentially been readable by Facebook employees for years. This has apparently been the case since 2012 and was discovered in January 2019 by a team of engineers who were reviewing some code and noticed this ‘bug’.
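    For anyone wondering what the ‘hash’ alternative actually looks like, here is a minimal sketch using Python’s standard library: only a random salt and a derived hash get stored, and login verification recomputes the hash instead of ever reading a stored password. The commented-out logging line at the bottom illustrates the bug class being described; it is not Facebook’s actual code:

        # Illustrative sketch of salted password hashing with the standard library.
        import hashlib
        import hmac
        import secrets

        def hash_password(password):
            """Return (salt, digest); only these are stored, never the plaintext."""
            salt = secrets.token_bytes(16)
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
            return salt, digest

        def verify_password(password, salt, stored_digest):
            """Recompute the hash from the submitted password and compare in constant time."""
            candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
            return hmac.compare_digest(candidate, stored_digest)

        salt, digest = hash_password("hunter2")
        print(verify_password("hunter2", salt, digest))      # True
        print(verify_password("wrong-guess", salt, digest))  # False

        # The bug class described above is the opposite of this, e.g. something like:
        # logger.info("login user=%s password=%s", user, password)  # plaintext in log files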

    It sounds like the users of Facebook Lite — a version of Facebook for people with poor internet connections — were particularly hard hit. The way Facebook describes it, hundreds of millions of Facebook Lite users will be getting an email about this, along with tens of millions of regular Facebook users and even tens of thousands of Instagram users (Facebook owns Instagram).

    It’s unclear why Facebook didn’t report this sooner, but it sounds like it was only reported in the first place after an anonymous senior Facebook employee told KrebsOnSecurity — the blog of security expert Brian Krebs — about it. So for all we know Facebook had no intention of telling people at all, which would be particularly egregious if true: people often reuse passwords across different websites, so storing this information in a form readable to thousands of Facebook employees represents a very real security threat for sites across the internet (and unfortunately a lot of people do reuse passwords).

    Is there any evi­dence of Face­book employ­ees actu­al­ly abus­ing this infor­ma­tion? At this point Face­book is assur­ing us that it has seen no evi­dence of any­one inten­tion­al­ly try­ing to read the pass­word data. But as we’re going to see, around 20,000 Face­book employ­ees have had access to these logs. More alarm­ing­ly, Face­book admits that around 2,000 engi­neers and soft­ware devel­op­ers have con­duct­ed around 9 mil­lion queries for data ele­ments that con­tained the pass­words. But we are assured by Face­book that there’s noth­ing to wor­ry about:

    TechCrunch

    Face­book admits it stored ‘hun­dreds of mil­lions’ of account pass­words in plain­text

    Zack Whit­tak­er
    03/21/2019

    Flip the “days since last Face­book secu­ri­ty inci­dent” back to zero.

    Face­book con­firmed Thurs­day in a blog post, prompt­ed by a report by cyber­se­cu­ri­ty reporter Bri­an Krebs, that it stored “hun­dreds of mil­lions” of account pass­words in plain­text for years.

    The dis­cov­ery was made in Jan­u­ary, said Facebook’s Pedro Canahuati, as part of a rou­tine secu­ri­ty review. None of the pass­words were vis­i­ble to any­one out­side Face­book, he said. Face­book admit­ted the secu­ri­ty lapse months lat­er, after Krebs said logs were acces­si­ble to some 2,000 engi­neers and devel­op­ers.

    Krebs said the bug dat­ed back to 2012.

    “This caught our atten­tion because our login sys­tems are designed to mask pass­words using tech­niques that make them unread­able,” said Canahuati. “We have found no evi­dence to date that any­one inter­nal­ly abused or improp­er­ly accessed them,” but did not say how the com­pa­ny made that con­clu­sion.

    Face­book said it will noti­fy “hun­dreds of mil­lions of Face­book Lite users,” a lighter ver­sion of Face­book for users where inter­net speeds are slow and band­width is expen­sive, and “tens of mil­lions of oth­er Face­book users.” The com­pa­ny also said “tens of thou­sands of Insta­gram users” will be noti­fied of the expo­sure.

    Krebs said as many as 600 mil­lion users could be affect­ed — about one-fifth of the company’s 2.7 bil­lion users, but Face­book has yet to con­firm the fig­ure.

    Face­book also didn’t say how the bug came to be. Stor­ing pass­words in read­able plain­text is an inse­cure way of stor­ing pass­words. Com­pa­nies, like Face­book, hash and salt pass­words — two ways of fur­ther scram­bling pass­words — to store pass­words secure­ly. That allows com­pa­nies to ver­i­fy a user’s pass­word with­out know­ing what it is.

    Twit­ter and GitHub were hit by sim­i­lar but inde­pen­dent bugs last year. Both com­pa­nies said pass­words were stored in plain­text and not scram­bled.

    It’s the lat­est in a string of embar­rass­ing secu­ri­ty issues at the com­pa­ny, prompt­ing con­gres­sion­al inquiries and gov­ern­ment inves­ti­ga­tions. It was report­ed last week that Facebook’s deals that allowed oth­er tech com­pa­nies to access account data with­out con­sent was under crim­i­nal inves­ti­ga­tion.

    It’s not known why Face­book took months to con­firm the inci­dent, or if the com­pa­ny informed state or inter­na­tion­al reg­u­la­tors per U.S. breach noti­fi­ca­tion and Euro­pean data pro­tec­tion laws. We asked Face­book but a spokesper­son did not imme­di­ate­ly com­ment beyond the blog post.

    ...

    ———-

    “Face­book admits it stored ‘hun­dreds of mil­lions’ of account pass­words in plain­text” by Zack Whit­tak­er; TechCrunch; 03/21/2019

    “Facebook said it will notify “hundreds of millions of Facebook Lite users,” a lighter version of Facebook for users where internet speeds are slow and bandwidth is expensive, and “tens of millions of other Facebook users.” The company also said “tens of thousands of Instagram users” will be notified of the exposure.”

    So the bug caused the passwords of hundreds of millions of people using the Facebook Lite version of Facebook to get logged in plain text, but only those of tens of millions of regular Facebook users and tens of thousands of Instagram users. Was that the result of a single bug or separate bugs for Facebook and Instagram? Are these even bugs that were created by an innocent coding mistake, or did someone go out of their way to write code that would leave plain text passwords?
    At this point we have no idea, because Facebook isn’t saying how the bug came to be. Nor is the company saying how it arrived at the conclusion that there were no employees abusing their access to this data:

    ...
    “This caught our atten­tion because our login sys­tems are designed to mask pass­words using tech­niques that make them unread­able,” said Canahuati. “We have found no evi­dence to date that any­one inter­nal­ly abused or improp­er­ly accessed them,” but did not say how the com­pa­ny made that con­clu­sion.

    ...

    Face­book also didn’t say how the bug came to be. Stor­ing pass­words in read­able plain­text is an inse­cure way of stor­ing pass­words. Com­pa­nies, like Face­book, hash and salt pass­words — two ways of fur­ther scram­bling pass­words — to store pass­words secure­ly. That allows com­pa­nies to ver­i­fy a user’s pass­word with­out know­ing what it is.”
    ...
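    As for how plaintext passwords end up in internal logs ‘inadvertently’ in the first place, the usual failure mode is mundane: some piece of debugging or analytics code logs an entire request payload, and that payload happens to include the password field. The following Python sketch is purely illustrative (it is not Facebook’s code, and the field names are hypothetical); it shows both the careless logging call and the kind of redaction that prevents it:

        import logging

        logging.basicConfig(level=logging.INFO)
        log = logging.getLogger("login")

        SENSITIVE_KEYS = {"password", "passwd", "secret", "token"}

        def redact(payload: dict) -> dict:
            # Replace sensitive fields before anything reaches a log line.
            return {key: ("[REDACTED]" if key.lower() in SENSITIVE_KEYS else value)
                    for key, value in payload.items()}

        def handle_login(payload: dict) -> None:
            # The careless version quietly writes the plaintext password to the logs:
            # log.info("login attempt: %s", payload)

            # The safer version scrubs sensitive fields first:
            log.info("login attempt: %s", redact(payload))

        handle_login({"user": "alice", "password": "hunter2"})
        # INFO:login:login attempt: {'user': 'alice', 'password': '[REDACTED]'}

    Whether Facebook’s logging code looked anything like that is unknown, of course; the company hasn’t said how the bug came to be.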

    And yet we learn from Krebs that this bug has existed since 2012 and that some 2,000 engineers and developers have accessed those text logs. We also learn from Krebs that Facebook learned about this bug months ago and didn’t say anything:

    ...
    The dis­cov­ery was made in Jan­u­ary, said Facebook’s Pedro Canahuati, as part of a rou­tine secu­ri­ty review. None of the pass­words were vis­i­ble to any­one out­side Face­book, he said. Face­book admit­ted the secu­ri­ty lapse months lat­er, after Krebs said logs were acces­si­ble to some 2,000 engi­neers and devel­op­ers.

    Krebs said the bug dat­ed back to 2012.

    ...

    It’s not known why Face­book took months to con­firm the inci­dent, or if the com­pa­ny informed state or inter­na­tion­al reg­u­la­tors per U.S. breach noti­fi­ca­tion and Euro­pean data pro­tec­tion laws. We asked Face­book but a spokesper­son did not imme­di­ate­ly com­ment beyond the blog post.
    ...

    So that’s pretty bad. But it gets worse. Because if you read the initial Krebs report, it sounds like an anonymous senior Facebook employee is the source for this story. In other words, Facebook probably had no intention of telling the public about this. In addition, while Facebook is acknowledging that 2,000 employees actually accessed the log files, according to the Krebs report there were actually more than 20,000 employees who could have accessed them. So we have to hope Facebook isn’t low-balling that 2,000 estimate. Beyond that, Krebs reports that those 2,000 employees who did access those log files made around nine million internal queries for data elements that contained plain text user passwords. And despite all that, Facebook is assuring us that no password changes are necessary:

    Kreb­sOn­Se­cu­ri­ty

    Face­book Stored Hun­dreds of Mil­lions of User Pass­words in Plain Text for Years

    Bri­an Krebs

    Mar 21 2019

    Hun­dreds of mil­lions of Face­book users had their account pass­words stored in plain text and search­able by thou­sands of Face­book employ­ees — in some cas­es going back to 2012, Kreb­sOn­Se­cu­ri­ty has learned. Face­book says an ongo­ing inves­ti­ga­tion has so far found no indi­ca­tion that employ­ees have abused access to this data.

    Face­book is prob­ing a series of secu­ri­ty fail­ures in which employ­ees built appli­ca­tions that logged unen­crypt­ed pass­word data for Face­book users and stored it in plain text on inter­nal com­pa­ny servers. That’s accord­ing to a senior Face­book employ­ee who is famil­iar with the inves­ti­ga­tion and who spoke on con­di­tion of anonymi­ty because they were not autho­rized to speak to the press.

    The Face­book source said the inves­ti­ga­tion so far indi­cates between 200 mil­lion and 600 mil­lion Face­book users may have had their account pass­words stored in plain text and search­able by more than 20,000 Face­book employ­ees. The source said Face­book is still try­ing to deter­mine how many pass­words were exposed and for how long, but so far the inquiry has uncov­ered archives with plain text user pass­words in them dat­ing back to 2012.

    My Face­book insid­er said access logs showed some 2,000 engi­neers or devel­op­ers made approx­i­mate­ly nine mil­lion inter­nal queries for data ele­ments that con­tained plain text user pass­words.

    “The longer we go into this analy­sis the more com­fort­able the legal peo­ple [at Face­book] are going with the low­er bounds” of affect­ed users, the source said. “Right now they’re work­ing on an effort to reduce that num­ber even more by only count­ing things we have cur­rent­ly in our data ware­house.”

    In an inter­view with Kreb­sOn­Se­cu­ri­ty, Face­book soft­ware engi­neer Scott Ren­fro said the com­pa­ny wasn’t ready to talk about spe­cif­ic num­bers — such as the num­ber of Face­book employ­ees who could have accessed the data.

    Ren­fro said the com­pa­ny planned to alert affect­ed Face­book users, but that no pass­word resets would be required.

    “We’ve not found any cas­es so far in our inves­ti­ga­tions where some­one was look­ing inten­tion­al­ly for pass­words, nor have we found signs of mis­use of this data,” Ren­fro said. “In this sit­u­a­tion what we’ve found is these pass­words were inad­ver­tent­ly logged but that there was no actu­al risk that’s come from this. We want to make sure we’re reserv­ing those steps and only force a pass­word change in cas­es where there’s def­i­nite­ly been signs of abuse.”

    A writ­ten state­ment from Face­book pro­vid­ed to Kreb­sOn­Se­cu­ri­ty says the com­pa­ny expects to noti­fy “hun­dreds of mil­lions of Face­book Lite users, tens of mil­lions of oth­er Face­book users, and tens of thou­sands of Insta­gram users.” Face­book Lite is a ver­sion of Face­book designed for low speed con­nec­tions and low-spec phones.

    ...

    Ren­fro said the issue first came to light in Jan­u­ary 2019 when secu­ri­ty engi­neers review­ing some new code noticed pass­words were being inad­ver­tent­ly logged in plain text.

    “This prompt­ed the team to set up a small task force to make sure we did a broad-based review of any­where this might be hap­pen­ing,” Ren­fro said. “We have a bunch of con­trols in place to try to mit­i­gate these prob­lems, and we’re in the process of inves­ti­gat­ing long-term infra­struc­ture changes to pre­vent this going for­ward. We’re now review­ing any logs we have to see if there has been abuse or oth­er access to that data.”

    ...

    ————

    “Face­book Stored Hun­dreds of Mil­lions of User Pass­words in Plain Text for Years” by Bri­an Krebs; Kreb­sOn­Se­cu­ri­ty; 03/21/2019

    “Face­book is prob­ing a series of secu­ri­ty fail­ures in which employ­ees built appli­ca­tions that logged unen­crypt­ed pass­word data for Face­book users and stored it in plain text on inter­nal com­pa­ny servers. That’s accord­ing to a senior Face­book employ­ee who is famil­iar with the inves­ti­ga­tion and who spoke on con­di­tion of anonymi­ty because they were not autho­rized to speak to the press.

    An anony­mous senior Face­book employ­ee leak­ing to Krebs. That appears to be the only rea­son this sto­ry has gone pub­lic.

    And according to this anonymous employee, those logs were searchable by more than 20,000 Facebook employees. And 9 million queries of those files were made by the 2,000 engineers and developers who definitely did access the files:

    ...
    The Face­book source said the inves­ti­ga­tion so far indi­cates between 200 mil­lion and 600 mil­lion Face­book users may have had their account pass­words stored in plain text and search­able by more than 20,000 Face­book employ­ees. The source said Face­book is still try­ing to deter­mine how many pass­words were exposed and for how long, but so far the inquiry has uncov­ered archives with plain text user pass­words in them dat­ing back to 2012.

    My Face­book insid­er said access logs showed some 2,000 engi­neers or devel­op­ers made approx­i­mate­ly nine mil­lion inter­nal queries for data ele­ments that con­tained plain text user pass­words.

    “The longer we go into this analy­sis the more com­fort­able the legal peo­ple [at Face­book] are going with the low­er bounds” of affect­ed users, the source said. “Right now they’re work­ing on an effort to reduce that num­ber even more by only count­ing things we have cur­rent­ly in our data ware­house.”
    ...

    And yet Face­book is telling us that no pass­word resets are required because no abus­es have been found. Isn’t that reas­sur­ing:

    ...
    In an inter­view with Kreb­sOn­Se­cu­ri­ty, Face­book soft­ware engi­neer Scott Ren­fro said the com­pa­ny wasn’t ready to talk about spe­cif­ic num­bers — such as the num­ber of Face­book employ­ees who could have accessed the data.

    Ren­fro said the com­pa­ny planned to alert affect­ed Face­book users, but that no pass­word resets would be required.

    “We’ve not found any cas­es so far in our inves­ti­ga­tions where some­one was look­ing inten­tion­al­ly for pass­words, nor have we found signs of mis­use of this data,” Ren­fro said. “In this sit­u­a­tion what we’ve found is these pass­words were inad­ver­tent­ly logged but that there was no actu­al risk that’s come from this. We want to make sure we’re reserv­ing those steps and only force a pass­word change in cas­es where there’s def­i­nite­ly been signs of abuse.”
    ...

    So it sure looks like we have anoth­er case of a Face­book pri­va­cy scan­dal that Face­book had no inten­tion of telling any­one about.

    The whole episode also raises another interesting question about Facebook and Google and all the other social media giants that have become treasure troves of personal information: just how many spy agencies out there are trying to get their spies embedded at Facebook (or Google, or Twitter, etc.) precisely to exploit exactly these kinds of internal security lapses? Because, again, keep in mind that if people use the same password for Facebook that they use for other websites, their accounts at those other websites are also potentially at risk. So people could have effectively had their passwords for Facebook and GMail and who knows what else compromised by this. Hundreds of millions of people. That’s part of why it’s so irresponsible to tell people no password resets are necessary. The appropriate response would be to tell people that not only should they reset their Facebook password but they also need to reset the passwords for any other sites that use the same password (preferably to something other than their new Facebook password). Or, better yet, #DeleteFacebook.

    In possibly related news, two top Facebook executives, including chief product officer Chris Cox, just announced a few days ago that they’re leaving the company. It would be rather interesting if Cox was the anonymous senior employee who was the Krebs source for this story. Although we should probably hope that’s not the case, because that would mean there’s one less senior figure working at Facebook who is willing to go to the press about these kinds of things, and there’s clearly a shortage of such people at this point.

    Posted by Pterrafractyl | March 21, 2019, 2:43 pm
  11. Here’s a pair of articles to keep in mind regarding the role social media will play in the 2020 US election cycle and the question of whether or not we’re going to see these platforms reprise their roles as the key propagators of right-wing disinformation: President Trump did an interview with CNBC this morning where the issue of the EU’s lawsuits against US tech giants like Google and Facebook came up. The answer Trump gave is the kind of answer that could ensure those companies go as easy as possible on Trump and the Republicans when it comes to platform violations: Trump replied that it was improper for the EU to be suing these companies because the US should be doing it instead, and that he agrees with the EU that the monopoly concerns with these companies are valid:

    The Verge

    Don­ald Trump on tech antitrust: ‘There’s some­thing going on’

    ‘We should be doing this. They’re our com­pa­nies.’

    By Mak­e­na Kel­ly
    Jun 10, 2019, 11:51am EDT

    In an inter­view with CNBC on Mon­day, Pres­i­dent Don­ald Trump crit­i­cized the antitrust fines imposed by the Euro­pean Union on Unit­ed States tech com­pa­nies, sug­gest­ing that these tech giants could, in fact, be monop­o­lies, but the US should be the polit­i­cal body rak­ing in the set­tle­ment fines.

    “Every week you see them going after Face­book and Apple and all of these com­pa­nies … The Euro­pean Union is suing them all of the time,” Trump said. “Well, we should be doing this. They’re our com­pa­nies. So, [the EU is] actu­al­ly attack­ing our com­pa­nies, but we should be doing what they’re doing. They think there’s a monop­oly, but I’m not sure that they think that. They just think this is easy mon­ey.

    Asked if he thinks tech com­pa­nies like Google should be bro­ken up, Trump says, “well I can tell you they dis­crim­i­nate against me. Peo­ple talk about col­lu­sion — the real col­lu­sion is between the Democ­rats & these com­pa­nies, because they were so against me dur­ing my elec­tion run.” pic.twitter.com/xVz6yTqoeI— Aaron Rupar (@atrupar) June 10, 2019

    It’s unclear whether Trump actu­al­ly wants to impose sim­i­lar fines or was only cri­tiquing the EU’s moves. “We have a great attor­ney gen­er­al,” he said lat­er in the inter­view. “We’re going to look at it dif­fer­ent­ly.”

    Over the past few years, the EU has fined some of the US’s largest tech companies for behaving anti-competitively. Just last summer, Google was fined a record $5 billion for violating antitrust law with the company’s Android operating system product. Facebook has also been subject to a handful of privacy investigations in both the US and abroad following 2018’s Cambridge Analytica scandal.

    Respond­ing to the ques­tion of whether tech giants like Google and Face­book were monop­o­lies, Trump said, “I think it’s a bad sit­u­a­tion, but obvi­ous­ly there’s some­thing going on in terms of monop­oly.”

    ...

    ———-

    “Don­ald Trump on tech antitrust: ‘There’s some­thing going on’” by Mak­e­na Kel­ly; The Verge; 06/10/2019

    ““Every week you see them going after Face­book and Apple and all of these com­pa­nies … The Euro­pean Union is suing them all of the time,” Trump said. “Well, we should be doing this. They’re our com­pa­nies. So, [the EU is] actu­al­ly attack­ing our com­pa­nies, but we should be doing what they’re doing. They think there’s a monop­oly, but I’m not sure that they think that. They just think this is easy mon­ey.””

    “Well, we should be doing this. They’re our com­pa­nies.” That cer­tain­ly had to get Sil­i­con Val­ley’s atten­tion. Espe­cial­ly this part:

    ...
    Respond­ing to the ques­tion of whether tech giants like Google and Face­book were monop­o­lies, Trump said, “I think it’s a bad sit­u­a­tion, but obvi­ous­ly there’s some­thing going on in terms of monop­oly.”
    ...

    So Trump is now openly talking about breaking up the US tech giants over monopoly concerns. GREAT! A monopoly inquiry is long overdue. Of course, it’s not actually going to be very great if it turns out Trump is just making these threats in order to extract more favorable treatment from these companies in the upcoming 2020 election cycle. And as the following article makes clear, that’s obviously what Trump was doing during this CNBC interview, because he went on to complain that the tech giants were actually discriminating against him in the 2016 election and colluding with the Democrats. Of course, as Trump’s digital campaign manager Brad Parscale has described, the tech giants were absolutely instrumental to the success of the Trump campaign, and companies like Facebook actually embedded employees in the Trump campaign to help the Trump team maximize their use of the platform. And Google-owned YouTube has basically become a dream recruitment tool for the ‘Alt Right’ Trumpian base. So the idea that the tech giants are somehow discriminating against Trump is laughable. It’s true that there have been tepid moves by these platforms to project the image that they won’t tolerate far right extremism, with YouTube pledging to ban white supremacist videos last week. The extent to which this was just a public relations stunt by YouTube remains to be seen, but removing overt neo-Nazi content isn’t going to address most of the right-wing disinformation on the platforms anyway, since so much of that content cloaks the extremism in dog whistles. But, as we should expect, the right-wing meme that the tech giants are run by a bunch of liberals and are out to silence conservative voices is getting pushed heavily right now, and President Trump just promoted that meme again in the CNBC interview as part of his threat over antitrust inquiries:

    Talk­ing Points Memo
    News

    Trump Fur­thers Far-Right Con­spir­a­cy That Tech Com­pa­nies Are Out To Get Him

    By Nicole Lafond
    June 10, 2019 10:06 am

    Pres­i­dent Trump on Mon­day morn­ing con­tin­ued his call-out cam­paign against tech com­pa­nies, fur­ther­ing a far-right con­spir­a­cy the­o­ry that Sil­i­con Val­ley is out to get con­ser­v­a­tives like him­self.

    Dur­ing an inter­view with CNBC on Mon­day morn­ing, Trump com­plained about the Euro­pean Union’s antitrust law­suits against some of the largest U.S. tech com­pa­nies like Face­book, before sug­gest­ing the U.S. should be doing the same thing. He then claimed that tech com­pa­nies “dis­crim­i­nate” against him.

    “Well I can tell you they dis­crim­i­nate against me,” Trump said. “You know, peo­ple talk about col­lu­sion. The real col­lu­sion is between the Democ­rats and these com­pa­nies. ‘Cause they were so against me dur­ing my elec­tion run. Every­body said, ‘If you don’t have them, you can’t win.’ Well, I won. And I’ll win again.”

    Over the week­end, Trump called on Twit­ter to bring back the “banned Con­ser­v­a­tive Voic­es,” like­ly ref­er­enc­ing Twitter’s recent move to kick some con­spir­a­cy the­o­rists, like Alex Jones and oth­ers espous­ing racist views, off the plat­form.

    Twit­ter should let the banned Con­ser­v­a­tive Voic­es back onto their plat­form, with­out restric­tion. It’s called Free­dom of Speech, remem­ber. You are mak­ing a Giant Mis­take!— Don­ald J. Trump (@realDonaldTrump) June 9, 2019

    The con­spir­a­cy that social media and tech com­pa­nies are out to “shad­ow ban” con­ser­v­a­tive voic­es has gained more promi­nence dur­ing the Trump pres­i­den­cy, as Trump him­self and his son Don­ald Trump Jr. have made a strate­gic effort to raise aware­ness about the bogus issue.

    One of Trump’s most vehe­ment sup­port­ers in the House, Rep. Devin Nunes (R‑CA), ulti­mate­ly filed a law­suit against Twit­ter to try to legit­imize his “shad­ow ban­ning” the­o­ry.

    ...

    ———-

    “Trump Fur­thers Far-Right Con­spir­a­cy That Tech Com­pa­nies Are Out To Get Him” by Nicole Lafond; Talk­ing Points Memo; 06/10/2019

    “Well I can tell you they dis­crim­i­nate against me,” Trump said. “You know, peo­ple talk about col­lu­sion. The real col­lu­sion is between the Democ­rats and these com­pa­nies. ‘Cause they were so against me dur­ing my elec­tion run. Every­body said, ‘If you don’t have them, you can’t win.’ Well, I won. And I’ll win again.””

    Yes, according to the right-wing fantasy world, the tech giants were actually all against Trump in 2016 rather than his secret weapon. That’s become one of the fictional ‘facts’ being promoted as part of this right-wing meme, a meme about conservatives getting ‘shadow banned’ by tech companies. And whenever Alex Jones gets banned from a platform, it’s now seen as part of this anti-conservative conspiracy:

    ...
    Over the week­end, Trump called on Twit­ter to bring back the “banned Con­ser­v­a­tive Voic­es,” like­ly ref­er­enc­ing Twitter’s recent move to kick some con­spir­a­cy the­o­rists, like Alex Jones and oth­ers espous­ing racist views, off the plat­form.

    Twit­ter should let the banned Con­ser­v­a­tive Voic­es back onto their plat­form, with­out restric­tion. It’s called Free­dom of Speech, remem­ber. You are mak­ing a Giant Mis­take!— Don­ald J. Trump (@realDonaldTrump) June 9, 2019

    The con­spir­a­cy that social media and tech com­pa­nies are out to “shad­ow ban” con­ser­v­a­tive voic­es has gained more promi­nence dur­ing the Trump pres­i­den­cy, as Trump him­self and his son Don­ald Trump Jr. have made a strate­gic effort to raise aware­ness about the bogus issue.

    One of Trump’s most vehe­ment sup­port­ers in the House, Rep. Devin Nunes (R‑CA), ulti­mate­ly filed a law­suit against Twit­ter to try to legit­imize his “shad­ow ban­ning” the­o­ry.
    ...

    Recall how Fox News was promoting this meme recently, when Laura Ingraham’s prime time Fox News show tried to present figures like Alex Jones, Milo Yiannopoulos, Laura Loomer, and neo-Nazi Paul Nehlen as having been banned from Facebook because of anti-conservative bias (and not because they kept breaking the rules of the platform). This meme is now a central component of right-wing grievance politics and basically just an update to the long-standing ‘liberal media’ meme that helped fuel the rise of right-wing talk radio and Fox News. It’s exactly the kind of ‘working the ref’ meme that is designed to bully the media into giving right-wingers easier treatment. That’s what makes monopoly threats by Trump so disturbing. He’s now basically telling these tech giants, ‘go easy on right-wingers or I’ll break you up,’ heading into a 2020 election cycle where all indications are that disinformation is going to play a bigger role than ever. So the president basically warned all of these tech companies that any new tools they’ve created for dealing with disinformation being spread on their platforms during the 2020 election cycle had better not work too well, at least not if it’s right-wing disinformation.

    Keep in mind that there’s been lit­tle indi­ca­tion that these plat­forms were seri­ous­ly going to do any­thing about dis­in­for­ma­tion any­way, so it’s unlike­ly that Trump’s threat will make this bad sit­u­a­tion worse.

    So that’s a pre­view for the role dis­in­for­ma­tion is going to play in the 2020 elec­tions: Trump is pre­emp­tive­ly run­ning a dis­in­for­ma­tion cam­paign in order to pres­sure the tech giants into not crack­ing down on the planned right-wing dis­in­for­ma­tion cam­paigns the tech giants weren’t seri­ous­ly plan­ning on crack­ing down on in the first place.

    Posted by Pterrafractyl | June 10, 2019, 2:35 pm
  12. So remem­ber the absur­dist ‘civ­il rights audit’ that Face­book pledged to do last year? This was the audit con­duct­ed by retired GOP-Sen­a­tor Jon Kyl to address the fre­quent claims of anti-con­ser­v­a­tive bias per­pet­u­al­ly lev­eled against Face­book by Repub­li­can politi­cians and right-wing pun­dits. It’s a core ele­ment of the right-wing’s ‘work­ing the refs’ strat­e­gy for get­ting a more com­pli­ant media. In this case, the audit involved inter­view­ing 133 con­ser­v­a­tive law­mak­ers and inter­est groups about whether they think Face­book is biased against con­ser­v­a­tives.

    Well, Facebook is finally releasing the results of its audit. And while the audit doesn’t find any systemic bias, it does acknowledge some conservative frustrations, like a longer approval process for submitting ads to Facebook and the fear that the slowed ad approval process might disadvantage right-wing campaigns following the wild successes the right wing had in 2016 using social media political ads. Amusingly, on the same day Facebook released this audit it also announced the return of human editors for curating Facebook’s news feeds. Recall how it was 2016 claims by a Facebook employee that the news feed editors were biased against conservatives (when they were really just biased against disinformation coming disproportionately from right-wing sources) that led to Facebook deciding to switch to an algorithm without human oversight for generating news feeds, which, in turn, turned the news feeds into the right-wing disinformation outlets during the 2016 campaign that were vital to the Trump campaign’s success. So the human news feed editors are apparently back, which will no doubt anger the right wing. Although recall how Facebook hired Tucker Bounds, John McCain’s former adviser and spokesperson, to be Facebook’s communications director focused on the News Feed back in January of 2017. In other words, yeah, there are going to be human editors overseeing the news feeds again, but it’s probably going to be a former Republican operative in charge of those human editors. It’s a reminder that Facebook is going to find a way to make sure its platform is a potent right-wing propaganda tool one way or another. The claims of anti-conservative discrimination are just propaganda designed to allow Facebook to be a more effective right-wing propaganda outlet:

    The Verge

    The con­ser­v­a­tive audit of bias on Face­book is long on feel­ings and short on facts

    And con­ser­v­a­tives are beat­ing Face­book up over it any­way

    By Casey New­ton
    Aug 21, 2019, 6:02pm EDT

    There are many criticisms of Facebook’s size, power, and business model, but two stand out for the intensity with which they are usually discussed. One is that Facebook is a dystopian panopticon that monitors our every move and uses that information to predict and manipulate our behavior. The other is that Facebook has become such a pillar of modern life that every product decision it makes could reshape the body politic forever.

    Today, in an impres­sive flur­ry of news-mak­ing, Face­book took steps to address both con­cerns.

    First, the com­pa­ny said it was final­ly releas­ing its long-delayed “Clear His­to­ry” tool in three coun­tries. (The Unit­ed States is not one of them.) I wrote about it at The Verge:

    It was near­ly a year and a half ago that Face­book CEO Mark Zucker­berg, stand­ing onstage at the company’s annu­al devel­op­er con­fer­ence, announced that the com­pa­ny would begin let­ting users sev­er the con­nec­tion between their web brows­ing his­to­ry and their Face­book accounts. After months of delays, Facebook’s Clear His­to­ry is now rolling out in Ire­land, South Korea, and Spain, with oth­er coun­tries to fol­low “in com­ing months,” the com­pa­ny said. The new tool, which Face­book con­ceived in the wake of the Cam­bridge Ana­lyt­i­ca scan­dal, is designed to give users more con­trol over their data pri­va­cy at the expense of adver­tis­ers’ tar­get­ing capa­bil­i­ties.

    When it arrives in your coun­try, the Clear His­to­ry tool will be part of a new sec­tion of the ser­vice called “Off-Face­book activ­i­ty.” When you open it, you’ll see the apps and web­sites that are track­ing your activ­i­ty and send­ing reports back to Face­book for ad tar­get­ing pur­pos­es. Tap­ping the “Clear His­to­ry” but­ton will dis­so­ci­ate that infor­ma­tion from your Face­book account.

    You can also choose to block com­pa­nies from report­ing their track­ing data about you back to Face­book in the future. You’ll have the choice of dis­con­nect­ing all off-Face­book brows­ing data, or data for spe­cif­ic apps and web­sites. Face­book says the prod­uct is rolling out slow­ly “to help ensure it’s work­ing reli­ably for every­one.”

    Some writ­ers, such as Tony Romm here, point­ed out that Face­book is not actu­al­ly delet­ing your data — which would seem to blunt the impact of a but­ton called “Clear His­to­ry.” In fact, giv­en that the data link you’re shut­ting off is pri­mar­i­ly rel­e­vant to ads you might see lat­er, it feels more like a “Mud­dle Future” but­ton. Face­book, for its part, has cloaked the entire enter­prise into a sec­tion of the app opaque­ly titled “Off-Face­book Activ­i­ty,” which could more or less mean any­thing.

    I find it hard to get too worked up about any of this, because regard­less of whether Face­book is able to take into account your web brows­ing habits, it’s still going to be send­ing you plen­ty of high­ly tar­get­ed ads based on your age, gen­der, and all the oth­er demo­graph­ic data that you forked over when you made your pro­file. Or you could sim­ply turn off ad tar­get­ing on Face­book alto­geth­er, which is more pow­er­ful in this regard than any Clear His­to­ry tool was ever going to be. (Here’s an account from a per­son who did this.)

    Sec­ond, Face­book released the results of its anti-con­ser­v­a­tive bias audit, in which the com­pa­ny asked for­mer Sen. Jon Kyl and the law firm Cov­ing­ton & Burl­ing to ask 133 con­ser­v­a­tive law­mak­ers and inter­est groups to tell it whether they think Face­book is biased against con­ser­v­a­tives.

    This project has fas­ci­nat­ed me since it was announced, since Face­book had clear­ly vol­un­teered to play a game it could only lose. As I’ve writ­ten here before, the def­i­n­i­tion of “bias” has expand­ed to include any time some­one has a bad expe­ri­ence online.

    On one hand, there’s no evi­dence of sys­tem­at­ic bias against con­ser­v­a­tives or any oth­er main­stream polit­i­cal group on Face­book or oth­er plat­forms. On the oth­er hand, there are end­less anec­dotes about the law­mak­er whose ad pur­chase was not approved, or who did not appear in search results, or what­ev­er. Stack enough anec­dotes on top of one anoth­er and you’ve got some­thing that looks a lot like data — cer­tain­ly enough to con­vene a bad-faith con­gres­sion­al hear­ing about plat­form bias, which Repub­li­cans have done repeat­ed­ly now.

    So here comes Kyl’s “audit,” which appears to have tak­en rough­ly the same shape as Pres­i­dent Trump’s call for sto­ries of Amer­i­cans who feel that they have been cen­sored by the big plat­forms. Kyl’s find­ings are short on facts and long on feel­ings. Here’s this, from an op-ed he pub­lished today in The Wall Street Jour­nal.

    As a result of Facebook’s new, more strin­gent ad poli­cies, inter­vie­wees said the ad-approval process has slowed sig­nif­i­cant­ly. Some fear that the new process may be designed to dis­ad­van­tage con­ser­v­a­tive ads in the wake of the Trump campaign’s suc­cess­ful use of social media in 2016.

    So, some anony­mous con­ser­v­a­tives believe that Face­book is involved in a con­spir­a­cy to pre­vent con­ser­v­a­tives from adver­tis­ing. That might come as a sur­prise to, say, Pres­i­dent Trump, who is out­spend­ing all Democ­rats on Face­book ads. But the Kyl report has no room for empir­i­cal thought. What’s impor­tant here is that 133 unnamed peo­ple have feel­ings, and that they spent the bet­ter part of two years talk­ing about them in inter­views that we can’t read. (Here’s a link to the pub­lished report, which clocks in at a very thin eight pages. And here’s a help­ful rebut­tal from Media Mat­ters, which uses data to illus­trate how par­ti­san con­ser­v­a­tive pages con­tin­ue to thrive on Face­book.)

    Despite the fact that we have no idea who Kyl talked to, or what they said beyond his mea­ger bul­let points, the report still had at least some effect on Face­book pol­i­cy­mak­ing. As Sara Fis­ch­er reports in Axios, Face­book ads can now show med­ical tubes con­nect­ed to the human body, which appar­ent­ly make for more vis­cer­al­ly com­pelling anti-abor­tion ads:

    The med­ical tube pol­i­cy makes it eas­i­er for pro-life ads focused on sur­vival sto­ries of infants born before full-term to be accept­ed by Facebook’s ad pol­i­cy. Face­book notes that the pol­i­cy could also ben­e­fit oth­er groups who wish to dis­play med­ical tubes in ads for can­cer research, human­i­tar­i­an relief and elder­ly care.

    And how are con­ser­v­a­tives using the infor­ma­tion from today’s audit? If you guessed “as a cud­gel to con­tin­ue beat­ing Face­book with,” you win today’s grand prize. Here’s Brent Bozell: “The Face­book Kyl cov­er-up is aston­ish­ing. 133 groups pre­sent­ed Kyl with evi­dence of FB’s agen­da against con­ser­v­a­tives and he dis­hon­est­ly did FB’s bid­ding instead.”

    And here’s Sen. Josh Haw­ley (R‑MO):

    “Face­book should con­duct an actu­al audit by giv­ing a trust­ed third par­ty access to its algo­rithm, its key doc­u­ments, and its con­tent mod­er­a­tion pro­to­cols,” Haw­ley said in a state­ment. “Then Face­book should release the results to the pub­lic.”

    I asked Hawley’s peo­ple if the sen­a­tor was aware that Facebook’s con­tent mod­er­a­tion pro­to­cols have been pub­lic for years, but I nev­er heard back.

    Any­way, Face­book wrapped up the day by announc­ing — in a fan­tas­ti­cal­ly bizarre feat of tim­ing — that it would begin to hire human beings to curate your news sto­ries, just as Apple does for Apple News. (Apply for the job here! Let me know if you get it!) This is the right thing to do — our leaky infor­ma­tion sphere needs expe­ri­enced edi­tors with news judg­ment more than ever — but also one guar­an­teed to court con­tro­ver­sy. One person’s cura­tion is, after all, anoth­er person’s “bias.”

    The return of human edi­tors to Face­book, on the very day that it pub­lish­es its inves­ti­ga­tion into alleged bias against con­ser­v­a­tives, is a real time-is-a-flat-cir­cle moment. After all, it was trumped-up out­rage over sup­posed bias in its last group of human edi­tors that helped to set us down this benight­ed path to begin with. I want to end on some­thing I wrote last Feb­ru­ary on this sub­ject:

    I’m struck how, in ret­ro­spect, the sto­ry that helped to trig­ger our cur­rent anx­i­eties had the prob­lem exact­ly wrong. The sto­ry offered a dire warn­ing that Face­book exert­ed too much edi­to­r­i­al con­trol, in the one nar­row sec­tion of the site where it actu­al­ly employed human edi­tors, when in fact the prob­lem under­ly­ing our glob­al mis­in­for­ma­tion cri­sis is that it exert­ed too lit­tle. Gizmodo’s sto­ry fur­ther declared that Face­book had become hos­tile to con­ser­v­a­tive view­points when in fact con­ser­v­a­tive view­points — and con­ser­v­a­tive hoax­es — were thriv­ing across the plat­form.

    Last month, NewsWhip pub­lished a list of the most-engaged pub­lish­ers on Face­book. The no. 1 com­pa­ny post­ed more than 49,000 times in Decem­ber alone, earn­ing 21 mil­lion likes, com­ments, and shares. That pub­lish­er was Fox News. And the idea that Face­book sup­press­es the shar­ing of con­ser­v­a­tive news now seems very quaint indeed.

    ...

    ———-

    “The con­ser­v­a­tive audit of bias on Face­book is long on feel­ings and short on facts” by Casey New­ton; The Verge; 08/21/2019

    “On one hand, there’s no evi­dence of sys­tem­at­ic bias against con­ser­v­a­tives or any oth­er main­stream polit­i­cal group on Face­book or oth­er plat­forms. On the oth­er hand, there are end­less anec­dotes about the law­mak­er whose ad pur­chase was not approved, or who did not appear in search results, or what­ev­er. Stack enough anec­dotes on top of one anoth­er and you’ve got some­thing that looks a lot like data — cer­tain­ly enough to con­vene a bad-faith con­gres­sion­al hear­ing about plat­form bias, which Repub­li­cans have done repeat­ed­ly now.

    Sure, there’s no actual evidence of an anti-conservative bias. But there are 133 anonymous right-wing operatives who feel differently. That’s the basis for this audit. And despite the lengths Jon Kyl’s team went to in describing the various feelings of bias felt by these 133 anonymous right-wing operatives, he’s still being accused of waging a cover-up on Facebook’s behalf by the right-wing media. Because you can’t stop ‘working the refs’:

    ...
    So here comes Kyl’s “audit,” which appears to have tak­en rough­ly the same shape as Pres­i­dent Trump’s call for sto­ries of Amer­i­cans who feel that they have been cen­sored by the big plat­forms. Kyl’s find­ings are short on facts and long on feel­ings. Here’s this, from an op-ed he pub­lished today in The Wall Street Jour­nal.

    As a result of Facebook’s new, more strin­gent ad poli­cies, inter­vie­wees said the ad-approval process has slowed sig­nif­i­cant­ly. Some fear that the new process may be designed to dis­ad­van­tage con­ser­v­a­tive ads in the wake of the Trump campaign’s suc­cess­ful use of social media in 2016.

    So, some anony­mous con­ser­v­a­tives believe that Face­book is involved in a con­spir­a­cy to pre­vent con­ser­v­a­tives from adver­tis­ing. That might come as a sur­prise to, say, Pres­i­dent Trump, who is out­spend­ing all Democ­rats on Face­book ads. But the Kyl report has no room for empir­i­cal thought. What’s impor­tant here is that 133 unnamed peo­ple have feel­ings, and that they spent the bet­ter part of two years talk­ing about them in inter­views that we can’t read. (Here’s a link to the pub­lished report, which clocks in at a very thin eight pages. And here’s a help­ful rebut­tal from Media Mat­ters, which uses data to illus­trate how par­ti­san con­ser­v­a­tive pages con­tin­ue to thrive on Face­book.)

    ...

    And how are con­ser­v­a­tives using the infor­ma­tion from today’s audit? If you guessed “as a cud­gel to con­tin­ue beat­ing Face­book with,” you win today’s grand prize. Here’s Brent Bozell: “The Face­book Kyl cov­er-up is aston­ish­ing. 133 groups pre­sent­ed Kyl with evi­dence of FB’s agen­da against con­ser­v­a­tives and he dis­hon­est­ly did FB’s bid­ding instead.”
    ...

    And on the same day of the release of this report, Face­book announces the return of human edi­tors for the news feed:

    ...
    Any­way, Face­book wrapped up the day by announc­ing — in a fan­tas­ti­cal­ly bizarre feat of tim­ing — that it would begin to hire human beings to curate your news sto­ries, just as Apple does for Apple News. (Apply for the job here! Let me know if you get it!) This is the right thing to do — our leaky infor­ma­tion sphere needs expe­ri­enced edi­tors with news judg­ment more than ever — but also one guar­an­teed to court con­tro­ver­sy. One person’s cura­tion is, after all, anoth­er person’s “bias.”

    The return of human edi­tors to Face­book, on the very day that it pub­lish­es its inves­ti­ga­tion into alleged bias against con­ser­v­a­tives, is a real time-is-a-flat-cir­cle moment. After all, it was trumped-up out­rage over sup­posed bias in its last group of human edi­tors that helped to set us down this benight­ed path to begin with. I want to end on some­thing I wrote last Feb­ru­ary on this sub­ject:

    I’m struck how, in ret­ro­spect, the sto­ry that helped to trig­ger our cur­rent anx­i­eties had the prob­lem exact­ly wrong. The sto­ry offered a dire warn­ing that Face­book exert­ed too much edi­to­r­i­al con­trol, in the one nar­row sec­tion of the site where it actu­al­ly employed human edi­tors, when in fact the prob­lem under­ly­ing our glob­al mis­in­for­ma­tion cri­sis is that it exert­ed too lit­tle. Gizmodo’s sto­ry fur­ther declared that Face­book had become hos­tile to con­ser­v­a­tive view­points when in fact con­ser­v­a­tive view­points — and con­ser­v­a­tive hoax­es — were thriv­ing across the plat­form.

    Last month, NewsWhip pub­lished a list of the most-engaged pub­lish­ers on Face­book. The no. 1 com­pa­ny post­ed more than 49,000 times in Decem­ber alone, earn­ing 21 mil­lion likes, com­ments, and shares. That pub­lish­er was Fox News. And the idea that Face­book sup­press­es the shar­ing of con­ser­v­a­tive news now seems very quaint indeed.

    ...

    So it looks like we’re probably in store for a new round of allegations of anti-conservative bias at Facebook just in time for 2020, which will presumably include allegations of anti-conservative bias held by the human news feed editors. With that in mind, it’s worth noting that Facebook has expanded its approach to misinformation detection since 2016, when it last had human news feed curation. For example, Facebook has now teamed up with Poynter’s International Fact-Checking Network (IFCN) to find unbiased organizations that Facebook can outsource the responsibility of fact-checking to. In December of 2016, Facebook announced that it was partnering with ABC News, Snopes, PolitiFact, FactCheck.org, and the AP (all approved by IFCN) to help it identify misinformation on the platform. All non-partisan organizations, albeit the kinds of organizations the right-wing media routinely labels as ‘left-wing mainstream media’ outlets despite the lack of any meaningful left-wing bias. Then, in December of 2017, Facebook announced it was adding the right-wing Weekly Standard to its list of fact-checkers, which soon resulted in left-wing articles getting flagged as disinformation for spurious reasons. Note that no left-wing site was chosen at this point. But the Weekly Standard went out of business, so in April of this year Facebook announced it was adding Check Your Fact to its list of fact-checking organizations. Who is behind Check Your Fact? The Daily Caller! This is almost like hiring Breitbart to do your fact-checking.

    According to the following article, it was Joel Kaplan, the former White House aide to George W. Bush who now serves as Facebook’s global policy chief and is the company’s “protector against allegations of political bias,” who has been pushing to get Check Your Fact added to Facebook’s list of fact-checkers. This was a rather contentious decision within Facebook’s boardroom, but Mark Zuckerberg apparently generally backed Kaplan’s push.

    And that tells us how this new round of human-curated news feeds is going to go: the humans doing the curating are probably going to have their judgment curated by right-wing misinformation outlets like the Daily Caller:

    Vox

    Facebook’s con­tro­ver­sial fact-check­ing part­ner­ship with a Dai­ly Caller-fund­ed web­site, explained

    In try­ing to stop the spread of fake news, the social media behe­moth has cre­at­ed new prob­lems.

    By Aaron Rupar
    Updat­ed May 6, 2019, 9:40am EDT

    Face­book knows that the spread of fake news on the plat­form dur­ing the 2016 pres­i­den­tial cam­paign was almost its undo­ing, so it has cho­sen to part­ner with third-par­ty media orga­ni­za­tions to fact-check pub­lish­ers on its plat­form in order to stave off more crit­i­cism. That makes sense. But some of its choic­es in part­ners — includ­ing a new fact-check­er fund­ed by a right-lean­ing news out­let found­ed by Tuck­er Carl­son — has only invit­ed more.

    Last week, Facebook announced that it’s partnering with Check Your Fact — a subsidiary of the right-wing Daily Caller, a site known for its ties to white nationalists — as one of six third-party organizations it currently works with to fact-check content for American users. The partnership has already come under intense criticism from climate journalists (among others) who are concerned that the Daily Caller’s editorial stance on issues like climate change, which is uncontroversial among scientists but isn’t treated as such on right-wing media, will spread even more misinformation on Facebook.

    In an inter­view, Face­book spokesper­son Lau­ren Svens­son defend­ed the part­ner­ship. She not­ed that Check Your Fact, like all fact-check­ers Face­book part­ners with, is cer­ti­fied by Poynter’s Inter­na­tion­al Fact-Check­ing Net­work (IFCN). Asked about the right-wing pro­cliv­i­ties of Check Your Fact’s par­ent com­pa­ny, Svenn­son referred to the IFCN’s cer­ti­fi­ca­tion process­es and said that “we do believe in hav­ing a diverse set of fact-check­ing part­ners.” Check Your Fact, for its part, says it oper­ates inde­pen­dent­ly from the Dai­ly Caller, and touts its record of accu­rate fact-checks.

    The reality is that Facebook has a fake news problem that could hurt its bottom line, but it also has a political problem. If it doesn’t give credence to popular but disreputable websites like the Daily Caller, it runs the risk of angering Republicans who use the platform. But in giving credence to sites of that sort, the platform runs the risk of perpetuating the same “fake news” problem third-party fact-checkers are meant to solve.

    Facebook’s fake news prob­lem, explained

    As Tim­o­thy B. Lee explained for Vox days after the 2016 elec­tion, “fake news” was a big prob­lem on Face­book dur­ing that year’s pres­i­den­tial cam­paign:

    Over the course of 2016, Face­book users learned that the pope endorsed Don­ald Trump (he didn’t), that a Demo­c­ra­t­ic oper­a­tive was mur­dered after agree­ing to tes­ti­fy against Hillary Clin­ton (it nev­er hap­pened), that Bill Clin­ton raped a 13-year-old girl (a total fab­ri­ca­tion), and many oth­er total­ly bogus “news” sto­ries. Sto­ries like this thrive on Face­book because Facebook’s algo­rithm pri­or­i­tizes “engage­ment” — and a reli­able way to get read­ers to engage is by mak­ing up out­ra­geous non­sense about politi­cians they don’t like.

    After a ton of pub­lic scruti­ny, includ­ing in the form of high-pro­file con­gres­sion­al hear­ings, Face­book after the elec­tion began part­ner­ing with news orga­ni­za­tions like the Asso­ci­at­ed Press, FactCheck.org, Lead Sto­ries, Poli­ti­Fact, and Sci­ence Feed­back to fact-check pub­lish­ers. That’s all well and good — those orga­ni­za­tions have rep­u­ta­tions for non­par­ti­san­ship and accu­ra­cy.

    But in attempt­ing to sti­fle “fake news,” Repub­li­cans have noticed that right-lean­ing news out­lets, ideas, and politi­cians some­times got caught up in the purge. Just look to Alex Jones, who active­ly spread con­spir­a­cy the­o­ries due to his pop­u­lar­i­ty on plat­forms like Face­book and YouTube. Con­ser­v­a­tives began to com­plain they were unfair­ly tar­get­ed. Ear­li­er this month, Sen Ted Cruz (R‑TX) held hear­ings inter­ro­gat­ing big tech pre­cise­ly on the issue of bias against con­ser­v­a­tives.

    To counter those (most­ly unfound­ed) alle­ga­tions that the plat­form is biased toward lib­er­als, Face­book is part­ner­ing with right-wing sites as well.

    This leads to sit­u­a­tions where Face­book part­ners with right-lean­ing orga­ni­za­tions to fact-check lib­er­als sites. Some lib­er­al sites have been tar­get­ed as “false,” there­by lim­it­ing dis­tri­b­u­tion of the “false” arti­cle by as much as 80 per­cent — a big prob­lem con­sid­er­ing Face­book is still the most com­mon­ly used plat­form in the coun­try for news, despite reduc­tions in dis­tri­b­u­tion that have hurt lib­er­al and con­ser­v­a­tive news sites alike.

    The first con­ser­v­a­tive site Face­book part­nered with for fact-check­ing was the Week­ly Stan­dard, which ceased oper­a­tions last Decem­ber. That part­ner­ship became a source of con­tro­ver­sy three months before then, when con­ser­v­a­tive fact-check­ers flagged an arti­cle from the lib­er­al pub­li­ca­tion ThinkProgress as “false” on seman­tic grounds. (Full dis­clo­sure: I am a for­mer ThinkProgress employ­ee, as are sev­er­al oth­er cur­rent Vox staffers.) As Vox’s Zack Beauchamp explained at the time, while the article’s the­sis was arguably accu­rate, the head­line like­ly went too far. But the pun­ish­ment result­ing from the Week­ly Standard’s “false” des­ig­na­tion was worse than the crime:

    Last week, the lib­er­al pub­li­ca­tion ThinkProgress pub­lished a piece on Supreme Court nom­i­nee Brett Kavanaugh’s con­fir­ma­tion hear­ing with the head­line “Brett Kavanaugh said he would kill Roe v. Wade and almost no one noticed.” The fact-check­er for the Week­ly Stan­dard ruled it was false. Facebook’s pun­ish­ment mech­a­nism kicked in, and the ThinkProgress arti­cle was cut off from being seen by about 80 per­cent of its poten­tial Face­book audi­ence.

    On Tues­day, the author of the ThinkProgress piece — edi­tor Ian Mill­his­er — pub­licly defend­ed the the­sis of his piece and accused Face­book of “pan­der­ing to the right” by allow­ing a con­ser­v­a­tive mag­a­zine to block lib­er­al arti­cles. The stakes here are high: Face­book pro­vides about 10 to 15 per­cent of ThinkProgress’s traf­fic, which means that get­ting choked off from read­ers there is a non­triv­ial hit to its read­er­ship.

    Svens­son told Vox that there was no direct con­nec­tion between the Week­ly Stan­dard shut­ting down and Face­book part­ner­ing with anoth­er con­ser­v­a­tive site.

    Face­book report­ed­ly has been inter­est­ed in part­ner­ing with the Dai­ly Caller for some time. In Decem­ber, the Wall Street Jour­nal report­ed that Joel Kaplan, a for­mer White House aide to George W. Bush who now serves as Facebook’s glob­al pol­i­cy chief and is the company’s “pro­tec­tor against alle­ga­tions of polit­i­cal bias,” made a failed push to part­ner with the Dai­ly Caller last year:

    This sum­mer, Mr. Kaplan pushed to part­ner with right-wing news site The Dai­ly Caller’s fact-check­ing divi­sion after con­ser­v­a­tives accused Face­book of work­ing only with main­stream pub­lish­ers, peo­ple famil­iar with the dis­cus­sions said. Con­ser­v­a­tive crit­ics argued those pub­li­ca­tions had a built-in lib­er­al bias.

    Mr. Kaplan argued that The Dai­ly Caller was accred­it­ed by the Poyn­ter Insti­tute, a St. Peters­burg, Fla.-based jour­nal­ism non­prof­it that over­sees a net­work of fact-check­ers. Oth­er exec­u­tives, includ­ing some in the Wash­ing­ton, D.C. office, argued that the pub­li­ca­tion print­ed mis­in­for­ma­tion. The con­tentious dis­cus­sion involved Mr. Zucker­berg, who appeared to side with Mr. Kaplan, and Chief Oper­at­ing Offi­cer Sheryl Sand­berg. The debate end­ed in Novem­ber when The Dai­ly Caller’s fact-check­ing oper­a­tion lost its accred­i­ta­tion.

    Accord­ing to IFCN direc­tor Bay­bars Örsek, Check Your Fact was expelled from IFCN’s ver­i­fied sig­na­to­ries last Novem­ber because “they failed to dis­close one of their fund­ing sources [the Dai­ly Caller News Foun­da­tion] in their appli­ca­tion,” but were rein­stat­ed ear­li­er this year after reap­ply­ing.

    But even though Check Your Fact is now being more trans­par­ent about its fund­ing sources, those fund­ing sources in and of them­selves present prob­lem­at­ic con­flicts of inter­est — ones that the IFCN’s cer­ti­fi­ca­tion process doesn’t account for.

    How Face­book choos­es its fact-check­ers

    All the fact-check­ers Face­book part­ners with are cer­ti­fied by Poynter’s Inter­na­tion­al Fact Check­ing Net­work (IFCN). Poyn­ter eval­u­ates appli­cants based on a set of cri­te­ria includ­ing “non­par­ti­san­ship and fair­ness,” “trans­paren­cy of sources,” “trans­paren­cy of fund­ing and orga­ni­za­tion,” “trans­paren­cy of method­ol­o­gy,” and an “open and hon­est cor­rec­tions pol­i­cy.”

    IFCN cer­ti­fi­ca­tion is a nec­es­sary con­di­tion for part­ner­ing with Face­book, but once a site is cer­ti­fied, it’s up to Face­book to decide whether to part­ner with it. There are cur­rent­ly 62 orga­ni­za­tions with IFCN cer­ti­fi­ca­tion glob­al­ly, but Face­book only part­ners with six in the Unit­ed States.

    “We don’t believe we at Face­book should be respon­si­ble for the verac­i­ty of con­tent,” Face­book spokesper­son Svens­son told me. “We believe in the cred­i­bil­i­ty of fact-check­ers that [IFCN] cer­ti­fies.”

    Notably, how­ev­er, the IFCN’s cri­te­ria for cer­ti­fi­ca­tion does not include con­flicts of inter­est. That’s the source of one of the con­cerns cli­mate jour­nal­ists are rais­ing about Check Your Fact.

    Accord­ing to a report pub­lished last month by PRWatch, the Charles Koch Foun­da­tion account­ed for 83 per­cent of the Dai­ly Caller News Foundation’s rev­enues in 2016, and the Dai­ly Caller News Foun­da­tion employs some of Check Your Fact’s fact-check­ers. Green­peace reports that the Koch Fam­i­ly Foun­da­tions spent more than $127 mil­lion from 1997 to 2017 financ­ing groups “that have attacked cli­mate change sci­ence and pol­i­cy solu­tions.”

    That con­flict of inter­est has raised con­cerns that Check Your Fact’s fact-check­ing role could have a chill­ing effect on cli­mate jour­nal­ism on Face­book.

    As lead­ing cli­ma­tol­o­gist Michael Mann told ThinkProgress, “It is appalling that Face­book has teamed up with a Koch-fund­ed orga­ni­za­tion that pro­motes cli­mate change denial. ... Face­book must dis­as­so­ci­ate itself from this orga­ni­za­tion.”

    Face­book says it wants a “diver­si­ty” of orga­ni­za­tions for fact-check­ing, but accord­ing to Media Bias/Fact Check, none of the fact-check­ers Face­book cur­rent­ly part­ners with in the US are left-lean­ing, and Check Your Fact is the only one with a right-of-cen­ter rat­ing. Face­book is essen­tial­ly buy­ing into the argu­ment con­ser­v­a­tives have laid forth — that main­stream news out­lets have a lib­er­al bias and that con­ser­v­a­tives need spe­cial con­sid­er­a­tion in the process.

    Hav­ing accu­rate fact-checks doesn’t mean a fact-check­er is free of bias

    Check Your Fact’s web­site pledges that the site is “non-par­ti­san” and “loy­al to nei­ther peo­ple nor par­ties — only the truth.” (Full dis­clo­sure: Check Your Fact has also fact-checked one of this author’s own tweets). It also talks up the website’s “edi­to­r­i­al inde­pen­dence.” Indeed, a perusal of Check Your Fact’s web­site doesn’t indi­cate that there’s any­thing fac­tu­al­ly wrong with the site’s fact-checks, but the sto­ries it choos­es to fact-check speak to a bias of its own.

    For instance, as of April 30, the site’s home­page fea­tures more fact-checks of state­ments made by Hillary Clin­ton — for exam­ple, “FACT CHECK: Did Hillary Clin­ton Once Say That Demo­c­ra­t­ic Vot­ers Are ‘Just Plain Stu­pid’?” (the site notes there’s no evi­dence Clin­ton ever said it) — than it does state­ments from the cur­rent pres­i­dent, Don­ald Trump, who just sur­passed a his­toric 10,000 false or mis­lead­ing claims from main­stream fact-check­ers.

    And as Scott Wald­man recent­ly detailed for E&E News, even when Check Your Fact does fact-checks of claims like Trump’s recent one about wind tur­bines caus­ing can­cer that ulti­mate­ly arrive at the cor­rect con­clu­sion (Trump’s claim was false), the site ele­vates fringe voic­es in the process.

    While the web­site labeled the claim as false — and quot­ed can­cer experts say­ing as much — it also quot­ed Nation­al Wind Watch, an anti­wind group that orga­nizes and fights against wind tur­bines through­out the coun­try. A spokesman for that group claimed the pres­i­dent was cor­rect; he said tur­bines cause a lack of sleep and stress, which can lead to can­cer.

    In March, Check Your Fact gave cre­dence to Sen­ate Major­i­ty Leader Mitch McConnell’s claims that the Green New Deal would cost more than every dol­lar the fed­er­al gov­ern­ment has spent in its his­to­ry. The Ken­tucky Repub­li­can and Check Your Fact relied on a sin­gle study, pro­duced by a con­ser­v­a­tive think tank, the Amer­i­can Action Forum.

    But the author of that study has acknowl­edged that its cal­cu­la­tion of a $93 tril­lion price tag is essen­tial­ly a guess, since the Green New Deal is cur­rent­ly a vague res­o­lu­tion. E&E News has report­ed on how the Amer­i­can Action Forum is con­nect­ed to a web of con­ser­v­a­tive groups that fund polit­i­cal attacks through undis­closed donors and that have been fund­ed by fos­sil fuel lob­by­ing inter­ests opposed to envi­ron­men­tal reg­u­la­tions (Cli­matewire, April 1).

    It would be hard to com­plain if Face­book part­nered with rep­utable web­sites for fact-check­ing. But in order to pre­empt accu­sa­tions of left-wing bias, the plat­form has repeat­ed­ly part­nered with out­lets that draw into ques­tion how com­mit­ted the plat­form real­ly is to root­ing out fake news. (In a state­ment sent to Vox, Check Your Fact edi­tor David Sivak pushed back on char­ac­ter­i­za­tions of his site as being biased toward the right, writ­ing, “[t]hese last cou­ple of weeks have been reveal­ing, as a num­ber of news out­lets have resort­ed to mis­rep­re­sent­ing our work. Even when we fact-check con­ser­v­a­tives for putting words in Hillary Clinton’s mouth, that’s some­how mis­con­strued as con­ser­v­a­tive ‘bias’ on our part. The truth is, Check Your Fact has a two-year track record of fair, even­hand­ed arti­cles that hold fig­ures on both sides of the polit­i­cal aisle account­able, includ­ing Trump.”)

    There are indi­ca­tions that Facebook’s fact-check­ing prob­lems go deep­er than its part­ner­ship with the Dai­ly Caller. In Feb­ru­ary, one of the sites that was work­ing with Face­book, Snopes, announced it was end­ing the part­ner­ship.

    Two months before that announce­ment, the Guardian report­ed on some of the frus­tra­tions that may have moti­vat­ed that deci­sion.

    “Cur­rent and for­mer Face­book factcheck­ers told the Guardian that the tech platform’s col­lab­o­ra­tion with out­side reporters has pro­duced min­i­mal results and that they’ve lost trust in Face­book, which has repeat­ed­ly refused to release mean­ing­ful data about the impacts of their work,” the Guardian report­ed.

    The dis­ease of fake news is bad. But the “cures” Face­book is try­ing have side effects of their own.

    Face­book knows that it faces a tough sit­u­a­tion. Much of its val­ue lies in the fact that it has such a wide user base — lib­er­al or con­ser­v­a­tive, old or young — and that it can mon­e­tize those users. The preva­lence of mis­in­for­ma­tion threat­ens its abil­i­ty to sur­vive in a very real way, but so does poten­tial reg­u­la­tion from Repub­li­can politi­cians who don’t seem to have a firm grasp of how the inter­net works but harp on about lib­er­al bias any­way.

    Face­book, by part­ner­ing with a right-wing fact-check­ing orga­ni­za­tion, is mak­ing a con­ces­sion to con­ser­v­a­tive argu­ments. But by not includ­ing lib­er­al sites, it’s also tac­it­ly sug­gest­ing that main­stream out­lets have a lib­er­al bias — which isn’t nec­es­sar­i­ly true.

    ...

    ———-

    “Facebook’s con­tro­ver­sial fact-check­ing part­ner­ship with a Dai­ly Caller-fund­ed web­site, explained” by Aaron Rupar; Vox; 05/06/2019

    “Last week, Face­book announced that it’s part­ner­ing with Check Your Fact — a sub­sidiary of the right-wing Dai­ly Caller, a site known for its ties to white nation­al­ists — as one of six third-par­ty orga­ni­za­tions it cur­rent­ly works with to fact-check con­tent for Amer­i­can users. The part­ner­ship has already come under intense crit­i­cism from cli­mate jour­nal­ists (among oth­ers) who are con­cerned that the Dai­ly Caller’s edi­to­r­i­al stance on issues like cli­mate change, which is uncon­tro­ver­sial among sci­en­tists but isn’t treat­ed as such on right-wing media, will spread even more mis­in­for­ma­tion on Face­book.”

    The Dai­ly Caller — a cesspool of white nation­al­ist pro­pa­gan­da — is the fact-check­er sug­ar-dad­dy for one of the biggest sources of news on the plan­et. This is the state of the media in 2019. It’s also a reminder that, while Don­ald Trump is wide­ly rec­og­nized as the fig­ure that ‘cap­tured’ the heart and soul of the Repub­li­can Par­ty in recent years, the real fig­ure that accom­plished this was Alex Jones. That’s why ensur­ing Face­book is safe for far right dis­in­for­ma­tion is so impor­tant to the par­ty. Alex Jones’s mes­sage is the Repub­li­can Par­ty’s unof­fi­cial zeit­geist at this point. Trump has just been rid­ing the wave Jones has been build­ing for years.

    Oh, but it gets worse. Of course: it turns out the Charles Koch Foun­da­tion account­ed for 83 per­cent of the Dai­ly Caller News Foundation’s rev­enues in 2016, and the Dai­ly Caller News Foun­da­tion employs some of Check Your Fact’s fact-check­ers. So this is more of a Dai­ly Caller/Koch joint oper­a­tion. But Face­book explains this deci­sion by assert­ing that “we do believe in hav­ing a diverse set of fact-check­ing part­ners.” And yet there aren’t any actu­al left-wing orga­ni­za­tions hired to do sim­i­lar work:

    ...
    In an inter­view, Face­book spokesper­son Lau­ren Svens­son defend­ed the part­ner­ship. She not­ed that Check Your Fact, like all fact-check­ers Face­book part­ners with, is cer­ti­fied by Poynter’s Inter­na­tion­al Fact-Check­ing Net­work (IFCN). Asked about the right-wing pro­cliv­i­ties of Check Your Fact’s par­ent com­pa­ny, Svens­son referred to the IFCN’s cer­ti­fi­ca­tion process­es and said that “we do believe in hav­ing a diverse set of fact-check­ing part­ners.” Check Your Fact, for its part, says it oper­ates inde­pen­dent­ly from the Dai­ly Caller, and touts its record of accu­rate fact-checks.

    The real­i­ty is that Face­book has a fake news prob­lem that could hurt its bot­tom line, but it also has a polit­i­cal prob­lem. If it doesn’t give cre­dence to pop­u­lar but dis­rep­utable web­sites like the Dai­ly Caller, it runs the risk of anger­ing Repub­li­cans who use the plat­form. But in giv­ing cre­dence to sites of that sort, the plat­form runs the risk of per­pet­u­at­ing the same “fake news” prob­lem third-par­ty fact-check­ers are meant to solve.

    ...

    How Face­book choos­es its fact-check­ers

    All the fact-check­ers Face­book part­ners with are cer­ti­fied by Poynter’s Inter­na­tion­al Fact Check­ing Net­work (IFCN). Poyn­ter eval­u­ates appli­cants based on a set of cri­te­ria includ­ing “non­par­ti­san­ship and fair­ness,” “trans­paren­cy of sources,” “trans­paren­cy of fund­ing and orga­ni­za­tion,” “trans­paren­cy of method­ol­o­gy,” and an “open and hon­est cor­rec­tions pol­i­cy.”

    IFCN cer­ti­fi­ca­tion is a nec­es­sary con­di­tion for part­ner­ing with Face­book, but once a site is cer­ti­fied, it’s up to Face­book to decide whether to part­ner with it. There are cur­rent­ly 62 orga­ni­za­tions with IFCN cer­ti­fi­ca­tion glob­al­ly, but Face­book only part­ners with six in the Unit­ed States.

    “We don’t believe we at Face­book should be respon­si­ble for the verac­i­ty of con­tent,” Face­book spokesper­son Svens­son told me. “We believe in the cred­i­bil­i­ty of fact-check­ers that [IFCN] cer­ti­fies.”

    Notably, how­ev­er, the IFCN’s cri­te­ria for cer­ti­fi­ca­tion does not include con­flicts of inter­est. That’s the source of one of the con­cerns cli­mate jour­nal­ists are rais­ing about Check Your Fact.

    Accord­ing to a report pub­lished last month by PRWatch, the Charles Koch Foun­da­tion account­ed for 83 per­cent of the Dai­ly Caller News Foundation’s rev­enues in 2016, and the Dai­ly Caller News Foun­da­tion employs some of Check Your Fact’s fact-check­ers. Green­peace reports that the Koch Fam­i­ly Foun­da­tions spent more than $127 mil­lion from 1997 to 2017 financ­ing groups “that have attacked cli­mate change sci­ence and pol­i­cy solu­tions.”

    ...

    Face­book says it wants a “diver­si­ty” of orga­ni­za­tions for fact-check­ing, but accord­ing to Media Bias/Fact Check, none of the fact-check­ers Face­book cur­rent­ly part­ners with in the US are left-lean­ing, and Check Your Fact is the only one with a right-of-cen­ter rat­ing. Face­book is essen­tial­ly buy­ing into the argu­ment con­ser­v­a­tives have laid forth — that main­stream news out­lets have a lib­er­al bias and that con­ser­v­a­tives need spe­cial con­sid­er­a­tion in the process.
    ...

    And it’s been none oth­er than for­mer White House aide to George W. Bush, Joel Kaplan, who has been push­ing to give the Dai­ly Caller this kind of over­sight over the plat­for­m’s con­tent. Kaplan is appar­ent­ly Face­book’s “pro­tec­tor against alle­ga­tions of polit­i­cal bias.” And while some of Face­book’s exec­u­tives rec­og­nized that the Dai­ly Caller is a ser­i­al ped­dler of mis­in­for­ma­tion, Mark Zucker­berg report­ed­ly took Kaplan’s side dur­ing these debates:

    ...
    Face­book report­ed­ly has been inter­est­ed in part­ner­ing with the Dai­ly Caller for some time. In Decem­ber, the Wall Street Jour­nal report­ed that Joel Kaplan, a for­mer White House aide to George W. Bush who now serves as Facebook’s glob­al pol­i­cy chief and is the company’s “pro­tec­tor against alle­ga­tions of polit­i­cal bias,” made a failed push to part­ner with the Dai­ly Caller last year:

    This sum­mer, Mr. Kaplan pushed to part­ner with right-wing news site The Dai­ly Caller’s fact-check­ing divi­sion after con­ser­v­a­tives accused Face­book of work­ing only with main­stream pub­lish­ers, peo­ple famil­iar with the dis­cus­sions said. Con­ser­v­a­tive crit­ics argued those pub­li­ca­tions had a built-in lib­er­al bias.

    Mr. Kaplan argued that The Dai­ly Caller was accred­it­ed by the Poyn­ter Insti­tute, a St. Peters­burg, Fla.-based jour­nal­ism non­prof­it that over­sees a net­work of fact-check­ers. Oth­er exec­u­tives, includ­ing some in the Wash­ing­ton, D.C. office, argued that the pub­li­ca­tion print­ed mis­in­for­ma­tion. The con­tentious dis­cus­sion involved Mr. Zucker­berg, who appeared to side with Mr. Kaplan, and Chief Oper­at­ing Offi­cer Sheryl Sand­berg. The debate end­ed in Novem­ber when The Dai­ly Caller’s fact-check­ing oper­a­tion lost its accred­i­ta­tion.

    Accord­ing to IFCN direc­tor Bay­bars Örsek, Check Your Fact was expelled from IFCN’s ver­i­fied sig­na­to­ries last Novem­ber because “they failed to dis­close one of their fund­ing sources [the Dai­ly Caller News Foun­da­tion] in their appli­ca­tion,” but were rein­stat­ed ear­li­er this year after reap­ply­ing.

    But even though Check Your Fact is now being more trans­par­ent about its fund­ing sources, those fund­ing sources in and of them­selves present prob­lem­at­ic con­flicts of inter­est — ones that the IFCN’s cer­ti­fi­ca­tion process doesn’t account for.
    ...

    Yep, Check Your Fact was­n’t even ini­tial­ly trans­par­ent with the IFCN about its fund­ing sources and instead hid the fact that it’s financed by the Koch-fund­ed Dai­ly Caller News Foun­da­tion. That’s the kind of orga­ni­za­tion this is. And that’s why the inevitable right-wing claims of bias that we’re undoubt­ed­ly going to hear in the 2020 elec­tion will be such a bad joke.

    In relat­ed news, Face­book recent­ly announced that it’s ban­ning pro-Trump ads from The Epoch Times. Recall the recent reports about how The Epoch Times, fund­ed by Falun Gong devo­tees, has become the sec­ond biggest buy­er of pro-Trump Face­book ads in the world (after only the Trump cam­paign itself) and has become a cen­tral play­er in gen­er­at­ing all sorts of wild far right con­spir­a­cy the­o­ries like ‘QAnon’. So was The Epoch Times banned for aggres­sive­ly push­ing all sorts of mis­in­for­ma­tion? Nope, The Epoch Times was banned from buy­ing Face­book ads for not being upfront about its fund­ing sources. That was it.

    Posted by Pterrafractyl | August 26, 2019, 12:18 pm
  13. This next arti­cle shows how Face­book promised to ban white nation­al­ist con­tent from its plat­form in March 2019. It was not until then that Face­book acknowl­edged that white nation­al­ism “can­not be mean­ing­ful­ly sep­a­rat­ed from white suprema­cy and orga­nized hate groups” and banned it. Face­book does not ban Holo­caust denial, but does work to reduce the spread of such con­tent by lim­it­ing the dis­tri­b­u­tion of posts and pre­vent­ing Holo­caust-deny­ing groups and pages from appear­ing in algo­rith­mic rec­om­men­da­tions.

    How­ev­er, a Guardian analy­sis found long­stand­ing Face­book pages for VDare, a white nation­al­ist web­site focused on oppo­si­tion to immi­gra­tion; the Affir­ma­tive Right, a rebrand­ing of Richard Spencer’s blog Alter­na­tive Right, which helped launch the “alt-right” move­ment; and Amer­i­can Free Press, a newslet­ter found­ed by the white suprema­cist Willis Car­to, in addi­tion to mul­ti­ple pages asso­ci­at­ed with Red Ice TV. Also oper­at­ing open­ly on the plat­form are two Holo­caust denial orga­ni­za­tions, the Com­mit­tee for Open Debate on the Holo­caust and the Insti­tute for His­tor­i­cal Review.

    The Guardian reviewed white nation­al­ist out­lets on Face­book amid a debate over the company’s deci­sion to include Bre­it­bart News in Face­book News, a new sec­tion of its mobile app ded­i­cat­ed to “high qual­i­ty” jour­nal­ism. Crit­ics of Bre­it­bart News object to its inclu­sion in what Zucker­berg has described as a “trust­ed source” of infor­ma­tion on two fronts: its repeat­ed pub­li­ca­tion of par­ti­san mis­in­for­ma­tion and con­spir­a­cy the­o­ries – and its pro­mo­tion of extreme right-wing views. Steve Ban­non called Bre­it­bart “the plat­form for the alt-right” in 2016. In 2017, Buz­zFeed News report­ed on emails and doc­u­ments show­ing how a for­mer Bre­it­bart edi­tor had worked direct­ly with a white nation­al­ist and a neo-Nazi to write and edit an arti­cle about the “alt-right” move­ment. The SPLC and numer­ous news orga­ni­za­tions have report­ed on a cache of emails between the senior Trump advis­er Stephen Miller and the for­mer Bre­it­bart writer Katie McHugh show­ing how Miller pushed for cov­er­age and inclu­sion of white nation­al­ist ideas in the pub­li­ca­tion.

    The arti­cle also quotes researcher Joan Dono­van’s anal­o­gy that just because the KKK pro­duced its own news­pa­pers did not mean those papers qual­i­fied as news, and her assess­ment that Bre­it­bart “was a polit­i­cal organ before any­thing else. Real­ly what they were try­ing to do was give white suprema­cist pol­i­tics a veneer of objec­tiv­i­ty.”

    The Guardian, Julia Car­rie Wong, Thu 21 Nov 2019 06.00 EST

    White nation­al­ists are open­ly oper­at­ing on Face­book. The com­pa­ny won’t act

    Guardian analy­sis finds VDare and Red Ice TV among sev­er­al out­lets that are still on the plat­form despite Facebook’s promised ban

    Last mod­i­fied on Thu 21 Nov 2019 14.38 EST

    On 7 Novem­ber, Lana Lok­t­eff, an Amer­i­can white nation­al­ist, intro­duced a “thought crim­i­nal and polit­i­cal pris­on­er and friend” as a fea­tured guest on her inter­net talk show, Red Ice TV. 

    For about 90 min­utes, Lok­t­eff and her guest – Greg John­son, a promi­nent white nation­al­ist and edi­tor-in-chief of the white nation­al­ist pub­lish­er Counter-Cur­rents – dis­cussed Johnson’s recent arrest in Nor­way amid author­i­ties’ con­cerns about his past expres­sion of “respect” for the far-right mass mur­der­er Anders Breivik. In 2012, John­son wrote that he was angered by Breivik’s crimes because he feared they would harm the cause of white nation­al­ism but had dis­cov­ered a “strange new respect” for him dur­ing his tri­al; Breivik’s mur­der of 77 peo­ple has been cit­ed as an inspi­ra­tion by the sus­pect­ed Christchurch killer, the man who mur­dered the British MP Jo Cox, and a US coast guard offi­cer accused of plot­ting a white nation­al­ist ter­ror attack.

    Just a few weeks ear­li­er, Red Ice TV had suf­fered a seri­ous set­back when it was per­ma­nent­ly banned from YouTube for repeat­ed vio­la­tions of its pol­i­cy against hate speech. But Red Ice TV still had a home on Face­book, allow­ing the channel’s 90,000 fol­low­ers to stream the dis­cus­sion on Face­book Watch – the plat­form Mark Zucker­berg launched as a place “to share an expe­ri­ence and bring peo­ple togeth­er who care about the same things”.

    The con­ver­sa­tion wasn’t a unique occur­rence. Face­book promised to ban white nation­al­ist con­tent from its plat­form in March 2019, revers­ing a years-long pol­i­cy to tol­er­ate the ide­ol­o­gy. But Red Ice TV is just one of sev­er­al white nation­al­ist out­lets that remain active on the plat­form today.

    A Guardian analy­sis found long­stand­ing Face­book pages for VDare, a white nation­al­ist web­site focused on oppo­si­tion to immi­gra­tion; the Affir­ma­tive Right, a rebrand­ing of Richard Spencer’s blog Alter­na­tive Right, which helped launch the “alt-right” move­ment; and Amer­i­can Free Press, a newslet­ter found­ed by the white suprema­cist Willis Car­to, in addi­tion to mul­ti­ple pages asso­ci­at­ed with Red Ice TV. Also oper­at­ing open­ly on the plat­form are two Holo­caust denial orga­ni­za­tions, the Com­mit­tee for Open Debate on the Holo­caust and the Insti­tute for His­tor­i­cal Review.

    “There’s no ques­tion that every sin­gle one of these groups is a white nation­al­ist group,” said Hei­di Beirich, the direc­tor of the South­ern Pover­ty Law Center’s (SPLC) Intel­li­gence Project, after review­ing the Guardian’s find­ings. “It’s not even up for debate. There’s real­ly no excuse for not remov­ing this mate­r­i­al.”

    White nation­al­ists sup­port the estab­lish­ment of whites-only nation states, both by exclud­ing new non-white immi­grants and, in some cas­es, by expelling or killing non-white cit­i­zens and res­i­dents. Many con­tem­po­rary pro­po­nents of white nation­al­ism fix­ate on con­spir­a­cy the­o­ries about demo­graph­ic change and con­sid­er racial or eth­nic diver­si­ty to be acts of “geno­cide” against the white race.

    Face­book declined to take action against any of the pages iden­ti­fied by the Guardian. A com­pa­ny spokesper­son said: “We are inves­ti­gat­ing to deter­mine whether any of these groups vio­late our poli­cies against orga­nized hate. We reg­u­lar­ly review orga­ni­za­tions against our pol­i­cy and any that vio­late will be banned per­ma­nent­ly.”

    The spokesper­son also said that Face­book does not ban Holo­caust denial, but does work to reduce the spread of such con­tent by lim­it­ing the dis­tri­b­u­tion of posts and pre­vent­ing Holo­caust-deny­ing groups and pages from appear­ing in algo­rith­mic rec­om­men­da­tions. Such lim­i­ta­tions are being applied to the two Holo­caust denial groups iden­ti­fied by the Guardian, the spokesper­son said.

    The Guardian under­took a review of white nation­al­ist out­lets on Face­book amid a debate over the company’s deci­sion to include Bre­it­bart News in Face­book News, a new sec­tion of its mobile app ded­i­cat­ed to “high qual­i­ty” jour­nal­ism. Face­book has faced sig­nif­i­cant pres­sure to reduce the dis­tri­b­u­tion of mis­in­for­ma­tion on its plat­form. Crit­ics of Bre­it­bart News object to its inclu­sion in what Zucker­berg has described as a “trust­ed source” of infor­ma­tion on two fronts: its repeat­ed pub­li­ca­tion of par­ti­san mis­in­for­ma­tion and con­spir­a­cy the­o­ries – and its pro­mo­tion of extreme rightwing views.

    A grow­ing body of evi­dence shows the influ­ence of white nation­al­ism on Breitbart’s pol­i­tics. Breitbart’s for­mer exec­u­tive chair­man Steve Ban­non called the site “the plat­form for the alt-right” in 2016. In 2017, Buz­zFeed News report­ed on emails and doc­u­ments show­ing how a for­mer Bre­it­bart edi­tor had worked direct­ly with a white nation­al­ist and a neo-Nazi to write and edit an arti­cle about the “alt-right” move­ment.

    This month, the SPLC and numer­ous news orga­ni­za­tions have report­ed on a cache of emails between the senior Trump advis­er Stephen Miller and the for­mer Bre­it­bart writer Katie McHugh show­ing how Miller pushed for cov­er­age and inclu­sion of white nation­al­ist ideas in the pub­li­ca­tion. The emails show Miller direct­ing McHugh to read links from VDare and anoth­er white nation­al­ist pub­li­ca­tion, Amer­i­can Renais­sance, among oth­er sources. In one case, report­ed by NBC News, Bre­it­bart ran an anti-immi­gra­tion op-ed sub­mit­ted by Miller under the byline “Bre­it­bart News”.

    A Bre­it­bart spokes­woman, Eliz­a­beth Moore, said that the out­let “is not now nor has it ever been a plat­form for the alt-right”. Moore also said McHugh was “a trou­bled indi­vid­ual” who had been fired for a num­ber of rea­sons “includ­ing lying”.

    “Bre­it­bart is the fun­nel through which VDare’s ideas get out to the pub­lic,” said Beirich. “It’s basi­cal­ly a con­duit of con­spir­a­cy the­o­ry and racism into the con­ser­v­a­tive move­ment … We don’t list them as a hate group, but to con­sid­er them a trust­ed news source is pan­der­ing at best.”

    Draw­ing the line between pol­i­tics and news
    Face­book exec­u­tives have respond­ed defen­sive­ly to crit­i­cism of Bre­it­bart News’s inclu­sion in the Face­book News tab, argu­ing that the com­pa­ny should not pick ide­o­log­i­cal sides.

    “Part of hav­ing this be a trust­ed source is that it needs to have a diver­si­ty of … views in there,” Zucker­berg said at an event in New York in response to a ques­tion about Breitbart’s inclu­sion. Camp­bell Brown, Facebook’s head of news part­ner­ships, wrote in a lengthy Face­book post that she believed Face­book should “include con­tent from ide­o­log­i­cal pub­lish­ers on both the left and the right”. Adam Mosseri, the head of Insta­gram and a long­time Face­book exec­u­tive, ques­tioned on Twit­ter whether the company’s crit­ics “real­ly want a plat­form of our scale to make deci­sions to exclude news orga­ni­za­tions based on their ide­ol­o­gy”. In response to a ques­tion from the Guardian, Mosseri acknowl­edged that Face­book does ban the ide­ol­o­gy of white nation­al­ism, then added: “The tricky bit is, and this is always the case, where exact­ly to draw the line.”

    One of the chal­lenges for Face­book is that white nation­al­ist and white suprema­cist groups adopt the trap­pings of news out­lets or pub­li­ca­tions to dis­sem­i­nate their views, said Joan Dono­van, the direc­tor of the Tech­nol­o­gy and Social Change Research Project at Har­vard and an expert on media manip­u­la­tion.

    Red Ice TV is “a group that styles them­selves as a news orga­ni­za­tion when they are pri­mar­i­ly a polit­i­cal orga­ni­za­tion, and the pol­i­tics are staunch­ly white suprema­cist”, Dono­van said. “We have seen this hap­pen in the past where orga­ni­za­tions like the KKK have pro­duced their own news­pa­pers … It doesn’t mean that it qual­i­fies as news.”

    Many peo­ple argue that Bre­it­bart is more of a “polit­i­cal front” than a news oper­a­tion, she added. “When Steve Ban­non left Bre­it­bart in order to work much more con­crete­ly with cam­paigns, you could see that Bre­it­bart was a polit­i­cal organ before any­thing else. Real­ly what they were try­ing to do was give white suprema­cist pol­i­tics a veneer of objec­tiv­i­ty.”

    Dono­van said she expects plat­form com­pa­nies will reassess their treat­ment of Bre­it­bart fol­low­ing the release of the Miller emails. She also called for Face­book to take a more “holis­tic” approach to com­bat­ing US domes­tic ter­ror­ism, as it does with for­eign ter­ror­ist groups.

    A Face­book spokesper­son not­ed that Face­book News is still in a test phase and that Face­book is not pay­ing Bre­it­bart News for its inclu­sion in the pro­gram. The spokesper­son said the com­pa­ny would con­tin­ue to lis­ten to feed­back from news pub­lish­ers.

    A his­to­ry of tol­er­ance for hate
    Face­book has long assert­ed that “hate speech has no space on Face­book”, whether it comes from a news out­let or not.

    But the $566bn com­pa­ny has con­sis­tent­ly allowed a vari­ety of hate groups to use its plat­form to spread their mes­sage, even when alert­ed to their pres­ence by the media or advo­ca­cy groups. In July 2017, in response to queries from the Guardian, Face­book said that more than 160 pages and groups iden­ti­fied as hate groups by SPLC did not vio­late its com­mu­ni­ty stan­dards. Those groups includ­ed:

    Amer­i­can Renais­sance, a white suprema­cist web­site and mag­a­zine;

    The Coun­cil of Con­ser­v­a­tive Cit­i­zens, a white nation­al­ist orga­ni­za­tion ref­er­enced in the man­i­festo writ­ten by Dylann Roof before he mur­dered nine peo­ple in a black church;

    The Occi­den­tal Observ­er, an online pub­li­ca­tion described by the Anti-Defama­tion League as the “pri­ma­ry voice for anti­semitism from far-right intel­lec­tu­als”;

    the Tra­di­tion­al­ist Work­er par­ty, a neo-Nazi group that had already been involved in mul­ti­ple vio­lent inci­dents; and

    Counter-Cur­rents, the white nation­al­ist pub­lish­ing imprint run by the white nation­al­ist Greg John­son, the recent guest on Red Ice TV.

    Three weeks lat­er, fol­low­ing the dead­ly Unite the Right ral­ly in Char­lottesville, Face­book announced a crack­down on vio­lent threats and removed pages asso­ci­at­ed with the Tra­di­tion­al­ist Work­er par­ty, Counter-Cur­rents, and the neo-Nazi orga­ni­za­tion Gal­lows Tree Wotans­volk. Many of the rest remained.
    A year lat­er, a Guardian review found that many of the groups and indi­vid­u­als involved in the Char­lottesville event were back on Face­book, includ­ing the neo-Con­fed­er­ate League of the South, Patri­ot Front and Jason Kessler, who orga­nized Unite the Right. Face­book took those pages down fol­low­ing inquiries from the Guardian, but declined to take action against the page of David Duke, the noto­ri­ous white suprema­cist and for­mer Grand Wiz­ard of the Ku Klux Klan.

    In May 2018, Vice News’s Moth­er­board report­ed on inter­nal Face­book train­ing doc­u­ments that showed the com­pa­ny was dis­tin­guish­ing between white suprema­cy and white nation­al­ism – and explic­it­ly allow­ing white nation­al­ism.

    In July 2018, Zucker­berg defend­ed the moti­va­tions of peo­ple who engage in Holo­caust denial dur­ing an inter­view, say­ing that he did not “think that they’re inten­tion­al­ly get­ting it wrong”. Fol­low­ing wide­spread crit­i­cism, he retract­ed his remarks.

    It was not until March 2019 that Face­book acknowl­edged that white nation­al­ism “can­not be mean­ing­ful­ly sep­a­rat­ed from white suprema­cy and orga­nized hate groups” and banned it.

    Beirich expressed deep frus­tra­tion with Facebook’s track record.

    “We have con­sult­ed with Face­book many, many times,” Beirich added. “We have sent them our list of hate groups. It’s not like they’re not aware, and I always get the sense that there is good faith desire [to take action], and yet over and over again [hate groups] keep pop­ping up. It’s just not pos­si­ble for civ­il rights groups like SPLC to play the role of flag­ging this stuff for Face­book. It’s a com­pa­ny that makes $42bn a year and I have a staff of 45.”

    https://www.theguardian.com/technology/2019/nov/21/facebook-white-nationalists-ban-vdare-red-ice?CMP=Share_iOSApp_Other

    Posted by Mary Benton | November 23, 2019, 6:55 pm
  14. Remem­ber the sto­ry from ear­li­er this year about Face­book out­sourc­ing its ‘fact check­ing’ oper­a­tions to orga­ni­za­tions like the Koch-financed far right Dai­ly Caller News Foun­da­tion? Well, here’s the flip side of sto­ries like that: Face­book just lost its last fact check­er orga­ni­za­tion in the Nether­lands, the Dutch news­pa­per NU.nl. Why did the news­pa­per leave the pro­gram? Because Face­book forced NU.nl to reverse its rul­ing that the claims in a far right Dutch ad were unsub­stan­ti­at­ed, in keep­ing with Face­book’s new pol­i­cy of not fact check­ing politi­cians. The group labeled an ad by a far right politi­cian that claimed that 10 per­cent of farm­land in Roma­nia is owned by non-Euro­peans as unsub­stan­ti­at­ed, but Face­book inter­vened and forced a rever­sal of that rul­ing. So NU.nl quit the fact check­ing pro­gram because it was­n’t allowed to check the facts of soci­ety’s biggest and loud­est liars:

    The Verge

    Facebook’s only fact-check­ing ser­vice in the Nether­lands just quit

    ‘What is the point of fight­ing fake news if you are not allowed to tack­le politi­cians?’

    By Zoe Schif­fer
    Nov 26, 2019, 3:02pm EST

    Face­book is now oper­at­ing with­out a third-par­ty fact-check­ing ser­vice in the Nether­lands. The company’s only part­ner, Dutch news­pa­per NU.nl, just quit over a dis­pute regard­ing the social network’s pol­i­cy to allow politi­cians to run ads con­tain­ing mis­in­for­ma­tion.

    “What is the point of fight­ing fake news if you are not allowed to tack­le politi­cians?” asked NU.nl’s edi­tor-in-chief Gert-Jaap Hoek­man in a blog post announc­ing the deci­sion. “Let one thing be clear: we stand behind the con­tent of our fact checks.”

    The con­flict began in May when Face­book inter­vened in NU.nl’s deci­sion to label an ad from the Dutch politi­cian Esther de Lange as unsub­stan­ti­at­ed. The ad’s claim, that 10 per­cent of farm­land in Roma­nia is owned by non-Euro­peans, could not be ver­i­fied, which led NU.nl to label it as false. Face­book inter­vened in that deci­sion, telling the orga­ni­za­tion that politi­cians’ speech should not be fact-checked.

    Facebook’s adver­tis­ing guide­lines do not allow mis­in­for­ma­tion in ads, and the com­pa­ny relies on third-par­ty fact-check­ing ser­vices to vet the claims mar­keters are mak­ing. In Octo­ber, how­ev­er, the com­pa­ny for­mal­ly exempt­ed politi­cians from being part of this pro­gram. “From now on we will treat speech from politi­cians as news­wor­thy con­tent that should, as a gen­er­al rule, be seen and heard,” wrote Nick Clegg, Facebook’s VP of com­mu­ni­ca­tions.

    ...

    Pres­sure began to mount after Jack Dorsey announced that Twit­ter would no longer allow polit­i­cal ads on the plat­form. “We believe polit­i­cal mes­sage reach should be earned, not bought,” he wrote on Twit­ter. Some of Facebook’s own employ­ees penned an open let­ter to Mark Zucker­berg, ask­ing him to con­sid­er chang­ing his mind.

    NU.nl felt increas­ing­ly uncom­fort­able with its rela­tion­ship with Face­book. The orga­ni­za­tion had become the only third-par­ty fact-check­ing ser­vice Face­book used in the Nether­lands, after Lei­den Uni­ver­si­ty pulled out from its part­ner­ship last year. When it became clear the social net­work would not change its posi­tion, NU.nl decid­ed to put an end to its part­ner­ship as well.

    “We val­ue the work that Nu.nl has done and regret to see them go, but respect their deci­sion as an inde­pen­dent busi­ness,” a Face­book spokesper­son said in a state­ment emailed to The Verge. “We have strong rela­tion­ships with 55 fact-check­ing part­ners around the world who fact-check con­tent in 45 lan­guages, and we plan to con­tin­ue expand­ing the pro­gram in Europe and hope­ful­ly in the Nether­lands.”

    ———-

    “Facebook’s only fact-check­ing ser­vice in the Nether­lands just quit” by Zoe Schif­fer; The Verge; 11/26/2019

    ““What is the point of fight­ing fake news if you are not allowed to tack­le politi­cians?” asked NU.nl’s edi­tor-in-chief Gert-Jaap Hoek­man in a blog post announc­ing the deci­sion. “Let one thing be clear: we stand behind the con­tent of our fact checks.””

    What is the point of fight­ing fake news if you are not allowed to tack­le politi­cians? That’s a pret­ty valid ques­tion for a fact check­er, espe­cial­ly in an era of the rise of the far right, when troll­ish polit­i­cal gaslight­ing has become the norm. At some point, oper­at­ing under those kinds of con­straints effec­tive­ly turns fact check­ing orga­ni­za­tions into facil­i­ta­tors of those lies.

    In relat­ed news, check out the recent addi­tion to Face­book’s “trust­ed” news feed: Bre­it­bart News:

    The Verge

    Mark Zucker­berg is strug­gling to explain why Bre­it­bart belongs on Face­book News

    By Adi Robert­son
    Oct 25, 2019, 6:18pm EDT

    On Fri­day morn­ing, Face­book announced its plan to spend mil­lions of dol­lars on high-qual­i­ty jour­nal­ism, fuel­ing the launch of a new ded­i­cat­ed news tab on its plat­form. CEO Mark Zucker­berg joined News Corp CEO Robert Thom­son for an inter­view soon after, and Thom­son ham­mered home the need for objec­tive jour­nal­ism in the age of social media, wax­ing nos­tal­gic about the impor­tance of rig­or­ous fact-check­ing in his ear­ly career.

    ...

    Face­book News is part­ner­ing with a vari­ety of region­al news­pa­pers and some major nation­al part­ners, includ­ing USA Today and The Wall Street Jour­nal. But as The New York Times and Nie­man Lab report, its “trust­ed” sources also include Bre­it­bart, a far-right site whose co-founder Steve Ban­non once described it as a plat­form for the white nation­al­ist “alt-right.” Bre­it­bart has been crit­i­cized for repeat­ed inac­cu­rate and incen­di­ary report­ing, often at the expense of immi­grants and peo­ple of col­or. Last year, Wikipedia declared it an unre­li­able source for cita­tions, along­side the British tabloid Dai­ly Mail and the left-wing site Occu­py Democ­rats.

    That’s led to ques­tions about why Bre­it­bart belongs on Face­book News, a fea­ture that will sup­pos­ed­ly be held to far tougher stan­dards than the nor­mal News Feed. In a ques­tion-and-answer ses­sion after the inter­view, Zucker­berg told Wash­ing­ton Post colum­nist Mar­garet Sul­li­van that Face­book would have “objec­tive stan­dards” for qual­i­ty.

    “Most of the rest of what we oper­ate is help­ing give peo­ple a voice broad­ly and mak­ing sure that every­one can share their opin­ion,” he said. “That’s not this. This is a space that is ded­i­cat­ed to high-qual­i­ty and curat­ed news.”

    But when New York Times reporter Marc Tra­cy asked how includ­ing Bre­it­bart served that cause, Zucker­berg empha­sized its pol­i­tics, not its report­ing. “Part of hav­ing this be a trust­ed source is that it needs to have a diver­si­ty of views in there, so I think you want to have con­tent that rep­re­sents dif­fer­ent per­spec­tives,” he said. Zucker­berg reit­er­at­ed that these per­spec­tives should com­ply with Facebook’s stan­dards, and he was cagey about Bre­it­bart’s pres­ence, say­ing that “hav­ing some­one be pos­si­ble or eli­gi­ble to show up” doesn’t guar­an­tee fre­quent place­ment. “But I cer­tain­ly think you want to include a breadth of con­tent in there,” he said.

    Face­book hasn’t released a full list of News part­ners, so we don’t know the project’s full scope. Bre­it­bart is hard­ly the only right-lean­ing name on Facebook’s list, which includes Nation­al Review, The Wash­ing­ton Times, and News Corp’s own Fox News. But it has faced unique chal­lenges to its edi­to­r­i­al integri­ty — includ­ing, in recent years, some of Bre­it­bart’s own for­mer staff denounc­ing its poli­cies.

    Zuckerberg’s answer is unlike­ly to sat­is­fy crit­ics, who see the site’s inclu­sion as an exam­ple of Face­book sur­ren­der­ing prin­ci­ple to appease right-wing com­men­ta­tors. Left-lean­ing non­prof­it Media Mat­ters for Amer­i­ca called the deci­sion “reflex­ive pan­der­ing to con­ser­v­a­tive pun­dits, right-wing extrem­ists, and white nation­al­ists.” Activist group Sleep­ing Giants — which has spear­head­ed a major adver­tis­er boy­cott of Bre­it­bart — retweet­ed sev­er­al reporters crit­i­ciz­ing the news, includ­ing Buz­zFeed News writer Joe Bern­stein, whose report­ing on Bre­it­bart and white nation­al­ism caused one of its biggest back­ers to sell his stake.

    But Face­book wants to win over Repub­li­cans, includ­ing law­mak­ers who have grilled Zucker­berg in Con­gress over shaky claims of “anti-con­ser­v­a­tive bias,” as well as Pres­i­dent Don­ald Trump, who has threat­ened tech com­pa­nies with new laws and antitrust action. Leav­ing out Bre­it­bart might earn Face­book con­dem­na­tion from these quar­ters.

    In a New York Times edi­to­r­i­al, Zucker­berg not­ed that out­right mis­in­for­ma­tion is banned on Face­book News. “If a pub­lish­er posts mis­in­for­ma­tion, it will no longer appear in the prod­uct,” he wrote. So in the­o­ry, Bre­it­bart will only stay on Face­book News if it hews to the rules. But that doesn’t explain why Face­book chose an out­let known for sen­sa­tion­al­ism and mis­in­for­ma­tion in the first place — and as Face­book News matures, kick­ing off a site like Bre­it­bart might cause more con­tro­ver­sy than nev­er includ­ing it at all.

    ———–

    “Mark Zucker­berg is strug­gling to explain why Bre­it­bart belongs on Face­book News” by Adi Robert­son; The Verge; 10/25/2019

    “Face­book News is part­ner­ing with a vari­ety of region­al news­pa­pers and some major nation­al part­ners, includ­ing USA Today and The Wall Street Jour­nal. But as The New York Times and Nie­man Lab report, its “trust­ed” sources also include Bre­it­bart, a far-right site whose co-founder Steve Ban­non once described it as a plat­form for the white nation­al­ist “alt-right.” Bre­it­bart has been crit­i­cized for repeat­ed inac­cu­rate and incen­di­ary report­ing, often at the expense of immi­grants and peo­ple of col­or. Last year, Wikipedia declared it an unre­li­able source for cita­tions, along­side the British tabloid Dai­ly Mail and the left-wing site Occu­py Democ­rats.”

    It’s not just a news feed. It’s a “trust­ed news” feed. That’s how Mark Zucker­berg says the Face­book News fea­ture is sup­posed to work. And yet when asked why Bre­it­bart News was invit­ed into this “trust­ed” col­lec­tion of news sources, Zucker­berg explains that in order for the Face­book News feed to be trust­ed it needs to draw from a wide vari­ety of sources across the ide­o­log­i­cal spec­trum. So in order for Face­book News to be trust­ed, it appar­ent­ly needs to include sources from far right ide­olo­gies that thrive on warp­ing the truth and cre­at­ing fic­tion­al expla­na­tions of how the world works:

    ...
    That’s led to ques­tions about why Bre­it­bart belongs on Face­book News, a fea­ture that will sup­pos­ed­ly be held to far tougher stan­dards than the nor­mal News Feed. In a ques­tion-and-answer ses­sion after the inter­view, Zucker­berg told Wash­ing­ton Post colum­nist Mar­garet Sul­li­van that Face­book would have “objec­tive stan­dards” for qual­i­ty.

    “Most of the rest of what we oper­ate is help­ing give peo­ple a voice broad­ly and mak­ing sure that every­one can share their opin­ion,” he said. “That’s not this. This is a space that is ded­i­cat­ed to high-qual­i­ty and curat­ed news.”

    But when New York Times reporter Marc Tra­cy asked how includ­ing Bre­it­bart served that cause, Zucker­berg empha­sized its pol­i­tics, not its report­ing. “Part of hav­ing this be a trust­ed source is that it needs to have a diver­si­ty of views in there, so I think you want to have con­tent that rep­re­sents dif­fer­ent per­spec­tives,” he said. Zucker­berg reit­er­at­ed that these per­spec­tives should com­ply with Facebook’s stan­dards, and he was cagey about Bre­it­bart’s pres­ence, say­ing that “hav­ing some­one be pos­si­ble or eli­gi­ble to show up” doesn’t guar­an­tee fre­quent place­ment. “But I cer­tain­ly think you want to include a breadth of con­tent in there,” he said.

    Face­book hasn’t released a full list of News part­ners, so we don’t know the project’s full scope. Bre­it­bart is hard­ly the only right-lean­ing name on Facebook’s list, which includes Nation­al Review, The Wash­ing­ton Times, and News Corp’s own Fox News. But it has faced unique chal­lenges to its edi­to­r­i­al integri­ty — includ­ing, in recent years, some of Bre­it­bart’s own for­mer staff denounc­ing its poli­cies.
    ...

    So as we can see, Face­book faces enor­mous chal­lenges with its new Face­book News and fact check­ing ser­vices, and they boil down to the same under­ly­ing chal­lenge: the chron­ic decep­tion at the foun­da­tion of far right world­views and the enor­mous oppor­tu­ni­ty social media cre­ates for prof­itably spread­ing those lies. And as we can also see, Face­book is, true to form, fail­ing immense­ly at over­com­ing those chal­lenges, along with fail­ing the meta-chal­lenge of over­com­ing its own insa­tiable cor­po­rate greed.

    Posted by Pterrafractyl | December 2, 2019, 1:55 pm
  15. There are a lot of ques­tions swirling around the his­toric impeach­ment vote tak­ing place in the House of Rep­re­sen­ta­tives today. But one thing is abun­dant­ly clear at this point: There’s going to be A LOT of polit­i­cal lying in the 2020 elec­tion. That’s lit­er­al­ly what the impeach­ment is all about. It’s lit­er­al­ly an impeach­ment over a scheme to extort the Ukrain­ian gov­ern­ment into gin­ning up false crim­i­nal charges against the politi­cian the Trump team saw as their like­li­est 2020 oppo­nent. You almost can’t come up with a big­ger red flag about upcom­ing elec­toral lies than this impeach­ment.

    And that’s part of what makes Face­book’s deci­sion to explic­it­ly allow politi­cians to run decep­tive ads on Face­book so dis­turb­ing. The pres­i­dent is lit­er­al­ly being impeached over a scheme to cre­ate a giant lie against his 2020 polit­i­cal oppo­nent. The Trump team isn’t just plan­ning on stan­dard polit­i­cal exag­ger­a­tions or mis­char­ac­ter­i­za­tions. The Trump reelec­tion cam­paign is root­ed in cre­at­ing and exploit­ing fake crim­i­nal charges against his pre­sumed oppo­nent, and Face­book has already made clear to the Trump cam­paign, and any oth­er cam­paigns, that they can lie as much as they want to and Face­book will glad­ly run their ads. This is despite Twit­ter ban­ning polit­i­cal ads alto­geth­er, Google restrict­ing how they can be tar­get­ed, and Face­book’s own employ­ees issu­ing open let­ters decry­ing Face­book’s ad pol­i­cy. Keep in mind that Face­book does con­tin­ue to fact-check polit­i­cal ads issued by non-politi­cians and will remove ads from non-politi­cians it deems to be decep­tive. Only ads from politi­cians are being giv­en this lie loop­hole.

    Giv­en that both Face­book and lying are key com­po­nents of the Trump reelec­tion strat­e­gy, per­haps it won’t come as a sur­prise to learn that it’s report­ed­ly none oth­er than Trump’s biggest backer at Face­book, Peter Thiel, who has been inter­nal­ly lob­by­ing Mark Zucker­berg to keep Face­book’s pol­i­cy of allow­ing decep­tive polit­i­cal ads:

    Salon

    Peter Thiel advised Mark Zucker­berg to not to revise pol­i­cy allow­ing lies in polit­i­cal ads: report
    Thiel, who spoke at the 2016 Repub­li­can Nation­al Con­ven­tion, has been an out­spo­ken sup­port­er of Pres­i­dent Trump

    Matthew Rozsa
    Decem­ber 17, 2019 10:37PM (UTC)

    A new report reveals that Peter Thiel — a co-founder of Pay­Pal, as well as a sup­port­er of Pres­i­dent Don­ald Trump — has advised Face­book CEO Mark Zucker­berg against revis­ing a con­tro­ver­sial pol­i­cy, which gives promi­nent politi­cians carte blanche to spread lies on the social media plat­form.

    Thiel has report­ed­ly urged Face­book to stick by a con­tro­ver­sial pol­i­cy first announced in Sep­tem­ber exempt­ing polit­i­cal ads from being fact-checked, accord­ing to The Wall Street Jour­nal. Though some direc­tors and exec­u­tives encour­aged Face­book to crack down on unre­li­able infor­ma­tion or ban polit­i­cal adver­tise­ments alto­geth­er, Thiel has report­ed­ly urged Zucker­berg not to bow to pub­lic pres­sure.

    While Thiel declined to com­ment on the report, a Face­book spokesman told The Jour­nal that “many of the deci­sions we’re mak­ing at Face­book come with dif­fi­cult trade-offs, and we’re approach­ing them with care­ful rig­or at all lev­els of the com­pa­ny, from the board of direc­tors down. We’re for­tu­nate to have a board with diverse expe­ri­ences and per­spec­tives so we can ensure debate that reflects a cross sec­tion of views.”

    As The Jour­nal not­ed, reac­tions to Zuckerberg’s deci­sion not to cen­sor polit­i­cal ads in the name of guard­ing free­dom of speech has been gen­er­al­ly sup­port­ed by con­ser­v­a­tives and gen­er­al­ly opposed by lib­er­als. Con­ser­v­a­tives such as Thiel and his sup­port­ers among Face­book exec­u­tives argue that the com­pa­ny should sup­port free speech, and it is not the social media com­pa­ny’s respon­si­bil­i­ty to fact-check polit­i­cal ads. Lib­er­als, by con­trast, are con­cerned about the spread of mis­in­for­ma­tion on social media dur­ing the 2016 pres­i­den­tial elec­tion with an eye on the fast-approach­ing 2020 elec­tion.

    Though Thiel and his funds have sold most of their Face­book shares, he was the first out­side investor in Face­book and gave Zucker­berg valu­able advice, which has helped the com­pa­ny grow into the behe­moth it is today. As a result, Zucker­berg report­ed­ly trusts Thiel’s insights and val­ues his advice.

    ...

    ———-

    “Peter Thiel advised Mark Zucker­berg to not to revise pol­i­cy allow­ing lies in polit­i­cal ads: report” by Matthew Rozsa; Salon; 12/17/2019

    “Thiel has report­ed­ly urged Face­book to stick by a con­tro­ver­sial pol­i­cy first announced in Sep­tem­ber exempt­ing polit­i­cal ads from being fact-checked, accord­ing to The Wall Street Jour­nal. Though some direc­tors and exec­u­tives encour­aged Face­book to crack down on unre­li­able infor­ma­tion or ban polit­i­cal adver­tise­ments alto­geth­er, Thiel has report­ed­ly urged Zucker­berg not to bow to pub­lic pres­sure.”

    Sur­prise! Thiel came to the res­cue of the GOP’s lies. It’s a sign of how much influ­ence he con­tin­ues to have at Face­book despite sell­ing most of his orig­i­nal shares. That influ­ence appar­ent­ly has more to do with his per­son­al sway over Mark Zucker­berg than with any remain­ing stake in the com­pa­ny:

    ...
    Though Thiel and his funds have sold most of their Face­book shares, he was the first out­side investor in Face­book and gave Zucker­berg valu­able advice, which has helped the com­pa­ny grow into the behe­moth it is today. As a result, Zucker­berg report­ed­ly trusts Thiel’s insights and val­ues his advice.
    ...

    So Mark Zucker­berg val­ues the insights and advice of one of the world’s most pow­er­ful fas­cists. That sounds about right for a Face­book sto­ry.

    And as the fol­low­ing arti­cle points out, if giv­ing the Trump cam­paign a license to open­ly lie seems like a recipe for dis­as­ter head­ing into 2020, don’t for­get that it’s not hard for some­one to tech­ni­cal­ly become a politi­cian. All they have to do is run for office. So if a group has a lot of mon­ey to spend on Face­book ads, and a lot of lies they want to push with those ads, all that group will need to do is field a can­di­date for office. At least in the­o­ry.

    That the­o­ry was test­ed by Democ­rats short­ly after Face­book announced its polit­i­cal ads pol­i­cy, when Demo­c­ra­t­ic politi­cians start­ed inten­tion­al­ly run­ning obvi­ous­ly fake ads on Face­book to see what the com­pa­ny would do. A left-lean­ing polit­i­cal action com­mit­tee, the Real­ly Online Lefty League, also decid­ed to test the new pol­i­cy with an ad claim­ing Repub­li­can Sen­a­tor Lind­sey Gra­ham was a sup­port­er of the Green New Deal. Face­book respond­ed that the ad was going to be tak­en down because it came from a polit­i­cal action com­mit­tee, and not an actu­al politi­cian. So one of the group’s mem­bers, Adriel Hamp­ton, decid­ed to run for gov­er­nor of Cal­i­for­nia. Face­book refused to exempt Hamp­ton’s ads from fact-check­ing, say­ing, “This per­son has made clear he reg­is­tered as a can­di­date to get around our poli­cies, so his con­tent, includ­ing ads, will con­tin­ue to be eli­gi­ble for third-par­ty fact-check­ing.” So it sounds like the only thing pre­vent­ing this plan from work­ing is the fact that Hamp­ton made it clear he had only reg­is­tered as a can­di­date to exploit Face­book’s fact-check­ing loop­hole:

    Vox
    Recode

    Facebook’s polit­i­cal ads pol­i­cy is pre­dictably turn­ing out to be a dis­as­ter

    Democ­rats are test­ing the lim­its of Facebook’s refusal to take down false ads from politi­cians, and it isn’t pret­ty.

    By Emi­ly Stew­art
    Updat­ed Oct 30, 2019, 4:57pm EDT

    Facebook’s polit­i­cal ads pol­i­cy that allows politi­cians to lie on its plat­form has, unsur­pris­ing­ly, turned into a mess.

    As it faces pres­sure tests from politi­cians and polit­i­cal groups, Face­book is start­ing to make excep­tions to its pol­i­cy that it won’t fact-check adver­tise­ments pub­lished by politi­cians. It’s a posi­tion CEO Mark Zucker­berg in par­tic­u­lar had tak­en a hard line on.

    To back up, this all began this fall when Face­book announced it wouldn’t fact-check polit­i­cal speech, includ­ing ads, and cam­paigns start­ed to test the impli­ca­tions of this pol­i­cy. In Sep­tem­ber, Face­book refused to take down an ad run by Don­ald Trump’s reelec­tion cam­paign that made false claims about for­mer Vice Pres­i­dent Joe Biden, his son Hunter Biden, and their activ­i­ties in Ukraine. Face­book wasn’t the only plat­form to refuse to pull the ad — YouTube, Twit­ter, MSNBC, and Fox made the same call — but Face­book caught the most flak for it.

    Then, Democ­rats decid­ed to chal­lenge the pol­i­cy allow­ing fake ads ... by run­ning fake ads of their own on Face­book. Sen. Eliz­a­beth War­ren (D‑MA), who has emerged as a fierce Face­book crit­ic in the 2020 pri­ma­ry, ran a fake ad claim­ing Face­book CEO Mark Zucker­berg had endorsed Trump’s reelec­tion. War­ren also, with­out evi­dence, sug­gest­ed the social net­work had adopt­ed the pol­i­cy as part of a back­room deal with Trump. And last week, high-pro­file fresh­man Rep. Alexan­dria Oca­sio-Cortez got Zucker­berg to admit in a House hear­ing he would “prob­a­bly” let her run ads against Repub­li­cans say­ing they sup­port­ed the Green New Deal. Along the way, Zucker­berg con­tin­ued defend­ing the pol­i­cy, even as his own employ­ees, in a rare move, wrote a let­ter express­ing con­cern with the stance and pushed him to rethink his deci­sion.

    But in recent days, Face­book has wavered as pro­gres­sives have test­ed the lim­its of its pol­i­cy. Over the week­end, the com­pa­ny took down an ad that false­ly claimed Sen. Lind­sey Gra­ham (R‑SC) sup­ports the Green New Deal. A left-lean­ing polit­i­cal action com­mit­tee, the Real­ly Online Lefty League, had post­ed the ad, and Face­book said it took the action because the ad came from a polit­i­cal action group, not a politi­cian, and there­fore dif­fer­ent rules applied.

    So the group found a workaround: One of the PAC mem­bers, Adriel Hamp­ton, filed with the Fed­er­al Elec­tion Com­mis­sion to run for Cal­i­for­nia gov­er­nor. Now a politi­cian, as the log­ic of Facebook’s poli­cies would go, he can run as many polit­i­cal ads as he wants.

    Except appar­ent­ly not. Face­book on Tues­day evening said it was nix­ing Hampton’s workaround. “This per­son has made clear he reg­is­tered as a can­di­date to get around our poli­cies, so his con­tent, includ­ing ads, will con­tin­ue to be eli­gi­ble for third-par­ty fact-check­ing,” a Face­book spokesman said in an email to Recode.

    Hamp­ton told CNN he is con­sid­er­ing legal action against Face­book. In an inter­view with Recode ear­li­er in the day on Tues­day, he said that Face­book is “basi­cal­ly sell­ing you to the lying politi­cians.”

    “I feel that I’m one of the few peo­ple who’s qual­i­fied in both that I’m an expert mar­ket­ing strate­gist and a politi­cian, and I think that’s what it’s going to take to either get this pol­i­cy cleaned up and get Trump back on equal foot­ing with oth­er polit­i­cal com­mit­tees — or to defeat the GOP, defeat Trump, and defeat the Sen­ate GOP with fake ads,” he said.

    Hamp­ton sug­gest­ed he might actu­al­ly run for office — he is, after all, a long­time polit­i­cal con­sul­tant who most recent­ly worked on Mike Gravel’s ill-fat­ed pres­i­den­tial cam­paign; Hamp­ton also made an unsuc­cess­ful bid for Con­gress in 2009. After Face­book announced it wouldn’t let him run fake ads, he told Recode he will now “lead a move­ment.”

    Hampton’s fight with the plat­form is high­light­ing the real issue here: that the company’s deci­sion-mak­ing and pol­i­cy defens­es when it comes to free speech on its plat­form can often seem arbi­trary. Its pol­i­cy says a politi­cian is exempt from third-par­ty fact-check­ing, and you’re tech­ni­cal­ly a polit­i­cal can­di­date if you’re reg­is­tered as one with the FEC. But in this case, Face­book is mak­ing an exemp­tion and a judg­ment about inten­tions.

    Facebook’s hard-and-fast rule on polit­i­cal speech doesn’t seem so hard-and-fast, con­sid­er­ing it’s already mak­ing excep­tions to it.

    The dust­up also high­lights just how enor­mous Face­book has become and, in turn, how unpre­pared it seems to be to mod­er­ate polit­i­cal speech on its plat­form, even after the hard lessons it learned in the wake of the 2016 elec­tion.

    “The big sto­ry is that Face­book is too big to gov­ern, and its ads sys­tem is too easy to hijack,” Siva Vaid­hyanathan, a media stud­ies pro­fes­sor at the Uni­ver­si­ty of Vir­ginia, told Recode.

    Face­book knows polic­ing speech is a polit­i­cal hot pota­to

    On its face, the deci­sion on the Biden ad should have been an easy one for Face­book: It was the pres­i­dent of the Unit­ed States mak­ing an obvi­ous­ly false claim about the for­mer Vice Pres­i­dent of the Unit­ed States.

    But tak­ing down the ad would have cre­at­ed two prob­lems for Face­book. First, it would set a prece­dent that Face­book is respon­si­ble for polic­ing every false polit­i­cal ad on its plat­form. That would be a chal­leng­ing but not impos­si­ble task. The com­pa­ny has effec­tive­ly addressed ter­ror­ist con­tent and got­ten bet­ter at com­bat­ing elec­tion inter­fer­ence. It could under­take sim­i­lar efforts on fake polit­i­cal ads.

    The sec­ond and big­ger com­pli­ca­tion: tak­ing down the ad could also have caused just as much con­tro­ver­sy as leav­ing it up. Trump and his sup­port­ers would like­ly have cried foul. Face­book and oth­er social media com­pa­nies are already dogged by unfound­ed accu­sa­tions by Repub­li­cans that their algo­rithms con­tain anti-con­ser­v­a­tive bias, and they have done a lot of leg­work to try to prove they’re not.

    In oth­er words, when Zucker­berg says, in defense of the ad pol­i­cy, “most peo­ple don’t want to live in a world where you can only post things that tech com­pa­nies judged to be 100 per­cent true,” and, “in a democ­ra­cy peo­ple should be able to see for them­selves what politi­cians are say­ing,” what he’s not say­ing is that the under­ly­ing prob­lem is that polic­ing polit­i­cal ads would be polit­i­cal­ly ten­u­ous and hard.

    “Face­book is basi­cal­ly say­ing we’re going to pre­tend this is a high-mind­ed deci­sion and we’re going to stick by it because we’d rather take the hit for the next few weeks or months on this pol­i­cy until every­one burns out on it than take the hit for years every time an ad with clear false­hoods makes it through the fil­ter,” Vaid­hyanathan said.

    The com­pa­ny doesn’t want to deal with the back­lash it would face if it were to deem an ad from one polit­i­cal par­ty or the oth­er to be false. Face­book is already hyper­sen­si­tive to large­ly unfound­ed claims of polit­i­cal bias. “I wor­ry much more about Face­book telling me what fake news is than fake news itself,” Rory McShane, a Repub­li­can polit­i­cal con­sul­tant, told Recode.

    Face­book has been pres­sured to stop deal­ing in polit­i­cal adver­tis­ing alto­geth­er, with crit­ics not­ing it’s only a small part of its rev­enue. But then that would require the com­pa­ny to define what a polit­i­cal ad is. Sure, it could ban ads from the Trump cam­paign, but what about the NRA? Or the Amer­i­can Fed­er­a­tion of Teach­ers?

    Still, in mak­ing one deci­sion on the Trump ad and anoth­er deci­sion on the ads Hamp­ton was try­ing to run, Face­book showed it will in fact make polit­i­cal judg­ments about the ads on its plat­form. It gets to decide what ads do and don’t run, and it doesn’t have to stick to its poli­cies.

    And on Wednes­day after­noon, the pres­sure on Face­book increased even more after Twit­ter CEO Jack Dorsey announced his plat­form would no longer allow polit­i­cal adver­tis­ing.

    Peo­ple were bound to test this — which isn’t nec­es­sar­i­ly great, either

    Renée DiRes­ta, a 2019 Mozil­la fel­low in Media, Mis­in­for­ma­tion, and Trust and an expert in social media manip­u­la­tion, told Recode that peo­ple like Hamp­ton and War­ren are “test­ing the bound­aries of a bad pol­i­cy by cre­at­ing exam­ples to illus­trate exact­ly why it’s inad­e­quate.”

    “I don’t think most hon­est, legit­i­mate politi­cians want to be writ­ten about as peo­ple who delib­er­ate­ly ran bla­tant­ly fake con­tent,” she said.

    ...

    ———-

    “Facebook’s polit­i­cal ads pol­i­cy is pre­dictably turn­ing out to be a dis­as­ter” by Emi­ly Stew­art; Vox Recode; 10/30/2019

    “So the group found a workaround: One of the PAC mem­bers, Adriel Hamp­ton, filed with the Fed­er­al Elec­tion Com­mis­sion to run for Cal­i­for­nia gov­er­nor. Now a politi­cian, as the log­ic of Facebook’s poli­cies would go, he can run as many polit­i­cal ads as he wants.”

    Just turn your­self into a politi­cian and you can open­ly run as many lying ads as you want on Face­book. It’s that easy. In the­o­ry. In this case, though, Face­book still restrict­ed the lying ads, but only because Adriel Hamp­ton made it clear he was­n’t seri­ous­ly run­ning and was only doing this to test Face­book’s poli­cies. So Face­book is unwill­ing to say whether a politi­cian’s ads con­tain lies, but it is will­ing to say whether or not a politi­cian is a real politi­cian:

    ...
    Except appar­ent­ly not. Face­book on Tues­day evening said it was nix­ing Hampton’s workaround. “This per­son has made clear he reg­is­tered as a can­di­date to get around our poli­cies, so his con­tent, includ­ing ads, will con­tin­ue to be eli­gi­ble for third-par­ty fact-check­ing,” a Face­book spokesman said in an email to Recode.

    Hamp­ton told CNN he is con­sid­er­ing legal action against Face­book. In an inter­view with Recode ear­li­er in the day on Tues­day, he said that Face­book is “basi­cal­ly sell­ing you to the lying politi­cians.”

    “I feel that I’m one of the few peo­ple who’s qual­i­fied in both that I’m an expert mar­ket­ing strate­gist and a politi­cian, and I think that’s what it’s going to take to either get this pol­i­cy cleaned up and get Trump back on equal foot­ing with oth­er polit­i­cal com­mit­tees — or to defeat the GOP, defeat Trump, and defeat the Sen­ate GOP with fake ads,” he said.

    Hamp­ton sug­gest­ed he might actu­al­ly run for office — he is, after all, a long­time polit­i­cal con­sul­tant who most recent­ly worked on Mike Gravel’s ill-fat­ed pres­i­den­tial cam­paign; Hamp­ton also made an unsuc­cess­ful bid for Con­gress in 2009. After Face­book announced it wouldn’t let him run fake ads, he told Recode he will now “lead a move­ment.”

    Hampton’s fight with the plat­form is high­light­ing the real issue here: that the company’s deci­sion-mak­ing and pol­i­cy defens­es when it comes to free speech on its plat­form can often seem arbi­trary. Its pol­i­cy says a politi­cian is exempt from third-par­ty fact-check­ing, and you’re tech­ni­cal­ly a polit­i­cal can­di­date if you’re reg­is­tered as one with the FEC. But in this case, Face­book is mak­ing an exemp­tion and a judg­ment about inten­tions.

    Facebook’s hard-and-fast rule on polit­i­cal speech doesn’t seem so hard-and-fast, con­sid­er­ing it’s already mak­ing excep­tions to it.
    ...

    Final­ly, recall that Renée DiRes­ta hap­pens to be one of the fig­ures who appears to have been involved with the New Knowl­edge project to cre­ate fake ‘Russ­ian bot’ net­works oper­at­ing on Twit­ter and Face­book in the 2017 Alaba­ma spe­cial Sen­ate race, osten­si­bly to test how peo­ple respond to dis­in­for­ma­tion bot net­works. So her exper­tise in media and mis­in­for­ma­tion includes real-world expe­ri­ence in run­ning an actu­al dis­in­for­ma­tion net­work. And that dis­in­for­ma­tion net­work was­n’t run­ning ads. It was bots just pro­mot­ing memes and links:

    ...
    Renée DiRes­ta, a 2019 Mozil­la fel­low in Media, Mis­in­for­ma­tion, and Trust and an expert in social media manip­u­la­tion, told Recode that peo­ple like Hamp­ton and War­ren are “test­ing the bound­aries of a bad pol­i­cy by cre­at­ing exam­ples to illus­trate exact­ly why it’s inad­e­quate.”

    “I don’t think most hon­est, legit­i­mate politi­cians want to be writ­ten about as peo­ple who delib­er­ate­ly ran bla­tant­ly fake con­tent,” she said.

    ...

    It’s a reminder that even if Face­book bans lying ads from politi­cians, the plat­form is still going to be a heavy pro­mot­er of mis­in­for­ma­tion on a mas­sive scale.

    So we’ll see if there’s a flood of third-par­ty can­di­dates who aren’t seri­ous about run­ning for office and are only seri­ous about spread­ing dis­in­for­ma­tion on Face­book. Fake third-par­ty can­di­dates who pre­sum­ably won’t open­ly declare that they’re doing it just to exploit Face­book’s lie loop­hole, so Face­book does­n’t have to ban them.

    It’s also worth not­ing that this gim­mick can work the oth­er way around: If the Trump cam­paign is run­ning a bunch of bla­tant­ly lying ads, the Democ­rats could take the con­tent of one of those ads, repack­age it in a new ad, and have a left-lean­ing polit­i­cal action com­mit­tee that’s sub­ject to Face­book’s fact-check­ing rules buy a very small audi­ence for the decep­tive ad to see if Face­book bans it at that point. Of course, if the ad were indeed banned, the main recourse for the Democ­rats at that point would be to buy a bunch of Face­book ads talk­ing about how Face­book ver­i­fied the ad is a bunch of lies. That could eas­i­ly hap­pen, which is a reminder that Face­book’s poli­cies aren’t just set up to help Repub­li­cans lie their way into office. They’re also set up to cyn­i­cal­ly sell more ads. Includ­ing ads to high­light the lies in oth­er ads.

    Posted by Pterrafractyl | December 18, 2019, 3:02 pm
  16. How many times is Steve Ban­non allowed to call for the mur­der of gov­ern­ment offi­cials before Face­book sus­pends his account? That was the ques­tion Sen­a­tor Richard Blu­men­thal asked Mark Zucker­berg dur­ing a Sen­ate Judi­cia­ry hear­ing on Tues­day in ref­er­ence to Ban­non’s recent calls for the behead­ing of Antho­ny Fau­ci, calls that got Ban­non banned from Twit­ter, but not Face­book.

    So what was Zucker­berg’s answer? Well, it sounds like if Steve Ban­non calls for the mur­der of gov­ern­ment offi­cials, the posts call­ing for mur­der will be tak­en down, but that won’t auto­mat­i­cal­ly result in the ban­ning of Ban­non’s account. Account bans are made on more of a case-by-case basis. So the rules are that if you call for the mur­der of gov­ern­ment offi­cials, your calls for mur­der might even­tu­al­ly be tak­en down, but you will prob­a­bly still be allowed to keep post­ing on Face­book. At least that’s the case for Steve Ban­non:

    The Hill Reporter

    Mark Zucker­berg Refus­es to Ter­mi­nate Steve Bannon’s Face­book Account Despite Death Threats

    BY Bran­don Gage
    Novem­ber 17, 2020

    Dur­ing a vir­tu­al Sen­ate Judi­cia­ry Hear­ing call with Sen­a­tor Richard Blu­men­thal (D‑CT) on Tues­day, Face­book CEO Mark Zucker­berg said that he will not ter­mi­nate right-wing provo­ca­teur and for­mer White House advis­er Steve Bannon’s account even after Ban­non made death threats against Doc­tor Antho­ny Fau­ci.

    Ban­non called for Fau­ci to be behead­ed in a Face­book post ear­li­er this month after Pres­i­dent Don­ald Trump lost the elec­tion.

    Fau­ci has become a favorite tar­get of Trump’s most dan­ger­ous sup­port­ers, many of whom are white nation­al­ists, over his rec­om­men­da­tions that peo­ple wear masks to slow the spread of COVID-19.

    Trump has down­played the coro­n­avirus cri­sis by deny­ing its sever­i­ty and con­tra­dict­ing his own experts. This has led to the virus infect­ing at least 11.3 mil­lion Amer­i­cans and caus­ing a quar­ter of a mil­lion deaths.

    Face­book has attempt­ed to remain polit­i­cal­ly neu­tral, and has even said pub­licly that they do not want to upset con­ser­v­a­tives. Unfor­tu­nate­ly, this approach has result­ed in a del­uge of fake news, pro­pa­gan­da, and calls for vio­lence by peo­ple who have sworn alle­giance to Trump.

    Law­mak­ers, along with the pub­lic, find it dif­fi­cult to com­pre­hend why Face­book choos­es to cod­dle ter­ror­ists.

    “How many times is Steve Ban­non allowed to call for the mur­der of gov­ern­ment offi­cials before Face­book sus­pends his account?” Blu­men­thal asked the social media mogul.

    “Sen­a­tor, as you say, the con­tent in ques­tion did vio­late our poli­cies and we did take it down,” Zucker­berg said, refer­ring to the post itself.

    “Hav­ing a con­tent vio­la­tion does not auto­mat­i­cal­ly mean your account gets tak­en down and the num­ber of strikes varies depend­ing on the type of offense, so if peo­ple are post­ing ter­ror­ist con­tent or child exploita­tion con­tent then the first time that they do it, then we will take down their account,” Zucker­berg con­tin­ued.

    Ter­ror­ist con­tent, how­ev­er, is pre­cise­ly what Ban­non post­ed. But because Face­book is noto­ri­ous­ly lenient toward pub­lic offi­cials who make incen­di­ary remarks – includ­ing the pres­i­dent of the Unit­ed States, who has threat­ened Iran and North Korea with nuclear war and encour­aged his sup­port­ers to com­mit vio­lent acts – Bannon’s account was not tak­en down.

    “For oth­er things, it’s mul­ti­ple… I’d be hap­py to fol­low up after­wards, we try not to dis­close these…” Zucker­berg said.

    “Will you com­mit to tak­ing down his account?” Blu­men­thal inter­ject­ed.

    “Sor­ry I didn’t hear that,” Zucker­berg replied (sure, Jan).

    “Will you com­mit to tak­ing down that account – Steve Bannon’s account?” Blu­men­thal reit­er­at­ed.

    “Sen­a­tor, no, that’s not what our poli­cies sug­gest that we should do in this case,” Zucker­berg replied.

    ...

    Lat­er in the hear­ing, Zucker­berg also said that Face­book would not change the way it mon­i­tors Trump’s account after he leaves office, which at face val­ue goes against the poli­cies Zucker­berg claims to be uphold­ing.

    Blu­men­thal went on to say that the tech indus­try should face much stricter reg­u­la­tions due to the enor­mous pow­er and influ­ence it has amassed.

    “You have built ter­ri­fy­ing tools of per­sua­sion and manip­u­la­tion — with pow­er far exceed­ing the rob­ber barons of the last Gild­ed Age,” Blu­men­thal told Zucker­berg and Twit­ter CEO Jack Dorsey, who also took part in the hear­ing. “You have made a huge amount of mon­ey by strip min­ing data about our pri­vate lives and pro­mot­ing hate speech and vot­er sup­pres­sion.”

    ...

    ———-

    “Mark Zucker­berg Refus­es to Ter­mi­nate Steve Bannon’s Face­book Account Despite Death Threats” by Bran­don Gage; The Hill Reporter; 11/17/2020

    ““Hav­ing a con­tent vio­la­tion does not auto­mat­i­cal­ly mean your account gets tak­en down and the num­ber of strikes varies depend­ing on the type of offense, so if peo­ple are post­ing ter­ror­ist con­tent or child exploita­tion con­tent then the first time that they do it, then we will take down their account,” Zucker­berg con­tin­ued.”

    As Mark Zucker­berg clar­i­fied in his answer, when Steve Ban­non called for the behead­ing of Antho­ny Fau­ci, that’s not the kind of con­tent vio­la­tion that gets you auto-banned. It’s not like ter­ror­ist con­tent or child exploita­tion con­tent. It’s mere­ly Trump’s for­mer top White House advi­sor call­ing for the behead­ing of a gov­ern­ment offi­cial:

    ...
    Ter­ror­ist con­tent, how­ev­er, is pre­cise­ly what Ban­non post­ed. But because Face­book is noto­ri­ous­ly lenient toward pub­lic offi­cials who make incen­di­ary remarks – includ­ing the pres­i­dent of the Unit­ed States, who has threat­ened Iran and North Korea with nuclear war and encour­aged his sup­port­ers to com­mit vio­lent acts – Bannon’s account was not tak­en down.

    “For oth­er things, it’s mul­ti­ple… I’d be hap­py to fol­low up after­wards, we try not to dis­close these…” Zucker­berg said.

    “Will you com­mit to tak­ing down his account?” Blu­men­thal inter­ject­ed.

    “Sor­ry I didn’t hear that,” Zucker­berg replied (sure, Jan).

    “Will you com­mit to tak­ing down that account – Steve Bannon’s account?” Blu­men­thal reit­er­at­ed.

    “Sen­a­tor, no, that’s not what our poli­cies sug­gest that we should do in this case,” Zucker­berg replied.
    ...

    It’s the kind of inten­tion­al­ly vague rules sys­tem that rais­es the ques­tion of whether Ban­non gets this kind of lenient treat­ment because any­one can call for the killing of gov­ern­ment offi­cials on Face­book with­out get­ting banned, or whether that’s a priv­i­lege reserved for for­mer gov­ern­ment offi­cials like Ban­non.

    But as the fol­low­ing report from back in August about leaked inter­nal Face­book doc­u­ments makes clear, one of the major fac­tors in Face­book’s inter­nal sys­tem for deter­min­ing what kind of pun­ish­ment peo­ple should receive for vio­lat­ing Face­book’s rules is whether or not they’re a con­ser­v­a­tive per­son­al­i­ty who might cre­ate a pub­lic rela­tions headache for the com­pa­ny. And it’s a rule Face­book’s senior lead­er­ship makes sure is enforced:

    NBC News

    Sen­si­tive to claims of bias, Face­book relaxed mis­in­for­ma­tion rules for con­ser­v­a­tive pages
    Accord­ing to inter­nal dis­cus­sions, Face­book removed “strikes” so that con­ser­v­a­tive pages were not penal­ized for vio­la­tions of mis­in­for­ma­tion poli­cies.

    By Olivia Solon
    Aug. 7, 2020, 2:31 PM CDT

    Face­book has allowed con­ser­v­a­tive news out­lets and per­son­al­i­ties to repeat­ed­ly spread false infor­ma­tion with­out fac­ing any of the com­pa­ny’s stat­ed penal­ties, accord­ing to leaked mate­ri­als reviewed by NBC News.

    Accord­ing to inter­nal dis­cus­sions from the last six months, Face­book has relaxed its rules so that con­ser­v­a­tive pages, includ­ing those run by Bre­it­bart, for­mer Fox News per­son­al­i­ties Dia­mond and Silk, the non­prof­it media out­let PragerU and the pun­dit Char­lie Kirk, were not penal­ized for vio­la­tions of the company’s mis­in­for­ma­tion poli­cies.

    Face­book’s fact-check­ing rules dic­tate that pages can have their reach and adver­tis­ing lim­it­ed on the plat­form if they repeat­ed­ly spread infor­ma­tion deemed inac­cu­rate by its fact-check­ing part­ners. The com­pa­ny oper­ates on a “strike” basis, mean­ing a page can post inac­cu­rate infor­ma­tion and receive a one-strike warn­ing before the plat­form takes action. Two strikes in 90 days places an account into “repeat offend­er” sta­tus, which can lead to a reduc­tion in dis­tri­b­u­tion of the account’s con­tent and a tem­po­rary block on adver­tis­ing on the plat­form.

    Face­book has a process that allows its employ­ees or rep­re­sen­ta­tives from Facebook’s part­ners, includ­ing news orga­ni­za­tions, politi­cians, influ­encers and oth­ers who have a sig­nif­i­cant pres­ence on the plat­form to flag mis­in­for­ma­tion-relat­ed prob­lems. Fact-check­ing labels are applied to posts by Face­book when third-par­ty fact-check­ers deter­mine their posts con­tain mis­in­for­ma­tion. A news orga­ni­za­tion or politi­cian can appeal the deci­sion to attach a label to one of its posts.

    Face­book employ­ees who work with con­tent part­ners then decide if an appeal is a high-pri­or­i­ty issue or PR risk, in which case they log it in an inter­nal task man­age­ment sys­tem as a mis­in­for­ma­tion “esca­la­tion.” Mark­ing some­thing as an “esca­la­tion” means that senior lead­er­ship is noti­fied so they can review the sit­u­a­tion and quick­ly — often with­in 24 hours — make a deci­sion about how to pro­ceed.

    Face­book receives many queries about mis­in­for­ma­tion from its part­ners, but only a small sub­sec­tion are deemed to require input from senior lead­er­ship. Since Feb­ru­ary, more than 30 of these mis­in­for­ma­tion queries were tagged as “esca­la­tions” with­in the company’s task man­age­ment sys­tem, used by employ­ees to track and assign work projects.

    The list and descrip­tions of the esca­la­tions, leaked to NBC News, showed that Face­book employ­ees in the mis­in­for­ma­tion esca­la­tions team, with direct over­sight from com­pa­ny lead­er­ship, delet­ed strikes dur­ing the review process that were issued to some con­ser­v­a­tive part­ners for post­ing mis­in­for­ma­tion over the last six months. The dis­cus­sions of the reviews showed that Face­book employ­ees were wor­ried that com­plaints about Face­book’s fact-check­ing could go pub­lic and fuel alle­ga­tions that the social net­work was biased against con­ser­v­a­tives.

    The removal of the strikes has fur­thered con­cerns from some cur­rent and for­mer employ­ees that the com­pa­ny rou­tine­ly relax­es its rules for con­ser­v­a­tive pages over fears about accu­sa­tions of bias.

    Two cur­rent Face­book employ­ees and two for­mer employ­ees, who spoke anony­mous­ly out of fear of pro­fes­sion­al reper­cus­sions, said they believed the com­pa­ny had become hyper­sen­si­tive to con­ser­v­a­tive com­plaints, in some cas­es mak­ing spe­cial allowances for con­ser­v­a­tive pages to avoid neg­a­tive pub­lic­i­ty.

    “This sup­posed goal of this process is to pre­vent embar­rass­ing false pos­i­tives against respectable con­tent part­ners, but the data shows that this is instead being used pri­mar­i­ly to shield con­ser­v­a­tive fake news from the con­se­quences,” said one for­mer employ­ee.

    About two-thirds of the “esca­la­tions” includ­ed in the leaked list relate to mis­in­for­ma­tion issues linked to con­ser­v­a­tive pages, includ­ing those of Bre­it­bart, Don­ald Trump Jr., Eric Trump and Gate­way Pun­dit. There was one esca­la­tion relat­ed to a pro­gres­sive advo­ca­cy group and one each for CNN, CBS, Yahoo and the World Health Orga­ni­za­tion.

    There were also esca­la­tions relat­ed to left-lean­ing enti­ties, includ­ing one about an ad from Demo­c­ra­t­ic super PAC Pri­or­i­ties USA that the Trump cam­paign and fact check­ers have labeled as mis­lead­ing. Those mat­ters focused on pre­vent­ing mis­lead­ing videos that were already being shared wide­ly on oth­er media plat­forms from spread­ing on Face­book and were not linked to com­plaints or con­cerns about strikes.

    Face­book and oth­er tech com­pa­nies includ­ing Twit­ter and Google have faced repeat­ed accu­sa­tions of bias against con­ser­v­a­tives in their con­tent mod­er­a­tion deci­sions, though there is lit­tle clear evi­dence that this bias exists. The issue was reignit­ed this week when Face­book removed a video post­ed to Trump’s per­son­al Face­book page in which he false­ly claimed that chil­dren are “almost immune” to COVID-19. The Trump cam­paign accused Face­book of “fla­grant bias.”

    Face­book spokesper­son Andy Stone did not dis­pute the authen­tic­i­ty of the leaked mate­ri­als, but said that it did not pro­vide the full con­text of the sit­u­a­tion.

    In recent years, Face­book has devel­oped a lengthy set of rules that gov­ern how the plat­form mod­er­ates false or mis­lead­ing infor­ma­tion. But how those rules are applied can vary and is up to the dis­cre­tion of Face­book’s exec­u­tives.

    In late March, a Face­book employ­ee raised con­cerns on an inter­nal mes­sage board about a “false” fact-check­ing label that had been added to a post by the con­ser­v­a­tive blog­gers Dia­mond and Silk in which they expressed out­rage over the false alle­ga­tion that Democ­rats were try­ing to give mem­bers of Con­gress a $25 mil­lion raise as part of a COVID-19 stim­u­lus pack­age.

    Dia­mond and Silk had not yet com­plained to Face­book about the fact check, but the employ­ee was sound­ing the alarm because the “part­ner is extreme­ly sen­si­tive and has not hes­i­tat­ed going pub­lic about their con­cerns around alleged con­ser­v­a­tive bias on Face­book.”

    Since it was the account’s sec­ond mis­in­for­ma­tion strike in 90 days, accord­ing to the leaked inter­nal posts, the page was placed into “repeat offend­er” sta­tus.

    Dia­mond and Silk appealed the “false” rat­ing that had been applied by third-par­ty fact-check­er Lead Sto­ries on the basis that they were express­ing opin­ion and not stat­ing a fact. The rat­ing was down­grad­ed by Lead Sto­ries to “part­ly false” and they were tak­en out of “repeat offend­er” sta­tus. Even so, some­one at Face­book described as “Policy/Leadership” inter­vened and instruct­ed the team to remove both strikes from the account, accord­ing to the leaked mate­r­i­al.

    In anoth­er case in late May, a Face­book employ­ee filed a mis­in­for­ma­tion esca­la­tion for PragerU, after a series of fact-check­ing labels were applied to sev­er­al sim­i­lar posts sug­gest­ing polar bear pop­u­la­tions had not been dec­i­mat­ed by cli­mate change and that a pho­to of a starv­ing ani­mal was used as a “delib­er­ate lie to advance the cli­mate change agen­da.” This claim was fact-checked by one of Facebook’s inde­pen­dent fact-check­ing part­ners, Cli­mate Feed­back, as false and meant that the PragerU page had “repeat offend­er” sta­tus and would poten­tial­ly be banned from adver­tis­ing.

    A Face­book employ­ee esca­lat­ed the issue because of “part­ner sen­si­tiv­i­ty” and men­tioned with­in that the repeat offend­er sta­tus was “espe­cial­ly wor­ri­some due to PragerU hav­ing 500 active ads on our plat­form,” accord­ing to the dis­cus­sion con­tained with­in the task man­age­ment sys­tem and leaked to NBC News.

    After some back and forth between employ­ees, the fact check label was left on the posts, but the strikes that could have jeop­ar­dized the adver­tis­ing cam­paign were removed from PragerU’s pages.

    Stone, the Face­book spokesper­son, said that the com­pa­ny defers to third-par­ty fact-check­ers on the rat­ings giv­en to posts, but that the com­pa­ny is respon­si­ble for “how we man­age our inter­nal sys­tems for repeat offend­ers.”

    “We apply addi­tion­al sys­tem-wide penal­ties for mul­ti­ple false rat­ings, includ­ing demon­e­ti­za­tion and the inabil­i­ty to adver­tise, unless we deter­mine that one or more of those rat­ings does not war­rant addi­tion­al con­se­quences,” he said in an emailed state­ment.

    He added that Face­book works with more than 70 fact-check­ing part­ners who apply fact-checks to “mil­lions of pieces of con­tent.”

    Face­book announced Thurs­day that it banned a Repub­li­can PAC, the Com­mit­tee to Defend the Pres­i­dent, from adver­tis­ing on the plat­form fol­low­ing repeat­ed shar­ing of mis­in­for­ma­tion.

    But the ongo­ing sen­si­tiv­i­ty to con­ser­v­a­tive com­plaints about fact-check­ing con­tin­ues to trig­ger heat­ed debates inside Face­book, accord­ing to leaked posts from Facebook’s inter­nal mes­sage board and inter­views with cur­rent and for­mer employ­ees.

    “The research has shown no evi­dence of bias against con­ser­v­a­tives on Face­book,” said anoth­er employ­ee, “So why are we try­ing to appease them?”

    Those con­cerns have also spilled out onto the com­pa­ny’s inter­nal mes­sage boards.

    One employ­ee wrote a post on 19 July, first report­ed by Buz­zFeed News on Thurs­day, sum­ma­riz­ing the list of mis­in­for­ma­tion esca­la­tions found in the task man­age­ment sys­tem and argu­ing that the com­pa­ny was pan­der­ing to con­ser­v­a­tive politi­cians.

    The post, a copy of which NBC News has reviewed, also com­pared Mark Zucker­berg to Pres­i­dent Don­ald Trump and Russ­ian Pres­i­dent Vladimir Putin.

    “Just like all the rob­ber barons and slavers and plun­der­ers who came before you, you are spend­ing a for­tune you didn’t build. No amount of char­i­ty can ever bal­ance out the pover­ty, war and envi­ron­men­tal dam­age enabled by your sup­port of Don­ald Trump,” the employ­ee wrote.

    The post was removed for vio­lat­ing Facebook’s “respect­ful com­mu­ni­ca­tions” pol­i­cy and the list of esca­la­tions, pre­vi­ous­ly acces­si­ble to all employ­ees, was made pri­vate. The employ­ee who wrote the post was lat­er fired.

    ...

    ————-

    “Sen­si­tive to claims of bias, Face­book relaxed mis­in­for­ma­tion rules for con­ser­v­a­tive pages” by Olivia Solon; NBC News; 08/07/2020

    “The list and descrip­tions of the esca­la­tions, leaked to NBC News, showed that Face­book employ­ees in the mis­in­for­ma­tion esca­la­tions team, with direct over­sight from com­pa­ny lead­er­ship, delet­ed strikes dur­ing the review process that were issued to some con­ser­v­a­tive part­ners for post­ing mis­in­for­ma­tion over the last six months. The dis­cus­sions of the reviews showed that Face­book employ­ees were wor­ried that com­plaints about Face­book’s fact-check­ing could go pub­lic and fuel alle­ga­tions that the social net­work was biased against con­ser­v­a­tives.”

    As the leaked mem­os make clear, when Face­book’s “mis­in­for­ma­tion teams” make a deci­sion, it might be made under direct over­sight from Face­book’s lead­er­ship. And as the mem­os also make clear, that lead­er­ship has one pri­ma­ry con­cern: not piss­ing off con­ser­v­a­tives and avoid­ing accu­sa­tions of anti-con­ser­v­a­tive bias:

    ...
    The removal of the strikes has fur­thered con­cerns from some cur­rent and for­mer employ­ees that the com­pa­ny rou­tine­ly relax­es its rules for con­ser­v­a­tive pages over fears about accu­sa­tions of bias.

    Two cur­rent Face­book employ­ees and two for­mer employ­ees, who spoke anony­mous­ly out of fear of pro­fes­sion­al reper­cus­sions, said they believed the com­pa­ny had become hyper­sen­si­tive to con­ser­v­a­tive com­plaints, in some cas­es mak­ing spe­cial allowances for con­ser­v­a­tive pages to avoid neg­a­tive pub­lic­i­ty.

    “This sup­posed goal of this process is to pre­vent embar­rass­ing false pos­i­tives against respectable con­tent part­ners, but the data shows that this is instead being used pri­mar­i­ly to shield con­ser­v­a­tive fake news from the con­se­quences,” said one for­mer employ­ee.

    About two-thirds of the “esca­la­tions” includ­ed in the leaked list relate to mis­in­for­ma­tion issues linked to con­ser­v­a­tive pages, includ­ing those of Bre­it­bart, Don­ald Trump Jr., Eric Trump and Gate­way Pun­dit. There was one esca­la­tion relat­ed to a pro­gres­sive advo­ca­cy group and one each for CNN, CBS, Yahoo and the World Health Orga­ni­za­tion.

    ...

    Face­book spokesper­son Andy Stone did not dis­pute the authen­tic­i­ty of the leaked mate­ri­als, but said that it did not pro­vide the full con­text of the sit­u­a­tion.

    In recent years, Face­book has devel­oped a lengthy set of rules that gov­ern how the plat­form mod­er­ates false or mis­lead­ing infor­ma­tion. But how those rules are applied can vary and is up to the dis­cre­tion of Face­book’s exec­u­tives.

    ...

    Dia­mond and Silk appealed the “false” rat­ing that had been applied by third-par­ty fact-check­er Lead Sto­ries on the basis that they were express­ing opin­ion and not stat­ing a fact. The rat­ing was down­grad­ed by Lead Sto­ries to “part­ly false” and they were tak­en out of “repeat offend­er” sta­tus. Even so, some­one at Face­book described as “Policy/Leadership” inter­vened and instruct­ed the team to remove both strikes from the account, accord­ing to the leaked mate­r­i­al.
    ...

    And that’s why we can infer that the ulti­mate rea­son Steve Ban­non did­n’t have his Face­book account banned after he called for the behead­ing of Antho­ny Fau­ci is that Mark Zucker­berg did­n’t want Ban­non banned. How many times is Steve Ban­non allowed to call for the mur­der of gov­ern­ment offi­cials before Face­book sus­pends his account? It’s up to the whims of Mark Zucker­berg.

    In relat­ed news, Steve Ban­non is now call­ing for Pres­i­dent Trump to launch an inves­ti­ga­tion of Fau­ci. No new death threats from Ban­non so far. But when there are, Face­book will be sure to explain why those death threats are very bad and should be removed but also not a bannable offense. Well, ok, Face­book won’t actu­al­ly explain why this is the case. Mark clear­ly does­n’t have to explain him­self to any­one. Includ­ing sen­a­tors.

    Posted by Pterrafractyl | November 18, 2020, 5:47 pm
  17. Now that the alleged cen­sor­ship of con­ser­v­a­tive voic­es by ‘Big Tech’ is one of the stan­dard fake right-wing griev­ances poised to be ampli­fied even more in com­ing years — espe­cial­ly if Pres­i­dent Trump forms a right-wing media net­work to rival Fox News — here’s a pair of arti­cles that give us a pre­view of the Big Lie media land­scape we should expect to dom­i­nate after com­pa­nies are suc­cess­ful­ly intim­i­dat­ed into fur­ther lim­it­ing their already lim­it­ed cen­sor­ship of right-wing dis­in­for­ma­tion. A Big Lie media land­scape that dou­bles as an extrem­ist recruit­ing cam­paign.

    First, here’s an arti­cle about a recent analy­sis that exam­ined the rel­a­tive reach of left-wing and right-wing voic­es on US social media plat­forms. As every­one should expect, social media is almost com­plete­ly dom­i­nat­ed by right-wing voic­es, with right-wing fig­ures get­ting audi­ences that dwarf those of their left-wing coun­ter­parts. Right-wing fig­ures like Ben Shapiro, who also hap­pens to be one of the biggest com­plain­ers about Big Tech’s alleged bias against con­ser­v­a­tives. As Shapiro wrote on Twit­ter on Octo­ber 15, “What we are watch­ing — the mil­i­ta­riza­tion of social media on behalf of Democ­rats, and the overt sup­pres­sion of mate­r­i­al dam­ag­ing to Democ­rats to the cheer­ing of the press — is one of the sin­gle most dan­ger­ous polit­i­cal moments I have ever seen.” As the Politi­co arti­cle points out, Shapiro’s Face­book page had rough­ly 33 mil­lion social media inter­ac­tions dur­ing the month of Octo­ber, com­pared to only 19 mil­lion for Joe Biden’s Face­book page. So the guy who has a greater social media pres­ence than the Demo­c­ra­t­ic can­di­date was post­ing on social media about how we’re watch­ing the mil­i­ta­riza­tion of social media on behalf of Democ­rats.

    Now, we should note that Shapiro was specif­i­cal­ly mak­ing that post in response to social media com­pa­nies pulling posts pro­mot­ing the NY Post’s high­ly dubi­ous­ly sourced sto­ry about Hunter Biden’s laptop(s). It was a sto­ry with so lit­tle authen­ti­ca­tion that even Fox News turned it down (so it was instead laun­dered through the Mur­doch-owned NY Post). But these claims of anti-con­ser­v­a­tive bias have been com­ing from Shapiro for years, despite his Dai­ly Wire being one of the most-shared sites on Face­book. It points towards one of the under­ly­ing rea­sons the con­ser­v­a­tive myth of the ‘Big Tech anti-con­ser­v­a­tive bias’ isn’t going away any time soon: the con­ser­v­a­tive dom­i­na­tion of social media allows for quite a bit of high-pro­file com­plain­ing about a sup­posed anti-con­ser­v­a­tive social media bias:

    Politi­co

    Despite cries of cen­sor­ship, con­ser­v­a­tives dom­i­nate social media

    GOP-friend­ly voic­es far out­weigh lib­er­als in dri­ving con­ver­sa­tions on hot top­ics lead­ing up to the elec­tion, a POLITICO analy­sis shows.

    By MARK SCOTT

    10/26/2020 07:55 PM EDT
    Updat­ed: 10/27/2020 01:38 PM EDT

    Repub­li­cans have turned alleged lib­er­al bias in Sil­i­con Val­ley into a major clos­ing theme of the elec­tion cycle, haul­ing tech CEOs in for vir­tu­al grillings on Capi­tol Hill while Pres­i­dent Don­ald Trump threat­ens legal pun­ish­ment for com­pa­nies that cen­sor his sup­port­ers.

    But a POLITICO analy­sis of mil­lions of social media posts shows that con­ser­v­a­tives still rule online.

    Right-wing social media influ­encers, con­ser­v­a­tive media out­lets and oth­er GOP sup­port­ers dom­i­nate online dis­cus­sions around two of the election’s hottest issues, the Black Lives Mat­ter move­ment and vot­er fraud, accord­ing to the review of Face­book posts, Insta­gram feeds, Twit­ter mes­sages and con­ver­sa­tions on two pop­u­lar mes­sage boards. And their lead isn’t close.

    As racial protests engulfed the nation after George Floyd’s death, users shared the most-viral right-wing social media con­tent more than 10 times as often as the most pop­u­lar lib­er­al posts, fre­quent­ly asso­ci­at­ing the Black Lives Mat­ter move­ment with vio­lence and accus­ing Democ­rats like Joe Biden of sup­port­ing riots.

    Peo­ple also shared con­ser­v­a­tives’ most-read claims of ram­pant vot­er fraud rough­ly twice as often as they did lib­er­als’ or tra­di­tion­al media out­lets’ dis­cus­sions of the issue, the analy­sis found. The con­ser­v­a­tives’ tac­tics includ­ed spin­ning main­stream media cov­er­age on vot­ing irreg­u­lar­i­ties into elab­o­rate con­spir­a­cy the­o­ries, some­times echoed by Trump, that Demo­c­ra­t­ic law­mak­ers are try­ing to steal November’s elec­tion.

    POLITICO worked with researchers at the Insti­tute for Strate­gic Dia­logue, a Lon­don-based non­par­ti­san think tank that tracks extrem­ism online, to ana­lyze data from the institute’s exten­sive col­lec­tion of infor­ma­tion scraped from mul­ti­ple social media plat­forms.

    The find­ings demon­strate how a small num­ber of con­ser­v­a­tive users rou­tine­ly out­pace their lib­er­al rivals and tra­di­tion­al news out­lets in dri­ving the online con­ver­sa­tion — ampli­fy­ing their impact a lit­tle more than a week before Elec­tion Day. They con­tra­dict the pre­vail­ing polit­i­cal rhetoric from some Repub­li­can law­mak­ers that con­ser­v­a­tive voic­es are cen­sored online — indi­cat­ing that instead, right-lean­ing talk­ing points con­tin­ue to shape the world­views of mil­lions of U.S. vot­ers.

    “Their sto­ries are cap­ti­vat­ing, easy to remem­ber and cre­ate an out­sized foot­print online,” said Yochai Ben­kler, co-direc­tor of the Berk­man Klein Cen­ter for Inter­net and Soci­ety at Har­vard Uni­ver­si­ty, who pub­lished a sep­a­rate report into how lead­ing politi­cians like Trump and main­stream news out­lets were cen­tral to spread­ing mis­in­for­ma­tion about mail-in vot­ing.

    None of that has stopped the pres­i­dent and his GOP allies from ham­mer­ing the mes­sage that tech giants sys­tem­at­i­cal­ly silence and throt­tle con­ser­v­a­tive mes­sages — or as the pres­i­dent has charged, “The Rad­i­cal Left is in total com­mand & con­trol of Face­book, Insta­gram, Twit­ter and Google.”

    “Every year, count­less Amer­i­cans are banned, black­list­ed, and silenced through arbi­trary or mali­cious enforce­ment of ever-shift­ing rules,” Trump said in a Sep­tem­ber appear­ance with Attor­ney Gen­er­al William Barr and nine state AGs to dis­cuss “social media abus­es.” In a report this month, Repub­li­cans on the House Judi­cia­ry Com­mit­tee put it even more plain­ly: “Big Tech Is Out to Get Con­ser­v­a­tives.”

    Face­book CEO Mark Zucker­berg, Twit­ter CEO Jack Dorsey and Google CEO Sun­dar Pichai will face ques­tions on the issue at a Sen­ate Com­merce Com­mit­tee hear­ing Wednes­day, while Zucker­berg and Dorsey will tes­ti­fy in front of the Sen­ate Judi­cia­ry Com­mit­tee on Nov. 17.

    The issue has sim­mered for months, amid a series of inci­dents in which Face­book and Twit­ter slapped fact-check labels on — and, in some cas­es, delet­ed — posts from Trump about the elec­tion or Covid-19 after deem­ing them mis­lead­ing or false. Con­ser­v­a­tive com­plaints esca­lat­ed this month after both com­pa­nies took steps to reduce the spread of a New York Post arti­cle that made uncor­rob­o­rat­ed claims about Biden’s ties to Ukraine.

    “What we are watch­ing — the mil­i­ta­riza­tion of social media on behalf of Democ­rats, and the overt sup­pres­sion of mate­r­i­al dam­ag­ing to Democ­rats to the cheer­ing of the press — is one of the sin­gle most dan­ger­ous polit­i­cal moments I have ever seen,” con­ser­v­a­tive com­men­ta­tor Ben Shapiro wrote on Twit­ter on Oct. 15.

    But Shapiro’s own influ­ence appears undimmed. His Face­book posts gar­nered more than 33 mil­lion social media inter­ac­tions such as com­ments, shares and likes over the last 30 days, accord­ing to Crowd­Tan­gle, an ana­lyt­ics tool owned by Face­book. Biden’s page, in con­trast, received 19 mil­lion inter­ac­tions over the same peri­od.

    Con­ser­v­a­tive con­vey­or belt

    POLITICO worked with the Insti­tute for Strate­gic Dialogue’s researchers to ana­lyze which online voic­es were loud­est and which mes­sag­ing was most wide­spread around the Black Lives Mat­ter move­ment and the poten­tial for vot­er fraud in November’s elec­tion.

    That includ­ed ana­lyz­ing more than 2 mil­lion social media posts across Face­book, Insta­gram, Twit­ter and the mes­sage boards Red­dit and 4Chan. The posts orig­i­nat­ed from over 500,000 social media accounts and were linked to key­words and online hash­tags asso­ci­at­ed with both issues.

    The researchers col­lect­ed the data between Aug. 28 and Sept. 25, and ranked the posts by how wide­ly they had been shared and copied from one account to anoth­er. The analy­sis cap­tured dis­cus­sions from across the polit­i­cal spec­trum, but did not include con­ver­sa­tions in pri­vate chan­nels, like invite-only Face­book groups, that were off-lim­its to the researchers.

    “You see the same peo­ple pop­ping up all the time,” said Cia­ran O’Connor, a dis­in­for­ma­tion ana­lyst at the Insti­tute for Strate­gic Dia­logue. “There’s no evi­dence of coor­di­na­tion, it’s more like group­think. Any­thing that attacks Biden or the Democ­rats is fair game.”

    Left-wing voic­es, includ­ing ACRONYM, a lib­er­al cam­paign group that has fund­ed par­ti­san news out­lets in sev­er­al swing states, have also politi­cized events for their own gain. For­eign gov­ern­ments, notably Rus­sia, con­tin­ue to ped­dle false­hoods at the Amer­i­can pub­lic. A small­er data col­lec­tion, run by the Insti­tute for Strate­gic Dia­logue between Oct. 20 and Oct. 23 around vot­er fraud con­ver­sa­tions, showed that lib­er­al voic­es had per­formed rough­ly on par with their con­ser­v­a­tive coun­ter­parts.

    But in the pre­vi­ous month­long analy­sis about Black Lives Mat­ter and vot­er fraud, the loud­est voic­es belong to con­ser­v­a­tives like Shapiro, Repub­li­can activist James O’Keefe and Char­lie Kirk, founder of the advo­ca­cy group Turn­ing Point USA.

    Aid­ed by well-estab­lished con­ser­v­a­tive media out­lets like the West­ern Jour­nal and Bre­it­bart News, as well as new out­lets like The Post Mil­len­ni­al, these influ­encers have gar­nered an out­sized audi­ence, stok­ing claims that the Black Lives Mat­ter move­ment is inher­ent­ly vio­lent and that fraud­u­lent bal­lots are already flood­ing next month’s elec­tion.

    At the end of August, for instance, Dan Bongi­no, a con­ser­v­a­tive com­men­ta­tor with mil­lions of online fol­low­ers, wrote on Face­book that Black Lives Mat­ter pro­test­ers had called for the mur­der of police offi­cers in Wash­ing­ton, D.C. Bongino’s social media posts are rou­tine­ly some of the most shared con­tent across Face­book, based on CrowdTangle’s data.

    The claims — first made by a far-right pub­li­ca­tion that the South­ern Pover­ty Law Cen­ter labeled as pro­mot­ing con­spir­a­cy the­o­ries — were not rep­re­sen­ta­tive of the actions of the Black Lives Mat­ter move­ment. But Bongino’s post was shared more than 30,000 times, and received 141,000 oth­er engage­ments such as com­ments and likes, accord­ing to Crowd­Tan­gle.

    In con­trast, the best-per­form­ing lib­er­al post around Black Lives Mat­ter — from DL Hugh­ley, the actor — gar­nered less than a quar­ter of the Bongi­no post’s social media trac­tion, based on data ana­lyzed by POLITICO.

    ...

    How claims go viral

    On Aug. 29, an arti­cle in the New York Post dropped a bomb­shell: Democ­rats were using mail-in vot­er fraud to steal the elec­tion.

    Under the head­line “Con­fes­sions of a vot­er fraud: I was a mas­ter at fix­ing mail-in bal­lots,” an anony­mous Demo­c­ra­t­ic Par­ty con­sul­tant out­lined an alleged years­long cam­paign to skew local, state and nation­al elec­tions in favor of lib­er­al can­di­dates.

    The Demo­c­ra­t­ic Par­ty issued mul­ti­ple denials, and an ear­li­er review by the Brook­ings Insti­tu­tion high­light­ed extreme­ly low lev­els of vot­er fraud across the coun­try. But the Post arti­cle soon played a cen­tral role in the talk­ing points of con­ser­v­a­tive influ­encers, Repub­li­can polit­i­cal groups and oth­er influ­encers pro­mot­ing the fraud claims on social media, accord­ing to POLITICO’s analy­sis.

    ...

    Two days after the arti­cle was pub­lished, Bre­it­bart News picked up the sto­ry in a Face­book post that was shared more than 37,000 times. Oth­er con­ser­v­a­tive voic­es, includ­ing the talk radio host Mark Levin, sim­i­lar­ly pro­mot­ed it to their large online audi­ences. Jon Levine, the author of the Post’s sto­ry, was inter­viewed on Fox News, while Trump’s offi­cial cam­paign repub­lished the arti­cle on its Face­book page, which has 1.6 mil­lion fol­low­ers.

    In total, the Post vot­er fraud alle­ga­tions have been shared more than 185,000 times on Face­book, gar­ner­ing 340,000 engage­ments such as com­ments and likes, based on Crowd­Tan­gle data.

    In con­trast, the best per­form­ing post on this top­ic from anoth­er tra­di­tion­al media out­let — an Axios arti­cle high­light­ing that the FBI had not seen any evi­dence of nation­al vot­er fraud — was shared just 15,000 times on Face­book and received just 52,000 col­lec­tive social media engage­ments.

    ...

    ————

    “Despite cries of cen­sor­ship, con­ser­v­a­tives dom­i­nate social media” by MARK SCOTT; Politi­co; 10/26/2020

    “Right-wing social media influ­encers, con­ser­v­a­tive media out­lets and oth­er GOP sup­port­ers dom­i­nate online dis­cus­sions around two of the election’s hottest issues, the Black Lives Mat­ter move­ment and vot­er fraud, accord­ing to the review of Face­book posts, Insta­gram feeds, Twit­ter mes­sages and con­ver­sa­tions on two pop­u­lar mes­sage boards. And their lead isn’t close.”

    It’s not even close. The most viral right-wing social media con­tent gets shared more than 10 times as often as the most pop­u­lar left-wing posts. And that includes posts push­ing vot­er fraud claims. Right-wing sites real­ly do have the pow­er to dig­i­tal­ly drown out oth­er types of con­tent, mak­ing today’s social media ecosys­tem the per­fect Big Lie machine. Not only are right-wing Big Lies heav­i­ly pro­mot­ed, large­ly with impuni­ty, but there’s an entire cru­sade about ‘Big Tech’s bias against con­ser­v­a­tives’ that, itself, relies on that very same right-wing dom­i­na­tion of social media. It’s beyond Orwellian:

    ...
    As racial protests engulfed the nation after George Floyd’s death, users shared the most-viral right-wing social media con­tent more than 10 times as often as the most pop­u­lar lib­er­al posts, fre­quent­ly asso­ci­at­ing the Black Lives Mat­ter move­ment with vio­lence and accus­ing Democ­rats like Joe Biden of sup­port­ing riots.

    Peo­ple also shared con­ser­v­a­tives’ most-read claims of ram­pant vot­er fraud rough­ly twice as often as they did lib­er­als’ or tra­di­tion­al media out­lets’ dis­cus­sions of the issue, the analy­sis found. The con­ser­v­a­tives’ tac­tics includ­ed spin­ning main­stream media cov­er­age on vot­ing irreg­u­lar­i­ties into elab­o­rate con­spir­a­cy the­o­ries, some­times echoed by Trump, that Demo­c­ra­t­ic law­mak­ers are try­ing to steal November’s elec­tion.

    ...

    The find­ings demon­strate how a small num­ber of con­ser­v­a­tive users rou­tine­ly out­pace their lib­er­al rivals and tra­di­tion­al news out­lets in dri­ving the online con­ver­sa­tion — ampli­fy­ing their impact a lit­tle more than a week before Elec­tion Day. They con­tra­dict the pre­vail­ing polit­i­cal rhetoric from some Repub­li­can law­mak­ers that con­ser­v­a­tive voic­es are cen­sored online — indi­cat­ing that instead, right-lean­ing talk­ing points con­tin­ue to shape the world­views of mil­lions of U.S. vot­ers.

    “Their sto­ries are cap­ti­vat­ing, easy to remem­ber and cre­ate an out­sized foot­print online,” said Yochai Ben­kler, co-direc­tor of the Berk­man Klein Cen­ter for Inter­net and Soci­ety at Har­vard Uni­ver­si­ty, who pub­lished a sep­a­rate report into how lead­ing politi­cians like Trump and main­stream news out­lets were cen­tral to spread­ing mis­in­for­ma­tion about mail-in vot­ing.

    None of that has stopped the pres­i­dent and his GOP allies from ham­mer­ing the mes­sage that tech giants sys­tem­at­i­cal­ly silence and throt­tle con­ser­v­a­tive mes­sages — or as the pres­i­dent has charged, “The Rad­i­cal Left is in total com­mand & con­trol of Face­book, Insta­gram, Twit­ter and Google.”
    ...

    Per­haps the most top­i­cal exam­ple of this extreme imbal­ance is the myth of mas­sive left-wing mail-in vot­er fraud. It start­ed with a NY Post sto­ry last August claim­ing that Democ­rats were plan­ning on steal­ing the elec­tion through mass vot­er fraud, and with­in days that nar­ra­tive com­plete­ly dom­i­nat­ed how the sto­ry was cov­ered. Denials of the accu­sa­tion were just back­ground noise:

    ...
    How claims go viral

    On Aug. 29, an arti­cle in the New York Post dropped a bomb­shell: Democ­rats were using mail-in vot­er fraud to steal the elec­tion.

    Under the head­line “Con­fes­sions of a vot­er fraud: I was a mas­ter at fix­ing mail-in bal­lots,” an anony­mous Demo­c­ra­t­ic Par­ty con­sul­tant out­lined an alleged years­long cam­paign to skew local, state and nation­al elec­tions in favor of lib­er­al can­di­dates.

    The Demo­c­ra­t­ic Par­ty issued mul­ti­ple denials, and an ear­li­er review by the Brook­ings Insti­tu­tion high­light­ed extreme­ly low lev­els of vot­er fraud across the coun­try. But the Post arti­cle soon played a cen­tral role in the talk­ing points of con­ser­v­a­tive influ­encers, Repub­li­can polit­i­cal groups and oth­er influ­encers pro­mot­ing the fraud claims on social media, accord­ing to POLITICO’s analy­sis.

    ...

    Two days after the arti­cle was pub­lished, Bre­it­bart News picked up the sto­ry in a Face­book post that was shared more than 37,000 times. Oth­er con­ser­v­a­tive voic­es, includ­ing the talk radio host Mark Levin, sim­i­lar­ly pro­mot­ed it to their large online audi­ences. Jon Levine, the author of the Post’s sto­ry, was inter­viewed on Fox News, while Trump’s offi­cial cam­paign repub­lished the arti­cle on its Face­book page, which has 1.6 mil­lion fol­low­ers.

    In total, the Post vot­er fraud alle­ga­tions have been shared more than 185,000 times on Face­book, gar­ner­ing 340,000 engage­ments such as com­ments and likes, based on Crowd­Tan­gle data.

    In con­trast, the best per­form­ing post on this top­ic from anoth­er tra­di­tion­al media out­let — an Axios arti­cle high­light­ing that the FBI had not seen any evi­dence of nation­al vot­er fraud — was shared just 15,000 times on Face­book and received just 52,000 col­lec­tive social media engage­ments.
    ...

    And when social media com­pa­nies made the deci­sion to pull posts pro­mot­ing the high­ly sus­pect ‘Octo­ber Surprise’-ish NY Post sto­ry about Hunter Biden’s lap­tops, we had Ben Shapiro, one of the biggest voic­es on social media, call­ing it the “mil­i­ta­riza­tion of social media on behalf of Democ­rats”. Shapiro’s Face­book page got far more inter­ac­tions than Joe Biden’s in Octo­ber, and he was com­plain­ing about the mil­i­ta­riza­tion of social media on behalf of the Democ­rats:

    ...
    “What we are watch­ing — the mil­i­ta­riza­tion of social media on behalf of Democ­rats, and the overt sup­pres­sion of mate­r­i­al dam­ag­ing to Democ­rats to the cheer­ing of the press — is one of the sin­gle most dan­ger­ous polit­i­cal moments I have ever seen,” con­ser­v­a­tive com­men­ta­tor Ben Shapiro wrote on Twit­ter on Oct. 15.

    But Shapiro’s own influ­ence appears undimmed. His Face­book posts gar­nered more than 33 mil­lion social media inter­ac­tions such as com­ments, shares and likes over the last 30 days, accord­ing to Crowd­Tan­gle, an ana­lyt­ics tool owned by Face­book. Biden’s page, in con­trast, received 19 mil­lion inter­ac­tions over the same peri­od.
    ...

    So what we’re obvi­ous­ly look­ing at here is a delib­er­ate intim­i­da­tion cam­paign intend­ed to pres­sure social media com­pa­nies into drop­ping the lim­it­ed and tepid restric­tions they already place on right-wing dis­in­for­ma­tion. It’s an intim­i­da­tion cam­paign that is large­ly waged on social media and that relies on right-wing dom­i­na­tion of social media to ampli­fy the pres­sure. Again, it’s beyond Orwellian.

    With all that in mind, here’s a peek at what we can expect should the right wing suc­ceed in remov­ing the already restrained cen­sor­ship of right-wing con­tent: the 4Chan-iza­tion of social media. Because as the fol­low­ing Vice arti­cle makes clear, once you have mod­er­a­tors who are com­mit­ted to inter­pret­ing right-wing con­tent in the most gen­er­ous light pos­si­ble and who are intent on find­ing rea­sons not to remove any­thing but the most extreme con­tent, it’s only a mat­ter of time before sites are effec­tive­ly turned into neo-Nazi recruit­ing zones, where con­stant expo­sure to extrem­ist memes and images desen­si­tizes audi­ences while build­ing an echo cham­ber for Big Lie prop­a­ga­tion. And it only takes a rel­a­tive hand­ful of active play­ers to cre­ate this envi­ron­ment. In the case of 4Chan, a site that start­ed as a rel­a­tive­ly pro­gres­sive forum for ani­me, it was a sin­gle lead mod­er­a­tor, known as “RapeApe”, who drove the shift after being giv­en the pow­er to hire and fire oth­er mod­er­a­tors (known as “jan­i­tors” on the site) when the site was sold to a new own­er in 2015. Under RapeApe’s rule, the “/pol/” pol­i­tics forum on the site became over­run with far right posters who soon start­ed “raid­ing” oth­er forums on the site, fill­ing those forums with “sieg heils” and oth­er far right con­tent. Even­tu­al­ly “/pol/” and “RapeApe” won, and 4Chan has effec­tive­ly become one of the lead­ing far right meme fac­to­ries on the inter­net. It’s a reminder that the end result of the right-wing pro­pa­gan­da cam­paign to intim­i­date social media com­pa­nies into allow­ing any and all right-wing con­tent is to inevitably turn the entire inter­net into 4Chan:

    Vice

    The Man Who Helped Turn 4chan Into the Inter­net’s Racist Engine
    4chan mod­er­a­tors and leaked chat logs show that the infa­mous image­board did­n’t become the hate­ful site it’s known as by acci­dent. A pow­er­ful mod­er­a­tor inten­tion­al­ly helped make it that way.

    by Rob Arthur
    Novem­ber 2, 2020, 8:39am

    In two decades, 4chan has evolved from a mes­sage board where peo­ple talked about ani­me to a casu­al­ly racist but influ­en­tial cre­ation engine of inter­net cul­ture, and now into a gen­er­a­tor of far-right pro­pa­gan­da, a place where dan­ger­ous con­spir­a­cy the­o­ries orig­i­nate, and an ampli­fi­er of online big­otry. This evo­lu­tion, accord­ing to 4chan mod­er­a­tors who spoke to Moth­er­board and leaked chat logs, is in large part because of an anony­mous admin­is­tra­tor who used mod­er­a­tion enforce­ment, or lack there­of, to allow the influ­en­tial web­site to become a cru­cial arm of the far-right.

    4chan attract­ed hordes of dis­af­fect­ed young men who trolled var­i­ous oth­er web­sites, cre­at­ing pop­u­lar memes (many of them racist or sex­ist) and orig­i­nat­ing a great deal of inter­net cul­ture. In recent years, how­ev­er, 4chan has evolved into some­thing active­ly sin­is­ter: a hive of big­otry, threats of vio­lence, and far right ide­ol­o­gy. This rapid and severe descent wasn’t dri­ven sole­ly by the mass action of dis­grun­tled young men.

    One cur­rent and three for­mer 4chan mod­er­a­tors believe the process was aid­ed along by the de fac­to admin­is­tra­tor of the site, a far right sup­port­er with the han­dle “RapeApe” who helped turn the site into a meme fac­to­ry for extreme pol­i­tics. Moth­er­board agreed to let the jan­i­tors speak anony­mous­ly because they said they signed non-dis­clo­sure agree­ments with 4chan.

    Because of 4chan’s often wild­ly offen­sive con­tent, many assume that the site is com­plete­ly unmod­er­at­ed. But 4chan has a corps of vol­un­teers, called “jan­i­tors,” “mods,” or “jan­nies,” whose job it is—theoretically—to make sure that con­tent on the site abides by the rules. (4chan draws a dis­tinc­tion between more senior “mod­er­a­tors,” who are respon­si­ble for all boards, and “jan­i­tors,” who patrol one or two; we refer to them inter­change­ably because jan­i­tors also mod­er­ate dis­cus­sion.) The jan­i­tors we spoke to and a major trove of leaked chat logs from the jan­i­tors’ pri­vate com­mu­ni­ca­tions chan­nel tell the sto­ry of RapeApe’s rise from junior jan­ny to some­one who could decide what kind of con­tent was allowed on the site and where, shap­ing 4chan into the hate­ful, rad­i­cal­iz­ing online com­mu­ni­ty it’s known for today.

    Start­ed in 2003 by Christo­pher Poole, 4chan was ini­tial­ly a place for peo­ple to dis­cuss ani­me. Since its found­ing, the site has expand­ed to include dis­cus­sion boards on every­thing from trav­el to fit­ness to video games to origa­mi. It now claims around 22 mil­lion vis­i­tors a month. Some parts of it are also recruit­ing grounds for Neo-Nazi groups.

    4chan’s more recent extrem­ist ele­ment can be traced back to an infa­mous board: “polit­i­cal­ly incor­rect,” which is list­ed as “/pol/” on the site. Osten­si­bly devot­ed to dis­cussing pol­i­tics, /pol/ threads often involve users call­ing each oth­er racist terms, argu­ing for the geno­cide of whole nations or eth­nic­i­ties, or debat­ing about whether dif­fer­ent con­cepts are “degenerate”—a Nazi term of art for mate­r­i­al (or peo­ple) that ought to be purged. Posters there cel­e­brate and lion­ize some of the most noto­ri­ous mass mur­der­ers of the last decade, from Anders Breivik to Dylann Roof.

    The forum has pop­u­lar­ized iconog­ra­phy like Pepe the Frog, a car­toon char­ac­ter reap­pro­pri­at­ed by some as a racist sym­bol of the far right that Pres­i­dent Trump’s son has tweet­ed images of. Accord­ing to aca­d­e­m­ic researchers, 4chan’s /pol/ has become one of the most prodi­gious fac­to­ries for con­tent on the inter­net. And the bound­aries of its influ­ence spread far beyond the bor­ders of 4chan itself, affect­ing every­thing from YouTube to Twit­ter to main­stream Repub­li­can pol­i­tics.

    The polit­i­cal­ly incor­rect board wasn’t always this bad. In fact, for­mer 4chan mod­er­a­tors told Moth­er­board that /pol/ wasn’t added to the site until 2011, eight years after the site start­ed. For the first few years of its exis­tence, accord­ing to two for­mer jan­i­tors, Poole intend­ed the /pol/ board to siphon off the racism from oth­er areas of the site so that oth­er users could enjoy their own, board-spe­cif­ic pur­suits.

    “It was start­ed as a con­tain­ment board,” one for­mer mod­er­a­tor told me about /pol/. Accord­ing to chat logs and for­mer mod­er­a­tors, in its ear­ly days, mod­er­a­tors at 4chan removed racist posts and users from oth­er boards while ignor­ing them with­in one board, “ran­dom” (/b/, which was sup­posed to be a kind of “no rules, any­thing goes” space. /b/ is where many ear­ly memes were born, and is where the hack­tivist group Anony­mous came from). Such posts also some­times slipped by on the /pol/ board as well, even though they tech­ni­cal­ly vio­lat­ed the rules there. “Enforce­ment was more active in the past,” a for­mer mod­er­a­tor said. In con­trast to its cur­rent far right polit­i­cal cli­mate, “4chan skewed extreme­ly pro­gres­sive when it first start­ed,” accord­ing to the mod, although the use of big­ot­ed and misog­y­nis­tic lan­guage was wide­spread even then.

    But 4chan has changed in recent years. Sev­er­al stud­ies of the site have shown that 4chan has become more racist, big­ot­ed, and tox­ic in recent years—especially the /pol/ board. Ide­olo­gies prop­a­gat­ed on /pol/ have become linked with vio­lence and domes­tic ter­ror­ism. 4chan jan­i­tors’ main job is to clean up and remove child pornog­ra­phy, lest 4chan draw the wrath of fed­er­al author­i­ties, but they also shape the dis­course there by set­ting the lim­its of accept­able dis­cus­sion. If a thread goes off-top­ic or starts to get too racist, the jan­i­tors have the respon­si­bil­i­ty for ask­ing mods to delete it and poten­tial­ly issue bans against spe­cif­ic users.

    Accord­ing to leaked logs and the 4chan jan­i­tors who spoke with Moth­er­board, the man­ag­er of 4chan’s jan­i­tors is RapeApe. Rel­a­tive­ly lit­tle is known about him, even by the jan­i­tors who spoke with us and worked for him, although he has been super­vis­ing 4chan’s day-to-day oper­a­tions for around a decade.

    In 2015, Poole announced that he had decid­ed to sell 4chan to a Japan­ese busi­ness­man named Hiroyu­ki Nishimu­ra. Nishimu­ra pre­vi­ous­ly owned 2chan, a Japan­ese web­site which inspired 4chan. Jan­i­tors who spoke with Moth­er­board described Nishimu­ra as being almost com­plete­ly hands off, leav­ing mod­er­a­tion of the site pri­mar­i­ly to RapeApe.

    “[RapeApe] basi­cal­ly ful­fills the role of an admin­is­tra­tor con­sid­er­ing Hiroyu­ki [Nishimu­ra], the actu­al admin, does­n’t touch the site,” a cur­rent jan­i­tor told me. Poole and Nishimu­ra did not respond to repeat­ed requests for com­ment. RapeApe respond­ed by send­ing an email that con­tained only a sin­gle link to a video of naked mus­cu­lar men danc­ing.

    Even pri­or to the site’s change in own­er­ship, RapeApe func­tioned as the pri­ma­ry judge of what con­sti­tut­ed accept­able con­tent on the site, as well as the per­son who edu­cat­ed the staff on what did and didn’t cross the line. As Gamer­gate became a sub­ject on the site in 2014, 4chan users began harass­ing women in the video game indus­try due to what they per­ceived as pro­gres­sive bias in report­ing on games. Even­tu­al­ly, RapeApe tried to stop 4chan’s cam­paign of intim­i­da­tion. “[Gamer­gate] is no longer allowed on the video game boards. So said [RapeApe],” one jan­i­tor informed anoth­er in the leaked chats. When oth­er jan­nies protest­ed, RapeApe rapid­ly shut them down: “This isn’t a democ­ra­cy,” RapeApe wrote. “Gamer­gate has over­stayed its wel­come. It is start­ing to cause a mas­sive bur­den for mod­er­a­tion.”

    In 2015, an anony­mous for­mer mod­er­a­tor leaked an exten­sive chat his­to­ry of the jan­i­tors from 2012–2015 to a file-shar­ing ser­vice. One of the for­mer jan­i­tors includ­ed in the chats con­firmed their authen­tic­i­ty. Accord­ing to a brief mes­sage post­ed with the logs, the leak­er was unhap­py with “the direc­tion of the site.” From those leaked logs and the cur­rent and for­mer jan­i­tors who spoke with Moth­er­board, RapeApe claims to be a mil­i­tary vet­er­an who served in Afghanistan as well as a vora­cious read­er, inter­est­ed in video games, guns, and Warham­mer: 40,000. He often com­plained about his fam­i­ly imped­ing his work and was afraid they would walk in on him look­ing at ques­tion­able or porno­graph­ic posts as he was mod­er­at­ing.

    Accord­ing to the jan­i­tor and chat logs (as well as a delet­ed Twit­ter account two staff mem­bers con­firmed was his), RapeApe is also polit­i­cal­ly con­ser­v­a­tive and racist. One for­mer jan­i­tor described him as “a typ­i­cal right winger and /pol/ dude.” His Twit­ter account fea­tured him respond­ing approv­ing­ly to Tuck­er Carl­son clips, urg­ing anoth­er user to buy an AR-15 rifle for self-defense, won­der­ing whether the state would force peo­ple to be homo­sex­u­al and sug­gest­ing that Twit­ter was “staffed by left­ists” who were delet­ing con­ser­v­a­tive users’ accounts. In con­ver­sa­tions with oth­er jan­i­tors in the leaked chats, he found humor in hor­ri­fy­ing news about riots, shoot­ings, and the Ebo­la epidemic—especially when that news involved Black peo­ple dying.

    But RapeApe isn’t just a typ­i­cal /pol/ user who hap­pens to run the site. Accord­ing to three cur­rent and for­mer staff mem­bers, RapeApe shaped 4chan into a reflec­tion of his own polit­i­cal beliefs. “RapeApe has an agen­da: he wants /pol/ to have influ­ence on the rest of the site and [its] pol­i­tics,” a cur­rent jan­i­tor said.

    Alone, RapeApe couldn’t steer 4chan to the far right. But he super­vis­es a staff of dozens of vol­un­teers who con­trol dis­course on the boards. Accord­ing to the leaked chats and jan­i­tors who spoke with Moth­er­board, he instruct­ed jan­i­tors on how to han­dle the more big­ot­ed con­tent on 4chan—and dis­missed them if they delet­ed con­tent he likes. He took a spe­cial inter­est in the /pol/ board, telling a novice jan­i­tor in the chat logs to “treat /pol/ with kid gloves. So long as they obey the rules, they are allowed to sup­port what­ev­er abom­inable polit­i­cal posi­tions they want.”

    4chan has an exten­sive list of rules post­ed on the site and each board has its own small­er set of edicts. A lit­tle-known and rarely enforced 4chan reg­u­la­tion, Glob­al Rule #3, pro­hibits racist con­tent on the site. But the leaked chat logs show many inci­dents of mod­er­a­tors and jan­i­tors dis­cussing when racism got severe enough that it ought to be banned. Indeed, RapeApe him­self delet­ed at least one thread for vio­lat­ing Rule #3 ear­ly on in his 4chan career, before he became a man­ag­er.

    Once he became head mod­er­a­tor, RapeApe began to post reminders that mod­er­a­tors ought to be as hands-off as pos­si­ble. In the leaked logs and accord­ing to cur­rent and for­mer jan­i­tors, RapeApe pushed his staff into a posi­tion where almost no con­tent could run afoul of the rule against racism. Instruct­ing the jan­i­tors, RapeApe wrote, “And remem­ber that with racism we’re tar­get­ing the intent of the poster and not the words them­selves.” One cur­rent jan­i­tor told me that in prac­tice, with­in 4chan’s warped, irony-poi­soned cul­ture, this meant there was no way to ban a user for even the most fla­grant, big­ot­ed lan­guage or images. They could always claim that the intent wasn’t racist, even if the con­tent unques­tion­ably was.

    “The plau­si­ble deni­a­bil­i­ty excuse for racism—I was just jok­ing, I was just trolling—is bullsh it,” Whit­ney Phillips, an Assis­tant Pro­fes­sor of Com­mu­ni­ca­tion and Rhetor­i­cal Stud­ies at Syra­cuse Uni­ver­si­ty and author of This Is Why We Can’t Have Nice Things: Map­ping the Rela­tion­ship Between Online Trolling and Main­stream Cul­ture, told Moth­er­board. “Intent can mat­ter when think­ing about the things peo­ple say, but it mat­ters very lit­tle when con­sid­er­ing con­se­quences. Whether or not some­one says a racist thing with hate in their heart, they’re still say­ing a racist thing and that still con­tributes to dehu­man­iza­tion and the nor­mal­iza­tion of harm. Any­way, the very cri­te­ri­on is absurd, as you can’t assess what’s in some­one’s heart just by look­ing at the things they post, espe­cial­ly to a place like 4chan. The only rea­son­able con­clu­sion is that, what­ev­er might have been writ­ten in the site rules, this mod­er­a­tor ensured that there was no pol­i­cy against racism. Instead it became a pro-racism pol­i­cy.”

    The leaked chat logs show that RapeApe did­n’t want /pol/ to be total­ly unmod­er­at­ed, despite allow­ing racist con­tent. He was con­cerned with mak­ing sure 4chan wasn’t host­ing ille­gal mate­r­i­al. “Most­ly I just want to keep the site legal,” he wrote to the staff in one mes­sage in the leaked chats. He post­ed fre­quent reminders to the chan­nel to “take it easy” and ignore, rather than ban, racist con­tent. In the leaked chats, RapeApe quotes judi­cial deci­sions on whether pho­tos depict­ing ani­mal abuse are ille­gal, con­clud­ing that they only rise to that lev­el if the abuse is sex­u­al in nature. In anoth­er case, he reluc­tant­ly told a jan­i­tor to delete some revenge porn, though not with­out belit­tling laws against it.

    Nishimura’s purchase of the site in 2016 and RapeApe’s ascension to de facto administrator of 4chan coincide with an incredible 40 percent increase in the volume of racist and violent language on /pol/. Other, comparable sites and communication channels also pushed towards extreme conservatism independently of 4chan, so RapeApe and /pol/ certainly aren’t the only reasons why 4chan slid towards the far right. Some experts credited 4chan’s evolution to Donald Trump’s overtly-racist political campaign, others to an influx of new users, and still others to active interference and recruiting of 4channers by Neo-Nazi elements.

    While oth­er web­sites also host increas­ing amounts of vio­lent and big­ot­ed lan­guage, 4chan is an out­lier even com­pared to oth­er inter­net gath­er­ing places filled with sim­i­lar ide­olo­gies. A VICE News analy­sis found that there was more hate speech on /pol/ than in the com­ments on one overt­ly Neo-Nazi site, the Dai­ly Stormer. Mass mur­der­ers have post­ed man­i­festos on 4chan. White nation­al­ists have used the site to coor­di­nate protests.

    When one Neo-Nazi group polled their sup­port­ers to dis­cov­er how they came to the move­ment, /pol/ was tied for the most com­mon gate­way. Gab, anoth­er far right hotbed, con­tains about half the rate of hate speech as /pol/, and 4chan has 20 times more users. The only pop­u­lar web­sites more tox­ic than 4chan are its much small­er off­spring sites, like 8chan, now 8kun.

    Accord­ing to one cur­rent and three for­mer jan­i­tors, RapeApe’s push for a hands-off approach com­bined with his pref­er­ence for jan­i­tors who shared his polit­i­cal beliefs has shift­ed the web­site fur­ther into the extremes of big­otry and threats of vio­lence in which it now oper­ates. “He wants 4chan to be more like /pol/,” said one for­mer jan­i­tor.

    Over time, /pol/ has come to dom­i­nate the pub­lic per­cep­tion of 4chan, over­shad­ow­ing the qui­eter, less vile top­ic areas which make up much of the activ­i­ty on the site. /pol/ is reg­u­lar­ly the most active board on the site, but even so, it makes up a small por­tion of the total posts. Under RapeApe’s man­age­ment, how­ev­er, /pol/’s big­otry has metas­ta­sized.

    “[W]hen RapeApe took over ful­ly after [Poole] left, he put in a ‘lais­sez-faire’ pol­i­cy of mod­er­a­tion, know­ing exact­ly what would hap­pen, that right wing ideas would dom­i­nate the site thanks to /pol/ spilling over onto oth­er boards,” said a cur­rent jan­i­tor.

    The /pol/ forum often hosts threads in which users talk about flood­ing oth­er, unre­lat­ed boards with racial slurs and big­ot­ed imagery. These “raids” expose users who were on 4chan to dis­cuss oth­er sub­jects to its uncon­ven­tion­al, far-right pol­i­tics. Posters who logged on to the site to chat about sports or browse pornog­ra­phy could find them­selves learn­ing about Neo-Nazi ide­ol­o­gy instead. Cyn­thia Miller-Idriss, a pro­fes­sor at Amer­i­can Uni­ver­si­ty and expert on far-right extrem­ism, describes this phe­nom­e­non as “gate­way con­tent.” Sim­ply by expos­ing peo­ple to hate speech, psy­chol­o­gists have found that it’s pos­si­ble to desen­si­tize them to fur­ther hate speech and dehu­man­ize out­groups. By raid­ing oth­er boards and giv­ing users a taste of their ide­ol­o­gy, /pol/ diehards hoped to bring them into their fold.

    In one inci­dent from the chat logs, when a mod­er­a­tor tried to clean up such an “inva­sion” of the sci­ence board, RapeApe wasn’t hav­ing it. Rather than delete the thread a jan­i­tor described as plan­ning a raid, RapeApe argued that they weren’t doing any­thing against the rules. “Are they actu­al­ly van­dal­is­ing or defac­ing any­thing, or harass­ing peo­ple?” RapeApe added: “Because if they’re just post­ing things, that’s not real­ly a raid.”

    Current and former janitors say that one moderator named Modcat was fired for disagreeing with RapeApe’s laissez-faire moderation policy.

    ...

    An analy­sis of the archives of the ani­me board (/a/) Mod­cat used to patrol, derived by scrap­ing its past threads, shows that Modcat’s depar­ture and replace­ment with anoth­er jan­i­tor had con­se­quences on the lan­guage used there. Moth­er­board used a 4chan archive and wrote a pro­gram to scrape data from the board over the last five years, count­ing the num­ber of instances of com­mon hate speech terms against Black, Lati­no, Jew­ish, and LGBTQ peo­ple each day, as well as Neo-Nazi slo­gans. This pro­gram scraped text only and so did not include instances of speech with­in images, a com­mon medi­um of com­mu­ni­ca­tion on 4chan. Imme­di­ate­ly after his depar­ture, accord­ing to for­mer mod­er­a­tors, /pol/ users raid­ed the board, spam­ming Neo-Nazi slo­gans like “sieg heil” and “heil Hitler” in about one in every 50 posts. (Use of these terms had been neg­li­gi­ble before Mod­cat left.) Even after the ini­tial raid on /a/ sub­sided, there were long-term effects on the forum. Dur­ing Mod­cat’s brief tenure, the ani­me board had hate speech in only about one in every 50 posts over­all. Since his depar­ture, that has risen to about one in every 30 posts.

    ...

    Five years lat­er, the polit­i­cal­ly incor­rect board’s con­flict with the rest of 4chan has been set­tled: /pol/ won. After years of declin­ing vol­ume both there and on the site in gen­er­al, /pol/’s activ­i­ty (in terms of the num­ber of users and posts) is on the rise once again, accord­ing to a site that tracks 4chan. Jump­ing upwards in May 2020, /pol/ boast­ed the high­est num­ber of posts per day since elec­tion day in 2016, when ecsta­t­ic users cel­e­brat­ed Trump’s vic­to­ry by call­ing for a sec­ond Holo­caust and harass­ing jour­nal­ists.

    /pol/’s surg­ing pop­u­lar­i­ty coin­cides with a boost for the rest of the site as well. Accord­ing to Sim­i­lar­Web, a com­pa­ny that tracks web traf­fic, 4chan has risen to become one of the top 400 sites in the Unit­ed States in terms of engage­ment and vis­its. The domain now rivals or exceeds major news sites in terms of the num­ber of vis­i­tors: it gets more traf­fic than abcnews.com, for exam­ple.

    And the /pol/ chan­nel con­tin­ues to cre­ate mas­sive amounts of right-wing con­tent. RapeApe’s “meme fac­to­ry,” as he described /pol/ in one leaked chat log, is chug­ging along smooth­ly. “[RapeApe has] basi­cal­ly ful­filled his inten­tions,” a cur­rent jan­i­tor told me. “[4chan] exists as a ful­ly devel­oped polit­i­cal tool used for prop­a­gat­ing memes and pro­pa­gan­da.” 4chan’s con­tent some­times spreads beyond its eso­teric cor­ner of the inter­net into the main­stream dis­course, using a well-estab­lished pipeline run­ning through Red­dit and Twit­ter into more pop­u­lar chan­nels.

    ...

    ———–

    “The Man Who Helped Turn 4chan Into the Inter­net’s Racist Engine” by Rob Arthur; Vice; 11/02/2020

    “Because of 4chan’s often wild­ly offen­sive con­tent, many assume that the site is com­plete­ly unmod­er­at­ed. But 4chan has a corps of vol­un­teers, called “jan­i­tors,” “mods,” or “jan­nies,” whose job it is—theoretically—to make sure that con­tent on the site abides by the rules. (4chan draws a dis­tinc­tion between more senior “mod­er­a­tors,” who are respon­si­ble for all boards, and “jan­i­tors,” who patrol one or two; we refer to them inter­change­ably because jan­i­tors also mod­er­ate dis­cus­sion.) The jan­i­tors we spoke to and a major trove of leaked chat logs from the jan­i­tors’ pri­vate com­mu­ni­ca­tions chan­nel tell the sto­ry of RapeApe’s rise from junior jan­ny to some­one who could decide what kind of con­tent was allowed on the site and where, shap­ing 4chan into the hate­ful, rad­i­cal­iz­ing online com­mu­ni­ty it’s known for today.

    Once “RapeApe” had the power to fire the other mods, he basically single-handedly turned the site into the internet’s home for far right radicalization. Well, he didn’t do it single-handedly. All of the people creating the far right content played a role. But they couldn’t have pulled it off without RapeApe having their back. The site started off, after all, as a fairly progressive anime site. They started the /pol/ forum literally as a means of siphoning off the racism from the rest of the site:

    ...
    The polit­i­cal­ly incor­rect board wasn’t always this bad. In fact, for­mer 4chan mod­er­a­tors told Moth­er­board that /pol/ wasn’t added to the site until 2011, eight years after the site start­ed. For the first few years of its exis­tence, accord­ing to two for­mer jan­i­tors, Poole intend­ed the /pol/ board to siphon off the racism from oth­er areas of the site so that oth­er users could enjoy their own, board-spe­cif­ic pur­suits.

    “It was start­ed as a con­tain­ment board,” one for­mer mod­er­a­tor told me about /pol/. Accord­ing to chat logs and for­mer mod­er­a­tors, in its ear­ly days, mod­er­a­tors at 4chan removed racist posts and users from oth­er boards while ignor­ing them with­in one board, “ran­dom” (/b/, which was sup­posed to be a kind of “no rules, any­thing goes” space. /b/ is where many ear­ly memes were born, and is where the hack­tivist group Anony­mous came from). Such posts also some­times slipped by on the /pol/ board as well, even though they tech­ni­cal­ly vio­lat­ed the rules there. “Enforce­ment was more active in the past,” a for­mer mod­er­a­tor said. In con­trast to its cur­rent far right polit­i­cal cli­mate, “4chan skewed extreme­ly pro­gres­sive when it first start­ed,” accord­ing to the mod, although the use of big­ot­ed and misog­y­nis­tic lan­guage was wide­spread even then.
    ...

    But thanks to RapeApe’s oversight — or enforced lack of oversight — /pol/ ultimately won out and its politics now define the site. And yet, if we go by the internal “janitor” chat logs, RapeApe didn’t come across as an overt neo-Nazi. He came across as a typical right-winger with a focus on guns, gays, and the alleged censorship of conservatives:

    ...
    Accord­ing to the jan­i­tor and chat logs (as well as a delet­ed Twit­ter account two staff mem­bers con­firmed was his), RapeApe is also polit­i­cal­ly con­ser­v­a­tive and racist. One for­mer jan­i­tor described him as “a typ­i­cal right winger and /pol/ dude.” His Twit­ter account fea­tured him respond­ing approv­ing­ly to Tuck­er Carl­son clips, urg­ing anoth­er user to buy an AR-15 rifle for self-defense, won­der­ing whether the state would force peo­ple to be homo­sex­u­al and sug­gest­ing that Twit­ter was “staffed by left­ists” who were delet­ing con­ser­v­a­tive users’ accounts. In con­ver­sa­tions with oth­er jan­i­tors in the leaked chats, he found humor in hor­ri­fy­ing news about riots, shoot­ings, and the Ebo­la epidemic—especially when that news involved Black peo­ple dying.

    But RapeApe isn’t just a typ­i­cal /pol/ user who hap­pens to run the site. Accord­ing to three cur­rent and for­mer staff mem­bers, RapeApe shaped 4chan into a reflec­tion of his own polit­i­cal beliefs. “RapeApe has an agen­da: he wants /pol/ to have influ­ence on the rest of the site and [its] pol­i­tics,” a cur­rent jan­i­tor said.

    Alone, RapeApe couldn’t steer 4chan to the far right. But he super­vis­es a staff of dozens of vol­un­teers who con­trol dis­course on the boards. Accord­ing to the leaked chats and jan­i­tors who spoke with Moth­er­board, he instruct­ed jan­i­tors on how to han­dle the more big­ot­ed con­tent on 4chan—and dis­missed them if they delet­ed con­tent he likes. He took a spe­cial inter­est in the /pol/ board, telling a novice jan­i­tor in the chat logs to “treat /pol/ with kid gloves. So long as they obey the rules, they are allowed to sup­port what­ev­er abom­inable polit­i­cal posi­tions they want.”
    ...

    And RapeApe managed to allow this far right takeover of the site despite its rules against posting racist content. How did he manage this? By setting a standard so absurd that almost all racist content could be excused: content only got banned if the poster appeared to have racist intent when posting it. It’s the perfect excuse in the age of ‘jokey’ far right memes. You weren’t advocating for another Holocaust. You were just joking about it, so it’s allowed:

    ...
    4chan has an exten­sive list of rules post­ed on the site and each board has its own small­er set of edicts. A lit­tle-known and rarely enforced 4chan reg­u­la­tion, Glob­al Rule #3, pro­hibits racist con­tent on the site. But the leaked chat logs show many inci­dents of mod­er­a­tors and jan­i­tors dis­cussing when racism got severe enough that it ought to be banned. Indeed, RapeApe him­self delet­ed at least one thread for vio­lat­ing Rule #3 ear­ly on in his 4chan career, before he became a man­ag­er.

    Once he became head mod­er­a­tor, RapeApe began to post reminders that mod­er­a­tors ought to be as hands-off as pos­si­ble. In the leaked logs and accord­ing to cur­rent and for­mer jan­i­tors, RapeApe pushed his staff into a posi­tion where almost no con­tent could run afoul of the rule against racism. Instruct­ing the jan­i­tors, RapeApe wrote, “And remem­ber that with racism we’re tar­get­ing the intent of the poster and not the words them­selves.” One cur­rent jan­i­tor told me that in prac­tice, with­in 4chan’s warped, irony-poi­soned cul­ture, this meant there was no way to ban a user for even the most fla­grant, big­ot­ed lan­guage or images. They could always claim that the intent wasn’t racist, even if the con­tent unques­tion­ably was.

    “The plau­si­ble deni­a­bil­i­ty excuse for racism—I was just jok­ing, I was just trolling—is bullsh it,” Whit­ney Phillips, an Assis­tant Pro­fes­sor of Com­mu­ni­ca­tion and Rhetor­i­cal Stud­ies at Syra­cuse Uni­ver­si­ty and author of This Is Why We Can’t Have Nice Things: Map­ping the Rela­tion­ship Between Online Trolling and Main­stream Cul­ture, told Moth­er­board. “Intent can mat­ter when think­ing about the things peo­ple say, but it mat­ters very lit­tle when con­sid­er­ing con­se­quences. Whether or not some­one says a racist thing with hate in their heart, they’re still say­ing a racist thing and that still con­tributes to dehu­man­iza­tion and the nor­mal­iza­tion of harm. Any­way, the very cri­te­ri­on is absurd, as you can’t assess what’s in some­one’s heart just by look­ing at the things they post, espe­cial­ly to a place like 4chan. The only rea­son­able con­clu­sion is that, what­ev­er might have been writ­ten in the site rules, this mod­er­a­tor ensured that there was no pol­i­cy against racism. Instead it became a pro-racism pol­i­cy.”

    ...

    Over time, /pol/ has come to dom­i­nate the pub­lic per­cep­tion of 4chan, over­shad­ow­ing the qui­eter, less vile top­ic areas which make up much of the activ­i­ty on the site. /pol/ is reg­u­lar­ly the most active board on the site, but even so, it makes up a small por­tion of the total posts. Under RapeApe’s man­age­ment, how­ev­er, /pol/’s big­otry has metas­ta­sized.

    “[W]hen RapeApe took over ful­ly after [Poole] left, he put in a ‘lais­sez-faire’ pol­i­cy of mod­er­a­tion, know­ing exact­ly what would hap­pen, that right wing ideas would dom­i­nate the site thanks to /pol/ spilling over onto oth­er boards,” said a cur­rent jan­i­tor.
    ...

    As a con­se­quence of these mod­er­a­tion stan­dards, 4Chan is now arguably the biggest neo-Nazi recruit­ing ground on the inter­net:

    ...
    While oth­er web­sites also host increas­ing amounts of vio­lent and big­ot­ed lan­guage, 4chan is an out­lier even com­pared to oth­er inter­net gath­er­ing places filled with sim­i­lar ide­olo­gies. A VICE News analy­sis found that there was more hate speech on /pol/ than in the com­ments on one overt­ly Neo-Nazi site, the Dai­ly Stormer. Mass mur­der­ers have post­ed man­i­festos on 4chan. White nation­al­ists have used the site to coor­di­nate protests.

    When one Neo-Nazi group polled their sup­port­ers to dis­cov­er how they came to the move­ment, /pol/ was tied for the most com­mon gate­way. Gab, anoth­er far right hotbed, con­tains about half the rate of hate speech as /pol/, and 4chan has 20 times more users. The only pop­u­lar web­sites more tox­ic than 4chan are its much small­er off­spring sites, like 8chan, now 8kun.
    ...

    We can’t say we weren’t warned. 4Chan isn’t just an apoc­ryphal over-the-top sto­ry about the per­ils of unre­strict­ed far right pro­pa­gan­da. It’s a very real over-the-top sto­ry about the per­ils of unre­strict­ed far right pro­pa­gan­da, which is why it’s a warn­ing of what’s to come for the rest of the inter­net should the right-wing media and Repub­li­cans man­age to suc­cess­ful­ly cre­ate a stan­dard where ‘any­thing goes’ for right-wing con­tent.

    But on the plus side, at least the situation probably can’t get too much worse. After all, it’s not as if right-wing content doesn’t already dominate social media. And it’s not as if social media giants haven’t already become neo-Nazi recruiting grounds. And it’s not as if right-wing threats of political violence aren’t tolerated, as Mark Zuckerberg made clear during his congressional hearing last week, where he refused to ban Steve Bannon from Facebook despite Bannon’s seemingly sincere public calls for the beheading of Christopher Wray and Anthony Fauci. The current propaganda about right-wing censorship isn’t really about intimidating Big Tech into relaxing existing standards so much as it’s about intimidating Big Tech into maintaining the currently relaxed standards while their platforms continue to act as far right radicalization engines.

    Still, it’s possible that the QAnon movement, for example, could be given free rein to post whatever it wants wherever it wants across social media, and that would be a real gain for the far right. QAnon is, after all, basically the rehashed Protocols of the Elders of Zion. Similarly, President Trump’s tweets filled with disinformation might no longer get those disinformation warnings. There really are plenty of areas where the limited forms of content moderation that currently keep out the worst far right content could be pulled, allowing a whole new flood of garbage to spew across major social media platforms. In other words, while Facebook might be 50% 4chan at this point, that other 50% that it’s holding back really is the most awful content. And if the right wing succeeds in its intimidation campaign, the full 4chan-ization of the internet will be just a matter of time.

    In relat­ed news, a right-wing out­let is now so upset with Fox News host Tuck­er Carl­son over his calls for the Trump cam­paign to show actu­al evi­dence of wide­spread vot­er fraud that they’ve decid­ed to #Piz­za­Gate Carl­son. Like, lit­er­al­ly tie him to the whole #Piz­za­Gate garbage con­spir­a­cy the­o­ry from 2016. It’s actu­al­ly kind of incred­i­ble to see giv­en the cru­cial role Carl­son plays in the main­stream­ing of far right con­tent. But he clear­ly hit a nerve and now he has to pay in the form of being #Piz­za­Gat­ed by his fel­low far right media brethren. We bet­ter hope no social media out­lets do any­thing to hin­der the spread of this alle­ga­tion and show their anti-con­ser­v­a­tive bias.

    Posted by Pterrafractyl | November 21, 2020, 5:59 pm
  18. With all of the focus on the role far right communication apps like Parler played in planning and coordinating the January 6 insurrection, here’s an interesting piece in BuzzFeed that looks at the role Mark Zuckerberg has been directly playing for years in protecting the purveyors of right-wing disinformation from Facebook’s own internal rule enforcers. And as we’ll see, Facebook’s own employees tasked with enforcing those rules are now coming forward claiming that it was Mark Zuckerberg’s previous actions that effectively facilitated the insurrectionists’ use of Facebook to carry out the insurrection in real time. Specifically, it was the direct interventions by Zuckerberg to protect figures like Alex Jones and Ben Shapiro from Facebook’s rules that injected so much uncertainty into the process of enforcing Facebook’s rules for conservative users — Zuckerberg was reportedly terrified of right-wing campaigns accusing Facebook of ‘shadow-banning’ conservatives — that Facebook’s employees found themselves effectively forced to allow the insurrectionists free rein of the platform.

    But it’s not just Zuckerberg in Facebook’s leadership who has been spearheading the efforts to carve out a special rule exemption for conservatives. Joel Kaplan, the former White House aide to George W. Bush who now serves as Facebook’s global policy chief, has also been acting as the internal guardian of right-wing disinformation on the platform. Recall that we’ve already seen how Kaplan arranged to have the far right Daily Caller outlet added to Facebook’s list of ‘fact-checkers’. Hiring The Onion would have been more responsible.

    We’re told that, in the weeks prior to the election, there was so much misinformation undermining trust in the integrity of the vote spreading across Facebook that executives decided the site would put more emphasis on News Ecosystem Quality (NEQ), an internal score the company gives publishers based on assessments of their journalism, in determining which articles show up in people’s news feeds. Implementing this NEQ feature did so much to improve the quality of the news being pushed to users’ news feeds that the vice president responsible for developing the NEQ system pushed to have it continued indefinitely. And then Joel Kaplan intervened and the feature was removed. It was only in the days after the insurrection that Facebook restored the NEQ newsfeed feature.
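
    To make that mechanism a bit more concrete, here is a minimal, purely illustrative sketch (in Python) of how a publisher-quality signal like NEQ could be blended into a feed-ranking score. The field names, the weights, and the scoring formula here are assumptions for illustration only; Facebook’s actual ranking system is not public and is certainly far more elaborate:

    from dataclasses import dataclass

    @dataclass
    class Post:
        post_id: str
        engagement_score: float  # hypothetical predicted engagement (clicks, shares, comments), 0..1
        publisher_neq: float     # hypothetical publisher quality score, 0..1

    def rank_feed(posts, neq_weight=0.0):
        # neq_weight=0.0 approximates a pure engagement ranking; raising it
        # boosts posts from publishers with higher quality scores.
        def score(p):
            return (1.0 - neq_weight) * p.engagement_score + neq_weight * p.publisher_neq
        return sorted(posts, key=score, reverse=True)

    feed = [
        Post("hoax-clickbait", engagement_score=0.95, publisher_neq=0.10),
        Post("wire-report", engagement_score=0.60, publisher_neq=0.90),
    ]
    # A pure engagement ranking surfaces the hoax first; weighting the quality score flips the order.
    print([p.post_id for p in rank_feed(feed, neq_weight=0.0)])  # ['hoax-clickbait', 'wire-report']
    print([p.post_id for p in rank_feed(feed, neq_weight=0.5)])  # ['wire-report', 'hoax-clickbait']

    In this toy version, dialing the quality weight back down to zero is the equivalent of what reportedly happened when the feature was rolled back.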

    It sounds like at least six Facebook employees have resigned since the November election with farewell posts that called out Facebook’s leadership for failing to heed the company’s own experts on misinformation and hate speech. And four of those departing employees cited the fact that Joel Kaplan is simultaneously head of the public policy team — which oversees lobbying and government relations — and the content policy team that sets and enforces the platform’s rules.

    So between Zuckerberg and Kaplan, Facebook’s own employees tasked with enforcing the platform’s rules are finding themselves unable to enforce those rules against conservatives, culminating in Facebook being used as a key platform for the insurrectionists. And this situation has, in turn, reportedly led to some severe morale issues in the rules-enforcement department, hence the whistleblowing we’re now hearing:

    Buz­zFeed News

    “Mark Changed The Rules”: How Face­book Went Easy On Alex Jones And Oth­er Right-Wing Fig­ures

    Facebook’s rules to com­bat mis­in­for­ma­tion and hate speech are sub­ject to the whims and polit­i­cal con­sid­er­a­tions of its CEO and his pol­i­cy team leader.

    Ryan Mac Buz­zFeed News Reporter
    Craig Sil­ver­man Buz­zFeed News Reporter

    Last updat­ed on Feb­ru­ary 22, 2021, at 1:14 p.m. ET
    Post­ed on Feb­ru­ary 21, 2021, at 9:59 a.m. ET

    In April 2019, Face­book was prepar­ing to ban one of the internet’s most noto­ri­ous spread­ers of mis­in­for­ma­tion and hate, Infowars founder Alex Jones. Then CEO Mark Zucker­berg per­son­al­ly inter­vened.

    Jones had gained infamy for claim­ing that the 2012 Sandy Hook ele­men­tary school mas­sacre was a “giant hoax,” and that the teenage sur­vivors of the 2018 Park­land shoot­ing were “cri­sis actors.” But Face­book had found that he was also relent­less­ly spread­ing hate against var­i­ous groups, includ­ing Mus­lims and trans peo­ple. That behav­ior qual­i­fied him for expul­sion from the social net­work under the com­pa­ny’s poli­cies for “dan­ger­ous indi­vid­u­als and orga­ni­za­tions,” which required Face­book to also remove any con­tent that expressed “praise or sup­port” for them.

    But Zucker­berg didn’t con­sid­er the Infowars founder to be a hate fig­ure, accord­ing to a per­son famil­iar with the deci­sion, so he over­ruled his own inter­nal experts and opened a gap­ing loop­hole: Face­book would per­ma­nent­ly ban Jones and his com­pa­ny — but would not touch posts of praise and sup­port for them from oth­er Face­book users. This meant that Jones’ legions of fol­low­ers could con­tin­ue to share his lies across the world’s largest social net­work.

    “Mark per­son­al­ly didn’t like the pun­ish­ment, so he changed the rules,” a for­mer pol­i­cy employ­ee told Buz­zFeed News, not­ing that the orig­i­nal rule had already been in use and rep­re­sent­ed the prod­uct of untold hours of work between mul­ti­ple teams and experts.

    “That was the first time I expe­ri­enced hav­ing to cre­ate a new cat­e­go­ry of pol­i­cy to fit what Zucker­berg want­ed. It’s some­what demor­al­iz­ing when we have estab­lished a pol­i­cy and it’s gone through rig­or­ous cycles. Like, what the fu ck is that for?” said a sec­ond for­mer pol­i­cy employ­ee who, like the first, asked not to be named so they could speak about inter­nal mat­ters.

    “Mark called for a more nuanced pol­i­cy and enforce­ment strat­e­gy,” Face­book spokesper­son Andy Stone said of the Alex Jones deci­sion, which also affect­ed the bans of oth­er extrem­ist fig­ures.

    Zuckerberg’s “more nuanced policy” set off a cascading effect, the two former employees said, which delayed the company’s efforts to remove right-wing militant organizations such as the Oath Keepers, which were involved in the Jan. 6 insurrection at the US Capitol. It is also a case study in Facebook’s willingness to change its rules to placate America’s right wing and avoid political backlash.

    Inter­nal doc­u­ments obtained by Buz­zFeed News and inter­views with 14 cur­rent and for­mer employ­ees show how the company’s pol­i­cy team — guid­ed by Joel Kaplan, the vice pres­i­dent of glob­al pub­lic pol­i­cy, and Zuckerberg’s whims — has exert­ed out­size influ­ence while obstruct­ing con­tent mod­er­a­tion deci­sions, stymieing prod­uct roll­outs, and inter­ven­ing on behalf of pop­u­lar con­ser­v­a­tive fig­ures who have vio­lat­ed Facebook’s rules.

    In Decem­ber, a for­mer core data sci­en­tist wrote a memo titled, “Polit­i­cal Influ­ences on Con­tent Pol­i­cy.” Seen by Buz­zFeed News, the memo stat­ed that Kaplan’s pol­i­cy team “reg­u­lar­ly pro­tects pow­er­ful con­stituen­cies” and list­ed sev­er­al exam­ples, includ­ing: remov­ing penal­ties for mis­in­for­ma­tion from right-wing pages, blunt­ing attempts to improve con­tent qual­i­ty in News Feed, and briefly block­ing a pro­pos­al to stop rec­om­mend­ing polit­i­cal groups ahead of the US elec­tion.

    Since the Novem­ber vote, at least six Face­book employ­ees have resigned with farewell posts that have called out leadership’s fail­ures to heed its own experts on mis­in­for­ma­tion and hate speech. Four depart­ing employ­ees explic­it­ly cit­ed the pol­i­cy orga­ni­za­tion as an imped­i­ment to their work and called for a reor­ga­ni­za­tion so that the pub­lic pol­i­cy team, which over­sees lob­by­ing and gov­ern­ment rela­tions, and the con­tent pol­i­cy team, which sets and enforces the platform’s rules, would not both report to Kaplan.

    Face­book declined to make Kaplan or oth­er exec­u­tives avail­able for an inter­view. Stone, the com­pa­ny spokesper­son, dis­missed con­cerns about the vice president’s influ­ence.

    “Recy­cling the same warmed over con­spir­a­cy the­o­ries about the influ­ence of one per­son at Face­book doesn’t make them true,” he said. “The real­i­ty is big deci­sions at Face­book are made with input from peo­ple across dif­fer­ent teams who have dif­fer­ent per­spec­tives and exper­tise in dif­fer­ent areas. To sug­gest oth­er­wise is absurd.”

    An integri­ty researcher who worked on Facebook’s efforts to pro­tect the demo­c­ra­t­ic process and rein in rad­i­cal­iza­tion said the com­pa­ny caused direct harm to users by reject­ing prod­uct changes due to con­cerns of polit­i­cal back­lash.

    “Out of fears over poten­tial pub­lic and pol­i­cy stake­hold­er respons­es, we are know­ing­ly expos­ing users to risks of integri­ty,” they wrote in an inter­nal note seen by Buz­zFeed News. They quit in August.

    Those most affect­ed by Jones’ rhetoric have tak­en notice, too. Lenny Pozn­er, whose 6‑year-old son Noah was the youngest vic­tim of the Sandy Hook shoot­ing, called the rev­e­la­tion that Zucker­berg weak­ened penal­ties fac­ing the Infowars founder “dis­heart­en­ing, but not sur­pris­ing.” He said the com­pa­ny had made a promise to do bet­ter in deal­ing with hate and hoax­es fol­low­ing a 2018 let­ter from HONR Net­work, his orga­ni­za­tion for sur­vivors of mass casu­al­ty events. Yet Face­book con­tin­ues to fail to remove harm­ful con­tent.

    “At some point,” Pozn­er told Buz­zFeed News, “Zucker­berg has to be held respon­si­ble for his role in allow­ing his plat­form to be weaponized and for ensur­ing that the ludi­crous and the dan­ger­ous are giv­en equal impor­tance as the fac­tu­al.”

    “Dif­fer­ent Views on Dif­fer­ent Things”

    Kaplan’s close rela­tion­ship with Zucker­berg has led the CEO to weigh pol­i­tics more heav­i­ly when mak­ing high-pro­file con­tent pol­i­cy enforce­ment deci­sions, cur­rent and for­mer employ­ees said. Kaplan’s efforts to court the Trump White House over the past four years — from his wide­ly pub­li­cized sup­port for Supreme Court nom­i­nee Brett Kavanaugh to his inter­ven­tions on behalf of right-wing influ­encers in Face­book pol­i­cy deci­sions — have also made him a tar­get for civ­il rights groups and Demo­c­ra­t­ic law­mak­ers.

    In June 2020, three Demo­c­ra­t­ic sen­a­tors asked in a let­ter what role Kaplan played “in Facebook’s deci­sion to shut down and de-pri­or­i­tize inter­nal efforts to con­tain extrem­ist and hyper­po­lar­iz­ing activ­i­ty.” Sen. Eliz­a­beth War­ren called him out for over­see­ing a lob­by­ing effort that spends mil­lions of dol­lars to influ­ence politi­cians. With a new pres­i­den­tial admin­is­tra­tion in place and a spate of ongo­ing antitrust law­suits, Zucker­berg must now grap­ple with the fact that his top polit­i­cal advis­er may no longer be a Wash­ing­ton, DC asset but a poten­tial lia­bil­i­ty.

    “I think that every­body in DC hates Face­book. They have burned every bridge,” said Sarah Miller, exec­u­tive direc­tor of the Amer­i­can Eco­nom­ic Lib­er­ties Project and a for­mer mem­ber of Joe Biden’s pres­i­den­tial tran­si­tion team. Democ­rats are incensed with the platform’s tol­er­ance of hate speech and mis­in­for­ma­tion, while “pulling Trump off the plat­form” has brought new life to Repub­li­can gripes with the com­pa­ny, she said.

    “Face­book has fires to put out all across the polit­i­cal spec­trum,” Miller added.

    When Kaplan joined Face­book to lead its DC oper­a­tion in 2011, he had the con­nec­tions and pedi­gree the com­pa­ny need­ed to court the Amer­i­can right. A for­mer clerk for con­ser­v­a­tive Supreme Court Jus­tice Antonin Scalia, he served as a White House deputy chief of staff under Pres­i­dent George W. Bush after par­tic­i­pat­ing in the Brooks Broth­ers riot dur­ing the 2000 Flori­da pres­i­den­tial elec­tion dis­pute. Dur­ing a Sen­ate con­fir­ma­tion hear­ing in 2003 for a post with the Office of Man­age­ment and Bud­get, Kaplan was ques­tioned about his role in the event, which sought to stop the tal­ly­ing of votes dur­ing the Flori­da recount.

    Though he ini­tial­ly main­tained a low pub­lic pro­file at Face­book, Kaplan — COO Sheryl Sandberg’s Har­vard class­mate and for­mer boyfriend — was val­ued by Zucker­berg for his under­stand­ing of GOP pol­i­cy­mak­ers and con­ser­v­a­tive Amer­i­cans, who the CEO believed were under­rep­re­sent­ed by a lib­er­al-lean­ing lead­er­ship team and employ­ee base.

    By 2014, he’d been pro­mot­ed to vice pres­i­dent of glob­al pub­lic pol­i­cy. In that role, Kaplan over­saw the company’s gov­ern­ment rela­tions around the world as well as its con­tent pol­i­cy team. That arrange­ment raised eye­brows, as oth­er com­pa­nies, includ­ing Google and Twit­ter, typ­i­cal­ly keep pub­lic pol­i­cy and lob­by­ing efforts sep­a­rate from teams that cre­ate and enforce con­tent rules.

    The can­di­da­cy and elec­tion of Don­ald Trump made Kaplan even more valu­able to the com­pa­ny. He served as Zuckerberg’s pol­i­cy con­sigliere, help­ing Face­book nav­i­gate the sea of lies and hate the for­mer pres­i­dent con­jured on the plat­form as well as the out­raged pub­lic response to it. In Decem­ber 2015, fol­low­ing a Face­book post from Trump call­ing for a “total and com­plete shut­down” of Mus­lims enter­ing the US — the first of many that forced the com­pa­ny to grap­ple with the then candidate’s racist and some­times vio­lent rhetoric — Kaplan and oth­er exec­u­tives advised Facebook’s CEO to do noth­ing.

    “Don’t poke the bear,” Kaplan said, accord­ing to the New York Times, argu­ing that tak­ing action against Trump’s account would invite a right-wing back­lash and accu­sa­tions that the site was lim­it­ing free speech. It’s an argu­ment he’d repeat in var­i­ous forms over the ensu­ing five years, with Zucker­berg often in agree­ment.

    Dur­ing that time, Kaplan rarely com­mu­ni­cat­ed open­ly on Facebook’s inter­nal mes­sage boards or spoke at com­pa­ny­wide meet­ings, accord­ing to cur­rent and for­mer employ­ees. When he did, how­ev­er, his appear­ances were cloud­ed in con­tro­ver­sy.

    After a Face­book team led by then–chief secu­ri­ty offi­cer Alex Sta­mos found evi­dence of Russ­ian inter­fer­ence on the plat­form dur­ing and after the 2016 US pres­i­den­tial elec­tion, Kaplan was part of a lead­er­ship group that argued against dis­clos­ing the full extent of the Kremlin’s influ­ence oper­a­tion. When the com­pa­ny did end up pub­licly releas­ing fur­ther infor­ma­tion about it in Octo­ber 2017, it was Kaplan, not Sta­mos, who answered employ­ee ques­tions dur­ing an inter­nal town hall.

    “They could have sent me,” said Sta­mos, who sub­se­quent­ly left the com­pa­ny over dis­agree­ments relat­ed to Russ­ian inter­fer­ence. “The per­son who was pre­sent­ing [evi­dence of the Russ­ian cam­paign] to VPs was me.”

    It was Kaplan’s appear­ance at Kavanaugh’s Sep­tem­ber 2018 Sen­ate con­fir­ma­tion hear­ings, how­ev­er, that pushed him into the nation­al spot­light. Sit­ting behind the nom­i­nee, he was vis­i­ble in TV cov­er­age of the event. Employ­ees were furi­ous; they believed Kaplan’s atten­dance made it look like Face­book sup­port­ed the nom­i­nee, while dis­miss­ing the alle­ga­tions of sex­u­al assault against him.

    Kaplan sub­se­quent­ly addressed the inci­dent at a com­pa­ny­wide meet­ing via video­con­fer­ence, where angry work­ers, who felt his on-cam­era appear­ance was inten­tion­al, ham­mered him with ques­tions. The con­fir­ma­tion also caused deep wounds inside Kaplan’s own orga­ni­za­tion. Dur­ing a Face­book pub­lic pol­i­cy team meet­ing that fall to address the hear­ing and the vice pres­i­den­t’s appear­ance, one long­time man­ag­er tear­ful­ly argued to a male col­league “It doesn’t mat­ter how well you know some­one; it doesn’t mean they didn’t do what some­body said they did,” after writ­ing a blog post detail­ing her expe­ri­ence of being sex­u­al­ly assault­ed.

    None of this changed Kaplan’s stand­ing with Zucker­berg. The CEO went to DC in Sep­tem­ber 2019 and was shep­herd­ed around by Kaplan on a trip that includ­ed a meet­ing with Trump. Kaplan remained friend­ly with the Trump White House, which at one point con­sid­ered him to run the Office of Man­age­ment and Bud­get.

    In May, when Zuckerberg decided to not touch Trump’s “when the looting starts, the shooting starts” incitement during the George Floyd protests, workers became incensed. At a subsequent companywide meeting, one of the most upvoted questions from employees directly called Kaplan out. “Many people feel that Joel Kaplan has too much power over our decisions,” the question read, asking that the vice president explain his role and values.

    Zucker­berg seemed irked by the ques­tion and dis­put­ed the notion that any one per­son could influ­ence the “rig­or­ous” process by which the com­pa­ny made deci­sions. Diver­si­ty, the CEO argued, means tak­ing into account all polit­i­cal views.

    “That basi­cal­ly asked whether Joel can be in this role, or can be doing this role, on the basis of the fact that he is a Repub­li­can … and I have to say that I find that line of ques­tion­ing to be very trou­bling,” Zucker­berg said, ignor­ing the ques­tion. “If we want to actu­al­ly do a good job of serv­ing peo­ple, [we have to take] into account that there are dif­fer­ent views on dif­fer­ent things.”

    Face­book employ­ees said Zucker­berg remains stal­wart in his sup­port for Kaplan, but inter­nal pres­sure is build­ing to reduce the pub­lic pol­i­cy team’s influ­ence. Col­leagues “feel pres­sure to ensure their rec­om­men­da­tions align with the inter­ests of pol­i­cy­mak­ers,” Samidh Chakrabar­ti, head of Facebook’s civic integri­ty team, wrote in an inter­nal note in June, bemoan­ing the dif­fi­cul­ty of bal­anc­ing such inter­ests while deliv­er­ing on the team’s man­date: stop­ping abuse and elec­tion inter­fer­ence on the plat­form. The civic integri­ty team was dis­band­ed short­ly after the elec­tion, as report­ed by the Infor­ma­tion.

    “They attribute this to the orga­ni­za­tion­al incen­tives of hav­ing the con­tent pol­i­cy and pub­lic pol­i­cy teams share a com­mon root,” Chakrabar­ti said. “As long as this is the case, we will be pre­ma­ture­ly pri­or­i­tiz­ing reg­u­la­to­ry inter­ests over com­mu­ni­ty pro­tec­tion.”

    Sta­mos, who is now head of the Stan­ford Inter­net Obser­va­to­ry, said the pol­i­cy team’s struc­ture will always present a prob­lem in its cur­rent form.

    “You don’t want plat­form pol­i­cy peo­ple report­ing to some­one who’s in charge of keep­ing peo­ple in gov­ern­ment hap­py,” he said. “Joel comes from the Bush White House, and gov­ern­ment rela­tions does not have a neu­tral posi­tion on speech requests.”

    “Fear of Antag­o­niz­ing Pow­er­ful Polit­i­cal Actors”

    In August, a Face­book prod­uct man­ag­er who over­sees the News Feed updat­ed his col­leagues on the company’s prepa­ra­tions for the 2020 US elec­tion.

    Inter­nal research had shown that peo­ple on Face­book were being polar­ized on the site in polit­i­cal dis­cus­sion groups, which were also breed­ing grounds for mis­in­for­ma­tion and hate. To com­bat this, Face­book employ­ees who were tasked with pro­tect­ing elec­tion integri­ty pro­posed the plat­form stop rec­om­mend­ing such groups in a mod­ule called “Groups You Should Join.”

    But the pub­lic pol­i­cy team was afraid of pos­si­ble polit­i­cal blow­back.

    “Although the Product recommendation would have improved implementation of the civic filter, it would have created thrash in the political ecosystem during [the 2020 US election],” the product manager wrote on Facebook’s internal message board. “We have decided to not make any changes until the election is over.”

    The social net­work even­tu­al­ly paused polit­i­cal group rec­om­men­da­tions — just weeks before the Novem­ber elec­tion — and removed them per­ma­nent­ly only after the Capi­tol insur­rec­tion on Jan. 6. Cur­rent and for­mer employ­ees said Face­book’s deci­sion to ignore its integri­ty team’s guid­ance and ini­tial­ly leave group rec­om­men­da­tions untouched exem­pli­fies how polit­i­cal cal­cu­la­tions often quashed com­pa­ny ini­tia­tives that could have blunt­ed mis­in­for­ma­tion and rad­i­cal­iza­tion.

    In that same update about group rec­om­men­da­tions, the prod­uct man­ag­er also explained how lead­ers decid­ed against mak­ing changes to a fea­ture called In Feed Rec­om­men­da­tions (IFR) due to poten­tial polit­i­cal wor­ries. Designed to insert posts into people’s feeds from accounts they don’t fol­low, IFR was intend­ed to fos­ter new con­nec­tions or inter­ests. For exam­ple, if a per­son fol­lowed the Face­book page for a foot­ball team like the Kansas City Chiefs, IFR might add a post from the NFL to their feed, even if that per­son didn’t fol­low the NFL.

    One thing IFR was not sup­posed to do was rec­om­mend polit­i­cal con­tent. But ear­li­er that spring, Face­book users began com­plain­ing that they were see­ing posts from con­ser­v­a­tive per­son­al­i­ties includ­ing Ben Shapiro in their News Feeds even though they had nev­er engaged with that type of con­tent.

    When the issue was flagged inter­nal­ly, Facebook’s con­tent pol­i­cy team warned that remov­ing such sug­ges­tions for polit­i­cal con­tent could reduce those pages’ engage­ment and traf­fic, and pos­si­bly inspire com­plaints from pub­lish­ers. A News Feed prod­uct man­ag­er and a pol­i­cy team mem­ber reit­er­at­ed this argu­ment in an August post to Facebook’s inter­nal mes­sage board.

    “A notice­able drop in dis­tri­b­u­tion for these pro­duc­ers (via traf­fic insights for rec­om­men­da­tions) is like­ly to result in high-pro­file esca­la­tions that could include accu­sa­tions of shad­ow-ban­ning and/or FB bias against cer­tain polit­i­cal enti­ties dur­ing the US 2020 elec­tion cycle,” they explained. Shad­ow-ban­ning, or the lim­it­ing of a page’s cir­cu­la­tion with­out inform­ing its own­ers, is a com­mon accu­sa­tion lev­eled by right-wing per­son­al­i­ties against social media plat­forms.

    Through­out 2020, the “fear of antag­o­niz­ing pow­er­ful polit­i­cal actors,” as the for­mer core data sci­en­tist put it in their memo, became a key pub­lic pol­i­cy team ratio­nal­iza­tion for for­go­ing action on poten­tial­ly viola­tive con­tent or rolling out prod­uct changes ahead of the US pres­i­den­tial elec­tion. They also said they had seen “a dozen pro­pos­als to mea­sure the objec­tive qual­i­ty of con­tent on News Feed dilut­ed or killed because … they have a dis­pro­por­tion­ate impact across the US polit­i­cal spec­trum, typ­i­cal­ly harm­ing con­ser­v­a­tive con­tent more.”

    The data sci­en­tist, who spent more than five years at the com­pa­ny before leav­ing late last year, not­ed that while strides had been made since 2016, the state of polit­i­cal con­tent on News Feed was “still gen­er­al­ly agreed to be bad.” Accord­ing to Face­book data, they added, 1 of every 100 views on con­tent about US pol­i­tics was for some type of hoax, while the major­i­ty of views for polit­i­cal mate­ri­als were on par­ti­san posts. Yet the com­pa­ny con­tin­ued to give known spread­ers of false and mis­lead­ing infor­ma­tion a pass if they were deemed “‘sen­si­tive’ or like­ly to retal­i­ate,” the data sci­en­tist said.

    “In the US it appears that inter­ven­tions have been almost exclu­sive­ly on behalf of con­ser­v­a­tive pub­lish­ers,” they wrote, attribut­ing this to polit­i­cal pres­sure or a reluc­tance to upset sen­si­tive pub­lish­ers and high-pro­file users.

    As Buz­zFeed News report­ed last sum­mer, mem­bers of Facebook’s pol­i­cy team — includ­ing Kaplan — inter­vened on behalf of right-wing fig­ures and pub­li­ca­tions such as Char­lie Kirk, Bre­it­bart, and Prager Uni­ver­si­ty, in some cas­es push­ing for the removal of mis­in­for­ma­tion strikes against their pages or accounts. Strikes, which are applied at the rec­om­men­da­tion of Facebook’s third-par­ty fact-check­ers, can result in a range of penal­ties, from a decrease in how far their posts are dis­trib­uted to the removal of the page or account.

    Kaplan’s oth­er inter­ven­tions are well doc­u­ment­ed. In 2018, the Wall Street Jour­nal revealed that he helped kill a project to con­nect Amer­i­cans who have polit­i­cal dif­fer­ences. The paper said Kaplan had object­ed “when briefed on inter­nal Face­book research that found right-lean­ing users tend­ed to be more polar­ized, or less exposed to dif­fer­ent points of view, than those on the left.” Last year, the New York Times report­ed that pol­i­cy exec­u­tives declined to expand a fea­ture called “cor­rect the record” — which noti­fied users when they inter­act­ed with con­tent that was lat­er labeled false by Facebook’s fact-check­ing part­ners — out of fear that it would “dis­pro­por­tion­ate­ly show noti­fi­ca­tions to peo­ple who shared false news from right-wing web­sites.”

    Pol­i­cy exec­u­tives also report­ed­ly helped over­ride an ini­tia­tive pro­posed by the company’s now-dis­band­ed civic integri­ty unit to throt­tle the reach of mis­lead­ing polit­i­cal posts, accord­ing to the Infor­ma­tion.

    Such inter­ven­tions were hard­ly a sur­prise for those who have worked on efforts at the com­pa­ny to reduce harm and mis­in­for­ma­tion. In a Decem­ber depar­ture note pre­vi­ous­ly report­ed by Buz­zFeed News, an integri­ty researcher detailed how right-wing pages, includ­ing those for Bre­it­bart and Fox News, had become hubs of dis­cus­sion filled with death threats and hate speech — in clear vio­la­tion of Face­book pol­i­cy. They ques­tioned why the com­pa­ny con­tin­ued to work with such pub­li­ca­tions in offi­cial capac­i­ties.

    “When the com­pa­ny has a very appar­ent inter­est in prop­ping up actors who are fan­ning the flames of the very fire we are try­ing to put out, it makes it hard to be vis­i­bly proud of where I work,” the researcher wrote.

    A Line From Alex Jones to the US Capi­tol

    The strate­gic response team that had gath­ered evi­dence for the Alex Jones and Infowars ban in spring 2019 drew upon years of exam­ples of his hate speech against Mus­lims, trans­gen­der peo­ple, and oth­er groups. Under the com­pa­ny’s poli­cies for dan­ger­ous indi­vid­u­als and orga­ni­za­tions, Jones and Infowars would be per­ma­nent­ly banned and Face­book would have to remove con­tent that expressed sup­port for the con­spir­a­cy the­o­rist and his site.

    In April 2019, a pro­pos­al for the rec­om­mend­ed ban — com­plete with exam­ples and com­ments from the pub­lic pol­i­cy, legal, and com­mu­ni­ca­tions teams — was sent by email to Moni­ka Bick­ert, Face­book’s head of glob­al pol­i­cy man­age­ment, and her boss, Kaplan. The pro­pos­al was then passed on to top com­pa­ny lead­er­ship, includ­ing Zucker­berg, sources said.

    The Face­book CEO balked at remov­ing posts that praised Jones and his ideas.

    “Zucker­berg basi­cal­ly took the deci­sion that he did not want to use this pol­i­cy against Jones because he did not per­son­al­ly think he was a hate fig­ure,” said a for­mer pol­i­cy employ­ee.

    The teams were direct­ed to cre­ate an entire­ly new des­ig­na­tion for Jones to fit the CEO’s request, and when the com­pa­ny announced the ban on May 2, it did not say it had changed its rules at Zuckerberg’s behest. The deci­sions, how­ev­er, would have far-reach­ing impli­ca­tions, set­ting off a chain of events that ulti­mate­ly con­tributed to the vio­lent after­math of the 2020 elec­tion.

    Two for­mer pol­i­cy employ­ees said the process made con­tent pol­i­cy teams hes­i­tant to rec­om­mend new actions, result­ing in a “freeze” on new des­ig­na­tions for dan­ger­ous indi­vid­u­als and orga­ni­za­tions for rough­ly a year. In the inter­im, many extrem­ist groups used the plat­form to orga­nize and grow their mem­ber­ship through­out 2020. The for­mer pol­i­cy employ­ees said the delay in label­ing such groups effec­tive­ly enabled them to use Face­book to recruit and orga­nize through most of 2020.

    “Once the Alex Jones thing had blown over, they froze designations, and that lasted for close to a year, and they were very rarely willing to push through anything. That impacted the lead-up to the election last year. Teams should have been reviewing the Oath Keepers and Three Percenters, and essentially these people weren't allowed to,” said the policy employee, referring to right-wing militant organizations that Facebook started to remove in August 2020.

    The Wash­ing­ton Post report­ed on Sat­ur­day that the Jus­tice Depart­ment and FBI are inves­ti­gat­ing links between Jones and the Capi­tol riot­ers.

    The com­pa­ny could have act­ed much ear­li­er, one Face­book researcher wrote on the inter­nal mes­sage board when they quit in August. The note came with a warn­ing: “Integri­ty teams are fac­ing increas­ing bar­ri­ers to build­ing safe­guards.” They wrote of how pro­posed plat­form improve­ments that were backed by strong research and data had been “pre­ma­ture­ly sti­fled or severe­ly con­strained … often based on fears of pub­lic and pol­i­cy stake­hold­er respons­es.”

    “We’ve known for over a year now that our rec­om­men­da­tion sys­tems can very quick­ly lead users down the path to con­spir­a­cy the­o­ries and groups,” they wrote, crit­i­ciz­ing the com­pa­ny for being hes­i­tant to take action against the QAnon mass delu­sion. “In the mean­time, the fringe group/set of beliefs has grown to nation­al promi­nence with QAnon con­gres­sion­al can­di­dates and QAnon hash­tags and groups trend­ing in the main­stream. We were will­ing to act only *after* things had spi­raled into a dire state.”

    Though the 2020 elec­tion is long over, cur­rent and for­mer employ­ees say pol­i­tics con­tin­ue to seep into Face­book prod­uct and fea­ture deci­sions. Four sources said they were con­cerned about Kaplan’s influ­ence over which con­tent is rec­om­mend­ed in News Feed. Giv­en his role court­ing politi­cians, they said, there is a fun­da­men­tal con­flict of inter­est in both appeas­ing gov­ern­ment offi­cials or can­di­dates and decid­ing what peo­ple see on the plat­form.

    For weeks pri­or to the elec­tion, mis­in­for­ma­tion was spread­ing across Face­book, under­min­ing trust in the integri­ty of how votes would be count­ed. To improve the qual­i­ty of con­tent in the News Feed, exec­u­tives decid­ed the site would empha­size News Ecosys­tem Qual­i­ty (NEQ), an inter­nal score giv­en to pub­lish­ers based on assess­ments of their jour­nal­ism, in its rank­ing algo­rithm, accord­ing to the New York Times.

    This and oth­er “break glass” mea­sures improved the qual­i­ty of con­tent on people’s News Feeds so much that John Hege­man, the vice pres­i­dent respon­si­ble for the fea­ture, pushed to con­tin­ue them indef­i­nite­ly, accord­ing to three peo­ple famil­iar with the sit­u­a­tion who spoke to Buz­zFeed News. Yet Hegeman’s sug­ges­tion was opposed by Kaplan and mem­bers of the pol­i­cy team. The tem­po­rary mea­sures even­tu­al­ly expired.

    ...

    In the days fol­low­ing the insur­rec­tion, Face­book reem­pha­sized NEQ in its News Feed rank­ing algo­rithm again. Face­book spokesper­son Andy Stone said that change was tem­po­rary and has already been “rolled back.”

    “Our Lead­er­ship Isn’t Doing Enough”

    In the aftermath of the 2020 election, some departing Facebook employees have openly criticized leadership as they've exited. “I've grown more disillusioned about our company and the role we play in society,” a nearly eight-year veteran said, adding that they were saddened and infuriated by leadership's failure to recognize or minimize the “real negatives” the company introduces to the world.

    “I think the peo­ple work­ing in these areas are work­ing as hard as they can and I com­mend them for their efforts,” they wrote. “How­ev­er, I do think our lead­er­ship isn’t doing enough.”

    Beyond a pro­found con­cern over the influ­ence of Kaplan’s pol­i­cy team, a num­ber of Face­book employ­ees attrib­uted the com­pa­ny’s con­tent pol­i­cy prob­lems to Zucker­berg and his view that the plat­form must always be a bal­ance of right and left.

    “Ide­ol­o­gy is not, and should not be, a pro­tect­ed class,” a con­tent pol­i­cy employ­ee who left weeks after the elec­tion wrote. “White suprema­cy is an ide­ol­o­gy; so is anar­chism. Nei­ther view is immutable, nor should either be beyond scruti­ny. The idea that our con­tent rank­ing deci­sions should be bal­anced on a scale from right to left is imprac­ti­ca­ble … and frankly can be dan­ger­ous, as one side of that scale active­ly chal­lenges core demo­c­ra­t­ic insti­tu­tions and fails to rec­og­nize the results of a free and fair elec­tion.”

    In Octo­ber 2020, Face­book respond­ed to ongo­ing crit­i­cism of its pol­i­cy deci­sions by intro­duc­ing an Over­sight Board, an inde­pen­dent pan­el to hear appeals on con­tent take­downs. But the for­mer pol­i­cy employ­ee with insight into the Alex Jones ban said that sig­nif­i­cant changes to rules and enforce­ment will always come down to Zucker­berg.

    “Joel [Kaplan] has influ­ence for sure, but at the end of the day Mark owns this stuff,” they said. “Mark has con­sol­i­dat­ed so much of this polit­i­cal deci­sion-mak­ing pow­er in him­self.”

    UPDATE
    Feb­ru­ary 22, 2021, at 12:14 p.m.

    This story has been updated to clarify that an employee's call for empathy for victims of sexual assault during a Facebook policy meeting in the fall of 2018 was directed at a colleague, not Joel Kaplan.

    ———–

    ““Mark Changed The Rules”: How Face­book Went Easy On Alex Jones And Oth­er Right-Wing Fig­ures” by Ryan Mac and Craig Sil­ver­man; Buz­zFeed; 02/21/2021

    “Zuckerberg's “more nuanced policy” set off a cascading effect, the two former employees said, which delayed the company's efforts to remove right-wing militant organizations such as the Oath Keepers, which were involved in the Jan. 6 insurrection at the US Capitol. It is also a case study in Facebook's willingness to change its rules to placate America's right wing and avoid political backlash.”

    A “more nuanced policy”. It's a darkly amusing way of characterizing Mark Zuckerberg's demands that Facebook carve out special exemptions for figures like Alex Jones. But Zuckerberg wasn't the only high-level executive making these demands of Facebook's employees. Joel Kaplan reportedly guided Zuckerberg through this process of making right-wing misinformation super-spreaders a protected class on the platform:

    ...
    Inter­nal doc­u­ments obtained by Buz­zFeed News and inter­views with 14 cur­rent and for­mer employ­ees show how the company’s pol­i­cy team — guid­ed by Joel Kaplan, the vice pres­i­dent of glob­al pub­lic pol­i­cy, and Zuckerberg’s whims — has exert­ed out­size influ­ence while obstruct­ing con­tent mod­er­a­tion deci­sions, stymieing prod­uct roll­outs, and inter­ven­ing on behalf of pop­u­lar con­ser­v­a­tive fig­ures who have vio­lat­ed Facebook’s rules.

    In Decem­ber, a for­mer core data sci­en­tist wrote a memo titled, “Polit­i­cal Influ­ences on Con­tent Pol­i­cy.” Seen by Buz­zFeed News, the memo stat­ed that Kaplan’s pol­i­cy team “reg­u­lar­ly pro­tects pow­er­ful con­stituen­cies” and list­ed sev­er­al exam­ples, includ­ing: remov­ing penal­ties for mis­in­for­ma­tion from right-wing pages, blunt­ing attempts to improve con­tent qual­i­ty in News Feed, and briefly block­ing a pro­pos­al to stop rec­om­mend­ing polit­i­cal groups ahead of the US elec­tion.

    Since the Novem­ber vote, at least six Face­book employ­ees have resigned with farewell posts that have called out leadership’s fail­ures to heed its own experts on mis­in­for­ma­tion and hate speech. Four depart­ing employ­ees explic­it­ly cit­ed the pol­i­cy orga­ni­za­tion as an imped­i­ment to their work and called for a reor­ga­ni­za­tion so that the pub­lic pol­i­cy team, which over­sees lob­by­ing and gov­ern­ment rela­tions, and the con­tent pol­i­cy team, which sets and enforces the platform’s rules, would not both report to Kaplan.

    ...

    Kaplan’s close rela­tion­ship with Zucker­berg has led the CEO to weigh pol­i­tics more heav­i­ly when mak­ing high-pro­file con­tent pol­i­cy enforce­ment deci­sions, cur­rent and for­mer employ­ees said. Kaplan’s efforts to court the Trump White House over the past four years — from his wide­ly pub­li­cized sup­port for Supreme Court nom­i­nee Brett Kavanaugh to his inter­ven­tions on behalf of right-wing influ­encers in Face­book pol­i­cy deci­sions — have also made him a tar­get for civ­il rights groups and Demo­c­ra­t­ic law­mak­ers.

    ...

    When Kaplan joined Face­book to lead its DC oper­a­tion in 2011, he had the con­nec­tions and pedi­gree the com­pa­ny need­ed to court the Amer­i­can right. A for­mer clerk for con­ser­v­a­tive Supreme Court Jus­tice Antonin Scalia, he served as a White House deputy chief of staff under Pres­i­dent George W. Bush after par­tic­i­pat­ing in the Brooks Broth­ers riot dur­ing the 2000 Flori­da pres­i­den­tial elec­tion dis­pute. Dur­ing a Sen­ate con­fir­ma­tion hear­ing in 2003 for a post with the Office of Man­age­ment and Bud­get, Kaplan was ques­tioned about his role in the event, which sought to stop the tal­ly­ing of votes dur­ing the Flori­da recount.

    Though he ini­tial­ly main­tained a low pub­lic pro­file at Face­book, Kaplan — COO Sheryl Sandberg’s Har­vard class­mate and for­mer boyfriend — was val­ued by Zucker­berg for his under­stand­ing of GOP pol­i­cy­mak­ers and con­ser­v­a­tive Amer­i­cans, who the CEO believed were under­rep­re­sent­ed by a lib­er­al-lean­ing lead­er­ship team and employ­ee base.

    By 2014, he’d been pro­mot­ed to vice pres­i­dent of glob­al pub­lic pol­i­cy. In that role, Kaplan over­saw the company’s gov­ern­ment rela­tions around the world as well as its con­tent pol­i­cy team. That arrange­ment raised eye­brows, as oth­er com­pa­nies, includ­ing Google and Twit­ter, typ­i­cal­ly keep pub­lic pol­i­cy and lob­by­ing efforts sep­a­rate from teams that cre­ate and enforce con­tent rules.
    ...

    And despite all of the public and internal blowback Facebook is experiencing as a result of the role it continues to play in spreading disinformation, Zuckerberg reportedly remains stalwart in his support for Kaplan. But Kaplan's influence over how Facebook implements its internal rules isn't limited to his relationship with Zuckerberg. Kaplan is literally in charge of both lobbying governments and enforcing the rules. So Facebook basically designed a corporate structure that ensures Facebook's rules will be implemented in the manner most politically palatable to key governments:

    ...
    Face­book employ­ees said Zucker­berg remains stal­wart in his sup­port for Kaplan, but inter­nal pres­sure is build­ing to reduce the pub­lic pol­i­cy team’s influ­ence. Col­leagues “feel pres­sure to ensure their rec­om­men­da­tions align with the inter­ests of pol­i­cy­mak­ers,” Samidh Chakrabar­ti, head of Facebook’s civic integri­ty team, wrote in an inter­nal note in June, bemoan­ing the dif­fi­cul­ty of bal­anc­ing such inter­ests while deliv­er­ing on the team’s man­date: stop­ping abuse and elec­tion inter­fer­ence on the plat­form. The civic integri­ty team was dis­band­ed short­ly after the elec­tion, as report­ed by the Infor­ma­tion.

    “They attribute this to the orga­ni­za­tion­al incen­tives of hav­ing the con­tent pol­i­cy and pub­lic pol­i­cy teams share a com­mon root,” Chakrabar­ti said. “As long as this is the case, we will be pre­ma­ture­ly pri­or­i­tiz­ing reg­u­la­to­ry inter­ests over com­mu­ni­ty pro­tec­tion.”
    ...

    But we don't have to merely look at a warped corporate structure to realize there's a major problem with how Facebook enforces the rules against right-wing figures. We just have to look at the endlessly growing list of examples of Facebook bending over backwards to appease far right disinformation outlets. A list that keeps growing in large part due to the steady stream of demoralized Facebook employees and ex-Facebook employees blowing the whistle. For example, there was the decision to continue with the “Groups You Should Join” feature in August of 2020 after it was determined that the feature was fueling growing political polarization. It was Kaplan's public policy team that prevented the ending of the feature until after the election, over concerns that doing so beforehand “would have created thrash in the political ecosystem” during the election. Yes, Facebook deferred until after the election a change that would have reduced political polarization, all over fears of a political backlash:

    ...
    In August, a Face­book prod­uct man­ag­er who over­sees the News Feed updat­ed his col­leagues on the company’s prepa­ra­tions for the 2020 US elec­tion.

    Inter­nal research had shown that peo­ple on Face­book were being polar­ized on the site in polit­i­cal dis­cus­sion groups, which were also breed­ing grounds for mis­in­for­ma­tion and hate. To com­bat this, Face­book employ­ees who were tasked with pro­tect­ing elec­tion integri­ty pro­posed the plat­form stop rec­om­mend­ing such groups in a mod­ule called “Groups You Should Join.”

    But the pub­lic pol­i­cy team was afraid of pos­si­ble polit­i­cal blow­back.

    “Although the Product recommendation would have improved implementation of the civic filter, it would have created thrash in the political ecosystem during [the 2020 US election],” the product manager wrote on Facebook's internal message board. “We have decided to not make any changes until the election is over.”

    The social net­work even­tu­al­ly paused polit­i­cal group rec­om­men­da­tions — just weeks before the Novem­ber elec­tion — and removed them per­ma­nent­ly only after the Capi­tol insur­rec­tion on Jan. 6. Cur­rent and for­mer employ­ees said Face­book’s deci­sion to ignore its integri­ty team’s guid­ance and ini­tial­ly leave group rec­om­men­da­tions untouched exem­pli­fies how polit­i­cal cal­cu­la­tions often quashed com­pa­ny ini­tia­tives that could have blunt­ed mis­in­for­ma­tion and rad­i­cal­iza­tion.
    ...

    But the “Groups You Should Join” feature wasn't the only feature Kaplan's group decided to keep during this time. The “In Feed Recommendations” feature was also kept, even though the product was pushing right-wing outlets despite not being supposed to push political content at all. Once again, it was fear of conservative accusations of 'shadow-banning' that apparently drove these decisions. And what's more remarkable is that we are hearing that Facebook's employees explicitly stated these fears as the reasons for not implementing the changes. It's one thing to informally make these decisions based on fears of 'shadow-banning' accusations and come up with a different formal excuse. But it's another level of capitulation if Facebook's own internal memos cited the fear of 'shadow-banning' charges as the explicit reason to keep these policies in place. Facebook effectively 'shadow-unbanned' the far right:

    ...
    In that same update about group rec­om­men­da­tions, the prod­uct man­ag­er also explained how lead­ers decid­ed against mak­ing changes to a fea­ture called In Feed Rec­om­men­da­tions (IFR) due to poten­tial polit­i­cal wor­ries. Designed to insert posts into people’s feeds from accounts they don’t fol­low, IFR was intend­ed to fos­ter new con­nec­tions or inter­ests. For exam­ple, if a per­son fol­lowed the Face­book page for a foot­ball team like the Kansas City Chiefs, IFR might add a post from the NFL to their feed, even if that per­son didn’t fol­low the NFL.

    One thing IFR was not sup­posed to do was rec­om­mend polit­i­cal con­tent. But ear­li­er that spring, Face­book users began com­plain­ing that they were see­ing posts from con­ser­v­a­tive per­son­al­i­ties includ­ing Ben Shapiro in their News Feeds even though they had nev­er engaged with that type of con­tent.

    When the issue was flagged inter­nal­ly, Facebook’s con­tent pol­i­cy team warned that remov­ing such sug­ges­tions for polit­i­cal con­tent could reduce those pages’ engage­ment and traf­fic, and pos­si­bly inspire com­plaints from pub­lish­ers. A News Feed prod­uct man­ag­er and a pol­i­cy team mem­ber reit­er­at­ed this argu­ment in an August post to Facebook’s inter­nal mes­sage board.

    “A notice­able drop in dis­tri­b­u­tion for these pro­duc­ers (via traf­fic insights for rec­om­men­da­tions) is like­ly to result in high-pro­file esca­la­tions that could include accu­sa­tions of shad­ow-ban­ning and/or FB bias against cer­tain polit­i­cal enti­ties dur­ing the US 2020 elec­tion cycle,” they explained. Shad­ow-ban­ning, or the lim­it­ing of a page’s cir­cu­la­tion with­out inform­ing its own­ers, is a com­mon accu­sa­tion lev­eled by right-wing per­son­al­i­ties against social media plat­forms.

    Through­out 2020, the “fear of antag­o­niz­ing pow­er­ful polit­i­cal actors,” as the for­mer core data sci­en­tist put it in their memo, became a key pub­lic pol­i­cy team ratio­nal­iza­tion for for­go­ing action on poten­tial­ly viola­tive con­tent or rolling out prod­uct changes ahead of the US pres­i­den­tial elec­tion. They also said they had seen “a dozen pro­pos­als to mea­sure the objec­tive qual­i­ty of con­tent on News Feed dilut­ed or killed because … they have a dis­pro­por­tion­ate impact across the US polit­i­cal spec­trum, typ­i­cal­ly harm­ing con­ser­v­a­tive con­tent more.”

    The data sci­en­tist, who spent more than five years at the com­pa­ny before leav­ing late last year, not­ed that while strides had been made since 2016, the state of polit­i­cal con­tent on News Feed was “still gen­er­al­ly agreed to be bad.” Accord­ing to Face­book data, they added, 1 of every 100 views on con­tent about US pol­i­tics was for some type of hoax, while the major­i­ty of views for polit­i­cal mate­ri­als were on par­ti­san posts. Yet the com­pa­ny con­tin­ued to give known spread­ers of false and mis­lead­ing infor­ma­tion a pass if they were deemed “‘sen­si­tive’ or like­ly to retal­i­ate,” the data sci­en­tist said.

    “In the US it appears that inter­ven­tions have been almost exclu­sive­ly on behalf of con­ser­v­a­tive pub­lish­ers,” they wrote, attribut­ing this to polit­i­cal pres­sure or a reluc­tance to upset sen­si­tive pub­lish­ers and high-pro­file users.
    ...

    Finally, we have the example of Kaplan's team actively thwarting the implementation of the News Ecosystem Quality (NEQ) metric that could have kept the worst disinformation sources out of people's news feeds. It was only after the insurrection that Facebook allowed the changes to take place:

    ...
    For weeks pri­or to the elec­tion, mis­in­for­ma­tion was spread­ing across Face­book, under­min­ing trust in the integri­ty of how votes would be count­ed. To improve the qual­i­ty of con­tent in the News Feed, exec­u­tives decid­ed the site would empha­size News Ecosys­tem Qual­i­ty (NEQ), an inter­nal score giv­en to pub­lish­ers based on assess­ments of their jour­nal­ism, in its rank­ing algo­rithm, accord­ing to the New York Times.

    This and oth­er “break glass” mea­sures improved the qual­i­ty of con­tent on people’s News Feeds so much that John Hege­man, the vice pres­i­dent respon­si­ble for the fea­ture, pushed to con­tin­ue them indef­i­nite­ly, accord­ing to three peo­ple famil­iar with the sit­u­a­tion who spoke to Buz­zFeed News. Yet Hegeman’s sug­ges­tion was opposed by Kaplan and mem­bers of the pol­i­cy team. The tem­po­rary mea­sures even­tu­al­ly expired.

    ...

    In the days fol­low­ing the insur­rec­tion, Face­book reem­pha­sized NEQ in its News Feed rank­ing algo­rithm again. Face­book spokesper­son Andy Stone said that change was tem­po­rary and has already been “rolled back.”
    ...

    But it was­n’t always Kaplan enforc­ing this pro­tec­tion rack­et. When it came to Alex Jones, it was Zucker­berg him­self who stepped in to pre­vent Jones’s deplat­form­ing. Why? Because Zucker­berg did­n’t seem to actu­al­ly see Jones as a hate fig­ure. So they had to carve out an entire new rule sys­tem to allow Jones to stay on the plat­form, an action that employ­ees are direct­ly attribut­ing to the extreme hes­i­tan­cy to enforce the rules in 2020:

    ...
    The strate­gic response team that had gath­ered evi­dence for the Alex Jones and Infowars ban in spring 2019 drew upon years of exam­ples of his hate speech against Mus­lims, trans­gen­der peo­ple, and oth­er groups. Under the com­pa­ny’s poli­cies for dan­ger­ous indi­vid­u­als and orga­ni­za­tions, Jones and Infowars would be per­ma­nent­ly banned and Face­book would have to remove con­tent that expressed sup­port for the con­spir­a­cy the­o­rist and his site.

    In April 2019, a pro­pos­al for the rec­om­mend­ed ban — com­plete with exam­ples and com­ments from the pub­lic pol­i­cy, legal, and com­mu­ni­ca­tions teams — was sent by email to Moni­ka Bick­ert, Face­book’s head of glob­al pol­i­cy man­age­ment, and her boss, Kaplan. The pro­pos­al was then passed on to top com­pa­ny lead­er­ship, includ­ing Zucker­berg, sources said.

    The Face­book CEO balked at remov­ing posts that praised Jones and his ideas.

    “Zucker­berg basi­cal­ly took the deci­sion that he did not want to use this pol­i­cy against Jones because he did not per­son­al­ly think he was a hate fig­ure,” said a for­mer pol­i­cy employ­ee.

    The teams were direct­ed to cre­ate an entire­ly new des­ig­na­tion for Jones to fit the CEO’s request, and when the com­pa­ny announced the ban on May 2, it did not say it had changed its rules at Zuckerberg’s behest. The deci­sions, how­ev­er, would have far-reach­ing impli­ca­tions, set­ting off a chain of events that ulti­mate­ly con­tributed to the vio­lent after­math of the 2020 elec­tion.

    Two for­mer pol­i­cy employ­ees said the process made con­tent pol­i­cy teams hes­i­tant to rec­om­mend new actions, result­ing in a “freeze” on new des­ig­na­tions for dan­ger­ous indi­vid­u­als and orga­ni­za­tions for rough­ly a year. In the inter­im, many extrem­ist groups used the plat­form to orga­nize and grow their mem­ber­ship through­out 2020. The for­mer pol­i­cy employ­ees said the delay in label­ing such groups effec­tive­ly enabled them to use Face­book to recruit and orga­nize through most of 2020.

    “Once the Alex Jones thing had blown over, they froze designations, and that lasted for close to a year, and they were very rarely willing to push through anything. That impacted the lead-up to the election last year. Teams should have been reviewing the Oath Keepers and Three Percenters, and essentially these people weren't allowed to,” said the policy employee, referring to right-wing militant organizations that Facebook started to remove in August 2020.
    ...

    It’s the kind of anec­dote about Zucker­berg that rais­es a ques­tion rarely asked about the guy: so what does he actu­al­ly believe? Like, does he have some sort of polit­i­cal ori­en­ta­tion? If so, what is it? Because it’s long seemed like Zucker­berg sim­ply had no dis­cernible moral core beyond car­ing about mak­ing mon­ey and amass­ing pow­er. And pro­tect­ing Alex Jones has a clear com­mer­cial motive for Face­book. But when we learn about his seem­ing­ly warm feel­ings towards Alex Jones, we have to ask: is Zucker­berg straight up red-pilled? Because it’s not exact­ly a huge leap to go from ‘assh*le who only cares about mon­ey and pow­er’ to ‘far right ide­o­logue aligned with Alex Jones’. It’s arguably not a leap at all.

    So is the owner of the greatest disseminator of far right propaganda in history also a consumer of that propaganda? It would explain a lot. Not that being an assh*le who only cares about money and power wouldn't also explain a lot. Still, given that Facebook has made itself into the premier global purveyor of right-wing disinformation under Zuckerberg's rules, the question of whether or not Zuckerberg himself is actually a far right nut job is a pretty important one. Especially now that Facebook has transitioned from being the disinformation purveyor's platform of choice to the insurrectionist's platform of choice:

    Forbes

    Sheryl Sand­berg Down­played Facebook’s Role In The Capi­tol Hill Siege—Justice Depart­ment Files Tell A Very Dif­fer­ent Sto­ry

    Thomas Brew­ster
    Cyber­se­cu­ri­ty
    Asso­ciate edi­tor at Forbes, cov­er­ing cyber­crime, pri­va­cy, secu­ri­ty and sur­veil­lance.

    Feb 7, 2021, 10:54am EST

    Just after the Capitol Hill riots on January 6, Sheryl Sandberg, Facebook's chief operating officer, admitted the company's ability to enforce its own rules was “never perfect.” About the shocking events of the day, she added: “I think these events were largely organized on platforms that don't have our abilities to stop hate and don't have our standards and don't have our transparency.”

    Sand­berg was lat­er crit­i­cized for down­play­ing her employer’s role as a plat­form for the orga­niz­ers of the siege. But Face­book was far and away the most cit­ed social media site in charg­ing doc­u­ments the Jus­tice Depart­ment filed against mem­bers of the Capi­tol Hill mob, pro­vid­ing fur­ther evi­dence that Sand­berg was, per­haps, mis­tak­en in her claim. Face­book, how­ev­er, claims that the doc­u­ments show the social media com­pa­ny has been espe­cial­ly forth­com­ing in assist­ing law enforce­ment in inves­ti­gat­ing users who breached the Capi­tol.

    Forbes reviewed data from the Pro­gram on Extrem­ism at the George Wash­ing­ton Uni­ver­si­ty, which has col­lat­ed a list of more than 200 charg­ing doc­u­ments filed in rela­tion to the siege. In total, the charg­ing doc­u­ments refer to 223 indi­vid­u­als in the Capi­tol Hill riot inves­ti­ga­tion. Of those doc­u­ments, 73 ref­er­ence Face­book. That’s far more ref­er­ences than oth­er social net­works. YouTube was the sec­ond most-ref­er­enced on 24. Insta­gram, a Face­book-owned com­pa­ny, was next on 20. Par­ler, the app that pledged pro­tec­tion for free speech rights and gar­nered a large far-right user­base, was men­tioned in just eight.

    The ref­er­ences are a mix of pub­lic posts and pri­vate mes­sages sent on each plat­form, dis­cussing plans to go to the Stop the Steal march, some con­tain­ing threats of vio­lence, as well as images, videos and livestreams from the breach of the Capi­tol build­ing.

    Livestream­ing crime on Face­book

    Whilst the data doesn't show definitively what app was the most popular amongst rioters, it does strongly indicate Facebook was the rioters' preferred platform. Previously, Forbes had reported on cases where Facebook users had publicly posted their intention to attend the riots. One included the image of a bullet with the caption, “By Bullet or Ballot, Restoration of the Republic is Coming.” The man who posted the image was later arrested after posting images of himself at the Capitol on January 6, according to investigators. In other cases, the FBI found Facebook users had livestreamed their attack on the building. As the Washington Post previously reported, the #StopTheSteal hashtag was seen across Facebook in the days around January 6, with 128,000 users talking about it, according to data provided by Eric Feinberg, a vice president with the Coalition for a Safer Web.

    In var­i­ous cas­es, the accused used a mix of social media sites to pro­mote their involve­ment in the riot. For instance, in charges filed on Jan­u­ary 27, an alleged mem­ber of the Oath Keep­ers mili­tia, Thomas Cald­well, was found to have post­ed on Face­book from the riot, not­ing in one post: “We are surg­ing for­ward. Doors breached.” Mean­while, a fel­low Oath Keep­er, Jes­si­ca Watkins, wrote on Par­ler: “Me before forc­ing entry into the Capi­tol Build­ing. #stopthesteal2 #stormthe­capi­tol #oath­keep­ers #ohiomili­tia.” (Cald­well, a Navy vet­er­an, told a court in Jan­u­ary that “every sin­gle charge is false,” accord­ing to Reuters. Watkins told a judge she under­stood the charges against her, but “I don’t under­stand how I got them”.)

    A Face­book spokesper­son told Forbes the com­pa­ny was pro­vid­ing data to law enforce­ment on those present at the riot and was remov­ing accounts of those who were involved in the storm­ing of the Capi­tol. The spokesper­son also not­ed that pri­or to the mob attack, as of Novem­ber 30, Face­book had removed about 3,200 Pages, 18,800 groups, 100 events, 23,300 Face­book pro­files and 7,400 Insta­gram accounts for vio­lat­ing its pol­i­cy against mil­i­ta­rized social move­ments. The pol­i­cy was launched in August.

    ...

    As Forbes report­ed in Jan­u­ary, Face­book has been pre­serv­ing riot­ers’ data, includ­ing their pri­vate mes­sages, so that it can be hand­ed to law enforce­ment when they make a legal request. Face­book isn’t alone in help­ing law enforce­ment in gath­er­ing infor­ma­tion on sus­pects. Oth­er plat­forms and tech­nol­o­gy com­pa­nies, from Apple and Google to Par­ler, have been fur­nish­ing the feds with data on users who were at the riots.

    ————–

    “Sheryl Sand­berg Down­played Facebook’s Role In The Capi­tol Hill Siege—Justice Depart­ment Files Tell A Very Dif­fer­ent Sto­ry” by Thomas Brew­ster; Forbes; 02/07/2021

    “Whilst the data doesn't show definitively what app was the most popular amongst rioters, it does strongly indicate Facebook was the rioters' preferred platform. Previously, Forbes had reported on cases where Facebook users had publicly posted their intention to attend the riots. One included the image of a bullet with the caption, “By Bullet or Ballot, Restoration of the Republic is Coming.” The man who posted the image was later arrested after posting images of himself at the Capitol on January 6, according to investigators. In other cases, the FBI found Facebook users had livestreamed their attack on the building. As the Washington Post previously reported, the #StopTheSteal hashtag was seen across Facebook in the days around January 6, with 128,000 users talking about it, according to data provided by Eric Feinberg, a vice president with the Coalition for a Safer Web.”

    Facebook: the insurrectionist's preferred platform. The numbers don't lie.

    Now, on the one hand, it is true that Facebook probably has the largest footprint in the DOJ filings because it's the biggest and most widely used social media platform. But on the other hand, it's also the case that Facebook has remained the most widely used social media platform for the right wing precisely because of the steps taken by Facebook executives like Zuckerberg and Kaplan to ensure the platform doesn't crack down too hard on the Nazis, fascists, and anyone else with an internet connection. Or at least anyone else with an internet connection espousing right-wing disinformation.

    Posted by Pterrafractyl | March 5, 2021, 3:39 pm
  19. Following up on recent reports about Mark Zuckerberg's and Joel Kaplan's interference with the enforcement of Facebook's rules in order to allow right-wing figures like Ben Shapiro to continue to get pushed on unsuspecting users by Facebook's algorithm via the In Feed Recommendations (IFR) feature, despite a ban on political content in IFR, here's a report about a far more alarming example of Facebook's 'recommended' content pushing users towards radicalism:

    The final witness in the prosecution of the Michigan COVID kidnapping plotters — the interstate militia plot to kidnap Michigan's Governor Whitmer, put her on trial, and execute her — testified on Friday during the case's preliminary exam. The confidential FBI informant went only by “Dan” during the testimony. “Dan” became an FBI informant after he became aware of the plot and agreed to cooperate with law enforcement.

    So how did “Dan”, a self-described libertarian, become part of this plot? That's where it gets scandalous for Facebook, in a scandalously typical manner: Facebook's algorithm appears to have served up a suggestion to Dan that he join a Facebook group called the “Wolverine Watchmen”. When Dan clicked on the suggested page, a few questions popped up for him to answer. After answering the questions in a presumably satisfactory manner, Dan was admitted into the group and told to download an encrypted messaging app called Wire. The app prohibits screenshots and periodically deletes all messages, so it's basically designed for sensitive group communications.

    After joining the group, Dan began what is described as a journey from lockdown protests at the Michigan state Capitol to rural training exercises with members of the group who expressed a desire to hurt and kill law enforcement and politicians. In an echo of the numerous reports of far right 'Boogaloo' members joining the anti-police-brutality protests over the summer in the hopes of instigating more mayhem and violence, Dan reported attending a Black Lives Matter protest in Detroit, with the group going there envisioning a possible gunfight with police if pepper spray was used on protesters.

    So all it took for Dan to go from random libertarian to domestic-terrorist plotter was a Facebook group suggestion. That was it. Facebook basically recruited 'Dan' into this terrorist group. It raises the obvious and alarming question of how many other 'Dans' are out there. Especially 'Dans' who don't decide to go to the FBI and just remain part of the plot. How many other 'Dans' did Facebook's algorithms attempt to recruit into a far right domestic terror group last year? 10? 1,000? 10,000? We have no idea. We just know that at least one person, 'Dan', was recruited into a domestic terror plot by Facebook's 'suggested groups' algorithm, and 'Dan' probably wasn't the only one:

    Detroit Free Press

    Whit­mer kid­nap­ping plot hear­ing live feed: Con­fi­den­tial FBI infor­mant tes­ti­fies

    Joe Guillen
    Omar Abdel-Baqui
    Pub­lished 8:50 a.m. ET Mar. 5, 2021 | Updat­ed 9:59 a.m. ET Mar. 6, 2021

    A con­fi­den­tial FBI infor­mant tes­ti­fied Fri­day in a Jack­son court about being embed­ded for months along­side lead­ers of a group accused of plot­ting to kid­nap Gov. Gretchen Whit­mer.

    The informant’s iden­ti­ty was con­cealed for his safe­ty. Intro­duced only as “Dan,” an online video feed of Friday’s hear­ing was cut off dur­ing his tes­ti­mo­ny so court observers only could hear him.

    Dan described learn­ing of the group — known as the Wolver­ine Watch­men — through a Face­book algo­rithm that he believed made the sug­ges­tion based on his inter­ac­tions with oth­er Face­book pages that sup­port the Sec­ond Amend­ment and firearms train­ing.

    “I was scrolling through Face­book one day and they popped up as a sug­ges­tion post,” Dan said. “I clicked on the page and it had a few ques­tions to answer.”

    After answer­ing the ques­tions sat­is­fac­to­ri­ly, Dan, an Army vet­er­an who described him­self as a Lib­er­tar­i­an, was admit­ted into the group and told to down­load an encrypt­ed mes­sag­ing app called Wire so he could com­mu­ni­cate in secret with oth­er mem­bers. The app pro­hib­it­ed screen­shots and would peri­od­i­cal­ly delete all mes­sages.

    Dan’s accep­tance into the Face­book group was the begin­ning of his jour­ney as a con­fi­den­tial FBI “human source” that took him to protests at the state Capi­tol and to rur­al train­ing exer­cis­es with mem­bers of the group who expressed a desire to hurt and kill law enforce­ment offi­cers and politi­cians. Dan tes­ti­fied he some­times wore a wire and feared for his safe­ty, even­tu­al­ly decid­ing to sell his house when his address became known.

    Dan and the group’s mem­bers also attend­ed what he described as a Black Lives Mat­ter protest in Detroit. The group went to the protest envi­sion­ing a pos­si­ble gun­fight with police if pep­per spray was used on pro­test­ers, he tes­ti­fied. The group wait­ed in a park­ing lot but even­tu­al­ly left the protest with­out inci­dent.

    Dan told a friend in law enforce­ment about the group short­ly after learn­ing of its desires to harm police offi­cers. The FBI then approached him and he agreed to coop­er­ate, he tes­ti­fied, adding that he did not ask for mon­ey.

    As an FBI source, Dan became famil­iar with the three defen­dants in court today on charges they sup­port­ed a plot to kid­nap Whit­mer.

    Dan tes­ti­fied at the pre­lim­i­nary exam­i­na­tion for Pete Musi­co, 43; his son-in-law, Joseph Mor­ri­son, 26; and Paul Bel­lar, 22.

    The men are just three of the 14 men said to have plot­ted to tar­get Whit­mer over her restric­tions to stop the spread of the nov­el coro­n­avirus. Six of the 14 men were charged fed­er­al­ly, and eight were charged at the state lev­el over two coun­ties.

    Defense attor­neys attempt­ed to dis­tance the accused from the sur­veil­lance activ­i­ties to fur­ther the plot to kid­nap Whit­mer.

    Bel­lar had left the Wolver­ine Watch­men to live with his father in South Car­oli­na last sum­mer, well before the plans advanced in the ensu­ing months, his attor­ney, Andrew Kirk­patrick, said dur­ing cross-exam­i­na­tion of the infor­mant.

    Dur­ing his tes­ti­mo­ny, Dan con­firmed that oth­er mem­bers in the group sus­pect­ed Bel­lar of coop­er­at­ing with law enforce­ment after his depar­ture from Michi­gan.

    Dan tes­ti­fied for about six hours on Fri­day, and he was the final wit­ness to be called in the pre­lim­i­nary exam.

    ...

    ———–


    “Whitmer kidnapping plot hearing live feed: Confidential FBI informant testifies” by Joe Guillen and Omar Abdel-Baqui; Detroit Free Press; 03/05/2021

    “Dan described learn­ing of the group — known as the Wolver­ine Watch­men — through a Face­book algo­rithm that he believed made the sug­ges­tion based on his inter­ac­tions with oth­er Face­book pages that sup­port the Sec­ond Amend­ment and firearms train­ing.”

    Facebook just can't help itself. It simply must connect the world, including the world of terrorists. How many other people living in Michigan who interacted with Second Amendment and firearms training pages got the same suggestion? It's almost surprising the Michigan coup plot wasn't bigger in light of this revelation.

    Then there's the question of just how prevalent these kinds of militia groups were at the various police brutality protests across the summer of 2020. We just keep learning about these far right infiltrators:

    ...
    Dan’s accep­tance into the Face­book group was the begin­ning of his jour­ney as a con­fi­den­tial FBI “human source” that took him to protests at the state Capi­tol and to rur­al train­ing exer­cis­es with mem­bers of the group who expressed a desire to hurt and kill law enforce­ment offi­cers and politi­cians. Dan tes­ti­fied he some­times wore a wire and feared for his safe­ty, even­tu­al­ly decid­ing to sell his house when his address became known.

    Dan and the group’s mem­bers also attend­ed what he described as a Black Lives Mat­ter protest in Detroit. The group went to the protest envi­sion­ing a pos­si­ble gun­fight with police if pep­per spray was used on pro­test­ers, he tes­ti­fied. The group wait­ed in a park­ing lot but even­tu­al­ly left the protest with­out inci­dent.
    ...

    But perhaps the biggest question raised by this disturbing story is just how often other terror plots have been orchestrated in this manner over Facebook during the past year or so, as the 'Boogaloo' movement transitioned into the Trumpian 'Stop the Steal' post-election insurrection. Was the “Wolverine Watchmen” group the only Facebook domestic terror group that was casually using a simple questionnaire to filter recruits? Because it really is quite remarkable how easy it was for 'Dan' to join this group. Facebook served up the suggestion to join the group, and upon clicking the suggested link you get a few questions. Answer the questions in a predictably 'correct' manner and you're in? That's it? Is this typical for far right militia groups? Because if it is typical, there are probably A LOT more groups like this out there. One of the natural barriers to domestic terror plots is the fact that you have to get two or more people who are crazy enough to attempt such a plot to already know each other before the plotting starts, and this model of casual internet recruitment breaks that barrier. You know, kind of like the ISIS recruitment model. Except in this case, the plotters got together to meet in person. ISIS typically recruits, radicalizes, and gives orders from afar. So what took place with this Michigan story is like a wildly more successful version of the ISIS recruitment model. A wildly more successful version of the ISIS recruitment model that wouldn't have been possible without Facebook.

    Posted by Pterrafractyl | March 8, 2021, 5:19 pm
  20. Of all the disturbing questions raised by the January 6 Capitol insurrection, one of the most disturbing is whether or not any of the high-level orchestrators of the event — from Roger Stone to then-President Trump — will face any legal repercussions over the roles they played in making it happen. But perhaps an even more disturbing question is whether any of these leading figures will face repercussions after the next militia uprising they instigate. Because it's hard to imagine Jan 6 was a one-off event, especially if the people who led it remain free to lead another one. The Office of the Director of National Intelligence issued a report a couple of weeks ago identifying “militia violent extremists” as being among the “most lethal” public safety threats, after all. A threat that persists long after the insurrection. Anyone who played a leadership role in that insurrection and continues to defend it is effectively playing a leadership role in the future militia violence that authorities are now warning us is coming.

    But, of course, the leadership roles in the lead-up to the insurrection shouldn't be limited to political figures like Trump or Stone. What about Facebook's role in instigating the January 6 insurrection? As we've seen, the platform has been consistently behind the curve in addressing long-standing complaints that it's allowing itself to operate as a radicalization tool. It's a pattern exemplified by the story of how one of the members of the Michigan militia plot to kidnap and execute Governor Whitmer was effectively recruited into the plot via Facebook's algorithmic suggestions.

    So what sort of lead­er­ship role is Face­book play­ing today in the mili­tia move­men­t’s ongo­ing recruit­ment and rad­i­cal­iza­tion cam­paigns? Well, accord­ing to a new report by Buz­zFeed and the Tech Trans­paren­cy Project (TTP), Face­book con­tin­ues to lead the way as the pre­mier mili­tia recruit­ment and rad­i­cal­iza­tion plat­form. Accord­ing to their report, as of March 18, Face­book host­ed more than 200 mili­tia pages and groups and at least 140 includ­ed the word “mili­tia” in their name.

    But is Facebook still prompting users to join militia groups, as was reported with the Michigan militia kidnapping case? Yep! For example, when BuzzFeed visited the “East Kentucky Malitia” page, Facebook suggested visiting the pages of the Fairfax County Militia and the KY Mountain Rangers. When BuzzFeed visited the KY Mountain Rangers page, it then led to the page for the Texas Freedom Force, one of the groups currently under investigation for the role its members played in the insurrection. Other militias are continuing to use Facebook for organizing events, like the DFW Beacon Unit recently posting about holding a training session. Facebook is even automatically creating pages for militias that don't already have a page, based on the content that people are sharing. Even militias that aren't using Facebook to recruit are probably getting recruits thanks to the platform. This is all still happening. Because of course it is. This is Facebook. It would be weird if they weren't somehow promoting the far right:

    Buz­zFeed News

    Hun­dreds Of Far-Right Mili­tias Are Still Orga­niz­ing, Recruit­ing, And Pro­mot­ing Vio­lence On Face­book

    A new report iden­ti­fied more than 200 mili­tia pages and groups on Face­book as of March 18, more than two months after the insur­rec­tion at the Capi­tol.

    Christo­pher Miller
    Buz­zFeed News Reporter

    Post­ed on March 24, 2021, at 1:00 p.m. ET
    Last updat­ed on March 24, 2021, at 2:17 p.m. ET

    When Face­book CEO Mark Zucker­berg faces Con­gress on Thurs­day, to tes­ti­fy about extrem­ism online, he will do so as hun­dreds of far-right mili­tias, includ­ing some whose mem­bers were charged in the dead­ly insur­rec­tion on the US Capi­tol, con­tin­ue to orga­nize, recruit, and pro­mote vio­lence on the plat­form.

    More than 200 mili­tia pages and groups were on Face­book as of March 18, accord­ing to a new report pub­lished Wednes­day by the Tech Trans­paren­cy Project (TTP), a non­prof­it watch­dog orga­ni­za­tion, and addi­tion­al research by Buz­zFeed News. Of them, at least 140 includ­ed the word “mili­tia” in their name.

    Face­book banned some mil­i­tant groups and oth­er extrem­ist move­ments tied to vio­lence last August, after the FBI warned that such groups had become domes­tic ter­ror­ism threats.

    TTP found that Facebook is automatically creating pages for some of the militias from content that people are sharing, expanding the reach of the groups. This is not a new problem for the site, which in 2019 came under criticism for automatically generating pages for ISIS. In addition, BuzzFeed News and TTP found that the platform was directing people who “like” certain militia pages to check out others, effectively helping these movements recruit and radicalize new members.

    Many of the groups ana­lyzed by Buz­zFeed News and TTP shared images of guns and vio­lence. Some post­ed anti-gov­ern­ment mes­sages in sup­port of the mob inspired by for­mer Pres­i­dent Don­ald Trump that attacked the Capi­tol on Jan. 6. Oth­ers shared mis­in­for­ma­tion about the coro­n­avirus pan­dem­ic and racist memes tar­get­ing Black Lives Mat­ter activists.

    ...

    The find­ings come as the Biden admin­is­tra­tion moves to crack down on domes­tic vio­lent extrem­ism. The Office of the Direc­tor of Nation­al Intel­li­gence issued a report this month that iden­ti­fied “mili­tia vio­lent extrem­ists” as being among the “most lethal” pub­lic safe­ty threats.

    The US Attorney’s office has charged more than 300 peo­ple with par­tic­i­pat­ing in the insur­rec­tion, and the gov­ern­ment has said it expects to bring cas­es against at least 100 more.

    Zucker­berg is also set to give his first tes­ti­mo­ny before Con­gress since the events of Jan. 6. Along with Alpha­bet CEO Sun­dar Pichai and Twit­ter CEO Jack Dorsey, Zucker­berg will be ques­tioned by House law­mak­ers about social media’s role in pro­mot­ing extrem­ism and mis­in­for­ma­tion.

    Facebook’s ongo­ing strug­gles with mili­tia con­tent on its plat­form paint a trou­bling back­drop for the hear­ing. As both Buz­zFeed News and TTP report­ed in Jan­u­ary, Face­book allowed far-right domes­tic extrem­ists to open­ly dis­cuss weapons and tac­tics, coor­di­nate activ­i­ties, and spread calls to over­throw the gov­ern­ment for months ahead of the Capi­tol attack.

    The new report­ing by Buz­zFeed News and TTP shows that Facebook’s prob­lems with mili­tia activ­i­ty per­sist despite the com­pa­ny vow­ing in August 2020 to take action against them and oth­er extrem­ist groups that pose risks to pub­lic safe­ty.

    More than 20 of the mili­tia groups men­tioned in TTP’s report were cre­at­ed after Facebook’s ini­tial crack­down. Some of them were formed in Decem­ber 2020 or lat­er, as the pro-Trump Stop the Steal move­ment was gain­ing steam.

    One group, the Texas Mili­tia, launched its page on the after­noon of Jan. 6, as the attack on the Capi­tol was under way. The cre­ator and admin­is­tra­tor of the group warned at the time that “mod­ern tech­nol­o­gy has enabled rad­i­cals to sub­vert the process by which we elect our rep­re­sen­ta­tives.”

    “We must be prepared…to defend our rights and pre­vent [the] takeover of our great nation by rad­i­cals, uphold the Con­sti­tu­tion, and pre­serve our way of life,” he added.

    TTP found that in recent weeks, some of the mili­tia groups have cir­cu­lat­ed pro­pa­gan­da posts for the Proud Boys, which Face­book has banned since 2018.

    One of the posts came from C.A.M.P., which stands for Con­sti­tu­tion­al Amer­i­can Mili­tia Project. On March 13, the group’s admin­is­tra­tor post­ed a three-minute high­light reel of Proud Boys attacks on Black Lives Mat­ter pro­test­ers as well as footage from the Capi­tol riot.

    At least 19 lead­ers, mem­bers, or asso­ciates of the Proud Boys have been charged in fed­er­al court with con­spir­a­cy and oth­er offens­es relat­ed to the Jan. 6 attack. On Tues­day, two Proud Boys, Joe Big­gs and Ethan Nordean, plead­ed not guilty to fed­er­al charges accus­ing them of help­ing to plan and lead the insur­rec­tion.

    The militia pages not only persist on Facebook, but the platform is actually pushing people toward them, TTP and BuzzFeed News found. In doing so, Facebook is expanding the reach of the movements and “helping these organizations potentially recruit and radicalize users,” TTP's report stated.

    For instance, when Buz­zFeed News vis­it­ed the East Ken­tucky Mali­tia (the mis­spelling appears delib­er­ate; groups and pages often alter spellings of their names to avoid detec­tion), it was sug­gest­ed to vis­it the pages of Fair­fax Coun­ty Mili­tia and the KY Moun­tain Rangers.

    The Moun­tain Rangers page then led to the Texas Free­dom Force, an out­fit that fed­er­al author­i­ties inves­ti­gat­ing the Jan. 6 attack have called a “mili­tia extrem­ist group.” One of its mem­bers, Guy Ref­fitt, has been charged for par­tic­i­pat­ing in the Jan. 6 Capi­tol attack.

    Ref­fitt, 48, of Wylie, Texas, plead­ed not guilty last week to three charges of obstruct­ing an offi­cial pro­ceed­ing, tres­pass­ing, and wit­ness tam­per­ing.

    Accord­ing to pros­e­cu­tors, Reffitt’s wife shared with author­i­ties that he also belongs to the Three Per­centers, an anti-gov­ern­ment extrem­ist fac­tion of the mili­tia move­ment, accord­ing to the Anti-Defama­tion League. Sev­er­al Three Per­centers have been charged along­side Ref­fitt for par­tic­i­pat­ing in the Capi­tol attack. Many pages for the group were found by TTP and Buz­zFeed News to still be active on Face­book.

    Mem­bers of these groups are also using Face­book to orga­nize events, despite rev­e­la­tions that Face­book failed to act on a Kenosha Guard event page that urged peo­ple to bring firearms to a Black Lives Mat­ter protest which result­ed in two peo­ple being fatal­ly shot. Four peo­ple are suing Face­book for its alleged role in enabling the vio­lence that over­took the city of Kenosha.

    One such group, the DFW Bea­con Unit in Dallas–Fort Worth, Texas, which describes itself as a “legit­i­mate mili­tia,” post­ed on Mon­day about hold­ing a train­ing ses­sion.

    Some of the pages sug­gest efforts to coor­di­nate with law enforce­ment. One page called “Carter Coun­ty Okla­homa Mili­tia” post­ed on Jan. 5 that it had changed its name after speak­ing with the local sher­iff. It said the sher­iff is look­ing for “reserve deputies” and that peo­ple inter­est­ed in being a reservist should con­tact the page man­ag­er.

    ———-

    “Hun­dreds Of Far-Right Mili­tias Are Still Orga­niz­ing, Recruit­ing, And Pro­mot­ing Vio­lence On Face­book” by Christo­pher Miller; Buz­zFeed News; 03/24/2021

    “More than 200 militia pages and groups were on Facebook as of March 18, according to a new report published Wednesday by the Tech Transparency Project (TTP), a nonprofit watchdog organization, and additional research by BuzzFeed News. Of them, at least 140 included the word “militia” in their name.”

    These aren’t exact­ly stealth mili­tias. But Face­book is hap­py to host them. Months after the Jan 6 insur­rec­tion. It’s clear­ly an impor­tant mar­ket for the com­pa­ny. So impor­tant that Face­book’s algo­rithm con­tin­ues to cre­ate pages for mili­tias that don’t yet have them:

    ...
    TTP found that Facebook is automatically creating pages for some of the militias from content that people are sharing, expanding the reach of the groups. This is not a new problem for the site, which in 2019 came under criticism for automatically generating pages for ISIS. In addition, BuzzFeed News and TTP found that the platform was directing people who “like” certain militia pages to check out others, effectively helping these movements recruit and radicalize new members.

    ...

    The find­ings come as the Biden admin­is­tra­tion moves to crack down on domes­tic vio­lent extrem­ism. The Office of the Direc­tor of Nation­al Intel­li­gence issued a report this month that iden­ti­fied “mili­tia vio­lent extrem­ists” as being among the “most lethal” pub­lic safe­ty threats.

    ...

    The new report­ing by Buz­zFeed News and TTP shows that Facebook’s prob­lems with mili­tia activ­i­ty per­sist despite the com­pa­ny vow­ing in August 2020 to take action against them and oth­er extrem­ist groups that pose risks to pub­lic safe­ty.

    More than 20 of the mili­tia groups men­tioned in TTP’s report were cre­at­ed after Facebook’s ini­tial crack­down. Some of them were formed in Decem­ber 2020 or lat­er, as the pro-Trump Stop the Steal move­ment was gain­ing steam.
    ...

    And then there are the continued algorithmic “suggestions” that push users to militia pages, presumably whenever they visit a page also frequented by militia members; a small sketch tracing one such suggestion chain follows the excerpt:

    ...
    The militia pages not only persist on Facebook, but the platform is actually pushing people toward them, TTP and BuzzFeed News found. In doing so, Facebook is expanding the reach of the movements and “helping these organizations potentially recruit and radicalize users,” TTP’s report stated.

    For instance, when Buz­zFeed News vis­it­ed the East Ken­tucky Mali­tia (the mis­spelling appears delib­er­ate; groups and pages often alter spellings of their names to avoid detec­tion), it was sug­gest­ed to vis­it the pages of Fair­fax Coun­ty Mili­tia and the KY Moun­tain Rangers.

    The Moun­tain Rangers page then led to the Texas Free­dom Force, an out­fit that fed­er­al author­i­ties inves­ti­gat­ing the Jan. 6 attack have called a “mili­tia extrem­ist group.” One of its mem­bers, Guy Ref­fitt, has been charged for par­tic­i­pat­ing in the Jan. 6 Capi­tol attack.

    Ref­fitt, 48, of Wylie, Texas, plead­ed not guilty last week to three charges of obstruct­ing an offi­cial pro­ceed­ing, tres­pass­ing, and wit­ness tam­per­ing.

    Accord­ing to pros­e­cu­tors, Reffitt’s wife shared with author­i­ties that he also belongs to the Three Per­centers, an anti-gov­ern­ment extrem­ist fac­tion of the mili­tia move­ment, accord­ing to the Anti-Defama­tion League. Sev­er­al Three Per­centers have been charged along­side Ref­fitt for par­tic­i­pat­ing in the Capi­tol attack. Many pages for the group were found by TTP and Buz­zFeed News to still be active on Face­book.
    ...
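    To make those mechanics concrete, here is a minimal, purely illustrative Python sketch. It models the suggestion chain named in the excerpt (East Kentucky Malitia to Fairfax County Militia and the KY Mountain Rangers, then from the Mountain Rangers to the Texas Freedom Force) as a tiny directed graph and walks it with a breadth-first search. The page names come from the reporting; the graph structure and the traversal code are assumptions for illustration only. Facebook exposes no public API for these page suggestions, and TTP and BuzzFeed observed them simply by browsing.

    from collections import deque

    # Illustrative only: the "suggested pages" links below are the chain named
    # in the BuzzFeed/TTP reporting, not data pulled from any Facebook API.
    SUGGESTIONS = {
        "East Kentucky Malitia": ["Fairfax County Militia", "KY Mountain Rangers"],
        "KY Mountain Rangers": ["Texas Freedom Force"],
    }

    def suggestion_paths(start, target):
        """Breadth-first search over the observed suggestion links,
        returning every path from start to target."""
        paths, queue = [], deque([[start]])
        while queue:
            path = queue.popleft()
            for nxt in SUGGESTIONS.get(path[-1], []):
                if nxt in path:  # avoid cycles
                    continue
                if nxt == target:
                    paths.append(path + [nxt])
                else:
                    queue.append(path + [nxt])
        return paths

    for p in suggestion_paths("East Kentucky Malitia", "Texas Freedom Force"):
        print(" -> ".join(p))
    # Prints: East Kentucky Malitia -> KY Mountain Rangers -> Texas Freedom Force

    Two hops is all it takes: a user who lands on one misspelled local page can be nudged, suggestion by suggestion, toward a group federal authorities have called a “militia extremist group.”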

    So, as we can see, Facebook is living up to its mission statement of connecting the world, in this case by connecting people to militias, even after promising it would stop. It just keeps happening, almost as if the company can’t control itself: a new form of Facebook addiction, just for Facebook. An addiction so powerful that Facebook has been largely unable to curb its promotion of militia groups in the months since BuzzFeed and TTP issued essentially the same report back in October, five months before this one and two and a half months before the Jan. 6 insurrection. That earlier report detailed how Facebook was continuing to host militia pages despite pledges to remove them from the site. Militia pages were still there, some newly created, and still drawing recruits over Facebook. Oh, and still able to purchase ads for their militias, targeted ads that use Facebook’s micro-targeting algorithms to ensure that the people most likely to join a militia are the ones who see them:

    Buz­zFeed News

    Face­book Con­tin­ues To Host Mil­i­tant Groups And Ads Despite A Ban On Right-Wing Extrem­ism

    A new report finds “Face­book is rou­tine­ly behind the curve in crack­ing down on domes­tic extrem­ists on its plat­form.”

    Sal­vador Her­nan­dez Buz­zFeed News Reporter
    Ryan Mac Buz­zFeed News Reporter

    Post­ed on Octo­ber 19, 2020, at 9:10 a.m. ET
    Last updat­ed on Octo­ber 20, 2020, at 2:14 a.m. ET

    “Boogaloo” members, part of an anti-government extremist movement, during a rally at the Michigan State Capitol in Lansing, Oct. 17, 2020.

    Despite efforts by Face­book to ban right-wing mil­i­tant orga­ni­za­tions, a new report pub­lished Mon­day has found that some of those groups con­tin­ue to orga­nize and run pages on the social net­work. Face­book also con­tin­ues to prof­it from ads placed by extrem­ists despite an announce­ment ear­li­er this year that said it would ban all ads that “praise, sup­port or rep­re­sent mil­i­ta­rized social move­ments.”

    The report from the Tech Transparency Project (TTP), a nonprofit watchdog organization, discovered, for example, that the American Patriot Council, a right-wing group that advocated for the criminal prosecution of Michigan’s governor because of her implementation of stay-at-home orders during the early days of the coronavirus pandemic, ran an ad earlier this month that encouraged militants to attend Oct. 24 rallies in Michigan and New York.

    “We The Peo­ple gath­er across Amer­i­ca in a show of sol­i­dar­i­ty and demand eman­ci­pa­tion from the bondage of tyran­ny,” read the ad, which cost less than $100 and had the poten­tial to reach between 500,000 and 1 mil­lion peo­ple, accord­ing to Facebook’s own met­rics. “(Law­ful car­ry & Mili­tia strong­ly encour­aged.)”

    Face­book announced in August that it was ban­ning right-wing mil­i­tant, anar­chist, and QAnon groups from its plat­form. But TTP found 45 pages and eight groups asso­ci­at­ed with right-wing extrem­ist orga­ni­za­tions two months lat­er. Researchers at TTP also found that Face­book had accept­ed a hand­ful of ads over the last two years that were used by extrem­ists to bol­ster their ranks and sum­mon peo­ple to armed ral­lies.

    “Face­book has been direct­ly prof­it­ing from this kind of paid mes­sag­ing on its plat­form,” the report said. “The dis­turb­ing find­ings show that Face­book is rou­tine­ly behind the curve in crack­ing down on domes­tic extrem­ists on its plat­form.”

    Accord­ing to TTP, 13 of the pages and groups it found have “mili­tia” in their name, while six pages and one group were cre­at­ed after the company’s August ban of “mil­i­ta­rized social move­ments.”

    ...

    The TTP inves­ti­ga­tion comes less than two weeks after fed­er­al and state pros­e­cu­tors in Michi­gan revealed they had charged 14 peo­ple in a bizarre plot to kid­nap and pos­si­bly kill Michi­gan Gov. Gretchen Whit­mer. Those indi­vid­u­als alleged­ly con­spired using Face­book.

    In the Michi­gan plot, Face­book said it reached out to law enforce­ment six months before the 14 men were arrest­ed, and that a Face­book group tied to those indi­vid­u­als had been banned in June. Still, a day after author­i­ties announced the Whit­mer plot, Buz­zFeed News report­ed that Face­book con­tin­ued to host mul­ti­ple pages run by Michi­gan mil­i­tant orga­ni­za­tions, while the social net­work’s rec­om­men­da­tion tools con­tin­ued to direct users to fol­low pages espous­ing extrem­ist mes­sages.

    Face­book has made numer­ous efforts this year to coun­ter­act the use of its plat­form by vio­lent extrem­ists. The com­pa­ny said it had delet­ed more than 6,500 pages and groups “tied to more than 300 Mil­i­ta­rized Social Move­ments” as of Oct. 6. But it did not com­plete­ly erad­i­cate the prob­lem. Some groups, as TTP found, escaped the ban. Oth­ers sim­ply reap­peared with new pages under slight­ly altered names.

    TTP also iden­ti­fied oth­er orga­ni­za­tions that have kept a pres­ence on Face­book, includ­ing some asso­ci­at­ed with the Three Per­centers, whose mem­bers have been involved in armed con­fronta­tions with fed­er­al agents in Neva­da in 2014 and Ore­gon in 2016.

    One group, Vir­ginia Mili­tia, paid for 61 adver­tise­ments before it was removed from the social net­work, includ­ing an ad in Feb­ru­ary call­ing for a “Muster Call.”

    “Are you going to give up your rights or fight?” the ad read.

    In June, Buz­zFeed News found Face­book had been prof­it­ing from “booga­loo” ads that pro­mot­ed extrem­ist orga­ni­za­tions that want­ed to “fight the state.” Rad­i­cals linked to the “booga­loo move­ment” — a catch­phrase for anti-gov­ern­ment extrem­ists who have advo­cat­ed for anoth­er Civ­il War in the US — have been linked to vio­lence this year includ­ing the alleged killing of a fed­er­al offi­cer.

    The report also found that some users had issued direct threats against pub­lic offi­cials in pri­vate Face­book groups with­out action from the com­pa­ny. In one group titled “Pro-Police, Pro-Mil­i­tary, Pro-Trump,” a post sug­gest­ed that Min­neso­ta Rep. Ilhan Omar should be sent to Guan­tá­namo Bay, prompt­ing oth­ers to respond with “Just shoot the bitch” and “She needs a drone strike.” The post remained active as of Oct. 13.

    In anoth­er group called “Trump’s Army” with near­ly 100,000 mem­bers, one indi­vid­ual wrote of peo­ple demon­strat­ing against the police killing of Bre­on­na Tay­lor, “All Trump sup­port­ers should shoot them and kill them.”

    It’s unclear what impact Facebook’s ban will have on mil­i­ta­rized groups who have been using the social net­work for years as part of their recruit­ment and orga­niz­ing.

    The head of the nation­al Three Per­centers’ group called Facebook’s ban a “purge of con­ser­v­a­tive orga­ni­za­tions.” But when the group itself was banned, lead­ers of the orga­ni­za­tion sim­ply direct­ed its mem­bers to a pri­vate forum on its own web­site.

    ———–

    “Face­book Con­tin­ues To Host Mil­i­tant Groups And Ads Despite A Ban On Right-Wing Extrem­ism” by Sal­vador Her­nan­dez and Ryan Mac; Buz­zFeed News; 10/19/2020

    “Despite efforts by Face­book to ban right-wing mil­i­tant orga­ni­za­tions, a new report pub­lished Mon­day has found that some of those groups con­tin­ue to orga­nize and run pages on the social net­work. Face­book also con­tin­ues to prof­it from ads placed by extrem­ists despite an announce­ment ear­li­er this year that said it would ban all ads that “praise, sup­port or rep­re­sent mil­i­ta­rized social move­ments.”

    Facebook just can’t kick the habit. Selling ads for extremists is what it does. Remarkably affordable ads with a shocking reach: for less than $100 you can potentially send your pro-militia message to 500,000 to 1 million people. And not just random people, but targeted people selected by Facebook’s algorithms as the most likely to react to that ad. Radicalization made affordable (a quick back-of-the-envelope calculation follows the excerpt):

    ...
    The report from the Tech Transparency Project (TTP), a nonprofit watchdog organization, discovered, for example, that the American Patriot Council, a right-wing group that advocated for the criminal prosecution of Michigan’s governor because of her implementation of stay-at-home orders during the early days of the coronavirus pandemic, ran an ad earlier this month that encouraged militants to attend Oct. 24 rallies in Michigan and New York.

    “We The Peo­ple gath­er across Amer­i­ca in a show of sol­i­dar­i­ty and demand eman­ci­pa­tion from the bondage of tyran­ny,” read the ad, which cost less than $100 and had the poten­tial to reach between 500,000 and 1 mil­lion peo­ple, accord­ing to Facebook’s own met­rics. “(Law­ful car­ry & Mili­tia strong­ly encour­aged.)”

    ...

    One group, Vir­ginia Mili­tia, paid for 61 adver­tise­ments before it was removed from the social net­work, includ­ing an ad in Feb­ru­ary call­ing for a “Muster Call.”

    “Are you going to give up your rights or fight?” the ad read.

    In June, Buz­zFeed News found Face­book had been prof­it­ing from “booga­loo” ads that pro­mot­ed extrem­ist orga­ni­za­tions that want­ed to “fight the state.” Rad­i­cals linked to the “booga­loo move­ment” — a catch­phrase for anti-gov­ern­ment extrem­ists who have advo­cat­ed for anoth­er Civ­il War in the US — have been linked to vio­lence this year includ­ing the alleged killing of a fed­er­al offi­cer.
    ...
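    And about that reach: here is a quick back-of-the-envelope calculation using only the figures quoted above. The sub-$100 price and the 500,000-to-1-million reach estimate are the article’s numbers (from Facebook’s own metrics); the per-person and per-thousand figures below are simple arithmetic, not reported values.

    # Back-of-the-envelope from the article's figures: an ad costing less than
    # $100 with an estimated reach of 500,000 to 1,000,000 people.
    AD_COST_USD = 100  # upper bound on the reported spend
    REACH_LOW, REACH_HIGH = 500_000, 1_000_000

    for reach in (REACH_LOW, REACH_HIGH):
        per_person = AD_COST_USD / reach
        per_thousand = per_person * 1_000  # i.e., the implied cost-per-thousand ceiling
        print(f"reach {reach:>9,}: at most ${per_person:.4f} per person, ${per_thousand:.2f} per thousand")
    # reach   500,000: at most $0.0002 per person, $0.20 per thousand
    # reach 1,000,000: at most $0.0001 per person, $0.10 per thousand

    In other words, the implied ceiling is roughly a fiftieth to a hundredth of a cent per person reached. That is the economics behind “radicalization made affordable.”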

    Again, this BuzzFeed/TTP report was released back in October, two and a half months before the insurrection. Five months later, we get a new report and nothing has changed. Well, except that the insurrection happened, inspiring these groups to even greater ambitions at the same time that mainstream conservatives became even more radicalized under a wave of propaganda telling them the election was stolen and the insurrection was either justified or didn’t actually happen. So that’s new.

    Posted by Pterrafractyl | March 31, 2021, 4:40 pm
  21. Here’s a pair of articles about the seemingly ever-growing influence and power of Peter Thiel and the role he’s playing in shaping the Republican Party. The first article is a report about the interest Thiel has taken in certain Republican primary races. Specifically, Thiel is backing primary challengers to the dwindling number of Republicans who have voiced opposition to the role Donald Trump played in fomenting the January 6 Capitol insurrection. As the article notes, this comes a month after Thiel reportedly met with Trump privately for over an hour at Trump’s Bedminster golf club. So Thiel is fully on board with the GOP’s MAGA purge and is now financing it. He donated the maximum-allowed $5,800 check to Harriet Hageman, who is running to unseat Liz Cheney in Wyoming, and also donated to a challenger to Rep. Jaime Herrera Beutler, who, like Cheney, voted for Trump’s impeachment in January.

    It’s the kind of report that raises a question we probably should have been asking all along: what role did Thiel play in fomenting the Capitol insurrection? After all, not only does Thiel have the kind of political philosophy that would have no objection to seizing power via a political insurrection, it’s hard to think of anything more on brand for him. This is the anti-democracy oligarch, after all. The guy’s life is basically a slow-motion coup against society. It’s almost inconceivable that Thiel wouldn’t have fully endorsed Trump succeeding with a coup attempt. The only thing that could plausibly have given Thiel pause about backing such an act is if he thought it didn’t stand a chance of working.

    But even then, those fears of failure would really only be a barrier to Thiel openly backing the insurrection. What about secret support? Just imagine how much relevant information a company like Palantir, for example, could have developed in relation to the insurrection. Or how about Thiel’s influence at Facebook? As we’ve seen, Facebook played a crucial role in facilitating the insurrection by ensuring the ‘stolen election’ disinformation was allowed to flow freely. So we really have to ask: what role did Thiel play in those decisions by Facebook? And that brings us to a recent excerpt from a new biography of Thiel by Max Chafkin that describes the incredible influence Thiel has over Zuckerberg and, therefore, all of Facebook. As Chafkin notes, when Thiel made his initial $500,000 investment in Facebook, he did it under the condition that Facebook be reorganized to make Zuckerberg a kind of corporate dictator. And as Chafkin details, in one instance after another Zuckerberg has demonstrated a remarkable loyalty to Thiel and views him as a political ally. So for all of the justifiable heat Zuckerberg and Sheryl Sandberg have taken over Facebook’s pro-insurrection role, we really should be asking just what Thiel was doing to promote the insurrection leading up to January 6. And since Thiel is clearly still on Team Trump and actively purging the GOP of anti-insurrectionists, we also have to ask what steps Thiel is currently taking to ensure the next insurrection works:

    Politi­co

    Peter Thiel lines up against Liz Cheney

    Cheney out­raised her pri­ma­ry chal­lenger last quar­ter, but promi­nent Trump fundrais­ers are lin­ing up against the impeach­ment backer.

    By ALEX ISENSTADT
    10/13/2021 04:30 AM EDT

    Wyoming Rep. Liz Cheney’s pri­ma­ry chal­lenger land­ed for­mer Pres­i­dent Don­ald Trump’s endorse­ment before she even offi­cial­ly launched her cam­paign. Now, she’s cash­ing big checks from Trump’s biggest donors — includ­ing tech bil­lion­aire Peter Thiel.

    Thiel has con­tributed the max­i­mum-allowed, $5,800 check to Har­ri­et Hage­man, the Trump-endorsed attor­ney run­ning against Cheney in next year’s Repub­li­can pri­ma­ry. The for­mer pres­i­dent has made Cheney, an out­spo­ken crit­ic who vot­ed for his impeach­ment in Jan­u­ary, his top tar­get in the 2022 elec­tion, and now big-mon­ey bene­fac­tors like Thiel are pil­ing into the race.

    The list of major Trump donors includ­ed on Hageman’s third-quar­ter fundrais­ing report, which is set to be pub­licly released Fri­day, also includes Wyoming trans­porta­tion exec­u­tive Tim­o­thy Mel­lon, who was the sin­gle biggest giv­er to the prin­ci­pal pro-Trump super PAC, Amer­i­ca First Action, dur­ing the 2020 elec­tion. Dal­las real estate exec­u­tive James Mabrey, Apple asso­ciate gen­er­al coun­sel Dou­glas Vet­ter and Flori­da med­ical com­pa­ny exec­u­tive Peter Lame­las also gave to Hage­man. Oth­er big names include Lynette Friess, the wid­ow of Repub­li­can mega-donor and promi­nent Trump backer Fos­ter Friess.

    The high-pro­file pri­ma­ry promis­es to be an expen­sive affair. Hage­man, who entered the race in Sep­tem­ber, raised around $300,000 dur­ing the first three weeks of her cam­paign, accord­ing to a per­son famil­iar with the totals. Cheney has cap­i­tal­ized on her deep con­nec­tions in the Repub­li­can donor world to rake in mon­ey for her tough­est race yet, already bring­ing in more than $5 mil­lion this year, her cam­paign announced Tues­day.

    While Cheney’s totals and hefty lead — she raised $1.7 mil­lion in the third quar­ter alone — show that she still has pow­er­ful con­nec­tions in a fast-chang­ing Repub­li­can Par­ty, the list of promi­nent Trump donors throw­ing in with Hage­man high­lights his dom­i­nant influ­ence in the GOP. And it demon­strates Trump and his allies are mobi­liz­ing togeth­er to pun­ish the hand­ful of Repub­li­cans who vot­ed to impeach him after the Jan. 6 Capi­tol riot.

    Thiel, one of the most sought-after GOP donors, has emerged as a finan­cial force behind the effort to unseat Trump crit­ics.

    He has also con­tributed to army vet­er­an Joe Kent, a chal­lenger to Rep. Jaime Her­rera Beut­ler (R‑Wash.), who, like Cheney, vot­ed for Trump’s impeach­ment in Jan­u­ary. Thiel, a Pay­Pal co-founder and ear­ly Face­book investor, met with Trump for over an hour at his Bed­min­ster golf club last month, accord­ing to two peo­ple famil­iar with the sit-down. The meet­ing was first report­ed by the Wall Street Jour­nal.

    Cheney, the daugh­ter of for­mer Vice Pres­i­dent Dick Cheney, has her own high-pro­file finan­cial help: For­mer Pres­i­dent George W. Bush is head­lin­ing a fundrais­er for her in Texas lat­er this month. The event will also fea­ture oth­er big polit­i­cal names from the Bush admin­is­tra­tion, includ­ing polit­i­cal strate­gist Karl Rove.

    Trump spent months search­ing for a chal­lenger to take on Cheney, ulti­mate­ly lead­ing him to Hage­man after a lengthy inter­view process with oth­er con­gres­sion­al hope­fuls. She has inher­it­ed the for­mer president’s polit­i­cal appa­ra­tus: Two top offi­cials on Trump’s 2020 reelec­tion cam­paign, Nick Train­er and Tim Mur­taugh, are play­ing lead­ing roles steer­ing her cam­paign. Two oth­er Repub­li­can strate­gists involved in Trump’s orbit, Andy Sura­bi­an and James Blair, are run­ning a pro-Hage­man super PAC.

    That outfit, Wyoming Values PAC, doesn’t have to disclose its fundraising activity until January. But it is expected to become an outlet for major Hageman donors to funnel sizeable checks. Unlike Hageman’s campaign, the super PAC does not have any contribution limits. And many of her early campaign backers have shown a willingness to back the Trump cause with six- or seven-figure donations in the past.

    ...

    ————

    “Peter Thiel lines up against Liz Cheney” by ALEX ISENSTADT; Politi­co; 10/13/2021

    “Thiel, one of the most sought-after GOP donors, has emerged as a finan­cial force behind the effort to unseat Trump crit­ics.”

    Remember all those reports from July 2020 about Thiel giving up on Trump? He clearly had a change of heart. But given all the events that have transpired since then, the question we really should be asking is whether those announcements were actually part of a plan by Thiel to hide his support for Trump. Don’t forget that it was already looking like the Trump team might effectively try to cancel the 2020 election over pandemic concerns by that point in time. The writing was already on the wall that the 2020 election wasn’t going to end well. So we have to ask: did Thiel foresee the insurrection, or something as extreme as a canceled election, and consciously distance himself from Trump in anticipation of it? Because as Thiel is now making abundantly clear, he’s totally cool with Trump and everything that Trumpism is about these days. And Trumpism these days is basically the ‘stolen election’ Big Lie; that’s it. So Thiel is clearly fully on board with the ‘Stolen Election’ narrative, enough so to finance the movement to ensure that’s the core plank of the GOP going forward:

    ...
    Thiel has con­tributed the max­i­mum-allowed, $5,800 check to Har­ri­et Hage­man, the Trump-endorsed attor­ney run­ning against Cheney in next year’s Repub­li­can pri­ma­ry. The for­mer pres­i­dent has made Cheney, an out­spo­ken crit­ic who vot­ed for his impeach­ment in Jan­u­ary, his top tar­get in the 2022 elec­tion, and now big-mon­ey bene­fac­tors like Thiel are pil­ing into the race.

    The list of major Trump donors includ­ed on Hageman’s third-quar­ter fundrais­ing report, which is set to be pub­licly released Fri­day, also includes Wyoming trans­porta­tion exec­u­tive Tim­o­thy Mel­lon, who was the sin­gle biggest giv­er to the prin­ci­pal pro-Trump super PAC, Amer­i­ca First Action, dur­ing the 2020 elec­tion. Dal­las real estate exec­u­tive James Mabrey, Apple asso­ciate gen­er­al coun­sel Dou­glas Vet­ter and Flori­da med­ical com­pa­ny exec­u­tive Peter Lame­las also gave to Hage­man. Oth­er big names include Lynette Friess, the wid­ow of Repub­li­can mega-donor and promi­nent Trump backer Fos­ter Friess.

    ...

    He has also con­tributed to army vet­er­an Joe Kent, a chal­lenger to Rep. Jaime Her­rera Beut­ler (R‑Wash.), who, like Cheney, vot­ed for Trump’s impeach­ment in Jan­u­ary. Thiel, a Pay­Pal co-founder and ear­ly Face­book investor, met with Trump for over an hour at his Bed­min­ster golf club last month, accord­ing to two peo­ple famil­iar with the sit-down. The meet­ing was first report­ed by the Wall Street Jour­nal.
    ...

    So we have to ask: if Peter Thiel had indeed wanted to assist the coup attempt, what could he have done, given his incredible access to government information and his influence over Facebook? These are the kinds of questions the teams investigating the insurrection really should be asking. Because as Max Chafkin’s biography makes clear, supporting a political insurrection is about as ‘on brand’ an action as we could possibly expect from Thiel, given his lifetime of embracing an ideology of cheating to get what you want:

    New York Mag­a­zine

    Peter Thiel’s Ori­gin Sto­ry
    His ide­ol­o­gy dom­i­nates Sil­i­con Val­ley. It began to form when he was an angry young man.

    By Max Chafkin
    Sept. 20, 2021

    Some­time around the spring of 1988, sev­er­al mem­bers of the Stan­ford Uni­ver­si­ty chess team trav­eled to a tour­na­ment in Mon­terey, Cal­i­for­nia, in an old Volk­swa­gen Rab­bit. To get across the San­ta Cruz Moun­tains, they took California’s Route 17, a four-lane high­way that is regard­ed as one of the state’s most dan­ger­ous because of its tight curves, bad weath­er, and wild-ani­mal cross­ings. The chess team had no par­tic­u­lar rea­son to hur­ry, but the 20-year-old dri­ver of the Rab­bit weaved in and out of lanes, near­ly rear-end­ing cars as he slipped past them. For large por­tions of the trip, he seemed to be floor­ing the accel­er­a­tor.

    Peter Thiel was at the wheel. Thin, dys­pep­tic, and humor­less, he had seemed like an alien to his class­mates since arriv­ing at Stan­ford two and a half years ear­li­er. He didn’t drink, didn’t date, didn’t crack jokes, and he seemed to pos­sess both an insa­tiable ambi­tion and a sense, deeply held, that the world was against him. He was bril­liant and ter­ri­fy­ing. He was, recalled one class­mate, Megan Maxwell, “a strange, strange boy.”

    Thiel looked up when, pre­dictably, the lights of a police cruis­er appeared in his rearview mir­ror. He pulled the Rab­bit over, rolled down the win­dow, and lis­tened as a state troop­er asked if he knew how fast he was going. The oth­er young men in the car — relieved to have been stopped but also afraid of how this might play out — looked at each oth­er ner­vous­ly.

    Thiel addressed the sta­tie cool­ly in his usu­al unin­flect­ed bari­tone. “Well,” he said, “I’m not sure if the con­cept of a speed lim­it makes sense. It may be uncon­sti­tu­tion­al. And it’s def­i­nite­ly an infringe­ment on lib­er­ty.”

    Unbe­liev­ably, the troop­er seemed to accept this. He told Thiel to slow down and have a nice day. Even more unbe­liev­ably: As soon as he drove out of sight, Thiel hit the gas ped­al again, just as hard as before. To his aston­ished pas­sen­gers, it was as if he believed that not only did the laws of Cal­i­for­nia not apply to him — but that the laws of physics didn’t either. “I don’t remem­ber any of the games we played,” said the team­mate who was rid­ing shot­gun, a man who is now in his 50s. “But I will nev­er for­get that dri­ve.”

    Any­one who has fol­lowed Thiel’s career will find much to rec­og­nize in the Route 17 encounter. The reflex­ive con­trar­i­an­ism, the unearned con­fi­dence, the impos­si­bly favor­able out­come — they feel famil­iar, both in Thiel him­self and the com­pa­nies he helped cre­ate. Today, of course, that scrawny chess nerd is the bil­lion­aire co-founder of Pay­Pal and Palan­tir and arguably the great­est ven­ture cap­i­tal­ist of his gen­er­a­tion, with a side­line as patron of such far-right caus­es as the 2016 can­di­da­cy of Don­ald Trump. Thiel (who did not com­ment for this arti­cle, which is adapt­ed from my new biog­ra­phy, The Con­trar­i­an) is per­haps the most impor­tant influ­ence in the world’s most influ­en­tial indus­try. Oth­er Sil­i­con Val­ley per­sonas may be bet­ter known to the gen­er­al pub­lic, includ­ing Jeff Bezos, Elon Musk, and even a few who don’t reg­u­lar­ly launch rock­ets into space. But Thiel is the Valley’s true idol — the sin­gle per­son whom tech’s young aspi­rants and mil­len­ni­al moguls most seek to flat­ter and to emu­late, the cult leader of the cult of dis­rup­tion.

    The blitzs­cal­ing strat­e­gy he and his employ­ees pio­neered at Pay­Pal cre­at­ed the growth play­book for an entire gen­er­a­tion of start-ups, from Airbnb to WeWork. His most leg­endary bet — loan­ing $500,000 to a social­ly inept Har­vard sopho­more in exchange for 10 per­cent of a web­site called TheFacebook.com — is sig­nif­i­cant less for the orders-of-mag­ni­tude eco­nom­ic return he real­ized and more for the terms he embed­ded in the deal. Thiel ensured that Mark Zucker­berg would be the company’s absolute dic­ta­tor. No one, not even Facebook’s board of direc­tors, could ever over­rule him. Sim­i­lar maneu­vers were adopt­ed at many of Thiel’s port­fo­lio com­pa­nies, includ­ing Stripe and SpaceX, and today, across the indus­try, it’s more the norm than the excep­tion.

    Thiel hasn’t just act­ed in a cer­tain way and left it for oth­ers to notice and fol­low. He taught his meth­ods to founders-in-train­ing at Stan­ford, cod­i­fy­ing the lessons from the fleet of com­pa­nies found­ed by his for­mer employ­ees — the so-called Pay­Pal Mafia. He lat­er col­lect­ed his think­ing in a book, Zero to One. It became a best sell­er, part­ly because it promised a path to Thiel-scale wealth and part­ly because it devel­oped the idio­syn­crasies that had been present in the col­lege-age Thiel into a full-blown ide­ol­o­gy. The book argues, among oth­er things, that founders are god­like, that monar­chies are more effi­cient than democ­ra­cies, and that cults are a bet­ter orga­ni­za­tion­al mod­el than man­age­ment con­sul­tan­cies. More than any­thing, it cel­e­brates rule-break­ing. Thiel bragged that of PayPal’s six founders, four had built bombs in high school.

    The ideas were out there, but they were, unde­ni­ably, dif­fer­ent. For decades, Sil­i­con Val­ley had been dom­i­nat­ed by the mythol­o­gy of Steve Jobs. The acid-drop­ping hip­pie CEO had argued that tech­nol­o­gy could be a form of cre­ative expres­sion, and he con­vinced a gen­er­a­tion of entre­pre­neurs that they should invent prod­ucts that would improve the lives of their cus­tomers. This was the “‘bicy­cle for the mind’ val­ue sys­tem,” says Roger McNamee, the founder of the ven­ture-cap­i­tal firm Ele­va­tion Part­ners, and it fil­tered into many of the most suc­cess­ful com­pa­nies of the ’90s and ’00s.

    Thiel despis­es the coun­ter­cul­ture (he dates the pre­cise begin­ning of Amer­i­can decline to Wood­stock) and is con­temp­tu­ous of the notion of cre­ativ­i­ty for its own sake. For Thiel, the pur­pose of found­ing a com­pa­ny is to con­trol your own des­tiny. “A start­up is the largest endeav­or over which you can have def­i­nite mas­tery,” he wrote. A new gen­er­a­tion of entre­pre­neurs, com­ing of age in the wake of the finan­cial cri­sis, embraced his ideas. Thiel told them to flout norms and seek lucre, not impact. “Only one thing can allow a busi­ness to tran­scend the dai­ly brute strug­gle for sur­vival,” he wrote. “Monop­oly prof­its.” Per­haps his sin­gle best stu­dent has been Mark Zucker­berg, who built a monop­oly in his indus­try and used it to crush com­peti­tors and charge pro­gres­sive­ly high­er fees to adver­tis­ers — all while telling the world that this essen­tial­ly preda­to­ry behav­ior was a social good.

    With Thiel’s encour­age­ment, tech would “move fast and break things,” as the Face­book mot­to put it, and exec­u­tives believed it was bet­ter to ask for­give­ness than per­mis­sion. The indus­try that devel­oped would be defined by these clichés, con­vinc­ing itself that “dis­rup­tion” wasn’t just an unfor­tu­nate con­se­quence of inno­va­tion but an end in itself. Thielism would show up even at com­pa­nies where he was not an investor: at Juul, the e‑cigarette com­pa­ny that mar­ket­ed to chil­dren; at Robin­hood, which tempt­ed novice investors with volatile invest­ment prod­ucts; and at Uber, which paid dri­vers less than min­i­mum wage and vio­lat­ed statutes with appar­ent glee.

    McNamee, an ear­ly advis­er to Zucker­berg who turned apos­tate and pub­lished Zucked: Wak­ing Up to the Face­book Cat­a­stro­phe, sees this recent his­to­ry as an expres­sion of Thiel’s val­ues. As he put it to me, “The Pay­Pal Mafia phi­los­o­phy became the found­ing prin­ci­ple for an entire gen­er­a­tion of tech com­pa­nies.”

    Peter Thiel was not a popular boy. In the middle-class San Francisco suburb of Foster City, his high-school classmates were awed by his intelligence but found him inscrutable and haughty — qualities that made him a target for abuse. “It’s obvious in retrospect that what we were doing was bullying,” one of his regular tormentors told me. “I’ve always thought he might have a list of people he’s going to kill somewhere and that I’m on it.” Thiel got more assured as he matured physically, although he was not confident so much as disdainful, walking around with an expression that said, according to a friend, “Fuck you, world.”

    Thiel never actually swore, but once, during his freshman year at Stanford, he quoted one of his roommates doing so during an argument. The roommate responded by printing a commemorative sign and taping it to the ceiling. It had the date, January 1986, and declared, “Under this spot, Peter Thiel first said the word fuck.” It stayed there for the rest of the semester, eliciting laughs from the rest of the hall — except for Thiel, who didn’t notice and wasn’t told. In May, he was all packed up and preparing to leave the dorm for the final time when someone pointed to the sign. Wordless, Thiel moved his desk under the paper, stepped up, tore it down, and left for the summer. “God,” a college acquaintance told me. “We were such dicks to him.”

    The mock­ery wasn’t about pol­i­tics, at least not ini­tial­ly, but that was how Thiel processed it. He had been raised by Evan­gel­i­cal Ger­man immi­grants and fan­cied him­self an aspir­ing William F. Buck­ley. It wasn’t unusu­al to be con­ser­v­a­tive at Stan­ford — it housed the Hoover Insti­tu­tion — but Thiel con­sid­ered it a hot­house of lefty antag­o­nists. “He viewed lib­er­als through a lens as peo­ple who were not nice to him,” said a class­mate. “The way peo­ple treat­ed him at Stan­ford had a huge impact. That’s still with him.” Thiel began to embrace a new iden­ti­ty — that of the right-wing provo­ca­teur. He joked about start­ing a fake char­i­ty, Lib­er­als for Peace, that would raise mon­ey based on a vague agen­da and then do absolute­ly noth­ing except pay him. And he told class­mates that con­cern about South African apartheid, per­haps the sin­gle buzzi­est issue on Amer­i­can cam­pus­es, was overblown. “It works,” he told Maxwell. (Thiel’s spokesman has said that Thiel doesn’t remem­ber being asked his views on apartheid and nev­er sup­port­ed it.)

    In 1987, Thiel poured his sense of griev­ance into the launch of a right-wing news­pa­per, the Stan­ford Review. It was his first entre­pre­neur­ial ven­ture and the begin­ning of a net­work that would even­tu­al­ly expand and dom­i­nate Sil­i­con Val­ley. Thiel’s pri­ma­ry inno­va­tion with the Review was to con­nect the parochial con­cerns of a small elite — con­ser­v­a­tive Stan­ford under­grad­u­ates— to main­stream nation­al pol­i­tics. Thus the option­al $29 per year dues charged by the stu­dent sen­ate became a micro­cosm of tax-and-spend lib­er­al­ism and a plan to add non-white authors, like Zora Neale Hurston, to Stanford’s West­ern Cul­ture course became a civ­i­liza­tion-lev­el threat. A fundrais­ing let­ter lat­er sent to old­er alum­ni warned that a pro­fes­sor was teach­ing a course on Black hair­styles. It led to a flood of dona­tions. These sorts of antics helped draw the atten­tion of Ronald Reagan’s sec­re­tary of Edu­ca­tion, who came to speak at a Review event and made nation­al news recap­ping it on PBS’s MacNeil/Lehrer New­sHour.

    ...

    It start­ed on a swel­ter­ing sum­mer day in 1998. Thiel was in a class­room at Stanford’s engi­neer­ing cen­ter, attempt­ing to chat up an awk­ward but bril­liant coder. Max Levchin was 23, and he’d come to a lec­ture Thiel was deliv­er­ing on cur­ren­cies pri­mar­i­ly for the air con­di­tion­ing. The two young men got to talk­ing, and with­in a day, Thiel, who had been man­ag­ing a pool of cap­i­tal he’d raised from friends and fam­i­ly as a hedge fund, told Levchin that he want­ed to invest in his embry­on­ic start-up. It made soft­ware for Palm Pilots. At the end of that year, they began exper­i­ment­ing with a way for own­ers of the devices to trans­mit IOUs to one anoth­er. They called the ser­vice Pay­Pal, and Thiel, who took over the ven­ture before the year was out, quick­ly saw its sub­ver­sive pos­si­bil­i­ties.

    Once you got mon­ey via Pay­Pal, you could trans­fer your bal­ance to a bank. Or you could keep the funds inside Pay­Pal and use them to pay oth­er peo­ple. This, Thiel real­ized, made the ser­vice a kind of untrack­able dig­i­tal cur­ren­cy. It was the equiv­a­lent of a Swiss bank account in one’s pock­et, he believed, boast­ing to a Wired reporter that it could lead to “the ero­sion of the nation-state.” Thiel staffed the com­pa­ny with for­mer Stan­ford Review edi­tors and imposed his lib­er­tar­i­an ideas in ways large and small. Pay­Pal employ­ees were free to show up late to meet­ings as long as they paid $1 for every minute they were tardy, and Ayn Rand was some­thing like required read­ing.

    The com­pa­ny leased its first office above a sta­tionery store and a French bak­ery in down­town Palo Alto. At the time, the Val­ley was so full of com­pet­ing pay­ments com­pa­nies that there was a sec­ond one on the same floor. X.com was bet­ter fund­ed than Pay­Pal, with a famous investor — Michael Moritz of Sequoia Cap­i­tal — and a charis­mat­ic founder who’d already sold anoth­er start-up for some $300 mil­lion. His name was Elon Musk.

    Musk didn’t know that the engi­neers across the land­ing were also work­ing on dig­i­tal mon­ey trans­fers. (The sign on their door bore the name of a par­ent com­pa­ny.) X and Pay­Pal shared a trash bin in the alley behind their build­ing, and Pay­Pal engi­neers lat­er bragged to a group of X employ­ees that they found doc­u­ments that described X’s pay­ments scheme, which used the web, rather than Palm Pilots, as well as a sys­tem for gen­er­at­ing refer­rals by giv­ing cus­tomers cash. They incor­po­rat­ed the ideas into PayPal’s strat­e­gy. Some X employ­ees I spoke with took this boast lit­er­al­ly, though Musk cast doubt on the sto­ry. “It’s pos­si­ble, I sup­pose,” he told me. “But it’s a bit like say­ing, ‘You stole my idea for going to the moon.’” Even though he has long since moved on to run­ning Tes­la and SpaceX, Musk has com­pli­cat­ed feel­ings about Thiel, in part because of what hap­pened next.

    Shift­ing PayPal’s focus to the web and pay­ing new users refer­ral fees juiced the company’s growth. Some of Thiel’s coders made a lit­tle soft­ware app to track how many peo­ple had cre­at­ed new accounts, which appeared on his screen as a lit­tle box titled “World Dom­i­na­tion Index.” Every time a new user joined, the app played the sound of a bell. In Novem­ber 1999, PayPal’s cus­tomer base was a few thou­sand. By spring, the index was up to 1 mil­lion. That was a near­ly unprece­dent­ed rate of growth, but it meant that Pay­Pal had spent some­thing like $20 mil­lion on refer­ral fees out of the $28 mil­lion it had raised. The loss­es, and the sim­i­lar­i­ty of their busi­ness­es, per­suad­ed Thiel and Musk to com­bine their com­pa­nies.

    Thiel left short­ly after the merg­er. “I wouldn’t say we’re oil and water, but there are some pret­ty big dif­fer­ences,” Musk, who became CEO, told me. “Peter likes the games­manship of invest­ing — like we’re all play­ing chess. I don’t mind that, but I’m fun­da­men­tal­ly into doing engi­neer­ing and design. I’m not an inves­tor. I feel like using oth­er people’s mon­ey is not cool.” A per­son who has talked to each man about the oth­er put it more suc­cinct­ly: “Musk thinks Peter is a sociopath, and Peter thinks Musk is a fraud and a brag­gart.”

    It seemed as if Musk won the pow­er strug­gle, but Thiel had laid a trap, installing most of his deputies — includ­ing Levchin and the author of the Review’s “Rape Issue” — in the exec­u­tive ranks. Musk didn’t real­ize he was sur­round­ed by a team that was more loy­al to Thiel than to him. Lat­er that year, Musk left town for a two-week trip. While he was in the air, a group of Thiel-aligned con­spir­a­tors con­front­ed the company’s major backer, Moritz, at his office on Sand Hill Road. They demand­ed their patron be put in charge.

    After Moritz reluc­tant­ly agreed, Thiel pressed his advan­tage. At a board meet­ing, accord­ing to sev­er­al peo­ple famil­iar with what hap­pened, he sug­gest­ed that Pay­Pal turn over all its cash to Thiel Cap­i­tal, the hedge fund he was still run­ning on the side, so that he could take advan­tage of the eco­nom­ic upheaval of the post-dot-com bub­ble. Moritz assumed Thiel was jok­ing, but Thiel calm­ly explained that he had a plan to bet on inter­est rates falling. Thiel’s idea was shot down, but Moritz was furi­ous. Risk­ing a start-up’s lim­it­ed cash on spec­u­la­tion — par­tic­u­lar­ly spec­u­la­tion that had the poten­tial to per­son­al­ly enrich the CEO — was some­thing no ven­ture cap­i­tal­ist, nor any self-respect­ing tech entre­pre­neur, would even con­sid­er sug­gest­ing. The fact that Thiel would pro­pose it not long after snag­ging the CEO job in a man­ner that was not exact­ly hon­or­able was dou­bly galling. It sug­gest­ed to Moritz and oth­ers on the board a lack of a moral com­pass.

    Thiel and Moritz con­tin­ued to clash. It may have been part­ly per­son­al — Moritz had orig­i­nal­ly invest­ed in Musk’s com­pa­ny, not Thiel’s. But it also reflect­ed the ways in which Thiel was dif­fer­ent from Moritz, Musk, and pret­ty much every impor­tant fig­ure in Sil­i­con Val­ley who’d come before him. “At heart,” Moritz told me, “Peter is a hedge-fund man” — not an entre­pre­neur. Founders were expect­ed to pour all of them­selves into their com­pa­nies in order to grow as big as pos­si­ble and, at least if you buy into the mythol­o­gy of Sil­i­con Val­ley, to change the world for the bet­ter. By this log­ic, Thiel should’ve been bleed­ing for Pay­Pal, not schem­ing to grow his invest­ment port­fo­lio. But Thiel didn’t care about Sil­i­con Valley’s sense of pro­pri­ety. And it opened up a uni­verse of strate­gies that his pre­de­ces­sors had nev­er been brazen enough to try.

    Under Thiel, PayPal’s will­ing­ness to dis­re­gard bank­ing rules became a key strate­gic advan­tage. Finan­cial insti­tu­tions are required to ver­i­fy that cus­tomers are who they say they are by check­ing iden­ti­fi­ca­tions, but Pay­Pal, which con­tend­ed it wasn’t tech­ni­cal­ly a bank, made lit­tle effort to do so. (When a reporter not­ed to Thiel that many of his com­peti­tors com­plied with the reg­u­la­tions, he called them “insane.”) It also did lit­tle to stop peo­ple from using the mon­ey they put into their accounts for illic­it pur­pos­es. The refund mech­a­nism that Pay­Pal used to return cus­tomers’ cash was tech­ni­cal­ly banned by the cred­it-card com­pa­nies. When those com­pa­nies com­plained, Pay­Pal sim­ply offered an apol­o­gy and nego­ti­at­ed. Today, the use of unsus­tain­able or eth­i­cal­ly dubi­ous tricks to make a start-up insur­mount­ably big­ger than its rivals is known as “growth hack­ing.” It’s wide­ly cred­it­ed to Thiel and his exec­u­tives and cel­e­brat­ed by entre­pre­neurs across the indus­try. They’re all chas­ing what Thiel got. The year after he became CEO of Pay­Pal, eBay acquired the com­pa­ny for $1.5 bil­lion.

    Twen­ty years lat­er, Thielism is the dom­i­nant ethos in Sil­i­con Val­ley. That’s part­ly because Thiel has been effec­tive at seed­ing the indus­try with pro­tégés — none of them more promi­nent than Mark Zucker­berg. Hav­ing pur­sued a grow-at-all-costs, con­se­quences-be-damned expan­sion strat­e­gy, the Face­book CEO is now attempt­ing to cre­ate his own coin. Diem (née Libra) is a cryp­tocur­ren­cy that, if all goes accord­ing to plan, will func­tion as a sort of replace­ment for the U.S. dol­lar inside Face­book (rough­ly 3 bil­lion users), What­sApp (2 bil­lion), Insta­gram (1 bil­lion), and its oth­er apps, as well as those of any oth­er com­pa­nies that adopt it. The effort has gen­er­at­ed con­sid­er­able con­cern from the U.S. Fed­er­al Reserve, which is under­stand­ably com­mit­ted to the dol­lar, and from crit­ics across the polit­i­cal spec­trum, who express shock at the audac­i­ty of a com­pa­ny with Facebook’s track record for pri­va­cy vio­la­tions attempt­ing to put itself at the cen­ter of glob­al com­merce.

    Long­time Thiel asso­ciates, though, weren’t sur­prised at all. “No one seems to have con­nect­ed the dots to Peter’s orig­i­nal grand vision for Pay­Pal,” a source who’d worked with Thiel for years wrote to me in June 2019, short­ly after the cur­ren­cy was announced. This per­son, and oth­ers I spoke with, saw Thiel’s ide­ol­o­gy — and his appar­ent belief that cor­po­rate pow­er should sub­sume the author­i­ty of gov­ern­ments — in Facebook’s actions. “There’s a direct influ­ence,” this source said.

    Thiel doesn’t con­trol Zucker­berg, and their rela­tion­ship is com­pli­cat­ed, to say the least. Thiel unloaded most of his stock as soon as the com­pa­ny went pub­lic. (The shares were falling and morale was low. To ral­ly the staff, Zucker­berg invit­ed Thiel to an all-hands meet­ing at head­quar­ters, but he just end­ed up insult­ing them. “My gen­er­a­tion was promised colonies on the moon,” Thiel told them. “Instead we got Face­book.”) But in the years since, Thiel has remained a trust­ed con­fi­dant to Zucker­berg — despite per­son­al­ly cul­ti­vat­ing Face­book antag­o­nists, includ­ing James O’Keefe, the right-wing provo­ca­teur who pro­duced under­cov­er videos attempt­ing to expose Facebook’s sup­posed bias against con­ser­v­a­tives, and Charles John­son, who helped start the face-recog­ni­tion com­pa­ny Clearview AI.

    Clearview assem­bled its gar­gan­tu­an data­base of faces by scrap­ing pho­tos from Face­book pro­files, which Face­book con­sid­ers a vio­la­tion of its terms of ser­vice. John­son told me that when he raised mon­ey from Thiel, he pre­sent­ed Clearview as both a promis­ing busi­ness and as a back­door way to “destroy” Face­book by expos­ing its lax pri­va­cy stan­dards. Thiel, who as a Face­book board mem­ber has a duty to act in the company’s best inter­ests, invest­ed in Clearview any­way. John­son also says that Thiel used him as a con­duit to leak emails between Thiel and Reed Hast­ings, anoth­er Face­book board mem­ber, who’d crit­i­cized Thiel for back­ing Don­ald Trump.

    At any nor­mal com­pa­ny, such dis­loy­al­ty might be grounds for dis­missal. But it was Hast­ings, not Thiel, who resigned from the board, and Zucker­berg nev­er pun­ished his men­tor. Accord­ing to two for­mer Face­book staffers, this was part­ly because he appre­ci­at­ed Thiel’s unvar­nished advice and part­ly because Zucker­berg saw Thiel as a polit­i­cal ally. Zucker­berg had been crit­i­cized by con­ser­v­a­tive media before the 2016 elec­tion and, with Thiel’s encour­age­ment, had sought to cater to them.

    In 2019, while on a trip to Wash­ing­ton to answer ques­tions from Con­gress about his dig­i­tal cur­ren­cy, Thiel joined Zucker­berg, Jared Kush­n­er, Trump, and their spous­es at the White House. The specifics of the dis­cus­sion were secret — but, as I report in my book, Thiel lat­er told a con­fi­dant that Zucker­berg came to an under­stand­ing with Kush­n­er dur­ing the meal. Face­book, he promised, would con­tin­ue to avoid fact-check­ing polit­i­cal speech — thus allow­ing the Trump cam­paign to claim what­ev­er it want­ed. If the com­pa­ny fol­lowed through on that promise, the Trump admin­is­tra­tion would lay off on any heavy-hand­ed reg­u­la­tions.

    After the din­ner, Zucker­berg took a hands-off approach to con­ser­v­a­tive sites. In late Octo­ber, after he detailed the pol­i­cy in a speech at George­town, Face­book launched a news app that show­cased what the com­pa­ny called “deeply report­ed and well-sourced” out­lets. Among the list of rec­om­mend­ed pub­li­ca­tions was Bre­it­bart, Steve Bannon’s site, even though it had pro­mot­ed itself as allied with the alt-right and had once includ­ed a sec­tion ded­i­cat­ed to “Black crime.” Face­book also seemed to go out of its way to help the Dai­ly Wire, a younger, hip­per ver­sion of Bre­it­bart that would become one of the biggest pub­lish­ers on the plat­form. Face­book had long seen itself as a gov­ern­ment unto itself; now, thanks to the under­stand­ing bro­kered by Thiel, the site would push what the Thiel con­fi­dant called “state-sanc­tioned con­ser­vatism.”

    Zucker­berg denied that there had been any deal with Trump, call­ing the notion “pret­ty ridicu­lous,” though Facebook’s actions in the run-up to the elec­tion would make the denial seem not entire­ly cred­i­ble. Dur­ing Black Lives Mat­ter protests, Twit­ter hid a post by the pres­i­dent that seemed to con­done vio­lence: “When the loot­ing starts, the shoot­ing starts”; Face­book allowed it. In the days lead­ing up to the Jan­u­ary 6 insur­rec­tion at the U.S. Capi­tol, Face­book most­ly ignored calls to lim­it the spread of “Stop the Steal” groups, which claimed that Trump had actu­al­ly won the elec­tion.

    In the months since, jour­nal­ists, pol­i­cy-mak­ers, and even some Face­book employ­ees have strug­gled to explain why the com­pa­ny remains indif­fer­ent to the objec­tions of reg­u­la­tors and law­mak­ers as well as those raised by com­mon sense. Why is Face­book — and so much of what comes out of what once seemed like the crown jew­el of Amer­i­can cap­i­tal­ism — such an obvi­ous­ly malev­o­lent force?

    The answers to these ques­tions are part­ly struc­tur­al, of course, involv­ing reg­u­la­to­ry fail­ures that allowed Zucker­berg to dom­i­nate social-media adver­tis­ing. But they are also ide­o­log­i­cal. Both fig­u­ra­tive­ly and lit­er­al­ly, Thiel wrote the book on monop­oly cap­i­tal­ism, and he recruit­ed an army of fol­low­ers, includ­ing Zucker­berg. This is to say that the Face­book founder, like almost every suc­cess­ful techie of his gen­er­a­tion, isn’t a lib­er­al or a con­ser­v­a­tive. He is a Thielist. The rules do not apply.

    ————

    “Peter Thiel’s Ori­gin Sto­ry” by Max Chafkin; New York Mag­a­zine; 09/20/2021

    ” Any­one who has fol­lowed Thiel’s career will find much to rec­og­nize in the Route 17 encounter. The reflex­ive con­trar­i­an­ism, the unearned con­fi­dence, the impos­si­bly favor­able out­come — they feel famil­iar, both in Thiel him­self and the com­pa­nies he helped cre­ate. Today, of course, that scrawny chess nerd is the bil­lion­aire co-founder of Pay­Pal and Palan­tir and arguably the great­est ven­ture cap­i­tal­ist of his gen­er­a­tion, with a side­line as patron of such far-right caus­es as the 2016 can­di­da­cy of Don­ald Trump. Thiel (who did not com­ment for this arti­cle, which is adapt­ed from my new biog­ra­phy, The Con­trar­i­an) is per­haps the most impor­tant influ­ence in the world’s most influ­en­tial indus­try. Oth­er Sil­i­con Val­ley per­sonas may be bet­ter known to the gen­er­al pub­lic, includ­ing Jeff Bezos, Elon Musk, and even a few who don’t reg­u­lar­ly launch rock­ets into space. But Thiel is the Valley’s true idol — the sin­gle per­son whom tech’s young aspi­rants and mil­len­ni­al moguls most seek to flat­ter and to emu­late, the cult leader of the cult of dis­rup­tion.”

    Peter Thiel’s ethos is the heart and soul driving contemporary Silicon Valley. Take a moment to digest that. The guy who wrote a book articulating his Ayn Rand-ian philosophy, one that views company founders as godlike and monarchies as more efficient than democracies, is arguably the most influential person in Silicon Valley. A whole generation of tech entrepreneurs has adopted his philosophy:

    ...
    Thiel hasn’t just act­ed in a cer­tain way and left it for oth­ers to notice and fol­low. He taught his meth­ods to founders-in-train­ing at Stan­ford, cod­i­fy­ing the lessons from the fleet of com­pa­nies found­ed by his for­mer employ­ees — the so-called Pay­Pal Mafia. He lat­er col­lect­ed his think­ing in a book, Zero to One. It became a best sell­er, part­ly because it promised a path to Thiel-scale wealth and part­ly because it devel­oped the idio­syn­crasies that had been present in the col­lege-age Thiel into a full-blown ide­ol­o­gy. The book argues, among oth­er things, that founders are god­like, that monar­chies are more effi­cient than democ­ra­cies, and that cults are a bet­ter orga­ni­za­tion­al mod­el than man­age­ment con­sul­tan­cies. More than any­thing, it cel­e­brates rule-break­ing. Thiel bragged that of PayPal’s six founders, four had built bombs in high school.

    ...

    Thiel despis­es the coun­ter­cul­ture (he dates the pre­cise begin­ning of Amer­i­can decline to Wood­stock) and is con­temp­tu­ous of the notion of cre­ativ­i­ty for its own sake. For Thiel, the pur­pose of found­ing a com­pa­ny is to con­trol your own des­tiny. “A start­up is the largest endeav­or over which you can have def­i­nite mas­tery,” he wrote. A new gen­er­a­tion of entre­pre­neurs, com­ing of age in the wake of the finan­cial cri­sis, embraced his ideas. Thiel told them to flout norms and seek lucre, not impact. “Only one thing can allow a busi­ness to tran­scend the dai­ly brute strug­gle for sur­vival,” he wrote. “Monop­oly prof­its.” Per­haps his sin­gle best stu­dent has been Mark Zucker­berg, who built a monop­oly in his indus­try and used it to crush com­peti­tors and charge pro­gres­sive­ly high­er fees to adver­tis­ers — all while telling the world that this essen­tial­ly preda­to­ry behav­ior was a social good.

    With Thiel’s encour­age­ment, tech would “move fast and break things,” as the Face­book mot­to put it, and exec­u­tives believed it was bet­ter to ask for­give­ness than per­mis­sion. The indus­try that devel­oped would be defined by these clichés, con­vinc­ing itself that “dis­rup­tion” wasn’t just an unfor­tu­nate con­se­quence of inno­va­tion but an end in itself. Thielism would show up even at com­pa­nies where he was not an investor: at Juul, the e‑cigarette com­pa­ny that mar­ket­ed to chil­dren; at Robin­hood, which tempt­ed novice investors with volatile invest­ment prod­ucts; and at Uber, which paid dri­vers less than min­i­mum wage and vio­lat­ed statutes with appar­ent glee.

    McNamee, an ear­ly advis­er to Zucker­berg who turned apos­tate and pub­lished Zucked: Wak­ing Up to the Face­book Cat­a­stro­phe, sees this recent his­to­ry as an expres­sion of Thiel’s val­ues. As he put it to me, “The Pay­Pal Mafia phi­los­o­phy became the found­ing prin­ci­ple for an entire gen­er­a­tion of tech com­pa­nies.”
    ...

    And note how Thiel was apparently a defender of South Africa’s apartheid system, asserting “it works,” back when he was a student at Stanford. Recall that Thiel spent time living in South Africa growing up. This wasn’t just a casual embrace of apartheid. Also note that the former student who recounted Thiel sharing these views is an African American woman. Thiel was willing to defend apartheid to a Black student:

    ...
    The mock­ery wasn’t about pol­i­tics, at least not ini­tial­ly, but that was how Thiel processed it. He had been raised by Evan­gel­i­cal Ger­man immi­grants and fan­cied him­self an aspir­ing William F. Buck­ley. It wasn’t unusu­al to be con­ser­v­a­tive at Stan­ford — it housed the Hoover Insti­tu­tion — but Thiel con­sid­ered it a hot­house of lefty antag­o­nists. “He viewed lib­er­als through a lens as peo­ple who were not nice to him,” said a class­mate. “The way peo­ple treat­ed him at Stan­ford had a huge impact. That’s still with him.” Thiel began to embrace a new iden­ti­ty — that of the right-wing provo­ca­teur. He joked about start­ing a fake char­i­ty, Lib­er­als for Peace, that would raise mon­ey based on a vague agen­da and then do absolute­ly noth­ing except pay him. And he told class­mates that con­cern about South African apartheid, per­haps the sin­gle buzzi­est issue on Amer­i­can cam­pus­es, was overblown. “It works,” he told Maxwell. (Thiel’s spokesman has said that Thiel doesn’t remem­ber being asked his views on apartheid and nev­er sup­port­ed it.)
    ...

    Then we get to the story of Thiel's role in the founding of PayPal. A story that appears to involve Thiel's team stealing the underlying idea of making PayPal an internet-based currency from Elon Musk's rival X.com, which was working on digital currencies at the same time. Keep in mind that, of all of Thiel's various tech-related ventures, it was really only PayPal where one could make a case that Thiel himself provided some sort of technical innovation, as opposed to just being the guy providing the financing for the venture. And even in this case, Thiel's original vision was far less revolutionary — having PayPal transactions take place only via Palm Pilots directly communicating with each other — and he ended up stealing the real innovation from Musk's company. It underscores how Thiel's primary genuine innovation is limited to the moral 'innovations' he kept coming up with to get ahead. In other words, he innovated selfish rule-breaking and conniving. That's his grand contribution to humanity. Way to go:

    ...
    It start­ed on a swel­ter­ing sum­mer day in 1998. Thiel was in a class­room at Stanford’s engi­neer­ing cen­ter, attempt­ing to chat up an awk­ward but bril­liant coder. Max Levchin was 23, and he’d come to a lec­ture Thiel was deliv­er­ing on cur­ren­cies pri­mar­i­ly for the air con­di­tion­ing. The two young men got to talk­ing, and with­in a day, Thiel, who had been man­ag­ing a pool of cap­i­tal he’d raised from friends and fam­i­ly as a hedge fund, told Levchin that he want­ed to invest in his embry­on­ic start-up. It made soft­ware for Palm Pilots. At the end of that year, they began exper­i­ment­ing with a way for own­ers of the devices to trans­mit IOUs to one anoth­er. They called the ser­vice Pay­Pal, and Thiel, who took over the ven­ture before the year was out, quick­ly saw its sub­ver­sive pos­si­bil­i­ties.

    Once you got mon­ey via Pay­Pal, you could trans­fer your bal­ance to a bank. Or you could keep the funds inside Pay­Pal and use them to pay oth­er peo­ple. This, Thiel real­ized, made the ser­vice a kind of untrack­able dig­i­tal cur­ren­cy. It was the equiv­a­lent of a Swiss bank account in one’s pock­et, he believed, boast­ing to a Wired reporter that it could lead to “the ero­sion of the nation-state.” Thiel staffed the com­pa­ny with for­mer Stan­ford Review edi­tors and imposed his lib­er­tar­i­an ideas in ways large and small. Pay­Pal employ­ees were free to show up late to meet­ings as long as they paid $1 for every minute they were tardy, and Ayn Rand was some­thing like required read­ing.

    The com­pa­ny leased its first office above a sta­tionery store and a French bak­ery in down­town Palo Alto. At the time, the Val­ley was so full of com­pet­ing pay­ments com­pa­nies that there was a sec­ond one on the same floor. X.com was bet­ter fund­ed than Pay­Pal, with a famous investor — Michael Moritz of Sequoia Cap­i­tal — and a charis­mat­ic founder who’d already sold anoth­er start-up for some $300 mil­lion. His name was Elon Musk.

    Musk didn’t know that the engi­neers across the land­ing were also work­ing on dig­i­tal mon­ey trans­fers. (The sign on their door bore the name of a par­ent com­pa­ny.) X and Pay­Pal shared a trash bin in the alley behind their build­ing, and Pay­Pal engi­neers lat­er bragged to a group of X employ­ees that they found doc­u­ments that described X’s pay­ments scheme, which used the web, rather than Palm Pilots, as well as a sys­tem for gen­er­at­ing refer­rals by giv­ing cus­tomers cash. They incor­po­rat­ed the ideas into PayPal’s strat­e­gy. Some X employ­ees I spoke with took this boast lit­er­al­ly, though Musk cast doubt on the sto­ry. “It’s pos­si­ble, I sup­pose,” he told me. “But it’s a bit like say­ing, ‘You stole my idea for going to the moon.’” Even though he has long since moved on to run­ning Tes­la and SpaceX, Musk has com­pli­cat­ed feel­ings about Thiel, in part because of what hap­pened next.
    ...

    Then, after PayPal and X.com merge — eliminating the competition — Thiel leaves the merged company, but is later installed back into power after his sycophants stage a corporate coup and replace Musk with him. Thiel then proceeds to attempt to funnel PayPal's funds into his private hedge fund, Thiel Capital. Another moral 'innovation':

    ...
    Thiel left short­ly after the merg­er. “I wouldn’t say we’re oil and water, but there are some pret­ty big dif­fer­ences,” Musk, who became CEO, told me. “Peter likes the games­manship of invest­ing — like we’re all play­ing chess. I don’t mind that, but I’m fun­da­men­tal­ly into doing engi­neer­ing and design. I’m not an inves­tor. I feel like using oth­er people’s mon­ey is not cool.” A per­son who has talked to each man about the oth­er put it more suc­cinct­ly: “Musk thinks Peter is a sociopath, and Peter thinks Musk is a fraud and a brag­gart.”

    It seemed as if Musk won the pow­er strug­gle, but Thiel had laid a trap, installing most of his deputies — includ­ing Levchin and the author of the Review’s “Rape Issue” — in the exec­u­tive ranks. Musk didn’t real­ize he was sur­round­ed by a team that was more loy­al to Thiel than to him. Lat­er that year, Musk left town for a two-week trip. While he was in the air, a group of Thiel-aligned con­spir­a­tors con­front­ed the company’s major backer, Moritz, at his office on Sand Hill Road. They demand­ed their patron be put in charge.

    After Moritz reluc­tant­ly agreed, Thiel pressed his advan­tage. At a board meet­ing, accord­ing to sev­er­al peo­ple famil­iar with what hap­pened, he sug­gest­ed that Pay­Pal turn over all its cash to Thiel Cap­i­tal, the hedge fund he was still run­ning on the side, so that he could take advan­tage of the eco­nom­ic upheaval of the post-dot-com bub­ble. Moritz assumed Thiel was jok­ing, but Thiel calm­ly explained that he had a plan to bet on inter­est rates falling. Thiel’s idea was shot down, but Moritz was furi­ous. Risk­ing a start-up’s lim­it­ed cash on spec­u­la­tion — par­tic­u­lar­ly spec­u­la­tion that had the poten­tial to per­son­al­ly enrich the CEO — was some­thing no ven­ture cap­i­tal­ist, nor any self-respect­ing tech entre­pre­neur, would even con­sid­er sug­gest­ing. The fact that Thiel would pro­pose it not long after snag­ging the CEO job in a man­ner that was not exact­ly hon­or­able was dou­bly galling. It sug­gest­ed to Moritz and oth­ers on the board a lack of a moral com­pass.
    ...

    There’s even a term for Thiel’s form of grow­ing a com­pa­ny by break­ing any eth­i­cal code that gets in your way: “growth hack­ing”. It’s wide­ly cred­it­ed to Thiel and cel­e­brat­ed across the indus­try. From this per­spec­tive, the Capi­tol insur­rec­tion was real­ly just a form of polit­i­cal growth hack­ing:

    ...
    Under Thiel, PayPal’s will­ing­ness to dis­re­gard bank­ing rules became a key strate­gic advan­tage. Finan­cial insti­tu­tions are required to ver­i­fy that cus­tomers are who they say they are by check­ing iden­ti­fi­ca­tions, but Pay­Pal, which con­tend­ed it wasn’t tech­ni­cal­ly a bank, made lit­tle effort to do so. (When a reporter not­ed to Thiel that many of his com­peti­tors com­plied with the reg­u­la­tions, he called them “insane.”) It also did lit­tle to stop peo­ple from using the mon­ey they put into their accounts for illic­it pur­pos­es. The refund mech­a­nism that Pay­Pal used to return cus­tomers’ cash was tech­ni­cal­ly banned by the cred­it-card com­pa­nies. When those com­pa­nies com­plained, Pay­Pal sim­ply offered an apol­o­gy and nego­ti­at­ed. Today, the use of unsus­tain­able or eth­i­cal­ly dubi­ous tricks to make a start-up insur­mount­ably big­ger than its rivals is known as “growth hack­ing.” It’s wide­ly cred­it­ed to Thiel and his exec­u­tives and cel­e­brat­ed by entre­pre­neurs across the indus­try. They’re all chas­ing what Thiel got. The year after he became CEO of Pay­Pal, eBay acquired the com­pa­ny for $1.5 bil­lion.

    Twen­ty years lat­er, Thielism is the dom­i­nant ethos in Sil­i­con Val­ley. That’s part­ly because Thiel has been effec­tive at seed­ing the indus­try with pro­tégés — none of them more promi­nent than Mark Zucker­berg. Hav­ing pur­sued a grow-at-all-costs, con­se­quences-be-damned expan­sion strat­e­gy, the Face­book CEO is now attempt­ing to cre­ate his own coin. Diem (née Libra) is a cryp­tocur­ren­cy that, if all goes accord­ing to plan, will func­tion as a sort of replace­ment for the U.S. dol­lar inside Face­book (rough­ly 3 bil­lion users), What­sApp (2 bil­lion), Insta­gram (1 bil­lion), and its oth­er apps, as well as those of any oth­er com­pa­nies that adopt it. The effort has gen­er­at­ed con­sid­er­able con­cern from the U.S. Fed­er­al Reserve, which is under­stand­ably com­mit­ted to the dol­lar, and from crit­ics across the polit­i­cal spec­trum, who express shock at the audac­i­ty of a com­pa­ny with Facebook’s track record for pri­va­cy vio­la­tions attempt­ing to put itself at the cen­ter of glob­al com­merce.

    Long­time Thiel asso­ciates, though, weren’t sur­prised at all. “No one seems to have con­nect­ed the dots to Peter’s orig­i­nal grand vision for Pay­Pal,” a source who’d worked with Thiel for years wrote to me in June 2019, short­ly after the cur­ren­cy was announced. This per­son, and oth­ers I spoke with, saw Thiel’s ide­ol­o­gy — and his appar­ent belief that cor­po­rate pow­er should sub­sume the author­i­ty of gov­ern­ments — in Facebook’s actions. “There’s a direct influ­ence,” this source said.
    ...

    And then there's the fascinating slew of questions about just how much influence Thiel holds over Mark Zuckerberg. The fact that Thiel arranged for Zuckerberg to effectively be an absolute corporate dictator as a condition of Thiel's initial investment in Facebook is a hint. But it's the behavior of Zuckerberg in the years since that's the biggest hint. There is simply no denying that Zuckerberg acts as if he takes his orders from Thiel. And that's why we have to ask the question: was Thiel directing Zuckerberg to ensure Facebook remained a pro-insurrection platform throughout the post-election period so the Trump team could reliably use it to push the 'stolen election' narrative? All available circumstantial evidence is pointing in that direction:

    ...
    The blitzscaling strategy he and his employees pioneered at PayPal created the growth playbook for an entire generation of start-ups, from Airbnb to WeWork. His most legendary bet — loaning $500,000 to a socially inept Harvard sophomore in exchange for 10 percent of a website called TheFacebook.com — is significantly less for the orders-of-magnitude economic return he realized and more for the terms he embedded in the deal. Thiel ensured that Mark Zuckerberg would be the company's absolute dictator. No one, not even Facebook's board of directors, could ever overrule him. Similar maneuvers were adopted at many of Thiel's portfolio companies, including Stripe and SpaceX, and today, across the industry, it's more the norm than the exception.

    ...

    Thiel doesn’t con­trol Zucker­berg, and their rela­tion­ship is com­pli­cat­ed, to say the least. Thiel unloaded most of his stock as soon as the com­pa­ny went pub­lic. (The shares were falling and morale was low. To ral­ly the staff, Zucker­berg invit­ed Thiel to an all-hands meet­ing at head­quar­ters, but he just end­ed up insult­ing them. “My gen­er­a­tion was promised colonies on the moon,” Thiel told them. “Instead we got Face­book.”) But in the years since, Thiel has remained a trust­ed con­fi­dant to Zucker­berg — despite per­son­al­ly cul­ti­vat­ing Face­book antag­o­nists, includ­ing James O’Keefe, the right-wing provo­ca­teur who pro­duced under­cov­er videos attempt­ing to expose Facebook’s sup­posed bias against con­ser­v­a­tives, and Charles John­son, who helped start the face-recog­ni­tion com­pa­ny Clearview AI.

    Clearview assem­bled its gar­gan­tu­an data­base of faces by scrap­ing pho­tos from Face­book pro­files, which Face­book con­sid­ers a vio­la­tion of its terms of ser­vice. John­son told me that when he raised mon­ey from Thiel, he pre­sent­ed Clearview as both a promis­ing busi­ness and as a back­door way to “destroy” Face­book by expos­ing its lax pri­va­cy stan­dards. Thiel, who as a Face­book board mem­ber has a duty to act in the company’s best inter­ests, invest­ed in Clearview any­way. John­son also says that Thiel used him as a con­duit to leak emails between Thiel and Reed Hast­ings, anoth­er Face­book board mem­ber, who’d crit­i­cized Thiel for back­ing Don­ald Trump.

    At any nor­mal com­pa­ny, such dis­loy­al­ty might be grounds for dis­missal. But it was Hast­ings, not Thiel, who resigned from the board, and Zucker­berg nev­er pun­ished his men­tor. Accord­ing to two for­mer Face­book staffers, this was part­ly because he appre­ci­at­ed Thiel’s unvar­nished advice and part­ly because Zucker­berg saw Thiel as a polit­i­cal ally. Zucker­berg had been crit­i­cized by con­ser­v­a­tive media before the 2016 elec­tion and, with Thiel’s encour­age­ment, had sought to cater to them.

    In 2019, while on a trip to Wash­ing­ton to answer ques­tions from Con­gress about his dig­i­tal cur­ren­cy, Thiel joined Zucker­berg, Jared Kush­n­er, Trump, and their spous­es at the White House. The specifics of the dis­cus­sion were secret — but, as I report in my book, Thiel lat­er told a con­fi­dant that Zucker­berg came to an under­stand­ing with Kush­n­er dur­ing the meal. Face­book, he promised, would con­tin­ue to avoid fact-check­ing polit­i­cal speech — thus allow­ing the Trump cam­paign to claim what­ev­er it want­ed. If the com­pa­ny fol­lowed through on that promise, the Trump admin­is­tra­tion would lay off on any heavy-hand­ed reg­u­la­tions.

    After the din­ner, Zucker­berg took a hands-off approach to con­ser­v­a­tive sites. In late Octo­ber, after he detailed the pol­i­cy in a speech at George­town, Face­book launched a news app that show­cased what the com­pa­ny called “deeply report­ed and well-sourced” out­lets. Among the list of rec­om­mend­ed pub­li­ca­tions was Bre­it­bart, Steve Bannon’s site, even though it had pro­mot­ed itself as allied with the alt-right and had once includ­ed a sec­tion ded­i­cat­ed to “Black crime.” Face­book also seemed to go out of its way to help the Dai­ly Wire, a younger, hip­per ver­sion of Bre­it­bart that would become one of the biggest pub­lish­ers on the plat­form. Face­book had long seen itself as a gov­ern­ment unto itself; now, thanks to the under­stand­ing bro­kered by Thiel, the site would push what the Thiel con­fi­dant called “state-sanc­tioned con­ser­vatism.”

    Zucker­berg denied that there had been any deal with Trump, call­ing the notion “pret­ty ridicu­lous,” though Facebook’s actions in the run-up to the elec­tion would make the denial seem not entire­ly cred­i­ble. Dur­ing Black Lives Mat­ter protests, Twit­ter hid a post by the pres­i­dent that seemed to con­done vio­lence: “When the loot­ing starts, the shoot­ing starts”; Face­book allowed it. In the days lead­ing up to the Jan­u­ary 6 insur­rec­tion at the U.S. Capi­tol, Face­book most­ly ignored calls to lim­it the spread of “Stop the Steal” groups, which claimed that Trump had actu­al­ly won the elec­tion.
    ...

    Should we expect investigators to ever seriously look into Thiel's possible role in the insurrection? Of course not. The guy is effectively untouchable and arguably the most powerful person in Washington DC. He owns a company that's effectively the privatized NSA, after all. He's probably blackmailed half of Congress by now. And it's Thiel's apparent untouchability that makes the question of his insurrectionist role all the more crucial. Donald Trump is going to die one of these years. Steve Bannon could end up in jail over his refusal to cooperate with investigators. But Thiel is untouchable, as he has seemingly been his entire life. And at this point it's hard to see who is better positioned to control the future of the GOP than Peter Thiel. The guy positioned to be the post-Trump shadow-leader of the GOP for the next generation appears to have figured out how to back an insurrection while staying far enough back in the shadows to avoid facing any repercussions. What happens when arguably the most powerful person in the country can foment insurrections without it even being noticed? The US has apparently decided to find out.

    Posted by Pterrafractyl | October 14, 2021, 4:27 pm
  22. Of all of the warped, self-projecting grievances that define the contemporary right wing, perhaps the most obviously absurd is the grievance about social media suppression of right-wing voices. Beyond the fact that these grievances are typically expressed by people on the social media platforms themselves, there's the simple reality that social media platforms keep getting caught in scandals centered around policies of actively turning a blind eye towards those right-wing abuses. Facebook was literally the primary January 6 Capitol insurrection recruitment tool, after all. So with that ongoing farcical narrative in mind, here's a story about Silicon Valley actually engaging in exactly the kind of behavior described in that narrative. Well, almost exactly the same: Just days before Nicaragua's upcoming November 7 elections, Facebook and Instagram purged thousands of accounts. All of them left-wing accounts, including the accounts of elected government officials. A nationwide purge of the Left is taking place in Nicaragua, compliments of the very same Silicon Valley giants routinely accused of silencing conservative voices. Because of course that's what's happening.

    The purge included media outlets and some of the most widely followed personalities in the country. What was the basis for all this? Allegations that these accounts were fake and being operated by government troll farms. The only problem was that this was demonstrably not true, and real people have a way of validating their existence. But when all of these very real people flooded onto Twitter to post videos of their very real selves, Twitter proceeded to delete those accounts too. That's the nature of this op. It's a profoundly bad-faith op being executed in real time and ignored by virtually the entire world. In other words, it's succeeding. Silicon Valley silenced Nicaragua's left and barely anyone noticed. Mission accomplished.

    Oh, and it just happens to be the case that the figures inside Facebook who compiled and promoted the bogus report have ties to the US national security state. The author of the report and leader of Facebook's "Threat Intelligence Team", Ben Nimmo, is a former NATO press officer and consultant for the Integrity Initiative (a real-life troll farm). Nimmo served as head of investigations for Graphika, a DARPA-backed initiative set up with funding from the Pentagon's Minerva Institute. The head of security policy at Facebook, Nathaniel Gleicher, also promoted the report. Gleicher was previously director for cybersecurity policy at the White House National Security Council. David Agranovich, Facebook's "director of threat disruption" who also shared Nimmo's report, served as director of intelligence for the White House National Security Council. It all raises questions about the national security backgrounds of whoever was making these decisions at Twitter.

    So we finally have an example of social media selectively targeting the users of a specific political ideology. The legends are both true and the opposite of the right-wing narrative, because that's how strategic projection/trolling works:

    The Gray­zone

    Meet the Nicaraguans Face­book false­ly brand­ed bots and cen­sored days before elec­tions

    Ben Nor­ton
    Novem­ber 2, 2021

    Face­book, Insta­gram, and Twit­ter sus­pend­ed hun­dreds of influ­en­tial pro-San­din­ista jour­nal­ists and activists days before Nicaragua’s Novem­ber 7 elec­tions, false­ly claim­ing they were gov­ern­ment trolls. The Gray­zone inter­viewed them to reveal the truth.

    ****

    Just days before Nicaragua’s Novem­ber 7 elec­tions, top social media plat­forms cen­sored top Nicaraguan news out­lets and hun­dreds of jour­nal­ists and activists who sup­port their country’s left­ist San­din­ista gov­ern­ment.

    The polit­i­cal­ly moti­vat­ed cam­paign of Sil­i­con Val­ley cen­sor­ship amount­ed to a mas­sive purge of San­din­ista sup­port­ers one week before the vote. It fol­lowed US gov­ern­ment attacks on the integri­ty of Nicaragua’s elec­tions, and Washington’s insis­tence that it will refuse to rec­og­nize the results.

    The Unit­ed States spon­sored a sadis­ti­cal­ly vio­lent coup attempt in Nicaragua in 2018, which result­ed in hun­dreds of deaths in a des­per­ate attempt to over­throw the demo­c­ra­t­i­cal­ly elect­ed gov­ern­ment of Pres­i­dent Daniel Orte­ga.

    Since the putsch failed, both the Don­ald Trump and Joe Biden admin­is­tra­tions have imposed sev­er­al rounds of dev­as­tat­ing sanc­tions on Nicaragua. The US Con­gress plans to levy new heavy-hand­ed sanc­tions against Nicaragua fol­low­ing the Novem­ber 7 elec­tions.

    Sil­i­con Valley’s crack­down on pro-San­din­ista jour­nal­ists and activists was part and par­cel of the US government’s polit­i­cal assault on Nicaragua.

    Face­book and Insta­gram – both of which are owned by the new­ly rebrand­ed Big Tech giant Meta – sus­pend­ed 1,300 Nicaragua-based accounts run by pro-San­din­ista media out­lets, jour­nal­ists, and activists in a large-scale crack­down on Octo­ber 31.

    Days before, Twit­ter did the same, purg­ing many promi­nent pro-San­din­ista jour­nal­ists and influ­encers.

    On Novem­ber 1, San­din­ista activists whose accounts were sus­pend­ed by Face­book and Insta­gram respond­ed by post­ing videos on Twit­ter, show­ing the world that they are indeed real peo­ple. But Twit­ter sus­pend­ed their accounts as well, seek­ing to erase all evi­dence demon­strat­ing that these Nicaraguans are not gov­ern­ment bots or part of a coor­di­nat­ed inau­then­tic oper­a­tion.

    Twitter’s fol­low-up cen­sor­ship was effec­tive­ly a dou­ble-tap strike on the free­dom of speech of Nicaraguans, whose appar­ent mis­deed is express­ing polit­i­cal views that chal­lenge Washington’s objec­tives.

    Insane! Facebook/Instagram falsely claimed left-wing Nicaraguans are govt-run bots, censoring them. So they posted videos on Twitter showing they're real. But now Twitter is suspending them too! This is a coordinated purge of Sandinistas, days before Nicaragua's Nov. 7 election https://t.co/sB7COoxl7Q pic.twitter.com/E2EPVAK5tx — Ben Norton (@BenjaminNorton) November 1, 2021

    The thou­sands of accounts cen­sored by Face­book, Insta­gram, and Twit­ter col­lec­tive­ly had hun­dreds of thou­sands of fol­low­ers, and rep­re­sent­ed some of the biggest and most influ­en­tial media out­lets and orga­ni­za­tions in Nicaragua, a rel­a­tive­ly small coun­try of 6.5 mil­lion peo­ple.

    US Big Tech com­pa­nies sus­pend­ing all of these accounts mere days before elec­tions could have a sig­nif­i­cant, tan­gi­ble impact on Nicaragua’s elec­toral results.

    The purges exclu­sive­ly tar­get­ed sup­port­ers of the social­ist, anti-impe­ri­al­ist San­din­ista Front par­ty. Zero right-wing oppo­si­tion sup­port­ers in Nicaragua were impact­ed.

    Face­book pub­lished a report on Novem­ber 1 claim­ing the San­din­istas it cen­sored were part of a “troll farm run by the gov­ern­ment of Nicaragua and the San­din­ista Nation­al Lib­er­a­tion Front (FSLN) par­ty” that had engaged in “coor­di­nat­ed inau­then­tic behav­ior.”

    This is demon­stra­bly false. In real­i­ty, what Facebook/Instagram did is purge most high-pro­file San­din­ista sup­port­ers on the plat­forms, then try to jus­ti­fy it by claim­ing that aver­age San­din­ista activists are actu­al­ly gov­ern­ment-run bots.

    Face­book implic­it­ly admit­ted this fact by con­ced­ing in the report that there were “authen­tic accounts” purged in the mas­sive social media crack­down. But Face­book refused to dif­fer­en­ti­ate between the authen­tic accounts and the alleged “inau­then­tic” accounts, nam­ing none and instead lump­ing them all togeth­er in order to jus­ti­fy eras­ing their dig­i­tal exis­tence.

    Unlike Facebook’s inves­ti­ga­tors, this reporter, Ben Nor­ton, is based in Nicaragua and per­son­al­ly knows dozens of the Nicaraguans whose accounts were cen­sored, and can con­firm that they are indeed real peo­ple organ­i­cal­ly express­ing their authen­tic opin­ions – not trolls, bots, or fake accounts.

    I inter­viewed more than two dozen San­din­ista activists whose per­son­al accounts were sus­pend­ed, and pub­lished videos of some of them below, to prove that Facebook’s claims are cat­e­gor­i­cal­ly false.

    Facebook’s secu­ri­ty team is run by for­mer high-lev­el US gov­ern­ment offi­cials

    The Face­book report false­ly depict­ing aver­age San­din­ista activists as gov­ern­ment trolls was co-authored by Ben Nim­mo, the leader of Meta’s “Threat Intel­li­gence Team.”

    The Gray­zone has exposed Nim­mo as a for­mer press offi­cer for the US-led NATO mil­i­tary alliance and paid con­sul­tant to an actu­al covert troll farm: the Integri­ty Ini­tia­tive, which was estab­lished in secret by British mil­i­tary offi­cers to run anti-Russ­ian influ­ence oper­a­tions through West­ern media.

    Nim­mo has served as head of inves­ti­ga­tions at Graphi­ka, anoth­er infor­ma­tion war­fare ini­tia­tive that was set up with fund­ing from the Pentagon’s Min­er­va Insti­tute, and oper­ates with sup­port from the Pentagon’s top-secret Defense Advanced Research Projects Agency (DARPA).

    Nim­mo, who is also a senior fel­low at the West­ern gov­ern­ment-fund­ed Atlantic Coun­cil, med­dled in Britain’s 2020 elec­tion by smear­ing left­ist Labour Par­ty leader Jere­my Cor­byn as the ves­sel for a sup­posed Russ­ian active mea­sures oper­a­tion.

    The lat­est Nim­mo-engi­neered pseu­do-scan­dal high­lights Facebook’s role as an impe­r­i­al infor­ma­tion weapon whose secu­ri­ty team has been essen­tial­ly farmed out of the US gov­ern­ment.

    The head of secu­ri­ty pol­i­cy at Face­book, Nathaniel Gle­ich­er, pro­mot­ed Nimmo’s report, echo­ing his false claims.

    Before mov­ing to Face­book, Gle­ich­er was direc­tor for cyber­se­cu­ri­ty pol­i­cy at the White House Nation­al Secu­ri­ty Coun­cil. He also worked at the US Depart­ment of Jus­tice.

    Gle­ich­er clar­i­fied that when Face­book accused Nicaragua of run­ning a sup­posed “troll farm,” it “means that the op is rely­ing on fake accounts to manip­u­late & deceive their audi­ence.”

    Accord­ing to this def­i­n­i­tion, Facebook’s report is com­plete­ly wrong. Many of the accounts it sus­pend­ed were run by every­day Nicaraguans, and The Gray­zone has inter­viewed them and post­ed videos below.

    1/ Today we shared the removal of a domestic CIB network in Nicaragua that was run by the government of Nicaragua and the Sandinista National Liberation Front (FSLN) party. https://t.co/505nHhgljV — Nathaniel Gleicher (@ngleicher) November 1, 2021

    Facebook’s “direc­tor of threat dis­rup­tion,” David Agra­novich, also shared Nimmo’s false report.

    Like Gle­ich­er, Agra­novich worked at the US gov­ern­ment before mov­ing to Face­book, serv­ing as direc­tor of intel­li­gence for the White House Nation­al Secu­ri­ty Coun­cil.

    1/ Today we shared our latest monthly CIB report, which includes a deep-dive into a network in Nicaragua that was run by the government of Nicaragua and the Sandinista National Liberation Front (FSLN) party. https://t.co/2S36MInjeh — David Agranovich (@DavidAgranovich) November 1, 2021

    Both of these US Nation­al Secu­ri­ty Coun­cil vet­er­ans active­ly pro­mot­ed Facebook’s coor­di­nat­ed purge of pro-San­din­ista Nicaraguans.

    Facebook basically merged with the US government. Head of security policy at FB, Nathaniel Gleicher @ngleicher, was director for cybersecurity policy at the White House National Security Council. Before that he worked at the DOJ. No wonder they are banning Sandinista supporters pic.twitter.com/vXsDgJH1rr — Ben Norton (@BenjaminNorton) November 1, 2021

    The Gray­zone con­tact­ed Face­book with a request for com­ment. The head of secu­ri­ty com­mu­ni­ca­tions, Mar­gari­ta Z. Franklin, replied with­out any com­ment, sim­ply link­ing to Nimmo’s report.

    When The Gray­zone fol­lowed up and asked Franklin about Face­book sus­pend­ing many real-life Nicaraguans who sup­port their gov­ern­ment but are very much not bots, she did not respond.

    Meet the Nicaraguans cen­sored by Face­book, Insta­gram, and Twit­ter

    The Gray­zone spoke with more than two dozen liv­ing, breath­ing San­din­ista activists, whom this reporter knows and has met in per­son, and who were purged in the social media crack­down.

    Many said this was the sec­ond or third time their accounts had been cen­sored. Sev­er­al had their Face­book and Twit­ter accounts removed dur­ing a vio­lent US-backed right-wing coup attempt in 2018.

    Multiple activists said they are afraid Washington will sponsor another coup attempt or destabilization operations following Nicaragua's November 7 elections, and because they were banned on social media, Sandinista supporters will be unable to inform the outside world about what is actually happening in their country.

    Ligia Sevil­la

    Here is Nicaraguan Sandinista activist Ligia Sevilla @ligiasevilla_, who was censored by Facebook. @Meta falsely claimed she's a "fake account," part of a government-run "troll farm." Will Facebook retract its false claims and restore her account? @benimmo @DavidAgranovich @olgs7 https://t.co/mDx9QvvtlU pic.twitter.com/wpykMeFon7 — Ben Norton (@BenjaminNorton) November 1, 2021

    San­din­ista influ­encer Ligia Sevil­la, who had more than 5,500 fol­low­ers on her per­son­al Insta­gram account, which was sus­pend­ed along with her Face­book pro­file, pro­claimed, “I’m not a bot; I’m not a troll. And my social media accounts were cen­sored. Maybe Face­book doesn’t allow us to be San­din­istas?”

    After Sevil­la shared this video to ver­i­fy her authen­tic­i­ty, Twit­ter sus­pend­ed her account as well – a sign of a coor­di­nat­ed cen­sor­ship cam­paign tar­get­ing San­din­istas on social media.

    Franklin Ruiz

    And here is Nicaraguan Sandinista activist Franklin Ruiz G. @ElChequelito, whose personal account was censored by Facebook, which falsely claimed he is a government-run bot/troll. Still no comment on this political censorship by @Meta @benimmo @DavidAgranovich @ngleicher @olgs7 pic.twitter.com/KTthCikmgi — Ben Norton (@BenjaminNorton) November 1, 2021

    San­din­ista activist Franklin Ruiz, whose per­son­al Face­book page was sus­pend­ed, pub­lished a video mes­sage as well: “I want to tell you that we are human beings; we are peo­ple who, on Face­book, are defend­ing our rev­o­lu­tion, defend­ing our coun­try. We are not bots, as Face­book says, or pro­grammed trolls.”

    After Ruiz shared this video on Twit­ter, the plat­form purged him too.

    Tyler Moreno Diaz

    This is another Nicaraguan Sandinista activist, Tyler Moreno Diaz @AlexdiazTyler, who was falsely dubbed a government-run bot by Facebook and had his account suspended. Why won't @Meta admit its "troll farm" report is totally bogus? @DavidAgranovich @olgs7 @benimmo @ngleicher pic.twitter.com/bDS3E5ngre — Ben Norton (@BenjaminNorton) November 1, 2021

    Anoth­er San­din­ista activist Tyler Moreno Diaz, whose Face­book account was purged, also post­ed a video stat­ing, “I want to show you that I’m not a troll.”

    “That’s why I’m ask­ing Face­book, why did you sus­pend me?,” he said. “That is med­dling. That is vio­lat­ing our right to free expres­sion.”

    Hayler Gaitán

    There are so many Sandinista activists in Nicaragua who were falsely labeled "bots" and censored by Facebook. Here is popular communicator Hayler Gaitán @QueNotaHayller, who is not a troll. @Meta still won't comment on its false claims @DavidAgranovich @olgs7 @benimmo @ngleicher pic.twitter.com/rG3iB9ba2S — Ben Norton (@BenjaminNorton) November 1, 2021

    Hayler Gaitán, another Sandinista activist censored by Facebook, published a video explaining, "I am a young communicator. I am not a troll, as Facebook says, or a bot."

    “I am a young com­mu­ni­ca­tor who shares infor­ma­tion about the good progress in Nicaragua,” he con­tin­ued. “We enjoy free health­care, free edu­ca­tion, and oth­er pro­grams that ben­e­fit the Nicaraguan peo­ple, and that we have been build­ing through­out our his­to­ry. And they have want­ed to take that from us, but they will nev­er be able to.”

    After Gaitán post­ed this video on Twit­ter, it sus­pend­ed his account as well.

    Daniela Cien­fue­gos

    This is another Nicaraguan Sandinista supporter, Daniela Cienfuegos @dani100sweet, an activist in the Red de Jóvenes Comunicadores, which Facebook/Instagram suspended and falsely claimed is a gov't bot farm. Any comment from @Meta? @benimmo @DavidAgranovich @ngleicher @olgs7 pic.twitter.com/md32dU6WZp — Ben Norton (@BenjaminNorton) November 1, 2021

    Daniela Cien­fue­gos, an activist with the pro-San­din­ista Red de Jóvenes Comu­ni­cadores (Net­work of Youth Com­mu­ni­ca­tors), post­ed a video on Twit­ter say­ing, “I want­ed to tell you that, no, we are not trolls. We are peo­ple who ded­i­cate our­selves to com­mu­ni­cate from the trench­es, to inform the Nicaraguan peo­ple, and on the inter­na­tion­al stage.”

    After Cien­fue­gos pub­lished this, Twit­ter delet­ed her account as well.

    Face­book, Insta­gram, Twit­ter cen­sor top pro-San­din­ista Nicaraguan jour­nal­ists and media out­lets

    The above are just a small sam­ple of Nicaraguans who were false­ly smeared as “gov­ern­ment-run trolls” by Face­book and erased from social media.

    But it wasn’t just indi­vid­ual Nicaraguans who were cen­sored. Major Nicaraguan media out­lets that pro­vide a pro-San­din­ista per­spec­tive were also removed.

    On the night of Octo­ber 31, Face­book removed 140 pages and 24 groups, 100% of which were pro-San­din­ista. Among those delet­ed were:

    * offi­cial San­din­ista news­pa­per Bar­ri­ca­da, which had more than 65,000 fol­low­ers
    * popular youth-run left-wing blog Redvolución, which had more than 81,000 followers
    * the Red de Jóvenes Comu­ni­cadores, or Young Com­mu­ni­ca­tors Net­work, which brings togeth­er jour­nal­ists and media activists from the San­din­ista Youth social move­ment, and which had more than 71,000 fol­low­ers
    * and the indi­vid­ual pro­files of dozens of Nicaraguan jour­nal­ists, activists, and influ­encers.

    At the exact same time as the Face­book purge, its sis­ter plat­form Insta­gram took down many of the same pages:

    * Bar­ri­ca­da, which had more than 9,500 fol­low­ers
    * Red­volu­ción, which had more than 22,700 fol­low­ers,
    * Red de Jóvenes Comu­ni­cadores, which had more than 12,600 fol­low­ers
    * and, once again, the per­son­al pages of dozens of Nicaraguan jour­nal­ists, activists, and influ­encers.

    Insta­gram also sus­pend­ed the account of the fash­ion orga­ni­za­tion Nicaragua Dis­eña, which is very pop­u­lar in Nicaragua, and had more than 42,700 fol­low­ers.

    Unlike the oth­er purged accounts, Nicaragua Dis­eña is decid­ed­ly not a polit­i­cal orga­ni­za­tion. It is run by Cami­la Orte­ga, a daugh­ter of the pres­i­dent, but Nicaragua Dis­eña inten­tion­al­ly goes out of its way to avoid pol­i­tics, try­ing to bring togeth­er oppo­si­tion sup­port­ers and San­din­istas in apo­lit­i­cal cul­tur­al events.

    Just a few days before the coor­di­nat­ed Face­book-Insta­gram purge, Twit­ter also removed the accounts of the most promi­nent pro-San­din­ista jour­nal­ists and influ­encers on the plat­form.

    On Octo­ber 28, Twit­ter sus­pend­ed the accounts of media activists @ElCuervoNica, @FloryCantoX, @TPU19J, @Jay_Clandestino, and numer­ous oth­ers. Togeth­er, these pro-San­din­ista com­mu­ni­ca­tors had tens of thou­sands of fol­low­ers.

    Many of them, such as @CuervoNica and @FloryCantoR, had been cen­sored before. This was the sec­ond or third account they had cre­at­ed, only to be cen­sored for their polit­i­cal views.

    Twitter's politically motivated censorship is again targeting Latin American leftists: Influential Nicaraguan Sandinista supporters @ElCuervoNica and @FloryCantoX were suspended. This is the SECOND time the US government thought police at Twitter suspended @FloryCantoR's account pic.twitter.com/xlZJWVO9vt — Ben Norton (@BenjaminNorton) October 29, 2021

    Sil­i­con Valley’s cen­sor­ship of Nicaragua always goes in one direc­tion: It is left­ist, anti-impe­ri­al­ist sup­port­ers of the San­din­ista gov­ern­ment who are cen­sored, while right-wing oppo­si­tion activists, many of whom are fund­ed by the US gov­ern­ment, are ver­i­fied and pro­mot­ed by the social media monop­o­lies.

    Numer­ous Nicaraguan jour­nal­ists whose indi­vid­ual social media accounts were sus­pend­ed told The Gray­zone they were upset and angry, as they had spent count­less hours of work over years build­ing their pages, doing jour­nal­ism, and shar­ing infor­ma­tion. Face­book, Insta­gram, and Twit­ter delet­ed all of that labor in mere sec­onds.

    Some said they fear this cen­sor­ship will also harm them finan­cial­ly, as they had relied on their social media accounts as a source of income.

    In addi­tion to clear­ly infring­ing on their rights to free­dom of the press and free­dom of expres­sion, the lat­est wave of Sil­i­con Val­ley cen­sor­ship has done con­crete eco­nom­ic dam­age to work­ing-class Nicaraguans who had relied on Face­book and Insta­gram to run small busi­ness­es. Sev­er­al of those affect­ed told The Gray­zone they are now locked out of the Face­book and Insta­gram pages they had used to sell prod­ucts like food, cloth­ing, or home­made jew­el­ry.

    This Sil­i­con Val­ley cen­sor­ship thus not only great­ly hin­ders these work­ing-class Nicaraguans’ abil­i­ty to do their work as jour­nal­ists, giv­en social media is an inte­gral part of con­tem­po­rary jour­nal­ism, but also deprived them of extra sources of income they had relied on to sup­port their fam­i­lies.

    Giv­en the US government’s hyper­bol­ic claims of Russ­ian med­dling in its 2016 pres­i­den­tial elec­tion, the social media purge it has inspired in Nicaragua is tinged with irony. After years of inves­ti­ga­tions, and bil­lions of dol­lars spent, the only osten­si­ble evi­dence Wash­ing­ton found of Russ­ian inter­fer­ence was some Face­book posts, includ­ing absurd humor­ous memes.

    If these alleged Russ­ian Face­book memes con­sti­tute a Pearl Har­bor-style attack on North Amer­i­can democ­ra­cy, as top US gov­ern­ment offi­cials have claimed, then what does it mean for Face­book, Insta­gram, and Twit­ter to cen­sor high­ly influ­en­tial pro-San­din­ista media out­lets, jour­nal­ists, and activists mere days before Nicaragua’s elec­tions?

    Aaron Mate discusses Russian troll farms' dangerous Jesus masturbation memes that allegedly won Donald Trump the election. Full ep https://t.co/0izDspjRxX @aaronjmate @mtaibbi #UsefulIdiotsPod @RollingStone @RSPolitics #russiagate pic.twitter.com/XG2cH6NZ8F — Katie Halper (@kthalps) May 18, 2020

    Besides med­dling in for­eign elec­tions, North Amer­i­can social media monop­o­lies have sys­tem­at­i­cal­ly and repeat­ed­ly cen­sored jour­nal­ists, politi­cians, and activists in numer­ous coun­tries tar­get­ed by Wash­ing­ton for regime change, such as Venezuela, Iran, Syr­ia, Rus­sia, and Chi­na. On numer­ous occa­sions, these Sil­i­con Val­ley com­pa­nies have admit­ted such purges were car­ried out at the request of the US gov­ern­ment.

    The Grayzone has documented the many ways in which these Big Tech giants promote US state media while silencing people in countries that Washington has deemed its adversaries.

    ...

    Quen­ri Madri­gal, a promi­nent San­din­ista activist and social media influ­encer, com­ment­ed, “We have already wit­nessed the forms of online cen­sor­ship tar­get­ing oth­er coun­tries, like Cuba, Venezuela, Rus­sia, and Iran. There is a tyran­ny of transna­tion­al tech­nol­o­gy and social media cor­po­ra­tions. They are instru­ments that don’t belong to the peo­ples.”

    ———–

    “Meet the Nicaraguans Face­book false­ly brand­ed bots and cen­sored days before elec­tions” by Ben Nor­ton; The Gray­zone; 11/02/2021

    “The purges exclu­sive­ly tar­get­ed sup­port­ers of the social­ist, anti-impe­ri­al­ist San­din­ista Front par­ty. Zero right-wing oppo­si­tion sup­port­ers in Nicaragua were impact­ed.”

    It's not subtle. Facebook, Instagram, and Twitter took explicitly partisan steps to silence the Left in Nicaragua. The whole left across society, from everyday activists to elected officials, including many of the most followed personalities in the country. A giant coordinated effort carried out under the false pretense that these are fake accounts: first Facebook and Instagram carried out the mass purge, and when these very real people went onto Twitter to prove they were real, Twitter proceeded to ban them too:

    ...
    Since the putsch failed, both the Don­ald Trump and Joe Biden admin­is­tra­tions have imposed sev­er­al rounds of dev­as­tat­ing sanc­tions on Nicaragua. The US Con­gress plans to levy new heavy-hand­ed sanc­tions against Nicaragua fol­low­ing the Novem­ber 7 elec­tions.

    Sil­i­con Valley’s crack­down on pro-San­din­ista jour­nal­ists and activists was part and par­cel of the US government’s polit­i­cal assault on Nicaragua.

    Face­book and Insta­gram – both of which are owned by the new­ly rebrand­ed Big Tech giant Meta – sus­pend­ed 1,300 Nicaragua-based accounts run by pro-San­din­ista media out­lets, jour­nal­ists, and activists in a large-scale crack­down on Octo­ber 31.

    Days before, Twit­ter did the same, purg­ing many promi­nent pro-San­din­ista jour­nal­ists and influ­encers.

    On Novem­ber 1, San­din­ista activists whose accounts were sus­pend­ed by Face­book and Insta­gram respond­ed by post­ing videos on Twit­ter, show­ing the world that they are indeed real peo­ple. But Twit­ter sus­pend­ed their accounts as well, seek­ing to erase all evi­dence demon­strat­ing that these Nicaraguans are not gov­ern­ment bots or part of a coor­di­nat­ed inau­then­tic oper­a­tion.

    ...

    The thou­sands of accounts cen­sored by Face­book, Insta­gram, and Twit­ter col­lec­tive­ly had hun­dreds of thou­sands of fol­low­ers, and rep­re­sent­ed some of the biggest and most influ­en­tial media out­lets and orga­ni­za­tions in Nicaragua, a rel­a­tive­ly small coun­try of 6.5 mil­lion peo­ple.

    US Big Tech com­pa­nies sus­pend­ing all of these accounts mere days before elec­tions could have a sig­nif­i­cant, tan­gi­ble impact on Nicaragua’s elec­toral results.

    ...

    Sil­i­con Valley’s cen­sor­ship of Nicaragua always goes in one direc­tion: It is left­ist, anti-impe­ri­al­ist sup­port­ers of the San­din­ista gov­ern­ment who are cen­sored, while right-wing oppo­si­tion activists, many of whom are fund­ed by the US gov­ern­ment, are ver­i­fied and pro­mot­ed by the social media monop­o­lies.
    ...

    Face­book even pub­lished a report on Novem­ber 1 explain­ing the purge as being in response to “coor­di­nat­ed inau­then­tic behav­ior”. A report obvi­ous­ly made in bad faith. So we have a coor­di­nat­ed Sil­i­con Val­ley move against alleged bad faith activ­i­ty made in bad faith:

    ...
    Face­book pub­lished a report on Novem­ber 1 claim­ing the San­din­istas it cen­sored were part of a “troll farm run by the gov­ern­ment of Nicaragua and the San­din­ista Nation­al Lib­er­a­tion Front (FSLN) par­ty” that had engaged in “coor­di­nat­ed inau­then­tic behav­ior.”

    This is demon­stra­bly false. In real­i­ty, what Facebook/Instagram did is purge most high-pro­file San­din­ista sup­port­ers on the plat­forms, then try to jus­ti­fy it by claim­ing that aver­age San­din­ista activists are actu­al­ly gov­ern­ment-run bots.

    Face­book implic­it­ly admit­ted this fact by con­ced­ing in the report that there were “authen­tic accounts” purged in the mas­sive social media crack­down. But Face­book refused to dif­fer­en­ti­ate between the authen­tic accounts and the alleged “inau­then­tic” accounts, nam­ing none and instead lump­ing them all togeth­er in order to jus­ti­fy eras­ing their dig­i­tal exis­tence.

    Unlike Facebook’s inves­ti­ga­tors, this reporter, Ben Nor­ton, is based in Nicaragua and per­son­al­ly knows dozens of the Nicaraguans whose accounts were cen­sored, and can con­firm that they are indeed real peo­ple organ­i­cal­ly express­ing their authen­tic opin­ions – not trolls, bots, or fake accounts.

    ...

    Many said this was the sec­ond or third time their accounts had been cen­sored. Sev­er­al had their Face­book and Twit­ter accounts removed dur­ing a vio­lent US-backed right-wing coup attempt in 2018.

    ...

    The above are just a small sam­ple of Nicaraguans who were false­ly smeared as “gov­ern­ment-run trolls” by Face­book and erased from social media.

    But it wasn’t just indi­vid­ual Nicaraguans who were cen­sored. Major Nicaraguan media out­lets that pro­vide a pro-San­din­ista per­spec­tive were also removed.
    ...

    So it should come as no surprise to learn that this coordinated bad-faith action by these Silicon Valley giants just happens to align with the US's long-standing policy of squashing Central American leftist movements. In other words, we're watching the latest US op targeting Nicaragua's Left in action. An op executed by Facebook executives who just happen to have previously worked in national security jobs with the US government:

    ...
    The Face­book report false­ly depict­ing aver­age San­din­ista activists as gov­ern­ment trolls was co-authored by Ben Nim­mo, the leader of Meta’s “Threat Intel­li­gence Team.”

    The Gray­zone has exposed Nim­mo as a for­mer press offi­cer for the US-led NATO mil­i­tary alliance and paid con­sul­tant to an actu­al covert troll farm: the Integri­ty Ini­tia­tive, which was estab­lished in secret by British mil­i­tary offi­cers to run anti-Russ­ian influ­ence oper­a­tions through West­ern media.

    Nim­mo has served as head of inves­ti­ga­tions at Graphi­ka, anoth­er infor­ma­tion war­fare ini­tia­tive that was set up with fund­ing from the Pentagon’s Min­er­va Insti­tute, and oper­ates with sup­port from the Pentagon’s top-secret Defense Advanced Research Projects Agency (DARPA).

    Nim­mo, who is also a senior fel­low at the West­ern gov­ern­ment-fund­ed Atlantic Coun­cil, med­dled in Britain’s 2020 elec­tion by smear­ing left­ist Labour Par­ty leader Jere­my Cor­byn as the ves­sel for a sup­posed Russ­ian active mea­sures oper­a­tion.

    ...

    The head of secu­ri­ty pol­i­cy at Face­book, Nathaniel Gle­ich­er, pro­mot­ed Nimmo’s report, echo­ing his false claims.

    Before mov­ing to Face­book, Gle­ich­er was direc­tor for cyber­se­cu­ri­ty pol­i­cy at the White House Nation­al Secu­ri­ty Coun­cil. He also worked at the US Depart­ment of Jus­tice.

    ...

    Facebook’s “direc­tor of threat dis­rup­tion,” David Agra­novich, also shared Nimmo’s false report.

    Like Gle­ich­er, Agra­novich worked at the US gov­ern­ment before mov­ing to Face­book, serv­ing as direc­tor of intel­li­gence for the White House Nation­al Secu­ri­ty Coun­cil.
    ...

    Finally, note this ominous warning from these activists: what Silicon Valley is doing is making Nicaragua safe for the execution of a right-wing coup. The voices who may have been capable of informing the world about what's happening have been preemptively silenced. It's part of the reason this story goes far beyond Nicaragua. We're watching what could be interpreted as pre-coup digital prep work:

    ...
    Multiple activists said they are afraid Washington will sponsor another coup attempt or destabilization operations following Nicaragua's November 7 elections, and because they were banned on social media, Sandinista supporters will be unable to inform the outside world about what is actually happening in their country.
    ...

    So at this point it sounds like we should­n’t be entire­ly sur­prised to hear about a new right-wing coup attempt in Nicaragua in the com­ing weeks. But we should maybe be a lit­tle sur­prised if we hear about it from any of the dis­sent­ing voic­es in the coun­try, who are of course all gov­ern­ment trolls any­way.

    Posted by Pterrafractyl | November 3, 2021, 2:27 pm
  23. When are people in the United States going to realize that these goddamned social media websites are nothing but tools of the national security state?

    Hell, the Google search engine is lit­er­al­ly the end result of DARPA sci­en­tist William Her­mann Godel’s and Saigon Embassy Min­is­ter Edward Geary Lans­dale’s com­put­er-based “Project AGILE” high-val­ue tar­get iden­ti­fi­ca­tion “Sub­pro­ject V” pro­gram for the “hunter-killer” teams that oper­at­ed with­in the Civ­il Oper­a­tions and Rur­al Devel­op­ment Sup­port-run Provin­cial Recon­nais­sance Unit, which was lat­er renamed “ARPANET”.

    I mean, that was the orig­i­nal use of the “ARPANET” in Viet­nam.

    Peri­od.

    Maj. Gen. Edward Geary Lansdale used it as a computer program to strategically target and assassinate thousands of innocent human beings by building electronic "meta-data" dossiers on them (exactly what modern day social media does).

    And Depart­ment of the Navy Deputy Direc­tor of the Office of Spe­cial Oper­a­tions William Her­mann Godel, appar­ent­ly used the same com­put­er net­work to traf­fic tons of nar­cotics out of Indochi­na on Civ­il Air Trans­port air­craft & Sea Sup­ply Corp. ships.

    It can be argued that the Inter­net as we know it, start­ed out in Godel and Lans­dale’s “Com­bat Devel­op­ment and Test Cen­ter and the Viet­cong Moti­va­tion and Morale Project”, which con­duct­ed some of the most heinous and vile tor­ture and inter­ro­ga­tion exper­i­ments against Indochi­nese civil­ians, a joint CIA — Unit­ed States mil­i­tary oper­a­tion that was lat­er known as Phoenix Pro­gram, which was over­seen by Robert William “Blow­torch Bob” Komer and William Egan Col­by...

    I have often con­sid­ered that the “look-alikes” that sur­round­ed Lee Har­vey Oswald and James Earl Ray may have been select­ed in a “Project Agile” relat­ed sub­pro­ject com­put­er serv­er.

    Poten­tial­ly though the Bun­desnachrich­t­en­di­en­st’s ZR/OARLOCK com­put­er serv­er pro­gram?

    Pur­port­ed­ly, William H. Godel also over­saw all Civ­il Air Trans­port oper­a­tions in South Viet­nam for the bet­ter part of half of the 1960’s.

    Considering David William Ferrie was aide to the National Commander of the Civil Air Patrol by personal order of USAF Brig. Gen. Stephen Davenport McElroy (himself commander of the Ground Electronics Engineering-Installation Agency with Headquarters at Griffiss Air Force Base, N.Y. in 1964), I personally find the idea more than a slight possibility...

    ...but I digress.

    Posted by Robert Ward Montenegro | November 3, 2021, 9:31 pm
  24. @Robert Ward Montenegro–

    Yes, indeed. The whole damned inter­net is an “op” and always was.

    As dis­cussed in the “Sur­veil­lance Val­ley” series of For the Record pro­grams, not only is the inter­net itself, and social media in par­tic­u­lar, an “op” but the so-called pri­va­cy advo­cates, includ­ing St. Edward [Snow­den] and St. Julian [Assange] are a key part of the vac­u­um clean­er oper­a­tion.

    This is the domes­tic Phoenix Pro­gram made man­i­fest.

    It is inter­est­ing, in par­tic­u­lar, to con­tem­plate the Cam­bridge Ana­lyt­i­ca affair.

    https://spitfirelist.com/for-the-record/ftr-1077-surveillance-valley-part-3-cambridge-analytica-democracy-and-counterinsurgency/

    In his speech to the Indus­try Club of Dus­sel­dorf, Hitler equat­ed democ­ra­cy with Com­mu­nism, which went over very well.

    The pur­pose of Project Agile, and the Inter­net, is “Counter Insur­gency.”

    If you are going to do that in a Hit­ler­ian con­text, you have to know what peo­ple are think­ing and doing.

    Democracy=Communism is the dom­i­nant equa­tion.

    We’re doomed.

    Thanks for your continued input and dedication.

    Best,

    Dave

    Posted by Dave Emory | November 4, 2021, 4:18 pm
  25. We got another update on the issue of Facebook's tolerance and embrace of Spanish-language right-wing disinformation. Recall how Spanish-language media in the US was getting overwhelmed with Q-Anon-style far-right memes in 2020, arguably swinging the state of Florida towards Trump. Well, the Los Angeles Times has a new piece out describing the internal Facebook efforts in the final weeks of the 2020 campaign to deal with the deluge of Spanish-language misinformation on its platform. Although it's not so much describing an effort to combat this disinformation as an internal effort to justify the lack of action.

    As the article describes, activist groups were finding that misinformation flagged and taken down in English was remaining up on Facebook and only slowly taken down when it showed up in Spanish. So what was the company's response? According to a 2020 product risk assessment, Spanish-language misinformation detection remains "very low-performance," and yet the suggested response wasn't to provide more resources towards combating this misinformation. No, it was to "Just keep trying to improve. Addition of resources will not help." As the article notes, another employee replied to this suggestion by pointing out that "My understanding is we have 1 part time [software engineer] dedicated on [Instagram] detection right now." So Facebook's implied internal response to the Spanish-language disinformation flooding its Instagram platform during the 2020 election cycle was to ask that one part-time software engineer to try harder.

    Keep in mind the recent con­text of this report: Face­book’s deci­sion this month to take down near­ly every­one in Nicaragua asso­ci­at­ed with the left-wing San­din­ista gov­ern­ment. Includ­ing promi­nent pri­vate sup­port­ers of the gov­ern­ment. A nation­wide purge of the left. That just hap­pened like two weeks ago, right before the elec­tions that the US is now declar­ing a fraud.

    Now juxtapose Facebook's actions in Nicaragua with its behavior in Honduras. As we've seen, Facebook was essentially tolerating the inauthentic use of its platform by the right-wing Honduran government to carry out misinformation campaigns that included encouraging people to join migrant caravans by pretending to be prominent left-wing organizers on Facebook. Yes, the Honduran government was literally caught hyping migrant caravans to the US by faking the Facebook profiles of real migrant activists, and Facebook basically did nothing about this other than protect the identity of the perpetrator. As we're going to see in an April update on that story from Sophie Zhang, Zhang informed the company about the Honduran government's activities in August of 2018, but the company dragged its feet on doing anything about it for 11 months.

    Oh, and here's the best/worst part of this story: according to Facebook's own metrics, the resources it puts towards combating Spanish-language disinformation are eclipsed only by the resources it puts into combating English-language disinformation. Spanish-language disinformation efforts get the second largest chunk of Facebook's anti-disinformation resources. So while these stories describe an utter nightmare of disinformation having taken hold in the Spanish-language communities on these platforms, the situation described here is actually pretty good, relatively speaking, by Facebook's internal standards:

    The Los Ange­les Times

    What Face­book knew about its Lati­no-aimed dis­in­for­ma­tion prob­lem

    BY BRIAN CONTRERAS, MALOY MOORE
    NOV. 16, 2021 5 AM PT

    It was Octo­ber 2020, elec­tion con­spir­a­cy the­o­ries threat­ened to pull Amer­i­ca apart at its seams, and Jes­si­ca González was try­ing to get one of the most pow­er­ful com­pa­nies in the world to lis­ten to her.

    It wasn’t going well.

    After months of try­ing to get on their cal­en­dar, González — the co-chief exec­u­tive of media advo­ca­cy group Free Press — had final­ly man­aged to secure a meet­ing with some of the Face­book employ­ees respon­si­ble for enforc­ing the social platform’s com­mu­ni­ty stan­dards. The issue at hand: the spread of viral mis­in­for­ma­tion among Lati­no and Span­ish-speak­ing Face­book users.

    Across the coun­try, a pipeline of mis­lead­ing media had been pump­ing lies and half-truths, in both Eng­lish and Span­ish, into local Lati­no com­mu­ni­ties. Some­times the mis­in­for­ma­tion mir­rored what the rest of the coun­try was see­ing: fear-mon­ger­ing about mail-in bal­lots and antifa vig­i­lantes, or con­spir­a­cy the­o­ries about the deep state and COVID-19. Oth­er times it leaned into more Lati­no-spe­cif­ic con­cerns, such as com­par­ing can­di­date Joe Biden to Latin Amer­i­can dic­ta­tors or claim­ing that Black Lives Mat­ter activists were using bru­jería — that is, witch­craft.

    Much of the fake news was spread­ing on social media, via YouTube, Twit­ter and, piv­otal­ly, Face­book, What­sApp and Insta­gram. All three are owned by the same umbrel­la com­pa­ny, which recent­ly rebrand­ed as Meta.

“The same sort of themes that were showing up in English were also showing up in Spanish,” González recalled. “But in English, they were either getting flagged or taken down altogether, and in Spanish they were being left up; or if they were getting taken down, it was taking days and days to take them down.”

    Free Press had briefly flagged the prob­lem in July 2020 dur­ing a meet­ing with Chief Exec­u­tive Mark Zucker­berg. González had spent the months since try­ing to set up anoth­er, more focused con­ver­sa­tion. Now, that was actu­al­ly hap­pen­ing.

    In atten­dance were Facebook’s pub­lic pol­i­cy direc­tor for coun­tert­er­ror­ism and dan­ger­ous orga­ni­za­tion, its glob­al direc­tor for risk and response, and sev­er­al mem­bers of the company’s pol­i­cy team, accord­ing to notes from the meet­ing reviewed by The Times.

    “We had a lot of spe­cif­ic ques­tions that they com­plete­ly failed to answer,” she said. “For instance, we asked them, who’s in charge of ensur­ing the integri­ty of con­tent mod­er­a­tion in Span­ish? They would not tell us the answer to that, or even if that per­son exist­ed. We asked, how many con­tent mod­er­a­tors do you have in Span­ish? They refused to [answer] that ques­tion. How many peo­ple that mod­er­ate con­tent in Span­ish are based in the U.S.? ... No answer.”

    “We were con­sis­tent­ly met much the same way they meet oth­er groups that are work­ing on dis­in­for­ma­tion or hate speech,” she added: “With a bunch of emp­ty promis­es and a lack of detail.”

    Free Press wasn’t alone in find­ing Face­book to be a less than ide­al part­ner in the fight against Span­ish-lan­guage and Lati­no-cen­tric mis­in­for­ma­tion. Days after the elec­tion, it and almost 20 oth­er advo­ca­cy groups — many of them Lati­no-cen­tric — sent a let­ter to Zucker­berg crit­i­ciz­ing his company’s “inac­tion and enable­ment of the tar­get­ing, manip­u­la­tion, and dis­en­fran­chise­ment of Lat­inx users” dur­ing the elec­tion, despite “repeat­ed efforts” by the sig­na­to­ries to alert him of their con­cerns.

    “Face­book has not been trans­par­ent at all,” said Jacobo Licona, a dis­in­for­ma­tion researcher at the Lati­no vot­er engage­ment group Equis Labs. More­over, he said, it “has not been coop­er­a­tive with law­mak­ers or Lat­inx-serv­ing orga­ni­za­tions” work­ing on dis­in­for­ma­tion.

    But inside Face­book, employ­ees had been rais­ing red flags of their own for months, call­ing for a more robust cor­po­rate response to the mis­in­for­ma­tion cam­paigns their com­pa­ny was facil­i­tat­ing.

    That’s a through-line in a trove of cor­po­rate reports, mem­os and chat logs recent­ly made pub­lic by whistle­blow­er and for­mer Face­book employ­ee Frances Hau­gen.

    “We’re not good at detect­ing mis­in­fo in Span­ish or lots of oth­er media types,” reads one such doc­u­ment, a prod­uct risk assess­ment from Feb­ru­ary 2020, includ­ed in dis­clo­sures made to the Secu­ri­ties and Exchange Com­mis­sion and pro­vid­ed to Con­gress in redact­ed form by Haugen’s legal coun­sel. A con­sor­tium of news orga­ni­za­tions, includ­ing the Los Ange­les Times, obtained the redact­ed ver­sions received by Con­gress.

    The same doc­u­ment lat­er adds, “We will still have gaps in detec­tion & enforce­ment, esp. for Span­ish.”

    The next month, anoth­er inter­nal report warned that Face­book had “no poli­cies to pro­tect against tar­get­ed sup­pres­sion (e.g., ICE at polls),” allud­ing to con­cerns that Lati­no vot­ers would be dis­suad­ed from show­ing up to vote if they were told, false­ly, that immi­gra­tion author­i­ties would be present at polling sites.

    The report col­or-cod­ed that con­cern bright red: high risk, low readi­ness.

    Lat­er, in an assess­ment of the company’s abil­i­ty to han­dle viral mis­in­for­ma­tion, the report added: “Gaps in detec­tion still exist (e.g. var­i­ous media types, Span­ish posts, etc.)”

    A third inter­nal report point­ed to racial groups with low his­tor­i­cal vot­er par­tic­i­pa­tion rates as one of the main sub­sets of Face­book users fac­ing an ele­vat­ed risk from vot­er dis­en­fran­chise­ment efforts. Lati­nos are among those groups.

    These con­cerns would prove pre­scient as the elec­tion drew clos­er.

    “Dis­in­for­ma­tion tar­get­ing Lati­nos in Eng­lish and Span­ish was hap­pen­ing across the coun­try, espe­cial­ly in places with high­er pop­u­la­tions of Lati­nos,” includ­ing Cal­i­for­nia, Texas, Flori­da, New York and Ari­zona, said Licona, the dis­in­for­ma­tion researcher. “Face­book was — and still is — a major play­er.”

    Com­pa­ny spokesper­son Kevin McAl­is­ter told The Times that Face­book took “a num­ber of steps” ahead of the 2020 elec­tion to com­bat Span­ish-lan­guage mis­in­for­ma­tion.

    “We built a Span­ish ver­sion of our Vot­ing Infor­ma­tion Cen­ter where peo­ple could find accu­rate infor­ma­tion about the elec­tion, expand­ed our vot­er inter­fer­ence poli­cies and enforced them in Span­ish and added two new U.S. fact-check­ing part­ners who review con­tent in Span­ish on Face­book and Insta­gram,” McAl­is­ter said. “We invest­ed in inter­nal research to help teams proac­tive­ly iden­ti­fy where we could improve our prod­ucts and poli­cies ahead of the U.S. 2020 elec­tions.”

    Oth­er broad­er mea­sures announced at the time includ­ed not accept­ing any new polit­i­cal ads in the week before elec­tion day and remov­ing mis­in­for­ma­tion about polling con­di­tions in the three days before elec­tion day.

    By elec­tion day, the com­pa­ny report­ed hav­ing removed more than 265,000 Face­book and Insta­gram posts which vio­lat­ed its vot­er inter­fer­ence poli­cies, and added warn­ing labels to more than 180 mil­lion instances of fact-checked mis­in­for­ma­tion.

    In a June 2020 post on his per­son­al Face­book page, Zucker­berg promised to “ban posts that make false claims say­ing ICE agents are check­ing for immi­gra­tion papers at polling places, which is a tac­tic used to dis­cour­age vot­ing.”

    The com­pa­ny also said that four of its 10 fact-check­ing part­ners in the U.S. han­dle Span­ish-lan­guage con­tent.

    Yet the prob­lems fac­ing Lati­nos on Face­book, What­sApp and Insta­gram extend beyond any one elec­tion cycle, Haugen’s leaks reveal.

    In 2019, Face­book pub­lished a study inter­nal­ly look­ing at efforts to dis­cour­age peo­ple from par­tic­i­pat­ing in the U.S. cen­sus, and how users per­ceived the company’s response to those efforts.

    Among the posts that users report­ed to Face­book were ones “telling Hispanic[s] to not fill out the form”; “telling His­pan­ics not to par­tic­i­pate in answer­ing ques­tions about cit­i­zen­ship”; say­ing that peo­ple “would be in dan­ger of being deport­ed if they par­tic­i­pat­ed”; imply­ing the gov­ern­ment would “get” immi­grants who par­tic­i­pat­ed; and “dis­cour­ag­ing eth­nic groups” from par­tic­i­pat­ing.

    Facebook’s researchers have also exam­ined the pos­si­bil­i­ty that the abun­dance of anti-immi­grant rhetoric on the site takes an out­sized toll on Lati­no users’ men­tal well-being.

    While dis­cussing one study with col­leagues on an inter­nal mes­sage board, a researcher com­ment­ed: “We did want to assess if vul­ner­a­ble pop­u­la­tions were affect­ed dif­fer­ent­ly, so we com­pared how Lat­inx [users] felt in com­par­i­son with the rest of the par­tic­i­pants, giv­en the expo­sure to anti-immi­gra­tion hate­ful rhetoric. We found that they expressed high­er lev­els of dis­ap­point­ment and anger, espe­cial­ly after see­ing vio­lat­ing con­tent.”

    In oth­er mes­sage boards, employ­ees wor­ried that the company’s prod­ucts might be con­tribut­ing to broad­er racial inequities.

    “While we pre­sum­ably don’t have any poli­cies designed to dis­ad­van­tage minori­ties, we def­i­nite­ly have policies/practices and emer­gent behav­ior that does,” wrote one employ­ee in a forum called Integri­ty Ideas to Fight Racial Injus­tice. “We should com­pre­hen­sive­ly study how our deci­sions and how the mechan­ics of social media do or do not sup­port minor­i­ty com­mu­ni­ties.”

    Anoth­er post in the same racial jus­tice group encour­aged the com­pa­ny to become more trans­par­ent about XCheck, a pro­gram designed to give promi­nent Face­book users high­er-qual­i­ty con­tent mod­er­a­tion which, in prac­tice, exempt­ed many from fol­low­ing the rules. “XCheck is our tech­ni­cal imple­men­ta­tion of a dou­ble stan­dard,” the employ­ee wrote.

    ...

    The 2020 prod­uct risk assess­ment indi­cates one such area of dis­sent. After not­ing that Span­ish-lan­guage mis­in­for­ma­tion detec­tion remains “very low-per­for­mance,” the report offers this rec­om­men­da­tion: “Just keep try­ing to improve. Addi­tion of resources will not help.”

    Not every­one was sat­is­fied with that answer.

    “For mis­in­fo this doesn’t seem right … curi­ous why we’re say­ing addi­tion of resources will not help?,” one employ­ee asked in a com­ment. “My under­stand­ing is we have 1 part time [soft­ware engi­neer] ded­i­cat­ed on [Insta­gram] detec­tion right now.”

    A sec­ond com­ment added that tar­get­ed mis­in­for­ma­tion “is a big gap. … Flag­ging that we have zero resources avail­able right now to sup­port any work that may be need­ed here.” (Redac­tions make it impos­si­ble to tell whether the same employ­ee was behind both com­ments.)

    In com­mu­ni­ca­tions with the out­side world, includ­ing law­mak­ers, the com­pa­ny has stressed the strength of its Span­ish-lan­guage con­tent mod­er­a­tion rather than the con­cerns raised by its own employ­ees.

    “We con­duct Span­ish-lan­guage con­tent review 24 hours per day at mul­ti­ple glob­al sites,” the com­pa­ny wrote in May in a state­ment to Con­gress. “Span­ish is one of the most com­mon lan­guages used on our plat­forms and is also one of the high­est-resourced lan­guages when it comes to con­tent review.”

    Two months lat­er, near­ly 30 sen­a­tors and con­gres­sion­al mem­bers sent a let­ter to the com­pa­ny express­ing con­cern that its con­tent mod­er­a­tion pro­to­cols were still fail­ing to stanch the flow of Span­ish-lan­guage mis­in­for­ma­tion.

    “We urge you to release spe­cif­ic and clear data demon­strat­ing the resources you cur­rent­ly devote to pro­tect non-Eng­lish speak­ers from mis­in­for­ma­tion, dis­in­for­ma­tion, and ille­gal con­tent on your plat­forms,” the group told Zucker­berg, as well as his coun­ter­parts at YouTube, Twit­ter and Nextdoor.

    Zuckerberg’s response, which again empha­sized the resources and man­pow­er the com­pa­ny was pour­ing into non-Eng­lish con­tent mod­er­a­tion, left them under­whelmed.

    “We received a response from Face­book, and it was real­ly more of the same — no con­crete, direct answers to any of our ques­tions,” said a spokesper­son for Rep. Tony Cár­de­nas (D‑Pacoima), one of the lead sig­na­to­ries on the let­ter.

    In a sub­se­quent inter­view with The Times, Cár­de­nas him­self said that he con­sid­ered his rela­tion­ship with Face­book “basi­cal­ly val­ue­less.” Dur­ing con­gres­sion­al hear­ings, Zucker­berg has “kept try­ing to give this image that they’re doing every­thing that they can: they’re mak­ing tremen­dous strides; all that they can do, they are doing; the invest­ments that they’re mak­ing are pro­found and large and appro­pri­ate.”

    “But when you go through his answers, they were very light on details,” Cár­de­nas added. “They were more aspi­ra­tional, and slight­ly apolo­getic, but not fac­tu­al at all.”

    It’s a com­mon sen­ti­ment on Capi­tol Hill.

    “Online plat­forms aren’t doing enough to stop” dig­i­tal mis­in­for­ma­tion, Sen. Amy Klobuchar (D‑Minn.) said in a state­ment, and “when it comes to non-Eng­lish mis­in­for­ma­tion, their track record is even worse. ... You can still find Span­ish-lan­guage Face­book posts from Novem­ber 2020 that pro­mote elec­tion lies with no warn­ing labels.”

    “I’ve said it before and I’m say­ing it again: Span­ish-lan­guage mis­in­for­ma­tion cam­paigns are absolute­ly explod­ing on social media plat­forms like Face­book, What­sApp, etc.,” Rep. Alexan­dria Oca­sio-Cortez (D‑N.Y.) said in a recent tweet. “It’s putting US Eng­lish mis­in­fo cam­paigns to shame.”

    Lati­no advo­ca­cy groups, too, have been crit­i­cal. Unido­sUS (for­mer­ly the Nation­al Coun­cil of La Raza) recent­ly cut ties with Face­book, return­ing a grant from the com­pa­ny out of frus­tra­tion with “the role that the plat­form has played in inten­tion­al­ly per­pet­u­at­ing prod­ucts and poli­cies that harm the Lati­no com­mu­ni­ty.”

    Yet for all the con­cern from with­in — and crit­i­cism from out­side — Span­ish is a rel­a­tive­ly well-sup­port­ed lan­guage — by Face­book stan­dards.

    One leaked memo from 2021 breaks down dif­fer­ent coun­tries by “cov­er­age,” a met­ric Face­book uses to track how much of the con­tent users see is in a lan­guage sup­port­ed by the company’s “civic clas­si­fi­er” (an AI tool respon­si­ble for flag­ging polit­i­cal con­tent for human review). Per that report, the only Latin Amer­i­can coun­try which has less than 75% cov­er­age is non-Span­ish-speak­ing Haiti. The U.S., for its part, has 99.45% cov­er­age.

    And a report on the company’s 2020 expens­es indi­cates that after Eng­lish, the sec­ond-high­est num­ber of hours spent on work relat­ed to mea­sur­ing and label­ing hate speech went toward Span­ish-lan­guage con­tent.

    Indeed, many of the dis­clo­sures which have come out of Haugen’s leaks have focused on cov­er­age gaps in oth­er, less-well-resourced lan­guages, espe­cial­ly in the Mid­dle East and Asia.

    But to those seek­ing to bet­ter pro­tect Lati­nos from tar­get­ed dis­in­for­ma­tion, Facebook’s asser­tions of suf­fi­cient resources — and the con­cerns voiced by its own employ­ees — raise the ques­tion of why it isn’t doing bet­ter.

    “They always say, ‘We hear you, we’re work­ing on this, we’re try­ing to get bet­ter,’” said González. “And then they just don’t do any­thing.”

    ————

    “What Face­book knew about its Lati­no-aimed dis­in­for­ma­tion prob­lem” by BRIAN CONTRERAS, MALOY MOORE; The Los Ange­les Times; 11/16/2021

    “Yet for all the con­cern from with­in — and crit­i­cism from out­side — Span­ish is a rel­a­tive­ly well-sup­port­ed lan­guage — by Face­book stan­dards.”

Yep, that nightmare of a report on Facebook's near-complete lack of disinformation management for Spanish-language content was actually a feel-good story for Facebook, relatively speaking. Spanish has the second-highest level of support inside the company. The situation is even worse for other languages:

    ...
    One leaked memo from 2021 breaks down dif­fer­ent coun­tries by “cov­er­age,” a met­ric Face­book uses to track how much of the con­tent users see is in a lan­guage sup­port­ed by the company’s “civic clas­si­fi­er” (an AI tool respon­si­ble for flag­ging polit­i­cal con­tent for human review). Per that report, the only Latin Amer­i­can coun­try which has less than 75% cov­er­age is non-Span­ish-speak­ing Haiti. The U.S., for its part, has 99.45% cov­er­age.

    And a report on the company’s 2020 expens­es indi­cates that after Eng­lish, the sec­ond-high­est num­ber of hours spent on work relat­ed to mea­sur­ing and label­ing hate speech went toward Span­ish-lan­guage con­tent.

    Indeed, many of the dis­clo­sures which have come out of Haugen’s leaks have focused on cov­er­age gaps in oth­er, less-well-resourced lan­guages, espe­cial­ly in the Mid­dle East and Asia.

    But to those seek­ing to bet­ter pro­tect Lati­nos from tar­get­ed dis­in­for­ma­tion, Facebook’s asser­tions of suf­fi­cient resources — and the con­cerns voiced by its own employ­ees — raise the ques­tion of why it isn’t doing bet­ter.

    “They always say, ‘We hear you, we’re work­ing on this, we’re try­ing to get bet­ter,’” said González. “And then they just don’t do any­thing.”
    ...

This is paired with the observations of activist groups that when misinformation showed up in both English and Spanish, only the English content was getting flagged and removed. Facebook wasn't able, or willing, to remove Spanish-language content even after it had already determined that content to be misinformation:

    ...
    After months of try­ing to get on their cal­en­dar, González — the co-chief exec­u­tive of media advo­ca­cy group Free Press — had final­ly man­aged to secure a meet­ing with some of the Face­book employ­ees respon­si­ble for enforc­ing the social platform’s com­mu­ni­ty stan­dards. The issue at hand: the spread of viral mis­in­for­ma­tion among Lati­no and Span­ish-speak­ing Face­book users.

    ...

“The same sort of themes that were showing up in English were also showing up in Spanish,” González recalled. “But in English, they were either getting flagged or taken down altogether, and in Spanish they were being left up; or if they were getting taken down, it was taking days and days to take them down.”

    ...

    “We had a lot of spe­cif­ic ques­tions that they com­plete­ly failed to answer,” she said. “For instance, we asked them, who’s in charge of ensur­ing the integri­ty of con­tent mod­er­a­tion in Span­ish? They would not tell us the answer to that, or even if that per­son exist­ed. We asked, how many con­tent mod­er­a­tors do you have in Span­ish? They refused to [answer] that ques­tion. How many peo­ple that mod­er­ate con­tent in Span­ish are based in the U.S.? ... No answer.”

    “We were con­sis­tent­ly met much the same way they meet oth­er groups that are work­ing on dis­in­for­ma­tion or hate speech,” she added: “With a bunch of emp­ty promis­es and a lack of detail.”
    ...

And then there's the 2020 internal Facebook report that explicitly stated "Addition of resources will not help" after noting that Spanish-language misinformation detection remained "very low-performance". A conclusion other Facebook employees understandably took issue with, observing that a single part-time software engineer was dedicated to targeted Spanish-language misinformation on Instagram. A single part-time employee. But no additional resources were needed, apparently:

    ...
    But inside Face­book, employ­ees had been rais­ing red flags of their own for months, call­ing for a more robust cor­po­rate response to the mis­in­for­ma­tion cam­paigns their com­pa­ny was facil­i­tat­ing.

    ...

    The 2020 prod­uct risk assess­ment indi­cates one such area of dis­sent. After not­ing that Span­ish-lan­guage mis­in­for­ma­tion detec­tion remains “very low-per­for­mance,” the report offers this rec­om­men­da­tion: “Just keep try­ing to improve. Addi­tion of resources will not help.”

    Not every­one was sat­is­fied with that answer.

    “For mis­in­fo this doesn’t seem right … curi­ous why we’re say­ing addi­tion of resources will not help?,” one employ­ee asked in a com­ment. “My under­stand­ing is we have 1 part time [soft­ware engi­neer] ded­i­cat­ed on [Insta­gram] detec­tion right now.”

    A sec­ond com­ment added that tar­get­ed mis­in­for­ma­tion “is a big gap. … Flag­ging that we have zero resources avail­able right now to sup­port any work that may be need­ed here.” (Redac­tions make it impos­si­ble to tell whether the same employ­ee was behind both com­ments.)
    ...

And yet it's hard to ignore the underlying conclusion that the cynical anonymous Facebook employee who wrote that "Addition of resources will not help" was ultimately speaking for Facebook's management and reflecting the company's policy today. A policy towards misinformation that's apparently, "Well, we tried! Nothing more we can do but try harder!"

And in case it wasn't clear that it's specifically right-wing misinformation, and not just generic misinformation, that is inundating these Spanish-language Facebook-owned platforms, here's an update from back in April on the story of Facebook's willing toleration of the right-wing Honduran president using Facebook to simultaneously promote his own government while fomenting disinformation campaigns against his left-wing opponents and activists. As we've already seen, there is ample evidence that the Honduran government was literally waging a secret campaign to encourage people to join the migrant caravans heading to the US in 2017, with pro-government cable TV leading the messaging campaign. But as we also saw, inauthentic Facebook activity was heavily used to amplify the Honduran government's disinformation message. And yet, when pressed with evidence of this inauthentic activity by government actors who were pretending to be migrant activists promoting the caravans, Facebook refused to identify the bad actors, citing privacy concerns.

That's all part of the context of the update we got on Facebook's bad behavior in Honduras back in April. The update came via Facebook whistleblower Sophie Zhang, who had the job of combating fake engagement at the company. It was Zhang who uncovered a coordinated disinformation campaign by the Honduran government in August of 2018: 90% of all the known fake engagement on civic or political Pages in Honduras was benefiting the Honduran president. Despite this, the company dragged its feet and took over a year before actually taking any action against this inauthentic behavior, in July of 2019. What was the internal rationale for this foot-dragging? A need to prioritize influence operations targeting the US and Western Europe and to focus on the bad behavior of Russia and Iran. Yep. So at least part of the internal reasoning inside Facebook for why it didn't need to prioritize Spanish-language misinformation is that it was literally a lower priority:

    The Guardian

    Face­book knew of Hon­duran president’s manip­u­la­tion cam­paign – and let it con­tin­ue for 11 months

    Juan Orlan­do Hernán­dez false­ly inflat­ed his posts’ pop­u­lar­i­ty for near­ly a year after the com­pa­ny was informed about it

    Julia Car­rie Wong in San Fran­cis­co and Jeff Ernst
    Tue 13 Apr 2021 07.00 EDT
    Last mod­i­fied on Thu 15 Apr 2021 06.00 EDT

    Face­book allowed the pres­i­dent of Hon­duras to arti­fi­cial­ly inflate the appear­ance of pop­u­lar­i­ty on his posts for near­ly a year after the com­pa­ny was first alert­ed to the activ­i­ty.

    The astro­turf­ing – the dig­i­tal equiv­a­lent of a bussed-in crowd – was just one facet of a broad­er online dis­in­for­ma­tion effort that the admin­is­tra­tion has used to attack crit­ics and under­mine social move­ments, Hon­duran activists and schol­ars say.

    Face­book posts by Juan Orlan­do Hernán­dez, an author­i­tar­i­an rightwinger whose 2017 re-elec­tion is wide­ly viewed as fraud­u­lent, received hun­dreds of thou­sands of fake likes from more than a thou­sand inau­then­tic Face­book Pages – pro­files for busi­ness­es, orga­ni­za­tions and pub­lic fig­ures – that had been set up to look like Face­book user accounts.

    The cam­paign was uncov­ered in August 2018 by a Face­book data sci­en­tist, Sophie Zhang, whose job involved com­bat­ting fake engage­ment: com­ments, shares, likes and reac­tions from inau­then­tic or com­pro­mised accounts.

    Zhang began inves­ti­gat­ing Hernández’s Page because he was the ben­e­fi­cia­ry of 90% of all the known fake engage­ment received by civic or polit­i­cal Pages in Hon­duras. Over one six-week peri­od in 2018, for exam­ple, Hernández’s Face­book posts received likes from 59,100 users, of whom 46,500 were fake.

    She found that one of the admin­is­tra­tors for Hernández’s Page was also the admin­is­tra­tor for hun­dreds of the inau­then­tic Pages that were being used sole­ly to boost posts on Hernández’s Page. This indi­vid­ual was also an admin­is­tra­tor for the Page of Hil­da Hernán­dez, the president’s sis­ter, who served as his com­mu­ni­ca­tions min­is­ter until her death in Decem­ber 2017.

    Although the activ­i­ty vio­lat­ed Facebook’s pol­i­cy against “coor­di­nat­ed inau­then­tic behav­ior” – the kind of decep­tive cam­paign­ing used by a Russ­ian influ­ence oper­a­tion dur­ing the 2016 US elec­tion – Face­book dragged its feet for near­ly a year before tak­ing the cam­paign down in July 2019.

    Despite this, the cam­paign to boost Hernán­dez on Face­book repeat­ed­ly returned, and Face­book showed lit­tle appetite for polic­ing the recidi­vism. Guy Rosen, Facebook’s vice-pres­i­dent of integri­ty, referred to the return of the Hon­duras cam­paign as a “bum­mer” in an inter­nal dis­cus­sion in Decem­ber 2019 but empha­sized that the com­pa­ny need­ed to pri­or­i­tize influ­ence oper­a­tions that tar­get­ed the US or west­ern Europe, or were car­ried out by Rus­sia or Iran.

    Hernández’s Page admin­is­tra­tor also returned to Face­book despite being banned dur­ing the July 2019 take­down. His account list­ed his place of employ­ment as the Hon­duran pres­i­den­tial palace and includ­ed pho­tos tak­en inside restrict­ed areas of the president’s offices.

    The Page admin­is­tra­tor did not respond to queries from the Guardian, and his account was removed two days after the Guardian ques­tioned Face­book about it.

    A Face­book spokesper­son, Liz Bour­geois, said: “We fun­da­men­tal­ly dis­agree with Ms Zhang’s char­ac­ter­i­za­tion of our pri­or­i­ties and efforts to root out abuse on our plat­form.

    “We inves­ti­gat­ed and pub­licly shared our find­ings about the take­down of this net­work in Hon­duras almost two years ago. These inves­ti­ga­tions take time to under­stand the full scope of the decep­tive activ­i­ty so we don’t enforce piece­meal and have con­fi­dence in our pub­lic attri­bu­tion ... Like with oth­er CIB take­downs, we con­tin­ue to mon­i­tor and block attempts to rebuild pres­ence on our plat­form.”

    ...

    Decep­tive social media cam­paigns are used to “deter polit­i­cal par­tic­i­pa­tion or to get those who par­tic­i­pate to change their opin­ion”, said Aldo Sal­ga­do, co-founder of Cit­i­zen Lab Hon­duras. “They serve to emu­late pop­u­lar sup­port that the gov­ern­ment lacks.”

    Euge­nio Sosa, a pro­fes­sor of soci­ol­o­gy at the Nation­al Autonomous Uni­ver­si­ty of Hon­duras, said the government’s use of astro­turf­ing to sup­port Hernán­dez “has to do with the deep ero­sion of legit­i­ma­cy, the lit­tle cred­i­bil­i­ty that he has, and the enor­mous pub­lic mis­trust about what he does, what he says and what he promis­es”. Beyond the president’s loy­al sup­port­ers, how­ev­er, Sosa said he believes that it has lit­tle effect on pub­lic opin­ion, due to a steady stream of head­lines about Hernández’s cor­rup­tion and ties to the nar­cotics trade.

    Hernández’s broth­er was con­vict­ed of drug traf­fick­ing in US fed­er­al courts in Octo­ber 2019, and the pres­i­dent has him­self been iden­ti­fied by US pros­e­cu­tors as a co-con­spir­a­tor in mul­ti­ple drug traf­fick­ing and cor­rup­tion cas­es. Hernán­dez has not been charged with a crime and has denied any wrong­do­ing. Until recent­ly, he was con­sid­ered a key US ally in Cen­tral Amer­i­ca.

    Sal­ga­do said that the Hernán­dez admin­is­tra­tion began resort­ing to social media dis­in­for­ma­tion cam­paigns in 2015, when a major cor­rup­tion scan­dal involv­ing the theft of $350m from the country’s health­care and pen­sion sys­tem inspired months of torch­lit protest march­es. “That’s when the need for the gov­ern­ment aris­es and they des­per­ate­ly begin to cre­ate an army of bots,” he said.

    Face­book, which has about 4.4 mil­lion users in Hon­duras, was a dou­ble-edged sword for the non-par­ti­san protest orga­niz­ers, who used the social net­work to orga­nize but also found them­selves attacked by a dis­in­for­ma­tion cam­paign alleg­ing that they were con­trolled by Manuel Zelaya, a for­mer pres­i­dent who was deposed in a 2009 coup.

    “The smear cam­paign was psy­cho­log­i­cal­ly over­whelm­ing,” said Gabriela Blen, a social activist who was one of the lead­ers of the torch march­es. “It is not easy to endure so much crit­i­cism and so many lies. It affects your fam­i­ly and your loved ones. It is the price that is paid in such a cor­rupt coun­try when one tries to com­bat cor­rup­tion.

    “In Hon­duras there are no guar­an­tees for human rights defend­ers,” she added. “We are at the mer­cy of the pow­ers that dom­i­nate this coun­try. They try to ter­ror­ize us and stop our work, either through psy­cho­log­i­cal ter­ror or cam­paigns on social net­works to stir up rejec­tion and hatred.”

    The dis­in­for­ma­tion cam­paigns are most often employed dur­ing peri­ods of social unrest and typ­i­cal­ly paint protests as vio­lent or par­ti­san, accord­ing to Sosa, the soci­ol­o­gist. “It scares peo­ple away from par­tic­i­pat­ing,” he said.

    Hernán­dez won a sec­ond term in a 2017 elec­tion plagued with irreg­u­lar­i­ties. With the coun­try rocked by protests and a vio­lent gov­ern­ment crack­down, researchers in Mex­i­co and the US doc­u­ment­ed the wide-scale use of Twit­ter bot accounts to pro­mote Hernán­dez and project a false view of “good news, pros­per­i­ty, and tran­quil­i­ty in Hon­duras”.

    Fresh protests in 2019 against gov­ern­ment efforts to pri­va­tize the pub­lic edu­ca­tion and health sys­tems were again met by a dig­i­tal smear cam­paign – this time with the back­ing of an Israeli polit­i­cal mar­ket­ing firm that was barred from Face­book in May 2019 for vio­lat­ing its ban on coor­di­nat­ed inau­then­tic behav­ior.

    Archimedes Group set up fake Face­book Pages pur­port­ing to rep­re­sent Hon­duran news out­lets or com­mu­ni­ty orga­ni­za­tions that pro­mot­ed pro-Hernán­dez mes­sages, accord­ing to an analy­sis by the Atlantic Council’s DFR­Lab. Among them was a Page that ran ads again alleg­ing that Zelaya was the source of the protests, and two Pages that pushed the mes­sage that Hernán­dez was ded­i­cat­ed to fight­ing drug traf­fick­ing.

    “They said that we were incit­ing vio­lence and had groups of delin­quents,” said Suya­pa Figueroa, the pres­i­dent of the Hon­duran Med­ical Guild, who rose to promi­nence as one of the lead­ers of the 2019 protests. “Some peo­ple were afraid to sup­port the [pro­test­ers’] plat­form because they thought that [the oust­ed pres­i­dent] Mel Zelaya was behind it. There were always fears that the move­ment was polit­i­cal­ly manip­u­lat­ed and that stopped it grow­ing.”

    Figueroa con­tin­ues to strug­gle with Face­book-fueled dis­in­for­ma­tion. A Face­book Page pur­port­ing to rep­re­sent her has near­ly 20,000 fol­low­ers and has been used to “attack lead­ers of the oppo­si­tion and cre­ate con­flict with­in it”, she said.

    “I’ve report­ed it and many of my friends have report­ed it, yet I haven’t been able to get that fake Page tak­en down,” she said.

    ————

    “Face­book knew of Hon­duran president’s manip­u­la­tion cam­paign – and let it con­tin­ue for 11 months” by Julia Car­rie Wong and Jeff Ernst; The Guardian; 04/13/2021

    “Although the activ­i­ty vio­lat­ed Facebook’s pol­i­cy against “coor­di­nat­ed inau­then­tic behav­ior” – the kind of decep­tive cam­paign­ing used by a Russ­ian influ­ence oper­a­tion dur­ing the 2016 US elec­tion – Face­book dragged its feet for near­ly a year before tak­ing the cam­paign down in July 2019.”

It took Facebook 11 months to stop the overwhelmingly obvious inauthentic behavior of the Honduran government. Compare that to Facebook's recent takedown of large swathes of Nicaragua's left-wing society based on unfounded fears of government involvement in their online activities. Why the 11-month delay? Well, it was just a "bummer", but Honduras simply wasn't a priority. The US, Western Europe, and the activities of Russia and Iran were the priorities:

    ...
    The cam­paign was uncov­ered in August 2018 by a Face­book data sci­en­tist, Sophie Zhang, whose job involved com­bat­ting fake engage­ment: com­ments, shares, likes and reac­tions from inau­then­tic or com­pro­mised accounts.

    Zhang began inves­ti­gat­ing Hernández’s Page because he was the ben­e­fi­cia­ry of 90% of all the known fake engage­ment received by civic or polit­i­cal Pages in Hon­duras. Over one six-week peri­od in 2018, for exam­ple, Hernández’s Face­book posts received likes from 59,100 users, of whom 46,500 were fake.

    She found that one of the admin­is­tra­tors for Hernández’s Page was also the admin­is­tra­tor for hun­dreds of the inau­then­tic Pages that were being used sole­ly to boost posts on Hernández’s Page. This indi­vid­ual was also an admin­is­tra­tor for the Page of Hil­da Hernán­dez, the president’s sis­ter, who served as his com­mu­ni­ca­tions min­is­ter until her death in Decem­ber 2017.

    ...

    Despite this, the cam­paign to boost Hernán­dez on Face­book repeat­ed­ly returned, and Face­book showed lit­tle appetite for polic­ing the recidi­vism. Guy Rosen, Facebook’s vice-pres­i­dent of integri­ty, referred to the return of the Hon­duras cam­paign as a “bum­mer” in an inter­nal dis­cus­sion in Decem­ber 2019 but empha­sized that the com­pa­ny need­ed to pri­or­i­tize influ­ence oper­a­tions that tar­get­ed the US or west­ern Europe, or were car­ried out by Rus­sia or Iran.

    Hernández’s Page admin­is­tra­tor also returned to Face­book despite being banned dur­ing the July 2019 take­down. His account list­ed his place of employ­ment as the Hon­duran pres­i­den­tial palace and includ­ed pho­tos tak­en inside restrict­ed areas of the president’s offices.

    The Page admin­is­tra­tor did not respond to queries from the Guardian, and his account was removed two days after the Guardian ques­tioned Face­book about it.
    ...

It's also important to keep in mind that this right-wing Honduran government was, at the time, seen as a key US ally in the region:

    ...

    Decep­tive social media cam­paigns are used to “deter polit­i­cal par­tic­i­pa­tion or to get those who par­tic­i­pate to change their opin­ion”, said Aldo Sal­ga­do, co-founder of Cit­i­zen Lab Hon­duras. “They serve to emu­late pop­u­lar sup­port that the gov­ern­ment lacks.”

    Euge­nio Sosa, a pro­fes­sor of soci­ol­o­gy at the Nation­al Autonomous Uni­ver­si­ty of Hon­duras, said the government’s use of astro­turf­ing to sup­port Hernán­dez “has to do with the deep ero­sion of legit­i­ma­cy, the lit­tle cred­i­bil­i­ty that he has, and the enor­mous pub­lic mis­trust about what he does, what he says and what he promis­es”. Beyond the president’s loy­al sup­port­ers, how­ev­er, Sosa said he believes that it has lit­tle effect on pub­lic opin­ion, due to a steady stream of head­lines about Hernández’s cor­rup­tion and ties to the nar­cotics trade.

    Hernández’s broth­er was con­vict­ed of drug traf­fick­ing in US fed­er­al courts in Octo­ber 2019, and the pres­i­dent has him­self been iden­ti­fied by US pros­e­cu­tors as a co-con­spir­a­tor in mul­ti­ple drug traf­fick­ing and cor­rup­tion cas­es. Hernán­dez has not been charged with a crime and has denied any wrong­do­ing. Until recent­ly, he was con­sid­ered a key US ally in Cen­tral Amer­i­ca.
    ...

All of this underscores how Facebook really is operating as a tool of the national security state, an aspect of this whole scandal that highlights the cynical absurdity of US conservatives complaining about Facebook censorship: Facebook is effectively acting as a tool of the US national security state and is constantly finding excuses to promote right-wing disinformation. It's a reminder that if we really want to get to the bottom of why Facebook is constantly coddling the far right, it requires asking the much larger questions about the US national security state's decades-long coddling of the far right globally. Rather difficult questions.

    Posted by Pterrafractyl | November 18, 2021, 5:50 pm
26. Why are the two GOP Senate candidates with the closest ties to Peter Thiel jointly pushing a new narrative about Mark Zuckerberg stealing the election for Joe Biden? That's the question raised by the following story about how Arizona Senate candidate Blake Masters and Ohio Senate candidate JD Vance are both close Thiel associates, both backed by $10 million from Thiel, and both aggressively promoting the latest 'stolen election' conservative narrative. A narrative in which Mark Zuckerberg himself stole the election for Biden. Not Facebook, just Zuckerberg and his wife.

So how did Mark Zuckerberg and his wife steal the election for Biden? Through their donations to a pro-democracy nonprofit founded in 2012, the Center for Tech and Civic Life (CTCL), which played a last-minute emergency role in 2020 assisting localities in raising the money and resources needed to run an election during an unprecedented pandemic. As the following Yahoo News piece describes, the federal government provided $400 million in emergency assistance to localities, a number far less than what experts said was needed, and Republicans were blocking additional resources. This is where the CTCL stepped in, providing another $400 million in grants to localities. One study found the money was spent on "increased pay for poll workers, expanded early voting sites and extra equipment to more quickly process millions of mailed ballots."

So how did this $400 million in emergency grants steal the election for Biden? Well, according to conservative 'analyses', the money was disproportionately given to urban counties, which benefited Democrats. Now, the complaint that the group gave more to urban than to rural areas is demonstrably absurd. Of course it would, and should, give more just based on population density. But others also point to the CTCL giving grants to Democratic-leaning urban counties that Biden won without giving to Republican-leaning urban counties that Trump won. CTCL replied that it gave grants to all counties that requested them.

So it appears that someone noticed that the CTCL ended up giving disproportionately to Democratic-leaning urban counties over Republican-leaning urban counties and decided to concoct a 'Mark Zuckerberg stole the election' narrative around this. A narrative that conveniently ignores both the extensive evidence of the role Facebook played as a key tool for the Republicans and the far right, and the GOP's systematic refusal of additional funds to help localities run elections during the pandemic.

And this narrative, which is convenient for Facebook but inconvenient for Mark Zuckerberg personally (and therefore somewhat inconvenient for Facebook too), is being heavily promoted by the two Senate candidates with the closest ties to Peter Thiel. What's going on here? Is this pure theatrics? Don't forget the secret White House dinner in October of 2019 arranged by Thiel, where Zuckerberg and the White House came to some sort of agreement to go easy on conservative sites. Theatrical arrangements between the GOP and Facebook are to be expected.

And yet, this is a highly inconvenient narrative for Mark Zuckerberg personally, pushed by candidates bankrolled by his apparent mentor. Why is this happening? Is this really theatrics? Or are we getting a better idea of the power dynamics behind why Mark Zuckerberg finds Peter Thiel to be so indispensable:

    Yahoo News

    GOP sen­ate can­di­dates allege Face­book’s Zucker­berg spent mil­lions to ‘buy the pres­i­den­cy’ for Biden — but there’s not much back­ing up the claim

    Jon Ward·Chief Nation­al Cor­re­spon­dent
    Wed, Decem­ber 8, 2021, 7:27 AM

    Two high-pro­file Repub­li­can can­di­dates for the U.S. Sen­ate, both of them close to tech entre­pre­neur Peter Thiel, are sup­port­ing an effort to merge for­mer Pres­i­dent Don­ald Trump’s lies about a stolen 2020 elec­tion with accu­sa­tions of med­dling against Face­book CEO Mark Zucker­berg.

    In Ari­zona, Sen­ate can­di­date Blake Mas­ters said vot­ers should “elect peo­ple who will tell you the truth.”

    But Mas­ters has made a false­hood part of his can­di­da­cy. “I think Trump won in 2020,” he said in a recent video.

    Mas­ters and J.D. Vance, a Repub­li­can run­ning for Sen­ate in Ohio, are seek­ing togeth­er to repack­age Trump’s decep­tion in a new nar­ra­tive. Both are backed by $10 mil­lion from Thiel, co-founder of Pay­Pal and data min­ing com­pa­ny Palan­tir Tech­nolo­gies.

    Mas­ters co-wrote a book with Thiel and is COO of Thiel’s invest­ment firm. Vance worked for Thiel after pub­lish­ing “Hill­bil­ly Ele­gy,” his best­selling 2016 mem­oir, and raised mon­ey from Thiel to start a ven­ture cap­i­tal firm.

    Mas­ters and Vance have jet­ti­soned the wild and debunked alle­ga­tions of out­right fraud and moved on to a new con­spir­a­cy the­o­ry: that Zucker­berg spent hun­dreds of mil­lions to “buy the pres­i­den­cy for Joe Biden.”

    It’s an alle­ga­tion that has shown some pur­chase among the GOP’s pro-Trump grass­roots. The Repub­li­can Par­ty, which has his­tor­i­cal­ly been amenable to the inter­ests of big busi­ness, is still in the throes of the for­mer president’s trade­mark pop­ulism. And Trump still insists that the 2020 elec­tion was ille­git­i­mate, lead­ing even his more sober-mind­ed sup­port­ers to try and jus­ti­fy that thor­ough­ly debunked idea.

    Since the elec­tion, Trump and his allies have accused Big Tech — major Sil­i­con Val­ley firms like Google, Twit­ter and Face­book — of inter­ven­ing on Biden’s behalf. Con­ser­v­a­tives have already alleged for years that these com­pa­nies were active­ly try­ing to muz­zle the right, and inci­dents like Twitter’s tem­po­rary block­ing of a sto­ry about Hunter Biden’s lap­top have served as a ral­ly­ing cry for these com­plaints.

    But there is lit­tle dis­cus­sion on the right of how dis­in­for­ma­tion and lies — terms that are some­times abused — are arti­fi­cial­ly ampli­fied in ways that divide friends, neigh­bors and fam­i­lies, bring­ing fame and for­tune to those will­ing to play the dem­a­gogue.

    Yet were it not for an ear­ly invest­ment from Thiel, the Face­book we know today might not even exist. In 2004 he became the company’s first out­side investor, giv­ing Zuckerberg’s nascent behe­moth a much-need­ed dose of cap­i­tal and cred­i­bil­i­ty. Even as he pro­pels the can­di­da­cies of Mas­ters and Vance — who are both seek­ing to blame Facebook’s CEO for buy­ing the elec­tion — Thiel still sits on Facebook’s board of direc­tors.

    Thiel’s sup­port of Mas­ters and Vance has cre­at­ed an unusu­al dynam­ic where two first-time can­di­dates, cam­paign­ing for fed­er­al office at oppo­site ends of the coun­try, appear to be some­thing like run­ning mates.

    “Tech bil­lion­aire Peter Thiel is going all-in to sup­port two of his pro­teges’ cam­paigns for the US Sen­ate — and his plan involves swanky Cal­i­for­nia din­ners with high-dol­lar donors,” read a New York Post sto­ry last month. “Rise of a megadonor: Thiel makes a play for the Sen­ate,” blared a Politi­co head­line in May.

    Mas­ters and Vance, for their part, don’t seem to mind being grouped togeth­er. In Octo­ber, they out­lined their alle­ga­tions against Zucker­berg — and against Big Tech more broad­ly — in a New York Post arti­cle they authored togeth­er.

    “Face­book — both the prod­uct and the wealth gen­er­at­ed for its exec­u­tives — was lever­aged to elect a Demo­c­ra­t­ic pres­i­dent,” Mas­ters and Vance wrote. “At a min­i­mum, the company’s lead­ers should be forced to answer for this before a con­gres­sion­al com­mit­tee.”

    The pair essen­tial­ly argued that Biden beat Trump in 2020 because Zucker­berg and his wife, Priscil­la Chan, donat­ed $400 mil­lion of their per­son­al for­tune to help local­i­ties run elec­tions dur­ing the pan­dem­ic, and that mon­ey helped too many Democ­rats vote.

    The com­plaint is not that votes were stolen or added ille­gal­ly. It’s that there were too many legal votes cast in places that lean Demo­c­ra­t­ic and that Zucker­berg and Chan’s mon­ey was in fact fun­neled to places where it would turn out more Demo­c­ra­t­ic vot­ers and help Biden.

    Zucker­berg and Chan, who donat­ed much of the mon­ey to a group called the Cen­ter for Tech and Civic Life (CTCL), deny the alle­ga­tion. Ben Labolt, a spokesman for the cou­ple, told Yahoo News that “near­ly 2,500 elec­tion juris­dic­tions from 49 states applied for and received funds, includ­ing urban, sub­ur­ban, rur­al, and exur­ban coun­ties … and more Repub­li­can than Demo­c­ra­t­ic juris­dic­tions applied for and received the funds.”

    Unques­tion­ably, exam­in­ing the impact of so much mon­ey from a pair of indi­vid­u­als in any sphere relat­ed to the elec­tion is a legit­i­mate endeav­or. But so far, the con­clu­sions about the impact of the Zucker­berg and Chan mon­ey go far beyond what any evi­dence shows, and are being dropped into an infor­ma­tion envi­ron­ment already deeply poi­soned by Trump’s relent­less cam­paign of lies and base­less claims.

    ...

    The anti-Zucker­berg mes­sage has been build­ing for months on the right. Last year, the Capi­tol Research Cen­ter (CRC), a con­ser­v­a­tive non­prof­it, began pub­lish­ing a series of arti­cles claim­ing that the mon­ey from Zucker­berg and Chan helped Biden win the elec­tion.

    CTCL is a Chica­go-based non­prof­it found­ed in 2012 to advo­cate for elec­tion reform. Com­plaints about the Zucker­berg-Chan dona­tions stem in part from the fact that top lead­ers at CTCL have worked for Demo­c­ra­t­ic can­di­dates or caus­es in the past, and that they have post­ed com­ments on social media indi­cat­ing a dis­like of Trump.

    CRC, the con­ser­v­a­tive group, wrote that the mon­ey from CTCL “did not appar­ent­ly vio­late any elec­tion laws” but that “many of its grants tar­get­ed key Demo­c­ra­t­ic-lean­ing coun­ties and cities in bat­tle­ground states.”

    “While CTCL sent grants to many coun­ties that Repub­li­can incum­bent Don­ald Trump won in these states, the largest grants went to Biden coun­ties such as Philadel­phia, Penn­syl­va­nia, and the greater Atlanta met­ro­pol­i­tan area,” CRC wrote.

    In oth­er words, the dona­tion spent more mon­ey on high­ly pop­u­lat­ed urban areas that are essen­tial to Demo­c­ra­t­ic for­tunes in swing states, but that also require far greater sums of mon­ey to con­duct elec­tions.

    How­ev­er, if the argu­ment is that Philadel­phia helped Biden win Penn­syl­va­nia, a close look at vote totals doesn’t sup­port that argu­ment.

    Trump did bet­ter in Philadel­phia in 2020 than he did in 2016, win­ning 18 per­cent of the vote last year com­pared with just 15 per­cent in 2016. In a state decid­ed by only 80,000 votes, the vote totals in Philadel­phia made it clos­er for Trump, rather than for Biden.

    Biden won the state pri­mar­i­ly because of his abil­i­ty to do bet­ter than Hillary Clin­ton had four years pri­or in the sub­ur­ban coun­ties around Philadel­phia.

    Nonethe­less, by this past sum­mer, rough­ly a dozen Repub­li­can state leg­is­la­tures had intro­duced or passed laws ban­ning or restrict­ing the abil­i­ty of pri­vate mon­ey to flow into elec­tion admin­is­tra­tion. But CTCL has said that in many states there is a “sys­temic under­fund­ing of elec­tions” — a notion sup­port­ed by non­par­ti­san elec­tion experts.

    Mean­while, it has become fash­ion­able among Repub­li­cans to announce a ban on “Zuck Bucks” or “Zucker­bucks,” as Flori­da Gov. Ron DeSan­tis did in Octo­ber.

    Zucker­berg and Chan donat­ed the mon­ey in Sep­tem­ber and Octo­ber 2020 after elec­tion admin­is­tra­tors from around the coun­try and from both par­ties said the strain of the COVID-19 pan­dem­ic was going to require a spe­cial infu­sion of mon­ey to pay for every­thing that was need­ed.

    The fed­er­al gov­ern­ment allo­cat­ed about $400 mil­lion through emer­gency fund­ing in the Coro­n­avirus Aid, Relief, and Eco­nom­ic Secu­ri­ty (CARES) Act, a stim­u­lus pack­age signed into law by Trump in March 2020. But elec­tion admin­is­tra­tors and out­side experts said much more was need­ed.

    Repub­li­cans in Con­gress blocked attempts to spend more mon­ey for elec­tion admin­is­tra­tion, and a few months lat­er Zucker­berg and Chan donat­ed their own per­son­al funds. One inves­ti­ga­tion by Amer­i­can Pub­lic Media into the dona­tions said the mon­ey was spent on “increased pay for poll work­ers, expand­ed ear­ly vot­ing sites and extra equip­ment to more quick­ly process mil­lions of mailed bal­lots.”

    The Mas­ters and Vance op-ed in the New York Post relies large­ly on accu­sa­tions made by anoth­er researcher, a for­mer eco­nom­ics pro­fes­sor at the Uni­ver­si­ty of Dal­las named William Doyle. Doyle has alleged that Zucker­berg paid for a “takeover of gov­ern­ment elec­tion oper­a­tions” and that the tech CEO “bought” the 2020 elec­tion and “sig­nif­i­cant­ly increased Joe Biden’s vote mar­gin in key swing states.”

    Doyle is plan­ning to pub­lish more arti­cles on the sub­ject, and in late Decem­ber or Jan­u­ary he intends to issue his first actu­al report, J.P. Arling­haus, a spokesman for Doyle, told Yahoo News. Arling­haus and Doyle are part of the Cae­sar Rod­ney Insti­tute for Amer­i­can Elec­tion Research, a non­prof­it orga­ni­za­tion set up “specif­i­cal­ly to study the 2020 elec­tion,” Arling­haus said.

    The one item Doyle has pub­lished so far claims that the 2020 elec­tion “wasn’t stolen — it was like­ly bought.” Accord­ing to Arling­haus, the seman­tic dis­tinc­tion dis­tances his argu­ment from those made by for­mer New York City May­or Rudy Giu­liani, lawyer Sid­ney Pow­ell or MyP­il­low CEO Mike Lin­dell, who have all made wild claims about sup­posed efforts to rig the vote total for Biden.

    “Unlike some advo­cates whose claims seem made to attempt an over­turn­ing of the 2020 result but which have not yet had solid­ly sourced evi­dence, we are pur­su­ing activ­i­ties and spend­ing that are pub­licly uncon­test­ed and known through pub­lic records,” Arling­haus told Yahoo News.

    Arling­haus also said, “We aren’t mak­ing asser­tions we wish were true, but rather we are inter­est­ed only in metic­u­lous study of the evi­dence wher­ev­er it leads.”

    Doyle’s op-ed com­plains that more of Zucker­berg and Chan’s mon­ey went to large cities than to rur­al areas, where Repub­li­cans tend to be much stronger.

    But that’s not de fac­to evi­dence of par­ti­san intent. High­ly pop­u­lat­ed local­i­ties need more resources to run an elec­tion where there are far more vot­ers.

    A more sub­stan­tive com­plaint is that in metro areas of swing states, more Demo­c­ra­t­ic-lean­ing por­tions of those regions got Zucker­berg fund­ing while more mod­er­ate metro areas, with high­er num­bers of Repub­li­can vot­ers, did not. Doyle alleges this hap­pened in the Dal­las-Fort Worth area, where of the four major coun­ties, the two that Biden won — Dal­las and Tar­rant coun­ties — got Zucker­berg grants through CTCL, and the two that Trump won — Den­ton and Collin coun­ties — did not.

    But CTCL has said it gave grants to any coun­ties that request­ed them. And in addi­tion, the two big Dal­las-Fort Worth coun­ties that Trump won — and that did not get Zucker­berg fund­ing — nonethe­less saw a big­ger increase in vot­er turnout than the two coun­ties that did get the mon­ey from Zucker­berg and Chan.

    Trump car­ried the state of Texas with near­ly 5.9 mil­lion votes, a sub­stan­tial increase over the 4.7 mil­lion votes he won there in 2016.

    The right-wing nar­ra­tive is that with­out groups like CTCL, and oth­ers like the Cen­ter for Elec­tion Inno­va­tion and Research (CEIR), which award­ed about $65 mil­lion in grants — most of that from Zucker­berg and Chan — Democ­rats would have had less of a turnout boost while Repub­li­cans vot­ed in high­er num­bers with­out any help.

    “It is part of the over­all elec­tion denial and the attempts to weak­en Amer­i­can democ­ra­cy by mak­ing just com­plete­ly false claims about the elec­tion,” David Beck­er, CEIR’s exec­u­tive direc­tor, said.

    Ohio Sec­re­tary of State Frank LaRose, a Repub­li­can, said that “even in the most chal­leng­ing of envi­ron­ments, 2020 was Ohio’s most suc­cess­ful elec­tion ever” and that Zucker­berg and Chan’s mon­ey — allo­cat­ed through CEIR grants — was “vital to achiev­ing that mis­sion.” Trump won Ohio by some 500,000 votes in 2020, an improve­ment on his 2016 show­ing.

    Doyle also argued that the peo­ple at CTCL over­see­ing the dis­burse­ment of hun­dreds of mil­lions of dol­lars to local elec­tion offices were “nom­i­nal­ly non-par­ti­san — but demon­stra­bly ide­o­log­i­cal.” There is an entire page at the Cae­sar Rod­ney Insti­tute web­site show­cas­ing social media posts from three CTCL mem­bers that indi­cate their polit­i­cal views lean left.

    Doyle’s web­site declares, “The 2020 Gen­er­al Elec­tion is not over and done with.”

    The notion that Big Tech is in cahoots with the Demo­c­ra­t­ic Par­ty is wide­spread on the right. And it’s pro­mot­ed by authors like the Federalist’s Mol­lie Hem­ing­way, whose book “Rigged” pro­vides much of the mate­r­i­al for oth­ers in the right-wing media ecos­phere.

    In that book, Hem­ing­way claims the Zucker­berg and Chan mon­ey had a par­ti­san impact, but she also talks about main­stream media bias and deci­sions by social media com­pa­nies like Face­book and Twit­ter to deplat­form Trump and oth­er Repub­li­cans. And she points to efforts to sup­press the cir­cu­la­tion of sto­ries crit­i­cal of Democ­rats, most notably the one involv­ing Hunter Biden’s lap­top.

    But Hem­ing­way, a for­mer Trump crit­ic turned stal­wart defend­er, con­tends that what appeared to be non­par­ti­san efforts to help peo­ple vote dur­ing a pan­dem­ic were real­ly a plot to defeat Trump.

    The prob­lem with this argu­ment is that high­er-turnout elec­tions have not been shown to help either par­ty, even as many par­ti­sans on both sides con­tin­ue to insist that high­er turnout helps Democ­rats.

    In addi­tion, apart from Trump, the GOP did excep­tion­al­ly well in the 2020 elec­tion, which saw huge turnout among both Repub­li­cans and Democ­rats.

    ...

    ———–

    “GOP sen­ate can­di­dates allege Face­book’s Zucker­berg spent mil­lions to ‘buy the pres­i­den­cy’ for Biden — but there’s not much back­ing up the claim” by Jon Ward; Yahoo News; 12/08/2021

“Masters and J.D. Vance, a Republican running for Senate in Ohio, are seeking together to repackage Trump’s deception in a new narrative. Both are backed by $10 million from Thiel, co-founder of PayPal and data mining company Palantir Technologies.”

    The two GOP Senate candidates championing this 'Mark Zuckerberg stole the election for Biden' narrative just happen to be the two candidates heavily backed by Thiel, the guy who is widely seen as Zuckerberg's de facto mentor. What's going on here? Is this purely theatrics? Or an example of how Thiel keeps Zuckerberg in check? Note that Vance and Masters jointly published an op-ed pushing this narrative. It's as if they want the world to know this narrative was a Thiel-financed production:

    ...
    Mas­ters co-wrote a book with Thiel and is COO of Thiel’s invest­ment firm. Vance worked for Thiel after pub­lish­ing “Hill­bil­ly Ele­gy,” his best­selling 2016 mem­oir, and raised mon­ey from Thiel to start a ven­ture cap­i­tal firm.

    Mas­ters and Vance have jet­ti­soned the wild and debunked alle­ga­tions of out­right fraud and moved on to a new con­spir­a­cy the­o­ry: that Zucker­berg spent hun­dreds of mil­lions to “buy the pres­i­den­cy for Joe Biden.”

    It’s an alle­ga­tion that has shown some pur­chase among the GOP’s pro-Trump grass­roots. The Repub­li­can Par­ty, which has his­tor­i­cal­ly been amenable to the inter­ests of big busi­ness, is still in the throes of the for­mer president’s trade­mark pop­ulism. And Trump still insists that the 2020 elec­tion was ille­git­i­mate, lead­ing even his more sober-mind­ed sup­port­ers to try and jus­ti­fy that thor­ough­ly debunked idea.

    ...

    “Tech bil­lion­aire Peter Thiel is going all-in to sup­port two of his pro­teges’ cam­paigns for the US Sen­ate — and his plan involves swanky Cal­i­for­nia din­ners with high-dol­lar donors,” read a New York Post sto­ry last month. “Rise of a megadonor: Thiel makes a play for the Sen­ate,” blared a Politi­co head­line in May.

    Mas­ters and Vance, for their part, don’t seem to mind being grouped togeth­er. In Octo­ber, they out­lined their alle­ga­tions against Zucker­berg — and against Big Tech more broad­ly — in a New York Post arti­cle they authored togeth­er.

    “Face­book — both the prod­uct and the wealth gen­er­at­ed for its exec­u­tives — was lever­aged to elect a Demo­c­ra­t­ic pres­i­dent,” Mas­ters and Vance wrote. “At a min­i­mum, the company’s lead­ers should be forced to answer for this before a con­gres­sion­al com­mit­tee.”
    ...

    Crucially, it's not a narrative that involves electoral shenanigans playing out on Facebook. No, the platform itself has nothing to do with this conspiracy theory, conveniently for both Zuckerberg and Thiel. Instead, it's a conspiracy theory focused solely on the actions of the Zuckerberg-funded non-profit, the Center for Tech and Civic Life (CTCL), which was used to channel $400 million in emergency donations Zuckerberg and Chan made for the purpose of helping localities run elections. One investigation by American Public Media found the money was spent on "increased pay for poll workers, expanded early voting sites and extra equipment to more quickly process millions of mailed ballots." As the article notes, this money was given after the federal government itself allocated around $400 million in emergency election funding, but Republicans blocked further funds despite experts saying much more was needed. So the Zuckerberg/Chan $400 million was almost like a private matching fund for the fed's $400 million, and arguably a very necessary one given that the GOP was blocking anything more. And according to this narrative, that $400 million was spent in an unbalanced manner that helped Democrats more than Republicans. That's the big 'Facebook stole the election for Biden' conspiracy theory: the observation that the non-partisan, Zuckerberg-financed election-assistance activities ended up helping Democrats relatively more than Republicans because they helped cities more than rural areas. A narrative that has absolutely nothing to do with Facebook itself. Again, it's a remarkably convenient narrative for Thiel, Facebook, and Zuckerberg. At least Zuckerberg doesn't have to defend himself against more accusations about Facebook directly manipulating people. For Zuckerberg, this must be a refreshing non-Facebook-related accusation, if still an annoying one:

    ...
    The pair essen­tial­ly argued that Biden beat Trump in 2020 because Zucker­berg and his wife, Priscil­la Chan, donat­ed $400 mil­lion of their per­son­al for­tune to help local­i­ties run elec­tions dur­ing the pan­dem­ic, and that mon­ey helped too many Democ­rats vote.

    The com­plaint is not that votes were stolen or added ille­gal­ly. It’s that there were too many legal votes cast in places that lean Demo­c­ra­t­ic and that Zucker­berg and Chan’s mon­ey was in fact fun­neled to places where it would turn out more Demo­c­ra­t­ic vot­ers and help Biden.

    Zucker­berg and Chan, who donat­ed much of the mon­ey to a group called the Cen­ter for Tech and Civic Life (CTCL), deny the alle­ga­tion. Ben Labolt, a spokesman for the cou­ple, told Yahoo News that “near­ly 2,500 elec­tion juris­dic­tions from 49 states applied for and received funds, includ­ing urban, sub­ur­ban, rur­al, and exur­ban coun­ties … and more Repub­li­can than Demo­c­ra­t­ic juris­dic­tions applied for and received the funds.”

    Unques­tion­ably, exam­in­ing the impact of so much mon­ey from a pair of indi­vid­u­als in any sphere relat­ed to the elec­tion is a legit­i­mate endeav­or. But so far, the con­clu­sions about the impact of the Zucker­berg and Chan mon­ey go far beyond what any evi­dence shows, and are being dropped into an infor­ma­tion envi­ron­ment already deeply poi­soned by Trump’s relent­less cam­paign of lies and base­less claims.

    ...

    CTCL is a Chica­go-based non­prof­it found­ed in 2012 to advo­cate for elec­tion reform. Com­plaints about the Zucker­berg-Chan dona­tions stem in part from the fact that top lead­ers at CTCL have worked for Demo­c­ra­t­ic can­di­dates or caus­es in the past, and that they have post­ed com­ments on social media indi­cat­ing a dis­like of Trump.

    ...

    The fed­er­al gov­ern­ment allo­cat­ed about $400 mil­lion through emer­gency fund­ing in the Coro­n­avirus Aid, Relief, and Eco­nom­ic Secu­ri­ty (CARES) Act, a stim­u­lus pack­age signed into law by Trump in March 2020. But elec­tion admin­is­tra­tors and out­side experts said much more was need­ed.

    Repub­li­cans in Con­gress blocked attempts to spend more mon­ey for elec­tion admin­is­tra­tion, and a few months lat­er Zucker­berg and Chan donat­ed their own per­son­al funds. One inves­ti­ga­tion by Amer­i­can Pub­lic Media into the dona­tions said the mon­ey was spent on “increased pay for poll work­ers, expand­ed ear­ly vot­ing sites and extra equip­ment to more quick­ly process mil­lions of mailed bal­lots.”
    ...

    Also note how this narrative about Zuckerberg and the CTCL swinging the election emanates from a right-wing group that's promising to put out more 'research' on the topic. Research based on the work of William Doyle, who has been arguing that Zuckerberg's initiative "significantly increased Biden's vote margin in key swing states." And yet they're simultaneously distancing themselves from the wild claims of election fraud made by figures like Mike Lindell and Sidney Powell. So this looks like it could be a next-generation 'the election was stolen' narrative. In other words, there's going to be a lot more put out around this narrative:

    ...
    The anti-Zuckerberg message has been building for months on the right. Last year, the Capital Research Center (CRC), a conservative nonprofit, began publishing a series of articles claiming that the money from Zuckerberg and Chan helped Biden win the election.

    ...

    CRC, the con­ser­v­a­tive group, wrote that the mon­ey from CTCL “did not appar­ent­ly vio­late any elec­tion laws” but that “many of its grants tar­get­ed key Demo­c­ra­t­ic-lean­ing coun­ties and cities in bat­tle­ground states.”

    “While CTCL sent grants to many coun­ties that Repub­li­can incum­bent Don­ald Trump won in these states, the largest grants went to Biden coun­ties such as Philadel­phia, Penn­syl­va­nia, and the greater Atlanta met­ro­pol­i­tan area,” CRC wrote.

    In oth­er words, the dona­tion spent more mon­ey on high­ly pop­u­lat­ed urban areas that are essen­tial to Demo­c­ra­t­ic for­tunes in swing states, but that also require far greater sums of mon­ey to con­duct elec­tions.

    ...

    The Mas­ters and Vance op-ed in the New York Post relies large­ly on accu­sa­tions made by anoth­er researcher, a for­mer eco­nom­ics pro­fes­sor at the Uni­ver­si­ty of Dal­las named William Doyle. Doyle has alleged that Zucker­berg paid for a “takeover of gov­ern­ment elec­tion oper­a­tions” and that the tech CEO “bought” the 2020 elec­tion and “sig­nif­i­cant­ly increased Joe Biden’s vote mar­gin in key swing states.”

    Doyle is plan­ning to pub­lish more arti­cles on the sub­ject, and in late Decem­ber or Jan­u­ary he intends to issue his first actu­al report, J.P. Arling­haus, a spokesman for Doyle, told Yahoo News. Arling­haus and Doyle are part of the Cae­sar Rod­ney Insti­tute for Amer­i­can Elec­tion Research, a non­prof­it orga­ni­za­tion set up “specif­i­cal­ly to study the 2020 elec­tion,” Arling­haus said.

    The one item Doyle has pub­lished so far claims that the 2020 elec­tion “wasn’t stolen — it was like­ly bought.” Accord­ing to Arling­haus, the seman­tic dis­tinc­tion dis­tances his argu­ment from those made by for­mer New York City May­or Rudy Giu­liani, lawyer Sid­ney Pow­ell or MyP­il­low CEO Mike Lin­dell, who have all made wild claims about sup­posed efforts to rig the vote total for Biden.
    ...

    Finally, note the CTCL response to these accusations: it gave grants to any counties that requested them. So if there was a partisan pattern in how CTCL's grants were distributed, it was because conservative-led counties declined to request them:

    ...
    A more sub­stan­tive com­plaint is that in metro areas of swing states, more Demo­c­ra­t­ic-lean­ing por­tions of those regions got Zucker­berg fund­ing while more mod­er­ate metro areas, with high­er num­bers of Repub­li­can vot­ers, did not. Doyle alleges this hap­pened in the Dal­las-Fort Worth area, where of the four major coun­ties, the two that Biden won — Dal­las and Tar­rant coun­ties — got Zucker­berg grants through CTCL, and the two that Trump won — Den­ton and Collin coun­ties — did not.

    But CTCL has said it gave grants to any coun­ties that request­ed them. And in addi­tion, the two big Dal­las-Fort Worth coun­ties that Trump won — and that did not get Zucker­berg fund­ing — nonethe­less saw a big­ger increase in vot­er turnout than the two coun­ties that did get the mon­ey from Zucker­berg and Chan.

    Trump car­ried the state of Texas with near­ly 5.9 mil­lion votes, a sub­stan­tial increase over the 4.7 mil­lion votes he won there in 2016.

    The right-wing nar­ra­tive is that with­out groups like CTCL, and oth­ers like the Cen­ter for Elec­tion Inno­va­tion and Research (CEIR), which award­ed about $65 mil­lion in grants — most of that from Zucker­berg and Chan — Democ­rats would have had less of a turnout boost while Repub­li­cans vot­ed in high­er num­bers with­out any help.
    ...

    So what's actually happening here? Thiel and Zuckerberg are reportedly quite close, a closeness demonstrated by how long Thiel has remained at Facebook despite all the controversy. And yet the two Thiel proteges running for the Senate are jointly championing this narrative.

    Is this pure theatrics? There was that now-notorious Thiel-arranged secret dinner at the Trump White House in October 2019 where Zuckerberg reportedly agreed to take a hands-off approach to conservative content, so theatrical arrangements between Zuckerberg, Thiel, and the GOP are hardly unprecedented. And it's hard to overlook how this narrative conveniently ignores the role Facebook itself played in the election.

    And yet, as con­ve­nient as this nar­ra­tive is for Face­book and Thiel, it’s still kind of a giant pain in the ass for Zucker­berg. It’s a nar­ra­tive that casts him as a cen­tral vil­lain in the theft of the elec­tion for Biden. It’s hard to imag­ine he’s just chuck­ling about it all. So, again, what’s going on here?

    And that brings us to the following interview with Thiel biographer Max Chafkin, who was asked directly whether Zuckerberg should fear Thiel. As Chafkin sees it, while Zuckerberg is powerful enough to fire Thiel from Facebook, he's unlikely to do so: in part because he values Thiel's advice, and in part because he doesn't want the giant headache that would follow the firing. So there's an implied understanding that firing Thiel from Facebook would have very real repercussions:

    TechCrunch

    Should Mark Zucker­berg be scared of Peter Thiel?

    Connie Loizos / 10:59 PM CDT • September 27, 2021

    Unless you’ve been in a cave over the last week, you’ve like­ly read a review or some dis­cus­sion about “The Con­trar­i­an,” a new book about bil­lion­aire investor Peter Thiel by long­time Bloomberg Busi­ness­week fea­tures edi­tor and tech reporter Max Chafkin.

    ...

    To learn more, we talked with Chafkin last week in what proved to be a live­ly dis­cus­sion that cov­ered how much Thiel (who talked with Chafkin off the record) revealed of his per­son­al life; why the “Trump thing was part­ly ide­o­log­i­cal, but it was part­ly a trade — an insight that Trump was under­val­ued,” says Chafkin; and why Thiel’s beliefs are “extreme­ly incon­sis­tent,” accord­ing to Chafkin’s report­ing. We also dis­cussed Thiel’s rela­tion­ship with Mark Zucker­berg, who accept­ed one of Facebook’s first checks from Thiel and who has been bound, for good and bad, to Thiel since.

    You can hear that 30-minute inter­view here. In the mean­time, we’re pulling out a part of that con­ver­sa­tion cen­ter­ing on Zucker­berg because we find Zuckerberg’s rela­tion­ship with Thiel to be par­tic­u­lar­ly fas­ci­nat­ing and impor­tant, giv­en the impact of Face­book on Amer­i­can soci­ety and humankind more broad­ly. We’ve edit­ed this excerpt light­ly for length.

    TC: You talk about Thiel’s biggest and most impor­tant bet real­ly being Face­book and sug­gest in the book that he used his posi­tion as a board mem­ber since 2005 to per­suade Mark Zucker­berg to be more allow­ing of an any­thing-goes type stance, even mis­in­for­ma­tion. You also sug­gest there has been fric­tion between Thiel and Zucker­berg for some time, espe­cial­ly as Thiel has come to embrace Trump­ism. Do you antic­i­pate that Thiel will be a Face­book board mem­ber for much longer? Do you think he has been side­lined in any way?

    MC: There’s an anec­dote in the book: When Face­book went pub­lic, its stock crashed and Thiel sold the stock pret­ty quick­ly, but of course he stayed on the board [and in the book] I talk about this meet­ing they had at the Face­book cam­pus to kind of pump peo­ple up, because when you’re work­ing in a com­pa­ny and the stock is going down, I under­stand that it’s the world’s most depress­ing thing. Every­body every day is los­ing mon­ey. The press is beat­ing you up. They were get­ting sued by fire­fight­ers and teach­ers. It was just an end­less parade of bad news. So they had all these speak­ers come in to try to pick up the troops. And Peter Thiel gave a talk. And dur­ing the talk, he said, ‘My gen­er­a­tion was promised fly­ing cars. Instead we got Face­book.’ Nor­mal­ly he attacks Twit­ter [with that lan­guage]. He says, ‘We were promised fly­ing cars, but we got 140 char­ac­ters,’ but he made it Face­book in this case, and if you’re sit­ting in a crowd, or if you’re Mark Zucker­berg, it’s like, ‘Oh, so the longest-serv­ing board mem­ber, men­tor, guid­ing light of my busi­ness phi­los­o­phy, just kind of got up there and told me I sucked.’

    ...

    Thiel has at var­i­ous times embraced this kind of right wing activist project in Sil­i­con Val­ley. You have [con­ser­v­a­tive activist] James O’Keefe and oth­ers who are intent on expos­ing what they see as the hypocrisy of Face­book, Google, Apple — all the big tech com­pa­nies — and Thiel has sub­tly embraced those.

    But he’s also increas­ing­ly embrac­ing them in pub­lic. Right now, Thiel has two can­di­dates run­ning in the U.S. Sen­ate races. They’re both run­ning in Repub­li­can pri­maries: Blake Mas­ters in Ari­zona, and JD Vance in Ohio, and Thiel has donat­ed ten mil­lion bucks to super PACs sup­port­ing each of these can­di­dates. These guys are con­stant­ly attack­ing Face­book, and not just attack­ing Face­book on an intel­lec­tu­al lev­el or rais­ing ques­tions. They’re mak­ing almost per­son­al attacks against Mark Zucker­berg. There’s a JD Vance ad [fund­ed by Thiel], where it’s these dark tones, and it’s like, ‘There’s a con­tin­gent of elites in this coun­try who are out of touch,’ and there it is — there’s Mark Zucker­berg face. I mean, if I’m Mark Zucker­berg, God, that must be just a con­tin­u­al source of a headache.

    There was one instance [in 2017] where Zucker­berg asked if Thiel thought he should resign, and Thiel did not and Zucker­berg didn’t fire him, so there has at least been some ten­sion. [As for whether] Thiel’s val­ue has dimin­ished, that’s a real­ly astute ques­tion because with Biden in charge, with the Democ­rats in pow­er con­trol­ling the pres­i­den­cy and both hous­es of Con­gress, Thiel’s con­nec­tion to the right is less valu­able. That said, there’s a very good chance that Repub­li­cans will retake the Sen­ate in 2022. And there’s a chance that that some of those sen­a­tors will be very, very close to Peter Thiel, so that could dras­ti­cal­ly increase his val­ue.

    TC: You men­tion in the book that a lot of peo­ple who are close to Thiel and admire him are also ter­ri­fied of him. Despite the fact that Mark Zucker­berg is prob­a­bly the most pow­er­ful per­son in the world, I won­der if your sense of things is that he’s scared of Thiel.

    MC: I think Zucker­berg could fire Thiel. I mean, Mark Zucker­berg is a for­mi­da­ble guy. He’s worth a lot of mon­ey. He could afford a war with Peter Thiel, and he could afford the back­lash. But I think there’s a ques­tion about whether he’d want to, because right now, the rea­son Thiel is able to get away with what he’s able to get away with, with respect to both serv­ing on the board and being this pub­lic crit­ic, has to do with the fact that there would be a price to pay if Mark Zucker­berg fired him, and the price would be it would be a huge freak­ing sto­ry.

    Thiel had been such an impor­tant ally to Mark Zucker­berg dur­ing the Trump pres­i­den­cy. There have been these run­ning memes in con­ser­v­a­tive cir­cles that Face­book is sys­tem­at­i­cal­ly dis­crim­i­nat­ing against right wing points of view, [that it’s] a lib­er­al com­pa­ny staffed by lib­er­al employ­ees who hate Don­ald Trump, and that as a result, it is putting its thumb on the scale and advanc­ing the inter­ests of the left. . . [But] Zucker­berg had an awe­some response to that, which is ‘Hey, I’ve got this board mem­ber. He’s not just a Repub­li­can. He’s not just some kind of mid­dle-of-the-road con­ser­v­a­tive like George Bush guy or some­thing. He is Peter Freak­ing Thiel. He’s the guy who’s too crazy for Steve Ban­non. He is a dyed-in-the-wool Trump­ist.’ And that gives Face­book a real­ly, real­ly pow­er­ful argu­ment.

    When some­body like Josh Haw­ley, who has tak­en mon­ey from Peter Thiel, or Ted Cruz, anoth­er per­son who has tak­en mon­ey from Peter Thiel, comes along and attacks Face­book . . . I think if [Thiel] left, espe­cial­ly if he was fired — if that was a sto­ry that came out — it would be open sea­son.

    I don’t think it’s an exis­ten­tial issue for Mark Zucker­berg. But I think it might be more com­fort­able to keep his friend and board mem­ber Peter Thiel, despite the fact that they might have some pro­found dif­fer­ences of opin­ion on the val­ue of Face­book.

    ————

    “Should Mark Zucker­berg be scared of Peter Thiel?” by Con­nie Loizos; TechCrunch; 09/27/2021

    “MC: I think Zucker­berg could fire Thiel. I mean, Mark Zucker­berg is a for­mi­da­ble guy. He’s worth a lot of mon­ey. He could afford a war with Peter Thiel, and he could afford the back­lash. But I think there’s a ques­tion about whether he’d want to, because right now, the rea­son Thiel is able to get away with what he’s able to get away with, with respect to both serv­ing on the board and being this pub­lic crit­ic, has to do with the fact that there would be a price to pay if Mark Zucker­berg fired him, and the price would be it would be a huge freak­ing sto­ry.

    Zuckerberg could fire Thiel. He's wealthy and powerful in his own right, after all. But there would be a price paid: a big public story about his split with Thiel, a story with obvious implications for Zuckerberg's relationship with the conservative movement. It's part of what's so ironic and absurd about this whole situation: Zuckerberg apparently keeps Thiel around, in part, as a kind of shield against even more attacks from the right wing.

    And then there's the one known instance of Zuckerberg seemingly trying to fire Thiel, in 2017. Keep in mind this would have been when Thiel's public toxicity was probably at its highest, following his open closeness with the new Trump administration. Zuckerberg reportedly asked Thiel if he thought he should resign; Thiel said no, and Zuckerberg didn't fire him. It tells us something about the nature of their relationship: Zuckerberg fears Thiel too much to fire him:

    ...
    There was one instance [in 2017] where Zucker­berg asked if Thiel thought he should resign, and Thiel did not and Zucker­berg didn’t fire him, so there has at least been some ten­sion. [As for whether] Thiel’s val­ue has dimin­ished, that’s a real­ly astute ques­tion because with Biden in charge, with the Democ­rats in pow­er con­trol­ling the pres­i­den­cy and both hous­es of Con­gress, Thiel’s con­nec­tion to the right is less valu­able. That said, there’s a very good chance that Repub­li­cans will retake the Sen­ate in 2022. And there’s a chance that that some of those sen­a­tors will be very, very close to Peter Thiel, so that could dras­ti­cal­ly increase his val­ue.

    ...

    When some­body like Josh Haw­ley, who has tak­en mon­ey from Peter Thiel, or Ted Cruz, anoth­er per­son who has tak­en mon­ey from Peter Thiel, comes along and attacks Face­book . . . I think if [Thiel] left, espe­cial­ly if he was fired — if that was a sto­ry that came out — it would be open sea­son.
    ...

    Now here's a piece with a few more details on that attempted 2017 firing of Thiel, drawn from Max Chafkin's biography of Thiel. According to Chafkin, the whole incident took place after the NY Times published a leaked email from Facebook board member Reed Hastings telling Thiel that his endorsement of Trump reflected poorly on Facebook. Zuckerberg then asked Thiel to step down. " 'I will not quit,' he told Zuckerberg. 'You'll have to fire me.' " He was not fired, obviously. So it wasn't simply that Zuckerberg asked Thiel whether he felt he should resign: Zuckerberg asked Thiel to resign, Thiel refused, and won:

    Busi­ness Insid­er

    Peter Thiel was the ‘pup­pet mas­ter’ behind Face­book’s polit­i­cal deal­ings, and 6 oth­er extra­or­di­nary details from a new book about the bil­lion­aire tycoon

    Dorothy Cuc­ci
    Oct 3, 2021, 12:02 PM

    * Peter Thiel and Mark Zucker­berg have known each oth­er since 2004, when Thiel became Face­book’s first major investor.
    * Zucker­berg saw Thiel as a men­tor, but those inside the com­pa­ny say he became increas­ing­ly depen­dent on Thiel.
    * A new biog­ra­phy reveals details around Thiel’s influ­ence on Zucker­berg and their roles in the 2020 elec­tion.

    Peter Thiel and Mark Zucker­berg have a lot in com­mon.

    Aggres­sive­ly ambi­tious, social­ly awk­ward, and unapolo­get­i­cal­ly con­tro­ver­sial, the tech bil­lion­aires — two of the most pow­er­ful in Sil­i­con Val­ley — have enjoyed a some­what sym­bi­ot­ic rela­tion­ship over the years.

    Thiel was the first big investor in Face­book, and Zucker­berg, 15 years younger, con­sid­ered him a men­tor. Some believe Thiel act­ed as Zucker­berg’s “pup­pet mas­ter,” and Face­book employ­ees noticed that Zucker­berg seemed to rely on Thiel in an unusu­al and some­times con­cern­ing way.

    We’ve com­piled sev­en of the most inter­est­ing details about Thiel and Zucker­berg’s strange rela­tion­ship from Max Chafk­in’s new biog­ra­phy, “The Con­trar­i­an: Peter Thiel and Sil­i­con Val­ley’s Pur­suit of Pow­er.”

    1. Thiel and Zucker­berg were “kin­dred spir­its” who con­nect­ed over their shared social awk­ward­ness and cut­throat approach to busi­ness.

    Zucker­berg looked up to Thiel, nam­ing him one of four mem­bers of Face­book’s orig­i­nal board of directors—and even said he mod­eled his famous­ly ruth­less equi­ty nego­ti­a­tions with co-founder Eduar­do Saverin after Thiel’s tac­tics at Pay­Pal.

    “Zucker­berg’s ret­i­cence and awk­ward­ness impressed Thiel, who saw in the young man’s indif­fer­ence a sign of intel­li­gence,” wrote Chafkin.

    2. Zucker­berg depend­ed on Thiel to keep Face­book’s rela­tion­ship with the right wing alive.

    In 2016, Thiel helped orches­trate a meet­ing in Men­lo Park between Zucker­berg and 16 well-known con­ser­v­a­tive fig­ures, includ­ing Tuck­er Carl­son, Glenn Beck, and Dana Peri­no, after claims erupt­ed that Face­book was cen­sor­ing right-wing opin­ions.

    Even as Zucker­berg’s own polit­i­cal lean­ings diverged from his men­tor’s, he still relied on Thiel as his “liai­son to the Amer­i­can right” and Face­book’s “con­ser­v­a­tive con­science.”

    ...

    4. Zucker­berg told Thiel he should resign from Face­book’s board of direc­tors in August 2017.

    After The New York Times pub­lished a leaked email from Net­flix CEO and fel­low board mem­ber Reed Hast­ings telling Thiel that his endorse­ment of Trump reflect­ed poor­ly on Face­book, Zucker­berg asked his long­time investor and men­tor to step down.

    “ ‘I will not quit,’ he told Zucker­berg. ‘You’ll have to fire me.’ ” He did noth­ing when Thiel refused.

    5. Thiel under­es­ti­mat­ed Face­book’s poten­tial and told Zucker­berg to sell the com­pa­ny in its sec­ond year.

    Despite its suc­cess, Thiel “had nev­er embraced the func­tion or phi­los­o­phy of Face­book” and did­n’t “buy into Zucker­berg’s con­cep­tion of the com­pa­ny,” wrote Chafkin.

    When Yahoo offered to buy Facebook for $1 billion, Thiel told Zuckerberg to take the deal — but the then-22-year-old CEO said no. In response, Thiel dumped around 17 million shares of his stock.

    6. Zucker­berg may have been CEO, but crit­ics saw Thiel as the “pup­pet mas­ter” behind the com­pa­ny’s polit­i­cal deal­ings.

    Thiel’s crit­ics believe he inten­tion­al­ly pushed his pro­tegé towards the extreme right under the guise of fos­ter­ing free speech, and Face­book employ­ees noticed that Zucker­berg seemed reliant on Thiel’s advice.

    “Thiel had been attract­ed to Zucker­berg’s obvi­ous lack of con­cern for what any­one else thought,” wrote Chafkin. “Zucker­berg now found the same qual­i­ty appeal­ing in Thiel. Thiel’s coun­sel could be rude, but it was always real.”

    7. Zucker­berg met with Thiel and Don­ald Trump to make a secret deal, accord­ing to a source close to Thiel.

    Ahead of the 2020 elec­tion, Zucker­berg and his wife joined Thiel, the pres­i­dent and first lady, and the Kush­n­ers at the White House, wrote Chafkin.

    Amid grow­ing crit­i­cism against Face­book for its tol­er­ance of mis­in­for­ma­tion, they made an agree­ment: Face­book would stop fact-check­ing polit­i­cal posts, and the Trump admin­is­tra­tion would ease up on restric­tions, one of Thiel’s con­fi­dantes said, accord­ing to the book.

    Edi­tor’s note: Both Thiel’s team and Face­book have denied that such an agree­ment was ever made.

    ————

    “Peter Thiel was the ‘pup­pet mas­ter’ behind Face­book’s polit­i­cal deal­ings, and 6 oth­er extra­or­di­nary details from a new book about the bil­lion­aire tycoon” by Dorothy Cuc­ci; Busi­ness Insid­er; 10/03/2021

    “ ‘I will not quit,’ he told Zucker­berg. ‘You’ll have to fire me.’ ” He did noth­ing when Thiel refused.”

    Zuckerberg tried to fire him. Tepidly, by asking for a resignation. And that was as far as he was willing to go. Why? Is he reliant on Thiel? Or scared of him? Well, based on what we've heard, it's probably a bit of both: he relies on Thiel, in particular when it comes to Facebook's relationship with the conservative movement, which is precisely why he's so terrified of Thiel. Without Thiel's protection, the GOP would be even more publicly oppositional towards Facebook:

    ...
    Even as Zucker­berg’s own polit­i­cal lean­ings diverged from his men­tor’s, he still relied on Thiel as his “liai­son to the Amer­i­can right” and Face­book’s “con­ser­v­a­tive con­science.”

    ...

    Thiel’s crit­ics believe he inten­tion­al­ly pushed his pro­tegé towards the extreme right under the guise of fos­ter­ing free speech, and Face­book employ­ees noticed that Zucker­berg seemed reliant on Thiel’s advice.

    “Thiel had been attract­ed to Zucker­berg’s obvi­ous lack of con­cern for what any­one else thought,” wrote Chafkin. “Zucker­berg now found the same qual­i­ty appeal­ing in Thiel. Thiel’s coun­sel could be rude, but it was always real.”
    ...

    Don’t for­get that Thiel’s poten­tial lever­age over Zucker­berg isn’t lim­it­ed to his role as a far right beast­mas­ter who can hold the wolves at bay. Thiel’s role as the co-founder of Palan­tir prob­a­bly gives him all sorts of lever­age over both Face­book and Zucker­berg per­son­al­ly that we can bare­ly begin to mean­ing­ful­ly spec­u­late about.

    And then there’s the fact that the two have known each oth­er and oper­at­ed in the same social cir­cles for years. Just straight up black­mail could be a pos­si­bil­i­ty. Is Thiel black­mail­ing Zucker­berg?

    Another possibility is that Thiel has gotten wind of Zuckerberg planning to finally fire him for real, and the whole Vance/Masters public relations ploy is Thiel's warning to Zuckerberg. Might that be what we're looking at here? Who knows, but whether or not Mark Zuckerberg truly is the most powerful person at Facebook, he doesn't behave as if he believes that himself.

    Posted by Pterrafractyl | December 9, 2021, 5:47 pm
  27. It’s the end of an era. It was an awful era. But at least it’s over. Not that we have any rea­son to believe the gen­er­al awful­ness of the era is going to recede: Peter Thiel is leav­ing the board of Face­book.

    The particular reasons for Thiel's departure aren't entirely clear. Or rather, they're ominously vague. We're told he's leaving in order to focus on the 2022 mid-term elections, which Thiel reportedly views as crucial to changing the direction of the country. But the focus isn't just on getting Republicans elected to office. Thiel is trying to ensure it's the pro-MAGA candidates who ultimately win, with 3 of the 12 House candidates he's backing running primary challenges to Republicans who voted in favor of impeaching Trump over the Jan 6 Capitol insurrection. So Thiel is now basically pro-insurrection, and trying to ensure the GOP remains the party of insurrection and becomes even more pro-insurrection going forward. In that sense, he's not incorrect: 2022 really is crucial to changing the direction of the country. It's going to be the greatest opportunity fascists like Thiel have ever had to ultimately drive a stake through the heart of the dying husk of American democracy, with a pro-insurrection GOP poised to retake control of the House and potentially engage in criminal prosecutions of the Democrats who dared investigate the insurrection.

    So is Thiel truly leaving the Facebook board just to focus on the 2022 mid-terms? It doesn't really add up. It's not like he hasn't been doing exactly that for years. So what's new? Why now? Is there a new scandal involving secret deals between Facebook and the GOP, like the 2019 secret dinner party at the White House? Does it have to do with Thiel's investments in Boldend, a hacking firm offering products that can hack Facebook-owned WhatsApp? Or is Thiel perhaps planning on using Facebook's propaganda power to mobilize GOP voters in a manner so scandalous that the company needs to preemptively part ways? That's what makes this announcement so ominously vague. The expressed reason, spending more time on helping the GOP win elections, doesn't really make sense. Thiel is far more helpful for getting the GOP elected when he's sitting on the board of Facebook. So why leave?

    There's another somewhat amusing possible motive: two of the Senate candidates backed by Thiel, to whom he is particularly close, are Blake Masters and JD Vance. And Masters and Vance have both made bashing Facebook and 'Big Tech' signature campaign themes, making their close ties to Thiel an obvious complication. The two candidates even accuse Zuckerberg of stealing the 2012 election for Barack Obama. Beyond that, Vance is investing in 'Alt Right'-friendly social media platforms of his own, like Rumble. So is Thiel's departure a purely cosmetic move to allow Republicans to disingenuously complain about 'Big Tech censorship' more effectively?

    Note that we aren't told Thiel sold off all his remaining shares in the company. We're just told he's leaving the board. There are also no reports of any new divide between Thiel and Mark Zuckerberg. So it's not like this is an announcement that Thiel is no longer acting as Zuckerberg's confidante and mentor. It's really just an announcement about a curious public relations move by Thiel. A curious, ominous public relations move:

    The New York Times

    Peter Thiel to Exit Meta’s Board to Sup­port Trump-Aligned Can­di­dates

    The tech bil­lion­aire, who has been on the board of the com­pa­ny for­mer­ly known as Face­book since 2005, is back­ing numer­ous politi­cians in the midterm elec­tions.

    By Ryan Mac and Mike Isaac
    Feb. 7, 2022

    Peter Thiel, one of the longest-serv­ing board mem­bers of Meta, the par­ent of Face­book, plans to step down, the com­pa­ny said on Mon­day.

    Mr. Thiel, 54, wants to focus on influ­enc­ing November’s midterm elec­tions, said a per­son with knowl­edge of Mr. Thiel’s think­ing who declined to be iden­ti­fied. Mr. Thiel sees the midterms as cru­cial to chang­ing the direc­tion of the coun­try, this per­son said, and he is back­ing can­di­dates who sup­port the agen­da of for­mer pres­i­dent Don­ald J. Trump.

    Over the last year, Mr. Thiel, who has a net worth estimated at $2.6 billion by Forbes, has become one of the Republican Party's largest donors. He gave $10 million each last year to the campaigns of two protégés, Blake Masters, who is running for a Senate seat in Arizona, and J.D. Vance, who is running for Senate in Ohio.

    Mr. Thiel has been on Meta’s board since 2005, when Face­book was a tiny start-up and he was one of its first insti­tu­tion­al investors. But scruti­ny of Mr. Thiel’s posi­tion on the board has steadi­ly increased as the com­pa­ny was embroiled in polit­i­cal con­tro­ver­sies, includ­ing bar­ring Mr. Trump from the plat­form, and as the ven­ture cap­i­tal­ist has become more polit­i­cal­ly active.

    The depar­ture means Meta los­es its board’s most promi­nent con­ser­v­a­tive voice. The 10-mem­ber board has under­gone sig­nif­i­cant changes in recent years, as many of its mem­bers have left and been replaced, often with Sil­i­con Val­ley entre­pre­neurs. Drew Hous­ton, the chief exec­u­tive of Drop­box, joined Facebook’s board in 2020 and Tony Xu, the founder of Door­Dash, joined it last month. Meta didn’t address whether it intends to replace Mr. Thiel.

    The com­pa­ny, which recent­ly marked its 18th birth­day, is under­tak­ing a shift toward the so-called meta­verse, which Mr. Zucker­berg believes is the next gen­er­a­tion of the inter­net. Last week, Meta report­ed spend­ing more than $10 bil­lion on the effort in 2021, along with mixed finan­cial results. That wiped more than $230 bil­lion off the company’s mar­ket val­ue.

    “Peter has been a valu­able mem­ber of our board and I’m deeply grate­ful for every­thing he’s done for our com­pa­ny,” Mark Zucker­berg, chief exec­u­tive of Meta, said in a state­ment. “Peter is tru­ly an orig­i­nal thinker who you can bring your hard­est prob­lems and get unique sug­ges­tions.”

    ...

    Mr. Thiel first met Mr. Zucker­berg 18 years ago when he pro­vid­ed the entre­pre­neur with $500,000 in cap­i­tal for Face­book, valu­ing the com­pa­ny at $4.9 mil­lion. That gave Mr. Thiel, who with his ven­ture firm Founders Fund con­trolled a 10 per­cent stake in the social net­work, a seat on its board of direc­tors.

    Since then, Mr. Thiel has become a con­fi­dante of Mr. Zucker­berg. He coun­seled the com­pa­ny through its ear­ly years of rapid user growth, and through its dif­fi­cul­ties shift­ing its busi­ness to mobile phones around the time of its 2012 ini­tial pub­lic offer­ing.

    He has also been seen as the con­trar­i­an who has Mr. Zuckerberg’s ear, cham­pi­oning unfet­tered speech across dig­i­tal plat­forms. His con­ser­v­a­tive views also gave Facebook’s board ide­o­log­i­cal diver­si­ty.

    In 2019 and 2020, as Face­book grap­pled with how to deal with polit­i­cal speech and claims made in polit­i­cal adver­tis­ing, Mr. Thiel urged Mr. Zucker­berg to with­stand the pub­lic pres­sure to take down those ads, even as oth­er exec­u­tives and board mem­bers thought the com­pa­ny should change its posi­tion. Mr. Zucker­berg sided with Mr. Thiel.

    Mr. Thiel’s polit­i­cal influ­ence and ties to key Repub­li­cans and con­ser­v­a­tives have also offered a cru­cial gate­way into Wash­ing­ton for Mr. Zucker­berg, espe­cial­ly dur­ing the Trump admin­is­tra­tion. In Octo­ber 2019, Mr. Zucker­berg and Mr. Thiel had a pri­vate din­ner with Pres­i­dent Trump.

    Face­book and Mr. Zucker­berg have long tak­en heat for Mr. Thiel’s pres­ence on the board. In 2016, Mr. Thiel was one of the few tech titans in large­ly lib­er­al Sil­i­con Val­ley to pub­licly sup­port Mr. Trump’s pres­i­den­tial cam­paign.

    In 2020, when Mr. Trump’s incen­di­ary Face­book posts were put under the micro­scope, crit­ics cit­ed Mr. Thiel’s board seat as a rea­son for Mr. Zuckerberg’s con­tin­ued insis­tence that Mr. Trump’s posts be left stand­ing.

    Facebook’s ban of Mr. Trump’s account last year after the Jan. 6 storm­ing of the U.S. Capi­tol has become a key ral­ly­ing point for con­ser­v­a­tives who say that main­stream social plat­forms have cen­sored them.

    Mr. Vance, who used to work at one of Mr. Thiel’s ven­ture funds, and Mr. Mas­ters, the chief oper­at­ing offi­cer of Mr. Thiel’s fam­i­ly office, have railed against Face­book. In Octo­ber, the two Sen­ate can­di­dates argued in an opin­ion piece in The New York Post that Mr. Zuckerberg’s $400 mil­lion in dona­tions to local elec­tion offices in 2020 amount­ed to “elec­tion med­dling” that should be inves­ti­gat­ed.

    Big Tech com­pa­nies shouldn’t be allowed to silence polit­i­cal oppo­nents. They shouldn’t be allowed to work with the CCP. And they shouldn’t be allowed to manip­u­late infor­ma­tion to change elec­tion out­comes.— Blake Mas­ters (@bgmasters) Feb­ru­ary 7, 2022

    Recent­ly, Mr. Thiel has pub­licly voiced his dis­agree­ment with con­tent mod­er­a­tion deci­sions at Face­book and oth­er major social media plat­forms. In Octo­ber at a Mia­mi event orga­nized by a con­ser­v­a­tive tech­nol­o­gy asso­ci­a­tion, he said that he would “take QAnon and Piz­za­gate con­spir­a­cy the­o­ries any day over a Min­istry of Truth.”

    Mr. Thiel’s invest­ing has also clashed with his mem­ber­ship on Meta’s board. He invest­ed in the com­pa­ny that became Clearview AI, a facial-recog­ni­tion start-up that scraped bil­lions of pho­tos from Face­book, Insta­gram and oth­er social plat­forms in vio­la­tion of their terms of ser­vice. Founders Fund also invest­ed in Bold­end, a cyber­weapons com­pa­ny that claimed it had found a way to hack What­sApp, the Meta-owned mes­sag­ing plat­form.

    ...

    In the past year, Mr. Thiel, who also is chair­man of the soft­ware com­pa­ny Palan­tir, has increased his polit­i­cal giv­ing to Repub­li­can can­di­dates. Ahead of the midterms, he is back­ing three Sen­ate can­di­dates and 12 House can­di­dates. Among those House can­di­dates are three peo­ple run­ning pri­ma­ry chal­lenges to Repub­li­cans who vot­ed in favor of impeach­ing Mr. Trump for the events of Jan. 6.

    ————-

    “Peter Thiel to Exit Meta’s Board to Sup­port Trump-Aligned Can­di­dates” by Ryan Mac and Mike Isaac; The New York Times; 02/07/2022

    “Mr. Thiel, 54, wants to focus on influ­enc­ing November’s midterm elec­tions, said a per­son with knowl­edge of Mr. Thiel’s think­ing who declined to be iden­ti­fied. Mr. Thiel sees the midterms as cru­cial to chang­ing the direc­tion of the coun­try, this per­son said, and he is back­ing can­di­dates who sup­port the agen­da of for­mer pres­i­dent Don­ald J. Trump.

    Thiel clearly has big plans for the 2022 election. The question is whether or not this move is directly related to those big plans. After all, it's not like Thiel hasn't been a GOP sugar-daddy for years. Making large donations to repugnant candidates is what he has long done. Sure, it sounds like Thiel has increased his contributions to GOP candidates this year, with two Senate candidates, Blake Masters and JD Vance, having notoriously close ties to him. But, again, it's entirely unclear what's changed from before. Why the big shakeup now?

    ...
    Over the last year, Mr. Thiel, who has a net worth estimated at $2.6 billion by Forbes, has become one of the Republican Party's largest donors. He gave $10 million each last year to the campaigns of two protégés, Blake Masters, who is running for a Senate seat in Arizona, and J.D. Vance, who is running for Senate in Ohio.

    ...

    In 2019 and 2020, as Face­book grap­pled with how to deal with polit­i­cal speech and claims made in polit­i­cal adver­tis­ing, Mr. Thiel urged Mr. Zucker­berg to with­stand the pub­lic pres­sure to take down those ads, even as oth­er exec­u­tives and board mem­bers thought the com­pa­ny should change its posi­tion. Mr. Zucker­berg sided with Mr. Thiel.

    ...

    Face­book and Mr. Zucker­berg have long tak­en heat for Mr. Thiel’s pres­ence on the board. In 2016, Mr. Thiel was one of the few tech titans in large­ly lib­er­al Sil­i­con Val­ley to pub­licly sup­port Mr. Trump’s pres­i­den­tial cam­paign.

    In 2020, when Mr. Trump’s incen­di­ary Face­book posts were put under the micro­scope, crit­ics cit­ed Mr. Thiel’s board seat as a rea­son for Mr. Zuckerberg’s con­tin­ued insis­tence that Mr. Trump’s posts be left stand­ing.

    Facebook’s ban of Mr. Trump’s account last year after the Jan. 6 storm­ing of the U.S. Capi­tol has become a key ral­ly­ing point for con­ser­v­a­tives who say that main­stream social plat­forms have cen­sored them.

    Mr. Vance, who used to work at one of Mr. Thiel’s ven­ture funds, and Mr. Mas­ters, the chief oper­at­ing offi­cer of Mr. Thiel’s fam­i­ly office, have railed against Face­book. In Octo­ber, the two Sen­ate can­di­dates argued in an opin­ion piece in The New York Post that Mr. Zuckerberg’s $400 mil­lion in dona­tions to local elec­tion offices in 2020 amount­ed to “elec­tion med­dling” that should be inves­ti­gat­ed.

    Big Tech com­pa­nies shouldn’t be allowed to silence polit­i­cal oppo­nents. They shouldn’t be allowed to work with the CCP. And they shouldn’t be allowed to manip­u­late infor­ma­tion to change elec­tion out­comes.— Blake Mas­ters (@bgmasters) Feb­ru­ary 7, 2022

    ...

    In the past year, Mr. Thiel, who also is chair­man of the soft­ware com­pa­ny Palan­tir, has increased his polit­i­cal giv­ing to Repub­li­can can­di­dates. Ahead of the midterms, he is back­ing three Sen­ate can­di­dates and 12 House can­di­dates. Among those House can­di­dates are three peo­ple run­ning pri­ma­ry chal­lenges to Repub­li­cans who vot­ed in favor of impeach­ing Mr. Trump for the events of Jan. 6.
    ...

    So the whole 'stepping away to focus on the GOP' excuse doesn't entirely add up. But that doesn't mean Thiel's ties to the GOP aren't a factor. Recall the now-notorious secret 2019 dinner party Thiel and Zuckerberg had at the Trump White House, where they allegedly hammered out a deal to ensure Facebook took it easy on the GOP's strategy of relying on a cyclone of disinformation. Was there another secret Facebook-GOP dinner party that we have yet to learn about? Could Thiel's departure be in anticipation of a yet-to-be-disclosed new secret Facebook-GOP arrangement?

    ...
    Mr. Thiel’s polit­i­cal influ­ence and ties to key Repub­li­cans and con­ser­v­a­tives have also offered a cru­cial gate­way into Wash­ing­ton for Mr. Zucker­berg, espe­cial­ly dur­ing the Trump admin­is­tra­tion. In Octo­ber 2019, Mr. Zucker­berg and Mr. Thiel had a pri­vate din­ner with Pres­i­dent Trump.
    ...

    And then there's the fact that Thiel's Founders Fund has apparently invested in a company that figured out how to hack Facebook-owned WhatsApp. That's kind of a huge deal for Facebook. Or at least it should be. Were Thiel's investments in Boldend a factor here? If not, that's quite a story too:

    ...
    Recent­ly, Mr. Thiel has pub­licly voiced his dis­agree­ment with con­tent mod­er­a­tion deci­sions at Face­book and oth­er major social media plat­forms. In Octo­ber at a Mia­mi event orga­nized by a con­ser­v­a­tive tech­nol­o­gy asso­ci­a­tion, he said that he would “take QAnon and Piz­za­gate con­spir­a­cy the­o­ries any day over a Min­istry of Truth.”

    Mr. Thiel’s invest­ing has also clashed with his mem­ber­ship on Meta’s board. He invest­ed in the com­pa­ny that became Clearview AI, a facial-recog­ni­tion start-up that scraped bil­lions of pho­tos from Face­book, Insta­gram and oth­er social plat­forms in vio­la­tion of their terms of ser­vice. Founders Fund also invest­ed in Bold­end, a cyber­weapons com­pa­ny that claimed it had found a way to hack What­sApp, the Meta-owned mes­sag­ing plat­form.
    ...

    That's all part of what makes this such a difficult announcement to interpret. Thiel's grip over Facebook always seemed to far exceed his actual position in the company, so it's not clear why we should assume that unofficial power isn't going to remain regardless of whether or not Thiel is on the board. Unofficial power that is arguably going to be a lot easier to wield now.

    Posted by Pterrafractyl | February 8, 2022, 2:54 pm
  28. There was a new report about the scummy behavior of Facebook (Meta) that adds a new angle to the many questions about Facebook's deep, and rather secretive, ties to the Republican Party. Questions that include the details of the apparent secret agreement hammered out between Mark Zuckerberg, Peter Thiel, and then-President Donald Trump during a secret 2019 dinner party at the White House, where Zuckerberg allegedly promised to go easy on right-wing disinformation in the 2020 campaign. The new report also tangentially relates to the stories about Peter Thiel and Steve Bannon pushing to foment a kind of 'yellow peril' in the US government and Silicon Valley about the dangers of Chinese tech firms in order to damage a direct rival (Google):

    Newly leaked emails reveal that Facebook has been using a GOP-affiliated public relations firm, Targeted Victory, to secretly push alarmist stories about the dangers TikTok poses to US children. The propaganda campaign included pushing stories into the local media markets in the congressional districts of key members of Congress, with some apparent success. Oh, and it turns out that some of the toxic viral memes that were allegedly being promoted to children on TikTok weren't found on TikTok at all but instead originated on Facebook. It was that sleazy.

    So how long has Face­book been hir­ing GOP PR firms to secret­ly con­duct smear cam­paigns on its rivals? That’s unclear, but in the case of Tar­get­ed Vic­to­ry, we are told that its rela­tion­ship with Face­book goes back to 2016. Yep, Face­book was already using this GOP firm for secret sleaze back in 2016, which is anoth­er wrin­kle in that whole sor­did sto­ry. Although this par­tic­u­lar anti-Tik­Tok cam­paign appears to be ongo­ing, with some of the leaked emails being sent in Feb­ru­ary of this year.

    And Targeted Victory isn't the only GOP-affiliated PR firm used by Facebook. In 2018, Facebook hired another GOP-affiliated firm, Definers Public Affairs, to attack critics and other tech companies, including Apple and Google, during the Cambridge Analytica scandal. It's that broader secret relationship between Facebook (Meta) and the GOP that's the larger story. And based on these leaked emails, it appears that secret relationship has been deeper and sleazier than previously appreciated:

    The Wash­ing­ton Post

    Face­book paid GOP firm to malign Tik­Tok

    The firm, Tar­get­ed Vic­to­ry, pushed local oper­a­tives across the coun­try to boost mes­sages call­ing Tik­Tok a threat to Amer­i­can chil­dren. “Dream would be to get sto­ries with head­lines like ‘From dances to dan­ger,’ ” one cam­paign direc­tor said.

    By Tay­lor Lorenz and Drew Har­well
    March 30, 2022 at 6:30 a.m. EDT

    Face­book par­ent com­pa­ny Meta is pay­ing one of the biggest Repub­li­can con­sult­ing firms in the coun­try to orches­trate a nation­wide cam­paign seek­ing to turn the pub­lic against Tik­Tok.

    The cam­paign includes plac­ing op-eds and let­ters to the edi­tor in major region­al news out­lets, pro­mot­ing dubi­ous sto­ries about alleged Tik­Tok trends that actu­al­ly orig­i­nat­ed on Face­book, and push­ing to draw polit­i­cal reporters and local politi­cians into help­ing take down its biggest com­peti­tor. These bare-knuck­le tac­tics, long com­mon­place in the world of pol­i­tics, have become increas­ing­ly notice­able with­in a tech indus­try where com­pa­nies vie for cul­tur­al rel­e­vance and come at a time when Face­book is under pres­sure to win back young users.

    Employ­ees with the firm, Tar­get­ed Vic­to­ry, worked to under­mine Tik­Tok through a nation­wide media and lob­by­ing cam­paign por­tray­ing the fast-grow­ing app, owned by the Bei­jing-based com­pa­ny ByteDance, as a dan­ger to Amer­i­can chil­dren and soci­ety, accord­ing to inter­nal emails shared with The Wash­ing­ton Post.

    Tar­get­ed Vic­to­ry needs to “get the mes­sage out that while Meta is the cur­rent punch­ing bag, Tik­Tok is the real threat espe­cial­ly as a for­eign owned app that is #1 in shar­ing data that young teens are using,” a direc­tor for the firm wrote in a Feb­ru­ary email.

    Cam­paign oper­a­tives were also encour­aged to use TikTok’s promi­nence as a way to deflect from Meta’s own pri­va­cy and antitrust con­cerns.

    “Bonus point if we can fit this into a broad­er mes­sage that the cur­rent bills/proposals aren’t where [state attor­neys gen­er­al] or mem­bers of Con­gress should be focused,” a Tar­get­ed Vic­to­ry staffer wrote.

    The emails, which have not been pre­vi­ous­ly report­ed, show the extent to which Meta and its part­ners will use oppo­si­tion-research tac­tics on the Chi­nese-owned, multi­bil­lion-dol­lar rival that has become one of the most down­loaded apps in the world, often out­rank­ing even Meta’s pop­u­lar Face­book and Insta­gram apps. In an inter­nal report last year leaked by the whistle­blow­er Frances Hau­gen, Face­book researchers said teens were spend­ing “2–3X more time” on Tik­Tok than Insta­gram, and that Facebook’s pop­u­lar­i­ty among young peo­ple had plum­met­ed.

    Tar­get­ed Vic­to­ry declined to respond to ques­tions about the cam­paign, say­ing only that it has rep­re­sent­ed Meta for sev­er­al years and is “proud of the work we have done.”

    In one email, a Tar­get­ed Vic­to­ry direc­tor asked for ideas on local polit­i­cal reporters who could serve as a “back chan­nel” for anti-Tik­Tok mes­sages, say­ing the firm “would def­i­nite­ly want it to be hands off.”

    In oth­er emails, Tar­get­ed Vic­to­ry urged part­ners to push sto­ries to local media tying Tik­Tok to dan­ger­ous teen trends in an effort to show the app’s pur­port­ed harms. “Any local exam­ples of bad Tik­Tok trends/stories in your mar­kets?” a Tar­get­ed Vic­to­ry staffer asked.

    “Dream would be to get sto­ries with head­lines like ‘From dances to dan­ger: how Tik­Tok has become the most harm­ful social media space for kids,’ ” the staffer wrote.

    Meta spokesper­son Andy Stone defend­ed the cam­paign by say­ing, “We believe all plat­forms, includ­ing Tik­Tok, should face a lev­el of scruti­ny con­sis­tent with their grow­ing suc­cess.”

    A Tik­Tok spokesper­son said the com­pa­ny is “deeply con­cerned” about “the stok­ing of local media reports on alleged trends that have not been found on the plat­form.”

    Tar­get­ed Vic­to­ry worked to ampli­fy neg­a­tive Tik­Tok cov­er­age through a Google doc­u­ment titled “Bad Tik­Tok Clips,” which was shared inter­nal­ly and includ­ed links to dubi­ous local news sto­ries cit­ing Tik­Tok as the ori­gin of dan­ger­ous teen trends. Local oper­a­tives work­ing with the firm were encour­aged to pro­mote these alleged Tik­Tok trends in their own mar­kets to put pres­sure on law­mak­ers to act.

    One trend Tar­get­ed Vic­to­ry sought to enhance through its work was the “devi­ous licks” chal­lenge, which showed stu­dents van­dal­iz­ing school prop­er­ty. Through the “Bad Tik­Tok Clips” doc­u­ment, the firm pushed sto­ries about the “devi­ous licks” chal­lenge in local media across Mass­a­chu­setts, Michi­gan, Min­neso­ta, Rhode Island and Wash­ing­ton, D.C.

    That trend led Sen. Richard Blu­men­thal (D‑Conn.) to write a let­ter in Sep­tem­ber call­ing on Tik­Tok exec­u­tives to tes­ti­fy in front of a Sen­ate sub­com­mit­tee, say­ing the app had been “repeat­ed­ly mis­used and abused to pro­mote behav­ior and actions that encour­age harm­ful and destruc­tive acts.” But accord­ing to an inves­ti­ga­tion by Anna Foley at the pod­cast net­work Gim­let, rumors of the “devi­ous licks” chal­lenge ini­tial­ly spread on Face­book, not Tik­Tok.

    In Octo­ber, Tar­get­ed Vic­to­ry worked to spread rumors of the “Slap a Teacher Tik­Tok chal­lenge” in local news, tout­ing a local news report on the alleged chal­lenge in Hawaii. In real­i­ty, no such chal­lenge exist­ed on Tik­Tok. Again, the rumor start­ed on Face­book, accord­ing to a series of Face­book posts first doc­u­ment­ed by Insid­er.

    The firm worked to use both gen­uine con­cerns and unfound­ed anx­i­eties to cast doubt about the pop­u­lar app. One email out­lin­ing recent neg­a­tive Tik­Tok sto­ries mixed rea­son­able ques­tions, large­ly about TikTok’s cor­po­rate own­er­ship and prac­tices, with more exag­ger­at­ed sto­ries about young users record­ing them­selves behav­ing bad­ly — the kinds of social media pan­ics that have long bedev­iled big social net­works, includ­ing Face­book.

    The agency was work­ing at the same time to get “proac­tive cov­er­age” about Face­book into local news­pa­pers, radio seg­ments and TV broad­casts, includ­ing sub­mit­ting let­ters and opin­ion pieces speak­ing glow­ing­ly of Facebook’s role in, for instance, sup­port­ing Black-owned busi­ness­es. Those let­ters did not men­tion the Meta-fund­ed firm’s involve­ment.

    Tar­get­ed Vic­to­ry has con­tract­ed with dozens of pub­lic rela­tions firms across the Unit­ed States to help sway pub­lic opin­ion against Tik­Tok. In addi­tion to plant­i­ng local news sto­ries, the firm has helped place op-eds tar­get­ing Tik­Tok around the coun­try, espe­cial­ly in key con­gres­sion­al dis­tricts.

    On March 12, a let­ter to the edi­tor that Tar­get­ed Vic­to­ry offi­cials helped orches­trate ran in the Den­ver Post. The let­ter, from a “con­cerned” “new par­ent,” claimed that Tik­Tok was harm­ful to children’s men­tal health, raised con­cerns over its data pri­va­cy prac­tices and said that “many peo­ple even sus­pect Chi­na is delib­er­ate­ly col­lect­ing behav­ioral data on our kids.” The let­ter also issued sup­port for Col­orado Attor­ney Gen­er­al Phil Weiser’s choice to join a coali­tion of state attor­neys gen­er­al inves­ti­gat­ing TikTok’s impact on Amer­i­can youths, putting polit­i­cal pres­sure on the com­pa­ny.

    A very sim­i­lar let­ter to the edi­tor, draft­ed by Tar­get­ed Vic­to­ry, ran that same day in the Des Moines Reg­is­ter. The piece linked to neg­a­tive sto­ries about Tik­Tok that Tar­get­ed Vic­to­ry had pre­vi­ous­ly sought to ampli­fy. The let­ter was signed by Mary McAdams, chair of the Anke­ny Area Democ­rats. Tar­get­ed Vic­to­ry tout­ed McAdams’ cre­den­tials in an email on March 7.

    “[McAdams’s] name on this [let­ter to the edi­tor] will car­ry a lot of weight with leg­is­la­tors and stake­hold­ers,” a Tar­get­ed Vic­to­ry direc­tor wrote. The email then encour­aged part­ners across oth­er states to look for oppor­tu­ni­ties to add to the cam­paign, “espe­cial­ly if your state AG sud­den­ly joins on.”

    ...

    In an email sent last week to local con­trac­tors, Tar­get­ed Vic­to­ry asked each team to “be pre­pared to share the op-ed they’re work­ing on right now.” “Col­orado and Iowa — Can you talk about the Tik­Tok Op-eds you both got?” a Tar­get­ed Vic­to­ry rep­re­sen­ta­tive asked.

    The emails show how the firm has effec­tive­ly pro­mot­ed its anti-Tik­Tok mes­sag­ing with­out reveal­ing that it came from a firm work­ing on Meta’s behalf. None of the op-eds or let­ters to the edi­tor were pub­lished with any indi­ca­tion that the Meta-fund­ed group had been involved.

    Launched as a Repub­li­can dig­i­tal con­sult­ing firm by Zac Mof­fatt, a dig­i­tal direc­tor for Mitt Romney’s 2012 pres­i­den­tial cam­paign, Tar­get­ed Vic­to­ry has rou­tine­ly advised Face­book offi­cials over the years, includ­ing dur­ing a high-pro­file con­gres­sion­al hear­ing after the 2016 elec­tion.

    The Arling­ton, Va.-based firm adver­tis­es on its web­site that it brings “a right-of-cen­ter per­spec­tive to solve mar­ket­ing chal­lenges” and can deploy field teams “any­where in the coun­try with­in 48 hours.”

    The firm is one of the biggest recip­i­ents of Repub­li­can cam­paign spend­ing, earn­ing more than $237 mil­lion in 2020, accord­ing to data com­piled by OpenSe­crets. Its biggest pay­ments came from nation­al GOP con­gres­sion­al com­mit­tees and Amer­i­ca First Action, a pro-Trump super PAC.

    In 2020, the firm said it was expand­ing its “cri­sis prac­tice and cor­po­rate affairs offer­ings” because of its clients’ grow­ing need for “issues man­age­ment and exec­u­tive posi­tion­ing,” adding that it would focus its efforts toward “authen­tic sto­ry­telling” with a “hyper-local approach.”

    Some of the emails tar­get­ing Tik­Tok were sent in Feb­ru­ary, short­ly after Meta announced that Face­book had lost users for the first time in its 18-year his­to­ry. Meta chief exec­u­tive Mark Zucker­berg told investors then that Tik­Tok was a major obsta­cle, say­ing, “Peo­ple have a lot of choic­es for how they want to spend their time, and apps like Tik­Tok are grow­ing very quick­ly.” The com­pa­ny has unveiled a Tik­Tok clone, a short-video fea­ture called Reels, and pro­motes it heav­i­ly in its Insta­gram app.

    In a 2019 speech at George­town Uni­ver­si­ty, dur­ing which he invoked the Rev. Mar­tin Luther King Jr. and cham­pi­oned Facebook’s role in pro­mot­ing free speech, Zucker­berg crit­i­cized Tik­Tok for reports it had banned dis­cus­sion of top­ics deemed sub­ver­sive by the Chi­nese gov­ern­ment, say­ing, “Is that the Inter­net that we want?” (The Wash­ing­ton Post and the Guardian had pre­vi­ous­ly high­light­ed those con­tent-mod­er­a­tion rules. Tik­Tok has said those guide­lines were out­dat­ed and that its U.S. busi­ness now oper­ates under dif­fer­ent rules than its Chi­nese coun­ter­part.)

    But Zucker­berg has also point­ed at Tik­Tok to counter con­cerns that Face­book holds a monop­oly on social media. Tik­Tok is the “fastest-grow­ing app,” he said in his open­ing remarks at a hear­ing of the House antitrust sub­com­mit­tee in 2020.

    The anti-Tik­Tok cam­paign fol­lows in a long line of Face­book-fund­ed advo­ca­cy groups work­ing to boost its stand­ing in the pub­lic eye.

    In 2018, Face­book worked with Defin­ers Pub­lic Affairs, anoth­er Wash­ing­ton con­sult­ing firm found­ed by Repub­li­can polit­i­cal vet­er­ans, to lash out at crit­ics and oth­er tech com­pa­nies, includ­ing Apple and Google, dur­ing the Cam­bridge Ana­lyt­i­ca scan­dal that sparked glob­al out­rage over Facebook’s pri­va­cy rules. (The com­pa­ny said it stopped work­ing with Defin­ers short­ly after a New York Times report on the arrange­ment.)

    And in 2019, as the com­pa­ny faced antitrust scruti­ny over its gar­gan­tu­an impact, Face­book drove the cre­ation of a polit­i­cal advo­ca­cy group, Amer­i­can Edge, designed to per­suade Wash­ing­ton law­mak­ers that Sil­i­con Val­ley was crit­i­cal to the U.S. econ­o­my — and that overt reg­u­la­tion could weak­en the country’s com­pet­i­tive­ness in a tech­nol­o­gy race against Chi­na.

    Meta out­spends all but six of the nation’s biggest com­pa­nies and indus­try groups in fed­er­al lob­by­ing, pay­ing more than $20 mil­lion last year, accord­ing to data com­piled by OpenSe­crets.

    ———–

    “Face­book paid GOP firm to malign Tik­Tok” by Tay­lor Lorenz and Drew Har­well; The Wash­ing­ton Post; 03/30/2022

    “The emails, which have not been pre­vi­ous­ly report­ed, show the extent to which Meta and its part­ners will use oppo­si­tion-research tac­tics on the Chi­nese-owned, multi­bil­lion-dol­lar rival that has become one of the most down­loaded apps in the world, often out­rank­ing even Meta’s pop­u­lar Face­book and Insta­gram apps. In an inter­nal report last year leaked by the whistle­blow­er Frances Hau­gen, Face­book researchers said teens were spend­ing “2–3X more time” on Tik­Tok than Insta­gram, and that Facebook’s pop­u­lar­i­ty among young peo­ple had plum­met­ed.”

    It’s all quite a coincidence that the one social media app more popular than Facebook and Instagram is the target of a secret smear campaign, a campaign that included attributing to TikTok toxic viral memes that actually originated on Facebook. The sleaze abounds:

    ...
    One trend Tar­get­ed Vic­to­ry sought to enhance through its work was the “devi­ous licks” chal­lenge, which showed stu­dents van­dal­iz­ing school prop­er­ty. Through the “Bad Tik­Tok Clips” doc­u­ment, the firm pushed sto­ries about the “devi­ous licks” chal­lenge in local media across Mass­a­chu­setts, Michi­gan, Min­neso­ta, Rhode Island and Wash­ing­ton, D.C.

    That trend led Sen. Richard Blu­men­thal (D‑Conn.) to write a let­ter in Sep­tem­ber call­ing on Tik­Tok exec­u­tives to tes­ti­fy in front of a Sen­ate sub­com­mit­tee, say­ing the app had been “repeat­ed­ly mis­used and abused to pro­mote behav­ior and actions that encour­age harm­ful and destruc­tive acts.” But accord­ing to an inves­ti­ga­tion by Anna Foley at the pod­cast net­work Gim­let, rumors of the “devi­ous licks” chal­lenge ini­tial­ly spread on Face­book, not Tik­Tok.

    In Octo­ber, Tar­get­ed Vic­to­ry worked to spread rumors of the “Slap a Teacher Tik­Tok chal­lenge” in local news, tout­ing a local news report on the alleged chal­lenge in Hawaii. In real­i­ty, no such chal­lenge exist­ed on Tik­Tok. Again, the rumor start­ed on Face­book, accord­ing to a series of Face­book posts first doc­u­ment­ed by Insid­er.
    ...

    And while some of these incriminating leaked emails are from February of 2022, right around the time Facebook was forced to announce the first drop in users in its 18-year history, the relationship between Facebook and Targeted Victory has been ongoing since 2016. In other words, this GOP PR firm has presumably carried out quite a few other sleazy public relations campaigns for Facebook that we just haven’t learned about yet:

    ...
    Tar­get­ed Vic­to­ry declined to respond to ques­tions about the cam­paign, say­ing only that it has rep­re­sent­ed Meta for sev­er­al years and is “proud of the work we have done.”

    ...

    Launched as a Repub­li­can dig­i­tal con­sult­ing firm by Zac Mof­fatt, a dig­i­tal direc­tor for Mitt Romney’s 2012 pres­i­den­tial cam­paign, Tar­get­ed Vic­to­ry has rou­tine­ly advised Face­book offi­cials over the years, includ­ing dur­ing a high-pro­file con­gres­sion­al hear­ing after the 2016 elec­tion.

    ...

    Some of the emails tar­get­ing Tik­Tok were sent in Feb­ru­ary, short­ly after Meta announced that Face­book had lost users for the first time in its 18-year his­to­ry. Meta chief exec­u­tive Mark Zucker­berg told investors then that Tik­Tok was a major obsta­cle, say­ing, “Peo­ple have a lot of choic­es for how they want to spend their time, and apps like Tik­Tok are grow­ing very quick­ly.” The com­pa­ny has unveiled a Tik­Tok clone, a short-video fea­ture called Reels, and pro­motes it heav­i­ly in its Insta­gram app.

    In a 2019 speech at George­town Uni­ver­si­ty, dur­ing which he invoked the Rev. Mar­tin Luther King Jr. and cham­pi­oned Facebook’s role in pro­mot­ing free speech, Zucker­berg crit­i­cized Tik­Tok for reports it had banned dis­cus­sion of top­ics deemed sub­ver­sive by the Chi­nese gov­ern­ment, say­ing, “Is that the Inter­net that we want?” (The Wash­ing­ton Post and the Guardian had pre­vi­ous­ly high­light­ed those con­tent-mod­er­a­tion rules. Tik­Tok has said those guide­lines were out­dat­ed and that its U.S. busi­ness now oper­ates under dif­fer­ent rules than its Chi­nese coun­ter­part.)
    ...

    And as the article hints, this story is merely one example of how Facebook works with GOP-connected public relations firms to carry out dirty-tricks campaigns. How many other GOP-connected firms is Facebook secretly working with for dirty PR? Who knows, but we can be pretty confident there are plenty more stories of this nature, given that Meta outspends all but six of the nation’s biggest companies and industry groups on federal lobbying:

    ...
    The anti-Tik­Tok cam­paign fol­lows in a long line of Face­book-fund­ed advo­ca­cy groups work­ing to boost its stand­ing in the pub­lic eye.

    In 2018, Face­book worked with Defin­ers Pub­lic Affairs, anoth­er Wash­ing­ton con­sult­ing firm found­ed by Repub­li­can polit­i­cal vet­er­ans, to lash out at crit­ics and oth­er tech com­pa­nies, includ­ing Apple and Google, dur­ing the Cam­bridge Ana­lyt­i­ca scan­dal that sparked glob­al out­rage over Facebook’s pri­va­cy rules. (The com­pa­ny said it stopped work­ing with Defin­ers short­ly after a New York Times report on the arrange­ment.)

    And in 2019, as the com­pa­ny faced antitrust scruti­ny over its gar­gan­tu­an impact, Face­book drove the cre­ation of a polit­i­cal advo­ca­cy group, Amer­i­can Edge, designed to per­suade Wash­ing­ton law­mak­ers that Sil­i­con Val­ley was crit­i­cal to the U.S. econ­o­my — and that overt reg­u­la­tion could weak­en the country’s com­pet­i­tive­ness in a tech­nol­o­gy race against Chi­na.

    Meta out­spends all but six of the nation’s biggest com­pa­nies and indus­try groups in fed­er­al lob­by­ing, pay­ing more than $20 mil­lion last year, accord­ing to data com­piled by OpenSe­crets.
    ...

    So are we going to learn that Facebook has now dropped Targeted Victory, just as it dropped Definers Public Affairs in 2018 after its work was exposed? We’ll see. Either way, we probably won’t hear anything about the new GOP-affiliated PR firm hired to replace it, at least not until the next round of reporting delivers more revelations about this enduring ‘secret’ Meta-GOP relationship.

    But all of this also raises an intriguing question about the arrangement, in which Facebook hires GOP-affiliated firms to carry out deceptive attacks on its rivals: doesn’t this effectively give the GOP leverage over Facebook/Meta? After all, when these kinds of stories come out, the embarrassment falls pretty much entirely on Meta; it’s not like the GOP PR firm cares. So given that we still don’t know the exact nature of the deal worked out between Mark Zuckerberg, Peter Thiel, and Donald Trump at that secret 2019 dinner at the White House, it’s worth keeping in mind that Facebook has apparently decided to use the GOP for its secret sleaze projects, a decision that presumably gave the GOP quite a few ‘favors’ it could ask for in return.

    Posted by Pterrafractyl | April 2, 2022, 3:22 pm
