Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

For The Record  

FTR #1021 FascisBook: (In Your Facebook, Part 3–A Virtual Panopticon, Part 3)

Dave Emory’s entire lifetime of work is available on a flash drive that can be obtained HERE. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by the fall of 2017. The new drive is available for a tax-deductible contribution of $65.00 or more.

WFMU-FM is pod­cast­ing For The Record–You can sub­scribe to the pod­cast HERE.

You can sub­scribe to e‑mail alerts from Spitfirelist.com HERE.

You can sub­scribe to RSS feed from Spitfirelist.com HERE.

You can sub­scribe to the com­ments made on pro­grams and posts–an excel­lent source of infor­ma­tion in, and of, itself HERE.

This broadcast was recorded in one 60-minute segment.

Peter Thiel

Introduction: This program follows up FTR #‘s 718 and 946, in which we examined Facebook, noting how its cute, warm, friendly public facade obscured a cynical, reactionary, exploitative and, ultimately, “corporatist” ethic and operation.

The UK’s Channel 4 sent an investigative journalist undercover to work for one of the third-party companies Facebook pays to moderate content. This investigative journalist was trained to take a hands-off approach to far-right violent content and fake news, because that kind of content engages users for longer and increases ad revenues. ” . . . . An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups ‘exceed deletion threshold,’ and that those pages are ‘subject to different treatment in the same category as pages belonging to governments and news organizations.’ The accusation is a damning one, undermining Facebook’s claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. . . . .”

Next, we present a frightening story about AggregateIQ (AIQ), the Cambridge Analytica offshoot to which Cambridge Analytica outsourced the development of its “Ripon” psychological profiling software, and which later played a key role in the pro-Brexit campaign. The article also notes that, despite Facebook’s pledge to kick Cambridge Analytica off of its platform, security researchers just found 13 apps available for Facebook that appear to have been developed by AIQ. If Facebook really was trying to kick Cambridge Analytica off of its platform, it’s not trying very hard. One app is even named “AIQ Johnny Scraper,” and it’s registered to AIQ.

The article is also a reminder that you don’t necessarily need to download a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. A security researcher stumbled upon a new repository of curated Facebook data that AIQ was creating for a client, and it’s entirely possible that much of the data was scraped from public Facebook posts.

” . . . . AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called ‘AIQ Johnny Scraper’ registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts. . . .”

In addition, the story highlights a form of micro-targeting that companies like AIQ make available which is fundamentally different from the algorithmic micro-targeting associated with social media abuses: micro-targeting by a human who wants to look specifically at what you personally have said about various topics on social media. This is a service where someone can type you into a search engine and AIQ’s product will serve up a list of all the various political posts you’ve made, or the politically-relevant “Likes” you’ve made.

Next, we note that Facebook is being sued by an app developer for acting like the mafia, using access to all that user data as its key enforcement tool:

“Mark Zucker­berg faces alle­ga­tions that he devel­oped a ‘mali­cious and fraud­u­lent scheme’ to exploit vast amounts of pri­vate data to earn Face­book bil­lions and force rivals out of busi­ness. A com­pa­ny suing Face­book in a Cal­i­for­nia court claims the social network’s chief exec­u­tive ‘weaponised’ the abil­i­ty to access data from any user’s net­work of friends – the fea­ture at the heart of the Cam­bridge Ana­lyt­i­ca scan­dal.  . . . . ‘The evi­dence uncov­ered by plain­tiff demon­strates that the Cam­bridge Ana­lyt­i­ca scan­dal was not the result of mere neg­li­gence on Facebook’s part but was rather the direct con­se­quence of the mali­cious and fraud­u­lent scheme Zucker­berg designed in 2012 to cov­er up his fail­ure to antic­i­pate the world’s tran­si­tion to smart­phones,’ legal doc­u­ments said. . . . . Six4Three alleges up to 40,000 com­pa­nies were effec­tive­ly defraud­ed in this way by Face­book. It also alleges that senior exec­u­tives includ­ing Zucker­berg per­son­al­ly devised and man­aged the scheme, indi­vid­u­al­ly decid­ing which com­pa­nies would be cut off from data or allowed pref­er­en­tial access. . . . ‘They felt that it was bet­ter not to know. I found that utter­ly hor­ri­fy­ing,’ he [for­mer Face­book exec­u­tive Sandy Parak­i­las] said. ‘If true, these alle­ga­tions show a huge betray­al of users, part­ners and reg­u­la­tors. They would also show Face­book using its monop­oly pow­er to kill com­pe­ti­tion and putting prof­its over pro­tect­ing its users.’ . . . .”

The above-men­tioned Cam­bridge Ana­lyt­i­ca is offi­cial­ly going bank­rupt, along with the elec­tions divi­sion of its par­ent com­pa­ny, SCL Group. Appar­ent­ly their bad press has dri­ven away clients.

Is this tru­ly the end of Cam­bridge Ana­lyt­i­ca?

No.

They’re rebranding under a new company, Emerdata. Intriguingly, the new firm’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince: ” . . . . But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. . . . An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . .”

In the Big Data internet age, there’s one area of personal information that has yet to be incorporated into the profiles compiled on everyone: personal banking information. ” . . . . If tech companies are in control of payment systems, they’ll know ‘every single thing you do,’ Kapito said. It’s a different business model from traditional banking: Data is more valuable for tech firms that sell a range of different products than it is for banks that only sell financial services, he said. . . .”

Facebook is approaching a number of big banks – JP Morgan, Wells Fargo, Citigroup, and US Bancorp – requesting financial data including card transactions and checking-account balances. Facebook is joined in this by Google and Amazon, which are also trying to get this kind of data.

Facebook assures us that this information, which will be opt-in, is to be used solely for offering new services on Facebook Messenger. Facebook also assures us that this information, which would obviously be invaluable for delivering ads, won’t be used for ads at all. It will ONLY be used for Facebook’s Messenger service. This is a dubious assurance, in light of Facebook’s past behavior.

” . . . . Face­book increas­ing­ly wants to be a plat­form where peo­ple buy and sell goods and ser­vices, besides con­nect­ing with friends. The com­pa­ny over the past year asked JPMor­gan Chase & Co., Wells Far­go & Co., Cit­i­group Inc. and U.S. Ban­corp to dis­cuss poten­tial offer­ings it could host for bank cus­tomers on Face­book Mes­sen­ger, said peo­ple famil­iar with the mat­ter. Face­book has talked about a fea­ture that would show its users their check­ing-account bal­ances, the peo­ple said. It has also pitched fraud alerts, some of the peo­ple said. . . .”

Peter Thiel’s surveillance firm Palantir was apparently deeply involved with Cambridge Analytica’s gaming of personal data harvested from Facebook in order to engineer an electoral victory for Trump. Thiel was an early investor in Facebook, was at one point its largest shareholder, and remains one of its largest shareholders. ” . . . . It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times. The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook. ‘There were senior Palantir employees that were also working on the Facebook data,’ said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . . The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .”

Pro­gram High­lights Include:

  1. Face­book’s project to incor­po­rate brain-to-com­put­er inter­face into its oper­at­ing sys­tem: ” . . . Face­book wants to build its own “brain-to-com­put­er inter­face” that would allow us to send thoughts straight to a com­put­er. ‘What if you could type direct­ly from your brain?’ Regi­na Dugan, the head of the company’s secre­tive hard­ware R&D divi­sion, Build­ing 8, asked from the stage. Dugan then pro­ceed­ed to show a video demo of a woman typ­ing eight words per minute direct­ly from the stage. In a few years, she said, the team hopes to demon­strate a real-time silent speech sys­tem capa­ble of deliv­er­ing a hun­dred words per minute. ‘That’s five times faster than you can type on your smart­phone, and it’s straight from your brain,’ she said. ‘Your brain activ­i­ty con­tains more infor­ma­tion than what a word sounds like and how it’s spelled; it also con­tains seman­tic infor­ma­tion of what those words mean.’ . . .”
  2. ” . . . . Brain-com­put­er inter­faces are noth­ing new. DARPA, which Dugan used to head, has invest­ed heav­i­ly in brain-com­put­er inter­face tech­nolo­gies to do things like cure men­tal ill­ness and restore mem­o­ries to sol­diers injured in war. But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”
  3. ” . . . . Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. . . .”
  4. ” . . . . Face­book hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”
  5. ” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”
  6. Some telling observations by Nigel Oakes, the founder of Cambridge Analytica parent firm SCL: ” . . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . .”
  7. Further exposition of Oakes’ statement: ” . . . . Adolf Hitler ‘didn’t have a problem with the Jews at all, but people didn’t like the Jews,’ he told the academic, Emma L. Briant, a senior lecturer in journalism at the University of Essex. He went on to say that Donald J. Trump had done the same thing by tapping into grievances toward immigrants and Muslims. . . . ‘What happened with Trump, you can forget all the microtargeting and microdata and whatever, and come back to some very, very simple things,’ he told Dr. Briant. ‘Trump had the balls, and I mean, really the balls, to say what people wanted to hear.’ . . .”
  8. Observations about the possibilities of Facebook’s goal of having AI govern the editorial functions of its content, as noted in a Popular Mechanics article: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout ‘we’re AWFUL lol,’ the lol might be the one part it doesn’t understand. . . .”
  9. Microsoft’s Tay Chatbot offers a glimpse into this future: As one Twitter user noted, employing sarcasm: “Tay went from ‘humans are super cool’ to full nazi in <24 hrs and I’m not at all concerned about the future of AI.”

1. The UK’s Channel 4 sent an investigative journalist undercover to work for one of the third-party companies Facebook pays to moderate content. This investigative journalist was trained to take a hands-off approach to far-right violent content and fake news, because that kind of content engages users for longer and increases ad revenues. ” . . . . An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups ‘exceed deletion threshold,’ and that those pages are ‘subject to different treatment in the same category as pages belonging to governments and news organizations.’ The accusation is a damning one, undermining Facebook’s claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. . . . .”

“Under­cov­er Face­book mod­er­a­tor Was Instruct­ed Not to Remove Fringe Groups or Hate Speech” by Nick Statt; The Verge; 07/17/2018

An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups “exceed deletion threshold,” and that those pages are “subject to different treatment in the same category as pages belonging to governments and news organizations.” The accusation is a damning one, undermining Facebook’s claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. The investigation outlines questionable practices on behalf of CPL Resources, a third-party content moderator firm based in Dublin that Facebook has worked with since 2010.

Those ques­tion­able prac­tices pri­mar­i­ly involve a hands-off approach to flagged and report­ed con­tent like graph­ic vio­lence, hate speech, and racist and oth­er big­ot­ed rhetoric from far-right groups. The under­cov­er reporter says he was also instruct­ed to ignore users who looked as if they were under 13 years of age, which is the min­i­mum age require­ment to sign up for Face­book in accor­dance with the Child Online Pro­tec­tion Act, a 1998 pri­va­cy law passed in the US designed to pro­tect young chil­dren from exploita­tion and harm­ful and vio­lent con­tent on the inter­net. The doc­u­men­tary insin­u­ates that Face­book takes a hands-off approach to such con­tent, includ­ing bla­tant­ly false sto­ries parad­ing as truth, because it engages users for longer and dri­ves up adver­tis­ing rev­enue. . . . 

. . . . And as the Chan­nel 4 doc­u­men­tary makes clear, that thresh­old appears to be an ever-chang­ing met­ric that has no con­sis­ten­cy across par­ti­san lines and from legit­i­mate media orga­ni­za­tions to ones that ped­dle in fake news, pro­pa­gan­da, and con­spir­a­cy the­o­ries. It’s also unclear how Face­book is able to enforce its pol­i­cy with third-par­ty mod­er­a­tors all around the world, espe­cial­ly when they may be incen­tivized by any num­ber of per­for­mance met­rics and per­son­al bias­es. .  . . .

Mean­while, Face­book is ramp­ing up efforts in its arti­fi­cial intel­li­gence divi­sion, with the hope that one day algo­rithms can solve these press­ing mod­er­a­tion prob­lems with­out any human input. Ear­li­er today, the com­pa­ny said it would be accel­er­at­ing its AI research efforts to include more researchers and engi­neers, as well as new acad­e­mia part­ner­ships and expan­sions of its AI research labs in eight loca­tions around the world. . . . .The long-term goal of the company’s AI divi­sion is to cre­ate “machines that have some lev­el of com­mon sense” and that learn “how the world works by obser­va­tion, like young chil­dren do in the first few months of life.” . . . .

2. Next, we present a frightening story about AggregateIQ (AIQ), the Cambridge Analytica offshoot to which Cambridge Analytica outsourced the development of its “Ripon” psychological profiling software, and which later played a key role in the pro-Brexit campaign. The article also notes that, despite Facebook’s pledge to kick Cambridge Analytica off of its platform, security researchers just found 13 apps available for Facebook that appear to have been developed by AIQ. If Facebook really was trying to kick Cambridge Analytica off of its platform, it’s not trying very hard. One app is even named “AIQ Johnny Scraper,” and it’s registered to AIQ.

The following article is also a reminder that you don’t necessarily need to download a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. A security researcher stumbled upon a new repository of curated Facebook data that AIQ was creating for a client, and it’s entirely possible that much of the data was scraped from public Facebook posts.

” . . . . AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called ‘AIQ Johnny Scraper’ registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts. . . .”

Additionally, the story highlights a form of micro-targeting that companies like AIQ make available which is fundamentally different from the algorithmic micro-targeting we typically associate with social media abuses: micro-targeting by a human who wants to look specifically at what you personally have said about various topics on social media. This is a service where someone can type you into a search engine and AIQ’s product will serve up a list of all the various political posts you’ve made, or the politically-relevant “Likes” you’ve made.

It’s also worth noting that this service would be perfect for accomplishing the right-wing’s long-standing goal of purging the federal government of liberal employees, a goal that ‘Alt-Right’ neo-Nazi troll Charles C. Johnson and ‘Alt-Right’ neo-Nazi billionaire Peter Thiel were reportedly helping the Trump team accomplish during the transition period. An ideological purge of the State Department is reportedly already underway.

“Aggre­gateIQ Had Data of Thou­sands of Face­book Users” by Aliya Ram and Han­nah Kuch­ler; Finan­cial Times; 06/01/2018

AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called “AIQ Johnny Scraper” registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts.

The tech­nol­o­gy group now says it shut down the John­ny Scraper app this week along with 13 oth­ers that could be relat­ed to Aggre­gateIQ, with a total of 1,000 users.

Ime Archibong, vice-president of product partnerships, said the company was investigating whether there had been any misuse of data. “We have suspended an additional 14 apps this week, which were installed by around 1,000 people,” he said. “They were all created after 2014 and so did not have access to friends’ data. However, these apps appear to be linked to AggregateIQ, which was affiliated with Cambridge Analytica. So we have suspended them while we investigate further.”

Accord­ing to files seen by the Finan­cial Times, Aggre­gateIQ had stored a list of 759,934 Face­book users in a table that record­ed home address­es, phone num­bers and email address­es for some pro­files.

Jeff Sil­vester, Aggre­gateIQ chief oper­at­ing offi­cer, said the file came from soft­ware designed for a par­tic­u­lar client, which tracked which users had liked a par­tic­u­lar page or were post­ing pos­i­tive and neg­a­tive com­ments.

“I believe as part of that the client did attempt to match peo­ple who had liked their Face­book page with sup­port­ers in their vot­er file [online elec­toral records],” he said. “I believe the result of this match­ing is what you are look­ing at. This is a fair­ly com­mon task that vot­er file tools do all of the time.”

He added that the pur­pose of the John­ny Scraper app was to repli­cate Face­book posts made by one of AggregateIQ’s clients into smart­phone apps that also belonged to the client.

Aggre­gateIQ has sought to dis­tance itself from an inter­na­tion­al pri­va­cy scan­dal engulf­ing Face­book and Cam­bridge Ana­lyt­i­ca, despite alle­ga­tions from Christo­pher Wylie, a whistle­blow­er at the now-defunct UK firm, that it had act­ed as the Cana­di­an branch of the organ­i­sa­tion.

The files do not indi­cate whether users had giv­en per­mis­sion for their Face­book “Likes” to be tracked through third-par­ty apps, or whether they were scraped from pub­licly vis­i­ble pages. Mr Vick­ery, who analysed AggregateIQ’s files after uncov­er­ing a trove of infor­ma­tion online, said that the com­pa­ny appeared to have gath­ered data from Face­book users despite telling Cana­di­an MPs “we don’t real­ly process data on folks”.

The files also include posts that focus on polit­i­cal issues with state­ments such as: “Like if you agree with Rea­gan that ‘gov­ern­ment is the prob­lem’,” but it is not clear if this infor­ma­tion orig­i­nat­ed on Face­book. Mr Sil­vester said the soft­ware Aggre­gateIQ had designed allowed its client to browse pub­lic com­ments. “It is pos­si­ble that some of those pub­lic com­ments or posts are in the file,” he said. . . .

. . . . “The over­all theme of these com­pa­nies and the way their tools work is that every­thing is reliant on every­thing else, but has enough inde­pen­dent oper­abil­i­ty to pre­serve deni­a­bil­i­ty,” said Mr Vick­ery. “But when you com­bine all these dif­fer­ent data sources togeth­er it becomes some­thing else.” . . . .

3. Facebook is being sued by an app developer for acting like the mafia, using access to all that user data as its key enforcement tool:

“Mark Zucker­berg faces alle­ga­tions that he devel­oped a ‘mali­cious and fraud­u­lent scheme’ to exploit vast amounts of pri­vate data to earn Face­book bil­lions and force rivals out of busi­ness. A com­pa­ny suing Face­book in a Cal­i­for­nia court claims the social network’s chief exec­u­tive ‘weaponised’ the abil­i­ty to access data from any user’s net­work of friends – the fea­ture at the heart of the Cam­bridge Ana­lyt­i­ca scan­dal.  . . . . ‘The evi­dence uncov­ered by plain­tiff demon­strates that the Cam­bridge Ana­lyt­i­ca scan­dal was not the result of mere neg­li­gence on Facebook’s part but was rather the direct con­se­quence of the mali­cious and fraud­u­lent scheme Zucker­berg designed in 2012 to cov­er up his fail­ure to antic­i­pate the world’s tran­si­tion to smart­phones,’ legal doc­u­ments said. . . . . Six4Three alleges up to 40,000 com­pa­nies were effec­tive­ly defraud­ed in this way by Face­book. It also alleges that senior exec­u­tives includ­ing Zucker­berg per­son­al­ly devised and man­aged the scheme, indi­vid­u­al­ly decid­ing which com­pa­nies would be cut off from data or allowed pref­er­en­tial access. . . . ‘They felt that it was bet­ter not to know. I found that utter­ly hor­ri­fy­ing,’ he [for­mer Face­book exec­u­tive Sandy Parak­i­las] said. ‘If true, these alle­ga­tions show a huge betray­al of users, part­ners and reg­u­la­tors. They would also show Face­book using its monop­oly pow­er to kill com­pe­ti­tion and putting prof­its over pro­tect­ing its users.’ . . . .”

“Zucker­berg Set Up Fraud­u­lent Scheme to ‘Weaponise’ Data, Court Case Alleges” by Car­ole Cad­wal­ladr and Emma Gra­ham-Har­ri­son; The Guardian; 05/24/2018

Mark Zucker­berg faces alle­ga­tions that he devel­oped a “mali­cious and fraud­u­lent scheme” to exploit vast amounts of pri­vate data to earn Face­book bil­lions and force rivals out of busi­ness. A com­pa­ny suing Face­book in a Cal­i­for­nia court claims the social network’s chief exec­u­tive “weaponised” the abil­i­ty to access data from any user’s net­work of friends – the fea­ture at the heart of the Cam­bridge Ana­lyt­i­ca scan­dal.

A legal motion filed last week in the supe­ri­or court of San Mateo draws upon exten­sive con­fi­den­tial emails and mes­sages between Face­book senior exec­u­tives includ­ing Mark Zucker­berg. He is named indi­vid­u­al­ly in the case and, it is claimed, had per­son­al over­sight of the scheme.

Face­book rejects all claims, and has made a motion to have the case dis­missed using a free speech defence.

It claims the first amend­ment pro­tects its right to make “edi­to­r­i­al deci­sions” as it sees fit. Zucker­berg and oth­er senior exec­u­tives have assert­ed that Face­book is a plat­form not a pub­lish­er, most recent­ly in tes­ti­mo­ny to Con­gress.

Heather Whit­ney, a legal schol­ar who has writ­ten about social media com­pa­nies for the Knight First Amend­ment Insti­tute at Colum­bia Uni­ver­si­ty, said, in her opin­ion, this exposed a poten­tial ten­sion for Face­book.

“Facebook’s claims in court that it is an edi­tor for first amend­ment pur­pos­es and thus free to cen­sor and alter the con­tent avail­able on its site is in ten­sion with their, espe­cial­ly recent, claims before the pub­lic and US Con­gress to be neu­tral plat­forms.”

The com­pa­ny that has filed the case, a for­mer start­up called Six4Three, is now try­ing to stop Face­book from hav­ing the case thrown out and has sub­mit­ted legal argu­ments that draw on thou­sands of emails, the details of which are cur­rent­ly redact­ed. Face­book has until next Tues­day to file a motion request­ing that the evi­dence remains sealed, oth­er­wise the doc­u­ments will be made pub­lic.

The devel­op­er alleges the cor­re­spon­dence shows Face­book paid lip ser­vice to pri­va­cy con­cerns in pub­lic but behind the scenes exploit­ed its users’ pri­vate infor­ma­tion.

It claims inter­nal emails and mes­sages reveal a cyn­i­cal and abu­sive sys­tem set up to exploit access to users’ pri­vate infor­ma­tion, along­side a raft of anti-com­pet­i­tive behav­iours. . . .

. . . . The papers sub­mit­ted to the court last week allege Face­book was not only aware of the impli­ca­tions of its pri­va­cy pol­i­cy, but active­ly exploit­ed them, inten­tion­al­ly cre­at­ing and effec­tive­ly flag­ging up the loop­hole that Cam­bridge Ana­lyt­i­ca used to col­lect data on up to 87 mil­lion Amer­i­can users.

The law­suit also claims Zucker­berg mis­led the pub­lic and Con­gress about Facebook’s role in the Cam­bridge Ana­lyt­i­ca scan­dal by por­tray­ing it as a vic­tim of a third par­ty that had abused its rules for col­lect­ing and shar­ing data.

“The evi­dence uncov­ered by plain­tiff demon­strates that the Cam­bridge Ana­lyt­i­ca scan­dal was not the result of mere neg­li­gence on Facebook’s part but was rather the direct con­se­quence of the mali­cious and fraud­u­lent scheme Zucker­berg designed in 2012 to cov­er up his fail­ure to antic­i­pate the world’s tran­si­tion to smart­phones,” legal doc­u­ments said.

The law­suit claims to have uncov­ered fresh evi­dence con­cern­ing how Face­book made deci­sions about users’ pri­va­cy. It sets out alle­ga­tions that, in 2012, Facebook’s adver­tis­ing busi­ness, which focused on desk­top ads, was dev­as­tat­ed by a rapid and unex­pect­ed shift to smart­phones.

Zucker­berg respond­ed by forc­ing devel­op­ers to buy expen­sive ads on the new, under­used mobile ser­vice or risk hav­ing their access to data at the core of their busi­ness cut off, the court case alleges.

“Zucker­berg weaponised the data of one-third of the planet’s pop­u­la­tion in order to cov­er up his fail­ure to tran­si­tion Facebook’s busi­ness from desk­top com­put­ers to mobile ads before the mar­ket became aware that Facebook’s finan­cial pro­jec­tions in its 2012 IPO fil­ings were false,” one court fil­ing said.

In its lat­est fil­ing, Six4Three alleges Face­book delib­er­ate­ly used its huge amounts of valu­able and high­ly per­son­al user data to tempt devel­op­ers to cre­ate plat­forms with­in its sys­tem, imply­ing that they would have long-term access to per­son­al infor­ma­tion, includ­ing data from sub­scribers’ Face­book friends. 

Once their busi­ness­es were run­ning, and reliant on data relat­ing to “likes”, birth­days, friend lists and oth­er Face­book minu­ti­ae, the social media com­pa­ny could and did tar­get any that became too suc­cess­ful, look­ing to extract mon­ey from them, co-opt them or destroy them, the doc­u­ments claim.

Six4Three alleges up to 40,000 com­pa­nies were effec­tive­ly defraud­ed in this way by Face­book. It also alleges that senior exec­u­tives includ­ing Zucker­berg per­son­al­ly devised and man­aged the scheme, indi­vid­u­al­ly decid­ing which com­pa­nies would be cut off from data or allowed pref­er­en­tial access.

The law­suit alleges that Face­book ini­tial­ly focused on kick­start­ing its mobile adver­tis­ing plat­form, as the rapid adop­tion of smart­phones dec­i­mat­ed the desk­top adver­tis­ing busi­ness in 2012.

It lat­er used its abil­i­ty to cut off data to force rivals out of busi­ness, or coerce own­ers of apps Face­book cov­et­ed into sell­ing at below the mar­ket price, even though they were not break­ing any terms of their con­tracts, accord­ing to the doc­u­ments. . . .

. . . . David Godkin, Six4Three’s lead counsel, said: “We believe the public has a right to see the evidence and are confident the evidence clearly demonstrates the truth of our allegations, and much more.”

Sandy Parakilas, a former Facebook employee turned whistleblower who has testified to the UK parliament about its business practices, said the allegations were a “bombshell”. He claimed to MPs that Facebook’s senior executives were aware of abuses of friends’ data back in 2011-12 and that he was warned not to look into the issue.

“They felt that it was bet­ter not to know. I found that utter­ly hor­ri­fy­ing,” he said. “If true, these alle­ga­tions show a huge betray­al of users, part­ners and reg­u­la­tors. They would also show Face­book using its monop­oly pow­er to kill com­pe­ti­tion and putting prof­its over pro­tect­ing its users.” . . .

4. Cambridge Analytica is officially going bankrupt, along with the elections division of its parent company, SCL Group. Apparently the bad press has driven away its clients.

Is this tru­ly the end of Cam­bridge Ana­lyt­i­ca?

No.

They’re rebranding under a new company, Emerdata. Cambridge Analytica’s transformation into Emerdata is noteworthy because the firm’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince: ” . . . . But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. . . . An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . .

“Cam­bridge Ana­lyt­i­ca to File for Bank­rupt­cy After Mis­use of Face­book Data” by Nicholas Con­fes­sore and Matthew Rosen­berg; The New York Times; 5/02/2018.

. . . . In a state­ment post­ed to its web­site, Cam­bridge Ana­lyt­i­ca said the con­tro­ver­sy had dri­ven away vir­tu­al­ly all of the company’s cus­tomers, forc­ing it to file for bank­rupt­cy in both the Unit­ed States and Britain. The elec­tions divi­sion of Cambridge’s British affil­i­ate, SCL Group, will also shut down, the com­pa­ny said.

But the company’s announce­ment left sev­er­al ques­tions unan­swered, includ­ing who would retain the company’s intel­lec­tu­al prop­er­ty — the so-called psy­cho­graph­ic vot­er pro­files built in part with data from Face­book — and whether Cam­bridge Analytica’s data-min­ing busi­ness would return under new aus­pices. . . . 

. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. Mr. Prince founded the private security firm Blackwater, which was renamed Xe Services after Blackwater contractors were convicted of killing Iraqi civilians.

Cambridge and SCL officials privately raised the possibility that Emerdata could be used for a Blackwater-style rebranding of Cambridge Analytica and the SCL Group, according to two people with knowledge of the companies, who asked for anonymity to describe confidential conversations. One plan under consideration was to sell off the combined company’s data and intellectual property.

An exec­u­tive and a part own­er of SCL Group, Nigel Oakes, has pub­licly described Emer­da­ta as a way of rolling up the two com­pa­nies under one new ban­ner. . . . 

5. In the Big Data inter­net age, there’s one area of per­son­al infor­ma­tion that has yet to be incor­po­rat­ed into the pro­files on everyone–personal bank­ing infor­ma­tion.  ” . . . . If tech com­pa­nies are in con­trol of pay­ment sys­tems, they’ll know “every sin­gle thing you do,” Kapi­to said. It’s a dif­fer­ent busi­ness mod­el from tra­di­tion­al bank­ing: Data is more valu­able for tech firms that sell a range of dif­fer­ent prod­ucts than it is for banks that only sell finan­cial ser­vices, he said. . . .”

“Black­Rock Is Wor­ried Tech­nol­o­gy Firms Are About to Know ‘Every Sin­gle Thing You Do’” by John Detrix­he; Quartz; 11/02/2017

The pres­i­dent of Black­Rock, the world’s biggest asset man­ag­er, is among those who think big tech­nol­o­gy firms could invade the finan­cial industry’s turf. Google and Face­book have thrived by col­lect­ing and stor­ing data about con­sumer habits—our emails, search queries, and the videos we watch. Under­stand­ing of our finan­cial lives could be an even rich­er source of data for them to sell to adver­tis­ers.

“I wor­ry about the data,” said Black­Rock pres­i­dent Robert Kapi­to at a con­fer­ence in Lon­don today (Nov. 2). “We’re going to have some seri­ous com­peti­tors.”

If tech com­pa­nies are in con­trol of pay­ment sys­tems, they’ll know “every sin­gle thing you do,” Kapi­to said. It’s a dif­fer­ent busi­ness mod­el from tra­di­tion­al bank­ing: Data is more valu­able for tech firms that sell a range of dif­fer­ent prod­ucts than it is for banks that only sell finan­cial ser­vices, he said.

Kapi­to is wor­ried because the effort to win con­trol of pay­ment sys­tems is already underway—Apple will allow iMes­sage users to send cash to each oth­er, and Face­book is inte­grat­ing per­son-to-per­son Pay­Pal pay­ments into its Mes­sen­ger app.

As more pay­ments flow through mobile phones, banks are wor­ried they could get left behind, rel­e­gat­ed to serv­ing as low-mar­gin util­i­ties. To fight back, they’ve start­ed ini­tia­tives such as Zelle to com­pete with pay­ment ser­vices like Pay­Pal.

Barclays CEO Jes Staley pointed out at the conference that banks probably have the “richest data pool” of any sector, and he said some 25% of the UK’s economy flows through Barclays’ payment systems. The industry could use that information to offer better services. Companies could alert people that they’re not saving enough for retirement, or suggest ways to save money on their expenses. The trick is accessing that data and analyzing it like a big technology company would.

And banks still have one thing going for them: There’s a mas­sive fortress of rules and reg­u­la­tions sur­round­ing the indus­try. “No one wants to be reg­u­lat­ed like we are,” Sta­ley said.

6. Facebook is approaching a number of big banks – JPMorgan, Wells Fargo, Citigroup, and US Bancorp – requesting financial data including card transactions and checking-account balances. Facebook is joined in this by Google and Amazon, which are also trying to obtain this kind of data.

Facebook assures us that this information, which will be opt-in, will be used solely to offer new services on Facebook Messenger. Facebook also assures us that this information, which would obviously be invaluable for delivering ads, won’t be used for ads at all; it will be used ONLY for Facebook’s Messenger service. This is a dubious assurance in light of Facebook’s past behavior.

” . . . . Face­book increas­ing­ly wants to be a plat­form where peo­ple buy and sell goods and ser­vices, besides con­nect­ing with friends. The com­pa­ny over the past year asked JPMor­gan Chase & Co., Wells Far­go & Co., Cit­i­group Inc. and U.S. Ban­corp to dis­cuss poten­tial offer­ings it could host for bank cus­tomers on Face­book Mes­sen­ger, said peo­ple famil­iar with the mat­ter. Face­book has talked about a fea­ture that would show its users their check­ing-account bal­ances, the peo­ple said. It has also pitched fraud alerts, some of the peo­ple said. . . .”

“Face­book to Banks: Give Us Your Data, We’ll Give You Our Users” by Emi­ly Glaz­er, Deepa Seethara­man and Anna­Maria Andri­o­tis; The Wall Street Jour­nal; 08/06/2018

Face­book Inc. wants your finan­cial data.

The social-media giant has asked large U.S. banks to share detailed finan­cial infor­ma­tion about their cus­tomers, includ­ing card trans­ac­tions and check­ing-account bal­ances, as part of an effort to offer new ser­vices to users.

Face­book increas­ing­ly wants to be a plat­form where peo­ple buy and sell goods and ser­vices, besides con­nect­ing with friends. The com­pa­ny over the past year asked JPMor­gan Chase & Co., Wells Far­go & Co., Cit­i­group Inc. and U.S. Ban­corp to dis­cuss poten­tial offer­ings it could host for bank cus­tomers on Face­book Mes­sen­ger, said peo­ple famil­iar with the mat­ter.

Face­book has talked about a fea­ture that would show its users their check­ing-account bal­ances, the peo­ple said. It has also pitched fraud alerts, some of the peo­ple said.

Data pri­va­cy is a stick­ing point in the banks’ con­ver­sa­tions with Face­book, accord­ing to peo­ple famil­iar with the mat­ter. The talks are tak­ing place as Face­book faces sev­er­al inves­ti­ga­tions over its ties to polit­i­cal ana­lyt­ics firm Cam­bridge Ana­lyt­i­ca, which accessed data on as many as 87 mil­lion Face­book users with­out their con­sent.

One large U.S. bank pulled away from talks due to pri­va­cy con­cerns, some of the peo­ple said.

Facebook has told banks that the additional customer information could be used to offer services that might entice users to spend more time on Messenger, a person familiar with the discussions said. The company is trying to deepen user engagement: Investors shaved more than $120 billion from its market value in one day last month after it said its growth is starting to slow.

Face­book said it wouldn’t use the bank data for ad-tar­get­ing pur­pos­es or share it with third par­ties. . . .

. . . . Alpha­bet Inc.’s Google and Amazon.com Inc. also have asked banks to share data if they join with them, in order to pro­vide basic bank­ing ser­vices on appli­ca­tions such as Google Assis­tant and Alexa, accord­ing to peo­ple famil­iar with the con­ver­sa­tions. . . . 

7. In FTR #946, we examined Cambridge Analytica, the Trump and Steve Bannon-linked tech firm that harvested Facebook data on behalf of the Trump campaign.

Peter Thiel’s sur­veil­lance firm Palan­tir was appar­ent­ly deeply involved with Cam­bridge Ana­lyt­i­ca’s gam­ing of per­son­al data har­vest­ed from Face­book in order to engi­neer an elec­toral vic­to­ry for Trump. Thiel was an ear­ly investor in Face­book, at one point was its largest share­hold­er and is still one of its largest share­hold­ers. ” . . . . It was a Palan­tir employ­ee in Lon­don, work­ing close­ly with the data sci­en­tists build­ing Cambridge’s psy­cho­log­i­cal pro­fil­ing tech­nol­o­gy, who sug­gest­ed the sci­en­tists cre­ate their own app — a mobile-phone-based per­son­al­i­ty quiz — to gain access to Face­book users’ friend net­works, accord­ing to doc­u­ments obtained by The New York Times. The rev­e­la­tions pulled Palan­tir — co-found­ed by the wealthy lib­er­tar­i­an Peter Thiel — into the furor sur­round­ing Cam­bridge, which improp­er­ly obtained Face­book data to build ana­lyt­i­cal tools it deployed on behalf of Don­ald J. Trump and oth­er Repub­li­can can­di­dates in 2016. Mr. Thiel, a sup­port­er of Pres­i­dent Trump, serves on the board at Face­book. ‘There were senior Palan­tir employ­ees that were also work­ing on the Face­book data,’ said Christo­pher Wylie, a data expert and Cam­bridge Ana­lyt­i­ca co-founder, in tes­ti­mo­ny before British law­mak­ers on Tues­day. . . . The con­nec­tions between Palan­tir and Cam­bridge Ana­lyt­i­ca were thrust into the spot­light by Mr. Wylie’s tes­ti­mo­ny on Tues­day. Both com­pa­nies are linked to tech-dri­ven bil­lion­aires who backed Mr. Trump’s cam­paign: Cam­bridge is chiefly owned by Robert Mer­cer, the com­put­er sci­en­tist and hedge fund mag­nate, while Palan­tir was co-found­ed in 2003 by Mr. Thiel, who was an ini­tial investor in Face­book. . . .”

“Spy Contractor’s Idea Helped Cam­bridge Ana­lyt­i­ca Har­vest Face­book Data” by NICHOLAS CONFESSORE and MATTHEW ROSENBERG; The New York Times; 03/27/2018

As a start-up called Cam­bridge Ana­lyt­i­ca sought to har­vest the Face­book data of tens of mil­lions of Amer­i­cans in sum­mer 2014, the com­pa­ny received help from at least one employ­ee at Palan­tir Tech­nolo­gies, a top Sil­i­con Val­ley con­trac­tor to Amer­i­can spy agen­cies and the Pen­ta­gon. It was a Palan­tir employ­ee in Lon­don, work­ing close­ly with the data sci­en­tists build­ing Cambridge’s psy­cho­log­i­cal pro­fil­ing tech­nol­o­gy, who sug­gest­ed the sci­en­tists cre­ate their own app — a mobile-phone-based per­son­al­i­ty quiz — to gain access to Face­book users’ friend net­works, accord­ing to doc­u­ments obtained by The New York Times.

Cam­bridge ulti­mate­ly took a sim­i­lar approach. By ear­ly sum­mer, the com­pa­ny found a uni­ver­si­ty researcher to har­vest data using a per­son­al­i­ty ques­tion­naire and Face­book app. The researcher scraped pri­vate data from over 50 mil­lion Face­book users — and Cam­bridge Ana­lyt­i­ca went into busi­ness sell­ing so-called psy­cho­me­t­ric pro­files of Amer­i­can vot­ers, set­ting itself on a col­li­sion course with reg­u­la­tors and law­mak­ers in the Unit­ed States and Britain.

The rev­e­la­tions pulled Palan­tir — co-found­ed by the wealthy lib­er­tar­i­an Peter Thiel — into the furor sur­round­ing Cam­bridge, which improp­er­ly obtained Face­book data to build ana­lyt­i­cal tools it deployed on behalf of Don­ald J. Trump and oth­er Repub­li­can can­di­dates in 2016. Mr. Thiel, a sup­port­er of Pres­i­dent Trump, serves on the board at Face­book.

“There were senior Palan­tir employ­ees that were also work­ing on the Face­book data,” said Christo­pher Wylie, a data expert and Cam­bridge Ana­lyt­i­ca co-founder, in tes­ti­mo­ny before British law­mak­ers on Tues­day. . . .

. . . .The con­nec­tions between Palan­tir and Cam­bridge Ana­lyt­i­ca were thrust into the spot­light by Mr. Wylie’s tes­ti­mo­ny on Tues­day. Both com­pa­nies are linked to tech-dri­ven bil­lion­aires who backed Mr. Trump’s cam­paign: Cam­bridge is chiefly owned by Robert Mer­cer, the com­put­er sci­en­tist and hedge fund mag­nate, while Palan­tir was co-found­ed in 2003 by Mr. Thiel, who was an ini­tial investor in Face­book. . . .

. . . . Doc­u­ments and inter­views indi­cate that start­ing in 2013, Mr. Chmieli­auskas began cor­re­spond­ing with Mr. Wylie and a col­league from his Gmail account. At the time, Mr. Wylie and the col­league worked for the British defense and intel­li­gence con­trac­tor SCL Group, which formed Cam­bridge Ana­lyt­i­ca with Mr. Mer­cer the next year. The three shared Google doc­u­ments to brain­storm ideas about using big data to cre­ate sophis­ti­cat­ed behav­ioral pro­files, a prod­uct code-named “Big Dad­dy.”

A for­mer intern at SCL — Sophie Schmidt, the daugh­ter of Eric Schmidt, then Google’s exec­u­tive chair­man — urged the com­pa­ny to link up with Palan­tir, accord­ing to Mr. Wylie’s tes­ti­mo­ny and a June 2013 email viewed by The Times.

“Ever come across Palan­tir. Amus­ing­ly Eric Schmidt’s daugh­ter was an intern with us and is try­ing to push us towards them?” one SCL employ­ee wrote to a col­league in the email.

. . . . But he [Wylie] said some Palan­tir employ­ees helped engi­neer Cambridge’s psy­cho­graph­ic mod­els.

“There were Palan­tir staff who would come into the office and work on the data,” Mr. Wylie told law­mak­ers. “And we would go and meet with Palan­tir staff at Palan­tir.” He did not pro­vide an exact num­ber for the employ­ees or iden­ti­fy them.

Palan­tir employ­ees were impressed with Cambridge’s back­ing from Mr. Mer­cer, one of the world’s rich­est men, accord­ing to mes­sages viewed by The Times. And Cam­bridge Ana­lyt­i­ca viewed Palantir’s Sil­i­con Val­ley ties as a valu­able resource for launch­ing and expand­ing its own busi­ness.

In an inter­view this month with The Times, Mr. Wylie said that Palan­tir employ­ees were eager to learn more about using Face­book data and psy­cho­graph­ics. Those dis­cus­sions con­tin­ued through spring 2014, accord­ing to Mr. Wylie.

Mr. Wylie said that he and Mr. Nix vis­it­ed Palantir’s Lon­don office on Soho Square. One side was set up like a high-secu­ri­ty office, Mr. Wylie said, with sep­a­rate rooms that could be entered only with par­tic­u­lar codes. The oth­er side, he said, was like a tech start-up — “weird inspi­ra­tional quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”

Mr. Chmieli­auskas con­tin­ued to com­mu­ni­cate with Mr. Wylie’s team in 2014, as the Cam­bridge employ­ees were locked in pro­tract­ed nego­ti­a­tions with a researcher at Cam­bridge Uni­ver­si­ty, Michal Kosin­s­ki, to obtain Face­book data through an app Mr. Kosin­s­ki had built. The data was cru­cial to effi­cient­ly scale up Cambridge’s psy­cho­met­rics prod­ucts so they could be used in elec­tions and for cor­po­rate clients. . . .

8a. Some ter­ri­fy­ing and con­sum­mate­ly impor­tant devel­op­ments tak­ing shape in the con­text of what Mr. Emory has called “tech­no­crat­ic fas­cism:”

Face­book wants to read your thoughts.

  1. ” . . . Face­book wants to build its own “brain-to-com­put­er inter­face” that would allow us to send thoughts straight to a com­put­er. ‘What if you could type direct­ly from your brain?’ Regi­na Dugan, the head of the company’s secre­tive hard­ware R&D divi­sion, Build­ing 8, asked from the stage. Dugan then pro­ceed­ed to show a video demo of a woman typ­ing eight words per minute direct­ly from the stage. In a few years, she said, the team hopes to demon­strate a real-time silent speech sys­tem capa­ble of deliv­er­ing a hun­dred words per minute. ‘That’s five times faster than you can type on your smart­phone, and it’s straight from your brain,’ she said. ‘Your brain activ­i­ty con­tains more infor­ma­tion than what a word sounds like and how it’s spelled; it also con­tains seman­tic infor­ma­tion of what those words mean.’ . . .”
  2. ” . . . . Brain-com­put­er inter­faces are noth­ing new. DARPA, which Dugan used to head, has invest­ed heav­i­ly in brain-com­put­er inter­face tech­nolo­gies to do things like cure men­tal ill­ness and restore mem­o­ries to sol­diers injured in war. But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”
  3. ” . . . . Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. . . .”

“Facebook Literally Wants to Read Your Thoughts” by Kristen V. Brown; Gizmodo; 4/19/2017.

At Facebook’s annu­al devel­op­er con­fer­ence, F8, on Wednes­day, the group unveiled what may be Facebook’s most ambitious—and creepiest—proposal yet. Face­book wants to build its own “brain-to-com­put­er inter­face” that would allow us to send thoughts straight to a com­put­er.

“What if you could type directly from your brain?” Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute.

“That’s five times faster than you can type on your smart­phone, and it’s straight from your brain,” she said. “Your brain activ­i­ty con­tains more infor­ma­tion than what a word sounds like and how it’s spelled; it also con­tains seman­tic infor­ma­tion of what those words mean.”

Brain-com­put­er inter­faces are noth­ing new. DARPA, which Dugan used to head, has invest­ed heav­i­ly in brain-com­put­er inter­face tech­nolo­gies to do things like cure men­tal ill­ness and restore mem­o­ries to sol­diers injured in war. But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone.

“Our world is both dig­i­tal and phys­i­cal,” she said. “Our goal is to cre­ate and ship new, cat­e­go­ry-defin­ing con­sumer prod­ucts that are social first, at scale.”

She also showed a video that demonstrated a second technology: the ability to “listen” to human speech through vibrations on the skin. This tech has been in development to aid people with disabilities, working a little like a Braille that you feel with your body rather than your fingers. Using actuators and sensors, a connected armband was able to convey to a woman in the video a tactile vocabulary of nine different words.

Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. Brain-com­put­er inter­face tech­nol­o­gy is still in its infan­cy. So far, researchers have been suc­cess­ful in using it to allow peo­ple with dis­abil­i­ties to con­trol par­a­lyzed or pros­thet­ic limbs. But stim­u­lat­ing the brain’s motor cor­tex is a lot sim­pler than read­ing a person’s thoughts and then trans­lat­ing those thoughts into some­thing that might actu­al­ly be read by a com­put­er.

The end goal is to build an online world that feels more immer­sive and real—no doubt so that you spend more time on Face­book.

“Our brains pro­duce enough data to stream 4 HD movies every sec­ond. The prob­lem is that the best way we have to get infor­ma­tion out into the world — speech — can only trans­mit about the same amount of data as a 1980s modem,” CEO Mark Zucker­berg said in a Face­book post. “We’re work­ing on a sys­tem that will let you type straight from your brain about 5x faster than you can type on your phone today. Even­tu­al­ly, we want to turn it into a wear­able tech­nol­o­gy that can be man­u­fac­tured at scale. Even a sim­ple yes/no ‘brain click’ would help make things like aug­ment­ed real­i­ty feel much more nat­ur­al.”

8b. More about Face­book’s brain-to-com­put­er inter­face:

  1. ” . . . . Face­book hopes to use opti­cal neur­al imag­ing tech­nol­o­gy to scan the brain 100 times per sec­ond to detect thoughts and turn them into text. Mean­while, it’s work­ing on ‘skin-hear­ing’ that could trans­late sounds into hap­tic feed­back that peo­ple can learn to under­stand like braille. . . .”
  2. ” . . . . Wor­ry­ing­ly, Dugan even­tu­al­ly appeared frus­trat­ed in response to my inquiries about how her team thinks about safe­ty pre­cau­tions for brain inter­faces, say­ing, ‘The flip side of the ques­tion that you’re ask­ing is ‘why invent it at all?’ and I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.’ . . . .”

“Face­book Plans Ethics Board to Mon­i­tor Its Brain-Com­put­er Inter­face Work” by Josh Con­stine; Tech Crunch; 4/19/2017.

Face­book will assem­ble an inde­pen­dent Eth­i­cal, Legal and Social Impli­ca­tions (ELSI) pan­el to over­see its devel­op­ment of a direct brain-to-com­put­er typ­ing inter­face it pre­viewed today at its F8 con­fer­ence. Facebook’s R&D depart­ment Build­ing 8’s head Regi­na Dugan tells TechCrunch, “It’s ear­ly days . . . we’re in the process of form­ing it right now.”

Mean­while, much of the work on the brain inter­face is being con­duct­ed by Facebook’s uni­ver­si­ty research part­ners like UC Berke­ley and Johns Hop­kins. Facebook’s tech­ni­cal lead on the project, Mark Chevil­let, says, “They’re all held to the same stan­dards as the NIH or oth­er gov­ern­ment bod­ies fund­ing their work, so they already are work­ing with insti­tu­tion­al review boards at these uni­ver­si­ties that are ensur­ing that those stan­dards are met.” Insti­tu­tion­al review boards ensure test sub­jects aren’t being abused and research is being done as safe­ly as pos­si­ble.

Face­book hopes to use opti­cal neur­al imag­ing tech­nol­o­gy to scan the brain 100 times per sec­ond to detect thoughts and turn them into text. Mean­while, it’s work­ing on “skin-hear­ing” that could trans­late sounds into hap­tic feed­back that peo­ple can learn to under­stand like braille. Dugan insists, “None of the work that we do that is relat­ed to this will be absent of these kinds of insti­tu­tion­al review boards.”

So at least there will be inde­pen­dent ethi­cists work­ing to min­i­mize the poten­tial for mali­cious use of Facebook’s brain-read­ing tech­nol­o­gy to steal or police people’s thoughts.

Dur­ing our inter­view, Dugan showed her cog­nizance of people’s con­cerns, repeat­ing the start of her keynote speech today say­ing, “I’ve nev­er seen a tech­nol­o­gy that you devel­oped with great impact that didn’t have unin­tend­ed con­se­quences that need­ed to be guardrailed or man­aged. In any new tech­nol­o­gy you see a lot of hype talk, some apoc­a­lyp­tic talk and then there’s seri­ous work which is real­ly focused on bring­ing suc­cess­ful out­comes to bear in a respon­si­ble way.”

In the past, she says the safe­guards have been able to keep up with the pace of inven­tion. “In the ear­ly days of the Human Genome Project there was a lot of con­ver­sa­tion about whether we’d build a super race or whether peo­ple would be dis­crim­i­nat­ed against for their genet­ic con­di­tions and so on,” Dugan explains. “Peo­ple took that very seri­ous­ly and were respon­si­ble about it, so they formed what was called a ELSI pan­el . . . By the time that we got the tech­nol­o­gy avail­able to us, that frame­work, that con­trac­tu­al, eth­i­cal frame­work had already been built, so that work will be done here too. That work will have to be done.” . . . .

Wor­ry­ing­ly, Dugan even­tu­al­ly appeared frus­trat­ed in response to my inquiries about how her team thinks about safe­ty pre­cau­tions for brain inter­faces, say­ing, “The flip side of the ques­tion that you’re ask­ing is ‘why invent it at all?’ and I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.”

Facebook’s domination of social networking and advertising gives it billions in profit per quarter to pour into R&D. But its old “Move fast and break things” philosophy is a lot more frightening when it’s building brain scanners. Hopefully Facebook will prioritize the assembly of the ELSI ethics board Dugan promised and be as transparent as possible about the development of this exciting-yet-unnerving technology. . . .

  1. In FTR #‘s 718 and 946, we detailed the fright­en­ing, ugly real­i­ty behind Face­book. Face­book is now devel­op­ing tech­nol­o­gy that will per­mit the tap­ping of users thoughts by mon­i­tor­ing brain-to-com­put­er tech­nol­o­gy. Face­book’s R & D is head­ed by Regi­na Dugan, who used to head the Pen­tagon’s DARPA. Face­book’s Build­ing 8 is pat­terned after DARPA:  ” . . . Face­book wants to build its own “brain-to-com­put­er inter­face” that would allow us to send thoughts straight to a com­put­er. ‘What if you could type direct­ly from your brain?’ Regi­na Dugan, the head of the company’s secre­tive hard­ware R&D divi­sion, Build­ing 8, asked from the stage. Dugan then pro­ceed­ed to show a video demo of a woman typ­ing eight words per minute direct­ly from the stage. In a few years, she said, the team hopes to demon­strate a real-time silent speech sys­tem capa­ble of deliv­er­ing a hun­dred words per minute. ‘That’s five times faster than you can type on your smart­phone, and it’s straight from your brain,’ she said. ‘Your brain activ­i­ty con­tains more infor­ma­tion than what a word sounds like and how it’s spelled; it also con­tains seman­tic infor­ma­tion of what those words mean.’ . . .”
  2. ” . . . . Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. . . .”
  3. ” . . . . But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”

9a. Nigel Oakes is the founder of SCL, the parent company of Cambridge Analytica. His comments are related in a New York Times article. " . . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . ."

“Face­book Gets Grilling in U.K. That It Avoid­ed in U.S.” by Adam Satar­i­ano; The New York Times [West­ern Edi­tion]; 4/27/2018; p. B3.

. . . . The pan­el has pub­lished audio records in which an exec­u­tive tied to Cam­bridge Ana­lyt­i­ca dis­cuss­es how the Trump cam­paign used tech­niques used by the Nazis to tar­get vot­ers. . . .

9b. Mr. Oakes’ com­ments are relat­ed in detail in anoth­er Times arti­cle. ” . . . . Adolf Hitler ‘didn’t have a prob­lem with the Jews at all, but peo­ple didn’t like the Jews,’ he told the aca­d­e­m­ic, Emma L. Bri­ant, a senior lec­tur­er in jour­nal­ism at the Uni­ver­si­ty of Essex. He went on to say that Don­ald J. Trump had done the same thing by tap­ping into griev­ances toward immi­grants and Mus­lims. . . . ‘What hap­pened with Trump, you can for­get all the micro­tar­get­ing and micro­da­ta and what­ev­er, and come back to some very, very sim­ple things,’ he told Dr. Bri­ant. ‘Trump had the balls, and I mean, real­ly the balls, to say what peo­ple want­ed to hear.’ . . .”

“The Ori­gins of an Ad Man’s Manip­u­la­tion Empire” by Ellen Bar­ry; The New York Times [West­ern Edi­tion]; 4/21/2018; p. A4.

. . . . Adolf Hitler “didn’t have a prob­lem with the Jews at all, but peo­ple didn’t like the Jews,” he told the aca­d­e­m­ic, Emma L. Bri­ant, a senior lec­tur­er in jour­nal­ism at the Uni­ver­si­ty of Essex. He went on to say that Don­ald J. Trump had done the same thing by tap­ping into griev­ances toward immi­grants and Mus­lims.

This sort of cam­paign, he con­tin­ued, did not require bells and whis­tles from tech­nol­o­gy or social sci­ence.

“What hap­pened with Trump, you can for­get all the micro­tar­get­ing and micro­da­ta and what­ev­er, and come back to some very, very sim­ple things,” he told Dr. Bri­ant. “Trump had the balls, and I mean, real­ly the balls, to say what peo­ple want­ed to hear.” . . .

9c. Tak­ing a look at the future of fas­cism in the con­text of AI, Tay, a “bot” cre­at­ed by Microsoft to respond to users of Twit­ter was tak­en offline after users taught it to–in effect–become a Nazi bot. It is note­wor­thy that Tay can only respond on the basis of what she is taught. In the future, tech­no­log­i­cal­ly accom­plished and will­ful peo­ple like “weev” may be able to do more. Inevitably, Under­ground Reich ele­ments will craft a Nazi AI that will be able to do MUCH, MUCH more!

Beware! As one Twitter user noted, employing sarcasm: "Tay went from 'humans are super cool' to full nazi in <24 hrs and I'm not at all concerned about the future of AI."


But like all teenagers, she seems to be angry with her moth­er.

Microsoft has been forced to dunk Tay, its mil­len­ni­al-mim­ic­k­ing chat­bot, into a vat of molten steel. The com­pa­ny has ter­mi­nat­ed her after the bot start­ed tweet­ing abuse at peo­ple and went full neo-Nazi, declar­ing that “Hitler was right I hate the jews.”

@TheBigBrebowski ricky ger­vais learned total­i­tar­i­an­ism from adolf hitler, the inven­tor of athe­ism

— TayTweets (@TayandYou) March 23, 2016

Some of this appears to be "innocent" insofar as Tay is not generating these responses. Rather, if you tell her "repeat after me" she will parrot back whatever you say, allowing you to put words into her mouth. However, some of the responses were organic. The Guardian quotes one where, after being asked "is Ricky Gervais an atheist?", Tay responded, "Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism."
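The "repeat after me" loophole described above is easy to picture in code. Here is a minimal sketch; all names are hypothetical and this is not Microsoft's actual bot logic:

```python
def handle_message(message: str) -> str:
    """Toy chatbot handler illustrating the 'repeat after me' loophole."""
    prefix = "repeat after me "
    if message.lower().startswith(prefix):
        # The bot parrots attacker-supplied text verbatim, with no content
        # filter, so whatever a user types comes back in the bot's own voice.
        return message[len(prefix):]
    return "I don't know what to say."

print(handle_message("repeat after me anything goes"))
```

A filter on the echoed text (or simply not echoing at all) would have closed this particular hole, though it would not have prevented the organic responses the article describes.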

In addi­tion to turn­ing the bot off, Microsoft has delet­ed many of the offend­ing tweets. But this isn’t an action to be tak­en light­ly; Red­mond would do well to remem­ber that it was humans attempt­ing to pull the plug on Skynet that proved to be the last straw, prompt­ing the sys­tem to attack Rus­sia in order to elim­i­nate its ene­mies. We’d bet­ter hope that Tay does­n’t sim­i­lar­ly retal­i­ate. . . .

9d. As not­ed in a Pop­u­lar Mechan­ics arti­cle: ” . . . When the next pow­er­ful AI comes along, it will see its first look at the world by look­ing at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t under­stand. . . .”

And we keep show­ing it our very worst selves.

We all know the half-joke about the AI apoc­a­lypse. The robots learn to think, and in their cold ones-and-zeros log­ic, they decide that humans—horrific pests we are—need to be exter­mi­nated. It’s the sub­ject of count­less sci-fi sto­ries and blog posts about robots, but maybe the real dan­ger isn’t that AI comes to such a con­clu­sion on its own, but that it gets that idea from us.

Yes­ter­day Microsoft launched a fun lit­tle AI Twit­ter chat­bot that was admit­tedly sort of gim­micky from the start. “A.I fam from the inter­net that’s got zero chill,” its Twit­ter bio reads. At its start, its knowl­edge was based on pub­lic data. As Microsoft’s page for the prod­uct puts it:

Tay has been built by min­ing rel­e­vant pub­lic data and by using AI and edi­to­r­ial devel­oped by a staff includ­ing impro­vi­sa­tional come­di­ans. Pub­lic data that’s been anonymized is Tay’s pri­mary data source. That data has been mod­eled, cleaned and fil­tered by the team devel­op­ing Tay.

The real point of Tay, however, was to learn from humans through direct conversation, most notably direct conversation using humanity's current leading showcase of depravity: Twitter. You might not be surprised things went off the rails, but how fast and how far is particularly staggering.

Microsoft has since delet­ed some of Tay’s most offen­sive tweets, but var­i­ous pub­li­ca­tions memo­ri­al­ize some of the worst bits where Tay denied the exis­tence of the holo­caust, came out in sup­port of geno­cide, and went all kinds of racist. 

Naturally it's horrifying, and Microsoft has been trying to clean up the mess. Though as some on Twitter have pointed out, no matter how little Microsoft would like to have "Bush did 9/11" spouting from a corporate sponsored project, Tay does serve to illustrate the most dangerous fundamental truth of artificial intelligence: It is a mirror. Artificial intelligence—specifically "neural networks" that learn behavior by ingesting huge amounts of data and trying to replicate it—needs some sort of source material to get started. It can only get that from us. There is no other way.
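The article's "mirror" point, that a model trained by ingesting text can only reproduce patterns present in its source material, can be illustrated with a toy bigram generator. This is an invented example, vastly simpler than any real neural network:

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Build a word -> possible-next-words table from raw text."""
    model = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def generate(model, start, max_words=10):
    """Walk the table; output can only contain word pairs seen in training."""
    out = [start]
    for _ in range(max_words):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

# Feed it only friendly text and friendly text is all it can ever say;
# feed it abuse and abuse is all it has to work with.
friendly = train_bigrams(["humans are super cool"])
print(generate(friendly, "humans"))
```

The model has no opinions of its own; swap the training corpus and the output swaps with it, which is exactly the dynamic the trolls exploited.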

But before you give up on humanity entirely, there are a few things worth noting. For starters, it's not like Tay just necessarily picked up virulent racism by hanging out and passively listening to the buzz of the humans around it. Tay was announced in a very big way—with press coverage—and pranksters pro-actively went to it to see if they could teach it to be racist.

If you take an AI and then don’t imme­di­ately intro­duce it to a whole bunch of trolls shout­ing racism at it for the cheap thrill of see­ing it learn a dirty trick, you can get some more inter­est­ing results. Endear­ing ones even! Mul­ti­ple neur­al net­works designed to pre­dict text in emails and text mes­sages have an over­whelm­ing pro­cliv­ity for say­ing “I love you” con­stantly, espe­cially when they are oth­er­wise at a loss for words.

So Tay’s racism isn’t nec­es­sar­ily a reflec­tion of actu­al, human racism so much as it is the con­se­quence of unre­strained exper­i­men­ta­tion, push­ing the enve­lope as far as it can go the very first sec­ond we get the chance. The mir­ror isn’t show­ing our real image; it’s reflect­ing the ugly faces we’re mak­ing at it for fun. And maybe that’s actu­ally worse.

Sure, Tay can't understand what racism means any more than Gmail can really love you. And baby's first words being "genocide lol!" is admittedly sort of funny when you aren't talking about literal all-powerful SkyNet or a real human child. But AI is advancing at a staggering rate. . . .

. . . . When the next pow­er­ful AI comes along, it will see its first look at the world by look­ing at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t under­stand.

 

Discussion

14 comments for “FTR #1021 FascisBook: (In Your Facebook, Part 3–A Virtual Panopticon, Part 3)”

  1. Oh look, Facebook actually banned someone for posting neo-Nazi content on their platform. But there's a catch: They banned Ukrainian activist Eduard Dolinsky for 30 days because he was posting examples of antisemitic graffiti. Dolinsky is the director of the Ukrainian Jewish Committee. According to Dolinsky, his far right opponents have a history of reporting his posts to Facebook in order to get him suspended. And this time it worked. Dolinsky appealed the ban but to no avail.

    So that happened. But first let's take a quick look at an article from back in April that highlights how absurd this action was. The article is about a Ukrainian school teacher in Lviv, Marjana Batjuk, who posted birthday greetings to Adolf Hitler on her Facebook page on April 20 (Hitler's birthday). She also taught her students the Nazi salute and even took some of her students to meet far right activists who had participated in a march wearing the uniform of the 14th Waffen Grenadier Division of the SS.

    Batjuk, who is a mem­ber of Svo­bo­da, lat­er claimed her Face­book account was hacked, but a news orga­ni­za­tion found that she has a his­to­ry of post­ing Nazi imagery on social media net­works. And there’s no men­tion in this report of Batjuk get­ting banned from Face­book:

    Jew­ish Tele­graph Agency

    Ukrain­ian teacher alleged­ly prais­es Hitler, per­forms Nazi salute with stu­dents

    By Cnaan Liphshiz
    April 23, 2018 4:22pm

    (JTA) — A pub­lic school teacher in Ukraine alleged­ly post­ed birth­day greet­ings to Adolf Hitler on Face­book and taught her stu­dents the Nazi salute.

    Mar­jana Batjuk, who teach­es at a school in Lviv and also is a coun­cil­woman, post­ed her greet­ing on April 20, the Nazi leader’s birth­day, Eduard Dolin­sky, direc­tor of the Ukrain­ian Jew­ish Com­mit­tee, told JTA. He called the inci­dent a “scan­dal.”

    She also took some of her students to meet far-right activists who over the weekend marched on the city's streets while wearing the uniform of the 14th Waffen Grenadier Division of the SS, an elite Nazi unit with many ethnic Ukrainians also known as the 1st Galician.

    Dis­play­ing Nazi imagery is ille­gal in Ukraine, but Dolin­sky said law enforce­ment author­i­ties allowed the activists to parade on main streets.

    Batjuk had the activists explain about their repli­ca weapons, which they parad­ed ahead of a larg­er event in hon­or of the 1st Gali­cian unit planned for next week in Lviv.

    The events hon­or­ing the 1st Gali­cian SS unit in Lviv are not orga­nized by munic­i­pal author­i­ties.

    Batjuk, 28, a mem­ber of the far-right Svo­bo­da par­ty, called Hitler “a great man” and quot­ed from his book “Mein Kampf” in her Face­book post, Dolin­sky said. She lat­er claimed that her Face­book account was hacked and delet­ed the post, but the Strana news site found that she had a his­to­ry of post­ing Nazi imagery on social net­works.

    She also post­ed pic­tures of chil­dren she said were her stu­dents per­form­ing the Nazi salute with her.

    ...

    Edu­ca­tion Min­istry offi­cials have start­ed a dis­ci­pli­nary review of her con­duct, the KP news site report­ed.

    Separately, in the town of Poltava, in eastern Ukraine, Dolinsky said a swastika and the words "heil Hitler" were spray-painted Friday on a monument for victims of the Holocaust. The vandals, who have not been identified, also wrote "Death to the kikes."

    In Odessa, a large graf­fi­ti read­ing “Jews into the sea” was writ­ten on the beach­front wall of a hotel.

    “The com­mon fac­tor between all of these inci­dents is gov­ern­ment inac­tion, which ensures they will con­tin­ue hap­pen­ing,” Dolin­sky said.
    ———-

    “Ukrain­ian teacher alleged­ly prais­es Hitler, per­forms Nazi salute with stu­dents” by Cnaan Liphshiz; Jew­ish Tele­graph Agency; 04/23/2018

    “Mar­jana Batjuk, who teach­es at a school in Lviv and also is a coun­cil­woman, post­ed her greet­ing on April 20, the Nazi leader’s birth­day, Eduard Dolin­sky, direc­tor of the Ukrain­ian Jew­ish Com­mit­tee, told JTA. He called the inci­dent a “scan­dal.””

    She's not just a teacher. She's also a councilwoman. A teacher councilwoman who likes to post positive things about Hitler on her Facebook page. And it was Eduard Dolinsky who was talking to the international media about this.

    But Batjuk does­n’t just post pro-Nazi things on her Face­book page. She also takes her stu­dents to meet the far right activists:

    ...
    She also took some of her students to meet far-right activists who over the weekend marched on the city's streets while wearing the uniform of the 14th Waffen Grenadier Division of the SS, an elite Nazi unit with many ethnic Ukrainians also known as the 1st Galician.

    Dis­play­ing Nazi imagery is ille­gal in Ukraine, but Dolin­sky said law enforce­ment author­i­ties allowed the activists to parade on main streets.

    Batjuk had the activists explain about their repli­ca weapons, which they parad­ed ahead of a larg­er event in hon­or of the 1st Gali­cian unit planned for next week in Lviv.

    The events hon­or­ing the 1st Gali­cian SS unit in Lviv are not orga­nized by munic­i­pal author­i­ties.
    ...

    Batjuk lat­er claimed that her Face­book page was hacked, and yet a media orga­ni­za­tion was able to find plen­ty of pre­vi­ous exam­ples of sim­i­lar posts on social media:

    ...
    Batjuk, 28, a mem­ber of the far-right Svo­bo­da par­ty, called Hitler “a great man” and quot­ed from his book “Mein Kampf” in her Face­book post, Dolin­sky said. She lat­er claimed that her Face­book account was hacked and delet­ed the post, but the Strana news site found that she had a his­to­ry of post­ing Nazi imagery on social net­works.

    She also post­ed pic­tures of chil­dren she said were her stu­dents per­form­ing the Nazi salute with her.
    ...

    And if you look at that Strana news sum­ma­ry of her social media posts, a num­ber of them are clear­ly Face­book posts. So if Strana news orga­ni­za­tion was able to find these old posts that’s a pret­ty clear indi­ca­tion Face­book was­n’t remov­ing them.

    That was back in April. Flash forward to today and we find a sudden willingness to ban people for posting Nazi content...except it's Eduard Dolinsky getting banned for making people aware of the pro-Nazi graffiti that has become rampant in Ukraine:

    The Jerusalem Post

    Jew­ish activist: Face­book banned me for post­ing anti­se­mit­ic graf­fi­ti
    “I use my Face­book account for dis­trib­ut­ing infor­ma­tion about anti­se­mit­ic inci­dents, hate speech and hate crimes in Ukraine,” said the Ukrain­ian Jew­ish activist.

    By Seth J. Frantz­man
    August 21, 2018 16:39

    Eduard Dolinsky, a prominent Ukrainian Jewish activist, was banned from posting on Facebook Monday night for a post about antisemitic graffiti in Odessa.

    Dolin­sky, the direc­tor of the Ukrain­ian Jew­ish Com­mit­tee, said he was blocked by the social media giant for post­ing a pho­to. “I had post­ed the pho­to which says in Ukrain­ian ‘kill the yid’ about a month ago,” he says. “I use my Face­book account for dis­trib­ut­ing infor­ma­tion about anti­se­mit­ic inci­dents and hate speech and hate crimes in Ukraine.”

    Now Dolinsky’s account has dis­abled him from post­ing for thir­ty days, which means media, law enforce­ment and the local com­mu­ni­ty who rely on his social media posts will receive no updates.

    Dolin­sky tweet­ed Mon­day that his account had been blocked and sent The Jerusalem Post a screen­shot of the image he post­ed which shows a bad­ly drawn swasti­ka and Ukrain­ian writ­ing. “You recent­ly post­ed some­thing that vio­lates Face­book poli­cies, so you’re tem­porar­i­ly blocked from using this fea­ture,” Face­book informs him when he logs in. “The block will be active for 29 days and 17 hours,” it says. “To keep from get­ting blocked again, please make sure you’ve read and under­stand Facebook’s Com­mu­ni­ty Stan­dards.”

    Dolinksy says that he has been tar­get­ed in the past by nation­al­ists and anti-semi­tes who oppose his work. Face­book has banned him tem­porar­i­ly in the past also, but nev­er for thir­ty days. “The last time I was blocked, the media also report­ed this and I felt some relief.

    It was as if they stopped ban­ning me. But now I don’t know – and this has again hap­pened. They are ban­ning the one who is try­ing to fight anti­semitism. They are ban­ning me for the very thing I do.”

    Based on Dolinsky’s work the police have opened crim­i­nal files against per­pe­tra­tors of anti­se­mit­ic crimes, in Odessa and oth­er places.

    He says that some locals are try­ing to silence him because he is crit­i­cal of the way Ukraine has com­mem­o­rat­ed his­tor­i­cal nation­al­ist fig­ures, “which is actu­al­ly deny­ing the Holo­caust and try­ing to white­wash the actions of nation­al­ists dur­ing the Sec­ond World War.”

    Dolinksy has been wide­ly quot­ed, and his work, includ­ing posts on Face­book, has been ref­er­enced by media in the past. “These inci­dents are hap­pen­ing and these crimes and the police should react.

    The soci­ety also. But their goal is to cut me off.”

    Iron­i­cal­ly, the activist oppos­ing anti­semitism is being tar­get­ed by anti­semites who label the anti­se­mit­ic exam­ples he reveals as hate speech. “They are specif­i­cal­ly com­plain­ing to Face­book for the con­tent, and they are com­plain­ing that I am vio­lat­ing the rules of Face­book and spread­ing hate speech. So Face­book, as I under­stand [it, doesn’t] look at this; they are ban­ning me and block­ing me and delet­ing these posts.”

    He says he tried to appeal the ban but has not been suc­cess­ful.

    “I use my Face­book exclu­sive­ly for this, so this is my work­ing tool as direc­tor of Ukrain­ian Jew­ish Com­mit­tee.”

    Face­book has been under scruti­ny recent­ly for who it bans and why. In July founder Mark Zucker­berg made con­tro­ver­sial remarks appear­ing to accept Holo­caust denial on the site. “I find it offen­sive, but at the end of the day, I don’t believe our plat­form should take that down because I think there are things that dif­fer­ent peo­ple get wrong. I don’t think they’re doing it inten­tion­al­ly.” In late July, Face­book banned US con­spir­a­cy the­o­rist Alex Jones for bul­ly­ing and hate speech.

    In a sim­i­lar inci­dent to Dolin­sky, Iran­ian sec­u­lar activist Armin Nav­abi was banned from Face­book for thir­ty days for post­ing the death threats that he receives. “This is ridicu­lous. My account is blocked for 30 days because I post the death threats I’m get­ting? I’m not the one mak­ing the threat!” he tweet­ed.

    ...

    ———

    “Jew­ish activist: Face­book banned me for post­ing anti­se­mit­ic graf­fi­ti” by Seth J. Frantz­man; The Jerusalem Post; 08/21/2018

    “Dolin­sky, the direc­tor of the Ukrain­ian Jew­ish Com­mit­tee, said he was blocked by the social media giant for post­ing a pho­to. “I had post­ed the pho­to which says in Ukrain­ian ‘kill the yid’ about a month ago,” he says. “I use my Face­book account for dis­trib­ut­ing infor­ma­tion about anti­se­mit­ic inci­dents and hate speech and hate crimes in Ukraine.”

    The director of the Ukrainian Jewish Committee gets banned for posting antisemitic content. That's some world class trolling by Facebook.

    And while it’s only a 30 day ban, that’s 30 days where Ukraine’s media and law enforce­ment won’t be get­ting Dolin­sky’s updates. So it’s not just a moral­ly absurd ban­ning, it’s also actu­al­ly going to be pro­mot­ing pro-Nazi graf­fi­ti in Ukraine by silenc­ing one of the key fig­ures cov­er­ing it:

    ...
    Now Dolinsky’s account has dis­abled him from post­ing for thir­ty days, which means media, law enforce­ment and the local com­mu­ni­ty who rely on his social media posts will receive no updates.

    Dolin­sky tweet­ed Mon­day that his account had been blocked and sent The Jerusalem Post a screen­shot of the image he post­ed which shows a bad­ly drawn swasti­ka and Ukrain­ian writ­ing. “You recent­ly post­ed some­thing that vio­lates Face­book poli­cies, so you’re tem­porar­i­ly blocked from using this fea­ture,” Face­book informs him when he logs in. “The block will be active for 29 days and 17 hours,” it says. “To keep from get­ting blocked again, please make sure you’ve read and under­stand Facebook’s Com­mu­ni­ty Stan­dards.”
    ...

    And this isn't the first time Dolinsky has been banned from Facebook for posting this kind of content. But it's the longest he's been banned. And the fact that this isn't the first time he's been banned suggests this isn't just a genuine 'oops!' mistake:

    ...
    Dolinksy says that he has been tar­get­ed in the past by nation­al­ists and anti-semi­tes who oppose his work. Face­book has banned him tem­porar­i­ly in the past also, but nev­er for thir­ty days. “The last time I was blocked, the media also report­ed this and I felt some relief.

    It was as if they stopped ban­ning me. But now I don’t know – and this has again hap­pened. They are ban­ning the one who is try­ing to fight anti­semitism. They are ban­ning me for the very thing I do.”

    Based on Dolinsky’s work the police have opened crim­i­nal files against per­pe­tra­tors of anti­se­mit­ic crimes, in Odessa and oth­er places.
    ...

    Dolin­sky also notes that he has peo­ple try­ing to silence him pre­cise­ly because of the job he does high­light­ing Ukraine’s offi­cial embrace of Nazi col­lab­o­rat­ing his­tor­i­cal fig­ures:

    ...
    He says that some locals are try­ing to silence him because he is crit­i­cal of the way Ukraine has com­mem­o­rat­ed his­tor­i­cal nation­al­ist fig­ures, “which is actu­al­ly deny­ing the Holo­caust and try­ing to white­wash the actions of nation­al­ists dur­ing the Sec­ond World War.”

    Dolinksy has been wide­ly quot­ed, and his work, includ­ing posts on Face­book, has been ref­er­enced by media in the past. “These inci­dents are hap­pen­ing and these crimes and the police should react.

    The soci­ety also. But their goal is to cut me off.”

    Iron­i­cal­ly, the activist oppos­ing anti­semitism is being tar­get­ed by anti­semites who label the anti­se­mit­ic exam­ples he reveals as hate speech. “They are specif­i­cal­ly com­plain­ing to Face­book for the con­tent, and they are com­plain­ing that I am vio­lat­ing the rules of Face­book and spread­ing hate speech. So Face­book, as I under­stand [it, doesn’t] look at this; they are ban­ning me and block­ing me and delet­ing these posts.”
    ...

    So we likely have a situation where antisemites successfully got Dolinsky silenced, with Facebook 'playing dumb' the whole time. And as a consequence Ukraine is facing a month without Dolinsky's reports. Except it's not even clear that Dolinsky is going to be allowed to clarify the situation and continue posting updates of Nazi graffiti after this month-long ban is up, because he says he's been trying to appeal the ban, but with no success:

    ...
    He says he tried to appeal the ban but has not been suc­cess­ful.

    “I use my Face­book exclu­sive­ly for this, so this is my work­ing tool as direc­tor of Ukrain­ian Jew­ish Com­mit­tee.”
    ...

    Given Dolinsky's powerful criticisms of Ukraine's embrace and historic whitewashing of the far right, it would be interesting to learn if the decision to ban Dolinsky originally came from the Atlantic Council, which is one of the main organizations Facebook has outsourced its troll-hunting duties to.

    So for all we know, Dolin­sky is effec­tive­ly going to be banned per­ma­nent­ly from using Face­book to make Ukraine and the rest of the world aware of the epi­dem­ic of pro-Nazi anti­se­mit­ic graf­fi­ti in Ukraine. Maybe if he sets up a pro-Nazi Face­book per­sona he’ll be allowed to keep doing his work.

    Posted by Pterrafractyl | August 23, 2018, 12:49 pm
  2. It looks like we’re in for anoth­er round of right-wing com­plaints about Big Tech polit­i­cal bias designed to pres­sure com­pa­nies into push­ing right-wing con­tent onto users. Recall how com­plaints about Face­book sup­press­ing con­ser­v­a­tives in the Face­book News Feed result­ed in a change in pol­i­cy in 2016 that unleashed a flood of far right dis­in­for­ma­tion on the plat­form. This time, it’s Google’s turn to face the right-wing faux-out­rage machine and it’s Pres­i­dent Trump lead­ing it:

    Trump just accused Google of bias­ing the search results in its search engine to give neg­a­tive sto­ries about him. Appar­ent­ly he googled him­self and did­n’t like the results. His tweet came after a Fox Busi­ness report on Mon­day evening that made the claim that 96 per­cent of Google News results for “Trump” came from the “nation­al left-wing media.” The report was based on some ‘analy­sis’ by right-wing media out­let PJ Media.

    Later, during a press conference, Trump declared that Google, Facebook, and Twitter "are treading on very, very troubled territory," and his economic advisor Larry Kudlow told the press that the issue is being investigated by the White House. And as Facebook already demonstrated, while it seems highly unlikely that the Trump administration will actually take some sort of government action to force Google to promote positive stories about Trump, it's not like loudly complaining can't get the job done:

    Bloomberg

    Trump Warns Tech Giants to ‘Be Care­ful,’ Claim­ing They Rig Search­es

    By Kath­leen Hunter and Ben Brody
    August 28, 2018, 4:58 AM CDT Updat­ed on August 28, 2018, 2:17 PM CDT

    * Pres­i­dent tweets con­ser­v­a­tive media being blocked by Google
    * Com­pa­ny denies any polit­i­cal agen­da in its search results

    Pres­i­dent Don­ald Trump warned Alpha­bet Inc.’s Google, Face­book Inc. and Twit­ter Inc. “bet­ter be care­ful” after he accused the search engine ear­li­er in the day of rig­ging results to give pref­er­ence to neg­a­tive news sto­ries about him.

    Trump told reporters in the Oval Office Tues­day that the three tech­nol­o­gy com­pa­nies “are tread­ing on very, very trou­bled ter­ri­to­ry,” as he added his voice to a grow­ing cho­rus of con­ser­v­a­tives who claim inter­net com­pa­nies favor lib­er­al view­points.

    “This is a very seri­ous sit­u­a­tion-will be addressed!” Trump said in a tweet ear­li­er Tues­day. The President’s com­ments came the morn­ing after a Fox Busi­ness TV seg­ment that said Google favored lib­er­al news out­lets in search results about Trump. Trump pro­vid­ed no sub­stan­ti­a­tion for his claim.

    “Google search results for ‘Trump News’ shows only the viewing/reporting of Fake New Media. In oth­er words, they have it RIGGED, for me & oth­ers, so that almost all sto­ries & news is BAD,” Trump said. “Republican/Conservative & Fair Media is shut out. Ille­gal.”

    The alle­ga­tion, dis­missed by online search experts, fol­lows the president’s Aug. 24 claim that social media “giants” are “silenc­ing mil­lions of peo­ple.” Such accu­sa­tions — along with asser­tions that the news media and Spe­cial Coun­sel Robert Mueller’s Rus­sia med­dling probe are biased against him — have been a chief Trump talk­ing point meant to appeal to the president’s base.

    Google issued a state­ment say­ing its search­es are designed to give users rel­e­vant answers.

    “Search is not used to set a polit­i­cal agen­da and we don’t bias our results toward any polit­i­cal ide­ol­o­gy,” the state­ment said. “Every year, we issue hun­dreds of improve­ments to our algo­rithms to ensure they sur­face high-qual­i­ty con­tent in response to users’ queries. We con­tin­u­al­ly work to improve Google Search and we nev­er rank search results to manip­u­late polit­i­cal sen­ti­ment.”

    Yonatan Zunger, an engi­neer who worked at Google for almost a decade, went fur­ther. “Users can ver­i­fy that his claim is spe­cious by sim­ply read­ing a wide range of news sources them­selves,” he said. “The ‘bias’ is that the news is all bad for him, for which he has only him­self to blame.”

    Google’s news search soft­ware doesn’t work the way the pres­i­dent says it does, accord­ing to Mark Irvine, senior data sci­en­tist at Word­Stream, a com­pa­ny that helps firms get web­sites and oth­er online con­tent to show up high­er in search results. The Google News sys­tem gives weight to how many times a sto­ry has been linked to, as well as to how promi­nent­ly the terms peo­ple are search­ing for show up in the sto­ries, Irvine said.

    “The Google search algo­rithm is a fair­ly agnos­tic and apa­thet­ic algo­rithm towards what people’s polit­i­cal feel­ings are,” he said.

    “Their job is essen­tial­ly to mod­el the world as it is,” said Pete Mey­ers, a mar­ket­ing sci­en­tist at Moz, which builds tools to help com­pa­nies improve how they show up in search results. “If enough peo­ple are link­ing to a site and talk­ing about a site, they’re going to show that site.”
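The weighting Irvine and Meyers describe, how often a story is linked to plus how prominently the search terms appear, can be caricatured in a few lines. The formula and weights below are invented for illustration and are not Google's actual algorithm:

```python
import math

def toy_news_score(inbound_links: int, query: str, headline: str) -> float:
    """Invented scoring sketch: log-scaled link count plus a bonus when
    the query terms appear prominently (early) in the headline."""
    link_score = math.log1p(inbound_links)
    words = headline.lower().split()
    prominence = 0.0
    for term in query.lower().split():
        if term in words:
            # An earlier position in the headline yields a larger bonus.
            prominence += 1.0 / (1 + words.index(term))
    return link_score + prominence

# A heavily linked story leading with the query term outranks an obscure
# one that only mentions the term in passing, regardless of its slant.
print(toy_news_score(5000, "trump", "Trump warns tech giants") >
      toy_news_score(3, "trump", "Markets dip as trump tweets"))
```

Note that nothing in this sketch inspects political sentiment at all; a score built from links and term prominence will simply surface whatever the wider web is linking to, which is the experts' point.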

    Trump’s con­cern is that search results about him appear neg­a­tive, but that’s because the major­i­ty of sto­ries about him are neg­a­tive, Mey­ers said. “He woke up and watched his par­tic­u­lar fla­vor and what Google had didn’t match that.”

    Com­plaints that social-media ser­vices cen­sor con­ser­v­a­tives have increased as com­pa­nies such as Face­book Inc. and Twit­ter Inc. try to curb the reach of con­spir­a­cy the­o­rists, dis­in­for­ma­tion cam­paigns, for­eign polit­i­cal med­dling and abu­sive posters.

    Google News rank­ings have some­times high­light­ed uncon­firmed and erro­neous reports in the ear­ly min­utes of tragedies when there’s lit­tle infor­ma­tion to fill its search results. After the Oct. 1, 2017, Las Vegas shoot­ing, for instance, sev­er­al accounts seemed to coor­di­nate an effort to smear a man misiden­ti­fied as the shoot­er with false claims about his polit­i­cal ties.

    Google has since tight­ened require­ments for inclu­sion in news rank­ings, block­ing out­lets that “con­ceal their coun­try of ori­gin” and rely­ing more on author­i­ta­tive sources, although the moves have led to charges of cen­sor­ship from less estab­lished out­lets. Google cur­rent­ly says it ranks news based on “fresh­ness” and “diver­si­ty” of the sto­ries. Trump-favored out­lets such as Fox News rou­tine­ly appear in results.

    Google’s search results have been the focus of com­plaints for more than a decade. The crit­i­cism has become more polit­i­cal as the pow­er and reach of online ser­vices has increased in recent years.

    Eric Schmidt, Alphabet’s for­mer chair­man, sup­port­ed Hillary Clin­ton against Trump dur­ing the last elec­tion. There have been unsub­stan­ti­at­ed claims the com­pa­ny buried neg­a­tive search results about her dur­ing the 2016 elec­tion. Scores of Google employ­ees entered gov­ern­ment to work under Pres­i­dent Barack Oba­ma.

    White House eco­nom­ic advis­er Lar­ry Kud­low, respond­ing to a ques­tion about the tweets, said that the admin­is­tra­tion is going to do “inves­ti­ga­tions and analy­sis” into the issue but stressed they’re “just look­ing into it.”

    Trump’s com­ment fol­lowed a report on Fox Busi­ness on Mon­day evening that said 96 per­cent of Google News results for “Trump” came from the “nation­al left-wing media.” The seg­ment cit­ed the con­ser­v­a­tive PJ Media site, which said its analy­sis sug­gest­ed “a pat­tern of bias against right-lean­ing con­tent.”

    The PJ Media analy­sis “is in no way sci­en­tif­ic,” said Joshua New, a senior pol­i­cy ana­lyst with the Cen­ter for Data Inno­va­tion.

    “This fre­quen­cy of appear­ance in an arbi­trary search at one time is in no way indi­cat­ing a bias or a slant,” New said. His non-par­ti­san pol­i­cy group is affil­i­at­ed with the Infor­ma­tion Tech­nol­o­gy and Inno­va­tion Foun­da­tion, which in turn has exec­u­tives from Sil­i­con Val­ley com­pa­nies, includ­ing Google, on its board of direc­tors.

    Ser­vices such as Google or Face­book “have a busi­ness incen­tive not to low­er the rank­ing of a cer­tain pub­li­ca­tion because of news bias. Because that low­ers the val­ue as a news plat­form,” New said.

    News search rank­ings use fac­tors includ­ing “use time­li­ness, accu­ra­cy, the pop­u­lar­i­ty of a sto­ry, a users’ per­son­al search his­to­ry, their loca­tion, qual­i­ty of con­tent, a website’s rep­u­ta­tion — a huge amount of dif­fer­ent fac­tors,” New said.

    Google is not the first tech stal­wart to receive crit­i­cism from Trump. He has alleged Amazon.com Inc. has a sweet­heart deal with the U.S. Postal Ser­vice and slammed founder Jeff Bezos’s own­er­ship of what Trump calls “the Ama­zon Wash­ing­ton Post.”

    Google is due to face law­mak­ers at a hear­ing on Russ­ian elec­tion med­dling on Sept. 5. The com­pa­ny intend­ed to send Senior Vice Pres­i­dent for Glob­al Affairs Kent Walk­er to tes­ti­fy, but the panel’s chair­man, Sen­a­tor Richard Burr, who want­ed Chief Exec­u­tive Offi­cer Sun­dar Pichai, has reject­ed Walk­er.

    Despite Trump’s com­ments, it’s unclear what he or Con­gress could do to influ­ence how inter­net com­pa­nies dis­trib­ute online news. The indus­try trea­sures an exemp­tion from lia­bil­i­ty for the con­tent users post. Some top mem­bers of Con­gress have sug­gest­ed lim­it­ing the pro­tec­tion as a response to alleged bias and oth­er mis­deeds, although there have been few moves to do so since Con­gress curbed the shield for some cas­es of sex traf­fick­ing ear­li­er in the year.

    The gov­ern­ment has lit­tle abil­i­ty to dic­tate to pub­lish­ers and online cura­tors what news to present despite the president’s occa­sion­al threats to use the pow­er of the gov­ern­ment to curb cov­er­age he dis­likes and his ten­den­cy to com­plain that news about him is over­ly neg­a­tive.

    Trump has talked about expanding libel laws and mused about reinstating long-ended rules requiring equal time for opposing views, which didn't apply to the internet. Neither has resulted in a serious policy push.

    ...

    ———-

    “Trump Warns Tech Giants to ‘Be Care­ful,’ Claim­ing They Rig Search­es” by Kath­leen Hunter and Ben Brody; Bloomberg; 08/28/2018

    “Trump told reporters in the Oval Office Tues­day that the three tech­nol­o­gy com­pa­nies “are tread­ing on very, very trou­bled ter­ri­to­ry,” as he added his voice to a grow­ing cho­rus of con­ser­v­a­tives who claim inter­net com­pa­nies favor lib­er­al view­points.”

    The Trumpian warn­ing shots have been fired: feed the pub­lic pos­i­tive news about Trump, or else...

    ...
    “This is a very seri­ous sit­u­a­tion-will be addressed!” Trump said in a tweet ear­li­er Tues­day. The President’s com­ments came the morn­ing after a Fox Busi­ness TV seg­ment that said Google favored lib­er­al news out­lets in search results about Trump. Trump pro­vid­ed no sub­stan­ti­a­tion for his claim.

    “Google search results for ‘Trump News’ shows only the viewing/reporting of Fake New Media. In oth­er words, they have it RIGGED, for me & oth­ers, so that almost all sto­ries & news is BAD,” Trump said. “Republican/Conservative & Fair Media is shut out. Ille­gal.”

    The alle­ga­tion, dis­missed by online search experts, fol­lows the president’s Aug. 24 claim that social media “giants” are “silenc­ing mil­lions of peo­ple.” Such accu­sa­tions — along with asser­tions that the news media and Spe­cial Coun­sel Robert Mueller’s Rus­sia med­dling probe are biased against him — have been a chief Trump talk­ing point meant to appeal to the president’s base.
    ...

    “Republican/Conservative & Fair Media is shut out. Ille­gal.”

    And he lit­er­al­ly charged Google with ille­gal­i­ty over alleged­ly shut­ting out “Republican/Conservative & Fair Media.” Which is, of course, an absurd charge for any­one famil­iar with Google’s news por­tal. But that was part of what made the tweet so poten­tial­ly threat­en­ing to these com­pa­nies since it implied there was a role the gov­ern­ment should be play­ing to cor­rect this per­ceived law-break­ing.

    At the same time, it's unclear what, legally speaking, Trump could actually do. But that didn't stop him from issuing such threats, as he's done in the past:

    ...
    Despite Trump’s com­ments, it’s unclear what he or Con­gress could do to influ­ence how inter­net com­pa­nies dis­trib­ute online news. The indus­try trea­sures an exemp­tion from lia­bil­i­ty for the con­tent users post. Some top mem­bers of Con­gress have sug­gest­ed lim­it­ing the pro­tec­tion as a response to alleged bias and oth­er mis­deeds, although there have been few moves to do so since Con­gress curbed the shield for some cas­es of sex traf­fick­ing ear­li­er in the year.

    The gov­ern­ment has lit­tle abil­i­ty to dic­tate to pub­lish­ers and online cura­tors what news to present despite the president’s occa­sion­al threats to use the pow­er of the gov­ern­ment to curb cov­er­age he dis­likes and his ten­den­cy to com­plain that news about him is over­ly neg­a­tive.

    Trump has talked about expanding libel laws and mused about reinstating long-ended rules requiring equal time for opposing views, which didn't apply to the internet. Neither has resulted in a serious policy push.
    ...

    Iron­i­cal­ly, when Trump mus­es about rein­stat­ing long-end­ed rules requir­ing equal time for oppos­ing views (the “Fair­ness Doc­trine” over­turned by Rea­gan in 1987), he’s mus­ing about doing some­thing that would effec­tive­ly destroy the right-wing media mod­el, a mod­el that is pred­i­cat­ed on feed­ing the audi­ence exclu­sive­ly right-wing con­tent. As many have not­ed, the demise of the Fair­ness Doc­trine — which led to the explo­sion of right-wing talk radio hosts like Rush Lim­baugh — prob­a­bly played a big role in intel­lec­tu­al­ly neu­ter­ing the Amer­i­can pub­lic, paving the way for some­one like Trump to even­tu­al­ly come along.

    And yet, as unhinged as this lat­est threat may be, the admin­is­tra­tion is actu­al­ly going to do “inves­ti­ga­tions and analy­sis” into the issue accord­ing to Lar­ry Kud­low:

    ...
    White House eco­nom­ic advis­er Lar­ry Kud­low, respond­ing to a ques­tion about the tweets, said that the admin­is­tra­tion is going to do “inves­ti­ga­tions and analy­sis” into the issue but stressed they’re “just look­ing into it.”
    ...

    And as we should expect, this all appears to have been triggered by a Fox Business piece on Monday night that covered a 'study' done by PJ Media (a right-wing media outlet), which found that 96 percent of Google News results for "Trump" come from the "national left-wing media":

    ...
    Trump’s com­ment fol­lowed a report on Fox Busi­ness on Mon­day evening that said 96 per­cent of Google News results for “Trump” came from the “nation­al left-wing media.” The seg­ment cit­ed the con­ser­v­a­tive PJ Media site, which said its analy­sis sug­gest­ed “a pat­tern of bias against right-lean­ing con­tent.”

    The PJ Media analy­sis “is in no way sci­en­tif­ic,” said Joshua New, a senior pol­i­cy ana­lyst with the Cen­ter for Data Inno­va­tion.

    “This fre­quen­cy of appear­ance in an arbi­trary search at one time is in no way indi­cat­ing a bias or a slant,” New said. His non-par­ti­san pol­i­cy group is affil­i­at­ed with the Infor­ma­tion Tech­nol­o­gy and Inno­va­tion Foun­da­tion, which in turn has exec­u­tives from Sil­i­con Val­ley com­pa­nies, includ­ing Google, on its board of direc­tors.

    Ser­vices such as Google or Face­book “have a busi­ness incen­tive not to low­er the rank­ing of a cer­tain pub­li­ca­tion because of news bias. Because that low­ers the val­ue as a news plat­form,” New said.

    News search rank­ings use fac­tors includ­ing “use time­li­ness, accu­ra­cy, the pop­u­lar­i­ty of a sto­ry, a users’ per­son­al search his­to­ry, their loca­tion, qual­i­ty of con­tent, a website’s rep­u­ta­tion — a huge amount of dif­fer­ent fac­tors,” New said.
    ...

    Putting aside the general question of the scientific veracity of this PJ Media 'study', it's somewhat amusing to realize that it was a study conducted specifically on a search for "Trump" on Google News. If you had to choose a single topic that will inevitably generate an abundance of negative news coverage, it would be "Trump". In other words, if you were to actually conduct a real study assessing the political bias of Google News's search results, you could hardly have picked a worse search term to test that theory on than "Trump".

    Google, not surprisingly, refutes these charges. But it's the people who work for companies dedicated to improving how their clients show up in search results who give the most convincing responses, since their businesses literally depend on understanding Google's algorithms:

    ...
    Google’s news search soft­ware doesn’t work the way the pres­i­dent says it does, accord­ing to Mark Irvine, senior data sci­en­tist at Word­Stream, a com­pa­ny that helps firms get web­sites and oth­er online con­tent to show up high­er in search results. The Google News sys­tem gives weight to how many times a sto­ry has been linked to, as well as to how promi­nent­ly the terms peo­ple are search­ing for show up in the sto­ries, Irvine said.

    “The Google search algo­rithm is a fair­ly agnos­tic and apa­thet­ic algo­rithm towards what people’s polit­i­cal feel­ings are,” he said.

    “Their job is essen­tial­ly to mod­el the world as it is,” said Pete Mey­ers, a mar­ket­ing sci­en­tist at Moz, which builds tools to help com­pa­nies improve how they show up in search results. “If enough peo­ple are link­ing to a site and talk­ing about a site, they’re going to show that site.”

    Trump’s con­cern is that search results about him appear neg­a­tive, but that’s because the major­i­ty of sto­ries about him are neg­a­tive, Mey­ers said. “He woke up and watched his par­tic­u­lar fla­vor and what Google had didn’t match that.”
    ...

    All that said, it's not as if the black-box nature of the algorithms behind things like Google's search engine isn't a legitimate topic of public interest. And that's part of why these farcical tweets are so dangerous: Big Tech giants like Google, Facebook, and Twitter know that it's not impossible they'll be subject to algorithmic regulation someday, and they're going to want to push that day off for as long as possible. So when Trump makes these kinds of complaints, it's not at all inconceivable that he'll get the response he wants as these companies attempt to placate him. It's also highly likely that if these companies do decide to placate him, they won't publicly announce it. Instead they'll just start rigging their algorithms to serve up more pro-Trump content and more right-wing content in general.

    Also keep in mind that, despite Silicon Valley's reputation for being run by a bunch of liberals, the reality is that Silicon Valley has a strong right-wing libertarian faction, and there will be no shortage of people at these companies who would love to inject a right-wing bias into their services. Trump's stunt gives that right-wing faction of Silicon Valley leadership a business excuse to do exactly that.

    So if you use Google News to see what the latest news is on "Trump" and you suddenly find that it's mostly good news, keep in mind that that's actually really, really bad news, because it means this stunt worked.

    Posted by Pterrafractyl | August 28, 2018, 3:55 pm
  3. The New York Times published a big piece on the inner workings of Facebook's response to the array of scandals that have enveloped the company in recent years, from the charges of Russian operatives using the platform to spread disinformation to the Cambridge Analytica scandal. Much of the story focuses on the actions of Sheryl Sandberg, who appears to be the top person at Facebook overseeing the company's response to these scandals. It describes a general pattern of Facebook's executives first ignoring problems and then using various public relations strategies to deal with those problems once they can no longer be ignored. And it's the choice of public relations firms that is perhaps the biggest scandal revealed in this story: in October of 2017, Facebook hired Definers Public Affairs, a DC-based firm founded by veterans of Republican presidential politics that specialized in applying the tactics of political races to corporate public relations.

    And one of the political strategies employed by Definers was simply putting out articles that cast their clients in a positive light while simultaneously attacking their clients' enemies. That's what Definers did for Facebook, utilizing an affiliated conservative news site, NTK Network. NTK shares offices and staff with Definers, and many NTK stories are written by Definers staff and are basically attack ads on Definers' clients' enemies. So how does NTK get anyone to read its propaganda articles? By getting them picked up by other popular conservative outlets, including Breitbart.

    Perhaps most controversially, Facebook had Definers attempt to tie various groups critical of Facebook to George Soros, implicitly harnessing the existing right-wing meme that George Soros is a super-wealthy Jew who secretly controls almost everything. This attack by Definers centered on the Freedom from Facebook coalition. Back in July, the group had crashed the House Judiciary Committee hearings while a Facebook executive was testifying, holding up signs depicting Sheryl Sandberg and Mark Zuckerberg as two heads of an octopus stretching around the globe. The group claimed the sign was a reference to old cartoons about the Standard Oil monopoly. But such imagery also evokes classic anti-Semitic tropes, made more acute by the fact that both Sandberg and Zuckerberg are Jewish. So Facebook enlisted the ADL to condemn Freedom from Facebook over the imagery.

    But charging Freedom from Facebook with anti-Semitism wasn't the only strategy Facebook used to address its critics. After the protest in Congress, Facebook had Definers basically accuse the groups behind Freedom from Facebook of being puppets of George Soros, and encouraged reporters to investigate the groups' financial ties to Soros. This was part of a broader push by Definers to cast Soros as the man behind all of the anti-Facebook sentiment that has popped up in recent years. This, of course, plays right into the growing right-wing meme that Soros, a billionaire Jew, is behind almost everything bad in the world. It's a meme that also happens to be exceptionally popular with the 'Alt Right' neo-Nazi wing of contemporary conservatism. So Facebook dealt with its critics by first charging them with indirect anti-Semitism and then using its hired Republican public relations firm to make an indirect anti-Semitic attack on those same critics:

    The New York Times

    Delay, Deny and Deflect: How Facebook’s Lead­ers Fought Through Cri­sis

    By Sheera Frenkel, Nicholas Con­fes­sore, Cecil­ia Kang, Matthew Rosen­berg and Jack Nicas

    Nov. 14, 2018

    Sheryl Sand­berg was seething.

    Inside Facebook’s Men­lo Park, Calif., head­quar­ters, top exec­u­tives gath­ered in the glass-walled con­fer­ence room of its founder, Mark Zucker­berg. It was Sep­tem­ber 2017, more than a year after Face­book engi­neers dis­cov­ered sus­pi­cious Rus­sia-linked activ­i­ty on its site, an ear­ly warn­ing of the Krem­lin cam­paign to dis­rupt the 2016 Amer­i­can elec­tion. Con­gres­sion­al and fed­er­al inves­ti­ga­tors were clos­ing in on evi­dence that would impli­cate the com­pa­ny.

    But it wasn’t the loom­ing dis­as­ter at Face­book that angered Ms. Sand­berg. It was the social network’s secu­ri­ty chief, Alex Sta­mos, who had informed com­pa­ny board mem­bers the day before that Face­book had yet to con­tain the Russ­ian infes­ta­tion. Mr. Stamos’s brief­ing had prompt­ed a humil­i­at­ing board­room inter­ro­ga­tion of Ms. Sand­berg, Facebook’s chief oper­at­ing offi­cer, and her bil­lion­aire boss. She appeared to regard the admis­sion as a betray­al.

    “You threw us under the bus!” she yelled at Mr. Sta­mos, accord­ing to peo­ple who were present.

    The clash that day would set off a reck­on­ing — for Mr. Zucker­berg, for Ms. Sand­berg and for the busi­ness they had built togeth­er. In just over a decade, Face­book has con­nect­ed more than 2.2 bil­lion peo­ple, a glob­al nation unto itself that reshaped polit­i­cal cam­paigns, the adver­tis­ing busi­ness and dai­ly life around the world. Along the way, Face­book accu­mu­lat­ed one of the largest-ever repos­i­to­ries of per­son­al data, a trea­sure trove of pho­tos, mes­sages and likes that pro­pelled the com­pa­ny into the For­tune 500.

    But as evi­dence accu­mu­lat­ed that Facebook’s pow­er could also be exploit­ed to dis­rupt elec­tions, broad­cast viral pro­pa­gan­da and inspire dead­ly cam­paigns of hate around the globe, Mr. Zucker­berg and Ms. Sand­berg stum­bled. Bent on growth, the pair ignored warn­ing signs and then sought to con­ceal them from pub­lic view. At crit­i­cal moments over the last three years, they were dis­tract­ed by per­son­al projects, and passed off secu­ri­ty and pol­i­cy deci­sions to sub­or­di­nates, accord­ing to cur­rent and for­mer exec­u­tives.

    When Face­book users learned last spring that the com­pa­ny had com­pro­mised their pri­va­cy in its rush to expand, allow­ing access to the per­son­al infor­ma­tion of tens of mil­lions of peo­ple to a polit­i­cal data firm linked to Pres­i­dent Trump, Face­book sought to deflect blame and mask the extent of the prob­lem.

    And when that failed — as the company’s stock price plum­met­ed and it faced a con­sumer back­lash — Face­book went on the attack.

    While Mr. Zucker­berg has con­duct­ed a pub­lic apol­o­gy tour in the last year, Ms. Sand­berg has over­seen an aggres­sive lob­by­ing cam­paign to com­bat Facebook’s crit­ics, shift pub­lic anger toward rival com­pa­nies and ward off dam­ag­ing reg­u­la­tion. Face­book employed a Repub­li­can oppo­si­tion-research firm to dis­cred­it activist pro­test­ers, in part by link­ing them to the lib­er­al financier George Soros. It also tapped its busi­ness rela­tion­ships, lob­by­ing a Jew­ish civ­il rights group to cast some crit­i­cism of the com­pa­ny as anti-Semit­ic.

    In Wash­ing­ton, allies of Face­book, includ­ing Sen­a­tor Chuck Schumer, the Demo­c­ra­t­ic Sen­ate leader, inter­vened on its behalf. And Ms. Sand­berg wooed or cajoled hos­tile law­mak­ers, while try­ing to dis­pel Facebook’s rep­u­ta­tion as a bas­tion of Bay Area lib­er­al­ism.

    This account of how Mr. Zucker­berg and Ms. Sand­berg nav­i­gat­ed Facebook’s cas­cad­ing crises, much of which has not been pre­vi­ous­ly report­ed, is based on inter­views with more than 50 peo­ple. They include cur­rent and for­mer Face­book exec­u­tives and oth­er employ­ees, law­mak­ers and gov­ern­ment offi­cials, lob­by­ists and con­gres­sion­al staff mem­bers. Most spoke on the con­di­tion of anonymi­ty because they had signed con­fi­den­tial­i­ty agree­ments, were not autho­rized to speak to reporters or feared retal­i­a­tion.

    ...

    Even so, trust in the social net­work has sunk, while its pell-mell growth has slowed. Reg­u­la­tors and law enforce­ment offi­cials in the Unit­ed States and Europe are inves­ti­gat­ing Facebook’s con­duct with Cam­bridge Ana­lyt­i­ca, a polit­i­cal data firm that worked with Mr. Trump’s 2016 cam­paign, open­ing up the com­pa­ny to fines and oth­er lia­bil­i­ty. Both the Trump admin­is­tra­tion and law­mak­ers have begun craft­ing pro­pos­als for a nation­al pri­va­cy law, set­ting up a years­long strug­gle over the future of Facebook’s data-hun­gry busi­ness mod­el.

    “We failed to look and try to imag­ine what was hid­ing behind cor­ners,” Elliot Schrage, for­mer vice pres­i­dent for glob­al com­mu­ni­ca­tions, mar­ket­ing and pub­lic pol­i­cy at Face­book, said in an inter­view.

    Mr. Zucker­berg, 34, and Ms. Sand­berg, 49, remain at the company’s helm, while Mr. Sta­mos and oth­er high-pro­file exec­u­tives have left after dis­putes over Facebook’s pri­or­i­ties. Mr. Zucker­berg, who con­trols the social net­work with 60 per­cent of the vot­ing shares and who approved many of its direc­tors, has been asked repeat­ed­ly in the last year whether he should step down as chief exec­u­tive.

    His answer each time: a resound­ing “No.”

    ‘Don’t Poke the Bear’

    Three years ago, Mr. Zucker­berg, who found­ed Face­book in 2004 while attend­ing Har­vard, was cel­e­brat­ed for the company’s extra­or­di­nary suc­cess. Ms. Sand­berg, a for­mer Clin­ton admin­is­tra­tion offi­cial and Google vet­er­an, had become a fem­i­nist icon with the pub­li­ca­tion of her empow­er­ment man­i­festo, “Lean In,” in 2013.

    Like oth­er tech­nol­o­gy exec­u­tives, Mr. Zucker­berg and Ms. Sand­berg cast their com­pa­ny as a force for social good. Facebook’s lofty aims were embla­zoned even on secu­ri­ties fil­ings: “Our mis­sion is to make the world more open and con­nect­ed.”

    But as Face­book grew, so did the hate speech, bul­ly­ing and oth­er tox­ic con­tent on the plat­form. When researchers and activists in Myan­mar, India, Ger­many and else­where warned that Face­book had become an instru­ment of gov­ern­ment pro­pa­gan­da and eth­nic cleans­ing, the com­pa­ny large­ly ignored them. Face­book had posi­tioned itself as a plat­form, not a pub­lish­er. Tak­ing respon­si­bil­i­ty for what users post­ed, or act­ing to cen­sor it, was expen­sive and com­pli­cat­ed. Many Face­book exec­u­tives wor­ried that any such efforts would back­fire.

    Then Don­ald J. Trump ran for pres­i­dent. He described Mus­lim immi­grants and refugees as a dan­ger to Amer­i­ca, and in Decem­ber 2015 post­ed a state­ment on Face­book call­ing for a “total and com­plete shut­down” on Mus­lims enter­ing the Unit­ed States. Mr. Trump’s call to arms — wide­ly con­demned by Democ­rats and some promi­nent Repub­li­cans — was shared more than 15,000 times on Face­book, an illus­tra­tion of the site’s pow­er to spread racist sen­ti­ment.

    Mr. Zucker­berg, who had helped found a non­prof­it ded­i­cat­ed to immi­gra­tion reform, was appalled, said employ­ees who spoke to him or were famil­iar with the con­ver­sa­tion. He asked Ms. Sand­berg and oth­er exec­u­tives if Mr. Trump had vio­lat­ed Facebook’s terms of ser­vice.

    The ques­tion was unusu­al. Mr. Zucker­berg typ­i­cal­ly focused on broad­er tech­nol­o­gy issues; pol­i­tics was Ms. Sandberg’s domain. In 2010, Ms. Sand­berg, a Demo­c­rat, had recruit­ed a friend and fel­low Clin­ton alum, Marne Levine, as Facebook’s chief Wash­ing­ton rep­re­sen­ta­tive. A year lat­er, after Repub­li­cans seized con­trol of the House, Ms. Sand­berg installed anoth­er friend, a well-con­nect­ed Repub­li­can: Joel Kaplan, who had attend­ed Har­vard with Ms. Sand­berg and lat­er served in the George W. Bush admin­is­tra­tion.

    Some at Face­book viewed Mr. Trump’s 2015 attack on Mus­lims as an oppor­tu­ni­ty to final­ly take a stand against the hate speech cours­ing through its plat­form. But Ms. Sand­berg, who was edg­ing back to work after the death of her hus­band sev­er­al months ear­li­er, del­e­gat­ed the mat­ter to Mr. Schrage and Moni­ka Bick­ert, a for­mer pros­e­cu­tor whom Ms. Sand­berg had recruit­ed as the company’s head of glob­al pol­i­cy man­age­ment. Ms. Sand­berg also turned to the Wash­ing­ton office — par­tic­u­lar­ly to Mr. Kaplan, said peo­ple who par­tic­i­pat­ed in or were briefed on the dis­cus­sions.

    In video con­fer­ence calls between the Sil­i­con Val­ley head­quar­ters and Wash­ing­ton, the three offi­cials con­strued their task nar­row­ly. They parsed the company’s terms of ser­vice to see if the post, or Mr. Trump’s account, vio­lat­ed Facebook’s rules.

    Mr. Kaplan argued that Mr. Trump was an impor­tant pub­lic fig­ure and that shut­ting down his account or remov­ing the state­ment could be seen as obstruct­ing free speech, said three employ­ees who knew of the dis­cus­sions. He said it could also stoke a con­ser­v­a­tive back­lash.

    “Don’t poke the bear,” Mr. Kaplan warned.

    Mr. Zucker­berg did not par­tic­i­pate in the debate. Ms. Sand­berg attend­ed some of the video meet­ings but rarely spoke.

    Mr. Schrage con­clud­ed that Mr. Trump’s lan­guage had not vio­lat­ed Facebook’s rules and that the candidate’s views had pub­lic val­ue. “We were try­ing to make a deci­sion based on all the legal and tech­ni­cal evi­dence before us,” he said in an inter­view.

    In the end, Mr. Trump’s state­ment and account remained on the site. When Mr. Trump won elec­tion the next fall, giv­ing Repub­li­cans con­trol of the White House as well as Con­gress, Mr. Kaplan was empow­ered to plan accord­ing­ly. The com­pa­ny hired a for­mer aide to Mr. Trump’s new attor­ney gen­er­al, Jeff Ses­sions, along with lob­by­ing firms linked to Repub­li­can law­mak­ers who had juris­dic­tion over inter­net com­pa­nies.

    But inside Face­book, new trou­bles were brew­ing.

    Min­i­miz­ing Russia’s Role

    In the final months of Mr. Trump’s pres­i­den­tial cam­paign, Russ­ian agents esca­lat­ed a year­long effort to hack and harass his Demo­c­ra­t­ic oppo­nents, cul­mi­nat­ing in the release of thou­sands of emails stolen from promi­nent Democ­rats and par­ty offi­cials.

    Face­book had said noth­ing pub­licly about any prob­lems on its own plat­form. But in the spring of 2016, a com­pa­ny expert on Russ­ian cyber­war­fare spot­ted some­thing wor­ri­some. He reached out to his boss, Mr. Sta­mos.

    Mr. Stamos’s team dis­cov­ered that Russ­ian hack­ers appeared to be prob­ing Face­book accounts for peo­ple con­nect­ed to the pres­i­den­tial cam­paigns, said two employ­ees. Months lat­er, as Mr. Trump bat­tled Hillary Clin­ton in the gen­er­al elec­tion, the team also found Face­book accounts linked to Russ­ian hack­ers who were mes­sag­ing jour­nal­ists to share infor­ma­tion from the stolen emails.

    Mr. Sta­mos, 39, told Col­in Stretch, Facebook’s gen­er­al coun­sel, about the find­ings, said two peo­ple involved in the con­ver­sa­tions. At the time, Face­book had no pol­i­cy on dis­in­for­ma­tion or any resources ded­i­cat­ed to search­ing for it.

    Mr. Sta­mos, act­ing on his own, then direct­ed a team to scru­ti­nize the extent of Russ­ian activ­i­ty on Face­book. In Decem­ber 2016, after Mr. Zucker­berg pub­licly scoffed at the idea that fake news on Face­book had helped elect Mr. Trump, Mr. Sta­mos — alarmed that the company’s chief exec­u­tive seemed unaware of his team’s find­ings — met with Mr. Zucker­berg, Ms. Sand­berg and oth­er top Face­book lead­ers.

    Ms. Sand­berg was angry. Look­ing into the Russ­ian activ­i­ty with­out approval, she said, had left the com­pa­ny exposed legal­ly. Oth­er exec­u­tives asked Mr. Sta­mos why they had not been told soon­er.

    Still, Ms. Sand­berg and Mr. Zucker­berg decid­ed to expand on Mr. Stamos’s work, cre­at­ing a group called Project P, for “pro­pa­gan­da,” to study false news on the site, accord­ing to peo­ple involved in the dis­cus­sions. By Jan­u­ary 2017, the group knew that Mr. Stamos’s orig­i­nal team had only scratched the sur­face of Russ­ian activ­i­ty on Face­book, and pressed to issue a pub­lic paper about their find­ings.

    But Mr. Kaplan and oth­er Face­book exec­u­tives object­ed. Wash­ing­ton was already reel­ing from an offi­cial find­ing by Amer­i­can intel­li­gence agen­cies that Vladimir V. Putin, the Russ­ian pres­i­dent, had per­son­al­ly ordered an influ­ence cam­paign aimed at help­ing elect Mr. Trump.

    If Face­book impli­cat­ed Rus­sia fur­ther, Mr. Kaplan said, Repub­li­cans would accuse the com­pa­ny of sid­ing with Democ­rats. And if Face­book pulled down the Rus­sians’ fake pages, reg­u­lar Face­book users might also react with out­rage at hav­ing been deceived: His own moth­er-in-law, Mr. Kaplan said, had fol­lowed a Face­book page cre­at­ed by Russ­ian trolls.

    Ms. Sand­berg sided with Mr. Kaplan, recalled four peo­ple involved. Mr. Zucker­berg — who spent much of 2017 on a nation­al “lis­ten­ing tour,” feed­ing cows in Wis­con­sin and eat­ing din­ner with Soma­li refugees in Min­neso­ta — did not par­tic­i­pate in the con­ver­sa­tions about the pub­lic paper. When it was pub­lished that April, the word “Rus­sia” nev­er appeared.

    ...

    A Polit­i­cal Play­book

    The com­bined rev­e­la­tions infu­ri­at­ed Democ­rats, final­ly frac­tur­ing the polit­i­cal con­sen­sus that had pro­tect­ed Face­book and oth­er big tech com­pa­nies from Belt­way inter­fer­ence. Repub­li­cans, already con­cerned that the plat­form was cen­sor­ing con­ser­v­a­tive views, accused Face­book of fuel­ing what they claimed were mer­it­less con­spir­a­cy charges against Mr. Trump and Rus­sia. Democ­rats, long allied with Sil­i­con Val­ley on issues includ­ing immi­gra­tion and gay rights, now blamed Mr. Trump’s win part­ly on Facebook’s tol­er­ance for fraud and dis­in­for­ma­tion.

    After stalling for weeks, Face­book even­tu­al­ly agreed to hand over the Russ­ian posts to Con­gress. Twice in Octo­ber 2017, Face­book was forced to revise its pub­lic state­ments, final­ly acknowl­edg­ing that close to 126 mil­lion peo­ple had seen the Russ­ian posts.

    The same month, Mr. Warn­er and Sen­a­tor Amy Klobuchar, the Min­neso­ta Demo­c­rat, intro­duced leg­is­la­tion to com­pel Face­book and oth­er inter­net firms to dis­close who bought polit­i­cal ads on their sites — a sig­nif­i­cant expan­sion of fed­er­al reg­u­la­tion over tech com­pa­nies.

    “It’s time for Face­book to let all of us see the ads bought by Rus­sians *and paid for in Rubles* dur­ing the last elec­tion,” Ms. Klobuchar wrote on her own Face­book page.

    Face­book gird­ed for bat­tle. Days after the bill was unveiled, Face­book hired Mr. Warner’s for­mer chief of staff, Luke Albee, to lob­by on it. Mr. Kaplan’s team took a larg­er role in man­ag­ing the company’s Wash­ing­ton response, rou­tine­ly review­ing Face­book news releas­es for words or phras­es that might rile con­ser­v­a­tives.

    Ms. Sand­berg also reached out to Ms. Klobuchar. She had been friend­ly with the sen­a­tor, who is fea­tured on the web­site for Lean In, Ms. Sandberg’s empow­er­ment ini­tia­tive. Ms. Sand­berg had con­tributed a blurb to Ms. Klobuchar’s 2015 mem­oir, and the senator’s chief of staff had pre­vi­ous­ly worked at Ms. Sandberg’s char­i­ta­ble foun­da­tion.

    But in a tense con­ver­sa­tion short­ly after the ad leg­is­la­tion was intro­duced, Ms. Sand­berg com­plained about Ms. Klobuchar’s attacks on the com­pa­ny, said a per­son who was briefed on the call. Ms. Klobuchar did not back down on her leg­is­la­tion. But she dialed down her crit­i­cism in at least one venue impor­tant to the com­pa­ny: After blast­ing Face­book repeat­ed­ly that fall on her own Face­book page, Ms. Klobuchar hard­ly men­tioned the com­pa­ny in posts between Novem­ber and Feb­ru­ary.

    A spokesman for Ms. Klobuchar said in a state­ment that Facebook’s lob­by­ing had not less­ened her com­mit­ment to hold­ing the com­pa­ny account­able. “Face­book was push­ing to exclude issue ads from the Hon­est Ads Act, and Sen­a­tor Klobuchar stren­u­ous­ly dis­agreed and refused to change the bill,” he said.

    In Octo­ber 2017, Face­book also expand­ed its work with a Wash­ing­ton-based con­sul­tant, Defin­ers Pub­lic Affairs, that had orig­i­nal­ly been hired to mon­i­tor press cov­er­age of the com­pa­ny. Found­ed by vet­er­ans of Repub­li­can pres­i­den­tial pol­i­tics, Defin­ers spe­cial­ized in apply­ing polit­i­cal cam­paign tac­tics to cor­po­rate pub­lic rela­tions — an approach long employed in Wash­ing­ton by big telecom­mu­ni­ca­tions firms and activist hedge fund man­agers, but less com­mon in tech.

    Defin­ers had estab­lished a Sil­i­con Val­ley out­post ear­li­er that year, led by Tim Miller, a for­mer spokesman for Jeb Bush who preached the virtues of cam­paign-style oppo­si­tion research. For tech firms, he argued in one inter­view, a goal should be to “have pos­i­tive con­tent pushed out about your com­pa­ny and neg­a­tive con­tent that’s being pushed out about your com­peti­tor.”

    Face­book quick­ly adopt­ed that strat­e­gy. In Novem­ber 2017, the social net­work came out in favor of a bill called the Stop Enabling Sex Traf­fick­ers Act, which made inter­net com­pa­nies respon­si­ble for sex traf­fick­ing ads on their sites.

    Google and oth­ers had fought the bill for months, wor­ry­ing it would set a cum­ber­some prece­dent. But the sex traf­fick­ing bill was cham­pi­oned by Sen­a­tor John Thune, a Repub­li­can of South Dako­ta who had pum­meled Face­book over accu­sa­tions that it cen­sored con­ser­v­a­tive con­tent, and Sen­a­tor Richard Blu­men­thal, a Con­necti­cut Demo­c­rat and senior com­merce com­mit­tee mem­ber who was a fre­quent crit­ic of Face­book.

    Face­book broke ranks with oth­er tech com­pa­nies, hop­ing the move would help repair rela­tions on both sides of the aisle, said two con­gres­sion­al staffers and three tech indus­try offi­cials.

    When the bill came to a vote in the House in Feb­ru­ary, Ms. Sand­berg offered pub­lic sup­port online, urg­ing Con­gress to “make sure we pass mean­ing­ful and strong leg­is­la­tion to stop sex traf­fick­ing.”

    Oppo­si­tion Research

    In March, The Times, The Observ­er of Lon­don and The Guardian pre­pared to pub­lish a joint inves­ti­ga­tion into how Face­book user data had been appro­pri­at­ed by Cam­bridge Ana­lyt­i­ca to pro­file Amer­i­can vot­ers. A few days before pub­li­ca­tion, The Times pre­sent­ed Face­book with evi­dence that copies of improp­er­ly acquired Face­book data still exist­ed, despite ear­li­er promis­es by Cam­bridge exec­u­tives and oth­ers to delete it.

    Mr. Zucker­berg and Ms. Sand­berg met with their lieu­tenants to deter­mine a response. They decid­ed to pre-empt the sto­ries, say­ing in a state­ment pub­lished late on a Fri­day night that Face­book had sus­pend­ed Cam­bridge Ana­lyt­i­ca from its plat­form. The exec­u­tives fig­ured that get­ting ahead of the news would soft­en its blow, accord­ing to peo­ple in the dis­cus­sions.

    They were wrong. The sto­ry drew world­wide out­rage, prompt­ing law­suits and offi­cial inves­ti­ga­tions in Wash­ing­ton, Lon­don and Brus­sels. For days, Mr. Zucker­berg and Ms. Sand­berg remained out of sight, mulling how to respond. While the Rus­sia inves­ti­ga­tion had devolved into an increas­ing­ly par­ti­san bat­tle, the Cam­bridge scan­dal set off Democ­rats and Repub­li­cans alike. And in Sil­i­con Val­ley, oth­er tech firms began exploit­ing the out­cry to bur­nish their own brands.

    “We’re not going to traf­fic in your per­son­al life,” Tim Cook, Apple’s chief exec­u­tive, said in an MSNBC inter­view. “Pri­va­cy to us is a human right. It’s a civ­il lib­er­ty.” (Mr. Cook’s crit­i­cisms infu­ri­at­ed Mr. Zucker­berg, who lat­er ordered his man­age­ment team to use only Android phones — argu­ing that the oper­at­ing sys­tem had far more users than Apple’s.)

    Face­book scram­bled anew. Exec­u­tives qui­et­ly shelved an inter­nal com­mu­ni­ca­tions cam­paign, called “We Get It,” meant to assure employ­ees that the com­pa­ny was com­mit­ted to get­ting back on track in 2018.

    Then Face­book went on the offen­sive. Mr. Kaplan pre­vailed on Ms. Sand­berg to pro­mote Kevin Mar­tin, a for­mer Fed­er­al Com­mu­ni­ca­tions Com­mis­sion chair­man and fel­low Bush admin­is­tra­tion vet­er­an, to lead the company’s Amer­i­can lob­by­ing efforts. Face­book also expand­ed its work with Defin­ers.

    On a con­ser­v­a­tive news site called the NTK Net­work, dozens of arti­cles blast­ed Google and Apple for unsa­vory busi­ness prac­tices. One sto­ry called Mr. Cook hyp­o­crit­i­cal for chid­ing Face­book over pri­va­cy, not­ing that Apple also col­lects reams of data from users. Anoth­er played down the impact of the Rus­sians’ use of Face­book.

    The rash of news cov­er­age was no acci­dent: NTK is an affil­i­ate of Defin­ers, shar­ing offices and staff with the pub­lic rela­tions firm in Arling­ton, Va. Many NTK Net­work sto­ries are writ­ten by staff mem­bers at Defin­ers or Amer­i­ca Ris­ing, the company’s polit­i­cal oppo­si­tion-research arm, to attack their clients’ ene­mies. While the NTK Net­work does not have a large audi­ence of its own, its con­tent is fre­quent­ly picked up by pop­u­lar con­ser­v­a­tive out­lets, includ­ing Bre­it­bart.

    Mr. Miller acknowl­edged that Face­book and Apple do not direct­ly com­pete. Defin­ers’ work on Apple is fund­ed by a third tech­nol­o­gy com­pa­ny, he said, but Face­book has pushed back against Apple because Mr. Cook’s crit­i­cism upset Face­book.

    If the pri­va­cy issue comes up, Face­book is hap­py to “mud­dy the waters,” Mr. Miller said over drinks at an Oak­land, Calif., bar last month.

    On Thurs­day, after this arti­cle was pub­lished, Face­book said that it had end­ed its rela­tion­ship with Defin­ers, with­out cit­ing a rea­son.

    ...

    Per­son­al Appeals in Wash­ing­ton

    Ms. Sand­berg had said lit­tle pub­licly about the company’s prob­lems. But inside Face­book, her approach had begun to draw crit­i­cism.

    ...

    Face­book also con­tin­ued to look for ways to deflect crit­i­cism to rivals. In June, after The Times report­ed on Facebook’s pre­vi­ous­ly undis­closed deals to share user data with device mak­ers — part­ner­ships Face­book had failed to dis­close to law­mak­ers — exec­u­tives ordered up focus groups in Wash­ing­ton.

    In sep­a­rate ses­sions with lib­er­als and con­ser­v­a­tives, about a dozen at a time, Face­book pre­viewed mes­sages to law­mak­ers. Among the approach­es it test­ed was bring­ing YouTube and oth­er social media plat­forms into the con­tro­ver­sy, while argu­ing that Google struck sim­i­lar data-shar­ing deals.

    Deflect­ing Crit­i­cism

    By then, some of the harsh­est crit­i­cism of Face­book was com­ing from the polit­i­cal left, where activists and pol­i­cy experts had begun call­ing for the com­pa­ny to be bro­ken up.

    In July, orga­niz­ers with a coali­tion called Free­dom from Face­book crashed a hear­ing of the House Judi­cia­ry Com­mit­tee, where a com­pa­ny exec­u­tive was tes­ti­fy­ing about its poli­cies. As the exec­u­tive spoke, the orga­niz­ers held aloft signs depict­ing Ms. Sand­berg and Mr. Zucker­berg, who are both Jew­ish, as two heads of an octo­pus stretch­ing around the globe.

    Eddie Vale, a Demo­c­ra­t­ic pub­lic rela­tions strate­gist who led the protest, lat­er said the image was meant to evoke old car­toons of Stan­dard Oil, the Gild­ed Age monop­oly. But a Face­book offi­cial quick­ly called the Anti-Defama­tion League, a lead­ing Jew­ish civ­il rights orga­ni­za­tion, to flag the sign. Face­book and oth­er tech com­pa­nies had part­nered with the civ­il rights group since late 2017 on an ini­tia­tive to com­bat anti-Semi­tism and hate speech online.

    That after­noon, the A.D.L. issued a warn­ing from its Twit­ter account.

    “Depict­ing Jews as an octo­pus encir­cling the globe is a clas­sic anti-Semit­ic trope,” the orga­ni­za­tion wrote. “Protest Face­book — or any­one — all you want, but pick a dif­fer­ent image.” The crit­i­cism was soon echoed in con­ser­v­a­tive out­lets includ­ing The Wash­ing­ton Free Bea­con, which has sought to tie Free­dom from Face­book to what the pub­li­ca­tion calls “extreme anti-Israel groups.”

    An A.D.L. spokes­woman, Bet­sai­da Alcan­tara, said the group rou­tine­ly field­ed reports of anti-Semit­ic slurs from jour­nal­ists, syn­a­gogues and oth­ers. “Our experts eval­u­ate each one based on our years of expe­ri­ence, and we respond appro­pri­ate­ly,” Ms. Alcan­tara said. (The group has at times sharply crit­i­cized Face­book, includ­ing when Mr. Zucker­berg sug­gest­ed that his com­pa­ny should not cen­sor Holo­caust deniers.)

    Face­book also used Defin­ers to take on big­ger oppo­nents, such as Mr. Soros, a long­time boogey­man to main­stream con­ser­v­a­tives and the tar­get of intense anti-Semit­ic smears on the far right. A research doc­u­ment cir­cu­lat­ed by Defin­ers to reporters this sum­mer, just a month after the House hear­ing, cast Mr. Soros as the unac­knowl­edged force behind what appeared to be a broad anti-Face­book move­ment.

    He was a nat­ur­al tar­get. In a speech at the World Eco­nom­ic Forum in Jan­u­ary, he had attacked Face­book and Google, describ­ing them as a monop­o­list “men­ace” with “nei­ther the will nor the incli­na­tion to pro­tect soci­ety against the con­se­quences of their actions.”

    Defin­ers pressed reporters to explore the finan­cial con­nec­tions between Mr. Soros’s fam­i­ly or phil­an­thropies and groups that were mem­bers of Free­dom from Face­book, such as Col­or of Change, an online racial jus­tice orga­ni­za­tion, as well as a pro­gres­sive group found­ed by Mr. Soros’s son. (An offi­cial at Mr. Soros’s Open Soci­ety Foun­da­tions said the phil­an­thropy had sup­port­ed both mem­ber groups, but not Free­dom from Face­book, and had made no grants to sup­port cam­paigns against Face­book.)

    ...

    ———-

    “Delay, Deny and Deflect: How Facebook’s Lead­ers Fought Through Cri­sis” by Sheera Frenkel, Nicholas Con­fes­sore, Cecil­ia Kang, Matthew Rosen­berg and Jack Nicas; The New York Times; 11/14/2018

    “While Mr. Zucker­berg has con­duct­ed a pub­lic apol­o­gy tour in the last year, Ms. Sand­berg has over­seen an aggres­sive lob­by­ing cam­paign to com­bat Facebook’s crit­ics, shift pub­lic anger toward rival com­pa­nies and ward off dam­ag­ing reg­u­la­tion. Face­book employed a Repub­li­can oppo­si­tion-research firm to dis­cred­it activist pro­test­ers, in part by link­ing them to the lib­er­al financier George Soros. It also tapped its busi­ness rela­tion­ships, lob­by­ing a Jew­ish civ­il rights group to cast some crit­i­cism of the com­pa­ny as anti-Semit­ic.”

    Imagine if your job were to handle Facebook's bad press. That was apparently Sheryl Sandberg's role behind the scenes while Mark Zuckerberg acted as the apologetic public face of Facebook.

    But both Zucker­berg and Sand­berg appeared to have large­ly the same response to the scan­dals involv­ing Face­book’s grow­ing use as a plat­form for spread­ing hate and extrem­ism: keep Face­book out of those dis­putes by argu­ing that it’s just a plat­form, not a pub­lish­er:

    ...
    ‘Don’t Poke the Bear’

    Three years ago, Mr. Zucker­berg, who found­ed Face­book in 2004 while attend­ing Har­vard, was cel­e­brat­ed for the company’s extra­or­di­nary suc­cess. Ms. Sand­berg, a for­mer Clin­ton admin­is­tra­tion offi­cial and Google vet­er­an, had become a fem­i­nist icon with the pub­li­ca­tion of her empow­er­ment man­i­festo, “Lean In,” in 2013.

    Like oth­er tech­nol­o­gy exec­u­tives, Mr. Zucker­berg and Ms. Sand­berg cast their com­pa­ny as a force for social good. Facebook’s lofty aims were embla­zoned even on secu­ri­ties fil­ings: “Our mis­sion is to make the world more open and con­nect­ed.”

    But as Face­book grew, so did the hate speech, bul­ly­ing and oth­er tox­ic con­tent on the plat­form. When researchers and activists in Myan­mar, India, Ger­many and else­where warned that Face­book had become an instru­ment of gov­ern­ment pro­pa­gan­da and eth­nic cleans­ing, the com­pa­ny large­ly ignored them. Face­book had posi­tioned itself as a plat­form, not a pub­lish­er. Tak­ing respon­si­bil­i­ty for what users post­ed, or act­ing to cen­sor it, was expen­sive and com­pli­cat­ed. Many Face­book exec­u­tives wor­ried that any such efforts would back­fire.
    ...

    Sandberg also appears to have increasingly relied on Joel Kaplan, Facebook's vice president of global public policy and a veteran of the George W. Bush administration, for advice on handling these issues and scandals. When Donald Trump, running for president in 2015, announced his plan for a "total and complete shutdown" on Muslims entering the United States, and that message was shared more than 15,000 times on Facebook, Zuckerberg raised the question of whether Trump had violated the platform's terms of service. Sandberg turned to Kaplan for advice. Kaplan, unsurprisingly, recommended against any crackdown on Trump's use of Facebook, arguing it would be seen as obstructing free speech and would prompt a conservative backlash. Kaplan's advice was taken:

    ...
    Then Don­ald J. Trump ran for pres­i­dent. He described Mus­lim immi­grants and refugees as a dan­ger to Amer­i­ca, and in Decem­ber 2015 post­ed a state­ment on Face­book call­ing for a “total and com­plete shut­down” on Mus­lims enter­ing the Unit­ed States. Mr. Trump’s call to arms — wide­ly con­demned by Democ­rats and some promi­nent Repub­li­cans — was shared more than 15,000 times on Face­book, an illus­tra­tion of the site’s pow­er to spread racist sen­ti­ment.

    Mr. Zucker­berg, who had helped found a non­prof­it ded­i­cat­ed to immi­gra­tion reform, was appalled, said employ­ees who spoke to him or were famil­iar with the con­ver­sa­tion. He asked Ms. Sand­berg and oth­er exec­u­tives if Mr. Trump had vio­lat­ed Facebook’s terms of ser­vice.

    The ques­tion was unusu­al. Mr. Zucker­berg typ­i­cal­ly focused on broad­er tech­nol­o­gy issues; pol­i­tics was Ms. Sandberg’s domain. In 2010, Ms. Sand­berg, a Demo­c­rat, had recruit­ed a friend and fel­low Clin­ton alum, Marne Levine, as Facebook’s chief Wash­ing­ton rep­re­sen­ta­tive. A year lat­er, after Repub­li­cans seized con­trol of the House, Ms. Sand­berg installed anoth­er friend, a well-con­nect­ed Repub­li­can: Joel Kaplan, who had attend­ed Har­vard with Ms. Sand­berg and lat­er served in the George W. Bush admin­is­tra­tion.

    Some at Face­book viewed Mr. Trump’s 2015 attack on Mus­lims as an oppor­tu­ni­ty to final­ly take a stand against the hate speech cours­ing through its plat­form. But Ms. Sand­berg, who was edg­ing back to work after the death of her hus­band sev­er­al months ear­li­er, del­e­gat­ed the mat­ter to Mr. Schrage and Moni­ka Bick­ert, a for­mer pros­e­cu­tor whom Ms. Sand­berg had recruit­ed as the company’s head of glob­al pol­i­cy man­age­ment. Ms. Sand­berg also turned to the Wash­ing­ton office — par­tic­u­lar­ly to Mr. Kaplan, said peo­ple who par­tic­i­pat­ed in or were briefed on the dis­cus­sions.

    In video con­fer­ence calls between the Sil­i­con Val­ley head­quar­ters and Wash­ing­ton, the three offi­cials con­strued their task nar­row­ly. They parsed the company’s terms of ser­vice to see if the post, or Mr. Trump’s account, vio­lat­ed Facebook’s rules.

    Mr. Kaplan argued that Mr. Trump was an impor­tant pub­lic fig­ure and that shut­ting down his account or remov­ing the state­ment could be seen as obstruct­ing free speech, said three employ­ees who knew of the dis­cus­sions. He said it could also stoke a con­ser­v­a­tive back­lash.

    “Don’t poke the bear,” Mr. Kaplan warned.

    Mr. Zucker­berg did not par­tic­i­pate in the debate. Ms. Sand­berg attend­ed some of the video meet­ings but rarely spoke.

    Mr. Schrage con­clud­ed that Mr. Trump’s lan­guage had not vio­lat­ed Facebook’s rules and that the candidate’s views had pub­lic val­ue. “We were try­ing to make a deci­sion based on all the legal and tech­ni­cal evi­dence before us,” he said in an inter­view.
    ...

    And note how, after Trump won, Face­book hired a for­mer aide to Jeff Ses­sions and lob­by­ing firms linked to Repub­li­can law­mak­ers who had juris­dic­tion over inter­net com­pa­nies. Face­book was mak­ing pleas­ing Repub­li­cans in Wash­ing­ton a top pri­or­i­ty:

    ...
    In the end, Mr. Trump’s state­ment and account remained on the site. When Mr. Trump won elec­tion the next fall, giv­ing Repub­li­cans con­trol of the White House as well as Con­gress, Mr. Kaplan was empow­ered to plan accord­ing­ly. The com­pa­ny hired a for­mer aide to Mr. Trump’s new attor­ney gen­er­al, Jeff Ses­sions, along with lob­by­ing firms linked to Repub­li­can law­mak­ers who had juris­dic­tion over inter­net com­pa­nies.
    ...

    Kaplan also encouraged Facebook to avoid investigating the alleged Russian troll campaigns too closely. That was his advice both in 2016, while the campaign was ongoing, and again in 2017, after it had ended. Interestingly, Facebook apparently found accounts linked to 'Russian hackers' that were using Facebook to look up information on presidential campaigns. This was in the spring of 2016. Keep in mind that the initial reports of the hacked emails didn't start until mid-June of 2016, and summer technically started about a week later. So how did Facebook's internal team know these accounts were associated with Russian hackers before the 'Russian hacker' scandal erupted? That's unclear. The article goes on to say that this same team also found accounts linked to the Russian hackers messaging journalists to share contents of the hacked emails. Was "Guccifer 2.0" using Facebook to talk with journalists? That's also unclear. But it sounds like Facebook was indeed actively observing what it thought were Russian hackers using the platform:

    ...
    Min­i­miz­ing Russia’s Role

    In the final months of Mr. Trump’s pres­i­den­tial cam­paign, Russ­ian agents esca­lat­ed a year­long effort to hack and harass his Demo­c­ra­t­ic oppo­nents, cul­mi­nat­ing in the release of thou­sands of emails stolen from promi­nent Democ­rats and par­ty offi­cials.

    Face­book had said noth­ing pub­licly about any prob­lems on its own plat­form. But in the spring of 2016, a com­pa­ny expert on Russ­ian cyber­war­fare spot­ted some­thing wor­ri­some. He reached out to his boss, Mr. Sta­mos.

    Mr. Stamos’s team dis­cov­ered that Russ­ian hack­ers appeared to be prob­ing Face­book accounts for peo­ple con­nect­ed to the pres­i­den­tial cam­paigns, said two employ­ees. Months lat­er, as Mr. Trump bat­tled Hillary Clin­ton in the gen­er­al elec­tion, the team also found Face­book accounts linked to Russ­ian hack­ers who were mes­sag­ing jour­nal­ists to share infor­ma­tion from the stolen emails.

    Mr. Sta­mos, 39, told Col­in Stretch, Facebook’s gen­er­al coun­sel, about the find­ings, said two peo­ple involved in the con­ver­sa­tions. At the time, Face­book had no pol­i­cy on dis­in­for­ma­tion or any resources ded­i­cat­ed to search­ing for it.
    ...

    Alex Stamos, Facebook's head of security, directed a team to examine the Russian activity on Facebook. And yet Zuckerberg and Sandberg apparently never learned of the team's findings until December of 2016, after the election. When they did learn, Sandberg got angry at Stamos for not getting approval before looking into the matter, because it could leave the company legally exposed, highlighting again how not knowing about the abuses on its platform is a legal strategy for the company. By January of 2017, Stamos wanted to issue a public paper on the findings, but Joel Kaplan shot down the idea, arguing that doing so would cause Republicans to turn on the company. Sandberg again sided with Kaplan:

    ...
    Mr. Sta­mos, act­ing on his own, then direct­ed a team to scru­ti­nize the extent of Russ­ian activ­i­ty on Face­book. In Decem­ber 2016, after Mr. Zucker­berg pub­licly scoffed at the idea that fake news on Face­book had helped elect Mr. Trump, Mr. Sta­mos — alarmed that the company’s chief exec­u­tive seemed unaware of his team’s find­ings — met with Mr. Zucker­berg, Ms. Sand­berg and oth­er top Face­book lead­ers.

    Ms. Sand­berg was angry. Look­ing into the Russ­ian activ­i­ty with­out approval, she said, had left the com­pa­ny exposed legal­ly. Oth­er exec­u­tives asked Mr. Sta­mos why they had not been told soon­er.

    Still, Ms. Sand­berg and Mr. Zucker­berg decid­ed to expand on Mr. Stamos’s work, cre­at­ing a group called Project P, for “pro­pa­gan­da,” to study false news on the site, accord­ing to peo­ple involved in the dis­cus­sions. By Jan­u­ary 2017, the group knew that Mr. Stamos’s orig­i­nal team had only scratched the sur­face of Russ­ian activ­i­ty on Face­book, and pressed to issue a pub­lic paper about their find­ings.

    But Mr. Kaplan and oth­er Face­book exec­u­tives object­ed. Wash­ing­ton was already reel­ing from an offi­cial find­ing by Amer­i­can intel­li­gence agen­cies that Vladimir V. Putin, the Russ­ian pres­i­dent, had per­son­al­ly ordered an influ­ence cam­paign aimed at help­ing elect Mr. Trump.

    If Face­book impli­cat­ed Rus­sia fur­ther, Mr. Kaplan said, Repub­li­cans would accuse the com­pa­ny of sid­ing with Democ­rats. And if Face­book pulled down the Rus­sians’ fake pages, reg­u­lar Face­book users might also react with out­rage at hav­ing been deceived: His own moth­er-in-law, Mr. Kaplan said, had fol­lowed a Face­book page cre­at­ed by Russ­ian trolls.

    Ms. Sand­berg sided with Mr. Kaplan, recalled four peo­ple involved. Mr. Zucker­berg — who spent much of 2017 on a nation­al “lis­ten­ing tour,” feed­ing cows in Wis­con­sin and eat­ing din­ner with Soma­li refugees in Min­neso­ta — did not par­tic­i­pate in the con­ver­sa­tions about the pub­lic paper. When it was pub­lished that April, the word “Rus­sia” nev­er appeared.
    ...

    “Mr. Sta­mos, act­ing on his own, then direct­ed a team to scru­ti­nize the extent of Russ­ian activ­i­ty on Face­book. In Decem­ber 2016, after Mr. Zucker­berg pub­licly scoffed at the idea that fake news on Face­book had helped elect Mr. Trump, Mr. Sta­mos — alarmed that the company’s chief exec­u­tive seemed unaware of his team’s find­ings — met with Mr. Zucker­berg, Ms. Sand­berg and oth­er top Face­book lead­ers.”

    Both Zucker­berg and Sand­berg were appar­ent­ly unaware of the find­ings of Sta­mos’s team that had been look­ing into Russ­ian activ­i­ty since the spring of 2016 and found ear­ly signs of the ‘Russ­ian hack­ing teams’ set­ting up Face­book pages to dis­trib­ute the emails. Huh.

    And then we get to Definers Public Affairs, the company founded by Republican political operatives and specializing in bringing political campaign tactics to corporate public relations. In October of 2017, Facebook appears to have decided to double down on the Definers strategy, an approach that revolves around simultaneously pushing out positive coverage of Facebook while attacking Facebook's opponents and critics to muddy the waters:

    ...
    In Octo­ber 2017, Face­book also expand­ed its work with a Wash­ing­ton-based con­sul­tant, Defin­ers Pub­lic Affairs, that had orig­i­nal­ly been hired to mon­i­tor press cov­er­age of the com­pa­ny. Found­ed by vet­er­ans of Repub­li­can pres­i­den­tial pol­i­tics, Defin­ers spe­cial­ized in apply­ing polit­i­cal cam­paign tac­tics to cor­po­rate pub­lic rela­tions — an approach long employed in Wash­ing­ton by big telecom­mu­ni­ca­tions firms and activist hedge fund man­agers, but less com­mon in tech.

    Defin­ers had estab­lished a Sil­i­con Val­ley out­post ear­li­er that year, led by Tim Miller, a for­mer spokesman for Jeb Bush who preached the virtues of cam­paign-style oppo­si­tion research. For tech firms, he argued in one inter­view, a goal should be to “have pos­i­tive con­tent pushed out about your com­pa­ny and neg­a­tive con­tent that’s being pushed out about your com­peti­tor.”

    Face­book quick­ly adopt­ed that strat­e­gy. In Novem­ber 2017, the social net­work came out in favor of a bill called the Stop Enabling Sex Traf­fick­ers Act, which made inter­net com­pa­nies respon­si­ble for sex traf­fick­ing ads on their sites.

    Google and oth­ers had fought the bill for months, wor­ry­ing it would set a cum­ber­some prece­dent. But the sex traf­fick­ing bill was cham­pi­oned by Sen­a­tor John Thune, a Repub­li­can of South Dako­ta who had pum­meled Face­book over accu­sa­tions that it cen­sored con­ser­v­a­tive con­tent, and Sen­a­tor Richard Blu­men­thal, a Con­necti­cut Demo­c­rat and senior com­merce com­mit­tee mem­ber who was a fre­quent crit­ic of Face­book.

    Face­book broke ranks with oth­er tech com­pa­nies, hop­ing the move would help repair rela­tions on both sides of the aisle, said two con­gres­sion­al staffers and three tech indus­try offi­cials.

    When the bill came to a vote in the House in Feb­ru­ary, Ms. Sand­berg offered pub­lic sup­port online, urg­ing Con­gress to “make sure we pass mean­ing­ful and strong leg­is­la­tion to stop sex traf­fick­ing.”
    ...

    Then, in March of this year, the Cambridge Analytica scandal blew open. In response, Kaplan convinced Sandberg to promote another Republican to help deal with the damage: Kevin Martin, a former FCC chairman and Bush administration veteran, was chosen to lead Facebook's US lobbying efforts. Definers was also tapped to deal with the scandal, and as part of that response it used its affiliated NTK Network to pump out waves of articles slamming Google and Apple for various reasons:

    ...
    Oppo­si­tion Research

    In March, The Times, The Observ­er of Lon­don and The Guardian pre­pared to pub­lish a joint inves­ti­ga­tion into how Face­book user data had been appro­pri­at­ed by Cam­bridge Ana­lyt­i­ca to pro­file Amer­i­can vot­ers. A few days before pub­li­ca­tion, The Times pre­sent­ed Face­book with evi­dence that copies of improp­er­ly acquired Face­book data still exist­ed, despite ear­li­er promis­es by Cam­bridge exec­u­tives and oth­ers to delete it.

    Mr. Zucker­berg and Ms. Sand­berg met with their lieu­tenants to deter­mine a response. They decid­ed to pre-empt the sto­ries, say­ing in a state­ment pub­lished late on a Fri­day night that Face­book had sus­pend­ed Cam­bridge Ana­lyt­i­ca from its plat­form. The exec­u­tives fig­ured that get­ting ahead of the news would soft­en its blow, accord­ing to peo­ple in the dis­cus­sions.

    They were wrong. The sto­ry drew world­wide out­rage, prompt­ing law­suits and offi­cial inves­ti­ga­tions in Wash­ing­ton, Lon­don and Brus­sels. For days, Mr. Zucker­berg and Ms. Sand­berg remained out of sight, mulling how to respond. While the Rus­sia inves­ti­ga­tion had devolved into an increas­ing­ly par­ti­san bat­tle, the Cam­bridge scan­dal set off Democ­rats and Repub­li­cans alike. And in Sil­i­con Val­ley, oth­er tech firms began exploit­ing the out­cry to bur­nish their own brands.

    “We’re not going to traf­fic in your per­son­al life,” Tim Cook, Apple’s chief exec­u­tive, said in an MSNBC inter­view. “Pri­va­cy to us is a human right. It’s a civ­il lib­er­ty.” (Mr. Cook’s crit­i­cisms infu­ri­at­ed Mr. Zucker­berg, who lat­er ordered his man­age­ment team to use only Android phones — argu­ing that the oper­at­ing sys­tem had far more users than Apple’s.)

    Face­book scram­bled anew. Exec­u­tives qui­et­ly shelved an inter­nal com­mu­ni­ca­tions cam­paign, called “We Get It,” meant to assure employ­ees that the com­pa­ny was com­mit­ted to get­ting back on track in 2018.

    Then Face­book went on the offen­sive. Mr. Kaplan pre­vailed on Ms. Sand­berg to pro­mote Kevin Mar­tin, a for­mer Fed­er­al Com­mu­ni­ca­tions Com­mis­sion chair­man and fel­low Bush admin­is­tra­tion vet­er­an, to lead the company’s Amer­i­can lob­by­ing efforts. Face­book also expand­ed its work with Defin­ers.

    On a con­ser­v­a­tive news site called the NTK Net­work, dozens of arti­cles blast­ed Google and Apple for unsa­vory busi­ness prac­tices. One sto­ry called Mr. Cook hyp­o­crit­i­cal for chid­ing Face­book over pri­va­cy, not­ing that Apple also col­lects reams of data from users. Anoth­er played down the impact of the Rus­sians’ use of Face­book.

    The rash of news cov­er­age was no acci­dent: NTK is an affil­i­ate of Defin­ers, shar­ing offices and staff with the pub­lic rela­tions firm in Arling­ton, Va. Many NTK Net­work sto­ries are writ­ten by staff mem­bers at Defin­ers or Amer­i­ca Ris­ing, the company’s polit­i­cal oppo­si­tion-research arm, to attack their clients’ ene­mies. While the NTK Net­work does not have a large audi­ence of its own, its con­tent is fre­quent­ly picked up by pop­u­lar con­ser­v­a­tive out­lets, includ­ing Bre­it­bart.
    ...

    Finally, in July of this year, we find Facebook accusing its critics of anti-Semitism at the same time Definers was deploying an arguably anti-Semitic attack against those very same critics, part of a general strategy of painting Facebook's critics as puppets of George Soros:

    ...
    Deflect­ing Crit­i­cism

    By then, some of the harsh­est crit­i­cism of Face­book was com­ing from the polit­i­cal left, where activists and pol­i­cy experts had begun call­ing for the com­pa­ny to be bro­ken up.

    In July, orga­niz­ers with a coali­tion called Free­dom from Face­book crashed a hear­ing of the House Judi­cia­ry Com­mit­tee, where a com­pa­ny exec­u­tive was tes­ti­fy­ing about its poli­cies. As the exec­u­tive spoke, the orga­niz­ers held aloft signs depict­ing Ms. Sand­berg and Mr. Zucker­berg, who are both Jew­ish, as two heads of an octo­pus stretch­ing around the globe.

    Eddie Vale, a Demo­c­ra­t­ic pub­lic rela­tions strate­gist who led the protest, lat­er said the image was meant to evoke old car­toons of Stan­dard Oil, the Gild­ed Age monop­oly. But a Face­book offi­cial quick­ly called the Anti-Defama­tion League, a lead­ing Jew­ish civ­il rights orga­ni­za­tion, to flag the sign. Face­book and oth­er tech com­pa­nies had part­nered with the civ­il rights group since late 2017 on an ini­tia­tive to com­bat anti-Semi­tism and hate speech online.

    That after­noon, the A.D.L. issued a warn­ing from its Twit­ter account.

    “Depict­ing Jews as an octo­pus encir­cling the globe is a clas­sic anti-Semit­ic trope,” the orga­ni­za­tion wrote. “Protest Face­book — or any­one — all you want, but pick a dif­fer­ent image.” The crit­i­cism was soon echoed in con­ser­v­a­tive out­lets includ­ing The Wash­ing­ton Free Bea­con, which has sought to tie Free­dom from Face­book to what the pub­li­ca­tion calls “extreme anti-Israel groups.”

    An A.D.L. spokes­woman, Bet­sai­da Alcan­tara, said the group rou­tine­ly field­ed reports of anti-Semit­ic slurs from jour­nal­ists, syn­a­gogues and oth­ers. “Our experts eval­u­ate each one based on our years of expe­ri­ence, and we respond appro­pri­ate­ly,” Ms. Alcan­tara said. (The group has at times sharply crit­i­cized Face­book, includ­ing when Mr. Zucker­berg sug­gest­ed that his com­pa­ny should not cen­sor Holo­caust deniers.)

    Face­book also used Defin­ers to take on big­ger oppo­nents, such as Mr. Soros, a long­time boogey­man to main­stream con­ser­v­a­tives and the tar­get of intense anti-Semit­ic smears on the far right. A research doc­u­ment cir­cu­lat­ed by Defin­ers to reporters this sum­mer, just a month after the House hear­ing, cast Mr. Soros as the unac­knowl­edged force behind what appeared to be a broad anti-Face­book move­ment.

    He was a nat­ur­al tar­get. In a speech at the World Eco­nom­ic Forum in Jan­u­ary, he had attacked Face­book and Google, describ­ing them as a monop­o­list “men­ace” with “nei­ther the will nor the incli­na­tion to pro­tect soci­ety against the con­se­quences of their actions.”

    Defin­ers pressed reporters to explore the finan­cial con­nec­tions between Mr. Soros’s fam­i­ly or phil­an­thropies and groups that were mem­bers of Free­dom from Face­book, such as Col­or of Change, an online racial jus­tice orga­ni­za­tion, as well as a pro­gres­sive group found­ed by Mr. Soros’s son. (An offi­cial at Mr. Soros’s Open Soci­ety Foun­da­tions said the phil­an­thropy had sup­port­ed both mem­ber groups, but not Free­dom from Face­book, and had made no grants to sup­port cam­paigns against Face­book.)
    ...

    So as we can see, Face­book’s response to scan­dals appears to fall into the fol­low­ing pat­tern:

    1. Inten­tion­al­ly ignore the scan­dal.

    2. When it’s no longer pos­si­ble to ignore, try to get ahead of it by going pub­lic with a watered down admis­sion of the prob­lem.

    3. When get­ting ahead of the sto­ry does­n't work, attack Face­book's crit­ics (like sug­gest­ing they are all pawns of George Soros).

    4. Don’t piss off Repub­li­cans.

    Also, regard­ing the dis­cov­ery of Russ­ian hack­ers set­ting up Face­book accounts in the spring of 2016 to dis­trib­ute the hacked emails, here's a Wash­ing­ton Post arti­cle from Sep­tem­ber of 2017 that talks about this. And accord­ing to the arti­cle, Face­book dis­cov­ered these alleged Russ­ian hack­er accounts in June of 2016 (tech­ni­cal­ly still spring) and prompt­ly informed the FBI. The Face­book cyber­se­cu­ri­ty team was report­ed­ly track­ing APT28 (Fan­cy Bear) as part of its nor­mal work and dis­cov­ered this activ­i­ty in the process. They told the FBI, and then short­ly after­wards they dis­cov­ered that pages for Guc­cifer 2.0 and DCLeaks were being set up to pro­mote the stolen emails. And recall from the above arti­cle that the Face­book team appar­ent­ly dis­cov­ered mes­sages from these accounts to jour­nal­ists.

    Inter­est­ing­ly, while the arti­cle says this was in June of 2016, it does­n't say when in June of 2016. And that tim­ing is rather impor­tant, since the first Wash­ing­ton Post arti­cle on the hack of the DNC ran on June 14, and Guc­cifer 2.0 popped up and went pub­lic just a day lat­er. So did Face­book dis­cov­er this activ­i­ty before the reports about the hacked emails? That remains unclear, but it sounds like Face­book knows how to track APT28/Fancy Bear's activ­i­ty on its plat­form, rou­tine­ly does so, and that's how it dis­cov­ered the email dis­tri­b­u­tion oper­a­tion. And that implies that if APT28/Fancy Bear real­ly did run this oper­a­tion, they did it in a man­ner that allowed cyber­se­cu­ri­ty researchers to track their activ­i­ty all over the web and on sites like Face­book, which would be one more exam­ple of the inex­plic­a­bly poor oper­a­tional secu­ri­ty of these elite Russ­ian hack­ers:

    The Wash­ing­ton Post

    Oba­ma tried to give Zucker­berg a wake-up call over fake news on Face­book

    By Adam Entous, Eliz­a­beth Dwoskin and Craig Tim­berg
    Sep­tem­ber 24, 2017

    This sto­ry has been updat­ed with an addi­tion­al response from Face­book.

    Nine days after Face­book chief exec­u­tive Mark Zucker­berg dis­missed as “crazy” the idea that fake news on his com­pa­ny’s social net­work played a key role in the U.S. elec­tion, Pres­i­dent Barack Oba­ma pulled the youth­ful tech bil­lion­aire aside and deliv­ered what he hoped would be a wake-up call.

    ...

    A Russ­ian oper­a­tion

    It turned out that Face­book, with­out real­iz­ing it, had stum­bled into the Russ­ian oper­a­tion as it was get­ting under­way in June 2016.

    At the time, cyber­se­cu­ri­ty experts at the com­pa­ny were track­ing a Russ­ian hack­er group known as APT28, or Fan­cy Bear, which U.S. intel­li­gence offi­cials con­sid­ered an arm of the Russ­ian mil­i­tary intel­li­gence ser­vice, the GRU, accord­ing to peo­ple famil­iar with Face­book’s activ­i­ties.

    Mem­bers of the Russ­ian hack­er group were best known for steal­ing mil­i­tary plans and data from polit­i­cal tar­gets, so the secu­ri­ty experts assumed that they were plan­ning some sort of espi­onage oper­a­tion — not a far-reach­ing dis­in­for­ma­tion cam­paign designed to shape the out­come of the U.S. pres­i­den­tial race.

    Face­book exec­u­tives shared with the FBI their sus­pi­cions that a Russ­ian espi­onage oper­a­tion was in the works, a per­son famil­iar with the mat­ter said. An FBI spokesper­son had no com­ment.

    Soon there­after, Face­book’s cyber experts found evi­dence that mem­bers of APT28 were set­ting up a series of shad­owy accounts — includ­ing a per­sona known as Guc­cifer 2.0 and a Face­book page called DCLeaks — to pro­mote stolen emails and oth­er doc­u­ments dur­ing the pres­i­den­tial race. Face­book offi­cials once again con­tact­ed the FBI to share what they had seen.

    After the Novem­ber elec­tion, Face­book began to look more broad­ly at the accounts that had been cre­at­ed dur­ing the cam­paign.

    A review by the com­pa­ny found that most of the groups behind the prob­lem­at­ic pages had clear finan­cial motives, which sug­gest­ed that they weren’t work­ing for a for­eign gov­ern­ment.

    But amid the mass of data the com­pa­ny was ana­lyz­ing, the secu­ri­ty team did not find clear evi­dence of Russ­ian dis­in­for­ma­tion or ad pur­chas­es by Russ­ian-linked accounts.

    Nor did any U.S. law enforce­ment or intel­li­gence offi­cials vis­it the com­pa­ny to lay out what they knew, said peo­ple famil­iar with the effort, even after the nation’s top intel­li­gence offi­cial, James R. Clap­per Jr., tes­ti­fied on Capi­tol Hill in Jan­u­ary that the Rus­sians had waged a mas­sive pro­pa­gan­da cam­paign online.

    ...
    ———-

    “Oba­ma tried to give Zucker­berg a wake-up call over fake news on Face­book” by Adam Entous, Eliz­a­beth Dwoskin and Craig Tim­berg; The Wash­ing­ton Post; 09/24/2017

    “It turned out that Face­book, with­out real­iz­ing it, had stum­bled into the Russ­ian oper­a­tion as it was get­ting under­way in June 2016.”

    It’s kind of an amaz­ing sto­ry. Just by acci­dent, Face­book’s cyber­se­cu­ri­ty experts were already track­ing APT28 some­how and noticed a bunch of activ­i­ty by the group on Face­book. They alert the FBI. This is in June of 2016. “Soon there­after”, Face­book finds evi­dence that mem­bers of APT28 were set­ting up accounts for Guc­cifer 2.0 and DCLeaks. Face­book again informed the FBI:

    ...
    At the time, cyber­se­cu­ri­ty experts at the com­pa­ny were track­ing a Russ­ian hack­er group known as APT28, or Fan­cy Bear, which U.S. intel­li­gence offi­cials con­sid­ered an arm of the Russ­ian mil­i­tary intel­li­gence ser­vice, the GRU, accord­ing to peo­ple famil­iar with Face­book’s activ­i­ties.

    Mem­bers of the Russ­ian hack­er group were best known for steal­ing mil­i­tary plans and data from polit­i­cal tar­gets, so the secu­ri­ty experts assumed that they were plan­ning some sort of espi­onage oper­a­tion — not a far-reach­ing dis­in­for­ma­tion cam­paign designed to shape the out­come of the U.S. pres­i­den­tial race.

    Face­book exec­u­tives shared with the FBI their sus­pi­cions that a Russ­ian espi­onage oper­a­tion was in the works, a per­son famil­iar with the mat­ter said. An FBI spokesper­son had no com­ment.

    Soon there­after, Face­book’s cyber experts found evi­dence that mem­bers of APT28 were set­ting up a series of shad­owy accounts — includ­ing a per­sona known as Guc­cifer 2.0 and a Face­book page called DCLeaks — to pro­mote stolen emails and oth­er doc­u­ments dur­ing the pres­i­den­tial race. Face­book offi­cials once again con­tact­ed the FBI to share what they had seen.
    ...

    So Face­book alleged­ly detect­ed APT28/Fancy Bear activ­i­ty in the spring of 2016. It's unclear how they knew these were APT28/Fancy Bear hack­ers and how they were track­ing their activ­i­ty. And then they dis­cov­ered these APT28 hack­ers were set­ting up pages for Guc­cifer 2.0 and DCLeaks. And as we saw in the above arti­cle, they also found mes­sages from these accounts to jour­nal­ists dis­cussing the emails.

    It’s a remark­able sto­ry, in part because it’s almost nev­er told. We learn that Face­book appar­ent­ly has the abil­i­ty to track exact­ly the same Russ­ian hack­er group that’s accused of car­ry­ing out these hacks, and we learn that Face­book watched these same hack­ers set up the Face­book pages for Guc­cifer 2.0 and DCLeaks. And yet this is almost nev­er men­tioned as evi­dence that Russ­ian gov­ern­ment hack­ers were indeed behind the hacks. Thus far, the attri­bu­tion of these hacks to APT28/Fancy Bear has relied on Crowd­strike and the US gov­ern­ment and their direct inves­ti­ga­tion of the hacked Demo­c­ra­t­ic Par­ty servers. But here we’re learn­ing that Face­book appar­ent­ly has its own pool of evi­dence that can tie APT28 to the Face­book accounts set up for Guc­cifer 2.0 and DCLeaks. A pool of evi­dence that’s almost nev­er men­tioned.

    And, again, as we saw in the above arti­cle, Face­book’s chief of secu­ri­ty, Alex Sta­mos, was alarmed in Decem­ber of 2016 that Mark Zucker­berg and Sheryl Sand­berg did­n’t know about the find­ings of his team look­ing into this alleged ‘Russ­ian’ activ­i­ty. So Face­book dis­cov­ered the Guc­cifer 2.0 and DCLeaks accounts get­ting set up, and Zucker­berg and Sand­berg did­n’t know or care about this dur­ing the 2016 elec­tion sea­son. It all high­lights one of the meta-prob­lems fac­ing Face­book, a meta-prob­lem we saw on dis­play with the Cam­bridge Ana­lyt­i­ca scan­dal and the charges by for­mer exec­u­tive Sandy Parak­i­las that Face­book’s man­age­ment warned him not to look into prob­lems because they deter­mined that know­ing about a prob­lem could make the com­pa­ny liable if the prob­lem is exposed. So it’s a meta-prob­lem of an appar­ent desire of top man­age­ment to not face prob­lems. Or at least to pre­tend to not face prob­lems while know­ing­ly ignor­ing them and then unleash­ing com­pa­nies like Defin­ers Pub­lic Affairs to clean up the mess after the fact.

    And in relat­ed news, both Zucker­berg and Sand­berg claim they had no idea who at Face­book hired Defin­ers, or that the com­pa­ny had hired Defin­ers at all, until that New York Times report. In oth­er words, Face­book’s upper man­age­ment is claim­ing it had no idea about this lat­est scan­dal. Of course.

    Posted by Pterrafractyl | November 19, 2018, 5:02 pm
  4. Now that the UK par­lia­men­t’s seizure of inter­nal Face­book doc­u­ments from the Six4Three law­suit threat­ens to expose what Six4Three argues was an app devel­op­er extor­tion scheme that was per­son­al­ly man­aged by Mark Zucker­berg, a bait-and-switch scheme that enticed app devel­op­ers with offers of a wealth of access to user infor­ma­tion and then extort­ed the most suc­cess­ful apps with threats of cut­ting off access to the user data unless they gave Face­book a big­ger cut of their prof­its, the ques­tion of just how many high-lev­el Face­book scan­dals have yet to be revealed to the pub­lic is now a much more top­i­cal one. Because based on what we know so far about Face­book’s out-of-con­trol behav­ior, which appears to have been sanc­tioned by the com­pa­ny’s exec­u­tives, there’s no rea­son to assume there isn’t plen­ty of scan­dalous behav­ior yet to be revealed.

    So in the spir­it of spec­u­lat­ing about just how cor­rupt Mark Zucker­berg might tru­ly be, here’s an arti­cle that gives us some insight into the kinds of his­toric fig­ures Zucker­berg spends time think­ing about. Sur­prise! He real­ly looks up to Cae­sar Augus­tus, the Roman emper­or who took “a real­ly harsh approach” and “had to do cer­tain things” to achieve his grand goals:

    The Guardian

    What’s behind Mark Zucker­berg’s man-crush on Emper­or Augus­tus?

    Char­lotte Hig­gins
    Wed 12 Sep 2018 11.45 EDT
    Last mod­i­fied on Thu 13 Sep 2018 05.23 EDT

    The Face­book founder’s bro­man­tic hero was a can­ny oper­a­tor who was obsessed with pow­er and over­rode democ­ra­cy

    Pow­er­ful men do love a tran­shis­tor­i­cal man-crush – fix­at­ing on an ances­tor fig­ure, who can be ven­er­at­ed, per­haps sur­passed. Facebook’s Mark Zucker­berg has told the New York­er about his par­tic­u­lar fas­ci­na­tion with the Roman emper­or, Augus­tus – he and his wife, Priscil­la Chan, have even called one of their chil­dren August.

    “Basi­cal­ly, through a real­ly harsh approach, he estab­lished 200 years of world peace,” Zucker­berg explained. He pon­dered, “What are the trade-offs in that? On the one hand, world peace is a long-term goal that peo­ple talk about today ...” On the oth­er hand, he said, “that didn’t come for free, and he had to do cer­tain things”.

    Zucker­berg loved Latin at school (“very much like cod­ing”, he said). His sis­ter, Don­na, got her clas­sics PhD at Prince­ton, is edi­tor of the excel­lent Eidolon online clas­sics mag­a­zine, and has just writ­ten a book on how “alt-right”, misog­y­nist online com­mu­ni­ties invoke clas­si­cal his­to­ry.

    I’m not sure whether the appeal­ing clas­sics nerdi­ness of Zuckerberg’s back­ground makes his san­guine euphemisms more or less alarm­ing. “He had to do cer­tain things” and “a real­ly harsh approach” are, let’s say, a relaxed way of describ­ing Augus­tus’ bru­tal and sys­tem­at­ic elim­i­na­tion of polit­i­cal oppo­nents. And “200 years of world peace”? Well yes, if that’s what you want to call cen­turies of bru­tal con­quest. Even the Roman his­to­ri­an Tac­i­tus had some­thing to say about that: “soli­tudinem faci­unt, pacem appel­lant”. They make a desert and call it peace.

    ...

    It’s true that his reign has been recon­sid­ered time and again: it is one of those extra­or­di­nary junc­tions in his­to­ry – when Rome’s repub­lic teetered, crum­bled, and reformed as the empire – that looks dif­fer­ent depend­ing on the moment from which he is exam­ined. It is per­fect­ly true to say that Augus­tus end­ed the civ­il strife that over­whelmed Rome in the late first cen­tu­ry BC, and ush­ered in a peri­od of sta­bil­i­ty and, in some ways, renew­al, by the time of his death in 14 AD. That’s how I was taught about Augus­tus at school, I sus­pect not unco­in­ci­den­tal­ly by some­one brought up dur­ing the sec­ond world war. But in 1939 Ronald Syme had pub­lished his bril­liant account of the peri­od, The Roman Rev­o­lu­tion – a rev­o­lu­tion­ary book in itself, chal­leng­ing Augustus’s then large­ly pos­i­tive rep­u­ta­tion by por­tray­ing him as a sin­is­ter fig­ure who emerged on the tides of his­to­ry out of the increas­ing­ly ungovern­able Roman repub­lic, to wield auto­crat­ic pow­er.

    Part of the fas­ci­na­tion of the man is that he was a mas­ter of pro­pa­gan­da and a superb polit­i­cal oper­a­tor. In our own era of obfus­ca­tion, deceit and fake news it’s inter­est­ing to try to unpick what was real­ly going on. Take his brief auto­bi­og­ra­phy, Res Ges­tae Divi Augusti. (Things Done By the Dei­fied Augus­tus – no mess­ing about here, title-wise).

    The text, while heavy­go­ing, is a fas­ci­nat­ing doc­u­ment, list­ing his polit­i­cal appoint­ments, his mil­i­tary achieve­ments, the infra­struc­ture projects he fund­ed. But it can, with oth­er con­tem­po­rary evi­dence, also be inter­pret­ed as a por­trait of a man who insti­tut­ed an autoc­ra­cy that clev­er­ly mim­ic­ked the forms and tra­di­tions of Rome’s qua­si-demo­c­ra­t­ic repub­lic.

    Under the guise of restor­ing Rome to great­ness, he hol­lowed out its con­sti­tu­tion and loaded pow­er into his own hands. Some­thing there for Zucker­berg to think about, per­haps. Par­tic­u­lar­ly con­sid­er­ing the New Yorker’s head­line for its pro­file: “Can Mark Zucker­berg fix Face­book before it breaks democ­ra­cy?”

    ———-

    “What’s behind Mark Zucker­berg’s man-crush on Emper­or Augus­tus?” by Char­lotte Hig­gins; The Guardian; 09/12/2018

    “Pow­er­ful men do love a tran­shis­tor­i­cal man-crush – fix­at­ing on an ances­tor fig­ure, who can be ven­er­at­ed, per­haps sur­passed. Facebook’s Mark Zucker­berg has told the New York­er about his par­tic­u­lar fas­ci­na­tion with the Roman emper­or, Augus­tus – he and his wife, Priscil­la Chan, have even called one of their chil­dren August.”

    He lit­er­al­ly named his daugh­ter after the Roman emper­or. That hints at more than just a casu­al his­tor­i­cal inter­est.

    So what is it about Cae­sar Augus­tus’s rule that Zucker­berg is so enam­ored with? Well, based on Zucker­berg’s own words, it sounds like it was the way Augus­tus took a “real­ly harsh approach” to mak­ing deci­sions with dif­fi­cult trade-offs in order to achieve Pax Romana, 200 years of peace for the Roman empire:

    ...
    “Basi­cal­ly, through a real­ly harsh approach, he estab­lished 200 years of world peace,” Zucker­berg explained. He pon­dered, “What are the trade-offs in that? On the one hand, world peace is a long-term goal that peo­ple talk about today ...” On the oth­er hand, he said, “that didn’t come for free, and he had to do cer­tain things”.
    ...

    And while focus­ing on 200 years of peace puts an obses­sion with Augus­tus in the most pos­i­tive pos­si­ble light, it’s hard to ignore the fact that Augus­tus was also a mas­ter of pro­pa­gan­da and the man who over­saw the end of the Roman Repub­lic and the impo­si­tion of an impe­r­i­al mod­el of gov­ern­ment:

    ...
    Part of the fas­ci­na­tion of the man is that he was a mas­ter of pro­pa­gan­da and a superb polit­i­cal oper­a­tor. In our own era of obfus­ca­tion, deceit and fake news it’s inter­est­ing to try to unpick what was real­ly going on. Take his brief auto­bi­og­ra­phy, Res Ges­tae Divi Augusti. (Things Done By the Dei­fied Augus­tus – no mess­ing about here, title-wise).

    The text, while heavy­go­ing, is a fas­ci­nat­ing doc­u­ment, list­ing his polit­i­cal appoint­ments, his mil­i­tary achieve­ments, the infra­struc­ture projects he fund­ed. But it can, with oth­er con­tem­po­rary evi­dence, also be inter­pret­ed as a por­trait of a man who insti­tut­ed an autoc­ra­cy that clev­er­ly mim­ic­ked the forms and tra­di­tions of Rome’s qua­si-demo­c­ra­t­ic repub­lic.

    Under the guise of restor­ing Rome to great­ness, he hol­lowed out its con­sti­tu­tion and loaded pow­er into his own hands. Some­thing there for Zucker­berg to think about, per­haps. Par­tic­u­lar­ly con­sid­er­ing the New Yorker’s head­line for its pro­file: “Can Mark Zucker­berg fix Face­book before it breaks democ­ra­cy?”

    And that’s a lit­tle peek into Mark Zucker­berg’s mind that gives us a sense of what he spends time think­ing about: his­toric fig­ures who did a lot of harsh things to achieve his­toric ‘great­ness’. That’s not a scary red flag or any­thing.

    Posted by Pterrafractyl | November 26, 2018, 12:43 pm
  5. Here’s a new rea­son to hate Face­book: if you hate Face­book on Face­book, Face­book might put you on its “Be on the look­out” (BOLO) list and start using its loca­tion track­ing tech­nol­o­gy to track your loca­tion. That’s accord­ing to a new report based on a num­ber of cur­rent and for­mer Face­book employ­ees who dis­cussed how the com­pa­ny’s BOLO list pol­i­cy works. And accord­ing to secu­ri­ty experts, while Face­book isn’t unique in hav­ing a BOLO list for com­pa­ny threats, it is high­ly unusu­al in that it can use its own tech­nol­o­gy to track the peo­ple on the BOLO list. Face­book can track BOLO users’ loca­tions using their IP address or the smart­phone’s loca­tion data col­lect­ed through the Face­book app.

    So how does one end up on this BOLO list? Well, there are the rea­son­able ways, like if some­one posts on one of Face­book’s social media plat­forms a spe­cif­ic threat against Face­book or one of its employ­ees. But it sounds like the stan­dards are a lot more sub­jec­tive than that, and peo­ple are placed on the BOLO list for sim­ply post­ing things like “F— you, Mark” or “F— Face­book”. Anoth­er group rou­tine­ly put on the list is for­mer employ­ees and con­trac­tors. Again, it does­n’t sound like it takes much to get on the list. Sim­ply get­ting emo­tion­al if your con­tract isn’t extend­ed appears to be enough. Giv­en those stan­dards, it’s almost sur­pris­ing that the BOLO list is only hun­dreds of peo­ple long and not thou­sands:

    CNBC

    Face­book uses its apps to track users it thinks could threat­en employ­ees and offices

    * Face­book main­tains a list of indi­vid­u­als that its secu­ri­ty guards must “be on look­out” for that is com­prised of users who’ve made threat­en­ing state­ments against the com­pa­ny on its social net­work as well as numer­ous for­mer employ­ees.
    * The com­pa­ny’s infor­ma­tion secu­ri­ty team is capa­ble of track­ing these indi­vid­u­als’ where­abouts using the loca­tion data they pro­vide through Face­book’s apps and web­sites.
    * More than a dozen for­mer Face­book secu­ri­ty employ­ees described the com­pa­ny’s tac­tics to CNBC, with sev­er­al ques­tion­ing the ethics of the com­pa­ny’s prac­tices.

    Sal­vador Rodriguez
    Pub­lished 02/17/2019 Updat­ed

    In ear­ly 2018, a Face­book user made a pub­lic threat on the social net­work against one of the com­pa­ny’s offices in Europe.

    Face­book picked up the threat, pulled the user’s data and deter­mined he was in the same coun­try as the office he was tar­get­ing. The com­pa­ny informed the author­i­ties about the threat and direct­ed its secu­ri­ty offi­cers to be on the look­out for the user.

    “He made a veiled threat that ‘Tomor­row every­one is going to pay’ or some­thing to that effect,” a for­mer Face­book secu­ri­ty employ­ee told CNBC.

    The inci­dent is rep­re­sen­ta­tive of the steps Face­book takes to keep its offices, exec­u­tives and employ­ees pro­tect­ed, accord­ing to more than a dozen for­mer Face­book employ­ees who spoke with CNBC. The com­pa­ny mines its social net­work for threat­en­ing com­ments, and in some cas­es uses its prod­ucts to track the loca­tion of peo­ple it believes present a cred­i­ble threat.

    Sev­er­al of the for­mer employ­ees ques­tioned the ethics of Face­book’s secu­ri­ty strate­gies, with one of them call­ing the tac­tics “very Big Broth­er-esque.”

    Oth­er for­mer employ­ees argue these secu­ri­ty mea­sures are jus­ti­fied by Face­book’s reach and the intense emo­tions it can inspire. The com­pa­ny has 2.7 bil­lion users across its ser­vices. That means that if just 0.01 per­cent of users make a threat, Face­book is still deal­ing with 270,000 poten­tial secu­ri­ty risks.

    “Our phys­i­cal secu­ri­ty team exists to keep Face­book employ­ees safe,” a Face­book spokesman said in a state­ment. “They use indus­try-stan­dard mea­sures to assess and address cred­i­ble threats of vio­lence against our employ­ees and our com­pa­ny, and refer these threats to law enforce­ment when nec­es­sary. We have strict process­es designed to pro­tect peo­ple’s pri­va­cy and adhere to all data pri­va­cy laws and Face­book’s terms of ser­vice. Any sug­ges­tion our onsite phys­i­cal secu­ri­ty team has over­stepped is absolute­ly false.”

    Face­book is unique in the way it uses its own prod­uct to mine data for threats and loca­tions of poten­tial­ly dan­ger­ous indi­vid­u­als, said Tim Bradley, senior con­sul­tant with Inci­dent Man­age­ment Group, a cor­po­rate secu­ri­ty con­sult­ing firm that deals with employ­ee safe­ty issues. How­ev­er, the Occu­pa­tion­al Safe­ty and Health Admin­is­tra­tion’s gen­er­al duty clause says that com­pa­nies have to pro­vide their employ­ees with a work­place free of haz­ards that could cause death or seri­ous phys­i­cal harm, Bradley said.

    “If they know there’s a threat against them, they have to take steps,” Bradley said. “How they got the infor­ma­tion is sec­ondary to the fact that they have a duty to pro­tect employ­ees.”

    Mak­ing the list

    One of the tools Face­book uses to mon­i­tor threats is a “be on look­out” or “BOLO” list, which is updat­ed approx­i­mate­ly once a week. The list was cre­at­ed in 2008, an ear­ly employ­ee in Face­book’s phys­i­cal secu­ri­ty group told CNBC. It now con­tains hun­dreds of peo­ple, accord­ing to four for­mer Face­book secu­ri­ty employ­ees who have left the com­pa­ny since 2016.

    Face­book noti­fies its secu­ri­ty pro­fes­sion­als any­time a new per­son is added to the BOLO list, send­ing out a report that includes infor­ma­tion about the per­son, such as their name, pho­to, their gen­er­al loca­tion and a short descrip­tion of why they were added.

    In recent years, the secu­ri­ty team even had a large mon­i­tor that dis­played the faces of peo­ple on the list, accord­ing to a pho­to CNBC has seen and two peo­ple famil­iar, although Face­book says it no longer oper­ates this mon­i­tor.

    Oth­er com­pa­nies keep sim­i­lar lists of threats, Bradley and oth­er sources said. But Face­book is unique because it can use its own prod­ucts to iden­ti­fy these threats and track the loca­tion of peo­ple on the list.

    Users who pub­licly threat­en the com­pa­ny, its offices or employ­ees — includ­ing post­ing threat­en­ing com­ments in response to posts from exec­u­tives like CEO Mark Zucker­berg and COO Sheryl Sand­berg — are often added to the list. These users are typ­i­cal­ly described as mak­ing “improp­er com­mu­ni­ca­tion” or “threat­en­ing com­mu­ni­ca­tion,” accord­ing to for­mer employ­ees.

    The bar can be pret­ty low. While some users end up on the list after repeat­ed appear­ances on com­pa­ny prop­er­ty or long email threats, oth­ers might find them­selves on the BOLO list for say­ing some­thing as sim­ple as “F— you, Mark,” “F— Face­book” or “I’m gonna go kick your a–,” accord­ing to a for­mer employ­ee who worked with the exec­u­tive pro­tec­tion team. A dif­fer­ent for­mer employ­ee who was on the com­pa­ny’s secu­ri­ty team said there were no clear­ly com­mu­ni­cat­ed stan­dards to deter­mine what kinds of actions could land some­body on the list, and that deci­sions were often made on a case-by-case basis.

    The Face­book spokesman dis­put­ed this, say­ing that peo­ple were only added after a “rig­or­ous review to deter­mine the valid­i­ty of the threat.”

    Awk­ward sit­u­a­tions

    Most peo­ple on the list do not know they’re on it. This some­times leads to tense sit­u­a­tions.

    Sev­er­al years ago, one Face­book user dis­cov­ered he was on the BOLO list when he showed up to Face­book’s Men­lo Park cam­pus for lunch with a friend who worked there, accord­ing to a for­mer employ­ee who wit­nessed the inci­dent.

    The user checked in with secu­ri­ty to reg­is­ter as a guest. His name popped up right away, alert­ing secu­ri­ty. He was on the list. His issue had to do with mes­sages he had sent to Zucker­berg, accord­ing to a per­son famil­iar with the cir­cum­stances.

    Soon, more secu­ri­ty guards showed up in the entrance area where the guest had tried to reg­is­ter. No one grabbed the indi­vid­ual, but secu­ri­ty guards stood at his sides and at each of the doors lead­ing in and out of that entrance area.

    Even­tu­al­ly, the employ­ee showed up mad and demand­ed that his friend be removed from the BOLO list. After the employ­ee met with Face­book’s glob­al secu­ri­ty intel­li­gence and inves­ti­ga­tions team, the friend was removed from the list — a rare occur­rence.

    “No per­son would be on BOLO with­out cred­i­ble cause,” the Face­book spokesman said in regard to this inci­dent.

    It’s not just users who find them­selves on Face­book’s BOLO list. Many of the peo­ple on the list are for­mer Face­book employ­ees and con­trac­tors, whose col­leagues ask to add them when they leave the com­pa­ny.

    Some for­mer employ­ees are list­ed for hav­ing a track record of poor behav­ior, such as steal­ing com­pa­ny equip­ment. But in many cas­es, there is no rea­son list­ed on the BOLO descrip­tion. Three peo­ple famil­iar said that almost every Face­book employ­ee who gets fired is added to the list, and one called the process “real­ly sub­jec­tive.” Anoth­er said that con­trac­tors are added if they get emo­tion­al when their con­tracts are not extend­ed.

    The Face­book spokesman coun­tered that the process is more rig­or­ous than these peo­ple claim. “For­mer employ­ees are only added under very spe­cif­ic cir­cum­stances, after review by legal and HR, includ­ing threats of vio­lence or harass­ment.”

    The prac­tice of adding for­mer employ­ees to the BOLO list has occa­sion­al­ly cre­at­ed awk­ward sit­u­a­tions for the com­pa­ny’s recruiters, who often reach out to for­mer employ­ees to fill open­ings. Ex-employ­ees have showed up for job inter­views only to find out that they could­n’t enter because they were on the BOLO list, said a for­mer secu­ri­ty employ­ee who left the com­pa­ny last year.

    “It becomes a whole big embar­rass­ing sit­u­a­tion,” this per­son said.

    Tracked by spe­cial request

    Face­book has the capa­bil­i­ty to track BOLO users’ where­abouts by using their smart­phone’s loca­tion data col­lect­ed through the Face­book app, or their IP address col­lect­ed through the com­pa­ny’s web­site.

    Face­book only tracks BOLO-list­ed users when their threats are deemed cred­i­ble, accord­ing to a for­mer employ­ee with first­hand knowl­edge of the com­pa­ny’s secu­ri­ty pro­ce­dures. This could include a detailed threat with an exact loca­tion and tim­ing of an attack, or a threat from an indi­vid­ual who makes a habit of attend­ing com­pa­ny events, such as the Face­book share­hold­ers’ meet­ing. This for­mer employ­ee empha­sized Face­book could not look up users’ loca­tions with­out cause.

    When a cred­i­ble threat is detect­ed, the glob­al secu­ri­ty oper­a­tions cen­ter and the glob­al secu­ri­ty intel­li­gence and inves­ti­ga­tions units make a spe­cial request to the com­pa­ny’s infor­ma­tion secu­ri­ty team, which has the capa­bil­i­ties to track users’ loca­tion infor­ma­tion. In some cas­es, the track­ing does­n’t go very far — for instance, if a BOLO user made a threat about a spe­cif­ic loca­tion but their cur­rent loca­tion shows them nowhere close, the track­ing might end there.

    But if the BOLO user is near­by, the infor­ma­tion secu­ri­ty team can con­tin­ue to mon­i­tor their loca­tion peri­od­i­cal­ly and keep oth­er secu­ri­ty teams on alert.

    Depend­ing on the threat, Face­book’s secu­ri­ty teams can take oth­er actions, such as sta­tion­ing secu­ri­ty guards, escort­ing a BOLO user off cam­pus or alert­ing law enforce­ment.

    Face­book’s infor­ma­tion secu­ri­ty team has tracked users’ loca­tions in oth­er safe­ty-relat­ed instances, too.

    In 2017, a Face­book man­ag­er alert­ed the com­pa­ny’s secu­ri­ty teams when a group of interns she was man­ag­ing did not log into the com­pa­ny’s sys­tems to work from home. They had been on a camp­ing trip, accord­ing to a for­mer Face­book secu­ri­ty employ­ee, and the man­ag­er was con­cerned about their safe­ty.

    Face­book’s infor­ma­tion secu­ri­ty team became involved in the sit­u­a­tion and used the interns’ loca­tion data to try and find out if they were safe. “They call it ‘ping­ing them’, ping­ing their Face­book accounts,” the for­mer secu­ri­ty employ­ee recalled.

    After the loca­tion data did not turn up any­thing use­ful, the infor­ma­tion secu­ri­ty team then kept dig­ging and learned that the interns had exchanged mes­sages sug­gest­ing they nev­er intend­ed to come into work that day — essen­tial­ly, they had lied to the man­ag­er. The infor­ma­tion secu­ri­ty team gave the man­ag­er a sum­ma­ry of what they had found.

    “There was legit con­cern about the safe­ty of these indi­vid­u­als,” the Face­book spokesman said. “In each iso­lat­ed case, these employ­ees were unre­spon­sive on all com­mu­ni­ca­tion chan­nels. There’s a set of pro­to­cols guid­ing when and how we access employ­ee data when an employ­ee goes miss­ing.”

    ...

    ———-

    “Face­book uses its apps to track users it thinks could threat­en employ­ees and offices” by Sal­vador Rodriguez; CNBC; 02/17/2019

    “Several of the former employees questioned the ethics of Facebook’s security strategies, with one of them calling the tactics ‘very Big Brother-esque.’”

    Yeah, “very Big Brother-esque” sounds like a pretty good description of the situation, in part because Facebook is doing the tracking with its own technology:

    ...
    Face­book is unique in the way it uses its own prod­uct to mine data for threats and loca­tions of poten­tial­ly dan­ger­ous indi­vid­u­als, said Tim Bradley, senior con­sul­tant with Inci­dent Man­age­ment Group, a cor­po­rate secu­ri­ty con­sult­ing firm that deals with employ­ee safe­ty issues. How­ev­er, the Occu­pa­tion­al Safe­ty and Health Admin­is­tra­tion’s gen­er­al duty clause says that com­pa­nies have to pro­vide their employ­ees with a work­place free of haz­ards that could cause death or seri­ous phys­i­cal harm, Bradley said.

    “If they know there’s a threat against them, they have to take steps,” Bradley said. “How they got the infor­ma­tion is sec­ondary to the fact that they have a duty to pro­tect employ­ees.”

    ...

    Oth­er com­pa­nies keep sim­i­lar lists of threats, Bradley and oth­er sources said. But Face­book is unique because it can use its own prod­ucts to iden­ti­fy these threats and track the loca­tion of peo­ple on the list.

    ...

    Tracked by spe­cial request

    Face­book has the capa­bil­i­ty to track BOLO users’ where­abouts by using their smart­phone’s loca­tion data col­lect­ed through the Face­book app, or their IP address col­lect­ed through the com­pa­ny’s web­site.

    Face­book only tracks BOLO-list­ed users when their threats are deemed cred­i­ble, accord­ing to a for­mer employ­ee with first­hand knowl­edge of the com­pa­ny’s secu­ri­ty pro­ce­dures. This could include a detailed threat with an exact loca­tion and tim­ing of an attack, or a threat from an indi­vid­ual who makes a habit of attend­ing com­pa­ny events, such as the Face­book share­hold­ers’ meet­ing. This for­mer employ­ee empha­sized Face­book could not look up users’ loca­tions with­out cause.

    When a cred­i­ble threat is detect­ed, the glob­al secu­ri­ty oper­a­tions cen­ter and the glob­al secu­ri­ty intel­li­gence and inves­ti­ga­tions units make a spe­cial request to the com­pa­ny’s infor­ma­tion secu­ri­ty team, which has the capa­bil­i­ties to track users’ loca­tion infor­ma­tion. In some cas­es, the track­ing does­n’t go very far — for instance, if a BOLO user made a threat about a spe­cif­ic loca­tion but their cur­rent loca­tion shows them nowhere close, the track­ing might end there.

    But if the BOLO user is near­by, the infor­ma­tion secu­ri­ty team can con­tin­ue to mon­i­tor their loca­tion peri­od­i­cal­ly and keep oth­er secu­ri­ty teams on alert.

    Depend­ing on the threat, Face­book’s secu­ri­ty teams can take oth­er actions, such as sta­tion­ing secu­ri­ty guards, escort­ing a BOLO user off cam­pus or alert­ing law enforce­ment.
    ...

    Get­ting on the list also sounds shock­ing­ly easy. A sim­ple “F— you, Mark,” or “F— Face­book” post on Face­book is all it appar­ent­ly takes. Giv­en that, it’s almost unbe­liev­able that the list only con­tains hun­dreds of peo­ple. Although it sounds like that “hun­dreds of peo­ple” esti­mate is based on for­mer secu­ri­ty employ­ees who left the com­pa­ny since 2016. You have to won­der how much longer the BOLO list could be today com­pared to 2016 sim­ply giv­en the amount of bad press Face­book has received just in the last year alone:

    ...
    Mak­ing the list

    One of the tools Face­book uses to mon­i­tor threats is a “be on look­out” or “BOLO” list, which is updat­ed approx­i­mate­ly once a week. The list was cre­at­ed in 2008, an ear­ly employ­ee in Face­book’s phys­i­cal secu­ri­ty group told CNBC. It now con­tains hun­dreds of peo­ple, accord­ing to four for­mer Face­book secu­ri­ty employ­ees who have left the com­pa­ny since 2016.

    Face­book noti­fies its secu­ri­ty pro­fes­sion­als any­time a new per­son is added to the BOLO list, send­ing out a report that includes infor­ma­tion about the per­son, such as their name, pho­to, their gen­er­al loca­tion and a short descrip­tion of why they were added.

    In recent years, the secu­ri­ty team even had a large mon­i­tor that dis­played the faces of peo­ple on the list, accord­ing to a pho­to CNBC has seen and two peo­ple famil­iar, although Face­book says it no longer oper­ates this mon­i­tor.

    ...

    Users who pub­licly threat­en the com­pa­ny, its offices or employ­ees — includ­ing post­ing threat­en­ing com­ments in response to posts from exec­u­tives like CEO Mark Zucker­berg and COO Sheryl Sand­berg — are often added to the list. These users are typ­i­cal­ly described as mak­ing “improp­er com­mu­ni­ca­tion” or “threat­en­ing com­mu­ni­ca­tion,” accord­ing to for­mer employ­ees.

    The bar can be pret­ty low. While some users end up on the list after repeat­ed appear­ances on com­pa­ny prop­er­ty or long email threats, oth­ers might find them­selves on the BOLO list for say­ing some­thing as sim­ple as “F— you, Mark,” “F— Face­book” or “I’m gonna go kick your a–,” accord­ing to a for­mer employ­ee who worked with the exec­u­tive pro­tec­tion team. A dif­fer­ent for­mer employ­ee who was on the com­pa­ny’s secu­ri­ty team said there were no clear­ly com­mu­ni­cat­ed stan­dards to deter­mine what kinds of actions could land some­body on the list, and that deci­sions were often made on a case-by-case basis.

    The Face­book spokesman dis­put­ed this, say­ing that peo­ple were only added after a “rig­or­ous review to deter­mine the valid­i­ty of the threat.”
    ...

    And it sounds like for­mer employ­ees and con­trac­tors can get thrown on the list for basi­cal­ly no rea­son at all. If you’re fired from Face­book, don’t get emo­tion­al. Or the com­pa­ny will track your loca­tion indef­i­nite­ly:

    ...
    Awk­ward sit­u­a­tions

    Most peo­ple on the list do not know they’re on it. This some­times leads to tense sit­u­a­tions.

    Sev­er­al years ago, one Face­book user dis­cov­ered he was on the BOLO list when he showed up to Face­book’s Men­lo Park cam­pus for lunch with a friend who worked there, accord­ing to a for­mer employ­ee who wit­nessed the inci­dent.

    The user checked in with secu­ri­ty to reg­is­ter as a guest. His name popped up right away, alert­ing secu­ri­ty. He was on the list. His issue had to do with mes­sages he had sent to Zucker­berg, accord­ing to a per­son famil­iar with the cir­cum­stances.

    Soon, more secu­ri­ty guards showed up in the entrance area where the guest had tried to reg­is­ter. No one grabbed the indi­vid­ual, but secu­ri­ty guards stood at his sides and at each of the doors lead­ing in and out of that entrance area.

    Even­tu­al­ly, the employ­ee showed up mad and demand­ed that his friend be removed from the BOLO list. After the employ­ee met with Face­book’s glob­al secu­ri­ty intel­li­gence and inves­ti­ga­tions team, the friend was removed from the list — a rare occur­rence.

    “No per­son would be on BOLO with­out cred­i­ble cause,” the Face­book spokesman said in regard to this inci­dent.

    It’s not just users who find them­selves on Face­book’s BOLO list. Many of the peo­ple on the list are for­mer Face­book employ­ees and con­trac­tors, whose col­leagues ask to add them when they leave the com­pa­ny.

    Some for­mer employ­ees are list­ed for hav­ing a track record of poor behav­ior, such as steal­ing com­pa­ny equip­ment. But in many cas­es, there is no rea­son list­ed on the BOLO descrip­tion. Three peo­ple famil­iar said that almost every Face­book employ­ee who gets fired is added to the list, and one called the process “real­ly sub­jec­tive.” Anoth­er said that con­trac­tors are added if they get emo­tion­al when their con­tracts are not extend­ed.

    The Face­book spokesman coun­tered that the process is more rig­or­ous than these peo­ple claim. “For­mer employ­ees are only added under very spe­cif­ic cir­cum­stances, after review by legal and HR, includ­ing threats of vio­lence or harass­ment.”

    The prac­tice of adding for­mer employ­ees to the BOLO list has occa­sion­al­ly cre­at­ed awk­ward sit­u­a­tions for the com­pa­ny’s recruiters, who often reach out to for­mer employ­ees to fill open­ings. Ex-employ­ees have showed up for job inter­views only to find out that they could­n’t enter because they were on the BOLO list, said a for­mer secu­ri­ty employ­ee who left the com­pa­ny last year.

    “It becomes a whole big embar­rass­ing sit­u­a­tion,” this per­son said.
    ...

    And as Facebook itself makes clear with its anecdote about how it tracked the location of a team of interns after the company became concerned about their safety on a camping trip, the BOLO list is just one reason the company might decide to track the locations of specific people. Employees being unresponsive to emails is another trigger. And given that Facebook is using its own in-house location-tracking capabilities to do this, there are probably all sorts of other rationales for using the technology:

    ...
    Face­book’s infor­ma­tion secu­ri­ty team has tracked users’ loca­tions in oth­er safe­ty-relat­ed instances, too.

    In 2017, a Face­book man­ag­er alert­ed the com­pa­ny’s secu­ri­ty teams when a group of interns she was man­ag­ing did not log into the com­pa­ny’s sys­tems to work from home. They had been on a camp­ing trip, accord­ing to a for­mer Face­book secu­ri­ty employ­ee, and the man­ag­er was con­cerned about their safe­ty.

    Face­book’s infor­ma­tion secu­ri­ty team became involved in the sit­u­a­tion and used the interns’ loca­tion data to try and find out if they were safe. “They call it ‘ping­ing them’, ping­ing their Face­book accounts,” the for­mer secu­ri­ty employ­ee recalled.

    After the loca­tion data did not turn up any­thing use­ful, the infor­ma­tion secu­ri­ty team then kept dig­ging and learned that the interns had exchanged mes­sages sug­gest­ing they nev­er intend­ed to come into work that day — essen­tial­ly, they had lied to the man­ag­er. The infor­ma­tion secu­ri­ty team gave the man­ag­er a sum­ma­ry of what they had found.

    “There was legit con­cern about the safe­ty of these indi­vid­u­als,” the Face­book spokesman said. “In each iso­lat­ed case, these employ­ees were unre­spon­sive on all com­mu­ni­ca­tion chan­nels. There’s a set of pro­to­cols guid­ing when and how we access employ­ee data when an employ­ee goes miss­ing.”
    ...

    So now you know, if you’re a for­mer Face­book employee/contractor and/or have ever writ­ten a nasty thing about Face­book on Face­book’s plat­forms, Face­book is watch­ing you.

    Of course, Facebook is tracking the locations, and everything else it can track, of everyone to the greatest extent possible anyway. Tracking everyone is Facebook’s business model. So the distinction is really just whether or not Facebook’s security team is specifically watching you. Facebook the company is watching you whether you’re on the list or not.

    Posted by Pterrafractyl | February 18, 2019, 10:28 am
  6. So remember those reports from 2017 about how Facebook’s ad targeting options allowed users to target ads at Facebook users who have expressed an interest in topics like “Jew hater,” “How to burn jews,” or “History of ‘why jews ruin the world’”? And remember how Facebook explained that this was an accident caused by its algorithms that auto-generate user interest groups, and how the company promised it would have humans reviewing these auto-generated topics going forward? Surprise! It turns out the human reviewers are still allowing ads targeting users interested in topics like “Joseph Goebbels,” “Josef Mengele,” “Heinrich Himmler,” and the neo-Nazi punk band Skrewdriver:

    The Los Ange­les Times

    Face­book decid­ed which users are inter­est­ed in Nazis — and let adver­tis­ers tar­get them direct­ly

    By Sam Dean
    Feb 21, 2019 | 5:00 AM

    Face­book makes mon­ey by charg­ing adver­tis­ers to reach just the right audi­ence for their mes­sage — even when that audi­ence is made up of peo­ple inter­est­ed in the per­pe­tra­tors of the Holo­caust or explic­it­ly neo-Nazi music.

    Despite promis­es of greater over­sight fol­low­ing past adver­tis­ing scan­dals, a Times review shows that Face­book has con­tin­ued to allow adver­tis­ers to tar­get hun­dreds of thou­sands of users the social media firm believes are curi­ous about top­ics such as “Joseph Goebbels,” “Josef Men­gele,” “Hein­rich Himm­ler,” the neo-nazi punk band Skrew­driv­er and Ben­i­to Mussolini’s long-defunct Nation­al Fas­cist Par­ty.

    Experts say that this prac­tice runs counter to the company’s stat­ed prin­ci­ples and can help fuel rad­i­cal­iza­tion online.

    “What you’re describ­ing, where a clear hate­ful idea or nar­ra­tive can be ampli­fied to reach more peo­ple, is exact­ly what they said they don’t want to do and what they need to be held account­able for,” said Oren Segal, direc­tor of the Anti-Defama­tion League’s cen­ter on extrem­ism.

    After being con­tact­ed by The Times, Face­book said that it would remove many of the audi­ence group­ings from its ad plat­form.

    “Most of these tar­get­ing options are against our poli­cies and should have been caught and removed soon­er,” said Face­book spokesman Joe Osborne. “While we have an ongo­ing review of our tar­get­ing options, we clear­ly need to do more, so we’re tak­ing a broad­er look at our poli­cies and detec­tion meth­ods.”

    Approved by Face­book

    Facebook’s broad reach and sophis­ti­cat­ed adver­tis­ing tools brought in a record $55 bil­lion in ad rev­enue in 2018.

    Prof­it mar­gins stayed above 40%, thanks to a high degree of automa­tion, with algo­rithms sort­ing users into mar­ketable sub­sets based on their behav­ior — then choos­ing which ads to show them.

    But the lack of human over­sight has also brought the com­pa­ny con­tro­ver­sy.

    In 2017, ProPublica found that the company sold ads based on any user-generated phrase, including “Jew hater” and “Hitler did nothing wrong.” Following the murder of 11 congregants at a synagogue in Pittsburgh in 2018, the Intercept found that Facebook gave advertisers the ability to target users interested in the anti-Semitic “white genocide conspiracy theory,” which the suspected killer cited as inspiration before the attacks.

    This month, the Guardian high­light­ed the ways that YouTube and Face­book boost anti-vac­cine con­spir­a­cy the­o­ries, lead­ing Rep. Adam Schiff (D‑Burbank) to ques­tion whether the com­pa­ny was pro­mot­ing mis­in­for­ma­tion.

    Face­book has promised since 2017 that humans review every ad tar­get­ing cat­e­go­ry. It announced last fall the removal of 5,000 audi­ence cat­e­gories that risked enabling abuse or dis­crim­i­na­tion.

    The Times decid­ed to test the effec­tive­ness of the company’s efforts by see­ing if Face­book would allow the sale of ads direct­ed to cer­tain seg­ments of users.

    Face­book allowed The Times to tar­get ads to users Face­book has deter­mined are inter­est­ed in Goebbels, the Third Reich’s chief pro­pa­gan­dist, Himm­ler, the archi­tect of the Holo­caust and leader of the SS, and Men­gele, the infa­mous con­cen­tra­tion camp doc­tor who per­formed human exper­i­ments on pris­on­ers. Each cat­e­go­ry includ­ed hun­dreds of thou­sands of users.

    The com­pa­ny also approved an ad tar­get­ed to fans of Skrew­driv­er, a noto­ri­ous white suprema­cist punk band — and auto­mat­i­cal­ly sug­gest­ed a series of top­ics relat­ed to Euro­pean far-right move­ments to bol­ster the ad’s reach.

    Col­lec­tive­ly, the ads were seen by 4,153 users in 24 hours, with The Times pay­ing only $25 to fuel the push.

    Face­book admits its human mod­er­a­tors should have removed the Nazi-affil­i­at­ed demo­graph­ic cat­e­gories. But it says the “ads” them­selves — which con­sist­ed of the word “test” or The Times’ logo and linked back to the newspaper’s home­page — would not have raised red flags for the sep­a­rate team that looks over ad con­tent.

    Upon review, the com­pa­ny said the ad cat­e­gories were sel­dom used. The few ads pur­chased linked to his­tor­i­cal con­tent, Face­book said, but the com­pa­ny would not pro­vide more detail on their ori­gin.

    ‘Why is it my job to police their plat­form?’

    The Times was tipped off by a Los Ange­les musi­cian who asked to remain anony­mous for fear of retal­i­a­tion from hate groups.

    Ear­li­er this year, he tried to pro­mote a con­cert fea­tur­ing his hard­core punk group and a black met­al band on Face­book. When he typed “black met­al” into Facebook’s ad por­tal, he said he was dis­turbed to dis­cov­er that the com­pa­ny sug­gest­ed he also pay to tar­get users inter­est­ed in “Nation­al Social­ist black met­al” — a poten­tial audi­ence num­ber­ing in the hun­dreds of thou­sands.

    The punk and metal music scenes, and black metal in particular, have long grappled with white supremacist undercurrents. Black metal grew out of the early Norwegian metal scene, which saw prominent members convicted of burning down churches, murdering fellow musicians and plotting bombings. Some bands and their fans have since combined anti-Semitism, neo-paganism, and the promotion of violence into the distinct subgenre of National Socialist black metal, which the Southern Poverty Law Center described as a dangerous white supremacist recruiting tool nearly 20 years ago.

    But punk and met­al fans have long pushed back against hate. In 1981, the Dead Kennedys released “Nazi Punks F— Off”; last month 15 met­al bands played at an anti-fas­cist fes­ti­val in Brook­lyn.

    The musi­cian saw him­self as a part of that same tra­di­tion.

    ...

    Face­book sub­se­quent­ly removed the group­ing from the plat­form, but the musi­cian remains incred­u­lous that “Nation­al Social­ist black met­al” was a cat­e­go­ry in the first place — let alone one the com­pa­ny specif­i­cal­ly prompt­ed him to pur­sue.

    “Why is it my job to police their plat­form?” he said.

    A rab­bit hole of hate

    After review­ing screen­shots ver­i­fy­ing the musician’s sto­ry, The Times inves­ti­gat­ed whether Face­book would allow adver­tis­ers to tar­get explic­it­ly neo-Nazi bands or oth­er terms asso­ci­at­ed with hate groups.

    We start­ed with Skrew­driv­er, a British band with a song called “White Pow­er” and an album named after a Hitler Youth mot­to. Since the band only had 2,120 users iden­ti­fied as fans, Face­book informed us that we would need to add more tar­get demo­graph­ics to pub­lish the ad.

    The prompt led us down a rab­bit hole of terms it thought were relat­ed to white suprema­cist ide­ol­o­gy.

    First, it rec­om­mend­ed “Thor Steinar,” a cloth­ing brand that has been out­lawed in the Ger­man par­lia­ment for its asso­ci­a­tion with neo-Nazism. Then, it rec­om­mend­ed “NPD Group,” the name of both a promi­nent Amer­i­can mar­ket research firm and a far-right Ger­man polit­i­cal par­ty asso­ci­at­ed with neo-Nazism. Among the next rec­om­mend­ed terms were “Flüchtlinge,” the Ger­man word for “refugees,” and “Nation­al­ism.”

    Face­book said the cat­e­gories “Flüchtlinge,” “Nation­al­ism,” and “NPD Group” are in line with its poli­cies and will not be removed despite appear­ing as auto-sug­ges­tions fol­low­ing neo-Nazi terms. (Face­book said it had found that the users inter­est­ed in NPD Group were actu­al­ly inter­est­ed in the Amer­i­can mar­ket research firm.)

    In the wake of past con­tro­ver­sies, Face­book has blocked ads aimed at those inter­est­ed in the most obvi­ous terms affil­i­at­ed with hate groups. “Nazi,” “Hitler,” “white suprema­cy” and “Holo­caust” all yield noth­ing in the ad plat­form. But adver­tis­ers could tar­get more than a mil­lion users with inter­est in Goebbels or the Nation­al Fas­cist Par­ty, which dis­solved in 1943. Himm­ler had near­ly 95,000 con­stituents. Men­gele had 117,150 inter­est­ed users — a num­ber that increased over the dura­tion of our report­ing, to 127,010.

    Face­book said these cat­e­gories were auto­mat­i­cal­ly gen­er­at­ed based on user activ­i­ty — lik­ing or com­ment­ing on ads, or join­ing cer­tain groups. But it would not pro­vide spe­cif­ic details about how it deter­mined a user’s inter­est in top­ics linked to Nazis.

    ‘Expand­ing the orbit’

    The ads end­ed up being served with­in Instant Arti­cles — which are host­ed with­in Face­book, rather than link­ing out to a publisher’s own web­site — pub­lished by the Face­book pages of a wide swath of media out­lets.

    These includ­ed arti­cles by the Dai­ly Wire, CNN, Huff­Post, Moth­er Jones, Bre­it­bart, the BBC and ABC News. They also includ­ed arti­cles by viral pages with names like Pup­per Dog­go, I Love Movies and Right Health Today — a seem­ing­ly defunct media com­pa­ny whose only Face­book post was a link to a now-delet­ed arti­cle titled “What Is The Ben­e­fits Of Eat­ing Apple Every­day.”

    Segal, the ADL direc­tor, said Face­book might wind up fuel­ing the recruit­ment of new extrem­ists by serv­ing up such ads on the types of pages an ordi­nary news read­er might vis­it.

    “Being able to reach so many peo­ple with extrem­ist con­tent, exist­ing lit­er­al­ly in the same space as legit­i­mate news or non-hate­ful con­tent, is the biggest dan­ger,” he said. “What you’re doing is expand­ing the orbit.”

    ...

    ————-

    “Face­book decid­ed which users are inter­est­ed in Nazis — and let adver­tis­ers tar­get them direct­ly” By Sam Dean; The Los Ange­les Times; 02/21/2019

    “Despite promises of greater oversight following past advertising scandals, a Times review shows that Facebook has continued to allow advertisers to target hundreds of thousands of users the social media firm believes are curious about topics such as ‘Joseph Goebbels,’ ‘Josef Mengele,’ ‘Heinrich Himmler,’ the neo-nazi punk band Skrewdriver and Benito Mussolini’s long-defunct National Fascist Party.”

    Yes, despite Face­book’s promis­es of greater over­sight fol­low­ing the pre­vi­ous reports of Nazi ad tar­get­ing cat­e­gories, the Nazi ad tar­get­ing con­tin­ues. And these ad cat­e­gories don’t have just a hand­ful of Face­book users. Each of the cat­e­gories the LA Times test­ed had hun­dreds of thou­sands of users. And with just a $25 pur­chase, over 4,000 users saw the test ad in 24 hours, demon­strat­ing that Face­book remains a remark­ably cost-effec­tive plat­form for direct­ly reach­ing out to peo­ple with Nazi sym­pa­thies:

    ...
    The Times decid­ed to test the effec­tive­ness of the company’s efforts by see­ing if Face­book would allow the sale of ads direct­ed to cer­tain seg­ments of users.

    Face­book allowed The Times to tar­get ads to users Face­book has deter­mined are inter­est­ed in Goebbels, the Third Reich’s chief pro­pa­gan­dist, Himm­ler, the archi­tect of the Holo­caust and leader of the SS, and Men­gele, the infa­mous con­cen­tra­tion camp doc­tor who per­formed human exper­i­ments on pris­on­ers. Each cat­e­go­ry includ­ed hun­dreds of thou­sands of users.

    The com­pa­ny also approved an ad tar­get­ed to fans of Skrew­driv­er, a noto­ri­ous white suprema­cist punk band — and auto­mat­i­cal­ly sug­gest­ed a series of top­ics relat­ed to Euro­pean far-right move­ments to bol­ster the ad’s reach.

    Col­lec­tive­ly, the ads were seen by 4,153 users in 24 hours, with The Times pay­ing only $25 to fuel the push.

    ...

    And these ads show up as Instant Articles, so they appear in the same part of the Facebook page where articles from sites like CNN and the BBC might show up:

    ...
    ‘Expand­ing the orbit’

    The ads end­ed up being served with­in Instant Arti­cles — which are host­ed with­in Face­book, rather than link­ing out to a publisher’s own web­site — pub­lished by the Face­book pages of a wide swath of media out­lets.

    These includ­ed arti­cles by the Dai­ly Wire, CNN, Huff­Post, Moth­er Jones, Bre­it­bart, the BBC and ABC News. They also includ­ed arti­cles by viral pages with names like Pup­per Dog­go, I Love Movies and Right Health Today — a seem­ing­ly defunct media com­pa­ny whose only Face­book post was a link to a now-delet­ed arti­cle titled “What Is The Ben­e­fits Of Eat­ing Apple Every­day.”

    Segal, the ADL direc­tor, said Face­book might wind up fuel­ing the recruit­ment of new extrem­ists by serv­ing up such ads on the types of pages an ordi­nary news read­er might vis­it.

    “Being able to reach so many peo­ple with extrem­ist con­tent, exist­ing lit­er­al­ly in the same space as legit­i­mate news or non-hate­ful con­tent, is the biggest dan­ger,” he said. “What you’re doing is expand­ing the orbit.”
    ...

    Of course, Face­book pledged to remove these neo-Nazi ad categories...just like they did before:

    ...
    After being con­tact­ed by The Times, Face­book said that it would remove many of the audi­ence group­ings from its ad plat­form.

    “Most of these tar­get­ing options are against our poli­cies and should have been caught and removed soon­er,” said Face­book spokesman Joe Osborne. “While we have an ongo­ing review of our tar­get­ing options, we clear­ly need to do more, so we’re tak­ing a broad­er look at our poli­cies and detec­tion meth­ods.”

    ...

    Face­book has promised since 2017 that humans review every ad tar­get­ing cat­e­go­ry. It announced last fall the removal of 5,000 audi­ence cat­e­gories that risked enabling abuse or dis­crim­i­na­tion.
    ...

    So how confident should we be that Facebook is actually going to purge its system of neo-Nazi ad categories? Well, as the article notes, Facebook’s current ad system earned the company a record $55 billion in ad revenue in 2018, with profit margins above 40%. And a big reason for those profit margins is the high degree of automation and the lack of human oversight in the running of this system. In other words, Facebook’s record profits depend on exactly the kind of absent human oversight that allowed these neo-Nazi ad categories to proliferate:

    ...
    Approved by Face­book

    Facebook’s broad reach and sophis­ti­cat­ed adver­tis­ing tools brought in a record $55 bil­lion in ad rev­enue in 2018.

    Prof­it mar­gins stayed above 40%, thanks to a high degree of automa­tion, with algo­rithms sort­ing users into mar­ketable sub­sets based on their behav­ior — then choos­ing which ads to show them.

    But the lack of human over­sight has also brought the com­pa­ny con­tro­ver­sy.

    ...

    Of course, we shouldn’t necessarily assume that Facebook’s ongoing problems with Nazi ad categories are simply due to a lack of human oversight. It’s also quite possible that Facebook simply sees the promotion of extremism as a great source of revenue. After all, the LA Times reporters discovered that the number of users Facebook categorized as having an interest in Joseph Mengele actually grew from 117,150 to 127,010 during their investigation. That’s growth of over 8%! So the extremist ad market might simply be seen as a lucrative growth market that the company can’t resist:

    ...
    In the wake of past con­tro­ver­sies, Face­book has blocked ads aimed at those inter­est­ed in the most obvi­ous terms affil­i­at­ed with hate groups. “Nazi,” “Hitler,” “white suprema­cy” and “Holo­caust” all yield noth­ing in the ad plat­form. But adver­tis­ers could tar­get more than a mil­lion users with inter­est in Goebbels or the Nation­al Fas­cist Par­ty, which dis­solved in 1943. Himm­ler had near­ly 95,000 con­stituents. Men­gele had 117,150 inter­est­ed users — a num­ber that increased over the dura­tion of our report­ing, to 127,010.

    Face­book said these cat­e­gories were auto­mat­i­cal­ly gen­er­at­ed based on user activ­i­ty — lik­ing or com­ment­ing on ads, or join­ing cer­tain groups. But it would not pro­vide spe­cif­ic details about how it deter­mined a user’s inter­est in top­ics linked to Nazis.
    ...
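    As an aside, that “over 8%” figure is easy to verify from the two interest-category counts the article reports; a quick back-of-the-envelope check in Python:

    ```python
    # Growth in the Mengele "interested users" category over the course of
    # The Times' reporting, using the two counts quoted in the article.
    start_count = 117_150  # at the start of the reporting
    end_count = 127_010    # by the end of the reporting

    growth_pct = (end_count - start_count) / start_count * 100
    print(f"Growth: {growth_pct:.1f}%")  # Growth: 8.4%
    ```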

    Could it be that the explosive growth of extremism is simply making the hate demographic irresistible? Perhaps, although as we’ve seen with virtually all of the major social media platforms, like Twitter and YouTube, when it comes to profiting off of extremism it’s very much a ‘chicken & egg’ situation.

    Posted by Pterrafractyl | February 22, 2019, 11:57 am
  7. Oh look at that: a new Wall Street Journal study discovered that several smartphone apps are sending sensitive information to Facebook without getting user consent. This included “Flo Health”, an app for women to track their periods and ovulation. Facebook was literally collecting information on users’ ovulation status. Another app, Instant Heart Rate: HR Monitor, was also sending Facebook data, along with the real-estate app Realtor.com. This is all happening via the toolkit Facebook provides app developers. And while Facebook defended itself by pointing out that its terms require that developers not send the company sensitive information, Facebook also appears to be accepting this information without telling developers to stop:

    Asso­ci­at­ed Press

    Report: Apps give Face­book sen­si­tive health and oth­er data

    By MAE ANDERSON
    Feb­ru­ary 22, 2019

    NEW YORK (AP) — Sev­er­al phone apps are send­ing sen­si­tive user data, includ­ing health infor­ma­tion, to Face­book with­out users’ con­sent, accord­ing to a report by The Wall Street Jour­nal.

    An analytics tool called “App Events” allows app developers to record user activity and report it back to Facebook, even if the user isn’t on Facebook, according to the report.

    One exam­ple detailed by the Jour­nal shows how a woman would track her peri­od and ovu­la­tion using an app from Flo Health. After she enters when she last had her peri­od, Face­book soft­ware in the app would send along data, such as whether the user may be ovu­lat­ing. The Journal’s test­ing found that the data was sent with an adver­tis­ing ID that can be matched to a device or pro­file.

    Although Facebook’s terms instruct app devel­op­ers not to send such sen­si­tive infor­ma­tion, Face­book appeared to be accept­ing such data with­out telling the devel­op­ers to stop. Devel­op­ers are able to use such data to tar­get their own users while on Face­book.

    Face­book said in a state­ment that it requires apps to tell users what infor­ma­tion is shared with Face­book and it “pro­hibits app devel­op­ers from send­ing us sen­si­tive data.” The com­pa­ny said it works to remove infor­ma­tion that devel­op­ers should not have sent to Face­book.

    ...

    The data-shar­ing is relat­ed to a data ana­lyt­ics tool that Face­book offers devel­op­ers. The tool lets devel­op­ers see sta­tis­tics about their users and tar­get them with Face­book ads.

    Besides Flo Health, the Jour­nal found that Instant Heart Rate: HR Mon­i­tor and real-estate app Realtor.com were also send­ing app data to Face­book. The Jour­nal found that the apps did not pro­vide users any way to stop the data-shar­ing.

    Flo Health said in an emailed state­ment that using ana­lyt­i­cal sys­tems is a “com­mon prac­tice” for all app devel­op­ers and that it uses Face­book ana­lyt­ics for “inter­nal ana­lyt­ics pur­pos­es only.” But the com­pa­ny plans to audit its ana­lyt­ics tools to be “as proac­tive as pos­si­ble” on pri­va­cy con­cerns.

    Hours after the Jour­nal sto­ry was pub­lished, New York Gov. Andrew Cuo­mo direct­ed the state’s Depart­ment of State and Depart­ment of Finan­cial Ser­vices to “imme­di­ate­ly inves­ti­gate” what he calls a clear inva­sion of con­sumer pri­va­cy. The Demo­c­rat also urged fed­er­al reg­u­la­tors to step in to end the prac­tice.

    Securo­sis CEO Rich Mogull said that while it is not good for Face­book to have yet anoth­er data pri­va­cy flap in the head­lines, “In this case it looks like the main vio­la­tors were the com­pa­nies that wrote those appli­ca­tions,” he said. “Face­book in this case is more the enabler than the bad actor.”

    ———-

    “Report: Apps give Face­book sen­si­tive health and oth­er data” by MAE ANDERSON; Asso­ci­at­ed Press; 02/22/2019

    “In this case it looks like the main vio­la­tors were the com­pa­nies that wrote those applications...Facebook in this case is more the enabler than the bad actor.”

    That’s one way to spin it: Facebook is more of the enabler than the primary bad actor in this case. That’s sort of an improvement. Specifically, Facebook’s “App Events” tool is enabling app developers to send sensitive user information back to Facebook despite Facebook’s instructions to developers not to send sensitive information. And the fact that Facebook was clearly accepting this sensitive data without telling developers to stop sending it certainly adds to the enabling behavior. Even when that sensitive data included whether or not a woman is ovulating:

    ...
    An analytics tool called “App Events” allows app developers to record user activity and report it back to Facebook, even if the user isn’t on Facebook, according to the report.

    One exam­ple detailed by the Jour­nal shows how a woman would track her peri­od and ovu­la­tion using an app from Flo Health. After she enters when she last had her peri­od, Face­book soft­ware in the app would send along data, such as whether the user may be ovu­lat­ing. The Journal’s test­ing found that the data was sent with an adver­tis­ing ID that can be matched to a device or pro­file.

    Although Facebook’s terms instruct app devel­op­ers not to send such sen­si­tive infor­ma­tion, Face­book appeared to be accept­ing such data with­out telling the devel­op­ers to stop. Devel­op­ers are able to use such data to tar­get their own users while on Face­book.

    Face­book said in a state­ment that it requires apps to tell users what infor­ma­tion is shared with Face­book and it “pro­hibits app devel­op­ers from send­ing us sen­si­tive data.” The com­pa­ny said it works to remove infor­ma­tion that devel­op­ers should not have sent to Face­book.

    ...

    The data-shar­ing is relat­ed to a data ana­lyt­ics tool that Face­book offers devel­op­ers. The tool lets devel­op­ers see sta­tis­tics about their users and tar­get them with Face­book ads.
    ...
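    To make the mechanism concrete, here is a minimal Python sketch of the kind of event record an analytics SDK like “App Events” might transmit. The field names, endpoint-side format, and ID value are illustrative assumptions, not Facebook’s actual wire format; the point is what such a record does and does not contain:

```python
import json

# Hypothetical sketch of an SDK analytics event. Field names are
# illustrative assumptions, not Facebook's actual "App Events" format.
def build_app_event(advertising_id, app_id, event_name, payload):
    """Package an in-app action for transmission to an analytics backend."""
    return {
        "advertiser_id": advertising_id,  # device-level ad ID, not a name
        "app_id": app_id,
        "event": event_name,
        "custom_data": payload,
    }

event = build_app_event(
    advertising_id="38400000-8cf0-11bd-b23e-10b96e40000d",
    app_id="com.example.periodtracker",  # hypothetical app
    event_name="cycle_logged",
    payload={"may_be_ovulating": True},
)

# No name or email appears anywhere in the record, yet the advertising
# ID makes it trivially joinable to any other data keyed on the same
# device -- which is exactly the Journal's finding.
print(json.dumps(event, indent=2))
```

    Note the asymmetry: the developer can honestly say the record is “nameless,” while the recipient can still resolve it to a device, and usually a person, via the advertising ID.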

    And the range of sen­si­tive data includes every­thing from heart rate mon­i­tors to real estate apps. In oth­er words, pret­ty much any app might be send­ing data to Face­book but we don’t nec­es­sar­i­ly know which apps because the apps aren’t inform­ing users about this data col­lec­tion and don’t give users a way to stop it:

    ...
    Besides Flo Health, the Jour­nal found that Instant Heart Rate: HR Mon­i­tor and real-estate app Realtor.com were also send­ing app data to Face­book. The Jour­nal found that the apps did not pro­vide users any way to stop the data-shar­ing.
    ...

    And as the fol­low­ing Buz­zFeed report from Decem­ber describes, while app devel­op­ers tend to assume that the infor­ma­tion their apps are send­ing back to Face­book is anonymized because it does­n’t have your per­son­al name attached, that’s basi­cal­ly a garbage con­clu­sion because Face­book does­n’t need your name to know who you are. There’s plen­ty of oth­er iden­ti­fy­ing infor­ma­tion in what these apps are send­ing. Even if you don’t have a Face­book pro­file. And about half of the smart­phone apps found to be send­ing infor­ma­tion back to Face­book don’t even men­tion this in their pri­va­cy poli­cies accord­ing to a study by the Ger­man mobile secu­ri­ty ini­tia­tive Mobil­sich­er. So what per­cent of smart­phone apps over­all are send­ing infor­ma­tion back to Face­book? Accord­ing to the esti­mates of pri­va­cy researcher col­lec­tive App Cen­sus, about 30 per­cent of all apps on the Google Play store con­tact Face­book at start­up:

    Buz­zFeed News

    Apps Are Reveal­ing Your Pri­vate Infor­ma­tion To Face­book And You Prob­a­bly Don’t Know It

    Face­book pro­vid­ed devel­op­ers with tools to build Face­book-com­pat­i­ble apps like Tin­der, Grindr, and Preg­nan­cy+. Those apps have been qui­et­ly send­ing sen­si­tive user data to Face­book.
    Char­lie Warzel Buz­zFeed News Reporter

    Last updat­ed on Decem­ber 19, 2018, at 1:04 p.m. ET
    Post­ed on Decem­ber 19, 2018, at 12:30 p.m. ET

    Major Android apps like Tin­der, Grindr, and Preg­nan­cy+ are qui­et­ly trans­mit­ting sen­si­tive user data to Face­book, accord­ing to a new report by the Ger­man mobile secu­ri­ty ini­tia­tive Mobil­sich­er. This infor­ma­tion can include things like reli­gious affil­i­a­tion, dat­ing pro­files, and health care data. It’s being pur­pose­ful­ly col­lect­ed by Face­book through the Soft­ware Devel­op­er Kit (SDK) that it pro­vides to third-par­ty app devel­op­ers. And while Face­book does­n’t hide this, you prob­a­bly don’t know about it.

    Cer­tain­ly not all devel­op­ers did.

    “Most devel­op­ers we asked about this issue assumed that the infor­ma­tion Face­book receives is anonymized,” Mobil­sich­er explains in its report, which explores the types of infor­ma­tion shared behind the scenes between the plat­form and devel­op­ers. Through its SDK, Face­book pro­vides app devel­op­ers with data about their users, includ­ing where you click, how long you use the app, and your loca­tion when you use it. In exchange, Face­book can access the data those apps col­lect, which it then uses to tar­get adver­tis­ing rel­e­vant to a user’s inter­ests. That data doesn’t have your name attached, but as Mobil­sich­er shows, it’s far from anonymized, and it’s trans­mit­ted to Face­book regard­less of whether users are logged into the plat­form.

    Among the infor­ma­tion trans­mit­ted to Face­book are the IP address of the device that used the app, the type of device, time of use, and a user-spe­cif­ic Adver­tis­ing ID, which allows Face­book to iden­ti­fy and link third-par­ty app infor­ma­tion to the peo­ple using those apps. Apps that Mobil­sich­er test­ed include Bible+, Curvy, For­Dia­betes, Grindr, Kwitt, Migraine Bud­dy, Mood­path, Mus­lim Pro, OkCu­pid, Preg­nan­cy+, and more.

    As long as you’ve logged into Face­book on your mobile device at some point (through your phone’s brows­er or the Face­book app itself), the com­pa­ny cross-ref­er­ences the Adver­tis­ing ID and can link the third-par­ty app infor­ma­tion to your pro­file. And even if you don’t have a Face­book pro­file, the data can still be trans­mit­ted and col­lect­ed with oth­er third-par­ty app data that cor­re­sponds to your unique Adver­tis­ing ID.

    For devel­op­ers and Face­book, this trans­mis­sion appears rel­a­tive­ly com­mon. The pri­va­cy researcher col­lec­tive App Cen­sus esti­mates that “approx­i­mate­ly 30 per­cent of all apps in Google’s Play store con­tact Face­book at start­up” through the company’s SDK. The research firm Sta­tista esti­mates that the Google Play store has over 2.6 mil­lion apps as of Decem­ber 2018. As the Mobil­sich­er report details, many of these apps con­tain sen­si­tive infor­ma­tion. And while Face­book users can opt out and dis­able tar­get­ed adver­tise­ments (the same kind of ads that are informed by third-par­ty app data), it is unclear whether turn­ing off tar­get­ing stops Face­book from col­lect­ing this app infor­ma­tion. In a state­ment to Mobil­sich­er, Face­book spec­i­fied only that “if a per­son uti­lizes one of these con­trols, then Face­book will not use data gath­ered on these third-par­ty apps (e.g. through Face­book Audi­ence Net­work), for ad tar­get­ing.”

    A Face­book rep­re­sen­ta­tive clar­i­fied to Buz­zFeed News that while it enables users to opt out of tar­get­ed ads from third par­ties, the con­trols apply to the usage of the data and not its col­lec­tion. The com­pa­ny also said it does not use the third-par­ty data it col­lects through the SDK to cre­ate pro­files of non-Face­book users. Tin­der, Grindr, and Google did not respond to requests for com­ment. Apple, which uses a sim­i­lar ad iden­ti­fi­er, was not able to com­ment at the time of pub­li­ca­tion.

    The pub­li­ca­tion of Mobilsicher’s report comes at the end of a year rife with Face­book pri­va­cy scan­dals. In the past few months alone, the com­pa­ny has grap­pled with a few mas­sive ones. In late Sep­tem­ber, Face­book dis­closed a vul­ner­a­bil­i­ty that had exposed the per­son­al infor­ma­tion of 30 mil­lion users. A month lat­er, it revealed that same vul­ner­a­bil­i­ty had exposed pro­file infor­ma­tion includ­ing gen­der, loca­tion, birth dates, and recent search his­to­ry. Ear­li­er this month, the com­pa­ny report­ed anoth­er secu­ri­ty flaw that poten­tial­ly exposed the pub­lic and pri­vate pho­tos of as many as 6.8 mil­lion Face­book users to devel­op­ers that should not have had access to them. And on Tues­day, the New York Times report­ed that Face­book gave more than 150 com­pa­nies, includ­ing Net­flix, Ama­zon, Microsoft, Spo­ti­fy, and Yahoo, unprece­dent­ed and undis­closed access to users’ per­son­al data, in some cas­es grant­i­ng access to read users’ pri­vate mes­sages.

    The vul­ner­a­bil­i­ties, cou­pled with fall­out from the Cam­bridge Ana­lyt­i­ca data min­ing scan­dal, have set off a Face­book pri­va­cy reck­on­ing that’s inspired grass­roots cam­paigns to #Delete­Face­book, lead­ing to some high-pro­file dele­tions. They’ve also sparked a tech­ni­cal debate about whether Face­book “sells data” to adver­tis­ers. (Face­book and its defend­ers argue that no data changes hands as a result of its tar­get­ed adver­tis­ing, while crit­ics say that’s a seman­tic dodge and that the com­pa­ny sells ads against your infor­ma­tion, which is effec­tive­ly sim­i­lar.)

    Lost in that debate is the greater issue of trans­paren­cy. Plat­forms like Face­book do dis­close their data poli­cies in daunt­ing moun­tain ranges of text with impres­sive­ly off-putting com­plex­i­ty. Rare is the nor­mal human who reads them. Rar­er still is the non-devel­op­er human who reads the com­pa­ny’s even more off-putting data poli­cies for devel­op­ers. For these rea­sons, the mechan­ics of the Face­book plat­form — par­tic­u­lar­ly the nuances of its soft­ware devel­op­er kit — are large­ly unknown to the typ­i­cal Face­book user.

    Though CEO Mark Zucker­berg told law­mak­ers this year that Face­book users have “com­plete con­trol” of their data, Tues­day’s New York Times inves­ti­ga­tion as well as Mobil­sicher’s report reveal that user infor­ma­tion appears to move between dif­fer­ent com­pa­nies and plat­forms and is col­lect­ed, some­times with­out noti­fy­ing the users. In the case of Facebook’s SDK, for exam­ple, Mobil­sich­er notes that the trans­mis­sion of user infor­ma­tion from third-par­ty apps to Face­book occurs entire­ly behind the scenes. None of the apps Mobil­sich­er found to be trans­mit­ting data to Face­book “active­ly noti­fied users” that they were doing so. Accord­ing to the report, “Not even half of [the apps Mobil­sich­er test­ed] men­tion Face­book Ana­lyt­ics in their pri­va­cy pol­i­cy. Strict­ly speak­ing, none of them is GDPR-com­pli­ant, since the trans­mis­sion starts before any user inter­ac­tion could indi­cate informed con­sent.”

    ...

    ———-

    “Apps Are Reveal­ing Your Pri­vate Infor­ma­tion To Face­book And You Prob­a­bly Don’t Know It” by Char­lie Warzel; Buz­zFeed; 12/19/2018

    “Major Android apps like Tin­der, Grindr, and Preg­nan­cy+ are qui­et­ly trans­mit­ting sen­si­tive user data to Face­book, accord­ing to a new report by the Ger­man mobile secu­ri­ty ini­tia­tive Mobil­sich­er. This infor­ma­tion can include things like reli­gious affil­i­a­tion, dat­ing pro­files, and health care data. It’s being pur­pose­ful­ly col­lect­ed by Face­book through the Soft­ware Devel­op­er Kit (SDK) that it pro­vides to third-par­ty app devel­op­ers. And while Face­book does­n’t hide this, you prob­a­bly don’t know about it.”

    It’s not just the handful of apps described in the Wall Street Journal report. Major Android apps are routinely passing information to Facebook. And this information can include things like religious affiliation and dating profiles in addition to health care data. And while developers might be doing this, in part, because they assume the data is anonymized, it’s not. At least not in any meaningful way. And even non-Facebook users are getting their data sent:

    ...
    Cer­tain­ly not all devel­op­ers did.

    “Most devel­op­ers we asked about this issue assumed that the infor­ma­tion Face­book receives is anonymized,” Mobil­sich­er explains in its report, which explores the types of infor­ma­tion shared behind the scenes between the plat­form and devel­op­ers. Through its SDK, Face­book pro­vides app devel­op­ers with data about their users, includ­ing where you click, how long you use the app, and your loca­tion when you use it. In exchange, Face­book can access the data those apps col­lect, which it then uses to tar­get adver­tis­ing rel­e­vant to a user’s inter­ests. That data doesn’t have your name attached, but as Mobil­sich­er shows, it’s far from anonymized, and it’s trans­mit­ted to Face­book regard­less of whether users are logged into the plat­form.

    Among the infor­ma­tion trans­mit­ted to Face­book are the IP address of the device that used the app, the type of device, time of use, and a user-spe­cif­ic Adver­tis­ing ID, which allows Face­book to iden­ti­fy and link third-par­ty app infor­ma­tion to the peo­ple using those apps. Apps that Mobil­sich­er test­ed include Bible+, Curvy, For­Dia­betes, Grindr, Kwitt, Migraine Bud­dy, Mood­path, Mus­lim Pro, OkCu­pid, Preg­nan­cy+, and more.

    As long as you’ve logged into Face­book on your mobile device at some point (through your phone’s brows­er or the Face­book app itself), the com­pa­ny cross-ref­er­ences the Adver­tis­ing ID and can link the third-par­ty app infor­ma­tion to your pro­file. And even if you don’t have a Face­book pro­file, the data can still be trans­mit­ted and col­lect­ed with oth­er third-par­ty app data that cor­re­sponds to your unique Adver­tis­ing ID.
    ...
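    The “far from anonymized” point can be sketched in a few lines: records from unrelated apps that share a device Advertising ID collapse into a single cross-app profile with one dictionary join. The app names below come from Mobilsicher’s test list; the ID values and “signal” labels are hypothetical:

```python
from collections import defaultdict

# Why "no name attached" is not anonymity: events keyed on the same
# Advertising ID merge into one profile. Signals are hypothetical.
events = [
    {"ad_id": "38400000-8cf0", "app": "Muslim Pro",     "signal": "religion:islam"},
    {"ad_id": "38400000-8cf0", "app": "OkCupid",        "signal": "dating:active"},
    {"ad_id": "38400000-8cf0", "app": "Migraine Buddy", "signal": "health:migraine"},
    {"ad_id": "7f110000-aa21", "app": "Kayak",          "signal": "travel:paris"},
]

profiles = defaultdict(list)
for e in events:
    profiles[e["ad_id"]].append(e["signal"])

# One device ID now carries religion, dating status, and health data,
# without a "real" identifier ever having been transmitted.
for ad_id, signals in profiles.items():
    print(ad_id, signals)
```

    And because the Advertising ID is the join key, this works whether or not the device’s owner has a Facebook account, just as the report describes.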

    How com­mon is this? Accord­ing to pri­va­cy researcher col­lec­tive App Cen­sus esti­mates, it’s about 30 per­cent of all apps in the Google Play store. And half of the apps test­ed by Mobil­sich­er did­n’t even men­tion Face­book Ana­lyt­ics in their pri­va­cy pol­i­cy:

    ...
    For devel­op­ers and Face­book, this trans­mis­sion appears rel­a­tive­ly com­mon. The pri­va­cy researcher col­lec­tive App Cen­sus esti­mates that “approx­i­mate­ly 30 per­cent of all apps in Google’s Play store con­tact Face­book at start­up” through the company’s SDK. The research firm Sta­tista esti­mates that the Google Play store has over 2.6 mil­lion apps as of Decem­ber 2018. As the Mobil­sich­er report details, many of these apps con­tain sen­si­tive infor­ma­tion. And while Face­book users can opt out and dis­able tar­get­ed adver­tise­ments (the same kind of ads that are informed by third-par­ty app data), it is unclear whether turn­ing off tar­get­ing stops Face­book from col­lect­ing this app infor­ma­tion. In a state­ment to Mobil­sich­er, Face­book spec­i­fied only that “if a per­son uti­lizes one of these con­trols, then Face­book will not use data gath­ered on these third-par­ty apps (e.g. through Face­book Audi­ence Net­work), for ad tar­get­ing.”

    ...

    Though CEO Mark Zucker­berg told law­mak­ers this year that Face­book users have “com­plete con­trol” of their data, Tues­day’s New York Times inves­ti­ga­tion as well as Mobil­sicher’s report reveal that user infor­ma­tion appears to move between dif­fer­ent com­pa­nies and plat­forms and is col­lect­ed, some­times with­out noti­fy­ing the users. In the case of Facebook’s SDK, for exam­ple, Mobil­sich­er notes that the trans­mis­sion of user infor­ma­tion from third-par­ty apps to Face­book occurs entire­ly behind the scenes. None of the apps Mobil­sich­er found to be trans­mit­ting data to Face­book “active­ly noti­fied users” that they were doing so. Accord­ing to the report, “Not even half of [the apps Mobil­sich­er test­ed] men­tion Face­book Ana­lyt­ics in their pri­va­cy pol­i­cy. Strict­ly speak­ing, none of them is GDPR-com­pli­ant, since the trans­mis­sion starts before any user inter­ac­tion could indi­cate informed con­sent.”
    ...
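    Mobilsicher’s GDPR point is an ordering problem, which a toy sketch makes plain: an SDK that phones home in the app’s startup path has already transmitted data before any consent dialog can possibly appear. The function names here are invented for illustration:

```python
# Toy model of the consent-ordering problem: the SDK reports the launch
# before the user can be asked anything. All names are hypothetical.
transmissions = []

def analytics_sdk_init(ad_id):
    # Auto-initialization: fires as soon as the app process starts.
    transmissions.append({"event": "app_launch", "ad_id": ad_id})

def show_consent_dialog():
    # The user can only see this after the app has launched.
    return True

def app_startup(ad_id):
    analytics_sdk_init(ad_id)        # 1) launch event already sent
    consent = show_consent_dialog()  # 2) only now is the user asked
    return consent

app_startup("38400000-8cf0")
# Data left the device before informed consent was even possible.
print(transmissions)
```

    This is why, strictly speaking, none of the tested apps could be GDPR-compliant regardless of what their privacy policies said: the transmission precedes any interaction that could signal consent.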

    And accord­ing to the fol­low­ing arti­cle, that 30 per­cent esti­mate might be low. Accord­ing to a Pri­va­cy Inter­na­tion­al study, at least 20 out of 34 pop­u­lar Android apps that they test­ed were trans­mit­ting sen­si­tive infor­ma­tion back to Face­book with­out ask­ing for per­mis­sion:

    Engad­get

    More pop­u­lar apps are send­ing data to Face­book with­out ask­ing
    MyFit­ness­Pal, Tri­pAd­vi­sor and oth­ers may be vio­lat­ing EU pri­va­cy law.

    Jon Fin­gas
    12.30.18

    It’s not just dat­ing and health apps that might be vio­lat­ing your pri­va­cy when they send data to Face­book. A Pri­va­cy Inter­na­tion­al study has deter­mined that “at least” 20 out of 34 pop­u­lar Android apps are trans­mit­ting sen­si­tive infor­ma­tion to Face­book with­out ask­ing per­mis­sion, includ­ing Kayak, MyFit­ness­Pal, Sky­scan­ner and Tri­pAd­vi­sor. This typ­i­cal­ly includes ana­lyt­ics data that sends on launch, includ­ing your unique Android ID, but can also include data that sends lat­er. The trav­el search engine Kayak, for instance, appar­ent­ly sends des­ti­na­tion and flight search data, trav­el dates and whether or not kids might come along.

    While the data might not imme­di­ate­ly iden­ti­fy you, it could the­o­ret­i­cal­ly be used to rec­og­nize some­one through round­about means, such as the apps they have installed or whether they trav­el with the same per­son.

    The con­cern isn’t just that apps are over­shar­ing data, but that they may be vio­lat­ing the EU’s GDPR pri­va­cy rules by both col­lect­ing info with­out con­sent and poten­tial­ly iden­ti­fy­ing users. You can’t lay the blame sole­ly at the feet of Face­book or devel­op­ers, though. Face­book’s rel­e­vant devel­op­er kit did­n’t pro­vide the option to ask for per­mis­sion until after GDPR took effect. The social net­work did devel­op a fix, but it’s not clear that it works or that devel­op­ers are imple­ment­ing it prop­er­ly. Numer­ous apps were still using old­er ver­sions of the devel­op­er kit, accord­ing to the study. Sky­scan­ner not­ed that it was “not aware” it was send­ing data with­out per­mis­sion.

    ...

    ———-

    “More pop­u­lar apps are send­ing data to Face­book with­out ask­ing” by Jon Fin­gas; Engad­get; 12/30/18

    “It’s not just dat­ing and health apps that might be vio­lat­ing your pri­va­cy when they send data to Face­book. A Pri­va­cy Inter­na­tion­al study has deter­mined that “at least” 20 out of 34 pop­u­lar Android apps are trans­mit­ting sen­si­tive infor­ma­tion to Face­book with­out ask­ing per­mis­sion, includ­ing Kayak, MyFit­ness­Pal, Sky­scan­ner and Tri­pAd­vi­sor. This typ­i­cal­ly includes ana­lyt­ics data that sends on launch, includ­ing your unique Android ID, but can also include data that sends lat­er. The trav­el search engine Kayak, for instance, appar­ent­ly sends des­ti­na­tion and flight search data, trav­el dates and whether or not kids might come along.”

    So if you don’t know exactly whether or not an app is sending Facebook your data, it appears to be a safe bet that, yes, it is.

    And if you’re tempted to delete all of the apps off of your smartphone, recall all the stories about device makers, including smartphone manufacturers, exchanging large amounts of user data with Facebook and literally being treated as “extensions” of Facebook by the company. So while smartphone apps are certainly going to be a major source of personal data leakage, don’t forget there’s a good chance your smartphone itself is basically working for Facebook.

    Posted by Pterrafractyl | February 25, 2019, 12:03 pm
  8. Here’s an update on the brain-to-computer interface technology that Facebook is working on. First, recall that the initial use for the technology that Facebook has been touting thus far has simply been using your brain, instead of your fingers, for rapid typing. That always seemed like a rather limited application for a technology that’s basically reading your mind.

    Now Mark Zuckerberg is giving us a hint at one of the more ambitious applications of this technology: Augmented Reality (AR). AR technology isn’t new. Google Glass was an earlier version of AR technology, and Oculus, the virtual reality headset company owned by Facebook, has made it clear that AR is an area it is planning on getting into. But it sounds like Facebook has big plans for pairing the brain-to-computer interface with AR technology. This was revealed during a talk Zuckerberg gave at Harvard last month, a nearly two-hour interview with Harvard law school professor Jonathan Zittrain. According to Zuckerberg, the vision is to allow people to use their thoughts to navigate through augmented realities. This will presumably work in tandem with AR headsets.

    So as we should expect, Facebook’s early plans for brain-to-computer interfaces aren’t limited to people typing with their minds at a computer. There are plans for incorporating the interface into the kind of hardware people can wear everywhere, like AR glasses:

    Wired

    Zucker­berg Wants Face­book to Build a Mind-Read­ing Machine

    Author: Noam Cohen
    03.07.19 07:00 am

    For those of us who wor­ry that Face­book may have seri­ous bound­ary issues when it comes to the per­son­al infor­ma­tion of its users, Mark Zuckerberg’s recent com­ments at Har­vard should get the heart rac­ing.

    Zuckerberg dropped by the university last month ostensibly as part of a year of conversations with experts about the role of technology in society, “the opportunities, the challenges, the hopes, and the anxieties.” His nearly two-hour interview with Harvard law school professor Jonathan Zittrain in front of Facebook cameras and a classroom of students centered on the company’s unprecedented position as a town square for perhaps 2 billion people. To hear the young CEO tell it, Facebook was taking shots from all sides—either it was indifferent to the ethnic hatred festering on its platforms or it was a heavy-handed censor deciding whether an idea was allowed to be expressed.

    Zucker­berg con­fessed that he hadn’t sought out such an awe­some respon­si­bil­i­ty. No one should, he said. “If I was a dif­fer­ent per­son, what would I want the CEO of the com­pa­ny to be able to do?” he asked him­self. “I would not want so many deci­sions about con­tent to be con­cen­trat­ed with any indi­vid­ual.”

    Instead, Face­book will estab­lish its own Supreme Court, he told Zit­train, an out­side pan­el entrust­ed to set­tle thorny ques­tions about what appears on the plat­form. “I will not be able to make a deci­sion that over­turns what they say,” he promised, “which I think is good.”

    All was going to plan. Zucker­berg had dis­played a wel­come humil­i­ty about him­self and his com­pa­ny. And then he described what real­ly excit­ed him about the future—and the famil­iar Sil­i­con Val­ley hubris had returned. There was this promis­ing new tech­nol­o­gy, he explained, a brain-com­put­er inter­face, which Face­book has been research­ing.

    The idea is to allow peo­ple to use their thoughts to nav­i­gate intu­itive­ly through aug­ment­ed reality—the neu­ro-dri­ven ver­sion of the world recent­ly described by Kevin Kel­ly in these pages. No typ­ing, no speak­ing, even, to dis­tract you or slow you down as you inter­act with dig­i­tal addi­tions to the land­scape: dri­ving instruc­tions super­im­posed over the free­way, short biogra­phies float­ing next to atten­dees of a con­fer­ence, 3‑D mod­els of fur­ni­ture you can move around your apart­ment.

    The Har­vard audi­ence was a lit­tle tak­en aback by the conversation’s turn, and Zit­train made a law-pro­fes­sor joke about the con­sti­tu­tion­al right to remain silent in light of a tech­nol­o­gy that allows eaves­drop­ping on thoughts. “Fifth amend­ment impli­ca­tions are stag­ger­ing,” he said to laugh­ter. Even this gen­tle push­back was met with the tried-and-true defense of big tech com­pa­nies when crit­i­cized for tram­pling users’ privacy—users’ con­sent. “Pre­sum­ably,” Zucker­berg said, “this would be some­thing that some­one would choose to use as a prod­uct.”

    In short, he would not be divert­ed from his self-assigned mis­sion to con­nect the peo­ple of the world for fun and prof­it. Not by the dystopi­an image of brain-prob­ing police offi­cers. Not by an extend­ed apol­o­gy tour. “I don’t know how we got onto that,” he said jovial­ly. “But I think a lit­tle bit on future tech and research is inter­est­ing, too.”

    Of course, Face­book already fol­lows you around as you make your way through the world via the GPS in the smart­phone in your pock­et, and, like­wise, fol­lows you across the inter­net via code implant­ed in your brows­er. Would we real­ly let Face­book inside those old nog­gins of ours just so we can order a piz­za faster and with more top­pings? Zucker­berg clear­ly is count­ing on it.

    To be fair, Face­book doesn’t plan to actu­al­ly enter our brains. For one thing, a sur­gi­cal implant, Zucker­berg told Zit­train, wouldn’t scale well: “If you’re actu­al­ly try­ing to build things that every­one is going to use, you’re going to want to focus on the non­in­va­sive things.”

    The tech­nol­o­gy that Zucker­berg described is a show­er-cap-look­ing device that sur­rounds a brain and dis­cov­ers con­nec­tions between par­tic­u­lar thoughts and par­tic­u­lar blood flows or brain activ­i­ty, pre­sum­ably to assist the glass­es or head­sets man­u­fac­tured by Ocu­lus VR, which is part of Face­book. Already, Zucker­berg said, researchers can dis­tin­guish when a per­son is think­ing of a giraffe or an ele­phant based on neur­al activ­i­ty. Typ­ing with your mind would work off of the same prin­ci­ples.

    As with so many of Facebook’s inno­va­tions, Zucker­berg doesn’t see how brain-com­put­er inter­face breach­es an individual’s integri­ty, what Louis Bran­deis famous­ly defined as “the right to be left alone” in one’s thoughts, but instead sees a tech­nol­o­gy that empow­ers the indi­vid­ual. “The way that our phones work today, and all com­put­ing sys­tems, orga­nized around apps and tasks is fun­da­men­tal­ly not how our brains work and how we approach the world,” he told Zit­train. “That’s one of the rea­sons why I’m just very excit­ed longer term about espe­cial­ly things like aug­ment­ed real­i­ty, because it’ll give us a plat­form that I think actu­al­ly is how we think about stuff.”

    Kel­ly, in his essay about AR, like­wise sees a world that makes more sense when a “smart” ver­sion rests atop the quo­tid­i­an one. “Watch­es will detect chairs,” he writes of this mir­ror­world, “chairs will detect spread­sheets; glass­es will detect watch­es, even under a sleeve; tablets will see the inside of a tur­bine; tur­bines will see work­ers around them.” Sud­den­ly our envi­ron­ment, nat­ur­al and arti­fi­cial, will oper­ate as an inte­grat­ed whole. Except for humans with their bot­tled up thoughts and desires. Until, that is, they install BCI-enhanced glass­es.

    Zucker­berg explained the poten­tial ben­e­fits of the tech­nol­o­gy this way when he announced Facebook’s research in 2017: “Our brains pro­duce enough data to stream 4 HD movies every sec­ond. The prob­lem is that the best way we have to get infor­ma­tion out into the world—speech—can only trans­mit about the same amount of data as a 1980s modem. We’re work­ing on a sys­tem that will let you type straight from your brain about 5x faster than you can type on your phone today. Even­tu­al­ly, we want to turn it into a wear­able tech­nol­o­gy that can be man­u­fac­tured at scale. Even a sim­ple yes/no ‘brain click’ would help make things like aug­ment­ed real­i­ty feel much more nat­ur­al.”

    Zucker­berg likes to quote Steve Jobs’s descrip­tion of com­put­ers as “bicy­cles for the mind.” I can imag­ine him think­ing, What’s wrong with help­ing us ped­al a lit­tle faster?

    ...

    ———-

    “Zucker­berg Wants Face­book to Build a Mind-Read­ing Machine” by Noam Cohen; Wired; 03/07/2019

    “All was going to plan. Zuckerberg had displayed a welcome humility about himself and his company. And then he described what really excited him about the future—and the familiar Silicon Valley hubris had returned. There was this promising new technology, he explained, a brain-computer interface, which Facebook has been researching.”

    Yep, every­thing was going well at the Zucker­berg event until he start­ed talk­ing about his vision for the future. A future of aug­ment­ed real­i­ty that you nav­i­gate with your thoughts using Face­book’s brain-to-com­put­er inter­face tech­nol­o­gy. It might seem creepy, but Face­book is clear­ly bet­ting on it not being too creepy to pre­vent peo­ple from using it:

    ...
    The idea is to allow peo­ple to use their thoughts to nav­i­gate intu­itive­ly through aug­ment­ed reality—the neu­ro-dri­ven ver­sion of the world recent­ly described by Kevin Kel­ly in these pages. No typ­ing, no speak­ing, even, to dis­tract you or slow you down as you inter­act with dig­i­tal addi­tions to the land­scape: dri­ving instruc­tions super­im­posed over the free­way, short biogra­phies float­ing next to atten­dees of a con­fer­ence, 3‑D mod­els of fur­ni­ture you can move around your apart­ment.

    ...

    Of course, Face­book already fol­lows you around as you make your way through the world via the GPS in the smart­phone in your pock­et, and, like­wise, fol­lows you across the inter­net via code implant­ed in your brows­er. Would we real­ly let Face­book inside those old nog­gins of ours just so we can order a piz­za faster and with more top­pings? Zucker­berg clear­ly is count­ing on it.

    To be fair, Face­book doesn’t plan to actu­al­ly enter our brains. For one thing, a sur­gi­cal implant, Zucker­berg told Zit­train, wouldn’t scale well: “If you’re actu­al­ly try­ing to build things that every­one is going to use, you’re going to want to focus on the non­in­va­sive things.”

    The tech­nol­o­gy that Zucker­berg described is a show­er-cap-look­ing device that sur­rounds a brain and dis­cov­ers con­nec­tions between par­tic­u­lar thoughts and par­tic­u­lar blood flows or brain activ­i­ty, pre­sum­ably to assist the glass­es or head­sets man­u­fac­tured by Ocu­lus VR, which is part of Face­book. Already, Zucker­berg said, researchers can dis­tin­guish when a per­son is think­ing of a giraffe or an ele­phant based on neur­al activ­i­ty. Typ­ing with your mind would work off of the same prin­ci­ples.
    ...

    What about poten­tial abus­es, like vio­lat­ing the con­sti­tu­tion­al right to remain silent? Zucker­berg assured us that only peo­ple who choose to use the tech­nol­o­gy would actu­al­ly use it, so we shouldn’t wor­ry about abuse, a rather wor­ry­ing response in part because of how typ­i­cal it is:

    ...
    The Har­vard audi­ence was a lit­tle tak­en aback by the conversation’s turn, and Zit­train made a law-pro­fes­sor joke about the con­sti­tu­tion­al right to remain silent in light of a tech­nol­o­gy that allows eaves­drop­ping on thoughts. “Fifth amend­ment impli­ca­tions are stag­ger­ing,” he said to laugh­ter. Even this gen­tle push­back was met with the tried-and-true defense of big tech com­pa­nies when crit­i­cized for tram­pling users’ privacy—users’ con­sent. “Pre­sum­ably,” Zucker­berg said, “this would be some­thing that some­one would choose to use as a prod­uct.”

    In short, he would not be divert­ed from his self-assigned mis­sion to con­nect the peo­ple of the world for fun and prof­it. Not by the dystopi­an image of brain-prob­ing police offi­cers. Not by an extend­ed apol­o­gy tour. “I don’t know how we got onto that,” he said jovial­ly. “But I think a lit­tle bit on future tech and research is inter­est­ing, too.”
    ...

    But at least it’s aug­ment­ed real­i­ty that will be work­ing with some sort of AR head­set and the tech­nol­o­gy isn’t actu­al­ly inject­ing aug­ment­ed info into your brain. That would be a whole new lev­el of creepy.

    And accord­ing to the fol­low­ing arti­cle, a neu­ro­sci­en­tist at North­west­ern Uni­ver­si­ty, Dr. Moran Cerf, is work­ing on exact­ly that kind of tech­nol­o­gy and pre­dicts it will be avail­able to the pub­lic in as lit­tle as five years. Cerf is work­ing on some sort of chip that would be con­nect­ed to the inter­net, read your thoughts, go to Wikipedia or some oth­er web­site to get an answer to your ques­tions, and return the answer direct­ly to your brain. Yep, inter­net-con­nect­ed brain chips. He esti­mates that such tech­nol­o­gy could give peo­ple IQs of 200.

    So will peo­ple have to go through brain surgery to get this new tech­nol­o­gy? Not nec­es­sar­i­ly. Cerf is ask­ing the ques­tion “Can you eat some­thing that will actu­al­ly get to your brain? Can you eat things in parts that will assem­ble inside your head?” Yep, inter­net-con­nect­ed brain chips that you eat. So not only will you not need brain surgery to get the chip...in the­o­ry, you might not even know you ate one.

    Also note that it’s unclear if this brain chip can read your thoughts like Face­book’s brain-to-com­put­er inter­face or if it’s only for feed­ing you infor­ma­tion from the inter­net. In oth­er words, since Cerf’s vision for this chip requires the abil­i­ty to read thoughts first in order to go on the inter­net, find answers, and report them back, it’s pos­si­ble that this is the kind of com­put­er-to-brain tech­nol­o­gy that is intend­ed to work with the kind of brain-to-com­put­er mind-read­ing tech­nol­o­gy Face­book is work­ing on. And that’s par­tic­u­lar­ly rel­e­vant because Cerf tells us that he’s col­lab­o­rat­ing with ‘Sil­i­con Val­ley big wigs’ that he’d rather not name:

    CBS Chica­go

    North­west­ern Neu­ro­sci­en­tist Research­ing Brain Chips To Make Peo­ple Super­in­tel­li­gent

    By Lau­ren Vic­to­ry
    March 4, 2019 at 7:32 am

    CHICAGO (CBS) — What if you could make mon­ey, or type some­thing, just by think­ing about it? It sounds like sci­ence fic­tion, but it might be close to real­i­ty.

    In as lit­tle as five years, super smart peo­ple could be walk­ing down the street; men and women who’ve paid to increase their intel­li­gence.

    North­west­ern Uni­ver­si­ty neu­ro­sci­en­tist and busi­ness pro­fes­sor Dr. Moran Cerf made that pre­dic­tion, because he’s work­ing on a smart chip for the brain.

    “Make it so that it has an inter­net con­nec­tion, and goes to Wikipedia, and when I think this par­tic­u­lar thought, it gives me the answer,” he said.

    Cerf is col­lab­o­rat­ing with Sil­i­con Val­ley big wigs he’d rather not name.

    Face­book also has been work­ing on build­ing a brain-com­put­er inter­face, and SpaceX and Tes­la CEO Elon Musk is back­ing a brain-com­put­er inter­face called Neu­ralink.

    “Every­one is spend­ing a lot of time right now try­ing to find ways to get things into the brain with­out drilling a hole in your skull,” Cerf said. “Can you eat some­thing that will actu­al­ly get to your brain? Can you eat things in parts that will assem­ble inside your head?”

    ...

    “This is no longer a sci­ence prob­lem. This is a social prob­lem,” Cerf said.

    Cerf wor­ries about cre­at­ing intel­li­gence gaps in soci­ety; on top of exist­ing gen­der, racial, and finan­cial inequal­i­ties.

    “They can make mon­ey by just think­ing about the right invest­ments, and we can­not; so they’re going to get rich­er, they’re going to get health­i­er, they’re going to live longer,” he said.

    The aver­age IQ of an intel­li­gent mon­key is about 70, the aver­age human IQ is around 100, and a genius IQ is gen­er­al­ly con­sid­ered to begin around 140. Peo­ple with a smart chip in their brain could have an IQ of around 200, so would they even want to inter­act with the aver­age per­son?

    “Are they going to say, ‘Look at this cute human, Stephen Hawk­ing. He can do dif­fer­en­tial equa­tions in his mind, just like a lit­tle baby with 160 IQ points. Isn’t it amaz­ing? So cute. Now let’s put it back in a cage and give it bananas,’” Cerf said.

    Time will tell. Or will our minds?

    Approx­i­mate­ly 40,000 peo­ple in the Unit­ed States already have smart chips in their heads, but those brain implants are only approved for med­ical use for now.

    ———-

    “North­west­ern Neu­ro­sci­en­tist Research­ing Brain Chips To Make Peo­ple Super­in­tel­li­gent” by Lau­ren Vic­to­ry; CBS Chica­go; 03/04/2019

    “In as lit­tle as five years, super smart peo­ple could be walk­ing down the street; men and women who’ve paid to increase their intel­li­gence.”

    In just five years, you’ll be walk­ing down the street, won­der­ing about some­thing, and your brain chip will access Wikipedia, find the answer, and some­how deliv­er it to you. And you won’t even have to have gone through brain surgery. You’ll just eat some­thing that will some­how insert the chip into your brain:

    ...
    North­west­ern Uni­ver­si­ty neu­ro­sci­en­tist and busi­ness pro­fes­sor Dr. Moran Cerf made that pre­dic­tion, because he’s work­ing on a smart chip for the brain.

    “Make it so that it has an inter­net con­nec­tion, and goes to Wikipedia, and when I think this par­tic­u­lar thought, it gives me the answer,” he said.

    ...

    Face­book also has been work­ing on build­ing a brain-com­put­er inter­face, and SpaceX and Tes­la CEO Elon Musk is back­ing a brain-com­put­er inter­face called Neu­ralink.

    “Every­one is spend­ing a lot of time right now try­ing to find ways to get things into the brain with­out drilling a hole in your skull,” Cerf said. “Can you eat some­thing that will actu­al­ly get to your brain? Can you eat things in parts that will assem­ble inside your head?”

    ...

    The aver­age IQ of an intel­li­gent mon­key is about 70, the aver­age human IQ is around 100, and a genius IQ is gen­er­al­ly con­sid­ered to begin around 140. Peo­ple with a smart chip in their brain could have an IQ of around 200, so would they even want to inter­act with the aver­age per­son?
    ...

    That’s the promise. Or, rather, the hype. It’s hard to imag­ine this all being ready in five years. It’s also worth not­ing that if the only thing this chip does is con­duct inter­net queries it’s hard to see how this will effec­tive­ly raise peo­ple’s IQs to 200. After all, peo­ple damn near have their brains con­nect­ed to Wikipedia already with smart­phones and there does­n’t appear to have been a smart­phone-induced IQ boost. But who knows. Once you have the tech­nol­o­gy to rapid­ly feed infor­ma­tion back and forth between the brain and a com­put­er there could be all sorts of IQ-boost­ing tech­nolo­gies that could be devel­oped. At a min­i­mum, it could allow for some very fan­cy aug­ment­ed real­i­ty tech­nol­o­gy.

    So some sort of com­put­er-to-brain inter­face tech­nol­o­gy appears to be on the hori­zon. And if Cer­f’s chip ends up being tech­no­log­i­cal­ly fea­si­ble it’s going to have Sil­i­con Val­ley big wigs behind it. We just don’t know which big wigs because he won’t tell us:

    ...
    Cerf is col­lab­o­rat­ing with Sil­i­con Val­ley big wigs he’d rather not name.
    ...

    So some Sil­i­con Val­ley big wigs are work­ing on com­put­er-to-brain inter­face tech­nol­o­gy that can poten­tial­ly be fed to peo­ple. And they want to keep their involve­ment in the devel­op­ment of this tech­nol­o­gy a secret. That’s super omi­nous, right?

    Posted by Pterrafractyl | March 7, 2019, 3:45 pm
  9. Remem­ber how the right-wing out­rage machine cre­at­ed an uproar in 2016 over alle­ga­tions that Face­book’s trend­ing news sec­tion was cen­sor­ing con­ser­v­a­tive sto­ries? And remem­ber how Face­book respond­ed by fir­ing all the human edi­tors and replac­ing them with an algo­rithm that turned the trend­ing news sec­tion into a dis­trib­u­tor of right-wing ‘fake news’ mis­in­for­ma­tion? And remem­ber how Face­book announced a new set of news feed changes in Jan­u­ary of 2018, then a cou­ple of months lat­er con­ser­v­a­tives were again com­plain­ing that it was biased against them, so Face­book hired for­mer Repub­li­can Sen­a­tor Jon Kyl and the Her­itage Foun­da­tion to audit the com­pa­ny and deter­mine whether or not Face­book had a polit­i­cal bias?

    Well, it looks like we’re due for a round of fake out­rage designed to make social media com­pa­nies more com­pli­ant to right-wing dis­in­for­ma­tion cam­paigns. This time, it’s Pres­i­dent Trump lead­ing the way on the faux out­rage, com­plain­ing that “Some­thing’s hap­pen­ing with those groups of folks that are run­ning Face­book and Google and Twit­ter and I do think we have to get to the bot­tom of it”:

    The Hill

    Trump accus­es Sil­i­con Val­ley of col­lud­ing to silence con­ser­v­a­tives

    By Justin Wise — 03/19/19 03:09 PM EDT

    Pres­i­dent Trump on Tues­day sug­gest­ed that Google, Face­book and Twit­ter have col­lud­ed with each oth­er to dis­crim­i­nate against Repub­li­cans.

    “We use the word col­lu­sion very loose­ly all the time. And I will tell you there is col­lu­sion with respect to that,” Trump said dur­ing a press con­fer­ence at the White House Rose Gar­den. “Some­thing has to be going on. You see the lev­el, in many cas­es, of hatred for a cer­tain group of peo­ple that hap­pened to be in pow­er, that hap­pened to win the elec­tion.

    “Some­thing’s hap­pen­ing with those groups of folks that are run­ning Face­book and Google and Twit­ter and I do think we have to get to the bot­tom of it,” he added.

    The pres­i­den­t’s com­ments marked an esca­la­tion in his crit­i­cism of U.S. tech giants like Twit­ter, a plat­form that he fre­quent­ly uses to pro­mote his poli­cies and denounce his polit­i­cal oppo­nents.

    Trump said Twit­ter is “dif­fer­ent than it used to be,” when asked about a new push to make social media com­pa­nies liable for the con­tent on their plat­form.

    “We have to do some­thing,” Trump said. “I have many, many mil­lions of fol­low­ers on Twit­ter, and it’s dif­fer­ent than it used to be. Things are hap­pen­ing. Names are tak­en off.”

    He lat­er alleged that con­ser­v­a­tives and Repub­li­cans are dis­crim­i­nat­ed against on social media plat­forms.

    “It’s big, big dis­crim­i­na­tion,” he said. “I see it absolute­ly on Twit­ter.”

    Trump and oth­er con­ser­v­a­tives have increas­ing­ly argued that com­pa­nies like Google, Face­book and Twit­ter have an insti­tu­tion­al bias that favors lib­er­als. Trump tweet­ed Tues­day morn­ing that the tech giants were “sooo on the side of the Rad­i­cal Left Democ­rats.”

    The three com­pa­nies did not imme­di­ate­ly respond to requests for com­ment on Trump’s Tues­day morn­ing tweet.

    He also vowed to look into a report that his social media direc­tor, Dan Scav­i­no, was tem­porar­i­ly blocked from mak­ing pub­lic com­ments on one of his Face­book posts.

    The series of com­ments came a day after Rep. Devin Nunes (R‑Calif.) sued Twit­ter and some of its users for more than $250 mil­lion. Nunes’s suit alleges that the plat­form cen­sors con­ser­v­a­tive voic­es by “shad­ow-ban­ning” them.

    The Cal­i­for­nia Repub­li­can also accused Twit­ter of “facil­i­tat­ing defama­tion on its plat­form” by “ignor­ing law­ful com­plaints about offen­sive con­tent.”

    ———-

    “Trump accus­es Sil­i­con Val­ley of col­lud­ing to silence con­ser­v­a­tives” by Justin Wise; The Hill; 03/19/2019

    “Trump and oth­er con­ser­v­a­tives have increas­ing­ly argued that com­pa­nies like Google, Face­book and Twit­ter have an insti­tu­tion­al bias that favors lib­er­als. Trump tweet­ed Tues­day morn­ing that the tech giants were “sooo on the side of the Rad­i­cal Left Democ­rats.””

    Yep, the social media giants are appar­ent­ly “sooo on the side of the Rad­i­cal Left Democ­rats.” Trump is con­vinced of this because he feels that “some­thing has to be going on” and “we have to get to the bot­tom of it”. He’s also sure that Twit­ter is “dif­fer­ent than it used to be” and “we have to do some­thing” because it’s “big, big dis­crim­i­na­tion”:

    ...
    “We use the word col­lu­sion very loose­ly all the time. And I will tell you there is col­lu­sion with respect to that,” Trump said dur­ing a press con­fer­ence at the White House Rose Gar­den. “Some­thing has to be going on. You see the lev­el, in many cas­es, of hatred for a cer­tain group of peo­ple that hap­pened to be in pow­er, that hap­pened to win the elec­tion.

    “Some­thing’s hap­pen­ing with those groups of folks that are run­ning Face­book and Google and Twit­ter and I do think we have to get to the bot­tom of it,” he added.

    The pres­i­den­t’s com­ments marked an esca­la­tion in his crit­i­cism of U.S. tech giants like Twit­ter, a plat­form that he fre­quent­ly uses to pro­mote his poli­cies and denounce his polit­i­cal oppo­nents.

    Trump said Twit­ter is “dif­fer­ent than it used to be,” when asked about a new push to make social media com­pa­nies liable for the con­tent on their plat­form.

    “We have to do some­thing,” Trump said. “I have many, many mil­lions of fol­low­ers on Twit­ter, and it’s dif­fer­ent than it used to be. Things are hap­pen­ing. Names are tak­en off.”

    He lat­er alleged that con­ser­v­a­tives and Repub­li­cans are dis­crim­i­nat­ed against on social media plat­forms.

    “It’s big, big dis­crim­i­na­tion,” he said. “I see it absolute­ly on Twit­ter.”
    ...

    And these com­ments by Trump come a day after Repub­li­can con­gress­man Devin Nunes sued Twit­ter for “shad­ow-ban­ning” con­ser­v­a­tive voic­es. Nunes also sued a hand­ful of Twit­ter users who had been par­tic­u­lar­ly crit­i­cal of him:

    ...
    The series of com­ments came a day after Rep. Devin Nunes (R‑Calif.) sued Twit­ter and some of its users for more than $250 mil­lion. Nunes’s suit alleges that the plat­form cen­sors con­ser­v­a­tive voic­es by “shad­ow-ban­ning” them.

    The Cal­i­for­nia Repub­li­can also accused Twit­ter of “facil­i­tat­ing defama­tion on its plat­form” by “ignor­ing law­ful com­plaints about offen­sive con­tent.”
    ...

    It’s worth not­ing that Twit­ter did admit to sort of inad­ver­tent­ly “shad­ow-ban­ning” some promi­nent con­ser­v­a­tives in June of last year, includ­ing Don­ald Trump, Jr. The com­pa­ny explained that it changed the algo­rithm for which names show up in the auto-pop­u­lat­ed drop-down search box in order to reduce the expo­sure of accounts found to engage in troll-like behav­ior, and this had the effect of down­grad­ing the accounts of a num­ber of right-wing fig­ures. Because of course that’s what would hap­pen if you imple­ment an algo­rithm to reduce the expo­sure of accounts engag­ing in troll-like behav­ior. Also, a cou­ple of days after the reports on this, Twit­ter claimed it ‘fixed’ the prob­lem, so promi­nent Repub­li­cans engag­ing in troll-like behav­ior will once again show up in the auto-pop­u­lat­ed search drop-down box.

    But Devin Nunes appears to feel so harmed by Twit­ter that he’s suing it for $250 mil­lion any­way. And as the fol­low­ing col­umn notes, while the law­suit is a joke on legal grounds and stands no chance of vic­to­ry, it does serve an impor­tant pur­pose. And it’s the same pur­pose we’ve seen over and over: intim­i­dat­ing the tech com­pa­nies into giv­ing con­ser­v­a­tives pref­er­en­tial treat­ment and giv­ing them a green light to turn these plat­forms into dis­in­for­ma­tion machines.

    But Nunes’s deci­sion to sue some indi­vid­u­als who were very crit­i­cal of him over Twit­ter also serves anoth­er pur­pose that we saw when Peter Thiel man­aged to sue Gawk­er into obliv­ion: send­ing out the gen­er­al threat that if you pub­licly crit­i­cize wealthy right-wingers they will sue and cost you large amounts of mon­ey in legal fees whether they have a legal case or not:

    Talk­ing Points Memo
    Edi­tor’s Blog

    Nunes And The Peter Thiel Era

    By Jeet Heer
    March 19, 2019 1:47 am

    First of all, I should intro­duce myself: I’m Jeet Heer, a con­tribut­ing edi­tor at The New Repub­lic. I’m fill­ing in for Josh as he takes a much-deserved break. Hav­ing fol­lowed TPM from its ear­li­est days as a blog cov­er­ing the 2000 (!) elec­tion and its after­math, I’m hon­ored to be here.

    I want­ed to flag a sto­ry from Mon­day night that is both com­i­cal­ly absurd but also has a sin­is­ter side: Repub­li­can Con­gress­man Devin Nunes’ announced law­suit against Twit­ter and three Twit­ter accounts who he claims have defamed him.

    You can read Nunes’ com­plaint here. Much of the suit reads like pure dada non­sense, espe­cial­ly since Nunes is going after two joke accounts with the han­dles Devin Nunes’ Mom and Devin Nunes’ Cow. This leads to the immor­tal line, “Like Devin Nunes’ Mom, Devin Nunes’ Cow engaged in a vicious defama­tion cam­paign against Nunes.”

    ...

    As tempt­ing as it is to sim­ply mock the suit, it also has to be said that it is part of some­thing more dis­turb­ing: the ris­ing use of legal actions, espe­cial­ly by right-wing forces, to shut down polit­i­cal oppo­nents. As Susan Hen­nessey, a legal schol­ar at the Brook­ings Insti­tute, not­ed, the suit “is a politi­cian attempt­ing to abuse the judi­cial process in order to scare peo­ple out of crit­i­ciz­ing him by prov­ing that he can cost them a lot in legal fees.”

    Peter Thiel’s sup­port of a suit that destroyed Gawk­er is the prime exam­ple. Thiel’s suc­cess seems to have embold­ened the right in gen­er­al. Amid Trump’s chat­ter about want­i­ng to loosen libel laws and sim­i­lar talk from Supreme Court Jus­tice Clarence Thomas, we’ve seen law­suits or threat­ened law­suits from Joe Arpaio, Sarah Palin, and Roy Moore, among oth­ers. As with the Nunes suit, many of these seem like jokes, but they have a goal of chill­ing speech.

    ———-

    “Nunes And The Peter Thiel Era” by Jeet Heer; Talk­ing Points Memo; 03/19/2019

    “As tempt­ing as it is to sim­ply mock the suit, it also has to be said that it is part of some­thing more dis­turb­ing: the ris­ing use of legal actions, espe­cial­ly by right-wing forces, to shut down polit­i­cal oppo­nents. As Susan Hen­nessey, a legal schol­ar at the Brook­ings Insti­tute, not­ed, the suit “is a politi­cian attempt­ing to abuse the judi­cial process in order to scare peo­ple out of crit­i­ciz­ing him by prov­ing that he can cost them a lot in legal fees.””

    This form of right-wing intim­i­da­tion of the media — intim­i­da­tion that ris­es to the lev­el of ‘we will finan­cial­ly destroy you if you crit­i­cize us’ — is exact­ly what we saw Peter Thiel unleash when he revenge-bankrolled a law­suit that drove Gawk­er into bank­rupt­cy:

    ...
    Peter Thiel’s sup­port of a suit that destroyed Gawk­er is the prime exam­ple. Thiel’s suc­cess seems to have embold­ened the right in gen­er­al. Amid Trump’s chat­ter about want­i­ng to loosen libel laws and sim­i­lar talk from Supreme Court Jus­tice Clarence Thomas, we’ve seen law­suits or threat­ened law­suits from Joe Arpaio, Sarah Palin, and Roy Moore, among oth­ers. As with the Nunes suit, many of these seem like jokes, but they have a goal of chill­ing speech.
    ...

    So it’s going to be inter­est­ing to see if Nunes’s law­suit fur­thers this trend or ends up being a com­plete joke. But giv­en that one met­ric of suc­cess is sim­ply cost­ing the defen­dants a lot of mon­ey it real­ly could end up being quite suc­cess­ful. We’ll see.

    And with all that in mind, here’s a review of the impact of changes Face­book made to their news feed algo­rithm last year. Sur­prise! It turns out Fox News sto­ries lead in terms of engage­ment on Face­book, where com­ments, shares, and user ‘reac­tions’ (like a smi­ley face or angry face) on a sto­ry are used as the engage­ment met­ric. And if you fil­ter the respons­es to only ‘angry’ reac­tions, Fox News dom­i­nates the rest of the pack, with Bre­it­bart at #2 and offi­cial­ben­shapiro at #3 (CNN is #4). So more peo­ple appear to be see­ing Fox News sto­ries than sto­ries from any oth­er out­let on the plat­form, and it’s mak­ing them angry:

    The Huff­in­g­ton Post

    Fox News Dom­i­nates Face­book By Incit­ing Anger, Study Shows
    Facebook’s algo­rithm over­haul was sup­posed to make users feel hap­pi­er, but it doesn’t look like it did.

    By Amy Rus­so
    3/18/2019 01:42 pm ET Updat­ed

    Face­book CEO Mark Zucker­berg announced an algo­rithm over­haul last year intend­ed to make users feel bet­ter with less news in their feeds and more con­tent from fam­i­ly and friends instead.

    But the data is in, and it shows Fox News rules the plat­form in terms of engage­ment, with “angry” reac­tions to its posts lead­ing the way.

    Accord­ing to a NewsWhip study pub­lished this month that exam­ines Face­book News Feed con­tent from Jan. 1 to March 10, the cable net­work was the No. 1 Eng­lish-lan­guage pub­lish­er when it came to com­ments, shares and reac­tions.

    The out­let far out­paced its com­pe­ti­tion, with NBC, the BBC, the Dai­ly Mail, CNN and oth­ers lag­ging behind.

    [see chart]

    The dif­fer­ence is even more glar­ing when rank­ing out­lets only by the num­ber of angry respons­es they trig­ger with Facebook’s reac­tions fea­ture.

    By that mea­sure, Fox News is leaps and bounds ahead of oth­er pages, includ­ing that of right-wing web­site Bre­it­bart and con­ser­v­a­tive Dai­ly Wire Edi­tor-in-Chief Ben Shapiro.

    [see chart]

    While Harvard’s Nie­man Lab on jour­nal­ism points out that Fox News’ pop­u­lar­i­ty on Face­book may have occurred with­out help from an algo­rithm, it begs the ques­tion of whether Zuckerberg’s vision for the plat­form is tru­ly com­ing to fruition.

    In Jan­u­ary 2018, Zucker­berg told users he had “a respon­si­bil­i­ty to make sure our ser­vices aren’t just fun to use, but also good for people’s well-being.”

    He said he was hop­ing to pro­mote “mean­ing­ful inter­ac­tions between peo­ple” and that the algo­rithm over­haul would result in “less pub­lic con­tent like posts from busi­ness­es, brands, and media” and “more from your friends, fam­i­ly and groups.”

    While over­all engage­ment on Face­book has sky­rock­et­ed this year com­pared with 2018, the pow­er of the platform’s algo­rithms remains unclear.

    ...

    ———-

    “Fox News Dom­i­nates Face­book By Incit­ing Anger, Study Shows” by Amy Rus­so; The Huff­in­g­ton Post; 3/18/2019

    “But the data is in, and it shows Fox News rules the plat­form in terms of engage­ment, with “angry” reac­tions to its posts lead­ing the way.”

    Face­book’s news feed algo­rithm sure loves serv­ing up Fox News sto­ries. Espe­cial­ly the kinds of sto­ries that make peo­ple angry:

    ...
    Accord­ing to a NewsWhip study pub­lished this month that exam­ines Face­book News Feed con­tent from Jan. 1 to March 10, the cable net­work was the No. 1 Eng­lish-lan­guage pub­lish­er when it came to com­ments, shares and reac­tions.

    The out­let far out­paced its com­pe­ti­tion, with NBC, the BBC, the Dai­ly Mail, CNN and oth­ers lag­ging behind.

    [see chart]

    The dif­fer­ence is even more glar­ing when rank­ing out­lets only by the num­ber of angry respons­es they trig­ger with Facebook’s reac­tions fea­ture.

    By that mea­sure, Fox News is leaps and bounds ahead of oth­er pages, includ­ing that of right-wing web­site Bre­it­bart and con­ser­v­a­tive Dai­ly Wire Edi­tor-in-Chief Ben Shapiro.
    ...
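    To make the study’s two rank­ings con­crete, here is a toy sketch of how a “total engage­ment” rank­ing (com­ments + shares + all reac­tions) can dif­fer from an “angry reac­tions only” rank­ing. The pub­lish­er names and counts below are invent­ed for illus­tra­tion; they are not NewsWhip’s data:

    ```python
    # Toy post data; numbers are invented, not from the NewsWhip study.
    posts = [
        {"publisher": "Outlet A", "comments": 900, "shares": 1200,
         "reactions": {"angry": 800, "haha": 100, "love": 50}},
        {"publisher": "Outlet B", "comments": 1100, "shares": 1000,
         "reactions": {"angry": 150, "haha": 400, "love": 600}},
    ]

    def total_engagement(post):
        # Broad metric: comments plus shares plus every reaction type.
        return post["comments"] + post["shares"] + sum(post["reactions"].values())

    def angry_reactions(post):
        # Narrow metric: count only the "angry" reactions.
        return post["reactions"].get("angry", 0)

    ranked_overall = sorted(posts, key=total_engagement, reverse=True)
    ranked_by_anger = sorted(posts, key=angry_reactions, reverse=True)

    print([p["publisher"] for p in ranked_overall])   # ['Outlet B', 'Outlet A']
    print([p["publisher"] for p in ranked_by_anger])  # ['Outlet A', 'Outlet B']
    ```

    The point of the sketch is that an out­let can trail on over­all engage­ment yet dom­i­nate once you fil­ter to anger alone, which is the gap the study high­lights.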

    So as Pres­i­dent Trump and Rep Nunes con­tin­ue wag­ing their social media intim­i­da­tion cam­paign it’s going to be worth keep­ing in mind the wild suc­cess these intim­i­da­tion cam­paigns have already had. This is a tac­tic that clear­ly works.

    And in relat­ed news, Trump just threat­ened to open a fed­er­al inves­ti­ga­tion of Sat­ur­day Night Live for mak­ing too much fun of him...

    Posted by Pterrafractyl | March 20, 2019, 3:56 pm
  10. Oh look, anoth­er Face­book data deba­cle: Face­book just admit­ted that it’s been stor­ing hun­dreds of mil­lions of pass­words in plain-text log files, which is a huge secu­ri­ty ‘no no’ for a com­pa­ny like Face­book. Nor­mal­ly, pass­words are sup­posed to be stored as a hash (where the pass­word is con­vert­ed to a long string of ran­dom-seem­ing text). This pass­word-to-hash map­ping approach allows com­pa­nies like Face­book to check that the pass­word you input match­es your account pass­word with­out hav­ing to direct­ly store the pass­word. Only the hash is stored. And that basic secu­ri­ty rule hasn’t been fol­lowed for up to 600 mil­lion Face­book accounts. As a result, the plain­text pass­words that peo­ple have been using for Face­book have poten­tial­ly been read­able by Face­book employ­ees for years. This has appar­ent­ly been the case since 2012 and was dis­cov­ered in Jan­u­ary 2019 by a team of engi­neers who were review­ing some code and noticed this ‘bug’.
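    The hash-and-ver­i­fy scheme described above can be sketched in a few lines. This is a min­i­mal illus­tra­tion using Python’s stan­dard library, not Face­book’s actu­al imple­men­ta­tion; real sys­tems also tune the work fac­tor and man­age per-user salts in a data­base:

    ```python
    import hashlib
    import hmac
    import os

    def hash_password(password, salt=None):
        # Store only the (salt, digest) pair; the plaintext is never persisted.
        salt = salt if salt is not None else os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    def verify_password(password, salt, stored_digest):
        # Re-derive the digest from the login attempt and compare in constant time,
        # so the server can check a password it never actually stored.
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, stored_digest)

    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("wrong guess", salt, stored))                   # False
    ```

    The fail­ure Face­book is describ­ing is not a break in this scheme but a bypass of it: the plain­text val­ues were cap­tured some­where else (log files) before hash­ing could pro­tect them.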

    It sounds like the users of Face­book Lite — a ver­sion of Face­book for peo­ple with poor inter­net con­nec­tions — were par­tic­u­lar­ly hard hit. The way Face­book describes it, hun­dreds of mil­lions of Face­book Lite users will be get­ting an email about this, along with tens of mil­lions of reg­u­lar Face­book users and even tens of thou­sands of Insta­gram users (Face­book owns Insta­gram).

    It’s unclear why Face­book did­n’t report this soon­er, but it sounds like it was only report­ed in the first place after an anony­mous senior Face­book employ­ee told Kreb­sOn­Se­cu­ri­ty — the blog for secu­ri­ty expert Bri­an Krebs — about this. So for all we know Face­book had no inten­tion of telling peo­ple at all, which would be par­tic­u­lar­ly egre­gious if true because peo­ple often reuse pass­words across dif­fer­ent web­sites and so stor­ing this infor­ma­tion in a man­ner that is read­able to thou­sands of Face­book employ­ees rep­re­sents a very real secu­ri­ty threat for sites across the inter­net for peo­ple that reuse pass­words (which is unfor­tu­nate­ly a lot of peo­ple).

    Is there any evi­dence of Face­book employ­ees actu­al­ly abus­ing this infor­ma­tion? At this point Face­book is assur­ing us that it has seen no evi­dence of any­one inten­tion­al­ly try­ing to read the pass­word data. But as we’re going to see, around 20,000 Face­book employ­ees have had access to these logs. More alarm­ing­ly, Face­book admits that around 2,000 engi­neers and soft­ware devel­op­ers have con­duct­ed around 9 mil­lion queries for data ele­ments that con­tained the pass­words. But we are assured by Face­book that there’s noth­ing to wor­ry about:

    TechCrunch

    Face­book admits it stored ‘hun­dreds of mil­lions’ of account pass­words in plain­text

    Zack Whit­tak­er
    03/21/2019

    Flip the “days since last Face­book secu­ri­ty inci­dent” back to zero.

    Face­book con­firmed Thurs­day in a blog post, prompt­ed by a report by cyber­se­cu­ri­ty reporter Bri­an Krebs, that it stored “hun­dreds of mil­lions” of account pass­words in plain­text for years.

    The dis­cov­ery was made in Jan­u­ary, said Facebook’s Pedro Canahuati, as part of a rou­tine secu­ri­ty review. None of the pass­words were vis­i­ble to any­one out­side Face­book, he said. Face­book admit­ted the secu­ri­ty lapse months lat­er, after Krebs said logs were acces­si­ble to some 2,000 engi­neers and devel­op­ers.

    Krebs said the bug dat­ed back to 2012.

    “This caught our atten­tion because our login sys­tems are designed to mask pass­words using tech­niques that make them unread­able,” said Canahuati. “We have found no evi­dence to date that any­one inter­nal­ly abused or improp­er­ly accessed them,” but did not say how the com­pa­ny made that con­clu­sion.

    Face­book said it will noti­fy “hun­dreds of mil­lions of Face­book Lite users,” a lighter ver­sion of Face­book for users where inter­net speeds are slow and band­width is expen­sive, and “tens of mil­lions of oth­er Face­book users.” The com­pa­ny also said “tens of thou­sands of Insta­gram users” will be noti­fied of the expo­sure.

    Krebs said as many as 600 mil­lion users could be affect­ed — about one-fifth of the company’s 2.7 bil­lion users, but Face­book has yet to con­firm the fig­ure.

    Face­book also didn’t say how the bug came to be. Stor­ing pass­words in read­able plain­text is an inse­cure way of stor­ing pass­words. Com­pa­nies, like Face­book, hash and salt pass­words — two ways of fur­ther scram­bling pass­words — to store pass­words secure­ly. That allows com­pa­nies to ver­i­fy a user’s pass­word with­out know­ing what it is.

    Twit­ter and GitHub were hit by sim­i­lar but inde­pen­dent bugs last year. Both com­pa­nies said pass­words were stored in plain­text and not scram­bled.

    It’s the lat­est in a string of embar­rass­ing secu­ri­ty issues at the com­pa­ny, prompt­ing con­gres­sion­al inquiries and gov­ern­ment inves­ti­ga­tions. It was report­ed last week that Facebook’s deals that allowed oth­er tech com­pa­nies to access account data with­out con­sent was under crim­i­nal inves­ti­ga­tion.

    It’s not known why Face­book took months to con­firm the inci­dent, or if the com­pa­ny informed state or inter­na­tion­al reg­u­la­tors per U.S. breach noti­fi­ca­tion and Euro­pean data pro­tec­tion laws. We asked Face­book but a spokesper­son did not imme­di­ate­ly com­ment beyond the blog post.

    ...

    ———-

    “Face­book admits it stored ‘hun­dreds of mil­lions’ of account pass­words in plain­text” by Zack Whit­tak­er; TechCrunch; 03/21/2019

“Facebook said it will notify “hundreds of millions of Facebook Lite users,” a lighter version of Facebook for users where internet speeds are slow and bandwidth is expensive, and “tens of millions of other Facebook users.” The company also said “tens of thousands of Instagram users” will be notified of the exposure.”

So the bug caused the passwords of hundreds of millions of Facebook Lite users, but only tens of millions of regular Facebook users and tens of thousands of Instagram users, to get logged in plain text. Was that the result of a single bug or separate bugs for Facebook and Instagram? And are these even bugs created by an innocent coding mistake, or did someone go out of their way to write code that would log plain text passwords?
At this point we have no idea, because Facebook isn’t saying how the bug came to be. Nor is the company saying how it arrived at the conclusion that no employees were abusing their access to this data:

    ...
    “This caught our atten­tion because our login sys­tems are designed to mask pass­words using tech­niques that make them unread­able,” said Canahuati. “We have found no evi­dence to date that any­one inter­nal­ly abused or improp­er­ly accessed them,” but did not say how the com­pa­ny made that con­clu­sion.

    ...

    Face­book also didn’t say how the bug came to be. Stor­ing pass­words in read­able plain­text is an inse­cure way of stor­ing pass­words. Com­pa­nies, like Face­book, hash and salt pass­words — two ways of fur­ther scram­bling pass­words — to store pass­words secure­ly. That allows com­pa­nies to ver­i­fy a user’s pass­word with­out know­ing what it is.”
    ...
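As an aside, the hash-and-salt scheme the article describes is easy to illustrate. Here is a minimal sketch in Python using the standard library’s hashlib.pbkdf2_hmac; the iteration count and function names are illustrative choices, not Facebook’s actual implementation:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these get stored, never the password itself."""
    salt = os.urandom(16)  # unique salt per user, so identical passwords hash differently
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(attempt: str, salt: bytes, digest: bytes) -> bool:
    """Re-hash the login attempt with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True: the login is verified
print(verify_password("hunter3", salt, digest))  # False: wrong password rejected
```

The point is that the server can confirm a login without ever keeping the password in readable form, which is exactly the property the plain text logs defeated.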

And yet we learn from Krebs that this bug has existed since 2012 and that some 2,000 engineers and developers have accessed those plain text logs. We also learn from Krebs that Facebook learned about this bug months ago and didn’t say anything:

    ...
    The dis­cov­ery was made in Jan­u­ary, said Facebook’s Pedro Canahuati, as part of a rou­tine secu­ri­ty review. None of the pass­words were vis­i­ble to any­one out­side Face­book, he said. Face­book admit­ted the secu­ri­ty lapse months lat­er, after Krebs said logs were acces­si­ble to some 2,000 engi­neers and devel­op­ers.

    Krebs said the bug dat­ed back to 2012.

    ...

    It’s not known why Face­book took months to con­firm the inci­dent, or if the com­pa­ny informed state or inter­na­tion­al reg­u­la­tors per U.S. breach noti­fi­ca­tion and Euro­pean data pro­tec­tion laws. We asked Face­book but a spokesper­son did not imme­di­ate­ly com­ment beyond the blog post.
    ...

So that’s pretty bad. But it gets worse. Because if you read the initial Krebs report, it sounds like an anonymous Facebook executive is the source for this story. In other words, Facebook probably had no intention of telling the public about this. In addition, while Facebook is acknowledging that 2,000 employees actually accessed the log files, according to the Krebs report there were some 20,000 employees who could have accessed them. So we have to hope Facebook isn’t low-balling that 2,000 figure. Beyond that, Krebs reports that the 2,000 employees who did access those log files made around nine million internal queries for data elements that contained plain text user passwords. And despite all that, Facebook is assuring us that no password changes are necessary:

    Kreb­sOn­Se­cu­ri­ty

    Face­book Stored Hun­dreds of Mil­lions of User Pass­words in Plain Text for Years

    Bri­an Krebs

    Hun­dreds of mil­lions of Face­book users had their account pass­words stored in plain text and search­able by thou­sands of Face­book employ­ees — in some cas­es going back to 2012, Kreb­sOn­Se­cu­ri­ty has learned. Face­book says an ongo­ing inves­ti­ga­tion has so far found no indi­ca­tion that employ­ees have abused access to this data.

    Mar 21 2019

    Face­book is prob­ing a series of secu­ri­ty fail­ures in which employ­ees built appli­ca­tions that logged unen­crypt­ed pass­word data for Face­book users and stored it in plain text on inter­nal com­pa­ny servers. That’s accord­ing to a senior Face­book employ­ee who is famil­iar with the inves­ti­ga­tion and who spoke on con­di­tion of anonymi­ty because they were not autho­rized to speak to the press.

    The Face­book source said the inves­ti­ga­tion so far indi­cates between 200 mil­lion and 600 mil­lion Face­book users may have had their account pass­words stored in plain text and search­able by more than 20,000 Face­book employ­ees. The source said Face­book is still try­ing to deter­mine how many pass­words were exposed and for how long, but so far the inquiry has uncov­ered archives with plain text user pass­words in them dat­ing back to 2012.

    My Face­book insid­er said access logs showed some 2,000 engi­neers or devel­op­ers made approx­i­mate­ly nine mil­lion inter­nal queries for data ele­ments that con­tained plain text user pass­words.

    “The longer we go into this analy­sis the more com­fort­able the legal peo­ple [at Face­book] are going with the low­er bounds” of affect­ed users, the source said. “Right now they’re work­ing on an effort to reduce that num­ber even more by only count­ing things we have cur­rent­ly in our data ware­house.”

    In an inter­view with Kreb­sOn­Se­cu­ri­ty, Face­book soft­ware engi­neer Scott Ren­fro said the com­pa­ny wasn’t ready to talk about spe­cif­ic num­bers — such as the num­ber of Face­book employ­ees who could have accessed the data.

    Ren­fro said the com­pa­ny planned to alert affect­ed Face­book users, but that no pass­word resets would be required.

    “We’ve not found any cas­es so far in our inves­ti­ga­tions where some­one was look­ing inten­tion­al­ly for pass­words, nor have we found signs of mis­use of this data,” Ren­fro said. “In this sit­u­a­tion what we’ve found is these pass­words were inad­ver­tent­ly logged but that there was no actu­al risk that’s come from this. We want to make sure we’re reserv­ing those steps and only force a pass­word change in cas­es where there’s def­i­nite­ly been signs of abuse.”

    A writ­ten state­ment from Face­book pro­vid­ed to Kreb­sOn­Se­cu­ri­ty says the com­pa­ny expects to noti­fy “hun­dreds of mil­lions of Face­book Lite users, tens of mil­lions of oth­er Face­book users, and tens of thou­sands of Insta­gram users.” Face­book Lite is a ver­sion of Face­book designed for low speed con­nec­tions and low-spec phones.

    ...

    Ren­fro said the issue first came to light in Jan­u­ary 2019 when secu­ri­ty engi­neers review­ing some new code noticed pass­words were being inad­ver­tent­ly logged in plain text.

    “This prompt­ed the team to set up a small task force to make sure we did a broad-based review of any­where this might be hap­pen­ing,” Ren­fro said. “We have a bunch of con­trols in place to try to mit­i­gate these prob­lems, and we’re in the process of inves­ti­gat­ing long-term infra­struc­ture changes to pre­vent this going for­ward. We’re now review­ing any logs we have to see if there has been abuse or oth­er access to that data.”

    ...

    ————

    “Face­book Stored Hun­dreds of Mil­lions of User Pass­words in Plain Text for Years” by Bri­an Krebs; Kreb­sOn­Se­cu­ri­ty; 03/21/2019

“Facebook is probing a series of security failures in which employees built applications that logged unencrypted password data for Facebook users and stored it in plain text on internal company servers. That’s according to a senior Facebook employee who is familiar with the investigation and who spoke on condition of anonymity because they were not authorized to speak to the press.”

An anonymous senior Facebook employee leaking to Krebs appears to be the only reason this story went public.

And according to this anonymous employee, those logs were searchable by more than 20,000 Facebook employees. The 2,000 engineers and developers who definitely did access the files made some nine million queries against them:

    ...
    The Face­book source said the inves­ti­ga­tion so far indi­cates between 200 mil­lion and 600 mil­lion Face­book users may have had their account pass­words stored in plain text and search­able by more than 20,000 Face­book employ­ees. The source said Face­book is still try­ing to deter­mine how many pass­words were exposed and for how long, but so far the inquiry has uncov­ered archives with plain text user pass­words in them dat­ing back to 2012.

    My Face­book insid­er said access logs showed some 2,000 engi­neers or devel­op­ers made approx­i­mate­ly nine mil­lion inter­nal queries for data ele­ments that con­tained plain text user pass­words.

    “The longer we go into this analy­sis the more com­fort­able the legal peo­ple [at Face­book] are going with the low­er bounds” of affect­ed users, the source said. “Right now they’re work­ing on an effort to reduce that num­ber even more by only count­ing things we have cur­rent­ly in our data ware­house.”
    ...

    And yet Face­book is telling us that no pass­word resets are required because no abus­es have been found. Isn’t that reas­sur­ing:

    ...
    In an inter­view with Kreb­sOn­Se­cu­ri­ty, Face­book soft­ware engi­neer Scott Ren­fro said the com­pa­ny wasn’t ready to talk about spe­cif­ic num­bers — such as the num­ber of Face­book employ­ees who could have accessed the data.

    Ren­fro said the com­pa­ny planned to alert affect­ed Face­book users, but that no pass­word resets would be required.

    “We’ve not found any cas­es so far in our inves­ti­ga­tions where some­one was look­ing inten­tion­al­ly for pass­words, nor have we found signs of mis­use of this data,” Ren­fro said. “In this sit­u­a­tion what we’ve found is these pass­words were inad­ver­tent­ly logged but that there was no actu­al risk that’s come from this. We want to make sure we’re reserv­ing those steps and only force a pass­word change in cas­es where there’s def­i­nite­ly been signs of abuse.”
    ...

    So it sure looks like we have anoth­er case of a Face­book pri­va­cy scan­dal that Face­book had no inten­tion of telling any­one about.

The whole episode also raises another interesting question about Facebook and Google and all the other social media giants that have become treasure troves of personal information: just how many spy agencies out there are trying to get their spies embedded at Facebook (or Google, or Twitter, etc.) precisely to exploit these kinds of internal security lapses? Because, again, keep in mind that if people use the same password for Facebook that they use for other websites, their accounts at those other websites are also potentially at risk. So people could have effectively had their passwords for Facebook and GMail and who knows what else compromised by this. Hundreds of millions of people. That’s part of why it’s so irresponsible to tell people no password resets are necessary. The appropriate response would be to tell people that not only should they reset their Facebook password, they also need to reset the passwords for any other sites that use the same password (preferably to something other than the reset Facebook password). Or, better yet, #DeleteFacebook.
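As a side note on how this class of bug is typically mitigated: the standard defense is to scrub credential-looking fields from a line before it ever reaches a log file. Here is a minimal sketch in Python; the field names (password, passwd, pwd) are hypothetical examples for illustration, not Facebook’s actual parameter names:

```python
import re

# Match a credential-looking field ("password", "passwd", or "pwd", optionally
# quoted) followed by ":" or "=", capturing the value that should be hidden.
SENSITIVE = re.compile(
    r'("?(?:password|passwd|pwd)"?\s*[:=]\s*)("[^"]*"|\S+)',
    re.IGNORECASE,
)

def scrub(line: str) -> str:
    """Replace the value of any credential-looking field with a placeholder."""
    return SENSITIVE.sub(r"\g<1>[REDACTED]", line)

print(scrub("login attempt user=alice password=hunter2"))
# login attempt user=alice password=[REDACTED]
```

A filter like this sitting in front of the logging pipeline is how the “inadvertently logged” passwords Renfro describes are normally kept out of internal log archives in the first place.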

In possibly related news, two top Facebook executives, including chief product officer Chris Cox, announced just a few days ago that they’re leaving the company. It would be rather interesting if Cox was the anonymous senior employee who was the Krebs source for this story. Although we should probably hope that’s not the case, because that would mean there’s one less senior figure working at Facebook who is willing to go to the press about these kinds of things, and there’s clearly a shortage of such people at this point.

    Posted by Pterrafractyl | March 21, 2019, 2:43 pm
11. Here’s a pair of articles to keep in mind regarding the role social media will play in the 2020 US election cycle, and the question of whether we’re going to see the platforms reprise their roles as key propagators of right-wing disinformation. President Trump did an interview with CNBC this morning where the issue of the EU’s lawsuits against US tech giants like Google and Facebook came up. Trump gave the kind of answer that could ensure those companies go as easy as possible on Trump and the Republicans when it comes to platform violations: he replied that it was improper for the EU to be suing these companies because the US should be doing it instead, and that he agrees with the EU that the monopoly concerns about these companies are valid:

    The Verge

    Don­ald Trump on tech antitrust: ‘There’s some­thing going on’

    ‘We should be doing this. They’re our com­pa­nies.’

    By Mak­e­na Kel­ly
    Jun 10, 2019, 11:51am EDT

    In an inter­view with CNBC on Mon­day, Pres­i­dent Don­ald Trump crit­i­cized the antitrust fines imposed by the Euro­pean Union on Unit­ed States tech com­pa­nies, sug­gest­ing that these tech giants could, in fact, be monop­o­lies, but the US should be the polit­i­cal body rak­ing in the set­tle­ment fines.

“Every week you see them going after Facebook and Apple and all of these companies … The European Union is suing them all of the time,” Trump said. “Well, we should be doing this. They’re our companies. So, [the EU is] actually attacking our companies, but we should be doing what they’re doing. They think there’s a monopoly, but I’m not sure that they think that. They just think this is easy money.”

    Asked if he thinks tech com­pa­nies like Google should be bro­ken up, Trump says, “well I can tell you they dis­crim­i­nate against me. Peo­ple talk about col­lu­sion — the real col­lu­sion is between the Democ­rats & these com­pa­nies, because they were so against me dur­ing my elec­tion run.” pic.twitter.com/xVz6yTqoeI— Aaron Rupar (@atrupar) June 10, 2019

    It’s unclear whether Trump actu­al­ly wants to impose sim­i­lar fines or was only cri­tiquing the EU’s moves. “We have a great attor­ney gen­er­al,” he said lat­er in the inter­view. “We’re going to look at it dif­fer­ent­ly.”

Over the past few years, the EU has fined some of the US’s largest tech companies for behaving anti-competitively. Just last summer, Google was fined a record $5 billion for violating antitrust law with the company’s Android operating system product. Facebook has also been subject to a handful of privacy investigations in both the US and abroad following 2018’s Cambridge Analytica scandal.

    Respond­ing to the ques­tion of whether tech giants like Google and Face­book were monop­o­lies, Trump said, “I think it’s a bad sit­u­a­tion, but obvi­ous­ly there’s some­thing going on in terms of monop­oly.”

    ...

    ———-

    “Don­ald Trump on tech antitrust: ‘There’s some­thing going on’” by Mak­e­na Kel­ly; The Verge; 06/10/2019

    ““Every week you see them going after Face­book and Apple and all of these com­pa­nies … The Euro­pean Union is suing them all of the time,” Trump said. “Well, we should be doing this. They’re our com­pa­nies. So, [the EU is] actu­al­ly attack­ing our com­pa­nies, but we should be doing what they’re doing. They think there’s a monop­oly, but I’m not sure that they think that. They just think this is easy mon­ey.””

    “Well, we should be doing this. They’re our com­pa­nies.” That cer­tain­ly had to get Sil­i­con Val­ley’s atten­tion. Espe­cial­ly this part:

    ...
    Respond­ing to the ques­tion of whether tech giants like Google and Face­book were monop­o­lies, Trump said, “I think it’s a bad sit­u­a­tion, but obvi­ous­ly there’s some­thing going on in terms of monop­oly.”
    ...

So Trump is now openly talking about breaking up the US tech giants over monopoly concerns. GREAT! A monopoly inquiry is long overdue. Of course, it’s not actually going to be very great if it turns out Trump is just making these threats in order to extract more favorable treatment from these companies in the upcoming 2020 election cycle. And as the following article makes clear, that’s obviously what Trump was doing during this CNBC interview, because he went on to complain that the tech giants actually discriminated against him in the 2016 election and colluded with the Democrats. Of course, as Trump’s digital campaign manager Brad Parscale has described, the tech giants were absolutely instrumental to the success of the Trump campaign, and companies like Facebook actually embedded employees in the Trump campaign to help the Trump team maximize their use of the platform. And Google-owned YouTube has basically become a dream recruitment tool for the ‘Alt Right’ Trumpian base. So the idea that the tech giants are somehow discriminating against Trump is laughable. It’s true that there have been tepid moves by these platforms to project the image that they won’t tolerate far right extremism, with YouTube pledging to ban white supremacist videos last week. The extent to which this was just a public relations stunt by YouTube remains to be seen, but removing overt neo-Nazi content isn’t going to address most of the right-wing disinformation on the platforms anyway, since so much of that content cloaks the extremism in dog whistles. But as we should expect, the right-wing meme that the tech giants are run by a bunch of liberals and are out to silence conservative voices is getting pushed heavily right now, and President Trump just promoted that meme again in the CNBC interview as part of his threat of antitrust inquiries:

    Talk­ing Points Memo
    News

    Trump Fur­thers Far-Right Con­spir­a­cy That Tech Com­pa­nies Are Out To Get Him

    By Nicole Lafond
    June 10, 2019 10:06 am

    Pres­i­dent Trump on Mon­day morn­ing con­tin­ued his call-out cam­paign against tech com­pa­nies, fur­ther­ing a far-right con­spir­a­cy the­o­ry that Sil­i­con Val­ley is out to get con­ser­v­a­tives like him­self.

    Dur­ing an inter­view with CNBC on Mon­day morn­ing, Trump com­plained about the Euro­pean Union’s antitrust law­suits against some of the largest U.S. tech com­pa­nies like Face­book, before sug­gest­ing the U.S. should be doing the same thing. He then claimed that tech com­pa­nies “dis­crim­i­nate” against him.

    “Well I can tell you they dis­crim­i­nate against me,” Trump said. “You know, peo­ple talk about col­lu­sion. The real col­lu­sion is between the Democ­rats and these com­pa­nies. ‘Cause they were so against me dur­ing my elec­tion run. Every­body said, ‘If you don’t have them, you can’t win.’ Well, I won. And I’ll win again.”

    Over the week­end, Trump called on Twit­ter to bring back the “banned Con­ser­v­a­tive Voic­es,” like­ly ref­er­enc­ing Twitter’s recent move to kick some con­spir­a­cy the­o­rists, like Alex Jones and oth­ers espous­ing racist views, off the plat­form.

    Twit­ter should let the banned Con­ser­v­a­tive Voic­es back onto their plat­form, with­out restric­tion. It’s called Free­dom of Speech, remem­ber. You are mak­ing a Giant Mis­take!— Don­ald J. Trump (@realDonaldTrump) June 9, 2019

    The con­spir­a­cy that social media and tech com­pa­nies are out to “shad­ow ban” con­ser­v­a­tive voic­es has gained more promi­nence dur­ing the Trump pres­i­den­cy, as Trump him­self and his son Don­ald Trump Jr. have made a strate­gic effort to raise aware­ness about the bogus issue.

    One of Trump’s most vehe­ment sup­port­ers in the House, Rep. Devin Nunes (R‑CA), ulti­mate­ly filed a law­suit against Twit­ter to try to legit­imize his “shad­ow ban­ning” the­o­ry.

    ...

    ———-

    “Trump Fur­thers Far-Right Con­spir­a­cy That Tech Com­pa­nies Are Out To Get Him” by Nicole Lafond; Talk­ing Points Memo; 06/10/2019

““Well I can tell you they discriminate against me,” Trump said. “You know, people talk about collusion. The real collusion is between the Democrats and these companies. ‘Cause they were so against me during my election run. Everybody said, ‘If you don’t have them, you can’t win.’ Well, I won. And I’ll win again.””

Yes, in the right-wing fantasy world, the tech giants were actually all against Trump in 2016, not his secret weapon. That’s become one of the fictional ‘facts’ promoted as part of this right-wing meme about conservatives getting ‘shadow banned’ by tech companies. And whenever Alex Jones gets banned from a platform, it’s now seen as part of this anti-conservative conspiracy:

    ...
    Over the week­end, Trump called on Twit­ter to bring back the “banned Con­ser­v­a­tive Voic­es,” like­ly ref­er­enc­ing Twitter’s recent move to kick some con­spir­a­cy the­o­rists, like Alex Jones and oth­ers espous­ing racist views, off the plat­form.

    Twit­ter should let the banned Con­ser­v­a­tive Voic­es back onto their plat­form, with­out restric­tion. It’s called Free­dom of Speech, remem­ber. You are mak­ing a Giant Mis­take!— Don­ald J. Trump (@realDonaldTrump) June 9, 2019

    The con­spir­a­cy that social media and tech com­pa­nies are out to “shad­ow ban” con­ser­v­a­tive voic­es has gained more promi­nence dur­ing the Trump pres­i­den­cy, as Trump him­self and his son Don­ald Trump Jr. have made a strate­gic effort to raise aware­ness about the bogus issue.

    One of Trump’s most vehe­ment sup­port­ers in the House, Rep. Devin Nunes (R‑CA), ulti­mate­ly filed a law­suit against Twit­ter to try to legit­imize his “shad­ow ban­ning” the­o­ry.
    ...

Recall how Fox News was promoting this meme recently when Laura Ingraham’s prime time Fox News show tried to present figures like Alex Jones, Milo Yiannopoulos, Laura Loomer, and neo-Nazi Paul Nehlen as having been banned from Facebook because of anti-conservative bias (and not because they kept breaking the rules of the platform). This meme is now a central component of right-wing grievance politics and is basically just an update to the long-standing ‘liberal media’ meme that helped fuel the rise of right-wing talk radio and Fox News. It’s exactly the kind of ‘working the ref’ meme that is designed to bully the media into giving right-wingers easier treatment. That’s what makes monopoly threats by Trump so disturbing. He’s now basically telling these tech giants, ‘go easy on right-wingers or I’ll break you up,’ heading into a 2020 election cycle where all indications are that disinformation is going to play a bigger role than ever. So the president basically warned all of these tech companies that any new tools they’ve created for dealing with disinformation spread on their platforms during the 2020 election cycle had better not work too well, at least not on right-wing disinformation.

    Keep in mind that there’s been lit­tle indi­ca­tion that these plat­forms were seri­ous­ly going to do any­thing about dis­in­for­ma­tion any­way, so it’s unlike­ly that Trump’s threat will make this bad sit­u­a­tion worse.

    So that’s a pre­view for the role dis­in­for­ma­tion is going to play in the 2020 elec­tions: Trump is pre­emp­tive­ly run­ning a dis­in­for­ma­tion cam­paign in order to pres­sure the tech giants into not crack­ing down on the planned right-wing dis­in­for­ma­tion cam­paigns the tech giants weren’t seri­ous­ly plan­ning on crack­ing down on in the first place.

    Posted by Pterrafractyl | June 10, 2019, 2:35 pm
12. So remember the absurdist ‘conservative bias audit’ that Facebook pledged to conduct last year? This was the audit conducted by retired GOP Senator Jon Kyl to address the frequent claims of anti-conservative bias perpetually leveled against Facebook by Republican politicians and right-wing pundits. It’s a core element of the right-wing’s ‘working the refs’ strategy for getting a more compliant media. In this case, the audit involved interviewing 133 conservative lawmakers and interest groups about whether they think Facebook is biased against conservatives.

Well, Facebook is finally releasing the results of the audit. And while the audit doesn’t find any systemic bias, it did acknowledge some conservative frustrations, like a longer approval process for submitting ads to Facebook and the fear that the slowed ad approval process might disadvantage right-wing campaigns following the wild successes the right wing had in 2016 using social media political ads. Amusingly, on the same day Facebook released this audit it also announced the return of human editors for curating Facebook’s news feeds. Recall how it was 2016 claims by a Facebook employee that the news feed editors were biased against conservatives (when they were really just biased against disinformation coming disproportionately from right-wing sources) that led Facebook to switch to an algorithm without human oversight for generating news feeds, which, in turn, turned the news feeds into right-wing disinformation outlets during the 2016 campaign that were vital to the Trump campaign’s success. So the human news feed editors are apparently back, which will no doubt anger the right wing. Although recall how Facebook hired Tucker Bounds, John McCain’s former adviser and spokesperson, to be Facebook’s communications director focused on the News Feed back in January of 2017. In other words, yeah, there are going to be human editors overseeing the news feeds again, but it’s probably going to be a former Republican operative in charge of those human editors. It’s a reminder that Facebook is going to find a way to make sure its platform is a potent right-wing propaganda tool one way or another. The claims of anti-conservative discrimination are just propaganda designed to allow Facebook to be a more effective right-wing propaganda outlet:

    The Verge

    The con­ser­v­a­tive audit of bias on Face­book is long on feel­ings and short on facts

    And con­ser­v­a­tives are beat­ing Face­book up over it any­way

    By Casey New­ton
    Aug 21, 2019, 6:02pm EDT

There are many criticisms of Facebook’s size, power, and business model, but two stand out for the intensity with which they are usually discussed. One is that Facebook is a dystopian panopticon that monitors our every move and uses that information to predict and manipulate our behavior. The other is that Facebook has become such a pillar of modern life that every product decision it makes could reshape the body politic forever.

    Today, in an impres­sive flur­ry of news-mak­ing, Face­book took steps to address both con­cerns.

    First, the com­pa­ny said it was final­ly releas­ing its long-delayed “Clear His­to­ry” tool in three coun­tries. (The Unit­ed States is not one of them.) I wrote about it at The Verge:

    It was near­ly a year and a half ago that Face­book CEO Mark Zucker­berg, stand­ing onstage at the company’s annu­al devel­op­er con­fer­ence, announced that the com­pa­ny would begin let­ting users sev­er the con­nec­tion between their web brows­ing his­to­ry and their Face­book accounts. After months of delays, Facebook’s Clear His­to­ry is now rolling out in Ire­land, South Korea, and Spain, with oth­er coun­tries to fol­low “in com­ing months,” the com­pa­ny said. The new tool, which Face­book con­ceived in the wake of the Cam­bridge Ana­lyt­i­ca scan­dal, is designed to give users more con­trol over their data pri­va­cy at the expense of adver­tis­ers’ tar­get­ing capa­bil­i­ties.

    When it arrives in your coun­try, the Clear His­to­ry tool will be part of a new sec­tion of the ser­vice called “Off-Face­book activ­i­ty.” When you open it, you’ll see the apps and web­sites that are track­ing your activ­i­ty and send­ing reports back to Face­book for ad tar­get­ing pur­pos­es. Tap­ping the “Clear His­to­ry” but­ton will dis­so­ci­ate that infor­ma­tion from your Face­book account.

    You can also choose to block com­pa­nies from report­ing their track­ing data about you back to Face­book in the future. You’ll have the choice of dis­con­nect­ing all off-Face­book brows­ing data, or data for spe­cif­ic apps and web­sites. Face­book says the prod­uct is rolling out slow­ly “to help ensure it’s work­ing reli­ably for every­one.”

    Some writ­ers, such as Tony Romm here, point­ed out that Face­book is not actu­al­ly delet­ing your data — which would seem to blunt the impact of a but­ton called “Clear His­to­ry.” In fact, giv­en that the data link you’re shut­ting off is pri­mar­i­ly rel­e­vant to ads you might see lat­er, it feels more like a “Mud­dle Future” but­ton. Face­book, for its part, has cloaked the entire enter­prise into a sec­tion of the app opaque­ly titled “Off-Face­book Activ­i­ty,” which could more or less mean any­thing.

    I find it hard to get too worked up about any of this, because regard­less of whether Face­book is able to take into account your web brows­ing habits, it’s still going to be send­ing you plen­ty of high­ly tar­get­ed ads based on your age, gen­der, and all the oth­er demo­graph­ic data that you forked over when you made your pro­file. Or you could sim­ply turn off ad tar­get­ing on Face­book alto­geth­er, which is more pow­er­ful in this regard than any Clear His­to­ry tool was ever going to be. (Here’s an account from a per­son who did this.)

    Sec­ond, Face­book released the results of its anti-con­ser­v­a­tive bias audit, in which the com­pa­ny asked for­mer Sen. Jon Kyl and the law firm Cov­ing­ton & Burl­ing to ask 133 con­ser­v­a­tive law­mak­ers and inter­est groups to tell it whether they think Face­book is biased against con­ser­v­a­tives.

    This project has fas­ci­nat­ed me since it was announced, since Face­book had clear­ly vol­un­teered to play a game it could only lose. As I’ve writ­ten here before, the def­i­n­i­tion of “bias” has expand­ed to include any time some­one has a bad expe­ri­ence online.

    On one hand, there’s no evi­dence of sys­tem­at­ic bias against con­ser­v­a­tives or any oth­er main­stream polit­i­cal group on Face­book or oth­er plat­forms. On the oth­er hand, there are end­less anec­dotes about the law­mak­er whose ad pur­chase was not approved, or who did not appear in search results, or what­ev­er. Stack enough anec­dotes on top of one anoth­er and you’ve got some­thing that looks a lot like data — cer­tain­ly enough to con­vene a bad-faith con­gres­sion­al hear­ing about plat­form bias, which Repub­li­cans have done repeat­ed­ly now.

    So here comes Kyl’s “audit,” which appears to have tak­en rough­ly the same shape as Pres­i­dent Trump’s call for sto­ries of Amer­i­cans who feel that they have been cen­sored by the big plat­forms. Kyl’s find­ings are short on facts and long on feel­ings. Here’s this, from an op-ed he pub­lished today in The Wall Street Jour­nal.

    As a result of Facebook’s new, more strin­gent ad poli­cies, inter­vie­wees said the ad-approval process has slowed sig­nif­i­cant­ly. Some fear that the new process may be designed to dis­ad­van­tage con­ser­v­a­tive ads in the wake of the Trump campaign’s suc­cess­ful use of social media in 2016.

    So, some anony­mous con­ser­v­a­tives believe that Face­book is involved in a con­spir­a­cy to pre­vent con­ser­v­a­tives from adver­tis­ing. That might come as a sur­prise to, say, Pres­i­dent Trump, who is out­spend­ing all Democ­rats on Face­book ads. But the Kyl report has no room for empir­i­cal thought. What’s impor­tant here is that 133 unnamed peo­ple have feel­ings, and that they spent the bet­ter part of two years talk­ing about them in inter­views that we can’t read. (Here’s a link to the pub­lished report, which clocks in at a very thin eight pages. And here’s a help­ful rebut­tal from Media Mat­ters, which uses data to illus­trate how par­ti­san con­ser­v­a­tive pages con­tin­ue to thrive on Face­book.)

    Despite the fact that we have no idea who Kyl talked to, or what they said beyond his mea­ger bul­let points, the report still had at least some effect on Face­book pol­i­cy­mak­ing. As Sara Fis­ch­er reports in Axios, Face­book ads can now show med­ical tubes con­nect­ed to the human body, which appar­ent­ly make for more vis­cer­al­ly com­pelling anti-abor­tion ads:

    The med­ical tube pol­i­cy makes it eas­i­er for pro-life ads focused on sur­vival sto­ries of infants born before full-term to be accept­ed by Facebook’s ad pol­i­cy. Face­book notes that the pol­i­cy could also ben­e­fit oth­er groups who wish to dis­play med­ical tubes in ads for can­cer research, human­i­tar­i­an relief and elder­ly care.

    And how are con­ser­v­a­tives using the infor­ma­tion from today’s audit? If you guessed “as a cud­gel to con­tin­ue beat­ing Face­book with,” you win today’s grand prize. Here’s Brent Bozell: “The Face­book Kyl cov­er-up is aston­ish­ing. 133 groups pre­sent­ed Kyl with evi­dence of FB’s agen­da against con­ser­v­a­tives and he dis­hon­est­ly did FB’s bid­ding instead.”

    And here’s Sen. Josh Haw­ley (R‑MO):

    “Face­book should con­duct an actu­al audit by giv­ing a trust­ed third par­ty access to its algo­rithm, its key doc­u­ments, and its con­tent mod­er­a­tion pro­to­cols,” Haw­ley said in a state­ment. “Then Face­book should release the results to the pub­lic.”

    I asked Hawley’s peo­ple if the sen­a­tor was aware that Facebook’s con­tent mod­er­a­tion pro­to­cols have been pub­lic for years, but I nev­er heard back.

    Any­way, Face­book wrapped up the day by announc­ing — in a fan­tas­ti­cal­ly bizarre feat of tim­ing — that it would begin to hire human beings to curate your news sto­ries, just as Apple does for Apple News. (Apply for the job here! Let me know if you get it!) This is the right thing to do — our leaky infor­ma­tion sphere needs expe­ri­enced edi­tors with news judg­ment more than ever — but also one guar­an­teed to court con­tro­ver­sy. One person’s cura­tion is, after all, anoth­er person’s “bias.”

    The return of human edi­tors to Face­book, on the very day that it pub­lish­es its inves­ti­ga­tion into alleged bias against con­ser­v­a­tives, is a real time-is-a-flat-cir­cle moment. After all, it was trumped-up out­rage over sup­posed bias in its last group of human edi­tors that helped to set us down this benight­ed path to begin with. I want to end on some­thing I wrote last Feb­ru­ary on this sub­ject:

    I’m struck how, in ret­ro­spect, the sto­ry that helped to trig­ger our cur­rent anx­i­eties had the prob­lem exact­ly wrong. The sto­ry offered a dire warn­ing that Face­book exert­ed too much edi­to­r­i­al con­trol, in the one nar­row sec­tion of the site where it actu­al­ly employed human edi­tors, when in fact the prob­lem under­ly­ing our glob­al mis­in­for­ma­tion cri­sis is that it exert­ed too lit­tle. Gizmodo’s sto­ry fur­ther declared that Face­book had become hos­tile to con­ser­v­a­tive view­points when in fact con­ser­v­a­tive view­points — and con­ser­v­a­tive hoax­es — were thriv­ing across the plat­form.

    Last month, NewsWhip pub­lished a list of the most-engaged pub­lish­ers on Face­book. The no. 1 com­pa­ny post­ed more than 49,000 times in Decem­ber alone, earn­ing 21 mil­lion likes, com­ments, and shares. That pub­lish­er was Fox News. And the idea that Face­book sup­press­es the shar­ing of con­ser­v­a­tive news now seems very quaint indeed.

    ...

    ———-

    “The con­ser­v­a­tive audit of bias on Face­book is long on feel­ings and short on facts” by Casey New­ton; The Verge; 08/21/2019

    “On one hand, there’s no evi­dence of sys­tem­at­ic bias against con­ser­v­a­tives or any oth­er main­stream polit­i­cal group on Face­book or oth­er plat­forms. On the oth­er hand, there are end­less anec­dotes about the law­mak­er whose ad pur­chase was not approved, or who did not appear in search results, or what­ev­er. Stack enough anec­dotes on top of one anoth­er and you’ve got some­thing that looks a lot like data — cer­tain­ly enough to con­vene a bad-faith con­gres­sion­al hear­ing about plat­form bias, which Repub­li­cans have done repeat­ed­ly now.

    Sure, there’s no actual evidence of an anti-conservative bias. But there are 133 anonymous right-wing operatives who feel differently. That’s the basis for this audit. And despite the lengths Jon Kyl’s team went to in describing the various feelings of bias felt by those 133 anonymous right-wing operatives, he’s still been accused by the right-wing media of waging a cover-up on Facebook’s behalf. Because you can’t stop ‘working the refs’:

    ...
    So here comes Kyl’s “audit,” which appears to have tak­en rough­ly the same shape as Pres­i­dent Trump’s call for sto­ries of Amer­i­cans who feel that they have been cen­sored by the big plat­forms. Kyl’s find­ings are short on facts and long on feel­ings. Here’s this, from an op-ed he pub­lished today in The Wall Street Jour­nal.

    As a result of Facebook’s new, more strin­gent ad poli­cies, inter­vie­wees said the ad-approval process has slowed sig­nif­i­cant­ly. Some fear that the new process may be designed to dis­ad­van­tage con­ser­v­a­tive ads in the wake of the Trump campaign’s suc­cess­ful use of social media in 2016.

    So, some anony­mous con­ser­v­a­tives believe that Face­book is involved in a con­spir­a­cy to pre­vent con­ser­v­a­tives from adver­tis­ing. That might come as a sur­prise to, say, Pres­i­dent Trump, who is out­spend­ing all Democ­rats on Face­book ads. But the Kyl report has no room for empir­i­cal thought. What’s impor­tant here is that 133 unnamed peo­ple have feel­ings, and that they spent the bet­ter part of two years talk­ing about them in inter­views that we can’t read. (Here’s a link to the pub­lished report, which clocks in at a very thin eight pages. And here’s a help­ful rebut­tal from Media Mat­ters, which uses data to illus­trate how par­ti­san con­ser­v­a­tive pages con­tin­ue to thrive on Face­book.)

    ...

    And how are con­ser­v­a­tives using the infor­ma­tion from today’s audit? If you guessed “as a cud­gel to con­tin­ue beat­ing Face­book with,” you win today’s grand prize. Here’s Brent Bozell: “The Face­book Kyl cov­er-up is aston­ish­ing. 133 groups pre­sent­ed Kyl with evi­dence of FB’s agen­da against con­ser­v­a­tives and he dis­hon­est­ly did FB’s bid­ding instead.”
    ...

    And on the same day as the release of this report, Facebook announced the return of human editors for the news feed:

    ...
    Any­way, Face­book wrapped up the day by announc­ing — in a fan­tas­ti­cal­ly bizarre feat of tim­ing — that it would begin to hire human beings to curate your news sto­ries, just as Apple does for Apple News. (Apply for the job here! Let me know if you get it!) This is the right thing to do — our leaky infor­ma­tion sphere needs expe­ri­enced edi­tors with news judg­ment more than ever — but also one guar­an­teed to court con­tro­ver­sy. One person’s cura­tion is, after all, anoth­er person’s “bias.”

    The return of human edi­tors to Face­book, on the very day that it pub­lish­es its inves­ti­ga­tion into alleged bias against con­ser­v­a­tives, is a real time-is-a-flat-cir­cle moment. After all, it was trumped-up out­rage over sup­posed bias in its last group of human edi­tors that helped to set us down this benight­ed path to begin with. I want to end on some­thing I wrote last Feb­ru­ary on this sub­ject:

    I’m struck how, in ret­ro­spect, the sto­ry that helped to trig­ger our cur­rent anx­i­eties had the prob­lem exact­ly wrong. The sto­ry offered a dire warn­ing that Face­book exert­ed too much edi­to­r­i­al con­trol, in the one nar­row sec­tion of the site where it actu­al­ly employed human edi­tors, when in fact the prob­lem under­ly­ing our glob­al mis­in­for­ma­tion cri­sis is that it exert­ed too lit­tle. Gizmodo’s sto­ry fur­ther declared that Face­book had become hos­tile to con­ser­v­a­tive view­points when in fact con­ser­v­a­tive view­points — and con­ser­v­a­tive hoax­es — were thriv­ing across the plat­form.

    Last month, NewsWhip pub­lished a list of the most-engaged pub­lish­ers on Face­book. The no. 1 com­pa­ny post­ed more than 49,000 times in Decem­ber alone, earn­ing 21 mil­lion likes, com­ments, and shares. That pub­lish­er was Fox News. And the idea that Face­book sup­press­es the shar­ing of con­ser­v­a­tive news now seems very quaint indeed.

    ...

    So it looks like we’re probably in store for a new round of allegations of anti-conservative bias at Facebook just in time for 2020, which will presumably include a new round of allegations of anti-conservative bias held by the human news feed editors. With that in mind, it’s worth noting that Facebook has expanded its approach to misinformation-detection since 2016, when it last had human curation of the news feed. For example, Facebook now works with Poynter’s International Fact-Checking Network (IFCN) to find unbiased organizations to which it can outsource the responsibility of fact-checking. In December of 2016, Facebook announced that it was partnering with ABC News, Snopes, PolitiFact, FactCheck.org, and the AP (all approved by the IFCN) to help it identify misinformation on the platform. All are non-partisan organizations, albeit the kinds of organizations the right-wing media routinely labels as ‘left-wing mainstream media’ outlets despite the lack of any meaningful left-wing bias. Then, in December of 2017, Facebook announced it was adding the right-wing Weekly Standard to its list of fact-checkers, which soon resulted in left-wing articles getting flagged as disinformation for spurious reasons. Note that no left-wing site was chosen at this point. But the Weekly Standard went out of business, so in April of this year Facebook announced it was adding Check Your Fact to its list of fact-checking organizations. Who is behind Check Your Fact? The Daily Caller! This is almost like hiring Breitbart to do your fact-checking.

    According to the following article, it was Joel Kaplan, the former White House aide to George W. Bush who now serves as Facebook’s global policy chief and is the company’s “protector against allegations of political bias,” who pushed to get Check Your Fact added to Facebook’s list of fact-checkers. This was a rather contentious decision within Facebook’s boardroom, but Mark Zuckerberg apparently backed Kaplan’s push.

    And that tells us how this new round of human-curated news feeds is going to go: the humans doing the curating are probably going to have their judgment curated by right-wing misinformation outlets like the Daily Caller.

    Vox

    Facebook’s con­tro­ver­sial fact-check­ing part­ner­ship with a Dai­ly Caller-fund­ed web­site, explained

    In try­ing to stop the spread of fake news, the social media behe­moth has cre­at­ed new prob­lems.

    By Aaron Rupar
    Updat­ed May 6, 2019, 9:40am EDT

    Facebook knows that the spread of fake news on the platform during the 2016 presidential campaign was almost its undoing, so it has chosen to partner with third-party media organizations to fact-check publishers on its platform in order to stave off more criticism. That makes sense. But some of its choices in partners — including a new fact-checker funded by a right-leaning news outlet founded by Tucker Carlson — have only invited more.

    Last week, Facebook announced that it’s partnering with Check Your Fact — a subsidiary of the right-wing Daily Caller, a site known for its ties to white nationalists — as one of six third-party organizations it currently works with to fact-check content for American users. The partnership has already come under intense criticism from climate journalists (among others) who are concerned that the Daily Caller’s editorial stance on issues like climate change, which is uncontroversial among scientists but isn’t treated as such on right-wing media, will spread even more misinformation on Facebook.

    In an interview, Facebook spokesperson Lauren Svensson defended the partnership. She noted that Check Your Fact, like all fact-checkers Facebook partners with, is certified by Poynter’s International Fact-Checking Network (IFCN). Asked about the right-wing proclivities of Check Your Fact’s parent company, Svensson referred to the IFCN’s certification processes and said that “we do believe in having a diverse set of fact-checking partners.” Check Your Fact, for its part, says it operates independently from the Daily Caller, and touts its record of accurate fact-checks.

    The reality is that Facebook has a fake news problem that could hurt its bottom line, but it also has a political problem. If it doesn’t give credence to popular but disreputable websites like the Daily Caller, it runs the risk of angering Republicans who use the platform. But in giving credence to sites of that sort, the platform runs the risk of perpetuating the same “fake news” problem third-party fact-checkers are meant to solve.

    Facebook’s fake news prob­lem, explained

    As Tim­o­thy B. Lee explained for Vox days after the 2016 elec­tion, “fake news” was a big prob­lem on Face­book dur­ing that year’s pres­i­den­tial cam­paign:

    Over the course of 2016, Face­book users learned that the pope endorsed Don­ald Trump (he didn’t), that a Demo­c­ra­t­ic oper­a­tive was mur­dered after agree­ing to tes­ti­fy against Hillary Clin­ton (it nev­er hap­pened), that Bill Clin­ton raped a 13-year-old girl (a total fab­ri­ca­tion), and many oth­er total­ly bogus “news” sto­ries. Sto­ries like this thrive on Face­book because Facebook’s algo­rithm pri­or­i­tizes “engage­ment” — and a reli­able way to get read­ers to engage is by mak­ing up out­ra­geous non­sense about politi­cians they don’t like.

    After a ton of pub­lic scruti­ny, includ­ing in the form of high-pro­file con­gres­sion­al hear­ings, Face­book after the elec­tion began part­ner­ing with news orga­ni­za­tions like the Asso­ci­at­ed Press, FactCheck.org, Lead Sto­ries, Poli­ti­Fact, and Sci­ence Feed­back to fact-check pub­lish­ers. That’s all well and good — those orga­ni­za­tions have rep­u­ta­tions for non­par­ti­san­ship and accu­ra­cy.

    But in attempting to stifle “fake news,” the platform sometimes caught up right-leaning news outlets, ideas, and politicians in the purge — and Republicans noticed. Just look to Alex Jones, who actively spread conspiracy theories due to his popularity on platforms like Facebook and YouTube. Conservatives began to complain they were unfairly targeted. Earlier this month, Sen. Ted Cruz (R‑TX) held hearings interrogating big tech precisely on the issue of bias against conservatives.

    To counter those (most­ly unfound­ed) alle­ga­tions that the plat­form is biased toward lib­er­als, Face­book is part­ner­ing with right-wing sites as well.

    This leads to situations where Facebook partners with right-leaning organizations to fact-check liberal sites. Some liberal sites have been targeted as “false,” thereby limiting distribution of the “false” article by as much as 80 percent — a big problem considering Facebook is still the most commonly used platform in the country for news, despite reductions in distribution that have hurt liberal and conservative news sites alike.

    The first con­ser­v­a­tive site Face­book part­nered with for fact-check­ing was the Week­ly Stan­dard, which ceased oper­a­tions last Decem­ber. That part­ner­ship became a source of con­tro­ver­sy three months before then, when con­ser­v­a­tive fact-check­ers flagged an arti­cle from the lib­er­al pub­li­ca­tion ThinkProgress as “false” on seman­tic grounds. (Full dis­clo­sure: I am a for­mer ThinkProgress employ­ee, as are sev­er­al oth­er cur­rent Vox staffers.) As Vox’s Zack Beauchamp explained at the time, while the article’s the­sis was arguably accu­rate, the head­line like­ly went too far. But the pun­ish­ment result­ing from the Week­ly Standard’s “false” des­ig­na­tion was worse than the crime:

    Last week, the lib­er­al pub­li­ca­tion ThinkProgress pub­lished a piece on Supreme Court nom­i­nee Brett Kavanaugh’s con­fir­ma­tion hear­ing with the head­line “Brett Kavanaugh said he would kill Roe v. Wade and almost no one noticed.” The fact-check­er for the Week­ly Stan­dard ruled it was false. Facebook’s pun­ish­ment mech­a­nism kicked in, and the ThinkProgress arti­cle was cut off from being seen by about 80 per­cent of its poten­tial Face­book audi­ence.

    On Tues­day, the author of the ThinkProgress piece — edi­tor Ian Mill­his­er — pub­licly defend­ed the the­sis of his piece and accused Face­book of “pan­der­ing to the right” by allow­ing a con­ser­v­a­tive mag­a­zine to block lib­er­al arti­cles. The stakes here are high: Face­book pro­vides about 10 to 15 per­cent of ThinkProgress’s traf­fic, which means that get­ting choked off from read­ers there is a non­triv­ial hit to its read­er­ship.

    Svens­son told Vox that there was no direct con­nec­tion between the Week­ly Stan­dard shut­ting down and Face­book part­ner­ing with anoth­er con­ser­v­a­tive site.

    Face­book report­ed­ly has been inter­est­ed in part­ner­ing with the Dai­ly Caller for some time. In Decem­ber, the Wall Street Jour­nal report­ed that Joel Kaplan, a for­mer White House aide to George W. Bush who now serves as Facebook’s glob­al pol­i­cy chief and is the company’s “pro­tec­tor against alle­ga­tions of polit­i­cal bias,” made a failed push to part­ner with the Dai­ly Caller last year:

    This sum­mer, Mr. Kaplan pushed to part­ner with right-wing news site The Dai­ly Caller’s fact-check­ing divi­sion after con­ser­v­a­tives accused Face­book of work­ing only with main­stream pub­lish­ers, peo­ple famil­iar with the dis­cus­sions said. Con­ser­v­a­tive crit­ics argued those pub­li­ca­tions had a built-in lib­er­al bias.

    Mr. Kaplan argued that The Dai­ly Caller was accred­it­ed by the Poyn­ter Insti­tute, a St. Peters­burg, Fla.-based jour­nal­ism non­prof­it that over­sees a net­work of fact-check­ers. Oth­er exec­u­tives, includ­ing some in the Wash­ing­ton, D.C. office, argued that the pub­li­ca­tion print­ed mis­in­for­ma­tion. The con­tentious dis­cus­sion involved Mr. Zucker­berg, who appeared to side with Mr. Kaplan, and Chief Oper­at­ing Offi­cer Sheryl Sand­berg. The debate end­ed in Novem­ber when The Dai­ly Caller’s fact-check­ing oper­a­tion lost its accred­i­ta­tion.

    Accord­ing to IFCN direc­tor Bay­bars Örsek, Check Your Fact was expelled from IFCN’s ver­i­fied sig­na­to­ries last Novem­ber because “they failed to dis­close one of their fund­ing sources [the Dai­ly Caller News Foun­da­tion] in their appli­ca­tion,” but were rein­stat­ed ear­li­er this year after reap­ply­ing.

    But even though Check Your Fact is now being more trans­par­ent about its fund­ing sources, those fund­ing sources in and of them­selves present prob­lem­at­ic con­flicts of inter­est — ones that the IFCN’s cer­ti­fi­ca­tion process doesn’t account for.

    How Face­book choos­es its fact-check­ers

    All the fact-checkers Facebook partners with are certified by Poynter’s International Fact-Checking Network (IFCN). Poynter evaluates applicants based on a set of criteria including “nonpartisanship and fairness,” “transparency of sources,” “transparency of funding and organization,” “transparency of methodology,” and an “open and honest corrections policy.”

    IFCN cer­ti­fi­ca­tion is a nec­es­sary con­di­tion for part­ner­ing with Face­book, but once a site is cer­ti­fied, it’s up to Face­book to decide whether to part­ner with it. There are cur­rent­ly 62 orga­ni­za­tions with IFCN cer­ti­fi­ca­tion glob­al­ly, but Face­book only part­ners with six in the Unit­ed States.

    “We don’t believe we at Face­book should be respon­si­ble for the verac­i­ty of con­tent,” Face­book spokesper­son Svens­son told me. “We believe in the cred­i­bil­i­ty of fact-check­ers that [IFCN] cer­ti­fies.”

    Notably, however, the IFCN’s criteria for certification do not include conflicts of interest. That’s the source of one of the concerns climate journalists are raising about Check Your Fact.

    Accord­ing to a report pub­lished last month by PRWatch, the Charles Koch Foun­da­tion account­ed for 83 per­cent of the Dai­ly Caller News Foundation’s rev­enues in 2016, and the Dai­ly Caller News Foun­da­tion employs some of Check Your Fact’s fact-check­ers. Green­peace reports that the Koch Fam­i­ly Foun­da­tions spent more than $127 mil­lion from 1997 to 2017 financ­ing groups “that have attacked cli­mate change sci­ence and pol­i­cy solu­tions.”

    That con­flict of inter­est has raised con­cerns that Check Your Fact’s fact-check­ing role could have a chill­ing effect on cli­mate jour­nal­ism on Face­book.

    As lead­ing cli­ma­tol­o­gist Michael Mann told ThinkProgress, “It is appalling that Face­book has teamed up with a Koch-fund­ed orga­ni­za­tion that pro­motes cli­mate change denial. ... Face­book must dis­as­so­ci­ate itself from this orga­ni­za­tion.”

    Facebook says it wants a “diversity” of organizations for fact-checking, but according to Media Bias/Fact Check, none of the fact-checkers Facebook currently partners with in the US are left-leaning, and Check Your Fact is the only one with a right-of-center rating. Facebook is essentially buying into the argument conservatives have laid forth — that mainstream news outlets have a liberal bias and that conservatives need special consideration in the process.

    Hav­ing accu­rate fact-checks doesn’t mean a fact-check­er is free of bias

    Check Your Fact’s web­site pledges that the site is “non-par­ti­san” and “loy­al to nei­ther peo­ple nor par­ties — only the truth.” (Full dis­clo­sure: Check Your Fact has also fact-checked one of this author’s own tweets). It also talks up the website’s “edi­to­r­i­al inde­pen­dence.” Indeed, a perusal of Check Your Fact’s web­site doesn’t indi­cate that there’s any­thing fac­tu­al­ly wrong with the site’s fact-checks, but the sto­ries it choos­es to fact-check speak to a bias of its own.

    For instance, as of April 30, the site’s home­page fea­tures more fact-checks of state­ments made by Hillary Clin­ton — for exam­ple, “FACT CHECK: Did Hillary Clin­ton Once Say That Demo­c­ra­t­ic Vot­ers Are ‘Just Plain Stu­pid’?” (the site notes there’s no evi­dence Clin­ton ever said it) — than it does state­ments from the cur­rent pres­i­dent, Don­ald Trump, who just sur­passed a his­toric 10,000 false or mis­lead­ing claims from main­stream fact-check­ers.

    And as Scott Wald­man recent­ly detailed for E&E News, even when Check Your Fact does fact-checks of claims like Trump’s recent one about wind tur­bines caus­ing can­cer that ulti­mate­ly arrive at the cor­rect con­clu­sion (Trump’s claim was false), the site ele­vates fringe voic­es in the process.

    While the web­site labeled the claim as false — and quot­ed can­cer experts say­ing as much — it also quot­ed Nation­al Wind Watch, an anti­wind group that orga­nizes and fights against wind tur­bines through­out the coun­try. A spokesman for that group claimed the pres­i­dent was cor­rect; he said tur­bines cause a lack of sleep and stress, which can lead to can­cer.

    In March, Check Your Fact gave cre­dence to Sen­ate Major­i­ty Leader Mitch McConnell’s claims that the Green New Deal would cost more than every dol­lar the fed­er­al gov­ern­ment has spent in its his­to­ry. The Ken­tucky Repub­li­can and Check Your Fact relied on a sin­gle study, pro­duced by a con­ser­v­a­tive think tank, the Amer­i­can Action Forum.

    But the author of that study has acknowl­edged that its cal­cu­la­tion of a $93 tril­lion price tag is essen­tial­ly a guess, since the Green New Deal is cur­rent­ly a vague res­o­lu­tion. E&E News has report­ed on how the Amer­i­can Action Forum is con­nect­ed to a web of con­ser­v­a­tive groups that fund polit­i­cal attacks through undis­closed donors and that have been fund­ed by fos­sil fuel lob­by­ing inter­ests opposed to envi­ron­men­tal reg­u­la­tions (Cli­matewire, April 1).

    It would be hard to com­plain if Face­book part­nered with rep­utable web­sites for fact-check­ing. But in order to pre­empt accu­sa­tions of left-wing bias, the plat­form has repeat­ed­ly part­nered with out­lets that draw into ques­tion how com­mit­ted the plat­form real­ly is to root­ing out fake news. (In a state­ment sent to Vox, Check Your Fact edi­tor David Sivak pushed back on char­ac­ter­i­za­tions of his site as being biased toward the right, writ­ing, “[t]hese last cou­ple of weeks have been reveal­ing, as a num­ber of news out­lets have resort­ed to mis­rep­re­sent­ing our work. Even when we fact-check con­ser­v­a­tives for putting words in Hillary Clinton’s mouth, that’s some­how mis­con­strued as con­ser­v­a­tive ‘bias’ on our part. The truth is, Check Your Fact has a two-year track record of fair, even­hand­ed arti­cles that hold fig­ures on both sides of the polit­i­cal aisle account­able, includ­ing Trump.”)

    There are indications that Facebook’s fact-checking problems go deeper than its partnership with the Daily Caller. In February, one of the sites that was working with Facebook, Snopes, announced it was ending the partnership.

    Two months before that announce­ment, the Guardian report­ed on some of the frus­tra­tions that may have moti­vat­ed that deci­sion.

    “Cur­rent and for­mer Face­book factcheck­ers told the Guardian that the tech platform’s col­lab­o­ra­tion with out­side reporters has pro­duced min­i­mal results and that they’ve lost trust in Face­book, which has repeat­ed­ly refused to release mean­ing­ful data about the impacts of their work,” the Guardian report­ed.

    The dis­ease of fake news is bad. But the “cures” Face­book is try­ing have side effects of their own.

    Face­book knows that it faces a tough sit­u­a­tion. Much of its val­ue lies in the fact that it has such a wide user base — lib­er­al or con­ser­v­a­tive, old or young — and that it can mon­e­tize those users. The preva­lence of mis­in­for­ma­tion threat­ens its abil­i­ty to sur­vive in a very real way, but so does poten­tial reg­u­la­tion from Repub­li­can politi­cians who don’t seem to have a firm grasp of how the inter­net works but harp on about lib­er­al bias any­way.

    Facebook, by partnering with a right-wing fact-checking organization, is making a concession to conservative arguments. But by not including liberal sites, it’s also tacitly suggesting that mainstream outlets have a liberal bias — which isn’t necessarily true.

    ...

    ———-

    “Facebook’s con­tro­ver­sial fact-check­ing part­ner­ship with a Dai­ly Caller-fund­ed web­site, explained” by Aaron Rupar; Vox; 05/06/2019

    “Last week, Facebook announced that it’s partnering with Check Your Fact — a subsidiary of the right-wing Daily Caller, a site known for its ties to white nationalists — as one of six third-party organizations it currently works with to fact-check content for American users. The partnership has already come under intense criticism from climate journalists (among others) who are concerned that the Daily Caller’s editorial stance on issues like climate change, which is uncontroversial among scientists but isn’t treated as such on right-wing media, will spread even more misinformation on Facebook.”

    The Daily Caller — a cesspool of white nationalist propaganda — is the fact-checking sugar daddy for one of the biggest sources of news on the planet. This is the state of the media in 2019. It’s also a reminder that, while Donald Trump is widely recognized as the figure that ‘captured’ the heart and soul of the Republican Party in recent years, the real figure that accomplished this was Alex Jones. That’s why ensuring Facebook is safe for far right disinformation is so important to the party. Alex Jones’s message is the Republican Party’s unofficial zeitgeist at this point. Trump has just been riding Jones’s wave, which has been building for years.

    Oh, but it gets worse, of course: it turns out the Charles Koch Foundation accounted for 83 percent of the Daily Caller News Foundation’s revenues in 2016, and the Daily Caller News Foundation employs some of Check Your Fact’s fact-checkers. So this is more of a joint Daily Caller/Koch operation. Facebook explains this decision by asserting that “we do believe in having a diverse set of fact-checking partners.” And yet no actual left-wing organizations were hired to do similar work:

    ...
    In an interview, Facebook spokesperson Lauren Svensson defended the partnership. She noted that Check Your Fact, like all fact-checkers Facebook partners with, is certified by Poynter’s International Fact-Checking Network (IFCN). Asked about the right-wing proclivities of Check Your Fact’s parent company, Svensson referred to the IFCN’s certification processes and said that “we do believe in having a diverse set of fact-checking partners.” Check Your Fact, for its part, says it operates independently from the Daily Caller, and touts its record of accurate fact-checks.

    The reality is that Facebook has a fake news problem that could hurt its bottom line, but it also has a political problem. If it doesn’t give credence to popular but disreputable websites like the Daily Caller, it runs the risk of angering Republicans who use the platform. But in giving credence to sites of that sort, the platform runs the risk of perpetuating the same “fake news” problem third-party fact-checkers are meant to solve.

    ...

    How Face­book choos­es its fact-check­ers

    All the fact-checkers Facebook partners with are certified by Poynter’s International Fact-Checking Network (IFCN). Poynter evaluates applicants based on a set of criteria including “nonpartisanship and fairness,” “transparency of sources,” “transparency of funding and organization,” “transparency of methodology,” and an “open and honest corrections policy.”

    IFCN cer­ti­fi­ca­tion is a nec­es­sary con­di­tion for part­ner­ing with Face­book, but once a site is cer­ti­fied, it’s up to Face­book to decide whether to part­ner with it. There are cur­rent­ly 62 orga­ni­za­tions with IFCN cer­ti­fi­ca­tion glob­al­ly, but Face­book only part­ners with six in the Unit­ed States.

    “We don’t believe we at Face­book should be respon­si­ble for the verac­i­ty of con­tent,” Face­book spokesper­son Svens­son told me. “We believe in the cred­i­bil­i­ty of fact-check­ers that [IFCN] cer­ti­fies.”

    Notably, how­ev­er, the IFCN’s cri­te­ria for cer­ti­fi­ca­tion does not include con­flicts of inter­est. That’s the source of one of the con­cerns cli­mate jour­nal­ists are rais­ing about Check Your Fact.

    Accord­ing to a report pub­lished last month by PRWatch, the Charles Koch Foun­da­tion account­ed for 83 per­cent of the Dai­ly Caller News Foundation’s rev­enues in 2016, and the Dai­ly Caller News Foun­da­tion employs some of Check Your Fact’s fact-check­ers. Green­peace reports that the Koch Fam­i­ly Foun­da­tions spent more than $127 mil­lion from 1997 to 2017 financ­ing groups “that have attacked cli­mate change sci­ence and pol­i­cy solu­tions.”

    ...

    Face­book says it wants a “diver­si­ty” of orga­ni­za­tions for fact-check­ing, but accord­ing to Media Bias/Fact Check, none of the fact-check­ers Face­book cur­rent­ly part­ners with in the US are left-lean­ing, and Check Your Fact is the only one with a right-of-cen­ter rat­ing. Face­book is essen­tial­ly buy­ing into the argu­ment con­ser­v­a­tives have laid forth — that main­stream news out­lets have a lib­er­al bias and that con­ser­v­a­tives need spe­cial con­sid­er­a­tion in the process.
    ...

    And it’s been none oth­er than for­mer White House aide to George W. Bush, Joel Kaplan, who has been push­ing to give the Dai­ly Caller this kind of over­sight over the plat­for­m’s con­tent. Kaplan is appar­ent­ly Face­book’s “pro­tec­tor against alle­ga­tions of polit­i­cal bias.” And while some of Face­book’s exec­u­tives rec­og­nized that the Dai­ly Caller is a ser­i­al ped­dler of mis­in­for­ma­tion, Mark Zucker­berg report­ed­ly took Kaplan’s side dur­ing these debates:

    ...
    Face­book report­ed­ly has been inter­est­ed in part­ner­ing with the Dai­ly Caller for some time. In Decem­ber, the Wall Street Jour­nal report­ed that Joel Kaplan, a for­mer White House aide to George W. Bush who now serves as Facebook’s glob­al pol­i­cy chief and is the company’s “pro­tec­tor against alle­ga­tions of polit­i­cal bias,” made a failed push to part­ner with the Dai­ly Caller last year:

    This sum­mer, Mr. Kaplan pushed to part­ner with right-wing news site The Dai­ly Caller’s fact-check­ing divi­sion after con­ser­v­a­tives accused Face­book of work­ing only with main­stream pub­lish­ers, peo­ple famil­iar with the dis­cus­sions said. Con­ser­v­a­tive crit­ics argued those pub­li­ca­tions had a built-in lib­er­al bias.

    Mr. Kaplan argued that The Dai­ly Caller was accred­it­ed by the Poyn­ter Insti­tute, a St. Peters­burg, Fla.-based jour­nal­ism non­prof­it that over­sees a net­work of fact-check­ers. Oth­er exec­u­tives, includ­ing some in the Wash­ing­ton, D.C. office, argued that the pub­li­ca­tion print­ed mis­in­for­ma­tion. The con­tentious dis­cus­sion involved Mr. Zucker­berg, who appeared to side with Mr. Kaplan, and Chief Oper­at­ing Offi­cer Sheryl Sand­berg. The debate end­ed in Novem­ber when The Dai­ly Caller’s fact-check­ing oper­a­tion lost its accred­i­ta­tion.

    Accord­ing to IFCN direc­tor Bay­bars Örsek, Check Your Fact was expelled from IFCN’s ver­i­fied sig­na­to­ries last Novem­ber because “they failed to dis­close one of their fund­ing sources [the Dai­ly Caller News Foun­da­tion] in their appli­ca­tion,” but were rein­stat­ed ear­li­er this year after reap­ply­ing.

    But even though Check Your Fact is now being more trans­par­ent about its fund­ing sources, those fund­ing sources in and of them­selves present prob­lem­at­ic con­flicts of inter­est — ones that the IFCN’s cer­ti­fi­ca­tion process doesn’t account for.
    ...

    Yep, Check Your Fact was­n’t even ini­tial­ly trans­par­ent with the IFCN about its fund­ing sources and instead hid the fact that it’s financed by the Koch-fund­ed Dai­ly Caller News Foun­da­tion. That’s the kind of orga­ni­za­tion this is. And that’s why the inevitable right-wing claims of bias that we’re undoubt­ed­ly going to hear in the 2020 elec­tion will be such a bad joke.

    In relat­ed news, Face­book recent­ly announced that it’s ban­ning pro-Trump ads from The Epoch Times. Recall the recent reports about how The Epoch Times, fund­ed by Falun Gong devo­tees, has become the sec­ond biggest buy­er of pro-Trump Face­book ads in the world (after only the Trump cam­paign itself) and has become a cen­tral play­er in gen­er­at­ing all sorts of wild far right con­spir­a­cy the­o­ries like ‘QAnon’. So was The Epoch Times banned for aggres­sive­ly push­ing all sorts of mis­in­for­ma­tion? Nope, The Epoch Times was banned from buy­ing Face­book ads for not being upfront about its fund­ing sources. That was it.

    Posted by Pterrafractyl | August 26, 2019, 12:18 pm
  13. This next arti­cle shows how Face­book promised to ban white nation­al­ist con­tent from its plat­form in March 2019. It was not until then that Face­book acknowl­edged that white nation­al­ism “can­not be mean­ing­ful­ly sep­a­rat­ed from white suprema­cy and orga­nized hate groups” and banned it. Face­book does not ban Holo­caust denial, but does work to reduce the spread of such con­tent by lim­it­ing the dis­tri­b­u­tion of posts and pre­vent­ing Holo­caust-deny­ing groups and pages from appear­ing in algo­rith­mic rec­om­men­da­tions.  How­ev­er, a Guardian analy­sis found long­stand­ing Face­book pages for VDare, a white nation­al­ist web­site focused on oppo­si­tion to immi­gra­tion; the Affir­ma­tive Right, a rebrand­ing of Richard Spencer’s blog Alter­na­tive Right, which helped launch the “alt-right” move­ment; and Amer­i­can Free Press, a newslet­ter found­ed by the white suprema­cist Willis Car­to, in addi­tion to mul­ti­ple pages asso­ci­at­ed with Red Ice TV. Also oper­at­ing open­ly on the plat­form are two Holo­caust denial orga­ni­za­tions, the Com­mit­tee for Open Debate on the Holo­caust and the Insti­tute for His­tor­i­cal Review. The Guardian under­took its review of white nation­al­ist out­lets on Face­book amid a debate over the company’s deci­sion to include Bre­it­bart News in Face­book News, a new sec­tion of its mobile app ded­i­cat­ed to “high qual­i­ty” jour­nal­ism. Crit­ics of Bre­it­bart News object to its inclu­sion in what Zucker­berg has described as a “trust­ed source” of infor­ma­tion on two fronts: its repeat­ed pub­li­ca­tion of par­ti­san mis­in­for­ma­tion and con­spir­a­cy the­o­ries – and its pro­mo­tion of extreme right-wing views. Steve Ban­non called Bre­it­bart “the plat­form for the alt-right” in 2016. 
In 2017, Buz­zFeed News report­ed on emails and doc­u­ments show­ing how a for­mer Bre­it­bart edi­tor had worked direct­ly with a white nation­al­ist and a neo-Nazi to write and edit an arti­cle about the “alt-right” move­ment. The SPLC and numer­ous news orga­ni­za­tions have report­ed on a cache of emails between the senior Trump advis­er Stephen Miller and the for­mer Bre­it­bart writer Katie McHugh show­ing how Miller pushed for cov­er­age and inclu­sion of white nation­al­ist ideas in the pub­li­ca­tion.  The arti­cle offers an anal­o­gy: just because the KKK pro­duced their own news­pa­pers didn’t mean those papers qual­i­fied as news. Bre­it­bart is a polit­i­cal organ, and what it was try­ing to do was give white suprema­cist pol­i­tics a veneer of objec­tiv­i­ty.

    The Guardian, Julia Car­rie Wong, Thu 21 Nov 2019 06.00 EST

    White nation­al­ists are open­ly oper­at­ing on Face­book. The com­pa­ny won’t act

    Guardian analy­sis finds VDare and Red Ice TV among sev­er­al out­lets that are still on the plat­form despite Facebook’s promised ban

    Last mod­i­fied on Thu 21 Nov 2019 14.38 EST

    On 7 Novem­ber, Lana Lok­t­eff, an Amer­i­can white nation­al­ist, intro­duced a “thought crim­i­nal and polit­i­cal pris­on­er and friend” as a fea­tured guest on her inter­net talk show, Red Ice TV. 

    For about 90 min­utes, Lok­t­eff and her guest – Greg John­son, a promi­nent white nation­al­ist and edi­tor-in-chief of the white nation­al­ist pub­lish­er Counter-Cur­rents – dis­cussed Johnson’s recent arrest in Nor­way amid author­i­ties’ con­cerns about his past expres­sion of “respect” for the far-right mass mur­der­er Anders Breivik. In 2012, John­son wrote that he was angered by Breivik’s crimes because he feared they would harm the cause of white nation­al­ism but had dis­cov­ered a “strange new respect” for him dur­ing his tri­al; Breivik’s mur­der of 77 peo­ple has been cit­ed as an inspi­ra­tion by the sus­pect­ed Christchurch killer, the man who mur­dered the British MP Jo Cox, and a US coast guard offi­cer accused of plot­ting a white nation­al­ist ter­ror attack.

    Just a few weeks ear­li­er, Red Ice TV had suf­fered a seri­ous set­back when it was per­ma­nent­ly banned from YouTube for repeat­ed vio­la­tions of its pol­i­cy against hate speech. But Red Ice TV still had a home on Face­book, allow­ing the channel’s 90,000 fol­low­ers to stream the dis­cus­sion on Face­book Watch – the plat­form Mark Zucker­berg launched as a place “to share an expe­ri­ence and bring peo­ple togeth­er who care about the same things”.

    The con­ver­sa­tion wasn’t a unique occur­rence. Face­book promised to ban white nation­al­ist con­tent from its plat­form in March 2019, revers­ing a years-long pol­i­cy to tol­er­ate the ide­ol­o­gy. But Red Ice TV is just one of sev­er­al white nation­al­ist out­lets that remain active on the plat­form today.

    A Guardian analy­sis found long­stand­ing Face­book pages for VDare, a white nation­al­ist web­site focused on oppo­si­tion to immi­gra­tion; the Affir­ma­tive Right, a rebrand­ing of Richard Spencer’s blog Alter­na­tive Right, which helped launch the “alt-right” move­ment; and Amer­i­can Free Press, a newslet­ter found­ed by the white suprema­cist Willis Car­to, in addi­tion to mul­ti­ple pages asso­ci­at­ed with Red Ice TV. Also oper­at­ing open­ly on the plat­form are two Holo­caust denial orga­ni­za­tions, the Com­mit­tee for Open Debate on the Holo­caust and the Insti­tute for His­tor­i­cal Review.

    “There’s no ques­tion that every sin­gle one of these groups is a white nation­al­ist group,” said Hei­di Beirich, the direc­tor of the South­ern Pover­ty Law Center’s (SPLC) Intel­li­gence Project, after review­ing the Guardian’s find­ings. “It’s not even up for debate. There’s real­ly no excuse for not remov­ing this mate­r­i­al.”

    White nation­al­ists sup­port the estab­lish­ment of whites-only nation states, both by exclud­ing new non-white immi­grants and, in some cas­es, by expelling or killing non-white cit­i­zens and res­i­dents. Many con­tem­po­rary pro­po­nents of white nation­al­ism fix­ate on con­spir­a­cy the­o­ries about demo­graph­ic change and con­sid­er racial or eth­nic diver­si­ty to be acts of “geno­cide” against the white race.

    Face­book declined to take action against any of the pages iden­ti­fied by the Guardian. A com­pa­ny spokesper­son said: “We are inves­ti­gat­ing to deter­mine whether any of these groups vio­late our poli­cies against orga­nized hate. We reg­u­lar­ly review orga­ni­za­tions against our pol­i­cy and any that vio­late will be banned per­ma­nent­ly.”

    The spokesper­son also said that Face­book does not ban Holo­caust denial, but does work to reduce the spread of such con­tent by lim­it­ing the dis­tri­b­u­tion of posts and pre­vent­ing Holo­caust-deny­ing groups and pages from appear­ing in algo­rith­mic rec­om­men­da­tions. Such lim­i­ta­tions are being applied to the two Holo­caust denial groups iden­ti­fied by the Guardian, the spokesper­son said.

    The Guardian under­took a review of white nation­al­ist out­lets on Face­book amid a debate over the company’s deci­sion to include Bre­it­bart News in Face­book News, a new sec­tion of its mobile app ded­i­cat­ed to “high qual­i­ty” jour­nal­ism. Face­book has faced sig­nif­i­cant pres­sure to reduce the dis­tri­b­u­tion of mis­in­for­ma­tion on its plat­form. Crit­ics of Bre­it­bart News object to its inclu­sion in what Zucker­berg has described as a “trust­ed source” of infor­ma­tion on two fronts: its repeat­ed pub­li­ca­tion of par­ti­san mis­in­for­ma­tion and con­spir­a­cy the­o­ries – and its pro­mo­tion of extreme rightwing views.

    A grow­ing body of evi­dence shows the influ­ence of white nation­al­ism on Breitbart’s pol­i­tics. Breitbart’s for­mer exec­u­tive chair­man Steve Ban­non called the site “the plat­form for the alt-right” in 2016. In 2017, Buz­zFeed News report­ed on emails and doc­u­ments show­ing how a for­mer Bre­it­bart edi­tor had worked direct­ly with a white nation­al­ist and a neo-Nazi to write and edit an arti­cle about the “alt-right” move­ment.

    This month, the SPLC and numer­ous news orga­ni­za­tions have report­ed on a cache of emails between the senior Trump advis­er Stephen Miller and the for­mer Bre­it­bart writer Katie McHugh show­ing how Miller pushed for cov­er­age and inclu­sion of white nation­al­ist ideas in the pub­li­ca­tion. The emails show Miller direct­ing McHugh to read links from VDare and anoth­er white nation­al­ist pub­li­ca­tion, Amer­i­can Renais­sance, among oth­er sources. In one case, report­ed by NBC News, Bre­it­bart ran an anti-immi­gra­tion op-ed sub­mit­ted by Miller under the byline “Bre­it­bart News”.

    A Bre­it­bart spokes­woman, Eliz­a­beth Moore, said that the out­let “is not now nor has it ever been a plat­form for the alt-right”. Moore also said McHugh was “a trou­bled indi­vid­ual” who had been fired for a num­ber of rea­sons “includ­ing lying”.

    “Bre­it­bart is the fun­nel through which VDare’s ideas get out to the pub­lic,” said Beirich. “It’s basi­cal­ly a con­duit of con­spir­a­cy the­o­ry and racism into the con­ser­v­a­tive move­ment … We don’t list them as a hate group, but to con­sid­er them a trust­ed news source is pan­der­ing at best.”

    Draw­ing the line between pol­i­tics and news
    Face­book exec­u­tives have respond­ed defen­sive­ly to crit­i­cism of Bre­it­bart News’s inclu­sion in the Face­book News tab, argu­ing that the com­pa­ny should not pick ide­o­log­i­cal sides.

    “Part of hav­ing this be a trust­ed source is that it needs to have a diver­si­ty of … views in there,” Zucker­berg said at an event in New York in response to a ques­tion about Breitbart’s inclu­sion. Camp­bell Brown, Facebook’s head of news part­ner­ships, wrote in a lengthy Face­book post that she believed Face­book should “include con­tent from ide­o­log­i­cal pub­lish­ers on both the left and the right”. Adam Mosseri, the head of Insta­gram and a long­time Face­book exec­u­tive, ques­tioned on Twit­ter whether the company’s crit­ics “real­ly want a plat­form of our scale to make deci­sions to exclude news orga­ni­za­tions based on their ide­ol­o­gy”. In response to a ques­tion from the Guardian, Mosseri acknowl­edged that Face­book does ban the ide­ol­o­gy of white nation­al­ism, then added: “The tricky bit is, and this is always the case, where exact­ly to draw the line.”

    One of the chal­lenges for Face­book is that white nation­al­ist and white suprema­cist groups adopt the trap­pings of news out­lets or pub­li­ca­tions to dis­sem­i­nate their views, said Joan Dono­van, the direc­tor of the Tech­nol­o­gy and Social Change Research Project at Har­vard and an expert on media manip­u­la­tion.

    Red Ice TV is “a group that styles them­selves as a news orga­ni­za­tion when they are pri­mar­i­ly a polit­i­cal orga­ni­za­tion, and the pol­i­tics are staunch­ly white suprema­cist”, Dono­van said. “We have seen this hap­pen in the past where orga­ni­za­tions like the KKK have pro­duced their own news­pa­pers … It doesn’t mean that it qual­i­fies as news.”

    Many peo­ple argue that Bre­it­bart is more of a “polit­i­cal front” than a news oper­a­tion, she added. “When Steve Ban­non left Bre­it­bart in order to work much more con­crete­ly with cam­paigns, you could see that Bre­it­bart was a polit­i­cal organ before any­thing else. Real­ly what they were try­ing to do was give white suprema­cist pol­i­tics a veneer of objec­tiv­i­ty.”

    Dono­van said she expects plat­form com­pa­nies will reassess their treat­ment of Bre­it­bart fol­low­ing the release of the Miller emails. She also called for Face­book to take a more “holis­tic” approach to com­bat­ing US domes­tic ter­ror­ism, as it does with for­eign ter­ror­ist groups.

    A Face­book spokesper­son not­ed that Face­book News is still in a test phase and that Face­book is not pay­ing Bre­it­bart News for its inclu­sion in the pro­gram. The spokesper­son said the com­pa­ny would con­tin­ue to lis­ten to feed­back from news pub­lish­ers.

    A his­to­ry of tol­er­ance for hate
    Face­book has long assert­ed that “hate speech has no space on Face­book”, whether it comes from a news out­let or not.

    But the $566bn com­pa­ny has con­sis­tent­ly allowed a vari­ety of hate groups to use its plat­form to spread their mes­sage, even when alert­ed to their pres­ence by the media or advo­ca­cy groups. In July 2017, in response to queries from the Guardian, Face­book said that more than 160 pages and groups iden­ti­fied as hate groups by SPLC did not vio­late its com­mu­ni­ty stan­dards. Those groups includ­ed:

    Amer­i­can Renais­sance, a white suprema­cist web­site and mag­a­zine;

    The Coun­cil of Con­ser­v­a­tive Cit­i­zens, a white nation­al­ist orga­ni­za­tion ref­er­enced in the man­i­festo writ­ten by Dylann Roof before he mur­dered nine peo­ple in a black church;

    The Occi­den­tal Observ­er, an online pub­li­ca­tion described by the Anti-Defama­tion League as the “pri­ma­ry voice for anti­semitism from far-right intel­lec­tu­als”;

    the Tra­di­tion­al­ist Work­er par­ty, a neo-Nazi group that had already been involved in mul­ti­ple vio­lent inci­dents; and

    Counter-Cur­rents, the white nation­al­ist pub­lish­ing imprint run by the white nation­al­ist Greg John­son, the recent guest on Red Ice TV.

    Three weeks lat­er, fol­low­ing the dead­ly Unite the Right ral­ly in Char­lottesville, Face­book announced a crack­down on vio­lent threats and removed pages asso­ci­at­ed with the Tra­di­tion­al­ist Work­er par­ty, Counter-Cur­rents, and the neo-Nazi orga­ni­za­tion Gal­lows Tree Wotans­volk. Many of the rest remained.
    A year lat­er, a Guardian review found that many of the groups and indi­vid­u­als involved in the Char­lottesville event were back on Face­book, includ­ing the neo-Con­fed­er­ate League of the South, Patri­ot Front and Jason Kessler, who orga­nized Unite the Right. Face­book took those pages down fol­low­ing inquiries from the Guardian, but declined to take action against the page of David Duke, the noto­ri­ous white suprema­cist and for­mer Grand Wiz­ard of the Ku Klux Klan.

    In May 2018, Vice News’s Moth­er­board report­ed on inter­nal Face­book train­ing doc­u­ments that showed the com­pa­ny was dis­tin­guish­ing between white suprema­cy and white nation­al­ism – and explic­it­ly allow­ing white nation­al­ism.

    In July 2018, Zucker­berg defend­ed the moti­va­tions of peo­ple who engage in Holo­caust denial dur­ing an inter­view, say­ing that he did not “think that they’re inten­tion­al­ly get­ting it wrong”. Fol­low­ing wide­spread crit­i­cism, he retract­ed his remarks.

    It was not until March 2019 that Face­book acknowl­edged that white nation­al­ism “can­not be mean­ing­ful­ly sep­a­rat­ed from white suprema­cy and orga­nized hate groups” and banned it.

    Beirich expressed deep frus­tra­tion with Facebook’s track record.

    “We have con­sult­ed with Face­book many, many times,” Beirich added. “We have sent them our list of hate groups. It’s not like they’re not aware, and I always get the sense that there is good faith desire [to take action], and yet over and over again [hate groups] keep pop­ping up. It’s just not pos­si­ble for civ­il rights groups like SPLC to play the role of flag­ging this stuff for Face­book. It’s a com­pa­ny that makes $42bn a year and I have a staff of 45.”

    https://www.theguardian.com/technology/2019/nov/21/facebook-white-nationalists-ban-vdare-red-ice?CMP=Share_iOSApp_Other

    Posted by Mary Benton | November 23, 2019, 6:55 pm
  14. Remem­ber the sto­ry from ear­li­er this year about Face­book out­sourc­ing its ‘fact check­ing’ oper­a­tions to orga­ni­za­tions like the Koch-financed far right Dai­ly Caller News Foun­da­tion? Well, here’s the flip side of sto­ries like that: Face­book just lost its last fact check­er orga­ni­za­tion in the Nether­lands, the Dutch news­pa­per NU.nl. Why did the news­pa­per leave the pro­gram? Because Face­book forced NU.nl to reverse its rul­ing that the claims in a far right Dutch ad are unsub­stan­ti­at­ed, in keep­ing with Face­book’s new pol­i­cy of not fact check­ing politi­cians. The group labeled an ad by a far right politi­cian that claimed that 10 per­cent of Romania’s farm­land is owned by non-Euro­peans as unsub­stan­ti­at­ed, but Face­book inter­vened and forced a rever­sal of that rul­ing. So NU.nl quit the fact check­ing pro­gram because it was­n’t allowed to check the facts of soci­ety’s biggest and loud­est liars:

    The Verge

    Facebook’s only fact-check­ing ser­vice in the Nether­lands just quit

    ‘What is the point of fight­ing fake news if you are not allowed to tack­le politi­cians?’

    By Zoe Schif­fer
    Nov 26, 2019, 3:02pm EST

    Face­book is now oper­at­ing with­out a third-par­ty fact-check­ing ser­vice in the Nether­lands. The company’s only part­ner, Dutch news­pa­per NU.nl, just quit over a dis­pute regard­ing the social network’s pol­i­cy to allow politi­cians to run ads con­tain­ing mis­in­for­ma­tion.

    “What is the point of fight­ing fake news if you are not allowed to tack­le politi­cians?” asked NU.nl’s edi­tor-in-chief Gert-Jaap Hoek­man in a blog post announc­ing the deci­sion. “Let one thing be clear: we stand behind the con­tent of our fact checks.”

    The con­flict began in May when Face­book inter­vened in NU.nl’s deci­sion to label an ad from the Dutch politi­cian Esther de Lange as unsub­stan­ti­at­ed. The ad’s claim, that 10 per­cent of farm­land in Roma­nia is owned by non-Euro­peans, could not be ver­i­fied, which led NU.nl to label it as false. Face­book inter­vened in that deci­sion, telling the orga­ni­za­tion that politi­cians’ speech should not be fact-checked.

    Facebook’s adver­tis­ing guide­lines do not allow mis­in­for­ma­tion in ads, and the com­pa­ny relies on third-par­ty fact-check­ing ser­vices to vet the claims mar­keters are mak­ing. In Octo­ber, how­ev­er, the com­pa­ny for­mal­ly exempt­ed politi­cians from being part of this pro­gram. “From now on we will treat speech from politi­cians as news­wor­thy con­tent that should, as a gen­er­al rule, be seen and heard,” wrote Nick Clegg, Facebook’s VP of com­mu­ni­ca­tions.

    ...

    Pres­sure began to mount after Jack Dorsey announced that Twit­ter would no longer allow polit­i­cal ads on the plat­form. “We believe polit­i­cal mes­sage reach should be earned, not bought,” he wrote on Twit­ter. Some of Facebook’s own employ­ees penned an open let­ter to Mark Zucker­berg, ask­ing him to con­sid­er chang­ing his mind.

    NU.nl felt increas­ing­ly uncom­fort­able with its rela­tion­ship with Face­book. The orga­ni­za­tion had become the only third-par­ty fact-check­ing ser­vice Face­book used in the Nether­lands, after Lei­den Uni­ver­si­ty pulled out from its part­ner­ship last year. When it became clear the social net­work would not change its posi­tion, NU.nl decid­ed to put an end to its part­ner­ship as well.

    “We val­ue the work that Nu.nl has done and regret to see them go, but respect their deci­sion as an inde­pen­dent busi­ness,” a Face­book spokesper­son said in a state­ment emailed to The Verge. “We have strong rela­tion­ships with 55 fact-check­ing part­ners around the world who fact-check con­tent in 45 lan­guages, and we plan to con­tin­ue expand­ing the pro­gram in Europe and hope­ful­ly in the Nether­lands.”

    ———-

    “Facebook’s only fact-check­ing ser­vice in the Nether­lands just quit” by Zoe Schif­fer; The Verge; 11/26/2019

    ““What is the point of fight­ing fake news if you are not allowed to tack­le politi­cians?” asked NU.nl’s edi­tor-in-chief Gert-Jaap Hoek­man in a blog post announc­ing the deci­sion. “Let one thing be clear: we stand behind the con­tent of our fact checks.””

    What is the point of fight­ing fake news if you are not allowed to tack­le politi­cians? That’s a pret­ty valid ques­tion for a fact check­er. Espe­cial­ly in an era of the rise of the far right when troll­ish polit­i­cal gas-light­ing has become the norm. At some point, being a fact check­er with those kinds of con­straints effec­tive­ly turns these fact check­ing orga­ni­za­tions into facil­i­ta­tors of these lies.

    In relat­ed news, check out the recent addi­tion to Face­book’s “trust­ed” news feed: Bre­it­bart News:

    The Verge

    Mark Zucker­berg is strug­gling to explain why Bre­it­bart belongs on Face­book News

    By Adi Robert­son
    Oct 25, 2019, 6:18pm EDT

    On Fri­day morn­ing, Face­book announced its plan to spend mil­lions of dol­lars on high-qual­i­ty jour­nal­ism, fuel­ing the launch of a new ded­i­cat­ed news tab on its plat­form. CEO Mark Zucker­berg joined News Corp CEO Robert Thom­son for an inter­view soon after, and Thom­son ham­mered home the need for objec­tive jour­nal­ism in the age of social media, wax­ing nos­tal­gic about the impor­tance of rig­or­ous fact-check­ing in his ear­ly career.

    ...

    Face­book News is part­ner­ing with a vari­ety of region­al news­pa­pers and some major nation­al part­ners, includ­ing USA Today and The Wall Street Jour­nal. But as The New York Times and Nie­man Lab report, its “trust­ed” sources also include Bre­it­bart, a far-right site whose co-founder Steve Ban­non once described it as a plat­form for the white nation­al­ist “alt-right.” Bre­it­bart has been crit­i­cized for repeat­ed inac­cu­rate and incen­di­ary report­ing, often at the expense of immi­grants and peo­ple of col­or. Last year, Wikipedia declared it an unre­li­able source for cita­tions, along­side the British tabloid Dai­ly Mail and the left-wing site Occu­py Democ­rats.

    That’s led to ques­tions about why Bre­it­bart belongs on Face­book News, a fea­ture that will sup­pos­ed­ly be held to far tougher stan­dards than the nor­mal News Feed. In a ques­tion-and-answer ses­sion after the inter­view, Zucker­berg told Wash­ing­ton Post colum­nist Mar­garet Sul­li­van that Face­book would have “objec­tive stan­dards” for qual­i­ty.

    “Most of the rest of what we oper­ate is help­ing give peo­ple a voice broad­ly and mak­ing sure that every­one can share their opin­ion,” he said. “That’s not this. This is a space that is ded­i­cat­ed to high-qual­i­ty and curat­ed news.”

    But when New York Times reporter Marc Tra­cy asked how includ­ing Bre­it­bart served that cause, Zucker­berg empha­sized its pol­i­tics, not its report­ing. “Part of hav­ing this be a trust­ed source is that it needs to have a diver­si­ty of views in there, so I think you want to have con­tent that rep­re­sents dif­fer­ent per­spec­tives,” he said. Zucker­berg reit­er­at­ed that these per­spec­tives should com­ply with Facebook’s stan­dards, and he was cagey about Bre­it­bart’s pres­ence, say­ing that “hav­ing some­one be pos­si­ble or eli­gi­ble to show up” doesn’t guar­an­tee fre­quent place­ment. “But I cer­tain­ly think you want to include a breadth of con­tent in there,” he said.

    Face­book hasn’t released a full list of News part­ners, so we don’t know the project’s full scope. Bre­it­bart is hard­ly the only right-lean­ing name on Facebook’s list, which includes Nation­al Review, The Wash­ing­ton Times, and News Corp’s own Fox News. But it has faced unique chal­lenges to its edi­to­r­i­al integri­ty — includ­ing, in recent years, some of Bre­it­bart’s own for­mer staff denounc­ing its poli­cies.

    Zuckerberg’s answer is unlike­ly to sat­is­fy crit­ics, who see the site’s inclu­sion as an exam­ple of Face­book sur­ren­der­ing prin­ci­ple to appease right-wing com­men­ta­tors. Left-lean­ing non­prof­it Media Mat­ters for Amer­i­ca called the deci­sion “reflex­ive pan­der­ing to con­ser­v­a­tive pun­dits, right-wing extrem­ists, and white nation­al­ists.” Activist group Sleep­ing Giants — which has spear­head­ed a major adver­tis­er boy­cott of Bre­it­bart — retweet­ed sev­er­al reporters crit­i­ciz­ing the news, includ­ing Buz­zFeed News writer Joe Bern­stein, whose report­ing on Bre­it­bart and white nation­al­ism caused one of its biggest back­ers to sell his stake.

    But Face­book wants to win over Repub­li­cans, includ­ing law­mak­ers who have grilled Zucker­berg in Con­gress over shaky claims of “anti-con­ser­v­a­tive bias,” as well as Pres­i­dent Don­ald Trump, who has threat­ened tech com­pa­nies with new laws and antitrust action. Leav­ing out Bre­it­bart might earn Face­book con­dem­na­tion from these quar­ters.

    In a New York Times edi­to­r­i­al, Zucker­berg not­ed that out­right mis­in­for­ma­tion is banned on Face­book News. “If a pub­lish­er posts mis­in­for­ma­tion, it will no longer appear in the prod­uct,” he wrote. So in the­o­ry, Bre­it­bart will only stay on Face­book News if it hews to the rules. But that doesn’t explain why Face­book chose an out­let known for sen­sa­tion­al­ism and mis­in­for­ma­tion in the first place — and as Face­book News matures, kick­ing off a site like Bre­it­bart might cause more con­tro­ver­sy than nev­er includ­ing it at all.

    ———–

    “Mark Zucker­berg is strug­gling to explain why Bre­it­bart belongs on Face­book News” by Adi Robert­son; The Verge; 10/25/2019

    “Face­book News is part­ner­ing with a vari­ety of region­al news­pa­pers and some major nation­al part­ners, includ­ing USA Today and The Wall Street Jour­nal. But as The New York Times and Nie­man Lab report, its “trust­ed” sources also include Bre­it­bart, a far-right site whose co-founder Steve Ban­non once described it as a plat­form for the white nation­al­ist “alt-right.” Bre­it­bart has been crit­i­cized for repeat­ed inac­cu­rate and incen­di­ary report­ing, often at the expense of immi­grants and peo­ple of col­or. Last year, Wikipedia declared it an unre­li­able source for cita­tions, along­side the British tabloid Dai­ly Mail and the left-wing site Occu­py Democ­rats.”

    It’s not just a news feed. It’s a “trust­ed news” feed. That’s how Mark Zucker­berg envi­sions the Face­book News fea­ture is sup­posed to work. And yet when asked why Bre­it­bart News was invit­ed into this “trust­ed” col­lec­tion of news sources, Zucker­berg explains that in order for the Face­book News feed to be trust­ed it needs to draw from a wide vari­ety of sources across the ide­o­log­i­cal spec­trum. So in order for Face­book News to be trust­ed, it needs to include ide­o­log­i­cal sources from far right ide­olo­gies that thrive on warp­ing the truth and cre­at­ing fic­tion­al expla­na­tions of how the world works:

    ...
    That’s led to ques­tions about why Bre­it­bart belongs on Face­book News, a fea­ture that will sup­pos­ed­ly be held to far tougher stan­dards than the nor­mal News Feed. In a ques­tion-and-answer ses­sion after the inter­view, Zucker­berg told Wash­ing­ton Post colum­nist Mar­garet Sul­li­van that Face­book would have “objec­tive stan­dards” for qual­i­ty.

    “Most of the rest of what we oper­ate is help­ing give peo­ple a voice broad­ly and mak­ing sure that every­one can share their opin­ion,” he said. “That’s not this. This is a space that is ded­i­cat­ed to high-qual­i­ty and curat­ed news.”

    But when New York Times reporter Marc Tra­cy asked how includ­ing Bre­it­bart served that cause, Zucker­berg empha­sized its pol­i­tics, not its report­ing. “Part of hav­ing this be a trust­ed source is that it needs to have a diver­si­ty of views in there, so I think you want to have con­tent that rep­re­sents dif­fer­ent per­spec­tives,” he said. Zucker­berg reit­er­at­ed that these per­spec­tives should com­ply with Facebook’s stan­dards, and he was cagey about Bre­it­bart’s pres­ence, say­ing that “hav­ing some­one be pos­si­ble or eli­gi­ble to show up” doesn’t guar­an­tee fre­quent place­ment. “But I cer­tain­ly think you want to include a breadth of con­tent in there,” he said.

    Face­book hasn’t released a full list of News part­ners, so we don’t know the project’s full scope. Bre­it­bart is hard­ly the only right-lean­ing name on Facebook’s list, which includes Nation­al Review, The Wash­ing­ton Times, and News Corp’s own Fox News. But it has faced unique chal­lenges to its edi­to­r­i­al integri­ty — includ­ing, in recent years, some of Bre­it­bart’s own for­mer staff denounc­ing its poli­cies.
    ...

    So as we can see, Face­book faces some chal­lenges with its new Face­book News and fact check­ing ser­vices. Enor­mous chal­lenges that are the same under­ly­ing chal­lenge: the chron­ic decep­tion at the foun­da­tion of far right world­views and the enor­mous oppor­tu­ni­ty social media cre­ates for prof­itably spread­ing those lies. And as we can also see, Face­book is, true to form, fail­ing immense­ly at over­com­ing those chal­lenges. Along with fail­ing the enor­mous meta-chal­lenge of over­com­ing Face­book’s insa­tiable cor­po­rate greed, also true to form.

    Posted by Pterrafractyl | December 2, 2019, 1:55 pm

Post a comment