Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

For The Record  

FTR #1074 FakeBook: Walkin’ the Snake on the Earth Island with Facebook (FascisBook, Part 2; In Your Facebook, Part 4)

Dave Emory's entire lifetime of work is available on a flash drive that can be obtained HERE. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by the fall of 2017. The new drive is available for a tax-deductible contribution of $65.00 or more.

WFMU-FM is pod­cast­ing For The Record–You can sub­scribe to the pod­cast HERE.

You can sub­scribe to e‑mail alerts from Spitfirelist.com HERE.

You can sub­scribe to RSS feed from Spitfirelist.com HERE.

You can sub­scribe to the com­ments made on pro­grams and posts–an excel­lent source of infor­ma­tion in, and of, itself, HERE.

Please con­sid­er sup­port­ing THE WORK DAVE EMORY DOES.

This broad­cast was record­ed in one, 60-minute seg­ment.

Intro­duc­tion: We have spo­ken repeat­ed­ly about the Nazi tract Ser­pen­t’s Walk, in which the Third Reich goes under­ground, buys into the opin­ion-form­ing media and, even­tu­al­ly, takes over.

Hitler, the Third Reich and their actions are glo­ri­fied and memo­ri­al­ized. The essence of the book is syn­op­sized on the back cov­er:

"It assumes that Hitler's warrior elite — the SS — didn't give up their struggle for a White world when they lost the Second World War. Instead their survivors went underground and adopted some of the tactics of their enemies: they began building their economic muscle and buying into the opinion-forming media. A century after the war they are ready to challenge the democrats and Jews for the hearts and minds of White Americans, who have begun to have their fill of government-enforced multi-culturalism and 'equality.' "

Some­thing anal­o­gous is hap­pen­ing in Ukraine and India.

In FTR #889, we not­ed that Pierre Omid­yar, a dar­ling of the so-called “pro­gres­sive” sec­tor for his found­ing of The Inter­cept, was deeply involved with the financ­ing of the ascent of both Naren­dra Mod­i’s Hin­dut­va fas­cist BJP and the OUN/B suc­ces­sor orga­ni­za­tions in Ukraine.

Omidyar's anointment as an icon of investigative reporting could not be more ironic, in that journalists and critics of his fascist allies in Ukraine and India are being repressed and murdered, thereby furthering the suppression of truth in those societies. This suppression of truth feeds into the Serpent's Walk scenario.

This program supplements past coverage of Facebook in FTR #'s 718, 946, 1021 and 1039, noting how Facebook has networked with the very Hindutva fascist Indian elements and OUN/B successor organizations in Ukraine. This networking has been undertaken ostensibly to combat fake news. The reality may well be that the Facebook/BJP-RSS/OUN/B links generate fake news, rather than interdicting it. The fake news so generated, however, will be to the liking of the fascists in power in both countries, manifesting as a "Serpent's Walk" revisionist scenario.

Key ele­ments of dis­cus­sion and analy­sis include:

  1. Indian politics has been largely dominated by fake news, spread by social media: " . . . . In the continuing Indian elections, as 900 million people are voting to elect representatives to the lower house of the Parliament, disinformation and hate speech are drowning out truth on social media networks in the country and creating a public health crisis like the pandemics of the past century. This contagion of a staggering amount of morphed images, doctored videos and text messages is spreading largely through messaging services and influencing what India's voters watch and read on their smartphones. A recent study by Microsoft found that over 64 percent of Indians encountered fake news online, the highest reported among the 22 countries surveyed. . . . These platforms are filled with fake news and disinformation aimed at influencing political choices during the Indian elections. . . ."
  2. Naren­dra Mod­i’s Hin­dut­va fas­cist BJP has been the pri­ma­ry ben­e­fi­cia­ry of fake news, and his regime has part­nered with Face­book: ” . . . . The hear­ing was an exer­cise in absur­dist the­ater because the gov­ern­ing B.J.P. has been the chief ben­e­fi­cia­ry of divi­sive con­tent that reach­es mil­lions because of the way social media algo­rithms, espe­cial­ly Face­book, ampli­fy ‘engag­ing’ arti­cles. . . .”
  3. Rajesh Jain is among those BJP func­tionar­ies who serve Face­book, as well as the Hin­dut­va fas­cists: ” . . . . By the time Rajesh Jain was scal­ing up his oper­a­tions in 2013, the BJP’s infor­ma­tion tech­nol­o­gy (IT) strate­gists had begun inter­act­ing with social media plat­forms like Face­book and its part­ner What­sApp. If sup­port­ers of the BJP are to be believed, the par­ty was bet­ter than oth­ers in util­is­ing the micro-tar­get­ing poten­tial of the plat­forms. How­ev­er, it is also true that Facebook’s employ­ees in India con­duct­ed train­ing work­shops to help the mem­bers of the BJP’s IT cell. . . .”
  4. Dr. Hiren Joshi is anoth­er of the BJP oper­a­tives who is heav­i­ly involved with Face­book. ” . . . . Also assist­ing the social media and online teams to build a larg­er-than-life image for Modi before the 2014 elec­tions was a team led by his right-hand man Dr Hiren Joshi, who (as already stat­ed) is a very impor­tant advis­er to Modi whose writ extends way beyond infor­ma­tion tech­nol­o­gy and social media. . . .  Joshi has had, and con­tin­ues to have, a close and long-stand­ing asso­ci­a­tion with Facebook’s senior employ­ees in India. . . .”
  5. Shiv­nath Thukral, who was hired by Face­book in 2017 to be its Pub­lic Pol­i­cy Direc­tor for India & South Asia, worked with Joshi’s team in 2014.  ” . . . . The third team, that was intense­ly focused on build­ing Modi’s per­son­al image, was head­ed by Hiren Joshi him­self who worked out of the then Gujarat Chief Minister’s Office in Gand­hi­na­gar. The mem­bers of this team worked close­ly with staffers of Face­book in India, more than one of our sources told us. As will be detailed lat­er, Shiv­nath Thukral, who is cur­rent­ly an impor­tant exec­u­tive in Face­book, worked with this team. . . .”
  6. An ostensibly remorseful BJP politician–Prodyut Bora–highlighted the dramatic effect Facebook and its WhatsApp subsidiary have had on India's politics: " . . . . In 2009, social media platforms like Facebook and WhatsApp had a marginal impact in India's 20 big cities. By 2014, however, it had virtually replaced the traditional mass media. In 2019, it will be the most pervasive media in the country. . . ."
  7. A concise statement about the relationship between the BJP and Facebook was issued by BJP IT cell official Vinit Goenka: " . . . . At one stage in our interview with [Vinit] Goenka that lasted over two hours, we asked him a pointed question: 'Who helped whom more, Facebook or the BJP?' He smiled and said: 'That's a difficult question. I wonder whether the BJP helped Facebook more than Facebook helped the BJP. You could say, we helped each other.' . . ."

In Ukraine, as well, Face­book and the OUN/B suc­ces­sor orga­ni­za­tions func­tion sym­bi­ot­i­cal­ly:

(Note that the Atlantic Coun­cil is dom­i­nant in the array of indi­vid­u­als and insti­tu­tions con­sti­tut­ing the Ukrain­ian fascist/Facebook coop­er­a­tive effort. We have spo­ken about the Atlantic Coun­cil in numer­ous pro­grams, includ­ing FTR #943. The orga­ni­za­tion has deep oper­a­tional links to ele­ments of U.S. intel­li­gence, as well as the OUN/B milieu that dom­i­nates the Ukrain­ian dias­po­ra.)

CrowdStrike, at the epicenter of the supposed Russian hacking controversy, is noteworthy in this context. Its co-founder and chief technology officer, Dmitri Alperovitch, is a senior fellow at the Atlantic Council, which is financed by elements that are at the foundation of fanning the flames of the New Cold War: "In this respect, it is worth noting that one of the commercial cybersecurity companies the government has relied on is Crowdstrike, which was one of the companies initially brought in by the DNC to investigate the alleged hacks. . . . Dmitri Alperovitch is also a senior fellow at the Atlantic Council. . . . The connection between [Crowdstrike co-founder and chief technology officer Dmitri] Alperovitch and the Atlantic Council has gone largely unremarked upon, but it is relevant given that the Atlantic Council—which is funded in part by the US State Department, NATO, the governments of Latvia and Lithuania, the Ukrainian World Congress, and the Ukrainian oligarch Victor Pinchuk—has been among the loudest voices calling for a new Cold War with Russia. As I pointed out in the pages of The Nation in November, the Atlantic Council has spent the past several years producing some of the most virulent specimens of the new Cold War propaganda. . . ."

In May of 2018, Face­book decid­ed to effec­tive­ly out­source the work of iden­ti­fy­ing pro­pa­gan­da and mis­in­for­ma­tion dur­ing elec­tions to the Atlantic Coun­cil.

” . . . . Face­book is part­ner­ing with the Atlantic Coun­cil in anoth­er effort to com­bat elec­tion-relat­ed pro­pa­gan­da and mis­in­for­ma­tion from pro­lif­er­at­ing on its ser­vice. The social net­work­ing giant said Thurs­day that a part­ner­ship with the Wash­ing­ton D.C.-based think tank would help it bet­ter spot dis­in­for­ma­tion dur­ing upcom­ing world elec­tions. The part­ner­ship is one of a num­ber of steps Face­book is tak­ing to pre­vent the spread of pro­pa­gan­da and fake news after fail­ing to stop it from spread­ing on its ser­vice in the run up to the 2016 U.S. pres­i­den­tial elec­tion. . . .”

Since autumn 2018, Face­book has looked to hire a pub­lic pol­i­cy man­ag­er for Ukraine. The job came after years of Ukraini­ans crit­i­ciz­ing the plat­form for take­downs of its activists’ pages and the spread of [alleged] Russ­ian dis­in­fo tar­get­ing Kyiv. Now, it appears to have one: @Kateryna_Kruk.— Christo­pher Miller (@ChristopherJM) June 3, 2019

Katery­na Kruk:

  1. Is Facebook’s Pub­lic Pol­i­cy Man­ag­er for Ukraine as of May of this year, accord­ing to her LinkedIn page.
  2. Worked as an ana­lyst and TV host for the Ukrain­ian ‘anti-Russ­ian pro­pa­gan­da’ out­fit Stop­Fake. Stop­Fake is the cre­ation of Ire­na Chalu­pa, who works for the Atlantic Coun­cil and the Ukrain­ian gov­ern­ment and appears to be the sis­ter of Andrea and Alexan­dra Chalu­pa.
  3. Joined the “Krem­lin Watch” team at the Euro­pean Val­ues think-tank, in Octo­ber of 2017.
  4. Received the Atlantic Coun­cil’s Free­dom award for her com­mu­ni­ca­tions work dur­ing the Euro­maid­an protests in June of 2014.
  5. Worked for OUN/B successor organization Svoboda during the Euromaidan protests. " . . . 'There are people who don't support Svoboda because of some of their slogans, but they know it's the most active political party and go to them for help,' said Svoboda volunteer Kateryna Kruk. . . ."
  6. Also has a num­ber of arti­cles on the Atlantic Council’s Blog. Here’s a blog post from August of 2018 where she advo­cates for the cre­ation of an inde­pen­dent Ukrain­ian Ortho­dox Church to dimin­ish the influ­ence of the Russ­ian Ortho­dox Church.
  7. Accord­ing to her LinkedIn page has also done exten­sive work for the Ukrain­ian gov­ern­ment. From March 2016 to Jan­u­ary 2017 she was the Strate­gic Com­mu­ni­ca­tions Man­ag­er for the Ukrain­ian par­lia­ment where she was respon­si­ble for social media and inter­na­tion­al com­mu­ni­ca­tions. From Jan­u­ary-April 2017 she was the Head of Com­mu­ni­ca­tions at the Min­istry of Health.
  8. Was not only a volunteer for Svoboda during the 2014 Euromaidan protests, but openly celebrated on twitter the May 2014 massacre in Odessa when the far right burned dozens of protestors alive. Kruk's twitter feed is set to private now so there isn't public access to her old tweet, but people have screen captures of it. Here's a tweet from Yasha Levine with a screenshot of Kruk's May 2, 2014 tweet where she writes: "#Odessa cleaned itself from terrorists, proud for city fighting for its identity.glory to fallen heroes.." She even threw in a "glory to fallen heroes" at the end of her tweet celebrating this massacre. Keep in mind that it was a month after this tweet that the Atlantic Council gave her that Freedom Award for her communications work during the protests.
  9. In 2014, . . .  tweet­ed that a man had asked her to con­vince his grand­son not to join the Azov Bat­tal­ion, a neo-Nazi mili­tia. “I couldn’t do it,” she said. “I thanked that boy and blessed him.” And he then trav­eled to Luhan­sk to fight pro-Russ­ian rebels.
  10. Lionized a Nazi sniper killed in Ukraine's civil war. In March 2018, a 19-year-old neo-Nazi named Andriy "Dilly" Krivich was shot and killed by a sniper. Krivich had been fighting with the fascist Ukrainian group Right Sector, and had posted photos on social media wearing Nazi German symbols. After he was killed, Kruk tweeted an homage to the teenage Nazi. (The Nazi was also lionized on Euromaidan Press' Facebook page.)
  11. Has staunchly defended the use of the slogan "Slava Ukraini," which was first coined and popularized by Nazi-collaborating fascists, and is now the official salute of Ukraine's army.
  12. Has also said that the Ukrain­ian fas­cist politi­cian Andriy Paru­biy, who co-found­ed a neo-Nazi par­ty before lat­er becom­ing the chair­man of Ukraine’s par­lia­ment the Rada, is “act­ing smart,” writ­ing, “Paru­biy touche.” . . . .

In the context of Facebook's institutional-level networking with fascists, it is worth noting that social media themselves have been cited as a contributing factor to right-wing domestic terrorism: " . . . The first is stochastic terrorism: 'The use of mass, public communication, usually against a particular individual or group, which incites or inspires acts of terrorism which are statistically probable but happen seemingly at random.' I encountered the idea in a Friday thread from data scientist Emily Gorcenski, who used it to tie together four recent attacks. . . ."

The pro­gram con­cludes with review (from FTR #1039) of the psy­cho­log­i­cal war­fare strat­e­gy adapt­ed by Cam­bridge Ana­lyt­i­ca to the polit­i­cal are­na. Christo­pher Wylie–the for­mer head of research at Cam­bridge Ana­lyt­i­ca who became one of the key insid­er whis­tle-blow­ers about how Cam­bridge Ana­lyt­i­ca oper­at­ed and the extent of Facebook’s knowl­edge about it–gave an inter­view to Cam­paign Mag­a­zine. (We dealt with Cam­bridge Ana­lyt­i­ca in FTR #‘s 946, 1021.) Wylie recounts how, as direc­tor of research at Cam­bridge Ana­lyt­i­ca, his orig­i­nal role was to deter­mine how the com­pa­ny could use the infor­ma­tion war­fare tech­niques used by SCL Group – Cam­bridge Analytica’s par­ent com­pa­ny and a defense con­trac­tor pro­vid­ing psy op ser­vices for the British mil­i­tary. Wylie’s job was to adapt the psy­cho­log­i­cal war­fare strate­gies that SCL had been using on the bat­tle­field to the online space. As Wylie put it:

“ . . . . When you are work­ing in infor­ma­tion oper­a­tions projects, where your tar­get is a com­bat­ant, the auton­o­my or agency of your tar­gets is not your pri­ma­ry con­sid­er­a­tion. It is fair game to deny and manip­u­late infor­ma­tion, coerce and exploit any men­tal vul­ner­a­bil­i­ties a per­son has, and to bring out the very worst char­ac­ter­is­tics in that per­son because they are an ene­my…But if you port that over to a demo­c­ra­t­ic sys­tem, if you run cam­paigns designed to under­mine people’s abil­i­ty to make free choic­es and to under­stand what is real and not real, you are under­min­ing democ­ra­cy and treat­ing vot­ers in the same way as you are treat­ing ter­ror­ists. . . . .”

Wylie also draws parallels between the psychological operations used on democratic audiences and the battlefield techniques used to build an insurgency.

1a. Following the sweeping victory of the BJP in India's elections, a victory that exceeded expectations, there's no shortage of questions about how the BJP managed such a resounding win despite what appeared to be growing popular frustration with the party just six months ago. And while the embrace of nationalism and sectarianism, along with the tensions with Pakistan, no doubt played a major role, it's also important to note the profound role social media played in this year's elections. Specifically, organized social media disinformation campaigns run by the BJP:

“India Has a Pub­lic Health Cri­sis. It’s Called Fake News.” by Samir Patil; The New York Times; 04/29/2019.

In the con­tin­u­ing Indi­an elec­tions, as 900 mil­lion peo­ple are vot­ing to elect rep­re­sen­ta­tives to the low­er house of the Par­lia­ment, dis­in­for­ma­tion and hate speech are drown­ing out truth on social media net­works in the coun­try and cre­at­ing a pub­lic health cri­sis like the pan­demics of the past cen­tu­ry.

This contagion of a staggering amount of morphed images, doctored videos and text messages is spreading largely through messaging services and influencing what India's voters watch and read on their smartphones. A recent study by Microsoft found that over 64 percent of Indians encountered fake news online, the highest reported among the 22 countries surveyed.

India has the most social media users, with 300 mil­lion users on Face­book, 200 mil­lion on What­sApp and 250 mil­lion using YouTube. Tik­Tok, the video mes­sag­ing ser­vice owned by a Chi­nese com­pa­ny, has more than 88 mil­lion users in India. And there are Indi­an mes­sag­ing appli­ca­tions such as ShareChat, which claims to have 40 mil­lion users and allows them to com­mu­ni­cate in 14 Indi­an lan­guages.

These plat­forms are filled with fake news and dis­in­for­ma­tion aimed at influ­enc­ing polit­i­cal choic­es dur­ing the Indi­an elec­tions. Some of the egre­gious instances are a made-up BBC sur­vey pre­dict­ing vic­to­ry for the gov­ern­ing Bharatiya Jana­ta Par­ty and a fake video of the oppo­si­tion Con­gress Par­ty pres­i­dent, Rahul Gand­hi, say­ing a machine can con­vert pota­toes into gold.

Fake sto­ries are spread by legions of online trolls and unsus­pect­ing users, with dan­ger­ous impact. A rumor spread through social media about child kid­nap­pers arriv­ing in var­i­ous parts of India has led to 33 deaths in 69 inci­dents of mob vio­lence since 2017, accord­ing to Indi­aSpend, a data jour­nal­ism web­site.

Six months before the 2014 gen­er­al elec­tions in India, 62 peo­ple were killed in sec­tar­i­an vio­lence and 50,000 were dis­placed from their homes in the north­ern state of Uttar Pradesh. Inves­ti­ga­tions by the police found that a fake video was shared on What­sApp to whip up sec­tar­i­an pas­sions.

In the lead-up to the elec­tions, the Indi­an gov­ern­ment sum­moned the top exec­u­tives of Face­book and Twit­ter to dis­cuss the cri­sis of coor­di­nat­ed mis­in­for­ma­tion, fake news and polit­i­cal bias on their plat­forms. In March, Joel Kaplan, Facebook’s glob­al vice pres­i­dent for pub­lic pol­i­cy, was called to appear before a com­mit­tee of 31 mem­bers of the Indi­an Par­lia­ment — who were most­ly from the rul­ing Bharatiya Jana­ta Par­ty — to dis­cuss “safe­guard­ing cit­i­zens’ rights on social/online news media plat­forms.”

The hear­ing was an exer­cise in absur­dist the­ater because the gov­ern­ing B.J.P. has been the chief ben­e­fi­cia­ry of divi­sive con­tent that reach­es mil­lions because of the way social media algo­rithms, espe­cial­ly Face­book, ampli­fy “engag­ing” arti­cles.

As else­where in the world, Face­book, Twit­ter and YouTube are ambiva­lent about tack­ling the prob­lem head-on for the fear of mak­ing deci­sions that invoke the wrath of nation­al polit­i­cal forces. The tightrope walk was evi­dent when in April, Face­book announced a ban on about 1,000 fake news pages tar­get­ing India. They includ­ed pages direct­ly asso­ci­at­ed with polit­i­cal par­ties.

Face­book announced that a major­i­ty of the pages were asso­ci­at­ed with the oppo­si­tion Indi­an Nation­al Con­gress par­ty, but it mere­ly named the tech­nol­o­gy com­pa­ny asso­ci­at­ed with the gov­ern­ing B.J.P. pages. Many news reports lat­er point­ed out that the pages relat­ed to the B.J.P. that were removed were far more con­se­quen­tial and reached mil­lions.

Ask­ing the social media plat­forms to fix the cri­sis is a deeply flawed approach because most of the dis­in­for­ma­tion is shared in a decen­tral­ized man­ner through mes­sag­ing. Seek­ing to mon­i­tor those mes­sages is a step toward accept­ing mass sur­veil­lance. The Indi­an gov­ern­ment loves the idea and has pro­posed laws that, among oth­er things, would break end-to-end encryp­tion and obtain user data with­out a court order. 

The idea of more effec­tive fact-check­ing has come up often in the debates around India’s dis­in­for­ma­tion con­ta­gion. But it comes with many con­cep­tu­al dif­fi­cul­ties: A large pro­por­tion of mes­sages shared on social net­works in India have lit­tle to do with ver­i­fi­able facts and ped­dle prej­u­diced opin­ions. Face­book India has a small 11- to 22-mem­ber fact-check­ing team for con­tent relat­ed to Indi­an elec­tions.

Fake news is not a tech­no­log­i­cal or sci­en­tif­ic prob­lem with a quick fix. It should be treat­ed as a new kind of pub­lic health cri­sis in all its social and human com­plex­i­ty. The answer might lie in look­ing back at how we respond­ed to the epi­demics, the infec­tious dis­eases in the 19th and ear­ly 20th cen­turies, which have sim­i­lar char­ac­ter­is­tics. . . .

1b. As the following article notes, the farcical nature of the BJP government asking Facebook to help with the disinformation crisis is made even more farcical by the fact that Facebook had previously conducted training workshops to help the BJP use Facebook more effectively. The article describes the IT cells that were set up by the BJP for the 2014 election to build a larger-than-life image for Modi. There were four cells.

One of those cells was run by Modi’s right hand man Dr Hiren Joshi. Joshi has had, and con­tin­ues to have, a close and long-stand­ing asso­ci­a­tion with Facebook’s senior employ­ees in India accord­ing to the arti­cle. Hiren’s team worked close­ly with Facebook’s staff. Shiv­nath Thukral, who was hired by Face­book in 2017 to be its Pub­lic Pol­i­cy Direc­tor for India & South Asia, worked with this team in 2014. And that’s just an overview of how tight­ly Face­book was work­ing with the BJP in 2014:

“Meet the advi­sors who helped make the BJP a social media pow­er­house of data and pro­pa­gan­da” by Cyril Sam & Paran­joy Guha Thakur­ta; Scroll.in; 05/06/2019.

By the time Rajesh Jain was scal­ing up his oper­a­tions in 2013, the BJP’s infor­ma­tion tech­nol­o­gy (IT) strate­gists had begun inter­act­ing with social media plat­forms like Face­book and its part­ner What­sApp. If sup­port­ers of the BJP are to be believed, the par­ty was bet­ter than oth­ers in util­is­ing the micro-tar­get­ing poten­tial of the plat­forms. How­ev­er, it is also true that Facebook’s employ­ees in India con­duct­ed train­ing work­shops to help the mem­bers of the BJP’s IT cell.

Help­ing par­ty func­tionar­ies were adver­tis­ing hon­chos like Sajan Raj Kurup, founder of Cre­ative­land Asia and Prahlad Kakkar, the well-known adver­tis­ing pro­fes­sion­al. Actor Anu­pam Kher became the pub­lic face of some of the adver­tis­ing cam­paigns. Also assist­ing the social media and online teams to build a larg­er-than-life image for Modi before the 2014 elec­tions was a team led by his right-hand man Dr Hiren Joshi, who (as already stat­ed) is a very impor­tant advis­er to Modi whose writ extends way beyond infor­ma­tion tech­nol­o­gy and social media.

Cur­rent­ly, Offi­cer On Spe­cial Duty in the Prime Minister’s Office, he is assist­ed by two young pro­fes­sion­al “techies,” Nirav Shah and Yash Rajiv Gand­hi. Joshi has had, and con­tin­ues to have, a close and long-stand­ing asso­ci­a­tion with Facebook’s senior employ­ees in India. In 2013, one of his impor­tant col­lab­o­ra­tors was Akhilesh Mishra who lat­er went on to serve as a direc­tor of the Indi­an government’s web­site, MyGov India – which is at present led by Arvind Gup­ta who was ear­li­er head of the BJP’s IT cell.

Mishra is CEO of Bluekraft Dig­i­tal Foun­da­tion. The Foun­da­tion has been linked to a dis­in­for­ma­tion web­site titled “The True Pic­ture,” has pub­lished books authored by Prime Min­is­ter Naren­dra Modi and pro­duces cam­paign videos for NaMo Tele­vi­sion, a 24 hour cable tele­vi­sion chan­nel ded­i­cat­ed to pro­mot­ing Modi.

The 2014 Modi pre-election campaign was inspired by the 2012 campaign to elect Barack Obama as the "world's first Facebook President." Some of the managers of the Modi campaign like Jain were apparently inspired by Sasha Issenberg's book on the topic, The Victory Lab: The Secret Science of Winning Campaigns. In the first data-led election in India in 2014, information was collected from every possible source to not just micro-target users but also fine-tune messages praising and "mythologising" Modi as the Great Leader who would usher in acche din for the country.

Four teams spear­head­ed the cam­paign. The first team was led by Mum­bai-based Jain who fund­ed part of the com­mu­ni­ca­tion cam­paign and also over­saw vot­er data analy­sis. He was helped by Shashi Shekhar Vem­pati in run­ning NITI and “Mis­sion 272+.” As already men­tioned, Shekhar had worked in Infos­ys and is at present the head of Prasar Bharati Cor­po­ra­tion which runs Door­dar­shan and All India Radio.

The sec­ond team was led by polit­i­cal strate­gist Prashant Kishor and his I‑PAC or Indi­an Polit­i­cal Action Com­mit­tee who super­vised the three-dimen­sion­al pro­jec­tion pro­gramme for Modi besides pro­grammes like Run for Uni­ty, Chai Pe Char­cha (or Dis­cus­sions Over Tea), Man­than (or Churn­ing) and Cit­i­zens for Account­able Gov­er­nance (CAG) that roped in man­age­ment grad­u­ates to gar­ner sup­port for Modi at large gath­er­ings. Hav­ing worked across the polit­i­cal spec­trum and oppor­tunis­ti­cal­ly switched affil­i­a­tion to those who backed (and paid) him, 41-year-old Kishor is cur­rent­ly the sec­ond-in-com­mand in Jana­ta Dal (Unit­ed) head­ed by Bihar Chief Min­is­ter Nitish Kumar.

The third team, that was intense­ly focused on build­ing Modi’s per­son­al image, was head­ed by Hiren Joshi him­self who worked out of the then Gujarat Chief Minister’s Office in Gand­hi­na­gar. The mem­bers of this team worked close­ly with staffers of Face­book in India, more than one of our sources told us. As will be detailed lat­er, Shiv­nath Thukral, who is cur­rent­ly an impor­tant exec­u­tive in Face­book, worked with this team. (We made a num­ber of tele­phone calls to Joshi’s office in New Delhi’s South Block seek­ing a meet­ing with him and also sent him an e‑mail mes­sage request­ing an inter­view but he did not respond.)

The fourth team was led by Arvind Gup­ta, the cur­rent CEO of MyGov.in, a social media plat­form run by the gov­ern­ment of India. He ran the BJP’s cam­paign based out of New Del­hi. When con­tact­ed, he too declined to speak on the record say­ing he is now with the gov­ern­ment and not a rep­re­sen­ta­tive of the BJP. He sug­gest­ed we con­tact Amit Malviya who is the present head of the BJP’s IT cell. He came on the line but declined to speak specif­i­cal­ly on the BJP’s rela­tion­ship with Face­book and What­sApp.

The four teams worked sep­a­rate­ly. “It was (like) a relay (race),” said Vinit Goen­ka who was then the nation­al co-con­ven­er of the BJP’s IT cell, adding: “The only knowl­edge that was shared (among the teams) was on a ‘need to know’ basis. That’s how any sen­si­ble organ­i­sa­tion works.”

From all accounts, Rajesh Jain worked inde­pen­dent­ly from his Low­er Par­el office and invest­ed his own funds to sup­port Modi and towards exe­cut­ing what he described as “Project 275 for 2014” in a blog post that he wrote in June 2011, near­ly three years before the elec­tions actu­al­ly took place. The BJP, of course, went on to win 282 seats in the 2014 Lok Sab­ha elec­tions, ten above the half-way mark, with a lit­tle over 31 per cent of the vote.

As an aside, it may be men­tioned in pass­ing that – like cer­tain for­mer bhak­ts or fol­low­ers of Modi – Jain today appears less than enthu­si­as­tic about the per­for­mance of the gov­ern­ment over the last four and a half years. He is cur­rent­ly engaged in pro­mot­ing a cam­paign called Dhan Vapasi (or “return our wealth”) which is aimed at mon­etis­ing sur­plus land and oth­er assets held by gov­ern­ment bod­ies, includ­ing defence estab­lish­ments, and pub­lic sec­tor under­tak­ings, for the ben­e­fit of the poor and the under­priv­i­leged. Dhan Vapasi, in his words, is all about mak­ing “every Indi­an rich and free.”

In one of his recent videos that are in the pub­lic domain, Jain remarked: “For the 2014 elec­tions, I had spent three years and my own mon­ey to build a team of 100 peo­ple to help with Modi’s cam­paign. Why? Because I trust­ed that a Modi-led BJP gov­ern­ment could end the Con­gress’ anti-pros­per­i­ty pro­grammes and put India on a path to pros­per­i­ty, a nayi disha (or new direc­tion). But four years have gone by with­out any sig­nif­i­cant change in pol­i­cy. India need­ed that to elim­i­nate the big and hame­sha (peren­ni­al) prob­lems of pover­ty, unem­ploy­ment and cor­rup­tion. The Modi-led BJP gov­ern­ment fol­lowed the same old failed pol­i­cy of increas­ing tax­es and spend­ing. The ruler changed, but the out­comes have not.”

As men­tioned, when we con­tact­ed 51-year-old Jain, who heads the Mum­bai-based Net­core group of com­pa­nies, said to be India’s biggest dig­i­tal media mar­ket­ing cor­po­rate group, he declined to be inter­viewed. Inci­den­tal­ly, he had till Octo­ber 2017 served on the boards of direc­tors of two promi­nent pub­lic sec­tor com­pa­nies. One was Nation­al Ther­mal Pow­er Cor­po­ra­tion (NTPC) – Jain has no expe­ri­ence in the pow­er sec­tor, just as Sam­bit Patra, BJP spokesper­son, who is an “inde­pen­dent” direc­tor on the board of the Oil and Nat­ur­al Gas Cor­po­ra­tion, has zero expe­ri­ence in the petro­le­um indus­try. Jain also served on the board of the Unique Iden­ti­fi­ca­tion Author­i­ty of India (UIDAI), which runs the Aad­har pro­gramme.

Unlike Jain who was not at all forth­com­ing, 44-year-old Prodyut Bora, founder of the BJP’s IT cell in 2007 (bare­ly a year after Face­book and Twit­ter had been launched) was far from ret­i­cent while speak­ing to us. He had resigned from the party’s nation­al exec­u­tive in Feb­ru­ary 2015 after ques­tion­ing Modi and Amit Shah’s “high­ly indi­vid­u­alised and cen­tralised style of deci­sion-mak­ing” that had led to the “sub­ver­sion of demo­c­ra­t­ic tra­di­tions” in the gov­ern­ment and in the par­ty.

Bora recalled how he was one of the first grad­u­ates from the lead­ing busi­ness school, the Indi­an Insti­tute of Man­age­ment, Ahmed­abad, to join the BJP because of his great admi­ra­tion for the then Prime Min­is­ter Atal Behari Vaj­pay­ee. It was at the behest of the then par­ty pres­i­dent Raj­nath Singh (who is now Union Home Min­is­ter) that he set up the party’s IT cell to enable its lead­ers to come clos­er to, and inter­act with, their sup­port­ers.

The cell, he told us, was cre­at­ed not with a man­date to abuse peo­ple on social media plat­forms. He lament­ed that “mad­ness” has now gripped the BJP and the desire to win elec­tions at any cost has “destroyed the very ethos” of the par­ty he was once a part of. Today, the Gur­gaon-based Bora runs a firm mak­ing air purifi­ca­tion equip­ment and is involved with an inde­pen­dent polit­i­cal par­ty in his home state, Assam.

He told us: “The process of being eco­nom­i­cal with the truth (in the BJP) began in 2014. The (elec­tion) cam­paign was send­ing out unver­i­fied facts, infomer­cials, memes, dodgy data and graphs. From there, fake news was one step up the curve. Lead­ers of polit­i­cal par­ties, includ­ing the BJP, like to out­source this work because they don’t want to leave behind dig­i­tal foot­prints. In 2009, social media plat­forms like Face­book and What­sApp had a mar­gin­al impact in India’s 20 big cities. By 2014, how­ev­er, it had vir­tu­al­ly replaced the tra­di­tion­al mass media. In 2019, it will be the most per­va­sive media in the coun­try.” . . . .

. . . . At one stage in our inter­view with [Vinit] Goen­ka that last­ed over two hours, we asked him a point­ed ques­tion: “Who helped whom more, Face­book or the BJP?”

He smiled and said: “That’s a dif­fi­cult ques­tion. I won­der whether the BJP helped Face­book more than Face­book helped the BJP. You could say, we helped each oth­er.”

1c. Accord­ing to Christo­pher Miller of RFERL, Face­book select­ed Katery­na Kruk for the posi­tion:

Since autumn 2018, Face­book has looked to hire a pub­lic pol­i­cy man­ag­er for Ukraine. The job came after years of Ukraini­ans crit­i­ciz­ing the plat­form for take­downs of its activists’ pages and the spread of Russ­ian dis­in­fo tar­get­ing Kyiv. Now, it appears to have one: @Kateryna_Kruk.— Christo­pher Miller (@ChristopherJM) June 3, 2019

Kruk’s LinkedIn page also lists her as being Facebook’s Pub­lic Pol­i­cy Man­ag­er for Ukraine as of May of this year.

Kruk  worked as an ana­lyst and TV host for the Ukrain­ian ‘anti-Russ­ian pro­pa­gan­da’ out­fit Stop­Fake. Stop­Fake is the cre­ation of Ire­na Chalu­pa, who works for the Atlantic Coun­cil and the Ukrain­ian gov­ern­ment and appears to be the sis­ter of Andrea and Alexan­dra Chalu­pa.

(As an exam­ple of how StopFake.org approach­es Ukraine’s far right, here’s a tweet from StopFake’s co-founder, Yevhen Fed­chenko, from May of 2018 where he com­plains about an arti­cle in Hro­madske Inter­na­tion­al that char­ac­ter­izes C14 as a neo-Nazi group:

“for Hro­madske C14 is ‘neo- nazi’, in real­i­ty one of them – Olek­san­dr Voitko – is a war vet­er­an and before going to the war – alum and fac­ul­ty at @MohylaJSchool, jour­nal­ist at For­eign news desk at Chan­nel 5. Now also active par­tic­i­pant of war vet­er­ans grass-root orga­ni­za­tion. https://t.co/QmaGnu6QGZ— Yevhen Fed­chenko (@yevhenfedchenko) May 5, 2018)

In October of 2017, Kruk joined the "Kremlin Watch" team at the European Values think-tank. In June of 2014, the Atlantic Council gave Kruk its Freedom award for her communications work during the Euromaidan protests. Kruk also has a number of articles on the Atlantic Council's Blog. Here's a blog post from August of 2018 where she advocates for the creation of an independent Ukrainian Orthodox Church to diminish the influence of the Russian Orthodox Church. Keep in mind that, in May of 2018, Facebook decided to effectively outsource the work of identifying propaganda and misinformation during elections to the Atlantic Council, so choosing someone like Kruk who already has the Atlantic Council's stamp of approval is in keeping with that trend.

Accord­ing to Kruk’s LinkedIn page she’s also done exten­sive work for the Ukrain­ian gov­ern­ment. From March 2016 to Jan­u­ary 2017 she was the Strate­gic Com­mu­ni­ca­tions Man­ag­er for the Ukrain­ian par­lia­ment where she was respon­si­ble for social media and inter­na­tion­al com­mu­ni­ca­tions. From Jan­u­ary-April 2017 she was the Head of Com­mu­ni­ca­tions at the Min­istry of Health.

Not only was Kruk a volunteer for Svoboda during the 2014 Euromaidan protests, she also openly celebrated on twitter the May 2014 massacre in Odessa when the far right burned dozens of protestors alive. Kruk's twitter feed is set to private now so there isn't public access to her old tweet, but people have screen captures of it. Here's a tweet from Yasha Levine with a screenshot of Kruk's May 2, 2014 tweet where she writes:
“#Odessa cleaned itself from ter­ror­ists, proud for city fight­ing for its identity.glory to fall­en heroes..”

She even threw in a "glory to fallen heroes" at the end of her tweet celebrating this massacre. Keep in mind that it was a month after this tweet that the Atlantic Council gave her that Freedom Award for her communications work during the protests.

An article from January of 2014 covers the then-ongoing Maidan square protests, in particular the growing presence of the far right in the protests and their attacks on left-wing protestors. Kruk is interviewed in the article and describes herself as a Svoboda volunteer. Beyond the fact that Kruk issued a tweet celebrating the Odessa massacre a few months later, the interview also stands out from a public relations standpoint: Kruk was explaining why average Ukrainians who don't necessarily support the far right should support it at that moment, which was one of the most useful messages she could have been sending for the far right at that time:

“The Ukrain­ian Nation­al­ism at the Heart of ‘Euro­maid­an’” by Alec Luhn; The Nation; 01/21/2014.

. . . . For now, Svoboda and other far-right movements like Right Sector are focusing on the protest-wide demands for civic freedoms and government accountability rather than overtly nationalist agendas. Svoboda enjoys a reputation as a party of action, responsive to citizens' problems. Noyevy cut an interview with The Nation short to help local residents who came with a complaint that a developer was tearing down a fence without permission.

“There are peo­ple who don’t sup­port Svo­bo­da because of some of their slo­gans, but they know it’s the most active polit­i­cal par­ty and go to them for help,” said Svo­bo­da vol­un­teer Katery­na Kruk. “Only Svo­bo­da is help­ing against land seizures in Kiev.” . . . .

1d. Kruk has man­i­fest­ed oth­er fas­cist sym­pa­thies and con­nec­tions:

  1. In 2014, she tweet­ed that a man had asked her to con­vince his grand­son not to join the Azov Bat­tal­ion, a neo-Nazi mili­tia. “I couldn’t do it,” she said. “I thanked that boy and blessed him.” And he then trav­eled to Luhan­sk to fight pro-Russ­ian rebels.
  2. Nazi sniper Dil­ly Krivich, posthu­mous­ly lion­ized by Katery­na Kruk

    In March 2018, a 19-year-old neo-Nazi named Andriy "Dilly" Krivich was shot and killed by a sniper. Krivich had been fighting with the fascist Ukrainian group Right Sector, and had posted photos on social media wearing Nazi German symbols. After he was killed, Kruk tweeted an homage to the teenage Nazi. (The Nazi was also lionized on Euromaidan Press' Facebook page.)

  3. Kruk has staunchly defended the use of the slogan "Slava Ukraini," which was first coined and popularized by Nazi-collaborating fascists, and is now the official salute of Ukraine's army.
  4. She has also said that the Ukrain­ian fas­cist politi­cian Andriy Paru­biy, who co-found­ed a neo-Nazi par­ty before lat­er becom­ing the chair­man of Ukraine’s par­lia­ment the Rada, is “act­ing smart,” writ­ing, “Paru­biy touche.” . . . .

“Facebook’s New Pub­lic Pol­i­cy Man­ag­er Is Nation­al­ist Hawk Who Vol­un­teered with Fas­cist Par­ty Dur­ing US-Backed Coup” by Ben Nor­ton; The Gray Zone; 6/4/2019.

. . . . Svo­bo­da is not the only Ukrain­ian fas­cist group Katery­na Kruk has expressed sup­port for. In 2014, she tweet­ed that a man had asked her to con­vince his grand­son not to join the Azov Bat­tal­ion, a neo-Nazi mili­tia. “I couldn’t do it,” she said. “I thanked that boy and blessed him.” And he then trav­eled to Luhan­sk to fight pro-Russ­ian rebels.

That's not all. In March 2018, a 19-year-old neo-Nazi named Andriy "Dilly" Krivich was shot and killed by a sniper. Krivich had been fighting with the fascist Ukrainian group Right Sector, and had posted photos on social media wearing Nazi German symbols. After he was killed, Kruk tweeted an homage to the teenage Nazi. (The Nazi was also lionized on Euromaidan Press' Facebook page.)

Kruk has staunchly defended the use of the slogan "Slava Ukraini," which was first coined and popularized by Nazi-collaborating fascists, and is now the official salute of Ukraine's army.

She has also said that the Ukrain­ian fas­cist politi­cian Andriy Paru­biy, who co-found­ed a neo-Nazi par­ty before lat­er becom­ing the chair­man of Ukraine’s par­lia­ment the Rada, is “act­ing smart,” writ­ing, “Paru­biy touche.” . . . .

2. The essence of the book Ser­pen­t’s Walk  is pre­sent­ed on the back cov­er:

Ser­pen­t’s Walk by “Ran­dolph D. Calver­hall;” Copy­right 1991 [SC]; Nation­al Van­guard Books; 0–937944-05‑X.

It assumes that Hitler’s war­rior elite — the SS — did­n’t give up their strug­gle for a White world when they lost the Sec­ond World War. Instead their sur­vivors went under­ground and adopt­ed some of the tac­tics of their ene­mies: they began build­ing their eco­nom­ic mus­cle and buy­ing into the opin­ion-form­ing media. A cen­tu­ry after the war they are ready to chal­lenge the democ­rats and Jews for the hearts and minds of White Amer­i­cans, who have begun to have their fill of gov­ern­ment-enforced mul­ti-cul­tur­al­ism and ‘equal­i­ty.’

3. This process is described in more detail in a pas­sage of text, con­sist­ing of a dis­cus­sion between Wrench (a mem­ber of this Under­ground Reich) and a mer­ce­nary named Less­ing.

Ser­pen­t’s Walk by “Ran­dolph D. Calver­hall;” Copy­right 1991 [SC]; Nation­al Van­guard Books; 0–937944-05‑X; pp. 42–43.

. . . . The SS . . . what was left of it . . . had business objectives before and during World War II. When the war was lost they just kept on, but from other places: Bogota, Asuncion, Buenos Aires, Rio de Janeiro, Mexico City, Colombo, Damascus, Dacca . . . you name it. They realized that the world is heading towards a 'corporacracy;' five or ten international super-companies that will run everything worth running by the year 2100. Those super-corporations exist now, and they're already dividing up the production and marketing of food, transport, steel and heavy industry, oil, the media, and other commodities. They're mostly conglomerates, with fingers in more than one pie . . . . We, the SS, have the say in four or five. We've been competing for the past sixty years or so, and we're slowly gaining . . . . About ten years ago, we swung a merger, a takeover, and got voting control of a supercorp that runs a small but significant chunk of the American media. Not openly, not with bands and trumpets or swastikas flying, but quietly: one huge corporation cuddling up to another one and gently munching it up, like a great, gubbing amoeba. Since then we've been replacing executives, pushing somebody out here, bringing somebody else in there. We've swung program content around, too. Not much, but a little, so it won't show. We've cut down on 'nasty-Nazi' movies . . . good guys in white hats and bad guys in black SS hats . . . lovable Jews versus fiendish Germans . . . and we have media psychologists, ad agencies, and behavior modification specialists working on image changes. . . .

4. The broad­cast address­es the grad­ual remak­ing of the image of the Third Reich that is rep­re­sent­ed in Ser­pen­t’s Walk. In the dis­cus­sion excerpt­ed above, this process is fur­ther described.

Ser­pen­t’s Walk by “Ran­dolph D. Calver­hall;” Copy­right 1991 [SC]; Nation­al Van­guard Books; 0–937944-05‑X; pp. 42–44.

. . . . Hell, if you can con granny into buy­ing Sug­ar Turds instead of Bran Farts, then why can’t you swing pub­lic opin­ion over to a cause as vital and impor­tant as ours?’ . . . In any case, we’re slow­ly replac­ing those neg­a­tive images with oth­ers: the ‘Good Bad Guy’ rou­tine’ . . . ‘What do you think of Jesse James? John Dillinger? Julius Cae­sar? Genghis Khan?’ . . . The real­i­ty may have been rough, but there’s a sort of glit­ter about most of those dudes: mean hon­chos but respectable. It’s all how you pack­age it. Opin­ion is a godamned com­mod­i­ty!’ . . . It works with any­body . . . Give it time. Aside from the media, we’ve been buy­ing up pri­vate schools . . . and help­ing some pub­lic ones through phil­an­thropic foun­da­tions . . . and work­ing on the church­es and the Born Agains. . . .

5. Through the years, we have high­light­ed the Nazi tract Ser­pen­t’s Walk, excerpt­ed above, which deals, in part, with the reha­bil­i­ta­tion of the Third Reich’s rep­u­ta­tion and the trans­for­ma­tion of Hitler into a hero.

In FTR #1015, we not­ed that a Ser­pen­t’s Walk sce­nario is indeed unfold­ing in India.

Key points of analy­sis and dis­cus­sion include:

  1. Narendra Modi's presence on the same book cover (along with Gandhi, Mandela, Obama and Hitler).
  2. Modi him­self has his own polit­i­cal his­to­ry with children’s books that pro­mote Hitler as a great leader: ” . . . . In 2004, reports sur­faced of high-school text­books in the state of Gujarat, which was then led by Mr. Modi, that spoke glow­ing­ly of Nazism and fas­cism. Accord­ing to ‘The Times of India,’ in a sec­tion called ‘Ide­ol­o­gy of Nazism,’ the text­book said Hitler had ‘lent dig­ni­ty and pres­tige to the Ger­man gov­ern­ment,’ ‘made untir­ing efforts to make Ger­many self-reliant’ and ‘instilled the spir­it of adven­ture in the com­mon peo­ple.’  . . . .”
  3. In India, many have a favor­able view of Hitler: ” . . . . as far back as 2002, the Times of India report­ed a sur­vey that found that 17 per­cent of stu­dents in elite Indi­an col­leges ‘favored Adolf Hitler as the kind of leader India ought to have.’ . . . . Con­sid­er Mein Kampf, Hitler’s auto­bi­og­ra­phy. Reviled it might be in the much of the world, but Indi­ans buy thou­sands of copies of it every month. As a recent paper in the jour­nal EPW tells us (PDF), there are over a dozen Indi­an pub­lish­ers who have edi­tions of the book on the mar­ket. Jaico, for exam­ple, print­ed its 55th edi­tion in 2010, claim­ing to have sold 100,000 copies in the pre­vi­ous sev­en years. (Con­trast this to the 3,000 copies my own 2009 book, Road­run­ner, has sold). In a coun­try where 10,000 copies sold makes a book a best­seller, these are sig­nif­i­cant num­bers. . . .”
  4. A class­room of school chil­dren filled with fans of Hitler had a very dif­fer­ent sen­ti­ment about Gand­hi. ” . . . . ‘He’s a cow­ard!’ That’s the obvi­ous flip side of this love of Hitler in India. It’s an implic­it rejec­tion of Gand­hi. . . .”
  5. Apparently, Mein Kampf has achieved gravitas among business students in India: " . . . . What's more, there's a steady trickle of reports that say it has become a must-read for business-school students; a management guide much like Spencer Johnson's Who Moved My Cheese or Edward de Bono's Lateral Thinking. If this undistinguished artist could take an entire country with him, I imagine the reasoning goes, surely his book has some lessons for future captains of industry? . . . ."

6. Christo­pher Wylie–the for­mer head of research at Cam­bridge Ana­lyt­i­ca who became one of the key insid­er whis­tle-blow­ers about how Cam­bridge Ana­lyt­i­ca oper­at­ed and the extent of Facebook’s knowl­edge about it–gave an inter­view last month to Cam­paign Mag­a­zine. (We dealt with Cam­bridge Ana­lyt­i­ca in FTR #‘s 946, 1021.)

Wylie recounts how, as direc­tor of research at Cam­bridge Ana­lyt­i­ca, his orig­i­nal role was to deter­mine how the com­pa­ny could use the infor­ma­tion war­fare tech­niques used by SCL Group – Cam­bridge Analytica’s par­ent com­pa­ny and a defense con­trac­tor pro­vid­ing psy op ser­vices for the British mil­i­tary. Wylie’s job was to adapt the psy­cho­log­i­cal war­fare strate­gies that SCL had been using on the bat­tle­field to the online space. As Wylie put it:

“ . . . . When you are work­ing in infor­ma­tion oper­a­tions projects, where your tar­get is a com­bat­ant, the auton­o­my or agency of your tar­gets is not your pri­ma­ry con­sid­er­a­tion. It is fair game to deny and manip­u­late infor­ma­tion, coerce and exploit any men­tal vul­ner­a­bil­i­ties a per­son has, and to bring out the very worst char­ac­ter­is­tics in that per­son because they are an ene­my…But if you port that over to a demo­c­ra­t­ic sys­tem, if you run cam­paigns designed to under­mine people’s abil­i­ty to make free choic­es and to under­stand what is real and not real, you are under­min­ing democ­ra­cy and treat­ing vot­ers in the same way as you are treat­ing ter­ror­ists. . . . .”

Wylie also draws parallels between the psychological operations used on democratic audiences and the battlefield techniques used to build an insurgency. It starts with targeting people more prone to having erratic traits, paranoia or conspiratorial thinking, and getting them to "like" a group on social media. The information you're feeding this target audience may or may not be real. The important thing is that it's content that they already agree with so that "it feels good to see that information." Keep in mind that one of the goals of the 'psychographic profiling' that Cambridge Analytica engaged in was to identify traits like neuroticism.

Wylie goes on to describe the next step in this insurgency-building technique: keep building up the interest in the social media group that you're directing this target audience towards until it hits around 1,000–2,000 people. Then set up a real-life event dedicated to the chosen disinformation topic in some local area and try to get as many of your target audience to show up. Even if only 5 percent of them show up, that's still 50–100 people converging on some local coffee shop or whatever. The people meet each other in real life and start talking about "all these things that you've been seeing online in the depths of your den and getting angry about". This target audience starts believing that no one else is talking about this stuff because "they don't want you to know what the truth is". As Wylie puts it, "What started out as a fantasy online gets ported into the temporal world and becomes real to you because you see all these people around you."

“Cam­bridge Ana­lyt­i­ca whistle­blow­er Christo­pher Wylie: It’s time to save cre­ativ­i­ty” by Kate Magee; Cam­paign; 11/05/2018.

In the ear­ly hours of 17 March 2018, the 28-year-old Christo­pher Wylie tweet­ed: “Here we go….”

Lat­er that day, The Observ­er and The New York Times pub­lished the sto­ry of Cam­bridge Analytica’s mis­use of Face­book data, which sent shock­waves around the world, caused mil­lions to #Delete­Face­book, and led the UK Infor­ma­tion Commissioner’s Office to fine the site the max­i­mum penal­ty for fail­ing to pro­tect users’ infor­ma­tion. Six weeks after the sto­ry broke, Cam­bridge Ana­lyt­i­ca closed. . . .

. . . . He believes that poor use of data is killing good ideas. And that, unless effec­tive reg­u­la­tion is enact­ed, society’s wor­ship of algo­rithms, unchecked data cap­ture and use, and the like­ly spread of AI to all parts of our lives is caus­ing us to sleep­walk into a bleak future.

Not only are such cir­cum­stances a threat to adland – why do you need an ad to tell you about a prod­uct if an algo­rithm is choos­ing it for you? – it is a threat to human free will. “Cur­rent­ly, the only moral­i­ty of the algo­rithm is to opti­mise you as a con­sumer and, in many cas­es, you become the prod­uct. There are very few exam­ples in human his­to­ry of indus­tries where peo­ple them­selves become prod­ucts and those are scary indus­tries – slav­ery and the sex trade. And now, we have social media,” Wylie says.

“The prob­lem with that, and what makes it inher­ent­ly dif­fer­ent to sell­ing, say, tooth­paste, is that you’re sell­ing parts of peo­ple or access to peo­ple. Peo­ple have an innate moral worth. If we don’t respect that, we can cre­ate indus­tries that do ter­ri­ble things to peo­ple. We are [head­ing] blind­ly and quick­ly into an envi­ron­ment where this men­tal­i­ty is going to be ampli­fied through AI every­where. We’re humans, we should be think­ing about peo­ple first.”

His words car­ry weight, because he’s been on the dark side. He has seen what can hap­pen when data is used to spread mis­in­for­ma­tion, cre­ate insur­gen­cies and prey on the worst of people’s char­ac­ters.

The polit­i­cal bat­tle­field

A quick refresh­er on the scan­dal, in Wylie’s words: Cam­bridge Ana­lyt­i­ca was a com­pa­ny spun out of SCL Group, a British mil­i­tary con­trac­tor that worked in infor­ma­tion oper­a­tions for armed forces around the world. It was con­duct­ing research on how to scale and digi­tise infor­ma­tion war­fare – the use of infor­ma­tion to con­fuse or degrade the effi­ca­cy of an ene­my. . . .

. . . . As direc­tor of research, Wylie’s orig­i­nal role was to map out how the com­pa­ny would take tra­di­tion­al infor­ma­tion oper­a­tions tac­tics into the online space – in par­tic­u­lar, by pro­fil­ing peo­ple who would be sus­cep­ti­ble to cer­tain mes­sag­ing.

This mor­phed into the polit­i­cal are­na. After Wylie left, the com­pa­ny worked on Don­ald Trump’s US pres­i­den­tial cam­paign and – pos­si­bly – the UK’s Euro­pean Union ref­er­en­dum. In Feb­ru­ary 2016, Cam­bridge Analytica’s for­mer chief exec­u­tive, Alexan­der Nix, wrote in Cam­paign that his com­pa­ny had “already helped super­charge Leave.EU’s social-media cam­paign”. Nix has stren­u­ous­ly denied this since, includ­ing to MPs.

It was this shift from the bat­tle­field to pol­i­tics that made Wylie uncom­fort­able. “When you are work­ing in infor­ma­tion oper­a­tions projects, where your tar­get is a com­bat­ant, the auton­o­my or agency of your tar­gets is not your pri­ma­ry con­sid­er­a­tion. It is fair game to deny and manip­u­late infor­ma­tion, coerce and exploit any men­tal vul­ner­a­bil­i­ties a per­son has, and to bring out the very worst char­ac­ter­is­tics in that per­son because they are an ene­my,” he says.

“But if you port that over to a demo­c­ra­t­ic sys­tem, if you run cam­paigns designed to under­mine people’s abil­i­ty to make free choic­es and to under­stand what is real and not real, you are under­min­ing democ­ra­cy and treat­ing vot­ers in the same way as you are treat­ing ter­ror­ists.”

One of the rea­sons these tech­niques are so insid­i­ous is that being a tar­get of a dis­in­for­ma­tion cam­paign is “usu­al­ly a plea­sur­able expe­ri­ence”, because you are being fed con­tent with which you are like­ly to agree. “You are being guid­ed through some­thing that you want to be true,” Wylie says.

To build an insur­gency, he explains, you first tar­get peo­ple who are more prone to hav­ing errat­ic traits, para­noia or con­spir­a­to­r­i­al think­ing, and get them to “like” a group on social media. They start engag­ing with the con­tent, which may or may not be true; either way “it feels good to see that infor­ma­tion”.

When the group reach­es 1,000 or 2,000 mem­bers, an event is set up in the local area. Even if only 5% show up, “that’s 50 to 100 peo­ple flood­ing a local cof­fee shop”, Wylie says. This, he adds, val­i­dates their opin­ion because oth­er peo­ple there are also talk­ing about “all these things that you’ve been see­ing online in the depths of your den and get­ting angry about”.

Peo­ple then start to believe the rea­son it’s not shown on main­stream news chan­nels is because “they don’t want you to know what the truth is”. As Wylie sums it up: “What start­ed out as a fan­ta­sy online gets port­ed into the tem­po­ral world and becomes real to you because you see all these peo­ple around you.” . . . . 

. . . . Psy­cho­graph­ic poten­tial

One such appli­ca­tion was Cam­bridge Analytica’s use of psy­cho­graph­ic pro­fil­ing, a form of seg­men­ta­tion that will be famil­iar to mar­keters, although not in com­mon use.

The com­pa­ny used the OCEAN mod­el, which judges peo­ple on scales of the Big Five per­son­al­i­ty traits: open­ness to expe­ri­ences, con­sci­en­tious­ness, extra­ver­sion, agree­able­ness and neu­roti­cism.

Wylie believes the method could be useful in the commercial space. For example, a fashion brand that creates bold, colourful, patterned clothes might want to segment wealthy women by extroversion because they will be more likely to buy bold items, he says.

Scep­tics say Cam­bridge Analytica’s approach may not be the dark mag­ic that Wylie claims. Indeed, when speak­ing to Cam­paign in June 2017, Nix unchar­ac­ter­is­ti­cal­ly played down the method, claim­ing the com­pa­ny used “pret­ty bland data in a pret­ty enter­pris­ing way”.

But Wylie argues that peo­ple under­es­ti­mate what algo­rithms allow you to do in pro­fil­ing. “I can take pieces of infor­ma­tion about you that seem innocu­ous, but what I’m able to do with an algo­rithm is find pat­terns that cor­re­late to under­ly­ing psy­cho­log­i­cal pro­files,” he explains.

“I can ask whether you lis­ten to Justin Bieber, and you won’t feel like I’m invad­ing your pri­va­cy. You aren’t nec­es­sar­i­ly aware that when you tell me what music you lis­ten to or what TV shows you watch, you are telling me some of your deep­est and most per­son­al attrib­ut­es.”

This is where mat­ters stray into the ques­tion of ethics. Wylie believes that as long as the com­mu­ni­ca­tion you are send­ing out is clear, not coer­cive or manip­u­la­tive, it’s fine, but it all depends on con­text. “If you are a beau­ty com­pa­ny and you use facets of neu­roti­cism – which Cam­bridge Ana­lyt­i­ca did – and you find a seg­ment of young women or men who are more prone to body dys­mor­phia, and one of the proac­tive actions they take is to buy more skin cream, you are exploit­ing some­thing which is unhealthy for that per­son and doing dam­age,” he says. “The ethics of using psy­cho­me­t­ric data real­ly depend on whether it is pro­por­tion­al to the ben­e­fit and util­i­ty that the cus­tomer is get­ting.” . . .
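
As an illustration of the kind of inference Wylie is describing, here is a toy sketch of OCEAN-style psychographic segmentation from innocuous signals such as music and TV preferences. The feature names, weights and threshold are invented for illustration only; an actual model would be fit to survey or behavioral data rather than hand-assigned.

```python
# A toy illustration of OCEAN-style psychographic segmentation from innocuous
# signals. The feature names, the weights and the 0.6 threshold are invented
# for illustration; real psychographic models are fit to labelled survey data.

HYPOTHETICAL_EXTRAVERSION_WEIGHTS = {
    "likes_pop_music": 0.3,
    "posts_group_photos": 0.4,
    "attends_live_events": 0.3,
}

def extraversion_score(profile: dict) -> float:
    """Crude 0..1 extraversion estimate from binary behavioural signals."""
    return sum(w for feat, w in HYPOTHETICAL_EXTRAVERSION_WEIGHTS.items() if profile.get(feat))

users = [
    {"id": "a", "likes_pop_music": True, "posts_group_photos": True, "attends_live_events": True},
    {"id": "b", "likes_pop_music": True, "posts_group_photos": False, "attends_live_events": False},
]

# Segment the audience the way the fashion-brand example suggests: target the
# high-extraversion slice with bold, colourful creative.
segment = [u["id"] for u in users if extraversion_score(u) >= 0.6]
print(segment)  # ['a']
```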

Clash­es with Face­book

Wylie is opposed to self-reg­u­la­tion, because indus­tries won’t become con­sumer cham­pi­ons – they are, he says, too con­flict­ed.

“Face­book has known about what Cam­bridge Ana­lyt­i­ca was up to from the very begin­ning of those projects,” Wylie claims. “They were noti­fied, they autho­rised the appli­ca­tions, they were giv­en the terms and con­di­tions of the app that said explic­it­ly what it was doing. They hired peo­ple who worked on build­ing the app. I had legal cor­re­spon­dence with their lawyers where they acknowl­edged it hap­pened as far back as 2016.”

He wants to cre­ate a set of endur­ing prin­ci­ples that are hand­ed over to a tech­ni­cal­ly com­pe­tent reg­u­la­tor to enforce. “Cur­rent­ly, the indus­try is not respond­ing to some pret­ty fun­da­men­tal things that have hap­pened on their watch. So I think it is the right place for gov­ern­ment to step in,” he adds.

Facebook in particular, he argues, is “the most obstinate and belligerent in recognising the harm that has been done and actually doing something about it”. . . .

7. Social media have been under­scored as a con­tribut­ing fac­tor to right-wing, domes­tic ter­ror­ism. . . . The first is sto­chas­tic ter­ror­ism: ‘The use of mass, pub­lic com­mu­ni­ca­tion, usu­al­ly against a par­tic­u­lar indi­vid­ual or group, which incites or inspires acts of ter­ror­ism which are sta­tis­ti­cal­ly prob­a­ble but hap­pen seem­ing­ly at ran­dom.’ I encoun­tered the idea in a Fri­day thread from data sci­en­tist Emi­ly Gorcens­ki, who used it to tie togeth­er four recent attacks. . . . .”

“Why Social Media is Friend to Far-Right Politi­cians Around the World” by Casey New­ton; The Verge; 10/30/2018.

The Links Between Social Media, Domes­tic Ter­ror­ism and the Retreat from Democ­ra­cy

It was an awful week­end of hate-fueled vio­lence, ugly rhetoric, and wor­ri­some retreats from our demo­c­ra­t­ic ideals. Today I’m focused on two ways of fram­ing what we’re see­ing, from the Unit­ed States to Brazil. While nei­ther offers any com­fort, they do give help­ful names to phe­nom­e­na I expect will be with us for a long while.

The first is sto­chas­tic ter­ror­ism: “The use of mass, pub­lic com­mu­ni­ca­tion, usu­al­ly against a par­tic­u­lar indi­vid­ual or group, which incites or inspires acts of ter­ror­ism which are sta­tis­ti­cal­ly prob­a­ble but hap­pen seem­ing­ly at ran­dom.” I encoun­tered the idea in a Fri­day thread from data sci­en­tist Emi­ly Gorcens­ki, who used it to tie togeth­er four recent attacks.

In her thread, Gorcens­ki argues that var­i­ous right-wing con­spir­a­cy the­o­ries and frauds, ampli­fied both through main­stream and social media, have result­ed in a grow­ing num­ber of cas­es where men snap and com­mit vio­lence. “Right-wing media is a gra­di­ent push­ing right­wards, toward vio­lence and oppres­sion,” she wrote. “One of the symp­toms of this is that you are basi­cal­ly guar­an­teed to gen­er­ate ran­dom ter­ror­ists. Like pop­corn ker­nels pop­ping.”
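
Gorcenski's "popcorn kernel" image can be restated as simple probability: if each person exposed to a stream of incitement has only a minuscule chance of acting, the chance that someone, somewhere acts rises quickly with the size of the audience. Below is a back-of-the-envelope sketch; the audience sizes and the per-person probability are invented for illustration only.

```python
# Back-of-the-envelope illustration of "statistically probable but seemingly
# random" incidents arising from mass exposure. The audience sizes and the tiny
# per-person probability are invented numbers, not estimates.

def prob_at_least_one(audience: int, p_individual: float) -> float:
    """P(at least one person acts) when each exposed person acts independently with probability p."""
    return 1 - (1 - p_individual) ** audience

p = 1e-6  # a hypothetical one-in-a-million chance that any given exposed person acts
for audience in (10_000, 1_000_000, 10_000_000):
    print(f"audience {audience:>10,}: P(at least one incident) = {prob_at_least_one(audience, p):.3f}")
# The individual event stays unpredictable; at scale, some incident becomes near-certain.
```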

On Sat­ur­day, anoth­er ker­nel popped. Robert A. Bow­ers, the sus­pect in a shoot­ing at a syn­a­gogue that left 11 peo­ple dead, was steeped in online con­spir­a­cy cul­ture. He post­ed fre­quent­ly to Gab, a Twit­ter clone that empha­sizes free speech and has become a favored social net­work among white nation­al­ists. Julie Turke­witz and Kevin Roose described his hate­ful views in the New York Times:

After open­ing an account on it in Jan­u­ary, he had shared a stream of anti-Jew­ish slurs and con­spir­a­cy the­o­ries. It was on Gab where he found a like-mind­ed com­mu­ni­ty, repost­ing mes­sages from Nazi sup­port­ers.

“Jews are the chil­dren of Satan,” read Mr. Bowers’s biog­ra­phy.

Bowers is in custody — his life was saved by Jewish doctors and nurses — and presumably will never go free again. Gab's life, however, may be imperiled. Two payment processors, PayPal and Stripe, de-platformed the site, as did its cloud host, Joyent. The site went down on Monday after its hosting provider, GoDaddy, told it to find another one. Its founder posted defiant messages on Twitter and elsewhere promising it would survive.

Gab hosts a lot of deeply upsetting content, and to its supporters, that's the point. Free speech is a right, their reasoning goes, and it ought to be exercised. Certainly it seems wrong to suggest that Gab or any other single platform "caused" Bowers to act. Hatred, after all, is an ecosystem. But his action came amid a concerted effort to focus attention on a caravan of migrants coming to the United States in search of refuge.

Right-wing media, most notably Fox News, has advanced the idea that the car­a­van is linked to Jew­ish bil­lion­aire (and Holo­caust sur­vivor) George Soros. An actu­al Con­gress­man, Flori­da Repub­li­can Matt Gaetz, sug­gest­ed the car­a­van was fund­ed by Soros. Bow­ers enthu­si­as­ti­cal­ly pushed these con­spir­a­cy the­o­ries on social media.

In his final post on Gab, Bow­ers wrote: “I can’t sit by and watch my peo­ple get slaugh­tered. Screw your optics. I’m going in.”

The indi­vid­ual act was ran­dom. But it had become sta­tis­ti­cal­ly prob­a­ble thanks to the rise of anti-immi­grant rhetoric across all man­ner of media. And I fear we will see far more of it before the cur­rent fever breaks.

The sec­ond con­cept I’m think­ing about today is demo­c­ra­t­ic reces­sion. The idea, which is rough­ly a decade old, is that democ­ra­cy is in retreat around the globe. The Econ­o­mist cov­ered it in Jan­u­ary:

The tenth edi­tion of the Econ­o­mist Intel­li­gence Unit’s Democ­ra­cy Index sug­gests that this unwel­come trend remains firm­ly in place. The index, which com­pris­es 60 indi­ca­tors across five broad categories—electoral process and plu­ral­ism, func­tion­ing of gov­ern­ment, polit­i­cal par­tic­i­pa­tion, demo­c­ra­t­ic polit­i­cal cul­ture and civ­il liberties—concludes that less than 5% of the world’s pop­u­la­tion cur­rent­ly lives in a “full democ­ra­cy”. Near­ly a third live under author­i­tar­i­an rule, with a large share of those in Chi­na. Over­all, 89 of the 167 coun­tries assessed in 2017 received low­er scores than they had the year before.
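
For readers unfamiliar with how such an index is assembled, here is a minimal sketch of the aggregation the passage describes: five category scores (each 0–10) averaged into an overall score and mapped to a regime band. The cut-offs shown are the commonly cited EIU thresholds and should be treated as an assumption here; the real index is built from 60 underlying indicators, not a single example country.

```python
# Minimal sketch of a composite index like the Democracy Index: average the
# five category scores (0-10 each) and map the result to a regime band.
# The band cut-offs are the commonly cited EIU thresholds (an assumption here);
# the actual index aggregates 60 indicators within these categories.

CATEGORIES = ("electoral_process", "functioning_of_government",
              "political_participation", "political_culture", "civil_liberties")

def overall_score(scores: dict) -> float:
    return sum(scores[c] for c in CATEGORIES) / len(CATEGORIES)

def regime_type(score: float) -> str:
    if score > 8:
        return "full democracy"
    if score > 6:
        return "flawed democracy"
    if score > 4:
        return "hybrid regime"
    return "authoritarian"

# Hypothetical country scores, purely for illustration.
example = {c: s for c, s in zip(CATEGORIES, (9.2, 7.5, 6.7, 6.3, 8.1))}
print(regime_type(overall_score(example)))  # flawed democracy
```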

In Jan­u­ary, The Econ­o­mist con­sid­ered Brazil a “flawed democ­ra­cy.” But after this week­end, the coun­try may under­go a more pre­cip­i­tous decline in demo­c­ra­t­ic free­doms. As expect­ed, far-right can­di­date Jair Bol­sonaro, who speaks approv­ing­ly of the country’s pre­vi­ous mil­i­tary dic­ta­tor­ship, hand­i­ly won elec­tion over his left­ist rival.

In the best piece I read today, BuzzFeed's Ryan Broderick — who was in Brazil for the election — puts Bolsonaro's election into the context of the internet and social platforms. Broderick focuses on the symbiosis between internet media, which excels at promoting a sense of perpetual crisis and outrage, and far-right leaders who promise a return to normalcy.

Typ­i­cal­ly, large right-wing news chan­nels or con­ser­v­a­tive tabloids will then take these sto­ries going viral on Face­book and repack­age them for old­er, main­stream audi­ences. Depend­ing on your country’s media land­scape, the far-right trolls and influ­encers may try to hijack this social-media-to-news­pa­per-to-tele­vi­sion pipeline. Which then cre­ates more con­tent to screen­shot, meme, and share. It’s a feed­back loop.

Populist leaders and the legions of influencers riding their wave know they can create filter bubbles inside of platforms like Facebook or YouTube that promise a safer time, one that never existed in the first place, before the protests, the violence, the cascading crises, and endless news cycles. Donald Trump wants to Make America Great Again; Bolsonaro wants to bring back Brazil's military dictatorship; Shinzo Abe wants to recapture Japan's imperial past; Germany's AfD performed the best with older East German voters longing for the days of authoritarianism. All of these leaders promise to close borders, to make things safe. Which will, of course, usually exacerbate the problems they're promising to disappear. Another feedback loop.

A third feed­back loop, of course, is between a social media ecosys­tem pro­mot­ing a sense of per­pet­u­al cri­sis and out­rage, and the ran­dom-but-sta­tis­ti­cal­ly-prob­a­ble pro­duc­tion of domes­tic ter­ror­ists.

Per­haps the glob­al rise of author­i­tar­i­ans and big tech plat­forms are mere­ly cor­re­lat­ed, and no cau­sa­tion can be proved. But I increas­ing­ly won­der whether we would ben­e­fit if tech com­pa­nies assumed that some lev­el of cau­sa­tion was real — and, assum­ing that it is, what they might do about it.

DEMOCRACY

On Social Media, No Answers for Hate

You don't have to go to Gab to see hateful posts. Sheera Frenkel, Mike Isaac, and Kate Conger report on how the past week's domestic terror attacks played out in once-happier places, most notably Instagram:

On Mon­day, a search on Insta­gram, the pho­to-shar­ing site owned by Face­book, pro­duced a tor­rent of anti-Semit­ic images and videos uploaded in the wake of Saturday’s shoot­ing at a Pitts­burgh syn­a­gogue.

A search for the word “Jews” dis­played 11,696 posts with the hash­tag “#jewsdid911,” claim­ing that Jews had orches­trat­ed the Sept. 11 ter­ror attacks. Oth­er hash­tags on Insta­gram ref­er­enced Nazi ide­ol­o­gy, includ­ing the num­ber 88, an abbre­vi­a­tion used for the Nazi salute “Heil Hitler.”

Attacks on Jew­ish peo­ple ris­ing on Insta­gram and Twit­ter, researchers say

Just before the syn­a­gogue attack took place on Sat­ur­day, David Ingram post­ed this sto­ry about an alarm­ing rise in attacks on Jews on social plat­forms:

Samuel Wool­ley, a social media researcher who worked on the study, ana­lyzed more than 7 mil­lion tweets from August and Sep­tem­ber and found an array of attacks, also often linked to Soros. About a third of the attacks on Jews came from auto­mat­ed accounts known as “bots,” he said.

“It’s real­ly spik­ing dur­ing this elec­tion,” Wool­ley, direc­tor of the Dig­i­tal Intel­li­gence Lab­o­ra­to­ry, which stud­ies the inter­sec­tion of tech­nol­o­gy and soci­ety, said in a tele­phone inter­view. “We’re see­ing what we think is an attempt to silence con­ver­sa­tions in the Jew­ish com­mu­ni­ty.”

Russ­ian dis­in­for­ma­tion on Face­book tar­get­ed Ukraine well before the 2016 U.S. elec­tion

Dana Priest, James Jaco­by and Anya Bourg report that Ukraine’s expe­ri­ence with infor­ma­tion war­fare offered an ear­ly — and unheed­ed — warn­ing to Face­book:

To get Zuckerberg’s atten­tion, the pres­i­dent post­ed a ques­tion for a town hall meet­ing at Facebook’s Sil­i­con Val­ley head­quar­ters. There, a mod­er­a­tor read it aloud.

“Mark, will you estab­lish a Face­book office in Ukraine?” the mod­er­a­tor said, chuck­ling, accord­ing to a video of the assem­bly. The room of young employ­ees rip­pled with laugh­ter. But the government’s sug­ges­tion was seri­ous: It believed that a Kiev office, staffed with peo­ple famil­iar with Ukraine’s polit­i­cal sit­u­a­tion, could help solve Facebook’s high-lev­el igno­rance about Russ­ian infor­ma­tion war­fare. . . . .

Discussion

13 comments for “FTR #1074 FakeBook: Walkin’ the Snake on the Earth Island with Facebook (FascisBook, Part 2; In Your Facebook, Part 4)”

  1. The Ukrainian publication 112.ua has a piece on the appointment of Kateryna Kruk as Facebook's new head of Public Policy for Ukraine that provides some of the backstory for how this position got created in the first place. And, yes, it's rather disturbing. Surprise!

    So back in 2015, Facebook was engaged in widespread blocking of users from Ukraine. It got to the point where then-President Petro Poroshenko asked Mark Zuckerberg to open a Facebook office in Ukraine to handle the issue of when someone should be blocked. At that point, it was Facebook's office in Ireland that made those decisions for Ukraine's users. Zuckerberg responded that the blocking of the Ukrainian accounts was done right because "language of hostility" was used in them. Given the civil war at the time, and the fact that neo-Nazi movements were playing a major role in fighting on the pro-Kiev side of that war, we can get a pretty good idea of what that "language of hostility" would have sounded like.

    Flash for­ward to Octo­ber of 2018, and Face­book announces a com­pe­ti­tion for the posi­tion of pub­lic pol­i­cy man­ag­er for Ukraine. As Face­book’s post put it, “We are look­ing for a good com­mu­ni­ca­tor that can com­bine the pas­sion for the Inter­net ser­vices Face­book pro­vides and has deep knowl­edge of the polit­i­cal and reg­u­la­to­ry dynam­ics in Ukraine and, prefer­ably, in all the East­ern Euro­pean region,” and that some­one with expe­ri­ence work­ing on polit­i­cal issues with the par­tic­i­pa­tion of the Ukrain­ian gov­ern­ment would be pre­ferred.

    Interestingly, one source in the article indicates that the new manager position won't be handling the banning of users. Of course, the article also references the Public Policy team. In other words, Kruk is going to have a bunch of people working under her, so it seems likely that people working under Kruk would be the ones actually handling the bannings. Plus, one of the responsibilities Kruk will have includes helping to "create rules in the Internet sector", and it's very possible tweaking those rules will be how Kruk prevents the need for future bannings. And the article explicitly says that it is expected that after this new appointment the blocking of posts of Ukrainian users would stop.

    So in 2015, Ukraine's government complains about Facebook banning people for what sounds like hate speech and requests a special Ukrainian-specific office for handling who gets banned, and four years later Facebook basically does exactly that:

    112 UA

    Face­book appoints man­ag­er for Ukraine: What does it mean?

    Katery­na Kruk became a Face­book pub­lic pol­i­cy man­ag­er for Ukraine

    Author : News Agency 112.ua
    14:08, 7 June 2019

    In ear­ly June, Face­book for the first time in its his­to­ry appoint­ed a pub­lic pol­i­cy man­ag­er for Ukraine – she is Ukrain­ian Katery­na Kruk. It is expect­ed that after this appoint­ment the block­ing of posts of Ukrain­ian users would stop, as well as “gross and unpro­fes­sion­al atti­tude of Face­book towards Ukraine and Ukraini­ans.”

    What is the idea?

    In spring of 2015, due to the mass block­ing of Ukrain­ian users, the Ukrain­ian Face­book group addressed the founder of Face­book Mark Zucker­berg with a request to cre­ate a Ukrain­ian admin­is­tra­tion. For­mer Ukrain­ian Pres­i­dent Petro Poroshenko also asked Zucker­berg to open a Face­book office in Ukraine.

    Zucker­berg said that the con­tro­ver­sial pub­li­ca­tions for which Ukrain­ian users were banned, were delet­ed right­ly, because “lan­guage of hos­til­i­ty” was used in them. At the same time, Zucker­berg said that the Ukrain­ian social net­work­ing seg­ment is mod­er­at­ed by an office in Ire­land, and the issue of open­ing a rep­re­sen­ta­tive office of a social net­work in Ukraine can be con­sid­ered over time.

    And in Octo­ber 2018, Face­book announced a com­pe­ti­tion for the posi­tion of pub­lic pol­i­cy man­ag­er for Ukraine.

    “We are look­ing for a good com­mu­ni­ca­tor that can com­bine the pas­sion for the Inter­net ser­vices Face­book pro­vides and has deep knowl­edge of the polit­i­cal and reg­u­la­to­ry dynam­ics in Ukraine and, prefer­ably, in all the East­ern Euro­pean region,” said the com­ment to the posi­tion.

    In addi­tion, it was not­ed that a can­di­date who is acquaint­ed with politi­cians and gov­ern­ment offi­cials, and has expe­ri­ence in work­ing on polit­i­cal issues with the par­tic­i­pa­tion of the Ukrain­ian gov­ern­ment, will be giv­en pref­er­ence to. It was report­ed that the new man­ag­er would work at the Face­book office in War­saw.

    In ear­ly June, it became known that Katery­na Kruk became Face­book’s Pub­lic Pol­i­cy Man­ag­er for Ukraine.

    Who is Katery­na Kruk?

    ...

    The Deputy Min­is­ter of Infor­ma­tion Pol­i­cy Dmytro Zolo­tukhin believes that Kate­ri­na Kruk is the best choice that could have been made by Face­book. The min­istry promised to sup­port Kruk in all mat­ters that will be in the inter­ests of Ukraine.

    What will the new man­ag­er do?

    It is not­ed, that the Pub­lic Pol­i­cy team is engaged in com­mu­ni­ca­tion between politi­cians and Face­book: responds to inquiries from politi­cians and reg­u­la­to­ry bod­ies, helps to cre­ate rules in the Inter­net sec­tor, shares infor­ma­tion about prod­ucts and activ­i­ties of the com­pa­ny.

    In addi­tion, the man­ag­er will mon­i­tor the leg­is­la­tion and reg­u­la­to­ry issues relat­ed to Face­book in Ukraine, form coali­tions with oth­er orga­ni­za­tions to pro­mote the polit­i­cal goals of the social net­work, com­mu­ni­cate with the media and rep­re­sent the inter­ests of Face­book before state agen­cies.

    At the same time, some sources report that the work with blocked groups, user bans and prob­lems with the adver­tis­ing cab­i­nets are not includ­ed in the man­ager’s respon­si­bil­i­ties.

    What’s next?

    Ear­li­er Dmytro Zolo­tukhin not­ed that, first of all the new man­ag­er would act in the inter­ests of the com­pa­ny, which pays him/her.

    “How­ev­er, on the oth­er hand, this will relieve us of sus­pi­cion of who real­ly solves con­flict sit­u­a­tions with Ukrain­ian users,” Zolo­tukhin wrote in the fall last year.

    And after announc­ing the results of the com­pe­ti­tion for the vacan­cy, he expressed the hope that after this appoint­ment “gross and unpro­fes­sion­al atti­tude of Face­book towards Ukraine and Ukraini­ans.”

    ———–

    “Face­book appoints man­ag­er for Ukraine: What does it mean?” by News Agency 112.ua; 112.ua, 06/07/2019

    “In ear­ly June, Face­book for the first time in its his­to­ry appoint­ed a pub­lic pol­i­cy man­ag­er for Ukraine – she is Ukrain­ian Katery­na Kruk. It is expect­ed that after this appoint­ment the block­ing of posts of Ukrain­ian users would stop, as well as “gross and unpro­fes­sion­al atti­tude of Face­book towards Ukraine and Ukraini­ans.””

    No more block­ings of Ukrain­ian posts. That’s the expec­ta­tion now that Kruk has this new posi­tion. It’s quite a change from 2015 when Mark Zucker­berg him­self defend­ed the block­ing of such posts because they vio­lat­ed Face­book’s terms of use by includ­ing “lan­guage of hos­til­i­ty”, which is almost cer­tain­ly a euphemism for Nazi hate speech. But Zucker­berg said the com­pa­ny would con­sid­er Petro Poroshenko’s request for a spe­cial Ukrain­ian office to han­dle these issues and in 2018 the com­pa­ny decid­ed to go ahead with the idea:

    ...
    What is the idea?

    In spring of 2015, due to the mass block­ing of Ukrain­ian users, the Ukrain­ian Face­book group addressed the founder of Face­book Mark Zucker­berg with a request to cre­ate a Ukrain­ian admin­is­tra­tion. For­mer Ukrain­ian Pres­i­dent Petro Poroshenko also asked Zucker­berg to open a Face­book office in Ukraine.

    Zucker­berg said that the con­tro­ver­sial pub­li­ca­tions for which Ukrain­ian users were banned, were delet­ed right­ly, because “lan­guage of hos­til­i­ty” was used in them. At the same time, Zucker­berg said that the Ukrain­ian social net­work­ing seg­ment is mod­er­at­ed by an office in Ire­land, and the issue of open­ing a rep­re­sen­ta­tive office of a social net­work in Ukraine can be con­sid­ered over time.

    And in Octo­ber 2018, Face­book announced a com­pe­ti­tion for the posi­tion of pub­lic pol­i­cy man­ag­er for Ukraine.

    “We are look­ing for a good com­mu­ni­ca­tor that can com­bine the pas­sion for the Inter­net ser­vices Face­book pro­vides and has deep knowl­edge of the polit­i­cal and reg­u­la­to­ry dynam­ics in Ukraine and, prefer­ably, in all the East­ern Euro­pean region,” said the com­ment to the posi­tion.

    In addi­tion, it was not­ed that a can­di­date who is acquaint­ed with politi­cians and gov­ern­ment offi­cials, and has expe­ri­ence in work­ing on polit­i­cal issues with the par­tic­i­pa­tion of the Ukrain­ian gov­ern­ment, will be giv­en pref­er­ence to. It was report­ed that the new man­ag­er would work at the Face­book office in War­saw.
    ...

    And while it doesn't sound like the manager (Kruk) will be directly responsible for handling bannings, it also sounds like she's going to be managing a team of people, so we would expect that team to be the ones actually handling the bannings. Plus, Kruk's responsibilities for things like helping to "create rules in the Internet sector" are a far more effective way to lift the rules that were resulting in these bans:

    ...
    What will the new man­ag­er do?

    It is not­ed, that the Pub­lic Pol­i­cy team is engaged in com­mu­ni­ca­tion between politi­cians and Face­book: responds to inquiries from politi­cians and reg­u­la­to­ry bod­ies, helps to cre­ate rules in the Inter­net sec­tor, shares infor­ma­tion about prod­ucts and activ­i­ties of the com­pa­ny.

    In addi­tion, the man­ag­er will mon­i­tor the leg­is­la­tion and reg­u­la­to­ry issues relat­ed to Face­book in Ukraine, form coali­tions with oth­er orga­ni­za­tions to pro­mote the polit­i­cal goals of the social net­work, com­mu­ni­cate with the media and rep­re­sent the inter­ests of Face­book before state agen­cies.

    At the same time, some sources report that the work with blocked groups, user bans and prob­lems with the adver­tis­ing cab­i­nets are not includ­ed in the man­ager’s respon­si­bil­i­ties.
    ...

    And note how Ukraine's Deputy Minister of Information Policy has already pledged to support Kruk and has expressed his hope that this appointment will end the "gross and unprofessional attitude of Facebook towards Ukraine and Ukrainians":

    ...
    The Deputy Min­is­ter of Infor­ma­tion Pol­i­cy Dmytro Zolo­tukhin believes that Kate­ri­na Kruk is the best choice that could have been made by Face­book. The min­istry promised to sup­port Kruk in all mat­ters that will be in the inter­ests of Ukraine.

    ...

    What’s next?

    Ear­li­er Dmytro Zolo­tukhin not­ed that, first of all the new man­ag­er would act in the inter­ests of the com­pa­ny, which pays him/her.

    “How­ev­er, on the oth­er hand, this will relieve us of sus­pi­cion of who real­ly solves con­flict sit­u­a­tions with Ukrain­ian users,” Zolo­tukhin wrote in the fall last year.

    And after announc­ing the results of the com­pe­ti­tion for the vacan­cy, he expressed the hope that after this appoint­ment “gross and unpro­fes­sion­al atti­tude of Face­book towards Ukraine and Ukraini­ans.”
    ...

    Now, it's important to acknowledge that there have undoubtedly been some bans of Ukrainians that were the result of pro-Kremlin trolls (and vice versa). That was, in fact, one of the big complaints of Ukrainians in 2015: that pro-Kremlin trolls were effectively gaming Facebook's systems to get Ukrainians banned. But there's also no denying that Ukraine is awash in fascist propaganda backed by the government that undoubtedly violates Facebook's various rules against hate speech. And now a far right sympathizer, Kruk, holds this new position.

    So it's going to be really interesting to see what happens with the neo-Nazi groups with government backing like Azov. As the following article from April of this year describes, Azov members first started experiencing bannings in 2015, and the group was quietly banned entirely at some point this year. Except, despite that ban, Azov remains on Facebook, just under new pages. Olena Semenyaka, the international spokesperson for the movement, has had multiple pages banned but has multiple pages still up. And that's going to be an important thing to keep in mind as this plays out: even if Facebook bans these far right groups, getting around those bans appears to be trivial:

    Radio Free Europe/Radio Lib­er­ty

    Face­book ‘Bans’ Ukrain­ian Far-Right Group Over ‘Hate Speech’ — But Get­ting Rid Of It Isn’t Easy

    By Christo­pher Miller
    April 16, 2019 18:50 GMT

    KYIV — Ukraine’s mil­i­taris­tic, far-right Azov move­ment and its var­i­ous branch­es have used Face­book to pro­mote its anti­de­mo­c­ra­t­ic, ultra­na­tion­al­ist mes­sages and recruit new mem­bers since its incep­tion at the start of the coun­try’s war against Rus­sia-backed sep­a­ratists five years ago.

    The Amer­i­can social-net­work­ing giant has also been an impor­tant plat­form for Azov’s glob­al expan­sion and attempts to legit­imize itself among like­mind­ed Amer­i­can and Euro­pean white nation­al­ists.

    Face­book has occa­sion­al­ly tak­en down pages and groups asso­ci­at­ed with Azov when they have been found to be in vio­la­tion of its poli­cies on hate speech and the depic­tion of vio­lence.

    The first Face­book removals occurred in 2015, Azov mem­bers told RFE/RL.

    But after con­tin­u­ous, repeat vio­la­tions Azov — which includes many war vet­er­ans and mil­i­tant mem­bers with open­ly neo-Nazi views who have been involved in attacks on LGBT activists, Romany encamp­ments, and wom­en’s groups — is now offi­cial­ly banned from hav­ing any pres­ence on Face­book, the social net­work has con­firmed to RFE/RL.

    Despite the ban, how­ev­er, which qui­et­ly came into force months ago, a defi­ant Azov and its mem­bers remain active on the social net­work under pseu­do­nyms and name vari­a­tions, under­scor­ing the dif­fi­cul­ty Face­book faces in com­bat­ing extrem­ism on a plat­form with some 2.32 bil­lion month­ly active users.

    ‘Orga­nized Hate’ Not Allowed

    For years, Face­book has strug­gled with how to deal with extrem­ist con­tent and it has been crit­i­cized for mov­ing too slow­ly on it or behav­ing reac­tive­ly.

    The issue was put front-and-cen­ter in August 2017, when the plat­form was used to orga­nize a white suprema­cist ral­ly in Char­lottesville, Vir­ginia, that turned dead­ly.

    The issue was raised most recent­ly in the after­math of the Christchurch mas­sacre that left 50 peo­ple dead. The shoot­er livestreamed the killing on his Face­book page. The com­pa­ny said it had “quick­ly removed both the shooter’s Face­book and Insta­gram accounts and the video,” and was tak­ing down posts of praise or sup­port for the shoot­ing.

    Joe Mul­hall, a senior researcher at the U.K.-based antifas­cist orga­ni­za­tion Hope Not Hate, told RFE/RL by phone that Char­lottesville brought a “sea change” when it came to social media com­pa­nies and Face­book, in par­tic­u­lar pay­ing atten­tion to extrem­ists.

    For instance, he praised the com­pa­ny for its “robust” action against the far-right founder of the Eng­lish Defence League, Tom­my Robin­son, who had repeat­ed­ly vio­lat­ed Face­book’s poli­cies on hate speech.

    But Mul­hall said Face­book more often acts only after “they’re pub­licly shamed.”

    “When there is mas­sive pub­lic pres­sure, they act; or when they think they can get away with things, they don’t,” he added.

    This may explain why it took Face­book years to ban the Azov move­ment, which received sig­nif­i­cant media atten­tion fol­low­ing a series of vio­lent attacks against minori­ties in 2018.

    Face­book did not spec­i­fy what exact­ly tipped the scale. But respond­ing to an RFE/RL e‑mail request on April 15, a Face­book spokesper­son wrote that the com­pa­ny has been tak­ing down accounts asso­ci­at­ed with the Azov Reg­i­ment, Nation­al Corps, and Nation­al Mili­tia – the group’s mil­i­tary, polit­i­cal, and vig­i­lante wings, respec­tive­ly — on Face­book for months, cit­ing its poli­cies against hate groups. The spokesper­son did not say when exact­ly the ban came into force.

    In its pol­i­cy on dan­ger­ous indi­vid­u­als and orga­ni­za­tions, Face­book defines a hate orga­ni­za­tion as “any asso­ci­a­tion of three or more peo­ple that is orga­nized under a name, sign, or sym­bol and that has an ide­ol­o­gy, state­ments, or phys­i­cal actions that attack indi­vid­u­als based on char­ac­ter­is­tics, includ­ing race, reli­gious affil­i­a­tion, nation­al­i­ty, eth­nic­i­ty, gen­der, sex, sex­u­al ori­en­ta­tion, seri­ous dis­ease or dis­abil­i­ty.”

    Defend­ing ‘Ukrain­ian Order’

    Azov and its lead­er­ship con­sid­er them­selves defend­ers of what they call “Ukrain­ian order,” or an illib­er­al and anti­de­mo­c­ra­t­ic soci­ety. They are anti-Russ­ian and also against Ukraine’s poten­tial acces­sion to the Euro­pean Union and NATO.

    Their ide­al Ukraine is a “Ukraine for Ukraini­ans,” as Ole­na Semenya­ka, the inter­na­tion­al sec­re­tary for Azov’s polit­i­cal wing, the Nation­al Corps, told RFE/RL last year. Azov’s sym­bol is sim­i­lar to the Nazi Wolf­san­gel but the group claims it is com­prised of the let­ters N and I, mean­ing “nation­al idea.”

    ...

    Ear­li­er in March, the U.S. State Depart­ment referred to the Nation­al Corps as a “nation­al­ist hate group” in its annu­al human rights report.

    Azov has inducted thousands of militant members in recent years in torchlight ceremonies with chants of "Glory to Ukraine! Death to enemies." The movement claims to have roughly 10,000 members in its broader movement and the ability to mobilize some 2,000 to the streets within hours. A large part of its recruiting has been done using slickly produced videos and advertisements of its fight clubs, hardcore concerts, and fashion lines promoted on Facebook and other social networks.

    Still On Face­book, But Mov­ing Else­where

    Many of those may no longer be found on Face­book after the ban. But some are like­ly to stick around, since many Azov fac­tions and lead­ers remain on the plat­form or else have opened fresh accounts after orig­i­nal ones were removed, RFE/RL research shows.

    For instance, the Azov Reg­i­ment, whose offi­cial page under the Polk Azov name was removed months ago, has also opened a fresh page with a new name: Tviy Polk (Your Reg­i­ment).

    Its lead­ers have react­ed sim­i­lar­ly, as have the Nation­al Corps and Nation­al Mili­tia, open­ing dozens of new accounts under slight­ly altered names to make it more dif­fi­cult for Face­book to track them. A sim­ple search on April 16 brought up more than a dozen active accounts.

    Semenya­ka has had at least two per­son­al accounts removed by Face­book. But two oth­er accounts belong­ing to her and opened with dif­fer­ent spellings of her name — Lena Semenya­ka and Hele­na Semenya­ka — are still open, as is a group page she man­ages.

    In a post to the Lena account on April 11, Semenya­ka wrote after the take­down of her orig­i­nal account that Face­book “is get­ting increas­ing­ly anti-intel­lec­tu­al.”

    “If you wish to keep in touch, please sub­scribe to some oth­er per­ma­nent and tem­po­rary plat­forms,” she con­tin­ued, adding a link to her Face­book-owned Insta­gram account.

    Then, high­light­ing what’s become a pop­u­lar new des­ti­na­tion for far-right and oth­er extrem­ist groups, she also announced the open­ing of a new Nation­al Corps Inter­na­tion­al account — on the mes­sen­ger app Telegram.

    ———-

    “Face­book ‘Bans’ Ukrain­ian Far-Right Group Over ‘Hate Speech’ — But Get­ting Rid Of It Isn’t Easy” by Christo­pher Miller; Radio Free Europe/Radio Lib­er­ty; 04/16/2019

    “Despite the ban, however, which quietly came into force months ago, a defiant Azov and its members remain active on the social network under pseudonyms and name variations, underscoring the difficulty Facebook faces in combating extremism on a platform with some 2.32 billion monthly active users.”

    The total ban on Azov took place months ago and yet Azov mem­bers still have an active pres­ence, includ­ing the move­men­t’s spokesper­son, Ole­na Semenya­ka, who has two per­son­al pages and a group page still up as of April, along with an account on Face­book-owned Insta­gram:

    ...
    Defend­ing ‘Ukrain­ian Order’

    Azov and its lead­er­ship con­sid­er them­selves defend­ers of what they call “Ukrain­ian order,” or an illib­er­al and anti­de­mo­c­ra­t­ic soci­ety. They are anti-Russ­ian and also against Ukraine’s poten­tial acces­sion to the Euro­pean Union and NATO.

    Their ide­al Ukraine is a “Ukraine for Ukraini­ans,” as Ole­na Semenya­ka, the inter­na­tion­al sec­re­tary for Azov’s polit­i­cal wing, the Nation­al Corps, told RFE/RL last year. Azov’s sym­bol is sim­i­lar to the Nazi Wolf­san­gel but the group claims it is com­prised of the let­ters N and I, mean­ing “nation­al idea.”

    ...

    Still On Face­book, But Mov­ing Else­where

    Many of those may no longer be found on Face­book after the ban. But some are like­ly to stick around, since many Azov fac­tions and lead­ers remain on the plat­form or else have opened fresh accounts after orig­i­nal ones were removed, RFE/RL research shows.

    For instance, the Azov Reg­i­ment, whose offi­cial page under the Polk Azov name was removed months ago, has also opened a fresh page with a new name: Tviy Polk (Your Reg­i­ment).

    Its lead­ers have react­ed sim­i­lar­ly, as have the Nation­al Corps and Nation­al Mili­tia, open­ing dozens of new accounts under slight­ly altered names to make it more dif­fi­cult for Face­book to track them. A sim­ple search on April 16 brought up more than a dozen active accounts.

    Semenya­ka has had at least two per­son­al accounts removed by Face­book. But two oth­er accounts belong­ing to her and opened with dif­fer­ent spellings of her name — Lena Semenya­ka and Hele­na Semenya­ka — are still open, as is a group page she man­ages.

    In a post to the Lena account on April 11, Semenya­ka wrote after the take­down of her orig­i­nal account that Face­book “is get­ting increas­ing­ly anti-intel­lec­tu­al.”

    “If you wish to keep in touch, please sub­scribe to some oth­er per­ma­nent and tem­po­rary plat­forms,” she con­tin­ued, adding a link to her Face­book-owned Insta­gram account.

    Then, high­light­ing what’s become a pop­u­lar new des­ti­na­tion for far-right and oth­er extrem­ist groups, she also announced the open­ing of a new Nation­al Corps Inter­na­tion­al account — on the mes­sen­ger app Telegram.
    ...

    And note how Face­book would­n’t actu­al­ly say what exact­ly trig­gered the com­pa­ny to ful­ly ban the group after years of indi­vid­ual ban­nings. That’s part of what’s going to be inter­est­ing to watch with the cre­ation of a new Pub­lic Pol­i­cy office for Ukraine: those rules are going to become a lot clear­er after fig­ures like Kruk learn what they are and can shape them:

    ...
    Joe Mul­hall, a senior researcher at the U.K.-based antifas­cist orga­ni­za­tion Hope Not Hate, told RFE/RL by phone that Char­lottesville brought a “sea change” when it came to social media com­pa­nies and Face­book, in par­tic­u­lar pay­ing atten­tion to extrem­ists.

    For instance, he praised the com­pa­ny for its “robust” action against the far-right founder of the Eng­lish Defence League, Tom­my Robin­son, who had repeat­ed­ly vio­lat­ed Face­book’s poli­cies on hate speech.

    But Mul­hall said Face­book more often acts only after “they’re pub­licly shamed.”

    “When there is mas­sive pub­lic pres­sure, they act; or when they think they can get away with things, they don’t,” he added.

    This may explain why it took Face­book years to ban the Azov move­ment, which received sig­nif­i­cant media atten­tion fol­low­ing a series of vio­lent attacks against minori­ties in 2018.

    Face­book did not spec­i­fy what exact­ly tipped the scale. But respond­ing to an RFE/RL e‑mail request on April 15, a Face­book spokesper­son wrote that the com­pa­ny has been tak­ing down accounts asso­ci­at­ed with the Azov Reg­i­ment, Nation­al Corps, and Nation­al Mili­tia – the group’s mil­i­tary, polit­i­cal, and vig­i­lante wings, respec­tive­ly — on Face­book for months, cit­ing its poli­cies against hate groups. The spokesper­son did not say when exact­ly the ban came into force.
    ...

    So getting around those rules is also presumably going to get a lot easier once a figure like Kruk can inform her fellow far right activists of what exactly those rules are...assuming the rules against organized hate aren't done away with entirely for Ukraine.

    Posted by Pterrafractyl | June 7, 2019, 3:06 pm
  2. Here's an article discussing a book that just came out, The Real Face of Facebook in India, about the relationship between Facebook and the BJP and the role this relationship played in the BJP's stunning 2014 successes. Most of what's in the article covers what we already knew about this relationship: Shivnath Thukral, a former NDTV journalist, worked closely with Modi aide Hiren Joshi on the Modi digital team in the 2014 election before going on to become Facebook's director of policy for India and South Asia.

    Some of the new fun facts include Facebook apparently refusing to run the Congress Party's ads highlighting the Modi government's Rafale fighter jet scandal. It also delayed by 11 days an ad for an expose in Caravan Magazine about BJP official Amit Shah. Disturbingly, it also sounds like Indian propaganda companies are offering their services in other countries like South Africa, which makes the company's cozy ties to the BJP propagandists even more troubling.

    One of the more ironic fun facts in the book is that Katie Harbath, Facebook's Director for Global Politics and Government Outreach, was apparently "unhappy and uneasy about the proximity" of top officials of Facebook to the Narendra Modi government after Thukral got his position at Facebook. This is according to an anonymous source. So that would appear to indicate that even Facebook's high-level employees recognize these are politicized positions, and yet the company goes ahead with it anyway. Surprise! As the article also notes, it's somewhat ironic for Harbath to be expressing unease with the company hiring a politically connected individual close to the government for such a position, since Harbath herself was once a digital strategist for the Republican Party and Rudy Giuliani:

    The Wire

    The Past and Future of Face­book and BJP’s Mutu­al­ly Ben­e­fi­cial Rela­tion­ship

    A new book by Paran­joy Guha Thakur­ta and Cyril Sam finds revolv­ing doors and quid pro quos between Indi­a’s rich­est polit­i­cal par­ty and the world’s largest social plat­form.

    Partha P. Chakrabart­ty
    03/Jun/2019

    Five years from now, we may well be reading a book about the BJP's WhatsApp operations in the 2019 elections – featuring two lakh (200,000) groups of 256 members each, or over 50 million readers of the party line. A recent book, however, tells the story of the 2014 elections, and the role of WhatsApp's parent company Facebook in the rise of Narendra Modi.

    In 2019, if we for­get Facebook’s bil­lions of dol­lars in rev­enue, we might almost feel sor­ry for it. Face­book has had a rough year, where it has been attacked both by the Left (for per­mit­ting the rise of right-wing troll armies), and the Right (for cen­sor­ship of con­ser­v­a­tives: Don­ald Trump has launched a new tool to report instances.)

    But we can’t for­get their bil­lions of dol­lars of rev­enue, espe­cial­ly when, even in this tough year, Facebook’s income grew by 26% quar­ter-on-quar­ter. To add to the voic­es raised against it, a new book alleges that Face­book was both direct­ly com­plic­it in, and ben­e­fit­ed from, the rise of Modi’s BJP in India.

    The Real Face of Face­book in India, co-authored by the jour­nal­ists Paran­joy Guha Thakur­ta and Cyril Sam, is a short, terse book that reads like a who­dun­nit. In the intro­duc­tion, it is teased that the book will reveal ‘a wealth of details about the kind of sup­port that Face­book pro­vid­ed Naren­dra Modi and the appa­ra­tus of the BJP appa­ra­tus (sic) even (sic) before the 2014 elec­tions’. The many copy errors reveal that the book is an attempt to get the news out as wide­ly and as quick­ly as pos­si­ble. This is report­ing, not deep analy­sis.

    In line with the aim of reach­ing as wide an audi­ence as pos­si­ble, the book has been simul­ta­ne­ous­ly pub­lished in Hin­di under the title Face­book ka Asli Chehra. There is also a com­pan­ion web­site, theaslifacebook.com, which also has a Hin­di sec­tion.

    Teasers to the big reveal come in the first few chap­ters, which do a slight­ly hap­haz­ard job of nar­rat­ing the his­to­ry of Face­book in India. The smok­ing gun is final­ly dis­closed in Chap­ter 8 in the form of a per­son, Shiv­nath Thukral, a for­mer NDTV jour­nal­ist and ex-man­ag­ing direc­tor of Carnegie India. Going by the evi­dence in the book, Thukral had a close work­ing rela­tion­ship with inti­mate Modi aide, Hiren Joshi. Togeth­er, they cre­at­ed the Mera Bharosa web­site and oth­er web pages for the BJP in late 2013, ahead of the nation­al elec­tion. In 2017, after his stint at Carnegie, Thukral joined Face­book as its direc­tor of pol­i­cy for India and South Asia.

    For a per­son so close to a rul­ing par­ty to become a top offi­cial of a ‘neu­tral’ plat­form is wor­ry­ing. Wor­ry­ing enough, it seems, to trou­ble the com­pa­ny itself: Real Face claims that Katie Har­bath, Facebook’s direc­tor for glob­al pol­i­tics and gov­ern­ment out­reach, said she was “unhap­py and uneasy about the prox­im­i­ty” of top offi­cials of Face­book to the Naren­dra Modi gov­ern­ment. The quote is attrib­uted to an anony­mous source. Whether it is true or not, cit­i­zens should be con­cerned about this par­tic­u­lar revolv­ing door between the most pow­er­ful media organ­i­sa­tion in the world and the Modi admin­is­tra­tion. (It’s a dif­fer­ent mat­ter that Har­bath her­self was once a dig­i­tal strate­gist for the Repub­li­can Par­ty and Rudy Giu­liani).

    The over­ar­ch­ing sto­ry is this: The BJP was the first in our coun­try to see the poten­tial of Face­book as a way to reach vot­ers. Face­book, a pri­vate cor­po­ra­tion with an eye on build­ing rel­e­vance in India and earn­ing prof­its through adver­tis­ing, saw in pol­i­tics a great way to dri­ve engage­ment. Both the BJP and Face­book had much to gain from a part­ner­ship.

    As a result, in the run-up to the 2014 elec­tion, Face­book offered train­ing to BJP per­son­nel in run­ning social media cam­paigns. (Face­book has stat­ed that they con­duct these work­shops for var­i­ous polit­i­cal par­ties, but the impli­ca­tion remains that the BJP, in being a first mover, ben­e­fit­ed dis­pro­por­tion­ate­ly).

    The strat­e­gy worked beau­ti­ful­ly for Face­book. As report­ed by Ankhi Das, Face­book India’s lead on pol­i­cy and gov­ern­ment rela­tions, the 2014 elec­tions reaped the plat­form 227 mil­lion inter­ac­tions. Read today, Das’ arti­cle – which speaks of how ‘likes’ won Naren­dra Modi votes – comes off as more sin­is­ter than it might have at the time.

    We also know that the strat­e­gy worked for Modi. So potent was BJP’s tar­get­ing that it won 90% of its votes in only 299 con­stituen­cies, 282 of which it won. For­mer and cur­rent mem­bers of the BJP’s dig­i­tal media strat­e­gy team were hap­py to con­firm the mutu­al ben­e­fit. The cur­rent mem­ber is Vinit Goen­ka, once the nation­al co-con­ven­er of the BJP’s IT cell, and cur­rent­ly work­ing with Nitin Gad­kari. This is how the book tells it:

    At one stage in our inter­view with Goen­ka that last­ed over two hours, we asked him a point­ed ques­tion: ‘Who helped whom more, Face­book or the BJP?’

    He smiled and said: ‘That’s a dif­fi­cult ques­tion. I won­der whether the BJP helped Face­book more than Face­book helped the BJP. You could say, we helped each oth­er.’

    Equal­ly alarm­ing are reports, in the book, of Face­book deny­ing Con­gress paid ads to pub­li­cise the Rafale con­tro­ver­sy. Face­book also delayed a boost on a Car­a­van expose on Amit Shah by more than 11 days, an eter­ni­ty in our ridicu­lous­ly fast news cycle. Final­ly, there are reports of Indi­an pro­pa­gan­da com­pa­nies repli­cat­ing these lessons in elec­tions in South Africa and oth­er coun­tries. Tak­en togeth­er, we see how pri­vate plat­forms are hap­py to be used to manip­u­late demo­c­ra­t­ic process­es, whether in ser­vice of the Right or Left or Cen­tre.

    This book is con­cerned with cri­tiquing Facebook’s links with the right-wing. It has a fore­word by the pop­u­lar jour­nal­ist Rav­ish Kumar and a pref­ace by pro­fes­sor Apoor­vanand of the Depart­ment of Hin­di at Del­hi Uni­ver­si­ty (and a con­trib­u­tor at The Wire). Both of these belong to what the right-wing terms the ‘sec­u­lar’ brigade.

    How­ev­er, we know now that Face­book is also act­ing against some of the assets of the BJP itself. This may be an eye­wash, or just the log­i­cal next step in Facebook’s project: hav­ing cre­at­ed its impor­tance in elec­tions with the help of the BJP, it is now sell­ing its influ­ence to oth­er par­ties. It doesn’t mat­ter which par­ty comes out on top in the social media game: the house always wins. Even sup­port­ers of the BJP should be wary of the mon­ster they have fed. There is no rea­son for Face­book to be loy­al to the par­ty.

    Cam­bridge Ana­lyt­i­ca, by using lim­it­ed data from Face­book, was able to influ­ence the Brex­it and 2016 US pres­i­den­tial elec­tions. The book asks: what kind of influ­ence can the plat­form itself exert on our demo­c­ra­t­ic process­es?

    ...

    The ques­tion cit­i­zens have to ask is: how much pow­er do we allow one cor­po­ra­tion, and its 35-year-old CEO, to have? What can be done about its near-monop­o­lis­tic grip on data, and its abil­i­ty to uni­lat­er­al­ly impede or encour­age the flow of infor­ma­tion? These are ear­ly days, and one can hope that checks and bal­ances will kick in. Until that hap­pens, our work – of sim­ply keep­ing up with how plat­forms pro­pel or impede polit­i­cal inter­ests – will be cut out for us.

    ———-

    “The Past and Future of Face­book and BJP’s Mutu­al­ly Ben­e­fi­cial Rela­tion­ship” by Partha P. Chakrabart­ty; The Wire; 06/03/2019

    “Teasers to the big reveal come in the first few chapters, which do a slightly haphazard job of narrating the history of Facebook in India. The smoking gun is finally disclosed in Chapter 8 in the form of a person, Shivnath Thukral, a former NDTV journalist and ex-managing director of Carnegie India. Going by the evidence in the book, Thukral had a close working relationship with intimate Modi aide, Hiren Joshi. Together, they created the Mera Bharosa website and other web pages for the BJP in late 2013, ahead of the national election. In 2017, after his stint at Carnegie, Thukral joined Facebook as its director of policy for India and South Asia.”

    It's the kind of smoking gun of Facebook's relationship with the BJP that just keeps smoking more and more the longer Shivnath Thukral holds that position. But it's not the only smoking gun. Reports of Facebook refusing to publicize ads for the rival Congress Party and delaying stories that would be damaging to the BJP produce quite a bit of smoke too. And note that, while the article raises the risk that Facebook might work against the BJP's interests in the future, citing some of Facebook's efforts that have acted against the BJP's digital assets, keep in mind that the particular effort the piece refers to was a "fake news" crackdown in which Facebook removed more than 700 pages, and almost all of them (687) were Congress Party pages, although the handful of BJP pages removed did have far more viewers than the Congress pages. So, thus far, the only time Facebook appears to work against the BJP's interests is when there's a generic "fake news" purge, and even in that case it appeared to target the BJP's rivals much more heavily:

    ...
    Equal­ly alarm­ing are reports, in the book, of Face­book deny­ing Con­gress paid ads to pub­li­cise the Rafale con­tro­ver­sy. Face­book also delayed a boost on a Car­a­van expose on Amit Shah by more than 11 days, an eter­ni­ty in our ridicu­lous­ly fast news cycle. Final­ly, there are reports of Indi­an pro­pa­gan­da com­pa­nies repli­cat­ing these lessons in elec­tions in South Africa and oth­er coun­tries. Tak­en togeth­er, we see how pri­vate plat­forms are hap­py to be used to manip­u­late demo­c­ra­t­ic process­es, whether in ser­vice of the Right or Left or Cen­tre.

    ...

    How­ev­er, we know now that Face­book is also act­ing against some of the assets of the BJP itself. This may be an eye­wash, or just the log­i­cal next step in Facebook’s project: hav­ing cre­at­ed its impor­tance in elec­tions with the help of the BJP, it is now sell­ing its influ­ence to oth­er par­ties. It doesn’t mat­ter which par­ty comes out on top in the social media game: the house always wins. Even sup­port­ers of the BJP should be wary of the mon­ster they have fed. There is no rea­son for Face­book to be loy­al to the par­ty.
    ...

    The fact that this arrangement with the BJP is problematic isn't lost on Facebook's executives, according to the book. Facebook's own director for global politics and government outreach, Katie Harbath, reportedly said she was "unhappy and uneasy about the proximity" of top officials of Facebook to the Modi government after Thukral was hired. But those concerns were clearly ignored. The concerns were also clearly ironic, since Harbath herself was once a digital strategist for the Republican Party and Rudy Giuliani:

    ...
    For a per­son so close to a rul­ing par­ty to become a top offi­cial of a ‘neu­tral’ plat­form is wor­ry­ing. Wor­ry­ing enough, it seems, to trou­ble the com­pa­ny itself: Real Face claims that Katie Har­bath, Facebook’s direc­tor for glob­al pol­i­tics and gov­ern­ment out­reach, said she was “unhap­py and uneasy about the prox­im­i­ty” of top offi­cials of Face­book to the Naren­dra Modi gov­ern­ment. The quote is attrib­uted to an anony­mous source. Whether it is true or not, cit­i­zens should be con­cerned about this par­tic­u­lar revolv­ing door between the most pow­er­ful media organ­i­sa­tion in the world and the Modi admin­is­tra­tion. (It’s a dif­fer­ent mat­ter that Har­bath her­self was once a dig­i­tal strate­gist for the Repub­li­can Par­ty and Rudy Giu­liani).
    ...

    Recall how, right when the Cambridge Analytica scandal was emerging in late March of 2018, Facebook replaced its head of policy in the United States with another right-wing hack, Kevin Martin. Martin would be the new person in charge of lobbying the US government. Martin had been Facebook's vice president of mobile and global access policy and is a former Republican chairman of the Federal Communications Commission. When Martin took this new position, he would be reporting to Facebook's vice president of global public policy, Joel Kaplan. Both Martin and Kaplan worked together on George W. Bush's 2000 presidential campaign. Yep, that's how Facebook responded to the Cambridge Analytica scandal: by putting a Republican in charge of lobbying the US government.

    It's that context that makes the concerns of Katie Harbath so ironic, along with the fact that Facebook was so integral to the success of the 2016 Trump campaign that the company embedded employees with the campaign. Yes, Harbath's concerns over an overly close relationship with the BJP were indeed valid concerns, but ironically so when coming from a Republican operative like Harbath.

    And when you look at Harbath's LinkedIn page, we learn that she was hired by Facebook to become the Public Policy Director for Global Elections in February of 2011. Harbath was the National Republican Senatorial Committee's chief digital strategist from August 2009 to March 2011. So Harbath would have been in charge of the Senate GOP's digital strategy for the 2010 mid-terms, when the Republicans gained six Senate seats and retook control of the US House, and a few months later Facebook hired her to become the Public Policy Director for Global Elections.

    Beyond that, Harbath's LinkedIn page lists her work for DCI Group. She was a Senior Account Manager at DCI Group from 2006–2007. Then she left to work as the Deputy eCampaign Director for Rudy Giuliani's presidential campaign from February 2007 to January 2008. And in February of 2008 she returned to DCI Group as Director of Online Services, the position she held until going to work for the National Republican Senatorial Committee in 2009. Recall how DCI Group has close ties to Karl Rove and is known for being one of the sleaziest and most amoral of the 'dark money' lobbying/propaganda firms operating in DC. In addition to lobbying and public relations work for the Republican Party, DCI has a history of taking on clients like RJ Reynolds Tobacco and the Burmese junta. It's also known for peddling misinformation and engaging in dirty politics. In 2008, the CEO of DCI Group was selected to manage the Republican National Convention. And DCI Group also worked with the Koch brothers' front groups Americans for Prosperity and FreedomWorks in creating the Tea Party movement, which would have taken place during Harbath's time as the National Republican Senatorial Committee's chief digital strategist. DCI Group was also the publisher of Tech Central Station, a website funded by Exxon dedicated to climate change denial, and has worked on major right-wing disinformation campaigns in the US ranging from health care to oil pipelines.

    So Facebook's Public Policy Director for Global Elections, Katie Harbath, wasn't just a Republican Party operative. She also worked for one of the most disreputable lobbying and propaganda firms in DC and a key entity in the American 'dark money' propaganda industry. That's the person who was allegedly uncomfortable with Facebook's hiring of a BJP-connected individual. And despite those alleged concerns, Thukral's hiring happened anyway, of course.

    In relat­ed news, the Trump White House set up a web­page where con­ser­v­a­tives could go to report instances of Face­book and oth­er social media com­pa­nies being biased against them. Yep.

    Posted by Pterrafractyl | June 11, 2019, 1:59 pm
  3. Here’s a presentation not to be missed on what fascism is, using Narendra Modi’s regime as an example (in Hindi with English subtitles, click [cc]): https://www.youtube.com/watch?v=JpVTmlSXRck

    Posted by Atlanta Bill | July 31, 2019, 6:41 pm
  4. This next article shows how Facebook lists Breitbart and its propaganda-motivated news reporting as a legitimate news source despite the fact that its chairman, Steve Bannon, ran Donald Trump’s presidential campaign in 2016. Breitbart uses a “black crime” tag on articles and promotes anti-Muslim and anti-immigrant views. Bannon even said, “We’re the platform for the alt-right.” Additionally, their former tech editor Milo Yiannopoulos had worked directly with a white nationalist and a neo-Nazi to write and edit an article defining the “alt-right” movement and advancing its ideas. The article also identifies that Facebook has been reluctant to police white nationalism and far-right hate even after the Guardian provided Facebook, in July 2017, with a list of 175 pages and groups run by hate groups, as designated by the Southern Poverty Law Center, including neo-Nazi and white nationalist groups. Facebook’s actions show its real intent: the company removed just nine of them. This really calls into question Facebook’s assertion that “If a publisher posts misinformation, it will no longer appear in the product.” They have not made any serious attempt to address this with their policies and practices.

    The article does not address the following issue, but one should ask whether Mr. Zuckerberg’s behavior supports ideologies similar to those advocated by early investor Peter Thiel.

    https://www.theguardian.com/us-news/2019/oct/25/facebook-breitbart-news-tab-alt-right?CMP=Share_iOSApp_Other

    Face­book includes Bre­it­bart in new ‘high qual­i­ty’ news tab
    The social media site has received back­lash over its choice to include a pub­li­ca­tion that has been called ‘the plat­form for the alt-right’
    Julia Car­rie Wong @juliacarriew Email
    Fri 25 Oct 2019 16.56 EDT
    Last mod­i­fied on Fri 25 Oct 2019 16.58 EDT

    Facebook’s launch of a new sec­tion on its flag­ship app ded­i­cat­ed to “deeply-report­ed and well-sourced” jour­nal­ism sparked imme­di­ate con­tro­ver­sy on Fri­day over the inclu­sion of Bre­it­bart News, a pub­li­ca­tion whose for­mer exec­u­tive chair­man explic­it­ly embraced the “alt-right”.

    Face­book News is a sep­a­rate sec­tion of the company’s mobile app that will fea­ture arti­cles from about 200 pub­lish­ers. Friday’s launch is a test and will only be vis­i­ble to some users in the US.

    The ini­tia­tive is designed to quell crit­i­cism on two fronts: by pro­mot­ing high­er qual­i­ty jour­nal­ism over mis­in­for­ma­tion and by appeas­ing news pub­lish­ers who have long com­plained that Face­book prof­its from jour­nal­ism with­out pay­ing for it. The com­pa­ny will pay some pub­lish­ers between $1m and $3m each year to fea­ture their arti­cles, accord­ing to Bloomberg.
    Par­tic­i­pat­ing pub­li­ca­tions include the New York Times, the Wash­ing­ton Post, the Wall Street Jour­nal, Buz­zFeed, Bloomberg and ABC News, as well as local news­pa­pers such as the Chica­go Tri­bune and Dal­las Morn­ing News.

    Facebook’s chief exec­u­tive, Mark Zucker­berg, paid trib­ute to the impor­tance of “high qual­i­ty” jour­nal­ism in an op-ed pub­lished in the New York Times, which ref­er­enced “how the news has held Face­book account­able when we’ve made mis­takes”.

    Zucker­berg also allud­ed to the pow­er that Face­book will have to influ­ence the media, stat­ing: “If a pub­lish­er posts mis­in­for­ma­tion, it will no longer appear in the prod­uct.”
    The op-ed does not ref­er­ence the inclu­sion of Bre­it­bart News, but the out­let is noto­ri­ous for its role in pro­mot­ing extreme rightwing nar­ra­tives and con­spir­a­cy the­o­ries. Thou­sands of major adver­tis­ers have black­list­ed the site over its extreme views.

    Found­ed in 2005 by con­ser­v­a­tive writer Andrew Bre­it­bart, Bre­it­bart News achieved greater influ­ence and a wider audi­ence under its exec­u­tive chair­man Steve Ban­non, who went on to run Don­ald Trump’s pres­i­den­tial cam­paign in 2016. For years, the pub­li­ca­tion used a “black crime” tag on arti­cles and pro­mot­ed anti-Mus­lim and anti-immi­grant views.

    “We’re the plat­form for the alt-right,” Ban­non told a reporter in 2016.

    In 2017, Buz­zFeed News report­ed on emails and doc­u­ments show­ing how the for­mer Bre­it­bart tech edi­tor Milo Yiannopou­los had worked direct­ly with a white nation­al­ist and a neo-Nazi to write and edit an arti­cle defin­ing the “alt-right” move­ment and advanc­ing its ideas.

    Face­book has long faced scruti­ny for its ret­i­cence to police white nation­al­ism and far-right hate on its plat­form. In July 2017, the Guardian pro­vid­ed Face­book with a list of 175 pages and groups run by hate groups, as des­ig­nat­ed by the South­ern Pover­ty Law Cen­ter, includ­ing neo-Nazi and white nation­al­ist groups. The com­pa­ny removed just nine of them.

    Fol­low­ing the dead­ly “Unite the Right” ral­ly in Char­lottesville in August 2017 – which was orga­nized in part on a Face­book event page – the com­pa­ny cracked down on some white suprema­cist and neo-Nazi groups. Enforce­ment was spot­ty, how­ev­er, and a year after Char­lottesville, sev­er­al groups and indi­vid­u­als involved in Char­lottesville were back on Face­book. It was not until March 2019 that the com­pa­ny decid­ed that its pol­i­cy against hate should include white nation­al­ism, an ide­ol­o­gy that pro­motes the exclu­sion and expul­sion of non-white peo­ple from cer­tain nations.

    Face­book declined to pro­vide a full list of the par­tic­i­pat­ing pub­li­ca­tions or offer fur­ther com­ment.

    Asked about the inclu­sion of Bre­it­bart News at a launch event for Face­book News in New York, Zucker­berg declined to com­ment on “any spe­cif­ic firm” but added, “I do think that part of hav­ing this be a trust­ed source is that it needs to have a diver­si­ty of … views in there. I think you want to have con­tent that kind of rep­re­sents dif­fer­ent per­spec­tives, but also in a way that com­plies with the stan­dards that we have.”

    The Face­book CEO faced harsh ques­tion­ing from law­mak­ers this week, when he tes­ti­fied at a hear­ing of the US House of Rep­re­sen­ta­tives finan­cial ser­vices com­mit­tee. Though the hear­ing was puta­tive­ly about Facebook’s plans to launch a cryp­tocur­ren­cy, sev­er­al rep­re­sen­ta­tives pressed Zucker­berg on his company’s poor track record on com­ply­ing with US civ­il rights laws, as well as polic­ing hate speech.

    Dur­ing an exchange about the company’s deci­sion to allow politi­cians to pro­mote mis­in­for­ma­tion in paid adver­tis­ing, the Demo­c­ra­t­ic rep­re­sen­ta­tive Alexan­dria Oca­sio-Cortez pressed Zucker­berg on the inclu­sion of the Dai­ly Caller, which she called “a pub­li­ca­tion with well-doc­u­ment­ed ties to white suprema­cists”, in the company’s third-par­ty fact-check­er pro­gram. In 2018, the Atlantic revealed that a for­mer deputy edi­tor of the Dai­ly Caller also wrote under a pseu­do­nym for a white suprema­cist pub­li­ca­tion.

    Posted by Mary Benton | October 27, 2019, 4:05 pm
  5. Here’s a set of articles that highlight how one of the enduring features of Facebook’s attempts to police extremist hate speech on its platform has been the creation of special loopholes that allow this content to continue even after the new policies are put into effect:

    First, in March of this year, Facebook announced a significant change to its hate speech policies. Part of what made it significant is that it was the kind of change that shouldn’t have ever been necessary in the first place. Facebook updated its policy banning “white supremacy” to also cover “white nationalism” and “white separatism”. When the company initially banned white supremacy following the 2017 Unite the Right neo-Nazi rally in Charlottesville, VA, it apparently concluded that white nationalism and white separatism aren’t necessarily explicitly racist in nature and therefore would continue to be allowed.

    As Ulrick Casseus, one of Facebook’s policy team subject matter experts on hate groups, described the reasoning behind that initial decision to make a distinction between white nationalism/separatism and white supremacy: “When you have a broad range of people you engage with, you’re going to get a range of ideas and beliefs...There were a few people who [...] did not agree that white nationalism and white separatism were inherently hateful.” So there were “a few people” telling Facebook that white nationalism and white separatism aren’t inherently hateful, and that was the basis for Facebook’s decision. It would be interesting to know if any of those people happened to be among the numerous people on Facebook’s management team with ties to right-wing political parties. Peter Thiel is an obvious suspect, but don’t forget other figures like former George W. Bush White House staffer Joel Kaplan, who was appointed Facebook’s vice president of global public policy. And then there are people like Kateryna Kruk in Ukraine or Shivnath Thukral in India. Or maybe it was just some random person on Facebook’s policy team with far right sympathies.

    And, of course, the new policy banning white nationalism and white separatism has a loophole: only explicit white nationalist and separatist content will be banned. Implicit and coded white nationalism and white separatism won’t be banned, ostensibly because they are harder to detect. So the white nationalists/supremacists are still free to use Facebook as a propaganda/recruitment platform, but they’ll have to dog-whistle a little more than before:

    Vice

    Face­book Bans White Nation­al­ism and White Sep­a­ratism
    After a civ­il rights back­lash, Face­book will now treat white nation­al­ism and sep­a­ratism the same as white suprema­cy, and will direct users who try to post that con­tent to a non­prof­it that helps peo­ple leave hate groups.

    by Joseph Cox and Jason Koe­bler
    Mar 27 2019, 11:00am

    In a major pol­i­cy shift for the world’s biggest social media net­work, Face­book banned white nation­al­ism and white sep­a­ratism on its plat­form Tues­day. Face­book will also begin direct­ing users who try to post con­tent asso­ci­at­ed with those ide­olo­gies to a non­prof­it that helps peo­ple leave hate groups, Moth­er­board has learned.

    The new pol­i­cy, which will be offi­cial­ly imple­ment­ed next week, high­lights the mal­leable nature of Facebook’s poli­cies, which gov­ern the speech of more than 2 bil­lion users world­wide. And Face­book still has to effec­tive­ly enforce the poli­cies if it is real­ly going to dimin­ish hate speech on its plat­form. The pol­i­cy will apply to both Face­book and Insta­gram.

    Last year, a Moth­er­board inves­ti­ga­tion found that, though Face­book banned “white suprema­cy” on its plat­form, it explic­it­ly allowed “white nation­al­ism” and “white sep­a­ratism.” After back­lash from civ­il rights groups and his­to­ri­ans who say there is no dif­fer­ence between the ide­olo­gies, Face­book has decid­ed to ban all three, two mem­bers of Facebook’s con­tent pol­i­cy team said.

    “We’ve had con­ver­sa­tions with more than 20 mem­bers of civ­il soci­ety, aca­d­e­mics, in some cas­es these were civ­il rights orga­ni­za­tions, experts in race rela­tions from around the world,” Bri­an Fish­man, pol­i­cy direc­tor of coun­tert­er­ror­ism at Face­book, told us in a phone call. “We decid­ed that the over­lap between white nation­al­ism, [white] sep­a­ratism, and white suprema­cy is so exten­sive we real­ly can’t make a mean­ing­ful dis­tinc­tion between them. And that’s because the lan­guage and the rhetoric that is used and the ide­ol­o­gy that it rep­re­sents over­laps to a degree that it is not a mean­ing­ful dis­tinc­tion.”

    Specif­i­cal­ly, Face­book will now ban con­tent that includes explic­it praise, sup­port, or rep­re­sen­ta­tion of white nation­al­ism or sep­a­ratism. Phras­es such as “I am a proud white nation­al­ist” and “Immi­gra­tion is tear­ing this coun­try apart; white sep­a­ratism is the only answer” will now be banned, accord­ing to the com­pa­ny. Implic­it and cod­ed white nation­al­ism and white sep­a­ratism will not be banned imme­di­ate­ly, in part because the com­pa­ny said it’s hard­er to detect and remove.

    The deci­sion was for­mal­ly made at Facebook’s Con­tent Stan­dards Forum on Tues­day, a meet­ing that includes rep­re­sen­ta­tives from a range of dif­fer­ent Face­book depart­ments in which con­tent mod­er­a­tion poli­cies are dis­cussed and ulti­mate­ly adopt­ed. Fish­man told Moth­er­board that Face­book COO Sheryl Sand­berg was involved in the for­mu­la­tion of the new pol­i­cy, though rough­ly three dozen Face­book employ­ees worked on it.

    Fish­man said that users who search for or try to post white nation­al­ism, white sep­a­ratism, or white suprema­cist con­tent will begin get­ting a pop­up that will redi­rect to the web­site for Life After Hate, a non­prof­it found­ed by ex-white suprema­cists that is ded­i­cat­ed to get­ting peo­ple to leave hate groups.

    “If peo­ple are explor­ing this move­ment, we want to con­nect them with folks that will be able to pro­vide sup­port offline,” Fish­man said. “This is the kind of work that we think is part of a com­pre­hen­sive pro­gram to take this sort of move­ment on.”

    Behind the scenes, Face­book will con­tin­ue using some of the same tac­tics it uses to sur­face and remove con­tent asso­ci­at­ed with ISIS, Al Qae­da, and oth­er ter­ror­ist groups to remove white nation­al­ist, sep­a­ratist, and suprema­cist con­tent. This includes con­tent match­ing, which algo­rith­mi­cal­ly detects and deletes images that have been pre­vi­ous­ly iden­ti­fied to con­tain hate mate­r­i­al, and will include machine learn­ing and arti­fi­cial intel­li­gence, Fish­man said, though he didn’t elab­o­rate on how those tech­niques would work.

    The new pol­i­cy is a sig­nif­i­cant change from the company’s old poli­cies on white sep­a­ratism and white nation­al­ism. In inter­nal mod­er­a­tion train­ing doc­u­ments obtained and pub­lished by Moth­er­board last year, Face­book argued that white nation­al­ism “doesn’t seem to be always asso­ci­at­ed with racism (at least not explic­it­ly).”

    That arti­cle elicit­ed wide­spread crit­i­cism from civ­il rights, Black his­to­ry, and extrem­ism experts, who stressed that “white nation­al­ism” and “white sep­a­ratism” are often sim­ply fronts for white suprema­cy.

    “I do think it’s a step for­ward, and a direct result of pres­sure being placed on it [Face­book],” Rashad Robin­son, pres­i­dent of cam­paign group Col­or Of Change, told Moth­er­board in a phone call.

    Experts say that white nation­al­ism and white sep­a­ratism move­ments are dif­fer­ent from oth­er sep­a­ratist move­ments such as the Basque sep­a­ratist move­ment in France and Spain and Black sep­a­ratist move­ments world­wide because of the long his­to­ry of white suprema­cism that has been used to sub­ju­gate and dehu­man­ize peo­ple of col­or in the Unit­ed States and around the world.

    “Any­one who dis­tin­guish­es white nation­al­ists from white suprema­cists does not have any under­stand­ing about the his­to­ry of white suprema­cism and white nation­al­ism, which is his­tor­i­cal­ly inter­twined,” Ibram X. Ken­di, who won a Nation­al Book Award in 2016 for Stamped from the Begin­ning: The Defin­i­tive His­to­ry of Racist Ideas in Amer­i­ca, told Moth­er­board last year.

    Hei­di Beirich, head of the South­ern Pover­ty Law Center’s (SPLC) Intel­li­gence Project, told Moth­er­board last year that “white nation­al­ism is some­thing that peo­ple like David Duke [for­mer leader of the Ku Klux Klan] and oth­ers came up with to sound less bad.”

    While there is unan­i­mous agree­ment among civ­il rights experts Moth­er­board spoke to that white nation­al­ism and sep­a­ratism are indis­tin­guish­able from white suprema­cy, the deci­sion is like­ly to be polit­i­cal­ly con­tro­ver­sial both in the Unit­ed States, where the right has accused Face­book of hav­ing an anti-con­ser­v­a­tive bias, and world­wide, espe­cial­ly in coun­tries where open­ly white nation­al­ist politi­cians have found large fol­low­ings. Face­book said that not all of the groups it spoke to believed it should change its pol­i­cy.

    “When you have a broad range of peo­ple you engage with, you’re going to get a range of ideas and beliefs,” Ulrick Casseus, a sub­ject mat­ter expert on hate groups on Facebook’s pol­i­cy team, told us. “There were a few peo­ple who [...] did not agree that white nation­al­ism and white sep­a­ratism were inher­ent­ly hate­ful.”

    But Face­book said that the over­whelm­ing major­i­ty of experts it spoke to believed that white nation­al­ism and white sep­a­ratism are tied close­ly to orga­nized hate, and that all experts it spoke to believe that white nation­al­ism expressed online has led to real-world harm. After speak­ing to these experts, Face­book decid­ed that white nation­al­ism and white sep­a­ratism are “inher­ent­ly hate­ful.”

    “We saw that was becom­ing more of a thing, where they would try to nor­mal­ize what they were doing by say­ing ‘I’m not racist, I’m a nation­al­ist’, and try to make that dis­tinc­tion. They even go so far as to say ‘I’m not a white suprema­cist, I’m a white nation­al­ist’. Time and time again they would say that but they would also have hate­ful speech and hate­ful behav­iors tied to that,” Casseus said. “They’re try­ing to nor­mal­ize it and based upon what we’ve seen and who we’ve talked to, we deter­mined that this is hate­ful, and it’s tied to orga­nized hate.”

    The change comes less than two years after Face­book inter­nal­ly clar­i­fied its poli­cies on white suprema­cy after the Char­lottesville protests of August 2017, in which a white suprema­cist killed counter-pro­test­er Heather Hey­er. That includ­ed draw­ing the dis­tinc­tion between suprema­cy and nation­al­ism that extrem­ist experts saw as prob­lem­at­ic.

    Face­book qui­et­ly made oth­er tweaks inter­nal­ly around this time. One source with direct knowl­edge of Facebook’s delib­er­a­tions said that fol­low­ing Motherboard’s report­ing, Face­book changed its inter­nal doc­u­ments to say that racial suprema­cy isn’t allowed in gen­er­al. Moth­er­board grant­ed the source anonymi­ty to speak can­did­ly about inter­nal Face­book dis­cus­sions.

    “Every­thing was rephrased so instead of say­ing white nation­al­ism is allowed while white suprema­cy isn’t, it now says racial suprema­cy isn’t allowed,” the source said last year. At the time, white nation­al­ism and Black nation­al­ism did not vio­late Facebook’s poli­cies, the source added. A Face­book spokesper­son con­firmed that it did make that change last year.

    The new pol­i­cy will not ban implic­it white nation­al­ism and white sep­a­ratism, which Casseus said is dif­fi­cult to detect and enforce. It also doesn’t change the company’s exist­ing poli­cies on sep­a­ratist and nation­al­ist move­ments more gen­er­al­ly; con­tent relat­ing to Black sep­a­ratist move­ments and the Basque sep­a­ratist move­ment, for exam­ple, will still be allowed.

    A social media pol­i­cy is only as good as its imple­men­ta­tion and enforce­ment. A recent report from NGO the Counter Extrem­ism Project found that Face­book did not remove pages belong­ing to known neo-Nazi groups after this month’s Christchurch, New Zealand ter­ror­ist attacks. Face­book wants to be sure that enforce­ment of its poli­cies is con­sis­tent around the world and from mod­er­a­tor to mod­er­a­tor, which is one of the rea­sons why its pol­i­cy doesn’t ban implic­it or cod­ed expres­sions of white nation­al­ism or white sep­a­ratism.

    David Brody, an attorney with the Lawyers’ Committee for Civil Rights Under Law, which lobbied Facebook over the policy change, told Motherboard in a phone call: “if there is a certain type of problematic content that really is not amenable to enforcement at scale, they would prefer to write their policies in a way where they can pretend it doesn’t exist.”

    Kee­gan Han­kes, a research ana­lyst for the SPLC’s Intel­li­gence Project, added, “One thing that con­tin­u­al­ly sur­pris­es me about Face­book, is this unwill­ing­ness to rec­og­nize that even if con­tent is not explic­it­ly racist and vio­lent out­right, it [needs] to think about how their audi­ence is receiv­ing that mes­sage.”

    ...

    ———-

    “Face­book Bans White Nation­al­ism and White Sep­a­ratism” by Joseph Cox and Jason Koe­bler; Vice; 03/27/2019

    Last year, a Moth­er­board inves­ti­ga­tion found that, though Face­book banned “white suprema­cy” on its plat­form, it explic­it­ly allowed “white nation­al­ism” and “white sep­a­ratism.” After back­lash from civ­il rights groups and his­to­ri­ans who say there is no dif­fer­ence between the ide­olo­gies, Face­book has decid­ed to ban all three, two mem­bers of Facebook’s con­tent pol­i­cy team said.”

    Yep, when Facebook responded to the violence of Charlottesville in 2017 by banning white supremacists, the company decided to leave a giant loophole: white supremacists are banned, but white nationalists and separatists are still allowed. It’s as if Facebook was trolling the public, except this was basically a secret policy that was only uncovered by a Motherboard investigation and leaked internal documents. That’s a key detail here: this giant loophole was a secret loophole until Motherboard wrote an article about it in May of 2018. And it wasn’t until March of 2019 that Facebook closed that giant loophole. But, of course, they created a new one: implicit and coded white nationalism and separatism are still allowed:

    ...
    “We’ve had con­ver­sa­tions with more than 20 mem­bers of civ­il soci­ety, aca­d­e­mics, in some cas­es these were civ­il rights orga­ni­za­tions, experts in race rela­tions from around the world,” Bri­an Fish­man, pol­i­cy direc­tor of coun­tert­er­ror­ism at Face­book, told us in a phone call. “We decid­ed that the over­lap between white nation­al­ism, [white] sep­a­ratism, and white suprema­cy is so exten­sive we real­ly can’t make a mean­ing­ful dis­tinc­tion between them. And that’s because the lan­guage and the rhetoric that is used and the ide­ol­o­gy that it rep­re­sents over­laps to a degree that it is not a mean­ing­ful dis­tinc­tion.”

    Specif­i­cal­ly, Face­book will now ban con­tent that includes explic­it praise, sup­port, or rep­re­sen­ta­tion of white nation­al­ism or sep­a­ratism. Phras­es such as “I am a proud white nation­al­ist” and “Immi­gra­tion is tear­ing this coun­try apart; white sep­a­ratism is the only answer” will now be banned, accord­ing to the com­pa­ny. Implic­it and cod­ed white nation­al­ism and white sep­a­ratism will not be banned imme­di­ate­ly, in part because the com­pa­ny said it’s hard­er to detect and remove.

    ...

    The new pol­i­cy is a sig­nif­i­cant change from the company’s old poli­cies on white sep­a­ratism and white nation­al­ism. In inter­nal mod­er­a­tion train­ing doc­u­ments obtained and pub­lished by Moth­er­board last year, Face­book argued that white nation­al­ism “doesn’t seem to be always asso­ci­at­ed with racism (at least not explic­it­ly).”
    ...

    Keep in mind that ‘coded’ white nationalism is often barely coded at all, so this is the kind of loophole over which individual Facebook content moderators are potentially going to have a great deal of flexibility in how they enforce the policy. And to underscore how easy it is for moderators to ‘play dumb’ about these kinds of content judgment calls, according to Facebook’s hate group expert Ulrick Casseus, that initial loophole allowing white nationalism and separatism came about because “There were a few people who [...] did not agree that white nationalism and white separatism were inherently hateful.” That’s playing it really dumb, and that was Facebook’s policy until this latest change:

    ...
    While there is unan­i­mous agree­ment among civ­il rights experts Moth­er­board spoke to that white nation­al­ism and sep­a­ratism are indis­tin­guish­able from white suprema­cy, the deci­sion is like­ly to be polit­i­cal­ly con­tro­ver­sial both in the Unit­ed States, where the right has accused Face­book of hav­ing an anti-con­ser­v­a­tive bias, and world­wide, espe­cial­ly in coun­tries where open­ly white nation­al­ist politi­cians have found large fol­low­ings. Face­book said that not all of the groups it spoke to believed it should change its pol­i­cy.

    “When you have a broad range of peo­ple you engage with, you’re going to get a range of ideas and beliefs,” Ulrick Casseus, a sub­ject mat­ter expert on hate groups on Facebook’s pol­i­cy team, told us. “There were a few peo­ple who [...] did not agree that white nation­al­ism and white sep­a­ratism were inher­ent­ly hate­ful.”

    But Face­book said that the over­whelm­ing major­i­ty of experts it spoke to believed that white nation­al­ism and white sep­a­ratism are tied close­ly to orga­nized hate, and that all experts it spoke to believe that white nation­al­ism expressed online has led to real-world harm. After speak­ing to these experts, Face­book decid­ed that white nation­al­ism and white sep­a­ratism are “inher­ent­ly hate­ful.”

    “We saw that was becom­ing more of a thing, where they would try to nor­mal­ize what they were doing by say­ing ‘I’m not racist, I’m a nation­al­ist’, and try to make that dis­tinc­tion. They even go so far as to say ‘I’m not a white suprema­cist, I’m a white nation­al­ist’. Time and time again they would say that but they would also have hate­ful speech and hate­ful behav­iors tied to that,” Casseus said. “They’re try­ing to nor­mal­ize it and based upon what we’ve seen and who we’ve talked to, we deter­mined that this is hate­ful, and it’s tied to orga­nized hate.”

    ...

    The new pol­i­cy will not ban implic­it white nation­al­ism and white sep­a­ratism, which Casseus said is dif­fi­cult to detect and enforce. It also doesn’t change the company’s exist­ing poli­cies on sep­a­ratist and nation­al­ist move­ments more gen­er­al­ly; con­tent relat­ing to Black sep­a­ratist move­ments and the Basque sep­a­ratist move­ment, for exam­ple, will still be allowed.
    ...

    So as we can see, Facebook really, really, really wants to keep some loopholes in place to ensure white supremacist content still has an outlet. And while much of that desire to keep these loopholes in place likely comes from the far right ideologies of important Facebook figures like Peter Thiel, here’s an article that gives us an idea of the financial incentive to ensure Facebook remains the platform of choice for bigotry: According to a study by Sludge, between May 2018 and Sept. 17, 2019, Facebook made nearly $1.6 million from 4,921 ads purchased by 38 groups identified by the SPLC as hate groups. Quite a few of these hate groups are clearly of the white nationalist variety, like the Federation for American Immigration Reform (FAIR), which spent $910,101 on 335 ads during this period.

    Keep in mind that May of 2018 is the same month Motherboard’s reporting revealed Facebook’s policy of banning white supremacy while still allowing white nationalism and separatism to continue, so the date range for this Sludge study is basically a look at how effective that policy was at keeping white supremacist content off of Facebook. As we can see from the nearly $1 million spent by FAIR during this period, it wasn’t very effective:

    Giz­mo­do

    Face­book Has Banked Near­ly $1.6 Mil­lion From SPLC-Des­ig­nat­ed Hate Groups Since May 2018

    Tom McK­ay
    9/25/19 9:50PM

    Face­book claims to be doing a lot to fight hate speech. But Face­book has also cashed near­ly $1.6 mil­lion in ad mon­ey from orga­ni­za­tions des­ig­nat­ed as hate groups by the South­ern Pover­ty Law Cen­ter between May 2018 and Sept. 17, 2019, accord­ing to a Wednes­day report by Sludge.

    The SPLC is con­sid­ered one of the nation’s most promi­nent civ­il rights watch­dogs. It clas­si­fied the 38 orga­ni­za­tions in ques­tion as hate groups because they have “beliefs or prac­tices that attack or malign an entire class of peo­ple, typ­i­cal­ly for their immutable char­ac­ter­is­tics.” (Many of the groups in ques­tion have vig­or­ous­ly con­test­ed those des­ig­na­tions and insist they are being tar­get­ed sim­ply for espous­ing con­ser­v­a­tive view­points, which is per­haps not the most per­sua­sive argu­ment in these times.)

    At the top of the list is the Fed­er­a­tion for Amer­i­can Immi­gra­tion Reform (FAIR), which Facebook’s ad data­base shows ran 335 ads at a total bill of $910,101. (FAIR was found­ed by vir­u­lent nativist and white suprema­cist John Tan­ton and reg­u­lar­ly gripes about top­ics like the chang­ing “eth­nic base” of the U.S., but has man­aged to main­tain some degree of main­stream cred­i­bil­i­ty with right-wing news out­lets.) Sec­ond was the Alliance Defend­ing Free­dom, an anti-LGBTQ Chris­t­ian group that has pushed for the crim­i­nal­iza­tion of “sodomy” in the states and abroad, at $391,669.

    Oth­er groups on Sludge’s list of Face­book ad buy­ers includ­ed the Fam­i­ly Research Coun­cil ($106,987), the anti-Mus­lim Clar­i­on Project ($55,012), and the omi­nous­ly-titled Cal­i­for­ni­ans for Pop­u­la­tion Sta­bi­liza­tion ($202,212), an anti-immi­grant group found­ed by eugeni­cist and far-right race “sci­en­tist” Gar­ret Hardin. CAP once hired a neo-Nazi as its pub­lic affairs direc­tor.

    Spe­cif­ic ads not­ed by Sludge includ­ed an ad by The Amer­i­can Vision, a group the SPLC writes has advo­cat­ed the exe­cu­tion of gay peo­ple, which linked to a now-removed blog post call­ing gay peo­ple “evil.” William Gheen, the nativist head of Amer­i­cans for Legal Immi­gra­tion PAC, pur­chased ads par­rot­ing anti-immi­gra­tion “inva­sion” rhetoric of the type cit­ed by a mass shoot­er in El Paso, Texas this year and ask­ing users to share a post stat­ing “100% OF ILLEGAL ALIENS ARE CRIMINALS.” Three local chap­ters of the Proud Boys, a far-right street brawl­ing group that earned the atten­tion of the FBI last year, also did com­par­a­tive­ly small Face­book ad buys (some of which were even­tu­al­ly removed).

    Sludge wrote that in total, Face­book ran some 4,921 ads from the 38 hate groups. Face­book has claimed that it is mak­ing progress and proac­tive­ly iden­ti­fied 65 per­cent of the hate speech it removed in Q1 2019, up from 24 per­cent in Q4 2017. But the groups have been allowed to remain, Sludge argued, because the platform’s mod­er­a­tion efforts are “main­ly focused on indi­vid­ual posts, not on the accounts that do the post­ing” and it only bans groups “that pro­claim a vio­lent mis­sion or are engaged in vio­lence”:

    Face­book may take down a hate group’s post that explic­it­ly attacks peo­ple based on a “pro­tect­ed char­ac­ter­is­tic,” but it wouldn’t ordi­nar­i­ly ban that group from its plat­form if the group didn’t have a mis­sion Face­book con­sid­ers vio­lent. For exam­ple, it removed three pages of the Proud Boys, who advo­cate vio­lence, but has let hate groups that are extreme­ly dis­crim­i­na­to­ry yet not explic­it­ly vio­lent remain. The con­trast­ing def­i­n­i­tions of hate speech and hate groups allow the com­pa­ny to take down some offen­sive posts but per­mit numer­ous hate groups to have a pres­ence, post­ing, spend­ing mon­ey, and recruit­ing on its plat­form.

    In June, Face­book released a near­ly 30-page audit pre­pared by its civ­il rights ambas­sador Lau­ra Mur­phy and rough­ly 90 promi­nent civ­il rights groups. Mul­ti­ple civ­il rights groups told Giz­mo­do that while the audit showed Face­book had made some progress, pol­i­cy changes such as its deci­sion to ban sup­port of white suprema­cy or “nation­al­ism” didn’t go far enough and the com­pa­ny had not laid out a proac­tive plan to fight the spread of hate speech.

    Facebook’s much-touted machine learning algorithms for policing hate speech have also been regularly lambasted as inadequate. For example, Auburn University senior fellow and GDELT co-creator Kalev Leetaru told Gizmodo that he thought Facebook could improve its automated moderation with existing technology, but “the reason platforms are reluctant to deploy it comes down to several factors”—including the cost of running more “computationally expensive” systems and the money generated from extreme content.

    “Ter­ror­ism, hate speech, human traf­fick­ing, sex­u­al assault and oth­er hor­rif­ic imagery actu­al­ly ben­e­fits the sites mon­e­tar­i­ly,” Lee­taru added. “... Oth­er than a few high-pub­lic­i­ty cas­es of adver­tis­er back­lash against par­tic­u­lar­ly high pro­file cas­es, adver­tis­ers aren’t forc­ing the com­pa­nies to do bet­ter, and gov­ern­ments aren’t putting any pres­sure on them, so they have lit­tle incen­tive to do bet­ter.”

    Face­book has also admit­ted it failed to act appro­pri­ate­ly against mil­i­tary offi­cials in Myan­mar incit­ing geno­cide against the minor­i­ty Rohingya pop­u­la­tion. A Unit­ed Nations inves­ti­ga­tor lat­er harsh­ly crit­i­cized Facebook’s sub­se­quent efforts to do bet­ter and its efforts since have failed to inspire con­fi­dence. Oth­er report­ing has indi­cat­ed Face­book and its prop­er­ties such as What­sApp have become ves­sels for hate speech and vio­lence in coun­tries includ­ing Sri Lan­ka, India, the Philip­pines, and Libya.

    Accord­ing to Sludge, search­es for sim­i­lar con­tent on com­peti­tors Google/YouTube, Twit­ter, and Snap showed that Twit­ter took $917,000 from FAIR since Octo­ber 2018, while Google/YouTube took $90,000 from the group since the end of May 2018. “Few, if any” oth­er hate groups appeared in the Google/YouTube polit­i­cal ad archive, while no oth­er SPLC-des­ig­nat­ed groups appeared in Twit­ter or Snap’s data­bas­es, Sludge wrote. (How­ev­er, as Sludge not­ed, Facebook’s ad archive is more com­pre­hen­sive and acces­si­ble than the oth­ers’ data­base.)

    Kee­gan Han­kes, the inter­im research direc­tor of the SPLC’s Intel­li­gence Project, told Sludge, “This is an astound­ing amount of mon­ey that’s been allowed to be spent by hate groups... It is a decades-long tac­tic of these orga­ni­za­tions to dress up their rhetoric using euphemisms and using soft­er lan­guage to appeal to a wider audi­ence. They’re not just going to come out with their most extreme ide­o­log­i­cal view­points.”

    The orga­ni­za­tions in ques­tion soft-ped­al their Face­book con­tent “know­ing full well that peo­ple who are amenable to that mes­sage might very well go to their web­site or go to what­ev­er pro­pa­gan­da they’re oper­at­ing and get exposed to more extreme rhetoric,” Han­kes added. He told Sludge that he believed Face­book only takes action when it is “polit­i­cal­ly expe­di­ent,” where­as anti-immi­gra­tion, anti-Islam, and anti-LGBTQ view­points “have a lot of trac­tion in main­stream con­ser­vatism right now.”

    ...

    ———-

    “Face­book Has Banked Near­ly $1.6 Mil­lion From SPLC-Des­ig­nat­ed Hate Groups Since May 2018” by Tom McK­ay; Giz­mo­do; 09/25/2019

    At the top of the list is the Fed­er­a­tion for Amer­i­can Immi­gra­tion Reform (FAIR), which Facebook’s ad data­base shows ran 335 ads at a total bill of $910,101. (FAIR was found­ed by vir­u­lent nativist and white suprema­cist John Tan­ton and reg­u­lar­ly gripes about top­ics like the chang­ing “eth­nic base” of the U.S., but has man­aged to main­tain some degree of main­stream cred­i­bil­i­ty with right-wing news out­lets.) Sec­ond was the Alliance Defend­ing Free­dom, an anti-LGBTQ Chris­t­ian group that has pushed for the crim­i­nal­iza­tion of “sodomy” in the states and abroad, at $391,669.”

    That’s right, FAIR, which is about as overtly white nationalist a group as you’re going to find, spent almost $1 million on Facebook ads following Facebook’s policy change to ban white supremacy. And yet, as the article notes, FAIR is also treated as a credible organization within the right-wing media complex. It highlights the tragically politically charged nature of any meaningful ban of white nationalism on these platforms: not only would Facebook be giving up all that ad money, but any meaningful ban of white nationalist content would be treated by the political right, which has been increasingly embracing white nationalism for years, as a censorship attack against conservatives. So instead we have Facebook proclaiming that it’s banning white nationalism and white supremacy, but it only appears to apply that ban to groups that proclaim a violent mission or are engaged in violence. As long as these groups cloak their messages with enough dog-whistles and hints at what their ultimate agenda is, their content will be allowed. It’s, again, Facebook playing dumb, for the benefit of its bottom line and the far right:

    ...
    Sludge wrote that in total, Face­book ran some 4,921 ads from the 38 hate groups. Face­book has claimed that it is mak­ing progress and proac­tive­ly iden­ti­fied 65 per­cent of the hate speech it removed in Q1 2019, up from 24 per­cent in Q4 2017. But the groups have been allowed to remain, Sludge argued, because the platform’s mod­er­a­tion efforts are “main­ly focused on indi­vid­ual posts, not on the accounts that do the post­ing” and it only bans groups “that pro­claim a vio­lent mis­sion or are engaged in vio­lence”:

    Face­book may take down a hate group’s post that explic­it­ly attacks peo­ple based on a “pro­tect­ed char­ac­ter­is­tic,” but it wouldn’t ordi­nar­i­ly ban that group from its plat­form if the group didn’t have a mis­sion Face­book con­sid­ers vio­lent. For exam­ple, it removed three pages of the Proud Boys, who advo­cate vio­lence, but has let hate groups that are extreme­ly dis­crim­i­na­to­ry yet not explic­it­ly vio­lent remain. The con­trast­ing def­i­n­i­tions of hate speech and hate groups allow the com­pa­ny to take down some offen­sive posts but per­mit numer­ous hate groups to have a pres­ence, post­ing, spend­ing mon­ey, and recruit­ing on its plat­form.

    ...

    Accord­ing to Sludge, search­es for sim­i­lar con­tent on com­peti­tors Google/YouTube, Twit­ter, and Snap showed that Twit­ter took $917,000 from FAIR since Octo­ber 2018, while Google/YouTube took $90,000 from the group since the end of May 2018. “Few, if any” oth­er hate groups appeared in the Google/YouTube polit­i­cal ad archive, while no oth­er SPLC-des­ig­nat­ed groups appeared in Twit­ter or Snap’s data­bas­es, Sludge wrote. (How­ev­er, as Sludge not­ed, Facebook’s ad archive is more com­pre­hen­sive and acces­si­ble than the oth­ers’ data­base.)

    Kee­gan Han­kes, the inter­im research direc­tor of the SPLC’s Intel­li­gence Project, told Sludge, “This is an astound­ing amount of mon­ey that’s been allowed to be spent by hate groups... It is a decades-long tac­tic of these orga­ni­za­tions to dress up their rhetoric using euphemisms and using soft­er lan­guage to appeal to a wider audi­ence. They’re not just going to come out with their most extreme ide­o­log­i­cal view­points.”

    The orga­ni­za­tions in ques­tion soft-ped­al their Face­book con­tent “know­ing full well that peo­ple who are amenable to that mes­sage might very well go to their web­site or go to what­ev­er pro­pa­gan­da they’re oper­at­ing and get exposed to more extreme rhetoric,” Han­kes added. He told Sludge that he believed Face­book only takes action when it is “polit­i­cal­ly expe­di­ent,” where­as anti-immi­gra­tion, anti-Islam, and anti-LGBTQ view­points “have a lot of trac­tion in main­stream con­ser­vatism right now.”
    ...

    “Kee­gan Han­kes, the inter­im research direc­tor of the SPLC’s Intel­li­gence Project, told Sludge, “This is an astound­ing amount of mon­ey that’s been allowed to be spent by hate groups... It is a decades-long tac­tic of these orga­ni­za­tions to dress up their rhetoric using euphemisms and using soft­er lan­guage to appeal to a wider audi­ence. They’re not just going to come out with their most extreme ide­o­log­i­cal view­points.”

    Let’s review: first Facebook bans white supremacy in response to the neo-Nazi march in Charlottesville. Then leaked internal documents reveal in May 2018 that Facebook left a giant loophole in its white supremacy ban that still allows white nationalism and white separatism, because “a few people” at Facebook felt that white nationalism and white separatism weren’t inherently racist. Then, in March of 2019, Facebook announces it has realized that white nationalism and white separatism are the same as white supremacy and extends its ban to white nationalism and separatism. But the ban only applies to overt white nationalism and separatism. White nationalist code words and dog-whistling are still allowed. And then, in September, Sludge issues a report finding that Facebook sold $1.6 million in ads to hate groups between May of 2018 and September of 2019, and that almost $1 million of that ad money came from FAIR, a virulent white nationalist group that’s also somewhat mainstream in right-wing media. As long as a group isn’t overtly advocating violence, it will be allowed to promote and recruit for its ideas on the platform. The far right’s decades-old tactic of softening their language to appeal to a wider audience is literally the loophole Facebook kept in place for these groups.

    So when we hear about other controversial recent Facebook policies, like the new loophole in Facebook’s policy against lying in political ads that says politicians will still be allowed to lie, keep in mind that right-wing politicians aren’t just being given a loophole that allows them to continue lying in ads. They’re also given a loophole that allows them to continue promoting white nationalism. Except this particular loophole isn’t limited to politicians.

    In oth­er news...

    Posted by Pterrafractyl | November 6, 2019, 3:36 pm
  6. Here’s the kind of story about the abuse of social media platforms that is disturbing not just because of the content of this particular story based in Kuwait but also because there’s no reason to assume this story is limited to Kuwait: BBC News Arabic conducted an undercover investigation of the apparently booming online black market in Kuwait that relies on social media platforms like Instagram (owned by Facebook) and various online marketplace apps available through Google Play and Apple’s App Store. This black market happens to be in de facto human slavery. The marketplaces are used to buy and sell foreign domestic workers who come to Kuwait and operate under the Kafala system, where a domestic worker is brought into the country through their sponsor (the family hiring them), and they can’t change or quit their job or leave the country without the permission of their sponsor, making it effectively a system of modern slavery once someone enters it. And because the sponsorship of these ‘domestic workers’ can be sold at a higher price than they were bought for, this system has turned these workers into potentially for-profit commodities.

    As the article notes, 9 out of 10 Kuwaiti households have a domestic worker, so the potential size of this black market includes almost every Kuwaiti household. Part of what appears to be fueling this black market trade is a series of laws Kuwait introduced in 2015 intended to protect these domestic workers from abuse. The BBC met with over a dozen sellers, and almost all advocated confiscating the workers’ passports, confining them to the house, denying them any time off and giving them little or no access to a phone. So they really were actively treating these women as slaves. The BBC even found a 16-year-old girl for sale in Kuwait, despite Kuwaiti law mandating that all domestic workers must be over 21. So this black market potentially includes child slavery.

    After the BBC notified Facebook that Instagram was being used as a marketplace for this black market, Facebook announced that it had banned one of the hashtags used on Instagram to advertise these offers. But, of course, the BBC still found many related listings active on Instagram. Similarly, Google and Apple told the BBC that they were working with app developers to address the issue. The apps used for this black market, like the 4Sale app, can be used to buy and sell all sorts of things, not just domestic workers, which complicates cracking down on this practice. The 4Sale app even lets you filter the available listings according to race. The BBC continued to find the offending apps available on the Google Play and Apple app stores after giving the companies these notifications.

    And it’s not limited to Kuwait. The BBC also found hundreds of people advertised for sale in Saudi Arabia via Instagram and on the popular Haraj app. Given the relative lack of global attention given to the practice, it’s hard to believe this is limited to Kuwait and Saudi Arabia, especially since the Kafala system is also practiced in Bahrain, Oman, Qatar, the UAE, Jordan and Lebanon. Any country where effectively forced labor takes place could potentially utilize social media to facilitate these kinds of marketplaces.

    So while the pri­ma­ry prob­lem here stems from the fact that sys­tems like Kafala are still in use despite the clear poten­tial for abus­es, the fact that the social media giants only appear to have cracked down on this prac­tice after the BBC brought it to their atten­tion, and even then only appear to have made half-heart­ed attempts, makes them a big part of this prob­lem:

    BBC News Ara­bic

    Slave mar­kets found on Insta­gram and oth­er apps

    By Owen Pin­nell & Jess Kel­ly

    31 Octo­ber 2019

    Dri­ve around the streets of Kuwait and you won’t see these women. They are behind closed doors, deprived of their basic rights, unable to leave and at risk of being sold to the high­est bid­der.

    But pick up a smart­phone and you can scroll through thou­sands of their pic­tures, cat­e­gorised by race, and avail­able to buy for a few thou­sand dol­lars.

    An under­cov­er inves­ti­ga­tion by BBC News Ara­bic has found that domes­tic work­ers are being ille­gal­ly bought and sold online in a boom­ing black mar­ket.

    Some of the trade has been car­ried out on Face­book-owned Insta­gram, where posts have been pro­mot­ed via algo­rithm-boost­ed hash­tags, and sales nego­ti­at­ed via pri­vate mes­sages.

    Oth­er list­ings have been pro­mot­ed in apps approved and pro­vid­ed by Google Play and Apple’s App Store, as well as the e‑commerce plat­forms’ own web­sites.

    “What they are doing is pro­mot­ing an online slave mar­ket,” said Urmi­la Bhoola, the UN spe­cial rap­por­teur on con­tem­po­rary forms of slav­ery.

    “If Google, Apple, Face­book or any oth­er com­pa­nies are host­ing apps like these, they have to be held account­able.”

    After being alert­ed to the issue, Face­book said it had banned one of the hash­tags involved.

    Google and Apple said they were work­ing with app devel­op­ers to pre­vent ille­gal activ­i­ty.

    The ille­gal sales are a clear breach of the US tech firms’ rules for app devel­op­ers and users.

    How­ev­er, the BBC has found there are many relat­ed list­ings still active on Insta­gram, and oth­er apps avail­able via Apple and Google.

    Slave mar­ket

    Nine out of 10 Kuwaiti homes have a domes­tic work­er — they come from some of the poor­est parts of the world to the Gulf, aim­ing to make enough mon­ey to sup­port their fam­i­ly at home.

    Pos­ing as a cou­ple new­ly arrived in Kuwait, the BBC Ara­bic under­cov­er team spoke to 57 app users and vis­it­ed more than a dozen peo­ple who were try­ing to sell them their domes­tic work­er via a pop­u­lar com­mod­i­ty app called 4Sale.

    The sell­ers almost all advo­cat­ed con­fis­cat­ing the wom­en’s pass­ports, con­fin­ing them to the house, deny­ing them any time off and giv­ing them lit­tle or no access to a phone.

    The 4Sale app allowed you to fil­ter by race, with dif­fer­ent price brack­ets clear­ly on offer, accord­ing to cat­e­go­ry.

    “African work­er, clean and smi­ley,” said one list­ing. Anoth­er: “Nepalese who dares to ask for a day off.”

    When speak­ing to the sell­ers, the under­cov­er team fre­quent­ly heard racist lan­guage. “Indi­ans are the dirt­i­est,” said one, describ­ing a woman being adver­tised.

    Human rights vio­lat­ed

    The team were urged by app users, who act­ed as if they were the “own­ers” of these women, to deny them oth­er basic human rights, such as giv­ing them a “day or a minute or a sec­ond” off.

    One man, a police­man, look­ing to offload his work­er said: “Trust me she’s very nice, she laughs and has a smi­ley face. Even if you keep her up till 5am she won’t com­plain.”

    He told the BBC team how domes­tic work­ers were used as a com­mod­i­ty.

    “You will find some­one buy­ing a maid for 600 KD ($2,000), and sell­ing her on for 1,000 KD ($3,300),” he said.

    He sug­gest­ed how the BBC team should treat her: “The pass­port, don’t give it to her. You’re her spon­sor. Why would you give her her pass­port?”

    In one case, the BBC team was offered a 16-year-old girl. It has called her Fatou to pro­tect her real name.

    Fatou had been traf­ficked from Guinea in West Africa and had been employed as a domes­tic work­er in Kuwait for six months, when the BBC dis­cov­ered her. Kuwait­’s laws say that domes­tic work­ers must be over 21.

    Her sell­er’s sales pitch includ­ed the facts that she had giv­en Fatou no time off, her pass­port and phone had been tak­en away, and she had not allowed her to leave the house alone — all of which are ille­gal in Kuwait.

    Spon­sor’s per­mis­sion

    “This is the quin­tes­sen­tial exam­ple of mod­ern slav­ery,” said Ms Bhoola. “Here we see a child being sold and trad­ed like chat­tel, like a piece of prop­er­ty.”

    In most places in the Gulf, domes­tic work­ers are brought into the coun­try by agen­cies and then offi­cial­ly reg­is­tered with the gov­ern­ment.

    Poten­tial employ­ers pay the agen­cies a fee and become the offi­cial spon­sor of the domes­tic work­er.

    Under what is known as the Kafala sys­tem, a domes­tic work­er can­not change or quit her job, nor leave the coun­try with­out her spon­sor’s per­mis­sion.

    In 2015, Kuwait intro­duced some of the most wide-rang­ing laws to help pro­tect domes­tic work­ers. But the law was not pop­u­lar with every­one.

    Apps includ­ing 4Sale and Insta­gram enable employ­ers to sell the spon­sor­ship of their domes­tic work­ers to oth­er employ­ers, for a prof­it. This bypass­es the agen­cies, and cre­ates an unreg­u­lat­ed black mar­ket which leaves women more vul­ner­a­ble to abuse and exploita­tion.

    This online slave mar­ket is not just hap­pen­ing in Kuwait.

    In Sau­di Ara­bia, the inves­ti­ga­tion found hun­dreds of women being sold on Haraj, anoth­er pop­u­lar com­mod­i­ty app. There were hun­dreds more on Insta­gram, which is owned by Face­book.

    ‘Real hell’

    The BBC team trav­elled to Guinea to try to con­tact the fam­i­ly of Fatou, the child they had dis­cov­ered being offered for sale in Kuwait.

    Every year hun­dreds of women are traf­ficked from here to the Gulf as domes­tic work­ers.

    “Kuwait is real­ly a hell,” said one for­mer maid, who recalled being made to sleep in the same place as cows by the woman who employed her. “Kuwaiti hous­es are very bad,” said anoth­er. “No sleep, no food, noth­ing.”

    Fatou was found by the Kuwaiti author­i­ties and tak­en to the gov­ern­ment-run shel­ter for domes­tic work­ers. Two days lat­er she was deport­ed back to Guinea for being a minor.

    She told the BBC about her expe­ri­ence work­ing in three house­holds dur­ing her nine months in Kuwait: “They used to shout at me and call me an ani­mal. It hurt, it made me sad, but there was noth­ing I could do.”

    ...

    Hash­tag removed

    The Kuwaiti gov­ern­ment says it is “at war with this kind of behav­iour” and insist­ed the apps would be “heav­i­ly scru­ti­nised”.

    To date, no sig­nif­i­cant action has been tak­en against the plat­forms. And there has not been any legal action against the woman who tried to sell Fatou. The sell­er has not respond­ed to the BBC’s request for com­ment.

    Since the BBC team con­tact­ed the apps and tech com­pa­nies about their find­ings, 4Sale has removed the domes­tic work­er sec­tion of its plat­form.

    Facebook said it had banned the Arabic hashtag which translates as "#maidsfortransfer".

    “We will con­tin­ue to work with law enforce­ment, expert organ­i­sa­tions and indus­try to pre­vent this behav­iour on our plat­forms,” added a Face­book spokesman.

    There was no com­ment from the Sau­di com­mod­i­ty app, Haraj.

    Google said it was “deeply trou­bled by the alle­ga­tions”.

    “We have asked BBC to share addi­tion­al details so we can con­duct a more in-depth inves­ti­ga­tion,” it added. “We are work­ing to ensure that the app devel­op­ers put in place the nec­es­sary safe­guards to pre­vent indi­vid­u­als from con­duct­ing this activ­i­ty on their online mar­ket­places.”

    Apple said it “strict­ly pro­hib­it­ed” the pro­mo­tion of human traf­fick­ing and child exploita­tion in apps made avail­able on its mar­ket­place.

    “App devel­op­ers are respon­si­ble for polic­ing the user-gen­er­at­ed con­tent on their plat­forms,” it said.

    “We work with devel­op­ers to take imme­di­ate cor­rec­tive actions when­ev­er we find any issues and, in extreme cas­es, we will remove the app from the Store.

    “We also work with devel­op­ers to report any ille­gal­i­ties to local law enforce­ment author­i­ties.”

    The firms con­tin­ue to dis­trib­ute the 4Sale and Haraj apps, how­ev­er, on the basis that their pri­ma­ry pur­pose is to sell legit­i­mate goods and ser­vices.

    4Sale may have tack­led the prob­lem, but at the time of pub­li­ca­tion, hun­dreds of domes­tic work­ers were still being trad­ed on Haraj, Insta­gram and oth­er apps which the BBC has seen.

    ———-

    “Slave mar­kets found on Insta­gram and oth­er apps” by Owen Pin­nell & Jess Kel­ly; BBC News Ara­bic; 10/31/2019

    “What they are doing is pro­mot­ing an online slave mar­ket,” said Urmi­la Bhoola, the UN spe­cial rap­por­teur on con­tem­po­rary forms of slav­ery.”

    It’s an online de facto slave market fueling the traditional de facto slave market of the Kafala system, where foreign domestic workers literally relinquish their right to leave the job or the country. And while governments like Kuwait’s have belatedly passed regulations intended to protect these workers, the apps provide a loophole around those regulations:

    ...
    Spon­sor’s per­mis­sion

    “This is the quin­tes­sen­tial exam­ple of mod­ern slav­ery,” said Ms Bhoola. “Here we see a child being sold and trad­ed like chat­tel, like a piece of prop­er­ty.”

    In most places in the Gulf, domes­tic work­ers are brought into the coun­try by agen­cies and then offi­cial­ly reg­is­tered with the gov­ern­ment.

    Poten­tial employ­ers pay the agen­cies a fee and become the offi­cial spon­sor of the domes­tic work­er.

    Under what is known as the Kafala sys­tem, a domes­tic work­er can­not change or quit her job, nor leave the coun­try with­out her spon­sor’s per­mis­sion.

    In 2015, Kuwait intro­duced some of the most wide-rang­ing laws to help pro­tect domes­tic work­ers. But the law was not pop­u­lar with every­one.

    Apps includ­ing 4Sale and Insta­gram enable employ­ers to sell the spon­sor­ship of their domes­tic work­ers to oth­er employ­ers, for a prof­it. This bypass­es the agen­cies, and cre­ates an unreg­u­lat­ed black mar­ket which leaves women more vul­ner­a­ble to abuse and exploita­tion.
    ...

    And while Facebook, Google, and Apple have pledged to end this practice, not much appears to have been done at all. The Haraj app that's being used in Saudi Arabia is still available in the app stores, and hundreds of workers are still being bought and sold on Haraj, Instagram, and other apps:

    ...
    “If Google, Apple, Face­book or any oth­er com­pa­nies are host­ing apps like these, they have to be held account­able.”

    After being alert­ed to the issue, Face­book said it had banned one of the hash­tags involved.

    Google and Apple said they were work­ing with app devel­op­ers to pre­vent ille­gal activ­i­ty.

    ...

    How­ev­er, the BBC has found there are many relat­ed list­ings still active on Insta­gram, and oth­er apps avail­able via Apple and Google.

    ...

    Since the BBC team con­tact­ed the apps and tech com­pa­nies about their find­ings, 4Sale has removed the domes­tic work­er sec­tion of its plat­form.

    ...

    The firms con­tin­ue to dis­trib­ute the 4Sale and Haraj apps, how­ev­er, on the basis that their pri­ma­ry pur­pose is to sell legit­i­mate goods and ser­vices.

    4Sale may have tack­led the prob­lem, but at the time of pub­li­ca­tion, hun­dreds of domes­tic work­ers were still being trad­ed on Haraj, Insta­gram and oth­er apps which the BBC has seen.
    ...

    Also keep in mind that Kuwait­’s 2015 law giv­ing extra pro­tec­tions to these domes­tic work­ers still leaves them trapped in a sys­tem where they can’t leave with­out the per­mis­sion of their spon­sors. It’s still a wild­ly abu­sive sys­tem even with these new pro­tec­tions. Worse, Kuwait­’s laws pro­tect­ing these work­ers are the most exten­sive of the coun­tries that have this Kafala sys­tem. It’s part of what makes the role these social media giants are play­ing in facil­i­tat­ing this trade so egre­gious: they’re one of the only par­ties in this trade that can be real­is­ti­cal­ly expect­ed to even try to crack down on it, and yet, as we can see, that’s not actu­al­ly a real­is­tic expec­ta­tion.

    Posted by Pterrafractyl | November 7, 2019, 1:36 pm
  7. Is the BJP going to be interfering in the US 2020 election on behalf of Donald Trump? Yes, according to the BJP's general secretary, BL Santhosh. That was the threat he publicly made last week in response to Democratic criticisms of the anti-Muslim riots in New Delhi during Trump's visit. Santhosh was specifically replying to a tweet by Bernie Sanders when he made the threat. Sanders tweeted out that, "Over 200 million Muslims call India home. Widespread anti-Muslim mob violence has killed at least 27 and injured many more. Trump responds by saying, "That's up to India." This is a failure of leadership on human rights." In response, Santhosh tweeted, "How much ever neutral we wish to be you compel us to play a role in Presidential elections . Sorry to say so ... But you are compelling us ." So Sanders's criticism of Trump's failure to condemn the anti-Muslim riots prompted an open threat of 2020 election meddling on behalf of Trump by the BJP's general secretary:

    Huff­in­g­ton Post
    India

    BJP Gen­er­al Sec­re­tary Threat­ens Bernie Sanders With US Elec­tion Inter­fer­ence
    BJP’s BL San­thosh respond­ed to Sanders’ con­dem­na­tion of Don­ald Trump’s response to the vio­lence in Del­hi.

    By Meryl Sebas­t­ian
    27/02/2020 10:17 AM IST | Updat­ed

    BJP gen­er­al sec­re­tary BL San­thosh on Thurs­day said Demo­c­ra­t­ic pres­i­den­tial can­di­date Bernie Sanders’s reac­tion to the vio­lence in north­east Del­hi was com­pelling “us to play a role in Pres­i­den­tial elec­tions.”

    Sanders had shared the Wash­ing­ton Post’s report on the vio­lence and slammed US Pres­i­dent Don­ald Trump’s response, call­ing it “a fail­ure of lead­er­ship on human rights.”

    Over 200 million Muslims call India home. Widespread anti-Muslim mob violence has killed at least 27 and injured many more. Trump responds by saying, "That's up to India." This is a failure of leadership on human rights. https://t.co/tUX713Bz9Y — Bernie Sanders (@BernieSanders) February 26, 2020

    To this, San­thosh replied:

    How much ever neu­tral we wish to be you com­pel us to play a role in Pres­i­den­tial elec­tions . Sor­ry to say so ... But you are com­pelling us .
    — B L San­thosh (@blsanthosh) Feb­ru­ary 27, 2020

    He lat­er delet­ed the tweet.

    The BJP leader may have chan­nelled his inner Putin but threat­en­ing to inter­fere with demo­c­ra­t­ic elec­tions in anoth­er coun­try is no joke.

    ...

    Sanders is the lat­est in a long list of US law­mak­ers who have expressed deep con­cern over the vio­lence in India’s cap­i­tal.

    Demo­c­ra­t­ic can­di­date Sen­a­tor Eliz­a­beth War­ren had shared BBC’s report, say­ing “vio­lence against peace­ful pro­test­ers is nev­er accept­able”.

    It's important to strengthen relationships with democratic partners like India. But we must be able to speak truthfully about our values, including religious freedom and freedom of expression—and violence against peaceful protestors is never acceptable. https://t.co/UxkFNDI0rP — Elizabeth Warren (@ewarren) February 26, 2020

    This week, Trump visited India but the real story should be the communal violence targeting Muslims in Delhi right now. We cannot be silent as this tide of anti-Muslim violence continues across India. https://t.co/4VXFlk5pEg — Rashida Tlaib (@RashidaTlaib) February 26, 2020

    I condemn attacks against Muslims in India, and reject violence, bigotry, and religious intolerance. The US State Department should too. — Rep. Don Beyer (@RepDonBeyer) February 26, 2020

    On Wednes­day, Sen­a­tor Mark Warn­er from the Demo­c­ra­t­ic Par­ty and John Cornyn from the Repub­li­can Par­ty released a joint state­ment that said, “We are alarmed by the recent vio­lence in New Del­hi. We con­tin­ue to sup­port an open dia­logue on issues of sig­nif­i­cant con­cern in order to advance our vital long-term rela­tion­ship,”

    Warn­er and Cornyn are co-chairs of the Sen­ate India Cau­cus, the largest coun­try-spe­cif­ic cau­cus in the US Sen­ate.

    US Con­gress­man Jamie Raskin said:

    Horrified by the deadly violence unfolding in India, all fueled by religious hatred and fanaticism. Liberal democracies must protect religious freedom and pluralism, and avoid the path of discrimination and bigotry. https://t.co/hs4zYqYlT7 — Rep. Jamie Raskin (@RepRaskin) February 26, 2020

    Richard N. Haass, who heads the powerful Council on Foreign Relations, said the reason for India's relative success has been that its large Muslim minority saw itself as Indian.

    “But this is at risk owing to govt attempts to exploit iden­ti­ty pol­i­tics for polit­i­cal advan­tage,” he said.

    The US Com­mis­sion on Inter­na­tion­al Reli­gious Free­dom urged the Indi­an Gov­ern­ment to take swift action for the safe­ty of its cit­i­zens.

    Express­ing “grave con­cern” over the vio­lence, the US body said the Indi­an gov­ern­ment should pro­vide pro­tec­tion to peo­ple regard­less of their faith.

    ————

    “BJP Gen­er­al Sec­re­tary Threat­ens Bernie Sanders With US Elec­tion Inter­fer­ence” by Meryl Sebas­t­ian, Huff­in­g­ton Post India, 02/27/2020

    “Sanders had shared the Wash­ing­ton Post’s report on the vio­lence and slammed US Pres­i­dent Don­ald Trump’s response, call­ing it “a fail­ure of lead­er­ship on human rights.””

    So Bernie Sanders slams Trump for "a failure of leadership on human rights" over Trump's refusal to condemn the riots, and the BJP's general secretary acts like the BJP is now "compelled" to "play a role in Presidential elections". And while he later deleted the tweet, it's hard to ignore the claim of being "compelled" to interfere, deleted tweet or not:

    ...
    To this, San­thosh replied:

    How much ever neu­tral we wish to be you com­pel us to play a role in Pres­i­den­tial elec­tions . Sor­ry to say so ... But you are com­pelling us .
    — B L San­thosh (@blsanthosh) Feb­ru­ary 27, 2020

    He lat­er delet­ed the tweet.
    ...

    Now here's another article about the threat by Santhosh that points out that he's also a senior RSS leader. The article also mentions another tweet that Santhosh made after the second day of sectarian violence: "Now the Game Begins". It was more than a little ominous. And revealing. Recall that one of the most scandalous aspects of the anti-Muslim riots is the fact that BJP politicians were threatening vigilante violence in the lead-up to the anti-Muslim mobs, the police in Delhi appeared to be allowing it to happen, and it's the BJP-led federal government that controls the police in Delhi. So it really does look like Modi's government was openly stoking and then allowing these anti-Muslim mobs to run rampant in the capital during Trump's visit. That's all part of the chilling context of Santhosh's "Now the Game Begins" tweet:

    Nation­al Her­ald

    BJP organ­is­ing sec­re­tary threat­ens to inter­fere in US elec­tion, then deletes tweet
    After urg­ing Britons of Indi­an ori­gin to vote against the Labour Par­ty in the last elec­tion in UK, a BJP leader now threat­ens to influ­ence the US Pres­i­den­tial elec­tion.

    NH Web Desk
    Pub­lished: 27 Feb 2020, 3:32 PM

    While US cit­i­zens and insti­tu­tions are wor­ried over Russ­ian inter­fer­ence in the US Pres­i­den­tial elec­tion, a senior RSS leader and BJP’s Gen­er­al Sec­re­tary ( Organ­i­sa­tion) B.L. San­thosh has held out a threat to influ­ence the elec­tion in favour of Don­ald Trump.

    The threat was tweeted by Santhosh following the acerbic tweet by one of the Democratic Party aspirants for the US Presidency, Bernie Sanders, who is indeed the front runner in the Democratic Primaries. Sanders, an avowed socialist, had criticised Trump for negotiating defence contracts for US firms while in India, pointing out that the two countries could have far more profitably found areas of agreement to fight climate change.

    Sanders fol­lowed it up with anoth­er sharp tweet crit­i­cis­ing Indi­an Prime Min­is­ter Modi and the US Pres­i­dent for ignor­ing the Del­hi riots.

    In a sharp response, the BJP functionary tweeted, "How much ever neutral we wish to be, you compel us to play a role in Presidential elections," and added, "Sorry to say so…But you are compelling us". Later he deleted his tweet.

    Yet anoth­er con­tro­ver­sial tweet by the BJP leader, which again was delet­ed by him, relat­ed to the vio­lence in Del­hi. Some­what indis­creet­ly he had tweet­ed, “ Now the Game Begins” after the sec­ond day of vio­lence, which has left at least 32 peo­ple dead.

    The tweet by the BJP func­tionary not only estab­lish­es the party’s will­ing­ness and intent to inter­fere in elec­tions abroad but also the con­fi­dence it has in its own abil­i­ty to do so, leav­ing peo­ple to won­der if the par­ty does fight shy of inter­fer­ing in the elec­toral process in domes­tic elec­tions.

    ...

    At a media inter­ac­tion while in New Del­hi, Trump had respond­ed to a ques­tion on the Del­hi vio­lence by say­ing that he would “leave it to India” to deal with the CAA and the protests and he had not dis­cussed the vio­lence with PM Naren­dra Modi.

    Sanders has criticised both Modi and Trump several times in the past. He was also critical of the "Howdy Modi" rally in Houston which was held last September, and of the silence of the US government and White House on Kashmir.

    Ro Khan­na, Sanders’ For­eign Pol­i­cy advis­er and Indi­an-Amer­i­can Rep­re­sen­ta­tive, had also said that while India remained an impor­tant strate­gic ally, the “India that has cap­tured the imag­i­na­tion of the world, and the world respects, is the India of 1947, an India shaped by Gand­hi and Nehru. It’s not the India of the eleventh cen­tu­ry. Any effort to under­mine India’s con­cep­tion as a plu­ral­is­tic democ­ra­cy and go back to the medieval ages will not be in India’s inter­est.”

    Democratic Party leader and member of the House of Representatives Pramila Jayapal, an Indian-American, had sought to bring a US Congressional resolution on Kashmir to a vote last year, a move the Modi government rebuked, also refusing to attend a House Foreign Affairs Committee meeting.

    The BJP's Foreign Affairs Cell asked British Indians to vote for the Conservative Party, and against the Labour Party, over Labour's resolution on the nullification of Article 370. The British government had formally raised its concerns with the Ministry of External Affairs last year, as reported by The Hindu.

    On which, the Min­istry of Exter­nal Affairs said that the “Elec­tion is an inter­nal mat­ter of U.K. Peo­ple who vot­ed in the elec­tions are all U.K. nation­als. We do not wish to get involved as to which sec­tion of their pop­u­la­tion is sup­port­ing whom.”

    ————

    “BJP organ­is­ing sec­re­tary threat­ens to inter­fere in US elec­tion, then deletes tweet” by NH Web Desk, Nation­al Her­ald India, 02/27/2020

    “Yet anoth­er con­tro­ver­sial tweet by the BJP leader, which again was delet­ed by him, relat­ed to the vio­lence in Del­hi. Some­what indis­creet­ly he had tweet­ed, “ Now the Game Begins” after the sec­ond day of vio­lence, which has left at least 32 peo­ple dead.

    "Now the Game Begins". Yeah, that's a pretty indiscreet way of showing his support for the violence. No wonder he was so upset by Sanders's tweet. And note how a number of Democrats have angered the BJP in recent years. It's a reminder that if the BJP does decide to directly intervene in the election, it's likely going to be a general intervention against Democrats and not just Sanders:

    ...
    Sanders has criticised both Modi and Trump several times in the past. He was also critical of the "Howdy Modi" rally in Houston which was held last September, and of the silence of the US government and White House on Kashmir.

    Ro Khan­na, Sanders’ For­eign Pol­i­cy advis­er and Indi­an-Amer­i­can Rep­re­sen­ta­tive, had also said that while India remained an impor­tant strate­gic ally, the “India that has cap­tured the imag­i­na­tion of the world, and the world respects, is the India of 1947, an India shaped by Gand­hi and Nehru. It’s not the India of the eleventh cen­tu­ry. Any effort to under­mine India’s con­cep­tion as a plu­ral­is­tic democ­ra­cy and go back to the medieval ages will not be in India’s inter­est.”

    Democratic Party leader and member of the House of Representatives Pramila Jayapal, an Indian-American, had sought to bring a US Congressional resolution on Kashmir to a vote last year, a move the Modi government rebuked, also refusing to attend a House Foreign Affairs Committee meeting.
    ...

    And note that the BJP interfering in foreign elections isn't just some empty threat. The party literally did exactly that in the UK last year when it openly encouraged UK voters to vote against Labour candidates:

    ...
    The BJP's Foreign Affairs Cell asked British Indians to vote for the Conservative Party, and against the Labour Party, over Labour's resolution on the nullification of Article 370. The British government had formally raised its concerns with the Ministry of External Affairs last year, as reported by The Hindu.

    On which, the Min­istry of Exter­nal Affairs said that the “Elec­tion is an inter­nal mat­ter of U.K. Peo­ple who vot­ed in the elec­tions are all U.K. nation­als. We do not wish to get involved as to which sec­tion of their pop­u­la­tion is sup­port­ing whom.”
    ...

    Now here's a November 2019 article about that BJP meddling in the UK's elections. It was done via the group Overseas Friends of BJP UK (OFBJPUK). The president of the group openly announced that it was planning to campaign in favor of the Conservatives in 48 seats marginally held by Labour candidates:

    Open Democ­ra­cy UK

    Con­cerns over ‘for­eign inter­fer­ence’ as India-linked Hin­du nation­al­ist group tar­gets Labour can­di­dates

    Cam­paign­ers linked to Indi­an prime min­is­ter Modi’s BJP say they’re tar­get­ing 48 Labour-Tory mar­gin­als, also prompt­ing fears of height­ened eth­nic ten­sions.

    Sun­ny Hun­dal
    6 Novem­ber 2019

    Activists direct­ly linked to India’s rul­ing Hin­du nation­al­ist par­ty, the BJP, have vowed to cam­paign on behalf of the Con­ser­v­a­tive Par­ty – rais­ing con­cerns about attempt­ed for­eign inter­fer­ence in next month’s UK gen­er­al elec­tion.

    The cam­paign has alarmed some Labour Par­ty MPs stand­ing for reelec­tion, who say the prospect of for­eign inter­fer­ence by “reli­gious hard­lin­ers” could stir up inter-com­mu­ni­ty ten­sions.

    In July, Cana­di­an offi­cials warned of poten­tial elec­tion inter­fer­ence from the BJP gov­ern­ment in Canada’s upcom­ing elec­tions. In a report, the civ­il ser­vants accused India and Chi­na of try­ing to pro­mote sym­pa­thet­ic can­di­dates and spread mis­in­for­ma­tion.

    On Tuesday the president of Overseas Friends of BJP UK (OFBJPUK) told The Times of India his campaign group was planning to campaign in 48 marginal seats to help Conservative candidates.

    “We have a team in each con­stituen­cy which is going round with the Tory can­di­date leaflet­ing, speak­ing to peo­ple and per­suad­ing them to vote Tory,” said Kuldeep Singh Shekhawat. “The teams are organ­ised by the BJP and Friends of India Soci­ety Inter­na­tion­al.”

    It is extreme­ly unusu­al for a group explic­it­ly tied to a for­eign polit­i­cal par­ty to open­ly declare its intent to cam­paign for a spe­cif­ic British polit­i­cal par­ty dur­ing an elec­tion.

    Shekhawat also said their cam­paign will tar­get Britain’s only two Sikh MPs – Tan Dhe­si and Preet Gill – both of whom are Labour, and replace them with Con­ser­v­a­tives.

    “We are work­ing with the Tory can­di­dates in Kei­th Vaz’s ex seat, Tan­man­jeet Singh Dhesi’s seat, Preet Gill’s seat, Lisa Nandy’s seat, Seema Malhotra’s and Valerie Vaz’s seats,” Shekhawat said, because – he claimed – “some of them have signed let­ters against India”.

    Friends of Modi

    Over­seas Friends of BJP UK (OFBJPUK), found­ed in 1992, says it aims to “spread a pos­i­tive mes­sage of the BJP Gov­ern­ment in India” led by Prime Min­is­ter Naren­dra Modi, who has attract­ed recent con­tro­ver­sy and inter­na­tion­al con­dem­na­tion by strip­ping the dis­put­ed ter­ri­to­ry of Kash­mir of its semi-autonomous sta­tus. The res­i­dents of Kash­mir are now liv­ing under severe lock­down, with TV chan­nels cut, cur­fews and thou­sands of troops deployed to the region.

    Tan­man­jeet Singh Dhe­si, who is stand­ing to be re-elect­ed as the Labour MP for Slough, told open­Democ­ra­cy: “There has been a lot of talk in recent years about for­eign exter­nal inter­fer­ence in elec­tions and sure­ly this is just anoth­er prime exam­ple of it.”

    He also said the Labour Par­ty was not “anti-India”, as some crit­ics have claimed.

    “Unlike what some peo­ple may try to por­tray, the Labour Par­ty is not anti-India, anti-Pak­istan, or anti any­one else. We mere­ly stand up for and have always stood up for the human rights of all – regard­less of back­ground, colour or creed.”

    Labour’s Lisa Nandy, who is stand­ing for re-elec­tion in Wigan, told open­Democ­ra­cy: “The idea that the BJP is going to have any sort of cam­paign pres­ence on the ground and make any inroads here is some­what ridicu­lous.

    “Peo­ple in Wigan wouldn’t take kind­ly to being told what to do by Man­ches­ter, let alone India.”

    Jaskaran Singh, for­mer direc­tor at the World Sikh Orga­ni­za­tion of Cana­da said: “Top Cana­di­an secu­ri­ty experts con­firmed what the Sikh com­mu­ni­ty has always known – that India, cur­rent­ly under BJP rule, is using com­mu­ni­ty pres­sure, media manip­u­la­tion and oth­er tac­tics to pres­sure and malign the Sikh com­mu­ni­ty in the hopes of influ­enc­ing the out­come of anoth­er coun­try’s elec­tion results.”

    Char­i­ty Com­mis­sion

    The cam­paign has also raised con­cerns at the Char­i­ty Com­mis­sion. Shekhawat said that OFBJPUK had been in talks with Hin­du tem­ples in the UK about cam­paign­ing on behalf of Con­ser­v­a­tive can­di­dates too. But most of those tem­ples would be reg­is­tered as char­i­ties.

    A spokesper­son for the Char­i­ty Com­mis­sion told open­Democ­ra­cy: “The pub­lic expect char­i­ties to be dri­ven by their pur­pose and rep­re­sent­ing their ben­e­fi­cia­ries at all times, which is all the more impor­tant in this intense polit­i­cal envi­ron­ment. Char­i­ties must nev­er engage in par­ty polit­i­cal activ­i­ty.”

    The Char­i­ty Com­mis­sion has inter­vened in the issue of polit­i­cal activ­i­ty by Hin­du tem­ples before. Just before the 2015 and 2017 gen­er­al elec­tions, the Nation­al Coun­cil of Hin­du Tem­ples (NCHT) sent out emails urg­ing Hin­dus to vote Con­ser­v­a­tive. The Char­i­ty Com­mis­sion inter­vened on both occa­sions and forced the NCHT to with­draw its advice.

    Kash­mir

    Last month the NCHT also sent a let­ter to Jere­my Cor­byn accus­ing the Labour Par­ty of “inter­nal apartheid” and “anti-Indi­an racism”. It said that Labour had kept its “Indi­an mem­bers” in the dark about a motion over Kash­mir that had been passed at the par­ty con­fer­ence. The NCHT also claimed that Labour was “per­ilous­ly close to becom­ing direct sup­port­ers of Islamist ter­ror organ­i­sa­tions such as al-Qae­da and ISIS”.

    A Labour MP who want­ed to remain anony­mous told open­Democ­ra­cy that con­cerns over Kash­mir were being raised by Con­ser­v­a­tive MPs too.

    Last week the Con­ser­v­a­tive MP Steven Bak­er asked Boris John­son about “seri­ous alle­ga­tions of human rights abus­es” in the region and argued that this was not just a mat­ter for India and Pak­istan to decide.

    ...

    ———-

    “Con­cerns over ‘for­eign inter­fer­ence’ as India-linked Hin­du nation­al­ist group tar­gets Labour can­di­dates” by Sun­ny Hun­dal, Open Democ­ra­cy UK, 11/06/2019

    "On Tuesday the president of Overseas Friends of BJP UK (OFBJPUK) told The Times of India his campaign group was planning to campaign in 48 marginal seats to help Conservative candidates."

    The BJP sure is open about its foreign election meddling schemes these days. As the article describes, it is extremely unusual for a group explicitly tied to a foreign political party to openly declare its intent to campaign for a specific British political party during an election. But you have to wonder how unusual it is to intervene — openly or covertly — anymore:

    ...
    “We have a team in each con­stituen­cy which is going round with the Tory can­di­date leaflet­ing, speak­ing to peo­ple and per­suad­ing them to vote Tory,” said Kuldeep Singh Shekhawat. “The teams are organ­ised by the BJP and Friends of India Soci­ety Inter­na­tion­al.”

    It is extreme­ly unusu­al for a group explic­it­ly tied to a for­eign polit­i­cal par­ty to open­ly declare its intent to cam­paign for a spe­cif­ic British polit­i­cal par­ty dur­ing an elec­tion.

    Shekhawat also said their cam­paign will tar­get Britain’s only two Sikh MPs – Tan Dhe­si and Preet Gill – both of whom are Labour, and replace them with Con­ser­v­a­tives.

    “We are work­ing with the Tory can­di­dates in Kei­th Vaz’s ex seat, Tan­man­jeet Singh Dhesi’s seat, Preet Gill’s seat, Lisa Nandy’s seat, Seema Malhotra’s and Valerie Vaz’s seats,” Shekhawat said, because – he claimed – “some of them have signed let­ters against India”.

    Friends of Modi

    Over­seas Friends of BJP UK (OFBJPUK), found­ed in 1992, says it aims to “spread a pos­i­tive mes­sage of the BJP Gov­ern­ment in India” led by Prime Min­is­ter Naren­dra Modi, who has attract­ed recent con­tro­ver­sy and inter­na­tion­al con­dem­na­tion by strip­ping the dis­put­ed ter­ri­to­ry of Kash­mir of its semi-autonomous sta­tus. The res­i­dents of Kash­mir are now liv­ing under severe lock­down, with TV chan­nels cut, cur­fews and thou­sands of troops deployed to the region.
    ...

    And the Overseas Friends of BJP UK wasn't working alone. It was in talks with Hindu temples about campaigning for Conservatives, potentially threatening the temples' charitable status. So the BJP is literally trying to get Hindu leadership in the UK to choose sides. And this wouldn't be the first time the National Council of Hindu Temples (NCHT) got involved in UK politics in recent years, so it was the kind of situation that was ripe for BJP overtures:

    ...
    The cam­paign has alarmed some Labour Par­ty MPs stand­ing for reelec­tion, who say the prospect of for­eign inter­fer­ence by “reli­gious hard­lin­ers” could stir up inter-com­mu­ni­ty ten­sions.

    ...

    Char­i­ty Com­mis­sion

    The cam­paign has also raised con­cerns at the Char­i­ty Com­mis­sion. Shekhawat said that OFBJPUK had been in talks with Hin­du tem­ples in the UK about cam­paign­ing on behalf of Con­ser­v­a­tive can­di­dates too. But most of those tem­ples would be reg­is­tered as char­i­ties.

    A spokesper­son for the Char­i­ty Com­mis­sion told open­Democ­ra­cy: “The pub­lic expect char­i­ties to be dri­ven by their pur­pose and rep­re­sent­ing their ben­e­fi­cia­ries at all times, which is all the more impor­tant in this intense polit­i­cal envi­ron­ment. Char­i­ties must nev­er engage in par­ty polit­i­cal activ­i­ty.”

    The Char­i­ty Com­mis­sion has inter­vened in the issue of polit­i­cal activ­i­ty by Hin­du tem­ples before. Just before the 2015 and 2017 gen­er­al elec­tions, the Nation­al Coun­cil of Hin­du Tem­ples (NCHT) sent out emails urg­ing Hin­dus to vote Con­ser­v­a­tive. The Char­i­ty Com­mis­sion inter­vened on both occa­sions and forced the NCHT to with­draw its advice.
    ...

    And then the arti­cle notes that even Cana­da issued a warn­ing about pos­si­ble elec­tion inter­fer­ence from Mod­i’s BJP. It’s as if for­eign elec­tion med­dling by the BJP is an open secret:

    ...
    In July, Cana­di­an offi­cials warned of poten­tial elec­tion inter­fer­ence from the BJP gov­ern­ment in Canada’s upcom­ing elec­tions. In a report, the civ­il ser­vants accused India and Chi­na of try­ing to pro­mote sym­pa­thet­ic can­di­dates and spread mis­in­for­ma­tion.

    ...

    Jaskaran Singh, for­mer direc­tor at the World Sikh Orga­ni­za­tion of Cana­da said: “Top Cana­di­an secu­ri­ty experts con­firmed what the Sikh com­mu­ni­ty has always known – that India, cur­rent­ly under BJP rule, is using com­mu­ni­ty pres­sure, media manip­u­la­tion and oth­er tac­tics to pres­sure and malign the Sikh com­mu­ni­ty in the hopes of influ­enc­ing the out­come of anoth­er coun­try’s elec­tion results.”
    ...

    And that’s all part of why we should take the threats of pro-Trump/pro-Repub­li­can elec­tion inter­fer­ence by the BJP’s gen­er­al sec­re­tary quite seri­ous­ly. It was­n’t an emp­ty threat. The BJP has expe­ri­ence doing this kind of stuff. What’s going to be more inter­est­ing to see is if the BJP does its 2020 elec­tion med­dling open­ly or tries to hide it. We’ll see. But don’t be super sur­prised if we hear about a bunch of ‘Russ­ian trolls’ tar­get­ing Indi­an Amer­i­cans over Face­book and What­sApp with mes­sages about how only Trump sup­ports Indi­a’s sov­er­eign­ty or some­thing.

    Posted by Pterrafractyl | March 3, 2020, 3:30 pm
  8. @Pterrafractyl–

    There could be no more egre­gious exam­ple of Brain­dead Bernie’s abject stu­pid­i­ty, all-encom­pass­ing hypocrisy and the igno­rance that auto­mat­i­cal­ly derives from the com­bi­na­tion of these two char­ac­ter­is­tics.

    That Tul­si Gab­bard, a mem­ber of the Sanders Insti­tute, who placed Boinie’s name up for nom­i­na­tion at the 2016 Demo­c­ra­t­ic Nation­al Con­ven­tion and who was his prospec­tive Vice-Pres­i­den­tial can­di­date has been the point-per­son for Team Modi in the U.S. has obvi­ous­ly escaped the gaze of His Low­li­ness.

    Gab­bard is “the Sangh’s mas­cot in the U.S.”–the “mas­cot” of the Hin­dut­va fas­cist RSS that mur­dered Gand­hi and is the dri­ving force behind Team Modi.

    https://spitfirelist.com/for-the-record/ftr-991-hindutva-fascism-part-4-the-hare-krishna-cult/

    Give me a F*g break!

    Best,

    Dave

    Posted by Dave Emory | March 3, 2020, 5:13 pm
  9. We're getting more details on the charges against Steven Carrillo and his accomplice, Robert Justus, in relation to the murder of a federal security guard in Oakland near a George Floyd protest. Details like the online chats where Carrillo and Justus appear to have first met and developed their plans.

    Part of what's so disturbing about the emerging picture is how casually Carrillo and Justus, who don't appear to have known each other in real life before meeting to carry out the Oakland attacks, seemed willing to meet up to carry out a domestic terror attack. Carrillo was clearly very excited about using the George Floyd protests to encourage the protesters to attack law enforcement by setting the example himself, and he was able to find at least one other person he appears to have never met before, Robert Justus, to help him. As Carrillo posted on Facebook the morning of the Oakland shootings, "Go to the riots and support our own cause. Show them the real targets. Use their anger to fuel our fire. Think outside the box. We have mobs of angry people to use to our advantage." So there's no question that Carrillo was intent on going to protests to attack law enforcement with the specific hope of encouraging protesters to do the same. He clearly states that was his intent. On Facebook.

    And as we probably should have expected, there might be a third person involved. It's unclear how direct their involvement was, but based on the chats between Carrillo and Justus there was definitely an unnamed third person involved with the planning. On May 28, the day before the shooting, Carrillo posted at 7:20 a.m. in a Facebook group, "It's on our coast now, this needs to be nationwide. It's a great opportunity to target the specialty soup bois. Keep that energy going," followed by two fire emojis and a link to a YouTube video showing a large crowd attacking two California Highway Patrol vehicles. "Specialty soup bois" is a 'boogaloo' term for federal agents (a play on words on the 'alphabet soup agencies' term). Minutes later, Justus responded with, "Lets boogie." And then, at 6:44 p.m., a third mystery user commented: "Starting tomorrow, Oakland be popping off. Maybe more." So this third person knew about the time and location of the shooting:

    Asso­ci­at­ed Press

    FBI: Face­book exchange pre­ced­ed dead­ly attack on offi­cer

    By The Asso­ci­at­ed Press
    06/16/2020

    The crim­i­nal com­plaint against Steven Car­ril­lo includes a Face­book exchange with his alleged accom­plice, Robert Jus­tus Jr., and a third per­son. The FBI says it is evi­dence of their affil­i­a­tion with the anti-gov­ern­ment “booga­loo” move­ment and plan to tar­get fed­er­al law enforce­ment offi­cers dur­ing protests against police bru­tal­i­ty. The posts were obtained from Carrillo’s Face­book account under a search war­rant.

    Here is the sum­ma­ry from the com­plaint filed against Car­ril­lo for the May 29 killing of fed­er­al secu­ri­ty offi­cer David Patrick Under­wood and wound­ing of his part­ner at the U.S. cour­t­house in Oak­land, Cal­i­for­nia.

    MAY 28

    —7:20 a.m.: Car­ril­lo post­ed in a Face­book group, “It’s on our coast now, this needs to be nation­wide. It’s a great oppor­tu­ni­ty to tar­get the spe­cial­ty soup bois. Keep that ener­gy going.” The state­ment was fol­lowed by two fire emo­jis and a link to a YouTube video show­ing a large crowd attack­ing two Cal­i­for­nia High­way Patrol vehi­cles.

    —7:37 a.m.: Jus­tus respond­ed, “Lets boo­gie.” Anoth­er user com­ment­ed at 6:44 p.m.: “Start­ing tomor­row, Oak­land be pop­ping off. Maybe more.”

    —Accord­ing to the FBI “soup bois” may be a term that fol­low­ers of the booga­loo move­ment used to refer to fed­er­al law enforce­ment agents. “Let’s boo­gie” is a state­ment of agree­ment to engage in attacks.

    MAY 29

    —7:57 a.m.: Car­ril­lo com­ment­ed on Face­book, “If it kicks off? Its (sic) kick­ing off now and if its (sic) not kick­ing off in your hood then start it. Show them the tar­gets.”

    —8:02 a.m.: Car­ril­lo post­ed on Face­book, “Go to the riots and sup­port our own cause. Show them the real tar­gets. Use their anger to fuel our fire. Think out­side the box. We have mobs of angry peo­ple to use to our advan­tage.

    ...

    ———–

    “FBI: Face­book exchange pre­ced­ed dead­ly attack on offi­cer” by The Asso­ci­at­ed Press; Asso­ci­at­ed Press; 06/16/2020

    “—7:37 a.m.: Jus­tus respond­ed, “Lets boo­gie.” Anoth­er user com­ment­ed at 6:44 p.m.: “Start­ing tomor­row, Oak­land be pop­ping off. Maybe more.”

    It's not just Carrillo and Justus. Someone else knew, at a minimum, about Carrillo's plans. And that keeps open the question of the extent to which this entire event was largely driven by Carrillo himself or whether he was operating as part of a larger organized group. But based on what Justus is reportedly telling the FBI, the planned violence wasn't necessarily going to be limited to attacks on law enforcement. Justus claims that he had to talk Carrillo out of attacking civilians and firing on a helicopter. So if Justus isn't lying, the original plans for the attacks in Oakland involved attacks on law enforcement and protesters:

    SFist
    SF News

    San­ta Cruz Shoot­er Charged Along With ‘Booga­loo Move­ment’ Accom­plice In Oak­land Shoot­ing Of Fed­er­al Offi­cers

    Jay Bar­mann
    16 June 2020

    Once again, a dis­turb­ing cor­ner of the inter­net has popped into main­stream head­lines by way of two men who were con­vinced a civ­il war is loom­ing, and that mem­bers of law enforce­ment need to be killed. They found each oth­er on Face­book, and now they stand accused of killing a fed­er­al offi­cer and wound­ing anoth­er out­side Oak­land’s Ronald V. Del­lums Fed­er­al Build­ing — and one of them is already accused of killing a San­ta Cruz Sher­if­f’s deputy a week lat­er.

    32-year-old Air Force Sergeant Steven Car­ril­lo was charged in fed­er­al court on Tues­day with mur­der and attempt­ed mur­der con­nect­ed with the shoot­ing death of 53-year-old fed­er­al secu­ri­ty offi­cer David Patrick Under­wood on May 29, and the shoot­ing of his part­ner in Oak­land. Also charged along­side Car­ril­lo was 30-year-old Robert Alvin Jus­tus Jr. of Mill­brae, a fel­low fol­low­er of the loose­ly defined Booga­loo move­ment who alleged­ly drove the white van.

    Jus­tus turned him­self in to fed­er­al author­i­ties last week, as the Mer­cury News reports, in the days fol­low­ing the fatal June 6 shoot­ing in San­ta Cruz Coun­ty that Car­ril­lo was arrest­ed for. Fed­er­al agents were report­ed­ly already sur­veilling Jus­tus, hav­ing found the social media ties between him and Car­ril­lo.

    Mes­sages between the two on Face­book spell out their inten­tions remark­ably clear­ly, as the fed­er­al com­plaint now shows us. “Go to the riots and sup­port our own cause. Show them the real tar­gets,” Car­ril­lo alleged­ly wrote in one post pri­or to the May 29 protests in Oak­land, which he and Jus­tus appar­ent­ly used as cov­er for their own motives to kill fed­er­al offi­cers. “Use their anger to fuel our fire. Think out­side the box. We have mobs of angry peo­ple to use to our advan­tage.”

    Ear­li­er state­ments from a friend of Car­ril­lo’s sug­gest­ed that he may have been pushed over the edge by images of law enforce­ment using force against civil­ian pro­test­ers around the coun­try in the wake of the killing of George Floyd.

    In one exchange between Car­ril­lo and Jus­tus in a Face­book group on the morn­ing of May 28, just after one of the first nights of chaos in Min­neapo­lis, Car­ril­lo alleged­ly said, “It’s on our coast now, this needs to be nation­wide. It’s a great oppor­tu­ni­ty to tar­get the spe­cial­ty group soup bois.” Jus­tus report­ed­ly replied, “Let’s boo­gie.”

    Per the Mer­cury News, the term “soup bois” is “com­mon­ly used by Booga­loo fol­low­ers to refer to fed­er­al agents.”

    As the New York Times reports, like too much of what drib­bles out of the world of 4chan and Red­dit and the spaces where dis­af­fect­ed straight men (most­ly) foment racism and rev­o­lu­tion, “booga­loo does not rep­re­sent a cohe­sive or sin­gu­lar ide­ol­o­gy.” It seems part­ly based on jokey memes, mis­placed rage, and half-baked ide­olo­gies, and yet some of its so-called adher­ents have been tak­ing dis­turb­ing and vio­lent action offline in recent months, tak­ing the ten­sion of the pan­dem­ic, forced lock­downs, and var­i­ous protests — includ­ing those to reopen the coun­try — as signs that their pre­dict­ed sec­ond civ­il war is upon us.

    The non­prof­it called the Net­work Con­ta­gion Research Insti­tute, which stud­ies online mis­in­for­ma­tion, recent­ly released a report about how the pan­dem­ic had inflamed the “mili­tia-sphere” online at the same time as it inflamed QAnon con­spir­a­cy obses­sives and oth­ers.

    Per the Times:

    The [Booga­loo] move­ment attracts both far-right white suprema­cists and some armed men who joined the Black Lives Mat­ter protests because of their anger at the police and oth­er sym­bols of gov­ern­ment author­i­ty. The 1992 siege by fed­er­al law enforce­ment agents over firearms charges at Ruby Ridge in Ida­ho, which left two peo­ple dead, has long been a ral­ly­ing cry.

    And per the Mer­cury News:

    Ear­li­er this month, the FBI arrest­ed three adher­ents to the Booga­loo move­ment in Neva­da, charg­ing them with incit­ing vio­lence with Molo­tov cock­tails and oth­er explo­sives at protests over the death of George Floyd. In April, a Texarkana, Texas man with alleged ties to Booga­loo was arrest­ed on sus­pi­cion of cap­i­tal attempt­ed mur­der of a peace offi­cer. He had two pis­tols and was wear­ing a bal­lis­tic vest when he was arrest­ed, author­i­ties said.

    CBS News has picked up the story of the federal charges against Carrillo, and what appears to be a disturbing trend of lone gunmen emerging from the shadowy chatrooms of this so-called movement, targeting law enforcement for reasons that seem to have nothing to do with the Black Lives Matter or anti-police-brutality causes.

    As FBI special agent in charge Jack Bennett said in a Tuesday press conference, "To be clear, Carrillo elected to travel to Oakland to conduct this murder and take advantage of a time when this nation was mourning the killing of George Floyd. There is no evidence that these men had any intention to join the demonstration in Oakland as some in the media have asked. They came to Oakland to kill cops."

    Jus­tus alleged­ly did some “sur­veil­lance” on foot when the pair arrived in Oak­land on May 29, and then took the wheel as they did a dri­ve-by of the fed­er­al build­ing, tar­get­ing a guard sta­tion while loud and live­ly protest activ­i­ty was going on just a few blocks away. Car­ril­lo is accused of the actu­al shoot­ing, using a home­made “ghost gun” that lacked ser­i­al num­bers.

    In the San­ta Cruz shoot­ing, he was found car­ry­ing an AR-15 rifle, and mul­ti­ple oth­er weapons and makeshift explo­sives were report­ed­ly found in his vehi­cle and on his per­son.

    Scrawled in blood on one vehicle the day of that killing were the phrases "I became unreasonable" — a reference to libertarian/Boogaloo/anti-government hero Marvin Heemeyer, who destroyed 13 buildings in Colorado in 2004 over a zoning dispute with the government — and "stop the duopoly," another popular meme phrase referring to the failure of the two-party system.

    As the Times explains, the Boogaloo movement appears to have taken form using a joke reference to the 1984 cult-classic breakdancing film Breakin' 2: Electric Boogaloo, and like all things on the internet it has lost connection to that reference and morphed into other phrases like "Big Igloo" and "Big Luau" — giving birth to visual references used by members, like Hawaiian shirts and the black and white American flag with an igloo replacing the stars that was reportedly found in Carrillo's possession.

    An eight-day manhunt for Carrillo did not succeed in locating him before a call came in to the Santa Cruz Sheriff's Department from a neighbor who'd noticed a suspicious amount of firepower amassed in Carrillo's van. The two deputies could not have known they were in for an ambush by an anti-government activist with every intention of killing them, but that is allegedly what occurred. 38-year-old Sheriff's Sergeant Damon Gutzwiller was killed in the ambush.

    Jus­tus told his sto­ry after show­ing up at the fed­er­al build­ing on Gold­en Gate Avenue in San Fran­cis­co five days after Car­ril­lo’s arrest, accom­pa­nied by his moth­er. After meet­ing online, Jus­tus went to meet Car­ril­lo at the San Lean­dro BART sta­tion the evening of May 29, and accord­ing to state­ments he gave to fed­er­al author­i­ties, it sounds like he was quick­ly in over his head. The pair removed the van’s license plates. Jus­tus claims that he tried to talk Car­ril­lo out of killing any­one, and says he con­vinced Car­ril­lo not to shoot any civil­ians, or to shoot at a heli­copter.

    As the Mer­cury News reports, fol­low­ing the San­ta Cruz shoot­ing, fed­er­al author­i­ties linked Car­ril­lo’s van to the Oak­land shoot­ing, and quick­ly found cell­phone pings that con­nect­ed Car­ril­lo to Oak­land — he had turned his phone off between 8 and 10 p.m. that night, but he drove home down along the Bay rather than take any bridges where his van might be clocked by a cam­era. His phone appar­ent­ly pinged on a tow­er near the Oak­land Zoo as he turned it back on.

    ...

    ————

    “San­ta Cruz Shoot­er Charged Along With ‘Booga­loo Move­ment’ Accom­plice In Oak­land Shoot­ing Of Fed­er­al Offi­cers” by Jay Bar­mann; SFist; 06/16/2020

    “Jus­tus told his sto­ry after show­ing up at the fed­er­al build­ing on Gold­en Gate Avenue in San Fran­cis­co five days after Car­ril­lo’s arrest, accom­pa­nied by his moth­er. After meet­ing online, Jus­tus went to meet Car­ril­lo at the San Lean­dro BART sta­tion the evening of May 29, and accord­ing to state­ments he gave to fed­er­al author­i­ties, it sounds like he was quick­ly in over his head. The pair removed the van’s license plates. Jus­tus claims that he tried to talk Car­ril­lo out of killing any­one, and says he con­vinced Car­ril­lo not to shoot any civil­ians, or to shoot at a heli­copter.

    The two apparently met online, then met in real life on the evening of May 29 at the San Leandro BART station. And they were quickly planning their 'boogaloo' attacks. Attacks that, according to Justus, would have included attacks on civilians and even a helicopter had Justus not talked Carrillo out of it. And yet the two were apparently posting pretty clearly in their Facebook posts about their intent to head to Oakland to encourage attacks on law enforcement. It's one of the more remarkable aspects of this whole story: it's not entirely clear how much they were planning on hiding their actions. On the one hand, they drove around in a van without plates and used a home-made 'ghost gun'. But on the other hand, they were posting their plans on Facebook, which isn't exactly covert:

    ...
    Jus­tus turned him­self in to fed­er­al author­i­ties last week, as the Mer­cury News reports, in the days fol­low­ing the fatal June 6 shoot­ing in San­ta Cruz Coun­ty that Car­ril­lo was arrest­ed for. Fed­er­al agents were report­ed­ly already sur­veilling Jus­tus, hav­ing found the social media ties between him and Car­ril­lo.

    Mes­sages between the two on Face­book spell out their inten­tions remark­ably clear­ly, as the fed­er­al com­plaint now shows us. “Go to the riots and sup­port our own cause. Show them the real tar­gets,” Car­ril­lo alleged­ly wrote in one post pri­or to the May 29 protests in Oak­land, which he and Jus­tus appar­ent­ly used as cov­er for their own motives to kill fed­er­al offi­cers. “Use their anger to fuel our fire. Think out­side the box. We have mobs of angry peo­ple to use to our advan­tage.”

    ...

    Jus­tus alleged­ly did some “sur­veil­lance” on foot when the pair arrived in Oak­land on May 29, and then took the wheel as they did a dri­ve-by of the fed­er­al build­ing, tar­get­ing a guard sta­tion while loud and live­ly protest activ­i­ty was going on just a few blocks away. Car­ril­lo is accused of the actu­al shoot­ing, using a home­made “ghost gun” that lacked ser­i­al num­bers.
    ...

    Related to that question of how much they were even trying to avoid getting caught is the fact that Carrillo had a bulletproof vest with a special 'boogaloo' patch. The vest was found in one of Carrillo's vans. The patch is a variant of the US flag, with an igloo where the stars are and black and white stripes with a Hawaiian pattern on one of the stripes. It's a reminder that the 'boogaloo' movement intent on sparking a civil war already has a uniform.

    Finally, there's another twist to this whole story worth noting: it turns out the federal security guard killed by Carrillo and Justus on May 29, David Patrick Underwood, was the brother of rising local GOP politician Angela Underwood Jacobs. A Lancaster City Councilwoman, Underwood Jacobs announced last year her plans to run for the GOP nomination in California's 25th congressional district against Democrat Katie Hill (recall how Hill resigned after the release of sexually explicit photos that appear to have been released as part of a GOP dirty-tricks operation). Underwood Jacobs ended up dropping out in November after Steve Knight, who previously held the seat, declared his intent to join the race. And last week she was invited to testify before Congress about police brutality and the recent wave of national protests. So that's another remarkable part of this story: a Hispanic elite military police officer killed a black federal law enforcement officer during police brutality protests to promote a broader white nationalist 'boogaloo' movement, and the security guard happens to be the brother of a black Republican elected official.

    So at this point there remains a big question as to the identity of that mystery third person in the 'boogaloo' Facebook posts who clearly knew about the plans for the Oakland attacks. And that mystery person isn't necessarily the only other person involved in those Facebook 'boogaloo' chats. They just happen to be the only additional person from the online chats whose posts show up in the formal legal charges against Carrillo. We have no idea how many other people were members of this Facebook 'boogaloo' group who were well aware of what the pair were planning.

    And there's no reason to assume this kind of planning was only taking place on Facebook. A big part of what makes these Facebook posts so remarkable is how unnecessarily non-secure they are when there are so many more anonymous means of communicating. But Facebook does have the advantage of making it easy to reach out to new audiences, and it's possible that's why so much of the communication was taking place on Facebook: these really may have been very recent acquaintances. Justus appears to have told the FBI that he met Carrillo over Facebook and only met him in person for the first time on the night of the Oakland shooting. It's the kind of scenario that might explain why they did so much of their planning on Facebook. But it's also the kind of scenario that suggests this kind of domestic terror networking is just casually taking place out in the open on Facebook. People come to a 'boogaloo' page to get all excited about a civil war and create new connections for real-life meetups to carry out 'boogaloo' terror attacks. It's like the ultimate 'leaderless resistance' networking platform: the connectivity to easily put disparate potential domestic terrorists in contact with each other without meaningful oversight. It's a reminder that when Facebook changed its mission statement in 2017 to, "Give people the power to build community and bring the world closer together," that includes the power to build communities of 'lone wolf' domestic terrorists. Which brings us to the fact that Facebook is only announcing today that it's shutting down 'boogaloo' groups:

    CNN

    Face­book shuts down groups where Booga­loo sus­pects post­ed before attacks

    By Donie O’Sul­li­van, CNN Busi­ness

    Updat­ed 3:55 PM ET, Wed June 17, 2020

    New York (CNN Business) Facebook said it will continue to review groups on its platform associated with the extremist Boogaloo movement after it emerged that suspects in the shooting deaths of two law enforcement officers in California had posted on Facebook prior to the attacks.

    ...

    “It’s on our coast now, this needs to be nation­wide. It’s a great oppor­tu­ni­ty to tar­get the spe­cial­ty soup bois,” Car­ril­lo said in Face­book group mes­sage on May 28, an FBI spe­cial agent wrote in a fed­er­al crim­i­nal com­plaint filed Tues­day.

    “Soup bois” is an appar­ent ref­er­ence to fed­er­al law enforce­ment offi­cers. Fed­er­al agen­cies, many known by their acronyms like “FBI,” are some­times called “alpha­bet agen­cies.” The “soup boi” term is an appar­ent­ly relat­ed ref­er­ence to alpha­bet soup.

    The post was followed by two fire emojis and a link to a YouTube video "showing a large crowd violently attacking two California Highway Patrol vehicles," according to the complaint.

    “Let’s boo­gie,” Jus­tus respond­ed, accord­ing to the com­plaint.

    “I believe that Jus­tus’ response ‘let’s boo­gie’ is a state­ment of agree­ment and affir­ma­tion to engage in attacks on law enforce­ment per­son­nel in accor­dance with Booga­loo ide­ol­o­gy,” an FBI spe­cial agent wrote.

    The mes­sages were released after fed­er­al author­i­ties obtained a search war­rant for Car­ril­lo’s Face­book records.

    On the morn­ing of the May 29 attack, accord­ing to the com­plaint, Car­ril­lo com­ment­ed on Face­book, “If it kicks off? Its kick­ing off now and if its not kick­ing off in your hood then start it. Show them the tar­gets.”

    Car­ril­lo alleged­ly added, “Go to the riots and sup­port our own cause. Show them the real tar­gets. Use their anger to fuel our fire. Think out­side the box. We have mobs of angry peo­ple to use to our advan­tage.”

    According to authorities, Carrillo and Justus went to the protests in Oakland that night, and while Justus drove, Carrillo opened fire on officers guarding the courthouse at the federal building while protesters marched in the streets nearby, killing one guard and seriously injuring another.

    Face­book said Tues­day it has removed Face­book groups Car­ril­lo and Jus­tus were mem­bers of and is review­ing oth­er Booga­loo Face­book groups.

    Face­book has banned the use of the term “Booga­loo” and approx­i­mate­ly 50 oth­er deriv­a­tives of the term when they are accom­pa­nied by images or state­ments depict­ing armed vio­lence, a spokesper­son told CNN Busi­ness Wednes­day.

    Com­ment­ing on the Cal­i­for­nia attacks, the spokesper­son told CNN Busi­ness Tues­day night, “We des­ig­nat­ed these attacks as vio­lat­ing events and removed the accounts for the two per­pe­tra­tors along with sev­er­al groups. We will remove con­tent that sup­ports these attacks and con­tin­ue to work with law enforce­ment in their inves­ti­ga­tion.”

    ————

    “Face­book shuts down groups where Booga­loo sus­pects post­ed before attacks” by Donie O’Sul­li­van; CNN; 06/17/2020

    “Face­book said Tues­day it has removed Face­book groups Car­ril­lo and Jus­tus were mem­bers of and is review­ing oth­er Booga­loo Face­book groups.

    As we can see, the ‘booga­loo’ move­ment has been using Face­book to open­ly recruit and orga­nize. Because of course, why not if Face­book is going to allow it? And until now Face­book clear­ly allowed it since it’s only now start­ing to ban the use of terms like ‘booga­loo’:

    ...
    Face­book has banned the use of the term “Booga­loo” and approx­i­mate­ly 50 oth­er deriv­a­tives of the term when they are accom­pa­nied by images or state­ments depict­ing armed vio­lence, a spokesper­son told CNN Busi­ness Wednes­day.
    ...

    So now it looks like we've hit the point where Facebook belatedly responds to the use of its platform as a terror-organizing tool, the terrorists come up with new slogans and memes to get around the ban, and Facebook goes back to playing dumb about how its platform is a terror-organizing tool. We'll see what the 'boogaloo bois' come up with as the new code word for civil war, but we can be confident that whatever that new slogan is, Facebook won't figure it out until it's too late. Again.

    Posted by Pterrafractyl | June 17, 2020, 3:52 pm
  10. This article shows how Facebook's Mark Zuckerberg and FB investor/board member Peter Thiel lobbied President Trump and Congress to eliminate competition from the Chinese app TikTok because of its great success in the social media market, which was cutting into Facebook's market share.

    They justified it by claiming that TikTok doesn't share "Facebook's commitment to freedom of expression, and represents a risk to American values and technological supremacy".

    Face­book has estab­lished an advo­ca­cy group, called Amer­i­can Edge, that has begun run­ning ads “extolling U.S. tech com­pa­nies for their con­tri­bu­tions to Amer­i­can eco­nom­ic might, nation­al secu­ri­ty and cul­tur­al influ­ence.”

    Mark Zuckerberg, Peter Thiel, President Trump and Jared Kushner participated in a White House dinner and discussed the issue. Facebook board member Peter Thiel has been a backer of Mr. Trump.

    Tik­Tok has gained more than 100 mil­lion U.S. users and become the biggest threat to Facebook’s dom­i­nance of social media, as the app’s blend of dance videos and goofs has made it a sen­sa­tion among young peo­ple around the world. Face­book, by com­par­i­son, had 256 mil­lion month­ly users in the U.S. and Cana­da as of the end of June. “Tik­Tok has gone from being next-to-noth­ing to quite some­thing in major West­ern mar­kets in the last two years,” said Bri­an Wieser, glob­al pres­i­dent of busi­ness intel­li­gence at GroupM, a unit of WPP PLC.

    Mr. Zuckerberg’s lob­by­ists asked Con­gress why Tik­Tok should be allowed to oper­ate in the U.S., when many Amer­i­can com­pa­nies, includ­ing his own, can’t oper­ate in Chi­na.

    TikTok's CEO Kevin Mayer stated, "At TikTok, we welcome competition," in a blog post. "But let's focus our energies on fair and open competition in service of our consumers, rather than maligning attacks by our competitor—namely Facebook—disguised as patriotism and designed to put an end to our very presence in the U.S."

    WSJ NEWS EXCLUSIVE — TECH

    Face­book CEO Mark Zucker­berg Stoked Washington’s Fears About Tik­Tok 

    Social-media tycoon empha­sized threat from Chi­nese inter­net com­pa­nies as he worked to fend off U.S. reg­u­la­tion of Face­book

    By Geor­gia Wells, Jeff Hor­witz and Aruna Viswanatha
    Updat­ed Aug. 23, 2020 8:33 pm ET

    When Face­book Inc. Chief Exec­u­tive Mark Zucker­berg deliv­ered a speech about free­dom of expres­sion in Wash­ing­ton, D.C., last fall, there was also anoth­er agen­da: to raise the alarm about the threat from Chi­nese tech com­pa­nies and, more specif­i­cal­ly, the pop­u­lar video-shar­ing app Tik­Tok.

    Tucked into the speech was a line point­ing to Facebook’s ris­ing rival: Mr. Zucker­berg told George­town stu­dents that Tik­Tok doesn’t share Facebook’s com­mit­ment to free­dom of expres­sion, and rep­re­sents a risk to Amer­i­can val­ues and tech­no­log­i­cal suprema­cy.

    That was a mes­sage Mr. Zucker­berg ham­mered behind the scenes in meet­ings with offi­cials and law­mak­ers dur­ing the Octo­ber trip and a sep­a­rate vis­it to Wash­ing­ton weeks ear­li­er, accord­ing to peo­ple famil­iar with the mat­ter.

    In a pri­vate din­ner at the White House in late Octo­ber, Mr. Zucker­berg made the case to Pres­i­dent Trump that the rise of Chi­nese inter­net com­pa­nies threat­ens Amer­i­can busi­ness, and should be a big­ger con­cern than rein­ing in Face­book, some of the peo­ple said.

    Don­ald J. Trump
    @realDonaldTrump

    Nice meet­ing with Mark Zucker­berg of @Facebook in the Oval Office today. https://facebook.com/153080620724/posts/10163173035125725?sfns=mo…

    8:03 PM · Sep 19, 2019

    Mr. Zucker­berg dis­cussed Tik­Tok specif­i­cal­ly in meet­ings with sev­er­al sen­a­tors, accord­ing to peo­ple famil­iar with the meet­ings. In late Octo­ber, Sen. Tom Cot­ton (R., Ark.)—who met with Mr. Zucker­berg in September—and Sen. Chuck Schumer (D., N.Y.) wrote a let­ter to intel­li­gence offi­cials demand­ing an inquiry into Tik­Tok. The gov­ern­ment began a nation­al-secu­ri­ty review of the com­pa­ny soon after, and by the spring, Mr. Trump began threat­en­ing to ban the app entire­ly. This month he signed an exec­u­tive order demand­ing that TikTok’s Chi­nese own­er, ByteDance Ltd., divest itself of its U.S. oper­a­tions.

    Few tech com­pa­nies have as much to gain as Face­book from TikTok’s tra­vails, and the social-media giant has tak­en an active role in rais­ing con­cerns about the pop­u­lar app and its Chi­nese own­ers.

    In addi­tion to Mr. Zuckerberg’s per­son­al out­reach and pub­lic state­ments about Chi­nese com­pe­ti­tion, Face­book has estab­lished an advo­ca­cy group, called Amer­i­can Edge, that has begun run­ning ads extolling U.S. tech com­pa­nies for their con­tri­bu­tions to Amer­i­can eco­nom­ic might, nation­al secu­ri­ty and cul­tur­al influ­ence. And Face­book over­all in the first half of this year spent more on lob­by­ing than any oth­er sin­gle com­pa­ny, accord­ing to data from the Cen­ter for Respon­sive Pol­i­tics. In 2018, by con­trast, it ranked eighth among com­pa­nies, the center’s data show.

    It couldn’t be deter­mined exact­ly what role Mr. Zuckerberg’s com­ments have played in the government’s han­dling of Tik­Tok. A spokes­woman for Sen. Cot­ton said his office doesn’t com­ment on the senator’s meet­ings.

    Asked about the din­ner, a White House spokesman said the admin­is­tra­tion “is com­mit­ted to pro­tect­ing the Amer­i­can peo­ple from all cyber relat­ed threats to crit­i­cal infra­struc­ture, pub­lic health and safe­ty, and our eco­nom­ic and nation­al secu­ri­ty.”

    Face­book spokesman Andy Stone said Mr. Zucker­berg has no rec­ol­lec­tion of dis­cussing Tik­Tok at the din­ner.

    The CEO’s com­ments in Wash­ing­ton about the Chi­nese app were tied into Facebook’s cam­paign to blunt antitrust and reg­u­la­to­ry threats by empha­siz­ing Facebook’s impor­tance to U.S. tech pre-emi­nence, he said.

    “Our view on Chi­na has been clear: we must com­pete,” Mr. Stone said in a writ­ten state­ment. “As Chi­nese com­pa­nies and influ­ence have been grow­ing so has the risk of a glob­al inter­net based on their val­ues, as opposed to ours.”

    In an employ­ee meet­ing this month, Mr. Zucker­berg called the exec­u­tive order against Tik­Tok unwel­come, because the glob­al harm of such a move could out­weigh any short-term gain to Face­book. The remarks were ear­li­er report­ed by Buz­zFeed News.

    Tik­Tok has gained more than 100 mil­lion U.S. users and become the biggest threat to Facebook’s dom­i­nance of social media, as the app’s blend of dance videos and goofs has made it a sen­sa­tion among young peo­ple around the world. In the first quar­ter of 2020, Tik­Tok became the most down­loaded app in a sin­gle quar­ter, accord­ing to research firm Sen­sor Tow­er. Face­book, by com­par­i­son, had 256 mil­lion month­ly users in the U.S. and Cana­da as of the end of June.

    “Tik­Tok has gone from being next-to-noth­ing to quite some­thing in major West­ern mar­kets in the last two years,” said Bri­an Wieser, glob­al pres­i­dent of busi­ness intel­li­gence at GroupM, a unit of WPP PLC.

    While Face­book once acquired star­tups such as Tik­Tok that it viewed as poten­tial threats, scruti­ny from antitrust author­i­ties makes those deals more fraught for big tech com­pa­nies, so they might look to oth­er defen­sive mea­sures instead, Mr. Wieser said. “You might then in fact wel­come more reg­u­la­tion or things that would lim­it the oppor­tu­ni­ties for upstarts,” he said.

    Facebook’s Insta­gram unit this month launched its own video-shar­ing fea­ture, called Reels, and is try­ing to poach Tik­Tok cre­ators by pay­ing some users if they post videos exclu­sive­ly to the new ser­vice.

    TikTok’s fate is up in the air. With the Trump administration’s dead­line loom­ing, Microsoft Corp. has said it is nego­ti­at­ing to buy TikTok’s U.S. oper­a­tions, and at least two oth­er groups are believed to be cir­cling, involv­ing Twit­ter Inc. and Ora­cle Corp.

    It is pos­si­ble that Tik­Tok ends up with one of those com­pa­nies, imme­di­ate­ly mak­ing the buy­er a for­mi­da­ble U.S. rival to Face­book.

    Facebook’s advo­ca­cy has angered peo­ple inside Tik­Tok, accord­ing to peo­ple famil­iar with the mat­ter. Last month, CEO Kevin May­er pub­licly accused Face­book of try­ing to unfair­ly quash com­pe­ti­tion.

    “At Tik­Tok, we wel­come com­pe­ti­tion,” he said in a blog post. “But let’s focus our ener­gies on fair and open com­pe­ti­tion in ser­vice of our con­sumers, rather than malign­ing attacks by our competitor—namely Facebook—disguised as patri­o­tism and designed to put an end to our very pres­ence in the U.S.”

    Mr. Zuckerberg's arguments about TikTok show a reversal in his stance on China.

    In 2010 he said he was plan­ning to learn Man­darin, and he made sev­er­al well-pub­li­cized trips to Chi­na over the years as Face­book explored the pos­si­bil­i­ty of get­ting back into the world’s most pop­u­lous coun­try, where it has been banned since 2009.

    Those moves made Mr. Zucker­berg pop­u­lar among many in Chi­na, but pub­lic opin­ion there has turned against him because of his recent com­ments, includ­ing at a con­gres­sion­al hear­ing about com­pe­ti­tion in July in which he said it was “well doc­u­ment­ed that the Chi­nese gov­ern­ment steals tech­nol­o­gy from U.S. com­pa­nies.”

    The Glob­al Times, a pub­li­ca­tion linked to the Chi­nese Com­mu­nist Par­ty, this week said Mr. Zucker­berg was pre­vi­ous­ly con­sid­ered “the people’s son-in-law,” but that his recent actions sug­gest­ed that he was will­ing “to set aside moral­i­ty for prof­it.”

    Mr. Zucker­berg saw TikTok’s suc­cess com­ing. When its pre­de­ces­sor app in the U.S., Musical.ly, start­ed to become pop­u­lar among Amer­i­can teens in 2017, Face­book con­sid­ered acquir­ing it, The Wall Street Jour­nal has report­ed. Instead, Bytedance bought Musical.ly, and lat­er rebrand­ed it as Tik­Tok.

    In the Octo­ber speech in George­town, Mr. Zucker­berg described Tik­Tok as at odds with Amer­i­can val­ues: “On Tik­Tok, the Chi­nese app grow­ing quick­ly around the world, men­tions of protests are cen­sored, even in the U.S. Is that the inter­net we want?” Mr. Zucker­berg said in his speech.

    Days lat­er, Mr. Zucker­berg reit­er­at­ed his con­cerns about Chi­na dur­ing the White House din­ner with Mr. Trump, the president’s son-in-law Jared Kush­n­er, and Face­book board mem­ber Peter Thiel, who has been a backer of Mr. Trump, accord­ing to peo­ple briefed on the con­ver­sa­tion.

    Mr. Zuckerberg’s team also reached out to mem­bers of Con­gress who are tough on Chi­na, accord­ing to peo­ple famil­iar with the meet­ings. He asked them why Tik­Tok should be allowed to oper­ate in the U.S., when many Amer­i­can com­pa­nies, includ­ing his own, can’t oper­ate in Chi­na.

    In Novem­ber, Sen. Josh Haw­ley (R., Mo.), who also had met with Mr. Zucker­berg in Sep­tem­ber, said in a hear­ing that Tik­Tok threat­ens the pri­va­cy of Amer­i­can chil­dren. “For Face­book, the fear is lost social-media mar­ket share,” he said. “For the rest of us, the fear is some­what dif­fer­ent.”

    Kel­li Ford, a spokes­woman for Sen. Haw­ley, said the senator’s con­cerns about Tik­Tok pre­dat­ed the meet­ing with Mr. Zucker­berg. “Face­book has recent­ly been sound­ing the alarm about Chi­na-based tech as a PR tac­tic to boost its own rep­u­ta­tion,” she said.

    Face­book declined to com­ment on Ms. Ford’s remark.
    —Deepa Seethara­man and Michael C. Ben­der con­tributed to this arti­cle.

    Write to Geor­gia Wells at Georgia.Wells@wsj.com, Jeff Hor­witz at Jeff.Horwitz@wsj.com and Aruna Viswanatha at Aruna.Viswanatha@wsj.com

    https://www.wsj.com/articles/facebook-ceo-mark-zuckerberg-stoked-washingtons-fears-about-tiktok-11598223133

    Posted by Mary Benton | August 30, 2020, 5:38 am
  11. Here’s an inter­est­ing sto­ry relat­ed to both the wave of protests ini­tial­ly sparked by the killing of George Floyd and the open attempts by the ‘Booga­loo’ move­ment to exploit the protests to spark a race war:

    Police in Northern California arrested a community college statistics professor, Alan Viarengo, in connection with sending two dozen threatening letters to Santa Clara County Health Officer Dr. Sara Cody over the five months following Cody's COVID-19 "shelter in place" orders back in March. Viarengo's letters included a number of 'Boogaloo' references and images. Investigators also found more than 100 guns, along with explosives, in Viarengo's home. Viarengo's attorney is denying the allegations that he's a Boogaloo member while simultaneously conceding that he wrote the threatening letters, arguing that they are protected by his First Amendment right to free speech.

    Intriguingly, Viarengo's letters to Dr. Cody didn't just include threats. In one of the final letters he also describes his ideology in four parts and, in the process, claims his words were responsible for inspiring at least five recent acts of violence against public officials during the George Floyd protests:

    In one of the final let­ters Viaren­go is accused of writ­ing before his arrest, he appears to claim some lev­el of respon­si­bil­i­ty for recent acts of vio­lence against pub­lic offi­cials and lays out his ide­ol­o­gy in four parts:

    “Enable the vio­lent to car­ry out their mis­sions by reveal­ing the home address­es of pub­lic offi­cials and their fam­i­lies.”
    “Plant the seeds of social unrest into the minds of the vio­lent. When the protests against George Floyd [sic] mur­der began, my words alone caused at least five offi­cials to be attacked.
    “Reg­u­lar­ly remind every­one that (1) the Con­sti­tu­tion is not sus­pend­ed dur­ing times of cri­sis and (2) your sil­ly lit­tle ‘orders’ are not enforce­able by law.”
    “Sub­ver­sive­ly spread defi­ance to author­i­ty, par­tic­u­lar­ly con­tempt for courts and law enforce­ment, to make their jobs more dif­fi­cult. In turn, they react in a more fas­cist way, which cre­ates a snow­ball effect. Look no fur­ther than the George Floyd riots, for which I take cred­it with­out ever cast­ing a stone.

    Another target of Viarengo's threatening letters was Santa Cruz County Sheriff's Sgt. Damon Gutzwiller's widow. Recall that Steven Carrillo and Robert Justus — the two 'Boogaloo Boi' members who plotted and executed a false flag attack on a federal officer next to a George Floyd protest in Oakland, and who are tied to the ambush-style killing of Gutzwiller outside Carrillo's home — come from the same general area as Viarengo (the cities of Gilroy and Ben Lomond are only around 45 miles apart). And investigators found that Carrillo and Justus were in contact with a still-unnamed third person over Facebook who was aware of their false flag plans. Might the investigation of Viarengo be tied to the Carrillo and Justus case? And how about other 'Boogaloo' attempts to infiltrate and exploit the protests? Is Viarengo some sort of 'Boogaloo' online mastermind? That's what he was claiming in these letters:

    NBC Bay Area

    Alleged ‘Booga­loo Boy’ Arrest­ed Over Threats to Top South Bay Health Offi­cial
    Inves­ti­ga­tors say they found explo­sives and hun­dreds of guns in the fam­i­ly home of Gilroy man Alan Viaren­go last week and say his let­ters were laced with the tell­tale sym­bols and phras­es of the Booga­loo Move­ment.

    By Michael Bott and Robert Han­da •
    Pub­lished August 31, 2020 • Updat­ed on Sep­tem­ber 1, 2020 at 7:54 am

    A Gilroy man who police say is tied to the far-right, anti-gov­ern­ment Booga­loo Move­ment was arrest­ed last week in con­nec­tion to 24 threat­en­ing let­ters, some con­tain­ing tell­tale Booga­loo slo­gans and imagery, sent to San­ta Clara Coun­ty Health Offi­cer Dr. Sara Cody over the past five months, accord­ing to court and police records reviewed by NBC Bay Area.

    Offi­cers from the San­ta Clara Coun­ty Sher­if­f’s Office arrest­ed Alan Viaren­go, 55, last week, seiz­ing a large cache of firearms and explo­sives from his family’s home, accord­ing to a bail motion filed by San­ta Clara Coun­ty pros­e­cu­tors. Detec­tives found more than 100 firearms, includ­ing poten­tial assault rifles, explo­sives, thou­sands of rounds of ammu­ni­tion, tools for man­u­fac­tur­ing ammu­ni­tion, and con­fed­er­ate flags, accord­ing to the court records.

    He was charged with felony counts of stalk­ing and harass­ing a pub­lic offi­cial and has not yet entered a plea.

    Alleged Boogaloo members were charged earlier this year in the murder of Federal Security Officer Dave Underwood at Oakland's Federal Building and the ambush-style killing of Santa Cruz County Sheriff's Sgt. Damon Gutzwiller in Ben Lomond.

    Viaren­go also sent harass­ing let­ters to the San­ta Cruz Coun­ty Sheriff’s Office and Gutzwiller’s wid­ow replete with Booga­loo slo­gans and imagery, accord­ing to the police report.

    Booga­loo Ide­ol­o­gy

    Accord­ing to a San­ta Clara Coun­ty Sheriff’s Office inci­dent report, Viarengo’s ide­ol­o­gy, which appears to advo­cate for vio­lent upris­ings against the gov­ern­ment and encour­age oth­ers to use recent George Floyd protests as a tool to incite vio­lence, is par­tial­ly fleshed out in his let­ters to Dr. Cody.

    In one of the final let­ters Viaren­go is accused of writ­ing before his arrest, he appears to claim some lev­el of respon­si­bil­i­ty for recent acts of vio­lence against pub­lic offi­cials and lays out his ide­ol­o­gy in four parts:

    “Enable the vio­lent to car­ry out their mis­sions by reveal­ing the home address­es of pub­lic offi­cials and their fam­i­lies.”
    “Plant the seeds of social unrest into the minds of the vio­lent. When the protests against George Floyd [sic] mur­der began, my words alone caused at least five offi­cials to be attacked.
    “Reg­u­lar­ly remind every­one that (1) the Con­sti­tu­tion is not sus­pend­ed dur­ing times of cri­sis and (2) your sil­ly lit­tle ‘orders’ are not enforce­able by law.”
    “Sub­ver­sive­ly spread defi­ance to author­i­ty, par­tic­u­lar­ly con­tempt for courts and law enforce­ment, to make their jobs more dif­fi­cult. In turn, they react in a more fas­cist way, which cre­ates a snow­ball effect. Look no fur­ther than the George Floyd riots, for which I take cred­it with­out ever cast­ing a stone.

    The let­ter ends with Viaren­go writ­ing, “F**k all author­i­ty. Enjoy the Booga­loo!” accord­ing to the police report.

    Viaren­go teach­es sta­tis­tics at Gilroy’s Gav­i­lan Col­lege, accord­ing to the school’s web­site.

    ...

    Viarengo appeared in Santa Clara County Superior Court Monday afternoon. He'd posted bail Friday but was remanded back into custody by the judge Monday and was led away in handcuffs by deputies following the hearing.

    Deputy Dis­trict Attor­ney Alexan­der Adams said the deci­sion to remand Viaren­go was based on sev­er­al fac­tors.

    “That was based on both the nature of his let­ters he sent to the vic­tim, as well as the in excess of 100 firearms and thou­sands of rounds of ammu­ni­tion and explo­sives that were found in his house,” Alexan­der said.

    When asked about Viarengo’s alleged ties to the Booga­loo move­ment, Alexan­der said, “That con­nec­tion was made based on the inves­ti­ga­tion link­ing both the lan­guage and sym­bols used in the mul­ti­ple let­ters that he sent.”

    San Jose attor­ney Cody Salfen rep­re­sent­ed Viaren­go in court Fri­day. Salfen said he’s not Viarengo’s attor­ney but was rep­re­sent­ing him as Viaren­go seeks coun­sel.

    In a lengthy state­ment, which can be read in its entire­ty here, Salfen blast­ed the Dis­trict Attorney’s Office and said his client is a respect­ed com­mu­ni­ty mem­ber.

    “[Alan] is a ded­i­cat­ed father, hus­band, com­mu­ni­ty activist, respect­ed pro­fes­sor, and vol­un­teer,” Salfen said. “Each year, he devotes over 400 hours of his time, guid­ing and teach­ing youth and oth­er mem­bers of the Bay Area com­mu­ni­ties he serves. He works two jobs to sup­port his family…Alan is a law abid­ing cit­i­zen. He respects the rule of law and the Con­sti­tu­tion.”

    Salfen said Viaren­go and his fam­i­ly were shocked and trau­ma­tized by the “sneak attack” launched against them by law enforce­ment and the Dis­trict Attorney’s Office.

    “At this time we have alle­ga­tions,” Salfen said. “Alle­ga­tions are not facts. Very few facts, if any, have been pro­vid­ed by the Dis­trict Attor­ney’s Office about the law enforce­ment activ­i­ties in this case. But, with the lit­tle infor­ma­tion that has been pro­vid­ed, at present, the only appar­ent attacks that have occurred are against Alan and his fam­i­ly.”

    The Let­ters

    In the months fol­low­ing her March COVID-19 “Shel­ter in Place” order, San­ta Clara Coun­ty Health Offi­cer Dr. Sara Cody was tar­get­ed with let­ters rid­dled with threats and vile lan­guage, accord­ing to the police report. Her home address was blast­ed across the inter­net and pro­tes­tors began demon­strat­ing in front of her res­i­dence. Con­cerned for her safe­ty, the San­ta Clara Coun­ty Sheriff’s Office assigned Dr. Cody a per­son­al pro­tec­tion detail.

    Cody and her secu­ri­ty detail found the let­ters con­cern­ing, accord­ing to the police report, but they were not over­ly-wor­ried about her safe­ty, at least ini­tial­ly. Pan­dem­ic-relat­ed death threats against pub­lic health offi­cials have been wide­ly report­ed across the coun­try for months.

    But the lev­el of con­cern for Cody’s safe­ty soared in late June, accord­ing to the police report, when a let­ter addressed to Cody showed up at the coun­ty health depart­ment with a pic­ture of an igloo where the return address would nor­mal­ly be and the phrase “Let’s Boo­gie” writ­ten above.

    “I’m glad you are get­ting threats,” the anony­mous per­son wrote in the let­ter, accord­ing to the police report. “I post­ed your res­i­dence every­where I could; I hope some­one fol­lows through.”

    The igloo and phrase “Let’s Boo­gie” were obvi­ous sym­bols of the Booga­loo move­ment, whose mem­bers have recent­ly been tied to a ter­ror­ist plot in Neva­da, efforts to foment vio­lence at George Floyd protests in South Car­oli­na, an alleged plot to kid­nap the chil­dren of East Bay elect­ed offi­cials, and very real vio­lence car­ried out against law enforce­ment offi­cers, includ­ing the mur­ders of Under­wood and Sgt. Gutzwiller.

    “The booga­loo term is used by extrem­ists to ref­er­ence a vio­lent upris­ing or an impend­ing civ­il war in the Unit­ed States,” North­ern Cal­i­for­nia U.S. Attor­ney David Ander­son said in a press con­fer­ence when Air Force Sergeant Steven Car­ril­lo was indict­ed for the mur­der of Under­wood and Gutzwiller.

    The group is believed to have first formed in online chat forums on sites like 4chan.

    Detec­tives, accord­ing to the police report, believed the same man was respon­si­ble for what would ulti­mate­ly become a series of 24 threat­en­ing let­ters sent to Cody between April 8 and July 29, which grew increas­ing­ly offen­sive and threat­en­ing as time went on.

    The first let­ter called Cody degrad­ing names, railed against law enforce­ment and Chi­na, and includ­ed a sketch of a hand extend­ing a mid­dle fin­ger, accord­ing to the police report.

    “We are stronger than you pigs in every way,” the let­ter said. “We are out to defeat you.”

    Before the June Booga­loo let­ter arrived, Cody had already received a steady stream of let­ters con­tain­ing misog­y­nis­tic lan­guage, threats, pornog­ra­phy, and anti-gov­ern­ment views, accord­ing to the police report.

    “Maybe this is the spark we need for a bloody rev­o­lu­tion!” the sender wrote in one let­ter, accord­ing to the police report.

    Anoth­er said, “You’re done…it’s over…say good­bye,” accord­ing to the report.

    On June 23, an offi­cer safe­ty bul­letin sent to sur­round­ing law enforce­ment agen­cies by the San­ta Cruz Coun­ty Sheriff’s Office shift­ed detec­tives’ atten­tion to one man: Alan Viaren­go of Gilroy.

    Accord­ing to the San­ta Cruz bul­letin, Viaren­go had a his­to­ry of send­ing harass­ing let­ters to law enforce­ment agen­cies, and San­ta Clara detec­tives inquired about the bul­letin, accord­ing to the report. Author­i­ties in San­ta Cruz informed them the Sheriff’s Office and the wid­ow of Sgt. Gutzwiller were receiv­ing mock­ing let­ters from peo­ple appar­ent­ly asso­ci­at­ed with the Booga­loo move­ment.

    One of those let­ters described a spe­cif­ic Las Vegas detec­tive as a “piece of sh*t,” accord­ing to the police report. The same detec­tive had arrest­ed Viaren­go in the ear­ly 1990’s for send­ing police threat­en­ing let­ters while he was attend­ing an Out­law Motor­cy­cle Gang ral­ly in Neva­da, accord­ing to the police report.

    Viaren­go was con­vict­ed but had his con­vic­tion over­turned years lat­er because the crim­i­nal­ist who reviewed the evi­dence in his case was lat­er accused of uneth­i­cal prac­tices.

    On July 29, accord­ing to the police report, detec­tives sur­veilling Viaren­go watched him as he drove his black Tes­la Mod­el 3 up to a mail­box and dropped a let­ter inside. It was addressed to Dr. Sara Cody and mocked her for her han­dling of the pan­dem­ic, accord­ing to the police report. He was arrest­ed almost a month lat­er, on August 27, accord­ing to San­ta Clara Coun­ty Jail book­ing records.

    On the same day a San­ta Clara Coun­ty Sheriff’s Cap­tain was arraigned on bribery charges, Salfen ques­tioned the cred­i­bil­i­ty of the depart­ment.

    “I can say that there are one or more law enforce­ment offi­cers direct­ly involved who have seri­ous cred­i­bil­i­ty issues,” Salfen said. “The inves­ti­ga­tion in this case was appar­ent­ly spear­head­ed by the San­ta Clara Coun­ty Sheriff’s Office – the law enforce­ment agency head­ed by the county’s top law enforce­ment offi­cer, Sher­iff Lau­rie Smith. This is the same Sher­iff Lau­rie Smith who just recent­ly refused to tes­ti­fy in a grand jury indict­ment into alle­ga­tion of pub­lic cor­rup­tion…”

    NBC Bay Area found a trove of pub­lic let­ters sent by Viaren­go over the years, most­ly let­ters to the edi­tor at the Gilroy Dis­patch. His let­ters, which some­times received back­lash from read­ers, advo­cat­ed strong­ly for the first and sec­ond amend­ments and railed against tax­es and gov­ern­ment inter­ven­tion in people’s lives.

    In a 2019 let­ter sent to Gilroy Life, Viaren­go calls Julian Assange a hero, called the San Fran­cis­co Police Department’s raid of jour­nal­ist Bryan Car­mody an “atroc­i­ty,” and says most major news out­lets are “polit­i­cal.”

    “They pick and choose the sto­ries that fit their agen­da,” Viaren­go wrote. “They drone out [sic] about ‘diver­si­ty,’ but have nev­er had one local right-wing writer.”

    He end­ed the let­ter with, “Make Amer­i­ca Great Again (and Keep Amer­i­ca Great in 2020)!”

    ————

    “Alleged ‘Booga­loo Boy’ Arrest­ed Over Threats to Top South Bay Health Offi­cial” by Michael Bott and Robert Han­da; NBC Bay Area; 09/01/2020

    “Offi­cers from the San­ta Clara Coun­ty Sher­if­f’s Office arrest­ed Alan Viaren­go, 55, last week, seiz­ing a large cache of firearms and explo­sives from his family’s home, accord­ing to a bail motion filed by San­ta Clara Coun­ty pros­e­cu­tors. Detec­tives found more than 100 firearms, includ­ing poten­tial assault rifles, explo­sives, thou­sands of rounds of ammu­ni­tion, tools for man­u­fac­tur­ing ammu­ni­tion, and con­fed­er­ate flags, accord­ing to the court records.

    More than 100 firearms, explosives, and confederate flags. That's a lot of red flags. Red flags underscored by one of his last threatening letters, in which he brags about his past ability to get at least five public officials attacked in connection with the George Floyd protests. And while it's somewhat odd that he would make these admissions in letters to a public official, keep in mind that the letters were clearly intended to intimidate, and bragging about inspiring acts of violence against public officials would indeed be intimidating content in a letter like this.

    And then there's the fact that he reportedly targeted Gutzwiller's widow, which further raises the question of whether or not he was involved with the planning of that attack:

    ...
    Alleged Boogaloo members were charged earlier this year in the murder of Federal Security Officer Dave Underwood at Oakland's Federal Building and the ambush-style killing of Santa Cruz County Sheriff's Sgt. Damon Gutzwiller in Ben Lomond.

    Viaren­go also sent harass­ing let­ters to the San­ta Cruz Coun­ty Sheriff’s Office and Gutzwiller’s wid­ow replete with Booga­loo slo­gans and imagery, accord­ing to the police report.
    ...

    So hopefully Viarengo really was involved with that plotting and he's now under arrest. Because, again, we are told a third person was involved with plotting those attacks over Facebook, so if Viarengo isn't that third person, then that person is still out there.

    Next, here's a follow-up article that gives us a hint of Viarengo's defense in this case: According to Viarengo's lawyer, "There is clearly a First Amendment right to free speech. Like any other citizen has the right to write a public figure and voice displeasure with rules, regulations that are put into place." The lawyer also argues that he never acted on any of the threats in the letters. So to some extent this case is going to be a test of whether or not one can legally foment 'Boogaloo'-style calls for revolution and race war as long as they don't directly engage in the violence themselves:

    NBC Bay Area

    Detec­tives Seize 138 Guns, Explo­sives Owned by Alleged ‘Booga­loo Boy’
    A judge on Tues­day also denied bail for the col­lege pro­fes­sor accused of threat­en­ing San­ta Clara Coun­ty’s top pub­lic health offi­cial.

    By Dami­an Tru­jil­lo
    • Pub­lished Sep­tem­ber 1, 2020 • Updat­ed on Sep­tem­ber 2, 2020 at 10:24 am

    A Gilroy man was back in court Tues­day after being accused of vio­lent threats against San­ta Clara Coun­ty Pub­lic Health Offi­cer Dr. Sara Cody.

    The San­ta Clara Coun­ty sher­iff also believes Alan Viaren­go is part of the “Booga­loo Move­ment,” a loose­ly orga­nized, right-wing, anti-gov­ern­ment group that advo­cates extreme vio­lence and civ­il war.

    The 55-year-old Viarengo is also a professor at Gavilan College in Gilroy who was involved in the Boy Scouts of America. The organization issued a statement saying in part, "This individual has been removed from Scouting and is prohibited from any future participation in our programs."

    Inves­ti­ga­tors on Tues­day also released pic­tures of what they said are Viaren­go’s arse­nal — 138 guns and thou­sands of rounds of ammu­ni­tion and explo­sives.

    “We had to load it onto a pal­let to get it into evi­dence,” Sher­if­f’s detec­tive Lt. Bren­dan Omori said. “This indi­vid­ual had a sig­nif­i­cant cache of firearms and weapon­ry.”

    Mean­while, a judge on Tues­day denied Viaren­go’s request for bail.

    Court doc­u­ments list at least 24 inci­dents claim­ing Viaren­go mailed obscene, threat­en­ing let­ters to the coun­ty pub­lic health offi­cer.

    One let­ter con­clud­ed with “You are done. It’s over. Say good­bye.”

    Inves­ti­ga­tors said it start­ed with Cody’s ini­tial shel­ter in place order due to the pan­dem­ic. Detec­tives also said Cody is part of a list of oth­er peo­ple Viaren­go may have threat­ened.

    “We have rea­son to believe that Mr. Viaren­go has threat­ened oth­er offi­cials and oth­er res­i­dents in the coun­ty,” Omori said.

    “There is clear­ly a First Amend­ment right to free speech,” said Den­nis Luca, a for­mer cop and now defense lawyer rep­re­sent­ing Viaren­go. “Like any oth­er cit­i­zen has the right to write a pub­lic fig­ure and voice dis­plea­sure with rules, reg­u­la­tions that are put into place.”

    Luca said his client nev­er act­ed out on any alleged writ­ten threat against Cody, and adds the guns are all under legal own­er­ship to Viaren­go.

    The attor­ney also denied alle­ga­tions Viaren­go is a mem­ber of the Booga­loo Move­ment. The right-wing group is named in the June killings of a San­ta Cruz Coun­ty deputy and a fed­er­al offi­cer in Oak­land.

    ———–

    "Detectives Seize 138 Guns, Explosives Owned by Alleged 'Boogaloo Boy'" by Damian Trujillo; NBC Bay Area; 09/01/2020

    “There is clear­ly a First Amend­ment right to free speech,” said Den­nis Luca, a for­mer cop and now defense lawyer rep­re­sent­ing Viaren­go. “Like any oth­er cit­i­zen has the right to write a pub­lic fig­ure and voice dis­plea­sure with rules, reg­u­la­tions that are put into place.””

    Will Alan Viarengo's legal defense — that he was just exercising his First Amendment rights to free speech when he sent those threatening letters bragging about how his words had previously been used to incite violence — ultimately succeed? We'll see. It's 2020. Openly inciting violence is pretty much normal now, so he's probably got a shot.

    Posted by Pterrafractyl | September 3, 2020, 11:26 am
  12. Following up on the recently released report on Facebook's lack of regulation for Spanish-language content and the resulting deluge of far-right disinformation targeting the US Spanish-speaking electorate in 2020, along with the report back in April about how Facebook actively dragged its feet on enforcing its rules in Honduras for nearly a year after it was discovered that the right-wing government was carrying out disinformation campaigns on the platform, here's another report from back in April about yet another example of Facebook's highly selective enforcement of its rules. Rules where the ultimate rule is that the right wing rules.

    Facebook whistleblower Sophie Zhang was once again the source for this story. According to Zhang, she discovered four "inauthentic networks" (networks of fake Facebook accounts) in December of 2019. Two of these networks were associated with the ruling right-wing BJP party, and the other two appeared to be working on behalf of the center-left opposition Congress Party. After determining that all four networks violated Facebook's policies, Facebook's investigators decided to "checkpoint" the accounts in the networks, a process in which the accounts are temporarily disabled until the users provide some sort of identity verification. And three of those networks were indeed "checkpointed". But when the Facebook staffer was about to checkpoint the last network, they noticed one of the flagged accounts was tagged in Facebook's "Xcheck" system as a "Government Partner" and "High Priority – Indian" account. The "Xcheck" system is used by Facebook not just to flag prominent accounts but also to exempt them from automated enforcement actions. So guess what happened next. Yep, this fourth network was allowed to operate. In contrast, one of the two networks promoting Congress Party candidates had repeated actions taken against it.

    So who was this prominent politician? The Guardian didn't name them in the report, in part because the evidence of their involvement in the network was not definitive. That is itself rather notable in this story: this prominent politician's involvement in the network wasn't definitively established, and yet their ties to the network were used as an excuse under the "Xcheck" system to allow the network to keep running. It's the kind of loophole we should expect from Facebook at this point: the inauthentic behavior will be allowed to continue as long as there's at least one prominent real person possibly involved with it.

    This is prob­a­bly a good time to recall the sto­ries about BJP politi­cians using inau­then­tic Face­book net­works to car­ry out influ­ence oper­a­tions out­side of India, with one BJP offi­cial open­ly threat­en­ing to med­dle in the US 2020 elec­tion in favor of Don­ald Trump. It’s also impor­tant to keep in mind that Face­book’s lead­er­ship in India includes a num­ber of BJP-tied indi­vid­u­als. So when Face­book staffers are mak­ing these kinds of deci­sions on whether or not to crack down on a net­work asso­ci­at­ed with a promi­nent BJP politi­cian, they’re poten­tial­ly going to be wor­ried about piss­ing off their BJP-friend­ly boss­es.

    Sophie Zhang ended up leaving Facebook in September of 2020. The shutdown of that BJP network still hadn't happened by her last day, despite repeated internal requests for action. So what was Facebook's internal response to Zhang's requests over the 9+ months between when the network was first identified and Zhang leaving the company? Nothing. They simply ignored her repeated requests for action, or even an explanation, until she quit. True to form, when pressed by the Guardian about why this network was allowed to operate, Facebook proceeded to give a series of contradictory non-answers, which is probably as close to an answer as we can expect to get from Facebook on the matter:

    The Guardian

    Face­book planned to remove fake accounts in India – until it real­ized a BJP politi­cian was involved

    Whistle­blow­er points to dou­ble stan­dard in Facebook’s enforce­ment of rules against pow­er­ful

    Julia Car­rie Wong in San Fran­cis­co and Han­nah Ellis-Petersen in Del­hi
    Thu 15 Apr 2021 06.00 EDT
    Last mod­i­fied on Thu 15 Apr 2021 06.02 EDT

    Face­book allowed a net­work of fake accounts to arti­fi­cial­ly inflate the pop­u­lar­i­ty of an MP from India’s rul­ing Bharatiya Jana­ta par­ty (BJP), for months after being alert­ed to the prob­lem.

    The com­pa­ny was prepar­ing to remove the fake accounts but paused when it found evi­dence that the politi­cian was prob­a­bly direct­ly involved in the net­work, inter­nal doc­u­ments seen by the Guardian show.

    The company’s deci­sion not to take time­ly action against the net­work, which it had already deter­mined vio­lat­ed its poli­cies, is just the lat­est exam­ple of Face­book hold­ing the pow­er­ful to low­er stan­dards than it does reg­u­lar users.

    “It’s not fair to have one jus­tice sys­tem for the rich and impor­tant and one for every­one else, but that’s essen­tial­ly the route that Face­book has carved out,” said Sophie Zhang, a for­mer data sci­en­tist for Face­book who uncov­ered the inau­then­tic net­work. Zhang has come for­ward to expose the company’s fail­ure to address how its plat­form is being used to manip­u­late polit­i­cal dis­course around the world.

    Facebook’s fail­ure to act against the MP will also raise ques­tions about Facebook’s rela­tion­ship with the Hin­du nation­al­ist par­ty. Face­book has repeat­ed­ly treat­ed rule vio­la­tions by BJP lead­ers with undue lenien­cy, the Wall Street Jour­nal report­ed in August 2020.

    Since Naren­dra Modi and the BJP har­nessed the pow­er of Face­book and took pow­er in India’s 2014 gen­er­al elec­tion, decep­tive social media tac­tics have become com­mon­place in Indi­an pol­i­tics, accord­ing to local experts.

    "Politicians in India are ahead of the curve when it comes to adopting these manipulative techniques, and so this leveraging of social media for political means is only to be expected," said Nikhil Pahwa, an Indian digital rights activist and founder of MediaNama. "This is an arms race between the social media platforms and those who are generating inauthentic behavior."

    All of the major polit­i­cal par­ties in India ben­e­fit from decep­tive tech­niques to acquire fake likes, com­ments, shares or fans, Zhang found. Ahead of India’s 2019 gen­er­al elec­tion, she worked on a mass take­down of low-qual­i­ty script­ed fake engage­ment on polit­i­cal Pages across all par­ties, result­ing in the removal of 2.2m reac­tions, 1.7m shares and 330,000 com­ments from inau­then­tic or com­pro­mised accounts.

    In Decem­ber 2019, Zhang detect­ed four sophis­ti­cat­ed net­works of sus­pi­cious accounts that were pro­duc­ing fake engage­ment – ie likes, shares, com­ments and reac­tions – on the Pages of major Indi­an politi­cians. Two of the net­works were ded­i­cat­ed to sup­port­ing mem­bers of the BJP, includ­ing the MP; the oth­er two sup­port­ed mem­bers of the Indi­an Nation­al Con­gress, the lead­ing oppo­si­tion par­ty.

    An inves­ti­ga­tor from Facebook’s threat intel­li­gence team deter­mined that the net­works were made up of man­u­al­ly con­trolled inau­then­tic accounts that were being used to cre­ate fake engage­ment. They did not rise to the lev­el of “coor­di­nat­ed inau­then­tic behav­ior” – the term Face­book applies to the most seri­ous decep­tive tac­tics on its plat­form, such as the Russ­ian influ­ence oper­a­tion that inter­fered in the 2016 US elec­tion – but they still vio­lat­ed the platform’s rules.

    The inves­ti­ga­tor rec­om­mend­ed that the accounts be sent through an iden­ti­ty “check­point” – a process by which sus­pi­cious accounts are locked unless and until the account own­er can pro­vide proof of their iden­ti­ty. Check­points are a com­mon enforce­ment mech­a­nism for Face­book, which allows users to have just one account, under the user’s “real” name.

    On 19 Decem­ber, a Face­book staffer check­point­ed more than 500 accounts con­nect­ed to three of the net­works. On 20 Decem­ber, the same staffer was prepar­ing to check­point the approx­i­mate­ly 50 accounts involved in the fourth net­work when he paused.

    “Just want to con­firm we’re com­fort­able act­ing on those actors,” he wrote in Facebook’s task man­age­ment sys­tem. One of the accounts had been tagged by Facebook’s “Xcheck” sys­tem as a “Gov­ern­ment Part­ner” and “High Pri­or­i­ty – Indi­an”, he not­ed. The sys­tem is used to flag promi­nent accounts and exempt them from cer­tain auto­mat­ed enforce­ment actions.

    It was the MP's own account, Zhang realized, and its inclusion in the network constituted strong evidence that either the MP or someone with access to his Facebook account was involved in coordinating the 50 fake accounts. (The Guardian is aware of the MP's identity but is choosing not to reveal it since the evidence of his involvement in the network is not definitive. The MP's office did not respond to requests for comment.)

    Polit­i­cal ambi­tions may explain why an MP would attempt to acquire fake likes on his Face­book posts.

    "The worth of a politician is now determined by his social media followers, with Modi leading among most world leaders," said Srinivas Kodali, a researcher with the Free Software Movement India. "Popularity on social media doesn't directly help acquire real power, but it has become a means to enter politics and rise up in the ranks."

    Task man­age­ment doc­u­ments show that Zhang repeat­ed­ly sought approval to move ahead with the check­points. “For com­plete­ness and [to] avoid accu­sa­tions of biased enforce­ment, could we also come to an assess­ment on the clus­ter act­ing on [the MP]?” she wrote on 3 Feb­ru­ary. No one respond­ed.

    On 7 August, she not­ed the still unre­solved sit­u­a­tion, writ­ing: “Giv­en the close ties to a sit­ting mem­ber of the Lok Sab­ha, we sought pol­i­cy approval for a take­down, which we did not receive; and the sit­u­a­tion was not deemed to be a focus for pri­or­i­ti­za­tion.” Again there was no response.

    And on her final day at Face­book in Sep­tem­ber 2020, she updat­ed the task one last time to flag that there was a “still-exist­ing clus­ter of accounts asso­ci­at­ed with” the MP.

    “I asked about it repeat­ed­ly, and I don’t think I ever got a response,” Zhang said. “It seemed quite con­cern­ing to myself because the fact that I had caught a politi­cian or some­one asso­ci­at­ed with him red-hand­ed was more of a rea­son to act, not less.”

    Face­book pro­vid­ed the Guardian with sev­er­al con­tra­dic­to­ry accounts of its han­dling of the MP’s net­work. The com­pa­ny ini­tial­ly denied that action on the net­work had been blocked and said the “vast major­i­ty” of accounts had been check­point­ed and per­ma­nent­ly removed in Decem­ber 2019 and ear­ly 2020.

    After the Guardian point­ed to doc­u­ments show­ing that the check­points had not been car­ried out, Face­book said that “a por­tion” of the clus­ter had been dis­abled in May 2020, and that it was con­tin­u­ing to mon­i­tor the rest of the network’s accounts. It lat­er said that a “spe­cial­ist team” had reviewed the accounts and that a small minor­i­ty of them had not met the thresh­old for removal but were nev­er­the­less now inac­tive.

    The com­pa­ny did not respond to ques­tions about why the accounts had not been check­point­ed in Decem­ber, when the inves­ti­ga­tor first rec­om­mend­ed the enforce­ment. It also did not respond to ques­tions about which spe­cial­ist team was involved in the May review of the accounts, nor why this review and enforce­ment was not record­ed in the task man­age­ment sys­tem. It claimed that the pol­i­cy team was not respon­si­ble for block­ing any action.

    ...

    While Zhang was try­ing and fail­ing to con­vince Face­book to take action on the MP’s net­work, Facebook’s staff took repeat­ed action against one of the two Indi­an Nation­al Con­gress net­works that it had tried to remove in Decem­ber. Though the check­points had knocked out most of the fake accounts, Face­book saw imme­di­ate efforts to recon­sti­tute with new accounts and, in the weeks ahead of the 2020 state elec­tions in Del­hi, the net­work that had pre­vi­ous­ly boost­ed a Con­gress politi­cian in Pun­jab began sup­port­ing AAP, the anti-cor­rup­tion par­ty in Del­hi.

    In the com­ments of posts by BJP politi­cians in Del­hi, the fake accounts rep­re­sent­ed them­selves as sup­port­ers of Modi who were nev­er­the­less choos­ing to vote for AAP in the state elec­tions. The inter­ven­tion may have been a result of polit­i­cal actors attempt­ing to sup­port the par­ty in Del­hi with the best chance to defeat the BJP, since Con­gress enjoys lit­tle sup­port in local Del­hi pol­i­tics. Face­book under­took mul­ti­ple rounds of check­point­ing to knock out the net­work.

    The MP’s case was not the first time that Facebook’s low­er stan­dards toward politi­cians vio­lat­ing its rules against inau­then­tic behav­ior prompt­ed con­cern among some staff. “If peo­ple start real­iz­ing that we make excep­tions for Page admins of pres­i­dents or polit­i­cal par­ties, these oper­a­tors may even­tu­al­ly fig­ure that out and delib­er­ate­ly run their [coor­di­nat­ed inau­then­tic behav­ior] out of more offi­cial chan­nels,” a researcher said to Zhang dur­ing a June 2019 chat about the company’s reluc­tance to take action against a net­work of fake accounts and Pages boost­ing the pres­i­dent of Hon­duras.

    The issue is par­tic­u­lar­ly sen­si­tive in India, where Face­book has come under fire by oppo­si­tion politi­cians for allow­ing BJP politi­cians to break its rules, par­tic­u­lar­ly with regard to anti-Mus­lim hate speech.

    Facebook’s head of pub­lic pol­i­cy for India, Ankhi Das, over­ruled pol­i­cy staff who had deter­mined that the BJP politi­cian T Raja Singh should be des­ig­nat­ed a “dan­ger­ous indi­vid­ual” – the clas­si­fi­ca­tion for hate group lead­ers – over his anti-Mus­lim incite­ment, accord­ing to an August 2020 Wall Street Jour­nal report. Das resigned fol­low­ing the Journal’s report­ing on her open sup­port for Modi’s 2014 cam­paign. Face­book denied any bias or wrong­do­ing.

    ————

    “Face­book planned to remove fake accounts in India – until it real­ized a BJP politi­cian was involved” by Julia Car­rie Wong and Han­nah Ellis-Petersen; The Guardian; 04/15/2021

    “Since Naren­dra Modi and the BJP har­nessed the pow­er of Face­book and took pow­er in India’s 2014 gen­er­al elec­tion, decep­tive social media tac­tics have become com­mon­place in Indi­an pol­i­tics, accord­ing to local experts.”

    If it ain't broke, don't fix it. The BJP's Facebook-focused approach to winning through deceptive social media tactics is a proven, winning strategy. So of course the BJP's rival Congress Party has joined in. But the Congress Party apparently didn't know the secret trick to making Facebook keep your bot network running: have at least one account in your network belong to a real prominent politician. That was apparently the rule deployed by Facebook when it discovered one of the BJP networks included a "High Priority" BJP politician on the "Xcheck" system of prominent individuals. It's an incredible loophole:

    ...
    In Decem­ber 2019, Zhang detect­ed four sophis­ti­cat­ed net­works of sus­pi­cious accounts that were pro­duc­ing fake engage­ment – ie likes, shares, com­ments and reac­tions – on the Pages of major Indi­an politi­cians. Two of the net­works were ded­i­cat­ed to sup­port­ing mem­bers of the BJP, includ­ing the MP; the oth­er two sup­port­ed mem­bers of the Indi­an Nation­al Con­gress, the lead­ing oppo­si­tion par­ty.

    ...

    On 19 Decem­ber, a Face­book staffer check­point­ed more than 500 accounts con­nect­ed to three of the net­works. On 20 Decem­ber, the same staffer was prepar­ing to check­point the approx­i­mate­ly 50 accounts involved in the fourth net­work when he paused.

    “Just want to con­firm we’re com­fort­able act­ing on those actors,” he wrote in Facebook’s task man­age­ment sys­tem. One of the accounts had been tagged by Facebook’s “Xcheck” sys­tem as a “Gov­ern­ment Part­ner” and “High Pri­or­i­ty – Indi­an”, he not­ed. The sys­tem is used to flag promi­nent accounts and exempt them from cer­tain auto­mat­ed enforce­ment actions.

    It was the MP's own account, Zhang realized, and its inclusion in the network constituted strong evidence that either the MP or someone with access to his Facebook account was involved in coordinating the 50 fake accounts. (The Guardian is aware of the MP's identity but is choosing not to reveal it since the evidence of his involvement in the network is not definitive. The MP's office did not respond to requests for comment.)

    ...

    The MP’s case was not the first time that Facebook’s low­er stan­dards toward politi­cians vio­lat­ing its rules against inau­then­tic behav­ior prompt­ed con­cern among some staff. “If peo­ple start real­iz­ing that we make excep­tions for Page admins of pres­i­dents or polit­i­cal par­ties, these oper­a­tors may even­tu­al­ly fig­ure that out and delib­er­ate­ly run their [coor­di­nat­ed inau­then­tic behav­ior] out of more offi­cial chan­nels,” a researcher said to Zhang dur­ing a June 2019 chat about the company’s reluc­tance to take action against a net­work of fake accounts and Pages boost­ing the pres­i­dent of Hon­duras.
    ...

    So would this loophole have applied to the Congress Party had they also associated their bot networks with an 'Xcheck' figure? That remains unclear, but the fact that Facebook repeatedly took action against one of the Congress Party networks at the same time it was systematically ignoring the BJP network points towards a pro-BJP policy. Again, don't forget that BJP affiliates play prominent roles in Facebook's Indian operation. So while it's possible that association with prominent politicians on the "Xcheck" system might confer a degree of immunity from this oversight, it's also very possible that this immunity only really applies to BJP associates:

    ...
    While Zhang was try­ing and fail­ing to con­vince Face­book to take action on the MP’s net­work, Facebook’s staff took repeat­ed action against one of the two Indi­an Nation­al Con­gress net­works that it had tried to remove in Decem­ber. Though the check­points had knocked out most of the fake accounts, Face­book saw imme­di­ate efforts to recon­sti­tute with new accounts and, in the weeks ahead of the 2020 state elec­tions in Del­hi, the net­work that had pre­vi­ous­ly boost­ed a Con­gress politi­cian in Pun­jab began sup­port­ing AAP, the anti-cor­rup­tion par­ty in Del­hi.

    In the com­ments of posts by BJP politi­cians in Del­hi, the fake accounts rep­re­sent­ed them­selves as sup­port­ers of Modi who were nev­er­the­less choos­ing to vote for AAP in the state elec­tions. The inter­ven­tion may have been a result of polit­i­cal actors attempt­ing to sup­port the par­ty in Del­hi with the best chance to defeat the BJP, since Con­gress enjoys lit­tle sup­port in local Del­hi pol­i­tics. Face­book under­took mul­ti­ple rounds of check­point­ing to knock out the net­work.
    ...

    So what did Facebook tell Zhang about why it refused to carry out the actions she called for? No response in February of 2020. No response again in August of 2020, and no response on her way out in September of 2020. Nothing. So whatever the actual explanation is, it's apparently too scandalous to even be discussed in Facebook's internal communications:

    ...
    Task man­age­ment doc­u­ments show that Zhang repeat­ed­ly sought approval to move ahead with the check­points. “For com­plete­ness and [to] avoid accu­sa­tions of biased enforce­ment, could we also come to an assess­ment on the clus­ter act­ing on [the MP]?” she wrote on 3 Feb­ru­ary. No one respond­ed.

    On 7 August, she not­ed the still unre­solved sit­u­a­tion, writ­ing: “Giv­en the close ties to a sit­ting mem­ber of the Lok Sab­ha, we sought pol­i­cy approval for a take­down, which we did not receive; and the sit­u­a­tion was not deemed to be a focus for pri­or­i­ti­za­tion.” Again there was no response.

    And on her final day at Face­book in Sep­tem­ber 2020, she updat­ed the task one last time to flag that there was a “still-exist­ing clus­ter of accounts asso­ci­at­ed with” the MP.

    "I asked about it repeatedly, and I don't think I ever got a response," Zhang said. "It seemed quite concerning to myself because the fact that I had caught a politician or someone associated with him red-handed was more of a reason to act, not less."
    ...

    And then there's Facebook's openly contradictory answers to reporters on this issue, the classic sign of a cover-up. And note the curious statement Facebook finally gave when pressed about why the blocking of the identified inauthentic accounts didn't happen when Zhang first flagged them in December of 2019: Facebook asserted that the policy team was not responsible for blocking any action. Up to this point it has appeared that the policy teams are indeed the groups blocking these actions. So Facebook is either outright lying here (very possible), or they've added a new layer to their bureaucracy, with something like a secret team making these blocking decisions, ensuring no one is to blame when it comes time for finger pointing:

    ...
    Face­book pro­vid­ed the Guardian with sev­er­al con­tra­dic­to­ry accounts of its han­dling of the MP’s net­work. The com­pa­ny ini­tial­ly denied that action on the net­work had been blocked and said the “vast major­i­ty” of accounts had been check­point­ed and per­ma­nent­ly removed in Decem­ber 2019 and ear­ly 2020.

    After the Guardian point­ed to doc­u­ments show­ing that the check­points had not been car­ried out, Face­book said that “a por­tion” of the clus­ter had been dis­abled in May 2020, and that it was con­tin­u­ing to mon­i­tor the rest of the network’s accounts. It lat­er said that a “spe­cial­ist team” had reviewed the accounts and that a small minor­i­ty of them had not met the thresh­old for removal but were nev­er­the­less now inac­tive.

    The com­pa­ny did not respond to ques­tions about why the accounts had not been check­point­ed in Decem­ber, when the inves­ti­ga­tor first rec­om­mend­ed the enforce­ment. It also did not respond to ques­tions about which spe­cial­ist team was involved in the May review of the accounts, nor why this review and enforce­ment was not record­ed in the task man­age­ment sys­tem. It claimed that the pol­i­cy team was not respon­si­ble for block­ing any action.
    ...

    So we have to ask: What's the current status of that BJP politician's network? Facebook's answers to reporters suggested that some, but not all, of the accounts were taken down, but only after the company repeatedly gave contradictory answers indicating responses made in bad faith. In other words, all the circumstantial evidence points in the direction of this BJP network likely still operating.

    The story also raises the question of whether or not we're going to see any Congress Party members intentionally associate influence bot networks with their own Facebook accounts in order to try to receive the "Xcheck" special treatment. Because if not, that would be a pretty strong indication that this is a loophole that only applies to conservative politicians, although it's not as if we needed more indications that this is the actual policy.

    Posted by Pterrafractyl | November 19, 2021, 3:29 pm
  13. There's a new report out on the systematic peddling of misinformation on Facebook that contains what is arguably one of the most scandalous findings so far for the company. It turns out Facebook hasn't just been allowing the worst purveyors of disinformation to continue exploiting its platforms. No, Facebook has also been paying them. Yep. As a result, peddling disinformation has become quite a growth sector over the past five years, in particular in the Global South, where the payments often vastly exceed typical local monthly salaries.

    It's a consequence of the Instant Articles program Facebook rolled out in 2015, which offered publishers the option of having article content show up directly on Facebook instead of launching a separate page at the publisher's site, allowing Facebook to grab a greater share of the advertising marketplace from Google and share part of that revenue with the publishers. And while the program was largely ignored by major publishers, who preferred to send the traffic directly to their own pages, it has been increasingly popular in the developing world, where the payments are relatively attractive, with billions of dollars paid out by Facebook in 2019.

    And as we should expect, there’s a serious problem with the quality of the content being pushed by the page operators participating in the Instant Articles program, with a strong preference for the most viral content, which also tends to be the content most saturated with disinformation. Additionally, Facebook has refused to invest in the manpower required for effective oversight of non-English content, meaning the parts of the world where this program is most popular are the parts where Facebook has the largest oversight gaps. According to the findings in the following report, obvious clickbait pages were staying up for hundreds of days before being taken down, and the same actors would just spin up new pages. Which, again, is exactly what we should expect from Facebook. It would be utterly bizarre if this wasn’t the case.

    But here’s what is possibly the most disturbing part of this report: these same clickbait operations have been found spoofing Facebook’s Live video streams. That’s not something that should be technically possible. And the instances where researchers observed this highlight the incredible potential of this Live abuse: clickbait operators in Vietnam and Cambodia were repeatedly rebroadcasting, as supposed Live streams, footage of children being loaded into military trucks in Myanmar. And while it appears the footage really was shot in Myanmar around that time, the fact that clickbait operators were able to deliver it as a Live stream that audiences in Myanmar believed they were watching in real time demonstrates the explosive potential of this kind of misinformation.
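
    The report doesn’t say how, or whether, Facebook screens for this, but the specific abuse the researchers documented, the same pre-recorded footage rebroadcast as multiple “Live” streams, is in principle detectable with ordinary duplicate-video techniques. A minimal sketch of that idea, assuming OpenCV and NumPy are available; the hashing scheme, frame sampling, and threshold are illustrative choices, not any platform’s actual system:

    ```python
    import cv2
    import numpy as np

    def average_hash(frame, hash_size=8):
        """Tiny perceptual hash: downscale to 8x8 grayscale, threshold at the mean."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        small = cv2.resize(gray, (hash_size, hash_size), interpolation=cv2.INTER_AREA)
        return (small > small.mean()).flatten()

    def video_signature(path, samples=10):
        """Hash a handful of evenly spaced frames so two uploads of the same footage match."""
        cap = cv2.VideoCapture(path)
        total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) or 1
        sig = []
        for i in np.linspace(0, total - 1, samples, dtype=int):
            cap.set(cv2.CAP_PROP_POS_FRAMES, int(i))
            ok, frame = cap.read()
            if ok:
                sig.append(average_hash(frame))
        cap.release()
        return np.array(sig)

    def likely_rebroadcast(sig_a, sig_b, max_bit_diff=0.1):
        """Flag two 'Live' streams whose sampled frames are nearly identical."""
        n = min(len(sig_a), len(sig_b))
        if n == 0:
            return False
        diff = np.mean(sig_a[:n] != sig_b[:n])   # fraction of differing hash bits
        return diff < max_bit_diff
    ```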

    Finally, the report also noted that while this Instant Articles scheme was set up to let Facebook grab ad revenue that was otherwise going to Google, Google is no innocent party in all this: it has a similar incentive structure with its AdSense advertising network, which effectively monetizes disinformation videos on YouTube. Most of these clickbait operators rely heavily on both Facebook’s and Google’s monetization programs. So the general takeaway from the report is that the problem with global social media disinformation is worse than you think. And while the report certainly paints a picture of doom and gloom, there is one obvious silver lining to this: at least there are some obvious solutions here. Because maybe the challenges of mass disinformation will be a little easier to deal with if the social media giants, you know, stop paying people billions of dollars to peddle it around the world. Maybe. Just maybe:

    MIT Tech­nol­o­gy Review

    How Face­book and Google fund glob­al mis­in­for­ma­tion
    The tech giants are pay­ing mil­lions of dol­lars to the oper­a­tors of click­bait pages, bankrolling the dete­ri­o­ra­tion of infor­ma­tion ecosys­tems around the world.

    by Karen Hao
    Novem­ber 20, 2021

    Myan­mar, March 2021.

    A month after the fall of the demo­c­ra­t­ic gov­ern­ment.

    A Face­book Live video showed hun­dreds of peo­ple protest­ing against the mil­i­tary coup on the streets of Myan­mar.

    It had near­ly 50,000 shares and over 1.5 mil­lion views, in a coun­try with a lit­tle over 54 mil­lion peo­ple.

    Observers, unable to see the events on the ground, used the footage, along with hun­dreds of oth­er live feeds, to track and doc­u­ment the unfold­ing sit­u­a­tion. (MIT Tech­nol­o­gy Review blurred the names and images of the posters to avoid jeop­ar­diz­ing their safe­ty.)

    But less than a day lat­er, the same video would be broad­cast again mul­ti­ple times, each still claim­ing to be live.

    In the mid­dle of a mas­sive polit­i­cal cri­sis, there was no longer a way to dis­cern what was real and what wasn’t.

    In 2015, six of the 10 web­sites in Myan­mar get­ting the most engage­ment on Face­book were from legit­i­mate media, accord­ing to data from Crowd­Tan­gle, a Face­book-run tool. A year lat­er, Face­book (which recent­ly rebrand­ed to Meta) offered glob­al access to Instant Arti­cles, a pro­gram pub­lish­ers could use to mon­e­tize their con­tent.

    One year after that roll­out, legit­i­mate pub­lish­ers account­ed for only two of the top 10 pub­lish­ers on Face­book in Myan­mar. By 2018, they account­ed for zero. All the engage­ment had instead gone to fake news and click­bait web­sites. In a coun­try where Face­book is syn­ony­mous with the inter­net, the low-grade con­tent over­whelmed oth­er infor­ma­tion sources.

    It was dur­ing this rapid degra­da­tion of Myanmar’s dig­i­tal envi­ron­ment that a mil­i­tant group of Rohingya—a pre­dom­i­nant­ly Mus­lim eth­nic minority—attacked and killed a dozen mem­bers of the secu­ri­ty forces, in August of 2017. As police and mil­i­tary began to crack down on the Rohingya and push out anti-Mus­lim pro­pa­gan­da, fake news arti­cles cap­i­tal­iz­ing on the sen­ti­ment went viral. They claimed that Mus­lims were armed, that they were gath­er­ing in mobs 1,000 strong, that they were around the cor­ner com­ing to kill you.

    It’s still not clear today whether the fake news came pri­mar­i­ly from polit­i­cal actors or from finan­cial­ly moti­vat­ed ones. But either way, the sheer vol­ume of fake news and click­bait act­ed like fuel on the flames of already dan­ger­ous­ly high eth­nic and reli­gious ten­sions. It shift­ed pub­lic opin­ion and esca­lat­ed the con­flict, which ulti­mate­ly led to the death of 10,000 Rohingya, by con­ser­v­a­tive esti­mates, and the dis­place­ment of 700,000 more.

    In 2018, a Unit­ed Nations inves­ti­ga­tion deter­mined that the vio­lence against the Rohingya con­sti­tut­ed a geno­cide and that Face­book had played a “deter­min­ing role” in the atroc­i­ties. Months lat­er, Face­book admit­ted it hadn’t done enough “to help pre­vent our plat­form from being used to foment divi­sion and incite offline vio­lence.”

    Over the last few weeks, the rev­e­la­tions from the Face­book Papers, a col­lec­tion of inter­nal doc­u­ments pro­vid­ed to Con­gress and a con­sor­tium of news orga­ni­za­tions by whistle­blow­er Frances Hau­gen, have reaf­firmed what civ­il soci­ety groups have been say­ing for years: Facebook’s algo­rith­mic ampli­fi­ca­tion of inflam­ma­to­ry con­tent, com­bined with its fail­ure to pri­or­i­tize con­tent mod­er­a­tion out­side the US and Europe, has fueled the spread of hate speech and mis­in­for­ma­tion, dan­ger­ous­ly desta­bi­liz­ing coun­tries around the world.

    But there’s a cru­cial piece miss­ing from the sto­ry. Face­book isn’t just ampli­fy­ing mis­in­for­ma­tion.

    The com­pa­ny is also fund­ing it.

    An MIT Tech­nol­o­gy Review inves­ti­ga­tion, based on expert inter­views, data analy­ses, and doc­u­ments that were not includ­ed in the Face­book Papers, has found that Face­book and Google are pay­ing mil­lions of ad dol­lars to bankroll click­bait actors, fuel­ing the dete­ri­o­ra­tion of infor­ma­tion ecosys­tems around the world.

    The anato­my of a click­bait farm

    Face­book launched its Instant Arti­cles pro­gram in 2015 with a hand­ful of US and Euro­pean pub­lish­ers. The com­pa­ny billed the pro­gram as a way to improve arti­cle load times and cre­ate a slick­er user expe­ri­ence.

    That was the pub­lic sell. But the move also con­ve­nient­ly cap­tured adver­tis­ing dol­lars from Google. Before Instant Arti­cles, arti­cles post­ed on Face­book would redi­rect to a brows­er, where they’d open up on the publisher’s own web­site. The ad provider, usu­al­ly Google, would then cash in on any ad views or clicks. With the new scheme, arti­cles would open up direct­ly with­in the Face­book app, and Face­book would own the ad space. If a par­tic­i­pat­ing pub­lish­er had also opt­ed into mon­e­tiz­ing with Facebook’s adver­tis­ing net­work, called Audi­ence Net­work, Face­book could insert ads into the publisher’s sto­ries and take a 30% cut of the rev­enue.

    Instant Arti­cles quick­ly fell out of favor with its orig­i­nal cohort of big main­stream pub­lish­ers. For them, the pay­outs weren’t high enough com­pared with oth­er avail­able forms of mon­e­ti­za­tion. But that was not true for pub­lish­ers in the Glob­al South, which Face­book began accept­ing into the pro­gram in 2016. In 2018, the com­pa­ny report­ed pay­ing out $1.5 bil­lion to pub­lish­ers and app devel­op­ers (who can also par­tic­i­pate in Audi­ence Net­work). In 2019, that fig­ure had reached mul­ti­ple bil­lions.

    Ear­ly on, Face­book per­formed lit­tle qual­i­ty con­trol on the types of pub­lish­ers join­ing the pro­gram. The platform’s design also didn’t suf­fi­cient­ly penal­ize users for post­ing iden­ti­cal con­tent across Face­book pages—in fact, it reward­ed the behav­ior. Post­ing the same arti­cle on mul­ti­ple pages could as much as dou­ble the num­ber of users who clicked on it and gen­er­at­ed ad rev­enue.

    Click­bait farms around the world seized on this flaw as a strategy—one they still use today.

    A farm will cre­ate a web­site or mul­ti­ple web­sites…

    …for pub­lish­ing pre­dom­i­nant­ly pla­gia­rized con­tent.

    It reg­is­ters them with Instant Arti­cles and Audi­ence Net­work, which inserts ads into their arti­cles.

    Then it posts those arti­cles across a clus­ter of as many as dozens of Face­book pages at a time.

    Click­bait actors cropped up in Myan­mar overnight. With the right recipe for pro­duc­ing engag­ing and evoca­tive con­tent, they could gen­er­ate thou­sands of US dol­lars a month in ad rev­enue, or 10 times the aver­age month­ly salary—paid to them direct­ly by Face­book.

    Scammers used to make their $$ from naive people. Now they get their payments straight from some of the world’s biggest tech companies. Sorry David — but this is NOT equivalent. https://t.co/mhMZMTNi6e pic.twitter.com/hgqYBcHw8U — Victoire Rio (@riovictoire) September 19, 2021

    If this is wild — let’s check out @google, who officially says it does not enable monetisation in Myanmar (too risky) but in practice turns a blind eye and finds no issue with making payments into Myanmar bank accounts. The rules: https://t.co/vCdrTpkGbf https://t.co/gX2FbeIa00 pic.twitter.com/dYV3eJp2eH — Victoire Rio (@riovictoire) September 20, 2021

    An inter­nal com­pa­ny doc­u­ment, first report­ed by MIT Tech­nol­o­gy Review in Octo­ber, shows that Face­book was aware of the prob­lem as ear­ly as 2019. The author, for­mer Face­book data sci­en­tist Jeff Allen, found that these exact tac­tics had allowed click­bait farms in Mace­do­nia and Koso­vo to reach near­ly half a mil­lion Amer­i­cans a year before the 2020 elec­tion. The farms had also made their way into Instant Arti­cles and Ad Breaks, a sim­i­lar mon­e­ti­za­tion pro­gram for insert­ing ads into Face­book videos. At one point, as many as 60% of the domains enrolled in Instant Arti­cles were using the spam­my writ­ing tac­tics employed by click­bait farms, the report said. Allen, bound by a nondis­clo­sure agree­ment with Face­book, did not com­ment on the report.

    Despite pres­sure from both inter­nal and exter­nal researchers, Face­book strug­gled to stem the abuse. Mean­while, the com­pa­ny was rolling out more mon­e­ti­za­tion pro­grams to open up new streams of rev­enue. Besides Ad Breaks for videos, there was IGTV Mon­e­ti­za­tion for Insta­gram and In-Stream Ads for Live videos. “That reck­less push for user growth we saw—now we are see­ing a reck­less push for pub­lish­er growth,” says Vic­toire Rio, a dig­i­tal rights researcher fight­ing plat­form-induced harms in Myan­mar and oth­er coun­tries in the Glob­al South.

    MIT Tech­nol­o­gy Review has found that the prob­lem is now hap­pen­ing on a glob­al scale. Thou­sands of click­bait oper­a­tions have sprung up, pri­mar­i­ly in coun­tries where Facebook’s pay­outs pro­vide a larg­er and stead­ier source of income than oth­er forms of avail­able work. Some are teams of peo­ple while oth­ers are indi­vid­u­als, abet­ted by cheap auto­mat­ed tools that help them cre­ate and dis­trib­ute arti­cles at mass scale. They’re no longer lim­it­ed to pub­lish­ing arti­cles, either. They push out Live videos and run Insta­gram accounts, which they mon­e­tize direct­ly or use to dri­ve more traf­fic to their sites.

    Google is also cul­pa­ble. Its AdSense pro­gram fueled the Mace­do­nia- and Koso­vo-based farms that tar­get­ed Amer­i­can audi­ences in the lead-up to the 2016 pres­i­den­tial elec­tion. And it’s AdSense that is incen­tiviz­ing new click­bait actors on YouTube to post out­ra­geous con­tent and viral mis­in­for­ma­tion.

    Many click­bait farms today now mon­e­tize with both Instant Arti­cles and AdSense, receiv­ing pay­outs from both com­pa­nies. And because Facebook’s and YouTube’s algo­rithms boost what­ev­er is engag­ing to users, they’ve cre­at­ed an infor­ma­tion ecosys­tem where con­tent that goes viral on one plat­form will often be recy­cled on the oth­er to max­i­mize dis­tri­b­u­tion and rev­enue.

    “These actors wouldn’t exist if it wasn’t for the plat­forms,” Rio says.

    ...

    These farms are not just tar­get­ing their home coun­tries. Fol­low­ing the exam­ple of actors from Mace­do­nia and Koso­vo, the newest oper­a­tors have real­ized they need to under­stand nei­ther a country’s local con­text nor its lan­guage to turn polit­i­cal out­rage into income.

    MIT Tech­nol­o­gy Review part­nered with Allen, who now leads a non­prof­it called the Integri­ty Insti­tute that con­ducts research on plat­form abuse, to iden­ti­fy pos­si­ble click­bait actors on Face­book. We focused on pages run out of Cam­bo­dia and Vietnam—two of the coun­tries where click­bait oper­a­tions are now cash­ing in on the sit­u­a­tion in Myan­mar.

    We obtained data from Crowd­Tan­gle, whose devel­op­ment team the com­pa­ny broke up ear­li­er this year, and from Facebook’s Pub­lish­er Lists, which record which pub­lish­ers are reg­is­tered in mon­e­ti­za­tion pro­grams. Allen wrote a cus­tom clus­ter­ing algo­rithm to find pages post­ing con­tent in a high­ly coor­di­nat­ed man­ner and tar­get­ing speak­ers of lan­guages used pri­mar­i­ly out­side the coun­tries where the oper­a­tions are based. We then ana­lyzed which clus­ters had at least one page reg­is­tered in a mon­e­ti­za­tion pro­gram or were heav­i­ly pro­mot­ing con­tent from a page reg­is­tered with a pro­gram.

    We found over 2,000 pages in both coun­tries engaged in this click­bait-like behav­ior. (That could be an under­count, because not all Face­book pages are tracked by Crowd­Tan­gle.) Many have mil­lions of fol­low­ers and like­ly reach even more users. In his 2019 report, Allen found that 75% of users who were exposed to click­bait con­tent from farms run in Mace­do­nia and Koso­vo had nev­er fol­lowed any of the pages. Facebook’s con­tent-rec­om­men­da­tion sys­tem had instead pushed it into their news feeds.

    When MIT Tech­nol­o­gy Review sent Face­book a list of these pages and a detailed expla­na­tion of our method­ol­o­gy, Osborne called the analy­sis “flawed.” “While some Pages here may have been on our pub­lish­er lists, many of them didn’t actu­al­ly mon­e­tize on Face­book,” he said.

    Indeed, these num­bers do not indi­cate that all of these pages gen­er­at­ed ad rev­enue. Instead, it is an esti­mate, based on data Face­book has made pub­licly avail­able, of the num­ber of pages asso­ci­at­ed with click­bait actors in Cam­bo­dia and Viet­nam that Face­book has made eli­gi­ble to mon­e­tize on the plat­form.

    Osborne also con­firmed that more of the Cam­bo­dia-run click­bait-like pages we found had direct­ly onboard­ed onto one of Facebook’s mon­e­ti­za­tion pro­grams than we pre­vi­ous­ly believed. In our analy­sis, we found 35% of the pages in our clus­ters had direct­ly reg­is­tered with a mon­e­ti­za­tion pro­gram in the last two years. The oth­er 65% would have indi­rect­ly gen­er­at­ed ad rev­enue by heav­i­ly pro­mot­ing con­tent from the reg­is­tered page to a wider audi­ence. Osborne said that in fact about half of the pages we found, or rough­ly 150 more pages, had direct­ly reg­is­tered at one point with a mon­e­ti­za­tion pro­gram, pri­mar­i­ly Instant Arti­cles.

    Short­ly after we approached Face­book, some of the Cam­bo­di­an oper­a­tors of these pages began com­plain­ing in online forums that their pages had been boot­ed out of Instant Arti­cles. Osborne declined to respond to our ques­tions about the lat­est enforce­ment actions the com­pa­ny has tak­en.

    Face­book has con­tin­u­ous­ly sought to weed these actors out of its pro­grams. For exam­ple, only 30 of the Cam­bo­dia-run pages are still mon­e­tiz­ing, Osborne said. But our data from Facebook’s pub­lish­er lists shows enforce­ment is often delayed and incomplete—clickbait pages can stay with­in mon­e­ti­za­tion pro­grams for hun­dreds of days before they are tak­en down. The same actors will also spin up new pages once their old ones have demon­e­tized.

    Allen is now open-sourc­ing the code we used to encour­age oth­er inde­pen­dent researchers to refine and build on our work.

    Using the same method­ol­o­gy, we also found more than 400 for­eign-run pages tar­get­ing pre­dom­i­nant­ly US audi­ences in clus­ters that appeared in Facebook’s Pub­lish­er lists over the last two years. (We did not include pages from coun­tries whose pri­ma­ry lan­guage is Eng­lish.) The set includes a mon­e­tiz­ing clus­ter run in part out of Mace­do­nia aimed at women and the LGBTQ com­mu­ni­ty. It has eight Face­book pages, includ­ing two ver­i­fied ones with over 1.7 mil­lion and 1.5 mil­lion fol­low­ers respec­tive­ly, and posts con­tent from five web­sites, each reg­is­tered with Google AdSense and Audi­ence Net­work. It also has three Insta­gram accounts, which mon­e­tize through gift shops and col­lab­o­ra­tions and by direct­ing users to the same large­ly pla­gia­rized web­sites. Admins of the Face­book pages and Insta­gram accounts did not respond to our requests for com­ment.

    Osborne said Face­book is now inves­ti­gat­ing the accounts after we brought them to the company’s atten­tion. Choi said Google has removed AdSense ads from hun­dreds of pages on these sites in the past due to pol­i­cy vio­la­tions but that the sites them­selves are still allowed to mon­e­tize based on the company’s reg­u­lar reviews.

    While it’s pos­si­ble that the Mace­do­nians who run the pages do indeed care about US pol­i­tics and about women’s and LGBTQ rights, the con­tent is unde­ni­ably gen­er­at­ing rev­enue. This means what they pro­mote is most like­ly guid­ed by what wins and los­es with Facebook’s news feed algo­rithm.

    The activ­i­ty of a sin­gle page or clus­ter of pages may not feel sig­nif­i­cant, says Camille François, a researcher at Colum­bia Uni­ver­si­ty who stud­ies orga­nized dis­in­for­ma­tion cam­paigns on social media. But when hun­dreds or thou­sands of actors are doing the same thing, ampli­fy­ing the same con­tent, and reach­ing mil­lions of audi­ence mem­bers, it can affect the pub­lic con­ver­sa­tion. “What peo­ple see as the domes­tic con­ver­sa­tion on a top­ic can actu­al­ly be some­thing com­plete­ly dif­fer­ent,” François says. “It’s a bunch of paid peo­ple pre­tend­ing to not have any rela­tion­ship with one anoth­er, opti­miz­ing what to post.”

    Osborne said Facebook has created several new policies and enforcement protocols in the last two years to address this issue, including penalizing pages run out of one country that behave like they are domestic to another, as well as penalizing pages that build an audience on one topic and then pivot to another. But both Allen and Rio say the company’s actions have failed to close fundamental loopholes in the platform’s policies and designs—vulnerabilities that are fueling a global information crisis.

    “It’s affect­ing coun­tries first and fore­most out­side the US but presents a mas­sive risk to the US long term as well,” Rio says. “It’s going to affect pret­ty much any­where in the world when there are height­ened events like an elec­tion.”

    Dis­in­for­ma­tion for hire

    In response to MIT Tech­nol­o­gy Review’s ini­tial report­ing on Allen’s 2019 inter­nal report, which we pub­lished in full, David Agra­novich, the direc­tor of glob­al threat dis­rup­tion at Face­book, tweet­ed, “The pages ref­er­enced here, based on our own 2019 research, are finan­cial­ly moti­vat­ed spam­mers, not overt influ­ence ops. Both of these are seri­ous chal­lenges, but they’re dif­fer­ent. Con­flat­ing them doesn’t help any­one.” Osborne repeat­ed that we were con­flat­ing the two groups in response to our find­ings.

    But dis­in­for­ma­tion experts say it’s mis­lead­ing to draw a hard line between finan­cial­ly moti­vat­ed spam­mers and polit­i­cal influ­ence oper­a­tions. There is a dis­tinc­tion in intent: finan­cial­ly moti­vat­ed spam­mers are agnos­tic about the con­tent they pub­lish. They go wher­ev­er the clicks and mon­ey are, let­ting Facebook’s news feed algo­rithm dic­tate which top­ics they’ll cov­er next. Polit­i­cal oper­a­tions are instead tar­get­ed toward push­ing a spe­cif­ic agen­da.

    But in prac­tice it doesn’t mat­ter: in their tac­tics and impact, they often look the same. On an aver­age day, a finan­cial­ly moti­vat­ed click­bait site might be pop­u­lat­ed with celebri­ty news, cute ani­mals, or high­ly emo­tion­al stories—all reli­able dri­vers of traf­fic. Then, when polit­i­cal tur­moil strikes, they drift toward hyper­par­ti­san news, mis­in­for­ma­tion, and out­rage bait because it gets more engage­ment.

    The Macedonian page cluster is a prime example. Most of the time the content promotes women’s and LGBTQ rights. But around the time of events like the 2020 election, the January 6 insurrection, and the passage of Texas’s antiabortion “heartbeat bill,” the cluster amplified particularly pointed political content. Many of its articles have been widely circulated by legitimate pages with huge followings, including those run by Occupy Democrats, the Union of Concerned Scientists, and Women’s March Global.

    Polit­i­cal influ­ence oper­a­tions, mean­while, might post celebri­ty and ani­mal con­tent to build out Face­book pages with large fol­low­ings. They then also piv­ot to pol­i­tics dur­ing sen­si­tive polit­i­cal events, cap­i­tal­iz­ing on the huge audi­ences already at their dis­pos­al.

    Polit­i­cal oper­a­tives will some­times also pay finan­cial­ly moti­vat­ed spam­mers to broad­cast pro­pa­gan­da on their Face­book pages, or buy pages to repur­pose them for influ­ence cam­paigns. Rio has already seen evi­dence of a black mar­ket where click­bait actors can sell their large Face­book audi­ences.

    In oth­er words, pages look innocu­ous until they don’t. “We have empow­ered inau­then­tic actors to accu­mu­late huge fol­low­ings for large­ly unknown pur­pos­es,” Allen wrote in the report.

    This shift has hap­pened many times in Myan­mar since the rise of click­bait farms, in par­tic­u­lar dur­ing the Rohingya cri­sis and again in the lead-up to and after­math of this year’s mil­i­tary coup. (The lat­ter was pre­cip­i­tat­ed by events much like those lead­ing to the US Jan­u­ary 6 insur­rec­tion, includ­ing wide­spread fake claims of a stolen elec­tion.)

    In Octo­ber 2020, Face­book took down a num­ber of pages and groups engaged in coor­di­nat­ed click­bait behav­ior in Myan­mar. In an analy­sis of those assets, Graphi­ka, a research firm that stud­ies the spread of infor­ma­tion online, found that the pages focused pre­dom­i­nant­ly on celebri­ty news and gos­sip but pushed out polit­i­cal pro­pa­gan­da, dan­ger­ous anti-Mus­lim rhetoric, and covid-19 mis­in­for­ma­tion dur­ing key moments of cri­sis. Dozens of pages had more than 1 mil­lion fol­low­ers each, with the largest reach­ing over 5 mil­lion.

    The same phe­nom­e­non played out in the Philip­pines in the lead-up to pres­i­dent Rodri­go Duterte’s 2016 elec­tion. Duterte has been com­pared to Don­ald Trump for his pop­ulist pol­i­tics, bom­bas­tic rhetoric, and author­i­tar­i­an lean­ings. Dur­ing his cam­paign, a click­bait farm, reg­is­tered for­mal­ly as the com­pa­ny Twin­mark Media, shift­ed from cov­er­ing celebri­ties and enter­tain­ment to pro­mot­ing him and his ide­ol­o­gy.

    At the time, it was wide­ly believed that politi­cians had hired Twin­mark to con­duct an influ­ence cam­paign. But in inter­views with jour­nal­ists and researchers, for­mer Twin­mark employ­ees admit­ted they were sim­ply chas­ing prof­it. Through exper­i­men­ta­tion, the employ­ees dis­cov­ered that pro-Duterte con­tent excelled dur­ing a heat­ed elec­tion. They even paid oth­er celebri­ties and influ­encers to share their arti­cles to get more clicks and gen­er­ate more ad rev­enue, accord­ing to research from media and com­mu­ni­ca­tion schol­ars Jonathan Ong and Jason Vin­cent A. Cabañes.

    In the final months of the cam­paign, Duterte dom­i­nat­ed the polit­i­cal dis­course on social media. Face­book itself named him the “undis­put­ed king of Face­book con­ver­sa­tions” when it found he was the sub­ject of 68% of all elec­tion-relat­ed dis­cus­sions, com­pared with 46% for his next clos­est rival.

    Three months after the elec­tion, Maria Ressa, CEO of the media com­pa­ny Rap­pler, who won the Nobel Peace Prize this year for her work fight­ing dis­in­for­ma­tion, pub­lished a piece describ­ing how a con­cert of coor­di­nat­ed click­bait and pro­pa­gan­da on Face­book “shift[ed] pub­lic opin­ion on key issues.”

    “It’s a strat­e­gy of ‘death by a thou­sand cuts’—a chip­ping away at facts, using half-truths that fab­ri­cate an alter­na­tive real­i­ty by merg­ing the pow­er of bots and fake accounts on social media to manip­u­late real peo­ple,” she wrote.

    In 2019, Face­book final­ly took down 220 Face­book pages, 73 Face­book accounts, and 29 Insta­gram accounts linked to Twin­mark Media. By then, Face­book and Google had already paid the farm as much as $8 mil­lion (400 mil­lion Philip­pine pesos).

    Nei­ther Face­book nor Google con­firmed this amount. Meta’s Osborne dis­put­ed the char­ac­ter­i­za­tion that Face­book had influ­enced the elec­tion.

    An evolv­ing threat

    Face­book made a major effort to weed click­bait farms out of Instant Arti­cles and Ad Breaks in the first half of 2019, accord­ing to Allen’s inter­nal report. Specif­i­cal­ly, it began check­ing pub­lish­ers for con­tent orig­i­nal­i­ty and demon­e­tiz­ing those who post­ed large­ly uno­rig­i­nal con­tent.

    But these auto­mat­ed checks are lim­it­ed. They pri­mar­i­ly focus on assess­ing the orig­i­nal­i­ty of videos, and not, for exam­ple, whether an arti­cle has been pla­gia­rized. Even if they did, such sys­tems would only be as good as the company’s arti­fi­cial-intel­li­gence capa­bil­i­ties in a giv­en lan­guage. Coun­tries with lan­guages not pri­or­i­tized by the AI research com­mu­ni­ty receive far less atten­tion, if any at all. “In the case of Ethiopia there are 100 mil­lion peo­ple and six lan­guages. Face­book only sup­ports two of those lan­guages for integri­ty sys­tems,” Hau­gen said dur­ing her tes­ti­mo­ny to Con­gress.

    Rio says there are also loop­holes in enforce­ment. Vio­la­tors are tak­en out of the pro­gram but not off the plat­form, and they can appeal to be rein­stat­ed. The appeals are processed by a sep­a­rate team from the one that does the enforc­ing and per­forms only basic top­i­cal checks before rein­stat­ing the actor. (Face­book did not respond to ques­tions about what these checks actu­al­ly look for.) As a result, it can take mere hours for a click­bait oper­a­tor to rejoin again and again after removal. “Some­how all of the teams don’t talk to each oth­er,” she says.

    This is how Rio found her­self in a state of pan­ic in March of this year. A month after the mil­i­tary had arrest­ed for­mer demo­c­ra­t­ic leader Aung San Suu Kyi and seized con­trol of the gov­ern­ment, pro­test­ers were still vio­lent­ly clash­ing with the new regime. The mil­i­tary was spo­rad­i­cal­ly cut­ting access to the inter­net and broad­cast net­works, and Rio was ter­ri­fied for the safe­ty of her friends in the coun­try.

    She began look­ing for them in Face­book Live videos. “Peo­ple were real­ly active­ly watch­ing these videos because this is how you keep track of your loved ones,” she says. She wasn’t con­cerned to see that the videos were com­ing from pages with cred­i­bil­i­ty issues; she believed that the stream­ers were using fake pages to pro­tect their anonymi­ty.

    Then the impos­si­ble hap­pened: she saw the same Live video twice. She remem­bered it because it was hor­ri­fy­ing: hun­dreds of kids, who looked as young as 10, in a line with their hands on their heads, being loaded into mil­i­tary trucks.

    When she dug into it, she dis­cov­ered that the videos were not live at all. Live videos are meant to indi­cate a real-time broad­cast and include impor­tant meta­da­ta about the time and place of the activ­i­ty. These videos had been down­loaded from else­where and rebroad­cast on Face­book using third-par­ty tools to make them look like livestreams.

    There were hun­dreds of them, rack­ing up tens of thou­sands of engage­ments and hun­dreds of thou­sands of views. As of ear­ly Novem­ber, MIT Tech­nol­o­gy Review found dozens of dupli­cate fake Live videos from this time frame still up. One dupli­cate pair with over 200,000 and 160,000 views, respec­tive­ly, pro­claimed in Burmese, “I am the only one who broad­casts live from all over the coun­try in real time.” Face­book took sev­er­al of them down after we brought them to its atten­tion but dozens more, as well as the pages that post­ed them, still remain. Osborne said the com­pa­ny is aware of the issue and has sig­nif­i­cant­ly reduced these fake Lives and their dis­tri­b­u­tion over the past year.

    Iron­i­cal­ly, Rio believes, the videos were like­ly ripped from footage of the cri­sis uploaded to YouTube as human rights evi­dence. The scenes, in oth­er words, are indeed from Myanmar—but they were all being post­ed from Viet­nam and Cam­bo­dia.

    Over the past half-year, Rio has tracked and iden­ti­fied sev­er­al page clus­ters run out of Viet­nam and Cam­bo­dia. Many used fake Live videos to rapid­ly build their fol­low­er num­bers and dri­ve view­ers to join Face­book groups dis­guised as pro-democ­ra­cy com­mu­ni­ties. Rio now wor­ries that Facebook’s lat­est roll­out of in-stream ads in Live videos will fur­ther incen­tivize click­bait actors to fake them. One Cam­bo­di­an clus­ter with 18 pages began post­ing high­ly dam­ag­ing polit­i­cal mis­in­for­ma­tion, reach­ing a total of 16 mil­lion engage­ments and an audi­ence of 1.6 mil­lion in four months. Face­book took all 18 pages down in March but new clus­ters con­tin­ue to spin up while oth­ers remain.

    For all Rio knows, these Viet­namese and Cam­bo­di­an actors do not speak Burmese. They like­ly do not under­stand Burmese cul­ture or the country’s pol­i­tics. The bot­tom line is they don’t need to. Not when they’re steal­ing their con­tent.

    Rio has since found sev­er­al of the Cam­bo­di­ans’ pri­vate Face­book and Telegram groups (one with upward of 3,000 indi­vid­u­als), where they trade tools and tips about the best mon­ey-mak­ing strate­gies. MIT Tech­nol­o­gy Review reviewed the doc­u­ments, images, and videos she gath­ered, and hired a Khmer trans­la­tor to inter­pret a tuto­r­i­al video that walks view­ers step by step through a click­bait work­flow.

    The mate­ri­als show how the Cam­bo­di­an oper­a­tors gath­er research on the best-per­form­ing con­tent in each coun­try and pla­gia­rize them for their click­bait web­sites. One Google Dri­ve fold­er shared with­in the com­mu­ni­ty has two dozen spread­sheets of links to the most pop­u­lar Face­book groups in 20 coun­tries, includ­ing the US, the UK, Aus­tralia, India, France, Ger­many, Mex­i­co, and Brazil.

    The tuto­r­i­al video also shows how they find the most viral YouTube videos in dif­fer­ent lan­guages and use an auto­mat­ed tool to con­vert each one into an arti­cle for their site. We found 29 YouTube chan­nels spread­ing polit­i­cal mis­in­for­ma­tion about the cur­rent polit­i­cal sit­u­a­tion in Myan­mar, for exam­ple, that were being con­vert­ed into click­bait arti­cles and redis­trib­uted to new audi­ences on Face­book.

    After we brought the channels to its attention, YouTube terminated all of them for violating its community guidelines, including 7 that it determined were part of coordinated influence operations linked to Myanmar. Choi noted that YouTube had previously also stopped serving ads on nearly 2,000 videos across these channels. “We continue to actively monitor our platforms to prevent bad actors looking to abuse our network for profit,” she said.

    Then there are oth­er tools, includ­ing one that allows pre­re­cord­ed videos to appear as fake Face­book Live videos. Anoth­er ran­dom­ly gen­er­ates pro­file details for US men, includ­ing image, name, birth­day, Social Secu­ri­ty num­ber, phone num­ber, and address, so yet anoth­er tool can mass-pro­duce fake Face­book accounts using some of that infor­ma­tion.

    It’s now so easy to do that many Cam­bo­di­an actors oper­ate solo. Rio calls them micro-entre­pre­neurs. In the most extreme sce­nario, she’s seen indi­vid­u­als man­age as many as 11,000 Face­book accounts on their own.

    Suc­cess­ful micro-entre­pre­neurs are also train­ing oth­ers to do this work in their com­mu­ni­ty. “It’s going to get worse,” she says. “Any Joe in the world could be affect­ing your infor­ma­tion envi­ron­ment with­out you real­iz­ing.”

    Prof­it over safe­ty

    Dur­ing her Sen­ate tes­ti­mo­ny in Octo­ber of this year, Hau­gen high­light­ed the fun­da­men­tal flaws of Facebook’s con­tent-based approach to plat­form abuse. The cur­rent strat­e­gy, focused on what can and can­not appear on the plat­form, can only be reac­tive and nev­er com­pre­hen­sive, she said. Not only does it require Face­book to enu­mer­ate every pos­si­ble form of abuse, but it also requires the com­pa­ny to be pro­fi­cient at mod­er­at­ing in every lan­guage. Face­book has failed on both counts—and the most vul­ner­a­ble peo­ple in the world have paid the great­est price, she said.

    The main cul­prit, Hau­gen said, is Facebook’s desire to max­i­mize engage­ment, which has turned its algo­rithm and plat­form design into a giant bull­horn for hate speech and mis­in­for­ma­tion. An MIT Tech­nol­o­gy Review inves­ti­ga­tion from ear­li­er this year, based on dozens of inter­views with Face­book exec­u­tives, cur­rent and for­mer employ­ees, indus­try peers, and exter­nal experts, cor­rob­o­rates this char­ac­ter­i­za­tion.

    Her tes­ti­mo­ny also echoed what Allen wrote in his report—and what Rio and oth­er dis­in­for­ma­tion experts have repeat­ed­ly seen through their research. For click­bait farms, get­ting into the mon­e­ti­za­tion pro­grams is the first step, but how much they cash in depends on how far Facebook’s con­tent-rec­om­men­da­tion sys­tems boost their arti­cles. They would not thrive, nor would they pla­gia­rize such dam­ag­ing con­tent, if their shady tac­tics didn’t do so well on the plat­form.

    As a result, weed­ing out the farms them­selves isn’t the solu­tion: high­ly moti­vat­ed actors will always be able to spin up new web­sites and new pages to get more mon­ey. Instead, it’s the algo­rithms and con­tent reward mech­a­nisms that need address­ing.

    In his report, Allen pro­posed one pos­si­ble way Face­book could do this: by using what’s known as a graph-based author­i­ty mea­sure to rank con­tent. This would ampli­fy high­er-qual­i­ty pages like news and media and dimin­ish low­er-qual­i­ty pages like click­bait, revers­ing the cur­rent trend.

    Hau­gen empha­sized that Facebook’s fail­ure to fix its plat­form was not for want of solu­tions, tools, or capac­i­ty. “Face­book can change but is clear­ly not going to do so on its own,” she said. “My fear is that with­out action, the divi­sive and extrem­ist behav­iors we see today are only the begin­ning. What we saw in Myan­mar and are now see­ing in Ethiopia are only the open­ing chap­ters of a sto­ry so ter­ri­fy­ing no one wants to read the end of it.”

    ...

    In Octo­ber, the out­go­ing UN spe­cial envoy on Myan­mar said the coun­try had dete­ri­o­rat­ed into civ­il war. Thou­sands of peo­ple have since fled to neigh­bor­ing coun­tries like Thai­land and India. As of mid-Novem­ber, click­bait actors were con­tin­u­ing to post fake news hourly: In one, the demo­c­ra­t­ic leader, “Moth­er Suu,” had been assas­si­nat­ed. In anoth­er, she had final­ly been freed.

    ———-

    “How Face­book and Google fund glob­al mis­in­for­ma­tion” by Karen Hao; MIT Tech­nol­o­gy Review; 11/20/2021

    “MIT Technology Review has found that the problem is now happening on a global scale. Thousands of clickbait operations have sprung up, primarily in countries where Facebook’s payouts provide a larger and steadier source of income than other forms of available work. Some are teams of people while others are individuals, abetted by cheap automated tools that help them create and distribute articles at mass scale. They’re no longer limited to publishing articles, either. They push out Live videos and run Instagram accounts, which they monetize directly or use to drive more traffic to their sites.”

    What researchers found happening in Myanmar during the outbreak of civil war was happening at a global scale: clickbait operators popping up in the poorest countries around the world, pushing viral disinformation for profit, typically far more profit than they could otherwise earn locally. In other words, pushing disinformation on Facebook has suddenly become one of the better-paying jobs available around the globe. And it’s Facebook paying for it, through the Instant Articles program launched in 2015: billions of dollars paid out by Facebook over just the last five years to disinformation pushers across the Global South. That’s the mega scandal here. Just the latest Facebook mega scandal:

    ...
    Face­book launched its Instant Arti­cles pro­gram in 2015 with a hand­ful of US and Euro­pean pub­lish­ers. The com­pa­ny billed the pro­gram as a way to improve arti­cle load times and cre­ate a slick­er user expe­ri­ence.

    That was the pub­lic sell. But the move also con­ve­nient­ly cap­tured adver­tis­ing dol­lars from Google. Before Instant Arti­cles, arti­cles post­ed on Face­book would redi­rect to a brows­er, where they’d open up on the publisher’s own web­site. The ad provider, usu­al­ly Google, would then cash in on any ad views or clicks. With the new scheme, arti­cles would open up direct­ly with­in the Face­book app, and Face­book would own the ad space. If a par­tic­i­pat­ing pub­lish­er had also opt­ed into mon­e­tiz­ing with Facebook’s adver­tis­ing net­work, called Audi­ence Net­work, Face­book could insert ads into the publisher’s sto­ries and take a 30% cut of the rev­enue.

    Instant Arti­cles quick­ly fell out of favor with its orig­i­nal cohort of big main­stream pub­lish­ers. For them, the pay­outs weren’t high enough com­pared with oth­er avail­able forms of mon­e­ti­za­tion. But that was not true for pub­lish­ers in the Glob­al South, which Face­book began accept­ing into the pro­gram in 2016. In 2018, the com­pa­ny report­ed pay­ing out $1.5 bil­lion to pub­lish­ers and app devel­op­ers (who can also par­tic­i­pate in Audi­ence Net­work). In 2019, that fig­ure had reached mul­ti­ple bil­lions.

    Ear­ly on, Face­book per­formed lit­tle qual­i­ty con­trol on the types of pub­lish­ers join­ing the pro­gram. The platform’s design also didn’t suf­fi­cient­ly penal­ize users for post­ing iden­ti­cal con­tent across Face­book pages—in fact, it reward­ed the behav­ior. Post­ing the same arti­cle on mul­ti­ple pages could as much as dou­ble the num­ber of users who clicked on it and gen­er­at­ed ad rev­enue.

    ...

    Click­bait actors cropped up in Myan­mar overnight. With the right recipe for pro­duc­ing engag­ing and evoca­tive con­tent, they could gen­er­ate thou­sands of US dol­lars a month in ad rev­enue, or 10 times the aver­age month­ly salary—paid to them direct­ly by Face­book.
    ...

    And, of course, we already have evi­dence of Face­book’s inter­nal strug­gle over how to address the abuse facil­i­tat­ed by this Instant Arti­cles pro­gram. An inter­nal strug­gle inevitably won by the forces of chaos. Prof­itable chaos. The abusers kicked out of the pro­gram could sign back up and repeat the process with­in hours:

    ...
    An inter­nal com­pa­ny doc­u­ment, first report­ed by MIT Tech­nol­o­gy Review in Octo­ber, shows that Face­book was aware of the prob­lem as ear­ly as 2019. The author, for­mer Face­book data sci­en­tist Jeff Allen, found that these exact tac­tics had allowed click­bait farms in Mace­do­nia and Koso­vo to reach near­ly half a mil­lion Amer­i­cans a year before the 2020 elec­tion. The farms had also made their way into Instant Arti­cles and Ad Breaks, a sim­i­lar mon­e­ti­za­tion pro­gram for insert­ing ads into Face­book videos. At one point, as many as 60% of the domains enrolled in Instant Arti­cles were using the spam­my writ­ing tac­tics employed by click­bait farms, the report said. Allen, bound by a nondis­clo­sure agree­ment with Face­book, did not com­ment on the report.

    Despite pres­sure from both inter­nal and exter­nal researchers, Face­book strug­gled to stem the abuse. Mean­while, the com­pa­ny was rolling out more mon­e­ti­za­tion pro­grams to open up new streams of rev­enue. Besides Ad Breaks for videos, there was IGTV Mon­e­ti­za­tion for Insta­gram and In-Stream Ads for Live videos. “That reck­less push for user growth we saw—now we are see­ing a reck­less push for pub­lish­er growth,” says Vic­toire Rio, a dig­i­tal rights researcher fight­ing plat­form-induced harms in Myan­mar and oth­er coun­tries in the Glob­al South.

    ...

    When MIT Tech­nol­o­gy Review sent Face­book a list of these pages and a detailed expla­na­tion of our method­ol­o­gy, Osborne called the analy­sis “flawed.” “While some Pages here may have been on our pub­lish­er lists, many of them didn’t actu­al­ly mon­e­tize on Face­book,” he said.

    Indeed, these num­bers do not indi­cate that all of these pages gen­er­at­ed ad rev­enue. Instead, it is an esti­mate, based on data Face­book has made pub­licly avail­able, of the num­ber of pages asso­ci­at­ed with click­bait actors in Cam­bo­dia and Viet­nam that Face­book has made eli­gi­ble to mon­e­tize on the plat­form.

    Osborne also con­firmed that more of the Cam­bo­dia-run click­bait-like pages we found had direct­ly onboard­ed onto one of Facebook’s mon­e­ti­za­tion pro­grams than we pre­vi­ous­ly believed. In our analy­sis, we found 35% of the pages in our clus­ters had direct­ly reg­is­tered with a mon­e­ti­za­tion pro­gram in the last two years. The oth­er 65% would have indi­rect­ly gen­er­at­ed ad rev­enue by heav­i­ly pro­mot­ing con­tent from the reg­is­tered page to a wider audi­ence. Osborne said that in fact about half of the pages we found, or rough­ly 150 more pages, had direct­ly reg­is­tered at one point with a mon­e­ti­za­tion pro­gram, pri­mar­i­ly Instant Arti­cles.

    Short­ly after we approached Face­book, some of the Cam­bo­di­an oper­a­tors of these pages began com­plain­ing in online forums that their pages had been boot­ed out of Instant Arti­cles. Osborne declined to respond to our ques­tions about the lat­est enforce­ment actions the com­pa­ny has tak­en.

    Face­book has con­tin­u­ous­ly sought to weed these actors out of its pro­grams. For exam­ple, only 30 of the Cam­bo­dia-run pages are still mon­e­tiz­ing, Osborne said. But our data from Facebook’s pub­lish­er lists shows enforce­ment is often delayed and incomplete—clickbait pages can stay with­in mon­e­ti­za­tion pro­grams for hun­dreds of days before they are tak­en down. The same actors will also spin up new pages once their old ones have demon­e­tized.

    ...

    Face­book made a major effort to weed click­bait farms out of Instant Arti­cles and Ad Breaks in the first half of 2019, accord­ing to Allen’s inter­nal report. Specif­i­cal­ly, it began check­ing pub­lish­ers for con­tent orig­i­nal­i­ty and demon­e­tiz­ing those who post­ed large­ly uno­rig­i­nal con­tent.

    But these auto­mat­ed checks are lim­it­ed. They pri­mar­i­ly focus on assess­ing the orig­i­nal­i­ty of videos, and not, for exam­ple, whether an arti­cle has been pla­gia­rized. Even if they did, such sys­tems would only be as good as the company’s arti­fi­cial-intel­li­gence capa­bil­i­ties in a giv­en lan­guage. Coun­tries with lan­guages not pri­or­i­tized by the AI research com­mu­ni­ty receive far less atten­tion, if any at all. “In the case of Ethiopia there are 100 mil­lion peo­ple and six lan­guages. Face­book only sup­ports two of those lan­guages for integri­ty sys­tems,” Hau­gen said dur­ing her tes­ti­mo­ny to Con­gress.

    Rio says there are also loop­holes in enforce­ment. Vio­la­tors are tak­en out of the pro­gram but not off the plat­form, and they can appeal to be rein­stat­ed. The appeals are processed by a sep­a­rate team from the one that does the enforc­ing and per­forms only basic top­i­cal checks before rein­stat­ing the actor. (Face­book did not respond to ques­tions about what these checks actu­al­ly look for.) As a result, it can take mere hours for a click­bait oper­a­tor to rejoin again and again after removal. “Some­how all of the teams don’t talk to each oth­er,” she says.
    ...

    But perhaps the most disturbing finding in this report is the spoofing of “Live Feeds”: feeds delivered to Facebook audiences as genuine live content even though the footage is pre-recorded. The perfect tool for social destabilization. It’s just a matter of time before we see an entire country collapse into conflict as a result of this kind of spoofing:

    ...
    This is how Rio found her­self in a state of pan­ic in March of this year. A month after the mil­i­tary had arrest­ed for­mer demo­c­ra­t­ic leader Aung San Suu Kyi and seized con­trol of the gov­ern­ment, pro­test­ers were still vio­lent­ly clash­ing with the new regime. The mil­i­tary was spo­rad­i­cal­ly cut­ting access to the inter­net and broad­cast net­works, and Rio was ter­ri­fied for the safe­ty of her friends in the coun­try.

    She began look­ing for them in Face­book Live videos. “Peo­ple were real­ly active­ly watch­ing these videos because this is how you keep track of your loved ones,” she says. She wasn’t con­cerned to see that the videos were com­ing from pages with cred­i­bil­i­ty issues; she believed that the stream­ers were using fake pages to pro­tect their anonymi­ty.

    Then the impos­si­ble hap­pened: she saw the same Live video twice. She remem­bered it because it was hor­ri­fy­ing: hun­dreds of kids, who looked as young as 10, in a line with their hands on their heads, being loaded into mil­i­tary trucks.

    When she dug into it, she dis­cov­ered that the videos were not live at all. Live videos are meant to indi­cate a real-time broad­cast and include impor­tant meta­da­ta about the time and place of the activ­i­ty. These videos had been down­loaded from else­where and rebroad­cast on Face­book using third-par­ty tools to make them look like livestreams.

    There were hun­dreds of them, rack­ing up tens of thou­sands of engage­ments and hun­dreds of thou­sands of views. As of ear­ly Novem­ber, MIT Tech­nol­o­gy Review found dozens of dupli­cate fake Live videos from this time frame still up. One dupli­cate pair with over 200,000 and 160,000 views, respec­tive­ly, pro­claimed in Burmese, “I am the only one who broad­casts live from all over the coun­try in real time.” Face­book took sev­er­al of them down after we brought them to its atten­tion but dozens more, as well as the pages that post­ed them, still remain. Osborne said the com­pa­ny is aware of the issue and has sig­nif­i­cant­ly reduced these fake Lives and their dis­tri­b­u­tion over the past year.

    Iron­i­cal­ly, Rio believes, the videos were like­ly ripped from footage of the cri­sis uploaded to YouTube as human rights evi­dence. The scenes, in oth­er words, are indeed from Myanmar—but they were all being post­ed from Viet­nam and Cam­bo­dia.

    Over the past half-year, Rio has tracked and iden­ti­fied sev­er­al page clus­ters run out of Viet­nam and Cam­bo­dia. Many used fake Live videos to rapid­ly build their fol­low­er num­bers and dri­ve view­ers to join Face­book groups dis­guised as pro-democ­ra­cy com­mu­ni­ties. Rio now wor­ries that Facebook’s lat­est roll­out of in-stream ads in Live videos will fur­ther incen­tivize click­bait actors to fake them. One Cam­bo­di­an clus­ter with 18 pages began post­ing high­ly dam­ag­ing polit­i­cal mis­in­for­ma­tion, reach­ing a total of 16 mil­lion engage­ments and an audi­ence of 1.6 mil­lion in four months. Face­book took all 18 pages down in March but new clus­ters con­tin­ue to spin up while oth­ers remain.

    For all Rio knows, these Viet­namese and Cam­bo­di­an actors do not speak Burmese. They like­ly do not under­stand Burmese cul­ture or the country’s pol­i­tics. The bot­tom line is they don’t need to. Not when they’re steal­ing their con­tent.
    ...

    And note how easy it is for a single individual to set up a massive fake Facebook bot network: tools exist that let one motivated operator manage more than 10,000 Facebook accounts on their own. As we’ve seen, part of this whole clickbait enterprise involves not just the directly monetized pages but all the pages that are indirectly monetized by funneling traffic to the monetized pages. Bot networks are a key part of how that’s done (a sketch of one way such account clusters can be surfaced follows the excerpt below):

    ...
    Then there are oth­er tools, includ­ing one that allows pre­re­cord­ed videos to appear as fake Face­book Live videos. Anoth­er ran­dom­ly gen­er­ates pro­file details for US men, includ­ing image, name, birth­day, Social Secu­ri­ty num­ber, phone num­ber, and address, so yet anoth­er tool can mass-pro­duce fake Face­book accounts using some of that infor­ma­tion.

    It’s now so easy to do that many Cam­bo­di­an actors oper­ate solo. Rio calls them micro-entre­pre­neurs. In the most extreme sce­nario, she’s seen indi­vid­u­als man­age as many as 11,000 Face­book accounts on their own.

    Suc­cess­ful micro-entre­pre­neurs are also train­ing oth­ers to do this work in their com­mu­ni­ty. “It’s going to get worse,” she says. “Any Joe in the world could be affect­ing your infor­ma­tion envi­ron­ment with­out you real­iz­ing.”
    ...
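
    As promised above, here is a minimal sketch of how clusters of mass-produced accounts can be surfaced from the defender’s side. It assumes you already have account records with creation timestamps and a hash of the profile photo; the field names, thresholds, and record format are all hypothetical, not Facebook’s actual schema or detection system:

    ```python
    from collections import defaultdict
    from datetime import timedelta

    # Hypothetical account records; the field names are illustrative, not Facebook's schema.
    # Each record looks like: {"id": ..., "created": datetime, "profile_photo_hash": str}

    def burst_clusters(accounts, max_gap_minutes=10, min_size=50):
        """Group accounts whose consecutive creation times fall within a short gap.

        Mass-produced accounts tend to appear in tight creation bursts; organic
        sign-ups rarely chain together dozens of accounts a few minutes apart.
        """
        ordered = sorted(accounts, key=lambda a: a["created"])
        clusters, current = [], []
        for acct in ordered:
            if current and acct["created"] - current[-1]["created"] > timedelta(minutes=max_gap_minutes):
                if len(current) >= min_size:
                    clusters.append(current)
                current = []
            current.append(acct)
        if len(current) >= min_size:
            clusters.append(current)
        return clusters

    def shared_photo_clusters(accounts, min_size=5):
        """Group accounts that reuse the same profile photo (by hash), another template tell."""
        by_photo = defaultdict(list)
        for acct in accounts:
            by_photo[acct["profile_photo_hash"]].append(acct)
        return [group for group in by_photo.values() if len(group) >= min_size]
    ```

    Creation-time bursts and recycled profile photos are only two of the crude “template tells” such tools leave behind; any real integrity system would presumably combine many more signals than this sketch does.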

    Finally, we have to note that this is just the Facebook side of this larger story of Silicon Valley giants facilitating misinformation around the globe. Google’s AdSense program is basically a subsidy for viral disinformation videos of any kind on YouTube:

    ...
    Google is also cul­pa­ble. Its AdSense pro­gram fueled the Mace­do­nia- and Koso­vo-based farms that tar­get­ed Amer­i­can audi­ences in the lead-up to the 2016 pres­i­den­tial elec­tion. And it’s AdSense that is incen­tiviz­ing new click­bait actors on YouTube to post out­ra­geous con­tent and viral mis­in­for­ma­tion.
    ...

    As we can see, Face­book does­n’t have a monop­oly on the dis­in­for­ma­tion sub­si­diza­tion mar­ket­place. It’s an oli­gop­oly.

    So how long will it be before we witness the first civil war started by spoofed Facebook Live feeds? We’ll find out. But we can be pretty confident that whenever it happens, and wherever it happens, someone somewhere will be turning a profit while doing it. Along with Facebook, which will helpfully be taking its cut of the disinformation-profit pie.
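
    Before closing, it’s worth circling back to the one concrete fix the report surfaces: Allen’s proposal to rank content with a graph-based authority measure rather than raw engagement. The report doesn’t spell out an algorithm, but one common way to realize the idea is a TrustRank-style personalized PageRank over a page-endorsement graph. A minimal sketch with networkx, in which the toy graph, the seed outlets, and the weighting function are all illustrative assumptions rather than anything from Allen’s report:

    ```python
    import networkx as nx

    # Toy endorsement graph: an edge A -> B means A links to or cites B.
    # Nodes, edges, and seed choices are made up purely to illustrate the idea.
    G = nx.DiGraph()
    G.add_edges_from([
        ("fact_checker", "wire_service"),
        ("fact_checker", "local_newspaper"),
        ("wire_service", "local_newspaper"),
        ("local_newspaper", "wire_service"),
        ("clickbait_page_1", "clickbait_page_2"),   # clickbait pages only cite each other
        ("clickbait_page_2", "clickbait_page_1"),
    ])

    # Trust is seeded at a couple of known-good outlets and propagated along links,
    # so pages unreachable from the seeds end up with essentially no authority.
    seeds = {"fact_checker", "wire_service"}
    personalization = {n: (1.0 if n in seeds else 0.0) for n in G}
    authority = nx.pagerank(G, alpha=0.85, personalization=personalization)

    def ranking_score(raw_engagement: float, source: str) -> float:
        """Weight engagement by source authority instead of amplifying raw engagement."""
        return raw_engagement * authority.get(source, 0.0)

    for page, score in sorted(authority.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{page:20s} authority={score:.3f}")
    ```

    The design point is simply that authority has to be earned through endorsements traceable back to trusted seeds, so a pair of clickbait pages citing only each other ends up with essentially no weight, no matter how much engagement they generate.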

    Posted by Pterrafractyl | November 20, 2021, 5:30 pm
