Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

For The Record  

FTR #1074 FakeBook: Walkin’ the Snake on the Earth Island with Facebook (FascisBook, Part 2; In Your Facebook, Part 4)

Dave Emory's entire lifetime of work is available on a flash drive that can be obtained HERE. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by the fall of 2017. The new drive is available for a tax-deductible contribution of $65.00 or more.

WFMU-FM is pod­cast­ing For The Record–You can sub­scribe to the pod­cast HERE.

You can sub­scribe to e‑mail alerts from Spitfirelist.com HERE.

You can sub­scribe to RSS feed from Spitfirelist.com HERE.

You can sub­scribe to the com­ments made on pro­grams and posts–an excel­lent source of infor­ma­tion in, and of, itself, HERE.

Please con­sid­er sup­port­ing THE WORK DAVE EMORY DOES.

This broad­cast was record­ed in one, 60-minute seg­ment.

An Indi­an book plac­ing Adolf Hitler along­side Mahat­ma Gand­hi, Barack Oba­ma and (ahem) Naren­dra Modi.

Intro­duc­tion: We have spo­ken repeat­ed­ly about the Nazi tract Ser­pen­t’s Walk, in which the Third Reich goes under­ground, buys into the opin­ion-form­ing media and, even­tu­al­ly, takes over.

Hitler, the Third Reich and their actions are glo­ri­fied and memo­ri­al­ized. The essence of the book is syn­op­sized on the back cov­er:

"It assumes that Hitler's warrior elite — the SS — didn't give up their struggle for a White world when they lost the Second World War. Instead their survivors went underground and adopted some of the tactics of their enemies: they began building their economic muscle and buying into the opinion-forming media. A century after the war they are ready to challenge the democrats and Jews for the hearts and minds of White Americans, who have begun to have their fill of government-enforced multi-culturalism and 'equality.' "

Some­thing anal­o­gous is hap­pen­ing in Ukraine and India.

In FTR #889, we not­ed that Pierre Omid­yar, a dar­ling of the so-called “pro­gres­sive” sec­tor for his found­ing of The Inter­cept, was deeply involved with the financ­ing of the ascent of both Naren­dra Mod­i’s Hin­dut­va fas­cist BJP and the OUN/B suc­ces­sor orga­ni­za­tions in Ukraine.

Omidyar's anointment as an icon of investigative reporting could not be more ironic, in that journalists and critics of his fascist allies in Ukraine and India are being repressed and murdered, thereby furthering the suppression of truth in those societies. This suppression of truth feeds into the Serpent's Walk scenario.

This program supplements past coverage of Facebook in FTR #'s 718, 946, 1021 and 1039, noting how Facebook has networked with the very Hindutva fascist Indian elements and OUN/B successor organizations in Ukraine discussed above. This networking has been undertaken, ostensibly, to combat fake news. The reality may well be that the Facebook/BJP-RSS/OUN/B links generate fake news, rather than interdicting it. The fake news so generated, however, will be to the liking of the fascists in power in both countries, manifesting as a "Serpent's Walk" revisionist scenario.

Key ele­ments of dis­cus­sion and analy­sis include:

  1. Indian politics has been largely dominated by fake news, spread by social media: " . . . . In the continuing Indian elections, as 900 million people are voting to elect representatives to the lower house of the Parliament, disinformation and hate speech are drowning out truth on social media networks in the country and creating a public health crisis like the pandemics of the past century. This contagion of a staggering amount of morphed images, doctored videos and text messages is spreading largely through messaging services and influencing what India's voters watch and read on their smartphones. A recent study by Microsoft found that over 64 percent Indians encountered fake news online, the highest reported among the 22 countries surveyed. . . . These platforms are filled with fake news and disinformation aimed at influencing political choices during the Indian elections. . . ."
  2. Naren­dra Mod­i’s Hin­dut­va fas­cist BJP has been the pri­ma­ry ben­e­fi­cia­ry of fake news, and his regime has part­nered with Face­book: ” . . . . The hear­ing was an exer­cise in absur­dist the­ater because the gov­ern­ing B.J.P. has been the chief ben­e­fi­cia­ry of divi­sive con­tent that reach­es mil­lions because of the way social media algo­rithms, espe­cial­ly Face­book, ampli­fy ‘engag­ing’ arti­cles. . . .”
  3. Rajesh Jain is among those BJP func­tionar­ies who serve Face­book, as well as the Hin­dut­va fas­cists: ” . . . . By the time Rajesh Jain was scal­ing up his oper­a­tions in 2013, the BJP’s infor­ma­tion tech­nol­o­gy (IT) strate­gists had begun inter­act­ing with social media plat­forms like Face­book and its part­ner What­sApp. If sup­port­ers of the BJP are to be believed, the par­ty was bet­ter than oth­ers in util­is­ing the micro-tar­get­ing poten­tial of the plat­forms. How­ev­er, it is also true that Facebook’s employ­ees in India con­duct­ed train­ing work­shops to help the mem­bers of the BJP’s IT cell. . . .”
  4. Dr. Hiren Joshi is anoth­er of the BJP oper­a­tives who is heav­i­ly involved with Face­book. ” . . . . Also assist­ing the social media and online teams to build a larg­er-than-life image for Modi before the 2014 elec­tions was a team led by his right-hand man Dr Hiren Joshi, who (as already stat­ed) is a very impor­tant advis­er to Modi whose writ extends way beyond infor­ma­tion tech­nol­o­gy and social media. . . .  Joshi has had, and con­tin­ues to have, a close and long-stand­ing asso­ci­a­tion with Facebook’s senior employ­ees in India. . . .”
  5. Shiv­nath Thukral, who was hired by Face­book in 2017 to be its Pub­lic Pol­i­cy Direc­tor for India & South Asia, worked with Joshi’s team in 2014.  ” . . . . The third team, that was intense­ly focused on build­ing Modi’s per­son­al image, was head­ed by Hiren Joshi him­self who worked out of the then Gujarat Chief Minister’s Office in Gand­hi­na­gar. The mem­bers of this team worked close­ly with staffers of Face­book in India, more than one of our sources told us. As will be detailed lat­er, Shiv­nath Thukral, who is cur­rent­ly an impor­tant exec­u­tive in Face­book, worked with this team. . . .”
  6. An ostensibly remorseful BJP politician–Prodyut Bora–highlighted the dramatic effect Facebook and its WhatsApp subsidiary have had on India's politics: " . . . . In 2009, social media platforms like Facebook and WhatsApp had a marginal impact in India's 20 big cities. By 2014, however, it had virtually replaced the traditional mass media. In 2019, it will be the most pervasive media in the country. . . ."
  7. A concise statement about the relationship between the BJP and Facebook was issued by BJP tech official Vinit Goenka: " . . . . At one stage in our interview with [Vinit] Goenka that lasted over two hours, we asked him a pointed question: 'Who helped whom more, Facebook or the BJP?' He smiled and said: 'That's a difficult question. I wonder whether the BJP helped Facebook more than Facebook helped the BJP. You could say, we helped each other.' . . ."

Cel­e­bra­tion of the 75th Anniver­sary of the 14th Waf­fen SS Divi­sion in Lviv, Ukraine

In Ukraine, as well, Face­book and the OUN/B suc­ces­sor orga­ni­za­tions func­tion sym­bi­ot­i­cal­ly:

(Note that the Atlantic Coun­cil is dom­i­nant in the array of indi­vid­u­als and insti­tu­tions con­sti­tut­ing the Ukrain­ian fascist/Facebook coop­er­a­tive effort. We have spo­ken about the Atlantic Coun­cil in numer­ous pro­grams, includ­ing FTR #943. The orga­ni­za­tion has deep oper­a­tional links to ele­ments of U.S. intel­li­gence, as well as the OUN/B milieu that dom­i­nates the Ukrain­ian dias­po­ra.)

CrowdStrike–at the epicenter of the supposed Russian hacking controversy–is noteworthy in this context. Its co-founder and chief technology officer, Dmitri Alperovitch, is a senior fellow at the Atlantic Council, which is financed by elements that are at the foundation of fanning the flames of the New Cold War: "In this respect, it is worth noting that one of the commercial cybersecurity companies the government has relied on is Crowdstrike, which was one of the companies initially brought in by the DNC to investigate the alleged hacks. . . . Dmitri Alperovitch is also a senior fellow at the Atlantic Council. . . . The connection between [Crowdstrike co-founder and chief technology officer Dmitri] Alperovitch and the Atlantic Council has gone largely unremarked upon, but it is relevant given that the Atlantic Council—which is funded in part by the US State Department, NATO, the governments of Latvia and Lithuania, the Ukrainian World Congress, and the Ukrainian oligarch Victor Pinchuk—has been among the loudest voices calling for a new Cold War with Russia. As I pointed out in the pages of The Nation in November, the Atlantic Council has spent the past several years producing some of the most virulent specimens of the new Cold War propaganda. . . ."

In May of 2018, Face­book decid­ed to effec­tive­ly out­source the work of iden­ti­fy­ing pro­pa­gan­da and mis­in­for­ma­tion dur­ing elec­tions to the Atlantic Coun­cil.

” . . . . Face­book is part­ner­ing with the Atlantic Coun­cil in anoth­er effort to com­bat elec­tion-relat­ed pro­pa­gan­da and mis­in­for­ma­tion from pro­lif­er­at­ing on its ser­vice. The social net­work­ing giant said Thurs­day that a part­ner­ship with the Wash­ing­ton D.C.-based think tank would help it bet­ter spot dis­in­for­ma­tion dur­ing upcom­ing world elec­tions. The part­ner­ship is one of a num­ber of steps Face­book is tak­ing to pre­vent the spread of pro­pa­gan­da and fake news after fail­ing to stop it from spread­ing on its ser­vice in the run up to the 2016 U.S. pres­i­den­tial elec­tion. . . .”

Since autumn 2018, Face­book has looked to hire a pub­lic pol­i­cy man­ag­er for Ukraine. The job came after years of Ukraini­ans crit­i­ciz­ing the plat­form for take­downs of its activists’ pages and the spread of [alleged] Russ­ian dis­in­fo tar­get­ing Kyiv. Now, it appears to have one: @Kateryna_Kruk.— Christo­pher Miller (@ChristopherJM) June 3, 2019

Oleh Tyahnybok, leader of the OUN/B successor organization Svoboda, for which Kateryna Kruk worked.

Katery­na Kruk:

  1. Is Facebook’s Pub­lic Pol­i­cy Man­ag­er for Ukraine as of May of this year, accord­ing to her LinkedIn page.
  2. Worked as an ana­lyst and TV host for the Ukrain­ian ‘anti-Russ­ian pro­pa­gan­da’ out­fit Stop­Fake. Stop­Fake is the cre­ation of Ire­na Chalu­pa, who works for the Atlantic Coun­cil and the Ukrain­ian gov­ern­ment and appears to be the sis­ter of Andrea and Alexan­dra Chalu­pa.
  3. Joined the “Krem­lin Watch” team at the Euro­pean Val­ues think-tank, in Octo­ber of 2017.
  4. Received the Atlantic Coun­cil’s Free­dom award for her com­mu­ni­ca­tions work dur­ing the Euro­maid­an protests in June of 2014.
  5. Worked for OUN/B successor organization Svoboda during the Euromaidan protests. " . . . 'There are people who don't support Svoboda because of some of their slogans, but they know it's the most active political party and go to them for help,' said Svoboda volunteer Kateryna Kruk. . . ." . . . .
  6. Also has a num­ber of arti­cles on the Atlantic Council’s Blog. Here’s a blog post from August of 2018 where she advo­cates for the cre­ation of an inde­pen­dent Ukrain­ian Ortho­dox Church to dimin­ish the influ­ence of the Russ­ian Ortho­dox Church.
  7. Accord­ing to her LinkedIn page has also done exten­sive work for the Ukrain­ian gov­ern­ment. From March 2016 to Jan­u­ary 2017 she was the Strate­gic Com­mu­ni­ca­tions Man­ag­er for the Ukrain­ian par­lia­ment where she was respon­si­ble for social media and inter­na­tion­al com­mu­ni­ca­tions. From Jan­u­ary-April 2017 she was the Head of Com­mu­ni­ca­tions at the Min­istry of Health.
  8. Was not only a volunteer for Svoboda during the 2014 Euromaidan protests, but also openly celebrated on Twitter the May 2014 massacre in Odessa, when the far right burned dozens of protestors alive. Kruk's Twitter feed is set to private now, so there isn't public access to her old tweet, but people have screen captures of it. Here's a tweet from Yasha Levine with a screenshot of Kruk's May 2, 2014 tweet where she writes: "#Odessa cleaned itself from terrorists, proud for city fighting for its identity.glory to fallen heroes.." She even threw in a "glory to fallen heroes" at the end of her tweet celebrating this massacre. Keep in mind that it was a month after this tweet that the Atlantic Council gave her that Freedom Award for her communications work during the protests.
  9. In 2014, . . .  tweet­ed that a man had asked her to con­vince his grand­son not to join the Azov Bat­tal­ion, a neo-Nazi mili­tia. “I couldn’t do it,” she said. “I thanked that boy and blessed him.” And he then trav­eled to Luhan­sk to fight pro-Russ­ian rebels.
  10. Lionized a Nazi sniper killed in Ukraine's civil war. In March 2018, a 19-year-old neo-Nazi named Andriy "Dilly" Krivich was shot and killed by a sniper. Krivich had been fighting with the fascist Ukrainian group Right Sector, and had posted photos on social media wearing Nazi German symbols. After he was killed, Kruk tweeted an homage to the teenage Nazi. (The Nazi was also lionized on Euromaidan Press' Facebook page.)
  11. Has staunchly defended the use of the slogan "Slava Ukraini," which was first coined and popularized by Nazi-collaborating fascists, and is now the official salute of Ukraine's army.
  12. Has also said that the Ukrain­ian fas­cist politi­cian Andriy Paru­biy, who co-found­ed a neo-Nazi par­ty before lat­er becom­ing the chair­man of Ukraine’s par­lia­ment the Rada, is “act­ing smart,” writ­ing, “Paru­biy touche.” . . . .

In the context of Facebook's institutional-level networking with fascists, it is worth noting that social media themselves have been cited as a contributing factor to right-wing domestic terrorism: " . . . The first is stochastic terrorism: 'The use of mass, public communication, usually against a particular individual or group, which incites or inspires acts of terrorism which are statistically probable but happen seemingly at random.' I encountered the idea in a Friday thread from data scientist Emily Gorcenski, who used it to tie together four recent attacks. . . ."

The pro­gram con­cludes with review (from FTR #1039) of the psy­cho­log­i­cal war­fare strat­e­gy adapt­ed by Cam­bridge Ana­lyt­i­ca to the polit­i­cal are­na. Christo­pher Wylie–the for­mer head of research at Cam­bridge Ana­lyt­i­ca who became one of the key insid­er whis­tle-blow­ers about how Cam­bridge Ana­lyt­i­ca oper­at­ed and the extent of Facebook’s knowl­edge about it–gave an inter­view to Cam­paign Mag­a­zine. (We dealt with Cam­bridge Ana­lyt­i­ca in FTR #‘s 946, 1021.) Wylie recounts how, as direc­tor of research at Cam­bridge Ana­lyt­i­ca, his orig­i­nal role was to deter­mine how the com­pa­ny could use the infor­ma­tion war­fare tech­niques used by SCL Group – Cam­bridge Analytica’s par­ent com­pa­ny and a defense con­trac­tor pro­vid­ing psy op ser­vices for the British mil­i­tary. Wylie’s job was to adapt the psy­cho­log­i­cal war­fare strate­gies that SCL had been using on the bat­tle­field to the online space. As Wylie put it:

“ . . . . When you are work­ing in infor­ma­tion oper­a­tions projects, where your tar­get is a com­bat­ant, the auton­o­my or agency of your tar­gets is not your pri­ma­ry con­sid­er­a­tion. It is fair game to deny and manip­u­late infor­ma­tion, coerce and exploit any men­tal vul­ner­a­bil­i­ties a per­son has, and to bring out the very worst char­ac­ter­is­tics in that per­son because they are an ene­my…But if you port that over to a demo­c­ra­t­ic sys­tem, if you run cam­paigns designed to under­mine people’s abil­i­ty to make free choic­es and to under­stand what is real and not real, you are under­min­ing democ­ra­cy and treat­ing vot­ers in the same way as you are treat­ing ter­ror­ists. . . . .”

Wylie also draws parallels between the psychological operations used on democratic audiences and the battlefield techniques used to build an insurgency.

1a. Following the sweeping victory of the BJP in India's elections, a victory that exceeded expectations, there is no shortage of questions about how the BJP managed such a resounding win despite what appeared to be growing popular frustrations with the party just six months ago. And while the embrace of nationalism and sectarianism no doubt played a major role, along with the tensions with Pakistan, it's also important to give credit to the profound role social media played in this year's elections. Specifically, organized social media disinformation campaigns run by the BJP:

“India Has a Pub­lic Health Cri­sis. It’s Called Fake News.” by Samir Patil; The New York Times; 04/29/2019.

In the con­tin­u­ing Indi­an elec­tions, as 900 mil­lion peo­ple are vot­ing to elect rep­re­sen­ta­tives to the low­er house of the Par­lia­ment, dis­in­for­ma­tion and hate speech are drown­ing out truth on social media net­works in the coun­try and cre­at­ing a pub­lic health cri­sis like the pan­demics of the past cen­tu­ry.

This con­ta­gion of a stag­ger­ing amount of mor­phed images, doc­tored videos and text mes­sages is spread­ing large­ly through mes­sag­ing ser­vices and influ­enc­ing what India’s vot­ers watch and read on their smart­phones. A recent study by Microsoft found that over 64 per­cent Indi­ans encoun­tered fake news online, the high­est report­ed among the 22 coun­tries sur­veyed.

India has the most social media users, with 300 mil­lion users on Face­book, 200 mil­lion on What­sApp and 250 mil­lion using YouTube. Tik­Tok, the video mes­sag­ing ser­vice owned by a Chi­nese com­pa­ny, has more than 88 mil­lion users in India. And there are Indi­an mes­sag­ing appli­ca­tions such as ShareChat, which claims to have 40 mil­lion users and allows them to com­mu­ni­cate in 14 Indi­an lan­guages.

These plat­forms are filled with fake news and dis­in­for­ma­tion aimed at influ­enc­ing polit­i­cal choic­es dur­ing the Indi­an elec­tions. Some of the egre­gious instances are a made-up BBC sur­vey pre­dict­ing vic­to­ry for the gov­ern­ing Bharatiya Jana­ta Par­ty and a fake video of the oppo­si­tion Con­gress Par­ty pres­i­dent, Rahul Gand­hi, say­ing a machine can con­vert pota­toes into gold.

Fake sto­ries are spread by legions of online trolls and unsus­pect­ing users, with dan­ger­ous impact. A rumor spread through social media about child kid­nap­pers arriv­ing in var­i­ous parts of India has led to 33 deaths in 69 inci­dents of mob vio­lence since 2017, accord­ing to Indi­aSpend, a data jour­nal­ism web­site.

Six months before the 2014 gen­er­al elec­tions in India, 62 peo­ple were killed in sec­tar­i­an vio­lence and 50,000 were dis­placed from their homes in the north­ern state of Uttar Pradesh. Inves­ti­ga­tions by the police found that a fake video was shared on What­sApp to whip up sec­tar­i­an pas­sions.

In the lead-up to the elec­tions, the Indi­an gov­ern­ment sum­moned the top exec­u­tives of Face­book and Twit­ter to dis­cuss the cri­sis of coor­di­nat­ed mis­in­for­ma­tion, fake news and polit­i­cal bias on their plat­forms. In March, Joel Kaplan, Facebook’s glob­al vice pres­i­dent for pub­lic pol­i­cy, was called to appear before a com­mit­tee of 31 mem­bers of the Indi­an Par­lia­ment — who were most­ly from the rul­ing Bharatiya Jana­ta Par­ty — to dis­cuss “safe­guard­ing cit­i­zens’ rights on social/online news media plat­forms.”

The hear­ing was an exer­cise in absur­dist the­ater because the gov­ern­ing B.J.P. has been the chief ben­e­fi­cia­ry of divi­sive con­tent that reach­es mil­lions because of the way social media algo­rithms, espe­cial­ly Face­book, ampli­fy “engag­ing” arti­cles.

As else­where in the world, Face­book, Twit­ter and YouTube are ambiva­lent about tack­ling the prob­lem head-on for the fear of mak­ing deci­sions that invoke the wrath of nation­al polit­i­cal forces. The tightrope walk was evi­dent when in April, Face­book announced a ban on about 1,000 fake news pages tar­get­ing India. They includ­ed pages direct­ly asso­ci­at­ed with polit­i­cal par­ties.

Face­book announced that a major­i­ty of the pages were asso­ci­at­ed with the oppo­si­tion Indi­an Nation­al Con­gress par­ty, but it mere­ly named the tech­nol­o­gy com­pa­ny asso­ci­at­ed with the gov­ern­ing B.J.P. pages. Many news reports lat­er point­ed out that the pages relat­ed to the B.J.P. that were removed were far more con­se­quen­tial and reached mil­lions.

Ask­ing the social media plat­forms to fix the cri­sis is a deeply flawed approach because most of the dis­in­for­ma­tion is shared in a decen­tral­ized man­ner through mes­sag­ing. Seek­ing to mon­i­tor those mes­sages is a step toward accept­ing mass sur­veil­lance. The Indi­an gov­ern­ment loves the idea and has pro­posed laws that, among oth­er things, would break end-to-end encryp­tion and obtain user data with­out a court order. 

The idea of more effec­tive fact-check­ing has come up often in the debates around India’s dis­in­for­ma­tion con­ta­gion. But it comes with many con­cep­tu­al dif­fi­cul­ties: A large pro­por­tion of mes­sages shared on social net­works in India have lit­tle to do with ver­i­fi­able facts and ped­dle prej­u­diced opin­ions. Face­book India has a small 11- to 22-mem­ber fact-check­ing team for con­tent relat­ed to Indi­an elec­tions.

Fake news is not a tech­no­log­i­cal or sci­en­tif­ic prob­lem with a quick fix. It should be treat­ed as a new kind of pub­lic health cri­sis in all its social and human com­plex­i­ty. The answer might lie in look­ing back at how we respond­ed to the epi­demics, the infec­tious dis­eases in the 19th and ear­ly 20th cen­turies, which have sim­i­lar char­ac­ter­is­tics. . . .

1b. As the following article notes, the farce of the BJP government asking Facebook to help with the disinformation crisis is compounded by the fact that Facebook had previously conducted training workshops to help the BJP use Facebook more effectively. The article describes the four IT-cell teams the BJP set up for the 2014 election to build a larger-than-life image for Modi.

One of those teams was run by Modi's right-hand man Dr Hiren Joshi. Joshi has had, and continues to have, a close and long-standing association with Facebook's senior employees in India, according to the article. His team worked closely with Facebook's staff. Shivnath Thukral, who was hired by Facebook in 2017 to be its Public Policy Director for India & South Asia, worked with this team in 2014. And that's just an overview of how tightly Facebook was working with the BJP in 2014:

“Meet the advi­sors who helped make the BJP a social media pow­er­house of data and pro­pa­gan­da” by Cyril Sam & Paran­joy Guha Thakur­ta; Scroll.in; 05/06/2019.

By the time Rajesh Jain was scal­ing up his oper­a­tions in 2013, the BJP’s infor­ma­tion tech­nol­o­gy (IT) strate­gists had begun inter­act­ing with social media plat­forms like Face­book and its part­ner What­sApp. If sup­port­ers of the BJP are to be believed, the par­ty was bet­ter than oth­ers in util­is­ing the micro-tar­get­ing poten­tial of the plat­forms. How­ev­er, it is also true that Facebook’s employ­ees in India con­duct­ed train­ing work­shops to help the mem­bers of the BJP’s IT cell.

Help­ing par­ty func­tionar­ies were adver­tis­ing hon­chos like Sajan Raj Kurup, founder of Cre­ative­land Asia and Prahlad Kakkar, the well-known adver­tis­ing pro­fes­sion­al. Actor Anu­pam Kher became the pub­lic face of some of the adver­tis­ing cam­paigns. Also assist­ing the social media and online teams to build a larg­er-than-life image for Modi before the 2014 elec­tions was a team led by his right-hand man Dr Hiren Joshi, who (as already stat­ed) is a very impor­tant advis­er to Modi whose writ extends way beyond infor­ma­tion tech­nol­o­gy and social media.

Cur­rent­ly, Offi­cer On Spe­cial Duty in the Prime Minister’s Office, he is assist­ed by two young pro­fes­sion­al “techies,” Nirav Shah and Yash Rajiv Gand­hi. Joshi has had, and con­tin­ues to have, a close and long-stand­ing asso­ci­a­tion with Facebook’s senior employ­ees in India. In 2013, one of his impor­tant col­lab­o­ra­tors was Akhilesh Mishra who lat­er went on to serve as a direc­tor of the Indi­an government’s web­site, MyGov India – which is at present led by Arvind Gup­ta who was ear­li­er head of the BJP’s IT cell.

Mishra is CEO of Bluekraft Dig­i­tal Foun­da­tion. The Foun­da­tion has been linked to a dis­in­for­ma­tion web­site titled “The True Pic­ture,” has pub­lished books authored by Prime Min­is­ter Naren­dra Modi and pro­duces cam­paign videos for NaMo Tele­vi­sion, a 24 hour cable tele­vi­sion chan­nel ded­i­cat­ed to pro­mot­ing Modi.

The 2014 Modi pre-election campaign was inspired by the 2012 campaign to elect Barack Obama as the "world's first Facebook President." Some of the managers of the Modi campaign like Jain were apparently inspired by Sasha Issenberg's book on the topic, The Victory Lab: The Secret Science of Winning Campaigns. In the first data-led election in India in 2014, information was collected from every possible source to not just micro-target users but also fine-tune messages praising and "mythologising" Modi as the Great Leader who would usher in acche din for the country.

Four teams spear­head­ed the cam­paign. The first team was led by Mum­bai-based Jain who fund­ed part of the com­mu­ni­ca­tion cam­paign and also over­saw vot­er data analy­sis. He was helped by Shashi Shekhar Vem­pati in run­ning NITI and “Mis­sion 272+.” As already men­tioned, Shekhar had worked in Infos­ys and is at present the head of Prasar Bharati Cor­po­ra­tion which runs Door­dar­shan and All India Radio.

The sec­ond team was led by polit­i­cal strate­gist Prashant Kishor and his I‑PAC or Indi­an Polit­i­cal Action Com­mit­tee who super­vised the three-dimen­sion­al pro­jec­tion pro­gramme for Modi besides pro­grammes like Run for Uni­ty, Chai Pe Char­cha (or Dis­cus­sions Over Tea), Man­than (or Churn­ing) and Cit­i­zens for Account­able Gov­er­nance (CAG) that roped in man­age­ment grad­u­ates to gar­ner sup­port for Modi at large gath­er­ings. Hav­ing worked across the polit­i­cal spec­trum and oppor­tunis­ti­cal­ly switched affil­i­a­tion to those who backed (and paid) him, 41-year-old Kishor is cur­rent­ly the sec­ond-in-com­mand in Jana­ta Dal (Unit­ed) head­ed by Bihar Chief Min­is­ter Nitish Kumar.

The third team, that was intense­ly focused on build­ing Modi’s per­son­al image, was head­ed by Hiren Joshi him­self who worked out of the then Gujarat Chief Minister’s Office in Gand­hi­na­gar. The mem­bers of this team worked close­ly with staffers of Face­book in India, more than one of our sources told us. As will be detailed lat­er, Shiv­nath Thukral, who is cur­rent­ly an impor­tant exec­u­tive in Face­book, worked with this team. (We made a num­ber of tele­phone calls to Joshi’s office in New Delhi’s South Block seek­ing a meet­ing with him and also sent him an e‑mail mes­sage request­ing an inter­view but he did not respond.)

The fourth team was led by Arvind Gup­ta, the cur­rent CEO of MyGov.in, a social media plat­form run by the gov­ern­ment of India. He ran the BJP’s cam­paign based out of New Del­hi. When con­tact­ed, he too declined to speak on the record say­ing he is now with the gov­ern­ment and not a rep­re­sen­ta­tive of the BJP. He sug­gest­ed we con­tact Amit Malviya who is the present head of the BJP’s IT cell. He came on the line but declined to speak specif­i­cal­ly on the BJP’s rela­tion­ship with Face­book and What­sApp.

The four teams worked sep­a­rate­ly. “It was (like) a relay (race),” said Vinit Goen­ka who was then the nation­al co-con­ven­er of the BJP’s IT cell, adding: “The only knowl­edge that was shared (among the teams) was on a ‘need to know’ basis. That’s how any sen­si­ble organ­i­sa­tion works.”

From all accounts, Rajesh Jain worked inde­pen­dent­ly from his Low­er Par­el office and invest­ed his own funds to sup­port Modi and towards exe­cut­ing what he described as “Project 275 for 2014” in a blog post that he wrote in June 2011, near­ly three years before the elec­tions actu­al­ly took place. The BJP, of course, went on to win 282 seats in the 2014 Lok Sab­ha elec­tions, ten above the half-way mark, with a lit­tle over 31 per cent of the vote.

As an aside, it may be men­tioned in pass­ing that – like cer­tain for­mer bhak­ts or fol­low­ers of Modi – Jain today appears less than enthu­si­as­tic about the per­for­mance of the gov­ern­ment over the last four and a half years. He is cur­rent­ly engaged in pro­mot­ing a cam­paign called Dhan Vapasi (or “return our wealth”) which is aimed at mon­etis­ing sur­plus land and oth­er assets held by gov­ern­ment bod­ies, includ­ing defence estab­lish­ments, and pub­lic sec­tor under­tak­ings, for the ben­e­fit of the poor and the under­priv­i­leged. Dhan Vapasi, in his words, is all about mak­ing “every Indi­an rich and free.”

In one of his recent videos that are in the pub­lic domain, Jain remarked: “For the 2014 elec­tions, I had spent three years and my own mon­ey to build a team of 100 peo­ple to help with Modi’s cam­paign. Why? Because I trust­ed that a Modi-led BJP gov­ern­ment could end the Con­gress’ anti-pros­per­i­ty pro­grammes and put India on a path to pros­per­i­ty, a nayi disha (or new direc­tion). But four years have gone by with­out any sig­nif­i­cant change in pol­i­cy. India need­ed that to elim­i­nate the big and hame­sha (peren­ni­al) prob­lems of pover­ty, unem­ploy­ment and cor­rup­tion. The Modi-led BJP gov­ern­ment fol­lowed the same old failed pol­i­cy of increas­ing tax­es and spend­ing. The ruler changed, but the out­comes have not.”

As men­tioned, when we con­tact­ed 51-year-old Jain, who heads the Mum­bai-based Net­core group of com­pa­nies, said to be India’s biggest dig­i­tal media mar­ket­ing cor­po­rate group, he declined to be inter­viewed. Inci­den­tal­ly, he had till Octo­ber 2017 served on the boards of direc­tors of two promi­nent pub­lic sec­tor com­pa­nies. One was Nation­al Ther­mal Pow­er Cor­po­ra­tion (NTPC) – Jain has no expe­ri­ence in the pow­er sec­tor, just as Sam­bit Patra, BJP spokesper­son, who is an “inde­pen­dent” direc­tor on the board of the Oil and Nat­ur­al Gas Cor­po­ra­tion, has zero expe­ri­ence in the petro­le­um indus­try. Jain also served on the board of the Unique Iden­ti­fi­ca­tion Author­i­ty of India (UIDAI), which runs the Aad­har pro­gramme.

Unlike Jain who was not at all forth­com­ing, 44-year-old Prodyut Bora, founder of the BJP’s IT cell in 2007 (bare­ly a year after Face­book and Twit­ter had been launched) was far from ret­i­cent while speak­ing to us. He had resigned from the party’s nation­al exec­u­tive in Feb­ru­ary 2015 after ques­tion­ing Modi and Amit Shah’s “high­ly indi­vid­u­alised and cen­tralised style of deci­sion-mak­ing” that had led to the “sub­ver­sion of demo­c­ra­t­ic tra­di­tions” in the gov­ern­ment and in the par­ty.

Bora recalled how he was one of the first grad­u­ates from the lead­ing busi­ness school, the Indi­an Insti­tute of Man­age­ment, Ahmed­abad, to join the BJP because of his great admi­ra­tion for the then Prime Min­is­ter Atal Behari Vaj­pay­ee. It was at the behest of the then par­ty pres­i­dent Raj­nath Singh (who is now Union Home Min­is­ter) that he set up the party’s IT cell to enable its lead­ers to come clos­er to, and inter­act with, their sup­port­ers.

The cell, he told us, was cre­at­ed not with a man­date to abuse peo­ple on social media plat­forms. He lament­ed that “mad­ness” has now gripped the BJP and the desire to win elec­tions at any cost has “destroyed the very ethos” of the par­ty he was once a part of. Today, the Gur­gaon-based Bora runs a firm mak­ing air purifi­ca­tion equip­ment and is involved with an inde­pen­dent polit­i­cal par­ty in his home state, Assam.

He told us: “The process of being eco­nom­i­cal with the truth (in the BJP) began in 2014. The (elec­tion) cam­paign was send­ing out unver­i­fied facts, infomer­cials, memes, dodgy data and graphs. From there, fake news was one step up the curve. Lead­ers of polit­i­cal par­ties, includ­ing the BJP, like to out­source this work because they don’t want to leave behind dig­i­tal foot­prints. In 2009, social media plat­forms like Face­book and What­sApp had a mar­gin­al impact in India’s 20 big cities. By 2014, how­ev­er, it had vir­tu­al­ly replaced the tra­di­tion­al mass media. In 2019, it will be the most per­va­sive media in the coun­try.” . . . .

. . . . At one stage in our inter­view with [Vinit] Goen­ka that last­ed over two hours, we asked him a point­ed ques­tion: “Who helped whom more, Face­book or the BJP?”

He smiled and said: “That’s a dif­fi­cult ques­tion. I won­der whether the BJP helped Face­book more than Face­book helped the BJP. You could say, we helped each oth­er.”

1c. According to Christopher Miller of RFE/RL, Facebook selected Kateryna Kruk for the position:

Since autumn 2018, Face­book has looked to hire a pub­lic pol­i­cy man­ag­er for Ukraine. The job came after years of Ukraini­ans crit­i­ciz­ing the plat­form for take­downs of its activists’ pages and the spread of Russ­ian dis­in­fo tar­get­ing Kyiv. Now, it appears to have one: @Kateryna_Kruk.— Christo­pher Miller (@ChristopherJM) June 3, 2019

Kruk’s LinkedIn page also lists her as being Facebook’s Pub­lic Pol­i­cy Man­ag­er for Ukraine as of May of this year.

Kruk  worked as an ana­lyst and TV host for the Ukrain­ian ‘anti-Russ­ian pro­pa­gan­da’ out­fit Stop­Fake. Stop­Fake is the cre­ation of Ire­na Chalu­pa, who works for the Atlantic Coun­cil and the Ukrain­ian gov­ern­ment and appears to be the sis­ter of Andrea and Alexan­dra Chalu­pa.

(As an exam­ple of how StopFake.org approach­es Ukraine’s far right, here’s a tweet from StopFake’s co-founder, Yevhen Fed­chenko, from May of 2018 where he com­plains about an arti­cle in Hro­madske Inter­na­tion­al that char­ac­ter­izes C14 as a neo-Nazi group:

“for Hro­madske C14 is ‘neo- nazi’, in real­i­ty one of them – Olek­san­dr Voitko – is a war vet­er­an and before going to the war – alum and fac­ul­ty at @MohylaJSchool, jour­nal­ist at For­eign news desk at Chan­nel 5. Now also active par­tic­i­pant of war vet­er­ans grass-root orga­ni­za­tion. https://t.co/QmaGnu6QGZ— Yevhen Fed­chenko (@yevhenfedchenko) May 5, 2018)

In October of 2017, Kruk joined the "Kremlin Watch" team at the European Values think-tank. In June of 2014, the Atlantic Council gave Kruk its Freedom award for her communications work during the Euromaidan protests. Kruk also has a number of articles on the Atlantic Council's Blog. Here's a blog post from August of 2018 where she advocates for the creation of an independent Ukrainian Orthodox Church to diminish the influence of the Russian Orthodox Church. Keep in mind that, in May of 2018, Facebook decided to effectively outsource the work of identifying propaganda and misinformation during elections to the Atlantic Council, so choosing someone like Kruk who already has the Atlantic Council's stamp of approval is in keeping with that trend.

Accord­ing to Kruk’s LinkedIn page she’s also done exten­sive work for the Ukrain­ian gov­ern­ment. From March 2016 to Jan­u­ary 2017 she was the Strate­gic Com­mu­ni­ca­tions Man­ag­er for the Ukrain­ian par­lia­ment where she was respon­si­ble for social media and inter­na­tion­al com­mu­ni­ca­tions. From Jan­u­ary-April 2017 she was the Head of Com­mu­ni­ca­tions at the Min­istry of Health.

Kruk not only was a vol­un­teer for Svo­bo­da dur­ing the 2014 Euro­maid­an protests, she also open­ly cel­e­brat­ed on twit­ter the May 2014 mas­sacre in Odessa when the far right burned dozens of pro­tes­tors alive. Kruk’s twit­ter feed is set to pri­vate now so there isn’t pub­lic access to her old tweet, but peo­ple have screen cap­tures of it. Here’s a tweet from Yasha Levine with a screen­shot of Kruk’s May 2, 2014 tweet where she writes:
“#Odessa cleaned itself from ter­ror­ists, proud for city fight­ing for its identity.glory to fall­en heroes..”

She even threw in a "glory to fallen heroes" at the end of her tweet celebrating this massacre. Keep in mind that it was a month after this tweet that the Atlantic Council gave her that Freedom Award for her communications work during the protests.

An article from January of 2014 covers the then-ongoing Maidan square protests, the growing presence of the far right in them, and the far right's attacks on left-wing protestors. Kruk is interviewed in the article and describes herself as a Svoboda volunteer. She issued her tweet celebrating the Odessa massacre a few months later. The interview also stands out from a public relations standpoint: Kruk was making the case for why average Ukrainians who don't necessarily support the far right should support it at that moment, one of the most useful messages she could have been sending for the far right at that time:

“The Ukrain­ian Nation­al­ism at the Heart of ‘Euro­maid­an’” by Alec Luhn; The Nation; 01/21/2014.

. . . . For now, Svoboda and other far-right movements like Right Sector are focusing on the protest-wide demands for civic freedoms and government accountability rather than overtly nationalist agendas. Svoboda enjoys a reputation as a party of action, responsive to citizens' problems. Noyevy cut an interview with The Nation short to help local residents who came with a complaint that a developer was tearing down a fence without permission.

“There are peo­ple who don’t sup­port Svo­bo­da because of some of their slo­gans, but they know it’s the most active polit­i­cal par­ty and go to them for help,” said Svo­bo­da vol­un­teer Katery­na Kruk. “Only Svo­bo­da is help­ing against land seizures in Kiev.” . . . .

1d. Kruk has man­i­fest­ed oth­er fas­cist sym­pa­thies and con­nec­tions:

  1. In 2014, she tweet­ed that a man had asked her to con­vince his grand­son not to join the Azov Bat­tal­ion, a neo-Nazi mili­tia. “I couldn’t do it,” she said. “I thanked that boy and blessed him.” And he then trav­eled to Luhan­sk to fight pro-Russ­ian rebels.
  2. Nazi sniper Dil­ly Krivich, posthu­mous­ly lion­ized by Katery­na Kruk

    In March 2018, a 19-year-old neo-Nazi named Andriy "Dilly" Krivich was shot and killed by a sniper. Krivich had been fighting with the fascist Ukrainian group Right Sector, and had posted photos on social media wearing Nazi German symbols. After he was killed, Kruk tweeted an homage to the teenage Nazi. (The Nazi was also lionized on Euromaidan Press' Facebook page.)

  3. Kruk has staunchly defended the use of the slogan "Slava Ukraini," which was first coined and popularized by Nazi-collaborating fascists, and is now the official salute of Ukraine's army.
  4. She has also said that the Ukrain­ian fas­cist politi­cian Andriy Paru­biy, who co-found­ed a neo-Nazi par­ty before lat­er becom­ing the chair­man of Ukraine’s par­lia­ment the Rada, is “act­ing smart,” writ­ing, “Paru­biy touche.” . . . .

“Facebook’s New Pub­lic Pol­i­cy Man­ag­er Is Nation­al­ist Hawk Who Vol­un­teered with Fas­cist Par­ty Dur­ing US-Backed Coup” by Ben Nor­ton; The Gray Zone; 6/4/2019.

. . . . Svo­bo­da is not the only Ukrain­ian fas­cist group Katery­na Kruk has expressed sup­port for. In 2014, she tweet­ed that a man had asked her to con­vince his grand­son not to join the Azov Bat­tal­ion, a neo-Nazi mili­tia. “I couldn’t do it,” she said. “I thanked that boy and blessed him.” And he then trav­eled to Luhan­sk to fight pro-Russ­ian rebels.

That's not all. In March 2018, a 19-year-old neo-Nazi named Andriy "Dilly" Krivich was shot and killed by a sniper. Krivich had been fighting with the fascist Ukrainian group Right Sector, and had posted photos on social media wearing Nazi German symbols. After he was killed, Kruk tweeted an homage to the teenage Nazi. (The Nazi was also lionized on Euromaidan Press' Facebook page.)

Kruk has staunchly defended the use of the slogan "Slava Ukraini," which was first coined and popularized by Nazi-collaborating fascists, and is now the official salute of Ukraine's army.

She has also said that the Ukrain­ian fas­cist politi­cian Andriy Paru­biy, who co-found­ed a neo-Nazi par­ty before lat­er becom­ing the chair­man of Ukraine’s par­lia­ment the Rada, is “act­ing smart,” writ­ing, “Paru­biy touche.” . . . .

2. The essence of the book Ser­pen­t’s Walk  is pre­sent­ed on the back cov­er:

Ser­pen­t’s Walk by “Ran­dolph D. Calver­hall;” Copy­right 1991 [SC]; Nation­al Van­guard Books; 0–937944-05‑X.

It assumes that Hitler’s war­rior elite — the SS — did­n’t give up their strug­gle for a White world when they lost the Sec­ond World War. Instead their sur­vivors went under­ground and adopt­ed some of the tac­tics of their ene­mies: they began build­ing their eco­nom­ic mus­cle and buy­ing into the opin­ion-form­ing media. A cen­tu­ry after the war they are ready to chal­lenge the democ­rats and Jews for the hearts and minds of White Amer­i­cans, who have begun to have their fill of gov­ern­ment-enforced mul­ti-cul­tur­al­ism and ‘equal­i­ty.’

3. This process is described in more detail in a pas­sage of text, con­sist­ing of a dis­cus­sion between Wrench (a mem­ber of this Under­ground Reich) and a mer­ce­nary named Less­ing.

Ser­pen­t’s Walk by “Ran­dolph D. Calver­hall;” Copy­right 1991 [SC]; Nation­al Van­guard Books; 0–937944-05‑X; pp. 42–43.

. . . . The SS . . . what was left of it . . . had business objectives before and during World War II. When the war was lost they just kept on, but from other places: Bogota, Asuncion, Buenos Aires, Rio de Janeiro, Mexico City, Colombo, Damascus, Dacca . . . you name it. They realized that the world is heading towards a 'corporacracy;' five or ten international super-companies that will run everything worth running by the year 2100. Those super-corporations exist now, and they're already dividing up the production and marketing of food, transport, steel and heavy industry, oil, the media, and other commodities. They're mostly conglomerates, with fingers in more than one pie . . . . We, the SS, have the say in four or five. We've been competing for the past sixty years or so, and we're slowly gaining . . . . About ten years ago, we swung a merger, a takeover, and got voting control of a supercorp that runs a small but significant chunk of the American media. Not openly, not with bands and trumpets or swastikas flying, but quietly: one huge corporation cuddling up to another one and gently munching it up, like a great, gubbing amoeba. Since then we've been replacing executives, pushing somebody out here, bringing somebody else in there. We've swung program content around, too. Not much, but a little, so it won't show. We've cut down on 'nasty-Nazi' movies . . . good guys in white hats and bad guys in black SS hats . . . lovable Jews versus fiendish Germans . . . and we have media psychologists, ad agencies, and behavior modification specialists working on image changes. . . .

4. The broadcast addresses the gradual remaking of the image of the Third Reich that is represented in Serpent's Walk. In the continuation of the discussion excerpted above, this process is further described.

Ser­pen­t’s Walk by “Ran­dolph D. Calver­hall;” Copy­right 1991 [SC]; Nation­al Van­guard Books; 0–937944-05‑X; pp. 42–44.

. . . . Hell, if you can con granny into buy­ing Sug­ar Turds instead of Bran Farts, then why can’t you swing pub­lic opin­ion over to a cause as vital and impor­tant as ours?’ . . . In any case, we’re slow­ly replac­ing those neg­a­tive images with oth­ers: the ‘Good Bad Guy’ rou­tine’ . . . ‘What do you think of Jesse James? John Dillinger? Julius Cae­sar? Genghis Khan?’ . . . The real­i­ty may have been rough, but there’s a sort of glit­ter about most of those dudes: mean hon­chos but respectable. It’s all how you pack­age it. Opin­ion is a godamned com­mod­i­ty!’ . . . It works with any­body . . . Give it time. Aside from the media, we’ve been buy­ing up pri­vate schools . . . and help­ing some pub­lic ones through phil­an­thropic foun­da­tions . . . and work­ing on the church­es and the Born Agains. . . .

5. Through the years, we have high­light­ed the Nazi tract Ser­pen­t’s Walk, excerpt­ed above, which deals, in part, with the reha­bil­i­ta­tion of the Third Reich’s rep­u­ta­tion and the trans­for­ma­tion of Hitler into a hero.

In FTR #1015, we not­ed that a Ser­pen­t’s Walk sce­nario is indeed unfold­ing in India.

Key points of analy­sis and dis­cus­sion include:

  1. Narendra Modi's presence on the same book cover (along with Gandhi, Mandela, Obama and Hitler).
  2. Modi him­self has his own polit­i­cal his­to­ry with children’s books that pro­mote Hitler as a great leader: ” . . . . In 2004, reports sur­faced of high-school text­books in the state of Gujarat, which was then led by Mr. Modi, that spoke glow­ing­ly of Nazism and fas­cism. Accord­ing to ‘The Times of India,’ in a sec­tion called ‘Ide­ol­o­gy of Nazism,’ the text­book said Hitler had ‘lent dig­ni­ty and pres­tige to the Ger­man gov­ern­ment,’ ‘made untir­ing efforts to make Ger­many self-reliant’ and ‘instilled the spir­it of adven­ture in the com­mon peo­ple.’  . . . .”
  3. In India, many have a favor­able view of Hitler: ” . . . . as far back as 2002, the Times of India report­ed a sur­vey that found that 17 per­cent of stu­dents in elite Indi­an col­leges ‘favored Adolf Hitler as the kind of leader India ought to have.’ . . . . Con­sid­er Mein Kampf, Hitler’s auto­bi­og­ra­phy. Reviled it might be in the much of the world, but Indi­ans buy thou­sands of copies of it every month. As a recent paper in the jour­nal EPW tells us (PDF), there are over a dozen Indi­an pub­lish­ers who have edi­tions of the book on the mar­ket. Jaico, for exam­ple, print­ed its 55th edi­tion in 2010, claim­ing to have sold 100,000 copies in the pre­vi­ous sev­en years. (Con­trast this to the 3,000 copies my own 2009 book, Road­run­ner, has sold). In a coun­try where 10,000 copies sold makes a book a best­seller, these are sig­nif­i­cant num­bers. . . .”
  4. A class­room of school chil­dren filled with fans of Hitler had a very dif­fer­ent sen­ti­ment about Gand­hi. ” . . . . ‘He’s a cow­ard!’ That’s the obvi­ous flip side of this love of Hitler in India. It’s an implic­it rejec­tion of Gand­hi. . . .”
  5. Apparently, Mein Kampf has achieved gravitas among business students in India: " . . . . What's more, there's a steady trickle of reports that say it has become a must-read for business-school students; a management guide much like Spencer Johnson's Who Moved My Cheese or Edward de Bono's Lateral Thinking. If this undistinguished artist could take an entire country with him, I imagine the reasoning goes, surely his book has some lessons for future captains of industry? . . . ."

6. Christo­pher Wylie–the for­mer head of research at Cam­bridge Ana­lyt­i­ca who became one of the key insid­er whis­tle-blow­ers about how Cam­bridge Ana­lyt­i­ca oper­at­ed and the extent of Facebook’s knowl­edge about it–gave an inter­view last month to Cam­paign Mag­a­zine. (We dealt with Cam­bridge Ana­lyt­i­ca in FTR #‘s 946, 1021.)

Wylie recounts how, as direc­tor of research at Cam­bridge Ana­lyt­i­ca, his orig­i­nal role was to deter­mine how the com­pa­ny could use the infor­ma­tion war­fare tech­niques used by SCL Group – Cam­bridge Analytica’s par­ent com­pa­ny and a defense con­trac­tor pro­vid­ing psy op ser­vices for the British mil­i­tary. Wylie’s job was to adapt the psy­cho­log­i­cal war­fare strate­gies that SCL had been using on the bat­tle­field to the online space. As Wylie put it:

“ . . . . When you are work­ing in infor­ma­tion oper­a­tions projects, where your tar­get is a com­bat­ant, the auton­o­my or agency of your tar­gets is not your pri­ma­ry con­sid­er­a­tion. It is fair game to deny and manip­u­late infor­ma­tion, coerce and exploit any men­tal vul­ner­a­bil­i­ties a per­son has, and to bring out the very worst char­ac­ter­is­tics in that per­son because they are an ene­my…But if you port that over to a demo­c­ra­t­ic sys­tem, if you run cam­paigns designed to under­mine people’s abil­i­ty to make free choic­es and to under­stand what is real and not real, you are under­min­ing democ­ra­cy and treat­ing vot­ers in the same way as you are treat­ing ter­ror­ists. . . . .”

Wylie also draws parallels between the psychological operations used on democratic audiences and the battlefield techniques used to build an insurgency. It starts with targeting people more prone to having erratic traits, paranoia or conspiratorial thinking, and getting them to "like" a group on social media. The information you're feeding this target audience may or may not be real. The important thing is that it's content that they already agree with, so that "it feels good to see that information." Keep in mind that one of the goals of the "psychographic profiling" that Cambridge Analytica engaged in was to identify traits like neuroticism.

Wylie goes on to describe the next step in this insurgency-building technique: keep building up the interest in the social media group that you're directing this target audience towards until it hits around 1,000–2,000 people. Then set up a real-life event dedicated to the chosen disinformation topic in some local area and try to get as many of your target audience as possible to show up. Even if only 5 percent of them show up, that's still 50–100 people converging on some local coffee shop or whatever. The people meet each other in real life and start talking about "all these things that you've been seeing online in the depths of your den and getting angry about". This target audience starts believing that no one else is talking about this stuff because "they don't want you to know what the truth is". As Wylie puts it, "What started out as a fantasy online gets ported into the temporal world and becomes real to you because you see all these people around you."

“Cam­bridge Ana­lyt­i­ca whistle­blow­er Christo­pher Wylie: It’s time to save cre­ativ­i­ty” by Kate Magee; Cam­paign; 11/05/2018.

In the ear­ly hours of 17 March 2018, the 28-year-old Christo­pher Wylie tweet­ed: “Here we go….”

Lat­er that day, The Observ­er and The New York Times pub­lished the sto­ry of Cam­bridge Analytica’s mis­use of Face­book data, which sent shock­waves around the world, caused mil­lions to #Delete­Face­book, and led the UK Infor­ma­tion Commissioner’s Office to fine the site the max­i­mum penal­ty for fail­ing to pro­tect users’ infor­ma­tion. Six weeks after the sto­ry broke, Cam­bridge Ana­lyt­i­ca closed. . . .

. . . . He believes that poor use of data is killing good ideas. And that, unless effec­tive reg­u­la­tion is enact­ed, society’s wor­ship of algo­rithms, unchecked data cap­ture and use, and the like­ly spread of AI to all parts of our lives is caus­ing us to sleep­walk into a bleak future.

Not only are such cir­cum­stances a threat to adland – why do you need an ad to tell you about a prod­uct if an algo­rithm is choos­ing it for you? – it is a threat to human free will. “Cur­rent­ly, the only moral­i­ty of the algo­rithm is to opti­mise you as a con­sumer and, in many cas­es, you become the prod­uct. There are very few exam­ples in human his­to­ry of indus­tries where peo­ple them­selves become prod­ucts and those are scary indus­tries – slav­ery and the sex trade. And now, we have social media,” Wylie says.

“The prob­lem with that, and what makes it inher­ent­ly dif­fer­ent to sell­ing, say, tooth­paste, is that you’re sell­ing parts of peo­ple or access to peo­ple. Peo­ple have an innate moral worth. If we don’t respect that, we can cre­ate indus­tries that do ter­ri­ble things to peo­ple. We are [head­ing] blind­ly and quick­ly into an envi­ron­ment where this men­tal­i­ty is going to be ampli­fied through AI every­where. We’re humans, we should be think­ing about peo­ple first.”

His words car­ry weight, because he’s been on the dark side. He has seen what can hap­pen when data is used to spread mis­in­for­ma­tion, cre­ate insur­gen­cies and prey on the worst of people’s char­ac­ters.

The polit­i­cal bat­tle­field

A quick refresh­er on the scan­dal, in Wylie’s words: Cam­bridge Ana­lyt­i­ca was a com­pa­ny spun out of SCL Group, a British mil­i­tary con­trac­tor that worked in infor­ma­tion oper­a­tions for armed forces around the world. It was con­duct­ing research on how to scale and digi­tise infor­ma­tion war­fare – the use of infor­ma­tion to con­fuse or degrade the effi­ca­cy of an ene­my. . . .

. . . . As direc­tor of research, Wylie’s orig­i­nal role was to map out how the com­pa­ny would take tra­di­tion­al infor­ma­tion oper­a­tions tac­tics into the online space – in par­tic­u­lar, by pro­fil­ing peo­ple who would be sus­cep­ti­ble to cer­tain mes­sag­ing.

This mor­phed into the polit­i­cal are­na. After Wylie left, the com­pa­ny worked on Don­ald Trump’s US pres­i­den­tial cam­paign and – pos­si­bly – the UK’s Euro­pean Union ref­er­en­dum. In Feb­ru­ary 2016, Cam­bridge Analytica’s for­mer chief exec­u­tive, Alexan­der Nix, wrote in Cam­paign that his com­pa­ny had “already helped super­charge Leave.EU’s social-media cam­paign”. Nix has stren­u­ous­ly denied this since, includ­ing to MPs.

It was this shift from the bat­tle­field to pol­i­tics that made Wylie uncom­fort­able. “When you are work­ing in infor­ma­tion oper­a­tions projects, where your tar­get is a com­bat­ant, the auton­o­my or agency of your tar­gets is not your pri­ma­ry con­sid­er­a­tion. It is fair game to deny and manip­u­late infor­ma­tion, coerce and exploit any men­tal vul­ner­a­bil­i­ties a per­son has, and to bring out the very worst char­ac­ter­is­tics in that per­son because they are an ene­my,” he says.

“But if you port that over to a demo­c­ra­t­ic sys­tem, if you run cam­paigns designed to under­mine people’s abil­i­ty to make free choic­es and to under­stand what is real and not real, you are under­min­ing democ­ra­cy and treat­ing vot­ers in the same way as you are treat­ing ter­ror­ists.”

One of the rea­sons these tech­niques are so insid­i­ous is that being a tar­get of a dis­in­for­ma­tion cam­paign is “usu­al­ly a plea­sur­able expe­ri­ence”, because you are being fed con­tent with which you are like­ly to agree. “You are being guid­ed through some­thing that you want to be true,” Wylie says.

To build an insur­gency, he explains, you first tar­get peo­ple who are more prone to hav­ing errat­ic traits, para­noia or con­spir­a­to­r­i­al think­ing, and get them to “like” a group on social media. They start engag­ing with the con­tent, which may or may not be true; either way “it feels good to see that infor­ma­tion”.

When the group reach­es 1,000 or 2,000 mem­bers, an event is set up in the local area. Even if only 5% show up, “that’s 50 to 100 peo­ple flood­ing a local cof­fee shop”, Wylie says. This, he adds, val­i­dates their opin­ion because oth­er peo­ple there are also talk­ing about “all these things that you’ve been see­ing online in the depths of your den and get­ting angry about”.

Peo­ple then start to believe the rea­son it’s not shown on main­stream news chan­nels is because “they don’t want you to know what the truth is”. As Wylie sums it up: “What start­ed out as a fan­ta­sy online gets port­ed into the tem­po­ral world and becomes real to you because you see all these peo­ple around you.” . . . . 

. . . . Psy­cho­graph­ic poten­tial

One such appli­ca­tion was Cam­bridge Analytica’s use of psy­cho­graph­ic pro­fil­ing, a form of seg­men­ta­tion that will be famil­iar to mar­keters, although not in com­mon use.

The com­pa­ny used the OCEAN mod­el, which judges peo­ple on scales of the Big Five per­son­al­i­ty traits: open­ness to expe­ri­ences, con­sci­en­tious­ness, extra­ver­sion, agree­able­ness and neu­roti­cism.

Wylie believes the method could be useful in the commercial space. For example, a fashion brand that creates bold, colourful, patterned clothes might want to segment wealthy women by extroversion because they will be more likely to buy bold items, he says.

Scep­tics say Cam­bridge Analytica’s approach may not be the dark mag­ic that Wylie claims. Indeed, when speak­ing to Cam­paign in June 2017, Nix unchar­ac­ter­is­ti­cal­ly played down the method, claim­ing the com­pa­ny used “pret­ty bland data in a pret­ty enter­pris­ing way”.

But Wylie argues that peo­ple under­es­ti­mate what algo­rithms allow you to do in pro­fil­ing. “I can take pieces of infor­ma­tion about you that seem innocu­ous, but what I’m able to do with an algo­rithm is find pat­terns that cor­re­late to under­ly­ing psy­cho­log­i­cal pro­files,” he explains.

“I can ask whether you lis­ten to Justin Bieber, and you won’t feel like I’m invad­ing your pri­va­cy. You aren’t nec­es­sar­i­ly aware that when you tell me what music you lis­ten to or what TV shows you watch, you are telling me some of your deep­est and most per­son­al attrib­ut­es.”

This is where mat­ters stray into the ques­tion of ethics. Wylie believes that as long as the com­mu­ni­ca­tion you are send­ing out is clear, not coer­cive or manip­u­la­tive, it’s fine, but it all depends on con­text. “If you are a beau­ty com­pa­ny and you use facets of neu­roti­cism – which Cam­bridge Ana­lyt­i­ca did – and you find a seg­ment of young women or men who are more prone to body dys­mor­phia, and one of the proac­tive actions they take is to buy more skin cream, you are exploit­ing some­thing which is unhealthy for that per­son and doing dam­age,” he says. “The ethics of using psy­cho­me­t­ric data real­ly depend on whether it is pro­por­tion­al to the ben­e­fit and util­i­ty that the cus­tomer is get­ting.” . . .
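To make the kind of inference Wylie describes more concrete, here is a minimal, purely illustrative Python sketch: innocuous signals (“likes”) are mapped to Big Five (OCEAN) trait estimates through a small set of weights. The page names and weights are invented for demonstration only; a real system would learn such correlations from survey-linked behavioral data rather than use hand-written numbers.

```python
# Illustrative sketch only: toy psychographic scoring of the sort Wylie
# describes, in which innocuous "likes" correlate with OCEAN traits.
# All page names and weights are invented for demonstration purposes.

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

# Hypothetical per-"like" weight vectors, one weight per OCEAN trait.
LIKE_WEIGHTS = {
    "justin_bieber":      [-0.1,  0.0,  0.3,  0.1,  0.1],
    "philosophy_podcast": [ 0.4,  0.1, -0.1,  0.0,  0.0],
    "extreme_sports":     [ 0.2, -0.1,  0.3,  0.0, -0.2],
    "true_crime_shows":   [ 0.1,  0.0,  0.0, -0.1,  0.3],
}

def score_profile(likes):
    """Sum the weight vectors of a user's likes into rough OCEAN scores."""
    totals = [0.0] * len(TRAITS)
    for like in likes:
        weights = LIKE_WEIGHTS.get(like, [0.0] * len(TRAITS))
        for i, w in enumerate(weights):
            totals[i] += w
    return dict(zip(TRAITS, totals))

if __name__ == "__main__":
    # Two innocuous "likes" are enough to nudge a trait estimate.
    print(score_profile(["justin_bieber", "extreme_sports"]))
```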

Clash­es with Face­book

Wylie is opposed to self-reg­u­la­tion, because indus­tries won’t become con­sumer cham­pi­ons – they are, he says, too con­flict­ed.

“Face­book has known about what Cam­bridge Ana­lyt­i­ca was up to from the very begin­ning of those projects,” Wylie claims. “They were noti­fied, they autho­rised the appli­ca­tions, they were giv­en the terms and con­di­tions of the app that said explic­it­ly what it was doing. They hired peo­ple who worked on build­ing the app. I had legal cor­re­spon­dence with their lawyers where they acknowl­edged it hap­pened as far back as 2016.”

He wants to cre­ate a set of endur­ing prin­ci­ples that are hand­ed over to a tech­ni­cal­ly com­pe­tent reg­u­la­tor to enforce. “Cur­rent­ly, the indus­try is not respond­ing to some pret­ty fun­da­men­tal things that have hap­pened on their watch. So I think it is the right place for gov­ern­ment to step in,” he adds.

Facebook in particular, he argues, is “the most obstinate and belligerent in recognising the harm that has been done and actually doing something about it”. . . .

7. Social media have been under­scored as a con­tribut­ing fac­tor to right-wing, domes­tic ter­ror­ism. . . . The first is sto­chas­tic ter­ror­ism: ‘The use of mass, pub­lic com­mu­ni­ca­tion, usu­al­ly against a par­tic­u­lar indi­vid­ual or group, which incites or inspires acts of ter­ror­ism which are sta­tis­ti­cal­ly prob­a­ble but hap­pen seem­ing­ly at ran­dom.’ I encoun­tered the idea in a Fri­day thread from data sci­en­tist Emi­ly Gorcens­ki, who used it to tie togeth­er four recent attacks. . . . .”

“Why Social Media is Friend to Far-Right Politi­cians Around the World” by Casey New­ton; The Verge; 10/30/2018.

The Links Between Social Media, Domes­tic Ter­ror­ism and the Retreat from Democ­ra­cy

It was an awful week­end of hate-fueled vio­lence, ugly rhetoric, and wor­ri­some retreats from our demo­c­ra­t­ic ideals. Today I’m focused on two ways of fram­ing what we’re see­ing, from the Unit­ed States to Brazil. While nei­ther offers any com­fort, they do give help­ful names to phe­nom­e­na I expect will be with us for a long while.

The first is sto­chas­tic ter­ror­ism: “The use of mass, pub­lic com­mu­ni­ca­tion, usu­al­ly against a par­tic­u­lar indi­vid­ual or group, which incites or inspires acts of ter­ror­ism which are sta­tis­ti­cal­ly prob­a­ble but hap­pen seem­ing­ly at ran­dom.” I encoun­tered the idea in a Fri­day thread from data sci­en­tist Emi­ly Gorcens­ki, who used it to tie togeth­er four recent attacks.

In her thread, Gorcens­ki argues that var­i­ous right-wing con­spir­a­cy the­o­ries and frauds, ampli­fied both through main­stream and social media, have result­ed in a grow­ing num­ber of cas­es where men snap and com­mit vio­lence. “Right-wing media is a gra­di­ent push­ing right­wards, toward vio­lence and oppres­sion,” she wrote. “One of the symp­toms of this is that you are basi­cal­ly guar­an­teed to gen­er­ate ran­dom ter­ror­ists. Like pop­corn ker­nels pop­ping.”

On Sat­ur­day, anoth­er ker­nel popped. Robert A. Bow­ers, the sus­pect in a shoot­ing at a syn­a­gogue that left 11 peo­ple dead, was steeped in online con­spir­a­cy cul­ture. He post­ed fre­quent­ly to Gab, a Twit­ter clone that empha­sizes free speech and has become a favored social net­work among white nation­al­ists. Julie Turke­witz and Kevin Roose described his hate­ful views in the New York Times:

After open­ing an account on it in Jan­u­ary, he had shared a stream of anti-Jew­ish slurs and con­spir­a­cy the­o­ries. It was on Gab where he found a like-mind­ed com­mu­ni­ty, repost­ing mes­sages from Nazi sup­port­ers.

“Jews are the chil­dren of Satan,” read Mr. Bowers’s biog­ra­phy.

Bowers is in custody — his life was saved by Jewish doctors and nurses — and presumably will never go free again. Gab’s life, however, may be imperiled. Two payment processors, PayPal and Stripe, de-platformed the site, as did its cloud host, Joyent. The site went down on Monday after its hosting provider, GoDaddy, told it to find another one. Its founder posted defiant messages on Twitter and elsewhere promising it would survive.

Gab hosts a lot of deeply upsetting content, and to its supporters, that’s the point. Free speech is a right, their reasoning goes, and it ought to be exercised. Certainly it seems wrong to suggest that Gab or any other single platform “caused” Bowers to act. Hatred, after all, is an ecosystem. But his action came amid a concerted effort to focus attention on a caravan of migrants coming to the United States in search of refuge.

Right-wing media, most notably Fox News, has advanced the idea that the car­a­van is linked to Jew­ish bil­lion­aire (and Holo­caust sur­vivor) George Soros. An actu­al Con­gress­man, Flori­da Repub­li­can Matt Gaetz, sug­gest­ed the car­a­van was fund­ed by Soros. Bow­ers enthu­si­as­ti­cal­ly pushed these con­spir­a­cy the­o­ries on social media.

In his final post on Gab, Bow­ers wrote: “I can’t sit by and watch my peo­ple get slaugh­tered. Screw your optics. I’m going in.”

The indi­vid­ual act was ran­dom. But it had become sta­tis­ti­cal­ly prob­a­ble thanks to the rise of anti-immi­grant rhetoric across all man­ner of media. And I fear we will see far more of it before the cur­rent fever breaks.
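The arithmetic behind “statistically probable but seemingly random” can be spelled out directly. Here is a minimal Python sketch with wholly invented numbers: if each exposed person has only a tiny, fixed chance of acting violently, the probability that someone, somewhere acts approaches certainty as the exposed audience grows, even though no individual act can be predicted.

```python
# Illustrative sketch only: the intuition behind "stochastic terrorism" --
# individually random acts that become statistically probable at scale.
# The audience sizes and per-person probability are invented, not estimates.

def prob_at_least_one_act(audience_size: int, per_person_prob: float) -> float:
    """P(at least one person acts) = 1 - P(nobody acts)."""
    return 1.0 - (1.0 - per_person_prob) ** audience_size

if __name__ == "__main__":
    # Hypothetical one-in-a-million chance per exposed person.
    for audience in (10_000, 1_000_000, 10_000_000):
        p = prob_at_least_one_act(audience, 1e-6)
        print(f"audience={audience:>10,}  P(at least one act) = {p:.3f}")
```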

The sec­ond con­cept I’m think­ing about today is demo­c­ra­t­ic reces­sion. The idea, which is rough­ly a decade old, is that democ­ra­cy is in retreat around the globe. The Econ­o­mist cov­ered it in Jan­u­ary:

The tenth edi­tion of the Econ­o­mist Intel­li­gence Unit’s Democ­ra­cy Index sug­gests that this unwel­come trend remains firm­ly in place. The index, which com­pris­es 60 indi­ca­tors across five broad categories—electoral process and plu­ral­ism, func­tion­ing of gov­ern­ment, polit­i­cal par­tic­i­pa­tion, demo­c­ra­t­ic polit­i­cal cul­ture and civ­il liberties—concludes that less than 5% of the world’s pop­u­la­tion cur­rent­ly lives in a “full democ­ra­cy”. Near­ly a third live under author­i­tar­i­an rule, with a large share of those in Chi­na. Over­all, 89 of the 167 coun­tries assessed in 2017 received low­er scores than they had the year before.

In Jan­u­ary, The Econ­o­mist con­sid­ered Brazil a “flawed democ­ra­cy.” But after this week­end, the coun­try may under­go a more pre­cip­i­tous decline in demo­c­ra­t­ic free­doms. As expect­ed, far-right can­di­date Jair Bol­sonaro, who speaks approv­ing­ly of the country’s pre­vi­ous mil­i­tary dic­ta­tor­ship, hand­i­ly won elec­tion over his left­ist rival.

In the best piece I read today, BuzzFeed’s Ryan Broderick — who was in Brazil for the election — puts Bolsonaro’s election into the context of the internet and social platforms. Broderick focuses on the symbiosis between internet media, which excels at promoting a sense of perpetual crisis and outrage, and far-right leaders who promise a return to normalcy.

Typ­i­cal­ly, large right-wing news chan­nels or con­ser­v­a­tive tabloids will then take these sto­ries going viral on Face­book and repack­age them for old­er, main­stream audi­ences. Depend­ing on your country’s media land­scape, the far-right trolls and influ­encers may try to hijack this social-media-to-news­pa­per-to-tele­vi­sion pipeline. Which then cre­ates more con­tent to screen­shot, meme, and share. It’s a feed­back loop.

Populist leaders and the legions of influencers riding their wave know they can create filter bubbles inside of platforms like Facebook or YouTube that promise a safer time, one that never existed in the first place, before the protests, the violence, the cascading crises, and endless news cycles. Donald Trump wants to Make America Great Again; Bolsonaro wants to bring back Brazil’s military dictatorship; Shinzo Abe wants to recapture Japan’s imperial past; Germany’s AfD performed the best with older East German voters longing for the days of authoritarianism. All of these leaders promise to close borders, to make things safe. Which will, of course, usually exacerbate the problems they’re promising to make disappear. Another feedback loop.

A third feed­back loop, of course, is between a social media ecosys­tem pro­mot­ing a sense of per­pet­u­al cri­sis and out­rage, and the ran­dom-but-sta­tis­ti­cal­ly-prob­a­ble pro­duc­tion of domes­tic ter­ror­ists.

Per­haps the glob­al rise of author­i­tar­i­ans and big tech plat­forms are mere­ly cor­re­lat­ed, and no cau­sa­tion can be proved. But I increas­ing­ly won­der whether we would ben­e­fit if tech com­pa­nies assumed that some lev­el of cau­sa­tion was real — and, assum­ing that it is, what they might do about it.

DEMOCRACY

On Social Media, No Answers for Hate

You don’t have to go to Gab to see hate­ful posts. Sheera Frenkel, Mike Isaac, and Kate Con­ger report on how the past week’s domes­tic ter­ror attacks play out on once-hap­pi­er places, most notably Insta­gram:

On Mon­day, a search on Insta­gram, the pho­to-shar­ing site owned by Face­book, pro­duced a tor­rent of anti-Semit­ic images and videos uploaded in the wake of Saturday’s shoot­ing at a Pitts­burgh syn­a­gogue.

A search for the word “Jews” dis­played 11,696 posts with the hash­tag “#jewsdid911,” claim­ing that Jews had orches­trat­ed the Sept. 11 ter­ror attacks. Oth­er hash­tags on Insta­gram ref­er­enced Nazi ide­ol­o­gy, includ­ing the num­ber 88, an abbre­vi­a­tion used for the Nazi salute “Heil Hitler.”

Attacks on Jew­ish peo­ple ris­ing on Insta­gram and Twit­ter, researchers say

Just before the syn­a­gogue attack took place on Sat­ur­day, David Ingram post­ed this sto­ry about an alarm­ing rise in attacks on Jews on social plat­forms:

Samuel Wool­ley, a social media researcher who worked on the study, ana­lyzed more than 7 mil­lion tweets from August and Sep­tem­ber and found an array of attacks, also often linked to Soros. About a third of the attacks on Jews came from auto­mat­ed accounts known as “bots,” he said.

“It’s real­ly spik­ing dur­ing this elec­tion,” Wool­ley, direc­tor of the Dig­i­tal Intel­li­gence Lab­o­ra­to­ry, which stud­ies the inter­sec­tion of tech­nol­o­gy and soci­ety, said in a tele­phone inter­view. “We’re see­ing what we think is an attempt to silence con­ver­sa­tions in the Jew­ish com­mu­ni­ty.”

Russ­ian dis­in­for­ma­tion on Face­book tar­get­ed Ukraine well before the 2016 U.S. elec­tion

Dana Priest, James Jaco­by and Anya Bourg report that Ukraine’s expe­ri­ence with infor­ma­tion war­fare offered an ear­ly — and unheed­ed — warn­ing to Face­book:

To get Zuckerberg’s atten­tion, the pres­i­dent post­ed a ques­tion for a town hall meet­ing at Facebook’s Sil­i­con Val­ley head­quar­ters. There, a mod­er­a­tor read it aloud.

“Mark, will you estab­lish a Face­book office in Ukraine?” the mod­er­a­tor said, chuck­ling, accord­ing to a video of the assem­bly. The room of young employ­ees rip­pled with laugh­ter. But the government’s sug­ges­tion was seri­ous: It believed that a Kiev office, staffed with peo­ple famil­iar with Ukraine’s polit­i­cal sit­u­a­tion, could help solve Facebook’s high-lev­el igno­rance about Russ­ian infor­ma­tion war­fare. . . . .

Discussion


  1. The Ukrainian publication 112.ua has a piece on the appointment of Kateryna Kruk as Facebook’s new head of Public Policy for Ukraine that provides some of the backstory for how this position got created in the first place. And, yes, it’s rather disturbing. Surprise!

    So back in 2015, Facebook was engaged in widespread blocking of users from Ukraine. It got to the point where then-President Petro Poroshenko asked Mark Zuckerberg to open a Facebook office in Ukraine to handle the issue of when someone should be blocked. At that point, it was Facebook’s office in Ireland that made those decisions for Ukraine’s users. Zuckerberg responded that the blocking of the Ukrainian accounts was done right because “language of hostility” was used in them. Given the civil war at the time and the fact that neo-Nazi movements were playing a major role in fighting on the pro-Kiev side of that war, we can get a pretty good idea of what that “language of hostility” would have sounded like.

    Flash for­ward to Octo­ber of 2018, and Face­book announces a com­pe­ti­tion for the posi­tion of pub­lic pol­i­cy man­ag­er for Ukraine. As Face­book’s post put it, “We are look­ing for a good com­mu­ni­ca­tor that can com­bine the pas­sion for the Inter­net ser­vices Face­book pro­vides and has deep knowl­edge of the polit­i­cal and reg­u­la­to­ry dynam­ics in Ukraine and, prefer­ably, in all the East­ern Euro­pean region,” and that some­one with expe­ri­ence work­ing on polit­i­cal issues with the par­tic­i­pa­tion of the Ukrain­ian gov­ern­ment would be pre­ferred.

    Interestingly, one source in the article indicates that the new manager position won’t be handling user bans. Of course, the article also references the Public Policy team. In other words, Kruk is going to have a bunch of people working under her, so it seems likely that people working under Kruk would be the ones actually handling the bannings. Plus, one of the responsibilities Kruk will have includes helping to “create rules in the Internet sector”, and it’s very possible tweaking those rules will be how Kruk prevents the need for future bannings. And the article explicitly says that it is expected that after this new appointment the blocking of posts of Ukrainian users would stop.

    So in 2015, Ukraine’s government complains about Facebook banning people for what sounds like hate speech and requests a special Ukrainian-specific office for handling who gets banned, and four years later Facebook basically does exactly that:

    112 UA

    Face­book appoints man­ag­er for Ukraine: What does it mean?

    Katery­na Kruk became a Face­book pub­lic pol­i­cy man­ag­er for Ukraine

    Author : News Agency 112.ua
    14:08, 7 June 2019

    In ear­ly June, Face­book for the first time in its his­to­ry appoint­ed a pub­lic pol­i­cy man­ag­er for Ukraine – she is Ukrain­ian Katery­na Kruk. It is expect­ed that after this appoint­ment the block­ing of posts of Ukrain­ian users would stop, as well as “gross and unpro­fes­sion­al atti­tude of Face­book towards Ukraine and Ukraini­ans.”

    What is the idea?

    In spring of 2015, due to the mass block­ing of Ukrain­ian users, the Ukrain­ian Face­book group addressed the founder of Face­book Mark Zucker­berg with a request to cre­ate a Ukrain­ian admin­is­tra­tion. For­mer Ukrain­ian Pres­i­dent Petro Poroshenko also asked Zucker­berg to open a Face­book office in Ukraine.

    Zucker­berg said that the con­tro­ver­sial pub­li­ca­tions for which Ukrain­ian users were banned, were delet­ed right­ly, because “lan­guage of hos­til­i­ty” was used in them. At the same time, Zucker­berg said that the Ukrain­ian social net­work­ing seg­ment is mod­er­at­ed by an office in Ire­land, and the issue of open­ing a rep­re­sen­ta­tive office of a social net­work in Ukraine can be con­sid­ered over time.

    And in Octo­ber 2018, Face­book announced a com­pe­ti­tion for the posi­tion of pub­lic pol­i­cy man­ag­er for Ukraine.

    “We are look­ing for a good com­mu­ni­ca­tor that can com­bine the pas­sion for the Inter­net ser­vices Face­book pro­vides and has deep knowl­edge of the polit­i­cal and reg­u­la­to­ry dynam­ics in Ukraine and, prefer­ably, in all the East­ern Euro­pean region,” said the com­ment to the posi­tion.

    In addi­tion, it was not­ed that a can­di­date who is acquaint­ed with politi­cians and gov­ern­ment offi­cials, and has expe­ri­ence in work­ing on polit­i­cal issues with the par­tic­i­pa­tion of the Ukrain­ian gov­ern­ment, will be giv­en pref­er­ence to. It was report­ed that the new man­ag­er would work at the Face­book office in War­saw.

    In ear­ly June, it became known that Katery­na Kruk became Face­book’s Pub­lic Pol­i­cy Man­ag­er for Ukraine.

    Who is Katery­na Kruk?

    ...

    The Deputy Min­is­ter of Infor­ma­tion Pol­i­cy Dmytro Zolo­tukhin believes that Kate­ri­na Kruk is the best choice that could have been made by Face­book. The min­istry promised to sup­port Kruk in all mat­ters that will be in the inter­ests of Ukraine.

    What will the new man­ag­er do?

    It is not­ed, that the Pub­lic Pol­i­cy team is engaged in com­mu­ni­ca­tion between politi­cians and Face­book: responds to inquiries from politi­cians and reg­u­la­to­ry bod­ies, helps to cre­ate rules in the Inter­net sec­tor, shares infor­ma­tion about prod­ucts and activ­i­ties of the com­pa­ny.

    In addi­tion, the man­ag­er will mon­i­tor the leg­is­la­tion and reg­u­la­to­ry issues relat­ed to Face­book in Ukraine, form coali­tions with oth­er orga­ni­za­tions to pro­mote the polit­i­cal goals of the social net­work, com­mu­ni­cate with the media and rep­re­sent the inter­ests of Face­book before state agen­cies.

    At the same time, some sources report that the work with blocked groups, user bans and prob­lems with the adver­tis­ing cab­i­nets are not includ­ed in the man­ager’s respon­si­bil­i­ties.

    What’s next?

    Ear­li­er Dmytro Zolo­tukhin not­ed that, first of all the new man­ag­er would act in the inter­ests of the com­pa­ny, which pays him/her.

    “How­ev­er, on the oth­er hand, this will relieve us of sus­pi­cion of who real­ly solves con­flict sit­u­a­tions with Ukrain­ian users,” Zolo­tukhin wrote in the fall last year.

    And after announcing the results of the competition for the vacancy, he expressed the hope that after this appointment the “gross and unprofessional attitude of Facebook towards Ukraine and Ukrainians” would end.

    ———–

    “Face­book appoints man­ag­er for Ukraine: What does it mean?” by News Agency 112.ua; 112.ua, 06/07/2019

    “In ear­ly June, Face­book for the first time in its his­to­ry appoint­ed a pub­lic pol­i­cy man­ag­er for Ukraine – she is Ukrain­ian Katery­na Kruk. It is expect­ed that after this appoint­ment the block­ing of posts of Ukrain­ian users would stop, as well as “gross and unpro­fes­sion­al atti­tude of Face­book towards Ukraine and Ukraini­ans.””

    No more block­ings of Ukrain­ian posts. That’s the expec­ta­tion now that Kruk has this new posi­tion. It’s quite a change from 2015 when Mark Zucker­berg him­self defend­ed the block­ing of such posts because they vio­lat­ed Face­book’s terms of use by includ­ing “lan­guage of hos­til­i­ty”, which is almost cer­tain­ly a euphemism for Nazi hate speech. But Zucker­berg said the com­pa­ny would con­sid­er Petro Poroshenko’s request for a spe­cial Ukrain­ian office to han­dle these issues and in 2018 the com­pa­ny decid­ed to go ahead with the idea:

    ...
    What is the idea?

    In spring of 2015, due to the mass block­ing of Ukrain­ian users, the Ukrain­ian Face­book group addressed the founder of Face­book Mark Zucker­berg with a request to cre­ate a Ukrain­ian admin­is­tra­tion. For­mer Ukrain­ian Pres­i­dent Petro Poroshenko also asked Zucker­berg to open a Face­book office in Ukraine.

    Zucker­berg said that the con­tro­ver­sial pub­li­ca­tions for which Ukrain­ian users were banned, were delet­ed right­ly, because “lan­guage of hos­til­i­ty” was used in them. At the same time, Zucker­berg said that the Ukrain­ian social net­work­ing seg­ment is mod­er­at­ed by an office in Ire­land, and the issue of open­ing a rep­re­sen­ta­tive office of a social net­work in Ukraine can be con­sid­ered over time.

    And in Octo­ber 2018, Face­book announced a com­pe­ti­tion for the posi­tion of pub­lic pol­i­cy man­ag­er for Ukraine.

    “We are look­ing for a good com­mu­ni­ca­tor that can com­bine the pas­sion for the Inter­net ser­vices Face­book pro­vides and has deep knowl­edge of the polit­i­cal and reg­u­la­to­ry dynam­ics in Ukraine and, prefer­ably, in all the East­ern Euro­pean region,” said the com­ment to the posi­tion.

    In addi­tion, it was not­ed that a can­di­date who is acquaint­ed with politi­cians and gov­ern­ment offi­cials, and has expe­ri­ence in work­ing on polit­i­cal issues with the par­tic­i­pa­tion of the Ukrain­ian gov­ern­ment, will be giv­en pref­er­ence to. It was report­ed that the new man­ag­er would work at the Face­book office in War­saw.
    ...

    And while it doesn’t sound like the manager (Kruk) will be directly responsible for handling bannings, it also sounds like she’s going to be managing a team of people, so we would expect that team to be the ones actually handling the bannings. Plus, Kruk’s responsibilities for things like helping to “create rules in the Internet sector” are a far more effective way to lift the rules that were resulting in these bans:

    ...
    What will the new man­ag­er do?

    It is not­ed, that the Pub­lic Pol­i­cy team is engaged in com­mu­ni­ca­tion between politi­cians and Face­book: responds to inquiries from politi­cians and reg­u­la­to­ry bod­ies, helps to cre­ate rules in the Inter­net sec­tor, shares infor­ma­tion about prod­ucts and activ­i­ties of the com­pa­ny.

    In addi­tion, the man­ag­er will mon­i­tor the leg­is­la­tion and reg­u­la­to­ry issues relat­ed to Face­book in Ukraine, form coali­tions with oth­er orga­ni­za­tions to pro­mote the polit­i­cal goals of the social net­work, com­mu­ni­cate with the media and rep­re­sent the inter­ests of Face­book before state agen­cies.

    At the same time, some sources report that the work with blocked groups, user bans and prob­lems with the adver­tis­ing cab­i­nets are not includ­ed in the man­ager’s respon­si­bil­i­ties.
    ...

    And note how Ukraine’s Deputy Minister of Information Policy has already pledged to support Kruk and has expressed his hope that this appointment will end the “gross and unprofessional attitude of Facebook towards Ukraine and Ukrainians”:

    ...
    The Deputy Min­is­ter of Infor­ma­tion Pol­i­cy Dmytro Zolo­tukhin believes that Kate­ri­na Kruk is the best choice that could have been made by Face­book. The min­istry promised to sup­port Kruk in all mat­ters that will be in the inter­ests of Ukraine.

    ...

    What’s next?

    Ear­li­er Dmytro Zolo­tukhin not­ed that, first of all the new man­ag­er would act in the inter­ests of the com­pa­ny, which pays him/her.

    “How­ev­er, on the oth­er hand, this will relieve us of sus­pi­cion of who real­ly solves con­flict sit­u­a­tions with Ukrain­ian users,” Zolo­tukhin wrote in the fall last year.

    And after announcing the results of the competition for the vacancy, he expressed the hope that after this appointment the “gross and unprofessional attitude of Facebook towards Ukraine and Ukrainians” would end.
    ...

    Now, it’s important to acknowledge that there have undoubtedly been some bans of Ukrainians that were the result of pro-Kremlin trolls (and vice versa). That was, in fact, one of the big complaints of Ukrainians in 2015: that pro-Kremlin trolls were effectively gaming Facebook’s systems to get Ukrainians banned. But there’s also no denying that Ukraine is awash in government-backed fascist propaganda that undoubtedly violates Facebook’s various rules against hate speech. And now we have a far right sympathizer, Kruk, in this new position.

    So it’s going to be really interesting to see what happens with government-backed neo-Nazi groups like Azov. As the following article from April of this year describes, Azov members first started experiencing bannings in 2015, and the group was quietly banned entirely at some point this year. Except, despite that ban, Azov remains on Facebook, just under new pages. Olena Semenyaka, the international spokesperson for the movement, has had multiple pages banned but still has multiple pages up. And that’s going to be an important thing to keep in mind as this plays out: even if Facebook bans these far right groups, getting around those bans appears to be trivial:

    Radio Free Europe/Radio Lib­er­ty

    Face­book ‘Bans’ Ukrain­ian Far-Right Group Over ‘Hate Speech’ — But Get­ting Rid Of It Isn’t Easy

    By Christo­pher Miller
    April 16, 2019 18:50 GMT

    KYIV — Ukraine’s mil­i­taris­tic, far-right Azov move­ment and its var­i­ous branch­es have used Face­book to pro­mote its anti­de­mo­c­ra­t­ic, ultra­na­tion­al­ist mes­sages and recruit new mem­bers since its incep­tion at the start of the coun­try’s war against Rus­sia-backed sep­a­ratists five years ago.

    The Amer­i­can social-net­work­ing giant has also been an impor­tant plat­form for Azov’s glob­al expan­sion and attempts to legit­imize itself among like­mind­ed Amer­i­can and Euro­pean white nation­al­ists.

    Face­book has occa­sion­al­ly tak­en down pages and groups asso­ci­at­ed with Azov when they have been found to be in vio­la­tion of its poli­cies on hate speech and the depic­tion of vio­lence.

    The first Face­book removals occurred in 2015, Azov mem­bers told RFE/RL.

    But after con­tin­u­ous, repeat vio­la­tions Azov — which includes many war vet­er­ans and mil­i­tant mem­bers with open­ly neo-Nazi views who have been involved in attacks on LGBT activists, Romany encamp­ments, and wom­en’s groups — is now offi­cial­ly banned from hav­ing any pres­ence on Face­book, the social net­work has con­firmed to RFE/RL.

    Despite the ban, how­ev­er, which qui­et­ly came into force months ago, a defi­ant Azov and its mem­bers remain active on the social net­work under pseu­do­nyms and name vari­a­tions, under­scor­ing the dif­fi­cul­ty Face­book faces in com­bat­ing extrem­ism on a plat­form with some 2.32 bil­lion month­ly active users.

    ‘Orga­nized Hate’ Not Allowed

    For years, Face­book has strug­gled with how to deal with extrem­ist con­tent and it has been crit­i­cized for mov­ing too slow­ly on it or behav­ing reac­tive­ly.

    The issue was put front-and-cen­ter in August 2017, when the plat­form was used to orga­nize a white suprema­cist ral­ly in Char­lottesville, Vir­ginia, that turned dead­ly.

    The issue was raised most recent­ly in the after­math of the Christchurch mas­sacre that left 50 peo­ple dead. The shoot­er livestreamed the killing on his Face­book page. The com­pa­ny said it had “quick­ly removed both the shooter’s Face­book and Insta­gram accounts and the video,” and was tak­ing down posts of praise or sup­port for the shoot­ing.

    Joe Mul­hall, a senior researcher at the U.K.-based antifas­cist orga­ni­za­tion Hope Not Hate, told RFE/RL by phone that Char­lottesville brought a “sea change” when it came to social media com­pa­nies and Face­book, in par­tic­u­lar pay­ing atten­tion to extrem­ists.

    For instance, he praised the com­pa­ny for its “robust” action against the far-right founder of the Eng­lish Defence League, Tom­my Robin­son, who had repeat­ed­ly vio­lat­ed Face­book’s poli­cies on hate speech.

    But Mul­hall said Face­book more often acts only after “they’re pub­licly shamed.”

    “When there is mas­sive pub­lic pres­sure, they act; or when they think they can get away with things, they don’t,” he added.

    This may explain why it took Face­book years to ban the Azov move­ment, which received sig­nif­i­cant media atten­tion fol­low­ing a series of vio­lent attacks against minori­ties in 2018.

    Face­book did not spec­i­fy what exact­ly tipped the scale. But respond­ing to an RFE/RL e‑mail request on April 15, a Face­book spokesper­son wrote that the com­pa­ny has been tak­ing down accounts asso­ci­at­ed with the Azov Reg­i­ment, Nation­al Corps, and Nation­al Mili­tia – the group’s mil­i­tary, polit­i­cal, and vig­i­lante wings, respec­tive­ly — on Face­book for months, cit­ing its poli­cies against hate groups. The spokesper­son did not say when exact­ly the ban came into force.

    In its pol­i­cy on dan­ger­ous indi­vid­u­als and orga­ni­za­tions, Face­book defines a hate orga­ni­za­tion as “any asso­ci­a­tion of three or more peo­ple that is orga­nized under a name, sign, or sym­bol and that has an ide­ol­o­gy, state­ments, or phys­i­cal actions that attack indi­vid­u­als based on char­ac­ter­is­tics, includ­ing race, reli­gious affil­i­a­tion, nation­al­i­ty, eth­nic­i­ty, gen­der, sex, sex­u­al ori­en­ta­tion, seri­ous dis­ease or dis­abil­i­ty.”

    Defend­ing ‘Ukrain­ian Order’

    Azov and its lead­er­ship con­sid­er them­selves defend­ers of what they call “Ukrain­ian order,” or an illib­er­al and anti­de­mo­c­ra­t­ic soci­ety. They are anti-Russ­ian and also against Ukraine’s poten­tial acces­sion to the Euro­pean Union and NATO.

    Their ide­al Ukraine is a “Ukraine for Ukraini­ans,” as Ole­na Semenya­ka, the inter­na­tion­al sec­re­tary for Azov’s polit­i­cal wing, the Nation­al Corps, told RFE/RL last year. Azov’s sym­bol is sim­i­lar to the Nazi Wolf­san­gel but the group claims it is com­prised of the let­ters N and I, mean­ing “nation­al idea.”

    ...

    Ear­li­er in March, the U.S. State Depart­ment referred to the Nation­al Corps as a “nation­al­ist hate group” in its annu­al human rights report.

    Azov has inducted thousands of militant members in recent years in torchlight ceremonies with chants of “Glory to Ukraine! Death to enemies.” The movement claims to have roughly 10,000 members in its broader movement and the ability to mobilize some 2,000 to the streets within hours. A large part of its recruiting has been done using slickly produced videos and advertisements of its fight clubs, hardcore concerts, and fashion lines promoted on Facebook and other social networks.

    Still On Face­book, But Mov­ing Else­where

    Many of those may no longer be found on Face­book after the ban. But some are like­ly to stick around, since many Azov fac­tions and lead­ers remain on the plat­form or else have opened fresh accounts after orig­i­nal ones were removed, RFE/RL research shows.

    For instance, the Azov Reg­i­ment, whose offi­cial page under the Polk Azov name was removed months ago, has also opened a fresh page with a new name: Tviy Polk (Your Reg­i­ment).

    Its lead­ers have react­ed sim­i­lar­ly, as have the Nation­al Corps and Nation­al Mili­tia, open­ing dozens of new accounts under slight­ly altered names to make it more dif­fi­cult for Face­book to track them. A sim­ple search on April 16 brought up more than a dozen active accounts.

    Semenya­ka has had at least two per­son­al accounts removed by Face­book. But two oth­er accounts belong­ing to her and opened with dif­fer­ent spellings of her name — Lena Semenya­ka and Hele­na Semenya­ka — are still open, as is a group page she man­ages.

    In a post to the Lena account on April 11, Semenya­ka wrote after the take­down of her orig­i­nal account that Face­book “is get­ting increas­ing­ly anti-intel­lec­tu­al.”

    “If you wish to keep in touch, please sub­scribe to some oth­er per­ma­nent and tem­po­rary plat­forms,” she con­tin­ued, adding a link to her Face­book-owned Insta­gram account.

    Then, high­light­ing what’s become a pop­u­lar new des­ti­na­tion for far-right and oth­er extrem­ist groups, she also announced the open­ing of a new Nation­al Corps Inter­na­tion­al account — on the mes­sen­ger app Telegram.

    ———-

    “Face­book ‘Bans’ Ukrain­ian Far-Right Group Over ‘Hate Speech’ — But Get­ting Rid Of It Isn’t Easy” by Christo­pher Miller; Radio Free Europe/Radio Lib­er­ty; 04/16/2019

    “Despite the ban, however, which quietly came into force months ago, a defiant Azov and its members remain active on the social network under pseudonyms and name variations, underscoring the difficulty Facebook faces in combating extremism on a platform with some 2.32 billion monthly active users.”

    The total ban on Azov took place months ago and yet Azov mem­bers still have an active pres­ence, includ­ing the move­men­t’s spokesper­son, Ole­na Semenya­ka, who has two per­son­al pages and a group page still up as of April, along with an account on Face­book-owned Insta­gram:

    ...
    Defend­ing ‘Ukrain­ian Order’

    Azov and its lead­er­ship con­sid­er them­selves defend­ers of what they call “Ukrain­ian order,” or an illib­er­al and anti­de­mo­c­ra­t­ic soci­ety. They are anti-Russ­ian and also against Ukraine’s poten­tial acces­sion to the Euro­pean Union and NATO.

    Their ide­al Ukraine is a “Ukraine for Ukraini­ans,” as Ole­na Semenya­ka, the inter­na­tion­al sec­re­tary for Azov’s polit­i­cal wing, the Nation­al Corps, told RFE/RL last year. Azov’s sym­bol is sim­i­lar to the Nazi Wolf­san­gel but the group claims it is com­prised of the let­ters N and I, mean­ing “nation­al idea.”

    ...

    Still On Face­book, But Mov­ing Else­where

    Many of those may no longer be found on Face­book after the ban. But some are like­ly to stick around, since many Azov fac­tions and lead­ers remain on the plat­form or else have opened fresh accounts after orig­i­nal ones were removed, RFE/RL research shows.

    For instance, the Azov Reg­i­ment, whose offi­cial page under the Polk Azov name was removed months ago, has also opened a fresh page with a new name: Tviy Polk (Your Reg­i­ment).

    Its lead­ers have react­ed sim­i­lar­ly, as have the Nation­al Corps and Nation­al Mili­tia, open­ing dozens of new accounts under slight­ly altered names to make it more dif­fi­cult for Face­book to track them. A sim­ple search on April 16 brought up more than a dozen active accounts.

    Semenya­ka has had at least two per­son­al accounts removed by Face­book. But two oth­er accounts belong­ing to her and opened with dif­fer­ent spellings of her name — Lena Semenya­ka and Hele­na Semenya­ka — are still open, as is a group page she man­ages.

    In a post to the Lena account on April 11, Semenya­ka wrote after the take­down of her orig­i­nal account that Face­book “is get­ting increas­ing­ly anti-intel­lec­tu­al.”

    “If you wish to keep in touch, please sub­scribe to some oth­er per­ma­nent and tem­po­rary plat­forms,” she con­tin­ued, adding a link to her Face­book-owned Insta­gram account.

    Then, high­light­ing what’s become a pop­u­lar new des­ti­na­tion for far-right and oth­er extrem­ist groups, she also announced the open­ing of a new Nation­al Corps Inter­na­tion­al account — on the mes­sen­ger app Telegram.
    ...

    And note how Face­book would­n’t actu­al­ly say what exact­ly trig­gered the com­pa­ny to ful­ly ban the group after years of indi­vid­ual ban­nings. That’s part of what’s going to be inter­est­ing to watch with the cre­ation of a new Pub­lic Pol­i­cy office for Ukraine: those rules are going to become a lot clear­er after fig­ures like Kruk learn what they are and can shape them:

    ...
    Joe Mul­hall, a senior researcher at the U.K.-based antifas­cist orga­ni­za­tion Hope Not Hate, told RFE/RL by phone that Char­lottesville brought a “sea change” when it came to social media com­pa­nies and Face­book, in par­tic­u­lar pay­ing atten­tion to extrem­ists.

    For instance, he praised the com­pa­ny for its “robust” action against the far-right founder of the Eng­lish Defence League, Tom­my Robin­son, who had repeat­ed­ly vio­lat­ed Face­book’s poli­cies on hate speech.

    But Mul­hall said Face­book more often acts only after “they’re pub­licly shamed.”

    “When there is mas­sive pub­lic pres­sure, they act; or when they think they can get away with things, they don’t,” he added.

    This may explain why it took Face­book years to ban the Azov move­ment, which received sig­nif­i­cant media atten­tion fol­low­ing a series of vio­lent attacks against minori­ties in 2018.

    Face­book did not spec­i­fy what exact­ly tipped the scale. But respond­ing to an RFE/RL e‑mail request on April 15, a Face­book spokesper­son wrote that the com­pa­ny has been tak­ing down accounts asso­ci­at­ed with the Azov Reg­i­ment, Nation­al Corps, and Nation­al Mili­tia – the group’s mil­i­tary, polit­i­cal, and vig­i­lante wings, respec­tive­ly — on Face­book for months, cit­ing its poli­cies against hate groups. The spokesper­son did not say when exact­ly the ban came into force.
    ...

    So getting around those rules is also presumably going to get a lot easier once figures like Kruk can inform her fellow far right activists what exactly those rules are...assuming the rules against organized hate aren’t done away with entirely for Ukraine.

    Posted by Pterrafractyl | June 7, 2019, 3:06 pm
  2. Here’s an article discussing a book that just came out, The Real Face of Facebook in India, about the relationship between Facebook and the BJP and the role it played in the BJP’s stunning 2014 successes. Most of what’s in the article covers what we already knew about this relationship: Shivnath Thukral, a former NDTV journalist with a close working relationship with Modi aide Hiren Joshi, worked with Joshi on the Modi digital team in the 2014 election before Thukral went on to become Facebook’s director of policy for India and South Asia.

    Some of the new fun facts include Facebook apparently refusing to run the Congress Party’s ads highlighting the Modi government’s Rafale fighter jet scandal. It also delayed, for 11 days, an ad for an exposé in Caravan Magazine about BJP official Amit Shah. Disturbingly, it also sounds like Indian propaganda companies are offering their services in other countries like South Africa, which makes the company’s cozy ties to BJP propagandists even more troubling.

    One of the more iron­ic fun facts in the book is that Katie Har­bath, Facebook’s Direc­tor for Glob­al Pol­i­tics and Gov­ern­ment Out­reach, was appar­ent­ly “unhap­py and uneasy about the prox­im­i­ty” of top offi­cials of Face­book to the Naren­dra Modi gov­ern­ment after Thukral got his posi­tion at Face­book. This is accord­ing to an anony­mous source. So that would appear to indi­cate that even Face­book’s high-lev­el employ­ees rec­og­nize these are politi­cized posi­tions and yet the com­pa­ny goes ahead with it any­way. Sur­prise! As the arti­cle also notes, it’s some­what iron­ic for Har­bath to be express­ing an unease with the com­pa­ny hir­ing a polit­i­cal­ly con­nect­ed indi­vid­ual close to the gov­ern­ment for such a posi­tion since Har­bath her­self was once a dig­i­tal strate­gist for the Repub­li­can Par­ty and Rudy Giu­liani:

    The Wire

    The Past and Future of Face­book and BJP’s Mutu­al­ly Ben­e­fi­cial Rela­tion­ship

    A new book by Paran­joy Guha Thakur­ta and Cyril Sam finds revolv­ing doors and quid pro quos between Indi­a’s rich­est polit­i­cal par­ty and the world’s largest social plat­form.

    Partha P. Chakrabart­ty
    03/Jun/2019

    Five years from now, we may well be reading a book about the BJP’s WhatsApp operations in the 2019 elections – featuring two lakh groups of 256 members each, or over 50 million readers of the party line. A recent book, however, tells the story of the 2014 elections, and the role of WhatsApp’s parent company Facebook in the rise of Narendra Modi.

    In 2019, if we for­get Facebook’s bil­lions of dol­lars in rev­enue, we might almost feel sor­ry for it. Face­book has had a rough year, where it has been attacked both by the Left (for per­mit­ting the rise of right-wing troll armies), and the Right (for cen­sor­ship of con­ser­v­a­tives: Don­ald Trump has launched a new tool to report instances.)

    But we can’t for­get their bil­lions of dol­lars of rev­enue, espe­cial­ly when, even in this tough year, Facebook’s income grew by 26% quar­ter-on-quar­ter. To add to the voic­es raised against it, a new book alleges that Face­book was both direct­ly com­plic­it in, and ben­e­fit­ed from, the rise of Modi’s BJP in India.

    The Real Face of Face­book in India, co-authored by the jour­nal­ists Paran­joy Guha Thakur­ta and Cyril Sam, is a short, terse book that reads like a who­dun­nit. In the intro­duc­tion, it is teased that the book will reveal ‘a wealth of details about the kind of sup­port that Face­book pro­vid­ed Naren­dra Modi and the appa­ra­tus of the BJP appa­ra­tus (sic) even (sic) before the 2014 elec­tions’. The many copy errors reveal that the book is an attempt to get the news out as wide­ly and as quick­ly as pos­si­ble. This is report­ing, not deep analy­sis.

    In line with the aim of reach­ing as wide an audi­ence as pos­si­ble, the book has been simul­ta­ne­ous­ly pub­lished in Hin­di under the title Face­book ka Asli Chehra. There is also a com­pan­ion web­site, theaslifacebook.com, which also has a Hin­di sec­tion.

    Teasers to the big reveal come in the first few chap­ters, which do a slight­ly hap­haz­ard job of nar­rat­ing the his­to­ry of Face­book in India. The smok­ing gun is final­ly dis­closed in Chap­ter 8 in the form of a per­son, Shiv­nath Thukral, a for­mer NDTV jour­nal­ist and ex-man­ag­ing direc­tor of Carnegie India. Going by the evi­dence in the book, Thukral had a close work­ing rela­tion­ship with inti­mate Modi aide, Hiren Joshi. Togeth­er, they cre­at­ed the Mera Bharosa web­site and oth­er web pages for the BJP in late 2013, ahead of the nation­al elec­tion. In 2017, after his stint at Carnegie, Thukral joined Face­book as its direc­tor of pol­i­cy for India and South Asia.

    For a per­son so close to a rul­ing par­ty to become a top offi­cial of a ‘neu­tral’ plat­form is wor­ry­ing. Wor­ry­ing enough, it seems, to trou­ble the com­pa­ny itself: Real Face claims that Katie Har­bath, Facebook’s direc­tor for glob­al pol­i­tics and gov­ern­ment out­reach, said she was “unhap­py and uneasy about the prox­im­i­ty” of top offi­cials of Face­book to the Naren­dra Modi gov­ern­ment. The quote is attrib­uted to an anony­mous source. Whether it is true or not, cit­i­zens should be con­cerned about this par­tic­u­lar revolv­ing door between the most pow­er­ful media organ­i­sa­tion in the world and the Modi admin­is­tra­tion. (It’s a dif­fer­ent mat­ter that Har­bath her­self was once a dig­i­tal strate­gist for the Repub­li­can Par­ty and Rudy Giu­liani).

    The over­ar­ch­ing sto­ry is this: The BJP was the first in our coun­try to see the poten­tial of Face­book as a way to reach vot­ers. Face­book, a pri­vate cor­po­ra­tion with an eye on build­ing rel­e­vance in India and earn­ing prof­its through adver­tis­ing, saw in pol­i­tics a great way to dri­ve engage­ment. Both the BJP and Face­book had much to gain from a part­ner­ship.

    As a result, in the run-up to the 2014 elec­tion, Face­book offered train­ing to BJP per­son­nel in run­ning social media cam­paigns. (Face­book has stat­ed that they con­duct these work­shops for var­i­ous polit­i­cal par­ties, but the impli­ca­tion remains that the BJP, in being a first mover, ben­e­fit­ed dis­pro­por­tion­ate­ly).

    The strat­e­gy worked beau­ti­ful­ly for Face­book. As report­ed by Ankhi Das, Face­book India’s lead on pol­i­cy and gov­ern­ment rela­tions, the 2014 elec­tions reaped the plat­form 227 mil­lion inter­ac­tions. Read today, Das’ arti­cle – which speaks of how ‘likes’ won Naren­dra Modi votes – comes off as more sin­is­ter than it might have at the time.

    We also know that the strat­e­gy worked for Modi. So potent was BJP’s tar­get­ing that it won 90% of its votes in only 299 con­stituen­cies, 282 of which it won. For­mer and cur­rent mem­bers of the BJP’s dig­i­tal media strat­e­gy team were hap­py to con­firm the mutu­al ben­e­fit. The cur­rent mem­ber is Vinit Goen­ka, once the nation­al co-con­ven­er of the BJP’s IT cell, and cur­rent­ly work­ing with Nitin Gad­kari. This is how the book tells it:

    At one stage in our inter­view with Goen­ka that last­ed over two hours, we asked him a point­ed ques­tion: ‘Who helped whom more, Face­book or the BJP?’

    He smiled and said: ‘That’s a dif­fi­cult ques­tion. I won­der whether the BJP helped Face­book more than Face­book helped the BJP. You could say, we helped each oth­er.’

    Equal­ly alarm­ing are reports, in the book, of Face­book deny­ing Con­gress paid ads to pub­li­cise the Rafale con­tro­ver­sy. Face­book also delayed a boost on a Car­a­van expose on Amit Shah by more than 11 days, an eter­ni­ty in our ridicu­lous­ly fast news cycle. Final­ly, there are reports of Indi­an pro­pa­gan­da com­pa­nies repli­cat­ing these lessons in elec­tions in South Africa and oth­er coun­tries. Tak­en togeth­er, we see how pri­vate plat­forms are hap­py to be used to manip­u­late demo­c­ra­t­ic process­es, whether in ser­vice of the Right or Left or Cen­tre.

    This book is con­cerned with cri­tiquing Facebook’s links with the right-wing. It has a fore­word by the pop­u­lar jour­nal­ist Rav­ish Kumar and a pref­ace by pro­fes­sor Apoor­vanand of the Depart­ment of Hin­di at Del­hi Uni­ver­si­ty (and a con­trib­u­tor at The Wire). Both of these belong to what the right-wing terms the ‘sec­u­lar’ brigade.

    How­ev­er, we know now that Face­book is also act­ing against some of the assets of the BJP itself. This may be an eye­wash, or just the log­i­cal next step in Facebook’s project: hav­ing cre­at­ed its impor­tance in elec­tions with the help of the BJP, it is now sell­ing its influ­ence to oth­er par­ties. It doesn’t mat­ter which par­ty comes out on top in the social media game: the house always wins. Even sup­port­ers of the BJP should be wary of the mon­ster they have fed. There is no rea­son for Face­book to be loy­al to the par­ty.

    Cam­bridge Ana­lyt­i­ca, by using lim­it­ed data from Face­book, was able to influ­ence the Brex­it and 2016 US pres­i­den­tial elec­tions. The book asks: what kind of influ­ence can the plat­form itself exert on our demo­c­ra­t­ic process­es?

    ...

    The ques­tion cit­i­zens have to ask is: how much pow­er do we allow one cor­po­ra­tion, and its 35-year-old CEO, to have? What can be done about its near-monop­o­lis­tic grip on data, and its abil­i­ty to uni­lat­er­al­ly impede or encour­age the flow of infor­ma­tion? These are ear­ly days, and one can hope that checks and bal­ances will kick in. Until that hap­pens, our work – of sim­ply keep­ing up with how plat­forms pro­pel or impede polit­i­cal inter­ests – will be cut out for us.

    ———-

    “The Past and Future of Face­book and BJP’s Mutu­al­ly Ben­e­fi­cial Rela­tion­ship” by Partha P. Chakrabart­ty; The Wire; 06/03/2019

    “Teasers to the big reveal come in the first few chapters, which do a slightly haphazard job of narrating the history of Facebook in India. The smoking gun is finally disclosed in Chapter 8 in the form of a person, Shivnath Thukral, a former NDTV journalist and ex-managing director of Carnegie India. Going by the evidence in the book, Thukral had a close working relationship with intimate Modi aide, Hiren Joshi. Together, they created the Mera Bharosa website and other web pages for the BJP in late 2013, ahead of the national election. In 2017, after his stint at Carnegie, Thukral joined Facebook as its director of policy for India and South Asia.”

    It’s the kind of smoking gun of Facebook’s relationship with the BJP that just keeps smoking the longer Shivnath Thukral holds that position. But it’s not the only smoking gun. Reports of Facebook refusing to publicize ads for the rival Congress Party and delaying stories that would be damaging to the BJP produce quite a bit of smoke too. And note that, while the article raises the risk that Facebook might work against the BJP’s interests in the future, citing some of Facebook’s efforts that have acted against the BJP’s digital assets, keep in mind that the particular effort the piece is referring to was a ‘fake news’ crackdown in which Facebook removed more than 700 pages, almost all of them (687) Congress Party pages, although the handful of BJP pages removed did have far more viewers than the Congress pages. So, thus far, the only time Facebook appears to work against the BJP’s interests is when there’s a generic ‘fake news’ purge, and even in that case it appeared to target the BJP’s rivals much more heavily:

    ...
    Equal­ly alarm­ing are reports, in the book, of Face­book deny­ing Con­gress paid ads to pub­li­cise the Rafale con­tro­ver­sy. Face­book also delayed a boost on a Car­a­van expose on Amit Shah by more than 11 days, an eter­ni­ty in our ridicu­lous­ly fast news cycle. Final­ly, there are reports of Indi­an pro­pa­gan­da com­pa­nies repli­cat­ing these lessons in elec­tions in South Africa and oth­er coun­tries. Tak­en togeth­er, we see how pri­vate plat­forms are hap­py to be used to manip­u­late demo­c­ra­t­ic process­es, whether in ser­vice of the Right or Left or Cen­tre.

    ...

    How­ev­er, we know now that Face­book is also act­ing against some of the assets of the BJP itself. This may be an eye­wash, or just the log­i­cal next step in Facebook’s project: hav­ing cre­at­ed its impor­tance in elec­tions with the help of the BJP, it is now sell­ing its influ­ence to oth­er par­ties. It doesn’t mat­ter which par­ty comes out on top in the social media game: the house always wins. Even sup­port­ers of the BJP should be wary of the mon­ster they have fed. There is no rea­son for Face­book to be loy­al to the par­ty.
    ...

    The fact that this arrangement with the BJP is problematic isn’t lost on Facebook’s executives, according to the book. Facebook’s own director for global politics and government outreach, Katie Harbath, reportedly said she was “unhappy and uneasy about the proximity” of top officials of Facebook to the Modi government after Thukral was hired. But those concerns were clearly ignored. The concerns were also clearly ironic, since Harbath herself was once a digital strategist for the Republican Party and Rudy Giuliani:

    ...
    For a per­son so close to a rul­ing par­ty to become a top offi­cial of a ‘neu­tral’ plat­form is wor­ry­ing. Wor­ry­ing enough, it seems, to trou­ble the com­pa­ny itself: Real Face claims that Katie Har­bath, Facebook’s direc­tor for glob­al pol­i­tics and gov­ern­ment out­reach, said she was “unhap­py and uneasy about the prox­im­i­ty” of top offi­cials of Face­book to the Naren­dra Modi gov­ern­ment. The quote is attrib­uted to an anony­mous source. Whether it is true or not, cit­i­zens should be con­cerned about this par­tic­u­lar revolv­ing door between the most pow­er­ful media organ­i­sa­tion in the world and the Modi admin­is­tra­tion. (It’s a dif­fer­ent mat­ter that Har­bath her­self was once a dig­i­tal strate­gist for the Repub­li­can Par­ty and Rudy Giu­liani).
    ...

    Recall how, right when the Cambridge Analytica scandal was emerging in late March of 2018, Facebook replaced its head of policy in the United States with another right-wing hack, Kevin Martin. Martin would be the new person in charge of lobbying the US government. Martin was Facebook’s vice president of mobile and global access policy and a former Republican chairman of the Federal Communications Commission. When Martin took this new position he would be reporting to Facebook’s vice president of global public policy, Joel Kaplan. Both Martin and Kaplan worked together on George W. Bush’s 2000 presidential campaign. Yep, that’s how Facebook responded to the Cambridge Analytica scandal. By putting a Republican in charge of lobbying the US government.

    It’s that context that makes the concerns of Katie Harbath so ironic, along with the fact that Facebook was so integral to the success of the 2016 Trump campaign that the company embedded employees with the campaign. Yes, Harbath’s concerns over an overly close relationship with the BJP were valid concerns, but ironic ones coming from a Republican operative like Harbath.

    And when you look at Harbath’s LinkedIn page, we learn that she was hired by Facebook to become the Public Policy Director for Global Elections in February of 2011. Harbath was the National Republican Senatorial Committee’s chief digital strategist from August 2009 to March 2011. So Harbath would have been in charge of the Senate GOP’s digital strategy for the 2010 midterms, when the Republicans gained six Senate seats and retook control of the US House, and a few months later Facebook hired her to become the Public Policy Director for Global Elections.

    Beyond that, Harbath’s LinkedIn page lists her work for DCI Group. She was a Senior Account Manager at DCI Group from 2006–2007. Then she left to work as the Deputy eCampaign Director for Rudy Giuliani’s presidential campaign from February 2007 to January 2008. And in February of 2008 she returned to DCI Group as Director of Online Services, the position she held until going to work for the National Republican Senatorial Committee in 2009. Recall how DCI Group has close ties to Karl Rove and is known for being one of the sleaziest and most amoral of the ‘dark money’ lobbying/propaganda firms operating in DC. In addition to lobbying and public relations work for the Republican Party, DCI has a history of taking on clients like RJ Reynolds Tobacco and the Burmese junta. It’s also known for peddling misinformation and engaging in dirty politics. In 2008, the CEO of DCI Group was selected to manage the Republican National Convention. And DCI Group also worked with the Koch brothers’ front groups Americans for Prosperity and FreedomWorks in creating the Tea Party movement, which would have taken place during Harbath’s time as the National Republican Senatorial Committee’s chief digital strategist. DCI Group was also the publisher of Tech Central Station, a website funded by Exxon dedicated to climate change denial, and it has worked on major right-wing disinformation campaigns in the US ranging from health care to oil pipelines.

    So Facebook's Public Policy Director for Global Elections, Katie Harbath, wasn't just a Republican Party operative. She also worked for one of the most disreputable lobbying and propaganda firms in DC, a key entity in the American 'dark money' propaganda industry. That's the person who was allegedly uncomfortable with Facebook's hiring of a BJP-connected individual. And despite those alleged concerns, Thukral's hiring happened anyway, of course.

    In relat­ed news, the Trump White House set up a web­page where con­ser­v­a­tives could go to report instances of Face­book and oth­er social media com­pa­nies being biased against them. Yep.

    Posted by Pterrafractyl | June 11, 2019, 1:59 pm
  3. Here's a presentation not to be missed on what fascism is, using Narendra Modi's regime as an example (in Hindi with English subtitles, click [cc]): https://www.youtube.com/watch?v=JpVTmlSXRck

    Posted by Atlanta Bill | July 31, 2019, 6:41 pm
  4. This next article shows how Facebook lists Breitbart and its propaganda-motivated news reporting as a legitimate news source despite the fact that its former executive chairman, Steve Bannon, ran Donald Trump's presidential campaign in 2016. Breitbart used a "black crime" tag on articles and promoted anti-Muslim and anti-immigrant views. Bannon even said, "We're the platform for the alt-right." Additionally, their former tech editor Milo Yiannopoulos had worked directly with a white nationalist and a neo-Nazi to write and edit an article defining the "alt-right" movement and advancing its ideas. The article also identifies that Facebook has been reluctant to police white nationalism and far-right hate even after the Guardian provided Facebook, in July 2017, with a list of 175 pages and groups run by hate groups, as designated by the Southern Poverty Law Center, including neo-Nazi and white nationalist groups. Facebook's actions show its real intent: the company removed just nine of them. This really calls into question Facebook's assertion that "If a publisher posts misinformation, it will no longer appear in the product." They have not made any serious attempt to address this with their policies and practices.

    The article does not address the following issue, but one should ask whether Mr. Zuckerberg's behavior supports ideologies similar to those advocated by early investor Peter Thiel.

    https://www.theguardian.com/us-news/2019/oct/25/facebook-breitbart-news-tab-alt-right?CMP=Share_iOSApp_Other

    Face­book includes Bre­it­bart in new ‘high qual­i­ty’ news tab
    The social media site has received back­lash over its choice to include a pub­li­ca­tion that has been called ‘the plat­form for the alt-right’
    Julia Carrie Wong @juliacarriew
    Fri 25 Oct 2019 16.56 EDT
    Last mod­i­fied on Fri 25 Oct 2019 16.58 EDT

    Facebook’s launch of a new sec­tion on its flag­ship app ded­i­cat­ed to “deeply-report­ed and well-sourced” jour­nal­ism sparked imme­di­ate con­tro­ver­sy on Fri­day over the inclu­sion of Bre­it­bart News, a pub­li­ca­tion whose for­mer exec­u­tive chair­man explic­it­ly embraced the “alt-right”.

    Face­book News is a sep­a­rate sec­tion of the company’s mobile app that will fea­ture arti­cles from about 200 pub­lish­ers. Friday’s launch is a test and will only be vis­i­ble to some users in the US.

    The ini­tia­tive is designed to quell crit­i­cism on two fronts: by pro­mot­ing high­er qual­i­ty jour­nal­ism over mis­in­for­ma­tion and by appeas­ing news pub­lish­ers who have long com­plained that Face­book prof­its from jour­nal­ism with­out pay­ing for it. The com­pa­ny will pay some pub­lish­ers between $1m and $3m each year to fea­ture their arti­cles, accord­ing to Bloomberg.
    Par­tic­i­pat­ing pub­li­ca­tions include the New York Times, the Wash­ing­ton Post, the Wall Street Jour­nal, Buz­zFeed, Bloomberg and ABC News, as well as local news­pa­pers such as the Chica­go Tri­bune and Dal­las Morn­ing News.

    Facebook’s chief exec­u­tive, Mark Zucker­berg, paid trib­ute to the impor­tance of “high qual­i­ty” jour­nal­ism in an op-ed pub­lished in the New York Times, which ref­er­enced “how the news has held Face­book account­able when we’ve made mis­takes”.

    Zucker­berg also allud­ed to the pow­er that Face­book will have to influ­ence the media, stat­ing: “If a pub­lish­er posts mis­in­for­ma­tion, it will no longer appear in the prod­uct.”
    The op-ed does not ref­er­ence the inclu­sion of Bre­it­bart News, but the out­let is noto­ri­ous for its role in pro­mot­ing extreme rightwing nar­ra­tives and con­spir­a­cy the­o­ries. Thou­sands of major adver­tis­ers have black­list­ed the site over its extreme views.

    Found­ed in 2005 by con­ser­v­a­tive writer Andrew Bre­it­bart, Bre­it­bart News achieved greater influ­ence and a wider audi­ence under its exec­u­tive chair­man Steve Ban­non, who went on to run Don­ald Trump’s pres­i­den­tial cam­paign in 2016. For years, the pub­li­ca­tion used a “black crime” tag on arti­cles and pro­mot­ed anti-Mus­lim and anti-immi­grant views.

    “We’re the plat­form for the alt-right,” Ban­non told a reporter in 2016.

    In 2017, Buz­zFeed News report­ed on emails and doc­u­ments show­ing how the for­mer Bre­it­bart tech edi­tor Milo Yiannopou­los had worked direct­ly with a white nation­al­ist and a neo-Nazi to write and edit an arti­cle defin­ing the “alt-right” move­ment and advanc­ing its ideas.

    Face­book has long faced scruti­ny for its ret­i­cence to police white nation­al­ism and far-right hate on its plat­form. In July 2017, the Guardian pro­vid­ed Face­book with a list of 175 pages and groups run by hate groups, as des­ig­nat­ed by the South­ern Pover­ty Law Cen­ter, includ­ing neo-Nazi and white nation­al­ist groups. The com­pa­ny removed just nine of them.

    Fol­low­ing the dead­ly “Unite the Right” ral­ly in Char­lottesville in August 2017 – which was orga­nized in part on a Face­book event page – the com­pa­ny cracked down on some white suprema­cist and neo-Nazi groups. Enforce­ment was spot­ty, how­ev­er, and a year after Char­lottesville, sev­er­al groups and indi­vid­u­als involved in Char­lottesville were back on Face­book. It was not until March 2019 that the com­pa­ny decid­ed that its pol­i­cy against hate should include white nation­al­ism, an ide­ol­o­gy that pro­motes the exclu­sion and expul­sion of non-white peo­ple from cer­tain nations.

    Face­book declined to pro­vide a full list of the par­tic­i­pat­ing pub­li­ca­tions or offer fur­ther com­ment.

    Asked about the inclu­sion of Bre­it­bart News at a launch event for Face­book News in New York, Zucker­berg declined to com­ment on “any spe­cif­ic firm” but added, “I do think that part of hav­ing this be a trust­ed source is that it needs to have a diver­si­ty of … views in there. I think you want to have con­tent that kind of rep­re­sents dif­fer­ent per­spec­tives, but also in a way that com­plies with the stan­dards that we have.”

    The Face­book CEO faced harsh ques­tion­ing from law­mak­ers this week, when he tes­ti­fied at a hear­ing of the US House of Rep­re­sen­ta­tives finan­cial ser­vices com­mit­tee. Though the hear­ing was puta­tive­ly about Facebook’s plans to launch a cryp­tocur­ren­cy, sev­er­al rep­re­sen­ta­tives pressed Zucker­berg on his company’s poor track record on com­ply­ing with US civ­il rights laws, as well as polic­ing hate speech.

    Dur­ing an exchange about the company’s deci­sion to allow politi­cians to pro­mote mis­in­for­ma­tion in paid adver­tis­ing, the Demo­c­ra­t­ic rep­re­sen­ta­tive Alexan­dria Oca­sio-Cortez pressed Zucker­berg on the inclu­sion of the Dai­ly Caller, which she called “a pub­li­ca­tion with well-doc­u­ment­ed ties to white suprema­cists”, in the company’s third-par­ty fact-check­er pro­gram. In 2018, the Atlantic revealed that a for­mer deputy edi­tor of the Dai­ly Caller also wrote under a pseu­do­nym for a white suprema­cist pub­li­ca­tion.

    Posted by Mary Benton | October 27, 2019, 4:05 pm
  5. Here's a set of articles that highlights how one of the enduring features of Facebook's attempts to police extremist hate speech on its platform has been the creation of special loopholes that allow this content to continue even after the new policies are put into effect:

    First, in March of this year, Facebook announced a significant change to its hate speech policies. Part of what made it significant is that it was the kind of change that shouldn't have been necessary in the first place: Facebook updated its policy banning "white supremacy" to also cover "white nationalism" and "white separatism". When the company initially banned white supremacy following the 2017 Unite the Right neo-Nazi rally in Charlottesville, VA, it apparently concluded that white nationalism and white separatism aren't necessarily explicitly racist in nature and would therefore continue to be allowed.

    As Ulrick Casseus, one of Facebook's policy team subject matter experts on hate groups, described the reasoning behind that initial decision to make a distinction between white nationalism/separatism and white supremacy: "When you have a broad range of people you engage with, you're going to get a range of ideas and beliefs...There were a few people who [...] did not agree that white nationalism and white separatism were inherently hateful." So there were "a few people" telling Facebook that white nationalism and white separatism aren't inherently hateful, and that was the basis for Facebook's decision. It would be interesting to know whether any of those people happened to be among the numerous people on Facebook's management team with ties to right-wing political parties. Peter Thiel is an obvious suspect, but don't forget other figures like former George W. Bush White House staffer Joel Kaplan, who was appointed Facebook's vice president of global public policy. And then there are people like Kateryna Kruk in Ukraine or Shivnath Thukral in India. Or maybe it was just some random person on Facebook's policy team with far right sympathies.

    And, of course, the new policy banning white nationalism and white separatism has a loophole: only explicit white nationalist and separatist content will be banned. Implicit and coded white nationalism and white separatism won't be banned, ostensibly because they are harder to detect. So the white nationalists/supremacists are still free to use Facebook as a propaganda/recruitment platform; they'll just have to dog-whistle a little more than before:

    Vice

    Face­book Bans White Nation­al­ism and White Sep­a­ratism
    After a civ­il rights back­lash, Face­book will now treat white nation­al­ism and sep­a­ratism the same as white suprema­cy, and will direct users who try to post that con­tent to a non­prof­it that helps peo­ple leave hate groups.

    by Joseph Cox and Jason Koe­bler
    Mar 27 2019, 11:00am

    In a major pol­i­cy shift for the world’s biggest social media net­work, Face­book banned white nation­al­ism and white sep­a­ratism on its plat­form Tues­day. Face­book will also begin direct­ing users who try to post con­tent asso­ci­at­ed with those ide­olo­gies to a non­prof­it that helps peo­ple leave hate groups, Moth­er­board has learned.

    The new pol­i­cy, which will be offi­cial­ly imple­ment­ed next week, high­lights the mal­leable nature of Facebook’s poli­cies, which gov­ern the speech of more than 2 bil­lion users world­wide. And Face­book still has to effec­tive­ly enforce the poli­cies if it is real­ly going to dimin­ish hate speech on its plat­form. The pol­i­cy will apply to both Face­book and Insta­gram.

    Last year, a Moth­er­board inves­ti­ga­tion found that, though Face­book banned “white suprema­cy” on its plat­form, it explic­it­ly allowed “white nation­al­ism” and “white sep­a­ratism.” After back­lash from civ­il rights groups and his­to­ri­ans who say there is no dif­fer­ence between the ide­olo­gies, Face­book has decid­ed to ban all three, two mem­bers of Facebook’s con­tent pol­i­cy team said.

    “We’ve had con­ver­sa­tions with more than 20 mem­bers of civ­il soci­ety, aca­d­e­mics, in some cas­es these were civ­il rights orga­ni­za­tions, experts in race rela­tions from around the world,” Bri­an Fish­man, pol­i­cy direc­tor of coun­tert­er­ror­ism at Face­book, told us in a phone call. “We decid­ed that the over­lap between white nation­al­ism, [white] sep­a­ratism, and white suprema­cy is so exten­sive we real­ly can’t make a mean­ing­ful dis­tinc­tion between them. And that’s because the lan­guage and the rhetoric that is used and the ide­ol­o­gy that it rep­re­sents over­laps to a degree that it is not a mean­ing­ful dis­tinc­tion.”

    Specif­i­cal­ly, Face­book will now ban con­tent that includes explic­it praise, sup­port, or rep­re­sen­ta­tion of white nation­al­ism or sep­a­ratism. Phras­es such as “I am a proud white nation­al­ist” and “Immi­gra­tion is tear­ing this coun­try apart; white sep­a­ratism is the only answer” will now be banned, accord­ing to the com­pa­ny. Implic­it and cod­ed white nation­al­ism and white sep­a­ratism will not be banned imme­di­ate­ly, in part because the com­pa­ny said it’s hard­er to detect and remove.

    The deci­sion was for­mal­ly made at Facebook’s Con­tent Stan­dards Forum on Tues­day, a meet­ing that includes rep­re­sen­ta­tives from a range of dif­fer­ent Face­book depart­ments in which con­tent mod­er­a­tion poli­cies are dis­cussed and ulti­mate­ly adopt­ed. Fish­man told Moth­er­board that Face­book COO Sheryl Sand­berg was involved in the for­mu­la­tion of the new pol­i­cy, though rough­ly three dozen Face­book employ­ees worked on it.

    Fish­man said that users who search for or try to post white nation­al­ism, white sep­a­ratism, or white suprema­cist con­tent will begin get­ting a pop­up that will redi­rect to the web­site for Life After Hate, a non­prof­it found­ed by ex-white suprema­cists that is ded­i­cat­ed to get­ting peo­ple to leave hate groups.

    “If peo­ple are explor­ing this move­ment, we want to con­nect them with folks that will be able to pro­vide sup­port offline,” Fish­man said. “This is the kind of work that we think is part of a com­pre­hen­sive pro­gram to take this sort of move­ment on.”

    Behind the scenes, Face­book will con­tin­ue using some of the same tac­tics it uses to sur­face and remove con­tent asso­ci­at­ed with ISIS, Al Qae­da, and oth­er ter­ror­ist groups to remove white nation­al­ist, sep­a­ratist, and suprema­cist con­tent. This includes con­tent match­ing, which algo­rith­mi­cal­ly detects and deletes images that have been pre­vi­ous­ly iden­ti­fied to con­tain hate mate­r­i­al, and will include machine learn­ing and arti­fi­cial intel­li­gence, Fish­man said, though he didn’t elab­o­rate on how those tech­niques would work.
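
    Fishman didn't elaborate on how that content matching works, but the basic idea of checking uploads against a database of previously identified images can be illustrated with a minimal sketch. The short Python snippet below is only an illustration under assumptions: the function names and the example digest are hypothetical, and Facebook's actual pipeline is not described in the article (it reportedly also relies on perceptual hashing and machine learning rather than simple exact matching).

        import hashlib

        # Hypothetical set of digests for images previously identified as hate material.
        KNOWN_HATE_DIGESTS = {
            "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
        }

        def sha256_of_file(path):
            """Return the SHA-256 hex digest of the file's bytes."""
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(8192), b""):
                    h.update(chunk)
            return h.hexdigest()

        def should_remove(path):
            """Flag an upload whose digest matches a previously identified image."""
            return sha256_of_file(path) in KNOWN_HATE_DIGESTS

    Exact matching like this only catches byte-for-byte copies; that is presumably why production systems are described as using perceptual hashes, which change little when an image is resized or re-encoded.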

    The new pol­i­cy is a sig­nif­i­cant change from the company’s old poli­cies on white sep­a­ratism and white nation­al­ism. In inter­nal mod­er­a­tion train­ing doc­u­ments obtained and pub­lished by Moth­er­board last year, Face­book argued that white nation­al­ism “doesn’t seem to be always asso­ci­at­ed with racism (at least not explic­it­ly).”

    That arti­cle elicit­ed wide­spread crit­i­cism from civ­il rights, Black his­to­ry, and extrem­ism experts, who stressed that “white nation­al­ism” and “white sep­a­ratism” are often sim­ply fronts for white suprema­cy.

    “I do think it’s a step for­ward, and a direct result of pres­sure being placed on it [Face­book],” Rashad Robin­son, pres­i­dent of cam­paign group Col­or Of Change, told Moth­er­board in a phone call.

    Experts say that white nation­al­ism and white sep­a­ratism move­ments are dif­fer­ent from oth­er sep­a­ratist move­ments such as the Basque sep­a­ratist move­ment in France and Spain and Black sep­a­ratist move­ments world­wide because of the long his­to­ry of white suprema­cism that has been used to sub­ju­gate and dehu­man­ize peo­ple of col­or in the Unit­ed States and around the world.

    “Any­one who dis­tin­guish­es white nation­al­ists from white suprema­cists does not have any under­stand­ing about the his­to­ry of white suprema­cism and white nation­al­ism, which is his­tor­i­cal­ly inter­twined,” Ibram X. Ken­di, who won a Nation­al Book Award in 2016 for Stamped from the Begin­ning: The Defin­i­tive His­to­ry of Racist Ideas in Amer­i­ca, told Moth­er­board last year.

    Hei­di Beirich, head of the South­ern Pover­ty Law Center’s (SPLC) Intel­li­gence Project, told Moth­er­board last year that “white nation­al­ism is some­thing that peo­ple like David Duke [for­mer leader of the Ku Klux Klan] and oth­ers came up with to sound less bad.”

    While there is unan­i­mous agree­ment among civ­il rights experts Moth­er­board spoke to that white nation­al­ism and sep­a­ratism are indis­tin­guish­able from white suprema­cy, the deci­sion is like­ly to be polit­i­cal­ly con­tro­ver­sial both in the Unit­ed States, where the right has accused Face­book of hav­ing an anti-con­ser­v­a­tive bias, and world­wide, espe­cial­ly in coun­tries where open­ly white nation­al­ist politi­cians have found large fol­low­ings. Face­book said that not all of the groups it spoke to believed it should change its pol­i­cy.

    “When you have a broad range of peo­ple you engage with, you’re going to get a range of ideas and beliefs,” Ulrick Casseus, a sub­ject mat­ter expert on hate groups on Facebook’s pol­i­cy team, told us. “There were a few peo­ple who [...] did not agree that white nation­al­ism and white sep­a­ratism were inher­ent­ly hate­ful.”

    But Face­book said that the over­whelm­ing major­i­ty of experts it spoke to believed that white nation­al­ism and white sep­a­ratism are tied close­ly to orga­nized hate, and that all experts it spoke to believe that white nation­al­ism expressed online has led to real-world harm. After speak­ing to these experts, Face­book decid­ed that white nation­al­ism and white sep­a­ratism are “inher­ent­ly hate­ful.”

    “We saw that was becom­ing more of a thing, where they would try to nor­mal­ize what they were doing by say­ing ‘I’m not racist, I’m a nation­al­ist’, and try to make that dis­tinc­tion. They even go so far as to say ‘I’m not a white suprema­cist, I’m a white nation­al­ist’. Time and time again they would say that but they would also have hate­ful speech and hate­ful behav­iors tied to that,” Casseus said. “They’re try­ing to nor­mal­ize it and based upon what we’ve seen and who we’ve talked to, we deter­mined that this is hate­ful, and it’s tied to orga­nized hate.”

    The change comes less than two years after Face­book inter­nal­ly clar­i­fied its poli­cies on white suprema­cy after the Char­lottesville protests of August 2017, in which a white suprema­cist killed counter-pro­test­er Heather Hey­er. That includ­ed draw­ing the dis­tinc­tion between suprema­cy and nation­al­ism that extrem­ist experts saw as prob­lem­at­ic.

    Face­book qui­et­ly made oth­er tweaks inter­nal­ly around this time. One source with direct knowl­edge of Facebook’s delib­er­a­tions said that fol­low­ing Motherboard’s report­ing, Face­book changed its inter­nal doc­u­ments to say that racial suprema­cy isn’t allowed in gen­er­al. Moth­er­board grant­ed the source anonymi­ty to speak can­did­ly about inter­nal Face­book dis­cus­sions.

    “Every­thing was rephrased so instead of say­ing white nation­al­ism is allowed while white suprema­cy isn’t, it now says racial suprema­cy isn’t allowed,” the source said last year. At the time, white nation­al­ism and Black nation­al­ism did not vio­late Facebook’s poli­cies, the source added. A Face­book spokesper­son con­firmed that it did make that change last year.

    The new pol­i­cy will not ban implic­it white nation­al­ism and white sep­a­ratism, which Casseus said is dif­fi­cult to detect and enforce. It also doesn’t change the company’s exist­ing poli­cies on sep­a­ratist and nation­al­ist move­ments more gen­er­al­ly; con­tent relat­ing to Black sep­a­ratist move­ments and the Basque sep­a­ratist move­ment, for exam­ple, will still be allowed.

    A social media pol­i­cy is only as good as its imple­men­ta­tion and enforce­ment. A recent report from NGO the Counter Extrem­ism Project found that Face­book did not remove pages belong­ing to known neo-Nazi groups after this month’s Christchurch, New Zealand ter­ror­ist attacks. Face­book wants to be sure that enforce­ment of its poli­cies is con­sis­tent around the world and from mod­er­a­tor to mod­er­a­tor, which is one of the rea­sons why its pol­i­cy doesn’t ban implic­it or cod­ed expres­sions of white nation­al­ism or white sep­a­ratism.

    David Brody, an attorney with the Lawyers' Committee for Civil Rights Under Law, which lobbied Facebook over the policy change, told Motherboard in a phone call: "if there is a certain type of problematic content that really is not amenable to enforcement at scale, they would prefer to write their policies in a way where they can pretend it doesn't exist."

    Kee­gan Han­kes, a research ana­lyst for the SPLC’s Intel­li­gence Project, added, “One thing that con­tin­u­al­ly sur­pris­es me about Face­book, is this unwill­ing­ness to rec­og­nize that even if con­tent is not explic­it­ly racist and vio­lent out­right, it [needs] to think about how their audi­ence is receiv­ing that mes­sage.”

    ...

    ———-

    “Face­book Bans White Nation­al­ism and White Sep­a­ratism” by Joseph Cox and Jason Koe­bler; Vice; 03/27/2019

    "Last year, a Motherboard investigation found that, though Facebook banned 'white supremacy' on its platform, it explicitly allowed 'white nationalism' and 'white separatism.' After backlash from civil rights groups and historians who say there is no difference between the ideologies, Facebook has decided to ban all three, two members of Facebook's content policy team said."

    Yep, when Facebook responded to the violence of Charlottesville in 2017 by banning white supremacists, the company decided to leave a giant loophole: white supremacists are banned, but white nationalists and separatists are still allowed. It's as if Facebook was trolling the public, except this was basically a secret policy that was only uncovered by a Motherboard investigation and leaked internal documents. That's a key detail here: this giant loophole was a secret loophole until Motherboard wrote an article about it in May of 2018. And it wasn't until March of 2019 that Facebook closed that giant loophole. But, of course, they created a new one: implicit and coded white nationalism and separatism are still allowed:

    ...
    “We’ve had con­ver­sa­tions with more than 20 mem­bers of civ­il soci­ety, aca­d­e­mics, in some cas­es these were civ­il rights orga­ni­za­tions, experts in race rela­tions from around the world,” Bri­an Fish­man, pol­i­cy direc­tor of coun­tert­er­ror­ism at Face­book, told us in a phone call. “We decid­ed that the over­lap between white nation­al­ism, [white] sep­a­ratism, and white suprema­cy is so exten­sive we real­ly can’t make a mean­ing­ful dis­tinc­tion between them. And that’s because the lan­guage and the rhetoric that is used and the ide­ol­o­gy that it rep­re­sents over­laps to a degree that it is not a mean­ing­ful dis­tinc­tion.”

    Specif­i­cal­ly, Face­book will now ban con­tent that includes explic­it praise, sup­port, or rep­re­sen­ta­tion of white nation­al­ism or sep­a­ratism. Phras­es such as “I am a proud white nation­al­ist” and “Immi­gra­tion is tear­ing this coun­try apart; white sep­a­ratism is the only answer” will now be banned, accord­ing to the com­pa­ny. Implic­it and cod­ed white nation­al­ism and white sep­a­ratism will not be banned imme­di­ate­ly, in part because the com­pa­ny said it’s hard­er to detect and remove.

    ...

    The new pol­i­cy is a sig­nif­i­cant change from the company’s old poli­cies on white sep­a­ratism and white nation­al­ism. In inter­nal mod­er­a­tion train­ing doc­u­ments obtained and pub­lished by Moth­er­board last year, Face­book argued that white nation­al­ism “doesn’t seem to be always asso­ci­at­ed with racism (at least not explic­it­ly).”
    ...

    Keep in mind that 'coded' white nationalism is often barely coded at all, so this is the kind of loophole that gives individual Facebook content moderators a great deal of potential flexibility in how they enforce the policy. And to underscore how easy it is for moderators to 'play dumb' about these kinds of content judgment calls, according to Facebook's hate group expert Ulrick Casseus, that initial loophole allowing white nationalism and separatism came about because "There were a few people who [...] did not agree that white nationalism and white separatism were inherently hateful." That's playing it really dumb, and that was Facebook's policy until this latest change:

    ...
    While there is unan­i­mous agree­ment among civ­il rights experts Moth­er­board spoke to that white nation­al­ism and sep­a­ratism are indis­tin­guish­able from white suprema­cy, the deci­sion is like­ly to be polit­i­cal­ly con­tro­ver­sial both in the Unit­ed States, where the right has accused Face­book of hav­ing an anti-con­ser­v­a­tive bias, and world­wide, espe­cial­ly in coun­tries where open­ly white nation­al­ist politi­cians have found large fol­low­ings. Face­book said that not all of the groups it spoke to believed it should change its pol­i­cy.

    “When you have a broad range of peo­ple you engage with, you’re going to get a range of ideas and beliefs,” Ulrick Casseus, a sub­ject mat­ter expert on hate groups on Facebook’s pol­i­cy team, told us. “There were a few peo­ple who [...] did not agree that white nation­al­ism and white sep­a­ratism were inher­ent­ly hate­ful.”

    But Face­book said that the over­whelm­ing major­i­ty of experts it spoke to believed that white nation­al­ism and white sep­a­ratism are tied close­ly to orga­nized hate, and that all experts it spoke to believe that white nation­al­ism expressed online has led to real-world harm. After speak­ing to these experts, Face­book decid­ed that white nation­al­ism and white sep­a­ratism are “inher­ent­ly hate­ful.”

    “We saw that was becom­ing more of a thing, where they would try to nor­mal­ize what they were doing by say­ing ‘I’m not racist, I’m a nation­al­ist’, and try to make that dis­tinc­tion. They even go so far as to say ‘I’m not a white suprema­cist, I’m a white nation­al­ist’. Time and time again they would say that but they would also have hate­ful speech and hate­ful behav­iors tied to that,” Casseus said. “They’re try­ing to nor­mal­ize it and based upon what we’ve seen and who we’ve talked to, we deter­mined that this is hate­ful, and it’s tied to orga­nized hate.”

    ...

    The new pol­i­cy will not ban implic­it white nation­al­ism and white sep­a­ratism, which Casseus said is dif­fi­cult to detect and enforce. It also doesn’t change the company’s exist­ing poli­cies on sep­a­ratist and nation­al­ist move­ments more gen­er­al­ly; con­tent relat­ing to Black sep­a­ratist move­ments and the Basque sep­a­ratist move­ment, for exam­ple, will still be allowed.
    ...

    So as we can see, Facebook really, really, really wants to keep some loopholes in place to ensure white supremacist content still has an outlet. And while much of that desire to keep these loopholes in place likely comes from the far right ideologies of important Facebook figures like Peter Thiel, here's an article that gives us an idea of the financial incentive to ensure Facebook remains the platform of choice for bigotry: according to a study by Sludge, between May 2018 and Sept. 17, 2019, Facebook made nearly $1.6 million from 4,921 ads purchased by 38 groups identified by the SPLC as hate groups. Quite a few of these hate groups are clearly of the white nationalist variety, like the Federation for American Immigration Reform (FAIR), which spent $910,101 on 335 ads during this period.

    Keep in mind that May of 2018 is the same month the leaked documents revealed Facebook's policy of banning white supremacy while still allowing white nationalism and separatism, so the date range for this Sludge study is basically a look at how effective that policy was at keeping white supremacist content off of Facebook. As we can see from the nearly $1 million spent by FAIR during this period, it wasn't very effective:

    Giz­mo­do

    Face­book Has Banked Near­ly $1.6 Mil­lion From SPLC-Des­ig­nat­ed Hate Groups Since May 2018

    Tom McK­ay
    9/25/19 9:50PM

    Face­book claims to be doing a lot to fight hate speech. But Face­book has also cashed near­ly $1.6 mil­lion in ad mon­ey from orga­ni­za­tions des­ig­nat­ed as hate groups by the South­ern Pover­ty Law Cen­ter between May 2018 and Sept. 17, 2019, accord­ing to a Wednes­day report by Sludge.

    The SPLC is con­sid­ered one of the nation’s most promi­nent civ­il rights watch­dogs. It clas­si­fied the 38 orga­ni­za­tions in ques­tion as hate groups because they have “beliefs or prac­tices that attack or malign an entire class of peo­ple, typ­i­cal­ly for their immutable char­ac­ter­is­tics.” (Many of the groups in ques­tion have vig­or­ous­ly con­test­ed those des­ig­na­tions and insist they are being tar­get­ed sim­ply for espous­ing con­ser­v­a­tive view­points, which is per­haps not the most per­sua­sive argu­ment in these times.)

    At the top of the list is the Fed­er­a­tion for Amer­i­can Immi­gra­tion Reform (FAIR), which Facebook’s ad data­base shows ran 335 ads at a total bill of $910,101. (FAIR was found­ed by vir­u­lent nativist and white suprema­cist John Tan­ton and reg­u­lar­ly gripes about top­ics like the chang­ing “eth­nic base” of the U.S., but has man­aged to main­tain some degree of main­stream cred­i­bil­i­ty with right-wing news out­lets.) Sec­ond was the Alliance Defend­ing Free­dom, an anti-LGBTQ Chris­t­ian group that has pushed for the crim­i­nal­iza­tion of “sodomy” in the states and abroad, at $391,669.

    Oth­er groups on Sludge’s list of Face­book ad buy­ers includ­ed the Fam­i­ly Research Coun­cil ($106,987), the anti-Mus­lim Clar­i­on Project ($55,012), and the omi­nous­ly-titled Cal­i­for­ni­ans for Pop­u­la­tion Sta­bi­liza­tion ($202,212), an anti-immi­grant group found­ed by eugeni­cist and far-right race “sci­en­tist” Gar­ret Hardin. CAP once hired a neo-Nazi as its pub­lic affairs direc­tor.

    Spe­cif­ic ads not­ed by Sludge includ­ed an ad by The Amer­i­can Vision, a group the SPLC writes has advo­cat­ed the exe­cu­tion of gay peo­ple, which linked to a now-removed blog post call­ing gay peo­ple “evil.” William Gheen, the nativist head of Amer­i­cans for Legal Immi­gra­tion PAC, pur­chased ads par­rot­ing anti-immi­gra­tion “inva­sion” rhetoric of the type cit­ed by a mass shoot­er in El Paso, Texas this year and ask­ing users to share a post stat­ing “100% OF ILLEGAL ALIENS ARE CRIMINALS.” Three local chap­ters of the Proud Boys, a far-right street brawl­ing group that earned the atten­tion of the FBI last year, also did com­par­a­tive­ly small Face­book ad buys (some of which were even­tu­al­ly removed).

    Sludge wrote that in total, Face­book ran some 4,921 ads from the 38 hate groups. Face­book has claimed that it is mak­ing progress and proac­tive­ly iden­ti­fied 65 per­cent of the hate speech it removed in Q1 2019, up from 24 per­cent in Q4 2017. But the groups have been allowed to remain, Sludge argued, because the platform’s mod­er­a­tion efforts are “main­ly focused on indi­vid­ual posts, not on the accounts that do the post­ing” and it only bans groups “that pro­claim a vio­lent mis­sion or are engaged in vio­lence”:

    Face­book may take down a hate group’s post that explic­it­ly attacks peo­ple based on a “pro­tect­ed char­ac­ter­is­tic,” but it wouldn’t ordi­nar­i­ly ban that group from its plat­form if the group didn’t have a mis­sion Face­book con­sid­ers vio­lent. For exam­ple, it removed three pages of the Proud Boys, who advo­cate vio­lence, but has let hate groups that are extreme­ly dis­crim­i­na­to­ry yet not explic­it­ly vio­lent remain. The con­trast­ing def­i­n­i­tions of hate speech and hate groups allow the com­pa­ny to take down some offen­sive posts but per­mit numer­ous hate groups to have a pres­ence, post­ing, spend­ing mon­ey, and recruit­ing on its plat­form.

    In June, Face­book released a near­ly 30-page audit pre­pared by its civ­il rights ambas­sador Lau­ra Mur­phy and rough­ly 90 promi­nent civ­il rights groups. Mul­ti­ple civ­il rights groups told Giz­mo­do that while the audit showed Face­book had made some progress, pol­i­cy changes such as its deci­sion to ban sup­port of white suprema­cy or “nation­al­ism” didn’t go far enough and the com­pa­ny had not laid out a proac­tive plan to fight the spread of hate speech.

    Facebook's much-touted machine learning algorithms for policing hate speech have also been regularly lambasted as inadequate. For example, Auburn University senior fellow and GDELT co-creator Kalev Leetaru told Gizmodo that he thought Facebook could improve its automated moderation with existing technology, but "the reason platforms are reluctant to deploy it comes down to several factors"—including the cost of running more "computationally expensive" systems and the money generated from extreme content.

    “Ter­ror­ism, hate speech, human traf­fick­ing, sex­u­al assault and oth­er hor­rif­ic imagery actu­al­ly ben­e­fits the sites mon­e­tar­i­ly,” Lee­taru added. “... Oth­er than a few high-pub­lic­i­ty cas­es of adver­tis­er back­lash against par­tic­u­lar­ly high pro­file cas­es, adver­tis­ers aren’t forc­ing the com­pa­nies to do bet­ter, and gov­ern­ments aren’t putting any pres­sure on them, so they have lit­tle incen­tive to do bet­ter.”

    Face­book has also admit­ted it failed to act appro­pri­ate­ly against mil­i­tary offi­cials in Myan­mar incit­ing geno­cide against the minor­i­ty Rohingya pop­u­la­tion. A Unit­ed Nations inves­ti­ga­tor lat­er harsh­ly crit­i­cized Facebook’s sub­se­quent efforts to do bet­ter and its efforts since have failed to inspire con­fi­dence. Oth­er report­ing has indi­cat­ed Face­book and its prop­er­ties such as What­sApp have become ves­sels for hate speech and vio­lence in coun­tries includ­ing Sri Lan­ka, India, the Philip­pines, and Libya.

    Accord­ing to Sludge, search­es for sim­i­lar con­tent on com­peti­tors Google/YouTube, Twit­ter, and Snap showed that Twit­ter took $917,000 from FAIR since Octo­ber 2018, while Google/YouTube took $90,000 from the group since the end of May 2018. “Few, if any” oth­er hate groups appeared in the Google/YouTube polit­i­cal ad archive, while no oth­er SPLC-des­ig­nat­ed groups appeared in Twit­ter or Snap’s data­bas­es, Sludge wrote. (How­ev­er, as Sludge not­ed, Facebook’s ad archive is more com­pre­hen­sive and acces­si­ble than the oth­ers’ data­base.)

    Kee­gan Han­kes, the inter­im research direc­tor of the SPLC’s Intel­li­gence Project, told Sludge, “This is an astound­ing amount of mon­ey that’s been allowed to be spent by hate groups... It is a decades-long tac­tic of these orga­ni­za­tions to dress up their rhetoric using euphemisms and using soft­er lan­guage to appeal to a wider audi­ence. They’re not just going to come out with their most extreme ide­o­log­i­cal view­points.”

    The orga­ni­za­tions in ques­tion soft-ped­al their Face­book con­tent “know­ing full well that peo­ple who are amenable to that mes­sage might very well go to their web­site or go to what­ev­er pro­pa­gan­da they’re oper­at­ing and get exposed to more extreme rhetoric,” Han­kes added. He told Sludge that he believed Face­book only takes action when it is “polit­i­cal­ly expe­di­ent,” where­as anti-immi­gra­tion, anti-Islam, and anti-LGBTQ view­points “have a lot of trac­tion in main­stream con­ser­vatism right now.”

    ...

    ———-

    “Face­book Has Banked Near­ly $1.6 Mil­lion From SPLC-Des­ig­nat­ed Hate Groups Since May 2018” by Tom McK­ay; Giz­mo­do; 09/25/2019

    "At the top of the list is the Federation for American Immigration Reform (FAIR), which Facebook's ad database shows ran 335 ads at a total bill of $910,101. (FAIR was founded by virulent nativist and white supremacist John Tanton and regularly gripes about topics like the changing 'ethnic base' of the U.S., but has managed to maintain some degree of mainstream credibility with right-wing news outlets.) Second was the Alliance Defending Freedom, an anti-LGBTQ Christian group that has pushed for the criminalization of 'sodomy' in the states and abroad, at $391,669."

    That's right, FAIR, which is about as overtly white nationalist a group as you're going to find, spent almost $1 million on Facebook ads following Facebook's policy change to ban white supremacy. And yet, as the article notes, FAIR is also treated as a credible organization within the right-wing media complex. It highlights the tragically politically charged nature of any meaningful ban of white nationalism on these platforms: not only would Facebook be giving up all that ad money, but any meaningful ban of white nationalist content would be treated by the political right, which has been increasingly embracing white nationalism for years, as a censorship attack against conservatives. So instead we have Facebook proclaiming that it's banning white nationalism and white supremacy while limiting that ban to groups that proclaim a violent mission or are engaged in violence. As long as these groups cloak their messages with enough dog-whistles and only hint at their ultimate agenda, their content will be allowed. It's, again, Facebook playing dumb, for the benefit of its bottom line and the far right:

    ...
    Sludge wrote that in total, Face­book ran some 4,921 ads from the 38 hate groups. Face­book has claimed that it is mak­ing progress and proac­tive­ly iden­ti­fied 65 per­cent of the hate speech it removed in Q1 2019, up from 24 per­cent in Q4 2017. But the groups have been allowed to remain, Sludge argued, because the platform’s mod­er­a­tion efforts are “main­ly focused on indi­vid­ual posts, not on the accounts that do the post­ing” and it only bans groups “that pro­claim a vio­lent mis­sion or are engaged in vio­lence”:

    Face­book may take down a hate group’s post that explic­it­ly attacks peo­ple based on a “pro­tect­ed char­ac­ter­is­tic,” but it wouldn’t ordi­nar­i­ly ban that group from its plat­form if the group didn’t have a mis­sion Face­book con­sid­ers vio­lent. For exam­ple, it removed three pages of the Proud Boys, who advo­cate vio­lence, but has let hate groups that are extreme­ly dis­crim­i­na­to­ry yet not explic­it­ly vio­lent remain. The con­trast­ing def­i­n­i­tions of hate speech and hate groups allow the com­pa­ny to take down some offen­sive posts but per­mit numer­ous hate groups to have a pres­ence, post­ing, spend­ing mon­ey, and recruit­ing on its plat­form.

    ...

    Accord­ing to Sludge, search­es for sim­i­lar con­tent on com­peti­tors Google/YouTube, Twit­ter, and Snap showed that Twit­ter took $917,000 from FAIR since Octo­ber 2018, while Google/YouTube took $90,000 from the group since the end of May 2018. “Few, if any” oth­er hate groups appeared in the Google/YouTube polit­i­cal ad archive, while no oth­er SPLC-des­ig­nat­ed groups appeared in Twit­ter or Snap’s data­bas­es, Sludge wrote. (How­ev­er, as Sludge not­ed, Facebook’s ad archive is more com­pre­hen­sive and acces­si­ble than the oth­ers’ data­base.)

    Kee­gan Han­kes, the inter­im research direc­tor of the SPLC’s Intel­li­gence Project, told Sludge, “This is an astound­ing amount of mon­ey that’s been allowed to be spent by hate groups... It is a decades-long tac­tic of these orga­ni­za­tions to dress up their rhetoric using euphemisms and using soft­er lan­guage to appeal to a wider audi­ence. They’re not just going to come out with their most extreme ide­o­log­i­cal view­points.”

    The orga­ni­za­tions in ques­tion soft-ped­al their Face­book con­tent “know­ing full well that peo­ple who are amenable to that mes­sage might very well go to their web­site or go to what­ev­er pro­pa­gan­da they’re oper­at­ing and get exposed to more extreme rhetoric,” Han­kes added. He told Sludge that he believed Face­book only takes action when it is “polit­i­cal­ly expe­di­ent,” where­as anti-immi­gra­tion, anti-Islam, and anti-LGBTQ view­points “have a lot of trac­tion in main­stream con­ser­vatism right now.”
    ...

    “Kee­gan Han­kes, the inter­im research direc­tor of the SPLC’s Intel­li­gence Project, told Sludge, “This is an astound­ing amount of mon­ey that’s been allowed to be spent by hate groups... It is a decades-long tac­tic of these orga­ni­za­tions to dress up their rhetoric using euphemisms and using soft­er lan­guage to appeal to a wider audi­ence. They’re not just going to come out with their most extreme ide­o­log­i­cal view­points.”

    Let's review: first Facebook bans white supremacy in response to the neo-Nazi march in Charlottesville. Then leaked internal documents reveal in May 2018 that Facebook left a giant loophole in its white supremacy ban that still allows white nationalism and white separatism, because "a few people" at Facebook felt that white nationalism and white separatism weren't inherently racist. Then, in March of 2019, Facebook announces it has realized that white nationalism and white separatism are the same as white supremacy and extends its ban to them. But the ban only applies to overt white nationalism and separatism; white nationalist code words and dog-whistling are still allowed. And then, in September, Sludge issues a report finding that Facebook sold $1.6 million in ads to hate groups between May of 2018 and September of 2019, and that almost $1 million of that ad money came from FAIR, a virulent white nationalist group that's also somewhat mainstream in right-wing media. As long as a group isn't overtly advocating violence, it will be allowed to promote and recruit for its ideas on the platform. The far right's decades-old tactic of softening its language to appeal to a wider audience is literally the loophole Facebook kept in place for these groups.

    So when we hear about other controversial recent Facebook policies, like the new loophole in Facebook's policy against lying in political ads that says politicians will still be allowed to lie, keep in mind that right-wing politicians aren't just being given a loophole that allows them to continue lying in ads. They're also given a loophole that allows them to continue promoting white nationalism. Except this particular loophole isn't limited to politicians.

    In oth­er news...

    Posted by Pterrafractyl | November 6, 2019, 3:36 pm
  6. Here's the kind of story about the abuse of social media platforms that is disturbing not just because of the content of this particular story based in Kuwait but also because there's no reason to assume the story is limited to Kuwait: BBC News Arabic conducted an undercover investigation of the apparently booming online black market in Kuwait that relies on social media platforms like Instagram (owned by Facebook) and various online marketplace apps available through Google Play and Apple's App Store. This black market happens to be in de facto human slavery. The marketplaces are used to buy and sell foreign domestic workers who come to Kuwait and operate under the Kafala system, where a domestic worker is brought into the country through their sponsor (the family hiring them) and can't change or quit their job or leave the country without the permission of their sponsor, making it effectively a system of modern slavery once someone enters it. And because the sponsorship of these 'domestic workers' can be sold at a higher price than they were bought for, this system has turned these workers into potentially for-profit commodities.

    As the article notes, 9 out of 10 Kuwaiti households have a domestic worker, so the potential size of this black market includes almost every Kuwaiti household. Part of what appears to be fueling this black market trade is a series of laws Kuwait introduced in 2015 intended to protect these domestic workers from abuse. The BBC met with over a dozen sellers, and almost all advocated confiscating the workers' passports, confining them to the house, denying them any time off and giving them little or no access to a phone. So they really were actively treating these women as slaves. The BBC even found a 16-year-old girl for sale in Kuwait, despite Kuwaiti law mandating that all domestic workers must be over 21. So this black market potentially includes child slavery.

    After the BBC notified Facebook that Instagram was being used as a marketplace for this black market, Facebook announced that it had banned one of the hashtags used on Instagram to advertise these offers. But, of course, the BBC still found many related listings active on Instagram. Similarly, Google and Apple told the BBC that they were working with app developers to address the issue. The apps used for this black market, like the 4Sale app, can be used to buy and sell all sorts of things, not just domestic workers, which complicates cracking down on this practice. The 4Sale app even lets you filter the available listings according to race. The BBC continued to find the offending apps available on the Google Play and Apple app stores after giving the companies these notifications.

    And it's not limited to Kuwait. The BBC also found hundreds of people advertised for sale in Saudi Arabia via Instagram and on the popular Haraj app. Given the relative lack of global attention given to the practice, it's hard to believe this is limited to Kuwait and Saudi Arabia, especially since the Kafala system is also practiced in Bahrain, Oman, Qatar, the UAE, Jordan and Lebanon. Any country where de facto forced labor takes place could potentially see social media used to facilitate these kinds of marketplaces.

    So while the pri­ma­ry prob­lem here stems from the fact that sys­tems like Kafala are still in use despite the clear poten­tial for abus­es, the fact that the social media giants only appear to have cracked down on this prac­tice after the BBC brought it to their atten­tion, and even then only appear to have made half-heart­ed attempts, makes them a big part of this prob­lem:

    BBC News Ara­bic

    Slave mar­kets found on Insta­gram and oth­er apps

    By Owen Pin­nell & Jess Kel­ly

    31 Octo­ber 2019

    Dri­ve around the streets of Kuwait and you won’t see these women. They are behind closed doors, deprived of their basic rights, unable to leave and at risk of being sold to the high­est bid­der.

    But pick up a smart­phone and you can scroll through thou­sands of their pic­tures, cat­e­gorised by race, and avail­able to buy for a few thou­sand dol­lars.

    An under­cov­er inves­ti­ga­tion by BBC News Ara­bic has found that domes­tic work­ers are being ille­gal­ly bought and sold online in a boom­ing black mar­ket.

    Some of the trade has been car­ried out on Face­book-owned Insta­gram, where posts have been pro­mot­ed via algo­rithm-boost­ed hash­tags, and sales nego­ti­at­ed via pri­vate mes­sages.

    Oth­er list­ings have been pro­mot­ed in apps approved and pro­vid­ed by Google Play and Apple’s App Store, as well as the e‑commerce plat­forms’ own web­sites.

    “What they are doing is pro­mot­ing an online slave mar­ket,” said Urmi­la Bhoola, the UN spe­cial rap­por­teur on con­tem­po­rary forms of slav­ery.

    “If Google, Apple, Face­book or any oth­er com­pa­nies are host­ing apps like these, they have to be held account­able.”

    After being alert­ed to the issue, Face­book said it had banned one of the hash­tags involved.

    Google and Apple said they were work­ing with app devel­op­ers to pre­vent ille­gal activ­i­ty.

    The ille­gal sales are a clear breach of the US tech firms’ rules for app devel­op­ers and users.

    How­ev­er, the BBC has found there are many relat­ed list­ings still active on Insta­gram, and oth­er apps avail­able via Apple and Google.

    Slave mar­ket

    Nine out of 10 Kuwaiti homes have a domes­tic work­er — they come from some of the poor­est parts of the world to the Gulf, aim­ing to make enough mon­ey to sup­port their fam­i­ly at home.

    Pos­ing as a cou­ple new­ly arrived in Kuwait, the BBC Ara­bic under­cov­er team spoke to 57 app users and vis­it­ed more than a dozen peo­ple who were try­ing to sell them their domes­tic work­er via a pop­u­lar com­mod­i­ty app called 4Sale.

    The sell­ers almost all advo­cat­ed con­fis­cat­ing the wom­en’s pass­ports, con­fin­ing them to the house, deny­ing them any time off and giv­ing them lit­tle or no access to a phone.

    The 4Sale app allowed you to fil­ter by race, with dif­fer­ent price brack­ets clear­ly on offer, accord­ing to cat­e­go­ry.

    “African work­er, clean and smi­ley,” said one list­ing. Anoth­er: “Nepalese who dares to ask for a day off.”

    When speak­ing to the sell­ers, the under­cov­er team fre­quent­ly heard racist lan­guage. “Indi­ans are the dirt­i­est,” said one, describ­ing a woman being adver­tised.

    Human rights vio­lat­ed

    The team were urged by app users, who act­ed as if they were the “own­ers” of these women, to deny them oth­er basic human rights, such as giv­ing them a “day or a minute or a sec­ond” off.

    One man, a police­man, look­ing to offload his work­er said: “Trust me she’s very nice, she laughs and has a smi­ley face. Even if you keep her up till 5am she won’t com­plain.”

    He told the BBC team how domes­tic work­ers were used as a com­mod­i­ty.

    “You will find some­one buy­ing a maid for 600 KD ($2,000), and sell­ing her on for 1,000 KD ($3,300),” he said.

    He sug­gest­ed how the BBC team should treat her: “The pass­port, don’t give it to her. You’re her spon­sor. Why would you give her her pass­port?”

    In one case, the BBC team was offered a 16-year-old girl. It has called her Fatou to pro­tect her real name.

    Fatou had been traf­ficked from Guinea in West Africa and had been employed as a domes­tic work­er in Kuwait for six months, when the BBC dis­cov­ered her. Kuwait­’s laws say that domes­tic work­ers must be over 21.

    Her sell­er’s sales pitch includ­ed the facts that she had giv­en Fatou no time off, her pass­port and phone had been tak­en away, and she had not allowed her to leave the house alone — all of which are ille­gal in Kuwait.

    Spon­sor’s per­mis­sion

    “This is the quin­tes­sen­tial exam­ple of mod­ern slav­ery,” said Ms Bhoola. “Here we see a child being sold and trad­ed like chat­tel, like a piece of prop­er­ty.”

    In most places in the Gulf, domes­tic work­ers are brought into the coun­try by agen­cies and then offi­cial­ly reg­is­tered with the gov­ern­ment.

    Poten­tial employ­ers pay the agen­cies a fee and become the offi­cial spon­sor of the domes­tic work­er.

    Under what is known as the Kafala sys­tem, a domes­tic work­er can­not change or quit her job, nor leave the coun­try with­out her spon­sor’s per­mis­sion.

    In 2015, Kuwait intro­duced some of the most wide-rang­ing laws to help pro­tect domes­tic work­ers. But the law was not pop­u­lar with every­one.

    Apps includ­ing 4Sale and Insta­gram enable employ­ers to sell the spon­sor­ship of their domes­tic work­ers to oth­er employ­ers, for a prof­it. This bypass­es the agen­cies, and cre­ates an unreg­u­lat­ed black mar­ket which leaves women more vul­ner­a­ble to abuse and exploita­tion.

    This online slave mar­ket is not just hap­pen­ing in Kuwait.

    In Sau­di Ara­bia, the inves­ti­ga­tion found hun­dreds of women being sold on Haraj, anoth­er pop­u­lar com­mod­i­ty app. There were hun­dreds more on Insta­gram, which is owned by Face­book.

    ‘Real hell’

    The BBC team trav­elled to Guinea to try to con­tact the fam­i­ly of Fatou, the child they had dis­cov­ered being offered for sale in Kuwait.

    Every year hun­dreds of women are traf­ficked from here to the Gulf as domes­tic work­ers.

    “Kuwait is real­ly a hell,” said one for­mer maid, who recalled being made to sleep in the same place as cows by the woman who employed her. “Kuwaiti hous­es are very bad,” said anoth­er. “No sleep, no food, noth­ing.”

    Fatou was found by the Kuwaiti author­i­ties and tak­en to the gov­ern­ment-run shel­ter for domes­tic work­ers. Two days lat­er she was deport­ed back to Guinea for being a minor.

    She told the BBC about her expe­ri­ence work­ing in three house­holds dur­ing her nine months in Kuwait: “They used to shout at me and call me an ani­mal. It hurt, it made me sad, but there was noth­ing I could do.”

    ...

    Hash­tag removed

    The Kuwaiti gov­ern­ment says it is “at war with this kind of behav­iour” and insist­ed the apps would be “heav­i­ly scru­ti­nised”.

    To date, no sig­nif­i­cant action has been tak­en against the plat­forms. And there has not been any legal action against the woman who tried to sell Fatou. The sell­er has not respond­ed to the BBC’s request for com­ment.

    Since the BBC team con­tact­ed the apps and tech com­pa­nies about their find­ings, 4Sale has removed the domes­tic work­er sec­tion of its plat­form.

    Facebook said it had banned the Arabic hashtag which translates as "#maidsfortransfer".

    “We will con­tin­ue to work with law enforce­ment, expert organ­i­sa­tions and indus­try to pre­vent this behav­iour on our plat­forms,” added a Face­book spokesman.

    There was no com­ment from the Sau­di com­mod­i­ty app, Haraj.

    Google said it was “deeply trou­bled by the alle­ga­tions”.

    “We have asked BBC to share addi­tion­al details so we can con­duct a more in-depth inves­ti­ga­tion,” it added. “We are work­ing to ensure that the app devel­op­ers put in place the nec­es­sary safe­guards to pre­vent indi­vid­u­als from con­duct­ing this activ­i­ty on their online mar­ket­places.”

    Apple said it “strict­ly pro­hib­it­ed” the pro­mo­tion of human traf­fick­ing and child exploita­tion in apps made avail­able on its mar­ket­place.

    “App devel­op­ers are respon­si­ble for polic­ing the user-gen­er­at­ed con­tent on their plat­forms,” it said.

    “We work with devel­op­ers to take imme­di­ate cor­rec­tive actions when­ev­er we find any issues and, in extreme cas­es, we will remove the app from the Store.

    “We also work with devel­op­ers to report any ille­gal­i­ties to local law enforce­ment author­i­ties.”

    The firms con­tin­ue to dis­trib­ute the 4Sale and Haraj apps, how­ev­er, on the basis that their pri­ma­ry pur­pose is to sell legit­i­mate goods and ser­vices.

    4Sale may have tack­led the prob­lem, but at the time of pub­li­ca­tion, hun­dreds of domes­tic work­ers were still being trad­ed on Haraj, Insta­gram and oth­er apps which the BBC has seen.

    ———-

    “Slave mar­kets found on Insta­gram and oth­er apps” by Owen Pin­nell & Jess Kel­ly; BBC News Ara­bic; 10/31/2019

    “What they are doing is pro­mot­ing an online slave mar­ket,” said Urmi­la Bhoola, the UN spe­cial rap­por­teur on con­tem­po­rary forms of slav­ery.”

    It's an online de facto slave market fueling the traditional de facto slave market of the Kafala system, where foreign domestic workers literally relinquish their right to leave the job or the country. And while governments like Kuwait's have belatedly passed regulations intended to protect these workers, the apps provide a loophole around those regulations:

    ...
    Spon­sor’s per­mis­sion

    “This is the quin­tes­sen­tial exam­ple of mod­ern slav­ery,” said Ms Bhoola. “Here we see a child being sold and trad­ed like chat­tel, like a piece of prop­er­ty.”

    In most places in the Gulf, domes­tic work­ers are brought into the coun­try by agen­cies and then offi­cial­ly reg­is­tered with the gov­ern­ment.

    Poten­tial employ­ers pay the agen­cies a fee and become the offi­cial spon­sor of the domes­tic work­er.

    Under what is known as the Kafala sys­tem, a domes­tic work­er can­not change or quit her job, nor leave the coun­try with­out her spon­sor’s per­mis­sion.

    In 2015, Kuwait intro­duced some of the most wide-rang­ing laws to help pro­tect domes­tic work­ers. But the law was not pop­u­lar with every­one.

    Apps includ­ing 4Sale and Insta­gram enable employ­ers to sell the spon­sor­ship of their domes­tic work­ers to oth­er employ­ers, for a prof­it. This bypass­es the agen­cies, and cre­ates an unreg­u­lat­ed black mar­ket which leaves women more vul­ner­a­ble to abuse and exploita­tion.
    ...

    And while Facebook, Google, and Apple have pledged to end this practice, not much appears to have actually been done. The Haraj app being used in Saudi Arabia is still available in the app stores, and hundreds of workers are still being bought and sold on Haraj, Instagram, and other apps:

    ...
    “If Google, Apple, Face­book or any oth­er com­pa­nies are host­ing apps like these, they have to be held account­able.”

    After being alert­ed to the issue, Face­book said it had banned one of the hash­tags involved.

    Google and Apple said they were work­ing with app devel­op­ers to pre­vent ille­gal activ­i­ty.

    ...

    How­ev­er, the BBC has found there are many relat­ed list­ings still active on Insta­gram, and oth­er apps avail­able via Apple and Google.

    ...

    Since the BBC team con­tact­ed the apps and tech com­pa­nies about their find­ings, 4Sale has removed the domes­tic work­er sec­tion of its plat­form.

    ...

    The firms con­tin­ue to dis­trib­ute the 4Sale and Haraj apps, how­ev­er, on the basis that their pri­ma­ry pur­pose is to sell legit­i­mate goods and ser­vices.

    4Sale may have tack­led the prob­lem, but at the time of pub­li­ca­tion, hun­dreds of domes­tic work­ers were still being trad­ed on Haraj, Insta­gram and oth­er apps which the BBC has seen.
    ...

    Also keep in mind that Kuwait's 2015 law giving extra protections to these domestic workers still leaves them trapped in a system where they can't leave without the permission of their sponsors. It's still a wildly abusive system even with these new protections. Worse, Kuwait's protections for these workers are the most extensive among the countries that use the Kafala system. It's part of what makes the role these social media giants are playing in facilitating this trade so egregious: they're one of the only parties in this trade that can realistically be expected to even try to crack down on it, and yet, as we can see, that's not actually a realistic expectation.

    Posted by Pterrafractyl | November 7, 2019, 1:36 pm
