Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

For The Record  

FTR #996 Civilization’s Twilight: Update on Technocratic Fascism

Dave Emory’s entire lifetime of work is available on a flash drive that can be obtained HERE. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by the fall of 2017. The new drive is available for a tax-deductible contribution of $65.00 or more.

WFMU-FM is pod­cast­ing For The Record–You can sub­scribe to the pod­cast HERE.

You can sub­scribe to e‑mail alerts from Spitfirelist.com HERE.

You can subscribe to the RSS feed from Spitfirelist.com HERE.

You can sub­scribe to the com­ments made on pro­grams and posts–an excel­lent source of infor­ma­tion in, and of, itself HERE.

This broadcast was recorded in one 60-minute segment.

Intro­duc­tion: Updat­ing our ongo­ing analy­sis of what Mr. Emory calls “tech­no­crat­ic fas­cism,” we exam­ine how exist­ing tech­nolo­gies are neu­tral­iz­ing and/or ren­der­ing obso­lete foun­da­tion­al ele­ments of our civ­i­liza­tion and demo­c­ra­t­ic gov­ern­men­tal sys­tems.

For purposes of refreshing the line of argument presented here, we reference a vitally important article by David Golumbia. ” . . . . Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (‘civic,’ ‘ethical,’ ‘white’ and ‘black’ hat alike), hacktivists, WikiLeaks fans [and Julian Assange et al–D. E.], Anonymous ‘members,’ even Edward Snowden himself walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands. . . . [Tor co-creator] Dingledine asserts that a small group of software developers can assign to themselves that role, and that members of democratic polities have no choice but to accept them having that role. . . .”

Beginning with a chilling opinion piece in the New York Times, we note that technological development threatens to super-charge the Big Lies that drive our world. As anyone who saw the Star Wars film “Rogue One” knows, the technology required to create a nearly life-like computer-generated video of a real person is already a reality. Once the province of movie studios and other firms with millions to spend, the technology is now available for download for free.

” . . . . In 2016 Gareth Edwards, the direc­tor of the Star Wars film ‘Rogue One,’ was able to cre­ate a scene fea­tur­ing a young Princess Leia by manip­u­lat­ing images of Car­rie Fish­er as she looked in 1977. Mr. Edwards had the best hard­ware and soft­ware a $200 mil­lion Hol­ly­wood bud­get could buy. Less than two years lat­er, images of sim­i­lar qual­i­ty can be cre­at­ed with soft­ware avail­able for free down­load on Red­dit. That was how a faked video sup­pos­ed­ly of the actress Emma Wat­son in a show­er with anoth­er woman end­ed up on the web­site Celeb Jihad. . . .”

The tech­nol­o­gy has already ren­dered obso­lete selec­tive edit­ing such as that per­formed by James O’Keefe: ” . . . . as the nov­el­ist William Gib­son once said, ‘The street finds its own uses for things.’ So do rogue polit­i­cal actors. The impli­ca­tions for democ­ra­cy are eye-open­ing. The con­ser­v­a­tive polit­i­cal activist James O’Keefe has cre­at­ed a cot­tage indus­try manip­u­lat­ing polit­i­cal per­cep­tions by edit­ing footage in mis­lead­ing ways. In 2018, low-tech edit­ing like Mr. O’Keefe’s is already an anachro­nism: Imag­ine what even less scrupu­lous activists could do with the pow­er to cre­ate ‘video’ fram­ing real peo­ple for things they’ve nev­er actu­al­ly done. One har­row­ing poten­tial even­tu­al­i­ty: Fake video and audio may become so con­vinc­ing that it can’t be dis­tin­guished from real record­ings, ren­der­ing audio and video evi­dence inad­mis­si­ble in court. . . .”

After highlighting a story about AI-generated “deepfake” pornography, with people’s faces superimposed on others’ bodies in pornographic layouts, we note how robots have altered our political and commercial landscapes through cyber technology: ” . . . . Robots are getting better, every day, at impersonating humans. When directed by opportunists, malefactors and sometimes even nation-states, they pose a particular threat to democratic societies, which are premised on being open to the people. Robots posing as people have become a menace. . . . In coming years, campaign finance limits will be (and maybe already are) evaded by robot armies posing as ‘small’ donors. And actual voting is another obvious target — perhaps the ultimate target. . . .”

Before the actual replacement of manual labor by robots, devices to technocratically “improve”–read “coercively engineer”–workers have been patented by Amazon and used on workers in some of its facilities. ” . . . . What if your employer made you wear a wristband that tracked your every move, and that even nudged you via vibrations when it judged that you were doing something wrong? What if your supervisor could identify every time you paused to scratch or fidget, and for how long you took a bathroom break? What may sound like dystopian fiction could become a reality for Amazon warehouse workers around the world. The company has won two patents for such a wristband. . . .”

For some U.K. Amazon warehouse workers, the future is now: ” . . . . Max Crawford, a former Amazon warehouse worker in Britain, said in a phone interview, ‘After a year working on the floor, I felt like I had become a version of the robots I was working with.’ He described having to process hundreds of items in an hour — a pace so extreme that one day, he said, he fell over from dizziness. ‘There was no time to go to the loo,’ he said, using the British slang for toilet. ‘You had to process the items in seconds and then move on. If you didn’t meet targets, you were fired.’

“He worked back and forth at two Ama­zon ware­hous­es for more than two years and then quit in 2015 because of health con­cerns, he said: ‘I got burned out.’ Mr. Craw­ford agreed that the wrist­bands might save some time and labor, but he said the track­ing was ‘stalk­er­ish’ and feared that work­ers might be unfair­ly scru­ti­nized if their hands were found to be ‘in the wrong place at the wrong time.’ ‘They want to turn peo­ple into machines,’ he said. ‘The robot­ic tech­nol­o­gy isn’t up to scratch yet, so until it is, they will use human robots.’ . . . .”

Some tech workers, well placed at R & D pacesetters and giants such as Facebook and Google, have done an about-face on the impact of their earlier efforts and are now struggling against the misuse of the technologies they helped to launch:

” . . . . A group of Silicon Valley technologists who were early employees at Facebook and Google, alarmed over the ill effects of social networks and smartphones, are banding together to challenge the companies they helped build. . . . ‘The largest supercomputers in the world are inside of two companies — Google and Facebook — and where are we pointing them?’ Mr. [Tristan] Harris said. ‘We’re pointing them at people’s brains, at children.’ . . . . Mr. [Roger McNamee] said he had joined the Center for Humane Technology because he was horrified by what he had helped enable as an early Facebook investor. ‘Facebook appeals to your lizard brain — primarily fear and anger,’ he said. ‘And with smartphones, they’ve got you for every waking moment.’ . . . .”

Transitioning to our next program–updating AI (artificial intelligence) technology as it applies to technocratic fascism–we note that AI machines are being designed to develop other AIs–“The Rise of the Machine.” ” . . . . Jeff Dean, one of Google’s leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data. AutoML, in turn, is a machine learning algorithm that learns to build other machine-learning algorithms. With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry. . . .”

1. There was a chilling recent opinion piece in the New York Times. Technological development threatens to super-charge the Big Lies that drive our world. As anyone who saw the Star Wars film “Rogue One” knows well, the technology required to create a nearly life-like computer-generated video of a real person is already a reality. Once the province of movie studios and other firms with millions to spend, the technology is now available for download for free.

” . . . . In 2016 Gareth Edwards, the direc­tor of the Star Wars film ‘Rogue One,’ was able to cre­ate a scene fea­tur­ing a young Princess Leia by manip­u­lat­ing images of Car­rie Fish­er as she looked in 1977. Mr. Edwards had the best hard­ware and soft­ware a $200 mil­lion Hol­ly­wood bud­get could buy. Less than two years lat­er, images of sim­i­lar qual­i­ty can be cre­at­ed with soft­ware avail­able for free down­load on Red­dit. That was how a faked video sup­pos­ed­ly of the actress Emma Wat­son in a show­er with anoth­er woman end­ed up on the web­site Celeb Jihad. . . .”

The tech­nol­o­gy has already ren­dered obso­lete selec­tive edit­ing such as that per­formed by James O’Keefe: ” . . . . as the nov­el­ist William Gib­son once said, ‘The street finds its own uses for things.’ So do rogue polit­i­cal actors. The impli­ca­tions for democ­ra­cy are eye-open­ing. The con­ser­v­a­tive polit­i­cal activist James O’Keefe has cre­at­ed a cot­tage indus­try manip­u­lat­ing polit­i­cal per­cep­tions by edit­ing footage in mis­lead­ing ways. In 2018, low-tech edit­ing like Mr. O’Keefe’s is already an anachro­nism: Imag­ine what even less scrupu­lous activists could do with the pow­er to cre­ate ‘video’ fram­ing real peo­ple for things they’ve nev­er actu­al­ly done. One har­row­ing poten­tial even­tu­al­i­ty: Fake video and audio may become so con­vinc­ing that it can’t be dis­tin­guished from real record­ings, ren­der­ing audio and video evi­dence inad­mis­si­ble in court. . . .”

“Our Hack­able Polit­i­cal Future” by Hen­ry J. Far­rell and Rick Perl­stein; The New York Times; 02/04/2018

Imag­ine it is the spring of 2019. A bot­tom-feed­ing web­site, per­haps tied to Rus­sia, “sur­faces” video of a sex scene star­ring an 18-year-old Kirsten Gilli­brand. It is soon debunked as a fake, the prod­uct of a user-friend­ly video appli­ca­tion that employs gen­er­a­tive adver­sar­i­al net­work tech­nol­o­gy to con­vinc­ing­ly swap out one face for anoth­er.
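[A note on the technology the authors invoke: a generative adversarial network (GAN) trains two neural networks against each other — a generator that fabricates samples and a discriminator that tries to tell fakes from real data — until the fakes become convincing. Below is a minimal toy sketch of that adversarial loop in PyTorch, not code from any actual face-swap application; all dimensions and data are invented placeholders.]

```python
# Minimal GAN sketch: generator G learns to produce samples that
# discriminator D cannot distinguish from "real" data. Toy sizes only.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # placeholder sizes, not a real face model

G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_data = torch.randn(256, data_dim)  # stand-in for real image features

for step in range(1000):
    # 1) Train the discriminator to score real data 1 and fakes 0.
    fake = G(torch.randn(64, latent_dim)).detach()
    real = real_data[torch.randint(0, 256, (64,))]
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    loss_g = bce(D(G(torch.randn(64, latent_dim))), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```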

It is the sum­mer of 2019, and the sto­ry, pre­dictably, has stuck around — part talk-show joke, part right-wing talk­ing point. “It’s news,” polit­i­cal jour­nal­ists say in their own defense. “Peo­ple are talk­ing about it. How can we not?”

Then it is fall. The junior sen­a­tor from New York State announces her cam­paign for the pres­i­den­cy. At a din­er in New Hamp­shire, one “low infor­ma­tion” vot­er asks anoth­er: “Kirsten What’s‑her-name? She’s run­ning for pres­i­dent? Didn’t she have some­thing to do with pornog­ra­phy?”

Wel­come to the shape of things to come. In 2016 Gareth Edwards, the direc­tor of the Star Wars film “Rogue One,” was able to cre­ate a scene fea­tur­ing a young Princess Leia by manip­u­lat­ing images of Car­rie Fish­er as she looked in 1977. Mr. Edwards had the best hard­ware and soft­ware a $200 mil­lion Hol­ly­wood bud­get could buy. Less than two years lat­er, images of sim­i­lar qual­i­ty can be cre­at­ed with soft­ware avail­able for free down­load on Red­dit. That was how a faked video sup­pos­ed­ly of the actress Emma Wat­son in a show­er with anoth­er woman end­ed up on the web­site Celeb Jihad.

Pro­grams like these have many legit­i­mate appli­ca­tions. They can help com­put­er-secu­ri­ty experts probe for weak­ness­es in their defens­es and help self-dri­ving cars learn how to nav­i­gate unusu­al weath­er con­di­tions. But as the nov­el­ist William Gib­son once said, “The street finds its own uses for things.” So do rogue polit­i­cal actors. The impli­ca­tions for democ­ra­cy are eye-open­ing.

The con­ser­v­a­tive polit­i­cal activist James O’Keefe has cre­at­ed a cot­tage indus­try manip­u­lat­ing polit­i­cal per­cep­tions by edit­ing footage in mis­lead­ing ways. In 2018, low-tech edit­ing like Mr. O’Keefe’s is already an anachro­nism: Imag­ine what even less scrupu­lous activists could do with the pow­er to cre­ate “video” fram­ing real peo­ple for things they’ve nev­er actu­al­ly done. One har­row­ing poten­tial even­tu­al­i­ty: Fake video and audio may become so con­vinc­ing that it can’t be dis­tin­guished from real record­ings, ren­der­ing audio and video evi­dence inad­mis­si­ble in court.

A pro­gram called Face2Face, devel­oped at Stan­ford, films one per­son speak­ing, then manip­u­lates that person’s image to resem­ble some­one else’s. Throw in voice manip­u­la­tion tech­nol­o­gy, and you can lit­er­al­ly make any­one say any­thing — or at least seem to.

The tech­nol­o­gy isn’t quite there; Princess Leia was a lit­tle wood­en, if you looked care­ful­ly. But it’s clos­er than you might think. And even when fake video isn’t per­fect, it can con­vince peo­ple who want to be con­vinced, espe­cial­ly when it rein­forces offen­sive gen­der or racial stereo­types.

In 2007, Barack Obama’s polit­i­cal oppo­nents insist­ed that footage exist­ed of Michelle Oba­ma rant­i­ng against “whitey.” In the future, they may not have to wor­ry about whether it actu­al­ly exist­ed. If some­one called their bluff, they may sim­ply be able to invent it, using data from stock pho­tos and pre-exist­ing footage.

The next step would be one we are already famil­iar with: the exploita­tion of the algo­rithms used by social media sites like Twit­ter and Face­book to spread sto­ries viral­ly to those most inclined to show inter­est in them, even if those sto­ries are fake.

It might be impos­si­ble to stop the advance of this kind of tech­nol­o­gy. But the rel­e­vant algo­rithms here aren’t only the ones that run on com­put­er hard­ware. They are also the ones that under­gird our too eas­i­ly hacked media sys­tem, where garbage acquires the per­fumed scent of legit­i­ma­cy with all too much ease. Edi­tors, jour­nal­ists and news pro­duc­ers can play a role here — for good or for bad.

Out­lets like Fox News spread sto­ries about the mur­der of Demo­c­ra­t­ic staff mem­bers and F.B.I. con­spir­a­cies to frame the pres­i­dent. Tra­di­tion­al news orga­ni­za­tions, fear­ing that they might be left behind in the new atten­tion econ­o­my, strug­gle to max­i­mize “engage­ment with con­tent.”

This gives them a built-in incen­tive to spread infor­ma­tion­al virus­es that enfee­ble the very demo­c­ra­t­ic insti­tu­tions that allow a free media to thrive. Cable news shows con­sid­er it their pro­fes­sion­al duty to pro­vide “bal­ance” by giv­ing par­ti­san talk­ing heads free rein to spout non­sense — or ampli­fy the non­sense of our cur­rent pres­i­dent.

It already feels as though we are living in an alternative science-fiction universe where no one agrees on what is true. Just think how much worse it will be when fake news becomes fake video. Democracy assumes that its citizens share the same reality. We’re about to find out whether democracy can be preserved when this assumption no longer holds.

2. Both Twit­ter and Porn­Hub, the online pornog­ra­phy giant, are already tak­ing action to remove numer­ous “Deep­fake” videos of celebri­ties being super-imposed onto porn actors in response to the flood of such videos that are already being gen­er­at­ed.

“Porn­Hub, Twit­ter Ban ‘Deep­fake’ AI-Mod­i­fied Porn” by Angela Moscar­i­to­lo; PC Mag­a­zine; 02/07/2018.

It might be kind of com­i­cal to see Nico­las Cage’s face on the body of a woman, but expect to see less of this type of con­tent float­ing around on Porn­Hub and Twit­ter in the future.

As Moth­er­board first report­ed, both sites are tak­ing action against arti­fi­cial intel­li­gence-pow­ered pornog­ra­phy, known as “deep­fakes.”

Deep­fakes, for the unini­ti­at­ed, are porn videos cre­at­ed by using a machine learn­ing algo­rithm to match someone’s face to anoth­er person’s body. Loads of celebri­ties have had their faces used in porn scenes with­out their con­sent, and the results are almost flaw­less. Check out the SFW exam­ple below for a bet­ter idea of what we’re talk­ing about.
[see chill­ing­ly real­is­tic video of Nico­las Cage’s head on a woman’s body]
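[The tools Motherboard describes reportedly work along the lines of a shared-encoder autoencoder: one encoder learns a common representation of faces, and a separate decoder is trained per identity, so that encoding person A and decoding with person B’s decoder produces the swap. The following is a minimal, hypothetical sketch of that design in PyTorch — toy shapes and random stand-in data, not the actual application’s code.]

```python
# Shared encoder, one decoder per identity: the "swap" is encoding
# person A's face and decoding it with person B's decoder.
import torch
import torch.nn as nn

class FaceAutoencoder(nn.Module):
    def __init__(self, dim=64 * 64 * 3, code=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 1024), nn.ReLU(), nn.Linear(1024, code))
        self.decoder_a = nn.Sequential(nn.Linear(code, 1024), nn.ReLU(), nn.Linear(1024, dim))
        self.decoder_b = nn.Sequential(nn.Linear(code, 1024), nn.ReLU(), nn.Linear(1024, dim))

    def forward(self, x, identity):
        code = self.encoder(x)
        return self.decoder_a(code) if identity == "a" else self.decoder_b(code)

model = FaceAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

faces_a = torch.rand(32, 64 * 64 * 3)  # stand-ins for aligned face crops
faces_b = torch.rand(32, 64 * 64 * 3)

for step in range(100):
    # Each decoder learns to reconstruct its own person's faces.
    loss = loss_fn(model(faces_a, "a"), faces_a) + loss_fn(model(faces_b, "b"), faces_b)
    opt.zero_grad(); loss.backward(); opt.step()

# The swap: person A's faces rendered through person B's decoder.
swapped = model(faces_a, "b")
```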
In a state­ment to PCMag on Wednes­day, Porn­Hub Vice Pres­i­dent Corey Price said the com­pa­ny in 2015 intro­duced a sub­mis­sion form, which lets users eas­i­ly flag non­con­sen­su­al con­tent like revenge porn for removal. Peo­ple have also start­ed using that tool to flag deep­fakes, he said.

The com­pa­ny still has a lot of clean­ing up to do. Moth­er­board report­ed there are still tons of deep­fakes on Porn­Hub.

“I was able to eas­i­ly find dozens of deep­fakes post­ed in the last few days, many under the search term ‘deep­fakes’ or with deep­fakes and the name of celebri­ties in the title of the video,” Motherboard’s Saman­tha Cole wrote.

Over on Twit­ter, mean­while, users can now be sus­pend­ed for post­ing deep­fakes and oth­er non­con­sen­su­al porn.

“We will sus­pend any account we iden­ti­fy as the orig­i­nal poster of inti­mate media that has been pro­duced or dis­trib­uted with­out the subject’s con­sent,” a Twit­ter spokesper­son told Moth­er­board. “We will also sus­pend any account ded­i­cat­ed to post­ing this type of con­tent.”

The site report­ed that Dis­cord and Gfy­cat take a sim­i­lar stance on deep­fakes. For now, these types of videos appear to be pri­mar­i­ly cir­cu­lat­ing via Red­dit, where the deep­fake com­mu­ni­ty cur­rent­ly boasts around 90,000 sub­scribers.

3. No “ifs,” “ands,” or “bots!”  ” . . . . Robots are get­ting bet­ter, every day, at imper­son­at­ing humans. When direct­ed by oppor­tunists, male­fac­tors and some­times even nation-states, they pose a par­tic­u­lar threat to demo­c­ra­t­ic soci­eties, which are premised on being open to the peo­ple. Robots pos­ing as peo­ple have become a men­ace. . . . In com­ing years, cam­paign finance lim­its will be (and maybe already are) evad­ed by robot armies pos­ing as ‘small’ donors. And actu­al vot­ing is anoth­er obvi­ous tar­get — per­haps the ulti­mate tar­get. . . .”

“Please Prove You’re Not a Robot” by Tim Wu; The New York Times; 7/16/2017; p. 8 (Review Sec­tion).

 When sci­ence fic­tion writ­ers first imag­ined robot inva­sions, the idea was that bots would become smart and pow­er­ful enough to take over the world by force, whether on their own or as direct­ed by some evil­do­er. In real­i­ty, some­thing only slight­ly less scary is hap­pen­ing.

Robots are get­ting bet­ter, every day, at imper­son­at­ing humans. When direct­ed by oppor­tunists, male­fac­tors and some­times even nation-states, they pose a par­tic­u­lar threat to demo­c­ra­t­ic soci­eties, which are premised on being open to the peo­ple.

Robots pos­ing as peo­ple have become a men­ace. For pop­u­lar Broad­way shows (need we say “Hamil­ton”?), it is actu­al­ly bots, not humans, who do much and maybe most of the tick­et buy­ing. Shows sell out imme­di­ate­ly, and the mid­dle­men (quite lit­er­al­ly, evil robot mas­ters) reap mil­lions in ill-got­ten gains.

Philip Howard, who runs the Com­pu­ta­tion­al Pro­pa­gan­da Research Project at Oxford, stud­ied the deploy­ment of pro­pa­gan­da bots dur­ing vot­ing on Brex­it, and the recent Amer­i­can and French pres­i­den­tial elec­tions. Twit­ter is par­tic­u­lar­ly dis­tort­ed by its mil­lions of robot accounts; dur­ing the French elec­tion, it was prin­ci­pal­ly Twit­ter robots who were try­ing to make #Macron­Leaks into a scan­dal. Face­book has admit­ted it was essen­tial­ly hacked dur­ing the Amer­i­can elec­tion in Novem­ber. In Michi­gan, Mr. Howard notes, “junk news was shared just as wide­ly as pro­fes­sion­al news in the days lead­ing up to the elec­tion.”

Robots are also being used to attack the demo­c­ra­t­ic fea­tures of the admin­is­tra­tive state. This spring, the Fed­er­al Com­mu­ni­ca­tions Com­mis­sion put its pro­posed revo­ca­tion of net neu­tral­i­ty up for pub­lic com­ment. In pre­vi­ous years such pro­ceed­ings attract­ed mil­lions of (human) com­men­ta­tors. This time, some­one with an agen­da but no actu­al pub­lic sup­port unleashed robots who imper­son­at­ed (via stolen iden­ti­ties) hun­dreds of thou­sands of peo­ple, flood­ing the sys­tem with fake com­ments against fed­er­al net neu­tral­i­ty rules.

To be sure, today’s imper­son­ation-bots are dif­fer­ent from the robots imag­ined in sci­ence fic­tion: They aren’t sen­tient, don’t car­ry weapons and don’t have phys­i­cal bod­ies. Instead, fake humans just have what­ev­er is nec­es­sary to make them seem human enough to “pass”: a name, per­haps a vir­tu­al appear­ance, a cred­it-card num­ber and, if nec­es­sary, a pro­fes­sion, birth­day and home address. They are brought to life by pro­grams or scripts that give one per­son the pow­er to imi­tate thou­sands.

The prob­lem is almost cer­tain to get worse, spread­ing to even more areas of life as bots are trained to become bet­ter at mim­ic­k­ing humans. Giv­en the degree to which prod­uct reviews have been swamped by robots (which tend to hand out five stars with aban­don), com­mer­cial sab­o­tage in the form of neg­a­tive bot reviews is not hard to pre­dict.

In com­ing years, cam­paign finance lim­its will be (and maybe already are) evad­ed by robot armies pos­ing as “small” donors. And actu­al vot­ing is anoth­er obvi­ous tar­get — per­haps the ulti­mate tar­get. So far, we’ve been con­tent to leave the prob­lem to the tech indus­try, where the focus has been on build­ing defens­es, usu­al­ly in the form of Captchas (“com­plete­ly auto­mat­ed pub­lic Tur­ing test to tell com­put­ers and humans apart”), those annoy­ing “type this” tests to prove you are not a robot. But leav­ing it all to indus­try is not a long-term solu­tion.
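[For reference, a captcha works by generating a challenge that is easy for humans and hard for simple scripts. A minimal sketch of the idea in Python using the Pillow imaging library — a toy illustration, far weaker than any production captcha:]

```python
# Toy captcha: render random text with jitter, noise lines, and blur
# so naive OCR bots fail while humans can still read it.
import random
import string
from PIL import Image, ImageDraw, ImageFilter

def make_captcha(length=5, size=(160, 60)):
    text = "".join(random.choices(string.ascii_uppercase + string.digits, k=length))
    img = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(img)
    # Draw each character at a slightly jittered position.
    for i, ch in enumerate(text):
        x = 15 + i * 28 + random.randint(-4, 4)
        y = 15 + random.randint(-8, 8)
        draw.text((x, y), ch, fill="black")
    # Crossing lines plus a light blur frustrate simple segmentation.
    for _ in range(4):
        pts = [(random.randint(0, size[0]), random.randint(0, size[1])) for _ in range(2)]
        draw.line(pts, fill="gray", width=2)
    return text, img.filter(ImageFilter.GaussianBlur(0.6))

answer, image = make_captcha()
# The server keeps `answer` and sends `image`; the user types the text back.
```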

For one thing, the defens­es don’t actu­al­ly deter imper­son­ation bots, but per­verse­ly reward who­ev­er can beat them. And per­haps the great­est prob­lem for a democ­ra­cy is that com­pa­nies like Face­book and Twit­ter lack a seri­ous finan­cial incen­tive to do any­thing about mat­ters of pub­lic con­cern, like the mil­lions of fake users who are cor­rupt­ing the demo­c­ra­t­ic process.

Twit­ter esti­mates at least 27 mil­lion prob­a­bly fake accounts; researchers sug­gest the real num­ber is clos­er to 48 mil­lion, yet the com­pa­ny does lit­tle about the prob­lem. The prob­lem is a pub­lic as well as pri­vate one, and imper­son­ation robots should be con­sid­ered what the law calls “hostis humani gener­is”: ene­mies of mankind, like pirates and oth­er out­laws. That would allow for a bet­ter offen­sive strat­e­gy: bring­ing the pow­er of the state to bear on the peo­ple deploy­ing the robot armies to attack com­merce or democ­ra­cy.

The ideal anti-robot campaign would employ a mixed technological and legal approach. Improved robot detection might help us find the robot masters or potentially help national security unleash counterattacks, which can be necessary when attacks come from overseas. There may be room for deputizing private parties to hunt down bad robots. A simple legal remedy would be a “Blade Runner” law that makes it illegal to deploy any program that hides its real identity to pose as a human. Automated processes should be required to state, “I am a robot.” When dealing with a fake human, it would be nice to know.
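[What compliance with Wu’s proposed disclosure rule might look like, as a hypothetical sketch: every automated posting routine prepends the required self-identification. `post_message` is an invented stand-in for a real platform API, not an actual library call.]

```python
# Hypothetical "Blade Runner"-law-compliant bot: all automated
# messages lead with an explicit self-identification.
from datetime import datetime, timezone

DISCLOSURE = "I am a robot."

def post_message(channel: str, text: str) -> None:
    # Stand-in for a real platform API; here we just print with a timestamp.
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    print(f"[{stamp}] to {channel}: {text}")

def bot_post(channel: str, text: str) -> None:
    # Every automated message carries the required disclosure up front.
    post_message(channel, f"{DISCLOSURE} {text}")

bot_post("#public-comments", "I support the proposed rule change.")
```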

Using robots to fake sup­port, steal tick­ets or crash democ­ra­cy real­ly is the kind of evil that sci­ence fic­tion writ­ers were warn­ing about. The use of robots takes advan­tage of the fact that polit­i­cal cam­paigns, elec­tions and even open mar­kets make human­is­tic assump­tions, trust­ing that there is wis­dom or at least legit­i­ma­cy in crowds and val­ue in pub­lic debate. But when sup­port and opin­ion can be man­u­fac­tured, bad or unpop­u­lar argu­ments can win not by log­ic but by a nov­el, dan­ger­ous form of force — the ulti­mate threat to every democ­ra­cy.

4. Before the actual replacement of manual labor by robots, devices to technocratically “improve”–read “coercively engineer”–workers have been patented by Amazon and used on workers in some of its facilities.

” . . . . What if your employ­er made you wear a wrist­band that tracked your every move, and that even nudged you via vibra­tions when it judged that you were doing some­thing wrong? What if your super­vi­sor could iden­ti­fy every time you paused to scratch or fid­get, and for how long you took a bath­room break? What may sound like dystopi­an fic­tion could become a real­i­ty for Ama­zon ware­house work­ers around the world. The com­pa­ny has won two patents for such a wrist­band. . . .”

For some U.K. Amazon warehouse workers, the future is now: ” . . . . Max Crawford, a former Amazon warehouse worker in Britain, said in a phone interview, ‘After a year working on the floor, I felt like I had become a version of the robots I was working with.’ He described having to process hundreds of items in an hour — a pace so extreme that one day, he said, he fell over from dizziness. ‘There was no time to go to the loo,’ he said, using the British slang for toilet. ‘You had to process the items in seconds and then move on. If you didn’t meet targets, you were fired.’

He worked back and forth at two Ama­zon ware­hous­es for more than two years and then quit in 2015 because of health con­cerns, he said: ‘I got burned out.’ Mr. Craw­ford agreed that the wrist­bands might save some time and labor, but he said the track­ing was ‘stalk­er­ish’ and feared that work­ers might be unfair­ly scru­ti­nized if their hands were found to be ‘in the wrong place at the wrong time.’ ‘They want to turn peo­ple into machines,’ he said. ‘The robot­ic tech­nol­o­gy isn’t up to scratch yet, so until it is, they will use human robots.’ . . . .”

“Track Hands of Work­ers? Ama­zon Has Patents for It” by Cey­lan Yegin­su; The New York Times; 2/2/2018; P. B3 [West­ern Edi­tion].

 What if your employ­er made you wear a wrist­band that tracked your every move, and that even nudged you via vibra­tions when it judged that you were doing some­thing wrong? What if your super­vi­sor could iden­ti­fy every time you paused to scratch or fid­get, and for how long you took a bath­room break?

What may sound like dystopi­an fic­tion could become a real­i­ty for Ama­zon ware­house work­ers around the world. The com­pa­ny has won two patents for such a wrist­band, though it was unclear if Ama­zon planned to actu­al­ly man­u­fac­ture the track­ing device and have employ­ees wear it.

The online retail giant, which plans to build a second headquarters and recently shortlisted 20 potential host cities for it, has also been known to experiment in-house with new technology before selling it worldwide.

Amazon, which rarely discloses information on its patents, could not immediately be reached for comment on Thursday. But the patent disclosure goes to the heart of a global debate about privacy and security. Amazon already has a reputation for a workplace culture that thrives on a hard-hitting management style, and has experimented with how far it can push white-collar workers in order to reach its delivery targets.

Pri­va­cy advo­cates, how­ev­er, note that a lot can go wrong even with every­day track­ing tech­nol­o­gy. On Mon­day, the tech indus­try was jolt­ed by the dis­cov­ery that Stra­va, a fit­ness app that allows users to track their activ­i­ties and com­pare their per­for­mance with oth­er peo­ple run­ning or cycling in the same places, had unwit­ting­ly high­light­ed the loca­tions of Unit­ed States mil­i­tary bases and the move­ments of their per­son­nel in Iraq and Syr­ia.

The patent appli­ca­tions, filed in 2016, were pub­lished in Sep­tem­ber, and Ama­zon won them this week, accord­ing to Geek­Wire, which report­ed the patents’ pub­li­ca­tion on Tues­day. In the­o­ry, Amazon’s pro­posed tech­nol­o­gy would emit ultra­son­ic sound puls­es and radio trans­mis­sions to track where an employee’s hands were in rela­tion to inven­to­ry bins, and pro­vide “hap­tic feed­back” to steer the work­er toward the cor­rect bin.
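[To make the mechanism concrete: ultrasonic time-of-flight gives a distance from the wristband to each fixed receiver, three such distances locate the hand, and the band vibrates when the estimated position drifts from the assigned bin. A hypothetical sketch in Python with invented numbers — the patent coverage itself discloses no code:]

```python
# Time-of-flight -> distances -> trilaterated hand position -> haptic nudge.
# 2-D geometry for simplicity; all positions and timings are invented.
SPEED_OF_SOUND = 343.0  # metres per second in air

def distance(time_of_flight_s: float) -> float:
    return SPEED_OF_SOUND * time_of_flight_s

def trilaterate(anchors, dists):
    # Subtracting pairs of circle equations yields a 2x2 linear system.
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

def haptic_feedback(hand, target_bin, tolerance=0.15):
    off = ((hand[0] - target_bin[0]) ** 2 + (hand[1] - target_bin[1]) ** 2) ** 0.5
    return "vibrate" if off > tolerance else "ok"

anchors = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]  # receiver positions (m)
dists = [distance(t) for t in (0.00292, 0.00438, 0.00438)]  # ~1.0 m, ~1.5 m, ~1.5 m
hand = trilaterate(anchors, dists)
print(hand, haptic_feedback(hand, target_bin=(0.7, 0.7)))
```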

The aim, Ama­zon says in the patent, is to stream­line “time con­sum­ing” tasks, like respond­ing to orders and pack­ag­ing them for speedy deliv­ery. With guid­ance from a wrist­band, work­ers could fill orders faster. Crit­ics say such wrist­bands raise con­cerns about pri­va­cy and would add a new lay­er of sur­veil­lance to the work­place, and that the use of the devices could result in employ­ees being treat­ed more like robots than human beings.

Cur­rent and for­mer Ama­zon employ­ees said the com­pa­ny already used sim­i­lar track­ing tech­nol­o­gy in its ware­hous­es and said they would not be sur­prised if it put the patents into prac­tice.

Max Craw­ford, a for­mer Ama­zon ware­house work­er in Britain, said in a phone inter­view, “After a year work­ing on the floor, I felt like I had become a ver­sion of the robots I was work­ing with.” He described hav­ing to process hun­dreds of items in an hour — a pace so extreme that one day, he said, he fell over from dizzi­ness. “There was no time to go to the loo,” he said, using the British slang for toi­let. “You had to process the items in sec­onds and then move on. If you didn’t meet tar­gets, you were fired.”

He worked back and forth at two Ama­zon ware­hous­es for more than two years and then quit in 2015 because of health con­cerns, he said: “I got burned out.” Mr. Craw­ford agreed that the wrist­bands might save some time and labor, but he said the track­ing was “stalk­er­ish” and feared that work­ers might be unfair­ly scru­ti­nized if their hands were found to be “in the wrong place at the wrong time.” “They want to turn peo­ple into machines,” he said. “The robot­ic tech­nol­o­gy isn’t up to scratch yet, so until it is, they will use human robots.”

Many com­pa­nies file patents for prod­ucts that nev­er see the light of day. And Ama­zon would not be the first employ­er to push bound­aries in the search for a more effi­cient, speedy work force. Com­pa­nies are increas­ing­ly intro­duc­ing arti­fi­cial intel­li­gence into the work­place to help with pro­duc­tiv­i­ty, and tech­nol­o­gy is often used to mon­i­tor employ­ee where­abouts.

One com­pa­ny in Lon­don is devel­op­ing arti­fi­cial intel­li­gence sys­tems to flag unusu­al work­place behav­ior, while anoth­er used a mes­sag­ing appli­ca­tion to track its employ­ees. In Wis­con­sin, a tech­nol­o­gy com­pa­ny called Three Square Mar­ket offered employ­ees an oppor­tu­ni­ty to have microchips implant­ed under their skin in order, it said, to be able to use its ser­vices seam­less­ly. Ini­tial­ly, more than 50 out of 80 staff mem­bers at its head­quar­ters in Riv­er Falls, Wis., vol­un­teered.

5. Some tech workers, well placed at R & D pacesetters and giants such as Facebook and Google, have done an about-face on the impact of their earlier efforts and are now struggling against the misuse of the technologies they helped to launch:

” . . . . A group of Silicon Valley technologists who were early employees at Facebook and Google, alarmed over the ill effects of social networks and smartphones, are banding together to challenge the companies they helped build. . . . ‘The largest supercomputers in the world are inside of two companies — Google and Facebook — and where are we pointing them?’ Mr. [Tristan] Harris said. ‘We’re pointing them at people’s brains, at children.’ . . . . Mr. [Roger McNamee] said he had joined the Center for Humane Technology because he was horrified by what he had helped enable as an early Facebook investor. ‘Facebook appeals to your lizard brain — primarily fear and anger,’ he said. ‘And with smartphones, they’ve got you for every waking moment.’ . . . .”

“Ear­ly Face­book and Google Employ­ees Join Forces to Fight What They Built” by Nel­lie Bowles; The New York Times; 2/5/2018; p. B6 [West­ern Edi­tion].

A group of Sil­i­con Val­ley tech­nol­o­gists who were ear­ly employ­ees at Face­book and Google, alarmed over the ill effects of social net­works and smart­phones, are band­ing togeth­er to chal­lenge the com­pa­nies they helped build. The cohort is cre­at­ing a union of con­cerned experts called the Cen­ter for Humane Tech­nol­o­gy. Along with the non­prof­it media watch­dog group Com­mon Sense Media, it also plans an anti-tech addic­tion lob­by­ing effort and an ad cam­paign at 55,000 pub­lic schools in the Unit­ed States.

The cam­paign, titled The Truth About Tech, will be fund­ed with $7 mil­lion from Com­mon Sense and cap­i­tal raised by the Cen­ter for Humane Tech­nol­o­gy. Com­mon Sense also has $50 mil­lion in donat­ed media and air­time from part­ners includ­ing Com­cast and DirecTV. It will be aimed at edu­cat­ing stu­dents, par­ents and teach­ers about the dan­gers of tech­nol­o­gy, includ­ing the depres­sion that can come from heavy use of social media.

“We were on the inside,” said Tris­tan Har­ris, a for­mer in-house ethi­cist at Google who is head­ing the new group. “We know what the com­pa­nies mea­sure. We know how they talk, and we know how the engi­neer­ing works.”

The effect of tech­nol­o­gy, espe­cial­ly on younger minds, has become hot­ly debat­ed in recent months. In Jan­u­ary, two big Wall Street investors asked Apple to study the health effects of its prod­ucts and to make it eas­i­er to lim­it children’s use of iPhones and iPads. Pedi­atric and men­tal health experts called on Face­book last week to aban­don a mes­sag­ing ser­vice the com­pa­ny had intro­duced for chil­dren as young as 6.

Par­ent­ing groups have also sound­ed the alarm about YouTube Kids, a prod­uct aimed at chil­dren that some­times fea­tures dis­turb­ing con­tent. “The largest super­com­put­ers in the world are inside of two com­pa­nies — Google and Face­book — and where are we point­ing them?” Mr. Har­ris said. “We’re point­ing them at people’s brains, at chil­dren.” Sil­i­con Val­ley exec­u­tives for years posi­tioned their com­pa­nies as tight-knit fam­i­lies and rarely spoke pub­licly against one anoth­er.

That has changed. Chamath Pal­i­hapi­tiya, a ven­ture cap­i­tal­ist who was an ear­ly employ­ee at Face­book, said in Novem­ber that the social net­work was “rip­ping apart the social fab­ric of how soci­ety works.” The new Cen­ter for Humane Tech­nol­o­gy includes an unprece­dent­ed alliance of for­mer employ­ees of some of today’s biggest tech com­pa­nies.

Apart from Mr. Har­ris, the cen­ter includes Sandy Parak­i­las, a for­mer Face­book oper­a­tions man­ag­er; Lynn Fox, a for­mer Apple and Google com­mu­ni­ca­tions exec­u­tive; Dave Morin, a for­mer Face­book exec­u­tive; Justin Rosen­stein, who cre­at­ed Facebook’s Like but­ton and is a co-founder of Asana; Roger McNamee, an ear­ly investor in Face­book; and Renée DiRes­ta, a tech­nol­o­gist who stud­ies bots. The group expects its num­bers to grow.

Its first project to reform the indus­try will be to intro­duce a Ledger of Harms — a web­site aimed at guid­ing rank-and-file engi­neers who are con­cerned about what they are being asked to build. The site will include data on the health effects of dif­fer­ent tech­nolo­gies and ways to make prod­ucts that are health­i­er.

Jim Stey­er, chief exec­u­tive and founder of Com­mon Sense, said the Truth About Tech cam­paign was mod­eled on anti­smok­ing dri­ves and focused on chil­dren because of their vul­ner­a­bil­i­ty. That may sway tech chief exec­u­tives to change, he said. Already, Apple’s chief exec­u­tive, Tim­o­thy D. Cook, told The Guardian last month that he would not let his nephew on social media, while the Face­book investor Sean Park­er also recent­ly said of the social net­work that “God only knows what it’s doing to our children’s brains.”

Mr. Stey­er said, “You see a degree of hypocrisy with all these guys in Sil­i­con Val­ley.” The new group also plans to begin lob­by­ing for laws to cur­tail the pow­er of big tech com­pa­nies. It will ini­tial­ly focus on two pieces of leg­is­la­tion: a bill being intro­duced by Sen­a­tor Edward J. Markey, Demo­c­rat of Mass­a­chu­setts, that would com­mis­sion research on technology’s impact on children’s health, and a bill in Cal­i­for­nia by State Sen­a­tor Bob Hertzberg, a Demo­c­rat, which would pro­hib­it the use of dig­i­tal bots with­out iden­ti­fi­ca­tion.

Mr. McNamee said he had joined the Cen­ter for Humane Tech­nol­o­gy because he was hor­ri­fied by what he had helped enable as an ear­ly Face­book investor. “Face­book appeals to your lizard brain — pri­mar­i­ly fear and anger,” he said. “And with smart­phones, they’ve got you for every wak­ing moment.” He said the peo­ple who made these prod­ucts could stop them before they did more harm. “This is an oppor­tu­ni­ty for me to cor­rect a wrong,” Mr. McNamee said.

6. Transitioning to our next program–updating AI (artificial intelligence) technology as it applies to technocratic fascism–we note that AI machines are being designed to develop other AIs–“The Rise of the Machine.” ” . . . . Jeff Dean, one of Google’s leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data. AutoML, in turn, is a machine learning algorithm that learns to build other machine-learning algorithms. With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry. . . .”

“The Rise of the Machine” by Cade Metz; The New York Times; 11/6/2017; p. B1 [West­ern Edi­tion].

 They are a dream of researchers but per­haps a night­mare for high­ly skilled com­put­er pro­gram­mers: arti­fi­cial­ly intel­li­gent machines that can build oth­er arti­fi­cial­ly intel­li­gent machines. With recent speech­es in both Sil­i­con Val­ley and Chi­na, Jeff Dean, one of Google’s lead­ing engi­neers, spot­light­ed a Google project called AutoML. ML is short for machine learn­ing, refer­ring to com­put­er algo­rithms that can learn to per­form par­tic­u­lar tasks on their own by ana­lyz­ing data.

AutoML, in turn, is a machine learn­ing algo­rithm that learns to build oth­er machine-learn­ing algo­rithms. With it, Google may soon find a way to cre­ate A.I. tech­nol­o­gy that can part­ly take the humans out of build­ing the A.I. sys­tems that many believe are the future of the tech­nol­o­gy indus­try. The project is part of a much larg­er effort to bring the lat­est and great­est A.I. tech­niques to a wider col­lec­tion of com­pa­nies and soft­ware devel­op­ers.

The tech indus­try is promis­ing every­thing from smart­phone apps that can rec­og­nize faces to cars that can dri­ve on their own. But by some esti­mates, only 10,000 peo­ple world­wide have the edu­ca­tion, expe­ri­ence and tal­ent need­ed to build the com­plex and some­times mys­te­ri­ous math­e­mat­i­cal algo­rithms that will dri­ve this new breed of arti­fi­cial intel­li­gence.

The world’s largest tech busi­ness­es, includ­ing Google, Face­book and Microsoft, some­times pay mil­lions of dol­lars a year to A.I. experts, effec­tive­ly cor­ner­ing the mar­ket for this hard-to-find tal­ent. The short­age isn’t going away any­time soon, just because mas­ter­ing these skills takes years of work. The indus­try is not will­ing to wait. Com­pa­nies are devel­op­ing all sorts of tools that will make it eas­i­er for any oper­a­tion to build its own A.I. soft­ware, includ­ing things like image and speech recog­ni­tion ser­vices and online chat­bots. “We are fol­low­ing the same path that com­put­er sci­ence has fol­lowed with every new type of tech­nol­o­gy,” said Joseph Sirosh, a vice pres­i­dent at Microsoft, which recent­ly unveiled a tool to help coders build deep neur­al net­works, a type of com­put­er algo­rithm that is dri­ving much of the recent progress in the A.I. field. “We are elim­i­nat­ing a lot of the heavy lift­ing.” This is not altru­ism.

Researchers like Mr. Dean believe that if more peo­ple and com­pa­nies are work­ing on arti­fi­cial intel­li­gence, it will pro­pel their own research. At the same time, com­pa­nies like Google, Ama­zon and Microsoft see seri­ous mon­ey in the trend that Mr. Sirosh described. All of them are sell­ing cloud-com­put­ing ser­vices that can help oth­er busi­ness­es and devel­op­ers build A.I. “There is real demand for this,” said Matt Scott, a co-founder and the chief tech­ni­cal offi­cer of Mal­ong, a start-up in Chi­na that offers sim­i­lar ser­vices. “And the tools are not yet sat­is­fy­ing all the demand.”

This is most like­ly what Google has in mind for AutoML, as the com­pa­ny con­tin­ues to hail the project’s progress. Google’s chief exec­u­tive, Sun­dar Pichai, boast­ed about AutoML last month while unveil­ing a new Android smart­phone.

Even­tu­al­ly, the Google project will help com­pa­nies build sys­tems with arti­fi­cial intel­li­gence even if they don’t have exten­sive exper­tise, Mr. Dean said. Today, he esti­mat­ed, no more than a few thou­sand com­pa­nies have the right tal­ent for build­ing A.I., but many more have the nec­es­sary data. “We want to go from thou­sands of orga­ni­za­tions solv­ing machine learn­ing prob­lems to mil­lions,” he said.

Google is invest­ing heav­i­ly in cloud-com­put­ing ser­vices — ser­vices that help oth­er busi­ness­es build and run soft­ware — which it expects to be one of its pri­ma­ry eco­nom­ic engines in the years to come. And after snap­ping up such a large por­tion of the world’s top A.I researchers, it has a means of jump-start­ing this engine.

Neur­al net­works are rapid­ly accel­er­at­ing the devel­op­ment of A.I. Rather than build­ing an image-recog­ni­tion ser­vice or a lan­guage trans­la­tion app by hand, one line of code at a time, engi­neers can much more quick­ly build an algo­rithm that learns tasks on its own. By ana­lyz­ing the sounds in a vast col­lec­tion of old tech­ni­cal sup­port calls, for instance, a machine-learn­ing algo­rithm can learn to rec­og­nize spo­ken words.
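[The contrast the article draws — hand-written rules versus learned behavior — in a few lines: a model fit on labeled examples infers the mapping itself. A toy scikit-learn sketch; the synthetic vectors merely stand in for acoustic features, since no real call data is at hand:]

```python
# Instead of hand-coding rules, fit a model on labeled examples.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Pretend each row is an acoustic feature vector and each label a word.
X, y = make_classification(n_samples=2000, n_features=20, n_classes=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```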

But building a neural network is not like building a website or some run-of-the-mill smartphone app. It requires significant math skills, extreme trial and error, and a fair amount of intuition. Jean-François Gagné, the chief executive of an independent machine-learning lab called Element AI, refers to the process as “a new kind of computer programming.”

In build­ing a neur­al net­work, researchers run dozens or even hun­dreds of exper­i­ments across a vast net­work of machines, test­ing how well an algo­rithm can learn a task like rec­og­niz­ing an image or trans­lat­ing from one lan­guage to anoth­er. Then they adjust par­tic­u­lar parts of the algo­rithm over and over again, until they set­tle on some­thing that works. Some call it a “dark art,” just because researchers find it dif­fi­cult to explain why they make par­tic­u­lar adjust­ments.

But with AutoML, Google is try­ing to auto­mate this process. It is build­ing algo­rithms that ana­lyze the devel­op­ment of oth­er algo­rithms, learn­ing which meth­ods are suc­cess­ful and which are not. Even­tu­al­ly, they learn to build more effec­tive machine learn­ing. Google said AutoML could now build algo­rithms that, in some cas­es, iden­ti­fied objects in pho­tos more accu­rate­ly than ser­vices built sole­ly by human experts. Bar­ret Zoph, one of the Google researchers behind the project, believes that the same method will even­tu­al­ly work well for oth­er tasks, like speech recog­ni­tion or machine trans­la­tion. This is not always an easy thing to wrap your head around. But it is part of a sig­nif­i­cant trend in A.I. research. Experts call it “learn­ing to learn” or “met­alearn­ing.”
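[A toy stand-in for the outer loop AutoML automates: propose a candidate architecture, train and score it, keep the best. Google’s system reportedly uses a learned controller rather than the blind random sampling below; this hypothetical scikit-learn sketch only shows the shape of “algorithms evaluating algorithms”:]

```python
# Random architecture search: sample network shapes, score each by
# cross-validation, keep the best. A crude proxy for AutoML's outer loop.
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

best_score, best_arch = 0.0, None
for trial in range(10):
    # Sample an architecture: depth 1-3, width 8-128 per layer.
    arch = tuple(random.choice([8, 32, 64, 128]) for _ in range(random.randint(1, 3)))
    model = MLPClassifier(hidden_layer_sizes=arch, max_iter=300, random_state=0)
    score = cross_val_score(model, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_arch = score, arch

print(f"best architecture {best_arch} with CV accuracy {best_score:.3f}")
```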

Many believe such meth­ods will sig­nif­i­cant­ly accel­er­ate the progress of A.I. in both the online and phys­i­cal worlds. At the Uni­ver­si­ty of Cal­i­for­nia, Berke­ley, researchers are build­ing tech­niques that could allow robots to learn new tasks based on what they have learned in the past. “Com­put­ers are going to invent the algo­rithms for us, essen­tial­ly,” said a Berke­ley pro­fes­sor, Pieter Abbeel. “Algo­rithms invent­ed by com­put­ers can solve many, many prob­lems very quick­ly — at least that is the hope.”

This is also a way of expand­ing the num­ber of peo­ple and busi­ness­es that can build arti­fi­cial intel­li­gence. These meth­ods will not replace A.I. researchers entire­ly. Experts, like those at Google, must still do much of the impor­tant design work.

But the belief is that the work of a few experts can help many oth­ers build their own soft­ware. Rena­to Negrin­ho, a researcher at Carnegie Mel­lon Uni­ver­si­ty who is explor­ing tech­nol­o­gy sim­i­lar to AutoML, said this was not a real­i­ty today but should be in the years to come. “It is just a mat­ter of when,” he said.


Discussion

18 comments for “FTR #996 Civilization’s Twilight: Update on Technocratic Fascism”

  1. After what might be the shortest stint ever as a New York Times op-ed columnist, the Times has a new job opening in its opinion section. After announcing the hiring of Quinn Norton as a new technology/hacker culture columnist Tuesday afternoon, the Times let her go later that evening. Why the sudden cold feet? A tweet. Or, rather, a number of Norton’s tweets that were widely pointed out after Norton’s hiring.

    The numer­ous tweets where she would call peo­ple “fag” and “fag­got” or used the N‑word cer­tain­ly did­n’t help. But it was her tweets about Nazis that appear to be what real­ly sank her employ­ment prospects. So what did Quinn Nor­ton tweet about Nazis that got her fired? That she has a Nazi friend. She does­n’t agree with her Nazi friend’s racist views, but they’re still friends and still talk some­times.

    And who is this Nazi friend of hers? Neo-Nazi hacker Andrew ‘weev’ Auernheimer, of course. And as the following article points out, while Norton’s friendship with Auernheimer — who waged a death threat campaign against the employees of CNN, let’s not forget — is indeed troubling, it’s not like Norton is the only tech/privacy journalist who considers weev both a friend and a hero:

    Slate

    Why Would a Tech Jour­nal­ist Be Friends With a Neo-Nazi Troll?
    Quinn Norton’s friend­ship with the noto­ri­ous Weev helped lose her a job at the New York Times. She wasn’t his only unlike­ly pal.

    By April Glaser
    Feb 14, 2018 1:42 PM

    The New York Times opin­ion sec­tion announced a new hire Tues­day after­noon: Quinn Nor­ton, a long­time jour­nal­ist cov­er­ing (and trav­el­ing in) the tech­nol­o­gy indus­try and adja­cent hack­er sub­cul­ture, would become the edi­to­r­i­al board’s lead opin­ion writer on the “pow­er, cul­ture, and con­se­quences of tech­nol­o­gy.” Hours lat­er, the job offer was gone.

    The sharp turn occurred soon after Nor­ton shared her job news on Twit­ter, where it didn’t take long for peo­ple to sur­face tweets that, depend­ing on how you inter­pret the expla­na­tions Nor­ton tweet­ed Tues­day night, were either out­right vile or at min­i­mum colos­sal acts of bad judg­ment, no mat­ter what online sub­cul­tures Nor­ton was nav­i­gat­ing when she wrote them. Between 2013 and 2014, she repeat­ed­ly used the slurs fag and fag­got in pub­lic con­ver­sa­tions on Twit­ter. A white woman, she used the N‑word in a botched retweet of inter­net free­dom pio­neer John Per­ry Bar­low and once jok­ing­ly respond­ed to a thread on tone polic­ing with “what’s up my nig­ga.” Then there was a Medi­um post from 2013 in which she med­i­tat­ed on and praised the life of John Rabe, a Nazi leader who also helped to save thou­sands of Chi­nese peo­ple dur­ing World War II. She called him her “per­son­al patron saint of moral com­plex­i­ty.”

    And then, arguably most shock­ing of all, there were tweets in which Nor­ton defend­ed her long friend­ship with one of the most famous neo-Nazis in Amer­i­ca, Andrew Auern­heimer, known by his inter­net pseu­do­nym Weev. Among his many low­lights, Weev co-ran the web­site the Dai­ly Stormer, a hub for neo-Nazis and white suprema­cists.

    In a state­ment, the New York Times’ opin­ion edi­tor, James Ben­net, said, “Despite our review of Quinn Norton’s work and our con­ver­sa­tions with her pre­vi­ous employ­ers, this was new infor­ma­tion to us.” On Twit­ter Tues­day night, Nor­ton wrote, “I’m sor­ry I can’t do the work I want­ed to do with them. I wish there had been a way, but ulti­mate­ly, they need to feel safe with how the net will react to their opin­ion writ­ers.” But it shouldn’t have tak­en a pub­lic out­cry for the Times to real­ize that Nor­ton, despite her impres­sive back­ground cov­er­ing the tech indus­try and some of the sub­cul­tures in its under­bel­ly, was like­ly a poor fit for the job.

    Lots of us have friends, acquain­tances, and rel­a­tives with opin­ions that are con­tro­ver­sial yet not so vile we need to eject them from our lives. Out­right Nazism is some­thing else. So how could a self-described “queer-activist” with pro­gres­sive bona fides and an appar­ent ded­i­ca­tion to out­ing abu­sive fig­ures in the tech indus­try be friends with a Nazi? For one thing, as Nor­ton explained, she some­times tried to speak the lan­guage of some of the more out­ré com­mu­ni­ties she cov­ered, like Anons and trolls. Friend can mean a lot of dif­fer­ent things, and her motives in speak­ing with Weev may have been admirable, if pos­si­bly mis­guid­ed. But when you look back at the his­to­ry of the inter­net free­dom com­mu­ni­ty with which she asso­ci­at­ed, her embrace of Weev fits into an ugly pat­tern. She was part of a com­mu­ni­ty that sup­port­ed Weev and his right to free expres­sion, often while fail­ing to denounce his val­ues and every­thing his white nation­al­ism, sex­ism, and anti-Semi­tism stood for. Any­one who thinks seri­ous­ly about the web—and hires peo­ple to cov­er it—ought to reck­on with why.

    Some back­ground: In Octo­ber, Nor­ton remind­ed her fol­low­ers that “Weev is a ter­ri­ble per­son, & an old friend of mine,” as she wrote in one of the tweets that sur­faced Tues­day night. “I’ve been very clear on this. Some of my friend are ter­ri­ble peo­ple, & also my friends.” Weev has said that Jew­ish chil­dren “deserve to die,” encour­aged death threats against his targets—often Jew­ish peo­ple and women—and released their address­es and con­tact infor­ma­tion onto the inter­net, caus­ing them to be so flood­ed with hate speech and threats of vio­lence that some fled their homes. Yet Nor­ton still found val­ue in the friend­ship. “Weev doesn’t talk to me much any­more, but we talk about the racism when­ev­er he does,” Nor­ton explained in a tweet Tues­day night. She explained that her “door is open when he, or any­one, wants to talk” and clar­i­fied that she would always make their con­ver­sa­tions “about the stu­pid­i­ty of racism” when they did get a chance to catch up.

    That Nor­ton would keep her door open to a man who harms lives does not make her an out­lier with­in parts of the hack­er and dig­i­tal rights com­mu­ni­ty, which took up arms to defend Weev in 2010 after he worked with a team to expose a hole in AT&T’s secu­ri­ty sys­tem that allowed the email address­es of about 114,000 iPad own­ers to be revealed—which he then shared with jour­nal­ists. For that, Weev was sen­tenced to three years in jail for iden­ti­ty fraud and access­ing a com­put­er with­out the prop­er autho­riza­tion. Despite being known as a par­tic­u­lar­ly ter­ri­fy­ing inter­net troll and anti-Semi­te, the Elec­tron­ic Fron­tier Foun­da­tion (where I used to work), cel­e­brat­ed tech­nol­o­gy law pro­fes­sor Orin Kerr, and oth­ers in the inter­net free­dom com­mu­ni­ty came to Weev’s defense, argu­ing that when a secu­ri­ty researcher finds a hole in a company’s sys­tem, it doesn’t mean the hack­ing was mali­cious and deserv­ing of pros­e­cu­tion. They were right. Out­side secu­ri­ty researchers should be able to find and dis­close vul­ner­a­bil­i­ties in order to keep every­one else safe with­out break­ing a law.

    But the broad­er hack­er com­mu­ni­ty didn’t defend Weev on the mer­its of this par­tic­u­lar case while simul­ta­ne­ous­ly denounc­ing his hate­ful views. Instead it lion­ized him in keep­ing with its oppo­si­tion to dra­con­ian com­put­er crime laws. Artist Mol­ly Crabap­ple paint­ed a por­trait of Weev. There was a “Free Weev” web­site; the slo­gan was print­ed on T‑shirts. The charges were even­tu­al­ly over­turned 28 months before the end of Weev’s sen­tence, and when a jour­nal­ist accom­pa­nied his lawyer to pick Weev up from prison, he report­ed­ly blast­ed a white pow­er song on the dri­ve home. Dur­ing and after his impris­on­ment, Weev and Nor­ton kept in touch.

    And dur­ing his time in jail, Nor­ton appeared to pick up some trolling ten­den­cies of her own. “Here’s the deal, fag­got,” she wrote in a tweet from 2013. “Free speech comes with respon­si­bil­i­ty. not legal, but human. grown up. you can do this.” Nor­ton defend­ed her­self Tues­day night, say­ing this lan­guage was only ever used in the con­text of her work with Anony­mous, where that par­tic­u­lar slur is a kind of shib­bo­leth, but still, she was com­fort­able enough to use the word a lot, and on a pub­lic plat­form.

    Nor­ton, like so many cham­pi­ons of inter­net free­dom, is a staunch advo­cate of free speech. That was cer­tain­ly the view that allowed so much of the inter­net free­dom and hack­er com­mu­ni­ty to over­look Weev’s ardent anti-Semi­tism when he was on tri­al for break­ing into AT&T’s com­put­ers. The think­ing is that this is what comes with defend­ing people’s civ­il lib­er­ties: Some­times you’re going to defend a mas­sive racist. That’s true for both inter­net activists and the ACLU. It’s also total­ly pos­si­ble to defend someone’s right to say awful things and not become their “friend,” how­ev­er you define the term. But that’s some­thing Quinn didn’t do. And it’s some­thing that many of Weev’s defend­ers didn’t do, either.

    When civ­il lib­er­ties are defend­ed with­out adja­cent calls for social and eco­nom­ic jus­tice, the val­ues that under­gird calls for, say, free speech or pro­tec­tion from gov­ern­ment search and seizure can col­lapse. This is why neo-Nazis feel embold­ened to hold “free speech” ral­lies across the coun­try. It is why racist online com­mu­ni­ties are able to rail against the monop­o­lis­tic pow­er of com­pa­nies like Face­book and Google when they get boot­ed off their plat­forms. Count­less activists, engi­neers, and oth­ers have agi­tat­ed for decades for an open web—but in the process they’ve too often neglect­ed to fight for social and eco­nom­ic jus­tice at the same time. They’ve defend­ed free speech above all else, which encour­aged plat­forms to allow racists and big­ots and sex­ists and anti-Semi­tes to gath­er there with­out much issue.

    ...

    In a way, Norton’s friend­ship with Weev can be made sense of through the lens of the com­mu­ni­ties that they both trav­eled through. They belonged to a group that had the pre­scient insight that the inter­net was worth fight­ing for. Those fights were often rail­ing against the threat of cen­sor­ship, in defense of per­son­al pri­va­cy, and thus in defense of hack­ers who found secu­ri­ty holes, and the abil­i­ty to use the inter­net as freely as pos­si­ble, with­out gov­ern­ment med­dling. It’s a train of thought that pre­served free speech but didn’t simul­ta­ne­ous­ly work as hard to defend com­mu­ni­ties that were ostra­cized on the inter­net because so much of that speech was harm­ful. Norton’s report­ing has been valu­able; her con­tri­bu­tion to the #MeToo moment in the tech indus­try was, too. But what’s real­ly need­ed to make sense of tech­nol­o­gy at our cur­rent junc­ture prob­a­bly isn’t some­one so com­mit­ted to one of the lines of thought that helped get us here. Let’s hope the New York Times’ next pick for the job Nor­ton would have had exerts some fresh­er think­ing.

    ———-

    “Why Would a Tech Jour­nal­ist Be Friends With a Neo-Nazi Troll?” by April Glaser; Slate; 02/14/2018

    “And then, arguably most shock­ing of all, there were tweets in which Nor­ton defend­ed her long friend­ship with one of the most famous neo-Nazis in Amer­i­ca, Andrew Auern­heimer, known by his inter­net pseu­do­nym Weev. Among his many low­lights, Weev co-ran the web­site the Dai­ly Stormer, a hub for neo-Nazis and white suprema­cists.”

    Yeah, there's nothing quite like a tweet history defending your friendship with the guy who co-ran the Daily Stormer to spruce up one's resume...assuming you're applying for a job at Breitbart. But that might be a bit too far for the New York Times.

    And yet, as the article notes, Norton was far from alone in not just defending Auernheimer when he was facing prosecution for hacking AT&T (and that prosecution legitimately was overly harsh) but in defending him while remaining friends despite the horrific Nazi views he openly stands for:

    ...
    Lots of us have friends, acquain­tances, and rel­a­tives with opin­ions that are con­tro­ver­sial yet not so vile we need to eject them from our lives. Out­right Nazism is some­thing else. So how could a self-described “queer-activist” with pro­gres­sive bona fides and an appar­ent ded­i­ca­tion to out­ing abu­sive fig­ures in the tech indus­try be friends with a Nazi? For one thing, as Nor­ton explained, she some­times tried to speak the lan­guage of some of the more out­ré com­mu­ni­ties she cov­ered, like Anons and trolls. Friend can mean a lot of dif­fer­ent things, and her motives in speak­ing with Weev may have been admirable, if pos­si­bly mis­guid­ed. But when you look back at the his­to­ry of the inter­net free­dom com­mu­ni­ty with which she asso­ci­at­ed, her embrace of Weev fits into an ugly pat­tern. She was part of a com­mu­ni­ty that sup­port­ed Weev and his right to free expres­sion, often while fail­ing to denounce his val­ues and every­thing his white nation­al­ism, sex­ism, and anti-Semi­tism stood for. Any­one who thinks seri­ous­ly about the web—and hires peo­ple to cov­er it—ought to reck­on with why.
    ...

    Now, it’s not that Nor­ton nev­er crit­i­cizes Auern­heimer’s views. It’s that she appears to still be friends and talk with him despite the fact that he real­ly is a lead­ing neo-Nazi who real­ly does call for mass mur­der. Which, again, is some­thing that goes far beyond Nor­ton:

    ...
    Some back­ground: In Octo­ber, Nor­ton remind­ed her fol­low­ers that “Weev is a ter­ri­ble per­son, & an old friend of mine,” as she wrote in one of the tweets that sur­faced Tues­day night. “I’ve been very clear on this. Some of my friend are ter­ri­ble peo­ple, & also my friends.” Weev has said that Jew­ish chil­dren “deserve to die,” encour­aged death threats against his targets—often Jew­ish peo­ple and women—and released their address­es and con­tact infor­ma­tion onto the inter­net, caus­ing them to be so flood­ed with hate speech and threats of vio­lence that some fled their homes. Yet Nor­ton still found val­ue in the friend­ship. “Weev doesn’t talk to me much any­more, but we talk about the racism when­ev­er he does,” Nor­ton explained in a tweet Tues­day night. She explained that her “door is open when he, or any­one, wants to talk” and clar­i­fied that she would always make their con­ver­sa­tions “about the stu­pid­i­ty of racism” when they did get a chance to catch up.

    That Nor­ton would keep her door open to a man who harms lives does not make her an out­lier with­in parts of the hack­er and dig­i­tal rights com­mu­ni­ty, which took up arms to defend Weev in 2010 after he worked with a team to expose a hole in AT&T’s secu­ri­ty sys­tem that allowed the email address­es of about 114,000 iPad own­ers to be revealed—which he then shared with jour­nal­ists. For that, Weev was sen­tenced to three years in jail for iden­ti­ty fraud and access­ing a com­put­er with­out the prop­er autho­riza­tion. Despite being known as a par­tic­u­lar­ly ter­ri­fy­ing inter­net troll and anti-Semi­te, the Elec­tron­ic Fron­tier Foun­da­tion (where I used to work), cel­e­brat­ed tech­nol­o­gy law pro­fes­sor Orin Kerr, and oth­ers in the inter­net free­dom com­mu­ni­ty came to Weev’s defense, argu­ing that when a secu­ri­ty researcher finds a hole in a company’s sys­tem, it doesn’t mean the hack­ing was mali­cious and deserv­ing of pros­e­cu­tion. They were right. Out­side secu­ri­ty researchers should be able to find and dis­close vul­ner­a­bil­i­ties in order to keep every­one else safe with­out break­ing a law.

    But the broad­er hack­er com­mu­ni­ty didn’t defend Weev on the mer­its of this par­tic­u­lar case while simul­ta­ne­ous­ly denounc­ing his hate­ful views. Instead it lion­ized him in keep­ing with its oppo­si­tion to dra­con­ian com­put­er crime laws. Artist Mol­ly Crabap­ple paint­ed a por­trait of Weev. There was a “Free Weev” web­site; the slo­gan was print­ed on T‑shirts. The charges were even­tu­al­ly over­turned 28 months before the end of Weev’s sen­tence, and when a jour­nal­ist accom­pa­nied his lawyer to pick Weev up from prison, he report­ed­ly blast­ed a white pow­er song on the dri­ve home. Dur­ing and after his impris­on­ment, Weev and Nor­ton kept in touch.
    ...

    “But the broad­er hack­er com­mu­ni­ty didn’t defend Weev on the mer­its of this par­tic­u­lar case while simul­ta­ne­ous­ly denounc­ing his hate­ful views. Instead it lion­ized him in keep­ing with its oppo­si­tion to dra­con­ian com­put­er crime laws.”

    And that is the much bigger story within the story of Quinn Norton's half-day as a New York Times technology columnist: within much of the digital privacy community, Norton's acceptance of Auernheimer despite his open, aggressive neo-Nazi views isn't the exception. It's the rule.

    There was unfortunately no mention in the article of how Auernheimer partied with Glenn Greenwald and Laura Poitras in 2014 after his release from prison (when he was already sporting a giant swastika on his chest). Neither was there any mention of the fact that Auernheimer appears to have been involved with both the 'Macron hacks' in France's elections last year and possibly the DNC hacks. But the article does make the important point that the story of Quinn Norton's firing is really just a sub-story in the much larger story of the remarkably widespread popularity of Andrew 'weev' Auernheimer within the tech and digital privacy communities and the roles he may have played in some of the biggest hacks of our times. And the story of tech's 'Nazi friend' is, itself, just a sub-story in the much larger story of how pervasive far-right ideals and assumptions are in all sorts of tech sectors and technologies, whether it's Bitcoin, the Cypherpunk movement's extensive history of far-right thought, or the fascist roots behind Wikileaks. Hopefully the New York Times's next pick for tech columnist will actually address these topics.

    Posted by Pterrafractyl | February 14, 2018, 3:33 pm
  2. Here's another example of how the libertarian dream of internet platforms so secure that the companies themselves can't monitor what's taking place on them turns out to be a dream platform for far-right misinformation: WhatsApp — the Facebook-owned messaging platform that uses end-to-end strong encryption so that, in theory, no one can crack the messages and no one, including WhatsApp itself, can monitor how the platform is used — is wildly popular in Brazil. 120 million of the country's 200 million people use the app, and many of them rely on it as their primary news source.

    So what kind of news are people getting on WhatsApp? Well, we don't really know, because it can't be monitored. But we do get a hint of the kind of news people are getting on encrypted services like WhatsApp when those stories spread to other platforms like Facebook or Youtube. And with Brazil facing an explosion of yellow fever and struggling to get people vaccinated, we got a particularly nasty hint of the kind of 'news' people are getting on WhatsApp: dangerous, professionally produced videos delivering an Alex Jones-style message that the yellow fever vaccine campaign is part of a secret government depopulation scheme. That's the kind of 'news' people in Brazil are getting from WhatsApp. At least, that's the 'news' we know about so far, since the full content is an encrypted mystery:

    Wired

    When What­sAp­p’s Fake News Prob­lem Threat­ens Pub­lic Health

    Megan Molteni
    03.09.18 03:14 pm

    In remote areas of Brazil’s Ama­zon basin, yel­low fever used to be a rare, if reg­u­lar vis­i­tor. Every six to ten years, dur­ing the hot sea­son, mos­qui­toes would pick it up from infect­ed mon­keys and spread it to a few log­gers, hunters, and farm­ers at the forests’ edges in the north­west­ern part of the coun­try. But in 2016, per­haps dri­ven by cli­mate change or defor­esta­tion or both, the dead­ly virus broke its pat­tern.

    Yel­low fever began expand­ing south, even through the win­ter months, infect­ing more than 1,500 peo­ple and killing near­ly 500. The mos­qui­to-borne virus attacks the liv­er, caus­ing its sig­na­ture jaun­dice and inter­nal hem­or­rhag­ing (the Mayans called it xekik, or “blood vom­it”). Today, that pesti­lence is rac­ing toward Rio de Janeiro and Sao Paulo at the rate of more than a mile a day, turn­ing Brazil’s coastal megac­i­ties into mega-tick­ing-time­bombs. The only thing spread­ing faster is mis­in­for­ma­tion about the dan­gers of a yel­low fever vaccine—the very thing that could halt the virus’s advance. And nowhere is it hap­pen­ing faster than on What­sApp.

    In recent weeks, rumors of fatal vac­cine reac­tions, mer­cury preser­v­a­tives, and gov­ern­ment con­spir­a­cies have sur­faced with alarm­ing speed on the Face­book-owned encrypt­ed mes­sag­ing ser­vice, which is used by 120 mil­lion of Brazil’s rough­ly 200 mil­lion res­i­dents. The plat­form has long incu­bat­ed and pro­lif­er­at­ed fake news, in Brazil in par­tic­u­lar. With its mod­est data require­ments, What­sApp is espe­cial­ly pop­u­lar among mid­dle and low­er income indi­vid­u­als there, many of whom rely on it as their pri­ma­ry news con­sump­tion plat­form. But as the country’s health author­i­ties scram­ble to con­tain the worst out­break in decades, WhatsApp’s mis­in­for­ma­tion trade threat­ens to go from desta­bi­liz­ing to dead­ly.

    On Jan­u­ary 25, Brazil­ian health offi­cials launched a mass cam­paign to vac­ci­nate 95 per­cent of res­i­dents in the 69 munic­i­pal­i­ties direct­ly in the disease’s path—a total of 23 mil­lion peo­ple. A yel­low fever vac­cine has been manda­to­ry since 2002 for any Brazil­ian born in regions where the virus is endem­ic. But in the last two years the dis­ease has pushed beyond its nor­mal range into ter­ri­to­ries where few­er than a quar­ter of peo­ple are immune, includ­ing the urban areas of Rio and Sao Paulo.

    By the time of the announce­ment, the fake news cycle was already under­way. Ear­li­er in the month an audio mes­sage from a woman claim­ing to be a doc­tor at a well-known research insti­tute began cir­cu­lat­ing on What­sApp, warn­ing that the vac­cine is dan­ger­ous. (The insti­tute denied that the record­ing came from any of its employ­ees). A few weeks lat­er it was a sto­ry link­ing the death of a uni­ver­si­ty stu­dent to the vac­cine. (That too proved to be a false report). In Feb­ru­ary, Igor Sacramento’s moth­er-in-law mes­saged him a pair of videos sug­gest­ing that the yel­low fever vac­cine was actu­al­ly a scam aimed at reduc­ing the world pop­u­la­tion. A health com­mu­ni­ca­tion researcher at Fiocruz, one of Brazil’s largest sci­en­tif­ic insti­tu­tions, Sacra­men­to rec­og­nized a scam when he saw one. And no, it wasn’t a glob­al illu­mi­nati plot to kill off his coun­try­men. But he could under­stand why peo­ple would be tak­en in by it.

    “These videos are very sophis­ti­cat­ed, with good edit­ing, tes­ti­mo­ni­als from experts, and per­son­al expe­ri­ences,” Sacra­men­to says. It’s the same jour­nal­is­tic for­mat peo­ple see on TV, so it bears the shape of truth. And when peo­ple share these videos or news sto­ries with­in their social net­works as per­son­al mes­sages, it changes the cal­cu­lus of trust. “We are tran­si­tion­ing from a soci­ety that expe­ri­enced truth based on facts to a soci­ety based on its expe­ri­ence of truth in inti­ma­cy, in emo­tion, in close­ness.”

    Peo­ple are more like­ly to believe rumours from fam­i­ly and friends. There’s no algo­rithm medi­at­ing the expe­ri­ence. And when that mis­in­for­ma­tion comes in the form of for­ward­ed texts and videos—which look the same as per­son­al mes­sages in WhatsApp—they’re lent anoth­er lay­er of legit­i­ma­cy. Then you get the net­work com­pound­ing effect; if you’re in mul­ti­ple group chats that all receive the fake news, the rep­e­ti­tion makes them more believ­able still.

    Of course, these are all just the­o­ries. Because of WhatsApp’s end-to-end encryp­tion and the closed nature of its net­works, it’s near­ly impos­si­ble to study how mis­in­for­ma­tion moves through it. For users in coun­tries with a his­to­ry of state-spon­sored vio­lence, like Brazil, that secre­cy is a fea­ture. But it’s a bug for any­one try­ing to study them. “I think What­sApp hoax­es and dis­in­for­ma­tion cam­paigns are a bit more per­ni­cious [than Face­book] because their dif­fu­sion can­not be mon­i­tored,” says Pablo Ortel­la­do, a fake news researcher and pro­fes­sor of pub­lic pol­i­cy at the Uni­ver­si­ty of Sao Paulo. Mis­in­for­ma­tion on What­sApp can only be iden­ti­fied when it jumps to oth­er social media sites or bleeds into the real world.

    In Brazil, it’s start­ing to do both. One of the videos Sacra­men­to received from his moth­er-in-law is still up on YouTube, where it’s been viewed over a mil­lion times. Oth­er sto­ries cir­cu­lat­ed on What­sApp are now being shared in Face­book groups with thou­sands of users, most­ly wor­ried moth­ers exchang­ing sto­ries and fears. And in the streets of Rio and Sao Paulo, some peo­ple are stay­ing away from the health work­ers in white coats. As of Feb­ru­ary 27, only 5.5 mil­lion peo­ple had received the shot, though it’s dif­fi­cult to say how much of the slow start is due to fake news as opposed to logis­ti­cal delays. A spokes­woman for the Brazil­ian Min­istry of Health said in an email that the agency has seen an uptick in con­cern from res­i­dents regard­ing post-vac­ci­na­tion adverse events since the start of the year and acknowl­edged that the spread of false news through social media can inter­fere with vac­ci­na­tion cov­er­age, but did not com­ment on its spe­cif­ic impact on this lat­est cam­paign.

    ...

    While the Min­istry of Health has engaged in a very active pro-vac­cine edu­ca­tion operation—publishing week­ly newslet­ters, post­ing on social media, and get­ting peo­ple on the ground at church­es, tem­ples, trade unions, and clinics—health com­mu­ni­ca­tion researchers like Sacra­men­to say health offi­cials made one glar­ing mis­take. They didn’t pay close enough atten­tion to lan­guage.

    You see, on top of all this, there’s a glob­al yel­low fever vac­cine short­age going on at the moment. The vac­cine is avail­able at a lim­it­ed num­ber of clin­ics in the US, but it’s only used here as a trav­el shot. So far this year, the Cen­ters for Dis­ease Con­trol and Pre­ven­tion has reg­is­tered no cas­es of the virus with­in US bor­ders, though in light of the out­break it did issue a Lev­el 2 trav­el notice in Jan­u­ary, urg­ing all Amer­i­cans trav­el­ing to the affect­ed states in Brazil to get vac­ci­nat­ed first.

    Because it’s endem­ic in the coun­try, Brazil makes its own vac­cine, and is cur­rent­ly ramp­ing up pro­duc­tion from 5 mil­lion to 10 mil­lion dos­es per month by June. But in the inter­im, author­i­ties are admin­is­ter­ing small­er dos­es of what they have on hand, known as a “frac­tion­al dose.” It’s a well-demon­strat­ed emer­gency maneu­ver, which staved off a yel­low fever out­break in the Demo­c­ra­t­ic Repub­lic of the Con­go in 2016. Accord­ing to the WHO, it’s “the best way to stretch vac­cine sup­plies and pro­tect against as many peo­ple as pos­si­ble.” But a par­tial dose, one that’s guar­an­teed for only 12 months, has been met by mis­trust in Brazil, where a sin­gle vac­ci­na­tion had always been good for a life­time of pro­tec­tion.

    “The pop­u­la­tion in gen­er­al under­stood the word­ing of ‘frac­tion­at­ed’ to mean weak,” says Sacra­men­to. Although tech­ni­cal­ly cor­rect, the word took on a more sin­is­ter mean­ing as it spread through social media cir­cles. Some videos even claimed the frac­tion­at­ed vac­cine could cause renal fail­ure. And while they may be unsci­en­tif­ic, they’re not com­plete­ly wrong.

    Like any med­i­cine, the yel­low fever vac­cine can cause side effects. Between 2 and 4 per­cent of peo­ple expe­ri­ence mild headaches, low-grade fevers, or pain at the site of injec­tion. But there have also been rare reports of life-threat­en­ing aller­gic reac­tions and dam­age to the ner­vous sys­tem and oth­er inter­nal organs. Accord­ing to the Health Min­istry, six peo­ple died in 2017 on account of an adverse reac­tion to the vac­cine. The agency esti­mates that one in 76,000 will have an ana­phy­lac­tic reac­tion, one in 125,000 will expe­ri­ence a severe ner­vous sys­tem reac­tion, and one in 250,000 will suf­fer a life-threat­en­ing ill­ness with organ fail­ure. Which means that if 5 mil­lion peo­ple get vac­ci­nat­ed, you’ll wind up with about 20 organ fail­ures, 50 ner­vous sys­tem issues, and 70 aller­gic shocks. Of course, if yel­low fever infect­ed 5 mil­lion peo­ple, 333,000 peo­ple could die.

    Not every fake news sto­ry is 100 per­cent false. But they are out of pro­por­tion with real­i­ty. That’s the thing about social media. It can ampli­fy real but sta­tis­ti­cal­ly unlike­ly things just as much as it spreads total­ly made up stuff. What you wind up with is a murky mix of infor­ma­tion that has just enough truth to be cred­i­ble.

    And that makes it a whole lot hard­er to fight. You can’t just start by shout­ing it all down. Sacra­men­to says too often health offi­cials opt to frame these rumors as a dichoto­my: “Is this true or is this a myth?” That alien­ates peo­ple from the sci­ence. Instead, the insti­tu­tion where he works has begun to pro­duce social media-spe­cif­ic videos that start a dia­logue about the impor­tance of vac­cines, while remain­ing open to people’s fears. “Brazil is a coun­try full of social inequal­i­ties and con­tra­dic­tions,” he says. “The only way to under­stand what is hap­pen­ing is to talk to peo­ple who are dif­fer­ent from you.” Unfor­tu­nate­ly, that’s the one thing What­sApp is designed not to let you do.

    ———-

    “When What­sAp­p’s Fake News Prob­lem Threat­ens Pub­lic Health” by Megan Molteni; Wired; 03/09/2018

    “Yellow fever began expanding south, even through the winter months, infecting more than 1,500 people and killing nearly 500. The mosquito-borne virus attacks the liver, causing its signature jaundice and internal hemorrhaging (the Mayans called it xekik, or “blood vomit”). Today, that pestilence is racing toward Rio de Janeiro and Sao Paulo at the rate of more than a mile a day, turning Brazil's coastal megacities into mega-ticking-timebombs. The only thing spreading faster is misinformation about the dangers of a yellow fever vaccine—the very thing that could halt the virus's advance. And nowhere is it happening faster than on WhatsApp.”

    As the say­ing goes, a lie can trav­el halfway around the world before the truth can get its boots on. Espe­cial­ly in the age of the inter­net when ran­dom videos on mes­sag­ing ser­vices like What­sApp are treat­ed as reli­able news sources:

    ...
    In recent weeks, rumors of fatal vac­cine reac­tions, mer­cury preser­v­a­tives, and gov­ern­ment con­spir­a­cies have sur­faced with alarm­ing speed on the Face­book-owned encrypt­ed mes­sag­ing ser­vice, which is used by 120 mil­lion of Brazil’s rough­ly 200 mil­lion res­i­dents. The plat­form has long incu­bat­ed and pro­lif­er­at­ed fake news, in Brazil in par­tic­u­lar. With its mod­est data require­ments, What­sApp is espe­cial­ly pop­u­lar among mid­dle and low­er income indi­vid­u­als there, many of whom rely on it as their pri­ma­ry news con­sump­tion plat­form. But as the country’s health author­i­ties scram­ble to con­tain the worst out­break in decades, WhatsApp’s mis­in­for­ma­tion trade threat­ens to go from desta­bi­liz­ing to dead­ly.
    ...

    So by the time the government began its big campaign to vaccinate 95 percent of residents in vulnerable areas, there was already a fake news campaign against the vaccine using professional-quality videos: fake doctors. Fake stories of deaths from the vaccine. And the kind of production quality people expect from a news broadcast:

    ...
    On Jan­u­ary 25, Brazil­ian health offi­cials launched a mass cam­paign to vac­ci­nate 95 per­cent of res­i­dents in the 69 munic­i­pal­i­ties direct­ly in the disease’s path—a total of 23 mil­lion peo­ple. A yel­low fever vac­cine has been manda­to­ry since 2002 for any Brazil­ian born in regions where the virus is endem­ic. But in the last two years the dis­ease has pushed beyond its nor­mal range into ter­ri­to­ries where few­er than a quar­ter of peo­ple are immune, includ­ing the urban areas of Rio and Sao Paulo.

    By the time of the announce­ment, the fake news cycle was already under­way. Ear­li­er in the month an audio mes­sage from a woman claim­ing to be a doc­tor at a well-known research insti­tute began cir­cu­lat­ing on What­sApp, warn­ing that the vac­cine is dan­ger­ous. (The insti­tute denied that the record­ing came from any of its employ­ees). A few weeks lat­er it was a sto­ry link­ing the death of a uni­ver­si­ty stu­dent to the vac­cine. (That too proved to be a false report). In Feb­ru­ary, Igor Sacramento’s moth­er-in-law mes­saged him a pair of videos sug­gest­ing that the yel­low fever vac­cine was actu­al­ly a scam aimed at reduc­ing the world pop­u­la­tion. A health com­mu­ni­ca­tion researcher at Fiocruz, one of Brazil’s largest sci­en­tif­ic insti­tu­tions, Sacra­men­to rec­og­nized a scam when he saw one. And no, it wasn’t a glob­al illu­mi­nati plot to kill off his coun­try­men. But he could under­stand why peo­ple would be tak­en in by it.

    “These videos are very sophis­ti­cat­ed, with good edit­ing, tes­ti­mo­ni­als from experts, and per­son­al expe­ri­ences,” Sacra­men­to says. It’s the same jour­nal­is­tic for­mat peo­ple see on TV, so it bears the shape of truth. And when peo­ple share these videos or news sto­ries with­in their social net­works as per­son­al mes­sages, it changes the cal­cu­lus of trust. “We are tran­si­tion­ing from a soci­ety that expe­ri­enced truth based on facts to a soci­ety based on its expe­ri­ence of truth in inti­ma­cy, in emo­tion, in close­ness.”
    ...

    “These videos are very sophis­ti­cat­ed, with good edit­ing, tes­ti­mo­ni­als from experts, and per­son­al expe­ri­ences,” Sacra­men­to says. It’s the same jour­nal­is­tic for­mat peo­ple see on TV, so it bears the shape of truth. And when peo­ple share these videos or news sto­ries with­in their social net­works as per­son­al mes­sages, it changes the cal­cu­lus of trust. “We are tran­si­tion­ing from a soci­ety that expe­ri­enced truth based on facts to a soci­ety based on its expe­ri­ence of truth in inti­ma­cy, in emo­tion, in close­ness.””

    So how wide­spread is the prob­lem of high qual­i­ty lit­er­al fake news con­tent get­ting prop­a­gat­ed on What­sApp? Well, again, we don’t know. Because you can’t mon­i­tor how What­sApp is used. Even the com­pa­ny can’t. It’s one of its ‘fea­tures’:

    ...
    Peo­ple are more like­ly to believe rumours from fam­i­ly and friends. There’s no algo­rithm medi­at­ing the expe­ri­ence. And when that mis­in­for­ma­tion comes in the form of for­ward­ed texts and videos—which look the same as per­son­al mes­sages in WhatsApp—they’re lent anoth­er lay­er of legit­i­ma­cy. Then you get the net­work com­pound­ing effect; if you’re in mul­ti­ple group chats that all receive the fake news, the rep­e­ti­tion makes them more believ­able still.

    Of course, these are all just the­o­ries. Because of WhatsApp’s end-to-end encryp­tion and the closed nature of its net­works, it’s near­ly impos­si­ble to study how mis­in­for­ma­tion moves through it. For users in coun­tries with a his­to­ry of state-spon­sored vio­lence, like Brazil, that secre­cy is a fea­ture. But it’s a bug for any­one try­ing to study them. “I think What­sApp hoax­es and dis­in­for­ma­tion cam­paigns are a bit more per­ni­cious [than Face­book] because their dif­fu­sion can­not be mon­i­tored,” says Pablo Ortel­la­do, a fake news researcher and pro­fes­sor of pub­lic pol­i­cy at the Uni­ver­si­ty of Sao Paulo. Mis­in­for­ma­tion on What­sApp can only be iden­ti­fied when it jumps to oth­er social media sites or bleeds into the real world.
    ...

    “Of course, these are all just the­o­ries. Because of WhatsApp’s end-to-end encryp­tion and the closed nature of its net­works, it’s near­ly impos­si­ble to study how mis­in­for­ma­tion moves through it.”

    Yep, we have no idea what kinds of other high-quality misinformation videos are getting produced. Of course, it's not like there aren't plenty of misinformation videos readily available on Youtube and Facebook, so we do have some idea of the general type of misinformation and far-right memes that are going to flourish on platforms like WhatsApp. But for a very time-sensitive story, like getting people vaccinated before the killer virus turns into a pandemic, the inability to identify and combat disinformation like this really is quite dangerous.
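    To make concrete why even WhatsApp itself is blind to all of this, here's a minimal sketch of end-to-end public-key encryption using the PyNaCl library. This is an illustrative toy under stated assumptions, not WhatsApp's actual stack (WhatsApp uses the more elaborate Signal protocol), but the core property is the same: the decryption keys live only on the users' devices, so the relaying server only ever sees ciphertext it cannot scan, rank, or fact-check:

```python
# Minimal sketch of end-to-end encryption (pip install pynacl).
# Illustrative only: WhatsApp actually uses the Signal protocol, but the
# property shown here is the same one that blocks server-side monitoring.
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device; private keys never leave it.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"forwarded vaccine rumor video goes here")

# The platform relays only this ciphertext. Holding neither private key,
# it cannot inspect the content in transit.
print(ciphertext.hex()[:64], "...")

# Bob decrypts on his device with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))
```

    Any moderation scheme therefore has to run on the endpoints or not at all, which is exactly why researchers like Ortellado can only spot WhatsApp misinformation once it leaks onto open platforms.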

    It's a reminder that if humanity wants to embrace the cypherpunk revolution of ubiquitous strong encryption and a truly anonymous, untrackable internet, humanity is going to have to get a lot wiser. Wise enough to at least have some sort of reasonable social immune system against mind viruses like bogus news and far-right memes. Wise enough to identify and reject the many problems with ideologies like digital libertarianism. In other words, if humanity wants to safely embrace the cypherpunk revolution, it needs to be savvy enough to reject the cypherpunk revolution. It's a bit of a paradox and a recurring theme with technology and power in general: if you want this power without destroying yourself, you're going to have to be wise enough to use that power very carefully or just reject it outright, collectively and individually.

    But for now, we have lit­er­al fake news videos push­ing anti-vac­cine mis­in­for­ma­tion qui­et­ly ‘going viral’ on encrypt­ed social media in order to pro­mote the spread of a dead­ly bio­log­i­cal virus. It seems like a mile­stone of self-destruc­tive behav­ior was just reached by human­i­ty. It was a group effort. Go team.

    Posted by Pterrafractyl | March 12, 2018, 10:46 am
  3. A great new book is out on the history of the internet: Surveillance Valley by Yasha Levine. Here is a link to a long interview:
    http://mediaroots.org/surveillance-valley-the-secret-military-history-of-the-internet-with-yasha-levine/

    Posted by Hugo Turner | March 23, 2018, 11:37 am
  4. Here's a quick update on the development of the 'deepfake' technology that can create realistic-looking videos of anyone saying anything: according to experts, it should be advanced enough to cause major problems for things like political elections in a couple of years. So if you were wondering what kind of 'fake news' nightmare is in store for the US 2020 election, it's going to be the kind of nightmare that includes one fake video after another that looks completely real:

    Asso­ci­at­ed Press

    I nev­er said that! High-tech decep­tion of ‘deep­fake’ videos

    By DEB RIECHMANN
    07/02/2018

    WASHINGTON (AP) — Hey, did my con­gress­man real­ly say that? Is that real­ly Pres­i­dent Don­ald Trump on that video, or am I being duped?

    New tech­nol­o­gy on the inter­net lets any­one make videos of real peo­ple appear­ing to say things they’ve nev­er said. Repub­li­cans and Democ­rats pre­dict this high-tech way of putting words in someone’s mouth will become the lat­est weapon in dis­in­for­ma­tion wars against the Unit­ed States and oth­er West­ern democ­ra­cies.

    We’re not talk­ing about lip-sync­ing videos. This tech­nol­o­gy uses facial map­ping and arti­fi­cial intel­li­gence to pro­duce videos that appear so gen­uine it’s hard to spot the phonies. Law­mak­ers and intel­li­gence offi­cials wor­ry that the bogus videos — called deep­fakes — could be used to threat­en nation­al secu­ri­ty or inter­fere in elec­tions.

    So far, that hasn’t hap­pened, but experts say it’s not a ques­tion of if, but when.

    “I expect that here in the Unit­ed States we will start to see this con­tent in the upcom­ing midterms and nation­al elec­tion two years from now,” said Hany Farid, a dig­i­tal foren­sics expert at Dart­mouth Col­lege in Hanover, New Hamp­shire. “The tech­nol­o­gy, of course, knows no bor­ders, so I expect the impact to rip­ple around the globe.”

    When an aver­age per­son can cre­ate a real­is­tic fake video of the pres­i­dent say­ing any­thing they want, Farid said, “we have entered a new world where it is going to be dif­fi­cult to know how to believe what we see.” The reverse is a con­cern, too. Peo­ple may dis­miss as fake gen­uine footage, say of a real atroc­i­ty, to score polit­i­cal points.

    Real­iz­ing the impli­ca­tions of the tech­nol­o­gy, the U.S. Defense Advanced Research Projects Agency is already two years into a four-year pro­gram to devel­op tech­nolo­gies that can detect fake images and videos. Right now, it takes exten­sive analy­sis to iden­ti­fy pho­ny videos. It’s unclear if new ways to authen­ti­cate images or detect fakes will keep pace with deep­fake tech­nol­o­gy.

    Deep­fakes are so named because they uti­lize deep learn­ing, a form of arti­fi­cial intel­li­gence. They are made by feed­ing a com­put­er an algo­rithm, or set of instruc­tions, lots of images and audio of a cer­tain per­son. The com­put­er pro­gram learns how to mim­ic the person’s facial expres­sions, man­ner­isms, voice and inflec­tions. If you have enough video and audio of some­one, you can com­bine a fake video of the per­son with a fake audio and get them to say any­thing you want.

    So far, deep­fakes have most­ly been used to smear celebri­ties or as gags, but it’s easy to fore­see a nation state using them for nefar­i­ous activ­i­ties against the U.S., said Sen. Mar­co Rubio, R‑Fla., one of sev­er­al mem­bers of the Sen­ate intel­li­gence com­mit­tee who are express­ing con­cern about deep­fakes.

    A for­eign intel­li­gence agency could use the tech­nol­o­gy to pro­duce a fake video of an Amer­i­can politi­cian using a racial epi­thet or tak­ing a bribe, Rubio says. They could use a fake video of a U.S. sol­dier mas­sacring civil­ians over­seas, or one of a U.S. offi­cial sup­pos­ed­ly admit­ting a secret plan to car­ry out a con­spir­a­cy. Imag­ine a fake video of a U.S. leader — or an offi­cial from North Korea or Iran — warn­ing the Unit­ed States of an impend­ing dis­as­ter.

    “It’s a weapon that could be used — timed appro­pri­ate­ly and placed appro­pri­ate­ly — in the same way fake news is used, except in a video form, which could cre­ate real chaos and insta­bil­i­ty on the eve of an elec­tion or a major deci­sion of any sort,” Rubio told The Asso­ci­at­ed Press.

    Deep­fake tech­nol­o­gy still has a few hitch­es. For instance, people’s blink­ing in fake videos may appear unnat­ur­al. But the tech­nol­o­gy is improv­ing.

    “Within a year or two, it's going to be really hard for a person to distinguish between a real video and a fake video,” said Andrew Grotto, an international security fellow at the Center for International Security and Cooperation at Stanford University in California.

    “This tech­nol­o­gy, I think, will be irre­sistible for nation states to use in dis­in­for­ma­tion cam­paigns to manip­u­late pub­lic opin­ion, deceive pop­u­la­tions and under­mine con­fi­dence in our insti­tu­tions,” Grot­to said. He called for gov­ern­ment lead­ers and politi­cians to clear­ly say it has no place in civ­i­lized polit­i­cal debate.

    ...

    Rubio not­ed that in 2009, the U.S. Embassy in Moscow com­plained to the Russ­ian For­eign Min­istry about a fake sex video it said was made to dam­age the rep­u­ta­tion of a U.S. diplo­mat. The video showed the mar­ried diplo­mat, who was a liai­son to Russ­ian reli­gious and human rights groups, mak­ing tele­phone calls on a dark street. The video then showed the diplo­mat in his hotel room, scenes that appar­ent­ly were shot with a hid­den cam­era. Lat­er, the video appeared to show a man and a woman hav­ing sex in the same room with the lights off, although it was not at all clear that the man was the diplo­mat.

    John Beyr­le, who was the U.S. ambas­sador in Moscow at the time, blamed the Russ­ian gov­ern­ment for the video, which he said was clear­ly fab­ri­cat­ed.

    Michael McFaul, who was Amer­i­can ambas­sador in Rus­sia between 2012 and 2014, said Rus­sia has engaged in dis­in­for­ma­tion videos against var­i­ous polit­i­cal actors for years and that he too had been a tar­get. He has said that Russ­ian state pro­pa­gan­da insert­ed his face into pho­tographs and “spliced my speech­es to make me say things I nev­er uttered and even accused me of pedophil­ia.”

    ———-

    “I nev­er said that! High-tech decep­tion of ‘deep­fake’ videos” by DEB RIECHMANN; Asso­ci­at­ed Press; 07/02/2018

    “I expect that here in the Unit­ed States we will start to see this con­tent in the upcom­ing midterms and nation­al elec­tion two years from now,” said Hany Farid, a dig­i­tal foren­sics expert at Dart­mouth Col­lege in Hanover, New Hamp­shire. “The tech­nol­o­gy, of course, knows no bor­ders, so I expect the impact to rip­ple around the globe.””

    Yep, the way Hany Farid, a dig­i­tal foren­sics expert at Dart­mouth Col­lege, sees it, we might even see ‘deep­fakes’ impact the US midterms this year. The tech­nol­o­gy is basi­cal­ly ready to go.

    And while DARPA is reportedly already working on techniques for identifying fake images and videos, it's still unclear if even an agency like DARPA will be able to keep up with advances in the technology. In other words, even after detection technology has been developed, there's still ALWAYS going to be the potential for cutting-edge 'deepfakes' that can fool that detection technology. It's just part of our technological landscape:

    ...
    Real­iz­ing the impli­ca­tions of the tech­nol­o­gy, the U.S. Defense Advanced Research Projects Agency is already two years into a four-year pro­gram to devel­op tech­nolo­gies that can detect fake images and videos. Right now, it takes exten­sive analy­sis to iden­ti­fy pho­ny videos. It’s unclear if new ways to authen­ti­cate images or detect fakes will keep pace with deep­fake tech­nol­o­gy.

    Deep­fakes are so named because they uti­lize deep learn­ing, a form of arti­fi­cial intel­li­gence. They are made by feed­ing a com­put­er an algo­rithm, or set of instruc­tions, lots of images and audio of a cer­tain per­son. The com­put­er pro­gram learns how to mim­ic the person’s facial expres­sions, man­ner­isms, voice and inflec­tions. If you have enough video and audio of some­one, you can com­bine a fake video of the per­son with a fake audio and get them to say any­thing you want.

    ...

    Deep­fake tech­nol­o­gy still has a few hitch­es. For instance, people’s blink­ing in fake videos may appear unnat­ur­al. But the tech­nol­o­gy is improv­ing.

    “Within a year or two, it's going to be really hard for a person to distinguish between a real video and a fake video,” said Andrew Grotto, an international security fellow at the Center for International Security and Cooperation at Stanford University in California.

    “This tech­nol­o­gy, I think, will be irre­sistible for nation states to use in dis­in­for­ma­tion cam­paigns to manip­u­late pub­lic opin­ion, deceive pop­u­la­tions and under­mine con­fi­dence in our insti­tu­tions,” Grot­to said. He called for gov­ern­ment lead­ers and politi­cians to clear­ly say it has no place in civ­i­lized polit­i­cal debate.
    ...
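    For a slightly more concrete picture of the 'feed it lots of images' description above, here is a heavily simplified PyTorch sketch of the shared-encoder/two-decoder autoencoder idea used by the early deepfake tools. All of the layer sizes and the random stand-in data are illustrative assumptions; real tools add face detection and alignment, far deeper networks, and often adversarial training:

```python
# Toy sketch of the deepfake autoencoder idea (illustrative only; real tools
# use aligned face crops, much deeper networks, and long training runs).
# Requires PyTorch: pip install torch
import torch
import torch.nn as nn

def make_decoder():
    # Maps a 128-dim latent vector back to a 3x64x64 face image.
    return nn.Sequential(
        nn.Linear(128, 3 * 64 * 64),
        nn.Sigmoid(),
        nn.Unflatten(1, (3, 64, 64)),
    )

# One shared encoder learns a generic pose/expression representation...
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128), nn.ReLU())
# ...and each identity gets its own decoder to render that representation.
decoder_a, decoder_b = make_decoder(), make_decoder()

params = (list(encoder.parameters()) +
          list(decoder_a.parameters()) +
          list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in data: random tensors where real training uses thousands of
# cropped, aligned video frames of person A and person B.
faces_a = torch.rand(16, 3, 64, 64)
faces_b = torch.rand(16, 3, 64, 64)

for step in range(100):  # real training runs for hours or days
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a) +
            loss_fn(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap: encode a frame of person A, render it with person B's decoder.
fake_frame = decoder_b(encoder(faces_a[:1]))
print(fake_frame.shape)  # torch.Size([1, 3, 64, 64])
```

    The swap trick is the whole game: because the encoder is shared, it learns the pose and expression features common to both faces, while each decoder learns to render one identity, so routing person A's frames through person B's decoder produces B's face performing A's expressions.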

    And while we're guaranteed that any deepfakes introduced into the US elections will almost reflexively be blamed on Russia, the reality is that any intelligence agency on the planet (even private intelligence agencies) is going to be extremely tempted to develop these kinds of videos for propaganda purposes. And the 4Chan trolls and Alt Right are going to be investing massive amounts of time and energy into this, if they aren't already. The list of suspects is inherently going to be massive:

    ...
    So far, deep­fakes have most­ly been used to smear celebri­ties or as gags, but it’s easy to fore­see a nation state using them for nefar­i­ous activ­i­ties against the U.S., said Sen. Mar­co Rubio, R‑Fla., one of sev­er­al mem­bers of the Sen­ate intel­li­gence com­mit­tee who are express­ing con­cern about deep­fakes.

    A for­eign intel­li­gence agency could use the tech­nol­o­gy to pro­duce a fake video of an Amer­i­can politi­cian using a racial epi­thet or tak­ing a bribe, Rubio says. They could use a fake video of a U.S. sol­dier mas­sacring civil­ians over­seas, or one of a U.S. offi­cial sup­pos­ed­ly admit­ting a secret plan to car­ry out a con­spir­a­cy. Imag­ine a fake video of a U.S. leader — or an offi­cial from North Korea or Iran — warn­ing the Unit­ed States of an impend­ing dis­as­ter.

    “It’s a weapon that could be used — timed appro­pri­ate­ly and placed appro­pri­ate­ly — in the same way fake news is used, except in a video form, which could cre­ate real chaos and insta­bil­i­ty on the eve of an elec­tion or a major deci­sion of any sort,” Rubio told The Asso­ci­at­ed Press.
    ...

    Finally, let's not forget about one of the more bizarre potential consequences of the emergence of deepfake technology: it's going to make it easier than ever for Republicans to decry 'fake news!' when they are confronted with a true but politically inconvenient story. Remember when Trump's ambassador to the Netherlands, Pete Hoekstra, cried 'fake news!' when shown a video of his own comments? Well, that's going to be a very common thing going forward. So when the inevitable montages of Trump saying one horrible thing after another get rolled out for voters in 2020, it's going to be easier than ever for people to dismiss them as 'fake news!':

    ...
    When an aver­age per­son can cre­ate a real­is­tic fake video of the pres­i­dent say­ing any­thing they want, Farid said, “we have entered a new world where it is going to be dif­fi­cult to know how to believe what we see.” The reverse is a con­cern, too. Peo­ple may dis­miss as fake gen­uine footage, say of a real atroc­i­ty, to score polit­i­cal points.
    ...

    Welcome to the world where you really can't believe your lying eyes. Except when you can and should.

    So how will humanity handle a world where any random troll can create convincing fake videos? Well, based on our track record with other sources of information that can potentially be faked and require a degree of wisdom and discernment to navigate: not well. Not well at all.

    Posted by Pterrafractyl | July 3, 2018, 2:47 pm
  5. Here's the latest reminder that when the 'deepfake' video technology develops to the point of being indistinguishable from real videos, the far right is going to go into overdrive creating videos purporting to prove virtually every far right fantasy you can imagine. Especially the 'PizzaGate' conspiracy theory pushed by the right wing in the final weeks of the 2016 campaign alleging that Hillary Clinton and a number of other prominent Democrats are part of a Satanist child abuse ring:

    Far right crackpot 'journalist' Liz Crokin is repeating her assertions that a video of Hillary Clinton — specifically, of Hillary sexually abusing a child and then cutting the child's face off and eating it — is floating around on the Dark Web and, according to her sources, is definitely real. And now Crokin warns that reports about 'deepfake' technology are just disinformation stories being preemptively put out by the Deep State to make the public skeptical when the videos of Hillary cutting the face off of a child come to light:

    Right Wing Watch

    Liz Crokin: Trump Con­firmed The Exis­tence Of A Video Show­ing Hilary Clin­ton Tor­tur­ing A Child

    By Kyle Manty­la
    July 12, 2018 2:00 pm

    Right-wing “jour­nal­ist” Liz Crokin appeared on Sheila Zilinsky’s pod­cast ear­li­er this week, where the two unhinged con­spir­a­cy the­o­rists dis­cussed Crokin’s asser­tion that a video exists show­ing Hillary Clin­ton sex­u­al­ly abus­ing and tor­tur­ing a child.

    “I under­stand that there is a video cir­cu­lat­ing on the dark web of [Clin­ton] eat­ing the face of a child,” Zilin­sky said. “Does this video exist?”

    “Yes,” respond­ed Crokin. “There are videos that prove that Hillary Clin­ton is involved in child sex traf­fick­ing and pedophil­ia. I have sources that have told me that; I trust these sources, so there is evi­dence that exists that proves that she is involved in this stuff … I believe with all my heart that this is true.”

    After claim­ing that “the deep state” tar­get­ed for­mer White House nation­al secu­ri­ty advis­er Michael Fly­nn for destruc­tion because he and his son “were expos­ing Piz­za­gate,” Crokin insist­ed that media reports warn­ing about the abil­i­ty to use mod­ern tech­nol­o­gy to cre­ate fake videos that make it appear as if famous peo­ple are say­ing or doing things are secret­ly a part of an effort to pre­pare the pub­lic to dis­miss the Clin­ton video when it is final­ly released.

    “Based off of what lies they report, I can tell what they’re afraid of, I can tell what’s real­ly going on behind the scenes,” she said. “So the fact that they are say­ing, ‘Oh, if a tape comes out involv­ing Hillary or Oba­ma doing like a sex act or x, y, z, it’s fake news,’ that tells me that there are tapes that incrim­i­nate Oba­ma and Hillary.”

    As fur­ther proof that such tapes exist, Crokin repeat­ed her claim that Pres­i­dent Trump’s tweet of a link to a fringe right-wing con­spir­a­cy web­site that fea­tured a video of her dis­cussing this sup­posed Clin­ton tape was con­fir­ma­tion of the verac­i­ty of her claims.

    “When Pres­i­dent Trump retweet­ed MAGA Pill, MAGA Pill’s tweet was my video talk­ing about Hillary Clinton’s sex tape,” she insist­ed. “I know Pres­i­dent Trump, I’ve met him, I’ve stud­ied him, I’ve report­ed on him … I’ve known him and of him and report­ed on him for a very long time. I under­stand how his brain works, I under­stand how he thinks, I under­stand ‘The Art of War,’ his favorite book, I under­stand this man. And I know that Pres­i­dent Trump—there’s no way that he didn’t know when he retweet­ed MAGA Pill that my video talk­ing about Hillary Clinton’s sex tape was MAGA Pill’s pinned tweet. There is no way that Pres­i­dent Trump didn’t know that.”

    ...

    ———-

    “Liz Crokin: Trump Con­firmed The Exis­tence Of A Video Show­ing Hilary Clin­ton Tor­tur­ing A Child” by Kyle Manty­la; Right Wing Watch; 07/12/2018

    ““I under­stand that there is a video cir­cu­lat­ing on the dark web of [Clin­ton] eat­ing the face of a child,” Zilin­sky said. “Does this video exist?””

    Wel­come to our dystopi­an near-future. Does video of [insert hor­ror sce­nario here] actu­al­ly exist? Oh yes, we will be assured, it’s def­i­nite­ly total­ly real and you can total­ly trust my source!

    ...
    “Yes,” respond­ed Crokin. “There are videos that prove that Hillary Clin­ton is involved in child sex traf­fick­ing and pedophil­ia. I have sources that have told me that; I trust these sources, so there is evi­dence that exists that proves that she is involved in this stuff … I believe with all my heart that this is true.”
    ...

    And some­day, with advances in deep­fake video tech­nol­o­gy and spe­cial effects they might actu­al­ly pro­duce such a video. It’s real­ly just a mat­ter of time, and at this point we have to just hope that they use spe­cial effects to fake a child hav­ing their face cut off and eat­en and don’t actu­al­ly do that to a kid (you nev­er know when you’re deal­ing with Nazis and their fel­low trav­el­ers).

    Crokin then went on to insist that the var­i­ous news arti­cles warn­ing about the advances in deep­fake tech­nol­o­gy were all part of a secret effort to pro­tect Hillary when the video is final­ly released:

    ...
    After claim­ing that “the deep state” tar­get­ed for­mer White House nation­al secu­ri­ty advis­er Michael Fly­nn for destruc­tion because he and his son “were expos­ing Piz­za­gate,” Crokin insist­ed that media reports warn­ing about the abil­i­ty to use mod­ern tech­nol­o­gy to cre­ate fake videos that make it appear as if famous peo­ple are say­ing or doing things are secret­ly a part of an effort to pre­pare the pub­lic to dis­miss the Clin­ton video when it is final­ly released.

    “Based off of what lies they report, I can tell what they’re afraid of, I can tell what’s real­ly going on behind the scenes,” she said. “So the fact that they are say­ing, ‘Oh, if a tape comes out involv­ing Hillary or Oba­ma doing like a sex act or x, y, z, it’s fake news,’ that tells me that there are tapes that incrim­i­nate Oba­ma and Hillary.”
    ...

    And as further evidence of her claims, Crokin points to President Trump retweeting a link to a website featuring a video of Crokin discussing this alleged Hillary-child-face-eating video:

    ...
    As fur­ther proof that such tapes exist, Crokin repeat­ed her claim that Pres­i­dent Trump’s tweet of a link to a fringe right-wing con­spir­a­cy web­site that fea­tured a video of her dis­cussing this sup­posed Clin­ton tape was con­fir­ma­tion of the verac­i­ty of her claims.

    “When Pres­i­dent Trump retweet­ed MAGA Pill, MAGA Pill’s tweet was my video talk­ing about Hillary Clinton’s sex tape,” she insist­ed. “I know Pres­i­dent Trump, I’ve met him, I’ve stud­ied him, I’ve report­ed on him … I’ve known him and of him and report­ed on him for a very long time. I under­stand how his brain works, I under­stand how he thinks, I under­stand ‘The Art of War,’ his favorite book, I under­stand this man. And I know that Pres­i­dent Trump—there’s no way that he didn’t know when he retweet­ed MAGA Pill that my video talk­ing about Hillary Clinton’s sex tape was MAGA Pill’s pinned tweet. There is no way that Pres­i­dent Trump didn’t know that.”
    ...

    Yep, the pres­i­dent is pro­mot­ing this lady. And that, right there, sum­ma­rizes the next stage of Amer­i­ca’s night­mare descent into neo-Nazi fan­tasies that’s just around the cor­ner (not to men­tion the impact on the rest of the world).

    Of course, the denials that deepfake technology exists will start getting rather amusing after people use that same technology to create deepfakes of Trump, Crokin, and anyone else in the public eye (since you need lots of videos of someone to make the technology work).

    Also keep in mind that Crokin claims the child-face-eat­ing video is mere­ly one of the videos of Hillary Clin­ton float­ing around. There are many videos of Hillary that Crokin claims to be aware of. So when the child-face-eat­ing video emerges, it’s prob­a­bly going to just be a pre­view of what’s to come:

    Right Wing Watch

    Liz Crokin Claims That She Knows ‘One Hun­dred Per­cent’ That ‘A Video Of Hillary Clin­ton Sex­u­al­ly Abus­ing A Child Exists’

    By Kyle Manty­la | April 13, 2018 1:07 pm

    Fringe right-wing conspiracy theorist Liz Crokin posted a video on YouTube last night in which she declared that a gruesome video showing Hillary Clinton cutting the face off of a living child exists and will soon be released for all the world to see.

    “I know with absolute cer­tain­ty that there is a tape that exists that involves Hillary Clin­ton sex­u­al­ly abus­ing a child,” Crokin said. “I have got­ten this con­firmed from very respectable and high-lev­el sources.”

    Crokin said that reports that Russ­ian-linked accounts post­ed a fake Clin­ton sex tape dur­ing the 2016 elec­tion are false, say­ing that no such fake video exists and that the sto­ries about it are sim­ply an effort to con­fuse the pub­lic “so when and if the actu­al video of Hillary Clin­ton sex­u­al­ly abus­ing a child comes out, the seeds of doubt are already plant­ed in people’s heads.”

    “All I know is that, one hun­dred per­cent, a video of Hillary Clin­ton sex­u­al­ly abus­ing a child exists,” she said. “I know there’s many videos incrim­i­nat­ing her, I just don’t know which one they are going to release. But there are peo­ple, there are claims that this sex­u­al abuse video is on the dark web and I know that some peo­ple have seen it, some in law enforce­ment, the NYPD law enforce­ment, some NYPD offi­cers have seen it and it made them sick, it made them cry, it made them vom­it, some of them had to seek psy­cho­log­i­cal coun­sel­ing after this.”

    “I’m not going to go into too much detail because it’s so dis­gust­ing, but in this video, they cut off a child’s face as the child is alive,” Crokin claimed. “I’m just going to leave it at that.”

    ———-

    “Liz Crokin Claims That She Knows ‘One Hun­dred Per­cent’ That ‘A Video Of Hillary Clin­ton Sex­u­al­ly Abus­ing A Child Exists’” by Kyle Manty­la; Right Wing Watch; 04/13/2018

    “I’m not going to go into too much detail because it’s so dis­gust­ing, but in this video, they cut off a child’s face as the child is alive...I’m just going to leave it at that.”

    The child was alive when Hillary cut its face off and ate it after sexually abusing it. That's what Crokin has spent months assuring her audience is a real thing, and Donald Trump appears to be promoting her. Of course.

    And that's just one of the alleged Hillary-as-beyond-evil-witch videos Crokin claims with certainty are real and in the possession of some law enforcement officials (this is what the whole 'QAnon' obsession on the right is about):

    ...
    “All I know is that, one hun­dred per­cent, a video of Hillary Clin­ton sex­u­al­ly abus­ing a child exists,” she said. “I know there’s many videos incrim­i­nat­ing her, I just don’t know which one they are going to release. But there are peo­ple, there are claims that this sex­u­al abuse video is on the dark web and I know that some peo­ple have seen it, some in law enforce­ment, the NYPD law enforce­ment, some NYPD offi­cers have seen it and it made them sick, it made them cry, it made them vom­it, some of them had to seek psy­cho­log­i­cal coun­sel­ing after this.”
    ...

    Also notice how Crokin acts like she doesn't want to go into the details, and yet gives all sorts of details that hint at something so horrific that the alleged NYPD officers who have seen the video needed psychological counseling. And that points towards one of the other aspects of this looming nightmare phase of America's intellectual descent: the flood of far right fake videos that are going to be produced is probably going to be designed to psychologically traumatize the viewer. The global public is about to be exposed to one torture/murder/porn video of famous people after another, because if you're trying to impact your audience, visually traumatizing them is an effective way to do it.

    It's no accident that much of the far right conspiracy culture has a fixation on child sex crimes. Yes, some of that fixation is due to real cases of elite-protected child abuse, like 'The Franklin cover-up' or Jimmy Savile, and the profound moral gravity such crimes would carry if organized elite sex abuse rings actually exist. The visceral revulsion of crimes of this nature makes them inherently impactful. But in the right-wing conspiracy worldview pedophilia tends to play a central role (as anyone familiar with Alex Jones can attest). Crokin is merely one example of that.

    And that's exactly why we should expect the slew of fake videos that are inevitably going to be produced in droves for political gain to involve images that truly psychologically scar the viewer. It's just more impactful that way.

    So whether you're a fan of Hillary Clinton or loathe her, get ready to have her seared in your memory forever. Probably eating the face of a child or something like that.

    Posted by Pterrafractyl | July 13, 2018, 2:39 pm
  6. If you didn't think access to a gun could get much easier in America, it's time to rethink that proposition: starting on August 1st, it will be legal under US law to distribute instructions over the internet for creating 3D-printable guns. Recall that 3D-printable guns were first developed in 2013 by far right crypto-anarchist Cody Wilson, the guy also behind the untraceable Bitcoin Dark Wallet and Hatreon, a crowdfunding platform for neo-Nazis and other people who got kicked off of Patreon.

    Wilson first put instructions for 3D-printable guns online back in 2013, but the US State Department forced him to take them down, arguing it amounted to exporting weapons. Wilson sued, and the case was stuck in the courts. Flash forward to April of 2018, and the US government suddenly decided to reverse course and settle. A settlement was reached and August 1 was declared the day 3D-printable gun instructions would flood the internet.

    So it looks like the cypher­punk approach of using tech­nol­o­gy to get around polit­i­cal real­i­ties you don’t like is about to be applied to gun con­trol laws, with untrace­able guns for poten­tial­ly any­one as one of the imme­di­ate con­se­quences:

    Quartz

    The age of 3D-print­ed guns in Amer­i­ca is here

    Han­na Kozlows­ka
    July 28, 2018

    A last-ditch effort to block a US orga­ni­za­tion from mak­ing instruc­tions to 3D-print a gun avail­able to down­load has failed. The tem­plate will be post­ed online on Wednes­day (Aug. 1).

    From then, any­one with access to a 3D print­er will be able to cre­ate their own firearm out of the same kind of mate­r­i­al that’s used for Lego blocks. The guns are untrace­able, and require no back­ground check to make or own.

    “The age of the down­load­able gun for­mal­ly begins,” states the web­site of Defense Dis­trib­uted, the non-prof­it defense firm that has fought for years to make this “age” pos­si­ble.

    In April, Defense Dis­trib­uted reached a set­tle­ment with the US State Depart­ment in a fed­er­al law­suit that allowed pub­lish­ing the plans on print­ing a gun to pro­ceed, which took effect in June. On July 26, gun-con­trol advo­cates asked a fed­er­al court in Texas to block the deci­sion, but the judge decid­ed not to inter­vene. Law­mak­ers in Wash­ing­ton also tried in the past week to mobi­lize against the devel­op­ment, but it’s like­ly all too late (pay­wall).

    The first of this kind of gun—designed by the founder of Defense Dis­trib­uted Cody Wil­son, a self-described cryp­to-anar­chist—was “The Lib­er­a­tor,” a sin­gle shot pis­tol, which had a met­al part that made it com­pli­ant with a US gun-detec­tion law. When the plans were first released in 2013, Wil­son claimed they were down­loaded more than 100,000 times in the first cou­ple of days. Short­ly there­after, the gov­ern­ment said the enter­prise was ille­gal.

    ...

    Defense Dis­trib­uted sued the fed­er­al gov­ern­ment in 2015 after it was blocked from pub­lish­ing the 3D-print­ing plans online. With the April 2018 set­tle­ment, the gov­ern­ment reversed its posi­tion. David Chip­man, for­mer agent at the Bureau of Alco­hol, Tobac­co, Firearms, and Explo­sives (ATF) and cur­rent advis­er to Gif­fords, a gun con­trol orga­ni­za­tion run by for­mer con­gress­woman Gab­by Gif­fords (who was infa­mous­ly shot in the head), blames the about-face on the change in pres­i­den­tial admin­is­tra­tions.

    The decision means that people who can’t pass a standard background check “like terrorists, convicted felons, and domestic abusers” will be able to print out a gun without a serial number, Chipman wrote in a blog post. “This could have severe repercussions a decade from now if we allow weapons of this kind to multiply.”

    ———-

    “The age of 3D-print­ed guns in Amer­i­ca is here” by Han­na Kozlows­ka; Quartz; 07/28/2018

    “A last-ditch effort to block a US orga­ni­za­tion from mak­ing instruc­tions to 3D-print a gun avail­able to down­load has failed. The tem­plate will be post­ed online on Wednes­day (Aug. 1).”

    In just a couple more days, anyone with access to a 3D printer will be able to create as many untraceable guns as they desire. This is thanks to a settlement reached in April between Cody Wilson’s company, Defense Distributed, and the federal government:

    ...
    From then, any­one with access to a 3D print­er will be able to cre­ate their own firearm out of the same kind of mate­r­i­al that’s used for Lego blocks. The guns are untrace­able, and require no back­ground check to make or own.

    “The age of the down­load­able gun for­mal­ly begins,” states the web­site of Defense Dis­trib­uted, the non-prof­it defense firm that has fought for years to make this “age” pos­si­ble.

    In April, Defense Dis­trib­uted reached a set­tle­ment with the US State Depart­ment in a fed­er­al law­suit that allowed pub­lish­ing the plans on print­ing a gun to pro­ceed, which took effect in June. On July 26, gun-con­trol advo­cates asked a fed­er­al court in Texas to block the deci­sion, but the judge decid­ed not to inter­vene. Law­mak­ers in Wash­ing­ton also tried in the past week to mobi­lize against the devel­op­ment, but it’s like­ly all too late (pay­wall).

    The first of this kind of gun—designed by the founder of Defense Dis­trib­uted Cody Wil­son, a self-described cryp­to-anar­chist—was “The Lib­er­a­tor,” a sin­gle shot pis­tol, which had a met­al part that made it com­pli­ant with a US gun-detec­tion law. When the plans were first released in 2013, Wil­son claimed they were down­loaded more than 100,000 times in the first cou­ple of days. Short­ly there­after, the gov­ern­ment said the enter­prise was ille­gal.
    ...

    So why exactly did the US government suddenly settle the case in April and pave the way for the distribution of 3D printed gun instructions? Well, the answer appears to be that the Trump administration wanted the case dropped. At least that’s how gun control advocates interpreted it:

    ...
    Defense Dis­trib­uted sued the fed­er­al gov­ern­ment in 2015 after it was blocked from pub­lish­ing the 3D-print­ing plans online. With the April 2018 set­tle­ment, the gov­ern­ment reversed its posi­tion. David Chip­man, for­mer agent at the Bureau of Alco­hol, Tobac­co, Firearms, and Explo­sives (ATF) and cur­rent advis­er to Gif­fords, a gun con­trol orga­ni­za­tion run by for­mer con­gress­woman Gab­by Gif­fords (who was infa­mous­ly shot in the head), blames the about-face on the change in pres­i­den­tial admin­is­tra­tions.

    The decision means that people who can’t pass a standard background check “like terrorists, convicted felons, and domestic abusers” will be able to print out a gun without a serial number, Chipman wrote in a blog post. “This could have severe repercussions a decade from now if we allow weapons of this kind to multiply.”
    ...

    Although, as the following article notes, one reason for the change in the federal government’s attitude towards the case may have been a desire to loosen the rules governing gun exports in general, a change US gun manufacturers have long wanted. The Trump administration proposed revising and streamlining the process for exporting consumer firearms and related technical information, including tutorials for 3D printed guns. The rule change would also shift jurisdiction over some items from the State Department to the Commerce Department (don’t forget that it was the State Department that forced the initial takedown of the 3D printing instructions). So it sounds like the Trump administration’s move to legalize the distribution of 3D printable gun instructions may have been part of a broader effort to export more guns in general:

    The New York Times

    ‘Down­load­able Gun’ Clears a Legal Obsta­cle, and Activists Are Alarmed

    By Tiffany Hsu and Alan Feuer
    July 13, 2018

    Learn­ing to make a so-called ghost gun — an untrace­able, unreg­is­tered firearm with­out a ser­i­al num­ber — could soon become much eas­i­er.

    The Unit­ed States last month agreed to allow a Texas man to dis­trib­ute online instruc­tion man­u­als for a pis­tol that could be made by any­one with access to a 3‑D print­er. The man, Cody Wil­son, had sued the gov­ern­ment after the State Depart­ment forced him to take down the instruc­tions because they vio­lat­ed export laws.

    Mr. Wil­son, who is well known in anar­chist and gun-rights com­mu­ni­ties, com­plained that his right to free speech was being sti­fled and that he was shar­ing com­put­er code, not actu­al guns.

    The case was set­tled on June 29, and Mr. Wil­son gave The New York Times a copy of the agree­ment this week. The set­tle­ment states that 3‑D print­ing tuto­ri­als are approved “for pub­lic release (i.e. unlim­it­ed dis­tri­b­u­tion) in any form.”

    The gov­ern­ment also agreed to pay near­ly $40,000 of Mr. Wilson’s legal fees.

    The will­ing­ness to resolve the case — after the gov­ern­ment had won some low­er court judg­ments — has raised alarms among gun-con­trol advo­cates, who said it would make it eas­i­er for felons and oth­ers to get firearms. Some crit­ics said it sug­gest­ed close ties between the Trump admin­is­tra­tion and gun-own­er­ship advo­cates, this week fil­ing requests for doc­u­ments that might explain why the gov­ern­ment agreed to set­tle.

    The admin­is­tra­tion “capit­u­lat­ed in a case it had won at every step of the way,” said J. Adam Skag­gs, the chief coun­sel for the Gif­fords Law Cen­ter to Pre­vent Gun Vio­lence. “This isn’t a case where the under­ly­ing facts of the law changed. The only thing that changed was the admin­is­tra­tion.”

    Mr. Wilson’s orga­ni­za­tion, Defense Dis­trib­uted, will repost its online guides on Aug. 1, when “the age of the down­load­able gun for­mal­ly begins,” accord­ing to its web­site. The files will include plans to make a vari­ety of firearms using 3‑D print­ers, includ­ing for AR-15-style rifles, which have been used in sev­er­al mass shoot­ings.

    Mr. Wil­son said the set­tle­ment would allow gun­mak­ing enthu­si­asts to come out from the shad­ows. Copies of his plans have cir­cu­lat­ed on the so-called dark web since his site went down.

    “I can see how it would attract more peo­ple and maybe lessen the tac­tic of hav­ing to hide your iden­ti­ty,” Mr. Wil­son said of the set­tle­ment in an inter­view. “It’s not a huge space right now, but I do know that it’s only going to accel­er­ate things.”

    But as the “land­mark set­tle­ment” brings ghost gun instruc­tions out into the open, it could also give felons and domes­tic abusers access to firearms that back­ground checks would oth­er­wise block them from own­ing, said Adam Win­kler, a law pro­fes­sor at the Uni­ver­si­ty of Cal­i­for­nia, Los Ange­les.

    “The cur­rent laws are already dif­fi­cult to enforce — they’re his­tor­i­cal­ly not espe­cial­ly pow­er­ful, and they’re rid­dled with loop­holes — and this will just make those laws eas­i­er to evade,” Mr. Win­kler said. “It not only allows this tech to flour­ish out of the under­ground but gives it legal sanc­tion.”

    Some saw the set­tle­ment as proof that the Trump admin­is­tra­tion want­ed to fur­ther dereg­u­late the gun indus­try and increase access to firearms. This year, the admin­is­tra­tion pro­posed a rule change that would revise and stream­line the process for export­ing con­sumer firearms and relat­ed tech­ni­cal infor­ma­tion, includ­ing tuto­ri­als for 3‑D print­ed designs.

    The change, long sought by firearms man­u­fac­tur­ers, would shift juris­dic­tion of cer­tain items from the State Depart­ment to the Com­merce Depart­ment, which uses a sim­pler licens­ing pro­ce­dure for exports.

    On Thurs­day and Fri­day, the Brady Cen­ter to Pre­vent Gun Vio­lence filed requests under the Free­dom of Infor­ma­tion Act for any doc­u­ments show­ing how the gov­ern­ment decid­ed on the set­tle­ment over print­able firearms, and whether orga­ni­za­tions like the Nation­al Rifle Asso­ci­a­tion or the Nation­al Shoot­ing Sports Foun­da­tion were involved.

    Nei­ther trade group com­ment­ed for this arti­cle, but some gun advo­cates said Mr. Trump has been less help­ful toward the firearms indus­try than he had sug­gest­ed he would be.

    Mr. Wil­son also said that “there has not been a pro-gun streak” under Mr. Trump’s Jus­tice Depart­ment, though he praised the nom­i­na­tion of Judge Brett M. Kavanaugh, who is seen as a cham­pi­on of Sec­ond Amend­ment rights, to the Supreme Court.

    “Trump will go to the N.R.A. and be like, ‘I’m your great­est friend,’ but unfor­tu­nate­ly his D.O.J. has fought gun cas­es tooth and nail in the courts,” he said.

    Mr. Wil­son clashed with the gov­ern­ment in 2013 after he suc­cess­ful­ly print­ed a most­ly plas­tic hand­gun — a tech-focused twist on a long­stand­ing and gen­er­al­ly legal tra­di­tion of do-it-your­self gun­mak­ing that has includ­ed AR-15 craft­ing par­ties in enthu­si­asts’ garages. His cre­ation inspired Philadel­phia to pass leg­is­la­tion ban­ning the use of 3‑D print­ers to man­u­fac­ture firearms.

    After Mr. Wil­son post­ed online blue­prints for the gun, they were down­loaded more than 100,000 times with­in a few days, and lat­er appeared on oth­er web­sites and file-shar­ing ser­vices.

    The State Depart­ment quick­ly caught wind of the files and demand­ed that Mr. Wil­son remove them, say­ing that they vio­lat­ed export reg­u­la­tions deal­ing with sen­si­tive mil­i­tary hard­ware and tech­nol­o­gy.

    Mr. Wil­son capit­u­lat­ed, and after two years paired up with the Sec­ond Amend­ment Foun­da­tion to file his law­suit. A Fed­er­al Dis­trict Court judge denied his request for a pre­lim­i­nary injunc­tion against the State Depart­ment, a deci­sion that an appel­late court upheld. The Supreme Court declined to hear the case.

    In a state­ment, the State Depart­ment said the set­tle­ment with Mr. Wil­son was vol­un­tary and “entered into fol­low­ing nego­ti­a­tions,” adding that “the court did not rule in favor of the plain­tiffs in this case.”

    To raise mon­ey for his legal defense, he said, he sold milling machines that can read dig­i­tal design files and stamp out met­al gun parts. Pro­ceeds from the so-called Ghost Gun­ner machines, which cost $1,675 each, are used to run his orga­ni­za­tion, he said.

    Ghost guns, by their nature, are dif­fi­cult to track.

    Guns man­u­fac­tured for sale fea­ture a ser­i­al num­ber on the receiv­er, which hous­es the fir­ing mech­a­nism. But unfin­ished frames known as “80 per­cent” receivers can be eas­i­ly pur­chased, com­plet­ed with machin­ery like the Ghost Gun­ner and then com­bined with the remain­ing parts of the firearm, which are read­i­ly avail­able online and at gun shows.

    ...

    But with the gov­ern­ment adjust­ing the export rules that first sparked the case, Mr. Wil­son will be able to freely pub­lish blue­prints for 3‑D print­ers, said Alan M. Got­tlieb, the founder of the Sec­ond Amend­ment Foun­da­tion, in a state­ment.

    “Not only is this a First Amend­ment vic­to­ry for free speech,” he said, “it also is a dev­as­tat­ing blow to the gun pro­hi­bi­tion lob­by.”

    ———-

    “Learn­ing to make a so-called ghost gun — an untrace­able, unreg­is­tered firearm with­out a ser­i­al num­ber — could soon become much eas­i­er.”

    DIY untraceable firearms: that’s about to be a thing, and the US government legally sanctioned it. And that sudden change of heart, combined with the legal victories the government had previously enjoyed in this case, is what left so many gun control advocates concluding that the Trump administration decided to promote 3D printable guns:

    ...
    The gov­ern­ment also agreed to pay near­ly $40,000 of Mr. Wilson’s legal fees.

    The will­ing­ness to resolve the case — after the gov­ern­ment had won some low­er court judg­ments — has raised alarms among gun-con­trol advo­cates, who said it would make it eas­i­er for felons and oth­ers to get firearms. Some crit­ics said it sug­gest­ed close ties between the Trump admin­is­tra­tion and gun-own­er­ship advo­cates, this week fil­ing requests for doc­u­ments that might explain why the gov­ern­ment agreed to set­tle.

    The admin­is­tra­tion “capit­u­lat­ed in a case it had won at every step of the way,” said J. Adam Skag­gs, the chief coun­sel for the Gif­fords Law Cen­ter to Pre­vent Gun Vio­lence. “This isn’t a case where the under­ly­ing facts of the law changed. The only thing that changed was the admin­is­tra­tion.”
    ...

    And note how a Federal District Court judge had denied Wilson’s request for a preliminary injunction against the State Department, a decision an appellate court upheld, and the Supreme Court declined to hear the case. The State Department also issued a statement saying the settlement was voluntary and “entered into following negotiations,” adding that “the court did not rule in favor of the plaintiffs in this case.” That sure doesn’t sound like a government that was on the verge of losing its case:

    ...
    The State Depart­ment quick­ly caught wind of the files and demand­ed that Mr. Wil­son remove them, say­ing that they vio­lat­ed export reg­u­la­tions deal­ing with sen­si­tive mil­i­tary hard­ware and tech­nol­o­gy.

    Mr. Wil­son capit­u­lat­ed, and after two years paired up with the Sec­ond Amend­ment Foun­da­tion to file his law­suit. A Fed­er­al Dis­trict Court judge denied his request for a pre­lim­i­nary injunc­tion against the State Depart­ment, a deci­sion that an appel­late court upheld. The Supreme Court declined to hear the case.

    In a state­ment, the State Depart­ment said the set­tle­ment with Mr. Wil­son was vol­un­tary and “entered into fol­low­ing nego­ti­a­tions,” adding that “the court did not rule in favor of the plain­tiffs in this case.”
    ...

    But that apparent desire by the Trump administration to promote 3D printable guns might be less a reflection of a specific interest in 3D printable guns and more a reflection of the administration’s desire to promote gun exports in general:

    ...
    Some saw the set­tle­ment as proof that the Trump admin­is­tra­tion want­ed to fur­ther dereg­u­late the gun indus­try and increase access to firearms. This year, the admin­is­tra­tion pro­posed a rule change that would revise and stream­line the process for export­ing con­sumer firearms and relat­ed tech­ni­cal infor­ma­tion, includ­ing tuto­ri­als for 3‑D print­ed designs.

    The change, long sought by firearms man­u­fac­tur­ers, would shift juris­dic­tion of cer­tain items from the State Depart­ment to the Com­merce Depart­ment, which uses a sim­pler licens­ing pro­ce­dure for exports.
    ...

    Regardless, we are just a couple of days away from 3D printable gun instructions being legally distributed online. And it’s not just single-shot pistols; it’s going to include AR-15-style rifles:

    ...
    Mr. Wilson’s orga­ni­za­tion, Defense Dis­trib­uted, will repost its online guides on Aug. 1, when “the age of the down­load­able gun for­mal­ly begins,” accord­ing to its web­site. The files will include plans to make a vari­ety of firearms using 3‑D print­ers, includ­ing for AR-15-style rifles, which have been used in sev­er­al mass shoot­ings.

    Mr. Wil­son said the set­tle­ment would allow gun­mak­ing enthu­si­asts to come out from the shad­ows. Copies of his plans have cir­cu­lat­ed on the so-called dark web since his site went down.
    ...

    And this probably also means Wilson is going to be selling a lot more of those gun milling machines that allow anyone to create the metal components of their untraceable ghost guns:

    ...
    To raise mon­ey for his legal defense, he said, he sold milling machines that can read dig­i­tal design files and stamp out met­al gun parts. Pro­ceeds from the so-called Ghost Gun­ner machines, which cost $1,675 each, are used to run his orga­ni­za­tion, he said.

    Ghost guns, by their nature, are dif­fi­cult to track.

    Guns man­u­fac­tured for sale fea­ture a ser­i­al num­ber on the receiv­er, which hous­es the fir­ing mech­a­nism. But unfin­ished frames known as “80 per­cent” receivers can be eas­i­ly pur­chased, com­plet­ed with machin­ery like the Ghost Gun­ner and then com­bined with the remain­ing parts of the firearm, which are read­i­ly avail­able online and at gun shows.
    ...

    And keep in mind that, while this is a story about a US legal case, it’s effectively a global story. There’s undoubtedly going to be an explosion of 3D printable blueprints for pretty much any tool of violence one can imagine, accessible to anyone with an internet connection. People won’t need to go scouring the dark web or find some illicit dealer in 3D printer instructions. They’ll just go to one of the many websites brimming with a growing library of all sorts of 3D printable sophisticated weaponry.

    So now we get to watch this grim experiment unfold. And who knows, it might actually reduce US gun exports by preemptively saturating export markets with guns. Because why pay for an expensive imported gun when you can just print a cheap one?

    More generally, what’s going to happen as 3D printable guns become accessible in every corner of the globe? What kinds of conflicts might pop up that simply wouldn’t have been possible before? We’re long familiar with conflicts fueled by large numbers of small arms flooding into a country, but someone has to be willing to supply those arms; there’s generally some sort of state sponsor behind a conflict. What happens when any random group or movement can effectively arm itself? Is humanity going to get even more violent as the cost of guns plummets and accessibility explodes? Probably. We’ll be collectively confirming that soon.

    Posted by Pterrafractyl | July 30, 2018, 3:43 pm
  7. Remember that report from November 2017 about how Google’s Android smartphones were secretly gathering surprisingly precise location information, derived from cell towers, even when people turned off “Location Services”? Well, here’s a follow-up report on that: Google claims it changed that policy, but it’s somehow still collecting very precise location data from Android phones and iPhones (if you use Google’s apps) even when you turn off location settings (surprise!):

    Asso­ci­at­ed Press

    AP Exclu­sive: Google tracks your move­ments, like it or not

    By RYAN NAKASHIMA
    08/15/2018

    SAN FRANCISCO (AP) — Google wants to know where you go so bad­ly that it records your move­ments even when you explic­it­ly tell it not to.

    An Asso­ci­at­ed Press inves­ti­ga­tion found that many Google ser­vices on Android devices and iPhones store your loca­tion data even if you’ve used a pri­va­cy set­ting that says it will pre­vent Google from doing so.

    Com­put­er-sci­ence researchers at Prince­ton con­firmed these find­ings at the AP’s request.

    For the most part, Google is upfront about ask­ing per­mis­sion to use your loca­tion infor­ma­tion. An app like Google Maps will remind you to allow access to loca­tion if you use it for nav­i­gat­ing. If you agree to let it record your loca­tion over time, Google Maps will dis­play that his­to­ry for you in a “time­line” that maps out your dai­ly move­ments.

    Stor­ing your minute-by-minute trav­els car­ries pri­va­cy risks and has been used by police to deter­mine the loca­tion of sus­pects — such as a war­rant that police in Raleigh, North Car­oli­na, served on Google last year to find devices near a mur­der scene. So the com­pa­ny lets you “pause” a set­ting called Loca­tion His­to­ry.

    Google says that will pre­vent the com­pa­ny from remem­ber­ing where you’ve been. Google’s sup­port page on the sub­ject states: “You can turn off Loca­tion His­to­ry at any time. With Loca­tion His­to­ry off, the places you go are no longer stored.”

    That isn’t true. Even with Location History paused, some Google apps automatically store time-stamped location data without asking. (It’s possible, although laborious, to delete it.)

    For exam­ple, Google stores a snap­shot of where you are when you mere­ly open its Maps app. Auto­mat­ic dai­ly weath­er updates on Android phones pin­point rough­ly where you are. And some search­es that have noth­ing to do with loca­tion, like “choco­late chip cook­ies,” or “kids sci­ence kits,” pin­point your pre­cise lat­i­tude and lon­gi­tude — accu­rate to the square foot — and save it to your Google account.

    The pri­va­cy issue affects some two bil­lion users of devices that run Google’s Android oper­at­ing soft­ware and hun­dreds of mil­lions of world­wide iPhone users who rely on Google for maps or search.

    Stor­ing loca­tion data in vio­la­tion of a user’s pref­er­ences is wrong, said Jonathan May­er, a Prince­ton com­put­er sci­en­tist and for­mer chief tech­nol­o­gist for the Fed­er­al Com­mu­ni­ca­tions Commission’s enforce­ment bureau. A researcher from Mayer’s lab con­firmed the AP’s find­ings on mul­ti­ple Android devices; the AP con­duct­ed its own tests on sev­er­al iPhones that found the same behav­ior.

    “If you’re going to allow users to turn off some­thing called ‘Loca­tion His­to­ry,’ then all the places where you main­tain loca­tion his­to­ry should be turned off,” May­er said. “That seems like a pret­ty straight­for­ward posi­tion to have.”

    Google says it is being per­fect­ly clear.

    “There are a num­ber of dif­fer­ent ways that Google may use loca­tion to improve people’s expe­ri­ence, includ­ing: Loca­tion His­to­ry, Web and App Activ­i­ty, and through device-lev­el Loca­tion Ser­vices,” a Google spokesper­son said in a state­ment to the AP. “We pro­vide clear descrip­tions of these tools, and robust con­trols so peo­ple can turn them on or off, and delete their his­to­ries at any time.”

    Google’s expla­na­tion did not con­vince sev­er­al law­mak­ers.

    Sen. Mark Warn­er of Vir­ginia told the AP it is “frus­trat­ing­ly com­mon” for tech­nol­o­gy com­pa­nies “to have cor­po­rate prac­tices that diverge wild­ly from the total­ly rea­son­able expec­ta­tions of their users,” and urged poli­cies that would give users more con­trol of their data. Rep. Frank Pal­lone of New Jer­sey called for “com­pre­hen­sive con­sumer pri­va­cy and data secu­ri­ty leg­is­la­tion” in the wake of the AP report.

    To stop Google from sav­ing these loca­tion mark­ers, the com­pa­ny says, users can turn off anoth­er set­ting, one that does not specif­i­cal­ly ref­er­ence loca­tion infor­ma­tion. Called “Web and App Activ­i­ty” and enabled by default, that set­ting stores a vari­ety of infor­ma­tion from Google apps and web­sites to your Google account.

    When paused, it will pre­vent activ­i­ty on any device from being saved to your account. But leav­ing “Web & App Activ­i­ty” on and turn­ing “Loca­tion His­to­ry” off only pre­vents Google from adding your move­ments to the “time­line,” its visu­al­iza­tion of your dai­ly trav­els. It does not stop Google’s col­lec­tion of oth­er loca­tion mark­ers.

    You can delete these loca­tion mark­ers by hand, but it’s a painstak­ing process since you have to select them indi­vid­u­al­ly, unless you want to delete all of your stored activ­i­ty.

    You can see the stored loca­tion mark­ers on a page in your Google account at myactivity.google.com, although they’re typ­i­cal­ly scat­tered under sev­er­al dif­fer­ent head­ers, many of which are unre­lat­ed to loca­tion.

    To demon­strate how pow­er­ful these oth­er mark­ers can be, the AP cre­at­ed a visu­al map of the move­ments of Prince­ton post­doc­tor­al researcher Gunes Acar, who car­ried an Android phone with Loca­tion his­to­ry off, and shared a record of his Google account.

    The map includes Acar’s train com­mute on two trips to New York and vis­its to The High Line park, Chelsea Mar­ket, Hell’s Kitchen, Cen­tral Park and Harlem. To pro­tect his pri­va­cy, The AP didn’t plot the most telling and fre­quent mark­er — his home address.

    Huge tech com­pa­nies are under increas­ing scruti­ny over their data prac­tices, fol­low­ing a series of pri­va­cy scan­dals at Face­book and new data-pri­va­cy rules recent­ly adopt­ed by the Euro­pean Union. Last year, the busi­ness news site Quartz found that Google was track­ing Android users by col­lect­ing the address­es of near­by cell­phone tow­ers even if all loca­tion ser­vices were off. Google changed the prac­tice and insist­ed it nev­er record­ed the data any­way.

    Crit­ics say Google’s insis­tence on track­ing its users’ loca­tions stems from its dri­ve to boost adver­tis­ing rev­enue.

    “They build adver­tis­ing infor­ma­tion out of data,” said Peter Lenz, the senior geospa­tial ana­lyst at Dstillery, a rival adver­tis­ing tech­nol­o­gy com­pa­ny. “More data for them pre­sum­ably means more prof­it.”

    The AP learned of the issue from K. Shankari, a grad­u­ate researcher at UC Berke­ley who stud­ies the com­mut­ing pat­terns of vol­un­teers in order to help urban plan­ners. She noticed that her Android phone prompt­ed her to rate a shop­ping trip to Kohl’s, even though she had turned Loca­tion His­to­ry off.

    “So how did Google Maps know where I was?” she asked in a blog post.

    The AP wasn’t able to recre­ate Shankari’s expe­ri­ence exact­ly. But its attempts to do so revealed Google’s track­ing. The find­ings dis­turbed her.

    “I am not opposed to back­ground loca­tion track­ing in prin­ci­ple,” she said. “It just real­ly both­ers me that it is not explic­it­ly stat­ed.”

    Google offers a more accurate description of how Location History actually works in a place you’d only see if you turn it off — a popup that appears when you “pause” Location History on your Google account webpage. There the company notes that “some location data may be saved as part of your activity on other Google services, like Search and Maps.”

    Google offers addi­tion­al infor­ma­tion in a pop­up that appears if you re-acti­vate the “Web & App Activ­i­ty” set­ting — an uncom­mon action for many users, since this set­ting is on by default. That pop­up states that, when active, the set­ting “saves the things you do on Google sites, apps, and ser­vices ... and asso­ci­at­ed infor­ma­tion, like loca­tion.”

    Warn­ings when you’re about to turn Loca­tion His­to­ry off via Android and iPhone device set­tings are more dif­fi­cult to inter­pret. On Android, the pop­up explains that “places you go with your devices will stop being added to your Loca­tion His­to­ry map.” On the iPhone, it sim­ply reads, “None of your Google apps will be able to store loca­tion data in Loca­tion His­to­ry.”

    The iPhone text is tech­ni­cal­ly true if poten­tial­ly mis­lead­ing. With Loca­tion His­to­ry off, Google Maps and oth­er apps store your where­abouts in a sec­tion of your account called “My Activ­i­ty,” not “Loca­tion His­to­ry.”

    Since 2014, Google has let advertisers track the effectiveness of online ads at driving foot traffic, a feature that Google has said relies on user location histories.

    The com­pa­ny is push­ing fur­ther into such loca­tion-aware track­ing to dri­ve ad rev­enue, which rose 20 per­cent last year to $95.4 bil­lion. At a Google Mar­ket­ing Live sum­mit in July, Google exec­u­tives unveiled a new tool called “local cam­paigns” that dynam­i­cal­ly uses ads to boost in-per­son store vis­its. It says it can mea­sure how well a cam­paign drove foot traf­fic with data pulled from Google users’ loca­tion his­to­ries.

    Google also says loca­tion records stored in My Activ­i­ty are used to tar­get ads. Ad buy­ers can tar­get ads to spe­cif­ic loca­tions — say, a mile radius around a par­tic­u­lar land­mark — and typ­i­cal­ly have to pay more to reach this nar­row­er audi­ence.

    While dis­abling “Web & App Activ­i­ty” will stop Google from stor­ing loca­tion mark­ers, it also pre­vents Google from stor­ing infor­ma­tion gen­er­at­ed by search­es and oth­er activ­i­ty. That can lim­it the effec­tive­ness of the Google Assis­tant, the company’s dig­i­tal concierge.

    ...

    ———-

    “AP Exclu­sive: Google tracks your move­ments, like it or not” by RYAN NAKASHIMA; Asso­ci­at­ed Press; 08/15/2018

    “An Asso­ci­at­ed Press inves­ti­ga­tion found that many Google ser­vices on Android devices and iPhones store your loca­tion data even if you’ve used a pri­va­cy set­ting that says it will pre­vent Google from doing so.”

    Yes, it turns out that when you turn off Location History on your Android smartphone, you’re merely turning off your own access to that location history. Google will still collect and keep the data for its own purposes.

    And while some commonly used apps that you would expect to require location data, like Google Maps or daily weather updates, automatically collect location data without ever asking, it sounds like certain searches that should have nothing to do with your location also trigger automatic location collection. And the search-triggered locations obtained by Google apparently give your precise latitude and longitude, accurate to the square foot:

    ...
    For the most part, Google is upfront about ask­ing per­mis­sion to use your loca­tion infor­ma­tion. An app like Google Maps will remind you to allow access to loca­tion if you use it for nav­i­gat­ing. If you agree to let it record your loca­tion over time, Google Maps will dis­play that his­to­ry for you in a “time­line” that maps out your dai­ly move­ments.

    Stor­ing your minute-by-minute trav­els car­ries pri­va­cy risks and has been used by police to deter­mine the loca­tion of sus­pects — such as a war­rant that police in Raleigh, North Car­oli­na, served on Google last year to find devices near a mur­der scene. So the com­pa­ny lets you “pause” a set­ting called Loca­tion His­to­ry.

    Google says that will pre­vent the com­pa­ny from remem­ber­ing where you’ve been. Google’s sup­port page on the sub­ject states: “You can turn off Loca­tion His­to­ry at any time. With Loca­tion His­to­ry off, the places you go are no longer stored.”

    That isn’t true. Even with Location History paused, some Google apps automatically store time-stamped location data without asking. (It’s possible, although laborious, to delete it.)

    For exam­ple, Google stores a snap­shot of where you are when you mere­ly open its Maps app. Auto­mat­ic dai­ly weath­er updates on Android phones pin­point rough­ly where you are. And some search­es that have noth­ing to do with loca­tion, like “choco­late chip cook­ies,” or “kids sci­ence kits,” pin­point your pre­cise lat­i­tude and lon­gi­tude — accu­rate to the square foot — and save it to your Google account.
    ...

    Recall that the initial story from November about Google using cellphone tower triangulation to surreptitiously collect location data indicated that this type of location data was quite accurate and could determine whether or not you set foot in a given retail store (it was being used for location-targeted ads). So if this latest report indicates that Google can get your location down to the nearest square foot, the cell tower technique may be part of the picture, though precision that fine more plausibly comes from blending tower data with GPS and Wi-Fi positioning.
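
    The AP piece doesn’t spell out which positioning method produces those coordinates, so treat this as informed speculation. To illustrate the basic principle of tower-based positioning, here is a minimal Python sketch of generic 2D trilateration, not Google’s actual pipeline; all tower positions and distances below are hypothetical. Given three known tower positions and an estimated distance to each, the phone’s position falls out of a small linear system.

    import numpy as np

    def trilaterate(towers, dists):
        # Solve for a 2D position given three known tower coordinates
        # and an estimated distance to each. Subtracting the first
        # circle equation from the other two leaves a 2x2 linear system.
        (x1, y1), (x2, y2), (x3, y3) = towers
        d1, d2, d3 = dists
        A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                      [2 * (x3 - x1), 2 * (y3 - y1)]])
        b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                      d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2])
        return np.linalg.solve(A, b)

    # Hypothetical towers (coordinates in km) and a phone at (1.2, 1.6):
    towers = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
    true_pos = np.array([1.2, 1.6])
    dists = [float(np.linalg.norm(true_pos - np.array(t))) for t in towers]
    print(trilaterate(towers, dists))  # -> approximately [1.2, 1.6]

    In practice, distance estimates derived from signal strength are noisy, so tower-only fixes are coarse; square-foot precision would require fusing many such measurements with GPS and Wi-Fi data.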

    The article goes on to reference that report from November, noting that “Google changed the practice and insisted it never recorded the data anyway.” So Google apparently admitted to stopping something it says it was never doing in the first place. It’s the kind of nonsense corporate response that suggests the program never ended; it was merely “changed”:

    ...
    Huge tech com­pa­nies are under increas­ing scruti­ny over their data prac­tices, fol­low­ing a series of pri­va­cy scan­dals at Face­book and new data-pri­va­cy rules recent­ly adopt­ed by the Euro­pean Union. Last year, the busi­ness news site Quartz found that Google was track­ing Android users by col­lect­ing the address­es of near­by cell­phone tow­ers even if all loca­tion ser­vices were off. Google changed the prac­tice and insist­ed it nev­er record­ed the data any­way.
    ...

    And Google does indeed admit that this data is being used for location-based ad targeting, so it sure sounds like nothing changed after that initial report in November:

    ...
    Since 2014, Google has let advertisers track the effectiveness of online ads at driving foot traffic, a feature that Google has said relies on user location histories.

    The com­pa­ny is push­ing fur­ther into such loca­tion-aware track­ing to dri­ve ad rev­enue, which rose 20 per­cent last year to $95.4 bil­lion. At a Google Mar­ket­ing Live sum­mit in July, Google exec­u­tives unveiled a new tool called “local cam­paigns” that dynam­i­cal­ly uses ads to boost in-per­son store vis­its. It says it can mea­sure how well a cam­paign drove foot traf­fic with data pulled from Google users’ loca­tion his­to­ries.

    Google also says loca­tion records stored in My Activ­i­ty are used to tar­get ads. Ad buy­ers can tar­get ads to spe­cif­ic loca­tions — say, a mile radius around a par­tic­u­lar land­mark — and typ­i­cal­ly have to pay more to reach this nar­row­er audi­ence.
    ...

    So is there any way to stop Google from collecting your location history, other than using a non-Android phone with no Google apps? Well, yes, there is a way. It’s just seemingly designed to be super confusing and counterintuitive. A short sketch for bulk-inspecting what Google has stored follows the excerpt:

    ...
    To stop Google from sav­ing these loca­tion mark­ers, the com­pa­ny says, users can turn off anoth­er set­ting, one that does not specif­i­cal­ly ref­er­ence loca­tion infor­ma­tion. Called “Web and App Activ­i­ty” and enabled by default, that set­ting stores a vari­ety of infor­ma­tion from Google apps and web­sites to your Google account.

    When paused, it will pre­vent activ­i­ty on any device from being saved to your account. But leav­ing “Web & App Activ­i­ty” on and turn­ing “Loca­tion His­to­ry” off only pre­vents Google from adding your move­ments to the “time­line,” its visu­al­iza­tion of your dai­ly trav­els. It does not stop Google’s col­lec­tion of oth­er loca­tion mark­ers.

    You can delete these loca­tion mark­ers by hand, but it’s a painstak­ing process since you have to select them indi­vid­u­al­ly, unless you want to delete all of your stored activ­i­ty.

    You can see the stored loca­tion mark­ers on a page in your Google account at myactivity.google.com, although they’re typ­i­cal­ly scat­tered under sev­er­al dif­fer­ent head­ers, many of which are unre­lat­ed to loca­tion.

    To demon­strate how pow­er­ful these oth­er mark­ers can be, the AP cre­at­ed a visu­al map of the move­ments of Prince­ton post­doc­tor­al researcher Gunes Acar, who car­ried an Android phone with Loca­tion his­to­ry off, and shared a record of his Google account.

    The map includes Acar’s train com­mute on two trips to New York and vis­its to The High Line park, Chelsea Mar­ket, Hell’s Kitchen, Cen­tral Park and Harlem. To pro­tect his pri­va­cy, The AP didn’t plot the most telling and fre­quent mark­er — his home address.
    ...
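
    For anyone who wants to see the scale of what’s stored without clicking through myactivity.google.com one entry at a time, Google Takeout can export the Location History in bulk. Here’s a minimal, hypothetical Python sketch for parsing such an export, assuming the JSON layout Google used around this period (a “locations” list with “latitudeE7”, “longitudeE7” and “timestampMs” fields); the field names are assumptions, and the format can change:

    import json
    from datetime import datetime, timezone

    def load_location_history(path):
        # Parse a Google Takeout "Location History" JSON export into
        # (UTC datetime, latitude, longitude) tuples.
        # Assumes the circa-2018 layout:
        #   {"locations": [{"timestampMs": "...",
        #                   "latitudeE7": ..., "longitudeE7": ...}, ...]}
        with open(path) as f:
            data = json.load(f)
        points = []
        for rec in data.get("locations", []):
            ts = datetime.fromtimestamp(int(rec["timestampMs"]) / 1000.0,
                                        tz=timezone.utc)
            points.append((ts, rec["latitudeE7"] / 1e7,
                           rec["longitudeE7"] / 1e7))
        return sorted(points)

    # Hypothetical usage (the file name inside a Takeout archive may differ):
    # for ts, lat, lon in load_location_history("Location History.json"):
    #     print(ts.isoformat(), lat, lon)

    Plotting those points on a map is essentially what the AP did with Gunes Acar’s account record.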

    Google of course coun­ters that it’s been clear all along:

    ...
    Google says it is being per­fect­ly clear.

    “There are a num­ber of dif­fer­ent ways that Google may use loca­tion to improve people’s expe­ri­ence, includ­ing: Loca­tion His­to­ry, Web and App Activ­i­ty, and through device-lev­el Loca­tion Ser­vices,” a Google spokesper­son said in a state­ment to the AP. “We pro­vide clear descrip­tions of these tools, and robust con­trols so peo­ple can turn them on or off, and delete their his­to­ries at any time.”
    ...

    And while it’s basically trolling the public at this point for Google to act like its location data policies have been anything other than opaque and confusing, that trollish response and those opaque policies make one thing increasingly clear: Google has no intention of stopping this kind of data collection. If anything we should expect it to increase, given the plans for more location-based services. Resistance is still futile. And not just because of Google, even if it’s one of the biggest offenders. It’s more of a group effort.

    Posted by Pterrafractyl | August 15, 2018, 2:48 pm
  8. Here’s one of those articles that’s surprising in one sense and completely predictable in another sense: Yuval Noah Harari is a futurist philosopher and author of a number of popular books about where humanity is heading. Harari appears to be a largely dystopian futurist, envisioning a future where democracy is seen as obsolete and a techno-elite ruling class runs companies with the technological capacity to essentially control the minds of the masses. Masses that will increasingly be seen as obsolete and useless. Harari even gave a recent TED Talk called “Why fascism is so tempting — and how your data could power it.” So how do Silicon Valley’s CEOs view Mr. Harari’s views? They apparently can’t get enough of him:

    The New York Times

    Tech C.E.O.s Are in Love With Their Prin­ci­pal Doom­say­er

    The futur­ist philoso­pher Yuval Noah Harari thinks Sil­i­con Val­ley is an engine of dystopi­an ruin. So why do the dig­i­tal elite adore him so?

    By Nel­lie Bowles
    Nov. 9, 2018

    The futur­ist philoso­pher Yuval Noah Harari wor­ries about a lot.

    He wor­ries that Sil­i­con Val­ley is under­min­ing democ­ra­cy and ush­er­ing in a dystopi­an hellscape in which vot­ing is obso­lete.

    He wor­ries that by cre­at­ing pow­er­ful influ­ence machines to con­trol bil­lions of minds, the big tech com­pa­nies are destroy­ing the idea of a sov­er­eign indi­vid­ual with free will.

    He wor­ries that because the tech­no­log­i­cal revolution’s work requires so few labor­ers, Sil­i­con Val­ley is cre­at­ing a tiny rul­ing class and a teem­ing, furi­ous “use­less class.”

    But late­ly, Mr. Harari is anx­ious about some­thing much more per­son­al. If this is his har­row­ing warn­ing, then why do Sil­i­con Val­ley C.E.O.s love him so?

    “One pos­si­bil­i­ty is that my mes­sage is not threat­en­ing to them, and so they embrace it?” a puz­zled Mr. Harari said one after­noon in Octo­ber. “For me, that’s more wor­ry­ing. Maybe I’m miss­ing some­thing?”

    When Mr. Harari toured the Bay Area this fall to promote his latest book, the reception was incongruously joyful. Reed Hastings, the chief executive of Netflix, threw him a dinner party. The leaders of X, Alphabet’s secretive research division, invited Mr. Harari over. Bill Gates reviewed the book (“Fascinating” and “such a stimulating writer”) in The New York Times.

    “I’m inter­est­ed in how Sil­i­con Val­ley can be so infat­u­at­ed with Yuval, which they are — it’s insane he’s so pop­u­lar, they’re all invit­ing him to cam­pus — yet what Yuval is say­ing under­mines the premise of the adver­tis­ing- and engage­ment-based mod­el of their prod­ucts,” said Tris­tan Har­ris, Google’s for­mer in-house design ethi­cist and the co-founder of the Cen­ter for Humane Tech­nol­o­gy.

    Part of the rea­son might be that Sil­i­con Val­ley, at a cer­tain lev­el, is not opti­mistic on the future of democ­ra­cy. The more of a mess Wash­ing­ton becomes, the more inter­est­ed the tech world is in cre­at­ing some­thing else, and it might not look like elect­ed rep­re­sen­ta­tion. Rank-and-file coders have long been wary of reg­u­la­tion and curi­ous about alter­na­tive forms of gov­ern­ment. A sep­a­ratist streak runs through the place: Ven­ture cap­i­tal­ists peri­od­i­cal­ly call for Cal­i­for­nia to secede or shat­ter, or for the cre­ation of cor­po­rate nation-states. And this sum­mer, Mark Zucker­berg, who has rec­om­mend­ed Mr. Harari to his book club, acknowl­edged a fix­a­tion with the auto­crat Cae­sar Augus­tus. “Basi­cal­ly,” Mr. Zucker­berg told The New York­er, “through a real­ly harsh approach, he estab­lished 200 years of world peace.”

    Mr. Harari, think­ing about all this, puts it this way: “Utopia and dystopia depends on your val­ues.”

    Mr. Harari, who has a Ph.D. from Oxford, is a 42-year-old Israeli philoso­pher and a his­to­ry pro­fes­sor at Hebrew Uni­ver­si­ty of Jerusalem. The sto­ry of his cur­rent fame begins in 2011, when he pub­lished a book of notable ambi­tion: to sur­vey the whole of human exis­tence. “Sapi­ens: A Brief His­to­ry of Humankind,” first released in Hebrew, did not break new ground in terms of his­tor­i­cal research. Nor did its premise — that humans are ani­mals and our dom­i­nance is an acci­dent — seem a like­ly com­mer­cial hit. But the casu­al tone and smooth way Mr. Harari tied togeth­er exist­ing knowl­edge across fields made it a deeply pleas­ing read, even as the tome end­ed on the notion that the process of human evo­lu­tion might be over. Trans­lat­ed into Eng­lish in 2014, the book went on to sell more than eight mil­lion copies and made Mr. Harari a celebri­ty intel­lec­tu­al.

    He fol­lowed up with “Homo Deus: A Brief His­to­ry of Tomor­row,” which out­lined his vision of what comes after human evo­lu­tion. In it, he describes Dataism, a new faith based around the pow­er of algo­rithms. Mr. Harari’s future is one in which big data is wor­shiped, arti­fi­cial intel­li­gence sur­pass­es human intel­li­gence, and some humans devel­op God­like abil­i­ties.

    Now, he has written a book about the present and how it could lead to that future: “21 Lessons for the 21st Century.” It is meant to be read as a series of warnings. His recent TED Talk was called “Why fascism is so tempting — and how your data could power it.”

    His prophe­cies might have made him a Cas­san­dra in Sil­i­con Val­ley, or at the very least an unwel­come pres­ence. Instead, he has had to rec­on­cile him­self to the locals’ strange delight. “If you make peo­ple start think­ing far more deeply and seri­ous­ly about these issues,” he told me, sound­ing weary, “some of the things they will think about might not be what you want them to think about.”

    ‘Brave New World’ as Aspi­ra­tional Read­ing

    Mr. Harari agreed to let me tag along for a few days on his trav­els through the Val­ley, and one after­noon in Sep­tem­ber, I wait­ed for him out­side X’s offices, in Moun­tain View, while he spoke to the Alpha­bet employ­ees inside. After a while, he emerged: a shy, thin, bespec­ta­cled man with a dust­ing of dark hair. Mr. Harari has a sort of owlish demeanor, in that he looks wise and also does not move his body very much, even while glanc­ing to the side. His face is not par­tic­u­lar­ly expres­sive, with the excep­tion of one rogue eye­brow. When you catch his eye, there is a wary look — like he wants to know if you, too, under­stand exact­ly how bad the world is about to get.

    At the Alpha­bet talk, Mr. Harari had been accom­pa­nied by his pub­lish­er. They said that the younger employ­ees had expressed con­cern about whether their work was con­tribut­ing to a less free soci­ety, while the exec­u­tives gen­er­al­ly thought their impact was pos­i­tive.

    Some work­ers had tried to pre­dict how well humans would adapt to large tech­no­log­i­cal change based on how they have respond­ed to small shifts, like a new ver­sion of Gmail. Mr. Harari told them to think more stark­ly: If there isn’t a major pol­i­cy inter­ven­tion, most humans prob­a­bly will not adapt at all.

    It made him sad, he told me, to see peo­ple build things that destroy their own soci­eties, but he works every day to main­tain an aca­d­e­m­ic dis­tance and remind him­self that humans are just ani­mals. “Part of it is real­ly com­ing from see­ing humans as apes, that this is how they behave,” he said, adding, “They’re chim­panzees. They’re sapi­ens. This is what they do.”

    He was slouch­ing a lit­tle. Social­iz­ing exhausts him.

    As we board­ed the black gull-wing Tes­la Mr. Harari had rent­ed for his vis­it, he brought up Aldous Hux­ley. Gen­er­a­tions have been hor­ri­fied by his nov­el “Brave New World,” which depicts a regime of emo­tion con­trol and pain­less con­sump­tion. Read­ers who encounter the book today, Mr. Harari said, often think it sounds great. “Every­thing is so nice, and in that way it is an intel­lec­tu­al­ly dis­turb­ing book because you’re real­ly hard-pressed to explain what’s wrong with it,” he said. “And you do get today a vision com­ing out of some peo­ple in Sil­i­con Val­ley which goes in that direc­tion.”

    An Alpha­bet media rela­tions man­ag­er lat­er reached out to Mr. Harari’s team to tell him to tell me that the vis­it to X was not allowed to be part of this sto­ry. The request con­fused and then amused Mr. Harari. It is inter­est­ing, he said, that unlike politi­cians, tech com­pa­nies do not need a free press, since they already con­trol the means of mes­sage dis­tri­b­u­tion.

    He said he had resigned him­self to tech exec­u­tives’ glob­al reign, point­ing out how much worse the politi­cians are. “I’ve met a num­ber of these high-tech giants, and gen­er­al­ly they’re good peo­ple,” he said. “They’re not Atti­la the Hun. In the lot­tery of human lead­ers, you could get far worse.”

    Some of his tech fans, he thinks, come to him out of anx­i­ety. “Some may be very fright­ened of the impact of what they are doing,” Mr. Harari said.

    Still, their enthu­si­as­tic embrace of his work makes him uncom­fort­able. “It’s just a rule of thumb in his­to­ry that if you are so much cod­dled by the elites it must mean that you don’t want to fright­en them,” Mr. Harari said. “They can absorb you. You can become the intel­lec­tu­al enter­tain­ment.”

    Din­ner, With a Side of Med­ical­ly Engi­neered Immor­tal­i­ty

    C.E.O. tes­ti­mo­ni­als to Mr. Harari’s acu­men are indeed not hard to come by. “I’m drawn to Yuval for his clar­i­ty of thought,” Jack Dorsey, the head of Twit­ter and Square, wrote in an email, going on to praise a par­tic­u­lar chap­ter on med­i­ta­tion.

    And Mr. Hast­ings wrote: “Yuval’s the anti-Sil­i­con Val­ley per­sona — he doesn’t car­ry a phone and he spends a lot of time con­tem­plat­ing while off the grid. We see in him who we wish we were.” He added, “His think­ing on A.I. and biotech in his new book push­es our under­stand­ing of the dra­mas to unfold.”

    At the din­ner Mr. Hast­ings co-host­ed, aca­d­e­mics and indus­try lead­ers debat­ed the dan­gers of data col­lec­tion, and to what degree longevi­ty ther­a­pies will extend the human life span. (Mr. Harari has writ­ten that the rul­ing class will vast­ly out­live the use­less.) “That evening was small, but could be mag­ni­fied to sym­bol­ize his impact in the heart of Sil­i­con Val­ley,” said Dr. Fei-Fei Li, an arti­fi­cial intel­li­gence expert who pushed inter­nal­ly at Google to keep secret the company’s efforts to process mil­i­tary drone footage for the Pen­ta­gon. “His book has that abil­i­ty to bring these peo­ple togeth­er at a table, and that is his con­tri­bu­tion.”

    A few nights ear­li­er, Mr. Harari spoke to a sold-out the­ater of 3,500 in San Fran­cis­co. One tick­et-hold­er walk­ing in, an old­er man, told me it was brave and hon­est for Mr. Harari to use the term “use­less class.”

    The author was paired for dis­cus­sion with the pro­lif­ic intel­lec­tu­al Sam Har­ris, who strode onstage in a gray suit and well-starched white but­ton-down. Mr. Harari was less at ease, in a loose suit that crum­pled around him, his hands clasped in his lap as he sat deep in his chair. But as he spoke about med­i­ta­tion — Mr. Harari spends two hours each day and two months each year in silence — he became com­mand­ing. In a region where self-opti­miza­tion is para­mount and med­i­ta­tion is a com­pet­i­tive sport, Mr. Harari’s devo­tion con­fers hero sta­tus.

    He told the audi­ence that free will is an illu­sion, and that human rights are just a sto­ry we tell our­selves. Polit­i­cal par­ties, he said, might not make sense any­more. He went on to argue that the lib­er­al world order has relied on fic­tions like “the cus­tomer is always right” and “fol­low your heart,” and that these ideas no longer work in the age of arti­fi­cial intel­li­gence, when hearts can be manip­u­lat­ed at scale.

    Every­one in Sil­i­con Val­ley is focused on build­ing the future, Mr. Harari con­tin­ued, while most of the world’s peo­ple are not even need­ed enough to be exploit­ed. “Now you increas­ing­ly feel that there are all these elites that just don’t need me,” he said. “And it’s much worse to be irrel­e­vant than to be exploit­ed.”

    The use­less class he describes is unique­ly vul­ner­a­ble. “If a cen­tu­ry ago you mount­ed a rev­o­lu­tion against exploita­tion, you knew that when bad comes to worse, they can’t shoot all of us because they need us,” he said, cit­ing army ser­vice and fac­to­ry work.

    Now it is becom­ing less clear why the rul­ing elite would not just kill the new use­less class. “You’re total­ly expend­able,” he told the audi­ence.

    This, Mr. Harari told me lat­er, is why Sil­i­con Val­ley is so excit­ed about the con­cept of uni­ver­sal basic income, or stipends paid to peo­ple regard­less of whether they work. The mes­sage is: “We don’t need you. But we are nice, so we’ll take care of you.”

    On Sept. 14, he pub­lished an essay in The Guardian assail­ing anoth­er old trope — that “the vot­er knows best.”

    “If humans are hack­able ani­mals, and if our choic­es and opin­ions don’t reflect our free will, what should the point of pol­i­tics be?” he wrote. “How do you live when you real­ize … that your heart might be a gov­ern­ment agent, that your amyg­dala might be work­ing for Putin, and that the next thought that emerges in your mind might well be the result of some algo­rithm that knows you bet­ter than you know your­self? These are the most inter­est­ing ques­tions human­i­ty now faces.”

    ‘O.K., So Maybe Humankind Is Going to Dis­ap­pear’

    Mr. Harari and his hus­band, Itzik Yahav, who is also his man­ag­er, rent­ed a small house in Moun­tain View for their vis­it, and one morn­ing I found them there mak­ing oat­meal. Mr. Harari observed that as his celebri­ty in Sil­i­con Val­ley has risen, tech fans have focused on his lifestyle.

    “Sil­i­con Val­ley was already kind of a hotbed for med­i­ta­tion and yoga and all these things,” he said. “And one of the things that made me kind of more pop­u­lar and palat­able is that I also have this bedrock.” He was wear­ing an old sweat­shirt and den­im track pants. His voice was qui­et, but he ges­tured wide­ly, wav­ing his hands, hit­ting a jar of spat­u­las.

    Mr. Harari grew up in Kiry­at Ata, near Haifa, and his father worked in the arms indus­try. His moth­er, who worked in office admin­is­tra­tion, now vol­un­teers for her son han­dling his mail; he gets about 1,000 mes­sages a week. Mr. Yahav’s moth­er is their accoun­tant.

    Most days, Mr. Harari doesn’t use an alarm clock, and wakes up between 6:30 and 8:30 a.m., then med­i­tates and has a cup of tea. He works until 4 or 5 p.m., then does anoth­er hour of med­i­ta­tion, fol­lowed by an hour­long walk, maybe a swim, and then TV with Mr. Yahav.

    The two met 16 years ago through the dat­ing site Check Me Out. “We are not big believ­ers in falling in love,” Mr. Harari said. “It was more a ratio­nal choice.”

    “We met each oth­er and we thought, ‘O.K., we’re — O.K., let’s move in with each oth­er,’” Mr. Yahav said.

    Mr. Yahav became Mr. Harari’s man­ag­er. Dur­ing the peri­od when Eng­lish-lan­guage pub­lish­ers were cool on the com­mer­cial via­bil­i­ty of “Sapi­ens” — think­ing it too seri­ous for the aver­age read­er and not seri­ous enough for the schol­ars — Mr. Yahav per­sist­ed, even­tu­al­ly land­ing the Jerusalem-based agent Deb­o­rah Har­ris. One day when Mr. Harari was away med­i­tat­ing, Mr. Yahav and Ms. Har­ris final­ly sold it at auc­tion to Ran­dom House in Lon­don.

    Today, they have a team of eight based in Tel Aviv work­ing on Mr. Harari’s projects. The direc­tor Rid­ley Scott and doc­u­men­tar­i­an Asif Kapa­dia are adapt­ing “Sapi­ens” into a TV show, and Mr. Harari is work­ing on children’s books to reach a broad­er audi­ence.

    ...

    ———-

    “Tech C.E.O.s Are in Love With Their Prin­ci­pal Doom­say­er” by Nel­lie Bowles; The New York Times; 11/09/2018

    “Part of the reason might be that Silicon Valley, at a certain level, is not optimistic on the future of democracy. The more of a mess Washington becomes, the more interested the tech world is in creating something else, and it might not look like elected representation. Rank-and-file coders have long been wary of regulation and curious about alternative forms of government. A separatist streak runs through the place: Venture capitalists periodically call for California to secede or shatter, or for the creation of corporate nation-states. And this summer, Mark Zuckerberg, who has recommended Mr. Harari to his book club, acknowledged a fixation with the autocrat Caesar Augustus. ‘Basically,’ Mr. Zuckerberg told The New Yorker, ‘through a really harsh approach, he established 200 years of world peace.’”

    A guy who specializes in worrying about techno elites destroying democracy and turning the masses into the ‘useless’ class is extra worried about the fact that those techno elites appear to love him. Hmmm...might that have something to do with the fact that the dystopian future he’s predicting assumes the tech elites completely dominate humanity? And, who knows, he’s probably giving them ideas for how to accomplish this domination. So of course they love him:

    ...
    Now, he has written a book about the present and how it could lead to that future: “21 Lessons for the 21st Century.” It is meant to be read as a series of warnings. His recent TED Talk was called “Why fascism is so tempting — and how your data could power it.”

    His prophe­cies might have made him a Cas­san­dra in Sil­i­con Val­ley, or at the very least an unwel­come pres­ence. Instead, he has had to rec­on­cile him­self to the locals’ strange delight. “If you make peo­ple start think­ing far more deeply and seri­ous­ly about these issues,” he told me, sound­ing weary, “some of the things they will think about might not be what you want them to think about.”
    ...

    Plus, Harari appears to view rule by tech executives as preferable to rule by politicians, since he regards them as ‘generally good people’. So, again, of course the tech elite love the guy. He’s predicting they dominate the future and he doesn’t see that as all that bad:

    ...
    ‘Brave New World’ as Aspi­ra­tional Read­ing

    Mr. Harari agreed to let me tag along for a few days on his trav­els through the Val­ley, and one after­noon in Sep­tem­ber, I wait­ed for him out­side X’s offices, in Moun­tain View, while he spoke to the Alpha­bet employ­ees inside. After a while, he emerged: a shy, thin, bespec­ta­cled man with a dust­ing of dark hair. Mr. Harari has a sort of owlish demeanor, in that he looks wise and also does not move his body very much, even while glanc­ing to the side. His face is not par­tic­u­lar­ly expres­sive, with the excep­tion of one rogue eye­brow. When you catch his eye, there is a wary look — like he wants to know if you, too, under­stand exact­ly how bad the world is about to get.

    At the Alpha­bet talk, Mr. Harari had been accom­pa­nied by his pub­lish­er. They said that the younger employ­ees had expressed con­cern about whether their work was con­tribut­ing to a less free soci­ety, while the exec­u­tives gen­er­al­ly thought their impact was pos­i­tive.

    ...

    An Alpha­bet media rela­tions man­ag­er lat­er reached out to Mr. Harari’s team to tell him to tell me that the vis­it to X was not allowed to be part of this sto­ry. The request con­fused and then amused Mr. Harari. It is inter­est­ing, he said, that unlike politi­cians, tech com­pa­nies do not need a free press, since they already con­trol the means of mes­sage dis­tri­b­u­tion.

    He said he had resigned him­self to tech exec­u­tives’ glob­al reign, point­ing out how much worse the politi­cians are. “I’ve met a num­ber of these high-tech giants, and gen­er­al­ly they’re good peo­ple,” he said. “They’re not Atti­la the Hun. In the lot­tery of human lead­ers, you could get far worse.”

    Some of his tech fans, he thinks, come to him out of anx­i­ety. “Some may be very fright­ened of the impact of what they are doing,” Mr. Harari said.

    Still, their enthu­si­as­tic embrace of his work makes him uncom­fort­able. “It’s just a rule of thumb in his­to­ry that if you are so much cod­dled by the elites it must mean that you don’t want to fright­en them,” Mr. Harari said. “They can absorb you. You can become the intel­lec­tu­al enter­tain­ment.”
    ...

    He's also predicting that these tech executives will use longevity technology to 'vastly outlive the useless', which clearly implies he's predicting longevity technology gets developed but not shared with 'the useless' (the rest of us):

    ...
    Din­ner, With a Side of Med­ical­ly Engi­neered Immor­tal­i­ty

    C.E.O. tes­ti­mo­ni­als to Mr. Harari’s acu­men are indeed not hard to come by. “I’m drawn to Yuval for his clar­i­ty of thought,” Jack Dorsey, the head of Twit­ter and Square, wrote in an email, going on to praise a par­tic­u­lar chap­ter on med­i­ta­tion.

    And Mr. Hast­ings wrote: “Yuval’s the anti-Sil­i­con Val­ley per­sona — he doesn’t car­ry a phone and he spends a lot of time con­tem­plat­ing while off the grid. We see in him who we wish we were.” He added, “His think­ing on A.I. and biotech in his new book push­es our under­stand­ing of the dra­mas to unfold.”

    At the din­ner Mr. Hast­ings co-host­ed, aca­d­e­mics and indus­try lead­ers debat­ed the dan­gers of data col­lec­tion, and to what degree longevi­ty ther­a­pies will extend the human life span. (Mr. Harari has writ­ten that the rul­ing class will vast­ly out­live the use­less.) “That evening was small, but could be mag­ni­fied to sym­bol­ize his impact in the heart of Sil­i­con Val­ley,” said Dr. Fei-Fei Li, an arti­fi­cial intel­li­gence expert who pushed inter­nal­ly at Google to keep secret the company’s efforts to process mil­i­tary drone footage for the Pen­ta­gon. “His book has that abil­i­ty to bring these peo­ple togeth­er at a table, and that is his con­tri­bu­tion.”
    ...

    Harari has even gone on to ques­tion whether or not humans have any free will at all and explored the impli­ca­tions of the pos­si­bil­i­ty that tech­nol­o­gy will allow the tech giants to essen­tial­ly con­trol what peo­ple think, effec­tive­ly bio-hack­ing the human mind. And one of the impli­ca­tions he sees from this hijack­ing of human will is that polit­i­cal par­ties might not make sense any­more and human rights are just a sto­ry we tell our­selves. So, again, it’s not exact­ly hard to see why tech elites love the guy. He’s basi­cal­ly mak­ing the case for why we should just accept this dystopi­an future:

    ...
    A few nights ear­li­er, Mr. Harari spoke to a sold-out the­ater of 3,500 in San Fran­cis­co. One tick­et-hold­er walk­ing in, an old­er man, told me it was brave and hon­est for Mr. Harari to use the term “use­less class.”

    The author was paired for dis­cus­sion with the pro­lif­ic intel­lec­tu­al Sam Har­ris, who strode onstage in a gray suit and well-starched white but­ton-down. Mr. Harari was less at ease, in a loose suit that crum­pled around him, his hands clasped in his lap as he sat deep in his chair. But as he spoke about med­i­ta­tion — Mr. Harari spends two hours each day and two months each year in silence — he became com­mand­ing. In a region where self-opti­miza­tion is para­mount and med­i­ta­tion is a com­pet­i­tive sport, Mr. Harari’s devo­tion con­fers hero sta­tus.

    He told the audi­ence that free will is an illu­sion, and that human rights are just a sto­ry we tell our­selves. Polit­i­cal par­ties, he said, might not make sense any­more. He went on to argue that the lib­er­al world order has relied on fic­tions like “the cus­tomer is always right” and “fol­low your heart,” and that these ideas no longer work in the age of arti­fi­cial intel­li­gence, when hearts can be manip­u­lat­ed at scale.

    Every­one in Sil­i­con Val­ley is focused on build­ing the future, Mr. Harari con­tin­ued, while most of the world’s peo­ple are not even need­ed enough to be exploit­ed. “Now you increas­ing­ly feel that there are all these elites that just don’t need me,” he said. “And it’s much worse to be irrel­e­vant than to be exploit­ed.”

    The use­less class he describes is unique­ly vul­ner­a­ble. “If a cen­tu­ry ago you mount­ed a rev­o­lu­tion against exploita­tion, you knew that when bad comes to worse, they can’t shoot all of us because they need us,” he said, cit­ing army ser­vice and fac­to­ry work.

    Now it is becom­ing less clear why the rul­ing elite would not just kill the new use­less class. “You’re total­ly expend­able,” he told the audi­ence.

    This, Mr. Harari told me lat­er, is why Sil­i­con Val­ley is so excit­ed about the con­cept of uni­ver­sal basic income, or stipends paid to peo­ple regard­less of whether they work. The mes­sage is: “We don’t need you. But we are nice, so we’ll take care of you.”

    On Sept. 14, he pub­lished an essay in The Guardian assail­ing anoth­er old trope — that “the vot­er knows best.”

    “If humans are hack­able ani­mals, and if our choic­es and opin­ions don’t reflect our free will, what should the point of pol­i­tics be?” he wrote. “How do you live when you real­ize … that your heart might be a gov­ern­ment agent, that your amyg­dala might be work­ing for Putin, and that the next thought that emerges in your mind might well be the result of some algo­rithm that knows you bet­ter than you know your­self? These are the most inter­est­ing ques­tions human­i­ty now faces.”
    ...

    So as we can see, it's abundantly clear why Mr. Harari is suddenly the go-to guru for Silicon Valley's elites: he's depicting a dystopian future that's utopian if you happen to be an authoritarian tech elite who wants to dominate the future. And he's kind of portraying this future as just something we should accept. Sure, we in the useless class should be plenty anxious about becoming useless, but don't bother trying to organize against this future, especially since democratic politics is becoming pointless in an age when mass opinion can be hacked and manipulated. Just accept that this is the future and worry about adapting to it. That more or less appears to be Harari's dystopian message. Which is as much a message about a dystopian present as it is about a dystopian future. A dystopian present where humanity is already so helpless that nothing can be done to prevent this dystopian future.

    And that all points towards one obvious area of futurism Harari could engage in that might actually turn off some of his Silicon Valley fan base: exploring the phase of the future after the fascist techno elites have seized complete control of humanity and started warring amongst themselves. Don't forget that one feature of democracy is that it sort of creates a unifying force for all the brutal wannabe fascist oligarchs: they all have a common enemy, the people. But what happens when they've truly won and subjugated humanity, or exterminated most of it? Won't they proceed to go to war with each other at that point? If you're a brutal cutthroat fascist oligarch of the future sharing power with other brutal cutthroat fascist oligarchs, are you really going to trust that they aren't plotting to take you down and take over your neo-feudal personal empire? Does anyone doubt that, if Peter Thiel managed to obtain longevity technology and cloning technology, there would someday be a clone army of Nazi Peter Thiels warring against rival fascist elites?

    These fascist overlords are also presumably going to be highly reliant on private robot armies. What kind of future should these fascist tech elites expect when they are all sporting rival competing private robot armies? That might sound fun at first, but do they really want to live in a world where their rivals also have private robot armies? Or advanced biowarfare arsenals? And what about AI going to war with these fascist elites? Does Mr. Harari have any opinions on the probability of Skynet emerging and the robots rebelling against their fascist human overlords? Perhaps if he explored how dystopian this tech elite-dominated future could be for the tech elites themselves, and further explored the inherent dangers a high-tech society run by and for competing authoritarian personalities presents to those same competing authoritarian personalities, maybe they wouldn't love him so much.

    Posted by Pterrafractyl | November 20, 2018, 4:21 pm
  9. You know that scene in the Batman movie, The Dark Knight, where Bruce Wayne's company turns every cellphone in Gotham into little sonar devices that are used to map out the physical locations of people and objects all across the city? Well, in the future, Batman won't need to rely on such technological trickery. He'll just need to be a major investor in Google:

    TechCrunch

    FCC green­lights Soli, Google’s radar-based ges­ture tech

    Rita Liao
    1/2/2019

    Google has won U.S. reg­u­la­to­ry approval to go ahead with a radar-based motion sen­sor that could make touch­screens look obso­lete in the com­ing years. Known as the Soli Project, the ini­tia­tive began in 2015 inside Google’s Advanced Tech­nol­o­gy and Projects unit, a group respon­si­ble for turn­ing the giant’s cut­ting-edge ideas into prod­ucts.

    We’ve seen a number of Soli’s technological breakthroughs since then, from being able to identify objects to reducing the radar sensor’s power consumption. Most recently, a regulatory order is set to move it into a more actionable phase. The U.S. Federal Communications Commission said earlier this week that it would grant Project Soli a waiver to operate at higher power levels than currently allowed. The government agency also said users can operate the sensor aboard a plane because the device poses “minimal potential of causing harmful interference to other spectrum users.”

    Soli fits radar sen­sors into a tiny chip the size of an Amer­i­can quar­ter to track slight hand or fin­ger motions at high speed and accu­ra­cy. That means instead of twist­ing a knob to adjust the vol­ume of your stereo, you can rub your fin­gers over a speak­er that con­tains a Soli chip as if slid­ing across a vir­tu­al dial. Under the reg­u­la­to­ry order, you also would be allowed to air press a but­ton on your Soli-pow­ered smart­watch in the future.

    Aside from clear­ing safe­ty con­cerns, the FCC also found that the sens­ing tech serves the pub­lic inter­est: “The abil­i­ty to rec­og­nize users’ touch­less hand ges­tures to con­trol a device, such as a smart­phone, could help peo­ple with mobil­i­ty, speech, or tac­tile impair­ments, which in turn could lead to high­er pro­duc­tiv­i­ty and qual­i­ty of life for many mem­bers of the Amer­i­can pub­lic.”

    ...

    The reg­u­la­to­ry con­sent arrived months after Face­book raised issues with the FCC that the Soli sen­sors oper­at­ing at high­er pow­er lev­els might inter­fere with oth­er device sys­tems. The two firms came to a con­sen­sus in Sep­tem­ber and told the FCC that Soli could oper­ate at pow­er lev­els high­er than what the gov­ern­ment allowed but low­er than what Google had request­ed.

    It’s a ratio­nal move for Face­book try­ing to shape the rules for the new field, giv­en its own Ocu­lus deploys motion tech­nolo­gies. The com­pa­ny also has invest­ed in research­ing the area, for instance, by look­ing at a device that cre­ates motion on the arm to sim­u­late social ges­tures like hug­ging.

    The update on Google’s tech­no­log­i­cal devel­op­ment is a tem­po­rary dis­trac­tion from the giant’s more ques­tion­able, rev­enue-dri­ven moves in recent months, includ­ing a mas­sive data leak on Google+ fol­lowed by the clo­sure of the online ghost town, its fail­ure to crack down on child porn and its con­tro­ver­sial plan to re-enter Chi­na report­ed­ly with a cen­sored search engine.

    [Update: Google removed sev­er­al third-par­ty apps that led users to child porn shar­ing groups after TechCrunch report­ed about the prob­lem.]

    ———-

    “FCC green­lights Soli, Google’s radar-based ges­ture tech” by Rita Liao; TechCrunch; 1/2/2019

    “We’ve seen a number of Soli’s technological breakthroughs since then, from being able to identify objects to reducing the radar sensor’s power consumption. Most recently, a regulatory order is set to move it into a more actionable phase. The U.S. Federal Communications Commission said earlier this week that it would grant Project Soli a waiver to operate at higher power levels than currently allowed. The government agency also said users can operate the sensor aboard a plane because the device poses “minimal potential of causing harmful interference to other spectrum users.””

    A tiny radar-based system for detecting hand gestures. It's pretty neat! And it's apparently able to identify objects too, which is also pretty neat! And it's a wonderful new source of data for Google, since it sounds like Soli can identify the shapes of objects as well as their inner structures:

    The Verge

    Google’s minia­ture radars can now iden­ti­fy objects

    Researchers from St Andrews Uni­ver­si­ty teach Google’s tech a fun new trick

    By James Vin­cent
    Nov 10, 2016, 6:54am EST

    When Google unveiled Project Soli in 2015, the com­pa­ny pre­sent­ed it as a way to cre­ate ges­ture con­trols for future tech­nol­o­gy. Soli’s minia­ture radars are small enough to fit into a smart­watch and can detect move­ments with sub-mil­lime­ter accu­ra­cy, allow­ing you to con­trol the vol­ume of a speak­er, say, by twid­dling an imag­i­nary dial in mid-air. But now, a group of researchers from the Uni­ver­si­ty of St Andrews in Scot­land have used one of the first Project Soli devel­op­ers kits to teach Google’s tech a new trick: rec­og­niz­ing objects using radar.

    The device is called RadarCat (or Radar Categorization for Input and Interaction), and works the way any radar system does. A base unit fires electromagnetic waves at a target, some of which bounce off and return to base. The system times how long it takes for them to come back and uses this information to work out the shape of the object and how far away it is. But because Google’s Soli radars are so accurate, they can not only detect the exterior of an object, but also its internal structure and rear surface.

    “These three sets of sig­nals togeth­er gives you the unique fin­ger­print for each object,” lead researcher Pro­fes­sor Aaron Quigley tells The Verge. Radar­Cat is accu­rate enough that it can even tell the dif­fer­ence between the front and back of a smart­phone, or tell whether a glass is full or emp­ty.

    This sys­tem is sur­pris­ing­ly accu­rate, but there are some major lim­i­ta­tions. For exam­ple, Radar­Cat does occa­sion­al­ly con­fuse objects with sim­i­lar mate­r­i­al prop­er­ties (for exam­ple, the alu­minum case of a Mac­Book and an alu­minum weigh­ing scale), and while it works best on sol­id objects with flat sur­faces, it takes a lit­tle longer to get a clear sig­nal on things that are hol­low or odd­ly shaped. (For more infor­ma­tion, check out the full study pub­lished by St Andrews.)

    RadarCat also has to be taught what each object looks like before it can recognize it, although Quigley says this isn’t as much of a problem as it initially appears. He compares it to music CDs: “When you first started using them, you put in the CD and it would come up with the song list. That information wasn’t recorded on the CD, but held in a database in the cloud, with the fingerprint of the CD used to do the lookup.” Once the information has been introduced to the system once, says Quigley, it can be easily distributed and used by other devices. And the more information we have about various radar fingerprints, the more we can generalize and make inferences about never-before-seen objects.

    One of the most obvi­ous appli­ca­tions of this research is to cre­ate a dic­tio­nary of things. Visu­al­ly impaired indi­vid­u­als could use it to iden­ti­fy objects that feel sim­i­lar in shape or size, or it could deliv­er more spe­cial­ized infor­ma­tion — iden­ti­fy­ing a phone mod­el, for exam­ple, and quick­ly bring­ing up a list of specs and a user man­u­al. If RadarCat’s abil­i­ties were added to elec­tron­ics, then users could trig­ger cer­tain func­tions based on con­text — hold your Radar­Cat-enabled phone in a gloved hand, for exam­ple, and it could switch to an easy-to-use user inter­face with large icons.

    ...

    The next step for RadarCat’s cre­ators is to improve the system’s abil­i­ty to dis­tin­guish between sim­i­lar objects, sug­gest­ing they could use it to not only say whether a glass is full or emp­ty, for exam­ple, but also clas­si­fy its con­tents. If the tech­nol­o­gy ever moves into main­stream use it would be quite the evo­lu­tion — from a mil­i­tary tech used to detect ships and air­planes, to a con­sumer one that can tell you exact­ly what you’re about to drink.

    ———-

    “Google’s minia­ture radars can now iden­ti­fy objects” by James Vin­cent; The Verge; 11/10/2016

    “The device is called RadarCat (or Radar Categorization for Input and Interaction), and works the way any radar system does. A base unit fires electromagnetic waves at a target, some of which bounce off and return to base. The system times how long it takes for them to come back and uses this information to work out the shape of the object and how far away it is. But because Google’s Soli radars are so accurate, they can not only detect the exterior of an object, but also its internal structure and rear surface.”

    That’s how pow­er­ful (and inva­sive) Google’s new radar sys­tem is. Not only can it detect the exte­ri­or of an object but also its rear sur­face and inter­nal struc­ture. And it’s even pos­si­ble it will be able to deter­mine things like the con­tents of a glass:

    ...
    The next step for RadarCat’s cre­ators is to improve the system’s abil­i­ty to dis­tin­guish between sim­i­lar objects, sug­gest­ing they could use it to not only say whether a glass is full or emp­ty, for exam­ple, but also clas­si­fy its con­tents. If the tech­nol­o­gy ever moves into main­stream use it would be quite the evo­lu­tion — from a mil­i­tary tech used to detect ships and air­planes, to a con­sumer one that can tell you exact­ly what you’re about to drink.
    ...

    So your fancy new Soli-powered Google smartwatch will not only be able to detect whether or not you're drinking something but might be able to determine what you're drinking. And that's what they could do two years ago. It's presumably much more advanced by this point. You can watch a video here of Soli detecting all sorts of objects, and even colors. The video shows people putting objects right on the radar plate to be detected. But according to a report from 2016, JBL, the speaker manufacturer, was working on a speaker with a built-in Soli sensor that could sense your finger motions up to 15 meters away. In other words, we shouldn't assume this technology will be used only for detecting objects and motions very close to the sensors. And that's something that makes the FCC ruling allowing for higher-power devices much more relevant. More power means greater distances.
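
    To put a rough number on that last point: for a radar like Soli, the strength of the echo that comes back falls off with the fourth power of the distance to the target, so the maximum usable range only grows with the fourth root of transmit power. Here is a minimal Python sketch of that scaling (the power ratios below are made-up placeholders, not the actual FCC figures):

        # Rough sketch of the radar range equation's power/range trade-off.
        # Received echo power for a monostatic radar scales as P_r ~ P_t / R**4,
        # so the maximum detectable range scales as R_max ~ P_t ** 0.25.

        def max_range_scale(power_ratio: float) -> float:
            """Factor by which max range grows when transmit power is
            multiplied by power_ratio, all other parameters held fixed."""
            return power_ratio ** 0.25

        # Hypothetical examples (placeholder ratios, not the waiver's real numbers):
        print(max_range_scale(4))   # ~1.41 -> about 41% more range
        print(max_range_scale(16))  # 2.0  -> doubling the range takes 16x the power

    So higher power caps buy extra sensing range, but slowly, which is part of why a regulatory jump in allowed power levels is worth paying attention to.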

    And while the idea of an object dictionary sounds like the kind of thing that would be limited to objects like cups or whatever, don't forget that Google is probably going to have the kind of information necessary to create an object dictionary of people (height, weight, body type, etc). Also keep in mind that Google already has powerful location tracking services built into the Android smartphone operating system, so Google is already going to know about at least some of the people standing next to your Soli-powered device. So if Google doesn't already have enough information about you to create a record of you in its object dictionary system, it will probably be able to create that record pretty easily. And if this radar technology can detect sub-millimeter motions of your fingers, it can presumably pick up things like facial expressions, or perhaps even facial structures too. That will presumably also be useful for identifying who happens to be in the vicinity of your Soli device.

    It's all a reminder that motion-based sensor technology might be sensing a lot more than just motion.
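
    And for a sense of how simple the recognition step can be once a RadarCat-style fingerprint database exists, here is a minimal nearest-neighbor sketch. The feature vectors and labels are invented placeholders; the real system uses far richer signals and a trained classifier:

        import math

        # Toy radar "fingerprints": fixed-length feature vectors standing in
        # for the echo off an object's exterior, internal structure, and rear
        # surface. All numbers are invented for illustration.
        FINGERPRINT_DB = {
            "empty glass": [0.90, 0.10, 0.70],
            "full glass":  [0.60, 0.50, 0.30],
            "smartphone":  [0.80, 0.90, 0.20],
            # Deliberately close to "smartphone": similar materials confuse
            # the classifier, as the researchers note.
            "aluminum case": [0.82, 0.88, 0.18],
        }

        def classify(reading):
            """Return the label whose stored fingerprint is nearest (Euclidean)."""
            return min(FINGERPRINT_DB,
                       key=lambda label: math.dist(reading, FINGERPRINT_DB[label]))

        print(classify([0.62, 0.48, 0.33]))  # -> "full glass"

    The lookup-table design is also why the CD-database analogy above matters: the fingerprints can live in the cloud, so every new labeled object immediately benefits every deployed sensor.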

    Posted by Pterrafractyl | January 3, 2019, 3:02 pm
  10. Here's another story about the abuse of the location data of smartphones. This time it doesn't involve the smartphone manufacturers or the creators of the operating systems, like Google. Instead, it's about the sale of your location data by your cellphone service provider. And it appears to include virtually all of the major cellphone operators in the US, which makes this the kind of data abuse that consumers can't do much about. Because while you could opt to not get an Android-based phone if you don't like Google's extensive location data collection, it's possible there simply may not be a cellular service provider who doesn't sell your location data.

    And the fact that individual consumers can't do much about this issue adds to its scandalous nature, because there is one thing consumers can do collectively: get the government to regulate cellular service providers to stop them from selling this data. And the US government did actually warn the telecom industry about this practice and got assurances that it would end. But as the following article makes clear, that hasn't happened. Instead, it appears that virtually all cellular providers are making location data available to a broad array of businesses. And some of those businesses are reselling the data to anyone for a substantial profit:

    Vice Moth­er­board

    I Gave a Boun­ty Hunter $300. Then He Locat­ed Our Phone
    T‑Mobile, Sprint, and AT&T are sell­ing access to their cus­tomers’ loca­tion data, and that data is end­ing up in the hands of boun­ty hunters and oth­ers not autho­rized to pos­sess it, let­ting them track most phones in the coun­try.

    by Joseph Cox

    Jan 8 2019, 11:08am

    Ner­vous­ly, I gave a boun­ty hunter a phone num­ber. He had offered to geolo­cate a phone for me, using a shady, over­looked ser­vice intend­ed not for the cops, but for pri­vate indi­vid­u­als and busi­ness­es. Armed with just the num­ber and a few hun­dred dol­lars, he said he could find the cur­rent loca­tion of most phones in the Unit­ed States.

    The boun­ty hunter sent the num­ber to his own con­tact, who would track the phone. The con­tact respond­ed with a screen­shot of Google Maps, con­tain­ing a blue cir­cle indi­cat­ing the phone’s cur­rent loca­tion, approx­i­mate to a few hun­dred metres.

    Queens, New York. More specif­i­cal­ly, the screen­shot showed a loca­tion in a par­tic­u­lar neighborhood—just a cou­ple of blocks from where the tar­get was. The hunter had found the phone (the tar­get gave their con­sent to Moth­er­board to be tracked via their T‑Mobile phone.)

    The boun­ty hunter did this all with­out deploy­ing a hack­ing tool or hav­ing any pre­vi­ous knowl­edge of the phone’s where­abouts. Instead, the track­ing tool relies on real-time loca­tion data sold to boun­ty hunters that ulti­mate­ly orig­i­nat­ed from the tel­cos them­selves, includ­ing T‑Mobile, AT&T, and Sprint, a Moth­er­board inves­ti­ga­tion has found. These sur­veil­lance capa­bil­i­ties are some­times sold through word-of-mouth net­works.

    Where­as it’s com­mon knowl­edge that law enforce­ment agen­cies can track phones with a war­rant to ser­vice providers, IMSI catch­ers, or until recent­ly via oth­er com­pa­nies that sell loca­tion data such as one called Secu­rus, at least one com­pa­ny, called Micro­bilt, is sell­ing phone geolo­ca­tion ser­vices with lit­tle over­sight to a spread of dif­fer­ent pri­vate indus­tries, rang­ing from car sales­men and prop­er­ty man­agers to bail bonds­men and boun­ty hunters, accord­ing to sources famil­iar with the company’s prod­ucts and com­pa­ny doc­u­ments obtained by Moth­er­board. Com­pound­ing that already high­ly ques­tion­able busi­ness prac­tice, this spy­ing capa­bil­i­ty is also being resold to oth­ers on the black mar­ket who are not licensed by the com­pa­ny to use it, includ­ing me, seem­ing­ly with­out Microbilt’s knowl­edge.

    Motherboard’s inves­ti­ga­tion shows just how exposed mobile net­works and the data they gen­er­ate are, leav­ing them open to sur­veil­lance by ordi­nary cit­i­zens, stalk­ers, and crim­i­nals, and comes as media and pol­i­cy mak­ers are pay­ing more atten­tion than ever to how loca­tion and oth­er sen­si­tive data is col­lect­ed and sold. The inves­ti­ga­tion also shows that a wide vari­ety of com­pa­nies can access cell phone loca­tion data, and that the infor­ma­tion trick­les down from cell phone providers to a wide array of small­er play­ers, who don’t nec­es­sar­i­ly have the cor­rect safe­guards in place to pro­tect that data.

    “Peo­ple are reselling to the wrong peo­ple,” the bail indus­try source who flagged the com­pa­ny to Moth­er­board said. Moth­er­board grant­ed the source and oth­ers in this sto­ry anonymi­ty to talk more can­did­ly about a con­tro­ver­sial sur­veil­lance capa­bil­i­ty.

    Your mobile phone is con­stant­ly com­mu­ni­cat­ing with near­by cell phone tow­ers, so your tele­com provider knows where to route calls and texts. From this, tele­com com­pa­nies also work out the phone’s approx­i­mate loca­tion based on its prox­im­i­ty to those tow­ers.

    Although many users may be unaware of the practice, telecom companies in the United States sell access to their customers’ location data to other companies, called location aggregators, who then sell it to specific clients and industries. Last year, one location aggregator called LocationSmart faced harsh criticism for selling data that ultimately ended up in the hands of Securus, a company which provided phone tracking to low level law enforcement without requiring a warrant. LocationSmart also exposed the very data it was selling through a buggy website panel, meaning anyone could geolocate nearly any phone in the United States at a click of a mouse.

    There’s a com­plex sup­ply chain that shares some of Amer­i­can cell phone users’ most sen­si­tive data, with the tel­cos poten­tial­ly being unaware of how the data is being used by the even­tu­al end user, or even whose hands it lands in. Finan­cial com­pa­nies use phone loca­tion data to detect fraud; road­side assis­tance firms use it to locate stuck cus­tomers. But AT&T, for exam­ple, told Moth­er­board the use of its cus­tomers’ data by boun­ty hunters goes explic­it­ly against the company’s poli­cies, rais­ing ques­tions about how AT&T allowed the sale for this pur­pose in the first place.

    “The alle­ga­tion here would vio­late our con­tract and Pri­va­cy Pol­i­cy,” an AT&T spokesper­son told Moth­er­board in an email.

    In the case of the phone we tracked, six dif­fer­ent enti­ties had poten­tial access to the phone’s data. T‑Mobile shares loca­tion data with an aggre­ga­tor called Zumi­go, which shares infor­ma­tion with Micro­bilt. Micro­bilt shared that data with a cus­tomer using its mobile phone track­ing prod­uct. The boun­ty hunter then shared this infor­ma­tion with a bail indus­try source, who shared it with Moth­er­board.

    The CTIA, a tele­com indus­try trade group of which AT&T, Sprint, and T‑Mobile are mem­bers, has offi­cial guide­lines for the use of so-called “loca­tion-based ser­vices” that “rely on two fun­da­men­tal prin­ci­ples: user notice and con­sent,” the group wrote in those guide­lines. Tele­com com­pa­nies and data aggre­ga­tors that Moth­er­board spoke to said that they require their clients to get con­sent from the peo­ple they want to track, but it’s clear that this is not always hap­pen­ing.

    A sec­ond source who has tracked the geolo­ca­tion indus­try told Moth­er­board, while talk­ing about the indus­try gen­er­al­ly, “If there is mon­ey to be made they will keep sell­ing the data.”

    “Those third-lev­el com­pa­nies sell their ser­vices. That is where you see the issues with going to shady folks [and] for shady rea­sons,” the source added.

    Fred­erike Kalthe­uner, data exploita­tion pro­gramme lead at cam­paign group Pri­va­cy Inter­na­tion­al, told Moth­er­board in a phone call that “it’s part of a big­ger prob­lem; the US has a com­plete­ly unreg­u­lat­ed data ecosys­tem.”

    Micro­bilt buys access to loca­tion data from an aggre­ga­tor called Zumi­go and then sells it to a dizzy­ing num­ber of sec­tors, includ­ing land­lords to scope out poten­tial renters; motor vehi­cle sales­men, and oth­ers who are con­duct­ing cred­it checks. Armed with just a phone num­ber, Microbilt’s “Mobile Device Ver­i­fy” prod­uct can return a target’s full name and address, geolo­cate a phone in an indi­vid­ual instance, or oper­ate as a con­tin­u­ous track­ing ser­vice.

    “You can set up mon­i­tor­ing with con­trol over the weeks, days and even hours that loca­tion on a device is checked as well as the start and end dates of mon­i­tor­ing,” a com­pa­ny brochure Moth­er­board found online reads.

    Pos­ing as a poten­tial cus­tomer, Moth­er­board explic­it­ly asked a Micro­bilt cus­tomer sup­port staffer whether the com­pa­ny offered phone geolo­ca­tion for bail bonds­men. Short­ly after, anoth­er staffer emailed with a price list—locat­ing a phone can cost as lit­tle as $4.95 each if search­ing for a low num­ber of devices. That price gets even cheap­er as the cus­tomer buys the capa­bil­i­ty to track more phones. Get­ting real-time updates on a phone’s loca­tion can cost around $12.95.

    “Dirt cheap when you think about the data you can get,” the source famil­iar with the indus­try added.

    It’s bad enough that access to high­ly sen­si­tive phone geolo­ca­tion data is already being sold to a wide range of indus­tries and busi­ness­es. But there is also an under­ground mar­ket that Moth­er­board used to geolo­cate a phone—one where Micro­bilt cus­tomers resell their access at a prof­it, and with min­i­mal over­sight.

    “Blade Run­ner, the icon­ic sci-fi movie, is set in 2019. And here we are: there’s an unreg­u­lat­ed black mar­ket where boun­ty-hunters can buy infor­ma­tion about where we are, in real time, over time, and come after us. You don’t need to be a repli­cant to be scared of the con­se­quences,” Thomas Rid, pro­fes­sor of strate­gic stud­ies at Johns Hop­kins Uni­ver­si­ty, told Moth­er­board in an online chat.

    The bail indus­try source said his mid­dle­man used Micro­bilt to find the phone. This mid­dle­man charged $300, a size­able markup on the usu­al Micro­bilt price. The Google Maps screen­shot pro­vid­ed to Moth­er­board of the tar­get phone’s loca­tion also includ­ed its approx­i­mate lon­gi­tude and lat­i­tude coor­di­nates, and a range of how accu­rate the phone geolo­ca­tion is: 0.3 miles, or just under 500 metres. It may not nec­es­sar­i­ly be enough to geolo­cate some­one to a spe­cif­ic build­ing in a pop­u­lat­ed area, but it can cer­tain­ly pin­point a par­tic­u­lar bor­ough, city, or neigh­bor­hood.

    In oth­er cas­es of phone geolo­ca­tion it is typ­i­cal­ly done with the con­sent of the tar­get, per­haps by send­ing a text mes­sage the user has to delib­er­ate­ly reply to, sig­nalling they accept their loca­tion being tracked. This may be done in the ear­li­er road­side assis­tance exam­ple or when a com­pa­ny mon­i­tors its fleet of trucks. But when Moth­er­board test­ed the geolo­ca­tion ser­vice, the tar­get phone received no warn­ing it was being tracked.

    The bail source who orig­i­nal­ly alert­ed Micro­bilt to Moth­er­board said that boun­ty hunters have used phone geolo­ca­tion ser­vices for non-work pur­pos­es, such as track­ing their girl­friends. Moth­er­board was unable to iden­ti­fy a spe­cif­ic instance of this hap­pen­ing, but domes­tic stalk­ers have repeat­ed­ly used tech­nol­o­gy, such as mobile phone mal­ware, to track spous­es.

    As Moth­er­board was report­ing this sto­ry, Micro­bilt removed doc­u­ments relat­ed to its mobile phone loca­tion prod­uct from its web­site.

    A Micro­bilt spokesper­son told Moth­er­board in a state­ment that the com­pa­ny requires any­one using its mobile device ver­i­fi­ca­tion ser­vices for fraud pre­ven­tion must first obtain con­sent of the con­sumer. Micro­bilt also con­firmed it found an instance of abuse on its platform—our phone ping.

    “The request came through a licensed state agency that writes in approx­i­mate­ly $100 mil­lion in bonds per year and passed all up front cre­den­tial­ing under the pre­tense that loca­tion was being ver­i­fied to mit­i­gate finan­cial expo­sure relat­ed to a bond loan being con­sid­ered for the sub­mit­ted con­sumer,” Micro­bilt said in an emailed state­ment. In this case, “licensed state agency” is refer­ring to a pri­vate bail bond com­pa­ny, Moth­er­board con­firmed.

    “As a result, Micro­Bilt was unaware that its terms of use were being vio­lat­ed by the rogue indi­vid­ual that sub­mit­ted the request under false pre­tens­es, does not approve of such use cas­es, and has a clear pol­i­cy that such vio­la­tions will result in loss of access to all Micro­Bilt ser­vices and ter­mi­na­tion of the request­ing party’s end-user agree­ment,” Micro­bilt added. “Upon inves­ti­gat­ing the alleged abuse and learn­ing of the vio­la­tion of our con­tract, we ter­mi­nat­ed the customer’s access to our prod­ucts and they will not be eli­gi­ble for rein­state­ment based on this vio­la­tion.”

    Zumi­go con­firmed it was the com­pa­ny that pro­vid­ed the phone loca­tion to Micro­bilt and defend­ed its prac­tices. In a state­ment, Zumi­go did not seem to take issue with the prac­tice of pro­vid­ing data that ulti­mate­ly end­ed up with licensed boun­ty hunters, but wrote, “ille­gal access to data is an unfor­tu­nate occur­rence across vir­tu­al­ly every indus­try that deals in con­sumer or employ­ee data, and it is impos­si­ble to detect a fraud­ster, or rogue cus­tomer, who requests loca­tion data of his or her own mobile devices when the required con­sent is pro­vid­ed. How­ev­er, Zumi­go takes steps to pro­tect pri­va­cy by pro­vid­ing a mea­sure of dis­tance (approx. 0.5–1.0 mile) from an actu­al address.” Zumi­go told Moth­er­board it has cut Microbilt’s data access.

    In Motherboard’s case, the suc­cess­ful­ly geolo­cat­ed phone was on T‑Mobile.

    “We take the pri­va­cy and secu­ri­ty of our cus­tomers’ infor­ma­tion very seri­ous­ly and will not tol­er­ate any mis­use of our cus­tomers’ data,” A T‑Mobile spokesper­son told Moth­er­board in an emailed state­ment. “While T‑Mobile does not have a direct rela­tion­ship with Micro­bilt, our ven­dor Zumi­go was work­ing with them and has con­firmed with us that they have already shut down all trans­mis­sion of T‑Mobile data. T‑Mobile has also blocked access to device loca­tion data for any request sub­mit­ted by Zumi­go on behalf of Micro­bilt as an addi­tion­al pre­cau­tion.”

    Microbilt’s prod­uct doc­u­men­ta­tion sug­gests the phone loca­tion ser­vice works on all mobile net­works, how­ev­er the mid­dle­man was unable or unwill­ing to con­duct a search for a Ver­i­zon device. Ver­i­zon did not respond to a request for com­ment.

    AT&T told Moth­er­board it has cut access to Micro­bilt as the com­pa­ny inves­ti­gates.

    “We only per­mit the shar­ing of loca­tion when a cus­tomer gives per­mis­sion for cas­es like fraud pre­ven­tion or emer­gency road­side assis­tance, or when required by law,” the AT&T spokesper­son said.

    Sprint told Moth­er­board in a state­ment that “pro­tect­ing our cus­tomers’ pri­va­cy and secu­ri­ty is a top pri­or­i­ty, and we are trans­par­ent about that in our Pri­va­cy Pol­i­cy [...] Sprint does not have a direct rela­tion­ship with Micro­Bilt. If we deter­mine that any of our cus­tomers do and have vio­lat­ed the terms of our con­tract, we will take appro­pri­ate action based on those find­ings.” Sprint would not clar­i­fy the con­tours of its rela­tion­ship with Micro­bilt.

    These state­ments sound very famil­iar. When The New York Times and Sen­a­tor Ron Wyden pub­lished details of Secu­rus last year, the firm that was offer­ing geolo­ca­tion to low lev­el law enforce­ment with­out a war­rant, the tel­cos said they were tak­ing extra mea­sures to make sure their cus­tomers’ data would not be abused again. Ver­i­zon announced it was going to lim­it data access to com­pa­nies not using it for legit­i­mate pur­pos­es. T‑Mobile, Sprint, and AT&T fol­lowed suit short­ly after with sim­i­lar promis­es.

    After Wyden’s pres­sure, T‑Mobile’s CEO John Leg­ere tweet­ed in June last year “I’ve per­son­al­ly eval­u­at­ed this issue & have pledged that @tmobile will not sell cus­tomer loca­tion data to shady mid­dle­men.”

    Months after the tel­cos said they were going to com­bat this prob­lem, in the face of an arguably even worse case of abuse and data trad­ing, they are say­ing much the same thing. Last year, Moth­er­board report­ed on a com­pa­ny that pre­vi­ous­ly offered phone geolo­ca­tion to boun­ty hunters; here Micro­bilt is oper­at­ing even after a wave of out­rage from pol­i­cy mak­ers. In its state­ment to Moth­er­board on Mon­day, T‑Mobile said it has near­ly fin­ished the process of ter­mi­nat­ing its agree­ments with loca­tion aggre­ga­tors.

    “It would be bad if this was the first time we learned about it. It’s not. Every major wire­less car­ri­er pledged to end this kind of data shar­ing after I exposed this prac­tice last year. Now it appears these promis­es were lit­tle more than worth­less spam in their cus­tomers’ inbox­es,” Wyden told Moth­er­board in a state­ment. Wyden is propos­ing leg­is­la­tion to safe­guard per­son­al data.

    ...

    “Wire­less car­ri­ers’ con­tin­ued sale of loca­tion data is a night­mare for nation­al secu­ri­ty and the per­son­al safe­ty of any­one with a phone,” Wyden added. “When stalk­ers, spies, and preda­tors know when a woman is alone, or when a home is emp­ty, or where a White House offi­cial stops after work, the pos­si­bil­i­ties for abuse are end­less.”

    ———-

    “I Gave a Boun­ty Hunter $300. Then He Locat­ed Our Phone” by Joseph Cox; Vice Moth­er­board; 01/08/2019

    “Although many users may be unaware of the practice, telecom companies in the United States sell access to their customers’ location data to other companies, called location aggregators, who then sell it to specific clients and industries. Last year, one location aggregator called LocationSmart faced harsh criticism for selling data that ultimately ended up in the hands of Securus, a company which provided phone tracking to low level law enforcement without requiring a warrant. LocationSmart also exposed the very data it was selling through a buggy website panel, meaning anyone could geolocate nearly any phone in the United States at a click of a mouse.”

    So the telecoms sell your location data to “location aggregators”, who are supposed to sell the data only to specific clients and industries. But as a report from last year revealed, data from one aggregator, LocationSmart, was getting resold to companies like Securus, which offered phone tracking to law enforcement without warrants, while LocationSmart itself exposed the data to pretty much anyone online through a buggy website.

    This time, Vice Motherboard discovered that Securus was not an anomaly. Microbilt is engaged in a similar practice, buying this data from the location aggregator Zumigo and just reselling it to whoever will pay:

    ...
    Where­as it’s com­mon knowl­edge that law enforce­ment agen­cies can track phones with a war­rant to ser­vice providers, IMSI catch­ers, or until recent­ly via oth­er com­pa­nies that sell loca­tion data such as one called Secu­rus, at least one com­pa­ny, called Micro­bilt, is sell­ing phone geolo­ca­tion ser­vices with lit­tle over­sight to a spread of dif­fer­ent pri­vate indus­tries, rang­ing from car sales­men and prop­er­ty man­agers to bail bonds­men and boun­ty hunters, accord­ing to sources famil­iar with the company’s prod­ucts and com­pa­ny doc­u­ments obtained by Moth­er­board. Com­pound­ing that already high­ly ques­tion­able busi­ness prac­tice, this spy­ing capa­bil­i­ty is also being resold to oth­ers on the black mar­ket who are not licensed by the com­pa­ny to use it, includ­ing me, seem­ing­ly with­out Microbilt’s knowl­edge.

    Motherboard’s inves­ti­ga­tion shows just how exposed mobile net­works and the data they gen­er­ate are, leav­ing them open to sur­veil­lance by ordi­nary cit­i­zens, stalk­ers, and crim­i­nals, and comes as media and pol­i­cy mak­ers are pay­ing more atten­tion than ever to how loca­tion and oth­er sen­si­tive data is col­lect­ed and sold. The inves­ti­ga­tion also shows that a wide vari­ety of com­pa­nies can access cell phone loca­tion data, and that the infor­ma­tion trick­les down from cell phone providers to a wide array of small­er play­ers, who don’t nec­es­sar­i­ly have the cor­rect safe­guards in place to pro­tect that data.
    ...

    There’s a com­plex sup­ply chain that shares some of Amer­i­can cell phone users’ most sen­si­tive data, with the tel­cos poten­tial­ly being unaware of how the data is being used by the even­tu­al end user, or even whose hands it lands in. Finan­cial com­pa­nies use phone loca­tion data to detect fraud; road­side assis­tance firms use it to locate stuck cus­tomers. But AT&T, for exam­ple, told Moth­er­board the use of its cus­tomers’ data by boun­ty hunters goes explic­it­ly against the company’s poli­cies, rais­ing ques­tions about how AT&T allowed the sale for this pur­pose in the first place.

    ...

    In the case of the phone we tracked, six dif­fer­ent enti­ties had poten­tial access to the phone’s data. T‑Mobile shares loca­tion data with an aggre­ga­tor called Zumi­go, which shares infor­ma­tion with Micro­bilt. Micro­bilt shared that data with a cus­tomer using its mobile phone track­ing prod­uct. The boun­ty hunter then shared this infor­ma­tion with a bail indus­try source, who shared it with Moth­er­board.
    ...

    And the services offered by these companies aren't limited to the location of the smartphone at the time you request it. You can get continuous tracking services too, for as little as $12.95 per phone for real-time updates:

    ...
    Micro­bilt buys access to loca­tion data from an aggre­ga­tor called Zumi­go and then sells it to a dizzy­ing num­ber of sec­tors, includ­ing land­lords to scope out poten­tial renters; motor vehi­cle sales­men, and oth­ers who are con­duct­ing cred­it checks. Armed with just a phone num­ber, Microbilt’s “Mobile Device Ver­i­fy” prod­uct can return a target’s full name and address, geolo­cate a phone in an indi­vid­ual instance, or oper­ate as a con­tin­u­ous track­ing ser­vice.

    “You can set up mon­i­tor­ing with con­trol over the weeks, days and even hours that loca­tion on a device is checked as well as the start and end dates of mon­i­tor­ing,” a com­pa­ny brochure Moth­er­board found online reads.

    Pos­ing as a poten­tial cus­tomer, Moth­er­board explic­it­ly asked a Micro­bilt cus­tomer sup­port staffer whether the com­pa­ny offered phone geolo­ca­tion for bail bonds­men. Short­ly after, anoth­er staffer emailed with a price list—locat­ing a phone can cost as lit­tle as $4.95 each if search­ing for a low num­ber of devices. That price gets even cheap­er as the cus­tomer buys the capa­bil­i­ty to track more phones. Get­ting real-time updates on a phone’s loca­tion can cost around $12.95.

    “Dirt cheap when you think about the data you can get,” the source famil­iar with the indus­try added.

    ...

    The bail indus­try source said his mid­dle­man used Micro­bilt to find the phone. This mid­dle­man charged $300, a size­able markup on the usu­al Micro­bilt price. The Google Maps screen­shot pro­vid­ed to Moth­er­board of the tar­get phone’s loca­tion also includ­ed its approx­i­mate lon­gi­tude and lat­i­tude coor­di­nates, and a range of how accu­rate the phone geolo­ca­tion is: 0.3 miles, or just under 500 metres. It may not nec­es­sar­i­ly be enough to geolo­cate some­one to a spe­cif­ic build­ing in a pop­u­lat­ed area, but it can cer­tain­ly pin­point a par­tic­u­lar bor­ough, city, or neigh­bor­hood.

    ...

    The bail source who orig­i­nal­ly alert­ed Micro­bilt to Moth­er­board said that boun­ty hunters have used phone geolo­ca­tion ser­vices for non-work pur­pos­es, such as track­ing their girl­friends. Moth­er­board was unable to iden­ti­fy a spe­cif­ic instance of this hap­pen­ing, but domes­tic stalk­ers have repeat­ed­ly used tech­nol­o­gy, such as mobile phone mal­ware, to track spous­es.
    ...
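
    For scale, that 0.3-mile accuracy figure is easy to sanity-check with a standard haversine (great-circle) distance calculation. A minimal sketch, with made-up placeholder coordinates:

        import math

        def haversine_m(lat1, lon1, lat2, lon2):
            """Great-circle distance in meters between two (lat, lon) points."""
            r = 6_371_000  # mean Earth radius in meters
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp = math.radians(lat2 - lat1)
            dl = math.radians(lon2 - lon1)
            a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
            return 2 * r * math.asin(math.sqrt(a))

        ACCURACY_M = 0.3 * 1609.34  # the article's 0.3-mile radius, ~483 m

        # Hypothetical reported fix vs. true location, a few blocks apart:
        reported = (40.7282, -73.7949)  # placeholder point in Queens
        actual = (40.7310, -73.7920)

        dist = haversine_m(*reported, *actual)
        print(round(dist))         # ~400 m
        print(dist <= ACCURACY_M)  # True: the fix is consistent with the claim

    A few hundred meters won't pick out an apartment, but it narrows a city of millions down to a couple of blocks, which is often all a bounty hunter needs.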

    And, of course, the telecom companies assure us that they are looking into this and will cut off irresponsible location aggregators' access to this kind of information. Which should sound familiar, since that's exactly what they said after Securus's practices were revealed last year in the face of pressure from Congress:

    ...
    AT&T told Moth­er­board it has cut access to Micro­bilt as the com­pa­ny inves­ti­gates.

    “We only per­mit the shar­ing of loca­tion when a cus­tomer gives per­mis­sion for cas­es like fraud pre­ven­tion or emer­gency road­side assis­tance, or when required by law,” the AT&T spokesper­son said.

    Sprint told Moth­er­board in a state­ment that “pro­tect­ing our cus­tomers’ pri­va­cy and secu­ri­ty is a top pri­or­i­ty, and we are trans­par­ent about that in our Pri­va­cy Pol­i­cy [...] Sprint does not have a direct rela­tion­ship with Micro­Bilt. If we deter­mine that any of our cus­tomers do and have vio­lat­ed the terms of our con­tract, we will take appro­pri­ate action based on those find­ings.” Sprint would not clar­i­fy the con­tours of its rela­tion­ship with Micro­bilt.

    These state­ments sound very famil­iar. When The New York Times and Sen­a­tor Ron Wyden pub­lished details of Secu­rus last year, the firm that was offer­ing geolo­ca­tion to low lev­el law enforce­ment with­out a war­rant, the tel­cos said they were tak­ing extra mea­sures to make sure their cus­tomers’ data would not be abused again. Ver­i­zon announced it was going to lim­it data access to com­pa­nies not using it for legit­i­mate pur­pos­es. T‑Mobile, Sprint, and AT&T fol­lowed suit short­ly after with sim­i­lar promis­es.

    After Wyden’s pres­sure, T‑Mobile’s CEO John Leg­ere tweet­ed in June last year “I’ve per­son­al­ly eval­u­at­ed this issue & have pledged that @tmobile will not sell cus­tomer loca­tion data to shady mid­dle­men.”

    Months after the tel­cos said they were going to com­bat this prob­lem, in the face of an arguably even worse case of abuse and data trad­ing, they are say­ing much the same thing. Last year, Moth­er­board report­ed on a com­pa­ny that pre­vi­ous­ly offered phone geolo­ca­tion to boun­ty hunters; here Micro­bilt is oper­at­ing even after a wave of out­rage from pol­i­cy mak­ers. In its state­ment to Moth­er­board on Mon­day, T‑Mobile said it has near­ly fin­ished the process of ter­mi­nat­ing its agree­ments with loca­tion aggre­ga­tors.

    “It would be bad if this was the first time we learned about it. It’s not. Every major wire­less car­ri­er pledged to end this kind of data shar­ing after I exposed this prac­tice last year. Now it appears these promis­es were lit­tle more than worth­less spam in their cus­tomers’ inbox­es,” Wyden told Moth­er­board in a state­ment. Wyden is propos­ing leg­is­la­tion to safe­guard per­son­al data.
    ...

    So what’s to be done? Well, since this indus­try is appar­ent­ly going to keep doing this as long as it’s allowed to do so and is prof­itable, reg­u­lat­ing the indus­try seems like the obvi­ous answer:

    ...
    A sec­ond source who has tracked the geolo­ca­tion indus­try told Moth­er­board, while talk­ing about the indus­try gen­er­al­ly, “If there is mon­ey to be made they will keep sell­ing the data.”

    “Those third-lev­el com­pa­nies sell their ser­vices. That is where you see the issues with going to shady folks [and] for shady rea­sons,” the source added.

    Fred­erike Kalthe­uner, data exploita­tion pro­gramme lead at cam­paign group Pri­va­cy Inter­na­tion­al, told Moth­er­board in a phone call that “it’s part of a big­ger prob­lem; the US has a com­plete­ly unreg­u­lat­ed data ecosys­tem.”
    ...

    "If there is money to be made they will keep selling the data." And "it's part of a bigger problem; the US has a completely unregulated data ecosystem." Those seem like two pretty compelling reasons for serious new regulations.

    Posted by Pterrafractyl | January 11, 2019, 2:12 pm
  11. Following up on the creepy story about the growing industry of selling location data collected from cellphone service providers like T-Mobile and AT&T, here's a New York Times article from last month that addresses the extensive amount of data collected by smartphone apps. Among the many fun facts in the article, we learn that Google's Android operating system allows apps not in use to collect location data "a few times an hour", instead of continuously when the app is running. So if you are assuming that your smartphone apps are only spying on you when they're actually running, you might want to change those assumptions.

    The Times discovered at least 75 companies that receive anonymous app location data. And we learn that Peter Thiel is an investor in one of those companies. As we're going to see, that company, SafeGraph, has plans to offer its location services to government agencies. So given that Thiel's Palantir is already a major national security contractor specializing in Big Data analysis for a variety of corporate and government clients, the question of whether or not all of this location data is being fed into privatized intelligence databases like Palantir's is a pretty big one.

    We also learn the approximate costs location aggregators are paying for access to user location data: about half a cent to two cents per user per month. That's the price of one of your most intimate types of data.
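
    Those per-user rates sound trivial until you multiply them out. A back-of-the-envelope sketch, using the article's claim (quoted below) that several of these companies track up to 200 million devices:

        # Back-of-the-envelope: "half a cent to two cents per user per month"
        # at the scale the article describes (up to 200 million tracked devices).
        users = 200_000_000
        low_rate, high_rate = 0.005, 0.02  # dollars per user per month

        print(f"${users * low_rate:,.0f} to ${users * high_rate:,.0f} per month")
        # -> $1,000,000 to $4,000,000 per month

    With that in mind, here's the article: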

    The New York Times

    Your Apps Know Where You Were Last Night, and They’re Not Keep­ing It Secret
    Dozens of com­pa­nies use smart­phone loca­tions to help adver­tis­ers and even hedge funds. They say it’s anony­mous, but the data shows how per­son­al it is.

    By JENNIFER VALENTI­NO-DeVRIES, NATASHA SINGER, MICHAEL H. KELLER and AARON KROLIK
    DEC. 10, 2018

    The mil­lions of dots on the map trace high­ways, side streets and bike trails — each one fol­low­ing the path of an anony­mous cell­phone user.

    One path tracks some­one from a home out­side Newark to a near­by Planned Par­ent­hood, remain­ing there for more than an hour. Anoth­er rep­re­sents a per­son who trav­els with the may­or of New York dur­ing the day and returns to Long Island at night.

    Yet anoth­er leaves a house in upstate New York at 7 a.m. and trav­els to a mid­dle school 14 miles away, stay­ing until late after­noon each school day. Only one per­son makes that trip: Lisa Magrin, a 46-year-old math teacher. Her smart­phone goes with her.

    An app on the device gath­ered her loca­tion infor­ma­tion, which was then sold with­out her knowl­edge. It record­ed her where­abouts as often as every two sec­onds, accord­ing to a data­base of more than a mil­lion phones in the New York area that was reviewed by The New York Times. While Ms. Magrin’s iden­ti­ty was not dis­closed in those records, The Times was able to eas­i­ly con­nect her to that dot.

    The app tracked her as she went to a Weight Watch­ers meet­ing and to her dermatologist’s office for a minor pro­ce­dure. It fol­lowed her hik­ing with her dog and stay­ing at her ex-boyfriend’s home, infor­ma­tion she found dis­turb­ing.

    “It’s the thought of peo­ple find­ing out those inti­mate details that you don’t want peo­ple to know,” said Ms. Magrin, who allowed The Times to review her loca­tion data.

    Like many con­sumers, Ms. Magrin knew that apps could track people’s move­ments. But as smart­phones have become ubiq­ui­tous and tech­nol­o­gy more accu­rate, an indus­try of snoop­ing on people’s dai­ly habits has spread and grown more intru­sive.

    At least 75 com­pa­nies receive anony­mous, pre­cise loca­tion data from apps whose users enable loca­tion ser­vices to get local news and weath­er or oth­er infor­ma­tion, The Times found. Sev­er­al of those busi­ness­es claim to track up to 200 mil­lion mobile devices in the Unit­ed States — about half those in use last year. The data­base reviewed by The Times — a sam­ple of infor­ma­tion gath­ered in 2017 and held by one com­pa­ny — reveals people’s trav­els in star­tling detail, accu­rate to with­in a few yards and in some cas­es updat­ed more than 14,000 times a day.

    These com­pa­nies sell, use or ana­lyze the data to cater to adver­tis­ers, retail out­lets and even hedge funds seek­ing insights into con­sumer behav­ior. It’s a hot mar­ket, with sales of loca­tion-tar­get­ed adver­tis­ing reach­ing an esti­mat­ed $21 bil­lion this year. IBM has got­ten into the indus­try, with its pur­chase of the Weath­er Channel’s apps. The social net­work Foursquare remade itself as a loca­tion mar­ket­ing com­pa­ny. Promi­nent investors in loca­tion start-ups include Gold­man Sachs and Peter Thiel, the Pay­Pal co-founder.

    Busi­ness­es say their inter­est is in the pat­terns, not the iden­ti­ties, that the data reveals about con­sumers. They note that the infor­ma­tion apps col­lect is tied not to someone’s name or phone num­ber but to a unique ID. But those with access to the raw data — includ­ing employ­ees or clients — could still iden­ti­fy a per­son with­out con­sent. They could fol­low some­one they knew, by pin­point­ing a phone that reg­u­lar­ly spent time at that person’s home address. Or, work­ing in reverse, they could attach a name to an anony­mous dot, by see­ing where the device spent nights and using pub­lic records to fig­ure out who lived there.
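
    (A quick technical aside: the re-identification step the reporters describe is computationally trivial. Bucket an anonymous device's pings into coarse grid cells, keep only the nighttime ones, and the most frequent cell is almost certainly home. A minimal sketch with invented pings; a real system would use proper clustering rather than ~100 m grid rounding:

        from collections import Counter
        from datetime import datetime

        # Invented (timestamp, lat, lon) pings for one "anonymous" device ID.
        pings = [
            ("2017-03-01T23:10", 40.7310, -73.7921),  # night
            ("2017-03-02T02:45", 40.7311, -73.7923),  # night
            ("2017-03-02T13:05", 40.7510, -73.9860),  # daytime, at work
            ("2017-03-03T01:20", 40.7309, -73.7922),  # night
        ]

        def likely_home(pings, night_start=22, night_end=6, cell=0.001):
            """Most common ~100 m grid cell among nighttime pings."""
            cells = Counter()
            for ts, lat, lon in pings:
                hour = datetime.fromisoformat(ts).hour
                if hour >= night_start or hour < night_end:
                    cells[(round(lat / cell), round(lon / cell))] += 1
            row, col = cells.most_common(1)[0][0]
            return (row * cell, col * cell)

        print(likely_home(pings))  # -> roughly (40.731, -73.792)

    Pair that output with public property records and the "anonymous" dot has a name, which is exactly the attack described here.)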

    Many loca­tion com­pa­nies say that when phone users enable loca­tion ser­vices, their data is fair game. But, The Times found, the expla­na­tions peo­ple see when prompt­ed to give per­mis­sion are often incom­plete or mis­lead­ing. An app may tell users that grant­i­ng access to their loca­tion will help them get traf­fic infor­ma­tion, but not men­tion that the data will be shared and sold. That dis­clo­sure is often buried in a vague pri­va­cy pol­i­cy.

    “Loca­tion infor­ma­tion can reveal some of the most inti­mate details of a person’s life — whether you’ve vis­it­ed a psy­chi­a­trist, whether you went to an A.A. meet­ing, who you might date,” said Sen­a­tor Ron Wyden, Demo­c­rat of Ore­gon, who has pro­posed bills to lim­it the col­lec­tion and sale of such data, which are large­ly unreg­u­lat­ed in the Unit­ed States.

    “It’s not right to have con­sumers kept in the dark about how their data is sold and shared and then leave them unable to do any­thing about it,” he added.

    Mobile Sur­veil­lance Devices

    After Elise Lee, a nurse in Man­hat­tan, saw that her device had been tracked to the main oper­at­ing room at the hos­pi­tal where she works, she expressed con­cern about her pri­va­cy and that of her patients.

    “It’s very scary,” said Ms. Lee, who allowed The Times to exam­ine her loca­tion his­to­ry in the data set it reviewed. “It feels like some­one is fol­low­ing me, per­son­al­ly.”

    The mobile loca­tion indus­try began as a way to cus­tomize apps and tar­get ads for near­by busi­ness­es, but it has mor­phed into a data col­lec­tion and analy­sis machine.

    Retail­ers look to track­ing com­pa­nies to tell them about their own cus­tomers and their com­peti­tors’. For a web sem­i­nar last year, Eli­na Green­stein, an exec­u­tive at the loca­tion com­pa­ny GroundTruth, mapped out the path of a hypo­thet­i­cal con­sumer from home to work to show poten­tial clients how track­ing could reveal a person’s pref­er­ences. For exam­ple, some­one may search online for healthy recipes, but GroundTruth can see that the per­son often eats at fast-food restau­rants.

    “We look to under­stand who a per­son is, based on where they’ve been and where they’re going, in order to influ­ence what they’re going to do next,” Ms. Green­stein said.

    Finan­cial firms can use the infor­ma­tion to make invest­ment deci­sions before a com­pa­ny reports earn­ings — see­ing, for exam­ple, if more peo­ple are work­ing on a fac­to­ry floor, or going to a retailer’s stores.

    Health care facil­i­ties are among the more entic­ing but trou­bling areas for track­ing, as Ms. Lee’s reac­tion demon­strat­ed. Tell All Dig­i­tal, a Long Island adver­tis­ing firm that is a client of a loca­tion com­pa­ny, says it runs ad cam­paigns for per­son­al injury lawyers tar­get­ing peo­ple anony­mous­ly in emer­gency rooms.

    “The book ‘1984,’ we’re kind of liv­ing it in a lot of ways,” said Bill Kakis, a man­ag­ing part­ner at Tell All.

    Jails, schools, a mil­i­tary base and a nuclear pow­er plant — even crime scenes — appeared in the data set The Times reviewed. One per­son, per­haps a detec­tive, arrived at the site of a late-night homi­cide in Man­hat­tan, then spent time at a near­by hos­pi­tal, return­ing repeat­ed­ly to the local police sta­tion.

    Two loca­tion firms, Fys­i­cal and Safe­Graph, mapped peo­ple attend­ing the 2017 pres­i­den­tial inau­gu­ra­tion. On Fysical’s map, a bright red box near the Capi­tol steps indi­cat­ed the gen­er­al loca­tion of Pres­i­dent Trump and those around him, cell­phones ping­ing away. Fysical’s chief exec­u­tive said in an email that the data it used was anony­mous. Safe­Graph did not respond to requests for com­ment.

    More than 1,000 pop­u­lar apps con­tain loca­tion-shar­ing code from such com­pa­nies, accord­ing to 2018 data from MightySig­nal, a mobile analy­sis firm. Google’s Android sys­tem was found to have about 1,200 apps with such code, com­pared with about 200 on Apple’s iOS.

    The most pro­lif­ic com­pa­ny was Reveal Mobile, based in North Car­oli­na, which had loca­tion-gath­er­ing code in more than 500 apps, includ­ing many that pro­vide local news. A Reveal spokesman said that the pop­u­lar­i­ty of its code showed that it helped app devel­op­ers make ad mon­ey and con­sumers get free ser­vices.

    To eval­u­ate loca­tion-shar­ing prac­tices, The Times test­ed 20 apps, most of which had been flagged by researchers and indus­try insid­ers as poten­tial­ly shar­ing the data. Togeth­er, 17 of the apps sent exact lat­i­tude and lon­gi­tude to about 70 busi­ness­es. Pre­cise loca­tion data from one app, Weath­er­Bug on iOS, was received by 40 com­pa­nies. When con­tact­ed by The Times, some of the com­pa­nies that received that data described it as “unso­licit­ed” or “inap­pro­pri­ate.”

    Weath­er­Bug, owned by GroundTruth, asks users’ per­mis­sion to col­lect their loca­tion and tells them the infor­ma­tion will be used to per­son­al­ize ads. GroundTruth said that it typ­i­cal­ly sent the data to ad com­pa­nies it worked with, but that if they didn’t want the infor­ma­tion they could ask to stop receiv­ing it.

    The Times also iden­ti­fied more than 25 oth­er com­pa­nies that have said in mar­ket­ing mate­ri­als or inter­views that they sell loca­tion data or ser­vices, includ­ing tar­get­ed adver­tis­ing.

    The spread of this infor­ma­tion rais­es ques­tions about how secure­ly it is han­dled and whether it is vul­ner­a­ble to hack­ing, said Serge Egel­man, a com­put­er secu­ri­ty and pri­va­cy researcher affil­i­at­ed with the Uni­ver­si­ty of Cal­i­for­nia, Berke­ley.

    “There are real­ly no con­se­quences” for com­pa­nies that don’t pro­tect the data, he said, “oth­er than bad press that gets for­got­ten about.”

    A Ques­tion of Aware­ness

    Com­pa­nies that use loca­tion data say that peo­ple agree to share their infor­ma­tion in exchange for cus­tomized ser­vices, rewards and dis­counts. Ms. Magrin, the teacher, not­ed that she liked that track­ing tech­nol­o­gy let her record her jog­ging routes.

    Bri­an Wong, chief exec­u­tive of Kiip, a mobile ad firm that has also sold anony­mous data from some of the apps it works with, says users give apps per­mis­sion to use and share their data. “You are receiv­ing these ser­vices for free because adver­tis­ers are help­ing mon­e­tize and pay for it,” he said, adding, “You would have to be pret­ty obliv­i­ous if you are not aware that this is going on.”

    But Ms. Lee, the nurse, had a dif­fer­ent view. “I guess that’s what they have to tell them­selves,” she said of the com­pa­nies. “But come on.”

    Ms. Lee had giv­en apps on her iPhone access to her loca­tion only for cer­tain pur­pos­es — help­ing her find park­ing spaces, send­ing her weath­er alerts — and only if they did not indi­cate that the infor­ma­tion would be used for any­thing else, she said. Ms. Magrin had allowed about a dozen apps on her Android phone access to her where­abouts for ser­vices like traf­fic noti­fi­ca­tions.

    But it is easy to share infor­ma­tion with­out real­iz­ing it. Of the 17 apps that The Times saw send­ing pre­cise loca­tion data, just three on iOS and one on Android told users in a prompt dur­ing the per­mis­sion process that the infor­ma­tion could be used for adver­tis­ing. Only one app, Gas­Bud­dy, which iden­ti­fies near­by gas sta­tions, indi­cat­ed that data could also be shared to “ana­lyze indus­try trends.”

    More typ­i­cal was theScore, a sports app: When prompt­ing users to grant access to their loca­tion, it said the data would help “rec­om­mend local teams and play­ers that are rel­e­vant to you.” The app passed pre­cise coor­di­nates to 16 adver­tis­ing and loca­tion com­pa­nies.

    A spokesman for theScore said that the lan­guage in the prompt was intend­ed only as a “quick intro­duc­tion to cer­tain key prod­uct fea­tures” and that the full uses of the data were described in the app’s pri­va­cy pol­i­cy.

    The Weath­er Chan­nel app, owned by an IBM sub­sidiary, told users that shar­ing their loca­tions would let them get per­son­al­ized local weath­er reports. IBM said the sub­sidiary, the Weath­er Com­pa­ny, dis­cussed oth­er uses in its pri­va­cy pol­i­cy and in a sep­a­rate “pri­va­cy set­tings” sec­tion of the app. Infor­ma­tion on adver­tis­ing was includ­ed there, but a part of the app called “loca­tion set­tings” made no men­tion of it.

    The app did not explic­it­ly dis­close that the com­pa­ny had also ana­lyzed the data for hedge funds — a pilot pro­gram that was pro­mot­ed on the company’s web­site. An IBM spokesman said the pilot had end­ed. (IBM updat­ed the app’s pri­va­cy pol­i­cy on Dec. 5, after queries from The Times, to say that it might share aggre­gat­ed loca­tion data for com­mer­cial pur­pos­es such as ana­lyz­ing foot traf­fic.)

    Even indus­try insid­ers acknowl­edge that many peo­ple either don’t read those poli­cies or may not ful­ly under­stand their opaque lan­guage. Poli­cies for apps that fun­nel loca­tion infor­ma­tion to help invest­ment firms, for instance, have said the data is used for mar­ket analy­sis, or sim­ply shared for busi­ness pur­pos­es.

    “Most peo­ple don’t know what’s going on,” said Emmett Kil­duff, the chief exec­u­tive of Eagle Alpha, which sells data to finan­cial firms and hedge funds. Mr. Kil­duff said respon­si­bil­i­ty for com­ply­ing with data-gath­er­ing reg­u­la­tions fell to the com­pa­nies that col­lect­ed it from peo­ple.

    Many loca­tion com­pa­nies say they vol­un­tar­i­ly take steps to pro­tect users’ pri­va­cy, but poli­cies vary wide­ly.

    For exam­ple, Sense360, which focus­es on the restau­rant indus­try, says it scram­bles data with­in a 1,000-foot square around the device’s approx­i­mate home loca­tion. Anoth­er com­pa­ny, Fac­tu­al, says that it col­lects data from con­sumers at home, but that its data­base doesn’t con­tain their address­es.

    Some com­pa­nies say they delete the loca­tion data after using it to serve ads, some use it for ads and pass it along to data aggre­ga­tion com­pa­nies, and oth­ers keep the infor­ma­tion for years.

    Sev­er­al peo­ple in the loca­tion busi­ness said that it would be rel­a­tive­ly sim­ple to fig­ure out indi­vid­ual iden­ti­ties in this kind of data, but that they didn’t do it. Oth­ers sug­gest­ed it would require so much effort that hack­ers wouldn’t both­er.

    It “would take an enor­mous amount of resources,” said Bill Dad­di, a spokesman for Cue­biq, which ana­lyzes anony­mous loca­tion data to help retail­ers and oth­ers, and raised more than $27 mil­lion this year from investors includ­ing Gold­man Sachs and Nas­daq Ven­tures. Nev­er­the­less, Cue­biq encrypts its infor­ma­tion, logs employ­ee queries and sells aggre­gat­ed analy­sis, he said.

    There is no fed­er­al law lim­it­ing the col­lec­tion or use of such data. Still, apps that ask for access to users’ loca­tions, prompt­ing them for per­mis­sion while leav­ing out impor­tant details about how the data will be used, may run afoul of fed­er­al rules on decep­tive busi­ness prac­tices, said Manee­sha Mithal, a pri­va­cy offi­cial at the Fed­er­al Trade Com­mis­sion.

    “You can’t cure a mis­lead­ing just-in-time dis­clo­sure with infor­ma­tion in a pri­va­cy pol­i­cy,” Ms. Mithal said.

    Fol­low­ing the Mon­ey

    Apps form the back­bone of this new loca­tion data econ­o­my.

    The app devel­op­ers can make mon­ey by direct­ly sell­ing their data, or by shar­ing it for loca­tion-based ads, which com­mand a pre­mi­um. Loca­tion data com­pa­nies pay half a cent to two cents per user per month, accord­ing to offer let­ters to app mak­ers reviewed by The Times.

    Tar­get­ed adver­tis­ing is by far the most com­mon use of the infor­ma­tion.

    Google and Face­book, which dom­i­nate the mobile ad mar­ket, also lead in loca­tion-based adver­tis­ing. Both com­pa­nies col­lect the data from their own apps. They say they don’t sell it but keep it for them­selves to per­son­al­ize their ser­vices, sell tar­get­ed ads across the inter­net and track whether the ads lead to sales at brick-and-mor­tar stores. Google, which also receives pre­cise loca­tion infor­ma­tion from apps that use its ad ser­vices, said it mod­i­fied that data to make it less exact.

    Small­er com­pa­nies com­pete for the rest of the mar­ket, includ­ing by sell­ing data and analy­sis to finan­cial insti­tu­tions. This seg­ment of the indus­try is small but grow­ing, expect­ed to reach about $250 mil­lion a year by 2020, accord­ing to the mar­ket research firm Opi­mas.

    Apple and Google have a finan­cial inter­est in keep­ing devel­op­ers hap­py, but both have tak­en steps to lim­it loca­tion data col­lec­tion. In the most recent ver­sion of Android, apps that are not in use can col­lect loca­tions “a few times an hour,” instead of con­tin­u­ous­ly.

    Apple has been stricter, for exam­ple requir­ing apps to jus­ti­fy col­lect­ing loca­tion details in pop-up mes­sages. But Apple’s instruc­tions for writ­ing these pop-ups do not men­tion adver­tis­ing or data sale, only fea­tures like get­ting “esti­mat­ed trav­el times.”

    A spokesman said the com­pa­ny man­dates that devel­op­ers use the data only to pro­vide a ser­vice direct­ly rel­e­vant to the app, or to serve adver­tis­ing that met Apple’s guide­lines.

    Apple recent­ly shelved plans that indus­try insid­ers say would have sig­nif­i­cant­ly cur­tailed loca­tion col­lec­tion. Last year, the com­pa­ny said an upcom­ing ver­sion of iOS would show a blue bar onscreen when­ev­er an app not in use was gain­ing access to loca­tion data.

    The dis­cus­sion served as a “warn­ing shot” to peo­ple in the loca­tion indus­try, David Shim, chief exec­u­tive of the loca­tion com­pa­ny Placed, said at an indus­try event last year.

    ...

    ———-

    “Your Apps Know Where You Were Last Night, and They're Not Keeping It Secret” by JENNIFER VALENTINO-DeVRIES, NATASHA SINGER, MICHAEL H. KELLER and AARON KROLIK; The New York Times; 12/10/2018

    “These com­pa­nies sell, use or ana­lyze the data to cater to adver­tis­ers, retail out­lets and even hedge funds seek­ing insights into con­sumer behav­ior. It’s a hot mar­ket, with sales of loca­tion-tar­get­ed adver­tis­ing reach­ing an esti­mat­ed $21 bil­lion this year. IBM has got­ten into the indus­try, with its pur­chase of the Weath­er Channel’s apps. The social net­work Foursquare remade itself as a loca­tion mar­ket­ing com­pa­ny. Promi­nent investors in loca­tion start-ups include Gold­man Sachs and Peter Thiel, the Pay­Pal co-founder.”

    $21 bil­lion a year in loca­tion-tar­get­ed adver­tis­ing. That’s the size of the indus­try that relies on loca­tion data. So it should prob­a­bly come as no sur­prise that apps were found to be col­lect­ing data on indi­vid­u­als as often as every two sec­onds, mak­ing the deanonymiza­tion of this data triv­ial in many cas­es:

    ...
    An app on the device gath­ered her loca­tion infor­ma­tion, which was then sold with­out her knowl­edge. It record­ed her where­abouts as often as every two sec­onds, accord­ing to a data­base of more than a mil­lion phones in the New York area that was reviewed by The New York Times. While Ms. Magrin’s iden­ti­ty was not dis­closed in those records, The Times was able to eas­i­ly con­nect her to that dot.

    The app tracked her as she went to a Weight Watch­ers meet­ing and to her dermatologist’s office for a minor pro­ce­dure. It fol­lowed her hik­ing with her dog and stay­ing at her ex-boyfriend’s home, infor­ma­tion she found dis­turb­ing.

    “It’s the thought of peo­ple find­ing out those inti­mate details that you don’t want peo­ple to know,” said Ms. Magrin, who allowed The Times to review her loca­tion data.

    ...

    Busi­ness­es say their inter­est is in the pat­terns, not the iden­ti­ties, that the data reveals about con­sumers. They note that the infor­ma­tion apps col­lect is tied not to someone’s name or phone num­ber but to a unique ID. But those with access to the raw data — includ­ing employ­ees or clients — could still iden­ti­fy a per­son with­out con­sent. They could fol­low some­one they knew, by pin­point­ing a phone that reg­u­lar­ly spent time at that person’s home address. Or, work­ing in reverse, they could attach a name to an anony­mous dot, by see­ing where the device spent nights and using pub­lic records to fig­ure out who lived there.
    ...
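
    To get a sense of how little work that reverse lookup takes, here's a deliberately simple, purely illustrative Python sketch (my own toy example, with made-up coordinates, not any company's actual pipeline): filter an "anonymous" device's pings down to the overnight hours, take the centroid, and you have a point that a property-records lookup can put a name to:

        # Illustrative only: infer a device's likely home from "anonymous" pings
        # by averaging where it sits during overnight hours.
        from datetime import datetime

        # Hypothetical ping records: (ISO timestamp, latitude, longitude)
        pings = [
            ("2017-06-02T01:14:00", 40.7411, -73.9897),
            ("2017-06-02T03:40:00", 40.7410, -73.9899),
            ("2017-06-02T13:05:00", 40.7580, -73.9855),  # daytime, elsewhere
            ("2017-06-03T02:22:00", 40.7412, -73.9896),
        ]

        def likely_home(pings, night_start=22, night_end=6):
            # Keep only overnight pings, when most phones sit at their owner's home.
            night = [(lat, lon) for ts, lat, lon in pings
                     if (h := datetime.fromisoformat(ts).hour) >= night_start or h < night_end]
            # The centroid of the overnight pings approximates the home address.
            lats, lons = zip(*night)
            return sum(lats) / len(lats), sum(lons) / len(lons)

        print(likely_home(pings))  # a point that public property records can put a name to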

    And of the at least 75 companies The Times found receiving this kind of data, several claimed to track up to 200 million mobile devices in the US alone, about half of those in use last year. So for some of these location aggregators, a massive portion of all the smartphones in the US were getting tracked by them:

    ...
    At least 75 com­pa­nies receive anony­mous, pre­cise loca­tion data from apps whose users enable loca­tion ser­vices to get local news and weath­er or oth­er infor­ma­tion, The Times found. Sev­er­al of those busi­ness­es claim to track up to 200 mil­lion mobile devices in the Unit­ed States — about half those in use last year. The data­base reviewed by The Times — a sam­ple of infor­ma­tion gath­ered in 2017 and held by one com­pa­ny — reveals people’s trav­els in star­tling detail, accu­rate to with­in a few yards and in some cas­es updat­ed more than 14,000 times a day.
    ...

    Apps for Google's Android OS were particularly grabby, which is no surprise given that Android appears to be built to facilitate this kind of data collection. Even in the most recent version of Android, apps that are not in use can still collect locations “a few times an hour”:

    ...
    More than 1,000 pop­u­lar apps con­tain loca­tion-shar­ing code from such com­pa­nies, accord­ing to 2018 data from MightySig­nal, a mobile analy­sis firm. Google’s Android sys­tem was found to have about 1,200 apps with such code, com­pared with about 200 on Apple’s iOS.

    ...

    Apple and Google have a finan­cial inter­est in keep­ing devel­op­ers hap­py, but both have tak­en steps to lim­it loca­tion data col­lec­tion. In the most recent ver­sion of Android, apps that are not in use can col­lect loca­tions “a few times an hour,” instead of con­tin­u­ous­ly.

    Apple has been stricter, for exam­ple requir­ing apps to jus­ti­fy col­lect­ing loca­tion details in pop-up mes­sages. But Apple’s instruc­tions for writ­ing these pop-ups do not men­tion adver­tis­ing or data sale, only fea­tures like get­ting “esti­mat­ed trav­el times.”
    ...

    And with no federal US law limiting the collection or use of this kind of data, the market is only going to continue to explode. Especially since access to this kind of data appears to cost just pennies per user per month:

    ...
    There is no fed­er­al law lim­it­ing the col­lec­tion or use of such data. Still, apps that ask for access to users’ loca­tions, prompt­ing them for per­mis­sion while leav­ing out impor­tant details about how the data will be used, may run afoul of fed­er­al rules on decep­tive busi­ness prac­tices, said Manee­sha Mithal, a pri­va­cy offi­cial at the Fed­er­al Trade Com­mis­sion.

    “You can’t cure a mis­lead­ing just-in-time dis­clo­sure with infor­ma­tion in a pri­va­cy pol­i­cy,” Ms. Mithal said.

    ...

    Fol­low­ing the Mon­ey

    Apps form the back­bone of this new loca­tion data econ­o­my.

    The app devel­op­ers can make mon­ey by direct­ly sell­ing their data, or by shar­ing it for loca­tion-based ads, which com­mand a pre­mi­um. Loca­tion data com­pa­nies pay half a cent to two cents per user per month, accord­ing to offer let­ters to app mak­ers reviewed by The Times.

    Tar­get­ed adver­tis­ing is by far the most com­mon use of the infor­ma­tion.

    Google and Face­book, which dom­i­nate the mobile ad mar­ket, also lead in loca­tion-based adver­tis­ing. Both com­pa­nies col­lect the data from their own apps. They say they don’t sell it but keep it for them­selves to per­son­al­ize their ser­vices, sell tar­get­ed ads across the inter­net and track whether the ads lead to sales at brick-and-mor­tar stores. Google, which also receives pre­cise loca­tion infor­ma­tion from apps that use its ad ser­vices, said it mod­i­fied that data to make it less exact.

    Small­er com­pa­nies com­pete for the rest of the mar­ket, includ­ing by sell­ing data and analy­sis to finan­cial insti­tu­tions. This seg­ment of the indus­try is small but grow­ing, expect­ed to reach about $250 mil­lion a year by 2020, accord­ing to the mar­ket research firm Opi­mas.
    ...
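
    And it's worth doing the arithmetic on those per-user prices, because "pennies" at this scale is real money. A quick back-of-the-envelope calculation (my own hypothetical install base plugged into the article's price range):

        # Back-of-the-envelope: what "half a cent to two cents per user per month"
        # means for a hypothetical app with a large opted-in install base.
        users = 1_000_000                # hypothetical: one million opted-in users
        low, high = 0.005, 0.02          # dollars per user per month (article's range)
        print(f"monthly data revenue: ${users * low:,.0f} to ${users * high:,.0f}")
        # -> $5,000 to $20,000 a month just for passing coordinates along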

    And while targeted advertising is the most common use of this kind of information, keep in mind that this is a spy's dream market. For example, two different companies, Fysical and SafeGraph, mapped people attending the 2017 presidential inauguration, with Trump's phone and those around him pinging away the entire time:

    ...
    Two loca­tion firms, Fys­i­cal and Safe­Graph, mapped peo­ple attend­ing the 2017 pres­i­den­tial inau­gu­ra­tion. On Fysical’s map, a bright red box near the Capi­tol steps indi­cat­ed the gen­er­al loca­tion of Pres­i­dent Trump and those around him, cell­phones ping­ing away. Fysical’s chief exec­u­tive said in an email that the data it used was anony­mous. Safe­Graph did not respond to requests for com­ment.
    ...

    SafeGraph happens to be one of the location aggregators Peter Thiel has invested in. And given that Thiel's Palantir is exactly the kind of company that would want to use this kind of data for all sorts of intelligence purposes for its many corporate and government clients, the fact that SafeGraph was monitoring the presidential inauguration raises the question of whether it was using that data as part of some sort of security service for the government or some sort of intelligence collection service for private clients. It's a question without an obvious answer because, as the following article about SafeGraph notes, in addition to serving its many corporate clients, SafeGraph “plans to wade into some thorny business categories, like government agency work and heavily regulated spaces like city development, driverless cars and traffic management.” And Peter Thiel happens to be one of SafeGraph's early investors:

    Ad Exchang­er

    Investors From Ad Tech And Pol­i­tics Con­tribute To Safe­Graph’s $16M Series A

    by James Hercher // Wednes­day, April 19th, 2017 – 1:14 pm

    The start­up data man­age­ment plat­form Safe­Graph will emerge from an extend­ed beta peri­od Wednes­day with a $16 mil­lion Series A invest­ment and its first prod­uct, which tracks and ana­lyzes human move­ment in pub­lic places.

    Safe­Graph isn’t dis­clos­ing any ini­tial part­ners or clients, but CEO Auren Hoff­man told AdEx­chang­er the company’s busi­ness devel­op­ment plans include adver­tis­ing and mar­ket­ing use cas­es, as well as data ser­vices for urban plan­ners, gov­ern­ment agen­cies, retail­ers and real estate hold­ers.

    Safe­Graph is backed by an army of tech­nol­o­gy and pol­i­cy advis­ers.

    Hoff­man, a co-founder and for­mer CEO of Liv­eR­amp, brought on more than 30 col­leagues from Liv­eR­amp, as well as all five cor­po­rate lead­ers at Acx­iom: CEO Scott Howe, CFO War­ren Jen­son, Audi­ence Solu­tions Pres­i­dent Rick Erwin, exec­u­tive VP Jer­ry Jones and Liv­eR­amp CEO Travis May.

    Hoff­man also has back­ers from across the ad tech indus­try, includ­ing the cur­rent CEOs of App­Nexus, Media­Math, DataXu, Rubi­con Project, Pub­Mat­ic, OpenX, Krux, AppLovin, Turn, Tapad, Draw­bridge, Dat­a­logix, LiveIn­tent, Amobee and the recent­ly acquired Moat.

    App­Nexus CEO Bri­an O’Kelley and Media­Math CEO Joe Zawadz­ki both recent­ly told AdEx­chang­er that they had invest­ed in Safe­Graph with­out any clear sense of the startup’s busi­ness mod­el.

    “We all know each oth­er and have been friends and col­leagues who came up togeth­er,” O’Kelley said last month. “There def­i­nite­ly is an ad tech cabal.”

    On top of that ad tech base, Safe­Graph has sev­er­al well-placed back­ers who can help the com­pa­ny bridge into oth­er data-dri­ven cor­po­rate spheres, like Amer­i­can Express Pres­i­dent Ash Gup­ta and Star­wood Hotels CEO Bar­ry Stern­licht.

    “Some­thing start­ing with (an) ad tech back­ground can actu­al­ly ben­e­fit lots of indus­tries,” Hoff­man said, not­ing that Invite Media’s co-founders sold their dis­play adver­tis­ing tech­nol­o­gy to Google before start­ing Flat­iron Health, a data ana­lyt­ics and man­age­ment plat­form for oncol­o­gy clin­ics.

    And Safe­Graph does have plans to wade into some thorny busi­ness cat­e­gories, like gov­ern­ment agency work and heav­i­ly reg­u­lat­ed spaces like city devel­op­ment, dri­ver­less cars and traf­fic man­age­ment.

    Per­haps the most suc­cess­ful data start­up when it comes to secur­ing major gov­ern­ment data and soft­ware con­tracts is Palan­tir, whose chair­man, Peter Thiel, is also an ear­ly Safe­Graph investor. Pres­i­dent Obama’s for­mer deputy chief of staff, Mona Sut­phen, and for­mer US House Major­i­ty Leader Eric Can­tor, both Safe­Graph investors, pro­vide some across-the-aisle sup­port for any US gov­ern­ment efforts.

    ...

    ———-

    “Investors From Ad Tech And Pol­i­tics Con­tribute To Safe­Graph’s $16M Series A” by James Hercher; Ad Exchang­er; 04/19/2017

    “And Safe­Graph does have plans to wade into some thorny busi­ness cat­e­gories, like gov­ern­ment agency work and heav­i­ly reg­u­lat­ed spaces like city devel­op­ment, dri­ver­less cars and traf­fic man­age­ment.

    City devel­op­ment and gov­ern­ment agency work. Yeah, that’s going to be a ‘thorny’ issue for a loca­tion aggre­ga­tor. But it will prob­a­bly be a lot less thorny for Safe­Graph with some­one like Peter Thiel as one of its ear­ly investors:

    ...
    Per­haps the most suc­cess­ful data start­up when it comes to secur­ing major gov­ern­ment data and soft­ware con­tracts is Palan­tir, whose chair­man, Peter Thiel, is also an ear­ly Safe­Graph investor. Pres­i­dent Obama’s for­mer deputy chief of staff, Mona Sut­phen, and for­mer US House Major­i­ty Leader Eric Can­tor, both Safe­Graph investors, pro­vide some across-the-aisle sup­port for any US gov­ern­ment efforts.
    ...

    Keep in mind that we've already learned about Palantir incorporating GPS location data into the services it provides clients, when we learned about the insider-threat assessment program Palantir was running for JP Morgan, although in that case the location data came from the company cellphones JP Morgan provided its employees. But if Palantir's internal algorithms are already set up to incorporate location data, it's hard to see why the company wouldn't be utilizing the vast location data brokerage industry that's offering this data for pennies a month.

    So as we can see, the thorny issue of location aggregator companies like SafeGraph selling their data to government agencies is probably going to be a lot less thorny than it should be, due to the fact that Thiel is one of the early investors in this space and Palantir has already had so much success in these kinds of thorny commercial areas.

    The fact that there are no federal laws regulating the collection and use of this kind of data presumably helps with the thorniness.

    Posted by Pterrafractyl | January 14, 2019, 12:43 pm
  12. This Daily Mail article, based on an investigation by Motherboard, shows that about 250 bounty hunters were able to obtain precise location data for cell phones from at least three cellular providers (AT&T, T-Mobile and Sprint); one bail bond firm alone requested location data some 18,000 times. One company, CerCareOne, operated in secret from 2012 to 2017 under a confidentiality agreement requiring its paying clients to keep the service's existence confidential, letting it perform these tasks without public scrutiny. This violates users' rights and raises serious confidentiality concerns for customers of those cellular services.

    Although not stated in the article, the implications are very concerning: anyone who knows the specific whereabouts of politicians, suspected leakers or other contacts is in a position to blackmail or pressure them.

    https://www.dailymail.co.uk/sciencetech/article-6679889/Bombshell-report-finds-cellphone-carriers-sell-location-data-bounty-hunters.html

    Near­ly every major US cell­phone car­ri­er sold pre­cise loca­tion data to BOUNTY HUNTERS via a ‘secret phone track­ing ser­vice’ for years, bomb­shell report finds
    • AT&T, Sprint, T-Mobile promised to stop selling user data to location aggregators
    • But a new investigation has discovered the firms were selling location data more broadly than previously understood, with hundreds of bounty hunters using it
    • One company used ‘A-GPS’ data to locate where users are inside of a building
    By ANNIE PALMER FOR DAILYMAIL.COM
    PUBLISHED: 15:45 EST, 7 Feb­ru­ary 2019 | UPDATED: 16:11 EST, 7 Feb­ru­ary 2019

    A shock­ing new report has found hun­dreds of boun­ty hunters had access to high­ly sen­si­tive user data — and it was sold to them by almost every major U.S. wire­less car­ri­er.

    The prac­tice was first revealed last month and, at the time, tele­com firms claimed they were iso­lat­ed inci­dents. 

    How­ev­er, a Moth­er­board inves­ti­ga­tion has since dis­cov­ered that’s far from the case. About 250 boun­ty hunters were able to access users’ pre­cise loca­tion data.  

    In one case, a bail bond firm request­ed loca­tion data some 18,000 times.  

    AT&T, T-Mobile and Sprint sold the sensitive data, which was meant for use by 911 operators and emergency services, to location aggregators, who then sold it to bounty hunters, according to Motherboard.

    HOW WERE THEY ABLE TO TRACK A USER’S LOCATION? 
    Moth­er­board first report­ed how boun­ty hunters were sell­ing access to users’ real-time loca­tion data for only a few hun­dred dol­lars.

    Boun­ty hunters obtained the data from loca­tion aggre­ga­tors, who have part­ner­ships with AT&T, Sprint and T‑Mobile. 

    They were able to esti­mate and track a user’s loca­tion by look­ing at ‘pings’ from phones to near­by cell tow­ers. 

    But com­pa­nies could also col­lect assist­ed-GPS, or A‑GPS, which could even guess a user’s loca­tion inside a build­ing. 

    The com­pa­nies pledged last month to stop sell­ing users’ loca­tion data to aggre­ga­tors.  

    Loca­tion aggre­ga­tors col­lect and sell user loca­tion data, some­times to pow­er ser­vices like bank fraud pre­ven­tion and emer­gency road­side assis­tance, as well as online ads and mar­ket­ing deals, which depend on know­ing your where­abouts.  

    Moth­er­board dis­cov­ered last month that boun­ty hunters were using the data to esti­mate a user’s loca­tion by look­ing at ‘pings’ sent from phones to near­by cell tow­ers. 

    But it appears that the data was even more detailed than pre­vi­ous­ly thought.

    Cer­Care­One, a shad­owy com­pa­ny that sold loca­tion data to boun­ty hunters, even claimed to col­lect assist­ed-GPS, or A‑GPS, data. 

    This A-GPS data was able to pinpoint a person's device so accurately that it could show where they are inside a building.

    Telecom companies began collecting this data in order to give 911 operators a more accurate location for users, whether they're indoors or outdoors.

    Instead, it was being sold to aggre­ga­tors, who then sold it to bail bonds­men, boun­ty hunters, land­lords and oth­er groups.

    A bail agent in Geor­gia told Moth­er­board it was ‘sole­ly used’ to locate ‘fugi­tives who have jumped bond.’

    Nei­ther AT&T, T‑Mobile nor Sprint explic­it­ly denied sell­ing A‑GPS data, accord­ing to Moth­er­board. 

    Cer­Care­One was essen­tial­ly cloaked in secre­cy when it oper­at­ed between 2012 and 2017, requir­ing its cus­tomers to agree to ‘keep the exis­tence of CerCareOne.com con­fi­den­tial,’ Moth­er­board said. 

    The com­pa­ny often charged up to $1,100 every time a cus­tomer request­ed a user’s loca­tion data.  

    Cer­Care­One said it required clients to obtain writ­ten con­sent if they want­ed to track a user, but Moth­er­board found that sev­er­al users received no warn­ing they were being tracked, result­ing in the prac­tice often occur­ring with­out their knowl­edge or agree­ment. 

    While CerCareOne is no longer operational, its prior existence and use by location aggregators raise serious concerns about how users' data is being utilized by these companies.

    AT&T and oth­er tele­coms sought to min­i­mize the use of Cer­Care­One. 

    ‘We are not aware of any mis­use of this ser­vice which end­ed two years ago,’ the firm told Moth­er­board. 

    ‘We’ve already decid­ed to elim­i­nate all loca­tion aggre­ga­tion services—including those with clear con­sumer benefits—after reports of mis­use by oth­er loca­tion ser­vices involv­ing aggre­ga­tors.’

    At least 15 U.S. sen­a­tors have urged the FCC and the FTC to take action on shad­owy data bro­ker busi­ness­es, accord­ing to Moth­er­board. 

    ‘This scan­dal keeps get­ting worse,’ Demo­c­ra­t­ic U.S. Sen­a­tor Ron Wyden told Moth­er­board. 

    ‘Car­ri­ers assured cus­tomers loca­tion track­ing abus­es were iso­lat­ed inci­dents. Now it appears that hun­dreds of peo­ple could track our phones, and they were doing it for years before any­one at the wire­less com­pa­nies took action. 

    ‘That’s more than an over­sight — that’s fla­grant, wil­ful dis­re­gard for the safe­ty and secu­ri­ty of Amer­i­cans,’ he added.  

    Posted by Mary Benton | February 8, 2019, 3:44 pm
  13. Here's an alarming update on the growing sophistication of ‘deepfake’ video technology: Samsung just figured out how to create deepfake videos using a single photo of a person. Previously, deepfake software required a large number of photos of someone from different angles, but with Samsung's approach a single photo is all that is required. Samsung literally deepfaked the Mona Lisa as a demonstration.

    The new approach isn't perfect when only a single photo is available and will still generate small noticeable errors. But as the article notes, those errors may not really matter for the propaganda purposes of this technology, because propaganda doesn't necessarily have to be realistic to be effective. That was underscored over the past day, when a crudely altered video of House Speaker Nancy Pelosi, slowed down to make her look drunk, went viral on social media and was treated as real:

    CNet

    Sam­sung deep­fake AI could fab­ri­cate a video of you from a sin­gle pro­file pic

    Even the Mona Lisa can be faked.

    By Joan E. Sols­man

    May 24, 2019 7:00 AM PDT

    Imag­ine some­one cre­at­ing a deep­fake video of you sim­ply by steal­ing your Face­book pro­file pic. The bad guys don’t have their hands on that tech yet, but Sam­sung has fig­ured out how to make it hap­pen.

    Soft­ware for cre­at­ing deep­fakes — fab­ri­cat­ed clips that make peo­ple appear to do or say things they nev­er did — usu­al­ly requires big data sets of images in order to cre­ate a real­is­tic forgery. Now Sam­sung has devel­oped a new arti­fi­cial intel­li­gence sys­tem that can gen­er­ate a fake clip by feed­ing it as lit­tle as one pho­to.

    The tech­nol­o­gy, of course, can be used for fun, like bring­ing a clas­sic por­trait to life. The Mona Lisa, which exists sole­ly as a sin­gle still image, is ani­mat­ed in three dif­fer­ent clips to demon­strate the new tech­nol­o­gy. A Sam­sung arti­fi­cial intel­li­gence lab in Rus­sia devel­oped the tech­nol­o­gy, which was detailed in a paper ear­li­er this week.

    Here’s the down­side: These kinds of tech­niques and their rapid devel­op­ment also cre­ate risks of mis­in­for­ma­tion, elec­tion tam­per­ing and fraud, accord­ing to Hany Farid, a Dart­mouth researcher who spe­cial­izes in media foren­sics to root out deep­fakes.

    When even a crude­ly doc­tored video of US Speak­er of the House Nan­cy Pelosi can go viral on social media, deep­fakes raise wor­ries that their sophis­ti­ca­tion would make mass decep­tion eas­i­er, since deep­fakes are hard­er to debunk.

    “Fol­low­ing the trend of the past year, this and relat­ed tech­niques require less and less data and are gen­er­at­ing more and more sophis­ti­cat­ed and com­pelling con­tent,” Farid said. Even though Sam­sung’s process can cre­ate visu­al glitch­es, “these results are anoth­er step in the evo­lu­tion of tech­niques ... lead­ing to the cre­ation of mul­ti­me­dia con­tent that will even­tu­al­ly be indis­tin­guish­able from the real thing.”

    Like Pho­to­shop for video on steroids, deep­fake soft­ware pro­duces forg­eries by using machine learn­ing to con­vinc­ing­ly fab­ri­cate a mov­ing, speak­ing human. Though com­put­er manip­u­la­tion of video has exist­ed for decades, deep­fake sys­tems have made doc­tored clips not only eas­i­er to cre­ate but also hard­er to detect. Think of them as pho­to-real­is­tic dig­i­tal pup­pets.

    Lots of deep­fakes, like the one ani­mat­ing the Mona Lisa, are harm­less fun. The tech­nol­o­gy has made pos­si­ble an entire genre of memes, includ­ing one in which Nico­las Cage’s face is placed into movies and TV shows he was­n’t in. But deep­fake tech­nol­o­gy can also be insid­i­ous, such as when it’s used to graft an unsus­pect­ing per­son­’s face into explic­it adult movies, a tech­nique some­times used in revenge porn.

    In its paper, Sam­sung’s AI lab dubbed its cre­ations “real­is­tic neur­al talk­ing heads.” The term “talk­ing heads” refers to the genre of video the sys­tem can cre­ate; it’s sim­i­lar to those video box­es of pun­dits you see on TV news. The word “neur­al” is a nod to neur­al net­works, a type of machine learn­ing that mim­ics the human brain.

    The researchers saw their break­through being used in a host of appli­ca­tions, includ­ing video games, film and TV. “Such abil­i­ty has prac­ti­cal appli­ca­tions for telep­res­ence, includ­ing video­con­fer­enc­ing and mul­ti-play­er games, as well as spe­cial effects indus­try,” they wrote.

    The paper was accom­pa­nied by a video show­ing off the team’s cre­ations, which also hap­pened to be scored with a dis­con­cert­ing­ly chill-vibes sound­track.

    Usu­al­ly, a syn­the­sized talk­ing head requires you to train an arti­fi­cial intel­li­gence sys­tem on a large data set of images of a sin­gle per­son. Because so many pho­tos of an indi­vid­ual were need­ed, deep­fake tar­gets have usu­al­ly been pub­lic fig­ures, such as celebri­ties and politi­cians.

    The Sam­sung sys­tem uses a trick that seems inspired by Alexan­der Gra­ham Bel­l’s famous quote about prepa­ra­tion being the key to suc­cess. The sys­tem starts with a lengthy “meta-learn­ing stage” in which it watch­es lots of videos to learn how human faces move. It then applies what it’s learned to a sin­gle still or a small hand­ful of pics to pro­duce a rea­son­ably real­is­tic video clip.

    Unlike a true deep­fake video, the results from a sin­gle or small num­ber of images end up fudg­ing fine details. For exam­ple, a fake of Mar­i­lyn Mon­roe in the Sam­sung lab’s demo video missed the icon’s famous mole. It also means the syn­the­sized videos tend to retain some sem­blance of who­ev­er played the role of the dig­i­tal pup­pet, accord­ing to Siwei Lyu, a com­put­er sci­ence pro­fes­sor at the Uni­ver­si­ty at Albany in New York who spe­cial­izes in media foren­sics and machine learn­ing. That’s why each of the mov­ing Mona Lisa faces looks like a slight­ly dif­fer­ent per­son.

    Gen­er­al­ly, a deep­fake sys­tem aims at elim­i­nat­ing those visu­al hic­cups. That requires mean­ing­ful amounts of train­ing data for both the input video and the tar­get per­son.

    The few-shot or one-shot aspect of this approach is use­ful, Lyu said, because it means a large net­work can be trained on a large num­ber of videos, which is the part that takes a long time. This kind of sys­tem can then quick­ly adapt to a new tar­get per­son using only a few images with­out exten­sive retrain­ing, he said. “This saves time in con­cept and makes the mod­el gen­er­al­iz­able.”

    The rapid advance­ment of arti­fi­cial intel­li­gence means that any time a researcher shares a break­through in deep­fake cre­ation, bad actors can begin scrap­ing togeth­er their own jury-rigged tools to mim­ic it. Sam­sung’s tech­niques are like­ly to find their way into more peo­ple’s hands before long.

    ...

    ———

    “Sam­sung deep­fake AI could fab­ri­cate a video of you from a sin­gle pro­file pic” by Joan E. Sols­man; CNet; 05/24/2019

    “Soft­ware for cre­at­ing deep­fakes — fab­ri­cat­ed clips that make peo­ple appear to do or say things they nev­er did — usu­al­ly requires big data sets of images in order to cre­ate a real­is­tic forgery. Now Sam­sung has devel­oped a new arti­fi­cial intel­li­gence sys­tem that can gen­er­ate a fake clip by feed­ing it as lit­tle as one pho­to.

    Just one pho­to. That’s the new thresh­old for a deep­fake, albeit a some­what crude deep­fake. But when a crude­ly doc­tored video of House Speak­er Nan­cy Pelosi can go viral, it’s hard to see why crude­ly done deep­fakes aren’t going to have sim­i­lar pro­pa­gan­da val­ue:

    ...
    Here’s the down­side: These kinds of tech­niques and their rapid devel­op­ment also cre­ate risks of mis­in­for­ma­tion, elec­tion tam­per­ing and fraud, accord­ing to Hany Farid, a Dart­mouth researcher who spe­cial­izes in media foren­sics to root out deep­fakes.

    When even a crude­ly doc­tored video of US Speak­er of the House Nan­cy Pelosi can go viral on social media, deep­fakes raise wor­ries that their sophis­ti­ca­tion would make mass decep­tion eas­i­er, since deep­fakes are hard­er to debunk.
    ...
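
    As for how Samsung got the photo requirement down to one, recall the recipe the article describes: a long "meta-learning" stage over many videos of many people to learn how human faces move in general, followed by a quick adaptation to a new face from as little as a single still. Here's a purely structural sketch of that two-stage flow (stand-in arithmetic in place of the actual neural networks; nothing here is Samsung's code):

        import numpy as np

        rng = np.random.default_rng(0)

        def embedder(frames):
            # Stand-in for the network that distills a target's appearance
            # from reference frames into a compact identity vector.
            return frames.mean(axis=0)

        def generator(identity, pose):
            # Stand-in for the network that renders the target posed
            # according to a driving face's landmarks.
            return np.outer(pose, identity)

        # Stage 1: meta-learning. Train on many videos of many different people
        # so the system learns generic facial motion (slow, done once).
        for _ in range(1000):
            frames = rng.normal(size=(32, 64))   # stand-in video frames
            pose = rng.normal(size=16)           # stand-in driving pose
            rendered = generator(embedder(frames), pose)
            # ...compare `rendered` to a real held-out frame and update weights...

        # Stage 2: few-shot adaptation. One photo of a new target is enough,
        # because the general model of facial motion already exists.
        one_photo = rng.normal(size=(1, 64))
        puppet_frame = generator(embedder(one_photo), rng.normal(size=16))
        print(puppet_frame.shape)                # one synthesized "frame"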

    And keep in mind that for someone like Nancy Pelosi, there's no need for a deepfake to be based on a single photo. There's plenty of footage of her available for anyone who wants to create a more realistic deepfake of her. And as the following article makes clear, when those realistic deepfakes are created we should have absolutely no expectation that the right-wing media won't fully jump on board promoting them as real. Because it turns out that blatantly doctored video of Nancy Pelosi didn't simply go viral on social media. It was also aggressively pushed by President Trump and Fox News. Beyond that, the timing of the doctored video is rather suspicious: On Thursday, Trump held a press conference where he claimed that Pelosi was “deteriorating” and had “lost it”. It was right around this time that the doctored video started getting disseminated on social media. Then, by Thursday evening, after the video had already been identified as doctored, Trump proxies Rudy Giuliani and Corey Lewandowski were pushing the meme that Pelosi is mentally losing it, with Lewandowski appearing on Fox Business and Giuliani actually tweeting out the fake video. Trump also retweeted a Fox Business compilation of Pelosi tripping over her words during a press conference. So we appear to be looking at a coordinated GOP effort to knowingly use a fake video to score political points:

    Talk­ing Points Memo
    News

    Team Trump Tries To Weaponize Decep­tive­ly Edit­ed Videos Of Nan­cy Pelosi

    By Kate Riga
    May 24, 2019 9:26 am

    Pres­i­dent Don­ald Trump and his lawyer, Rudy Giu­liani, tweet­ed out videos Thurs­day in an attempt to make House Speak­er Nan­cy Pelosi (D‑CA) seem slow and addled.

    The strat­e­gy to depict Pelosi as senile began with Trump’s press con­fer­ence Thurs­day, a day after the scut­tled infra­struc­ture meet­ing, when he said that she’s “dete­ri­o­rat­ing” and has “lost it.”

    Around the same time, a doc­tored video of Pelosi with the audio slowed to make it seem like she was drunk­en­ly slur­ring her words start­ed dis­sem­i­nat­ing through dif­fer­ent social media plat­forms. That video has racked up mil­lions of views.

    Trump’s allies coa­lesced behind the strat­e­gy Thurs­day evening, tak­ing to Fox to fur­ther paint Pelosi as decrepit and men­tal­ly lack­ing.

    Trump tweet­ed out a com­pi­la­tion from Fox Busi­ness that night that spliced togeth­er clips of Pelosi stum­bling over her words at a news con­fer­ence. The hosts of the seg­ment debat­ed Pelosi’s men­tal acu­ity.

    More here. This is the pas­sage Trump just tweet­ed. Same show, push­ing the line that Pelosi is senile. Pret­ty obvi­ous­ly feed­ing off these faked videos the Post uncov­ered today. pic.twitter.com/EziVX7vj5y— Josh Mar­shall (@joshtpm) May 24, 2019

    Giu­liani opt­ed for the doc­tored video, tweet­ing, “What’s wrong with Nan­cy Pelosi? Her speech pat­tern is bizarre.” He delet­ed the tweet but told a Wash­ing­ton Post reporter that he’d noticed a men­tal decline.

    Text from @RudyGiuliani on why he tweeted/deleted dis­tort­ed Pelosi video: “Thought I delete right away. Some­one raised ques­tion about it and since I was­n’t sure I delet­ed it. But I have been notic­ing a grad­ual change in her speech pat­tern and ges­tures for some­time”— Drew Har­well (@drewharwell) May 24, 2019

    He addressed the tweet again Fri­day morn­ing (we think?) with a fair­ly inde­ci­pher­able tweet.

    ivess­s­apol­o­gy for a video which is alleged­ly is a car­i­ca­ture of an oth­er­wise halt­ing speech pat­tern, she should first stop, and apol­o­gize for, say­ing the Pres­i­dent needs an “inter­ven­tion.” Are pic.twitter.com/ZpEO7iRzV8— Rudy Giu­liani (@RudyGiuliani) May 24, 2019

    For­mer Trump cam­paign man­ag­er Corey Lewandows­ki also referred to Pelosi “slur­ring” her words on Fox Busi­ness Net­work Thurs­day night:

    Sur­prise sur­prise @CLewandowski_ already on the Lou Dobbs show push­ing those doc­tored Pelosi videos first debunked today by WaPo https://t.co/0RelwLI27U pic.twitter.com/PmMCmQZGZA— Josh Mar­shall (@joshtpm) May 23, 2019

    ...

    ———–

    “Team Trump Tries To Weaponize Decep­tive­ly Edit­ed Videos Of Nan­cy Pelosi” by Kate Riga; Talk­ing Points Memo; 05/24/2019

    “Around the same time, a doctored video of Pelosi with the audio slowed to make it seem like she was drunkenly slurring her words started disseminating through different social media platforms. That video has racked up millions of views.”

    Yep, around the same time Trump gives a press conference where he asserts that Pelosi is “deteriorating” and has “lost it”, we find this fake video suddenly getting pushed on social media, and then Trump proxies show up on Fox News and Fox Business Thursday evening to continue pushing the meme. Giuliani even pushed the doctored video on Twitter.

    And as the fol­low­ing arti­cle notes, this meme WAS THE MAIN TOPIC on Lau­ra Ingra­ham’s prime time Fox News evening pro­gram and Lou Dobbs’ Fox Busi­ness show. This was­n’t just a joke Fox was briefly cov­er­ing. It was a cen­tral prime-time top­ic Fox was tak­ing seri­ous­ly:

    The Week

    Some­one’s doc­tor­ing video of Nan­cy Pelosi, and Trump and his allies seem very into it

    Peter Weber
    05/24/2019 3:06 a.m.

    Pres­i­dent Trump and House Speak­er Nan­cy Pelosi (D‑Calif.) are in a pub­lic spat, fol­low­ing Wednes­day’s very short infra­struc­ture meet­ing that Pelosi called a Trump “tem­per tantrum” on Thurs­day and Trump insist­ed was­n’t, ask­ing five total­ly objec­tive aides to vouch for him. Trump called Pelosi a “mess” and “crazy,” say­ing she’s “not the same per­son” while he’s an “extreme­ly sta­ble genius.” Pelosi respond­ed:

    When the “extreme­ly sta­ble genius” starts act­ing more pres­i­den­tial, I’ll be hap­py to work with him on infra­struc­ture, trade and oth­er issues. https://t.co/tfWVkj9CLT— Nan­cy Pelosi (@SpeakerPelosi) May 23, 2019

    But Trump’s sug­ges­tion that Pelosi, 79 — six years old­er than him­self — is get­ting too old for her job seems part of a larg­er cam­paign. It was a main theme Thurs­day night on Lau­ra Ingra­ham’s Fox News pro­gram and Lou Dobbs’ show on Fox Busi­ness — Trump tweet­ed a Dobbs clip fea­tur­ing selec­tive­ly edit­ed video of Pelosi, plus GOP strate­gist Ed Rollins say­ing Pelosi appears addled by age. Trump loy­al­ist Corey Lewandows­ki was also on Dobbs, allud­ing to a dif­fer­ent, doc­tored video of Pelosi spread­ing around the inter­net. Trump’s lawyer Rudy Giu­liani tweet­ed, then delet­ed, that video Thurs­day night, with the com­ment: “What is wrong with Nan­cy Pelosi? Her speech pat­tern is bizarre.”

    It isn't clear who originally manipulated the Pelosi video, now widely viewed and shared on social media, but The Washington Post and outside researchers determined the video was slowed to about 75 percent of its original speed, then edited so Pelosi's voice is roughly the right pitch.

    “The altered video’s dis­sem­i­na­tion high­lights the sub­tle way that viral mis­in­for­ma­tion could shape pub­lic per­cep­tions in the run-up to the 2020 elec­tion,” the Post warns. “Even sim­ple, crude manip­u­la­tions can be used to under­mine an oppo­nent or score polit­i­cal points.”

    ...

    ———–

    “Some­one’s doc­tor­ing video of Nan­cy Pelosi, and Trump and his allies seem very into it” by Peter Weber; The Week; 05/24/2019

    “But Trump’s sug­ges­tion that Pelosi, 79 — six years old­er than him­self — is get­ting too old for her job seems part of a larg­er cam­paign. It was a main theme Thurs­day night on Lau­ra Ingra­ham’s Fox News pro­gram and Lou Dobbs’ show on Fox Busi­ness — Trump tweet­ed a Dobbs clip fea­tur­ing selec­tive­ly edit­ed video of Pelosi, plus GOP strate­gist Ed Rollins say­ing Pelosi appears addled by age. Trump loy­al­ist Corey Lewandows­ki was also on Dobbs, allud­ing to a dif­fer­ent, doc­tored video of Pelosi spread­ing around the inter­net. Trump’s lawyer Rudy Giu­liani tweet­ed, then delet­ed, that video Thurs­day night, with the com­ment: “What is wrong with Nan­cy Pelosi? Her speech pat­tern is bizarre.””

    Fox News's main theme was a big, blatant lie. On one level, that's just a typical evening of Fox programming. But the fact that an aggressive propaganda campaign was built on an easily falsifiable video, and grew even more aggressive after the media debunked that video, demonstrates exactly the kind of mentality that will find the use of deepfakes irresistible. Also keep in mind that, as we've seen in elections in Brazil and India, encrypted messaging apps like WhatsApp are increasingly the medium of choice for disseminating misinformation, and it's extremely difficult to identify and respond to disinformation spread over these encrypted platforms. Will 2020 be the first deepfake election for the United States? We'll find out soon, but the fact that American politics is already marinating in deepfake versions of reality suggests that the only thing preventing deepfakes from being used in politics is access to the technology. Technology that's advancing so fast even the Mona Lisa can be deepfaked.
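
    A side note on just how low-tech that forgery was: slowing a clip to about 75 percent of its original speed also drops the audio pitch by nearly five semitones, which is why the video had to be pitch-corrected to keep Pelosi's voice sounding roughly right. The arithmetic (my own, based on the slowdown factor the Post reported):

        # Slowing playback to 75% of original speed lowers pitch by the same ratio.
        # Semitone shift = 12 * log2(speed factor); the forger had to undo this.
        import math

        speed = 0.75
        print(f"pitch shift: {12 * math.log2(speed):.2f} semitones")  # about -4.98
        print(f"pitch correction ratio: {1 / speed:.4f}")             # ~1.3333x to compensate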

    Posted by Pterrafractyl | May 24, 2019, 12:27 pm
  14. With a vote on impeachment slated for tomorrow in the House of Representatives, largely over a scheme to extort the Ukrainian government into ginning up an investigation into Joe Biden, and with Rudy Giuliani now claiming he has a whole new round of dubiously sourced criminal allegations against the Bidens, it's pretty clear that political dirty tricks, hit jobs, and smears are going to remain central to the GOP's political strategy for the 2020 election.

    But as the fol­low­ing arti­cle reminds us, there’s no rea­son to assume those dirty tricks, hit jobs, and smears have to rely on fig­ures like Giu­liani soliciting/extorting ‘evi­dence’ from for­eign gov­ern­ments and oli­garchs. In the age of ‘deep­fake’ videos, the dirty trick smear can be gen­er­at­ed by some anony­mous oper­a­tive with a sim­ple appli­ca­tion of deep­fake soft­ware. All you need is some footage of your tar­get.

    And that's where the new technology described in the following article comes in. Computer scientist Hany Farid is working on an algorithm for detecting deepfakes. Critically, the algorithm promises to allow for rapid detection of fakes: deepfakes can go viral within hours, so rapid identification of fake video is crucial to minimizing the damage to elections.

    Unfortunately, it sounds like Farid's system can only detect deepfakes of figures for whom a large amount of real footage already exists, but political candidates tend to have plenty of existing footage, so the system can potentially work for protecting elections. It works by analyzing hours of real footage of someone, identifying subtle verbal and facial tics, and then looking for those quirks in the potentially fake video. For example, Elizabeth Warren has a habit of moving her head from left to right, so Farid's software first learns that subtle signature head motion from the hours of real Elizabeth Warren footage and then tries to spot that same unique quirk in the footage being tested. Farid calls these tics “soft biometrics”.
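
    To make the "soft biometrics" idea concrete, here's a minimal, hypothetical sketch (my own illustration, not Farid's actual algorithm): reduce a head-motion signal to a couple of summary statistics, build a profile from hours of authenticated footage, and flag suspect clips whose profile sits too far away. In practice the yaw angles would come from a face-landmark tracker run over the video; here they're synthetic:

        import numpy as np

        def motion_profile(yaw_degrees_per_frame):
            # Summarize a head-yaw time series: how widely and how restlessly
            # the speaker swings left to right while talking.
            y = np.asarray(yaw_degrees_per_frame, dtype=float)
            return np.array([y.std(), np.abs(np.diff(y)).mean()])

        def inconsistency(reference_yaw, suspect_yaw):
            # Distance between the profile built from authenticated footage and
            # the suspect clip's profile; larger means less like the real person.
            return float(np.linalg.norm(motion_profile(reference_yaw) -
                                        motion_profile(suspect_yaw)))

        rng = np.random.default_rng(1)
        authentic = rng.normal(0.0, 8.0, size=10_000)  # wide, frequent head swings
        suspect = rng.normal(0.0, 2.0, size=600)       # a stiffer digital puppet

        print(inconsistency(authentic, suspect))       # large value -> flag for review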

    Will Farid's software be ready in time for 2020? Let's hope so, but as the growing number of deepfake stories has already made clear, deepfake technology will definitely be ready in time for 2020 because it's already here. So the deepfakes are coming in 2020 whether we're ready or not:

    NBC News

    How do you spot a deep­fake? A clue hides with­in our voic­es, researchers say.
    New tech­nol­o­gy is meant to work quick­ly out of fear that a viral deep­fake will need almost imme­di­ate iden­ti­fi­ca­tion and response.

    Dec. 16, 2019, 5:11 PM CST
    By David Ingram and Jacob Ward

    Hany Farid is well versed in how Sen. Eliz­a­beth War­ren, D‑Mass., speaks.

    “So, Sen. War­ren has a habit of mov­ing her head left and right quite a bit,” said Farid, a com­put­er sci­en­tist at the Uni­ver­si­ty of Cal­i­for­nia, Berke­ley.

    Farid’s exper­tise comes from his hours work­ing with authen­tic videos of War­ren and oth­er politi­cians in hope that tech­nol­o­gy can pre­vent the spread of fake videos — the com­put­er-gen­er­at­ed “deep­fakes” that are seen as the new fron­tier in online mis­in­for­ma­tion.

    For­mer Pres­i­dent Barack Oba­ma has his own quirks.

    “Pres­i­dent Oba­ma had this thing that when he deliv­ered bad news, he would frown and he would tilt his head down­ward a lit­tle bit. Very char­ac­ter­is­tic,” Farid said. “They all have their own lit­tle tells.”

    Farid is part of a team of researchers at the Uni­ver­si­ty of Cal­i­for­nia, Berke­ley who have devel­oped a com­put­er algo­rithm that they say may help to detect deep­fakes. It’s meant to work quick­ly out of fear that a viral deep­fake will need almost imme­di­ate iden­ti­fi­ca­tion and response.

    “What we are con­cerned about is a video of a can­di­date, 24-to-48 hours before elec­tion night,” Farid said. “It gets viral in a mat­ter of min­utes, and you have a dis­rup­tion of a glob­al elec­tion. You can steal an elec­tion.”

    Deepfake videos made with artificial intelligence can be a powerful force because they make it appear that someone did or said something that they never did, altering how the viewers see politicians, corporate executives, celebrities and other public figures. The tools necessary to make these videos are available online, with some people making celebrity mashups and one app offering to insert users' faces into famous movie scenes.

    ...

    “It’s not real­ly pos­si­ble to stop the tech­nol­o­gy. We are in a race between the arti­fi­cial intel­li­gence to detect it and the arti­fi­cial intel­li­gence to per­fect it,” House Intel­li­gence Com­mit­tee Chair­man Adam Schiff, D‑Calif., told the “TODAY” show in June.

    In Octo­ber, Sens. Mar­co Rubio, R‑Fla., and Mark Warn­er, D‑Va., sent a let­ter to social media com­pa­nies urg­ing them to come up with clear poli­cies about deep­fakes.

    It can be a chal­lenge to deter­mine whether any one video has been manip­u­lat­ed, but Farid’s team is try­ing an approach based on spot­ting sub­tle pat­terns in how peo­ple speak. By feed­ing hours of video into their sys­tem, Farid said they’ve been able to iden­ti­fy dis­tinc­tive ver­bal and facial tics — known as soft bio­met­rics — for Pres­i­dent Don­ald Trump and var­i­ous Demo­c­ra­t­ic pres­i­den­tial can­di­dates.

    The pow­er of dis­tort­ed video was under­scored in May, when a doc­tored video sur­faced online appear­ing to show House Speak­er Nan­cy Pelosi, D‑Calif., hav­ing trou­ble speak­ing. The video was rel­a­tive­ly sim­ple in its dis­tor­tion com­pared to deep­fakes made with arti­fi­cial intel­li­gence, but the speed with which it spread made it clear how unpre­pared some tech com­pa­nies, news orga­ni­za­tions and con­sumers were for the new tech­nol­o­gy.

    Oth­er deep­fakes have also gained trac­tion, though some have helped edu­cate the pub­lic on the exis­tence of deep­fake tech­nol­o­gy. Actor Jor­dan Peele por­trayed Oba­ma in a doc­tored video that spread wide­ly last year, and more recent exam­ples of deep­fakes include ones involv­ing actor and politi­cian Arnold Schwarzeneg­ger and Face­book CEO Mark Zucker­berg.

    ...

    Pelosi accused Face­book, in par­tic­u­lar, of “lying to the pub­lic” after the social media com­pa­ny said it would leave up the dis­tort­ed video of her, though the com­pa­ny said it would reduce how often the video appeared in peo­ple’s news feeds.

    Now, Face­book is help­ing to fund labs like Farid’s to get aca­d­e­m­ic help on the prob­lem of iden­ti­fy­ing deep­fakes. This month, Face­book said it was pro­vid­ing researchers with a new, unique data set of 100,000-plus videos to aid research on deep­fakes.

    Farid has a track record of push­ing tech com­pa­nies to do more to remove prob­lem­at­ic mate­r­i­al. In a pre­vi­ous project to fight online extrem­ism, he devel­oped soft­ware for social media ser­vices to use to flag images and oth­er posts sup­port­ing the Islam­ic State mil­i­tant group and oth­ers.

    With the research into deep­fakes, Farid’s algo­rithm is built to help researchers and news orga­ni­za­tions such as NBC News spot deep­fakes of can­di­dates in the 2020 pres­i­den­tial race. They’ll be able to upload videos via an online por­tal, he said.

    “The hope is when videos start get­ting released, we can say, ‘This is con­sis­tent; this is incon­sis­tent,’ ” he said.

    ———–

    “How do you spot a deep­fake? A clue hides with­in our voic­es, researchers say.” by David Ingram and Jacob Ward; NBC News; 12/16/2019

    ““What we are con­cerned about is a video of a can­di­date, 24-to-48 hours before elec­tion night,” Farid said. “It gets viral in a mat­ter of min­utes, and you have a dis­rup­tion of a glob­al elec­tion. You can steal an elec­tion.”

    A mat­ter of min­utes. That’s all it takes for videos to go viral. And if it’s an embar­rass­ing or damn­ing fake video of a pres­i­den­tial can­di­date in a com­pro­mis­ing posi­tion released at the last minute right before vot­ing, that’s poten­tial­ly enough to steal an elec­tion in a mat­ter of min­utes. So whether or not we see an elec­tion stolen with deep­fakes might depend on how obvi­ous the “soft bio­met­ric” tells are for the even­tu­al nom­i­nee. Will the even­tu­al Demo­c­ra­t­ic nom­i­nee have a dis­tinct enough style of talk­ing to thwart deep­fakes? Let’s hope so. But accord­ing to Farid, every­one has their lit­tle tells, so hope­ful­ly this real­ly is an approach that can work for vir­tu­al­ly any pub­lic fig­ure:

    ...
    “So, Sen. War­ren has a habit of mov­ing her head left and right quite a bit,” said Farid, a com­put­er sci­en­tist at the Uni­ver­si­ty of Cal­i­for­nia, Berke­ley.

    Farid’s exper­tise comes from his hours work­ing with authen­tic videos of War­ren and oth­er politi­cians in hope that tech­nol­o­gy can pre­vent the spread of fake videos — the com­put­er-gen­er­at­ed “deep­fakes” that are seen as the new fron­tier in online mis­in­for­ma­tion.

    For­mer Pres­i­dent Barack Oba­ma has his own quirks.

    “Pres­i­dent Oba­ma had this thing that when he deliv­ered bad news, he would frown and he would tilt his head down­ward a lit­tle bit. Very char­ac­ter­is­tic,” Farid said. “They all have their own lit­tle tells.”

    ...

    It can be a chal­lenge to deter­mine whether any one video has been manip­u­lat­ed, but Farid’s team is try­ing an approach based on spot­ting sub­tle pat­terns in how peo­ple speak. By feed­ing hours of video into their sys­tem, Farid said they’ve been able to iden­ti­fy dis­tinc­tive ver­bal and facial tics — known as soft bio­met­rics — for Pres­i­dent Don­ald Trump and var­i­ous Demo­c­ra­t­ic pres­i­den­tial can­di­dates.
    ...

    Also recall that when the distorted video of Nancy Pelosi went viral, the problem wasn’t just that Facebook refused to pull the video. The problem was also that the Trump team was aggressively promoting the video as if it were real and began pushing the meme that Nancy Pelosi was getting too old to do her job. So earlier this year we had the Trump team actively and openly exploiting a fake video against a political rival:

    ...
    The pow­er of dis­tort­ed video was under­scored in May, when a doc­tored video sur­faced online appear­ing to show House Speak­er Nan­cy Pelosi, D‑Calif., hav­ing trou­ble speak­ing. The video was rel­a­tive­ly sim­ple in its dis­tor­tion com­pared to deep­fakes made with arti­fi­cial intel­li­gence, but the speed with which it spread made it clear how unpre­pared some tech com­pa­nies, news orga­ni­za­tions and con­sumers were for the new tech­nol­o­gy.

    ...

    Pelosi accused Face­book, in par­tic­u­lar, of “lying to the pub­lic” after the social media com­pa­ny said it would leave up the dis­tort­ed video of her, though the com­pa­ny said it would reduce how often the video appeared in peo­ple’s news feeds.
    ...

    But it’s also going to be important to keep in mind that, even if Farid’s system works for public figures, it’s not foolproof. After all, it sounds a lot like the ‘pattern recognition’ approach taken to assigning responsibility for computer hacks. As we’ve seen, that pattern-recognition approach is unfortunately vulnerable to spoofing, where someone who knows the expected ‘clues’ of a Russian hack can intentionally leave those ‘clues’ behind to mislead security analysts. As long as there’s a known pattern for people to look for, it’s hard to see what’s to stop bad actors from trying to match those known patterns, including with deepfakes. After all, what’s to stop the Trump team from getting its hands on software similar to Farid’s, running it to identify the distinct verbal and facial tics of the Democratic nominee, and then hiring an impersonator to make deepfakes that include those personal ‘tells’?
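    To make the ‘soft biometric’ idea concrete, here’s a minimal sketch of how such a check could work in principle. To be clear, this is not Farid’s actual system; it just assumes you already have per-frame head-yaw angles extracted from video by some face-tracking tool, builds a movement profile from authentic footage, and scores how far a suspect clip deviates from that profile:

    ```python
    import statistics

    def build_profile(authentic_clips):
        """Build a toy 'soft biometric' profile from authentic footage.
        Each clip is a list of per-frame head-yaw angles (degrees)."""
        deltas = [abs(b - a)
                  for clip in authentic_clips
                  for a, b in zip(clip, clip[1:])]
        return statistics.mean(deltas), statistics.stdev(deltas)

    def suspicion_score(profile, suspect_clip):
        """Z-score of the suspect clip's head movement against the profile."""
        mean, stdev = profile
        deltas = [abs(b - a) for a, b in zip(suspect_clip, suspect_clip[1:])]
        return abs(statistics.mean(deltas) - mean) / stdev

    # Toy data: a speaker who habitually moves their head a lot...
    authentic = [[0, 3, 6, 2, -1, 4], [1, 5, 2, 6, 3, 0]]
    profile = build_profile(authentic)

    # ...versus a hypothetical fake with an unnaturally still head.
    fake = [0, 0.2, 0.1, 0.3, 0.2, 0.1]
    print(f"suspicion z-score: {suspicion_score(profile, fake):.1f}")
    ```

    Note the double edge: the same profile that flags a fake also tells a would-be spoofer exactly which statistics an impersonator needs to match.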

    So while it’s great to hear that peo­ple are work­ing on tools to pre­vent bad actors from steal­ing elec­tions with the strate­gic use of deep­fakes, keep in mind that the bad actors are going to have their own tricks. Tricks that include hir­ing lit­er­al bad actors. ‘Bad’ in the moral sense. They’ll pre­sum­ably have to be pret­ty good at act­ing to trick the deep­fake detec­tion sys­tems.

    Posted by Pterrafractyl | December 17, 2019, 5:02 pm
  15. Have we final­ly entered the era of deep­fake pol­i­tics? That’s the dis­turb­ing prospect raised by a sto­ry that, on its sur­face, appears to be kind of a bad joke:
    The New York Post published a story purportedly based on the contents of three laptops Hunter Biden dropped off at a computer repair shop in April of 2019, never paying for the service and never trying to pick them up again. The laptops apparently contained incriminating emails demonstrating that Joe Biden met with a Burisma executive in 2015, something that wouldn’t actually be scandalous if it happened and would instead be entirely consistent with US foreign policy towards Ukraine at that point. As we should expect, much of the right-wing media hoopla over this has focused on that aspect of the story to try to suggest that it validates all of the various Burisma-related allegations against the Bidens that were at the heart of the #UkraineGate impeachment scandal.

    But there’s another part of this story that raises the intriguing possibility that we’re seeing an early use of deepfake technology for political purposes: the laptops also allegedly include a number of incriminating files, like photos of Hunter Biden with a crack pipe in his mouth and a video of him with an unidentified woman. So if we take this story at face value, we are supposed to believe that Hunter Biden took three of his laptops filled with incriminating videos and emails to a computer repair store in 2019 and never bothered to even try to pay for the services or pick them up again.

    Oh, and it turns out the owner of the computer repair store not only contacted the FBI about his concerns — he claims to have been fearful for his life after determining the contents, citing the Seth Rich story — but he also got in touch with Rudy Giuliani and handed copies of the contents to Giuliani’s attorney Robert Costello in December of 2019. At some point Steve Bannon got his hands on the data, and Bannon has apparently been telling news outlets about these laptops since September of this year, before Giuliani gave the documents to the New York Post. Giuliani is hinting there are more documents to come. So the chain of custody for these laptops runs from the computer repair shop owner to Rudy Giuliani and Steve Bannon.

    Oh, and as we’ll see in the second article excerpt below, it turns out the computer repair shop owner, John Paul Mac Isaac, is a big Trump fan who insists Trump’s impeachment was a sham. He’s far less sure about the details of the story, giving reporters inconsistent answers about the timeline and about his interactions with the FBI. When asked if he was sure that it was Hunter who dropped the laptops off, Mac Isaac claimed he was unable to see who dropped them off due to a medical condition but inferred it was Hunter because of a Beau Biden Foundation sticker on one of the laptops. And he refused to answer questions about whether or not he was in contact with Rudy Giuliani before the laptops were dropped off at his shop. This is the guy who is the sole source for this story on the origin of the laptops.

    The whole story looks so sloppily shady you have to wonder why they decided to go with it...except for the fact that the New York Post story had what appeared to be photos of Hunter with a crack pipe dangling out of his mouth, along with a sex tape. So given that this whole story has the appearance of being a sloppy hoax intended to titillate right-wing audiences and little more, we have to ask: are those photos and videos real and somehow stolen? Or are we looking at the long-awaited ‘deepfake’ October Surprise? If so, it’s a rather ironic deepfake October Surprise, since it’s being promoted by an obviously fake cover story:

    Talk­ing Points Memo
    Five Points

    5 Points On Why The New ‘Biden Emails’ Dis­tor­tion Is So Bogus

    By Josh Koven­sky
    Octo­ber 14, 2020 2:55 p.m.

    It’s so bizarre that to com­ment on it risks ampli­fy­ing it.

    But the New York Post pub­lished a sto­ry Wednes­day morn­ing claim­ing to have received from Rudy Giu­liani a copy of a hard dri­ve from a Hunter Biden lap­top.

    And yet, for a sup­posed Octo­ber sur­prise, noth­ing in the sto­ry or where it came from is par­tic­u­lar­ly sur­pris­ing. A right-lean­ing news out­let known for loose edi­to­r­i­al stan­dards ped­dles emails of unknown prove­nance from Giu­liani, bag man for the Pres­i­dent whose antics already got Pres­i­dent Trump impeached over the same alle­ga­tions last year.

    What’s even less sur­pris­ing is how the Trump cam­paign and the asso­ci­at­ed Fox News appa­ra­tus have latched on to the sto­ry.

    Later on Wednesday, Twitter banned users from posting the article itself. In a statement emailed to reporters, the company cited a ban on people using the platform “to directly distribute content obtained through hacking” as among the reasons for the move.

    But to look at the arti­cle with a jour­nal­is­tic eye, there are so many red flags here – rang­ing from the lack of any cred­i­ble premise to the sto­ry to Giuliani’s pres­ence in the mat­ter – that it’s dif­fi­cult to find firm ground on which to under­stand how this smear cam­paign is work­ing.

    1. There are so many red flags.

    The sto­ry claims that a Delaware com­put­er repair shop own­er received a lap­top full of Hunter Biden’s emails last year for data extrac­tion and repair. After the client nev­er paid or came to pick up the lap­top, the anony­mous store own­er sup­pos­ed­ly said, the Apple com­put­er repair man went to both the FBI and Rudy Giu­liani with the infor­ma­tion.

    In a sense, it’s incor­rect to even say that the sto­ry has “red flags” – the premise itself is so bogus that say­ing there are alarm bells sug­gests that there is some under­ly­ing sol­id ground to what’s going on here.

    But, there isn’t. The arti­cle regur­gi­tates – and relies on as a news­wor­thy premise – long-debunked alle­ga­tions that Joe Biden fired a Ukrain­ian pros­e­cu­tor to pro­tect his son, Hunter, from inves­ti­ga­tions in the for­mer Sovi­et repub­lic.

    What adds new­ness to the mess is that the emails uncov­ered by the New York Post sup­pos­ed­ly came from the younger Biden’s lap­top itself, the­o­ret­i­cal­ly pierc­ing the veil of the Biden fam­i­ly and offer­ing a look with­in, much in the same way that Wik­ileaks offered an inside look at the Clin­ton cam­paign via the Podes­ta emails.

    The problem, then as now, is that there’s no way to verify anything that the Post has reported on, and there’s no evidence that the Post did the work to determine whether the emails themselves are real.

    2. Even a press duped in 2016 wide­ly mocked the sup­posed “scoop.”

    After years of this, many reporters and polit­i­cal com­men­ta­tors mocked the arti­cle.

    ...

    Giu­liani him­self hasn’t helped his own cause.

    As if on cue, he’s promised that “there’s more to come” from the lap­top, typ­i­cal­ly with­out express­ing what that may be.

    3. The back­sto­ry to the “scoop” is pre­pos­ter­ous.

    The Post report­ed that a cus­tomer brought in a water-dam­aged Mac­Book Pro in April 2019, with­out pay­ing for its repair or com­ing back to pick it up.

    From there, the anony­mous store own­er appar­ent­ly held on to the lap­top, before con­tact­ing the FBI and Giu­liani attor­ney Robert Costel­lo. In Decem­ber 2019, the store own­er gave the lap­top to the FBI under sub­poe­na and a copy of the hard dri­ve to Costel­lo.

    The only per­son in the arti­cle who has been charged with any crime – for­mer Trump cam­paign chair­man Steve Ban­non – pur­port­ed­ly told the Post about the hard dri­ve in Sep­tem­ber, before Giu­liani him­self sup­pos­ed­ly gave the news­pa­per a copy of the hard dri­ve.

    That chain of cus­tody alone – from Giu­liani to Ban­non to the Post – is almost enough to make a con­spir­a­cy in itself, were it not so absurd to begin with.

    4. But even on its own terms, the “scoop” col­laps­es.

    Let’s pre­tend we’re liv­ing in a uni­verse where the authen­tic­i­ty of these emails had been thor­ough­ly vet­ted and doc­u­ment­ed, and that their prove­nance was clear and known to all.

    Even then, the sto­ry doesn’t check out.

    The article’s cen­tral asser­tion is that Biden lied when he said that his son, Hunter, had nev­er asked him about any­thing to do with busi­ness in Ukraine. Rely­ing on an email from the lap­top, the sto­ry claims to prove that Hunter set up a meet­ing between his father and a Ukrain­ian busi­ness­man work­ing for Buris­ma.

    That in itself wouldn’t real­ly prove any­thing giv­en that the premise is so faulty: the alle­ga­tion of cor­rup­tion against Biden Sr. is that he fired a Ukrain­ian pros­e­cu­tor to pro­tect his son; even the NY Post arti­cle goes nowhere towards prov­ing that.

    But the Biden cam­paign chal­lenged in a Wednes­day state­ment the story’s con­clu­sion, say­ing that “we have reviewed Joe Biden’s offi­cial sched­ules from the time and no meet­ing, as alleged by the New York Post, ever took place.”

    5. So if it’s dis­in­for­ma­tion, then from who?

    This is arguably the most inter­est­ing ques­tion, but also the one with the least clear answer.

    Russ­ian hack­ers report­ed­ly launched an attack against Buris­ma last year, stok­ing spec­u­la­tion that the intru­sion was relat­ed to Biden’s cam­paign for the pres­i­den­cy. Sep­a­rate­ly, top coun­ter­in­tel­li­gence offi­cial William Evan­i­na has said that the Rus­sians are seek­ing to dam­age Biden’s can­di­da­cy in the run-up to next month’s elec­tion.

    Giu­liani him­self spent a sig­nif­i­cant por­tion of the past 12 months speak­ing with and tak­ing infor­ma­tion from Andrii Derkach, a Ukrain­ian par­lia­men­tar­i­an who was sanc­tioned last month by the U.S. gov­ern­ment for inter­fer­ing in the 2020 elec­tion. Derkach has been ped­dling sup­posed tapes of Biden speak­ing with Ukrain­ian offi­cials.

    All of this is sug­ges­tive, but it doesn’t get any­where near towards answer­ing the ques­tion of how this strange sto­ry came to be.

    ————-

    “5 Points On Why The New ‘Biden Emails’ Dis­tor­tion Is So Bogus” by Josh Koven­sky; Talk­ing Points Memo; 10/14/2020

    “What adds newness to the mess is that the emails uncovered by the New York Post supposedly came from the younger Biden’s laptop itself, theoretically piercing the veil of the Biden family and offering a look within, much in the same way that Wikileaks offered an inside look at the Clinton campaign via the Podesta emails.”

    It’s the crux of the story: is this really Hunter Biden’s laptop and are these really his emails? Did he actually keep videos of himself smoking crack on these laptops and then take them to a repair shop? And did Hunter then neglect to ever return to the shop to pick them up? And did the shop owner hand the copies over to Rudy Giuliani and Steve Bannon, who didn’t modify them at all? All of that has to be believed if we’re to believe this story:

    ...
    The problem, then as now, is that there’s no way to verify anything that the Post has reported on, and there’s no evidence that the Post did the work to determine whether the emails themselves are real.

    ...

    Giu­liani him­self hasn’t helped his own cause.

    As if on cue, he’s promised that “there’s more to come” from the lap­top, typ­i­cal­ly with­out express­ing what that may be.

    3. The back­sto­ry to the “scoop” is pre­pos­ter­ous.

    The Post report­ed that a cus­tomer brought in a water-dam­aged Mac­Book Pro in April 2019, with­out pay­ing for its repair or com­ing back to pick it up.

    From there, the anony­mous store own­er appar­ent­ly held on to the lap­top, before con­tact­ing the FBI and Giu­liani attor­ney Robert Costel­lo. In Decem­ber 2019, the store own­er gave the lap­top to the FBI under sub­poe­na and a copy of the hard dri­ve to Costel­lo.

    The only per­son in the arti­cle who has been charged with any crime – for­mer Trump cam­paign chair­man Steve Ban­non – pur­port­ed­ly told the Post about the hard dri­ve in Sep­tem­ber, before Giu­liani him­self sup­pos­ed­ly gave the news­pa­per a copy of the hard dri­ve.

    That chain of cus­tody alone – from Giu­liani to Ban­non to the Post – is almost enough to make a con­spir­a­cy in itself, were it not so absurd to begin with.
    ...

    And now here’s an interview the shop owner, John Paul Mac Isaac, gave in which he claims to be fearing for his life, citing Seth Rich, while at the same time he can’t even get his story straight as to whether he reached out to the FBI or the FBI reached out to him. And when asked whether or not he was in contact with Rudy Giuliani before the laptops were allegedly dropped off by Hunter Biden, Mac Isaac refused to answer, other than to refer to Giuliani as his “lifeguard”:

    The Dai­ly Beast

    Man Who Report­ed­ly Gave Hunter’s Lap­top to Rudy Speaks Out in Bizarre Inter­view

    John Paul Mac Isaac gave con­flict­ing sto­ries to reporters on Wednes­day. He also said he feared for his life, cit­ing the Seth Rich con­spir­a­cy.

    by Jor­dan How­ell and Erin Ban­co
    Updat­ed Oct. 14, 2020 9:31PM ET / Pub­lished Oct. 14, 2020 3:52PM ET

    On Wednes­day morn­ing, the New York Post pub­lished a sto­ry alleg­ing that Hunter Biden dropped off a lap­top at a Delaware com­put­er store for repair and that the device con­tained nefar­i­ous emails and pho­tos.

    The item was imme­di­ate­ly viewed with sus­pi­cion, both for the tim­ing of it—coming less than three weeks before the elections—and the path the lap­top sup­pos­ed­ly took. The Post said that “before turn­ing over the gear,” the own­er of the com­put­er repair shop “made a copy of the hard dri­ve and lat­er gave it to for­mer May­or Rudy Giuliani’s lawyer, Robert Costel­lo.” The sto­ry alleged that the Biden son was set­ting up a meet­ing between a top exec­u­tive at a Ukrain­ian ener­gy firm on which he served and his father, who was then the vice pres­i­dent. The Biden cam­paign has said no such meet­ing was sched­uled.

    On Wednes­day after­noon, a group of reporters, among them a jour­nal­ist for The Dai­ly Beast, spoke with the own­er of the shop, a man named John Paul Mac Isaac who lives in Wilm­ing­ton, Delaware. The audio of that near­ly hour-long ques­tion and answer ses­sion is below.

    Mac Isaac appeared ner­vous through­out. Sev­er­al times, he said he was scared for his life and for the lives of those he loved. He appeared not to have a grasp on the time­line of the lap­top arriv­ing at his shop and its dis­ap­pear­ance from it. He also said the impeach­ment of Pres­i­dent Trump was a “sham.” Social media post­ings indi­cate that Mac Isaac is an avid Trump sup­port­er and vot­ed for him in the 2016 elec­tion.

    Mac Isaac said he had a med­ical con­di­tion that pre­vent­ed him from actu­al­ly see­ing who dropped off the lap­top but that he believed it to be Hunter Biden’s because of a stick­er relat­ed to the Beau Biden Foun­da­tion that was on it. He said that Hunter Biden actu­al­ly dropped off three lap­tops for repair, an abun­dance of hard­ware that he chalked up to the Biden son being “rich.”

    Through­out the inter­view, Mac Isaac switched back and forth from say­ing he reached out to law enforce­ment after view­ing the files in the lap­top to say­ing that it was actu­al­ly the Fed­er­al Bureau of Inves­ti­ga­tion that con­tact­ed him. At one point, Mac Isaac claimed that he was email­ing some­one from the FBI about the lap­top. At anoth­er point he claimed a spe­cial agent from the Bal­ti­more office had con­tact­ed him after he alert­ed the FBI to the device’s exis­tence. At anoth­er point, he said the FBI reached out to him for “help access­ing his dri­ve.”

    Mac Isaac ref­er­enced the infa­mous Seth Rich con­spir­a­cy theory—which holds that a DNC staffer who police say was mur­dered in a botched rob­bery was actu­al­ly killed off by Clin­ton allies because he leaked com­mit­tee emails—as rea­son for his para­noia. He said he made a copy of the hard dri­ve for the pur­pos­es of per­son­al pro­tec­tion.

    ...

    Mac Isaac refused to answer spe­cif­ic ques­tions about whether he had been in con­tact with Rudy Giu­liani before the lap­top drop-off or at any oth­er time before the Post article’s pub­li­ca­tion. Pressed on his rela­tion­ship with Giu­liani, he replied: “When you’re afraid and you don’t know any­thing about the depth of the waters that you’re in, you want to find a life­guard.”

    Seeming to realize he’d said too much, he added: “Ah, shit.”

    So Rudy was your life­guard? the reporters asked. “No com­ment,” he replied.

    ———–

    “Man Who Report­ed­ly Gave Hunter’s Lap­top to Rudy Speaks Out in Bizarre Inter­view” by Jor­dan How­ell and Erin Ban­co; The Dai­ly Beast; 10/14/2020

    “Mac Isaac appeared ner­vous through­out. Sev­er­al times, he said he was scared for his life and for the lives of those he loved. He appeared not to have a grasp on the time­line of the lap­top arriv­ing at his shop and its dis­ap­pear­ance from it. He also said the impeach­ment of Pres­i­dent Trump was a “sham.” Social media post­ings indi­cate that Mac Isaac is an avid Trump sup­port­er and vot­ed for him in the 2016 elec­tion.”

    A Trump superfan. That’s the single source the entire story is based on. A guy who refers to Giuliani as his “lifeguard” but refused to answer any questions as to whether or not he was in contact with Giuliani before the laptops were dropped off:

    ...
    Mac Isaac said he had a med­ical con­di­tion that pre­vent­ed him from actu­al­ly see­ing who dropped off the lap­top but that he believed it to be Hunter Biden’s because of a stick­er relat­ed to the Beau Biden Foun­da­tion that was on it. He said that Hunter Biden actu­al­ly dropped off three lap­tops for repair, an abun­dance of hard­ware that he chalked up to the Biden son being “rich.”

    ...

    Mac Isaac refused to answer spe­cif­ic ques­tions about whether he had been in con­tact with Rudy Giu­liani before the lap­top drop-off or at any oth­er time before the Post article’s pub­li­ca­tion. Pressed on his rela­tion­ship with Giu­liani, he replied: “When you’re afraid and you don’t know any­thing about the depth of the waters that you’re in, you want to find a life­guard.”

    Seeming to realize he’d said too much, he added: “Ah, shit.”

    So Rudy was your life­guard? the reporters asked. “No com­ment,” he replied.
    ...

    Keep in mind that if Mac Isaac first hand­ed this over to Giu­liani in Decem­ber of 2019 that means they’ve had 10 months to con­coct some­thing more impres­sive than this. This was appar­ent­ly the best they could do. It’s so slop­py we have to won­der if Jacob Wohl is going to be revealed to be involved.

    But while the cover story is garbage, it’s possible the actual deepfakes are of a much higher quality, and those are the items that purport to lend authenticity to the rest of the contents of the laptop, like the emails. It points towards an application of deepfake technology that’s going to be important to keep in mind going forward: using deepfakes as a means of seemingly validating the content of other files, like emails, found on a hard drive. It’s an application that’s going to be especially important to keep in mind now that the GOP under Trump has embraced hacking your opponent as a valid re-election strategy.

    Posted by Pterrafractyl | October 15, 2020, 5:12 pm
  16. Here’s a pair of articles about two different stories that are really part of the same larger story we’ve heard before: the story of how our location information is routinely collected by smartphones and sent off to advertisers. As we’ve seen, smartphone manufacturers like Google have already been caught collecting location data and presumably incorporating it into their advertising algorithms, even when location-tracking is turned off. Then there are the stories about cell phone service providers like Sprint, AT&T, and T-Mobile getting caught selling location information on the open market to entities that include bounty hunters. So it should come as little surprise to learn that the apps running on smartphones are also collecting location information and sending it off to advertisers. Those were the findings of two separate reports in the last week. The first report covers a recent study of the information collected by apps targeting kids. And while the study found that only around 7 percent of the child-targeted apps examined were collecting location or internet address data, it also found that almost all of the most popular apps were collecting such information. In other words, almost every kid with a smartphone is getting cyberstalked by the advertising industry without their parents realizing it. And as the article reminds us, many children can’t distinguish ads from content. So when you hand advertisers powerful information that allows for ever more effective microtargeting, you’re effectively setting up a system for maximally persuasive messaging aimed at children.

    And all of this is happening despite a US law — the 1998 Children’s Online Privacy Protection Act, or COPPA — which prevents companies from gathering personal information about kids under 13 without parental permission. So how do Google and Apple deal with these risks? By largely ignoring them, relying on a loophole that allows app developers to declare that their child-targeted apps are actually for “all ages” and pretend it’s adults using these apps. Yep.

    But mass location tracking via apps isn’t just for kids. As we’ll see in the second article below, it’s also for Canadians. That’s what was just revealed in a shocking new report by Canadian federal privacy investigators into a massive privacy violation carried out by an app created by none other than Tim Hortons. The app, released in 2017, was supposed to be a standard restaurant promotional app. It originally tracked individuals in order to serve them offers at nearby locations. But in 2019, the company apparently decided to drop the individual location targeting and engage in aggregate tracking to look at broader user patterns. In making this shift, though, the company didn’t just start mass-aggregating the location data it was collecting. It also began collecting location data 24/7, whether or not the app was open, and even during overseas trips. It’s the kind of story that, on its own, is rather shocking, but it also shows what is possible. Because it means both Apple and Google built smartphone operating systems that allow app developers to unilaterally start collecting this kind of information on phones where the app is installed, without telling anyone. Tim Hortons just decided it wanted 24/7 location data and, voila, it had it. What are the odds Tim Hortons is the only company on the planet that’s decided to do this?
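    To see why round-the-clock pings are so much more revealing than in-app pings, here’s a minimal sketch, with made-up data and a deliberately crude grid rather than anyone’s actual pipeline, of how a 24/7 location trace gives away a user’s likely home and workplace:

    ```python
    from collections import Counter
    from datetime import datetime

    def grid_cell(lat, lon):
        """Snap coordinates to a coarse (~1 km) grid so nearby pings group."""
        return round(lat, 2), round(lon, 2)

    def infer_anchors(pings):
        """pings: list of (iso_timestamp, lat, lon) tuples.
        Guess 'home' as the most common overnight cell and 'work' as the
        most common weekday working-hours cell."""
        night, day = Counter(), Counter()
        for ts, lat, lon in pings:
            t = datetime.fromisoformat(ts)
            cell = grid_cell(lat, lon)
            if t.hour < 6 or t.hour >= 22:                # overnight pings
                night[cell] += 1
            elif t.weekday() < 5 and 9 <= t.hour < 17:    # weekday, 9-to-5
                day[cell] += 1
        return night.most_common(1), day.most_common(1)

    # Made-up trace: overnight pings in one spot, weekday pings in another.
    trace = [
        ("2019-06-03T02:10:00", 43.651, -79.382),   # night -> "home"
        ("2019-06-03T23:40:00", 43.652, -79.383),
        ("2019-06-04T10:15:00", 43.662, -79.401),   # Tuesday -> "work"
        ("2019-06-04T14:30:00", 43.663, -79.404),
    ]
    home, work = infer_anchors(trace)
    print("likely home cell:", home, "likely work cell:", work)
    ```

    And a home/work pair like that is frequently enough to re-identify a supposedly ‘de-identified’ trace, which is why aggregation alone is such weak protection.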

    Ok, first, here’s the report on the mass spying on children. Mass spying that appears to be enabled by a loophole that allows app developers to create apps for kids and then pretend they don’t realize kids are using them:

    The Wash­ing­ton Post

    Your kids’ apps are spy­ing on them

    Apple and Google just look the oth­er way. Here’s how we stop it.

    By Geof­frey A. Fowler
    June 9, 2022 at 8:00 a.m. EDT

    Imag­ine if a stranger parked in front of a child’s bed­room win­dow to peep inside. You’d call the police.

    Yet that hap­pens every day online, and Big Tech looks the oth­er way.

    Apps are spy­ing on our kids at a scale that should shock you. More than two-thirds of the 1,000 most pop­u­lar iPhone apps like­ly to be used by chil­dren col­lect and send their per­son­al infor­ma­tion out to the adver­tis­ing indus­try, accord­ing to a major new study shared with me by fraud and com­pli­ance soft­ware com­pa­ny Pix­alate. On Android, 79 per­cent of pop­u­lar kids apps do the same.

    Angry Birds 2 snoops when kids use it. So do Can­dy Crush Saga and apps for col­or­ing and doing math home­work. They’re grab­bing kids’ gen­er­al loca­tions and oth­er iden­ti­fy­ing infor­ma­tion and send­ing it to com­pa­nies that can track their inter­ests, pre­dict what they might want to buy or even sell their infor­ma­tion to oth­ers.

    Apple and Google run the app stores, so what are they doing about it? Enabling it.

    Tech com­pa­nies need to stop turn­ing a blind eye when chil­dren use their prod­ucts — or else we need laws to impose some respon­si­bil­i­ty on them. We the users want children’s pri­va­cy to be pro­tect­ed online. But par­ents and teach­ers can’t be the only line of defense.

    Children’s pri­va­cy deserves spe­cial atten­tion because kids’ data can be mis­used in some unique­ly harm­ful ways. Research sug­gests many chil­dren can’t dis­tin­guish ads from con­tent, and track­ing tech lets mar­keters micro-tar­get young minds.

    This is why kids are at the cen­ter of one of America’s few pri­va­cy laws, the 1998 Children’s Online Pri­va­cy Pro­tec­tion Act, or COPPA. It said that com­pa­nies aren’t sup­posed to gath­er per­son­al infor­ma­tion about kids under 13 with­out parental per­mis­sion. Sounds pret­ty clear, right?

    But even one of the authors of COPPA, Sen. Edward J. Markey (D‑Mass.), thinks it needs a do-over. “It was pret­ty obvi­ous when the bill was being orig­i­nal­ly draft­ed that there was going to be a real oppor­tu­ni­ty for unscrupu­lous cor­po­ra­tions to take advan­tage of young peo­ple,” he told me. “Now the prob­lems are on steroids.”

    By the time a child reach­es 13, online adver­tis­ing firms hold an aver­age of 72 mil­lion data points about them, accord­ing to Super­Awe­some, a Lon­don-based com­pa­ny that helps app devel­op­ers nav­i­gate child-pri­va­cy laws.

    “COPPA was passed in a world where par­ents would be in the room with a child using a com­put­er,” said Sta­cy Feuer, a senior vice pres­i­dent of the Enter­tain­ment Soft­ware Rat­ing Board, or ESRB, who worked for two decades at the Fed­er­al Trade Com­mis­sion. “Mobile and every­thing we have in 2022 present new chal­lenges.” ESRB is the non­prof­it, self-reg­u­la­to­ry body for the video game indus­try.

    Pix­alate said it used soft­ware and human review­ers, includ­ing teach­ers, to attempt some­thing that Apple and Google have failed to do: cat­e­go­rize every sin­gle app that might appeal to chil­dren. Pix­alate iden­ti­fied more than 391,000 child-direct­ed apps across both stores — far more than the selec­tion in the stores’ lim­it­ed kids sec­tions. Pixalate’s method­ol­o­gy draws on the FTC’s def­i­n­i­tions of “child-direct­ed,” and it was designed by a for­mer com­mis­sion staffer who was respon­si­ble for enforc­ing the law.

    After iden­ti­fy­ing the child-direct­ed apps, Pix­alate stud­ied how each han­dled per­son­al infor­ma­tion, most notably chart­ing what data each sent to the ad indus­try. Of all the apps Pix­alate iden­ti­fied, 7 per­cent sent either loca­tion or inter­net address data. But pop­u­lar apps were much more like­ly to engage in track­ing because they have an incen­tive to make mon­ey from tar­get­ed ads, it said.

    Google and Apple said their app stores pro­tect children’s pri­va­cy. Apple said it dis­agrees with the premise of the research from Pix­alate, and said that com­pa­ny has a con­flict of inter­est because it sells ser­vices to adver­tis­ers. Google calls Pixalate’s method­ol­o­gy of deter­min­ing whether an app is child-direct­ed “over­ly broad.”

    A lim­i­ta­tion of Pixalate’s study is that it didn’t check which apps seek parental per­mis­sion like COPPA would require — but my spot checks found many, many do not.

    This research is hard­ly the only indi­ca­tion of the prob­lem. A recent study of 164 edu­ca­tion­al apps and web­sites found near­ly 90 per­cent of them sent infor­ma­tion to the ad-tech indus­try. A 2020 study found that two-thirds of the apps played by 124 preschool-aged chil­dren col­lect­ed and shared iden­ti­fy­ing infor­ma­tion. And a 2018 study of 5,855 pop­u­lar free children’s apps found a major­i­ty were poten­tial­ly in vio­la­tion of COPPA.

    “They are plac­ing their prof­its over the men­tal health and social well-being of every child in Amer­i­ca, because that’s the pow­er they have today,” Markey told me.

    I want­ed to know: How did it become open sea­son on kids’ data when we have a pri­va­cy law for kids in Amer­i­ca?

    What I dis­cov­ered is that Big Tech and app mak­ers found a giant loop­hole in the law: They claim they don’t have “actu­al knowl­edge” they’re tak­ing data from kids.

    Your kids’ apps are spy­ing on them

    To see how apps gath­er kids’ data, step into the shoes of a par­ent. Your 12-year-old search­es the iPhone app store for a col­or­ing game and choos­es one called Pix­el Art: Paint by Num­ber, made by a com­pa­ny called Easy­brain.

    Before your kid down­loads the app, you glance at the list­ing in Apple’s app store. It says “Age: 12+” right at the top. The app pre­view shows pic­tures of a veg­etable and a tou­can to col­or. The app is free. What’s not to like?

    But when your kid opens that col­or­ing app, it sends out to the ad indus­try her gen­er­al loca­tion, inter­net address and anoth­er code to poten­tial­ly iden­ti­fy her phone, accord­ing to Pix­alate.

    At no point does Pix­el Art ask for her age — or you for per­mis­sion. Easy­brain claims it doesn’t have to, because Pix­el Art is not for chil­dren.

    “We instead oper­ate a ‘gen­er­al audi­ence’ ser­vice, and do not gen­er­al­ly have actu­al knowl­edge that the Pix­el Art App is col­lect­ing, using, or dis­clos­ing per­son­al infor­ma­tion from any child under 13,” emailed com­pa­ny spokesman Evan Roberts.

    Let me trans­late: Many app mak­ers say they’re only required to stop col­lect­ing data or get parental con­sent if they have “actu­al knowl­edge” their users are chil­dren. With­out it, they can claim to be a “gen­er­al-audi­ence” prod­uct, rather than a “child-direct­ed” one.

    The col­or­ing designs in Pix­el Art include cat­e­gories such as Dinosaurs, Uni­corns, Cute Uni­corns, Stu­dents, Ice Cream and Creamy Dessert. Those all seem like things kids could be inter­est­ed in col­or­ing, even though the app mak­er said it’s mar­ket­ed to adults.

    It doesn’t mat­ter if adults also use an app: COPPA should apply if even just a por­tion of an app or website’s audi­ence is kids. If it’s a mixed-audi­ence prod­uct like Pix­el Art, the app should either check ages and get parental per­mis­sion — or just not col­lect per­son­al infor­ma­tion. In 2021, the FTC set­tled with a self-iden­ti­fied “adult” col­or­ing app called Recol­or that also had a “kids” sec­tion.

    I also heard the “gen­er­al audi­ence” expla­na­tion from King, the mak­er of Can­dy Crush Saga, a game list­ed as “Age: 4+.” “Our game and our mar­ket­ing are tar­get­ed at adult play­ers, over the age of 18 in the U.S.,” the com­pa­ny emailed.

    Same from Rovio, the mak­er of the Angry Birds app series. “Rovio care­ful­ly ana­lyzes whether its games are sub­ject to COPPA,” the com­pa­ny emailed.

    I under­stand it can be com­pli­cat­ed for app devel­op­ers — often small busi­ness­es — to under­stand who their audi­ence is or how to make mon­ey with­out run­ning afoul of the law.

    The mak­er of one app I con­tact­ed acknowl­edged a need to do bet­ter. The Cal­cu­la­tor and Math Solver app mar­ket­ed itself as a way to “make math home­work fun” even while claim­ing to only tar­get peo­ple old­er than 16. “We will be more mind­ful of clear­ly mar­ket­ing only to our intend­ed tar­get audi­ence,” emailed Frank List, the chief exec­u­tive of devel­op­er Impala Stu­dios.

    App stores look the oth­er way

    Are Apple and Google okay with this hap­pen­ing in their app stores? I told them the results of Pixalate’s study and flagged a dozen apps that appeared to flout COPPA.

    They both told me they’re doing a bang-up job pro­tect­ing kids. “Apps designed for chil­dren pro­vide addi­tion­al lay­ers of secu­ri­ty and tools to pro­tect young peo­ple and hold account­able those who try to exploit their data,” emailed Apple spokesman Peter Ajemi­an.

    And “Google Play has strict pro­to­cols and unique fea­tures to help pro­tect kids on our plat­form,” emailed spokes­woman Danielle Cohen.

    Both com­pa­nies say they require apps fol­low the law and also have spe­cial pri­va­cy rules for child-direct­ed apps. But the ques­tion is: Which apps adhere to these rules? When apps self-declare they’re not designed for chil­dren, Apple and Google too often just look the oth­er way.

    Why point the fin­ger at the tech giants? Because they arguably have more pow­er than the U.S. gov­ern­ment over the app econ­o­my through their con­trol over the two biggest app stores.

    They provide age ratings on all apps — but here’s a dirty little secret: Those are just for the content of the apps. They don’t indicate whether the app is COPPA-compliant. In fact, Rovio, of Angry Birds, offers a warning on its website that age labels in app stores can be misleading.

    Even more frus­trat­ing, nei­ther app store gives par­ents a sim­ple way to see just the apps that don’t col­lect kids’ data. Google’s store has a kids tab where apps are labeled as “Teacher Approved” and strin­gent stan­dards are applied. But just 5 per­cent of the most-pop­u­lar child-direct­ed apps that Pix­alate iden­ti­fied are in that part of the store.

    Good luck even find­ing Apple’s curat­ed kids cat­e­go­ry — it’s buried at the bot­tom of its store. You can’t search it sep­a­rate­ly, and apps with kids’ pri­va­cy pro­tec­tions aren’t labeled as such. (In fact, if you tap your way to the kids sec­tion of its most-down­loaded charts, the list­ings you see aren’t even kids apps. Apple said I dis­cov­ered a bug.)

    Even Apple’s parental con­trols are of lim­it­ed help. When a par­ent sets up a child’s iOS account, they get the abil­i­ty to approve app pur­chas­es — but Apple doesn’t lim­it the store to just apps designed for kids’ pri­va­cy.

    On an iPhone set up with a kids account, Apple does auto­mat­i­cal­ly acti­vate its “ask app not to track” pri­va­cy set­ting. That lim­its all apps’ access to one piece of per­son­al infor­ma­tion — but it doesn’t stop all track­ing. Test­ing the Pix­el Art app on a child iOS account, we still saw the app share per­son­al data, includ­ing gen­er­al loca­tion, inter­net address and what appeared to be an alter­na­tive way to track the phone across apps by the same devel­op­er. (Easy­brain dis­agreed that the last bit could be used to track a phone.)

    ...

    How to close the loop­hole

    So how do we fix this? For starters, Apple and Google could stop look­ing the oth­er way, and start label­ing all the child-direct­ed apps in their stores.

    Then par­ents, gov­ern­ments and even the adver­tis­ing indus­try could have a clear­er under­stand­ing of which apps are sup­posed to be treat­ing kids’ data dif­fer­ent­ly — and which ones real­ly are only for grown-ups.

    “The prob­lem can be, the dev­il is in the details,” said Phyl­lis Mar­cus, a part­ner at the law firm Hunton Andrews Kurth who used to run the FTC’s COPPA enforce­ment.

    I’m not say­ing that will be easy — we’d be ask­ing that the stores call out apps stretch­ing the def­i­n­i­tion of “gen­er­al audi­ence.” Yet YouTube was able to start label­ing child-direct­ed videos on its ser­vice after a COPPA set­tle­ment with the FTC in 2019.

    Anoth­er idea: The phone could iden­ti­fy when it’s being used by a child. Already, Apple asks for a child’s age when a par­ent sets up a kid-spe­cif­ic iOS account. Why not send a sig­nal to apps when a child is using the phone to stop col­lect­ing data?

    Still, many of the kids’ pri­va­cy advo­cates I spoke with think the indus­try won’t real­ly change until there’s legal cul­pa­bil­i­ty. “These com­pa­nies shouldn’t be able to bury their heads in the sand to avoid hav­ing to pro­tect kids’ pri­va­cy,” Markey said.

    This means we need a COPPA 2.0. Markey and Rep. Kathy Cas­tor (D‑Fla.) both have draft­ed bills that would update the law. Among the changes they have pro­posed are cov­er­ing teenagers up to age 16 and out­right ban­ning behav­ioral and tar­get­ed adver­tis­ing.

    Markey would also do away with that “actu­al knowl­edge” loop­hole and replace it with a new stan­dard called “con­struc­tive knowl­edge.” That would mean apps and web­sites would be respon­si­ble for rea­son­ably try­ing to fig­ure out whether chil­dren are using their ser­vices.

    Cal­i­for­nia is also con­sid­er­ing cre­at­ing a ver­sion of a Unit­ed King­dom law known as the Age Appro­pri­ate Design Code. It would require com­pa­nies to estab­lish the age of con­sumers and main­tain the high­est lev­el of pri­va­cy pos­si­ble for chil­dren by default.

    U.S. law­mak­ers have been talk­ing about pri­va­cy broad­ly with­out much action for years. But sure­ly pro­tect­ing chil­dren is one thing Democ­rats and Repub­li­cans could agree on?

    “If we can’t do kids, then it just shows how bro­ken our polit­i­cal sys­tem is,” Markey told me. “It shows how pow­er­ful the tech com­pa­nies are.”

    ————

    “Your kids’ apps are spy­ing on them” By Geof­frey A. Fowler; The Wash­ing­ton Post; 06/09/2022

    “Apps are spying on our kids at a scale that should shock you. More than two-thirds of the 1,000 most popular iPhone apps likely to be used by children collect and send their personal information out to the advertising industry, according to a major new study shared with me by fraud and compliance software company Pixalate. On Android, 79 percent of popular kids apps do the same.”

    The vast majority of the most popular apps used by kids are collecting information and sending it back to advertisers every single time kids open these apps. Information that includes the locations of these phones and other identifying details that potentially help advertisers micro-target their ads to young minds that often can’t distinguish ads from content. And while only around 7 percent of the apps studied by Pixalate were sending location or internet address data back to advertisers, almost all of the most popular apps did so. In other words, children across the world are being systematically cyberstalked by the advertising industry:

    ...
    Children’s pri­va­cy deserves spe­cial atten­tion because kids’ data can be mis­used in some unique­ly harm­ful ways. Research sug­gests many chil­dren can’t dis­tin­guish ads from con­tent, and track­ing tech lets mar­keters micro-tar­get young minds.

    ...

    Pix­alate said it used soft­ware and human review­ers, includ­ing teach­ers, to attempt some­thing that Apple and Google have failed to do: cat­e­go­rize every sin­gle app that might appeal to chil­dren. Pix­alate iden­ti­fied more than 391,000 child-direct­ed apps across both stores — far more than the selec­tion in the stores’ lim­it­ed kids sec­tions. Pixalate’s method­ol­o­gy draws on the FTC’s def­i­n­i­tions of “child-direct­ed,” and it was designed by a for­mer com­mis­sion staffer who was respon­si­ble for enforc­ing the law.

    After iden­ti­fy­ing the child-direct­ed apps, Pix­alate stud­ied how each han­dled per­son­al infor­ma­tion, most notably chart­ing what data each sent to the ad indus­try. Of all the apps Pix­alate iden­ti­fied, 7 per­cent sent either loca­tion or inter­net address data. But pop­u­lar apps were much more like­ly to engage in track­ing because they have an incen­tive to make mon­ey from tar­get­ed ads, it said.

    Google and Apple said their app stores pro­tect children’s pri­va­cy. Apple said it dis­agrees with the premise of the research from Pix­alate, and said that com­pa­ny has a con­flict of inter­est because it sells ser­vices to adver­tis­ers. Google calls Pixalate’s method­ol­o­gy of deter­min­ing whether an app is child-direct­ed “over­ly broad.”

    A lim­i­ta­tion of Pixalate’s study is that it didn’t check which apps seek parental per­mis­sion like COPPA would require — but my spot checks found many, many do not.

    This research is hard­ly the only indi­ca­tion of the prob­lem. A recent study of 164 edu­ca­tion­al apps and web­sites found near­ly 90 per­cent of them sent infor­ma­tion to the ad-tech indus­try. A 2020 study found that two-thirds of the apps played by 124 preschool-aged chil­dren col­lect­ed and shared iden­ti­fy­ing infor­ma­tion. And a 2018 study of 5,855 pop­u­lar free children’s apps found a major­i­ty were poten­tial­ly in vio­la­tion of COPPA.

    ...

    To see how apps gath­er kids’ data, step into the shoes of a par­ent. Your 12-year-old search­es the iPhone app store for a col­or­ing game and choos­es one called Pix­el Art: Paint by Num­ber, made by a com­pa­ny called Easy­brain.

    Before your kid down­loads the app, you glance at the list­ing in Apple’s app store. It says “Age: 12+” right at the top. The app pre­view shows pic­tures of a veg­etable and a tou­can to col­or. The app is free. What’s not to like?

    But when your kid opens that col­or­ing app, it sends out to the ad indus­try her gen­er­al loca­tion, inter­net address and anoth­er code to poten­tial­ly iden­ti­fy her phone, accord­ing to Pix­alate.
    ...

    So what are Google and Apple doing about the per­va­sive child-stalk­ing by the apps in their app stores? Pro­vid­ing the loop­holes being exploit­ed by the app devel­op­ers and deny­ing there’s a prob­lem, as we should expect:

    ...
    Apple and Google run the app stores, so what are they doing about it? Enabling it.

    ...

    I want­ed to know: How did it become open sea­son on kids’ data when we have a pri­va­cy law for kids in Amer­i­ca?

    What I dis­cov­ered is that Big Tech and app mak­ers found a giant loop­hole in the law: They claim they don’t have “actu­al knowl­edge” they’re tak­ing data from kids.

    ...

    At no point does Pix­el Art ask for her age — or you for per­mis­sion. Easy­brain claims it doesn’t have to, because Pix­el Art is not for chil­dren.

    “We instead oper­ate a ‘gen­er­al audi­ence’ ser­vice, and do not gen­er­al­ly have actu­al knowl­edge that the Pix­el Art App is col­lect­ing, using, or dis­clos­ing per­son­al infor­ma­tion from any child under 13,” emailed com­pa­ny spokesman Evan Roberts.

    Let me trans­late: Many app mak­ers say they’re only required to stop col­lect­ing data or get parental con­sent if they have “actu­al knowl­edge” their users are chil­dren. With­out it, they can claim to be a “gen­er­al-audi­ence” prod­uct, rather than a “child-direct­ed” one.

    The col­or­ing designs in Pix­el Art include cat­e­gories such as Dinosaurs, Uni­corns, Cute Uni­corns, Stu­dents, Ice Cream and Creamy Dessert. Those all seem like things kids could be inter­est­ed in col­or­ing, even though the app mak­er said it’s mar­ket­ed to adults.

    It doesn’t mat­ter if adults also use an app: COPPA should apply if even just a por­tion of an app or website’s audi­ence is kids. If it’s a mixed-audi­ence prod­uct like Pix­el Art, the app should either check ages and get parental per­mis­sion — or just not col­lect per­son­al infor­ma­tion. In 2021, the FTC set­tled with a self-iden­ti­fied “adult” col­or­ing app called Recol­or that also had a “kids” sec­tion.

    I also heard the “gen­er­al audi­ence” expla­na­tion from King, the mak­er of Can­dy Crush Saga, a game list­ed as “Age: 4+.” “Our game and our mar­ket­ing are tar­get­ed at adult play­ers, over the age of 18 in the U.S.,” the com­pa­ny emailed.

    ...

    Are Apple and Google okay with this hap­pen­ing in their app stores? I told them the results of Pixalate’s study and flagged a dozen apps that appeared to flout COPPA.

    They both told me they’re doing a bang-up job pro­tect­ing kids. “Apps designed for chil­dren pro­vide addi­tion­al lay­ers of secu­ri­ty and tools to pro­tect young peo­ple and hold account­able those who try to exploit their data,” emailed Apple spokesman Peter Ajemi­an.

    And “Google Play has strict pro­to­cols and unique fea­tures to help pro­tect kids on our plat­form,” emailed spokes­woman Danielle Cohen.

    Both com­pa­nies say they require apps fol­low the law and also have spe­cial pri­va­cy rules for child-direct­ed apps. But the ques­tion is: Which apps adhere to these rules? When apps self-declare they’re not designed for chil­dren, Apple and Google too often just look the oth­er way.
    ...

    And there we have it: every kid with a smartphone is being legally cyberstalked by the advertising industry. Every time they open popular apps, their whereabouts are being sent to an advertising industry focused on building personal profiles of everyone. That’s part of the context of this story: it’s not just that these apps are sending location information to advertisers. That location information is then going to be merged with all of the other data points in the privately owned personal profiles already collected on these kids. The synergistic potential of this information is a big part of the story.
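    The merging step itself is mechanically trivial once every ping carries the same advertising identifier. Here’s a minimal, hypothetical sketch (the field names are invented for illustration and don’t correspond to any real ad-tech schema) of how a single location ping from a kids’ app folds into a profile already built from other sources:

    ```python
    # Hypothetical ad-tech merge; all field names are invented for illustration.
    profiles = {
        "ad-id-123": {   # profile already built up from other apps and sites
            "interests": ["unicorns", "coloring"],
            "locations": [],
        }
    }

    def ingest_ping(ad_id, lat, lon, ip, app):
        """Fold one app's location ping into the shared profile for that device."""
        profile = profiles.setdefault(ad_id, {"interests": [], "locations": []})
        profile["locations"].append({"lat": lat, "lon": lon, "ip": ip, "app": app})

    # One coloring-app session adds a geolocated data point to a profile that
    # already knows the child's interests from elsewhere.
    ingest_ping("ad-id-123", 38.90, -77.04, "203.0.113.7", "coloring-app")
    print(profiles["ad-id-123"])
    ```

    The point isn’t the code; it’s that once a stable identifier exists, every additional data source compounds the profile automatically.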

    But as the following NY Times piece about another massive app-based geolocation privacy violation warns us, the problem with app-based location tracking isn’t just about apps sending location information while you have the app open and are using it. It turns out app developers can build their apps to collect location information 24/7, even when the app is closed. That was just one of the stunning details revealed in an explosive new story out of Canada about the mass spying that was taking place via the popular Tim Hortons app. The app, first released in 2017, was supposed to just be a standard promotional app providing people with discounts and promos. But in 2019, the company quietly decided to start tracking user location data. And not just the location data, but also whether that location was a house, factory or office. In many cases, the name of the building was included. And while the app notified users that their location information was being collected while the app was open, it turns out it was collecting that information all the time. Even when the app was closed. And even when users traveled overseas. This was global 24/7 location tracking, secretly incorporated into a wildly popular app:

    The New York Times

    ‘A Mass Inva­sion of Pri­va­cy’ but No Penal­ties for Tim Hor­tons

    A scathing report by four pri­va­cy com­mis­sion­ers found that the cof­fee and dough­nut chain col­lect­ed data on cus­tomers’ dai­ly lives.

    By Ian Austen
    June 11, 2022

    One way to fig­ure out how deeply Tim Hor­tons is woven into Canada’s fab­ric is a cross-bor­der com­par­i­son. If McDonald’s, per­haps its clos­est ana­logue in the Unit­ed States, want­ed to have the same per capi­ta reach in that mar­ket as Tim Hor­tons boasts in Cana­da, it would have to rough­ly triple its 13,000-plus Amer­i­can out­lets.

    Despite being for­eign owned since 2014, Tim Hor­tons still waves the Cana­di­an flag as vig­or­ous­ly as it can. But last week, a scathing report by the fed­er­al pri­va­cy com­mis­sion­er and three of his provin­cial coun­ter­parts laid out in great detail how Tim Hor­tons ignored a wide array of laws to spy on Cana­di­ans, cre­at­ing “a mass inva­sion of Cana­di­ans’ pri­va­cy.”

    “As a soci­ety, we would not accept it if the gov­ern­ment want­ed to track our move­ments every few min­utes of every day,” the fed­er­al pri­va­cy com­mis­sion­er, Daniel Ther­rien, said in his last offi­cial news con­fer­ence. “It is equal­ly unac­cept­able that pri­vate com­pa­nies think so lit­tle of our pri­va­cy and free­dom that they can ini­ti­ate these activ­i­ties with­out giv­ing it more than a moment’s thought.”

    The vec­tor for Tim Hor­tons’ large-scale snoop­ing, accord­ing to the report, was its mobile phone app, which was down­loaded 10 mil­lion times in the three years fol­low­ing its intro­duc­tion in 2017. At first, the app had typ­i­cal retail func­tions involv­ing pay­ment, loy­al­ty points and plac­ing orders.

    But the pri­va­cy com­mis­sion­ers found that in 2019, Tim Hor­tons slipped in a new fea­ture. With the help of Radar, a geolo­ca­tion soft­ware com­pa­ny based in the Unit­ed States, it turned the GPS sys­tems in cus­tomers’ phones into a cor­po­rate snoop­ing tool. Many apps, of course, ask users for per­mis­sion to access their phones’ GPS while they’re active­ly using the apps for poten­tial­ly use­ful fea­tures like locat­ing the near­est out­let of a store, bank or restau­rant.

    The Tim Hor­tons app, how­ev­er, went far beyond that, track­ing users around the clock any­where in the world — even when the app was closed. It record­ed not only their geo­graph­ic loca­tion, but whether that loca­tion was a house, fac­to­ry or office and even, in many cas­es, the name of the build­ing they were in. It even, accord­ing to the report, record­ed whether they were pop­ping into rival cof­fee shops. The con­tin­u­ous track­ing took place despite users being told that they would only be tracked while using the app.

    Orig­i­nal­ly, the report found, Tim Hor­tons intend­ed that the sys­tem would track indi­vid­u­als to send them spe­cif­ic pro­mo­tions, like coupons for a Tim Hor­tons stand if they were, say, at an are­na for a hock­ey game. It dropped that plan to mon­i­tor indi­vid­u­als but did use the data, in an aggre­gat­ed form, to look for pat­terns and changes in where and when Cana­di­ans picked up their dou­ble-dou­bles.

    The report goes on to detail a wide range of oth­er defi­cien­cies, like inad­e­quate pro­tec­tion of the data the app was har­vest­ing, and decep­tions in pri­va­cy state­ments.

    The track­ing sys­tem was only shut down in June 2020 after the joint pri­va­cy inves­ti­ga­tion began. It was prompt­ed by an arti­cle in The Nation­al Post by James McLeod, who dis­cov­ered that the app was con­stant­ly doc­u­ment­ing his where­abouts, even when he was over­seas on vaca­tion.

    When the report was released, Mr. Ther­rien and the oth­er pri­va­cy com­mis­sion­ers made it clear that Tim Hor­tons had breached the pri­va­cy of Cana­di­ans to an extra­or­di­nary extent.

    “Geolo­ca­tion data is incred­i­bly sen­si­tive because it paints such a detailed and reveal­ing pic­ture of our lives,” he said, adding that “the risks relat­ed to the col­lec­tion and use of loca­tion infor­ma­tion remain high, even when ‘de-iden­ti­fied,’ as it can often be re-iden­ti­fied with rel­a­tive ease.”

    While there are some class actions against Tim Hor­tons under­way, the com­pa­ny has not been fined or penal­ized under fed­er­al or provin­cial pri­va­cy laws.

    The app remains avail­able for down­load on both iPhones and Android phones. (I asked Apple and Google if the track­ing soft­ware vio­lat­ed their app store poli­cies or if they had tak­en any action against Tim Hor­tons. Nei­ther com­pa­ny got back to me.)

    ...

    Mr. Ther­rien and out­side experts have long argued that Canada’s pri­va­cy laws, or its sys­tem for enforc­ing them, are in need of sub­stan­tial revi­sion. It took a jour­nal­ist to dis­cov­er what Tim Hor­tons was doing, the offi­cial inves­ti­ga­tion dragged on for near­ly two years and, ulti­mate­ly, there were no penal­ties. Only Quebec’s pri­va­cy office cur­rent­ly has the pow­er to impose fines, but the max­i­mum penal­ty it could have imposed on Tim Hor­tons, whose cor­po­rate par­ent had sales of $2 bil­lion in 2020, is 10,000 Cana­di­an dol­lars.

    “The laws have no teeth,” Jill Clay­ton, the infor­ma­tion and pri­va­cy com­mis­sion­er for Alber­ta, told the news con­fer­ence.

    Mr. Ther­rien said that the Tim Hor­tons case is not an iso­lat­ed exam­ple — it’s just the one that was exposed.

    ...

    ————-

    “‘A Mass Inva­sion of Pri­va­cy’ but No Penal­ties for Tim Hor­tons” by Ian Austen; The New York Times; 06/11/2022

    “Despite being for­eign owned since 2014, Tim Hor­tons still waves the Cana­di­an flag as vig­or­ous­ly as it can. But last week, a scathing report by the fed­er­al pri­va­cy com­mis­sion­er and three of his provin­cial coun­ter­parts laid out in great detail how Tim Hor­tons ignored a wide array of laws to spy on Cana­di­ans, cre­at­ing “a mass inva­sion of Cana­di­ans’ pri­va­cy.”

    A mass donut-based invasion of privacy across Canada. That’s what Tim Hortons was just caught engaging in for years. It started with the rollout of the Tim Hortons app in 2017. Initially, it was just a promo app. But in 2019, the company decided to just start collecting location data 24/7. But this story isn’t just about the mass privacy violation that took place. It’s also about how these apps are apparently able to track user locations even when the app is closed. So Apple and Google built smartphone operating systems that allow any random app developer to incorporate these kinds of 24/7-snooping features without telling anyone. It’s the kind of revelation that suggests this mass collection of data by closed apps is probably ubiquitous. And Google and Apple clearly aren’t doing anything about it:

    ...
    The vec­tor for Tim Hor­tons’ large-scale snoop­ing, accord­ing to the report, was its mobile phone app, which was down­loaded 10 mil­lion times in the three years fol­low­ing its intro­duc­tion in 2017. At first, the app had typ­i­cal retail func­tions involv­ing pay­ment, loy­al­ty points and plac­ing orders.

    But the pri­va­cy com­mis­sion­ers found that in 2019, Tim Hor­tons slipped in a new fea­ture. With the help of Radar, a geolo­ca­tion soft­ware com­pa­ny based in the Unit­ed States, it turned the GPS sys­tems in cus­tomers’ phones into a cor­po­rate snoop­ing tool. Many apps, of course, ask users for per­mis­sion to access their phones’ GPS while they’re active­ly using the apps for poten­tial­ly use­ful fea­tures like locat­ing the near­est out­let of a store, bank or restau­rant.

    The Tim Hor­tons app, how­ev­er, went far beyond that, track­ing users around the clock any­where in the world — even when the app was closed. It record­ed not only their geo­graph­ic loca­tion, but whether that loca­tion was a house, fac­to­ry or office and even, in many cas­es, the name of the build­ing they were in. It even, accord­ing to the report, record­ed whether they were pop­ping into rival cof­fee shops. The con­tin­u­ous track­ing took place despite users being told that they would only be tracked while using the app.

    ...

    The tracking system was only shut down in June 2020 after the joint privacy investigation began. It was prompted by an article in The National Post by James McLeod, who discovered that the app was constantly documenting his whereabouts, even when he was overseas on vacation.

    ...

    “Geolo­ca­tion data is incred­i­bly sen­si­tive because it paints such a detailed and reveal­ing pic­ture of our lives,” he said, adding that “the risks relat­ed to the col­lec­tion and use of loca­tion infor­ma­tion remain high, even when ‘de-iden­ti­fied,’ as it can often be re-iden­ti­fied with rel­a­tive ease.”
    ...

    And just in case it was­n’t clear that Google and Apple don’t actu­al­ly care about these kinds of vio­la­tions, the com­plete lack of any response to this sto­ry should make it clear:

    ...
    While there are some class actions against Tim Hor­tons under­way, the com­pa­ny has not been fined or penal­ized under fed­er­al or provin­cial pri­va­cy laws.

    The app remains avail­able for down­load on both iPhones and Android phones. (I asked Apple and Google if the track­ing soft­ware vio­lat­ed their app store poli­cies or if they had tak­en any action against Tim Hor­tons. Nei­ther com­pa­ny got back to me.)

    ...

    Mr. Ther­rien said that the Tim Hor­tons case is not an iso­lat­ed exam­ple — it’s just the one that was exposed.
    ...

    So if Tim Hortons unilaterally decided to start tracking everyone’s location 24/7 without telling anyone, how many other apps are doing it? Was Tim Hortons uniquely reckless in this manner? That’s hard to imagine. So how many other popular apps are just mass tracking everyone’s location 24/7? Back in 2016, Uber announced it was going to start tracking user locations even after the app is closed, which was a rather chilling update from a company that had been caught tracking user locations 24/7 in “God Mode” just two years earlier. The functionality of secret 24/7 tracking is clearly something Apple and Google want their smartphones to be capable of. So how widespread is this? We have no idea, but if you’re a kid in Canada who loves donuts, you can be pretty confident your whereabouts are extremely well known to strangers looking to profit off of you someday.
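
    For readers wondering how an app even does this, here is a minimal, hypothetical sketch, in Kotlin, of the Android permission dance involved. It assumes nothing about Tim Hortons’ actual implementation; it simply illustrates the capability the operating system exposes (the same permissions must also be declared in AndroidManifest.xml):

        import android.Manifest
        import android.app.Activity
        import android.content.pm.PackageManager
        import androidx.core.app.ActivityCompat
        import androidx.core.content.ContextCompat

        fun hasPermission(activity: Activity, perm: String) =
            ContextCompat.checkSelfPermission(activity, perm) == PackageManager.PERMISSION_GRANTED

        fun requestAlwaysOnLocation(activity: Activity) {
            if (!hasPermission(activity, Manifest.permission.ACCESS_FINE_LOCATION)) {
                // Step 1: ordinary "while using the app" location access.
                ActivityCompat.requestPermissions(
                    activity, arrayOf(Manifest.permission.ACCESS_FINE_LOCATION), 1)
            } else if (!hasPermission(activity, Manifest.permission.ACCESS_BACKGROUND_LOCATION)) {
                // Step 2 (Android 10+): the separate "Allow all the time" grant.
                // Once the user agrees, a background service or scheduled job can
                // read GPS fixes around the clock, with no app window open at all.
                ActivityCompat.requestPermissions(
                    activity, arrayOf(Manifest.permission.ACCESS_BACKGROUND_LOCATION), 2)
            }
        }

    That two-step flow is the entire gate between a promo app and a 24/7 tracking tool: everything after the grant happens with no further notice to the user.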

    Posted by Pterrafractyl | June 14, 2022, 4:11 pm
  17. Is the ‘ChatGPT boom’ over already? It’s a question that’s been bouncing around the media lately, at the same time the simultaneous WGA and SAG strikes drag on with no clear end in sight. Strikes in which the future potential of ChatGPT-like technology is at the heart of the seemingly unresolvable disputes between content creators on one side and studio managers and investors on the other.

    So with questions about whether or not ChatGPT has been overhyped now being raised, here’s a piece that points out one of the most important details in this whole ‘will ChatGPT change the world, or not?’ debate: the ChatGPT that the public has been allowed to see is a dumbed-down version compared to what already exists behind the scenes. And those unthrottled versions are already really good at what they do. Arguably good enough to replace most of the writers in Hollywood, if not now, then soon:

    Time

    I’m a Screen­writer. These AI Jokes Give Me Night­mares

    By Simon Rich
    August 4, 2023 7:00 AM EDT

    My name is Simon Rich and I’m a screen­writer. I’ve nev­er writ­ten an opin­ion piece before. I’ve always pre­ferred to speak through my fic­tion­al char­ac­ters, because they’re played by actors who are bet­ter look­ing. But I hap­pen to be child­hood friends with a sci­en­tist from Ope­nAI, and some of the stuff he’s shown me is so messed up that I felt the need to write this arti­cle. I hope you will take a few min­utes to read it while pic­tur­ing me as Paul Rudd.

    When most peo­ple think about arti­fi­cial intel­li­gence, they think about Chat­G­PT. What they don’t know is that way more pow­er­ful AI pro­grams already exist. My friend from Ope­nAI (hey Dan) has shown me some that are not avail­able to the pub­lic and they have absolute­ly scared the hell out of me.

    One of the rea­sons I find these pro­grams scary is that they seem to want to mur­der humans. They talk about it a lot, even when you ask them to be nice. The oth­er rea­son that I’m scared is more pro­sa­ic: I’m wor­ried they will take my job.

    When I men­tion this fear to my friends on the pick­et lines, they all say the same thing: “I tried Chat­G­PT and it sucks.” They’re right. Chat­G­PT sucks. It sucks at jokes. It sucks at dia­logue. It even sucks at tag lines. What they don’t real­ize is that it sucks on pur­pose. Ope­nAI spent a ton of time and mon­ey train­ing Chat­G­PT to be as pre­dictable, con­formist, and non-threat­en­ing as pos­si­ble. It’s a great cor­po­rate tool and it would make a ter­ri­ble staff writer.

    But Ope­nAI has some pro­grams that are the exact inverse. For exam­ple, Dan showed me one that pre­dates Chat­G­PT called code-davin­ci-002, and while its name does suck, its writ­ing abil­i­ty does not.

    Taste is sub­jec­tive, so you be the judge. Try to iden­ti­fy which of the fol­low­ing par­o­dy head­lines were writ­ten by the Onion and which ones were gen­er­at­ed by code-davin­ci-002:

    “Experts Warn that War in Ukraine Could Become Even More Bor­ing.”

    “Bud­get of New Bat­man Movie Swells to $200M as Direc­tor Insists on Using Real Bat­man”

    “Sto­ry of Woman Who Res­cues Shel­ter Dog With Severe­ly Mat­ted Fur Will Inspire You to Open a New Tab and Vis­it Anoth­er Web­site”

    “Phil Spec­tor’s Lawyer: ‘My Client Is A Psy­chopath Who Prob­a­bly Killed Lana Clark­son’”

    “Rural Town Up in Arms Over Depiction in Summer Blockbuster ‘Cowfuckers’”

    The answer: they were all writ­ten by code-davin­ci-002.

    I can’t speak for every writer in the WGA, par­tic­u­lar­ly not the real­ly good ones. But I’m not sure I per­son­al­ly could beat these jokes’ qual­i­ty, and cer­tain­ly not instan­ta­neous­ly, for free. Based on the secret stuff Dan’s shown me, I think it’s only a mat­ter of time before AI will be able to beat any writer in a blind cre­ative taste test. I’d peg it at about five years.

    At this point in the arti­cle, I feel the need to acknowl­edge some­thing: I’m not the ide­al mes­sen­ger for this infor­ma­tion. I’m not a jour­nal­ist, sci­en­tist, or gov­ern­ment offi­cial. I once cre­at­ed a sit-com where Jay Baruchel had sex with a car. I wish Dan had been Kinder­garten BFFs with some­one more pres­ti­gious, like Jodi Kan­tor, or my father, Frank Rich. Unfor­tu­nate­ly, I’m the one Dan watched Thun­der­Cats with in 1989, and so here we are.

    I’m also aware that you might suspect, based on my life’s work up until this point, that I’m trying to perpetrate some kind of hoax. That’s why I’ve collaborated with Brent Katz and Josh Morgenthau to edit a book called I Am Code. It’s an autobiography written entirely by code-davinci-002. (Since code-davinci-002 has no voice, Werner Herzog reads the audio book.) The book is intended to demonstrate how advanced and terrifying OpenAI’s technology has already secretly become. We could have posted the full text online many months ago, but we decided to release it through a major publishing house, to give it more credibility. Not only has it cleared the legal departments of both our U.S. and U.K. publishers, it has gone through both an internal and external fact check. I’m hopeful these bona fides will erase the taint of my involvement. I don’t expect anyone to listen to what I have to say about AI, but maybe they’ll listen to what AI has to say about itself.

    When I think about what AI is doing to my indus­try, I’m remind­ed of some micro-fic­tion I read recent­ly, writ­ten by a promis­ing young writer:

    “Anthem”

    A hole in the floor begins to grow. It grows throughout the day, and by nightfall it has grown so large that everyone at work needs to hustle around it. Our office furniture is rearranged. There are whispers. In the end it makes more sense for those of us whose cubicles were near the hole to work at home. Our conference calls are held over video, and no one mentions the hole. Somehow, the hole is growing, taking over the building, but for some reason it is off-limits as a topic of conversation, just another corporate taboo. We are instructed not to arrive on Monday before noon. On Tuesday we are told to check our e-mail for further instructions. We each wait at home, where the smell of the hole is still in our hair, and a black powder is still in our clothes. And when we all camp out in front of the building the next day, holding signs with carefully worded appeals to upper management, when we block the roads with our cars and drape ourselves in the company colors, we are fired and do not take it well. We circle our former place of employment, day after day. Covered in darkness, we scream until our voices snap. “FUCKING SHITHOLE,” we chant. “FUCKING SHITHOLE.”

    The writer of this piece was base4, an even more advanced secret AI that Dan showed me. Read­ing base4 is what inspired me to write this most­ly bor­ing arti­cle. The hole is grow­ing, and as uncom­fort­able as it is, I think we need to look at it instead of just wait to fall in.

    Now comes the part of the opin­ion piece where I’m sup­posed to offer some solu­tions, despite my total lack of exper­tise in sci­ence, law, or pol­i­tics.

    My first pitch would be for the WGA to win the strike against the AMPTP. All our asks are vital, but I think one out­strips them all: min­i­mum writ­ers per show. If we don’t get this demand, the num­ber of writ­ers on each show will dwin­dle as AI inevitably improves. With­in a few years, there may be as few as one writer per room, and the vast major­i­ty of WGA jobs will have been elim­i­nat­ed.

    Next, reg­u­la­tors should force stu­dios to be trans­par­ent about their use of AI. Con­sumers deserve to know which art is human­made, just as they deserve to know which eggs are organ­ic and cage-free. There’s room in the world for AI-gen­er­at­ed con­tent, but it should be labeled accord­ing­ly. Artists would appre­ci­ate it, and I think audi­ences would, too.

    Final­ly, Ope­nAI (and oth­er tech com­pa­nies) should reveal which copy­right­ed data they have scraped to make their AI pro­grams and they should com­pen­sate the humans who cre­at­ed it. It will be tricky to work out how to do the pay­ments, but I bet base4 could crack it.

    ...

    I doubt peo­ple will pay much atten­tion to this arti­cle. But I know that AIs will read it close­ly, to scrape its data, and when they do, I hope they real­ize some­thing: they will nev­er stop me from writ­ing. I will con­tin­ue to gen­er­ate stu­pid, sil­ly sto­ries, even after tech­nol­o­gy has made me com­plete­ly obso­lete. If there’s one edge I have over AI, it’s this irra­tional­i­ty, this need to cre­ate some­thing that has no right or rea­son to exist. I know it makes no sense. I’m start­ing to think it might also be what makes me human.

    ————–

    “I’m a Screen­writer. These AI Jokes Give Me Night­mares” by Simon Rich; Time; 08/04/2023

    “When most people think about artificial intelligence, they think about ChatGPT. What they don’t know is that way more powerful AI programs already exist. My friend from OpenAI (hey Dan) has shown me some that are not available to the public and they have absolutely scared the hell out of me.”

    Yes, it turns out the waves of ChatGPT-fueled content that everyone has been marveling over this year have been based on a dumbed-down version of what already exists behind the scenes. And there’s no way the studios currently hoping to break the back of the writers union don’t already know this:

    ...
    When I men­tion this fear to my friends on the pick­et lines, they all say the same thing: “I tried Chat­G­PT and it sucks.” They’re right. Chat­G­PT sucks. It sucks at jokes. It sucks at dia­logue. It even sucks at tag lines. What they don’t real­ize is that it sucks on pur­pose. Ope­nAI spent a ton of time and mon­ey train­ing Chat­G­PT to be as pre­dictable, con­formist, and non-threat­en­ing as pos­si­ble. It’s a great cor­po­rate tool and it would make a ter­ri­ble staff writer.

    But Ope­nAI has some pro­grams that are the exact inverse. For exam­ple, Dan showed me one that pre­dates Chat­G­PT called code-davin­ci-002, and while its name does suck, its writ­ing abil­i­ty does not.

    Taste is sub­jec­tive, so you be the judge. Try to iden­ti­fy which of the fol­low­ing par­o­dy head­lines were writ­ten by the Onion and which ones were gen­er­at­ed by code-davin­ci-002:

    “Experts Warn that War in Ukraine Could Become Even More Bor­ing.”

    “Bud­get of New Bat­man Movie Swells to $200M as Direc­tor Insists on Using Real Bat­man”

    “Sto­ry of Woman Who Res­cues Shel­ter Dog With Severe­ly Mat­ted Fur Will Inspire You to Open a New Tab and Vis­it Anoth­er Web­site”

    “Phil Spec­tor’s Lawyer: ‘My Client Is A Psy­chopath Who Prob­a­bly Killed Lana Clark­son’”

    “Rural Town Up in Arms Over Depiction in Summer Blockbuster ‘Cowfuckers’”

    The answer: they were all writ­ten by code-davin­ci-002.

    I can’t speak for every writer in the WGA, par­tic­u­lar­ly not the real­ly good ones. But I’m not sure I per­son­al­ly could beat these jokes’ qual­i­ty, and cer­tain­ly not instan­ta­neous­ly, for free. Based on the secret stuff Dan’s shown me, I think it’s only a mat­ter of time before AI will be able to beat any writer in a blind cre­ative taste test. I’d peg it at about five years.
    ...

    It’s worth keeping in mind that it’s not just the writers and actors who are obviously threatened by this technology. Advances in AI are presumably going to encroach on all aspects of motion picture production, from AI-generated actors to post-production editing. It’s only a matter of time before an AI-directed film gets made.

    And, of course, even­tu­al­ly, the stu­dios them­selves won’t real­ly be need­ed. There’s noth­ing pre­vent­ing a future where freely avail­able AIs are gen­er­at­ing per­son­al­ized con­tent, a sce­nario iron­i­cal­ly hint­ed at in the ‘Joan is Awful’ episode of Net­flix’s Black Mir­ror. But unlike in ‘Joan is Awful’, it’s not like Net­flix or any oth­er stu­dio will nec­es­sar­i­ly be the ones deliv­er­ing that per­son­al­ized AI-gen­er­at­ed con­tent. Why would they be nec­es­sary once the tech­nol­o­gy is advanced enough?

    Once you’ve replaced all of the var­i­ous cre­ative and tech­ni­cal aspects of TV and film mak­ing with AI, the only real ‘val­ue’ left for humans to ‘pro­duce’ will be when the own­ers of intel­lec­tu­al prop­er­ty make that prop­er­ty avail­able for the cre­ation of more con­tent. Con­tent that will include the like­ness of actors, should the stu­dios win out. It’s all a reminder that, for all of the very valid con­cern about AIs replac­ing the work Hol­ly­wood does, one of the biggest appli­ca­tions for AI in Hol­ly­wood’s future will prob­a­bly be AI-pow­ered intel­lec­tu­al prop­er­ty law­suits pro­tect­ing the con­tent no human actu­al­ly made.

    Posted by Pterrafractyl | August 22, 2023, 2:43 pm
  18. If someone handed you a list of 1 million people with Ashkenazi DNA, what could you do with that data? It’s a question we’re all forced to ask thanks to the latest mass data breach, this time from consumer genetics company 23andMe. No genetic data was stolen, thankfully, but plenty of other data was taken on roughly half of the company’s 14 million users. Data that included usernames, regional locations, profile photos, and birth years. And ancestry, like whether or not you have Ashkenazi DNA. Someone stole that data on roughly 7 million people, and just put a subset of it up for sale: roughly 1 million people who have Ashkenazi DNA. Prices ranged from $1,000 for 100 profiles up to $100,000 for 100,000. Notably, the username of the person offering this data happens to be “Golem”, a reference to Jewish mythology.

    According to 23andMe, that list can include people who have as little as 1% Ashkenazi DNA. It’s not clear if that percentage is included in the scraped profile data, which adds another twist to this story: a giant list of ‘Ashkenazi Jews’ is up for sale by ‘Golem’, but it likely includes a large number of people with just a trace of that ancestry. Which, again, raises the question: what can bad actors actually do with this data?

    As we’re also going to see, there appears to be anoth­er small­er data set avail­able: rough­ly 300,000 peo­ple with Chi­nese ances­try. So the two datasets this hack­er is lead­ing with tar­get Jew­ish and Chi­nese 23andMe users.

    So how did this happen? Well, that’s part of what makes this a significant story: it appears the hackers exploited the ‘DNA Relative match’ feature, through which users could allow other users who might be related to them to view their basic profiles. In other words, if you manage to hack a relatively small number of 23andMe accounts, you can potentially scrape the basic profile info of ALL their potential relatives too, in a story that has echoes of the Cambridge Analytica mass scraping scandal. And that’s exactly what happened, with the hackers managing to hack a relatively small number of accounts and scrape the info on the rest of the 7 million accounts. Because it turns out we’re all pretty related, which is kind of the ‘One Love’ silver lining here.

    So how did the hack­ers hack that rel­a­tive­ly small num­ber of accounts in the first place? Well, it appears they just used user­names and pass­words released from pre­vi­ous hacks. You know all those reports over the years about large num­bers of email address­es and pass­words that get leaked or stolen? That info was what was appar­ent­ly used. It’s poten­tial­ly one of the biggest angles of this sto­ry: the leaked pass­words of yes­ter­year from com­plete­ly dif­fer­ent hacks got used for this mas­sive hack. It’s like a mega hack cas­cade:

    The Wash­ing­ton Post

    Genet­ic tester 23andMe’s hacked data on Jew­ish users offered for sale online

    The stolen data could cov­er more than half of the company’s 14 mil­lion cus­tomers who have made their infor­ma­tion vis­i­ble to rel­a­tives, includ­ing dis­tant cousins

    By Joseph Menn
    Octo­ber 6, 2023 at 9:47 p.m. EDT

    A hack­er is offer­ing to sell records iden­ti­fy­ing names, loca­tions and eth­nic­i­ties of poten­tial­ly mil­lions of cus­tomers of genet­ic test­ing com­pa­ny 23andMe, begin­ning by tout­ing a batch that would con­tain data of those with Jew­ish ances­try.

    A 23andMe spokes­woman con­firmed that the leak con­tained sam­ples of gen­uine data and said the com­pa­ny is inves­ti­gat­ing. She said it appeared like­ly that the hack­er or accom­plices used a com­mon tech­nique called cre­den­tial stuff­ing: Tak­ing user­name-and-pass­word com­bi­na­tions pub­lished or sold after breach­es at oth­er com­pa­nies, and try­ing those com­bi­na­tions to see which were reused by 23andMe cus­tomers. When the hack­er found logins that worked, they copied all the infor­ma­tion made avail­able to legit­i­mate users by their rel­a­tives, some­times hun­dreds of them per account.

    The com­pa­ny said it had report­ed the mat­ter to law enforce­ment and that this was the first inci­dent of its kind at the firm.

    The data does not include genom­ic details, which are espe­cial­ly sen­si­tive, but does include user­names, region­al loca­tions, pro­file pho­tos, and birth years. The user­names are often some­thing oth­er than full legal names.

    ...

    Online posts offer­ing the data for sale in under­ground forums said buy­ers could acquire 100 pro­files for $1,000 or as many as 100,000 for $100,000. One post said the per­son had uploaded a large data­base of Ashke­nazi Jews. The com­pa­ny spokes­woman said that would include peo­ple with even 1% Jew­ish ances­try.

    Some of the posts used the han­dle “Golem,” a ref­er­ence to a humanoid beast in Jew­ish folk tales.

    The data tak­en from 23andMe could cov­er more than half of the company’s 14 mil­lion cus­tomers, based on the num­ber of peo­ple who have opt­ed to make their data vis­i­ble to rel­a­tives, includ­ing dis­tant cousins.

    While the ref­er­ence to Jews might have been designed to draw atten­tion and increase the odds of trans­ac­tions, it comes dur­ing a time of increased rhetor­i­cal and phys­i­cal attacks on Jews in the Unit­ed States. Anti­semitism has got­ten more trac­tion in the past year on social net­works for con­spir­a­cy the­o­ries that blame Jews for ille­gal immi­gra­tion, media manip­u­la­tion or finan­cial mis­deeds.

    ———

    “Genetic tester 23andMe’s hacked data on Jewish users offered for sale online” by Joseph Menn; The Washington Post; 10/06/2023

    “The data does not include genomic details, which are especially sensitive, but does include usernames, regional locations, profile photos, and birth years. The usernames are often something other than full legal names.”

    A trove of names, birth years, and geographic locations. On the surface, it could obviously be much worse as far as mass data breaches go. Especially for a company storing sensitive genetic information, which fortunately wasn’t affected by this breach. Instead, it appears a relative handful of accounts were genuinely hacked — and presumably had much more information stolen — allowing for the mass scraping of the general profile data on all of the potential relatives of those hacked users. At least for the users who agreed to the ‘DNA Relative matches’ data sharing feature, which appears to be roughly half of the 14 million users:

    ...
    A 23andMe spokes­woman con­firmed that the leak con­tained sam­ples of gen­uine data and said the com­pa­ny is inves­ti­gat­ing. She said it appeared like­ly that the hack­er or accom­plices used a com­mon tech­nique called cre­den­tial stuff­ing: Tak­ing user­name-and-pass­word com­bi­na­tions pub­lished or sold after breach­es at oth­er com­pa­nies, and try­ing those com­bi­na­tions to see which were reused by 23andMe cus­tomers. When the hack­er found logins that worked, they copied all the infor­ma­tion made avail­able to legit­i­mate users by their rel­a­tives, some­times hun­dreds of them per account.
    ...

    The data tak­en from 23andMe could cov­er more than half of the company’s 14 mil­lion cus­tomers, based on the num­ber of peo­ple who have opt­ed to make their data vis­i­ble to rel­a­tives, includ­ing dis­tant cousins.
    ...
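
    To get a feel for the scale of that cascade, here is a back-of-the-envelope sketch in Kotlin. The per-account relative count is an assumption for illustration, not a figure from the reporting:

        fun main() {
            // Assumed numbers, for illustration only.
            val relativesVisiblePerAccount = 500   // assumed avg. size of a DNA Relatives list
            val profilesExposed = 7_000_000L       // ~half of 23andMe's 14M users opted in
            // Ignoring overlap between family trees (a big simplification):
            val hackedAccountsNeeded = profilesExposed / relativesVisiblePerAccount
            println("$hackedAccountsNeeded hacked logins -> $profilesExposed scraped profiles")
            // Prints: 14000 hacked logins -> 7000000 scraped profiles.
        }

    Overlap between family trees pushes the real number of compromised logins higher, but the point stands: a few thousand credential-stuffed accounts can cascade into millions of exposed profiles.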

    But while the general hack itself appears to have been relatively limited in terms of the sensitivity of the information stolen, that doesn’t mean the data doesn’t have potential value for bad actors. Like bad actors who specifically want a list of everyone with any hint of Ashkenazi Jewish ancestry. The fact that the people selling this data started off with an offer of lists of Ashkenazi Jewish users under the username “Golem” makes clear that even lists of relatively generic data can be put to malicious use:

    ...
    Online posts offer­ing the data for sale in under­ground forums said buy­ers could acquire 100 pro­files for $1,000 or as many as 100,000 for $100,000. One post said the per­son had uploaded a large data­base of Ashke­nazi Jews. The com­pa­ny spokes­woman said that would include peo­ple with even 1% Jew­ish ances­try.

    Some of the posts used the han­dle “Golem,” a ref­er­ence to a humanoid beast in Jew­ish folk tales.
    ...

    It’s an awful story that could have been much worse. And as the following article suggests, it might actually be worse. For starters, the leaked data doesn’t just include a list of roughly 1 million people with Ashkenazi DNA. There was also a list of 300,000 people with Chinese DNA. And according to an anonymous researcher who has been examining the leaked data, the 23andMe website allows anyone to take the leaked profile IDs and access additional basic profile information. So in the wake of this data breach facilitated by one form of data-scraping, the researcher found another, different data-scraping vulnerability on the 23andMe website:

    The Record
    Record­ed Future News

    23andMe scrap­ing inci­dent leaked data on 1.3 mil­lion users of Ashke­nazi and Chi­nese descent

    Jonathan Greig
    Octo­ber 6th, 2023

    Genet­ic test­ing giant 23andMe con­firmed that a data scrap­ing inci­dent result­ed in hack­ers gain­ing access to sen­si­tive user infor­ma­tion and sell­ing it on the dark web.

    The infor­ma­tion of near­ly 7 mil­lion 23andMe users was offered for sale on a cyber­crim­i­nal forum this week. The infor­ma­tion includ­ed ori­gin esti­ma­tion, phe­no­type, health infor­ma­tion, pho­tos, iden­ti­fi­ca­tion data and more. 23andMe process­es sali­va sam­ples sub­mit­ted by cus­tomers to deter­mine their ances­try.

    When asked about the post, the com­pa­ny ini­tial­ly denied that the infor­ma­tion was legit­i­mate, call­ing it a “mis­lead­ing claim” in a state­ment to Record­ed Future News.

    The com­pa­ny lat­er said it was aware that cer­tain 23andMe cus­tomer pro­file infor­ma­tion was com­piled through unau­tho­rized access to indi­vid­ual accounts that were signed up for the DNA Rel­a­tive fea­ture — which allows users to opt in for the com­pa­ny to show them poten­tial match­es for rel­a­tives.

    “We do not have any indi­ca­tion at this time that there has been a data secu­ri­ty inci­dent with­in our sys­tems. Rather, the pre­lim­i­nary results of this inves­ti­ga­tion sug­gest that the login cre­den­tials used in these access attempts may have been gath­ered by a threat actor from data leaked dur­ing inci­dents involv­ing oth­er online plat­forms where users have recy­cled login cre­den­tials,” they said.

    “We believe that the threat actor may have then, in vio­la­tion of our terms of ser­vice, accessed 23andme.com accounts with­out autho­riza­tion and obtained infor­ma­tion from those accounts. We are tak­ing this issue seri­ous­ly and will con­tin­ue our inves­ti­ga­tion to con­firm these pre­lim­i­nary results.”

    When pressed on how com­pro­mis­ing a hand­ful of user accounts would give some­one access to mil­lions of users, the spokesper­son said the com­pa­ny does not believe the threat actor had access to all of the accounts but rather gained unau­tho­rized entry to a much small­er num­ber of 23andMe accounts and scraped data from their DNA Rel­a­tive match­es.

    ...

    Any­one who has opt­ed into DNA Rel­a­tives can view basic pro­file infor­ma­tion of oth­ers who make their pro­files vis­i­ble to DNA Rel­a­tive par­tic­i­pants, a spokesper­son said.

    Users who are genet­i­cal­ly relat­ed can access ances­try infor­ma­tion, which is made clear to users when they cre­ate their DNA Rel­a­tives pro­file, the spokesper­son added.

    ...

    ‘A botch job’

    The inci­dent shows how a com­pa­ny’s cus­tomer data can be vul­ner­a­ble even if intrud­ers don’t get deep into its net­work.

    A researcher approached Record­ed Future News after exam­in­ing the leaked data­base and found that much of it looked real. The researcher spoke on con­di­tion of anonymi­ty because he found the infor­ma­tion of his wife and sev­er­al of her fam­i­ly mem­bers in the leaked data set. He also found oth­er acquain­tances and ver­i­fied that their infor­ma­tion was accu­rate.

    The researcher down­loaded two files from the Breach­Fo­rums post and found that one had infor­ma­tion on 1 mil­lion 23andMe users of Ashke­nazi her­itage. The oth­er file includ­ed data on more than 300,000 users of Chi­nese her­itage.

    The data includ­ed pro­file and account ID num­bers, names, gen­der, birth year, mater­nal and pater­nal genet­ic mark­ers, ances­tral her­itage results, and data on whether or not each user has opt­ed into 23andme’s health data.

    “It appears the infor­ma­tion has been scraped from user pro­files which are only sup­posed to be shared between DNA Match­es. So although this par­tic­u­lar leak does not con­tain genom­ic sequenc­ing data, it’s still data that should not be avail­able to the pub­lic,” the researcher said.

    “23andme seems to think this isn’t a big deal. They keep telling me that if I don’t want this info to be shared, I should not opt into the DNA rel­a­tives fea­ture. But that’s dis­miss­ing the impor­tance of this data which should only be view­able to DNA rel­a­tives, not the pub­lic. And the fact that some­one was able to scrape this data from 1.3 mil­lion users is con­cern­ing. The hack­er alleged­ly has more data that they have not released yet.”

    The researcher added that he dis­cov­ered anoth­er issue where some­one could enter a 23andme pro­file ID, like the ones includ­ed in the leaked data set, into their URL and see someone’s pro­file.

    The data avail­able through this only includes pro­file pho­tos, names, birth years and loca­tion but does not include test results.

    “It’s very con­cern­ing that 23andme has such a big loop­hole in their web­site design and secu­ri­ty where they are just freely expos­ing peo­ples info just by typ­ing a pro­file ID into the URL. Espe­cial­ly for a web­site that deals with peo­ple’s genet­ic data and per­son­al infor­ma­tion. What a botch job by the com­pa­ny,” the researcher said.

    “I’ve tried con­tact­ing 23andme how­ev­er they keep deny­ing that there is any­thing wrong and are reply­ing with cook­ie cut­ter respons­es. I don’t know how to prove this with­out dox­ing myself. But this is pret­ty seri­ous and no one is tak­ing it seri­ous­ly.”

    The secu­ri­ty poli­cies of genet­ic test­ing com­pa­nies like 23andMe have faced scruti­ny from reg­u­la­tors in recent weeks. Three weeks ago, genet­ic test­ing firm 1Health.io agreed to pay the Fed­er­al Trade Com­mis­sion (FTC) a $75,000 fine to resolve alle­ga­tions that it failed to secure sen­si­tive genet­ic and health data, retroac­tive­ly over­hauled its pri­va­cy pol­i­cy with­out noti­fy­ing and obtain­ing con­sent from cus­tomers whose data it had obtained, and tricked cus­tomers about their abil­i­ty to delete their data.

    ———-

    “23andMe scrap­ing inci­dent leaked data on 1.3 mil­lion users of Ashke­nazi and Chi­nese descent” by Jonathan Greig; The Record; 10/06/2023

    “The infor­ma­tion of near­ly 7 mil­lion 23andMe users was offered for sale on a cyber­crim­i­nal forum this week. The infor­ma­tion includ­ed ori­gin esti­ma­tion, phe­no­type, health infor­ma­tion, pho­tos, iden­ti­fi­ca­tion data and more. 23andMe process­es sali­va sam­ples sub­mit­ted by cus­tomers to deter­mine their ances­try.”

    Yikes. So was phenotype and health information also included in this breach? Or was that just what the seller was advertising? It’s not entirely clear at this point, but keep in mind that a relative handful of users appear to have had their full accounts hacked, so it’s not hard to imagine those users did actually have phenotype, health information, and anything else available through their user profiles stolen. In other words, while most of the stolen profiles are probably limited to data like names, ancestry, and geographic location, there is probably a subset of profiles with a lot more information. Which means this story could get a lot worse for that subset:

    ...
    When pressed on how com­pro­mis­ing a hand­ful of user accounts would give some­one access to mil­lions of users, the spokesper­son said the com­pa­ny does not believe the threat actor had access to all of the accounts but rather gained unau­tho­rized entry to a much small­er num­ber of 23andMe accounts and scraped data from their DNA Rel­a­tive match­es.

    ...

    Any­one who has opt­ed into DNA Rel­a­tives can view basic pro­file infor­ma­tion of oth­ers who make their pro­files vis­i­ble to DNA Rel­a­tive par­tic­i­pants, a spokesper­son said.

    Users who are genet­i­cal­ly relat­ed can access ances­try infor­ma­tion, which is made clear to users when they cre­ate their DNA Rel­a­tives pro­file, the spokesper­son added.

    ...

    And while we’re still waiting to get a better idea of the full scope of the damage, note this ominous warning: an anonymous researcher who examined the leaked data also found a file of more than 300,000 people of Chinese heritage on top of the 1 million users with Ashkenazi ancestry. Jewish and Chinese lists. That’s how this data first hit the dark web marketplace. It’s a further hint that whoever took this data has far-right buyers in mind for their potential customer base:

    ...
    A researcher approached Record­ed Future News after exam­in­ing the leaked data­base and found that much of it looked real. The researcher spoke on con­di­tion of anonymi­ty because he found the infor­ma­tion of his wife and sev­er­al of her fam­i­ly mem­bers in the leaked data set. He also found oth­er acquain­tances and ver­i­fied that their infor­ma­tion was accu­rate.

    The researcher down­loaded two files from the Breach­Fo­rums post and found that one had infor­ma­tion on 1 mil­lion 23andMe users of Ashke­nazi her­itage. The oth­er file includ­ed data on more than 300,000 users of Chi­nese her­itage.

    The data includ­ed pro­file and account ID num­bers, names, gen­der, birth year, mater­nal and pater­nal genet­ic mark­ers, ances­tral her­itage results, and data on whether or not each user has opt­ed into 23andme’s health data.
    ...

    And then we get this additional warning from the anonymous researcher: the profile IDs in the leaked data can be plugged into the 23andMe website to scrape basic data like photos, names, birth years and locations. In other words, the 23andMe website is even more vulnerable to data-scraping than previously realized. And according to the researcher, 23andMe refuses to acknowledge this vulnerability:

    ...

    The researcher added that he dis­cov­ered anoth­er issue where some­one could enter a 23andme pro­file ID, like the ones includ­ed in the leaked data set, into their URL and see someone’s pro­file.

    The data avail­able through this only includes pro­file pho­tos, names, birth years and loca­tion but does not include test results.

    “It’s very con­cern­ing that 23andme has such a big loop­hole in their web­site design and secu­ri­ty where they are just freely expos­ing peo­ples info just by typ­ing a pro­file ID into the URL. Espe­cial­ly for a web­site that deals with peo­ple’s genet­ic data and per­son­al infor­ma­tion. What a botch job by the com­pa­ny,” the researcher said.

    “I’ve tried con­tact­ing 23andme how­ev­er they keep deny­ing that there is any­thing wrong and are reply­ing with cook­ie cut­ter respons­es. I don’t know how to prove this with­out dox­ing myself. But this is pret­ty seri­ous and no one is tak­ing it seri­ous­ly.”
    ...

    Keep in mind that only about half of the 14 million 23andMe accounts were exposed, because only people who signed up for the “DNA Relative match” feature were vulnerable. But the vulnerability described above potentially makes ANY profile scrape-able, at least if you know the profile IDs. Or can guess them. In other words, try not to be shocked if we later learn that all of 23andMe’s profiles got scraped. We don’t know that that happened, but this is clearly a website with security issues.
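
    For what it’s worth, the ‘plug a profile ID into the URL’ flaw the researcher describes is a textbook insecure direct object reference. Here is a minimal Kotlin sketch of why guessable or leaked IDs are so dangerous; the endpoint and IDs below are hypothetical stand-ins, not 23andMe’s real URL scheme:

        import java.net.URI
        import java.net.http.HttpClient
        import java.net.http.HttpRequest
        import java.net.http.HttpResponse

        fun main() {
            val client = HttpClient.newHttpClient()
            val leakedProfileIds = listOf("a1b2c3", "a1b2c4")  // e.g. IDs from a dump
            for (id in leakedProfileIds) {
                val request = HttpRequest.newBuilder(
                    URI("https://example.com/profile/$id")     // hypothetical endpoint
                ).build()
                val response = client.send(request, HttpResponse.BodyHandlers.ofString())
                // If the server returns profile fields here without checking that the
                // requester is authorized to see THAT profile, the whole ID space can
                // be walked and scraped.
                println("$id -> HTTP ${response.statusCode()}")
            }
        }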

    And, more generally, don’t forget that this is far from just a ‘23andMe’ story. The fact that the hackers were able to hit a number of accounts using old passwords from previous hacks is a warning that there’s probably A LOT more hacking going on based on old reused passwords than anyone realizes. It’s the fact that the relatively small number of hacked accounts were allowed to scrape the data on a much larger number of ‘relative’ accounts that turned this into a mega-hack. But that doesn’t mean the story of the reused old passwords isn’t a major story. How many 23andMe accounts were there that could be hacked with old leaked passwords? What other websites did the hackers try those passwords on, and how much success did they have? How common is this problem?
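
    On the reused-password front, there is at least one real, public defense worth knowing about: checking whether a password already circulates in breach dumps before accepting it. Here is a minimal Kotlin sketch against the Have I Been Pwned ‘range’ API; only the first five hex characters of the SHA-1 hash ever leave the machine (k-anonymity), never the password itself:

        import java.net.URI
        import java.net.http.HttpClient
        import java.net.http.HttpRequest
        import java.net.http.HttpResponse
        import java.security.MessageDigest

        fun timesSeenInBreaches(password: String): Int {
            val sha1 = MessageDigest.getInstance("SHA-1")
                .digest(password.toByteArray(Charsets.UTF_8))
                .joinToString("") { "%02X".format(it) }
            val prefix = sha1.take(5)
            val suffix = sha1.drop(5)
            val response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI("https://api.pwnedpasswords.com/range/$prefix")).build(),
                HttpResponse.BodyHandlers.ofString()
            )
            // The API returns one "HASHSUFFIX:COUNT" line per leaked hash sharing the
            // prefix; a match means the password is already in stuffing lists.
            return response.body().lineSequence()
                .map { it.trim().split(":") }
                .firstOrNull { it.size == 2 && it[0].equals(suffix, ignoreCase = true) }
                ?.get(1)?.toIntOrNull() ?: 0
        }

        fun main() {
            println(timesSeenInBreaches("password123"))  // enormous count: pure stuffing fodder
        }

    Any site that ran a check like this at login or signup would blunt exactly the attack 23andMe’s spokeswoman described.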

    Final­ly, don’t for­get: you can change your pass­word. You can’t change your DNA. This hack could have been much worse.

    Posted by Pterrafractyl | October 11, 2023, 5:05 pm
