Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

For The Record  

FTR #859 Because They Can: Update on Technocratic Fascism

Dave Emory’s entire lifetime of work is available on a flash drive that can be obtained here. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by late spring of 2015. The new drive (available for a tax-deductible contribution of $65.00 or more) contains FTR #850.

WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE.

You can subscribe to e‑mail alerts from Spitfirelist.com HERE.

You can subscribe to RSS feed from Spitfirelist.com HERE.

You can subscribe to the comments made on programs and posts–an excellent source of information in, and of, itself HERE.

This program was recorded in one, 60-minute segment.

Introduction: Albert Einstein said of the invention of the atomic bomb: “Everything has changed but our way of thinking.” We feel that other, more recent developments in the world of Big Tech warrant the same type of warning.

This program further explores the Brave New World being midwived by technocrats. These stunning developments should be viewed against the background of what we call “technocratic fascism,” referencing a vitally important article by David Golumbia. ” . . . . Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (“civic,” “ethical,” “white” and “black” hat alike), hacktivists, WikiLeaks fans [and Julian Assange et al–D. E.], Anonymous “members,” even Edward Snowden himself walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands. . . . First, [Tor co-creator] Dingledine claimed that Tor must be supported because it follows directly from a fundamental “right to privacy.” Yet when pressed—and not that hard—he admits that what he means by “right to privacy” is not what any human rights body or “particular legal regime” has meant by it. Instead of talking about how human rights are protected, he asserts that human rights are natural rights and that these natural rights create natural law that is properly enforced by entities above and outside of democratic polities. Where the UN’s Universal Declaration on Human Rights of 1948 is very clear that states and bodies like the UN to which states belong are the exclusive guarantors of human rights, whatever the origin of those rights, Dingledine asserts that a small group of software developers can assign to themselves that role, and that members of democratic polities have no choice but to accept them having that role. . . . Further, it is hard not to notice that the appeal to natural rights is today most often associated with the political right, for a variety of reasons (ur-neocon Leo Strauss was one of the most prominent 20th century proponents of these views). We aren’t supposed to endorse Tor because we endorse the right: it’s supposed to be above the left/right distinction. But it isn’t. . . .”

We begin by examining a couple of articles relevant to the world of credit.

Big Tech and Big Data have reached the point where, for all intents and purposes, credit card users and virtually everyone else have no personal privacy. Even without detailed personal information, capable tech operators can pinpoint individuals’ identities with an extraordinary degree of precision using a surprisingly small amount of information.

Compounding the worries of those seeking credit is a new Facebook “app” that will enable banks to determine how poor a customer’s friends are and to deny the unsuspecting credit on that basis!

Even as Big Tech is permitting financial institutions to zero in on customers to an unprecedented degree, it is moving in the direction of obscuring the doings of Banksters. The Symphony network offers end-to-end encryption that appears to make the operations of the financial institutions using it opaque to regulators.

A new variant of the Bitcoin technology will not only facilitate the use of Bitcoin to assassinate public figures but may very well replace–to a certain extent–the functions performed by attorneys. (We have covered Bitcoin–an apparent Underground Reich invention–in FTR #’s 760, 764, 770, 785.)

As frightening as some of the above possibilities may be, things may get dramatically worse with the introduction of “the Internet of Things,” permitting the hacking of many types of everyday technologies, as well as the use of those technologies to give Big Tech and Big Data unprecedented intrusion into people’s lives.

Program Highlights Include:

  • Discussion of the hacking of an automobile using a laptop.
  • Comparison of the developments of Big Tech and Big Data to magic and the implications for a species that remains true to its neanderthal, femur-cracking, marrow-sucking roots.
  • Review of some of the points covered in L‑2.
  • The need for vastly bigger, rigorously regulated government instead of the fascism inherent in the libertarian doctrine.
  • How hackers are attempting to extort users of the Ashley Madison cheaters website.

1. Big Tech and Big Data have reached the point where, for all intents and purposes, credit card users and virtually everyone else have no personal privacy. Even without detailed personal information, capable tech operators can pinpoint individuals’ identities with an extraordinary degree of precision using a surprisingly small amount of information.

“The Singularity Is Already Here–Its Name Is Big Data” submitted by Ben Hunt; Zerohedge.com; 2/08/2015.

Last Thursday the journal Science published an article by four MIT-affiliated data scientists (Sandy Pentland is in the group, and he’s a big name in these circles), titled “Unique in the shopping mall: On the reidentifiability of credit card metadata”. Sounds innocuous enough, but here’s the summary from the front page WSJ article describing the findings:

Researchers at the Massachusetts Institute of Technology, writing Thursday in the journal Science, analyzed anonymous credit-card transactions by 1.1 million people. Using a new analytic formula, they needed only four bits of secondary information—metadata such as location or timing—to identify the unique individual purchasing patterns of 90% of the people involved, even when the data were scrubbed of any names, account numbers or other obvious identifiers.

Still not sure what this means? It means that I don’t need your name and address, much less your social security number, to know who you ARE. With a trivial amount of transactional data I can figure out where you live, what you do, who you associate with, what you buy and what you sell. I don’t need to steal this data, and frankly I wouldn’t know what to do with your social security number even if I had it … it would just slow down my analysis. No, you give me everything I need just by living your very convenient life, where you’ve volunteered every bit of transactional information in the fine print of all of these wondrous services you’ve signed up for. And if there’s a bit more information I need – say, a device that records and transmits your driving habits – well, you’re only too happy to sell that to me for a few dollars off your insurance policy. After all, you’ve got nothing to hide. It’s free money!
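To make the trick concrete, here is a minimal sketch in Python of the reidentification idea. It uses randomly generated toy data standing in for the study’s real credit-card metadata; every name and number below is an illustrative assumption, not the paper’s method.

    # Toy illustration: even with names and account numbers stripped, a few
    # (shop, day) points from a person's transaction history are usually
    # unique to that person. All data here is randomly generated.
    import random

    random.seed(1)
    N_PEOPLE, N_SHOPS, N_DAYS, N_TXNS = 1000, 50, 30, 40

    # Each "anonymous" record is just a set of (shop, day) transaction points.
    histories = [
        {(random.randrange(N_SHOPS), random.randrange(N_DAYS))
         for _ in range(N_TXNS)}
        for _ in range(N_PEOPLE)
    ]

    def matching_people(points):
        """Count how many histories contain all of the observed points."""
        return sum(1 for h in histories if points <= h)

    TRIALS, unique = 200, 0
    for _ in range(TRIALS):
        person = random.choice(histories)
        observed = set(random.sample(sorted(person), 4))  # four known points
        if matching_people(observed) == 1:
            unique += 1

    print(f"{100 * unique / TRIALS:.0f}% uniquely identified from 4 points")

On this toy data nearly every history turns out to be unique, which is the study’s point: the pattern itself, not the name attached to it, is the identifier.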

Almost every investor I know believes that the tools of surveillance and Big Data are only used against the marginalized Other – terrorist “sympathizers” in Yemen, gang “associates” in Compton – but not us. Oh no, not us. And if those tools are trained on us, it’s only to promote “transparency” and weed out the bad guys lurking in our midst. Or maybe to suggest a movie we’d like to watch. What could possibly be wrong with that? I’ve written a lot (here, here, and here) about what’s wrong with that, about how the modern fetish with transparency, aided and abetted by technology and government, perverts the core small‑l liberal institutions of markets and representative government.

It’s not that we’re complacent about our personal information. On the contrary, we are obsessed about the personal “keys” that are meaningful to humans – names, social security numbers, passwords and the like – and we spend billions of dollars and millions of hours every year to control those keys, to prevent them from falling into the wrong hands of other humans. But we willingly hand over a different set of keys to non-human hands without a second thought.

The problem is that our human brains are wired to think of data processing in human ways, and so we assume that computerized systems process data in these same human ways, albeit more quickly and more accurately. Our science fiction is filled with computer systems that are essentially god-like human brains, machines that can talk and “think” and manipulate physical objects, as if sentience in a human context is the pinnacle of data processing! This anthropomorphic bias drives me nuts, as it dampens both the sense of awe and the sense of danger we should be feeling at what already walks among us. It seems like everyone and his brother today are wringing their hands about AI and some impending “Singularity”, a moment of future doom where non-human intelligence achieves some human-esque sentience and decides in Matrix-like fashion to turn us into batteries or some such. Please. The Singularity is already here. Its name is Big Data.

Big Data is magic, in exactly the sense that Arthur C. Clarke wrote of sufficiently advanced technology. It’s magic in a way that thermonuclear bombs and television are not, because for all the complexity of these inventions they are driven by cause and effect relationships in the physical world that the human brain can process comfortably, physical world relationships that might not have existed on the African savanna 2,000,000 years ago but are understandable with the sensory and neural organs our ancestors evolved on that savanna. Big Data systems do not “see” the world as we do, with merely 3 dimensions of physical reality. Big Data systems are not social animals, evolved by nature and trained from birth to interpret all signals through a social lens. Big Data systems are sui generis, a way of perceiving the world that may have been invented by human ingenuity and can serve human interests, but are utterly non-human and profoundly not of this world.

A Big Data system couldn’t care less if it has your specific social security number or your specific account ID, because it’s not understanding who you are based on how you identify yourself to other humans. That’s the human bias here, that a Big Data system would try to predict our individual behavior based on an analysis of what we individually have done in the past, as if the computer were some super-advanced version of Sherlock Holmes. No, what a Big Data system can do is look at ALL of our behaviors, across ALL dimensions of that behavior, and infer what ANY of us would do under similar circumstances. It’s a simple concept, really, but what the human brain can’t easily comprehend is the vastness of the ALL part of the equation or what it means to look at the ALL simultaneously and in parallel. I’ve been working with inference engines for almost 30 years now, and while I think that I’ve got unusually good instincts for this and I’ve been able to train my brain to kinda sorta think in multi-dimensional terms, the truth is that I only get glimpses of what’s happening inside these engines. I can channel the magic, I can appreciate the magic, and on a purely symbolic level I can describe the magic. But on a fundamental level I don’t understand the magic, and neither does any other human. What I can say to you with absolute certainty, however, is that the magic exists and there are plenty of magicians like me out there, with more graduating from MIT and Harvard and Stanford every year.
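A minimal sketch of that inference style, with invented behavioral profiles (nothing here comes from a real engine): instead of modeling one person from their own past, it pools ALL users and predicts what anyone with similar observed behavior tends to do.

    # Pool everyone's (behavior profile -> action) pairs, then predict for a
    # person with NO history of their own by voting among the most similar
    # users. Profiles and actions are toy assumptions for illustration.
    from collections import Counter

    observations = [
        ({"night_owl", "urban", "streams_video"}, "buys_takeout"),
        ({"night_owl", "urban"},                  "buys_takeout"),
        ({"suburban", "commuter"},                "buys_gas"),
        ({"suburban", "commuter", "gym"},         "buys_gas"),
        ({"urban", "streams_video"},              "buys_takeout"),
    ]

    def predict(profile, k=3):
        """Vote among the k users whose observed behavior overlaps ours most."""
        ranked = sorted(observations,
                        key=lambda rec: len(profile & rec[0]),
                        reverse=True)
        votes = Counter(action for _, action in ranked[:k])
        return votes.most_common(1)[0][0]

    # No purchase history for this person at all -- just two behaviors.
    print(predict({"night_owl", "urban"}))  # -> buys_takeout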

Here’s the magic trick that I’m worried about for investors.

In exactly the same way that we have given away our personal behavioral data to banks and credit card companies and wireless carriers and insurance companies and a million app providers, so are we now being tempted to give away our portfolio behavioral data to mega-banks and mega-asset managers and the technology providers who work with them. Don’t worry, they say, there’s nothing in this information that identifies you directly. It’s all anonymous. What rubbish! With enough anonymous portfolio behavioral data and a laughably small IT budget, any competent magician can design a Big Data system that can predict with 90% accuracy what you will buy and sell in your account, at what price you will buy and sell, and under what external macro conditions you will buy and sell. Every day these private data sets at the mega-market players get bigger and bigger, and every day we get closer and closer to a Citadel or a Renaissance perfecting their Inference Machine for the liquid capital markets. For all I know, they already have. . . .

2. Check out Facebook’s new patent, to be evaluated in conjunction with the previous story. Facebook’s patent is for a service that will let banks scan your Facebook friends for the purpose of assessing your credit quality. For instance, Facebook might set up a service where banks can take the average of the credit ratings for all of the people in your social network, and if that average doesn’t meet a minimum credit score, your loan application is denied. And that’s not just some random application of Facebook’s new patent–the system of using the average credit scores of your social network to deny you loans is explicitly part of the patent:

“Facebook’s New Plan: Help Banks Figure Out How Poor You Are So They Can Deny You Loans” by Jack Smith IV; mic.com; 8/5/2015.

If you and your Facebook friends are poor, good luck getting approved for a loan.

Facebook has registered a patent for a system that would let banks and lenders screen your social network before deciding whether or not you’re approved for a loan. If your Facebook friends’ average credit scores don’t make the cut, the bank can reject you. The patent is worded in clear, terrifying language that speaks for itself:

When an individual applies for a loan, the lender examines the credit ratings of members of the individual’s social network who are connected to the individual through authorized nodes. If the average credit rating of these members is at least a minimum credit score, the lender continues to process the loan application. Otherwise, the loan application is rejected.

It’s very literally guilt by association, allowing banks and lenders to profile you by the status of your loved ones.

Though a credit score isn’t necessarily a reflection of your wealth, it can serve as a rough guideline for who has a reliable, managed income and who has had to lean on credit in trying times. A line of credit is sometimes a lifeline, either for starting a new business or escaping a temporary hardship.

Profiling people for being in social circles where low credit scores are likely could cut off someone’s chances of finding financial relief. In effect, it’s a device that isolates the poor and keeps them poor.
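The screening rule quoted above is simple enough to sketch directly. Here is a minimal Python illustration; the names, scores and cutoff are invented, and the patent describes the rule, not this code.

    # Average an applicant's friends' credit ratings; reject below a cutoff.
    friend_scores = {"alice": [680, 540, 610], "bob": [720, 750, 700]}
    MIN_AVERAGE = 620  # hypothetical lender cutoff

    def screen(applicant):
        scores = friend_scores[applicant]
        average = sum(scores) / len(scores)
        return "continue processing" if average >= MIN_AVERAGE else "reject"

    for name in friend_scores:
        print(name, "->", screen(name))
    # alice -> reject (friends average 610); bob -> continue processing

Note that the applicant’s own creditworthiness never enters the calculation: the decision turns entirely on who they know.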

A bold new era for discrimination: In the United States, it’s illegal to deny someone a loan based on traditional identifiers like race or gender — the kinds of things people usually use to discriminate. But these laws were made before Facebook was able to peer into your social graph and learn when, where and how long you’ve known your friends and acquaintances.

The fitness-tracking tech company Fitbit said in 2014 that the fastest growing part of its business is helping employers monitor the health of their employees. Once insurers show interest in this information, you can bet they’ll be making a few rejections of their own. And if a group insurance plan that affects every employee depends on measurable, real-time data for the fitness of its employees, how will that affect the hiring process?

...

And if you don’t like it, just find richer friends.

3a. A consortium of 14 mega-banks has privately developed a special super-secure inter-bank messaging system that uses end-to-end strong encryption and permanently deletes data. The Symphony system may very well make it impossible for regulators to adequately oversee the financial malefactors responsible for the 2008 financial meltdown.

“NY Regulator Sends Message to Symphony” by Ben McLannahan and Gina Chon; Financial Times; 7/22/2015.

New York’s state banking regulator has fired a shot across the bows of Symphony, a messaging service about to be launched by a consortium of Wall Street banks and asset managers, by calling for information on how it manages — and deletes — customer data.

In a letter on Wednesday to David Gurle, the chief executive of Symphony Communication Services, the New York Department of Financial Services asked it to clarify how its tool would allow firms to erase their data trails, potentially falling foul of laws on record-keeping.

The letter, which was signed by acting superintendent Anthony Albanese and shared with the press, noted that chatroom transcripts had formed a critical part of authorities’ investigations into the rigging of markets for foreign exchange and interbank loans. It called for Symphony to spell out its document retention capabilities, policies and features, citing two specific areas of interest as “data deletion” and “end-to-end encryption”.

The letter marks the first expression of concern from regulators over a new initiative that has set out to challenge the dominance of Bloomberg, whose 320,000-plus subscribers ping about 200m messages a day between terminals using its communication tools.

People familiar with the matter described the inquiry as an information gathering exercise, which could conclude that Symphony is a perfectly legitimate enterprise.

The NYDFS noted that Symphony’s marketing materials state that “Symphony has designed a specific set of procedures to guarantee that data deletion is permanent and fully documented. We also delete content on a regular basis in accordance with customer data retention policies.”

Mr Albanese also wrote that he would follow up with four consortium members that the NYDFS regulates — Bank of New York Mellon, Credit Suisse, Deutsche Bank and Goldman Sachs — to ask them how they plan to use the new service, which will go live for big customers in the first week of August.

The regulator said it was keen to find out how banks would ensure that messages created using Symphony would be retained, and “whether their use of Symphony’s encryption technology can be used to prevent review by compliance personnel or regulators”. It also flagged concerns over the open-source features of the product, wondering if they could be used to “circumvent” oversight.

The other members of the consortium are Bank of America Merrill Lynch, BlackRock, Citadel, Citigroup, HSBC, Jefferies, JPMorgan, Maverick Capital, Morgan Stanley and Wells Fargo. Together they have chipped in about $70m to get Symphony started. Another San Francisco-based fund run by a former colleague of Mr Gurle’s, Merus Capital, has a 5 per cent interest.

“Symphony is built on a foundation of security, compliance and privacy features that were built to enable our financial services and enterprise customers to meet their regulatory requirements,” said Mr Gurle. “We look forward to explaining the various aspects of our communications platform to the New York Department of Financial Services.”

3b. According to Symphony’s backers, nothing could go wrong because all the information that banks are required to retain for regulatory purposes is indeed retained in the system. Whether or not regulators can actually access that retained data, however, appears to be more of an open question. Again, the end-to-end encryption may very well insulate Banksters from the regulation vital to avoid a repeat of the 2008 scenario.

“Symphony, the ‘WhatsApp for Wall Street,’ Orchestrates a Nuanced Response to Regulatory Critics” by Michael del Castillo; New York Business Journal; 8/13/2015.

Symphony is taking heat from some in Washington, D.C., for its WhatsApp-like messaging service that promises to encrypt Wall Street’s messages from end to end. At the heart of the concern is whether or not the keys used to decrypt the messages will be made available to regulators, or if another form of back door access will be provided.

Without such keys it would be immensely more difficult to retrace the steps of shady characters on Wall Street during regulatory investigations — an ability which, according to a New York Post report, has resulted in $74 billion in fines over the past five years.

So, earlier this week Symphony took to the blogosphere with a rather detailed explanation of its plans to be compliant with regulators. In spite of answering a lot of questions, though, one key point was either deftly evaded or overlooked.

What Symphony does, according to the blog post:

Symphony provides its customers with an innovative “end-to-end” secure messaging capability that protects communications in the cloud from cyber-threats and the risk of data breach, while safeguarding our customers’ ability to retain records of their messages. Symphony protects data, not only when it travels from “point-to-point” over network connections, but also the entire time the data is in the cloud.

How it works:

Large institutions using Symphony typically will store encryption keys using specialized hardware key management devices known as Hardware Security Modules (HSMs). These modules are installed in data centers and protect an organization’s keys, storing them within the secure protected memory of the HSM. Firms will use these keys to decrypt data and then feed the data into their record retention systems.
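A minimal sketch of that flow, using the open-source Python cryptography package as a stand-in for an HSM (the key, message and archive below are invented for illustration; this is the generic pattern, not Symphony’s actual code):

    # End-to-end encryption with a firm-held key: the cloud sees only
    # ciphertext; the key holder can decrypt into a retention archive.
    from cryptography.fernet import Fernet

    firm_key = Fernet.generate_key()   # in production this lives in an HSM
    channel = Fernet(firm_key)

    # In transit and at rest in the cloud, only ciphertext is visible.
    ciphertext = channel.encrypt(b"trader A to trader B: move the position")
    assert b"trader" not in ciphertext

    # The firm (key holder) decrypts into its record-retention archive...
    archive = [channel.decrypt(ciphertext)]
    print(archive[0].decode())
    # ...but anyone without firm_key -- a regulator included -- sees nothing.

The design choice to note: retention and oversight are separate. The firm can always read its own archive; whether a regulator ever can depends entirely on whether the firm hands over plaintext or keys, which is precisely the question the NYDFS letter raises.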

The crux:

Symphony is designed to interface with record retention systems commonly deployed in financial institutions. By helping organizations reliably store messages in a central archive, our platform facilitates the rapid and complete retrieval of records when needed. Symphony provides security while data travels through the cloud; firms then securely receive the data from Symphony, decrypt it and store it so they can meet their retention obligations.

The potential to store every key-stroke of every employee behind an encrypted wall safe from malicious governments and other entities is one that should make Wall Streeters, and those dependent on Wall Street resources, sleep a bit better at night.

But nowhere in Symphony’s blog post does it actually say that any of the 14 companies which have invested $70 million in the product, or any of the forthcoming customers who might sign up to use it, will actually share anything with regulators. Sure, it will retain all the information required by regulators, which in the right hands is equally useful to the companies. So there’s no surprise there.

The closest we see to any actual assurance that the Silicon Valley-based company plans to share that information with regulators is that Symphony is “designed to interface with record retention systems commonly deployed in financial institutions.” Which, theoretically, means the SEC, the DOJ, or any number of regulatory bodies could plug in, assuming they had access.

So, the questions remain: will Symphony be building in some sort of back-door access for regulators? Or will it just be storing that information required of regulators, but for its clients’ use?

...

4a. The Bitcoin assassination markets are about to get some competition. A new variant of the Bitcoin technology will not only permit the use of Bitcoin to assassinate public figures but may very well replace–to a certain extent–the functions performed by attorneys.

“Bitcoin’s Dark Side Could Get Darker” by Tom Simonite; MIT Technology Review; 8/13/2015.

Investors see riches in a cryptography-enabled technology called smart contracts–but it could also offer much to criminals.

Some of the earliest adopters of the digital currency Bitcoin were criminals, who have found it invaluable in online marketplaces for contraband and as payment extorted through lucrative “ransomware” that holds personal data hostage. A new Bitcoin-inspired technology that some investors believe will be much more useful and powerful may be set to unlock a new wave of criminal innovation.

That technology is known as smart contracts—small computer programs that can do things like execute financial trades or notarize documents in a legal agreement. Intended to take the place of third-party human administrators such as lawyers, which are required in many deals and agreements, they can verify information and hold or use funds using similar cryptography to that which underpins Bitcoin.

Some companies think smart contracts could make financial markets more efficient, or simplify complex transactions such as property deals (see “The Startup Meant to Reinvent What Bitcoin Can Do”). Ari Juels, a cryptographer and professor at the Jacobs Technion-Cornell Institute at Cornell Tech, believes they will also be useful for illegal activity–and, with two collaborators, he has demonstrated how.

“In some ways this is the perfect vehicle for criminal acts, because it’s meant to create trust in situations where otherwise it’s difficult to achieve,” says Juels.

In a paper to be released today, Juels, fellow Cornell professor Elaine Shi, and University of Maryland researcher Ahmed Kosba present several examples of what they call “criminal contracts.” They wrote them to work on the recently launched smart-contract platform Ethereum.

One example is a contract offering a cryptocurrency reward for hacking a particular website. Ethereum’s programming language makes it possible for the contract to control the promised funds. It will release them only to someone who provides proof of having carried out the job, in the form of a cryptographically verifiable string added to the defaced site.
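The conditional-release mechanism at work here is easy to see in miniature. Below is a Python sketch of the generic escrow pattern (the researchers’ contracts run in Ethereum’s own language, not Python; the class, names and amounts are illustrative): funds are locked against a published hash commitment and paid out only to a claimant who reveals a value matching it, with no human intermediary in the loop.

    # Hash-locked escrow: the contract holds a reward and releases it only
    # to whoever reveals a value hashing to the published commitment.
    import hashlib

    class BountyContract:
        def __init__(self, reward, commitment_hex):
            self.reward = reward              # funds held by the contract
            self.commitment = commitment_hex  # published with the contract
            self.paid = False

        def claim(self, proof: bytes) -> int:
            digest = hashlib.sha256(proof).hexdigest()
            if not self.paid and digest == self.commitment:
                self.paid = True
                return self.reward            # released automatically
            return 0

    secret = b"verifiable-string-from-the-completed-job"
    contract = BountyContract(10, hashlib.sha256(secret).hexdigest())
    print(contract.claim(b"wrong proof"))  # 0 -- nothing released
    print(contract.claim(secret))          # 10 -- proof verifies, funds move

This is why Juels calls it a vehicle for trust between parties who cannot trust each other: neither side can cheat the payout condition once the contract is posted.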

Contracts with a similar design could be used to commission many kinds of crime, say the researchers. Most provocatively, they outline a version designed to arrange the assassination of a public figure. A person wishing to claim the bounty would have to send information such as the time and place of the killing in advance. The contract would pay out after verifying that those details had appeared in several trusted news sources, such as news wires. A similar approach could be used for lesser physical crimes, such as high-profile vandalism.

“It was a bit of a surprise to me that these types of crimes in the physical world could be enabled by a digital system,” says Juels. He and his coauthors say they are trying to publicize the potential for such activity to get technologists and policy makers thinking about how to make sure the positives of smart contracts outweigh the negatives.

“We are optimistic about their beneficial applications, but crime is something that is going to have to be dealt with in an effective way if those benefits are to bear fruit,” says Shi.

Nicolas Christin, an assistant professor at Carnegie Mellon University who has studied criminal uses of Bitcoin, agrees there is potential for smart contracts to be embraced by the underground. “It will not be surprising,” he says. “Fringe businesses tend to be the first adopters of new technologies, because they don’t have anything to lose.”

...

Gavin Wood, chief technology officer at Ethereum, notes that legitimate businesses are already planning to make use of his technology—for example, to provide a digitally transferable proof of ownership of gold.

However, Wood acknowledges it is likely that Ethereum will be used in ways that break the law—and even says that is part of what makes the technology interesting. Just as file sharing found widespread unauthorized use and forced changes in the entertainment and tech industries, illicit activity enabled by Ethereum could change the world, he says.

“The potential for Ethereum to alter aspects of society is of significant magnitude,” says Wood. “This is something that would provide a technical basis for all sorts of social changes and I find that exciting.”

For example, Wood says that Ethereum’s software could be used to create a decentralized version of a service such as Uber, connecting people wanting to go somewhere with someone willing to take them, and handling the payments without the need for a company in the middle. Regulators like those harrying Uber in many places around the world would be left with nothing to target. “You can implement any Web service without there being a legal entity behind it,” he says. “The idea of making certain things impossible to legislate against is really interesting.”

4b. If you’re a former subscriber of the “Ashley Madison” website for cheating, just FYI, you might be getting a friendly email soon:

“Extortionists Are After the Ashley Madison Users and They Want Bitcoin” by Adam Clark Estes; Gizmodo; 8/21/15.

People are the worst. An unknown number of assholes are threatening to expose Ashley Madison users, presumably ruining their marriages. The hacking victims must pay the extortionists “exactly 1.0000001 Bitcoins” or the spouse gets notified. Ugh.

This is an unnerving but not unpredictable turn of events. The data that the Ashley Madison hackers released early this week included millions of real email addresses, along with real home addresses, sexual proclivities and other very private information. Security blogger Brian Krebs talked to security firms who have evidence of extortion schemes linked to Ashley Madison data. Turns out spam filters are catching a number of emails being sent to victims from people who say they’ll make the information public unless they get paid!

Here’s one caught by an email provider in Milwaukee:

Hello,

Unfortunately, your data was leaked in the recent hacking of Ashley Madison and I now have your information.

If you would like to prevent me from finding and sharing this information with your significant other send exactly 1.0000001 Bitcoins (approx. value $225 USD) to the following address:

1B8eH7HR87vbVbMzX4gk9nYyus3KnXs4Ez

Sending the wrong amount means I won’t know it’s you who paid.

You have 7 days from receipt of this email to send the BTC [bitcoins]. If you need help locating a place to purchase BTC, you can start here…..

...

One security expert explained to Krebs that this type of extortion could be dangerous. “There is going to be a dramatic crime wave of these types of virtual shakedowns, and they’ll evolve into spear-phishing campaigns that leverage crypto malware,” said Tom Kellermann of Trend Micro.

That sounds a little dramatic, but bear in mind just how many people were involved. Even if you assume some of the accounts were fake, there are potentially millions who’ve had their private information posted on the dark web for anybody to see and abuse. Some of these people are in the military, too, where they’d face possible penalties for adultery. If some goons think they can squeeze a bitcoin out of each of them, there are potentially tens of millions of dollars to be made.

The word “potentially” is important because some of these extortion emails are obviously getting stuck in spam filters, and some of the extortionists could easily just be bluffing. Either way, everybody loses when companies fail to secure their users’ data. Everybody except the criminals.

5. The emergence of what is coming to be called “The Internet of Things” holds truly ominous possibilities. Not only can Big Data/Big Tech get their hooks into people’s lives to an even greater extent than they can now (see Item #1 in this description) but hackers can have a field day.

“Why Smart Objects May Be a Dumb Idea” by Zeynep Tufekci; The New York Times; 8/10/2015.

A fridge that puts milk on your shopping list when you run low. A safe that tallies the cash that is placed in it. A sniper rifle equipped with advanced computer technology for improved accuracy. A car that lets you stream music from the Internet.

All of these innovations sound great, until you learn the risks that this type of connectivity carries. Recently, two security researchers, sitting on a couch and armed only with laptops, remotely took over a Chrysler Jeep Cherokee speeding along the highway, shutting down its engine as an 18-wheeler truck rushed toward it. They did this all while a Wired reporter was driving the car. Their expertise would allow them to hack any Jeep as long as they knew the car’s I.P. address, its network address on the Internet. They turned the Jeep’s entertainment dashboard into a gateway to the car’s steering, brakes and transmission.

A hacked car is a high-profile example of what can go wrong with the coming Internet of Things — objects equipped with software and connected to digital networks. The selling point for these well-connected objects is added convenience and better safety. In reality, it is a fast-motion train wreck in privacy and security.

The early Internet was intended to connect people who already trusted one another, like academic researchers or military networks. It never had the robust security that today’s global network needs. As the Internet went from a few thousand users to more than three billion, attempts to strengthen security were stymied because of cost, shortsightedness and competing interests. Connecting everyday objects to this shaky, insecure base will create the Internet of Hacked Things. This is irresponsible and potentially catastrophic.

That smart safe? Hackers can empty it with a single USB stick while erasing all logs of its activity — the evidence of deposits and withdrawals — and of their crime. That high-tech rifle? Researchers managed to remotely manipulate its target selection without the shooter’s knowing.

Home builders and car manufacturers have shifted to a new business: the risky world of information technology. Most seem utterly out of their depth.

Although Chrysler quickly recalled 1.4 million Jeeps to patch this particular vulnerability, it took the company more than a year after the issue was first noted, and the recall occurred only after that spectacular publicity stunt on the highway and after it was requested by the National Highway Traffic Safety Administration. In announcing the software fix, the company said that no defect had been found. If two guys sitting on their couch turning off a speeding car’s engine from miles away doesn’t qualify, I’m not sure what counts as a defect in Chrysler’s world. And Chrysler is far from the only company compromised: from BMW to Tesla to General Motors, many automotive brands have been hacked, with surely more to come.

Dramatic hacks attract the most attention, but the software errors that allow them to occur are ubiquitous. While complex breaches can take real effort — the Jeep hacker duo spent two years researching — simple errors in the code can also cause significant failure. Adding software with millions of lines of code to . . .

The Internet of Things is also a privacy nightmare. Databases that already have too much information about us will now be bursting with data on the places we’ve driven, the food we’ve purchased and more. Last week, at Def Con, the annual information security conference, researchers set up an Internet of Things village to show how they could hack everyday objects like baby monitors, thermostats and security cameras.

Connecting everyday objects introduces new risks if done at mass scale. Take that smart refrigerator. If a single fridge malfunctions, it’s a hassle. However, if the fridge’s computer is connected to its motor, a software bug or hack could “brick” millions of them all at once — turning them into plastic pantries with heavy doors.

Cars — two-ton metal objects designed to hurtle down highways — are already bracingly dangerous. The modern automobile is run by dozens of computers that most manufacturers connect using a system that is old and known to be insecure. Yet automakers often use that flimsy system to connect all of the car’s parts. That means once a hacker is in, she’s in everywhere — engine, steering, transmission and brakes, not just the entertainment system.

For years, security researchers have been warning about the dangers of coupling so many systems in cars. Alarmed researchers have published academic papers, hacked cars as demonstrations, and begged the industry to step up. So far, the industry response has been to nod politely and fix exposed flaws without fundamentally changing the way they operate.

In 1965, Ralph Nader published “Unsafe at Any Speed,” documenting car manufacturers’ resistance to spending money on safety features like seatbelts. After public debate and finally some legislation, manufacturers were forced to incorporate safety technologies.

No company wants to be the first to bear the costs of updating the insecure computer systems that run most cars. We need federal safety regulations to push automakers to move, as a whole industry. Last month, a bill with privacy and cybersecurity standards for cars was introduced in the Senate. That’s good, but it’s only a start. We need a new understanding of car safety, and of the safety of any object running software or connecting to the Internet.

It may be hard to fix security on the digital Internet, but the Internet of Things should not be built on this faulty foundation. Responding to digital threats by patching only exposed vulnerabilities is giving just aspirin to a very ill patient.

It isn’t hopeless. We can make programs more reliable and databases more secure. Critical functions on Internet-connected objects should be isolated and external audits mandated to catch problems early. But this will require an initial investment to forestall future problems — the exact opposite of the current corporate impulse. It also may be that not everything needs to be networked, and that the trade-off in vulnerability isn’t worth it. Maybe cars are unsafe at any I.P.

6. We conclude by re-examining one of the most important analytical articles in a long time, David Golumbia’s article in Uncomputing.org about technocrats and their fundamentally undemocratic outlook.

“Tor, Technocracy, Democracy” by David Golumbia; Uncomputing.org; 4/23/2015.

” . . . . Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (“civic,” “ethical,” “white” and “black” hat alike), hacktivists, WikiLeaks fans [and Julian Assange et al–D. E.], Anonymous “members,” even Edward Snowden himself walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands. . . . First, [Tor co-creator] Dingledine claimed that Tor must be supported because it follows directly from a fundamental “right to privacy.” Yet when pressed—and not that hard—he admits that what he means by “right to privacy” is not what any human rights body or “particular legal regime” has meant by it. Instead of talking about how human rights are protected, he asserts that human rights are natural rights and that these natural rights create natural law that is properly enforced by entities above and outside of democratic polities. Where the UN’s Universal Declaration on Human Rights of 1948 is very clear that states and bodies like the UN to which states belong are the exclusive guarantors of human rights, whatever the origin of those rights, Dingledine asserts that a small group of software developers can assign to themselves that role, and that members of democratic polities have no choice but to accept them having that role. . . . Further, it is hard not to notice that the appeal to natural rights is today most often associated with the political right, for a variety of reasons (ur-neocon Leo Strauss was one of the most prominent 20th century proponents of these views). We aren’t supposed to endorse Tor because we endorse the right: it’s supposed to be above the left/right distinction. But it isn’t. . . .”

 

Discussion

16 comments for “FTR #859 Because They Can: Update on Technocratic Fascism”

  1. It looks like the Ashley Madison hack may have just claimed its first two lives:

    Reuters
    Two people may have committed suicide after Ashley Madison hack — police
    TORONTO | By Alastair Sharp

    Mon Aug 24, 2015 11:33pm IST

    At least two people may have committed suicide following the hacking of the Ashley Madison cheating website, Toronto police said on Monday, warning of a ripple effect that includes scams and extortion of clients desperate to stop the exposure of their infidelity.

    Avid Life Media Inc, the parent company of the website, is offering a C$500,000 ($379,132) reward to catch the hackers.

    In addition to the exposure of the Ashley Madison accounts of as many as 37 million users, the attack on the dating website for married people has sparked extortion attempts and at least two unconfirmed suicides, Toronto Police Acting Staff Superintendent Bryce Evans told a news conference.

    The data dump contained email addresses of U.S. government officials, UK civil servants, and workers at European and North American corporations, taking already deep-seated fears about Internet security and data protection to a new level.

    “Your actions are illegal and will not be tolerated. This is your wake-up call,” Evans said, addressing the so-called “Impact Team” hackers directly during the news conference.

    “To the hacking community who engage in discussions on the dark web and who no doubt have information that could assist this investigation, we’re also appealing to you to do the right thing,” Evans said. “You know the Impact Team has crossed the line. Do the right thing and reach out to us.”

    Police declined to provide any more details on the apparent suicides, saying they received unconfirmed reports on Monday morning.

    “The social impact behind this (hacking) — we’re talking about families. We’re talking about their children, we’re talking about their wives, we’re talking about their male partners,” Evans told reporters.

    “It’s going to have impacts on their lives. We’re now going to have hate crimes that are a result of this. There are so many things that are happening. The reality is ... this is not the fun and games that has been portrayed.”

    The investigation into the hacking has broadened to include international law enforcement, with the U.S. Department of Homeland Security joining last week. The U.S. Federal Bureau of Investigation and Canadian federal and provincial police are also assisting.

    Evans also said the hacking has spawned online scams that fraudulently claim to be able to protect Ashley Madison clients’ data for a fee.

    People are also attempting to extort Ashley Madison clients by threatening to send evidence of their membership directly to friends, family or colleagues, Evans said.

    In a sign of Ashley Madison’s deepening woes following the breach, lawyers last week launched a class-action lawsuit seeking some $760 million in damages on behalf of Canadians whose information was leaked.

    ...

    Note that this is the Toronto police department reporting these two apparent suicides, so these two suicides are presumably just in the Toronto area. And with up to 37 million users compromised, it raises the question not only of just how high the final body count is going to be in the long run for this hack, but also of just how high it is already from suicides that haven’t yet been associated with the hack.

    It’s a grim reminder that, as more and more personal data becomes vulnerable to exploits of this nature, the more torturous and potentially lethal generic hacking effectively becomes. For instance, what if 37 million Gmail accounts got hacked and their contents were just thrown up on the darkweb? A full account email hack could be just as damaging and humiliating as the Ashley Madison hack, if not far more so, because the potential range of personal information is just on a different scale, and nearly everyone these days has an email account with one of the major email services out there. Plus, unlike the Ashley Madison hack, which largely limits the damage to the individuals involved and their family members, a full email hack could end up violating very sensitive pieces of data for not just the email account owner but everyone they communicated with! It raises a rather alarming question: given the connectivity of human societies, you have to wonder just what percentage of the US population would be at least indirectly impacted if, say, 37 million Gmail accounts got hacked and thrown up online? How about the global populace? It seems like the impact could be pretty widely felt.

    Posted by Pterrafractyl | August 24, 2015, 3:20 pm
  2. David Golumbia points us towards an article that does a great job of summarizing one of the key hopes held by crypto-anarchists/cyberlibertarians for what bitcoin might bring to the world: the collapse of government via mass tax evasion through the use of cryptocurrencies, and the replacement of government with a fee-for-service taxation system run by private service providers. But don’t assume that by subverting the ability of governments to function the cyberlibertarians expect we would suddenly live in a world free of regulation, because that’s not exactly what the author below has in mind: “The only choice of regulation we have in terms of cryptocurrency taxation is not to try and fit it inside some existing doctrine, but to abide by their laws of finance and information freedom. We must be the ones to conform to the regulation, not have it conform to our conventional beliefs. Bitcoin is a system which will only be governed effectively through digital law, an approach which functions solely through a medium of technology itself. It will not bend to the whim of those who still hold conventional forms of law-making as relevant today”:

    Diginomics
    Cryptocurrency Taxation May Subvert National Collection

    Travis Patron
    September 6, 2015

    As the age of cryptocurrency comes into full force, it will facilitate a subversively viable taxation avoidance strategy for many of the technically savvy users of peer-to-peer cryptographic payment systems. In doing so, cryptocurrency use will act to erode the tax revenue base of national jurisdictions, and ultimately, reposition taxation as a voluntary, pay-for-performance function. In this post, I’d like to cover some of the benefits such a strategy will have for cryptocurrency investors, why our notion of taxation is ripe for disruption, and why cryptocurrency taxation is enabled by default.

    Although investors have been lured by the siren song of tax havens for as long as governments have existed, none have existed with the legal and structural characteristics such as those found in cryptocurrency. By operating behind a veil of cybersecrecy, it is reasonable to forecast the impracticality of systemic taxation on these types of financial assets from national jurisdictions. Individual enforcement of taxation is likewise impractical due to the ideological backlash governments would receive for targeting individuals who avoid national taxation via information technologies. Even so, many jurisdictions have already declared digital currency transactions (something which occurs between consenting parties on a network which no one owns) to be taxable under current legal frameworks.

    How can the state lay claim to the right to tax that which they do not issue and cannot control?

    Running The Numbers on Cryptocurrency Taxation

    It has been said that compounding interest is one of the most powerful forces in the universe. When we apply the black magic of compounding returns to the profit-maximizing actions of consumers, we see quite clearly why every user aware of the benefits of using cryptocurrency, even if only for the tax savings, will opt to use it over traditional fiat money. The allure of avoiding the clutches of national taxation is strong enough that any rational consumer will make cryptocurrency a portion of their financial portfolio given they have the sufficient technical understanding.

    “Each $5,000 of annual tax payments made over a 40-year period reduces your net worth by $2.2 million assuming a 10% annual return on your investments,” reports James Dale Davidson in The Sovereign Individual: Mastering the Transition to the Information Age. “For high income earners in predatory tax regimes (such as the United States), you can expect to lose more of your money through cumulative taxation than you will ever earn.”
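    Davidson’s figure does check out as a standard future-value-of-an-annuity calculation (a quick verification, not from the article itself):

        # $5,000 per year for 40 years, compounding at a 10% annual return:
        # FV = payment * ((1 + r)**n - 1) / r
        payment, rate, years = 5_000, 0.10, 40
        future_value = payment * ((1 + rate) ** years - 1) / rate
        print(f"${future_value:,.0f}")  # -> $2,212,963, i.e. the "$2.2 million"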

    As we explained in the report Bitcoin May Become A Global Reserve Instrument, never before has there existed a tool that can preserve economic and informational assets with such a high degree of security combined with a near-zero marginal cost to the user. This revolutionary capability of the bitcoin network does, and will continue to provide, a subversively lucrative tax super haven in direct correlation with its acceptance on a worldwide basis.

    Government Response to Cryptocurrency Taxation

    Many government agencies have already clued in to the tax avoidance potential of bitcoin and cryptocurrencies. However, it would seem that they misjudge this emerging threat looming over their precious tax coffers. The Financial Crimes Enforcement Network in the United States (FINCEN), for example, has already issued guidance on cryptocurrency taxation, yet makes a false distinction between real currency and virtual currency. FINCEN states that “In contrast to real currency, ‘virtual’ currency is a medium of exchange that operates like a currency in some environments, but does not have all the attributes of real currency,” and later “virtual currency does not have legal tender status in any jurisdiction.” What these agencies fail to realize is that cryptocurrency is not virtual in any sense of the word. Indeed it is as real, and perhaps even more real, than traditional fleeting fiat currencies.

    Bitcoin and cryptocurrency offer a near perfect alternative to traditional tax havens which are being tightly controlled by the new laws associated with the Foreign Account Tax Compliance Act (FATCA). In his report Are Cryptocurrencies Super Tax Havens?, Omri Marian makes clear the pressure for financial institutions who interact with the US banking system to hand over account holders, and for a crackdown on offshore tax havens with the enactment of FATCA in 2010.

    Tax policymakers seem to be operating under the faulty assumption that cryptocurrency-based economies are limited by the size of virtual economies. The only virtual aspect of cryptocurrencies, however, is their form. Their operation happens within real economies, and as such their growth potential is, at least theoretically, infinite. Such potential, together with recent developments in cryptocurrencies markets, should alert policy-makers to the urgency of the emerging problem.

    – Omri Marian, Are Cryptocurrencies Super Tax Havens?

    Current payment processors such as BitPay have recently revealed that government agencies are watching cryptocurrency transactions through the bottlenecks and exchanges where they can be tracked and traced with a high degree of transparency. It should not come as any surprise that governments are watching cryptocurrency nor that companies are complying with their laws, but understanding why national governments require users of the bitcoin digital economy to cut them a slice of the pie while they contribute nothing to the operation, and in many cases hinder the adoption of this technology, remains a callous mystery.

    Governments initially attempting to control cryptocurrency taxation through the businesses and bottlenecks through which it can be monitored will meet with as much success as they have had limiting file sharing, illegal downloads, and Tor operations. Cryptocurrencies have an inherent regulation, that of the law from number. Truly, bitcoin is code as law.

    ...

    Cryptocurrency Taxation By Default

    What would you say if you were told cryptocurrency taxation occurs on every transaction by default? In the realm of digital currency, the transaction fee which the user decides to (or decides not to) attach to each payment represents the taxation. The user can decide to attach a large fee or no fee at all. The miners of the network will give preference to the transactions with a larger fee attached, and will work to confirm these payments sooner than those with smaller fees. This transaction queue represents a voluntary, pay-for-performance taxation structure, where the performance derived from the system depends upon how much taxation the user pays.
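    A minimal sketch of that fee-priority queue (figures are invented for illustration; real miners weigh fee per byte and other factors):

        # Pending transactions, each with a voluntarily attached fee.
        mempool = [
            {"txid": "a1", "fee": 0.0005},
            {"txid": "b2", "fee": 0.0},    # no fee: may wait indefinitely
            {"txid": "c3", "fee": 0.002},
        ]

        block_capacity = 2  # simplified: a block holds two transactions
        next_block = sorted(mempool, key=lambda tx: tx["fee"],
                            reverse=True)[:block_capacity]
        print([tx["txid"] for tx in next_block])  # ['c3', 'a1']; b2 waits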

    Algo­rith­mic Reg­u­la­tion

    Cryptocurrencies have regulation built into the very nature of their existence, just not through our conventional ideas of human intervention. Because of the technological nature of cryptocurrency taxation, judicial regulations bestowed upon these types of systems will always be, to a large degree, futile. Cryptocurrencies have established their own set of rules and guidelines through the source code they are built upon; forcing legal frameworks on this type of 21st-century innovation will only cause friction during its adoption phase.

    The only choice of regulation we have in terms of cryptocurrency taxation is not to try to fit it inside some existing doctrine, but to abide by its laws of finance and information freedom. We must be the ones to conform to the regulation, not have it conform to our conventional beliefs. Bitcoin is a system which will only be governed effectively through digital law, an approach which functions solely through the medium of technology itself. It will not bend to the whim of those who still hold conventional forms of law-making as relevant today.
    ...

    Con­clu­sion

    When we come to under­stand the sys­temic resilience to judi­cial inter­ven­tion, it becomes quite clear that cryp­tocur­ren­cy tax­a­tion will remain a vol­un­tary, pay-for-per­for­mance func­tion of the net­work itself. No longer will tax­a­tion be enforced through coer­cion, but become a vol­un­tary act towards increased sys­tem per­for­mance.

    Make no mis­take, in a cryp­to-anar­chist juris­dic­tion where there is no means to con­fis­cate or con­trol prop­er­ty on behalf of anoth­er indi­vid­ual, the need for the state will cease to exist. Mass tax­a­tion on dig­i­tal cur­ren­cy is not fea­si­ble through judi­cial enforce­ment while indi­vid­ual enforce­ment is bound to prove inef­fec­tive. You, or any­one moti­vat­ed to retain their net worth, will find a sub­ver­sive­ly lucra­tive tax haven in the realm of cryp­tocur­ren­cy.

    “Make no mis­take, in a cryp­to-anar­chist juris­dic­tion where there is no means to con­fis­cate or con­trol prop­er­ty on behalf of anoth­er indi­vid­ual, the need for the state will cease to exist.”
    Yes, as we saw, in a “cryp­to-anar­chist juris­dic­tion”, the need for the state will cease to exist, because the peo­ple that write the rules for the code that runs the pre­dom­i­nant dig­i­tal infra­struc­ture in the cryp­to-anar­chist juris­dic­tion’s econ­o­my will become the new state. At least that’s the dream. Freeee­dooom!

    Posted by Pterrafractyl | September 7, 2015, 10:32 am
  3. Just FYI, if you’ve been avoiding creating a Facebook account under the assumption that this prevents Facebook from learning private details about you, there’s a lawsuit you might want to learn more about:

    Inter­na­tion­al Busi­ness Times
    Face­book Keeps Get­ting Sued Over Face-Recog­ni­tion Soft­ware, And Pri­va­cy Groups Say We Should Be Pay­ing More Atten­tion

    By Christo­pher Zara on Sep­tem­ber 03 2015 3:49 PM EDT

    Who owns your face? Believe it or not, the answer depends on which state you live in, and chances are, you live in one that hasn’t even weighed in yet.

    That could soon change. For the fourth time this year, Face­book Inc. was hit with a class-action law­suit by an Illi­nois res­i­dent who says its face-recog­ni­tion soft­ware vio­lates an unusu­al state pri­va­cy law there. The lat­est com­plaint, filed Mon­day, under­scores a qui­et but high-stakes legal bat­tle for the social net­work­ing giant, one that could rever­ber­ate through­out the rest of the U.S. tech indus­try and much of the pri­vate sec­tor.

    With almost 1.5 bil­lion active users, Face­book has amassed what prob­a­bly is the world’s largest pri­vate data­base of “faceprints,” dig­i­tal scans that con­tain the unique geo­met­ric pat­terns of its users’ faces. The com­pa­ny says it uses these iden­ti­fiers to auto­mat­i­cal­ly sug­gest pho­to tags. When users upload new pic­tures to the site, an algo­rithm cal­cu­lates a numer­ic val­ue based on a person’s unique facial fea­tures. Face­book pitch­es the fea­ture as just anoth­er con­ve­nient way to stay con­nect­ed with friends, but pri­va­cy and civ­il rights advo­cates say the data gen­er­at­ed by face-recog­ni­tion tech­nol­o­gy is unique­ly sen­si­tive, and requires extra spe­cial safe­guards as it finds its way into the hands of pri­vate com­pa­nies.
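    To illustrate the mechanics (and only the mechanics; Facebook’s actual models are proprietary and far more sophisticated), here is a toy Python sketch of a “faceprint”: reduce a face to a numeric template of pairwise landmark distances, then match new photos against stored templates. The landmark coordinates and match threshold are invented.

        import numpy as np

        def faceprint(landmarks):
            """Build a scale-invariant template from (x, y) facial landmarks
            by taking all pairwise distances and normalizing them."""
            n = len(landmarks)
            dists = np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                              for i in range(n) for j in range(i + 1, n)])
            return dists / dists.max()  # photo scale no longer matters

        def same_face(a, b, threshold=0.02):
            """Declare a match when the mean template difference is small."""
            return float(np.mean(np.abs(a - b))) < threshold

        # Invented landmarks: eyes, nose tip, mouth corners (pixel coords).
        alice = np.array([[100, 100], [160, 100], [130, 140],
                          [110, 170], [150, 170]], dtype=float)
        alice_zoomed = alice * 1.5 + 20  # same face, different photo scale
        bob = np.array([[100, 100], [170, 105], [135, 150],
                        [105, 180], [160, 175]], dtype=float)

        template = faceprint(alice)
        print(same_face(template, faceprint(alice_zoomed)))  # True: same geometry
        print(same_face(template, faceprint(bob)))           # False: different face

    The unsettling part, as the lawsuits below argue, is that building such a template requires only a photograph of you, not your consent.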

    “You can’t turn off your face,” said Alvaro M. Bedoya, found­ing exec­u­tive direc­tor of George­town University’s Cen­ter on Pri­va­cy & Tech­nol­o­gy. “Yes, it’s 2015, and yes, we’re tracked in a mil­lion dif­fer­ent ways, but for most of those forms of track­ing, I can still turn it off if I want to.”

    Faceprints Are Most­ly Unreg­u­lat­ed

    Cur­rent­ly, there are no com­pre­hen­sive fed­er­al reg­u­la­tions gov­ern­ing the com­mer­cial use of bio­met­rics, the cat­e­go­ry of infor­ma­tion tech­nol­o­gy that includes faceprints. And Bedoya said the gov­ern­ment appears to be in no hur­ry to address the issue.

    Ear­li­er this year, the Cen­ter on Pri­va­cy & Tech­nol­o­gy was one of a num­ber of pri­va­cy-rights groups — along with the Elec­tron­ic Fron­tier Foun­da­tion and the Amer­i­can Civ­il Lib­er­ties Union, among oth­ers — that with­drew from dis­cus­sions on how to craft guide­lines for face-recog­ni­tion tech­nol­o­gy. After months of nego­ti­a­tions, Bedoya said the groups grew frus­trat­ed by tech indus­try trade asso­ci­a­tions that would not agree to even the most min­i­mal of pro­tec­tions, includ­ing a rule that would require com­pa­nies to obtain writ­ten con­sent before col­lect­ing and stor­ing faceprints on con­sumers.

    “When not a sin­gle trade asso­ci­a­tion would agree to that, we just real­ized we weren’t deal­ing with peo­ple who were there to nego­ti­ate,” Bedoya said. “We were there to deal basi­cal­ly with peo­ple who want­ed to stop the process, or make it some­thing that was watered down.”

    But Illi­nois is dif­fer­ent. It’s one of only two states — the oth­er being Texas — to reg­u­late bio­met­rics in the pri­vate sec­tor. Illi­nois passed its Bio­met­ric Infor­ma­tion Pri­va­cy Act in 2008, back when Face­book was still in its rel­a­tive infan­cy and most com­pa­nies were not think­ing about face-recog­ni­tion tech­nol­o­gy.

    “I think we were ahead of the curve,” said Mary Dixon, leg­isla­tive direc­tor for the ACLU of Illi­nois, which advanced the ini­tia­tive. “I think it’d be hard to pass sim­i­lar ini­tia­tives now giv­en the intense lob­by against some of the pro­tec­tions we were able to advance.”

    Lit­i­ga­tion Face­off

    Illi­nois’ law went pret­ty much unno­ticed until April of this year, when a high-pro­file pri­va­cy lawyer filed a law­suit in fed­er­al court on behalf of a Face­book user who charges that Face­book is col­lect­ing and stor­ing faceprints on its users with­out obtain­ing informed writ­ten con­sent, a vio­la­tion of Illi­nois’ BIPA. The suit is fed­er­al because Face­book is based in Cal­i­for­nia and the pro­posed plain­tiff class poten­tial­ly num­bers in the mil­lions. Since then, at least three more fed­er­al law­suits were filed, each mak­ing sim­i­lar claims. The lat­est suit comes from Fred­er­ick William Gullen, an Illi­nois res­i­dent who doesn’t even have a Face­book account, but who insists that Face­book cre­at­ed a tem­plate of his face when anoth­er user uploaded a pho­to of him.

    “Face­book is active­ly col­lect­ing, stor­ing, and using — with­out pro­vid­ing notice, obtain­ing informed writ­ten con­sent or pub­lish­ing data reten­tion poli­cies — the bio­met­rics of its users and unwit­ting non-users ... Specif­i­cal­ly, Face­book has cre­at­ed, col­lect­ed and stored over a bil­lion ‘face tem­plates’ (or ‘face prints’) — high­ly detailed geo­met­ric maps of the face — from over a bil­lion indi­vid­u­als, mil­lions of whom reside in the State of Illi­nois.”

    A Face­book spokes­woman said the law­suits are with­out mer­it and the com­pa­ny will defend itself vig­or­ous­ly against them, but the real­i­ty is, the cas­es could play out in a num­ber of ways giv­en that face recog­ni­tion is large­ly untest­ed legal ter­ri­to­ry.

    Dixon and oth­er legal experts famil­iar with BIPA say Face­book prob­a­bly will argue that because its faceprints are derived from pho­tographs, they are exempt from BIPA’s con­sent require­ments. Shut­ter­fly Inc., anoth­er Inter­net com­pa­ny being sued in Illi­nois over facial-recog­ni­tion tech­nol­o­gy, is argu­ing a sim­i­lar stance. Although BIPA clear­ly con­sid­ers scans of “hand or face geom­e­try” to be bio­met­ric iden­ti­fiers, it also says pho­tographs are not. Bedoya said the word­ing of the law rais­es a “seem­ing con­tra­dic­tion” that defen­dants fight­ing BIPA law­suits might be able to exploit.

    “The law was writ­ten in a way that could have been clear­er,” he said.

    Face­book points out that users can turn off tag sug­ges­tions, but Dixon said BIPA was writ­ten to ensure that bio­met­ric data col­lec­tion does not take place with­out writ­ten con­sent exe­cut­ed by the sub­ject of the bio­met­ric iden­ti­fi­er. The law also makes it ille­gal to sell, lease or oth­er­wise prof­it from a customer’s bio­met­ric infor­ma­tion, a par­tic­u­lar thorn in the side for com­pa­nies that trade in per­son­al data.

    The Face­book and Shut­ter­fly law­suits will be close­ly watched as pol­i­cy­mak­ers in oth­er states con­sid­er craft­ing bills gov­ern­ing the use of bio­met­rics. Mean­while, pri­va­cy advo­cates say we should all be pay­ing atten­tion. As face-recog­ni­tion tech­nol­o­gy becomes more per­va­sive, it will have increas­ing impli­ca­tions for our lives, both online and off.

    “There’s an awful lot at stake here,” Bedoya said. “In the end, do we want to live in a soci­ety where every­one is iden­ti­fied all the time the minute they walk out into pub­lic? I think most peo­ple aren’t ready for that world.”

    ...

    “Cur­rent­ly, there are no com­pre­hen­sive fed­er­al reg­u­la­tions gov­ern­ing the com­mer­cial use of bio­met­rics, the cat­e­go­ry of infor­ma­tion tech­nol­o­gy that includes faceprints. And Bedoya said the gov­ern­ment appears to be in no hur­ry to address the issue.”
    No mean­ing­ful com­mer­cial facial recog­ni­tion fed­er­al reg­u­la­tions. Huh. Imag­ine that.

    Posted by Pterrafractyl | September 11, 2015, 4:39 pm
  4. Guess who’s bring­ing that fun “use your social net­work to infer your cred­it qual­i­ty” mod­el to the devel­op­ing world as part of an emerg­ing “Big Data, small cred­it” par­a­digm for finance. It’s not a par­tic­u­lar­ly hard guess:

    biznisafrica.co.za
    Omid­yar Net­work report reveals dis­rup­tion in emerg­ing mar­ket cred­it busi­ness

    Oct 27, 2015
    Zweli Sikhakhane

    Omid­yar Net­work on 26 Octo­ber 2015, released – Big Data, Small Cred­it: The Dig­i­tal Rev­o­lu­tion and Its Impact on Emerg­ing Mar­ket Con­sumers – a research report that ana­lyzes a new cat­e­go­ry of tech­nol­o­gy enter­pris­es that are dis­rupt­ing the tra­di­tion­al way of assess­ing con­sumer cred­it risk in emerg­ing mar­kets.

    Using non-finan­cial data—such as social media activ­i­ty and mobile phone usage patterns—complex algo­rithms and big data ana­lyt­ics are deliv­er­ing a quick­er, cheap­er, and more effec­tive cred­it assess­ment of con­sumers who lack cred­it his­to­ries and were invis­i­ble to lenders before.
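    For a sense of how such scoring works mechanically, here is a minimal sketch with invented features and weights; the real “Big Data, Small Credit” models are proprietary and far more elaborate. A lender with no credit-bureau file to consult scores phone-usage and social signals instead:

        import math

        # Hypothetical weights a lender might fit from past repayment outcomes.
        WEIGHTS = {
            "months_on_same_sim": 0.08,           # phone-number stability
            "airtime_topups_per_month": 0.05,
            "mobile_money_txns_per_month": 0.04,
            "social_contacts_in_default": -0.60,  # guilt by association
        }
        BIAS = -2.0

        def repayment_score(applicant):
            """Logistic score in [0, 1]; higher means more likely to repay."""
            z = BIAS + sum(WEIGHTS[k] * applicant.get(k, 0.0) for k in WEIGHTS)
            return 1.0 / (1.0 + math.exp(-z))

        applicant = {
            "months_on_same_sim": 36,
            "airtime_topups_per_month": 8,
            "mobile_money_txns_per_month": 15,
            "social_contacts_in_default": 1,
        }
        score = repayment_score(applicant)
        print(f"score: {score:.2f}", "-> approve" if score > 0.5 else "-> decline")

    Note what the last feature implies: in this kind of model, your friends’ defaults can count against you.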

    “The finan­cial ser­vices indus­try is on the brink of a new era, where har­ness­ing the pow­er of dig­i­tal infor­ma­tion to serve new seg­ments is becom­ing the new nor­mal,” says Mike Kubzan­sky, Omid­yar Net­work Part­ner.

    “Com­pa­nies in the ‘Big Data, Small Cred­it’ space are an exam­ple of how this par­a­digm shift can unlock an entire new pool of cus­tomers for for­mal lenders, while help­ing con­sumers in emerg­ing mar­kets get the ser­vices they need to improve their lives.”

    The report explores how the dig­i­tal rev­o­lu­tion and the result­ing explo­sion of data have con­verged to sig­nif­i­cant­ly enlarge the address­able con­sumer cred­it mar­ket for tra­di­tion­al and alter­na­tive lenders in devel­op­ing mar­kets. In India alone, this new approach to risk assess­ment can poten­tial­ly bring between 100 and 160 mil­lion new cus­tomers to the con­sumer cred­it mar­ket, which would mean tripling the cur­rent address­able mar­ket for retail lenders in the coun­try.

    “Big Data, Small Cred­it” also delves into the oppor­tu­ni­ties and chal­lenges ahead for these new busi­ness­es. It shares the results of an in-depth con­sumer sur­vey with ear­ly adopters in Kenya and Colom­bia by explor­ing press­ing ques­tions around pri­va­cy and trust, and pro­vides rec­om­men­da­tions to key stake­hold­ers on how to reap the ben­e­fits of this new, evolv­ing field.

    “Lis­ten­ing to the ear­ly adopter con­sumer is at the crux of real­iz­ing the poten­tial of the ‘Big Data, Small Cred­it’ busi­ness,” says Arju­na Cos­ta, Omid­yar Net­work Invest­ment Part­ner.

    “Our sur­vey shows that con­sumers in emerg­ing mar­kets have a clear under­stand­ing of the pri­va­cy trade­offs this type of solu­tion entails and sev­en out of 10 are will­ing to share infor­ma­tion they con­sid­er pri­vate in order to get a loan.”

    The con­sumer sur­vey found that ear­ly adopters can artic­u­late, dif­fer­en­ti­ate between, and rank dif­fer­ent types of pri­vate infor­ma­tion. They are also younger, sta­bly employed, and more edu­cat­ed and tech savvy than the aver­age pop­u­la­tion of both sur­veyed countries—an attrac­tive con­sumer seg­ment for any lender. How­ev­er, when faced with emer­gen­cies and cash-flow chal­lenges, the large major­i­ty still resort to an infor­mal source:

    – 88 per­cent of respon­dents in Kenya and 59 per­cent in Colom­bia go to fam­i­ly and friends for loans

    – 76 per­cent of respon­dents in Kenya and 34 per­cent in Colom­bia use oth­er infor­mal cred­it sources, such as pawn­shops and loan sharks

    While the report indicates that it is still early days for this new business and most providers are still experimenting with algorithms, models, and data sources, both the economic and social benefits of this approach can already be ascertained. In the world’s six biggest emerging economies—including China, Brazil, India, Mexico, Indonesia, and Turkey—this new technology has the potential to help between 325 and 580 million people gain access to formal credit for the first time.

    How­ev­er, in order to cap­i­tal­ize on this oppor­tu­ni­ty, the report rec­om­mends a con­cert­ed indus­try effort to build an ecosys­tem in which these enter­pris­es can con­tin­ue to devel­op.

    In par­tic­u­lar, it encour­ages incum­bents in the finan­cial ser­vices sec­tor to enhance their exist­ing risk assess­ment plat­forms with these new tech­nolo­gies, and advis­es pol­i­cy­mak­ers to bal­ance the need for con­sumer pro­tec­tion with the imper­a­tive to not reg­u­late this nascent indus­try too soon.

    ...

    So, accord­ing to the Omid­yar Net­work report:

    ...
    Using non-finan­cial data—such as social media activ­i­ty and mobile phone usage pat­terns—com­plex algo­rithms and big data ana­lyt­ics are deliv­er­ing a quick­er, cheap­er, and more effec­tive cred­it assess­ment of con­sumers who lack cred­it his­to­ries and were invis­i­ble to lenders before.

    “The finan­cial ser­vices indus­try is on the brink of a new era, where har­ness­ing the pow­er of dig­i­tal infor­ma­tion to serve new seg­ments is becom­ing the new nor­mal,” says Mike Kubzan­sky, Omid­yar Net­work Part­ner.

    “Com­pa­nies in the ‘Big Data, Small Cred­it’ space are an exam­ple of how this par­a­digm shift can unlock an entire new pool of cus­tomers for for­mal lenders, while help­ing con­sumers in emerg­ing mar­kets get the ser­vices they need to improve their lives.”

    ...

    “Our sur­vey shows that con­sumers in emerg­ing mar­kets have a clear under­stand­ing of the pri­va­cy trade­offs this type of solu­tion entails and sev­en out of 10 are will­ing to share infor­ma­tion they con­sid­er pri­vate in order to get a loan.”

    ...

    “The finan­cial ser­vices indus­try is on the brink of a new era, where har­ness­ing the pow­er of dig­i­tal infor­ma­tion to serve new seg­ments is becom­ing the new nor­mal”
    Ok, so asking to see things like your social networking data is expected to become “the new normal” for the financial services industry. Well, that’s pretty horrifying. But at least if the lenders are profiting from all that personal information, hopefully that means there will be non-exorbitant interest rates and more lenient terms in case borrowers can’t pay back the loan, especially for the poor borrowers. Hopefully.

    Posted by Pterrafractyl | October 29, 2015, 7:26 pm
  5. You know that classic scene in Office Space where the hyper-consistently cheerful phone operator is asked to stop being so hyper-consistent? Yeah, there are probably going to be a lot more conversations like that in the future, and those conversations are going to be a lot more futile:

    Pan­do Dai­ly
    Cog­i­to rais­es $5.5m to mon­i­tor the “tone” of call cen­ter work­ers

    By Dan Raile

    Novem­ber 17, 2015

    “This call may be mon­i­tored for qual­i­ty assur­ance pur­pos­es.”

    Every day mil­lions of mis­er­able human inter­ac­tions begin this way, usu­al­ly pit­ting a cus­tomer recent­ly escaped from “voice­mail jail” against a weary phone pro­fes­sion­al run­ning through a check­list on the screen in front of them.

    Today a cloud software outfit from Cambridge, Mass., called Cogito (“I think...”) announced it has raised a $5.5 million Series A from Romulus Capital and Salesforce Ventures to fund its mission to improve this experience for all involved.

    The Cog­i­to solu­tion is to pass the audio sig­nals of the calls through voice analy­sis and behav­ioral mod­els to give the agents and their super­vi­sors real time feed­back in a dash­board on their screen.

    “The tech­nol­o­gy is based on behav­ioral ana­lyt­ics. We can ana­lyze all the rich­ness of the human voice, not the words them­selves but things like pitch and tone and tex­ture and pace and over­lap­ping – all the rich com­po­nents of the human voice – using that to under­stand things like human inten­tions,” said Steve Kraus, Cog­i­to VP of Mar­ket­ing, by phone.

    The Cog­i­to dash­board dis­plays a con­tin­u­ous read­ing of those human qual­i­ties, giv­ing alerts and prompts to the agent so that they can adjust their tone and approach in order to sound more empa­thet­ic and devel­op bet­ter rap­port.
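    The underlying measurements are generic signal processing rather than anything exotic. Here is a rough sketch of the kind of low-level voice features such a dashboard could be built on (loudness, noisiness, and a crude pitch estimate), with no claim that this is Cogito’s actual model:

        import numpy as np

        SAMPLE_RATE = 16000

        def frame_features(frame):
            """Extract simple prosodic cues from one mono audio frame."""
            rms = float(np.sqrt(np.mean(frame ** 2)))                  # loudness proxy
            zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2)  # noisiness proxy
            # Crude pitch: autocorrelation peak within the human F0 range.
            ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
            lo, hi = SAMPLE_RATE // 400, SAMPLE_RATE // 60             # 60-400 Hz
            lag = lo + int(np.argmax(ac[lo:hi]))
            return {"rms": rms, "zero_cross_rate": zcr,
                    "pitch_hz": SAMPLE_RATE / lag}

        # Demo: a synthetic 200 Hz "voice" frame, 0.05 seconds long.
        t = np.arange(int(0.05 * SAMPLE_RATE)) / SAMPLE_RATE
        frame = 0.3 * np.sin(2 * np.pi * 200 * t)
        print(frame_features(frame))  # pitch_hz comes out near 200

    Track features like these frame by frame, compare them against a behavioral model, and you have the skeleton of a real-time “empathy” monitor.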

    “The beau­ti­ful thing is that on 100 per­cent of those calls we are gath­er­ing infor­ma­tion. Tra­di­tion­al­ly, agents have these one-off, once-a-month review ses­sions with their super­vi­sor, now the agents can be much more involved in the process,” said Kraus, “we can pro­vide objec­tive feed­back on 100 per­cent of calls.”

    That’s a promise that will sure­ly be music to the ears of call cen­ter work­ers, already amongst the most mea­sured and mon­i­tored employ­ees in the world. Now even their tone can be analysed: One imag­ines a con­stant feed of dig­i­tal con­cern trolling: U mad, bro? You seem stressed. Y u mad tho?

    Still, there is at least real sci­ence at work here. Cog­i­to began its life in the MIT Media Lab, before spin­ning out in 2007. Since then it has been devel­op­ing its mod­els and under­ly­ing archi­tec­ture, first val­i­dat­ing the prod­ucts of this research in pilot pro­grams fund­ed by DARPA and the Nation­al Insti­tutes of Health in stud­ies that attempt­ed to detect depres­sion and oth­er men­tal health dis­or­ders in the voice sig­nals of patients and vet­er­ans.

    That may also come in useful in call centers; reports over the years have claimed phone professionals exhibit a high incidence of emotional fatigue. And customer service is big business. For over twenty years, the world’s biggest companies have been cutting costs by outsourcing this work, resulting in a substantial industry. In the Philippines this sector, which falls under the rubric of Business Process Outsourcing, is the fastest growing in the entire economy. It accounts for 10% of GDP and has grown fivefold to some $15 billion in revenues since the early 2000s, employing over one million people. Roughly a third of the estimated 13 million global call center employees work in the United States.

    ...

    Cogito’s Dia­logue prod­uct is com­pat­i­ble with Avaya – one of the big soft­ware suites for call cen­ters in the mar­ket. Kraus said that Sales­force is a “chan­nel part­ner”– “they have so many deploy­ments and this is a nice way for us to get into those deploy­ments with them.”

    For now, Kraus says the com­pa­ny has its hands full “pen­e­trat­ing into the cus­tomer ser­vice space,” by invest­ing in sales and mar­ket­ing. From there, he said it’s nat­ur­al that the tech­nol­o­gy will spread into oth­er parts of their cus­tomers’ busi­ness­es. Will Cogito’s sales force be uti­liz­ing their own prod­uct in that process?

    “Yeah, our guys will use it,” he said.

    Through­out our phone call yes­ter­day, Kraus seemed inter­est­ed and engaged. He began his answers with val­i­da­tions – “that’s a good ques­tion” – and paused before speak­ing to make sure I had fin­ished. He spoke in a vari­ety of tones depend­ing on the con­tent of the con­ver­sa­tion, rang­ing from affa­ble to infor­ma­tive. It all seemed very nat­ur­al, spon­ta­neous, and authen­tic.

    Wow, so in addition to creating an even more depressing “Big Brother”-like workplace environment than already exists for many employees, Cogito’s product might even be able to detect depression and mental-health disorders. That’s, uh, convenient:

    Still, there is at least real sci­ence at work here. Cog­i­to began its life in the MIT Media Lab, before spin­ning out in 2007. Since then it has been devel­op­ing its mod­els and under­ly­ing archi­tec­ture, first val­i­dat­ing the prod­ucts of this research in pilot pro­grams fund­ed by DARPA and the Nation­al Insti­tutes of Health in stud­ies that attempt­ed to detect depres­sion and oth­er men­tal health dis­or­ders in the voice sig­nals of patients and vet­er­ans.

    That may also come in useful in call centers; reports over the years have claimed phone professionals exhibit a high incidence of emotional fatigue...

    With such capabilities, you have to wonder which “other parts” of Cogito’s customers’ businesses will also start getting real-time audio surveillance feedback:

    ...
    For now, Kraus says the com­pa­ny has its hands full “pen­e­trat­ing into the cus­tomer ser­vice space,” by invest­ing in sales and mar­ket­ing. From there, he said it’s nat­ur­al that the tech­nol­o­gy will spread into oth­er parts of their cus­tomers’ busi­ness­es. Will Cogito’s sales force be uti­liz­ing their own prod­uct in that process?
    ...

    Hmmm...yeah, it doesn’t look like Cogito-like technology is going to be limited to call centers...

    Posted by Pterrafractyl | November 18, 2015, 6:34 pm
  6. One of the grimly fascinating aspects of the emerging Big Data revolution in the workplace is that as employers use more and more Big Data monitoring to increase worker productivity, not only might this erode workers’ health, but that same Big Data approach might actually allow employers to track that decline in health. And now that companies are starting to experiment with hiring third-party Big Data service providers to track employee health and predict which workers might get sick, using methods that include buying information on employees from third-party data brokers and scanning health insurance claims, you don’t need a lot of Big Data to predict that this is going to be a trend:

    The Wall Street Jour­nal
    Boss­es Har­ness Big Data to Pre­dict Which Work­ers Might Get Sick
    Well­ness firms mine per­son­al infor­ma­tion, seek­ing to antic­i­pate employ­ee health needs, min­i­mize cost

    By Rachel Emma Sil­ver­man
    Feb. 16, 2016 6:22 p.m. ET

    Employ­ee well­ness firms and insur­ers are work­ing with com­pa­nies to mine data about the pre­scrip­tion drugs work­ers use, how they shop, and even whether they vote, to pre­dict their indi­vid­ual health needs and rec­om­mend treat­ments.

    Try­ing to stem ris­ing health-care costs, some com­pa­nies, includ­ing retail­er Wal-Mart Stores Inc., are pay­ing firms like Cast­light Health­care Inc. to col­lect and crunch employ­ee data to iden­ti­fy, for exam­ple, which work­ers are at risk for dia­betes, and tar­get them with per­son­al­ized mes­sages nudg­ing them toward a doc­tor or ser­vices such as weight-loss pro­grams.

    Com­pa­nies say the goal is to get employ­ees to improve their own health as a way to cut cor­po­rate health-care bills. But pri­va­cy advo­cates have con­cerns about such prac­tices, which are new enough that rel­a­tive­ly few work­ers are aware of them.

    “I bet I could bet­ter pre­dict your risk of a heart attack by where you shop and where you eat than by your genome,” says Har­ry Green­spun, direc­tor of Deloitte LLP’s Cen­ter for Health Solu­tions, a research arm of the con­sult­ing firm’s health-care prac­tice.

    An employ­ee who spends mon­ey at a bike shop is more like­ly to be in good health than some­one who spends on videogames, Mr. Green­spun says. Cred­it scores can also sug­gest whether an indi­vid­ual will be read­mit­ted to the hos­pi­tal fol­low­ing an ill­ness, he says. Those with low­er cred­it scores may be less like­ly to fill pre­scrip­tions and show up for fol­low-up appoint­ments, adds Mr. Green­spun.

    Well­tok Inc., whose clients include Colorado’s state employ­ees, has found that peo­ple who vote in midterm elec­tions tend to be health­i­er than those who skip them, says Chris Coloian, the firm’s chief solu­tions offi­cer. In gen­er­al, midterm vot­ers are more mobile and more active in the com­mu­ni­ty, strong indi­ca­tors of over­all health, he says.

    As employ­ers more active­ly involve them­selves in employ­ee well­ness, pri­va­cy experts wor­ry that man­age­ment could obtain work­ers’ health infor­ma­tion, even if by acci­dent, and use it to make work­place deci­sions.

    Fed­er­al health-pri­va­cy laws gen­er­al­ly bar employ­ers from view­ing work­ers’ per­son­al health infor­ma­tion, though self-insured employ­ers have more lee­way, says Careen Mar­tin, a health-care and cyber­se­cu­ri­ty lawyer at Nilan John­son Lewis PA. Instead, employ­ers con­tract with well­ness firms who have access to work­ers’ health data.

    “There are enormous potential risks” in these efforts, such as the exposure of personal health data to employers or others, says Frank Pasquale, a law professor at the University of Maryland who studies health privacy.

    Typ­i­cal­ly, when a com­pa­ny hires a firm like Cast­light, it autho­rizes the firm to col­lect infor­ma­tion from insur­ers and oth­er health com­pa­nies that work with the client com­pa­ny. Employ­ees are prompt­ed to grant the firm per­mis­sion to send them health and well­ness infor­ma­tion via an app, email or oth­er chan­nels, but can opt out.

    Based on data such as an individual’s claims his­to­ry, the firms can iden­ti­fy an indi­vid­ual who might be con­sid­er­ing cost­ly pro­ce­dures like spinal surgery, and can send that per­son rec­om­men­da­tions for a sec­ond opin­ion or phys­i­cal ther­a­py. Some firms, such as Well­tok and GNS Health­care Inc., also buy infor­ma­tion from data bro­kers that lets them draw con­nec­tions between con­sumer behav­ior and health needs.

    Employ­ers gen­er­al­ly aren’t allowed to know which indi­vid­u­als are flagged by data min­ing, but the well­ness firms—usually paid sev­er­al dol­lars a month per employee—provide aggre­gat­ed data on the num­ber of employ­ees found to be at risk for a giv­en con­di­tion.

    To deter­mine which employ­ees might soon get preg­nant, Cast­light recent­ly launched a new prod­uct that scans insur­ance claims to find women who have stopped fill­ing birth-con­trol pre­scrip­tions, as well as women who have made fer­til­i­ty-relat­ed search­es on Castlight’s health app.

    That data is matched with the woman’s age, and if applic­a­ble, the ages of her chil­dren to com­pute the like­li­hood of an impend­ing preg­nan­cy, says Jonathan Rende, Castlight’s chief research and devel­op­ment offi­cer. She would then start receiv­ing emails or in-app mes­sages with tips for choos­ing an obste­tri­cian or oth­er pre­na­tal care. If the algo­rithm guessed wrong, she could opt out of receiv­ing sim­i­lar mes­sages.
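    Stripped of the marketing language, the logic is a rules engine over claims data. A simplified sketch (the field names, rules, and weights are invented; Castlight’s actual product is not public):

        from datetime import date

        def pregnancy_likelihood(member, today):
            """Heuristic 0-1 score from claims and app-search signals."""
            score = 0.0
            last_refill = member.get("last_birth_control_refill")
            if last_refill and (today - last_refill).days > 90:
                score += 0.4  # prescription apparently stopped
            if member.get("fertility_searches", 0) > 0:
                score += 0.4  # fertility-related searches in the health app
            if 20 <= member.get("age", 0) <= 40:
                score += 0.2  # age band where pregnancy is most common
            return min(score, 1.0)

        member = {
            "age": 31,
            "last_birth_control_refill": date(2015, 10, 1),
            "fertility_searches": 3,
        }
        score = pregnancy_likelihood(member, today=date(2016, 2, 16))
        if score >= 0.7:
            print(f"score {score:.1f}: enroll in prenatal-care messaging (opt-out)")

    A few lines of code, and an employer’s vendor is guessing at pregnancies before the employee has told anyone.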

    Spinal surgery, which can cost $20,000 or more, is anoth­er area where data experts are dig­ging in. After find­ing that 30% of employ­ees who got sec­ond opin­ions from top-rat­ed med­ical cen­ters end­ed up for­go­ing spinal surgery, Wal-Mart tapped Cast­light to iden­ti­fy and com­mu­ni­cate with work­ers suf­fer­ing from back pain.

    To find them, Cast­light scans insur­ance claims relat­ed to back pain, back imag­ing or phys­i­cal ther­a­py, plus phar­ma­ceu­ti­cal claims for pain med­ica­tions or spinal injec­tions. Once iden­ti­fied, the work­ers get infor­ma­tion about mea­sures that could delay or head off surgery, such as phys­i­cal ther­a­py or sec­ond-opin­ion providers.

    To steer more J.P. Mor­gan Chase & Co. employ­ees to doc­tors in its net­work, insur­er Cigna Corp. ana­lyzed claims data to iden­ti­fy employ­ees who lacked pri­ma­ry-care physi­cians. Those employ­ees got per­son­al­ized mes­sages on Cigna’s mobile app with rec­om­men­da­tions for in-net­work doc­tors, says Michael Sturmer, a region­al Cigna exec­u­tive in the North­east. Employ­ees who had down­loaded the Cigna app used in-net­work providers about 2% more than they did before the sys­tem was imple­ment­ed in 2015.

    Some peo­ple may feel uncom­fort­able with the idea that their per­son­al data is being used to pre­dict their future. Cast­light care­ful­ly test-mar­kets its mes­sages to try to avoid appear­ing too intru­sive, says Mr. Rende. “Every word mat­ters,” he says.

    ...

    Pre­dict­ing health out­comes is the easy part, the firms say. The tough part is get­ting employ­ees to take action—and mes­sag­ing them can only do so much.

    Health-care man­age­ment firm Jiff Inc. is using data to sort employ­ees by per­son­al­i­ty type, and tai­lor­ing its approach to each type. A work­er who is reluc­tant to par­tic­i­pate in fit­ness pro­grams, for exam­ple, might be offered rich­er incen­tives, such as a pre­mi­um reduc­tion on their health insur­ance, to take part.

    “Prediction with no solution isn’t very valuable,” says Derek Newell, Jiff’s chief executive. “If we can’t get people to do something, then that prediction has a value of zero.”

    “As employ­ers more active­ly involve them­selves in employ­ee well­ness, pri­va­cy experts wor­ry that man­age­ment could obtain work­ers’ health infor­ma­tion, even if by acci­dent, and use it to make work­place deci­sions.”
    Yeah, that seems like one of the obvious risks here, especially when reducing healthcare costs is the primary purpose of the service. And especially when Walmart, a company known for holding food drives where employees donate to other employees who were paid so little they were going hungry, is the company leading the way. And then there’s the fact that this is being done using third-party Big Data service providers to get around employee privacy restrictions. It seems like quite a recipe for “accidents”: employers ‘accidentally’ finding out which employee is about to get an expensive illness, and then maybe ‘accidentally’ interpreting the rest of that employee’s Big Data in a manner that leads to them getting laid off, a routine part of the workplace of the future. As the executive says at the end, “Prediction with no solution isn’t very valuable.” Well, there’s a pretty obvious solution regardless of the employee’s medical condition: fire them.

    So it looks like we’re on the cusp of a grand new ear­ly warn­ing sys­tem for com­ing health mal­adies: when you’re sud­den­ly fired with­out warn­ing but your com­pa­ny appears to be in decent health, you’re prob­a­bly about to suf­fer an expen­sive health cri­sis. Good luck! Just remem­ber to turn in your badge on the way out.

    Posted by Pterrafractyl | February 18, 2016, 11:37 am
  7. Just FYI, if AT&T is your cellphone provider, your local billboards might be about to get a lot more persuasive:

    The New York Times
    See That Bill­board? It May See You, Too

    By SYDNEY EMBER
    FEB. 28, 2016

    Pass a bill­board while dri­ving in the next few months, and there is a good chance the com­pa­ny that owns it will know you were there and what you did after­ward.

    Clear Chan­nel Out­door Amer­i­c­as, which has tens of thou­sands of bill­boards across the Unit­ed States, will announce on Mon­day that it has part­nered with sev­er­al com­pa­nies, includ­ing AT&T, to track people’s trav­el pat­terns and behav­iors through their mobile phones.

    By aggre­gat­ing the trove of data from these com­pa­nies, Clear Chan­nel Out­door hopes to pro­vide adver­tis­ers with detailed infor­ma­tion about the peo­ple who pass its bill­boards to help them plan more effec­tive, tar­get­ed cam­paigns. With the data and ana­lyt­ics, Clear Chan­nel Out­door could deter­mine the aver­age age and gen­der of the peo­ple who are see­ing a par­tic­u­lar bill­board in, say, Boston at a cer­tain time and whether they sub­se­quent­ly vis­it a store.

    “In aggre­gate, that data can then tell you infor­ma­tion about what the aver­age view­er of that bill­board looks like,” said Andy Stevens, senior vice pres­i­dent for research and insights at Clear Chan­nel Out­door. “Obvi­ous­ly that’s very valu­able to an adver­tis­er.”
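    Mechanically, the pitch is aggregation: pool location pings near a billboard and report only averaged audience traits, never individual records. A minimal sketch of that step, with invented coordinates, radius, and pings:

        import math

        BILLBOARD = (42.3601, -71.0589)  # hypothetical billboard location
        RADIUS_M = 150

        def meters_apart(a, b):
            """Haversine distance between two (lat, lon) points."""
            lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
            h = (math.sin((lat2 - lat1) / 2) ** 2
                 + math.cos(lat1) * math.cos(lat2)
                 * math.sin((lon2 - lon1) / 2) ** 2)
            return 6371000 * 2 * math.asin(math.sqrt(h))

        # Each ping: hashed device id, position, age, gender, later store visit?
        pings = [
            ("h1", (42.3605, -71.0585), 34, "F", True),
            ("h2", (42.3599, -71.0592), 45, "M", False),
            ("h3", (42.4000, -71.1000), 29, "F", True),  # too far away: excluded
        ]
        exposed = [p for p in pings if meters_apart(BILLBOARD, p[1]) <= RADIUS_M]
        print("audience size:", len(exposed))
        print("average age:", sum(p[2] for p in exposed) / len(exposed))
        print("store-visit rate:", sum(p[4] for p in exposed) / len(exposed))

    The privacy question, of course, is whether the per-device records feeding those averages are handled as carefully as the averages themselves.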

    Clear Chan­nel and its part­ners — AT&T Data Pat­terns, a unit of AT&T that col­lects loca­tion data from its sub­scribers; Pla­ceIQ, which uses loca­tion data col­lect­ed from oth­er apps to help deter­mine con­sumer behav­ior; and Placed, which pays con­sumers for the right to track their move­ments and is able to link expo­sure to ads to in-store vis­its — all insist that they pro­tect the pri­va­cy of con­sumers. All data is anony­mous and aggre­gat­ed, they say, mean­ing indi­vid­ual con­sumers can­not be iden­ti­fied.

    Still, Mr. Stevens acknowl­edged that the company’s new offer­ing “does sound a bit creepy.”

    But, he added, the com­pa­ny was using the same data that mobile adver­tis­ers have been using for years, and show­ing cer­tain ads to a spe­cif­ic group of con­sumers was not a new idea. “It’s easy to for­get that we’re just tap­ping into an exist­ing data ecosys­tem,” he said.

    In many ways, bill­boards are still stuck in the old-media world, where com­pa­nies tried to deter­mine how many peo­ple saw bill­boards by count­ing the cars that drove by. But in recent years, bill­board com­pa­nies have made more of an effort to step into the dig­i­tal age. Some bill­boards, for exam­ple, have been equipped with small cam­eras that col­lect infor­ma­tion about the peo­ple walk­ing by. Clear Chan­nel Outdoor’s move is yet anoth­er attempt to mod­ern­ize bill­boards and enable the kind of audi­ence mea­sure­ments that adver­tis­ers have come to expect.

    Pri­va­cy advo­cates, how­ev­er, have long raised ques­tions about mobile device track­ing, par­tic­u­lar­ly as com­pa­nies have meld­ed this loca­tion infor­ma­tion with con­sumers’ online behav­ior to form detailed audi­ence pro­files. Oppo­nents con­tend that peo­ple often do not real­ize their loca­tion and behav­ior are being tracked, even if they have agreed at some point to allow com­pa­nies to mon­i­tor them. And while near­ly all of these com­pa­nies claim that the data they col­lect is anony­mous and aggre­gat­ed — and that con­sumers can opt out of track­ing at any time — pri­va­cy advo­cates are skep­ti­cal.

    “Peo­ple have no idea that they’re being tracked and tar­get­ed,” said Jef­frey Chester, exec­u­tive direc­tor of the Cen­ter for Dig­i­tal Democ­ra­cy. “It is incred­i­bly creepy, and it’s the most recent intru­sion into our pri­va­cy.”

    The Fed­er­al Trade Com­mis­sion has brought a num­ber of cas­es relat­ed to mobile device track­ing and the col­lec­tion of geolo­ca­tion infor­ma­tion. In 2013, the agency set­tled charges with the com­pa­ny behind a pop­u­lar Android app that turned mobile devices into flash­lights. The agency said the company’s pri­va­cy pol­i­cy did not inform con­sumers that it was shar­ing their loca­tion infor­ma­tion with third par­ties like adver­tis­ers. Last year, the agency set­tled charges against Nomi Tech­nolo­gies, a retail-track­ing com­pa­ny that uses sig­nals from shop­pers’ mobile phones to track their move­ments through stores. The agency claimed that the com­pa­ny had mis­led con­sumers about their opt-out options.

    ...

    Clear Chan­nel Out­door will offer Radar in its top 11 mar­kets, includ­ing Los Ange­les and New York, start­ing on Mon­day, with plans to make it avail­able across the coun­try lat­er this year.

    “Clear Channel Outdoor Americas, which has tens of thousands of billboards across the United States, will announce on Monday that it has partnered with several companies, including AT&T, to track people’s travel patterns and behaviors through their mobile phones.”
    Note that AT&T asserts that users can opt out of this tracking feature on the AT&T website, and that all of the data it provides to Clear Channel is aggregated and anonymized. Let’s hope that’s true. But regardless, there’s nothing stopping Clear Channel from partnering with other location data providers in the future, and given all the random apps out there that collect your location data even when you’re not running them, obtaining that data directly from app providers seems possible. So if you aren’t an AT&T wireless customer, and you’d also like to opt out of any location-based advertising services, it’s probably a good time to make sure the location-sharing settings on your smartphone are turned off. That said, if you really don’t want your location tracked by apps and advertisers, you probably want to get rid of that smartphone:

    CSO Online
    RSA: Geolo­ca­tion shows just how dead pri­va­cy is

    By Tay­lor Armerd­ing

    CSO | Mar 2, 2016 11:39 AM PT

    A reg­u­lar refrain with­in the online secu­ri­ty com­mu­ni­ty is that pri­va­cy is dead.

    David Adler’s talk at RSA Tues­day, titled “Where you are is who you are: Legal trends in geolo­ca­tion pri­va­cy and secu­ri­ty,” was about one of the major rea­sons it is so, so dead.

    To para­phrase Adler, founder of the Adler Law Group, it is not so much that in today’s con­nect­ed world there is a sin­gle, malev­o­lent Big Broth­er watch­ing you. It’s that there are dozens, per­haps hun­dreds, of “lit­tle broth­ers” eager­ly watch­ing you so they can sell you stuff more effec­tive­ly. Col­lec­tive­ly, they add up to an increas­ing­ly omni­scient big broth­er.

    “Every­thing is gath­er­ing loca­tion data – apps, mobile devices and plat­forms that you use,” he said. “Often it is being done with­out your knowl­edge or con­sent.

    “And at same time, pri­va­cy advo­cates have ID’d geolo­ca­tion as par­tic­u­lar­ly sen­si­tive infor­ma­tion.”

    That, as numer­ous experts have been warn­ing for some time now, is because data about where you are at all times of the day can paint an incred­i­bly detailed and inva­sive pic­ture about who you are – your polit­i­cal, food, reli­gious, sex­u­al and shop­ping pref­er­ences, med­ical con­di­tions, job, fam­i­ly, friends and oth­er rela­tion­ships, and, of course, where you live.
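    It takes remarkably little cleverness to extract those details. A small sketch of one classic inference, with invented pings: whichever place a phone sits between midnight and 6 a.m., night after night, is almost certainly home.

        from collections import Counter
        from datetime import datetime

        pings = [  # (timestamp, rounded lat/lon "cell")
            (datetime(2016, 3, 1, 2, 10), (42.3601, -71.0589)),
            (datetime(2016, 3, 1, 3, 40), (42.3601, -71.0589)),
            (datetime(2016, 3, 1, 14, 0), (42.3736, -71.1097)),  # daytime: office?
            (datetime(2016, 3, 2, 1, 55), (42.3601, -71.0589)),
            (datetime(2016, 3, 2, 13, 30), (42.3736, -71.1097)),
        ]

        def likely_home(pings):
            """Most frequent location between midnight and 6 a.m."""
            night = [cell for ts, cell in pings if ts.hour < 6]
            return Counter(night).most_common(1)[0][0] if night else None

        print("probable home:", likely_home(pings))

    Swap in daytime hours and the same few lines of logic guess your workplace; filter to Sunday mornings and they guess your church.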

    And, as is also well known, peo­ple make it very easy to col­lect that data. They essen­tial­ly give it away. “A lot has to do with the shift to mobile devices,” Adler said. “What peo­ple used to do on their desk­tops, they now do on mobile.”

    He cit­ed a Pew Research Cen­ter study on how peo­ple use their cell phones, which found that 40 per­cent used it for gov­ern­ment ser­vices, 43 per­cent to research job infor­ma­tion, 18 per­cent to sub­mit job appli­ca­tions, 44 per­cent to look for real estate, 62 per­cent to research health con­di­tions and 57 per­cent for online bank­ing.

    “In addi­tion to the sen­si­tiv­i­ty of the sub­jects, you fold in the loca­tion data, and it can become very reveal­ing,” Adler said.

    Avoid­ing this is not as sim­ple as turn­ing off the “loca­tion ser­vices” fea­ture in a smart­phone either, he not­ed.

    “That is only one of sev­er­al ways loca­tion data is gath­ered,” he said. “I was shocked at tech­nol­o­gy behind it. It is col­lect­ed by the cell tow­er that your device talks to. Wi-Fi hotspots not only share the loca­tion, but time stamp it. Your phone logs all of it – your key­board cache, SIM card ser­i­al num­ber, your num­ber, your email address. All of this can be gath­ered by apps, and they don’t have to ask your per­mis­sion.”

    There is a grow­ing aware­ness of these risks not just from pri­va­cy advo­cates, but from at least some gov­ern­ment agen­cies as well. Adler quot­ed the Fed­er­al Trade Commission’s Direc­tor of the Con­sumer Pro­tec­tion Divi­sion Jes­si­ca Rich, who said two years ago that “Geolo­ca­tion infor­ma­tion divulges inti­mate­ly per­son­al details of an indi­vid­ual.”

    He also not­ed the pas­sage of the Con­sumer Pri­va­cy Bill of Rights Act of 2015, along with oth­er leg­is­la­tion pend­ing.

    But it is unlike­ly that things will change soon in any major way. The Cen­ter for Democ­ra­cy & Tech­nol­o­gy (CDT) called the con­sumer pri­va­cy bill “an incred­i­bly impor­tant first step,” but also said it con­tains, “too many loop­holes, and enforce­ment is lack­ing.”

    Adler said that is in part because the U.S. still “has no uniform privacy laws, and enforcement is ad hoc.” He said a number of consumer complaints “have fizzled in the courts, because they depend on very specific harm to individuals.”

    ...

    “To para­phrase Adler, founder of the Adler Law Group, it is not so much that in today’s con­nect­ed world there is a sin­gle, malev­o­lent Big Broth­er watch­ing you. It’s that there are dozens, per­haps hun­dreds, of “lit­tle broth­ers” eager­ly watch­ing you so they can sell you stuff more effec­tive­ly. Col­lec­tive­ly, they add up to an increas­ing­ly omni­scient big broth­er.”
    It’s definitely a “the whole is greater than the sum of its parts” situation when you’re talking about an array of “little brothers”. Especially when those “little brothers” are sharing information with each other. And it’s apparently possible that those “little brother” apps sitting on your smartphone just might be going the extra mile to gather your location, even when you tell them not to:

    ...
    Avoid­ing this is not as sim­ple as turn­ing off the “loca­tion ser­vices” fea­ture in a smart­phone either, he not­ed.

    “That is only one of sev­er­al ways loca­tion data is gath­ered,” he said. “I was shocked at tech­nol­o­gy behind it. It is col­lect­ed by the cell tow­er that your device talks to. Wi-Fi hotspots not only share the loca­tion, but time stamp it. Your phone logs all of it – your key­board cache, SIM card ser­i­al num­ber, your num­ber, your email address. All of this can be gath­ered by apps, and they don’t have to ask your per­mis­sion.”
    ...

    So, given how the groups that gather this data generally do it for the purpose of selling it to others, it seems like it should take just one of your “little brother” apps surreptitiously gathering that location data through unorthodox means before the rest of the data collection/marketing industries start getting access too. They’ll just buy it from the rogue app provider. At least it sounds like that’s possible.

    As creepy as all that sounds, keep in mind that the future of per­son­al­ized bill­boards can always get creepi­er:

    Vice Moth­er­board
    The Bill­boards of the Future Are ‘Trix­e­lat­ed’ 3D Holo­grams

    Writ­ten by Becky Fer­reira
    Con­trib­u­tor

    Jan­u­ary 15, 2015 // 05:25 PM EST

    When Mar­ty McFly gets dropped off in 2015 in Back to the Future II, one of the first futur­is­tic tech­nolo­gies he expe­ri­ences is a 3D holo­gram shark adver­tis­ing Jaws 19. Now, it turns out that life­like 3D dis­plays may actu­al­ly come to fruition in the next year, just as Prophet Zemeck­is promised.

    A col­lab­o­ra­tion between the Aus­tri­an tech start­up TriLite Tech­nolo­gies and the Vien­na Uni­ver­si­ty of Tech­nol­o­gy has revealed that using tiny mir­rors to reflect lasers in numer­ous direc­tions can trick view­ers into inter­pret­ing an image as three-dimen­sion­al.

    “The mir­ror directs the laser beams across the field of vision, from left to right,” explained UT Vien­na com­put­er engi­neer Ulrich Schmid in a state­ment. “Dur­ing that move­ment the laser inten­si­ty is mod­u­lat­ed so that dif­fer­ent laser flash­es are sent into dif­fer­ent direc­tions.”

    The upshot is that these mir­rored 3D pix­els, or “trix­els” as the team calls them, could project hun­dreds of dif­fer­ent images out­ward, as com­pared to the 3D movie tech­nique, which only projects two, and requires that the view­er wear glass­es. Walk­ing around such a trix­e­lat­ed bill­board, on the oth­er hand, would make the image appear to be a high­ly resolved, three-dimen­sion­al object to the naked eye.
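    A toy model of the multi-view idea, leaving the laser optics aside: the display sends a different image into each narrow angular sector, so what you see depends on where you stand. The sector count and geometry below are invented; the real TriLite work steers laser flashes with micromirrors.

        import math

        NUM_VIEWS = 12         # distinct images the billboard can emit
        FIELD_OF_VIEW = 120.0  # degrees covered, centered on the display normal

        def image_for_viewer(viewer_x, viewer_y):
            """Pick which image a viewer at (x, y) sees; the billboard sits
            at the origin and faces the +y direction."""
            angle = math.degrees(math.atan2(viewer_x, viewer_y))  # 0 = head-on
            angle = max(-FIELD_OF_VIEW / 2, min(FIELD_OF_VIEW / 2, angle))
            sector = int((angle + FIELD_OF_VIEW / 2) / (FIELD_OF_VIEW / NUM_VIEWS))
            return min(sector, NUM_VIEWS - 1)

        print(image_for_viewer(0, 20))    # head-on viewer
        print(image_for_viewer(-15, 20))  # person at the bus stop to the left
        print(image_for_viewer(15, 20))   # person leaving the shop to the right

    Twelve sectors means twelve different pitches from one physical sign, which is exactly the capability the CEO boasts about below.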

    Good­bye, 3D glass­es. Hel­lo, ubiq­ui­tous, laser-gen­er­at­ed images that jump out direct­ly at you from every angle.

    It actu­al­ly gets a lit­tle creepi­er. Accord­ing to a study authored by TriLite/UT Vien­na team and pub­lished in Optics Express, a sin­gle elec­tron­ic bill­board could present mul­ti­ple images, which could change depend­ing on the angle it’s viewed from.

    “Maybe some­one wants to appeal specif­i­cal­ly to the cus­tomers leav­ing the shop across the street, and a dif­fer­ent ad is shown to the peo­ple wait­ing at the bus stop”, TriLite CEO Fer­di­nand Saint-Julien, said in a state­ment. “Tech­ni­cal­ly, this would not be a prob­lem.”

    So, if you think tar­get­ed online adver­tis­ing is inva­sive, just wait until bus stops, train cars, and road­side bill­boards start spam­ming you with com­mer­cials based on your dai­ly habits. I’d rather have the cheesy Jaws 19 shark fol­low­ing me around than that.

    In the study, the team described their mod­est pro­to­type ver­sion of their dis­play, which has a trix­el res­o­lu­tion of five by three. But the next pro­to­type is already in the works, and the researchers are shoot­ing to launch the dis­play com­mer­cial­ly as ear­ly as 2016.

    As inge­nious as the con­cept of trix­e­la­tion is, it’s dis­cour­ag­ing to think of it sole­ly as an adver­tis­ing tool. After all, this approach could back­fire in all kinds of unpre­dictable ways. Dis­tract­ed dri­ving has become a huge prob­lem in the age of the Smart­phone, and now we want to throw tai­lored, 3D ads up every­where? It seems like a pub­lic safe­ty night­mare, not to men­tion the obvi­ous Orwellian dimen­sion of tar­get­ing spe­cif­ic per­spec­tives with dif­fer­ent mes­sages.

    ...

    One thing is for certain: this display’s capabilities could fundamentally change our relationship with visual media. Whether that will end up being good, bad, or somewhere in between remains to be seen.

    “The upshot is that these mir­rored 3D pix­els, or “trix­els” as the team calls them, could project hun­dreds of dif­fer­ent images out­ward, as com­pared to the 3D movie tech­nique, which only projects two, and requires that the view­er wear glass­es. Walk­ing around such a trix­e­lat­ed bill­board, on the oth­er hand, would make the image appear to be a high­ly resolved, three-dimen­sion­al object to the naked eye.”
    That’s right, the next generation of billboards will involve highly resolved 3D objects. And when you combine location data, personal marketing data, and 3D hologram technology...“Goodbye, 3D glasses. Hello, ubiquitous, laser-generated images that jump out directly at you from every angle.” Yep, the world is about to become a personalized version of Disney’s Haunted Mansion, except instead of fun ghost holograms it will be crappy product holograms. That sounds both kind of cool (yay holograms!) and pretty creepy, which is still better than the ubiquitous location tracking, which is just plain creepy.

    There’s a glimpse at the near-future of billboard advertising: creepily personalized holograms. The holograms themselves may or may not be creepy. But since they’ll probably be location-based personalized holograms, they’ll definitely be creepy.

    Posted by Pterrafractyl | March 9, 2016, 4:01 pm
  8. Here’s an example of the public pushing back against the endless incursion of privacy-violating smartphone technology: the US Federal Trade Commission sent a warning letter to a dozen smartphone app developers who have been caught using the background-noise tracking software developed by SilverPush to secretly determine what TV shows you’re watching. The firms were told that if they don’t inform users that their apps collect TV background data, they may violate FTC rules barring unfair or deceptive acts or practices, which suggests that the FTC still isn’t quite sure whether secretly embedding SilverPush’s software in your app actually is a violation of the FTC’s rules or not. Maybe it does violate the rules, but maybe not. At least that’s the strength of the language the FTC used in its warning.

    So let’s hope the FTC was just choosing to be polite by not using stronger language, because if not, that would suggest this was actually less a public pushback and more a polite public request to app developers, one that doubles as an admission that the FTC still isn’t quite sure if it’s illegal:

    PC World
    FTC warns app devel­op­ers against using audio mon­i­tor­ing soft­ware
    A dozen devel­op­ers appear to have pack­aged TV track­ing soft­ware into their prod­ucts, the agency says.

    Grant Gross
    IDG News Ser­vice

    Mar 17, 2016 2:28 PM

    The U.S. Fed­er­al Trade Com­mis­sion has sent warn­ing let­ters to 12 smart­phone app devel­op­ers for alleged­ly com­pro­mis­ing users’ pri­va­cy by pack­ag­ing audio mon­i­tor­ing soft­ware into their prod­ucts.

    The soft­ware, from an Indi­an com­pa­ny called Sil­ver­Push, allows apps to use the smart­phone’s micro­phone to lis­ten to near­by tele­vi­sion audio in an effort to deliv­er more tar­get­ed adver­tise­ments. Sil­ver­Push allows the apps to sur­rep­ti­tious­ly mon­i­tor the tele­vi­sion view­ing habits of peo­ple who down­loaded apps with the soft­ware includ­ed, the FTC said Thurs­day.

    “This func­tion­al­i­ty is designed to run silent­ly in the back­ground, even while the user is not active­ly using the appli­ca­tion,” the agency said in its let­ter to the app devel­op­ers. “Using this tech­nol­o­gy, Sil­ver­Push could gen­er­ate a detailed log of the tele­vi­sion con­tent viewed while a user’s mobile phone was turned on.”
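    The detection side of such a scheme is simple enough to sketch. SilverPush reportedly embeds inaudible “audio beacons” in broadcast audio; a generic way for an app to listen for one fixed tone is the Goertzel algorithm, shown below. The 18 kHz beacon frequency and detection threshold are assumptions for illustration, not documented SilverPush values.

        import math

        def goertzel_power(samples, sample_rate, target_hz):
            """Power of a single frequency bin (Goertzel algorithm)."""
            k = int(0.5 + len(samples) * target_hz / sample_rate)
            w = 2 * math.pi * k / len(samples)
            coeff = 2 * math.cos(w)
            s_prev = s_prev2 = 0.0
            for x in samples:
                s = x + coeff * s_prev - s_prev2
                s_prev2, s_prev = s_prev, s
            return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

        SAMPLE_RATE, BEACON_HZ, N = 44100, 18000, 4410  # 0.1 s of audio
        # Demo signal: audible TV audio (440 Hz) plus a faint 18 kHz beacon.
        samples = [math.sin(2 * math.pi * 440 * n / SAMPLE_RATE)
                   + 0.05 * math.sin(2 * math.pi * BEACON_HZ * n / SAMPLE_RATE)
                   for n in range(N)]

        power = goertzel_power(samples, SAMPLE_RATE, BEACON_HZ)
        total = sum(x * x for x in samples)
        print("beacon detected:", power / total > 0.001)

    Because 18 kHz sits above most adults’ hearing range, the beacon is effectively invisible to the person whose TV habits it reports.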

    If the app devel­op­ers state or imply that their apps do not col­lect or trans­mit tele­vi­sion view­ing data when they actu­al­ly do, that may be a vio­la­tion of the sec­tion of the FTC Act bar­ring decep­tive and unfair busi­ness prac­tices, the agency said.

    The 12 devel­op­ers appear to have includ­ed Sil­ver­Push code in apps avail­able in the Google Play store, the FTC said.

    Sil­ver­Push has said its ser­vice isn’t now oper­at­ing in the U.S., but it encour­ages app devel­op­ers that pack­age its soft­ware to noti­fy cus­tomers about its abil­i­ty to mon­i­tor TV habits should the com­pa­ny move into the U.S. mar­ket, the FTC said. Sil­ver­Push rep­re­sen­ta­tives weren’t imme­di­ate­ly avail­able for com­ment on the FTC’s let­ter.

    “These apps were capa­ble of lis­ten­ing in the back­ground and col­lect­ing infor­ma­tion about con­sumers with­out noti­fy­ing them,” Jes­si­ca Rich, direc­tor of the FTC’s Bureau of Con­sumer Pro­tec­tion, said in a state­ment. “Com­pa­nies should tell peo­ple what infor­ma­tion is col­lect­ed, how it is col­lect­ed, and who it’s shared with.”

    Some app developers ask for permission to use a smartphone’s microphone even though the apps do not appear to need that functionality, the FTC said. The apps apparently packaging SilverPush don’t give users notice that they could monitor TV viewing habits, even if the app is not in use, the agency said.

    ...

    “If the app devel­op­ers state or imply that their apps do not col­lect or trans­mit tele­vi­sion view­ing data when they actu­al­ly do, that may be a vio­la­tion of the sec­tion of the FTC Act bar­ring decep­tive and unfair busi­ness prac­tices, the agency said.”
    Part of what’s a lit­tle dis­con­cert­ing about the FTC’s warn­ing is that it’s specif­i­cal­ly warn­ing against app devel­op­ers “stat­ing or imply­ing” that their apps don’t col­lect this kind of data. So...what if the devel­op­ers don’t say any­thing at all? Does that fall under the “imply” cat­e­go­ry? Let’s hope so. Here’s the spe­cif­ic lan­guage:

    ...
    Upon downloading and installing your mobile application that embeds Silverpush, we received no disclosures about the included audio beacon functionality — either contextually as part of the setup flow, in a dedicated standalone privacy policy, or anywhere else.

    For the time being, Silverpush has represented that its audio beacons are not currently embedded into any television programming aimed at U.S. households. However, if your application enabled third parties to monitor television-viewing habits of U.S. consumers and your statements or user interface stated or implied otherwise, this could constitute a violation of the Federal Trade Commission Act. We would encourage you to disclose this fact to potential customers, empowering them to make an informed decision about what information to disclose in exchange for using your application. Our business guidance “Marketing Your Mobile App: Get It Right From The Start” can provide additional guidance on how to make sure consumers understand your data collection and sharing practices.
    ...

    “However, if your application enabled third parties to monitor television-viewing habits of U.S. consumers and your statements or user interface stated or implied otherwise, this could constitute a violation of the Federal Trade Commission Act. We would encourage you to disclose this fact to potential customers, empowering them to make an informed decision about what information to disclose in exchange for using your application.”
    That’s, uh, sort of encour­ag­ing, although it’s not quite clear who should be encour­aged.

    Posted by Pterrafractyl | March 18, 2016, 3:07 pm
  9. If you find yourself suddenly feeling stalked because the billboards in your town are tracking you personally, don’t worry, it’s not personal. They’re tracking everyone personally. So, actually, maybe some worry is in order:

    Chica­go Tri­bune

    Hey, you in the Alti­ma! Chevy bill­board spots rival cars, makes tar­get­ed pitch

    By Robert Chan­nick
    April 15, 2016, 5:03 AM

    Dri­vers along a busy Chica­go-area toll­way may have recent­ly noticed a large dig­i­tal bill­board that seems to be talk­ing direct­ly to them.

    It is.

    Launched last month here as well as in Dal­las and New Jer­sey, the eeri­ly Orwellian out­door cam­paign for Chevy Mal­ibu uses vehi­cle recog­ni­tion tech­nol­o­gy to iden­ti­fy com­pet­ing mid­size sedans and instant­ly dis­play ads aimed at their dri­vers.

    Cruis­ing along in an Alti­ma? The mes­sage might be “More Safe­ty Fea­tures Than Your Nis­san Alti­ma.” Dri­ving a Ford Fusion or Toy­ota Cam­ry? You might see a miles-per-gal­lon com­par­i­son between the Mal­ibu and your car. The ads last just long enough for approach­ing dri­vers of those vehi­cles to know they got sin­gled out and served by a bill­board.

    Con­sumers used to receiv­ing per­son­al­ized ads on their smart­phones may be sur­prised to see one on a 672-square-foot high­way bill­board. But data-based tech­nol­o­gy is find­ing its way into dig­i­tal out­door dis­plays of all types, enabling adver­tis­ers to track, reach and sell you stuff — even at 55 mph.

    “This is just the tip­ping point of the dis­rup­tion in out-of-home,” said Hel­ma Larkin, CEO of Poster­scope, an out-of-home com­mu­ni­ca­tions agency that designed the Mal­ibu cam­paign with bill­board com­pa­ny Lamar Adver­tis­ing. “The tech­nol­o­gy com­ing down the pike is fas­ci­nat­ing around what we could poten­tial­ly do to bring dig­i­tal con­cepts into the phys­i­cal world.”

    Out-of-home adver­tis­ing, which includes bill­boards, bus shel­ters, mall kiosks and oth­er pub­lic plat­forms, is see­ing growth fueled by such dig­i­tal inno­va­tion. Spend­ing on out­door adver­tis­ing rose 4.6 per­cent last year to $7.3 bil­lion, accord­ing to the Out­door Adver­tis­ing Asso­ci­a­tion of Amer­i­ca, the indus­try’s nation­al trade orga­ni­za­tion.

    There are about 370,000 bill­boards in the U.S., most of which still deliv­er a large sta­t­ic mes­sage to motorists the old-fash­ioned way — with posters or paint. Dig­i­tal bill­boards — giant TV screens that gen­er­al­ly rotate in new mes­sages every 8 sec­onds or so — num­ber about 6,400 nation­wide, and are gain­ing trac­tion.

    “There’s a good growth trend that we’ve seen over the past of the dig­i­tal road­side inven­to­ry,” said Stephen Fre­itas, chief mar­ket­ing offi­cer of the out­door adver­tis­ing trade asso­ci­a­tion. “Sev­er­al hun­dred new loca­tions are built every year.”

    ...

    The Mal­ibu cam­paign has tak­en over a 14-by-48-foot Lamar dig­i­tal bill­board fac­ing east along the Rea­gan Memo­r­i­al Toll­way (Inter­state 88) at Eola Road, near the Chica­go Pre­mi­um Out­lets mall in Auro­ra. Watch­ing traf­fic 24/7, a sep­a­rate cam­era mount­ed 1,000 feet ahead of the bill­board scans for vehi­cle grilles. When it rec­og­nizes a Fusion, Cam­ry or Alti­ma, the bill­board shifts from a gener­ic Mal­ibu ad to a com­peti­tor-spe­cif­ic one.

    The bill­board takes into account the speed of traf­fic to cal­cu­late the pre­cise moment to pull the trig­ger on the per­son­al­ized mes­sage, giv­ing those dri­vers 7 sec­onds of high­way fame that is equal parts big data and Big Broth­er, and per­haps the future of out-of-home adver­tis­ing.

    “It’s real­ly inno­v­a­tive because it is able to inform the cre­ative on the fly as the car goes by, and that’s real­ly bring­ing online tech­niques to the offline world,” Larkin said.

    Inter­ac­tive out­door dig­i­tal dis­plays have been mak­ing a splash for sev­er­al years at the pedes­tri­an lev­el, such as a 2014 bus shel­ter cam­paign pro­mot­ing a Har­ry Hou­di­ni TV minis­eries that chal­lenged Chica­go com­muters to hold their breath for three min­utes — dupli­cat­ing one of the magi­cian’s leg­endary tricks.

    Serv­ing the same sort of tar­get­ed ads that con­sumers receive on their smart­phones to a giant bill­board, how­ev­er, rep­re­sents a leap in the dig­i­tal evo­lu­tion of out­door adver­tis­ing, and a bold new can­vas that is sure to grab atten­tion.

    “Most peo­ple think it’s real­ly cool,” Larkin said. “Con­sumers are much more attract­ed to an ad and are much more prone to take notice of it when it relates to them and the envi­ron­ment that they’re in as opposed to a blan­ket state­ment.”

    Oth­ers fear tar­get­ed bill­board adver­tis­ing rep­re­sents yet anoth­er dig­i­tal assault on pri­va­cy. And unlike mobile apps, con­sumers can’t opt out of the bill­board­’s pry­ing eyes.

    “It’s the begin­ning of a dig­i­tal­ly dri­ven, intel­li­gent, out­door spy­ing appa­ra­tus that cap­tures all your details in order to adver­tise and mar­ket to you,” said Jef­frey Chester, exec­u­tive direc­tor of the Cen­ter for Dig­i­tal Democ­ra­cy, a Wash­ing­ton-based non­prof­it focused on con­sumer pro­tec­tion and pri­va­cy issues. “It’s a mis­take to think it’s just an out­door ad.”

    Bill­boards are already being used to track con­sumers. Clear Chan­nel Out­door Amer­i­c­as, for exam­ple, is using aggre­gat­ed mobile data to iden­ti­fy dri­vers pass­ing its Chica­go bill­boards and their shop­ping habits.

    Larkin said the Mal­ibu bill­board cam­era is only cap­tur­ing the grilles of the autos to iden­ti­fy the com­pet­i­tive brands, yield­ing less data than a typ­i­cal online ses­sion.

    “They don’t take pictures of people’s faces — it’s blurred out by the technology, the license plate is blurred,” Larkin said. “We are picking up the make and model of the cars.”

    For Chester, promis­es of aggre­gat­ed anonymi­ty fall on deaf ears. He is con­vinced that the Mal­ibu bill­board is already learn­ing more about the pass­ing dri­vers than they real­ize.

    “You might be able to ignore a bill­board, but this is a bill­board that is going to know you,” Chester said.

    “This is just the tipping point of the disruption in out-of-home,” said Helma Larkin, CEO of Posterscope, an out-of-home communications agency that designed the Malibu campaign with billboard company Lamar Advertising. “The technology coming down the pike is fascinating around what we could potentially do to bring digital concepts into the physical world.”
    Well, yes, it is pret­ty fas­ci­nat­ing. Of course there are oth­er ways to describe this trend in bring­ing dig­i­tal con­cepts to the phys­i­cal world:

    ...
    “It’s the begin­ning of a dig­i­tal­ly dri­ven, intel­li­gent, out­door spy­ing appa­ra­tus that cap­tures all your details in order to adver­tise and mar­ket to you,” said Jef­frey Chester, exec­u­tive direc­tor of the Cen­ter for Dig­i­tal Democ­ra­cy, a Wash­ing­ton-based non­prof­it focused on con­sumer pro­tec­tion and pri­va­cy issues. “It’s a mis­take to think it’s just an out­door ad.”
    ...

    Won’t it be fun when this technology hits the fashion industry? “Hey, you in the frumpy dress. Here’s a wardrobe (that won’t make the billboards yell at you in public).” Now that’s going to be customer service!
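
    Back to the mechanics for a moment: the article says a camera mounted 1,000 feet ahead of the billboard spots the grille, and the system uses traffic speed to time the ad swap. That math is simple enough to sketch. A back-of-the-envelope Python illustration, where only the distance and the 7-second ad length come from the article and the rest of the logic is our own guess:

        # Back-of-the-envelope sketch; only the distance and ad length
        # come from the article, the timing logic is assumed.
        CAMERA_TO_BILLBOARD_FT = 1_000   # camera mounted 1,000 feet ahead
        AD_DURATION_S = 7                # the article's "7 seconds of highway fame"

        def seconds_until_ad(speed_mph):
            """Delay between spotting a grille and swapping in the targeted ad."""
            feet_per_second = speed_mph * 5_280 / 3_600
            travel_time = CAMERA_TO_BILLBOARD_FT / feet_per_second
            # Start early enough that the ad is playing as the driver passes.
            return max(travel_time - AD_DURATION_S, 0.0)

        print(round(seconds_until_ad(55), 1))   # 5.4 seconds at highway speed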

    Also keep in mind that the billboard system described above is supposedly blocking all personally identifying information, like license plates and windshield shots, that could be used to identify the actual occupants of a car and deliver even more personalized ads in public spaces:

    ...
    “They don’t take pictures of people’s faces — it’s blurred out by the technology, the license plate is blurred,” Larkin said. “We are picking up the make and model of the cars.”
    ...

    So let’s hope that’s actually the case and this firm really is systematically preventing itself from collecting any personally identifying data. That would be nice. And who knows if it’s actually true. But if it is the case, it’s probably just a matter of time before it no longer is. It’s also worth keeping in mind that even if these new companies aren’t scanning your actual license plate and compiling a database of your vehicle’s movements, plenty of other companies already are:

    Car and Dri­ver

    Screen-Plate Club: How License-Plate Scan­ning Com­pro­mis­es Your Pri­va­cy
    You’ve prob­a­bly been tagged at the office, at a mall, or even in your own dri­ve­way.

    Oct 2014 By CLIFFORD ATIYEH

    Tow­ing com­pa­nies are a nec­es­sary evil when it comes to park­ing enforce­ment and prop­er­ty repos­ses­sion. But in the Google Earth we now inhab­it, tow trucks do more than just yank cars out of load­ing zones. They use license-plate read­ers (LPRs) to assem­ble a detailed pro­file of where your car will be and when. That’s an unnec­es­sary evil.

    Plate read­ers have long been a tool of law enforce­ment, and police offi­cers swear by them for track­ing stolen cars and appre­hend­ing dan­ger­ous crim­i­nals. But pri­vate com­pa­nies, such as repo crews, also pho­to­graph mil­lions of plates a day, with scan­ners mount­ed on tow trucks and even on pur­pose-built cam­era cars whose sole mis­sion is to dri­ve around and col­lect plate scans. Each scan is GPS-tagged and stamped with the date and time, feed­ing a mas­sive data trove to any law-enforce­ment agency—or gov­ern­ment-approved pri­vate industry—willing to pay for it.

    You’ve prob­a­bly been tagged at the office, at a mall, or even in your own dri­ve­way. And the com­pa­nies that sell spe­cial­ized mon­i­tor­ing soft­ware that assem­bles all these sight­ings into a reli­able pro­file stand to prof­it huge­ly. Bri­an Hauss, a legal fel­low for the Amer­i­can Civ­il Lib­er­ties Union (ACLU), says: “The whole point is so you can fig­ure out somebody’s long-term loca­tion. Unless there are lim­its on how those trans­ac­tions can be processed, I think it’s just a mat­ter of time until there are sig­nif­i­cant pri­va­cy vio­la­tions, if they haven’t already occurred.”

    How Is This Even Legal? License-plate-reader companies don’t have access to DMV registrations, so while they can track your car, they don’t know it’s yours. That information is guarded by the Driver’s Privacy Protection Act of 1994, which keeps your name, address, and driving history from public view. Mostly. There are plenty of exceptions, including for insurance companies and private investigators. LPR companies say only two groups can use their software to find the person behind the plate: law-enforcement agencies and repossession companies. In addition, the encrypted databases keep a log of each plate search and allow access to be restricted.

    The companies that push plate readers enjoy unregulated autonomy in most states. Vigilant Solutions of California and its partner, Texas-based Digital Recognition Network, boast at least 2 billion license-plate scans since starting the country’s largest private license-plate database, the National Vehicle Location Service, in 2009.

    In total, there are at least 3 bil­lion license-plate pho­tos in pri­vate data­bas­es. Since many are dupli­cates and nev­er delet­ed, ana­lyt­ics can paint a vivid pic­ture of any motorist. Pre­dict­ing where and when some­one will dri­ve is rel­a­tive­ly easy; soft­ware can sort how many times a car is spot­ted in a cer­tain area and, when fed enough data, can gen­er­ate a person’s dri­ving his­to­ry over time.

    You Can’t Run, But They Can Hide

    An average license-plate reader looks like four radar detectors, stacked two-by-two. But they aren’t always easy to spot. Both cops and private users hide LPRs in almost anything.

    And the sys­tems are get­ting smarter quick­ly. Vig­i­lant alone adds 100 mil­lion pho­tos every month, but com­pa­ny ­ mar­ket­ing vice pres­i­dent Bri­an Shock­ley says the word “track­ing” is mis­lead­ing. LPRs, he says, cap­ture “momen­tary, point-in-time” infor­ma­tion.

    ...

    Scott Jack­son, CEO of data provider MVTrac, con­tends that license-plate read­ers just auto­mate what police offi­cers and repo men have always done—run plates by eye—and that most Ameri­cans have accept­ed that the days of hav­ing true pri­va­cy are gone.

    “The pros of this tech­nol­o­gy far, far out­weigh the fear fac­tor of pri­va­cy,” he says, refer­ring to its suc­cess­ful police busts. “There are so many ways to track a per­son; this is not the one you should be wor­ried about.”

    Hauss of the ACLU dis­agrees. He asks, “Is it just so you can have a giant haystack that you can search when­ev­er you want, for what­ev­er pur­pose you want?”

    Paul Kulas, pres­i­dent of Col­orado-based BellesLink, which sells ver­i­fi­ca­tion soft­ware to repo com­pa­nies, says his indus­try needs to face these pub­lic con­cerns before it’s “lumped in with the sur­veil­lance state.” Some pri­va­cy state­ments by plate-read­er com­pa­nies, he says, have been mis­lead­ing.

    Kulas believes that the idea that LPR data can­not be linked to per­son­al infor­ma­tion is inac­cu­rate. “With­out reg­u­la­tion and with­out fore­sight,” he says, “this could get to a point where numer­ous law­suits could be brought against lenders and cam­era com­pa­nies because they have, in effect, obtained our loca­tion infor­ma­tion with­out our per­mis­sion.”

    As omi­nous as their pri­vate-sec­tor deploy­ment is, LPRs have incit­ed con­tro­ver­sy with their law-enforce­ment usage as well. In Decem­ber 2013, the city of Boston sus­pend­ed its LPR pro­gram after police acci­den­tal­ly revealed DMV-tied infor­ma­tion from its cam­eras to the Boston Globe. While that one inci­dent high­light­ed fail­ings in the department’s data pol­i­cy, plen­ty of agen­cies don’t even have such a thing. Some keep data for days, oth­ers for years. In most states, police can mon­i­tor you with LPRs with­out serv­ing a search war­rant or court order. And this Feb­ru­ary, a Depart­ment of Home­land Secu­ri­ty pro­pos­al for a pri­vate­ly host­ed fed­er­al plate-track­ing sys­tem was scrapped days after the Wash­ing­ton Post exposed it.

    Last year, police in Tempe, Ari­zona, refused an offer from Vig­i­lant for free LPR cam­eras. The catch: Every month, offi­cers would have to serve 25 war­rants from a list sup­plied by Vig­i­lant. Miss the quo­ta, lose the cam­eras. Such lists, accord­ing to the Los Ange­les Times inves­ti­ga­tion that uncov­ered the offer, com­mon­ly come from debt-col­lec­tor “war­rants” against dri­vers with unpaid munic­i­pal fines.

    Even­tu­al­ly, police and repo men might not be the only cus­tomers buy­ing LPR data. MVTrac recent­ly com­plet­ed a beta test that tracked Acuras at spe­cif­ic areas and times, log­ging info includ­ing the exact mod­els and col­ors. That infor­ma­tion, far more real-time than state-reg­is­tra­tion data, could be gold to automak­ers, mar­keters, and insur­ance com­pa­nies.

    There has been pushback. Nine states have passed LPR laws, and four of those states bar private companies such as Vigilant from operating or selling their wares. Some of those states limit usage to legitimate investigations by police and traffic agencies. And some set standards for data security and establish formal processes (such as requiring warrants) and public audits.

    In 2007, New Hamp­shire was the first to ban LPRs com­plete­ly except for toll col­lec­tions and secu­ri­ty on cer­tain bridges. Maine answered in 2009 with a less restric­tive law, fol­lowed by Cal­i­for­nia, Arkansas, Utah, Ver­mont, Flori­da, Ten­nessee, and Mary­land. In Utah, leg­is­la­tors banned pri­vate com­pa­nies from using LPRs but amend­ed the law after Vig­i­lant and Dig­i­tal Recog­ni­tion Net­work sued the state, claim­ing the ban vio­lat­ed their First Amend­ment rights to pub­lic pho­tog­ra­phy and free speech. After help­ing to kill a sim­i­lar bill in Cal­i­for­nia this past May, the com­pa­nies are now suing Arkansas, which fol­lowed Utah’s orig­i­nal let­ter in restrict­ing LPRs to police use. At least nine states have pend­ing bills that reg­u­late plate read­ers.

    As with many tech­nolo­gies, license-plate read­ers are advanc­ing at a rate that is out­pac­ing leg­is­la­tion. Small­er cam­eras; smart­phone apps that can pick out plates from live video; and the ­poten­tial fusion of pub­lic records, DMV data­bas­es, and facial-recog­ni­tion soft­ware are already on the hori­zon. Because police osten­si­bly use LPRs for pub­lic safe­ty, dri­vers will like­ly have to accept some ero­sion of their pri­va­cy behind the wheel. But when cor­po­ra­tions start buy­ing track­ing data in the name of “cus­tomer focus” and law­mak­ers look the oth­er way, we say it’s time to bring on the James Bond–style plate flip­pers.

    “Even­tu­al­ly, police and repo men might not be the only cus­tomers buy­ing LPR data. MVTrac recent­ly com­plet­ed a beta test that tracked Acuras at spe­cif­ic areas and times, log­ging info includ­ing the exact mod­els and col­ors. That infor­ma­tion, far more real-time than state-reg­is­tra­tion data, could be gold to automak­ers, mar­keters, and insur­ance com­pa­nies.”
    So we have billboard companies scanning cars for targeted ads, but apparently not scanning the license plates and car occupants that could make those ads much more targeted. And we also have a vast and growing industry of companies scanning license plates for the express purpose of identifying who owns those vehicles and building a database that could be sold to who knows who. Huh.
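
    To make concrete how the “driving history” software described above could work, here is a minimal hypothetical sketch in Python. The records, field layout, and rounding are all invented for illustration; real LPR analytics packages are far more elaborate:

        # Hypothetical sketch: GPS-tagged, timestamped plate scans rolled
        # up into a weekly location profile. All data here is invented.
        from collections import Counter
        from datetime import datetime

        scans = [  # (plate, lat, lon, ISO timestamp)
            ("ABC123", 41.88, -87.63, "2014-09-02T08:05:00"),
            ("ABC123", 41.88, -87.63, "2014-09-09T08:10:00"),
            ("ABC123", 41.79, -87.60, "2014-09-06T14:00:00"),
        ]

        def likely_location(plate, weekday, hour):
            """Most frequent rounded location for a plate at a weekly time slot."""
            hits = Counter()
            for p, lat, lon, ts in scans:
                t = datetime.fromisoformat(ts)
                if p == plate and t.weekday() == weekday and abs(t.hour - hour) <= 1:
                    hits[(round(lat, 2), round(lon, 2))] += 1
            return hits.most_common(1)

        # Where is ABC123 usually seen on Tuesday mornings around 8 a.m.?
        print(likely_location("ABC123", weekday=1, hour=8))
        # -> [((41.88, -87.63), 2)]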

    So will the billboard companies eventually buy the license plate data from the LPR industry, or will they just collect the data from their billboard cameras, join the industry, and add even more information to this growing private commercial surveillance sector? Both seem like obvious options, and they aren’t mutually exclusive. It’s a reminder that, in our outdoor commercial surveillance-state future, when you see a personalized ad, you aren’t just experiencing a possible privacy violation. You’re also helping make your future privacy violations more personalized. But since this is happening to everyone, at least you won’t have to take it personally. It could be worse! Silver linings aren’t the best in the Panopticon.

    Posted by Pterrafractyl | April 16, 2016, 4:09 pm
  10. The Wall Street Journal has a recent article in which three different experts are asked about employers’ growing interest in utilizing data gathered from employee “wearables” and other types of Big Data. Not surprisingly, the opinions from the three experts ranged from ‘this is a scary trend with major potential for privacy invasion’ from John M. Simpson, director of the Privacy Project at the nonprofit advocacy group Consumer Watchdog, to ‘this is potentially scary but potentially useful too’ from Edward McNicholas, co-leader of privacy, data security and information law at law firm Sidley Austin LLP, all the way to ‘you won’t be able to compete in the job market unless you agree to generate and hand over this data because you won’t be productive enough without it’ from Chris Brauer, director of innovation and senior lecturer at the Institute for Management Science at Goldsmiths, University of London.

    It’s an expected spectrum of opinions for a topic like this, but it’s also worth keeping in mind that it’s a non-mutually exclusive set of opinions: Big Data from employee wearable tech could, of course, indeed have some legitimate uses. It could also lead to horribly abusive, invasive, and coercive nightmare situations for employees. But that nightmare potential is no reason to believe that employees won’t effectively be forced to submit to pervasive wearable surveillance that includes their activity outside of work, like hours of sleep.

    So get wor­ried about Big Data rewrit­ing the employer/employee con­tract to include per­va­sive sur­veil­lance at and away from the office. And since ours is a civ­i­liza­tion which often does that which you should be deeply wor­ried about, get ready too:

    The Wall Street Jour­nal

    How Should Com­pa­nies Han­dle Data From Employ­ees’ Wear­able Devices?
    Wear­ables at work allow employ­ers to track pro­duc­tiv­i­ty and health indicators—and pose tricky pri­va­cy issues

    By Patience Hag­gin
    May 22, 2016 10:00 p.m. ET

    Wear­able elec­tron­ics, like the Fit­bits and Apple Watch­es sport­ed by run­ners and ear­ly adopters, are fast becom­ing on-the-job gear. These devices offer employ­ers new ways to mea­sure pro­duc­tiv­i­ty and safe­ty, and allow insur­ers to track work­ers’ health indi­ca­tors and habits.

    For employ­ers, the prospect of track­ing people’s where­abouts and pro­duc­tiv­i­ty can be wel­come. But col­lect­ing data on employ­ees’ health—and putting that data to work—can trig­ger a host of pri­va­cy issues.

    The Wall Street Jour­nal asked John M. Simp­son, direc­tor of the Pri­va­cy Project at the non­prof­it advo­ca­cy group Con­sumer Watch­dog; Chris Brauer, direc­tor of inno­va­tion and senior lec­tur­er at Gold­smiths, Uni­ver­si­ty of Lon­don; and Edward McNi­cholas, co-leader of pri­va­cy, data secu­ri­ty and infor­ma­tion law at law firm Sid­ley Austin LLP, to weigh in on how com­pa­nies should han­dle data col­lect­ed from wear­ables. Here are edit­ed excerpts of their dis­cus­sion.

    Do I have to?

    WSJ: Should employ­ers be able to require their employ­ees to wear wear­ables?

    MR. BRAUER: It’s about a social con­tract between employ­er and employ­ee. It’s in nobody’s inter­est to have over­worked, stressed and anx­ious employ­ees who often aren’t even aware of their own con­di­tion. Mak­ing things vis­i­ble is a good thing if there is a cul­ture of trust and account­abil­i­ty.

    The real chal­lenge is in pro­duc­tiv­i­ty and per­for­mance. Sport sci­ence has evolved remark­ably in the last 10 years, and we can expect the same from man­age­ment sci­ence.

    Is it rea­son­able for a team to expect a foot­ball play­er to wear a sen­sor in his shirt to mon­i­tor gran­u­lar move­ment and injury susceptibility—things that video, psy­chol­o­gists and pitch­side observers just don’t pick up? Nowa­days you can’t com­pete at top-lev­el sport with­out this kind of wear­able insight and ana­lyt­ics.

    In the near future we’ll see the same kind of thing in all fields of endeav­or. In most fields it may be a sim­i­lar ques­tion, not so much of whether you should be able to require wear­ables as whether you can com­pete with­out them.

    MR. SIMPSON: Wear­ables that pro­vide health data about an indi­vid­ual pro­vide deeply per­son­al infor­ma­tion. Requir­ing an employ­ee to wear such a device is an Orwellian over­reach and an unjus­ti­fied inva­sion of pri­va­cy.

    Anoth­er issue to con­sid­er is just how accu­rate the data such devices pro­vide actu­al­ly turns out to be. There are seri­ous ques­tions about the accu­ra­cy of many of the apps that pow­er these devices. Mak­ing deci­sions about peo­ple based on their pri­vate infor­ma­tion is bad enough. Worse would be mak­ing deci­sions based on pri­vate health data that was wrong.

    I don’t see how there is a legit­i­mate place for manda­to­ry health wear­ables in the work­place. More­over, their required use would under­mine employ­ee morale, like­ly hav­ing a neg­a­tive impact on pro­duc­tiv­i­ty.

    If the employ­er makes the case for access to some of the data and the employ­ee agrees, that is a dif­fer­ent sit­u­a­tion. The prob­lem is that the employ­ee might feel under great pres­sure to agree to the use of their data. If data is shared on a vol­un­tary basis, there must be pro­vi­sions in place so there is no coer­cion.

    MR. MCNICHOLAS: Some wear­ables will pro­tect work­ers from radi­a­tion, acci­dents, par­tic­u­late mat­ter in their lungs. Such health pro­tec­tions should be treat­ed dif­fer­ent­ly.

    Per­for­mance mon­i­tor­ing, how­ev­er, rais­es oth­er issues. Trans­paren­cy and rea­son­able­ness strike me as key. Employ­ers should be man­dat­ed to be trans­par­ent with their employ­ees and to let the employ­ees make the choice about whether it is rea­son­able.

    When nobody is being harmed by a wear­able, I think we have to acknowl­edge that the equa­tion is dif­fer­ent and leans toward more lib­er­al use of wear­ables.

    The dan­ger of dis­crim­i­na­tion

    WSJ: Here’s an exam­ple of how employ­ers might use data from wear­ables: Imag­ine a company’s sales rep­re­sen­ta­tives wear track­ers that mea­sure sleep hours and qual­i­ty. The boss has access to this data, and can use it to inform deci­sions.

    Study­ing the sleep pat­terns of sales reps Jack and Jill, he notices that Jill slept well last night and Jack did not. He decides that Jill will make that afternoon’s client pitch, since the data gives him more con­fi­dence in her abil­i­ty to per­form that after­noon. Is this appro­pri­ate?

    MR. BRAUER: If there is a very strong his­tor­i­cal cor­re­la­tion between Jack and Jill’s sleep­ing pat­terns and their sales per­for­mance, then it makes sense to make a strate­gic deci­sion to send one or the oth­er into a big pitch using this data point. This assumes that Jack and Jill have vol­un­teered or are con­tract­ed to wear the fit­ness track­er with the knowl­edge that the data from the device may be used by man­age­ment to make these kinds of strate­gic resource-allo­ca­tion deci­sions.

    You’d also like to see orga­ni­za­tions that under­stand sleep qual­i­ty as a pre­dic­tor of per­for­mance incor­po­rat­ing this into health and well-being strate­gies for their workforce—offering sleep train­ing, for exam­ple, or shar­ing knowl­edge around anonymized and aggre­gat­ed data.

    We are also going to see lots of exam­ples of indi­vid­ual employ­ees devel­op­ing bio­met­ric cur­ric­u­la vitae that indi­cate their pro­duc­tiv­i­ty and per­for­mance under cer­tain con­di­tions, and they can use this to lob­by employ­ers or apply for jobs. So if the job requires high per­for­mance under stress­ful con­di­tions, you can demon­strate in your data how you have per­formed under stress­ful con­di­tions in the past. This pri­ma­ry data can poten­tial­ly be a very reli­able pre­dic­tor of future per­for­mance.

    MR. SIMPSON: Giv­en that dif­fer­ent peo­ple require dif­fer­ent amounts of sleep, it would be dif­fi­cult for any man­ag­er to make mean­ing­ful deci­sions about which employ­ee to send on a client pitch based on how much sleep they had. I’d think past job per­for­mance and results would be much more use­ful.

    Using pri­vate health data to apply for jobs would open the door to all sorts of unfair dis­crim­i­na­tion. A ques­tion: In this pre­dict­ed world of wear­able fit­ness devices in the work­place, would man­agers and exec­u­tives be expect­ed to share their pri­vate health data with employ­ees?

    MR. MCNICHOLAS: In some sit­u­a­tions, employ­ers need to know health infor­ma­tion about employ­ees in order to keep them safe. Employ­ees oper­at­ing dan­ger­ous machin­ery should also have some oblig­a­tion to share with their employ­er whether they are under the influ­ence of med­i­cines that may impact their abil­i­ty to do their job safe­ly. The safe­ty con­cerns here are often at least as much about oth­er work­ers, cus­tomers, and the gen­er­al pub­lic as they are about the health of the par­tic­u­lar employ­ee.

    Per­haps an employ­er could get the same result by giv­ing employ­ees an incen­tive pay­ment or award if they opt into shar­ing sleep pat­terns and hit their sleep goals.

    The potential for discrimination against persons with physical or mental differences must be kept in mind. If the results of this sort of tracking led to discrimination against persons with conditions ranging from insomnia to depression to ADHD, the program would need to be reformed. White House and Federal Trade Commission reports on big data have highlighted the potential for the new world of big data to lead to such results.

    The rub­ber will hit the road when we have arti­fi­cial intel­li­gence ana­lyz­ing the mas­sive data sets that will be cre­at­ed by the infor­ma­tion com­ing from these wear­able devices. To my mind, we should not deny our­selves the poten­tial ben­e­fits of these tech­nolo­gies by ban­ning them, but we must keep a crit­i­cal eye on par­tic­u­lar imple­men­ta­tions of such tech­nolo­gies in order to ensure that they do not become new ways of dis­crim­i­nat­ing against peo­ple based on any num­ber of ille­gal and illic­it cri­te­ria.

    ...

    “We are also going to see lots of exam­ples of indi­vid­ual employ­ees devel­op­ing bio­met­ric cur­ric­u­la vitae that indi­cate their pro­duc­tiv­i­ty and per­for­mance under cer­tain con­di­tions, and they can use this to lob­by employ­ers or apply for jobs. So if the job requires high per­for­mance under stress­ful con­di­tions, you can demon­strate in your data how you have per­formed under stress­ful con­di­tions in the past. This pri­ma­ry data can poten­tial­ly be a very reli­able pre­dic­tor of future per­for­mance.”
    Yes, it’s time to start collecting that data for your biometric CV. And while this might not be the best thing to add to your new biometric CV, if you happen to be wearing a Fitbit heart rate tracker while reading this article and your heart rate didn’t spike, that is sort of a useful piece of data. Maybe you could read all sorts of articles about the emerging Orwellian employer surveillance state and show a nice, steady heart rate that doesn’t indicate any distress. Future employers would probably love seeing something like that on your biometric CV. No cheating.

    Posted by Pterrafractyl | May 26, 2016, 2:57 pm
  11. Check out the fun ‘bug’ in the new smash hit Poke­mon Go app that’s already been down­loaded by mil­lions of peo­ple since its recent release. It sounds like the com­pa­ny, Niantic, a Google spin­off, has already fixed the bug. But as is appar­ent from the fact that they had to fix the bug, it’s a ‘bug’ that all sorts of app devel­op­ers can pre­sum­ably uti­lize: If you signed into the app using your Google Account on an iOS device, it’s pos­si­ble that Niantic could get com­plete access to ALL your Google Account infor­ma­tion, includ­ing your emails:

    Buz­zFeed

    You Should Prob­a­bly Check Your Poké­mon Go Pri­va­cy Set­tings

    The com­pa­ny behind the game is col­lect­ing play­ers’ data. And it’s most def­i­nite­ly catch­ing them all.

    Orig­i­nal­ly post­ed on Jul. 11, 2016, at 1:38 p.m. Updat­ed on Jul. 12, 2016, at 1:20 p.m.

    Joseph Bern­stein
    Buz­zFeed News Reporter

    UPDATE: In a statement attached to the first patch to the game, released today, Niantic said it “Fixed Google account scope.” iOS users who sign out and back into the game with Google will see the below screen, with the two permissions the game now requires: Google User ID and email address.

    In the five fren­zied days since its Amer­i­can release, Poké­mon Go has become an eco­nom­ic and cul­tur­al sen­sa­tion. Down­loaded by mil­lions, the game has boost­ed Nintendo’s mar­ket val­ue by $9 bil­lion (and count­ing), made a major case for aug­ment­ed real­i­ty as the gam­ing for­mat of the future, and led to a pletho­ra of strange, scary, and serendip­i­tous real-life encoun­ters.

    ...

    Like most apps that work with the GPS in your smart­phone, Poké­mon Go can tell a lot of things about you based on your move­ment as you play: where you go, when you went there, how you got there, how long you stayed, and who else was there. And, like many devel­op­ers who build those apps, Niantic keeps that infor­ma­tion.

    Accord­ing to the Poké­mon Go pri­va­cy pol­i­cy, Niantic may col­lect — among oth­er things — your email address, IP address, the web page you were using before log­ging into Poké­mon Go, your user­name, and your loca­tion. And if you use your Google account for sign-in and use an iOS device, unless you specif­i­cal­ly revoke it, Niantic has access to your entire Google account. That means Niantic could have read and write access to your email, Google Dri­ve docs, and more. (It also means that if the Niantic servers are hacked, who­ev­er hacked the servers would poten­tial­ly have access to your entire Google account. And you can bet the game’s extreme pop­u­lar­i­ty has made it a tar­get for hack­ers. Giv­en the num­ber of chil­dren play­ing the game, that’s a scary thought.) You can check what kind of access Niantic has to your Google account here.

    It also may share this infor­ma­tion with oth­er par­ties, includ­ing the Poké­mon Com­pa­ny that co-devel­oped the game, “third-par­ty ser­vice providers,” and “third par­ties” to con­duct “research and analy­sis, demo­graph­ic pro­fil­ing, and oth­er sim­i­lar pur­pos­es.” It also, per the pol­i­cy, may share any infor­ma­tion it col­lects with law enforce­ment in response to a legal claim, to pro­tect its own inter­ests, or stop “ille­gal, uneth­i­cal, or legal­ly action­able activ­i­ty.”

    Now, none of these pri­va­cy pro­vi­sions are of them­selves unique. Loca­tion-based apps from Foursquare to Tin­der can and do sim­i­lar things. But Poké­mon Go’s incred­i­bly gran­u­lar, block-by-block map data, com­bined with its surg­ing pop­u­lar­i­ty, may soon make it one of, if not the most, detailed loca­tion-based social graphs ever com­piled.

    And it’s all, or mostly, in the hands of Niantic, a small augmented reality development company with serious Silicon Valley roots. The company’s origins trace back to the geospatial data visualization startup Keyhole, Inc., which Google acquired in 2004; it played a crucial role in the development of Google Earth and Google Maps. And though Niantic spun off from Alphabet late last year, Google’s parent company is still one of its major investors, as is Nintendo, which owns a majority stake in The Pokémon Company. Indeed, Google still owned Niantic when the developer released its first game, Ingress, which is what Niantic used to pick the locations for Pokémon Go’s ubiquitous Pokéstops and gyms.

    Cit­ing CEO John Hanke’s trav­el plans, a rep­re­sen­ta­tive from Niantic was not able to clar­i­fy to Buz­zFeed News if the com­pa­ny will share loca­tion data with Alpha­bet or Nin­ten­do. A Google rep­re­sen­ta­tive for­ward­ed Buz­zFeed News’ request for com­ment to Niantic.

    How­ev­er, in a state­ment to Giz­mo­do Mon­day night, Niantic said they start­ed work­ing on a fix and ver­i­fied with Google that noth­ing beyond basic pro­file infor­ma­tion had been accessed.

    We recent­ly dis­cov­ered that the Poké­mon GO account cre­ation process on iOS erro­neous­ly requests full access per­mis­sion for the user’s Google account. How­ev­er, Poké­mon GO only access­es basic Google pro­file infor­ma­tion (specif­i­cal­ly, your User ID and email address) and no oth­er Google account infor­ma­tion is or has been accessed or col­lect­ed.

    Once we became aware of this error, we began work­ing on a client-side fix to request per­mis­sion for only basic Google pro­file infor­ma­tion, in line with the data that we actu­al­ly access. Google has ver­i­fied that no oth­er infor­ma­tion has been received or accessed by Poké­mon GO or Niantic.

    Google will soon reduce Poké­mon GO’s per­mis­sion to only the basic pro­file data that Poké­mon GO needs, and users do not need to take any actions them­selves.

    Giv­en the fact that Poké­mon Go already attract­ed the atten­tion of law enforce­ment, it seems like­ly that at some point police will try to get Niantic to hand over user infor­ma­tion. And if Google’s track record is any indi­ca­tion — a report ear­li­er this year showed that the com­pa­ny com­plied with 78% of law enforce­ment requests for user data — they are prob­a­bly pre­pared to coop­er­ate.

    “Now, none of these privacy provisions are of themselves unique. Location-based apps from Foursquare to Tinder can and do similar things. But Pokémon Go’s incredibly granular, block-by-block map data, combined with its surging popularity, may soon make it one of, if not the most, detailed location-based social graphs ever compiled.”

    Wow. ‘Accidentally’ gaining full access to your Google account and all your emails is a thing smartphone app makers do these days. And while it’s likely Niantic really did make that bug fix (a Google spinoff probably doesn’t need access to your emails), this has got to be a wildly popular ‘bug’ for app makers: they can gain full access to your Google account simply by adding a “sign in with Google” option.

    Keep in mind that there’s currently some confusion as to what exactly giving “full account access” to an app entails, and it’s possible that it wouldn’t give access to things like emails. But even if it’s not your email content but instead almost all the other content in your Google account, that’s still potentially an immense amount of personal content. And now that Pokemon Go has made sure the world is aware of these kinds of security issues, we can be pretty sure there are going to be a lot more apps offering a nice, convenient Google account login option in the future.

    So, yeah, you might want to dou­ble check those third-par­ty app per­mis­sions.
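
    And if you want to see what a sign-in actually obtained, Google exposes a public tokeninfo endpoint that reports the scopes granted to an OAuth token. A minimal Python sketch (the token value is a placeholder; a narrow result like “openid email profile” is what a well-behaved sign-in should show):

        # Sketch: ask Google which OAuth scopes a token actually carries.
        # The access token below is a placeholder, not a real credential.
        import json
        import urllib.request

        def granted_scopes(access_token):
            """Return the list of scopes Google says this token grants."""
            url = ("https://oauth2.googleapis.com/tokeninfo?access_token="
                   + access_token)
            with urllib.request.urlopen(url) as resp:
                info = json.load(resp)
            return info.get("scope", "").split()

        # e.g. granted_scopes("ya29.EXAMPLE_TOKEN") might return
        # ["openid", "email", "profile"] for a narrowly scoped sign-in,
        # versus much broader Gmail/Drive scopes for "full account access".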

    Posted by Pterrafractyl | July 12, 2016, 2:44 pm
  12. Just FYI, if you’re an employee in the US and your employer is offering free Fitbits or some other ‘wearable’ technology that streams basic health data like heart rate or steps taken each day as part of some sort of new employee fitness plan, you might want to make sure that the plan is associated with your employer’s health insurance plan, which means the data collected would at least have federal HIPAA protection. Because if that fancy free Fitbit doesn’t have HIPAA protection, that heart rate data is going to be telling who knows who a lot more about you than just your heart rate, and what it tells those unknown third parties might not be remotely accurate:

    Slate

    There’s No Such Thing as Innocu­ous Per­son­al Data

    Why you should keep your heart rate, sleep pat­terns, and oth­er seem­ing­ly bor­ing info to your­self.

    By Eliz­a­beth Wein­garten
    Aug. 8 2016 7:28 AM

    It’s 2020, and a cou­ple is on a date. As they sip cock­tails and ban­ter, each is dying to sneak a peek at the other’s wear­able device to answer a very sen­si­tive ques­tion.

    What’s his or her heart rate vari­abil­i­ty?

    That’s because heart rate vari­abil­i­ty, which is the mea­sure­ment of the time in between heart­beats, can also be an indi­ca­tor of female sex­u­al dys­func­tion and male sex­u­al dys­func­tion.

    When you think about which of your devices and apps con­tain your most sen­si­tive data, you prob­a­bly think about your text mes­sages, Gchats, or Red­dit account. The fit­ness track­ing device you’re sport­ing right now may not imme­di­ate­ly come to mind. After all, what can peo­ple real­ly learn about you from your heart rate or your step count?

    More than you might think. In fact, an expand­ing trove of research links seem­ing­ly benign data points to behav­iors and health out­comes. Much of this research is still in its infan­cy, but com­pa­nies are already begin­ning to mine some of this data, and there’s grow­ing con­tro­ver­sy over just how far they can—and should—go. That’s because like most inno­va­tions, there’s a poten­tial bright side, and a dark side, to this data feed­ing fren­zy.

    Let’s go back to the exam­ple of heart rates. In a study con­duct­ed in Swe­den and pub­lished in 2015, researchers found that low rest­ing heart rates cor­re­lat­ed with propen­si­ty for vio­lence. It’s unclear whether these find­ings will hold up to fur­ther inves­ti­ga­tion. But if the con­nec­tion is con­firmed in the future, per­haps it could be cross-indexed, intro­duced into algo­rithms, and used, in con­junc­tion with oth­er data, to pro­file or con­vict indi­vid­u­als, sug­gests John Chuang, a pro­fes­sor at Berkeley’s School of Infor­ma­tion and the direc­tor of its BioSense lab. (Biosens­ing tech­nol­o­gy uses dig­i­tal data to learn about liv­ing sys­tems like peo­ple.) “It’s some­thing we can’t anticipate—these new class­es of data we assume are innocu­ous that turn out not to be,” says Chuang.

    And in the absence of research link­ing heart rate to par­tic­u­lar health or behav­ioral out­comes, we tend to have our own entrenched social inter­pre­ta­tions of what a faster heart rate actu­al­ly means—that some­one is lying, or ner­vous, or inter­est­ed. Berke­ley researchers have found that even those assumed asso­ci­a­tions could have com­pli­cat­ed impli­ca­tions for apps that allow users to share heart rate infor­ma­tion with friends or employ­ers. In one recent study cur­rent­ly under­go­ing peer review, when par­tic­i­pants in a trust game observed that their part­ners had an ele­vat­ed heart rate, they were less like­ly to coop­er­ate with them and more like­ly to attribute some kind of neg­a­tive mood to that per­son. In anoth­er study sched­uled to be pub­lished soon, par­tic­i­pants were asked to imag­ine a sce­nario: They were about to meet an acquain­tance to talk about a legal dis­pute, and the acquain­tance texted that he or she was run­ning late. Along­side the text, that person’s heart rate appeared. If the heart rate was nor­mal, many study par­tic­i­pants felt it should have been ele­vat­ed to show that their acquain­tance cared about being late. The authors warn of the “poten­tial dan­ger” of apps that could encour­age heart rate shar­ers to make the wrong asso­ci­a­tions between their sig­nals and behav­ior. One app, Car­dio­gram, is already pos­ing the ques­tion: “What’s your heart telling you?”

    Sud­den­ly, any­one who knows your heart rate may prejudge—accurately or not—your emo­tions, mood, and sex­u­al prowess. “This data can be very eas­i­ly mis­in­ter­pret­ed,” says Michelle De Mooy, the act­ing direc­tor of the Pri­va­cy and Data Project at the Cen­ter for Democ­ra­cy and Tech­nol­o­gy. “Peo­ple tend to think of data as fact, when in fact it’s gov­erned by algo­rithms that are cre­at­ed by humans who have bias.”

    And it’s wor­ri­some that com­pa­nies, employ­ers, and oth­ers could use such imper­fect infor­ma­tion. Most biosens­ing data gath­ered from wear­ables isn’t pro­tect­ed by the Health Insur­ance Porta­bil­i­ty and Account­abil­i­ty Act or reg­u­lat­ed by the Fed­er­al Trade Com­mis­sion, a reflec­tion of the fact that the bound­aries between med­ical and non­med­ical data are still being defined. “Reg­u­la­tion can some­times be a good thing, and some­times more com­pli­cat­ing,” says De Mooy. “But in this case, it’s impor­tant because of the dif­fer­ent ways in which activ­i­ty track­ers are start­ing to be a part of our lives. Out­side of a fun vague activ­i­ty mea­sure, they are com­ing into work­places and well­ness pro­grams in lots of dif­fer­ent ways.”

    ...

    Not all well­ness pro­gram data can be legal­ly fun­neled to employ­ers or third par­ties. It depends on whether the well­ness pro­gram is inside a com­pa­ny insur­ance plan—mean­ing that it would be pro­tect­ed by HIPAA—or out­side a com­pa­ny insur­ance plan and admin­is­tered by a third-par­ty ven­dor. If it’s admin­is­tered by a third par­ty, your data could be passed on to oth­er com­pa­nies. At that point, the data is pro­tect­ed only by the pri­va­cy poli­cies of those third-par­ty ven­dors, “mean­ing they can essen­tial­ly do what they like with it,” De Mooy says.

    Most com­pa­nies that are gath­er­ing this infor­ma­tion empha­size that they’re doing every­thing they can to pro­tect users’ data and that they don’t sell it to third-par­ty providers (yet). But when data pass­es from a device, to a phone, to the cloud through Wi-Fi, even all of the encryp­tion and pro­tec­tive algo­rithms in the world can’t ensure data secu­ri­ty. Many of these pro­grams, like Aetna’s sleep ini­tia­tive, are option­al, but some­times employ­ees don’t have much of a choice. If they opt out, they often have to pay more for insur­ance cov­er­age, though com­pa­nies pre­fer to frame it as offer­ing a dis­count to those who par­tic­i­pate, as opposed to a penal­ty for those who don’t.

    And even if you choose to opt out, com­pa­nies may find ways to col­lect the same data in the future. For exam­ple, MIT researchers are able now to detect heart rate and breath­ing infor­ma­tion remote­ly with 99 per­cent accu­ra­cy from a Wi-Fi sig­nal that they reflect off of your body. “In the future, could stores cap­ture heart rate to show how it changes when you see a new gad­get inside a store?” imag­ines Chuang. “These may be things that you as a con­sumer may not be able to opt out of.”

    Yet there’s another side to this future. The way you walk can be as unique as your fingerprint; a couple of studies show that gait can help verify the identity of smartphone users. And gait can also predict whether someone is at risk for dementia. Seemingly useless pieces of data may let experts deduce or predict certain behaviors or conditions now, but the big insights will come in the next few years, when companies and consumers are able to view a tapestry of different individual data points and contrast them with data across the entire population. That’s when, according to a recent report from Berkeley’s Center for Long-Term Cybersecurity, we’ll be able to “gain deep insight into human emotional experiences.”

    But it’s the data that you’re cre­at­ing now that will fuel those insights. Far from mean­ing­less, it’s the foun­da­tion of what you (and every­one else) may be able to learn about your future self.

    “Not all wellness program data can be legally funneled to employers or third parties. It depends on whether the wellness program is inside a company insurance plan—meaning that it would be protected by HIPAA—or outside a company insurance plan and administered by a third-party vendor. If it’s administered by a third party, your data could be passed on to other companies. At that point, the data is protected only by the privacy policies of those third-party vendors, “meaning they can essentially do what they like with it,” De Mooy says.”

    Yep, if you hand seemingly innocuous personal health data like a heart rate over to a non-HIPAA-protected entity, random third parties can get to infer all sorts of fun things about you, like whether or not you’re suffering from some sort of sexual dysfunction or what your propensity for violence might be. And whether or not those inferences are based on solid science or the latest pop theory is totally up to them. How fun.
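
    And note how little data those inferences need. Heart rate variability, for instance, is just arithmetic over the gaps between beats. Here is a minimal Python sketch of one standard HRV metric, RMSSD (root mean square of successive differences), with made-up RR intervals in milliseconds:

        # Minimal sketch of RMSSD, a standard HRV metric. The RR
        # intervals (milliseconds between beats) are invented.
        import math

        def rmssd(rr_intervals_ms):
            """Root mean square of successive differences between beats."""
            diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
            return math.sqrt(sum(d * d for d in diffs) / len(diffs))

        print(round(rmssd([812, 790, 842, 805, 798]), 1))   # 33.9 ms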

    So check those HIPAA agreements before you slap that free Fitbit on your wrist. And if you really don’t like the idea of handing over personal health data like your heart rate to the world, you might need to avoid all Wi-Fi networks too:

    ...
    And even if you choose to opt out, com­pa­nies may find ways to col­lect the same data in the future. For exam­ple, MIT researchers are able now to detect heart rate and breath­ing infor­ma­tion remote­ly with 99 per­cent accu­ra­cy from a Wi-Fi sig­nal that they reflect off of your body. “In the future, could stores cap­ture heart rate to show how it changes when you see a new gad­get inside a store?” imag­ines Chuang. “These may be things that you as a con­sumer may not be able to opt out of.”
    ...

    That’s right, companies are potentially going to have the ability to just randomly scan your heart rate and breathing information with a Wi-Fi signal. Like when you walk past their billboards. Won’t that also be fun?

    And while for now it’s just breathing and heart rate information via Wi-Fi, just imagine what other personal health information could possibly be detected remotely by a much broader range of sensors. For instance, imagine if Google set up free ‘Wi-Fi kiosks’ all over the place that not only provided Wi-Fi services but had other types of sensors that detected things like air pollution or other chemicals, along with UV and infrared cameras. If you’re having a hard time imagining that, this should give you a better idea:

    Engad­get

    Side­walk Labs’ smart city kiosks go way beyond free WiFi
    Google’s sis­ter com­pa­ny wants to mon­i­tor every­thing from traf­fic and air qual­i­ty to poten­tial ter­ror­ist activ­i­ty.

    Andrew Dal­ton
    07.01.16 in Gad­getry

    The details of an ambi­tious plan from Google’s sis­ter com­pa­ny Side­walk Labs to cre­ate entire “smart neigh­bor­hoods” just got a lit­tle clear­er. Accord­ing to Side­walk Labs’ pitch deck, which was obtained by Recode this week, the plan goes far beyond those free WiFi kiosks that are already on the streets of New York City. The kiosks will mon­i­tor every­thing from bike and pedes­tri­an traf­fic to air qual­i­ty and street noise.

    “The Kiosk sen­sor plat­form will help address com­plex issues where real-time ground truth is need­ed,” one doc­u­ment read. “Under­stand­ing and mea­sur­ing traf­fic con­ges­tion, iden­ti­fy­ing dan­ger­ous sit­u­a­tions like gas leaks, mon­i­tor­ing air qual­i­ty, and iden­ti­fy­ing qual­i­ty of life issues like idling trucks.”

    In addition to monitoring environmental factors like humidity and temperature, a bank of air pollutant sensors will also monitor particulates, ozone, carbon monoxide and other harmful chemicals in the air. Two other sensor banks will measure “Natural and Manmade Behavior” by tracking street vibrations, sound levels, magnetic fields and entire spectrums of visible, UV and infrared light. Finally, the “City Activity” sensors will not only be able to measure pedestrian traffic, they will also look for security threats like abandoned packages. While free gigabit WiFi on the streets sounds like a win for everyone’s data plan, it also comes at a cost: the kiosks will also be able to track wireless devices as they pass by, although it will most likely be anonymized.

    ...

    In one such exam­ple pro­vid­ed by the doc­u­ments, data col­lect­ed from traf­fic cam­eras and pass­ing devices could be used to re-cal­cu­late trav­el times in Google Maps — think Waze, but with data on the munic­i­pal lev­el. In the end, how­ev­er, it’s up to each city to decide which sen­sors they want includ­ed in the devices. While many have obvi­ous prac­ti­cal uses, Recode also points out there are some sig­nif­i­cant costs involved. Although the Side­walk Labs pitch offers to pro­vide the kiosks for free, there’s still instal­la­tion, set­up and main­te­nance fees. All told, 100 “free” kiosks are expect­ed to costs a city around $4.5 mil­lion in the first year.

    Of course, that cost can be defrayed if the city is will­ing to allow Side­walk Labs to install two 55-inch adver­tis­ing screens on each kiosk. While Side­walk will foot the bill for the ad space, it also gets to keep 50 per­cent of the prof­its. With 100 kiosks, a city stands to make back an esti­mat­ed $3 mil­lion per year in adver­tis­ing rev­enue.

    “In addition to monitoring environmental factors like humidity and temperature, a bank of air pollutant sensors will also monitor particulates, ozone, carbon monoxide and other harmful chemicals in the air. Two other sensor banks will measure “Natural and Manmade Behavior” by tracking street vibrations, sound levels, magnetic fields and entire spectrums of visible, UV and infrared light. Finally, the “City Activity” sensors will not only be able to measure pedestrian traffic, they will also look for security threats like abandoned packages. While free gigabit WiFi on the streets sounds like a win for everyone’s data plan, it also comes at a cost: the kiosks will also be able to track wireless devices as they pass by, although it will most likely be anonymized.”

    Wi-Fi sidewalk kiosks with a battery of sensors and large screens designed to grab your attention and draw you closer (and then hopefully not detect your wireless devices and identify you). Might the “Natural and Manmade Behavior” detected by these kiosks include things like heart rate? And what other types of health information can be detected with sensors designed to pick up a broad range of sounds along with entire spectrums of visible, UV and infrared light? We’ll find out someday...presumably after all this data is collected.
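
    As for the economics of “free,” the figures in the article are worth running through once. A quick back-of-the-envelope in Python, using only the dollar amounts reported above:

        # Back-of-the-envelope using the article's figures: 100 kiosks,
        # ~$4.5 million first-year cost, ~$3 million/year in ad revenue.
        KIOSKS = 100
        FIRST_YEAR_COST = 4_500_000           # installation, setup, maintenance
        CITY_AD_REVENUE_PER_YEAR = 3_000_000  # city's 50% share of ad profits

        net_first_year = CITY_AD_REVENUE_PER_YEAR - FIRST_YEAR_COST
        print(f"First-year net per kiosk: ${net_first_year / KIOSKS:,.0f}")
        # -> First-year net per kiosk: $-15,000 (the "free" kiosks still
        #    cost the city up front, recouped only if ad revenue holds up)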

    So, all in all, it’s increasingly clear that if you don’t like the idea of helplessly having your personal health information collected and analyzed (including very dubiously analyzed) by all sorts of random third-party data predators, you might need to relocate. Away from civilization. Far away. Preferably a thick jungle where third-party kiosks with Wi-Fi and infrared scanning will at least have a limited reach given all the blocking foliage. Sure, there might be tigers and other predators to worry about in the jungle, but at least those are the kinds of predators you can potentially defend yourself against. The tiger might be able to eat your body, but not your dignity. Good luck!

    Posted by Pterrafractyl | August 15, 2016, 6:43 pm
  13. Remem­ber Tay, Microsoft­’s AI chat­bot that was turned into a neo-Nazi in under 24 hours because its cre­ators inex­plic­a­bly did­n’t take into account the pos­si­bil­i­ty that peo­ple would try to turn their chat­bot into a neo-Nazi? Well, it appears Face­book just had its own Tay-ish expe­ri­ence. Although instead of a bunch of trolls specif­i­cal­ly set­ting out to turn some new pub­licly acces­si­ble Face­book AI into an extrem­ist, Face­book instead removed the human cura­tion com­po­nent from their “trend­ing news” feed fol­low­ing charges that Face­book was fil­ter­ing out con­ser­v­a­tive news, and the endem­ic trolling already present in the right-wing medi­a­s­phere dump­ster fire took it from there:

    The Guardian

    Face­book fires trend­ing team, and algo­rithm with­out humans goes crazy

    Mod­ule push­es out false sto­ry about Fox’s Meg­yn Kel­ly, offen­sive Ann Coul­ter head­line and a sto­ry link about a man mas­tur­bat­ing with a McDonald’s sand­wich

    Sam Thiel­man in New York
    Mon­day 29 August 2016 12.48 EDT

    Just months after the dis­cov­ery that Facebook’s “trend­ing” news mod­ule was curat­ed and tweaked by human beings, the com­pa­ny has elim­i­nat­ed its edi­tors and left the algo­rithm to do its job. The results, so far, are a dis­as­ter.

    Face­book announced late Fri­day that it had elim­i­nat­ed jobs in its trend­ing mod­ule, the part of its news divi­sion where staff curat­ed pop­u­lar news for Face­book users. Over the week­end, the ful­ly auto­mat­ed Face­book trend­ing mod­ule pushed out a false sto­ry about Fox News host Meg­yn Kel­ly, a con­tro­ver­sial piece about a comedian’s four-let­ter word attack on rightwing pun­dit Ann Coul­ter, and links to an arti­cle about a video of a man mas­tur­bat­ing with a McDonald’s chick­en sand­wich.

    In a blog­post, Face­book said the deci­sion to drop peo­ple from the news mod­ule would allow it to oper­ate at a greater scale.

    “Our goal is to enable Trend­ing for as many peo­ple as pos­si­ble, which would be hard to do if we relied sole­ly on sum­ma­riz­ing top­ics by hand,” wrote a com­pa­ny rep­re­sen­ta­tive in the unat­trib­uted post. “A more algo­rith­mi­cal­ly dri­ven process allows us to scale Trend­ing to cov­er more top­ics and make it avail­able to more peo­ple glob­al­ly over time.”

    A source famil­iar with the mat­ter told the Guardian that the trend­ing team was fired with­out notice in a meet­ing with a secu­ri­ty guard present. The ex-employ­ees received four weeks’ sev­er­ance.

    In May, the Guardian pub­lished the guide­lines used by Facebook’s Trend­ing mod­ule team after Giz­mo­do revealed that the mod­ule was in fact curat­ed by humans. The rev­e­la­tion fuelled accu­sa­tions of poten­tial bias at the social net­work, which has become the world’s largest dis­trib­u­tor of news.

    The past week­end has been less than aus­pi­cious for Facebook’s new, inhu­man work­force: on Sat­ur­day, the site pushed an arti­cle to some of its users enti­tled: “BREAKING: Fox News Expos­es Trai­tor Meg­yn Kel­ly, Kicks Her Out For Back­ing Hillary.” Meg­yn Kel­ly is still employed by Fox News and has not endorsed Hillary Clin­ton for pres­i­dent.

    Face­book removed the offend­ing arti­cle, pub­lished by a web­site called End­ing the Fed and link­ing to anoth­er lit­tle known site, Con­ser­v­a­tive 101. Under Facebook’s old guide­lines, news cura­tors stuck to a list of trust­ed media sources. Nei­ther of these sources were on that list.

    Anoth­er sur­pris­ing head­line read: “SNL Star Calls Ann Coul­ter a Racist C*nt,” and referred to attacks on the author dur­ing a Com­e­dy Cen­tral roast of actor Rob Lowe. Oth­er trend­ing items picked by algo­rithm were pegged to Twit­ter hash­tags includ­ing #McChick­en, a hash­tag that had gone viral after some­one post­ed a video of a man mas­tur­bat­ing with a McChick­en sand­wich.

    ...

    The dis­missal of the trend­ing mod­ule team appears to have been a long-term plan at Face­book. A source told the Guardian the trend­ing mod­ule was meant to have “learned” from the human edi­tors’ cura­tion deci­sions and was always meant to even­tu­al­ly reach full automa­tion.

    “Facebook announced late Friday that it had eliminated jobs in its trending module, the part of its news division where staff curated popular news for Facebook users. Over the weekend, the fully automated Facebook trending module pushed out a false story about Fox News host Megyn Kelly, a controversial piece about a comedian's four-letter word attack on rightwing pundit Ann Coulter, and links to an article about a video of a man masturbating with a McDonald's chicken sandwich.”

    Well, at least the sto­ry about the chick­en sand­wich was poten­tial­ly news­wor­thy. At least now we know not to click on any arti­cles about McChick­en sand­wich­es going for­ward.

    So perhaps the lesson here is that algorithmically automated newsfeeds may not be credible sources of what we normally think of as “news”, but they are potentially useful summaries of all the garbage people are reading instead of actual news. At least with Facebook's new algorithmically driven trending news feed we can all watch civilization's collective descent into ignorance and madness in somewhat greater detail. That's kind of a positive service.

    Unfortunately, that's not the kind of positive service we're going to get. At least not yet. Why? Because it turns out Facebook didn't actually eliminate the human curators. Instead, it just fired its existing team of professional journalist curators and hired a new team of non-journalist humans. So this is less an issue of “oops, our new algorithm just got overwhelmed by all the toxic ‘news' out there!” and more an issue of “oops, we fired all our journalist curators and quietly replaced them with non-journalist curators who are horrible at this job. How about we blame this on the algorithm”:

    Slate

    Trend­ing Bad

    How Facebook’s for­ay into auto­mat­ed news went from messy to dis­as­trous.

    By Will Ore­mus
    Aug. 30 2016 2:05 PM

    It seems Facebook’s human news edi­tors weren’t quite as expend­able as the com­pa­ny thought.

    On Mon­day, the social network’s lat­est move to auto­mate its “Trend­ing” news sec­tion back­fired when it pro­mot­ed a false sto­ry by a dubi­ous right-wing pro­pa­gan­da site. The sto­ry, which claimed that Fox News had fired anchor Meg­yn Kel­ly for being a “trai­tor,” racked up thou­sands of Face­book shares and was like­ly viewed by mil­lions before Face­book removed it for inac­cu­ra­cy.

    The blun­der came just three days after Face­book fired the entire New York–based team of con­trac­tors that had been curat­ing and edit­ing the trend­ing news sec­tion, as Quartz first report­ed on Fri­day and Slate has con­firmed. That same day, Face­book announced an “update” to its trend­ing section—a fea­ture that high­lights news top­ics pop­u­lar on the site—that would make it “more auto­mat­ed.”

    Facebook’s move away from human edi­tors was sup­posed to extin­guish the (far­ci­cal­ly overblown) con­tro­ver­sy over alle­ga­tions of lib­er­al bias in the trend­ing news sec­tion. But in its haste to mol­li­fy con­ser­v­a­tives, the com­pa­ny appears to have rolled out a new prod­uct that mem­bers of its own trend­ing news team viewed as seri­ous­ly flawed.

    Three of the trend­ing team mem­bers who were recent­ly fired told Slate they under­stood from the start that Facebook’s ulti­mate goal was to auto­mate the process of select­ing sto­ries for the trend­ing news sec­tion. Their team was clear­ly a stop­gap. But all three said inde­pen­dent­ly that they were shocked to have been let go so soon, because the soft­ware that was meant to sup­plant them was nowhere near ready. “It’s half-baked quiche,” one told me.

    Before we poke and prod that quiche, it’s worth clear­ing up a pop­u­lar mis­un­der­stand­ing. Face­book has not entire­ly elim­i­nat­ed humans from its trend­ing news prod­uct. Rather, the com­pa­ny replaced the New York–based team of con­trac­tors, most of whom were pro­fes­sion­al jour­nal­ists, with a new team of over­seers. Appar­ent­ly it was this new team that failed to real­ize the Kel­ly sto­ry was bogus when Facebook’s trend­ing algo­rithm sug­gest­ed it. Here’s how a com­pa­ny spokes­woman explained the mishap to me Mon­day after­noon:

    The Trend­ing review team accept­ed this top­ic over the week­end. Based on their review guide­lines, the top­ic met the con­di­tions for accep­tance at the time because there was a suf­fi­cient num­ber of rel­e­vant arti­cles and posts. On re-review, the top­ic was deemed as inac­cu­rate and does no longer appear in trend­ing. We’re work­ing to make our detec­tion of hoax and satir­i­cal sto­ries more accu­rate as part of our con­tin­ued effort to make the prod­uct bet­ter.

    So: Blame the peo­ple, not the algo­rithm, which is appar­ent­ly the same one Face­book was using before it fired the orig­i­nal trend­ing team. Who are these new gate­keep­ers, and why can’t they tell the dif­fer­ence between a reli­able news source and Endingthefed.com, the pub­lish­er of the Kel­ly piece? Face­book wouldn’t say, but it offered the fol­low­ing state­ment: “In this new ver­sion of Trend­ing we no longer need to draft top­ic descrip­tions or sum­maries, and as a result we are shift­ing to a team with an empha­sis on oper­a­tions and tech­ni­cal skillsets, which helps us bet­ter sup­port the new direc­tion of the prod­uct.”

    That helps clar­i­fy the blog post Face­book pub­lished Fri­day, in which it explained the move to sim­pli­fy its trend­ing sec­tion as part of a push to scale it glob­al­ly and per­son­al­ize it to each user. “This is some­thing we always hoped to do but we are mak­ing these changes soon­er giv­en the feed­back we got from the Face­book com­mu­ni­ty ear­li­er this year,” the com­pa­ny said.

    That all made sense to the three for­mer trend­ing news con­trac­tors who spoke with Slate. (They spoke sep­a­rate­ly and on con­di­tion of anonymi­ty, cit­ing a nondis­clo­sure agree­ment, but they agreed on mul­ti­ple key points and details.) The for­mer con­trac­tors said they weren’t told much about their role or the future of the prod­uct they were work­ing on, but the com­pa­nies that hired them—one an Indi­ana-based con­sul­tan­cy called BCfor­ward, the oth­er a Texas firm called MMC—did indi­cate their jobs were not per­ma­nent. They also under­stood that the inter­nal soft­ware that iden­ti­fied top­ics for trend­ing news was meant to improve over time, so it could even­tu­al­ly take on more of the work itself.

    The strange thing, they told me, was the algo­rithm didn’t seem to be get­ting much bet­ter at select­ing rel­e­vant sto­ries or reli­able news sources. “I didn’t notice a change at all,” said one, who had worked on the team for close to a year. The sys­tem was con­stant­ly being refined, the for­mer con­trac­tor added, by Face­book engi­neers with whom the trend­ing con­trac­tors had no direct con­tact. But the improve­ments focused on the con­tent man­age­ment sys­tem and the cura­tion guide­lines the humans worked with. The feed of trend­ing sto­ries sur­faced by the algo­rithm, mean­while, was “not ready for human consumption—you real­ly need­ed some­one to sift through the junk.”

    The sec­ond for­mer con­trac­tor, who joined the team more recent­ly, actu­al­ly liked the idea of help­ing to train soft­ware to curate a per­son­al­ized feed of trend­ing news sto­ries for read­ers around the world. “When I entered into it, I thought, ‘Well, the algorithm’s basic right now, so that’s not going to be [autonomous] for a cou­ple years.’ The vol­ume of top­ics we would get, it would be hun­dreds and hun­dreds. It was just this raw feed,” full of click­bait head­lines and top­ics that bore no rela­tion to actu­al news sto­ries. The con­trac­tor esti­mat­ed that, for every top­ic sur­faced by the algo­rithm that the team accept­ed and pub­lished, there were “four or five” that the cura­tors reject­ed as spu­ri­ous.

    The third con­trac­tor, who agreed that the algo­rithm remained sore­ly in need of human edit­ing and fact-check­ing, esti­mat­ed that out of every 50 top­ics it sug­gest­ed, about 20 cor­re­spond­ed to real, ver­i­fi­able news events. But when the news was real, the top sources sug­gest­ed by the algo­rithm often were not cred­i­ble news out­lets.

    The con­trac­tors’ per­cep­tion that their jobs were secure, at least for the medi­um term, was rein­forced when Face­book recent­ly began test­ing a new trend­ing news fea­ture—a stripped-down ver­sion that replaced sum­maries of each top­ic with the num­ber of Face­book users talk­ing about it. This new ver­sion, two con­trac­tors believed, gave the human cura­tors a great­ly dimin­ished role in sto­ry and source selec­tion. Said one: “You’ll get Endingthefed.com as your news source (sug­gest­ed by the algo­rithm), and you won’t be able to go out and say, ‘Oh, there’s a CNN source, or there’s a Fox News source, let’s use that instead.’ You just have a bina­ry choice to approve it or not.”

    The results were not pret­ty. “They were run­ning these tests with sub­sets of users, and the feed­back they got inter­nal­ly was over­whelm­ing­ly neg­a­tive. Peo­ple would say, ‘I don’t under­stand why I’m look­ing at this. I don’t see the con­text any­more.’ There were spelling mis­takes in the head­lines. And the num­ber of peo­ple talk­ing about a top­ic would just be wild­ly off.” The neg­a­tive feed­back came from both Face­book employ­ees par­tic­i­pat­ing in inter­nal tests and exter­nal Face­book users ran­dom­ly select­ed for small pub­lic tests.

    The con­trac­tor assumed Facebook’s engi­neers and prod­uct man­agers would go back to the draw­ing board. Instead, on Fri­day, the com­pa­ny dumped the jour­nal­ists and released the new, poor­ly reviewed ver­sion of trend­ing news to the pub­lic.

    Why was Face­book so eager to make this move? The com­pa­ny may well have deemed jour­nal­ists more trou­ble than they’re worth after sev­er­al of them set off a firestorm by crit­i­ciz­ing the prod­uct in the press. Oth­ers com­plained about the “tox­ic” work­ing con­di­tions or dished dirt on Twit­ter after being let go. Jour­nal­ists are a can­tan­ker­ous lot, and in many ways a poor fit for Sil­i­con Val­ley tech com­pa­nies like Face­book that thrive on opac­i­ty and cul­ti­vate the per­cep­tion of neu­tral­i­ty.

    But Face­book appears to have thrown out the babies and kept the bath­wa­ter. What’s left of the trend­ing sec­tion, even after the removal of the Kel­ly sto­ry, looks a lot like the con­text-free, clickbait‑y mess the con­trac­tor described sift­ing through each day. “You click around, and it’s a garbage fire,” one said of the new ver­sion.

    Iron­i­cal­ly, the decline in qual­i­ty of the trend­ing sec­tion comes at the same time that Face­book is tout­ing val­ues such as authen­tic­i­ty and accu­ra­cy in its news feed, where it con­tin­ues to fight its nev­er-end­ing bat­tle against click­bait and preach the gospel of “high-qual­i­ty” news con­tent.

    ...

    “The con­trac­tors’ per­cep­tion that their jobs were secure, at least for the medi­um term, was rein­forced when Face­book recent­ly began test­ing a new trend­ing news fea­ture—a stripped-down ver­sion that replaced sum­maries of each top­ic with the num­ber of Face­book users talk­ing about it. This new ver­sion, two con­trac­tors believed, gave the human cura­tors a great­ly dimin­ished role in sto­ry and source selec­tion. Said one: “You’ll get Endingthefed.com as your news source (sug­gest­ed by the algo­rithm), and you won’t be able to go out and say, ‘Oh, there’s a CNN source, or there’s a Fox News source, let’s use that instead.’ You just have a bina­ry choice to approve it or not.””

    Yes, as part of its long-held goal of fully automating news feeds so that every single user can eventually get a personalized feed, Facebook was already reducing the amount of human judgement involved in the curation process before it fired and replaced its team. And then it fired and replaced them:

    ...

    Before we poke and prod that quiche, it’s worth clear­ing up a pop­u­lar mis­un­der­stand­ing. Face­book has not entire­ly elim­i­nat­ed humans from its trend­ing news prod­uct. Rather, the com­pa­ny replaced the New York–based team of con­trac­tors, most of whom were pro­fes­sion­al jour­nal­ists, with a new team of over­seers. Appar­ent­ly it was this new team that failed to real­ize the Kel­ly sto­ry was bogus when Facebook’s trend­ing algo­rithm sug­gest­ed it. Here’s how a com­pa­ny spokes­woman explained the mishap to me Mon­day after­noon:

    The Trend­ing review team accept­ed this top­ic over the week­end. Based on their review guide­lines, the top­ic met the con­di­tions for accep­tance at the time because there was a suf­fi­cient num­ber of rel­e­vant arti­cles and posts. On re-review, the top­ic was deemed as inac­cu­rate and does no longer appear in trend­ing. We’re work­ing to make our detec­tion of hoax and satir­i­cal sto­ries more accu­rate as part of our con­tin­ued effort to make the prod­uct bet­ter.

    So: Blame the peo­ple, not the algo­rithm, which is appar­ent­ly the same one Face­book was using before it fired the orig­i­nal trend­ing team. Who are these new gate­keep­ers, and why can’t they tell the dif­fer­ence between a reli­able news source and Endingthefed.com, the pub­lish­er of the Kel­ly piece? Face­book wouldn’t say, but it offered the fol­low­ing state­ment: “In this new ver­sion of Trend­ing we no longer need to draft top­ic descrip­tions or sum­maries, and as a result we are shift­ing to a team with an empha­sis on oper­a­tions and tech­ni­cal skillsets, which helps us bet­ter sup­port the new direc­tion of the prod­uct.”

    ...

    As we can see, there is indeed a “Trending review team”. It's still human. And even with the flexibility to select from a variety of news sources for a given topic now reduced to a binary approve/reject choice, this team of humans still has the ability to filter out blatantly fake news. It's just that the new team of humans apparently can't actually identify the fake news.

    All in all, it looks like Facebook basically modified its internal trending news algorithms while keeping in place a team of humans to make the final judgement call. Then Facebook made an announcement that made it sound like the trending news feed was now entirely algorithmically driven, but actually just replaced the previous team of journalists (who were complaining about their abusive working conditions just months ago) with a new team of non-journalists who were still tasked with making that final judgement call. And then everyone blamed the algorithm when this all blew up with bogus articles in the news feed.
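
    To make the difference between the two workflows concrete, here's a minimal sketch of the pipeline as described in the articles above. All the names and the trusted-source list are hypothetical illustrations, not Facebook's actual system: under the old guidelines a curator could swap in a source from a trusted list (or spike the topic), while the new review team reportedly had only a binary approve/reject choice over whatever source the algorithm surfaced.

    ```python
    # Hypothetical sketch of the two curation workflows described above.
    # Nothing here is Facebook's actual code; the trusted-source list is invented.
    TRUSTED_SOURCES = {"nytimes.com", "cnn.com", "foxnews.com"}

    def old_workflow(topic, candidate_sources):
        """Old guidelines: publish only with a source from the trusted list."""
        trusted = [s for s in candidate_sources if s in TRUSTED_SOURCES]
        return (topic, trusted[0]) if trusted else None  # no trusted source -> spike it

    def new_workflow(topic, algorithm_source, approve):
        """New workflow: a binary approve/reject of the algorithm's chosen source."""
        return (topic, algorithm_source) if approve else None

    # The Kelly story under each workflow:
    print(old_workflow("Megyn Kelly", ["endingthefed.com"]))      # -> None (not trusted)
    print(new_workflow("Megyn Kelly", "endingthefed.com", True))  # -> published as-is
    ```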

    So while Facebook may have trashed the utility of its trending news feed today, this sad tale of poor corporate judgement (rolling out poorly designed algorithms run by poorly prepared people, then blaming the algorithm when things go poorly) gives us a glimpse of the kind of news that could easily become a major category of trending news in the future, as more and more human/algorithm ‘mistakes' are created and blamed solely on the algorithm. The algorithm designers probably didn't intend that, but it's still kind of impressive in a sad way.

    Posted by Pterrafractyl | August 30, 2016, 7:00 pm
  14. It looks like Wik­iLeak­s’s quest to bring trans­paren­cy to gov­ern­ment and large cor­po­ra­tions is get­ting extend­ed. To every­one with a ver­i­fied Twit­ter account:

    ReCode

    Wik­iLeaks wants to cre­ate a data­base of ver­i­fied Twit­ter users and who they inter­act with

    That would include a lot of jour­nal­ists — and Don­ald Trump.

    by Kurt Wag­n­er Jan 7, 2017, 10:00am EST

    Wik­iLeaks tweet­ed Fri­day that it want­ed to build a data­base of infor­ma­tion about Twitter’s ver­i­fied users, includ­ing per­son­al rela­tion­ships that might have influ­ence on their lives.

    Then, after a num­ber of users sound­ed the alarm on what they per­ceived to be a mas­sive doxxing effort, Wik­iLeaks delet­ed the tweet, but not before blam­ing that per­cep­tion on the “dis­hon­est press.”

    In a subsequent series of tweets on Friday, WikiLeaks Task Force — a verified Twitter account described in its bio as the “Official @WikiLeaks support account” — explained that it wanted to look at the “family/job/financial/housing relationships” of Twitter's verified users, which includes a ton of journalists, politicians and activists.

    [see image of deleted tweet]

    The point, the WikiLeaks account claims, is to “develop a metric to understand influence networks based on proximity graphs.” That's a pretty confusing explanation, and the comment left a number of concerned Twitter users scratching their collective heads and wondering just how invasive this database might be.

    The “task force” attempt­ed to clar­i­fy what it meant in a num­ber of sub­se­quent tweets, and it sounds like the data­base is an attempt to under­stand who or what might be influ­enc­ing Twitter’s ver­i­fied users. Imag­ine iden­ti­fy­ing rela­tion­ships like polit­i­cal par­ty affil­i­a­tion, for exam­ple, though it’s unclear if the data­base would include both online and offline rela­tion­ships users have. (We tweet­ed at Wik­iLeaks and will update if we hear back.)

    Wik­iLeaks men­tioned an arti­fi­cial intel­li­gence soft­ware pro­gram that it would use to help com­pile the data­base and sug­gest­ed it might be akin to the social graphs that Face­book and LinkedIn have cre­at­ed.

    It was all rather vague, which didn’t help with user con­cern on Twit­ter. But Wik­iLeaks claims the pro­posed data­base is not about releas­ing per­son­al info, like home address­es.

    Dis­hon­est press report­ing our spec­u­la­tive idea for data­base of account influ­enc­ing *rela­tion­ships* with Wik­iLeaks dox­ing home address­es.— Wik­iLeaks Task Force (@WLTaskForce) Jan­u­ary 6, 2017

    .@DaleInnis @kevincollier As we stat­ed the idea is to look at the net­work of *rela­tion­ships* that influ­ence — not to pub­lish address­es.— Wik­iLeaks Task Force (@WLTaskForce) Jan­u­ary 6, 2017

    Still, it was an unset­tling procla­ma­tion for many on Twit­ter, and fol­lowed just a few days after Wik­iLeaks founder Julian Assange told Fox News that Amer­i­can media cov­er­age is “very dis­hon­est.” It’s a descrip­tor Pres­i­dent-elect Don­ald Trump famous­ly uses, too.

    It seems pos­si­ble that the point of look­ing into ver­i­fied Twit­ter users — many of whom are jour­nal­ists — is so that Wik­iLeaks can rein in the “dis­hon­est media.”

    What could be inter­est­ing, though, is that build­ing a data­base would also mean look­ing into the rela­tion­ships influ­enc­ing Trump, who is also ver­i­fied on Twit­ter.

    Some of those rela­tion­ships are already pub­licly known. The Wall Street Jour­nal, for exam­ple, has report­ed that more than 150 insti­tu­tions hold Trump’s busi­ness debts. But many jour­nal­ists and politi­cians have com­plained of lack of trans­paren­cy from Trump, like his fail­ure to release his tax returns. These crit­ics may wel­come a clos­er look at the pow­ers influ­enc­ing the next Com­man­der in Chief.

    Even if Wik­iLeaks were to move for­ward with this data­base, it seems like it would have to store the project off of Twit­ter. The social com­mu­ni­ca­tions com­pa­ny tweet­ed out a state­ment short­ly after the orig­i­nal Wik­iLeaks tweet: “Post­ing anoth­er person’s pri­vate and con­fi­den­tial infor­ma­tion is a vio­la­tion of the Twit­ter Rules.”

    Post­ing anoth­er person’s pri­vate and con­fi­den­tial infor­ma­tion is a vio­la­tion of the Twit­ter Rules: https://t.co/NGx5hh2tTQ— Safe­ty (@safety) Jan­u­ary 6, 2017

    Twit­ter has already said that it will not allow any­one, includ­ing gov­ern­ment agen­cies, to use its ser­vices to cre­ate sur­veil­lance data­bas­es and has a pol­i­cy against post­ing anoth­er person’s pri­vate infor­ma­tion on the ser­vice.

    ...

    “In a subsequent series of tweets on Friday, WikiLeaks Task Force — a verified Twitter account described in its bio as the “Official @WikiLeaks support account” — explained that it wanted to look at the “family/job/financial/housing relationships” of Twitter's verified users, which includes a ton of journalists, politicians and activists.”

    Yeah, that’s not creepy or any­thing.

    Now, it's worth noting that creating databases of random people on social media and trying to learn everything you can about them, like their relationships and influences, is nothing new for the government or the private sector (it's what Palantir does). And there's nothing stopping WikiLeaks or anyone else from doing the same. But in this case it appears that WikiLeaks is floating the idea of creating this database and then making it a searchable public tool. And it's not at all clear that WikiLeaks would limit the data it collects on Twitter users to information gathered on Twitter. Since they're talking about limiting it to “verified users” (Twitter accounts that have been strongly identified with a real person using their real name), that suggests they could include all sorts of third-party data from anywhere.

    And if the above article's speculation is correct, the motive for creating this data set (with fancy graphs, presumably) would basically be to discredit people through guilt-by-association:

    ...
    Still, it was an unset­tling procla­ma­tion for many on Twit­ter, and fol­lowed just a few days after Wik­iLeaks founder Julian Assange told Fox News that Amer­i­can media cov­er­age is “very dis­hon­est.” It’s a descrip­tor Pres­i­dent-elect Don­ald Trump famous­ly uses, too.

    It seems pos­si­ble that the point of look­ing into ver­i­fied Twit­ter users — many of whom are jour­nal­ists — is so that Wik­iLeaks can rein in the “dis­hon­est media.”
    ...

    Keep in mind that if Wik­iLeaks actu­al­ly cre­at­ed this tool, it would prob­a­bly have quite a bit of lee­way over the kind of data that gets includ­ed in the sys­tem and which “rela­tion­ships” or “influ­ences” show up for a giv­en indi­vid­ual. Also keep in mind that if this was done respon­si­bly there would have to be a great deal of human judge­ment that goes into whether or not a par­tic­u­lar piece of data that points towards a “rela­tion­ship” or “influ­ence” is accu­rate and hon­est. And it’s that kind of required flex­i­bil­i­ty that could give Wik­iLeaks a great deal of real pow­er over how some­one is pre­sent­ed.

    So it appears that WikiLeaks wants to create publicly accessible dossiers on verified Twitter users. Presumably for the purpose of ‘making a point' of some sort. Sort of like the old “TheyRule.net” web tool that showed graphs of the people serving on the boards of major corporations and made the incestuous nature of corporate leadership visually clear. But in this case it won't be limited to big company CEOs. It'll be everyone. At least everyone with a verified Twitter account, which just happens to include large numbers of journalists and activists. So, TheyRule.net, but with much more personal information on people who may or may not actually rule. Great.
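
    For a sense of what “a metric to understand influence networks based on proximity graphs” could look like in practice, here's a minimal sketch using the open-source networkx graph library. The interaction data and the choice of PageRank as the ‘influence' score are illustrative assumptions on my part; WikiLeaks never specified its methods:

    ```python
    # Hypothetical sketch of a "proximity graph" influence metric.
    # The accounts and edge weights below are invented for illustration;
    # nothing here reflects WikiLeaks's actual (unspecified) methodology.
    import networkx as nx

    # Directed interactions: (source, target, count), e.g. replies or retweets.
    interactions = [
        ("@reporter_a", "@politician_x", 12),
        ("@reporter_b", "@politician_x", 3),
        ("@activist_c", "@reporter_a", 7),
        ("@reporter_a", "@reporter_b", 2),
    ]

    G = nx.DiGraph()
    for src, dst, count in interactions:
        G.add_edge(src, dst, weight=count)

    # One possible "influence" score: weighted PageRank over the interaction graph.
    influence = nx.pagerank(G, weight="weight")
    for account, score in sorted(influence.items(), key=lambda kv: -kv[1]):
        print(f"{account}: {score:.3f}")
    ```

    The unsettling part isn't the math, which is standard network analysis; it's who ends up in the data set and who decides which ‘relationships' count.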

    Posted by Pterrafractyl | January 10, 2017, 3:58 pm
  15. Here’s some­thing worth not­ing while sift­ing through the 2016 elec­tion after­math: Sil­i­con Val­ley’s long right­ward shift became offi­cial in 2016. At least if you look at the cor­po­rate PACs of tech giants like Microsoft, Google, Face­book, and Ama­zon. Sure, the employ­ees tend­ed to still favor donat­ing to Democ­rats, although not as much as before (and not at all at Microsoft). But when it came to the cor­po­rate PACs Sil­i­con Val­ley was see­ing red:

    The New York Times
    Opin­ion

    Sil­i­con Val­ley Takes a Right Turn

    Thomas B. Edsall
    JAN. 12, 2017

    In 2016, the cor­po­rate PACs asso­ci­at­ed with Microsoft, Face­book, Google and Ama­zon broke ranks with the tra­di­tion­al alle­giance of the broad tech sec­tor to the Demo­c­ra­t­ic Par­ty. All four donat­ed more mon­ey to Repub­li­can Con­gres­sion­al can­di­dates than they did to their Demo­c­ra­t­ic oppo­nents.

    As these tech­nol­o­gy firms have become cor­po­rate behe­moths, their con­cerns over gov­ern­ment reg­u­la­to­ry pol­i­cy have inten­si­fied — on issues includ­ing pri­va­cy, tax­a­tion, automa­tion and antitrust. These are ques­tions on which they appear to view Repub­li­cans as stronger allies than Democ­rats.

    In 2016, the PACs of these four firms gave a total of $3.6 mil­lion to House and Sen­ate can­di­dates. Of that, $2.1 mil­lion went to Repub­li­cans, and $1.5 mil­lion went to Democ­rats. These PACs did not con­tribute to pres­i­den­tial can­di­dates.

    The PACs stand apart from dona­tions by employ­ees in the tech­nol­o­gy and inter­net sec­tors. Accord­ing to OpenSe­crets, these employ­ees gave $42.4 mil­lion to Democ­rats and $24.2 mil­lion to Repub­li­cans.

    In the pres­i­den­tial race, tech employ­ees (as opposed to cor­po­rate PACs) over­whelm­ing­ly favored Hillary Clin­ton over Don­ald Trump. Work­ers for inter­net firms, for exam­ple, gave her $6.3 mil­lion, and gave $59,622 to Trump. Employ­ees of elec­tron­ic man­u­fac­tur­ing firms donat­ed $12.6 mil­lion to Clin­ton and $534,228 to Trump.

    Most tech exec­u­tives and employ­ees remain sup­port­ive of Democ­rats, espe­cial­ly on social and cul­tur­al issues. The Repub­li­can tilt of the PACs at Microsoft, Ama­zon, Google and Face­book sug­gests, how­ev­er, that as these com­pa­nies’ domains grow larg­er, their bot­tom-line inter­ests are becom­ing increas­ing­ly aligned with the poli­cies of the Repub­li­can Par­ty.

    In terms of polit­i­cal con­tri­bu­tions, Microsoft has led the right­ward charge. In 2008, the Microsoft PAC deci­sive­ly favored Democ­rats, 60–40, accord­ing to data com­piled by the indis­pens­able Cen­ter for Respon­sive Pol­i­tics. By 2012, Repub­li­can can­di­dates and com­mit­tees had tak­en the lead, 54–46; and by 2016, the Microsoft PAC had become deci­sive­ly Repub­li­can, 65–35.

    In 2016, the Microsoft PAC gave $478,818 to Repub­li­can House can­di­dates and $272,000 to Demo­c­ra­t­ic House can­di­dates. It gave $164,000 to Repub­li­can Sen­ate can­di­dates, and $75,000 to Demo­c­ra­t­ic Sen­ate can­di­dates.

    Microsoft employ­ees’ con­tri­bu­tions fol­lowed a com­pa­ra­ble pat­tern. In 2008 and 2012, Microsoft work­ers were solid­ly pro-Demo­c­ra­t­ic, with 71 per­cent and 65 per­cent of their con­tri­bu­tions going to par­ty mem­bers. By 2016, the company’s work force had shift­ed gears. Democ­rats got 47 per­cent of their dona­tions.

    This was not small change. In 2016 Microsoft employ­ees gave a total of $6.47 mil­lion.

    A sim­i­lar pat­tern is vis­i­ble at Face­book.

    The firm first became a notice­able play­er in the world of cam­paign finance in 2012 when employ­ees and the com­pa­ny PAC togeth­er made con­tri­bu­tions of $910,000. That year, Face­book employ­ees backed Democ­rats over Repub­li­cans 64–35, while the company’s PAC tilt­ed Repub­li­can, 53–46.

    By 2016, when total Face­book con­tri­bu­tions reached $3.8 mil­lion, the Demo­c­ra­t­ic advan­tage in employ­ee dona­tions shrank to 51–47, while the PAC con­tin­ued to favor Repub­li­cans, 56–44.

    While the employ­ees of the three oth­er most valu­able tech com­pa­nies, Alpha­bet (Google), Ama­zon and Apple, remained Demo­c­ra­t­ic in their giv­ing in 2016, at the cor­po­rate lev­el of Alpha­bet and Ama­zon — that is, at the lev­el of their PACs — they have not.

    Google’s PAC gave 56 per­cent of its 2016 con­tri­bu­tions to Repub­li­cans and 44 per­cent to Democ­rats. The Ama­zon PAC fol­lowed a sim­i­lar path, favor­ing Repub­li­cans over Democ­rats 52–48. (Apple does not have a PAC.)

    Tech giants can no longer be described as insur­gents chal­leng­ing cor­po­rate Amer­i­ca.

    “By just about every mea­sure worth col­lect­ing,” Farhad Man­joo of The Times wrote in Jan­u­ary 2016:

    Amer­i­can con­sumer tech­nol­o­gy com­pa­nies are get­ting larg­er, more entrenched in their own sec­tors, more pow­er­ful in new sec­tors and bet­ter insu­lat­ed against sur­pris­ing com­pe­ti­tion from upstarts.

    These firms are now among the biggest of big busi­ness. In a 2016 USA Today rank­ing of the most valu­able com­pa­nies world­wide, the top four were Alpha­bet, $554.8 bil­lion; Apple, $529.3 bil­lion; Microsoft, $425.4 bil­lion; and Face­book, $333.6 bil­lion. Those firms deci­sive­ly beat out Berk­shire Hath­away, Exxon Mobil, John­son & John­son and Gen­er­al Elec­tric.

    In addi­tion to tech com­pa­nies’ con­cern about gov­ern­ment pol­i­cy on tax­a­tion, reg­u­la­tion and antitrust, there are oth­er sources of con­flict between tech firms and the Demo­c­ra­t­ic Par­ty. Gre­go­ry Fer­en­stein, a blog­ger who cov­ers the tech indus­try, con­duct­ed a sur­vey of 116 tech com­pa­ny founders for Fast Com­pa­ny in 2015. Using data from a poll con­duct­ed by the firm Sur­vey­Mon­key, Fer­en­stein com­pared the views of tech founders with those of Democ­rats, in some cas­es, and the views of the gen­er­al pub­lic, in oth­ers.

    Among Ferenstein’s find­ings: a minor­i­ty, 29 per­cent, of tech com­pa­ny founders described labor unions as “good,” com­pared to 73 per­cent of Democ­rats. Asked “is mer­i­toc­ra­cy nat­u­ral­ly unequal?” tech founders over­whelm­ing­ly agreed.

    Fer­en­stein went on:

    One hun­dred per­cent of the small­er sam­ple of founders to whom I pre­sent­ed this ques­tion said they believe that a tru­ly mer­i­to­crat­ic econ­o­my would be “most­ly” or “some­what” unequal. This is a key dis­tinc­tion: Oppor­tu­ni­ty is about max­i­miz­ing people’s poten­tial, which founders tend to believe is high­ly unequal. Founders may val­ue cit­i­zen con­tri­bu­tions to soci­ety, but they don’t think all cit­i­zens have the poten­tial to con­tribute equal­ly. When asked what per­cent of nation­al income the top 10% would hold in such a sce­nario, a major­i­ty (67%) of founders believed that the rich­est indi­vid­u­als would con­trol 50% or more of total income, while only 31% of the pub­lic believes such an out­come would occur in a mer­i­to­crat­ic soci­ety.

    One of the most inter­est­ing ques­tions posed by Fer­en­stein speaks to mid­dle and work­ing class anx­i­eties over glob­al com­pe­ti­tion:

    In inter­na­tion­al trade pol­i­cy, some peo­ple believe the U.S. gov­ern­ment should cre­ate laws that favor Amer­i­can busi­ness with poli­cies that pro­tect it from glob­al com­pe­ti­tion, such as fees on import­ed goods or mak­ing it cost­ly to hire cheap­er labor in oth­er coun­tries (“out­sourc­ing”). Oth­ers believe it would be bet­ter if there were less reg­u­la­tions and busi­ness­es were free to trade and com­pete with­out each coun­try favor­ing their own indus­tries. Which of these state­ments come clos­est to your belief?

    There was a large dif­fer­ence between tech com­pa­ny offi­cials, 73 per­cent of whom chose free trade and less reg­u­la­tion, while only 20 per­cent of Democ­rats sup­port­ed those choic­es.

    Ferenstein also found that tech founders are substantially more liberal on immigration policy than Democrats generally: 64 percent would increase total immigration levels, compared to 39 percent of Democrats. Tech executives are strong supporters of increasing the number of highly trained immigrants through the H-1B visa program.

    Joel Kotkin, a fel­low in urban stud­ies at Chap­man Uni­ver­si­ty who writes about demo­graph­ic, social and eco­nom­ic trends, sees these dif­fer­ences as the source of deep con­flict with­in the Demo­c­ra­t­ic Par­ty.

    In a provoca­tive August, 2015, col­umn in the Orange Coun­ty Reg­is­ter, Kotkin wrote:

    The dis­rup­tive force is large­ly Sil­i­con Val­ley, a nat­ur­al oli­garchy that now funds a par­ty tee­ter­ing toward pop­ulism and even social­ism. The fun­da­men­tal con­tra­dic­tions, as Karl Marx would have not­ed, lie in the col­li­sion of inter­ests between a group that has come to epit­o­mize self-con­scious­ly pro­gres­sive mega-wealth and a mass base which is increas­ing­ly con­cerned about down­ward mobil­i­ty.

    The tech elite, Kotkin writes, “far from deserting the Democratic Party, more likely will aim to take it over.” Until very recently, the

    con­flict between pop­ulists and tech oli­garchs has been mut­ed, in large part due to com­mon views on social issues like gay mar­riage and, to some extent, envi­ron­men­tal pro­tec­tion. But as the social issues fade, hav­ing been “won” by pro­gres­sives, the focus nec­es­sar­i­ly moves to eco­nom­ics, where the gap between these two fac­tions is great­est.

    Kotkin sees future par­ti­san machi­na­tion in cyn­i­cal terms:

    One can expect the oli­garchs to seek out a modus viven­di with the pop­ulists. They could exchange a regime of high­er tax­es and reg­u­la­tion for ever-expand­ing crony cap­i­tal­ist oppor­tu­ni­ties and polit­i­cal pro­tec­tion. As the hege­mons of today, Face­book and Google, not to men­tion Apple and Ama­zon, have an intense inter­est in pro­tect­ing them­selves, for exam­ple, from antitrust leg­is­la­tion. His­to­ry is pret­ty clear: Hero­ic entre­pre­neurs of one decade often turn into the insid­er cap­i­tal­ists of the next.

    In 2016, Donald Trump produced an upheaval within the Republican Party that shifted attention away from the less explosive turmoil in Democratic ranks.

    ...

    “The tech elite, Kotkin writes, “far from deserting the Democratic Party, more likely will aim to take it over.””

    And that warning is going to be something to keep in mind as this trend continues: the political red-shifting of Silicon Valley doesn't mean Silicon Valley's titans are going to eventually abandon the Democratic party and stop giving money. It's worse. They're going to keep giving the Democrats money (although maybe not as much as they give the GOP) in the hopes of remaking the party in the GOP's image. And the more powerful the tech sector becomes, the more money these giant corporations will have available for this kind of political ‘persuasion'.

    And in other news, a new Oxfam study found that just eight individuals — including tech titans Bill Gates, Jeff Bezos, Mark Zuckerberg, and Larry Ellison — own as much wealth as the poorest half of the global population. So, you know, wealth inequality probably isn't a super big priority for their super PACs.

    Posted by Pterrafractyl | January 17, 2017, 4:09 pm
  16. With the GOP and Trump White House scrambling to find some sort of legislative victory in the wake of last week's failed Obamacare repeal bill that almost everybody hated, it's worth noting that the GOP-controlled House and Senate may have just put in motion a major regulatory change that could be even more hated than Trumpcare: making it legal for your ISP to sell your browsing habits, location, online shopping habits, and anything else it can extract from your online activity:

    The New York Times
    Opin­ion

    How the Repub­li­cans Sold Your Pri­va­cy to Inter­net Providers

    By TOM WHEELER
    MARCH 29, 2017

    On Tues­day after­noon, while most peo­ple were focused on the lat­est news from the House Intel­li­gence Com­mit­tee, the House qui­et­ly vot­ed to undo rules that keep inter­net ser­vice providers — the com­pa­nies like Com­cast, Ver­i­zon and Char­ter that you pay for online access — from sell­ing your per­son­al infor­ma­tion.

    The Sen­ate already approved the bill, on a par­ty-line vote, last week, which means that in the com­ing days Pres­i­dent Trump will be able to sign leg­is­la­tion that will strike a sig­nif­i­cant blow against online pri­va­cy pro­tec­tion.

    The bill not only gives cable com­pa­nies and wire­less providers free rein to do what they like with your brows­ing his­to­ry, shop­ping habits, your loca­tion and oth­er infor­ma­tion gleaned from your online activ­i­ty, but it would also pre­vent the Fed­er­al Com­mu­ni­ca­tions Com­mis­sion from ever again estab­lish­ing sim­i­lar con­sumer pri­va­cy pro­tec­tions.

    The bill is an effort by the F.C.C.’s new Repub­li­can major­i­ty and con­gres­sion­al Repub­li­cans to over­turn a sim­ple but vital­ly impor­tant con­cept — name­ly that the infor­ma­tion that goes over a net­work belongs to you as the con­sumer, not to the net­work hired to car­ry it. It’s an old idea: For decades, in both Repub­li­can and Demo­c­ra­t­ic admin­is­tra­tions, fed­er­al rules have pro­tect­ed the pri­va­cy of the infor­ma­tion in a tele­phone call. In 2016, the F.C.C., which I led as chair­man under Pres­i­dent Barack Oba­ma, extend­ed those same pro­tec­tions to the inter­net.

    To my Demo­c­ra­t­ic col­leagues and me, the dig­i­tal tracks that a con­sumer leaves when using a net­work are the prop­er­ty of that con­sumer. They con­tain pri­vate infor­ma­tion about per­son­al pref­er­ences, health prob­lems and finan­cial mat­ters. Our Repub­li­can col­leagues on the com­mis­sion argued the data should be avail­able for the net­work to sell. The com­mis­sion vote was 3–2 in favor of con­sumers.

    Revers­ing those pro­tec­tions is a dream for cable and tele­phone com­pa­nies, which want to cap­i­tal­ize on the val­ue of such per­son­al infor­ma­tion. I under­stand that net­work exec­u­tives want to pro­duce the high­est return for share­hold­ers by sell­ing con­sumers’ infor­ma­tion. The prob­lem is they are sell­ing some­thing that doesn’t belong to them.

    Here’s one per­verse result of this action. When you make a voice call on your smart­phone, the infor­ma­tion is pro­tect­ed: Your phone com­pa­ny can’t sell the fact that you are call­ing car deal­er­ships to oth­ers who want to sell you a car. But if the same device and the same net­work are used to con­tact car deal­ers through the inter­net, that infor­ma­tion — the same infor­ma­tion, in fact — can be cap­tured and sold by the net­work. To add insult to injury, you pay the net­work a month­ly fee for the priv­i­lege of hav­ing your infor­ma­tion sold to the high­est bid­der.

    This bill isn’t the only gift to the indus­try. The Trump F.C.C. recent­ly vot­ed to stay require­ments that inter­net ser­vice providers must take “rea­son­able mea­sures” to pro­tect con­fi­den­tial infor­ma­tion they hold on their cus­tomers, such as Social Secu­ri­ty num­bers and cred­it card infor­ma­tion. This is not a hypo­thet­i­cal risk — in 2015 AT&T was fined $25 mil­lion for shod­dy prac­tices that allowed employ­ees to steal and sell the pri­vate infor­ma­tion of 280,000 cus­tomers.

    Among the many calami­ties engen­dered by the cir­cus atmos­phere of this White House is the diver­sion of pub­lic atten­tion away from many oth­er activ­i­ties under­tak­en by the Repub­li­can-con­trolled gov­ern­ment. Nobody seemed to notice when the Trump F.C.C. dropped the require­ment about net­works pro­tect­ing infor­ma­tion because we were all riv­et­ed by the Russ­ian hack­ing of the elec­tion and the attempt­ed repeal of Oba­macare.

    There’s a lot of hypocrisy at play here: The man who has raged end­less­ly at the alleged sur­veil­lance of the com­mu­ni­ca­tions of his aides (and poten­tial­ly him­self) will most like­ly soon glad­ly sign a bill that allows unre­strained sale of the per­son­al infor­ma­tion of any Amer­i­can using the inter­net.

    ...

    “The bill not only gives cable companies and wireless providers free rein to do what they like with your browsing history, shopping habits, your location and other information gleaned from your online activity, but it would also prevent the Federal Communications Commission from ever again establishing similar consumer privacy protections.”

    The GOP is so intent on guaranteeing the right of ISPs to sell anything they can about you that the House bill would prevent the FCC from ever again establishing the very protections the bill repeals. At least, presumably, unless a new law is passed to re-empower the FCC. Which means that if this becomes law (and all indications are Trump will sign it into law), it's probably going to take a Democratic-controlled House, Senate, and White House to reverse it. Yes, following the GOP's epic Trumpcare fail, it's about to rebrand itself as the “ISPs will stop spying on you over our dead body!” party.

    And that's all on top of Trump's FCC voting to prevent these same ISPs from having to take “reasonable measures” to protect the few categories of information they're collecting on you that they wouldn't be selling: your Social Security number and credit card info:

    ...
    This bill isn’t the only gift to the indus­try. The Trump F.C.C. recent­ly vot­ed to stay require­ments that inter­net ser­vice providers must take “rea­son­able mea­sures” to pro­tect con­fi­den­tial infor­ma­tion they hold on their cus­tomers, such as Social Secu­ri­ty num­bers and cred­it card infor­ma­tion. This is not a hypo­thet­i­cal risk — in 2015 AT&T was fined $25 mil­lion for shod­dy prac­tices that allowed employ­ees to steal and sell the pri­vate infor­ma­tion of 280,000 cus­tomers.
    ...

    So with ISPs set to com­pete with the exist­ing data-bro­ker giants like Face­book and Google and cre­ate a giant nation­al fire sale of per­son­al dig­i­tal infor­ma­tion, it’s prob­a­bly a good time to con­sid­er whether or not you’re at risk of iden­ti­ty theft. And here’s a nice quick way to fig­ure that out: Do you use the inter­net in the US? If the answer is “yes”, you’re prob­a­bly at risk of iden­ti­ty theft:

    Fox59

    Cyber expert explains inter­net pri­va­cy con­cerns after House pulls plug on FCC reg­u­la­tions

    Post­ed 5:13 PM, March 29, 2017, by Shan­non Houser, Updat­ed at 05:35PM, March 29, 2017

    BLOOMINGTON, Ind. — A vote Tues­day in Wash­ing­ton dis­man­tled online pri­va­cy reg­u­la­tions pre­vi­ous­ly set in place by the FCC.

    The reg­u­la­tions would have pre­vent­ed inter­net ser­vice providers like Com­cast, Ver­i­zon and AT&T from sell­ing your per­son­al infor­ma­tion on mar­ket­places and to start-up com­pa­nies.

    FOX59 spoke to IU Bloom­ing­ton Cyber Secu­ri­ty Pro­gram Chair Scott Shack­elford about what it now means for you when you’re brows­ing online.

    Shack­elford said those com­pa­nies can access infor­ma­tion like your brows­ing habits, but also dates impor­tant to you, like your birth­day. Those details can be sold where com­pa­nies would use that data to devel­op adver­tis­ing and mar­ket­ing trends.

    “It can be a cou­ple bucks. It can be a bit more, but when all of a sud­den you have mil­lions of con­sumers, that can be a cash cow for a lot of com­pa­nies,” Shack­elford said.

    The more per­son­al infor­ma­tion that’s out there, Shack­elford says, the eas­i­er it is for you to become a vic­tim of iden­ti­ty theft.

    ...

    “The more per­son­al infor­ma­tion that’s out there, Shack­elford says, the eas­i­er it is for you to become a vic­tim of iden­ti­ty theft.”

    If all the per­son­al infor­ma­tion about you that’s already out there and read­i­ly avail­able for any­one to pur­chase in the giant data-bro­ker­age indus­try (or just browse) has­n’t yet made you vul­ner­a­ble enough to iden­ti­ty theft, might adding the ISP’s trea­sure trove of per­son­al infor­ma­tion tip the scales? You’ll find out.

    So what can you do? Well, as the article below notes, there are some things individuals can do to protect their online data from their ISPs, like using a Virtual Private Network or privacy tools like Tor. But as the article also notes, even if you use every trick out there to protect your online privacy, all of that pales in comparison to having an actual law protecting you:

    The Dai­ly Dot

    Think you can pro­tect your pri­va­cy from inter­net providers with­out FCC rules? Good luck.

    Laura Moy
    Mar 28 at 5:33AM | Last updat­ed Mar 28 at 5:38AM

    Do you feel dis­sat­is­fied over the state of online pri­va­cy, and wish reg­u­la­tors would do more, not less, to pro­tect your pri­va­cy? For most Amer­i­cans, the answer to that ques­tion is yes. Unfor­tu­nate­ly, Con­gress is about to move online pri­va­cy in the wrong direc­tion.

    Despite the fact that Amer­i­cans over­whelm­ing­ly want more pri­va­cy pro­tec­tions, Con­gress is on the verge of doing a huge favor to cor­po­rate bene­fac­tors this week by elim­i­nat­ing some of the strongest pri­va­cy pro­tec­tions we have—rules that pre­vent inter­net providers from spy­ing on their cus­tomers and sell­ing or shar­ing pri­vate infor­ma­tion about what their cus­tomers do online with­out per­mis­sion. The rules also require inter­net providers to take steps to pro­tect that infor­ma­tion from harm­ful attack­ers.

    ...

    If you feel strong­ly about your elect­ed rep­re­sen­ta­tives in Con­gress work­ing to dis­man­tle pri­va­cy pro­tec­tions you val­ue, then, of course, you should reach out to them and tell them how you feel before it’s too late.

    But if Con­gress acts to elim­i­nate pri­va­cy any­way, you might be left won­der­ing: What next? What can I do to pro­tect my own pri­va­cy if Con­gress is work­ing to destroy it? You can’t very well for­go using the inter­net, which in today’s world is essen­tial for edu­ca­tion, job appli­ca­tions, health­care, finance, and more. You might not even have the abil­i­ty to switch providers if you don’t like the inva­sive prac­tices of your provider—many Amer­i­cans only have one option when it comes to high-speed internet. And your inter­net provider has both the abil­i­ty and incen­tive to spy on you. In the words of the founder and CEO of one inter­net provider in Maine, “Your ISP can look at your traf­fic and dis­cov­er the most inti­mate details of your life, and sell­ing that infor­ma­tion will ulti­mate­ly be more valu­able than sell­ing the inter­net con­nec­tion.”

    The depress­ing real­i­ty is that if and when Con­gress elim­i­nates inter­net pri­va­cy pro­tec­tions, you’ll be left with few options to defend your­self—the lit­tle you can do will pale in com­par­i­son to hav­ing con­crete rules that strict­ly lim­it inter­net providers’ abil­i­ty to share or sell pri­vate infor­ma­tion. Your self-help pri­va­cy options will be nei­ther appeal­ing nor effec­tive:

    Take advan­tage of pri­va­cy options offered by your provider (maybe). If you’re lucky, your provider might make lim­it­ed pri­va­cy options avail­able to you on what’s known as an “opt-out” basis—meaning they will share your infor­ma­tion by default, but allow you to tell them to stop doing that if you can fig­ure out how. Unfor­tu­nate­ly, if Con­gress elim­i­nates exist­ing pri­va­cy rules, the details about what infor­ma­tion your inter­net provider col­lects will prob­a­bly become even more dif­fi­cult to find and under­stand, and inter­net providers will prob­a­bly gut many of their more pri­va­cy-pro­tec­tive options.

    Subscribe to a “virtual private network.” In addition to your already-expensive internet bill, you could decide to pay for a VPN service that helps to shield some of your Internet traffic from your provider. But it's not as easy as it sounds: You have to have a bit of geek know-how to properly configure your VPN, and (annoyingly) you'll also have to remember to turn on your VPN every single time you connect to the internet. Not only that, but tunneling all of your traffic through a VPN will substantially slow down your internet experience. And if that wasn't bad enough, it might not even address your privacy concerns: Just like your internet provider, your VPN provider could also track and sell your online activities. Needless to say, VPNs are not a magic cure for internet privacy.

    Install HTTPS Every­where. This is a free exten­sion for your web brows­er that routes you auto­mat­i­cal­ly to the HTTPS ver­sion of the web­sites you vis­it. This means you’ll see the friend­ly green lock icon more often, which indi­cates that your con­nec­tion is encrypt­ed and few­er details of your brows­ing activ­i­ties will be avail­able to your provider. The HTTPS Every­where exten­sion is won­der­ful because every­thing works auto­mat­i­cal­ly once you’ve installed it. But here’s the prob­lem: Many pop­u­lar sites don’t even sup­port HTTPS. A study by Google last year found that a shock­ing­ly high num­ber of sites either use out­dat­ed encryp­tion or offer none at all. This means that HTTPS Every­where can’t help pro­tect your pri­va­cy on those sites, even though there are oth­er sites that it helps with. So you should go install HTTPS Everywhere—it’s easy to use, and it does pro­vide some protection—but it’s a far cry from the strong pri­va­cy pro­tec­tions that Con­gress is try­ing to do away with.

    It should be clear that the things you can do to pro­tect your own pri­va­cy from your inter­net provider are at best sub­op­ti­mal and at worst hor­ri­bly insuf­fi­cient. Indeed, there are sev­er­al oth­er things not worth dis­cussing here because they are so tech­ni­cal­ly dif­fi­cult as to be effec­tive­ly unavail­able to the aver­age inter­net user: Things like swap­ping out your DNS serv­er, installing your own wire­less router (and retir­ing the one your inter­net provider gave you), set­ting up a pri­vate email serv­er, encrypt­ing your emails, using the Tor Brows­er, and peri­od­i­cal­ly chang­ing the MAC address­es of your con­nect­ed devices.

    And none of these solutions—or all of them togeth­er, for that matter—are as good as hav­ing rules on the books that just pro­hib­it inter­net providers from spy­ing on their cus­tomers and sell­ing their pri­vate infor­ma­tion with­out per­mis­sion. We have those rules today, but tomor­row they could be gone.

    Good luck, inter­net users. You may soon be on your own.

    “And none of these solutions—or all of them together, for that matter—are as good as having rules on the books that just prohibit internet providers from spying on their customers and selling their private information without permission. We have those rules today, but tomorrow they could be gone.”

    Yep, we could all jump through elaborate technical hoops in an endless privacy-tools arms race to protect our online privacy from the ISPs' bottom lines. Or we could, you know, make it illegal and then enforce that law.
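
    For a sense of what one of those hoops looks like in practice, here's a minimal sketch of checking whether a site will upgrade an insecure connection to HTTPS, the sort of thing a tool like HTTPS Everywhere automates. It assumes Python 3 with the widely used third-party requests library, and the sample site list is just an illustration:

    ```python
    # Minimal sketch: does a site upgrade plain-HTTP requests to HTTPS?
    # Assumes Python 3 with the third-party "requests" library installed.
    import requests

    SITES = ["example.com", "wikipedia.org"]  # hypothetical sample list

    for site in SITES:
        try:
            # Start from the insecure URL, follow redirects, and see where we land.
            resp = requests.get(f"http://{site}", timeout=10, allow_redirects=True)
            scheme = resp.url.split(":", 1)[0]
            print(f"{site}: final URL uses {scheme.upper()}")
        except requests.RequestException as exc:
            print(f"{site}: request failed ({exc})")
    ```

    And even when a site does serve HTTPS, your ISP still sees which domains you visit; encryption hides the contents of your browsing, not the destinations.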

    Of course, even if this latest gift from the GOP pulls a ‘Trumpcare' and ends up going down in flames at the last minute, it's not like there isn't some validity to the argument that the ISPs merely want to do what online giants like Google and Facebook have been doing for years (and do to you whether you're on their sites or just browsing some random page with their trackers on it). So it's going to be important to keep in mind that part of the solution to ending the threat of ISP data-brokering is regulating the hell out of all rapacious data-brokers in general. Online and offline. That should even things out.

    Or we could just wait for the indus­try to come up with its own pri­va­cy ‘solu­tions’.

    Posted by Pterrafractyl | March 29, 2017, 8:14 pm
