Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

For The Record  

FTR #859 Because They Can: Update on Technocratic Fascism

Dave Emory’s entire lifetime of work is available on a flash drive that can be obtained here. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by late spring of 2015. The new drive (available for a tax-deductible contribution of $65.00 or more) contains FTR #850.

WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE.

You can subscribe to e-mail alerts from Spitfirelist.com HERE.

You can subscribe to the RSS feed from Spitfirelist.com HERE.

You can subscribe to the comments made on programs and posts–an excellent source of information in, and of, itself HERE.

This program was recorded in one 60-minute segment.

Introduction: Albert Einstein said of the invention of the atomic bomb: “Everything has changed but our way of thinking.” We feel that other, more recent developments in the world of Big Tech warrant the same type of warning.

This program further explores the Brave New World being midwived by technocrats. These stunning developments should be viewed against the background of what we call “technocratic fascism,” referencing a vitally important article by David Golumbia: “. . . . Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (“civic,” “ethical,” “white” and “black” hat alike), hacktivists, WikiLeaks fans [and Julian Assange et al–D. E.], Anonymous “members,” even Edward Snowden himself walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands. . . . First, [Tor co-creator] Dingledine claimed that Tor must be supported because it follows directly from a fundamental “right to privacy.” Yet when pressed—and not that hard—he admits that what he means by “right to privacy” is not what any human rights body or “particular legal regime” has meant by it. Instead of talking about how human rights are protected, he asserts that human rights are natural rights and that these natural rights create natural law that is properly enforced by entities above and outside of democratic polities. Where the UN’s Universal Declaration on Human Rights of 1948 is very clear that states and bodies like the UN to which states belong are the exclusive guarantors of human rights, whatever the origin of those rights, Dingledine asserts that a small group of software developers can assign to themselves that role, and that members of democratic polities have no choice but to accept them having that role. . . . Further, it is hard not to notice that the appeal to natural rights is today most often associated with the political right, for a variety of reasons (ur-neocon Leo Strauss was one of the most prominent 20th century proponents of these views). We aren’t supposed to endorse Tor because we endorse the right: it’s supposed to be above the left/right distinction. But it isn’t. . . .”

We begin by examining a couple of articles relevant to the world of credit.

Big Tech and Big Data have reached the point where, for all intents and purposes, credit card users and virtually everyone else have no personal privacy. Even without detailed personal information, capable tech operators can identify people’s identities with an extraordinary degree of precision using a surprisingly small amount of information.

Compounding the worries of those seeking credit is a new Facebook “app” that will enable banks to identify how poor a customer’s friends are and to deny the unsuspecting credit on that basis!

Even as Big Tech is permitting financial institutions to zero in on customers to an unprecedented degree, it is moving in the direction of obscuring the doings of Banksters. The Symphony network offers end-to-end encryption that appears to make the operations of the financial institutions using it opaque to regulators.

A new variant of the Bitcoin technology will not only facilitate the use of Bitcoin to assassinate public figures but may very well replace–to a certain extent–the functions performed by attorneys. (We have covered Bitcoin–an apparent Underground Reich invention–in FTR #’s 760, 764, 770, 785.)

As frightening as some of the above possibilities may be, things may get dramatically worse with the introduction of “the Internet of Things,” permitting the hacking of many types of everyday technologies, as well as the use of those technologies to give Big Tech and Big Data unprecedented intrusion into people’s lives.

Program Highlights Include: 

  • Discussion of the hacking of an automobile using a laptop.
  • Comparison of the developments of Big Tech and Big Data to magic and the implications for a species that remains true to its Neanderthal, femur-cracking, marrow-sucking roots.
  • Review of some of the points covered in L-2.
  • The need for vastly bigger, rigorously regulated government instead of the fascism inherent in the libertarian doctrine.
  • How hackers are attempting to extort users of the Ashley Madison cheaters website.

1. Big Tech and Big Data have reached the point where, for all intents and purposes, credit card users and virtually everyone else have no personal privacy. Even without detailed personal information, capable tech operators can identify people’s identities with an extraordinary degree of precision using a surprisingly small amount of information.

“The Singularity Is Already Here–It’s Name Is Big Data” submitted by Ben Hunt; Zerohedge.com; 2/08/2015.

Last Thursday the journal Science published an article by four MIT-affiliated data scientists (Sandy Pentland is in the group, and he’s a big name in these circles), titled “Unique in the shopping mall: On the reidentifiability of credit card metadata”. Sounds innocuous enough, but here’s the summary from the front page WSJ article describing the findings:

Researchers at the Massachusetts Institute of Technology, writing Thursday in the journal Science, analyzed anonymous credit-card transactions by 1.1 million people. Using a new analytic formula, they needed only four bits of secondary information—metadata such as location or timing—to identify the unique individual purchasing patterns of 90% of the people involved, even when the data were scrubbed of any names, account numbers or other obvious identifiers.

Still not sure what this means? It means that I don’t need your name and address, much less your social security number, to know who you ARE. With a trivial amount of transactional data I can figure out where you live, what you do, who you associate with, what you buy and what you sell. I don’t need to steal this data, and frankly I wouldn’t know what to do with your social security number even if I had it … it would just slow down my analysis. No, you give me everything I need just by living your very convenient life, where you’ve volunteered every bit of transactional information in the fine print of all of these wondrous services you’ve signed up for. And if there’s a bit more information I need – say, a device that records and transmits your driving habits – well, you’re only too happy to sell that to me for a few dollars off your insurance policy. After all, you’ve got nothing to hide. It’s free money!
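
The arithmetic behind that claim is easy to demonstrate. Below is a toy Python sketch of the re-identification logic; the shops, days and pseudonyms are all invented, but the principle is the one the MIT paper describes: a handful of coarse observations about where and roughly when someone transacted can isolate a single pseudonym in a “scrubbed” log.

```python
# Toy re-identification sketch: four coarse observations (shop, day) are
# enough to single out one pseudonym in an "anonymized" transaction log.
# All data and field names are invented, for illustration only.

transactions = [
    # (pseudonym, shop_zone, day_of_month)
    ("user_417", "bakery_14", 3), ("user_417", "gym_02", 5),
    ("user_417", "cafe_09", 5),   ("user_417", "mall_11", 8),
    ("user_902", "bakery_14", 3), ("user_902", "cinema_07", 6),
    ("user_338", "gym_02", 5),    ("user_338", "mall_11", 9),
]

# Four pieces of "secondary information" about the target, gleaned from,
# say, a geotagged photo, a receipt, a check-in and a loyalty-card email.
observed = {("bakery_14", 3), ("gym_02", 5), ("cafe_09", 5), ("mall_11", 8)}

matches = {
    p for p, _, _ in transactions
    if observed <= {(zone, day) for q, zone, day in transactions if q == p}
}
print(matches)  # {'user_417'} -- unique, with no name or card number needed
```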

Almost every investor I know believes that the tools of surveillance and Big Data are only used against the marginalized Other – terrorist “sympathizers” in Yemen, gang “associates” in Compton – but not us. Oh no, not us. And if those tools are trained on us, it’s only to promote “transparency” and weed out the bad guys lurking in our midst. Or maybe to suggest a movie we’d like to watch. What could possibly be wrong with that? I’ve written a lot (here, here, and here) about what’s wrong with that, about how the modern fetish with transparency, aided and abetted by technology and government, perverts the core small-l liberal institutions of markets and representative government.

It’s not that we’re complacent about our personal information. On the contrary, we are obsessed about the personal “keys” that are meaningful to humans – names, social security numbers, passwords and the like – and we spend billions of dollars and millions of hours every year to control those keys, to prevent them from falling into the wrong hands of other humans. But we willingly hand over a different set of keys to non-human hands without a second thought.

The problem is that our human brains are wired to think of data processing in human ways, and so we assume that computerized systems process data in these same human ways, albeit more quickly and more accurately. Our science fiction is filled with computer systems that are essentially god-like human brains, machines that can talk and “think” and manipulate physical objects, as if sentience in a human context is the pinnacle of data processing! This anthropomorphic bias drives me nuts, as it dampens both the sense of awe and the sense of danger we should be feeling at what already walks among us. It seems like everyone and his brother today are wringing their hands about AI and some impending “Singularity”, a moment of future doom where non-human intelligence achieves some human-esque sentience and decides in Matrix-like fashion to turn us into batteries or some such. Please. The Singularity is already here. Its name is Big Data.

Big Data is magic, in exactly the sense that Arthur C. Clarke wrote of sufficiently advanced technology. It’s magic in a way that thermonuclear bombs and television are not, because for all the complexity of these inventions they are driven by cause and effect relationships in the physical world that the human brain can process comfortably, physical world relationships that might not have existed on the African savanna 2,000,000 years ago but are understandable with the sensory and neural organs our ancestors evolved on that savanna. Big Data systems do not “see” the world as we do, with merely 3 dimensions of physical reality. Big Data systems are not social animals, evolved by nature and trained from birth to interpret all signals through a social lens. Big Data systems are sui generis, a way of perceiving the world that may have been invented by human ingenuity and can serve human interests, but are utterly non-human and profoundly not of this world.

A Big Data system couldn’t care less if it has your specific social security number or your specific account ID, because it’s not understanding who you are based on how you identify yourself to other humans. That’s the human bias here, that a Big Data system would try to predict our individual behavior based on an analysis of what we individually have done in the past, as if the computer were some super-advanced version of Sherlock Holmes. No, what a Big Data system can do is look at ALL of our behaviors, across ALL dimensions of that behavior, and infer what ANY of us would do under similar circumstances. It’s a simple concept, really, but what the human brain can’t easily comprehend is the vastness of the ALL part of the equation or what it means to look at the ALL simultaneously and in parallel. I’ve been working with inference engines for almost 30 years now, and while I think that I’ve got unusually good instincts for this and I’ve been able to train my brain to kinda sorta think in multi-dimensional terms, the truth is that I only get glimpses of what’s happening inside these engines. I can channel the magic, I can appreciate the magic, and on a purely symbolic level I can describe the magic. But on a fundamental level I don’t understand the magic, and neither does any other human. What I can say to you with absolute certainty, however, is that the magic exists and there are plenty of magicians like me out there, with more graduating from MIT and Harvard and Stanford every year.
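
To make “look at ALL of our behaviors and infer what ANY of us would do” concrete, here is a deliberately tiny sketch of the underlying idea, with every number invented: represent each person as a vector of behavioral measurements, find the most similar known profile, and read the prediction off that neighbor. Production inference engines work over millions of people and thousands of dimensions, but the principle is the same.

```python
import numpy as np

# Rows are people, columns are behavioral dimensions (purchases, locations,
# hours online, ...). All values are invented, for illustration only.
behavior = np.array([
    [5.0, 0.0, 2.0, 9.0],   # person A
    [5.0, 1.0, 2.0, 8.0],   # person B: outcome unknown, to be inferred
    [0.0, 7.0, 9.0, 1.0],   # person C
])
bought_gadget = {0: True, 2: False}  # known outcomes for A and C

target = 1  # infer person B's behavior from everyone else's
sims = behavior @ behavior[target] / (
    np.linalg.norm(behavior, axis=1) * np.linalg.norm(behavior[target]))
sims[target] = -np.inf          # exclude B from being his own neighbor
nearest = int(np.argmax(sims))  # most similar known profile (here: A)
print("B will probably", "buy" if bought_gadget[nearest] else "not buy")
```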

Here’s the magic trick that I’m worried about for investors.

In exactly the same way that we have given away our personal behavioral data to banks and credit card companies and wireless carriers and insurance companies and a million app providers, so are we now being tempted to give away our portfolio behavioral data to mega-banks and mega-asset managers and the technology providers who work with them. Don’t worry, they say, there’s nothing in this information that identifies you directly. It’s all anonymous. What rubbish! With enough anonymous portfolio behavioral data and a laughably small IT budget, any competent magician can design a Big Data system that can predict with 90% accuracy what you will buy and sell in your account, at what price you will buy and sell, and under what external macro conditions you will buy and sell. Every day these private data sets at the mega-market players get bigger and bigger, and every day we get closer and closer to a Citadel or a Renaissance perfecting their Inference Machine for the liquid capital markets. For all I know, they already have. . . .

2. Check out Facebook’s new patent, to be evaluated in conjunction with the previous story. Facebook’s patent is for a service that will let banks scan your Facebook friends for the purpose of assessing your credit quality. For instance, Facebook might set up a service where banks can take the average of the credit ratings for all of the people in your social network, and if that average doesn’t meet a minimum credit score, your loan application is denied. And that’s not just some random application of Facebook’s new patent–the system of using the average credit scores of your social network to deny you loans is explicitly part of the patent:

“Facebook’s New Plan: Help Banks Figure Out How Poor You Are So They Can Deny You Loans” by Jack Smith IV; mic.com; 8/5/2015.

If you and your Facebook friends are poor, good luck getting approved for a loan.

Facebook has registered a patent for a system that would let banks and lenders screen your social network before deciding whether or not you’re approved for a loan. If your Facebook friends’ average credit scores don’t make the cut, the bank can reject you. The patent is worded in clear, terrifying language that speaks for itself:

When an individual applies for a loan, the lender examines the credit ratings of members of the individual’s social network who are connected to the individual through authorized nodes. If the average credit rating of these members is at least a minimum credit score, the lender continues to process the loan application. Otherwise, the loan application is rejected.

It’s very literally guilt by association, allowing banks and lenders to profile you by the status of your loved ones.
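
For the record, the gatekeeping logic quoted above amounts to a few lines of code. A minimal sketch, with hypothetical names, scores and cutoff:

```python
# Minimal sketch of the screening logic in the patent excerpt: average the
# credit ratings of the applicant's connected network and reject below a
# minimum score. Names, scores and the 620 cutoff are hypothetical.
credit_scores = {"alice": 710, "bob": 580, "carol": 540, "dan": 600}
network = {"applicant_x": ["bob", "carol", "dan"]}

def screen(applicant: str, minimum: int = 620) -> str:
    ratings = [credit_scores[friend] for friend in network[applicant]]
    average = sum(ratings) / len(ratings)
    return "continue processing" if average >= minimum else "reject"

print(screen("applicant_x"))  # 'reject': guilt by association, in four lines
```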

Though a credit score isn’t necessarily a reflection of your wealth, it can serve as a rough guideline for who has a reliable, managed income and who has had to lean on credit in trying times. A line of credit is sometimes a lifeline, either for starting a new business or escaping a temporary hardship.

Profiling people for being in social circles where low credit scores are likely could cut off someone’s chances of finding financial relief. In effect, it’s a device that isolates the poor and keeps them poor.

A bold new era for discrimination: In the United States, it’s illegal to deny someone a loan based on traditional identifiers like race or gender — the kinds of things people usually use to discriminate. But these laws were made before Facebook was able to peer into your social graph and learn when, where and how long you’ve known your friends and acquaintances.

The fitness-tracking tech company Fitbit said in 2014 that the fastest growing part of their business is helping employers monitor the health of their employees. Once insurers show interest in this information, you can bet they’ll be making a few rejections of their own. And if a group insurance plan that affects every employee depends on measurable, real-time data for the fitness of its employees, how will that affect the hiring process?

And if you don’t like it, just find richer friends.

3a. A consortium of 14 mega-banks has privately developed a special super-secure inter-bank messaging system that uses end-to-end strong encryption and permanently deletes data. The Symphony system may very well make it impossible for regulators to adequately oversee the financial malefactors responsible for the 2008 financial meltdown.

“NY Regulator Sends Message to Symphony” by Ben McLannahan and Gina Chon; Financial Times; 7/22/2015.

New York’s state banking regulator has fired a shot across the bows of Symphony, a messaging service about to be launched by a consortium of Wall Street banks and asset managers, by calling for information on how it manages — and deletes — customer data.

In a letter on Wednesday to David Gurle, the chief executive of Symphony Communication Services, the New York Department of Financial Services asked it to clarify how its tool would allow firms to erase their data trails, potentially falling foul of laws on record-keeping.

The letter, which was signed by acting superintendent Anthony Albanese and shared with the press, noted that chatroom transcripts had formed a critical part of authorities’ investigations into the rigging of markets for foreign exchange and interbank loans. It called for Symphony to spell out its document retention capabilities, policies and features, citing two specific areas of interest as “data deletion” and “end-to-end encryption”.

The letter marks the first expression of concern from regulators over a new initiative that has set out to challenge the dominance of Bloomberg, whose 320,000-plus subscribers ping about 200m messages a day between terminals using its communication tools.

People familiar with the matter described the inquiry as an information gathering exercise, which could conclude that Symphony is a perfectly legitimate enterprise.

The NYDFS noted that Symphony’s marketing materials state that “Symphony has designed a specific set of procedures to guarantee that data deletion is permanent and fully documented. We also delete content on a regular basis in accordance with customer data retention policies.”

Mr Albanese also wrote that he would follow up with four consortium members that the NYDFS regulates — Bank of New York Mellon, Credit Suisse, Deutsche Bank and Goldman Sachs — to ask them how they plan to use the new service, which will go live for big customers in the first week of August.

The regulator said it was keen to find out how banks would ensure that messages created using Symphony would be retained, and “whether their use of Symphony’s encryption technology can be used to prevent review by compliance personnel or regulators”. It also flagged concerns over the open-source features of the product, wondering if they could be used to “circumvent” oversight.

The other members of the consortium are Bank of America Merrill Lynch, BlackRock, Citadel, Citigroup, HSBC, Jefferies, JPMorgan, Maverick Capital, Morgan Stanley and Wells Fargo. Together they have chipped in about $70m to get Symphony started. Another San Francisco-based fund run by a former colleague of Mr Gurle’s, Merus Capital, has a 5 per cent interest.

“Symphony is built on a foundation of security, compliance and privacy features that were built to enable our financial services and enterprise customers to meet their regulatory requirements,” said Mr Gurle. “We look forward to explaining the various aspects of our communications platform to the New York Department of Financial Services.”

3b. According to Symphony’s backers, nothing could go wrong because all the information that banks are required to retain for regulatory purposes is indeed retained in the system. Whether or not regulators can actually access that retained data, however, appears to be more of an open question. Again, the end-to-end encryption may very well insulate Banksters from the regulation vital to avoid a repeat of the 2008 scenario.

“Symphony, the ‘WhatsApp for Wall Street,’ Orchestrates a Nuanced Response to Regulatory Critics” by Michael del Castillo; New York Business Journal; 8/13/2015.

Symphony is taking heat from some in Washington, D.C. for its WhatsApp-like messaging service that promises to encrypt Wall Street’s messages from end to end. At the heart of the concern is whether or not the keys used to decrypt the messages will be made available to regulators, or if another form of back door access will be provided.

Without such keys it would be immensely more difficult to retrace the steps of shady characters on Wall Street during regulatory investigations — an ability which, according to a New York Post report, has resulted in $74 billion in fines over the past five years.

So, earlier this week Symphony took to the blogosphere with a rather detailed explanation of its plans to be compliant with regulators. In spite of answering a lot of questions, though, one key point was either deftly evaded or overlooked.

What Symphony does, according to the blog post:

Symphony provides its customers with an innovative “end-to-end” secure messaging capability that protects communications in the cloud from cyber-threats and the risk of data breach, while safeguarding our customers’ ability to retain records of their messages. Symphony protects data, not only when it travels from “point-to-point” over network connections, but also the entire time the data is in the cloud.

How it works:

Large institutions using Symphony typically will store encryption keys using specialized hardware key management devices known as Hardware Security Modules (HSMs). These modules are installed in data centers and protect an organization’s keys, storing them within the secure protected memory of the HSM. Firms will use these keys to decrypt data and then feed the data into their record retention systems.

The crux:

Symphony is designed to interface with record retention systems commonly deployed in financial institutions. By helping organizations reliably store messages in a central archive, our platform facilitates the rapid and complete retrieval of records when needed. Symphony provides security while data travels through the cloud; firms then securely receive the data from Symphony, decrypt it and store it so they can meet their retention obligations.
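
The mechanics described above are worth pausing on, because they put all of the power with whoever holds the keys. A minimal sketch, with the Python `cryptography` package’s Fernet standing in for keys that would in practice sit inside a firm’s HSM; this is illustrative only, not Symphony’s actual implementation:

```python
# Sketch of the flow described above: messages are encrypted end to end,
# the cloud relay stores only ciphertext, and the firm's own key (here a
# Fernet key standing in for an HSM-held key) decrypts into the archive.
from cryptography.fernet import Fernet, InvalidToken

firm_key = Fernet.generate_key()   # in production: locked inside an HSM
hsm = Fernet(firm_key)

ciphertext = hsm.encrypt(b"trader A to trader B: about that benchmark...")

# The firm decrypts and feeds plaintext into its record retention system.
archive = [hsm.decrypt(ciphertext)]

# Anyone holding only the ciphertext and no key, a regulator included,
# recovers nothing.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("without the firm's key, the record is unreadable")
```

Retention, in other words, is real, but access runs entirely through the key holder, which is exactly the open question the article turns to next.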

The potential to store every keystroke of every employee behind an encrypted wall safe from malicious governments and other entities is one that should make Wall Streeters, and those dependent on Wall Street resources, sleep a bit better at night.

But nowhere in Symphony’s blog post does it actually say that any of the 14 companies which have invested $70 million in the product, or any of the forthcoming customers who might sign up to use it, will actually share anything with regulators. Sure, it will retain all the information obliged by regulators, which in the right hands is equally useful to the companies. So there’s no surprise there.

The closest we see to any actual assurance that the Silicon Valley-based company plans to share that information with regulators is that Symphony is “designed to interface with record retention systems commonly deployed in financial institutions.” Which, theoretically, means the SEC, the DOJ, or any number of regulatory bodies could plug in, assuming they had access.

So, the questions remain: will Symphony be building in some sort of back-door access for regulators? Or will it just be storing that information required of regulators, but for its clients’ use?

4a. The Bitcoin assassination markets are about to get some competition. A new variant of the Bitcoin technology will not only permit the use of Bitcoin to assassinate public figures but may very well replace–to a certain extent–the functions performed by attorneys.

“Bitcoin’s Dark Side Could Get Darker” by Tom Simonite; MIT Technology Review; 8/13/2015.

Investors see riches in a cryptography-enabled technology called smart contracts–but it could also offer much to criminals.

Some of the earliest adopters of the digital currency Bitcoin were criminals, who have found it invaluable in online marketplaces for contraband and as payment extorted through lucrative “ransomware” that holds personal data hostage. A new Bitcoin-inspired technology that some investors believe will be much more useful and powerful may be set to unlock a new wave of criminal innovation.

That technology is known as smart contracts—small computer programs that can do things like execute financial trades or notarize documents in a legal agreement. Intended to take the place of third-party human administrators such as lawyers, which are required in many deals and agreements, they can verify information and hold or use funds using similar cryptography to that which underpins Bitcoin.

Some companies think smart contracts could make financial markets more efficient, or simplify complex transactions such as property deals (see “The Startup Meant to Reinvent What Bitcoin Can Do”). Ari Juels, a cryptographer and professor at the Jacobs Technion-Cornell Institute at Cornell Tech, believes they will also be useful for illegal activity–and, with two collaborators, he has demonstrated how.

“In some ways this is the perfect vehicle for criminal acts, because it’s meant to create trust in situations where otherwise it’s difficult to achieve,” says Juels.

In a paper to be released today, Juels, fellow Cornell professor Elaine Shi, and University of Maryland researcher Ahmed Kosba present several examples of what they call “criminal contracts.” They wrote them to work on the recently launched smart-contract platform Ethereum.

One example is a contract offering a cryptocurrency reward for hacking a particular website. Ethereum’s programming language makes it possible for the contract to control the promised funds. It will release them only to someone who provides proof of having carried out the job, in the form of a cryptographically verifiable string added to the defaced site.

Contracts with a similar design could be used to commission many kinds of crime, say the researchers. Most provocatively, they outline a version designed to arrange the assassination of a public figure. A person wishing to claim the bounty would have to send information such as the time and place of the killing in advance. The contract would pay out after verifying that those details had appeared in several trusted news sources, such as news wires. A similar approach could be used for lesser physical crimes, such as high-profile vandalism.
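
The researchers’ examples run on Ethereum, but the underlying pattern, escrowed funds that release themselves only against a cryptographically verifiable proof with no human arbiter, can be sketched in ordinary Python. This toy “hashlock” commits to the hash of a proof string and pays whoever later reveals it; it is purely illustrative and not Ethereum code:

```python
import hashlib

# Toy "smart contract": escrowed funds release only to a claimant whose
# proof string hashes to the commitment baked in at creation time. No
# third party adjudicates; the hash check *is* the arbiter.
class HashlockEscrow:
    def __init__(self, commitment_hex: str, bounty: float):
        self.commitment = commitment_hex
        self.bounty = bounty
        self.paid = False

    def claim(self, proof: str) -> float:
        """Pay out once, to anyone whose proof matches the commitment."""
        digest = hashlib.sha256(proof.encode()).hexdigest()
        if self.paid or digest != self.commitment:
            return 0.0
        self.paid = True
        return self.bounty

secret = "string-planted-on-the-defaced-site"   # hypothetical proof
contract = HashlockEscrow(hashlib.sha256(secret.encode()).hexdigest(), 10.0)

print(contract.claim("wrong proof"))  # 0.0  -- verification fails
print(contract.claim(secret))         # 10.0 -- pays, no human in the loop
```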

“It was a bit of a surprise to me that these types of crimes in the physical world could be enabled by a digital system,” says Juels. He and his coauthors say they are trying to publicize the potential for such activity to get technologists and policy makers thinking about how to make sure the positives of smart contracts outweigh the negatives.

“We are optimistic about their beneficial applications, but crime is something that is going to have to be dealt with in an effective way if those benefits are to bear fruit,” says Shi.

Nicolas Christin, an assistant professor at Carnegie Mellon University who has studied criminal uses of Bitcoin, agrees there is potential for smart contracts to be embraced by the underground. “It will not be surprising,” he says. “Fringe businesses tend to be the first adopters of new technologies, because they don’t have anything to lose.”

Gavin Wood, chief technology officer at Ethereum, notes that legitimate businesses are already planning to make use of his technology—for example, to provide a digitally transferable proof of ownership of gold.

However, Wood acknowledges it is likely that Ethereum will be used in ways that break the law—and even says that is part of what makes the technology interesting. Just as file sharing found widespread unauthorized use and forced changes in the entertainment and tech industries, illicit activity enabled by Ethereum could change the world, he says.

“The potential for Ethereum to alter aspects of society is of significant magnitude,” says Wood. “This is something that would provide a technical basis for all sorts of social changes and I find that exciting.”

For example, Wood says that Ethereum’s software could be used to create a decentralized version of a service such as Uber, connecting people wanting to go somewhere with someone willing to take them, and handling the payments without the need for a company in the middle. Regulators like those harrying Uber in many places around the world would be left with nothing to target. “You can implement any Web service without there being a legal entity behind it,” he says. “The idea of making certain things impossible to legislate against is really interesting.”

4b. If you’re a former subscriber of the “Ashley Madison” website for cheating, just FYI, you might be getting a friendly email soon:

“Extortionists Are After the Ashley Madison Users and They Want Bitcoin” by Adam Clark Estes; Gizmodo; 8/21/2015.

People are the worst. An unknown number of assholes are threatening to expose Ashley Madison users, presumably ruining their marriages. The hacking victims must pay the extortionists “exactly 1.0000001 Bitcoins” or the spouse gets notified. Ugh.

This is an unnerving but not unpredictable turn of events. The data that the Ashley Madison hackers released early this week included millions of real email addresses, along with real home addresses, sexual proclivities and other very private information. Security blogger Brian Krebs talked to security firms who have evidence of extortion schemes linked to Ashley Madison data. Turns out spam filters are catching a number of emails being sent to victims from people who say they’ll make the information public unless they get paid!

Here’s one caught by an email provider in Milwaukee:

Hello,

Unfortunately, your data was leaked in the recent hacking of Ashley Madison and I now have your information.

If you would like to prevent me from finding and sharing this information with your significant other send exactly 1.0000001 Bitcoins (approx. value $225 USD) to the following address:

1B8eH7HR87vbVbMzX4gk9nYyus3KnXs4Ez

Sending the wrong amount means I won’t know it’s you who paid.

You have 7 days from receipt of this email to send the BTC [bitcoins]. If you need help locating a place to purchase BTC, you can start here…..

One security expert explained to Krebs that this type of extortion could be dangerous. “There is going to be a dramatic crime wave of these types of virtual shakedowns, and they’ll evolve into spear-phishing campaigns that leverage crypto malware,” said Tom Kellermann of Trend Micro.

That sounds a little dramatic, but bear in mind just how many people were involved. Even if you assume some of the accounts were fake, there are potentially millions who’ve had their private information posted on the dark web for anybody to see and abuse. Some of these people are in the military, too, where they’d face possible penalties for adultery. If some goons think they can squeeze a bitcoin out of each of them, there are potentially tens of millions of dollars to be made.

The word “potentially” is important because some of these extortion emails are obviously getting stuck in spam filters, and some of the extortionists could easily just be bluffing. Either way, everybody loses when companies fail to secure their users’ data. Everybody except the criminals.

5. The emergence of what is coming to be called “The Internet of Things” holds truly ominous possibilities. Not only can Big Data/Big Tech get their hooks into people’s lives to an even greater extent than they can now (see Item #1 in this description) but hackers can have a field day.

“Why Smart Objects May Be a Dumb Idea” by Zeynep Tufekci; The New York Times; 8/10/2015.

A fridge that puts milk on your shopping list when you run low. A safe that tallies the cash that is placed in it. A sniper rifle equipped with advanced computer technology for improved accuracy. A car that lets you stream music from the Internet.

All of these innovations sound great, until you learn the risks that this type of connectivity carries. Recently, two security researchers, sitting on a couch and armed only with laptops, remotely took over a Chrysler Jeep Cherokee speeding along the highway, shutting down its engine as an 18-wheeler truck rushed toward it. They did this all while a Wired reporter was driving the car. Their expertise would allow them to hack any Jeep as long as they knew the car’s I.P. address, its network address on the Internet. They turned the Jeep’s entertainment dashboard into a gateway to the car’s steering, brakes and transmission.

A hacked car is a high-profile example of what can go wrong with the coming Internet of Things — objects equipped with software and connected to digital networks. The selling point for these well-connected objects is added convenience and better safety. In reality, it is a fast-motion train wreck in privacy and security.

The early Internet was intended to connect people who already trusted one another, like academic researchers or military networks. It never had the robust security that today’s global network needs. As the Internet went from a few thousand users to more than three billion, attempts to strengthen security were stymied because of cost, shortsightedness and competing interests. Connecting everyday objects to this shaky, insecure base will create the Internet of Hacked Things. This is irresponsible and potentially catastrophic.

That smart safe? Hackers can empty it with a single USB stick while erasing all logs of its activity — the evidence of deposits and withdrawals — and of their crime. That high-tech rifle? Researchers managed to remotely manipulate its target selection without the shooter’s knowing.

Home builders and car manufacturers have shifted to a new business: the risky world of information technology. Most seem utterly out of their depth.

Although Chrysler quickly recalled 1.4 million Jeeps to patch this particular vulnerability, it took the company more than a year after the issue was first noted, and the recall occurred only after that spectacular publicity stunt on the highway and after it was requested by the National Highway Traffic Safety Administration. In announcing the software fix, the company said that no defect had been found. If two guys sitting on their couch turning off a speeding car’s engine from miles away doesn’t qualify, I’m not sure what counts as a defect in Chrysler’s world. And Chrysler is far from the only company compromised: from BMW to Tesla to General Motors, many automotive brands have been hacked, with surely more to come.

Dramatic hacks attract the most attention, but the software errors that allow them to occur are ubiquitous. While complex breaches can take real effort — the Jeep hacker duo spent two years researching — simple errors in the code can also cause significant failure. Adding software with millions of lines of code to everyday objects vastly multiplies the opportunities for such errors.

The Internet of Things is also a privacy nightmare. Databases that already have too much information about us will now be bursting with data on the places we’ve driven, the food we’ve purchased and more. Last week, at Def Con, the annual information security conference, researchers set up an Internet of Things village to show how they could hack everyday objects like baby monitors, thermostats and security cameras.

Connecting everyday objects introduces new risks if done at mass scale. Take that smart refrigerator. If a single fridge malfunctions, it’s a hassle. However, if the fridge’s computer is connected to its motor, a software bug or hack could “brick” millions of them all at once — turning them into plastic pantries with heavy doors.

Cars — two-ton metal objects designed to hurtle down highways — are already bracingly dangerous. The modern automobile is run by dozens of computers that most manufacturers connect using a system that is old and known to be insecure. Yet automakers often use that flimsy system to connect all of the car’s parts. That means once a hacker is in, she’s in everywhere — engine, steering, transmission and brakes, not just the entertainment system.
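
That “old and known to be insecure” system is the CAN bus, and its weakness is structural: CAN frames carry no sender authentication, so any node that can write to the bus can impersonate any other. A sketch using the python-can package against a Linux virtual bus; the vcan0 channel, message ID and payload are hypothetical test values, and nothing real is driven:

```python
# Why "once a hacker is in, she's in everywhere": CAN frames carry no
# sender authentication, so any node on the bus may emit any message ID.
# Uses the python-can package and a Linux *virtual* bus (vcan0); the
# 0x0C0 arbitration ID and payload below are made-up test values.
import can

bus = can.interface.Bus(channel="vcan0", bustype="socketcan")

# Nothing distinguishes this frame from one sent by a legitimate ECU:
spoofed = can.Message(arbitration_id=0x0C0,   # hypothetical message ID
                      data=[0x00, 0x00],      # hypothetical payload
                      is_extended_id=False)
bus.send(spoofed)
print("frame sent; receivers have no way to verify who sent it")
bus.shutdown()
```

Every controller on the same bus simply trusts what it hears, which is why a compromised entertainment head unit can reach the steering, transmission and brakes.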

For years, security researchers have been warning about the dangers of coupling so many systems in cars. Alarmed researchers have published academic papers, hacked cars as demonstrations, and begged the industry to step up. So far, the industry response has been to nod politely and fix exposed flaws without fundamentally changing the way they operate.

In 1965, Ralph Nader published “Unsafe at Any Speed,” documenting car manufacturers’ resistance to spending money on safety features like seatbelts. After public debate and finally some legislation, manufacturers were forced to incorporate safety technologies.

No company wants to be the first to bear the costs of updating the insecure computer systems that run most cars. We need federal safety regulations to push automakers to move, as a whole industry. Last month, a bill with privacy and cybersecurity standards for cars was introduced in the Senate. That’s good, but it’s only a start. We need a new understanding of car safety, and of the safety of any object running software or connecting to the Internet.

It may be hard to fix security on the digital Internet, but the Internet of Things should not be built on this faulty foundation. Responding to digital threats by patching only exposed vulnerabilities is giving just aspirin to a very ill patient.

It isn’t hopeless. We can make programs more reliable and databases more secure. Critical functions on Internet-connected objects should be isolated and external audits mandated to catch problems early. But this will require an initial investment to forestall future problems — the exact opposite of the current corporate impulse. It also may be that not everything needs to be networked, and that the trade-off in vulnerability isn’t worth it. Maybe cars are unsafe at any I.P.

6. We conclude by re-examining one of the most important analytical articles in a long time, David Golumbia’s article in Uncomputing.org about technocrats and their fundamentally undemocratic outlook.

“Tor, Technocracy, Democracy” by David Golumbia; Uncomputing.org; 4/23/2015.

“. . . . Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (“civic,” “ethical,” “white” and “black” hat alike), hacktivists, WikiLeaks fans [and Julian Assange et al–D. E.], Anonymous “members,” even Edward Snowden himself walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands. . . . First, [Tor co-creator] Dingledine claimed that Tor must be supported because it follows directly from a fundamental “right to privacy.” Yet when pressed—and not that hard—he admits that what he means by “right to privacy” is not what any human rights body or “particular legal regime” has meant by it. Instead of talking about how human rights are protected, he asserts that human rights are natural rights and that these natural rights create natural law that is properly enforced by entities above and outside of democratic polities. Where the UN’s Universal Declaration on Human Rights of 1948 is very clear that states and bodies like the UN to which states belong are the exclusive guarantors of human rights, whatever the origin of those rights, Dingledine asserts that a small group of software developers can assign to themselves that role, and that members of democratic polities have no choice but to accept them having that role. . . . Further, it is hard not to notice that the appeal to natural rights is today most often associated with the political right, for a variety of reasons (ur-neocon Leo Strauss was one of the most prominent 20th century proponents of these views). We aren’t supposed to endorse Tor because we endorse the right: it’s supposed to be above the left/right distinction. But it isn’t. . . .”


Discussion

16 comments for “FTR #859 Because They Can: Update on Technocratic Fascism”

  1. It looks like the Ashley Madison hack may have just claimed its first two lives:

    Reuters
    Two people may have committed suicide after Ashley Madison hack – police
    TORONTO | By Alastair Sharp

    Mon Aug 24, 2015 11:33pm IST

    At least two people may have committed suicide following the hacking of the Ashley Madison cheating website, Toronto police said on Monday, warning of a ripple effect that includes scams and extortion of clients desperate to stop the exposure of their infidelity.

    Avid Life Media Inc, the parent company of the website, is offering a C$500,000 ($379,132) reward to catch the hackers.

    In addition to the exposure of the Ashley Madison accounts of as many as 37 million users, the attack on the dating website for married people has sparked extortion attempts and at least two unconfirmed suicides, Toronto Police Acting Staff Superintendent Bryce Evans told a news conference.

    The data dump contained email addresses of U.S. government officials, UK civil servants, and workers at European and North American corporations, taking already deep-seated fears about Internet security and data protection to a new level.

    “Your actions are illegal and will not be tolerated. This is your wake-up call,” Evans said, addressing the so-called “Impact Team” hackers directly during the news conference.

    “To the hacking community who engage in discussions on the dark web and who no doubt have information that could assist this investigation, we’re also appealing to you to do the right thing,” Evans said. “You know the Impact Team has crossed the line. Do the right thing and reach out to us.”

    Police declined to provide any more details on the apparent suicides, saying they received unconfirmed reports on Monday morning.

    “The social impact behind this (hacking) – we’re talking about families. We’re talking about their children, we’re talking about their wives, we’re talking about their male partners,” Evans told reporters.

    “It’s going to have impacts on their lives. We’re now going to have hate crimes that are a result of this. There are so many things that are happening. The reality is … this is not the fun and games that has been portrayed.”

    The investigation into the hacking has broadened to include international law enforcement, with the U.S. Department of Homeland Security joining last week. The U.S. Federal Bureau of Investigation and Canadian federal and provincial police are also assisting.

    Evans also said the hacking has spawned online scams that fraudulently claim to be able to protect Ashley Madison clients’ data for a fee.

    People are also attempting to extort Ashley Madison clients by threatening to send evidence of their membership directly to friends, family or colleagues, Evans said.

    In a sign of Ashley Madison’s deepening woes following the breach, lawyers last week launched a class-action lawsuit seeking some $760 million in damages on behalf of Canadians whose information was leaked.

    Note that this is the Toronto police department reporting these two apparent suicides, so these two suicides are presumably just in the Toronto area. And with up to 37 million users compromised, it not only raises the question of just how high the final body count is going to be in the long run for this hack but also just how high it is already from suicides that haven’t yet been associated with the hack.

    It’s a grim reminder that, as more and more personal data becomes vulnerable to exploits of this nature, the more torturous and potentially lethal generic hacking effectively becomes. For instance, what if 37 million Gmail accounts got hacked and their contents were just thrown up on the dark web? A full email account hack could be just as damaging and humiliating as the Ashley Madison hack, if not far more so, because the potential range of personal information is just on a different scale, and nearly everyone these days has an email account with one of the major email services out there. Plus, unlike the Ashley Madison hack, which largely limits the damage to the individuals involved and their family members, a full email hack could end up violating very sensitive pieces of data for not just the email account owner but everyone they communicated with! It raises a rather alarming question: given the connectivity of human societies, just what percentage of the US population would be at least indirectly impacted if, say, 37 million Gmail accounts got hacked and thrown up online? How about the global populace? It seems like the impact could be pretty widely felt.

    Posted by Pterrafractyl | August 24, 2015, 3:20 pm
  2. David Golumbia points us toward an article that does a great job of summarizing one of the key hopes and dreams held by crypto-anarchists/cyberlibertarians about what bitcoin might bring to the world: the collapse of government via mass tax evasion through the use of cryptocurrencies, with government replaced by a fee-for-service taxation system run by private service providers. But don’t assume that, by subverting the ability of governments to function, the cyberlibertarians expect we would suddenly live in a world free of regulation, because that’s not exactly what the author below has in mind: “The only choice of regulation we have in terms of cryptocurrency taxation is not to try and fit it inside some existing doctrine, but to abide by their laws of finance and information freedom. We must be the one’s to conform to the regulation, not have it conform to our conventional beliefs. Bitcoin is a system which will only be governed effectively through digital law, an approach which functions solely through a medium of technology itself. It will not bend to the whim of those who still hold conventional forms of law-making as relevant today”:

    Diginomics
    Cryptocurrency Taxation May Subvert National Collection

    Travis Patron
    September 6, 2015

    As the age of cryptocurrency comes into full force, it will facilitate a subversively viable taxation avoidance strategy for many of the technically savvy users of peer-to-peer cryptographic payment systems. In doing so, cryptocurrency use will act to erode the tax revenue base of national jurisdictions, and ultimately, reposition taxation as a voluntary, pay-for-performance function. In this post, I’d like to cover some of the benefits such a strategy will have for cryptocurrency investors, why our notion of taxation is ripe for disruption, and why cryptocurrency taxation is enabled by default.

    Although investors have been lured by the siren song of tax havens for as long as governments have existed, none have existed with the legal and structural characteristics such as those found in cryptocurrency. By operating behind a veil of cybersecrecy, it is reasonable to forecast the impracticality of systemic taxation on these types of financial assets from national jurisdictions. Individual enforcement of taxation is likewise impractical due to ideological backlash governments would receive for targeting individuals who avoid national taxation via information technologies. Even so, many jurisdictions have already declared digital currency transactions (something which occurs between consenting parties on a network which no one owns) to be taxable under current legal frameworks.

    How can the state lay claim to the right to tax that which they do not issue and cannot control?

    Running The Numbers on Cryptocurrency Taxation

    It has been said that compounding interest is one of the most powerful forces in the universe. When we apply the black magic of compounding returns to the profit-maximizing actions of consumers, we see quite clearly why every user aware of the benefits of using cryptocurrency, even if only for the tax-savings, will opt to do so over traditional fiat money. The allure of avoiding the clutches of national taxation is strong enough that any rational consumer will make cryptocurrency a portion of their financial portfolio given they have the sufficient technical understanding.

    “Each $5,000 of annual tax payments made over a 40-year period reduces your net worth by $2.2 million assuming a 10% annual return on your investments,” reports James Dale Davidson in The Sovereign Individual: Mastering the Transition to the Information Age, “For high income earners in predatory tax regimes (such as the United States), you can expect to lose more of your money through cumulative taxation than you will ever earn.”

    As we explained in the report Bitcoin May Become A Global Reserve Instrument, never before has there existed a tool that can preserve economic and informational assets with such a high degree of security combined with a near-zero marginal cost to the user. This revolutionary capability of the bitcoin network does, and will continue to provide, a subversively lucrative tax super haven in direct correlation with its acceptance on a worldwide basis.

    Government Response to Cryptocurrency Taxation

    Many government agencies have already cued in to the tax avoidance potential of bitcoin and cryptocurrencies. However, it would seem that they misjudge this emerging threat looming over their precious tax coffers. The Financial Crimes Enforcement Network in the United States (FINCEN) for example, has already issued guidance on cryptocurrency taxation, yet makes a false distinction between real currency and virtual currency. FINCEN states that “In contrast to real currency, “virtual” currency is a medium of exchange that operates like a currency in some environments, but does not have all the attributes of real currency,” and later “virtual currency does not have legal tender status in any jurisdiction.” What these agencies fail to realize, is that cryptocurrency is not virtual in any sense of the word. Indeed it is as real, and perhaps even more real, than traditional fleeting fiat currencies.

    Bitcoin and cryptocurrency offer a near perfect alternative to traditional tax havens which are being tightly controlled by the new laws associated with the Foreign Account Tax Compliance Act (FATCA). In his report Are Cryptocurrencies Super Tax Havens?, Omri Marian makes clear the pressure for financial institutions who interact with the US banking system to hand over account holders, and for a crackdown on offshore tax havens with the enactment of FATCA in 2010.

    Tax policymakers seem to be operating under the faulty assumption that cryptocurrency-based economies are limited by the size of virtual economies. The only virtual aspect of cryptocurrencies, however, is their form. Their operation happens within real economies, and as such their growth potential is, at least theoretically, infinite. Such potential, together with recent developments in cryptocurrencies markets, should alert policy-makers to the urgency of the emerging problem.

    – Omri Marian, Are Cryptocurrencies Super Tax Havens?

    Current payment processors such as BitPay have recently revealed that government agencies are watching cryptocurrency transactions through the bottlenecks and exchanges where it can be tracked and traced with a high degree of transparency. It should not come to anyones surprise that governments are watching cryptocurrency nor that companies are complying with their laws, but understanding why national governments require users of the bitcoin digital economy to cut them a slice of the pie while they contribute nothing to the operation, and in many cases, hinder the adoption of this technology, remains a callus mystery.

    Governments initially attempting to control cryptocurrency taxation through the businesses and bottlenecks which it can be monitored through will meet with as much success as they have limiting file sharing, illegal downloads, and Tor operations. Cryptocurrencies have an inherent regulation, that of the law from number. Truly, bitcoin is code as law.

    Cryptocurrency Taxation By Default

    What would you say if you were told cryptocurrency taxation occurs on every transaction by default? In the realm of digital currency, the transaction fee which the user decides to (or decides not to) attach to each payment represents the taxation. This user can decide to attach a large fee or no fee at all. In doing so, the miners of the network will choose preference for the transactions with a larger fee attached, and will work to confirm these payments sooner than those with smaller fees. This transactions queue represents a voluntary, pay-for-performance taxation structure where the performance derived from the system is dependent upon how much taxation they pay.

    Algorithmic Regulation

    Cryptocurrencies have regulation built into the very nature of their existence, just not through our conventional ideas of human intervention. Because of the technological nature of cryptocurrency taxation, judicial regulations bestowed upon these types of systems will always be, to a large degree, futile. Cryptocurrencies have established their own set of rules and guidelines through the source code they are built upon, forcing legal frameworks on this type of 21st century innovation will only cause friction during its adoption phase.

    The only choice of regulation we have in terms of cryptocurrency taxation is not to try and fit it inside some existing doctrine, but to abide by their laws of finance and information freedom. We must be the one’s to conform to the regulation, not have it conform to our conventional beliefs. Bitcoin is a system which will only be governed effectively through digital law, an approach which functions solely through a medium of technology itself. It will not bend to the whim of those who still hold conventional forms of law-making as relevant today.

    Conclusion

    When we come to understand the systemic resilience to judicial intervention, it becomes quite clear that cryptocurrency taxation will remain a voluntary, pay-for-performance function of the network itself. No longer will taxation be enforced through coercion, but become a voluntary act towards increased system performance.

    Make no mistake, in a crypto-anarchist jurisdiction where there is no means to confiscate or control property on behalf of another individual, the need for the state will cease to exist. Mass taxation on digital currency is not feasible through judicial enforcement while individual enforcement is bound to prove ineffective. You, or anyone motivated to retain their net worth, will find a subversively lucrative tax haven in the realm of cryptocurrency.

    “Make no mistake, in a crypto-anarchist jurisdiction where there is no means to confiscate or control property on behalf of another individual, the need for the state will cease to exist.”
    Yes, as we saw, in a “crypto-anarchist jurisdiction”, the need for the state will cease to exist, because the people that write the rules for the code that runs the predominant digital infrastructure in the crypto-anarchist jurisdiction’s economy will become the new state. At least that’s the dream. Freeeedooom!

    Posted by Pterrafractyl | September 7, 2015, 10:32 am
  3. Just FYI, if you’ve been avoiding creating a Facebook account under the assumption that this prevents Facebook from learning private details about you, there’s a lawsuit you might want to learn more about:

    International Business Times
    Facebook Keeps Getting Sued Over Face-Recognition Software, And Privacy Groups Say We Should Be Paying More Attention

    By Christopher Zara on September 03 2015 3:49 PM EDT

    Who owns your face? Believe it or not, the answer depends on which state you live in, and chances are, you live in one that hasn’t even weighed in yet.

    That could soon change. For the fourth time this year, Facebook Inc. was hit with a class-action lawsuit by an Illinois resident who says its face-recognition software violates an unusual state privacy law there. The latest complaint, filed Monday, underscores a quiet but high-stakes legal battle for the social networking giant, one that could reverberate throughout the rest of the U.S. tech industry and much of the private sector.

    With almost 1.5 billion active users, Facebook has amassed what probably is the world’s largest private database of “faceprints,” digital scans that contain the unique geometric patterns of its users’ faces. The company says it uses these identifiers to automatically suggest photo tags. When users upload new pictures to the site, an algorithm calculates a numeric value based on a person’s unique facial features. Facebook pitches the feature as just another convenient way to stay connected with friends, but privacy and civil rights advocates say the data generated by face-recognition technology is uniquely sensitive, and requires extra special safeguards as it finds its way into the hands of private companies.
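
    [A rough sketch, for illustration only, of what the “faceprint” comparison described above involves; the embedding function is a hypothetical stand-in, since the article does not specify Facebook’s actual model:]

      import numpy as np

      def compute_embedding(face_image):
          """Placeholder for a trained face-recognition network: real systems
          map a face crop to a fixed-length numeric descriptor. Here we just
          downsample and normalize so the sketch runs end to end."""
          v = np.asarray(face_image, dtype=float).ravel()[:128]
          return v / (np.linalg.norm(v) + 1e-9)

      def suggest_tag(new_face, enrolled, threshold=0.6):
          """Return the enrolled user whose stored template is nearest to the
          new face, if that distance falls under a match threshold."""
          emb = compute_embedding(new_face)
          best, best_dist = None, threshold
          for user, template in enrolled.items():
              dist = float(np.linalg.norm(emb - template))  # Euclidean distance
              if dist < best_dist:
                  best, best_dist = user, dist
          return best  # None means "no tag suggestion"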

    “You can’t turn off your face,” said Alvaro M. Bedoya, founding executive director of Georgetown University’s Center on Privacy & Technology. “Yes, it’s 2015, and yes, we’re tracked in a million different ways, but for most of those forms of tracking, I can still turn it off if I want to.”

    Faceprints Are Mostly Unregulated

    Currently, there are no comprehensive federal regulations governing the commercial use of biometrics, the category of information technology that includes faceprints. And Bedoya said the government appears to be in no hurry to address the issue.

    Earlier this year, the Center on Privacy & Technology was one of a number of privacy-rights groups — along with the Electronic Frontier Foundation and the American Civil Liberties Union, among others — that withdrew from discussions on how to craft guidelines for face-recognition technology. After months of negotiations, Bedoya said the groups grew frustrated by tech industry trade associations that would not agree to even the most minimal of protections, including a rule that would require companies to obtain written consent before collecting and storing faceprints on consumers.

    “When not a single trade association would agree to that, we just realized we weren’t dealing with people who were there to negotiate,” Bedoya said. “We were there to deal basically with people who wanted to stop the process, or make it something that was watered down.”

    But Illinois is different. It’s one of only two states — the other being Texas — to regulate biometrics in the private sector. Illinois passed its Biometric Information Privacy Act in 2008, back when Facebook was still in its relative infancy and most companies were not thinking about face-recognition technology.

    “I think we were ahead of the curve,” said Mary Dixon, legislative director for the ACLU of Illinois, which advanced the initiative. “I think it’d be hard to pass similar initiatives now given the intense lobby against some of the protections we were able to advance.”

    Litigation Faceoff

    Illinois’ law went pretty much unnoticed until April of this year, when a high-profile privacy lawyer filed a lawsuit in federal court on behalf of a Facebook user who charges that Facebook is collecting and storing faceprints on its users without obtaining informed written consent, a violation of Illinois’ BIPA. The suit is federal because Facebook is based in California and the proposed plaintiff class potentially numbers in the millions. Since then, at least three more federal lawsuits were filed, each making similar claims. The latest suit comes from Frederick William Gullen, an Illinois resident who doesn’t even have a Facebook account, but who insists that Facebook created a template of his face when another user uploaded a photo of him.

    “Facebook is actively collecting, storing, and using — without providing notice, obtaining informed written consent or publishing data retention policies — the biometrics of its users and unwitting non-users … Specifically, Facebook has created, collected and stored over a billion ‘face templates’ (or ‘face prints’) — highly detailed geometric maps of the face — from over a billion individuals, millions of whom reside in the State of Illinois.”

    A Facebook spokeswoman said the lawsuits are without merit and the company will defend itself vigorously against them, but the reality is, the cases could play out in a number of ways given that face recognition is largely untested legal territory.

    Dixon and other legal experts familiar with BIPA say Facebook probably will argue that because its faceprints are derived from photographs, they are exempt from BIPA’s consent requirements. Shutterfly Inc., another Internet company being sued in Illinois over facial-recognition technology, is arguing a similar stance. Although BIPA clearly considers scans of “hand or face geometry” to be biometric identifiers, it also says photographs are not. Bedoya said the wording of the law raises a “seeming contradiction” that defendants fighting BIPA lawsuits might be able to exploit.

    “The law was written in a way that could have been clearer,” he said.

    Facebook points out that users can turn off tag suggestions, but Dixon said BIPA was written to ensure that biometric data collection does not take place without written consent executed by the subject of the biometric identifier. The law also makes it illegal to sell, lease or otherwise profit from a customer’s biometric information, a particular thorn in the side for companies that trade in personal data.

    The Facebook and Shutterfly lawsuits will be closely watched as policymakers in other states consider crafting bills governing the use of biometrics. Meanwhile, privacy advocates say we should all be paying attention. As face-recognition technology becomes more pervasive, it will have increasing implications for our lives, both online and off.

    “There’s an awful lot at stake here,” Bedoya said. “In the end, do we want to live in a society where everyone is identified all the time the minute they walk out into public? I think most people aren’t ready for that world.”


    “Currently, there are no comprehensive federal regulations governing the commercial use of biometrics, the category of information technology that includes faceprints. And Bedoya said the government appears to be in no hurry to address the issue.”
    No meaningful commercial facial recognition federal regulations. Huh. Imagine that.

    Posted by Pterrafractyl | September 11, 2015, 4:39 pm
  4. Guess who’s bringing that fun “use your social network to infer your credit quality” model to the developing world as part of an emerging “Big Data, small credit” paradigm for finance. It’s not a particularly hard guess:

    biznisafrica.co.za
    Omidyar Network report reveals disruption in emerging market credit business

    Oct 27, 2015
    Zweli Sikhakhane

    On 26 October 2015, Omidyar Network released Big Data, Small Credit: The Digital Revolution and Its Impact on Emerging Market Consumers, a research report that analyzes a new category of technology enterprises that are disrupting the traditional way of assessing consumer credit risk in emerging markets.

    Using non-financial data—such as social media activity and mobile phone usage patterns—complex algorithms and big data analytics are delivering a quicker, cheaper, and more effective credit assessment of consumers who lack credit histories and were invisible to lenders before.
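
    [To make the idea concrete, here is a toy sketch of alternative-data credit scoring; the feature names and the logistic-regression model are illustrative assumptions, not the report’s actual methodology:]

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Each row: [calls_per_day, avg_airtime_topup, social_contacts, bill_on_time_rate]
      X_train = np.array([
          [12, 5.0, 340, 0.9],
          [ 3, 1.0,  40, 0.2],
          [ 8, 3.5, 210, 0.7],
          [ 1, 0.5,  15, 0.1],
      ])
      y_train = np.array([1, 0, 1, 0])  # 1 = repaid a prior small loan

      model = LogisticRegression().fit(X_train, y_train)
      applicant = np.array([[10, 4.0, 150, 0.8]])
      print("estimated repayment probability:",
            model.predict_proba(applicant)[0, 1])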

    “The financial services industry is on the brink of a new era, where harnessing the power of digital information to serve new segments is becoming the new normal,” says Mike Kubzansky, Omidyar Network Partner.

    “Companies in the ‘Big Data, Small Credit’ space are an example of how this paradigm shift can unlock an entire new pool of customers for formal lenders, while helping consumers in emerging markets get the services they need to improve their lives.”

    The report explores how the digital revolution and the resulting explosion of data have converged to significantly enlarge the addressable consumer credit market for traditional and alternative lenders in developing markets. In India alone, this new approach to risk assessment can potentially bring between 100 and 160 million new customers to the consumer credit market, which would mean tripling the current addressable market for retail lenders in the country.

    “Big Data, Small Credit” also delves into the opportunities and challenges ahead for these new businesses. It shares the results of an in-depth consumer survey with early adopters in Kenya and Colombia by exploring pressing questions around privacy and trust, and provides recommendations to key stakeholders on how to reap the benefits of this new, evolving field.

    “Listening to the early adopter consumer is at the crux of realizing the potential of the ‘Big Data, Small Credit’ business,” says Arjuna Costa, Omidyar Network Investment Partner.

    “Our survey shows that consumers in emerging markets have a clear understanding of the privacy tradeoffs this type of solution entails and seven out of 10 are willing to share information they consider private in order to get a loan.”

    The consumer survey found that early adopters can articulate, differentiate between, and rank different types of private information. They are also younger, stably employed, and more educated and tech savvy than the average population of both surveyed countries—an attractive consumer segment for any lender. However, when faced with emergencies and cash-flow challenges, the large majority still resort to an informal source:

    – 88 percent of respondents in Kenya and 59 percent in Colombia go to family and friends for loans

    – 76 percent of respondents in Kenya and 34 percent in Colombia use other informal credit sources, such as pawnshops and loan sharks

    While the report indicates that it is still early days for this new business and most providers are still experimenting with algorithms, models, and data sources, both the economic and social benefits of this approach can already be ascertained. In the world’s six biggest emerging economies (China, Brazil, India, Mexico, Indonesia, and Turkey), this new technology has the potential to help between 325 and 580 million people gain access to formal credit for the first time.

    However, in order to capitalize on this opportunity, the report recommends a concerted industry effort to build an ecosystem in which these enterprises can continue to develop.

    In particular, it encourages incumbents in the financial services sector to enhance their existing risk assessment platforms with these new technologies, and advises policymakers to balance the need for consumer protection with the imperative to not regulate this nascent industry too soon.

    So, according to the Omidyar Network report:


    Using non-financial data—such as social media activity and mobile phone usage patterns—complex algorithms and big data analytics are delivering a quicker, cheaper, and more effective credit assessment of consumers who lack credit histories and were invisible to lenders before.

    “The financial services industry is on the brink of a new era, where harnessing the power of digital information to serve new segments is becoming the new normal,” says Mike Kubzansky, Omidyar Network Partner.

    “Companies in the ‘Big Data, Small Credit’ space are an example of how this paradigm shift can unlock an entire new pool of customers for formal lenders, while helping consumers in emerging markets get the services they need to improve their lives.”

    “Our survey shows that consumers in emerging markets have a clear understanding of the privacy tradeoffs this type of solution entails and seven out of 10 are willing to share information they consider private in order to get a loan.”

    “The financial services industry is on the brink of a new era, where harnessing the power of digital information to serve new segments is becoming the new normal”
    Ok, so asking to see things like your social networking data is expected to become “the new normal” for the financial services industry. Well, that’s pretty horrifying, but at least if the lenders are profiting from all that personal information, hopefully that means there will be non-exorbitant interest rates and more lenient terms in case borrowers can’t pay back their loans, especially for poor borrowers. Hopefully.

    Posted by Pterrafractyl | October 29, 2015, 7:26 pm
  5. You know that classic scene in Office Space where the hyper-consistently cheerful phone operator is asked to stop being so hyper-consistently cheerful? Yeah, there are probably going to be a lot more conversations like that in the future, and those conversations are going to be a lot more futile:

    Pando Daily
    Cogito raises $5.5m to monitor the “tone” of call center workers

    By Dan Raile

    November 17, 2015

    “This call may be monitored for quality assurance purposes.”

    Every day millions of miserable human interactions begin this way, usually pitting a customer recently escaped from “voicemail jail” against a weary phone professional running through a checklist on the screen in front of them.

    Today a cloud software outfit from Cambridge, Mass. called Cogito (“I think…”) announced it has raised a $5.5 million Series A from Romulus Capital and Salesforce Ventures to fund its mission to improve this experience for all involved.

    The Cogito solution is to pass the audio signals of the calls through voice analysis and behavioral models to give the agents and their supervisors real time feedback in a dashboard on their screen.

    “The technology is based on behavioral analytics. We can analyze all the richness of the human voice, not the words themselves but things like pitch and tone and texture and pace and overlapping – all the rich components of the human voice – using that to understand things like human intentions,” said Steve Kraus, Cogito VP of Marketing, by phone.

    The Cogito dashboard displays a continuous reading of those human qualities, giving alerts and prompts to the agent so that they can adjust their tone and approach in order to sound more empathetic and develop better rapport.
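
    [Cogito’s actual models are proprietary; the toy sketch below only illustrates the general shape of real-time prosody feedback, using crude stand-ins (signal energy for loudness, zero-crossing rate as a pitch proxy):]

      import numpy as np

      def frame_features(frame, sample_rate=8000):
          """Crude per-frame proxies: energy for loudness, zero-crossing
          rate scaled to an approximate fundamental frequency in Hz."""
          energy = float(np.mean(frame ** 2))
          crossings = np.abs(np.diff(np.sign(frame))) > 0
          pitch_est = float(np.mean(crossings)) * sample_rate / 2
          return energy, pitch_est

      def live_feedback(frames, energy_band=(0.01, 0.2), pitch_band=(80.0, 300.0)):
          """Emit a dashboard alert whenever a frame drifts outside the bands."""
          alerts = []
          for i, frame in enumerate(frames):
              energy, pitch_est = frame_features(frame)
              if not energy_band[0] <= energy <= energy_band[1]:
                  alerts.append((i, "adjust volume"))
              elif not pitch_band[0] <= pitch_est <= pitch_band[1]:
                  alerts.append((i, "adjust tone"))
          return alerts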

    “The beautiful thing is that on 100 percent of those calls we are gathering information. Traditionally, agents have these one-off, once-a-month review sessions with their supervisor, now the agents can be much more involved in the process,” said Kraus, “we can provide objective feedback on 100 percent of calls.”

    That’s a promise that will surely be music to the ears of call center workers, already amongst the most measured and monitored employees in the world. Now even their tone can be analysed: One imagines a constant feed of digital concern trolling: U mad, bro? You seem stressed. Y u mad tho?

    Still, there is at least real science at work here. Cogito began its life in the MIT Media Lab, before spinning out in 2007. Since then it has been developing its models and underlying architecture, first validating the products of this research in pilot programs funded by DARPA and the National Institutes of Health in studies that attempted to detect depression and other mental health disorders in the voice signals of patients and veterans.

    That may also come in useful in call centers; reports over the years have claimed phone professionals exhibit a high incidence of emotional fatigue. And customer service is big business. For over twenty years, the world’s biggest companies have been cutting costs by outsourcing this work, resulting in a substantial industry. In the Philippines this sector, which falls under the rubric of Business Process Outsourcing, is the fastest growing in the entire economy. It accounts for 10% of GDP and has grown fivefold to some $15 billion in revenues since the early 2000s, employing over one million people. Roughly a third of the estimated 13 million global call center employees work in the United States.

    Cogito’s Dialogue product is compatible with Avaya – one of the big software suites for call centers in the market. Kraus said that Salesforce is a “channel partner”– “they have so many deployments and this is a nice way for us to get into those deployments with them.”

    For now, Kraus says the company has its hands full “penetrating into the customer service space,” by investing in sales and marketing. From there, he said it’s natural that the technology will spread into other parts of their customers’ businesses. Will Cogito’s sales force be utilizing their own product in that process?

    “Yeah, our guys will use it,” he said.

    Throughout our phone call yesterday, Kraus seemed interested and engaged. He began his answers with validations – “that’s a good question” – and paused before speaking to make sure I had finished. He spoke in a variety of tones depending on the content of the conversation, ranging from affable to informative. It all seemed very natural, spontaneous, and authentic.

    Wow, so in addition to creating an even more depressing “Big Brother”-like workplace environment than already exists for many employees, Cogito’s product might even be able to detect depression and mental-health disorders. That’s, uh, convenient:

    Still, there is at least real science at work here. Cogito began its life in the MIT Media Lab, before spinning out in 2007. Since then it has been developing its models and underlying architecture, first validating the products of this research in pilot programs funded by DARPA and the National Institutes of Health in studies that attempted to detect depression and other mental health disorders in the voice signals of patients and veterans.

    That may also come in useful in call centers; reports over the years have claimed phone professionals exhibit a high incidence of emotional fatigue…

    With such capabilities, you have to wonder which “other parts” of Cogito’s customers’ businesses will also start getting real-time audio surveillance feedback:


    For now, Kraus says the company has its hands full “penetrating into the customer service space,” by investing in sales and marketing. From there, he said it’s natural that the technology will spread into other parts of their customers’ businesses. Will Cogito’s sales force be utilizing their own product in that process?

    Hmmm…yeah, it doesn’t look like Cogito-like technology is going to be limited to call centers…

    Posted by Pterrafractyl | November 18, 2015, 6:34 pm
  6. One of the grimly fascinating aspects of the emerging Big Data revolution in the workplace is that as employers use more and more Big Data monitoring to increase worker productivity, not only might this lead to declines in workers’ health, but that same Big Data approach might actually allow employers to track that decline in health. And now that companies are starting to experiment with hiring third-party Big Data service providers to track their employees’ health and predict which workers might get sick, using methods that include buying information on employees from third-party data brokers and scanning health insurance claims, you don’t need a lot of Big Data to predict that this is going to be a trend:

    The Wall Street Journal
    Bosses Harness Big Data to Predict Which Workers Might Get Sick
    Wellness firms mine personal information, seeking to anticipate employee health needs, minimize cost

    By Rachel Emma Silverman
    Feb. 16, 2016 6:22 p.m. ET

    Employee wellness firms and insurers are working with companies to mine data about the prescription drugs workers use, how they shop, and even whether they vote, to predict their individual health needs and recommend treatments.

    Trying to stem rising health-care costs, some companies, including retailer Wal-Mart Stores Inc., are paying firms like Castlight Healthcare Inc. to collect and crunch employee data to identify, for example, which workers are at risk for diabetes, and target them with personalized messages nudging them toward a doctor or services such as weight-loss programs.

    Companies say the goal is to get employees to improve their own health as a way to cut corporate health-care bills. But privacy advocates have concerns about such practices, which are new enough that relatively few workers are aware of them.

    “I bet I could better predict your risk of a heart attack by where you shop and where you eat than by your genome,” says Harry Greenspun, director of Deloitte LLP’s Center for Health Solutions, a research arm of the consulting firm’s health-care practice.

    An employee who spends money at a bike shop is more likely to be in good health than someone who spends on videogames, Mr. Greenspun says. Credit scores can also suggest whether an individual will be readmitted to the hospital following an illness, he says. Those with lower credit scores may be less likely to fill prescriptions and show up for follow-up appointments, adds Mr. Greenspun.

    Welltok Inc., whose clients include Colorado’s state employees, has found that people who vote in midterm elections tend to be healthier than those who skip them, says Chris Coloian, the firm’s chief solutions officer. In general, midterm voters are more mobile and more active in the community, strong indicators of overall health, he says.

    As employers more actively involve themselves in employee wellness, privacy experts worry that management could obtain workers’ health information, even if by accident, and use it to make workplace decisions.

    Federal health-privacy laws generally bar employers from viewing workers’ personal health information, though self-insured employers have more leeway, says Careen Martin, a health-care and cybersecurity lawyer at Nilan Johnson Lewis PA. Instead, employers contract with wellness firms who have access to workers’ health data.

    “There are enormous potential risks in these efforts, such as the exposure of personal health data to employers or others,” says Frank Pasquale, a law professor at the University of Maryland, who studies health privacy.

    Typically, when a company hires a firm like Castlight, it authorizes the firm to collect information from insurers and other health companies that work with the client company. Employees are prompted to grant the firm permission to send them health and wellness information via an app, email or other channels, but can opt out.

    Based on data such as an individual’s claims history, the firms can identify an individual who might be considering costly procedures like spinal surgery, and can send that person recommendations for a second opinion or physical therapy. Some firms, such as Welltok and GNS Healthcare Inc., also buy information from data brokers that lets them draw connections between consumer behavior and health needs.

    Employers generally aren’t allowed to know which individuals are flagged by data mining, but the wellness firms—usually paid several dollars a month per employee—provide aggregated data on the number of employees found to be at risk for a given condition.

    To determine which employees might soon get pregnant, Castlight recently launched a new product that scans insurance claims to find women who have stopped filling birth-control prescriptions, as well as women who have made fertility-related searches on Castlight’s health app.

    That data is matched with the woman’s age, and if applicable, the ages of her children to compute the likelihood of an impending pregnancy, says Jonathan Rende, Castlight’s chief research and development officer. She would then start receiving emails or in-app messages with tips for choosing an obstetrician or other prenatal care. If the algorithm guessed wrong, she could opt out of receiving similar messages.
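
    [A speculative reconstruction of the kind of rule just described: stopped contraceptive refills plus age and app searches. Castlight’s real scoring logic is not public, so every field and cutoff below is an assumption:]

      def likely_pregnancy_signal(claims, searches, age, today):
          """claims: [{'type': 'rx', 'drug_class': 'contraceptive',
          'date': datetime.date}, ...]; searches: in-app search strings.
          Flag when refills stopped recently and the other signals agree."""
          refills = sorted(c["date"] for c in claims
                           if c["type"] == "rx"
                           and c["drug_class"] == "contraceptive")
          stopped = bool(refills) and (today - refills[-1]).days > 90
          fertility_search = any("fertility" in s.lower() for s in searches)
          return stopped and 18 <= age <= 45 and fertility_search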

    Spinal surgery, which can cost $20,000 or more, is another area where data experts are digging in. After finding that 30% of employees who got second opinions from top-rated medical centers ended up forgoing spinal surgery, Wal-Mart tapped Castlight to identify and communicate with workers suffering from back pain.

    To find them, Castlight scans insurance claims related to back pain, back imaging or physical therapy, plus pharmaceutical claims for pain medications or spinal injections. Once identified, the workers get information about measures that could delay or head off surgery, such as physical therapy or second-opinion providers.

    To steer more J.P. Morgan Chase & Co. employees to doctors in its network, insurer Cigna Corp. analyzed claims data to identify employees who lacked primary-care physicians. Those employees got personalized messages on Cigna’s mobile app with recommendations for in-network doctors, says Michael Sturmer, a regional Cigna executive in the Northeast. Employees who had downloaded the Cigna app used in-network providers about 2% more than they did before the system was implemented in 2015.

    Some people may feel uncomfortable with the idea that their personal data is being used to predict their future. Castlight carefully test-markets its messages to try to avoid appearing too intrusive, says Mr. Rende. “Every word matters,” he says.

    Predicting health outcomes is the easy part, the firms say. The tough part is getting employees to take action—and messaging them can only do so much.

    Health-care management firm Jiff Inc. is using data to sort employees by personality type, and tailoring its approach to each type. A worker who is reluctant to participate in fitness programs, for example, might be offered richer incentives, such as a premium reduction on their health insurance, to take part.

    “Prediction with no solution isn’t very valuable,” says Derek Newell, Jiff’s chief executive. “If we can’t get people to do something, then that prediction has a value of zero.”

    “As employers more actively involve themselves in employee wellness, privacy experts worry that management could obtain workers’ health information, even if by accident, and use it to make workplace decisions.”
    Yeah, that seems like one of the obvious risks here, especially when reducing healthcare costs is the primary purpose of the service. And especially when Walmart, a company known for holding food drives for its own employees who were paid so little they were going hungry, is the company leading the way. And then there’s the fact that this is being done using third-party Big Data service providers to get around employee privacy restrictions. It seems like quite a recipe for “accidents” involving employers ‘accidentally’ finding out which employee is about to get an expensive illness and then maybe ‘accidentally’ interpreting the rest of that employee’s Big Data in a manner that leads to them getting laid off, a routine part of the workplace of the future. As the executive says at the end, “Prediction with no solution isn’t very valuable.” Well, there’s a pretty obvious solution regardless of the employee’s medical condition: fire them.

    So it looks like we’re on the cusp of a grand new early warning system for coming health maladies: when you’re suddenly fired without warning but your company appears to be in decent health, you’re probably about to suffer an expensive health crisis. Good luck! Just remember to turn in your badge on the way out.

    Posted by Pterrafractyl | February 18, 2016, 11:37 am
  7. Just FYI, if AT&T is your cellphone provider, your local billboards might be about to get a lot more persuasive:

    The New York Times
    See That Billboard? It May See You, Too

    By SYDNEY EMBER
    FEB. 28, 2016

    Pass a billboard while driving in the next few months, and there is a good chance the company that owns it will know you were there and what you did afterward.

    Clear Channel Outdoor Americas, which has tens of thousands of billboards across the United States, will announce on Monday that it has partnered with several companies, including AT&T, to track people’s travel patterns and behaviors through their mobile phones.

    By aggregating the trove of data from these companies, Clear Channel Outdoor hopes to provide advertisers with detailed information about the people who pass its billboards to help them plan more effective, targeted campaigns. With the data and analytics, Clear Channel Outdoor could determine the average age and gender of the people who are seeing a particular billboard in, say, Boston at a certain time and whether they subsequently visit a store.

    “In aggregate, that data can then tell you information about what the average viewer of that billboard looks like,” said Andy Stevens, senior vice president for research and insights at Clear Channel Outdoor. “Obviously that’s very valuable to an advertiser.”

    Clear Channel and its partners — AT&T Data Patterns, a unit of AT&T that collects location data from its subscribers; PlaceIQ, which uses location data collected from other apps to help determine consumer behavior; and Placed, which pays consumers for the right to track their movements and is able to link exposure to ads to in-store visits — all insist that they protect the privacy of consumers. All data is anonymous and aggregated, they say, meaning individual consumers cannot be identified.
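
    [What “anonymous and aggregated” amounts to in practice can be sketched as group-level reporting with a minimum-count suppression rule, a k-anonymity-style threshold; the record schema is an assumption for illustration:]

      from collections import Counter

      def billboard_audience_report(sightings, k_min=50):
          """sightings: one dict per device pass, e.g.
          {'age': 34, 'gender': 'F', 'visited_store': True}.
          Small groups are suppressed entirely to limit re-identification."""
          if len(sightings) < k_min:
              return {}
          ages = [s["age"] for s in sightings]
          return {
              "count": len(sightings),
              "avg_age": sum(ages) / len(ages),
              "gender_mix": dict(Counter(s["gender"] for s in sightings)),
              "store_visit_rate": sum(s["visited_store"] for s in sightings)
                                  / len(sightings),
          }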

    Still, Mr. Stevens acknowledged that the company’s new offering “does sound a bit creepy.”

    But, he added, the company was using the same data that mobile advertisers have been using for years, and showing certain ads to a specific group of consumers was not a new idea. “It’s easy to forget that we’re just tapping into an existing data ecosystem,” he said.

    In many ways, billboards are still stuck in the old-media world, where companies tried to determine how many people saw billboards by counting the cars that drove by. But in recent years, billboard companies have made more of an effort to step into the digital age. Some billboards, for example, have been equipped with small cameras that collect information about the people walking by. Clear Channel Outdoor’s move is yet another attempt to modernize billboards and enable the kind of audience measurements that advertisers have come to expect.

    Privacy advocates, however, have long raised questions about mobile device tracking, particularly as companies have melded this location information with consumers’ online behavior to form detailed audience profiles. Opponents contend that people often do not realize their location and behavior are being tracked, even if they have agreed at some point to allow companies to monitor them. And while nearly all of these companies claim that the data they collect is anonymous and aggregated — and that consumers can opt out of tracking at any time — privacy advocates are skeptical.

    “People have no idea that they’re being tracked and targeted,” said Jeffrey Chester, executive director of the Center for Digital Democracy. “It is incredibly creepy, and it’s the most recent intrusion into our privacy.”

    The Federal Trade Commission has brought a number of cases related to mobile device tracking and the collection of geolocation information. In 2013, the agency settled charges with the company behind a popular Android app that turned mobile devices into flashlights. The agency said the company’s privacy policy did not inform consumers that it was sharing their location information with third parties like advertisers. Last year, the agency settled charges against Nomi Technologies, a retail-tracking company that uses signals from shoppers’ mobile phones to track their movements through stores. The agency claimed that the company had misled consumers about their opt-out options.

    Clear Channel Outdoor will offer Radar in its top 11 markets, including Los Angeles and New York, starting on Monday, with plans to make it available across the country later this year.

    “Clear Channel Outdoor Americas, which has tens of thousands of billboards across the United States, will announce on Monday that it has partnered with several companies, including AT&T, to track people’s travel patterns and behaviors through their mobile phones.”
    Note that AT&T asserts that users can opt out of this tracking feature on the AT&T website and that all of the data it provides to Clear Channel is aggregated and anonymized. Let’s hope that’s true. But regardless, there’s nothing stopping Clear Channel from partnering with other location-data providers in the future, and given all the random apps out there that collect your location data even when you’re not running them, obtaining that data directly from app providers seems possible. So if you aren’t an AT&T wireless customer, and you’d also like to opt out of any location-based advertising services, it’s probably a good time to make sure the location-sharing settings on your smartphone are turned off. That said, if you really don’t want your location tracked by apps and advertisers, you probably want to get rid of that smartphone:

    CSO Online
    RSA: Geolocation shows just how dead privacy is

    By Taylor Armerding

    CSO | Mar 2, 2016 11:39 AM PT

    A regular refrain within the online security community is that privacy is dead.

    David Adler’s talk at RSA Tuesday, titled “Where you are is who you are: Legal trends in geolocation privacy and security,” was about one of the major reasons it is so, so dead.

    To paraphrase Adler, founder of the Adler Law Group, it is not so much that in today’s connected world there is a single, malevolent Big Brother watching you. It’s that there are dozens, perhaps hundreds, of “little brothers” eagerly watching you so they can sell you stuff more effectively. Collectively, they add up to an increasingly omniscient big brother.

    “Everything is gathering location data – apps, mobile devices and platforms that you use,” he said. “Often it is being done without your knowledge or consent.

    “And at same time, privacy advocates have ID’d geolocation as particularly sensitive information.”

    That, as numerous experts have been warning for some time now, is because data about where you are at all times of the day can paint an incredibly detailed and invasive picture about who you are – your political, food, religious, sexual and shopping preferences, medical conditions, job, family, friends and other relationships, and, of course, where you live.

    And, as is also well known, people make it very easy to collect that data. They essentially give it away. “A lot has to do with the shift to mobile devices,” Adler said. “What people used to do on their desktops, they now do on mobile.”

    He cited a Pew Research Center study on how people use their cell phones, which found that 40 percent used it for government services, 43 percent to research job information, 18 percent to submit job applications, 44 percent to look for real estate, 62 percent to research health conditions and 57 percent for online banking.

    “In addition to the sensitivity of the subjects, you fold in the location data, and it can become very revealing,” Adler said.

    Avoiding this is not as simple as turning off the “location services” feature in a smartphone either, he noted.

    “That is only one of several ways location data is gathered,” he said. “I was shocked at technology behind it. It is collected by the cell tower that your device talks to. Wi-Fi hotspots not only share the location, but time stamp it. Your phone logs all of it – your keyboard cache, SIM card serial number, your number, your email address. All of this can be gathered by apps, and they don’t have to ask your permission.”

    There is a growing awareness of these risks not just from privacy advocates, but from at least some government agencies as well. Adler quoted the Federal Trade Commission’s Director of the Consumer Protection Division Jessica Rich, who said two years ago that “Geolocation information divulges intimately personal details of an individual.”

    He also noted the passage of the Consumer Privacy Bill of Rights Act of 2015, along with other legislation pending.

    But it is unlikely that things will change soon in any major way. The Center for Democracy & Technology (CDT) called the consumer privacy bill “an incredibly important first step,” but also said it contains, “too many loopholes, and enforcement is lacking.”

    Adler said that is in part because the U.S. still, “has no uniform privacy laws, and enforcement is ad hoc.” He said a number of consumer complaints, “have fizzled in the courts, because they depend on very specific harm to individuals.”

    “To paraphrase Adler, founder of the Adler Law Group, it is not so much that in today’s connected world there is a single, malevolent Big Brother watching you. It’s that there are dozens, perhaps hundreds, of “little brothers” eagerly watching you so they can sell you stuff more effectively. Collectively, they add up to an increasingly omniscient big brother.”
    It’s definitely a “the whole is greater than the sum of its parts” situation when you’re talking about an array of “little brothers”. Especially when those “little brothers” are sharing information with each other. And it’s apparently possible that those “little brother” apps sitting on your smartphone just might be going the extra mile to gather your location, even when you tell them not to:


    Avoiding this is not as simple as turning off the “location services” feature in a smartphone either, he noted.

    “That is only one of several ways location data is gathered,” he said. “I was shocked at technology behind it. It is collected by the cell tower that your device talks to. Wi-Fi hotspots not only share the location, but time stamp it. Your phone logs all of it – your keyboard cache, SIM card serial number, your number, your email address. All of this can be gathered by apps, and they don’t have to ask your permission.”

    So, given that the groups that gather this data generally do it for the purpose of selling it to others, it seems like it should take just one of your “little brother” apps surreptitiously gathering that location data through unorthodox means before the rest of the data-collection and marketing industry starts getting access too. They’ll just buy it from the rogue app provider. At least it sounds like that’s possible.

    As creepy as all that sounds, keep in mind that the future of personalized billboards can always get creepier:

    Vice Motherboard
    The Billboards of the Future Are ‘Trixelated’ 3D Holograms

    Written by Becky Ferreira
    Contributor

    January 15, 2015 // 05:25 PM EST

    When Marty McFly gets dropped off in 2015 in Back to the Future II, one of the first futuristic technologies he experiences is a 3D hologram shark advertising Jaws 19. Now, it turns out that lifelike 3D displays may actually come to fruition in the next year, just as Prophet Zemeckis promised.

    A collaboration between the Austrian tech startup TriLite Technologies and the Vienna University of Technology has revealed that using tiny mirrors to reflect lasers in numerous directions can trick viewers into interpreting an image as three-dimensional.

    “The mirror directs the laser beams across the field of vision, from left to right,” explained UT Vienna computer engineer Ulrich Schmid in a statement. “During that movement the laser intensity is modulated so that different laser flashes are sent into different directions.”

    The upshot is that these mirrored 3D pixels, or “trixels” as the team calls them, could project hundreds of different images outward, as compared to the 3D movie technique, which only projects two, and requires that the viewer wear glasses. Walking around such a trixelated billboard, on the other hand, would make the image appear to be a highly resolved, three-dimensional object to the naked eye.

    Goodbye, 3D glasses. Hello, ubiquitous, laser-generated images that jump out directly at you from every angle.

    It actually gets a little creepier. According to a study authored by the TriLite/UT Vienna team and published in Optics Express, a single electronic billboard could present multiple images, which could change depending on the angle it’s viewed from.

    “Maybe someone wants to appeal specifically to the customers leaving the shop across the street, and a different ad is shown to the people waiting at the bus stop,” TriLite CEO Ferdinand Saint-Julien said in a statement. “Technically, this would not be a problem.”

    So, if you think targeted online advertising is invasive, just wait until bus stops, train cars, and roadside billboards start spamming you with commercials based on your daily habits. I’d rather have the cheesy Jaws 19 shark following me around than that.

    In the study, the team described their modest prototype version of their display, which has a trixel resolution of five by three. But the next prototype is already in the works, and the researchers are shooting to launch the display commercially as early as 2016.

    As ingenious as the concept of trixelation is, it’s discouraging to think of it solely as an advertising tool. After all, this approach could backfire in all kinds of unpredictable ways. Distracted driving has become a huge problem in the age of the Smartphone, and now we want to throw tailored, 3D ads up everywhere? It seems like a public safety nightmare, not to mention the obvious Orwellian dimension of targeting specific perspectives with different messages.

    One thing is for certain: this display’s capabilities could fundamentally change our relationship with visual media. Whether that will end up being good, bad, or somewhere in between remains to be seen.

    “The upshot is that these mirrored 3D pixels, or “trixels” as the team calls them, could project hundreds of different images outward, as compared to the 3D movie technique, which only projects two, and requires that the viewer wear glasses. Walking around such a trixelated billboard, on the other hand, would make the image appear to be a highly resolved, three-dimensional object to the naked eye.”
    That’s right, the next generation of billboards will involve highly resolved 3D objects. And when you combine location data, personal marketing data, and 3D hologram technology…“Goodbye, 3D glasses. Hello, ubiquitous, laser-generated images that jump out directly at you from every angle.” Yep, the world is about to become a personalized version of Disney’s Haunted Mansion, except instead of fun ghost holograms it will be crappy product holograms. That sounds kind of cool (yay holograms!) but also pretty creepy, which is still better than the ubiquitous location tracking, which is just plain creepy.

    So there’s a glimpse at the near-future of billboard advertising: creepily personalized holograms. The holograms themselves may or may not be creepy. But since they’ll probably be location-based personalized holograms, they’ll definitely be creepy.

    Posted by Pterrafractyl | March 9, 2016, 4:01 pm
  8. Here’s an example of the public pushing back against the endless incursion of privacy-violating smartphone technology: The US Federal Trade Commission sent warning letters to a dozen smartphone app developers who were caught using the background-noise-tracking software developed by SilverPush to secretly determine what TV shows you’re watching. The firms were told that if they don’t inform users that their apps collect TV background data, they may violate FTC rules barring unfair or deceptive acts or practices, which suggests that the FTC still isn’t quite sure whether secretly embedding SilverPush’s software in your app actually violates the FTC’s rules or not. Maybe it does violate the rules, but maybe not. At least that’s the strength of the language the FTC used in its warning.

    So let’s hope the FTC was just choosing to be polite by not using stronger language, because if not, that would suggest this was actually less a public push back and more a polite public request to app developers that doubles as an admission that the FTC still isn’t quite sure if it’s illegal:

    PC World
    FTC warns app developers against using audio monitoring software
    A dozen developers appear to have packaged TV tracking software into their products, the agency says.

    Grant Gross
    IDG News Service

    Mar 17, 2016 2:28 PM

    The U.S. Federal Trade Commission has sent warning letters to 12 smartphone app developers for allegedly compromising users’ privacy by packaging audio monitoring software into their products.

    The software, from an Indian company called SilverPush, allows apps to use the smartphone’s microphone to listen to nearby television audio in an effort to deliver more targeted advertisements. SilverPush allows the apps to surreptitiously monitor the television viewing habits of people who downloaded apps with the software included, the FTC said Thursday.

    “This functionality is designed to run silently in the background, even while the user is not actively using the application,” the agency said in its letter to the app developers. “Using this technology, SilverPush could generate a detailed log of the television content viewed while a user’s mobile phone was turned on.”
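
    [SilverPush’s actual encoding is proprietary, but the basic trick the FTC describes, detecting an inaudible tone buried in TV audio, can be sketched with the Goertzel algorithm; the beacon frequency and threshold below are assumptions:]

      import math

      def goertzel_power(samples, sample_rate, freq):
          """Signal power at a single target frequency (Goertzel algorithm)."""
          n = len(samples)
          k = int(0.5 + n * freq / sample_rate)
          w = 2.0 * math.pi * k / n
          coeff = 2.0 * math.cos(w)
          s_prev = s_prev2 = 0.0
          for x in samples:
              s = x + coeff * s_prev - s_prev2
              s_prev2, s_prev = s_prev, s
          return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2

      def beacon_present(samples, sample_rate=44100,
                         beacon_hz=18500.0, threshold=1e3):
          """True if the microphone buffer carries energy at the beacon tone."""
          return goertzel_power(samples, sample_rate, beacon_hz) > threshold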

    If the app developers state or imply that their apps do not collect or transmit television viewing data when they actually do, that may be a violation of the section of the FTC Act barring deceptive and unfair business practices, the agency said.

    The 12 developers appear to have included SilverPush code in apps available in the Google Play store, the FTC said.

    SilverPush has said its service isn’t now operating in the U.S., but it encourages app developers that package its software to notify customers about its ability to monitor TV habits should the company move into the U.S. market, the FTC said. SilverPush representatives weren’t immediately available for comment on the FTC’s letter.

    “These apps were capable of listening in the background and collecting information about consumers without notifying them,” Jessica Rich, director of the FTC’s Bureau of Consumer Protection, said in a statement. “Companies should tell people what information is collected, how it is collected, and who it’s shared with.”

    Some app developers ask for permission to use a smartphone’s microphone, even though the apps do not appear to have a need for that functionality, the FTC said. The apps apparently packaging SilverPush don’t provide users notice that they could monitor TV viewing habits, even if the app is not in use, the agency said.

    “If the app developers state or imply that their apps do not collect or transmit television viewing data when they actually do, that may be a violation of the section of the FTC Act barring deceptive and unfair business practices, the agency said.”
    Part of what’s a little disconcerting about the FTC’s warning is that it’s specifically warning against app developers “stating or implying” that their apps don’t collect this kind of data. So…what if the developers don’t say anything at all? Does that fall under the “imply” category? Let’s hope so. Here’s the specific language:


    Upon downloading and installing your mobile application that embeds Silverpush, we received no disclosures about the included audio beacon functionality — either contextually as part of the setup flow, in a dedicated standalone privacy policy, or anywhere else.

    For the time being, Silverpush has represented that its audio beacons are not currently embedded into any television programming aimed at U.S. households. However, if your application enabled third parties to monitor television-viewing habits of U.S. consumers and your statements or user interface stated or implied otherwise, this could constitute a violation of the Federal Trade Commission Act. We would encourage you to disclose this fact to potential customers, empowering them to make an informed decision about what information to disclose in exchange for using your application. Our business guidance “Marketing Your Mobile App: Get It Right From The Start” can provide additional guidance on how to make sure consumers understand your data collection and sharing practices.

    “However, if your application enabled third parties to monitor television-viewing habits of U.S. consumers and your statements or user interface stated or implied otherwise, this could constitute a violation of the Federal Trade Commission Act. We would encourage you to disclose this fact to potential customers, empowering them to make an informed decision about what information to disclose in exchange for using your application.”
    That’s, uh, sort of encouraging, although it’s not quite clear who should be encouraged.

    Posted by Pterrafractyl | March 18, 2016, 3:07 pm
  9. If you find yourself suddenly feeling stalked because the billboards in your town are tracking you personally, don’t worry, it’s not personal. They’re tracking everyone, all personally. So, actually, maybe some worry is in order:

    Chicago Tribune

    Hey, you in the Altima! Chevy billboard spots rival cars, makes targeted pitch

    By Robert Channick
    April 15, 2016, 5:03 AM

    Drivers along a busy Chicago-area tollway may have recently noticed a large digital billboard that seems to be talking directly to them.

    It is.

    Launched last month here as well as in Dallas and New Jersey, the eerily Orwellian outdoor campaign for Chevy Malibu uses vehicle recognition technology to identify competing midsize sedans and instantly display ads aimed at their drivers.

    Cruising along in an Altima? The message might be “More Safety Features Than Your Nissan Altima.” Driving a Ford Fusion or Toyota Camry? You might see a miles-per-gallon comparison between the Malibu and your car. The ads last just long enough for approaching drivers of those vehicles to know they got singled out and served by a billboard.

    Consumers used to receiving personalized ads on their smartphones may be surprised to see one on a 672-square-foot highway billboard. But data-based technology is finding its way into digital outdoor displays of all types, enabling advertisers to track, reach and sell you stuff — even at 55 mph.

    “This is just the tipping point of the disruption in out-of-home,” said Helma Larkin, CEO of Posterscope, an out-of-home communications agency that designed the Malibu campaign with billboard company Lamar Advertising. “The technology coming down the pike is fascinating around what we could potentially do to bring digital concepts into the physical world.”

    Out-of-home advertising, which includes billboards, bus shelters, mall kiosks and other public platforms, is seeing growth fueled by such digital innovation. Spending on outdoor advertising rose 4.6 percent last year to $7.3 billion, according to the Outdoor Advertising Association of America, the industry’s national trade organization.

    There are about 370,000 billboards in the U.S., most of which still deliver a large static message to motorists the old-fashioned way — with posters or paint. Digital billboards — giant TV screens that generally rotate in new messages every 8 seconds or so — number about 6,400 nationwide, and are gaining traction.

    “There’s a good growth trend that we’ve seen over the past of the digital roadside inventory,” said Stephen Freitas, chief marketing officer of the outdoor advertising trade association. “Several hundred new locations are built every year.”

    The Malibu campaign has taken over a 14-by-48-foot Lamar digital billboard facing east along the Reagan Memorial Tollway (Interstate 88) at Eola Road, near the Chicago Premium Outlets mall in Aurora. Watching traffic 24/7, a separate camera mounted 1,000 feet ahead of the billboard scans for vehicle grilles. When it recognizes a Fusion, Camry or Altima, the billboard shifts from a generic Malibu ad to a competitor-specific one.

    The billboard takes into account the speed of traffic to calculate the precise moment to pull the trigger on the personalized message, giving those drivers 7 seconds of highway fame that is equal parts big data and Big Brother, and perhaps the future of out-of-home advertising.
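
    [The timing arithmetic here is simple enough to sketch: with the camera a known distance upstream, the trigger delay is distance over speed. The 1,000-foot figure is from the article; the display-latency allowance is an assumption:]

      def trigger_delay_seconds(speed_mph, camera_to_board_ft=1000.0,
                                display_latency_s=0.5):
          """Seconds to wait after detection before swapping in the targeted ad."""
          feet_per_second = speed_mph * 5280.0 / 3600.0  # mph -> ft/s
          return max(0.0, camera_to_board_ft / feet_per_second - display_latency_s)

      # At 55 mph: 1000 ft / 80.7 ft/s is about 12.4 s, so the board swaps
      # the ad roughly 11.9 s after the grille is recognized.
      print(round(trigger_delay_seconds(55.0), 1))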

    “It’s really innovative because it is able to inform the creative on the fly as the car goes by, and that’s really bringing online techniques to the offline world,” Larkin said.

    Interactive outdoor digital displays have been making a splash for several years at the pedestrian level, such as a 2014 bus shelter campaign promoting a Harry Houdini TV miniseries that challenged Chicago commuters to hold their breath for three minutes — duplicating one of the magician’s legendary tricks.

    Serving the same sort of targeted ads that consumers receive on their smartphones to a giant billboard, however, represents a leap in the digital evolution of outdoor advertising, and a bold new canvas that is sure to grab attention.

    “Most people think it’s really cool,” Larkin said. “Consumers are much more attracted to an ad and are much more prone to take notice of it when it relates to them and the environment that they’re in as opposed to a blanket statement.”

    Others fear targeted billboard advertising represents yet another digital assault on privacy. And unlike mobile apps, consumers can’t opt out of the billboard’s prying eyes.

    “It’s the beginning of a digitally driven, intelligent, outdoor spying apparatus that captures all your details in order to advertise and market to you,” said Jeffrey Chester, executive director of the Center for Digital Democracy, a Washington-based nonprofit focused on consumer protection and privacy issues. “It’s a mistake to think it’s just an outdoor ad.”

    Billboards are already being used to track consumers. Clear Channel Outdoor Americas, for example, is using aggregated mobile data to identify drivers passing its Chicago billboards and their shopping habits.

    Larkin said the Malibu billboard camera is only capturing the grilles of the autos to identify the competitive brands, yielding less data than a typical online session.

    “They don’t take pictures of peoples’ faces — it’s blurred out by the technology, the license plate is blurred,” Larkin said. “We are picking up the make and model of the cars.”

    For Chester, promises of aggregated anonymity fall on deaf ears. He is convinced that the Malibu billboard is already learning more about the passing drivers than they realize.

    “You might be able to ignore a billboard, but this is a billboard that is going to know you,” Chester said.

    “This is just the tipping point of the disruption in out-of-home,” said Helma Larkin, CEO of Posterscope, an out-of-home communications agency that designed the Malibu campaign with billboard company Lamar Advertising. “The technology coming down the pike is fascinating around what we could potentially do to bring digital concepts into the physical world.”
    Well, yes, it is pretty fascinating. Of course there are other ways to describe this trend in bringing digital concepts to the physical world:


    “It’s the beginning of a digitally driven, intelligent, outdoor spying apparatus that captures all your details in order to advertise and market to you,” said Jeffrey Chester, executive director of the Center for Digital Democracy, a Washington-based nonprofit focused on consumer protection and privacy issues. “It’s a mistake to think it’s just an outdoor ad.”

    Won’t it be fun when this technology hits the fashion industry? “Hey, you in the frumpy dress. Here’s a wardrobe (that won’t make the billboards yell at you in public).” Now that’s going to be customer service!

    Also keep in mind that the billboard system described above is supposedly blocking all personally identifying information, like license plates and windshield shots that could be used to identify the actual occupants of a car and deliver even more personalized ads in public spaces:


    “They don’t take pictures of people’s faces — it’s blurred out by the technology, the license plate is blurred,” Larkin said. “We are picking up the make and model of the cars.”

    So let’s hope that’s actually the case and this firm really is systematically preventing itself from collecting any personally identifying data. That would be nice. And who knows if that’s actually true. But if it is the case, it’s probably just a matter of time before it is no longer the case. It’s also worth keeping in mind that even if these new companies aren’t scanning your actual license plate and compiling a database of your vehicle’s movements, plenty of other companies already are:

    Car and Driver

    Screen-Plate Club: How License-Plate Scanning Compromises Your Privacy
    You’ve probably been tagged at the office, at a mall, or even in your own driveway.

    Oct 2014 By CLIFFORD ATIYEH

    Towing companies are a necessary evil when it comes to parking enforcement and property repossession. But in the Google Earth we now inhabit, tow trucks do more than just yank cars out of loading zones. They use license-plate readers (LPRs) to assemble a detailed profile of where your car will be and when. That’s an unnecessary evil.

    Plate readers have long been a tool of law enforcement, and police officers swear by them for tracking stolen cars and apprehending dangerous criminals. But private companies, such as repo crews, also photograph millions of plates a day, with scanners mounted on tow trucks and even on purpose-built camera cars whose sole mission is to drive around and collect plate scans. Each scan is GPS-tagged and stamped with the date and time, feeding a massive data trove to any law-enforcement agency—or government-approved private industry—willing to pay for it.

    You’ve probably been tagged at the office, at a mall, or even in your own driveway. And the companies that sell specialized monitoring software that assembles all these sightings into a reliable profile stand to profit hugely. Brian Hauss, a legal fellow for the American Civil Liberties Union (ACLU), says: “The whole point is so you can figure out somebody’s long-term location. Unless there are limits on how those transactions can be processed, I think it’s just a matter of time until there are significant privacy violations, if they haven’t already occurred.”

    How Is This Even Legal? License-plate-reader companies don’t have access to DMV registrations, so while they can track your car, they don’t know it’s yours. That information is guarded by the Driver’s Privacy Protection Act of 1994, which keeps your name, address, and driving history from public view. Mostly. There are plenty of exceptions, including for insurance companies and private investigators. LPR companies say only two groups can use their software to find the person behind the plate: law-enforcement agencies and repossession companies. In addition, the encrypted databases keep a log of each plate search and allow the ability to restrict access.

    The companies that push plate readers enjoy unregulated autonomy in most states. Vigilant Solutions of California and its partner, Texas-based Digital Recognition Network, boast at least 2 billion license-plate scans since starting the country’s largest private license-plate database, the National Vehicle Location Service, in 2009.

    In total, there are at least 3 billion license-plate photos in private databases. Since many are duplicates and never deleted, analytics can paint a vivid picture of any motorist. Predicting where and when someone will drive is relatively easy; software can sort how many times a car is spotted in a certain area and, when fed enough data, can generate a person’s driving history over time.
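
    (An illustrative aside: “generating a person’s driving history” from GPS-tagged, timestamped scans is little more than grouping and sorting. A minimal sketch follows; the record layout and every field name are assumptions for illustration, not any vendor’s actual schema.)

        # Hypothetical sketch of the aggregation the article describes:
        # each scan is a (plate, timestamp, location) record, and grouping
        # the records by plate yields a driving history.
        from collections import defaultdict
        from dataclasses import dataclass
        from datetime import datetime

        @dataclass
        class PlateScan:
            plate: str
            seen_at: datetime
            lat: float
            lon: float

        def driving_history(scans):
            """Group scans by plate, each plate's sightings sorted in time."""
            history = defaultdict(list)
            for scan in scans:
                history[scan.plate].append(scan)
            for sightings in history.values():
                sightings.sort(key=lambda s: s.seen_at)
            return history

        def sightings_near(history, plate, lat, lon, radius_deg=0.01):
            """Count how often a plate was spotted near a point (crude box match)."""
            return sum(1 for s in history.get(plate, [])
                       if abs(s.lat - lat) < radius_deg and abs(s.lon - lon) < radius_deg)

        # Two scans of the same plate at the same corner on consecutive
        # mornings already sketch a commuting pattern.
        scans = [PlateScan("ABC123", datetime(2014, 9, 1, 8, 30), 41.88, -87.63),
                 PlateScan("ABC123", datetime(2014, 9, 2, 8, 32), 41.88, -87.63)]
        print(sightings_near(driving_history(scans), "ABC123", 41.88, -87.63))  # -> 2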

    You Can’t Run, But They Can Hide

    An average license-plate reader looks like four radar detectors, stacked two-by-two. But they aren’t always easy to spot. Both cops and private users hide LPRs in almost anything.

    And the systems are getting smarter quickly. Vigilant alone adds 100 million photos every month, but company marketing vice president Brian Shockley says the word “tracking” is misleading. LPRs, he says, capture “momentary, point-in-time” information.

    Scott Jackson, CEO of data provider MVTrac, contends that license-plate readers just automate what police officers and repo men have always done—run plates by eye—and that most Ameri­cans have accepted that the days of having true privacy are gone.

    “The pros of this technology far, far outweigh the fear factor of privacy,” he says, referring to its successful police busts. “There are so many ways to track a person; this is not the one you should be worried about.”

    Hauss of the ACLU disagrees. He asks, “Is it just so you can have a giant haystack that you can search whenever you want, for whatever purpose you want?”

    Paul Kulas, president of Colorado-based BellesLink, which sells verification software to repo companies, says his industry needs to face these public concerns before it’s “lumped in with the surveillance state.” Some privacy statements by plate-reader companies, he says, have been misleading.

    Kulas believes that the idea that LPR data cannot be linked to personal information is inaccurate. “Without regulation and without foresight,” he says, “this could get to a point where numerous lawsuits could be brought against lenders and camera companies because they have, in effect, obtained our location information without our permission.”

    As ominous as their private-sector deployment is, LPRs have incited controversy with their law-enforcement usage as well. In December 2013, the city of Boston suspended its LPR program after police accidentally revealed DMV-tied information from its cameras to the Boston Globe. While that one incident highlighted failings in the department’s data policy, plenty of agencies don’t even have such a thing. Some keep data for days, others for years. In most states, police can monitor you with LPRs without serving a search warrant or court order. And this February, a Department of Homeland Security proposal for a privately hosted federal plate-tracking system was scrapped days after the Washington Post exposed it.

    Last year, police in Tempe, Arizona, refused an offer from Vigilant for free LPR cameras. The catch: Every month, officers would have to serve 25 warrants from a list supplied by Vigilant. Miss the quota, lose the cameras. Such lists, according to the Los Angeles Times investigation that uncovered the offer, commonly come from debt-collector “warrants” against drivers with unpaid municipal fines.

    Eventually, police and repo men might not be the only customers buying LPR data. MVTrac recently completed a beta test that tracked Acuras at specific areas and times, logging info including the exact models and colors. That information, far more real-time than state-registration data, could be gold to automakers, marketers, and insurance companies.

    There has been pushback. Nine states have passed LPR laws, and four of those states bar private companies such as Vigilant from operating or selling their wares. Some of those states limit usage to legitimate investigations by police and traffic agencies. And some set standards for data security and establish formal processes (such as requiring warrants) and public audits.

    In 2007, New Hampshire was the first to ban LPRs completely except for toll collections and security on certain bridges. Maine answered in 2009 with a less restrictive law, followed by California, Arkansas, Utah, Vermont, Florida, Tennessee, and Maryland. In Utah, legislators banned private companies from using LPRs but amended the law after Vigilant and Digital Recognition Network sued the state, claiming the ban violated their First Amendment rights to public photography and free speech. After helping to kill a similar bill in California this past May, the companies are now suing Arkansas, which followed Utah’s original letter in restricting LPRs to police use. At least nine states have pending bills that regulate plate readers.

    As with many technologies, license-plate readers are advancing at a rate that is outpacing legislation. Smaller cameras; smartphone apps that can pick out plates from live video; and the ­potential fusion of public records, DMV databases, and facial-recognition software are already on the horizon. Because police ostensibly use LPRs for public safety, drivers will likely have to accept some erosion of their privacy behind the wheel. But when corporations start buying tracking data in the name of “customer focus” and lawmakers look the other way, we say it’s time to bring on the James Bond–style plate flippers.

    “Eventually, police and repo men might not be the only customers buying LPR data. MVTrac recently completed a beta test that tracked Acuras at specific areas and times, logging info including the exact models and colors. That information, far more real-time than state-registration data, could be gold to automakers, marketers, and insurance companies.”
    So we have billboard companies scanning cars for targeted ads, but apparently not scanning the license plates and car occupants that could make those ads much more targeted. And we also have a vast and growing industry of companies scanning license plates for the express purpose of identifying who owns those vehicles and building a database that could be sold to who knows who. Huh.

    So will the billboard companies eventually buy the license plate data from the LPR industry, or will they just collect the data from their billboard cameras, join the industry, and add even more information to this growing private commercial surveillance sector? Both seem like obvious options, and they aren’t mutually exclusive. It’s a reminder that, in our outdoor commercial surveillance-state future, when you see a personalized ad, you aren’t just experiencing a possible privacy violation. You’re also helping make your future privacy violations more personalized. But since this is happening to everyone, at least you won’t have to take it personally. It could be worse! Silver linings aren’t the best in the Panopticon.

    Posted by Pterrafractyl | April 16, 2016, 4:09 pm
  10. The Wall Street Journal has a recent article where three different experts are asked about employers’ growing interest in utilizing data gathered from employee “wearables” and other types of Big Data. Not surprisingly, the opinions from the three experts range from ‘this is a scary trend with major potential for privacy invasion’ from John M. Simpson, director of the Privacy Project at the nonprofit advocacy group Consumer Watchdog, to ‘this is potentially scary but potentially useful too’ from Edward McNicholas, co-leader of privacy, data security and information law at law firm Sidley Austin LLP, all the way to ‘you won’t be able to compete in the job market unless you agree to generate and hand over this data because you won’t be productive enough without it’ from Chris Brauer, director of innovation and senior lecturer at the Institute for Management Science at Goldsmiths, University of London.

    It’s an expected spectrum of opinions for a topic like this, but it’s also worth keeping in mind that it’s a non-mutually-exclusive set of opinions: Big Data from employee wearable tech could, of course, have some legitimate uses. It could also lead to a horribly abusive, invasive, and coercive nightmare situation for employees. But that nightmare potential is no reason to believe that employees won’t effectively be forced to submit to pervasive wearable surveillance that includes their activity outside of work, like hours of sleep.

    So get worried about Big Data rewriting the employer/employee contract to include pervasive surveillance at and away from the office. And since ours is a civilization which often does that which you should be deeply worried about, get ready too:

    The Wall Street Journal

    How Should Companies Handle Data From Employees’ Wearable Devices?
    Wearables at work allow employers to track productivity and health indicators—and pose tricky privacy issues

    By Patience Haggin
    May 22, 2016 10:00 p.m. ET

    Wearable electronics, like the Fitbits and Apple Watches sported by runners and early adopters, are fast becoming on-the-job gear. These devices offer employers new ways to measure productivity and safety, and allow insurers to track workers’ health indicators and habits.

    For employers, the prospect of tracking people’s whereabouts and productivity can be welcome. But collecting data on employees’ health—and putting that data to work—can trigger a host of privacy issues.

    The Wall Street Journal asked John M. Simpson, director of the Privacy Project at the nonprofit advocacy group Consumer Watchdog; Chris Brauer, director of innovation and senior lecturer at Goldsmiths, University of London; and Edward McNicholas, co-leader of privacy, data security and information law at law firm Sidley Austin LLP, to weigh in on how companies should handle data collected from wearables. Here are edited excerpts of their discussion.

    Do I have to?

    WSJ: Should employers be able to require their employees to wear wearables?

    MR. BRAUER: It’s about a social contract between employer and employee. It’s in nobody’s interest to have overworked, stressed and anxious employees who often aren’t even aware of their own condition. Making things visible is a good thing if there is a culture of trust and accountability.

    The real challenge is in productivity and performance. Sport science has evolved remarkably in the last 10 years, and we can expect the same from management science.

    Is it reasonable for a team to expect a football player to wear a sensor in his shirt to monitor granular movement and injury susceptibility—things that video, psychologists and pitchside observers just don’t pick up? Nowadays you can’t compete at top-level sport without this kind of wearable insight and analytics.

    In the near future we’ll see the same kind of thing in all fields of endeavor. In most fields it may be a similar question, not so much of whether you should be able to require wearables as whether you can compete without them.

    MR. SIMPSON: Wearables that provide health data about an individual provide deeply personal information. Requiring an employee to wear such a device is an Orwellian overreach and an unjustified invasion of privacy.

    Another issue to consider is just how accurate the data such devices provide actually turns out to be. There are serious questions about the accuracy of many of the apps that power these devices. Making decisions about people based on their private information is bad enough. Worse would be making decisions based on private health data that was wrong.

    I don’t see how there is a legitimate place for mandatory health wearables in the workplace. Moreover, their required use would undermine employee morale, likely having a negative impact on productivity.

    If the employer makes the case for access to some of the data and the employee agrees, that is a different situation. The problem is that the employee might feel under great pressure to agree to the use of their data. If data is shared on a voluntary basis, there must be provisions in place so there is no coercion.

    MR. MCNICHOLAS: Some wearables will protect workers from radiation, accidents, particulate matter in their lungs. Such health protections should be treated differently.

    Performance monitoring, however, raises other issues. Transparency and reasonableness strike me as key. Employers should be mandated to be transparent with their employees and to let the employees make the choice about whether it is reasonable.

    When nobody is being harmed by a wearable, I think we have to acknowledge that the equation is different and leans toward more liberal use of wearables.

    The danger of discrimination

    WSJ: Here’s an example of how employers might use data from wearables: Imagine a company’s sales representatives wear trackers that measure sleep hours and quality. The boss has access to this data, and can use it to inform decisions.

    Studying the sleep patterns of sales reps Jack and Jill, he notices that Jill slept well last night and Jack did not. He decides that Jill will make that afternoon’s client pitch, since the data gives him more confidence in her ability to perform that afternoon. Is this appropriate?

    MR. BRAUER: If there is a very strong historical correlation between Jack and Jill’s sleeping patterns and their sales performance, then it makes sense to make a strategic decision to send one or the other into a big pitch using this data point. This assumes that Jack and Jill have volunteered or are contracted to wear the fitness tracker with the knowledge that the data from the device may be used by management to make these kinds of strategic resource-allocation decisions.

    You’d also like to see organizations that understand sleep quality as a predictor of performance incorporating this into health and well-being strategies for their workforce—offering sleep training, for example, or sharing knowledge around anonymized and aggregated data.

    We are also going to see lots of examples of individual employees developing biometric curricula vitae that indicate their productivity and performance under certain conditions, and they can use this to lobby employers or apply for jobs. So if the job requires high performance under stressful conditions, you can demonstrate in your data how you have performed under stressful conditions in the past. This primary data can potentially be a very reliable predictor of future performance.

    MR. SIMPSON: Given that different people require different amounts of sleep, it would be difficult for any manager to make meaningful decisions about which employee to send on a client pitch based on how much sleep they had. I’d think past job performance and results would be much more useful.

    Using private health data to apply for jobs would open the door to all sorts of unfair discrimination. A question: In this predicted world of wearable fitness devices in the workplace, would managers and executives be expected to share their private health data with employees?

    MR. MCNICHOLAS: In some situations, employers need to know health information about employees in order to keep them safe. Employees operating dangerous machinery should also have some obligation to share with their employer whether they are under the influence of medicines that may impact their ability to do their job safely. The safety concerns here are often at least as much about other workers, customers, and the general public as they are about the health of the particular employee.

    Perhaps an employer could get the same result by giving employees an incentive payment or award if they opt into sharing sleep patterns and hit their sleep goals.

    The potential for discrimination against persons with physical or mental differences must be kept in mind. If the results of this sort of tracking led to discrimination against persons with conditions ranging from insomnia or depression or ADHD, the program would need to be reformed. White House and Federal Trade Commission reports on big data have highlighted the potential for the new world of big data to lead to such results.

    The rubber will hit the road when we have artificial intelligence analyzing the massive data sets that will be created by the information coming from these wearable devices. To my mind, we should not deny ourselves the potential benefits of these technologies by banning them, but we must keep a critical eye on particular implementations of such technologies in order to ensure that they do not become new ways of discriminating against people based on any number of illegal and illicit criteria.

    “We are also going to see lots of examples of individual employees developing biometric curricula vitae that indicate their productivity and performance under certain conditions, and they can use this to lobby employers or apply for jobs. So if the job requires high performance under stressful conditions, you can demonstrate in your data how you have performed under stressful conditions in the past. This primary data can potentially be a very reliable predictor of future performance.”
    Yes, it’s time to start collecting that data for your biometric CV. And while this might not be the best thing to add to your new biometric CV, if you happen to be wearing a Fitbit heart rate tracker while reading this article and your heart rate didn’t spike, that is sort of a useful piece of data. Maybe you could read all sorts of articles about the emerging Orwellian employer surveillance state and show a nice, steady heart rate that doesn’t indicate any distress. Future employers would probably love seeing something like that on your biometric CV. No cheating.

    Posted by Pterrafractyl | May 26, 2016, 2:57 pm
  11. Check out the fun ‘bug’ in the new smash hit Pokemon Go app that’s already been downloaded by millions of people since its recent release. It sounds like the company, Niantic, a Google spinoff, has already fixed the bug. But as is apparent from the fact that they had to fix the bug, it’s a ‘bug’ that all sorts of app developers can presumably utilize: If you signed into the app using your Google Account on an iOS device, it’s possible that Niantic could get complete access to ALL your Google Account information, including your emails:

    BuzzFeed

    You Should Probably Check Your Pokémon Go Privacy Settings

    The company behind the game is collecting players’ data. And it’s most definitely catching them all.

    Originally posted on Jul. 11, 2016, at 1:38 p.m. Updated on Jul. 12, 2016, at 1:20 p.m.

    Joseph Bernstein
    BuzzFeed News Reporter

    UPDATE: In a statement attached to the first patch to the game, released today, Niantic said it “Fixed Google account scope.” iOS users who sign out and back into the game with Google will see the below screen, with the two permissions the game now requires: Google User ID and email address.

    In the five frenzied days since its American release, Pokémon Go has become an economic and cultural sensation. Downloaded by millions, the game has boosted Nintendo’s market value by $9 billion (and counting), made a major case for augmented reality as the gaming format of the future, and led to a plethora of strange, scary, and serendipitous real-life encounters.

    Like most apps that work with the GPS in your smartphone, Pokémon Go can tell a lot of things about you based on your movement as you play: where you go, when you went there, how you got there, how long you stayed, and who else was there. And, like many developers who build those apps, Niantic keeps that information.

    According to the Pokémon Go privacy policy, Niantic may collect — among other things — your email address, IP address, the web page you were using before logging into Pokémon Go, your username, and your location. And if you use your Google account for sign-in and use an iOS device, unless you specifically revoke it, Niantic has access to your entire Google account. That means Niantic could have read and write access to your email, Google Drive docs, and more. (It also means that if the Niantic servers are hacked, whoever hacked the servers would potentially have access to your entire Google account. And you can bet the game’s extreme popularity has made it a target for hackers. Given the number of children playing the game, that’s a scary thought.) You can check what kind of access Niantic has to your Google account here.

    It also may share this information with other parties, including the Pokémon Company that co-developed the game, “third-party service providers,” and “third parties” to conduct “research and analysis, demographic profiling, and other similar purposes.” It also, per the policy, may share any information it collects with law enforcement in response to a legal claim, to protect its own interests, or stop “illegal, unethical, or legally actionable activity.”

    Now, none of these privacy provisions are of themselves unique. Location-based apps from Foursquare to Tinder can and do similar things. But Pokémon Go’s incredibly granular, block-by-block map data, combined with its surging popularity, may soon make it one of, if not the most, detailed location-based social graphs ever compiled.

    And it’s all, or mostly, in the hands of Niantic, a small augmented reality development company with serious Silicon Valley roots. The company’s origins trace back to the geospatial data visualization startup Keyhole, Inc., which Google acquired in 2004; it played a crucial role in the development of Google Earth and Google Maps. And though Niantic spun off from Alphabet late last year, Google’s parent company is still one of its major investors, as is Nintendo, which owns a majority stake in The Pokémon Company. Indeed, Google still owned Niantic when the developer released its first game, Ingress, which is what Niantic used to pick the locations for Pokémon Go’s ubiquitous Pokéstops and gyms.

    Citing CEO John Hanke’s travel plans, a representative from Niantic was not able to clarify to BuzzFeed News if the company will share location data with Alphabet or Nintendo. A Google representative forwarded BuzzFeed News’ request for comment to Niantic.

    However, in a statement to Gizmodo Monday night, Niantic said they started working on a fix and verified with Google that nothing beyond basic profile information had been accessed.

    We recently discovered that the Pokémon GO account creation process on iOS erroneously requests full access permission for the user’s Google account. However, Pokémon GO only accesses basic Google profile information (specifically, your User ID and email address) and no other Google account information is or has been accessed or collected.

    Once we became aware of this error, we began working on a client-side fix to request permission for only basic Google profile information, in line with the data that we actually access. Google has verified that no other information has been received or accessed by Pokémon GO or Niantic.

    Google will soon reduce Pokémon GO’s permission to only the basic profile data that Pokémon GO needs, and users do not need to take any actions themselves.

    Given the fact that Pokémon Go already attracted the attention of law enforcement, it seems likely that at some point police will try to get Niantic to hand over user information. And if Google’s track record is any indication — a report earlier this year showed that the company complied with 78% of law enforcement requests for user data — they are probably prepared to cooperate.

    “Now, none of these privacy provisions are of themselves unique. Location-based apps from Foursquare to Tinder can and do similar things. But Pokémon Go’s incredibly granular, block-by-block map data, combined with its surging popularity, may soon make it one of, if not the most, detailed location-based social graphs ever compiled.”

    Wow. ‘Accidentally’ gaining full access to your Google account and all your emails is a thing smartphone app makers do these days. And while it’s likely Niantic really did make that bug fix (a Google spinoff probably doesn’t need access to your emails), it seems like this has got to be a wildly popular ‘bug’ for app makers. They can gain full access to your Google account simply by adding a “sign in with Google” option.

    Keep in mind that there’s currently some confusion as to what exactly giving “full account access” to an app entails, and it’s possible that it wouldn’t give access to things like emails. But even if it’s not your email content but instead almost all the other content in your Google account, that’s still potentially an immense amount of personal content. And now that Pokemon Go has made sure the world is aware of these kinds of security issues, we can be pretty sure there are going to be a lot more apps offering a nice, convenient Google account log-in option in the future.
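
    Since the fix ultimately came down to OAuth scopes, it’s worth seeing how small the difference is on the developer’s side. Below is a minimal sketch using Python’s requests_oauthlib library and Google’s OAuth 2.0 authorization endpoint, showing an app requesting only narrow sign-in scopes. The client ID and redirect URI are placeholders, and the whole thing is an illustration of the general mechanism, not Niantic’s actual code.

        # Minimal sketch: requesting only narrow Google sign-in scopes.
        # "openid" and "email" are standard Google OAuth 2.0 scopes that
        # cover a user ID and email address -- the two permissions the
        # patched game reportedly requests. The client ID and redirect
        # URI are placeholders.
        from requests_oauthlib import OAuth2Session

        AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

        session = OAuth2Session(
            client_id="YOUR_CLIENT_ID",                   # placeholder
            redirect_uri="https://example.com/callback",  # placeholder
            scope=["openid", "email"],
        )
        authorization_url, state = session.authorization_url(AUTH_ENDPOINT)
        # The consent screen generated from this URL lists exactly the
        # scopes requested, which is what makes it worth reading.
        print(authorization_url)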

    So, yeah, you might want to double check those third-party app permissions.

    Posted by Pterrafractyl | July 12, 2016, 2:44 pm
  12. Just FYI, if you’re an employee in the US and your employer is offering free FitBits or some other ‘wearable’ technology that streams basic health data like heart rate or steps taken each day as part of some sort of new employee fitness plan, you might want to make sure that the plan is associated with your employer’s health insurance plan, which means the data collected would at least have federal HIPAA protection. Because if that fancy free FitBit doesn’t have HIPAA protection, that heart rate data is going to be telling who knows who a lot more about you than just your heart rate, and what it’s telling those unknown third parties might not be remotely accurate:

    Slate

    There’s No Such Thing as Innocuous Personal Data

    Why you should keep your heart rate, sleep patterns, and other seemingly boring info to yourself.

    By Elizabeth Weingarten
    Aug. 8 2016 7:28 AM

    It’s 2020, and a couple is on a date. As they sip cocktails and banter, each is dying to sneak a peek at the other’s wearable device to answer a very sensitive question.

    What’s his or her heart rate variability?

    That’s because heart rate variability, which is the measurement of the time in between heartbeats, can also be an indicator of female sexual dysfunction and male sexual dysfunction.
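
    (An illustrative aside: “the measurement of the time in between heartbeats” usually gets boiled down to a single number. One common metric is RMSSD, the root mean square of successive differences between beat-to-beat intervals. A minimal sketch, with made-up sample values:)

        # RMSSD: root mean square of successive differences between
        # RR intervals (the gaps between heartbeats, in milliseconds).
        # The sample values below are invented for illustration.
        from math import sqrt

        def rmssd(rr_intervals_ms):
            diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
            return sqrt(sum(d * d for d in diffs) / len(diffs))

        print(rmssd([812, 790, 845, 803, 828]))  # higher = more variability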

    When you think about which of your devices and apps contain your most sensitive data, you probably think about your text messages, Gchats, or Reddit account. The fitness tracking device you’re sporting right now may not immediately come to mind. After all, what can people really learn about you from your heart rate or your step count?

    More than you might think. In fact, an expanding trove of research links seemingly benign data points to behaviors and health outcomes. Much of this research is still in its infancy, but companies are already beginning to mine some of this data, and there’s growing controversy over just how far they can—and should—go. That’s because like most innovations, there’s a potential bright side, and a dark side, to this data feeding frenzy.

    Let’s go back to the example of heart rates. In a study conducted in Sweden and published in 2015, researchers found that low resting heart rates correlated with propensity for violence. It’s unclear whether these findings will hold up to further investigation. But if the connection is confirmed in the future, perhaps it could be cross-indexed, introduced into algorithms, and used, in conjunction with other data, to profile or convict individuals, suggests John Chuang, a professor at Berkeley’s School of Information and the director of its BioSense lab. (Biosensing technology uses digital data to learn about living systems like people.) “It’s something we can’t anticipate—these new classes of data we assume are innocuous that turn out not to be,” says Chuang.

    And in the absence of research linking heart rate to particular health or behavioral outcomes, we tend to have our own entrenched social interpretations of what a faster heart rate actually means—that someone is lying, or nervous, or interested. Berkeley researchers have found that even those assumed associations could have complicated implications for apps that allow users to share heart rate information with friends or employers. In one recent study currently undergoing peer review, when participants in a trust game observed that their partners had an elevated heart rate, they were less likely to cooperate with them and more likely to attribute some kind of negative mood to that person. In another study scheduled to be published soon, participants were asked to imagine a scenario: They were about to meet an acquaintance to talk about a legal dispute, and the acquaintance texted that he or she was running late. Alongside the text, that person’s heart rate appeared. If the heart rate was normal, many study participants felt it should have been elevated to show that their acquaintance cared about being late. The authors warn of the “potential danger” of apps that could encourage heart rate sharers to make the wrong associations between their signals and behavior. One app, Cardiogram, is already posing the question: “What’s your heart telling you?”

    Suddenly, anyone who knows your heart rate may prejudge—accurately or not—your emotions, mood, and sexual prowess. “This data can be very easily misinterpreted,” says Michelle De Mooy, the acting director of the Privacy and Data Project at the Center for Democracy and Technology. “People tend to think of data as fact, when in fact it’s governed by algorithms that are created by humans who have bias.”

    And it’s worrisome that companies, employers, and others could use such imperfect information. Most biosensing data gathered from wearables isn’t protected by the Health Insurance Portability and Accountability Act or regulated by the Federal Trade Commission, a reflection of the fact that the boundaries between medical and nonmedical data are still being defined. “Regulation can sometimes be a good thing, and sometimes more complicating,” says De Mooy. “But in this case, it’s important because of the different ways in which activity trackers are starting to be a part of our lives. Outside of a fun vague activity measure, they are coming into workplaces and wellness programs in lots of different ways.”

    Not all wellness program data can be legally funneled to employers or third parties. It depends on whether the wellness program is inside a company insurance plan—meaning that it would be protected by HIPAA—or outside a company insurance plan and administered by a third-party vendor. If it’s administered by a third party, your data could be passed on to other companies. At that point, the data is protected only by the privacy policies of those third-party vendors, “meaning they can essentially do what they like with it,” De Mooy says.

    Most companies that are gathering this information emphasize that they’re doing everything they can to protect users’ data and that they don’t sell it to third-party providers (yet). But when data passes from a device, to a phone, to the cloud through Wi-Fi, even all of the encryption and protective algorithms in the world can’t ensure data security. Many of these programs, like Aetna’s sleep initiative, are optional, but sometimes employees don’t have much of a choice. If they opt out, they often have to pay more for insurance coverage, though companies prefer to frame it as offering a discount to those who participate, as opposed to a penalty for those who don’t.

    And even if you choose to opt out, companies may find ways to collect the same data in the future. For example, MIT researchers are able now to detect heart rate and breathing information remotely with 99 percent accuracy from a Wi-Fi signal that they reflect off of your body. “In the future, could stores capture heart rate to show how it changes when you see a new gadget inside a store?” imagines Chuang. “These may be things that you as a consumer may not be able to opt out of.”

    Yet there’s another side to this future. The way you walk can be as unique as your fingerprint; a couple of studies show that gait can help verify the identity of smartphone users. And gait can also predict whether someone is at risk for dementia. Seemingly useless pieces of data may let experts deduce or predict certain behaviors or conditions now, but the big insights will come in the next few years, when companies and consumers are able to view a tapestry of different individual data points and contrast them with data across the entire population. That’s when, according to a recent report from Berkeley’s Center for Long-Term Cybersecurity, we’ll be able to “gain deep insight into human emotional experiences.”

    But it’s the data that you’re creating now that will fuel those insights. Far from meaningless, it’s the foundation of what you (and everyone else) may be able to learn about your future self.

    “Not all wellness program data can be legally funneled to employers or third parties. It depends on whether the wellness program is inside a company insurance plan—meaning that it would be protected by HIPAA—or outside a company insurance plan and administered by a third-party vendor. If it’s administered by a third party, your data could be passed on to other companies. At that point, the data is protected only by the privacy policies of those third-party vendors, “meaning they can essentially do what they like with it,” De Mooy says.”

    Yep, if you hand that seemingly innocuous personal health data like a heart rate over to a non-HIPAA-protected entity, random third parties get to infer all sorts of fun things about you, like whether or not you’re suffering from some sort of sexual dysfunction or what your propensity for violence might be. And whether or not those inferences are based on solid science or the latest pop theory is totally up to them. How fun.

    So check those HIPAA agreements before you slap that free FitBit on your wrist. And if you really don’t like the idea of handing over personal health data like your heart rate to the world, you might need to avoid all Wi-Fi networks too:


    And even if you choose to opt out, companies may find ways to collect the same data in the future. For example, MIT researchers are able now to detect heart rate and breathing information remotely with 99 percent accuracy from a Wi-Fi signal that they reflect off of your body. “In the future, could stores capture heart rate to show how it changes when you see a new gadget inside a store?” imagines Chuang. “These may be things that you as a consumer may not be able to opt out of.”

    That’s right, companies are potentially going to have the ability to just randomly scan your heart rate and breathing information with a Wi-Fi signal. Like when you walk past their billboards. Won’t that also be fun.

    And while it’s just breathing and heart rate information via Wi-Fi for now, just imagine what other personal health information could possibly be detected remotely with a much broader range of sensors. For instance, imagine if Google set up free ‘Wi-Fi Kiosks’ all over the place that not only provided Wi-Fi services but had other types of sensors that detected things like air pollution or other chemicals along with UV and infrared cameras. If you’re having a hard time imagining that, this should give you a better idea:

    Engadget

    Sidewalk Labs’ smart city kiosks go way beyond free WiFi
    Google’s sister company wants to monitor everything from traffic and air quality to potential terrorist activity.

    Andrew Dalton
    07.01.16 in Gadgetry

    The details of an ambitious plan from Google’s sister company Sidewalk Labs to create entire “smart neighborhoods” just got a little clearer. According to Sidewalk Labs’ pitch deck, which was obtained by Recode this week, the plan goes far beyond those free WiFi kiosks that are already on the streets of New York City. The kiosks will monitor everything from bike and pedestrian traffic to air quality and street noise.

    “The Kiosk sensor platform will help address complex issues where real-time ground truth is needed,” one document read. “Understanding and measuring traffic congestion, identifying dangerous situations like gas leaks, monitoring air quality, and identifying quality of life issues like idling trucks.”

    In addition to monitoring environmental factors like humidity and temperature, a bank of air pollutant sensors will also monitor particulates, ozone, carbon monoxide and other harmful chemicals in the air. Two other sensor banks will measure “Natural and Manmade Behavior” by tracking street vibrations, sound levels, magnetic fields and entire spectrums of visible, UV and infrared light. Finally, the “City Activity” sensors will not only be able to measure pedestrian traffic, it will also look for security threats like abandoned packages. While free gigabit WiFi on the streets sounds like a win for everyone’s data plan, it also comes at a cost: the kiosks will also be able to track wireless devices as they pass by, although it will most likely be anonymized.

    In one such example provided by the documents, data collected from traffic cameras and passing devices could be used to re-calculate travel times in Google Maps — think Waze, but with data on the municipal level. In the end, however, it’s up to each city to decide which sensors they want included in the devices. While many have obvious practical uses, Recode also points out there are some significant costs involved. Although the Sidewalk Labs pitch offers to provide the kiosks for free, there are still installation, setup and maintenance fees. All told, 100 “free” kiosks are expected to cost a city around $4.5 million in the first year.

    Of course, that cost can be defrayed if the city is willing to allow Sidewalk Labs to install two 55-inch advertising screens on each kiosk. While Sidewalk will foot the bill for the ad space, it also gets to keep 50 percent of the profits. With 100 kiosks, a city stands to make back an estimated $3 million per year in advertising revenue.

    “In addition to monitoring environmental factors like humidity and temperature, a bank of air pollutant sensors will also monitor particulates, ozone, carbon monoxide and other harmful chemicals in the air. Two other sensor banks will measure “Natural and Manmade Behavior” by tracking street vibrations, sound levels, magnetic fields and entire spectrums of visible, UV and infrared light. Finally, the “City Activity” sensors will not only be able to measure pedestrian traffic, it will also look for security threats like abandoned packages. While free gigabit WiFi on the streets sounds like a win for everyone’s data plan, it also comes at a cost: the kiosks will also be able to track wireless devices as they pass by, although it will most likely be anonymized.”

    Wi-Fi sidewalk kiosks with a battery of sensors and large screens designed to grab your attention and draw you closer (and then hopefully not detect your wireless devices and identify you). Might the “Natural and Manmade Behavior” detected by these kiosks include things like heart rate? And what other types of health information can be detected with sensors designed to pick up a broad range of sounds along with entire spectrums of visible, UV and infrared light? We’ll find out someday…presumably after all this data is collected.
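
    To make that sensor inventory a bit more concrete, here is a rough sketch of what a single telemetry record from one of these kiosks might look like, loosely following the sensor banks described in the pitch deck. Every field name is a hypothetical illustration; Sidewalk Labs has not published a schema.

        # Hypothetical kiosk telemetry record, loosely following the four
        # sensor banks described above. All field names are illustrative
        # guesses, not Sidewalk Labs' actual schema.
        from dataclasses import dataclass, field
        from datetime import datetime
        from typing import List

        @dataclass
        class KioskReading:
            kiosk_id: str
            taken_at: datetime
            temperature_c: float          # environmental bank
            humidity_pct: float
            pm25_ug_m3: float             # air pollutant bank
            ozone_ppb: float
            carbon_monoxide_ppm: float
            vibration_hz: float           # "Natural and Manmade Behavior" bank
            sound_level_db: float
            magnetic_field_ut: float
            pedestrian_count: int         # "City Activity" bank
            nearby_device_ids: List[str] = field(default_factory=list)  # "anonymized," supposedly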

    So, all in all, it’s increasingly clear that if you don’t like the idea of helplessly having your personal health information collected and analyzed (including very dubiously analyzed) by all sorts of random third-party data predators, you might need to relocate. Away from civilization. Far away. Preferably a thick jungle where third-party kiosks with Wi-Fi and infrared scanning will at least have a limited reach given all the blocking foliage. Sure, there might be tigers and other predators to worry about in the jungle, but at least those are the kinds of predators you can potentially defend yourself against. The tiger might be able to eat your body, but not your dignity. Good luck!

    Posted by Pterrafractyl | August 15, 2016, 6:43 pm
  13. Remember Tay, Microsoft’s AI chatbot that was turned into a neo-Nazi in under 24 hours because its creators inexplicably didn’t take into account the possibility that people would try to turn their chatbot into a neo-Nazi? Well, it appears Facebook just had its own Tay-ish experience. Although instead of a bunch of trolls specifically setting out to turn some new publicly accessible Facebook AI into an extremist, Facebook removed the human curation component from its “trending news” feed following charges that it was filtering out conservative news, and the endemic trolling already present in the right-wing mediasphere dumpster fire took it from there:

    The Guardian

    Facebook fires trending team, and algorithm without humans goes crazy

    Module pushes out false story about Fox’s Megyn Kelly, offensive Ann Coulter headline and a story link about a man masturbating with a McDonald’s sandwich

    Sam Thielman in New York
    Monday 29 August 2016 12.48 EDT

    Just months after the discovery that Facebook’s “trending” news module was curated and tweaked by human beings, the company has eliminated its editors and left the algorithm to do its job. The results, so far, are a disaster.

    Facebook announced late Friday that it had eliminated jobs in its trending module, the part of its news division where staff curated popular news for Facebook users. Over the weekend, the fully automated Facebook trending module pushed out a false story about Fox News host Megyn Kelly, a controversial piece about a comedian’s four-letter word attack on rightwing pundit Ann Coulter, and links to an article about a video of a man masturbating with a McDonald’s chicken sandwich.

    In a blogpost, Facebook said the decision to drop people from the news module would allow it to operate at a greater scale.

    “Our goal is to enable Trending for as many people as possible, which would be hard to do if we relied solely on summarizing topics by hand,” wrote a company representative in the unattributed post. “A more algorithmically driven process allows us to scale Trending to cover more topics and make it available to more people globally over time.”

    A source familiar with the matter told the Guardian that the trending team was fired without notice in a meeting with a security guard present. The ex-employees received four weeks’ severance.

    In May, the Guardian published the guidelines used by Facebook’s Trending module team after Gizmodo revealed that the module was in fact curated by humans. The revelation fuelled accusations of potential bias at the social network, which has become the world’s largest distributor of news.

    The past weekend has been less than auspicious for Facebook’s new, inhuman workforce: on Saturday, the site pushed an article to some of its users entitled: “BREAKING: Fox News Exposes Traitor Megyn Kelly, Kicks Her Out For Backing Hillary.” Megyn Kelly is still employed by Fox News and has not endorsed Hillary Clinton for president.

    Facebook removed the offending article, published by a website called Ending the Fed and linking to another little known site, Conservative 101. Under Facebook’s old guidelines, news curators stuck to a list of trusted media sources. Neither of these sources were on that list.

    Another surprising headline read: “SNL Star Calls Ann Coulter a Racist C*nt,” and referred to attacks on the author during a Comedy Central roast of actor Rob Lowe. Other trending items picked by algorithm were pegged to Twitter hashtags including #McChicken, a hashtag that had gone viral after someone posted a video of a man masturbating with a McChicken sandwich.

    The dismissal of the trending module team appears to have been a long-term plan at Facebook. A source told the Guardian the trending module was meant to have “learned” from the human editors’ curation decisions and was always meant to eventually reach full automation.

    “Facebook announced late Friday that it had eliminated jobs in its trending module, the part of its news division where staff curated popular news for Facebook users. Over the weekend, the fully automated Facebook trending module pushed out a false story about Fox News host Megyn Kelly, a controversial piece about a comedian’s four-letter word attack on rightwing pundit Ann Coulter, and links to an article about a video of a man masturbating with a McDonald’s chicken sandwich.”

    Well, at least the story about the chicken sandwich was potentially newsworthy. At least now we know not to click on any articles about McChicken sandwiches going forward.

    So perhaps the lesson here is that algorithmically automated newsfeeds may not be credible sources of what we normally think of as “news”, but they are potentially useful summaries of all the garbage people are reading instead of actual news. At least with Facebook’s new algorithmically driven trending news feed we can all watch civilization’s collective descent into ignorance and madness in somewhat greater detail. That’s kind of a positive service.

    Unfortunately, that’s not the kind of positive service we’re going to get. At least not yet. Why? Because it turns out Facebook didn’t actually eliminate the human curators. Instead, it fired its existing team of professional journalist curators and hired a new team of non-journalist humans. So this is less an issue of “oops, our new algorithm just got overwhelmed by all the toxic ‘news’ out there!” and more an issue of “oops, we fired all our journalist curators and quietly replaced them with non-journalist curators who are horrible at this job. How about we blame this on the algorithm”:

    Slate

    Trending Bad

    How Facebook’s foray into automated news went from messy to disastrous.

    By Will Oremus
    Aug. 30 2016 2:05 PM

    It seems Facebook’s human news editors weren’t quite as expendable as the company thought.

    On Monday, the social network’s latest move to automate its “Trending” news section backfired when it promoted a false story by a dubious right-wing propaganda site. The story, which claimed that Fox News had fired anchor Megyn Kelly for being a “traitor,” racked up thousands of Facebook shares and was likely viewed by millions before Facebook removed it for inaccuracy.

    The blunder came just three days after Facebook fired the entire New York–based team of contractors that had been curating and editing the trending news section, as Quartz first reported on Friday and Slate has confirmed. That same day, Facebook announced an “update” to its trending section—a feature that highlights news topics popular on the site—that would make it “more automated.”

    Facebook’s move away from human editors was supposed to extinguish the (farcically overblown) controversy over allegations of liberal bias in the trending news section. But in its haste to mollify conservatives, the company appears to have rolled out a new product that members of its own trending news team viewed as seriously flawed.

    Three of the trending team members who were recently fired told Slate they understood from the start that Facebook’s ultimate goal was to automate the process of selecting stories for the trending news section. Their team was clearly a stopgap. But all three said independently that they were shocked to have been let go so soon, because the software that was meant to supplant them was nowhere near ready. “It’s half-baked quiche,” one told me.

    Before we poke and prod that quiche, it’s worth clearing up a popular misunderstanding. Facebook has not entirely eliminated humans from its trending news product. Rather, the company replaced the New York–based team of contractors, most of whom were professional journalists, with a new team of overseers. Apparently it was this new team that failed to realize the Kelly story was bogus when Facebook’s trending algorithm suggested it. Here’s how a company spokeswoman explained the mishap to me Monday afternoon:

    The Trending review team accepted this topic over the weekend. Based on their review guidelines, the topic met the conditions for acceptance at the time because there was a sufficient number of relevant articles and posts. On re-review, the topic was deemed as inaccurate and does no longer appear in trending. We’re working to make our detection of hoax and satirical stories more accurate as part of our continued effort to make the product better.

    So: Blame the people, not the algorithm, which is apparently the same one Facebook was using before it fired the original trending team. Who are these new gatekeepers, and why can’t they tell the difference between a reliable news source and Endingthefed.com, the publisher of the Kelly piece? Facebook wouldn’t say, but it offered the following statement: “In this new version of Trending we no longer need to draft topic descriptions or summaries, and as a result we are shifting to a team with an emphasis on operations and technical skillsets, which helps us better support the new direction of the product.”

    That helps clarify the blog post Facebook published Friday, in which it explained the move to simplify its trending section as part of a push to scale it globally and personalize it to each user. “This is something we always hoped to do but we are making these changes sooner given the feedback we got from the Facebook community earlier this year,” the company said.

    That all made sense to the three former trending news contractors who spoke with Slate. (They spoke separately and on condition of anonymity, citing a nondisclosure agreement, but they agreed on multiple key points and details.) The former contractors said they weren’t told much about their role or the future of the product they were working on, but the companies that hired them—one an Indiana-based consultancy called BCforward, the other a Texas firm called MMC—did indicate their jobs were not permanent. They also understood that the internal software that identified topics for trending news was meant to improve over time, so it could eventually take on more of the work itself.

    The strange thing, they told me, was the algorithm didn’t seem to be getting much better at selecting relevant stories or reliable news sources. “I didn’t notice a change at all,” said one, who had worked on the team for close to a year. The system was constantly being refined, the former contractor added, by Facebook engineers with whom the trending contractors had no direct contact. But the improvements focused on the content management system and the curation guidelines the humans worked with. The feed of trending stories surfaced by the algorithm, meanwhile, was “not ready for human consumption—you really needed someone to sift through the junk.”

    The second former contractor, who joined the team more recently, actually liked the idea of helping to train software to curate a personalized feed of trending news stories for readers around the world. “When I entered into it, I thought, ‘Well, the algorithm’s basic right now, so that’s not going to be [autonomous] for a couple years.’ The volume of topics we would get, it would be hundreds and hundreds. It was just this raw feed,” full of clickbait headlines and topics that bore no relation to actual news stories. The contractor estimated that, for every topic surfaced by the algorithm that the team accepted and published, there were “four or five” that the curators rejected as spurious.

    The third contractor, who agreed that the algorithm remained sorely in need of human editing and fact-checking, estimated that out of every 50 topics it suggested, about 20 corresponded to real, verifiable news events. But when the news was real, the top sources suggested by the algorithm often were not credible news outlets.

    The contractors’ perception that their jobs were secure, at least for the medium term, was reinforced when Facebook recently began testing a new trending news feature—a stripped-down version that replaced summaries of each topic with the number of Facebook users talking about it. This new version, two contractors believed, gave the human curators a greatly diminished role in story and source selection. Said one: “You’ll get Endingthefed.com as your news source (suggested by the algorithm), and you won’t be able to go out and say, ‘Oh, there’s a CNN source, or there’s a Fox News source, let’s use that instead.’ You just have a binary choice to approve it or not.”

    The results were not pretty. “They were running these tests with subsets of users, and the feedback they got internally was overwhelmingly negative. People would say, ‘I don’t understand why I’m looking at this. I don’t see the context anymore.’ There were spelling mistakes in the headlines. And the number of people talking about a topic would just be wildly off.” The negative feedback came from both Facebook employees participating in internal tests and external Facebook users randomly selected for small public tests.

    The contractor assumed Facebook’s engineers and product managers would go back to the drawing board. Instead, on Friday, the company dumped the journalists and released the new, poorly reviewed version of trending news to the public.

    Why was Facebook so eager to make this move? The company may well have deemed journalists more trouble than they’re worth after several of them set off a firestorm by criticizing the product in the press. Others complained about the “toxic” working conditions or dished dirt on Twitter after being let go. Journalists are a cantankerous lot, and in many ways a poor fit for Silicon Valley tech companies like Facebook that thrive on opacity and cultivate the perception of neutrality.

    But Facebook appears to have thrown out the babies and kept the bathwater. What’s left of the trending section, even after the removal of the Kelly story, looks a lot like the context-free, clickbait-y mess the contractor described sifting through each day. “You click around, and it’s a garbage fire,” one said of the new version.

    Ironically, the decline in quality of the trending section comes at the same time that Facebook is touting values such as authenticity and accuracy in its news feed, where it continues to fight its never-ending battle against clickbait and preach the gospel of “high-quality” news content.

    “The contractors’ perception that their jobs were secure, at least for the medium term, was reinforced when Facebook recently began testing a new trending news feature—a stripped-down version that replaced summaries of each topic with the number of Facebook users talking about it. This new version, two contractors believed, gave the human curators a greatly diminished role in story and source selection. Said one: “You’ll get Endingthefed.com as your news source (suggested by the algorithm), and you won’t be able to go out and say, ‘Oh, there’s a CNN source, or there’s a Fox News source, let’s use that instead.’ You just have a binary choice to approve it or not.””

    Yes, as part of its long-held goal of fully automating news feeds so that every single user can eventually get a personalized feed, Facebook was already reducing the amount of human judgement involved in the curation before it fired and replaced its team. And then it fired and replaced them:

    Before we poke and prod that quiche, it’s worth clearing up a popular misunderstanding. Facebook has not entirely eliminated humans from its trending news product. Rather, the company replaced the New York–based team of contractors, most of whom were professional journalists, with a new team of overseers. Apparently it was this new team that failed to realize the Kelly story was bogus when Facebook’s trending algorithm suggested it. Here’s how a company spokeswoman explained the mishap to me Monday afternoon:

    The Trending review team accepted this topic over the weekend. Based on their review guidelines, the topic met the conditions for acceptance at the time because there was a sufficient number of relevant articles and posts. On re-review, the topic was deemed as inaccurate and does no longer appear in trending. We’re working to make our detection of hoax and satirical stories more accurate as part of our continued effort to make the product better.

    So: Blame the people, not the algorithm, which is apparently the same one Facebook was using before it fired the original trending team. Who are these new gatekeepers, and why can’t they tell the difference between a reliable news source and Endingthefed.com, the publisher of the Kelly piece? Facebook wouldn’t say, but it offered the following statement: “In this new version of Trending we no longer need to draft topic descriptions or summaries, and as a result we are shifting to a team with an emphasis on operations and technical skillsets, which helps us better support the new direction of the product.”

    As we can see, there is indeed a “Trending review team”, and it is still human. Even with the flexibility to select from a variety of news sources for a given topic replaced by a binary approve-or-reject choice, this team of humans still has the ability to filter out blatantly fake news. It’s just that the new team apparently can’t actually identify the fake news.

    All in all, it looks like Facebook modified its internal trending news algorithms while keeping in place a team of humans to make the final judgement call. Then it made an announcement that made it sound like the trending news feed was now entirely algorithmically driven, when in fact it had merely replaced the previous team of journalists (who were complaining about their abusive working conditions just months ago) with a new team of non-journalists who were still tasked with making that final judgement call. And then everyone blamed the algorithm when this all blew up with bogus articles in the news feed.
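    To make that division of labor concrete, here is a minimal sketch, in Python, of a human-in-the-loop review pipeline of the sort the contractors describe. Everything in it is hypothetical (the class, the trusted-source list, the sample topics); Facebook has never published its actual system, so this only illustrates the binary approve-or-reject workflow, and where it fails: a reviewer who can’t tell a hoax from a real story publishes the hoax, suggested source and all.

        # Hypothetical sketch of the binary review workflow; none of these
        # names or lists come from Facebook's actual system.
        from dataclasses import dataclass

        @dataclass
        class Candidate:
            topic: str           # topic surfaced by the trending algorithm
            source: str          # top source suggested by the algorithm
            is_real_event: bool  # reviewer's verdict on the underlying story

        TRUSTED_SOURCES = {"cnn.com", "foxnews.com", "nytimes.com"}

        def review(c: Candidate) -> bool:
            # The binary call: approve only if the story checks out AND the
            # suggested source is credible. The reviewer can no longer swap
            # in a better source for the same topic.
            return c.is_real_event and c.source in TRUSTED_SOURCES

        raw_feed = [
            Candidate("Megyn Kelly fired", "endingthefed.com", False),
            Candidate("Hurricane makes landfall", "cnn.com", True),
        ]
        print([c.topic for c in raw_feed if review(c)])
        # -> ['Hurricane makes landfall']

    The whole system stands or falls on that one boolean the human supplies: feed it a wrong verdict and the pipeline happily publishes the hoax.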

    So while Facebook may have trashed the utility of its trending news feed today, this sad tale of poor corporate judgement (rolling out poorly designed algorithms run by poorly prepared people, then blaming the algorithm when things go poorly) gives us a glimpse of the kind of news that could easily become a major category of trending news in the future, as more and more human/algorithm ‘mistakes’ are created and blamed solely on the algorithm. The algorithm designers probably didn’t intend that, but it’s still kind of impressive, in a sad way.

    Posted by Pterrafractyl | August 30, 2016, 7:00 pm
  14. It looks like WikiLeaks’s quest to bring transparency to government and large corporations is getting extended. To everyone with a verified Twitter account:

    ReCode

    WikiLeaks wants to create a database of verified Twitter users and who they interact with

    That would include a lot of journalists — and Donald Trump.

    by Kurt Wagner Jan 7, 2017, 10:00am EST

    WikiLeaks tweeted Friday that it wanted to build a database of information about Twitter’s verified users, including personal relationships that might have influence on their lives.

    Then, after a number of users sounded the alarm on what they perceived to be a massive doxxing effort, WikiLeaks deleted the tweet, but not before blaming that perception on the “dishonest press.”

    In a subsequent series of tweets on Friday, WikiLeaks Task Force — a verified Twitter account described in its bio as the “Official @WikiLeaks support account” — explained that it wanted to look at the “family/job/financial/housing relationships” of Twitter’s verified users, which includes a ton of journalists, politicians and activists.

    [see image of deleted tweet ]

    The point, the WikiLeaks account claims, is to “develop a metric to understand influence networks based on proximity graphs.” That’s a pretty confusing explanation, and the comment left a number of concerned Twitter users scratching their collective heads and wondering just how invasive this database might be.

    The “task force” attempted to clarify what it meant in a number of subsequent tweets, and it sounds like the database is an attempt to understand who or what might be influencing Twitter’s verified users. Imagine identifying relationships like political party affiliation, for example, though it’s unclear if the database would include both online and offline relationships users have. (We tweeted at WikiLeaks and will update if we hear back.)

    WikiLeaks mentioned an artificial intelligence software program that it would use to help compile the database and suggested it might be akin to the social graphs that Facebook and LinkedIn have created.

    It was all rather vague, which didn’t help with user concern on Twitter. But WikiLeaks claims the proposed database is not about releasing personal info, like home addresses.

    Dishonest press reporting our speculative idea for database of account influencing *relationships* with WikiLeaks doxing home addresses.— WikiLeaks Task Force (@WLTaskForce) January 6, 2017

    .@DaleInnis @kevincollier As we stated the idea is to look at the network of *relationships* that influence — not to publish addresses.— WikiLeaks Task Force (@WLTaskForce) January 6, 2017

    Still, it was an unsettling proclamation for many on Twitter, and followed just a few days after WikiLeaks founder Julian Assange told Fox News that American media coverage is “very dishonest.” It’s a descriptor President-elect Donald Trump famously uses, too.

    It seems possible that the point of looking into verified Twitter users — many of whom are journalists — is so that WikiLeaks can rein in the “dishonest media.”

    What could be interesting, though, is that building a database would also mean looking into the relationships influencing Trump, who is also verified on Twitter.

    Some of those relationships are already publicly known. The Wall Street Journal, for example, has reported that more than 150 institutions hold Trump’s business debts. But many journalists and politicians have complained of lack of transparency from Trump, like his failure to release his tax returns. These critics may welcome a closer look at the powers influencing the next Commander in Chief.

    Even if WikiLeaks were to move forward with this database, it seems like it would have to store the project off of Twitter. The social communications company tweeted out a statement shortly after the original WikiLeaks tweet: “Posting another person’s private and confidential information is a violation of the Twitter Rules.”

    Posting another person’s private and confidential information is a violation of the Twitter Rules: https://t.co/NGx5hh2tTQ— Safety (@safety) January 6, 2017

    Twitter has already said that it will not allow anyone, including government agencies, to use its services to create surveillance databases and has a policy against posting another person’s private information on the service.

    “In a subsequent series of tweets on Friday, WikiLeaks Task Force — a verified Twitter account described in its bio as the “Official @WikiLeaks support account” — explained that it wanted to look at the “family/job/financial/housing relationships” of Twitter’s verified users, which includes a ton of journalists, politicians and activists.”

    Yeah, that’s not creepy or anything.

    Now, it’s worth noting that creating databases of random people on social media and trying to learn everything you can about them, like their relationships and influences, is nothing new for the government or the private sector (it’s what Palantir does). And there’s nothing stopping WikiLeaks or anyone else from doing the same. But in this case it appears that WikiLeaks is floating the idea of creating this database and then making it a searchable public tool. And it’s not at all clear that WikiLeaks would limit the data it collects on Twitter users to information gathered from Twitter itself. Since they’re talking about limiting it to “verified users” (Twitter accounts that have been strongly identified with a real person using their real name), they could include all sorts of third-party data from anywhere.
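    For a sense of what “influence networks based on proximity graphs” could mean in practice, here is a minimal sketch assuming nothing more than public interaction counts between accounts. The account names and numbers are invented, and WikiLeaks described no concrete method:

        # A toy "proximity graph": nodes are verified accounts, edge weights
        # count observed interactions. All data here is made up.
        import networkx as nx

        interactions = [
            ("@journalist_x", "@politician_y", 12),
            ("@journalist_x", "@activist_z", 3),
            ("@politician_y", "@activist_z", 7),
        ]

        G = nx.Graph()
        for a, b, n in interactions:
            G.add_edge(a, b, weight=n)

        # Weighted degree as a crude "influence" metric: who interacts the
        # most, with the most people?
        influence = dict(G.degree(weight="weight"))
        print(sorted(influence.items(), key=lambda kv: kv[1], reverse=True))

    Even a toy like this makes the problem visible: whoever decides which edges go into the graph decides who looks “influenced” by whom.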

    And if the above article’s speculation is correct, the motive for creating this data set (with fancy graphs, presumably) would basically be to discredit people through guilt-by-association:


    Still, it was an unsettling proclamation for many on Twitter, and followed just a few days after WikiLeaks founder Julian Assange told Fox News that American media coverage is “very dishonest.” It’s a descriptor President-elect Donald Trump famously uses, too.

    It seems possible that the point of looking into verified Twitter users — many of whom are journalists — is so that WikiLeaks can rein in the “dishonest media.”

    Keep in mind that if WikiLeaks actually created this tool, it would probably have quite a bit of leeway over the kind of data that gets included in the system and which “relationships” or “influences” show up for a given individual. Also keep in mind that if this were done responsibly, a great deal of human judgement would have to go into whether a particular piece of data pointing towards a “relationship” or “influence” is accurate and honest. And it’s exactly that kind of required flexibility that could give WikiLeaks a great deal of real power over how someone is presented.

    So it appears that WikiLeaks wants to create publicly accessible dossiers on verified Twitter users, presumably for the purpose of ‘making a point’ of some sort. Sort of like the old “TheyRule.net” web tool, which graphed the people serving on the boards of major corporations and made the incestuous nature of corporate leadership visually clear. But in this case it won’t be limited to big-company CEOs. It’ll be everyone. At least everyone with a verified Twitter account, which just happens to include large numbers of journalists and activists. So, TheyRule.net, but with much more personal information, on people who may or may not actually rule. Great.

    Posted by Pterrafractyl | January 10, 2017, 3:58 pm
  15. Here’s something worth noting while sifting through the 2016 election aftermath: Silicon Valley’s long rightward shift became official in 2016. At least if you look at the corporate PACs of tech giants like Microsoft, Google, Facebook, and Amazon. Sure, the employees tended to still favor donating to Democrats, although not as much as before (and not at all at Microsoft). But when it came to the corporate PACs Silicon Valley was seeing red:

    The New York Times
    Opinion

    Silicon Valley Takes a Right Turn

    Thomas B. Edsall
    JAN. 12, 2017

    In 2016, the corporate PACs associated with Microsoft, Facebook, Google and Amazon broke ranks with the traditional allegiance of the broad tech sector to the Democratic Party. All four donated more money to Republican Congressional candidates than they did to their Democratic opponents.

    As these technology firms have become corporate behemoths, their concerns over government regulatory policy have intensified — on issues including privacy, taxation, automation and antitrust. These are questions on which they appear to view Republicans as stronger allies than Democrats.

    In 2016, the PACs of these four firms gave a total of $3.6 million to House and Senate candidates. Of that, $2.1 million went to Republicans, and $1.5 million went to Democrats. These PACs did not contribute to presidential candidates.

    The PACs stand apart from donations by employees in the technology and internet sectors. According to OpenSecrets, these employees gave $42.4 million to Democrats and $24.2 million to Republicans.

    In the presidential race, tech employees (as opposed to corporate PACs) overwhelmingly favored Hillary Clinton over Donald Trump. Workers for internet firms, for example, gave her $6.3 million, and gave $59,622 to Trump. Employees of electronic manufacturing firms donated $12.6 million to Clinton and $534,228 to Trump.

    Most tech executives and employees remain supportive of Democrats, especially on social and cultural issues. The Republican tilt of the PACs at Microsoft, Amazon, Google and Facebook suggests, however, that as these companies’ domains grow larger, their bottom-line interests are becoming increasingly aligned with the policies of the Republican Party.

    In terms of political contributions, Microsoft has led the rightward charge. In 2008, the Microsoft PAC decisively favored Democrats, 60-40, according to data compiled by the indispensable Center for Responsive Politics. By 2012, Republican candidates and committees had taken the lead, 54-46; and by 2016, the Microsoft PAC had become decisively Republican, 65-35.

    In 2016, the Microsoft PAC gave $478,818 to Republican House candidates and $272,000 to Democratic House candidates. It gave $164,000 to Republican Senate candidates, and $75,000 to Democratic Senate candidates.

    Microsoft employees’ contributions followed a comparable pattern. In 2008 and 2012, Microsoft workers were solidly pro-Democratic, with 71 percent and 65 percent of their contributions going to party members. By 2016, the company’s work force had shifted gears. Democrats got 47 percent of their donations.

    This was not small change. In 2016 Microsoft employees gave a total of $6.47 million.

    A similar pattern is visible at Facebook.

    The firm first became a noticeable player in the world of campaign finance in 2012 when employees and the company PAC together made contributions of $910,000. That year, Facebook employees backed Democrats over Republicans 64-35, while the company’s PAC tilted Republican, 53-46.

    By 2016, when total Facebook contributions reached $3.8 million, the Democratic advantage in employee donations shrank to 51-47, while the PAC continued to favor Republicans, 56-44.

    While the employees of the three other most valuable tech companies, Alphabet (Google), Amazon and Apple, remained Democratic in their giving in 2016, at the corporate level of Alphabet and Amazon — that is, at the level of their PACs — they have not.

    Google’s PAC gave 56 percent of its 2016 contributions to Republicans and 44 percent to Democrats. The Amazon PAC followed a similar path, favoring Republicans over Democrats 52-48. (Apple does not have a PAC.)

    Tech giants can no longer be described as insurgents challenging corporate America.

    “By just about every measure worth collecting,” Farhad Manjoo of The Times wrote in January 2016:

    American consumer technology companies are getting larger, more entrenched in their own sectors, more powerful in new sectors and better insulated against surprising competition from upstarts.

    These firms are now among the biggest of big business. In a 2016 USA Today ranking of the most valuable companies worldwide, the top four were Alphabet, $554.8 billion; Apple, $529.3 billion; Microsoft, $425.4 billion; and Facebook, $333.6 billion. Those firms decisively beat out Berkshire Hathaway, Exxon Mobil, Johnson & Johnson and General Electric.

    In addition to tech companies’ concern about government policy on taxation, regulation and antitrust, there are other sources of conflict between tech firms and the Democratic Party. Gregory Ferenstein, a blogger who covers the tech industry, conducted a survey of 116 tech company founders for Fast Company in 2015. Using data from a poll conducted by the firm SurveyMonkey, Ferenstein compared the views of tech founders with those of Democrats, in some cases, and the views of the general public, in others.

    Among Ferenstein’s findings: a minority, 29 percent, of tech company founders described labor unions as “good,” compared to 73 percent of Democrats. Asked “is meritocracy naturally unequal?” tech founders overwhelmingly agreed.

    Ferenstein went on:

    One hundred percent of the smaller sample of founders to whom I presented this question said they believe that a truly meritocratic economy would be “mostly” or “somewhat” unequal. This is a key distinction: Opportunity is about maximizing people’s potential, which founders tend to believe is highly unequal. Founders may value citizen contributions to society, but they don’t think all citizens have the potential to contribute equally. When asked what percent of national income the top 10% would hold in such a scenario, a majority (67%) of founders believed that the richest individuals would control 50% or more of total income, while only 31% of the public believes such an outcome would occur in a meritocratic society.

    One of the most interesting questions posed by Ferenstein speaks to middle and working class anxieties over global competition:

    In international trade policy, some people believe the U.S. government should create laws that favor American business with policies that protect it from global competition, such as fees on imported goods or making it costly to hire cheaper labor in other countries (“outsourcing”). Others believe it would be better if there were less regulations and businesses were free to trade and compete without each country favoring their own industries. Which of these statements come closest to your belief?

    There was a large difference between tech company officials, 73 percent of whom chose free trade and less regulation, while only 20 percent of Democrats supported those choices.

    Ferenstein also found that tech founders are substantially more liberal on immigration policy than Democrats generally. 64 percent would increase total immigration levels, compared to 39 percent of Democrats. Tech executives are strong supporters of increasing the number of highly trained immigrants through the H-1B visa program.

    Joel Kotkin, a fellow in urban studies at Chapman University who writes about demographic, social and economic trends, sees these differences as the source of deep conflict within the Democratic Party.

    In a provocative August, 2015, column in the Orange County Register, Kotkin wrote:

    The disruptive force is largely Silicon Valley, a natural oligarchy that now funds a party teetering toward populism and even socialism. The fundamental contradictions, as Karl Marx would have noted, lie in the collision of interests between a group that has come to epitomize self-consciously progressive mega-wealth and a mass base which is increasingly concerned about downward mobility.

    The tech elite, Kotkin writes, “far from deserting the Democratic Party, more likely will aim to take it over.” Until very recently, the

    conflict between populists and tech oligarchs has been muted, in large part due to common views on social issues like gay marriage and, to some extent, environmental protection. But as the social issues fade, having been “won” by progressives, the focus necessarily moves to economics, where the gap between these two factions is greatest.

    Kotkin sees future partisan machination in cynical terms:

    One can expect the oligarchs to seek out a modus vivendi with the populists. They could exchange a regime of higher taxes and regulation for ever-expanding crony capitalist opportunities and political protection. As the hegemons of today, Facebook and Google, not to mention Apple and Amazon, have an intense interest in protecting themselves, for example, from antitrust legislation. History is pretty clear: Heroic entrepreneurs of one decade often turn into the insider capitalists of the next.

    In 2016, Donald Trump produced an upheaval within the Republican Party that shifted attention away from the less explosive turmoil in Democratic ranks.

    “The tech elite, Kotkin writes, “far from deserting the Democratic Party, more likely will aim to take it over.””

    And that warning is going to be something to keep in mind as this trend continues: the political red-shifting of Silicon Valley doesn’t mean Silicon Valley’s titans are going to abandon the Democratic party and stop giving it money. It’s worse. They’re going to keep giving the Democrats money (although maybe not as much as they give the GOP) in the hopes of remaking the party in the GOP’s image. And the more powerful the tech sector becomes, the more money these giant corporations will have available for this kind of political ‘persuasion’.

    And in other news, a new Oxfam study found that just eight individuals – including tech titans Bill Gates, Jeff Bezos, Mark Zuckerberg, and Larry Ellison – own as much wealth as the poorest half of the global population. So, you know, wealth inequality probably isn’t a super big priority for their super PACs.

    Posted by Pterrafractyl | January 17, 2017, 4:09 pm
  16. With the GOP and Trump White House scrambling to find some sort of legislative victory in the wake of last week’s failed Obamacare repeal bill that almost everybody hated, it’s worth noting that the GOP-controlled House and Senate may have just put in motion a major regulatory change that could be even more hated than Trumpcare: making it legal for your ISP to sell your browsing habits, location, online shopping habits, and anything else it can extract from your online activity:

    The New York Times
    Opinion

    How the Republicans Sold Your Privacy to Internet Providers

    By TOM WHEELER
    MARCH 29, 2017

    On Tuesday afternoon, while most people were focused on the latest news from the House Intelligence Committee, the House quietly voted to undo rules that keep internet service providers — the companies like Comcast, Verizon and Charter that you pay for online access — from selling your personal information.

    The Senate already approved the bill, on a party-line vote, last week, which means that in the coming days President Trump will be able to sign legislation that will strike a significant blow against online privacy protection.

    The bill not only gives cable companies and wireless providers free rein to do what they like with your browsing history, shopping habits, your location and other information gleaned from your online activity, but it would also prevent the Federal Communications Commission from ever again establishing similar consumer privacy protections.

    The bill is an effort by the F.C.C.’s new Republican majority and congressional Republicans to overturn a simple but vitally important concept — namely that the information that goes over a network belongs to you as the consumer, not to the network hired to carry it. It’s an old idea: For decades, in both Republican and Democratic administrations, federal rules have protected the privacy of the information in a telephone call. In 2016, the F.C.C., which I led as chairman under President Barack Obama, extended those same protections to the internet.

    To my Democratic colleagues and me, the digital tracks that a consumer leaves when using a network are the property of that consumer. They contain private information about personal preferences, health problems and financial matters. Our Republican colleagues on the commission argued the data should be available for the network to sell. The commission vote was 3-2 in favor of consumers.

    Reversing those protections is a dream for cable and telephone companies, which want to capitalize on the value of such personal information. I understand that network executives want to produce the highest return for shareholders by selling consumers’ information. The problem is they are selling something that doesn’t belong to them.

    Here’s one perverse result of this action. When you make a voice call on your smartphone, the information is protected: Your phone company can’t sell the fact that you are calling car dealerships to others who want to sell you a car. But if the same device and the same network are used to contact car dealers through the internet, that information — the same information, in fact — can be captured and sold by the network. To add insult to injury, you pay the network a monthly fee for the privilege of having your information sold to the highest bidder.

    This bill isn’t the only gift to the industry. The Trump F.C.C. recently voted to stay requirements that internet service providers must take “reasonable measures” to protect confidential information they hold on their customers, such as Social Security numbers and credit card information. This is not a hypothetical risk — in 2015 AT&T was fined $25 million for shoddy practices that allowed employees to steal and sell the private information of 280,000 customers.

    Among the many calamities engendered by the circus atmosphere of this White House is the diversion of public attention away from many other activities undertaken by the Republican-controlled government. Nobody seemed to notice when the Trump F.C.C. dropped the requirement about networks protecting information because we were all riveted by the Russian hacking of the election and the attempted repeal of Obamacare.

    There’s a lot of hypocrisy at play here: The man who has raged endlessly at the alleged surveillance of the communications of his aides (and potentially himself) will most likely soon gladly sign a bill that allows unrestrained sale of the personal information of any American using the internet.

    “The bill not only gives cable companies and wireless providers free rein to do what they like with your browsing history, shopping habits, your location and other information gleaned from your online activity, but it would also prevent the Federal Communications Commission from ever again establishing similar consumer privacy protections.”

    The GOP is so intent on guaranteeing the right of ISPs to sell anything they can about you that the House bill would prevent the FCC from ever again establishing the kinds of protections the bill repeals. At least, presumably, unless a new law is passed to re-empower the FCC. Which means that if this becomes law (and all indications are Trump will sign it into law), it’s probably going to take a Democratic-controlled House, Senate, and White House to reverse it. Yes, following the GOP’s epic Trumpcare fail, it’s about to rebrand itself as the “ISPs will stop spying on you over our dead body!” party.

    And that’s all on top of Trump’s FCC voting to prevent these same ISPs from having to take “reasonable measures” to protect the few categories of information they’re collecting on you that they wouldn’t be selling: your Social Security number and credit card info:


    This bill isn’t the only gift to the industry. The Trump F.C.C. recently voted to stay requirements that internet service providers must take “reasonable measures” to protect confidential information they hold on their customers, such as Social Security numbers and credit card information. This is not a hypothetical risk — in 2015 AT&T was fined $25 million for shoddy practices that allowed employees to steal and sell the private information of 280,000 customers.

    So with ISPs set to compete with the existing data-broker giants like Facebook and Google and create a giant national fire sale of personal digital information, it’s probably a good time to consider whether or not you’re at risk of identity theft. And here’s a nice quick way to figure that out: Do you use the internet in the US? If the answer is “yes”, you’re probably at risk of identity theft:

    Fox59

    Cyber expert explains internet privacy concerns after House pulls plug on FCC regulations

    Posted 5:13 PM, March 29, 2017, by Shannon Houser, Updated at 05:35PM, March 29, 2017

    BLOOMINGTON, Ind. — A vote Tuesday in Washington dismantled online privacy regulations previously set in place by the FCC.

    The regulations would have prevented internet service providers like Comcast, Verizon and AT&T from selling your personal information on marketplaces and to start-up companies.

    FOX59 spoke to IU Bloomington Cyber Security Program Chair Scott Shackelford about what it now means for you when you’re browsing online.

    Shackelford said those companies can access information like your browsing habits, but also dates important to you, like your birthday. Those details can be sold where companies would use that data to develop advertising and marketing trends.

    “It can be a couple bucks. It can be a bit more, but when all of a sudden you have millions of consumers, that can be a cash cow for a lot of companies,” Shackelford said.

    The more personal information that’s out there, Shackelford says, the easier it is for you to become a victim of identity theft.

    “The more personal information that’s out there, Shackelford says, the easier it is for you to become a victim of identity theft.”

    If all the personal information about you that’s already out there and readily available for anyone to purchase in the giant data-brokerage industry (or just browse) hasn’t yet made you vulnerable enough to identity theft, might adding the ISPs’ treasure trove of personal information tip the scales? You’ll find out.

    So what can you do? Well, as the article below notes, there are some things individuals can do to protect their online data from their ISPs, like using a Virtual Private Network or privacy tools like Tor. But as the article also notes, even if you use every trick out there to protect your online privacy, all of that pales in comparison to having an actual law protecting you:

    The Daily Dot

    Think you can protect your privacy from internet providers without FCC rules? Good luck.

    Laura Moy
    Mar 28 at 5:33AM | Last updated Mar 28 at 5:38AM

    Do you feel dissatisfied over the state of online privacy, and wish regulators would do more, not less, to protect your privacy? For most Americans, the answer to that question is yes. Unfortunately, Congress is about to move online privacy in the wrong direction.

    Despite the fact that Americans overwhelmingly want more privacy protections, Congress is on the verge of doing a huge favor to corporate benefactors this week by eliminating some of the strongest privacy protections we have—rules that prevent internet providers from spying on their customers and selling or sharing private information about what their customers do online without permission. The rules also require internet providers to take steps to protect that information from harmful attackers.

    If you feel strongly about your elected representatives in Congress working to dismantle privacy protections you value, then, of course, you should reach out to them and tell them how you feel before it’s too late.

    But if Congress acts to eliminate privacy anyway, you might be left wondering: What next? What can I do to protect my own privacy if Congress is working to destroy it? You can’t very well forgo using the internet, which in today’s world is essential for education, job applications, healthcare, finance, and more. You might not even have the ability to switch providers if you don’t like the invasive practices of your provider—many Americans only have one option when it comes to high-speed internet. And your internet provider has both the ability and incentive to spy on you. In the words of the founder and CEO of one internet provider in Maine, “Your ISP can look at your traffic and discover the most intimate details of your life, and selling that information will ultimately be more valuable than selling the internet connection.”

    The depressing reality is that if and when Congress eliminates internet privacy protections, you’ll be left with few options to defend yourself—the little you can do will pale in comparison to having concrete rules that strictly limit internet providers’ ability to share or sell private information. Your self-help privacy options will be neither appealing nor effective:

    Take advantage of privacy options offered by your provider (maybe). If you’re lucky, your provider might make limited privacy options available to you on what’s known as an “opt-out” basis—meaning they will share your information by default, but allow you to tell them to stop doing that if you can figure out how. Unfortunately, if Congress eliminates existing privacy rules, the details about what information your internet provider collects will probably become even more difficult to find and understand, and internet providers will probably gut many of their more privacy-protective options.

    Subscribe to a “virtual private network.” In addition to your already-expensive internet bill, you could decide to pay for a VPN service that helps to shield some of your Internet traffic from your provider. But it’s not as easy as it sounds: You have to have a bit of geek know-how to properly configure your VPN, and (annoyingly) you’ll also have to remember to turn on your VPN every single time you connect to the internet. Not only that, but tunneling all of your traffic through a VPN will substantially slow down your internet experience. And if that wasn’t bad enough, it might not even address your privacy concerns: Just like your internet provider, your VPN provider could also track and sell your online activities. Needless to say, VPNs are not a magic cure for internet privacy.*

    Install HTTPS Everywhere. This is a free extension for your web browser that routes you automatically to the HTTPS version of the websites you visit. This means you’ll see the friendly green lock icon more often, which indicates that your connection is encrypted and fewer details of your browsing activities will be available to your provider. The HTTPS Everywhere extension is wonderful because everything works automatically once you’ve installed it. But here’s the problem: Many popular sites don’t even support HTTPS. A study by Google last year found that a shockingly high number of sites either use outdated encryption or offer none at all. This means that HTTPS Everywhere can’t help protect your privacy on those sites, even though there are other sites that it helps with. So you should go install HTTPS Everywhere—it’s easy to use, and it does provide some protection—but it’s a far cry from the strong privacy protections that Congress is trying to do away with.

    It should be clear that the things you can do to protect your own privacy from your internet provider are at best suboptimal and at worst horribly insufficient. Indeed, there are several other things not worth discussing here because they are so technically difficult as to be effectively unavailable to the average internet user: Things like swapping out your DNS server, installing your own wireless router (and retiring the one your internet provider gave you), setting up a private email server, encrypting your emails, using the Tor Browser, and periodically changing the MAC addresses of your connected devices.

    And none of these solutions—or all of them together, for that matter—are as good as having rules on the books that just prohibit internet providers from spying on their customers and selling their private information without permission. We have those rules today, but tomorrow they could be gone.

    Good luck, internet users. You may soon be on your own.

    “And none of these solutions—or all of them together, for that matter—are as good as having rules on the books that just prohibit internet providers from spying on their customers and selling their private information without permission. We have those rules today, but tomorrow they could be gone.”

    Yep, we could all jump through elaborate technical hoops in an endless privacy-tools arms race to protect our online privacy from the ISPs’ bottom lines. Or we could, you know, make it illegal and then enforce that law.
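    For a taste of what one of those hoops looks like, here is a toy version of the URL rewrite that the HTTPS Everywhere extension mentioned above automates: try the https:// equivalent of an http:// address, and fall back only if the encrypted connection fails. It’s only an illustration of the idea; the real extension works from curated per-site rulesets, and the function name here is invented:

        # Toy illustration of the http -> https upgrade idea (not HTTPS
        # Everywhere's actual, ruleset-based implementation).
        import requests

        def upgrade_to_https(url: str, timeout: float = 5.0) -> str:
            if url.startswith("http://"):
                candidate = "https://" + url[len("http://"):]
                try:
                    # If the site answers over TLS, prefer the encrypted URL.
                    requests.head(candidate, timeout=timeout, allow_redirects=True)
                    return candidate
                except requests.RequestException:
                    pass  # no HTTPS support; the ISP sees this traffic in the clear
            return url

        print(upgrade_to_https("http://example.com"))
        # -> https://example.com

    And as the article points out, none of that helps on the many sites that simply don’t support HTTPS at all.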

    Of course, even if this latest gift from the GOP pulls a ‘Trumpcare’ and ends up going down in flames at the last minute, there is some validity to the argument that the ISPs merely want to be able to do what online giants like Google and Facebook have been doing for years (and which they do to you whether you’re visiting their sites or some random site). So it’s going to be important to keep in mind that part of the solution to ending the threat of ISP data-brokering is regulating the hell out of all rapacious data-brokers in general. Online and offline. That should even things out.

    Or we could just wait for the industry to come up with its own privacy ‘solutions’.

    Posted by Pterrafractyl | March 29, 2017, 8:14 pm

Post a comment