- Spitfire List - http://spitfirelist.com -

FTR #859 Because They Can: Update on Technocratic Fascism

Dave Emory’s entire life­time of work is avail­able on a flash dri­ve that can be obtained here. [1] The new dri­ve is a 32-giga­byte dri­ve that is cur­rent as of the pro­grams and arti­cles post­ed by late spring of 2015. The new dri­ve (avail­able for a tax-deductible con­tri­bu­tion of $65.00 or more) con­tains FTR #850 [1].  

WFMU-FM is pod­cast­ing For The Record–You can sub­scribe to the pod­cast HERE [2].

You can sub­scribe to e‑mail alerts from Spitfirelist.com HERE [3]

You can sub­scribe to RSS feed from Spitfirelist.com HERE [4].

You can sub­scribe to the com­ments made on pro­grams and posts–an excel­lent source of infor­ma­tion in, and of, itself HERE [5].

This program was recorded in one 60-minute segment [6].

[7]Intro­duc­tion: Albert Ein­stein said of the inven­tion of the atom­ic bomb: “Every­thing has changed but our way of think­ing.” We feel that oth­er, more recent devel­op­ments in the world of Big Tech war­rant the same type of warn­ing.

This program further explores the Brave New World being midwived by technocrats. These stunning developments should be viewed against the background of what we call “technocratic fascism,” referencing a vitally important article by David Golumbia. [8] “. . . . Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (“civic,” “ethical,” “white” and “black” hat alike), hacktivists, WikiLeaks fans [and Julian Assange et al–D. E.], Anonymous “members,” even Edward Snowden himself [9] walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands. . . First, [Tor co-creator] Dingledine claimed that Tor must be supported because it follows directly from a fundamental “right to privacy.” Yet when pressed—and not that hard—he admits that what he means by “right to privacy” is not what any human rights body or “particular legal regime” has meant by it. Instead of talking about how human rights are protected, he asserts that human rights are natural rights and that these natural rights create natural law that is properly enforced by entities above and outside of democratic polities. Where the UN’s Universal Declaration on Human Rights [10] of 1948 is very clear that states and bodies like the UN to which states belong are the exclusive guarantors of human rights, whatever the origin of those rights, Dingledine asserts that a small group of software developers can assign to themselves that role, and that members of democratic polities have no choice but to accept them having that role. . . Further, it is hard not to notice that the appeal to natural rights is today most often associated with the political right, for a variety of reasons (ur-neocon Leo Strauss was one of the most prominent 20th century proponents of these views [11]). We aren’t supposed to endorse Tor because we endorse the right: it’s supposed to be above the left/right distinction. But it isn’t. . . .”

We begin by exam­in­ing a cou­ple of arti­cles rel­e­vant to the world of cred­it.

Big Tech and Big Data have reached the point [12] where, for all intents and purposes, credit card users and virtually everyone else have no personal privacy. Even without detailed personal information, capable tech operators can pin down individuals’ identities with an extraordinary degree of precision using a surprisingly small amount of data.

Compounding the worries of those seeking credit is a new Facebook patent [13] that would enable banks to determine how poor a customer’s friends are and to deny the unsuspecting applicant credit on that very basis!

Even as Big Tech is per­mit­ting finan­cial insti­tu­tions to zero in on cus­tomers to an unprece­dent­ed degree, it is mov­ing in the direc­tion of obscur­ing [14] the doings of Banksters. The Sym­pho­ny [15] net­work offers end-to-end encryp­tion that appears to make the oper­a­tions of the finan­cial insti­tu­tions using it opaque to reg­u­la­tors.

A new variant [16] of the Bitcoin technology will not only facilitate the use of Bitcoin to assassinate [17] public figures but may very well replace–to a certain extent–the functions performed by attorneys. (We have covered Bitcoin–an apparent Underground Reich invention–in FTR #‘s 760 [18], 764 [19], 770 [20], 785 [21].)

As fright­en­ing as some of the above pos­si­bil­i­ties may be, things may get dra­mat­i­cal­ly worse with the intro­duc­tion of “the Inter­net of Things,” [22] per­mit­ting the hack­ing of many types of every­day tech­nolo­gies, as well as the use of those tech­nolo­gies to give Big Tech and Big Data unprece­dent­ed intru­sion into peo­ple’s lives.

Pro­gram High­lights Include: 

1. Big Tech and Big Data have reached the point where, for all intents and purposes, credit card users and virtually everyone else have no personal privacy. Even without detailed personal information, capable tech operators can pin down individuals’ identities with an extraordinary degree of precision using a surprisingly small amount of data.

“The Sin­gu­lar­i­ty Is Already Here–It’s Name Is Big Data” sub­mit­ted by Ben Hunt; Zerohedge.com; 2/08/2015. [12]

Last Thurs­day the jour­nal Sci­ence pub­lished an arti­cle by four MIT-affil­i­at­ed data sci­en­tists (Sandy Pent­land is in the group, and he’s a big name in these cir­cles), titled “Unique in the shop­ping mall: On the rei­den­ti­fi­a­bil­i­ty of cred­it card meta­da­ta”. Sounds innocu­ous enough, but here’s the sum­ma­ry from the front page WSJ arti­cle describ­ing the find­ings:

Researchers at the Mass­a­chu­setts Insti­tute of Tech­nol­o­gy, writ­ing Thurs­day in the jour­nal Sci­ence, ana­lyzed anony­mous cred­it-card trans­ac­tions by 1.1 mil­lion peo­ple. Using a new ana­lyt­ic for­mu­la, they need­ed only four bits of sec­ondary information—metadata such as loca­tion or timing—to iden­ti­fy the unique indi­vid­ual pur­chas­ing pat­terns of 90% of the peo­ple involved, even when the data were scrubbed of any names, account num­bers or oth­er obvi­ous iden­ti­fiers.

Still not sure what this means? It means that I don’t need your name and address, much less your social secu­ri­ty num­ber, to know who you ARE. With a triv­ial amount of trans­ac­tion­al data I can fig­ure out where you live, what you do, who you asso­ciate with, what you buy and what you sell. I don’t need to steal this data, and frankly I wouldn’t know what to do with your social secu­ri­ty num­ber even if I had it … it would just slow down my analy­sis. No, you give me every­thing I need just by liv­ing your very con­ve­nient life, where you’ve vol­un­teered every bit of trans­ac­tion­al infor­ma­tion in the fine print of all of these won­drous ser­vices you’ve signed up for. And if there’s a bit more infor­ma­tion I need – say, a device that records and trans­mits your dri­ving habits – well, you’re only too hap­py to sell that to me for a few dol­lars off your insur­ance pol­i­cy. After all, you’ve got noth­ing to hide. It’s free mon­ey!
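
A toy sketch (in Python, with entirely made-up rows and column names) of the re-identification the researchers describe: a handful of observed (merchant, day) points is matched against an “anonymized” transaction log, and the candidate set typically collapses to a single pseudonymous ID.

```python
# Toy illustration of re-identification from a few metadata points.
# All data below is fabricated; the column layout is an assumption for the sketch.
from collections import defaultdict

# "Anonymized" transaction log: (pseudonymous_id, merchant, day)
transactions = [
    ("user_7f3a", "coffee_shop_12", "2015-01-05"),
    ("user_7f3a", "grocer_3",       "2015-01-06"),
    ("user_7f3a", "gas_station_9",  "2015-01-08"),
    ("user_7f3a", "bookstore_2",    "2015-01-09"),
    ("user_c21d", "coffee_shop_12", "2015-01-05"),
    ("user_c21d", "pharmacy_4",     "2015-01-07"),
    # ... millions more rows in a real dataset
]

# Four pieces of "secondary information" observed about a known person,
# e.g. from receipts, geotagged posts, or a loyalty program.
observed_points = {
    ("coffee_shop_12", "2015-01-05"),
    ("grocer_3",       "2015-01-06"),
    ("gas_station_9",  "2015-01-08"),
    ("bookstore_2",    "2015-01-09"),
}

# Group the log by pseudonymous ID, then keep only IDs whose history
# contains every observed point. With enough dimensions, the surviving
# set almost always shrinks to a single ID -- the re-identification.
history = defaultdict(set)
for pid, merchant, day in transactions:
    history[pid].add((merchant, day))

candidates = [pid for pid, points in history.items()
              if observed_points <= points]
print(candidates)   # -> ['user_7f3a']
```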

Almost every investor I know believes that the tools of sur­veil­lance and Big Data are only used against the mar­gin­al­ized Oth­er – ter­ror­ist “sym­pa­thiz­ers” in Yemen, gang “asso­ciates” in Comp­ton – but not us. Oh no, not us. And if those tools are trained on us, it’s only to pro­mote “trans­paren­cy” and weed out the bad guys lurk­ing in our midst. Or maybe to sug­gest a movie we’d like to watch. What could pos­si­bly be wrong with that? I’ve writ­ten a lot (here, here, and here) about what’s wrong with that, about how the mod­ern fetish with trans­paren­cy, aid­ed and abet­ted by tech­nol­o­gy and gov­ern­ment, per­verts the core small‑l lib­er­al insti­tu­tions of mar­kets and rep­re­sen­ta­tive gov­ern­ment.

It’s not that we’re com­pla­cent about our per­son­al infor­ma­tion. On the con­trary, we are obsessed about the per­son­al “keys” that are mean­ing­ful to humans – names, social secu­ri­ty num­bers, pass­words and the like – and we spend bil­lions of dol­lars and mil­lions of hours every year to con­trol those keys, to pre­vent them from falling into the wrong hands of oth­er humans. But we will­ing­ly hand over a dif­fer­ent set of keys to non-human hands with­out a sec­ond thought.

The prob­lem is that our human brains are wired to think of data pro­cess­ing in human ways, and so we assume that com­put­er­ized sys­tems process data in these same human ways, albeit more quick­ly and more accu­rate­ly. Our sci­ence fic­tion is filled with com­put­er sys­tems that are essen­tial­ly god-like human brains, machines that can talk and “think” and manip­u­late phys­i­cal objects, as if sen­tience in a human con­text is the pin­na­cle of data pro­cess­ing! This anthro­po­mor­phic bias dri­ves me nuts, as it damp­ens both the sense of awe and the sense of dan­ger we should be feel­ing at what already walks among us. [25] It seems like every­one and his broth­er today are wring­ing their hands about AI and some impend­ing “Sin­gu­lar­i­ty”, a moment of future doom where non-human intel­li­gence achieves some human-esque sen­tience and decides in Matrix-like fash­ion to turn us into bat­ter­ies or some such. Please. The Sin­gu­lar­i­ty is already here. Its name is Big Data.

Big Data is mag­ic, in exact­ly the sense that Arthur C. Clarke wrote of suf­fi­cient­ly advanced tech­nol­o­gy. It’s mag­ic in a way that ther­monu­clear bombs and tele­vi­sion are not, because for all the com­plex­i­ty of these inven­tions they are dri­ven by cause and effect rela­tion­ships in the phys­i­cal world that the human brain can process com­fort­ably, phys­i­cal world rela­tion­ships that might not have exist­ed on the African savan­na 2,000,000 years ago but are under­stand­able with the sen­so­ry and neur­al organs our ances­tors evolved on that savan­na. Big Data sys­tems do not “see” the world as we do, with mere­ly 3 dimen­sions of phys­i­cal real­i­ty. Big Data sys­tems are not social ani­mals, evolved by nature and trained from birth to inter­pret all sig­nals through a social lens. Big Data sys­tems are sui gener­is, a way of per­ceiv­ing the world that may have been invent­ed by human inge­nu­ity and can serve human inter­ests, but are utter­ly non-human and pro­found­ly not of this world.

A Big Data sys­tem couldn’t care less if it has your spe­cif­ic social secu­ri­ty num­ber or your spe­cif­ic account ID, because it’s not under­stand­ing who you are based on how you iden­ti­fy your­self to oth­er humans. That’s the human bias here, that a Big Data sys­tem would try to pre­dict our indi­vid­ual behav­ior based on an analy­sis of what we indi­vid­u­al­ly have done in the past, as if the com­put­er were some super-advanced ver­sion of Sher­lock Holmes. No, what a Big Data sys­tem can do is look at ALL of our behav­iors, across ALL dimen­sions of that behav­ior, and infer what ANY of us would do under sim­i­lar cir­cum­stances. It’s a sim­ple con­cept, real­ly, but what the human brain can’t eas­i­ly com­pre­hend is the vast­ness of the ALL part of the equa­tion or what it means to look at the ALL simul­ta­ne­ous­ly and in par­al­lel. I’ve been work­ing with infer­ence engines for almost 30 years now, and while I think that I’ve got unusu­al­ly good instincts for this and I’ve been able to train my brain to kin­da sor­ta think in mul­ti-dimen­sion­al terms, the truth is that I only get glimpses of what’s hap­pen­ing inside these engines. I can chan­nel the mag­ic, I can appre­ci­ate the mag­ic, and on a pure­ly sym­bol­ic lev­el I can describe the mag­ic. But on a fun­da­men­tal lev­el I don’t under­stand the mag­ic, and nei­ther does any oth­er human. What I can say to you with absolute cer­tain­ty, how­ev­er, is that the mag­ic exists and there are plen­ty of magi­cians like me out there, with more grad­u­at­ing from MIT and Har­vard and Stan­ford every year.
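
A crude illustration of the kind of inference Hunt describes–predicting what you will do from what people who behave like you have already done–is sketched below; every feature, number and label is invented for the example.

```python
# Minimal "infer what ANY of us would do" sketch: nearest-neighbour
# prediction over behavioral profiles. All numbers are fabricated.
import math

# Behavioral profiles other people "volunteered": feature vector -> observed action
profiles = [
    ([0.9, 0.1, 0.7], "bought_insurance_addon"),
    ([0.8, 0.2, 0.6], "bought_insurance_addon"),
    ([0.1, 0.9, 0.3], "cancelled_subscription"),
    ([0.2, 0.8, 0.2], "cancelled_subscription"),
]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict(target, k=3):
    # Rank everyone by similarity to the target and take a majority vote
    # of what the k most similar people actually did.
    nearest = sorted(profiles, key=lambda p: distance(p[0], target))[:k]
    actions = [action for _, action in nearest]
    return max(set(actions), key=actions.count)

# A new person described only by the same behavioral features.
print(predict([0.85, 0.15, 0.65]))   # -> 'bought_insurance_addon'
```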

Here’s the mag­ic trick that I’m wor­ried about for investors.

In exact­ly the same way that we have giv­en away our per­son­al behav­ioral data to banks and cred­it card com­pa­nies and wire­less car­ri­ers and insur­ance com­pa­nies and a mil­lion app providers, so are we now being tempt­ed to give away our port­fo­lio behav­ioral data to mega-banks and mega-asset man­agers and the tech­nol­o­gy providers who work with them. Don’t wor­ry, they say, there’s noth­ing in this infor­ma­tion that iden­ti­fies you direct­ly. It’s all anony­mous. What rub­bish! With enough anony­mous port­fo­lio behav­ioral data and a laugh­ably small IT bud­get, any com­pe­tent magi­cian can design a Big Data sys­tem that can pre­dict with 90% accu­ra­cy what you will buy and sell in your account, at what price you will buy and sell, and under what exter­nal macro con­di­tions you will buy and sell. Every day these pri­vate data sets at the mega-mar­ket play­ers get big­ger and big­ger, and every day we get clos­er and clos­er to a Citadel or a Renais­sance per­fect­ing their Infer­ence Machine for the liq­uid cap­i­tal mar­kets. For all I know, they already have. . . .

2. Check out Facebook’s new patent, to be evaluated in conjunction with the previous story. Facebook’s patent is for a service that will let banks scan your Facebook friends for the purpose of assessing your credit quality. For instance, Facebook might set up a service where banks can take the average of the credit ratings for all of the people in your social network, and if that average doesn’t meet a minimum credit score, your loan application is denied. And that’s not just some random application of Facebook’s new patent–the system of using the average credit scores of your social network to deny you loans is explicitly part of the patent [13]:

“Facebook’s New Plan: Help Banks Figure Out How Poor You Are So They Can Deny You Loans” [13] by Jack Smith IV; mic.com [13]; 8/5/2015. [13]

If you and your Face­book friends are poor, good luck get­ting approved for a loan.

Face­book has reg­is­tered a patent [26] for a sys­tem that would let banks and lenders screen your social net­work before decid­ing whether or not you’re approved for a loan. If your Face­book friends’ aver­age cred­it scores don’t make the cut, the bank can reject you. The patent is word­ed in clear, ter­ri­fy­ing lan­guage that speaks for itself:

When an indi­vid­ual applies for a loan, the lender exam­ines the cred­it rat­ings of mem­bers of the individual’s social net­work who are con­nected to the indi­vid­ual through autho­rized nodes. If the aver­age cred­it rat­ing of these mem­bers is at least a min­i­mum cred­it score, the lender con­tin­ues to process the loan appli­ca­tion. Oth­er­wise, the loan appli­ca­tion is reject­ed.
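
A minimal sketch of the screening logic in that patent language, with a hypothetical social graph, scores and cutoff (nothing below is drawn from the actual filing):

```python
# Sketch of the loan-screening logic described in the patent language above.
# The social graph, scores and threshold are all hypothetical.
from statistics import mean

social_graph = {
    "applicant": ["friend_a", "friend_b", "friend_c"],   # "authorized nodes"
}
credit_scores = {"friend_a": 540, "friend_b": 610, "friend_c": 580}

MINIMUM_AVERAGE_SCORE = 600   # lender-chosen cutoff (placeholder)

def screen_application(applicant):
    friends = social_graph.get(applicant, [])
    scores = [credit_scores[f] for f in friends if f in credit_scores]
    if not scores:
        return "no_network_data"
    # "If the average credit rating of these members is at least a minimum
    #  credit score, the lender continues to process the loan application.
    #  Otherwise, the loan application is rejected."
    if mean(scores) >= MINIMUM_AVERAGE_SCORE:
        return "continue_processing"
    return "rejected"

print(screen_application("applicant"))   # -> 'rejected' (average is about 576.7)
```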

It’s very lit­er­ally guilt by asso­ci­a­tion, allow­ing banks and lenders to pro­file you by the sta­tus of your loved ones.

Though a cred­it score isn’t nec­es­sar­ily a reflec­tion of your wealth, it can serve as a rough guide­line [27] for who has a reli­able, man­aged income and who has had to lean on cred­it in try­ing times. A line of cred­it is some­times a life­line, either for start­ing a new busi­ness or escap­ing a tem­po­rary hard­ship.

Pro­fil­ing peo­ple for being in social cir­cles where low cred­it scores are like­ly could cut off someone’s chances of find­ing finan­cial relief. In effect, it’s a device that iso­lates the poor and keeps them poor.

A bold new era for dis­crim­i­na­tion: In the Unit­ed States, it’s ille­gal to deny some­one a loan based on tra­di­tional iden­ti­fiers like race or gen­der — the kinds of things peo­ple usu­ally use to dis­crim­i­nate. But these laws were made before Face­book was able to peer into your social graph and learn when, where and how long you’ve known your friends and acquain­tances.

The fit­ness-track­ing tech com­pany Fit­bit said in 2014 that the fastest grow­ing part of their busi­ness [28] is help­ing employ­ers mon­i­tor the health of their employ­ees. Once insur­ers show inter­est in this infor­ma­tion, you can bet they’ll be mak­ing a few rejec­tions of their own. And if a group insur­ance plan that affects every employ­ee depends on mea­sur­able, real-time data for the fit­ness of its employ­ees, how will that affect the hir­ing process?

...

And if you don’t like it, just find rich­er friends.

3a. A consortium of 14 mega-banks has privately developed a special super-secure inter-bank messaging system that uses end-to-end strong encryption and permanently deletes data. The Symphony system may very well make it impossible for regulators to adequately oversee the financial malefactors responsible for the 2008 financial meltdown.

“NY Regulator Sends Message to Symphony” [14] by Ben McLannahan and Gina Chon; Financial Times [14]; 7/22/2015. [14]

New York’s state bank­ing reg­u­la­tor has fired a shot across the bows of Sym­phony [29], a mes­sag­ing ser­vice about to be launched by a con­sor­tium of Wall Street banks and asset man­agers, by call­ing for infor­ma­tion on how it man­ages — and deletes — cus­tomer data.

In a let­ter on Wednes­day to David Gurle, the chief exec­u­tive of Sym­phony Com­mu­ni­ca­tion Ser­vices, the New York Depart­ment of Finan­cial Ser­vices asked it to clar­ify how its tool would allow firms to erase their data trails, poten­tially falling foul of laws on record-keep­ing.

The let­ter, which was signed by act­ing super­in­ten­dent Antho­ny Albanese and shared with the press, not­ed that cha­t­room tran­scripts had formed a crit­i­cal part of author­i­ties’ inves­ti­ga­tions into the rig­ging of mar­kets for for­eign exchange and inter­bank loans. It called for Sym­phony to spell out its doc­u­ment reten­tion [30] capa­bil­i­ties, poli­cies and fea­tures, cit­ing two spe­cific areas of inter­est as “data dele­tion” and “end-to-end encryp­tion”.

The let­ter marks the first expres­sion of con­cern from reg­u­la­tors over a new ini­tia­tive that has set out to chal­lenge the dom­i­nance [31] of Bloomberg, whose 320,000-plus sub­scribers [32] ping about 200m mes­sages a day between ter­mi­nals using its com­mu­ni­ca­tion tools.

Peo­ple famil­iar with the mat­ter described the inquiry as an infor­ma­tion gath­er­ing exer­cise, which could con­clude that Sym­phony is a per­fectly legit­i­mate enter­prise.

The NYDFS not­ed that Symphony’s mar­ket­ing mate­ri­als state that “Sym­phony has designed a spe­cific set of pro­ce­dures to guar­an­tee that data dele­tion is per­ma­nent and ful­ly doc­u­mented. We also delete con­tent on a reg­u­lar basis in accor­dance with cus­tomer data reten­tion poli­cies.”

Mr Albanese also wrote that he would fol­low up with four con­sor­tium mem­bers that the NYDFS reg­u­lates — Bank of New York Mel­lon, Cred­it Suisse, Deutsche Bank and Gold­man Sachs — to ask them how they plan to use the new ser­vice, which will go live for big cus­tomers in the first week of August.

The reg­u­la­tor said it was keen to find out how banks would ensure that mes­sages cre­ated using Sym­phony would be retained, and “whether their use of Symphony’s encryp­tion tech­nol­ogy can be used to pre­vent review by com­pli­ance per­son­nel or reg­u­la­tors”. It also flagged con­cerns over the open-source fea­tures of the prod­uct, won­der­ing if they could be used to “cir­cum­vent” over­sight.

The oth­er mem­bers of the con­sor­tium are Bank of Amer­ica Mer­rill Lynch, Black­Rock, Citadel, Cit­i­group, HSBC, Jef­feries, JPMor­gan, Mav­er­ick Cap­i­tal, Mor­gan Stan­ley and Wells Far­go. Togeth­er they have chipped in about $70m to get Sym­phony start­ed. Anoth­er San Fran­cis­co-based fund run by a for­mer col­league of Mr Gurle’s, Merus Cap­i­tal, has a 5 per cent inter­est.

“Sym­phony is built on a foun­da­tion of secu­rity, com­pli­ance and pri­vacy fea­tures that were built to enable our finan­cial ser­vices and enter­prise cus­tomers to meet their reg­u­la­tory require­ments,” said Mr Gurle. “We look for­ward to explain­ing the var­i­ous aspects of our com­mu­ni­ca­tions plat­form to the New York Depart­ment of Finan­cial Ser­vices.”

3b. Accord­ing to Symphony’s back­ers, noth­ing could go wrong because all the infor­ma­tion that banks are required to retain for reg­u­la­tory pur­poses is indeed retained in the sys­tem. Whether or not reg­u­la­tors can actu­ally access that retained data, how­ever, appears to be more of an open ques­tion. Again, the end-to-end encryp­tion may very well insu­late Banksters from the reg­u­la­tion vital to avoid a repeat of the 2008 sce­nario.

“Symphony, the ‘WhatsApp for Wall Street,’ Orchestrates a Nuanced Response to Regulatory Critics” by Michael del Castillo; New York Business Journal [15]; 8/13/2015. [15]

Symphony is taking heat from some in Washington, D.C. [33], for its WhatsApp-like messaging service that promises to encrypt Wall Street’s messages from end to end. At the heart of the concern is whether or not the keys used to decrypt the messages will be made available to regulators, or if another form of back door access will be provided.

Without such keys it would be immensely more difficult to retrace the steps of shady characters on Wall Street during regulatory investigations — an ability which, according to a New York Post report [34], has resulted in $74 billion in fines over the past five years.

So, ear­lier this week Sym­phony took to the blo­gos­phere with a rather detailed expla­na­tion [35] of its plans to be com­pli­ant with reg­u­la­tors. In spite of answer­ing a lot of ques­tions though, one key point was either deft­ly evad­ed, or over­looked.

What Sym­phony does, accord­ing to the blog post:

Sym­phony pro­vides its cus­tomers with an inno­v­a­tive “end-to-end” secure mes­sag­ing capa­bil­ity that pro­tects com­mu­ni­ca­tions in the cloud from cyber-threats and the risk of data breach, while safe­guard­ing our cus­tomers’ abil­ity to retain records of their mes­sages. Sym­phony pro­tects data, not only when it trav­els from “point-to-point” over net­work con­nec­tions, but also the entire time the data is in the cloud.

How it works:

Large insti­tu­tions using Sym­phony typ­i­cally will store encryp­tion keys using spe­cial­ized hard­ware key man­age­ment devices known as Hard­ware Secu­rity Mod­ules (HSMs). These mod­ules are installed in data cen­ters and pro­tect an organization’s keys, stor­ing them with­in the secure pro­tected mem­ory of the HSM. Firms will use these keys to decrypt data and then feed the data into their record reten­tion sys­tems.

The crux:

Sym­phony is designed to inter­face with record reten­tion sys­tems com­monly deployed in finan­cial insti­tu­tions. By help­ing orga­ni­za­tions reli­ably store mes­sages in a cen­tral archive, our plat­form facil­i­tates the rapid and com­plete retrieval of records when need­ed. Sym­phony pro­vides secu­rity while data trav­els through the cloud; firms then secure­ly receive the data from Sym­phony, decrypt it and store it so they can meet their reten­tion oblig­a­tions.
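
A minimal sketch of the pattern the post describes–ciphertext only while in the cloud, decryption with a key held by the member firm (in practice inside an HSM), plaintext fed into the firm’s own retention archive. Fernet here is a stand-in cipher; nothing below reflects Symphony’s actual protocol or key handling.

```python
# Sketch of "encrypted in the cloud, decrypted only by the member firm,
# then archived for compliance". Fernet stands in for whatever Symphony
# actually uses; this is an illustrative assumption, not their design.
from cryptography.fernet import Fernet

# Key held by the bank, in practice inside a Hardware Security Module (HSM).
firm_key = Fernet.generate_key()
firm_cipher = Fernet(firm_key)

cloud_store = []        # what the messaging service sees: ciphertext only
retention_archive = []  # the firm's own record-retention system: plaintext

def send_message(text: str) -> None:
    # The service relays and stores only ciphertext it cannot read.
    cloud_store.append(firm_cipher.encrypt(text.encode()))

def archive_for_compliance() -> None:
    # The firm decrypts with its own key and feeds records into its archive --
    # which is where a regulator would have to look, if given access.
    for token in cloud_store:
        retention_archive.append(firm_cipher.decrypt(token).decode())

send_message("trader A to trader B: let's discuss that fix level offline")
archive_for_compliance()
print(retention_archive[0])
```

In this arrangement everything hinges on the firm: whether a regulator ever sees the plaintext depends on being granted access to that archive or that key, which is precisely the open question the article raises.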

The poten­tial to store every key-stroke of every employ­ee behind an encrypt­ed wall safe from mali­cious gov­ern­ments and oth­er enti­ties is one that should make Wall Streeters, and those depen­dent on Wall Street resources, sleep a bit bet­ter at night.

But nowhere in Symphony’s blog post does it actu­ally say that any of the 14 com­pa­nies which have invest­ed $70 mil­lion in the prod­uct, or any of the forth­com­ing cus­tomers who might sign up to use it, will actu­ally share any­thing with reg­u­la­tors. Sure, it will retain all the infor­ma­tion oblig­ed by reg­u­la­tors, which in the right hands is equal­ly use­ful to the com­pa­nies. So there’s no sur­prise there.

The clos­est we see to any actu­al assur­ance that the Sil­i­con Val­ley-based com­pany plans to share that infor­ma­tion with reg­u­la­tors is that Sym­phony is “designed to inter­face with record reten­tion sys­tems com­monly deployed in finan­cial insti­tu­tions.” Which the­o­ret­i­cally, means the SEC, the DOJ, or any num­ber of reg­u­la­tory bod­ies could plug in, assum­ing they had access.

So, the ques­tions remain, will Sym­phony be build­ing in some sort of back-door access for reg­u­la­tors? Or will it just be stor­ing that infor­ma­tion required of reg­u­la­tors, but for its clients’ use?

...

4a. The Bitcoin assassination markets [17] are about to get some competition. A new variant of the Bitcoin technology will not only permit the use of Bitcoin to assassinate public figures but may very well replace–to a certain extent–the functions performed by attorneys.

“Bitcoin’s Dark Side Could Get Darker” [16] by Tom Simonite; MIT Technology Review; 8/13/2015. [16]

Investors see rich­es in a cryp­tog­ra­phy-enabled tech­nol­ogy called smart contracts–but it could also offer much to crim­i­nals.

Some of the ear­li­est adopters of the dig­i­tal cur­rency Bit­coin were crim­i­nals, who have found it invalu­able in online mar­ket­places for con­tra­band and as pay­ment extort­ed through lucra­tive “ran­somware” that holds per­sonal data hostage. A new Bit­coin-inspired tech­nol­ogy that some investors believe will be much more use­ful and pow­er­ful may be set to unlock a new wave of crim­i­nal inno­va­tion.

That tech­nol­ogy is known as smart contracts—small com­puter pro­grams that can do things like exe­cute finan­cial trades or nota­rize doc­u­ments in a legal agree­ment. Intend­ed to take the place of third-par­ty human admin­is­tra­tors such as lawyers, which are required in many deals and agree­ments, they can ver­ify infor­ma­tion and hold or use funds using sim­i­lar cryp­tog­ra­phy to that which under­pins Bit­coin.

Some companies think smart contracts could make financial markets more efficient, or simplify complex transactions such as property deals (see “The Startup Meant to Reinvent What Bitcoin Can Do [36]”). Ari Juels [37], a cryptographer and professor at the Jacobs Technion-Cornell Institute at Cornell Tech, believes they will also be useful for illegal activity–and, with two collaborators, he has demonstrated how.

“In some ways this is the per­fect vehi­cle for crim­i­nal acts, because it’s meant to cre­ate trust in sit­u­a­tions where oth­er­wise it’s dif­fi­cult to achieve,” says Juels.

In a paper to be released today [38], Juels, fellow Cornell professor Elaine Shi [39], and University of Maryland researcher Ahmed Kosba [40] present several examples of what they call “criminal contracts.” They wrote them to work on the recently launched smart-contract platform Ethereum [41].

One exam­ple is a con­tract offer­ing a cryp­tocur­rency reward for hack­ing a par­tic­u­lar web­site. Ethereum’s pro­gram­ming lan­guage makes it pos­si­ble for the con­tract to con­trol the promised funds. It will release them only to some­one who pro­vides proof of hav­ing car­ried out the job, in the form of a cryp­to­graph­i­cally ver­i­fi­able string added to the defaced site.
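
The verify-then-release logic can be mocked up off-chain in a few lines; the researchers’ actual demonstrations were Ethereum contracts, and everything below (the class, names and values) is illustrative only.

```python
# Off-chain mock-up of the "pay only on cryptographic proof" pattern
# described above; the real criminal-contract examples ran on Ethereum,
# not Python. All names and values are invented for illustration.
import hashlib

class BountyEscrow:
    def __init__(self, reward: int, proof_commitment: str):
        self.reward = reward                      # escrowed funds (abstracted)
        self.proof_commitment = proof_commitment  # hash of the secret proof string
        self.paid = False

    def claim(self, proof_string: str, fetched_page: str) -> bool:
        # 1. The claimant's string must hash to the commitment fixed up front.
        digest = hashlib.sha256(proof_string.encode()).hexdigest()
        if digest != self.proof_commitment:
            return False
        # 2. The same string must verifiably appear on the defaced site
        #    (here, simply a page the verifier fetched).
        if proof_string not in fetched_page:
            return False
        self.paid = True   # the contract releases the reward
        return True

secret = "proof-token-123"
escrow = BountyEscrow(reward=100,
                      proof_commitment=hashlib.sha256(secret.encode()).hexdigest())
print(escrow.claim(secret, fetched_page="<html>defaced! proof-token-123</html>"))  # True
```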

Contracts with a similar design could be used to commission many kinds of crime, say the researchers. Most provocatively, they outline a version designed to arrange the assassination of a public figure. A person wishing to claim the bounty would have to send information such as the time and place of the killing in advance. The contract would pay out after verifying that those details had appeared in several trusted news sources, such as news wires. A similar approach could be used for lesser physical crimes, such as high-profile vandalism.

“It was a bit of a sur­prise to me that these types of crimes in the phys­i­cal world could be enabled by a dig­i­tal sys­tem,” says Juels. He and his coau­thors say they are try­ing to pub­li­cize the poten­tial for such activ­ity to get tech­nol­o­gists and pol­icy mak­ers think­ing about how to make sure the pos­i­tives of smart con­tracts out­weigh the neg­a­tives.

“We are opti­mistic about their ben­e­fi­cial appli­ca­tions, but crime is some­thing that is going to have to be dealt with in an effec­tive way if those ben­e­fits are to bear fruit,” says Shi.

Nico­las Christin [42], an assis­tant pro­fes­sor at Carnegie Mel­lon Uni­ver­sity who has stud­ied crim­i­nal uses of Bit­coin, agrees there is poten­tial for smart con­tracts to be embraced by the under­ground. “It will not be sur­pris­ing,” he says. “Fringe busi­nesses tend to be the first adopters of new tech­nolo­gies, because they don’t have any­thing to lose.”

...

Gavin Wood, chief tech­nol­ogy offi­cer at Ethereum, notes that legit­i­mate busi­nesses are already plan­ning to make use of his technology—for exam­ple, to pro­vide a dig­i­tally trans­fer­able proof of own­er­ship of gold [43].

How­ever, Wood acknowl­edges it is like­ly that Ethereum will be used in ways that break the law—and even says that is part of what makes the tech­nol­ogy inter­est­ing. Just as file shar­ing found wide­spread unau­tho­rized use and forced changes in the enter­tain­ment and tech indus­tries, illic­it activ­ity enabled by Ethereum could change the world, he says.

“The poten­tial for Ethereum to alter aspects of soci­ety is of sig­nif­i­cant mag­ni­tude,” says Wood. “This is some­thing that would pro­vide a tech­ni­cal basis for all sorts of social changes and I find that excit­ing.”

For exam­ple, Wood says that Ethereum’s soft­ware could be used to cre­ate a decen­tral­ized ver­sion of a ser­vice such as Uber, con­nect­ing peo­ple want­ing to go some­where with some­one will­ing to take them, and han­dling the pay­ments with­out the need for a com­pany in the mid­dle. Reg­u­la­tors like those har­ry­ing Uber in many places around the world would be left with noth­ing to tar­get. “You can imple­ment any Web ser­vice with­out there being a legal enti­ty behind it,” he says. “The idea of mak­ing cer­tain things impos­si­ble to leg­is­late against is real­ly inter­est­ing.”

4b. If you’re a former subscriber of the “Ashley Madison” website for cheating, just FYI, you might be getting a friendly email soon [24]:

“Extortionists Are After the Ashley Madison Users and They Want Bitcoin” by Adam Clark Estes; Gizmodo; 8/21/15. [24]

Peo­ple are the worst. An unknown num­ber of ass­holes are threat­en­ing to expose Ash­ley Madi­son users [44], pre­sum­ably ruin­ing their mar­riages. The hack­ing vic­tims must pay the extor­tion­ists “exact­ly 1.0000001 Bit­coins” or the spouse gets noti­fied. Ugh.

This is an unnerv­ing but not unpre­dictable turn of events. The data that the Ash­ley Madi­son hack­ers released ear­ly this week [45] includ­ed mil­lions of real email address­es, along with real home address­es, sex­ual pro­cliv­i­ties and oth­er very pri­vate infor­ma­tion. Secu­rity blog­ger Bri­an Krebs talked to secu­rity firms who have evi­dence of extor­tion schemes linked to Ash­ley Madi­son data. Turns out spam fil­ters are catch­ing a num­ber of emails being sent to vic­tims from peo­ple who say they’ll make the infor­ma­tion pub­lic unless they get paid!

Here’s one caught by an email provider in Mil­wau­kee [44]:

Hel­lo,

Unfor­tu­nately, your data was leaked in the recent hack­ing of Ash­ley Madi­son and I now have your infor­ma­tion.

If you would like to pre­vent me from find­ing and shar­ing this infor­ma­tion with your sig­nif­i­cant oth­er send exact­ly 1.0000001 Bit­coins (approx. val­ue $225 USD) to the fol­low­ing address:

1B8eH7HR87vbVbMzX4gk9nYyus3KnXs4Ez

Send­ing the wrong amount means I won’t know it’s you who paid.

You have 7 days from receipt of this email to send the BTC [bit­coins]. If you need help locat­ing a place to pur­chase BTC, you can start here…..

...

One secu­rity expert explained to Krebs that this type of extor­tion could be dan­ger­ous. “There is going to be a dra­matic crime wave of these types of vir­tual shake­downs, and they’ll evolve into spear-phish­ing cam­paigns that lever­age cryp­to mal­ware,” said Tom Keller­man of Trend Micro.

That sounds a lit­tle dra­matic, but bear in mind just how many peo­ple were involved. Even if you assume some of the accounts were fake, there are poten­tially mil­lions [46] who’ve had their pri­vate infor­ma­tion post­ed on the dark web for any­body to see and abuse. Some of these peo­ple are in the mil­i­tary, too, where they’d face pos­si­ble penal­ties for adul­tery. [47] If some goons think they can squeeze a bit­coin out of each of them, there are poten­tially tens of mil­lions of dol­lars to be made.

The word “poten­tially” is impor­tant because some of these extor­tion emails are obvi­ously get­ting stuck in spam fil­ters, and some of the extor­tion­ists could eas­ily just be bluff­ing. Either way, every­body los­es when com­pa­nies fail to secure their users’ data. Every­body except the crim­i­nals.
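
The oddly precise demand of “exactly 1.0000001 Bitcoins” is itself the tracking mechanism: Bitcoin’s ledger is public, so an extortionist who quotes each victim a slightly different amount can tell who has paid simply by watching for that amount to arrive. A toy sketch of that matching step, with fabricated data:

```python
# Toy sketch of matching payments to victims by a unique ransom amount.
# Addresses, amounts and the "observed payments" are fabricated.

# Each victim was quoted a slightly different amount in their email.
demands = {
    "victim_001@example.com": 1.0000001,
    "victim_002@example.com": 1.0000002,
    "victim_003@example.com": 1.0000003,
}

# Publicly visible payments arriving at the extortionist's address.
observed_payments = [1.0000002, 0.5000000]

def who_paid(demands, payments, tolerance=1e-9):
    paid = []
    for email, amount in demands.items():
        if any(abs(amount - p) <= tolerance for p in payments):
            paid.append(email)
    return paid

print(who_paid(demands, observed_payments))   # -> ['victim_002@example.com']
```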

5. The emergence of what is coming to be called “The Internet of Things” holds truly ominous possibilities. Not only can Big Data/Big Tech get their hooks into people’s lives to an even greater extent than they can now (see Item #1 in this description) but hackers can have a field day.

“Why Smart Objects May Be a Dumb Idea” by Zeynep Tufekci; The New York Times; 8/10/2015. [22]

A fridge that puts milk on your shop­ping list when you run low. A safe that tal­lies the cash that is placed in it. A sniper rifle equipped with advanced com­put­er tech­nol­o­gy for improved accu­ra­cy. A car that lets you stream music from the Inter­net.

All of these innovations sound great, until you learn the risks that this type of connectivity carries. Recently, two security researchers [48], sitting on a couch and armed only with laptops, remotely took over a Chrysler Jeep Cherokee [49] speeding along the highway, shutting down its engine as an 18-wheeler truck rushed toward it. They did this all while a Wired reporter was driving the car. Their expertise would allow them to hack any Jeep as long as they knew the car’s I.P. address, its network address on the Internet. They turned the Jeep’s entertainment dashboard into a gateway to the car’s steering, brakes and transmission.

A hacked car is a high-pro­file exam­ple of what can go wrong with the com­ing Inter­net of Things — objects equipped with soft­ware and con­nect­ed to dig­i­tal net­works. The sell­ing point for these well-con­nect­ed objects is added con­ve­nience and bet­ter safe­ty. In real­i­ty, it is a fast-motion train wreck in pri­va­cy and secu­ri­ty.

The ear­ly Inter­net was intend­ed to con­nect peo­ple who already trust­ed one anoth­er, like aca­d­e­m­ic researchers or mil­i­tary net­works. It nev­er had the robust secu­ri­ty that today’s glob­al net­work needs. As the Inter­net went from a few thou­sand users to more than three bil­lion [50], attempts to strength­en secu­ri­ty were stymied because of cost, short­sight­ed­ness and com­pet­ing inter­ests. Con­nect­ing every­day objects to this shaky, inse­cure base will cre­ate the Inter­net of Hacked Things. This is irre­spon­si­ble and poten­tial­ly cat­a­stroph­ic.

That smart safe? Hack­ers can emp­ty it with a sin­gle USB stick [51] while eras­ing all logs of its activ­i­ty — the evi­dence of deposits and with­drawals — and of their crime. That high-tech rifle? Researchers man­aged to remote­ly manip­u­late its tar­get selec­tion [52] with­out the shooter’s know­ing.

Home builders and car man­u­fac­tur­ers have shift­ed to a new busi­ness: the risky world of infor­ma­tion tech­nol­o­gy. Most seem utter­ly out of their depth.

Although Chrysler quick­ly recalled 1.4 mil­lion Jeeps [53] to patch this par­tic­u­lar vul­ner­a­bil­i­ty, it took the com­pa­ny more than a year [54] after the issue was first not­ed, and the recall occurred only after that spec­tac­u­lar pub­lic­i­ty stunt on the high­way and after it was request­ed by the Nation­al High­way Traf­fic Safe­ty Admin­is­tra­tion [54]. In announc­ing the soft­ware fix, the com­pa­ny said that no defect had been found [55]. If two guys sit­ting on their couch turn­ing off a speed­ing car’s engine from miles away doesn’t qual­i­fy, I’m not sure what counts as a defect in Chrysler’s world. And Chrysler is far from the only com­pa­ny com­pro­mised: from BMW [56] to Tes­la [57] to Gen­er­al Motors [58], many auto­mo­tive brands have been hacked, with sure­ly more to come.

Dramatic hacks attract the most attention, but the software errors that allow them to occur are ubiquitous. While complex breaches can take real effort — the Jeep hacker duo spent two years researching — simple errors in the code can also cause significant failure. Adding software with millions of lines of code to everyday objects greatly multiplies the opportunities for such errors.

The Inter­net of Things is also a pri­va­cy night­mare. Data­bas­es that already have too much infor­ma­tion about us will now be burst­ing with data on the places we’ve dri­ven, the food we’ve pur­chased and more. Last week, at Def Con, the annu­al infor­ma­tion secu­ri­ty con­fer­ence, researchers set up an Inter­net of Things vil­lage [59] to show how they could hack every­day objects like baby mon­i­tors, ther­mostats and secu­ri­ty cam­eras.

Con­nect­ing every­day objects intro­duces new risks if done at mass scale. Take that smart refrig­er­a­tor. If a sin­gle fridge mal­func­tions, it’s a has­sle. How­ev­er, if the fridge’s com­put­er is con­nect­ed to its motor, a soft­ware bug or hack could “brick” mil­lions of them all at once — turn­ing them into plas­tic pantries with heavy doors.

Cars — two-ton met­al objects designed to hur­tle down high­ways — are already brac­ing­ly dan­ger­ous. The mod­ern auto­mo­bile is run by dozens of com­put­ers that most man­u­fac­tur­ers con­nect using a sys­tem that is old and known to be inse­cure [60]. Yet automak­ers often use that flim­sy sys­tem to con­nect all of the car’s parts. That means once a hack­er is in, she’s in every­where — engine, steer­ing, trans­mis­sion and brakes, not just the enter­tain­ment sys­tem.

For years, secu­ri­ty researchers have been warn­ing about the dan­gers of cou­pling so many sys­tems in cars. Alarmed researchers have pub­lished aca­d­e­m­ic papers, hacked cars as demon­stra­tions, and begged the indus­try [61] to step up. So far, the indus­try response has been to nod polite­ly and fix exposed flaws with­out fun­da­men­tal­ly chang­ing the way they oper­ate.

In 1965, Ralph Nad­er pub­lished “Unsafe at Any Speed,” doc­u­ment­ing car man­u­fac­tur­ers’ resis­tance to spend­ing mon­ey on safe­ty fea­tures like seat­belts. After pub­lic debate and final­ly some leg­is­la­tion, man­u­fac­tur­ers were forced to incor­po­rate safe­ty tech­nolo­gies.

No com­pa­ny wants to be the first to bear the costs of updat­ing the inse­cure com­put­er sys­tems that run most cars. We need fed­er­al safe­ty reg­u­la­tions to push automak­ers to move, as a whole indus­try. Last month, a bill with pri­va­cy and cyber­se­cu­ri­ty stan­dards for cars was intro­duced in the Sen­ate. That’s good, but it’s only a start. We need a new under­stand­ing of car safe­ty, and of the safe­ty of any object run­ning soft­ware or con­nect­ing to the Inter­net.

It may be hard to fix secu­ri­ty on the dig­i­tal Inter­net, but the Inter­net of Things should not be built on this faulty foun­da­tion. Respond­ing to dig­i­tal threats by patch­ing only exposed vul­ner­a­bil­i­ties is giv­ing just aspirin to a very ill patient.

It isn’t hope­less. We can make pro­grams more reli­able and data­bas­es more secure. Crit­i­cal func­tions on Inter­net-con­nect­ed objects should be iso­lat­ed and exter­nal audits man­dat­ed to catch prob­lems ear­ly. But this will require an ini­tial invest­ment to fore­stall future prob­lems — the exact oppo­site of the cur­rent cor­po­rate impulse. It also may be that not every­thing needs to be net­worked, and that the trade-off in vul­ner­a­bil­i­ty isn’t worth it. Maybe cars are unsafe at any I.P.

6. We con­clude by re-exam­in­ing one of the most impor­tant ana­lyt­i­cal arti­cles in a long time, David Golumbi­a’s arti­cle in Uncomputing.org about tech­nocrats and their fun­da­men­tal­ly unde­mo­c­ra­t­ic out­look.

“Tor, Technocracy, Democracy” [8] by David Golumbia; Uncomputing.org [8]; 4/23/2015. [8]

“. . . . Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (“civic,” “ethical,” “white” and “black” hat alike), hacktivists, WikiLeaks fans [and Julian Assange et al–D. E.], Anonymous “members,” even Edward Snowden himself [9] walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands. . . First, [Tor co-creator] Dingledine claimed that Tor must be supported because it follows directly from a fundamental “right to privacy.” Yet when pressed—and not that hard—he admits that what he means by “right to privacy” is not what any human rights body or “particular legal regime” has meant by it. Instead of talking about how human rights are protected, he asserts that human rights are natural rights and that these natural rights create natural law that is properly enforced by entities above and outside of democratic polities. Where the UN’s Universal Declaration on Human Rights [10] of 1948 is very clear that states and bodies like the UN to which states belong are the exclusive guarantors of human rights, whatever the origin of those rights, Dingledine asserts that a small group of software developers can assign to themselves that role, and that members of democratic polities have no choice but to accept them having that role. . . Further, it is hard not to notice that the appeal to natural rights is today most often associated with the political right, for a variety of reasons (ur-neocon Leo Strauss was one of the most prominent 20th century proponents of these views [11]). We aren’t supposed to endorse Tor because we endorse the right: it’s supposed to be above the left/right distinction. But it isn’t. . . .”