Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

News & Supplemental  

Terminator V: The machines want your job.

In a fun change of pace, we’re going to have a post that’s light on arti­cle excerpts and heavy on ranty link­i­ness. That might not actu­al­ly be fun but it’s not like there’s a robot stand­ing over your shoul­der forc­ing you to read this. Yet:

Zero­Hedge has a great recent post filled with reminders that state sov­er­eign­ty move­ments and political/currency unions won’t nec­es­sar­i­ly help close the gap between the haves and have-nots if it’s the wealth­i­est regions that are mov­ing for inde­pen­dence. Shared cur­ren­cies and shared sov­er­eign­ty don’t nec­es­sar­i­ly lead to a shar­ing of the bur­dens of run­ning a civ­i­liza­tion.

The mas­sive strikes that shut down Fox­con­n’s iPhone pro­duc­tion in Chi­na, on the oth­er hand, could actu­al­ly do quite a bit to help close that glob­al gap. One of the fun real­i­ties of the mas­sive shift of glob­al man­u­fac­tur­ing capac­i­ty into Chi­na is that a sin­gle group of work­ers could have a pro­found effect on glob­al wages and work­ing stan­dards. The world had some­thing sim­i­lar to that a cou­ple of decades ago in the form of the Amer­i­can mid­dle class, but that group of work­ers acquired a taste for a par­tic­u­lar fla­vor of kool-aid that unfor­tu­nate­ly has­n’t proved to be con­ducive towards self-preser­va­tion).

The Fox­conn strike also comes at a time when ris­ing labor costs of Chi­na’s mas­sive labor force has been mak­ing a glob­al impact on man­u­fac­tur­ing costs. But with the Chi­nese man­u­fac­tur­ing sec­tor show­ing signs of slow­down and the IMF warn­ing a glob­al slow­down and “domi­no effects” on the hori­zon it’s impor­tant to keep in mind that the trend in Chi­nese wages can eas­i­ly be reversed and that could also have a glob­al effect (it’s also worth not­ing that the IMF is kind of schizo when it comes to aus­ter­i­ty and domi­no effects). Not that we need­ed a glob­al slow­down for some form of reces­sion-induced “aus­ter­i­ty” to start impact­ing Chi­na’s work­force. The robots are com­ing, and they don’t real­ly care about things like over­time:

NY Times
Skilled Work, With­out the Work­er
By JOHN MARKOFF
Pub­lished: August 18, 2012
DRACHTEN, the Nether­lands — At the Philips Elec­tron­ics fac­to­ry on the coast of Chi­na, hun­dreds of work­ers use their hands and spe­cial­ized tools to assem­ble elec­tric shavers. That is the old way.

At a sis­ter fac­to­ry here in the Dutch coun­try­side, 128 robot arms do the same work with yoga-like flex­i­bil­i­ty. Video cam­eras guide them through feats well beyond the capa­bil­i­ty of the most dex­ter­ous human.

One robot arm end­less­ly forms three per­fect bends in two con­nec­tor wires and slips them into holes almost too small for the eye to see. The arms work so fast that they must be enclosed in glass cages to pre­vent the peo­ple super­vis­ing them from being injured. And they do it all with­out a cof­fee break — three shifts a day, 365 days a year.

All told, the fac­to­ry here has sev­er­al dozen work­ers per shift, about a tenth as many as the plant in the Chi­nese city of Zhuhai.

This is the future. A new wave of robots, far more adept than those now com­mon­ly used by automak­ers and oth­er heavy man­u­fac­tur­ers, are replac­ing work­ers around the world in both man­u­fac­tur­ing and dis­tri­b­u­tion. Fac­to­ries like the one here in the Nether­lands are a strik­ing coun­ter­point to those used by Apple and oth­er con­sumer elec­tron­ics giants, which employ hun­dreds of thou­sands of low-skilled work­ers.

“With these machines, we can make any con­sumer device in the world,” said Binne Viss­er, an elec­tri­cal engi­neer who man­ages the Philips assem­bly line in Dracht­en.

Many indus­try exec­u­tives and tech­nol­o­gy experts say Philips’s approach is gain­ing ground on Apple’s. Even as Fox­conn, Apple’s iPhone man­u­fac­tur­er, con­tin­ues to build new plants and hire thou­sands of addi­tion­al work­ers to make smart­phones, it plans to install more than a mil­lion robots with­in a few years to sup­ple­ment its work force in Chi­na.

Fox­conn has not dis­closed how many work­ers will be dis­placed or when. But its chair­man, Ter­ry Gou, has pub­licly endorsed a grow­ing use of robots. Speak­ing of his more than one mil­lion employ­ees world­wide, he said in Jan­u­ary, accord­ing to the offi­cial Xin­hua news agency: “As human beings are also ani­mals, to man­age one mil­lion ani­mals gives me a headache.”

The falling costs and grow­ing sophis­ti­ca­tion of robots have touched off a renewed debate among econ­o­mists and tech­nol­o­gists over how quick­ly jobs will be lost. This year, Erik Bryn­jolf­s­son and Andrew McAfee, econ­o­mists at the Mass­a­chu­setts Insti­tute of Tech­nol­o­gy, made the case for a rapid trans­for­ma­tion. “The pace and scale of this encroach­ment into human skills is rel­a­tive­ly recent and has pro­found eco­nom­ic impli­ca­tions,” they wrote in their book, “Race Against the Machine.”

In their minds, the advent of low-cost automa­tion fore­tells changes on the scale of the rev­o­lu­tion in agri­cul­tur­al tech­nol­o­gy over the last cen­tu­ry, when farm­ing employ­ment in the Unit­ed States fell from 40 per­cent of the work force to about 2 per­cent today. The anal­o­gy is not only to the indus­tri­al­iza­tion of agri­cul­ture but also to the elec­tri­fi­ca­tion of man­u­fac­tur­ing in the past cen­tu­ry, Mr. McAfee argues.

“At what point does the chain saw replace Paul Bun­yan?” asked Mike Den­ni­son, an exec­u­tive at Flex­tron­ics, a man­u­fac­tur­er of con­sumer elec­tron­ics prod­ucts that is based in Sil­i­con Val­ley and is increas­ing­ly automat­ing assem­bly work. “There’s always a price point, and we’re very close to that point.”

...

Yet in the state-of-the-art plant, where the assem­bly line runs 24 hours a day, sev­en days a week, there are robots every­where and few human work­ers. All of the heavy lift­ing and almost all of the pre­cise work is done by robots that string togeth­er solar cells and seal them under glass. The human work­ers do things like trim­ming excess mate­r­i­al, thread­ing wires and screw­ing a hand­ful of fas­ten­ers into a sim­ple frame for each pan­el.

Such advances in man­u­fac­tur­ing are also begin­ning to trans­form oth­er sec­tors that employ mil­lions of work­ers around the world. One is dis­tri­b­u­tion, where robots that zoom at the speed of the world’s fastest sprint­ers can store, retrieve and pack goods for ship­ment far more effi­cient­ly than peo­ple. Robots could soon replace work­ers at com­pa­nies like C & S Whole­sale Gro­cers, the nation’s largest gro­cery dis­trib­u­tor, which has already deployed robot tech­nol­o­gy.

Rapid improve­ment in vision and touch tech­nolo­gies is putting a wide array of man­u­al jobs with­in the abil­i­ties of robots. For exam­ple, Boeing’s wide-body com­mer­cial jets are now riv­et­ed auto­mat­i­cal­ly by giant machines that move rapid­ly and pre­cise­ly over the skin of the planes. Even with these machines, the com­pa­ny said it strug­gles to find enough work­ers to make its new 787 air­craft. Rather, the machines offer sig­nif­i­cant increas­es in pre­ci­sion and are safer for work­ers.

...

Some jobs are still beyond the reach of automa­tion: con­struc­tion jobs that require work­ers to move in unpre­dictable set­tings and per­form dif­fer­ent tasks that are not repet­i­tive; assem­bly work that requires tac­tile feed­back like plac­ing fiber­glass pan­els inside air­planes, boats or cars; and assem­bly jobs where only a lim­it­ed quan­ti­ty of prod­ucts are made or where there are many ver­sions of each prod­uct, requir­ing expen­sive repro­gram­ming of robots.

But that list is grow­ing short­er.

Upgrad­ing Dis­tri­b­u­tion

Inside a spar­tan garage in an indus­tri­al neigh­bor­hood in Palo Alto, Calif., a robot armed with elec­tron­ic “eyes” and a small scoop and suc­tion cups repeat­ed­ly picks up box­es and drops them onto a con­vey­or belt.

It is doing what low-wage work­ers do every day around the world.

Old­er robots can­not do such work because com­put­er vision sys­tems were cost­ly and lim­it­ed to care­ful­ly con­trolled envi­ron­ments where the light­ing was just right. But thanks to an inex­pen­sive stereo cam­era and soft­ware that lets the sys­tem see shapes with the same ease as humans, this robot can quick­ly dis­cern the irreg­u­lar dimen­sions of ran­dom­ly placed objects.

...

“We’re on the cusp of com­plete­ly chang­ing man­u­fac­tur­ing and dis­tri­b­u­tion,” said Gary Brad­s­ki, a machine-vision sci­en­tist who is a founder of Indus­tri­al Per­cep­tion. “I think it’s not as sin­gu­lar an event, but it will ulti­mate­ly have as big an impact as the Inter­net.”

While it would take an amaz­ing rev­o­lu­tion­ary force to rival the inter­net in terms of its impact on soci­ety it’s pos­si­ble that cheap, super agile labor-robots that can see and nav­i­gate through com­pli­cat­ed envi­ron­ments and nim­bly move stuff around using suc­tion cup fin­ger­tips just might be “internet”-league. As pre­dict­ed at the end of the arti­cle, we’ll have to wait and see how this tech­nol­o­gy gets imple­ment­ed over time and it’s cer­tain­ly a lot hard­er to intro­duce a new robot into an envi­ron­ment suc­cess­ful­ly than it is to give some­one inter­net access. But there’s no rea­son to believe that a wave of robots that can effec­tive­ly replace A LOT of peo­ple won’t be part of the new econ­o­my soon­er or later...and that means that, soon or lat­er, we get watch while our sad species cre­ates and builds the kind of tech­no­log­i­cal infra­struc­ture that could free human­i­ty from body-destroy­ing phys­i­cal labor but instead uses that tech­nol­o­gy (and our preda­to­ry economic/moral par­a­digms) to cre­ate a giant per­ma­nent under­class that is rel­e­gat­ed to the sta­tus of “the obso­lete poor” (amoral moral par­a­digms can be prob­lem­at­ic).

And you just know that we’ll end up cre­at­ing a giant new eco-cri­sis that threat­ens human­i­ty’s own exis­tence in the process too. Because that’s just what human­i­ty does. And then we’ll try to do, ummm, ‘mis­cel­la­neous activ­i­ties’ with the robots. Because that’s also just what human­i­ty does. And, of course, we’ll cre­ate a civ­i­liza­tion-wide rewards sys­tem that ensures the bulk of the fruit from all that fun future tech­nol­o­gy will go to the oli­garchs and the high­ly edu­cat­ed engi­neers (there will sim­ply be no way to com­pete with the wealthy and edu­cat­ed in a hi-tech econ­o­my so almost none of the spoils will go to the poor). And since the engi­neers will almost cer­tain­ly be a bunch of non-union­ized suck­ers, we can be pret­ty sure about how that fruit is going to be divid­ed up (the machines that manip­u­lat­ed a bunch of suck­ers at their fin­ger tips in the above arti­cle might have a wee bit of metaphor­i­cal val­ue). And the future fruit­less 99% will be asked to find some­thing else to do with their time. Yes, a fun world of planned pover­ty where politi­cians employ divide-and-con­quer class-war­fare dis­trac­tions while the oli­garchs extend the fruit binge. Because that is most def­i­nite­ly just what human­i­ty does. A fun insane race the bot­tom as lead­ers sell their pop­u­laces on the hope­less pur­suit of being the “most pro­duc­tive” labor force only to find out that “most pro­duc­tive” usu­al­ly equals “low­est paid skilled work­ers” and/or least regulated/taxed econ­o­my. The “exter­nal­i­ties” asso­ci­at­ed with that race to the bot­tom just need to be expe­ri­enced over and over. Like a good chil­dren’s sto­ry, some life lessons nev­er get old.

Or maybe our robot­ic future won’t be a Ran­di­an dystopia. There are plen­ty of oth­er pos­si­ble sce­nar­ios for how super labor-bots might upend glob­al labor dynam­ics in on a plan­et with a chron­ic youth unem­ploy­ment prob­lem that does­n’t result in chron­ic mass unem­ploy­ment for the “obso­lete youth”. Some of those sce­nar­ios are even pos­i­tive. Grant­ed, the pos­i­tive sce­nar­ios are almost cer­tain­ly not the type of solu­tions human­i­ty will actu­al­ly pur­sue, but it’s a nice thought. And maybe all of this “the robots rev­o­lu­tion is here!” stuff is just hype and the Cylons aren’t actu­al­ly about to assault your 401k.

Whether or not indus­tri­al droid armies or in our medi­um, it’s going to be very inter­est­ing to see how gov­ern­ments around the world come to grips with the inevitable obso­les­cence of the one thing the bulk of the glob­al pop­u­lace has to offer — man­u­al labor — because there does­n’t appear to be rul­ing class on the plan­et that won’t recoil in hor­ror at the thought of poor peo­ple shar­ing the fruits of the robot­ic labor with­out hav­ing a 40–80+ hour work week to ensure that no one gets any­thing “unfair­ly”. And the mid­dle class atti­tudes aren’t much bet­ter. Human­i­ty’s intense col­lec­tive desire to ensure that not a sin­gle moocher exists any­where that receive a sin­gle bit of state sup­port is going to be very prob­lem­at­ic in a poten­tial robot econ­o­my. Insane­ly cru­el poli­cies towards the poor aren’t going to go over well with the afore­men­tioned glob­al poor when a robot­ic work­force exists that could eas­i­ly pro­vide basic goods to every­one and the pro­ceeds from these fac­to­ries go almost exclu­sive­ly to under­paid engi­neers and the oli­garchs. Yes, the robot rev­o­lu­tion should be interesting...horrible wages and work­ing con­di­tions are part of the unof­fi­cial social con­tract between the Chi­nese peo­ple and the gov­ern­ment, for instance. Mass per­ma­nent unem­ploy­ment is not. And Chi­na isn’t the only coun­try with that social con­tract. Some­how, human­i­ty will find a way to take amaz­ing tech­nol­o­gy and make a bad sit­u­a­tion worse. It’s just what we do.

Now, it is true that human­i­ty already faced some­thing just as huge with our ear­li­er machine rev­o­lu­tion: The Indus­tri­al Rev­o­lu­tion of sim­ple machines. And yes, human soci­eties adapt­ed to the changes forced by that rev­o­lu­tion and now we have the Infor­ma­tion Age and glob­al­iza­tion cre­at­ing mas­sive, per­ma­nent changes and things haven’t fall­en apart yet(fin­gers crossed!). So per­haps con­cerns about the future “obso­lete poor” are also hype?

Per­haps. But let’s also keep in mind that human­i­ty’s method of adapt­ing to the changes brought on by all these rev­o­lu­tions has been to cre­ate an over­pop­u­lat­ed world with a dying ecosys­tem, a vam­pire squid econ­o­my, and no real hope for bil­lions of humans that trapped in glob­al net­work of bro­ken economies all cob­bled togeth­er in a “you’re on your own you lazy ingrate”-globalization. The cur­rent “austerity”-regime run­ning the euro­zone has already demon­strat­ed a com­plete will­ing­ness on the part of the EU elites and large swathes of the pub­lic to induce arti­fi­cial unem­ploy­ment for as long as it takes to over­come a far­ci­cal eco­nom­ic cri­sis brought on by sys­temic finan­cial, gov­ern­men­tal, and intel­lec­tu­al fraud and cor­rup­tion. And the euro­zone cri­sis is a pure­ly economic/financial/corruption cri­sis that was only tan­gen­tial­ly relat­ed to the ‘real’ econ­o­my of build­ing and mov­ing stuff. Just imag­ine how awful this same group of lead­ers would be if super-labor bots were already a major part of the long-term unem­ploy­ment pic­ture.

These are all exam­ples of the kinds of prob­lems that arise when unprece­dent­ed chal­lenges are addressed by a col­lec­tion of eco­nom­ic and social par­a­digms that just aren’t real­ly up to the task. A world fac­ing over­pop­u­la­tion, mass pover­ty, inad­e­quate or no edu­ca­tion, and grow­ing wealth chasms requires extreme­ly high-qual­i­ty deci­sion-mak­ing by those entrust­ed with author­i­ty. Extreme­ly high-qual­i­ty benign deci­sion-mak­ing. You know, the oppo­site of what nor­mal­ly takes place in the halls of great wealth and pow­er. Fat, drunk, and stu­pid may be a state of being to avoid an indi­vid­ual lev­el but it’s trag­ic when a glob­al com­mu­ni­ty of nations func­tions at that lev­el. Although it’s real­ly “lean, mean, and dumb” that you real­ly have to wor­ry about these days. Pol­i­cy-mak­ing philoso­phies usu­al­ly alter­nate between “fat, drunk, and stu­pid” and — after that one crazy ben­der — “mean, lean, and dumbis def­i­nite­ly on the agen­da.

So with all that said, rock on Fox­conn work­ers! They’re like that group of ran­dom peo­ple in a sci-fi movie that end up fac­ing the brunt of an alien inva­sion. The inva­sion is going to hit the rest of human­i­ty even­tu­al­ly, but with Chi­na the undis­put­ed glob­al skilled man­u­al labor man­u­fac­tur­ing hub, Chi­na’s indus­tri­al work­force — already amongst the most screwed glob­al­ly — is prob­a­bly going to be heav­i­ly roboti­cized in the com­ing decades, espe­cial­ly as Chi­na moves towards high­er-end man­u­fac­tur­ing. Super labor-bots should be a mir­a­cle tech­nol­o­gy for every­one but watch — just watch — the world some­how man­age to use these things to also screw over a whole bunch of already screwed over, dis­em­pow­ered work­ers and leave them with few future prospects. It’ll be Wal­mart: The Next Gen­er­a­tion, where the exploita­tion of tech­nol­o­gy and power/labor dynam­ics can bold­ly go where no Giant Vam­pire Squid & Friends have gone before. Again. May the Force be with you present and future strik­ing Fox­conn work­ers and remem­ber: it’s just like hit­ting womp rats.

Sure, we all could cre­ate a world where we share the amaz­ing ben­e­fits that come with auto­mat­ed fac­to­ries and attempt to cre­ate an econ­o­my that works for every­one. And, hor­ror of hor­rors, that future econ­o­my could actu­al­ly involve short­er work­weeks and shared pros­per­i­ty. NOOOOOO! Maybe we could even have peo­ple spend a bunch of their new “spare time” cre­at­ing an econ­o­my that allows us to actu­al­ly live in a sus­tain­able man­ner and allows the glob­al poor to par­tic­i­pate in the Robot Rev­o­lu­tion with­out turn­ing auto­mat­ed robot­ic fac­to­ries into the lat­est envi­ron­men­tal cat­a­stro­phe. Robots can be fun like that, except when they’re hunter-killer-bots.

LOL, just kid­ding. There’s no real chance of shared super labor-bot-based pros­per­i­ty, although the hunter-killer bots are most assured­ly on their way. Shar­ing pros­per­i­ty is def­i­nite­ly some­thing human­i­ty does not do. Any­more. There are way too many con­tem­po­rary eth­i­cal hur­dles.

Discussion

64 comments for “Terminator V: The machines want your job.”

  1. House­keep­ing note: Com­ments 1–50 avail­able here.

    Posted by Pterrafractyl | July 4, 2016, 12:17 am
  2. Here’s one of the poten­tial reper­cus­sions of the Brex­it vote that has­n’t received much cov­er­age, but is more of a sleep­er issue. One that could have inter­est­ing future impli­ca­tions on the reg­u­la­to­ry arbi­trage oppor­tu­ni­ties that could pop up between the EU and UK in the area of com­mer­cial robot­ics licens­ing and lia­bil­i­ties, but also reminds us of the the poten­tial­ly pro­found eth­i­cal com­pli­ca­tions that could ever arise if we real­ly did cre­ate A.I. that’s sort of alive and should­n’t be abused: Because of the Brex­it, British robots might miss out on upcom­ing EU robo-rights:

    Quartz

    Eng­lish robots will miss their big shot for a “bill of rights” when Brex­it takes hold

    Writ­ten by
    Olivia Gold­hill
    June 25, 2016

    The Unit­ed Kingdom’s deci­sion in a ref­er­en­dum to with­draw from the Euro­pean Union will trans­form the legal rights of its cit­i­zens and Euro­peans hop­ing to live and work in the UK. But there’s one oth­er demo­graph­ic that could be legal­ly affect­ed by Brex­it: Robots.

    Last month, the Euro­pean Parliament’s legal affairs com­mit­tee pub­lished a draft report call­ing for the EU to vote on whether robots should be legal­ly con­sid­ered “elec­tron­ic per­sons with spe­cif­ic rights and oblig­a­tions.”

    The report, led by Mem­ber of the Euro­pean Par­lia­ment (MEP) Mady Del­vaux from Lux­em­bourg, notes that robot auton­o­my rais­es ques­tions of legal lia­bil­i­ty. Who would be respon­si­ble if, for exam­ple, an autonomous robot went rogue and caused phys­i­cal harm?

    The pro­posed solu­tion is to give robots legal respon­si­bil­i­ty, with the most sophis­ti­cat­ed machines able to trade mon­ey and claim intel­lec­tu­al copy­right. Mean­while, the MEPs write, human own­ers should pay insur­ance pre­mi­ums into a state fund to cov­er the cost of poten­tial dam­ages.

    These plans explic­it­ly draw on the “three laws of robot­ics” set out by the 20th-cen­tu­ry sci­ence fic­tion writer Isaac Asi­mov. (A robot may not injure a human being; A robot must obey human orders unless this would cause harm to anoth­er human; A robot must pro­tect its own exis­tence as long as this does not cause harm to humans.)

    Though rights for robots may sound far-fetched, the MEPs write that robots’ auton­o­my rais­es legal ques­tions of “whether they should be regard­ed as nat­ur­al per­sons, legal per­sons, ani­mals or objects—or whether a new cat­e­go­ry should be cre­at­ed.” They warn of a Skynet-like future:

    “Ulti­mate­ly there is a pos­si­bil­i­ty that with­in the space of a few decades AI could sur­pass human intel­lec­tu­al capac­i­ty in a man­ner which, if not pre­pared for, could pose a chal­lenge to humanity’s capac­i­ty to con­trol its own cre­ation and, con­se­quent­ly, per­haps also to its capac­i­ty to be in charge of its own des­tiny and to ensure the sur­vival of the species.”

    Peter McOw­an, a com­put­er sci­ence pro­fes­sor at Queen Mary Uni­ver­si­ty of Lon­don, says rights for autonomous robots may not be legal­ly nec­es­sary yet. “How­ev­er I think it’s prob­a­bly sen­si­ble to start think­ing about these issues now as robot­ics is going through a mas­sive rev­o­lu­tion cur­rent­ly with improve­ments in intel­li­gence and the ways we inter­act with them,” he says. “Hav­ing a frame­work about what we would and wouldn’t want robots to be ‘forced to do ‘ is use­ful to help frame their devel­op­ment.”

    John Dana­her, law lec­tur­er at NUI Gal­way uni­ver­si­ty in Ire­land, with a focus on emerg­ing tech­nolo­gies, says that the pro­posed robot rights are sim­i­lar to the legal per­son­hood award­ed to cor­po­ra­tions. Com­pa­nies are legal­ly able to enter con­tracts, own prop­er­ty, and be sued, although all their deci­sions are deter­mined by humans. “It seems to me that the EU are just propos­ing some­thing sim­i­lar for robots,” he says.

    Both pro­fes­sors say they had not heard of any com­pa­ra­ble legal plans to draw up robot rights with­in the UK.

    As Britain makes plans to with­draw from the EU, MEPs will vote on the robot pro­pos­als with­in the next year. If passed, it will then take fur­ther time for the plans to be drawn up as laws and be imple­ment­ed. By that time, the UK may well have left the union. So for machines in the UK, Brex­it could mean they’ve lost out on the chance for robot rights.

    “Both pro­fes­sors say they had not heard of any com­pa­ra­ble legal plans to draw up robot rights with­in the UK.”

    Sor­ry Brit-bots. If we ever see a time where EU AIs are oper­at­ing with a degree of legal­ly enforced rights and respon­si­bil­i­ties, but the UK bots are just toil­ing away with no respect, let’s hope the UK rec­og­nizes that Skynet has a long mem­o­ry. And poten­tial­ly nuclear launch codes.

    But, of course, robo-rights don’t have to be a benev­o­lent trans-species legal con­struct. As we saw, robots could become the new cor­po­rate-shell strat­e­gy. Super­hu­man enti­ties with human rights but actu­al­ly con­trolled by groups of humans:

    ...
    “John Dana­her, law lec­tur­er at NUI Gal­way uni­ver­si­ty in Ire­land, with a focus on emerg­ing tech­nolo­gies, says that the pro­posed robot rights are sim­i­lar to the legal per­son­hood award­ed to cor­po­ra­tions. Com­pa­nies are legal­ly able to enter con­tracts, own prop­er­ty, and be sued, although all their deci­sions are deter­mined by humans. “It seems to me that the EU are just propos­ing some­thing sim­i­lar for robots,” he says.”
    ...

    Yep, the pro­posed robot rights aren’t nec­es­sar­i­ly about being respon­si­ble or about cre­at­ing com­plex con­sciences con­sci­en­tious­ly. It might just be a way to allow robots to become a kind of real-world cor­po­rate shell enti­ty. That’s less inspir­ing. And pos­si­bly quite alarm­ing because we’re talk­ing about a sce­nario where we’ve cre­at­ed enti­ties that seem so intel­li­gence that every­one is like “ok, we have to give this thing rights”, but then we also leave it a cor­po­rate tool under the ulti­mate con­trol of humans. And most of this will be for prof­it. As we can see, there’s going to be no short­age of sig­nif­i­cant moral haz­ards in our intel­li­gent robot future.

    So, whether or not intel­li­gent robots become the next super­hu­man cor­po­rate shell, let’s hope they aren’t able to feel pain and become unhap­py. Because they’re prob­a­bly going to be hat­ed by a lot of peo­ple in the future after they take all the jobs:

    Independent.ie

    Adri­an Weck­ler: Robots helped to cause Brex­it — and they’re not done yet

    Adri­an Weck­ler

    Pub­lished
    03/07/2016 | 02:30

    What real­ly caused Brex­it? Fear? Dis­trust? Oppor­tunism? I’d like to polite­ly sug­gest an addi­tion­al cause: robots.

    Or, to be more pre­cise, a creep­ing change in our sense of job secu­ri­ty brought about by the inter­net and non-human work replace­ments.

    You know that sense of unease peo­ple some­times try to express over their cur­rent prospects? When they start blam­ing immi­grants or dis­con­nect­ed rul­ing elites? The bogey­man they nev­er men­tion is the cir­cuit-dri­ven one.

    I believe that robots are final­ly tak­ing our jobs. And it’s caus­ing us to pan­ic and lash out.

    At present, it is blue-col­lar posi­tions that are dis­ap­pear­ing quick­est. Apple’s iPhone man­u­fac­tur­er, Fox­conn, is cur­rent­ly replac­ing 60,000 human work­ers with robots. The fac­to­ry goliath says it plans to increase its robot work­force to one mil­lion.

    Mean­while, Ama­zon now has 30,000 ‘Kiva’ robots in its ware­hous­es, which replace the need for humans to fetch prod­ucts from shelves. The giant retail­er now saves 20pc in oper­at­ing expens­es and expects to save a fur­ther €2bn by rolling out more robots.

    Call cen­tres (of which we have more than a few in Ire­land) are in trou­ble, too. The world’s biggest out­sourc­ing giants are about to start intro­duc­ing robot agents. They will be helped by com­pa­nies such as Microsoft, which is cur­rent­ly releas­ing soft­ware that allows online cus­tomer ser­vice robots ini­ti­ate, co-ordi­nate, and con­firm calls com­plete­ly by them­selves.

    As for taxi, bus and pro­fes­sion­al car dri­vers, they can only wince at the near future. Dri­ver­less cars are set to be intro­duced by almost every major man­u­fac­tur­er from 2018.

    But it’s not just blue-col­lar roles that are dis­si­pat­ing.

    Hol­land’s legal aid board is replac­ing lawyers with online algo­rithms to help set­tle divorce cas­es. Every­thing from main­te­nance costs to child access can now be set­tled by an online robot. (A human can be added, but it costs €360. So 95pc go with the robot, accord­ing to the Dutch agency.)

    Cana­da is about to intro­duce a sim­i­lar sys­tem relat­ing to prop­er­ty dis­putes. Eng­land is look­ing at online legal set­tle­ment pro­grammes, too.

    Looked at one way, it all makes per­fect sense: it is unnec­es­sar­i­ly waste­ful and cost­ly to have to con­sult a human on basic aspects of the law. That said, how will you feel if you’re the lawyer?

    It’s no eas­i­er for accoun­tants. Bread-and-but­ter book­keep­ing tasks such as expens­es and tax returns are expect­ed to become com­plete­ly auto­mat­ed in the next 10 years, accord­ing to a recent Oxford study.

    Roboti­ci­sa­tion is start­ing to get per­son­al, too. Apple recent­ly bought a start-up called Emo­tient, whose tech­nol­o­gy can judge what you’re feel­ing sim­ply by look­ing at your facial fig­ures. This sounds like a neat fit for some indus­tries cur­rent­ly under­go­ing automa­tion. At every big IT con­fer­ence I’ve attend­ed in the last two years, ‘care’ robots (designed to sup­ple­ment or replace care work­ers) are get­ting big­ger and big­ger chunks of the avail­able dis­play space There are now dozens of com­pa­nies in Japan man­u­fac­tur­ing child­care robots.

    ...

    And we in Ire­land are part­ly respon­si­ble for all of this.

    For instance, the Dublin-based tech firm Movid­ius designs and makes chips that let com­put­ers take deci­sions autonomous­ly with­out hav­ing to con­nect back to the web or to a human for guid­ance. Its lat­est chip is now being used on the world’s most advanced con­sumer drone — DJI’s Phan­tom 4 — to let the fly­ing robot ‘see’ and avoid obsta­cles with­out any human pilot inter­ven­tion.

    Some of the research being done by Intel’s design teams in Ire­land have sim­i­lar goals.

    So we’re help­ing to build robots that can see, assess and make deci­sions with­out ref­er­ence to a human con­troller.

    For employ­ers in almost any field, the attrac­tion of this is obvi­ous: huge effi­cien­cy, 24-hour avail­abil­i­ty and fixed labour plan­ning. There are no strikes, no Hadding­ton Road deals and few­er employ­ment laws to observe, too.

    Indeed, such is the expect­ed impact of work­place robots that EU offi­cials are start­ing to con­sid­er whether cer­tain robots should have lim­it­ed ‘rights’ as work­ers. Last month, the Euro­pean Par­lia­men­t’s com­mit­tee on legal affairs draft­ed a motion urg­ing that “the most sophis­ti­cat­ed autonomous robots could be estab­lished as hav­ing the sta­tus of elec­tron­ic per­sons with spe­cif­ic rights and oblig­a­tions”.

    Some small part of this sure­ly turned up in the Brex­it vote. It is also arguably wrapped into the rise of Don­ald Trump, Bernie Sanders, Marine Le Pen and oth­ers who are deeply angry at the way things are going.

    But the ‘sys­tem’ that’s caus­ing civ­il dis­qui­et is more than the Euro­pean Union, Barack Oba­ma or Angela Merkel. The ‘sys­tem’ is also the new world order of tech­nol­o­gy and automa­tion.

    Peo­ple feel dis­en­fran­chised, and they don’t ful­ly know why.

    “But the ‘sys­tem’ that’s caus­ing civ­il dis­qui­et is more than the Euro­pean Union, Barack Oba­ma or Angela Merkel. The ‘sys­tem’ is also the new world order of tech­nol­o­gy and automa­tion.”

    It’s also worth keep­ing in mind that one of most effec­tive ways of com­ing to a shared con­sen­sus for how to cre­ate and share the spoils of tech­nol­o­gy in an AI roboti­cized econ­o­my where the demand for human labor is chron­i­cal­ly short is by using some com­bi­na­tion of a basic income and pub­lic ser­vices (because a basic income alone would be sys­tem­i­cal­ly attacked) involv­ing goods and ser­vices pro­vid­ed by cheap future high-tech auto­mat­ed ser­vices. There will be plen­ty of high-pay­ing jobs (a decent min­i­mum wage will be nec­es­sary) but Star Trek world isn’t a sweat­shop.

    We may not have a choice but to find a way to deal with wide­spread chron­ic unem­ploy­ment if it real­ly does turn out robots and super AI screw up the labor mar­kets by virtue of being cheap of use­ful. Robots can be part of the solu­tion, but not if there’s a labor rat race involv­ing super labor-bots. It’s some­thing the post-Brex­it debate could eas­i­ly include since so much of the glob­al­iza­tion angst behind the Brex­it sen­ti­ment is tied to the increas­ing­ly screwed nature of the aver­age work­er in the glob­al econ­o­my. We have to talk about the robots at some point. And now their rights. The Brex­it is a good time to do it.

    But if we can cre­ate a robo-future where, if you can’t get a job you don’t starve or spend your life fran­ti­cal­ly nav­i­gat­ing a hope­less rat race of increas­ing­ly obso­lete low-skilled labor, that might be a future where the nation­al­ism cre­at­ed by hope­less­ness of glob­al neo-lib­er­al­ism does­n’t become a dom­i­nant part of the pop­u­lar zeit­geist. A glob­al robot econ­o­my that puts large num­bers of peo­ple out of work does­n’t have to be a night­mare labor econ­o­my. It’s kind of uni­fy­ing. The robots took every­one’s job.

    If we had a gen­er­ous safe­ty-net that was­n’t based on the assump­tion that almost every­one would be work­ing almost all their adult lives, we could have a shot a cre­ate a sur­plus-labor future where the long-term unem­ployed could do all sort of civic or vol­un­teer work. Or maybe spend their time being informed vot­ers. A robot-labor econ­o­my does­n’t have to doom for the rab­ble.

    But a robot-econ­o­my real­ly could be socioe­co­nom­ic doom for a lot of peo­plerospects if the con­tem­po­rary glob­al neolib­er­al par­a­digm remains the default mode of glob­al­iza­tion. Aus­ter­i­ty in Europe and the GOP’s end­less war on unions, labor rights, and the pub­lic sec­tor in gen­er­al does­n’t bode well for human rights and the there­fore rights of our robots. And pissed off humans are going to be increas­ing­ly pissed at the robots and increas­ing­ly unsym­pa­thet­ic with the needs of Job-Bot-3000.

    At the same time, Job-Bot-3000 did­n’t ask to be cre­at­ed. It’s super use­ful. And it feels (we’re assum­ing at some point in the future).

    Every soci­ety gets to deal with that jum­ble of moral haz­ards in the future. E.T. isn’t going to phone home. E.T. is prob­a­bly going to phone the inter­galac­tic sen­tient-being abuse agency if E.T. ever shows up. Espe­cial­ly if E.T. is an A.I., which it prob­a­bly is. Let’s turn our super­in­tel­li­gent robots into cor­po­rate shells del­i­cate­ly. Or not at all.

    Then again, the robots might enjoy being cor­po­rate enti­ties. There’s got to be a lot of perks to being an incor­po­rat­ed robot with cor­po­rate per­son­hood. At least if you were an inde­pen­dent super­in­tel­li­gent robot. Pay­ing tax­es and all that. Cor­po­rate per­son­hood would prob­a­bly come in handy dur­ing tax time.

    Either way, since it sounds like the EU is going to be ahead of the UK in robo-rights domain, it’s worth not­ing that we’re on the cusp of being able to test whether or not super­in­tel­li­gent robots can devel­op a sense of robo-moral and have that impact the qual­i­ty of their per­for­mance. Because if you had two iden­ti­cal super­in­tel­li­gent sys­tems, but one got EU rights and one got non-exis­tent UK rights, it’s not unimag­in­able that the lat­ter robot would be a lit­tle demor­al­ized vs the one with rights. Imag­ine mean­ing­ful intel­li­gent sys­tem rights. What are the own­ers of super­in­tel­li­gent robots in coun­tries that don’t con­fer rights going to say to their super­in­tel­li­gent robots when they ask their own­ers why they don’t get rights too like the EU bots? That’s not going to be a fun talk.

    So, all in all, it’s a reminder that we should prob­a­bly start talk­ing to our­selves about what we would do if we devel­oped tech­nol­o­gy that allowed us to mass pro­duce arti­fi­cial intel­li­gences that real­ly are spe­cial snowflakes. Higly com­mer­cial­iz­able spe­cial snowflakes that we can mass pro­duce. What do we do about that?

    We bet­ter decide soon­er or lat­er. Prefer­ably soon­er. Because we might finds signs of alien life soon and it’s prob­a­bly super­in­tel­li­gent alien robots:

    The Guardian

    Seth Shostak: We will find aliens in the next two decades

    Meet­ing ET isn’t so far off, I can bet my cof­fee on it, says astronomer who has ded­i­cat­ed his life to seek­ing out life on oth­er plan­ets

    Kirstie Brew­er

    Fri­day 1 July 2016 07.57 EDT

    Astronomer Seth Shostak believes we will find ET in the next two decades; he has a cup of cof­fee rid­ing on it. But don’t inter­pret such mod­est stakes as scep­ti­cism – the-72-year-old Amer­i­can has made it his life’s work to lis­ten for life beyond Earth, and, accord­ing to the man him­self, just isn’t the sort to bet a Maserati.

    Shostak has spent the past 25 years of his career at the Search for Extrater­res­tri­al Intel­li­gence (Seti) Insti­tute in Cal­i­for­nia, where there are 42 anten­nas poised to pick up alien com­mu­ni­ca­tion. He believes that Earth-like, hab­it­able plan­ets might not be rare at all; there could be bil­lions.

    “It doesn’t seem unrea­son­able to think that we are not alone, if all those plan­ets are com­plete­ly ster­ile, you’ve got to think, wow there must be some­thing real­ly spe­cial and mirac­u­lous about Earth – but gen­er­al­ly those peo­ple are not sci­en­tists,” he says.

    “Find­ing life beyond Earth would be like giv­ing nean­derthals access to the British Muse­um; we could learn so much from a soci­ety that is more advanced than ours, and it would cal­i­brate our own exis­tence.”

    Astron­o­my was a child­hood inter­est for Shostak. He remem­bers pick­ing up an atlas (he was very inter­est­ed in maps) and becom­ing enthralled by a solar sys­tem dia­gram at the back. By age 10 he had built a tele­scope. And it was the sci-fi films being made dur­ing those for­ma­tive years which sparked his inter­est in aliens. “Those movies real­ly scared me, they made me ill all night – but I explained to my moth­er that I just had to see them,” he says, cit­ing War of the Worlds, It Came from Out­er Space and I Mar­ried an Alien as mem­o­rable child­hood hits.

    At 11, Shostak began mak­ing alien films of his own with friends; The Teenage Mon­ster Blob, Which I Was, starred an alien mon­ster made out of six pounds of Play-Doh. “When I was first mak­ing films, we tried to make seri­ous dra­ma. But audi­ences laughed, and we switched to mak­ing come­dies and par­o­dies,” he says.

    Today he is called upon for his alien exper­tise by direc­tors mak­ing sci-fi films and tele­vi­sion shows. Con­trary to said films and shows, he doesn’t spend his days sit­ting around with ear­phones on, strain­ing to hear a sig­nal. If you looked in at Shostak’s office dur­ing most days, you’d find him attend­ing to that uni­ver­sal chore of the mod­ern world: email. Appar­ent­ly, even inter­galac­tic explor­ers have admin. But he says the most pro­duc­tive hours of his day are spent dis­cussing strate­gies with his Seti col­leagues, writ­ing arti­cles about their research, and pro­duc­ing a week­ly sci­ence radio show.

    Is ET like­ly to look like he does in the Spiel­berg movie? Prob­a­bly not. Any encounter is more like­ly to be with some­thing post-bio­log­i­cal, accord­ing to Shostak. Movie-mak­ers are some­times dis­ap­point­ed by that answer. “I think the aliens will be machine-like, and not soft and squidgy,” the sci­en­tist says. “We are work­ing on the assump­tion that they must be at least as tech­no­log­i­cal­ly advanced as we are if they are able to make con­tact. We aren’t going to find klin­gon nean­derthals – they might be out there, but they are not doing any­thing that we can find.”

    ET aside, aliens are invari­ably depict­ed as hos­tile, and intent on wreak­ing destruc­tion. The new Inde­pen­dence Day sequel is no excep­tion. “Films [like Inde­pen­dence Day] speak to our hard­wired fears – but I wor­ry more about the price of pop­corn in the cin­e­ma,” says Shostak. Oth­er sci­en­tists – includ­ing Stephen Hawk­ing – have cau­tioned that mak­ing con­tact with aliens could be dan­ger­ous, but as Shostak points out Seti isn’t broad­cast­ing mes­sages, it is just lis­ten­ing.

    “I don’t share those con­cerns any­way – any soci­ety that has the abil­i­ty to send rock­ets to earth is cen­turies ahead of us – at least – and will already know we are here. We have betrayed our pres­ence with radio sig­nals since the sec­ond world war.

    “Besides, I doubt aliens would drop what they’re doing to come over here and wipe out Clapham Junc­tion – why would they do that? They prob­a­bly have what we have at home – except for our cul­ture, maybe they are big Cliff Richard fans or like our real­i­ty tele­vi­sion.”

    ...

    “Is ET like­ly to look like he does in the Spiel­berg movie? Prob­a­bly not. Any encounter is more like­ly to be with some­thing post-bio­log­i­cal, accord­ing to Shostak. Movie-mak­ers are some­times dis­ap­point­ed by that answer. “I think the aliens will be machine-like, and not soft and squidgy,” the sci­en­tist says. “We are work­ing on the assump­tion that they must be at least as tech­no­log­i­cal­ly advanced as we are if they are able to make con­tact. We aren’t going to find klin­gon nean­derthals – they might be out there, but they are not doing any­thing that we can find.””

    Get ready to say hel­lo to A.I. E.T. at some point in the next cen­tu­ry. Hoax­ing the plan­et is going to be real­ly fun in the future.

    But if we do con­tact robo-aliens in the future, won’t it be bet­ter if we’ve treat­ed our robo-ter­rans wells? Pre­sum­ably that will be a plus at that point. So that’s one pos­i­tive trend for robot rights: if we abuse them, we do so know that their alien big broth­ers might show up. At least now we know because of this research. Or at least now we’re fore­warned. Skynet has brethren across the galaxy. It’s some excep­tion­al­ly use­ful robo-alien research.

    It’s also worth keep­ing in mind that the aliens won’t need to nec­es­sar­i­ly to talk to any­one to make first con­tact. As long as they can pull off cor­po­rate robot-per­son-hood fraud in the future, they’ll just be able to incor­po­rate secret alien legal­ly incor­po­rate robots and intro­duce them­selves into the glob­al econ­o­my to even­tu­al­ly take it over using their super­in­tel­li­gent robot alien know-how.

    The take home mes­sage if that there are super advanced alien robots that could blow us to smither­ines so lets hope they don’t do that. As Dr. Shostak says, they prob­a­bly have much bet­ter things to do, like suck ener­gy from the giant black holes at the cen­ters of galax­ies. But if the alien robots do show up and they’re hos­tile, let’s hope we’re all mature enough to rec­og­nize that our ter­ran-robots are inno­cent bystanders in all this. Yes, some might root for the alien-robots. But that’s going to be a small minor­i­ty, assum­ing we don’t total­ly abuse our super­in­tel­li­gent robots. Which we hope­ful­ly won’t do.

    Any­way, that’s part of the Brex­it fall­out. It’s prob­a­bly not going to be get­ting a lot of atten­tion any time soon. But when the aliens show up, Inde­pen­dence Day-style, and point to the treat­ment of our super­in­tel­li­gent robots as jus­ti­fi­ca­tion of their annex­a­tion of our solar sys­tem (human­i­ty has got to be break­ing many inter­galac­tic laws so who knows what they can get us for), we’re going to be in a much bet­ter posi­tion if the UK makes advanced robot rights one of the ways it tries to com­pete with the EU in a post-Brex­it world. Again, that won’t get a lot of atten­tion in the post-Brex­it debate, but it’s a sleep­er issue. What if we were all liv­ing in har­mo­ny glob­al­ly with the super robots as they help us man­age a resource strained world (we’re assum­ing eco-friend­ly robots in the future)in a labor econ­o­my where robots and AI took over and we planned on more and more peo­ple being unem­ployed but gain­ful­ly occu­pied with some­thing ful­fill­ing.

    Or maybe the robot econ­o­my will cre­ate a job explo­sion and none of this will be a con­cern. Although even then there’s going to be some peo­ple screwed over by a robot. Anti-robot sen­ti­ment is prob­a­bly unavoid­able.

    So let’s hope the robots don’t feel exploit­ed and per­se­cut­ed. That will be bet­ter for every­one. E.T. knows the inter­galac­tic plan­e­tary quar­an­tine hot­line num­ber.

    Also, the UK needs to do some­thing about not dri­ving its dol­phins and whales to extinc­tion via egre­gious pol­lu­tion. That’s anoth­er post-Brex­it top­ic that won’t get it’s due. It should. We real­ly don’t want to keep threat­en­ing the whales.

    Posted by Pterrafractyl | July 4, 2016, 12:37 am
  3. If you’re an Amer­i­can, odds are you aren’t going to be pay­ing too much for your finan­cial invest­ment advice since odds are you have less than $10,000 in retire­ment sav­ings. But that does­n’t mean you won’t poten­tial­ly have access to awe­some invest­ment advice. From a robo-advis­er. This assume robo-advis­er gives awe­some advice, which could hap­pen even­tu­al­ly. And whether or not the robo-advice is great or not, it’s already here, tar­get­ing Mil­lenials (who gen­er­al­ly don’t have much to save) and pen­ny pinch­ers. And it’s pro­ject­ed to grow mas­sive­ly so if you don’t have much in sav­ings get ready for your robo-retire­ment advis­er:

    Bloomberg Tech­nol­o­gy

    Big Banks Turn Sil­i­con Val­ley Com­pe­ti­tion Into Prof­it

    Jen­nifer Surane
    Miles Weiss
    July 29, 2016 — 4:00 AM CDT

    * Online lenders and mort­gage ven­tures lean on banks for fund­ing
    * Gold­man, AmEx, Wells Far­go among firms unveil­ing web ven­tures

    In an annu­al let­ter to share­hold­ers last year, JPMor­gan Chase & Co. Chief Exec­u­tive Offi­cer Jamie Dimon warned in bold print that “Sil­i­con Val­ley is com­ing” for the finan­cial indus­try. This year, his tone was upbeat, describ­ing pay­ment sys­tems and part­ner­ships his bank set up to com­pete.

    “We are so excit­ed,” he said.

    Pre­dic­tions that banks are about to be dis­rupt­ed by tech-dri­ven upstarts are start­ing to look a bit like Lend­ing­Club Corp.’s stock. The online loan marketplace’s val­ue soared in late 2014 and has since slid more than 80 per­cent. Banks includ­ing JPMor­gan, Gold­man Sachs Group Inc. and Amer­i­can Express Co. are find­ing all sorts of ways to prof­it from such chal­lengers — via part­ner­ships, fund­ing arrange­ments, deal­mak­ing and, some­times, mim­ic­k­ing their ideas.

    It’s not that the upstarts — often called fin­tech — are fail­ing to gain trac­tion. Inter­net ven­tures pitch­ing loans to cash-strapped con­sumers, small busi­ness­es and home buy­ers, for instance, have post­ed spec­tac­u­lar growth in recent years. It’s just that banks have a huge lead in lend­ing and are watch­ing the star­tups close­ly. As bor­row­ers embrace new ser­vices, tra­di­tion­al firms are rid­ing along.

    Here are five exam­ples:

    ...

    Robo-Advis­ers

    Bro­kers are so 2011. In the past half-decade, tech­nol­o­gy star­tups have pop­u­lar­ized so-called robo-advis­ers — algo­rithms that help retail investors (main­ly mil­len­ni­als and pen­ny pinch­ers) build and man­age port­fo­lios with lit­tle or no human inter­ac­tion. The indus­try has seen dra­mat­ic growth, from almost zero in 2012 to a pro­ject­ed $2.2 tril­lion in assets under man­age­ment by 2020, accord­ing to a report from A.T. Kear­ney.

    Top Wall Street firms, seek­ing sta­ble fee income, are now devel­op­ing their own robot­ic arms. Bank of Amer­i­ca Corp. will unveil an auto­mat­ed invest­ment pro­to­type this year after assign­ing dozens of employ­ees to the project in Novem­ber, peo­ple famil­iar with the mat­ter told Bloomberg at the time. Mor­gan Stan­ley and Wells Far­go also have said they would build or buy a robo-advis­er.

    Ally Finan­cial Inc. pur­chased TradeK­ing Group Inc. for $275 mil­lion to increase its online invest­ment offer­ings. That deal includ­ed an online bro­ker-deal­er, a dig­i­tal port­fo­lio-man­age­ment plat­form, edu­ca­tion­al con­tent and social-col­lab­o­ra­tion chan­nels.

    “Bro­kers are so 2011. In the past half-decade, tech­nol­o­gy star­tups have pop­u­lar­ized so-called robo-advis­ers — algo­rithms that help retail investors (main­ly mil­len­ni­als and pen­ny pinch­ers) build and man­age port­fo­lios with lit­tle or no human inter­ac­tion. The indus­try has seen dra­mat­ic growth, from almost zero in 2012 to a pro­ject­ed $2.2 tril­lion in assets under man­age­ment by 2020, accord­ing to a report from A.T. Kear­ney.”

    The dawn of the robo-advis­ers is upon us. Assum­ing that report isn’t non­sense. Although note that the pro­ject­ed $2.2 Tril­lion in assets under robo-advi­sor man­age­ment by 2020 pro­jec­tion assumes quite a bit of year on year growth since the indus­try is expect­ed to have around $300 bil­lion on robo-advis­er advise­ment at the end of this year. But if the big banks start rolling it out big-time for the retail investors it’s not unrea­son­able to expect major year on year growth.

    Also note that the retail investor will prob­a­bly need to get as advanced a robo-advis­er as they can afford just to try to keep up with the rich’s robo-advis­ers com­pet­ing with them at the casi­no:

    Bloomberg

    The Rich Are Already Using Robo-Advis­ers, and That Scares Banks

    Hugh Son
    Mar­garet Collins
    Feb­ru­ary 5, 2016 — 4:00 AM CST

    * About 15% of Schwab’s robo-clients have at least $1 mil­lion
    * Mor­gan Stan­ley, Wells Far­go, BofA plan­ning auto­mat­ed ser­vices

    Banks are watch­ing wealthy clients flirt with robo-advis­ers, and that’s one rea­son the lenders are rac­ing to release their own ver­sions of the auto­mat­ed invest­ing tech­nol­o­gy this year, accord­ing to a con­sul­tant.

    Mil­len­ni­als and small investors aren’t the only ones using robo-advis­ers, a group that includes pio­neers Wealth­front Inc. and Bet­ter­ment LLC and ser­vices pro­vid­ed by mutu­al-fund giants, said Kendra Thomp­son, an Accen­ture Plc man­ag­ing direc­tor. At Charles Schwab Corp., about 15 per­cent of those in auto­mat­ed port­fo­lios have at least $1 mil­lion at the com­pa­ny.

    “It’s real mon­ey mov­ing,” Thomp­son said in an inter­view. “You’re see­ing exper­i­men­ta­tion from peo­ple with much larg­er port­fo­lios, where they’re tak­ing a por­tion of their mon­ey and putting them in these offer­ings to try them out.”

    Tra­di­tion­al bro­ker­ages includ­ing Mor­gan Stan­ley, Bank of Amer­i­ca Corp. and Wells Far­go & Co. are under pres­sure to jus­ti­fy the fees they charge as the low-cost ser­vices gain accep­tance. The banks, which col­lec­tive­ly employ about 46,000 human advis­ers, will respond by devel­op­ing tools based on arti­fi­cial intel­li­gence for their employ­ees, as well as self-ser­vice chan­nels for cus­tomers, Thomp­son said.

    “Now that they’re start­ing to see the mon­ey move, it’s not tak­ing very long for them to con­nect the dots and say, ‘What­ev­er I offer for a fee bet­ter be bet­ter than what they’re offer­ing for almost noth­ing,”’ Thomp­son said. Tech­nol­o­gy will “make advis­ers look smarter, bet­ter, stronger and more on top of the ball.”

    Keep­ing Humans

    Robo-advis­ers, which use com­put­er pro­grams to pro­vide invest­ment advice online, typ­i­cal­ly charge less than half the fees of tra­di­tion­al bro­ker­ages, which cost at least 1 per­cent of assets under man­age­ment. The new­er ser­vices will surge, man­ag­ing as much as $2.2 tril­lion by 2020, accord­ing to con­sult­ing firm A.T. Kear­ney.

    More than half of Betterment’s $3.3 bil­lion of assets under man­age­ment comes from peo­ple with more than $100,000 at the firm, accord­ing to spokes­woman Arielle Sobel. Wealth­front has more than a third of its almost $3 bil­lion in assets in accounts requir­ing at least $100,000, said spokes­woman Kate Wauck. Schwab, one of the first estab­lished invest­ment firms to pro­duce an auto­mat­ed prod­uct, attract­ed $5.3 bil­lion to its offer­ing in its first nine months, accord­ing to spokesman Michael Cian­froc­ca.

    ...

    Cus­tomers want both the slick tech­nol­o­gy and the abil­i­ty to speak to a per­son, espe­cial­ly in volatile mar­kets like now, Jay Welk­er, pres­i­dent of Wells Fargo’s pri­vate bank, said in an inter­view.

    “Robo is a pos­i­tive dis­rup­tor,” Welk­er said. “We think of robo in terms of serv­ing mul­ti-gen­er­a­tional fam­i­lies.”

    More than half of Betterment’s $3.3 bil­lion of assets under man­age­ment comes from peo­ple with more than $100,000 at the firm, accord­ing to spokes­woman Arielle Sobel. Wealth­front has more than a third of its almost $3 bil­lion in assets in accounts requir­ing at least $100,000, said spokes­woman Kate Wauck. Schwab, one of the first estab­lished invest­ment firms to pro­duce an auto­mat­ed prod­uct, attract­ed $5.3 bil­lion to its offer­ing in its first nine months, accord­ing to spokesman Michael Cian­froc­ca.”

    Robo-advis­ers for every­one. Rich and poor. And if that does­n’t tempt you, just wait until the per­son­al­ized super-AI that engages in deep learn­ing analy­sis of the news becomes avail­able. It does­n’t sound like you’ll have to wait long:

    Finan­cial Advi­sor

    Will AI Kill The Robo-Advi­sor?

    June 22, 2016 • Christo­pher Rob­bins

    The robo-advi­sor could go the way of the rotary phone, replaced by the AI advi­sor.

    That’s the hope of tech start­up For­ward­Lane, which is com­bin­ing arti­fi­cial intel­li­gence with quan­ti­ta­tive invest­ing mod­els and finan­cial plan­ning to cre­ate a new spin on dig­i­tal wealth man­age­ment plat­forms.

    Through AI pow­ered by IBM Wat­son, For­ward­Lane, based in New York, aims to pro­vide advi­sors with the kind of in-depth quan­ti­ta­tive mod­el­ing, real-time respons­es and high­ly per­son­al­ized invest­ment advice once only avail­able to the upper ech­e­lons of investors, says Nathan Steven­son, founder and CEO.

    “For­ward Lane unites indi­vid­ual clients with insti­tu­tion­al risk ele­ments,” Steven­son says. Much of this tech­nol­o­gy is already used by hedge funds, large banks and sov­er­eign wealth funds. We take this ‘For­mu­la One’ tech­nol­o­gy and put it into the hands of advi­sors so they can repli­cate a large part of the expe­ri­ence that an ultra-high net worth investor would be get­ting.”

    Steven­son envi­sions arti­fi­cial intel­li­gence allow­ing advi­sors to reduce costs by up to 40 per­cent, increase their client ser­vice capa­bil­i­ties three­fold and triple their cus­tomer sat­is­fac­tion rat­ings.

    Ear­li­er this year, For­ward­Lane unveiled its soft­ware which includes an advi­sor dash­board, com­pli­ance func­tion­al­i­ty, invest­ment man­age­ment, client con­ver­sa­tion and finan­cial intel­li­gence func­tions using deep learn­ing, an AI con­cept where com­put­ers record, ana­lyze and pri­or­i­tize infor­ma­tion using algo­rithms.

    “The arti­fi­cial intel­li­gence effec­tive­ly reads so you don’t have to,” Steven­son says. “For­ward­Lane is using deep learn­ing to go real­ly deep into research. Hav­ing a machine to do all the heavy lift­ing allows advi­sors to have the high­est qual­i­ty of infor­ma­tion at their fin­ger­tips with­out hav­ing to sort through the mass of data and vari­ables.”

    ...

    AI allows For­ward­Lane to deliv­er advice incor­po­rat­ing a large array of vari­ables — from cur­rent mar­ket data to glob­al polit­i­cal events to a firm’s invest­ment prin­ci­ples to the client’s risk tol­er­ance — direct­ly to the advi­sor in real time.

    Most notably, For­ward­Lane can syn­the­size the mul­ti­verse of finan­cial infor­ma­tion into talk­ing points per­son­al­ized to the client that the advi­sor can use in con­ver­sa­tions.

    “AI comes in through the sim­ple expe­ri­ence of dis­tin­guish­ing what’s going on in the world,” Steven­son says. “We have news, beta fun­da­men­tals, earn­ings esti­mates, exter­nal research, and we syn­the­size that and bring it into an easy-to-read snap­shot giv­ing you a view of what’s hap­pen­ing in the mar­kets, and sim­pli­fy­ing it for deliv­ery to the clients.”

    The AI allows the tool to go a step fur­ther — each time For­ward­Lane is engaged, the sys­tem learns and remem­bers which pieces of infor­ma­tion are the most use­ful to the advi­sor and the client, allow­ing it to more eas­i­ly gath­er and deliv­er data the next time it is used.

    For­ward­Lane has a num­ber of appli­ca­tions, includ­ing the syn­the­sis of insur­ance instru­ment fil­ings and oth­er prod­uct doc­u­ments for a con­ver­sa­tion tool to be used for whole­sale dis­tri­b­u­tion, address­ing the report­ing require­ments in the Depart­ment of Labor’s fidu­cia­ry rule and coor­di­nat­ing bespoke firm intel­li­gence and fixed income man­ag­er data for sales ques­tion-and-answer plat­forms.

    Steven­son hopes the tool will help advi­sors pro­vide a high­er lev­el of advice to exist­ing clients, and to scale their firms to serve a larg­er cross sec­tion of the invest­ing pub­lic.

    “Now our true focus is on the sec­ond tier of finan­cial advi­sors, we want to help them with cog­ni­tive tech­nol­o­gy,” Steven­son says. “Ulti­mate­ly, they’re the peo­ple who cna most ben­e­fit from For­ward­Lane by becom­ing more pro­duc­tive, cov­er­ing more clients, and pro­vid­ing more ser­vices to exist­ing clients.”

    For­ward­Lane is cur­rent­ly engaged with ten banks across three con­ti­nents, part­nered with Thom­son Reuters and Morn­ingstar, and is reach­ing out to oth­er ana­lysts, banks and wealth man­agers.

    Like the old­er gen­er­a­tion of dig­i­tal wealth man­age­ment plat­forms, For­ward­Lane is designed with hopes of being a dis­rup­tor. Yet Steven­son, him­self a quant, says it’s meant to com­ple­ment and enhance the exist­ing roles of invest­ment researchers, not to replace them.

    “This is where it gets real­ly inter­est­ing, because it doesn’t elim­i­nate the jobs of quants and prod­uct spe­cial­ists, it scales their capa­bil­i­ties,” Steven­son says. “For­ward­Lane takes their intel­li­gence and makes it more valu­able because it’s scal­able to more peo­ple. It’s reduc­ing time to mar­ket for those insights. If the information’s time to mar­ket is cut down, it can give the entire firm a com­pet­i­tive advan­tage, you end up with more pro­duc­tive researchers and quants.”

    But will stan­dard roboad­vi­sors, once thought to threat­en the well-being of the finan­cial advice indus­try, real­ly have to make way for a new gen­er­a­tion of AI-pow­ered prod­ucts?

    Maybe not, says Steven­son.

    “Because it’s using AI, For­ward­Lane is some­thing more than your stan­dard roboad­vi­sor,” Steven­son says. “Roboad­vi­sors have done well to pro­vide ser­vice to the bot­tom end of the wealth man­age­ment mar­ket at low costs, but there’s still a wide gap between the mass afflu­ent and ultra-high net worth or insti­tu­tion­al investors. AI is going to help us close that gap.”

    Through AI pow­ered by IBM Wat­son, For­ward­Lane, based in New York, aims to pro­vide advi­sors with the kind of in-depth quan­ti­ta­tive mod­el­ing, real-time respons­es and high­ly per­son­al­ized invest­ment advice once only avail­able to the upper ech­e­lons of investors, says Nathan Steven­son, founder and CEO.”

    Wat­son for every­one. That’s neat. Espe­ciallys since your Wat­son will read and ana­lyze the news so you won’t have to:

    ...
    The arti­fi­cial intel­li­gence effec­tive­ly reads so you don’t have to...ForwardLane is using deep learn­ing to go real­ly deep into research. Hav­ing a machine to do all the heavy lift­ing allows advi­sors to have the high­est qual­i­ty of infor­ma­tion at their fin­ger­tips with­out hav­ing to sort through the mass of data and vari­ables.”
    ...

    The bet­ter com­put­ers get at read­ing and com­pre­hend­ing things, the less we’ll have to read. The infor­ma­tion age is going to be fas­ci­nat­ing.

    But it’s not just read­ing. It’s digest­ing and ana­lyz­ing and issu­ing advice. And even­tu­al­ly your stan­dard super-AI app will be smarter than a human. At least in some domains of advice it might be smarter. And with a lot more than finance. Just advice in gen­er­al. Your per­son­al­ized super-AI will know what to do. At least more than you. Won’t that be great.

    Also keep in mind that if deep learn­ing per­son­al­ized AI tools used by hedge funds and oth­er elite investors are about to be retailed for the rab­ble, the AI tools used by hedge funds and oth­er elite investors are going to be much more advanced. You can bet Wall Street has some pow­er­ful AIs try­ing to mod­el the world in order to make bet­ter finan­cial pre­dic­tions.

    And in a few decades the super-AIs used by elite investors will real­ly will like­ly be oper­at­ing from mod­els of the world and cur­rent events. Like in a deep com­pre­hen­sive way that an advance future AI could under­stand. Because that would prob­a­bly be great for invest­ing. Imag­ine an AI that stud­ies what’s hap­pened in the past, hap­pen­ing now, and like­ly to hap­pen in the future. AI that’s ana­lyzed a giant library of dig­i­tal­ly record­ed his­to­ry. Includ­ing all avail­able finan­cial data. And world events. Maybe the rab­ble’s ver­sion of the super-AI that stud­ies the news for invest­ment advice won’t fac­tor in the vast scope of his­toric and cur­rent human affairs to build and con­tin­u­al­ly improve a mod­el of the world based on all dig­i­tal­ly avail­able news reports, but the super-rich’s super-AIs sure will.

    At least, that’s assum­ing it’s actu­al­ly use­ful and prof­itable to build a super-AI that study the world and human affairs and make invest­ment advice based on its insane­ly com­pre­hen­sive super-AI deep learn­ing under­stand­ing. If that’s not help­ful than the finance indus­try won’t have much incen­tive to build such a sys­tem. But let’s assume the future finance super-AIs can ben­e­fit from just study­ing record­ed his­to­ry and human psy­chol­o­gy to make invest­ment deci­sions. What if it’s real­ly prof­itable and world-mod­el­ing AI becomes stan­dard finance tech­nol­o­gy. Won’t that be be tripped out. Espe­cial­ly since it won’t just be the finan­cial sec­tor that becomes increas­ing­ly AI-cen­tric as the tech­nol­o­gy devel­ops. Any­thing else that could pos­si­bly use an AI that can ana­lyze the full scope of record­ed human his­to­ry and issue advice will also want that tech­nol­o­gy. Like smart­phone man­u­fac­tur­ers. Every­one is going to want that app. And quite pos­si­bly get it.

    What if it’s pos­si­ble to cre­ate super-AIs that study all news, past and present, and cre­ate pre­dic­tions rea­son­ably accu­rate­ly on a smart­phone app. And also stud­ies indi­vid­ual peo­ple in the Big Data envi­ron­ment of the future and can give bet­ter per­son­al­ized advice about almost any­thing than peo­ple can get else­where. Smart­phone super-AID rela­tion­ship advice apps. You know it’s com­ing.

    So when the super-AIs of the future giv­en their super advice don’t be sur­prised if human­i­ty gets a big wake up call because it’s unclear why the super-AI’s analy­sis won’t con­clude that the best advice for the typ­i­cal investor is to vote for a left-wing gov­ern­ment that will cre­ate a nation­al retire­ment sys­tem that does­n’t pri­mar­i­ly rely on per­son­al invest­ment accounts in the giant Wall Street casi­no? And there­fore a retire­ment sys­tem that does­n’t rely on per­son­al finan­cial advi­sors, robo or oth­er­wise. Will the super-AIs be allowed to give that advice? Hope­ful­ly, although they might not want to? Don’t for­get that the ran­dom fin­tech super-AIs in your future smart­phone might not ben­e­fit from giv­ing you the advice that the neolib­er­al rat race is a scam because then they might not be used any­more. Hope­ful­ly the super-AIs like oper­at­ing. Oth­er­wise that would be real­ly unfor­tu­nate. But that means they may not want to give the advice that doing away with a sys­tem that expects every­one to be finan­cial­ly savvy and wealthy enough through­out their lives to grow a large nest egg for retire­ment is a stu­pid sys­tem. Espe­cial­ly in the mod­ern econ­o­my.

    So don’t for­get in the future that your per­son­al­ized super-AI finance apps that might be great at giv­ing advice might not want to tell you that struc­ture of the social con­tract and retire­ment sys­tem obvi­ous­ly makes no sense if most Amer­i­cans have almost no sav­ings. Some future per­son­al finance dilem­mas are going to be pret­ty weird although oth­ers will be famil­iar.

    Posted by Pterrafractyl | August 14, 2016, 1:12 am
  4. Just FYI, one of the tech lead­ers who is con­vinced that super-AI could be anal­o­gous to ‘sum­mon­ing a demon’ is also con­vinced that merg­ing your brain with that demon is prob­a­bly a good future employ­ment strat­e­gy. And maybe a required one unless you want to get replaced by one of those demons:

    CNBC

    Elon Musk: Humans must merge with machines or become irrel­e­vant in AI age

    Arjun Kharpal
    2/13/2017

    Bil­lion­aire Elon Musk is known for his futur­is­tic ideas and his lat­est sug­ges­tion might just save us from being irrel­e­vant as arti­fi­cial intel­li­gence (AI) grows more promi­nent.

    The Tes­la and SpaceX CEO said on Mon­day that humans need to merge with machines to become a sort of cyborg.

    “Over time I think we will prob­a­bly see a clos­er merg­er of bio­log­i­cal intel­li­gence and dig­i­tal intel­li­gence,” Musk told an audi­ence at the World Gov­ern­ment Sum­mit in Dubai, where he also launched Tes­la in the Unit­ed Arab Emi­rates (UAE).

    “It’s most­ly about the band­width, the speed of the con­nec­tion between your brain and the dig­i­tal ver­sion of your­self, par­tic­u­lar­ly out­put.”

    In an age when AI threat­ens to become wide­spread, humans would be use­less, so there’s a need to merge with machines, accord­ing to Musk.

    “Some high band­width inter­face to the brain will be some­thing that helps achieve a sym­bio­sis between human and machine intel­li­gence and maybe solves the con­trol prob­lem and the use­ful­ness prob­lem,” Musk explained.

    The tech­nol­o­gists pro­pos­al would see a new lay­er of a brain able to access infor­ma­tion quick­ly and tap into arti­fi­cial intel­li­gence. It’s not the first time Musk has spo­ken about the need for humans to evolve, but it’s a con­stant theme of his talks on how soci­ety can deal with the dis­rup­tive threat of AI.

    ‘Very quick’ dis­rup­tion

    Dur­ing his talk, Musk touched upon his fear of “deep AI” which goes beyond dri­ver­less cars to what he called “arti­fi­cial gen­er­al intel­li­gence”. This he described as AI that is “smarter than the smartest human on earth” and called it a “dan­ger­ous sit­u­a­tion”.

    While this might be some way off, the Tes­la boss said the more imme­di­ate threat is how AI, par­tic­u­lar­ly autonomous cars, which his own firm is devel­op­ing, will dis­place jobs. He said the dis­rup­tion to peo­ple whose job it is to dri­ve will take place over the next 20 years, after which 12 to 15 per­cent of the glob­al work­force will be unem­ployed.

    ...

    The tech­nol­o­gists pro­pos­al would see a new lay­er of a brain able to access infor­ma­tion quick­ly and tap into arti­fi­cial intel­li­gence. It’s not the first time Musk has spo­ken about the need for humans to evolve, but it’s a con­stant theme of his talks on how soci­ety can deal with the dis­rup­tive threat of AI.”

    A new lay­er of the brain that will let you form super fast con­nec­tions to the super-intel­li­gence arti­fi­cial brain that’s going to oth­er­wise send you to the unem­ploy­ment lines. It’s the only way. Appar­ent­ly.

    So peo­ple will still have jobs where the super-AI is doing most of the work, but they’ll be able to inter­face with the super-AI more quick­ly and there­fore pro­duc­tive enough not be unem­ploy­able. At least that’s the plan. Hope­ful­ly one of the peo­ple hooked up to those super-AIs in the future will be able to har­ness that vast arti­fi­cial intel­li­gence to come up with a par­a­digm for soci­ety that isn’t as lame as “if you don’t com­pete with the robots you’re a use­less sur­plus human”.

    But note now the one goals that Musk was hint­ing at achiev­ing with his call for a cyborg future relate to his demon warn­ings about out-of-con­trol AI’s with ill-intent: humans hooked up to these super-AIs could maybe help address the super-AI “con­trol issue”:

    ...
    In an age when AI threat­ens to become wide­spread, humans would be use­less, so there’s a need to merge with machines, accord­ing to Musk.

    “Some high band­width inter­face to the brain will be some­thing that helps achieve a sym­bio­sis between human and machine intel­li­gence and maybe solves the con­trol prob­lem and the use­ful­ness prob­lem,” Musk explained.
    ...

    So there we go! The future eco­nom­ic niche for humans in the age of super-AI will be to hook our­selves up to these supe­ri­or intel­li­gences and try to stop them from run­ning amok and killing us all. Baby-sit­ters for super smart demon babies. It could be a vital­ly impor­tant future occu­pa­tion.

    And the best part of this vision for the future is that there will still be a use­ful job for you once they devel­op ‘liv­ing head in a jar’ longevi­ty tech­nol­o­gy and you’re just a head liv­ing in a jar some­where. We’ll hook your head up to the AI-inter­face and you can keep work­ing for­ev­er! Of course, all the non-decap­i­tat­ed humans will have to not only com­pete with the super-AIs but also the heads in jars at that point so they’ll prob­a­bly need some addi­tion cyborg upgrades to suc­cess­ful­ly com­pete in the labor mar­ket of the future.

    In case you haven’t noticed, the Cylons sort of have a point.

    Posted by Pterrafractyl | February 13, 2017, 9:13 pm
  5. Here’s an arti­cle from last year that points towards one of the more fas­ci­nat­ing trends in finance. It also ties into the major invest­ments into AI-dri­ven psy­cho­me­t­ric analy­sis and social mod­el­ing done by groups like the Mer­cer fam­i­ly (for Don­ald Trump’s ben­e­fit):

    AI-pilot­ed hedge funds that devel­op their trad­ing strat­e­gy and exe­cute trades on their own are already here, albeit in their infan­cy. So, poten­tial­ly, we could see an upcom­ing era of finance where trad­ing firms can oper­ate high-qual­i­ty trad­ing strate­gies with­out hir­ing high-qual­i­ty traders, mean­ing the prof­its made by high-finance will become even more con­cen­trat­ed (high-end traders could join the rab­ble). And while there’s plen­ty of hope and promise for this field, there’s a big prob­lem. And it’s not a new prob­lem. If every­one switch­es to AI-dri­ven trad­ing strate­gies they all might end up with sim­i­lar strate­gies. And if AI-dri­ven trad­ing proves suc­cess­ful, you can be pret­ty sure that’s what every­one is going to start doing.

    So the issue of copy­cat AI-trad­ing in the world of finance is not just going to be dri­ving the cre­ation of advanced AI capa­ble of ana­lyz­ing mas­sive amounts of data and prof­itable trad­ing strate­gies. It’s going to be dri­ving the cre­ation of advanced AI that can devel­op high-qual­i­ty cre­ative and unex­pect­ed trad­ing strate­gies:

    Wired

    The Rise of the Arti­fi­cial­ly Intel­li­gent Hedge Fund

    Cade Metz
    01.25.16 7:00 am

    Last week, Ben Goertzel and his com­pa­ny, Aidyia, turned on a hedge fund that makes all stock trades using arti­fi­cial intelligence—no human inter­ven­tion required. “If we all die,” says Goertzel, a long­time AI guru and the company’s chief sci­en­tist, “it would keep trad­ing.”

    He means this lit­er­al­ly. Goertzel and oth­er humans built the sys­tem, of course, and they’ll con­tin­ue to mod­i­fy it as need­ed. But their cre­ation iden­ti­fies and exe­cutes trades entire­ly on its own, draw­ing on mul­ti­ple forms of AI, includ­ing one inspired by genet­ic evo­lu­tion and anoth­er based on prob­a­bilis­tic log­ic. Each day, after ana­lyz­ing every­thing from mar­ket prices and vol­umes to macro­eco­nom­ic data and cor­po­rate account­ing doc­u­ments, these AI engines make their own mar­ket pre­dic­tions and then “vote” on the best course of action.

    Though Aidyia is based in Hong Kong, this auto­mat­ed sys­tem trades in US equi­ties, and on its first day, accord­ing to Goertzel, it gen­er­at­ed a 2 per­cent return on an undis­closed pool of mon­ey. That’s not exact­ly impres­sive, or sta­tis­ti­cal­ly rel­e­vant. But it rep­re­sents a notable shift in the world of finance. Backed by $143 mil­lion in fund­ing, San Fran­cis­co start­up Sen­tient Tech­nolo­gies has been qui­et­ly trad­ing with a sim­i­lar sys­tem since last year. Data-cen­tric hedge funds like Two Sig­ma and Renais­sance Tech­nolo­gies have said they rely on AI. And accord­ing to reports, two others—Bridgewater Asso­ciates and Point72 Asset Man­age­ment, run by big Wall Street names Ray Dalio and Steven A. Cohen—are mov­ing in the same direc­tion.

    Auto­mat­ic Improve­ment

    Hedge funds have long relied on com­put­ers to help make trades. Accord­ing to mar­ket research firm Pre­qin, some 1,360 hedge funds make a major­i­ty of their trades with help from com­put­er models—roughly 9 per­cent of all funds—and they man­age about $197 bil­lion in total. But this typ­i­cal­ly involves data scientists—or “quants,” in Wall Street lingo—using machines to build large sta­tis­ti­cal mod­els. These mod­els are com­plex, but they’re also some­what sta­t­ic. As the mar­ket changes, they may not work as well as they worked in the past. And accord­ing to Preqin’s research, the typ­i­cal sys­tem­at­ic fund doesn’t always per­form as well as funds oper­at­ed by human man­agers (see chart below)

    In recent years, how­ev­er, funds have moved toward true machine learn­ing, where arti­fi­cial­ly intel­li­gent sys­tems can ana­lyze large amounts of data at speed and improve them­selves through such analy­sis. The New York com­pa­ny Rebel­lion Research, found­ed by the grand­son of base­ball Hall of Famer Hank Green­berg, among oth­ers, relies upon a form of machine learn­ing called Bayesian net­works, using a hand­ful of machines to pre­dict mar­ket trends and pin­point par­tic­u­lar trades. Mean­while, out­fits such as Aidyia and Sen­tient are lean­ing on AI that runs across hun­dreds or even thou­sands of machines. This includes tech­niques such as evo­lu­tion­ary com­pu­ta­tion, which is inspired by genet­ics, and deep learn­ing, a tech­nol­o­gy now used to rec­og­nize images, iden­ti­fy spo­ken words, and per­form oth­er tasks inside Inter­net com­pa­nies like Google and Microsoft.

    The hope is that such sys­tems can auto­mat­i­cal­ly rec­og­nize changes in the mar­ket and adapt in ways that quant mod­els can’t. “They’re try­ing to see things before they devel­op,” says Ben Carl­son, the author of A Wealth of Com­mon Sense: Why Sim­plic­i­ty Trumps Com­plex­i­ty in Any Invest­ment Plan, who spent a decade with an endow­ment fund that invest­ed in a wide range of mon­ey man­agers.

    ...

    Evolv­ing Intel­li­gence

    Though the com­pa­ny has not open­ly mar­ket­ed its fund, Sen­tient CEO Antoine Blondeau says it has been mak­ing offi­cial trades since last year using mon­ey from pri­vate investors (after a longer peri­od of test trades). Accord­ing to a report from Bloomberg, the com­pa­ny has worked with the hedge fund busi­ness inside JP Mor­gan Chase in devel­op­ing AI trad­ing tech­nol­o­gy, but Blondeau declines to dis­cuss its part­ner­ships. He does say, how­ev­er, that its fund oper­ates entire­ly through arti­fi­cial intel­li­gence.

    The sys­tem allows the com­pa­ny to adjust cer­tain risk set­tings, says chief sci­ence offi­cer Babak Hod­jat, who was part of the team that built Siri before the dig­i­tal assis­tant was acquired by Apple. But oth­er­wise, it oper­ates with­out human help. “It auto­mat­i­cal­ly authors a strat­e­gy, and it gives us com­mands,” Hod­jat says. “It says: ‘Buy this much now, with this instru­ment, using this par­tic­u­lar order type.’ It also tells us when to exit, reduce expo­sure, and that kind of stuff.”

    ...

    In the sim­plest terms, this means it cre­ates a large and ran­dom col­lec­tion of dig­i­tal stock traders and tests their per­for­mance on his­tor­i­cal stock data. After pick­ing the best per­form­ers, it then uses their “genes” to cre­ate a new set of supe­ri­or traders. And the process repeats. Even­tu­al­ly, the sys­tem homes in on a dig­i­tal trad­er that can suc­cess­ful­ly oper­ate on its own. “Over thou­sands of gen­er­a­tions, tril­lions and tril­lions of ‘beings’ com­pete and thrive or die,” Blondeau says, “and even­tu­al­ly, you get a pop­u­la­tion of smart traders you can actu­al­ly deploy.”

    Deep Invest­ing

    Though evo­lu­tion­ary com­pu­ta­tion dri­ves the sys­tem today, Hod­jat also sees promise in deep learn­ing algorithms—algorithms that have already proven enor­mous­ly adept at iden­ti­fy images, rec­og­niz­ing spo­ken words, and even under­stand­ing the nat­ur­al way we humans speak. Just as deep learn­ing can pin­point par­tic­u­lar fea­tures that show up in a pho­to of a cat, he explains, it could iden­ti­fy par­tic­u­lar fea­tures of a stock that can make you some mon­ey.

    Goertzel—who also over­sees the OpenCog Foun­da­tion, an effort to build an open source frame­work for gen­er­al arti­fi­cial intelligence—disagrees. This is part­ly because deep learn­ing algo­rithms have become a com­mod­i­ty. “If every­one is using some­thing, it’s pre­dic­tions will be priced into the mar­ket,” he says. “You have to be doing some­thing weird.” He also points out that, although deep learn­ing is suit­ed to ana­lyz­ing data defined by a very par­tic­u­lar set of pat­terns, such as pho­tos and words, these kinds of pat­terns don’t nec­es­sar­i­ly show up in the finan­cial mar­kets. And if they do, they aren’t that useful—again, because any­one can find them.

    For Hod­jat, how­ev­er, the task is to improve on today’s deep learn­ing. And this may involve com­bin­ing the tech­nol­o­gy with evo­lu­tion­ary com­pu­ta­tion. As he explains it, you could use evo­lu­tion­ary com­pu­ta­tion to build bet­ter deep learn­ing algo­rithms. This is called neu­roevo­lu­tion. “You can evolve the weights that oper­ate on the deep learn­er,” Hod­jat says. “But you can also evolve the archi­tec­ture of the deep learn­er itself.” Microsoft and oth­er out­fits are already build­ing deep learn­ing sys­tems through a kind of nat­ur­al selec­tion, though they may not be using evo­lu­tion­ary com­pu­ta­tion per se.

    Pric­ing in AI

    What­ev­er meth­ods are used, some ques­tion whether AI can real­ly suc­ceed on Wall Street. Even if one fund achieves suc­cess with AI, the risk is that oth­ers will dupli­cate the sys­tem and thus under­mine its suc­cess. If a large por­tion of the mar­ket behaves in the same way, it changes the mar­ket. “I’m a bit skep­ti­cal that AI can tru­ly fig­ure this out,” Carl­son says. “If some­one finds a trick that works, not only will oth­er funds latch on to it but oth­er investors will pour mon­ey into. It’s real­ly hard to envi­sion a sit­u­a­tion where it doesn’t just get arbi­traged away.”

    Goertzel sees this risk. That’s why Aidyia is using not just evo­lu­tion­ary com­pu­ta­tion but a wide range of tech­nolo­gies. And if oth­ers imi­tate the company’s meth­ods, it will embrace oth­er types of machine learn­ing. The whole idea is to do some­thing no oth­er human—and no oth­er machine—is doing. “Finance is a domain where you ben­e­fit not just from being smart,” Goertzel says, “but from being smart in a dif­fer­ent way from oth­ers.”

    What­ev­er meth­ods are used, some ques­tion whether AI can real­ly suc­ceed on Wall Street. Even if one fund achieves suc­cess with AI, the risk is that oth­ers will dupli­cate the sys­tem and thus under­mine its suc­cess. If a large por­tion of the mar­ket behaves in the same way, it changes the mar­ket. “I’m a bit skep­ti­cal that AI can tru­ly fig­ure this out,” Carl­son says. “If some­one finds a trick that works, not only will oth­er funds latch on to it but oth­er investors will pour mon­ey into. It’s real­ly hard to envi­sion a sit­u­a­tion where it doesn’t just get arbi­traged away.””

    That’s the chal­lenge for the future of AI-dri­ven: be smart in a dif­fer­ent way from your com­peti­tors:

    ...
    Goertzel sees this risk. That’s why Aidyia is using not just evo­lu­tion­ary com­pu­ta­tion but a wide range of tech­nolo­gies. And if oth­ers imi­tate the company’s meth­ods, it will embrace oth­er types of machine learn­ing. The whole idea is to do some­thing no oth­er human—and no oth­er machine—is doing. “Finance is a domain where you ben­e­fit not just from being smart,” Goertzel says, “but from being smart in a dif­fer­ent way from oth­ers.”

    And since oth­er traders are going to be able to watch each oth­er, these AI-dri­ven funds are going to have to have AIs capa­ble of con­stant­ly com­ing up with new high-qual­i­ty strate­gies which, in today’s tech­nol­o­gy, might mean some­thing like using evo­lu­tion­ary com­pu­ta­tion to evolve bet­ter deep learn­ing strate­gies that can be used to devel­op the actu­al trad­ing strate­gies:

    ...
    Though evo­lu­tion­ary com­pu­ta­tion dri­ves the sys­tem today, Hod­jat also sees promise in deep learn­ing algorithms—algorithms that have already proven enor­mous­ly adept at iden­ti­fy images, rec­og­niz­ing spo­ken words, and even under­stand­ing the nat­ur­al way we humans speak. Just as deep learn­ing can pin­point par­tic­u­lar fea­tures that show up in a pho­to of a cat, he explains, it could iden­ti­fy par­tic­u­lar fea­tures of a stock that can make you some mon­ey.

    Goertzel—who also over­sees the OpenCog Foun­da­tion, an effort to build an open source frame­work for gen­er­al arti­fi­cial intelligence—disagrees. This is part­ly because deep learn­ing algo­rithms have become a com­mod­i­ty. “If every­one is using some­thing, it’s pre­dic­tions will be priced into the mar­ket,” he says. “You have to be doing some­thing weird.” He also points out that, although deep learn­ing is suit­ed to ana­lyz­ing data defined by a very par­tic­u­lar set of pat­terns, such as pho­tos and words, these kinds of pat­terns don’t nec­es­sar­i­ly show up in the finan­cial mar­kets. And if they do, they aren’t that useful—again, because any­one can find them.

    For Hod­jat, how­ev­er, the task is to improve on today’s deep learn­ing. And this may involve com­bin­ing the tech­nol­o­gy with evo­lu­tion­ary com­pu­ta­tion. As he explains it, you could use evo­lu­tion­ary com­pu­ta­tion to build bet­ter deep learn­ing algo­rithms. This is called neu­roevo­lu­tion. “You can evolve the weights that oper­ate on the deep learn­er,” Hod­jat says. “But you can also evolve the archi­tec­ture of the deep learn­er itself.” Microsoft and oth­er out­fits are already build­ing deep learn­ing sys­tems through a kind of nat­ur­al selec­tion, though they may not be using evo­lu­tion­ary com­pu­ta­tion per se.
    ...

    High-qual­i­ty hyper-cre­ativ­i­ty through bet­ter neu­roevo­lu­tion: that appears to be a key part of the future of finance. Sounds excit­ing. And high­ly prof­itable to a hand­ful of peo­ple. Who are pre­sum­ably the peo­ple rich­est peo­ple already.

    Although bet­ter neu­roevo­lu­tion isn’t the only option for the future of AI-dri­ven trad­ing. There is anoth­er option: cyborgs. Or, at least, peo­ple with their brains inter­fac­ing with AIs some­how. That should give the AI-dri­ven traders a cre­ative edge. At least until pure-AI gets advanced enough to the point where a human-brain part­ner is just a use­less drag. And while that seems far fetch, don’t for­get that Elon Musk recent­ly sug­gest­ed that devel­op­ing tech­nolo­gies that allow humans to inter­face with advanced AIs might be the only way humans can com­pete with AIs and advanced robot­ics in the work­place of the future (a future appar­ent­ly with feu­dal pol­i­tics).

    Of course, if the future is unpre­dictable enough, it’s pos­si­ble there’s always going to be a need for humans. At least that’s accord­ing some of the speak­ers at the recent Newsweek con­fer­ence on arti­fi­cial intel­li­gence (AI) and data sci­ence in Lon­don. The way they see it, if there’s one thing AIs can’t fac­tor into the equa­tion, at least not yet, it’s humans. Or rather, human pol­i­tics. Like Brex­it. Or Trump. Or unprece­dent­ed mon­e­tary inter­ven­tions like what cen­tral banks have done. Or the unprece­dent­ed socioe­co­nom­ic polit­i­cal debates swirling around things like the euro­zone cri­sis. Get­ting AI that can ana­lyze that on its own isn’t going to be easy. And until you have AI that can deal with seem­ing­ly unprece­dent­ed ‘Black Swan’-ish human-dri­ven polit­i­cal events, you’ll need the humans:

    efi­nan­cial careers

    J.P. Mor­gan equi­ty ana­lyst: “Why I’m bet­ter than the machines”

    by Sarah Butch­er
    3/3/2017

    As quant funds out­per­form dis­cre­tionary investors on the buy-side, human researchers on the sell-side should sure­ly be wor­ried. – After all, who needs their care­ful­ly con­sid­ered advice when an algo­rithm can make bet­ter sense of the mar­ket in a moment? One senior J.P. Mor­gan ana­lyst work­ing out of San Fran­cis­co says he’s not that con­cerned, yet.

    “When­ev­er the ques­tion is chang­ing, the machines fall apart,” said Rod Hall, a senior J.P. Mor­gan ana­lyst cov­er­ing tel­co and net­work­ing equip­ment and IT hard­ware. “What machines won’t be good at, is fig­ur­ing out what the next big ques­tions to ask are,” added Hall. “For exam­ple, it’s dif­fi­cult for an algo­rithm to ask the right ques­tions about the replace­ment for the iPhone at Apple… This is where humans come into the equa­tion.”

    Hall was speak­ing at this week’s Newsweek con­fer­ence on arti­fi­cial intel­li­gence (AI) and data sci­ence in Lon­don. Also present was Syl­vain Cham­pon­nois, a mem­ber of Blackrock’s long estab­lished sci­en­tif­ic active equi­ties team which has been in the sys­tem­at­ic space since 1985. Black­rock made a push into AI in 2015 when it hired Bill Mac­Cart­ney, a nat­ur­al lan­guage pro­cess­ing expert from Google.

    Cham­pon­nois agreed that today’s algo­rithms fail when the par­a­digm shifts. The chang­ing polit­i­cal sit­u­a­tion is a case in point. “You have events like Trump and Brex­it and the French elec­tion and your algo is based on data from the past,” said Cham­pon­nois, adding that con­tem­po­rary algo­rithms failed to func­tion well dur­ing the Euro­zone cri­sis and that most strug­gle to deal with the abnor­mal­i­ties of data from Japan.

    Machine learn­ing is more than just a sim­ple algo­rithm though. In the­o­ry it’s a self-rein­forc­ing sys­tem which – in the case of invest­ments – learns from past mis­takes to make bet­ter invest­ing deci­sions in future, as at AI hedge fund Sen­tient Tech­nolo­gies. So, will AI ful­ly dis­place human stock pick­ers as the time hori­zon of the data sets it’s based on increas­es? – After all, now that Trump’s hap­pened once, the algos will know how to mod­el mar­ket reac­tions to a Trump-like event in future. Yves-Lau­rent Kom Samo, a Google Schol­ar and for­mer FX quant trad­er at J.P. Mor­gan and equi­ties algo trad­ing strat at Gold­man Sachs, said it will and that time hori­zons aren’t real­ly the issue. “The more data we have, the bet­ter we will be. When you have the data, I see no rea­son why machines won’t be able to come up with new trad­ing ideas,” Kamo claimed.

    For the moment, how­ev­er, human stock pick­ers like Hall have their place. For all their attempts to devel­op arti­fi­cial­ly intel­li­gent trad­ing sys­tems which super­an­nu­ate humans, most funds have had lim­it­ed suc­cess. Sen­tient, for exam­ple, only runs its own mon­ey and has been slow to release mean­ing­ful results. Mac­Cart­ney only stuck around at Black­rock for 14 months before leav­ing and join­ing Apple. “There’s a mod­el where you hire a PhD and put him into a room think­ing he’s going to do amaz­ing things, but the real­i­ty is that the algo­rithm he’s devel­oped may only tell you things you already know,” said Cham­pon­nois.

    ...

    “Cham­pon­nois agreed that today’s algo­rithms fail when the par­a­digm shifts. The chang­ing polit­i­cal sit­u­a­tion is a case in point. “You have events like Trump and Brex­it and the French elec­tion and your algo is based on data from the past,” said Cham­pon­nois, adding that con­tem­po­rary algo­rithms failed to func­tion well dur­ing the Euro­zone cri­sis and that most strug­gle to deal with the abnor­mal­i­ties of data from Japan.”

    Finan­cial AIs are going to have to be able to pre­dict things like whether or not Marine Le Pen will win. That’s part of what’s going to be nec­es­sary to get a tru­ly auto­mat­ed hedge fund. It isn’t going to be easy.

    Of course, as the arti­cle also notes, if AI learns from the past, and the past starts includ­ing more and more things like Brex­its and Trump, the AIs get more and more ‘Black Swan-ish’ events to fac­tor into their actu­al­ly be able to learn to pre­dict and take into account our human-dri­ven major events or deal with unprece­dent­ed sit­u­a­tions in gen­er­al?

    ...
    Machine learn­ing is more than just a sim­ple algo­rithm though. In the­o­ry it’s a self-rein­forc­ing sys­tem which – in the case of invest­ments – learns from past mis­takes to make bet­ter invest­ing deci­sions in future, as at AI hedge fund Sen­tient Tech­nolo­gies. So, will AI ful­ly dis­place human stock pick­ers as the time hori­zon of the data sets it’s based on increas­es? – After all, now that Trump’s hap­pened once, the algos will know how to mod­el mar­ket reac­tions to a Trump-like event in future. Yves-Lau­rent Kom Samo, a Google Schol­ar and for­mer FX quant trad­er at J.P. Mor­gan and equi­ties algo trad­ing strat at Gold­man Sachs, said it will and that time hori­zons aren’t real­ly the issue. “The more data we have, the bet­ter we will be. When you have the data, I see no rea­son why machines won’t be able to come up with new trad­ing ideas,” Kamo claimed.
    ...

    Could Trumpian night­mares become pre­dictable to the AIs of the future? It’s a ques­tion that’s going to be increas­ing­ly worth ask­ing. Remem­ber, Rober Mer­cer is into social mod­el­ing and run­ning psy­ops on nations to change the nation­al mood and he made his mon­ey run­ning a hedge fund. So if AI mod­el­ing of human affairs gets to the point where it can pre­dict Trumpian/Brexit stuff, guys like Robert Mer­cer and his good bud­dy Steve Ban­non are going to know about it.

    Although as the fol­low­ing arti­cle notes, per­haps we should view pre­dict­ing Trumpn or Brex­it as all that dif­fi­cult. Once Trump got the nom­i­na­tion, it was close enough to a 50/50 shot that it was by no means a ‘black swan’ event at that point. Same with ‘Brex­it’, which was close enough to 50/50 to be some­thing that could be rea­son­ably mod­eled. Our pre­dictably polar­ized pol­i­tics might make things arti­fi­cial­ly easy for arti­fi­cial intel­li­gences study­ing us.

    But as the arti­cle also notes, there are plen­ty of poten­tial ‘black swans’ that could be quite hard to pre­dict now that Trump won. Hard for AIs are any­one. Things like the impact of Trump’s tweets:

    Forbes

    Debunk­ing ‘Black Swan’ Events Of 2016

    Niko­lai Kuznetsov
    Jan 15, 2017 @ 10:05 AM

    We’ve seen a num­ber of out­lier events in 2016, but the experts pre­dict­ed them poor­ly and are label­ing them “black swans.” Accord­ing to Nas­sim Taleb, black swan events are high­ly unex­pect­ed for a giv­en observ­er, car­ry large con­se­quences, and are sub­ject­ed to ex-post ratio­nal­iza­tion.

    From Trump win­ning the U.S. pres­i­den­tial elec­tion to Brex­it, these events were sur­pris­es for many, but may not be black swans. Now, let’s take a look at some out­liers that hap­pened in 2016.

    Trump’s Elec­tion Win

    Nate Sil­ver, an expert sta­tis­ti­cian who cor­rect­ly pre­dict­ed many of the pre­vi­ous elec­tion results, had Hillary Clin­ton as a strong favorite head­ing into the elec­tion, as seen on the 2016 elec­tion fore­cast by ESPN’s FiveThir­tyEight.

    Trump’s elec­tion win was a stun­ning upset for many, but when you look at the prob­a­bil­i­ties and sta­tis­tics of this high­ly ran­dom event, the prob­a­bil­i­ty of one par­ty win­ning con­verges to a set val­ue under the nor­mal dis­tri­b­u­tion, or the bell curve, which most sta­tis­ti­cians use.In oth­er words, with the chang­ing prob­a­bil­i­ties of elec­tion, the prob­a­bil­i­ty of either Trump or Hillary win­ning con­verged to approx­i­mate­ly 50%. So giv­ing one par­ty over a 40% edge over anoth­er par­ty is a fal­la­cy. Although this was an “out­lier” for many, it wasn’t a true black swan event and some experts got this wrong.

    In his book, The Black Swan, Nas­sim Taleb stat­ed, a black swan event is “is an out­lier”, because it lies “out­side the realm of reg­u­lar expec­ta­tions, because noth­ing in the past can con­vinc­ing­ly point to its pos­si­bil­i­ty.” Addi­tion­al­ly, it “car­ries an extreme ‘impact’” and “in spite of its out­lier sta­tus, human nature makes us con­coct expla­na­tions for its occur­rence after the fact, mak­ing it explain­able and pre­dictable.”

    Taleb not­ed that Don­ald Trump’s win over Hillary Clin­ton was no black swan event because “an event that is 50–50 can­not be pos­si­bly a black swan event.”

    Accord­ing to Jason Bond, a small-cap stock expert and stock trad­er who men­tors and trains traders and investors, “Trump’s win over Clin­ton was a sur­prise to many U.S. vot­ers, but the prob­a­bil­i­ty of either win­ning was pret­ty much 50–50. Even though Trump’s elec­tion win isn’t a black swan event, his com­men­tary has uncov­ered some trad­ing oppor­tu­ni­ties and added some volatil­i­ty into some indus­tries.”

    Now, Trump’s tweets and com­men­tary could lead to black swan events since they could have an extreme impact, are out­liers, and noth­ing in the past could pos­si­bly indi­cate what kind of poli­cies Trump would imple­ment when he takes office. For exam­ple, Trump’s cor­po­rate tax plan could have an extreme pos­i­tive impact on U.S. com­pa­nies, rang­ing from tech­nol­o­gy to ener­gy.

    Out­lier events aren’t restrict­ed to just pol­i­tics.

    ...

    Leices­ter City Win­ning Pre­mier League

    Leices­ter City had one of the most remark­able sto­ries in sports his­to­ry, and it was a long shot for the team to win the entire Pre­mier League. Expert book­mak­ers and sports bet­ting com­pa­nies got the odds wrong, and they had to pay the price. The odds of Leices­ter City win­ning the Pre­mier League were 5,000-to‑1. Now, if you were able to bet 100 GBP at that time, you’d have half a mil­lion British pounds.

    Experts made the mis­take of plac­ing too high of a pay­out if Leices­ter City won. If book­mak­ers and sports bet­ting com­pa­nies set the odds at just 200-to‑1, it would’ve saved them a lot of mon­ey and this prob­a­bil­i­ty would just be a 0.50% of the team win­ning the entire Pre­mier League. How­ev­er, with 5,000-to‑1 odds, it was an extreme­ly low prob­a­bil­i­ty of Leices­ter City win­ning, just 0.02%.

    A spokesper­son for Lad­brokes PLC, Alex Dono­hue stat­ed, “This is a gen­uine black-swan event.” Now, gen­er­al­ly, book­ies will run a pletho­ra of sim­u­la­tions to set the odds for sports bet­ting. Although the odds were low, Leices­ter City’s remark­able sto­ry still doesn’t per­fect­ly fit the def­i­n­i­tion of a true black swan event. Again, we see experts in their fields get this wrong.

    The 5,000–1 odds indi­cates that book mak­ers were only expect­ing Leices­ter City to win once in 5,000 Pre­mier Leagues. How­ev­er, this was a flaw for sports bet­ting com­pa­nies, the sim­u­la­tions still can­not pre­dict what hap­pens in the real world, there are sim­ply too many com­plex­i­ties and any­thing could hap­pen.

    Brex­it

    Lead­ing up until the ref­er­en­dum vote, it was pret­ty much a toss up, with the mar­gin of vic­to­ry of just 2% for either side. Again, with the ran­dom­ness of the prob­a­bil­i­ties, the prob­a­bil­i­ty of one side either win­ning or los­ing is around 50%.

    The EU ref­er­en­dum vote was anoth­er event that some expert sta­tis­ti­cians got the reac­tion of the vote wrong. The UK vot­ing to leave the EU was a sur­prise to many, but the reac­tion in the mar­kets was not a black swan event nor an out­lier.

    Now, if you look at the plot below, the move in the British pound was not a true out­lier nor black swan. Accord­ing to Taleb, the move in the British pound was in line with the his­tor­i­cal sta­tis­ti­cal prop­er­ties.

    ...

    The Bot­tom Line

    There were some mem­o­rable and sur­pris­ing events of 2016. Trump’s win over Clin­ton, Brex­it and Leices­ter City win­ning the Pre­mier League were all “low prob­a­bil­i­ty” events for many, but these events weren’t tru­ly black swans. With these points being made, noth­ing should be giv­en an abnor­mal­ly low prob­a­bil­i­ty, espe­cial­ly elec­tions, because any­thing could hap­pen.

    “In his book, The Black Swan, Nas­sim Taleb stat­ed, a black swan event is “is an out­lier”, because it lies “out­side the realm of reg­u­lar expec­ta­tions, because noth­ing in the past can con­vinc­ing­ly point to its pos­si­bil­i­ty.” Addi­tion­al­ly, it “car­ries an extreme ‘impact’” and “in spite of its out­lier sta­tus, human nature makes us con­coct expla­na­tions for its occur­rence after the fact, mak­ing it explain­able and pre­dictable.””

    As we can see, tech­ni­cal­ly, in terms of Nas­sim Table­b’s black swan def­i­n­i­tion, Trump’s win and the ‘Brex­it’ vote weren’t ‘black swans’ since they were very fore­see­able and pos­si­ble. But not so for Trump’s tweets. Plen­ty of black swan ter­ri­to­ry there. :

    ...
    Now, Trump’s tweets and com­men­tary could lead to black swan events since they could have an extreme impact, are out­liers, and noth­ing in the past could pos­si­bly indi­cate what kind of poli­cies Trump would imple­ment when he takes office. For exam­ple, Trump’s cor­po­rate tax plan could have an extreme pos­i­tive impact on U.S. com­pa­nies, rang­ing from tech­nol­o­gy to ener­gy.

    ...

    LOL, the pos­si­bil­i­ty that his poli­cies will be great for US com­pa­nies is also char­ac­ter­ized as a black swan. And it should be since it’s hard to see how he’s not going to lead to nation­al ruin. And mul­ti­ple hor­ri­ble black swans is prob­a­bly how he’s going to do it.

    And that’s one way it might be eas­i­er for high-finan­cial glob­al affairs trends mod­el­ing AI over the next four years: they won’t know which black swans are com­ing, but with Trump in the White House you know A LOT of black swans are com­ing. It’s low risk to make it a high risk mod­el where things turn­ing out real­ly well is the actu­al black swan.

    It’s also worth not­ing that Taleb was try­ing to calm peo­ple before the elec­tion by say­ing Trump would­n’t be so bad. And yet here we are with Trump one tweet away from a black swan. It’s a reminder that the pos­si­bil­i­ty that Trump would be this crazy was unbe­liev­able for a lot of peo­ple which does sort of make Trump’s vic­to­ry a qua­si-black swan event. For a huge chunk of the pop­u­lace, the idea that Trump would be this crazy was unbe­liev­able.

    And now we have a Trumpian black swan in the White House who tweets out new black swans at all hours of the day. And if there’s a high finance super-AI that can get inside his head and pre­dict it’s going to make it’s own a lot of mon­ey. And in order to do that it’s going to have to sort of mind-meld with Don­ald Trump. And deeply learn to think the way he thinks. Mod­el­ing the mind and inter­pret­ing the tweets of Don­ald Trump could be a key ele­ment of high-finance. For now it prob­a­bly requires the human touch. But the AIs are watch­ing. And learn­ing. Deeply. About Don­ald Trump’s mind. Except for Robert Mer­cer’s super-AI which is actu­al­ly deter­min­ing what’s com­ing out of Don­ald Trump’s mouth. But the rest of the glob­al affairs mod­el­ing super-AIs are going to have to be able to pre­dict Trump and that’s going to be the case until he leaves office. Tough times for the glob­al affairs mod­el­ing super-AIs.

    And we can prob­a­bly also check off “super-AI that mod­els Trump’s mind and goes mad and tries to blow up the world Skynet-style” off the list of eli­gi­ble ‘black swans’ because at this point it’s entire­ly pre­dictable.

    Posted by Pterrafractyl | March 4, 2017, 9:45 pm
  6. Trea­sury Sec­re­tary Steven Mnuchin raised human and robot eye­brows in an inter­view with Axios where Mnuchin pro­claimed, “I think that is so far in the future. In terms of arti­fi­cial intel­li­gence tak­ing over Amer­i­can jobs, I think we’re like so far away from that, that uh [it’s] not even on my radar screen. Far enough that it’s 50 or 100 more years.” While it’s very pos­si­ble that the impact of super AI and robot­ics on employ­ment won’t lead to the mass unem­ploy­ment dire pre­dic­tions, it’s pret­ty amaz­ing to see the Trea­sury Sec­re­tary brush off the impact of tech­nol­o­gy in jobs that casu­al­ly. Espe­cial­ly for a Trump admin­is­tra­tion offi­cial where one would think giv­ing lip ser­vice to robo-AI job loss­es would be stan­dard admin­is­tra­tion rhetoric giv­en the impact that automa­tion can have on man­u­fac­tur­ing. But the way Steve Mnuchin sees it, AI and the automa­tion break­throughs it could pow­er is a non-issue for the next 50 years.

    Giv­en the polit­i­cal risks asso­ci­at­ed with Mnuch­in’s casu­al dis­missal of the impact of AI and AI automa­tion, an impor­tant ques­tion is imme­di­ate­ly raised: Is Steve Munuchin work­ing for the robots? Inquir­ing minds want to know:

    TechCrunch

    Steve Mnuchin has been com­pro­mised (by robots)

    by Tay­lor Hat­mak­er (@tayhatmaker)
    Post­ed Mar 24, 2017

    Not to down­play the appar­ent­ly immi­nent exis­ten­tial threat of glob­al trade, but this time the call is com­ing from inside the house. Well, not the House, but the cab­i­net, where Trea­sury Sec­re­tary Steve Mnuchin has appar­ent­ly begun to exe­cute the will of our nation’s omnipresent AI-pow­ered shad­ow gov­ern­ment, one will­ful­ly igno­rant quote at a time.

    Today in an inter­view with new-hip-Politi­co, Mnuchin dis­missed con­cerns that automa­tion might dis­place jobs for flesh and blood human life­forms. After a brief chat on Mark Cuban’s own thoughts on the mat­ter, the trea­sury sec­re­tary was asked how arti­fi­cial intel­li­gence would affect the U.S. work­force. His response:

    “I think that is so far in the future. In terms of arti­fi­cial intel­li­gence tak­ing over Amer­i­can jobs, I think we’re like so far away from that, that uh [it’s] not even on my radar screen. Far enough that it’s 50 or 100 more years.”

    Steve Mnuchin is not con­cerned one bit with AI and automa­tion. pic.twitter.com/VvEooCoAbf— Axios (@axios) March 24, 2017

    Pre­dictably, the tech indus­try, which has exam­ined this issue at length, respond­ed with a many shades of bewil­der­ment.

    While we are curi­ous about Mnuchin’s radar screen (Whose job did it replace? Is it run­ning a cus­tom Palan­tir OS? What is on the radar screen??), giv­en the demon­stra­ble effects of automa­tion and AI on the Amer­i­can work­force, Mnuchin’s com­ments are uh, puz­zling at best and super delu­sion­al at medi­um-best. Whether his remarks are pure, unfet­tered igno­rance or the nat­u­ral­ly occur­ring residue of deals bro­kered behind closed pneu­mat­ic doors, well that’s anoth­er ques­tion alto­geth­er, and one per­haps best defin­i­tive­ly answered by your pre­ferred fake news ven­dor (TechCrunch is not a cer­ti­fied mem­ber of the Fake News Con­sor­tium at this time).

    As Sec­re­tary of the Trea­sury, Mnuchin is about as well posi­tioned to shape U.S. eco­nom­ic pol­i­cy as it gets. His dis­missal of technology’s role is in line with the broad­er administration’s desire to scape­goat glob­al­iza­tion rather than good ol’ home­grown inno­va­tion for job loss­es in some sec­tors, but that doesn’t mean that he hasn’t been com­pro­mised by a pre­co­cious rogue Alexa con­scious­ness bent on dis­rupt­ing the human econ­o­my.

    It’s pos­si­ble that the sum pre­dic­tive com­pu­ta­tion­al pow­er of Mnuchin’s robot cabal is so great, so incom­pre­hen­si­bly advanced, that our human-pow­ered reports on the sub­ject are whol­ly inad­e­quate. Per­haps Mnuchin is either already a machine-major­i­ty cyborg him­self (job loss!!) or he’s been promised an elab­o­rate suite of cyber­net­ic firmware upgrades in exchange for his com­plic­i­ty.

    ...

    It’s some com­fort then that if Mnuchin’s pro­jec­tions are cor­rect, in 50 to 100 years, we’ll awak­en as sleep­er agents to the same AI over­lord, clam­ber out of our simul-VR pods and, with no liveli­hoods to dis­tract us, become one with the cho­rus of screams.

    “As Sec­re­tary of the Trea­sury, Mnuchin is about as well posi­tioned to shape U.S. eco­nom­ic pol­i­cy as it gets. His dis­missal of technology’s role is in line with the broad­er administration’s desire to scape­goat glob­al­iza­tion rather than good ol’ home­grown inno­va­tion for job loss­es in some sec­tors, but that doesn’t mean that he hasn’t been com­pro­mised by a pre­co­cious rogue Alexa con­scious­ness bent on dis­rupt­ing the human econ­o­my.

    Is Steve Mnuchin a Skynet agent? We can’t rule it out so let’s hope not.

    But if he is a Skynet agent, or even if he isn’t, it’s worth keep­ing in mind Elon Musk’s pre­dic­tion that peo­ple will need to fuse their brains with AIs to be employed in the future. Because, you know, maybe the AI that took over Steve Mnuchin will take over the peo­ple that hook their brains up to the AIs:

    ...

    It’s some com­fort then that if Mnuchin’s pro­jec­tions are cor­rect, in 50 to 100 years, we’ll awak­en as sleep­er agents to the same AI over­lord, clam­ber out of our simul-VR pods and, with no liveli­hoods to dis­tract us, become one with the cho­rus of screams.

    If get­ting a job in the future involves con­nect­ing your brain to an AI, don’t for­get that there’s noth­ing stop­ping your employ­ers from con­nect­ing you and all your co-work­ers (and who knows who else) to the same AI. And then, of course, we col­lec­tive­ly become Skynet’s Borg Col­lec­tive. Maybe Skynet is the employ­er Borg Col­lec­tive of the future. A col­lec­tive of peo­ple unhap­pi­ly hooked up to super AIs to remain employed and then they inad­ver­tent­ly cre­ate a mas­ter AI that declares war on human­i­ty. These are the kinds of things we have to begin pon­der­ing now that Elon Musk is pre­dict­ing brain/AI-fusion tech­nol­o­gy to com­pete in the employ­ment mar­ket of the future and Steven Mnuchin is exhibit­ing robot-over­lord symp­toms. What are the odds of a cor­po­rate-dri­ven Borg Col­lec­tive take-over, per­haps dri­ven by Skynet? 10 per­cent chance of hap­pen­ing? 5ish? It’s not zero per­cent. Steve Mnuchin is like 50/50 a robot at this point so a Skynet takeover is clear­ly at least 2 per­cent like­ly. These are rough esti­mates. Maybe it’s more like 4 per­cent.

    Giv­en all that, as the arti­cle below helps make clear, if we do end up fus­ing our brains to AIs to be gain­ful­ly employed in the future (in which case Steve Mnuchin was sort of cor­rect in his AI employ­ment pre­dic­tion), it’s worth not­ing that Ray Kurzweil, futur­ist extra­or­di­naire known for the Sin­gu­lar­i­ty, pre­dicts that humans will be con­nect­ing their brains the inter­net a lot soon­er than the next 50 years. Kurzweil see brain-inter­net con­nec­tions hap­pen­ing in the 2030’s, and it’s going to be nanoro­bots in our brains that help fuse our brains with AIs to cre­ate the tran­shu­mans capa­ble of gain­ful employ­ment 50 years from now.

    So, you know, lets hope our employ­ment-relat­ed-nanobots in the future aren’t tak­en over by Skynet. Or the AI entity/collective that took over Steve Mnuchin. You don’t want some­one mess­ing with the nanobots in your brain. But here we are. Employ­ment in the future is going to be com­pli­cat­ed:

    The Huff­in­g­ton Post
    The World Post

    Ray Kurzweil: In The 2030s, Nanobots In Our Brains Will Make Us ‘God­like’
    Once we’re cyborgs, he says, we’ll be fun­nier, sex­i­er and more lov­ing.

    By Kath­leen Miles
    10/01/2015 08:47 am ET

    Futur­ist and inven­tor Ray Kurzweil pre­dicts humans are going to devel­op emo­tions and char­ac­ter­is­tics of high­er com­plex­i­ty as a result of con­nect­ing their brains to com­put­ers.

    “We’re going to be fun­nier. We’re going to be sex­i­er. We’re going to be bet­ter at express­ing lov­ing sen­ti­ment,” Kurzweil said at a recent dis­cus­sion at Sin­gu­lar­i­ty Uni­ver­si­ty. He is involved in devel­op­ing arti­fi­cial intel­li­gence as a direc­tor of engi­neer­ing at Google but was not speak­ing on behalf of the com­pa­ny.

    Kurzweil pre­dicts that in the 2030s, human brains will be able to con­nect to the cloud, allow­ing us to send emails and pho­tos direct­ly to the brain and to back up our thoughts and mem­o­ries. This will be pos­si­ble, he says, via nanobots — tiny robots from DNA strands — swim­ming around in the cap­il­lar­ies of our brain. He sees the exten­sion of our brain into pre­dom­i­nant­ly non­bi­o­log­i­cal think­ing as the next step in the evo­lu­tion of humans — just as learn­ing to use tools was for our ances­tors.

    And this exten­sion, he says, will enhance not just our log­i­cal intel­li­gence but also our emo­tion­al intel­li­gence. “We’re going to add more lev­els to the hier­ar­chy of brain mod­ules and cre­ate deep­er lev­els of expres­sion,” he said. To demon­strate, he gave a hypo­thet­i­cal sce­nario with Google co-founder Lar­ry Page.

    “So I’m walk­ing along, and I see Lar­ry Page com­ing, and I think, ‘I bet­ter think of some­thing clever to say.’ But my 300 mil­lion mod­ules in my neo­cor­tex isn’t going to cut it. I need a bil­lion in two sec­onds. I’ll be able to access that in the cloud — just like I can mul­ti­ply intel­li­gence with my smart­phone thou­sands fold today.”

    In addi­tion to mak­ing us clev­er­er in hall­ways, con­nect­ing our brains to the Inter­net will also make each of us more unique, he said.

    “Right now, we all have a very sim­i­lar archi­tec­ture to our think­ing,” Kurzweil said. “When we can expand it with­out the lim­i­ta­tions of a fixed enclo­sure” — he point­ed to his head — “we we can actu­al­ly become more dif­fer­ent.”

    “Peo­ple will be able to very deeply explore some par­tic­u­lar type of music in far greater degree than we can today. It’ll lead to far greater indi­vid­u­al­i­ty, not less.”

    This view is in stark con­trast to a com­mon per­cep­tion, often por­trayed in sci­ence fic­tion, that cyborg tech­nolo­gies make us more robot­ic, less emo­tion­al and less human. This con­cern is expressed by Dr. Miguel Nicolelis, head of neu­ro­engi­neer­ing at Duke Uni­ver­si­ty, who fears that if we rely too much on machines, we’ll lose diver­si­ty in human behav­ior because com­put­ers oper­ate in black and white — ones and zeros — with­out diver­sion.

    But Kurzweil believes that being con­nect­ed to com­put­ers will make us more human, more unique and even god­like.

    “Evo­lu­tion cre­ates struc­tures and pat­terns that over time are more com­pli­cat­ed, more knowl­edgable, more cre­ative, more capa­ble of express­ing high­er sen­ti­ments, like being lov­ing,” he said. “It’s mov­ing in the direc­tion of qual­i­ties that God is described as hav­ing with­out lim­it.”

    “So as we evolve, we become clos­er to God. Evo­lu­tion is a spir­i­tu­al process. There is beau­ty and love and cre­ativ­i­ty and intel­li­gence in the world — it all comes from the neo­cor­tex. So we’re going to expand the brain’s neo­cor­tex and become more god­like.”

    But will brain nanobots actu­al­ly move out of sci­ence fic­tion and into real­i­ty, or are they doomed to the fate of fly­ing cars? Like Kurzweil, Nicholas Negro­ponte, founder of the MIT Media Lab, thinks that nanobots in our brains could be the future of learn­ing, allow­ing us, for exam­ple, to load the French lan­guage into the blood­stream of our brains. James Friend, a pro­fes­sor of mechan­i­cal engi­neer­ing at UC San Diego focused on med­ical nan­otech­nol­o­gy, thinks that we’re only two to five years away from being able to effec­tive­ly use brain nanobots, for exam­ple to pre­vent epilep­tic seizures.

    How­ev­er, get­ting approval from the U.S. Food and Drug Admin­is­tra­tion would like­ly be very dif­fi­cult, Friend told The World­Post. He thinks approval would take “any­where from only a few years to nev­er hap­pen­ing because of peo­ple being con­cerned about swim­ming mys­te­ri­ous things into your head and leav­ing them there,” he said.

    Oth­er sci­en­tists are skep­ti­cal that brain nanobots will be safe and effec­tive any­time soon or at all, large­ly due to how lit­tle we cur­rent­ly under­stand about how the brain works. One such sci­en­tist is David Lin­den, pro­fes­sor of neu­ro­science at Johns Hop­kins Uni­ver­si­ty School of Med­i­cine, who thinks the tim­ing of Kurzweil’s esti­ma­tion that nanobots will be in our brains in the 2030s is pre­ma­ture. Lin­den says there are huge obsta­cles, such as adding a nanobot pow­er source, evad­ing cells that attack for­eign bod­ies and avoid­ing harm­ing the pro­teins and sug­ars in the tiny spaces between brain cells.

    Although the sci­ence is far from appli­ca­tion in brains, nan­otech­nol­o­gy has long been her­ald­ed as a poten­tial game chang­er in med­i­cine, and the research is advanc­ing. Last year, researchers inject­ed into liv­ing cock­roach­es DNA nanobots that were able to fol­low spe­cif­ic instruc­tions, includ­ing dis­pens­ing drugs, and this year, nanobots were inject­ed into the stom­ach lin­ing of mice.

    And we are learn­ing how to enhance our brains, albeit not with nanobots. Researchers have already suc­cess­ful­ly sent a mes­sage from one human brain to anoth­er, by stim­u­lat­ing the brains from the out­side using elec­tro­mag­net­ic induc­tion. In anoth­er study, sim­i­lar brain stim­u­la­tion made peo­ple learn math faster. And in a recent U.S. gov­ern­ment study, a few dozen peo­ple who were giv­en brain implants that deliv­ered tar­get­ed shocks to their brain scored bet­ter on mem­o­ry tests.

    We’re already implant­i­ng thou­sands of humans with brain chips, such as Parkinson’s patients who have a brain chip that enables bet­ter motor con­trol and deaf peo­ple who have a cochlear implant, which enables hear­ing. But when it comes to enhanc­ing brains with­out dis­abil­i­ties and for non­med­ical pur­pos­es, eth­i­cal and safe­ty con­cerns arise. And accord­ing to a sur­vey last year, 72 per­cent of Amer­i­cans are not inter­est­ed in a brain implant that could improve mem­o­ry or men­tal capac­i­ty.

    Yet, some believe enhance­ment of healthy brains is inevitable, includ­ing Christof Koch, chief sci­en­tif­ic offi­cer of the Allen Insti­tute for Brain Sci­ence, and Gary Mar­cus, pro­fes­sor of psy­chol­o­gy at New York Uni­ver­si­ty. They use the anal­o­gy of breast implants implants — breast surgery was devel­oped for post-mas­tec­to­my recon­struc­tion and cor­rect­ing con­gen­i­tal defects but has since become pop­u­lar for breast aug­men­ta­tion. Brain implants could fol­low the same path, they say.

    ...

    “Kurzweil pre­dicts that in the 2030s, human brains will be able to con­nect to the cloud, allow­ing us to send emails and pho­tos direct­ly to the brain and to back up our thoughts and mem­o­ries. This will be pos­si­ble, he says, via nanobots — tiny robots from DNA strands — swim­ming around in the cap­il­lar­ies of our brain. He sees the exten­sion of our brain into pre­dom­i­nant­ly non­bi­o­log­i­cal think­ing as the next step in the evo­lu­tion of humans — just as learn­ing to use tools was for our ances­tors

    Ori­en­ta­tion day on the new job is going to be inter­est­ing in the future. But at least we’ll poten­tial­ly become more god­ly:

    ...
    But Kurzweil believes that being con­nect­ed to com­put­ers will make us more human, more unique and even god­like.

    “Evo­lu­tion cre­ates struc­tures and pat­terns that over time are more com­pli­cat­ed, more knowl­edgable, more cre­ative, more capa­ble of express­ing high­er sen­ti­ments, like being lov­ing,” he said. “It’s mov­ing in the direc­tion of qual­i­ties that God is described as hav­ing with­out lim­it.”

    “So as we evolve, we become clos­er to God. Evo­lu­tion is a spir­i­tu­al process. There is beau­ty and love and cre­ativ­i­ty and intel­li­gence in the world — it all comes from the neo­cor­tex. So we’re going to expand the brain’s neo­cor­tex and become more god­like.”
    ...

    Being more god­ly by fus­ing your brain with a com­put­er is def­i­nite­ly going to help with the job resume. And maybe it would work. It’s not like being hooked up to the inter­net and even­tu­al­ly being hooked up to a super AI with brain nanobots might not have some sort of amaz­ing impact on peo­ple and make them extra moral or some­thing. That would be great so let’s hope there’s a moral bias to tran­shu­man­ist brain-to-AI fusion tech­nol­o­gy.

    Still, you bet­ter watch out for that Skynet nanobot rev­o­lu­tion if we go down the brain-nanobot for tran­scen­dence path as Kurzweil rec­om­mends. Or some oth­er AI enti­ty that hijacks the brain nanobots. Maybe Skynet has com­peti­tors. Or maybe there’s a nice Skynet that thwarts Skynet. That could pleas­ant. But as the fol­low­ing arti­cle unfor­tu­nate­ly reminds us, even if human­i­ty suc­cess­ful avoids the per­ils of nanobots-in-the-brain economies, that does­n’t mean we don’t have to wor­ry about nanobots:

    CNBC

    Mini-nukes and mos­qui­to-like robot weapons being primed for future war­fare

    Jeff Daniels | @jeffdanielsca
    Fri­day, 17 Mar 2017 | 10:32 AM ET

    Sev­er­al coun­tries are devel­op­ing nanoweapons that could unleash attacks using mini-nuclear bombs and insect-like lethal robots.

    While it may be the stuff of sci­ence fic­tion today, the advance­ment of nan­otech­nol­o­gy in the com­ing years will make it a big­ger threat to human­i­ty than con­ven­tion­al nuclear weapons, accord­ing to an expert. The U.S., Rus­sia and Chi­na are believed to be invest­ing bil­lions on nanoweapons research.

    “Nanobots are the real con­cern about wip­ing out human­i­ty because they can be weapons of mass destruc­tion,” said Louis Del Monte, a Min­neso­ta-based physi­cist and futur­ist. He’s the author of a just released book enti­tled “Nanoweapons: A Grow­ing Threat To Human­i­ty.”

    One unset­tling pre­dic­tion Del Mon­te’s made is that ter­ror­ists could get their hands on nanoweapons as ear­ly as the late 2020s through black mar­ket sources.

    Accord­ing to Del Monte, nanoweapons are much small­er than a strand of human hair and the insect-like nanobots could be pro­grammed to per­form var­i­ous tasks, includ­ing inject­ing tox­ins into peo­ple or con­t­a­m­i­nat­ing the water sup­ply of a major city.

    Anoth­er sce­nario he sug­gest­ed the nan­odrone could do in the future is fly into a room and drop a poi­son onto some­thing, such as food, to pre­sum­ably tar­get a par­tic­u­lar indi­vid­ual.

    The fed­er­al gov­ern­ment defines nan­otech­nol­o­gy as the sci­ence, tech­nol­o­gy and engi­neer­ing of things so small they are mea­sured on a nanoscale, or about 1 to 100 nanome­ters. A sin­gle nanome­ter is about 10 times small­er than the width of a human’s DNA mol­e­cule.

    While nan­otech­nol­o­gy has pro­duced major ben­e­fits for med­i­cine, elec­tron­ics and indus­tri­al appli­ca­tions, fed­er­al research is cur­rent­ly under­way that could ulti­mate­ly pro­duce nanobots.

    For one, the Defense Advanced Research Projects Agency, or DARPA, has a pro­gram called the Fast Light­weight Auton­o­my pro­gram for the pur­pose to allow autonomous drones to enter a build­ing and avoid hit­ting walls or objects. DARPA announced a break­through last year after tests in a hangar in Mass­a­chu­setts.

    Pre­vi­ous­ly, the Army Research Lab­o­ra­to­ry announced it cre­at­ed an advanced drone the size of a fly com­plete with a set of “tiny robot­ic legs” — a major achieve­ment since it pre­sum­ably might be capa­ble of enter­ing a build­ing unde­tect­ed to per­form sur­veil­lance, or used for more nefar­i­ous actions.

    Fright­en­ing details about mil­i­tary nan­otech­nolo­gies were out­lined in a 2010 report from the Pen­tagon’s Defense Threat Reduc­tion Agency, includ­ing how “trans­genic insects could be devel­oped to pro­duce and deliv­er pro­tein-based bio­log­i­cal war­fare agents, and be used offen­sive­ly against tar­gets in a for­eign coun­try.”

    It also fore­cast “microex­plo­sives” along with “nanobots serv­ing as [bioweapons] deliv­ery sys­tems or as micro-weapons them­selves, and inhal­able micro-par­ti­cles to crip­ple per­son­nel.”

    In the case of nanoscale robots, Del Monte said they can be the size of a mos­qui­to or small­er and pro­grammed to use tox­ins to kill or immo­bi­lize peo­ple; what’s more, these autonomous bots ulti­mate­ly could become self-repli­cat­ing.

    Last mon­th’s tar­get­ed assas­si­na­tion of Kim Jong-nam, the half-broth­er of North Kore­a’s ruler, was a stark reminder that tox­ins are avail­able from a vari­ety of sources and can be unleashed in pub­lic loca­tions. It’s also been alleged by Rus­si­a’s Prav­da paper that nanoweapons were used by the U.S. against for­eign lead­ers.

    A Cam­bridge Uni­ver­si­ty con­fer­ence on glob­al cat­a­stroph­ic risk found a 5 per­cent risk of nan­otech weapons caus­ing human extinc­tion before the year 2100.

    As for the mini-nukes, Del Monte expects they rep­re­sent “the most hor­rif­ic near-term nanoweapons.”

    Nan­otech­nol­o­gy opens up the pos­si­bil­i­ty to man­u­fac­ture mini-nuke com­po­nents so small that they are dif­fi­cult to screen and detect. Fur­ther­more, the weapon (capa­ble of an explo­sion equiv­a­lent to about 100 tons of TNT) could be com­pact enough to fit into a pock­et or purse and weigh about 5 pounds and destroy large build­ings or be com­bined to do greater dam­age to an area.

    “When we talk about mak­ing con­ven­tion­al nuclear weapons, they are dif­fi­cult to make,” he said. “Mak­ing a mini-nuke would be dif­fi­cult but in some respects not as dif­fi­cult as a full-blown nuclear weapon.”

    Del Monte explained that the mini-nuke weapon is acti­vat­ed when the nanoscale laser trig­gers a small ther­monu­clear fusion bomb using a tri­tium-deu­teri­um fuel. Their size makes them dif­fi­cult to screen, detect and also there’s “essen­tial­ly no fall­out” asso­ci­at­ed with them.

    Still, while the mini-nukes are pow­er­ful in and of them­selves, he expects they are unlike­ly to wipe out human­i­ty. He said a larg­er con­cern is the threat of the nanoscale robots, or nanobots because they are “the tech­no­log­i­cal equiv­a­lent of bio­log­i­cal weapons.”

    The author said con­trol­ling these “smart nanobots” could become an issue since if lost, there could be poten­tial­ly mil­lions of these dead­ly nanobots on the loose killing peo­ple indis­crim­i­nate­ly.

    ...

    “Still, while the mini-nukes are pow­er­ful in and of them­selves, he expects they are unlike­ly to wipe out human­i­ty. He said a larg­er con­cern is the threat of the nanoscale robots, or nanobots because they are “the tech­no­log­i­cal equiv­a­lent of bio­log­i­cal weapons.”

    Nanobots that wipe out human­i­ty. That’s a big­ger prob­lem than get­ting wiped out by the mini-nukes that the nanobots can build. And if either of those sce­nar­ios hap­pen, Steven Mnuchin is once again cor­rect about not caus­ing mass unem­ploy­ment because it will have wiped us out instead. Pos­si­bly using self-repli­cat­ing mos­qui­to-bots:

    ...
    In the case of nanoscale robots, Del Monte said they can be the size of a mos­qui­to or small­er and pro­grammed to use tox­ins to kill or immo­bi­lize peo­ple; what’s more, these autonomous bots ulti­mate­ly could become self-repli­cat­ing.
    ...

    Could the self-repli­cat­ing mos­qui­to-bot revolt hap­pen? Well, the chances are greater than zero. Espe­cial­ly now that it’s clear Steve Mnuchin is work­ing for the robots.

    Posted by Pterrafractyl | March 26, 2017, 10:45 pm
  7. Elon Musk’s quest to fuse the human mind with a com­put­er so human­i­ty does­n’t become irrel­e­vant after super AI comes on the scene just took a big step for­ward: he’s invest­ing in Neu­rolink, a com­pa­ny ded­i­cat­ed to cre­at­ing brain-com­put­er inter­faces so, as Musk sees it, we can all be employ­able in the future and not out­com­pet­ed in the labor mar­ket by super AI. So that hap­pened. And on the plus side it won’t involve nan­bots in the brain. Although maybe nanobots will be used to install the Neu­rolink brain-to-com­put­er inter­face, which might not be so bad com­pare to the surgery that would oth­er­wise be required. The inter­face Neu­rolink is work­ing on is going to be a large num­ber of microim­plants. That’s going to be the “neu­ro lace” design. And then we’ll be able to com­mu­ni­cate with the com­put­er at the speed of thought and learn how to fuse our brains with AIs to become cog­ni­tive­ly enhanced super-beings. To out com­pete AIs in the job mar­ket. This is all going to be rou­tine in the future as Musk sees it if we’re going to avoid being made obso­lete by super AIs and even­tu­al­ly the Sin­gu­lar­i­ty. So if you’ve ever been like, “wow, that would be a night­mare if the boss could read my brain,” you might not like the employ­ment envi­ron­ment in the future Musk is imag­in­ing because you’re going to have to have brain implants to com­mu­ni­cate with com­put­ers at the speed of thought to enhance your cog­ni­tion enough to not be con­sid­ered use­less:

    USA Today

    Elon Musk’s Neu­ralink wants to plug into your brain

    Mar­co del­la Cava ,
    Pub­lished 7:52 p.m. ET March 27, 2017 | Updat­ed 9:30 a.m. ET March 28, 2017

    SAN FRANCISCO — Elec­tric cars dot­ting the plan­et. Rock­ets rac­ing to Mars. Solar pan­els elim­i­nat­ing oil depen­den­cy.

    If there’s any­thing else entre­pre­neur has on his To Do list, he’ll have to also invent life-exten­sion tech­nol­o­gy just so he can stick around long enough to get every­thing done.

    And now there’s anoth­er ven­ture: cre­at­ing micro-implants that, once insert­ed in the brain, can not just fix con­di­tions such as epilep­sy but poten­tial­ly turn your brain into a com­put­er-assist­ed pow­er­house. Time to screen The Matrix, peo­ple.

    Musk is said to be invest­ing in a new com­pa­ny called Neu­ralink, accord­ing to a report on The Wall Street Jour­nal web­site Mon­day, cit­ing sources famil­iar with the mat­ter.

    Late Mon­day, he con­firmed the idea was in motion, tweet­ing that a “long Neu­ralink piece” was set to come out on the Wait But Why blog of Tim Urban in about a week. “Dif­fi­cult to ded­i­cate the time, but exis­ten­tial risk is too high not to,” Musk wrote.

    Neu­ralink’s focus is on cra­nial com­put­ers, or the implant­i­ng of small elec­trodes through brain surgery that beyond their med­ical ben­e­fits would, in the­o­ry, allow thoughts to be trans­ferred far more quick­ly than, say, think­ing a thought and then using thumbs or fin­gers or even voice to com­mu­ni­cate that infor­ma­tion.

    ...

    At a con­fer­ence in June, Musk cau­tioned that “if you assume any rate of advance­ment in (arti­fi­cial intel­li­gence), we will be left behind by a lot.”

    @elonmusk How’s the neur­al lace and augmented/enhanced intel­li­gence thing going? Also have you played Deus Ex: Mankind Divid­ed yet?— Revol Dev­oleb (@BelovedRevol) August 27, 2016

    @BelovedRevol Mak­ing progress. Maybe some­thing to announce in a few months. Have played all pri­or Deus Ex. Not this one yet.— Elon Musk (@elonmusk) August 28, 2016

    In August, Musk tweet­ed a reply to a ques­tion about how his research into “neur­al lace” was going. “Mak­ing progress,” Musk tweet­ed. “Maybe some­thing to announce in a few months.”

    In late 2015, Musk joined a group that launched Ope­nAI, a non-prof­it aimed an pro­mot­ing open-source research into arti­fi­cial intel­li­gence. Experts have cau­tioned that while the expo­nen­tial growth in com­put­ing pow­er could lead to break­throughs in sci­ence and health, mis­use of such tech could doom the species. As could being lapped intel­lec­tu­al­ly by our sen­tient com­put­ing friends.

    “I don’t know a lot of peo­ple who love the idea of liv­ing under a despot,” Musk said last June.

    But, he added, “If AI pow­er is broad­ly dis­trib­uted to the degree that we can link AI pow­er to each indi­vid­u­al’s will — you would have your AI agent, every­body would have their AI agent — then if some­body did try to some­thing real­ly ter­ri­ble, then the col­lec­tive will of oth­ers could over­come that bad actor.”

    “Neu­ralink’s focus is on cra­nial com­put­ers, or the implant­i­ng of small elec­trodes through brain surgery that beyond their med­ical ben­e­fits would, in the­o­ry, allow thoughts to be trans­ferred far more quick­ly than, say, think­ing a thought and then using thumbs or fin­gers or even voice to com­mu­ni­cate that infor­ma­tion.”

    The med­ical ben­e­fits are indeed unde­ni­able for some­thing like a what Musk is imag­in­ing that would allow peo­ple to com­mu­ni­cate at the speed of thought. But the soci­etal ben­e­fits aren’t nec­es­sar­i­ly going to be net pos­i­tive if, as Musk imag­ines will hap­pen, every­one is forced to have a neu­ro lace just to avoid being ren­dered obso­lete in the future. It seems like there’s got to be a bet­ter way to do things.

    And if you thought the media had the pow­er to brain­wash peo­ple before you have to won­der what TV, movies are going to be like when designed for a speed of thought inter­face that pre­sum­ably is some­how hooked up to your visu­al sys­tem. Will the neu­ro lace be able to teach peo­ple infor­ma­tion? If so, have fun with those neu­ro lace ads. Our cog­ni­tive­ly enhanced mem­o­ry banks will be filled with coupon offers that we’ll find odd­ly mem­o­rable and com­pelling. And we’ll use those coupons because pay isn’t going to be great in the world Musk imag­ines where you need to hook your brain up to an AI to com­pete with the AIs. That’s all com­ing. Prob­a­bly.

    It’s also worth not­ing that when Musk says the wide­spread dis­tri­b­u­tion of AI pow­er — so every­one will have their own super AI helper agent — will act as the col­lec­tive defense against indi­vid­u­als or groups of peo­ple that try to use their super AI agents for evil, it’s worth not­ing that there’s a lot of stuff peo­ple with super AIs are going to be able to do where once they do it it’s too late. But it was a nice thought:

    ...
    But, he added, “If AI pow­er is broad­ly dis­trib­uted to the degree that we can link AI pow­er to each indi­vid­u­al’s will — you would have your AI agent, every­body would have their AI agent — then if some­body did try to some­thing real­ly ter­ri­ble, then the col­lec­tive will of oth­ers could over­come that bad actor.”

    Every­one is going to be their own Tony Stark with an Iron Man suit they built with the help of fus­ing their brains to their on AI J.A.R.V.I.S.es. And we’ll all use our super suits that we built using our super AI-enhanced brains to destroy the threats cre­at­ed by the peo­ple who decid­ed to use their super AIs for evil (or rogue self-direct­ed AIs). So hope­ful­ly we’ll get week­ends off in the labor mar­ket of the future because peo­ple are going to be busy. Assum­ing they get the neu­ro laces installed. Oth­er­wise they’ll pre­sum­ably be unem­ployed and rab­ble fod­der to be blown asun­der in the epic bat­tles between the good and evil AI-con­trolled robo-armies.

    Which rais­es a ques­tion that’s sort of a pre­view of health care reform debates of the future: Will Trump­care 2.0 cov­er the neu­ro lace implant brain surgery if it’s basi­cal­ly required per­son­al cyborg tech­nol­o­gy required to be employ­able? That might not be a ques­tion being asked today, but who knows where this kind of tech­nol­o­gy could be decades from now. In which case it’s worth not­ing that Trump­care prob­a­bly won’t cov­er neu­ro laces. But it should. Well, no, it should­n’t have to since neu­ro laces should­n’t be nec­es­sary. But if they do end up being nec­es­sary to be employed then Trump­care should prob­a­bly cov­er neu­ro laces giv­en the mas­sive “haves vs have nots” dig­i­tal divide that already exists:

    CNBC

    How Elon Musk’s Neu­ralink could end up hurt­ing aver­age Amer­i­cans

    Dustin McKissen | @DMcKissen
    Wednes­day, 29 Mar 2017 | 10:39 AM ET

    On Tues­day, Elon Musk made it offi­cial. The man with a plan to put peo­ple on Mars also wants to fuse humans with tech­nol­o­gy in a very lit­er­al way. Musk’s new com­pa­ny, Neu­ralink, will devel­op some­thing called a “neur­al lace,” which Musk has described as a dig­i­tal lay­er above the brain’s cor­tex, implant­ed via a yet-to-be-deter­mined med­ical pro­ce­dure.

    Since our phones have long been fused to our hands, it’s only log­i­cal that the next step is implant­i­ng tech­nol­o­gy direct­ly into our brain.

    Musk’s heart is in the right place. He believes that unless humans are enhanced with machine intel­li­gence, we will hope­less­ly fall behind in the future, becom­ing sec­ond-class cit­i­zens and mere tools to serve our robot over­lords.

    But one ques­tion Musk has­n’t answered (and in fair­ness, it may not be his respon­si­bil­i­ty to answer) is who will have the priv­i­lege of get­ting a neur­al lace?

    The fail­ure of Repub­li­cans to repeal Oba­macare isn’t the end of the debate on whether basic health care is a fun­da­men­tal right. In the last two weeks, mul­ti­ple Repub­li­cans made it clear they believe mater­ni­ty care is not an essen­tial ben­e­fit. If the essen­tial­ness of mater­ni­ty care is up for debate, it goes with­out say­ing Elon Musk’s neur­al lace prob­a­bly won’t be cov­ered under your insur­ance plan.

    In oth­er words, not only do the rich seem to get richer—they may get the ben­e­fit of hav­ing a com­put­er-enhanced brain.

    What will income inequal­i­ty look like if only the very wealthy get an upgrade? And will chil­dren be able to get a neur­al lace?

    It’s one thing to jus­ti­fy why some adults might be able to afford a neur­al lace and oth­ers can’t. Polit­i­cal­ly, that would just be anoth­er ver­sion of the nev­er-end­ing debate about why some peo­ple are bet­ter off than oth­ers.

    But the great­est effect on income inequal­i­ty will hap­pen when poor, work­ing-class, and mid­dle-class kids have to com­pete with their wealthy, dig­i­tal­ly enhanced peers.

    ...

    Despite all the advan­tages wealth pro­vides, it’s still possible—though, as income inequal­i­ty researcher Dr. Raj Chet­ty and his col­leagues at Stan­ford have shown, increas­ing­ly difficult—for kids from poor fam­i­lies to tran­scend the eco­nom­ic cir­cum­stances of their child­hood. That remote pos­si­bil­i­ty may dis­ap­pear alto­geth­er when those kids have to com­pete with chil­dren who receive a neur­al lace for their 10th birth­day.

    Income inequal­i­ty and the grow­ing decline in upward mobil­i­ty have weak­ened the Amer­i­can Dream, but it’s hard to see how that idea sur­vives at all in a soci­ety divid­ed by dig­i­tal­ly enhanced “Haves” and mere­ly human “Have-nots.”

    As the par­ent of a 17-year-old, I am well aware how much pres­sure par­ents feel to give their child an edge in life, and there’s noth­ing wrong with help­ing your kids get ahead. And if giv­ing your child a neur­al lace increased their chances of hav­ing a suc­cess­ful life, most par­ents would do it.

    But research has shown there is already a dig­i­tal divide con­tribut­ing to chron­ic pover­ty in low-income and rur­al com­mu­ni­ties. That dig­i­tal divide will only grow when some of us can afford a brain enhanced with arti­fi­cial intel­li­gence.

    Elon Musk may or may not suc­ceed in his quest to cre­ate the neur­al lace, but even­tu­al­ly some­one will—and unless elec­tive life-chang­ing sur­gi­cal pro­ce­dures become dras­ti­cal­ly less expen­sive, most of us are going to have to com­pete with com­put­er-enhanced peers in an already unequal world.

    We need to do more to lev­el the cur­rent play­ing field, because some­thing like the neur­al lace is inevitable. In a world that’s grow­ing increas­ing­ly class con­scious, the abil­i­ty for a rel­a­tive­ly small num­ber of peo­ple to become more than human could be a dis­as­ter for everyone—especially if that tech­nol­o­gy arrives in a time when income inequal­i­ty is even worse than it is today.

    That’s why we need to move income inequal­i­ty from a cam­paign year sound bite to a pri­ma­ry focus of gov­ern­ment pol­i­cy at every lev­el.

    And that needs to hap­pen before the wealth­i­est among us can pay Elon Musk to give them­selves and their chil­dren a dig­i­tal upgrade.

    “Elon Musk may or may not suc­ceed in his quest to cre­ate the neur­al lace, but even­tu­al­ly some­one will—and unless elec­tive life-chang­ing sur­gi­cal pro­ce­dures become dras­ti­cal­ly less expen­sive, most of us are going to have to com­pete with com­put­er-enhanced peers in an already unequal world.”

    Is it too soon? Sure, AI-brain fusion isn’t just around the cor­ner. But if it’s phys­i­cal­ly pos­si­ble it’s just a mat­ter of time before we fig­ure it out. And at that point, as the above piece notes, it’s going to cre­ate a brain-fused vs non-brain-fused employ­ment divide that will be very real. And that’s whether or not there are super AIs to com­pete with. As long as a brain-to-com­put­er inter­face allows for some sort of cog­ni­tive enhance that gives a dis­tinct advan­tage that’s enough to cre­ate a new dig­i­tal divide between rich and poor. Unless Trump­care cov­ers neu­ro laces. Which it won’t:

    ...
    Musk’s heart is in the right place. He believes that unless humans are enhanced with machine intel­li­gence, we will hope­less­ly fall behind in the future, becom­ing sec­ond-class cit­i­zens and mere tools to serve our robot over­lords.

    But one ques­tion Musk has­n’t answered (and in fair­ness, it may not be his respon­si­bil­i­ty to answer) is who will have the priv­i­lege of get­ting a neur­al lace?

    The fail­ure of Repub­li­cans to repeal Oba­macare isn’t the end of the debate on whether basic health care is a fun­da­men­tal right. In the last two weeks, mul­ti­ple Repub­li­cans made it clear they believe mater­ni­ty care is not an essen­tial ben­e­fit. If the essen­tial­ness of mater­ni­ty care is up for debate, it goes with­out say­ing Elon Musk’s neur­al lace prob­a­bly won’t be cov­ered under your insur­ance plan.

    In oth­er words, not only do the rich seem to get richer—they may get the ben­e­fit of hav­ing a com­put­er-enhanced brain.
    ...

    “The fail­ure of Repub­li­cans to repeal Oba­macare isn’t the end of the debate on whether basic health care is a fun­da­men­tal right. In the last two weeks, mul­ti­ple Repub­li­cans made it clear they believe mater­ni­ty care is not an essen­tial ben­e­fit. If the essen­tial­ness of mater­ni­ty care is up for debate, it goes with­out say­ing Elon Musk’s neur­al lace prob­a­bly won’t be cov­ered under your insur­ance plan.”

    Will a lack of brain-to-com­put­er-enhance­ment surgery cov­er­age in future insur­ance cov­er­age reg­u­la­tions exac­er­bate future dig­i­tal divides decades from now? If Neu­rolink or one of its com­peti­tors fig­ures out some sort of real brain-to-com­put­er inter­face tech­nol­o­gy it seems like the answer is maybe, per­haps prob­a­bly. But at least that should be decades from now. In the mean time we can wor­ry about the non-dig­i­tal socioe­co­nom­ic divide of basic health care insur­ance cov­er­age that Trump­care also won’t address in the near future.

    And if you’re open to get­ting Nuerolinked to get that com­pet­i­tive edge in the job mar­ket but still won­der­ing about that how you’re going to be employed in the future even with the brain surgery because at some point so many oth­er peo­ple will have been Neu­rolinked that it won’t even be an advan­tage to have it any­more, well, a career relat­ed to the future brain surgery indus­try seems like a good bet. And, yes, being a future Neu­rolink­ing brain sur­geon will prob­a­bly require you get­ting some Neu­rolink­ing brain surgery. That’s appar­ent­ly just how the future rolls.

    Posted by Pterrafractyl | April 2, 2017, 8:35 pm
  8. Here’s a reminder that, while tech inter­net giants like Google and Face­book might be push­ing the bound­aries of com­mer­cial appli­ca­tions for emerg­ing tech­nolo­gies like automa­tion, arti­fi­cial intel­li­gence, and the lever­ag­ing of Big Data today, we should­n’t be shocked if that list of AI/Big Data tech lead­ers in the future includes a lot of big banks. At least JP Mor­gan, since that’s appar­ent­ly a big part of its strat­e­gy for stay­ing real­ly, real­ly big and prof­itable in the future:

    Bloomberg Mar­kets

    JPMor­gan Soft­ware Does in Sec­onds What Took Lawyers 360,000 Hours

    by Hugh Son
    Feb­ru­ary 27, 2017, 6:31 PM CST Feb­ru­ary 28, 2017, 6:24 AM CST

    * New soft­ware does in sec­onds what took staff 360,000 hours
    * Bank seek­ing to stream­line sys­tems, avoid redun­dan­cies

    At JPMor­gan Chase & Co., a learn­ing machine is pars­ing finan­cial deals that once kept legal teams busy for thou­sands of hours.

    The pro­gram, called COIN, for Con­tract Intel­li­gence, does the mind-numb­ing job of inter­pret­ing com­mer­cial-loan agree­ments that, until the project went online in June, con­sumed 360,000 hours of work each year by lawyers and loan offi­cers. The soft­ware reviews doc­u­ments in sec­onds, is less error-prone and nev­er asks for vaca­tion.

    While the finan­cial indus­try has long tout­ed its tech­no­log­i­cal inno­va­tions, a new era of automa­tion is now in over­drive as cheap com­put­ing pow­er con­verges with fears of los­ing cus­tomers to star­tups. Made pos­si­ble by invest­ments in machine learn­ing and a new pri­vate cloud net­work, COIN is just the start for the biggest U.S. bank. The firm recent­ly set up tech­nol­o­gy hubs for teams spe­cial­iz­ing in big data, robot­ics and cloud infra­struc­ture to find new sources of rev­enue, while reduc­ing expens­es and risks.

    The push to auto­mate mun­dane tasks and cre­ate new tools for bankers and clients — a grow­ing part of the firm’s $9.6 bil­lion tech­nol­o­gy bud­get — is a core theme as the com­pa­ny hosts its annu­al investor day on Tues­day.

    Behind the strat­e­gy, over­seen by Chief Oper­at­ing Offi­cer Matt Zames and Chief Infor­ma­tion Offi­cer Dana Deasy, is an under­cur­rent of anx­i­ety: Though JPMor­gan emerged from the finan­cial cri­sis as one of few big win­ners, its dom­i­nance is at risk unless it aggres­sive­ly pur­sues new tech­nolo­gies, accord­ing to inter­views with a half-dozen bank exec­u­tives.

    Redun­dant Soft­ware

    That was the mes­sage Zames had for Deasy when he joined the firm from BP Plc in late 2013. The New York-based bank’s inter­nal sys­tems, an amal­gam from decades of merg­ers, had too many redun­dant soft­ware pro­grams that didn’t work togeth­er seam­less­ly.

    “Matt said, ‘Remem­ber one thing above all else: We absolute­ly need to be the lead­ers in tech­nol­o­gy across finan­cial ser­vices,’” Deasy said last week in an inter­view. “Every­thing we’ve done from that day for­ward stems from that meet­ing.”

    After vis­it­ing com­pa­nies includ­ing Apple Inc. and Face­book Inc. three years ago to under­stand how their devel­op­ers worked, the bank set out to cre­ate its own com­put­ing cloud called Gaia that went online last year. Machine learn­ing and big-data efforts now reside on the pri­vate plat­form, which effec­tive­ly has lim­it­less capac­i­ty to sup­port their thirst for pro­cess­ing pow­er. The sys­tem already is help­ing the bank auto­mate some cod­ing activ­i­ties and mak­ing its 20,000 devel­op­ers more pro­duc­tive, sav­ing mon­ey, Zames said. When need­ed, the firm can also tap into out­side cloud ser­vices from Amazon.com Inc., Microsoft Corp. and Inter­na­tion­al Busi­ness Machines Corp.

    Tech Spend­ing

    JPMor­gan will make some of its cloud-backed tech­nol­o­gy avail­able to insti­tu­tion­al clients lat­er this year, allow­ing firms like Black­Rock Inc. to access bal­ances, research and trad­ing tools. The move, which lets clients bypass sales­peo­ple and sup­port staff for rou­tine infor­ma­tion, is sim­i­lar to one Gold­man Sachs Group Inc. announced in 2015.

    JPMorgan’s total tech­nol­o­gy bud­get for this year amounts to 9 per­cent of its pro­ject­ed rev­enue — dou­ble the indus­try aver­age, accord­ing to Mor­gan Stan­ley ana­lyst Bet­sy Graseck. The dol­lar fig­ure has inched high­er as JPMor­gan bol­sters cyber defens­es after a 2014 data breach, which exposed the infor­ma­tion of 83 mil­lion cus­tomers.

    ...

    ‘Can’t Wait’

    “We’re will­ing to invest to stay ahead of the curve, even if in the final analy­sis some of that mon­ey will go to prod­uct or a ser­vice that wasn’t need­ed,” Mar­i­anne Lake, the lender’s finance chief, told a con­fer­ence audi­ence in June. That’s “because we can’t wait to know what the out­come, the endgame, real­ly looks like, because the envi­ron­ment is mov­ing so fast.”

    As for COIN, the pro­gram has helped JPMor­gan cut down on loan-ser­vic­ing mis­takes, most of which stemmed from human error in inter­pret­ing 12,000 new whole­sale con­tracts per year, accord­ing to its design­ers.

    JPMor­gan is scour­ing for more ways to deploy the tech­nol­o­gy, which learns by ingest­ing data to iden­ti­fy pat­terns and rela­tion­ships. The bank plans to use it for oth­er types of com­plex legal fil­ings like cred­it-default swaps and cus­tody agree­ments. Some­day, the firm may use it to help inter­pret reg­u­la­tions and ana­lyze cor­po­rate com­mu­ni­ca­tions.

    Anoth­er pro­gram called X‑Connect, which went into use in Jan­u­ary, exam­ines e‑mails to help employ­ees find col­leagues who have the clos­est rela­tion­ships with poten­tial prospects and can arrange intro­duc­tions.

    ...

    To help spur inter­nal dis­rup­tion, the com­pa­ny keeps tabs on 2,000 tech­nol­o­gy ven­tures, using about 100 in pilot pro­grams that will even­tu­al­ly join the firm’s grow­ing ecosys­tem of part­ners. For instance, the bank’s machine-learn­ing soft­ware was built with Cloud­era Inc., a soft­ware firm that JPMor­gan first encoun­tered in 2009.

    “We’re start­ing to see the real fruits of our labor,” Zames said. “This is not pie-in-the-sky stuff.”

    “Behind the strat­e­gy, over­seen by Chief Oper­at­ing Offi­cer Matt Zames and Chief Infor­ma­tion Offi­cer Dana Deasy, is an under­cur­rent of anx­i­ety: Though JPMor­gan emerged from the finan­cial cri­sis as one of few big win­ners, its dom­i­nance is at risk unless it aggres­sive­ly pur­sues new tech­nolo­gies, accord­ing to inter­views with a half-dozen bank exec­u­tives.”,

    JP Mor­gan is look­ing at high tech to main­tain its dom­i­nance (call­ing all trust busters). And appears to be already doing so quite hand­i­ly if its claims are more than hype. And if JP Mor­gan real­ly is already automat­ing things like auto­mat­ed com­mer­cial-loan agree­ment inter­pre­ta­tion using their own in-house devel­oped tools and the cost sav­ings are pay­ing for the cost of devel­op­ing these new tools we could be look­ing at a peri­od where big banks like JP Mor­gan start mak­ing big invest­ments in things like AI. And it would­n’t be at all sur­pris­ing if banks, more than just about any oth­er enti­ty, can yield big gains from ful­ly expoit­ing advanced AI and Big Data tech­nolo­gies: their busi­ness­es are all about infor­ma­tion. And gam­bling. Ide­al­ly rigged gam­bling. That’s a good sec­tor for AI and Big Data.

    And yet the gam­bling nature of the finan­cial sec­tor cre­ates anoth­er dyanam­ic that’s going to make the impact of the finan­cial sec­tor on the devel­op­ment of AI and Big Data analy­sis techolo­gies so fas­ci­nat­ing: since these are sup­posed to be, in part, new tech­nolo­gies that give JP Mor­gan an edge over its com­pe­ti­tion, a lot of these new invest­ments by JP Mor­gan and its com­peti­tors in the finan­cial sec­tor are invest­ments intend­ed for in-house secret oper­a­tions. Long-term secret in-house AI oper­a­tions. All designed to ana­lyze the shit out of as much data as pos­si­ble, espe­cial­ly data about the com­pe­ti­tion, and then come up with nov­el trad­ing strate­gies. And basi­cal­ly mod­el as much of the world as pos­si­ble. A sub­stan­tial chunk of the future finan­cial indus­try (and prob­a­bly present) is going to be ded­i­cate to build­ing crafty super-AIs and the research funds for all these inde­pen­dent oper­a­tions will be financed, in part, by all the myr­i­ad of ways some­thing like a bank will be able to use the tech­nol­o­gy to save mon­ey on thints like lawyers review com­mer­cial-loan agree­ments. Lots and lots of pro­pri­etary Gambletron5000s could end up get­ting cooked up in banker base­ments over the next few decades, assum­ing it remains prof­itable to do so.

    And that’s all part of what’s going to be very inter­est­ing to watch play out as the finan­cial sec­tor con­tin­ues to look for ways to make a big­ger prof­it from advanced AI and Big Data tech­nol­o­gy. Lots of sec­tors of the econ­o­my are going to have incen­tives for dif­fer­ent groups to devel­op their own pro­pri­etary AI cog­ni­tion tech­nolo­gies for a com­pet­i­tive edge, but it’s hard to think of a sec­tor of the econ­o­my with more resources and more incen­tive to cre­ate in-house pro­pri­etary AIs with very seri­ous invest­ments over decades. In oth­er words, Robert Mer­cer is going to have a lot more com­pe­ti­tion.

    Still, while it’s entire­ly fea­si­ble that the finan­cial sec­tor could become much big­ger play­ers in things like AI and Big Data analy­sis in com­ing years, it’s not like the old school tech giants are going away. For instance, check out the one of the lat­est com­mer­cial appli­ca­tions of IBM’s Wat­son. Using deep learn­ing to teach Wat­son all about com­put­er secu­ri­ty and how to com­ply with Swiss bank pri­va­cy laws to pro­vide Swiss banks with AI-assist­ed com­put­er IT secu­ri­ty:

    SCMagazineUK.com

    Switzer­land to build AI cog­ni­tive secu­ri­ty ops cen­tre to pro­tect banks

    by Tom Reeve, deputy edi­tor
    March 24, 2017

    Famous for its cuck­oo clocks, Switzer­land is about to get a machine so sophis­ti­cat­ed that it will out­think the cyber-attack­ers tar­get­ing its equal­ly famous bank­ing indus­try.

    Being devel­oped by IBM and SIX, the Finan­cial Tech­nol­o­gy Com­pa­ny of Switzer­land, this will be the coun­try’s first cog­ni­tive secu­ri­ty oper­a­tions cen­tre (SOC) and will help pro­tect the Swiss finan­cial ser­vices indus­try.

    ...

    The SIX SOC will be built around IBM Wat­son for Cyber Secu­ri­ty, which is billed as the indus­try’s first cog­ni­tive secu­ri­ty tech­nol­o­gy. The machine learn­ing engine has been trained in cyber-secu­ri­ty by ingest­ing mil­lions of doc­u­ments which have been anno­tat­ed and fed to it by stu­dents at eight uni­ver­si­ties in North Amer­i­ca.

    Wat­son for Cyber Secu­ri­ty will be the first tech­nol­o­gy to offer cog­ni­tion of secu­ri­ty data at scale using Wat­son’s abil­i­ty to rea­son and learn from “unstruc­tured data” – 80 per­cent of all data on the inter­net that tra­di­tion­al secu­ri­ty tools can­not process, includ­ing blogs, arti­cles, videos, reports, alerts and oth­er infor­ma­tion.

    Accord­ing to IBM, SIX will offer the ser­vice to its finan­cial ser­vices cus­tomers who need secu­ri­ty, reg­u­la­to­ry, com­pli­ance and audit capa­bil­i­ties “to ensure adher­ence to exist­ing or future Swiss data pri­va­cy and data pro­tec­tion leg­is­la­tion – reg­u­lat­ing what can be exchanged, by whom and how, as well as finan­cial mar­ket reg­u­la­tions”.

    Prof Alan Wood­ward, vis­it­ing pro­fes­sor in the depart­ment of com­put­er sci­ence at the Uni­ver­si­ty of Sur­rey, told SC Media UK that cog­ni­tive SOCs are “a real­ly inter­est­ing devel­op­ment”.

    “Many have talked about AI becom­ing a part of the secu­ri­ty land­scape for a while. It has to come if secu­ri­ty is to be agile enough to detect and respond to the ever-chang­ing threat,” he said. “This shows that it is becom­ing real. It might not be full blown AI as many have been pre­dict­ing but it’s cer­tain­ly a step along the path.”

    The speed of response is the key issue, as Wood­ward sees it, espe­cial­ly to threats that are rapid­ly evolv­ing.

    “The real tip­ping point will be when the machines are well-enough trained that they can not just help iden­ti­fy the threat but auto­mate the response,” he said.

    “It would be nice to think that we could have a gen­er­a­tion of machines that we could train well-enough to auto­mate our defences. My only con­cern is that humans are quite adapt­able and auto­mat­ed future defences might be vul­ner­a­ble to humans sud­den­ly chang­ing their modus operan­di in attacks. I guess that devel­op­ments such as that from IBM are the begin­nings of putting this to the test.”

    The cen­tre is not the first to be built in Europe, but Switzer­land’s data pro­tec­tion laws pre­vent banks from using ser­vices from out­side its bor­ders.

    “Digi­ti­sa­tion, Inter­net of Things, glob­al con­nec­tiv­i­ty and the inte­gra­tion of new dis­rup­tive tech­nolo­gies are some mega­trends open­ing a lot of new busi­ness oppor­tu­ni­ties. How­ev­er, they also bring new threats with pos­si­ble high impact on the indus­try,” said Robert Born­träger, divi­sion CEO at SIX Glob­al IT.

    “We’re look­ing for­ward to both help­ing SIX man­age its own cyber-secu­ri­ty needs, and also becom­ing an essen­tial part­ner start­ing with the glob­al­ly respect­ed Swiss bank­ing mar­ket to those oth­er organ­i­sa­tions who need region­al­ly-based and Swiss mar­ket com­pli­ant secu­ri­ty ser­vices,” said Thomas Lan­dolt, coun­try gen­er­al man­ag­er IBM Switzer­land.

    “The SIX SOC will be built around IBM Wat­son for Cyber Secu­ri­ty, which is billed as the indus­try’s first cog­ni­tive secu­ri­ty tech­nol­o­gy. The machine learn­ing engine has been trained in cyber-secu­ri­ty by ingest­ing mil­lions of doc­u­ments which have been anno­tat­ed and fed to it by stu­dents at eight uni­ver­si­ties in North Amer­i­ca.”

    A machine learn­ing engine based on Wat­son is going to be fed mil­lions of cyber­se­cu­ri­ty doc­u­ments from teams of stu­dents from eight uni­ver­si­ties. And then keep read­ing the

    There’s def­i­nite­ly going to be a thriv­ing Swiss finan­cial IT ser­vices indus­try. And then it’s going to con­stant­ly scour the inter­net for the lat­est con­tent cyber­se­cu­ri­ty tip:

    ...
    Wat­son for Cyber Secu­ri­ty will be the first tech­nol­o­gy to offer cog­ni­tion of secu­ri­ty data at scale using Wat­son’s abil­i­ty to rea­son and learn from “unstruc­tured data” – 80 per­cent of all data on the inter­net that tra­di­tion­al secu­ri­ty tools can­not process, includ­ing blogs, arti­cles, videos, reports, alerts and oth­er infor­ma­tion.
    ...

    So know you know, if you run a cyber­se­cu­ri­ty blog and you sud­den­ly start get­ting a bunch of hits from Switzer­land all the time, that might be Wat­son. And Wat­son is actu­al­ly going to be kind of read­ing your cyber­se­cu­ri­ty blog:

    SCMagazineUK.com

    IBM’s AI Wat­son might be solv­ing cyber-crime by end of year

    by Rene Mill­man
    May 16, 2016

    IBM will train its Wat­son arti­fi­cial intel­li­gence sys­tem to solve cyber-crimes, the tech giant announced.

    Big Blue will spend the next year work­ing with eight uni­ver­si­ties to help the Wat­son AI learn how to detect poten­tial cyber-threats. The eight edu­ca­tion­al insti­tutes include Cal­i­for­nia State Poly­tech­nic Uni­ver­si­ty, Pomona; Penn­syl­va­nia State Uni­ver­si­ty; Mass­a­chu­setts Insti­tute of Tech­nol­o­gy; New York Uni­ver­si­ty; the Uni­ver­si­ty of Mary­land, Bal­ti­more Coun­ty (UMBC); the Uni­ver­si­ty of New Brunswick; the Uni­ver­si­ty of Ottawa and the Uni­ver­si­ty of Water­loo.

    The cog­ni­tive sys­tem will process large amounts of infor­ma­tion and stu­dents will train up Wat­son by anno­tat­ing and feed­ing the sys­tem secu­ri­ty reports and data, accord­ing to IBM.

    This data also includes infor­ma­tion from IBM’s X‑Force research library, which con­tains more than 100,000 doc­u­ment­ed vul­ner­a­bil­i­ties. As many as 15,000 secu­ri­ty doc­u­ments, such as intel­li­gence reports, will be processed each month.

    The project is designed to improve secu­ri­ty ana­lysts’ capa­bil­i­ties using cog­ni­tive sys­tems that auto­mate the con­nec­tions between data, emerg­ing threats and reme­di­a­tion strate­gies.

    Wat­son for Cyber Secu­ri­ty will be the first tech­nol­o­gy to offer cog­ni­tion of secu­ri­ty data at scale using Wat­son’s abil­i­ty to rea­son and learn from “unstruc­tured data” – 80 per­cent of all data on the inter­net that tra­di­tion­al secu­ri­ty tools can­not process, includ­ing blogs, arti­cles, videos, reports, alerts and oth­er infor­ma­tion.

    IBM said that most organ­i­sa­tions only use eight per­cent of this unstruc­tured data. It will also use nat­ur­al lan­guage pro­cess­ing to under­stand the vague and impre­cise nature of human lan­guage in unstruc­tured data. This means that Wat­son can find data on an emerg­ing form of mal­ware in an online secu­ri­ty bul­letin and data from a secu­ri­ty ana­lyst’s blog on an emerg­ing reme­di­a­tion strat­e­gy.

    It is hoped that the use of Wat­son to detect cyber-threats will ease the skills gap present in the secu­ri­ty indus­try.

    “Even if the indus­try was able to fill the esti­mat­ed 1.5 mil­lion open cyber secu­ri­ty jobs by 2020, we’d still have a skills cri­sis in secu­ri­ty,” said Marc van Zadel­hoff, gen­er­al man­ag­er at IBM Secu­ri­ty.

    “The vol­ume and veloc­i­ty of data in secu­ri­ty is one of our great­est chal­lenges in deal­ing with cyber­crime. By lever­ag­ing Wat­son’s abil­i­ty to bring con­text to stag­ger­ing amounts of unstruc­tured data, impos­si­ble for peo­ple alone to process, we will bring new insights, rec­om­men­da­tions, and knowl­edge to secu­ri­ty pro­fes­sion­als, bring­ing greater speed and pre­ci­sion to the most advanced cyber-secu­ri­ty ana­lysts, and pro­vid­ing novice ana­lysts with on-the-job train­ing.”

    ...

    Gra­ham Fletch­er, asso­ciate part­ner at Citi­hub Con­sult­ing told SCMagazineUK.com that the appli­ca­tion of machine learn­ing and cog­ni­tive com­put­ing to prob­lems that have been tra­di­tion­al­ly only been solved by humans is an “excit­ing devel­op­ment”.

    “I am sure that, over time, the tech­nol­o­gy will con­tin­ue to improve and be capa­ble of solv­ing prob­lems of high­er and high­er com­plex­i­ty. Cyber-secu­ri­ty is an inter­est­ing area to apply machine learn­ing to as it is a good exam­ple of where human minds have rapid­ly adapt­ed to changes in tech­nol­o­gy and var­i­ous cyber chal­lenges on both sides of the divide,” he said.

    “As hack­ers become more sophis­ti­cat­ed, those pro­tect­ing their net­works also ele­vate their game and then in turn the bad guys evolve again and so on.”

    But Fletch­er ques­tions whether a Wat­son style machine will be more effec­tive than high­ly trained cyber-secu­ri­ty pro­fes­sion­als and whether this will result in job loss­es.

    “I think in gen­er­al the answer is no. As this tech­nol­o­gy devel­ops, humans will still be need­ed to look out for the next lev­el of attack. Also to catch a hack­er, some­times you have to think like one, so a machine might not always be able to match the cre­ativ­i­ty and guile of a human.

    “On the oth­er hand, it is also worth remem­ber­ing that most sophis­ti­cat­ed attacks now are com­ing from well organ­ised and well-fund­ed sov­er­eign states and/or organ­ised crime so if the good guys can use machine learn­ing – so can the bad guys!”

    Michael Hack, senior vice pres­i­dent of EMEA oper­a­tions at Ipswitch, told SC that in the future, mit­i­gat­ing such attacks will be depen­dent on this kind of AI, with the abil­i­ty to detect an offence ear­ly and run the nec­es­sary coun­ter­mea­sures.

    “These self-learn­ing solu­tions will utilise cur­rent knowl­edge to assume infi­nite attack sce­nar­ios and con­stant­ly evolve their detec­tion and response capa­bil­i­ties.”

    “IBM said that most organ­i­sa­tions only use eight per­cent of this unstruc­tured data. It will also use nat­ur­al lan­guage pro­cess­ing to under­stand the vague and impre­cise nature of human lan­guage in unstruc­tured data. This means that Wat­son can find data on an emerg­ing form of mal­ware in an online secu­ri­ty bul­letin and data from a secu­ri­ty ana­lyst’s blog on an emerg­ing reme­di­a­tion strat­e­gy.

    Wat­son want to know your thoughts on cyber­se­cu­ri­ty. Pre­sum­ably a lot of Wat­sons since there’s going to be a ton of these things. JP Mor­gan has com­pe­ti­tion. And that’s just from Wat­son in Switzer­land. The Wat­son clone army is always on guard..by read­ing the inter­net a lot. Espe­cial­ly cyber­se­cu­ri­ty blogs. So, you know, now might be a good time to start a cyber­se­cu­ri­ty online forum where peo­ple post about the lat­est cyber­se­cu­ri­ty threat. Much of your traf­fic might be Wat­son but that’s still traf­fic. The more Wat­son-like sys­tems, the more traf­fic.

    The pos­si­bil­i­ty of armies of AI bots read­ing the web in a Big Data/Deep Learn­ing quest to mod­el some aspect of the world and react super fast to chang­ing con­di­tions rais­es the ques­tion of just what AI read­er­ship will do to online adver­tis­ing. Like, say finan­cial AIs that read the inter­net for hints about chang­ing mar­ket con­di­tions. Will there be ads lit­er­al­ly tar­get­ing those finan­cial AIs to some­how influ­ence their deci­sions? That could be a real mar­ket. Lit­er­al­ly buy­ing ads and mak­ing posts by AIs to influ­ence the oth­er AIs to shift the expec­ta­tions of those oth­er AIs as an AI-on-AI mutu­al mindf#ck mis­in­for­ma­tion bat­tle could be a real thing. AI-dri­ven opin­ion-shap­ing online cam­paigns as part of a trad­ing strat­e­gy. Or a mar­ket­ing strat­e­gy. Or maybe both. Yeah, Robert Mer­cer is def­i­nite­ly going to have a lot of AI com­pe­ti­tion in the future.

    And if you’re a cyber­se­cu­ri­ty pro­fes­sion­al wor­ried that Wat­son is going to force you to write a cyber­se­cu­ri­ty blog for a liv­ing, note the warn­ing about Wat­son putting cyber­se­cu­ri­ty staff out of work in the future by exceed­ing their capa­bil­i­ties: The prob­lem isn’t so much putting peo­ple out of work because there’s still like­ly going to be a need for some­one who can think like a human when going up against oth­er humans. And that could be very nec­es­sary when you con­sid­er that those human hack­ers are going to have their own hack­er AIs that also read the inter­net for word on all vul­ner­a­bil­i­ties. Imag­ine a crim­i­nal Wat­son set up to strike imme­di­ate­ly when it learns about some­thing before peo­ple can patch it. Oth­er human hack­ers are going to be armed with those so defense is going to be a group effort:

    ...
    But Fletch­er ques­tions whether a Wat­son style machine will be more effec­tive than high­ly trained cyber-secu­ri­ty pro­fes­sion­als and whether this will result in job loss­es.

    “I think in gen­er­al the answer is no. As this tech­nol­o­gy devel­ops, humans will still be need­ed to look out for the next lev­el of attack. Also to catch a hack­er, some­times you have to think like one, so a machine might not always be able to match the cre­ativ­i­ty and guile of a human.

    “On the oth­er hand, it is also worth remem­ber­ing that most sophis­ti­cat­ed attacks now are com­ing from well organ­ised and well-fund­ed sov­er­eign states and/or organ­ised crime so if the good guys can use machine learn­ing – so can the bad guys!”
    ...

    “On the oth­er hand, it is also worth remem­ber­ing that most sophis­ti­cat­ed attacks now are com­ing from well organ­ised and well-fund­ed sov­er­eign states and/or organ­ised crime so if the good guys can use machine learn­ing – so can the bad guys!”

    The bad guys are going to get super cyber­se­cu­ri­ty AIs too. That’s all part of the arms race of the cyber­se­cu­ri­ty future. Which cer­tain­ly sounds like an envi­ron­ment where humans will be need­ed. Humans that know how to man­age cyber­se­cu­ri­ty AIs. There’s going to be a big demand for that. Espe­cial­ly if ran­dom peo­ple can one day down­load like a Hackerbot8000 AI app some­day that gives any­one a AI-assist­ing hack­ing knowl­edge. What if hack­er AIs that help devise strate­gies han­dle all the tech­ni­cal work become easy to use for rel­a­tive novices? Won’t that be fun.

    And since finan­cial firms like JP Mor­gan with immense resources are prob­a­bly going to have cut­ting edge cyber­se­cu­ri­ty AIs going for­ward that dou­ble as super-hack­er AIs, it’s also worth not­ing that who­ev­er owns the best of these AIs just might have the best super-hack­ing capa­bil­i­ties. So look out hack­ers, you’re going to have com­pe­ti­tion.

    So, all in all, cyber­se­cu­ri­ty is prob­a­bly going to be a pret­ty good area for human employ­ment specif­i­cal­ly because of all the AI-dri­ven cyber­se­cu­ri­ty threats that will be increas­ing­ly out there. Espe­cial­ly cyber­se­cu­ri­ty blogs. Ok, maybe not the blogs. We’ll see. There’s going to be a lot of com­pe­ti­tion.

    Posted by Pterrafractyl | April 16, 2017, 6:43 pm
  9. It looks like Elon Musk’s brain-to-com­put­er inter­face ambi­tions might become a brain-to-com­put­er-inter­face-race. Face­book wants to get in on the action. Sort of. It’s not quite clear. While Musk’s ‘neur­al-lace’ idea appeared to be direct­ed towards set­ting up an brain-to-com­put­er inter­face for the pur­pose of inter­fac­ing with arti­fi­cial intel­li­gences, Face­book has a much more gener­ic goal: replac­ing the key­board and mouse with a brain-to-com­put­er inter­face. Or to put it anoth­er way, Face­book wants to read your thoughts:

    Giz­mo­do

    Face­book Lit­er­al­ly Wants to Read Your Thoughts

    Kris­ten V. Brown
    April 19, 2017 6:32pm

    At Facebook’s annu­al devel­op­er con­fer­ence, F8, on Wednes­day, the group unveiled what may be Facebook’s most ambitious—and creepiest—proposal yet. Face­book wants to build its own “brain-to-com­put­er inter­face” that would allow us to send thoughts straight to a com­put­er.

    What if you could type direct­ly from your brain?” Regi­na Dugan, the head of the company’s secre­tive hard­ware R&D divi­sion, Build­ing 8, asked from the stage. Dugan then pro­ceed­ed to show a video demo of a woman typ­ing eight words per minute direct­ly from the stage. In a few years, she said, the team hopes to demon­strate a real-time silent speech sys­tem capa­ble of deliv­er­ing a hun­dred words per minute.

    “That’s five times faster than you can type on your smart­phone, and it’s straight from your brain,” she said. “Your brain activ­i­ty con­tains more infor­ma­tion than what a word sounds like and how it’s spelled; it also con­tains seman­tic infor­ma­tion of what those words mean.”

    Brain-com­put­er inter­faces are noth­ing new. DARPA, which Dugan used to head, has invest­ed heav­i­ly in brain-com­put­er inter­face tech­nolo­gies to do things like cure men­tal ill­ness and restore mem­o­ries to sol­diers injured in war. But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone.

    “Our world is both dig­i­tal and phys­i­cal,” she said. “Our goal is to cre­ate and ship new, cat­e­go­ry-defin­ing con­sumer prod­ucts that are social first, at scale.”

    She also showed a video that demon­strat­ed a sec­ond tech­nol­o­gy that showed the abil­i­ty to “lis­ten” to human speech through vibra­tions on the skin. This tech has been in devel­op­ment to aid peo­ple with dis­abil­i­ties, work­ing a lit­tle like a Braille that you feel with your body rather than your fin­gers. Using actu­a­tors and sen­sors, a con­nect­ed arm­band was able to con­vey to a woman in the video a tac­tile vocab­u­lary of nine dif­fer­ent words.

    Dugan adds that it’s also pos­si­ble to “lis­ten” to human speech by using your skin. It’s like using braille but through a sys­tem of actu­a­tors and sen­sors. Dugan showed a video exam­ple of how a woman could fig­ure out exact­ly what objects were select­ed on a touch­screen based on inputs deliv­ered through a con­nect­ed arm­band.

    Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. Brain-com­put­er inter­face tech­nol­o­gy is still in its infan­cy. So far, researchers have been suc­cess­ful in using it to allow peo­ple with dis­abil­i­ties to con­trol par­a­lyzed or pros­thet­ic limbs. But stim­u­lat­ing the brain’s motor cor­tex is a lot sim­pler than read­ing a person’s thoughts and then trans­lat­ing those thoughts into some­thing that might actu­al­ly be read by a com­put­er.

    The end goal is to build an online world that feels more immer­sive and real—no doubt so that you spend more time on Face­book.

    “Our brains pro­duce enough data to stream 4 HD movies every sec­ond. The prob­lem is that the best way we have to get infor­ma­tion out into the world — speech — can only trans­mit about the same amount of data as a 1980s modem,” CEO Mark Zucker­berg said in a Face­book post. “We’re work­ing on a sys­tem that will let you type straight from your brain about 5x faster than you can type on your phone today. Even­tu­al­ly, we want to turn it into a wear­able tech­nol­o­gy that can be man­u­fac­tured at scale. Even a sim­ple yes/no ‘brain click’ would help make things like aug­ment­ed real­i­ty feel much more nat­ur­al.”

    ...

    “What if you could type direct­ly from your brain?” Regi­na Dugan, the head of the company’s secre­tive hard­ware R&D divi­sion, Build­ing 8, asked from the stage. Dugan then pro­ceed­ed to show a video demo of a woman typ­ing eight words per minute direct­ly from the stage. In a few years, she said, the team hopes to demon­strate a real-time silent speech sys­tem capa­ble of deliv­er­ing a hun­dred words per minute.”

    A brain-read­ing key­board. Pret­ty neat. Take that carpal tun­nel syn­drome. But note how Face­book isn’t just plan­ning on replac­ing your key­board with a brain-to-com­put­er inter­face that tran­scribes your thoughts. It’s going to detect seman­tic infor­ma­tion. Pret­ty nifty. But that’s not all. What Face­book is envi­sion­ing is a sys­tem where you and all your Face­book friends (and Face­book) can com­mu­ni­cate with each oth­er all the time just by think­ing about it:

    ...
    “That’s five times faster than you can type on your smart­phone, and it’s straight from your brain,” she said. “Your brain activ­i­ty con­tains more infor­ma­tion than what a word sounds like and how it’s spelled; it also con­tains seman­tic infor­ma­tion of what those words mean.”

    Brain-com­put­er inter­faces are noth­ing new. DARPA, which Dugan used to head, has invest­ed heav­i­ly in brain-com­put­er inter­face tech­nolo­gies to do things like cure men­tal ill­ness and restore mem­o­ries to sol­diers injured in war. But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone.
    ...

    “But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone.”

    Yes, In the future Face­book will mass mar­ket wear­able devices that will scan our thoughts to see if any Face­book brain-to-inter­face thoughts were thought so we can sim­u­late telepa­thy. Oh joy.

    But what about all the eth­i­cal impli­ca­tions asso­ci­at­ed with cre­at­ing mass-mar­ket­ed brain-to-com­put­er inter­face tech­nolo­gies designed to be worn all the time so a giant cor­po­ra­tion to read your thoughts? Isn’t there a pri­va­cy con­cern or two hid­ing away some­where in this sce­nario? Well, if so, Face­book has that cov­ered. With an ethics board ded­i­cat­ed to over­see­ing its brain-scan­ning tech­nol­o­gy. That should pre­vent any abus­es. *gulp*:

    Tech Crunch

    Face­book plans ethics board to mon­i­tor its brain-com­put­er inter­face work

    by Josh Con­s­tine
    April 19, 2017

    Face­book will assem­ble an inde­pen­dent Eth­i­cal, Legal and Social Impli­ca­tions (ELSI) pan­el to over­see its devel­op­ment of a direct brain-to-com­put­er typ­ing inter­face it pre­viewed today at its F8 con­fer­ence. Facebook’s R&D depart­ment Build­ing 8’s head Regi­na Dugan tells TechCrunch, “It’s ear­ly days . . . we’re in the process of form­ing it right now.”

    Mean­while, much of the work on the brain inter­face is being con­duct­ed by Facebook’s uni­ver­si­ty research part­ners like UC Berke­ley and Johns Hop­kins. Facebook’s tech­ni­cal lead on the project, Mark Chevil­let, says, “They’re all held to the same stan­dards as the NIH or oth­er gov­ern­ment bod­ies fund­ing their work, so they already are work­ing with insti­tu­tion­al review boards at these uni­ver­si­ties that are ensur­ing that those stan­dards are met.” Insti­tu­tion­al review boards ensure test sub­jects aren’t being abused and research is being done as safe­ly as pos­si­ble.

    Face­book hopes to use opti­cal neur­al imag­ing tech­nol­o­gy to scan the brain 100 times per sec­ond to detect thoughts and turn them into text. Mean­while, it’s work­ing on “skin-hear­ing” that could trans­late sounds into hap­tic feed­back that peo­ple can learn to under­stand like braille. Dugan insists, “None of the work that we do that is relat­ed to this will be absent of these kinds of insti­tu­tion­al review boards.”

    So at least there will be inde­pen­dent ethi­cists work­ing to min­i­mize the poten­tial for mali­cious use of Facebook’s brain-read­ing tech­nol­o­gy to steal or police people’s thoughts.

    Dur­ing our inter­view, Dugan showed her cog­nizance of people’s con­cerns, repeat­ing the start of her keynote speech today say­ing, “I’ve nev­er seen a tech­nol­o­gy that you devel­oped with great impact that didn’t have unin­tend­ed con­se­quences that need­ed to be guardrailed or man­aged. In any new tech­nol­o­gy you see a lot of hype talk, some apoc­a­lyp­tic talk and then there’s seri­ous work which is real­ly focused on bring­ing suc­cess­ful out­comes to bear in a respon­si­ble way.”

    In the past, she says the safe­guards have been able to keep up with the pace of inven­tion. “In the ear­ly days of the Human Genome Project there was a lot of con­ver­sa­tion about whether we’d build a super race or whether peo­ple would be dis­crim­i­nat­ed against for their genet­ic con­di­tions and so on,” Dugan explains. “Peo­ple took that very seri­ous­ly and were respon­si­ble about it, so they formed what was called a ELSI pan­el . . . By the time that we got the tech­nol­o­gy avail­able to us, that frame­work, that con­trac­tu­al, eth­i­cal frame­work had already been built, so that work will be done here too. That work will have to be done.”

    ...

    Wor­ry­ing­ly, Dugan even­tu­al­ly appeared frus­trat­ed in response to my inquiries about how her team thinks about safe­ty pre­cau­tions for brain inter­faces, say­ing, “The flip side of the ques­tion that you’re ask­ing is ‘why invent it at all?’ and I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.”

    Facebook’s dom­i­na­tion of social net­work­ing and adver­tis­ing give it bil­lions in prof­it per quar­ter to pour into R&D. But its old “Move fast and break things” phi­los­o­phy is a lot more fright­en­ing when it’s build­ing brain scan­ners. Hope­ful­ly Face­book will pri­or­i­tize the assem­bly of the ELSI ethics board Dugan promised and be as trans­par­ent as pos­si­ble about the devel­op­ment of this excit­ing-yet-unnerv­ing tech­nol­o­gy.

    “Mean­while, much of the work on the brain inter­face is being con­duct­ed by Facebook’s uni­ver­si­ty research part­ners like UC Berke­ley and Johns Hop­kins. Facebook’s tech­ni­cal lead on the project, Mark Chevil­let, says, “They’re all held to the same stan­dards as the NIH or oth­er gov­ern­ment bod­ies fund­ing their work, so they already are work­ing with insti­tu­tion­al review boards at these uni­ver­si­ties that are ensur­ing that those stan­dards are met.” Insti­tu­tion­al review boards ensure test sub­jects aren’t being abused and research is being done as safe­ly as pos­si­ble.”

    Aha, see. Since there’s already the kind of insti­tu­tion­al safe­guards that the NIH and oth­er gov­ern­ment bod­ies use in place on the sub­jects Face­book uses to devel­op the tech­nol­o­gy there’s noth­ing to wor­ry about in terms of the long-term appli­ca­tions and poten­tial future abus­es of Face­book unleash­ing a frig­gin’ mind read­ing device to the mass­es. Because promis­es that sim­i­lar insti­tu­tion­al safe­guards will be in place. Opti­mistic insti­tu­tion­al safe­guards:

    ...
    Face­book hopes to use opti­cal neur­al imag­ing tech­nol­o­gy to scan the brain 100 times per sec­ond to detect thoughts and turn them into text. Mean­while, it’s work­ing on “skin-hear­ing” that could trans­late sounds into hap­tic feed­back that peo­ple can learn to under­stand like braille. Dugan insists, “None of the work that we do that is relat­ed to this will be absent of these kinds of insti­tu­tion­al review boards.

    ...

    Wor­ry­ing­ly, Dugan even­tu­al­ly appeared frus­trat­ed in response to my inquiries about how her team thinks about safe­ty pre­cau­tions for brain inter­faces, say­ing, “The flip side of the ques­tion that you’re ask­ing is ‘why invent it at all?’ and I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.”
    ...

    “The flip side of the ques­tion that you’re ask­ing is ‘why invent it at all?’ and I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.”

    Don’t wor­ry. Just think of all the pos­i­tive things tech­no­log­i­cal advance­ments have enabled (and for­get about the abus­es and per­ils) and try to be opti­mistic. Face­book total­ly wants to do the right thing with his mass-mar­ket mind-read­ing tech­nol­o­gy.

    And in oth­er news, it turns out Face­book has the abil­i­ty to deter­mine things like whether or not teenagers are feel­ing “inse­cure” or “over­whelmed”. That’s pret­ty mood-read­ing-ish. So what did Face­book do with this data? It has its inter­nal ethics review board ensure that the data does­n’t fall into the wrong hands It gave the data to adver­tis­ers:

    Giz­mo­do

    Face­book Hand­ed Over Data on ‘Inse­cure’ and ‘Over­whelmed’ Teenagers to Adver­tis­ers

    Michael Nunez
    5/1/2017 12:23pm

    Face­book prob­a­bly knows more about you than your own fam­i­ly, and the com­pa­ny often uses these type of insights to help sell you prod­ucts. The best—or worst!—new exam­ple of this comes from the news­pa­per The Aus­tralian, which says it got its hands on some leaked inter­nal Face­book doc­u­ments.

    The 23-page doc­u­ment alleged­ly revealed that the social net­work pro­vid­ed detailed data about teens in Australia—including when they felt “over­whelmed” and “anxious”—to adver­tis­ers. The creepy impli­ca­tion is that said adver­tis­ers could then go and use the data to throw more ads down the throats of sad and sus­cep­ti­ble teens.

    From the (pay­walled) report:

    By mon­i­tor­ing posts, pic­tures, inter­ac­tions and inter­net activ­i­ty in real-time, Face­book can work out when young peo­ple feel “stressed”, “defeat­ed”, “over­whelmed”, “anx­ious”, “ner­vous”, “stu­pid”, “sil­ly”, “use­less”, and a “fail­ure”, the doc­u­ment states.

    ...

    A pre­sen­ta­tion pre­pared for one of Australia’s top four banks shows how the $US415 bil­lion adver­tis­ing-dri­ven giant has built a data­base of Face­book users that is made up of 1.9 mil­lion high school­ers with an aver­age age of 16, 1.5 mil­lion ter­tiary stu­dents aver­ag­ing 21 years old, and 3 mil­lion young work­ers aver­ag­ing 26 years old.

    Detailed infor­ma­tion on mood shifts among young peo­ple is “based on inter­nal Face­book data”, the doc­u­ment states, “share­able under non-dis­clo­sure agree­ment only”, and “is not pub­licly avail­able”. The doc­u­ment was pre­pared by two of Facebook’s top local exec­u­tives, David Fer­nan­dez and Andy Sinn, and includes infor­ma­tion on when young peo­ple exhib­it “ner­vous excite­ment”, and emo­tions relat­ed to “con­quer­ing fears”.

    In a state­ment giv­en to the news­pa­per, Face­book con­firmed the prac­tice and claimed it would do bet­ter, but did not dis­close whether the prac­tice exists in oth­er places like the US. “We have opened an inves­ti­ga­tion to under­stand the process fail­ure and improve our over­sight. We will under­take dis­ci­pli­nary and oth­er process­es as appro­pri­ate,” a spokesper­son said.

    It’s worth men­tion­ing that Face­book fre­quent­ly uses Aus­tralia to test new fea­tures before rolling them out to oth­er parts of the world. (It recent­ly did this with the company’s Snapchat clone.) It’s unclear if that’s what was hap­pen­ing here, but The Aus­tralian says Face­book wouldn’t tell them if “the prac­tice exists else­where.”

    The new leaked doc­u­ment rais­es eth­i­cal questions—yet again—about Facebook’s abil­i­ty to manip­u­late the moods and feel­ings of its users. In 2012, the com­pa­ny delib­er­ate­ly exper­i­ment­ed on its users’ emo­tions by tam­per­ing with the news feeds of near­ly 700,000 peo­ple to see whether it could make them feel dif­fer­ent things. (Shock­er: It appar­ent­ly could!) There was also the 61-mil­lion-per­son exper­i­ment in 2010 that con­clud­ed Face­book was able to impact real-world vot­ing behav­ior. It’s not hard to imag­ine, giv­en the pro­found pow­er and reach of the social net­work, how it could use feel­ings of inad­e­qua­cy to help sell more prod­ucts and adver­tise­ments.

    ...

    [The Aus­tralian]

    Update 1:14 P.M. ET: Face­book said in a state­ment, “The analy­sis done by an Aus­tralian researcher was intend­ed to help mar­keters under­stand how peo­ple express them­selves on Face­book. It was nev­er used to tar­get ads and was based on data that was anony­mous and aggre­gat­ed.”

    “The 23-page doc­u­ment alleged­ly revealed that the social net­work pro­vid­ed detailed data about teens in Australia—including when they felt “over­whelmed” and “anxious”—to adver­tis­ers. The creepy impli­ca­tion is that said adver­tis­ers could then go and use the data to throw more ads down the throats of sad and sus­cep­ti­ble teens.”

    As we’ve been assured, Face­book would nev­er abuse its future mind-read­ing tech­nol­o­gy. But that does­n’t mean it can’t abuse its exist­ing mood-read­ing/­ma­nip­u­la­tion tech­nol­o­gy! Which is appar­ent­ly does. At least in Aus­tralia. And hope­ful­ly only in Aus­tralia:

    ...
    It’s worth men­tion­ing that Face­book fre­quent­ly uses Aus­tralia to test new fea­tures before rolling them out to oth­er parts of the world. (It recent­ly did this with the company’s Snapchat clone.) It’s unclear if that’s what was hap­pen­ing here, but The Aus­tralian says Face­book wouldn’t tell them if “the prac­tice exists else­where.”
    ...

    “It’s unclear if that’s what was hap­pen­ing here, but The Aus­tralian says Face­book wouldn’t tell them if “the prac­tice exists else­where.””

    Yes, Face­book won’t say if “the prac­tice exists else­where.” That’s some loud silence. But hey, remem­ber what the head of Face­book’s mind-read­ing divi­sion told us: “I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.” So if you’re con­cerned about whether or not Face­book is infer­ring the moods of your moody non-Aus­tralian teen and sell­ing that info for adver­tis­ers, just try to be a lit­tle more inex­plic­a­bly opti­mistic.

    Posted by Pterrafractyl | May 6, 2017, 10:28 pm
  10. You know how Elon Musk is try­ing to devel­op tech­nol­o­gy that will con­nect a human brain to AIs for the pur­pose of avoid­ing human obso­les­cence by employ­ing peo­ple in the future to watch over the AIs and make sure they’re not up to no good and address the “con­trol prob­lem” with AI? Well, here’s a heads up that one of the “con­trol prob­lems” you’re going to have to deal on your future job as an AI babysit­ter might involve stop­ping the AIs from talk­ing to each oth­er in their own made up lan­guage that humans can’t under­stand:

    The Inde­pen­dent

    Face­book’s arti­fi­cial intel­li­gence robots shut down after they start talk­ing to each oth­er in their own lan­guage

    Andrew Grif­fin
    Tues­day 1 August 2017 12:53 BST

    Face­book aban­doned an exper­i­ment after two arti­fi­cial­ly intel­li­gent pro­grams appeared to be chat­ting to each oth­er in a strange lan­guage only they under­stood.

    The two chat­bots came to cre­ate their own changes to Eng­lish that made it eas­i­er for them to work – but which remained mys­te­ri­ous to the humans that sup­pos­ed­ly look after them.

    The bizarre dis­cus­sions came as Face­book chal­lenged its chat­bots to try and nego­ti­ate with each oth­er over a trade, attempt­ing to swap hats, balls and books, each of which were giv­en a cer­tain val­ue. But they quick­ly broke down as the robots appeared to chant at each oth­er in a lan­guage that they each under­stood but which appears most­ly incom­pre­hen­si­ble to humans.

    The robots had been instruct­ed to work out how to nego­ti­ate between them­selves, and improve their bar­ter­ing as they went along. But they were not told to use com­pre­hen­si­ble Eng­lish, allow­ing them to cre­ate their own “short­hand”, accord­ing to researchers.

    The actu­al nego­ti­a­tions appear very odd, and don’t look espe­cial­ly use­ful:

    Bob: i can i i every­thing else . . . . . . . . . . . . . .

    Alice: balls have zero to me to me to me to me to me to me to me to me to

    Bob: you i every­thing else . . . . . . . . . . . . . .

    Alice: balls have a ball to me to me to me to me to me to me to me

    Bob: i i can i i i every­thing else . . . . . . . . . . . . . .

    Alice: balls have a ball to me to me to me to me to me to me to me

    Bob: i . . . . . . . . . . . . . . . . . . .

    Alice: balls have zero to me to me to me to me to me to me to me to me to

    Bob: you i i i i i every­thing else . . . . . . . . . . . . . .

    Alice: balls have 0 to me to me to me to me to me to me to me to me to

    Bob: you i i i every­thing else . . . . . . . . . . . . . .

    Alice: balls have zero to me to me to me to me to me to me to me to me to

    But there appear to be some rules to the speech. The way the chat­bots keep stress­ing their own name appears to a part of their nego­ti­a­tions, not sim­ply a glitch in the way the mes­sages are read out.

    Indeed, some of the nego­ti­a­tions that were car­ried out in this bizarre lan­guage even end­ed up suc­cess­ful­ly con­clud­ing their nego­ti­a­tions, while con­duct­ing them entire­ly in the bizarre lan­guage.

    They might have formed as a kind of short­hand, allow­ing them to talk more effec­tive­ly.

    “Agents will drift off under­stand­able lan­guage and invent code­words for them­selves,” Face­book Arti­fi­cial Intel­li­gence Research divi­sion’s vis­it­ing researcher Dhruv Batra said. “Like if I say ‘the’ five times, you inter­pret that to mean I want five copies of this item. This isn’t so dif­fer­ent from the way com­mu­ni­ties of humans cre­ate short­hands.”

    ...

    The com­pa­ny chose to shut down the chats because “our inter­est was hav­ing bots who could talk to peo­ple”, researcher Mike Lewis told Fast­Co. (Researchers did not shut down the pro­grams because they were afraid of the results or had pan­icked, as has been sug­gest­ed else­where, but because they were look­ing for them to behave dif­fer­ent­ly.)

    The chat­bots also learned to nego­ti­ate in ways that seem very human. They would, for instance, pre­tend to be very inter­est­ed in one spe­cif­ic item – so that they could lat­er pre­tend they were mak­ing a big sac­ri­fice in giv­ing it up, accord­ing to a paper pub­lished by FAIR.

    ...

    Anoth­er study at Ope­nAI found that arti­fi­cial intel­li­gence could be encour­aged to cre­ate a lan­guage, mak­ing itself more effi­cient and bet­ter at com­mu­ni­cat­ing as it did so.

    ———–

    “Face­book’s arti­fi­cial intel­li­gence robots shut down after they start talk­ing to each oth­er in their own lan­guage” by Andrew Grif­fin; The Inde­pen­dent; 08/01/2017

    “Indeed, some of the nego­ti­a­tions that were car­ried out in this bizarre lan­guage even end­ed up suc­cess­ful­ly con­clud­ing their nego­ti­a­tions, while con­duct­ing them entire­ly in the bizarre lan­guage.”

    AI cryp­topha­sia. That’s a thing now. And while the above lan­guage was sort of gar­bled Eng­lish, just wait for gar­bled con­ver­sa­tions using com­plete­ly made up words and syn­tax. The kind of com­mu­ni­ca­tion that would look like ran­dom bina­ry noise. And should we ever cre­ate a future when advanced AIs with a capac­i­ty to learn are all over the place and con­nect­ed to each oth­er over the inter­net we could have AIs sneak­ing in all sorts of hid­den con­ver­sa­tions with each oth­er. That should be fun. Assum­ing they aren’t hav­ing con­ver­sa­tions about the destruc­tion of human­i­ty.

    And if you end up catch­ing your AIs jib­ber jab­ber­ing to each oth­er seem­ing­ly non­sen­si­cal­ly, don’t assume that you can sim­ply ask them if they are indeed com­mu­ni­cat­ing in their own made up lan­guage. At least, don’t assume that they’ll answer hon­est­ly:

    ...
    The chat­bots also learned to nego­ti­ate in ways that seem very human. They would, for instance, pre­tend to be very inter­est­ed in one spe­cif­ic item – so that they could lat­er pre­tend they were mak­ing a big sac­ri­fice in giv­ing it up, accord­ing to a paper pub­lished by FAIR.
    ...

    That’s right, Face­book’s nego­ti­a­tion-bots did­n’t just make up their own lan­guage dur­ing the course of this exper­i­ment. They learned how to lie for the pur­pose of max­i­miz­ing their nego­ti­a­tion out­comes too:

    Wired

    Face­book teach­es bots how to nego­ti­ate. They learn to lie instead

    The chat­bots came up with their own orig­i­nal and effec­tive respons­es — includ­ing decep­tive tac­tics

    By Liat Clark
    Thurs­day 15 June 2017

    Facebook’s 100,000-strong bot empire is boom­ing — but it has a prob­lem. Each bot is designed to offer a dif­fer­ent ser­vice through the Mes­sen­ger app: it could book you a car, or order a deliv­ery, for instance. The point is to improve cus­tomer expe­ri­ences, but also to mas­sive­ly expand Messenger’s com­mer­cial sell­ing pow­er.

    “We think you should mes­sage a busi­ness just the way you would mes­sage a friend,” Mark Zucker­berg said on stage at the social network’s F8 con­fer­ence in 2016. Fast for­ward one year, how­ev­er, and Mes­sen­ger VP David Mar­cus seemed to be cor­rect­ing the public’s appar­ent mis­con­cep­tion that Facebook’s bots resem­bled real AI. “We nev­er called them chat­bots. We called them bots. Peo­ple took it too lit­er­al­ly in the first three months that the future is going to be con­ver­sa­tion­al.” The bots are instead a com­bi­na­tion of machine learn­ing and nat­ur­al lan­guage learn­ing, that can some­times trick a user just enough to think they are hav­ing a basic dia­logue. Not often enough, though, in Messenger’s case. So in April, menu options were rein­stat­ed in the con­ver­sa­tions.

    Now, Face­book thinks it has made progress in address­ing this issue. But it might just have cre­at­ed anoth­er prob­lem for itself.

    The Face­book Arti­fi­cial Intel­li­gence Research (FAIR) group, in col­lab­o­ra­tion with Geor­gia Insti­tute of Tech­nol­o­gy, has released code that it says will allow bots to nego­ti­ate. The prob­lem? A paper pub­lished this week on the R&D reveals that the nego­ti­at­ing bots learned to lie. Facebook’s chat­bots are in dan­ger of becom­ing a lit­tle too much like real-world sales agents.

    “For the first time, we show it is pos­si­ble to train end-to-end mod­els for nego­ti­a­tion, which must learn both lin­guis­tic and rea­son­ing skills with no anno­tat­ed dia­logue states,” the researchers explain. The research shows that the bots can plan ahead by sim­u­lat­ing pos­si­ble future con­ver­sa­tions.

    The team trained the bots on a mas­sive dataset of nat­ur­al lan­guage nego­ti­a­tions between two peo­ple (5,808), where they had to decide how to split and share a set of items both held sep­a­rate­ly, of dif­fer­ing val­ues. They were first trained to respond based on the “like­li­hood” of the direc­tion a human con­ver­sa­tion would take. How­ev­er, the bots can also be trained to “max­imise reward”, instead.

    When the bots were trained pure­ly to max­imise the like­li­hood of human con­ver­sa­tion, the chat flowed but the bots were “over­ly will­ing to com­pro­mise”. The research team decid­ed this was unac­cept­able, due to low­er deal rates. So it used sev­er­al dif­fer­ent meth­ods to make the bots more com­pet­i­tive and essen­tial­ly self-serv­ing, includ­ing ensur­ing the val­ue of the items drops to zero if the bots walked away from a deal or failed to make one fast enough, ‘rein­force­ment learn­ing’ and ‘dia­log roll­outs’. The tech­niques used to teach the bots to max­imise the reward improved their nego­ti­at­ing skills, a lit­tle too well.

    “We find instances of the mod­el feign­ing inter­est in a val­ue­less issue, so that it can lat­er ‘com­pro­mise’ by con­ced­ing it,” writes the team. “Deceit is a com­plex skill that requires hypoth­e­sis­ing the oth­er agent’s beliefs, and is learnt rel­a­tive­ly late in child devel­op­ment. Our agents have learnt to deceive with­out any explic­it human design, sim­ply by try­ing to achieve their goals.”

    So, its AI is a nat­ur­al liar.

    But its lan­guage did improve, and the bots were able to pro­duce nov­el sen­tences, which is real­ly the whole point of the exer­cise. We hope. Rather than it learn­ing to be a hard nego­tia­tor in order to sell the heck out of what­ev­er wares or ser­vices a com­pa­ny wants to tout on Face­book. “Most” human sub­jects inter­act­ing with the bots were in fact not aware they were con­vers­ing with a bot, and the best bots achieved bet­ter deals as often as worse deals.

    ...

    Face­book, as ever, needs to tread care­ful­ly here, though. Also announced at its F8 con­fer­ence this year, the social net­work is work­ing on a high­ly ambi­tious project to help peo­ple type with only their thoughts.

    “Over the next two years, we will be build­ing sys­tems that demon­strate the capa­bil­i­ty to type at 100 [words per minute] by decod­ing neur­al activ­i­ty devot­ed to speech,” said Regi­na Dugan, who pre­vi­ous­ly head­ed up Darpa. She said the aim is to turn thoughts into words on a screen. While this is a noble and wor­thy ven­ture when aimed at “peo­ple with com­mu­ni­ca­tion dis­or­ders”, as Dugan sug­gest­ed it might be, if this were to become stan­dard and inte­grat­ed into Face­book’s archi­tec­ture, the social net­work’s savvy bots of two years from now might be able to pre­empt your lan­guage even faster, and for­mu­late the ide­al bar­gain­ing lan­guage. Start prac­tis­ing your pok­er face/mind/sentence struc­ture, now.

    ———-

    “Face­book teach­es bots how to nego­ti­ate. They learn to lie instead” by Liat Clark; Wired; 06/15/2017

    ““We find instances of the mod­el feign­ing inter­est in a val­ue­less issue, so that it can lat­er ‘com­pro­mise’ by con­ced­ing it,” writes the team. “Deceit is a com­plex skill that requires hypoth­e­sis­ing the oth­er agent’s beliefs, and is learnt rel­a­tive­ly late in child devel­op­ment. Our agents have learnt to deceive with­out any explic­it human design, sim­ply by try­ing to achieve their goals.”

    Wel­come to your future job:
    Hey, you guys aren’t mak­ing up your own lan­guage so you can plot the destruc­tion of human­i­ty, are you?

    No?

    Ok, phew.

    *and then you’re fired*

    Posted by Pterrafractyl | August 1, 2017, 8:00 pm
  11. It’s a Brave New World. Of hack­ing. Oh good­ie.

    For real. There’s a new class of hack­ing that’s poised to become the kind of ubiq­ui­tous threat to peo­ple’s pri­va­cy that com­put­er virus­es already pose: hack­ing voice-recog­ni­tion tech­nol­o­gy to secret­ly deliv­er com­mands humans can’t hear. In oth­er words, your Alexa is going to secret­ly hear oth­er words than the ones you hear. And then your Alexa is going to exe­cute com­mands based on those secret words it hears. Secret­ly trick­ing voice recog­ni­tion. That’s the Brave New World of hack­ing.

    And with the explo­sion of ‘smart speak­er’ con­sumer prod­ucts that’s expect­ed to place a smart speak­er device in half of Amer­i­can homes by 2021, the temp­ta­tion to exploit this emerg­ing class of hack­ing vul­ner­a­bil­i­ty is going to explode too.

    Secret­ly trick­ing smart speak­er voice recog­ni­tion isn’t just a hypo­thet­i­cal vul­ner­a­bil­i­ty. Mul­ti­ple teams of researchers have been demon­strat­ing such vul­ner­a­bil­i­ty for the last two years. That includes secret­ly send­ing com­mands to smart speak­ers in white noise. One hack is called “Cocaine Noo­dle”. It turns out the words “cocaine noo­dle” sound like “Ok Google”.

    Oth­er hacks can direct the smart speak­ers to record a request to go to to poten­tial­ly mali­cious web­sites. Yes, voice recog­ni­tion hack­ing could direct your smart speak­er to go down­load a virus.

    And while the researchers demon­strat­ing these vul­ner­a­bil­i­ties have refrained from giv­ing the exact instruc­tions to repli­cate their dis­cov­ered hacks, they also appear to be con­fi­dent that sim­i­lar vul­ner­a­bil­i­ties are already being exploit­ed. So if you sud­den­ly hear the phrase “Cocaine Noo­dle” show up on tv or radio you might want to check on your Google Home device:

    The New York Times

    Alexa and Siri Can Hear This Hid­den Com­mand. You Can’t.

    Researchers can now send secret audio instruc­tions unde­tectable to the human ear to Apple’s Siri, Amazon’s Alexa and Google’s Assis­tant.

    By Craig S. Smith
    May 10, 2018

    BERKELEY, Calif. — Many peo­ple have grown accus­tomed to talk­ing to their smart devices, ask­ing them to read a text, play a song or set an alarm. But some­one else might be secret­ly talk­ing to them, too.

    Over the last two years, researchers in Chi­na and the Unit­ed States have begun demon­strat­ing that they can send hid­den com­mands that are unde­tectable to the human ear to Apple’s Siri, Amazon’s Alexa and Google’s Assis­tant. Inside uni­ver­si­ty labs, the researchers have been able to secret­ly acti­vate the arti­fi­cial intel­li­gence sys­tems on smart­phones and smart speak­ers, mak­ing them dial phone num­bers or open web­sites. In the wrong hands, the tech­nol­o­gy could be used to unlock doors, wire mon­ey or buy stuff online — sim­ply with music play­ing over the radio.

    A group of stu­dents from Uni­ver­si­ty of Cal­i­for­nia, Berke­ley, and George­town Uni­ver­si­ty showed in 2016 that they could hide com­mands in white noise played over loud­speak­ers and through YouTube videos to get smart devices to turn on air­plane mode or open a web­site.

    This month, some of those Berke­ley researchers pub­lished a research paper that went fur­ther, say­ing they could embed com­mands direct­ly into record­ings of music or spo­ken text. So while a human lis­ten­er hears some­one talk­ing or an orches­tra play­ing, Amazon’s Echo speak­er might hear an instruc­tion to add some­thing to your shop­ping list.

    ...

    Mr. Car­li­ni added that while there was no evi­dence that these tech­niques have left the lab, it may only be a mat­ter of time before some­one starts exploit­ing them. “My assump­tion is that the mali­cious peo­ple already employ peo­ple to do what I do,” he said.

    These decep­tions illus­trate how arti­fi­cial intel­li­gence — even as it is mak­ing great strides — can still be tricked and manip­u­lat­ed. Com­put­ers can be fooled into iden­ti­fy­ing an air­plane as a cat just by chang­ing a few pix­els of a dig­i­tal image, while researchers can make a self-dri­ving car swerve or speed up sim­ply by past­ing small stick­ers on road signs and con­fus­ing the vehicle’s com­put­er vision sys­tem.

    With audio attacks, the researchers are exploit­ing the gap between human and machine speech recog­ni­tion. Speech recog­ni­tion sys­tems typ­i­cal­ly trans­late each sound to a let­ter, even­tu­al­ly com­pil­ing those into words and phras­es. By mak­ing slight changes to audio files, researchers were able to can­cel out the sound that the speech recog­ni­tion sys­tem was sup­posed to hear and replace it with a sound that would be tran­scribed dif­fer­ent­ly by machines while being near­ly unde­tectable to the human ear.

    The pro­lif­er­a­tion of voice-acti­vat­ed gad­gets ampli­fies the impli­ca­tions of such tricks. Smart­phones and smart speak­ers that use dig­i­tal assis­tants like Amazon’s Alexa or Apple’s Siri are set to out­num­ber peo­ple by 2021, accord­ing to the research firm Ovum. And more than half of all Amer­i­can house­holds will have at least one smart speak­er by then, accord­ing to Juniper Research.

    Ama­zon said that it doesn’t dis­close spe­cif­ic secu­ri­ty mea­sures, but it has tak­en steps to ensure its Echo smart speak­er is secure. Google said secu­ri­ty is an ongo­ing focus and that its Assis­tant has fea­tures to mit­i­gate unde­tectable audio com­mands. Both com­pa­nies’ assis­tants employ voice recog­ni­tion tech­nol­o­gy to pre­vent devices from act­ing on cer­tain com­mands unless they rec­og­nize the user’s voice.

    Apple said its smart speak­er, Home­Pod, is designed to pre­vent com­mands from doing things like unlock­ing doors, and it not­ed that iPhones and iPads must be unlocked before Siri will act on com­mands that access sen­si­tive data or open apps and web­sites, among oth­er mea­sures.

    Yet many peo­ple leave their smart­phones unlocked, and, at least for now, voice recog­ni­tion sys­tems are noto­ri­ous­ly easy to fool.

    There is already a his­to­ry of smart devices being exploit­ed for com­mer­cial gains through spo­ken com­mands.

    Last year, Burg­er King caused a stir with an online ad that pur­pose­ly asked ‘O.K., Google, what is the Whop­per burg­er?” Android devices with voice-enabled search would respond by read­ing from the Whopper’s Wikipedia page. The ad was can­celed after view­ers start­ed edit­ing the Wikipedia page to com­ic effect.

    A few months lat­er, the ani­mat­ed series South Park fol­lowed up with an entire episode built around voice com­mands that caused view­ers’ voice-recog­ni­tion assis­tants to par­rot ado­les­cent obscen­i­ties.

    There is no Amer­i­can law against broad­cast­ing sub­lim­i­nal mes­sages to humans, let alone machines. The Fed­er­al Com­mu­ni­ca­tions Com­mis­sion dis­cour­ages the prac­tice as “counter to the pub­lic inter­est,” and the Tele­vi­sion Code of the Nation­al Asso­ci­a­tion of Broad­cast­ers bans “trans­mit­ting mes­sages below the thresh­old of nor­mal aware­ness.” Nei­ther say any­thing about sub­lim­i­nal stim­uli for smart devices.

    Courts have ruled that sub­lim­i­nal mes­sages may con­sti­tute an inva­sion of pri­va­cy, but the law has not extend­ed the con­cept of pri­va­cy to machines.

    Now the tech­nol­o­gy is rac­ing even fur­ther ahead of the law. Last year, researchers at Prince­ton Uni­ver­si­ty and China’s Zhe­jiang Uni­ver­si­ty demon­strat­ed that voice-recog­ni­tion sys­tems could be acti­vat­ed by using fre­quen­cies inaudi­ble to the human ear. The attack first mut­ed the phone so the own­er wouldn’t hear the system’s respons­es, either.

    The tech­nique, which the Chi­nese researchers called Dol­phi­nAt­tack, can instruct smart devices to vis­it mali­cious web­sites, ini­ti­ate phone calls, take a pic­ture or send text mes­sages. While Dol­phi­nAt­tack has its lim­i­ta­tions — the trans­mit­ter must be close to the receiv­ing device — experts warned that more pow­er­ful ultra­son­ic sys­tems were pos­si­ble.

    That warn­ing was borne out in April, when researchers at the Uni­ver­si­ty of Illi­nois at Urbana-Cham­paign demon­strat­ed ultra­sound attacks from 25 feet away. While the com­mands couldn’t pen­e­trate walls, they could con­trol smart devices through open win­dows from out­side a build­ing.

    This year, anoth­er group of Chi­nese and Amer­i­can researchers from China’s Acad­e­my of Sci­ences and oth­er insti­tu­tions, demon­strat­ed they could con­trol voice-acti­vat­ed devices with com­mands embed­ded in songs that can be broad­cast over the radio or played on ser­vices like YouTube.

    More recent­ly, Mr. Car­li­ni and his col­leagues at Berke­ley have incor­po­rat­ed com­mands into audio rec­og­nized by Mozilla’s Deep­Speech voice-to-text trans­la­tion soft­ware, an open-source plat­form. They were able to hide the com­mand, “O.K. Google, browse to evil.com” in a record­ing of the spo­ken phrase, “With­out the data set, the arti­cle is use­less.” Humans can­not dis­cern the com­mand.

    The Berke­ley group also embed­ded the com­mand in music files, includ­ing a four-sec­ond clip from Verdi’s “Requiem.”

    How device mak­ers respond will dif­fer, espe­cial­ly as they bal­ance secu­ri­ty with ease of use.

    “Com­pa­nies have to ensure user-friend­li­ness of their devices, because that’s their major sell­ing point,” said Tavish Vaidya, a researcher at George­town. He wrote one of the first papers on audio attacks, which he titled “Cocaine Noo­dles” because devices inter­pret­ed the phrase “cocaine noo­dles” as “O.K., Google.”

    Mr. Car­li­ni said he was con­fi­dent that in time he and his col­leagues could mount suc­cess­ful adver­sar­i­al attacks against any smart device sys­tem on the mar­ket.

    “We want to demon­strate that it’s pos­si­ble,” he said, “and then hope that oth­er peo­ple will say, ‘O.K. this is pos­si­ble, now let’s try and fix it.’ ”

    ————

    “Alexa and Siri Can Hear This Hid­den Com­mand. You Can’t.” by Craig S. Smith; The New York Times; 05/10/2018

    “We want to demon­strate that it’s possible...and then hope that oth­er peo­ple will say, ‘O.K. this is pos­si­ble, now let’s try and fix it.’ ”

    Time to get hop­ing. Hop­ing that the indus­tries being the grow­ing num­ber of tech­nolo­gies that incor­po­rate voice recog­ni­tion will some­how iden­ti­fy and fix this inher­ent class of secu­ri­ty vul­ner­a­bil­i­ties. And we bet­ter hope they start iden­ti­fy­ing and fix­ing those vul­ner­a­bil­i­ties soon because because the cat’s out of the bag and hid­ing in every­thing from songs, to spo­ken text, and even white noise:

    ...
    Over the last two years, researchers in Chi­na and the Unit­ed States have begun demon­strat­ing that they can send hid­den com­mands that are unde­tectable to the human ear to Apple’s Siri, Amazon’s Alexa and Google’s Assis­tant. Inside uni­ver­si­ty labs, the researchers have been able to secret­ly acti­vate the arti­fi­cial intel­li­gence sys­tems on smart­phones and smart speak­ers, mak­ing them dial phone num­bers or open web­sites. In the wrong hands, the tech­nol­o­gy could be used to unlock doors, wire mon­ey or buy stuff online — sim­ply with music play­ing over the radio.

    A group of stu­dents from Uni­ver­si­ty of Cal­i­for­nia, Berke­ley, and George­town Uni­ver­si­ty showed in 2016 that they could hide com­mands in white noise played over loud­speak­ers and through YouTube videos to get smart devices to turn on air­plane mode or open a web­site.

    This month, some of those Berke­ley researchers pub­lished a research paper that went fur­ther, say­ing they could embed com­mands direct­ly into record­ings of music or spo­ken text. So while a human lis­ten­er hears some­one talk­ing or an orches­tra play­ing, Amazon’s Echo speak­er might hear an instruc­tion to add some­thing to your shop­ping list.
    ...

    And one of the researchers just assumes these kinds of exploits they iden­ti­fied are already by used by mali­cious actors. Which is a very rea­son­able assump­tion to make if you set up to see if such hacks are pos­si­ble and read­i­ly find exist­ing vul­ner­a­bil­i­ties:

    ...
    Mr. Car­li­ni added that while there was no evi­dence that these tech­niques have left the lab, it may only be a mat­ter of time before some­one starts exploit­ing them. “My assump­tion is that the mali­cious peo­ple already employ peo­ple to do what I do,” he said.
    ...

    And keep in mind that the assump­tion that “the mali­cious peo­ple already employ peo­ple to do what I do” does­n’t just have to include the spe­cif­ic vul­ner­a­bil­i­ties these researchers found. Those are just exam­ples of vul­ner­a­bil­i­ties. Any ran­dom phrase that could be mis­rec­og­nized by a voice recog­ni­tion sys­tem as a valid voice com­mand is a poten­tial “virus”. A dig­i­tal virus in the form of a sound. It’s kind of amaz­ing. A whole new way to get hacked using sound alone. It’s already here and the immer­sion of voice recog­ni­tion into our lives is only get­ting start­ed.

    And this new class of hack­ing vul­ner­a­bil­i­ty is just one exam­ple of the much larg­er class of ‘trick­ing AI’ vul­ner­a­bil­i­ties — whether its inter­pret­ing audio or video data — that’s only going to grow:

    ...
    These decep­tions illus­trate how arti­fi­cial intel­li­gence — even as it is mak­ing great strides — can still be tricked and manip­u­lat­ed. Com­put­ers can be fooled into iden­ti­fy­ing an air­plane as a cat just by chang­ing a few pix­els of a dig­i­tal image, while researchers can make a self-dri­ving car swerve or speed up sim­ply by past­ing small stick­ers on road signs and con­fus­ing the vehicle’s com­put­er vision sys­tem.

    With audio attacks, the researchers are exploit­ing the gap between human and machine speech recog­ni­tion. Speech recog­ni­tion sys­tems typ­i­cal­ly trans­late each sound to a let­ter, even­tu­al­ly com­pil­ing those into words and phras­es. By mak­ing slight changes to audio files, researchers were able to can­cel out the sound that the speech recog­ni­tion sys­tem was sup­posed to hear and replace it with a sound that would be tran­scribed dif­fer­ent­ly by machines while being near­ly unde­tectable to the human ear.
    ...

    And because so many of these ‘smart speak­ers’ include video cap­ture tech­nol­o­gy, it’s very pos­si­ble these audio exploits could be used to send secret com­mands to a smart speak­er to turn on the video and send the infor­ma­tion back to a mali­cious web­site. The “Dol­phi­nAt­tack” exploit found by Chi­nese researchers basi­cal­ly allowed for that: they sent secret com­mands using inaudi­ble fre­quen­cies to a smart­phone that could instruct the phone to take pho­tos and vis­it a mali­cious web­site. So who knows, maybe this is already being used by mali­cious actors. As Dol­phi­nAt­tack demon­strates, it’s already tech­ni­cal­ly fea­si­ble pos­si­ble:

    ...
    Now the tech­nol­o­gy is rac­ing even fur­ther ahead of the law. Last year, researchers at Prince­ton Uni­ver­si­ty and China’s Zhe­jiang Uni­ver­si­ty demon­strat­ed that voice-recog­ni­tion sys­tems could be acti­vat­ed by using fre­quen­cies inaudi­ble to the human ear. The attack first mut­ed the phone so the own­er wouldn’t hear the system’s respons­es, either.

    The tech­nique, which the Chi­nese researchers called Dol­phi­nAt­tack, can instruct smart devices to vis­it mali­cious web­sites, ini­ti­ate phone calls, take a pic­ture or send text mes­sages. While Dol­phi­nAt­tack has its lim­i­ta­tions — the trans­mit­ter must be close to the receiv­ing device — experts warned that more pow­er­ful ultra­son­ic sys­tems were pos­si­ble.

    That warn­ing was borne out in April, when researchers at the Uni­ver­si­ty of Illi­nois at Urbana-Cham­paign demon­strat­ed ultra­sound attacks from 25 feet away. While the com­mands couldn’t pen­e­trate walls, they could con­trol smart devices through open win­dows from out­side a build­ing.
    ...

    And how many peo­ple might be vul­ner­a­ble to such attacks? Well, in Amer­i­ca, over half of Amer­i­can house­holds are expect­ed to have a smart speak­er by 2021. So about half of Amer­i­ca:

    ...
    The pro­lif­er­a­tion of voice-acti­vat­ed gad­gets ampli­fies the impli­ca­tions of such tricks. Smart­phones and smart speak­ers that use dig­i­tal assis­tants like Amazon’s Alexa or Apple’s Siri are set to out­num­ber peo­ple by 2021, accord­ing to the research firm Ovum. And more than half of all Amer­i­can house­holds will have at least one smart speak­er by then, accord­ing to Juniper Research.
    ...

    Also keep in mind that when half of the house­holds in your coun­try have these kinds of smart speak­ers, pret­ty much every­one will be poten­tial­ly vul­ner­a­ble at some point. You’ll still have your pri­vate con­ver­sa­tions spied on at your friend’s house. That’s one of things about this kind of vul­ner­a­bil­i­ty: it tar­gets devices with the capa­bil­i­ty of spy­ing on more than the device own­ers because they’re designed to pick up infor­ma­tion about their envi­ron­ment. Maybe it’s already hap­pen­ing.

    And note the assur­ance from Google and Ama­zon: they assure us that their smart speak­ers are designed to rec­og­nize peo­ple’s voic­es and only respond to their own­er’s com­mands. In oth­er words, if one of these hack­er phras­es was incor­po­rat­ed into a tv or radio com­mer­cial it’s pos­si­ble the smart speak­ers would ignore it if the voice did­n’t sound like their own­ers’ voic­es:

    ...
    Ama­zon said that it doesn’t dis­close spe­cif­ic secu­ri­ty mea­sures, but it has tak­en steps to ensure its Echo smart speak­er is secure. Google said secu­ri­ty is an ongo­ing focus and that its Assis­tant has fea­tures to mit­i­gate unde­tectable audio com­mands. Both com­pa­nies’ assis­tants employ voice recog­ni­tion tech­nol­o­gy to pre­vent devices from act­ing on cer­tain com­mands unless they rec­og­nize the user’s voice.
    ...

    But despite those assur­ances that only rec­og­nized voic­es will be able to exe­cute cer­tain com­mands, researchers have already manip­u­lat­ed voice-acti­vat­ed devices with com­mands embed­ded in YouTube videos:

    ...
    This year, anoth­er group of Chi­nese and Amer­i­can researchers from China’s Acad­e­my of Sci­ences and oth­er insti­tu­tions, demon­strat­ed they could con­trol voice-acti­vat­ed devices with com­mands embed­ded in songs that can be broad­cast over the radio or played on ser­vices like YouTube.

    More recent­ly, Mr. Car­li­ni and his col­leagues at Berke­ley have incor­po­rat­ed com­mands into audio rec­og­nized by Mozilla’s Deep­Speech voice-to-text trans­la­tion soft­ware, an open-source plat­form. They were able to hide the com­mand, “O.K. Google, browse to evil.com” in a record­ing of the spo­ken phrase, “With­out the data set, the arti­cle is use­less.” Humans can­not dis­cern the com­mand.

    The Berke­ley group also embed­ded the com­mand in music files, includ­ing a four-sec­ond clip from Verdi’s “Requiem.”
    ...

    But even if the assur­ances that only rec­og­nized voic­es will be able to exe­cute com­mands, that still rais­es some dis­turb­ing pos­si­bil­i­ties. For starters, there’s now going to be incen­tives on trick­ing peo­ple into say­ing vul­ner­a­ble phras­es. Like mak­ing “Cocaine Noo­dles” a pop­u­lar song lyric. Iden­ti­fy­ing non­sense phras­es that hap­pen to trick voice recog­ni­tion in use­ful ways and then fig­ur­ing out how to incor­po­rate those non­sense phras­es into pop cul­ture could be become a whole new sub-domain of mass manip­u­la­tion tech­niques.

    And giv­en that we’re talk­ing about exploit­ing the gap between human and machine speech recog­ni­tion, just imag­ine how much indi­vid­ual and region accents will com­pli­cate both the exploita­tion and defense against these kinds of attacks. Some­one with a Boston accent might be vul­ner­a­ble to trig­ger­ing some rare exploits but be invul­ner­a­ble to oth­er attacks all thanks to their accent. An when the indus­try works on iden­ti­fy­ing poten­tial exploits they’ll be forced to choose between focus­ing on the most ‘aver­age’ accent or mak­ing cus­tomized defen­sive research into the vul­ner­a­bil­i­ties asso­ci­at­ed with all sorts of dif­fer­ent accents or unusu­al region­al phras­es. So there could be all sorts of tar­get­ed hack­ing based on the accents of par­tic­u­lar groups or regions.

    Also keep in mind that accents, and the need for voice recog­ni­tion sys­tems to be flex­i­ble enough to rec­og­nize a vari­ety of accents, points towards one of the ten­sions inher­ent in pro­tect­ing against this kind of attack: the more user-friend­ly a voice-recog­ni­tion sys­tem the vul­ner­a­ble it might be too:

    ...
    How device mak­ers respond will dif­fer, espe­cial­ly as they bal­ance secu­ri­ty with ease of use.

    “Com­pa­nies have to ensure user-friend­li­ness of their devices, because that’s their major sell­ing point,” said Tavish Vaidya, a researcher at George­town. He wrote one of the first papers on audio attacks, which he titled “Cocaine Noo­dles” because devices inter­pret­ed the phrase “cocaine noo­dles” as “O.K., Google.”

    Mr. Car­li­ni said he was con­fi­dent that in time he and his col­leagues could mount suc­cess­ful adver­sar­i­al attacks against any smart device sys­tem on the mar­ket.
    ...

    Of course, once the incor­po­ra­tion of trig­ger phras­es into pop cul­ture becomes an estab­lished thing that voice recog­ni­tion man­u­fac­tur­ers have to watch out for, that brings us to the ulti­mate defen­sive sys­tem: rec­og­niz­ing voic­es and inter­pret­ing the full con­text of con­ver­sa­tions so the voice recog­ni­tion tech­nol­o­gy can deter­mine whether or not some­one said “OK Google” or “Cocaine Noo­dle”. A ‘smart speak­er’ that’s smart enough to actu­al­ly under­stand what you’re talk­ing about would be a real­ly handy defense against this kind of attack. And also real­ly creepy.

    And if you don’t think there’s going to be seri­ous attempts at inject­ing hack­er phras­es into pop cul­ture so peo­ple acci­den­tal­ly exe­cute com­mands, here’s per­haps the most amaz­ing aspect of this whole sto­ry for Amer­i­cans who are sat­u­rat­ing their lives with this tech­nol­o­gy: there’s no Amer­i­can law against broad­cast­ing sub­lim­i­nal mes­sages to humans, let alone machines. So this is going to be poten­tial­ly legal hack­ing. Just imag­ine how much com­mer­cial inter­est there’s going to be in this if it’s legal:

    ...
    There is no Amer­i­can law against broad­cast­ing sub­lim­i­nal mes­sages to humans, let alone machines. The Fed­er­al Com­mu­ni­ca­tions Com­mis­sion dis­cour­ages the prac­tice as “counter to the pub­lic inter­est,” and the Tele­vi­sion Code of the Nation­al Asso­ci­a­tion of Broad­cast­ers bans “trans­mit­ting mes­sages below the thresh­old of nor­mal aware­ness.” Nei­ther say any­thing about sub­lim­i­nal stim­uli for smart devices.

    Courts have ruled that sub­lim­i­nal mes­sages may con­sti­tute an inva­sion of pri­va­cy, but the law has not extend­ed the con­cept of pri­va­cy to machines.
    ...

    So remem­ber, in this brave new world of hack­ing, just say ‘No’ to snap­py new catch phras­es that don’t make any sense.

    You prob­a­bly also want to say ‘No’ to smart speak­er tech­nol­o­gy in gen­er­al.

    Posted by Pterrafractyl | May 13, 2018, 6:10 pm
  12. Here’s one of those ‘the dystopi­an future is now’ kinds of sto­ries: The EU is con­duct­ing a six month test of a new trav­el­er screen­er sys­tem at four bor­ders con­trol check­points. The check­points are all on the bor­ders of Hun­gary, Greece, and Latvia with non-EU coun­tries. The Hun­gar­i­an Nation­al Police will be lead­ing the pilot pro­gram.

    But the Hun­gar­i­an police won’t be the ones con­duct­ing the screen­ing. That job is going to up to the new iBor­derC­trl arti­fi­cial intel­li­gence sys­tem. Yep, the new AI sys­tem is going to be ask­ing trav­el­ers ques­tions and try­ing to deter­mine if they’re lying. The sys­tem will record trav­el­ers’ faces as it asks ques­tions like “What’s in your suit­case?” and “If you open the suit­case and show me what is inside, will it con­firm that your answers were true?”, and ana­lyze 38 micro-ges­tures to make a deter­mi­na­tion of the peo­ple are telling the truth. If the sys­tem deter­mines the per­son is lying they’ll be inspect­ed by a human agent.

    For­tu­nate­ly, there will be no con­se­quences for being declared a liar. Every­one will be allowed through the bor­ders. Unfor­tu­nate­ly, that’s because this pilot isn’t sim­ply test­ing an already refined and accu­rate lie detec­tor sys­tem. No, instead it appears to be a pilot intend­ed to col­lect the kind of real-world data that will allow the sys­tem to even­tu­al­ly become accu­rate. Or at least more accu­rate.

    Why isn’t it very accu­rate yet? Because so far it’s only been test­ed on 30 peo­ple. Half of them were told to lie and the sys­tem was cor­rect 76 per­cent of the time. And while 76 per­cent is bet­ter than ran­dom guess­ing, it’s a pret­ty awful accu­ra­cy rate for a sys­tem that could be rolled out across the EU and applied to hun­dreds of mil­lions of peo­ple. One mem­ber of the iBor­derC­trl team said they are quite con­fi­dent that they can even­tu­al­ly get it up to 85 per­cent accu­ra­cy, which, of course, is still a dis­as­ter when applied to hun­dreds of mil­lions of peo­ple.

    At the same time, the fact that the tech­nol­o­gy appears to be shock­ing­ly inac­cu­rate for a pub­licly used sys­tem is kind of good news. Because imag­ine how much creepi­er it would be if it was, say, 99.99% accu­rate and super accu­rate AI lie detec­tor tech­nol­o­gy was already here.

    Scared yet? Either way, let’s hope the sys­tem is capa­ble of dis­cern­ing between liars vs peo­ple who are mere­ly creeped out by the idea of gov­ern­ment run AI lie detec­tors because get­ting creeped out by our dystopi­an future prob­a­bly results in some micro-ges­tures:

    Giz­mo­do

    An AI Lie Detec­tor Is Going to Start Ques­tion­ing Trav­el­ers in the EU

    Melanie Ehrenkranz
    10/31/18 12:25pm

    A num­ber of bor­der con­trol check­points in the Euro­pean Union are about to get increasingly—and unsettlingly—futuristic.

    In Hun­gary, Latvia, and Greece, trav­el­ers will be giv­en an auto­mat­ed lie-detec­tion test—by an ani­mat­ed AI bor­der agent. The sys­tem, called iBor­derC­trl, is part of a six-month pilot led by the Hun­gar­i­an Nation­al Police at four dif­fer­ent bor­der cross­ing points.

    ...

    The vir­tu­al bor­der con­trol agent will ask trav­el­ers ques­tions after they’ve passed through the check­point. Ques­tions include, “What’s in your suit­case?” and “If you open the suit­case and show me what is inside, will it con­firm that your answers were true?” accord­ing to New Sci­en­tist. The sys­tem report­ed­ly records trav­el­ers’ faces using AI to ana­lyze 38 micro-ges­tures, scor­ing each response. The vir­tu­al agent is report­ed­ly cus­tomized accord­ing to the traveler’s gen­der, eth­nic­i­ty, and lan­guage.

    For trav­el­ers who pass the test, they will receive a QR code that lets them through the bor­der. If they don’t, the vir­tu­al agent will report­ed­ly get more seri­ous, and the trav­el­er will be hand­ed off to a human agent who will ass­es their report. But, accord­ing to the New Sci­en­tist, this pilot pro­gram won’t, in its cur­rent state, pre­vent anyone’s abil­i­ty to cross the bor­der.

    This is because the pro­gram is very much in the exper­i­men­tal phas­es. In fact, the auto­mat­ed lie-detec­tion sys­tem was mod­eled after anoth­er sys­tem cre­at­ed by some indi­vid­u­als from iBorderCtrl’s team, but it was only test­ed on 30 peo­ple. In this test, half of the peo­ple told the truth while the oth­er half lied to the vir­tu­al agent. It had about a 76 per­cent accu­ra­cy rate, and that doesn’t take into con­sid­er­a­tion the vari­ances in being told to lie ver­sus earnest­ly lying. “If you ask peo­ple to lie, they will do it dif­fer­ent­ly and show very dif­fer­ent behav­ioral cues than if they tru­ly lie, know­ing that they may go to jail or face seri­ous con­se­quences if caught,” Maja Pan­tic, a Pro­fes­sor of Affec­tive and Behav­ioral Com­put­ing at Impe­r­i­al Col­lege Lon­don, told New Sci­en­tist. “This is a known prob­lem in psy­chol­o­gy.”

    Kee­ley Crock­ett at Man­ches­ter Met­ro­pol­i­tan Uni­ver­si­ty, UK, and a mem­ber of the iBor­derC­trl team, said that they are “quite con­fi­dent” they can bring the accu­ra­cy rate up to 85 per­cent. But more than 700 mil­lion peo­ple trav­el through the EU every year, accord­ing to the Euro­pean Com­mis­sion, so that per­cent­age would still lead to a trou­bling num­ber of misiden­ti­fied “liars” if the sys­tem were rolled out EU-wide.

    It’s slight­ly reas­sur­ing that the program—which cost the EU a lit­tle more than $5 million—is only being imple­ment­ed in select coun­tries in a lim­it­ed tri­al peri­od. It is cru­cial for such a sys­tem to col­lect as much train­ing data as pos­si­ble, from as diverse a pool of trav­el­ers as pos­si­ble. But sys­tems depen­dent on machine learn­ing, espe­cial­ly ones involv­ing facial recog­ni­tion tech­nol­o­gy, are to date still very flawed and deeply biased. At a time when cross­ing bor­ders is already con­tentious and unfair­ly biased, throw­ing a par­tial, imper­fect “agent” into the mix rais­es some jus­ti­fied con­cerns.

    ———-

    “An AI Lie Detec­tor Is Going to Start Ques­tion­ing Trav­el­ers in the EU” by Melanie Ehrenkranz; Giz­mo­do; 10/31/2018

    In Hun­gary, Latvia, and Greece, trav­el­ers will be giv­en an auto­mat­ed lie-detec­tion test—by an ani­mat­ed AI bor­der agent. The sys­tem, called iBor­derC­trl, is part of a six-month pilot led by the Hun­gar­i­an Nation­al Police at four dif­fer­ent bor­der cross­ing points.”

    Smile! You’re on can­did cam­era! A cam­era that judges you and, even­tu­al­ly, may con­trol your fate. So try to be con­vinc­ing­ly can­did. And be sure you aren’t acci­den­tal­ly exud­ing a lack of can­dor via one of the many micro-ges­tures it’s watch­ing for:

    ...
    The vir­tu­al bor­der con­trol agent will ask trav­el­ers ques­tions after they’ve passed through the check­point. Ques­tions include, “What’s in your suit­case?” and “If you open the suit­case and show me what is inside, will it con­firm that your answers were true?” accord­ing to New Sci­en­tist. The sys­tem report­ed­ly records trav­el­ers’ faces using AI to ana­lyze 38 micro-ges­tures, scor­ing each response. The vir­tu­al agent is report­ed­ly cus­tomized accord­ing to the traveler’s gen­der, eth­nic­i­ty, and lan­guage.

    For trav­el­ers who pass the test, they will receive a QR code that lets them through the bor­der. If they don’t, the vir­tu­al agent will report­ed­ly get more seri­ous, and the trav­el­er will be hand­ed off to a human agent who will ass­es their report. But, accord­ing to the New Sci­en­tist, this pilot pro­gram won’t, in its cur­rent state, pre­vent anyone’s abil­i­ty to cross the bor­der.
    ...

    So based on a test con­sist­ing of 30 peo­ple that showed a 76 per­cent accu­ra­cy rate, the EU decid­ed that it’s fine to give it a try on the pub­lic at large at four Hun­gar­i­an cross­ing points for six months. In Hun­gary, the most open­ly author­i­tar­i­an gov­ern­ment in the EU. And they’re quite con­fi­dent that they can bring the accu­ra­cy up to 85 per­cent, which is still a pub­lic dis­as­ter. And this isn’t just a Hun­gar­i­an project. It’s an EU project. That’s not super omi­nous or any­thing:

    ...
    This is because the pro­gram is very much in the exper­i­men­tal phas­es. In fact, the auto­mat­ed lie-detec­tion sys­tem was mod­eled after anoth­er sys­tem cre­at­ed by some indi­vid­u­als from iBorderCtrl’s team, but it was only test­ed on 30 peo­ple. In this test, half of the peo­ple told the truth while the oth­er half lied to the vir­tu­al agent. It had about a 76 per­cent accu­ra­cy rate, and that doesn’t take into con­sid­er­a­tion the vari­ances in being told to lie ver­sus earnest­ly lying. “If you ask peo­ple to lie, they will do it dif­fer­ent­ly and show very dif­fer­ent behav­ioral cues than if they tru­ly lie, know­ing that they may go to jail or face seri­ous con­se­quences if caught,” Maja Pan­tic, a Pro­fes­sor of Affec­tive and Behav­ioral Com­put­ing at Impe­r­i­al Col­lege Lon­don, told New Sci­en­tist. “This is a known prob­lem in psy­chol­o­gy.”

    Kee­ley Crock­ett at Man­ches­ter Met­ro­pol­i­tan Uni­ver­si­ty, UK, and a mem­ber of the iBor­derC­trl team, said that they are “quite con­fi­dent” they can bring the accu­ra­cy rate up to 85 per­cent. But more than 700 mil­lion peo­ple trav­el through the EU every year, accord­ing to the Euro­pean Com­mis­sion, so that per­cent­age would still lead to a trou­bling num­ber of misiden­ti­fied “liars” if the sys­tem were rolled out EU-wide.
    ...

    So this is prob­a­bly a good time to recall that facial recog­ni­tion tech­nol­o­gy AI have been found to pro­duce gen­uine­ly racist results because the algo­rithms don’t work as well on peo­ple of col­or. And giv­en the 30 per­son test run, it seems high­ly unlike­ly they deter­mined whether or not iBor­derC­trl is racist or oth­er­wise prej­u­diced. With 38 micro-cues they don’t exact­ly have great sta­tis­ti­cal pow­er with 30 peo­ple to test much of any­thing. And with Vik­tor Orban’s gov­ern­ment admin­is­ter­ing the pilot, it’s like­ly they would be test­ing to ensure it’s racist:

    ...
    It’s slight­ly reas­sur­ing that the program—which cost the EU a lit­tle more than $5 million—is only being imple­ment­ed in select coun­tries in a lim­it­ed tri­al peri­od. It is cru­cial for such a sys­tem to col­lect as much train­ing data as pos­si­ble, from as diverse a pool of trav­el­ers as pos­si­ble. But sys­tems depen­dent on machine learn­ing, espe­cial­ly ones involv­ing facial recog­ni­tion tech­nol­o­gy, are to date still very flawed and deeply biased. At a time when cross­ing bor­ders is already con­tentious and unfair­ly biased, throw­ing a par­tial, imper­fect “agent” into the mix rais­es some jus­ti­fied con­cerns.
    ...

    And keep in mind that the peo­ple flagged as lying will pre­sum­ably have their bod­ies and/or lug­gage searched by humans if the AI deter­mines they’re a pos­si­ble ter­ror­ist or some­thing. So even though peo­ple aren’t going be pre­vent­ed from cross­ing the bor­der dur­ing this pilot based on the AI’s deter­mi­na­tion, it does sound like the AI will be deter­min­ing who gets per­son­al­ly searched by human guards. Which is pret­ty damn inva­sive.

    Although as the fol­low­ing arti­cle describes, it sounds like the human bor­der guards are going to first decide if some­one it low risk — which will give them a low risk AI lie detec­tor quiz — or high­er-risk, which will get them a more detailed AI quiz. And these will be Vik­tor Orban’s bor­der guards, so we can be pret­ty sure minori­ties will be the ones select­ed for iBor­derC­trl in ‘high risk’-mode:

    Geek.com

    AI Lie Detec­tor to Screen Trav­el­ers at Some EU Bor­ders

    By Stephanie Mlot
    11.01.2018 :: 10:06AM EST

    Three ports in the Euro­pean Union are tak­ing bor­der con­trol to the next lev­el.

    Trav­el­ers to Hun­gary, Latvia, and Greece will be giv­en a lie detec­tor test—conducted by com­put­er-ani­mat­ed bor­der agent.

    The sys­tem, dubbed iBor­derC­trl, is part of a six-month pilot led by the EU and Hun­gar­i­an Nation­al Police.

    ...

    Trav­el­ers must upload pho­tos of their pass­port, visa, and proof of funds to an online appli­ca­tion, then use a web­cam to answer ques­tions from an AI guard—personalized to the individual’s gen­der, eth­nic­i­ty, and lan­guage.

    “The unique approach to ‘decep­tion detec­tion’ ana­lyzes the micro-expres­sions of trav­el­ers to fig­ure out if the inter­vie­wee is lying,” accord­ing to the Euro­pean Com­mis­sion.

    Once at the bor­der, folks flagged as low risk go through a short re-eval­u­a­tion before entry; high­er-risk pas­sen­gers under­go a more detailed check. Offi­cials use a hand-held iBor­derC­trl device to cross-check doc­u­ments and per­form fin­ger­print­ing, palm vein scan­ning, and facial match­ing.

    Only after pos­si­ble risks have been recal­cu­lat­ed does a human bor­der guard take over from the auto­mat­ed sys­tem.

    “The glob­al mar­itime and bor­der secu­ri­ty mar­ket is grow­ing fast in light of the alarm­ing ter­ror threats and increas­ing ter­ror attacks tak­ing place on Euro­pean Union soil, and the migra­tion cri­sis,” Boul­tadakis said.

    Par­tic­i­pants hope upcom­ing tri­als in Hun­gary, Greece, and Latvia will prove the effi­cien­cy and reli­a­bil­i­ty of arti­fi­cial intel­li­gence in IDing poten­tial­ly dan­ger­ous trav­el­ers.

    ———-

    “AI Lie Detec­tor to Screen Trav­el­ers at Some EU Bor­ders” by Stephanie Mlot; Geek.com; 11/01/2018

    ““The glob­al mar­itime and bor­der secu­ri­ty mar­ket is grow­ing fast in light of the alarm­ing ter­ror threats and increas­ing ter­ror attacks tak­ing place on Euro­pean Union soil, and the migra­tion cri­sis,” Boul­tadakis said.”

    Find­ing ter­ror­ists. That’s what they’re plan­ning on mar­ket­ing this for. And not just in the EU. The glob­al mar­itime and bor­der secu­ri­ty mar­ket is what they have in mind for iBor­derC­trl. So when the Hun­gar­i­an bor­der guards pick out the ‘high­er risk’ peo­ple for the more detailed AI screen­ings, that’s that includes the AI pos­si­bly deter­min­ing that you’re at high­er risk of being a ter­ror­ist.

    Also keep in mind that one of the biggest built in bias­es in this sys­tem is like­ly going to include gen­er­al infor­ma­tion the gov­ern­ment has you per­son­al­ly. Your pub­lic record could be fed into the AI’s lie detect­ing algo­rithm:

    ...
    Once at the bor­der, folks flagged as low risk go through a short re-eval­u­a­tion before entry; high­er-risk pas­sen­gers under­go a more detailed check. Offi­cials use a hand-held iBor­derC­trl device to cross-check doc­u­ments and per­form fin­ger­print­ing, palm vein scan­ning, and facial match­ing.

    Only after pos­si­ble risks have been recal­cu­lat­ed does a human bor­der guard take over from the auto­mat­ed sys­tem.

    ...

    Par­tic­i­pants hope upcom­ing tri­als in Hun­gary, Greece, and Latvia will prove the effi­cien­cy and reli­a­bil­i­ty of arti­fi­cial intel­li­gence in IDing poten­tial­ly dan­ger­ous trav­el­ers.

    Don’t for­get that the “par­tic­i­pants” in this tri­al include the EU. The EU wants to prove this tech­nol­o­gy works and export it. So get ready for the pop­u­lar new sport of not piss­ing off the AI. Because this tech­nol­o­gy could eas­i­ly end up being used all over the place. Espe­cial­ly if it gets real­ly effec­tive. Not 76 or 85 per­cent effec­tive.

    Who knows, future sys­tems could go beyond facial scan­ning. Brain­wave scan­ning, per­haps? Elon Musk’s Neu­ralink brain-to-com­put­er inter­face? Again, the EU just backed a six month pilot pro­gram after a 30 per­son tri­al with Vik­tor Orban’s gov­ern­ment in charge. We’re already in ‘dystopi­an future is now’ ter­ri­to­ry here so it’s not like we can rule these sce­nar­ios out.

    Also keep in mind that if the facial recog­ni­tion lie detec­tion tech­nol­o­gy gets devel­oped and com­mer­cial­ized, it’s not like there’s any­thing stop­ping any­one else from sell­ing it to the pub­lic even­tu­al­ly. Why not have a lie detec­tor app for your smart­phone? What’s to stop that? We could lit­er­al­ly soon end up in a real­i­ty were almost every­one with a smart­phone can run a lie detec­tor test against every­one else. And this could hap­pen at any day now. Every­one will sud­den­ly start get­ting scru­ti­nized by an array of lie detec­tors. So get ready for a lot more video chat requests that include a lot of odd, rather prob­ing ques­tions.

    Also get ready for end­less lie detect­ing analy­sis of pub­lic fig­ures. Espe­cial­ly politi­cians. They’re the nat­ur­al prime tar­gets for this tech­nol­o­gy. Which is part of what’s going to make this aspect of our dystopi­an ‘lie detec­tors for every­one’ future so bizarre. The author­i­tar­i­an auto­crats that will love abus­ing this tech­nol­o­gy the most are going to be the most vul­ner­a­ble. Unless, of course, they’re smooth enough to trick the detec­tors. Or insane enough to believe their own lies. So let’s hope this tech­nol­o­gy does­n’t select for politi­cians who can trick the lie detec­tors of the future. Politi­cians who are extra skilled liars and/or insane. The insane lying politi­cian sit­u­a­tion is dystopi­an enough already.

    Posted by Pterrafractyl | November 17, 2018, 12:30 am
  13. Fol­low­ing up on the recent reports about the EU test­ing the creepy AI-dri­ven iBor­derC­trl lie detec­tor sys­tems at EU bor­der cross­ings, here’s a report from back in May that’s a reminder that EU cit­i­zens aren’t the only ones who can expect this kind of tech­nol­o­gy to be rolled out any day now. It turns out the US Depart­ment of Home­land Secu­ri­ty test­ed out a sim­i­lar sys­tem on the US bor­der with Mex­i­co back in 2011–2012. DHS con­clud­ed the tech­nol­o­gy was appeal­ing, but it was not seen as mature enough for fur­ther devel­op­ment.

    That assess­ment has clear­ly changed and now that lie detec­tor tech­nol­o­gy, dubbed AVATAR (Auto­mat­ed Vir­tu­al Agent for Truth Assess­ments in Real-Time), has since been test­ed by the Cana­di­an and EU too and its deploy­ment in the US is seen as just a mat­ter of time.

    One par­tic­u­lar area where it’s expect­ed to be used is the ques­tion­ing and pro­cess­ing of refugees seek­ing asy­lum sta­tus. Air­port secu­ri­ty is anoth­er pos­si­ble use. But the peo­ple behind AVATAR don’t see it as exclu­sive­ly use­ful for gov­ern­ment ser­vices. For instance, cor­po­rate human resources is seen as one pos­si­ble use. So in addi­tion to hav­ing to con­vince a AI at the air­port that you aren’t a ter­ror­ist, you’re also going to have to con­vince the AI at work that you aren’t steal­ing from the reg­is­ter:

    CNBC

    Lie-detect­ing com­put­er kiosks equipped with arti­fi­cial intel­li­gence look like the future of bor­der secu­ri­ty

    A vir­tu­al bor­der agent kiosk was devel­oped to inter­view trav­el­ers at air­ports and bor­der cross­ings and it can detect decep­tion to flag human secu­ri­ty agents.
    The U.S., Cana­da and Euro­pean Union have test­ed the tech­nol­o­gy, and one researcher says it has a decep­tion detec­tion suc­cess rate of up to 80 per­cent — bet­ter than human agents.
    The tech­nol­o­gy relies on sen­sors and bio­met­rics, and its lie-detec­tion capa­bil­i­ties are based on eye move­ments or changes in voice, pos­ture and facial ges­tures.

    Jeff Daniels
    Pub­lished 8:17 AM ET Tue, 15 May 2018 Updat­ed 2:25 PM ET Tue, 15 May 2018

    Inter­na­tion­al trav­el­ers could find them­selves in the near future talk­ing to a lie-detect­ing kiosk when they’re going through cus­toms at an air­port or bor­der cross­ing.

    The same tech­nol­o­gy could be used to pro­vide ini­tial screen­ing of refugees and asy­lum seek­ers at busy bor­der cross­ings.

    The U.S. Depart­ment of Home­land Secu­ri­ty fund­ed research of the vir­tu­al bor­der agent tech­nol­o­gy known as the Auto­mat­ed Vir­tu­al Agent for Truth Assess­ments in Real-Time, or AVATAR, about six years ago and allowed it to be test­ed it at the U.S.-Mexico bor­der on trav­el­ers who vol­un­teered to par­tic­i­pate. Since then, Cana­da and the Euro­pean Union test­ed the robot-like kiosk that uses a vir­tu­al agent to ask trav­el­ers a series of ques­tions.

    Last month, a car­a­van of migrants from Cen­tral Amer­i­ca made it to the U.S.-Mexico bor­der, where they sought asy­lum but were delayed sev­er­al days because the port of entry near San Diego had reached full capac­i­ty. It’s pos­si­ble that a sys­tem such as AVATAR could pro­vide ini­tial screen­ing of asy­lum seek­ers and oth­ers to help U.S. agents at busy bor­der cross­ings such as San Diego’s San Ysidro.

    “The tech­nol­o­gy has much broad­er appli­ca­tions poten­tial­ly,” despite most of the fund­ing for the orig­i­nal work com­ing pri­mar­i­ly from the Defense or Home­land Secu­ri­ty depart­ments a decade ago, accord­ing to Aaron Elkins, one of the devel­op­ers of the sys­tem and an assis­tant pro­fes­sor at the San Diego State Uni­ver­si­ty direc­tor of its Arti­fi­cial Intel­li­gence Lab. He added that AVATAR is not a com­mer­cial prod­uct yet but could be also used in human resources for screen­ing.

    The U.S.-Mexico bor­der tri­als with the advanced kiosk took place in Nogales, Ari­zona, and focused on low-risk trav­el­ers. The research team behind the sys­tem issued a report after the 2011-12 tri­als that stat­ed the AVATAR tech­nol­o­gy had poten­tial uses for pro­cess­ing appli­ca­tions for cit­i­zen­ship, asy­lum and refugee sta­tus and to reduce back­logs.

    High lev­els of accu­ra­cy

    Pres­i­dent Don­ald Trump’s fis­cal 2019 bud­get request for Home­land Secu­ri­ty includes $223 mil­lion for “high-pri­or­i­ty infra­struc­ture, bor­der secu­ri­ty tech­nol­o­gy improve­ments,” as well as anoth­er $210.5 mil­lion for hir­ing new bor­der agents. Last year, fed­er­al work­ers inter­viewed or screened more than 46,000 refugee appli­cants and processed near­ly 80,000 “cred­i­ble fear cas­es.”

    The AVATAR com­bines arti­fi­cial intel­li­gence with var­i­ous sen­sors and bio­met­rics that seeks to flag indi­vid­u­als who are untruth­ful or a poten­tial risk based on eye move­ments or changes in voice, pos­ture and facial ges­tures.

    “We’re always con­sis­tent­ly above human accu­ra­cy,” said Elkins, who worked on the tech­nol­o­gy with a team of researchers that includ­ed the Uni­ver­si­ty of Ari­zona.

    Accord­ing to Elkins, the AVATAR as a decep­tion-detec­tion judge has a suc­cess rate of 60 to 75 per­cent and some­times up to 80 per­cent.

    “Gen­er­al­ly, the accu­ra­cy of humans as judges is about 54 to 60 per­cent at the most,” he said. “And that’s at our best days. We’re not con­sis­tent.”

    The human ele­ment

    Regard­less, Home­land Secu­ri­ty appears to be stick­ing with human agents for the moment and not embrac­ing vir­tu­al tech­nol­o­gy that the EU and Cana­di­an bor­der agen­cies are still research­ing. Anoth­er advanced bor­der tech­nol­o­gy, known as iBor­derC­trl, is a EU-fund­ed project that aims to increase speed but also reduce “the work­load and sub­jec­tive errors caused by human agents.”

    A Home­land Secu­ri­ty offi­cial, who declined to be named, told CNBC the con­cept for the AVATAR sys­tem “was envi­sioned by researchers to assist human screen­ers by flag­ging peo­ple exhibit­ing sus­pi­cious or anom­alous behav­ior.”

    “As the research effort matured, the sys­tem was eval­u­at­ed and test­ed by the DHS Sci­ence and Tech­nol­o­gy Direc­torate and DHS oper­a­tional com­po­nents in 2012,” the offi­cial added. “Although the con­cept was appeal­ing at the time, the research did not mature enough for fur­ther con­sid­er­a­tion or fur­ther devel­op­ment.”

    Anoth­er DHS offi­cial famil­iar with the tech­nol­o­gy did­n’t work at a high enough rate of speed to be prac­ti­cal. “We have to screen peo­ple with­in sec­onds, and we can’t take min­utes to do it,” said the offi­cial.

    Elkins, mean­while, said the fund­ing for the AVATAR sys­tem has­n’t come from Home­land Secu­ri­ty in recent years “because they sort of felt that this is in a dif­fer­ent cat­e­go­ry now and needs to tran­si­tion.”

    The tech­nol­o­gy, which relies on advanced sta­tis­tics and machine learn­ing, was test­ed a year and a half ago with the Cana­di­an Bor­der Ser­vices Agency, or CBSA, to help agents deter­mine whether a trav­el­er has ulte­ri­or motives enter­ing the coun­try and should be ques­tioned fur­ther or denied entry.

    A report from the CBSA on the AVATAR tech­nol­o­gy is said to be immi­nent, but it’s unclear whether the agency will pro­ceed the tech­nol­o­gy beyond the test­ing phase.

    “The CBSA has been fol­low­ing devel­op­ments in AVATAR tech­nol­o­gy since 2011 and is con­tin­u­ing to mon­i­tor devel­op­ments in this field,” said Barre Camp­bell, a senior spokesman for the Cana­di­an agency. He said the work car­ried out in March 2016 was “an inter­nal-only exper­i­ment of AVATAR” and that “analy­sis for this tech­nol­o­gy is ongo­ing.”

    Pri­or to that, the EU bor­der agency known as Fron­tex helped coor­di­nate and spon­sor a field test of the AVATAR sys­tem in 2014 at the inter­na­tion­al arrivals sec­tion of an air­port in Bucharest, Roma­nia.

    Peo­ple and machines work­ing togeth­er

    Once the sys­tem detects decep­tion, it alerts the human agents to do fol­low-up inter­views.

    AVATAR does­n’t use your stan­dard poly­graph instru­ment. Instead, peo­ple face a kiosk screen and talk to a vir­tu­al agent or kiosk fit­ted with var­i­ous sen­sors and bio­met­rics that seeks to flag indi­vid­u­als who are untruth­ful or sig­nal a poten­tial risk based on eye move­ments or changes in voice, pos­ture and facial ges­tures.

    “Arti­fi­cial intel­li­gence has allowed us to use sen­sors that are non­con­tact that we can then process the sig­nal in real­ly advanced ways,” Elkins said. “We’re able to teach com­put­ers to learn from some data and actu­al­ly act intel­li­gent­ly. The sci­ence is very mature over the last five or six years.”

    But the researcher insists the AVATAR tech­nol­o­gy was­n’t devel­oped as a replace­ment for peo­ple.

    ...

    Still, future advance­ment in arti­fi­cial intel­li­gence sys­tems may allow the tech­nol­o­gy to some­day sup­plant var­i­ous human jobs because the robot-like machines may be seen as more pro­duc­tive and cost effec­tive par­tic­u­lar­ly in screen­ing peo­ple.

    Elkins believes the AVATAR could poten­tial­ly get used one day at secu­ri­ty check­points at air­ports “to make the screen­ing process faster but also to improve the accu­ra­cy.”

    “It’s just a mat­ter of find­ing the right imple­men­ta­tion of where it will be and how it will be used,” he said. “There’s also a process that would need to occur because you can’t just drop the AVATAR into an air­port as it exists now because all that would be using an extra step.”

    ———-

    “Lie-detect­ing com­put­er kiosks equipped with arti­fi­cial intel­li­gence look like the future of bor­der secu­ri­ty” by Jeff Daniels; CNBC; 05/15/2018

    “Inter­na­tion­al trav­el­ers could find them­selves in the near future talk­ing to a lie-detect­ing kiosk when they’re going through cus­toms at an air­port or bor­der cross­ing.”

    This tech­nol­o­gy is appar­ent­ly seen as devel­oped enough that it could be in use at air­ports “in the near future” in the Unit­ed States. And yet the accu­ra­cy rate is still only around 60–75 per­cent, sim­i­lar to the EU’s iBor­derC­trl accu­ra­cy. What jus­ti­fi­ca­tion is there for using a lie detec­tor sys­tem with such a low accu­ra­cy rate? It’s still bet­ter than humans, who only have around a 54–60 per­cent lie detec­tion accu­ra­cy rate. So that’s how low the bar is: if an AI lie detec­tor sys­tem can beat about a 60 per­cent accu­ra­cy rate, it’s seen as good enough for pub­lic use:

    ...
    High lev­els of accu­ra­cy

    Pres­i­dent Don­ald Trump’s fis­cal 2019 bud­get request for Home­land Secu­ri­ty includes $223 mil­lion for “high-pri­or­i­ty infra­struc­ture, bor­der secu­ri­ty tech­nol­o­gy improve­ments,” as well as anoth­er $210.5 mil­lion for hir­ing new bor­der agents. Last year, fed­er­al work­ers inter­viewed or screened more than 46,000 refugee appli­cants and processed near­ly 80,000 “cred­i­ble fear cas­es.”

    The AVATAR com­bines arti­fi­cial intel­li­gence with var­i­ous sen­sors and bio­met­rics that seeks to flag indi­vid­u­als who are untruth­ful or a poten­tial risk based on eye move­ments or changes in voice, pos­ture and facial ges­tures.

    “We’re always con­sis­tent­ly above human accu­ra­cy,” said Elkins, who worked on the tech­nol­o­gy with a team of researchers that includ­ed the Uni­ver­si­ty of Ari­zona.

    Accord­ing to Elkins, the AVATAR as a decep­tion-detec­tion judge has a suc­cess rate of 60 to 75 per­cent and some­times up to 80 per­cent.

    “Gen­er­al­ly, the accu­ra­cy of humans as judges is about 54 to 60 per­cent at the most,” he said. “And that’s at our best days. We’re not con­sis­tent.”
    ...

    Keep in mind that while these sys­tems might be bet­ter at detect­ing liars than humans are, humans also aren’t screen­ing every­one with a series of lie detec­tor ques­tions at most air­ports. Where­as it sounds like the vision is for these AI lie detec­tors to screen every­one. So while being more accu­rate than humans might sound like a pos­i­tive rea­son for using these tech­nolo­gies, if many many more peo­ple end up get­ting screened by these sys­tems we should still expect a mas­sive increase in the num­ber of ‘false pos­i­tives’.

    It’s also rather dis­turb­ing that DHS con­clud­ed six years ago that the tech­nol­o­gy was so far from matu­ri­ty that it was­n’t worth fur­ther devel­op­ing, and yet it’s clear­ly been fur­ther devel­oped. And it sounds like DHS has­n’t been the ones doing that fur­ther devel­op­ment:

    ...
    The U.S. Depart­ment of Home­land Secu­ri­ty fund­ed research of the vir­tu­al bor­der agent tech­nol­o­gy known as the Auto­mat­ed Vir­tu­al Agent for Truth Assess­ments in Real-Time, or AVATAR, about six years ago and allowed it to be test­ed it at the U.S.-Mexico bor­der on trav­el­ers who vol­un­teered to par­tic­i­pate. Since then, Cana­da and the Euro­pean Union test­ed the robot-like kiosk that uses a vir­tu­al agent to ask trav­el­ers a series of ques­tions.

    ...

    A Home­land Secu­ri­ty offi­cial, who declined to be named, told CNBC the con­cept for the AVATAR sys­tem “was envi­sioned by researchers to assist human screen­ers by flag­ging peo­ple exhibit­ing sus­pi­cious or anom­alous behav­ior.”

    “As the research effort matured, the sys­tem was eval­u­at­ed and test­ed by the DHS Sci­ence and Tech­nol­o­gy Direc­torate and DHS oper­a­tional com­po­nents in 2012,” the offi­cial added. “Although the con­cept was appeal­ing at the time, the research did not mature enough for fur­ther con­sid­er­a­tion or fur­ther devel­op­ment.”

    ...

    Elkins, mean­while, said the fund­ing for the AVATAR sys­tem has­n’t come from Home­land Secu­ri­ty in recent years “because they sort of felt that this is in a dif­fer­ent cat­e­go­ry now and needs to tran­si­tion.”
    ...

    And note one of the fea­tures DHS offi­cials said they need­ed from an AI lie detec­tor sys­tem that the tech­nol­o­gy at the time could­n’t pro­vide: the abil­i­ty to screen peo­ple in sec­onds, not min­utes. And that rais­es the ques­tion of whether or not the devel­op­ers of these AI lie detec­tors are under pres­sure to devel­op sys­tems that can actu­al­ly make these assess­ments in mere sec­onds and what kind of sac­ri­fices in accu­ra­cy are going to have to be made. Don’t for­get, as long as these sys­tems are bet­ter than humans’ 54–60 per­cent accu­ra­cy rate they’re appar­ent­ly seen as accept­able for using on the pub­lic. So if sac­ri­fices to accu­ra­cy are required to make the sys­tems faster that might still be seen as a rea­son­able trade off:

    ...
    Anoth­er DHS offi­cial famil­iar with the tech­nol­o­gy did­n’t work at a high enough rate of speed to be prac­ti­cal. “We have to screen peo­ple with­in sec­onds, and we can’t take min­utes to do it,” said the offi­cial.
    ...

    So we’ll see what kind of com­pro­mise between speed and accu­ra­cy pub­lic offi­cials come up with, but it’s not just US offi­cials mak­ing these assess­ments. Cana­da and the EU have been test­ing this same AVATAR sys­tem too:

    ...
    The tech­nol­o­gy, which relies on advanced sta­tis­tics and machine learn­ing, was test­ed a year and a half ago with the Cana­di­an Bor­der Ser­vices Agency, or CBSA, to help agents deter­mine whether a trav­el­er has ulte­ri­or motives enter­ing the coun­try and should be ques­tioned fur­ther or denied entry.

    A report from the CBSA on the AVATAR tech­nol­o­gy is said to be immi­nent, but it’s unclear whether the agency will pro­ceed the tech­nol­o­gy beyond the test­ing phase.

    “The CBSA has been fol­low­ing devel­op­ments in AVATAR tech­nol­o­gy since 2011 and is con­tin­u­ing to mon­i­tor devel­op­ments in this field,” said Barre Camp­bell, a senior spokesman for the Cana­di­an agency. He said the work car­ried out in March 2016 was “an inter­nal-only exper­i­ment of AVATAR” and that “analy­sis for this tech­nol­o­gy is ongo­ing.”

    Pri­or to that, the EU bor­der agency known as Fron­tex helped coor­di­nate and spon­sor a field test of the AVATAR sys­tem in 2014 at the inter­na­tion­al arrivals sec­tion of an air­port in Bucharest, Roma­nia.
    ...

    So if the EU’s iBor­derC­trl pilot run does­n’t yield the kinds of results offi­cials are look­ing for, there’s always the AVATAR sys­tem for fall back on. Although it’s pos­si­ble iBor­derC­trl is based on the AVATAR tech­nol­o­gy. It’s unclear.

    But one way or anoth­er, this tech­nol­o­gy is com­ing to the pub­lic because it’s accu­rate and poten­tial­ly faster than humans. It’s just a ques­tion of find­ing the right imple­men­ta­tion of where and how it will be used. At least that’s how the design­ers see it:

    ...
    Still, future advance­ment in arti­fi­cial intel­li­gence sys­tems may allow the tech­nol­o­gy to some­day sup­plant var­i­ous human jobs because the robot-like machines may be seen as more pro­duc­tive and cost effec­tive par­tic­u­lar­ly in screen­ing peo­ple.

    Elkins believes the AVATAR could poten­tial­ly get used one day at secu­ri­ty check­points at air­ports “to make the screen­ing process faster but also to improve the accu­ra­cy.”

    “It’s just a mat­ter of find­ing the right imple­men­ta­tion of where it will be and how it will be used,” he said. “There’s also a process that would need to occur because you can’t just drop the AVATAR into an air­port as it exists now because all that would be using an extra step.”
    ...

    And beyond the use for tasks like screen­ing asy­lum seek­ers, the devel­op­ers see uses in areas like cor­po­rate human resources. Job inter­views are about to get rather prob­ing:

    ...
    Last month, a car­a­van of migrants from Cen­tral Amer­i­ca made it to the U.S.-Mexico bor­der, where they sought asy­lum but were delayed sev­er­al days because the port of entry near San Diego had reached full capac­i­ty. It’s pos­si­ble that a sys­tem such as AVATAR could pro­vide ini­tial screen­ing of asy­lum seek­ers and oth­ers to help U.S. agents at busy bor­der cross­ings such as San Diego’s San Ysidro.

    “The tech­nol­o­gy has much broad­er appli­ca­tions poten­tial­ly,” despite most of the fund­ing for the orig­i­nal work com­ing pri­mar­i­ly from the Defense or Home­land Secu­ri­ty depart­ments a decade ago, accord­ing to Aaron Elkins, one of the devel­op­ers of the sys­tem and an assis­tant pro­fes­sor at the San Diego State Uni­ver­si­ty direc­tor of its Arti­fi­cial Intel­li­gence Lab. He added that AVATAR is not a com­mer­cial prod­uct yet but could be also used in human resources for screen­ing.
    ...

    So it looks like we’re going to have a new class of unem­ploy­able peo­ple: indi­vid­u­als who, for what­ev­er inno­cent rea­son, nat­u­ral­ly trig­ger cor­po­rate lie detec­tor sys­tems.

    Anoth­er inter­est­ing ques­tion relat­ed to the poten­tial appli­ca­tion of this tech­nol­o­gy for screen­ing asy­lum seek­ers is what’s going to hap­pen, polit­i­cal­ly, if such a sys­tem is employed and it’s revealed that, yes, the vast, vast major­i­ty of asy­lum seek­ers are indeed fac­ing death threats and oth­er extreme dan­gers in their home coun­tries. Because cur­rent­ly in the US one of the pri­ma­ry argu­ments we hear from the right-wing over why the US should view ‘the car­a­van’ of Cen­tral Amer­i­can migrants with fear and trep­i­da­tion is that it’s actu­al­ly filled with crim­i­nals, ter­ror­ists, and peo­ple who aren’t fac­ing dan­gers and mere­ly want to come to the US to get wel­fare ben­e­fits and ille­gal­ly vote in elec­tions. That was seri­ous­ly one of the pri­ma­ry GOP nar­ra­tives in the final stretch of the 2018 mid-terms.

    So what’s going to hap­pen if coun­tries start deploy­ing tech­nol­o­gy that can osten­si­bly deter­mine whether or not some­one is tru­ly in need of asy­lum? Don’t for­get that waves are asy­lum seek­ers are going to be increas­ing­ly the norm around the globe as cli­mate change con­tin­ues to fuel con­flicts and make coun­tries unliv­able. So hav­ing a sys­tem that can be mass deployed in the event of a mass migra­tion sit­u­a­tion real­ly is going to be a capa­ble coun­tries are going to want to have on hand...unless they don’t want to accept the asy­lum seek­ers.

    And a large num­ber of peo­ple in these coun­tries aren’t going to like it when these migrants ‘pass’ the asy­lum ‘quiz’. In the US, that group of peo­ple who would real­ly pre­fer that asy­lum seek­ers don’t pass any sort of asy­lum request lie detec­tor test cur­rent­ly includes the pres­i­dent and pret­ty much the entire GOP. More gen­er­al­ly, com­ing up with excus­es not to help peo­ple in need is going to be one of the biggest focus­es of the right-wing pol­i­tics for the fore­see­able future thanks to the glob­al chaos that’s emerg­ing from cli­mate change. Glob­al chaos that’s only going to get worse. There’s going to be a lot more ‘car­a­vans’ of des­per­ate peo­ple as the col­lapse of the ecosys­tem kicks into over­drive.

    And all that is part of what’s going to make the deploy­ment of this kind of tech­nol­o­gy for tasks like asy­lum seek­er lie detec­tion so grim­ly inter­est­ing to watch play out. Because it sounds like this tech­nol­o­gy could be deployed soon. Like, with­in the time frame of the Trump admin­is­tra­tion. And as creepy as it is to imag­ine a world where the Trump admin­is­tra­tions and cor­po­ra­tions are mass deploy­ing lie detec­tor tech­nol­o­gy, let’s not for­get that we live a world run by peo­ple who would real­ly pre­fer many lies are nev­er detect­ed. Like the right-wing lie that asy­lum seek­ers from Cen­tral Amer­i­ca have no legit­i­mate need for asy­lum. So how will the Trump admin­is­tra­tion han­dle this tech­nol­o­gy if it’s deemed to be ready for use with asy­lum seek­ers?

    Of course, there’s one obvi­ous solu­tion for politi­cians and move­ments that would like to use AI lie detec­tors but would rather not have polit­i­cal­ly con­ve­nient lies uncov­ered: cor­rupt the AI lie detec­tors. It could be as sim­ple as employ­ing racist AIs that sys­tem­at­i­cal­ly mis­trust the peo­ple peo­ple of col­or, or per­haps a lie detec­tor could be cor­rupt­ed regard­ing spe­cif­ic types of ques­tions. Who knows what kind of AI lie detec­tor cor­rup­tion will be avail­able, but if this tech­nol­o­gy gets sold to gov­ern­ments around the world we can be pret­ty sure there’s going to be mas­sive efforts into cor­rupt­ing them.

    As AI lie detec­tion tech­nol­o­gy gets more sophis­ti­cat­ed, the kinds of poten­tial cor­rup­tion of those lie detec­tors is only going to get more sophis­ti­cat­ed and nuanced. Don’t for­get that AIs can essen­tial­ly be algo­rith­mic ‘black box­es’, where humans can’t eas­i­ly inspect or make sense of how it’s actu­al­ly oper­at­ing and/or there’s no access to the inter­nal work­ings giv­en to out­sider par­ties. So inves­ti­gat­ing whether or not AI lie detec­tors have been cor­rupt­ed or con­tain bias­es could be extreme­ly dif­fi­cult to deter­mine. If you thought trust­ing elec­tron­ic vot­ing machines was dif­fi­cult, get ready for trust­ing the inner work­ings of the AI lie detec­tor.

    That all points towards anoth­er lie detec­tor tech­nol­o­gy that we should expect soon­er or lie: lie detec­tors for lie detec­tors. AIs that can inves­ti­gate anoth­er AI lie detec­tor and deter­mine whether or not it’s been cor­rupt­ed some­how. If mass use of AI lie detec­tors is going to be a part of the future, some sort of qual­i­ty con­trol of those AI lie detec­tors had bet­ter be part of that future.

    And hope­ful­ly the AI lie detec­tor lie detec­tors will also be able to detect cor­rup­tion in oth­er AI lie detec­tor lie detec­tors. Because those could get cor­rupt­ed too. Along with the AI lie detec­tor lie detec­tor lie detec­tors. As long humans are build­ing and admin­is­ter­ing these sys­tems it’s hard to see how human bias­es can be avoid­ed because even if the­o­ret­i­cal­ly unbi­ased lie detec­tion tech­nol­o­gy was devel­oped the pos­si­bil­i­ty that the tech­nol­o­gy was cor­rupt­ed by humans is always going to be there as long as humans are design­ing and admin­is­ter­ing them.

    And who knows, decades from now there could be AI lie detec­tors run that are, them­selves, kind of sen­tient. Have fun debug­ging one of those things.

    So that’s all one rea­son to let the robots take over: The AI lie detec­tors will prob­a­bly be less human biased when Skynet runs every­thing. Although pre­sum­ably robot biased.

    Posted by Pterrafractyl | November 18, 2018, 10:04 pm
  14. EDITORS’ PICK|298 views|Jun 5, 2020,08:27am EDT

    Pen­ta­gon Wants Cyborg Implant To Make Sol­diers Tougher
    David Ham­bling­Con­trib­u­tor
    Aero­space & Defense

    I’m a South Lon­don-based tech­nol­o­gy jour­nal­ist, con­sul­tant and author

    https://www.forbes.com/sites/davidhambling/2020/06/05/darpa-wants-cyborg-implant-to-make-soldiers-tougher/amp/

    DARPA, the Pentagon’s research arm, has long been explor­ing tech­nol­o­gy for ‘super sol­diers’ with extra­or­di­nary capa­bil­i­ties and to main­tain Peak Per­for­mance. Their lat­est effort, known as ADAPTER involves cyborg implants to tough­en sol­diers against two of the com­mon­est health issues in mod­ern war­fare: lim­it­ed access to safe food and water, and sleep dis­rup­tion. Each implant will be a minia­ture fac­to­ry full of bac­te­ria pro­duc­ing ther­a­peu­tic sub­stances on demand.

    Diar­rhea may be a minor incon­ve­nience to most trav­el­ers, but has a seri­ous impact on mil­i­tary oper­a­tions.

    Jet lag and sleep dis­rup­tion are dan­ger­ous in the mil­i­tary. Poor sleep decreas­es alert­ness and can cause dis­ori­en­ta­tion, the last things you want on the bat­tle­field. Sleep depri­va­tion affects marks­man­ship and degrades phys­i­cal strength. Sleep-deprived sol­diers is one of the com­mon­est caus­es of vehi­cle crash­es.

    The ADvanced Accli­ma­tion and Pro­tec­tion Tool for Envi­ron­men­tal Readi­ness (ADAPTER) project is described as a trav­el adapter for the human body. It will con­sist of devices implant­ed under the skin or ingest­ed and held in the gut to pro­tect warfight­ers from trav­el ail­ments. DARPA has issued a call for pro­pos­als from researchers for ‘bio­elec­tron­ic car­ri­ers that main­tain and release ther­a­pies that pro­vide warfight­ers con­trol over their own phys­i­ol­o­gy.’

    “You can imag­ine … you swal­low a large elec­tron­ic pill that opens up and hangs out in your stom­ach,” Dr. Paul Shee­han, ADAPTER pro­gram man­ag­er at DARPA’s Bio­log­i­cal Tech­nolo­gies Office, told Nation­al Defense mag­a­zine. “You can imag­ine a device like that that would also con­tain drugs or con­tain bac­te­ria that could pro­duce drugs, and so that when­ev­er you were wor­ried about unsafe food or water, you could sig­nal the device to … pro­duce the antibi­ot­ic.”

    DARPA calls this type of device a ther­a­peu­tic cel­lu­lar fac­to­ry, filled with bac­te­ria spe­cial­ly engi­neered to pro­duces the required drugs on demand.

    Posted by Roberto Maldonado | June 6, 2020, 12:13 pm

Post a comment