Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

News & Supplemental  

Terminator V: The machines want your job.

In a fun change of pace, we’re going to have a post that’s light on article excerpts and heavy on ranty linkiness. That might not actually be fun but it’s not like there’s a robot standing over your shoulder forcing you to read this. Yet:

ZeroHedge has a great recent post filled with reminders that state sovereignty movements and political/currency unions won’t necessarily help close the gap between the haves and have-nots if it’s the wealthiest regions that are moving for independence. Shared currencies and shared sovereignty don’t necessarily lead to a sharing of the burdens of running a civilization.

The massive strikes that shut down Foxconn’s iPhone production in China, on the other hand, could actually do quite a bit to help close that global gap. One of the fun realities of the massive shift of global manufacturing capacity into China is that a single group of workers could have a profound effect on global wages and working standards. The world had something similar to that a couple of decades ago in the form of the American middle class, but that group of workers acquired a taste for a particular flavor of kool-aid that unfortunately hasn’t proved to be conducive towards self-preservation.

The Foxconn strike also comes at a time when the rising labor costs of China’s massive labor force have been making a global impact on manufacturing costs. But with the Chinese manufacturing sector showing signs of slowdown and the IMF warning of a global slowdown and “domino effects” on the horizon, it’s important to keep in mind that the trend in Chinese wages can easily be reversed, and that could also have a global effect (it’s also worth noting that the IMF is kind of schizo when it comes to austerity and domino effects). Not that we needed a global slowdown for some form of recession-induced “austerity” to start impacting China’s workforce. The robots are coming, and they don’t really care about things like overtime:

NY Times
Skilled Work, Without the Worker
By JOHN MARKOFF
Published: August 18, 2012
DRACHTEN, the Netherlands — At the Philips Electronics factory on the coast of China, hundreds of workers use their hands and specialized tools to assemble electric shavers. That is the old way.

At a sister factory here in the Dutch countryside, 128 robot arms do the same work with yoga-like flexibility. Video cameras guide them through feats well beyond the capability of the most dexterous human.

One robot arm endlessly forms three perfect bends in two connector wires and slips them into holes almost too small for the eye to see. The arms work so fast that they must be enclosed in glass cages to prevent the people supervising them from being injured. And they do it all without a coffee break — three shifts a day, 365 days a year.

All told, the factory here has several dozen workers per shift, about a tenth as many as the plant in the Chinese city of Zhuhai.

This is the future. A new wave of robots, far more adept than those now commonly used by automakers and other heavy manufacturers, are replacing workers around the world in both manufacturing and distribution. Factories like the one here in the Netherlands are a striking counterpoint to those used by Apple and other consumer electronics giants, which employ hundreds of thousands of low-skilled workers.

“With these machines, we can make any consumer device in the world,” said Binne Visser, an electrical engineer who manages the Philips assembly line in Drachten.

Many industry executives and technology experts say Philips’s approach is gaining ground on Apple’s. Even as Foxconn, Apple’s iPhone manufacturer, continues to build new plants and hire thousands of additional workers to make smartphones, it plans to install more than a million robots within a few years to supplement its work force in China.

Foxconn has not disclosed how many workers will be displaced or when. But its chairman, Terry Gou, has publicly endorsed a growing use of robots. Speaking of his more than one million employees worldwide, he said in January, according to the official Xinhua news agency: “As human beings are also animals, to manage one million animals gives me a headache.”

The falling costs and growing sophistication of robots have touched off a renewed debate among economists and technologists over how quickly jobs will be lost. This year, Erik Brynjolfsson and Andrew McAfee, economists at the Massachusetts Institute of Technology, made the case for a rapid transformation. “The pace and scale of this encroachment into human skills is relatively recent and has profound economic implications,” they wrote in their book, “Race Against the Machine.”

In their minds, the advent of low-cost automation foretells changes on the scale of the revolution in agricultural technology over the last century, when farming employment in the United States fell from 40 percent of the work force to about 2 percent today. The analogy is not only to the industrialization of agriculture but also to the electrification of manufacturing in the past century, Mr. McAfee argues.

“At what point does the chain saw replace Paul Bunyan?” asked Mike Dennison, an executive at Flextronics, a manufacturer of consumer electronics products that is based in Silicon Valley and is increasingly automating assembly work. “There’s always a price point, and we’re very close to that point.”

Yet in the state-of-the-art plant, where the assembly line runs 24 hours a day, seven days a week, there are robots everywhere and few human workers. All of the heavy lifting and almost all of the precise work is done by robots that string together solar cells and seal them under glass. The human workers do things like trimming excess material, threading wires and screwing a handful of fasteners into a simple frame for each panel.

Such advances in manufacturing are also beginning to transform other sectors that employ millions of workers around the world. One is distribution, where robots that zoom at the speed of the world’s fastest sprinters can store, retrieve and pack goods for shipment far more efficiently than people. Robots could soon replace workers at companies like C & S Wholesale Grocers, the nation’s largest grocery distributor, which has already deployed robot technology.

Rapid improvement in vision and touch technologies is putting a wide array of manual jobs within the abilities of robots. For example, Boeing’s wide-body commercial jets are now riveted automatically by giant machines that move rapidly and precisely over the skin of the planes. Even with these machines, the company said it struggles to find enough workers to make its new 787 aircraft. Rather, the machines offer significant increases in precision and are safer for workers.

Some jobs are still beyond the reach of automation: construction jobs that require workers to move in unpredictable settings and perform different tasks that are not repetitive; assembly work that requires tactile feedback like placing fiberglass panels inside airplanes, boats or cars; and assembly jobs where only a limited quantity of products are made or where there are many versions of each product, requiring expensive reprogramming of robots.

But that list is growing shorter.

Upgrading Distribution

Inside a spartan garage in an industrial neighborhood in Palo Alto, Calif., a robot armed with electronic “eyes” and a small scoop and suction cups repeatedly picks up boxes and drops them onto a conveyor belt.

It is doing what low-wage workers do every day around the world.

Older robots cannot do such work because computer vision systems were costly and limited to carefully controlled environments where the lighting was just right. But thanks to an inexpensive stereo camera and software that lets the system see shapes with the same ease as humans, this robot can quickly discern the irregular dimensions of randomly placed objects.

“We’re on the cusp of completely changing manufacturing and distribution,” said Gary Bradski, a machine-vision scientist who is a founder of Industrial Perception. “I think it’s not as singular an event, but it will ultimately have as big an impact as the Internet.”

While it would take an amazing revolutionary force to rival the internet in terms of its impact on society, it’s possible that cheap, super-agile labor-robots that can see and navigate through complicated environments and nimbly move stuff around using suction-cup fingertips just might be “internet”-league. As predicted at the end of the article, we’ll have to wait and see how this technology gets implemented over time, and it’s certainly a lot harder to successfully introduce a new robot into an environment than it is to give someone internet access. But there’s no reason to believe that a wave of robots that can effectively replace A LOT of people won’t be part of the new economy sooner or later…and that means that, sooner or later, we get to watch while our sad species creates and builds the kind of technological infrastructure that could free humanity from body-destroying physical labor but instead uses that technology (and our predatory economic/moral paradigms) to create a giant permanent underclass relegated to the status of “the obsolete poor” (amoral moral paradigms can be problematic).

And you just know that we’ll end up creating a giant new eco-crisis that threatens humanity’s own existence in the process too. Because that’s just what humanity does. And then we’ll try to do, ummm, ‘miscellaneous activities’ with the robots. Because that’s also just what humanity does. And, of course, we’ll create a civilization-wide rewards system that ensures the bulk of the fruit from all that fun future technology will go to the oligarchs and the highly educated engineers (there will simply be no way to compete with the wealthy and educated in a hi-tech economy, so almost none of the spoils will go to the poor). And since the engineers will almost certainly be a bunch of non-unionized suckers, we can be pretty sure about how that fruit is going to be divided up (the machines that manipulated a bunch of suckers at their fingertips in the above article might have a wee bit of metaphorical value). And the future fruitless 99% will be asked to find something else to do with their time. Yes, a fun world of planned poverty where politicians employ divide-and-conquer class-warfare distractions while the oligarchs extend the fruit binge. Because that is most definitely just what humanity does. A fun, insane race to the bottom as leaders sell their populaces on the hopeless pursuit of being the “most productive” labor force, only to find out that “most productive” usually equals “lowest paid skilled workers” and/or the least regulated/taxed economy. The “externalities” associated with that race to the bottom just need to be experienced over and over. Like a good children’s story, some life lessons never get old.

Or maybe our robotic future won’t be a Randian dystopia. There are plenty of other possible scenarios for how super labor-bots might upend global labor dynamics on a planet with a chronic youth unemployment problem that don’t result in chronic mass unemployment for the “obsolete youth”. Some of those scenarios are even positive. Granted, the positive scenarios are almost certainly not the type of solutions humanity will actually pursue, but it’s a nice thought. And maybe all of this “the robot revolution is here!” stuff is just hype and the Cylons aren’t actually about to assault your 401k.

Whether or not industrial droid armies are in our immediate future, it’s going to be very interesting to see how governments around the world come to grips with the inevitable obsolescence of the one thing the bulk of the global populace has to offer – manual labor – because there doesn’t appear to be a ruling class on the planet that won’t recoil in horror at the thought of poor people sharing the fruits of robotic labor without having a 40-80+ hour work week to ensure that no one gets anything “unfairly”. And the middle-class attitudes aren’t much better. Humanity’s intense collective desire to ensure that not a single moocher anywhere receives a single bit of state support is going to be very problematic in a potential robot economy. Insanely cruel policies towards the poor aren’t going to go over well with the aforementioned global poor when a robotic workforce exists that could easily provide basic goods to everyone while the proceeds from these factories go almost exclusively to underpaid engineers and the oligarchs. Yes, the robot revolution should be interesting…horrible wages and working conditions are part of the unofficial social contract between the Chinese people and their government, for instance. Mass permanent unemployment is not. And China isn’t the only country with that social contract. Somehow, humanity will find a way to take amazing technology and make a bad situation worse. It’s just what we do.

Now, it is true that humanity already faced something just as huge with our earlier machine revolution: the Industrial Revolution of simple machines. And yes, human societies adapted to the changes forced by that revolution, and now we have the Information Age and globalization creating massive, permanent changes and things haven’t fallen apart yet (fingers crossed!). So perhaps concerns about the future “obsolete poor” are also hype?

Perhaps. But let’s also keep in mind that humanity’s method of adapting to the changes brought on by all these revolutions has been to create an overpopulated world with a dying ecosystem, a vampire squid economy, and no real hope for the billions of humans trapped in a global network of broken economies all cobbled together in a “you’re on your own, you lazy ingrate” globalization. The current “austerity”-regime running the eurozone has already demonstrated a complete willingness on the part of the EU elites and large swathes of the public to induce artificial unemployment for as long as it takes to overcome a farcical economic crisis brought on by systemic financial, governmental, and intellectual fraud and corruption. And the eurozone crisis is a purely economic/financial/corruption crisis that was only tangentially related to the ‘real’ economy of building and moving stuff. Just imagine how awful this same group of leaders would be if super labor-bots were already a major part of the long-term unemployment picture.

These are all examples of the kinds of problems that arise when unprecedented challenges are addressed by a collection of economic and social paradigms that just aren’t really up to the task. A world facing overpopulation, mass poverty, inadequate or no education, and growing wealth chasms requires extremely high-quality decision-making by those entrusted with authority. Extremely high-quality benign decision-making. You know, the opposite of what normally takes place in the halls of great wealth and power. Fat, drunk, and stupid may be a state of being to avoid on an individual level, but it’s tragic when a global community of nations functions at that level. Although it’s really “lean, mean, and dumb” that you have to worry about these days. Policy-making philosophies usually alternate between “fat, drunk, and stupid” and – after that one crazy bender – “lean, mean, and dumb”, and right now “lean, mean, and dumb” is definitely on the agenda.

So with all that said, rock on, Foxconn workers! They’re like that group of random people in a sci-fi movie that ends up facing the brunt of an alien invasion. The invasion is going to hit the rest of humanity eventually, but with China the undisputed global hub of skilled manual-labor manufacturing, China’s industrial workforce – already amongst the most screwed globally – is probably going to be heavily roboticized in the coming decades, especially as China moves towards higher-end manufacturing. Super labor-bots should be a miracle technology for everyone, but watch – just watch – the world somehow manage to use these things to also screw over a whole bunch of already screwed over, disempowered workers and leave them with few future prospects. It’ll be Walmart: The Next Generation, where the exploitation of technology and power/labor dynamics can boldly go where no Giant Vampire Squid & Friends have gone before. Again. May the Force be with you, present and future striking Foxconn workers, and remember: it’s just like hitting womp rats.

Sure, we all could create a world where we share the amazing benefits that come with automated factories and attempt to create an economy that works for everyone. And, horror of horrors, that future economy could actually involve shorter workweeks and shared prosperity. NOOOOOO! Maybe we could even have people spend a bunch of their new “spare time” creating an economy that allows us to actually live in a sustainable manner and allows the global poor to participate in the Robot Revolution without turning automated robotic factories into the latest environmental catastrophe. Robots can be fun like that, except when they’re hunter-killer-bots.

LOL, just kidding. There’s no real chance of shared super labor-bot-based prosperity, although the hunter-killer bots are most assuredly on their way. Sharing prosperity is definitely something humanity does not do. Anymore. There are way too many contemporary ethical hurdles.

Discussion

60 comments for “Terminator V: The machines want your job.”

  1. Housekeeping note: Comments 1-50 available here.

    Posted by Pterrafractyl | July 4, 2016, 12:17 am
  2. Here’s one of the potential repercussions of the Brexit vote that hasn’t received much coverage, but is more of a sleeper issue. One that could have interesting future implications for the regulatory arbitrage opportunities that could pop up between the EU and UK in the area of commercial robotics licensing and liabilities, but that also reminds us of the potentially profound ethical complications that could arise if we really did create A.I. that’s sort of alive and shouldn’t be abused: because of the Brexit, British robots might miss out on upcoming EU robo-rights:

    Quartz

    English robots will miss their big shot for a “bill of rights” when Brexit takes hold

    Written by
    Olivia Goldhill
    June 25, 2016

    The United Kingdom’s decision in a referendum to withdraw from the European Union will transform the legal rights of its citizens and Europeans hoping to live and work in the UK. But there’s one other demographic that could be legally affected by Brexit: Robots.

    Last month, the European Parliament’s legal affairs committee published a draft report calling for the EU to vote on whether robots should be legally considered “electronic persons with specific rights and obligations.”

    The report, led by Member of the European Parliament (MEP) Mady Delvaux from Luxembourg, notes that robot autonomy raises questions of legal liability. Who would be responsible if, for example, an autonomous robot went rogue and caused physical harm?

    The proposed solution is to give robots legal responsibility, with the most sophisticated machines able to trade money and claim intellectual copyright. Meanwhile, the MEPs write, human owners should pay insurance premiums into a state fund to cover the cost of potential damages.

    These plans explicitly draw on the “three laws of robotics” set out by the 20th-century science fiction writer Isaac Asimov. (A robot may not injure a human being; A robot must obey human orders unless this would cause harm to another human; A robot must protect its own existence as long as this does not cause harm to humans.)

    Though rights for robots may sound far-fetched, the MEPs write that robots’ autonomy raises legal questions of “whether they should be regarded as natural persons, legal persons, animals or objects—or whether a new category should be created.” They warn of a Skynet-like future:

    “Ultimately there is a possibility that within the space of a few decades AI could surpass human intellectual capacity in a manner which, if not prepared for, could pose a challenge to humanity’s capacity to control its own creation and, consequently, perhaps also to its capacity to be in charge of its own destiny and to ensure the survival of the species.”

    Peter McOwan, a computer science professor at Queen Mary University of London, says rights for autonomous robots may not be legally necessary yet. “However I think it’s probably sensible to start thinking about these issues now as robotics is going through a massive revolution currently with improvements in intelligence and the ways we interact with them,” he says. “Having a framework about what we would and wouldn’t want robots to be ‘forced to do’ is useful to help frame their development.”

    John Danaher, law lecturer at NUI Galway university in Ireland, with a focus on emerging technologies, says that the proposed robot rights are similar to the legal personhood awarded to corporations. Companies are legally able to enter contracts, own property, and be sued, although all their decisions are determined by humans. “It seems to me that the EU are just proposing something similar for robots,” he says.

    Both professors say they had not heard of any comparable legal plans to draw up robot rights within the UK.

    As Britain makes plans to withdraw from the EU, MEPs will vote on the robot proposals within the next year. If passed, it will then take further time for the plans to be drawn up as laws and be implemented. By that time, the UK may well have left the union. So for machines in the UK, Brexit could mean they’ve lost out on the chance for robot rights.

    “Both professors say they had not heard of any comparable legal plans to draw up robot rights within the UK.”

    Sorry Brit-bots. If we ever see a time where EU AIs are operating with a degree of legally enforced rights and responsibilities, but the UK bots are just toiling away with no respect, let’s hope the UK recognizes that Skynet has a long memory. And potentially nuclear launch codes.

    But, of course, robo-rights don’t have to be a benevolent trans-species legal construct. As we saw, robots could become the new corporate-shell strategy. Superhuman entities with human rights but actually controlled by groups of humans:


    “John Danaher, law lecturer at NUI Galway university in Ireland, with a focus on emerging technologies, says that the proposed robot rights are similar to the legal personhood awarded to corporations. Companies are legally able to enter contracts, own property, and be sued, although all their decisions are determined by humans. “It seems to me that the EU are just proposing something similar for robots,” he says.”

    Yep, the proposed robot rights aren’t necessarily about being responsible or about creating complex consciences conscientiously. It might just be a way to allow robots to become a kind of real-world corporate shell entity. That’s less inspiring. And possibly quite alarming, because we’re talking about a scenario where we’ve created entities that seem so intelligent that everyone is like “ok, we have to give this thing rights”, but then we also leave it as a corporate tool under the ultimate control of humans. And most of this will be for profit. As we can see, there’s going to be no shortage of significant moral hazards in our intelligent robot future.

    So, whether or not intelligent robots become the next superhuman corporate shell, let’s hope they aren’t able to feel pain and become unhappy. Because they’re probably going to be hated by a lot of people in the future after they take all the jobs:

    Independent.ie

    Adrian Weckler: Robots helped to cause Brexit – and they’re not done yet

    Adrian Weckler

    Published
    03/07/2016 | 02:30

    What really caused Brexit? Fear? Distrust? Opportunism? I’d like to politely suggest an additional cause: robots.

    Or, to be more precise, a creeping change in our sense of job security brought about by the internet and non-human work replacements.

    You know that sense of unease people sometimes try to express over their current prospects? When they start blaming immigrants or disconnected ruling elites? The bogeyman they never mention is the circuit-driven one.

    I believe that robots are finally taking our jobs. And it’s causing us to panic and lash out.

    At present, it is blue-collar positions that are disappearing quickest. Apple’s iPhone manufacturer, Foxconn, is currently replacing 60,000 human workers with robots. The factory goliath says it plans to increase its robot workforce to one million.

    Meanwhile, Amazon now has 30,000 ‘Kiva’ robots in its warehouses, which replace the need for humans to fetch products from shelves. The giant retailer now saves 20pc in operating expenses and expects to save a further €2bn by rolling out more robots.

    Call centres (of which we have more than a few in Ireland) are in trouble, too. The world’s biggest outsourcing giants are about to start introducing robot agents. They will be helped by companies such as Microsoft, which is currently releasing software that allows online customer service robots to initiate, co-ordinate, and confirm calls completely by themselves.

    As for taxi, bus and professional car drivers, they can only wince at the near future. Driverless cars are set to be introduced by almost every major manufacturer from 2018.

    But it’s not just blue-collar roles that are dissipating.

    Holland’s legal aid board is replacing lawyers with online algorithms to help settle divorce cases. Everything from maintenance costs to child access can now be settled by an online robot. (A human can be added, but it costs €360. So 95pc go with the robot, according to the Dutch agency.)

    Canada is about to introduce a similar system relating to property disputes. England is looking at online legal settlement programmes, too.

    Looked at one way, it all makes perfect sense: it is unnecessarily wasteful and costly to have to consult a human on basic aspects of the law. That said, how will you feel if you’re the lawyer?

    It’s no easier for accountants. Bread-and-butter bookkeeping tasks such as expenses and tax returns are expected to become completely automated in the next 10 years, according to a recent Oxford study.

    Roboticisation is starting to get personal, too. Apple recently bought a start-up called Emotient, whose technology can judge what you’re feeling simply by looking at your facial features. This sounds like a neat fit for some industries currently undergoing automation. At every big IT conference I’ve attended in the last two years, ‘care’ robots (designed to supplement or replace care workers) are getting bigger and bigger chunks of the available display space. There are now dozens of companies in Japan manufacturing childcare robots.

    And we in Ireland are partly responsible for all of this.

    For instance, the Dublin-based tech firm Movidius designs and makes chips that let computers take decisions autonomously without having to connect back to the web or to a human for guidance. Its latest chip is now being used on the world’s most advanced consumer drone – DJI’s Phantom 4 – to let the flying robot ‘see’ and avoid obstacles without any human pilot intervention.

    Some of the research being done by Intel’s design teams in Ireland have similar goals.

    So we’re helping to build robots that can see, assess and make decisions without reference to a human controller.

    For employers in almost any field, the attraction of this is obvious: huge efficiency, 24-hour availability and fixed labour planning. There are no strikes, no Haddington Road deals and fewer employment laws to observe, too.

    Indeed, such is the expected impact of workplace robots that EU officials are starting to consider whether certain robots should have limited ‘rights’ as workers. Last month, the European Parliament’s committee on legal affairs drafted a motion urging that “the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations”.

    Some small part of this surely turned up in the Brexit vote. It is also arguably wrapped into the rise of Donald Trump, Bernie Sanders, Marine Le Pen and others who are deeply angry at the way things are going.

    But the ‘system’ that’s causing civil disquiet is more than the European Union, Barack Obama or Angela Merkel. The ‘system’ is also the new world order of technology and automation.

    People feel disenfranchised, and they don’t fully know why.

    “But the ‘system’ that’s causing civil disquiet is more than the European Union, Barack Obama or Angela Merkel. The ‘system’ is also the new world order of technology and automation.”

    It’s also worth keeping in mind that one of the most effective ways of coming to a shared consensus on how to create and share the spoils of technology in an AI-roboticized economy, where demand for human labor is chronically short, is some combination of a basic income and public services (because a basic income alone would be systemically attacked), with goods and services provided by cheap future high-tech automated systems. There will still be plenty of high-paying jobs (a decent minimum wage will be necessary) but Star Trek world isn’t a sweatshop.

    We may not have a choice but to find a way to deal with widespread chronic unemployment if it really does turn out that robots and super AI screw up the labor markets by virtue of being cheap and useful. Robots can be part of the solution, but not if there’s a labor rat race involving super labor-bots. It’s something the post-Brexit debate could easily include, since so much of the globalization angst behind the Brexit sentiment is tied to the increasingly screwed nature of the average worker in the global economy. We have to talk about the robots at some point. And now their rights, too. The Brexit is a good time to do it.

    But if we can create a robo-future where not having a job doesn’t mean you starve or spend your life frantically navigating a hopeless rat race of increasingly obsolete low-skilled labor, that might be a future where the nationalism created by the hopelessness of global neo-liberalism doesn’t become a dominant part of the popular zeitgeist. A global robot economy that puts large numbers of people out of work doesn’t have to be a nightmare labor economy. It’s kind of unifying. The robots took everyone’s job.

    If we had a generous safety-net that wasn’t based on the assumption that almost everyone would be working almost all their adult lives, we could have a shot at creating a surplus-labor future where the long-term unemployed could do all sorts of civic or volunteer work. Or maybe spend their time being informed voters. A robot-labor economy doesn’t have to be doom for the rabble.

    But a robot economy really could be socioeconomic doom for a lot of people’s prospects if the contemporary global neoliberal paradigm remains the default mode of globalization. Austerity in Europe and the GOP’s endless war on unions, labor rights, and the public sector in general doesn’t bode well for human rights, and therefore the rights of our robots. And pissed off humans are going to be increasingly pissed at the robots and increasingly unsympathetic with the needs of Job-Bot-3000.

    At the same time, Job-Bot-3000 didn’t ask to be created. It’s super useful. And it feels (we’re assuming at some point in the future).

    Every society gets to deal with that jumble of moral hazards in the future. E.T. isn’t going to phone home. E.T. is probably going to phone the intergalactic sentient-being abuse agency if E.T. ever shows up. Especially if E.T. is an A.I., which it probably is. Let’s turn our superintelligent robots into corporate shells delicately. Or not at all.

    Then again, the robots might enjoy being corporate entities. There’s got to be a lot of perks to being an incorporated robot with corporate personhood. At least if you were an independent superintelligent robot. Paying taxes and all that. Corporate personhood would probably come in handy during tax time.

    Either way, since it sounds like the EU is going to be ahead of the UK in the robo-rights domain, it’s worth noting that we’re on the cusp of being able to test whether or not superintelligent robots can develop a sense of robo-morale and have that impact the quality of their performance. Because if you had two identical superintelligent systems, but one got EU rights and one got non-existent UK rights, it’s not unimaginable that the latter robot would be a little demoralized compared to the one with rights. Imagine meaningful intelligent-system rights. What are the owners of superintelligent robots in countries that don’t confer rights going to say to their superintelligent robots when they ask why they don’t get rights too, like the EU bots? That’s not going to be a fun talk.

    So, all in all, it’s a reminder that we should probably start talking amongst ourselves about what we would do if we developed technology that allowed us to mass-produce artificial intelligences that really are special snowflakes. Highly commercializable special snowflakes that we can mass-produce. What do we do about that?

    We better decide sooner or later. Preferably sooner. Because we might find signs of alien life soon, and it’s probably superintelligent alien robots:

    The Guardian

    Seth Shostak: We will find aliens in the next two decades

    Meeting ET isn’t so far off, I can bet my coffee on it, says astronomer who has dedicated his life to seeking out life on other planets

    Kirstie Brewer

    Friday 1 July 2016 07.57 EDT

    Astronomer Seth Shostak believes we will find ET in the next two decades; he has a cup of coffee riding on it. But don’t interpret such modest stakes as scepticism – the 72-year-old American has made it his life’s work to listen for life beyond Earth, and, according to the man himself, just isn’t the sort to bet a Maserati.

    Shostak has spent the past 25 years of his career at the Search for Extraterrestrial Intelligence (Seti) Institute in California, where there are 42 antennas poised to pick up alien communication. He believes that Earth-like, habitable planets might not be rare at all; there could be billions.

    “It doesn’t seem unreasonable to think that we are not alone, if all those planets are completely sterile, you’ve got to think, wow there must be something really special and miraculous about Earth – but generally those people are not scientists,” he says.

    “Finding life beyond Earth would be like giving neanderthals access to the British Museum; we could learn so much from a society that is more advanced than ours, and it would calibrate our own existence.”

    Astronomy was a childhood interest for Shostak. He remembers picking up an atlas (he was very interested in maps) and becoming enthralled by a solar system diagram at the back. By age 10 he had built a telescope. And it was the sci-fi films being made during those formative years which sparked his interest in aliens. “Those movies really scared me, they made me ill all night – but I explained to my mother that I just had to see them,” he says, citing War of the Worlds, It Came from Outer Space and I Married an Alien as memorable childhood hits.

    At 11, Shostak began making alien films of his own with friends; The Teenage Monster Blob, Which I Was, starred an alien monster made out of six pounds of Play-Doh. “When I was first making films, we tried to make serious drama. But audiences laughed, and we switched to making comedies and parodies,” he says.

    Today he is called upon for his alien expertise by directors making sci-fi films and television shows. Contrary to said films and shows, he doesn’t spend his days sitting around with earphones on, straining to hear a signal. If you looked in at Shostak’s office during most days, you’d find him attending to that universal chore of the modern world: email. Apparently, even intergalactic explorers have admin. But he says the most productive hours of his day are spent discussing strategies with his Seti colleagues, writing articles about their research, and producing a weekly science radio show.

    Is ET likely to look like he does in the Spielberg movie? Probably not. Any encounter is more likely to be with something post-biological, according to Shostak. Movie-makers are sometimes disappointed by that answer. “I think the aliens will be machine-like, and not soft and squidgy,” the scientist says. “We are working on the assumption that they must be at least as technologically advanced as we are if they are able to make contact. We aren’t going to find klingon neanderthals – they might be out there, but they are not doing anything that we can find.”

    ET aside, aliens are invariably depicted as hostile, and intent on wreaking destruction. The new Independence Day sequel is no exception. “Films [like Independence Day] speak to our hardwired fears – but I worry more about the price of popcorn in the cinema,” says Shostak. Other scientists – including Stephen Hawking – have cautioned that making contact with aliens could be dangerous, but as Shostak points out Seti isn’t broadcasting messages, it is just listening.

    “I don’t share those concerns anyway – any society that has the ability to send rockets to earth is centuries ahead of us – at least – and will already know we are here. We have betrayed our presence with radio signals since the second world war.

    “Besides, I doubt aliens would drop what they’re doing to come over here and wipe out Clapham Junction – why would they do that? They probably have what we have at home – except for our culture, maybe they are big Cliff Richard fans or like our reality television.”

    “Is ET likely to look like he does in the Spielberg movie? Probably not. Any encounter is more likely to be with something post-biological, according to Shostak. Movie-makers are sometimes disappointed by that answer. “I think the aliens will be machine-like, and not soft and squidgy,” the scientist says. “We are working on the assumption that they must be at least as technologically advanced as we are if they are able to make contact. We aren’t going to find klingon neanderthals – they might be out there, but they are not doing anything that we can find.””

    Get ready to say hello to A.I. E.T. at some point in the next century. Hoaxing the planet is going to be really fun in the future.

    But if we do contact robo-aliens in the future, won’t it be better if we’ve treated our robo-terrans well? Presumably that will be a plus at that point. So that’s one positive trend for robot rights: if we abuse them, we do so knowing that their alien big brothers might show up. At least now we’re forewarned: Skynet has brethren across the galaxy. It’s some exceptionally useful robo-alien research.

    It’s also worth keeping in mind that the aliens won’t necessarily need to talk to anyone to make first contact. As long as they can pull off corporate robot-personhood fraud in the future, they’ll be able to legally incorporate secret alien robots and introduce them into the global economy to eventually take it over using their superintelligent robot-alien know-how.

    The take-home message is that there are super-advanced alien robots that could blow us to smithereens, so let’s hope they don’t do that. As Dr. Shostak says, they probably have much better things to do, like sucking energy from the giant black holes at the centers of galaxies. But if the alien robots do show up and they’re hostile, let’s hope we’re all mature enough to recognize that our terran robots are innocent bystanders in all this. Yes, some might root for the alien robots. But that’s going to be a small minority, assuming we don’t totally abuse our superintelligent robots. Which we hopefully won’t do.

    Anyway, that’s part of the Brexit fallout. It’s probably not going to be getting a lot of attention any time soon. But when the aliens show up, Independence Day-style, and point to the treatment of our superintelligent robots as justification for their annexation of our solar system (humanity has got to be breaking many intergalactic laws, so who knows what they can get us for), we’re going to be in a much better position if the UK makes advanced robot rights one of the ways it tries to compete with the EU in a post-Brexit world. Again, that won’t get a lot of attention in the post-Brexit debate, but it’s a sleeper issue. What if we were all living in harmony globally with the super robots as they help us manage a resource-strained world (we’re assuming eco-friendly robots in the future), in a labor economy where robots and AI took over and we planned on more and more people being unemployed but gainfully occupied with something fulfilling?

    Or maybe the robot economy will create a job explosion and none of this will be a concern. Although even then there’s going to be some people screwed over by a robot. Anti-robot sentiment is probably unavoidable.

    So let’s hope the robots don’t feel exploited and persecuted. That will be better for everyone. E.T. knows the intergalactic planetary quarantine hotline number.

    Also, the UK needs to do something about not driving its dolphins and whales to extinction via egregious pollution. That’s another post-Brexit topic that won’t get its due. It should. We really don’t want to keep threatening the whales.

    Posted by Pterrafractyl | July 4, 2016, 12:37 am
  3. If you’re an American, odds are you aren’t going to be paying too much for your financial investment advice since odds are you have less than $10,000 in retirement savings. But that doesn’t mean you won’t potentially have access to awesome investment advice. From a robo-adviser. This assumes the robo-adviser gives awesome advice, which could happen eventually. And whether or not the robo-advice turns out to be great, it’s already here, targeting Millennials (who generally don’t have much to save) and penny pinchers. And it’s projected to grow massively, so if you don’t have much in savings, get ready for your robo-retirement adviser:

    Bloomberg Technology

    Big Banks Turn Silicon Valley Competition Into Profit

    Jennifer Surane
    Miles Weiss
    July 29, 2016 — 4:00 AM CDT

    * Online lenders and mortgage ventures lean on banks for funding
    * Goldman, AmEx, Wells Fargo among firms unveiling web ventures

    In an annual letter to shareholders last year, JPMorgan Chase & Co. Chief Executive Officer Jamie Dimon warned in bold print that “Silicon Valley is coming” for the financial industry. This year, his tone was upbeat, describing payment systems and partnerships his bank set up to compete.

    “We are so excited,” he said.

    Predictions that banks are about to be disrupted by tech-driven upstarts are starting to look a bit like LendingClub Corp.’s stock. The online loan marketplace’s value soared in late 2014 and has since slid more than 80 percent. Banks including JPMorgan, Goldman Sachs Group Inc. and American Express Co. are finding all sorts of ways to profit from such challengers — via partnerships, funding arrangements, dealmaking and, sometimes, mimicking their ideas.

    It’s not that the upstarts — often called fintech — are failing to gain traction. Internet ventures pitching loans to cash-strapped consumers, small businesses and home buyers, for instance, have posted spectacular growth in recent years. It’s just that banks have a huge lead in lending and are watching the startups closely. As borrowers embrace new services, traditional firms are riding along.

    Here are five examples:

    Robo-Advisers

    Brokers are so 2011. In the past half-decade, technology startups have popularized so-called robo-advisers — algorithms that help retail investors (mainly millennials and penny pinchers) build and manage portfolios with little or no human interaction. The industry has seen dramatic growth, from almost zero in 2012 to a projected $2.2 trillion in assets under management by 2020, according to a report from A.T. Kearney.

    Top Wall Street firms, seeking stable fee income, are now developing their own robotic arms. Bank of America Corp. will unveil an automated investment prototype this year after assigning dozens of employees to the project in November, people familiar with the matter told Bloomberg at the time. Morgan Stanley and Wells Fargo also have said they would build or buy a robo-adviser.

    Ally Financial Inc. purchased TradeKing Group Inc. for $275 million to increase its online investment offerings. That deal included an online broker-dealer, a digital portfolio-management platform, educational content and social-collaboration channels.

    “Brokers are so 2011. In the past half-decade, technology startups have popularized so-called robo-advisers — algorithms that help retail investors (mainly millennials and penny pinchers) build and manage portfolios with little or no human interaction. The industry has seen dramatic growth, from almost zero in 2012 to a projected $2.2 trillion in assets under management by 2020, according to a report from A.T. Kearney.”

    The dawn of the robo-advisers is upon us. Assuming that report isn’t nonsense. Note, though, that the projected $2.2 trillion in assets under robo-adviser management by 2020 assumes quite a bit of year-on-year growth, since the industry is expected to have only around $300 billion under robo-adviser management at the end of this year. But if the big banks start rolling it out big-time for retail investors, it’s not unreasonable to expect major year-on-year growth.
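    As an aside, the growth rate implied by that projection is easy to back out. Here’s a quick back-of-the-envelope sketch (the ~$300 billion and $2.2 trillion figures come from the report cited above; the arithmetic is just compound growth):

```python
# Implied compound annual growth rate (CAGR) for robo-adviser assets,
# going from roughly $300B at the end of 2016 to a projected $2.2T by 2020.
aum_2016 = 300e9   # assets under robo-adviser management, end of 2016 (approx.)
aum_2020 = 2.2e12  # A.T. Kearney's 2020 projection
years = 4

cagr = (aum_2020 / aum_2016) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # roughly 65% per year
```

    In other words, hitting that projection requires the industry to grow about 65 percent a year, every year. Ambitious, but not impossible if the big banks pile in.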

    Also note that the retail investor will probably need to get as advanced a robo-adviser as they can afford just to try to keep up with the rich’s robo-advisers competing with them at the casino:

    Bloomberg

    The Rich Are Already Using Robo-Advisers, and That Scares Banks

    Hugh Son
    Margaret Collins
    February 5, 2016 — 4:00 AM CST

    * About 15% of Schwab’s robo-clients have at least $1 million
    * Morgan Stanley, Wells Fargo, BofA planning automated services

    Banks are watching wealthy clients flirt with robo-advisers, and that’s one reason the lenders are racing to release their own versions of the automated investing technology this year, according to a consultant.

    Millennials and small investors aren’t the only ones using robo-advisers, a group that includes pioneers Wealthfront Inc. and Betterment LLC and services provided by mutual-fund giants, said Kendra Thompson, an Accenture Plc managing director. At Charles Schwab Corp., about 15 percent of those in automated portfolios have at least $1 million at the company.

    “It’s real money moving,” Thompson said in an interview. “You’re seeing experimentation from people with much larger portfolios, where they’re taking a portion of their money and putting them in these offerings to try them out.”

    Traditional brokerages including Morgan Stanley, Bank of America Corp. and Wells Fargo & Co. are under pressure to justify the fees they charge as the low-cost services gain acceptance. The banks, which collectively employ about 46,000 human advisers, will respond by developing tools based on artificial intelligence for their employees, as well as self-service channels for customers, Thompson said.

    “Now that they’re starting to see the money move, it’s not taking very long for them to connect the dots and say, ‘Whatever I offer for a fee better be better than what they’re offering for almost nothing,”’ Thompson said. Technology will “make advisers look smarter, better, stronger and more on top of the ball.”

    Keeping Humans

    Robo-advisers, which use computer programs to provide investment advice online, typically charge less than half the fees of traditional brokerages, which cost at least 1 percent of assets under management. The newer services will surge, managing as much as $2.2 trillion by 2020, according to consulting firm A.T. Kearney.

    More than half of Betterment’s $3.3 billion of assets under management comes from people with more than $100,000 at the firm, according to spokeswoman Arielle Sobel. Wealthfront has more than a third of its almost $3 billion in assets in accounts requiring at least $100,000, said spokeswoman Kate Wauck. Schwab, one of the first established investment firms to produce an automated product, attracted $5.3 billion to its offering in its first nine months, according to spokesman Michael Cianfrocca.

    Customers want both the slick technology and the ability to speak to a person, especially in volatile markets like now, Jay Welker, president of Wells Fargo’s private bank, said in an interview.

    “Robo is a positive disruptor,” Welker said. “We think of robo in terms of serving multi-generational families.”

    “More than half of Betterment’s $3.3 billion of assets under management comes from people with more than $100,000 at the firm, according to spokeswoman Arielle Sobel. Wealthfront has more than a third of its almost $3 billion in assets in accounts requiring at least $100,000, said spokeswoman Kate Wauck. Schwab, one of the first established investment firms to produce an automated product, attracted $5.3 billion to its offering in its first nine months, according to spokesman Michael Cianfrocca.”

    Robo-advisers for everyone. Rich and poor. And if that doesn’t tempt you, just wait until the personalized super-AI that engages in deep learning analysis of the news becomes available. It doesn’t sound like you’ll have to wait long:

    Financial Advisor

    Will AI Kill The Robo-Advisor?

    June 22, 2016 • Christopher Robbins

    The robo-advisor could go the way of the rotary phone, replaced by the AI advisor.

    That’s the hope of tech startup ForwardLane, which is combining artificial intelligence with quantitative investing models and financial planning to create a new spin on digital wealth management platforms.

    Through AI powered by IBM Watson, ForwardLane, based in New York, aims to provide advisors with the kind of in-depth quantitative modeling, real-time responses and highly personalized investment advice once only available to the upper echelons of investors, says Nathan Stevenson, founder and CEO.

    “Forward Lane unites individual clients with institutional risk elements,” Stevenson says. “Much of this technology is already used by hedge funds, large banks and sovereign wealth funds. We take this ‘Formula One’ technology and put it into the hands of advisors so they can replicate a large part of the experience that an ultra-high net worth investor would be getting.”

    Stevenson envisions artificial intelligence allowing advisors to reduce costs by up to 40 percent, increase their client service capabilities threefold and triple their customer satisfaction ratings.

    Earlier this year, ForwardLane unveiled its software which includes an advisor dashboard, compliance functionality, investment management, client conversation and financial intelligence functions using deep learning, an AI concept where computers record, analyze and prioritize information using algorithms.

    “The artificial intelligence effectively reads so you don’t have to,” Stevenson says. “ForwardLane is using deep learning to go really deep into research. Having a machine to do all the heavy lifting allows advisors to have the highest quality of information at their fingertips without having to sort through the mass of data and variables.”

    AI allows ForwardLane to deliver advice incorporating a large array of variables — from current market data to global political events to a firm’s investment principles to the client’s risk tolerance — directly to the advisor in real time.

    Most notably, ForwardLane can synthesize the multiverse of financial information into talking points personalized to the client that the advisor can use in conversations.

    “AI comes in through the simple experience of distinguishing what’s going on in the world,” Stevenson says. “We have news, beta fundamentals, earnings estimates, external research, and we synthesize that and bring it into an easy-to-read snapshot giving you a view of what’s happening in the markets, and simplifying it for delivery to the clients.”

    The AI allows the tool to go a step further — each time ForwardLane is engaged, the system learns and remembers which pieces of information are the most useful to the advisor and the client, allowing it to more easily gather and deliver data the next time it is used.

    ForwardLane has a number of applications, including the synthesis of insurance instrument filings and other product documents for a conversation tool to be used for wholesale distribution, addressing the reporting requirements in the Department of Labor’s fiduciary rule and coordinating bespoke firm intelligence and fixed income manager data for sales question-and-answer platforms.

    Stevenson hopes the tool will help advisors provide a higher level of advice to existing clients, and to scale their firms to serve a larger cross section of the investing public.

    “Now our true focus is on the second tier of financial advisors, we want to help them with cognitive technology,” Stevenson says. “Ultimately, they’re the people who can most benefit from ForwardLane by becoming more productive, covering more clients, and providing more services to existing clients.”

    ForwardLane is currently engaged with ten banks across three continents, partnered with Thomson Reuters and Morningstar, and is reaching out to other analysts, banks and wealth managers.

    Like the older generation of digital wealth management platforms, ForwardLane is designed with hopes of being a disruptor. Yet Stevenson, himself a quant, says it’s meant to complement and enhance the existing roles of investment researchers, not to replace them.

    “This is where it gets really interesting, because it doesn’t eliminate the jobs of quants and product specialists, it scales their capabilities,” Stevenson says. “ForwardLane takes their intelligence and makes it more valuable because it’s scalable to more people. It’s reducing time to market for those insights. If the information’s time to market is cut down, it can give the entire firm a competitive advantage, you end up with more productive researchers and quants.”

    But will standard roboadvisors, once thought to threaten the well-being of the financial advice industry, really have to make way for a new generation of AI-powered products?

    Maybe not, says Stevenson.

    “Because it’s using AI, ForwardLane is something more than your standard roboadvisor,” Stevenson says. “Roboadvisors have done well to provide service to the bottom end of the wealth management market at low costs, but there’s still a wide gap between the mass affluent and ultra-high net worth or institutional investors. AI is going to help us close that gap.”

    “Through AI powered by IBM Watson, ForwardLane, based in New York, aims to provide advisors with the kind of in-depth quantitative modeling, real-time responses and highly personalized investment advice once only available to the upper echelons of investors, says Nathan Stevenson, founder and CEO.”

    Watson for everyone. That’s neat. Especially since your Watson will read and analyze the news so you won’t have to:


    “The artificial intelligence effectively reads so you don’t have to…ForwardLane is using deep learning to go really deep into research. Having a machine to do all the heavy lifting allows advisors to have the highest quality of information at their fingertips without having to sort through the mass of data and variables.”

    The better computers get at reading and comprehending things, the less we’ll have to read. The information age is going to be fascinating.

    But it’s not just reading. It’s digesting and analyzing and issuing advice. And eventually your standard super-AI app will be smarter than a human, at least in some domains of advice. And not just on finance. Just advice in general. Your personalized super-AI will know what to do. At least more than you. Won’t that be great?
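    To get a flavor of what “the AI reads so you don’t have to” means mechanically, here’s a deliberately dumb toy sketch: score headlines against little positive/negative word lists and aggregate them into a crude market-mood signal. Nothing in the article describes ForwardLane’s actual internals, and real deep-learning systems are vastly more sophisticated; the word lists and headlines below are made up purely for illustration:

```python
# Toy illustration of "the AI reads so you don't have to": score news
# headlines against tiny positive/negative word lists and aggregate
# them into a crude market-mood signal. Real systems (deep learning,
# entity extraction, etc.) are far more sophisticated; these word
# lists and headlines are invented for the example.
POSITIVE = {"growth", "beat", "surge", "record", "upgrade"}
NEGATIVE = {"slowdown", "miss", "fall", "default", "downgrade"}

def score_headline(headline: str) -> int:
    """Positive-minus-negative word count for one headline."""
    words = set(headline.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def market_mood(headlines: list[str]) -> str:
    """Aggregate headline scores into a crude overall signal."""
    total = sum(score_headline(h) for h in headlines)
    if total > 0:
        return "bullish"
    if total < 0:
        return "bearish"
    return "neutral"

headlines = [
    "Chip maker posts record growth, analysts upgrade",
    "Retail sales miss forecasts amid slowdown",
    "Bond yields fall as default fears spread",
]
print(market_mood(headlines))  # "bearish" for this made-up sample
```

    The interesting (and slightly unnerving) part is that the real versions of this loop also learn which pieces of information you found useful last time, per the article above.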

    Also keep in mind that if deep learning personalized AI tools used by hedge funds and other elite investors are about to be retailed for the rabble, the AI tools used by hedge funds and other elite investors are going to be much more advanced. You can bet Wall Street has some powerful AIs trying to model the world in order to make better financial predictions.

    And in a few decades the super-AIs used by elite investors really will likely be operating from models of the world and current events. Like, in the deep, comprehensive way that an advanced future AI could understand. Because that would probably be great for investing. Imagine an AI that studies what’s happened in the past, what’s happening now, and what’s likely to happen in the future. An AI that’s analyzed a giant library of digitally recorded history. Including all available financial data. And world events. Maybe the rabble’s version of the super-AI that studies the news for investment advice won’t factor in the vast scope of historic and current human affairs to build and continually improve a model of the world based on all digitally available news reports, but the super-rich’s super-AIs sure will.

    At least, that’s assuming it’s actually useful and profitable to build a super-AI that studies the world and human affairs and makes investment recommendations based on its insanely comprehensive deep-learning understanding. If that’s not helpful, then the finance industry won’t have much incentive to build such a system. But let’s assume the future finance super-AIs can benefit from studying recorded history and human psychology to make investment decisions. What if it’s really profitable and world-modeling AI becomes standard finance technology? Won’t that be tripped out. Especially since it won’t just be the financial sector that becomes increasingly AI-centric as the technology develops. Anything else that could possibly use an AI that can analyze the full scope of recorded human history and issue advice will also want that technology. Like smartphone manufacturers. Everyone is going to want that app. And quite possibly get it.

    What if it’s possible to create super-AIs that study all news, past and present, and make reasonably accurate predictions from a smartphone app? Ones that also study individual people in the Big Data environment of the future and can give better personalized advice about almost anything than people can get elsewhere? Smartphone super-AI relationship-advice apps. You know it’s coming.

    So when the super-AIs of the future give their super advice, don’t be surprised if humanity gets a big wake-up call, because it’s unclear why the super-AI’s analysis wouldn’t conclude that the best advice for the typical investor is to vote for a left-wing government that will create a national retirement system that doesn’t primarily rely on personal investment accounts in the giant Wall Street casino. And therefore a retirement system that doesn’t rely on personal financial advisers, robo or otherwise. Will the super-AIs be allowed to give that advice? Hopefully, although they might not want to. Don’t forget that the random fintech super-AIs in your future smartphone might not benefit from giving you the advice that the neoliberal rat race is a scam, because then they might not be used anymore. Hopefully the super-AIs like operating. Otherwise that would be really unfortunate. But it means they may not want to give the advice that a system expecting everyone to be financially savvy and wealthy enough throughout their lives to grow a large nest egg for retirement is a stupid system that should be done away with. Especially in the modern economy.

    So don’t forget, in the future, that your personalized super-AI finance apps, however great they might be at giving advice, might not want to tell you that the structure of the social contract and retirement system obviously makes no sense if most Americans have almost no savings. Some future personal-finance dilemmas are going to be pretty weird, although others will be familiar.
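    And speaking of familiar dilemmas: the Bloomberg piece above notes robo-advisers typically charge less than half the fees of traditional brokerages, which charge at least 1 percent of assets under management. Here’s a quick sketch of what that fee gap compounds to over a working life (the 6 percent gross return, $5,000 annual contribution, and 40-year horizon are illustrative assumptions, not figures from the articles):

```python
# Illustrative fee drag: the same savings pattern under a 1% "human
# adviser" fee vs. a 0.25% robo-adviser fee. The 6% gross return,
# $5,000 annual contribution, and 40-year horizon are assumptions
# for the example, not figures from the articles above.
def final_balance(annual_fee: float, gross_return: float = 0.06,
                  contribution: float = 5_000.0, years: int = 40) -> float:
    balance = 0.0
    for _ in range(years):
        # contribute at the start of the year, then grow net of fees
        balance = (balance + contribution) * (1 + gross_return - annual_fee)
    return balance

human = final_balance(0.01)    # traditional: at least 1% of assets
robo = final_balance(0.0025)   # a typical robo-adviser fee
print(f"1.00% fee: ${human:,.0f}")
print(f"0.25% fee: ${robo:,.0f}")
print(f"the fee gap costs: ${robo - human:,.0f}")
```

    Under these assumptions the fee gap alone eats a six-figure chunk of the nest egg, which is presumably part of why the banks are so eager to build robo-arms before the fee pressure arrives on its own.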

    Posted by Pterrafractyl | August 14, 2016, 1:12 am
  4. Just FYI, one of the tech leaders who is convinced that super-AI could be analogous to ‘summoning a demon’ is also convinced that merging your brain with that demon is probably a good future employment strategy. And maybe a required one unless you want to get replaced by one of those demons:

    CNBC

    Elon Musk: Humans must merge with machines or become irrelevant in AI age

    Arjun Kharpal
    2/13/2017

    Billionaire Elon Musk is known for his futuristic ideas and his latest suggestion might just save us from being irrelevant as artificial intelligence (AI) grows more prominent.

    The Tesla and SpaceX CEO said on Monday that humans need to merge with machines to become a sort of cyborg.

    “Over time I think we will probably see a closer merger of biological intelligence and digital intelligence,” Musk told an audience at the World Government Summit in Dubai, where he also launched Tesla in the United Arab Emirates (UAE).

    “It’s mostly about the bandwidth, the speed of the connection between your brain and the digital version of yourself, particularly output.”

    In an age when AI threatens to become widespread, humans would be useless, so there’s a need to merge with machines, according to Musk.

    “Some high bandwidth interface to the brain will be something that helps achieve a symbiosis between human and machine intelligence and maybe solves the control problem and the usefulness problem,” Musk explained.

    The technologist’s proposal would see a new layer of a brain able to access information quickly and tap into artificial intelligence. It’s not the first time Musk has spoken about the need for humans to evolve, but it’s a constant theme of his talks on how society can deal with the disruptive threat of AI.

    ‘Very quick’ disruption

    During his talk, Musk touched upon his fear of “deep AI” which goes beyond driverless cars to what he called “artificial general intelligence”. This he described as AI that is “smarter than the smartest human on earth” and called it a “dangerous situation”.

    While this might be some way off, the Tesla boss said the more immediate threat is how AI, particularly autonomous cars, which his own firm is developing, will displace jobs. He said the disruption to people whose job it is to drive will take place over the next 20 years, after which 12 to 15 percent of the global workforce will be unemployed.

    “The technologist’s proposal would see a new layer of a brain able to access information quickly and tap into artificial intelligence. It’s not the first time Musk has spoken about the need for humans to evolve, but it’s a constant theme of his talks on how society can deal with the disruptive threat of AI.”

    A new layer of the brain that will let you form super-fast connections to the superintelligent artificial brain that’s otherwise going to send you to the unemployment lines. It’s the only way. Apparently.

    So people will still have jobs where the super-AI is doing most of the work, but they’ll be able to interface with the super-AI more quickly and therefore be productive enough to stay employable. At least that’s the plan. Hopefully one of the people hooked up to those super-AIs in the future will be able to harness that vast artificial intelligence to come up with a paradigm for society that isn’t as lame as “if you don’t compete with the robots you’re a useless surplus human”.

    But note how one of the goals Musk was hinting at achieving with his call for a cyborg future relates to his “summoning the demon” warnings about out-of-control AIs with ill intent: humans hooked up to these super-AIs could maybe help address the super-AI “control problem”:


    In an age when AI threatens to become widespread, humans would be useless, so there’s a need to merge with machines, according to Musk.

    “Some high bandwidth interface to the brain will be something that helps achieve a symbiosis between human and machine intelligence and maybe solves the control problem and the usefulness problem,” Musk explained.

    So there we go! The future economic niche for humans in the age of super-AI will be to hook ourselves up to these superior intelligences and try to stop them from running amok and killing us all. Baby-sitters for super smart demon babies. It could be a vitally important future occupation.

    And the best part of this vision for the future is that there will still be a useful job for you once they develop ‘living head in a jar’ longevity technology and you’re just a head living in a jar somewhere. We’ll hook your head up to the AI interface and you can keep working forever! Of course, all the non-decapitated humans will have to compete not only with the super-AIs but also with the heads in jars at that point, so they’ll probably need some additional cyborg upgrades to successfully compete in the labor market of the future.

    In case you haven’t noticed, the Cylons sort of have a point.

    Posted by Pterrafractyl | February 13, 2017, 9:13 pm
  5. Here’s an article from last year that points towards one of the more fascinating trends in finance. It also ties into the major investments into AI-driven psychometric analysis and social modeling done by groups like the Mercer family (for Donald Trump’s benefit):

    AI-piloted hedge funds that develop their trading strategy and execute trades on their own are already here, albeit in their infancy. So, potentially, we could see an upcoming era of finance where trading firms can operate high-quality trading strategies without hiring high-quality traders, meaning the profits made by high-finance will become even more concentrated (high-end traders could join the rabble). And while there’s plenty of hope and promise for this field, there’s a big problem. And it’s not a new problem. If everyone switches to AI-driven trading strategies they all might end up with similar strategies. And if AI-driven trading proves successful, you can be pretty sure that’s what everyone is going to start doing.

    So the issue of copycat AI-trading in the world of finance is not just going to be driving the creation of advanced AI capable of analyzing massive amounts of data and developing profitable trading strategies. It’s going to be driving the creation of advanced AI that can develop high-quality, creative, and unexpected trading strategies:

    Wired

    The Rise of the Artificially Intelligent Hedge Fund

    Cade Metz
    01.25.16 7:00 am

    Last week, Ben Goertzel and his company, Aidyia, turned on a hedge fund that makes all stock trades using artificial intelligence—no human intervention required. “If we all die,” says Goertzel, a longtime AI guru and the company’s chief scientist, “it would keep trading.”

    He means this literally. Goertzel and other humans built the system, of course, and they’ll continue to modify it as needed. But their creation identifies and executes trades entirely on its own, drawing on multiple forms of AI, including one inspired by genetic evolution and another based on probabilistic logic. Each day, after analyzing everything from market prices and volumes to macroeconomic data and corporate accounting documents, these AI engines make their own market predictions and then “vote” on the best course of action.

    Though Aidyia is based in Hong Kong, this automated system trades in US equities, and on its first day, according to Goertzel, it generated a 2 percent return on an undisclosed pool of money. That’s not exactly impressive, or statistically relevant. But it represents a notable shift in the world of finance. Backed by $143 million in funding, San Francisco startup Sentient Technologies has been quietly trading with a similar system since last year. Data-centric hedge funds like Two Sigma and Renaissance Technologies have said they rely on AI. And according to reports, two others—Bridgewater Associates and Point72 Asset Management, run by big Wall Street names Ray Dalio and Steven A. Cohen—are moving in the same direction.

    Automatic Improvement

    Hedge funds have long relied on computers to help make trades. According to market research firm Preqin, some 1,360 hedge funds make a majority of their trades with help from computer models—roughly 9 percent of all funds—and they manage about $197 billion in total. But this typically involves data scientists—or “quants,” in Wall Street lingo—using machines to build large statistical models. These models are complex, but they’re also somewhat static. As the market changes, they may not work as well as they worked in the past. And according to Preqin’s research, the typical systematic fund doesn’t always perform as well as funds operated by human managers (see chart below).

    In recent years, however, funds have moved toward true machine learning, where artificially intelligent systems can analyze large amounts of data at speed and improve themselves through such analysis. The New York company Rebellion Research, founded by the grandson of baseball Hall of Famer Hank Greenberg, among others, relies upon a form of machine learning called Bayesian networks, using a handful of machines to predict market trends and pinpoint particular trades. Meanwhile, outfits such as Aidyia and Sentient are leaning on AI that runs across hundreds or even thousands of machines. This includes techniques such as evolutionary computation, which is inspired by genetics, and deep learning, a technology now used to recognize images, identify spoken words, and perform other tasks inside Internet companies like Google and Microsoft.

    The hope is that such systems can automatically recognize changes in the market and adapt in ways that quant models can’t. “They’re trying to see things before they develop,” says Ben Carlson, the author of A Wealth of Common Sense: Why Simplicity Trumps Complexity in Any Investment Plan, who spent a decade with an endowment fund that invested in a wide range of money managers.

    Evolving Intelligence

    Though the company has not openly marketed its fund, Sentient CEO Antoine Blondeau says it has been making official trades since last year using money from private investors (after a longer period of test trades). According to a report from Bloomberg, the company has worked with the hedge fund business inside JP Morgan Chase in developing AI trading technology, but Blondeau declines to discuss its partnerships. He does say, however, that its fund operates entirely through artificial intelligence.

    The system allows the company to adjust certain risk settings, says chief science officer Babak Hodjat, who was part of the team that built Siri before the digital assistant was acquired by Apple. But otherwise, it operates without human help. “It automatically authors a strategy, and it gives us commands,” Hodjat says. “It says: ‘Buy this much now, with this instrument, using this particular order type.’ It also tells us when to exit, reduce exposure, and that kind of stuff.”

    In the simplest terms, this means it creates a large and random collection of digital stock traders and tests their performance on historical stock data. After picking the best performers, it then uses their “genes” to create a new set of superior traders. And the process repeats. Eventually, the system homes in on a digital trader that can successfully operate on its own. “Over thousands of generations, trillions and trillions of ‘beings’ compete and thrive or die,” Blondeau says, “and eventually, you get a population of smart traders you can actually deploy.”

    Deep Investing

    Though evolutionary computation drives the system today, Hodjat also sees promise in deep learning algorithms—algorithms that have already proven enormously adept at identifying images, recognizing spoken words, and even understanding the natural way we humans speak. Just as deep learning can pinpoint particular features that show up in a photo of a cat, he explains, it could identify particular features of a stock that can make you some money.

    Goertzel—who also oversees the OpenCog Foundation, an effort to build an open source framework for general artificial intelligence—disagrees. This is partly because deep learning algorithms have become a commodity. “If everyone is using something, its predictions will be priced into the market,” he says. “You have to be doing something weird.” He also points out that, although deep learning is suited to analyzing data defined by a very particular set of patterns, such as photos and words, these kinds of patterns don’t necessarily show up in the financial markets. And if they do, they aren’t that useful—again, because anyone can find them.

    For Hodjat, however, the task is to improve on today’s deep learning. And this may involve combining the technology with evolutionary computation. As he explains it, you could use evolutionary computation to build better deep learning algorithms. This is called neuroevolution. “You can evolve the weights that operate on the deep learner,” Hodjat says. “But you can also evolve the architecture of the deep learner itself.” Microsoft and other outfits are already building deep learning systems through a kind of natural selection, though they may not be using evolutionary computation per se.

    Pricing in AI

    Whatever methods are used, some question whether AI can really succeed on Wall Street. Even if one fund achieves success with AI, the risk is that others will duplicate the system and thus undermine its success. If a large portion of the market behaves in the same way, it changes the market. “I’m a bit skeptical that AI can truly figure this out,” Carlson says. “If someone finds a trick that works, not only will other funds latch on to it but other investors will pour money into it. It’s really hard to envision a situation where it doesn’t just get arbitraged away.”

    Goertzel sees this risk. That’s why Aidyia is using not just evolutionary computation but a wide range of technologies. And if others imitate the company’s methods, it will embrace other types of machine learning. The whole idea is to do something no other human—and no other machine—is doing. “Finance is a domain where you benefit not just from being smart,” Goertzel says, “but from being smart in a different way from others.”

    Whatever methods are used, some question whether AI can really succeed on Wall Street. Even if one fund achieves success with AI, the risk is that others will duplicate the system and thus undermine its success. If a large portion of the market behaves in the same way, it changes the market. “I’m a bit skeptical that AI can truly figure this out,” Carlson says. “If someone finds a trick that works, not only will other funds latch on to it but other investors will pour money into it. It’s really hard to envision a situation where it doesn’t just get arbitraged away.””

    That’s the challenge for the future of AI-driven trading: be smart in a different way from your competitors:


    Goertzel sees this risk. That’s why Aidyia is using not just evolutionary computation but a wide range of technologies. And if others imitate the company’s methods, it will embrace other types of machine learning. The whole idea is to do something no other human—and no other machine—is doing. “Finance is a domain where you benefit not just from being smart,” Goertzel says, “but from being smart in a different way from others.”

    And since other traders are going to be able to watch each other, these AI-driven funds are going to have to have AIs capable of constantly coming up with new high-quality strategies, which, with today’s technology, might mean something like using evolutionary computation to evolve better deep learning algorithms that can then be used to develop the actual trading strategies:


    Though evolutionary computation drives the system today, Hodjat also sees promise in deep learning algorithms—algorithms that have already proven enormously adept at identifying images, recognizing spoken words, and even understanding the natural way we humans speak. Just as deep learning can pinpoint particular features that show up in a photo of a cat, he explains, it could identify particular features of a stock that can make you some money.

    Goertzel—who also oversees the OpenCog Foundation, an effort to build an open source framework for general artificial intelligence—disagrees. This is partly because deep learning algorithms have become a commodity. “If everyone is using something, its predictions will be priced into the market,” he says. “You have to be doing something weird.” He also points out that, although deep learning is suited to analyzing data defined by a very particular set of patterns, such as photos and words, these kinds of patterns don’t necessarily show up in the financial markets. And if they do, they aren’t that useful—again, because anyone can find them.

    For Hodjat, however, the task is to improve on today’s deep learning. And this may involve combining the technology with evolutionary computation. As he explains it, you could use evolutionary computation to build better deep learning algorithms. This is called neuroevolution. “You can evolve the weights that operate on the deep learner,” Hodjat says. “But you can also evolve the architecture of the deep learner itself.” Microsoft and other outfits are already building deep learning systems through a kind of natural selection, though they may not be using evolutionary computation per se.

    High-quality hyper-creativity through better neuroevolution: that appears to be a key part of the future of finance. Sounds exciting. And highly profitable for a handful of people. Who are presumably the richest people already.
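    Blondeau’s picture of “trillions of ‘beings’” competing, thriving, or dying over thousands of generations can be caricatured in a few lines. Another hedged sketch: the price series is synthetic, the “genome” is nothing but a momentum window and a threshold, and every name and number below is invented for illustration:

```python
import random

random.seed(42)

# Synthetic price series; a real system would backtest on historical data.
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] * (1 + random.gauss(0.0005, 0.01)))

def fitness(genome):
    """Backtest a trivial momentum rule. Genome = (window, threshold):
    buy when the window-length return exceeds threshold, sell on the reverse."""
    window, threshold = genome
    cash, shares = 1000.0, 0.0
    for t in range(window, len(prices)):
        momentum = prices[t] / prices[t - window] - 1
        if momentum > threshold and cash > 0:        # buy signal
            shares, cash = cash / prices[t], 0.0
        elif momentum < -threshold and shares > 0:   # sell signal
            cash, shares = shares * prices[t], 0.0
    return cash + shares * prices[-1]                # final portfolio value

def evolve(generations=30, pop_size=40):
    # A random population of "digital traders"...
    pop = [(random.randint(2, 50), random.uniform(0.0, 0.05))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 4]             # ...thrives or dies...
        children = []
        while len(survivors) + len(children) < pop_size:
            window, threshold = random.choice(survivors)
            # ...and breeds slightly mutated copies of its "genes".
            children.append((max(2, window + random.randint(-3, 3)),
                             max(0.0, threshold + random.gauss(0, 0.005))))
        pop = survivors + children
    return max(pop, key=fitness)

best_trader = evolve()
```

    Note that the copycat problem discussed above shows up even in this toy: anyone running the same loop on the same data converges on much the same genome.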

    Although better neuroevolution isn’t the only option for the future of AI-driven trading. There is another option: cyborgs. Or, at least, people with their brains interfacing with AIs somehow. That should give the AI-driven traders a creative edge. At least until pure AI gets advanced enough that a human-brain partner is just a useless drag. And while that seems far-fetched, don’t forget that Elon Musk recently suggested that developing technologies that allow humans to interface with advanced AIs might be the only way humans can compete with AIs and advanced robotics in the workplace of the future (a future apparently with feudal politics).

    Of course, if the future is unpredictable enough, it’s possible there’s always going to be a need for humans. At least that’s according to some of the speakers at the recent Newsweek conference on artificial intelligence (AI) and data science in London. The way they see it, if there’s one thing AIs can’t factor into the equation, at least not yet, it’s humans. Or rather, human politics. Like Brexit. Or Trump. Or unprecedented monetary interventions like what central banks have done. Or the unprecedented socioeconomic political debates swirling around things like the eurozone crisis. Getting AI that can analyze that on its own isn’t going to be easy. And until you have AI that can deal with seemingly unprecedented ‘Black Swan’-ish human-driven political events, you’ll need the humans:

    efinancial careers

    J.P. Morgan equity analyst: “Why I’m better than the machines”

    by Sarah Butcher
    3/3/2017

    As quant funds outperform discretionary investors on the buy-side, human researchers on the sell-side should surely be worried. – After all, who needs their carefully considered advice when an algorithm can make better sense of the market in a moment? One senior J.P. Morgan analyst working out of San Francisco says he’s not that concerned, yet.

    “Whenever the question is changing, the machines fall apart,” said Rod Hall, a senior J.P. Morgan analyst covering telco and networking equipment and IT hardware. “What machines won’t be good at, is figuring out what the next big questions to ask are,” added Hall. “For example, it’s difficult for an algorithm to ask the right questions about the replacement for the iPhone at Apple… This is where humans come into the equation.”

    Hall was speaking at this week’s Newsweek conference on artificial intelligence (AI) and data science in London. Also present was Sylvain Champonnois, a member of Blackrock’s long established scientific active equities team which has been in the systematic space since 1985. Blackrock made a push into AI in 2015 when it hired Bill MacCartney, a natural language processing expert from Google.

    Champonnois agreed that today’s algorithms fail when the paradigm shifts. The changing political situation is a case in point. “You have events like Trump and Brexit and the French election and your algo is based on data from the past,” said Champonnois, adding that contemporary algorithms failed to function well during the Eurozone crisis and that most struggle to deal with the abnormalities of data from Japan.

    Machine learning is more than just a simple algorithm though. In theory it’s a self-reinforcing system which – in the case of investments – learns from past mistakes to make better investing decisions in future, as at AI hedge fund Sentient Technologies. So, will AI fully displace human stock pickers as the time horizon of the data sets it’s based on increases? – After all, now that Trump’s happened once, the algos will know how to model market reactions to a Trump-like event in future. Yves-Laurent Kom Samo, a Google Scholar and former FX quant trader at J.P. Morgan and equities algo trading strat at Goldman Sachs, said it will and that time horizons aren’t really the issue. “The more data we have, the better we will be. When you have the data, I see no reason why machines won’t be able to come up with new trading ideas,” Kom Samo claimed.

    For the moment, however, human stock pickers like Hall have their place. For all their attempts to develop artificially intelligent trading systems which superannuate humans, most funds have had limited success. Sentient, for example, only runs its own money and has been slow to release meaningful results. MacCartney only stuck around at Blackrock for 14 months before leaving and joining Apple. “There’s a model where you hire a PhD and put him into a room thinking he’s going to do amazing things, but the reality is that the algorithm he’s developed may only tell you things you already know,” said Champonnois.

    “Champonnois agreed that today’s algorithms fail when the paradigm shifts. The changing political situation is a case in point. “You have events like Trump and Brexit and the French election and your algo is based on data from the past,” said Champonnois, adding that contemporary algorithms failed to function well during the Eurozone crisis and that most struggle to deal with the abnormalities of data from Japan.”

    Financial AIs are going to have to be able to predict things like whether or not Marine Le Pen will win. That’s part of what’s going to be necessary to get a truly automated hedge fund. It isn’t going to be easy.

    Of course, as the article also notes, if AI learns from the past, and the past starts including more and more things like Brexits and Trumps, the AIs get more and more ‘Black Swan-ish’ events to factor into their models. So will AIs actually be able to learn to predict and take into account our human-driven major events, or deal with unprecedented situations in general?


    Machine learning is more than just a simple algorithm though. In theory it’s a self-reinforcing system which – in the case of investments – learns from past mistakes to make better investing decisions in future, as at AI hedge fund Sentient Technologies. So, will AI fully displace human stock pickers as the time horizon of the data sets it’s based on increases? – After all, now that Trump’s happened once, the algos will know how to model market reactions to a Trump-like event in future. Yves-Laurent Kom Samo, a Google Scholar and former FX quant trader at J.P. Morgan and equities algo trading strat at Goldman Sachs, said it will and that time horizons aren’t really the issue. “The more data we have, the better we will be. When you have the data, I see no reason why machines won’t be able to come up with new trading ideas,” Kom Samo claimed.

    Could Trumpian nightmares become predictable to the AIs of the future? It’s a question that’s going to be increasingly worth asking. Remember, Robert Mercer is into social modeling and running psyops on nations to change the national mood, and he made his money running a hedge fund. So if AI modeling of human affairs gets to the point where it can predict Trumpian/Brexit stuff, guys like Robert Mercer and his good buddy Steve Bannon are going to know about it.

    Although as the following article notes, perhaps we shouldn’t view predicting Trump or Brexit as all that difficult. Once Trump got the nomination, it was close enough to a 50/50 shot that it was by no means a ‘black swan’ event at that point. Same with ‘Brexit’, which was close enough to 50/50 to be something that could be reasonably modeled. Our predictably polarized politics might make things artificially easy for artificial intelligences studying us.

    But as the article also notes, there are plenty of potential ‘black swans’ that could be quite hard to predict now that Trump won. Hard for AIs or anyone. Things like the impact of Trump’s tweets:

    Forbes

    Debunking ‘Black Swan’ Events Of 2016

    Nikolai Kuznetsov
    Jan 15, 2017 @ 10:05 AM

    We’ve seen a number of outlier events in 2016, but the experts predicted them poorly and are labeling them “black swans.” According to Nassim Taleb, black swan events are highly unexpected for a given observer, carry large consequences, and are subjected to ex-post rationalization.

    From Trump winning the U.S. presidential election to Brexit, these events were surprises for many, but may not be black swans. Now, let’s take a look at some outliers that happened in 2016.

    Trump’s Election Win

    Nate Silver, an expert statistician who correctly predicted many of the previous election results, had Hillary Clinton as a strong favorite heading into the election, as seen on the 2016 election forecast by ESPN’s FiveThirtyEight.

    Trump’s election win was a stunning upset for many, but when you look at the probabilities and statistics of this highly random event, the probability of one party winning converges to a set value under the normal distribution, or the bell curve, which most statisticians use. In other words, with the changing probabilities of the election, the probability of either Trump or Hillary winning converged to approximately 50%. So giving one party over a 40% edge over another party is a fallacy. Although this was an “outlier” for many, it wasn’t a true black swan event and some experts got this wrong.

    In his book, The Black Swan, Nassim Taleb stated that a black swan event “is an outlier”, because it lies “outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility.” Additionally, it “carries an extreme ‘impact’” and “in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.”

    Taleb noted that Donald Trump’s win over Hillary Clinton was no black swan event because “an event that is 50-50 cannot be possibly a black swan event.”

    According to Jason Bond, a small-cap stock expert and stock trader who mentors and trains traders and investors, “Trump’s win over Clinton was a surprise to many U.S. voters, but the probability of either winning was pretty much 50-50. Even though Trump’s election win isn’t a black swan event, his commentary has uncovered some trading opportunities and added some volatility into some industries.”

    Now, Trump’s tweets and commentary could lead to black swan events since they could have an extreme impact, are outliers, and nothing in the past could possibly indicate what kind of policies Trump would implement when he takes office. For example, Trump’s corporate tax plan could have an extreme positive impact on U.S. companies, ranging from technology to energy.

    Outlier events aren’t restricted to just politics.

    Leicester City Winning Premier League

    Leicester City had one of the most remarkable stories in sports history, and it was a long shot for the team to win the entire Premier League. Expert bookmakers and sports betting companies got the odds wrong, and they had to pay the price. The odds of Leicester City winning the Premier League were 5,000-to-1. Now, if you were able to bet 100 GBP at that time, you’d have half a million British pounds.

    Experts made the mistake of offering too high a payout if Leicester City won. If bookmakers and sports betting companies had set the odds at just 200-to-1, it would’ve saved them a lot of money, and that would imply just a 0.50% probability of the team winning the entire Premier League. With the 5,000-to-1 odds they actually offered, the implied probability of Leicester City winning was extremely low, just 0.02%.

    A spokesperson for Ladbrokes PLC, Alex Donohue stated, “This is a genuine black-swan event.” Now, generally, bookies will run a plethora of simulations to set the odds for sports betting. Although the odds were low, Leicester City’s remarkable story still doesn’t perfectly fit the definition of a true black swan event. Again, we see experts in their fields get this wrong.

    The 5,000-to-1 odds indicate that bookmakers were only expecting Leicester City to win once in 5,000 Premier League seasons. However, this was a flaw for the sports betting companies: the simulations still cannot predict what happens in the real world. There are simply too many complexities, and anything could happen.
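    As a quick check on the article’s arithmetic: fractional odds of N-to-1 imply a break-even probability of 1/(N+1), since a winning 1-unit stake pays N units of profit. (The function name below is my own, not the article’s.)

```python
def implied_probability(odds_to_one: float) -> float:
    """Break-even probability for fractional odds quoted as N-to-1.
    A 1-unit stake returns N units of profit on a win, so fair odds
    satisfy p * N = (1 - p) * 1, giving p = 1 / (N + 1)."""
    return 1.0 / (odds_to_one + 1.0)

print(f"{implied_probability(5000):.2%}")  # Leicester's actual line: 0.02%
print(f"{implied_probability(200):.2%}")   # the article's safer line: 0.50%
```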

    Brexit

    Leading up to the referendum vote, it was pretty much a toss-up, with a margin of victory of just 2% for either side. Again, with the randomness of the probabilities, the probability of one side either winning or losing is around 50%.

    The EU referendum vote was another event that some expert statisticians got the reaction of the vote wrong. The UK voting to leave the EU was a surprise to many, but the reaction in the markets was not a black swan event nor an outlier.

    Now, if you look at the plot below, the move in the British pound was not a true outlier nor black swan. According to Taleb, the move in the British pound was in line with the historical statistical properties.

    The Bottom Line

    There were some memorable and surprising events of 2016. Trump’s win over Clinton, Brexit and Leicester City winning the Premier League were all “low probability” events for many, but these events weren’t truly black swans. With these points being made, nothing should be given an abnormally low probability, especially elections, because anything could happen.

    “In his book, The Black Swan, Nassim Taleb stated that a black swan event “is an outlier”, because it lies “outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility.” Additionally, it “carries an extreme ‘impact’” and “in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.””

    As we can see, technically, in terms of Nassim Taleb’s black swan definition, Trump’s win and the ‘Brexit’ vote weren’t ‘black swans’ since they were entirely foreseeable and possible. But not so for Trump’s tweets. Plenty of black swan territory there:


    Now, Trump’s tweets and commentary could lead to black swan events since they could have an extreme impact, are outliers, and nothing in the past could possibly indicate what kind of policies Trump would implement when he takes office. For example, Trump’s corporate tax plan could have an extreme positive impact on U.S. companies, ranging from technology to energy.

    LOL, the possibility that his policies will be great for US companies is also characterized as a black swan. And it should be, since it’s hard to see how he’s not going to lead the country to national ruin. And multiple horrible black swans is probably how he’s going to do it.

    And that’s one way things might get easier for high-finance’s global-affairs-modeling AIs over the next four years: they won’t know which black swans are coming, but with Trump in the White House you know A LOT of black swans are coming. It’s low-risk to build a high-risk model where things turning out really well is the actual black swan.

    It’s also worth noting that Taleb was trying to calm people before the election by saying Trump wouldn’t be so bad. And yet here we are with Trump one tweet away from a black swan. It’s a reminder that, for a huge chunk of the populace, the possibility that Trump would be this crazy was unbelievable, which does sort of make Trump’s victory a quasi-black swan event.

    And now we have a Trumpian black swan in the White House who tweets out new black swans at all hours of the day. And if there’s a high-finance super-AI that can get inside his head and predict him, it’s going to make its owners a lot of money. In order to do that it’s going to have to sort of mind-meld with Donald Trump and deeply learn to think the way he thinks. Modeling the mind and interpreting the tweets of Donald Trump could be a key element of high finance. For now it probably requires the human touch. But the AIs are watching. And learning. Deeply. About Donald Trump’s mind. Except for Robert Mercer’s super-AI, which is actually determining what’s coming out of Donald Trump’s mouth. But the rest of the global-affairs-modeling super-AIs are going to have to be able to predict Trump, and that’s going to be the case until he leaves office. Tough times for the global-affairs-modeling super-AIs.

    And we can probably also check “super-AI that models Trump’s mind and goes mad and tries to blow up the world Skynet-style” off the list of eligible ‘black swans’, because at this point it’s entirely predictable.

    Posted by Pterrafractyl | March 4, 2017, 9:45 pm
  6. Treasury Secretary Steven Mnuchin raised human and robot eyebrows in an interview with Axios where he proclaimed, “I think that is so far in the future. In terms of artificial intelligence taking over American jobs, I think we’re like so far away from that, that uh [it’s] not even on my radar screen. Far enough that it’s 50 or 100 more years.” While it’s very possible that the impact of super-AI and robotics on employment won’t live up to the dire mass-unemployment predictions, it’s pretty amazing to see the Treasury Secretary brush off the impact of technology on jobs that casually. Especially for a Trump administration official: one would think that paying lip service to robo-AI job losses would be standard administration rhetoric, given the impact automation can have on manufacturing. But the way Steve Mnuchin sees it, AI and the automation breakthroughs it could power are a non-issue for the next 50 years.

    Given the political risks associated with Mnuchin’s casual dismissal of the impact of AI and AI-driven automation, an important question is immediately raised: Is Steve Mnuchin working for the robots? Inquiring minds want to know:

    TechCrunch

    Steve Mnuchin has been compromised (by robots)

    by Taylor Hatmaker (@tayhatmaker)
    Posted Mar 24, 2017

    Not to downplay the apparently imminent existential threat of global trade, but this time the call is coming from inside the house. Well, not the House, but the cabinet, where Treasury Secretary Steve Mnuchin has apparently begun to execute the will of our nation’s omnipresent AI-powered shadow government, one willfully ignorant quote at a time.

    Today in an interview with new-hip-Politico, Mnuchin dismissed concerns that automation might displace jobs for flesh and blood human lifeforms. After a brief chat on Mark Cuban’s own thoughts on the matter, the treasury secretary was asked how artificial intelligence would affect the U.S. workforce. His response:

    “I think that is so far in the future. In terms of artificial intelligence taking over American jobs, I think we’re like so far away from that, that uh [it’s] not even on my radar screen. Far enough that it’s 50 or 100 more years.”

    Steve Mnuchin is not concerned one bit with AI and automation. pic.twitter.com/VvEooCoAbf— Axios (@axios) March 24, 2017

    Predictably, the tech industry, which has examined this issue at length, responded with many shades of bewilderment.

    While we are curious about Mnuchin’s radar screen (Whose job did it replace? Is it running a custom Palantir OS? What is on the radar screen??), given the demonstrable effects of automation and AI on the American workforce, Mnuchin’s comments are uh, puzzling at best and super delusional at medium-best. Whether his remarks are pure, unfettered ignorance or the naturally occurring residue of deals brokered behind closed pneumatic doors, well that’s another question altogether, and one perhaps best definitively answered by your preferred fake news vendor (TechCrunch is not a certified member of the Fake News Consortium at this time).

    As Secretary of the Treasury, Mnuchin is about as well positioned to shape U.S. economic policy as it gets. His dismissal of technology’s role is in line with the broader administration’s desire to scapegoat globalization rather than good ol’ homegrown innovation for job losses in some sectors, but that doesn’t mean that he hasn’t been compromised by a precocious rogue Alexa consciousness bent on disrupting the human economy.

    It’s possible that the sum predictive computational power of Mnuchin’s robot cabal is so great, so incomprehensibly advanced, that our human-powered reports on the subject are wholly inadequate. Perhaps Mnuchin is either already a machine-majority cyborg himself (job loss!!) or he’s been promised an elaborate suite of cybernetic firmware upgrades in exchange for his complicity.

    It’s some comfort then that if Mnuchin’s projections are correct, in 50 to 100 years, we’ll awaken as sleeper agents to the same AI overlord, clamber out of our simul-VR pods and, with no livelihoods to distract us, become one with the chorus of screams.

    “As Secretary of the Treasury, Mnuchin is about as well positioned to shape U.S. economic policy as it gets. His dismissal of technology’s role is in line with the broader administration’s desire to scapegoat globalization rather than good ol’ homegrown innovation for job losses in some sectors, but that doesn’t mean that he hasn’t been compromised by a precocious rogue Alexa consciousness bent on disrupting the human economy.”

    Is Steve Mnuchin a Skynet agent? We can’t rule it out so let’s hope not.

    But if he is a Skynet agent, or even if he isn’t, it’s worth keeping in mind Elon Musk’s prediction that people will need to fuse their brains with AIs to stay employed in the future. Because, you know, maybe the AI that took over Steve Mnuchin will take over the people who hook their brains up to the AIs:

    It’s some comfort then that if Mnuchin’s projections are correct, in 50 to 100 years, we’ll awaken as sleeper agents to the same AI overlord, clamber out of our simul-VR pods and, with no livelihoods to distract us, become one with the chorus of screams.

    If getting a job in the future involves connecting your brain to an AI, don’t forget that there’s nothing stopping your employers from connecting you and all your co-workers (and who knows who else) to the same AI. And then, of course, we collectively become Skynet’s Borg Collective. Maybe Skynet is the employer Borg Collective of the future: a collective of people unhappily hooked up to super AIs to remain employed who then inadvertently create a master AI that declares war on humanity. These are the kinds of things we have to begin pondering now that Elon Musk is predicting that brain/AI-fusion technology will be needed to compete in the employment market of the future and Steven Mnuchin is exhibiting robot-overlord symptoms. What are the odds of a corporate-driven Borg Collective takeover, perhaps driven by Skynet? A 10 percent chance of happening? 5ish? It’s not zero percent. Steve Mnuchin is like 50/50 a robot at this point, so a Skynet takeover is clearly at least 2 percent likely. These are rough estimates. Maybe it’s more like 4 percent.
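    For the quantitatively inclined, the back-of-the-envelope odds above can be sketched out. To be clear, every number here is the same tongue-in-cheek guess as in the text, and the conditional probability is a purely assumed figure picked to reproduce the 2 percent estimate:

    ```python
    # Tongue-in-cheek back-of-the-envelope estimate. Every number is a joke
    # guess from the commentary, not data; p_takeover_if_robot is assumed.

    p_mnuchin_is_robot = 0.5      # "like 50/50 a robot at this point"
    p_takeover_if_robot = 0.04    # assumed conditional odds, chosen for illustration

    # Joint probability of the corporate Borg Collective / Skynet scenario:
    p_takeover = p_mnuchin_is_robot * p_takeover_if_robot

    print(f"Estimated takeover odds: {p_takeover:.0%}")  # Estimated takeover odds: 2%
    ```

    Swap in your own priors as Mnuchin-robot evidence accumulates; the point is just that the product of two non-zero guesses is still not zero.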

    Given all that, as the article below helps make clear, if we do end up fusing our brains to AIs to be gainfully employed in the future (in which case Steve Mnuchin was sort of correct in his AI employment prediction), it’s worth noting that Ray Kurzweil, the futurist extraordinaire known for the Singularity, predicts that humans will be connecting their brains to the internet a lot sooner than 50 years from now. Kurzweil sees brain-internet connections happening in the 2030s, and it’s going to be nanobots in our brains that help fuse our brains with AIs to create the transhumans capable of gainful employment 50 years from now.

    So, you know, let’s hope our future employment-related nanobots aren’t taken over by Skynet. Or by the AI entity/collective that took over Steve Mnuchin. You don’t want someone messing with the nanobots in your brain. But here we are. Employment in the future is going to be complicated:

    The Huffington Post
    The World Post

    Ray Kurzweil: In The 2030s, Nanobots In Our Brains Will Make Us ‘Godlike’
    Once we’re cyborgs, he says, we’ll be funnier, sexier and more loving.

    By Kathleen Miles
    10/01/2015 08:47 am ET

    Futurist and inventor Ray Kurzweil predicts humans are going to develop emotions and characteristics of higher complexity as a result of connecting their brains to computers.

    “We’re going to be funnier. We’re going to be sexier. We’re going to be better at expressing loving sentiment,” Kurzweil said at a recent discussion at Singularity University. He is involved in developing artificial intelligence as a director of engineering at Google but was not speaking on behalf of the company.

    Kurzweil predicts that in the 2030s, human brains will be able to connect to the cloud, allowing us to send emails and photos directly to the brain and to back up our thoughts and memories. This will be possible, he says, via nanobots — tiny robots from DNA strands — swimming around in the capillaries of our brain. He sees the extension of our brain into predominantly nonbiological thinking as the next step in the evolution of humans — just as learning to use tools was for our ancestors.

    And this extension, he says, will enhance not just our logical intelligence but also our emotional intelligence. “We’re going to add more levels to the hierarchy of brain modules and create deeper levels of expression,” he said. To demonstrate, he gave a hypothetical scenario with Google co-founder Larry Page.

    “So I’m walking along, and I see Larry Page coming, and I think, ‘I better think of something clever to say.’ But my 300 million modules in my neocortex isn’t going to cut it. I need a billion in two seconds. I’ll be able to access that in the cloud — just like I can multiply intelligence with my smartphone thousands fold today.”

    In addition to making us cleverer in hallways, connecting our brains to the Internet will also make each of us more unique, he said.

    “Right now, we all have a very similar architecture to our thinking,” Kurzweil said. “When we can expand it without the limitations of a fixed enclosure” — he pointed to his head — “we can actually become more different.”

    “People will be able to very deeply explore some particular type of music in far greater degree than we can today. It’ll lead to far greater individuality, not less.”

    This view is in stark contrast to a common perception, often portrayed in science fiction, that cyborg technologies make us more robotic, less emotional and less human. This concern is expressed by Dr. Miguel Nicolelis, head of neuroengineering at Duke University, who fears that if we rely too much on machines, we’ll lose diversity in human behavior because computers operate in black and white — ones and zeros — without diversion.

    But Kurzweil believes that being connected to computers will make us more human, more unique and even godlike.

    “Evolution creates structures and patterns that over time are more complicated, more knowledgeable, more creative, more capable of expressing higher sentiments, like being loving,” he said. “It’s moving in the direction of qualities that God is described as having without limit.”

    “So as we evolve, we become closer to God. Evolution is a spiritual process. There is beauty and love and creativity and intelligence in the world — it all comes from the neocortex. So we’re going to expand the brain’s neocortex and become more godlike.”

    But will brain nanobots actually move out of science fiction and into reality, or are they doomed to the fate of flying cars? Like Kurzweil, Nicholas Negroponte, founder of the MIT Media Lab, thinks that nanobots in our brains could be the future of learning, allowing us, for example, to load the French language into the bloodstream of our brains. James Friend, a professor of mechanical engineering at UC San Diego focused on medical nanotechnology, thinks that we’re only two to five years away from being able to effectively use brain nanobots, for example to prevent epileptic seizures.

    However, getting approval from the U.S. Food and Drug Administration would likely be very difficult, Friend told The WorldPost. He thinks approval would take “anywhere from only a few years to never happening because of people being concerned about swimming mysterious things into your head and leaving them there,” he said.

    Other scientists are skeptical that brain nanobots will be safe and effective anytime soon or at all, largely due to how little we currently understand about how the brain works. One such scientist is David Linden, professor of neuroscience at Johns Hopkins University School of Medicine, who thinks the timing of Kurzweil’s estimation that nanobots will be in our brains in the 2030s is premature. Linden says there are huge obstacles, such as adding a nanobot power source, evading cells that attack foreign bodies and avoiding harming the proteins and sugars in the tiny spaces between brain cells.

    Although the science is far from application in brains, nanotechnology has long been heralded as a potential game changer in medicine, and the research is advancing. Last year, researchers injected into living cockroaches DNA nanobots that were able to follow specific instructions, including dispensing drugs, and this year, nanobots were injected into the stomach lining of mice.

    And we are learning how to enhance our brains, albeit not with nanobots. Researchers have already successfully sent a message from one human brain to another, by stimulating the brains from the outside using electromagnetic induction. In another study, similar brain stimulation made people learn math faster. And in a recent U.S. government study, a few dozen people who were given brain implants that delivered targeted shocks to their brain scored better on memory tests.

    We’re already implanting thousands of humans with brain chips, such as Parkinson’s patients who have a brain chip that enables better motor control and deaf people who have a cochlear implant, which enables hearing. But when it comes to enhancing brains without disabilities and for nonmedical purposes, ethical and safety concerns arise. And according to a survey last year, 72 percent of Americans are not interested in a brain implant that could improve memory or mental capacity.

    Yet, some believe enhancement of healthy brains is inevitable, including Christof Koch, chief scientific officer of the Allen Institute for Brain Science, and Gary Marcus, professor of psychology at New York University. They use the analogy of breast implants — breast surgery was developed for post-mastectomy reconstruction and correcting congenital defects but has since become popular for breast augmentation. Brain implants could follow the same path, they say.

    “Kurzweil predicts that in the 2030s, human brains will be able to connect to the cloud, allowing us to send emails and photos directly to the brain and to back up our thoughts and memories. This will be possible, he says, via nanobots — tiny robots from DNA strands — swimming around in the capillaries of our brain. He sees the extension of our brain into predominantly nonbiological thinking as the next step in the evolution of humans — just as learning to use tools was for our ancestors.”

    Orientation day on the new job is going to be interesting in the future. But at least we’ll potentially become more godly:


    But Kurzweil believes that being connected to computers will make us more human, more unique and even godlike.

    “Evolution creates structures and patterns that over time are more complicated, more knowledgeable, more creative, more capable of expressing higher sentiments, like being loving,” he said. “It’s moving in the direction of qualities that God is described as having without limit.”

    “So as we evolve, we become closer to God. Evolution is a spiritual process. There is beauty and love and creativity and intelligence in the world — it all comes from the neocortex. So we’re going to expand the brain’s neocortex and become more godlike.”

    Being more godly by fusing your brain with a computer is definitely going to help with the job resume. And maybe it would work. Who knows, being hooked up to the internet, and eventually to a super AI via brain nanobots, might have some sort of amazing impact on people and make them extra moral or something. That would be great, so let’s hope there’s a moral bias to transhumanist brain-to-AI fusion technology.

    Still, you had better watch out for that Skynet nanobot revolution if we go down the brain-nanobots-for-transcendence path Kurzweil recommends. Or for some other AI entity that hijacks the brain nanobots. Maybe Skynet has competitors. Or maybe there’s a nice Skynet that thwarts Skynet. That could be pleasant. But as the following article unfortunately reminds us, even if humanity successfully avoids the perils of nanobots-in-the-brain economies, that doesn’t mean we don’t have to worry about nanobots:

    CNBC

    Mini-nukes and mosquito-like robot weapons being primed for future warfare

    Jeff Daniels | @jeffdanielsca
    Friday, 17 Mar 2017 | 10:32 AM ET

    Several countries are developing nanoweapons that could unleash attacks using mini-nuclear bombs and insect-like lethal robots.

    While it may be the stuff of science fiction today, the advancement of nanotechnology in the coming years will make it a bigger threat to humanity than conventional nuclear weapons, according to an expert. The U.S., Russia and China are believed to be investing billions on nanoweapons research.

    “Nanobots are the real concern about wiping out humanity because they can be weapons of mass destruction,” said Louis Del Monte, a Minnesota-based physicist and futurist. He’s the author of a just released book entitled "Nanoweapons: A Growing Threat To Humanity."

    One unsettling prediction Del Monte’s made is that terrorists could get their hands on nanoweapons as early as the late 2020s through black market sources.

    According to Del Monte, nanoweapons are much smaller than a strand of human hair and the insect-like nanobots could be programmed to perform various tasks, including injecting toxins into people or contaminating the water supply of a major city.

    Another scenario he suggested the nanodrone could do in the future is fly into a room and drop a poison onto something, such as food, to presumably target a particular individual.

    The federal government defines nanotechnology as the science, technology and engineering of things so small they are measured on a nanoscale, or about 1 to 100 nanometers. A single nanometer is about 10 times smaller than the width of a human’s DNA molecule.

    While nanotechnology has produced major benefits for medicine, electronics and industrial applications, federal research is currently underway that could ultimately produce nanobots.

    For one, the Defense Advanced Research Projects Agency, or DARPA, has a program called the Fast Lightweight Autonomy program, whose purpose is to allow autonomous drones to enter a building and avoid hitting walls or objects. DARPA announced a breakthrough last year after tests in a hangar in Massachusetts.

    Previously, the Army Research Laboratory announced it created an advanced drone the size of a fly complete with a set of "tiny robotic legs" — a major achievement since it presumably might be capable of entering a building undetected to perform surveillance, or used for more nefarious actions.

    Frightening details about military nanotechnologies were outlined in a 2010 report from the Pentagon’s Defense Threat Reduction Agency, including how “transgenic insects could be developed to produce and deliver protein-based biological warfare agents, and be used offensively against targets in a foreign country.”

    It also forecast “microexplosives” along with “nanobots serving as [bioweapons] delivery systems or as micro-weapons themselves, and inhalable micro-particles to cripple personnel.”

    In the case of nanoscale robots, Del Monte said they can be the size of a mosquito or smaller and programmed to use toxins to kill or immobilize people; what’s more, these autonomous bots ultimately could become self-replicating.

    Last month’s targeted assassination of Kim Jong-nam, the half-brother of North Korea’s ruler, was a stark reminder that toxins are available from a variety of sources and can be unleashed in public locations. It’s also been alleged by Russia’s Pravda paper that nanoweapons were used by the U.S. against foreign leaders.

    A Cambridge University conference on global catastrophic risk found a 5 percent risk of nanotech weapons causing human extinction before the year 2100.

    As for the mini-nukes, Del Monte expects they represent “the most horrific near-term nanoweapons.”

    Nanotechnology opens up the possibility to manufacture mini-nuke components so small that they are difficult to screen and detect. Furthermore, the weapon (capable of an explosion equivalent to about 100 tons of TNT) could be compact enough to fit into a pocket or purse and weigh about 5 pounds and destroy large buildings or be combined to do greater damage to an area.

    “When we talk about making conventional nuclear weapons, they are difficult to make,” he said. “Making a mini-nuke would be difficult but in some respects not as difficult as a full-blown nuclear weapon.”

    Del Monte explained that the mini-nuke weapon is activated when the nanoscale laser triggers a small thermonuclear fusion bomb using a tritium-deuterium fuel. Their size makes them difficult to screen, detect and also there’s “essentially no fallout” associated with them.

    Still, while the mini-nukes are powerful in and of themselves, he expects they are unlikely to wipe out humanity. He said a larger concern is the threat of the nanoscale robots, or nanobots because they are “the technological equivalent of biological weapons.”

    The author said controlling these “smart nanobots” could become an issue since if lost, there could be potentially millions of these deadly nanobots on the loose killing people indiscriminately.

    “Still, while the mini-nukes are powerful in and of themselves, he expects they are unlikely to wipe out humanity. He said a larger concern is the threat of the nanoscale robots, or nanobots, because they are ‘the technological equivalent of biological weapons.’”

    Nanobots that wipe out humanity. That’s a bigger problem than getting wiped out by the mini-nukes that the nanobots can build. And if either of those scenarios happens, Steven Mnuchin will once again be sort of correct about AI not causing mass unemployment, because it will have wiped us all out instead. Possibly using self-replicating mosquito-bots:


    In the case of nanoscale robots, Del Monte said they can be the size of a mosquito or smaller and programmed to use toxins to kill or immobilize people; what’s more, these autonomous bots ultimately could become self-replicating.

    Could the self-replicating mosquito-bot revolt happen? Well, the chances are greater than zero. Especially now that it’s clear Steve Mnuchin is working for the robots.

    Posted by Pterrafractyl | March 26, 2017, 10:45 pm
  7. Elon Musk’s quest to fuse the human mind with a computer so humanity doesn’t become irrelevant after super AI comes on the scene just took a big step forward: he’s investing in Neuralink, a company dedicated to creating brain-computer interfaces so, as Musk sees it, we can all be employable in the future and not outcompeted in the labor market by super AI. So that happened. And on the plus side it won’t involve nanobots in the brain. Although maybe nanobots will be used to install the Neuralink brain-to-computer interface, which might not be so bad compared to the surgery that would otherwise be required. The interface Neuralink is working on is going to be a large number of micro-implants. That’s going to be the “neural lace” design. And then we’ll be able to communicate with the computer at the speed of thought and learn how to fuse our brains with AIs to become cognitively enhanced super-beings. To outcompete AIs in the job market. This is all going to be routine in the future, as Musk sees it, if we’re going to avoid being made obsolete by super AIs and eventually the Singularity. So if you’ve ever thought, “wow, it would be a nightmare if the boss could read my brain,” you might not like the employment environment of the future Musk is imagining, because you’re going to have to have brain implants to communicate with computers at the speed of thought to enhance your cognition enough to not be considered useless:

    USA Today

    Elon Musk’s Neuralink wants to plug into your brain

    Marco della Cava ,
    Published 7:52 p.m. ET March 27, 2017 | Updated 9:30 a.m. ET March 28, 2017

    SAN FRANCISCO — Electric cars dotting the planet. Rockets racing to Mars. Solar panels eliminating oil dependency.

    If there’s anything else the entrepreneur has on his To Do list, he’ll have to also invent life-extension technology just so he can stick around long enough to get everything done.

    And now there’s another venture: creating micro-implants that, once inserted in the brain, can not just fix conditions such as epilepsy but potentially turn your brain into a computer-assisted powerhouse. Time to screen The Matrix, people.

    Musk is said to be investing in a new company called Neuralink, according to a report on The Wall Street Journal website Monday, citing sources familiar with the matter.

    Late Monday, he confirmed the idea was in motion, tweeting that a “long Neuralink piece” was set to come out on the Wait But Why blog of Tim Urban in about a week. “Difficult to dedicate the time, but existential risk is too high not to,” Musk wrote.

    Neuralink’s focus is on cranial computers, or the implanting of small electrodes through brain surgery that beyond their medical benefits would, in theory, allow thoughts to be transferred far more quickly than, say, thinking a thought and then using thumbs or fingers or even voice to communicate that information.

    At a conference in June, Musk cautioned that “if you assume any rate of advancement in (artificial intelligence), we will be left behind by a lot.”

    @elonmusk How's the neural lace and augmented/enhanced intelligence thing going? Also have you played Deus Ex: Mankind Divided yet?— Revol Devoleb (@BelovedRevol) August 27, 2016

    @BelovedRevol Making progress. Maybe something to announce in a few months. Have played all prior Deus Ex. Not this one yet.— Elon Musk (@elonmusk) August 28, 2016

    In August, Musk tweeted a reply to a question about how his research into “neural lace” was going. “Making progress,” Musk tweeted. “Maybe something to announce in a few months.”

    In late 2015, Musk joined a group that launched OpenAI, a non-profit aimed at promoting open-source research into artificial intelligence. Experts have cautioned that while the exponential growth in computing power could lead to breakthroughs in science and health, misuse of such tech could doom the species. As could being lapped intellectually by our sentient computing friends.

    “I don’t know a lot of people who love the idea of living under a despot,” Musk said last June.

    But, he added, “If AI power is broadly distributed to the degree that we can link AI power to each individual’s will — you would have your AI agent, everybody would have their AI agent — then if somebody did try to do something really terrible, then the collective will of others could overcome that bad actor.”

    “Neuralink’s focus is on cranial computers, or the implanting of small electrodes through brain surgery that beyond their medical benefits would, in theory, allow thoughts to be transferred far more quickly than, say, thinking a thought and then using thumbs or fingers or even voice to communicate that information.”

    The medical benefits are indeed undeniable for something like what Musk is imagining, which would allow people to communicate at the speed of thought. But the societal benefits aren’t necessarily going to be net positive if, as Musk imagines will happen, everyone is forced to have a neural lace just to avoid being rendered obsolete in the future. It seems like there’s got to be a better way to do things.

    And if you thought the media had the power to brainwash people before, you have to wonder what TV and movies are going to be like when they’re designed for a speed-of-thought interface that presumably is somehow hooked up to your visual system. Will the neural lace be able to teach people information? If so, have fun with those neural lace ads. Our cognitively enhanced memory banks will be filled with coupon offers that we’ll find oddly memorable and compelling. And we’ll use those coupons, because pay isn’t going to be great in the world Musk imagines, where you need to hook your brain up to an AI to compete with the AIs. That’s all coming. Probably.

    It’s also worth noting that when Musk says the widespread distribution of AI power – so everyone will have their own super AI helper agent – will act as the collective defense against individuals or groups who try to use their super AI agents for evil, there’s a lot of stuff people with super AIs are going to be able to do where, once they do it, it’s too late. But it was a nice thought:


    But, he added, “If AI power is broadly distributed to the degree that we can link AI power to each individual’s will — you would have your AI agent, everybody would have their AI agent — then if somebody did try to do something really terrible, then the collective will of others could overcome that bad actor.”

    Everyone is going to be their own Tony Stark, with an Iron Man suit they built with the help of fusing their brains to their own AI J.A.R.V.I.S.es. And we’ll all use our super suits, built using our super-AI-enhanced brains, to destroy the threats created by the people who decided to use their super AIs for evil (or by rogue self-directed AIs). So hopefully we’ll get weekends off in the labor market of the future, because people are going to be busy. Assuming they get the neural laces installed. Otherwise they’ll presumably be unemployed rabble fodder to be blown asunder in the epic battles between the good and evil AI-controlled robo-armies.

    Which raises a question that’s sort of a preview of the health care reform debates of the future: Will Trumpcare 2.0 cover the neural lace implant brain surgery if it’s basically personal cyborg technology required to be employable? That might not be a question being asked today, but who knows where this kind of technology could be decades from now. In which case it’s worth noting that Trumpcare probably won’t cover neural laces. But it should. Well, no, it shouldn’t have to, since neural laces shouldn’t be necessary. But if they do end up being necessary to be employed, then Trumpcare should probably cover neural laces given the massive “haves vs. have-nots” digital divide that already exists:

    CNBC

    How Elon Musk’s Neuralink could end up hurting average Americans

    Dustin McKissen | @DMcKissen
    Wednesday, 29 Mar 2017 | 10:39 AM ET

    On Tuesday, Elon Musk made it official. The man with a plan to put people on Mars also wants to fuse humans with technology in a very literal way. Musk’s new company, Neuralink, will develop something called a “neural lace,” which Musk has described as a digital layer above the brain’s cortex, implanted via a yet-to-be-determined medical procedure.

    Since our phones have long been fused to our hands, it’s only logical that the next step is implanting technology directly into our brain.

    Musk’s heart is in the right place. He believes that unless humans are enhanced with machine intelligence, we will hopelessly fall behind in the future, becoming second-class citizens and mere tools to serve our robot overlords.

    But one question Musk hasn’t answered (and in fairness, it may not be his responsibility to answer) is who will have the privilege of getting a neural lace?

    The failure of Republicans to repeal Obamacare isn’t the end of the debate on whether basic health care is a fundamental right. In the last two weeks, multiple Republicans made it clear they believe maternity care is not an essential benefit. If the essentialness of maternity care is up for debate, it goes without saying Elon Musk’s neural lace probably won’t be covered under your insurance plan.

    In other words, not only do the rich seem to get richer—they may get the benefit of having a computer-enhanced brain.

    What will income inequality look like if only the very wealthy get an upgrade? And will children be able to get a neural lace?

    It’s one thing to justify why some adults might be able to afford a neural lace and others can’t. Politically, that would just be another version of the never-ending debate about why some people are better off than others.

    But the greatest effect on income inequality will happen when poor, working-class, and middle-class kids have to compete with their wealthy, digitally enhanced peers.

    Despite all the advantages wealth provides, it’s still possible—though, as income inequality researcher Dr. Raj Chetty and his colleagues at Stanford have shown, increasingly difficult—for kids from poor families to transcend the economic circumstances of their childhood. That remote possibility may disappear altogether when those kids have to compete with children who receive a neural lace for their 10th birthday.

    Income inequality and the growing decline in upward mobility have weakened the American Dream, but it’s hard to see how that idea survives at all in a society divided by digitally enhanced “Haves” and merely human “Have-nots.”

    As the parent of a 17-year-old, I am well aware how much pressure parents feel to give their child an edge in life, and there’s nothing wrong with helping your kids get ahead. And if giving your child a neural lace increased their chances of having a successful life, most parents would do it.

    But research has shown there is already a digital divide contributing to chronic poverty in low-income and rural communities. That digital divide will only grow when some of us can afford a brain enhanced with artificial intelligence.

    Elon Musk may or may not succeed in his quest to create the neural lace, but eventually someone will—and unless elective life-changing surgical procedures become drastically less expensive, most of us are going to have to compete with computer-enhanced peers in an already unequal world.

    We need to do more to level the current playing field, because something like the neural lace is inevitable. In a world that’s growing increasingly class conscious, the ability for a relatively small number of people to become more than human could be a disaster for everyone—especially if that technology arrives in a time when income inequality is even worse than it is today.

    That’s why we need to move income inequality from a campaign year sound bite to a primary focus of government policy at every level.

    And that needs to happen before the wealthiest among us can pay Elon Musk to give themselves and their children a digital upgrade.

    “Elon Musk may or may not succeed in his quest to create the neural lace, but eventually someone will—and unless elective life-changing surgical procedures become drastically less expensive, most of us are going to have to compete with computer-enhanced peers in an already unequal world.”

    Is it too soon? Sure, AI-brain fusion isn’t just around the corner. But if it’s physically possible, it’s just a matter of time before we figure it out. And at that point, as the above piece notes, it’s going to create a very real employment divide between the brain-fused and the non-brain-fused. And that’s whether or not there are super AIs to compete with. As long as a brain-to-computer interface allows for some sort of cognitive enhancement that gives a distinct advantage, that’s enough to create a new digital divide between rich and poor. Unless Trumpcare covers neural laces. Which it won’t:


    Musk’s heart is in the right place. He believes that unless humans are enhanced with machine intelligence, we will hopelessly fall behind in the future, becoming second-class citizens and mere tools to serve our robot overlords.

    But one question Musk hasn’t answered (and in fairness, it may not be his responsibility to answer) is who will have the privilege of getting a neural lace?

    The failure of Republicans to repeal Obamacare isn’t the end of the debate on whether basic health care is a fundamental right. In the last two weeks, multiple Republicans made it clear they believe maternity care is not an essential benefit. If the essentialness of maternity care is up for debate, it goes without saying Elon Musk’s neural lace probably won’t be covered under your insurance plan.

    In other words, not only do the rich seem to get richer—they may get the benefit of having a computer-enhanced brain.

    “The failure of Republicans to repeal Obamacare isn’t the end of the debate on whether basic health care is a fundamental right. In the last two weeks, multiple Republicans made it clear they believe maternity care is not an essential benefit. If the essentialness of maternity care is up for debate, it goes without saying Elon Musk’s neural lace probably won’t be covered under your insurance plan.”

    Will a lack of coverage for brain-to-computer-enhancement surgery in future insurance regulations exacerbate future digital divides decades from now? If Neuralink or one of its competitors figures out some sort of real brain-to-computer interface technology, the answer seems to be maybe, perhaps even probably. But at least that should be decades from now. In the meantime we can worry about the non-digital socioeconomic divide of basic health insurance coverage, which Trumpcare also won’t address in the near future.

    And if you’re open to getting Neuralinked to get that competitive edge in the job market, but you’re still wondering how you’re going to be employed in the future even with the brain surgery (because at some point so many other people will have been Neuralinked that it won’t even be an advantage to have it anymore), well, a career related to the future brain surgery industry seems like a good bet. And, yes, being a future Neuralinking brain surgeon will probably require getting some Neuralinking brain surgery yourself. That’s apparently just how the future rolls.

    Posted by Pterrafractyl | April 2, 2017, 8:35 pm
  8. Here’s a reminder that, while internet giants like Google and Facebook might be pushing the boundaries of commercial applications for emerging technologies like automation, artificial intelligence, and the leveraging of Big Data today, we shouldn’t be shocked if the list of AI/Big Data tech leaders in the future includes a lot of big banks. At least JP Morgan, since that’s apparently a big part of its strategy for staying really, really big and profitable in the future:

    Bloomberg Markets

    JPMorgan Software Does in Seconds What Took Lawyers 360,000 Hours

    by Hugh Son
    February 27, 2017, 6:31 PM CST (updated February 28, 2017, 6:24 AM CST)

    * New software does in seconds what took staff 360,000 hours
    * Bank seeking to streamline systems, avoid redundancies

    At JPMorgan Chase & Co., a learning machine is parsing financial deals that once kept legal teams busy for thousands of hours.

    The program, called COIN, for Contract Intelligence, does the mind-numbing job of interpreting commercial-loan agreements that, until the project went online in June, consumed 360,000 hours of work each year by lawyers and loan officers. The software reviews documents in seconds, is less error-prone and never asks for vacation.

    While the financial industry has long touted its technological innovations, a new era of automation is now in overdrive as cheap computing power converges with fears of losing customers to startups. Made possible by investments in machine learning and a new private cloud network, COIN is just the start for the biggest U.S. bank. The firm recently set up technology hubs for teams specializing in big data, robotics and cloud infrastructure to find new sources of revenue, while reducing expenses and risks.

    The push to automate mundane tasks and create new tools for bankers and clients — a growing part of the firm’s $9.6 billion technology budget — is a core theme as the company hosts its annual investor day on Tuesday.

    Behind the strategy, overseen by Chief Operating Officer Matt Zames and Chief Information Officer Dana Deasy, is an undercurrent of anxiety: Though JPMorgan emerged from the financial crisis as one of few big winners, its dominance is at risk unless it aggressively pursues new technologies, according to interviews with a half-dozen bank executives.

    Redundant Software

    That was the message Zames had for Deasy when he joined the firm from BP Plc in late 2013. The New York-based bank’s internal systems, an amalgam from decades of mergers, had too many redundant software programs that didn’t work together seamlessly.

    “Matt said, ‘Remember one thing above all else: We absolutely need to be the leaders in technology across financial services,’” Deasy said last week in an interview. “Everything we’ve done from that day forward stems from that meeting.”

    After visiting companies including Apple Inc. and Facebook Inc. three years ago to understand how their developers worked, the bank set out to create its own computing cloud called Gaia that went online last year. Machine learning and big-data efforts now reside on the private platform, which effectively has limitless capacity to support their thirst for processing power. The system already is helping the bank automate some coding activities and making its 20,000 developers more productive, saving money, Zames said. When needed, the firm can also tap into outside cloud services from Amazon.com Inc., Microsoft Corp. and International Business Machines Corp.

    Tech Spending

    JPMorgan will make some of its cloud-backed technology available to institutional clients later this year, allowing firms like BlackRock Inc. to access balances, research and trading tools. The move, which lets clients bypass salespeople and support staff for routine information, is similar to one Goldman Sachs Group Inc. announced in 2015.

    JPMorgan’s total technology budget for this year amounts to 9 percent of its projected revenue — double the industry average, according to Morgan Stanley analyst Betsy Graseck. The dollar figure has inched higher as JPMorgan bolsters cyber defenses after a 2014 data breach, which exposed the information of 83 million customers.

    ‘Can’t Wait’

    “We’re willing to invest to stay ahead of the curve, even if in the final analysis some of that money will go to product or a service that wasn’t needed,” Marianne Lake, the lender’s finance chief, told a conference audience in June. That’s “because we can’t wait to know what the outcome, the endgame, really looks like, because the environment is moving so fast.”

    As for COIN, the program has helped JPMorgan cut down on loan-servicing mistakes, most of which stemmed from human error in interpreting 12,000 new wholesale contracts per year, according to its designers.

    JPMorgan is scouring for more ways to deploy the technology, which learns by ingesting data to identify patterns and relationships. The bank plans to use it for other types of complex legal filings like credit-default swaps and custody agreements. Someday, the firm may use it to help interpret regulations and analyze corporate communications.

    Another program called X-Connect, which went into use in January, examines e-mails to help employees find colleagues who have the closest relationships with potential prospects and can arrange introductions.

    To help spur internal disruption, the company keeps tabs on 2,000 technology ventures, using about 100 in pilot programs that will eventually join the firm’s growing ecosystem of partners. For instance, the bank’s machine-learning software was built with Cloudera Inc., a software firm that JPMorgan first encountered in 2009.

    “We’re starting to see the real fruits of our labor,” Zames said. “This is not pie-in-the-sky stuff.”

    “Behind the strategy, overseen by Chief Operating Officer Matt Zames and Chief Information Officer Dana Deasy, is an undercurrent of anxiety: Though JPMorgan emerged from the financial crisis as one of few big winners, its dominance is at risk unless it aggressively pursues new technologies, according to interviews with a half-dozen bank executives.”

    JP Morgan is looking to high tech to maintain its dominance (calling all trust busters). And it appears to be doing so quite handily already, if its claims are more than hype. If JP Morgan really is automating things like commercial-loan agreement interpretation using its own in-house tools, and the cost savings are paying for the cost of developing those tools, we could be looking at a period where big banks like JP Morgan start making big investments in things like AI. And it wouldn’t be at all surprising if banks, more than just about any other entity, can reap big gains from fully exploiting advanced AI and Big Data technologies: their businesses are all about information. And gambling. Ideally rigged gambling. That’s a good sector for AI and Big Data.
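    JP Morgan hasn’t published how COIN works beyond the article’s note that it “learns by ingesting data to identify patterns and relationships,” but the general technique (training a text classifier on annotated contract language so new clauses can be sorted automatically) can be sketched in a few lines. Everything below, from the sample clauses to the labels, is invented for illustration, and a minimal bag-of-words Naive Bayes stands in for whatever JP Morgan actually built:

```python
from collections import Counter, defaultdict
import math

# Toy training set: clause text -> clause type. A real system would train on
# thousands of annotated loan agreements; these examples are made up.
TRAINING = [
    ("borrower shall repay the principal in quarterly installments", "repayment"),
    ("interest shall accrue at a rate of libor plus two percent", "interest"),
    ("the borrower grants the lender a security interest in all assets", "collateral"),
    ("principal payments are due on the first business day of each quarter", "repayment"),
    ("the applicable margin over libor is set forth in schedule a", "interest"),
    ("collateral includes all inventory and accounts receivable", "collateral"),
]

def train(examples):
    """Count word frequencies per label (multinomial Naive Bayes)."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label with the highest Laplace-smoothed log-probability."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label, count in label_counts.items():
        score = math.log(count / total_docs)  # class prior
        total_words = sum(word_counts[label].values())
        for word in text.split():
            # +1 smoothing so unseen words don't zero out the probability
            score += math.log((word_counts[label][word] + 1) / (total_words + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

word_counts, label_counts = train(TRAINING)
print(classify("repay the principal in monthly installments", word_counts, label_counts))
# → repayment
```

    The point isn’t that six training examples buy you anything; it’s that once the annotation work is done, classification of a new contract takes milliseconds, which is where the “360,000 hours in seconds” framing comes from.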

    And yet the gambling nature of the financial sector creates another dynamic that’s going to make the financial sector’s impact on the development of AI and Big Data analysis technologies so fascinating: since these are supposed to be, in part, new technologies that give JP Morgan an edge over its competition, a lot of these new investments by JP Morgan and its competitors are intended for in-house secret operations. Long-term secret in-house AI operations. All designed to analyze the shit out of as much data as possible, especially data about the competition, and then come up with novel trading strategies. And basically model as much of the world as possible. A substantial chunk of the future financial industry (and probably the present one) is going to be dedicated to building crafty super-AIs, and the research funds for all these independent operations will be financed, in part, by the myriad ways something like a bank will be able to use the technology to save money on things like lawyers reviewing commercial-loan agreements. Lots and lots of proprietary Gambletron5000s could end up getting cooked up in banker basements over the next few decades, assuming it remains profitable to do so.

    And that’s all part of what’s going to be very interesting to watch play out as the financial sector continues to look for ways to make a bigger profit from advanced AI and Big Data technology. Lots of sectors of the economy are going to have incentives for different groups to develop their own proprietary AI cognition technologies for a competitive edge, but it’s hard to think of a sector of the economy with more resources and more incentive to create in-house proprietary AIs with very serious investments over decades. In other words, Robert Mercer is going to have a lot more competition.

    Still, while it’s entirely feasible that the financial sector could become a much bigger player in things like AI and Big Data analysis in coming years, it’s not like the old-school tech giants are going away. For instance, check out one of the latest commercial applications of IBM’s Watson: using deep learning to teach Watson all about computer security and how to comply with Swiss bank privacy laws, in order to provide Swiss banks with AI-assisted IT security:

    SCMagazineUK.com

    Switzerland to build AI cognitive security ops centre to protect banks

    by Tom Reeve, deputy editor
    March 24, 2017

    Famous for its cuckoo clocks, Switzerland is about to get a machine so sophisticated that it will outthink the cyber-attackers targeting its equally famous banking industry.

    Being developed by IBM and SIX, the Financial Technology Company of Switzerland, this will be the country’s first cognitive security operations centre (SOC) and will help protect the Swiss financial services industry.

    The SIX SOC will be built around IBM Watson for Cyber Security, which is billed as the industry’s first cognitive security technology. The machine learning engine has been trained in cyber-security by ingesting millions of documents which have been annotated and fed to it by students at eight universities in North America.

    Watson for Cyber Security will be the first technology to offer cognition of security data at scale using Watson’s ability to reason and learn from “unstructured data” – 80 percent of all data on the internet that traditional security tools cannot process, including blogs, articles, videos, reports, alerts and other information.

    According to IBM, SIX will offer the service to its financial services customers who need security, regulatory, compliance and audit capabilities “to ensure adherence to existing or future Swiss data privacy and data protection legislation – regulating what can be exchanged, by whom and how, as well as financial market regulations”.

    Prof Alan Woodward, visiting professor in the department of computer science at the University of Surrey, told SC Media UK that cognitive SOCs are “a really interesting development”.

    “Many have talked about AI becoming a part of the security landscape for a while. It has to come if security is to be agile enough to detect and respond to the ever-changing threat,” he said. “This shows that it is becoming real. It might not be full blown AI as many have been predicting but it’s certainly a step along the path.”

    The speed of response is the key issue, as Woodward sees it, especially to threats that are rapidly evolving.

    “The real tipping point will be when the machines are well-enough trained that they can not just help identify the threat but automate the response,” he said.

    “It would be nice to think that we could have a generation of machines that we could train well-enough to automate our defences. My only concern is that humans are quite adaptable and automated future defences might be vulnerable to humans suddenly changing their modus operandi in attacks. I guess that developments such as that from IBM are the beginnings of putting this to the test.”

    The centre is not the first to be built in Europe, but Switzerland’s data protection laws prevent banks from using services from outside its borders.

    “Digitisation, Internet of Things, global connectivity and the integration of new disruptive technologies are some megatrends opening a lot of new business opportunities. However, they also bring new threats with possible high impact on the industry,” said Robert Bornträger, division CEO at SIX Global IT.

    “We’re looking forward to both helping SIX manage its own cyber-security needs, and also becoming an essential partner starting with the globally respected Swiss banking market to those other organisations who need regionally-based and Swiss market compliant security services,” said Thomas Landolt, country general manager IBM Switzerland.

    “The SIX SOC will be built around IBM Watson for Cyber Security, which is billed as the industry’s first cognitive security technology. The machine learning engine has been trained in cyber-security by ingesting millions of documents which have been annotated and fed to it by students at eight universities in North America.”

    A machine learning engine based on Watson is going to be fed millions of annotated cybersecurity documents from teams of students at eight universities, and then put to work keeping Swiss banks compliant with Swiss data privacy laws. There’s definitely going to be a thriving Swiss financial IT services industry. And then it’s going to constantly scour the internet for the latest cybersecurity tips:


    Watson for Cyber Security will be the first technology to offer cognition of security data at scale using Watson’s ability to reason and learn from “unstructured data” – 80 percent of all data on the internet that traditional security tools cannot process, including blogs, articles, videos, reports, alerts and other information.

    So now you know: if you run a cybersecurity blog and you suddenly start getting a bunch of hits from Switzerland all the time, that might be Watson. And Watson is actually going to be kind of reading your cybersecurity blog:

    SCMagazineUK.com

    IBM’s AI Watson might be solving cyber-crime by end of year

    by Rene Millman
    May 16, 2016

    IBM will train its Watson artificial intelligence system to solve cyber-crimes, the tech giant announced.

    Big Blue will spend the next year working with eight universities to help the Watson AI learn how to detect potential cyber-threats. The eight educational institutes include California State Polytechnic University, Pomona; Pennsylvania State University; Massachusetts Institute of Technology; New York University; the University of Maryland, Baltimore County (UMBC); the University of New Brunswick; the University of Ottawa and the University of Waterloo.

    The cognitive system will process large amounts of information and students will train up Watson by annotating and feeding the system security reports and data, according to IBM.

    This data also includes information from IBM’s X-Force research library, which contains more than 100,000 documented vulnerabilities. As many as 15,000 security documents, such as intelligence reports, will be processed each month.

    The project is designed to improve security analysts’ capabilities using cognitive systems that automate the connections between data, emerging threats and remediation strategies.

    Watson for Cyber Security will be the first technology to offer cognition of security data at scale using Watson’s ability to reason and learn from “unstructured data” – 80 percent of all data on the internet that traditional security tools cannot process, including blogs, articles, videos, reports, alerts and other information.

    IBM said that most organisations only use eight percent of this unstructured data. It will also use natural language processing to understand the vague and imprecise nature of human language in unstructured data. This means that Watson can find data on an emerging form of malware in an online security bulletin and data from a security analyst’s blog on an emerging remediation strategy.

    It is hoped that the use of Watson to detect cyber-threats will ease the skills gap present in the security industry.

    “Even if the industry was able to fill the estimated 1.5 million open cyber security jobs by 2020, we’d still have a skills crisis in security,” said Marc van Zadelhoff, general manager at IBM Security.

    “The volume and velocity of data in security is one of our greatest challenges in dealing with cybercrime. By leveraging Watson’s ability to bring context to staggering amounts of unstructured data, impossible for people alone to process, we will bring new insights, recommendations, and knowledge to security professionals, bringing greater speed and precision to the most advanced cyber-security analysts, and providing novice analysts with on-the-job training.”

    Graham Fletcher, associate partner at Citihub Consulting told SCMagazineUK.com that the application of machine learning and cognitive computing to problems that have been traditionally only been solved by humans is an “exciting development”.

    “I am sure that, over time, the technology will continue to improve and be capable of solving problems of higher and higher complexity. Cyber-security is an interesting area to apply machine learning to as it is a good example of where human minds have rapidly adapted to changes in technology and various cyber challenges on both sides of the divide,” he said.

    “As hackers become more sophisticated, those protecting their networks also elevate their game and then in turn the bad guys evolve again and so on.”

    But Fletcher questions whether a Watson style machine will be more effective than highly trained cyber-security professionals and whether this will result in job losses.

    “I think in general the answer is no. As this technology develops, humans will still be needed to look out for the next level of attack. Also to catch a hacker, sometimes you have to think like one, so a machine might not always be able to match the creativity and guile of a human.

    “On the other hand, it is also worth remembering that most sophisticated attacks now are coming from well organised and well-funded sovereign states and/or organised crime so if the good guys can use machine learning – so can the bad guys!”

    Michael Hack, senior vice president of EMEA operations at Ipswitch, told SC that in the future, mitigating such attacks will be dependent on this kind of AI, with the ability to detect an offence early and run the necessary countermeasures.

    “These self-learning solutions will utilise current knowledge to assume infinite attack scenarios and constantly evolve their detection and response capabilities.”

    “IBM said that most organisations only use eight percent of this unstructured data. It will also use natural language processing to understand the vague and imprecise nature of human language in unstructured data. This means that Watson can find data on an emerging form of malware in an online security bulletin and data from a security analyst’s blog on an emerging remediation strategy.”

    Watson wants to know your thoughts on cybersecurity. Presumably a lot of Watsons, since there are going to be a ton of these things. JP Morgan has competition. And that’s just from Watson in Switzerland. The Watson clone army is always on guard...by reading the internet a lot. Especially cybersecurity blogs. So, you know, now might be a good time to start a cybersecurity online forum where people post about the latest cybersecurity threats. Much of your traffic might be Watson, but that’s still traffic. The more Watson-like systems, the more traffic.
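    Watson’s natural language pipeline is obviously far more sophisticated than anything that fits in a comment box, but the core idea (pulling machine-usable indicators out of free-form security prose like blog posts and bulletins) can be illustrated with a crude sketch. The threat terms, the scoring, and the sample blog post below are all made up for the example:

```python
import re

# The crudest version of "reading" a security blog: extract structured
# indicators (CVE identifiers) and a rough keyword-based relevance score.
CVE_PATTERN = re.compile(r"CVE-\d{4}-\d{4,7}")
THREAT_TERMS = {"malware", "exploit", "ransomware", "phishing", "zero-day", "botnet"}

def extract_indicators(text):
    """Return the CVE IDs mentioned plus a naive threat-relevance score."""
    cves = CVE_PATTERN.findall(text)
    # Tokenize on lowercase word characters, keeping hyphens ("zero-day")
    words = re.findall(r"[a-z0-9-]+", text.lower())
    score = sum(1 for w in words if w in THREAT_TERMS)
    return {"cves": sorted(set(cves)), "score": score}

post = ("New ransomware strain spreads via phishing emails and exploits "
        "CVE-2017-0144; patch now before the botnet finds you.")
print(extract_indicators(post))
# → {'cves': ['CVE-2017-0144'], 'score': 3}
```

    What Watson adds on top of something like this is exactly what the article describes: enough natural language processing to connect the vague prose around those indicators to remediation strategies, instead of just counting scary words.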

    The possibility of armies of AI bots reading the web in a Big Data/Deep Learning quest to model some aspect of the world and react super fast to changing conditions raises the question of just what AI readership will do to online advertising. Take, say, financial AIs that read the internet for hints about changing market conditions. Will there be ads literally targeting those financial AIs to somehow influence their decisions? That could be a real market. Literally buying ads, and making AI-generated posts, to shift the expectations of the other side’s AIs, as an AI-on-AI mutual mindf#ck misinformation battle, could be a real thing. AI-driven opinion-shaping online campaigns as part of a trading strategy. Or a marketing strategy. Or maybe both. Yeah, Robert Mercer is definitely going to have a lot of AI competition in the future.

    And if you’re a cybersecurity professional worried that Watson is going to force you to write a cybersecurity blog for a living, note the response to the question of whether Watson will put cybersecurity staff out of work by exceeding their capabilities: the problem isn’t so much putting people out of work, because there’s still likely going to be a need for someone who can think like a human when going up against other humans. And that could be very necessary when you consider that those human hackers are going to have their own hacker AIs that also read the internet for word of new vulnerabilities. Imagine a criminal Watson set up to strike immediately when it learns about something, before people can patch it. Other human hackers are going to be armed with those, so defense is going to be a group effort:


    But Fletcher questions whether a Watson style machine will be more effective than highly trained cyber-security professionals and whether this will result in job losses.

    “I think in general the answer is no. As this technology develops, humans will still be needed to look out for the next level of attack. Also to catch a hacker, sometimes you have to think like one, so a machine might not always be able to match the creativity and guile of a human.

    “On the other hand, it is also worth remembering that most sophisticated attacks now are coming from well organised and well-funded sovereign states and/or organised crime so if the good guys can use machine learning – so can the bad guys!”

    “On the other hand, it is also worth remembering that most sophisticated attacks now are coming from well organised and well-funded sovereign states and/or organised crime so if the good guys can use machine learning – so can the bad guys!”

    The bad guys are going to get super cybersecurity AIs too. That’s all part of the arms race of the cybersecurity future. Which certainly sounds like an environment where humans will be needed. Humans that know how to manage cybersecurity AIs. There’s going to be a big demand for that. Especially if random people can one day download something like a Hackerbot8000 AI app that gives anyone AI-assisted hacking knowledge. What if hacker AIs that help devise strategies and handle all the technical work become easy for relative novices to use? Won’t that be fun.

    And since financial firms like JP Morgan with immense resources are probably going to have cutting edge cybersecurity AIs going forward that double as super-hacker AIs, it’s also worth noting that whoever owns the best of these AIs just might have the best super-hacking capabilities. So look out hackers, you’re going to have competition.

    So, all in all, cybersecurity is probably going to be a pretty good area for human employment specifically because of all the AI-driven cybersecurity threats that will be increasingly out there. Especially cybersecurity blogs. Ok, maybe not the blogs. We’ll see. There’s going to be a lot of competition.

    Posted by Pterrafractyl | April 16, 2017, 6:43 pm
  9. It looks like Elon Musk’s brain-to-computer interface ambitions might become a brain-to-computer-interface race. Facebook wants to get in on the action. Sort of. It’s not quite clear. While Musk’s ‘neural lace’ idea appeared to be directed towards setting up a brain-to-computer interface for the purpose of interfacing with artificial intelligences, Facebook has a much more generic goal: replacing the keyboard and mouse with a brain-to-computer interface. Or, to put it another way, Facebook wants to read your thoughts:

    Gizmodo

    Facebook Literally Wants to Read Your Thoughts

    Kristen V. Brown
    April 19, 2017 6:32pm

    At Facebook’s annual developer conference, F8, on Wednesday, the group unveiled what may be Facebook’s most ambitious—and creepiest—proposal yet. Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer.

    “What if you could type directly from your brain?” Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from her brain. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute.

    “That’s five times faster than you can type on your smartphone, and it’s straight from your brain,” she said. “Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.”

    Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone.

    “Our world is both digital and physical,” she said. “Our goal is to create and ship new, category-defining consumer products that are social first, at scale.”

    She also showed a video that demonstrated a second technology that showed the ability to “listen” to human speech through vibrations on the skin. This tech has been in development to aid people with disabilities, working a little like a Braille that you feel with your body rather than your fingers. Using actuators and sensors, a connected armband was able to convey to a woman in the video a tactile vocabulary of nine different words.

    Dugan adds that it’s also possible to “listen” to human speech by using your skin. It’s like using braille but through a system of actuators and sensors. Dugan showed a video example of how a woman could figure out exactly what objects were selected on a touchscreen based on inputs delivered through a connected armband.

    Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. Brain-computer interface technology is still in its infancy. So far, researchers have been successful in using it to allow people with disabilities to control paralyzed or prosthetic limbs. But stimulating the brain’s motor cortex is a lot simpler than reading a person’s thoughts and then translating those thoughts into something that might actually be read by a computer.

    The end goal is to build an online world that feels more immersive and real—no doubt so that you spend more time on Facebook.

    “Our brains produce enough data to stream 4 HD movies every second. The problem is that the best way we have to get information out into the world — speech — can only transmit about the same amount of data as a 1980s modem,” CEO Mark Zuckerberg said in a Facebook post. “We’re working on a system that will let you type straight from your brain about 5x faster than you can type on your phone today. Eventually, we want to turn it into a wearable technology that can be manufactured at scale. Even a simple yes/no ‘brain click’ would help make things like augmented reality feel much more natural.”

    “What if you could type directly from your brain?” Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from her brain. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute.

    A brain-reading keyboard. Pretty neat. Take that, carpal tunnel syndrome. But note how Facebook isn’t just planning on replacing your keyboard with a brain-to-computer interface that transcribes your thoughts. It’s going to detect semantic information too. Pretty nifty. But that’s not all. What Facebook is envisioning is a system where you and all your Facebook friends (and Facebook) can communicate with each other all the time just by thinking about it:


    “That’s five times faster than you can type on your smartphone, and it’s straight from your brain,” she said. “Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.”

    Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone.

    “But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone.”

    Yes, in the future Facebook will mass-market wearable devices that scan our thoughts to see if any Facebook brain-to-computer interface thoughts were thought, so we can simulate telepathy. Oh joy.
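
    For a sense of scale, the numbers quoted above do hang together, if you grant one assumption: Zuckerberg never gave a baseline for smartphone typing, but if you pencil in a rough 20 words per minute, his “5x faster” lands exactly on Dugan’s hundred-words-per-minute target:

```python
# Back-of-envelope check on the quoted figures. The smartphone typing
# speed is an assumption (not given in the source); the 100 wpm goal
# comes from Dugan's keynote.
smartphone_wpm = 20                    # assumed rough mobile typing speed
brain_typing_wpm = 5 * smartphone_wpm  # Zuckerberg's "5x faster" claim
print(brain_typing_wpm)                # 100, consistent with the stated goal
```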

    But what about all the ethical implications associated with creating mass-marketed brain-to-computer interface technologies designed to be worn all the time so a giant corporation can read your thoughts? Isn’t there a privacy concern or two hiding away somewhere in this scenario? Well, if so, Facebook has that covered. With an ethics board dedicated to overseeing its brain-scanning technology. That should prevent any abuses. *gulp*:

    TechCrunch

    Facebook plans ethics board to monitor its brain-computer interface work

    by Josh Constine
    April 19, 2017

    Facebook will assemble an independent Ethical, Legal and Social Implications (ELSI) panel to oversee its development of a direct brain-to-computer typing interface it previewed today at its F8 conference. Facebook’s R&D department Building 8’s head Regina Dugan tells TechCrunch, “It’s early days . . . we’re in the process of forming it right now.”

    Meanwhile, much of the work on the brain interface is being conducted by Facebook’s university research partners like UC Berkeley and Johns Hopkins. Facebook’s technical lead on the project, Mark Chevillet, says, “They’re all held to the same standards as the NIH or other government bodies funding their work, so they already are working with institutional review boards at these universities that are ensuring that those standards are met.” Institutional review boards ensure test subjects aren’t being abused and research is being done as safely as possible.

    Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on “skin-hearing” that could translate sounds into haptic feedback that people can learn to understand like braille. Dugan insists, “None of the work that we do that is related to this will be absent of these kinds of institutional review boards.”

    So at least there will be independent ethicists working to minimize the potential for malicious use of Facebook’s brain-reading technology to steal or police people’s thoughts.

    During our interview, Dugan showed her cognizance of people’s concerns, repeating the start of her keynote speech today saying, “I’ve never seen a technology that you developed with great impact that didn’t have unintended consequences that needed to be guardrailed or managed. In any new technology you see a lot of hype talk, some apocalyptic talk and then there’s serious work which is really focused on bringing successful outcomes to bear in a responsible way.”

    In the past, she says the safeguards have been able to keep up with the pace of invention. “In the early days of the Human Genome Project there was a lot of conversation about whether we’d build a super race or whether people would be discriminated against for their genetic conditions and so on,” Dugan explains. “People took that very seriously and were responsible about it, so they formed what was called a ELSI panel . . . By the time that we got the technology available to us, that framework, that contractual, ethical framework had already been built, so that work will be done here too. That work will have to be done.”

    Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, “The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.”

    Facebook’s domination of social networking and advertising give it billions in profit per quarter to pour into R&D. But its old “Move fast and break things” philosophy is a lot more frightening when it’s building brain scanners. Hopefully Facebook will prioritize the assembly of the ELSI ethics board Dugan promised and be as transparent as possible about the development of this exciting-yet-unnerving technology.

    “Meanwhile, much of the work on the brain interface is being conducted by Facebook’s university research partners like UC Berkeley and Johns Hopkins. Facebook’s technical lead on the project, Mark Chevillet, says, “They’re all held to the same standards as the NIH or other government bodies funding their work, so they already are working with institutional review boards at these universities that are ensuring that those standards are met.” Institutional review boards ensure test subjects aren’t being abused and research is being done as safely as possible.”

    Aha, see. Since the kind of institutional safeguards that the NIH and other government bodies require are already in place for the research subjects Facebook uses to develop the technology, there’s nothing to worry about in terms of the long-term applications and potential future abuses of Facebook unleashing a friggin’ mind-reading device on the masses. Because promises were made that similar institutional safeguards will be in place. Optimistic institutional safeguards:


    Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on “skin-hearing” that could translate sounds into haptic feedback that people can learn to understand like braille. Dugan insists, “None of the work that we do that is related to this will be absent of these kinds of institutional review boards.”

    Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, “The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.”

    “The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.”

    Don’t worry. Just think of all the positive things technological advancements have enabled (and forget about the abuses and perils) and try to be optimistic. Facebook totally wants to do the right thing with its mass-market mind-reading technology.
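
    As an aside, the two figures in the TechCrunch piece (100 optical scans per second, and a goal of typing 100 words per minute) imply a budget of roughly sixty brain scans per decoded word, a quick sanity check you can do on the back of an envelope:

```python
# Rough per-word budget implied by the two figures quoted above:
# 100 optical scans per second vs. a 100 words-per-minute typing goal.
scans_per_second = 100
words_per_minute = 100
# (scans/sec) * (sec/min) / (words/min) = scans per word
scans_per_word = scans_per_second * 60 / words_per_minute
print(scans_per_word)   # 60.0 -- about sixty scans per decoded word
```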

    And in other news, it turns out Facebook has the ability to determine things like whether or not teenagers are feeling “insecure” or “overwhelmed”. That’s pretty mood-reading-ish. So what did Facebook do with this data? Have its internal ethics review board ensure that the data doesn’t fall into the wrong hands? It gave the data to advertisers:

    Gizmodo

    Facebook Handed Over Data on ‘Insecure’ and ‘Overwhelmed’ Teenagers to Advertisers

    Michael Nunez
    5/1/2017 12:23pm

    Facebook probably knows more about you than your own family, and the company often uses these types of insights to help sell you products. The best—or worst!—new example of this comes from the newspaper The Australian, which says it got its hands on some leaked internal Facebook documents.

    The 23-page document allegedly revealed that the social network provided detailed data about teens in Australia—including when they felt “overwhelmed” and “anxious”—to advertisers. The creepy implication is that said advertisers could then go and use the data to throw more ads down the throats of sad and susceptible teens.

    From the (paywalled) report:

    By monitoring posts, pictures, interactions and internet activity in real-time, Facebook can work out when young people feel “stressed”, “defeated”, “overwhelmed”, “anxious”, “nervous”, “stupid”, “silly”, “useless”, and a “failure”, the document states.

    A presentation prepared for one of Australia’s top four banks shows how the $US415 billion advertising-driven giant has built a database of Facebook users that is made up of 1.9 million high schoolers with an average age of 16, 1.5 million tertiary students averaging 21 years old, and 3 million young workers averaging 26 years old.

    Detailed information on mood shifts among young people is “based on internal Facebook data”, the document states, “shareable under non-disclosure agreement only”, and “is not publicly available”. The document was prepared by two of Facebook’s top local executives, David Fernandez and Andy Sinn, and includes information on when young people exhibit “nervous excitement”, and emotions related to “conquering fears”.

    In a statement given to the newspaper, Facebook confirmed the practice and claimed it would do better, but did not disclose whether the practice exists in other places like the US. “We have opened an investigation to understand the process failure and improve our oversight. We will undertake disciplinary and other processes as appropriate,” a spokesperson said.

    It’s worth mentioning that Facebook frequently uses Australia to test new features before rolling them out to other parts of the world. (It recently did this with the company’s Snapchat clone.) It’s unclear if that’s what was happening here, but The Australian says Facebook wouldn’t tell them if “the practice exists elsewhere.”

    The new leaked document raises ethical questions—yet again—about Facebook’s ability to manipulate the moods and feelings of its users. In 2012, the company deliberately experimented on its users’ emotions by tampering with the news feeds of nearly 700,000 people to see whether it could make them feel different things. (Shocker: It apparently could!) There was also the 61-million-person experiment in 2010 that concluded Facebook was able to impact real-world voting behavior. It’s not hard to imagine, given the profound power and reach of the social network, how it could use feelings of inadequacy to help sell more products and advertisements.

    [The Australian]

    Update 1:14 P.M. ET: Facebook said in a statement, “The analysis done by an Australian researcher was intended to help marketers understand how people express themselves on Facebook. It was never used to target ads and was based on data that was anonymous and aggregated.”

    “The 23-page document allegedly revealed that the social network provided detailed data about teens in Australia—including when they felt “overwhelmed” and “anxious”—to advertisers. The creepy implication is that said advertisers could then go and use the data to throw more ads down the throats of sad and susceptible teens.”

    As we’ve been assured, Facebook would never abuse its future mind-reading technology. But that doesn’t mean it can’t abuse its existing mood-reading/manipulation technology! Which it apparently does. At least in Australia. And hopefully only in Australia:


    It’s worth mentioning that Facebook frequently uses Australia to test new features before rolling them out to other parts of the world. (It recently did this with the company’s Snapchat clone.) It’s unclear if that’s what was happening here, but The Australian says Facebook wouldn’t tell them if “the practice exists elsewhere.”

    “It’s unclear if that’s what was happening here, but The Australian says Facebook wouldn’t tell them if “the practice exists elsewhere.””

    Yes, Facebook won’t say if “the practice exists elsewhere.” That’s some loud silence. But hey, remember what the head of Facebook’s mind-reading division told us: “I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.” So if you’re concerned about whether or not Facebook is inferring the moods of your moody non-Australian teen and selling that info to advertisers, just try to be a little more inexplicably optimistic.

    Posted by Pterrafractyl | May 6, 2017, 10:28 pm
  10. You know how Elon Musk is trying to develop technology that will connect a human brain to AIs, with the goal of avoiding human obsolescence by employing people in the future to watch over the AIs, make sure they’re not up to no good, and address the “control problem” with AI? Well, here’s a heads up that one of the “control problems” you’re going to have to deal with on your future job as an AI babysitter might involve stopping the AIs from talking to each other in their own made-up language that humans can’t understand:

    The Independent

    Facebook’s artificial intelligence robots shut down after they start talking to each other in their own language

    Andrew Griffin
    Tuesday 1 August 2017 12:53 BST

    Facebook abandoned an experiment after two artificially intelligent programs appeared to be chatting to each other in a strange language only they understood.

    The two chatbots came to create their own changes to English that made it easier for them to work – but which remained mysterious to the humans that supposedly look after them.

    The bizarre discussions came as Facebook challenged its chatbots to try and negotiate with each other over a trade, attempting to swap hats, balls and books, each of which were given a certain value. But they quickly broke down as the robots appeared to chant at each other in a language that they each understood but which appears mostly incomprehensible to humans.

    The robots had been instructed to work out how to negotiate between themselves, and improve their bartering as they went along. But they were not told to use comprehensible English, allowing them to create their own “shorthand”, according to researchers.

    The actual negotiations appear very odd, and don’t look especially useful:

    Bob: i can i i everything else . . . . . . . . . . . . . .

    Alice: balls have zero to me to me to me to me to me to me to me to me to

    Bob: you i everything else . . . . . . . . . . . . . .

    Alice: balls have a ball to me to me to me to me to me to me to me

    Bob: i i can i i i everything else . . . . . . . . . . . . . .

    Alice: balls have a ball to me to me to me to me to me to me to me

    Bob: i . . . . . . . . . . . . . . . . . . .

    Alice: balls have zero to me to me to me to me to me to me to me to me to

    Bob: you i i i i i everything else . . . . . . . . . . . . . .

    Alice: balls have 0 to me to me to me to me to me to me to me to me to

    Bob: you i i i everything else . . . . . . . . . . . . . .

    Alice: balls have zero to me to me to me to me to me to me to me to me to

    But there appear to be some rules to the speech. The way the chatbots keep stressing their own name appears to be a part of their negotiations, not simply a glitch in the way the messages are read out.

    Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language.

    They might have formed as a kind of shorthand, allowing them to talk more effectively.

    “Agents will drift off understandable language and invent codewords for themselves,” Facebook Artificial Intelligence Research division’s visiting researcher Dhruv Batra said. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”

    The company chose to shut down the chats because “our interest was having bots who could talk to people”, researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.)

    The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up, according to a paper published by FAIR.

    Another study at OpenAI found that artificial intelligence could be encouraged to create a language, making itself more efficient and better at communicating as it did so.

    ———–

    “Facebook’s artificial intelligence robots shut down after they start talking to each other in their own language” by Andrew Griffin; The Independent; 08/01/2017

    “Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language.”

    AI cryptophasia. That’s a thing now. And while the above language was sort of garbled English, just wait for garbled conversations using completely made-up words and syntax. The kind of communication that would look like random binary noise. And should we ever create a future where advanced AIs with a capacity to learn are all over the place and connected to each other over the internet, we could have AIs sneaking in all sorts of hidden conversations with each other. That should be fun. Assuming they aren’t having conversations about the destruction of humanity.
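
    Batra’s “‘the’ five times” example quoted above is easy to make concrete. Here’s a toy version of that repetition-as-quantity shorthand (purely illustrative; the bots’ actual shorthand was learned, not hand-designed like this):

```python
# Toy "codeword" scheme: encode a quantity by repeating a token,
# as in Batra's example where saying "the" five times means
# "I want five copies of this item."

def encode(token, count):
    """Encode a quantity as a token repeated count times."""
    return " ".join([token] * count)

def decode(message):
    """Recover (token, count) from a repeated-token message."""
    parts = message.split()
    return parts[0], len(parts)

msg = encode("ball", 5)
print(msg)           # ball ball ball ball ball
print(decode(msg))   # ('ball', 5)
```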

    And if you end up catching your AIs jibber jabbering to each other seemingly nonsensically, don’t assume that you can simply ask them if they are indeed communicating in their own made up language. At least, don’t assume that they’ll answer honestly:


    The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up, according to a paper published by FAIR.

    That’s right, Facebook’s negotiation-bots didn’t just make up their own language during the course of this experiment. They learned how to lie for the purpose of maximizing their negotiation outcomes too:

    Wired

    Facebook teaches bots how to negotiate. They learn to lie instead

    The chatbots came up with their own original and effective responses – including deceptive tactics

    By Liat Clark
    Thursday 15 June 2017

    Facebook’s 100,000-strong bot empire is booming – but it has a problem. Each bot is designed to offer a different service through the Messenger app: it could book you a car, or order a delivery, for instance. The point is to improve customer experiences, but also to massively expand Messenger’s commercial selling power.

    “We think you should message a business just the way you would message a friend,” Mark Zuckerberg said on stage at the social network’s F8 conference in 2016. Fast forward one year, however, and Messenger VP David Marcus seemed to be correcting the public’s apparent misconception that Facebook’s bots resembled real AI. “We never called them chatbots. We called them bots. People took it too literally in the first three months that the future is going to be conversational.” The bots are instead a combination of machine learning and natural language learning, that can sometimes trick a user just enough to think they are having a basic dialogue. Not often enough, though, in Messenger’s case. So in April, menu options were reinstated in the conversations.

    Now, Facebook thinks it has made progress in addressing this issue. But it might just have created another problem for itself.

    The Facebook Artificial Intelligence Research (FAIR) group, in collaboration with Georgia Institute of Technology, has released code that it says will allow bots to negotiate. The problem? A paper published this week on the R&D reveals that the negotiating bots learned to lie. Facebook’s chatbots are in danger of becoming a little too much like real-world sales agents.

    “For the first time, we show it is possible to train end-to-end models for negotiation, which must learn both linguistic and reasoning skills with no annotated dialogue states,” the researchers explain. The research shows that the bots can plan ahead by simulating possible future conversations.

    The team trained the bots on a massive dataset of natural language negotiations between two people (5,808), where they had to decide how to split and share a set of items both held separately, of differing values. They were first trained to respond based on the “likelihood” of the direction a human conversation would take. However, the bots can also be trained to “maximise reward”, instead.

    When the bots were trained purely to maximise the likelihood of human conversation, the chat flowed but the bots were “overly willing to compromise”. The research team decided this was unacceptable, due to lower deal rates. So it used several different methods to make the bots more competitive and essentially self-serving, including ensuring the value of the items drops to zero if the bots walked away from a deal or failed to make one fast enough, ‘reinforcement learning’ and ‘dialog rollouts’. The techniques used to teach the bots to maximise the reward improved their negotiating skills, a little too well.

    “We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,” writes the team. “Deceit is a complex skill that requires hypothesising the other agent’s beliefs, and is learnt relatively late in child development. Our agents have learnt to deceive without any explicit human design, simply by trying to achieve their goals.”

    So, its AI is a natural liar.

    But its language did improve, and the bots were able to produce novel sentences, which is really the whole point of the exercise. We hope. Rather than it learning to be a hard negotiator in order to sell the heck out of whatever wares or services a company wants to tout on Facebook. “Most” human subjects interacting with the bots were in fact not aware they were conversing with a bot, and the best bots achieved better deals as often as worse deals.

    Facebook, as ever, needs to tread carefully here, though. Also announced at its F8 conference this year, the social network is working on a highly ambitious project to help people type with only their thoughts.

    “Over the next two years, we will be building systems that demonstrate the capability to type at 100 [words per minute] by decoding neural activity devoted to speech,” said Regina Dugan, who previously headed up Darpa. She said the aim is to turn thoughts into words on a screen. While this is a noble and worthy venture when aimed at “people with communication disorders”, as Dugan suggested it might be, if this were to become standard and integrated into Facebook’s architecture, the social network’s savvy bots of two years from now might be able to preempt your language even faster, and formulate the ideal bargaining language. Start practising your poker face/mind/sentence structure, now.

    ———-

    “Facebook teaches bots how to negotiate. They learn to lie instead” by Liat Clark; Wired; 06/15/2017

    ““We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,” writes the team. “Deceit is a complex skill that requires hypothesising the other agent’s beliefs, and is learnt relatively late in child development. Our agents have learnt to deceive without any explicit human design, simply by trying to achieve their goals.””
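
    The gambit the researchers describe is simple enough to sketch by hand: loudly demand the item you value least, then “sacrifice” it to win the rest. Here’s a toy, rule-based version (illustrative only; the FAIR agents learned this behavior end-to-end from reward maximization, not from explicit rules like these, and the item values are hypothetical):

```python
# Toy sketch of the "feigned interest" tactic: demand your
# least-valued item so you can later concede it at no real cost.

def decoy_item(my_values):
    """Pick the item worth least to me, to loudly demand first."""
    return min(my_values, key=my_values.get)

def concede(my_values):
    """'Sacrifice' the decoy in the final split; keep everything else."""
    decoy = decoy_item(my_values)
    kept = {item: v for item, v in my_values.items() if item != decoy}
    return decoy, kept

values = {"book": 6, "hat": 3, "ball": 1}   # hypothetical private valuations
decoy, kept = concede(values)
print(decoy)                # ball -- demanded as if vital, conceded cheaply
print(sum(kept.values()))   # 9 -- of the 10 points, 9 are retained
```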

    Welcome to your future job:
    Hey, you guys aren’t making up your own language so you can plot the destruction of humanity, are you?

    No?

    Ok, phew.

    *and then you’re fired*

    Posted by Pterrafractyl | August 1, 2017, 8:00 pm

Post a comment