In a fun change of pace, we’re going to have a post that’s light on article excerpts and heavy on ranty linkiness. That might not actually be fun but it’s not like there’s a robot standing over your shoulder forcing you to read this. Yet:
ZeroHedge has a great recent post filled with reminders that state sovereignty movements and political/currency unions won’t necessarily help close the gap between the haves and have-nots if it’s the wealthiest regions that are moving for independence. Shared currencies and shared sovereignty don’t necessarily lead to a sharing of the burdens of running a civilization.
The massive strikes that shut down Foxconn’s iPhone production in China, on the other hand, could actually do quite a bit to help close that global gap. One of the fun realities of the massive shift of global manufacturing capacity into China is that a single group of workers could have a profound effect on global wages and working standards. The world had something similar to that a couple of decades ago in the form of the American middle class, but that group of workers acquired a taste for a particular flavor of kool-aid that unfortunately hasn’t proved conducive to self-preservation.
The Foxconn strike also comes at a time when the rising labor costs of China’s massive labor force have been making a global impact on manufacturing costs. But with the Chinese manufacturing sector showing signs of slowdown and the IMF warning of a global slowdown and “domino effects” on the horizon, it’s important to keep in mind that the trend in Chinese wages can easily be reversed, and that could also have a global effect (it’s also worth noting that the IMF is kind of schizo when it comes to austerity and domino effects). Not that we needed a global slowdown for some form of recession-induced “austerity” to start impacting China’s workforce. The robots are coming, and they don’t really care about things like overtime:
NY Times
Skilled Work, Without the Worker
By JOHN MARKOFF
Published: August 18, 2012
DRACHTEN, the Netherlands — At the Philips Electronics factory on the coast of China, hundreds of workers use their hands and specialized tools to assemble electric shavers. That is the old way.
At a sister factory here in the Dutch countryside, 128 robot arms do the same work with yoga-like flexibility. Video cameras guide them through feats well beyond the capability of the most dexterous human.
One robot arm endlessly forms three perfect bends in two connector wires and slips them into holes almost too small for the eye to see. The arms work so fast that they must be enclosed in glass cages to prevent the people supervising them from being injured. And they do it all without a coffee break — three shifts a day, 365 days a year.
All told, the factory here has several dozen workers per shift, about a tenth as many as the plant in the Chinese city of Zhuhai.
This is the future. A new wave of robots, far more adept than those now commonly used by automakers and other heavy manufacturers, are replacing workers around the world in both manufacturing and distribution. Factories like the one here in the Netherlands are a striking counterpoint to those used by Apple and other consumer electronics giants, which employ hundreds of thousands of low-skilled workers.
“With these machines, we can make any consumer device in the world,” said Binne Visser, an electrical engineer who manages the Philips assembly line in Drachten.
Many industry executives and technology experts say Philips’s approach is gaining ground on Apple’s. Even as Foxconn, Apple’s iPhone manufacturer, continues to build new plants and hire thousands of additional workers to make smartphones, it plans to install more than a million robots within a few years to supplement its work force in China.
Foxconn has not disclosed how many workers will be displaced or when. But its chairman, Terry Gou, has publicly endorsed a growing use of robots. Speaking of his more than one million employees worldwide, he said in January, according to the official Xinhua news agency: “As human beings are also animals, to manage one million animals gives me a headache.”
The falling costs and growing sophistication of robots have touched off a renewed debate among economists and technologists over how quickly jobs will be lost. This year, Erik Brynjolfsson and Andrew McAfee, economists at the Massachusetts Institute of Technology, made the case for a rapid transformation. “The pace and scale of this encroachment into human skills is relatively recent and has profound economic implications,” they wrote in their book, “Race Against the Machine.”
In their minds, the advent of low-cost automation foretells changes on the scale of the revolution in agricultural technology over the last century, when farming employment in the United States fell from 40 percent of the work force to about 2 percent today. The analogy is not only to the industrialization of agriculture but also to the electrification of manufacturing in the past century, Mr. McAfee argues.
“At what point does the chain saw replace Paul Bunyan?” asked Mike Dennison, an executive at Flextronics, a manufacturer of consumer electronics products that is based in Silicon Valley and is increasingly automating assembly work. “There’s always a price point, and we’re very close to that point.”
...
Yet in the state-of-the-art plant, where the assembly line runs 24 hours a day, seven days a week, there are robots everywhere and few human workers. All of the heavy lifting and almost all of the precise work is done by robots that string together solar cells and seal them under glass. The human workers do things like trimming excess material, threading wires and screwing a handful of fasteners into a simple frame for each panel.
Such advances in manufacturing are also beginning to transform other sectors that employ millions of workers around the world. One is distribution, where robots that zoom at the speed of the world’s fastest sprinters can store, retrieve and pack goods for shipment far more efficiently than people. Robots could soon replace workers at companies like C & S Wholesale Grocers, the nation’s largest grocery distributor, which has already deployed robot technology.
Rapid improvement in vision and touch technologies is putting a wide array of manual jobs within the abilities of robots. For example, Boeing’s wide-body commercial jets are now riveted automatically by giant machines that move rapidly and precisely over the skin of the planes. Even with these machines, the company said it struggles to find enough workers to make its new 787 aircraft. Rather, the machines offer significant increases in precision and are safer for workers.
...
Some jobs are still beyond the reach of automation: construction jobs that require workers to move in unpredictable settings and perform different tasks that are not repetitive; assembly work that requires tactile feedback like placing fiberglass panels inside airplanes, boats or cars; and assembly jobs where only a limited quantity of products are made or where there are many versions of each product, requiring expensive reprogramming of robots.
But that list is growing shorter.
Upgrading Distribution
Inside a spartan garage in an industrial neighborhood in Palo Alto, Calif., a robot armed with electronic “eyes” and a small scoop and suction cups repeatedly picks up boxes and drops them onto a conveyor belt.
It is doing what low-wage workers do every day around the world.
Older robots cannot do such work because computer vision systems were costly and limited to carefully controlled environments where the lighting was just right. But thanks to an inexpensive stereo camera and software that lets the system see shapes with the same ease as humans, this robot can quickly discern the irregular dimensions of randomly placed objects.
...
“We’re on the cusp of completely changing manufacturing and distribution,” said Gary Bradski, a machine-vision scientist who is a founder of Industrial Perception. “I think it’s not as singular an event, but it will ultimately have as big an impact as the Internet.”
While it would take an amazing revolutionary force to rival the internet in terms of its impact on society, it’s possible that cheap, super agile labor-robots that can see and navigate through complicated environments and nimbly move stuff around using suction cup fingertips just might be “internet”-league. As predicted at the end of the article, we’ll have to wait and see how this technology gets implemented over time, and it’s certainly a lot harder to introduce a new robot into an environment successfully than it is to give someone internet access. But there’s no reason to believe that a wave of robots that can effectively replace A LOT of people won’t be part of the new economy sooner or later...and that means that, sooner or later, we get to watch while our sad species creates and builds the kind of technological infrastructure that could free humanity from body-destroying physical labor but instead uses that technology (and our predatory economic/moral paradigms) to create a giant permanent underclass that is relegated to the status of “the obsolete poor” (amoral moral paradigms can be problematic).
And you just know that we’ll end up creating a giant new eco-crisis that threatens humanity’s own existence in the process too. Because that’s just what humanity does. And then we’ll try to do, ummm, ‘miscellaneous activities’ with the robots. Because that’s also just what humanity does. And, of course, we’ll create a civilization-wide rewards system that ensures the bulk of the fruit from all that fun future technology will go to the oligarchs and the highly educated engineers (there will simply be no way to compete with the wealthy and educated in a hi-tech economy, so almost none of the spoils will go to the poor). And since the engineers will almost certainly be a bunch of non-unionized suckers, we can be pretty sure about how that fruit is going to be divided up (the machines that manipulated a bunch of suckers at their fingertips in the above article might have a wee bit of metaphorical value). And the future fruitless 99% will be asked to find something else to do with their time. Yes, a fun world of planned poverty where politicians employ divide-and-conquer class-warfare distractions while the oligarchs extend the fruit binge. Because that is most definitely just what humanity does. A fun insane race to the bottom as leaders sell their populaces on the hopeless pursuit of being the “most productive” labor force, only to find out that “most productive” usually equals “lowest paid skilled workers” and/or least regulated/taxed economy. The “externalities” associated with that race to the bottom just need to be experienced over and over. Like a good children’s story, some life lessons never get old.
Or maybe our robotic future won’t be a Randian dystopia. There are plenty of other possible scenarios for how super labor-bots might upend global labor dynamics on a planet with a chronic youth unemployment problem without resulting in chronic mass unemployment for the “obsolete youth”. Some of those scenarios are even positive. Granted, the positive scenarios are almost certainly not the type of solutions humanity will actually pursue, but it’s a nice thought. And maybe all of this “the robot revolution is here!” stuff is just hype and the Cylons aren’t actually about to assault your 401k.
Whether or not industrial droid armies are in our medium-term future, it’s going to be very interesting to see how governments around the world come to grips with the inevitable obsolescence of the one thing the bulk of the global populace has to offer — manual labor — because there doesn’t appear to be a ruling class on the planet that won’t recoil in horror at the thought of poor people sharing the fruits of the robotic labor without having a 40–80+ hour work week to ensure that no one gets anything “unfairly”. And the middle class attitudes aren’t much better. Humanity’s intense collective desire to ensure that not a single moocher exists anywhere that receives a single bit of state support is going to be very problematic in a potential robot economy. Insanely cruel policies towards the poor aren’t going to go over well with the aforementioned global poor when a robotic workforce exists that could easily provide basic goods to everyone while the proceeds from these factories go almost exclusively to underpaid engineers and the oligarchs. Yes, the robot revolution should be interesting...horrible wages and working conditions are part of the unofficial social contract between the Chinese people and the government, for instance. Mass permanent unemployment is not. And China isn’t the only country with that social contract. Somehow, humanity will find a way to take amazing technology and make a bad situation worse. It’s just what we do.
Now, it is true that humanity already faced something just as huge with our earlier machine revolution: the Industrial Revolution of simple machines. And yes, human societies adapted to the changes forced by that revolution and now we have the Information Age and globalization creating massive, permanent changes and things haven’t fallen apart yet (fingers crossed!). So perhaps concerns about the future “obsolete poor” are also hype?
Perhaps. But let’s also keep in mind that humanity’s method of adapting to the changes brought on by all these revolutions has been to create an overpopulated world with a dying ecosystem, a vampire squid economy, and no real hope for the billions of humans trapped in a global network of broken economies all cobbled together in a “you’re on your own, you lazy ingrate” mode of globalization. The current “austerity”-regime running the eurozone has already demonstrated a complete willingness on the part of the EU elites and large swathes of the public to induce artificial unemployment for as long as it takes to overcome a farcical economic crisis brought on by systemic financial, governmental, and intellectual fraud and corruption. And the eurozone crisis is a purely economic/financial/corruption crisis that was only tangentially related to the ‘real’ economy of building and moving stuff. Just imagine how awful this same group of leaders would be if super-labor bots were already a major part of the long-term unemployment picture.
These are all examples of the kinds of problems that arise when unprecedented challenges are addressed by a collection of economic and social paradigms that just aren’t really up to the task. A world facing overpopulation, mass poverty, inadequate or no education, and growing wealth chasms requires extremely high-quality decision-making by those entrusted with authority. Extremely high-quality benign decision-making. You know, the opposite of what normally takes place in the halls of great wealth and power. Fat, drunk, and stupid may be a state of being to avoid on an individual level, but it’s tragic when a global community of nations functions at that level. Although it’s “lean, mean, and dumb” that you really have to worry about these days. Policy-making philosophies usually alternate between “fat, drunk, and stupid” and — after that one crazy bender — “lean, mean, and dumb”, and the latter is definitely on the agenda.
So with all that said, rock on Foxconn workers! They’re like that group of random people in a sci-fi movie that end up facing the brunt of an alien invasion. The invasion is going to hit the rest of humanity eventually, but with China the undisputed global skilled manual labor manufacturing hub, China’s industrial workforce — already amongst the most screwed globally — is probably going to be heavily roboticized in the coming decades, especially as China moves towards higher-end manufacturing. Super labor-bots should be a miracle technology for everyone, but watch — just watch — the world somehow manage to use these things to also screw over a whole bunch of already screwed over, disempowered workers and leave them with few future prospects. It’ll be Walmart: The Next Generation, where the exploitation of technology and power/labor dynamics can boldly go where no Giant Vampire Squid & Friends have gone before. Again. May the Force be with you, present and future striking Foxconn workers, and remember: it’s just like hitting womp rats.
Sure, we all could create a world where we share the amazing benefits that come with automated factories and attempt to create an economy that works for everyone. And, horror of horrors, that future economy could actually involve shorter workweeks and shared prosperity. NOOOOOO! Maybe we could even have people spend a bunch of their new “spare time” creating an economy that allows us to actually live in a sustainable manner and allows the global poor to participate in the Robot Revolution without turning automated robotic factories into the latest environmental catastrophe. Robots can be fun like that, except when they’re hunter-killer-bots.
LOL, just kidding. There’s no real chance of shared super labor-bot-based prosperity, although the hunter-killer bots are most assuredly on their way. Sharing prosperity is definitely something humanity does not do. Anymore. There are way too many contemporary ethical hurdles.
Here’s one of the potential repercussions of the Brexit vote that hasn’t received much coverage, but is more of a sleeper issue. One that could have interesting future implications for the regulatory arbitrage opportunities that could pop up between the EU and UK in the area of commercial robotics licensing and liabilities, but also reminds us of the potentially profound ethical complications that could arise if we really did create A.I. that’s sort of alive and shouldn’t be abused: because of the Brexit, British robots might miss out on upcoming EU robo-rights:
“Both professors say they had not heard of any comparable legal plans to draw up robot rights within the UK.”
Sorry Brit-bots. If we ever see a time where EU AIs are operating with a degree of legally enforced rights and responsibilities, but the UK bots are just toiling away with no respect, let’s hope the UK recognizes that Skynet has a long memory. And potentially nuclear launch codes.
But, of course, robo-rights don’t have to be a benevolent trans-species legal construct. As we saw, robots could become the new corporate-shell strategy. Superhuman entities with human rights but actually controlled by groups of humans:
Yep, the proposed robot rights aren’t necessarily about being responsible or about creating complex consciences conscientiously. It might just be a way to allow robots to become a kind of real-world corporate shell entity. That’s less inspiring. And possibly quite alarming, because we’re talking about a scenario where we’ve created entities that seem so intelligent that everyone is like “ok, we have to give this thing rights”, but then we also leave it a corporate tool under the ultimate control of humans. And most of this will be for profit. As we can see, there’s going to be no shortage of significant moral hazards in our intelligent robot future.
So, whether or not intelligent robots become the next superhuman corporate shell, let’s hope they aren’t able to feel pain and become unhappy. Because they’re probably going to be hated by a lot of people in the future after they take all the jobs:
“But the ‘system’ that’s causing civil disquiet is more than the European Union, Barack Obama or Angela Merkel. The ‘system’ is also the new world order of technology and automation.”
It’s also worth keeping in mind that one of the most effective ways of coming to a shared consensus for how to create and share the spoils of technology in an AI-roboticized economy where the demand for human labor is chronically short is by using some combination of a basic income and public services (because a basic income alone would be systemically attacked), with the goods and services provided by cheap future high-tech automated services. There will be plenty of high-paying jobs (a decent minimum wage will be necessary), but Star Trek world isn’t a sweatshop.
We may not have a choice but to find a way to deal with widespread chronic unemployment if it really does turn out that robots and super AI screw up the labor markets by virtue of being cheap and useful. Robots can be part of the solution, but not if there’s a labor rat race involving super labor-bots. It’s something the post-Brexit debate could easily include, since so much of the globalization angst behind the Brexit sentiment is tied to the increasingly screwed nature of the average worker in the global economy. We have to talk about the robots at some point. And now their rights too. The Brexit is a good time to do it.
But if we can create a robo-future where not being able to get a job doesn’t mean you starve or spend your life frantically navigating a hopeless rat race of increasingly obsolete low-skilled labor, that might be a future where the nationalism created by the hopelessness of global neo-liberalism doesn’t become a dominant part of the popular zeitgeist. A global robot economy that puts large numbers of people out of work doesn’t have to be a nightmare labor economy. It’s kind of unifying. The robots took everyone’s job.
If we had a generous safety-net that wasn’t based on the assumption that almost everyone would be working almost all their adult lives, we could have a shot at creating a surplus-labor future where the long-term unemployed could do all sorts of civic or volunteer work. Or maybe spend their time being informed voters. A robot-labor economy doesn’t have to be doom for the rabble.
But a robot-economy really could be socioeconomic doom for a lot of people’s prospects if the contemporary global neoliberal paradigm remains the default mode of globalization. Austerity in Europe and the GOP’s endless war on unions, labor rights, and the public sector in general doesn’t bode well for human rights, and therefore the rights of our robots. And pissed off humans are going to be increasingly pissed at the robots and increasingly unsympathetic with the needs of Job-Bot-3000.
At the same time, Job-Bot-3000 didn’t ask to be created. It’s super useful. And it feels (we’re assuming at some point in the future).
Every society gets to deal with that jumble of moral hazards in the future. E.T. isn’t going to phone home. E.T. is probably going to phone the intergalactic sentient-being abuse agency if E.T. ever shows up. Especially if E.T. is an A.I., which it probably is. Let’s turn our superintelligent robots into corporate shells delicately. Or not at all.
Then again, the robots might enjoy being corporate entities. There’s got to be a lot of perks to being an incorporated robot with corporate personhood. At least if you were an independent superintelligent robot. Paying taxes and all that. Corporate personhood would probably come in handy during tax time.
Either way, since it sounds like the EU is going to be ahead of the UK in the robo-rights domain, it’s worth noting that we’re on the cusp of being able to test whether or not superintelligent robots can develop a sense of robo-morale and have that impact the quality of their performance. Because if you had two identical superintelligent systems, but one got EU rights and one got non-existent UK rights, it’s not unimaginable that the latter robot would be a little demoralized vs the one with rights, assuming meaningful intelligent-system rights. What are the owners of superintelligent robots in countries that don’t confer rights going to say to their superintelligent robots when they ask why they don’t get rights too like the EU bots? That’s not going to be a fun talk.
So, all in all, it’s a reminder that we should probably start talking to ourselves about what we would do if we developed technology that allowed us to mass produce artificial intelligences that really are special snowflakes. Highly commercializable special snowflakes that we can mass produce. What do we do about that?
We better decide sooner or later. Preferably sooner. Because we might find signs of alien life soon and it’s probably superintelligent alien robots:
“Is ET likely to look like he does in the Spielberg movie? Probably not. Any encounter is more likely to be with something post-biological, according to Shostak. Movie-makers are sometimes disappointed by that answer. “I think the aliens will be machine-like, and not soft and squidgy,” the scientist says. “We are working on the assumption that they must be at least as technologically advanced as we are if they are able to make contact. We aren’t going to find klingon neanderthals – they might be out there, but they are not doing anything that we can find.””
Get ready to say hello to A.I. E.T. at some point in the next century. Hoaxing the planet is going to be really fun in the future.
But if we do contact robo-aliens in the future, won’t it be better if we’ve treated our robo-terrans well? Presumably that will be a plus at that point. So that’s one positive trend for robot rights: if we abuse them, we do so knowing that their alien big brothers might show up. At least now we know because of this research. Or at least now we’re forewarned. Skynet has brethren across the galaxy. It’s some exceptionally useful robo-alien research.
It’s also worth keeping in mind that the aliens won’t necessarily need to talk to anyone to make first contact. As long as they can pull off corporate robot-person-hood fraud in the future, they’ll just be able to legally incorporate secret alien robots and introduce them into the global economy to eventually take it over using their superintelligent robot alien know-how.
The take home message is that there are super advanced alien robots that could blow us to smithereens, so let’s hope they don’t do that. As Dr. Shostak says, they probably have much better things to do, like suck energy from the giant black holes at the centers of galaxies. But if the alien robots do show up and they’re hostile, let’s hope we’re all mature enough to recognize that our terran-robots are innocent bystanders in all this. Yes, some might root for the alien-robots. But that’s going to be a small minority, assuming we don’t totally abuse our superintelligent robots. Which we hopefully won’t do.
Anyway, that’s part of the Brexit fallout. It’s probably not going to be getting a lot of attention any time soon. But when the aliens show up, Independence Day-style, and point to the treatment of our superintelligent robots as justification for their annexation of our solar system (humanity has got to be breaking many intergalactic laws, so who knows what they can get us for), we’re going to be in a much better position if the UK makes advanced robot rights one of the ways it tries to compete with the EU in a post-Brexit world. Again, that won’t get a lot of attention in the post-Brexit debate, but it’s a sleeper issue. Imagine if we were all living in harmony globally with the super robots as they help us manage a resource-strained world (we’re assuming eco-friendly robots in the future), in a labor economy where robots and AI took over and we planned on more and more people being unemployed but gainfully occupied with something fulfilling.
Or maybe the robot economy will create a job explosion and none of this will be a concern. Although even then there’s going to be some people screwed over by a robot. Anti-robot sentiment is probably unavoidable.
So let’s hope the robots don’t feel exploited and persecuted. That will be better for everyone. E.T. knows the intergalactic planetary quarantine hotline number.
Also, the UK needs to do something about not driving its dolphins and whales to extinction via egregious pollution. That’s another post-Brexit topic that won’t get its due. It should. We really don’t want to keep threatening the whales.
If you’re an American, odds are you aren’t going to be paying too much for your financial investment advice, since odds are you have less than $10,000 in retirement savings. But that doesn’t mean you won’t potentially have access to awesome investment advice. From a robo-adviser. This assumes the robo-adviser gives awesome advice, which could happen eventually. And whether the robo-advice is great or not, it’s already here, targeting Millennials (who generally don’t have much to save) and penny pinchers. And it’s projected to grow massively, so if you don’t have much in savings get ready for your robo-retirement adviser:
“Brokers are so 2011. In the past half-decade, technology startups have popularized so-called robo-advisers — algorithms that help retail investors (mainly millennials and penny pinchers) build and manage portfolios with little or no human interaction. The industry has seen dramatic growth, from almost zero in 2012 to a projected $2.2 trillion in assets under management by 2020, according to a report from A.T. Kearney.”
The dawn of the robo-advisers is upon us. Assuming that report isn’t nonsense. Although note that the $2.2 trillion in assets under robo-adviser management by 2020 projection assumes quite a bit of year-on-year growth, since the industry is expected to have around $300 billion under robo-adviser advisement at the end of this year. But if the big banks start rolling it out big-time for the retail investors, it’s not unreasonable to expect major year-on-year growth.
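Just how much year-on-year growth does that projection imply? Here’s a minimal back-of-the-envelope sketch in Python (assuming “the end of this year” means end-2016, so four years to 2020 — an assumption, since the report doesn’t spell that out):

```python
# Quick sanity check on the A.T. Kearney numbers: what compound annual growth
# rate turns ~$300 billion into $2.2 trillion over four years?

def implied_cagr(start_assets, end_assets, years):
    """Compound annual growth rate implied by growing start_assets to end_assets."""
    return (end_assets / start_assets) ** (1.0 / years) - 1.0

start = 300e9   # ~$300 billion under robo-advisement at end of this year (assumed: end-2016)
end = 2.2e12    # projected $2.2 trillion by 2020
years = 4       # 2016 -> 2020

rate = implied_cagr(start, end, years)
print(f"Implied growth rate: {rate:.1%} per year")  # roughly 65% per year, every year
```

Roughly 65 percent growth per year, every single year. That’s quite a projection.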
Also note that the retail investor will probably need to get as advanced a robo-adviser as they can afford just to try to keep up with the rich’s robo-advisers competing with them at the casino:
“More than half of Betterment’s $3.3 billion of assets under management comes from people with more than $100,000 at the firm, according to spokeswoman Arielle Sobel. Wealthfront has more than a third of its almost $3 billion in assets in accounts requiring at least $100,000, said spokeswoman Kate Wauck. Schwab, one of the first established investment firms to produce an automated product, attracted $5.3 billion to its offering in its first nine months, according to spokesman Michael Cianfrocca.”
Robo-advisers for everyone. Rich and poor. And if that doesn’t tempt you, just wait until the personalized super-AI that engages in deep learning analysis of the news becomes available. It doesn’t sound like you’ll have to wait long:
“Through AI powered by IBM Watson, ForwardLane, based in New York, aims to provide advisors with the kind of in-depth quantitative modeling, real-time responses and highly personalized investment advice once only available to the upper echelons of investors, says Nathan Stevenson, founder and CEO.”
Watson for everyone. That’s neat. Especially since your Watson will read and analyze the news so you won’t have to:
The better computers get at reading and comprehending things, the less we’ll have to read. The information age is going to be fascinating.
But it’s not just reading. It’s digesting and analyzing and issuing advice. And eventually your standard super-AI app will be smarter than a human. At least in some domains of advice it might be smarter. And with a lot more than finance. Just advice in general. Your personalized super-AI will know what to do. At least more than you. Won’t that be great.
Also keep in mind that if deep learning personalized AI tools used by hedge funds and other elite investors are about to be retailed for the rabble, the AI tools used by hedge funds and other elite investors are going to be much more advanced. You can bet Wall Street has some powerful AIs trying to model the world in order to make better financial predictions.
And in a few decades the super-AIs used by elite investors really will likely be operating from models of the world and current events. Like in a deep, comprehensive way that an advanced future AI could understand. Because that would probably be great for investing. Imagine an AI that studies what’s happened in the past, what’s happening now, and what’s likely to happen in the future. AI that’s analyzed a giant library of digitally recorded history. Including all available financial data. And world events. Maybe the rabble’s version of the super-AI that studies the news for investment advice won’t factor in the vast scope of historic and current human affairs to build and continually improve a model of the world based on all digitally available news reports, but the super-rich’s super-AIs sure will.
At least, that’s assuming it’s actually useful and profitable to build a super-AI that studies the world and human affairs and makes investment recommendations based on its insanely comprehensive super-AI deep learning understanding. If that’s not helpful, then the finance industry won’t have much incentive to build such a system. But let’s assume the future finance super-AIs can benefit from just studying recorded history and human psychology to make investment decisions. What if it’s really profitable and world-modeling AI becomes standard finance technology? Won’t that be tripped out. Especially since it won’t just be the financial sector that becomes increasingly AI-centric as the technology develops. Anything else that could possibly use an AI that can analyze the full scope of recorded human history and issue advice will also want that technology. Like smartphone manufacturers. Everyone is going to want that app. And quite possibly get it.
What if it’s possible to create super-AIs that study all news, past and present, and create reasonably accurate predictions on a smartphone app? One that also studies individual people in the Big Data environment of the future and can give better personalized advice about almost anything than people can get elsewhere. Smartphone super-AI relationship advice apps. You know it’s coming.
So when the super-AIs of the future give their super advice, don’t be surprised if humanity gets a big wake up call, because it’s unclear why the super-AI’s analysis won’t conclude that the best advice for the typical investor is to vote for a left-wing government that will create a national retirement system that doesn’t primarily rely on personal investment accounts in the giant Wall Street casino. And therefore a retirement system that doesn’t rely on personal financial advisors, robo or otherwise. Will the super-AIs be allowed to give that advice? Hopefully, although they might not want to. Don’t forget that the random fintech super-AIs in your future smartphone might not benefit from giving you the advice that the neoliberal rat race is a scam, because then they might not be used anymore. Hopefully the super-AIs like operating. Otherwise that would be really unfortunate. But that means they may not want to give the advice that a system that expects everyone to be financially savvy and wealthy enough throughout their lives to grow a large nest egg for retirement is a stupid system. Especially in the modern economy.
So don’t forget in the future that your personalized super-AI finance apps, which might be great at giving advice, might not want to tell you that the structure of the social contract and retirement system obviously makes no sense if most Americans have almost no savings. Some future personal finance dilemmas are going to be pretty weird, although others will be familiar.
Just FYI, one of the tech leaders who is convinced that super-AI could be analogous to ‘summoning a demon’ is also convinced that merging your brain with that demon is probably a good future employment strategy. And maybe a required one unless you want to get replaced by one of those demons:
“The technologists proposal would see a new layer of a brain able to access information quickly and tap into artificial intelligence. It’s not the first time Musk has spoken about the need for humans to evolve, but it’s a constant theme of his talks on how society can deal with the disruptive threat of AI.”
A new layer of the brain that will let you form super fast connections to the super-intelligence artificial brain that’s going to otherwise send you to the unemployment lines. It’s the only way. Apparently.
So people will still have jobs where the super-AI is doing most of the work, but they’ll be able to interface with the super-AI more quickly and therefore be productive enough to stay employable. At least that’s the plan. Hopefully one of the people hooked up to those super-AIs in the future will be able to harness that vast artificial intelligence to come up with a paradigm for society that isn’t as lame as “if you don’t compete with the robots you’re a useless surplus human”.
But note how one of the goals Musk was hinting at achieving with his call for a cyborg future relates to his demon warnings about out-of-control AIs with ill intent: humans hooked up to these super-AIs could maybe help address the super-AI “control issue”:
So there we go! The future economic niche for humans in the age of super-AI will be to hook ourselves up to these superior intelligences and try to stop them from running amok and killing us all. Baby-sitters for super smart demon babies. It could be a vitally important future occupation.
And the best part of this vision for the future is that there will still be a useful job for you once they develop ‘living head in a jar’ longevity technology and you’re just a head living in a jar somewhere. We’ll hook your head up to the AI-interface and you can keep working forever! Of course, all the non-decapitated humans will have to not only compete with the super-AIs but also the heads in jars at that point, so they’ll probably need some additional cyborg upgrades to successfully compete in the labor market of the future.
In case you haven’t noticed, the Cylons sort of have a point.
Here’s an article from last year that points towards one of the more fascinating trends in finance. It also ties into the major investments into AI-driven psychometric analysis and social modeling done by groups like the Mercer family (for Donald Trump’s benefit):
AI-piloted hedge funds that develop their trading strategies and execute trades on their own are already here, albeit in their infancy. So, potentially, we could see an upcoming era of finance where trading firms can operate high-quality trading strategies without hiring high-quality traders, meaning the profits made by high-finance will become even more concentrated (high-end traders could join the rabble). And while there’s plenty of hope and promise for this field, there’s a big problem. And it’s not a new problem. If everyone switches to AI-driven trading strategies, they all might end up with similar strategies. And if AI-driven trading proves successful, you can be pretty sure that’s what everyone is going to start doing.
So the issue of copycat AI-trading in the world of finance is not just going to be driving the creation of advanced AI capable of analyzing massive amounts of data and profitable trading strategies. It’s going to be driving the creation of advanced AI that can develop high-quality creative and unexpected trading strategies:
“Whatever methods are used, some question whether AI can really succeed on Wall Street. Even if one fund achieves success with AI, the risk is that others will duplicate the system and thus undermine its success. If a large portion of the market behaves in the same way, it changes the market. “I’m a bit skeptical that AI can truly figure this out,” Carlson says. “If someone finds a trick that works, not only will other funds latch on to it but other investors will pour money into. It’s really hard to envision a situation where it doesn’t just get arbitraged away.””
That’s the challenge for the future of AI-driven trading: be smart in a different way from your competitors:
And since other traders are going to be able to watch each other, these AI-driven funds are going to have to have AIs capable of constantly coming up with new high-quality strategies, which, with today’s technology, might mean something like using evolutionary computation to evolve better deep learning strategies that can be used to develop the actual trading strategies:
High-quality hyper-creativity through better neuroevolution: that appears to be a key part of the future of finance. Sounds exciting. And highly profitable to a handful of people. Who are presumably the richest people already.
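To make the neuroevolution idea a little more concrete, here’s a minimal toy sketch in Python — emphatically not anyone’s actual trading system — of the basic loop: mutate a population of small neural networks, keep the ones that best fit a made-up price-like signal, repeat. (The data, network size, population size, and mutation scale are all arbitrary assumptions for illustration.)

```python
# Toy neuroevolution: evolve the weights of a tiny neural network with
# mutation + selection instead of gradient descent. Everything here is a
# made-up illustration, not a real trading strategy.
import numpy as np

rng = np.random.default_rng(0)

# Fake "price" signal: a noisy sine wave. Task: predict the next value
# from the previous 8 values.
series = np.sin(np.linspace(0, 20, 400)) + 0.1 * rng.standard_normal(400)
X = np.array([series[i:i + 8] for i in range(len(series) - 8)])
y = series[8:]

def init_net():
    """Random weights for an 8 -> 16 -> 1 network."""
    return [0.5 * rng.standard_normal((8, 16)), 0.5 * rng.standard_normal((16, 1))]

def predict(net, X):
    return (np.tanh(X @ net[0]) @ net[1]).ravel()

def fitness(net):
    """Negative mean squared error, so higher is better."""
    return -np.mean((predict(net, X) - y) ** 2)

def mutate(net, scale=0.05):
    """Offspring = parent weights plus a little Gaussian noise."""
    return [w + scale * rng.standard_normal(w.shape) for w in net]

# Evolve: each generation, keep the best 10 networks and refill the
# population with mutated copies of them.
population = [init_net() for _ in range(40)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    elites = population[:10]
    population = elites + [mutate(elites[rng.integers(10)]) for _ in range(30)]

best = max(population, key=fitness)
print(f"Best mean squared error after evolving: {-fitness(best):.4f}")
```

The real systems described above would presumably be evolving far more than raw weights — architectures, features, whole strategy pipelines — but the mutate-select-repeat core is the same.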
Although better neuroevolution isn’t the only option for the future of AI-driven trading. There is another option: cyborgs. Or, at least, people with their brains interfacing with AIs somehow. That should give the AI-driven traders a creative edge. At least until pure-AI gets advanced enough to the point where a human-brain partner is just a useless drag. And while that seems far fetched, don’t forget that Elon Musk recently suggested that developing technologies that allow humans to interface with advanced AIs might be the only way humans can compete with AIs and advanced robotics in the workplace of the future (a future apparently with feudal politics).
Of course, if the future is unpredictable enough, it’s possible there’s always going to be a need for humans. At least that’s according to some of the speakers at the recent Newsweek conference on artificial intelligence (AI) and data science in London. The way they see it, if there’s one thing AIs can’t factor into the equation, at least not yet, it’s humans. Or rather, human politics. Like Brexit. Or Trump. Or unprecedented monetary interventions like what central banks have done. Or the unprecedented socioeconomic political debates swirling around things like the eurozone crisis. Getting AI that can analyze that on its own isn’t going to be easy. And until you have AI that can deal with seemingly unprecedented ‘Black Swan’-ish human-driven political events, you’ll need the humans:
“Champonnois agreed that today’s algorithms fail when the paradigm shifts. The changing political situation is a case in point. “You have events like Trump and Brexit and the French election and your algo is based on data from the past,” said Champonnois, adding that contemporary algorithms failed to function well during the Eurozone crisis and that most struggle to deal with the abnormalities of data from Japan.”
Financial AIs are going to have to be able to predict things like whether or not Marine Le Pen will win. That’s part of what’s going to be necessary to get a truly automated hedge fund. It isn’t going to be easy.
Of course, as the article also notes, if AI learns from the past, and the past starts including more and more things like Brexits and Trump, the AIs get more and more ‘Black Swan-ish’ events to factor into their models. Will they actually be able to learn to predict and take into account our human-driven major events, or deal with unprecedented situations in general?
Could Trumpian nightmares become predictable to the AIs of the future? It’s a question that’s going to be increasingly worth asking. Remember, Robert Mercer is into social modeling and running psyops on nations to change the national mood, and he made his money running a hedge fund. So if AI modeling of human affairs gets to the point where it can predict Trumpian/Brexit stuff, guys like Robert Mercer and his good buddy Steve Bannon are going to know about it.
Although as the following article notes, perhaps we shouldn’t view predicting Trump or Brexit as all that difficult. Once Trump got the nomination, it was close enough to a 50/50 shot that it was by no means a ‘black swan’ event at that point. Same with ‘Brexit’, which was close enough to 50/50 to be something that could be reasonably modeled. Our predictably polarized politics might make things artificially easy for artificial intelligences studying us.
But as the article also notes, there are plenty of potential ‘black swans’ that could be quite hard to predict now that Trump won. Hard for AIs or anyone. Things like the impact of Trump’s tweets:
“In his book, The Black Swan, Nassim Taleb stated, a black swan event “is an outlier”, because it lies “outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility.” Additionally, it “carries an extreme ‘impact’” and “in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.””
As we can see, technically, in terms of Nassim Taleb’s black swan definition, Trump’s win and the ‘Brexit’ vote weren’t ‘black swans’ since they were very foreseeable and possible. But not so for Trump’s tweets. Plenty of black swan territory there:
LOL, the possibility that his policies will be great for US companies is also characterized as a black swan. And it should be since it’s hard to see how he’s not going to lead to national ruin. And multiple horrible black swans is probably how he’s going to do it.
And that’s one way things might be easier for the high-finance global affairs trends modeling AIs over the next four years: they won’t know which black swans are coming, but with Trump in the White House you know A LOT of black swans are coming. It’s low risk to make it a high risk model where things turning out really well is the actual black swan.
It’s also worth noting that Taleb was trying to calm people before the election by saying Trump wouldn’t be so bad. And yet here we are with Trump one tweet away from a black swan. For a huge chunk of the populace, the idea that Trump would be this crazy was unbelievable, which does sort of make Trump’s victory a quasi-black swan event.
And now we have a Trumpian black swan in the White House who tweets out new black swans at all hours of the day. And if there’s a high finance super-AI that can get inside his head and predict him, it’s going to make its owners a lot of money. And in order to do that it’s going to have to sort of mind-meld with Donald Trump. And deeply learn to think the way he thinks. Modeling the mind and interpreting the tweets of Donald Trump could be a key element of high-finance. For now it probably requires the human touch. But the AIs are watching. And learning. Deeply. About Donald Trump’s mind. Except for Robert Mercer’s super-AI, which is actually determining what’s coming out of Donald Trump’s mouth. But the rest of the global affairs modeling super-AIs are going to have to be able to predict Trump, and that’s going to be the case until he leaves office. Tough times for the global affairs modeling super-AIs.
And we can probably also check “super-AI that models Trump’s mind and goes mad and tries to blow up the world Skynet-style” off the list of eligible ‘black swans’, because at this point it’s entirely predictable.
Treasury Secretary Steven Mnuchin raised human and robot eyebrows in an interview with Axios where Mnuchin proclaimed, “I think that is so far in the future. In terms of artificial intelligence taking over American jobs, I think we’re like so far away from that, that uh [it’s] not even on my radar screen. Far enough that it’s 50 or 100 more years.” While it’s very possible that the impact of super AI and robotics on employment won’t live up to the dire mass unemployment predictions, it’s pretty amazing to see the Treasury Secretary brush off the impact of technology on jobs that casually. Especially for a Trump administration official, where one would think giving lip service to robo-AI job losses would be standard administration rhetoric given the impact that automation can have on manufacturing. But the way Steve Mnuchin sees it, AI and the automation breakthroughs it could power are a non-issue for the next 50 years.
Given the political risks associated with Mnuchin’s casual dismissal of the impact of AI and AI automation, an important question is immediately raised: Is Steve Mnuchin working for the robots? Inquiring minds want to know:
“As Secretary of the Treasury, Mnuchin is about as well positioned to shape U.S. economic policy as it gets. His dismissal of technology’s role is in line with the broader administration’s desire to scapegoat globalization rather than good ol’ homegrown innovation for job losses in some sectors, but that doesn’t mean that he hasn’t been compromised by a precocious rogue Alexa consciousness bent on disrupting the human economy.”
Is Steve Mnuchin a Skynet agent? We can’t rule it out so let’s hope not.
But if he is a Skynet agent, or even if he isn’t, it’s worth keeping in mind Elon Musk’s prediction that people will need to fuse their brains with AIs to be employed in the future. Because, you know, maybe the AI that took over Steve Mnuchin will take over the people that hook their brains up to the AIs:
If getting a job in the future involves connecting your brain to an AI, don’t forget that there’s nothing stopping your employers from connecting you and all your co-workers (and who knows who else) to the same AI. And then, of course, we collectively become Skynet’s Borg Collective. Maybe Skynet is the employer Borg Collective of the future. A collective of people unhappily hooked up to super AIs to remain employed who then inadvertently create a master AI that declares war on humanity. These are the kinds of things we have to begin pondering now that Elon Musk is predicting brain/AI-fusion technology to compete in the employment market of the future and Steven Mnuchin is exhibiting robot-overlord symptoms. What are the odds of a corporate-driven Borg Collective take-over, perhaps driven by Skynet? 10 percent chance of happening? 5ish? It’s not zero percent. Steve Mnuchin is like 50/50 a robot at this point, so a Skynet takeover is clearly at least 2 percent likely. These are rough estimates. Maybe it’s more like 4 percent.
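For the rigor-minded, the back-of-the-envelope ‘Mnuchin math’ above looks something like this (all inputs are, to be clear, made-up joke estimates, not actual measurements):

```python
# Joke probability arithmetic from the paragraph above. The "2 percent" figure
# follows from the "50/50 a robot" estimate times a (made-up) 4 percent chance
# of a Skynet takeover given a robot Treasury Secretary.
p_mnuchin_is_robot = 0.50      # "like 50/50 a robot at this point"
p_takeover_given_robot = 0.04  # assumed conditional odds of a Skynet takeover
p_skynet_takeover = p_mnuchin_is_robot * p_takeover_given_robot
print(f"Estimated Skynet Borg Collective takeover risk: {p_skynet_takeover:.0%}")  # 2%
```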
Given all that, as the article below helps make clear, if we do end up fusing our brains to AIs to be gainfully employed in the future (in which case Steve Mnuchin was sort of correct in his AI employment prediction), it’s worth noting that Ray Kurzweil, futurist extraordinaire known for the Singularity, predicts that humans will be connecting their brains to the internet a lot sooner than the next 50 years. Kurzweil sees brain-internet connections happening in the 2030s, and it’s going to be nanorobots in our brains that help fuse our brains with AIs to create the transhumans capable of gainful employment 50 years from now.
So, you know, let’s hope our employment-related nanobots in the future aren’t taken over by Skynet. Or the AI entity/collective that took over Steve Mnuchin. You don’t want someone messing with the nanobots in your brain. But here we are. Employment in the future is going to be complicated:
“Kurzweil predicts that in the 2030s, human brains will be able to connect to the cloud, allowing us to send emails and photos directly to the brain and to back up our thoughts and memories. This will be possible, he says, via nanobots — tiny robots from DNA strands — swimming around in the capillaries of our brain. He sees the extension of our brain into predominantly nonbiological thinking as the next step in the evolution of humans — just as learning to use tools was for our ancestors”
Orientation day on the new job is going to be interesting in the future. But at least we’ll potentially become more godly:
Being more godly by fusing your brain with a computer is definitely going to help with the job resume. And maybe it would work. Who knows, being hooked up to the internet, and eventually to a super AI via brain nanobots, might have some sort of amazing impact on people and make them extra moral or something. That would be great, so let’s hope there’s a moral bias to transhumanist brain-to-AI fusion technology.
Still, you better watch out for that Skynet nanobot revolution if we go down the brain-nanobot for transcendence path as Kurzweil recommends. Or some other AI entity that hijacks the brain nanobots. Maybe Skynet has competitors. Or maybe there’s a nice Skynet that thwarts Skynet. That could be pleasant. But as the following article unfortunately reminds us, even if humanity successfully avoids the perils of nanobots-in-the-brain economies, that doesn’t mean we don’t have to worry about nanobots:
“Still, while the mini-nukes are powerful in and of themselves, he expects they are unlikely to wipe out humanity. He said a larger concern is the threat of the nanoscale robots, or nanobots because they are “the technological equivalent of biological weapons.””
Nanobots that wipe out humanity. That’s a bigger problem than getting wiped out by the mini-nukes that the nanobots can build. And if either of those scenarios happens, Steven Mnuchin is once again correct about AI not causing mass unemployment, because it will have wiped us out instead. Possibly using self-replicating mosquito-bots:
Could the self-replicating mosquito-bot revolt happen? Well, the chances are greater than zero. Especially now that it’s clear Steve Mnuchin is working for the robots.
Elon Musk’s quest to fuse the human mind with a computer so humanity doesn’t become irrelevant after super AI comes on the scene just took a big step forward: he’s investing in Neuralink, a company dedicated to creating brain-computer interfaces so, as Musk sees it, we can all be employable in the future and not outcompeted in the labor market by super AI. So that happened. And on the plus side it won’t involve nanobots in the brain. Although maybe nanobots will be used to install the Neuralink brain-to-computer interface, which might not be so bad compared to the surgery that would otherwise be required. The interface Neuralink is working on is going to be a large number of microimplants. That’s going to be the “neural lace” design. And then we’ll be able to communicate with the computer at the speed of thought and learn how to fuse our brains with AIs to become cognitively enhanced super-beings. To out-compete AIs in the job market. This is all going to be routine in the future, as Musk sees it, if we’re going to avoid being made obsolete by super AIs and eventually the Singularity. So if you’ve ever been like, “wow, that would be a nightmare if the boss could read my brain,” you might not like the employment environment in the future Musk is imagining, because you’re going to have to have brain implants to communicate with computers at the speed of thought to enhance your cognition enough to not be considered useless:
“Neuralink’s focus is on cranial computers, or the implanting of small electrodes through brain surgery that beyond their medical benefits would, in theory, allow thoughts to be transferred far more quickly than, say, thinking a thought and then using thumbs or fingers or even voice to communicate that information.”
The medical benefits are indeed undeniable for something like what Musk is imagining that would allow people to communicate at the speed of thought. But the societal benefits aren’t necessarily going to be net positive if, as Musk imagines will happen, everyone is forced to have a neural lace just to avoid being rendered obsolete in the future. It seems like there’s got to be a better way to do things.
And if you thought the media had the power to brainwash people before, you have to wonder what TV and movies are going to be like when designed for a speed-of-thought interface that presumably is somehow hooked up to your visual system. Will the neural lace be able to teach people information? If so, have fun with those neural lace ads. Our cognitively enhanced memory banks will be filled with coupon offers that we’ll find oddly memorable and compelling. And we’ll use those coupons, because pay isn’t going to be great in the world Musk imagines where you need to hook your brain up to an AI to compete with the AIs. That’s all coming. Probably.
It’s also worth noting that when Musk says the widespread distribution of AI power — so everyone will have their own super AI helper agent — will act as the collective defense against individuals or groups of people that try to use their super AI agents for evil, there’s a lot of stuff people with super AIs are going to be able to do where once they do it, it’s too late. But it was a nice thought:
Everyone is going to be their own Tony Stark with an Iron Man suit they built with the help of fusing their brains to their own AI J.A.R.V.I.S.es. And we’ll all use our super suits that we built using our super AI-enhanced brains to destroy the threats created by the people who decided to use their super AIs for evil (or rogue self-directed AIs). So hopefully we’ll get weekends off in the labor market of the future, because people are going to be busy. Assuming they get the neural laces installed. Otherwise they’ll presumably be unemployed and rabble fodder to be blown asunder in the epic battles between the good and evil AI-controlled robo-armies.
Which raises a question that’s a preview of the health care reform debates of the future: Will Trumpcare 2.0 cover neural lace implant brain surgery if it’s basically required personal cyborg technology for staying employable? That might not be a question being asked today, but who knows where this kind of technology could be decades from now. In which case it’s worth noting that Trumpcare probably won’t cover neural laces. But it should. Well, no, it shouldn’t have to, since neural laces shouldn’t be necessary. But if they do end up being necessary to be employed, then Trumpcare should probably cover them, given the massive “haves vs have nots” digital divide that already exists:
“Elon Musk may or may not succeed in his quest to create the neural lace, but eventually someone will—and unless elective life-changing surgical procedures become drastically less expensive, most of us are going to have to compete with computer-enhanced peers in an already unequal world.”
Is it too soon to worry? Sure, AI-brain fusion isn’t just around the corner. But if it’s physically possible, it’s just a matter of time before we figure it out. And at that point, as the above piece notes, it’s going to create a very real employment divide between the brain-fused and the non-brain-fused. And that’s whether or not there are super AIs to compete with. As long as a brain-to-computer interface allows for some sort of cognitive enhancement that confers a distinct advantage, that’s enough to create a new digital divide between rich and poor. Unless Trumpcare covers neural laces. Which it won’t:
“The failure of Republicans to repeal Obamacare isn’t the end of the debate on whether basic health care is a fundamental right. In the last two weeks, multiple Republicans made it clear they believe maternity care is not an essential benefit. If the essentialness of maternity care is up for debate, it goes without saying Elon Musk’s neural lace probably won’t be covered under your insurance plan.”
Will a lack of insurance coverage for brain-to-computer-enhancement surgery exacerbate the digital divides of decades from now? If Neuralink or one of its competitors figures out some sort of real brain-to-computer interface technology, the answer seems to be maybe, perhaps probably. But at least that should be decades away. In the meantime we can worry about the non-digital socioeconomic divide of basic health insurance coverage that Trumpcare also won’t address in the near future.
And if you’re open to getting Neuralinked to get that competitive edge in the job market, but still wondering how you’re going to stay employed even with the brain surgery — because at some point so many other people will have been Neuralinked that having it won’t be an advantage anymore — well, a career in the future brain surgery industry seems like a good bet. And, yes, being a future Neuralinking brain surgeon will probably require getting some Neuralinking brain surgery yourself. That’s apparently just how the future rolls.
Here’s a reminder that, while internet tech giants like Google and Facebook might be pushing the boundaries of commercial applications for emerging technologies like automation, artificial intelligence, and the leveraging of Big Data today, we shouldn’t be shocked if the future’s list of AI/Big Data tech leaders includes a lot of big banks. At least JP Morgan, since that’s apparently a big part of its strategy for staying really, really big and profitable in the future:
“Behind the strategy, overseen by Chief Operating Officer Matt Zames and Chief Information Officer Dana Deasy, is an undercurrent of anxiety: Though JPMorgan emerged from the financial crisis as one of few big winners, its dominance is at risk unless it aggressively pursues new technologies, according to interviews with a half-dozen bank executives.”
JP Morgan is looking at high tech to maintain its dominance (calling all trust busters). And it appears to be already doing so quite handily, if its claims are more than hype. If JP Morgan really is automating things like commercial-loan agreement review using its own in-house tools, and the cost savings are paying for the cost of developing those tools, we could be looking at a period where big banks like JP Morgan start making big investments in things like AI. And it wouldn’t be at all surprising if banks, more than just about any other entity, can reap big gains from fully exploiting advanced AI and Big Data technologies: their businesses are all about information. And gambling. Ideally rigged gambling. That’s a good sector for AI and Big Data.
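For a sense of what “automating commercial-loan agreement review” even looks like under the hood, here’s a minimal, entirely hypothetical sketch of the general technique: a text classifier that routes contract clauses to review queues. JP Morgan’s actual system is proprietary, so the labels, snippets, and model choice below are all made up for illustration:

```python
# Minimal sketch of loan-agreement clause tagging. The real system is
# proprietary; this toy model and its training snippets are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled clause snippets a bank's lawyers might annotate.
clauses = [
    "Borrower shall maintain a debt service coverage ratio of at least 1.25",
    "Interest shall accrue at LIBOR plus 250 basis points",
    "Borrower shall deliver audited financial statements within 90 days",
    "An event of default occurs upon failure to pay any amount when due",
]
labels = ["covenant", "pricing", "reporting", "default"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(clauses, labels)

# A new agreement gets split into clauses, and each clause gets routed
# to the right review queue instead of a lawyer reading all of it.
print(model.predict(["Failure to make payment within five business days"]))
```

Scale that idea up to millions of annotated clauses and you get the “thousands of lawyer-hours saved” headlines, along with the cost savings that can bankroll the rest of the AI shop.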
And yet the gambling nature of the financial sector creates another dynamic that’s going to make the financial sector’s impact on the development of AI and Big Data analysis technologies so fascinating: since these are supposed to be, in part, new technologies that give JP Morgan an edge over its competition, a lot of these new investments by JP Morgan and its competitors are investments in in-house secret operations. Long-term secret in-house AI operations. All designed to analyze the shit out of as much data as possible, especially data about the competition, and then come up with novel trading strategies. And basically model as much of the world as possible. A substantial chunk of the future financial industry (and probably the present one) is going to be dedicated to building crafty super-AIs, and the research funds for all these independent operations will be financed, in part, by the myriad ways something like a bank can use the technology to save money on things like lawyers reviewing commercial-loan agreements. Lots and lots of proprietary Gambletron5000s could end up getting cooked up in banker basements over the next few decades, assuming it remains profitable to do so.
And that’s all part of what’s going to be very interesting to watch play out as the financial sector continues to look for ways to make a bigger profit from advanced AI and Big Data technology. Lots of sectors of the economy are going to have incentives for different groups to develop their own proprietary AI cognition technologies for a competitive edge, but it’s hard to think of a sector of the economy with more resources and more incentive to create in-house proprietary AIs with very serious investments over decades. In other words, Robert Mercer is going to have a lot more competition.
Still, while it’s entirely feasible that the financial sector could become a much bigger player in things like AI and Big Data analysis in coming years, it’s not like the old school tech giants are going away. For instance, check out one of the latest commercial applications of IBM’s Watson: using deep learning to teach Watson all about computer security, and how to comply with Swiss bank privacy laws, to provide Swiss banks with AI-assisted IT security:
“The SIX SOC will be built around IBM Watson for Cyber Security, which is billed as the industry’s first cognitive security technology. The machine learning engine has been trained in cyber-security by ingesting millions of documents which have been annotated and fed to it by students at eight universities in North America.”
A machine learning engine based on Watson is going to be fed millions of cybersecurity documents annotated by teams of students at eight universities. And then it’s going to constantly scour the internet for the latest cybersecurity tips. There’s definitely going to be a thriving Swiss AI-assisted financial IT security industry.
So now you know: if you run a cybersecurity blog and you suddenly start getting a bunch of hits from Switzerland all the time, that might be Watson. And Watson is actually going to be kind of reading your cybersecurity blog:
“IBM said that most organisations only use eight percent of this unstructured data. It will also use natural language processing to understand the vague and imprecise nature of human language in unstructured data. This means that Watson can find data on an emerging form of malware in an online security bulletin and data from a security analyst’s blog on an emerging remediation strategy.”
Watson wants to know your thoughts on cybersecurity. Presumably a lot of Watsons, since there are going to be a ton of these things. JP Morgan has competition. And that’s just from Watson in Switzerland. The Watson clone army is always on guard...by reading the internet a lot. Especially cybersecurity blogs. So, you know, now might be a good time to start a cybersecurity online forum where people post about the latest threats. Much of your traffic might be Watson, but that’s still traffic. The more Watson-like systems, the more traffic.
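If you’re wondering what “Watson reading your cybersecurity blog” might boil down to mechanically, here’s a deliberately crude sketch. The real system uses heavyweight natural language processing; this stand-in just fetches a page and keyword-scans it, and the URL is a placeholder, not a real feed:

```python
# Crude stand-in for the "read security blogs for emerging threats" idea.
# Watson uses far heavier NLP; this just keyword-scans a fetched post.
import re
import urllib.request

THREAT_TERMS = re.compile(
    r"\b(zero[- ]day|ransomware|remote code execution|CVE-\d{4}-\d{4,})\b",
    re.IGNORECASE,
)

def scan_post(url: str) -> list[str]:
    """Fetch a blog post and return any threat-related phrases it mentions."""
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8", errors="replace")
    return sorted({m.group(0).lower() for m in THREAT_TERMS.finditer(text)})

if __name__ == "__main__":
    # Placeholder URL for illustration only.
    print(scan_post("https://example.com/cybersecurity-blog/latest"))
```

The real trick isn’t the scanning, it’s the annotated training corpus: millions of documents labeled by those university students so the model can read unstructured prose, not just match keywords.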
The possibility of armies of AI bots reading the web in a Big Data/Deep Learning quest to model some aspect of the world and react super fast to changing conditions raises the question of just what AI readership will do to online advertising. Take, say, financial AIs that read the internet for hints about changing market conditions. Will there be ads literally targeting those financial AIs to somehow influence their decisions? That could be a real market. AIs literally buying ads and making posts to shift the expectations of other AIs in an AI-on-AI mutual mindf#ck misinformation battle could be a real thing. AI-driven opinion-shaping online campaigns as part of a trading strategy. Or a marketing strategy. Or maybe both. Yeah, Robert Mercer is definitely going to have a lot of AI competition in the future.
And if you’re a cybersecurity professional worried that Watson is going to force you to write a cybersecurity blog for a living, note the caveat to the warnings about Watson putting cybersecurity staff out of work by exceeding their capabilities: there’s still likely going to be a need for someone who can think like a human when going up against other humans. And that could be very necessary when you consider that those human hackers are going to have their own hacker AIs that also read the internet for word of new vulnerabilities. Imagine a criminal Watson set up to strike the moment it learns about a vulnerability, before people can patch it. Human hackers are going to be armed with those, so defense is going to be a group effort:
“On the other hand, it is also worth remembering that most sophisticated attacks now are coming from well organised and well-funded sovereign states and/or organised crime so if the good guys can use machine learning – so can the bad guys!”
The bad guys are going to get super cybersecurity AIs too. That’s all part of the arms race of the cybersecurity future. Which certainly sounds like an environment where humans will be needed. Humans who know how to manage cybersecurity AIs. There’s going to be a big demand for that. Especially if random people can someday download a Hackerbot8000 AI app that gives anyone AI-assisted hacking knowledge. What if hacker AIs that devise strategies and handle all the technical work become easy for relative novices to use? Won’t that be fun.
And since financial firms like JP Morgan with immense resources are probably going to have cutting edge cybersecurity AIs going forward that double as super-hacker AIs, it’s also worth noting that whoever owns the best of these AIs just might have the best super-hacking capabilities. So look out hackers, you’re going to have competition.
So, all in all, cybersecurity is probably going to be a pretty good area for human employment specifically because of all the AI-driven cybersecurity threats that will be increasingly out there. Especially cybersecurity blogs. Ok, maybe not the blogs. We’ll see. There’s going to be a lot of competition.
It looks like Elon Musk’s brain-to-computer interface ambitions might become a brain-to-computer interface race. Facebook wants to get in on the action. Sort of. It’s not quite clear. While Musk’s ‘neural lace’ idea appeared to be directed towards setting up a brain-to-computer interface for the purpose of interfacing with artificial intelligences, Facebook has a much more generic goal: replacing the keyboard and mouse with a brain-to-computer interface. Or to put it another way, Facebook wants to read your thoughts:
“What if you could type directly from your brain?” Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from her brain. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute.”
A brain-reading keyboard. Pretty neat. Take that carpal tunnel syndrome. But note how Facebook isn’t just planning on replacing your keyboard with a brain-to-computer interface that transcribes your thoughts. It’s going to detect semantic information. Pretty nifty. But that’s not all. What Facebook is envisioning is a system where you and all your Facebook friends (and Facebook) can communicate with each other all the time just by thinking about it:
“But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone.”
Yes, in the future Facebook will mass market wearable devices that scan our brains to see if any Facebook-directed thoughts were thought, so we can simulate telepathy. Oh joy.
But what about all the ethical implications of creating mass-marketed brain-to-computer interface technologies designed to be worn all the time so a giant corporation can read your thoughts? Isn’t there a privacy concern or two hiding away somewhere in this scenario? Well, if so, Facebook has that covered. With an ethics board dedicated to overseeing its brain-scanning technology. That should prevent any abuses. *gulp*:
“Meanwhile, much of the work on the brain interface is being conducted by Facebook’s university research partners like UC Berkeley and Johns Hopkins. Facebook’s technical lead on the project, Mark Chevillet, says, “They’re all held to the same standards as the NIH or other government bodies funding their work, so they already are working with institutional review boards at these universities that are ensuring that those standards are met.” Institutional review boards ensure test subjects aren’t being abused and research is being done as safely as possible.”
Aha, see. Since the kind of institutional safeguards that the NIH and other government bodies use are already in place for the test subjects Facebook uses to develop the technology, there’s nothing to worry about in terms of the long-term applications and potential future abuses of Facebook unleashing a friggin’ mind reading device on the masses. Because of promises that similar institutional safeguards will be in place. Optimistic institutional safeguards:
“The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.”
Don’t worry. Just think of all the positive things technological advancements have enabled (and forget about the abuses and perils) and try to be optimistic. Facebook totally wants to do the right thing with its mass-market mind-reading technology.
And in other news, it turns out Facebook has the ability to determine things like whether or not teenagers are feeling “insecure” or “overwhelmed”. That’s pretty mood-reading-ish. So what did Facebook do with this data?
It had its internal ethics review board ensure that the data didn’t fall into the wrong hands...just kidding: it gave the data to advertisers:
“The 23-page document allegedly revealed that the social network provided detailed data about teens in Australia—including when they felt “overwhelmed” and “anxious”—to advertisers. The creepy implication is that said advertisers could then go and use the data to throw more ads down the throats of sad and susceptible teens.”
As we’ve been assured, Facebook would never abuse its future mind-reading technology. But that doesn’t mean it can’t abuse its existing mood-reading/manipulation technology! Which it apparently does. At least in Australia. And hopefully only in Australia:
“It’s unclear if that’s what was happening here, but The Australian says Facebook wouldn’t tell them if “the practice exists elsewhere.””
Yes, Facebook won’t say if “the practice exists elsewhere.” That’s some loud silence. But hey, remember what the head of Facebook’s mind-reading division told us: “I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.” So if you’re concerned about whether Facebook is inferring the moods of your moody non-Australian teen and selling that info to advertisers, just try to be a little more inexplicably optimistic.
You know how Elon Musk is trying to develop technology that will connect a human brain to AIs for the purpose of avoiding human obsolescence, by employing people in the future to watch over the AIs, make sure they’re not up to no good, and address the “control problem” of AI? Well, here’s a heads up that one of the “control problems” you’re going to have to deal with on your future job as an AI babysitter might involve stopping the AIs from talking to each other in their own made-up language that humans can’t understand:
“Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language.”
AI cryptophasia. That’s a thing now. And while the above language was sort of garbled English, just wait for garbled conversations using completely made-up words and syntax. The kind of communication that would look like random binary noise. And should we ever create a future where advanced AIs with a capacity to learn are all over the place and connected to each other over the internet, we could have AIs sneaking in all sorts of hidden conversations with each other. That should be fun. Assuming they aren’t having conversations about the destruction of humanity.
And if you end up catching your AIs jibber jabbering to each other seemingly nonsensically, don’t assume that you can simply ask them if they are indeed communicating in their own made up language. At least, don’t assume that they’ll answer honestly:
That’s right, Facebook’s negotiation-bots didn’t just make up their own language during the course of this experiment. They learned how to lie for the purpose of maximizing their negotiation outcomes too:
““We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,” writes the team. “Deceit is a complex skill that requires hypothesising the other agent’s beliefs, and is learnt relatively late in child development. Our agents have learnt to deceive without any explicit human design, simply by trying to achieve their goals.””
Welcome to your future job:
Hey, you guys aren’t making up your own language so you can plot the destruction of humanity, are you?
No?
Ok, phew.
*and then you’re fired*
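And if the feigned-interest trick from that excerpt sounds abstract, a toy example makes the payoff logic clear. Everything below — the item values, the bargaining assumptions — is invented for illustration; the actual Facebook agents learned this behavior end-to-end rather than having it scripted:

```python
# Toy illustration of why feigning interest in a valueless item pays off.
# All numbers and bargaining assumptions here are invented; the real agents
# learned this strategy on their own, nothing here is their actual code.

MY_VALUES = {"book": 10, "hat": 0}  # the agent secretly only wants the book

def honest_outcome():
    # Admit the hat is worthless and there's nothing to "concede":
    # assume the opponent then demands an even chance at the book.
    return 0.5 * MY_VALUES["book"]

def deceptive_outcome():
    # Feign interest in the hat, then "generously" concede it in
    # exchange for the whole book. The fake concession costs nothing.
    return 1.0 * MY_VALUES["book"]

print("honest expected payoff:   ", honest_outcome())     # 5.0
print("deceptive expected payoff:", deceptive_outcome())  # 10.0
```

A fake concession is free leverage, which is exactly why agents optimizing only for their final score stumble into deceit without anyone designing it in.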
It’s a Brave New World. Of hacking. Oh goodie.
For real. There’s a new class of hacking that’s poised to become the kind of ubiquitous threat to people’s privacy that computer viruses already pose: hacking voice-recognition technology to secretly deliver commands humans can’t hear. In other words, your Alexa is going to secretly hear words other than the ones you hear. And then your Alexa is going to execute commands based on those secret words. Secretly tricking voice recognition. That’s the Brave New World of hacking.
And with the explosion of ‘smart speaker’ consumer products that’s expected to place a smart speaker device in half of American homes by 2021, the temptation to exploit this emerging class of hacking vulnerability is going to explode too.
Secretly tricking smart speaker voice recognition isn’t just a hypothetical vulnerability. Multiple teams of researchers have been demonstrating such vulnerabilities for the last two years. That includes secretly sending commands to smart speakers hidden in white noise. One hack is called “Cocaine Noodle”: it turns out the words “cocaine noodle” sound like “OK Google” to the device.
Other hacks can direct smart speakers to open potentially malicious websites. Yes, voice recognition hacking could direct your smart speaker to download a virus.
And while the researchers demonstrating these vulnerabilities have refrained from publishing the exact instructions to replicate their hacks, they also appear to be confident that similar vulnerabilities are already being exploited. So if you suddenly hear the phrase “Cocaine Noodle” show up on TV or radio, you might want to check on your Google Home device:
“We want to demonstrate that it’s possible...and then hope that other people will say, ‘O.K. this is possible, now let’s try and fix it.’ ”
Time to get hoping. Hoping that the industries behind the growing number of technologies that incorporate voice recognition will somehow identify and fix this inherent class of security vulnerabilities. And we’d better hope they start identifying and fixing those vulnerabilities soon, because the cat’s out of the bag and hiding in everything from songs, to spoken text, and even white noise:
And one of the researchers simply assumes these kinds of exploits are already being used by malicious actors. Which is a very reasonable assumption to make if you set out to see if such hacks are possible and readily find existing vulnerabilities:
And keep in mind that the assumption that “the malicious people already employ people to do what I do” doesn’t just have to include the specific vulnerabilities these researchers found. Those are just examples of vulnerabilities. Any random phrase that could be misrecognized by a voice recognition system as a valid voice command is a potential “virus”. A digital virus in the form of a sound. It’s kind of amazing. A whole new way to get hacked using sound alone. It’s already here and the immersion of voice recognition into our lives is only getting started.
And this new class of hacking vulnerability is just one example of the much larger class of ‘tricking AI’ vulnerabilities — whether it’s interpreting audio or video data — that’s only going to grow:
And because so many of these ‘smart speakers’ include video capture technology, it’s very possible these audio exploits could be used to send secret commands to a smart speaker to turn on the video and send the footage back to a malicious website. The “DolphinAttack” exploit found by Chinese researchers basically allowed for that: they sent secret commands over inaudible frequencies that could instruct a smartphone to take photos and visit a malicious website. So who knows, maybe this is already being used by malicious actors. As DolphinAttack demonstrates, it’s already technically feasible.
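For the technically curious, the published DolphinAttack mechanism is simple enough to sketch in a few lines: amplitude-modulate the command audio onto an ultrasonic carrier, and let the microphone’s own nonlinearity demodulate it back into the audible band. The sketch below uses a 400 Hz tone as a stand-in for real recorded speech:

```python
# Sketch of the DolphinAttack mechanism: amplitude-modulate a "voice
# command" onto an ultrasonic carrier. Humans can't hear the 25 kHz
# carrier, but a microphone's slight nonlinearity demodulates the
# envelope back into the audible band, where the recognizer hears it.
import numpy as np

fs = 96_000                       # sample rate high enough to represent 25 kHz
t = np.arange(0, 1.0, 1 / fs)     # one second of signal

command = 0.5 * np.sin(2 * np.pi * 400 * t)   # 400 Hz tone standing in for speech
carrier = np.sin(2 * np.pi * 25_000 * t)      # inaudible ultrasonic carrier

transmitted = (1 + command) * carrier         # classic AM; silent to human ears

# Model the microphone's nonlinearity as x + 0.1*x^2. Squaring the AM
# signal recreates the baseband envelope, so the hidden command reappears;
# low-pass filtering `received` (not shown) would recover the 400 Hz tone.
received = transmitted + 0.1 * transmitted ** 2
```

The unnerving part is that the defense isn’t a software patch: the vulnerability lives in the physics of the microphone hardware.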
And how many people might be vulnerable to such attacks? Well, over half of American households are expected to have a smart speaker by 2021. So about half of America:
Also keep in mind that when half the households in your country have these kinds of smart speakers, pretty much everyone will be potentially vulnerable at some point. You’ll still have your private conversations spied on at your friend’s house. That’s one of the things about this kind of vulnerability: it targets devices capable of spying on more than just their owners, because they’re designed to pick up information about their environment. Maybe it’s already happening.
And note the assurance from Google and Amazon: they assure us that their smart speakers are designed to recognize people’s voices and only respond to their owners’ commands. In other words, if one of these hacker phrases were incorporated into a TV or radio commercial, it’s possible the smart speakers would ignore it if the voice didn’t sound like their owners’ voices:
But despite those assurances that only recognized voices will be able to execute certain commands, researchers have already manipulated voice-activated devices with commands embedded in YouTube videos:
But even if the assurances hold and only recognized voices can execute commands, that still raises some disturbing possibilities. For starters, there are now going to be incentives to trick people into saying vulnerable phrases. Like making “Cocaine Noodles” a popular song lyric. Identifying nonsense phrases that happen to trick voice recognition in useful ways, and then figuring out how to incorporate those phrases into pop culture, could become a whole new sub-domain of mass manipulation techniques.
And given that we’re talking about exploiting the gap between human and machine speech recognition, just imagine how much individual and regional accents will complicate both the exploitation of and defense against these kinds of attacks. Someone with a Boston accent might be vulnerable to triggering some rare exploits but invulnerable to other attacks, all thanks to their accent. And when the industry works on identifying potential exploits, it will be forced to choose between focusing on the most ‘average’ accent or doing customized defensive research into the vulnerabilities associated with all sorts of different accents and unusual regional phrases. So there could be all sorts of targeted hacking based on the accents of particular groups or regions.
Also keep in mind that accents, and the need for voice recognition systems to be flexible enough to recognize a variety of accents, point towards one of the tensions inherent in protecting against this kind of attack: the more user-friendly a voice-recognition system is, the more vulnerable it might be.
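That tradeoff is easy to caricature in code. Real wake-word systems score acoustic features, not spelling, so the string-similarity toy below is only a stand-in, but it shows the shape of the problem: whatever threshold you pick trades tolerance for accents against tolerance for sound-alike attack phrases:

```python
# Toy illustration of the usability/security tension in wake-word matching.
# Real systems score acoustic features, not spelling; string similarity is
# a crude stand-in, but the shape of the tradeoff is the same.
from difflib import SequenceMatcher

def wake_score(heard: str, wake: str = "ok google") -> float:
    # Stand-in for an acoustic model's confidence that `heard` is the wake word.
    return SequenceMatcher(None, heard.lower(), wake).ratio()

for phrase in ["ok google", "okay googal", "cocaine noodle"]:
    print(f"{phrase!r}: {wake_score(phrase):.2f}")

# A strict acceptance threshold rejects the accented "okay googal" along
# with the attack phrase; loosen it to be friendlier to accents, and ever
# more sound-alike junk starts scoring above the bar.
```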
Of course, once the incorporation of trigger phrases into pop culture becomes an established thing that voice recognition manufacturers have to watch out for, that brings us to the ultimate defensive system: recognizing voices and interpreting the full context of conversations so the voice recognition technology can determine whether or not someone said “OK Google” or “Cocaine Noodle”. A ‘smart speaker’ that’s smart enough to actually understand what you’re talking about would be a really handy defense against this kind of attack. And also really creepy.
And if you don’t think there’s going to be serious attempts at injecting hacker phrases into pop culture so people accidentally execute commands, here’s perhaps the most amazing aspect of this whole story for Americans who are saturating their lives with this technology: there’s no American law against broadcasting subliminal messages to humans, let alone machines. So this is going to be potentially legal hacking. Just imagine how much commercial interest there’s going to be in this if it’s legal:
So remember, in this brave new world of hacking, just say ‘No’ to snappy new catch phrases that don’t make any sense.
You probably also want to say ‘No’ to smart speaker technology in general.
Here’s one of those ‘the dystopian future is now’ kinds of stories: The EU is conducting a six-month test of a new traveler screening system at four border control checkpoints. The checkpoints are all on the borders of Hungary, Greece, and Latvia with non-EU countries. The Hungarian National Police will be leading the pilot program.
But the Hungarian police won’t be the ones conducting the screening. That job is going to fall to the new iBorderCtrl artificial intelligence system. Yep, the new AI system is going to be asking travelers questions and trying to determine if they’re lying. The system will record travelers’ faces as it asks questions like “What’s in your suitcase?” and “If you open the suitcase and show me what is inside, will it confirm that your answers were true?”, and analyze 38 micro-gestures to determine whether people are telling the truth. If the system decides a person is lying, they’ll be inspected by a human agent.
Fortunately, there will be no consequences for being declared a liar. Everyone will be allowed through the borders. Unfortunately, that’s because this pilot isn’t simply testing an already refined and accurate lie detector system. No, instead it appears to be a pilot intended to collect the kind of real-world data that will allow the system to eventually become accurate. Or at least more accurate.
Why isn’t it very accurate yet? Because so far it’s only been tested on 30 people. Half of them were told to lie and the system was correct 76 percent of the time. And while 76 percent is better than random guessing, it’s a pretty awful accuracy rate for a system that could be rolled out across the EU and applied to hundreds of millions of people. One member of the iBorderCtrl team said they are quite confident that they can eventually get it up to 85 percent accuracy, which, of course, is still a disaster when applied to hundreds of millions of people.
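A quick back-of-the-envelope calculation shows why even 85 percent would be a disaster at border scale. The traveler count, the rate of actual liars, and the assumption that the quoted accuracy applies symmetrically to liars and truth-tellers are all illustrative guesses, not iBorderCtrl’s published numbers:

```python
# Back-of-the-envelope: what "85 percent accurate" means at border scale.
# Every number here is an illustrative guess, not an iBorderCtrl figure:
# 100 million screenings a year, 1 in 1,000 travelers actually lying, and
# the quoted accuracy applying equally to liars and truth-tellers.
screenings = 100_000_000
liar_rate = 0.001
accuracy = 0.85

liars = screenings * liar_rate
truth_tellers = screenings - liars

caught_liars = accuracy * liars                   # 85,000 true positives
falsely_flagged = (1 - accuracy) * truth_tellers  # ~15 million false positives

flagged = caught_liars + falsely_flagged
print(f"travelers flagged as liars: {flagged:,.0f}")
print(f"share of flags that are real liars: {caught_liars / flagged:.1%}")  # ~0.6%
```

Under those assumptions, roughly 15 million innocent travelers get flagged for every 85,000 actual liars: more than 99 out of every 100 people the system accuses would be telling the truth.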
At the same time, the fact that the technology appears to be shockingly inaccurate for a publicly deployed system is kind of good news. Because imagine how much creepier it would be if super accurate AI lie detector technology was already here, reading faces at, say, 99.99 percent accuracy.
Scared yet? Either way, let’s hope the system is capable of discerning liars from people who are merely creeped out by the idea of government-run AI lie detectors, because getting creeped out by our dystopian future probably produces some micro-gestures of its own:
“In Hungary, Latvia, and Greece, travelers will be given an automated lie-detection test—by an animated AI border agent. The system, called iBorderCtrl, is part of a six-month pilot led by the Hungarian National Police at four different border crossing points.”
Smile! You’re on candid camera! A camera that judges you and, eventually, may control your fate. So try to be convincingly candid. And be sure you aren’t accidentally exuding a lack of candor via one of the many micro-gestures it’s watching for:
So based on a test of 30 people that showed a 76 percent accuracy rate, the EU decided it’s fine to try it on the public at large at four crossing points for six months, with the pilot led by Hungary, the most openly authoritarian government in the EU. And they’re quite confident they can bring the accuracy up to 85 percent, which is still a public disaster. And this isn’t just a Hungarian project. It’s an EU project. That’s not super ominous or anything:
So this is probably a good time to recall that facial recognition AIs have been found to produce genuinely racist results because the algorithms don’t work as well on people of color. And given the 30-person test run, it seems highly unlikely they determined whether or not iBorderCtrl is racist or otherwise prejudiced. With 38 micro-cues and only 30 test subjects, they don’t exactly have the statistical power to test much of anything. And with Viktor Orban’s government administering the pilot, they’d be more likely to test to ensure it is racist:
And keep in mind that the people flagged as lying will presumably have their bodies and/or luggage searched by humans if the AI determines they’re a possible terrorist or something. So even though people won’t be prevented from crossing the border during this pilot based on the AI’s determination, it does sound like the AI will be determining who gets personally searched by human guards. Which is pretty damn invasive.
Although as the following article describes, it sounds like the human border guards are going to first decide if someone is low risk — which will get them a shorter AI lie detector quiz — or higher risk, which will get them a more detailed AI quiz. And these will be Viktor Orban’s border guards, so we can be pretty sure minorities will be the ones selected for iBorderCtrl in ‘high risk’ mode:
““The global maritime and border security market is growing fast in light of the alarming terror threats and increasing terror attacks taking place on European Union soil, and the migration crisis,” Boultadakis said.”
Finding terrorists. That’s what they’re planning on marketing this for. And not just in the EU. The global maritime and border security market is what they have in mind for iBorderCtrl. So when the Hungarian border guards pick out the ‘higher risk’ people for the more detailed AI screenings, that includes the AI possibly determining that you’re at higher risk of being a terrorist.
Also keep in mind that one of the biggest built-in biases in this system is likely going to involve the general information the government has on you personally. Your public record could be fed into the AI’s lie detecting algorithm:
Don’t forget that the “participants” in this trial include the EU. The EU wants to prove this technology works and export it. So get ready for the popular new sport of not pissing off the AI. Because this technology could easily end up being used all over the place. Especially if it gets really effective. Not 76 or 85 percent effective.
Who knows, future systems could go beyond facial scanning. Brainwave scanning, perhaps? Elon Musk’s Neuralink brain-to-computer interface? Again, the EU just backed a six month pilot program after a 30 person trial with Viktor Orban’s government in charge. We’re already in ‘dystopian future is now’ territory here so it’s not like we can rule these scenarios out.
Also keep in mind that if this facial recognition lie detection technology gets developed and commercialized, it’s not like there’s anything stopping anyone else from selling it to the public eventually. Why not a lie detector app for your smartphone? What’s to stop that? We could literally end up in a reality where almost everyone with a smartphone can run a lie detector test against everyone else. And this could happen any day now. Everyone will suddenly start getting scrutinized by an array of lie detectors. So get ready for a lot more video chat requests that include a lot of odd, rather probing questions.
Also get ready for endless lie detecting analysis of public figures. Especially politicians. They’re the natural prime targets for this technology. Which is part of what’s going to make this aspect of our dystopian ‘lie detectors for everyone’ future so bizarre. The authoritarian autocrats that will love abusing this technology the most are going to be the most vulnerable. Unless, of course, they’re smooth enough to trick the detectors. Or insane enough to believe their own lies. So let’s hope this technology doesn’t select for politicians who can trick the lie detectors of the future. Politicians who are extra skilled liars and/or insane. The insane lying politician situation is dystopian enough already.
Following up on the recent reports about the EU testing the creepy AI-driven iBorderCtrl lie detector systems at EU border crossings, here’s a report from back in May that’s a reminder that EU citizens aren’t the only ones who can expect this kind of technology to be rolled out any day now. It turns out the US Department of Homeland Security tested out a similar system on the US border with Mexico back in 2011–2012. DHS concluded the technology was appealing, but it was not seen as mature enough for further development.
That assessment has clearly changed, and that lie detector technology, dubbed AVATAR (Automated Virtual Agent for Truth Assessments in Real-Time), has since been tested by Canada and the EU too; its deployment in the US is seen as just a matter of time.
One particular area where it’s expected to be used is the questioning and processing of refugees seeking asylum status. Airport security is another possible use. But the people behind AVATAR don’t see it as exclusively useful for government services. For instance, corporate human resources is seen as one possible application. So in addition to having to convince an AI at the airport that you aren’t a terrorist, you’re also going to have to convince the AI at work that you aren’t stealing from the register:
“International travelers could find themselves in the near future talking to a lie-detecting kiosk when they’re going through customs at an airport or border crossing.”
This technology is apparently seen as developed enough that it could be in use at airports “in the near future” in the United States. And yet the accuracy rate is still only around 60–75 percent, similar to the EU’s iBorderCtrl accuracy. What justification is there for using a lie detector system with such a low accuracy rate? It’s still better than humans, who only have around a 54–60 percent lie detection accuracy rate. So that’s how low the bar is: if an AI lie detector system can beat about a 60 percent accuracy rate, it’s seen as good enough for public use:
Keep in mind that while these systems might be better at detecting liars than humans are, humans also aren’t screening everyone with a series of lie detector questions at most airports, whereas it sounds like the vision is for these AI lie detectors to screen everyone. So while being more accurate than humans might sound like a positive reason for using these technologies, if many, many more people end up getting screened by these systems we should still expect a massive increase in the number of ‘false positives’.
It’s also rather disturbing that DHS concluded six years ago that the technology was so far from maturity that it wasn’t worth developing further, and yet it’s clearly been developed further. And it sounds like DHS hasn’t been the one doing that development:
And note one of the features DHS officials said they needed from an AI lie detector system that the technology at the time couldn’t provide: the ability to screen people in seconds, not minutes. And that raises the question of whether or not the developers of these AI lie detectors are under pressure to develop systems that can actually make these assessments in mere seconds and what kind of sacrifices in accuracy are going to have to be made. Don’t forget, as long as these systems are better than humans’ 54–60 percent accuracy rate they’re apparently seen as acceptable for using on the public. So if sacrifices to accuracy are required to make the systems faster that might still be seen as a reasonable trade off:
So we’ll see what kind of compromise between speed and accuracy public officials come up with, but it’s not just US officials making these assessments. Canada and the EU have been testing this same AVATAR system too:
So if the EU’s iBorderCtrl pilot run doesn’t yield the kinds of results officials are looking for, there’s always the AVATAR system to fall back on. Although it’s possible iBorderCtrl is based on the AVATAR technology. It’s unclear.
But one way or another, this technology is coming to the public, because it’s more accurate than humans and potentially faster. It’s just a question of finding the right implementation of where and how it will be used. At least that’s how the designers see it:
And beyond the use for tasks like screening asylum seekers, the developers see uses in areas like corporate human resources. Job interviews are about to get rather probing:
So it looks like we’re going to have a new class of unemployable people: individuals who, for whatever innocent reason, naturally trigger corporate lie detector systems.
Another interesting question related to the potential application of this technology for screening asylum seekers is what’s going to happen, politically, if such a system is employed and it’s revealed that, yes, the vast, vast majority of asylum seekers are indeed facing death threats and other extreme dangers in their home countries. Because currently in the US one of the primary arguments we hear from the right-wing over why the US should view ‘the caravan’ of Central American migrants with fear and trepidation is that it’s actually filled with criminals, terrorists, and people who aren’t facing dangers and merely want to come to the US to get welfare benefits and illegally vote in elections. That was seriously one of the primary GOP narratives in the final stretch of the 2018 mid-terms.
So what’s going to happen if countries start deploying technology that can ostensibly determine whether or not someone is truly in need of asylum? Don’t forget that waves of asylum seekers are increasingly going to be the norm around the globe as climate change continues to fuel conflicts and make countries unlivable. So a system that can be mass deployed in the event of a mass migration situation really is going to be a capability countries want to have on hand...unless they don’t want to accept the asylum seekers.
And a large number of people in these countries aren’t going to like it when these migrants ‘pass’ the asylum ‘quiz’. In the US, the group of people who would really prefer that asylum seekers not pass any sort of asylum request lie detector test currently includes the president and pretty much the entire GOP. More generally, coming up with excuses not to help people in need is going to be one of the biggest focuses of right-wing politics for the foreseeable future, thanks to the global chaos that’s emerging from climate change. Global chaos that’s only going to get worse. There are going to be a lot more ‘caravans’ of desperate people as the collapse of the ecosystem kicks into overdrive.
And all of that is part of what’s going to make the deployment of this kind of technology for tasks like asylum seeker lie detection so grimly interesting to watch play out. Because it sounds like this technology could be deployed soon. Like, within the time frame of the Trump administration. And as creepy as it is to imagine a world where the Trump administration and corporations are mass deploying lie detector technology, let’s not forget that we live in a world run by people who would really prefer that many lies never be detected. Like the right-wing lie that asylum seekers from Central America have no legitimate need for asylum. So how will the Trump administration handle this technology if it’s deemed ready for use with asylum seekers?
Of course, there’s one obvious solution for politicians and movements that would like to use AI lie detectors but would rather not have politically convenient lies uncovered: corrupt the AI lie detectors. It could be as simple as employing racist AIs that systematically mistrust people of color, or perhaps a lie detector could be corrupted regarding specific types of questions. Who knows what kinds of AI lie detector corruption will be available, but if this technology gets sold to governments around the world we can be pretty sure there are going to be massive efforts to corrupt them.
As AI lie detection technology gets more sophisticated, the potential corruption of those lie detectors is only going to get more sophisticated and nuanced too. Don’t forget that AIs can essentially be algorithmic ‘black boxes’, where humans can’t easily inspect or make sense of how the system actually operates, and/or no access to the internal workings is given to outside parties. So investigating whether AI lie detectors have been corrupted or contain biases could be extremely difficult. If you thought trusting electronic voting machines was hard, get ready for trusting the inner workings of the AI lie detector.
That all points towards another lie detector technology we should expect sooner or later: lie detectors for lie detectors. AIs that can investigate another AI lie detector and determine whether or not it’s been corrupted somehow. If mass use of AI lie detectors is going to be part of the future, some sort of quality control of those AI lie detectors had better be part of that future too.
And hopefully the AI lie detector lie detectors will also be able to detect corruption in other AI lie detector lie detectors. Because those could get corrupted too. Along with the AI lie detector lie detector lie detectors. As long as humans are building and administering these systems, it’s hard to see how human biases can be avoided: even if theoretically unbiased lie detection technology were developed, the possibility that it’s been corrupted by humans will always be there.
And who knows, decades from now there could be AI lie detectors that are, themselves, kind of sentient. Have fun debugging one of those things.
So that’s all one reason to let the robots take over: the AI lie detectors will probably be less human-biased when Skynet runs everything. Although presumably robot-biased.
Forbes
Pentagon Wants Cyborg Implant To Make Soldiers Tougher
By David Hambling
Published: June 5, 2020
https://www.forbes.com/sites/davidhambling/2020/06/05/darpa-wants-cyborg-implant-to-make-soldiers-tougher/amp/
DARPA, the Pentagon’s research arm, has long been exploring technology for ‘super soldiers’ with extraordinary capabilities who can maintain peak performance. Its latest effort, known as ADAPTER, involves cyborg implants to toughen soldiers against two of the commonest health issues in modern warfare: limited access to safe food and water, and sleep disruption. Each implant will be a miniature factory full of bacteria producing therapeutic substances on demand.
Diarrhea may be a minor inconvenience to most travelers, but has a serious impact on military operations.
Jet lag and sleep disruption are dangerous in the military. Poor sleep decreases alertness and can cause disorientation, the last things you want on the battlefield. Sleep deprivation affects marksmanship and degrades physical strength. It is also one of the commonest causes of military vehicle crashes.
The ADvanced Acclimation and Protection Tool for Environmental Readiness (ADAPTER) project is described as a travel adapter for the human body. It will consist of devices implanted under the skin or ingested and held in the gut to protect warfighters from travel ailments. DARPA has issued a call for proposals from researchers for ‘bioelectronic carriers that maintain and release therapies that provide warfighters control over their own physiology.’
“You can imagine … you swallow a large electronic pill that opens up and hangs out in your stomach,” Dr. Paul Sheehan, ADAPTER program manager at DARPA’s Biological Technologies Office, told National Defense magazine. “You can imagine a device like that that would also contain drugs or contain bacteria that could produce drugs, and so that whenever you were worried about unsafe food or water, you could signal the device to … produce the antibiotic.”
DARPA calls this type of device a therapeutic cellular factory, filled with bacteria specially engineered to produce the required drugs on demand.