Dave Emory’s entire lifetime of work is available on a flash drive that can be obtained here. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by late spring of 2015. The new drive (available for a tax-deductible contribution of $65.00 or more) contains FTR #850.
WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE.
You can subscribe to e-mail alerts from Spitfirelist.com HERE.
You can subscribe to RSS feed from Spitfirelist.com HERE.
You can subscribe to the comments made on programs and posts–an excellent source of information in, and of, itself HERE.
This program was recorded in one, 60-minute segment.
Introduction: Albert Einstein said of the invention of the atomic bomb: “Everything has changed but our way of thinking.” We feel that other, more recent developments in the world of Big Tech warrant the same type of warning.
This program further explores the Brave New World being midwifed by technocrats. These stunning developments should be viewed against the background of what we call “technocratic fascism,” referencing a vitally important article by David Golumbia: “. . . Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (“civic,” “ethical,” “white” and “black” hat alike), hacktivists, WikiLeaks fans [and Julian Assange et al–D. E.], Anonymous “members,” even Edward Snowden himself walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands. . . . First, [Tor co-creator] Dingledine claimed that Tor must be supported because it follows directly from a fundamental “right to privacy.” Yet when pressed—and not that hard—he admits that what he means by “right to privacy” is not what any human rights body or “particular legal regime” has meant by it. Instead of talking about how human rights are protected, he asserts that human rights are natural rights and that these natural rights create natural law that is properly enforced by entities above and outside of democratic polities.
Where the UN’s Universal Declaration on Human Rights of 1948 is very clear that states and bodies like the UN to which states belong are the exclusive guarantors of human rights, whatever the origin of those rights, Dingledine asserts that a small group of software developers can assign to themselves that role, and that members of democratic polities have no choice but to accept them having that role. . . Further, it is hard not to notice that the appeal to natural rights is today most often associated with the political right, for a variety of reasons (ur-neocon Leo Strauss was one of the most prominent 20th century proponents of these views). We aren’t supposed to endorse Tor because we endorse the right: it’s supposed to be above the left/right distinction. But it isn’t. . . .”
We begin by examining a couple of articles relevant to the world of credit.
Big Tech and Big Data have reached the point where, for all intents and purposes, credit card users and virtually everyone else have no personal privacy. Even without detailed personal information, capable tech operators can establish people’s identities with an extraordinary degree of precision using a surprisingly small amount of information.
Compounding the worries of those seeking credit is a new Facebook “app” that will enable banks to determine how poor a customer’s friends are and to deny the unsuspecting applicant credit on that basis!
Even as Big Tech is permitting financial institutions to zero in on customers to an unprecedented degree, it is moving in the direction of obscuring the doings of Banksters. The Symphony network offers end-to-end encryption that appears to make the operations of the financial institutions using it opaque to regulators.
A new variant of the Bitcoin technology will not only facilitate the use of Bitcoin to assassinate public figures but may very well replace–to a certain extent–the functions performed by attorneys. (We have covered Bitcoin–an apparent Underground Reich invention–in FTR #’s 760, 764, 770, 785.)
As frightening as some of the above possibilities may be, things may get dramatically worse with the introduction of “the Internet of Things,” permitting the hacking of many types of everyday technologies, as well as the use of those technologies to give Big Tech and Big Data unprecedented intrusion into people’s lives.
Program Highlights Include:
- Discussion of the hacking of an automobile using a laptop.
- Comparison of the developments of Big Tech and Big Data to magic and the implications for a species that remains true to its neanderthal, femur-cracking, marrow-sucking roots.
- Review of some of the points covered in L-2.
- The need for vastly bigger, rigorously regulated government instead of the fascism inherent in the libertarian doctrine.
- How hackers are attempting to extort users of the Ashley Madison cheaters website.
1. Big Tech and Big Data have reached the point where, for all intents and purposes, credit card users and virtually everyone else have no personal privacy. Even without detailed personal information, capable tech operators can establish people’s identities with an extraordinary degree of precision using a surprisingly small amount of information.
Last Thursday the journal Science published an article by four MIT-affiliated data scientists (Sandy Pentland is in the group, and he’s a big name in these circles), titled “Unique in the shopping mall: On the reidentifiability of credit card metadata”. Sounds innocuous enough, but here’s the summary from the front page WSJ article describing the findings:
Researchers at the Massachusetts Institute of Technology, writing Thursday in the journal Science, analyzed anonymous credit-card transactions by 1.1 million people. Using a new analytic formula, they needed only four bits of secondary information—metadata such as location or timing—to identify the unique individual purchasing patterns of 90% of the people involved, even when the data were scrubbed of any names, account numbers or other obvious identifiers.
Still not sure what this means? It means that I don’t need your name and address, much less your social security number, to know who you ARE. With a trivial amount of transactional data I can figure out where you live, what you do, who you associate with, what you buy and what you sell. I don’t need to steal this data, and frankly I wouldn’t know what to do with your social security number even if I had it … it would just slow down my analysis. No, you give me everything I need just by living your very convenient life, where you’ve volunteered every bit of transactional information in the fine print of all of these wondrous services you’ve signed up for. And if there’s a bit more information I need – say, a device that records and transmits your driving habits – well, you’re only too happy to sell that to me for a few dollars off your insurance policy. After all, you’ve got nothing to hide. It’s free money!
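The re-identification result can be illustrated with a toy simulation. All data here is synthetic, and the trail sizes, shop/day grid, and four-point attack are assumptions for illustration, not the paper’s actual method:

```python
# Sketch of the re-identification idea: with only a few known (shop, day)
# observations about a person, their "anonymous" transaction trail is
# usually unique in the whole dataset. All data below is synthetic.
import random

random.seed(0)
SHOPS, DAYS = 50, 30

# Anonymized dataset: pseudonymous token -> set of (shop, day) transactions.
trails = {
    f"user_{i:04d}": {(random.randrange(SHOPS), random.randrange(DAYS))
                      for _ in range(40)}
    for i in range(1_000)
}

def reidentify(known_points, trails):
    """Return the tokens whose trails contain every known (shop, day) point."""
    return [tok for tok, trail in trails.items()
            if all(p in trail for p in known_points)]

# An attacker learns just four outside observations about one target...
target = "user_0042"
known = random.sample(sorted(trails[target]), 4)

matches = reidentify(known, trails)
print(matches)  # typically only the target's pseudonymous token survives
```

With 1,000 users and only four known points, the chance that anyone else’s trail also contains all four points is tiny, which is the paper’s core finding in miniature.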
Almost every investor I know believes that the tools of surveillance and Big Data are only used against the marginalized Other – terrorist “sympathizers” in Yemen, gang “associates” in Compton – but not us. Oh no, not us. And if those tools are trained on us, it’s only to promote “transparency” and weed out the bad guys lurking in our midst. Or maybe to suggest a movie we’d like to watch. What could possibly be wrong with that? I’ve written a lot (here, here, and here) about what’s wrong with that, about how the modern fetish with transparency, aided and abetted by technology and government, perverts the core small-l liberal institutions of markets and representative government.
It’s not that we’re complacent about our personal information. On the contrary, we are obsessed about the personal “keys” that are meaningful to humans – names, social security numbers, passwords and the like – and we spend billions of dollars and millions of hours every year to control those keys, to prevent them from falling into the wrong hands of other humans. But we willingly hand over a different set of keys to non-human hands without a second thought.
The problem is that our human brains are wired to think of data processing in human ways, and so we assume that computerized systems process data in these same human ways, albeit more quickly and more accurately. Our science fiction is filled with computer systems that are essentially god-like human brains, machines that can talk and “think” and manipulate physical objects, as if sentience in a human context is the pinnacle of data processing! This anthropomorphic bias drives me nuts, as it dampens both the sense of awe and the sense of danger we should be feeling at what already walks among us. It seems like everyone and his brother today are wringing their hands about AI and some impending “Singularity”, a moment of future doom where non-human intelligence achieves some human-esque sentience and decides in Matrix-like fashion to turn us into batteries or some such. Please. The Singularity is already here. Its name is Big Data.
Big Data is magic, in exactly the sense that Arthur C. Clarke wrote of sufficiently advanced technology. It’s magic in a way that thermonuclear bombs and television are not, because for all the complexity of these inventions they are driven by cause and effect relationships in the physical world that the human brain can process comfortably, physical world relationships that might not have existed on the African savanna 2,000,000 years ago but are understandable with the sensory and neural organs our ancestors evolved on that savanna. Big Data systems do not “see” the world as we do, with merely 3 dimensions of physical reality. Big Data systems are not social animals, evolved by nature and trained from birth to interpret all signals through a social lens. Big Data systems are sui generis, a way of perceiving the world that may have been invented by human ingenuity and can serve human interests, but are utterly non-human and profoundly not of this world.
A Big Data system couldn’t care less if it has your specific social security number or your specific account ID, because it’s not understanding who you are based on how you identify yourself to other humans. That’s the human bias here, that a Big Data system would try to predict our individual behavior based on an analysis of what we individually have done in the past, as if the computer were some super-advanced version of Sherlock Holmes. No, what a Big Data system can do is look at ALL of our behaviors, across ALL dimensions of that behavior, and infer what ANY of us would do under similar circumstances. It’s a simple concept, really, but what the human brain can’t easily comprehend is the vastness of the ALL part of the equation or what it means to look at the ALL simultaneously and in parallel. I’ve been working with inference engines for almost 30 years now, and while I think that I’ve got unusually good instincts for this and I’ve been able to train my brain to kinda sorta think in multi-dimensional terms, the truth is that I only get glimpses of what’s happening inside these engines. I can channel the magic, I can appreciate the magic, and on a purely symbolic level I can describe the magic. But on a fundamental level I don’t understand the magic, and neither does any other human. What I can say to you with absolute certainty, however, is that the magic exists and there are plenty of magicians like me out there, with more graduating from MIT and Harvard and Stanford every year.
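The population-level inference described above can be caricatured with a nearest-neighbor lookup: the system never needs your history, only the histories of people who behave like you. The behavior vectors, categories, and similarity rule below are invented for illustration:

```python
# Toy version of population-level inference: instead of modeling one
# person's past, find the recorded profile most similar to the target
# and infer what anyone like them would do next. All data is invented.
def nearest_neighbor(target, population):
    """Return the profile closest to `target` by squared distance."""
    return min(population,
               key=lambda p: sum((a - b) ** 2
                                 for a, b in zip(p["vector"], target)))

# Behavior vectors: (groceries/week, fuel stops/week, late-night buys/week)
population = [
    {"vector": (6, 2, 0), "next_purchase": "diapers"},
    {"vector": (1, 5, 4), "next_purchase": "energy drinks"},
    {"vector": (3, 1, 1), "next_purchase": "streaming subscription"},
]

target = (5, 2, 1)   # a new, "anonymous" shopper
print(nearest_neighbor(target, population)["next_purchase"])  # diapers
```

Real inference engines work in thousands of dimensions rather than three, but the principle is the same: no name or account number is ever required.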
Here’s the magic trick that I’m worried about for investors.
In exactly the same way that we have given away our personal behavioral data to banks and credit card companies and wireless carriers and insurance companies and a million app providers, so are we now being tempted to give away our portfolio behavioral data to mega-banks and mega-asset managers and the technology providers who work with them. Don’t worry, they say, there’s nothing in this information that identifies you directly. It’s all anonymous. What rubbish! With enough anonymous portfolio behavioral data and a laughably small IT budget, any competent magician can design a Big Data system that can predict with 90% accuracy what you will buy and sell in your account, at what price you will buy and sell, and under what external macro conditions you will buy and sell. Every day these private data sets at the mega-market players get bigger and bigger, and every day we get closer and closer to a Citadel or a Renaissance perfecting their Inference Machine for the liquid capital markets. For all I know, they already have. . . .
2. Check out Facebook’s new patent, to be evaluated in conjunction with the previous story. Facebook’s patent is for a service that will let banks scan your Facebook friends for the purpose of assessing your credit quality. For instance, Facebook might set up a service where banks can take the average of the credit ratings for all of the people in your social network, and if that average doesn’t meet a minimum credit score, your loan application is denied. And that’s not just some random application of Facebook’s new patent–the system of using the average credit scores of your social network to deny you loans is explicitly part of the patent:
If you and your Facebook friends are poor, good luck getting approved for a loan.
Facebook has registered a patent for a system that would let banks and lenders screen your social network before deciding whether or not you’re approved for a loan. If your Facebook friends’ average credit scores don’t make the cut, the bank can reject you. The patent is worded in clear, terrifying language that speaks for itself:
When an individual applies for a loan, the lender examines the credit ratings of members of the individual’s social network who are connected to the individual through authorized nodes. If the average credit rating of these members is at least a minimum credit score, the lender continues to process the loan application. Otherwise, the loan application is rejected.
It’s very literally guilt by association, allowing banks and lenders to profile you by the status of your loved ones.
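The averaging rule quoted from the patent reduces to a few lines of logic. The threshold, scores, and function name below are invented for illustration, not taken from the patent:

```python
# Minimal sketch of the scoring rule the patent describes: average the
# credit ratings of an applicant's authorized connections and compare
# against a minimum threshold. Scores and the 620 cutoff are made up.
def screen_applicant(friend_scores, minimum=620):
    """Continue processing only if the friends' average meets the minimum."""
    if not friend_scores:
        return "rejected"          # no network data to vouch for the applicant
    average = sum(friend_scores) / len(friend_scores)
    return "continue processing" if average >= minimum else "rejected"

print(screen_applicant([710, 680, 590]))   # average 660 -> continue processing
print(screen_applicant([540, 580, 610]))   # average ~577 -> rejected
```

Note what the rule never consults: the applicant’s own score. That is the “guilt by association” the article objects to.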
Though a credit score isn’t necessarily a reflection of your wealth, it can serve as a rough guideline for who has a reliable, managed income and who has had to lean on credit in trying times. A line of credit is sometimes a lifeline, either for starting a new business or escaping a temporary hardship.
Profiling people for being in social circles where low credit scores are likely could cut off someone’s chances of finding financial relief. In effect, it’s a device that isolates the poor and keeps them poor.
A bold new era for discrimination: In the United States, it’s illegal to deny someone a loan based on traditional identifiers like race or gender — the kinds of things people usually use to discriminate. But these laws were made before Facebook was able to peer into your social graph and learn when, where and how long you’ve known your friends and acquaintances.
The fitness-tracking tech company Fitbit said in 2014 that the fastest growing part of their business is helping employers monitor the health of their employees. Once insurers show interest in this information, you can bet they’ll be making a few rejections of their own. And if a group insurance plan that affects every employee depends on measurable, real-time data for the fitness of its employees, how will that affect the hiring process?
And if you don’t like it, just find richer friends.
3a. A consortium of 14 mega-banks has privately developed a special super-secure inter-bank messaging system that uses end-to-end strong encryption and permanently deletes data. The Symphony system may very well make it impossible for regulators to adequately oversee the financial malefactors responsible for the 2008 financial meltdown.
New York’s state banking regulator has fired a shot across the bows of Symphony, a messaging service about to be launched by a consortium of Wall Street banks and asset managers, by calling for information on how it manages — and deletes — customer data.
In a letter on Wednesday to David Gurle, the chief executive of Symphony Communication Services, the New York Department of Financial Services asked it to clarify how its tool would allow firms to erase their data trails, potentially falling foul of laws on record-keeping.
The letter, which was signed by acting superintendent Anthony Albanese and shared with the press, noted that chatroom transcripts had formed a critical part of authorities’ investigations into the rigging of markets for foreign exchange and interbank loans. It called for Symphony to spell out its document retention capabilities, policies and features, citing two specific areas of interest as “data deletion” and “end-to-end encryption”.
The letter marks the first expression of concern from regulators over a new initiative that has set out to challenge the dominance of Bloomberg, whose 320,000-plus subscribers ping about 200m messages a day between terminals using its communication tools.
People familiar with the matter described the inquiry as an information gathering exercise, which could conclude that Symphony is a perfectly legitimate enterprise.
The NYDFS noted that Symphony’s marketing materials state that “Symphony has designed a specific set of procedures to guarantee that data deletion is permanent and fully documented. We also delete content on a regular basis in accordance with customer data retention policies.”
Mr Albanese also wrote that he would follow up with four consortium members that the NYDFS regulates — Bank of New York Mellon, Credit Suisse, Deutsche Bank and Goldman Sachs — to ask them how they plan to use the new service, which will go live for big customers in the first week of August.
The regulator said it was keen to find out how banks would ensure that messages created using Symphony would be retained, and “whether their use of Symphony’s encryption technology can be used to prevent review by compliance personnel or regulators”. It also flagged concerns over the open-source features of the product, wondering if they could be used to “circumvent” oversight.
The other members of the consortium are Bank of America Merrill Lynch, BlackRock, Citadel, Citigroup, HSBC, Jefferies, JPMorgan, Maverick Capital, Morgan Stanley and Wells Fargo. Together they have chipped in about $70m to get Symphony started. Another San Francisco-based fund run by a former colleague of Mr Gurle’s, Merus Capital, has a 5 per cent interest.
“Symphony is built on a foundation of security, compliance and privacy features that were built to enable our financial services and enterprise customers to meet their regulatory requirements,” said Mr Gurle. “We look forward to explaining the various aspects of our communications platform to the New York Department of Financial Services.”
3b. According to Symphony’s backers, nothing could go wrong because all the information that banks are required to retain for regulatory purposes is indeed retained in the system. Whether or not regulators can actually access that retained data, however, appears to be more of an open question. Again, the end-to-end encryption may very well insulate Banksters from the regulation vital to avoid a repeat of the 2008 scenario.
Symphony is taking heat from some in Washington, D.C. for its WhatsApp-like messaging service that promises to encrypt Wall Street’s messages from end to end. At the heart of the concern is whether or not the keys used to decrypt the messages will be made available to regulators, or if another form of back door access will be provided.
Without such keys it would be immensely more difficult to retrace the steps of shady characters on Wall Street during regulatory investigations — an ability which, according to a New York Post report, has resulted in $74 billion in fines over the past five years.
So, earlier this week Symphony took to the blogosphere with a rather detailed explanation of its plans to be compliant with regulators. Despite answering a lot of questions, though, one key point was either deftly evaded or overlooked.
What Symphony does, according to the blog post:
Symphony provides its customers with an innovative “end-to-end” secure messaging capability that protects communications in the cloud from cyber-threats and the risk of data breach, while safeguarding our customers’ ability to retain records of their messages. Symphony protects data, not only when it travels from “point-to-point” over network connections, but also the entire time the data is in the cloud.
How it works:
Large institutions using Symphony typically will store encryption keys using specialized hardware key management devices known as Hardware Security Modules (HSMs). These modules are installed in data centers and protect an organization’s keys, storing them within the secure protected memory of the HSM. Firms will use these keys to decrypt data and then feed the data into their record retention systems.
Symphony is designed to interface with record retention systems commonly deployed in financial institutions. By helping organizations reliably store messages in a central archive, our platform facilitates the rapid and complete retrieval of records when needed. Symphony provides security while data travels through the cloud; firms then securely receive the data from Symphony, decrypt it and store it so they can meet their retention obligations.
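The key-custody architecture Symphony describes can be sketched in a few lines. The XOR “cipher” below is a deliberately toy stand-in for the strong encryption and HSM hardware the real system uses; the message, key, and store names are all invented:

```python
# Toy illustration of the Symphony architecture: the cloud relays only
# ciphertext; the key stays with the member firm (in practice inside an
# HSM), which decrypts messages into its own retention archive. The XOR
# keystream below is NOT a secure scheme, just a placeholder.
import hashlib
from itertools import count

def keystream(key: bytes):
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

firm_key = b"held-only-inside-the-firm-HSM"   # never leaves the firm
cloud_store = []                               # what Symphony's cloud sees
archive = []                                   # the firm's retention system

def send(message: str):
    cloud_store.append(xor(message.encode(), firm_key))   # ciphertext only

def archive_all():
    for blob in cloud_store:
        archive.append(xor(blob, firm_key).decode())      # firm-side decrypt

send("sell 10k EURUSD at 1.0950")
archive_all()
print(cloud_store[0].hex())   # opaque to the cloud operator and anyone else
print(archive[0])             # readable only where the key lives
```

The regulatory question falls straight out of the sketch: everything in `archive` exists only on the firm’s side of the key boundary, so a regulator sees the plaintext only if the firm (or a mandated back door) hands it over.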
The potential to store every keystroke of every employee behind an encrypted wall, safe from malicious governments and other entities, is one that should make Wall Streeters, and those dependent on Wall Street resources, sleep a bit better at night.
But nowhere in Symphony’s blog post does it actually say that any of the 14 companies which have invested $70 million in the product, or any of the forthcoming customers who might sign up to use it, will actually share anything with regulators. Sure, it will retain all the information obliged by regulators, which in the right hands is equally useful to the companies. So there’s no surprise there.
The closest we see to any actual assurance that the Silicon Valley-based company plans to share that information with regulators is that Symphony is “designed to interface with record retention systems commonly deployed in financial institutions.” Which theoretically, means the SEC, the DOJ, or any number of regulatory bodies could plug in, assuming they had access.
So, the questions remain, will Symphony be building in some sort of back-door access for regulators? Or will it just be storing that information required of regulators, but for its clients’ use?
4a. The Bitcoin assassination markets are about to get some competition. A new variant of the Bitcoin technology will not only permit the use of Bitcoin to assassinate public figures but may very well replace–to a certain extent–the functions performed by attorneys.
Investors see riches in a cryptography-enabled technology called smart contracts–but it could also offer much to criminals.
Some of the earliest adopters of the digital currency Bitcoin were criminals, who have found it invaluable in online marketplaces for contraband and as payment extorted through lucrative “ransomware” that holds personal data hostage. A new Bitcoin-inspired technology that some investors believe will be much more useful and powerful may be set to unlock a new wave of criminal innovation.
That technology is known as smart contracts—small computer programs that can do things like execute financial trades or notarize documents in a legal agreement. Intended to take the place of third-party human administrators such as lawyers, which are required in many deals and agreements, they can verify information and hold or use funds using similar cryptography to that which underpins Bitcoin.
Some companies think smart contracts could make financial markets more efficient, or simplify complex transactions such as property deals (see “The Startup Meant to Reinvent What Bitcoin Can Do”). Ari Juels, a cryptographer and professor at the Jacobs Technion-Cornell Institute at Cornell Tech, believes they will also be useful for illegal activity–and, with two collaborators, he has demonstrated how.
“In some ways this is the perfect vehicle for criminal acts, because it’s meant to create trust in situations where otherwise it’s difficult to achieve,” says Juels.
In a paper to be released today, Juels, fellow Cornell professor Elaine Shi, and University of Maryland researcher Ahmed Kosba present several examples of what they call “criminal contracts.” They wrote them to work on the recently launched smart-contract platform Ethereum.
One example is a contract offering a cryptocurrency reward for hacking a particular website. Ethereum’s programming language makes it possible for the contract to control the promised funds. It will release them only to someone who provides proof of having carried out the job, in the form of a cryptographically verifiable string added to the defaced site.
Contracts with a similar design could be used to commission many kinds of crime, say the researchers. Most provocatively, they outline a version designed to arrange the assassination of a public figure. A person wishing to claim the bounty would have to send information such as the time and place of the killing in advance. The contract would pay out after verifying that those details had appeared in several trusted news sources, such as news wires. A similar approach could be used for lesser physical crimes, such as high-profile vandalism.
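The website-defacement contract the researchers describe can be sketched in ordinary code. The real version runs on Ethereum in its contract language; the site, parties, and commitment scheme below are simulated stand-ins:

```python
# Sketch of the "defacement bounty" criminal contract: the contract holds
# the funds and releases them only to a claimant whose secret hashes to a
# commitment string found on the target site. Everything here is simulated.
import hashlib

class FakeSite:
    def __init__(self):
        self.content = "Welcome to example-target.com"

class BountyContract:
    def __init__(self, bounty: int, target_site: FakeSite):
        self.bounty = bounty
        self.site = target_site
        self.paid_to = None

    def claim(self, claimant: str, secret: bytes) -> bool:
        """Pay out iff the hash of the claimant's secret appears on the site."""
        commitment = hashlib.sha256(secret).hexdigest()
        if self.paid_to is None and commitment in self.site.content:
            self.paid_to = claimant
            return True
        return False

site = FakeSite()
contract = BountyContract(bounty=100, target_site=site)

# A claim before the defacement fails...
print(contract.claim("mallory", b"my-secret"))          # False

# ...the attacker defaces the site with their commitment, then claims.
site.content += " " + hashlib.sha256(b"my-secret").hexdigest()
print(contract.claim("mallory", b"my-secret"))          # True
```

The point of the design is exactly what Juels notes: neither party has to trust the other, because payment is mechanically conditioned on cryptographic proof that the job was done.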
“It was a bit of a surprise to me that these types of crimes in the physical world could be enabled by a digital system,” says Juels. He and his coauthors say they are trying to publicize the potential for such activity to get technologists and policy makers thinking about how to make sure the positives of smart contracts outweigh the negatives.
“We are optimistic about their beneficial applications, but crime is something that is going to have to be dealt with in an effective way if those benefits are to bear fruit,” says Shi.
Nicolas Christin, an assistant professor at Carnegie Mellon University who has studied criminal uses of Bitcoin, agrees there is potential for smart contracts to be embraced by the underground. “It will not be surprising,” he says. “Fringe businesses tend to be the first adopters of new technologies, because they don’t have anything to lose.”
Gavin Wood, chief technology officer at Ethereum, notes that legitimate businesses are already planning to make use of his technology—for example, to provide a digitally transferable proof of ownership of gold.
However, Wood acknowledges it is likely that Ethereum will be used in ways that break the law—and even says that is part of what makes the technology interesting. Just as file sharing found widespread unauthorized use and forced changes in the entertainment and tech industries, illicit activity enabled by Ethereum could change the world, he says.
“The potential for Ethereum to alter aspects of society is of significant magnitude,” says Wood. “This is something that would provide a technical basis for all sorts of social changes and I find that exciting.”
For example, Wood says that Ethereum’s software could be used to create a decentralized version of a service such as Uber, connecting people wanting to go somewhere with someone willing to take them, and handling the payments without the need for a company in the middle. Regulators like those harrying Uber in many places around the world would be left with nothing to target. “You can implement any Web service without there being a legal entity behind it,” he says. “The idea of making certain things impossible to legislate against is really interesting.”
4b. If you’re a former subscriber of the “Ashley Madison” website for cheating, just FYI, you might be getting a friendly email soon:
People are the worst. An unknown number of assholes are threatening to expose Ashley Madison users, presumably ruining their marriages. The hacking victims must pay the extortionists “exactly 1.0000001 Bitcoins” or the spouse gets notified. Ugh.
This is an unnerving but not unpredictable turn of events. The data that the Ashley Madison hackers released early this week included millions of real email addresses, along with real home addresses, sexual proclivities and other very private information. Security blogger Brian Krebs talked to security firms who have evidence of extortion schemes linked to Ashley Madison data. Turns out spam filters are catching a number of emails being sent to victims from people who say they’ll make the information public unless they get paid!
Here’s one caught by an email provider in Milwaukee:
Unfortunately, your data was leaked in the recent hacking of Ashley Madison and I now have your information.
If you would like to prevent me from finding and sharing this information with your significant other send exactly 1.0000001 Bitcoins (approx. value $225 USD) to the following address:
Sending the wrong amount means I won’t know it’s you who paid.
You have 7 days from receipt of this email to send the BTC [bitcoins]. If you need help locating a place to purchase BTC, you can start here…..
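The oddly precise “exactly 1.0000001 Bitcoins” demand makes sense once you notice that a unique amount acts as a victim identifier on a public ledger. A minimal sketch, with all addresses and amounts invented:

```python
# Why the extortionist demands an exact, odd amount: by assigning each
# victim an amount differing in the final satoshis, any incoming payment
# identifies who paid without further contact. All data here is invented.
SATOSHI = 100_000_000   # satoshis per bitcoin

# Each victim is assigned a demand differing only in the last satoshis.
demands = {
    1 * SATOSHI + 10: "victim-a@example.com",
    1 * SATOSHI + 11: "victim-b@example.com",
    1 * SATOSHI + 12: "victim-c@example.com",
}

def who_paid(observed_amount: int) -> str:
    """Match an incoming payment back to a victim by its exact amount."""
    return demands.get(observed_amount, "unknown payer")

print(who_paid(1 * SATOSHI + 11))  # victim-b@example.com
print(who_paid(1 * SATOSHI))       # unknown payer (wrong amount)
```

In practice extortionists can also just generate a fresh receiving address per victim; the unique-amount trick achieves the same mapping when a single address is reused.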
One security expert explained to Krebs that this type of extortion could be dangerous. “There is going to be a dramatic crime wave of these types of virtual shakedowns, and they’ll evolve into spear-phishing campaigns that leverage crypto malware,” said Tom Kellermann of Trend Micro.
That sounds a little dramatic, but bear in mind just how many people were involved. Even if you assume some of the accounts were fake, there are potentially millions who’ve had their private information posted on the dark web for anybody to see and abuse. Some of these people are in the military, too, where they’d face possible penalties for adultery. If some goons think they can squeeze a bitcoin out of each of them, there are potentially tens of millions of dollars to be made.
The word “potentially” is important because some of these extortion emails are obviously getting stuck in spam filters, and some of the extortionists could easily just be bluffing. Either way, everybody loses when companies fail to secure their users’ data. Everybody except the criminals.
5. The emergence of what is coming to be called “The Internet of Things” holds truly ominous possibilities. Not only can Big Data/Big Tech get their hooks into people’s lives to an even greater extent than they can now (see Item #1 in this description) but hackers can have a field day.
A fridge that puts milk on your shopping list when you run low. A safe that tallies the cash that is placed in it. A sniper rifle equipped with advanced computer technology for improved accuracy. A car that lets you stream music from the Internet.
All of these innovations sound great, until you learn the risks that this type of connectivity carries. Recently, two security researchers, sitting on a couch and armed only with laptops, remotely took over a Chrysler Jeep Cherokee speeding along the highway, shutting down its engine as an 18-wheeler truck rushed toward it. They did this all while a Wired reporter was driving the car. Their expertise would allow them to hack any Jeep as long as they knew the car’s I.P. address, its network address on the Internet. They turned the Jeep’s entertainment dashboard into a gateway to the car’s steering, brakes and transmission.
A hacked car is a high-profile example of what can go wrong with the coming Internet of Things — objects equipped with software and connected to digital networks. The selling point for these well-connected objects is added convenience and better safety. In reality, it is a fast-motion train wreck in privacy and security.
The early Internet was intended to connect parties who already trusted one another, such as academic researchers or closed military networks. It never had the robust security that today’s global network needs. As the Internet went from a few thousand users to more than three billion, attempts to strengthen security were stymied because of cost, shortsightedness and competing interests. Connecting everyday objects to this shaky, insecure base will create the Internet of Hacked Things. This is irresponsible and potentially catastrophic.
That smart safe? Hackers can empty it with a single USB stick while erasing all logs of its activity — the evidence of deposits and withdrawals — and of their crime. That high-tech rifle? Researchers managed to remotely manipulate its target selection without the shooter’s knowing.
Home builders and car manufacturers have shifted to a new business: the risky world of information technology. Most seem utterly out of their depth.
Although Chrysler quickly recalled 1.4 million Jeeps to patch this particular vulnerability, it took the company more than a year after the issue was first noted, and the recall occurred only after that spectacular publicity stunt on the highway and after it was requested by the National Highway Traffic Safety Administration. In announcing the software fix, the company said that no defect had been found. If two guys sitting on their couch turning off a speeding car’s engine from miles away doesn’t qualify, I’m not sure what counts as a defect in Chrysler’s world. And Chrysler is far from the only company compromised: from BMW to Tesla to General Motors, many automotive brands have been hacked, with surely more to come.
Dramatic hacks attract the most attention, but the software errors that allow them to occur are ubiquitous. While complex breaches can take real effort — the Jeep hacker duo spent two years researching — simple errors in the code can also cause significant failure. Adding software with millions of lines of code to everyday objects greatly multiplies the opportunities for such errors.
The Internet of Things is also a privacy nightmare. Databases that already have too much information about us will now be bursting with data on the places we’ve driven, the food we’ve purchased and more. Last week, at Def Con, the annual information security conference, researchers set up an Internet of Things village to show how they could hack everyday objects like baby monitors, thermostats and security cameras.
Connecting everyday objects introduces new risks if done at mass scale. Take that smart refrigerator. If a single fridge malfunctions, it’s a hassle. However, if the fridge’s computer is connected to its motor, a software bug or hack could “brick” millions of them all at once — turning them into plastic pantries with heavy doors.
Cars — two-ton metal objects designed to hurtle down highways — are already bracingly dangerous. The modern automobile is run by dozens of computers that most manufacturers connect using a system that is old and known to be insecure. Yet automakers often use that flimsy system to connect all of the car’s parts. That means once a hacker is in, she’s in everywhere — engine, steering, transmission and brakes, not just the entertainment system.
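The insecurity described above follows from the design of the classic in-vehicle bus: a standard CAN data frame carries only an arbitration ID, a length and up to eight payload bytes, with no field identifying or authenticating the sender. The sketch below, a hypothetical illustration modeled loosely on the Linux SocketCAN frame layout (the ID and payload values are invented for the example), shows why a receiver cannot distinguish a forged frame from a legitimate one.

```python
import struct

def build_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic CAN 2.0 frame: 32-bit ID, length byte, padding,
    and an 8-byte payload. Note there is no sender-identity or
    signature field anywhere in the layout."""
    assert len(data) <= 8
    return struct.pack("<IB3x8s", can_id, len(data), data.ljust(8, b"\x00"))

# A frame "from" the real engine-control unit (ID and bytes are made up):
legit = build_can_frame(0x0C4, b"\x01\x02")

# The same frame emitted by any other node on the bus, e.g. a
# compromised entertainment head unit:
forged = build_can_frame(0x0C4, b"\x01\x02")

# Byte-for-byte identical: nothing in the frame tells them apart.
print(forged == legit)  # True
```

This is why a foothold in one connected component can reach every other component on the same bus: the protocol trusts any node that can write to the wire.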
For years, security researchers have been warning about the dangers of coupling so many systems in cars. Alarmed researchers have published academic papers, hacked cars as demonstrations, and begged the industry to step up. So far, the industry response has been to nod politely and fix exposed flaws without fundamentally changing the way it operates.
In 1965, Ralph Nader published “Unsafe at Any Speed,” documenting car manufacturers’ resistance to spending money on safety features like seatbelts. After public debate and finally some legislation, manufacturers were forced to incorporate safety technologies.
No company wants to be the first to bear the costs of updating the insecure computer systems that run most cars. We need federal safety regulations to push automakers to move, as a whole industry. Last month, a bill with privacy and cybersecurity standards for cars was introduced in the Senate. That’s good, but it’s only a start. We need a new understanding of car safety, and of the safety of any object running software or connecting to the Internet.
It may be hard to fix security on the digital Internet, but the Internet of Things should not be built on this faulty foundation. Responding to digital threats by patching only exposed vulnerabilities is giving just aspirin to a very ill patient.
It isn’t hopeless. We can make programs more reliable and databases more secure. Critical functions on Internet-connected objects should be isolated and external audits mandated to catch problems early. But this will require an initial investment to forestall future problems — the exact opposite of the current corporate impulse. It also may be that not everything needs to be networked, and that the trade-off in vulnerability isn’t worth it. Maybe cars are unsafe at any I.P.
6. We conclude by re-examining one of the most important analytical articles in a long time, David Golumbia’s article in Uncomputing.org about technocrats and their fundamentally undemocratic outlook.
” . . . . Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (“civic,” “ethical,” “white” and “black” hat alike), hacktivists, WikiLeaks fans [and Julian Assange et al–D. E.], Anonymous “members,” even Edward Snowden himself walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands. . . First, [Tor co-creator] Dingledine claimed that Tor must be supported because it follows directly from a fundamental “right to privacy.” Yet when pressed—and not that hard—he admits that what he means by “right to privacy” is not what any human rights body or “particular legal regime” has meant by it. Instead of talking about how human rights are protected, he asserts that human rights are natural rights and that these natural rights create natural law that is properly enforced by entities above and outside of democratic polities. Where the UN’s Universal Declaration on Human Rights of 1948 is very clear that states and bodies like the UN to which states belong are the exclusive guarantors of human rights, whatever the origin of those rights, Dingledine asserts that a small group of software developers can assign to themselves that role, and that members of democratic polities have no choice but to accept them having that role. . . 
Further, it is hard not to notice that the appeal to natural rights is today most often associated with the political right, for a variety of reasons (ur-neocon Leo Strauss was one of the most prominent 20th century proponents of these views). We aren’t supposed to endorse Tor because we endorse the right: it’s supposed to be above the left/right distinction. But it isn’t. . . .”