Dave Emory’s entire lifetime of work is available on a flash drive that can be obtained here. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by late spring of 2015. The new drive (available for a tax-deductible contribution of $65.00 or more) contains FTR #850.
WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE.
You can subscribe to e‑mail alerts from Spitfirelist.com HERE.
You can subscribe to RSS feed from Spitfirelist.com HERE.
You can subscribe to the comments made on programs and posts–an excellent source of information in, and of, itself HERE.
This program was recorded in one, 60-minute segment.
Introduction: Albert Einstein said of the invention of the atomic bomb: “Everything has changed but our way of thinking.” We feel that other, more recent developments in the world of Big Tech warrant the same type of warning.
This program further explores the Brave New World being midwived by technocrats. These stunning developments should be viewed against the background of what we call “technocratic fascism,” referencing a vitally important article by David Golumbia: “ . . . . Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (“civic,” “ethical,” “white” and “black” hat alike), hacktivists, WikiLeaks fans [and Julian Assange et al–D. E.], Anonymous “members,” even Edward Snowden himself walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands. . . First, [Tor co-creator] Dingledine claimed that Tor must be supported because it follows directly from a fundamental “right to privacy.” Yet when pressed—and not that hard—he admits that what he means by “right to privacy” is not what any human rights body or “particular legal regime” has meant by it. Instead of talking about how human rights are protected, he asserts that human rights are natural rights and that these natural rights create natural law that is properly enforced by entities above and outside of democratic polities. Where the UN’s Universal Declaration on Human Rights of 1948 is very clear that states and bodies like the UN to which states belong are the exclusive guarantors of human rights, whatever the origin of those rights, Dingledine asserts that a small group of software developers can assign to themselves that role, and that members of democratic polities have no choice but to accept them having that role. . . Further, it is hard not to notice that the appeal to natural rights is today most often associated with the political right, for a variety of reasons (ur-neocon Leo Strauss was one of the most prominent 20th century proponents of these views). We aren’t supposed to endorse Tor because we endorse the right: it’s supposed to be above the left/right distinction. But it isn’t. . . .”
We begin by examining a couple of articles relevant to the world of credit.
Big Tech and Big Data have reached the point where, for all intents and purposes, credit card users and virtually everyone else have no personal privacy. Even without detailed personal information, capable tech operators can identify people’s identities with an extraordinary degree of precision using a surprisingly small amount of information.
Compounding the worries of those seeking credit is a new Facebook “app” that will enable banks to identify how poor a customer’s friends are and to deny the unsuspecting applicant credit on that basis!
Even as Big Tech is permitting financial institutions to zero in on customers to an unprecedented degree, it is moving in the direction of obscuring the doings of Banksters. The Symphony network offers end-to-end encryption that appears to make the operations of the financial institutions using it opaque to regulators.
A new variant of the Bitcoin technology will not only facilitate the use of Bitcoin to assassinate public figures but may very well replace–to a certain extent–the functions performed by attorneys. (We have covered Bitcoin–an apparent Underground Reich invention–in FTR #‘s 760, 764, 770, 785.)
As frightening as some of the above possibilities may be, things may get dramatically worse with the introduction of “the Internet of Things,” permitting the hacking of many types of everyday technologies, as well as the use of those technologies to give Big Tech and Big Data unprecedented intrusion into people’s lives.
Program Highlights Include:
- Discussion of the hacking of an automobile using a laptop.
- Comparison of the developments of Big Tech and Big Data to magic and the implications for a species that remains true to its neanderthal, femur-cracking, marrow-sucking roots.
- Review of some of the points covered in L‑2.
- The need for vastly bigger, rigorously regulated government instead of the fascism inherent in the libertarian doctrine.
- How hackers are attempting to extort users of the Ashley Madison cheaters website.
1. Big Tech and Big Data have reached the point where, for all intents and purposes, credit card users and virtually everyone else have no personal privacy. Even without detailed personal information, capable tech operators can identify people’s identities with an extraordinary degree of precision using a surprisingly small amount of information.
Last Thursday the journal Science published an article by four MIT-affiliated data scientists (Sandy Pentland is in the group, and he’s a big name in these circles), titled “Unique in the shopping mall: On the reidentifiability of credit card metadata”. Sounds innocuous enough, but here’s the summary from the front page WSJ article describing the findings:
Researchers at the Massachusetts Institute of Technology, writing Thursday in the journal Science, analyzed anonymous credit-card transactions by 1.1 million people. Using a new analytic formula, they needed only four bits of secondary information—metadata such as location or timing—to identify the unique individual purchasing patterns of 90% of the people involved, even when the data were scrubbed of any names, account numbers or other obvious identifiers.
Still not sure what this means? It means that I don’t need your name and address, much less your social security number, to know who you ARE. With a trivial amount of transactional data I can figure out where you live, what you do, who you associate with, what you buy and what you sell. I don’t need to steal this data, and frankly I wouldn’t know what to do with your social security number even if I had it … it would just slow down my analysis. No, you give me everything I need just by living your very convenient life, where you’ve volunteered every bit of transactional information in the fine print of all of these wondrous services you’ve signed up for. And if there’s a bit more information I need – say, a device that records and transmits your driving habits – well, you’re only too happy to sell that to me for a few dollars off your insurance policy. After all, you’ve got nothing to hide. It’s free money!
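To make the 90% figure concrete: the MIT result is about “unicity,” i.e., how few data points make a trace unique in a crowd. Here is a minimal, hypothetical sketch of that test on an invented transaction log (all data made up; the real study used credit card metadata for 1.1 million people):

```python
import random

# Toy anonymized "transaction log": each user is a set of (shop, day) points.
# All data here is invented for illustration.
random.seed(1)
log = {
    f"user{i}": {(random.randrange(50), random.randrange(90)) for _ in range(30)}
    for i in range(1000)
}

def unique_given(user, k=4):
    """Is `user` the only person whose trace contains k of their own points?"""
    points = random.sample(sorted(log[user]), k)
    matches = [u for u, trace in log.items() if all(p in trace for p in points)]
    return matches == [user]

rate = sum(unique_given(u) for u in log) / len(log)
print(f"Users pinned down by just 4 points: {rate:.0%}")
```

Even a toy log like this reproduces the core point: a handful of spatiotemporal points is usually a unique fingerprint, no names required.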
Almost every investor I know believes that the tools of surveillance and Big Data are only used against the marginalized Other – terrorist “sympathizers” in Yemen, gang “associates” in Compton – but not us. Oh no, not us. And if those tools are trained on us, it’s only to promote “transparency” and weed out the bad guys lurking in our midst. Or maybe to suggest a movie we’d like to watch. What could possibly be wrong with that? I’ve written a lot (here, here, and here) about what’s wrong with that, about how the modern fetish with transparency, aided and abetted by technology and government, perverts the core small‑l liberal institutions of markets and representative government.
It’s not that we’re complacent about our personal information. On the contrary, we are obsessed about the personal “keys” that are meaningful to humans – names, social security numbers, passwords and the like – and we spend billions of dollars and millions of hours every year to control those keys, to prevent them from falling into the wrong hands of other humans. But we willingly hand over a different set of keys to non-human hands without a second thought.
The problem is that our human brains are wired to think of data processing in human ways, and so we assume that computerized systems process data in these same human ways, albeit more quickly and more accurately. Our science fiction is filled with computer systems that are essentially god-like human brains, machines that can talk and “think” and manipulate physical objects, as if sentience in a human context is the pinnacle of data processing! This anthropomorphic bias drives me nuts, as it dampens both the sense of awe and the sense of danger we should be feeling at what already walks among us. It seems like everyone and his brother today are wringing their hands about AI and some impending “Singularity”, a moment of future doom where non-human intelligence achieves some human-esque sentience and decides in Matrix-like fashion to turn us into batteries or some such. Please. The Singularity is already here. Its name is Big Data.
Big Data is magic, in exactly the sense that Arthur C. Clarke wrote of sufficiently advanced technology. It’s magic in a way that thermonuclear bombs and television are not, because for all the complexity of these inventions they are driven by cause and effect relationships in the physical world that the human brain can process comfortably, physical world relationships that might not have existed on the African savanna 2,000,000 years ago but are understandable with the sensory and neural organs our ancestors evolved on that savanna. Big Data systems do not “see” the world as we do, with merely 3 dimensions of physical reality. Big Data systems are not social animals, evolved by nature and trained from birth to interpret all signals through a social lens. Big Data systems are sui generis, a way of perceiving the world that may have been invented by human ingenuity and can serve human interests, but are utterly non-human and profoundly not of this world.
A Big Data system couldn’t care less if it has your specific social security number or your specific account ID, because it’s not understanding who you are based on how you identify yourself to other humans. That’s the human bias here, that a Big Data system would try to predict our individual behavior based on an analysis of what we individually have done in the past, as if the computer were some super-advanced version of Sherlock Holmes. No, what a Big Data system can do is look at ALL of our behaviors, across ALL dimensions of that behavior, and infer what ANY of us would do under similar circumstances. It’s a simple concept, really, but what the human brain can’t easily comprehend is the vastness of the ALL part of the equation or what it means to look at the ALL simultaneously and in parallel. I’ve been working with inference engines for almost 30 years now, and while I think that I’ve got unusually good instincts for this and I’ve been able to train my brain to kinda sorta think in multi-dimensional terms, the truth is that I only get glimpses of what’s happening inside these engines. I can channel the magic, I can appreciate the magic, and on a purely symbolic level I can describe the magic. But on a fundamental level I don’t understand the magic, and neither does any other human. What I can say to you with absolute certainty, however, is that the magic exists and there are plenty of magicians like me out there, with more graduating from MIT and Harvard and Stanford every year.
Here’s the magic trick that I’m worried about for investors.
In exactly the same way that we have given away our personal behavioral data to banks and credit card companies and wireless carriers and insurance companies and a million app providers, so are we now being tempted to give away our portfolio behavioral data to mega-banks and mega-asset managers and the technology providers who work with them. Don’t worry, they say, there’s nothing in this information that identifies you directly. It’s all anonymous. What rubbish! With enough anonymous portfolio behavioral data and a laughably small IT budget, any competent magician can design a Big Data system that can predict with 90% accuracy what you will buy and sell in your account, at what price you will buy and sell, and under what external macro conditions you will buy and sell. Every day these private data sets at the mega-market players get bigger and bigger, and every day we get closer and closer to a Citadel or a Renaissance perfecting their Inference Machine for the liquid capital markets. For all I know, they already have. . . .
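What such an “Inference Machine” does is closer to nearest-neighbor lookup than to Sherlock Holmes. A hypothetical sketch of the “infer what ANY of us would do from what ALL of us have done” idea (the data and the similarity measure are invented for illustration):

```python
import numpy as np

# Hypothetical behavior matrix: one row per person, one column per behavioral
# dimension (purchases, locations, timing, trades...). No names needed:
# similarity in behavior space is identity enough.
rng = np.random.default_rng(0)
behaviors = rng.random((10_000, 200))
sold_in_panic = rng.integers(0, 2, size=10_000)  # what each person did last selloff

def infer(target, k=50):
    """Predict what `target` would do from what the k most similar people did."""
    sims = behaviors @ target                  # crude similarity score
    lookalikes = np.argsort(sims)[-k:]         # the k nearest behavioral neighbors
    return sold_in_panic[lookalikes].mean()    # consensus of the look-alikes

print(f"P(sells in a panic) ~ {infer(behaviors[0]):.2f}")
```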
2. Check out Facebook’s new patent, to be evaluated in conjunction with the previous story. Facebook’s patent is for a service that will let banks scan your Facebook friends for the purpose of assessing your credit quality. For instance, Facebook might set up a service where banks can take the average of the credit ratings for all of the people in your social network, and if that average doesn’t meet a minimum credit score, your loan application is denied. And that’s not just some random application of Facebook’s new patent–the system of using the average credit scores of your social network to deny you loans is explicitly part of the patent:
“Facebook’s New Plan: Help Banks Figure Out How Poor You Are So They Can Deny You Loans” by Jack Smith IV; mic.com; 8/5/2015.
If you and your Facebook friends are poor, good luck getting approved for a loan.
Facebook has registered a patent for a system that would let banks and lenders screen your social network before deciding whether or not you’re approved for a loan. If your Facebook friends’ average credit scores don’t make the cut, the bank can reject you. The patent is worded in clear, terrifying language that speaks for itself:
When an individual applies for a loan, the lender examines the credit ratings of members of the individual’s social network who are connected to the individual through authorized nodes. If the average credit rating of these members is at least a minimum credit score, the lender continues to process the loan application. Otherwise, the loan application is rejected.
It’s very literally guilt by association, allowing banks and lenders to profile you by the status of your loved ones.
Though a credit score isn’t necessarily a reflection of your wealth, it can serve as a rough guideline for who has a reliable, managed income and who has had to lean on credit in trying times. A line of credit is sometimes a lifeline, either for starting a new business or escaping a temporary hardship.
Profiling people for being in social circles where low credit scores are likely could cut off someone’s chances of finding financial relief. In effect, it’s a device that isolates the poor and keeps them poor.
A bold new era for discrimination: In the United States, it’s illegal to deny someone a loan based on traditional identifiers like race or gender — the kinds of things people usually use to discriminate. But these laws were made before Facebook was able to peer into your social graph and learn when, where and how long you’ve known your friends and acquaintances.
The fitness-tracking tech company Fitbit said in 2014 that the fastest growing part of their business is helping employers monitor the health of their employees. Once insurers show interest in this information, you can bet they’ll be making a few rejections of their own. And if a group insurance plan that affects every employee depends on measurable, real-time data for the fitness of its employees, how will that affect the hiring process?
...
And if you don’t like it, just find richer friends.
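The patent language quoted above translates almost line for line into code. A hypothetical sketch (the graph, scores, and threshold are all invented):

```python
# Hypothetical sketch of the screening logic described in the patent:
# average the credit ratings of the applicant's authorized connections
# and reject if the average misses a minimum score.
social_graph = {"applicant": ["friend_a", "friend_b", "friend_c"]}
credit_scores = {"friend_a": 580, "friend_b": 610, "friend_c": 640}
MIN_CREDIT_SCORE = 620

def screen_loan(applicant):
    friends = social_graph[applicant]
    avg = sum(credit_scores[f] for f in friends) / len(friends)
    return "continue processing" if avg >= MIN_CREDIT_SCORE else "rejected"

print(screen_loan("applicant"))   # -> "rejected": guilt by association
```

Note what is absent: nothing about the applicant’s own finances need enter the decision at all.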
3a. A consortium of 14 mega-banks has privately developed a special super-secure inter-bank messaging system that uses end-to-end strong encryption and permanently deletes data. The Symphony system may very well make it impossible for regulators to adequately oversee the financial malefactors responsible for the 2008 financial meltdown.
“NY Regulator Sends Message to Symphony” by Ben McLannahan and Gina Chon; Financial Times; 7/22/2015.
New York’s state banking regulator has fired a shot across the bows of Symphony, a messaging service about to be launched by a consortium of Wall Street banks and asset managers, by calling for information on how it manages — and deletes — customer data.
In a letter on Wednesday to David Gurle, the chief executive of Symphony Communication Services, the New York Department of Financial Services asked it to clarify how its tool would allow firms to erase their data trails, potentially falling foul of laws on record-keeping.
The letter, which was signed by acting superintendent Anthony Albanese and shared with the press, noted that chatroom transcripts had formed a critical part of authorities’ investigations into the rigging of markets for foreign exchange and interbank loans. It called for Symphony to spell out its document retention capabilities, policies and features, citing two specific areas of interest as “data deletion” and “end-to-end encryption”.
The letter marks the first expression of concern from regulators over a new initiative that has set out to challenge the dominance of Bloomberg, whose 320,000-plus subscribers ping about 200m messages a day between terminals using its communication tools.
People familiar with the matter described the inquiry as an information gathering exercise, which could conclude that Symphony is a perfectly legitimate enterprise.
The NYDFS noted that Symphony’s marketing materials state that “Symphony has designed a specific set of procedures to guarantee that data deletion is permanent and fully documented. We also delete content on a regular basis in accordance with customer data retention policies.”
Mr Albanese also wrote that he would follow up with four consortium members that the NYDFS regulates — Bank of New York Mellon, Credit Suisse, Deutsche Bank and Goldman Sachs — to ask them how they plan to use the new service, which will go live for big customers in the first week of August.
The regulator said it was keen to find out how banks would ensure that messages created using Symphony would be retained, and “whether their use of Symphony’s encryption technology can be used to prevent review by compliance personnel or regulators”. It also flagged concerns over the open-source features of the product, wondering if they could be used to “circumvent” oversight.
The other members of the consortium are Bank of America Merrill Lynch, BlackRock, Citadel, Citigroup, HSBC, Jefferies, JPMorgan, Maverick Capital, Morgan Stanley and Wells Fargo. Together they have chipped in about $70m to get Symphony started. Another San Francisco-based fund run by a former colleague of Mr Gurle’s, Merus Capital, has a 5 per cent interest.
“Symphony is built on a foundation of security, compliance and privacy features that were built to enable our financial services and enterprise customers to meet their regulatory requirements,” said Mr Gurle. “We look forward to explaining the various aspects of our communications platform to the New York Department of Financial Services.”
3b. According to Symphony’s backers, nothing could go wrong because all the information that banks are required to retain for regulatory purposes is indeed retained in the system. Whether or not regulators can actually access that retained data, however, appears to be more of an open question. Again, the end-to-end encryption may very well insulate Banksters from the regulation vital to avoid a repeat of the 2008 scenario.
“Symphony, the ‘WhatsApp for Wall Street,’ Orchestrates a Nuanced Response to Regulatory Critics” by Michael del Castillo; New York Business Journal; 8/13/2015.
Symphony is taking heat from some in Washington, D.C., for its WhatsApp-like messaging service that promises to encrypt Wall Street’s messages from end to end. At the heart of the concern is whether or not the keys used to decrypt the messages will be made available to regulators, or if another form of back door access will be provided.
Without such keys it would be immensely more difficult to retrace the steps of shady characters on Wall Street during regulatory investigations — an ability, which according to a New York Post report, has resulted in $74 billion in fines over the past five years.
So, earlier this week Symphony took to the blogosphere with a rather detailed explanation of its plans to be compliant with regulators. In spite of answering a lot of questions though, one key point was either deftly evaded, or overlooked.
What Symphony does, according to the blog post:
Symphony provides its customers with an innovative “end-to-end” secure messaging capability that protects communications in the cloud from cyber-threats and the risk of data breach, while safeguarding our customers’ ability to retain records of their messages. Symphony protects data, not only when it travels from “point-to-point” over network connections, but also the entire time the data is in the cloud.
How it works:
Large institutions using Symphony typically will store encryption keys using specialized hardware key management devices known as Hardware Security Modules (HSMs). These modules are installed in data centers and protect an organization’s keys, storing them within the secure protected memory of the HSM. Firms will use these keys to decrypt data and then feed the data into their record retention systems.
The crux:
Symphony is designed to interface with record retention systems commonly deployed in financial institutions. By helping organizations reliably store messages in a central archive, our platform facilitates the rapid and complete retrieval of records when needed. Symphony provides security while data travels through the cloud; firms then securely receive the data from Symphony, decrypt it and store it so they can meet their retention obligations.
The potential to store every key-stroke of every employee behind an encrypted wall safe from malicious governments and other entities is one that should make Wall Streeters, and those dependent on Wall Street resources, sleep a bit better at night.
But nowhere in Symphony’s blog post does it actually say that any of the 14 companies which have invested $70 million in the product, or any of the forthcoming customers who might sign up to use it, will actually share anything with regulators. Sure, it will retain all the information obliged by regulators, which in the right hands is equally useful to the companies. So there’s no surprise there.
The closest we see to any actual assurance that the Silicon Valley-based company plans to share that information with regulators is that Symphony is “designed to interface with record retention systems commonly deployed in financial institutions.” Which theoretically, means the SEC, the DOJ, or any number of regulatory bodies could plug in, assuming they had access.
So, the questions remain, will Symphony be building in some sort of back-door access for regulators? Or will it just be storing that information required of regulators, but for its clients’ use?
...
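The open question above is easier to see in a sketch. The flow Symphony’s blog post describes is: encrypt before the cloud, decrypt inside the firm, archive the plaintext. Below is a schematic rendering that uses the Python `cryptography` package’s Fernet construction purely as a stand-in for Symphony’s actual (non-public) cryptography; the regulatory question is simply who, besides the firm, ever holds `firm_key`:

```python
from cryptography.fernet import Fernet

firm_key = Fernet.generate_key()   # in practice, generated and held in the firm's HSM
cipher = Fernet(firm_key)

# 1. The message is encrypted before it leaves the trader's machine...
ciphertext = cipher.encrypt(b"let's discuss the fix at 4pm")

# 2. ...so the cloud (and anyone who subpoenas only the cloud) sees noise.
print(ciphertext[:20])

# 3. The firm decrypts with its own key and feeds its retention archive.
archive = [cipher.decrypt(ciphertext)]

# Regulators can read the archive only if the firm retains it -- and hands it over.
```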
4a. The Bitcoin assassination markets are about to get some competition. A new variant of the Bitcoin technology will not only permit the use of Bitcoin to assassinate public figures but may very well replace–to a certain extent–the functions performed by attorneys.
“Bitcoin’s Dark Side Could Get Darker” by Tom Simonite; MIT Technology Review; 8/13/2015.
Investors see riches in a cryptography-enabled technology called smart contracts–but it could also offer much to criminals.
Some of the earliest adopters of the digital currency Bitcoin were criminals, who have found it invaluable in online marketplaces for contraband and as payment extorted through lucrative “ransomware” that holds personal data hostage. A new Bitcoin-inspired technology that some investors believe will be much more useful and powerful may be set to unlock a new wave of criminal innovation.
That technology is known as smart contracts—small computer programs that can do things like execute financial trades or notarize documents in a legal agreement. Intended to take the place of third-party human administrators such as lawyers, which are required in many deals and agreements, they can verify information and hold or use funds using similar cryptography to that which underpins Bitcoin.
Some companies think smart contracts could make financial markets more efficient, or simplify complex transactions such as property deals (see “The Startup Meant to Reinvent What Bitcoin Can Do”). Ari Juels, a cryptographer and professor at the Jacobs Technion-Cornell Institute at Cornell Tech, believes they will also be useful for illegal activity–and, with two collaborators, he has demonstrated how.
“In some ways this is the perfect vehicle for criminal acts, because it’s meant to create trust in situations where otherwise it’s difficult to achieve,” says Juels.
In a paper to be released today, Juels, fellow Cornell professor Elaine Shi, and University of Maryland researcher Ahmed Kosba present several examples of what they call “criminal contracts.” They wrote them to work on the recently launched smart-contract platform Ethereum.
One example is a contract offering a cryptocurrency reward for hacking a particular website. Ethereum’s programming language makes it possible for the contract to control the promised funds. It will release them only to someone who provides proof of having carried out the job, in the form of a cryptographically verifiable string added to the defaced site.
Contracts with a similar design could be used to commission many kinds of crime, say the researchers. Most provocatively, they outline a version designed to arrange the assassination of a public figure. A person wishing to claim the bounty would have to send information such as the time and place of the killing in advance. The contract would pay out after verifying that those details had appeared in several trusted news sources, such as news wires. A similar approach could be used for lesser physical crimes, such as high-profile vandalism.
“It was a bit of a surprise to me that these types of crimes in the physical world could be enabled by a digital system,” says Juels. He and his coauthors say they are trying to publicize the potential for such activity to get technologists and policy makers thinking about how to make sure the positives of smart contracts outweigh the negatives.
“We are optimistic about their beneficial applications, but crime is something that is going to have to be dealt with in an effective way if those benefits are to bear fruit,” says Shi.
Nicolas Christin, an assistant professor at Carnegie Mellon University who has studied criminal uses of Bitcoin, agrees there is potential for smart contracts to be embraced by the underground. “It will not be surprising,” he says. “Fringe businesses tend to be the first adopters of new technologies, because they don’t have anything to lose.”
...
Gavin Wood, chief technology officer at Ethereum, notes that legitimate businesses are already planning to make use of his technology—for example, to provide a digitally transferable proof of ownership of gold.
However, Wood acknowledges it is likely that Ethereum will be used in ways that break the law—and even says that is part of what makes the technology interesting. Just as file sharing found widespread unauthorized use and forced changes in the entertainment and tech industries, illicit activity enabled by Ethereum could change the world, he says.
“The potential for Ethereum to alter aspects of society is of significant magnitude,” says Wood. “This is something that would provide a technical basis for all sorts of social changes and I find that exciting.”
For example, Wood says that Ethereum’s software could be used to create a decentralized version of a service such as Uber, connecting people wanting to go somewhere with someone willing to take them, and handling the payments without the need for a company in the middle. Regulators like those harrying Uber in many places around the world would be left with nothing to target. “You can implement any Web service without there being a legal entity behind it,” he says. “The idea of making certain things impossible to legislate against is really interesting.”
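Ethereum contracts are written in languages such as Solidity, but the “criminal contract” pattern the researchers describe can be sketched language-agnostically: funds are locked against a proof the contract can verify on its own. A hypothetical Python rendering (names and logic invented for illustration; on Ethereum, the outside-world check would be supplied by an oracle feeding defaced pages or news wires into the chain):

```python
import hashlib

class HackBounty:
    """Toy rendering of a 'criminal contract': funds stay locked until a
    claimant presents a proof the contract can verify itself, no middleman."""

    def __init__(self, reward_btc, commitment):
        self.reward = reward_btc
        self.commitment = commitment      # sha256 of the agreed proof string

    def claim(self, proof_string, fetch_target_page):
        # On Ethereum this check runs as code on-chain; an oracle supplies
        # the outside-world fact (the defaced page, or a news report).
        appears = proof_string in fetch_target_page()
        valid = hashlib.sha256(proof_string.encode()).hexdigest() == self.commitment
        return self.reward if (appears and valid) else 0.0

secret = "pwned-by-0xdeadbeef"
bounty = HackBounty(5.0, hashlib.sha256(secret.encode()).hexdigest())
print(bounty.claim(secret, lambda: "<html>pwned-by-0xdeadbeef</html>"))  # -> 5.0
```

The automated verification step is precisely what Juels means by creating “trust in situations where otherwise it’s difficult to achieve.”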
4b. If you’re a former subscriber of the “Ashley Madison” website for cheating, just FYI, you might be getting a friendly email soon:
People are the worst. An unknown number of assholes are threatening to expose Ashley Madison users, presumably ruining their marriages. The hacking victims must pay the extortionists “exactly 1.0000001 Bitcoins” or the spouse gets notified. Ugh.
This is an unnerving but not unpredictable turn of events. The data that the Ashley Madison hackers released early this week included millions of real email addresses, along with real home addresses, sexual proclivities and other very private information. Security blogger Brian Krebs talked to security firms who have evidence of extortion schemes linked to Ashley Madison data. Turns out spam filters are catching a number of emails being sent to victims from people who say they’ll make the information public unless they get paid!
Here’s one caught by an email provider in Milwaukee:
Hello,
Unfortunately, your data was leaked in the recent hacking of Ashley Madison and I now have your information.
If you would like to prevent me from finding and sharing this information with your significant other send exactly 1.0000001 Bitcoins (approx. value $225 USD) to the following address:
1B8eH7HR87vbVbMzX4gk9nYyus3KnXs4Ez
Sending the wrong amount means I won’t know it’s you who paid.
You have 7 days from receipt of this email to send the BTC [bitcoins]. If you need help locating a place to purchase BTC, you can start here…..
...
One security expert explained to Krebs that this type of extortion could be dangerous. “There is going to be a dramatic crime wave of these types of virtual shakedowns, and they’ll evolve into spear-phishing campaigns that leverage crypto malware,” said Tom Kellerman of Trend Micro.
That sounds a little dramatic, but bear in mind just how many people were involved. Even if you assume some of the accounts were fake, there are potentially millions who’ve had their private information posted on the dark web for anybody to see and abuse. Some of these people are in the military, too, where they’d face possible penalties for adultery. If some goons think they can squeeze a bitcoin out of each of them, there are potentially tens of millions of dollars to be made.
The word “potentially” is important because some of these extortion emails are obviously getting stuck in spam filters, and some of the extortionists could easily just be bluffing. Either way, everybody loses when companies fail to secure their users’ data. Everybody except the criminals.
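About that oddly precise “exactly 1.0000001 Bitcoins”: on a public ledger, pseudonymous payers can be told apart by amount, so a unique amount per victim doubles as a receipt number. A hypothetical sketch of the bookkeeping (emails and amounts invented):

```python
# Amounts in satoshis (1 BTC = 100_000_000 sats); a unique trailing digit
# per victim turns the public ledger into the extortionist's receipt book.
victims = ["alice@example.com", "bob@example.com"]
demands = {100_000_010 + i: v for i, v in enumerate(victims)}  # ~1.0000001 BTC each

def match_payments(observed_sats):
    """Map amounts seen at the extortionist's address back to victims."""
    return {demands[a]: "paid" for a in observed_sats if a in demands}

print(match_payments([100_000_011]))  # -> {'bob@example.com': 'paid'}
```

That is why “sending the wrong amount means I won’t know it’s you who paid”: the amount is the only identifier in the scheme.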
5. The emergence of what is coming to be called “The Internet of Things” holds truly ominous possibilities. Not only can Big Data/Big Tech get their hooks into peoples’ lives to an even greater extent than they can now (see Item #1 in this description) but hackers can have a field day.
“Why Smart Objects May Be a Dumb Idea” by Zeynep Tufekci; The New York Times; 8/10/2015.
A fridge that puts milk on your shopping list when you run low. A safe that tallies the cash that is placed in it. A sniper rifle equipped with advanced computer technology for improved accuracy. A car that lets you stream music from the Internet.
All of these innovations sound great, until you learn the risks that this type of connectivity carries. Recently, two security researchers, sitting on a couch and armed only with laptops, remotely took over a Chrysler Jeep Cherokee speeding along the highway, shutting down its engine as an 18-wheeler truck rushed toward it. They did this all while a Wired reporter was driving the car. Their expertise would allow them to hack any Jeep as long as they knew the car’s I.P. address, its network address on the Internet. They turned the Jeep’s entertainment dashboard into a gateway to the car’s steering, brakes and transmission.
A hacked car is a high-profile example of what can go wrong with the coming Internet of Things — objects equipped with software and connected to digital networks. The selling point for these well-connected objects is added convenience and better safety. In reality, it is a fast-motion train wreck in privacy and security.
The early Internet was intended to connect people who already trusted one another, like academic researchers or military networks. It never had the robust security that today’s global network needs. As the Internet went from a few thousand users to more than three billion, attempts to strengthen security were stymied because of cost, shortsightedness and competing interests. Connecting everyday objects to this shaky, insecure base will create the Internet of Hacked Things. This is irresponsible and potentially catastrophic.
That smart safe? Hackers can empty it with a single USB stick while erasing all logs of its activity — the evidence of deposits and withdrawals — and of their crime. That high-tech rifle? Researchers managed to remotely manipulate its target selection without the shooter’s knowing.
Home builders and car manufacturers have shifted to a new business: the risky world of information technology. Most seem utterly out of their depth.
Although Chrysler quickly recalled 1.4 million Jeeps to patch this particular vulnerability, it took the company more than a year after the issue was first noted, and the recall occurred only after that spectacular publicity stunt on the highway and after it was requested by the National Highway Traffic Safety Administration. In announcing the software fix, the company said that no defect had been found. If two guys sitting on their couch turning off a speeding car’s engine from miles away doesn’t qualify, I’m not sure what counts as a defect in Chrysler’s world. And Chrysler is far from the only company compromised: from BMW to Tesla to General Motors, many automotive brands have been hacked, with surely more to come.
Dramatic hacks attract the most attention, but the software errors that allow them to occur are ubiquitous. While complex breaches can take real effort — the Jeep hacker duo spent two years researching — simple errors in the code can also cause significant failure. Adding software with millions of lines of code to everyday objects greatly multiplies the opportunities for such errors.
The Internet of Things is also a privacy nightmare. Databases that already have too much information about us will now be bursting with data on the places we’ve driven, the food we’ve purchased and more. Last week, at Def Con, the annual information security conference, researchers set up an Internet of Things village to show how they could hack everyday objects like baby monitors, thermostats and security cameras.
Connecting everyday objects introduces new risks if done at mass scale. Take that smart refrigerator. If a single fridge malfunctions, it’s a hassle. However, if the fridge’s computer is connected to its motor, a software bug or hack could “brick” millions of them all at once — turning them into plastic pantries with heavy doors.
Cars — two-ton metal objects designed to hurtle down highways — are already bracingly dangerous. The modern automobile is run by dozens of computers that most manufacturers connect using a system that is old and known to be insecure. Yet automakers often use that flimsy system to connect all of the car’s parts. That means once a hacker is in, she’s in everywhere — engine, steering, transmission and brakes, not just the entertainment system.
For years, security researchers have been warning about the dangers of coupling so many systems in cars. Alarmed researchers have published academic papers, hacked cars as demonstrations, and begged the industry to step up. So far, the industry response has been to nod politely and fix exposed flaws without fundamentally changing the way they operate.
In 1965, Ralph Nader published “Unsafe at Any Speed,” documenting car manufacturers’ resistance to spending money on safety features like seatbelts. After public debate and finally some legislation, manufacturers were forced to incorporate safety technologies.
No company wants to be the first to bear the costs of updating the insecure computer systems that run most cars. We need federal safety regulations to push automakers to move, as a whole industry. Last month, a bill with privacy and cybersecurity standards for cars was introduced in the Senate. That’s good, but it’s only a start. We need a new understanding of car safety, and of the safety of any object running software or connecting to the Internet.
It may be hard to fix security on the digital Internet, but the Internet of Things should not be built on this faulty foundation. Responding to digital threats by patching only exposed vulnerabilities is giving just aspirin to a very ill patient.
It isn’t hopeless. We can make programs more reliable and databases more secure. Critical functions on Internet-connected objects should be isolated and external audits mandated to catch problems early. But this will require an initial investment to forestall future problems — the exact opposite of the current corporate impulse. It also may be that not everything needs to be networked, and that the trade-off in vulnerability isn’t worth it. Maybe cars are unsafe at any I.P.
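A footnote on the “old and known to be insecure” system the article says connects a car’s parts: that is the CAN bus, and its central weakness is that frames carry no sender authentication, so any compromised node, such as an internet-facing entertainment unit, can speak with the authority of the brake controller. A sketch using the python-can library; the arbitration ID and payload here are made up, since real ones vary by manufacturer:

```python
import can

# Once an attacker owns any node on the bus, sending a frame "from" a
# safety-critical controller is this easy -- CAN has no sender authentication.
bus = can.interface.Bus(channel="can0", bustype="socketcan")

spoofed = can.Message(
    arbitration_id=0x123,        # made-up ID; real IDs vary per manufacturer
    data=[0x00] * 8,             # made-up payload
    is_extended_id=False,
)
bus.send(spoofed)                # every ECU on the bus takes this at face value
```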
6. We conclude by re-examining one of the most important analytical articles in a long time, David Golumbia’s article in Uncomputing.org about technocrats and their fundamentally undemocratic outlook.
“Tor, Technocracy, Democracy” by David Golumbia; Uncomputing.org; 4/23/2015.
” . . . . Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (“civic,” “ethical,” “white” and “black” hat alike), hacktivists, WikiLeaks fans [and Julian Assange et al–D. E.], Anonymous “members,” even Edward Snowden himself walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands. . . First, [Tor co-creator] Dingledine claimed that Tor must be supported because it follows directly from a fundamental “right to privacy.” Yet when pressed—and not that hard—he admits that what he means by “right to privacy” is not what any human rights body or “particular legal regime” has meant by it. Instead of talking about how human rights are protected, he asserts that human rights are natural rights and that these natural rights create natural law that is properly enforced by entities above and outside of democratic polities. Where the UN’s Universal Declaration on Human Rights of 1948 is very clear that states and bodies like the UN to which states belong are the exclusive guarantors of human rights, whatever the origin of those rights, Dingledine asserts that a small group of software developers can assign to themselves that role, and that members of democratic polities have no choice but to accept them having that role. . . Further, it is hard not to notice that the appeal to natural rights is today most often associated with the political right, for a variety of reasons (ur-neocon Leo Strauss was one of the most prominent 20th century proponents of these views). We aren’t supposed to endorse Tor because we endorse the right: it’s supposed to be above the left/right distinction. But it isn’t. . . .”
It looks like the Ashley Madison hack may have just claimed its first two lives:
Note that this is the Toronto police department reporting these two apparent suicides, so these two suicides are presumably just in the Toronto area. And with up to 37 million users compromised, it raises the question not only of just how high the final body count from this hack will be in the long run, but also of how high it already is from suicides that haven’t yet been associated with the hack.
It’s a grim reminder that, as more and more personal data becomes vulnerable to exploits of this nature, the more torturous and potentially lethal even generic hacking becomes. For instance, what if 37 million Gmail accounts got hacked and their contents were just thrown up on the darkweb? A full email account hack could be just as damaging and humiliating as the Ashley Madison hack, if not far more so, because the potential range of personal information is just on a different scale, and nearly everyone these days has an email account with one of the major email services out there. Plus, unlike the Ashley Madison hack, which largely limits the damage to the individuals involved and their family members, a full email hack could end up exposing very sensitive data not just for the email account owner but for everyone they communicated with! It raises a rather alarming question: given the connectivity of human societies, what percentage of the US population would be at least indirectly impacted if, say, 37 million Gmail accounts got hacked and thrown up online? How about the global populace? It seems like the impact could be pretty widely felt.
David Golumbia points us towards an article that does a great job of summarizing one of the key hopes and dreams held by crypto-anarchists/cyberlibertarians for what bitcoin might bring to the world: the collapse of government via mass tax evasion through the use of cryptocurrencies, and the replacement of government with a fee-for-service taxation system run by private service providers. But don’t assume that, by subverting the ability of governments to function, the cyberlibertarians expect we would suddenly live in a world free of regulation, because that’s not exactly what the author below has in mind: “The only choice of regulation we have in terms of cryptocurrency taxation is not to try and fit it inside some existing doctrine, but to abide by their laws of finance and information freedom. We must be the one’s to conform to the regulation, not have it conform to our conventional beliefs. Bitcoin is a system which will only be governed effectively through digital law, an approach which functions solely through a medium of technology itself. It will not bend to the whim of those who still hold conventional forms of law-making as relevant today”:
“Make no mistake, in a crypto-anarchist jurisdiction where there is no means to confiscate or control property on behalf of another individual, the need for the state will cease to exist.”
Yes, as we saw, in a “crypto-anarchist jurisdiction”, the need for the state will cease to exist, because the people that write the rules for the code that runs the predominant digital infrastructure in the crypto-anarchist jurisdiction’s economy will become the new state. At least that’s the dream. Freeeedooom!
Just FYI, if you’ve been avoiding creating a Facebook account under the assumption that this prevents Facebook from learning private details about you, there’s a lawsuit you might want to learn more about:
“Currently, there are no comprehensive federal regulations governing the commercial use of biometrics, the category of information technology that includes faceprints. And Bedoya said the government appears to be in no hurry to address the issue.”
No meaningful commercial facial recognition federal regulations. Huh. Imagine that.
Guess who’s bringing that fun “use your social network to infer your credit quality” model to the developing world as part of an emerging “Big Data, small credit” paradigm for finance. It’s not a particularly hard guess:
So, according to the Omidyar Network report:
“The financial services industry is on the brink of a new era, where harnessing the power of digital information to serve new segments is becoming the new normal”
Ok, so asking to see things like your social networking data is expected to become “the new normal” for the financial services industry. Well that’s pretty horrifying, but at least if the lenders are profiting from all that personal information hopefully that means there will be non-exorbitant interest rates and more lenient terms in case borrowers can’t pay back the loan, especially for the poor borrowers. Hopefully.
You know that classic scene in Office Space where the hyper-consistently cheerful phone operator is asked to stop being so hyper-consistent? Yeah, there are probably going to be a lot more conversations like that in the future, and those conversations are going to be a lot more futile:
Wow, so in addition to creating an even more depressing “Big Brother”-like workplace environment than already exists for many employees, Cognito’s product might even be able to detect depression and mental-health disorders. That’s, uh, convenient:
With such capabilities, you have to wonder which “other parts” of Cognito’s customers’ businesses will also start getting real-time audio surveillance feedback:
Hmmm...yeah, it doesn’t look like Cognito-like technology is going to be limited to call centers...
One of the grimly fascinating aspects of the emerging Big Data revolution in the workplace is that, as employers use more and more Big Data monitoring to increase worker productivity, not only might this lead to a decline in workers’ health, but the same Big Data approach might actually allow employers to track that decline. And now that companies are starting to experiment with hiring third-party Big Data service providers to track employee health and predict which workers might get sick, using methods that include buying information on employees from third-party data brokers and scanning health insurance claims, you don’t need a lot of Big Data to predict that this is going to be a trend:
“As employers more actively involve themselves in employee wellness, privacy experts worry that management could obtain workers’ health information, even if by accident, and use it to make workplace decisions.”
Yeah, that seems like one of the obvious risks here, especially when reducing healthcare costs is the primary purpose of the service. And especially when Walmart, a company known for holding food drives for its own employees because they were paid so little they were going hungry, is the company leading the way. And then there’s the fact that this is being done using third-party Big Data service providers to get around employee privacy restrictions. It seems like quite a recipe for making “accidents” a routine part of the workplace of the future: employers ‘accidentally’ finding out which employee is about to get an expensive illness, and then maybe ‘accidentally’ interpreting the rest of that employee’s Big Data in a manner that leads to a layoff. As the executive says at the end, “Prediction with no solution isn’t very valuable.” Well, there’s a pretty obvious solution regardless of the employee’s medical condition: fire them.
So it looks like we’re on the cusp of a grand new early warning system for coming health maladies: when you’re suddenly fired without warning but your company appears to be in decent health, you’re probably about to suffer an expensive health crisis. Good luck! Just remember to turn in your badge on the way out.
Just FYI, if AT&T is your cellphone provider, your local billboards might be about to get a lot more persuasive:
“Clear Channel Outdoor Americas, which has tens of thousands of billboards across the United States, will announce on Monday that it has partnered with several companies, including AT&T, to track people’s travel patterns and behaviors through their mobile phones.”
Note that AT&T asserts that users can opt out of this tracking feature on the AT&T website and that all of the data it provides to Clear Channel is aggregated and anonymized. Let’s hope that’s true. But regardless, there’s nothing stopping Clear Channel from partnering with other location data providers in the future, and given all the random apps out there that collect your location data even when you’re not running them, obtaining that data directly from app providers seems possible. So if you aren’t an AT&T wireless customer and you’d also like to opt out of any location-based advertising services, it’s probably a good time to make sure the location-sharing settings on your smartphone are turned off. That said, if you really don’t want your location tracked by apps and advertisers, you probably want to get rid of that smartphone:
“To paraphrase Adler, founder of the Adler Law Group, it is not so much that in today’s connected world there is a single, malevolent Big Brother watching you. It’s that there are dozens, perhaps hundreds, of “little brothers” eagerly watching you so they can sell you stuff more effectively. Collectively, they add up to an increasingly omniscient big brother.”
It’s definitely a “the whole is greater than the sum of its parts” situation when you’re talking about an array of “little brothers”. Especially when those “little brothers” are sharing information with each other. And it’s apparently possible that those “little brother” apps sitting on your smartphone just might be going the extra mile to gather your location, even when you tell them not to:
So, given how the groups that gather this data generally do it for the purpose of selling it to others, it seems like it should take just one of your “little brother” apps surreptitiously gathering that location data through unorthodox means before the rest of the data-collection/marketing industry starts getting access too. They’ll just buy it from the rogue app provider. At least it sounds like that’s possible.
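Mechanically, what the billboard partnerships are selling is aggregation: location pings bucketed by billboard zone and time window. A hypothetical sketch of what “aggregated and anonymized” often amounts to in practice (all identifiers invented):

```python
from collections import Counter

# Hypothetical pings: (hashed_device_id, billboard_zone, hour). "Aggregated
# and anonymized" frequently means counts like these -- though the same
# pipeline necessarily ingests per-device traces before it aggregates them.
pings = [("h1", "I-95_board_12", 8), ("h2", "I-95_board_12", 8),
         ("h1", "I-95_board_12", 9)]

audience = Counter((zone, hour) for _, zone, hour in pings)
print(audience[("I-95_board_12", 8)])   # -> 2 devices passed the board that hour
```

The counts themselves are innocuous; the worry is everything upstream of the `Counter`.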
As creepy as all that sounds, keep in mind that the future of personalized billboards can always get creepier:
“The upshot is that these mirrored 3D pixels, or “trixels” as the team calls them, could project hundreds of different images outward, as compared to the 3D movie technique, which only projects two, and requires that the viewer wear glasses. Walking around such a trixelated billboard, on the other hand, would make the image appear to be a highly resolved, three-dimensional object to the naked eye.”
That’s right, the next generation of billboards will involve highly resolved 3D objects. And when you combine location data, personal marketing data, and 3D hologram technology...“Goodbye, 3D glasses. Hello, ubiquitous, laser-generated images that jump out directly at you from every angle.” Yep, the world is about to become a personalized version of Disney’s Haunted Mansion, except instead of fun ghost holograms it will be crappy product holograms. That sounds both kind of cool (yay holograms!) and pretty creepy, which is still better than the ubiquitous location tracking, which is just plain creepy.
So that’s a glimpse of the near future of billboard advertising: creepily personalized holograms. The holograms themselves may or may not be creepy. But since they’ll probably be location-based personalized holograms, they’ll definitely be creepy.
Here’s an example of the public pushing back against the endless incursion of privacy-violating smartphone technology: The US Federal Trade Commission sent a warning letter to a dozen smartphone app developers who have been caught using the background-noise tracking software developed by SilverPush to secretly determine what TV shows you’re watching. The firms were told that if they don’t inform users that their apps are collecting TV background data, they may violate FTC rules barring unfair or deceptive acts or practices, which suggests that the FTC still isn’t quite sure whether secretly embedding SilverPush’s software in an app actually violates its rules. Maybe it does, but maybe not. At least that’s the strength of the language the FTC used in its warning.
So let’s hope the FTC was just choosing to be polite by not using stronger language, because if not, that would suggest this was actually less a public push back and more a polite public request to app developers that doubles as an admission that the FTC still isn’t quite sure if it’s illegal:
“If the app developers state or imply that their apps do not collect or transmit television viewing data when they actually do, that may be a violation of the section of the FTC Act barring deceptive and unfair business practices, the agency said.”
Part of what’s a little disconcerting about the FTC’s warning is that it’s specifically warning against app developers “stating or implying” that their apps don’t collect this kind of data. So...what if the developers don’t say anything at all? Does that fall under the “imply” category? Let’s hope so. Here’s the specific language:
“However, if your application enabled third parties to monitor television-viewing habits of U.S. consumers and your statements or user interface stated or implied otherwise, this could constitute a violation of the Federal Trade Commission Act.2 We would encourage you to disclose this fact to potential customers, empowering them to make an informed decision about what information to disclose in exchange for using your application.”
That’s, uh, sort of encouraging, although it’s not quite clear who should be encouraged.
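As for how SilverPush-style monitoring works under the hood: the TV audio carries a near-ultrasonic tone, and the app’s microphone listens for it. A minimal detection sketch; the 18 kHz beacon frequency and the threshold are assumptions for illustration:

```python
import numpy as np

SAMPLE_RATE = 44_100
BEACON_HZ = 18_000            # assumed beacon frequency, inaudible to most adults

def beacon_present(audio_frame, threshold=10.0):
    """Check one second of microphone audio for energy at the beacon frequency."""
    spectrum = np.abs(np.fft.rfft(audio_frame))
    freqs = np.fft.rfftfreq(len(audio_frame), d=1 / SAMPLE_RATE)
    beacon_bin = np.argmin(np.abs(freqs - BEACON_HZ))
    return spectrum[beacon_bin] > threshold * spectrum.mean()

# Demo: a faint 18 kHz tone buried in noise still stands out in the spectrum.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
frame = 0.05 * np.sin(2 * np.pi * BEACON_HZ * t) + np.random.normal(0, 0.1, SAMPLE_RATE)
print(beacon_present(frame))   # -> True
```

Nothing about the trick requires user awareness, which is exactly the FTC’s disclosure concern.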
If you find yourself suddenly feeling stalked because the billboards in your town are tracking you personally, don’t worry, it’s not personal. They’re tracking everyone personally. So, actually, maybe some worry is in order:
““This is just the tipping point of the disruption in out-of-home,” said Helma Larkin, CEO of Posterscope, an out-of-home communications agency that designed the Malibu campaign with billboard company Lamar Advertising. “The technology coming down the pike is fascinating around what we could potentially do to bring digital concepts into the physical world.””
Well, yes, it is pretty fascinating. Of course there are other ways to describe this trend in bringing digital concepts to the physical world:
Won’t it be fun when this technology hits the fashion industry? “Hey, you in the frumpy dress. Here’s a wardrobe (that won’t make the billboards yell at you in public).” Now that’s going to be customer service!
Also keep in mind that the billboard system described above is supposedly blocking all personally identifying information, like license plates and windshield shots, that could be used to identify the actual occupants of a car and deliver even more personalized ads in public spaces:
So let’s hope that’s actually the case and this firm really is systematically preventing itself from collecting any personally identifying data. That would be nice, and who knows if it’s actually true. But even if it is the case, it’s probably just a matter of time before it no longer is. It’s also worth keeping in mind that even if these new companies aren’t scanning your actual license plate and collecting a database of your vehicle’s movements, plenty of other companies already are:
“Eventually, police and repo men might not be the only customers buying LPR data. MVTrac recently completed a beta test that tracked Acuras at specific areas and times, logging info including the exact models and colors. That information, far more real-time than state-registration data, could be gold to automakers, marketers, and insurance companies.”
So we have billboard companies scanning cars for targeted ads, but apparently not scanning the license plates and car occupants that could make those ads much more targeted. And we also have a vast and growing industry of companies scanning license plates for the expressed purpose of identifying who owns those vehicles and building a database that could be sold to who knows who. Huh.
So will the billboard companies eventually buy the license plate data from the LPR industry, or will they just collect the data from their own billboard cameras, join the industry, and add even more information to this growing private commercial surveillance sector? Both seem like obvious options, and they aren’t mutually exclusive. It’s a reminder that, in our outdoor commercial surveillance-state future, when you see a personalized ad, you aren’t just experiencing a possible privacy violation. You’re also helping make your future privacy violations more personalized. But since this is happening to everyone, at least you won’t have to take it personally. It could be worse! Silver linings aren’t the best in the Panopticon.
The Wall Street Journal has a recent article in which three different experts are asked about employers’ growing interest in utilizing data gathered from employee “wearables” and other types of Big Data. Not surprisingly, the opinions from the three experts range from ‘this is a scary trend with major potential for privacy invasion’ from John M. Simpson, director of the Privacy Project at the nonprofit advocacy group Consumer Watchdog, to ‘this is potentially scary but potentially useful too’ from Edward McNicholas, co-leader of privacy, data security and information law at law firm Sidley Austin LLP, all the way to ‘you won’t be able to compete in the job market unless you agree to generate and hand over this data because you won’t be productive enough without it’ from Chris Brauer, director of innovation and senior lecturer at the Institute for Management Science at Goldsmiths, University of London.
It’s an expected spectrum of opinions for a topic like this, but it’s also worth keeping in mind that these opinions aren’t mutually exclusive: Big Data from employee wearable tech could, of course, have some legitimate uses. It could also lead to a horribly abusive, invasive, and coercive nightmare situation for employees. But that nightmare potential is no reason to believe that employees won’t effectively be forced to submit to pervasive wearable surveillance that includes their activity outside of work, like hours of sleep.
So get worried about Big Data rewriting the employer/employee contract to include pervasive surveillance at and away from the office. And since ours is a civilization which often does that which you should be deeply worried about, get ready too:
“We are also going to see lots of examples of individual employees developing biometric curricula vitae that indicate their productivity and performance under certain conditions, and they can use this to lobby employers or apply for jobs. So if the job requires high performance under stressful conditions, you can demonstrate in your data how you have performed under stressful conditions in the past. This primary data can potentially be a very reliable predictor of future performance.”
Yes, it’s time to start collecting that data for your biometric CV. And while this might not be the best thing to add to your new biometric CV, if you happen to be wearing a Fitbit heart rate tracker while reading this article and your heart rate didn’t spike, that is sort of a useful piece of data. Maybe you could read all sorts of articles about the emerging Orwellian employer surveillance state and show a nice, steady heart rate that doesn’t indicate any distress. Future employers would probably love seeing something like that on your biometric CV. No cheating.
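And if you’re wondering what a “biometric CV” entry would even look like, it probably wouldn’t be much more than summary statistics computed over a stream of sensor readings. A minimal sketch, with a made-up data layout and made-up numbers (no wearable vendor’s actual export format is assumed):

```python
# Hypothetical "biometric CV" line item: reduce a heart-rate stream to the
# kind of steady-under-stress summary an employer might want to see.
from statistics import mean, pstdev

# One reading per minute, in beats per minute, while reading scary
# surveillance articles (made-up numbers).
heart_rate_bpm = [62, 63, 61, 64, 62, 63, 65, 62, 61, 63]

summary = {
    "mean_bpm": round(mean(heart_rate_bpm), 1),               # 62.6
    "stdev_bpm": round(pstdev(heart_rate_bpm), 1),            # 1.2 -- nice and calm
    "range_bpm": max(heart_rate_bpm) - min(heart_rate_bpm),   # 4
}
print(summary)
```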
Check out the fun ‘bug’ in the new smash hit Pokemon Go app that’s already been downloaded by millions of people since its recent release. It sounds like the company, Niantic, a Google spinoff, has already fixed the bug. But as is apparent from the fact that they had to fix it, this is a ‘bug’ that all sorts of app developers can presumably utilize: if you signed into the app using your Google Account on an iOS device, it’s possible that Niantic could get complete access to ALL your Google Account information, including your emails:
“Now, none of these privacy provisions are of themselves unique. Location-based apps from Foursquare to Tinder can and do similar things. But Pokémon Go’s incredibly granular, block-by-block map data, combined with its surging popularity, may soon make it one of, if not the most, detailed location-based social graphs ever compiled.”
Wow. ‘Accidentally’ gaining full access to your Google account and all your emails is a thing smartphone app makers do these days. And while it’s likely Niantic really did make that bug fix (a Google spinoff probably doesn’t need access to your emails), it seems like this has got to be a wildly popular ‘bug’ for app makers. They can gain full access to your Google account simply by adding a “Sign in with Google” option.
Keep in mind that there’s currently some confusion as to what exactly giving “full account access” to an app entails, and it’s possible that it wouldn’t give access to things like emails. But even if it’s not your email content but instead almost all the other content in your Google account, that’s still potentially an immense amount of personal content. And now that Pokemon Go has made sure the world is aware of these kinds of security issues, we can be pretty sure there are going to be a lot more apps offering a nice, convenient Google account log-in option in the future.
So, yeah, you might want to double check those third-party app permissions.
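For reference on what those permissions mean on the developer’s side: in the OAuth 2.0 sign-in flow, the app declares, right in the authorization URL, exactly which scopes it wants, and a well-behaved “Sign in with Google” needs only identity scopes. A sketch of what a minimally scoped request looks like (the client ID and redirect URI are placeholders; the endpoint and scope names come from Google’s published OAuth 2.0 documentation):

```python
# Build a minimally scoped Google sign-in authorization URL.
from urllib.parse import urlencode

params = {
    "client_id": "YOUR_CLIENT_ID.apps.googleusercontent.com",  # placeholder
    "redirect_uri": "https://yourapp.example/oauth2callback",  # placeholder
    "response_type": "code",
    # Identity only: no mail, contacts, or drive. An app that merely signs
    # you in has no business requesting anything broader than this.
    "scope": "openid email profile",
}
print("https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params))
```

If an app’s consent screen, or the connected-apps page of your Google account, instead shows something like “full account access,” that’s the red flag.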
Just FYI, if you’re an employee in the US and your employer is offering free FitBits or some other ‘wearable’ technology that streams basic health data like heart rate or steps taken each day as part of some new employee fitness plan, you might want to make sure the plan is associated with your employer’s health insurance plan, which means the data collected would at least have federal HIPAA protection. Because if that fancy free FitBit doesn’t have HIPAA protection, that heart rate data is going to be telling who knows who a lot more about you than just your heart rate, and what it’s telling those unknown third parties might not be remotely accurate:
“Not all wellness program data can be legally funneled to employers or third parties. It depends on whether the wellness program is inside a company insurance plan—meaning that it would be protected by HIPAA—or outside a company insurance plan and administered by a third-party vendor. If it’s administered by a third party, your data could be passed on to other companies. At that point, the data is protected only by the privacy policies of those third-party vendors, “meaning they can essentially do what they like with it,” De Mooy says.”
Yep, if you hand seemingly innocuous personal health data like a heart rate over to a non-HIPAA-protected entity, random third parties get to infer all sorts of fun things about you, like whether or not you’re suffering from some sort of sexual dysfunction or your propensity for violence. And whether those inferences are based on solid science or the latest pop theory is totally up to them. How fun.
So check those HIPAA agreements before you slap that free FitBit on your wrist. And if you really don’t like the idea of handing over personal health data like your heart rate to the world, you might need to avoid all Wi-Fi networks too:
That’s right, companies are potentially going to have the ability to just randomly scan your heart rate and breathing information with a Wi-Fi signal. Like when you walk past their billboards. Won’t that also be fun.
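The underlying technique is less exotic than it sounds: chest motion from breathing (and, more faintly, the heartbeat) periodically modulates a reflected radio signal, and the rate falls out of a Fourier transform. A toy illustration, with synthetic data standing in for real channel measurements, which we obviously don’t have:

```python
# Toy version of RF vital-sign sensing: recover a breathing rate from a
# noisy periodic signal. The "signal" below is synthetic; real systems use
# Wi-Fi channel state information or radar returns.
import numpy as np

fs = 20.0                                    # samples per second
t = np.arange(0, 60, 1 / fs)                 # one minute of measurements
breaths_per_min = 15
signal = np.sin(2 * np.pi * (breaths_per_min / 60) * t)
signal += 0.5 * np.random.randn(t.size)      # measurement noise

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
band = (freqs > 0.1) & (freqs < 0.5)         # plausible breathing: 6-30/min
peak = freqs[band][np.argmax(spectrum[band])]
print(f"estimated breathing rate: {peak * 60:.0f} breaths/min")  # ~15
```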
And while it’s just breathing and heart rate information via Wi-Fi for now, just imagine what other personal health information could be detected remotely with a much broader range of sensors. For instance, imagine if Google set up free ‘Wi-Fi kiosks’ all over the place that not only provided Wi-Fi services but had other types of sensors that detected things like air pollution or other chemicals, along with UV and infrared cameras. If you’re having a hard time imagining that, this should give you a better idea:
“In addition to monitoring environmental factors like humidity and temperature, a bank of air pollutant sensors will also monitor particulates, ozone, carbon monoxide and other harmful chemicals in the air. Two other sensor banks will measure “Natural and Manmade Behavior” by tracking street vibrations, sound levels, magnetic fields and entire spectrums of visible, UV and infrared light. Finally, the “City Activity” sensors will not only be able to measure pedestrian traffic, it will also look for security threats like abandoned packages. While free gigabit WiFi on the streets sounds like a win for everyone’s data plan, it also comes at a cost: the kiosks will also be able to track wireless devices as they pass by, although it will most likely be anonymized.”
Wi-Fi sidewalk kiosks with a battery of sensors and large screens designed to grab your attention and draw you closer (and then hopefully not detect your wireless devices and identify you). Might the “Natural and Manmade Behavior” detected by these kiosks include things like heart rate? And what other types of health information can be detected with sensors designed to pick up a broad range of sounds along with entire spectrums of visible, UV and infrared light? We’ll find out someday...presumably after all this data is collected.
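And about that “most likely anonymized” device tracking: the kiosk operators haven’t published a scheme, but a standard industry move is to store a hash of each passing phone’s Wi-Fi MAC address. Assuming that’s the approach (our assumption, to be clear), here’s why this flavor of “anonymization” earns the scare quotes: the first three bytes of a MAC are a public manufacturer prefix, which leaves only 2^24 possibilities for the rest, a space a laptop can hash through in minutes:

```python
# Why hashing a MAC address is weak anonymization. We are assuming the
# hashed-MAC scheme here; the kiosk operators haven't published their method.
import hashlib

OUI = "ac:bc:32"  # a manufacturer prefix registered to Apple (public info)

def mac_hash(mac: str) -> str:
    return hashlib.sha256(mac.encode()).hexdigest()

stored = mac_hash("ac:bc:32:00:1a:2b")  # what the kiosk would retain

# Brute-force a slice of the 2**24 space; covering the full space takes a
# laptop minutes, not years.
for n in range(0x001A00, 0x001B00):
    candidate = f"{OUI}:{n >> 16:02x}:{(n >> 8) & 0xff:02x}:{n & 0xff:02x}"
    if mac_hash(candidate) == stored:
        print("recovered:", candidate)
        break
```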
So, all in all, it’s increasingly clear that if you don’t like the idea of helplessly having your personal health information collected and analyzed (including very dubiously analyzed) by all sorts of random third-party data predators, you might need to relocate. Away from civilization. Far away. Preferably a thick jungle where third-party kiosks with Wi-Fi and infrared scanning will at least have a limited reach given all the blocking foliage. Sure, there might be tigers and other predators to worry about in the jungle, but at least those are the kinds of predators you can potentially defend yourself against. The tiger might be able to eat your body, but not your dignity. Good luck!
Remember Tay, Microsoft’s AI chatbot that was turned into a neo-Nazi in under 24 hours because its creators inexplicably didn’t take into account the possibility that people would try to turn their chatbot into a neo-Nazi? Well, it appears Facebook just had its own Tay-ish experience. Although instead of a bunch of trolls specifically setting out to turn some new publicly accessible Facebook AI into an extremist, Facebook instead removed the human curation component from their “trending news” feed following charges that Facebook was filtering out conservative news, and the endemic trolling already present in the right-wing mediasphere dumpster fire took it from there:
“Facebook announced late Friday that it had eliminated jobs in its trending module, the part of its news division where staff curated popular news for Facebook users. Over the weekend, the fully automated Facebook trending module pushed out a false story about Fox News host Megyn Kelly, a controversial piece about a comedian’s four-letter word attack on rightwing pundit Ann Coulter, and links to an article about a video of a man masturbating with a McDonald’s chicken sandwich.”
Well, at least the story about the chicken sandwich was potentially newsworthy. At least now we know not to click on any articles about McChicken sandwiches going forward.
So perhaps the lesson here is that algorithmically automated newsfeeds may not be credible sources of what we normally think of as “news”, but they are potentially useful summaries of all the garbage people are reading instead of actual news. At least with Facebook’s new algorithmically driven trending news feed we can all watch civilization’s collective descent into ignorance and madness in somewhat greater detail. That’s kind of a positive service.
Unfortunately, that’s not the kind of positive service we’re going to get. At least not yet. Why? Because it turns out Facebook didn’t actually eliminate the human curators. Instead, it fired its existing team of professional journalist curators and hired a new team of non-journalist humans. So this is less an issue of “oops, our new algorithm just got overwhelmed by all the toxic ‘news’ out there!” and more an issue of “oops, we fired all our journalist curators and quietly replaced them with non-journalist curators who are horrible at this job. How about we blame this on the algorithm”:
“The contractors’ perception that their jobs were secure, at least for the medium term, was reinforced when Facebook recently began testing a new trending news feature—a stripped-down version that replaced summaries of each topic with the number of Facebook users talking about it. This new version, two contractors believed, gave the human curators a greatly diminished role in story and source selection. Said one: “You’ll get Endingthefed.com as your news source (suggested by the algorithm), and you won’t be able to go out and say, ‘Oh, there’s a CNN source, or there’s a Fox News source, let’s use that instead.’ You just have a binary choice to approve it or not.””
Yes, as part of its long-held goal of fully automating news feeds so every single user can eventually get their own personalized feed, Facebook was already reducing the amount of human judgement involved in the curation process before it fired its team. And then it fired and replaced them:
As we can see, there is indeed a “Trending review team”. It’s still human. And even with the flexibility to select from a variety of news sources for a given topic now replaced by a binary choice, this team of humans still has the ability to filter out blatantly fake news. It’s just that the new team of humans apparently can’t actually identify the fake news.
All in all, it looks like Facebook basically modified its internal trending news algorithms while keeping in place a team of humans to make the final judgement call. Then Facebook made an announcement that made it sound like the trending news feed was now all algorithmically driven, when it had actually just replaced the previous team of journalists (who were complaining about their abusive working conditions just months ago) with a new team of non-journalists who were still tasked with making that final judgement call. And then everyone blamed the algorithm when this all blew up with bogus articles in the news feed.
So while Facebook may have trashed the utility of its trending news feed for now, this sad tale is worth noting for another reason: poor corporate judgement in rolling out poorly designed algorithms run by poorly prepared people, followed by blaming the algorithm when things go poorly, is a glimpse of a genre of story that could easily become a major category of trending news in the future as more and more human/algorithm ‘mistakes’ are created and blamed solely on the algorithm. The algorithm designers probably didn’t intend to do that, but it’s still kind of impressive in a sad way.
It looks like WikiLeaks’s quest to bring transparency to government and large corporations is getting extended. To everyone with a verified Twitter account:
“In a subsequent series of tweets on Friday, WikiLeaks Task Force — a verified Twitter account described in its bio as the “Official @WikiLeaks support account” — explained that it wanted to look at the “family/job/financial/housing relationships” of Twitter’s verified users, which includes a ton of journalists, politicians and activists.”
Yeah, that’s not creepy or anything.
Now, it’s worth noting that creating databases of random people on social media and trying to learn everything you can about them, like their relationships and influences, is nothing new for the government or the private sector (it’s what Palantir does). And there’s nothing stopping WikiLeaks or anyone else from doing the same. But in this case it appears that WikiLeaks is floating the idea of creating this database and then making it a searchable public tool. And it’s not at all clear that WikiLeaks would limit the data it collects on Twitter users to information gathered from Twitter itself. Since they’re talking about limiting it to “verified users” (Twitter accounts that have been strongly identified with a real person using their real name), they could include all sorts of third-party data from anywhere.
And if the above article’s speculation is correct, the motive for creating this data set (with fancy graphs, presumably) would basically be to discredit people through guilt-by-association:
Keep in mind that if WikiLeaks actually created this tool, it would probably have quite a bit of leeway over the kind of data that gets included in the system and which “relationships” or “influences” show up for a given individual. Also keep in mind that if this were done responsibly, a great deal of human judgement would have to go into whether a particular piece of data pointing towards a “relationship” or “influence” is accurate and honest. And it’s that kind of required flexibility that could give WikiLeaks a great deal of real power over how someone is presented.
So it appears that WikiLeaks wants to create publicly accessible dossiers on verified Twitter users. Presumably for the purpose of ‘making a point’ of some sort. Sort of like the old “TheyRule.net” web tool that showed graphs of the people serving on the boards of major corporations and made the incestuous nature of corporate leadership visually clear. But in this case it won’t be limited to big company CEOs. It’ll be everyone. At least everyone with a verified Twitter account, which just happens to include large numbers of journalists and activists. So, TheyRule.net, but with much more personal information on people who may or may not actually rule. Great.
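To be clear about how little code a tool like this takes once the underlying data exists: WikiLeaks hasn’t published any design, so the accounts and relationship labels below are made up, but the guilt-by-association query itself is a one-liner over a standard graph library:

```python
# Hypothetical "relationships/influences" graph. All accounts and edge
# labels are invented; this only shows the mechanics, not any real design.
import networkx as nx

G = nx.Graph()
G.add_edge("@journalist_a", "@think_tank_x", kind="employer")
G.add_edge("@journalist_a", "@activist_b",   kind="follows")
G.add_edge("@activist_b",   "@think_tank_x", kind="retweets")
G.add_edge("@politician_c", "@think_tank_x", kind="donor")

# The guilt-by-association query: who connects these two people?
shared = list(nx.common_neighbors(G, "@journalist_a", "@politician_c"))
print(shared)  # ['@think_tank_x'] -- and there's your insinuation
```

Note that nothing in the graph says whether any edge is accurate, honest, or meaningful. That judgement lives entirely with whoever curates the data.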
Here’s something worth noting while sifting through the 2016 election aftermath: Silicon Valley’s long rightward shift became official in 2016. At least if you look at the corporate PACs of tech giants like Microsoft, Google, Facebook, and Amazon. Sure, the employees tended to still favor donating to Democrats, although not as much as before (and not at all at Microsoft). But when it came to the corporate PACs Silicon Valley was seeing red:
“The tech elite, Kotkin writes, “far from deserting the Democratic Party, more likely will aim to take it over.””
And that warning is going to be something to keep in mind as this trend continues: the political red-shifting of Silicon Valley doesn’t mean Silicon Valley’s titans are going to abandon the Democratic Party and stop giving it money. It’s worse. They’re going to keep giving the Democrats money (although maybe not as much as they give the GOP) in the hopes of remaking the party in the GOP’s image. And the more powerful the tech sector becomes, the more money these giant corporations will have for that kind of political ‘persuasion’.
And in other news, a new Oxfam study found that just eight individuals — including tech titans Bill Gates, Jeff Bezos, Mark Zuckerberg, and Larry Ellison — own as much wealth as the poorest half of the global population. So, you know, wealth inequality probably isn’t a super big priority for their super PACs.
With the GOP and Trump White House scrambling to find some sort of legislative victory in the wake of last week’s failed Obamacare repeal bill that almost everybody hated, it’s worth noting that the GOP-controlled House and Senate may have just put in motion a major regulatory change that could be even more hated than Trumpcare: making it legal for your ISP to sell your browsing habits, location, online shopping habits, and anything else it can extract from your online activity:
“The bill not only gives cable companies and wireless providers free rein to do what they like with your browsing history, shopping habits, your location and other information gleaned from your online activity, but it would also prevent the Federal Communications Commission from ever again establishing similar consumer privacy protections.”
The GOP is so intent on guaranteeing the right of ISPs to sell anything they can learn about you that the House bill would prevent the FCC from ever again establishing the kind of privacy protections the bill repeals. At least, presumably, unless a new law is passed to re-empower the FCC. Which means that if this becomes law (and all indications are Trump will sign it into law), it’s probably going to take a Democratic-controlled House, Senate, and White House to reverse it. Yes, following the GOP’s epic Trumpcare fail, it’s about to rebrand itself as the “ISPs will stop spying on you over our dead body!” party.
And that’s all on top of Trump’s FCC voting to prevent these same ISPs from having to take “reasonable measures” to protect the few categories of information they’re collecting on you that they wouldn’t be selling: your Social Security number and credit card info:
So with ISPs set to compete with the existing data-broker giants like Facebook and Google and create a giant national fire sale of personal digital information, it’s probably a good time to consider whether or not you’re at risk of identity theft. And here’s a nice quick way to figure that out: Do you use the internet in the US? If the answer is “yes”, you’re probably at risk of identity theft:
“The more personal information that’s out there, Shackelford says, the easier it is for you to become a victim of identity theft.”
If all the personal information about you that’s already out there and readily available for anyone to purchase in the giant data-brokerage industry (or just browse) hasn’t yet made you vulnerable enough to identity theft, might adding the ISPs’ treasure trove of personal information tip the scales? You’ll find out.
So what can you do? Well, as the article below notes, there are some things individuals can do to protect their online data from their ISPs, like using a Virtual Private Network or privacy tools like Tor. But as the article also notes, even if you use every trick out there to protect your online privacy, all of that pales in comparison to having an actual law protecting you:
“And none of these solutions—or all of them together, for that matter—are as good as having rules on the books that just prohibit internet providers from spying on their customers and selling their private information without permission. We have those rules today, but tomorrow they could be gone.”
Yep, we could all jump through elaborate technical hoops in an endless privacy-tools arms race to protect our online privacy from the ISPs’ bottom lines. Or we could, you know, make it illegal and then enforce that law.
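For the technically inclined, here’s what the Tor version of those hoops looks like in practice: the ISP sees only an encrypted connection to a Tor entry node, not the sites you visit. This sketch assumes a Tor client is running locally on its default SOCKS port (9050) and that the requests library was installed with SOCKS support (pip install requests[socks]):

```python
# Route a request through a local Tor client so the ISP can't see the
# destination site. Assumes Tor is listening on 127.0.0.1:9050.
import requests

tor_proxies = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h: DNS lookups go via Tor too
    "https": "socks5h://127.0.0.1:9050",
}
resp = requests.get("https://check.torproject.org/api/ip",
                    proxies=tor_proxies, timeout=30)
print(resp.json())  # {"IsTor": true, ...} if the circuit worked
```

It works, but every reader of this paragraph now has to maintain that setup forever, which is rather the point of the article quoted above.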
Of course, even if this latest gift from the GOP pulls a ‘Trumpcare’ and goes down in flames at the last minute, it’s not like there isn’t some validity to the argument that the ISPs merely want to do what online giants like Google and Facebook have been doing for years (and doing to you whether or not you’re actually visiting their sites or some random site carrying their trackers). So it’s going to be important to keep in mind that part of the solution to the threat of ISP data-brokering is regulating the hell out of all rapacious data-brokers in general. Online and offline. That should even things out.
Or we could just wait for the industry to come up with its own privacy ‘solutions’.