With last week’s Snowden leak revealing that the NSA can break much of the encryption used across the web via a variety of backdoors and secret agreements with manufacturers, there’s now a push in Congress for legal restrictions on the use of these backdoors:
The New York Times
Legislation Seeks to Bar N.S.A. Tactic in Encryption
By SCOTT SHANE and NICOLE PERLROTH
Published: September 6, 2013

After disclosures about the National Security Agency’s stealth campaign to counter Internet privacy protections, a congressman has proposed legislation that would prohibit the agency from installing “back doors” into encryption, the electronic scrambling that protects e‑mail, online transactions and other communications.
Representative Rush D. Holt, a New Jersey Democrat who is also a physicist, said Friday that he believed the N.S.A. was overreaching and could hurt American interests, including the reputations of American companies whose products the agency may have altered or influenced.
“We pay them to spy,” Mr. Holt said. “But if in the process they degrade the security of the encryption we all use, it’s a net national disservice.”
Mr. Holt, whose Surveillance State Repeal Act would eliminate much of the escalation in the government’s spying powers undertaken after the 2001 terrorist attacks, was responding to news reports about N.S.A. documents showing that the agency has spent billions of dollars over the last decade in an effort to defeat or bypass encryption. The reports, by The New York Times, ProPublica and The Guardian, were posted online on Thursday.
The agency has encouraged or coerced companies to install back doors in encryption software and hardware, worked to weaken international standards for encryption and employed custom-built supercomputers to break codes or find mathematical vulnerabilities to exploit, according to the documents, disclosed by Edward J. Snowden, the former N.S.A. contractor.
The documents show that N.S.A. cryptographers have made major progress in breaking the encryption in common use for everyday transactions on the Web, like Secure Sockets Layer, or SSL, as well as the virtual private networks, or VPNs, that many businesses use for confidential communications among employees.
Intelligence officials say that many of their most important targets, including terrorist groups, use the same Webmail and other Internet services that many Americans use, so it is crucial to be able to penetrate the encryption that protects them. In an intense competition with other sophisticated cyberespionage services, including those of China and Russia, the N.S.A. cannot rule large parts of the Internet off limits, the officials argue.
A statement from the director of national intelligence, James R. Clapper Jr., criticized the reports, saying that it was “not news” that the N.S.A. works to break encryption, and that the articles would damage American intelligence collection.
The reports, the statement said, “reveal specific and classified details about how we conduct this critical intelligence activity.”
“Anything that yesterday’s disclosures add to the ongoing public debate,” it continued, “is outweighed by the road map they give to our adversaries about the specific techniques we are using to try to intercept their communications in our attempts to keep America and our allies safe and to provide our leaders with the information they need to make difficult and critical national security decisions.”
But if intelligence officials felt a sense of betrayal by the disclosures, Internet security experts felt a similar letdown — at the N.S.A. actions.
“There’s widespread disappointment,” said Dan Kaminsky, a prominent security researcher. “This has been the stuff of wild-eyed accusations for years. A lot of people are heartbroken to find out it’s not just wild-eyed accusations.”
Sascha Meinrath, the director of the Open Technology Institute, a research group in Washington, said the reports were “a startling indication that the U.S. has been a remarkably irresponsible steward of the Internet,” which he said the N.S.A. was trying to turn into “a massive platform for detailed, intrusive and unrestrained surveillance.”
Companies like Google and Facebook have been moving to new systems that, in principle, would make government eavesdropping more difficult. Google is in the process of encrypting all data that travels via fiber-optic lines between its data centers. The company speeded up the process in June after the initial N.S.A. disclosures, according to two people who were briefed on Google’s plans but were not authorized to speak publicly about them. The acceleration of the process was first reported Friday by The Washington Post.
For services like Gmail, once data reaches a user’s computer it has been encrypted. But as messages and other data like search queries travel internally among Google’s data centers they are not encrypted, largely because it is technically complicated and expensive to do.
Facebook announced last month that it would also transition to a novel encryption method, called perfect forward secrecy, that makes eavesdropping far more difficult.
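The “perfect forward secrecy” mentioned above is worth unpacking: each session negotiates fresh, throwaway keys, so even if a long-term private key is later stolen, recorded past traffic stays unreadable. Here is a toy ephemeral Diffie-Hellman sketch in Python. The parameters are deliberately tiny and illustrative only; real deployments use vetted large groups or elliptic curves, and this is not Facebook’s actual implementation.

```python
import random

# Toy ephemeral Diffie-Hellman (illustrative parameters only).
P = 2_147_483_647  # a small prime standing in for a real DH modulus
G = 7              # assumed generator, chosen for illustration

def new_session():
    # Each session generates fresh ephemeral secrets, so a later key
    # compromise cannot decrypt previously recorded traffic.
    a = random.randrange(2, P - 1)   # client ephemeral secret
    b = random.randrange(2, P - 1)   # server ephemeral secret
    A = pow(G, a, P)                 # client's public value
    B = pow(G, b, P)                 # server's public value
    shared_client = pow(B, a, P)     # both sides derive the same key...
    shared_server = pow(A, b, P)
    assert shared_client == shared_server
    return shared_client             # ...then discard a and b after use

# Two sessions yield independent keys; breaking one session's key
# tells an eavesdropper nothing about any other session.
k1, k2 = new_session(), new_session()
print(k1 != k2)
```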
...
But the perception of an N.S.A. intrusion into the networks of major Internet companies, whether surreptitious or with the companies’ cooperation, could hurt business, especially in international markets.
“What buyer is going to purchase a product that has been deliberately made less secure?” asked Mr. Holt, the congressman. “Even if N.S.A. does it with the purest motive, it can ruin the reputations of billion-dollar companies.”
In addition, news that the N.S.A. is inserting vulnerabilities into widely used technologies could put American lawmakers and technology companies in a bind with regard to China.
Over the last two years, American lawmakers have accused two of China’s largest telecommunications companies, Huawei Technologies and ZTE, of doing something parallel to what the N.S.A. has done: planting back doors into their equipment to allow for eavesdropping by the Chinese government and military.
Both companies have denied collaborating with the Chinese government, but the allegations have eliminated the companies’ hopes for significant business growth in the United States. After an investigation last year, the House Intelligence Committee concluded that government agencies should be barred from doing business with Huawei and ZTE, and that American companies should avoid buying their equipment.
Some foreign governments and companies have also said that they would not rely on the Chinese companies’ equipment out of security concerns. Last year, Australia barred Huawei from bidding on contracts in Australia’s $38 billion national broadband network. And this year, as part of its effort to acquire Sprint Nextel, SoftBank of Japan pledged that it would not use Huawei equipment in Sprint’s cellphone network.
Part of what makes a backdoor-decryption ban so intriguing is that the nature of the encryption techniques employed today is such that, without a backdoor or some other algorithmic “cheat”, it’s theoretically really, really hard for even an intelligence agency with the capabilities of the NSA to break the encryption. It’s one of those realities of the digital age that German security officials reminded us of in 2007, when policy experts requested a backdoor into users’ computers to get around Skype’s encryption:
TechDirt
German Proposal Gives A New Perspective On ‘Spyware’
from the big-brother-is-hacking-you dept
by Timothy Lee
Tue, Nov 27th 2007 5:10pm
A VoIP expert has unveiled new proof-of-concept software that allows an attacker to monitor other peoples’ VoIP calls and record them for later review. Unencrypted VoIP really isn’t very secure; if you have access to the raw network traffic of a call, it’s not too hard to reconstruct the audio.

Encrypted traffic is another story. German officials have discovered that when suspects use Skype’s encryption feature, they aren’t able to decode calls even if they have a court order authorizing them to do so. Some law enforcement officials in Germany apparently want to deal with this problem by having courts give them permission to surreptitiously install spying software on the target’s computer.

To his credit, Joerg Ziercke, president of Germany’s Federal Police Office, says that he’s not asking Skype to put back doors in its software. But the proposal still raises some serious questions. Once the installation of spyware becomes a standard surveillance method, law enforcement will have a vested interest in making sure that operating systems and VoIP applications have vulnerabilities they can exploit. There will inevitably be pressure on Microsoft, Skype, and other software vendors to provide the police with backdoors. And backdoors are problematic because they can be extremely difficult to limit to authorized individuals. It would be a disaster if the backdoor to a popular program like Skype were discovered by unauthorized individuals.

A similar issue applies to anti-virus software. If anti-virus products detect and notify users when court-ordered spyware is found on a machine, it could obviously disrupt investigations and tip off suspects. On the other hand, if anti-virus software ignores “official” spyware, then spyware vendors will start trying to camouflage their software as government-installed software to avoid detection.
Ultimately, there may be no way for anti-spyware products to turn a blind eye to government-approved spyware without undermining the effectiveness of their products.
Hence, I’m skeptical of the idea of government-mandated spyware, although I don’t think it should be ruled out entirely. That may sound like grim news for law enforcement, which does have a legitimate need to eavesdrop on crime suspects. But it’s important to keep in mind that law enforcement officials do have other tools at their disposal. If they’re not able to install software surveillance tools, it’s always possible to do it the old-fashioned way–in hardware. Law enforcement agencies can always sneak into a suspect’s home (with a court order, of course) and install bugging devices. That tried and true method works regardless of the communications technology being used.
The battle over backdoors is an ongoing issue that isn’t going away any time soon. And as the above article indicated, one of the reasons that backdoors installed into hardware and software for use by law enforcement are guaranteed to remain an ongoing issue is that encryption done right can’t be cracked. At least not in a reasonable time frame. It’s a reflection of the asymmetric nature of the mathematics behind encryption: it’s a lot easier to hide a needle in a haystack than to find it. At least in theory:
Ars Technica
Crypto experts issue a call to arms to avert the cryptopocalypse
Nobody can crack important algorithms yet, but the world needs to prepare for that to happen.
by Peter Bright — Aug 1 2013, 10:49pm CST
At the Black Hat security conference in Las Vegas, a quartet of researchers, Alex Stamos, Tom Ritter, Thomas Ptacek, and Javed Samuel, implored everyone involved in cryptography, from software developers to certificate authorities to companies buying SSL certificates, to switch to newer algorithms and protocols, lest they wake up one day to find that all of their crypto infrastructure is rendered useless and insecure by mathematical advances.
We’ve written before about asymmetric encryption and its importance to secure communication. Asymmetric encryption algorithms have pairs of keys: one key can decrypt data encrypted with the other key, but cannot decrypt data encrypted with itself.
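To make that key-pair property concrete, here is a toy RSA key pair in Python using tiny textbook primes. This is illustration only, trivially breakable, and not how any real library generates keys; note that data scrambled with one exponent of the pair is only recoverable with the other.

```python
# Toy RSA key pair (tiny textbook primes; never use sizes like this).
p, q = 61, 53
n = p * q                     # public modulus: 3233
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent (2753); Python 3.8+

msg = 65
ciphertext = pow(msg, e, n)           # encrypt with one key of the pair...
assert pow(ciphertext, d, n) == msg   # ...only the other key decrypts it
assert pow(ciphertext, e, n) != msg   # the encrypting key cannot decrypt
print(ciphertext)  # → 2790
```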
The asymmetric algorithms are built on an underlying assumption that certain mathematical operations are “hard,” which is to say, that the time it takes to do the operation increases proportional to some number raised to the power of the length of the key (“exponential time”). This assumption, however, is not actually proven, and nobody knows for certain if it is true. The risk exists that the problems are actually “easy,” where “easy” means that there are algorithms that will run in a time proportional only to the key length raised to some constant power (“polynomial time”).
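A back-of-the-envelope sketch of the gap the article is describing, under the usual (unproven) assumption that breaking a key takes exponential work while using it takes only polynomial work; the specific exponents below are arbitrary illustrations, not measured costs:

```python
# Compare the growth of "easy" (polynomial) and "hard" (exponential)
# work as the key length n grows.
def polynomial_work(n, k=3):
    return n ** k        # e.g. the cost of legitimately using the key

def exponential_work(n, base=2):
    return base ** n     # e.g. the cost of brute-forcing the key

for n in (64, 128, 256):
    gap = exponential_work(n) // polynomial_work(n)
    print(f"n={n}: attacker/defender work ratio has {len(str(gap))} digits")
```

Doubling the key length roughly doubles the defender’s exponent but squares the attacker’s workload, which is why the “hard problem” assumption matters so much: if it fails, the entire gap collapses.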
The most widely used asymmetric algorithms (Diffie Hellman, RSA, and DSA) depend on the difficulty of two problems: integer factorization, and the discrete logarithm. The current state of the mathematical art is that there aren’t—yet—any easy, polynomial time solutions to these problems; however, after decades of relatively little progress in improving algorithms related to these problems, a flurry of activity in the past six months has produced faster algorithms for limited versions of the discrete logarithm problem.
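For readers who want to see the two problems concretely, here are naive exponential-time searches for each, sketched in Python. At real key sizes (numbers hundreds of digits long) these loops would outlive the universe, which is exactly the hardness the algorithms bank on.

```python
def factor(n):
    # Integer factorization by trial division: the work grows
    # exponentially with the number of digits in n.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n is prime

def discrete_log(g, h, p):
    # Find x with g^x ≡ h (mod p) by exhaustive search.
    value = 1
    for x in range(p):
        if value == h:
            return x
        value = (value * g) % p
    return None

print(factor(15))               # → (3, 5)
print(discrete_log(3, 13, 17))  # → 4, since 3^4 = 81 ≡ 13 (mod 17)
```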
At the moment, there’s no known way to generalize these improvements to make them useful to attack real cryptography, but the work is enough to make cryptographers nervous. They draw an analogy with the BEAST, CRIME, and BREACH attacks used to attack SSL. The theoretical underpinnings for these attacks are many years old, but for a long time were dismissed as merely theoretical and impossible to use in practice. It took new researchers and new thinking to turn them into practical attacks.
When that happened, it uncovered a software industry ill-prepared to cope. A lot of software, rather than allowing new algorithms and protocols to be easily plugged in, has proven difficult or impossible to change. This means that switching to schemes that are immune to the BEAST, CRIME, and BREACH attacks is much more difficult than it should be. Though there are newer protocols and different algorithms that avoid the problems that these attacks exploit, compatibility concerns mean that they can’t be rapidly rolled out and used.
The attacks against SSL are at least fairly narrow in scope and utility. A general purpose polynomial time algorithm for integer factorization or the discrete logarithm, however, would not be narrow in scope or utility: it would be readily adapted to blow wide open almost all SSL/TLS, ssh, PGP, and other encrypted communication. (The two mathematical problems, while distinct, share many similarities, so it’s likely that an algorithm that solved integer factorization could be adapted in some way to solve the discrete logarithm, and vice versa).
Worse, it would make updating these systems in a trustworthy manner nearly impossible: operating systems such as Windows and OS X depend on digital signatures that in turn depend on these same mathematical underpinnings to protect against the installation of fraudulent or malicious updates. If the algorithms were undermined, there would be no way of verifying the authenticity of the updates.
While there’s no guarantee that this catastrophe will occur—it’s even possible that one day it might be proven that the two problems really are hard—the risk is enough to have researchers concerned. The difficulties of change that BEAST et al. demonstrated mean that if the industry is to have a hope of surviving such a revolution in cryptography, it must start making changes now. If it waits for a genius mathematician somewhere to solve these problems, it will be too late to do anything about it.
Fortunately, a solution of sorts does exist. A family of encryption algorithms called elliptic curve cryptography (ECC) exists. ECC is similar to the other asymmetric algorithms, in that it’s based on a problem that’s assumed to be hard (in this case, the elliptic curve discrete logarithm). ECC, however, has the additional property that its hard problem is sufficiently different from integer factorization and the regular discrete logarithm that breakthroughs in either of those shouldn’t imply breakthroughs in cracking ECC.
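As a sketch of what “elliptic curve” means here, the following Python defines a toy curve over a tiny field and the point arithmetic whose reversal (recovering k from k·G) is the elliptic curve discrete logarithm problem. The curve and base point are hand-picked for illustration; real deployments use standardized curves over ~256-bit primes.

```python
# Toy elliptic curve y^2 = x^3 + 2x + 3 over GF(97), illustration only.
P_MOD, A, B = 97, 2, 3
O = None  # the point at infinity (group identity)

def ec_add(p1, p2):
    # Standard affine point addition; pow(x, -1, m) needs Python 3.8+.
    if p1 is O: return p2
    if p2 is O: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O                                   # p2 is the inverse of p1
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD  # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD         # chord slope
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, pt):
    # Double-and-add: computing k*G is fast, but recovering k from
    # (G, k*G) is the elliptic curve discrete logarithm problem.
    result = O
    while k:
        if k & 1:
            result = ec_add(result, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return result

G = (0, 10)  # on the curve: 10^2 = 100 ≡ 3 = 0^3 + 2*0 + 3 (mod 97)
print(ec_mul(5, G))  # → (88, 56)
```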
However, support for ECC is still very problematic. Much of the technology is patented by BlackBerry, and those patents are enforced. There are certain narrow licenses available for implementations of ECC that meet various US government criteria, but the broader patent issues have led some vendors to refuse to support the technology.
Further, support of protocols that can use ECC, such as TLS 1.2 (the latest iteration of SSL technology) is still not widely available. Certificate authorities have also been slow to offer ECC certificates.
As such, the researchers are calling for the computer industry as a whole to do two things. First, embrace ECC today. Second, ensure that systems that use cryptography are agile. They must not be lumbered with limited sets of algorithms and obsolete protocols. They must instead make updating algorithms and protocols quick and easy, to ensure that software systems can keep pace with the mathematical research and adapt quickly to new developments and techniques. The cryptopocalypse might never happen—but we should be prepared in case it does.
Note that the above article was published August 1st, a month before the latest Snowden leak about advances in NSA techniques that include not just backdoors but also advances in decryption algorithms. So the references to algorithmic risks in the above article (we don’t know how “hard” the underlying mathematical problems truly are) might relate to the recent advances in the NSA’s decryption algorithms. This could even include turning theoretically “hard” (non-polynomial-time) mathematical problems into somewhat less hard problems that can be cracked without the NSA’s backdoors (or anyone else’s). In other words, while the concerns about the NSA or some other allied intelligence agency abusing those encryption backdoors are valid, there’s also the very real possibility that other third parties (rival intelligence agencies, organized crime, private parties, etc.) are also using the new algorithmic hacks, where no backdoors are required. The algorithm is effectively defeated.

So even if those NSA backdoors (or anyone else’s backdoors) didn’t exist, there is still the possibility that the underlying mathematical algorithms currently used to encrypt the bulk of internet communications have already been effectively hacked. And if those algorithms have already been hacked (in the sense that code-breakers have found a method of finding the correct keys within a predictable timeframe), then it might just be a matter of time before that method gets out into “the wild” and anyone with the computing resources will be able to decrypt conventionally encrypted data. No backdoors or secret manufacturer agreements needed. Just a powerful enough computer and knowledge of the flaws in the encryption algorithm. That’s the “cryptopocalypse”.
But there’s another interesting possibility that could emerge in the medium term: right now it’s known that the NSA uses custom-built chips to break encryption, and it’s believed that these chips can decrypt any traffic on Tor (a network that is supposed to be anonymous) that doesn’t use the most advanced “elliptic curve cryptography” described above.
So we should probably expect to see a broad shift towards these newer kinds of encryption methods. And if that shift takes place without those NSA backdoors, we could start seeing truly secure encryption methods employed — methods that no spy agency, anywhere, will be able to decrypt. At least not unless there’s some super secret powerful computing technology hiding somewhere. If that encrypted future is what’s in store for us, we should probably expect a dramatic expansion of traditional spying: human intelligence will simply become much more important because there won’t be other options. Traditional hacking will also become paramount. When a backdoor closes, a job opportunity for a hacker opens.
But also note that the FinFisher tool is reportedly able to hack your BlackBerry, which uses “elliptic curve cryptography”. The same goes for the NSA and GCHQ. So whatever secure encryption method the world eventually settles upon will have to be more secure than the currently recommended secure methods. Give it time.
Beware Software Updates Bearing Gifts
If we do eventually see an encrypted future — one where direct hacking with the benefit of pervasive backdoors or algorithmic trickery is no longer an option — we should expect an explosion of Trojan spyware and custom hacks. Even with the pervasive backdoors and algorithmic trickery we should still expect an explosion of spyware, because that’s what’s already happening. So while the NSA hardware and software backdoor network is the spy scandal of the moment, perhaps the UK/German Bundestrojaner/FinFisher/FinSpy spyware scandals should be considered likelier spy-scandal templates for tomorrow:
Slate
U.S. and Other Western Nations Met With Germany Over Shady Computer-Surveillance Tactics
By Ryan Gallagher
Posted Tuesday, April 3, 2012, at 11:51 AM
Infecting a computer with spyware in order to secretly siphon data is a tactic most commonly associated with criminals. But explosive new revelations in Germany suggest international law enforcement agencies are adopting similar methods as a form of intrusive suspect surveillance, raising fresh civil liberties concerns.
Information released last month by the German government shows that between 2008 and 2011, representatives from the FBI; the U.K.’s Serious Organised Crime Agency (SOCA); and France’s secret service, the DCRI, were among those to have held meetings with German federal police about deploying “monitoring software” used to covertly infiltrate computers.
The disclosure was made in response to a series of questions tabled by Left Party Member of Parliament Andrej Hunko and reported by German-language media. It comes on the heels of an exposé by the Chaos Computer Club, a Berlin-based hacker collective, which revealed in October that German police forces had been using a so-called “Bundestrojaner” (federal Trojan) to spy on suspects.
The Bundestrojaner technology could be sent disguised as a legitimate software update and was capable of recording Skype calls, monitoring Internet use, and logging messenger chats and keystrokes. It could also activate computer hardware such as microphones or webcams and secretly take snapshots or record audio before sending it back to the authorities.
German federal authorities initially denied deploying any Bundestrojaner, but it soon transpired that courts had in fact approved requests from officials to employ such Trojan horse programs more than 50 times. Following a public outcry over the use of the technology, which many believe breached the country’s strict privacy laws, further details have surfaced.
Inquiries by Green Party MP Konstantin von Notz revealed in January that, in addition to the Bundestrojaner discovered by the CCC, German authorities had also acquired a license in early 2011 to test a similar Trojan technology called “FinSpy,” manufactured by England-based firm Gamma Group. FinSpy enables clandestine access to a targeted computer, and was reportedly used for five months by Hosni Mubarak’s Egyptian state security forces in 2010 to monitor personal Skype accounts and record voice and video conversations over the Internet.
But it is the German government’s response to a series of questions recently submitted by Hunko that is perhaps the most revealing to date. In a letter from Secretary of State Ole Schröder on March 6, which I have translated, Hunko was informed that German federal police force, the Bundeskriminalamt (BKA), met to discuss the use of monitoring software with counterparts from the U.S., Britain, Israel, Luxemburg, Liechtenstein, the Netherlands, Belgium, France, Switzerland, and Austria. The meetings took place separately between Feb. 19, 2008, and Feb. 1, 2012. While this story has been covered in the German media, it hasn’t received the English-language attention it deserves.
Both the FBI and Britain’s SOCA are said to have discussed with the Germans the “basic legal requirements” of using computer-monitoring software. The meeting with SOCA also covered the “technical and tactical aspects” of deploying computer infiltration technology, according to Schröder’s letter. France’s secret service and police from Switzerland, Austria, Luxemburg, and Liechtenstein were separately briefed by the BKA on its experiences using Trojan computer infiltration.
Interestingly, at a meeting in October 2010 attended by police from Germany, the Netherlands, and Belgium, representatives from the Gamma Group were present and apparently showcased their shadowy products. It is possible that the Germans decided at this meeting to proceed with the FinSpy trial we now know took place in early 2011.
If nothing else, these revelations confirm that police internationally are increasingly looking to deploy ethically contentious computer intrusion techniques that exist in a legal gray area. The combination of the rapid development of Internet technologies and persistent fears about national security seem to have led to a paradigm shift in police tactics—one that appears, worryingly, to be taking place almost entirely behind closed doors and under cover of state secrecy.
...
Your Passwords Can Be Stolen. So Can Your Spyware
The world continues to freak out about the NSA and the U.K. possessing the centralized mass-surveillance capabilities that come from the power to collect and decrypt massive volumes of internet traffic. Such a freak-out is understandable because, hey, centralized mass internet-traffic surveillance is kind of creepy. It’s also understandable that the global debate would be almost exclusively focused on spying by the NSA, because that’s been the focus of the Snowden leaks. But it might be worth incorporating into the ongoing global debate about the balance between privacy, security, and government accountability the fact that extremely powerful spyware is being peddled by major governments and is currently used by governments all over the globe. It might also be used by unknown parties all over the globe, because spyware can be stolen:
Bloomberg
FinFisher Spyware Reach Found on Five Continents: Report
By Vernon Silver — Aug 8, 2012 6:34 AM CT

The FinFisher spyware made by U.K.-based Gamma Group likely has previously undisclosed global reach, with computers on at least five continents showing signs of being command centers that run the intrusion tool, according to cybersecurity experts.
FinFisher can secretly monitor computers — intercepting Skype calls, turning on Web cameras and recording every keystroke. It is marketed by Gamma for law enforcement and government use.
Research published last month based on e‑mails obtained by Bloomberg News showed activists from the Persian Gulf kingdom of Bahrain were targeted by what looked like the software, sparking a hunt for further clues to the product’s deployment.
In new findings, a team, led by Claudio Guarnieri of Boston-based security risk-assessment company Rapid7, analyzed how the presumed FinFisher samples from Bahrain communicated with their command computer. They then compared those attributes with a global scan of computers on the Internet.
The survey has so far come up with what it reports as matches in Australia, the Czech Republic, Dubai, Ethiopia, Estonia, Indonesia, Latvia, Mongolia, Qatar and the U.S.
Guarnieri, a security researcher based in Amsterdam, said that the locations aren’t proof that the governments of any of these countries use Gamma’s FinFisher. It’s possible that Gamma clients use computers based in other nations to run their FinFisher systems, he said in an interview.
‘Active Fingerprinting’
“They are simply the results of an active fingerprinting of a unique behavior associated with what is believed to be the FinFisher infrastructure,” he wrote in his report, which Rapid7 is publishing today on its blog at https://community.rapid7.com/community/infosec/blog.
The emerging picture of the commercially available spyware’s reach shines a light on the growing, global marketplace for cyber weapons with potential consequences.
“Once any malware is used in the wild, it’s typically only a matter of time before it gets used for nefarious purposes,” Guarnieri wrote in his report. “It’s impossible to keep this kind of thing under control in the long term.”
In response to questions about Guarnieri’s findings, Gamma International GmbH managing director Martin J. Muench said a global scan by third parties would not reveal servers running the FinFisher product in question, which is called FinSpy.
“The core FinSpy servers are protected with firewalls,” he said in an Aug. 4 e‑mail.
Gamma International
Muench, who is based in Munich, has said his company didn’t sell FinFisher spyware to Bahrain. He said he’s investigating whether the samples used against Bahraini activists were stolen demonstration copies or were sold via a third party.
Gamma International GmbH in Germany is part of U.K.-based Gamma Group. The group also markets FinFisher through Andover, England-based Gamma International UK Ltd. Muench leads the FinFisher product portfolio.
Muench says that Gamma complies with the export regulations of the U.K., U.S. and Germany.
It was unclear which, if any, government agencies in the countries Guarnieri identified are Gamma clients.
A U.S. Federal Bureau of Investigation spokeswoman in Washington declined to comment.
Officials in Ethiopia’s communications ministry, Qatar’s foreign ministry and Mongolia’s president’s office didn’t immediately return phone calls seeking comment or respond to questions. Dubai’s deputy commander of police said he had no knowledge of such programs when reached on his mobile phone.
Australia’s department of foreign affairs and trade said in an e‑mailed statement it does not use FinFisher software. A spokesman at the Czech Republic’s interior ministry said he has no information of Gamma being used there, nor any knowledge of its use at other state institutions.
Violating Human Rights?
At Indonesia’s Ministry of Communications, head of public relations Gatot S. Dewa Broto said that to his knowledge the government doesn’t use that program, or ones that do similar things, because it would violate privacy and human rights in that country. The ministry got an offer to purchase a similar program about six months ago but declined, he said, unable to recall the name of the company pitching it.
The Estonian Information Systems Authority RIA has not detected any exposure to FinSpy, a spokeswoman said. Neither has Latvia’s information technologies security incident response institution, according to a technical expert there.
...
If the above description of the emerging global spyware-surveillance state sounds a little unsettling, keep in mind that FinFisher/FinSpy is just one toolkit. There could be all sorts of other spyware “products” out there.
Also don’t forget that the world is still learning about the FinFisher/FinSpy spyware’s capabilities: for instance, it appears that a “FinIntrusion” tool made by the same company can be used to collect WiFi signals. Part of the FinIntrusion suite includes decryption capabilities, so all that WiFi traffic can be picked up. It’s a reminder that, whether or not the centralized mass-surveillance state is on the wane, the global decentralized spyware party is still going strong:
ITNews.com
Further details of FinFisher govt spyware leaked
By Juha Saarinen on Sep 2, 2013 6:04 AM
Filed under Security
Claims it can break encryption.
Sales brochures and presentations leaked online have shed further light on the FinFisher malware and spyware toolkit that is thought to be used by law enforcement agencies worldwide.
FinFisher is made by the Anglo-German Gamma International and is marketed to law enforcement agencies around the world. It is also known as FinSpy, and the sales presentation traces its origins to BackTrack Linux, an open source penetration testing Linux distribution.
The spyware can record screenshots and Skype chats, operate built-in webcams and microphones on computers, and is able to capture a large range of user data.
Last year, an internet scan by a security company turned up FinFisher control nodes in eleven countries, including Australia. The malware has been analysed [pdf] by the Citizen Lab project, in which the University of Toronto, Munk School of Global Affairs and the Canada Centre for Global Studies participate.
In July this year, the Australian Federal Police turned down a Freedom of Information Act request from the director of the OpenAustralia Foundation, Henare Degan, about the use of FinFisher by the country’s top law enforcement agency.
The spyware runs on all versions of Windows newer than Windows 2000, and can infect computers via USB drives, drive-by web browser exploits, or with the help of local internet providers that inject the malware when users visit trusted sites such as Google Gmail or YouTube.
The FinSpy Mobile version works on BlackBerry, Apple iOS, Google Android and Microsoft’s Windows Mobile and Windows Phone operating systems, the documents claim. On these, it can record incoming and outgoing calls, track location with cellular ID and GPS data, enable surveillance by making silent calls, and more.
According to the documents found by security firm F‑Secure, the FinIntrusion portable hacking kit can break encryption and record all traffic, and steal users’ online banking and social media credentials.
...
Really protecting data privacy involves a lot more than just protecting internet traffic or stopping any of the NSA or GCHQ’s custom backdoors. Those backdoors were an intelligence convenience that’s now been thwarted, but the spying will continue. If effectively unbreakable encryption is truly implemented, espionage activities will merely shift to spying on data after it’s been decrypted by the intended recipient. And if the entire history of spying scandals has taught us anything, it’s that governments are going to be tempted to spread spyware around like a rabid zombie. Barring a truly populist global revolution that somehow leads to a golden age of shared prosperity and minimal suffering, governments around the world will be spying on other countries’ citizens all over the globe for a whole lot of valid and invalid reasons. Governments can be kind of crazy, and so can people. So the spying will continue. And don’t forget that as spyware spreads more and more, it’ll be harder to tell the state-sponsored spyware apart from its private/criminal counterparts, and all that private spying will warrant more public spying to stop the private spying. Achieving digital privacy isn’t just a matter of slaying the NSA-mass-wiretapping-dragon in the modern age and sealing those backdoors. The public/private global spyware chimera also roams the forest, and it can make backdoors too.
http://hosted.ap.org/dynamic/stories/U/US_BORDER_COMPUTER_SEARCHES?SITE=AP&SECTION=HOME&TEMPLATE=DEFAULT&CTIME=2013-09-10-05-30-23
Sep 10, 9:14 AM EDT
New details in how the feds take laptops at border
By ANNE FLAHERTY
Associated Press
WASHINGTON (AP) — Newly disclosed U.S. government files provide an inside look at the Homeland Security Department’s practice of seizing and searching electronic devices at the border without showing reasonable suspicion of a crime or getting a judge’s approval.
The documents published Monday describe the case of David House, a young computer programmer in Boston who had befriended Army Pvt. Chelsea Manning, the soldier convicted of giving classified documents to WikiLeaks. U.S. agents quietly waited for months for House to leave the country then seized his laptop, thumb drive, digital camera and cellphone when he re-entered the United States. They held his laptop for weeks before returning it, acknowledging one year later that House had committed no crime and promising to destroy copies the government made of House’s personal data.
The government turned over the federal records to House as part of a legal settlement agreement after a two-year court battle with the American Civil Liberties Union, which had sued the government on House’s behalf. The ACLU said the records suggest that federal investigators are using border crossings to investigate U.S. citizens in ways that would otherwise violate the Fourth Amendment.
The Homeland Security Department declined to discuss the case, saying it was still being litigated. But Customs and Border Protection spokesman Michael Friel said border checks are focused on identifying national security or public safety risks.
“Any allegations about the use of the CBP screening process at ports of entry for other purposes by DHS are false,” Friel said. “These checks are essential to enforcing the law, and protecting national security and public safety, always with the shared goals of protecting the American people while respecting civil rights and civil liberties.”
House said he was 22 when he first met Manning, who now is serving a 35-year sentence for one of the biggest intelligence leaks in U.S. history. It was a brief, uneventful encounter at a January 2010 computer science event. But when Manning was arrested later that June, that nearly forgotten handshake came to mind. House, another tech enthusiast, considered Manning a bright, young, tech-savvy person who was trying to stand up to the U.S. government and expose what he believed were wrongheaded politics.
House volunteered with friends to set up an advocacy group they called the Bradley Manning Support Network, and he went to prison to visit Manning, formerly known as Bradley Manning.
It was that summer that House quietly landed on a government watchlist used by immigrations and customs agents at the border. His file noted that the government was on the lookout for a second batch of classified documents Manning had reportedly shared with the group WikiLeaks but hadn’t made public yet. Border agents were told that House was “wanted for questioning” regarding the “leak of classified material.” They were given explicit instructions: If House attempted to cross the U.S. border, “secure digital media,” and “ID all companions.”
But if House had been wanted for questioning, why hadn’t federal agents gone back to his home in Boston? House said the Army, State Department and FBI had already interviewed him.
Instead, investigators monitored passenger flight records and waited for House to leave the country that November for a Mexico vacation with his girlfriend. When he returned, two agents were waiting for him, including one who specialized in computer forensics. They seized House’s laptop and detained his computer for seven weeks, giving the government enough time to try to copy every file and keystroke House had made since declaring himself a Manning supporter.
President Barack Obama and his predecessors have maintained that people crossing into U.S. territory aren’t protected by the Fourth Amendment. That policy is intended to allow for intrusive searches that keep drugs, child pornography and other illegal imports out of the country. But it also means the government can target travelers for no reason other than political advocacy if it wants, and obtain electronic documents identifying fellow supporters.
House and the ACLU are hoping his case will draw attention to the issue, and show how searching a suitcase is different than searching a computer.
“It was pretty clear to me I was being targeted for my visits to Manning (in prison) and my support for him,” said House, in an interview last week.
How Americans end up getting their laptops searched at the border still isn’t entirely clear.
The Homeland Security Department said it should be able to act on a hunch if someone seems suspicious. But agents also rely on a massive government-wide system called TECS, named after its predecessor the Treasury Enforcement Communications System.
Federal agencies, including the FBI and IRS, as well as Interpol, can feed TECS with information and flag travelers’ files.
In one case that reached a federal appeals court, Howard Cotterman wound up in the TECS system because of a 1992 child sex conviction. That “hit” encouraged border patrol agents to detain his computer, which was found to contain child pornography. Cotterman’s case ended up before the 9th Circuit Court of Appeals, which ruled this spring that the government should have reasonable suspicion before conducting a comprehensive search of an electronic device; but that ruling only applies to states that fall under that court’s jurisdiction, and it left questions about what constitutes a comprehensive search.
In the case of House, he showed up in TECS in July 2010, about the same time he was helping to establish the Bradley Manning Support Network. His TECS file, released as part of his settlement agreement, was the document that told border agents House was wanted in the questioning of the leak of classified material.
It wasn’t until late October, though, that investigators noticed House’s passport number in an airline reservation system for travel to Los Cabos. When he returned to Chicago O’Hare airport, the agents waiting for him took House’s laptop, thumb drive, digital camera and cellphone. He was questioned about his affiliation with Manning and his visits to Manning in prison. The agents eventually let him go and returned his cell phone. But the other items were detained and taken to an ICE field office in Manhattan.
Seven weeks after the incident, House faxed a letter to immigration authorities asking that the devices be returned. They were sent to him the next day, via Federal Express.
By then agents had already created an “image” of his laptop, according to the documents. Because House had refused to give the agents his password and apparently had configured his computer in a way that stumped computer forensics experts, it wasn’t until June 2011 that investigators were satisfied that House’s computer didn’t contain anything illegal. By then, they had already sent a second image of his hard drive to Army criminal investigators familiar with the Manning case. In August 2011, the Army agreed that House’s laptop was clean and promised to destroy any files from House’s computer.
Catherine Crump, an ACLU lawyer who represented House, said she doesn’t understand why Congress and the White House are leaving the debate up to the courts.
“Ultimately, the Supreme Court will need to address this question because unfortunately neither of the other two branches of government appear motivated to do so,” said Crump.
House, an Alabama native, said he didn’t ask for any money as part of his settlement agreement and said his primary concern was ensuring that a document containing the names of Manning Support Network donors didn’t wind up in a permanent government file. The court order required the destruction of all his files, which House said satisfied him.
He is writing a book about his experiences and his hope to create a youth-based political organization. House said he severed ties with the Support Network last year after becoming disillusioned with Manning and WikiLeaks, which he said appeared more focused on destroying America and ruining lives than challenging policy.
“That era was a strange time,” House said. “I’m hoping we can get our country to go in a better direction.”
SAIC is Oakland’s choice to “serve and protect” its citizens.
Oakland is quite the laboratory for many social science experiments:
- Black Panthers
- SLA / Patty Hearst
- Car bombing of activist Judi Bari
- Gangs
- “Oaksterdam”
Now SAIC is the vendor of choice!
The comments on this NYTimes article reflect an awareness of, and unease with, these unconstitutional encroachments, but where is it all leading?
Direct conflict while these systems are deployed, or prison riots in concentration camps after every “unproductive,” jobless, homeless, poor person is contained?
Literal “panopticons” deployed in our lives and no legislator willing to uphold constitutional protections for citizens’ privacy?
—
October 13, 2013
Privacy Fears Grow as Cities Increase Surveillance
http://www.nytimes.com/2013/10/14/technology/privacy-fears-as-surveillance-grows-in-cities.html?_r=0&pagewanted=all&pagewanted=print
By SOMINI SENGUPTA
OAKLAND, Calif. — Federal grants of $7 million awarded to this city were meant largely to help thwart terror attacks at its bustling port. But instead, the money is going to a police initiative that will collect and analyze reams of surveillance data from around town — from gunshot-detection sensors in the barrios of East Oakland to license plate readers mounted on police cars patrolling the city’s upscale hills.
The new system, scheduled to begin next summer, is the latest example of how cities are compiling and processing large amounts of information, known as big data, for routine law enforcement. And the system underscores how technology has enabled the tracking of people in many aspects of life.
The police can monitor a fire hose of social media posts to look for evidence of criminal activities; transportation agencies can track commuters’ toll payments when drivers use an electronic pass; and the National Security Agency, as news reports this summer revealed, scooped up telephone records of millions of cellphone customers in the United States.
Like the Oakland effort, other pushes to use new surveillance tools in law enforcement are supported with federal dollars. The New York Police Department, aided by federal financing, has a big data system that links 3,000 surveillance cameras with license plate readers, radiation sensors, criminal databases and terror suspect lists. Police in Massachusetts have used federal money to buy automated license plate scanners. And police in Texas have bought a drone with homeland security money, something that Alameda County, which Oakland is part of, also tried but shelved after public protest.
Proponents of the Oakland initiative, formally known as the Domain Awareness Center, say it will help the police reduce the city’s notoriously high crime rates. But critics say the program, which will create a central repository of surveillance information, will also gather data about the everyday movements and habits of law-abiding residents, raising legal and ethical questions about tracking people so closely.
Libby Schaaf, an Oakland City Council member, said that because of the city’s high crime rate, “it’s our responsibility to take advantage of new tools that become available.” She added, though, that the center would be able to “paint a pretty detailed picture of someone’s personal life, someone who may be innocent.”
For example, if two men were caught on camera at the port stealing goods and driving off in a black Honda sedan, Oakland authorities could look up where in the city the car had been in the last several weeks. That could include stoplights it drove past each morning and whether it regularly went to see Oakland A’s baseball games.
For law enforcement, data mining is a big step toward more complete intelligence gathering. The police have traditionally made arrests based on small bits of data — witness testimony, logs of license plate readers, footage from a surveillance camera perched above a bank machine. The new capacity to collect and sift through all that information gives the authorities a much broader view of the people they are investigating.
For the companies that make big data tools, projects like Oakland’s are a big business opportunity. Microsoft built the technology for the New York City program. I.B.M. has sold data-mining tools for Las Vegas and Memphis.
Oakland has a contract with the Science Applications International Corporation, or SAIC, to build its system. That company has earned the bulk of its $12 billion in annual revenue from military contracts. As the federal military budget has fallen, SAIC has diversified into other government agency projects, though not without problems.
The company’s contract to help modernize the New York City payroll system, using new technology like biometric readers, resulted in reports of kickbacks. Last year, the company paid the city $500 million to avoid a federal prosecution. The amount was believed to be the largest ever paid to settle accusations of government contract fraud. SAIC declined to comment.
Even before the initiative, Oakland spent millions of dollars on traffic cameras, license plate readers and a network of sound sensors to pick up gunshots. Still, the city has one of the highest violent crime rates in the country. And an internal audit in August 2012 found that the police had spent $1.87 million on technology tools that did not work properly or remained unused because their vendors had gone out of business.
The new center will be far more ambitious. From a central location, it will electronically gather data around the clock from a variety of sensors and databases, analyze that data and display some of the information on a bank of giant monitors.
The city plans to staff the center around the clock. If there is an incident, workers can analyze the many sources of data to give leads to the police, fire department or Coast Guard. In the absence of an incident, how the data would be used and how long it would be kept remain largely unclear.
The center will collect feeds from cameras at the port, traffic cameras, license plate readers and gunshot sensors. The center will also be integrated next summer with a database that allows police to tap into reports of 911 calls. Renee Domingo, the city’s emergency services coordinator, said school surveillance cameras, as well as video data from the regional commuter rail system and state highways, may be added later.
Far less advanced surveillance programs have elicited resistance at the local and state level. Iowa City, for example, recently imposed a moratorium on some surveillance devices, including license plate readers. The Seattle City Council forced its police department to return a federally financed drone to the manufacturer.
In Virginia, the state police purged a database of millions of license plates collected by cameras, including some at political rallies, after the state’s attorney general said the method of collecting and saving the data violated state law. But for a cash-starved city like Oakland, the expectation of more federal financing makes the project particularly attractive. The City Council approved the program in late July, but public outcry later compelled the council to add restrictions. The council instructed public officials to write a policy detailing what kind of data could be collected and protected, and how it could be used. The council expects the privacy policy to be ready before the center can start operations.
The American Civil Liberties Union of Northern California described the program as “warrantless surveillance” and said “the city would be able to collect and stockpile comprehensive information about Oakland residents who have engaged in no wrongdoing.”
The port’s chief security officer, Michael O’Brien, sought to allay fears, saying the center was meant to hasten law-enforcement response time to crimes and emergencies. “It’s not to spy on people,” he said.
Steve Spiker, research and technology director at the Urban Strategies Council, an Oakland nonprofit organization that has examined the effectiveness of police technology tools, said he was uncomfortable with city officials knowing so much about his movements. But, he said, there is already so much public data that it makes sense to enable government officials to collect and analyze it for the public good.
Still, he would like to know how all that data would be kept and shared. “What happens,” he wondered, “when someone doesn’t like me and has access to all that information?”
Bob Filner — the former mayor of San Diego and former serial-groper (hopefully) — and the City of San Diego recently settled the sexual harassment lawsuit brought by Filner’s ex-communications director. But there’s a new Filner-related scandal that’s been brewing: The serial groping of democracy by big-money foreign donors:
It is indeed quite a story that’s emerging from the investigation of Azano’s adventures in influence peddling. It’s especially interesting because Azano is described as “almost a legend” in Mexico, and that legendary status includes major security and surveillance contracts with the Mexican defense department. Contracts that reportedly give the Mexican government remote access to phone microphones, text messages, contacts and multimedia. So the guy at the center of this latest Citizens United-induced campaign-finance scandal in the US is also a private spy-master for the Mexican government:
The capabilities of the surveillance system sold by Azano’s “Security Tracking Devices” sound a lot like the FinFisher software that’s being sold to governments around the world, which raises the question: Was it FinFisher that Azano’s company was selling to Mexico? Maybe, assuming the company “Obses” is affiliated with Azano, because Obses has definitely been selling FinFisher to Mexico:
Remember when Gamma, the maker of FinFisher, claimed that it must have been stolen copies of their super-spy software that were being used against Bahraini activists? Someone just hacked Gamma and stole 40 GB of documents and, shocker, it looks like Gamma was lying:
At only 1.4 million euros for the FinSpy system and virtually no export controls for the sale of this kind of potent software, you almost have to wonder which governments around the world haven’t purchased the system by now. It seems like a bit of a bargain.
Get ready for the next big cyber-growth sector: corporate “active defense” anti-hacking services. It’s an “active defense” that increasingly includes an active offense, possibly even a pre-emptive offense, and generally seems to embrace the vigilante spirit:
It should be interesting to see which companies end up jumping into the counter-hack-attack-for-hire market. The competition could be fierce.
They are only showing what they want us to see. I think whatever
they have now far exceeds the dense and blocky material you’ll find
in the Jacob Appelbaum video below, except for a hint you’ll find at
56:19 or so (“Portable” continuous wave radar generator). I think
what they have now doesn’t use wires or have any protocols I’m
aware of. Dave talked about this before but I don’t remember which
show. Like the holographic projection system, but capable of doing
a whole lot more.
Jacob Appelbaum: To Protect And Infect, Part 2 [30c3]
https://www.youtube.com/watch?v=vILAlhwUgIU
An Italian cybersecurity/antisecurity firm with a large number of government clients, Hacking Team, just got mega-hacked. 400 GB of internal documents have now been released that verify allegations that Hacking Team sells its wares to governments with extensive records of human rights abuses. So, assuming this data/PR breach results in an end to those contracts, it’s at least possible that the kind of powerful software that you really don’t want in the wrong hands might not stay in the wrong hands. Of course, assuming nothing happens, Hacking Team just got a big free global advertisement. Either way, it’s just one more example of the fact that efforts to curb government hacking abuses can’t exclusively focus on government hacking abilities. Cyberwarfare that goes far beyond the abilities of your normal cyber-security expert can be privatized too. Privatized and, of course, sold to some of the worst governments on the planet:
“Sudan’s National Intelligence Security Service was one of two customers in the client list given the special designation of ‘not officially supported’.”
Note that if this sounds awfully similar to the FinFisher hack that revealed that firm was also selling its powerful services to highly questionable client states, this is going to sound even more familiar: The same hacker that took down FinFisher hacked Hacking Team:
So “PhineasFisher” repeated their FinFisher hack, this time against an Italian firm that appeared to have a very similar business model. Assuming PhineasFisher is a well-meaning “hacktivist”, the situation could certainly be worse.
But, of course, when you read that Vice Motherboard did a piece just last month on Hacking Team’s assertion that it can now hack the dark web, it’s also obvious that anyone, whether it’s another cyber security firm, another government, or another hacker outfit, anyone with an interest in hacking would want to learn how to do that. Or how to prevent someone else from doing that to them.
In other words, when you’re hacking an organization like Hacking Team, even a “black hat” hacker is going to be incentivized to make it look like a “white hat” hack, because that’s so easy to do given Hacking Team’s horrible client list. “White hat” hacking is a gimme cover even if all they wanted was the dark net stuff. But let’s hope it was totally a real “hacktivist” action by a “white hat” hacker anyway. It’s not only much more of a feel-good story, but the alternatives involving fake “white hat” hackers are actually rather feel-bad-ish. That Hacking Team source code that was apparently stolen sounds scary.
Following yesterday’s triplet of “glitches” that took down the New York Stock Exchange, United Airlines, and the Wall Street Journal’s home page, a number of people are scratching their heads and wondering if Anonymous’s tweet the previous day, which simply stated, “Wonder if tomorrow is going to be bad for Wall Street.... we can only hope,” was somehow related. Hmmm....
US officials and the impacted companies, however, strongly deny that the technical difficulties were anything other than coincidental:
Well, if Anonymous didn’t do the hack, it all points towards one obvious and ominous explanation: Anonymous has developed psychic precognition abilities!
Well, ok, there are non-paranormal explanations, but if we are dealing with Anonymous Who Stare at Goats, let’s hope they’re just limited to the precog abilities. Precog Anonymous would be messy enough on its own, but at least if it’s just the occasional precog tweet that’s ok. Scanner Anonymous might be a little too over the top.
Here’s an indication of just how sensitive the client list is for companies like “Hacking Team”, the Italy-based government-spyware firm that was recently hacked: South Korea’s intelligence agency, the National Intelligence Service, acknowledged Tuesday that it had indeed purchased Hacking Team software, but assured the public that it was only used to monitor North Korea and for other research purposes.
A South Korean intelligence agent’s dead body that was just found alongside a suicide note would appear to suggest otherwise:
Note that the 2012 online smear campaign allegedly directed by the former spy chief now facing a retrial didn’t appear to involve the use of any hacking tools, although with this Hacking Team revelation we’ll see if that continues to be the case (note that the NIS reportedly purchased the software in 2012). No, he was convicted of directing sock-puppetry. Lots and lots of sock-puppetry.
But also note that the NIS wasn’t the only intelligence agency caught up in the scandal. South Korea’s Cyberwarfare Command was also accused of the same political meddling:
“The intelligence agency was created to spy on North Korea, which is still technically at war with the South. But over its history, it has been repeatedly accused of meddling in domestic politics and of being used as a political tool by sitting presidents.”
It’s worth noting that the current president, Park Geun-hye, is the daughter of former president/military strongman Park Chung-hee, who set up the predecessor to the NIS in 1961.
It’s also worth noting that if the South Korean government was planning on engaging in illegal domestic surveillance, Hacking Team’s software probably wasn’t very necessary.
A German police officer recently made the news after ‘arresting’ a squirrel following reports from a distressed woman that the critter was aggressively stalking her. Authorities determined that the squirrel was suffering from exhaustion and ordered the furry criminal to consume apples and honey tea as punishment.
As far as stalker squirrels go, it could have been worse. It could have been a robo-squirrel. A robosquirrel that’s interested in a lot more than just your apples and honey tea and specifically interested in your passwords:
Yes, Hacking Team, the government spyware firm, just got hacked, and now WikiLeaks is leaking its emails. And according to those emails, Hacking Team and Boeing apparently are thinking about putting a suite of hacking tools on a drone, and why not? That makes perfect sense, and there are probably plenty of other companies and governments trying to do the same thing. It would be shocking if that wasn’t the case. Whether or not that involves robo-squirrels remains to be seen, but the industry for cryptodrones that blend into the environment and can sneak up on people is one of those inevitable technological advances that threatens to turn reality into a paranoid schizophrenic’s worst nightmare someday.
And no government will be required to create that nightmare, although they’ll surely contribute. The private sector demand for drones that can hunt someone down and do any number of possible tasks that go far beyond hacking will provide more than enough of the required demand for creating a de facto cryptodrone surveillance state. A public and private army of stalker robo-squirrels and robo-everything-else is just a matter of time. Why? Because it’s just a matter of time before that kind of drone technology is 3D printable or somehow available to the masses through some sort of do-it-yourself drone-building technology. How many decades before 3D-printable microdrones that are capable of highly sophisticated surveillance/hacking/whatever are just part of everyday reality, because it’s all available through readily accessible do-it-yourself manufacturing technology? That’s going to be really amazing and awesome when we can generate little robots on command, but it also pretty much guarantees an epidemic of public and private spy drones.
So enjoy public wi-fi while you still can, because hacker-stalker robo-squirrels are just a matter of time. And if that sounds alarming, just be glad the spy cyborg-squirrel, or cyborg-any-critter, technology isn’t going to be available any time soon. And hopefully never. Again.
There was a rather startling report last year that was never proven but is certainly possible: Did over 100,000 smart TVs, home networking routers, smart refrigerators and other “Internet of Things” devices get turned into a giant spam-spewing “botnet”? We’ve never seen proof that such a botnet exists, but if it existed back in 2014 it presumably exists today too:
Was that Christmas spam you got in 2011 sent by the Great Proofpoint Botnet, enabled via a large number of misconfigured Internet-ready devices that still had vulnerabilities like default passwords in place?
Sadly, we may never know. We do know, however, that changing the default password for anything connected to the internet is a really good idea. And if you have a large number of internet-ready devices all connected to the internet with their default passwords still in place and other misconfigurations, the botnet they described does seem very possible:
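If you want to verify your own gear, a quick check is to see whether a device’s admin page still accepts any of the widely published factory logins. Here’s a minimal Python sketch of that idea; the credential list and the device URL are illustrative assumptions (check your device’s documentation for its actual defaults), and it should only ever be pointed at devices you own:

```python
import base64
import urllib.error
import urllib.request

# A few widely published factory defaults (illustrative, not exhaustive).
DEFAULT_CREDS = [("admin", "admin"), ("admin", "password"), ("root", "root")]

def basic_auth_header(user: str, password: str) -> dict:
    """Build an HTTP Basic auth header for a username/password pair."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def still_has_default_password(url: str) -> bool:
    """Return True if the device at `url` accepts a factory-default login.

    `url` should point at a Basic-auth-protected admin page on a device
    you own, e.g. "http://192.168.1.1/".
    """
    for user, password in DEFAULT_CREDS:
        req = urllib.request.Request(url, headers=basic_auth_header(user, password))
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                if resp.status == 200:  # the device let a default login in
                    return True
        except urllib.error.URLError:
            continue  # login rejected or device unreachable; try the next pair
    return False
```

If the function ever returns True for one of your devices, that password change is overdue.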
Did someone discover that a massive number of internet-ready devices with default passwords and other misconfigurations created the greatest Christmas spam machine ever? And does it still exist, dribbling out spam one device at a time? It’s possible.
And since it’s also possible that you haven’t changed the default passwords on your devices, perhaps that’s something to look into. But while you can change your internet-ready devices’ passwords easily enough, for some internet-ready devices you might actually need to change more than just the password to secure them on the internet. You might need to swap the whole device for a new one. Why? Because, as the article below points out, the emerging “Internet of Things”, especially the “Internet of Relatively Cheap Things”, might actually be the “Internet of Relatively Cheap Things Sharing the Same Set of Encryption Keys”:
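One way the shared-key problem shows up from the outside is that devices with unique keys present unique certificates, while units cloned from one factory image all present the same one. Here’s a small Python sketch of that comparison; the device names and certificate bytes are placeholders (in practice you’d collect each device’s real DER-encoded certificate, e.g. via Python’s `ssl` module):

```python
import hashlib
from collections import defaultdict

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def find_shared_certs(devices: dict) -> list:
    """Group device names whose certificates hash identically.

    `devices` maps a device name to its certificate bytes. Any group
    larger than one means those units present the same key material --
    the "same set of encryption keys" problem described above.
    """
    groups = defaultdict(list)
    for name, der in devices.items():
        groups[fingerprint(der)].append(name)
    return [sorted(names) for names in groups.values() if len(names) > 1]

# Placeholder byte strings standing in for real certificates:
devices = {"router-a": b"CERT-1", "router-b": b"CERT-1", "camera": b"CERT-2"}
print(find_shared_certs(devices))  # the two routers share a certificate
```

And the key point from the article holds: when the grouping turns up a match, there’s nothing the owner can reconfigure, because the shared key is baked into the firmware.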
“The problem with this sort of vulnerability is that the device owner [user] actually can’t do anything other than replace the device”
And note that if “replace device” becomes the default option following a future wave of IoT Botnet attacks, that’s going to be a lot of replaced devices:
A lot.
But look on the bright side. At least botnets don’t fly through your neighborhood seeking vulnerable devices to infect with sophisticated malware. And when there eventually are botnets flying through the neighborhood, don’t forget that there’s always another bright side.
Well, it’s been quite a year for the Internet of Hackable Things:
That was, uh, a bit terrifying. It’s almost hard to choose the creepiest hack from such a selection, although that hackable Barbie Doll just might have the greatest creep potential.
If you’ve ever wondered why it is that online web ads, which are often little programs you can interact with, aren’t avenues for infecting your computer with malware, here’s your answer: you were wondering incorrectly, because online advertisements are already increasingly “malvertisements”:
“With Flash-based attacks, one of the simple things you can do is to either remove Flash, which in the long term I don’t think is the best solution because eventually attackers will move to something else. Or there’s a feature in Flash that allows the user to activate Flash when they need it. That’s a major component in your defence because all these drive-by-download attacks assume that Flash is enabled by default”
Word to the wise.
So if you’re a Windows user who goes to sites like the New York Times or the BBC and you also have Adobe Flash or Microsoft Silverlight installed, you probably want to change those Adobe Flash permissions. Soon. Or better yet, yesterday:
“The tainted ads may have exposed tens of thousands of people over the past 24 hours alone, according to a blog post published Monday by Trend Micro. The new campaign started last week when “Angler,” a toolkit that sells exploits for Adobe Flash, Microsoft Silverlight, and other widely used Internet software, started pushing laced banner ads through a compromised ad network.”
So, all in all, it sounds like we have a crypto-ransomware-mini-pocalypse due largely to malicious elements predictably infiltrating an online marketing industry that operates on trust. You have to wonder if this kind of misplaced trust is limited to online “malvertisement” peddlers. Hmmm...
This probably should go without saying, but if you own a smartphone running Google’s Android operating system, it’s really not a good idea to download apps from anywhere other than the Google Play store. Even if you really, really, really want to play Pokemon Go:
“Even though this APK has not been observed in the wild, it represents an important proof of concept: namely, that cybercriminals can take advantage of the popularity of applications like Pokemon GO to trick users into installing malware on their devices. Bottom line, just because you can get the latest software on your device does not mean that you should. Instead, downloading available applications from legitimate app stores is the best way to avoid compromising your device and the networks it accesses.”
That is indeed good advice: Just say No to “side-loading”. Be safe and download your malware apps from the Google Play store:
“People who want to run Pokémon Go on their Android phone should download the app only from Google Play, and even then, they should closely inspect the publisher, the number of downloads, and other data for signs of fraud before installing.”
Isn’t getting software in the smartphone era fun? It’s free. And easy to access. And maybe or maybe not a malware trojan horse. The software industry has always had to worry about shady operators peddling malicious software. But in the pre-internet age you didn’t have to worry as much about anyone with an internet connection selling you software. It was more of a pain in the ass to get you to install malware. Especially since the software you were installing wasn’t connected to a global internet that could relay your information back to whoever sold you the malware.
But nowadays, treasure troves of our personal digital information are bundled into one pocket-sized device we all carry around that’s designed to download apps from trusted places like the Google Play store, which apparently allows in some rather nasty content. But these Pokemon Go malware apps weren’t just random nasty content. They were content tied directly to the rollout of Pokemon Go, an app co-developed by the Google spinoff Niantic. That’s disturbing. Especially because Google is supposed to be manually reviewing all its Google Play store apps now:
“Google Play, Google’s marketplace for Android applications which now reaches a billion people in over 190 countries, has historically differentiated itself from rival Apple by allowing developers to immediately publish their mobile applications without a lengthy review process. However, Google has today disclosed that, beginning a couple of months ago, it began having an internal team of reviewers analyze apps for policy violations prior to publication. And going forward, human reviewers will continue to go hands-on with apps before they go live on Google Play.”
The hands-on human testing of the Pokemon Go screenlock malware app must have used some pretty lenient criteria. But at least it was rolled out in time for the big Pokemon Go rollout.
It’s one more reminder that the debates over the trade-offs between digital security and convenience aren’t just about the convenience and lower costs for end users tantalized by privacy-infringing, yet convenient and free, features. It’s also about the convenience and profits of app developers and distributors like Google, and the convenience and profits of operating system and hardware developers. Like Google.
As Google’s rival distributor Apple found out in September, even trusted developers can accidentally and innocently end up being a malware vector through common mistakes or shortcuts that Apple’s human reviewers didn’t catch. Which is to be expected in some cases, because human reviewing means human error.
But the fact a Pokemon Go screenlock app made it onto the Google Play store during the week of the big Pokemon Go rollout indicates that there might be some serious systemic issues with the app review system which raises serious questions about just how much malware is really floating around on the supposedly vetted mainstream app stores. Probably a lot.
Just FYI, it turns out that Grand Theft Auto mod for Minecraft that your kid couldn’t resist downloading to your Android smartphone should probably be renamed Grand Theft Smartphone, since it turns out to be a malicious data-stealing piece of malware. Available from the Google Play store. Along with 400 other Google Play store apps carrying the same malware:
“This is not the first time in recent history when Google Play was reportedly breeding security liabilities. About three weeks ago, experts with security firm Check Point discovered 40 DressCode-infected apps in Google Play. At the time, Check Point reported that infected apps scored between 500,000 and 2 million downloads on the Android app platform.”
Keep in mind that the DressCode malware isn’t just a data thief but also a springboard for further attacks on the networks the infected phone connects to, so those 2,000,000 DressCode downloads presumably translate into a much larger number of compromised systems. It’s a reminder that the ‘how to not get digitally mugged in Minecraft’ talk that parents have to give their kids these days probably shouldn’t be limited to kids or Minecraft.
This should do wonders for Germany’s brand as an anti-state-hacking nation: Due to concerns that strong encryption is making investigations into digital evidence impossible, the two major parties just pushed through a law that would expand law enforcement’s authority to use state-owned trojan hacking tools to get around that encryption by inserting malware on targets’ devices. These powers already existed for extreme circumstances, like terrorism, but under the new law investigators could use them for any crime that allows for a wiretap:
“According to the government, the spread of encrypted communications makes traditional wiretapping impossible, so the authorities need to be able to bypass encryption by directly hacking into the communications device.”
Keep in mind that there are plenty of legitimate concerns over the ability of law enforcement to actually enforce the law in the age of encryption. If society wants impregnable digital systems, that will no doubt prevent some government abuses. But it will also allow for things like organized crime getting a lot more, well, organized. It’s a trade-off. So it’s no surprise to see the German government make the decision it made. Well, ok, for most other governments it wouldn’t be surprising. Considering Berlin led the global collective outrage over the revelations of the Snowden Affair, however, it is a little surprising. But only a little.
Here’s something folks might want to belatedly add to their New Year’s Resolution lists: turn off your browser’s password manager so online advertisers can’t turn it into a persistent tracking cookie and maybe use it to steal your passwords:
“The researchers examined two different scripts — AdThink and OnAudience — both of which are designed to get identifiable information out of browser-based password managers. The scripts work by injecting invisible login forms in the background of the webpage and scooping up whatever the browsers autofill into the available slots. That information can then be used as a persistent ID to track users from page to page, a potentially valuable tool in targeting advertising.”
Online ad scripts that turn the auto-filled data in your browser’s password manager into a persistent ID that lets advertisers track users from page to page. Just what the internet needed.
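To see why an autofilled email address works as a tracker even without cookies, here’s a minimal sketch. The function name is illustrative, but the reporting on these scripts describes the same basic move: hash the captured address so the same user yields the same opaque ID on every site they visit:

```python
import hashlib

def email_to_tracking_id(email: str) -> str:
    """Illustrative: derive a stable, cross-site ID from an autofilled email."""
    # Normalizing first means the same address produces the same ID
    # everywhere, regardless of capitalization or stray whitespace.
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same address autofilled on two unrelated sites yields the same ID,
# which is exactly what makes it usable as a persistent tracker.
assert email_to_tracking_id("Alice@Example.com") == email_to_tracking_id("alice@example.com ")
```

The hash hides the raw address from casual inspection while still being a perfect join key across sites, devices, and data-broker databases.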
And notice how it’s entirely possible these rogue ads could collect the actual password information stored in the password manager:
So we have a report about online advertisers successfully turning the password managers built into browsers into persistent tracking IDs, and potentially collecting password info. Not that they could do this. That they are doing this, at least on these two particular advertising platforms, AdThink and AudienceInsight. And AdThink’s ads were sending the persistent ID information back to Acxiom, one of the largest consumer data brokers in the world. So it’s not just that advertising brokers are already using this password manager vulnerability. It’s that they are already sending information about what pages people are reading back to one of the largest consumer data brokers in the world:
Yep, when these researchers stumbled upon this vulnerability in password managers, they also discovered that it’s already being exploited on behalf of one of the largest data brokers on the planet. And that means one of the largest data brokers on the planet has been getting even more information about the web pages everyone is reading thanks to this password manager exploit:
“How do data brokers collect information? As you might guess, Web browsing is a bountiful source. What sites you visit, what topics or products you research there, what you buy, even what you post in forums can be turned into an entry in a broker’s database. But there are offline sources as well. Public court records are, of course, public. But retail store owners have found they can bring in additional revenue by selling their sales records to broker companies”
What data do data brokers collect? Everything they possibly can. Online and offline. Including all the browsing information the industry can get its hands on. And all that data is resulting in something much more than just the lists of consumers for sale that the data broker industry of decades past offered. Today’s data broker industry sells “consumer scores”. Scores for all sorts of things — like the propensity to get a disease — that are generated using the increasingly data-intensive personalized profiles the industry is building for everyone:
And these “consumer scores” are being sold for all sorts of things, despite the fact that they’re wildly inaccurate, with a review of Acxiom’s data showing a 50 percent error rate:
““We found a 50 percent accuracy rate in Acxiom data we looked at,” says Dixon, “and they are considered among the best.””
And that’s the data that could be generating secret illegal blacklists. Data that’s maybe 50 percent accurate with Acxiom, the industry leader.
And now, thanks to this password manager exploit, companies like Acxiom can add even more browsing history data to their personalized models of each of us. So, on the plus side, all that web browsing data Acxiom has been collecting on you via the password manager loophole will probably improve the accuracy of the “consumer scores” it’s selling about you. Unless you happen to be browsing very ironically. Yay?
Also keep in mind that, while the web browsing history that the password manager vulnerability makes available to companies like Acxiom would be extremely useful for giving the data brokerage industry a better idea of our individual interests and things like health histories, there’s another critical benefit to companies like Acxiom in turning password managers into persistent IDs: those persistent IDs will likely be the same across different devices, enabling companies like Acxiom to determine, for instance, that the same individual owns a given smartphone, laptop, and desktop, because they all have the same default info set up for the password manager.
So with that handy device deanonymization-technique in mind, check out the service Acxiom was hyping in this recent interview: thanks to Acxiom’s 2014 buyout of LiveRamp — a company specializing in “using both personally identifiable and anonymous information from device ID’s and cookies to provide a single identity graph—for people or households—across all platforms” — Acxiom is now positioned “to be the predominate provider of identity graphs across digital and television”:
“LiveRamp has long been active in the digital space, using both personally identifiable and anonymous information from device ID’s and cookies to provide a single identity graph—for people or households—across all platforms. Acxiom, meanwhile, worked with pay-TV operators to create a safe haven for matching subscriber files.”
A single identity graph — for people or households — across all platforms using cookies and device ID’s. That sure sounds like the kind of product that could benefit from a browser vulnerability that turns password managers into persistent cookies.
And that ‘identity graph’ technology is just one of the services offered by one of the largest data brokers in the world. A company that operates in near complete secrecy and yet is more open than almost all of the thousands of other companies operating in this same data-broker space. A company with almost no regulatory oversight and a demonstrated willingness to exploit the password manager vulnerability — a vulnerability that could, in theory, allow for the stealing of passwords. And it’s the company that just got caught exploiting that password manager loophole and is simultaneously super excited about being the predominate provider of identity graphs across digital and television platforms.
So, yeah, you probably want to disable those password managers in your browser at some point in 2018. Good luck! And yes, given that the GOPers in Congress already voted to allow US internet service providers to sell their users’ browsing history to advertisers, it’s not like preventing the web browsing tracking that this password manager vulnerability enables will lead to a huge increase in online privacy for most Americans. But there’s also the possibility of passwords being stolen via the exploit, so it’s still an important fix for 2018. One of many important internet-privacy fixes for 2018.
It’s that time again. Time to change your passwords: One of the largest databases of hacked emails and passwords ever seen was just released to the public by someone. A cache of files containing almost 773 million unique email addresses and 21 million unique passwords was briefly posted to the MEGA upload site and made available for anyone to download. After the files were taken down they showed up again on a hacker forum.
The emails and passwords don’t appear to have come from a single breach. Instead, they appear to be a compilation of a large number of different databases of previously hacked emails and passwords. So much of this data was already ‘in the wild’, which fortunately means the damage is likely to be limited.
That said, the person who discovered this, Troy Hunt, the guy who maintains the “Have I Been Pwned” website, says that around 140 million of the email accounts and over 10 million unique passwords are ones he hasn’t seen before. So around half of the passwords released in this cache might be newly released, or at least previously only accessible to hackers on the dark web.
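Incidentally, Hunt’s Pwned Passwords service lets you check a password against this sort of corpus without ever revealing it, using a k-anonymity trick: you hash the password locally and send only the first five hex characters of the SHA-1. Here’s a sketch of the client-side split (the range-API behavior described in the comments is per the service’s documentation; network code and error handling omitted):

```python
import hashlib

def pwned_query_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 for a k-anonymity range query:
    only the 5-character prefix ever leaves your machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = pwned_query_parts("password")
# The client would then fetch https://api.pwnedpasswords.com/range/<prefix>
# and look for <suffix> among the returned suffix:count lines, so the
# server only ever learns the prefix, which matches many passwords.
```

Since hundreds of different passwords share any given 5-character prefix, the server can’t tell which one you were actually checking.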
Adding to the security peril is that these passwords are NOT hashed. It’s the raw text of the passwords. And that makes this information ideal for credential-stuffing attacks, where hackers repeatedly try email and password combinations. So this is clearly horrible security news. But for people who reuse the same passwords over and over on different websites this could be devastating (if they haven’t already been devastated):
“There are breaches, and there are megabreaches, and there’s Equifax. But a newly revealed trove of leaked data tops them all for sheer volume: 772,904,991 unique email addresses, over 21 million unique passwords, all recently posted to a hacking forum.”
21 million unique passwords and almost 773 million email addresses. And since logging into websites typically involves inputting an email address and a password, that makes this release perfect for credential-stuffing attacks on a global scale:
And the key reason this release is so incredibly useful, not just for hackers but for anyone who wants to try to log into your website accounts, is that the passwords aren’t hashed. They’re in plain text:
And while it might seem like the potential damage should be limited because this appears to be a compilation of a large number of different databases of emails and passwords from previous hacks — so many of these passwords have likely already been updated — the fact that 10 million of the 21 million unique passwords haven’t been seen before suggests that there are 10 million people who probably haven’t updated their passwords yet and really, really, really need to do so soon. Especially if they use the same password on different sites:
So it doesn’t sound like this release is a super massive disaster. But if those 10 million passwords are passwords that were freshly stolen and haven’t been updated, that’s still potentially quite bad for those 10 million people.
Fortunately, according to Brian Krebs, it sounds like those 10 million passwords might also be from old hacks and have likely been updated. Krebs interviewed Alex Holden, CTO of a company that specializes in trawling underground spaces for intelligence about malicious actors and their stolen data dumps. According to Holden, the data appears to have first been posted to underground forums in October 2018 and it’s just a subset of a much larger tranche of passwords being peddled by a shadowy seller online. Holden also asserts that his company has already accounted for 99 percent of the released data from previous hacks. So that hopefully means those 10 million passwords that Troy Hunt hadn’t seen before have indeed been stolen a while ago and already updated.
The bad news is that the hacker(s) appear to have a much larger cache of data for sale and that data is much newer. Krebs sort of confirmed this after contacting the hacker via Telegram. That hacker, who goes by the name Sanixer, told Krebs that this release is 2–3 years old and that he has other password packages totaling more than 4 terabytes in size that are less than a year old.
Interestingly, while the above article notes that the released data was available for anyone to download for free, the hacker had a price of $45 for the cache when Krebs contacted him. Still, that’s pretty cheap all things considered.
And that likely explains the purpose of this massive release of free/cheap data that’s already largely known by the hacker community: It’s a teaser designed to solicit customers for the newer, more useful, and presumably more expensive password packages for sale:
“KrebsOnSecurity sought perspective on this discovery from Alex Holden, CTO of Hold Security, a company that specializes in trawling underground spaces for intelligence about malicious actors and their stolen data dumps. Holden said the data appears to have first been posted to underground forums in October 2018, and that it is just a subset of a much larger tranche of passwords being peddled by a shadowy seller online.”
Yep, those 773 million emails and 21 million passwords are just a subset of a much larger tranche of credentials for sale. But at least there’s the good news: cybersecurity firms have already seen the vast majority of this released data from other sources, with Alex Holden claiming his company can account for 99 percent of it. And the fact that the hacker basically gave this data away, and is now selling it for $45, underscores its relatively low utility:
But the bad news is pretty bad if the claims of the hacker are true: the hacker(s) claim they have 4 terabytes of hacked credentials that are less than a year old:
Keep in mind that it’s possible that ‘Sanixer’ doesn’t have 4 terabytes of relatively ‘fresh’ and useful credentials and that this whole thing is designed to entice people into paying big money for another tranche of relatively useless data. That’s a real possibility. After all, it’s not like there’s a return policy when you buy stolen material over the dark web. So let’s hope that’s the case. But given the rate at which major hacks are announced these days, it wouldn’t be too surprising if Sanixer really does have a lot more ‘fresh’ emails and passwords for sale.
It’s all a reminder to avoid reusing passwords whenever possible and consider using a password manager.
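As a minimal sketch of what a password manager does for you on every signup, here’s a generator built on Python’s `secrets` module, which exists for exactly this kind of security-sensitive randomness. A fresh, high-entropy password per site means one breached site can’t be replayed against your other accounts:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a unique, high-entropy random password."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets.choice draws from the OS's cryptographic RNG, unlike
    # random.choice, which is predictable and unsuitable for passwords.
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Every call yields an independent password, so a credential-stuffing
# attack with one leaked password fails everywhere else.
print(generate_password())
```

A 20-character draw from this roughly 94-symbol alphabet carries well over 100 bits of entropy, which puts it far beyond brute-force range even against the offline cracking that plaintext dumps enable.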
In related news, a 2017 study found that 9 of the most popular password manager apps available on the Google Play store had software vulnerabilities that would potentially allow hackers to steal the stored passwords. And while those vulnerabilities have since been fixed, it’s important to keep in mind that password managers aren’t necessarily a defense against malware/spyware on your systems. And the spywarepocalypse rolls on...
The issue of privacy and security vulnerabilities in WhatsApp, the wildly popular encrypted texting app owned by Facebook, was once again in the news this week. The parent company of NSO Group — the Israel-based hacking tool company that provided the spyware used to spy on the encrypted communications of Saudi dissident (and Muslim Brotherhood associate) Jamal Khashoggi — responded to petitions by Amnesty International to have NSO Group’s export license revoked by pledging to do whatever is necessary to ensure that the company’s software isn’t used to violate human rights and to “ensure that NSO technology is used for the purpose for which it is intended – the prevention of harm to fundamental human rights arising from terrorism and serious crime – and not abused in a manner that undermines other equally fundamental human rights.”
Given NSO Group is the kind of company that knowingly sold its software to the Saudi government (and may have itself hired private investigators to harass the researchers at Citizen Lab after they started investigating NSO Group’s ties to the Saudis), it’s hard to take the company’s pledges seriously. But as the following article reminds us, while NSO Group may not be the kind of entity one should trust to address serious crime on WhatsApp, there needs to at least be someone with the ability to address serious crime on the platform and other fully-encrypted platforms. Which, of course, is the fundamental paradox of these platforms: they secure the human right of privacy and simultaneously facilitate all sorts of other human rights violations. Like the trafficking of child porn. If NSO Group’s WhatsApp hacking software was used to crack down on child porn, that would be a lot less controversial than when the same hacking software is used for hacking Saudi dissidents.
And yet, within the context of the contemporary encryption and digital privacy debates, it would still be somewhat controversial for NSO Group’s hacking software to get used for cracking down on child porn over WhatsApp, simply because the encryption on platforms like WhatsApp is supposed to be uncrackable for everyone, including WhatsApp itself and the NSA. Accepting that platforms like WhatsApp will be used for the untrackable exchange of things like child porn is part of the package, and according to the Cypherpunk worldview it’s a small price to pay. Recall Jacob Appelbaum making this exact point during a 2012 panel discussion with Julian Assange (at 1 hour 7 minutes). The book Cypherpunks: Freedom and the Future of the Internet was based on that panel discussion. And that’s a key aspect of this topic to keep in mind in the context of the controversy over NSO Group’s spyware successfully hacking WhatsApp (by planting spyware on the victim’s phone, not by breaking the encryption). Clients like the Saudi government are the kinds of clients who are guaranteed to abuse powerful hacking tools. But as the following article reminds us, the serious abuses taking place on these platforms specifically because they are (mostly) unhackable are still serious.
At the same time, one argument often used by the Cypherpunk community in defense of unbreakable communication platforms that no one can police is that there are other ways of policing them without breaking the encryption. That’s not true in all cases, but in the case of WhatsApp it’s tragically true. At least at this point it’s true. Because it turns out that WhatsApp’s child porn problem has been right out in the open: In late 2016, WhatsApp started allowing strangers to join WhatsApp groups without having to know a member first. This led to an explosion in a new marketplace of private WhatsApp groups for all sorts of topics, and something entirely predictable happened. People started setting up WhatsApp child porn exchange groups. That’s the finding of two Israeli NGOs that discovered this problem last year.
And it was obvious these were child porn groups, based on names with “cp” in them or other barely veiled known codes. The only barrier to turning this into a nightmare for child porn distribution was a searchable database of groups. So of course various smartphone apps offering searchable databases of WhatsApp groups were made available on the Google Play store, and these apps included the child porn groups with their obvious names. Apparently no one at these apps or Google or WhatsApp was moderating to see if these searchable databases started advertising child porn groups, because the two NGOs found numerous groups with obvious names, like “child porn only no adv” and “child porn xvideos”, in these discovery apps.
Here’s perhaps the most chilling part of this story: When it broke last December, a WhatsApp spokesperson responded by declaring that WhatsApp banned 130,000 accounts in a recent 10-day period for violating its policies against child exploitation. So there were 130,000 accounts WhatsApp was suddenly able to find after these two NGOs pointed out that the platform had a child porn problem. That’s not an encryption problem. That’s some sort of basic moderation problem.
So we’ll see if addressing future child porn distribution problems on WhatsApp requires weakening its encryption. That seems likely once the moderation problems are fixed and people can no longer just openly advertise child porn WhatsApp groups. But in terms of today’s WhatsApp child porn problem, there’s no weakening of encryption required. The only thing required is moderation of public group names by one of the major stakeholders (WhatsApp/Facebook or Google), which was apparently too much to ask for:
“But it’s that over-reliance on technology and subsequent under-staffing that seems to have allowed the problem to fester. AntiToxin’s CEO Zohar Levkovitz tells me, “Can it be argued that Facebook has unwittingly growth-hacked pedophilia? Yes. As parents and tech executives we cannot remain complacent to that.””
Whoops. Growth-hacking pedophilia is definitely up there on Facebook’s list of crimes and that’s one helluva list. But there’s no denying that the highly predictable and avoidable explosion of child porn WhatsApp public groups was a disaster even by Facebook’s normally disastrous standards.
And the fact that Facebook owns WhatsApp apparently didn’t prompt WhatsApp to hire an adequate number of employees. It has just 300 despite the fact that over a billion people use the app:
And note how the names of WhatsApp groups made available on these WhatsApp group discovery apps were completely explicit in some cases, with names like “child porn only no adv” and “child porn xvideos”. So WhatsApp was apparently making basically no attempt to monitor for this after introducing the public group feature in late 2016. Nor were the app makers. And it was multiple apps: Google removed at least six apps from the Play store for carrying links to child porn WhatsApp groups. So Google also wasn’t paying attention. There was a whole bunch of no one watching for this obvious abuse of the technology. Given what a predictable PR disaster this is, it’s kind of amazing they weren’t watching for it more closely:
And note how uninterested Facebook apparently was when these NGOs approached the company with their findings: they contacted Facebook’s head of policy, Jordana Cutler, starting September 4th. They requested a meeting four times to discuss their findings. Cutler asked for email evidence but did not agree to a meeting, instead following Israeli law enforcement’s guidance and instructing the researchers to contact the authorities. And while recommending the researchers contact authorities is good advice, it seems like Facebook should have wanted to meet with them too. But nope. Maybe it’s the proper legal move, but it still seems disturbing:
And WhatsApp told reporters it was now investigating the groups visible in the research provided by the researchers. It’s important to keep in mind that these groups were already visible to WhatsApp, had it actually been watching for this stuff, which it clearly wasn’t. Which, again, is amazing. It’s not like it would have been that hard for WhatsApp to watch for this. But WhatsApp only started looking into it when the NGOs told them about it in December, and ended up banning 130,000 accounts in a 10-day period that month:
It’s also notable that the WhatsApp group discovery apps available through Apple’s app store did block child exploitation groups. Only the Google Play store apps showed them. It underscores the role Google could be playing but chooses not to:
Beyond that, as the following article notes, both Google and Facebook were allowing their respective ad networks to serve up ads on these WhatsApp group discovery apps found to be serving up child porn groups. Facebook blamed Google by pointing out that its ad network agreed to advertise on any apps on Google Play and therefore it was Google’s responsibility, which would be an ok-ish excuse if Facebook didn’t own WhatsApp.
Note that Mark Zuckerberg announced in January that Facebook would be merging the communications infrastructures of Facebook Messenger, WhatsApp, and Instagram, so it’s possible that Facebook’s much larger army of content moderators will be able to address WhatsApp’s seemingly complete lack of moderation of its public groups. But since we’re talking about Facebook, it’s still going to be horribly botched somehow.
And when WhatsApp hopefully cracks down on the open trafficking of public child porn groups on its platform someday, let’s not forget that this is just going to drive the trafficking of those links more underground. It’s just not going to be openly marketed with obvious group names. But it’s still going to exist. And that’s why there’s still ultimately going to be a need for someone to have a way to get around WhatsApp’s encryption in order to really prevent the platform from remaining a child porn haven. But, of course, that can’t happen without fundamentally undermining the WhatsApp model.
And that’s all why the story of NSO Group laughably pledging to ensure its super hacking products won’t be abused by clients like Saudi Arabia is part of the same larger story about the costs and benefits of strong encryption, a story that includes WhatsApp casually turning its platform into an open child porn distribution network.