Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

News & Supplemental  

The Spywarepocalypse Cometh. Lock the Backdoor.

With last week’s Snowden leak revealing that the NSA can break a large amount of the encryption used across the web, using a variety of backdoors and secret agreements with manufacturers, there’s now a push in Congress for legal restrictions on the use of these backdoors:

The New York Times
Legislation Seeks to Bar N.S.A. Tactic in Encryption

By SCOTT SHANE and NICOLE PERLROTH
Published: September 6, 2013

After disclosures about the National Security Agency’s stealth campaign to counter Internet privacy protections, a congressman has proposed legislation that would prohibit the agency from installing “back doors” into encryption, the electronic scrambling that protects e-mail, online transactions and other communications.

Representative Rush D. Holt, a New Jersey Democrat who is also a physicist, said Friday that he believed the N.S.A. was overreaching and could hurt American interests, including the reputations of American companies whose products the agency may have altered or influenced.

“We pay them to spy,” Mr. Holt said. “But if in the process they degrade the security of the encryption we all use, it’s a net national disservice.”

Mr. Holt, whose Surveillance State Repeal Act would eliminate much of the escalation in the government’s spying powers undertaken after the 2001 terrorist attacks, was responding to news reports about N.S.A. documents showing that the agency has spent billions of dollars over the last decade in an effort to defeat or bypass encryption. The reports, by The New York Times, ProPublica and The Guardian, were posted online on Thursday.

The agency has encouraged or coerced companies to install back doors in encryption software and hardware, worked to weaken international standards for encryption and employed custom-built supercomputers to break codes or find mathematical vulnerabilities to exploit, according to the documents, disclosed by Edward J. Snowden, the former N.S.A. contractor.

The documents show that N.S.A. cryptographers have made major progress in breaking the encryption in common use for everyday transactions on the Web, like Secure Sockets Layer, or SSL, as well as the virtual private networks, or VPNs, that many businesses use for confidential communications among employees.

Intelligence officials say that many of their most important targets, including terrorist groups, use the same Webmail and other Internet services that many Americans use, so it is crucial to be able to penetrate the encryption that protects them. In an intense competition with other sophisticated cyberespionage services, including those of China and Russia, the N.S.A. cannot rule large parts of the Internet off limits, the officials argue.

A statement from the director of national intelligence, James R. Clapper Jr., criticized the reports, saying that it was “not news” that the N.S.A. works to break encryption, and that the articles would damage American intelligence collection.

The reports, the statement said, “reveal specific and classified details about how we conduct this critical intelligence activity.”

“Anything that yesterday’s disclosures add to the ongoing public debate,” it continued, “is outweighed by the road map they give to our adversaries about the specific techniques we are using to try to intercept their communications in our attempts to keep America and our allies safe and to provide our leaders with the information they need to make difficult and critical national security decisions.”

But if intelligence officials felt a sense of betrayal by the disclosures, Internet security experts felt a similar letdown — at the N.S.A. actions.

“There’s widespread disappointment,” said Dan Kaminsky, a prominent security researcher. “This has been the stuff of wild-eyed accusations for years. A lot of people are heartbroken to find out it’s not just wild-eyed accusations.”

Sascha Meinrath, the director of the Open Technology Institute, a research group in Washington, said the reports were “a startling indication that the U.S. has been a remarkably irresponsible steward of the Internet,” which he said the N.S.A. was trying to turn into “a massive platform for detailed, intrusive and unrestrained surveillance.”

Companies like Google and Facebook have been moving to new systems that, in principle, would make government eavesdropping more difficult. Google is in the process of encrypting all data that travels via fiber-optic lines between its data centers. The company speeded up the process in June after the initial N.S.A. disclosures, according to two people who were briefed on Google’s plans but were not authorized to speak publicly about them. The acceleration of the process was first reported Friday by The Washington Post.

For services like Gmail, once data reaches a user’s computer it has been encrypted. But as messages and other data like search queries travel internally among Google’s data centers they are not encrypted, largely because it is technically complicated and expensive to do.

Facebook announced last month that it would also transition to a novel encryption method, called perfect forward secrecy, that makes eavesdropping far more difficult.
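Perfect forward secrecy amounts to negotiating a fresh, throwaway key for every session, so that a key compromised later unlocks nothing recorded earlier. A minimal sketch of the idea using ephemeral Diffie-Hellman follows; the parameters are toy values for illustration and say nothing about Facebook’s actual deployment:

```python
# Toy sketch of "perfect forward secrecy" via ephemeral Diffie-Hellman.
# Each session uses fresh random exponents, so compromising one session
# key reveals nothing about past sessions. Parameters are toy-sized;
# a real deployment uses 2048-bit+ groups or elliptic curves.
import secrets

P = 0xFFFFFFFB  # small prime modulus, illustration only
G = 5           # generator

def new_session():
    """Each side draws a fresh ephemeral secret for every session."""
    a = secrets.randbelow(P - 2) + 1          # Alice's ephemeral secret
    b = secrets.randbelow(P - 2) + 1          # Bob's ephemeral secret
    A, B = pow(G, a, P), pow(G, b, P)         # public values exchanged
    return pow(B, a, P), pow(A, b, P)         # shared key, computed twice

k1_alice, k1_bob = new_session()
k2_alice, k2_bob = new_session()
assert k1_alice == k1_bob and k2_alice == k2_bob  # both sides agree
# Fresh exponents mean the two session keys are (almost surely) independent.
```

Because the exponents are discarded after each call, an eavesdropper who later steals a long-term key still has nothing that decrypts old recorded traffic; that is the property the article is describing.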

But the perception of an N.S.A. intrusion into the networks of major Internet companies, whether surreptitious or with the companies’ cooperation, could hurt business, especially in international markets.

“What buyer is going to purchase a product that has been deliberately made less secure?” asked Mr. Holt, the congressman. “Even if N.S.A. does it with the purest motive, it can ruin the reputations of billion-dollar companies.”

In addition, news that the N.S.A. is inserting vulnerabilities into widely used technologies could put American lawmakers and technology companies in a bind with regard to China.

Over the last two years, American lawmakers have accused two of China’s largest telecommunications companies, Huawei Technologies and ZTE, of doing something parallel to what the N.S.A. has done: planting back doors into their equipment to allow for eavesdropping by the Chinese government and military.

Both companies have denied collaborating with the Chinese government, but the allegations have eliminated the companies’ hopes for significant business growth in the United States. After an investigation last year, the House Intelligence Committee concluded that government agencies should be barred from doing business with Huawei and ZTE, and that American companies should avoid buying their equipment.

Some foreign governments and companies have also said that they would not rely on the Chinese companies’ equipment out of security concerns. Last year, Australia barred Huawei from bidding on contracts in Australia’s $38 billion national broadband network. And this year, as part of its effort to acquire Sprint Nextel, SoftBank of Japan pledged that it would not use Huawei equipment in Sprint’s cellphone network.

Part of what makes a backdoor-decryption ban so intriguing is the nature of the encryption techniques in use today: without a backdoor or some other algorithmic “cheat,” it is theoretically extraordinarily hard for even an intelligence agency with the NSA’s capabilities to break the encryption. It’s one of those realities of the digital age that German security officials reminded us of in 2007, when law enforcement officials requested court permission to install spyware on users’ computers to get around Skype’s encryption:

TechDirt
German Proposal Gives A New Perspective On ‘Spyware’
from the big-brother-is-hacking-you dept

by Timothy Lee

Tue, Nov 27th 2007 5:10pm

A VoIP expert has unveiled new proof-of-concept software that allows an attacker to monitor other peoples’ VoIP calls and record them for later review. Unencrypted VoIP really isn’t very secure; if you have access to the raw network traffic of a call, it’s not too hard to reconstruct the audio. Encrypted traffic is another story. German officials have discovered that when suspects use Skype’s encryption feature, they aren’t able to decode calls even if they have a court order authorizing them to do so. Some law enforcement officials in Germany apparently want to deal with this problem by having courts give them permission to surreptitiously install spying software on the target’s computer.

To his credit, Joerg Ziercke, president of Germany’s Federal Police Office, says that he’s not asking Skype to put back doors in its software. But the proposal still raises some serious questions. Once the installation of spyware becomes a standard surveillance method, law enforcement will have a vested interest in making sure that operating systems and VoIP applications have vulnerabilities they can exploit. There will inevitably be pressure on Microsoft, Skype, and other software vendors to provide the police with backdoors. And backdoors are problematic because they can be extremely difficult to limit to authorized individuals. It would be a disaster if the backdoor to a popular program like Skype were discovered by unauthorized individuals.

A similar issue applies to anti-virus software. If anti-virus products detect and notify users when court-ordered spyware is found on a machine, it could obviously disrupt investigations and tip off suspects. On the other hand, if antivirus software ignores “official” spyware, then spyware vendors will start trying to camouflage their software as government-installed software to avoid detection. Ultimately, there may be no way for anti-spyware products to turn a blind eye to government-approved spyware without undermining the effectiveness of their products.

Hence, I’m skeptical of the idea of government-mandated spyware, although I don’t think it should be ruled out entirely. That may sound like grim news for law enforcement, which does have a legitimate need to eavesdrop on crime suspects. But it’s important to keep in mind that law enforcement officials do have other tools at their disposal. If they’re not able to install software surveillance tools, it’s always possible to do it the old-fashioned way–in hardware. Law enforcement agencies can always sneak into a suspect’s home (with a court order, of course) and install bugging devices. That tried and true method works regardless of the communications technology being used.

The battle over backdoors is an ongoing issue that isn’t going away any time soon. And as the above article indicated, one reason law-enforcement backdoors in hardware and software are guaranteed to remain contentious is that encryption done right can’t be cracked, at least not in a reasonable time frame. It’s a reflection of the asymmetric nature of the mathematics behind encryption: it’s a lot easier to hide a needle in a haystack than to find it. At least in theory:
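The needle-and-haystack asymmetry is easy to demonstrate: multiplying two primes takes one operation, while recovering them from the product means searching. A toy sketch (with real key sizes, the multiply side stays instant and the search side becomes infeasible):

```python
# The asymmetry in miniature: multiplying two primes is instant, but
# recovering them from the product requires brute-force search. Scale
# the primes up to hundreds of digits and the search side explodes
# while the multiply side barely slows down.
def factor(n):
    """Naive trial division -- work grows rapidly with the factor sizes."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

p, q = 10_007, 10_009          # two small primes (toy sizes)
n = p * q                      # "hiding the needle": one multiplication
print(factor(n))               # "finding it": ~10,000 trial divisions
```

Real attacks use far better algorithms than trial division, but all known classical methods still take super-polynomial time, which is the whole bet that RSA-style cryptography rests on.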

Ars Technica
Crypto experts issue a call to arms to avert the cryptopocalypse
Nobody can crack important algorithms yet, but the world needs to prepare for that to happen.

by Peter Bright – Aug 1 2013, 10:49pm CST

At the Black Hat security conference in Las Vegas, a quartet of researchers, Alex Stamos, Tom Ritter, Thomas Ptacek, and Javed Samuel, implored everyone involved in cryptography, from software developers to certificate authorities to companies buying SSL certificates, to switch to newer algorithms and protocols, lest they wake up one day to find that all of their crypto infrastructure is rendered useless and insecure by mathematical advances.

We’ve written before about asymmetric encryption and its importance to secure communication. Asymmetric encryption algorithms have pairs of keys: one key can decrypt data encrypted with the other key, but cannot decrypt data encrypted with itself.
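That key-pair property can be made concrete with a toy RSA example. This is purely illustrative: the primes are tiny, whereas real keys use primes hundreds of digits long, and the numbers are mine, not anything from the article:

```python
# Toy RSA key pair illustrating the property described above: what one
# key encrypts, only the other key decrypts. Numbers are tiny for
# clarity; real keys use primes hundreds of digits long.
p, q = 61, 53
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent (coprime to phi)
d = pow(e, -1, phi)            # private exponent: e*d = 1 (mod phi)

msg = 42
cipher = pow(msg, e, n)        # encrypt with the public key
plain = pow(cipher, d, n)      # decrypt with the private key
print(cipher, plain)           # cipher looks random; plain is 42 again
```

Anyone may hold (e, n) and encrypt; only the holder of d can reverse the operation, because computing d from (e, n) requires knowing the factorization of n.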

The asymmetric algorithms are built on an underlying assumption that certain mathematical operations are “hard,” which is to say, that the time it takes to do the operation increases proportional to some number raised to the power of the length of the key (“exponential time”). This assumption, however, is not actually proven, and nobody knows for certain if it is true. The risk exists that the problems are actually “easy,” where “easy” means that there are algorithms that will run in a time proportional only to the key length raised to some constant power (“polynomial time”).

The most widely used asymmetric algorithms (Diffie Hellman, RSA, and DSA) depend on the difficulty of two problems: integer factorization, and the discrete logarithm. The current state of the mathematical art is that there aren’t—yet—any easy, polynomial time solutions to these problems; however, after decades of relatively little progress in improving algorithms related to these problems, a flurry of activity in the past six months has produced faster algorithms for limited versions of the discrete logarithm problem.
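A brute-force sketch of that discrete logarithm problem shows why the backward direction is the hard one. The prime here is toy-sized; real deployments use moduli of 2048 bits or more, where this loop would never finish:

```python
# The discrete logarithm problem: given g, p and y = g^x mod p, recover
# an exponent x. The forward direction (square-and-multiply, Python's
# built-in pow) is fast; going backwards generically means searching,
# and the search space doubles with every extra bit of the exponent.
def discrete_log(g, y, p):
    """Brute force: walk through powers of g until one matches y."""
    acc = 1
    for x in range(p):
        if acc == y:
            return x
        acc = (acc * g) % p
    return None

p, g = 1_000_003, 2            # toy prime; real systems use 2048+ bits
secret = 271_828
y = pow(g, secret, p)          # forward: effectively instant
x = discrete_log(g, y, p)      # backward: up to p multiplications
assert pow(g, x, p) == y       # x solves the equation (it may be a
                               # smaller equivalent of the original secret)
```

The recent research advances the article mentions shave the exponent down for special cases of this problem; the worry is that someone finds a shortcut that works in general.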

At the moment, there’s no known way to generalize these improvements to make them useful to attack real cryptography, but the work is enough to make cryptographers nervous. They draw an analogy with the BEAST, CRIME, and BREACH attacks used to attack SSL. The theoretical underpinnings for these attacks are many years old, but for a long time were dismissed as merely theoretical and impossible to use in practice. It took new researchers and new thinking to turn them into practical attacks.

When that happened, it uncovered a software industry ill-prepared to cope. A lot of software, rather than allowing new algorithms and protocols to be easily plugged in, has proven difficult or impossible to change. This means that switching to schemes that are immune to the BEAST, CRIME, and BREACH attacks is much more difficult than it should be. Though there are newer protocols and different algorithms that avoid the problems that these attacks exploit, compatibility concerns mean that they can’t be rapidly rolled out and used.

The attacks against SSL are at least fairly narrow in scope and utility. A general purpose polynomial time algorithm for integer factorization or the discrete logarithm, however, would not be narrow in scope or utility: it would be readily adapted to blow wide open almost all SSL/TLS, ssh, PGP, and other encrypted communication. (The two mathematical problems, while distinct, share many similarities, so it’s likely that an algorithm that solved integer factorization could be adapted in some way to solve the discrete logarithm, and vice versa).

Worse, it would make updating these systems in a trustworthy manner nearly impossible: operating systems such as Windows and OS X depend on digital signatures that in turn depend on these same mathematical underpinnings to protect against the installation of fraudulent or malicious updates. If the algorithms were undermined, there would be no way of verifying the authenticity of the updates.

While there’s no guarantee that this catastrophe will occur—it’s even possible that one day it might be proven that the two problems really are hard—the risk is enough to have researchers concerned. The difficulties of change that BEAST et al. demonstrated mean that if the industry is to have a hope of surviving such a revolution in cryptography, it must start making changes now. If it waits for a genius mathematician somewhere to solve these problems, it will be too late to do anything about it.

Fortunately, a solution of sorts does exist. A family of encryption algorithms called elliptic curve cryptography (ECC) exists. ECC is similar to the other asymmetric algorithms, in that it’s based on a problem that’s assumed to be hard (in this case, the elliptic curve discrete logarithm). ECC, however, has the additional property that its hard problem is sufficiently different from integer factorization and the regular discrete logarithm that breakthroughs in either of those shouldn’t imply breakthroughs in cracking ECC.
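For a sense of what that elliptic-curve arithmetic looks like, here is a toy implementation of point addition and scalar multiplication over a tiny prime field. The curve and base point are made-up illustration values, not any standardized curve:

```python
# A sketch of the arithmetic ECC is built on: points (x, y) satisfying
# y^2 = x^3 + ax + b (mod p) form a group under a geometric "addition",
# and the hard problem is recovering k given P and k*P. Toy parameters.
P_MOD, A, B = 97, 2, 3         # curve y^2 = x^3 + 2x + 3 over GF(97)

def ec_add(p1, p2):
    """Add two points on the curve (None is the point at infinity)."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                        # vertical line: infinity
    if p1 == p2:                           # tangent-line slope (doubling)
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                                  # chord slope
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return x3, (s * (x1 - x3) - y1) % P_MOD

def ec_mul(k, point):
    """Scalar multiplication k*P by double-and-add -- the easy direction."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

G = (3, 6)                     # on the curve: 6^2 = 36 = 27 + 6 + 3 (mod 97)
Q = ec_mul(20, G)              # easy to compute; hard to invert back to 20
print(Q)
```

The "elliptic curve discrete logarithm" is exactly the inversion of `ec_mul`: given G and Q, find k. No known technique transfers the recent integer/discrete-log improvements to this setting, which is why the researchers recommend ECC.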

However, support for ECC is still very problematic. Much of the technology is patented by BlackBerry, and those patents are enforced. There are certain narrow licenses available for implementations of ECC that meet various US government criteria, but the broader patent issues have led some vendors to refuse to support the technology.

Further, support of protocols that can use ECC, such as TLS 1.2 (the latest iteration of SSL technology) is still not widely available. Certificate authorities have also been slow to offer ECC certificates.

As such, the researchers are calling for the computer industry as a whole to do two things. First, embrace ECC today. Second, ensure that systems that use cryptography are agile. They must not be lumbered with limited sets of algorithms and obsolete protocols. They must instead make updating algorithms and protocols quick and easy, to ensure that software systems can keep pace with the mathematical research and adapt quickly to new developments and techniques. The cryptopocalypse might never happen—but we should be prepared in case it does.

Note that the above article was published August 1st, a month before the latest Snowden leak about the NSA’s advances, which include not just backdoors but also improved decryption algorithms. So the algorithmic risks the article describes (we don’t know how “hard” the underlying mathematical problems truly are) might relate to those recent NSA advances. This could even include turning theoretically “hard” (non-polynomial-time) mathematical problems into somewhat easier problems that can be cracked without the NSA’s backdoors, or anyone else’s.

In other words, while the concerns about the NSA or some allied intelligence agency abusing encryption backdoors are valid, there is also the very real possibility that other third parties (rival intelligence agencies, organized crime, private actors, etc.) are already using algorithmic attacks for which no backdoors are required. The algorithm is simply defeated. So even if those NSA backdoors (or anyone else’s) didn’t exist, the underlying mathematical algorithms that currently encrypt the bulk of internet communications may already have been effectively broken. And if code-breakers have found a method of recovering the correct keys within a predictable timeframe, it may just be a matter of time before that method gets out into “the wild,” at which point anyone with sufficient computing resources will be able to decrypt conventionally encrypted data. No backdoors or secret manufacturer agreements needed; just a powerful enough computer and knowledge of the flaws in the encryption algorithm. That’s the ‘cryptopocalypse’.

But there’s another interesting possibility that could emerge in the medium term: it’s now known that the NSA uses custom-built chips to break encryption, and it’s believed that these chips can decrypt any Tor traffic that doesn’t use the most advanced “elliptic curve cryptography” described above. Tor is supposed to be anonymous.

So we should probably expect to see a broad shift towards these newer kinds of encryption methods. And if that shift towards using these newer methods takes place without those NSA backdoors we could start seeing truly secure encryption methods employed – methods that no spy agency, anywhere, will be able to decrypt. At least not unless there’s some super secret powerful computing technology hiding somewhere. If that encrypted future is what’s in store for us we should probably expect a dramatic expansion of traditional spying: human intelligence will simply become much more important because there won’t be other options. Traditional hacking will also become paramount. When a backdoor closes, a job opportunity for a hacker opens.

But also note that the FinFisher tool is reportedly able to hack your BlackBerry, which uses “elliptic curve cryptography”. Same with the NSA and GCHQ. So whatever secure encryption method the world eventually settles upon will have to be more secure than the currently recommended methods. Give it time.

Beware Software Updates Bearing Gifts
If we do eventually see an encrypted future – one where direct hacking with the benefit of pervasive backdoors or algorithmic trickery is no longer an option – we should expect an explosion of Trojan spyware and custom hacks. Even with the pervasive backdoors and algorithmic trickery we should still expect an explosion of spyware, because that’s what’s already happening. So while the NSA hardware and software backdoor network is the spy scandal of the moment, perhaps the UK/German Bundestrojaner/FinFisher/FinSpy spyware scandals should be considered likelier spy-scandal templates for tomorrow:

Slate
U.S. and Other Western Nations Met With Germany Over Shady Computer-Surveillance Tactics

By Ryan Gallagher

Posted Tuesday, April 3, 2012, at 11:51 AM

Infecting a computer with spyware in order to secretly siphon data is a tactic most commonly associated with criminals. But explosive new revelations in Germany suggest international law enforcement agencies are adopting similar methods as a form of intrusive suspect surveillance, raising fresh civil liberties concerns.

Information released last month by the German government shows that between 2008-2011, representatives from the FBI; the U.K.’s Serious Organised Crime Agency (SOCA); and France’s secret service, the DCRI, were among those to have held meetings with German federal police about deploying “monitoring software” used to covertly infiltrate computers.

The disclosure was made in response to a series of questions tabled by Left Party Member of Parliament Andrej Hunko and reported by German-language media. It comes on the heels of an exposé by the Chaos Computer Club, a Berlin-based hacker collective, which revealed in October that German police forces had been using a so-called “Bundestrojaner” (federal Trojan) to spy on suspects.

The Bundestrojaner technology could be sent disguised as a legitimate software update and was capable of recording Skype calls, monitoring Internet use, and logging messenger chats and keystrokes. It could also activate computer hardware such as microphones or webcams and secretly take snapshots or record audio before sending it back to the authorities.

German federal authorities initially denied deploying any Bundestrojaner, but it soon transpired that courts had in fact approved requests from officials to employ such Trojan horse programs more than 50 times. Following a public outcry over the use of the technology, which many believe breached the country’s strict privacy laws, further details have surfaced.

Inquiries by Green Party MP Konstantin von Notz revealed in January that, in addition to the Bundestrojaner discovered by the CCC, German authorities had also acquired a license in early 2011 to test a similar Trojan technology called “FinSpy,” manufactured by England-based firm Gamma Group. FinSpy enables clandestine access to a targeted computer, and was reportedly used for five months by Hosni Mubarak’s Egyptian state security forces in 2010 to monitor personal Skype accounts and record voice and video conversations over the Internet.

But it is the German government’s response to a series of questions recently submitted by Hunko that is perhaps the most revealing to date. In a letter from Secretary of State Ole Schröder on March 6, which I have translated, Hunko was informed that German federal police force, the Bundeskriminalamt (BKA), met to discuss the use of monitoring software with counterparts from the U.S., Britain, Israel, Luxemburg, Liechtenstein, the Netherlands, Belgium, France, Switzerland, and Austria. The meetings took place separately between Feb. 19, 2008, and Feb. 1, 2012. While this story has been covered in the German media, it hasn’t received the English-language attention it deserves.

Both the FBI and Britain’s SOCA are said to have discussed with the Germans the “basic legal requirements” of using computer-monitoring software. The meeting with SOCA also covered the “technical and tactical aspects” of deploying computer infiltration technology, according to Schröder’s letter. France’s secret service and police from Switzerland, Austria, Luxemburg, and Liechtenstein were separately briefed by the BKA on its experiences using Trojan computer infiltration.

Interestingly, at a meeting in October 2010 attended by police from Germany, the Netherlands, and Belgium, representatives from the Gamma Group were present and apparently showcased their shadowy products. It is possible that the Germans decided at this meeting to proceed with the FinSpy trial we now know took place in early 2011.

If nothing else, these revelations confirm that police internationally are increasingly looking to deploy ethically contentious computer intrusion techniques that exist in a legal gray area. The combination of the rapid development of Internet technologies and persistent fears about national security seem to have led to a paradigm shift in police tactics—one that appears, worryingly, to be taking place almost entirely behind closed doors and under cover of state secrecy.

Your Passwords Can Be Stolen. So Can Your Spyware
The world continues to freak out about the NSA and UK possessing the centralized mass-surveillance capabilities that come from the power to collect and decrypt massive volumes of internet traffic. Such a freak-out is understandable because, hey, centralized mass internet traffic surveillance is kind of creepy. It’s also understandable that the global debate would be almost exclusively focused on spying by the NSA, because that’s been the focus of the Snowden leaks. But it might be worth incorporating into the ongoing global debate about the balance of privacy, security, and government accountability the fact that extremely powerful spyware is being peddled by major governments and is currently used by governments all over the globe. It might also be used by unknown parties all over the globe, because spyware can be stolen:

Bloomberg
FinFisher Spyware Reach Found on Five Continents: Report
By Vernon Silver – Aug 8, 2012 6:34 AM CT

The FinFisher spyware made by U.K.- based Gamma Group likely has previously undisclosed global reach, with computers on at least five continents showing signs of being command centers that run the intrusion tool, according to cybersecurity experts.

FinFisher can secretly monitor computers — intercepting Skype calls, turning on Web cameras and recording every keystroke. It is marketed by Gamma for law enforcement and government use.

Research published last month based on e-mails obtained by Bloomberg News showed activists from the Persian Gulf kingdom of Bahrain were targeted by what looked like the software, sparking a hunt for further clues to the product’s deployment.

In new findings, a team, led by Claudio Guarnieri of Boston-based security risk-assessment company Rapid7, analyzed how the presumed FinFisher samples from Bahrain communicated with their command computer. They then compared those attributes with a global scan of computers on the Internet.

The survey has so far come up with what it reports as matches in Australia, the Czech Republic, Dubai, Ethiopia, Estonia, Indonesia, Latvia, Mongolia, Qatar and the U.S.

Guarnieri, a security researcher based in Amsterdam, said that the locations aren’t proof that the governments of any of these countries use Gamma’s FinFisher. It’s possible that Gamma clients use computers based in other nations to run their FinFisher systems, he said in an interview.

‘Active Fingerprinting’

“They are simply the results of an active fingerprinting of a unique behavior associated with what is believed to be the FinFisher infrastructure,” he wrote in his report, which Rapid7 is publishing today on its blog at https://community.rapid7.com/community/infosec/blog.

The emerging picture of the commercially available spyware’s reach shines a light on the growing, global marketplace for cyber weapons with potential consequences.

“Once any malware is used in the wild, it’s typically only a matter of time before it gets used for nefarious purposes,” Guarnieri wrote in his report. “It’s impossible to keep this kind of thing under control in the long term.”

In response to questions about Guarnieri’s findings, Gamma International GmbH managing director Martin J. Muench said a global scan by third parties would not reveal servers running the FinFisher product in question, which is called FinSpy.

“The core FinSpy servers are protected with firewalls,” he said in an Aug. 4 e-mail.

Gamma International

Muench, who is based in Munich, has said his company didn’t sell FinFisher spyware to Bahrain. He said he’s investigating whether the samples used against Bahraini activists were stolen demonstration copies or were sold via a third party.

Gamma International GmbH in Germany is part of U.K.-based Gamma Group. The group also markets FinFisher through Andover, England-based Gamma International UK Ltd. Muench leads the FinFisher product portfolio.

Muench says that Gamma complies with the export regulations of the U.K., U.S. and Germany.

It was unclear which, if any, government agencies in the countries Guarnieri identified are Gamma clients.

A U.S. Federal Bureau of Investigation spokeswoman in Washington declined to comment.

Officials in Ethiopia’s communications ministry, Qatar’s foreign ministry and Mongolia’s president’s office didn’t immediately return phone calls seeking comment or respond to questions. Dubai’s deputy commander of police said he has no knowledge of such programs when reached on his mobile phone.

Australia’s department of foreign affairs and trade said in an e-mailed statement it does not use FinFisher software. A spokesman at the Czech Republic’s interior ministry said he has no information of Gamma being used there, nor any knowledge of its use at other state institutions.

Violating Human Rights?

At Indonesia’s Ministry of Communications, head of public relations Gatot S. Dewa Broto said that to his knowledge the government doesn’t use that program, or ones that do similar things, because it would violate privacy and human rights in that country. The ministry got an offer to purchase a similar program about six months ago but declined, he said, unable to recall the name of the company pitching it.

The Estonian Information Systems Authority RIA has not detected any exposure to FinSpy, a spokeswoman said. Neither has Latvia’s information technologies security incident response institution, according to a technical expert there.

If the above description of the emerging global spyware-surveillance state sounds a little unsettling, keep in mind that FinFisher/FinSpy is just one toolkit. There could be all sorts of other spyware “products” out there.

Also don’t forget that the world is still learning about the FinFisher/FinSpy spyware’s capabilities: for instance, it appears that a “FinIntrusion” tool made by the same company can be used to collect WiFi signals. The FinIntrusion suite includes decryption capabilities, so all that WiFi traffic can be read. It’s a reminder that, whether or not the centralized mass-surveillance state is on the wane, the global decentralized spyware party is still going strong:

ITNews.com
Further details of FinFisher govt spyware leaked
By Juha Saarinen on Sep 2, 2013 6:04 AM
Filed under Security

Claims it can break encryption.

Sales brochures and presentations leaked online have shed further light on the FinFisher malware and spyware toolkit that is thought to be used by law enforcement agencies worldwide.

FinFisher is made by the Anglo-German Gamma International and is marketed to law enforcement agencies around the world. It is also known as FinSpy, and the sales presentation traces its origins to BackTrack Linux, an open source penetration testing Linux distribution.

The spyware can record screenshots and Skype chats, operate built-in webcams and microphones on computers, and capture a large range of user data.

Last year, an internet scan by a security company turned up FinFisher control nodes in eleven countries, including Australia. The malware has been analysed [pdf] by the Citizen Lab project, in which the University of Toronto’s Munk School of Global Affairs and the Canada Centre for Global Studies participate.

In July this year, the Australian Federal Police turned down a Freedom of Information Act request from the director of the OpenAustralia Foundation, Henare Degan, about the use of FinFisher by the country’s top law enforcement agency.

The spyware runs on all versions of Windows newer than Windows 2000, and can infect computers via USB drives, drive-by web browser exploits, or with the help of local internet providers that inject the malware when users visit trusted sites such as Google Gmail or YouTube.

The FinSpy Mobile version works on BlackBerry, Apple iOS, Google Android and Microsoft’s Windows Mobile and Windows Phone operating systems, the documents claim. On these, it can record incoming and outgoing calls, track location with cellular ID and GPS data, and conduct surveillance by making silent calls, among other capabilities.

According to the documents found by security firm F-Secure, the FinIntrusion portable hacking kit can break encryption, record all traffic, and steal users’ online banking and social media credentials.

Really protecting data privacy involves a lot more than just securing internet traffic or sealing the NSA’s and GCHQ’s custom backdoors. Those backdoors were an intelligence convenience that has now been thwarted, but the spying will continue. If effectively unbreakable encryption is ever truly implemented, espionage will merely shift to spying on data after it’s been decrypted by the intended recipient. And if the entire history of spying scandals has taught us anything, it’s that governments are going to be tempted to spread spyware around like a rabid zombie. Barring a truly populist global revolution that somehow leads to a golden age of shared prosperity and minimal suffering, governments around the world will be spying on other countries’ citizens all over the globe for a whole lot of valid and invalid reasons. Governments can be kind of crazy, and so can people. So the spying will continue. And don’t forget that as spyware spreads, it will become harder and harder to tell state-sponsored spyware apart from its private and criminal counterparts, and all that private spying will be used to justify more public spying to stop the private spying. Achieving digital privacy isn’t just a matter of slaying the NSA-mass-wiretapping dragon and sealing those backdoors. The public/private global spyware chimera also roams the forest, and it can make backdoors too.

Discussion

16 comments for “The Spywarepocalypse Cometh. Lock the Backdoor.”

  1. http://hosted.ap.org/dynamic/stories/U/US_BORDER_COMPUTER_SEARCHES?SITE=AP&SECTION=HOME&TEMPLATE=DEFAULT&CTIME=2013-09-10-05-30-23

    Sep 10, 9:14 AM EDT

    New details in how the feds take laptops at border

    By ANNE FLAHERTY
    Associated Press

    WASHINGTON (AP) — Newly disclosed U.S. government files provide an inside look at the Homeland Security Department’s practice of seizing and searching electronic devices at the border without showing reasonable suspicion of a crime or getting a judge’s approval.

    The documents published Monday describe the case of David House, a young computer programmer in Boston who had befriended Army Pvt. Chelsea Manning, the soldier convicted of giving classified documents to WikiLeaks. U.S. agents quietly waited for months for House to leave the country then seized his laptop, thumb drive, digital camera and cellphone when he re-entered the United States. They held his laptop for weeks before returning it, acknowledging one year later that House had committed no crime and promising to destroy copies the government made of House’s personal data.

    The government turned over the federal records to House as part of a legal settlement agreement after a two-year court battle with the American Civil Liberties Union, which had sued the government on House’s behalf. The ACLU said the records suggest that federal investigators are using border crossings to investigate U.S. citizens in ways that would otherwise violate the Fourth Amendment.

    The Homeland Security Department declined to discuss the case, saying it was still being litigated. But Customs and Border Protection spokesman Michael Friel said border checks are focused on identifying national security or public safety risks.

    “Any allegations about the use of the CBP screening process at ports of entry for other purposes by DHS are false,” Friel said. “These checks are essential to enforcing the law, and protecting national security and public safety, always with the shared goals of protecting the American people while respecting civil rights and civil liberties.”

    House said he was 22 when he first met Manning, who now is serving a 35-year sentence for one of the biggest intelligence leaks in U.S. history. It was a brief, uneventful encounter at a January 2010 computer science event. But when Manning was arrested later that June, that nearly forgotten handshake came to mind. House, another tech enthusiast, considered Manning a bright, young, tech-savvy person who was trying to stand up to the U.S. government and expose what he believed were wrongheaded politics.

    House volunteered with friends to set up an advocacy group they called the Bradley Manning Support Network, and he went to prison to visit Manning, formerly known as Bradley Manning.

    It was that summer that House quietly landed on a government watchlist used by immigrations and customs agents at the border. His file noted that the government was on the lookout for a second batch of classified documents Manning had reportedly shared with the group WikiLeaks but hadn’t made public yet. Border agents were told that House was “wanted for questioning” regarding the “leak of classified material.” They were given explicit instructions: If House attempted to cross the U.S. border, “secure digital media,” and “ID all companions.”

    But if House had been wanted for questioning, why hadn’t federal agents gone back to his home in Boston? House said the Army, State Department and FBI had already interviewed him.

    Instead, investigators monitored passenger flight records and waited for House to leave the country that November for a Mexico vacation with his girlfriend. When he returned, two agents were waiting for him, including one who specialized in computer forensics. They seized House’s laptop and detained his computer for seven weeks, giving the government enough time to try to copy every file and keystroke House had made since declaring himself a Manning supporter.

    President Barack Obama and his predecessors have maintained that people crossing into U.S. territory aren’t protected by the Fourth Amendment. That policy is intended to allow for intrusive searches that keep drugs, child pornography and other illegal imports out of the country. But it also means the government can target travelers for no reason other than political advocacy if it wants, and obtain electronic documents identifying fellow supporters.

    House and the ACLU are hoping his case will draw attention to the issue, and show how searching a suitcase is different than searching a computer.

    “It was pretty clear to me I was being targeted for my visits to Manning (in prison) and my support for him,” said House, in an interview last week.

    How Americans end up getting their laptops searched at the border still isn’t entirely clear.

    The Homeland Security Department said it should be able to act on a hunch if someone seems suspicious. But agents also rely on a massive government-wide system called TECS, named after its predecessor the Treasury Enforcement Communications System.

    Federal agencies, including the FBI and IRS, as well as Interpol, can feed TECS with information and flag travelers’ files.

    In one case that reached a federal appeals court, Howard Cotterman wound up in the TECS system because of a 1992 child sex conviction. That “hit” encouraged border patrol agents to detain his computer, which was found to contain child pornography. Cotterman’s case ended up before the 9th Circuit Court of Appeals, which ruled this spring that the government should have reasonable suspicion before conducting a comprehensive search of an electronic device; but that ruling only applies to states that fall under that court’s jurisdiction, and it left questions about what constitutes a comprehensive search.

    In the case of House, he showed up in TECS in July 2010, about the same time he was helping to establish the Bradley Manning Support Network. His TECS file, released as part of his settlement agreement, was the document that told border agents House was wanted for questioning regarding the leak of classified material.

    It wasn’t until late October, though, that investigators noticed House’s passport number in an airline reservation system for travel to Los Cabos. When he returned to Chicago O’Hare airport, the agents waiting for him took House’s laptop, thumb drive, digital camera and cellphone. He was questioned about his affiliation with Manning and his visits to Manning in prison. The agents eventually let him go and returned his cell phone. But the other items were detained and taken to an ICE field office in Manhattan.

    Seven weeks after the incident, House faxed a letter to immigration authorities asking that the devices be returned. They were sent to him the next day, via Federal Express.

    By then agents had already created an “image” of his laptop, according to the documents. Because House had refused to give the agents his password and apparently had configured his computer in such a way that appeared to stump computer forensics experts, it wasn’t until June 2011 that investigators were satisfied that House’s computer didn’t contain anything illegal. By then, they had already sent a second image of his hard drive to Army criminal investigators familiar with the Manning case. In August 2011, the Army agreed that House’s laptop was clean and promised to destroy any files from House’s computer.

    Catherine Crump, an ACLU lawyer who represented House, said she doesn’t understand why Congress or the White House are leaving the debate up to the courts.

    “Ultimately, the Supreme Court will need to address this question because unfortunately neither of the other two branches of government appear motivated to do so,” said Crump.

    House, an Alabama native, said he didn’t ask for any money as part of his settlement agreement and said his primary concern was ensuring that a document containing the names of Manning Support Network donors didn’t wind up in a permanent government file. The court order required the destruction of all his files, which House said satisfied him.

    He is writing a book about his experiences and his hope to create a youth-based political organization. House said he severed ties with the Support Network last year after becoming disillusioned with Manning and WikiLeaks, which he said appeared more focused on destroying America and ruining lives than challenging policy.

    “That era was a strange time,” House said. “I’m hoping we can get our country to go in a better direction.”

    Posted by Vanfield | September 10, 2013, 8:58 am
  2. SAIC is Oakland’s choice to “serve and protect” its citizens.

    Oakland is quite the laboratory for many social science experiments:

    – Black Panthers
    – SLA – Patty Hearst
    – Car bombing of activist Judi Bari
    – Gangs
    – “Oaksterdam”

    Now SAIC is the vendor of choice!

    The comments for this NYTimes article are reflecting an awareness and unease with these unconstitutional encroachments, but where is it all leading to?

    Direct conflict while these systems are deployed, or prison riots in concentration camps after every “unproductive,” jobless, homeless, poor person is contained?

    Literal “panopticons” deployed in our lives and no legislator willing to uphold constitutional protections for citizens’ privacy?

    October 13, 2013
    Privacy Fears Grow as Cities Increase Surveillance

    http://www.nytimes.com/2013/10/14/technology/privacy-fears-as-surveillance-grows-in-cities.html?_r=0&pagewanted=all&pagewanted=print

    By SOMINI SENGUPTA

    OAKLAND, Calif. — Federal grants of $7 million awarded to this city were meant largely to help thwart terror attacks at its bustling port. But instead, the money is going to a police initiative that will collect and analyze reams of surveillance data from around town — from gunshot-detection sensors in the barrios of East Oakland to license plate readers mounted on police cars patrolling the city’s upscale hills.

    The new system, scheduled to begin next summer, is the latest example of how cities are compiling and processing large amounts of information, known as big data, for routine law enforcement. And the system underscores how technology has enabled the tracking of people in many aspects of life.

    The police can monitor a fire hose of social media posts to look for evidence of criminal activities; transportation agencies can track commuters’ toll payments when drivers use an electronic pass; and the National Security Agency, as news reports this summer revealed, scooped up telephone records of millions of cellphone customers in the United States.

    Like the Oakland effort, other pushes to use new surveillance tools in law enforcement are supported with federal dollars. The New York Police Department, aided by federal financing, has a big data system that links 3,000 surveillance cameras with license plate readers, radiation sensors, criminal databases and terror suspect lists. Police in Massachusetts have used federal money to buy automated license plate scanners. And police in Texas have bought a drone with homeland security money, something that Alameda County, which Oakland is part of, also tried but shelved after public protest.

    Proponents of the Oakland initiative, formally known as the Domain Awareness Center, say it will help the police reduce the city’s notoriously high crime rates. But critics say the program, which will create a central repository of surveillance information, will also gather data about the everyday movements and habits of law-abiding residents, raising legal and ethical questions about tracking people so closely.

    Libby Schaaf, an Oakland City Council member, said that because of the city’s high crime rate, “it’s our responsibility to take advantage of new tools that become available.” She added, though, that the center would be able to “paint a pretty detailed picture of someone’s personal life, someone who may be innocent.”

    For example, if two men were caught on camera at the port stealing goods and driving off in a black Honda sedan, Oakland authorities could look up where in the city the car had been in the last several weeks. That could include stoplights it drove past each morning and whether it regularly went to see Oakland A’s baseball games.

    For law enforcement, data mining is a big step toward more complete intelligence gathering. The police have traditionally made arrests based on small bits of data — witness testimony, logs of license plate readers, footage from a surveillance camera perched above a bank machine. The new capacity to collect and sift through all that information gives the authorities a much broader view of the people they are investigating.

    For the companies that make big data tools, projects like Oakland’s are a big business opportunity. Microsoft built the technology for the New York City program. I.B.M. has sold data-mining tools for Las Vegas and Memphis.

    Oakland has a contract with the Science Applications International Corporation, or SAIC, to build its system. That company has earned the bulk of its $12 billion in annual revenue from military contracts. As the federal military budget has fallen, though, SAIC has diversified to other government agency projects, though not without problems.

    The company’s contract to help modernize the New York City payroll system, using new technology like biometric readers, resulted in reports of kickbacks. Last year, the company paid the city $500 million to avoid a federal prosecution. The amount was believed to be the largest ever paid to settle accusations of government contract fraud. SAIC declined to comment.

    Even before the initiative, Oakland spent millions of dollars on traffic cameras, license plate readers and a network of sound sensors to pick up gunshots. Still, the city has one of the highest violent crime rates in the country. And an internal audit in August 2012 found that the police had spent $1.87 million on technology tools that did not work properly or remained unused because their vendors had gone out of business.

    The new center will be far more ambitious. From a central location, it will electronically gather data around the clock from a variety of sensors and databases, analyze that data and display some of the information on a bank of giant monitors.

    The city plans to staff the center around the clock. If there is an incident, workers can analyze the many sources of data to give leads to the police, fire department or Coast Guard. In the absence of an incident, how the data would be used and how long it would be kept remain largely unclear.

    The center will collect feeds from cameras at the port, traffic cameras, license plate readers and gunshot sensors. The center will also be integrated next summer with a database that allows police to tap into reports of 911 calls. Renee Domingo, the city’s emergency services coordinator, said school surveillance cameras, as well as video data from the regional commuter rail system and state highways, may be added later.

    Far less advanced surveillance programs have elicited resistance at the local and state level. Iowa City, for example, recently imposed a moratorium on some surveillance devices, including license plate readers. The Seattle City Council forced its police department to return a federally financed drone to the manufacturer.

    In Virginia, the state police purged a database of millions of license plates collected by cameras, including some at political rallies, after the state’s attorney general said the method of collecting and saving the data violated state law. But for a cash-starved city like Oakland, the expectation of more federal financing makes the project particularly attractive. The City Council approved the program in late July, but public outcry later compelled the council to add restrictions. The council instructed public officials to write a policy detailing what kind of data could be collected and protected, and how it could be used. The council expects the privacy policy to be ready before the center can start operations.

    The American Civil Liberties Union of Northern California described the program as “warrantless surveillance” and said “the city would be able to collect and stockpile comprehensive information about Oakland residents who have engaged in no wrongdoing.”

    The port’s chief security officer, Michael O’Brien, sought to allay fears, saying the center was meant to hasten law-enforcement response time to crimes and emergencies. “It’s not to spy on people,” he said.

    Steve Spiker, research and technology director at the Urban Strategies Council, an Oakland nonprofit organization that has examined the effectiveness of police technology tools, said he was uncomfortable with city officials knowing so much about his movements. But, he said, there is already so much public data that it makes sense to enable government officials to collect and analyze it for the public good.

    Still, he would like to know how all that data would be kept and shared. “What happens,” he wondered, “when someone doesn’t like me and has access to all that information?”

    Posted by participo | October 14, 2013, 7:59 am
  3. Bob Filner – the former mayor of San Diego and former serial-groper (hopefully) – and the City of San Diego recently settled the sexual harassment lawsuit brought by Filner’s ex-communications director. But there’s a new Filner-related scandal that’s been brewing: The serial groping of democracy by big-money foreign donors:

    Hullabaloo
    Foreign .001 percenters are people too

    by digby
    2/16/2014 09:00:00 AM

    It stands to reason that the .001% would band together to influence elections. There are so few of them, it also stands to reason they’d recruit foreign members of their class. And why not? The policies that help the American mega-rich will very likely help the foreign mega-rich as well. And that’s what counts:

    In a first of its kind case, federal prosecutors say a Mexican businessman funnelled more than $500,000 into U.S. political races through Super PACs and various shell companies. The alleged financial scheme is the first known instance of a foreign national exploiting the Supreme Court’s Citizens United decision in order to influence U.S. elections. If proven, the campaign finance scandal could reshape the public debate over the high court’s landmark decision.

    Until now, allegations surrounding Jose Susumo Azano Matsura, the owner of multiple construction companies in Mexico, have not spread beyond local news outlets in San Diego, where he’s accused of bankrolling a handful of southern California candidates. But the scandal is beginning to attract national interest as it ensnares a U.S. congressman, a Washington, D.C.-based campaign firm and the legacy of one of the most important Supreme Court decisions in a generation.

    How could this happen? Well, there’s one big change in campaign finance law that made it possible:

    “Before Citizens United, in order for a foreign national to try and do this, they’d have to set up a pretty complex system of shell corporations,” said Brett Kappel, a campaign finance expert at the law firm Arent Fox. “And even then, there were dollar limits in place. After Citizens United, there are no limits on independent expenditures.”

    Read on. It’s quite a story.

    But remember that the real problem with our elections is non-existent voter fraud.

    It is indeed quite a story that’s emerging from the investigation of Azano’s adventures in influence peddling. It’s especially interesting because Azano is described as “almost a legend” in Mexico, and that legendary status includes major security and surveillance contracts with the Mexican defense department. Contracts that reportedly give the Mexican government remote access to phone microphones, text messages, contacts and multimedia. So the guy at the center of this latest Citizens United-induced campaign-finance scandal in the US is also a private spy-master for the Mexican government:

    inewsource

    The “foreign national” in the San Diego campaign finance scandal is “almost a legend” in Mexico

    By Leo Castaneda
    Posted on Jan 24, 2014

    Jose Susumo Azano Matsura, the foreign national at the center of a local campaign finance scandal, is a little-known figure in the U.S., but back home in Mexico, he has a reputation as a billionaire who reportedly moves in high government circles.

    Mexican newspapers and periodicals have for years followed Azano’s business dealings, his political connections, and his monumental fight with Sempra Energy over land for a liquified natural gas plant in Ensenada. El Sol de Tijuana newspaper calls him “almost a legend.”

    He is clearly regarded as a man with power.

    For example, a column in the Mexican newspaper El Universal reported on a meeting between incoming Mexican President Enrique Pena Nieto and President Barack Obama in 2012 when they discussed threats to relations between the two nations. The number one threat identified by the U.S., according to the newspaper, was the Sempra Energy land dispute allegedly financed by Azano. Number two was the price of Mexican tomatoes, and number three, drug cartels, in that order.

    inewsource reviewed the scant mentions of Azano in U.S. media and pored through dozens of Mexican newspapers and websites, as well as personal social media accounts of Azano family members for information about a man referred to only as a “wealthy businessman” in a federal complaint unsealed in San Diego this week.

    Prosecutors contend this man funneled more than $500,000 into local political campaigns, using a “straw donor” to make the donations with the help of a Washington, D.C.-based campaign consultant and a former San Diego cop, who are both charged with crimes. Foreign nationals like Azano are not allowed to fund political campaigns at any level in the U.S.

    Azano is not named in the complaint, but inewsource verified his role as the FBI-described “wealthy businessman” after matching up the donation amounts cited in the federal complaint with the San Diego City Clerk’s campaign contributions and expenditures data and Secretary of State’s business registry information.

    Azano’s limited-liability company, Airsam N492RM, LLC, donated $100,000 to a PAC named, “San Diegans for Bonnie Dumanis for Mayor 2012, Sponsored by Airsam N492RM, LLC.” Ernesto Encinas, the retired police detective at the center of the probe, also donated $3,000 to that PAC. Those two donation amounts were highlighted in the federal complaint as examples of illegal campaign expenditures.

    Azano is no stranger to controversy or political intrigue.

    Mexico City newspaper Reforma on Thursday morning reported Azano’s company was implicated in a money laundering and fraud investigation. The Mexican attorney general alleges that the company, Security Tracking Devices SA de CV, either through a payment or contract services, deposited 33 million pesos (almost $2.5 million dollars) into the account of a corporation linked to two men who were charged with fraud and money laundering.

    Azano is of Japanese ancestry and grew up in the western Mexican state of Jalisco. He is the son of businessman Susumo Azano Matsura, head of Grupo Azano, a corporation that has done everything from construction to manufacturing license plates. One of Azano’s websites says he earned degrees from University of Massachusetts Boston and University of Guadalajara.

    He made his money in surveillance and security technology. One online news outlet in Mexico estimates the value of the family enterprises at $30 billion.

    Azano reportedly has a residence in Coronado, but the large waterfront home on Green Turtle Road is owned by a Margarita Hester de Azano, according to the county assessor. Also according to the county assessor, Azano himself owns no property in San Diego County.

    In 1998, Azano started Security Tracking Devices, which today sells surveillance and security technology to Mexico’s defense department, Sedena. A 2011 deal for $5 billion pesos (almost $374 million dollars) worth of equipment raised eyebrows in the U.S. and Mexico.

    According to the Mexican newspaper Aristegui, the contract was awarded to Azano’s company without competitive bidding. An editorial by the newspaper El Mexicano claimed Security Tracking Devices sold one of the systems for an almost 800 percent markup.

    On this side of the border, the San Francisco-based Electronic Frontier Foundation warned about the technology being sold, which allowed the Mexican government to remotely access and activate phone microphones and download text messages, contacts and multimedia.

    Security Tracking Devices has global reach and is part of the reason Azano is often referred to as a billionaire. However, Azano is not included on Forbes’ list of the wealthiest Mexicans.

    The capabilities of the surveillance system sold by Azano’s “Security Tracking Devices” sound a lot like the FinFisher software that’s being sold to governments around the world, which raises the question: was it FinFisher that Azano’s company was selling to Mexico? Maybe, assuming the company “Obses” is affiliated with Azano, because Obses has definitely been selling FinFisher to Mexico:

    Privacy International
    Corruption scandal reveals use of FinFisher by Mexican authorities
    By: Alinda Vermeer
    on: 22-Jul-2013

    Following reports that the Mexican prosecution authority appears to be not only using FinFisher, but also to be involved in a corruption scandal surrounding the purchase of this intrusive surveillance technology, the Mexican Permanent Commission (composed of members of the Mexican Senate and Congress) has urged Mexico’s Federal Institute for Access to Public Information and Data Protection (IFAI) to investigate the use of spyware in Mexico.

    The corruption scandal, which entails the price of the surveillance technology being purchased at more than double the market rate, revealed that the Mexican government had bought FinFisher from Obses, a company which has been on the receiving end of dozens of no-bid governmental projects.

    While we don’t know if Obses purchased the malware from Gamma International, the British company that developed FinFisher, this is the first instance we are aware of in which a reseller was involved in the sale of FinFisher. The OECD guidelines’ standards of responsible international business conduct remain relevant, however, even when a reseller is selling the products.

    FinFisher in Mexico

    The revelations followed a recent access to information request of a group of Mexican human rights activists and journalists that urged the IFAI to investigate the use of FinFisher in Mexico.

    According to the group, which includes individuals as well as civic organisations such as Propuesta Civica A.C., Al Consumidor and Contingente MX, the malware has been used to spy on journalists and activists in the country and breaches Mexico’s data protection law. The Federal Law for the Protection of Personal data applies to both private and public entities, and regulates the collection, use and disclosure of personal data. The law also provides limitations on government access to data. FinFisher is particularly intrusive spyware that once installed, will gain complete control over a computer, mobile phone or other device. As a result, every keystroke can be recorded, email and chat conversations can be monitored and Skype calls can be listened into.

    FinFisher has been linked to Mexico by researchers of the Citizen Lab, a research centre based at the Munk School of Global Affairs of the University of Toronto, who found FinFisher command and control servers with two local Internet service providers, IUSACELL and UNINET. Mexican newspaper Reforma revealed that in Mexico the Procuraduría General de la Nación and several other governmental organisations are using FinFisher. Privacy International supported the access to information request via a letter to the IFAI.

    Example for other countries

    In recent years Mexican authorities have sought to improve their surveillance capabilities in an effort to combat drug-related violence. According to Latin American human rights activist Renata Avila:

    “Mexico’s public has been overwhelmed by drug-related violence in recent years, a problem that has left citizens fearing for their safety and generally unopposed to aggressive surveillance practices. As a result, the government has been able to launch sophisticated surveillance programs without facing significant resistance from civil society.”

    A turning point seems to have been reached however, now that civic organisations have called for transparency on the use of FinFisher in Mexico, suggesting it is not only being used to combat crime, but also to spy on activists and journalists. Concerned by the capabilities of FinFisher and its ramifications for the privacy of individuals, the Permanent Commission has already asked IFAI to investigate, and has proposed to request full disclosure of the contracts on the basis of which FinFisher was bought, together with detailed information on all other federal purchases of surveillance technologies. The Permanent Commission will meet this week to discuss the proposal.

    This strong response is an example for the 36 other countries in which FinFisher command and control servers have been found.

    Posted by Pterrafractyl | February 16, 2014, 7:41 pm
  4. Remember when Gamma, the maker of FinFisher, claimed that it must have been stolen copies of their super-spy software that were being used against Bahraini activists? Someone just hacked Gamma and stole 40 GB of documents and, shocker, it looks like Gamma was lying:

    ZDNet
    Top gov’t spyware company hacked; Gamma’s FinFisher leaked

    Summary: The maker of secretive FinFisher spyware — sold exclusively to governments and police agencies — has been hacked, revealing its clients, prices and its effectiveness across an unbelievable span of apps, operating systems and more.
    Violet Blue

    By Violet Blue for Zero Day | August 6, 2014 — 21:01 GMT (14:01 PDT)

    The company that makes and sells the world’s most elusive cyber weapon, FinFisher spyware, has been hacked and a 40G file has been dumped on the internet.

    The slick and highly secret surveillance software can remotely control any computer it infects, copy files, intercept Skype calls, log keystrokes — and now we know it can do much, much more.

    A hacker has announced on Reddit and Twitter that they’d hacked Anglo-German company Gamma International UK Ltd., makers of FinFisher spyware sold exclusively to governments and police agencies.

    The file was linked both on Reddit and “@GammaGroupPR” — a parody Twitter account by the hacker taking credit for the breach. The Twitter account is still doling out tidbits from the massive theft.

    The Reddit post Gamma International Leaked in self.Anarchism said,

    Two years ago their software was found being widely used by governments in the middle east, especially Bahrain, to hack and spy on the computers and phones of journalists and dissidents.

    Gamma Group (the company that makes FinFisher) denied having anything to do with it, saying they only sell their hacking tools to ‘good’ governments, and those authoritarian regimes most [sic] have stolen a copy.

    …a couple days ago [when] I hacked in and made off with 40GB of data from Gamma’s networks. I have hard proof they knew they were selling (and still are) to people using their software to attack Bahraini activists, along with a whole lot of other stuff in that 40GB.

    The stolen FinFisher spoils were first leaked as a torrent file on Dropbox and have since been shared across the internet, meaning that controlling the information leak is now impossible.

    FinFisher’s notoriety of late has come from its use in the government targeting of activists, notably linked to the monitoring of high profile dissidents in Bahrain.

    According to initial reports, the enormous file contains client lists, price lists, source code, details about the effectiveness of Finfisher malware, user and support documentation, a list of classes/tutorials, and much more.

    One spreadsheet in the dump explains that FinFisher performed well against 35 top antivirus products, showing how the sophisticated malware efficiently defeats detection.

    A release notes doc covers Gamma’s April 2014 patches to ensure its rootkit avoids Microsoft Security Essentials. It also explains that the malware records dual screen Windows setups, and reports better email spying with Mozilla Thunderbird and Apple Mail.

    Gamma does note that FinFisher is detected by OSX Skype (a recording prompt appears), and the same is true for Windows 8 Metro — though the spyware goes well undetected by the desktop client.

    The files also contain lists of apps the spyware utilizes, and things it can’t use — many still to be determined. There is a fake Adobe Flash Player updater, and a Firefox plugin for RealPlayer.

    One of the files contains extensive (though still undetermined) documentation for WhatsApp.

    Reporting on just such spyware last month, The Economist noted,

    Currently it is legal for governments to buy the spyware—the sale and export of surveillance tools is virtually unregulated by international law.

    Spyware providers say they sell their products to governments for “lawful purposes”.

    But activists allege that their governments violate national laws in their often politically motivated use of such software. They argue that companies should be held accountable for selling spyware to repressive governments.

    The Register reported:

    A price list, which appeared to be a customers’ record, revealed the FinSpy program cost 1.4 million Euros and a variety of penetration testing training services priced at 27,000 Euros each.

    The document did not contain a date but it did show prices for malware targeting the recent iOS version 7 platform.

    Links have appeared on Twitter to the GitHub repository for Finfisher docs, although it’s being noted that due to Gamma’s operational security practices, the unencrypted source code is fairly useless.

    Gamma isn’t in the business of creating zero-days because they are more of an “ecosystem” spyware company, but apparently they do sell them to their clients.

    On the list of zero-day companies from which Gamma appears to purchase its exploits is the controversial French company, VUPEN.

    At only 1.4 million euros for the FinSpy system and virtually no export controls for the sale of this kind of potent software, you almost have to wonder which governments around the world haven’t purchased the system by now. It seems like a bit of a bargain.

    Posted by Pterrafractyl | August 7, 2014, 11:39 am
  5. Get ready for the next big cyber-growth sector: corporate “active defense” anti-hacking services. It’s an “active defense” that increasingly includes an active offense and possibly even a pre-emptive offense. And generally seems to embrace the vigilante spirit:

    Slate
    The Mercenaries

    Ex-NSA hackers and their corporate clients are stretching legal boundaries and shaping the future of cyberwar.
    By Shane Harris
    Nov. 12 2014 1:37 PM

    Excerpted from @War: The Rise of the Military-Internet Complex by Shane Harris. Out now from Houghton Mifflin Harcourt.

    Bright twenty- and thirtysomethings clad in polo shirts and jeans perch on red Herman Miller chairs in front of silver Apple laptops and sleek, flat-screen monitors. They might be munching on catered lunch—brought in once a week—or scrounging the fully stocked kitchen for snacks, or making plans for the company softball game later that night. Their office is faux-loft industrial chic: open floor plan, high ceilings, strategically exposed ductwork and plumbing. To all outward appearances, Endgame Inc. looks like the typical young tech startup.

    It is anything but. Endgame is one of the leading players in the global cyber arms business. Among other things, it compiles and sells zero day information to governments and corporations. “Zero days,” as they’re known in the security business, are flaws in computer software that have never been disclosed and can be secretly exploited by an attacker. And judging by the prices Endgame has charged, business has been good. Marketing documents show that Endgame has charged up to $2.5 million for a zero day subscription package, which promises 25 exploits per year. For $1.5 million, customers have access to a database that shows the physical location and Internet addresses of hundreds of millions of vulnerable computers around the world. Armed with this intelligence, an Endgame customer could see where its own systems are vulnerable to attack and set up defenses. But it could also find computers to exploit. Those machines could be mined for data—such as government documents or corporate trade secrets—or attacked using malware. Endgame can decide whom it wants to do business with, but it doesn’t dictate how its customers use the information it sells, nor can it stop them from using it for illegal purposes, any more than Smith & Wesson can stop a gun buyer from using a firearm to commit a crime.

    Endgame is one of a small but growing number of boutique cyber mercenaries that specialize in what security professionals euphemistically call “active defense.” It’s a somewhat misleading term, since this kind of defense doesn’t entail just erecting firewalls or installing antivirus software. It can also mean launching a pre-emptive or retaliatory strike. Endgame doesn’t conduct the attack, but the intelligence it provides can give clients the information they need to carry out their own strikes. It’s illegal for a company to launch a cyberattack, but not for a government agency. According to three sources familiar with Endgame’s business, nearly all of its customers are U.S. government agencies. According to security researchers and former government officials, one of Endgame’s biggest customers is the National Security Agency. The company is also known to sell to the CIA, Cyber Command, and the British intelligence services. But since 2013, executives have sought to grow the company’s commercial business and have struck deals with marquee technology companies and banks.

    Endgame was founded in 2008 by Chris Rouland, a top-notch hacker who first came on the Defense Department’s radar in 1990—after he hacked into a Pentagon computer. Reportedly the United States declined to prosecute him in exchange for his working for the government. He started Endgame with a group of fellow hackers who worked as white-hat researchers for a company called Internet Security Systems, which was bought by IBM in 2006 for $1.3 billion. Technically, they were supposed to be defending their customers’ computers and networks. But the skills they learned and developed were interchangeable with offense.

    Rouland, described by former colleagues as domineering and hot-tempered, has become a vocal proponent for letting companies launch counterattacks on individuals, groups, or even countries that attack them. “Eventually we need to enable corporations in this country to be able to fight back,” Rouland said during a panel discussion at a conference on ethics and international affairs in New York in September 2013.

    Rouland stepped down as the CEO of Endgame in 2012, following embarrassing disclosures of the company’s internal marketing documents by the hacker group Anonymous. Endgame had tried to stay quiet and keep its name out of the press, and went so far as to take down its website. But Rouland provocatively resurfaced at the conference and, while emphasizing that he was speaking in his personal capacity, said American companies would never be free from cyberattack unless they retaliated. “There is no concept of deterrence today in cyber. It’s a global free-fire zone.” One of Rouland’s fellow panelists seemed to agree. Robert Clark, a professor of law at the Naval Academy Center of Cyber Security Studies, told the audience that it would be illegal for a company that had been hacked to break in to the thief’s computer and delete its own purloined information. “This is the most asinine thing I can think of,” Clark said. “It’s my data, it’s here, I should be able to delete it.”

    To date, no American company has been willing to say that it engages in offensive cyber operations designed to steal information or destroy an adversary’s system. But former intelligence officials say “hack-backs”—that is, breaking into the intruder’s computer, which is illegal in the United States—are occurring, even if they’re not advertised. “It is illegal. It is going on,” says a former senior NSA official, now a corporate consultant. “It’s happening with very good legal advice. But I would not advise a client to try it.”

    A former military intelligence officer said the most active hack-backs are coming from the banking industry. In the past several years banks have lost billions of dollars to cybercriminals, primarily those based in Eastern Europe and Russia who use sophisticated malware to steal usernames and passwords from customers and then clean out their accounts.

    In June 2013, Microsoft joined forces with some of the world’s biggest financial institutions, including Bank of America, American Express, JPMorgan Chase, Citigroup, Wells Fargo, Credit Suisse, HSBC, the Royal Bank of Canada, and PayPal, to disable a huge cluster of hijacked computers being used for online crime. Their target was a notorious outfit called Citadel, which had infected thousands of machines around the world and, without their owners’ knowledge, conscripted them into armies of “botnets,” or clusters of infected computers under the remote control of a hacker, which the criminals used to steal account credentials, and thus money, from millions of people. In a counterstrike that Microsoft code-named Operation b54, the company’s Digital Crimes Unit severed the lines of communication between Citadel’s more than 1,400 botnets and an estimated 5 million personal computers that Citadel had infected with malware. Microsoft also took over servers that Citadel was using to conduct its operations.

    Microsoft hacked Citadel. That would have been illegal had the company not obtained a civil court order blessing the operation. Effectively now in control of Citadel’s victims—who had no idea that their machines had ever been infected—Microsoft could alert them to install patches to their vulnerable software. In effect, Microsoft had hacked the users in order to save them. (And to save itself, since the machines had been infected in the first place owing to flaws in Microsoft’s products, which are probably the most frequently exploited in the world.)

    It was the first time that Microsoft had teamed up with the FBI. But it was the seventh time it had knocked down botnets since 2010. The company’s lawyers had used novel legal arguments, such as accusing criminals who had attacked Microsoft products of violating its trademark. This was a new legal frontier. Even Microsoft’s lawyers, who included a former U.S. attorney, acknowledged that they’d never considered using alleged violations of common law to obtain permission for a cyberattack. For Operation b54, Microsoft and the banks had spied on Citadel for six months before talking to the FBI. The sleuths from Microsoft’s counter-hacking group eventually went to two Internet hosting facilities, in Pennsylvania and New Jersey, where, accompanied by U.S. marshals, they gathered forensic evidence to attack Citadel’s network of botnets. The military would call that collecting targeting data. And in many respects, Operation b54 looked like a military cyberstrike. Technically speaking, it was not so different from the attack that U.S. cyber forces launched on the Obelisk network used by al-Qaida in Iraq.

    Microsoft also worked with law enforcement agencies in 80 countries to strike at Citadel. The head of cybercrime investigations for Europol, the European Union’s law enforcement organization, declared that Operation b54 had succeeded in wiping out Citadel from nearly all its infected hosts. And a lawyer with Microsoft’s Digital Crimes Unit declared, “The bad guys will feel the punch in the gut.”

    Microsoft has continued to attack botnets, and its success has encouraged government officials and company executives, who see partnerships between cops and corporate hackers as a viable way to fight cybercriminals. But coordinated counterstrikes like the one against Citadel take time to plan, and teams of lawyers to approve them. What happens when a company doesn’t want to wait six months to hack back, or would just as soon not have federal law enforcement officers looking over its shoulder?

    The former military intelligence officer worries that the relative technical ease of hack-backs will inspire banks in particular to forgo partnerships with companies like Microsoft and hack back on their own—without asking a court for permission. “Banks have an appetite now to strike back because they’re sick of taking it in the shorts,” he says. “It gets to the point where an industry won’t accept that kind of risk. And if the government can’t act, or won’t, it’s only logical they’ll do it themselves.” And hack-backs won’t be exclusive to big corporations, he says. “If you’re a celebrity, would you pay someone to find the source of some dirty pictures of you about to be released online? Hell yes!”

    Undoubtedly, they’ll find a ready supply of talent willing and able to do the job. A survey of 181 attendees at the 2012 Black Hat USA conference in Las Vegas found that 36 percent of “information security professionals” said they’d engaged in retaliatory hack-backs. That’s still a minority of the profession, though one presumes that some of the respondents weren’t being honest. But even those security companies that won’t engage in hack-backs have the skills and the knowhow to launch a private cyberwar.

    Over the past several years, large defense contractors have been gobbling up smaller technology firms and boutique cybersecurity outfits, acquiring their personnel, their proprietary software, and their contracts with intelligence agencies, the military, and corporations. In 2010, Raytheon, one of the largest U.S. defense contractors, agreed to pay $490 million for Applied Signal Technology, a cybersecurity firm with military and government clients. The price tag, while objectively large, was a relative pittance for Raytheon, which had sales the prior year totaling $25 billion. In 2013 the network-equipment giant Cisco agreed to buy Sourcefire for $2.7 billion in cash, in a transaction that reflected what the New York Times called “the growing fervor” for companies that defend other companies from cyberattacks and espionage.

    After the acquisition was announced, a former military intelligence officer said he was astounded that Cisco had paid so much money for a company whose flagship product is built on an open-source intrusion detection system called Snort, which anyone can use. It was a sign of just how valuable cybersecurity expertise had become—either that or a massive bubble in the market, the former officer said.

    But the companies are betting on a sure thing—government spending on cybersecurity. The Pentagon cybersecurity budget for 2014 is $4.7 billion, a $1 billion increase over the previous year. The military is no longer buying expensive missile systems. With the advent of drone aircraft many executives believe the current generation of fighter aircraft will be the last ones built to be flown by humans. Spending has plummeted on the big-ticket weapons systems that kept Beltway contractors flush throughout the Cold War, so they’re pivoting to the booming cyber market.

    It should be interesting to see which companies end up jumping into the counter-hack-attack-for-hire market. The competition could be fierce.

    Posted by Pterrafractyl | November 12, 2014, 2:39 pm
  6. They are only showing what they want us to see. I think whatever they have now far exceeds the dense and blocky material you’ll find in the Jacob Applebaum video below, except for a hint you’ll find at 56:19 or so (“portable” continuous wave radar generator). I think what they have now doesn’t use wires or have any protocols I’m aware of. Dave talked about this before but I don’t remember which show. Like the holographic projection system, but capable of doing a whole lot more.

    Jacob Applebaum: To Protect And Infect, Part 2 [30c3]
    https://www.youtube.com/watch?v=vILAlhwUgIU

    Posted by My Tinfoil Hat | November 13, 2014, 3:27 pm
  7. An Italian cybersecurity/anti-security firm with a large number of government clients, Hacking Team, just got mega-hacked. 400 GB of internal documents have now been released that verify allegations that Hacking Team sells its wares to governments with extensive records of human rights abuses. So, assuming this data/PR breach results in an end to those contracts, it’s at least possible that the kind of powerful software you really don’t want in the wrong hands might not stay in the wrong hands. Of course, if nothing happens, Hacking Team just got a big free global advertisement. Either way, it’s one more example of the fact that efforts to curb government hacking abuses can’t focus exclusively on governments’ own hacking abilities. Cyberwarfare capabilities that go far beyond those of your normal cyber-security expert can be privatized too. Privatized and, of course, sold to some of the worst governments on the planet:

    Reuters
    UPDATE 1-Surveillance software maker Hacking Team gets taste of its own medicine

    (Adds that company is recommending that customers suspend use of spy gear)

    By Eric Auchard and Joseph Menn
    Tue Jul 7, 2015 3:30am IST

    (Reuters) – Italy’s Hacking Team, which makes surveillance software used by governments to tap into phones and computers, found itself the victim of hacking on a grand scale on Monday.

    The controversial Milan-based company, which describes itself as a maker of lawful interception software used by police and intelligence services worldwide, has been accused by anti-surveillance campaigners of selling snooping tools to governments with poor human rights records.

    Hacking Team’s Twitter account was hijacked on Monday and used by hackers to release what is alleged to be more than 400 gigabytes of the company’s internal documents, email correspondence, employee passwords and the underlying source code of its products.

    “Since we have nothing to hide, we’re publishing all our emails, files and source code,” posts published on the company’s hijacked Twitter account said. The tweets were subsequently deleted.

    Company spokesman Eric Rabe confirmed the breach, adding that “law enforcement will investigate the illegal taking of proprietary company property.”

    Hacking Team customers include the U.S. FBI, according to internal documents published Monday. That agency did not immediately respond to a request for comment.

    One U.S. privacy rights activist hailed the publication of the stolen Hacking Team documents as the “best transparency report ever”, while another digital activist compared the disclosures to a Christmas gift in July for anti-surveillance campaigners.

    Among the documents published was a spreadsheet that purports to show the company’s active and inactive clients at the end of 2014.

    Those listed included police agencies in several European countries, the U.S. Drug Enforcement Administration and police and state security organisations in countries with records of human rights abuses such as Egypt, Ethiopia, Kazakhstan, Morocco, Nigeria, Saudi Arabia and Sudan.

    Sudan’s National Intelligence Security Service was one of two customers in the client list given the special designation of “not officially supported”.

    However, a second document, an invoice for 480,000 euros to the same security service, calls into question repeated denials by the Hacking Team that it has ever done business with Sudan, which is subject to heavy trade restrictions.

    Hacking Team did not dispute the veracity of any of the documents, though it said some reports that claimed to be based on them contained misstatements.

    It said it would not identify any customers because of still-binding confidentiality agreements.

    The 12-year-old Hacking Team was named one of five private-sector “Corporate Enemies of the Internet” in a 2012 report by Reporters Without Borders.

    Citizen Lab, a digital rights research group affiliated with the University of Toronto, has published numerous reports linking Hacking Team software to repression of minority and dissident groups, as well as journalists in a number of countries in Africa and the Middle East.

    “Sudan’s National Intelligence Security Service was one of two customers in the client list given the special designation of “not officially supported”.”

    Note that if this sounds awfully similar to the FinFisher hack, which revealed that that firm was also selling its powerful services to highly questionable client states, this is going to sound even more familiar: the same hacker who took down FinFisher hacked Hacking Team:

    Vice Motherboard
    Hacker Claims Responsibility for the Hit on Hacking Team

    Written by
    Lorenzo Franceschi-Bicchierai

    July 6, 2015 // 10:21 AM EST

    An online anti-surveillance crusader is back with a bang.

    Last year, a hacker who only went by the name “PhineasFisher” hacked the controversial surveillance tech company Gamma International, a British-German surveillance company that sells the spyware software FinFisher. He then went on to leak more than 40GB of internal data from the company, which has been long criticized for selling to repressive governments.

    That same hacker has now claimed responsibility for the breach of Hacking Team, an Italian surveillance tech company that sells a similar product called Remote Controlled System Galileo.

    On Sunday night, I reached out to the hacker while he was in control of Hacking Team’s Twitter account via a direct message to @hackingteam. Initially, PhineasFisher responded with sarcasm, saying he was willing to chat because “we got such good publicity from your last story!” referring to a recent story I wrote about the company’s CEO claiming to be able to crack the dark web.

    He then went on to reference the story publicly on Twitter, posting a screenshot of an internal email which included the link to my story.

    Afterwards, however, he also claimed that he was PhineasFisher. To prove it, he told me he would use the parody account he used last year to promote the FinFisher hack to claim responsibility.

    “I am the same person behind that hack,” he told me before coming out publicly.

    The hacker, however, declined to answer any further questions.

    In any case, now at least we know who is responsible for the massive hack of the controversial company, which has been accused on repeated occasions of selling its software to governments with questionable human rights records. Some of these customers were then caught using Hacking Team’s spyware against human rights activists or journalists.

    The leak of 400GB of internal files contains “everything,” according to a person close to the company, who only spoke on condition of anonymity. The files contain internal emails between employees; a list of customers, including some, such as the FBI, that were previously unknown; and allegedly even the source code of Hacking Team’s software, its crown jewels.

    So “PhineasFisher” repeated their FinFisher hack, this time against an Italian firm with a very similar business model. Assuming PhineasFisher is a well-meaning “hacktivist”, the situation could certainly be worse.

    But, of course, when you read that Vice Motherboard ran a piece on Hacking Team’s assertion that it can now hack the dark web just last month, it’s also obvious that anyone with an interest in hacking, whether it’s another cyber-security firm, another government, or another hacker outfit, would want to learn how to do that. Or how to prevent someone else from doing it to them.

    In other words, when you’re hacking an organization like Hacking Team, even a “black hat” hacker is going to be incentivized to make it look like a “white hat” hack, because that’s so easy to do given Hacking Team’s horrible client list. “White hat” hacking is a ready-made cover even if all they really wanted was the dark net material. But let’s hope it truly was a “hacktivist” action by a “white hat” hacker anyway. Not only is that much more of a feel-good story, but the alternatives involving fake “white hat” hackers are actually rather feel-bad-ish. That Hacking Team source code that was apparently stolen sounds scary.

    Posted by Pterrafractyl | July 6, 2015, 11:23 pm
  8. Following yesterday’s triplet of “glitches” that took down the New York Stock Exchange, United Airlines, and the Wall Street Journal’s home page, a number of people are scratching their heads and wondering if Anonymous’s tweet the previous day, which simply stated, “Wonder if tomorrow is going to be bad for Wall Street…. we can only hope,” was somehow related. Hmmm….

    US officials and the impacted companies, however, strongly deny that the technical difficulties were anything other than coincidental:

    Haaretz
    U.S. denies cyber-attack caused technical glitches at NYSE, United Airlines and WSJ
    Anonymous hackers suggest they may be behind New York Stock Exchange fail; White House says no indication of malicious actors in technical difficulties.
    By Oded Yaron | Jul. 8, 2015 | 11:23 PM

    A series of technical glitches in the United States on Wednesday morning Eastern Time have sparked rumors of a coordinated cyber-attack. The New York Stock Exchange was shut down and United Airlines flights were grounded due to technical difficulties. In addition, the home page of the Wall Street Journal’s website temporarily went down. American officials, however, denied any connection between the events, insisting the United States was not under attack.

    U.S. Homeland Security Secretary Jeh Johnson said technical problems reported by United and the NYSE were apparently not related to “nefarious” activity.

    “I have spoken to the CEO of United, Jeff Smisek, myself. It appears from what we know at this stage that the malfunctions at United and the stock exchange were not the result of any nefarious actor,” Johnson said during a speech at the Center for Strategic and International Studies, a Washington think tank.

    “We know less about the Wall Street Journal at this point, except that their system is in fact up again,” he added.

    On Tuesday, the Twitter account of the hacker group Anonymous posted a Tweet that read, “Wonder if tomorrow is going to be bad for Wall Street…. we can only hope.” On Wednesday afternoon, it tweeted, ” #YAN Successfully predicts @NYSE fail yesterday. Hmmmm.”

    United’s computer glitch prompted America’s Federal Aviation Administration to ground all of the company’s departures for almost two hours. According to the airline, more than 800 flights were delayed and about 60 were canceled due to the problem, which was later resolved.

    In a statement, United said it had suffered from “a network connectivity issue” and a spokeswoman for the company said the glitch was caused by an internal technology issue and not an outside threat.

    The airline, the second largest in the world, had a similar issue on June 2, when it was forced to briefly halt all takeoffs in the United States due to a problem in its flight-dispatching system.

    Just as United was bringing its systems back on-line, trading on the New York Stock Exchange came to a halt because of a technical problem and the Wall Street Journal’s website experienced errors.

    The New York Stock Exchange suspended trading in all securities on its platform shortly after 11:30 A.M. for what it called an internal technical issue, and canceled all open orders. The exchange, a unit of Intercontinental Exchange Inc (ICE.N) said the halt was not the result of a cyber-attack. “We chose to suspend trading on NYSE to avoid problems arising from our technical issue,” the NYSE tweeted about one hour after trading was suspended. Other exchanges were trading normally.

    A technical problem at NYSE’s Arca exchange in March caused some of the most popular exchange-traded funds to be temporarily unavailable for trading. And in August 2013, trading of all Nasdaq-listed stocks was frozen for three hours, leading U.S. Securities and Exchange Commission Chair Mary Jo White to call for a meeting of Wall Street executives to insure “continuous and orderly” functioning of the markets.

    White House Spokesman Josh Earnest said Wednesday that there was no indication of malicious actors involved in the technical difficulties experienced at the NYSE.

    Well, if Anonymous didn’t do the hack, it all points towards one obvious and ominous explanation: Anonymous has developed psychic precognition abilities!

    Well, ok, there are non-paranormal explanations, but if we are dealing with Anonymous Who Stare at Goats, let’s hope they’re just limited to the precog abilities. Precog Anonymous would be messy enough on its own, but at least if it’s just the occasional precog tweet that’s ok. Scanner Anonymous might be a little too over the top.

    Posted by Pterrafractyl | July 9, 2015, 11:55 am
  9. Here’s an indication of just how sensitive the client list is for companies like “Hacking Team”, the Italy-based government-spyware firm that was recently hacked: South Korea’s intelligence agency, the National Intelligence Service, acknowledged Tuesday that it had indeed purchased Hacking Team software, but assured the public that it was only used to monitor North Korea and for other research purposes.

    A South Korean intelligence agent’s dead body that was just found alongside a suicide note would appear to suggest otherwise:

    Associated Press
    Dead S. Korean agent leaves note hinting at hacking scandal

    By KIM TONG-HYUNG

    July 18, 2015

    SEOUL, South Korea — A South Korean government spy was found dead Saturday in an apparent suicide alongside a note that seemed to comment on the recent revelation that the spy agency had acquired hacking programs capable of intercepting communications on cellphones and computers, police said.

    A police official in Yongin city, just south of Seoul, said the 46-year-old National Intelligence Service agent was found dead in his car, but would not reveal the agent’s name or details about the note, saying his family requested that the information not be made public. The official spoke on condition of anonymity, citing office rules.

    The NIS said Tuesday that it had purchased the hacking programs in 2012 from an Italian company, Hacking Team, but that it used them only to monitor agents from rival North Korea and for research purposes. The story emerged earlier this month when a searchable library of a massive email trove stolen from Hacking Team, released by WikiLeaks, showed that South Korean entities were among those dealing with the firm.

    The revelation is sensitive because the NIS has a history of illegally tapping South Koreans’ private conversations.

    Two NIS directors who successively headed the spy service from 1999 to 2003 were convicted and received suspended prison terms for overseeing the monitoring of cellphone conversations of about 1,800 of South Korea’s political, corporate and media elite.

    On Thursday, South Korea’s Supreme Court ordered a new trial for another former spy chief convicted of directing an online campaign to smear a main opposition candidate in the 2012 presidential election, won by current President Park Geun-hye.

    Note that the 2012 online smear campaign allegedly directed by the former spy chief now facing a retrial didn’t appear to involve the use of any hacking tools, although with this Hacking Team revelation we’ll see if that continues to be the case (note that the NIS reportedly purchased the software in 2012). No, he was convicted of directing sock-puppetry. Lots and lots of sock-puppetry.

    But also note that the NIS wasn’t the only intelligence agency caught up in the scandal. South Korea’s Cyberwarfare Command was accused of the same political meddling:

    The New York Times
    Former South Korean Spy Chief Convicted in Online Campaign Against Liberals

    By CHOE SANG-HUN
    SEPT. 11, 2014

    SEOUL, South Korea — A former South Korean intelligence chief accused of directing agents who posted online criticisms of liberal candidates during the 2012 presidential election campaign was convicted Thursday of violating a law that banned the spy agency from involvement in domestic politics.

    Won Sei-hoon, who served as director of the National Intelligence Service under President Park Geun-hye’s predecessor, Lee Myung-bak, was sentenced to two and a half years in prison, but the Seoul Central District Court suspended the sentence. Mr. Won had just been released from prison Tuesday after completing a 14-month sentence stemming from a separate corruption trial.

    Prosecutors indicted Mr. Won in June of last year, saying that a secret team of National Intelligence Service agents had posted more than 1.2 million messages on Twitter and other forums in a bid to sway public opinion in favor of the conservative governing party and its leader, Ms. Park, ahead of the presidential and parliamentary elections in 2012.

    Many of the messages merely lauded government policies, but many others ridiculed liberal critics of the government and of Ms. Park, including Ms. Park’s rivals in the presidential election. Some messages called the liberal politicians “servants” of North Korea for holding views on the North that conservatives considered too conciliatory, prosecutors said.

    For the spy agency to “directly interfere with the free expression of ideas by the people with the aim of creating a certain public opinion cannot be tolerated under any pretext,” the court said in its ruling on Thursday. “This is a serious crime that shakes the foundation of democracy.”

    But though Mr. Won was convicted of violating the law governing the spy agency, the court dismissed a separate charge: that he had violated the country’s election law, which prohibits public servants generally from interfering in elections. In explaining that decision, the court said Mr. Won had not ordered his agents to support or oppose any specific presidential candidate.

    That finding spared Ms. Park a potentially serious political liability. Had Mr. Won been convicted of violating the election law, it would have provided fodder for critics of Ms. Park who say that the agency’s online smear campaign undermined the legitimacy of her election. Ms. Park, who was elected by a margin of about a million votes, has said that she neither ordered nor benefited from such a campaign.

    The intelligence service has denied trying to discredit opposition politicians, saying that its online messages were posted as part of a normal campaign of psychological warfare against North Korea. It said the North was increasingly using the Internet to spread misinformation in support of the Pyongyang government and to criticize South Korean policies, forcing its agents to defend those policies online.

    The intelligence agency was created to spy on North Korea, which is still technically at war with the South. But over its history, it has been repeatedly accused of meddling in domestic politics and of being used as a political tool by sitting presidents. In recent months, courts have acquitted two defectors from North Korea who had been indicted on charges of spying for Pyongyang; the courts said the intelligence service had kept them in solitary confinement for several months, failed to provide the suspects with appropriate access to lawyers and, in one case, even fabricated evidence to build its cases.

    The South Korean military’s Cyberwarfare Command was also accused of smearing opposition politicians online before the 2012 elections. Last month, military investigators formally asked prosecutors to consider legal action against the former heads of the command, which was created in 2010 to guard against hacking threats from the North.

    “The intelligence agency was created to spy on North Korea, which is still technically at war with the South. But over its history, it has been repeatedly accused of meddling in domestic politics and of being used as a political tool by sitting presidents.”

    It’s worth noting that the current president, Park Geun-hye, is the daughter of former president/military strongman Park Chung-hee, who set up the predecessor to the NIS in 1961.

    It’s also worth noting that if the South Korean government was planning on engaging in illegal domestic surveillance, Hacking Team’s software probably wasn’t very necessary.

    Posted by Pterrafractyl | July 18, 2015, 2:50 pm
  10. A German police officer recently made the news after ‘arresting’ a squirrel following reports from a distressed woman that the critter was aggressively stalking her. Authorities determined that the squirrel was suffering from exhaustion and ordered the furry criminal to consume apples and honey tea as punishment.

    As far as stalker squirrels go, it could have been worse. It could have been a robo-squirrel. A robosquirrel that’s interested in a lot more than just your apples and honey tea and specifically interested in your passwords:

    Engadget
    Boeing and Hacking Team want drones to deliver spyware

    by Jon Fingas
    July 18th 2015 at 9:27pm

    Forget safeguarding drones against hacks: if Boeing and Hacking Team have their way, robotic aircraft would dish out a few internet attacks of their own. Email conversations posted on WikiLeaks reveal that the two companies want drones to carry devices that inject spyware into target computers through WiFi networks. If a suspect makes the mistake of using a computer at a coffee shop, the drone could slip in surveillance code from a safe distance.

    The conversation was still in the early stages as of the leak, so you don’t have to worry about drones planting bugs any time soon. It’s also unclear as to who the customers would be. While the NSA is fond of spyware, there’s no certainty that it or other US agencies would line up as customers. Still, don’t be surprised if military recon drones are eventually doing a lot more than snapping pictures.

    Yes, Hacking Team, the government spyware firm, just got hacked and now WikiLeaks is leaking its emails. And according to those emails, Hacking Team and Boeing are apparently thinking about putting a suite of hacking tools on a drone, and why not? That makes perfect sense, and there are probably plenty of other companies and governments trying to do the same thing. It would be shocking if that wasn’t the case. Whether or not that involves robosquirrels remains to be seen, but the industry for cryptodrones that blend into the environment and can sneak up on people is one of those inevitable technological advances that threatens to turn reality into a paranoid schizophrenic’s worst nightmare someday.

    And no government will be required to create that nightmare, although they’ll surely contribute. The private sector demand for drones that can hunt someone down and perform any number of possible tasks that go far beyond hacking will provide more than enough demand to create a de facto cryptodrone surveillance state. A public and private army of stalker robo-squirrels and robo-everything-else is just a matter of time. Why? Because it’s just a matter of time before that kind of drone technology is 3D printable or somehow available to the masses through some sort of do-it-yourself drone-building technology. How many decades before 3D-printable microdrones capable of highly sophisticated surveillance/hacking/whatever are just part of everyday reality because they’re all available through do-it-yourself manufacturing technology? It’s going to be really amazing and awesome when we can generate little robots on command, but it also pretty much guarantees an epidemic of public and private spy drones.

    So enjoy public wi-fi while you still can, because hacker-stalker robo-squirrels are just a matter of time. And if that sounds alarming, just be glad the spy cyborg-squirrel, or cyborg-any-critter, technology isn’t going to be available any time soon. And hopefully never. Again.

    Posted by Pterrafractyl | July 19, 2015, 10:24 pm
  11. There was a rather startling report last year that was never proven but is certainly possible: were over 100,000 smart TVs, home networking routers, smart refrigerators and other “Internet of Things” devices turned into a giant spam-spewing “botnet”? We’ve never seen proof that such a botnet exists, but if it existed back in 2014, it presumably exists today too:

    Ars Technica
    Is your refrigerator really part of a massive spam-sending botnet?
    Ars unravels the report that hackers have commandeered 100,000 smart devices.

    by Dan Goodin – Jan 17, 2014 2:25pm CST

    Security researchers have published a report that Ars is having a tough time swallowing, despite considerable effort chewing—a botnet of more than 100,000 smart TVs, home networking routers, and other Internet-connected consumer devices that recently took part in sending 750,000 malicious e-mails over a two-week period.

    The “thingbots,” as Sunnyvale, California-based Proofpoint dubbed them in a press release issued Thursday, were compromised by exploiting default administration passwords that hadn’t been changed and other misconfigurations. A Proofpoint official told Ars the attackers were also able to commandeer devices running older versions of the Linux operating system by exploiting critical software bugs. The 100,000 hacked consumer gadgets were then corralled into a botnet that also included infected PCs, and they were then used in a global campaign involving more than 750,000 spam and phishing messages. The report continued:

    The attack that Proofpoint observed and profiled occurred between December 23, 2013 and January 6, 2014 and featured waves of malicious email, typically sent in bursts of 100,000, three times per day, targeting Enterprises and individuals worldwide. More than 25 percent of the volume was sent by things that were not conventional laptops, desktop computers or mobile devices; instead, the emails were sent by everyday consumer gadgets such as compromised home-networking routers, connected multi-media centers, televisions and at least one refrigerator. No more than 10 emails were initiated from any single IP address, making the attack difficult to block based on location – and in many cases, the devices had not been subject to a sophisticated compromise; instead, misconfiguration and the use of default passwords left the devices completely exposed on public networks, available for takeover and use.

    The Proofpoint report quickly went viral, with many mainstream news outlets breathlessly reporting the findings. The interest is understandable. The finding of a sophisticated spam network running on 100,000 compromised smart devices is extraordinary, if not unprecedented. And while the engineering effort required to pull off such a feat would be considerable, the botnet Proofpoint describes is possible. After all, many Internet-connected devices run on Linux versions that accept outside connections over telnet, SSH, and Web interfaces.

    What’s more, in an age of James Bond-like infections that bug thousands of air-gapped computers and cryptographic attacks that hijack Microsoft’s Windows update mechanism, a botnet of refrigerators, thermostats, and other smart devices is by no means impossible. Last year, an anonymous guerrilla researcher presented credible evidence that he hijacked more than 420,000 Internet-connected devices. The growing number of these devices and their advances in processing power also make these scenarios increasingly feasible.

    Where’s the smoking gun?

    Still, there’s a significant lack of technical detail for a report with such an extraordinary finding. Among other things, Proofpoint provided no details about the software the researchers say compromised the devices; it said it didn’t “sinkhole” or otherwise monitor any of the command-and-control servers that would have been necessary to coordinate botnet activities; and it didn’t convincingly explain how it arrived at the determination that 100,000 smart devices were commandeered. My doubts lingered even after a one-on-one interview with David Knight, general manager of Proofpoint’s information security division.

    Knight said Proofpoint knows appliances sent the spam directly because researchers scanned the IP addresses that sent the malicious e-mails and received responses from the Internet interfaces of name-brand devices. I pointed out that many home networks have dozens of devices connected to them. How, I asked, did researchers determine that spam was sent by, say, an infected refrigerator? Isn’t it possible that a home network with a misconfigured smart device might also have an infected Windows XP laptop that was churning out the malicious e-mails?

    Knight’s response: in some cases, the researchers directly queried the smart devices on IP addresses that sent spam and observed that the appliances were equipped with the Simple Mail Transfer Protocol or similar capabilities that caused them to send spam. In other cases, the researchers determined the devices were connected directly to the Internet rather than through a router, making them the only possible source of the spam that came from that IP address.
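    Proofpoint hasn’t published its probing method, but the kind of check Knight describes — connect to an address that sent spam and see whether it greets you like a mail server — can be sketched in a few lines of Python. This is a hedged illustration, not the firm’s actual tooling; the hostnames are placeholders.

```python
import socket

def looks_like_smtp(banner: str) -> bool:
    """An SMTP server greets new connections with a '220 ' status line
    (per RFC 5321), so the banner alone is a strong hint."""
    return banner.startswith("220")

def grab_banner(host: str, port: int = 25, timeout: float = 5.0) -> str:
    """Connect and read the first line the remote service volunteers."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        return sock.recv(512).decode("ascii", errors="replace").strip()

if __name__ == "__main__":
    # e.g. grab_banner("203.0.113.7")  # placeholder documentation address
    print(looks_like_smtp("220 fridge.example ESMTP ready"))   # a mail-capable device
    print(looks_like_smtp("HTTP/1.1 200 OK"))                  # just a web interface
```

    Note that this is exactly the step Ars criticizes: behind NAT, the banner tells you what answered the router’s forwarded port, not which device on the home network actually sent the spam.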

    Again, what Proofpoint is reporting is plausible, but it doesn’t add up. Experienced botnet researchers know that estimating the number of infected machines is a vexingly imprecise endeavor. No technique is perfect, but the scanning of public IP addresses is particularly problematic. Among other things, the intricacies of network address translation mean that the IP address footprint of a home router will be the same as the PC, smart TV, and thermostat connected to the same network.

    It’s also hard to understand why someone would go to all the trouble of infecting a smart device and then use it to send just 10 spam messages. Traditional spam botnets will push infected PCs to send as many messages as its resources allow. The botnet reported by Proofpoint requires too much effort and not enough reward.

    None of this is to say that the reported 100,000-strong smart-device botnet doesn’t exist. And as most students of logic accept, it’s not feasible to prove a negative. Still, the lack of evidence documenting any malware sample or a command and control server should give any reporter pause before repeating such an extraordinary claim. The research methodology is also a red flag.

    I contacted Paul Royal, a research scientist at Georgia Tech who specializes in network and system security, and I asked for his take on the Proofpoint report and the additional information provided by Knight. He was skeptical, too.

    “The aggregate of the information doesn’t paint an adequately compelling picture that what they’re asserting occurred actually occurred,” Royal said. “When you ask something as simple as how do you know the spam came from gadgets they say: ‘Well, we looked at the IP addresses of the systems sending the spam and when we presumably probed them we observed that they were coming from set-top-box-like devices.’ The technical analysis of that shows that there could be plenty of other explanations.”

    Was that Christmas spam you got in 2013 sent by the Great Proofpoint Botnet, enabled by a large number of misconfigured Internet-ready devices that still had vulnerabilities like default passwords in place?


    The attack that Proofpoint observed and profiled occurred between December 23, 2013 and January 6, 2014 and featured waves of malicious email, typically sent in bursts of 100,000, three times per day, targeting Enterprises and individuals worldwide. More than 25 percent of the volume was sent by things that were not conventional laptops, desktop computers or mobile devices; instead, the emails were sent by everyday consumer gadgets such as compromised home-networking routers, connected multi-media centers, televisions and at least one refrigerator. No more than 10 emails were initiated from any single IP address, making the attack difficult to block based on location – and in many cases, the devices had not been subject to a sophisticated compromise; instead, misconfiguration and the use of default passwords left the devices completely exposed on public networks, available for takeover and use.

    Sadly, we may never know. We do know, however, that changing the default password for anything connected to the internet is a really good idea. And if you have a large number of internet-ready devices all connected to the internet with their default passwords still in place and other misconfigurations, the botnet they described does seem very possible:


    The finding of a sophisticated spam network running on 100,000 compromised smart devices is extraordinary, if not unprecedented. And while the engineering effort required to pull off such a feat would be considerable, the botnet Proofpoint describes is possible. After all, many Internet-connected devices run on Linux versions that accept outside connections over telnet, SSH, and Web interfaces.

    Did someone discover that a massive number of internet-ready devices with default passwords and other misconfigurations had created the greatest Christmas spam machine ever? And does it still exist, dribbling out spam one device at a time? It’s possible.

    And since it’s also possible that you haven’t changed the default passwords on your devices, perhaps that’s something to look into. But while you can change your internet-ready devices’ passwords easily enough, for some internet-ready devices you might actually need to change more than just the password to secure them on the internet. You might need to replace the whole device with a new one. Why? Because, as the article below points out, the emerging “Internet of Things”, especially the “Internet of Relatively Cheap Things”, might actually be the “Internet of Relatively Cheap Things Sharing the Same Set of Encryption Keys”:

    eWeek
    Cryptographic Key Reuse Exposed, Leaving Users at Risk

    A lack of unique keys in embedded devices is revealed, leaving such devices subject to impersonation, man-in-the-middle or passive decryption attacks.

    By Sean Michael Kerner | Posted 2015-11-30

    The promise of encryption is that it keeps information hidden from public view. But what happens when multiple devices share the same encryption key? According to a report from security firm SEC Consult, millions of devices are at risk because vendors have been reusing HTTPS and Secure Shell (SSH) encryption keys.

    “Research by Stefan Viehböck of SEC Consult has found that numerous embedded devices accessible on the public Internet use non-unique X.509 certificates and SSH host keys,” CERT warns in vulnerability note #566724. “Vulnerable devices may be subject to impersonation, man-in-the-middle, or passive decryption attacks.”

    Viehböck looked at more than 4,000 devices from 70 vendors and found only 580 unique private keys were in use. There is a significant amount of reuse across keys that SEC Consult has estimated to impact approximately 50 vendors and 900 products. CERT’s vulnerability note explains that for the majority of vulnerable devices, vendors reused certificates and keys across their own product lines.
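    The kind of reuse Viehböck measured (4,000+ devices, only 580 unique keys) boils down to fingerprinting each device’s public key and counting duplicates. Here’s a minimal sketch of that bookkeeping; the device names and key blobs are made-up stand-ins for DER-encoded certificates or SSH host keys collected from a scan, not data from the report.

```python
import hashlib
from collections import Counter

def fingerprint(key_blob: bytes) -> str:
    """Short SHA-256 fingerprint of a public key blob."""
    return hashlib.sha256(key_blob).hexdigest()[:16]

def reuse_report(device_keys: dict) -> Counter:
    """Map fingerprint -> number of devices presenting that key."""
    return Counter(fingerprint(blob) for blob in device_keys.values())

# Hypothetical scan results: three products shipping the same SDK's
# baked-in key, one device with a genuinely unique key.
scanned = {
    "router-a": b"FIRMWARE-SDK-KEY-1",
    "camera-b": b"FIRMWARE-SDK-KEY-1",
    "dvr-c":    b"FIRMWARE-SDK-KEY-1",
    "nas-d":    b"UNIQUE-KEY-FOR-NAS-D",
}

report = reuse_report(scanned)
shared = {fp: n for fp, n in report.items() if n > 1}
print(f"{len(scanned)} devices, {len(report)} distinct keys, "
      f"{len(shared)} key(s) shared")
```

    Any fingerprint with a count above one means every device in that group can impersonate, or passively decrypt traffic to, every other device in the group.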

    “There are some instances where identical certificates and keys are used by multiple vendors,” CERT’s vulnerability note states. “In these cases, the root cause may be due to firmware that is developed from common SDKs (Software Development Kits), or OEM (Original Equipment Manufacturer) devices using ISP-provided firmware.”

    Tod Beardsley, research manager at Rapid7, is not surprised at the SEC Consult findings. When auditing inexpensive embedded devices, his No. 1 complaint is when the administrative interface isn’t encrypted at all, he said.

    “However, even when I do see that there is an encrypted interface, they’re often vulnerable to the shared key problem detailed by VU#566724,” Beardsley told eWEEK. “The problem here is that it’s difficult for low-end, low-margin device managers to implement unique key generation on individual devices.”

    Plus, generating unique keys as part of the manufacturing process cuts into a vendor’s already thin margins, and designing something that generates a key on first use is going to require some development and quality assurance effort, Beardsley said.
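    The fix Beardsley is describing — generate the key on first use instead of baking one into the firmware image — is a small amount of code. As a hedged sketch, here’s the pattern using a random per-device secret as a stand-in (a real device would generate an actual SSH host key pair, e.g. by running ssh-keygen at first boot):

```python
import secrets
from pathlib import Path

def ensure_device_secret(path: Path) -> bytes:
    """First boot: generate a secret unique to this physical device and
    persist it. Later boots: reuse the stored secret. Nothing secret
    ever ships inside the firmware image itself."""
    if path.exists():
        return path.read_bytes()
    secret = secrets.token_bytes(32)  # unique per device, not per firmware build
    path.write_bytes(secret)
    return secret
```

    Two devices flashed with identical firmware then end up with different keys, which is the whole point: compromising one unit no longer compromises the product line.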

    “The problem is that software developers and security architects haven’t yet come together to design an easy-to-use, push button library that embedded devices leverage routinely,” he said. “As technologists, we need to get ahead of this problem and design encryption solutions that are not only secure, but easy to implement.”

    Using hardcoded private keys is a security disaster, according to Dr. Yehuda Lindell, co-founder and chief scientist at Dyadic. Lindell sees a number of reasons why the private keys may have been left exposed and reused by multiple vendors.

    “Sometimes, keys are hardwired for the purpose of development and testing, and are just forgotten when moving the software into production,” Lindell told eWEEK. “Other times, developers don’t know where to put the keys and mistakenly think that hardwiring them is a good idea.”

    Angel Grant, senior manager at RSA, the security division of EMC, noted that security key management has always been a challenge and will continue to propagate as the Internet of things (IoT) expands.

    “There currently is no model of trust between machines, so organizations need to pause and think about the potential attack vectors that will leverage the potential computing power of IOT to create things like a Botnet of Things (BOTOT) or Thing in the Middle (TITM),” Grant told eWEEK.

    Best Practices

    There are a number of things that vendors can and should do to limit the risk of cryptographic key reuse. In many cases, however, the challenge lies with the actual end users of devices.

    “The problem with this sort of vulnerability is that the device owner [user] actually can’t do anything other than replace the device,” Lindell said. “It’s also not necessarily the case that a vendor can issue a simple firmware update. This is because not all of these devices may support such a remote update securely.”

    Kevin Bocek, vice president of security strategy and Threat intelligence at Venafi, said his company along with the National Institute of Standards and Technology (NIST) recently issued a new publication titled Security of Interactive and Automated Access Management using Secure Shell (SSH). The NIST document provides guidance on several critical aspects of SSH, including its underlying technologies, inherent vulnerabilities and best practices for managing SSH keys throughout their life cycle.

    “All SSH access depends on the proper management and security of SSH keys,” Bocek said. “If your organization does not have an active SSH key management and security project, it is at risk.”

    There is also a short-term fix that can help to limit the risk of being exposed to reused cryptographic keys.

    “As far as protecting today’s vulnerable devices, moving them off the general Internet and into a VPN controlled network is probably the best short-term solution,” Beardsley suggested. “VPNs are an increasingly important component of modern enterprise networks. There are pretty easy-to-use interfaces on laptops, tablets and phones today, and their use is getting more normalized on otherwise public networks.”

    “The problem with this sort of vulnerability is that the device owner [user] actually can’t do anything other than replace the device”
    And note that if “replace device” becomes the default option following a future wave of IoT Botnet attacks, that’s going to be a lot of replaced devices:

    “Viehböck looked at more than 4,000 devices from 70 vendors and found only 580 unique private keys were in use. There is a significant amount of reuse across keys that SEC Consult has estimated to impact approximately 50 vendors and 900 products. CERT’s vulnerability note explains that for the majority of vulnerable devices, vendors reused certificates and keys across their own product lines.”

    A lot.

    But look on the bright side. At least botnets don’t fly through your neighborhood seeking vulnerable devices to infect with sophisticated malware. And when there eventually are botnets flying through the neighborhood, don’t forget that there’s always another bright side.

    Posted by Pterrafractyl | December 1, 2015, 9:26 pm
  12. Well, it’s been quite a year for the Internet of Hackable Things:

    Wired

    How the Internet of Things Got Hacked

    Andy Greenberg and Kim Zetter
    12.28.15, 7:00 am

    There was once a time when people distinguished between cyberspace, the digital world of computers and hackers, and the flesh-and-blood reality known as meatspace. Anyone overwhelmed by the hackable perils of cyberspace could unplug and retreat to the reliable, analog world of physical objects.

    But today, cheap, radio-connected computers have invaded meatspace. They’re now embedded in everything from our toys to our cars to our bodies. And this year has made clearer than ever before that this Internet of Things introduces all the vulnerabilities of the digital world into our real world.

    Security researchers exposed holes in everything from Wi-Fi-enabled Barbie dolls to two-ton Jeep Cherokees. For now, those demonstrations have yet to manifest in real-world malicious hacks, says security entrepreneur Chris Rouland. But Rouland, who once ran the controversial government hacking contractor firm Endgame, has bet his next company, an Internet-of-Things-focused security startup called Bastille, on the risks of hackable digital objects. And he argues that public understanding of those risks is on the rise. “2015 has been the pivotal year when we saw awareness and vulnerability discoveries published about ‘things’,” Rouland says. He’s added a new slogan to his powerpoint presentations: “Cyber Barbie is now part of the kill chain.”

    Here are a few of the hacks that made 2015 the year of insecure internet things:

    Internet-Enabled Automobiles

    Security researchers Charlie Miller and Chris Valasek forever altered the automobile industry’s notion of “vehicle safety” in July when they demonstrated for WIRED that they could remotely hack a 2014 Jeep Cherokee to disable its transmission and brakes. Their work led Fiat Chrysler to issue an unprecedented recall for 1.4 million vehicles, mailing out USB drives with a patch for the vulnerable infotainment systems and blocking the attack on the Sprint network that connected its cars and trucks.

    That Jeep attack turned out to be only the first in a series of car hacks that rattled the auto industry through the summer. At the DefCon hacker conference in August, Marc Rogers, principal security researcher for CloudFlare, and Kevin Mahaffey, co-founder and CTO of mobile security firm Lookout, revealed a suite of vulnerabilities they found in the Tesla Model S that would have allowed someone to connect their laptop to the car’s network cable behind the driver’s-side dashboard, start the $100,000 vehicle with a software command, and drive off with it—or they could plant a remote-access Trojan on the car’s internal network to later remotely cut the engine while someone was driving. Other vulnerabilities they found could theoretically have been exploited remotely without needing physical access to the car first, though they didn’t test these. Tesla patched most of these in an over-the-air patch delivered directly to vehicles.

    Also at Defcon this year, security researcher Samy Kamkar showed off a book-sized device he’d created called OwnStar, which could be planted on a GM vehicle to intercept communications from a driver’s OnStar smartphone app and give the hacker the ability to geolocate the car, unlock it at will, and even turn on its engine. Kamkar soon found that similar tricks worked for BMW and Mercedes Benz apps, too. Just days later, researchers at the University of California at San Diego showed that they could remotely exploit a small dongle that insurance companies ask users to plug into their dashboards to monitor their car’s speed and acceleration. Through that tiny gadget’s radio, they were able to send commands to a Corvette that disabled its brakes.

    All of those high-profile hacks were meant to send a message not only to the automobile industry, but to the consumers and regulators who hold them accountable. “If consumers don’t realize this is an issue, they should, and they should start complaining to carmakers,” Miller told WIRED after the Jeep hack. “This might be the kind of software bug most likely to kill someone.”

    Medical Devices

    Hacked cars aren’t the only devices in the Internet of Things that are capable of killing, of course. Critical medical equipment and devices also have software and architecture vulnerabilities that would let malicious actors hijack and control them, with potentially deadly consequences. Just ask the cardiologist for Dick Cheney who, fearing that an attacker could deliver a fatal shock to the former vice president through his pacemaker, disabled the device’s Wi-Fi capability during his time in office. Students at the University of Alabama showed why Cheney’s cardiologist had cause for concern this year when they hacked the pacemaker implanted in an iStan—a robotic dummy patient used to train medical students—and theoretically killed it. “[W]e could speed the heart rate up; we could slow it down,” Mike Jacobs, director of the university’s simulation program, told Motherboard. “If it had a defibrillator, which most do, we could have shocked it repeatedly.”

    Drug infusion pumps—which dole out morphine, chemotherapy, antibiotics, and other drugs to patients—were also in the spotlight this year. Security researcher Billy Rios took a special interest in them after he had a stint in the hospital for emergency surgery. After taking a close look at the ones that were used in his hospital, Rios found serious vulnerabilities in them that would allow a hacker to surreptitiously and remotely change the dose of drugs administered to patients. The pump maker patched some of the vulnerabilities but insisted others weren’t a problem.

    The Food and Drug Administration, which oversees the safety approval process of medical equipment, has taken note of the problems found in all of these devices and others and is beginning to take steps to remedy them. The federal agency began working this year with a California doctor to find a way to fix security problems found in insulin pumps specifically. But the remedies they devise for these pumps could serve as a model for securing other medical devices as well.

    Unfortunately, many of the problems with medical devices can’t be fixed with a simple software patch—instead, they require the systems to be re-architected. All of this takes time, however, which means it could be years before hospitals and patients see more secure devices.

    Everything Else

    For any given consumer product, there seemed to be at least one company this year that eagerly added Wi-Fi to it. Securing that Wi-Fi, on the other hand, seemed to be a more distant priority.

    When Mattel added Wi-Fi connectivity to its Hello Barbie to enable what it described as real-time artificially intelligent conversations, it left its connection to the Hello Barbie smartphone app open to spoofing and interception of all the audio the doll records. A Samsung “smart fridge,” designed to synch over Wi-Fi with the user’s Google Calendar, failed to validate SSL certificates, leaving users’ Gmail credentials open to theft. Even baby monitors, despite the creepy risk of hackers spying on kids, remain worryingly insecure: A study from the security firm Rapid7 found that all nine of the monitors it tested were relatively easy to hack.
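The Samsung fridge bug comes down to one missing step: the client never verified that the certificate presented by the server actually belonged to the host it was talking to. As a rough sketch of the difference (using Python's standard `ssl` module purely for illustration; the fridge itself obviously runs different software):

```python
import ssl

# What a TLS client should enforce. Python's default context checks
# both the certificate chain and that the cert matches the hostname.
secure_ctx = ssl.create_default_context()
assert secure_ctx.verify_mode == ssl.CERT_REQUIRED
assert secure_ctx.check_hostname

# The fridge-style mistake: accept any certificate at all. A
# man-in-the-middle on the local network can then present his own
# cert and read credentials like a Gmail login in transit.
broken_ctx = ssl._create_unverified_context()
assert broken_ctx.verify_mode == ssl.CERT_NONE
assert not broken_ctx.check_hostname
```

A client that skips those two checks is trivially interceptable no matter how strong the encryption itself is, which is exactly what the researchers demonstrated against the fridge.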

    Not even guns have been spared from the risks of hacking. Married hacker couple Runa Sandvik and Michael Auger in July showed WIRED that they could take control of a Wi-Fi-enabled TrackingPoint sniper rifle. Sandvik and Auger exploited the rifle’s insecure Wi-Fi to change variables in the gun’s self-aiming scope system, allowing them to disable the rifle, make it miss its target, or even make it hit a target of their choosing instead of the intended one. “There’s a message here for TrackingPoint and other companies,” Sandvik told WIRED at the time. “When you put technology on items that haven’t had it before, you run into security challenges you haven’t thought about before.” That rule certainly applies to any consumer-focused company thinking of connecting their product to the Internet of Things. But for those whose product can kill—whether a gun, a medical implant, or a car—let’s hope the lesson is taken more seriously in 2016.

    That was, uh, a bit terrifying. It’s almost hard to choose the creepiest hack from such a selection, although that hackable Barbie Doll just might have the greatest creep potential.

    Posted by Pterrafractyl | December 29, 2015, 11:29 am
  13. If you’ve ever wondered why it is that online web ads, which are often little programs you can interact with, aren’t avenues for infecting your computer with malware, here’s your answer: you were wondering incorrectly, because online advertisements are already increasingly “malvertisements”:

    Ad Age
    What You Should Know About Yahoo’s Malvertising Attack
    Malwarebytes’ Jerome Segura Explains How the Attack Happened and How People Can Protect Themselves

    By Tim Peterson. Published on August 05, 2015.

    People often cite lethargic page-load speeds or general aesthetics as the reasons they install ad-blocking software on their web browsers. But hackers are making perhaps the best case for people to block banner ads — and for advertisers and publishers to take ad-blocking seriously.

    Hackers have been exploiting Adobe‘s Flash software, which brands use to make and display visually appealing and interactive online ads, to take over personal computers entirely and hold them hostage, or to send fake traffic to sites built to siphon ad spending. According to cybersecurity company RiskIQ, the number of ads created for malicious reasons — called “malvertisements” — increased by 260% in the first quarter from the period a year earlier.

    On Monday cybersecurity company Malwarebytes said Yahoo’s ad network had fallen victim to a "malvertising" attack. Yahoo said in a statement that its team took action as soon as it learned of the issue but that “the scale of the attack was grossly misrepresented in initial media reports.”

    Ad Age spoke with Malwarebytes’ senior security researcher, Jerome Segura, to understand why these types of attacks appear to be happening more often, what Flash has to do with it and what can be done to prevent future attacks. Adobe declined to comment.

    The transcript has been condensed for clarity and length.

    Advertising Age: How did this happen? Yahoo’s one of the biggest online publishers out there and operates one of the higher-profile ad networks, so it seems like they should be among the least vulnerable to this kind of attack.

    Jerome Segura: Right, exactly. It is quite unusual to see, in this case, the publisher and the advertiser caught at the same time. We have observed malicious advertising before where you have companies like Google‘s DoubleClick where the ads are displayed on various websites. But in this case it was on the main Yahoo site as well as some of the various portals. The malvertising attack itself, the chain went through a third-party ad server called AdJuggler that Yahoo had been dealing with already. What happened is a rogue advertiser basically abused AdJuggler, which in turn affected Yahoo because they were publishing their ads on their main site.

    One of the big issues of malvertising: There are many layers and this is due to things like real-time bidding where various advertisers can bid on an ad using ad platforms. It’s a very complex situation. There are billions of impressions each day. I think Yahoo itself admitted in its statement that this is a problem that comes with the business of online advertising. You won’t be able to catch all of the attacks before they actually happen. To some extent I think that’s true.

    Ad Age: The crazy thing to me is that, from what I’ve read, it sounds like the easiest part of all this is in getting these bad ads to run on publishers’ sites.

    Mr. Segura: There are many techniques that cybercriminals are using to fool ad networks and advertising agencies. For starters it’s quite easy on a lot of ad networks — maybe not Yahoo’s — to register an account as an advertiser and start uploading your ad and bidding for spots. It’s very anonymous. You can register without providing a lot of information necessarily. There is not really a very strong barrier to entry for advertisers to start going on to ad platforms and pushing their ads. One of the reasons is they’re willing to give money to the ad networks to run the ads, like any normal advertiser, so it is in the ad networks’ interest to have the advertisers come and upload their creative.

    It is definitely an issue that’s been shown, and a lot of people have wondered how this is possible and whether there isn’t some kind of monitoring in place to detect these kinds of advertisers that are malicious in nature. There are different techniques that are used. Some advertisers will start legitimately to gain the trust of the ad network and later turn on ads that are malicious, but only activate them a few times a day to not create too much noise.

    Others that know they will be caught, once they get into an ad network they push it as much as possible in a short time frame before somebody actually notices the irregular activity and shuts them down. Because it’s a very layered, complex system and billions of impressions, there is always room for abuse.

    Ad Age: From an audience perspective, one of the scarier pieces of this is that if I visited one of Yahoo’s affected sites while these bad ads were running, my computer could have been infected even if I didn’t click on any ads, right?

    Mr. Segura: Exactly. Malvertising does not require any user interaction. Simply browsing in this case to Yahoo.com and the page loading with the ad would be enough for the code to silently try to infect your computer. In terms of how successful that is, it’s actually pretty, pretty high. There was a report from Cisco that showed that in 40% of cases users that were faced with a malvertising attack would be infected because in most cases their computers aren’t fully secured properly. The 40% ratio of infection is definitely something that the bad guys are enjoying at the moment because they know when they run one of these malvertising campaigns, the budget they dedicate to it will see a good return on investment.

    Ad Age: It feels like these malvertising attacks are happening more often. RiskIQ said that 260% more malvertisements ran in the first quarter of this year than in the first quarter of last year. Why are these becoming more common?

    Mr. Segura: That’s a good question. First of all those numbers are only attacks that have been detected. There are a lot of other attacks happening that nobody really sees or is able to immediately pinpoint. One example of this is earlier this year there was a malvertising attack that lasted almost two months and used a zero-day exploit — exploiting a vulnerability before the software maker is aware of the vulnerability — in the Flash player. But overall you’re right. The trend is that there are more attacks and the campaigns seem to last longer and affect sites that have higher profiles. I think one of the primary reasons is right now cybercriminals have a lot of vulnerabilities and exploits that work really well. In the last few months we have had several Flash player zero-days or vulnerabilities where there was no patch from the vendor for several days yet the exploits were already being used for malvertising attacks. The current situation, especially due to those Flash player exploits, is making it increasingly attractive for cybercriminals.

    Ad Age: Why does Flash always seem to be at the root of these malvertising attacks?

    Mr. Segura: Typically cybercriminals try to exploit a piece of software that is very common and also give them a good return in terms of the effort spent trying to find exploits. With Flash what’s interesting is we’ve seen in a few high-profile cases where you can combine the exploit — that is going to find the vulnerability in the Flash player and be able to open the machine for an infection — and combine that with the advert itself in one package. So not only can you have an ad that works perfectly fine in Flash, but that ad contains the exploit code. It’s pretty unique. It’s not something you can do with other plug-ins or pieces of software. In terms of what is required from the attacker point of view, it’s pretty much streamlined. It’s a very efficient way to compromise systems.

    Ad Age: Is this a desktop-only problem, or is it something that’s also going on with the mobile web or even ads in mobile apps?

    Mr. Segura: This particular Yahoo case was for desktop computers and Windows computers. But malvertising in general isn’t just about malware. We see actually a lot of malvertising that targets mobile devices and is not primarily malware-related, like downloading an app you weren’t prepared for. More recently we’ve seen malvertising attacks that have these pop-ups you couldn’t get rid of for tech-support scams. That was very popular on Apple’s iOS. You’d be browsing a site and this pop-up would not let you close it and ask you to call a number for support, which turned out to be a scam. As the number of users on mobile has surpassed desktop users, malvertisers are infecting or exploiting users in different ways.

    Ad Age: What can publishers do about this?

    Mr. Segura: They don’t have a lot of control in all of this unfortunately. Most of them offer content for free, so advertising is part of their revenue and an important part of their revenue. In terms of how to minimize this, one of the important things they can do is pick advertisers wisely and go for a well known, top-level ad network, for example Google’s DoubleClick or Yahoo Bing Contextual Ads. You know, the major ones. These traditionally have more resources and stricter controls in terms of quality assurance in terms of the type of ads that go through. So you are definitely minimizing your risk by going with a popular ad network.

    Ad Age: Wouldn’t Yahoo’s ad network have been considered in that tier, at least before this attack was revealed? And so how comfortable should people feel with DoubleClick or Bing’s network until something potentially happens and they’re affected just like Yahoo has been?

    Mr. Segura: It’s perfectly valid. Overall the number of incidents for the major ad networks is much, much lower than those that are less reputable. There’s no such thing as no incident when it comes to security. It’s about the frequency but also the duration of an incident. So by going with a major ad network, you know that they’re more likely to respond in a timely manner. That’s what really matters, I think.

    Ad Age: What about advertisers and ad networks? What can they do?

    Mr. Segura: They already have a lot of things in place to detect fraud. For example when a new advertiser comes on board, they don’t let them get the full privilege of running campaigns on major sites. They might start with a subset of sites that are lower profile, and they also may have certain features that are disabled by default. For example, they might only be able to carry text-based ads until they’ve been around for long enough that they’re trusted and can now introduce more dynamic ads, Flash-based ads for example. Overall what they really can do is — knowing that incidents do happen — they need to prepare themselves for what to do when they happen: what is the response, how fast can they react to an incident. Each second that goes by, somebody else is getting infected.

    Ad Age: What can people do to protect themselves from getting infected?

    Mr. Segura: Getting your computers patched is the primary piece of advice anybody can give. Obviously a lot of machines aren’t patched and are getting compromised because of that. But with what’s happened this year, we’ve seen that patching is not enough because there are more and more zero-day exploits out there. People need to start thinking of going beyond patching. Traditionally we’ve been talking about anti-virus and anti-malware software, which is critical.

    But the problem is with a lot of these attacks, because they’re happening in real time, the malware that is being distributed is so novel that most antivirus software products aren’t even detecting it at the specific time it’s been released. That’s because criminals are able to test the malware by running it against antivirus software. The next solution is being able to block attacks as early as possible. With Flash-based attacks, one of the simple things you can do is to either remove Flash, which in the long term I don’t think is the best solution because eventually attackers will move to something else. Or there’s a feature in Flash that allows the user to activate Flash when they need it. That’s a major component in your defence because all these drive-by-download attacks assume that Flash is enabled by default. Looking at the scope of the attacks, they target vulnerabilities wherever they are in the browser. So users need to be able to use the right tools that prevent the vulnerabilities from being exploited.

    “With Flash-based attacks, one of the simple things you can do is to either remove Flash, which in the long term I don’t think is the best solution because eventually attackers will move to something else. Or there’s a feature in Flash that allows the user to activate Flash when they need it. That’s a major component in your defence because all these drive-by-download attacks assume that Flash is enabled by default.”
    Word to the wise.

    So if you’re a Windows user who goes to sites like the New York Times or the BBC and you also have Adobe Flash or Microsoft Silverlight installed, you probably want to change those Adobe Flash permissions. Soon. Or better yet, yesterday:

    Ars Technica

    Big-name sites hit by rash of malicious ads spreading crypto ransomware [Updated]
    New malvertising campaign may have exposed tens of thousands in the past 24 hours.

    by Dan Goodin – Mar 15, 2016 12:37pm CDT

    Mainstream websites, including those published by The New York Times, the BBC, MSN, and AOL, are falling victim to a new rash of malicious ads that attempt to surreptitiously install crypto ransomware and other malware on the computers of unsuspecting visitors, security firms warned.

    The tainted ads may have exposed tens of thousands of people over the past 24 hours alone, according to a blog post published Monday by Trend Micro. The new campaign started last week when “Angler,” a toolkit that sells exploits for Adobe Flash, Microsoft Silverlight, and other widely used Internet software, started pushing laced banner ads through a compromised ad network.

    According to a separate blog post from Trustwave’s SpiderLabs group, one JSON-based file being served in the ads has more than 12,000 lines of heavily obfuscated code. When researchers deciphered the code, they discovered it enumerated a long list of security products and tools it avoided in an attempt to remain undetected.

    Update: According to a just-published post from Malwarebytes, a flurry of malvertising appeared over the weekend, almost out of the blue. It hit some of the biggest publishers in the business, including msn.com, nytimes.com, bbc.com, aol.com, my.xfinity.com, nfl.com, realtor.com, theweathernetwork.com, thehill.com, and newsweek.com. Affected networks included those owned by Google, AppNexus, AOL, and Rubicon. The attacks are flowing from two suspicious domains, trackmytraffic[.]biz and talk915[.]pw.

    The ads are also spreading on sites including answers.com, zerohedge.com, and infolinks.com, according to SpiderLabs. Legitimate mainstream sites receive the malware from domain names that are associated with compromised ad networks. The most widely seen domain name in the current campaign is brentsmedia[.]com. Whois records show it was owned by an online marketer until January 1, when the address expired. It was snapped up by its current owner on March 6, a day before the malicious ad onslaught started.

    “The tainted ads may have exposed tens of thousands of people over the past 24 hours alone, according to a blog post published Monday by Trend Micro. The new campaign started last week when “Angler,” a toolkit that sells exploits for Adobe Flash, Microsoft Silverlight, and other widely used Internet software, started pushing laced banner ads through a compromised ad network.”
    So, all in all, it sounds like we have a crypto-ransomware mini-pocalypse due largely to malicious elements predictably infiltrating an online marketing industry that operates on trust. You have to wonder if this kind of misplaced trust is limited to online “malvertisement” peddlers. Hmmm.
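The fingerprinting trick SpiderLabs described, where the ad script enumerates the visitor's machine for security tools before deciding whether to attack, is simple to sketch. The tool names below are hypothetical stand-ins; the actual list in the deobfuscated code was far longer:

```python
# Hypothetical sketch of malvertising evasion logic: check the
# visitor's environment and stay dormant if analysis tools are found.
AVOID_LIST = {"wireshark", "fiddler", "vmware", "virtualbox", "malwarebytes"}

def should_stay_dormant(detected_software):
    """Return True if any blacklisted tool is present, in which case
    the ad serves benign content instead of the exploit."""
    return bool({s.lower() for s in detected_software} & AVOID_LIST)

# A researcher's VM gets the harmless ad; an ordinary PC gets attacked.
assert should_stay_dormant(["Chrome", "Wireshark"])
assert not should_stay_dormant(["Chrome", "Flash Player"])
```

This is also why detection counts like RiskIQ's are a floor rather than a ceiling: campaigns that hide from researchers' tooling are undercounted by design.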

    Posted by Pterrafractyl | March 16, 2016, 2:26 pm
  14. This probably should go without saying, but if you own a smartphone running Google’s Android operating system, it’s really not a good idea to download apps from anywhere other than the Google Play store. Even if you really, really, really want to play Pokemon Go:

    Proofpoint

    DroidJack Uses Side-Load…It’s Super Effective! Backdoored Pokemon GO Android App Found

    Proofpoint Staff
    Thursday, July 7, 2016 – 18:30

    Overview

    Pokemon GO is the first Pokemon game sanctioned by Nintendo for iOS and Android devices. The augmented reality game was first released in Australia and New Zealand on July 4th and users in other regions quickly clamored for versions for their devices. It was released on July 6th in the US, but the rest of the world will remain tempted to find a copy outside legitimate channels. To that end, a number of publications have provided tutorials for “side-loading” the application on Android. However, as with any apps installed outside of official app stores, users may get more than they bargained for.

    In this case, Proofpoint researchers discovered an infected Android version of the newly released mobile game Pokemon GO [1]. This specific APK was modified to include the malicious remote access tool (RAT) called DroidJack (also known as SandroRAT), which would virtually give an attacker full control over a victim’s phone. The DroidJack RAT has been described in the past, including by Symantec [2] and Kaspersky [3]. Although we have not observed this malicious APK in the wild, it was uploaded to a malicious file repository service at 09:19:27 UTC on July 7, 2016, less than 72 hours after the game was officially released in New Zealand and Australia.

    Likely due to the fact that the game had not been officially released globally at the same time, many gamers wishing to access the game before it was released in their region resorted to downloading the APK from third parties. Additionally, many large media outlets provided instructions on how to download the game from a third party [4,5,6]. Some even went further and described how to install the APK downloaded from a third party [7]:

    “To install an APK directly you’ll first have to tell your Android device to accept side-loaded apps. This can usually be done by visiting Settings, clicking into the Security area, and then enabling the “unknown sources” checkbox.”

    Unfortunately, this is an extremely risky practice and can easily lead users to install malicious apps on their own mobile devices. Should an individual download an APK from a third party that has been infected with a backdoor, such as the one we discovered, their device would then be compromised.

    Individuals worried about whether or not they downloaded a malicious APK have a few options to help them determine if they are now infected. First, they may check the SHA256 hash of the downloaded APK. The legitimate application that has been often linked to by media outlets has a hash of 8bf2b0865bef06906cd854492dece202482c04ce9c5e881e02d2b6235661ab67, although it is possible that there are updated versions already released. The malicious APK that we analyzed has a SHA256 hash of 15db22fd7d961f4d4bd96052024d353b3ff4bd135835d2644d94d74c925af3c4.
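That hash check is easy to do yourself. A minimal Python sketch (not Proofpoint's own tooling) using the two SHA256 values quoted above:

```python
import hashlib

# SHA256 hashes quoted in the Proofpoint write-up.
LEGIT_SHA256 = "8bf2b0865bef06906cd854492dece202482c04ce9c5e881e02d2b6235661ab67"
MALICIOUS_SHA256 = "15db22fd7d961f4d4bd96052024d353b3ff4bd135835d2644d94d74c925af3c4"

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in chunks so a large APK need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def check_apk(path):
    digest = sha256_of(path)
    if digest == MALICIOUS_SHA256:
        return "known-bad: matches the DroidJack-infected APK"
    if digest == LEGIT_SHA256:
        return "matches the hash quoted for the legitimate APK"
    return "unknown: may simply be a newer release, verify elsewhere"
```

As the article cautions, a non-matching hash is not proof of infection, since updated legitimate releases will hash differently too.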

    Another simple method to check if a device is infected would be to check the installed application’s permissions, which can typically be accessed by first going to Settings -> Apps -> Pokemon GO and then scrolling down to the PERMISSIONS section. Figure 1 shows a list of permissions granted to the legitimate application. These permissions are subject to change depending on the device’s configuration; for example the permissions “Google Play billing service” and “receive data from Internet” are not shown in the image but were granted on another device when downloading Pokemon GO from the Google Play Store. In Figures 2 and 3, the outlined permissions have been added by DroidJack. Seeing those permissions granted to the Pokemon GO app could indicate that the device is infected, although these permissions are also subject to change in the future.
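The permissions comparison can be sketched in a few lines as well. Everything below is illustrative: both the baseline set and the extra RAT permissions are hypothetical examples, not the app's real manifest.

```python
# Hypothetical sketch: diff the permissions an installed "Pokemon GO"
# reports against a baseline of what the legitimate app is expected
# to request. Both sets are made-up examples for illustration.
EXPECTED = {
    "android.permission.INTERNET",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.CAMERA",
    "android.permission.VIBRATE",
}

def suspicious_permissions(reported):
    """Return any permissions the installed app holds beyond the
    baseline. SMS, call, and audio permissions on a game are red flags."""
    return sorted(set(reported) - EXPECTED)

# A DroidJack-style backdoor needs remote-control permissions:
infected = EXPECTED | {"android.permission.READ_SMS",
                       "android.permission.RECORD_AUDIO"}
assert suspicious_permissions(infected) == [
    "android.permission.READ_SMS",
    "android.permission.RECORD_AUDIO",
]
# A clean install reports nothing beyond the baseline:
assert suspicious_permissions(EXPECTED) == []
```

Since, as noted above, the legitimate permission list varies by device and app version, a diff like this is a heuristic, not proof either way.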

    Conclusion

    Installing apps from third-party sources, other than officially vetted and sanctioned corporate app stores, is never advisable. Official and enterprise app stores have procedures and algorithms for vetting the security of mobile applications, while side-loading apps from other, often questionable sources, exposes users and their mobile devices to a variety of malware. As in the case of the compromised Pokemon GO APK we analyzed, the potential exists for attackers to completely compromise a mobile device. If that device is brought onto a corporate network, networked resources are also at risk.

    Even though this APK has not been observed in the wild, it represents an important proof of concept: namely, that cybercriminals can take advantage of the popularity of applications like Pokemon GO to trick users into installing malware on their devices. Bottom line, just because you can get the latest software on your device does not mean that you should. Instead, downloading available applications from legitimate app stores is the best way to avoid compromising your device and the networks it accesses.

    “Even though this APK has not been observed in the wild, it represents an important proof of concept: namely, that cybercriminals can take advantage of the popularity of applications like Pokemon GO to trick users into installing malware on their devices. Bottom line, just because you can get the latest software on your device does not mean that you should. Instead, downloading available applications from legitimate app stores is the best way to avoid compromising your device and the networks it accesses.”

    That is indeed good advice: Just say No to “side-loading”. Be safe and download your malware apps from the Google Play store:

    Ars Technica

    Fake Pokémon Go app on Google Play infects phones with screenlocker
    “Pokemon Go Ultimate” requires battery removal or Device Manager to be uninstalled.

    by Dan Goodin – Jul 15, 2016 2:20pm CDT

    Badware purveyors trying to capitalize on the ongoing Pokémon Go frenzy have achieved an important milestone by sneaking their fake wares into the official Google Play marketplace, security researchers said Friday.

    Researchers from antivirus provider Eset report finding at least three such apps in the Google-hosted marketplace. Of the three, the one titled “Pokemon Go Ultimate” posed the biggest threat because it deliberately locks the screen of devices immediately after being installed. In many cases, restarting an infected phone isn’t enough to unlock the screen. Infected phones can ultimately be unlocked either by removing the battery or by using the Android Device Manager.

    Once the screen has been unlocked and the device has restarted, the app—which by now has the title PI Network—is removed from the device’s app menu. Still, it continues to run in the background and surreptitiously clicks on ads in an attempt to generate revenue for its creators.

    “This is the first observation of lockscreen functionality being successfully used in a fake app that has landed on Google Play,” Eset malware researcher Lukas Stefanko wrote in Friday’s post. “It is important to note that from there it takes just one small step to add a ransom message and create the first lockscreen ransomware on Google Play.”

    Eset discovered two other fake Pokémon Go apps inhabiting Google Play, one named “Guide & Cheats for Pokemon Go” and the other “Install Pokemongo.” Both deliver ads carrying fraudulent, scary-sounding messages that are designed to trick users into buying expensive, unnecessary services. One such message claims the device is infected with malware and prompts the user to spend money to get the malicious apps removed.

    The apps are by no means the first case of scammers attempting to exploit the ongoing Pokémon Go craze. Last week, researchers from security firm Proofpoint discovered a backdoored version of the Pokémon Go app. It contained all the functions of the legitimate app, but behind the scenes it also included a remote access tool called DroidJack (aka SandroRAT), which gives an attacker full control over an infected phone.

    The malicious app was available in third-party app stores. While many people rightly avoid such marketplaces because of the increased chances that they include harmful wares, some die-hard Pokémon fans have been tempted to suspend the taboo against sideloading because the official Pokémon Go hasn’t been available in many countries. The apps discovered by Eset, by contrast, were available in Google Play. Google removed them after Eset reported them. The continued presence of malicious apps inside the official Android marketplace underscores the significant limits of Google’s attempts to detect malicious or abusive behavior before admitting titles.

    People who want to run Pokémon Go on their Android phone should download the app only from Google Play, and even then, they should closely inspect the publisher, the number of downloads, and other data for signs of fraud before installing.

    “People who want to run Pokémon Go on their Android phone should download the app only from Google Play, and even then, they should closely inspect the publisher, the number of downloads, and other data for signs of fraud before installing.”

    Isn’t getting software in the smartphone era fun? It’s free. And easy to access. And maybe or maybe not a malware trojan horse. The software industry has always had to worry about shady operators peddling malicious software. But in the pre-internet age you didn’t have to worry as much about anyone with an internet connection selling you software. It was more of a pain in the ass to get you to install malware. Especially since the software you were installing wasn’t connected to a global internet that could relay your information back to whoever sold you the malware.

    But nowadays, treasure troves of our personal digital information are bundled into one pocket-sized device we all carry around that’s designed to download apps from trusted places like the Google Play store, places that apparently let in some rather nasty content. But these Pokemon Go malware apps weren’t just random nasty content. They were content extremely related to the rollout of Pokemon Go, an app co-developed by the Google spinoff Niantic. That’s disturbing. Especially because Google is supposed to be manually reviewing all its Google Play store apps now:

    TechCrunch

    App Submissions On Google Play Now Reviewed By Staff, Will Include Age-Based Ratings

    Posted Mar 17, 2015 by Sarah Perez (@sarahintampa)

    Google Play, Google’s marketplace for Android applications which now reaches a billion people in over 190 countries, has historically differentiated itself from rival Apple by allowing developers to immediately publish their mobile applications without a lengthy review process. However, Google has today disclosed that, beginning a couple of months ago, it began having an internal team of reviewers analyze apps for policy violations prior to publication. And going forward, human reviewers will continue to go hands-on with apps before they go live on Google Play.

    Additionally, Google announced the rollout of a new age-based ratings system for games and apps on Google Play, which will utilize the scales provided by a given region’s official ratings authority, like the Entertainment Software Rating Board (ESRB) here in the U.S.

    According to Purnima Kochikar, Director of Business Development for Google Play, Google has been working to implement the new app review system for over half a year. The idea, she says, was to figure out a way to catch policy offenders earlier in the process, without adding friction and delays to the app publishing process. To that end, Google has been successful, it seems – the new system actually went live a couple of months ago, and there have been no complaints. Today, Android apps are approved in hours, not days, despite the addition of human reviewers.

    “We started reviewing all apps and games before they’re published – it’s rolled out 100%,” says Kochikar. “And developers haven’t noticed the change.”

    The reason why Google’s app review team is able to process app submissions so quickly is because the system also includes an automated element. Before app reviewers are presented with the applications, Google uses software to pre-analyze the app for things like viruses and malware as well as other content violations. For example, its image analysis systems are capable of automatically detecting apps that include sexual content, as well as those that infringe on other applications’ copyright.

    Google didn’t want to get into the specifics of what it’s capable of in terms of automation, but notes that it can identify a number of violations beyond just the inclusion of malware.

    “We’re constantly trying to figure out how machines can learn more,” explains Kochikar. “So whatever the machines can catch today, the machines do. And whatever we need humans to weigh in on, humans do.”

    Though Google uses more machine-aided processes in reviewing applications than Apple does currently, Kochikar admits that with regard to its human element, Google’s system may not be “as robust” as those from “rivals.” That is, Google is trying to balance being able to catch the violations earlier without impacting the time it takes to get an app published to its Android app marketplace.

    The new system also means that developers will now be able to see their app’s publication status in more detail, and learn quickly if and why an app has been rejected or suspended, says Google. In the Developer Console, app creators will see their app’s latest publishing status, allowing them to easily fix problems and resubmit apps after correcting minor violations.

    “Google Play, Google’s marketplace for Android applications which now reaches a billion people in over 190 countries, has historically differentiated itself from rival Apple by allowing developers to immediately publish their mobile applications without a lengthy review process. However, Google has today disclosed that, beginning a couple of months ago, it began having an internal team of reviewers analyze apps for policy violations prior to publication. And going forward, human reviewers will continue to go hands-on with apps before they go live on Google Play.”

    The hands-on human review that approved the Pokémon Go screenlock malware app must have used some pretty lenient criteria. But at least it was rolled out in time for the big Pokémon Go rollout.

    It’s one more reminder that the debate over the trade-offs between digital security and convenience isn’t just about the convenience and lower costs for end users of the privacy-infringing, yet convenient and free, features that tantalize us. It’s also about the convenience and profits of app developers and distributors like Google. And the convenience and profits of operating system and hardware developers. Like Google.

    As Google’s rival distributor Apple found out in September, even trusted developers can innocently end up being a malware vector through common mistakes or shortcuts that Apple’s human reviewers didn’t catch. Which is to be expected in some cases, because human reviewing means human error.

    But the fact that a Pokémon Go screenlock app made it onto the Google Play store during the week of the big Pokémon Go rollout suggests there might be some serious systemic issues with the app review system. Which raises serious questions about just how much malware is really floating around on the supposedly vetted mainstream app stores. Probably a lot.
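    To make the machine tier of the review pipeline Kochikar describes concrete, here is a deliberately minimal, hypothetical sketch of the cheapest possible automated pre-screen: a hash lookup against known-bad packages before anything reaches a human reviewer. None of these names or digests come from Google (its real systems add static/dynamic analysis, image classifiers, and more); the empty-file digest is used only so the example is reproducible.

```python
# Hypothetical sketch of a signature-based pre-screen tier: reject known-bad
# packages automatically, queue everything else for human review.
import hashlib

# Hypothetical blocklist of SHA-256 digests of previously flagged packages.
# (This entry happens to be the digest of the empty byte string, chosen only
# so the example below is reproducible.)
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def prescreen(apk_bytes: bytes) -> str:
    """Return 'reject' for known-bad packages, else pass to human review."""
    digest = hashlib.sha256(apk_bytes).hexdigest()
    if digest in KNOWN_BAD_SHA256:
        return "reject"          # auto-rejected; never reaches a human
    return "queue_for_human"     # machines pass it on, per the quoted workflow

print(prescreen(b""))            # prints: reject
print(prescreen(b"benign app"))  # prints: queue_for_human
```

    A pure hash blocklist like this only catches exact re-uploads of known malware, which is one reason repackaged trojans like the fake Pokémon Go apps can slip past automated tiers and land on human reviewers.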

    Posted by Pterrafractyl | July 16, 2016, 10:50 pm
  15. Just FYI, that Grand Theft Auto mod for Minecraft that your kid couldn’t resist downloading to your Android smartphone should probably be renamed Grand Theft Smartphone, since it turns out to be a malicious data-stealing piece of malware. Available from the Google Play store. Along with 400 other Google Play store apps carrying the same malware:

    Tech Times

    Malicious ‘DressCode’ Malware Now Spreading Across App Stores

    1 October 2016, 7:17 am EDT By Horia Ungureanu

    Google Play offers a myriad of great apps, but some infected ones bypass the vetting process and end up infecting the mobile devices of Android users.

    A recent wave of panic went through the Android community as it was revealed that more than 400 apps transformed infected phones into listening posts. What is more, the tampered phones are capable of siphoning sensitive data from protected networks and share them with malicious users.

    In a blog post, security researchers from Trend Micro affirm that an app carrying the so-called DressCode malware was downloaded between 100,000 and 500,000 times prior to being removed from the Google-hosted marketplace.

    Specifically, the app is dubbed Mod GTA 5 for Minecraft PE and it appears to be just another mobile game. However, the developers of the “game” embedded mischievous components in its code that allow the phone to connect with a server that is being controlled by the attacker.

    Normally, when devices use a network, something called network address translation protections keep them away from harm, but the malign server was crafted to bypass the shielding system.

    Trend Micro explains that via the malware, threat actors get unauthorized access to a user’s network ecosystem. This means that should an infected device log in to an enterprise network, this enables the attacker to go around the NAT device and strike the internal server directly. Another way to make use of the infiltrated device is to use it “as a springboard” to siphon sensitive data.

    This is not the first time in recent history when Google Play was reportedly breeding security liabilities. About three weeks ago, experts with security firm Check Point discovered 40 DressCode-infected apps in Google Play. At the time, Check Point reported that infected apps scored between 500,000 and 2 million downloads on the Android app platform.

    According to Trend Micro, it is quite challenging to pinpoint which part of the app contains malicious functions.

    In 2012, Google rolled out Bouncer, a cloud-based security scanner that eliminates malicious apps from its Play Store. In the four years that passed, researchers who are keeping an eye on Google Play Store detected and reported on thousands of apps that come packed with malware and other security exploits.

    This makes one wonder if Bouncer is maybe in need of an update.

    “This is not the first time in recent history when Google Play was reportedly breeding security liabilities. About three weeks ago, experts with security firm Check Point discovered 40 DressCode-infected apps in Google Play. At the time, Check Point reported that infected apps scored between 500,000 and 2 million downloads on the Android app platform.”

    Keep in mind that the DressCode malware isn’t just a data thief but also a springboard for further attacks on the networks the infected phone connects to, so those 2,000,000 DressCode downloads presumably translate into a much larger number of devices at risk. It’s a reminder that the ‘how to not get digitally mugged in Minecraft’ talk that parents have to give their kids these days probably shouldn’t be limited to your kids or Minecraft.
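    The “springboard” mechanism Trend Micro describes boils down to a reversed connection: NAT drops unsolicited inbound traffic, but the infected app dials out to the attacker’s server, and commands then ride back over that already-open socket. The following self-contained, localhost-only Python sketch (all names hypothetical, no real attack code) illustrates just the connection-direction principle:

```python
# Localhost-only illustration of why NAT does not block a DressCode-style bot:
# the "infected app" makes an OUTBOUND connection (which NAT permits), and the
# "attacker" sends commands back over that same established socket.
import socket
import threading

# Stand-in for the attacker-controlled command server; listening before the
# bot dials out, so the demo has no startup race.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))            # ephemeral port on loopback only
srv.listen(1)
port = srv.getsockname()[1]
results = []

def attacker_side():
    conn, _ = srv.accept()                     # the bot connected OUT to us
    conn.sendall(b"FETCH internal-record-42")  # command rides the open socket
    results.append(conn.recv(1024))            # reply comes back the same way
    conn.close()

t = threading.Thread(target=attacker_side)
t.start()

# Stand-in for the trojanized app "inside" the NAT: an outbound connection is
# exactly the kind of traffic NAT was never designed to stop.
bot = socket.create_connection(("127.0.0.1", port))
command = bot.recv(1024)
# A real bot would now query an internal server; here we just echo a canned reply.
bot.sendall(b"RESULT for " + command.split()[-1])
bot.close()
t.join()
srv.close()

print(results[0].decode())  # prints: RESULT for internal-record-42
```

    Nothing in the exchange required the attacker to reach through the NAT device: once the phone initiates the session, the “inbound” commands are just return traffic on an allowed connection, which is why an infected phone inside an enterprise network can be used to hit internal servers directly.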

    Posted by Pterrafractyl | October 2, 2016, 10:19 pm
  16. This should do wonders for Germany’s brand as an anti-state-hacking nation: due to concerns that strong encryption is making investigations into digital evidence impossible, the two major parties just pushed through a law that expands law enforcement’s authority to use state-owned trojan hacking tools to get around that encryption by inserting malware on targets’ devices. These powers already existed for extreme circumstances, like terrorism, but under the new law investigators can use them for any crime that allows for a wiretap:

    ZDNet

    Police get broad phone and computer hacking powers in Germany

    The German parliament has waved through a massive expansion of police hacking powers.

    By David Meyer
    June 23, 2017 — 12:31 GMT (05:31 PDT)

    Germany’s coalition government has significantly increased police hacking powers by slipping a last-minute amendment into a law that’s nominally supposed to deal with driving bans.

    While the police have so far only been allowed to hack into people’s phones and computers in extreme cases, such as those involving terrorist plots, the change allows them to use such techniques when investigating dozens of less serious offences.

    In Germany, the authorities’ hacking tools are widely known as Staatstrojanern, or state trojans. This term essentially refers to malware that the police can use to infect targets’ devices, to give them the access they need to monitor communications and conduct searches.

    The types of crime where investigators can now use this malware are all of the variety where existing law would allow them to tap a suspect’s phone. These range from murder and handling stolen goods to computer fraud and tax evasion.

    According to the government, the spread of encrypted communications makes traditional wiretapping impossible, so the authorities need to be able to bypass encryption by directly hacking into the communications device.

    Germany’s governing coalition of Angela Merkel’s conservatives plus Martin Schulz’s socialists used its overwhelming majority to push the change through on Thursday, ahead of the summer recess that begins in a week’s time.

    The opposition, while too weak to do much about it, had its say. The veteran Green politician Hans-Christian Ströbele, who will retire at September’s election, decried the change as the coalition’s “final attack on civil rights”.

    He also said it would weaken the “IT infrastructure as a whole” by deliberately maintaining the security vulnerabilities needed for the malware to work, and pointed out the irony in hiding a state trojan measure in the Trojan horse of a law that lets judges issue driving bans for non-vehicle-related criminal offences.

    It remains to be seen whether the shift will stand up in the constitutional court. It’s a near-certainty that someone will raise a constitutional challenge, and the court in Karlsruhe has previously been clear in strictly limiting the use of electronic searches to very serious cases, where life and limb are at risk.

    Other European countries that give the authorities broad hacking powers include the UK thanks to last year’s Investigatory Powers Act, and Spain through a 2015 update to the country’s criminal procedure law.

    ———-

    “Police get broad phone and computer hacking powers in Germany” by David Meyer; ZDNet; 06/23/2017

    “According to the government, the spread of encrypted communications makes traditional wiretapping impossible, so the authorities need to be able to bypass encryption by directly hacking into the communications device.”

    Keep in mind that there are plenty of legitimate concerns over the ability of law enforcement to actually enforce the law in the age of encryption. If society wants impregnable digital systems, that will no doubt prevent some government abuses. But it will also allow things like organized crime getting a lot more, well, organized. It’s a trade-off. So it’s no surprise to see the German government make the decision it made. Well, ok, for most other governments it wouldn’t be surprising. Considering Berlin led the global collective outrage over the revelations of the Snowden Affair, however, it is a little surprising. But only a little.

    Posted by Pterrafractyl | June 29, 2017, 9:38 pm

Post a comment