Equation Group Update

More information about the Equation Group, aka the NSA.

Kaspersky Labs has published more information about the Equation Group — that’s the NSA — and its sophisticated malware platform.

Ars Technica article.

More here

Hardware Bit-Flipping Attack

The Project Zero team at Google has posted details of a new attack that targets a computer’s DRAM. It’s called Rowhammer. Here’s a good description:

Here’s how Rowhammer gets its name: In the Dynamic Random Access Memory (DRAM) used in some laptops, a hacker can run a program designed to repeatedly access a certain row of transistors in the computer’s memory, “hammering” it until the charge from that row leaks into the next row of memory. That electromagnetic leakage can cause what’s known as “bit flipping,” in which transistors in the neighboring row of memory have their state reversed, turning ones into zeros or vice versa. And for the first time, the Google researchers have shown that they can use that bit flipping to actually gain unintended levels of control over a victim computer. Their Rowhammer hack can allow a “privilege escalation,” expanding the attacker’s influence beyond a certain fenced-in portion of memory to more sensitive areas.

Basically:

When run on a machine vulnerable to the rowhammer problem, the process was able to induce bit flips in page table entries (PTEs). It was able to use this to gain write access to its own page table, and hence gain read-write access to all of physical memory.

The cause is simply the super dense packing of chips:

This works because DRAM cells have been getting smaller and closer together. As DRAM manufacturing scales down chip features to smaller physical dimensions, to fit more memory capacity onto a chip, it has become harder to prevent DRAM cells from interacting electrically with each other. As a result, accessing one location in memory can disturb neighbouring locations, causing charge to leak into or out of neighbouring cells. With enough accesses, this can change a cell’s value from 1 to 0 or vice versa.
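
At its core, the attack is nothing more than a tight read-and-flush loop. Here is a minimal sketch in C of that access pattern, assuming hypothetical pointers addr1 and addr2 that happen to map to different rows of the same DRAM bank; whether any bits actually flip depends entirely on the hardware, and a real exploit (such as the page-table attack quoted above) needs a great deal more on top of this.

    #include <emmintrin.h>  /* _mm_clflush */
    #include <stdint.h>

    /* Repeatedly activate two DRAM rows. The clflush calls evict both
     * addresses from the CPU cache, so every iteration forces real DRAM
     * accesses ("hammering" the rows) instead of being served from cache. */
    static void hammer(uint8_t *addr1, uint8_t *addr2, uint64_t iterations)
    {
        for (uint64_t i = 0; i < iterations; i++) {
            *(volatile uint8_t *)addr1;
            *(volatile uint8_t *)addr2;
            _mm_clflush(addr1);
            _mm_clflush(addr2);
        }
    }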

Very clever, and yet another example of the security interplay between hardware and software.

This kind of thing is hard to fix, although the Google team gives some mitigation techniques at the end of their analysis.

Slashdot thread.

More here

Can the NSA Break Microsoft’s BitLocker?

The Intercept has a new story on the CIA’s — yes, the CIA, not the NSA — efforts to break encryption. These are from the Snowden documents, and talk about a conference called the Trusted Computing Base Jamboree. There are some interesting documents associated with the article, but not a lot of hard information.

There’s a paragraph about Microsoft’s BitLocker, the encryption system used to protect MS Windows computers:

Also presented at the Jamboree were successes in the targeting of Microsoft’s disk encryption technology, and the TPM chips that are used to store its encryption keys. Researchers at the CIA conference in 2010 boasted about the ability to extract the encryption keys used by BitLocker and thus decrypt private data stored on the computer. Because the TPM chip is used to protect the system from untrusted software, attacking it could allow the covert installation of malware onto the computer, which could be used to access otherwise encrypted communications and files of consumers. Microsoft declined to comment for this story.

This implies that the US intelligence community — I’m guessing the NSA here — can break BitLocker. The source document, though, is much less definitive about it.

Power analysis, a side-channel attack, can be used against secure devices to non-invasively extract protected cryptographic information such as implementation details or secret keys. We have employed a number of publically known attacks against the RSA cryptography found in TPMs from five different manufacturers. We will discuss the details of these attacks and provide insight into how private TPM key information can be obtained with power analysis. In addition to conventional wired power analysis, we will present results for extracting the key by measuring electromagnetic signals emanating from the TPM while it remains on the motherboard. We will also describe and present results for an entirely new unpublished attack against a Chinese Remainder Theorem (CRT) implementation of RSA that will yield private key information in a single trace.

The ability to obtain a private TPM key not only provides access to TPM-encrypted data, but also enables us to circumvent the root-of-trust system by modifying expected digest values in sealed data. We will describe a case study in which modifications to Microsoft’s Bitlocker encrypted metadata prevents software-level detection of changes to the BIOS.

Differential power analysis is a powerful cryptanalytic attack. Basically, it examines a chip’s power consumption while it performs encryption and decryption operations and uses that information to recover the key. What’s important here is that this is an attack to extract key information from a chip while it is running. If the chip is powered down, or if it doesn’t have the key inside, there’s no attack.
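
To make “uses that information to recover the key” a little more concrete, here is a toy sketch in C of the difference-of-means test at the heart of classic DPA. It assumes you already have num_traces recorded power traces of trace_len samples each, plus predicted_bit[i], the intermediate bit that a particular key guess predicts for trace i; the names and the setup are illustrative, not a description of how any actual attack on a TPM was carried out.

    #include <math.h>
    #include <stddef.h>

    /* Difference-of-means DPA: at each sample point, split the traces into
     * two groups according to the bit a key guess predicts, and compare the
     * group averages. Return the largest difference seen across the trace. */
    double dpa_peak(const double *const *traces, const int *predicted_bit,
                    size_t num_traces, size_t trace_len)
    {
        double peak = 0.0;
        for (size_t t = 0; t < trace_len; t++) {
            double sum0 = 0.0, sum1 = 0.0;
            size_t n0 = 0, n1 = 0;
            for (size_t i = 0; i < num_traces; i++) {
                if (predicted_bit[i]) { sum1 += traces[i][t]; n1++; }
                else                  { sum0 += traces[i][t]; n0++; }
            }
            if (n0 == 0 || n1 == 0)
                continue;
            double diff = fabs(sum1 / n1 - sum0 / n0);
            if (diff > peak)
                peak = diff;
        }
        return peak;
    }

In a real attack this would be run for every candidate key guess; the guess whose curve shows a pronounced spike while the others stay flat is most likely the correct one. With enough traces, the statistics overcome measurement noise.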

I don’t take this to mean that the NSA can take a BitLocker-encrypted hard drive and recover the key. I do take it to mean that the NSA can perform a bunch of clever hacks on a BitLocker-encrypted hard drive while it is running. So I don’t think this means that BitLocker is broken.

But who knows? We do know that the FBI pressured Microsoft into adding a backdoor in BitLocker in 2005. I believe that was unsuccessful.

More than that, we don’t know.

More here

Attack Attribution and Cyber Conflict

The vigorous debate after the Sony Pictures breach pitted the Obama administration against many of us in the cybersecurity community who didn’t buy Washington’s claim that North Korea was the culprit.

What’s both amazing — and perhaps a bit frightening — about that dispute over who hacked Sony is that it happened in the first place.

But what it highlights is the fact that we’re living in a world where we can’t easily tell the difference between a couple of guys in a basement apartment and the North Korean government with an estimated $10 billion military budget. And that ambiguity has profound implications for how countries will conduct foreign policy in the Internet age.

Clandestine military operations aren’t new. Terrorism can be hard to attribute, especially the murky edges of state-sponsored terrorism. What’s different in cyberspace is how easy it is for an attacker to mask his identity — and the wide variety of people and institutions that can attack anonymously.

In the real world, you can often identify the attacker by the weaponry. In 2007, Israel attacked a Syrian nuclear facility. It was a conventional attack — military airplanes flew over Syria and bombed the plant — and there was never any doubt who did it. That shorthand doesn’t work in cyberspace.

When the US and Israel attacked an Iranian nuclear facility in 2010, they used a cyberweapon and their involvement was a secret for years. On the Internet, technology broadly disseminates capability. Everyone from lone hackers to criminals to hypothetical cyberterrorists to nations’ spies and soldiers is using the same tools and the same tactics. Internet traffic doesn’t come with a return address, and it’s easy for an attacker to obscure his tracks by routing his attacks through some innocent third party.

And while it now seems that North Korea did indeed attack Sony, the attack it most resembles was conducted by members of the hacker group Anonymous against a company called HBGary Federal in 2011. In the same year, other members of Anonymous threatened NATO, and in 2014, still others announced that they were going to attack ISIS. Regardless of what you think of the group’s capabilities, it’s a new world when a bunch of hackers can threaten an international military alliance.

Even when a victim does manage to attribute a cyberattack, the process can take a long time. It took the US weeks to publicly blame North Korea for the Sony attacks. That was relatively fast; most of that time was probably spent trying to figure out how to respond. Attacks by China against US companies have taken much longer to attribute.

This delay makes defense policy difficult. Microsoft’s Scott Charney makes this point: When you’re being physically attacked, you can call on a variety of organizations to defend you — the police, the military, whoever does antiterrorism security in your country, your lawyers. The legal structure justifying that defense depends on knowing two things: who’s attacking you, and why. Unfortunately, when you’re being attacked in cyberspace, the two things you often don’t know are who’s attacking you, and why.

Whose job was it to defend Sony? Was it the US military’s, because it believed the attack to have come from North Korea? Was it the FBI, because this wasn’t an act of war? Was it Sony’s own problem, because it’s a private company? What about during those first weeks, when no one knew who the attacker was? These are just a few of the policy questions that we don’t have good answers for.

Certainly Sony needs enough security to protect itself regardless of who the attacker was, as do all of us. For the victim of a cyberattack, who the attacker is can be academic. The damage is the same, whether it’s a couple of hackers or a nation-state.

In the geopolitical realm, though, attribution is vital. And not only is attribution hard, but providing evidence of any attribution is even harder. Because so much of the FBI’s evidence was classified — and probably provided by the National Security Agency — it was not able to explain why it was so sure North Korea did it. As I recently wrote: “The agency might have intelligence on the planning process for the hack. It might, say, have phone calls discussing the project, weekly PowerPoint status reports, or even Kim Jong-un’s sign-off on the plan.” Making any of this public would reveal the NSA’s “sources and methods,” something it regards as a very important secret.

Different types of attribution require different levels of evidence. In the Sony case, we saw the US government was able to generate enough evidence to convince itself. Perhaps it had the additional evidence required to convince North Korea it was sure, and provided that over diplomatic channels. But if the public is expected to support any government retaliatory action, they are going to need sufficient evidence made public to convince them. Today, trust in US intelligence agencies is low, especially after the 2003 Iraqi weapons-of-mass-destruction debacle.

What all of this means is that we are in the middle of an arms race between attackers and those that want to identify them: deception and deception detection. It’s an arms race in which the US — and, by extension, its allies — has a singular advantage. We spend more money on electronic eavesdropping than the rest of the world combined, we have more technology companies than any other country, and the architecture of the Internet ensures that most of the world’s traffic passes through networks the NSA can eavesdrop on.

In 2012, then US Secretary of Defense Leon Panetta said publicly that the US — presumably the NSA — has “made significant advances in … identifying the origins” of cyberattacks. We don’t know if this means they have made some fundamental technological advance, or that their espionage is so good that they’re monitoring the planning processes. Other US government officials have privately said that they’ve solved the attribution problem.

We don’t know how much of that is real and how much is bluster. It’s actually in America’s best interest to confidently accuse North Korea, even if it isn’t sure, because it sends a strong message to the rest of the world: “Don’t think you can hide in cyberspace. If you try anything, we’ll know it’s you.”

Strong attribution leads to deterrence. The detailed NSA capabilities leaked by Edward Snowden help with this, because they bolster an image of an almost-omniscient NSA.

It’s not, though — which brings us back to the arms race. A world where hackers and governments have the same capabilities, where governments can masquerade as hackers or as other governments, and where much of the attribution evidence intelligence agencies collect remains secret, is a dangerous place.

So is a world where countries have secret capabilities for deception and deception detection, and are constantly trying to get the best of each other. This is the world of today, though, and we need to be prepared for it.

This essay previously appeared in the Christian Science Monitor.

More here

Data and Goliath‘s Big Idea

Data and Goliath is a book about surveillance, both government and corporate. It’s an exploration in three parts: what’s happening, why it matters, and what to do about it. This is a big and important issue, and one that I’ve been working on for decades now. We’ve been on a headlong path of more and more surveillance, fueled by fear — of terrorism mostly — on the government side, and convenience on the corporate side. My goal was to step back and say “wait a minute; does any of this make sense?” I’m proud of the book, and hope it will contribute to the debate.

But there’s a big idea here too, and that’s the balance between group interest and self-interest. Data about us is individually private, and at the same time valuable to all of us collectively. How do we decide between the two? If President Obama tells us that we have to sacrifice the privacy of our data to keep our society safe from terrorism, how do we decide if that’s a good trade-off? If Google and Facebook offer us free services in exchange for allowing them to build intimate dossiers on us, how do we know whether to take the deal?

There are a lot of these sorts of deals on offer. Waze gives us real-time traffic information, but does it by collecting the location data of everyone using the service. The medical community wants our detailed health data to perform all sorts of health studies and to get early warning of pandemics. The government wants to know all about you to better deliver social services. Google wants to know everything about you for marketing purposes, but will “pay” you with free search, free e-mail, and the like.

Here’s another one I describe in the book: “Social media researcher Reynol Junco analyzes the study habits of his students. Many textbooks are online, and the textbook websites collect an enormous amount of data about how — and how often — students interact with the course material. Junco augments that information with surveillance of his students’ other computer activities. This is incredibly invasive research, but its duration is limited and he is gaining new understanding about how both good and bad students study — and has developed interventions aimed at improving how students learn. Did the group benefit of this study outweigh the individual privacy interest of the subjects who took part in it?”

Again and again, it’s the same trade-off: individual value versus group value.

I believe this is the fundamental issue of the information age, and solving it means careful thinking about the specific issues and a moral analysis of how they affect our core values.

You can see that in some of the debate today. I know hardened privacy advocates who think it should be a crime for people to withhold their medical data from the pool of information. I know people who are fine with pretty much any corporate surveillance but want to prohibit all government surveillance, and others who advocate the exact opposite.

When possible, we need to figure out how to get the best of both: how to design systems that make use of our data collectively to benefit society as a whole, while at the same time protecting people individually.

The world isn’t waiting; decisions about surveillance are being made for us — often in secret. If we don’t figure this out for ourselves, others will decide what they want to do with us and our data. And we don’t want that. I say: “We don’t want the FBI and NSA to secretly decide what levels of government surveillance are the default on our cell phones; we want Congress to decide matters like these in an open and public debate. We don’t want the governments of China and Russia to decide what censorship capabilities are built into the Internet; we want an international standards body to make those decisions. We don’t want Facebook to decide the extent of privacy we enjoy amongst our friends; we want to decide for ourselves.”

In my last chapter, I write: “Data is the pollution problem of the information age, and protecting privacy is the environmental challenge. Almost all computers produce personal information. It stays around, festering. How we deal with it — how we contain it and how we dispose of it — is central to the health of our information economy. Just as we look back today at the early decades of the industrial age and wonder how our ancestors could have ignored pollution in their rush to build an industrial world, our grandchildren will look back at us during these early decades of the information age and judge us on how we addressed the challenge of data collection and misuse.”

That’s it; that’s our big challenge. Some of our data is best shared with others. Some of it can be ‘processed’ — anonymized, maybe — before reuse. Some of it needs to be disposed of properly, either immediately or after a time. And some of it should be saved forever. Knowing what data goes where is a balancing act between group and self-interest, a trade-off that will continually change as technology changes, and one that we will be debating for decades to come.

This essay previously appeared on John Scalzi’s blog Whatever.

EDITED TO ADD (3/7): Hacker News thread.

More here

How to Keep Your Smart Home Safe

Internet of Things (IoT) devices can help you save time and hassle and improve your quality of life. For example, you can check the contents of your fridge and turn on the oven while you’re at the grocery store, saving money, time, and guesswork when preparing dinner for your family. This is great, and many people will benefit from features like these. But as with all change, the opportunity comes with risks. Most of those risks are to your online security and privacy, but some extend to the physical world as well. The ability to remotely unlock your front door for the plumber can be a great time saver, but it also means that anyone who hacks your cloud account can open that door too — and possibly sell access to your home on dark markets. And it’s not just about hacking: these gadgets collect data about what’s happening in your home and your life, so they present a privacy risk all by themselves.

Example of a smart home setup

Image: a typical smart home configuration and the kinds of attacks it can face. While the smart home is not widely targeted at the moment, due to low adoption and high fragmentation, all of its layers can be attacked with existing techniques.

If you are extremely worried about your privacy and security, the only way to truly stay safe is not to buy and use these gadgets at all. For most people, though, the time-saving convenience of IoT and the smart home will outweigh most of the privacy and security downsides. IoT devices are also not widely targeted at the moment, and even when they are, the attackers are usually after the computing power of the device — not yet your data or your home. In fact, the biggest risk right now comes from the way the manufacturers of these devices handle your personal data. All that said, you shouldn’t just blindly jump in. There are some things you can do to reduce the risks:

•  Do not connect these devices directly to public Internet addresses. Put a firewall, or at least a NAT (Network Address Translation) router, in front of them so they are not discoverable from the Internet. Disable UPnP (Universal Plug and Play) on your router if you want to make sure devices cannot open a port on your public Internet address on their own. (A quick way to check this from outside your network is sketched after this list.)

•  Go through the privacy and security settings of the device or service and turn off everything you don’t need. For many of these devices, the available settings are few and far between. Disable any feature you don’t use if you think it might have privacy implications. For example, do you really use the voice-command feature on your smart TV or gaming console? If you never use it, just disable it; you can always turn it back on later if you want to give it a try.

•  When you register for the cloud service of an IoT device, use a strong, unique password and keep it safe. Change the password if you think there is any risk that someone has seen or stolen it. Also, because all of these services allow a password reset through your email account, make sure you secure that email account with a truly strong password as well. Use two-factor authentication (2FA) where available — and for the most popular email services it is available today.

•  Keep your PCs, tablets, and mobile phones clear of malware. Malware often steals passwords, and may therefore steal the password to your smart home service or to the email account linked to it. Install security software on the devices where you use those passwords, keep your software updated with the latest security fixes, and don’t click on links or attachments in suspicious emails.

•  Think carefully about whether you really want a remotely accessible smart lock on your home doors. If you’re one of those people who leave the key under the doormat or the flower pot, though, you’re probably safer with a smart lock.

•  If you install security cameras and nannycams, disconnect them from the network when you have no need for them. Consider doing the same for devices that constantly send audio from your home to the cloud unless you really do use them all the time. Remember that most IoT devices don’t have much computing power and hence the audio and video processing is most likely done on some server in the cloud.

•  Use encryption (preferably WPA2) in your home Wi-Fi. Use a strong Wi-Fi passphrase and keep it safe. Without a passphrase, with a weak passphrase, or when using an obsolete protocol such as WEP, your home Wi-Fi becomes an open network from a security perspective.

•  Be careful when using open Wi-Fi networks, such as those in a coffee shop, a shopping mall, or a hotel. If you or your applications send passwords in clear text, they can be stolen and you may become the victim of a man-in-the-middle (MitM) attack. Always use a VPN application on open Wi-Fi. Again, your passwords are the key to your identity and to your personal Internet of Things.

•  Limit your attack surface. Don’t install devices you know you’re not going to need, and shut down and remove devices you no longer need or use. When you buy a top-of-the-line washing machine and notice it can be connected over Wi-Fi, consider whether you really want and need to connect it before you do. Disconnect it from the network once you realize you don’t actually use the online features.

•  When choosing which manufacturer to buy a device from, check what they say about security and privacy and what their privacy principles are. Was the product rushed to market, and were security corners cut? What is the manufacturer’s motivation for processing your data? Do they sell it on to advertisers? Do they store any of your data, and where?

•  Go to your home router’s settings today. Disable any services that are exposed to the Internet on the WAN interface. Change the admin password to something strong and unique. Check that the router’s DNS setting points to your ISP’s DNS server, or to an open service like OpenDNS or Google DNS, and hasn’t been tampered with.

•  Keep your router’s firmware up to date, and consider replacing the router with a new one, especially if the manufacturer no longer provides security updates. Consider moving away from a manufacturer that doesn’t ship security updates or stops shipping them after two years. The security of your home network starts with the router, and the router is exposed to the Internet.
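
As mentioned in the first item above, one simple sanity check is to see whether a gadget’s port is reachable from outside your home network. The hypothetical helper below is a minimal C sketch: run it from a friend’s connection or a phone on mobile data, against your public IP address and the port in question; a refused or timed-out connection is what you want to see. It covers a single IPv4/TCP port at a time and says nothing about UDP services or UPnP mappings that are not currently active.

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <public-ip> <port>\n", argv[0]);
            return 2;
        }

        /* Build the destination address from the command-line arguments. */
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons((uint16_t)atoi(argv[2]));
        if (inet_pton(AF_INET, argv[1], &addr.sin_addr) != 1) {
            fprintf(stderr, "invalid IPv4 address: %s\n", argv[1]);
            return 2;
        }

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 2;
        }

        /* A successful connect() means the port is open to the Internet. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            printf("OPEN: port %s is reachable\n", argv[2]);
        else
            printf("not reachable: %s\n", strerror(errno));

        close(fd);
        return 0;
    }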

The above list of actions is extensive, and maybe a bit on the “band-aid on the webcam” paranoid side. However, it should give you an idea of the kinds of things you can do to stay in control of your security and privacy when taking the leap into the Internet of Things. Security in the IoT world is not that different from before: your passwords matter just as much, as does the principle of deploying security patches and turning off services you don’t need.

On 02/03/15 At 10:11 PM

More here

The Democratization of Cyberattack

The thing about infrastructure is that everyone uses it. If it’s secure, it’s secure for everyone. And if it’s insecure, it’s insecure for everyone. This forces some hard policy choices.

When I was working with the Guardian on the Snowden documents, the one top-secret program the NSA desperately did not want us to expose was QUANTUM. This is the NSA’s program for what is called packet injection — basically, a technology that allows the agency to hack into computers.

Turns out, though, that the NSA was not alone in its use of this technology. The Chinese government uses packet injection to attack computers. The cyberweapons manufacturer Hacking Team sells packet injection technology to any government willing to pay for it. Criminals use it. And there are hacker tools that give the capability to individuals as well.

All of these existed before I wrote about QUANTUM. By using its knowledge to attack others rather than to build up the internet’s defenses, the NSA has worked to ensure that anyone can use packet injection to hack into computers.

This isn’t the only example of once-top-secret US government attack capabilities being used against US government interests. StingRay is a particular brand of IMSI catcher, and is used to intercept cell phone calls and metadata. This technology was once the FBI’s secret, but not anymore. There are dozens of these devices scattered around Washington, DC, as well as the rest of the country, run by who-knows-what government or organization. By accepting the vulnerabilities in these devices so the FBI can use them to solve crimes, we necessarily allow foreign governments and criminals to use them against us.

Similarly, vulnerabilities in phone switches — SS7 switches, for those who like jargon — have long been used by the NSA to locate cell phones. This same technology is sold by the US company Verint and the UK company Cobham to third-world governments, and hackers have demonstrated the same capabilities at conferences. An eavesdropping capability that was built into phone switches to enable lawful intercepts was used by still-unidentified unlawful interceptors in Greece between 2004 and 2005.

These are the stories you need to keep in mind when thinking about proposals to ensure that all communications systems can be eavesdropped on by government. Both the FBI’s James Comey and UK Prime Minister David Cameron recently proposed limiting secure cryptography in favor of cryptography they can have access to.

But here’s the problem: technological capabilities cannot distinguish based on morality, nationality, or legality; if the US government is able to use a backdoor in a communications system to spy on its enemies, the Chinese government can use the same backdoor to spy on its dissidents.

Even worse, modern computer technology is inherently democratizing. Today’s NSA secrets become tomorrow’s PhD theses and the next day’s hacker tools. As long as we’re all using the same computers, phones, social networking platforms, and computer networks, a vulnerability that allows us to spy also allows us to be spied upon.

We can’t choose a world where the US gets to spy but China doesn’t, or even a world where governments get to spy and criminals don’t. We need to choose, as a matter of policy, communications systems that are secure for all users, or ones that are vulnerable to all attackers. It’s security or surveillance.

As long as criminals are breaking into corporate networks and stealing our data, as long as totalitarian governments are spying on their citizens, as long as cyberterrorism and cyberwar remain a threat, and as long as the beneficial uses of computer technology outweigh the harmful uses, we have to choose security. Anything else is just too dangerous.

This essay previously appeared on Vice Motherboard.

EDITED TO ADD (3/4): Slashdot thread.

More here

The Equation Group Equals NSA / IRATEMONK

On December 29, 2013, Der Spiegel, a German weekly news magazine, published an article about an internal NSA catalog that lists technology available to the NSA’s Tailored Access Operations (TAO). Among that technology is “IRATEMONK”.

“IRATEMONK provides software application persistence on desktop and laptop computers by implanting the hard drive firmware to gain execution through Master Boot Record (MBR) substitution.”

IRATEMONK, ANT Product Data
Source: Wikimedia

“This technique supports systems without RAID hardware that boot from a variety of Western Digital, Seagate, Maxtor, and Samsung hard drives.”

On January 31, 2014, Bruce Schneier deemed IRATEMONK his “NSA Exploit of the Day,” which prompted this response from Nicholas Weaver:

NCWeaver on IRATEMONK

“This is probably the most interesting of the BIOS-type implants.”

“yet the cost of evading the ‘boot from CD’ detection is now you have guaranteed ‘NSA WAS HERE‘ writ in big glowing letters if it ever IS detected.”

Well, funny story — components related to IRATEMONK have now been detected — by the folks at Kaspersky Labs. Kaspersky’s research paper refers to a threat actor called the “Equation group,” whose country of origin is not named but which has exactly the capabilities detailed in the NSA’s ANT catalog.

Ars Technica has an excellent summary here: How “omnipotent” hackers tied to NSA hid for 14 years—and were found at last.

On 17/02/15 At 01:20 PM

More here

NSA/GCHQ Hacks SIM Card Database and Steals Billions of Keys

The Intercept has an extraordinary story: the NSA and/or GCHQ hacked into the Dutch SIM card manufacturer Gemalto, stealing the encryption keys for billions of cell phones. People are still trying to figure out exactly what this means, but it seems to mean that the intelligence agencies have access to both voice and data from all phones using those cards.

Me in The Register: “We always knew that they would occasionally steal SIM keys. But all of them? The odds that they just attacked this one firm are extraordinarily low and we know the NSA does like to steal keys where it can.”

I think this is one of the most important Snowden stories we’ve read.

More news stories. Slashdot thread. Hacker News thread.

More here

Database of Ten Million Passwords

Earlier this month, Mark Burnett released a database of ten million usernames and passwords. He collected this data from already-public dumps from hackers who had stolen the information; hopefully everyone affected has changed their passwords by now.

News articles.

More here
