My Conversation with Edward Snowden

Today, as part of a Harvard computer science symposium, I had a public conversation with Edward Snowden. The topics were largely technical, ranging from cryptography to hacking to surveillance to what to do now.

Here’s the video.

EDITED TO ADD (1/24): News article.

More here


Notes on Stewart Baker Podcast with David Sanger

Yesterday Steptoe & Johnson LLP released the 50th edition of their podcast series, titled Steptoe Cyberlaw Podcast – Interview with David Sanger. Stewart Baker’s discussion with New York Times reporter David Sanger begins at the 20:15 mark. The interview was prompted by the NYT story NSA Breached North Korean Networks Before Sony Attack, Officials Say. I took the following notes for those of you who would like some highlights.

Sanger has reported on the national security scene for decades. When he saw President Obama’s definitive statement on December 19, 2014 — “We can confirm that North Korea engaged in this attack [on Sony Pictures Entertainment].” — Sanger knew the President must have had solid attribution. He wanted to determine what evidence had convinced the President that the DPRK was responsible for the Sony intrusion.

Sanger knew from his reporting on the Obama presidency, including his book Confront and Conceal: Obama’s Secret Wars and Surprising Use of American Power, that the President takes a cautious approach to intelligence. Upon assuming his office, the President had little experience with intelligence or cyber issues (except for worries about privacy).

Obama had two primary concerns about intelligence, involving “leaps” and “leaks.” First, he feared making “leaps” from intelligence to support policy actions, such as the invasion of Iraq. Second, he worried that leaks of intelligence could “create a groundswell for action that the President doesn’t want to take.” An example of this second concern is the (mis)handling of the “red line” on Syrian use of chemical weapons.

In early 2009, however, the President became deeply involved with Olympic Games, which Sanger reported as the overall program behind the Stuxnet operation. Obama also increased the use of drones for targeted killing. These experiences helped the President overcome some of his concerns with intelligence, but he was still likely to demand proof before taking action.

Sanger stated in the podcast that, in his opinion, “the only way” to have solid attribution is to be inside adversary systems before an attack, such that the intelligence community can see attacks in progress. In this case, evidence from inside DPRK systems and related infrastructure (outside North Korea) convinced the President.

(I disagree that this is “the only way,” but I believe it is an excellent option for performing attribution. See my 2009 post Counterintelligence Options for Digital Security for more details.)

Sanger would not be surprised if we see more leaks about what the intelligence community observed. “There’s too many reporters inside the system” to ignore what’s happening, he said. The NYT talks with government officials “several times per month” to discuss reporting on sensitive issues. The NYT has a “presumption to publish” stance, although Sanger held back some details in his latest story that would have enabled the DPRK or others to identify “implants in specific systems.”

Regarding the purpose of announcing attribution against the DPRK, Sanger stated that deterrence against the DPRK and other actors is one motivation. Sanger reported meeting with NSA director Admiral Mike Rogers, who said the United States needs a deterrence capability in cyberspace. More importantly, the President wanted to signal to the North Koreans that they had crossed a red line. This was a destructive attack, coupled with a threat of physical harm against moviegoers. The DPRK has become comfortable using “cyber weapons” because they are more flexible than missiles or nuclear bombs. The President wanted the DPRK to learn that destructive cyber attacks would not be tolerated.

Sanger and Baker then debated the nature of deterrence, arms control, and norms. Sanger stated that it took 17 years after Hiroshima and Nagasaki before President Kennedy made a policy announcement about seeking nuclear arms control with the Soviet Union. Leading powers don’t want arms control until their advantage deteriorates. Once the Soviet Union’s nuclear capability exceeded the comfort level of the United States, Kennedy pitched arms control as an option. Sanger believes the nuclear experience offers the right set of questions to ask about deterrence and arms control, although all the answers will be different. He also hopes the US moves faster on deterrence, arms control, and norms than it did in the nuclear case, because other actors (China, Russia, Iran, North Korea, etc.) are “catching up fast.”

(Incidentally, Baker isn’t a fan of deterrence in cyberspace. He stated that he sees deterrence through the experience of bombers in the 1920s and 1930s.)

According to Sanger, the US can’t really discuss deterrence, arms control, and norms until it is willing to explain its offensive capabilities. The experience with drone strikes is illustrative, to a certain degree. However, to this day, no government official has confirmed Olympic Games.

I’d like to thank Stewart Baker for interviewing David Sanger, and I thank David Sanger for agreeing to be interviewed. I look forward to podcast 51, featuring my PhD advisor Dr Thomas Rid.

More here


USA’s Double Standard: Don’t Hack Like the USA

Here’s a list of companies allegedly hacked by the United States of America:

  •  Realtek
  •  JMicron
  •  C-Media

Hacked by the USA

And why did the United States hack three Taiwanese technology companies?

To steal digital certificates in order to sign drivers used by Stuxnet and Duqu.
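
The damage from that kind of theft is structural: a digital signature proves only possession of the signing key, never the signer’s intent. Here is a minimal Python sketch of that trust property, using the third-party cryptography package; a raw RSA signature stands in for Authenticode driver signing, and all names and payloads are illustrative:

```python
# Toy illustration: signature checks prove key possession, not intent.
# Requires the 'cryptography' package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Pretend this is a vendor's driver-signing key. If an attacker steals it,
# anything they sign looks exactly as "legitimate" as the vendor's drivers.
vendor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

driver = b"... driver binary from the real vendor ..."
malware = b"... attacker payload signed with the stolen key ..."

def sign(key, blob):
    return key.sign(blob, padding.PKCS1v15(), hashes.SHA256())

def verify(pub, sig, blob):
    # Raises InvalidSignature on mismatch; returns None on success.
    pub.verify(sig, blob, padding.PKCS1v15(), hashes.SHA256())

# Both verify cleanly against the vendor's public key. The OS loader cannot
# tell, from the signature alone, which blob the vendor actually intended.
for blob in (driver, malware):
    verify(vendor_key.public_key(), sign(vendor_key, blob), blob)
    print("signature valid")
```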

Here’s a company allegedly hacked by North Korea:

  •  Sony Pictures

Hacked by North Korea?

Now where do you suppose the DPRK got the crazy idea that it was okay to hack companies and steal their data?

—————

From DER SPIEGEL: The Digital Arms Race: NSA Preps America for Future Battle

On 19/01/15 At 02:23 PM

More here


US Law Enforcement Also Conducting Mass Telephone Surveillance

Late last year, in a criminal case involving export violations, the US government disclosed a mysterious database of telephone call records that it had queried in the case.

The defendant argued that the database was the NSA’s, that the query was unconstitutional, and that the evidence should be suppressed. The government said that the database was not the NSA’s. As part of the back and forth, the judge ordered the government to explain the call records database.

Someone from the Drug Enforcement Administration did that last week. Apparently, there’s another bulk telephone metadata collection program, a “federal law enforcement database” authorized under a federal drug trafficking statute:

This database [redacted] consisted of telecommunications metadata obtained from United States telecommunications service providers pursuant to administrative subpoenas served upon the service providers under the provisions of 21 U.S.C. 876. This metadata related to international telephone calls originating in the United States and calling [redacted] designated foreign countries, one of which was Iran, that were determined to have a demonstrated nexus to international drug trafficking and related criminal activities.

The program began in the 1990s and was “suspended” in September 2013.
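
For a sense of what querying such a database means in practice, here is a purely hypothetical sketch of a selector query over bulk call-record metadata. Every field name, phone number, and record below is invented for illustration; nothing here describes the actual DEA system:

```python
# Hypothetical sketch of a selector query over bulk call-record metadata.
# All field names, numbers, and records are invented for illustration.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CallRecord:
    caller: str          # originating number (US)
    callee: str          # destination number
    dest_country: str    # ISO country code of the destination
    start: datetime
    seconds: int         # call duration

records = [
    CallRecord("+1-555-0100", "+98-21-555-0199", "IR",
               datetime(2013, 5, 1, 9, 30), 240),
    CallRecord("+1-555-0101", "+44-20-555-0123", "GB",
               datetime(2013, 5, 1, 10, 0), 60),
]

DESIGNATED = {"IR"}  # "designated foreign countries" per the declaration

def query(selector: str):
    """Return records touching the selector and a designated country."""
    return [r for r in records
            if r.dest_country in DESIGNATED
            and selector in (r.caller, r.callee)]

print(query("+1-555-0100"))  # one matching record
```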

News article. Slashdot thread. Hacker News thread.

EDITED TO ADD (1/19): Another article.

More here


New NSA Documents on Offensive Cyberoperations

Appelbaum, Poitras, and others have another NSA article with an enormous Snowden document dump on Der Spiegel, giving details on a variety of offensive NSA cyberoperations to infiltrate and exploit networks around the world. There’s a lot here: 199 pages. (Here they are in one compressed archive.)

Paired with the 666 pages released in conjunction with the December 28 Spiegel article (compressed archive here) on NSA cryptanalytic capabilities, we’ve seen a huge amount of Snowden documents in the past few weeks. According to one tally, it runs 3,560 pages in all.

Hacker News thread. Slashdot thread.

EDITED TO ADD (1/19): In related news, the New York Times is reporting that the NSA has infiltrated North Korea’s networks, and provided evidence to blame the country for the Sony hacks.

EDITED TO ADD (1/19): Also related, the Guardian has an article, based on the Snowden documents, reporting that GCHQ has been spying on journalists. Another article.

More here


The Conscience of a Hacker

The Conscience of a Hacker — written just over 29 years ago.


It could have been written yesterday. Read the rest here.

More context here.

On 14/01/15 At 06:44 PM

More here


Security and Military Experts Fall For “Open” Wi-Fi

Seems like just about everybody will use “open” Wi-Fi — even Swedish security experts.

[Image: a network named “Open Guest”. #Facepalm]

A case of do as I say, not as I do?

From Ars Technica: Activist pulls off clever Wi-Fi honeypot to protest surveillance state

A link to our own Wi-Fi experiment report can be found here.

On 15/01/15 At 02:31 PM

More here


Cass Sunstein on Red Teaming

On January 7, 2015, FBI Director James Comey spoke to the International Conference on Cyber Security at Fordham University. Part of his remarks addressed the controversy over the US government’s attribution of the digital attack on Sony Pictures Entertainment to North Korea.

Near the end of his talk he noted the following:

We brought in a red team from all across the intelligence community and said, “Let’s hack at this. What else could be explaining this? What other explanations might there be? What might we be missing? What competing hypothesis might there be? Evaluate possible alternatives. What might we be missing?” And we end up in the same place.

I noticed some people in the technical security community expressing confusion about this statement. Isn’t a red team a bunch of hackers who exploit vulnerabilities to demonstrate defensive flaws?

In this case, “red team” refers to a group performing the actions Director Comey outlined above. In his new book Wiser: Getting Beyond Groupthink to Make Groups Smarter, Harvard professor and former government official Cass Sunstein explains the sort of red team Comey meant. In this article published by Fortune, Sunstein and co-author Reid Hastie advise the following as one of the ways to avoid groupthink and improve decision making:

Appoint an adversary: Red-teaming

Many groups buy into the concept of devil’s advocates, or designating one member to play a “dissenting” role. Unfortunately, evidence for the efficacy of devil’s advocates is mixed. When people know that the advocate is not sincere, the method is weak. A much better strategy involves “red-teaming.”

This is the same concept as devil’s advocacy, but amplified: In military training, red teams play an adversary role and genuinely try to defeat the primary team in a simulated mission. In another version, the red team is asked to build the strongest case against a proposal or plan. Versions of both methods are used in the military and in many government offices, including NASA’s reviews of mission plans, where the practice is sometimes called a “murder board.”

Law firms have a long-running tradition of pre-trying cases or testing arguments with the equivalent of red teams. In important cases, some law firms pay attorneys from a separate firm to develop and present a case against them. The method is especially effective in the legal world, as litigators are naturally combative and accustomed to arguing a position assigned to them by circumstance. A huge benefit of legal red teaming is that it can help clients understand the weaknesses of their side of a case, often leading to settlements that avoid the devastating costs of losing at trial.

One size does not fit all, and cost and feasibility issues matter. But in many cases, red teams are worth the investment. In the private and public sectors, a lot of expensive mistakes can be avoided with the use of red teams.
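
Comey’s questions (“What other explanations might there be?”) track the structured technique of weighing competing hypotheses, in which analysts score each hypothesis against each piece of evidence and favor the least inconsistent one. Here is a toy Python sketch; the hypotheses and scores are invented for illustration and are in no way the FBI’s actual evidence:

```python
# Toy analysis-of-competing-hypotheses (ACH) matrix. Hypotheses and scores
# are invented for illustration, not drawn from the FBI's actual case.
# Score: +1 evidence consistent with hypothesis, -1 inconsistent, 0 neutral.
evidence = ["malware code overlap", "infrastructure reuse", "stated motive"]
matrix = {
    "DPRK operators":      [+1, +1, +1],
    "disgruntled insider": [-1, -1, 0],
    "unrelated criminals": [0, -1, -1],
}

# In ACH the *least inconsistent* hypothesis survives; the emphasis is on
# disconfirming evidence rather than tallying support.
for hypothesis, scores in matrix.items():
    inconsistencies = sum(1 for s in scores if s < 0)
    print(f"{hypothesis}: {inconsistencies} inconsistencies")
```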

Some critics of the government’s attribution statements have ignored the fact that the FBI took this important step. An article in Reuters, titled In cyberattacks such as Sony strike, Obama turns to ‘name and shame’, adds some color to this action:

The new [name and shame] policy has meant wresting some control of the issue from U.S. intelligence agencies, which are traditionally wary of revealing much about what they know or how they know it.

Intelligence officers initially wanted more proof of North Korea’s involvement before going public, according to one person briefed on the matter. A step that helped build consensus was the creation of a team dedicated to pursuing rival theories – none of which panned out.

If you don’t trust the government, you’re unlikely to care that the intelligence community (which includes the FBI) red-teamed the attribution case. Nevertheless, it’s important to understand the process involved. The government and IC are unlikely to release additional details, unless and until they pursue an indictment similar to last year’s indictment of five individuals from PLA Unit 61398.

Thanks to Augusto Barros for pointing me to the new “Wiser” book.

More here


Does This Sound Familiar?

I read the following in the 2009 book Streetlights and Shadows: Searching for the Keys to Adaptive Decision Making by Gary Klein. It reminded me of the myriad ways operational information technology and security processes fail.

This is a long excerpt, but it is compelling.

== Begin ==

A commercial airliner isn’t supposed to run out of fuel at 41,000 feet. There are too many safeguards, too many redundant systems, too many regulations and checklists. So when that happened to Captain Bob Pearson on July 23, 1983, flying a twin-engine Boeing 767 from Ottawa to Edmonton with 61 passengers, he didn’t have any standard flight procedures to fall back on.

First the fuel pumps for the left engine quit. Pearson could work around that problem by turning off the pumps, figuring that gravity would feed the engine. The computer showed that he had plenty of fuel for the flight.

Then the left engine itself quit. Down to one engine, Pearson made the obvious decision to divert from Edmonton to Winnipeg, only 128 miles away. Next, the fuel pumps on the right engine went.

Shortly after that, the cockpit warning system emitted a warning sound that neither Pearson nor the first officer had ever heard before. It meant that both the engines had failed.

And then the cockpit went dark. When the engines stopped, Pearson lost all electrical power, and his advanced cockpit instruments went blank, leaving him only with a few battery-powered emergency instruments that were barely enough to land; he could read the instruments because it was still early evening.

Even if Pearson did manage to come in for a landing, he didn’t have any way to slow the airplane down. The engines powered the hydraulic system that controlled the flaps used in taking off and in landing. Fortunately, the designers had provided a backup generator that used wind power from the forward momentum of the airplane.

With effort, Pearson could use this generator to manipulate some of his controls to change the direction and pitch of the airplane, but he couldn’t lower the flaps and slats, activate the speed brakes, or use normal braking to slow down when landing. He couldn’t use reverse thrust to slow the airplane, because the engines weren’t providing any thrust. None of the procedures or flight checklists covered the situation Pearson was facing.

Pearson, a highly experienced pilot, had been flying B-767s for only three months, almost as long as the airplane had been in the Air Canada fleet. Somehow, he had to fly the plane to Winnipeg. However, “fly” is the wrong term. The airplane wasn’t flying. It was gliding, and poorly. Airliners aren’t designed to glide very well: they are too heavy, their wings are too short, and they can’t take advantage of thermal currents. Pearson’s airplane was dropping more than 20 feet per second.

Pearson guessed that the best glide ratio speed would be 220 knots, and maintained that speed in order to keep the airplane going for the longest amount of time. Maurice Quintal, the first officer, calculated that they wouldn’t make it to Winnipeg. He suggested instead a former Royal Canadian Air Force base that he had used years earlier. It was only 12 miles away, in Gimli, a tiny community originally settled by Icelanders in 1875. So Pearson changed course once again.

Pearson had never been to Gimli but he accepted Quintal’s advice and headed for the Gimli runway. He steered by the texture of the clouds underneath him. He would ask Winnipeg Central for corrections in his heading, turn by about the amount requested, then ask the air traffic controllers whether he had made the correct turn. Near the end of the flight he thought he spotted the Gimli runway, but Quintal corrected him.

As Pearson got closer to the runway, he knew that the airplane was coming in too high and too fast. Normally he would try to slow to 130 knots when the wheels touched down, but that was not possible now and he was likely to crash.

Luckily, Pearson was also a skilled glider pilot. (So was Chesley Sullenberger, the pilot who landed a US Airways jetliner in the Hudson River in January of 2009. We will examine the Hudson River landing in chapter 6.) Pearson drew on some techniques that aren’t taught to commercial pilots. In desperation, he tried a maneuver called a sideslip, skidding the airplane forward in the way ice skaters twist their skates to skid to a stop.

He pushed the yoke to the left, as if he was going to turn, but pressed hard on the right rudder pedal to counter the turn. That kept the airplane on course toward the runway. Pearson used the ailerons and the rudder to create more drag. Pilots use this maneuver with gliders and light aircraft to produce a rapid drop in altitude and airspeed, but it had never been tried with a commercial jet. The sideslip maneuver was Pearson’s only hope, and it worked.

 When the plane was only 40 feet off the ground, Pearson eased up on the controls, straightened out the airplane, and brought it in at 175 knots, almost precisely on the normal runway landing point. All the passengers and the crewmembers were safe, although a few had been injured in the scramble to exit the plane after it rolled to a stop.

The plane was repaired at Gimli and was flown out two days later. It returned to the Air Canada fleet and stayed in service another 25 years, until 2008. It was affectionately called “the Gimli Glider.”

The story had a reasonably happy ending, but a mysterious beginning. How had the plane run out of fuel? Four breakdowns, four strokes of bad luck, contributed to the crisis.

Ironically, safety features built into the instruments had caused the first breakdown. The Boeing 767, like all sophisticated airplanes, monitors fuel flow very carefully. It has two parallel systems measuring fuel, just to be safe. If either channel 1 or channel 2 fails, the other serves as a backup.

However, when you have independent systems, you also have to reconcile any differences between them. Therefore, the 767 has a separate computer system to figure out which of the two systems is more trustworthy. Investigators later found that a small drop of solder in Pearson’s airplane had created a partial connection in channel 2. The partial connection allowed just a small amount of current to flow: not enough for channel 2 to operate correctly, but just enough to keep the default mode from kicking in and shifting to channel 1.

The partial connection confused the computer, which gave up. This problem had been detected when the airplane had landed in Edmonton the night before. The Edmonton mechanic, Conrad Yaremko, wasn’t able to diagnose what caused the fault, nor did he have a spare fuel-quantity processor. But he had figured out a workaround. If he turned channel 2 off, that circumvented the problem; channel 1 worked fine as long as the computer let it.

The airplane could fly acceptably using just one fuel-quantity processor channel. Yaremko therefore pulled the circuit breaker to channel 2 and put tape over it, marking it as inoperative. The next morning, July 23, a crew flew the plane from Edmonton to Montreal without any trouble.

The second breakdown was a Montreal mechanic’s misguided attempt to fix the problem. The Montreal mechanic, Jean Ouellet, took note of the problem and, out of curiosity, decided to investigate further. Ouellet had just completed a two-month training course for the 767 but had never worked on one before. He tinkered a bit with the faulty Fuel Quantity Indicator System without success. He re-enabled channel 2; as before, the fuel gauges in the cockpit went blank. Then he got distracted by another task and failed to pull the circuit breaker for channel 2, even though he left the tape in place showing the channel as inoperative. As a result, the automatic fuel-monitoring system stopped working and the fuel gauges stayed blank.

A third breakdown was confusion about the nature of the fuel gauge problem. When Pearson saw the blank fuel gauges and consulted a list of minimum requirements, he knew that the airplane couldn’t be flown in that condition. He also knew that the 767 was still very new; it had first entered airline service in 1982. The minimum requirements list had already been changed 55 times in the four months that Air Canada had been flying 767s. Therefore, pilots depended more on the maintenance crew to guide their judgment than on the lists and manuals.

Pearson saw that the maintenance crews had approved this airplane to keep flying despite the problem with the fuel gauges. Pearson didn’t understand that the crew had approved the airplane to fly using only channel 1. In talking with the pilot who had flown the previous legs, Pearson had gotten the mistaken impression that the airplane had just flown from Edmonton to Ottawa to Montreal with blank fuel gauges. That pilot had mentioned a “fuel gauge problem.” When Pearson climbed into the cockpit and saw that the fuel gauges were blank, he assumed that was the problem the previous pilot had encountered, which implied that it was somehow acceptable to continue to operate that way.

The mechanics had another way to provide the pilots with fuel information. They could use a drip-stick mechanism to measure the amount of fuel currently stored in each of the tanks, and they could manually enter that information into the computer. The computer system could then calculate, fairly accurately, how much fuel was remaining all through the flight.

In this case, the mechanics carefully determined the amount of fuel in the tanks. But they made an error when they converted that to weight. This error was the fourth breakdown.

Canada had converted to the metric system only a few years earlier, in 1979. The government had pressed Air Canada to direct Boeing to build the new 767s using metric measurements of liters and kilograms instead of gallons and pounds: the first, and at that time the only, airplane in the Air Canada fleet to use the metric system. The mechanics in Montreal weren’t sure about how to make the conversion (on other airplanes the flight engineer did that job, but the 767 didn’t use a flight engineer), and they got it wrong.

In using the drip-stick measurements, the mechanics plugged in the weight in pounds instead of kilograms. No one caught the error. Because of the error, everyone believed they had 22,300 kg of fuel on board, the amount needed to get them to Edmonton, but in fact they had only a little more than 10,000 kg, less than half the amount they needed.

 Pearson was understandably distressed by the thought of not being able to monitor the fuel flow directly. Still, the figures had been checked repeatedly, showing that the airplane had more fuel than was necessary. The drip test had been repeated several times, just to be sure.

That morning, the airplane had gotten approval to fly from Edmonton to Montreal despite having fuel gauges that were blank. (In this Pearson was mistaken; the airplane used channel 1 and did have working fuel gauges.) Pearson had been told that maintenance control had cleared the airplane.

The burden of proof had shifted, and Pearson would have to justify a decision to cancel this flight. On the basis of what he knew, or believed he knew, he couldn’t justify that decision. Thus, he took off, and everything went well until he ran out of fuel and both his engines stopped.

== End ==
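
The first breakdown is worth lingering on, because it is a failure mode common in redundant IT systems: the failover logic triggers only on a clean failure, so a component that is broken but not quite dead keeps the healthy backup locked out. Here is a minimal Python sketch of that failure mode, assuming invented signal values and thresholds rather than Boeing’s actual comparator logic:

```python
# Minimal sketch of the two-channel failover failure mode (invented
# thresholds; not Boeing's actual fuel-quantity comparator logic).
def select_channel(ch1_signal: float, ch2_signal: float) -> str:
    """Prefer channel 2; fall back to channel 1 only if ch2 reads dead."""
    DEAD = 0.0
    if ch2_signal == DEAD:   # the default kicks in only on a clean failure
        return "channel 1"
    return "channel 2"

print(select_channel(5.0, 5.0))   # healthy: "channel 2"
print(select_channel(5.0, 0.0))   # clean failure: falls back to "channel 1"
# The drop of solder let a trickle of current through: channel 2 was faulty
# but not dead, so the system kept trusting it and the gauges went blank.
print(select_channel(5.0, 0.3))   # partial failure: still "channel 2"
```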

This story is an example of why one cannot build “unhackable systems.” I also believe it demonstrates that operational and decision-based failures will continue to plague technology. It is no use building systems that theoretically “have no vulnerabilities” so long as people must operate, and make decisions based on, those systems.
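
The fourth breakdown, the pounds-for-kilograms entry, is the archetypal unit-confusion bug, and the excerpt’s numbers check out: 22,300 lb is a little more than 10,000 kg. Below is a sketch of the arithmetic, plus one common defense: wrapping quantities in distinct types so a static checker such as mypy catches unit mixups. The class and function names are mine, not from the book:

```python
# The excerpt's arithmetic: 22,300 entered as kg was really pounds.
LB_PER_KG = 2.20462
print(22_300 / LB_PER_KG)  # ~10,115 kg: "a little more than 10,000 kg"

# One common defense: carry units in distinct types so raw numbers
# can't silently cross a unit boundary.
from dataclasses import dataclass

@dataclass(frozen=True)
class Kilograms:
    value: float

@dataclass(frozen=True)
class Pounds:
    value: float
    def to_kg(self) -> Kilograms:
        return Kilograms(self.value / LB_PER_KG)

def enough_fuel(onboard: Kilograms, needed: Kilograms) -> bool:
    return onboard.value >= needed.value

onboard = Pounds(22_300)
# A static checker rejects the mixup the mechanics made:
# enough_fuel(onboard, Kilograms(22_300))  # type error: Pounds != Kilograms
print(enough_fuel(onboard.to_kg(), Kilograms(22_300)))  # False: not enough
```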

If you liked this post, I’ve written about engineering disasters in the past.

You can buy the book in which this story appears at Amazon.com.

More here


One Definitive Prediction For 2015

As Carl Sagan used to say, extraordinary claims require extraordinary evidence. And recently, the public has been asked to believe one particularly extraordinary claim: that North Korea attacked Sony Pictures Entertainment and destroyed an incredible amount of its data. Thus far, no extraordinary evidence has been offered.

Much of the “evidence” offered so far has come mainly from anonymous senior US officials, most of whom are reportedly not actively involved in the FBI’s investigation.

And the FBI itself? Well, Director James Comey’s position can be summed up rather simply as… trust us. But many in the information security industry don’t trust Comey’s position, an attitude that he has reportedly attributed to “post-Snowden mistrust”. He apparently fails to realize that in many circles, mistrust of US government conclusions long pre-dates Edward Snowden.

Who hacked Sony Pictures Entertainment may never be known. But no matter who is responsible, what’s especially enlightening about this case is the US government’s “trust us” stance. It demonstrates a continued lack of respect for the intelligence of US citizens and other people around the world.

Trust is an act of faith. But trust in government shouldn’t require a leap of faith. Trust in extraordinary claims in the face of murky and seemingly contradictory information… is simply a leap too far. And so, the Obama administration’s rush to judge North Korea despite the lack of any real evidence brings us to our unfortunate prediction for 2015.

Prediction: Section 215 and Section 206 of the USA PATRIOT Act and Section 6001 of the Intelligence Reform and Terrorism Prevention Act will be reauthorized before their June 1, 2015 expiration date.

Post-Snowden, it appeared as though the controversial provisions might lack the political support needed to avoid sunset. But now, we are confident that Washington D.C. will act to protect itself from “nation state cyber-terrorism” and will renew them after all.

Don’t expect reform in 2015. The violation of your digital freedom will continue. The sunset date is 144 days from now. Mark your calendars.
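
For the calendar-minded, the 144-day figure matches the gap between this post’s date (January 8, 2015) and the June 1 sunset:

```python
# Days from this post (January 8, 2015) to the June 1, 2015 sunset.
from datetime import date
print((date(2015, 6, 1) - date(2015, 1, 8)).days)  # 144
```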

—————

P.S. Bonus speculation!

You can track “cyber” related legislation at congress.gov. Keep an eye out for new Clipper chips and/or other backdoor mandates.

On 08/01/15 At 06:51 PM

More here
