
December 2006 Archives

December 7, 2006

Preface

I’ve been working in the area of network security, and a bit in trusted operating systems, for the last eleven years or so. In addition, I’ve been teaching computer security for the last two years. Teaching Information Assurance courses has really forced me to broaden my background. Through the material and the students’ comments and questions, I’ve come to look at many things in a different light than I would have from my industry perspective.

In this blog, I want to capture some of those insights into keeping information trustworthy, drawn from class research, student comments, and just operating in today’s world.

Untrusted hosts

In our last class lecture of the semester, we discussed current news stories involving security and computer technology. We spent quite a bit of time talking about a side-channel attack on RSA keys that was in the news recently thanks to an interview one of the creators of the attack gave to Le Monde (as reported by New Scientist). Here’s a draft of the paper describing the attack, which has been submitted for publication.

The crux of the attack is that a spy process and the RSA process access a shared resource: the branch target address cache. The spy process executes just enough distinct branches to fill that cache. Depending on whether the current bit being used in the exponentiation is a 1 or a 0, the RSA process makes a different number of branches through the exponentiation logic and thus evicts a different number of the spy process’s entries. The spy process then simply times the execution of its own branches; a longer time means more branch target misses, from which it can deduce whether the RSA process was working with a 0 or a 1.
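
To make the key dependence concrete, here is a minimal sketch of the classic square-and-multiply loop at the heart of RSA exponentiation. It is Python purely for illustration (the attack in the paper targets native code); the point is that the multiply branch is taken only for 1-bits of the exponent, which is exactly the difference the spy can observe through the branch target cache.

```python
def modexp(base, exponent, modulus):
    """Left-to-right square-and-multiply modular exponentiation.

    The marked branch is taken only for 1-bits of the exponent, so the
    sequence of taken branches mirrors the secret key bit for bit.
    """
    result = 1
    for bit in bin(exponent)[2:]:               # most significant bit first
        result = (result * result) % modulus    # always: square
        if bit == '1':                          # key-dependent branch
            result = (result * base) % modulus  # only for 1-bits: multiply
    return result

# The branch history differs between these two calls even though the
# exponents have the same bit length.
assert modexp(7, 0b1010, 1000003) == pow(7, 0b1010, 1000003)
assert modexp(7, 0b1111, 1000003) == pow(7, 0b1111, 1000003)
```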

It all seems fine in theory. But for this to work with the amazing accuracy noted in the news articles (508 of 512 bits revealed in a single encryption or decryption operation), the environment must be very controlled. The spy process needs a very good idea of when to start, and if other processes are active at the same time, the timing results will be thrown off. The attacker could still get valuable information, but he would need to run multiple times to average out the noise.
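
The averaging step is just repeated measurement. As a schematic stand-in for the spy’s probe phase (a real implementation uses a native-code loop sized to the branch target cache and a cycle counter, not Python and a nanosecond wall clock), the shape of it is:

```python
import time

def timed_branch_run(n_branches=4096):
    """Execute a fixed set of branches and return the elapsed time.

    Stand-in for the spy's probe: in the real attack these branches are
    laid out to exactly fill the branch target cache, so evictions by
    the victim show up as extra misses and a longer run time.
    """
    start = time.perf_counter_ns()
    x = 0
    for i in range(n_branches):
        if i & 1:        # the branches whose targets the victim may evict
            x += 1
    return time.perf_counter_ns() - start

# Averaging repeated probes suppresses noise from unrelated processes.
samples = [timed_branch_run() for _ in range(1000)]
print(sum(samples) / len(samples), "ns on average")
```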

One of the students in class actually saw a demonstration of this attack over the summer. In that scenario, the owner of the system ran the spy process against a music or video service running on his machine: the DRM scenario. The same assumption comes up in Bruce Schneier’s comments on the attack. Schneier’s analysis makes very good points about the futility of protecting your data (the media company’s data in this case) on an untrusted host (Joe-bob’s computer). If you own both the data and the device, you only have to protect yourself from the “outside”. If you own the data and trust the device, as when you put your valuables in a safe deposit box at a bank, that is also reasonable. But if you hand your data to someone you don’t fully trust to “use” for a while, you have problems. In the physical world, this is ameliorated by things like deposits and contract law. But in cyberspace, the ability to trivially make identical copies breaks that physical analogy.

The paper talks of the attack more as a virus, an uninvited program (or maybe that was just my bias in reading it). I think this is the more realistic scenario. In the DRM case, where the attacker owns the device, there are more direct ways to get at the data. Sean Smith talks about directly reading the memory of the target process to pull out the information of interest as a way to bypass DRM protections implemented by the Trusted Platform Module (TPM). It would take a tech-savvy person to do, but for a commoditized service like movies or videos, it only takes one tech-savvy person to figure it out and write a utility, and then it is available to the masses. If folks hack TiVos and Xboxes, someone will hack widely used protected music and video services.

External hosts

Another current-events topic we briefly touched on continues in the vein of examining options for separating data and devices. Bruce Schneier has a recent entry on the separation of ownership of data from ownership of the containing device, pointing out that if you own the data but don’t trust the containing device, you are in for trouble.

Managed storage providers fit the model of separating data ownership from hosting-device ownership. Their success depends on their trustworthiness, much as a bank’s success depends on its reputation for trust with its customers (which in today’s world may be a bit misplaced). The degree of trustworthiness among providers varies widely. I know someone whose web site went down for a few days because an employee of the hosting company stole the disk the site happened to be stored on to sell it on eBay. Of course, he was caught immediately because he showed up on the security cameras (not a very smart employee). He wasn’t really interested in the data (which was all public in my friend’s case), so it was just an availability problem.

So for most folks it is a risk analysis. For most small operators, outsourcing the care and maintenance of non-sensitive data is a good economic trade-off. In fact, this entry is hosted on a non-local machine.

But to move to the next level, storage providers must convince people to trust them with sensitive data. This is already done in the physical world with disaster recovery: large companies may be able to afford to create and staff their own off-site storage facilities, but that is not economically feasible for small and midsized companies. The Bunker, for example, is a secure storage management company that uses old military bunkers to house data centers. That’s good from a natural-disaster point of view but mainly PR where more mundane data security concerns go. According to their site, they implement the standard good data security procedures to keep their customers’ data safe from the bad guys. Otherwise, the bunker wouldn’t be much protection against a good social engineer, a disgruntled employee, or a less-than-honest co-located customer.

Another approach, which relies less on the integrity of a single host, is being implemented by CleverSafe. It relies on data splitting, or secret sharing. The data is divided into 11 somewhat redundant streams stored on widely geographically distributed servers; if you can retrieve a majority of the streams, you can reconstruct the original data. It is like RAID in that you get availability, but because the hardware is physically separate, you are not subject to disasters at a single site as you would be with RAID. In addition, you get protection against untrustworthy devices: an attacker would have to subvert a majority of the storage locations to access your data. That is still possible, of course, but much harder.
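
The majority-to-reconstruct property is easy to demonstrate with Shamir secret sharing, a threshold scheme related to (though not identical to) the information dispersal CleverSafe describes; the 11-share, 6-to-reconstruct parameters below simply mirror the description above. A minimal sketch:

```python
import random

PRIME = 2**127 - 1  # prime field large enough for small secrets

def make_shares(secret, n=11, k=6):
    """Split secret into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(123456789)
assert reconstruct(shares[:6]) == 123456789   # any 6 of the 11 suffice
assert reconstruct(shares[3:9]) == 123456789
```

Five or fewer shares reveal nothing at all about the secret, which is what makes a minority of subverted storage locations useless to an attacker.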

December 12, 2006

Distributed Computing Model of Network Flow Enforcement

My PhD work was in the area of parallel systems; specifically, in developing program development systems (compilers) for distributed-memory multiprocessors. Unfortunately for me, it was pretty clear by the time I graduated that such computers and programming models were not going to take the world by storm. But looking at the world as a sea of distributed processors did influence my thinking in my next phase of life, when we were developing firewalls.

At the time (the mid-nineties) firewalls were strictly border devices. An organization might have a firewall protecting itself from the outside unknowns, but it was clear that network connectivity and complexity were rapidly increasing, and the number of network security enforcement devices within a single organization, or within a set of cooperating organizations, would only grow. Therefore, with the Centri firewall, we took the view of firewalls as elements in a distributed-memory machine that we were compiling for. The user specified a global security policy and the target architecture, and Centri compiled the appropriate configurations to enforce the policy (assuming a sufficient density of network enforcement points). Solsoft takes a similar point of view, and researchers from AT&T also took a generation view with their Firmato and Fang firewall toolkits.
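
The compilation idea itself is simple to sketch. Given a global policy of permitted flows and a map of which enforcement devices sit on the path between each pair of zones, every device on a permitted flow’s path gets a permit rule, and everything else is denied. The zone names, device names, and rule format below are hypothetical, not Centri’s actual model:

```python
GLOBAL_POLICY = [
    # (source zone, destination zone, service) to be permitted
    ("internet", "dmz", "http"),
    ("internal", "dmz", "ssh"),
    ("internal", "internet", "http"),
]

# Enforcement devices on the path between each pair of zones.
PATHS = {
    ("internet", "dmz"): ["border-fw"],
    ("internal", "dmz"): ["inner-fw"],
    ("internal", "internet"): ["inner-fw", "border-fw"],
}

def compile_policy(policy, paths):
    """Emit a per-device rule list that enforces the global policy."""
    configs = {dev: [] for devs in paths.values() for dev in devs}
    for src, dst, svc in policy:
        for dev in paths[(src, dst)]:
            configs[dev].append(f"permit {svc} from {src} to {dst}")
    for rules in configs.values():
        rules.append("deny any")          # close each device's policy
    return configs

for device, rules in compile_policy(GLOBAL_POLICY, PATHS).items():
    print(device, rules)
```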

Once we were acquired by Cisco, Centri was not long for this world, but our compiler view of network security policy lived on in Cisco Secure Policy Manager (CSPM), which targeted PIX and IOS devices instead of Centri enforcement points. CSPM lived on for about five years but was too far ahead of its time. The majority of our customers 1) had existing devices with existing configurations that they believed in, and 2) wanted the option of a simplistic view in which they could manage each device independently. In addition, there were some base technology problems with CSPM that were never adequately addressed.

Today the reality of complex, dynamic network environments is well upon us, but the predominant management model is still that of configuring (programming) each network enforcement device independently (hopefully guided by some higher-level security policy). My technical direction now is guided by the analysis model. Rather than a compilation/generation model, we are working on tools that take the set of device configurations and make some determinations about how well they fit the global policy intent. In many ways this has more technical difficulties than the distributed compilation model: there are many ways to program the same concept, and while the compiler only has to generate one of them, the analyzer must understand all of the options.
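
A toy version of the analysis direction, again with hypothetical names and rule formats: simulate first-match rule evaluation on each device along a path, compose the results end to end, and flag flows where the composed behavior disagrees with the global intent.

```python
def device_permits(rules, flow):
    """First matching rule wins; an unmatched flow is implicitly denied."""
    src, dst, svc = flow
    for action, r_src, r_dst, r_svc in rules:
        if (r_src in (src, "any") and r_dst in (dst, "any")
                and r_svc in (svc, "any")):
            return action == "permit"
    return False

def path_permits(configs, path, flow):
    """A flow gets through only if every device on its path permits it."""
    return all(device_permits(configs[dev], flow) for dev in path)

configs = {
    "inner-fw": [("permit", "any", "any", "any")],   # overly permissive
    "border-fw": [("permit", "internal", "internet", "http"),
                  ("deny", "any", "any", "any")],
}

# Global intent: telnet from the internal zone to the dmz must be denied.
flow = ("internal", "dmz", "telnet")
intended = False
actual = path_permits(configs, ["inner-fw"], flow)   # dmz path: inner-fw only
if actual != intended:
    print("policy violation on flow", flow)  # fires: inner-fw is too open
```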

The analyzer gives the end user much freedom. He can apply an analyzer to an installed base of network device configurations, or use one to help guide the evolution of those configurations. Perhaps eventually the analyzer will evolve into a generator; however, that is more of an end-user psychological acceptability issue than a technical one.
