
February 2007 Archives

February 12, 2007

Notes on RSA Conference

I just returned from a couple of days at the RSA Conference. I went with the primary goal of looking at the exhibitors to get a feel for what is going on in the security marketplace. However, I didn’t read the schedule closely enough to see that the exhibition’s last day was Thursday, so I only got a couple of hours on Thursday to walk the floor. I did get to hear a couple of interesting talks Thursday and Friday on the developers’ track, which was a nice surprise.

My high-level summary of the exhibition hall: NAC (Network Admission Control), security compliance, and identity management.

My previous experience with NAC and its ilk was from Cisco and Microsoft (NAP), where it was a means of quarantining machines that did not match your software version and patch level requirements. For both the Cisco and Microsoft solutions, a client agent had to be installed, and the only client agent available was for Windows. Some of the vendors I talked with claimed that a client agent was desirable but not necessary, but non-Windows support even in the agentless case seemed to be lagging. This is probably a reasonable decision in the enterprise space, where most laptops and desktops are running Windows. Many of these vendors seem to have stopgap solutions in the Windows space until Vista takes off. Vista has NAP/NAC protection built in.

However, other vendors seem to have grown the boundaries of NAC quite a bit. In particular, it looks like the identity management people have adapted the NAC label to describe how they can download personalized policy to network enforcement points.

I was particularly looking for security compliance vendors to get a feel for tools in that space. There were quite a few companies with products that ranged from basic asset management to risk analysis frameworks to event and configuration integration. The advent of additional regulations really seems to have promoted action in this space. Most of these vendors had something to say about connecting the policy to your implementation, but it was hard to see exactly how this was performed. I need to dig through my bag of literature and do some web surfing to get a better idea of what the products really do with respect to policy.

Finally, there were a number of identity management companies. For the most part they are nicer AAA (Authentication, Authorization, and Accounting) implementations which communicate with enforcement points using RADIUS. I was familiar with Cisco’s Access Control Server (ACS), and there is certainly room for growth in this space. As addresses become more dynamic through DHCP and people become more mobile with laptops, tracking network policy through IP addresses becomes an ever cruder approximation. By authenticating people, and then configuring authorization decisions on the enforcement points based on the person, you get a pretty good approximation of user-based policy. As I recall, there were some rough edges in how the enforcing devices could actually deal with the authorization, but hopefully that is being addressed too.

The big players were there too, but I spent most of my time at the smaller booths. Intrusion Prevention Systems (IPS) seemed to be prominently displayed at the big network security players’ booths. Check Point had just acquired a new IPS company. One smaller vendor, Arxceo Corp, had some interesting IPS products. Evidently, they found a way to take a non-signature-based approach to IPS. According to the fellow I spoke with, they get a lot of leverage from fingerprinting source ports, so they are more quickly able to correlate incoming and outgoing traffic and thus use statistical techniques to home in on anomalous traffic faster. They were handing out an SC Magazine review in which they won Best Buy. Their submission was the Ally ip100, about the size and shape of a fancy paint scraper. It was up against products an order of magnitude or more larger in cost and size, and evidently it worked very well. It is heartening to hear that a statistics-based approach finally works.

The Netscreen/Juniper person mentioned that they were going to put some of the virus scanning technology on box. Traditionally, firewalls route virus scanning, URL filtering, etc. off box to a partner’s solution. By licensing the software and keeping the analysis on box, Netscreen/Juniper should see a significant performance improvement.

One other interesting bit of technology was from Coretrace. They are selling a box and software system that enables you to completely lock down desktop/laptop machines (presumably running Windows), and then centrally configure and manage them from their appliance. This was developed by one of the main architects of NetRanger (acquired by Cisco in ‘98? to be the basis of Cisco’s IDS solution). They install a special driver (presumably a filesystem shim) to intercept all file requests and prohibit file changes even if you are running in the Administrators group. The idea is that you create a golden image and then push all changes from the management platform. Seems like a good idea in the Windows XP world. Maybe this is not essential in a Windows Vista world; we’ll see. Also, presumably, you must make some portion of the filesystem writable; otherwise, the computer is not of much use to the end user. I haven’t spent the time to convince myself that you can lock down enough to really be safe. Finally, couldn’t you achieve similar results just by banning people from the Administrators group? I’m not sure, but this does seem like an interesting idea.

Notes on Vista Secure Development talk

I got the Exhibition Plus package at the RSA conference, which gave me access to one of the talk tracks. Based primarily on this talk, I chose the developers’ track. I had hoped they were going to talk about the security model from a developer’s point of view, but instead they talked about the Security Development Lifecycle (SDL) they used in developing Vista. I had heard Mike Howard talk about the SDL process several years ago at the Security and Privacy Symposium, but it still was an interesting talk, and I gathered a few tidbits from it. There were two presenters. Mike Howard acted as the main presenter, and Jeffrey Jones acted as the sidekick feeding lines to Mike.

The idea of the SDL is to make security an essential part of the software development process. Everyone creates threat models. Everyone runs tools on their code before checkin. Security is no longer an afterthought. The speakers did not claim that bugs were now extinct. They acknowledged that there would always be bugs; rather, with SDL they eliminate the bugs they know how to eliminate. I heard from a colleague that Bill Gates, in his keynote address, made some silly comment about security bugs no longer being possible, so it is comforting that the people in charge of security have a more realistic view of things.

One big part of the SDL is threat modeling. I actually use part of the Microsoft threat modeling book when teaching Information Assurance. When designing a module, you understand the threats and create a formal model of how threats can co-opt your design. According to Howard, they developed 1400+ threat models in the course of writing Vista.

Two of the changes they introduced that had perhaps the biggest impact on bug count were not particularly technologically esoteric: banning unsafe functions and using an annotation language to help the compiler detect flaws. They banned over 120 functions, such as strcpy and sprintf. This seems fairly straightforward, but evidently they dedicated a team to it just to deal with the millions of lines of existing code.

They added a Standard Annotation Language (SAL) to decorate the declarations of function calls, so it is clear how parameters to the calls are related, e.g., arg two is the length and arg three is the buffer. Evidently most of the Microsoft system headers are annotated. It wasn’t clear from the talk whether this was just compile-time analysis or whether it did runtime checking. It seems like it must do runtime checking to catch buffer and length mismatches.

While Vista was under development, the security team would do root cause analysis on bugs found under XP and determine if the bugs would be an issue under Vista. According to the speakers, a root cause analysis of 63 buffer bugs showed that 82% were already fixed in Vista by a combination of function banning and SAL: 43% were caught by SAL and 41% by removing functions (some bugs evidently by both, since the figures overlap).

Other security development techniques included fuzzing, integrity levels, and heap and stack protections. None of these are new techniques, but Microsoft is probably pushing these techniques to wider adoption. Fuzzing is a means of input testing: you feed a variety of bad input to your program and see how it behaves. The speakers talked about it primarily with respect to fuzzing configuration files. For each file format that a program uses, the developer must run it against a large number (I didn’t write down the number) of randomly fuzzed versions.

Each process has an integrity level associated with it. Presumably each file also has an integrity level associated with it. The idea of integrity levels was promoted by Biba in the ’70s and has shown up in some trusted operating systems as a mandatory control. The idea is that a low-integrity entity should not pass information to higher-integrity entities. The example the speakers gave was that IE should run at a low integrity level while other processes run at higher integrity levels. Thus even if malware takes control of IE, it cannot harm the higher-integrity processes. I have lots of questions about this technology, and it will take more than a cursory glance at MSDN to figure out exactly what this means. If this is in fact a mandatory control, it is a huge step, and potentially quite an error-prone one. Trusted OSes have traditionally been quite hard to use because the mandatory controls cause things to break in unexpected ways.

They also talked about service hardening, which seemed to me like one of the most important security features announced with Vista. You can specify the privileges required when registering a service. You can also create a policy to describe the expected network behavior of the service. This is definitely one of the areas I want to play with when I get Vista running again.

Vista includes all the heap and stack protections that have been floating around. Vista provides the Data Execution Prevention (DEP, exposing the no-execute bit) that was introduced in XP SP2. By making the stack non-executable, you thwart a lot of exploits. However, the speakers noted that only system software is protected by DEP by default. A number of programs will break under DEP, including IE and the Sun JVM. Acrobat and Flash used to break, but newer versions will run with DEP enabled. A program such as a Java JIT that is creating a program to execute in data must use VirtualProtect calls to tell DEP that it is OK to execute a subsegment of data. Though wouldn’t a savvy exploit add such a call too?

In Vista, the heap has been cleaned up to make some of the heap exploits more difficult. The array of free lists is removed, and canaries are added to detect corruption.

Image randomization has also been added. At boot time, one of 256 image memory layouts is selected. In addition, when a program is executed, the stack and heap bases are randomized.

All in all, it sounds like many smart people have been working on Vista. While it may not be perfect, it sounds like they have made many security advances. I’ve been reading through Peter Gutmann’s review of their DRM technology, though, and that makes me feel much less hopeful.

February 28, 2007

Routing versus security

We had a guest lecturer from industry in class several weeks back. He gave a very good talk about the issues they encountered while designing a new firewall architecture for their organization. One of the issues he spoke about in passing was how their organization separated routing/networking and security expertise into separate organizations. I had encountered this division when talking with customers as a Cisco person. From the Cisco security perspective this was an undesirable division, because while the networking folks were generally familiar and comfortable with the Cisco way of doing things, the security folks were not.

From my perspective as a network security person, I also tended to think that this division was undesirable. We encountered some rather poorly run organizations where not only were the security folks separate from the networking folks, the security folks did not have a good understanding of how traffic should flow. This was very problematic when deploying CSPM. This tool required the user to supply a global network security policy and a description of the topology; it would then generate the appropriate configurations for the policy enforcement points (i.e., firewalls and security appliances). But if you didn’t understand your network topology, the generated configurations were worse than useless. In a broader sense, you cannot secure network communication if you don’t know where the packets could flow.

However, my guest lecturer gave me a new appreciation for a separation of security and routing implementation. In their organization, a network device is either a router or a firewall. This modularity simplifies the components in their network architecture. They have no five-legged firewalls. Each firewall has two traffic interfaces and one management interface, so the firewall does no routing. Similarly, the routers only route; they do no packet filtering.

Both firewall configuration and routing configuration can be fairly complicated. Trying to configure them both together just raises the possibility of inopportune interactions (e.g., conflicts between static address translation and routing on PIX devices).

Of course, even if you are separating the implementation of firewall and routing components, someone on your team must understand how both work together. You still cannot do adequate traffic control if you don’t know where the traffic will flow. I’m not sure that I agree with this division in all cases, but I now see how this separation-of-features design approach can be beneficial in some cases.

Initial thoughts on Vista Security

I finally have Vista installed on a machine again. I haven’t had much time to work with it. It is in the class lab, so hopefully some of the students will take advantage of the access too.

I did take a look at the service changes. No longer do most services run as Local System; rather, many of them run as either “Local Service” or “Network Service”. These “users” don’t appear in the normal user list, but you can look at the “user rights assignment” to see what privileges are associated with these service users.

In the “user rights assignment” there is a “Modify an object label” privilege which is not assigned to any user by default. The explanation implies that it is the integrity label, which definitely looks mandatory. From some of the online documentation, it looks like there are “low” versions of many directories. For example, in my AppData directory, there are Local and LocalLow directories. Presumably the LocalLow version is set at the lower integrity label. I couldn’t figure out how to determine levels from the GUI. Guess I need to dig in and write up a command-line tool.

I had heard that the granularity of auditing had improved since the NT days. But looking at the local security policy, the number of items you can enable for audit looks about the same.

About February 2007

This page contains all entries posted to Trustworthy Thoughts in February 2007. They are listed from oldest to newest.
