Trustworthy Thoughts: Security thoughts you can trust
Dr. H, http://thought-mesh.net/tthoughts (Movable Type 3.34)

Semester in review (2008-05-20)

The semester finished up for me a couple weeks ago. I taught the lab course this semester. There is no final in that course, but instead a final project, so I'm done a bit earlier than regular lecture courses. This...
Things that went well

Virtual Machines - I used VMware Server on a 64-bit FC8 base, and it was so much better than continually reloading images and dual booting. I got a set of new Dell systems, so this was the first year that running VMware was really feasible. I was able to build an image on one machine and then copy it around to the other machines. For some of the OS configuration labs, each student could have his/her own image. This made the SE Linux lab slightly better but not totally (see what went wrong below). The department support guy suggested at the beginning of the semester that we just use VMware for everything. At that time, I didn't really have any experience with VMware and was hesitant to completely give up on a physical lab. I think I'll stay with the current environment for at least another semester. A good portion of the labs involve networking, and I think working with real devices is more beneficial than interacting with virtualized routers. I'm not familiar with how much variety VMware supports for virtual network devices. Plus, having a physical lab forces students to directly interact with each other, which has fallen away from most of the other lab-oriented courses.
Metasploit and the exploit lab - Each semester I've had the students write a stack smashing attack, following the framework of the Smashing the Stack for Fun and Profit article. I give the students a basic shell-spawning shellcode, and give them the assembly for a more advanced shellcode that they must translate to hex. Last year, I had another professor guest lecture and give a demo of Metasploit. This semester we had an in-lecture exercise using Metasploit at the beginning of the exploit lab assignment. A couple students figured out another way of using the Metasploit-generated poison packet to launch the attack. And in the shared lab environment, they taught this technique to their colleagues. Only three students used the technique I outlined in the lab writeup. I think this is cool because the students dug into Metasploit. However, I think many of the students lost out on a deeper understanding of how the exploit really works because they didn't have to translate to hex or create their own packet. I haven't decided whether I'm going to insist on a particular technique next time I give this lab.
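For the curious, the translation step itself is mechanical. A minimal sketch of turning assembled bytes into the escaped hex string that gets embedded in an exploit buffer (the byte values below are placeholders for illustration, not a working shellcode):

```python
# Convert raw shellcode bytes to the "\x..\x.." string form used in
# Smashing the Stack-style exploit buffers. The byte values here are
# placeholders, not a working shellcode.
shellcode = bytes([0x31, 0xc0, 0x50, 0x68])

def to_hex_escapes(code: bytes) -> str:
    return "".join(f"\\x{b:02x}" for b in code)

print(to_hex_escapes(shellcode))  # \x31\xc0\x50\x68
```

Students doing this by hand from an assembler listing produce the same result byte by byte, which is exactly the understanding the Metasploit shortcut skips.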
Vista Lab - Actually this was almost identical to the XP least-privilege programming lab I've used in past years, but I added a component to look at the mandatory integrity controls (MIC) that Vista adds. The students also had to figure out how to work around User Account Control. Based on the results of this semester's labs, I should be able to integrate a real MIC component into the lab next semester.
Virtual PIX - This is the first year I've used the virtual firewall (security context) feature of the PIX. Since I have only a very basic license, I could only set up 2-3 virtual firewalls per physical firewall (5 physical devices). But with that and VMs on the hosts, I could set up 10-15 reasonably separated firewall environments for pairs of students to work on. There was one major hiccup that caused much student angst. You want each virtual firewall on a PIX to have a unique MAC. You can assign your own, or use the "automatic" MAC. I went automatic and did not review the MAC selection closely. Alone in the lab, my few probes worked fine, but with a full lab, traffic would fail mysteriously. Some students noticed that traffic destined for another student's network was being delivered to their machine. It turns out the auto MAC selection only guarantees uniqueness within a device. Between devices it pretty much guarantees conflicts. Once I created my own MAC numbering scheme, everything worked fine.
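For reference, a manual numbering scheme along these lines can be sketched as below. The particular layout (a locally administered prefix with device, context, and interface numbers packed into the low octets) is my illustration, not the actual PIX convention:

```python
# Sketch: build MACs that are unique across physical devices by
# embedding the device, context, and interface numbers in a locally
# administered address (0x02 prefix). Illustrative, not the PIX scheme.
def context_mac(device_id: int, context_id: int, if_index: int) -> str:
    octets = [0x02, 0x00, 0x00, device_id, context_id, if_index]
    return ":".join(f"{o:02x}" for o in octets)

# Device 3, virtual firewall 2, interface 1:
print(context_mac(3, 2, 1))  # 02:00:00:03:02:01
```

Because the device number is part of the address, two contexts on different physical firewalls can never collide, which is exactly the guarantee the automatic assignment failed to give.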
Things that did not go so well
SE Linux Lab - Each year I'm hopeful that the SE Linux lab will finally go well. In the first two years, we had file labeling problems because students were sharing machines. Last year, I had students work in groups so they did not have to share machines, but they ran into issues of not understanding what macros were needed to make basic file creation and execution work. This year, with the per-student VMs and the new Fedora SE Linux administrative GUIs, I thought we were all set. However, the user support has changed significantly, and the new mechanism was not well documented. Plus, newrole didn't seem to work at all. I assigned my standard user separation policy where Alice is a member of two groups (or roles), so it would be natural to map that policy into different roles. Much time was wasted, but ultimately no one got the roles to work. I think next semester I'll give up and do a class exercise. Currently, I do the MCS as a class exercise. Perhaps I'll expand that to do a bit of policy entry. Then I'll add a snort lab or an identity management lab.
Written assignments - Again this year I only got one written assignment in. I just need to start thinking about this sooner. It is hard to come up with an interesting design assignment for which everyone will have sufficient background. Perhaps I can mine this year's final projects for subsections to assign. I think that having some pure design and writing assignments is a good thing. Much of what these folks will be doing after graduation is designing and communicating designs. Perhaps I can work in a proper risk analysis exercise here.
Next fall I'm scheduled to teach the Intro to Security course again (assuming I get my contract). Last year I had a record 80 students. So far the numbers are much lower (40 or 50). We added another prerequisite class, which may be dropping the numbers. The class will meet three times a week. In the past I've taught two longer sessions a week. I'm going to try to make one of the weekly meetings a more interactive class and less of a lecture. If anyone out there has ideas for security-related in-class exercises, I'd appreciate any and all pointers.
Been a long time (2008-05-20)

Been a long time since I've posted here. I have a few ideas on sticky notes I hope to type up in the next couple days. Got caught up in the semester and bringing our InfoSecter product to market.

We have also started an external corporate web log where Alan and I will be posting technical musings.
So what is policy anyway? (2008-01-10)

I've been working with tools and products that nominally work with network security policy for over ten years now, and the term policy is still unclear and over-used in my mind. In his security text "Computer Security: Art and Science",...
Morris Sloman's group at Imperial College London has done a lot of work on policy language specifications. One aspect of their work is formalizing a refinement hierarchy. The policy statements at the top of the refinement are very broad and too ill-defined to be formally analyzed, but perhaps intuitively understandable to the executives responsible for setting the organization's policy. At each level of the refinement hierarchy, you formalize some aspect of the policy, potentially creating multiple versions from the previous level, e.g. refining the policy for one site vs. another, or for one technology (firewall) vs. another (operating system). At the lowest level of the refinement hierarchy, you have policy that could be used to directly control enforcing devices. At the higher levels of the hierarchy, you have more general policies that could be used to direct a global validation of a security implementation.
In his text, Bishop identifies three points in this refinement hierarchy. One is the English or natural language policy. The others are the high-level and low-level formal policies. He distinguishes between these two types of formal policies by asking whether a policy statement could be picked up and used to direct the operation of a different device or a different vendor's device. A high-level policy could in theory be used to direct the operation of a different device. A low-level policy statement is tied to a specific device and is really more configuration than policy. He gives the type enforcement language DTEL as an example of a high-level policy. This language is the parent or grandparent of the SE Linux type enforcement policy language. While DTEL is formal and fairly detailed, it operates on general concepts of subjects (or processes) and objects (or files), so one could easily see how a specific DTEL policy could direct both firewall operation and operating system operation. He gives Tripwire as an example of a low-level policy. Tripwire uses a database of signatures of "good" versions of system binaries. Tripwire periodically checks the signatures against the binaries, and alerts the system administrator if there is a mismatch. One might want to build an organization-wide database of good binaries and signatures and install that on all systems. However, due to how Tripwire is implemented, the database contains very low-level information (timestamps and inode numbers, I believe) that prohibits such sharing.
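To make the sharing problem concrete, here is a minimal sketch of a Tripwire-style integrity check. The recorded fields (content hash plus inode number and modification time) follow the guess above about what the real database holds; the point is that the machine-specific fields tie the baseline to a single system:

```python
# Minimal Tripwire-style check: record a baseline for a file, later
# compare against it. Because st_ino and st_mtime are specific to one
# filesystem, a baseline database built on one machine won't verify on
# another, even for an identical binary.
import hashlib
import os

def baseline(path: str) -> dict:
    st = os.stat(path)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {"sha256": digest, "inode": st.st_ino, "mtime": st.st_mtime}

def unchanged(path: str, record: dict) -> bool:
    return baseline(path) == record
```

A higher-level policy would record only the content hash, which would be shareable across systems; it is precisely the extra low-level fields that demote this from policy to configuration.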
This concept of policy as configuration that crosses systems is consistent with how policy is used in the web services and identity management world. SAML statements are in essence policies that describe how individuals should be identified between administrative domains. Marianne Winslett's group works on algorithms and frameworks to reason about formal policies that control trust. In these cases, the policies are like little contracts that are exchanged between entities that are trying to collaborate. Unlike the network security management case, the trust policies don't control a single administrative domain. Rather, the trust policies control how the base infrastructure evolves.
One concept that recurs in most uses of the term policy is conflict. In any non-trivial scenario there will be multiple, conflicting goals. An organization may want to provide an easy-to-use web interface to encourage new customers, but the organization must also ensure that its infrastructure has sufficient authentication and auditing to avoid fraudulent customers. These two goals necessarily conflict. At the implementation level, one group may require HTTP communication with a particular server, but another group may need to prohibit all communication with that server to avoid potential conflict-of-interest contamination.
A policy language could force the user to resolve all conflicts through ordered lists or policy priorities. Other systems try to be more intelligent and deduce the correct conflict resolution. A lot of the autonomic management work involves creating greater intelligence and reasoning to save the user from the error-prone work of detailed conflict resolution.
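The ordered-list style of conflict resolution can be sketched with the HTTP example above: the first matching rule wins, so the user resolves the conflict purely by ordering. The rule fields and host names are hypothetical:

```python
# First-match-wins rule list: ordering is the conflict resolution.
from typing import NamedTuple

class Rule(NamedTuple):
    host: str    # "*" matches any host
    port: int    # 0 matches any port
    action: str  # "allow" or "deny"

def decide(rules: list, host: str, port: int) -> str:
    for r in rules:
        if r.host in ("*", host) and r.port in (0, port):
            return r.action
    return "deny"  # default deny when nothing matches

rules = [
    Rule("appserver", 80, "allow"),  # group A needs HTTP to the server
    Rule("appserver", 0, "deny"),    # group B prohibits all traffic to it
]
print(decide(rules, "appserver", 80))  # allow
print(decide(rules, "appserver", 22))  # deny
```

Swapping the two rules flips the outcome for port 80, which is exactly the burden that priority-based languages place on the user and that the more intelligent systems try to take over.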
Game targets (2008-01-04)

Our household received two gaming devices over the holidays that are capable of communicating with the outside world. We activated the communications of the Wii, and are merrily looking at Miis from across the planet. This got me to thinking...
The EULA prohibits reverse engineering the network protocol, but I presume that observing network communication to understand its impact on my network environment is fair game.
So the good news is that the Nintendo folks seem to have thought at least a little bit about communication security. The bad news is that this won't likely be enough in the long run. Or maybe even now. There is lots of noise about Wii hacks on the web, but they seem to be the sort of hacks you do to yourself to improve your gaming experience and impress your friends. Once someone figures out the communication channel, you can "hack" others, and then the real fun begins. Wii-bots anyone?
There are lots of flops to be had from the game hardware. The PS3 was the major platform contributing flops to Stanford's Folding@home effort.
Email and bots (2007-12-13)

I was reading through project writeups now at the end of the semester. Two were particularly interesting and fed on each other as I read them. One project writeup was a review of the sorry state of email security. The other reviewed the workshop paper "Peer to peer botnets: overview and case study".
The bot engineers also frequently change their communication protocols and add encryption. With encryption, they can frequently rekey. All of this makes it more difficult for the outside observer to understand what is going on. All this change potentially also makes it more difficult to keep your bots in sync. If a bot was off the net for a while, it would miss some protocol updates. When it returns, would the bot software try to bring it up to date? How long does the bot have to be down before it goes stale?
This is all bad news for the interposing bot traffic detector that I've been thinking about. The likelihood that a deployment could stay up to date is pretty low. When operating at the border, perhaps detecting and reacting to odd behavior is your best bet. Though if the botnets are indeed so massive (hundreds of thousands or millions of devices), the amount of malignant behavior generated by each bot might not rise above the noise. The dark address space work is one approach to making malignant behavior more apparent.
The bot paper relates to the email study because the main means of distributing the initial infection was getting folks to click on exes attached to emails. One would think that after all this time, no one would click on a weird exe. But then again, if we had some sort of proper identity authentication, I would really know the attachment was from my trustworthy friend. Thus, we lead back to having secure expectations for email that are not currently based in reality.
Security FUD and chronic infection (2007-11-01)
I'm periodically struck by how much security research and development is sold by scare talk (fear, uncertainty, and death). Unless you go over the top, you don't get the news articles, the congressional hearings, or the money. There was a recent video going around that showed security researchers messing with a SCADA system and blowing up a power substation (in a lab) using techniques that are generally known in the community. This got congressional hearings going.
A couple years back, I saw a local researcher present simulation results that showed how two or three simultaneous errors (squirrels, trees, or terrorists) could take down the greater Chicago-land power grid. But simulation results don't make the news. Real explosions do.
I was talking with a former colleague last week about trusted OS results from a couple decades back that were strongly worded and controversial at the time. He cynically said that the researcher in question made his statements as controversial as possible to ensure that he got attention and thus funding.
In both of these cases, raising attention through showmanship brought broader attention to valid security concerns. So I suppose the ends justify the means. It is just how the game is played.
In any case, while pulling together my network security notes for the semester, I was struck by the amount of FUD in my notes. However, I think much of the concern there is valid. Or perhaps better stated, the concern is chronic. With the IP infrastructure, our problems are chronic. We can incrementally try to make our part of the world better, but ultimately there will be infections and attacks out there. There is a shared resource out there, and by sharing we are exposing ourselves to the threat. Like getting lice by sharing a hair brush.
Perhaps we could just switch over to a newly designed, better network infrastructure. But it isn't going to happen. Not anytime soon anyway. Still waiting for IPv6 ten years later. Even in that case, there would be threats. With the shared network, you are at the mercy of the least prepared or least savvy connected entity.
No deep observation here really. Just noting that the biological analogies fit here with respect to chronic infectious diseases.
A border approach to bot mitigation (2007-11-01)

While gathering up my notes for discussing firewall technology for this semester's course, I started thinking about newer security issues that could be handled by the border or interposition approach of traffic cleansing. It seems like netbot cleanup would be...
Given the experience with my home service provider, the depth of technical expertise isn't there.

Unless the home service providers are getting negative feedback for hosting bots, there is no upside to going through this bother. You are just pissing off your customers without getting anything in return.

The service provider customers are very price-conscious and would not pay for centralized security features. This might explain why home service providers (at least around here) do not offer scanning, firewalling, or spam prevention services.
Scanning for bot control traffic may not be feasible. The volume of control traffic would be much smaller than the bot-generated traffic. While historically control traffic has been sent over IRC at a fixed port, attackers are getting clever and sending commands over different ports. The specifics of the commands could be easily changed, making it difficult to separate commands from real human-to-human communication. I don't know enough about today's bot technology to know how easily it could be detected.
I'm most curious about the second issue. Given what I've been reading about the prevalence of bot networks, I would think this would be generating a significant amount of traffic from some service providers. Aren't they getting negative feedback from upstream service providers and peer providers? Aren't they getting blacklisted, etc.? Or is no one tying it back? In the case of address-spoofed, bot-generated traffic, doing the tie-back would be difficult. But I would assume that much bot-generated traffic uses its real address.
Becoming a LinkedIn junky (2007-08-02)
I signed up for LinkedIn last week, and I have wasted too much time on it in the meantime. Pawing through pages of old acquaintances is an excellent work avoidance technique. And since it seems kind of business oriented, you don't have to feel as guilty about wasting time.
Just found the statistics on my "network" this morning. 25% of my network is from Brazil. Since I only have 3 contacts, my network is heavily dominated by one contact (runs a doggie day care) with over 500 contacts. Thanks to her, I am within 3 degrees of nearly everyone in my geographic area.
Validating non-functional features (2007-08-02)
If you read up on security testing, one of the reasons given for why testing security is so hard is that security is a non-functional attribute. When writing a "traditional" test suite, you are testing the presence of a functional feature, e.g. the login widget or the client/server communication protocol. The test designer can look at the functional specification and generate test cases to evaluate whether each aspect of the product is working as designed. A given test case is very concrete, with easy-to-determine pass/fail conditions. For example, the functional specification might say that the login widget should only accept user names using mixed-case alphanumeric characters. In this case the test suite might have cases that try "good" login names of various lengths and login names with bad characters of various lengths. There are issues of not being able to exhaustively test all possible inputs, but with fuzzing and other statistical techniques you can cover quite a bit of the space.
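As a concrete illustration of how functional such a test is, here is a sketch of the login-name case. The regex reading of "mixed-case alphanumeric" (letters and digits only) is my assumption about the hypothetical spec:

```python
# Functional test sketch for the hypothetical login-widget spec:
# user names may contain only letters and digits.
import re

VALID_LOGIN = re.compile(r"[A-Za-z0-9]+")

def accepts(name: str) -> bool:
    # fullmatch requires the whole string to match the pattern
    return VALID_LOGIN.fullmatch(name) is not None

# "good" login names of various lengths
assert all(accepts(n) for n in ["a", "Bob7", "XyZ123abcXyZ"])
# login names with bad characters, of various lengths
assert not any(accepts(n) for n in ["", "bob!", "a b", "xxxxx;"])
```

Each case has an unambiguous pass/fail verdict against the spec, which is exactly what the top-level "operates securely" requirement lacks.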
Some product traits are not so concretely defined, such as security, performance, and reliability. These features are usually called non-functional characteristics, and are harder for the test suite designer to systematically address. The top level security requirement is that the product/system operates securely. So what does that mean? Hopefully, there is a security architecture for the design that dives down and identifies how security affects the design. At least then there are functional aspects of the security implementation that can be tested, e.g., functional testing of the user authentication system or the link encryption mechanism.
However, while you can approximate testing the security of the system by testing the functional aspects of the system design, there is still a big space left for negative testing. The product should operate within a security policy that defines secure and insecure states. If the system starts in a secure state, it should continue to transition into secure states. Presumably, an attacker (or other user playing outside the rules) will try to use the system outside of how it was designed and communicated via the functional specification. Again, here there may be some directed statistical techniques to push the system through a wide variety of states. Also, threat analysis can be useful to help the test designer look at the system in non-standard ways.
Software testing is a form of system validation. The term auditing is generally used when testing a particular system installation. With auditing, the audit team is responsible for determining whether the system is working as desired. In this case, the auditor is working from an organization's security policy rather than a product functional specification. But again, the cases of functional and non-functional features come into play.
In my current work, we are building tools that use formal network operation specifications derived from the organization's network security policy to determine whether a security configuration is operating within spec. Originally, I thought the fact that security is a non-functional system attribute made this validation "more difficult", but in working through the issues in the post, I see what we are doing is validating a functional approximation of the non-functional security policy. So while security is more concerned with negative results (blocked traffic) than standard network flow engineering, the type of validation is the same.
To really consider the unbounded security auditing problem, you need to consider how to question the system security model. In many ways, this is the outside view used by penetration testers, who try to exploit flaws in the infrastructure to move the system into an insecure state.
Should jaded people be allowed to teach? (2007-04-19)
I find much self-recognition when reading Dilbert. While I enjoy technology and building things, much of the real world of engineering sadly has a very strong human element. And unfortunately not the good warm and fuzzy aspect of the human element.
There are many good and noble efforts that get co-opted by silly humans and mutated into something ridiculous. Sadly, many activities that I teach about fall into this category: software development process, risk analysis, security policy development. These are all important areas, and I'm sure that many good and earnest folks have done great work in them. Unfortunately, I've run into many other folks who have made a mockery of these processes through willful ignorance or just plain stupidity. Like my experience with "Agile" programming, where only the unpleasant aspects of the process were cherry-picked, e.g., daily meetings, but with each meeting lasting an hour rather than ten minutes. So I can really empathize with items like the Elbonian Software Process in Dilbert.
Because these are important topics, I try to keep a positive spin when teaching about, say, risk analysis or security policy development. I try to show how these processes can solve real problems, and only point out how they can be misused. Unfortunately, by the end of the lecture it becomes all too easy to tell stupid industry stories. So perhaps jaded, sarcastic people should not be allowed to pollute the minds of young people. Or maybe I should just stop reading Dilbert.
Problems with composing security "best practices" (2007-04-02)
My local bank has been frustrating me with its current security "improvements". I have not chosen my bank on the basis of its technological savvy, and if I didn't have such inertia with this bank I would probably look for another bank.
Clearly, this bank has been chasing the security improvements made by the big boys. Unfortunately, blindly applying these changes particularly when starting from some rather weak security underpinning does not make for an attractive result. You get the worst of both worlds. A system with weak security plus intrusive security features that annoy the end user.
On the face of it, each security improvement sounds quite reasonable, but in total it just doesn't work. I've seen all these errors on other sites, but my bank seems to have more than its fair share. First is the issue of password safety. The bank site implements a three-strikes-and-you're-out policy. If you miss the password three times, your account is disabled and you must call a person during normal business hours. The security goal is admirable. You want to protect vulnerable accounts from a brute force attack. However, with the proliferation of passwords and different requirements on passwords at different sites (alphanumeric only, must have a non-alphanumeric character, at most 8 characters, at least 12), you either must write down your passwords or probably miss a few times as you enter the variants of your password families. If you could call a human 24x7 to reset, as you can with most credit card companies, this wouldn't be so bad, but for me it is invariably Sunday night, as I'm trying to get the accounting done, that I lock myself out. An alternative that defeats brute force attackers is a timed backoff: after one failure pause 1 second, after two failures pause 5 seconds, and so on. Somewhat annoying to the legitimate user, but fatal to the brute force attacker.
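The timed backoff can be sketched as follows. I've used a capped doubling schedule rather than the exact 1-second/5-second progression above; the constants are illustrative:

```python
# Timed backoff: the delay before the next login attempt grows with
# consecutive failures. Negligible for a legitimate user who fat-fingers
# a password twice, crippling for a brute-force attacker.
def backoff_delay(failures: int, base: float = 1.0, cap: float = 300.0) -> float:
    if failures <= 0:
        return 0.0
    return min(cap, base * (2 ** (failures - 1)))

print([backoff_delay(n) for n in range(5)])  # [0.0, 1.0, 2.0, 4.0, 8.0]
```

A server would sleep for (or reject attempts within) this delay before rechecking credentials, throttling guessing without ever locking the account.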
Of course, we all lose our passwords, and my bank has the recover-your-password option that emails a password to your previously registered email account. That is all pretty standard. As long as you reset your password immediately, your exposure is pretty low. However, most sites will reset your password and send you a new randomly generated password. This way your password can be stored in a form that can never be retrieved in plaintext by bank employees (e.g. as a cryptographic hash), but my bank actually sends your previous password. Not only is my password being sent through insecure email, but presumably some bank employees can retrieve my password at will. If you tend to use families of passwords, this not only exposes your bank account security, but also all other accounts that have passwords in the same family.
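The non-retrievable storage most sites use can be sketched like this, in a modern form: the site keeps only a salted hash, so a reset must generate a new random password, and nobody, employee or attacker, can get the old one back. The PBKDF2 parameters are illustrative:

```python
# Store passwords as salted PBKDF2 hashes: verification is possible,
# retrieval of the plaintext is not.
import hashlib
import hmac
import os

def store(password: str) -> tuple:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest  # only this pair is kept in the database

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = store("correct horse")
print(verify("correct horse", salt, digest))  # True
print(verify("wrong guess", salt, digest))    # False
```

A bank that can email you your previous password is, by definition, not storing it this way.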
Recently, the bank has added a ninety-day lockout. If you don't use the bank web interface for ninety days, the interface is disabled. Again, sounds like a great idea. Closing unneeded holes can only improve security. But how is the account reactivated? You call the bank and give them the secret 4-digit number that was given to you on account creation. This magic number has a close correlation to your social security number. It isn't infeasible for a random person to find an account number and 4 digits of the social security number (through dumpster diving or social engineering). Then the attacker enters the information on the web, picks a new password, and away he goes.
Finally, the bank web interface has added pictures to avoid phishing links, and personal questions. I'm OK with the pictures. If I'm presented with a picture, I'll probably remember it is the one I selected (recognition vs. recall), and if I'm not going to remember the picture, it doesn't deter me from getting to my account; I just don't gain from the additional security. However, the personal questions are quite annoying. This is a growth from the "mother's maiden name" question that we've had for years. Anyone who is somewhat paranoid will put in a fake name that only they recall. But in the last year, I've seen a number of web sites add more personal questions. Generally, you get to pick one or two that are meaningful for you. However, my bank site had an unusually large number of personal questions. Some questions are factual and easy enough to recall, e.g. what city you were born in. Others might have answers that are hard to spell, or variations that are hard to recall, e.g. what was your first car? Did I enter "Volkswagen", "Beetle", "Bug", or some combination? But the worst are the favorites. Who was your favorite teacher? What is your favorite movie, book, sports team? My favorite 2 years ago when I registered for the site may have changed or been forgotten. This is a classic issue of security vs. usability. Either the user tries to play along and answer legitimately, or he might write down the answers and post them on sticky notes on his desk, or he just answers the same word for all questions. If the security is too annoying, the user will seek ways to avoid it.
Hmm... As I'm writing all of this down, I think it might be time to reconsider my bank selection, inertia or no inertia.
Initial thoughts on Vista Security (2007-02-28)
I finally have Vista installed on a machine again. I haven't had much time to work with it. It is in the class lab, so hopefully some of the students will take advantage of the access too.
I did take a look at the service changes. Most services are no longer run as Local System; rather, many of them run as "Local Service" or "Network Service". These "users" don't appear in the normal user list, but you can look at the "user rights assignment" to see what privileges are associated with these service users.
In the "user rights assignment" there is a "Modify an object label" privilege, which is not assigned to any user by default. The explanation implies that it refers to the integrity label, which definitely looks mandatory. From some of the online documentation it looks like there are "low" versions of many directories. For example, in my AppData directory there are Local and LocalLow directories. Presumably the LocalLow version is set at the lower integrity label. I couldn't figure out how to determine the levels from the GUI. Guess I need to dig in and write up a command-line tool.
I had heard that the granularity of auditing had improved since the NT days. But looking at the local security policy, the number of items you can enable for audit looks about the same.
Routing versus security (Dr. H, 2007-02-28)
We had a guest lecturer from industry in class several weeks back. He gave a very good talk about the issues they encountered while designing a new firewall architecture for their organization. One of the issues he touched on in passing was how their organization separated routing/networking and security expertise into separate groups. I had encountered this division when talking with customers as a Cisco person. From the Cisco security perspective this was an undesirable division, because while the networking folks were generally familiar and comfortable with the Cisco way of doing things, the security folks were not.
From my perspective as a network security person, I also tended to think that this division was undesirable. We encountered some rather poorly run organizations where not only were the security folks separate from the networking folks, but the security folks did not have a good understanding of how traffic should flow. This was very problematic when deploying CSPM. This tool required the user to give a global network security policy and a description of the topology. Then it would generate the appropriate configurations for the policy enforcement points (i.e., firewalls and security appliances). But if you didn't understand your network topology, the generated configurations were worse than useless. In a broader sense, you cannot secure network communication if you don't know where the packets could flow.
However, my guest lecturer gave me a new appreciation for a separation of security and routing implementation. In their organization, a network device is either a router or a firewall. This modularity simplifies the components in their network architecture. They have no five-legged firewalls. Each firewall has two traffic interfaces and one management interface, so the firewall does no routing. Similarly, the routers only route; they do no packet filtering.
Both firewall configuration and routing configuration can be fairly complicated. Trying to configure them both together just raises many possibilities for inopportune interactions (e.g., conflicts between static address translation and routing on PIX devices).
Of course, even if you separate the implementation of the firewall and routing components, someone on your team must understand how the two work together. You still cannot do adequate traffic control if you don't know where the traffic will flow. I'm not sure that I agree with this division in all cases, but I now see how this separation-of-features design approach can be beneficial in some cases.
Notes on Vista Secure Development talk (Dr. H, 2007-02-12)
I got the Exhibition Plus package at the RSA conference, which gave me access to one of the talk tracks. Based primarily on this talk, I chose the developer's track. I had hoped they were going to talk about the security model from a developer's point of view, but instead they talked about the Security Development Lifecycle (SDL) they used in developing Vista. I had heard Mike Howard talk about the SDL process several years ago at the Security and Privacy Symposium, but it was still an interesting talk, and I gathered a few tidbits from it. There were two presenters: Mike Howard acted as the main presenter, and Jeffery Jones acted as the sidekick feeding lines to Mike.
The idea of the SDL is to make security an essential part of the software development process. Everyone creates threat models. Everyone runs tools on their code before check-in. Security is no longer an afterthought. The speakers did not claim that bugs are now extinct. They acknowledged that there will always be bugs; rather, with the SDL they eliminate the bugs they know how to eliminate. I heard from a colleague that Bill Gates in his keynote address made some silly comment about security bugs no longer being possible, so it is comforting, anyway, that the people in charge of security have a more realistic view of things.
One big part of the SDL is threat modeling. I actually use part of the Microsoft threat modeling book when teaching Information Assurance. When designing a module, you identify the threats and create a formal model of how the threats can co-opt your design. According to Howard, they developed 1400+ threat models in the course of writing Vista.
Two of the changes they introduced that had perhaps the biggest impact on bug count were not particularly technologically esoteric: banning unsafe functions and using an annotation language to help the compiler detect flaws. They banned over 120 functions such as strcpy and sprintf. This seems fairly straightforward, but evidently they dedicated a team to it just to deal with the millions of lines of existing code.
They added a Standard Annotation Language (SAL) to decorate the declarations of function calls, so it is clear how parameters to the calls are related, e.g., arg two is the length and arg three is the buffer. Evidently most of the Microsoft system headers are annotated. It wasn't clear from the talk whether this was just compile-time analysis or whether it did runtime checking. It seems like it must do runtime checking to catch buffer and length mismatches.
While Vista was under development, the security team would do root-cause analysis on bugs found in XP and determine whether the bugs would be an issue under Vista. According to the speakers, a root-cause analysis of 63 buffer bugs showed that 82% were already fixed in Vista by a combination of function banning and SAL: 43% were caught by SAL and 41% by removing functions (the small overlap between the two accounts for the sum exceeding 82%).
Other security development techniques included fuzzing, integrity levels, and heap and stack protections. None of these are new techniques, but Microsoft is probably pushing them toward more widespread adoption. Fuzzing is a means of input testing: you feed a variety of bad input to your program and see how it behaves. The speakers talked about it primarily with respect to fuzzing configuration files. For each file format that a program uses, the developer must run it against a large number (didn't write down the number) of randomly fuzzed versions.
Each process has a mandatory integrity level associated with it. Presumably each file also has an integrity level associated with it. The idea of integrity levels was promoted by Biba in the 70's and has shown up in some trusted operating systems as a mandatory control. The idea is that a low-integrity entity should not pass information to higher-integrity entities. The example the speakers gave was that IE should run at a low integrity level while other processes run at higher integrity levels. Thus even if malware takes control of IE, it cannot harm the higher-integrity processes. I have lots of questions about this technology, and it will take more than a cursory glance at MSDN to figure out what exactly this means. If this is in fact a mandatory control, it is a huge step, and potentially a quite error-prone one. Trusted OSes have traditionally been quite hard to use because the mandatory controls cause things to break in unexpected ways.
They also talked about service hardening, which seemed to me like one of the most important security features announced with Vista. You can specify the privileges required when registering a service. You can also create a policy to describe the expected network behaviour of the service. This is definitely one of the areas I want to play with when I get Vista running again.
Vista includes all the heap and stack protections that have been floating around. Vista provides Data Execution Prevention (DEP, exposing the no-execute bit), which was introduced in XP SP2. By making the stack no-execute, you thwart a lot of exploits. However, the speakers noted that only system software is protected by DEP by default. A number of programs will break under DEP, including IE and the Sun JVM. Acrobat and Flash used to break, but newer versions will run with DEP enabled. A program such as a Java JIT that creates code in data memory must use VirtualProtect calls to tell DEP that it is OK to execute a subsegment of data. Though wouldn't a savvy exploit add such a call too?
In Vista, the heap has been cleaned up to make some of the heap exploits more difficult. The array of free lists is removed, and canaries are added to detect corruption.
Image randomization has been added. At boot time, one of 256 image memory layouts is selected. Plus, when a program is executed, the stack and heap bases are randomized.
All in all, it sounds like many smart people have been working on Vista. While it may not be perfect, it sounds like they have made many security advances. I've been reading through Peter Gutmann's review of their DRM technology, though, and that makes me feel much less hopeful.
Notes on RSA Conference (Dr. H, 2007-02-12)
I just returned from a couple days at the RSA Conference. I went with the primary goal of looking at the exhibitors to get some feeling for what is going on in the security marketplace. However, I didn't read the schedule closely enough to see that the exhibition's last day was Thursday, so I only got a couple hours on Thursday to walk the floor. I did get to hear a couple of interesting talks Thursday and Friday on the developer's track, which was a nice surprise.
My high level summary of the exhibition hall was: NAC (Network Admission Control), Security Compliance, and Identity Management.
My previous experience with NAC and its ilk was from Cisco and Microsoft (NAP), where it was a means of quarantining machines that did not match your software version and patch-level requirements. For both the Cisco and Microsoft solutions, a client agent had to be installed, and the only client agent available was for Windows. Some of the vendors I talked with claimed that a client agent was desirable but not necessary, but non-Windows support even in the agentless case seemed to be lagging. This is probably a reasonable decision in the enterprise space, where most laptops and desktops are running Windows. Many of these vendors seem to have stopgap solutions in the Windows space until Vista takes off. Vista has NAP/NAC protection built in.
However, other vendors seem to have grown the boundaries of NAC quite a bit. In particular, it looks like the identity management people have adapted the NAC label to describe how they can download personalized policy to network enforcement points.
I was particularly looking for security compliance vendors to get a feel for the tools in that space. There were quite a few companies with products that ranged from basic asset management to risk analysis frameworks to event and configuration integration. The advent of additional regulations really seems to have promoted action in this space. Most of these vendors say something about connecting the policy to your implementation, but it was hard to see exactly how this is done. I need to dig through my bag of literature and do some web surfing to get a better idea of what the products really do with respect to policy.
Finally, there were a number of identity management companies. For the most part they are nicer AAA (Authentication, Authorization, and Auditing) implementations which communicate with enforcement points using RADIUS. I was familiar with Cisco's Access Control Server (ACS), and there is certainly room for growth in this space. As addresses become more dynamic through DHCP and people become more mobile with laptops, tracking network policy through IP addresses becomes an ever cruder approximation. By authenticating people, and then configuring authorization decisions on the enforcement points based on the person, you get a pretty good approximation of user-based policy. As I recall, there were some rough edges with how the enforcing devices could actually deal with the authorization, but hopefully that is being addressed too.
The big players were there too, but I spent most of my time at the smaller booths. Intrusion Prevention Systems (IPS) seemed to be prominently displayed by the big network security players. Checkpoint had just acquired a new IPS company. One smaller vendor, Arxceo Corp, had some interesting IPS products. Evidently, they found a way to do a non-signature-based approach to IPS. According to the fellow I spoke with, they get a lot of leverage from fingerprinting the source ports, so they are more quickly able to correlate incoming and outgoing traffic and thus use statistical techniques to narrow in on anomalous traffic more quickly. They were handing out an SC Magazine review in which they won best buy. Their submission was the Ally ip100, about the size and shape of a fancy paint scraper. It was running against other products an order of magnitude or more larger in cost and size and evidently worked very well. It is heartening to hear that a statistics-based approach finally works.
The Netscreen/Juniper person mentioned that they were going to put some of the virus-scanning technology on box. Traditionally, firewalls route virus scanning, URL filtering, etc. off box to a partner's solution. By licensing the software and keeping the analysis on box, Netscreen/Juniper should see a significant performance improvement.
One other interesting bit of technology was from Coretrace. They are selling a box-and-software system that enables you to completely lock down desktop/laptop machines (presumably running Windows) and then centrally configure and manage them from their appliance. It was developed by one of the main architects of NetRanger (acquired by Cisco in '98? to be the basis of Cisco's IDS solution). They install a special driver (presumably a filesystem shim) to intercept all file requests and prohibit file changes even if you are running in the Administrators group. The idea is that you create a golden image and then push all changes from the management platform. Seems like a good idea in the Windows XP world. Maybe this is not essential in a Windows Vista world; we'll see. Also, presumably, you must make some portion of the filesystem writable; otherwise, the computer is not of much use to the end user. I haven't spent the time to convince myself that you can lock down enough to really be safe. Finally, couldn't you achieve similar results just by banning people from the Administrators group? I'm not sure, but this does seem like an interesting idea.