If you read up on security testing, one of the reasons given for why testing security is so hard is that security is a non-functional attribute. When writing a “traditional” test suite, you are testing the presence of a functional feature, e.g. the login widget or the client/server communication protocol. The test designer can look at the functional specification and generate test cases to evaluate whether each aspect of the product is working as designed. A given test case is very concrete, with easy-to-determine pass/fail conditions. For example, the functional specification might say that the login widget should only accept user names made of mixed-case alphanumeric characters. In this case the test suite might have cases that try “good” login names of various lengths and login names with bad characters of various lengths. You cannot exhaustively test all possible inputs, but with fuzzing and other statistical techniques you can cover quite a bit of the space.
Some product traits are not so concretely defined, such as security, performance, and reliability. These traits are usually called non-functional characteristics, and they are harder for the test suite designer to address systematically. The top-level security requirement is that the product/system operates securely. So what does that mean? Hopefully, there is a security architecture that drills down and identifies how security affects the design. At least then there are functional aspects of the security implementation that can be tested, e.g., functional testing of the user authentication system or the link encryption mechanism.
However, while you can approximate testing the security of the system by testing the functional aspects of the system design, there is still a big space left for negative testing. The product should operate within a security policy that defines secure and insecure states. If the system starts in a secure state, it should only ever transition into other secure states. Presumably, an attacker (or other user playing outside the rules) will try to use the system outside of how it was designed and communicated via the functional specification. Here again, there may be directed statistical techniques that can push the system through a wide variety of states. Threat analysis can also help the test designer look at the system in non-standard ways.
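The secure-state idea can be sketched as a toy state machine. The states, transitions, and policy labels below are entirely my own illustration, not from any real system: the check walks every state reachable from a secure starting point and verifies that no insecure state can be reached.

```python
from collections import deque

# Toy model: system states labeled secure/insecure by a policy.
SECURE = {"logged_out", "logged_in", "locked"}
INSECURE = {"session_hijacked"}

# transitions[state] = states reachable in one step (invented example).
transitions = {
    "logged_out": {"logged_in"},
    "logged_in": {"logged_out", "locked"},
    "locked": {"logged_out"},
    "session_hijacked": {"logged_in"},
}

def reachable(start: str) -> set:
    """All states reachable from `start` via breadth-first search."""
    seen, frontier = {start}, deque([start])
    while frontier:
        for nxt in transitions.get(frontier.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Policy check: starting from any secure state, the system never
# transitions into an insecure state.
for start in SECURE:
    assert reachable(start).isdisjoint(INSECURE), start
```

An attacker's job, in these terms, is to find a transition the model forgot, i.e. an edge that is present in the real system but absent from the specification the tests were built from.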
Software testing is a form of system validation. The term auditing is generally used when testing a particular system installation. With auditing, the audit team is responsible for determining whether the system is working as desired. In this case, the auditor is working from an organization’s security policy rather than a product functional specification. But again, the cases of functional and non-functional features come into play.
In my current work, we are building tools that use formal network operation specifications, derived from the organization’s network security policy, to determine whether a security configuration is operating within spec. Originally, I thought the fact that security is a non-functional system attribute made this validation “more difficult”, but in working through the issues in this post, I see that what we are doing is validating a functional approximation of the non-functional security policy. So while security is more concerned with negative results (blocked traffic) than standard network flow engineering is, the type of validation is the same.
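As a rough sketch of that kind of configuration check, the snippet below tests a firewall rule set against both positive requirements (traffic that must flow) and negative ones (traffic that must be blocked). The rule format, addresses, and requirement lists are hypothetical, not our actual tooling.

```python
from ipaddress import ip_address, ip_network

# Invented rule set: first match wins, default deny.
rules = [
    ("allow", ip_network("10.0.0.0/24"), 443),
    ("deny",  ip_network("10.0.0.0/8"),  22),
]

def decide(src: str, port: int) -> str:
    """Return the action the rule set takes for a (source, port) flow."""
    addr = ip_address(src)
    for action, net, p in rules:
        if addr in net and port == p:
            return action
    return "deny"  # default deny

# Positive spec: flows the policy requires to be allowed.
must_allow = [("10.0.0.5", 443)]
# Negative spec: flows the policy requires to be blocked.
must_deny = [("10.1.2.3", 22), ("192.168.1.9", 443)]

assert all(decide(s, p) == "allow" for s, p in must_allow)
assert all(decide(s, p) == "deny" for s, p in must_deny)
```

The negative requirements are the security-flavored part: they encode what must *not* happen, yet once written down they are validated exactly like any functional requirement.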
To really consider the unbounded security auditing problem, you need to consider how to question the system security model itself. In many ways, this is the outside view used by penetration testers, who try to exploit flaws in the infrastructure to move the system into an insecure state.