This is the 7th part on my CISSP Reloaded where I am revisiting the 10 CISSP domains I studied for many years ago to see what has changed and how much of it I have retained as well as adding in my own personal thoughts, experiences and rambles into the mix. Read the other domains here: (Domain 1) (Domain 2) (Domain 3) (Domain 4) (Domain 5) (Domain 6)
If you’re a religious person, you will believe in a higher being and hold onto the notion of intelligent design. If you don’t, then you will believe in the conditions being right to allow a single-cell organism to evolve over time into modern life as we know it today.
Secure applications aren’t the result of evolution or chance conditions coming together. Secure applications are only created with a definite degree of intelligent design. You, as the security person, are responsible for providing that intelligent design in the application or system being developed.
With this in mind, we dive into the domain of applications and systems development, only to crack our heads open on the tiles and lie face first in an inch-deep pool which has turned a crimson red from the blood pouring out of our craniums.
Why do I say this? Well, I’ve been through all of my notes a couple of times and even referred to the handbook I had and at least back in the day, the broad areas that were covered by this domain were:
Software development lifecycle processes such as the waterfall model or spiral model.
Software process capability maturity model
Artificial intelligence systems
None of these really make much sense from a real world perspective, at least not in the way my notes are written. Which makes this whole domain wrong. I mean really wrong. In Spinal Tap terms, this has a wrong rating all the way up to 11.
So now that I’ve vented about how this domain was (is?) written, I guess I should balance out the Yin with a bit of Yang.
Delving into the depths of secure application design would probably demand a book in itself, one that would go far beyond what a CISSP is required to know, and likely beyond my knowledge too.
First off, I’m not a programmer or developer myself, so we’ll keep the overview at a high level where you can appreciate the concepts well enough to make informed decisions. Some of you reading this may have raised an eyebrow when I mentioned that I’m not a programmer, and now that I’m about to describe some of my views on secure development, the other eyebrow has most likely joined the first. So let’s set the ground rules.
A common argument coming from developers is, “you have no idea how to code even ONE SINGLE LINE, how can you possibly teach me ANYTHING about secure design?”
That is usually accompanied by arm waving, coffee throwing and more colourful language before feet are stomped and doors are slammed and the non-developers are left in the room suffering an awkward silence thinking they deserve to be in basements.
My view is that you don’t need to know programming in order to help ensure secure applications are developed, although it definitely helps to have some understanding. The developers are there to develop, and that’s their skillset. Your job is to be able to understand and articulate what behaviours an application should exhibit in order to be deemed secure. For example, I don’t know what the brake pads on my car look like. If you gave me a pair of brake pads and asked me to replace them, I would feel proud if I were able to even jack the car up and take off a wheel. Other than that I’m pretty useless when it comes to anything to do with cars. However, as a driver of a car, I know how the brakes should behave. I know that when I put my foot on the brake, it should slow the car down, or even bring it to a halt, without making weird sounds or stopping one wheel faster than the other. If I find that the brakes do not slow the car down, then I can call a mechanic up and ask him to come and look at my car’s brakes. The mechanic won’t say to me that I am not qualified to comment on the effectiveness of the brakes simply because I don’t know how they technically work or how to replace them. Rather, he’ll agree that as a user I know what behaviour the brakes should exhibit.
This is not much different from secure application development. You may not know how to code, but you should be aware of the behaviours you want an application to exhibit in order to be secure. It’s a team effort between you and the developers: they have the skill to execute and you provide the guidance needed from a security point of view. Well, in most cases it’s less teamwork and more like a chain gang, where criminals who don’t like each other are forced to work together, and if one slacks they all suffer the punishment.
There are some very good resources online which are infinitely more useful than the CISSP material for learning about secure development. A good place to start is Microsoft’s Security Development Lifecycle http://www.microsoft.com/security/sdl/discover/training.aspx . It’s mature and well established, not to mention free, so it’s worth a look and will save you a lot of the hard work. Also check out Troy Hunt’s blog www.troyhunt.com where he gives a lot of good security guidance, particularly for .NET developers.
Another very good resource is the Security Ninja, aka David Rook (www.securityninja.co.uk). He sums up the reason for creating secure development principles as being “to help developers create applications that are secure and not just built to prevent the current common vulnerabilities.”
He elaborates further with a proverb: “Teach a developer about a vulnerability and he will prevent it, teach him how to develop securely and he will prevent many vulnerabilities.”
This in itself is a profound statement that deserves deep meditative contemplation. In every place I’ve ever worked, when a vulnerability is found in an application, the tester usually documents the fix, such as implementing whitelisting to validate the inputs, and that’s exactly what the developer does without necessarily understanding why it was needed. Of course, when there’s a shiny new application that needs to be developed and deployed on time and on budget, the tester and the developer will say there isn’t enough time to go over these intricate details. But in their minds the tester will be berating the developer’s lack of intelligence, and the developer will mutter under his breath that he simply developed what was asked and, if it was needed, they should have specified it in the requirements.
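To make the whitelisting fix mentioned above concrete, here’s a minimal sketch in Python. The function name and pattern are hypothetical examples of my own, not from any particular application: the idea is that instead of trying to enumerate every possible bad input, you define exactly what good input looks like and reject everything else.

```python
import re

# Hypothetical whitelist: a username may only contain letters, digits and
# underscores, and must be 3-20 characters long. Anything else is rejected.
USERNAME_PATTERN = re.compile(r"[A-Za-z0-9_]{3,20}")

def is_valid_username(value: str) -> bool:
    """Return True only if the entire input matches the whitelist pattern."""
    return bool(USERNAME_PATTERN.fullmatch(value))

print(is_valid_username("alice_99"))                     # a normal username passes
print(is_valid_username("alice'; DROP TABLE users;--"))  # injection attempt fails
```

A developer who understands the principle, rather than just applying the prescribed fix, will recognise that the same approach applies to any untrusted input, not just this one field.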
Which is what makes Rook’s statement so powerful because it tackles the issue at its root. Like Will Smith and Jeff Goldblum in Independence Day flying bravely to the mothership to hit them at their heart. Once the mothership was taken down, all other problems paled in comparison. If you only have time to do one thing to improve the security of your applications, I’d say invest it into developer training. If you can make your developers more security aware, they will be your greatest asset.
So what does this all mean to you as a potential or current CISSP? There’s a lot to take in and many methodologies out there. I like to break secure development down into three broad areas: defining requirements, implementation, and testing.
Imagine you are an aspiring fighter who wants to turn professional. The first thing you need to do is decide which form of fighting you want to get into. It could be boxing, it could be karate or wrestling, or even mixed martial arts. Each of these disciplines has a different set of rules and skills that will need to be developed. Of course, there are some things that are common across all of them. For example, you will need speed, strength and stamina in order to compete at the highest level in any fighting style.
Defining the requirements of an application is similar. There is a broad set of secure standards that apply universally regardless of the type of application you are developing. Then there are specific requirements tuned to that particular application and its functionality. Looking back to domain 1, we need to understand what functions the application will perform, what data it handles, and what its overall risk rating is.
If you were a boxer, you’d focus on defending your head and body because that’s where your opponent will attack, that’s the attack surface. If you’re a kickboxer, then you also have to defend your legs against those debilitating leg kicks so you have a larger attack surface to worry about. Again, applying this to your application you can work out the attack surface so you know where security controls are most needed.
The gist of it is that if you give a developer a wish list of 500 requirements, where half of them may not be applicable and another bulk of them not necessary or not feasible, they will prioritise according to what can be done to deliver on time. Then when you try to block it further down the road, you’ll look like the bad guy, because you are, for not doing your work up front. Define your requirements on a risk-based approach and work with the developers to help them understand the context of your requirements. It really helps if you just go and speak to the developers themselves. Sure, produce a fancy document for the project manager, but make sure the person doing the actual coding is on the same page. Take along a box of chocolates and a replica lightsabre. Win the hearts and minds of developers with your requirements. If they understand, they will believe; if they believe, they will implement regardless of the effort needed.
The scene has been set: the girl has been kidnapped and is being held hostage inside an isolated, fortified palace. You’ve got the team together and even engaged in a training montage. Now’s the time to nut up and implement. From a security perspective you’re probably going to encourage the use of a standard toolset and the re-use of pre-approved modules. It’s pretty straightforward stuff once it’s established. You’ll also get a good idea of the customer journey at this point as it comes to life. It’s usually worthwhile exploring how this works and seeing if there are any privacy concerns or scope changes that may mean additional requirements. Depending on the type of organisation you’re in, it may be tough, especially in large organisations where you have a development team sitting offshore and you struggle to find a suitable time of day to chat. Again, it goes back to the earlier point about training your developers to be security aware, so that they handle any changes to requirements and scope in a secure manner and consult you on any aspects they are unsure of.
So the car has been built and rolls off the production line. It’s got the leather interior and sports a matte black look because someone thought it made it look like the Batmobile. The requirements were perfectly articulated and implementation was flawless. But is it safe to let onto the road? Will it stop in time if a child steps into the road? Is the petrol tank too close to the wheels, and will it spontaneously combust when the car hits 88.8 miles per hour? These are questions we may be pretty confident about… but we wouldn’t bet our lives on it. So we wheel in the crash test dummies.
Or in an IT security context, it’s where we invite specialists to conduct testing and validate the security of the application. This can involve static and dynamic code reviews, the use of automated tools, or manual hacks. It’s where a lot of security people like spending time because it’s cool to try to break stuff. But there’s usually little to no time or budget left at this stage to fix anything but the easiest of vulnerabilities, and you’ll have business owners willing to accept a whole bunch of risks just to get their application deployed on time. You can slam the desk and mutter how incompetent the organisation is, but taking the business impact into consideration, a delay of a couple of weeks to implement a security fix usually costs more than they can afford. Hence I’ll reiterate the importance of getting it right up front with developer training and early engagement to get requirements understood.
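One cheap way to shift some of this testing earlier is the “abuse case” test: rather than only checking that valid input works, assert that known-bad payloads are neutralised. The sketch below is a hypothetical illustration (the `validate_comment` function and the payload list are my own inventions, not from any real test suite), showing the shape such a test might take.

```python
import html

def validate_comment(text: str) -> str:
    """Hypothetical sanitiser: cap the length, then escape embedded markup."""
    if len(text) > 280:
        raise ValueError("comment too long")
    return html.escape(text)  # neutralise < > & " ' so markup can't execute

# Abuse cases: payloads that must NOT survive sanitisation intact.
malicious_payloads = [
    "<script>alert(1)</script>",
    "<img src=x onerror=alert(1)>",
]

for payload in malicious_payloads:
    escaped = validate_comment(payload)
    assert "<" not in escaped, f"raw markup survived: {escaped}"
    print("neutralised:", escaped)
```

Tests like this can run in the normal build alongside functional tests, which means the most common vulnerabilities get caught long before the end-of-project security review, when there is still time and budget to fix them.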
It’s also a good idea to test your application in the live(ish) environment so you can see what vulnerabilities it inherits from the infrastructure it sits on and how well it plays with other components.
Once all of that is done and you’ve signed it off, it’s plain sailing for a little while. Well, make sure you know which version of the code you’ve signed off as secure. Put a note in the diary to come back and review it again within a year, or whatever timeframe is appropriate, or whenever an update is made. Like ancient works of art and great monuments: building them was no small feat, but maintaining them can be a real nightmare. Your applications and systems may have stood proudly the day they were deployed in your data centre. A beacon of light and hope for a new company that was boldly going where no company had gone before. It was going to move your organisation to the paperless office, be 99.7% greener than before, and bring in efficiencies that would save 1.5m per year; but what is it today? Is it still well maintained and secure? Or is it a decayed old building, infested with cockroaches and a hazard to anyone who wanders in?
And there you have it: secure application and systems development, in a rather ‘meh’ domain. Can’t say there’s much else I can write here without plagiarising other websites, of which there are loads. The CISSP course as it was, and I, cannot help you beyond this point… so I’ll leave you with an XKCD comic that explains it all.
****** EDIT ******
It’s been pointed out to me by a couple of people that ISC2 recognised the depth of the domain and hence created the CSSLP (Certified Secure Software Lifecycle Professional), which goes into application security in great detail. I’m not pretending other certifications from ISC2 or other organisations don’t exist; this series of posts is intended simply to reflect what I covered as part of my CISSP nearly a decade ago.