Last week, Thom Langford wrote a post on his RSA 2019 itinerary, which featured some of the sessions he’d shortlisted to visit.

I found it to be a useful list, and thought I’d try compiling my list of vendors I’d put on my shortlist to find out more about.

My employer

AT&T Business – 5545 North Expo
AlienVault – 1235 South Expo

I’d have to start out with the booths that I’ll inevitable be spending most of my time, the AlienVault and AT&T Business ones. Come over, say hello, check out some of the products on offer, and grab some swag.

User Awareness

Elevate Security – 31 Early Stage Expo
Habitu8 – 45 Early Stage Expo

User awareness is hot, and it makes sense to check out some of the players in the market. While there are many established vendors in this space, like KnowBe4, with comprehensive offerings, I'm interested in finding out more from newer players such as Elevate Security, which was co-founded by Masha Sedova, who really knows the space having previously worked at Salesforce.

It’s no secret that I’m a fan of videos for user awareness, and Habitu8 seems to have a whole line of fun videos – maybe they’ll be playing some at the show.

Threat Detection

Thinkst Canary – 6487 North Expo
SafeGuard Cyber – 3111 South Expo

Thinkst probably needs no introduction, but if you're not familiar with the product, you should definitely check it out. The beauty is in its simplicity: deploy a small hardware device, make it appear to be whatever you want, and wait for it to sing. It's a low-noise but highly credible detection system that I would recommend most companies invest in.
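The canary concept itself (this is only an illustration of the idea, not Thinkst's actual implementation) can be sketched in a few lines of Python: open a fake service on a port nothing legitimate should ever touch, and treat any connection at all as a high-confidence alert.

```python
import socket

def make_canary(host="127.0.0.1", port=2222):
    """Open a listening socket that impersonates a service nobody should use."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    return srv

def wait_for_touch(srv, banner=b"SSH-2.0-OpenSSH_7.4\r\n"):
    """Block until anything connects; a single touch is the alarm going off."""
    conn, addr = srv.accept()
    conn.sendall(banner)   # play along briefly so the probe looks like a real daemon
    conn.close()
    return addr            # the source of the touch is what you alert on
```

Because nothing legitimate ever talks to the canary, there are essentially no false positives, which is exactly why this style of detection is so low-noise.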

SafeGuard Cyber is another company approaching things from a different angle. In this case, it has a focus on social media channels, with particular emphasis on brand and reputation risk, VIP exposure, data loss, legal exposure, and so forth. It's an area that will definitely get more focus in the future. (Full disclosure: I'm on the advisory board for SafeGuard Cyber.)

API Security

ArecaBay Inc – 10 Early Stage Expo

How important is API security? Why would anyone want API security? Well, that’s what I’ll want to find out from this company.

Travel Security

WifiWall – 3 Early Stage Expo

You’re on the move, you need to connect to public wifi, but you don’t have a VPN, what do you do?

I have no idea how good this product is or how it works, but I’m a simple person and I see a hardware device, and am intrigued.

Coolest name

Secret Double Octopus – 753 South Hall

Is there a security company with a cooler name than Secret Double Octopus? Even if you aren’t looking to get rid of all your passwords, it’s probably worth going and grabbing a business card from one of the people that work there, just to see what kind of job titles they have!

Not exhibiting


These two companies aren't exhibiting at RSA, but some of their teams may be attending and would be worth catching up with. My old 451 colleague (old as in we used to work together, not that he's old) Adrian Sanabria is now at NopSec, and he loves the opportunity to buy people breakfast, lunch, dinner, or tea and discuss vulnerabilities.

I don’t know if anyone from Secfense will be attending, but I kinda like the team and their approach to try and democratise 2FA across all apps and functions.

The people


Of course, technology aside, it's the people that make any event or conference, and RSA does attract a lot of people. It doesn't matter whether they work for a competitor, are unemployed, a student, or have over 30 years in the industry, it's a great time to network and make some new acquaintances. I know I've met some wonderful people at conferences over the years who have turned into lifelong friends, colleagues, and even bosses.

Disclaimer part 1: I know I've left many companies off the list. Don't get mad; I wanted to keep this a short post and focus mainly on smaller companies that piqued my interest.

Disclaimer part 2: I also didn't want to annoy the powers that be by naming too many direct competitors.

This tweet by Member of Parliament Nadine Dorries was enough to significantly raise the blood pressure of half the infosec professionals in the world.

After getting a bit of ‘stick’, the MP tried to defuse the situation by claiming she was a mere back bench MP – an insignificant minion.

Some other MPs jumped in to say it's a common occurrence and that people are blowing it up into a major issue.

Maybe five or ten years ago this wouldn't have been an issue at all. But the world is very different today – attacks are very different, and chaining together a series of attacks from even a compromised “low-level” employee isn't all that difficult. Especially as MPs make an attractive target for unfriendly foreign agencies.

Like most things in life, nothing is ever black and white. Password sharing does occur, despite there being technology solutions in place to facilitate sharing in a manner whereby accountability remains. It happens in most companies. But that’s not quite what I take exception to here.

The attitude displayed by MPs is what is concerning. The casual brushing off, as if it is something that should be accepted.

It’s a bit like using a mobile phone while driving, or driving over the speed limit… or using a mobile phone while driving over the speed limit. Even though it puts lives at risk, most people have done it at some point. Completely eradicating such behaviour is impossible, but you wouldn’t accept the excuse of, “Well everybody else does it” especially if it came from a bus driver.

Similarly, society shouldn’t be willing to accept the risky behaviour displayed by people in government or other sensitive roles.

But maybe that is where infosec professionals can do a better job of educating the masses. Perhaps only when risky behaviour is shunned at a societal level – like the dirty looks you get for not separating your green from general waste – will people's attitudes change.


I have Amazon Prime, I quite like their shows, and whenever I have some time to kill I’ll watch an episode or 3.

A couple of weeks ago, I thought it would be a good idea to install the official Amazon video app on my Android device, so that I could download episodes and watch them when travelling. I previously had it on my iPad, so knew it worked well.

However, I wasn’t able to find the Amazon video app in the Google Play store. Perplexed, I went hunting, and quickly found that Amazon does indeed have an app for Android, only it isn’t on the official store.

Amazon helpfully has instructions on how to install the app on your Android phone or tablet from its own Amazon Appstore.




For those of you playing along at home, you may have spotted the obvious flaw in this approach.

To install the app, Amazon is advising you to “Go to your device’s Settings, select Security or Applications (depending on device), and then check the Unknown Sources box.”

But there are others

Unfortunately, this isn’t isolated to Amazon. Ken Munro pointed out on Twitter that the National Lottery also asks you to download its app from a dark alley in the back.



Although, to its credit, the National Lottery does mention to, “Remember to change your security settings back to your preferred choice.”

Quentyn Taylor pointed out that Samsung also does similar.

So what’s the big deal?

The official Google Play store isn’t a complete safe haven. Malicious apps have bypassed the security checks and ended up in the store many times. And companies like Amazon, the National Lottery, or Samsung aren’t fly-by-night outfits that will deliberately push out insecure apps; so what’s the harm in downloading the app and switching the security setting back?

For most users that aren’t technically savvy, the ability for their Android device to block downloads from unknown sources is there to prevent them from accidentally downloading malicious software. – Strike one.

The security industry has spent a great deal of time and effort to educate users in the dangers of downloading unknown files from untrusted sources, and this approach undermines a lot of those efforts. – Strike two.

Normalising such actions enables attackers to try similar tactics. After all, if companies like Amazon have legitimised the approach of turning off security settings and downloading apps from their own environments, it is something that any company could emulate. – Strike three.

The reality is that convenience trumps security most of the time. Users will intentionally or accidentally bypass security controls to get the app of their choosing, often leaving themselves vulnerable in the process. Which is why it’s important that manufacturers, providers, app stores, and everyone in between work together to help deliver a secure experience to users, instead of elements working against each other.

European startup CLTRe founded by Kai Roer has spent the last couple of years examining the security awareness and user behaviour problem through the lens of security culture.

Based on findings over the course of 2016, CLTRe has produced its first annual Security Culture report, co-written by Roer and Gregor Petric, Ph.D., an Associate Professor of Social Informatics and Chair of the Center for Methodology and Informatics at the Faculty of Social Sciences, University of Ljubljana (Slovenia).

Many existing security awareness reports typically measure and report on a few basic metrics – often based around the number of phishing emails users click on.

It is here that the CLTRe report differentiates itself, by delving into statistics and metrics to provide a view that is probably the first of its kind. It takes into consideration not just behaviours, but adds insights to the behaviours based on gender, geographic location, age, duration of service, or acceptance of norms across seven dimensions.

The report has insightful nuggets of information scattered throughout, such as an examination of the cultural differences across various industries in Norway and Sweden.


The report explains at length why security culture metrics matter and the value they provide. It states that similar to technical controls, security culture must be measured in order to understand and measure change.

For example, reporting the number of clicks on a phishing exercise is useful but has its limits. Those metrics do not provide the motivations or drivers for the users.



For a first effort, CLTRe has produced a great report with very detailed insights. It’s not light reading, and some segments feel quite academic in nature. That’s not a knock on the report; the rigour is needed to elevate the discussion to a higher level.

For next year’s report, I’d like to see the inclusion of case studies or quotes from companies that have been measuring their security culture and how they have used the information to improve it.

Check out the full report here (registration needed).

Let Kai and the CLTRe team know what you think: Click to tweet:

Great job @getcltre @kairoer on the human factor report. (via @J4vv4D)

Why did you write this report? @kairoer @getcltre (via @j4vv4d) 

I’ve followed Scott Helme’s work for a while now and have been impressed with his approach. So I was interested to find out that he had teamed up with BBC Click and Prof Alan Woodward to comprehensively dismantle a vendor’s claim to total security. Scott has published the whole story on his blog, and the BBC Click episode is live.

This was a well-researched and executed piece, but let’s take a step back and look at the wider picture and what this means for vendor-research relations.

So, I felt it was a good time to grab some time with Scott to seek his opinions on some of the questions that came to mind.

One of the first things that strikes me about Scott is his measured and calm demeanour. He has the look of a chess master who is quietly confident, knowing he’s always seven moves ahead. The second thing I note is that I really can’t gauge how old he is. I think it’s one of the things that happens as you grow older; I find it increasingly difficult to differentiate between someone who is 20 or 35. They all just look “young” to me. So I look for clues such as ages of children, year of graduation, or years of experience to try and make an educated guess.

What is secure?

Not wanting to waste time with warm-up questions, I wanted to get to the heart of the matter. There is no benchmark or agreed standard for when it’s appropriate to use the word secure, or to claim a product is secure. The fact of the matter is that, as far as technology goes, nothing is ever truly secure. So does that mean no one should ever use the word secure at all?

On one hand, one wants to avoid going down the route of legislation, or having stringent criteria in an industry that is constantly in a state of flux. On the other hand, Scott said, “We don’t see many car manufacturers rocking up with the safest car in the world that has no airbags or brakes.”

Which is a fair comment, but it is a lot easier for a lay person to see and understand security in physical products than in software.

The manufacturer’s dilemma
So what is a security startup to do? Nearly every security product has had vulnerabilities that needed patching – not even the largest of vendors are free of bugs.

Open source products, where the code is available for all to audit, are no exception, with products such as OpenSSL having had long-standing vulnerabilities. Given the landscape, what’s the best way to approach this?

Scott gives a half smile, indicating it’s something he may have been asked many times. He told me he believes that the more scrutiny a device or product has, the more likely you are to become aware of issues. “Researchers and bug bounties are your friend. They can’t replace traditional penetration testing and other standards or compliance requirements, but they sure add a lot of extra coverage, often for not a lot of cash.”

It’s somewhat of a balancing act. After all, security costs time and money to implement properly. Far too many startups are caught up in trying to prove that their product serves a viable market and that there is demand before pouring resources into security.

But is relying on researchers to find vulnerabilities a scalable model? There are only so many Scotts in the world, and researchers will be drawn to particular products out of personal curiosity, or where their expertise lies. So many products simply slip beneath the radar. The number of secure products being released outpaces the time and effort needed to properly validate their capabilities.

Scott agrees with the sentiment, and states that it ties into the issue of lack of standards. “Right now there is no regulation or standards, so researchers are all we have. Even basic standards would start to raise that minimum bar and begin the process of filtering out the crud. I do it because I feel I can make a difference, I enjoy it and it helps me keep my skills sharp.”

With great power
With time running out, I wanted to go straight for the jugular with my last question.

While one can commend the good work Scott and others do, with this recent release we’ve effectively seen a company torn down. Surely that kind of approach risks scaring other startups away from releasing any product at all?

If I were considering starting up a secure product, I’d be scared that you, or other researchers, could shut my business down. That would leave me with the choice of either not producing anything at all, or trying to bribe you up front. And while you may be above bribery, I can’t say that for every researcher out there.

Despite my Jeremy-Paxman style attempt at getting a rise out of Scott, he remained patient with me.

“I certainly hope it wouldn’t scare them from releasing a product, but perhaps consider engaging a reputable security firm to analyse the product prior to release. I feel that’s the minimum any responsible company should be doing anyway. They can also engage the community with a bounty program to get private feedback prior to release too. If someone plans to bribe you, I guess you can’t do much about that, except beat them to the punch and fix the issues first. The internet is full of bad people :(”

The internet is indeed full of bad people, you cannot argue with that.

The prestigious European Security Blogger Awards are upon us. For those unfamiliar with them, it’s an award ceremony for bloggers who specialise in security and reside in Europe – at least, that’s what I hope it means.

I am fortunate enough to have made it into the finals in five of the nine categories – which in itself feels like a great achievement considering how many super-awesome and cool security bloggers there are scattered around Europe. The categories I’m in are:

Best Security Video Blog
Most entertaining blog
Most educational blog
Best EU Security Tweeter
Grand prix prize for best overall security blog

Anyway, it would be a shame to let your vote go to waste so head over to  and make your vote count.

You’re the new guy in the security ops team. They’re giving you training and have put you on a very crucial and important job… monitoring. You’ll be told how important the job is and how essential it is that it’s done correctly to ensure the ongoing safety of the company. But you notice that nobody really shows any interest in doing it. There are two reasons for this. Firstly, it’s usually a job they don’t really understand how to do; but secondly, and more crucially, even if they do understand how it works, it makes watching grass grow an extreme sport in comparison.

Having been subjected to monitoring of all kinds early in my career, I developed a set of techniques which can be used to give the impression you’re a monitoring guru:
1. The Blink and Chin Rub:
Blink frequently and rub your chin. This tried and tested technique gives the impression that you’re deep in thought, analyzing each packet individually. Having a couple of crushed cans of Red Bull or Coke nearby will give the impression you’re a man on the edge, and very few people will interrupt or ignore you. Every now and then, let off a low-level grunt.
2. Look for Key Values and Strings
A quick search for key strings and values will save you trawling through gigs’ worth of logs. Identify the key ones first and type them up separately. That way, if anyone looks at what you’re doing, they will be impressed by your apparent ability to detect patterns. At the end of the day, simply delete it all and sound frustrated whilst muttering “false positive”; bang the table for dramatic effect before grabbing your coat and heading off home.
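If you actually wanted to do this properly rather than theatrically, a few lines of Python will do the string-hunting for you (the keyword list here is just an illustrative example, not a recommended watchlist):

```python
import re

# Illustrative keywords only; a real watchlist depends on your environment.
SUSPICIOUS = ("failed login", "denied", "segfault", "unauthorized")

def interesting_lines(log_text, keywords=SUSPICIOUS):
    """Return (line_number, line) pairs containing any keyword, case-insensitively."""
    pattern = re.compile("|".join(re.escape(k) for k in keywords), re.IGNORECASE)
    return [(n, line)
            for n, line in enumerate(log_text.splitlines(), 1)
            if pattern.search(line)]
```

Feed it the raw log text and you get back just the lines worth a second look, with their line numbers for when you need to sound precise.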
3. Be Vague When Questioned
When your boss asks for your thoughts on some anomalous network traffic, you need to tread carefully. Deliver a vague opinion, and add that you’ve been analyzing a list of key values and strings to get to the root cause (see 2). For good measure, ask a question which directs the conversation away from your view. Something like, “What made you think of that?” would be perfect. It gives the boss an opportunity to wax lyrical about how they arrived at a conclusion.
4. Blame A.P.T.
Should the unthinkable happen on your watch, blame it on an A.P.T., or on a state-sponsored and highly sophisticated attack that evaded all your detection controls. Turn it around on your boss and ask how you’re supposed to keep track of everything with such outdated hardware and software when the enemy has access to unlimited funds. If you’re lucky, you could end up with your own personal SOC being commissioned.
5. Harass an ISP
During a quiet patch people will begin to get suspicious. So to shake things up, send a passive-aggressive email to a random ISP every few weeks threatening them with legal action unless they block the state-sponsored APTers from constantly bombarding your network. When a complaint is filed with your CEO, simply point to the previous breach and say you suspect the ISP to be compromised. Careful how you balance this because you don’t want to end up looking like a crazed conspiracy theorist. Tell them you’ll withdraw the legal threat, but will be “keeping a close eye on them.” No-one will ever suspect you’ve got no idea how the IDS logs work.

For all the talk about it being an echo chamber and the like, I’ve met a ton of people in security whom I otherwise wouldn’t have. As I was pondering this over breakfast one morning, I came to the conclusion that I end up grouping my infosec friends into different categories. They probably look a bit like this:

Level 0 – These are your closest security friends. They are the guys who you look out for and they look out for you. If you see a bug in their code, you’ll sort it out for them. When they call you up at 3am because they need help with a security strategy presentation, you’ll stay up with them all night working on it. Whenever you are stuck for something, you’ll turn to them for help. They’re your best teacher and most annoying student rolled into one. You know how many kids each of you has, their ages and names. You can never get rid of them and they can’t get rid of you. The amazing thing is that you may never have met some of these people in real life.

Level 1 – These are best friends. You hang out with them, connect with them on every social media channel. Bond with them on a personal level and hear out their problems. When you need a LinkedIn reference or someone to endorse your CISSP you’ll go to them. If there’s a job going in their team, they’ll do what they can to get you on their team. They are there for you when you are stuck pen testing a website, but won’t do much beyond getting pizza and running Nmap.

Level 2 – These are more friends of friends. You’ll meet them at conferences and local chapter meet-ups. Sometimes they may move up the ranks and get promoted to a level 1 friend or maybe not. They’ll retweet something witty you say and will like your blog posts. They’ll meet you for lunch but never offer to go any more than halves with you on the bill.

Level 3 – These are those security people you have to be friends with. Normally these are work colleagues. You learn nothing from them and often put up with their moaning and spreading of office gossip. Every morning whilst going into work you pray they will be sick and you don’t have to see them.

Level 4 – Anyone who follows you on social media like Twitter or Facebook that doesn’t fall into any of the other categories. They are the trolls who follow you and make smartass comments whenever they can. They contribute nothing positive to security, yet linger around like a bad smell. Secretly everyone hopes they fall down an open elevator shaft onto some bullets.

This is the 8th part of my CISSP Reloaded series, where I am revisiting the 10 CISSP domains I studied many years ago to see what has changed and how much of it I have retained, as well as adding my own personal thoughts, experiences, and rambles into the mix. Read the other domains here: (Domain 1) (Domain 2) (Domain 3) (Domain 4) (Domain 5) (Domain 6) (Domain 7)

Do you ever watch those life insurance adverts where they show a family playing happily, and then in comes the deep, grim voiceover that somberly asks, “Who will look after them should the worst happen?” It doesn’t say death by heart attack, or falling from a balcony to your doom, or even death by PowerPoint. But we all know that’s what they mean.

Which is the exact same voice that goes through my head any time I’ve had to sit in on a Business Continuity or Disaster Recovery planning session. I possibly even end up speaking like that guy. “So tell me, how will you run your website… should the worst happen?”

“Who will answer your customers phone calls…. Should the worst happen?”

“How will we operate the projector… should the worst happen?”

Now you’re also going to have that stuck in your head and it will come out at the most inappropriate time causing you to chuckle to yourself while everyone around you thinks you’ve gone a bit loopy. I like how it’s worded in a non-offensive and open way, to bring about unlimited possibilities. It’s like a game of inception. You don’t want to say the exact words because you’ll look like a doomsday hater, but you want to plant the seed of doubt in their minds. Get them thinking.

If there’s nothing else you take away from this domain, take this lesson. Forget the technology, forget your load balancers and your availability criteria, when you talk to the business about continuity and disaster recovery, your job is to leave them feeling as paranoid as an A-List celebrity out shopping without wearing a pair of sunglasses 3 sizes too big to conceal their identities. I find that parents of children aged between 3-8 already have the necessary skills needed to be effective at this. They are used to giving out these subliminal messages to their kids. “you better clean your room right now young man, or you’re going to be in trouble.” Of course, the term ‘trouble’ is never really quantified. The kid usually conjures up some wild imaginative thing such as their parent will turn into a werewolf and eat them while they sleep. Whereas the parent is desperately hoping the child does what they’re told or they’ll be forced to make up some laughable punishment, like the naughty step.

Another of my favourite parenting techniques has to be the counting to 3 method. It’s where the parent tells the child to do something and the child stubbornly refuses. So the parent slowly, but very firmly starts to count. It becomes a battle of nerves at that point like a spaghetti Western standoff, the tension mounts in the room. The gauntlet has been thrown down by the parent, they’ve sent out a clear message that insubordination will not be tolerated. For a while the child resists…. Then the parent says “2”. At this stage time slows down, an eerie silence takes over, the child can hear the clock ticking, becomes aware of their own breathing and heightened pulse rate. The parent raises an eyebrow as if to indicate that if they reach “3”, the floodgates of hell will be opened up and demons will emerge from every corner and rip the child limb from limb.

So the child gives in, stomps their feet and does what they were asked to do. The parent sighs a giant sigh of relief knowing that if they ever reached 3, their whole game would be up. The bluff is saved to be repeated another day.

I have no idea how I’ve ended up talking about parenting techniques – this is the very reason I’ve been told I desperately need an editor, so they can take out all my crap. But that would probably turn a 4000 word chapter into 50 words.

Business Continuity Planning and Disaster recovery are usually uttered under the same breath and used somewhat interchangeably, but do they mean the same thing? Well they’re a bit like sisters who are born a year apart. They’re not quite twins, but the similarities are undeniable. They have the same mannerisms, probably share that same awkward snorty laugh and are the same build. But when you look closer you’ll note the differences, like how one has a mole on her left cheek or has 3 piercings in her ear whereas the other only has 1. This leads to it almost becoming an obsession with you wanting to check the mole out before speaking to either because you want to know exactly who you’re talking to. Which is how you should approach Business Continuity and Disaster Recovery. They’re the two sisters who everyone else mixes up, but you know who’s who because of their moles and piercings.

Sisters who aren’t twins and parenting tips? I swear this is the most messed-up domain I’m writing about. Probably because I’ve got little experience in doing a lot of Business Continuity or Disaster Recovery planning. I usually end up asking a project if they’ve considered it; they grunt and mutter something about having bought two servers and installed one in each of the data centres, which are 50 miles apart, and I usually nod, tick it off on my checklist, and make a mental note to go verify the data centres are actually ones the company owns and manages and to validate it on the plan. It would work a whole lot better if I actually made a note of these things on the piece of paper in front of me, because I kind of forget mental notes. That’s the problem with mental notes. Depending on your mental capacity, you can end up forgetting them, or overwriting them with other notes, or even worse, you start doodling on them in your mind. Which is why you should always document your business continuity and disaster recovery plans. The last thing you want is for a tragedy to hit and to have 8 different senior managers in a meeting trying to remember what they agreed would be the best course of action to take in the event of this incident.

Business continuity planning is rather proactive. It’s like taking a first aid kit with you on holiday because you know the kids will inevitably trip over and cut their knees. The first aid kit allows you to take a bit of pleasure in disinfecting the wound while the child wails, and then apply a plaster. After a while, your kid can carry on playing as normal. Or it’s like having a spare wheel in the back of your car. You have a plan that if your car journey is interrupted by a flat tyre, you can change it – or, if you’re like my wife, you phone me up to come and change the wheel. This allows you to continue the car journey, albeit with a slight interruption.

Disaster recovery, as its name suggests, is how you would recover from a real disaster. Like if your kid got taken hostage by flying monkeys who wanted to raise the man-cub as their own and name him Mowgli… or if your car’s engine blew up. These are disasters, and the recovery is usually a reactive process. So you have car breakdown cover, so that a man who kills people on the weekends can drive up in his pickup truck, look around your car, confirm that the engine has blown up, and offer to tow your car to a garage. I’m not sure what kind of plan you’d invoke to get your child back from the flying monkeys though.

Business Continuity Planning

BCP is all about continuing business activities while something disruptive is happening. My CISSP notes break it down into four phases:

1. Scope and Plan Initiation

2. Business impact Assessment

3. Business Continuity Plan Development

4. Plan Approval and Implementation

1. Scope and Plan Initiation

As they say, a journey of a thousand miles begins with a single step. The scope and plan initiation phase is the first step you need to take to create a BCP. To properly scope the plan, it’s important to understand what the company does, which activities are important or not, and which systems are crucial to supporting the important activities. Now, if a company has done a good job of risk assessing and classifying all its assets, then this should be an easy process of simply going through the assets and ticking off the ones that are needed.

Unfortunately, it’s never that easy, so what happens is you end up setting up a working group of well-trained monkeys to go around with checklists, trying to understand what all the assets are and to make sense of how the organization works. This highlights a fundamental disconnect between most security departments and the business. If you ever find yourself doing this activity in a business, stop and ask: if you don’t already know what the business does or what the critical assets are, then how do you know what you’re supposed to be protecting and where your security controls are most needed?

During this phase, a lot of large organisations will set up a committee or two, a steering group, and an advisory board of some sort. Just do what’s right for the organization and plough on.

2. Business Impact Assessment

The BIA is what we should be doing on each asset when it's deployed. But again, we would have either misplaced the record or it would be so out of date that the BCP creation process will drive us to conduct another set of BIAs on each system.

In simple terms, the BIA seeks to answer what the impact to the business will be if a particular system was rendered unavailable. Or if you want to jazz up the words a bit, what would be the impact to the business if a rogue state, sponsored some hacktivists to totally cyber-pwn your box.

There are different templates and complexities of BIAs that different companies adopt. Generally, though, you'll be working with someone in the business to answer the questions. Well-crafted questions will allow you to prioritise the system you're looking at, understand how it operates, and therefore reach some sort of scientific conclusion on what the impact to the business will be should the system become unavailable through any means.

You can then put it into a bucket for how much downtime is acceptable. For example, it could be 1–3 hours, 1 day, 1 week or even 1 month or more that a system can be down before having significant impact. A monthly payroll system may only be used once a month, so if the system is down at the beginning of the month for a few days, there isn't a major impact. Perhaps there are manual workarounds that could be deployed in the interim. On the other hand, there could be an online store that generates over 80% of the company's revenue, so you can't afford much downtime at all.
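The bucketing exercise above can be sketched in a few lines of Python. This is a minimal illustration only; the bucket boundaries and system names are hypothetical examples, not a prescribed standard:

```python
# Hypothetical downtime buckets: (max tolerable hours, recovery priority label).
BUCKETS = [
    (3, "recover within 1-3 hours"),
    (24, "recover within 1 day"),
    (168, "recover within 1 week"),
]

def downtime_bucket(max_tolerable_hours):
    """Map a system's maximum tolerable downtime (hours) to a recovery bucket."""
    for limit, label in BUCKETS:
        if max_tolerable_hours <= limit:
            return label
    return "recover within 1 month or more"

# An online store driving 80% of revenue tolerates almost no downtime;
# a monthly payroll run can wait a few days. Both figures are made up.
systems = {"online-store": 2, "payroll": 72}
priorities = {name: downtime_bucket(hours) for name, hours in systems.items()}
```

The output of a real BIA is essentially this mapping, just arrived at through interviews rather than a dictionary literal.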

The important thing to bear in mind whilst completing a BIA and arriving at a conclusion is that people will answer questions based on their own understanding and view of the world. Once I was speaking to a person about the criticality of a system, to which he responded that it was very very critical and couldn’t be down for any length of time. Probing a bit further, I enquired as to why it was so important, to which he responded that without the system, he wouldn’t be able to do his job. I agreed that him not being able to do his job was most definitely an impact on him as an individual, but what would the impact to the company be? Would other processes fall down, would customers be unable to proceed with decisions, would the CEO be asking for data out of this system? He rather sheepishly replied “no, I don’t suppose anyone else would really notice.”

I tried to assure him that this wasn't about his job security; I was just trying to figure out which systems needed to be recovered first if we invoked business continuity procedures. He nodded, but I don't think he believed me. The next time I walked past his desk he was on a jobs site.

The main point being that simply getting people to fill out a set of BIA questions isn't sufficient on its own. You need to be involved to a degree to ensure the quality of the responses is sufficient.

3. Business Continuity Plan Development

By now you will have collated some information (and saved some money by identifying redundant jobs), so you have enough to start developing the plan. The strategy should encompass everything that may need continuity, so that's not just computing: consider your facilities, people, supplies and other equipment too. Think about things like a transport strike meaning people can't get into the office, or a blocked drain in the building meaning people can't work just because of the smell.

Many years ago, when I first started work, there was a young graduate who had started at the same time as me. A few months into the job, he had an unfortunate accident and passed away. Naturally everyone in the office was upset by this, particularly his team members who worked with him daily. On the day of his funeral, all of his team and most of the rest of the office were out to attend. Sure, it had a big impact on the company that day when nearly all of the IT support function was out, but no manager is going to prevent people from going to a funeral, and even if you could force a few people to stay behind, would they really be in the right state of mind to operate efficiently?

When planning, keep in mind that people won’t always be in the best state of mind to make the most rational decisions. Factor this into your plans.

4. Plan Approval and Implementation

The emotional state of people during a disaster is a good reason why it's so important to have your plan fully documented, because it's a lot easier to agree and document a plan of action in advance. What you do need to ensure, though, is that the documented plan is approved at the right level. Nothing is more fun than having a documented plan that is ignored by some big chief who wants to play hero during an incident.

Also make sure that people are aware of the plan. If you've left the organization or are on holiday on the other side of the world when someone needs to use the plan, it's no good if they can't find it. But don't make it publicly available for anyone to pick up and read. After all, it could contain sensitive information about your company, its assets and other proprietary information.

Finally, keep the plan up to date. Why do you have a plan that talks about using the Windows 95 recovery disk? Yes, I actually read that in a plan… in 2008!

Disaster Recovery Planning

DR plans are for when things really go bad. I'm not talking about a blocked sewage pipe, I'm talking about every single pipe getting blocked and spontaneously bursting, thus flooding your entire building. It's Armageddon, a realization of those scenarios you see in Hollywood movies but thought could never happen to you.

I find myself talking in the voice of the movie trailer guy as I write this.

Where DR plans differ from BCP is that with DR planning, you're looking to set up a framework, or a method, for how the company can effectively make decisions in a logical way should the worst happen.

In essence, a DR plan will seek to minimize any decision-making required by staff during a disaster. During an event, people may be emotional, worried, complacent, or just curious as to what's going on, and hence distracted from what they need to do. It's not the best time to expect them to be making strategic decisions. Having people know what they are supposed to be doing will hopefully protect the organization from major failure by minimizing the risk from delays in recovering from an incident.

The planning process is similar to the BCP process. Assuming you've already undertaken BC planning, you'll have the BIAs to hand, so you'll start by defining what you need to do for the business to recover. A lot of the material you will read goes into the merits of having mutual aid agreements with other companies whose facilities you could share (and vice versa) if the need arises, or having your own hot, warm or cold backup sites. Of course, being so many years old, my notes make no reference to the cloud, which is now another place where businesses host their critical applications.

It is interesting, because a lot of companies are not factoring cloud-based (or generally 3rd-party hosted) applications into their DR plans because, contractually, the cloud provider is responsible for it all. But what would you do if your cloud provider got hit by a disaster they couldn't recover from? What would you do then to continue your business operations? The answer will vary with the type of business and the criticality of the applications run in the cloud. The point is that you can't blindly rely on a 3rd-party provider just because they have claimed something in the contract, which is why it's important to conduct adequate due diligence on your 3rd parties to make sure they really do have the capabilities to back up their claims. It's like: would you seriously go into a dangerous situation armed with a gun that only fires plastic pellets and jams after 3 rounds? Or would you take in a fully tested and functioning AK47? It depends on the situation, but there are few times where having an AK47 doesn't strongly increase your negotiating power.

Which brings us nicely onto testing. There’s no point in having a lovely and wonderfully orchestrated DR plan if it all starts to fall apart when you need it because somebody forgot to get toilet roll for the building or change the static IP addresses. There are different types of tests you can undertake such as:


Checklist

This may seem like the lazy person's test, but it is very cost-effective. It's where you send the plan to everyone and they all review it in their own time. It's not really a test per se, but more of a guide to agree the principles. Think of it as sending someone instructions on how to swim, but without the cost or hassle of actually getting into the pool.

Structured walk-through

A step up from the checklist, it's where everyone gathers in a room and walks through the plan collectively. Again, people don't get wet, but it's a good way to laugh at others' responsibilities and duties in the plan.


Simulation

A simulation is where you do a dry run involving all the staff who would be involved in providing support in the event of a disaster. They'll usually be asked to reconvene at an alternative site and pretend there is a disaster going on around them. Most staff end up treating it as half a day out of the office and try to figure out what they'll do once they finish.


Parallel

A parallel test is a full test of the recovery plan, using all staff and resources. The key point, though, is that the actual production environment isn't touched and is left running as usual. In effect you end up running a second 'hot' site for the duration of the test.

Full interruption

This is the real deal. Other than the burning building and screaming ladies, this is where the production system is shut down and the disaster recovery plan is tested to its limits. Although this is the only way to be absolutely sure you’ve got a proper recovery plan in place, it’s one that requires extreme bravery to execute. Most people stop and wonder, what if they can’t recover, what if something breaks, what if the main system is unrecoverable. So they usually declare that their parallel test was good enough and leave it at that.

Communicating externally

A disaster is similar in many ways to any other incident, except at a much larger scale. And just like an incident, it's important to have well-established communication channels set up via which you can get in touch with key contacts such as the police, fire services, medical facilities, utility providers, press, customers, shareholders, partners; the list goes on.

It's a good strategy to utilize social media to communicate with your wider customer base, as it's usually quicker and more direct. So you see, social media isn't all bad, is it?

Another important communication strategy that needs to be put in place is dealing with staff and their families. If, as a result of a disaster, there is a loss of life or serious injury, how would you communicate with the family? If, as a result of the disaster, the business is impacted so badly that it has to lay off staff, how is that communicated?

Communicating with the media is also something that needs to be handled carefully at the best of times, but even more so during a disaster. You want the company to be accessible, but also to ensure a media-trained spokesperson is nominated. The last thing you want is Bill from mainframe support on the 9 o'clock news flapping his gums about a disaster he knows absolutely nothing about because he was in the basement when it all happened.

They think it’s all over

Be sure the plan is very clear as to when the disaster is over. Usually this is when all operations are returned to the normal state in the original locations etc. This is important because it’s at that point you can take a snapshot of your data and compare it with pre-disaster as well as your assets and personnel to effectively gauge the impact.

Disasters are a good opportunity for thieves and fraudsters to attack. Setting off fire alarms in order to walk out of buildings with a couple of laptops is a common technique. You can also find the business subject to vandalism and looting, so try to ensure as many of these considerations as possible are agreed upon before you're faced with a calamity.

Lord Alan Sugar is Britain's answer to Donald Trump. Well, at least in that he's the man behind the desk firing people on the UK's "Apprentice".

He's also pretty active on Twitter, with just short of 2 million followers, so it's no understatement to say he's pretty popular and influential compared to the average person.

Anyhow, on this lazy Sunday morning, I was scrolling through my tweets when I came across this one:


[Image: Lord Sugar's tweet]

In case you’re wondering, this was very much a sarcastic tweet.

Kevin O'Sullivan is a journalist with the British newspaper the Mirror, and he clearly wrote an article on the Apprentice which wasn't to Lord Sugar's liking.

Put yourself in Lord Sugar's position. You work hard, create what is (in your opinion) a great TV show, and some reporter unfairly criticizes it. It's not something you can really take to court. So your options are to either accept it and shut up, or retaliate by sharing your pain with your Twitter followers.

The problem with this approach is that conceptually, this is not too far off from what a group like Anonymous would do.

Let’s look at some of the characteristics:

1. A person or company make a statement, perform an action or support a cause that you do not agree with.

2. You feel as if there is no “fair” legal route you can pursue.

3. You launch a retaliation to make a statement, such as a DDoS or a more sophisticated attack (and as a byproduct instill fear in anyone else ever thinking of going against you).

4. In the process of retaliation, personal details are usually leaked.

I’m not an expert in Anonymous or their real motives or actions, but you can find out more if you have a read of Josh Corman and Brian Martin’s article which goes into the workings of Anonymous in greater detail.

I'm keeping this at a high level and simply observing that Lord Sugar read something he didn't agree with and, instead of privately sharing his thoughts or being in any way constructive, exposed the email address of Kevin O'Sullivan and invited 2 million people to DDoS his inbox, knowing full well that a lot of people would end up hurling insults at Mr. O'Sullivan purely for having an opinion. Thus a clear message is sent to other journalists: if they dare print anything that his Sugar-ness doesn't agree with, they could face the wrath of his social media army.

Maybe there is absolutely nothing wrong with this. Maybe this is how business will be conducted from now on. But then we must also stop using phrases like 'terrorist activity' to describe Anonymous when they simply ask a few million people to DDoS a company's website because they don't agree with some of its policies.

We can’t have it both ways.

This is the 7th part of my CISSP Reloaded, where I am revisiting the 10 CISSP domains I studied many years ago to see what has changed and how much of it I have retained, as well as adding my own personal thoughts, experiences and rambles into the mix. Read the other domains here: (Domain 1) (Domain 2) (Domain 3) (Domain 4) (Domain 5) (Domain 6)



If you’re a religious person, you will believe in a higher being and hold onto the notion of intelligent design. If you don’t then you will believe in the conditions being right to allow a single cell organism to evolve over time into modern life as we know it today.

Secure applications aren’t the result of evolution or chance conditions coming together. Secure applications are only created with a definite degree of intelligent design. You, as the security person are responsible for providing that intelligent design into the application or system that is being developed.

With this in mind, we dive into the domain of applications and systems development, only to crack our heads open on the tiles and lay face first in an inch deep pool which has turned a crimson red from the blood pouring out of our craniums.

Why do I say this? Well, I’ve been through all of my notes a couple of times and even referred to the handbook I had and at least back in the day, the broad areas that were covered by this domain were:

Software development lifecycle processes such as the waterfall model or spiral model.

Software process capability maturity model

Object-oriented systems

Artificial intelligence systems

Database systems

Application controls

None of these really make much sense from a real world perspective, at least not in the way my notes are written. Which makes this whole domain wrong. I mean really wrong. In Spinal Tap terms, this has a wrong rating all the way up to 11.

So now that I've vented about how this domain was (is?) written, I guess I should balance out the yin with a bit of yang.

Delving into the depths of secure application design would probably mandate a book of its own, which would go far beyond the depth a CISSP is required to know, and likely beyond my knowledge too.

First off, I'm not a programmer or developer myself, so we'll keep the overview at a high level where you can appreciate the concepts well enough to make informed decisions. Some of you reading this may have raised an eyebrow when I mentioned that I'm not a programmer, and now that I'm going to attempt to describe some of my views on secure development, most likely the other eyebrow has joined the first. So let's set the ground rules.

A common argument coming from developers is, “you have no idea how to code even ONE SINGLE LINE, how can you possibly teach me ANYTHING about secure design?”

That is usually accompanied by arm waving, coffee throwing and more colourful language before feet are stomped and doors are slammed and the non-developers are left in the room suffering an awkward silence thinking they deserve to be in basements.

My view is that you don't need to know programming in order to help ensure secure applications are developed, though it definitely helps to have some understanding. The developers are there to develop, and that's their skillset. Your job is to be able to understand and articulate what behaviours an application should exhibit in order to be deemed secure.

For example, I don't know what the brake pads on my car look like. If you gave me a pair of brake pads and asked me to replace them, I would feel proud if I were able to even jack the car up and take off a wheel. Other than that, I'm pretty useless when it comes to anything to do with cars. However, as a driver, I know how the brakes should behave. I know that when I put my foot on the brake, it should slow the car down, or even bring it to a halt, without making weird sounds or stopping one wheel faster than the other. If I find that the brakes do not slow the car down, I can call a mechanic and ask him to come and look at my car's brakes. The mechanic won't tell me I'm not qualified to comment on the effectiveness of the brakes simply because I don't know how they technically work or how to replace them. Rather, he'll agree that as a user I know what behaviour the brakes should exhibit.

This is not much different from secure application development. You may not know how to code, but you should be aware of the behaviours you want an application to exhibit in order to be secure. It's a team effort between you and the developers: they have the skill to execute and you provide the guidance needed from a security point of view. Well, in most cases it's less teamwork and more like a chain gang, where criminals who don't like each other are forced to work together, and if one slacks they all suffer the punishment.

There are some very good resources online which are infinitely more useful than the CISSP material for learning about secure development. A good place to start would be Microsoft's Security Development Lifecycle. It's mature and well established, not to mention free, so it's worth taking a look at and will save you a lot of the hard work. Also check out Troy Hunt's blog at www.troyhunt.com, where he gives a lot of good security guidance, particularly for .NET developers.

Another very good resource is the Security Ninja, aka David Rook. He sums up the reason behind creating secure development principles as being, "to help developers create applications that are secure and not just built to prevent the current common vulnerabilities."

He further elaborates with a proverb: "Teach a developer about a vulnerability and he will prevent it, teach him how to develop securely and he will prevent many vulnerabilities."

This in itself is a profound statement that deserves deep meditative contemplation. In every place I've ever worked, when a vulnerability is found in an application, the tester usually documents the fix, such as implementing whitelisting to validate the inputs, and that's exactly what the developer does, without necessarily understanding why it was needed. Of course, when there's a shiny new application that needs to be developed and deployed within time and budget, the tester and the developer will say there isn't enough time to go over these intricate details. But in their minds the tester will be berating the developer's lack of intelligence, and the developer will mutter under his breath that he simply developed what was asked, and if it was needed they should have specified it in the requirements.
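As a rough illustration of the "whitelist the inputs" fix mentioned above, here is a minimal allowlist validator. The username format is an assumed example for illustration, not a universal rule; the point is accepting only known-good patterns rather than trying to enumerate bad ones:

```python
import re

# Allowlist ("whitelist") validation: accept only values matching a
# known-good pattern. This hypothetical pattern permits 3-20 characters
# of letters, digits and underscores.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")

def is_valid_username(value: str) -> bool:
    # fullmatch ensures the entire string matches, not just a prefix.
    return bool(USERNAME_RE.fullmatch(value))
```

A developer who understands *why* this works (hostile input can't reach the interpreter beneath) will apply the same idea everywhere, which is exactly Rook's point.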

Which is what makes Rook’s statement so powerful because it tackles the issue at its root. Like Will Smith and Jeff Goldblum in Independence Day flying bravely to the mothership to hit them at their heart. Once the mothership was taken down, all other problems paled in comparison. If you only have time to do one thing to improve the security of your applications, I’d say invest it into developer training. If you can make your developers more security aware, they will be your greatest asset.

So what does this all mean to you as a potential or current CISSP? There's a lot to take in and many methodologies out there. I like to break secure development down into three broad areas:

1. Requirements

2. Implementation

3. Validation


Requirements

Imagine you are an aspiring fighter who wants to turn professional. The first thing you need to do is decide which form of fighting you want to get into. It could be boxing, it could be karate or wrestling, or even mixed martial arts. Each one of these disciplines has a different set of rules and skills that will need to be developed. Of course, there are some things that are common across all of them; for example, you will need speed, strength and stamina to compete at the highest level in any fighting style.

Defining the requirements of an application is similar. There is a broad set of secure standards that will apply universally regardless of the type of application you are developing. Then there are specific requirements that are more tuned to that specific application and its functionality. Looking back to domain 1, we need to understand what functions the application will be undertaking, what data is handled by the application, and what the overall risk rating is.

If you were a boxer, you’d focus on defending your head and body because that’s where your opponent will attack, that’s the attack surface. If you’re a kickboxer, then you also have to defend your legs against those debilitating leg kicks so you have a larger attack surface to worry about. Again, applying this to your application you can work out the attack surface so you know where security controls are most needed.

The gist of it is that if you give a developer a wish list of 500 requirements, where half of them may not be applicable and another bulk of them are unnecessary or not feasible, they will prioritise according to what can be done to deliver on time. Then when you try to block it further down the road, you'll look like the bad guy, because you are, for not doing your work up front. Define your requirements using a risk-based approach and work with the developers to help them understand the context of your requirements. It really helps if you just go and speak to the developers themselves. Sure, produce a fancy document for the project manager, but make sure the person doing the actual coding is on the same page. Take along a box of chocolates and a replica lightsaber. Win the hearts and minds of developers with your requirements. If they understand, they will believe; if they believe, they will implement, regardless of the effort needed.


Implementation

The scene has been set, the girl has been kidnapped and held hostage inside an isolated fortified palace. You've got the team together and even engaged in a training montage. Now's the time to nut up and implement. From a security perspective you're probably going to encourage the use of a standard toolset and the re-use of pre-approved modules, etc. It's pretty straightforward stuff once you've got it established. You'll also get a good idea of the customer journey at this point as it comes to life. It's usually worthwhile exploring how this works, seeing if there are any privacy concerns or scope changes that may mean additional requirements. Depending on the type of organization you're in, this may be tough, especially in large organisations where you have a development team sitting offshore and you struggle to find a suitable time of day to chat. Again, it goes back to the earlier point about training your developers to be security aware, so they handle any changes to requirements and scope in a secure manner and consult you on any aspects they may be unsure of.


Validation

So the car has been built and rolls off the production line. It's got the leather interior and sports a matte black look because someone thought it made it look like the Batmobile. The requirements were perfectly articulated and the implementation was flawless. But is it safe to let onto the road? Will it stop in time if a child steps out? Is the petrol tank too close to the wheels, and will it spontaneously combust when the car hits 88.8 miles per hour? These are questions we may be pretty confident about… but we wouldn't bet our lives on it. So we wheel in the crash test dummies.

Or, in an IT security context, it's where we invite specialists to conduct testing and validate the security of the application. This can involve static and dynamic code reviews, the use of automated tools, or manual hacks. It's where a lot of security people like spending time because it's cool to try to break stuff. But there's usually little to no time or budget left at this stage to fix anything but the easiest of vulnerabilities, and you'll have business owners willing to accept a whole bunch of risks just to get their application deployed on time. You can slam the desk and mutter about how incompetent the organization is, but taking the business impact into consideration, a delay of a couple of weeks to implement a security fix usually costs more than they can afford. Hence why I'll reiterate the importance of getting it right up front with developer training and early engagement to get requirements understood.

It’s also a good idea to test your application in the live(ish) environment so you can see what vulnerabilities it inherits from the infrastructure it sits on and how well it plays with other components.

Once all of that is done and you've signed it off, it's plain sailing for a little while. Well, make sure you know which version of the code you've signed off as being secure. Put a note in the diary to come back and review it again within a year, or whatever is an appropriate timeframe, or whenever an update is made. Like ancient works of art and great monuments: building them was no small feat, but maintaining them can be a real nightmare. Your applications and systems may have stood proudly the day they were deployed in your data centre. A beacon of light and hope for a company that was boldly going where no company had gone before. It was going to move your organization to the paperless office, be 99.7% greener than before, bring in efficiencies that would save 1.5m per year; but what is it today? Is it still well maintained and secure? Or is it a decayed old building infested with cockroaches and a hazard to anyone who wanders within?

And there you have it, secure application and systems development in a rather ‘meh’ domain. Can’t say there’s much else I can write here without plagiarizing other websites of which there are loads. The CISSP course as it was and me cannot help you beyond this point… so I’ll leave you with an XKCD comic that explains it all.


****** EDIT ******

It's been pointed out to me by a couple of people that ISC2 recognised the depth of the domain and hence created the CSSLP (Certified Secure Software Lifecycle Professional), which goes into application security in great detail. I'm not pretending other certifications from ISC2 or other organisations don't exist; this series of posts is intended simply to reflect what I covered as part of my CISSP nearly a decade ago.

I've made it past the mid-way point, so I think a bit of self-back-patting is in order! This is the 6th part of my CISSP Reloaded, where I am revisiting the 10 CISSP domains I studied many years ago to see what has changed and how much of it I have retained, as well as adding my own personal thoughts, experiences and rambles into the mix. Read the other domains here: (Domain 1) (Domain 2) (Domain 3) (Domain 4) (Domain 5)


The operations security domain is like a fresh breath of air amongst an overly theoretical world. It’s like a renegade cop who’s on the same mission as the rest of the force, but who goes against the grain in order to get things done without the red tape. Secops guys are the ones on the ground who get the job done while pencil pushers put on weight and eat donuts behind the desk telling everyone who’ll listen about their glory days.

Some may argue that operations security is primarily focused on IT security, and bring up the age-old argument of IT vs information security and all the baggage that comes along with it. It's an argument as old as whether PCs are any better than Macs, whether ninjas could beat pirates, or whether Cagney was better than Lacey.

What it boils down to is that someone has to do the doing. There’s a guy who has to change the firewall rules that get approved, someone who has to reset a password and someone else to backup data. These all fall under operations and we need to appreciate what this entails and make sure it gets done in a secure manner.

There's a bit in my notes about the Orange Book and some of the sections relevant to operations security, which we can explore.

Covert Channel Analysis

A covert channel is just what it sounds like: a path that's not usually used for communication being used as such. For example, as a child many of you may have tried the old trick of taking two plastic cups, poking a small hole in the bottom of each, threading a piece of string through and tying a knot. You could then use it to talk to a friend who was usually sat across the room from you. I'm not sure if it ever worked; in fact it may have been an urban myth. But whether it worked or not, you were using two cups and a piece of string, which were never intended to be communication devices, as a communication device. This wasn't really a covert channel; it's more a way of communicating using devices that weren't meant for communication. Now, if you could do it in a way that no-one else could see, then that would make it a covert channel.

Similarly, people can use computer applications and resources to convey messages in ways they were never intended to.

Some of you may be thinking that coming up with new communication channels would be tough, but it's not. It's just about looking at what you already have and using it in a way different from what the manufacturer intended. This is what hackers typically do. They look at a system, application or device and ask themselves what else they could use it for. Very MacGyver in their approach.

My notes state there are two types of covert channels: covert storage channels and covert timing channels. Apparently a covert storage channel isn't about sewing a hidden pocket on the inside of your shirt. It's about changing space on a hard drive or the attributes of a file. Sounds a bit like storing some data on a disk or changing a file to me. There's no mention of steganography, but I'm assuming that would be a covert storage channel. If you're not familiar with steganography, it's a method usually (though not always) applied to images, where another image or piece of information is hidden within the image in a way that's not detectable. I'll refrain from making another Inception comment about an image within an image.
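
To make the storage-channel idea concrete, here's a toy sketch of least-significant-bit hiding, the trick most simple steganography tools use. To keep it self-contained it works on a plain byte array standing in for pixel data rather than a real image format; the function names are my own.

```python
# A toy illustration of a covert storage channel: hide a message in the
# least-significant bits of "pixel" bytes. Real steganography tools work on
# actual image formats; a plain byte array keeps the sketch self-contained.

def hide(pixels: bytearray, message: bytes) -> bytearray:
    """Embed each bit of `message` into the LSB of successive pixel bytes."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("message too long for this carrier")
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # only the lowest bit changes
    return out

def reveal(pixels: bytes, length: int) -> bytes:
    """Read the LSBs back out to recover `length` bytes."""
    result = bytearray()
    for byte_idx in range(length):
        value = 0
        for i in range(8):
            value |= (pixels[byte_idx * 8 + i] & 1) << i
        result.append(value)
    return bytes(result)
```

The point is that each carrier byte moves by at most 1, so to anyone just looking at the "image" nothing appears to have changed.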

Covert timing channels work by altering time, but without the use of a flux capacitor. My notes have some jargon about changing the system time or modulating the time it takes for a process to execute. Again, there are no examples I can find in my notes or book, but one that comes to mind is timing-based blind SQL injection.

Blind SQL injection is what Daredevil does when pen testing. <badum tssshh>. But in essence, you send a query to the SQL database and it doesn't give any meaningful response, so you're effectively blind. However, by measuring the time it takes for a response to come back, you can extract data. There are technical books that will explain the technique in detail if you're interested, but you won't need to know that for the purposes of the CISSP. No, a CISSP never goes too technical; we've only got an inch of water to paddle in! Come to think of it, it's how bats navigate. The blind ones emit a loud sound (the SQL query) and when the sound bounces off an object, they form a picture of what it is and how near or far it is. The bounce-back of the sound will vary depending on distance, which will be reflected in a longer or shorter time. I think it's a good idea to start referring to blind SQL injection as BatVision. It would make penetration test reports a bit more interesting.
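
Here's a toy simulation of that timing trick. The "database" is just a local function that sleeps when an injected condition is true, and the "attacker" recovers a secret purely by measuring response times. The secret, the delays and the endpoint are all invented for illustration; no real SQL is involved.

```python
import time

SECRET = "s3"  # pretend this value lives inside the database

def vulnerable_endpoint(injected_condition: bool) -> None:
    """Stand-in for a server: if the injected condition is true, the query
    takes noticeably longer (think of a SLEEP() in an injected SQL clause)."""
    if injected_condition:
        time.sleep(0.05)

def char_at(position: int) -> str:
    """Recover one character of the secret from response times alone."""
    for candidate in "abcdefghijklmnopqrstuvwxyz0123456789":
        start = time.perf_counter()
        vulnerable_endpoint(SECRET[position] == candidate)
        if time.perf_counter() - start > 0.02:  # slow answer => condition true
            return candidate
    return "?"

recovered = "".join(char_at(i) for i in range(len(SECRET)))
```

The endpoint never returns any data at all, yet the attacker walks away with the secret, which is exactly why timing is a covert channel.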

Segregation of Duties

Segregation of duties goes beyond simply splitting the workload in half. Despite my best efforts to convince my wife that me making the mess while she cleans it up is segregation of duties, she doesn't believe me; and rightly so. There's a bit of theoretical stuff in my notes, so I'll ignore it for now.

You’re probably familiar with Spider-man’s phrase of “With great power comes great responsibility.” The problem is that in real life, people just can’t handle the power. Or we just don’t trust anyone enough to have that power because if going out into town on a Saturday night has taught us anything, it’s that people just aren’t responsible. In fact, they only pretend to be responsible when they’re around their kids and want them growing up thinking that their parents are rational and intelligent people. The question is, what is that power and how important is it that it is not misused. Let’s take for example the launch of a missile. That’s not something you want one person to be able to do on their own. So you may have 2 or 3 or more people who need to be involved in the process in order to authorize that launch. That way you don’t get some guy hitting the wrong button and saying oops.

Hopefully you get the concept. It's about taking one activity that is super powerful and splitting it out amongst 2 or more people, so that no one person can have all the power and turn into a power-crazed madman intent on taking over the world. The real question is which activities you decide to segregate. Don't be one of them sheep and enforce segregation of duties just because that's the way it's always been done. Do it because it's right.

For example, in a bank, a junior clerk may be able to transfer a certain amount of money, maybe $1000. Anything over this amount and the transfer request will go to their supervisor who has to double check and authorize before the payment is made.

Money is usually an easy benchmark to use, anything over $x and you need a separation of duty. Some banks even enforce this for all amounts regardless of how small the value is.
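
The bank-clerk example boils down to a very small rule, which can be sketched as follows. The threshold, the names and the error messages are all invented; real banking systems are rather more involved.

```python
from typing import Optional

# A sketch of the bank-clerk example: transfers over a threshold need
# sign-off from a second, different person before they go through.

APPROVAL_THRESHOLD = 1000  # dollars

def execute_transfer(amount: int, requested_by: str,
                     approved_by: Optional[str] = None) -> str:
    if amount > APPROVAL_THRESHOLD:
        if approved_by is None:
            raise PermissionError("over threshold: supervisor approval required")
        if approved_by == requested_by:
            raise PermissionError("requester cannot approve their own transfer")
    return f"transferred ${amount} for {requested_by}"
```

Note the second check: the whole control collapses if the requester can approve their own request, which is the part people tend to forget when wiring this up.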

But think about other situations and circumstances, where maybe you’d like segregation of duties, but can’t because operational requirements would deem it impractical. If we take our missile launch sequence again, we will note that the duty is divided amongst several people and although it’s not a slow process, it’s not exactly a task that gets completed in a couple of seconds either. However, an armed policeman doesn’t have time when faced with a dangerous criminal to pull out his gun, while another person loads it and a 3rd person authorises the firing. It’s just the one person who performs the entire task, even though it more often than not leads to fatal results.

So, in an operational environment, think about what the process is and how important it is. Talk to the business who run the process and understand their views. Then totally ignore them and just quote the policy that says you need segregation of duties (no, don't). Of course you should never ignore the business. Come to a conclusion together as to how important a particular process, function or role is. How much power does it give to one person, and are we happy giving that person all that power? If not, then segregate that stuff out. But maybe we can't, due to wanting to keep efficiency levels up. In which case, think of other ways you can minimize the risks. Give the people training, get them to sign their lives away, put CCTV cameras over their shoulders. Remember, operational security is fun and it's only limited by your imagination.

Rotation of duties

Similar to segregation, but not quite, is the rotation of duties. This is a tough one to implement, because in small organisations it's difficult to achieve (very much like segregation) and in large organisations it incurs a large overhead. You hire an expensive firewall administrator and then every 4 weeks you take them off firewall administrative duties so they can staple papers for a couple of weeks, to rotate their job and lessen the chance they can collude with anyone else to commit a fraudulent act. This means you need to hire another equally qualified, and hence equally expensive, administrator to cover the first one's duties whilst they're on rotation. Not saying this doesn't happen, but I haven't really witnessed it. The only variation I have seen in several places is a mandatory break of 1 or 2 weeks per year, taken in one go. The principle is the same as rotation, in as much as you hope you can detect any wrongdoing whilst the employee is off. However, with advancements in real-time monitoring tools, the effectiveness of this control is debatable.

As discussed before, don’t implement controls just for the sake of implementing controls or because that’s just how it’s always been done. In a lot of environments you may find rotation of duties does not help security at all. In other places it could be a very useful tool.

Trusted Recovery

Imagine you had a top assassin whom you spent millions on training over many years. He ends up failing a mission and getting shot, but survives. Unfortunately he loses his memory in the process and on top of that, he comes back to bite you. This is what we don’t want our systems to do when they crash or fail. Firstly, we need to have controls in place that prepare for failure. The simplest example is that of having data backups so that when your system does go poof, you know that you haven’t lost years of work. Secondly, you need to check the controls and processes that fire up when a system is being restored. Does it recover files or roll back to a previous version? Will a reboot of a machine allow someone to login as an administrator with no password?

If we’re looking at an IT system that fails, when looking at recovery go through all the different layers and examine how they recover. The operating system may recover in a safe state, but does the database, or the application or any other component recover in a slightly different way? Could that be a vulnerable point?

Change Management

Change management means well, it really does. But people just don’t seem to like it. Maybe it’s because organisations have made change management overly bureaucratic, or maybe it’s because project managers are too lazy to want to fill out another form. The purpose behind change management is to, well, manage changes to a live environment. It’s a pretty simple process, the person proposing the change will document the change and everything they believe it will impact. The various teams will review the changes from their end (including security) and give the seal of approval to allow it to proceed, or ask any questions, or even reject the changes.
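
That document-review-approve flow can be sketched as a tiny approval check. The team names and the three states are my own invention, not anything from the course material:

```python
# A minimal sketch of the change-management flow described: a change only
# proceeds once every reviewing team (including security) has signed off,
# and a single rejection stops it.

REVIEWING_TEAMS = ["networks", "operations", "security"]

def change_status(approvals: dict) -> str:
    """approvals maps team name -> True (approved), False (rejected) or None."""
    if any(approvals.get(team) is False for team in REVIEWING_TEAMS):
        return "rejected"
    if all(approvals.get(team) is True for team in REVIEWING_TEAMS):
        return "approved"
    return "pending review"
```

The design point is simply that approval is a conjunction: security is one voice among several, but a missing or negative answer from any team blocks the change.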

What you quickly come to realize is that even the most mundane of changes can have a security impact, and if you're lucky enough to be on the change approval board, you need to look at a change from all angles to make sure it doesn't affect the current security setup. I remember a colleague of mine once received a change request which seemed a simple enough move of an application from one server to another. It seemed to be more of a capacity issue, so it was approved and went ahead. Only when we started seeing one application after another become unavailable did we realize something wasn't quite right. Of course, the old server had far deeper links to the running of things than anyone had anticipated. This was in the old days of NT 4.0 and there were a lot of scripts with hardcoded information.

It’s not always the big changes that one has to worry about. You see, if you’re making a change to your payment platform, everyone is aware that security is of paramount importance and will focus on it. It’s the smaller changes that sometimes catch people out.

Also bear in mind that not everyone has the same definition of what constitutes a change. Or more specifically what changes are significant enough that they warrant the need to go through change management. The level at which that is set is a business decision as to how much they want to oversee changes. But the basis of security operations is that every security activity is in effect a change. Be that a password reset, granting access to a file, making a firewall rule change etc. Everything is a change and therefore you should analyse each one of these processes with the same mindset. I’m not saying raise a change request for each password that gets reset. But rather look at the process by which a password gets reset. Is there the right level of authorization in place, do the administrators know how to reset the password properly, is the password communicated to the end user in a secure manner, is there an audit trail of the password change? These are all things you need to be looking at on a daily basis. Don’t wait for an auditor to come in and pick holes in your process.
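
Those password-reset questions can be turned into a rough process sketch. Everything here is invented for illustration: the ticket requirement standing in for "the right level of authorization", the audit log format, and the delivery step left as a comment.

```python
import secrets
from typing import Optional

# A rough sketch of the password-reset questions above as a process: check
# authorization, generate a strong password, and leave an audit trail.

audit_log = []

def reset_password(admin: str, target_user: str, ticket_id: Optional[str]) -> str:
    if not ticket_id:
        raise PermissionError("no authorization: a helpdesk ticket is required")
    new_password = secrets.token_urlsafe(12)           # random, not guessable
    audit_log.append((admin, target_user, ticket_id))  # who reset whom, and why
    # (in real life: deliver the password to the user over a secure channel)
    return new_password
```

Even in a sketch this small, each of the questions in the paragraph above maps to a line: authorization, a properly generated password, and an audit trail.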

It reminds me of a time a colleague was asked to create a temporary account that expired in 2 weeks. He made the mistake of making the change at the global settings for the domain, so that all users' passwords were set to expire in 2 weeks. The result was that every user whose password was due to expire suddenly found themselves locked out. A small change, but a big impact. How could this have been prevented? Well, clearly not employing a numpty would have helped. But having the process for creating temporary accounts documented would also have been a good idea. We were over-reliant on the expertise (or lack thereof, in this case) of individuals and paid the price.

Another time, another colleague was working on a project which had just ended, and he had to remove a large number of user accounts. Rather than going into the domain manager and deleting each user ID individually, he wrote a CACLS script to delete these accounts in one quick sweep. I can't remember the exact syntax of CACLS scripts, but these were little scripts you could write in Notepad and save as a .bat file, which would execute all the commands in one go. Anyway, he made an error in the script, possibly missed out a backslash or something, and when he ran it, it duly began to delete every single account in the domain. Luckily or not, the guest account on Windows NT cannot be deleted, so it stopped when it reached that. Phones rang, incidents were logged and team leaders had their butts kicked.
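
The safeguard that story teaches translates into two habits for any bulk-change script: default to a dry run, and refuse to touch anything outside the list you were explicitly given. Here's a sketch; the in-memory `directory` set is a stand-in for a real user directory, and the function name is my own.

```python
# Bulk-delete sketch: dry-run by default, and a sanity check that every
# account in the request actually exists before anything is touched.

def bulk_delete(directory: set, accounts_to_remove: list, *, dry_run: bool = True) -> list:
    unknown = [a for a in accounts_to_remove if a not in directory]
    if unknown:
        raise ValueError(f"refusing to run: unknown accounts {unknown}")
    if dry_run:
        return [f"WOULD delete {a}" for a in accounts_to_remove]  # nothing changed
    for account in accounts_to_remove:
        directory.discard(account)
    return [f"deleted {a}" for a in accounts_to_remove]
```

A dry run would have shown my colleague's script marching toward the entire domain before a single account disappeared.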

Eventually we managed to restore from backup, and by the end of the day everything was running smoothly again. But it delivered an important message that stays with me to this day: CACLS is the work of the devil. Well, other than that, the fact that all changes are important. Security is all about change. We're not here to stop change, but rather to facilitate change in a manner that is as secure as possible and doesn't compromise the business objectives.

Woah getting a bit philosophical in my old age. Let’s move on.

Least Privilege

The principle of least privilege is to give people only the minimum amount of access and rights needed for them to carry out their job. There's a whole lot of jibba jabba in my notes about this, but if you've, say, implemented a role-based access control system, then by definition each role should be designed with least privilege in mind, and hence as long as you put people in the correct roles, they should only have access to what they need. Least privilege usually works like being on a diet. You cut out all the crap from your daily food intake and survive on lettuce, a thin slice of grilled chicken and bottled water, because according to some doctor at some university, that's the least amount of food you need to survive. But after a few days the cravings will begin. You know the chocolate, crisps and fried chicken will seem oh ever so appetizing, and despite your best intentions, you'll fall into the trap and your diet will be long gone. The same thing ends up happening with least privilege. An administrator will accept it at first, but then think that if only they had access to that extra terminal or function, it would make their life so much easier and jobs could get done so much quicker.
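
The role-based version of least privilege is simple enough to sketch: each role carries only the permissions that job needs, and an access check is just membership in that set. The roles and permission names here are invented for illustration.

```python
# A toy role-based access control table: least privilege falls out of
# designing each role with only the permissions its job requires.

ROLES = {
    "backup_operator": {"read_files", "write_backup_media"},
    "helpdesk":        {"reset_password", "unlock_account"},
    "firewall_admin":  {"edit_firewall_rules", "view_firewall_logs"},
}

def can(role: str, permission: str) -> bool:
    """True only if the role was explicitly granted the permission."""
    return permission in ROLES.get(role, set())
```

The diet-cravings problem described above shows up here as pressure to quietly widen a role's permission set, which is exactly the kind of change that should go through review rather than being granted ad hoc.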

Operator Job Functions

I can imagine that this section was written by a man in a grey suit who speaks in a highly nasal and patronizing voice, likes to collect coins at the weekend and is a member of the local fishing society. Maybe I'm just out of touch with operations departments, but do you still have operators, analysts, tape librarians and other such titles for people doing very specific roles? Nowadays, with everything in the 'cloud', I'm sure everyone just gets on with doing what needs to be done. This is another overall failing of the CISSP course material, in which it tries to guess the type of organisation you will work in and what kind of jobs need to be done, assigning certain labels to them. In reality, each company has its own way of working with its own job functions. If you work for a small company, you'll probably be more hands-on in a lot of different areas. Whereas in larger organisations your role may be more specific and you'll be left in charge of a much more limited set of activities.

I’m probably going to sound like a scratched record again (think of it as a corrupted MP3 file which loops the same line over and over) but as the security professional, you should understand how your organisation works. Take time to look at the IT infrastructure and all the parts that process information that needs to be secured and make sure all the activities are covered. Even where roles are defined, take time to understand what those roles actually entail. Just because someone has the title of backup operator, don’t assume they are responsible for all backups. At one organisation there was a team called the crypography team. I was looking for email encryption so I asked them what they had and how I could install it. They told me that they were only responsible for the HSM’s in the datacenters and for email encryption I’d have to ask the Windows Administration Team. Yeah, job titles don’t mean much. Don’t believe me? Have a search on LinkedIn and see what grand job titles people assign themselves.

Record Retention

How long do you keep your receipts for? Well, if you're like me, then unless you're planning on claiming back expenses from the business, you usually throw them away. If they're incriminating, you make sure you cross-shred them and throw them in different bins randomly across town. This is what records retention is about: how long do you keep records for? There are two reasons you retain records: business needs or regulatory requirements. So your local laws may stipulate that you must keep a record of all customer emails or forms for a period of 6 years. From a business perspective, if you offer customers a 5-year guarantee with the product they purchase, then you need to maintain a record of that for at least 5 years.

There are usually some records retention experts who will look at your local regulations and help define the retention policy for you. With storage being a lot cheaper than it was 10 years ago, meeting most requirements isn't much of a problem. Terabytes of storage are available very cheaply, so don't cut corners with your storage costs. Otherwise your business will end up like the guy from Memento: a limited short-term memory that gets overwritten very quickly. So unless you want to spend time tattooing information on your body and scribbling notes on the back of Polaroid pictures, it's a good idea to invest in storage that meets your requirements.

On the flip side, just because storage is so cheaply available, it doesn't mean you should keep all data forever. There are equally important requirements for you to delete data after certain periods of time, so don't hoard data unnecessarily. From a security point of view, the more data you're storing, the more data you need to protect. Also, when someone says they're deleting data, make sure it's permanently deleted. The last thing you want is one of your company's hard drives turning up on eBay full of customer details.
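
A retention policy like the ones described above is, mechanically, just a table of periods and a sweep that flags anything older. Here's a sketch; the record types and periods are invented examples, not real regulatory figures.

```python
from datetime import date, timedelta

# A retention sweep sketch: each record type has a retention period and
# anything older gets flagged for permanent deletion.

RETENTION = {
    "customer_email": timedelta(days=6 * 365),  # e.g. a 6-year regulatory rule
    "guarantee":      timedelta(days=5 * 365),  # matches a 5-year guarantee
}

def due_for_deletion(records, today):
    """records: iterable of (record_id, record_type, created_date) tuples."""
    return [rid for rid, rtype, created in records
            if today - created > RETENTION[rtype]]
```

The same table that tells you what to keep also tells you what to get rid of, which is the half of retention people tend to skip.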

Monitoring and auditing

Back in the day, nobody in our team used to enjoy doing the monitoring tasks. It was dull and it was boring. Our team leader would constantly remind us that "monitoring is not a dirty word" and how it's an important and vital task. Yet despite knowing this, it killed our souls. Granted, we didn't have access to fancy dashboards or real-time alerting, so every day we'd have to run a report against all the systems and painstakingly go through the logs of interest line by line to determine if anyone had accessed, or tried to access, anything they shouldn't have. By and large it was uneventful. But every now and then we'd find an overnight shift operator had signed into a system with the emergency access account when there was no incident or emergency change to justify the access. That's when the pain of monitoring was channeled into a formal and harshly worded email to that operator's line manager, informing him of the misdemeanor and demanding an explanation, ending with an implied threat of escalating higher if answers weren't given.

It’s not that we were vindictive or bad people. It just happens to be that one of the side effects of monitoring is it will leave you a bitter and twister person. There are lots of fancy technological solutions nowadays that make the whole process easier and smoother. But the basics are still the same. First you need to be clear as to what you’re looking for, once you’ve found it, you need to know what to do with an alert. I was once at a company who were suffering an attack. Someone was asked to look at the IDS logs and people started looking at each other, blank looks were exchanged and heads were scratched. This online portal of theirs had been online for over 2 years and had IDS sensors deployed. The problem was that no-one had ever looked at them, in fact no-one even knew how to access the logs.

Then you have auditing, which is almost identical to monitoring, except it's done at a later date. It's about going through the systems and logs, making sure everything is working as intended. It's more of a Columbo-style activity, where an auditor will look at samples of logs, check procedures and interview operators. Auditors too suffer from the side effects of monitoring and end up losing their personality in the process. This is why a lot of auditors can come across as cold, bitter and just anxious to find something wrong with your processes so they can give you a red finding. You have internal and external audits, both of which have their pros and cons. External auditors are seen as more reputable because they are perceived to be totally independent, but they have no idea of the organisation, so it becomes all too easy to pull the wool over their eyes. Setting the scope becomes an elaborate dance routine, and people clean up a certain portion of their environment just for the external auditors.

Internal auditors have more understanding of the organisation, which makes it more difficult to direct them away from problem areas. But because they are internal, politics can come into the equation. If the head of audit plays golf with the head of security, the results may be very different than if the head of security had taken the job that the head of audit desired. It's no different from those internal affairs departments portrayed in police movies. You have the internal investigator who has it in for the sergeant and will do everything in his power to pin charges of corruption on him. Of course, in the end we find out it was the lady in the accounts department who was behind it all, and the auditor and the sergeant become best buddies; which is where it becomes fantasy, because you'll never see a security person sharing some bromance with an auditor.

These are just some of the fun aspects of working in large organisations. It's not worth worrying about them, because there's little one can do. As a security professional, concerning yourself with red audit findings is a waste of time. You need to focus on doing what you do best: securing the assets that need securing and making sure you do whatever is right. It can be a thankless job at times, but we're not in the business of being thanked; we're in the business of keeping information secure.

There are some other bits and pieces covered in my notes, but none of them seem worth mentioning here. Secops is one of those areas of security where you really will learn a lot more by actually doing it; it's not an area you can learn by theory alone. Additionally, it will give you an appreciation of how security works on the ground. It's all too easy for someone to do a masters in information security and then sit as a consultant writing policies that state how roles should be segregated and how monitoring should be carried out. But they won't fully appreciate the impact these decisions make. When something becomes too inconvenient, people will naturally start bypassing the process in order to get the job done. Or maybe I'm just sentimental about it because I started out my career in a secops role.

This is the fifth part of my CISSP Reloaded, where I am revisiting the 10 CISSP domains I studied many years ago to see what has changed and how much of it I have retained, as well as adding my own personal thoughts, experiences and rambles into the mix. (Domain One) (Domain Two) (Domain Three) (Domain Four)

I was looking forward to going through this domain once more. But upon reading through my notes and books, I can’t help but feel a little bit disappointed. It’s a bit like how I used to fondly remember Airwolf (probably because of the iconic opening credits and soundtrack), but upon watching a rerun recently, I couldn’t help but feel a bit cheated. As if my childhood memories had been violated. Or maybe I was more concerned that as a child I actually enjoyed it.

Back on topic. This domain has a good title, and there is probably a lot one can talk about with regards to security architecture. In fact, in my opinion, there are not enough good and competent security architects on the market. Sure, you can get a lot of penetration testers of varying degrees of competence, and generalists or risk-and-compliance type people. But good architects are hard to come by. I mean, just watch The Matrix Reloaded and see how difficult it was for Neo to find the Architect.

I guess this domain means well, but it ended up being filled with three types of information: fluff, theoretical fluff and useless fluff. Am I being overly harsh on the course, given it was written over 9 years ago? Or was my ability to make coherent notes really that bad? Well, I passed the exam with flying colours (or could have scraped by), so as bad as my notes may be, they were good enough.

So what did this domain cover in terms of security architecture? Well, it covers an extensive list of topics: computer organisation, hardware components, open systems, evaluation criteria, confidentiality models and so on and so forth. Now is a good time to insert the inch-deep, mile-wide analogy once more. Fancy getting half your toe wet?


Computer Architecture

Nothing really useful is covered in computer architecture. It feels a bit like algebra in school: it was complicated, didn't make any sense, and you wasted half your childhood trying to find the value of x. Yes, if you went into mathematics or rocket science or something that actually uses the value of x on a day-to-day basis, you would have found it useful, but for the rest of us it wasn't. And don't get me started on those pointless mathematical questions that I'm sure maths teachers used to create just to troll their students. I can only imagine that at home, if their child asked them, "Dad, how old are you?", they would respond with something like, "Well, on your 10th birthday I will be 3.5 times older than you, and right now you're 6, so work out my age as the value of x."

This part talks about the fundamental computer components. Things such as the CPU, memory, and input and output devices are touched upon. You will then be enlightened to hear about how the bus connects these components, and the different types of buses. I'm going to be lazy and not bother looking up what the different types of buses are, but for argument's sake let's just call them the Routemaster, the double-decker and the night bus. If you're not a London person, then those three types of buses probably mean about as much to you as human rights mean to a TSA agent. Needless to say, the fact that I am struggling to recall the details probably goes to show that in my day job I've never needed to look into these. But just in case you were wondering, Routemasters are the classic buses with the open back where people would jump on and off, which gave health and safety people heart attacks. Double-decker buses are those where all respectable people sit downstairs and only muggers, or those who want to be mugged, go upstairs. The night buses are for those who don't mind waiting over an hour at 2am for a bus that smells of sick, only to be stabbed.

Then there’s some information given about RAM, ROM, the ways CPU addresses memory and so forth.

Even if I did remember what all of this is about, the material doesn’t go into any sufficient depth for it to be very useful. It’s like thinking you’re a qualified lifeguard after watching all episodes of Baywatch.

You’d imagine that being a security certification, the course material would at least introduce students to concepts around how these components can be attacked or manipulated and what needs to be done to protect these. Not too long ago I watched a film called the American which starred George Clooney. It was billed as an assassin with one last job and wanting to get out of the game. So I grabbed some popcorn and awaited the Bourn-esque action to begin. Instead I was treated to some sappy drivel that focussed more on scenic shots of the Italian countryside and romance rather than any real action. I felt misled and lied to. Which is how you feel as a security person reading this chapter. I mean where do they cover issues like buffer overflow which explain how you can start overwriting different parts of memory. Even if the technicalities of doing so are beyond the scope, the concepts should have been explained. I don’t really know how the petrol I put in my car gets converted into energy, but I know enough that I shouldn’t put diesel in my petrol car and that if I don’t stop filling up when it’s full, I’ll get petrol over my shoes.

The same happens when discussing the input/output interface. You'll learn about how a user communicates with the processor via an input/output interface. It describes the different ways the I/O interface can work and gives the pros and cons of each method. Again, there's no real security angle discussed, which makes me question why this is included in the course material. Of course it's important for people to know the basics, but if you expect someone to have 5 years' experience before sitting the exam, then you'd hope they have some knowledge of the basics after 5 years. Maybe the new material covers how malicious devices can masquerade as legitimate I/O devices, and recommends what security protection needs to be in place.


It’s probably very clear that I am by no stretch of the imagination a programmer. I think the last bit of coding I done was HTML back in the good old days when it was a cool skill to have because you could set up your own website on Geocities. I’d spend hours coding away on notepad and saving as a HTML file. I remember when a friend showed me a program called HoT MetaL which was a html coding application with a nice interface a bit like word. So if you clicked bold, it would automatically add the <b> </b> tags before and after the words which was amazing. But I still opted for notepad because it made me feel far more hardcore.

If you go to a lower level, there are more ‘real’ programming languages that people use to write code that talks with the CPU.

Which, in a nutshell, is all this section talks about. Included is a bit about macros, interpreted languages, compiled code and so on, none of which makes any reference to security.

It wouldn’t be hard to throw in a couple of lines into the text that would explain why or why not someone should consider software as part of their security design. What could or couldn’t happen with badly written software and the like. It just delves into 5 generations of languages (GL).

I’m smiling and shaking my head in disbelief reading my scribbled notes which I have written as follows:

“A program or set of programs that control the resources and operations of the computer is called an operating system (OS). Examples are Windows, Linux and Unix”

I must have been very naive back in the day to (a) not have walked out there and then, and (b) actually have written down examples of operating systems, because had I been asked to name an operating system in the exam, I would have been totally stumped! It would actually be amusing if someone was asked the question in the exam:

Q – Which of the following is an operating system:

Windows 95

Open and Closed Systems

Exactly two paragraphs are dedicated to open and closed systems in my old book, which is a bit of a shame. Maybe open systems weren’t as popular back then as they are today. But there is much more that can be said than simply stating that closed systems are proprietary and not subject to independent examination, and therefore may have vulnerabilities, whereas open systems have published specifications and are subject to review and evaluation by independent parties, so vulnerabilities are more likely to be found.

Which is a bit like saying the quiet ones are the ones you’ve got to be careful of because you never know when they’ll snap and go on a rampage. A security certification such as the CISSP pitches itself toward the management side, so should be more in tune with management ways of thinking and the challenges they face.

For argument’s sake, let’s assume it’s true that open source systems have fewer vulnerabilities than closed source systems. Does that mean a security person should always recommend an open source system? Maybe the business will look at open source and see that it’s a free system, so they’re going to save a ton of money. I mean, a system with fewer vulnerabilities and free. A win-win surely?

Well, not quite. You have support considerations to factor in. Maybe your vendor solution has the support that you need, which may not be available with open source. Maybe you have interoperability concerns as to how your system will run your business applications. Also, free software does not mean cost-free. At the end of the day, you still need hardware on which to install the software, and a data centre of some sort in which to host the hardware, which comes with its own costs, plus the installation and maintenance costs. In essence, you only end up saving the cost of your licence and nothing else.

Look at it from all angles, work with the business and come to the solution that fits the business needs. Maybe the answer is to go for an inherently more vulnerable system, but at least you’re aware of it and you can help protect against it by implementing other security controls.

I’m not saying this is the only way to look at it and neither am I advocating a flame war begin on whether open or closed systems are better, any more than I like to see Mac Vs PC debates break out. Well, they are fun to begin with, but a bit like watching too many episodes of Cops, there are only so many car chases you can see before they get repetitive and the voiceover becomes tedious.

Rather, someone looking to enhance their career, which is what anyone preparing for a CISSP is generally seeking to do, should really think things through from all angles. A lot of the time your manager or someone will ask you a question and make you feel as if you have to give an answer immediately or risk looking like you don’t know your subject. But slowing down is sometimes the best thing to do. No-one will ever know all the answers, but knowing what questions to ask is half the battle, and that’s what internet search engines were created for.

Yes, information security is akin to an action flick; there is lots of action. But don’t make it a no-brainer Jason Statham movie. It’s more like a Sergio Leone film, which takes its time to establish the shots and build up slowly, and once the characters have been built up and the tension is thick, you get a blistering 5 seconds of blindingly mad action and all is calm again.

Slowing down and taking time to think things through can mean you end up with a masterpiece at the end.

Distributed Architecture

I know beyond a doubt my notes are old because there’s not a single mention of ‘cloud’ anywhere! The topic covers how computing has migrated from a centralised model to a client-server model, and therefore desktop PCs and workstations have the ability to store and process information locally.

What’s good is that although this is quite a bit into the chapter, we finally start talking about some security concepts, namely what a security professional needs to consider about distributed setups and some of the common protections.

But let’s not be sheep about this. Just because that’s how General Jones did things back in ‘Nam doesn’t mean we should start cutting ears off our victims and making necklaces out of them.

Regardless of the type of architecture you have, you may still be running dumb terminals connected to a mainframe with a certain amount of processing time allocated to you. Or you could have the classic client-server model. If you’re a small business, maybe you just have a client-external hard drive model. Or maybe you just magically teleport your data into the cloud.

All of these scenarios have their pros and cons. But much like an undercover cop infiltrating a drug cartel, keep an eye on the low-life drug dealers on the street corners for sure, but the real payoff is in following the money. Trace it back to Mr Big, find out where it goes and where the drugs are being produced, and then bring the cartel down from the inside. You’ll need a big budget to do this, a white Ferrari, a stock of Hawaiian shirts and maybe a speedboat. Perhaps you’ll even grow a Tom Selleck style epic tache.

The point I’m trying to make is: follow the data. That’s what you’re primarily focusing on protecting. Unless of course you’re providing a service whereby your infrastructure is the service, in which case your infrastructure and model of delivery become more important. Remember the business impact analysis in conjunction with understanding how the business actually works. This will allow you to focus on the areas that need the most attention.


I’ve got some notes about rings. It’s a scheme that supports multiple layers of protection. You have the central ring, Ring 0, which is the most privileged and has access rights to all domains. The outer rings are, unsurprisingly, numbered 1, 2, 3… onwards. Think of Inception, a ring within a ring within another ring.

Actually I only think I decided to write about rings so that I could make an Inception reference. Except this doesn’t have any trains or crazy ex-wives who want to kill you. Well, that probably defines any ex-wife. At least quoting Inception is a welcome break from saying it’s like an onion layer. I’ve seen so many presentations over the years where someone breaks out an onion ring model that just the name of it is enough to bring tears to my eyes.

But the concept is that access rights decrease as the ring number increases. So you place your crown jewels in the central ring and the less important ones are cast out like Snow White by her evil stepmother.
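The ring idea above can be sketched in a few lines. This is just a toy illustration under my own made-up names; real processors enforce rings in hardware, not in Python.

```python
# Toy model of protection rings: a lower ring number means more
# privilege, and a caller may only access a resource whose required
# ring is numerically greater than or equal to its own.

def can_access(caller_ring: int, required_ring: int) -> bool:
    """Ring 0 is the most privileged; access only flows outwards."""
    return caller_ring <= required_ring

# The kernel (ring 0) can touch a user-mode resource (ring 3)...
print(can_access(0, 3))   # True
# ...but a user-mode caller (ring 3) cannot reach into ring 0.
print(can_access(3, 0))   # False
```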

Security Labels

If something is super secret private important, then put a label on it saying, “Super Secret Private important stuff. Do not touch unless you’re super secret private and important.”

Security Modes

In the classic Arnie film Predator, which is among the best films ever made, Arnie’s team has to let Mr. Apollo Creed himself (Carl Weathers) tag along on their mission, which is to rescue the guys who went in to rescue the guys. After a gripping firefight in which Arnie and his bunch of expendable hard men totally blow the living crap out of some local rebels, he gets the feeling that something’s not quite right, so he threatens to terminate Apollo. After pinning him up against the wall using nothing but his bicep and intimidating him with his accent, Apollo finally cracks and tells him that they were only on a mission to kill the living hell out of some rebels for some reason or another, and that it was on a need-to-know basis only.

That’s an example of a security mode. All the team had the same clearance level. But they were only fed information on a need to know basis.

The types of security modes are dedicated, compartmented, controlled and limited access. Your proper CISSP book will explain these in detail. I don’t think they’re very important to know in the real world. Unless maybe you work in government, where beancounters and people who speak in annoyingly nasal voices will preach the importance of all the types of security modes and who can access what and how, without adding much real value beyond writing a document.

Like anything, in an isolated silo the concepts are easy to grasp, understand and implement. But when you’re looking at a global organisation with thousands of employees accessing dozens of different applications on different servers, crossing network devices, it starts to resemble a bowl of spaghetti.


Assurance is a word that is thrown about very loosely by organisations these days, so it’s lost some of its charm. Yes, the course material goes into the Orange Book and the criteria that must be fulfilled. But let’s cut this cake in a different way.

One of the common definitions of assurance goes a little bit like this: having confidence that a system (or whatever thing you’re assuring) acts properly and securely when under the control of the proper people.

What people fail to accurately define before they embark on an assurance plan is exactly what level of confidence they are looking to achieve and for what purpose.

For example, a company may want to gain security assurance that all their externally facing websites are secure. But there’s no clear definition of what secure means, so someone may opt for a vulnerability scan and leave it at that. Others may go for a full-on manual penetration test undertaken by a third party. The approach will vary depending on the level of assurance you’ve defined and how much time and money you are willing to throw at it. This money is tied directly to the business case and objective. If your company is only scanning in order to meet PCI compliance requirements, chances are that’s all they will be doing it for.

I was once out in central London as a young teenager and, whilst at Big Ben, asked someone for directions to Leicester Square. He told me how to get to the nearest tube station. Wanting to save money and not pay for the tube, I asked him if Leicester Square was in walking distance, to which he responded, “everything’s in walking distance depending on how long you’re prepared to walk for.” Which is a bit like assurance. Sure, you can tell your boss you can get systems assured within an inch of their life, but he may not be willing to walk that far. As a side note, remember, as a good upstanding security professional, do try and work with your business partners to help them work out the level of assurance needed. Don’t just sit back and ask them to give you a level, because chances are they won’t be totally sure what’s needed either. If they ask how much they should budget for assurance, resist the urge to respond with, “how long is a piece of string?”

Once you’ve established the level to which you need to be assured, you can make things more interesting by introducing the concepts of certification and accreditation. Certification leads up to accreditation. It’s where a deep expert will evaluate the information security measures implemented in a system and determine whether they are adequate and up to the benchmark you defined when you set out on your assurance programme.

Accreditation is where a usually overweight man in an ill-fitting suit, who breathes loudly through his mouth, will inspect the certification report, pretend like he knows what he’s doing, pull out his big rubber stamp and officially declare, in his capacity as approver, that the system is accredited and the controls keep the risk within acceptable levels. He’ll then order a coffee with no sugar because he’s on a diet and drink it with a couple of donuts.

My notes go into some mention of government certification and accreditation standards. But I don’t want to go into them because all standards look the same after a while. Or maybe they get boring. Like going out with a hot supermodel and 3 months later dumping her because she has no personality. Standards are a bit like that. They may seem sexy at first, but you wake up one morning, roll over in bed and say, hey, that’s just another boring standard.

So the thing to focus on is personality. What I mean to say is, first assess and agree what level of assurance you’re comfortable with. For example, you agree that you need to lock your front door. The next step would be to agree the type of lock that you need in order to protect your front door. Then you get a lock expert to come and inspect or certify that the lock is in accordance with your requirements and finally a big fat man comes up with a clipboard, ticks the box and grants you an accreditation.

Unlike fairy tales though, you don’t just live happily ever after. You live happily for a year, maybe, before you have to repeat the whole process once again. A year is a long time in which your risk position may change and you’ll need to re-evaluate your position. Is that rusty lock still sufficient? Do you need to invest in something more heavy duty? Does it hinder the flow of traffic through the door unnecessarily? Have the users of the lock just got lazy and started leaving a key in it all the time?

Information Security Models

Now you may think that because of my good looks, this chapter is going to be about me. Unfortunately, the topic isn’t quite as good as that but what it does touch upon is different models that have been defined for both access control and integrity.

There are a few access control models mentioned, such as Bell-LaPadula, Take-Grant, Biba, Star and others.

These may be used and implemented in pencil-pushing governmental departments. But most companies would use something like Role Based Access Control (RBAC) to define who has access to what. It’s probably the most commonly deployed and easiest to understand model out there for larger organisations. Although in practical terms, a proper implementation of RBAC is rarely seen, just because of how large a lot of organisations are and the fact they have so many disparate systems that refuse to talk to one another. So sometimes people will push for a rule-based access model instead.

Either way, what these models are trying to achieve is control over who can access certain data and, furthermore, what they can do with that access. The problem is that models are great on paper, and technology vendors will offer their magic products that promise to solve all your problems. But this isn’t a technology issue and can’t be solved by technology alone. You need to work out the business processes first and then, if need be, implement a technology that supports those processes. In reality, there’s no substitute for doing the hard work of listing everything you have and agreeing with the person who owns those assets what importance they have and who can access them. It’s a tough and dirty job, but a lot of the time there’s no substitute for getting tough and getting dirty.
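As a rough sketch of the RBAC idea, the model boils down to two lookups: permissions attach to roles, and users get roles. Every role, user and permission name below is invented purely for illustration.

```python
# Minimal RBAC sketch: users -> roles -> permissions.
# All names are hypothetical examples, not a real product's schema.

ROLE_PERMISSIONS = {
    "finance_clerk":   {"invoices:read", "invoices:create"},
    "finance_manager": {"invoices:read", "invoices:create", "invoices:approve"},
    "auditor":         {"invoices:read"},
}

USER_ROLES = {
    "alice": {"finance_manager"},
    "bob":   {"auditor"},
}

def is_allowed(user: str, permission: str) -> bool:
    """A user may perform an action if any of their roles grants it."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(is_allowed("alice", "invoices:approve"))  # True
print(is_allowed("bob", "invoices:create"))     # False
```

The hard part in real life isn’t this lookup; it’s agreeing with the business what the roles and permissions should actually be in the first place.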

So I’ve been back from Amsterdam for a couple of days and have been reflecting on what I learnt and who I spoke to at Black Hat. I realise a lot of the really juicy info was exchanged under a verbal NDA, so I can’t disclose any of that. I also acknowledge I didn’t speak to everyone in attendance, so I may have missed the most crucial pieces of information. Instead I’ll focus on which speakers I got a chance to talk to and what I took away from it all.

Future Considerations

Apparently small is the new black. We’ve had a rapid miniaturisation of technology and a rapid decline in costs, which have made these devices a great addition to any attacker’s toolkit. Coupled with this, you have legacy systems being retro-fitted with small communications channels and devices, as elaborated on in Don Bailey’s talk “War Texting: Weaponising machine to machine systems.” He illustrated how legacy devices and systems such as cars, water pumps, medical equipment etc. are all being retro-fitted with wireless capability of some sort, be it for monitoring purposes or remote control. All of these have increased the attack points, and a conscious effort needs to be made by both systems manufacturers and implementers to ensure security is adequately designed in. (No prizes for guessing whether it is being implemented properly.)

Steve Lord took apart a Mifi Router (one of those portable personal hotspots) and demonstrated how they could be re-configured and used for nefarious purposes. Be it simply in your pocket as you walk along a public place, swallowing data, or tossing it over a fence into the range of an office. The possibilities are only limited by your imagination, which, if you’re Steve Lord is virtually limitless.

Similarly, you have products such as the Teensy, as explained by Nikhil Mittal, which are as small as a USB drive and, when plugged into your machine, are recognised as a keyboard, effectively defeating any endpoint protection you may have in place. From here the little Teensy runs off like a face-hugger and pops your box. It may even do this in stealth mode, so you don’t even know about it until an alien bursts out of your hard drive. Couple this with the fact that most organisations still run an M&M perimeter model, with a crunchy hard shell and a soft inside, and once you’re inside the network, you’ve got free rein to do anything. Nikhil went on to explain how he’s had a 100% success rate with these devices when conducting penetration tests. They can easily be hidden inside a mouse, a keyboard or even your average garden variety USB stick. Personally, I’d probably conceal it within a USB foam rocket launcher and send it as a present to someone. Who could resist plugging that into their laptop?

Tom Ritter and Jeff Jarmoc, in separate talks, spoke about the future of security protocols and the issues around SSL/TLS interception proxies and transitive trust. Essentially there is a mixture of issues here, but lumped in are some of the fundamental issues we see in network security, including the whole argument around whether CAs are obsolete. There are some very interesting alternatives being proposed and looked into.

SSL proxies are an interesting breed of product. In effect they are man-in-the-middle devices that unwrap your SSL traffic, inspect it and then pass it on. Privacy issues aside, there are known issues with the fact that the proxy could potentially accept bad certificates on your behalf, depending on how it is configured. So you could end up visiting a site with a bad cert and be blissfully unaware, because the proxy just decided it would make that call on your behalf. Yeah, the network layer is going to continue being a pain. It doesn’t help that it’s one of those areas where there is a lack of qualified technical security analysts or architects. As long as traffic is traversing and some logs are being generated, people seem to be happy.
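To make the trust decision concrete, here’s a minimal sketch using Python’s standard ssl module of a client configured to verify certificates itself. The point is that if an interception proxy terminates TLS upstream, this verification only ever sees the proxy’s certificate, which is exactly the call that has been delegated away from you.

```python
import ssl

# Build a TLS context that insists on verifying the server's
# certificate chain and hostname itself, rather than letting a
# middlebox quietly make that decision on the client's behalf.
def strict_tls_context() -> ssl.SSLContext:
    context = ssl.create_default_context()   # loads the system CA roots
    context.check_hostname = True            # the default, shown explicitly
    context.verify_mode = ssl.CERT_REQUIRED  # refuse unverifiable certificates
    return context

# Connecting through this context to a host presenting a bad
# certificate raises ssl.SSLCertVerificationError instead of
# silently proceeding.
ctx = strict_tls_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```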

Finally, I got a chance to catch up with Sheeraj Shah who spoke about HTML5 and the top 10 threats that it faces primarily through stealth attacks and silent exploits. Like anything new, it will probably be a game of cat and mouse for a while before the leaky ship is plugged to an acceptable degree.

So we have a few new technologies to look forward to and lots of small devices to contend with. Nice.

Current stuff

There were some insightful views shared around existing systems. An interesting perspective was given by Rahul Sasi on deficiencies in IVR security and how they can be leveraged to gain access to internal systems. One of the issues that repeatedly comes up with such systems is not that they can’t be secured; it’s that they usually end up being de-scoped for some reason or another, or worse still outsourced, leaving a big blind spot in the security view of the organisation. IVR sits in that weird crossover path where telecoms, IT and security meet, like their unwanted love-child: abandoned and left to the elements.

The ultimate authority on PDFs, Didier Stevens, was at Black Hat explaining his tools and the issues with malicious PDF files. He stated that Adobe Reader 10 had made some significant improvements and added a sandbox environment, although it was probably better to use another PDF reader such as Sumatra, which also disables JavaScript, one of the primary attack routes.

Justin Searle gave a workshop on using the Samurai Web Testing Framework. I didn’t attend, but having used Samurai myself on the SANS web application testing course, I can say it is an extremely useful distribution when it comes specifically to testing web applications. Yes, you have pen testing distributions like BackTrack, but that is more geared towards network security, whereas the Samurai guys have a really nice collection of very well organised tools focused on nothing but web applications. Also, it’s so much cooler when you look at your list of virtual machines and fire up the “samurai”; it sounds much cooler than anything else. All I need to do is figure out how to get a big gong to play each time I run it.

One of my favourite workshop topics was delivered by Ken Baylor, where he helped everyone understand botnets a lot better by showing them how they could build their own Zeus botnet. A lot of people I spoke to on the day found it a very useful workshop, and it drove home the point that there are a lot of things we, as security professionals, talk about and try to risk assess without fully understanding. It was interesting to see not only how botnets like Zeus continually adapt, which makes them difficult to detect and block, but also how they continually utilise more social engineering methods to gain the trust of their victims.

Common Themes

An underlying theme throughout the event, from nearly everyone I spoke to, was that people are still neglecting the basics. Rafal Los and Shane MacDougall gave an interesting talk on offensive threat modeling for attackers, where they took the attacker’s viewpoint to model your threats. It’s an interesting approach, and I’m sure a lot of companies should try it to see how it would help them critique their current threat models (if they have one at all). A lot of the time people are missing the simple steps needed to protect themselves. David Litchfield demonstrated how some 5-year-old Oracle vulnerabilities are still exploitable today.

Additionally, it was more or less unanimously agreed that user awareness training is not adequate in organisations. There either isn’t a properly managed awareness campaign, or no metrics are gathered by which effectiveness can be measured. Or campaigns fall into the category of being overly boring (read this 50-page document) or overly patronising (look at this cartoon of a dog eating a post-it note with your password on it).

However, collectively, security professionals still aren’t the best equipped to deal with this. As someone said, you need people in security who come from a marketing and psychology background in order to really run an effective awareness campaign.


1. Computing is getting very small and powerful, and your toaster will be connected to the internet soon.

2. New technologies are coming which will bring challenges, which is amusing because we still haven’t got to grips with how to secure the old technology yet.

3. Social engineering is here to stay and we still are very poor at engaging and educating our users. So to make ourselves feel better we resort to calling them “stupid”.

This is the 4th part of my CISSP Reloaded series, where I am revisiting the 10 CISSP domains I studied many years ago to see what has changed and how much of it I have retained, as well as adding my own personal thoughts, experiences and rambles into the mix. Read domain 1 (intro and risk management), domain 2 (access controls) and domain 3 (telecomms and network security).


Cryptography, the dark art of information security. The deus-ex-machina, the silver bullet, the be all and end all of all security measures. Widely misunderstood, often poorly implemented.

My first introduction to cryptography was when I was told of a man called Philip Zimmermann who’d created a piece of software called Pretty Good Privacy, or PGP. A bit of sorcery that could protect emails so well that even the prying eyes of Big Brother could not easily get at them. It was so profound that the U.S. Government initiated an investigation against Zimmermann, on the premise that strong cryptography was classed as munitions, putting it in the same classification as real-life weapons.

How amazing could that be? This thing called cryptography, according to the U.S. Government, could be as potent as an AK-47? I had to find out more.

But this was the pre-Google era, while I was still a student and long before I knew I’d end up with a career in information security. I relied on AltaVista, Infoseek and Ask (back when it was still called AskJeeves) to dig out useful information on what this cryptography thing was all about. In between all the distractions, such as websites that played awfully rendered music when you visited them, the 56k modem periodically disconnecting from the phone line and very creative ASCII art, I found little information that I could make sense of. There were all sorts of mathematical formulas that looked so horrendously complicated, they were like algebra to the power of 3.

This was what a digital hand grenade looked like? A bunch of maths and who knows how many geek-hours’ worth of coding?

I was as disappointed as a teenager who goes to a concert to hear their favourite band play, only to find they sound absolutely awful live.

So I closed that chapter in my life and moved on. Cryptography would be another one of those things that I could never understand.

Or was it?

We know the story goes: <title sequence> Boy meets recruiting manager. Recruiting manager offers job in information security. Happy years of employment ensue. Boy does CISSP <dramatic music> and learns more about cryptography. Boy actually understands how cryptography works (to a degree)<triumphant and uplifting music>. Boy now a man. <end credits>

What is cryptography?

Cryptography is a relatively simple concept to understand. It’s the ‘how’ that can get slightly complex.

In essence, it’s taking some information and scrambling it up so no-one else knows what it is. Then having a way to unscramble it back to the original information again.

And really, that’s all there is to it. Just like how, when you were a child, you were told that’s all there is to a Rubik’s Cube, and you wasted many frustrated hours failing to get it right before resorting to peeling off the stickers and sticking them on wherever you darn well felt.

Everything else surrounding cryptography is about finding ways to make sure the scrambling of information is done quickly and efficiently, in such a way that nobody else in the entire universe can unscramble it unless they possess the McGuffin.

Think of it like a Witch who can cast a spell on a Prince and turn him into a frog. If she waves her wand and says “hocus pocus”, the Prince turns into a frog and becomes completely unrecognisable. No-one would ever know the frog was actually a Prince unless they had the Witch’s wand and waved it over the frog saying “hocus pocus”.

Bear this example in mind, as I’ll be using this little bit of hocus pocus to carry on my explanations, because it’s a lot simpler than maths!
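The scramble-and-unscramble idea can be sketched as a toy XOR cipher, with the key playing the part of the wand. This is purely for illustration; it is nothing like a secure cipher, and the key here is just our magic words.

```python
from itertools import cycle

# Toy symmetric cipher: XOR each byte of the message with the key.
# Applying the same key twice restores the original, because
# (b ^ k) ^ k == b. Do NOT mistake this for real cryptography.
def toy_scramble(message: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(message, cycle(key)))

frog = toy_scramble(b"the prince", b"hocuspocus")
print(frog != b"the prince")              # True: completely unrecognisable
print(toy_scramble(frog, b"hocuspocus"))  # b'the prince'
```

Real ciphers are built on the same shape of idea, just with scrambling steps that are computationally infeasible to reverse without the key.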

But before we jump into modern day cryptography that will be relevant to you and your day job, the CISSP course would like to take you on a trip down memory lane on the history of encryption and see how cryptography started out and how it’s evolved from a single-cell organism in a mud pool to a complex self-aware science, that puts rocket science and brain surgery to shame.


To be honest, in my opinion, the history of cryptography is very interesting to read about. You can even apply some of the learnings to make games to play with your kids. But it’s probably not much use to a security professional going about their day-to-day job. I mean, I certainly haven’t been in a scenario where I just had to know how cavemen encrypted their messages!
