Blog Post

I finally made it over to Ireland! It’s quite embarrassing that, having lived all my life in London, I never got the chance to hop over sooner. But we are where we are, and what better reason to go than to attend IRISSCON.

At the airport, as I was about to board my flight, I saw that Infosecurity Magazine’s Dan Raywood was also boarding, and we had one of those awkward, very British moments where we nodded to each other in acknowledgement but felt we were too far apart to yell anything. This resulted in us exchanging a few gestures, which roughly translated to, “Didn’t expect to see you here, let’s have a chat on the other side and share a taxi to IRISSCON.” Although, in hindsight, I realise it may have looked as if we were making gang signs and were going to get into a fist fight once we reached cruising altitude.

Luckily, Dan’s fists of fury remained in the cabin at all times for the duration of our Ryanair flight. As we landed, I wish I could say I looked out of the window at the beauty of Dublin – the green meadows, rivers flowing with Guinness, and children singing – but it was just as overcast and rainy as it was in London, and I couldn’t see anything other than clouds.

Upon landing in Dublin, I looked for Dan but couldn’t find him, and assumed he had left before me. On a side note, I bump into Dan at airports more than anyone else whenever I’m flying to conferences, so it’s clear he’s been tasked by MI5 to keep an eye on me.

Upon leaving the airport I saw William Lau and his colleague Ming (don’t say it, Flash Gordon!). They kindly offered me a coffee and a shared taxi to the hotel we were all staying at, once Thom Langford and their colleague Bernadette arrived.

William and Ming… the APT Brothers

Not only was the weather in Dublin terrible, but the traffic was even worse, and it took far longer than it should have to reach the hotel.

Mr Tumble’s long lost twin

Upon checking in, we were walking through the lobby only to bump into Mr Tumble’s smiling twin Andy, making it an unofficial Host Unknown reunion.

Rumours of Host Unknown’s demise have been greatly exaggerated. This is my happy face.

That night our wonderful host Brian Honan (who took one look at me in my suit and said all I was missing was a cane and top hat to complete the leprechaun look) took us all out to dinner at a boxty restaurant where many of the other speakers were in attendance. I do find slight pleasure in seeing my American friends over on this side of the Atlantic looking extremely jet-lagged and incoherent, and am glad that it’s not me for a change.

Dinner at the boxty restaurant

After dinner, there was some socialising back in the hotel. And I do suspect a couple of the speakers were finalising their slides.

The next day, we started off early with breakfast and the opening remarks by our MC Gordon Smith, who did a wonderful job throughout the day. It would be remiss of me not to mention the great work that Cooper did behind the scenes filming all the talks during the day. The event is a single talk track plus a CTF room, so the audience for the talks remains pretty consistent throughout the day.

Brian Honan kicked things off with a short welcome talk and went over some of the changes over the years. A decade ago the CERT received just under 450 incident reports, while this year it had over 30,000. Quite the jump – a trend he believes shows no signs of slowing down, which is why he urged more government participation in security.

Wendy Nather was up next with her keynote, which was wonderful as always. I always learn something new whenever Wendy gives a presentation, and this was no exception. It was particularly focussed on trust and the user experience. It was also the first time I’d heard about Dark Patterns, something I’m now quite interested in.

Next up was Cliona Curley with a sobering talk about what kids were exposed to online these days. I would have paid more attention, but my talk was up next and I was having my B-Rabbit moment trying to hold in Mom’s spaghetti.

I don’t know why I sometimes still get very nervous before a talk. Once I get on stage and start speaking, I am usually fine – it’s just the short time before starting that I really begin to doubt myself and fear that no-one is going to like my talk, or I’ll choke on stage.

Thankfully, no choking occurred, and I think the talk went pretty well. I’m always rather critical of myself, but even Thom said my talk was good! That’s not to say I don’t appreciate other people’s feedback, it’s just that Thom and I have a long-standing tradition of being hyper-critical of each other’s presentations. Over the years, we’ve spent many hours sharing presentations with each other, trying to refine our messaging and delivery. So, I’m not used to him turning his head slightly as he does and saying, “I thought you did a really good job.”

The rest of the day had some very good talks. Martijn Grooten gave a great talk around spam. To top it off, it was the first time I’d met Martijn, and he is a wonderfully charming person.

Martijn Grooten

Dave Lewis was on top form as usual, being extremely engaging and informative. Eoin Keary brought home the basics on vulnerability management.

Dave Lewis, happy as always

I could go on, but you get the gist – the whole day was full of fantastic talks and great content. I don’t think there was a single talk that didn’t provide great value.

Unfortunately, I had to leave for the airport earlier than planned because of how bad the traffic in Dublin was, which meant that I missed out on hearing Jack Daniel’s talk – which by all accounts was great.

So with that, just over 30 hours after I’d landed, I was back on a Ryanair flight heading home.

It was a fantastic event, and Brian and the rest of the team did a top notch job in arranging it.

Looking forward to next year already!

Happy times with Wendy, Dave, and David

One of my favourite bloggers Troy Hunt posed a question on Twitter yesterday asking whether a user should share responsibility for a weak password that they reuse across multiple services.


There was a lot of great discussion and debate, and I found myself opposing Troy’s views. It was getting late in the night, and despite my inner voice screaming, “Don’t go to sleep, someone on the internet is wrong!”, I turned in, since Troy said he’d have a blog post up explaining his viewpoint in more detail.

Having woken up, got some caffeine, and read his post, I am ready to put across my view as to why I think he (and anyone that agrees with him) is wrong.

But before I continue… let me just say, I have great respect for Troy and consider him a friend. So don’t be stirring up stuff 🙂 I see this as a natural part of a much-needed dialogue in the security industry. I also reserve the right to change my opinion at any point because I’m also aware I’m not always right.

So onward!

Research Bias

Twitter polls are a great way to gather a ton of views quickly and easily. They are almost always biased, though. The initial question Troy posed presents a very specific scenario, which needs three things to happen in order to materialise:

  1. The user chooses a weak password
  2. That password is reused across multiple services
  3. The user is compromised via credential stuffing

Basically, it’s an IF 1, AND IF 2, AND IF 3, THEN “does the user bear some responsibility?”

Actually, if you omit point 1, and say a user uses a strong password, but reuses it and it gets compromised via credential stuffing, the same thing would happen.

This is an important point, because Troy gives the example in his blog that many online services give advice to users about strong passwords, citing Twitter, Amazon, Google, and Disqus.

Note: you’ll see all of these make reference to creating a strong password and to keeping the password confidential. In none of these examples is the user advised not to reuse the password.

And as we’ve already established, swapping out a weak password for a strong one won’t change the scenario.

So, we end up with password reuse being the issue. And the question is whether users are savvy enough on the whole to be aware of the risks of password reuse. It’s easy for me to look at my peers and friends that work in tech and say, of course everyone is aware. But if I look to my family, or friends that don’t work in tech, who are only casual users, I’d say they don’t really know.

Imagine having three different bank cards, and one of them gets compromised somehow, and you’re blamed because you used the same PIN on all of them. I don’t think we’d accept that. For most people, that’s the analogy they have grown up with and have carried into the online world.

The matter of awareness

Now, I do agree with Troy when he says that ignorance is no excuse. But it would also be naive to not recognise the massive awareness gap for a large portion of the population.

If the average user really did understand the need for strong passwords and not reusing them, enterprises probably wouldn’t need to invest so much time and resources into continually running security awareness campaigns.

What we’re talking about is a wholesale cultural shift to get people to adopt new behaviours. The scale of that shift should not be underestimated.

If you’re like me and grew up in the 80s, you’ll probably remember going on car trips without wearing seatbelts and thinking that caring for the environment was the job of some long-haired hippies who had nothing better to do than tie themselves to oil rigs.

Fast forward a few decades and it’s inconceivable that I would get in a car and not ‘clunk click’ – and I’ll walk a mile with an empty paper wrapper because I want to make sure I drop it in the blue recycling bin.

But these behavioural changes took decades. There have been sustained awareness campaigns, coupled with increased enforcement to get to the point where it’s almost deemed socially unacceptable to throw the wrong type of rubbish in the bin.

I don’t think security has had the same amount of time, and perhaps it won’t even have the will or resources to continually invest in raising awareness, because the general trend seems to be a few people trying for a while before giving up, taking their ball and going home to grumble about the stupid users.

Things take time to change, and there are speed bumps along the way. It’s important to persevere, but also to see what can be changed.

A matter of risk

I think one of the issues I have with the initial question that was posed (other than the fact it was a biased, complex, leading question) was that it lacked context. So let’s try to put some context around it by focussing only on password reuse potentially leading to credential stuffing.

  1. A teenager reuses her social media password across Snapchat, Instagram, and her school canteen digital wallet.
  2. The CFO of a multinational organisation reuses the same password on the account system, HR system, email, social media, and local pizza delivery outlet.
  3. A sales rep reuses one password for their online banking and CRM, and reuses one password for a dozen (minor) sites which he needs for registering to go to events.
  4. A mother reuses a password for her email, childcare nursery, and online shopping.

Now, consider that these users fell victim to credential stuffing because one of their online accounts was compromised. Would you apply the same level of responsibility or ‘blame’ to each of them? Maybe you’ll ask whether any of them have received any formal security training.

Personally, I think context is important; understanding that different people, cultures, and environments have different needs and drivers is vital before making broad-brush statements.

Empathy is probably the better word.

Put the user first

To borrow from Simon Wardley’s brilliant maps, every map has an anchor (the point it’s built around). For a geographical map, the anchor is the compass (this is north of that etc.).

When we look at technology systems, the user is the anchor. Everything needs to be built around the user’s needs. Everything is there to support the user – and the underpinning technology becomes less visible to the user the deeper it goes.

For example, when a user goes onto Twitter, their ‘need’ is to post a tweet, share content, read other people’s views, share private messages and so forth.

Security is something they expect and are aware of, but for the most part it isn’t really a need. Security is more of a need for the service provider than the user.

Carl and Kevin summed it up right, in my opinion.

My major grievance isn’t even the question of whether we should be blaming users or not – because that’s the wrong question. The question is: why are technologists, developers, and security professionals allowing such poorly designed features to go to market?

I put this down to the “us vs them” mentality where security professionals somehow try to wash their hands of their own responsibility.

A few weeks ago Bruce Schneier wrote an opinion piece in the NY Times entitled “Internet Hacking Is About to Get Much Worse”, which contained a gem of a paragraph suggesting that the general population simply doesn’t care about security.

Really? This is more of a feeling than any real research. How many people did Bruce speak to before arriving at this conclusion? It’s the old IT security mindset of “us vs them” – the lemmings that are the general population don’t care about security, boo hoo.

In closing


I just can’t wrap my head around why we build poor systems, have poor security, and allow bad stuff to happen, then want to point the finger of blame at a user who is operating within acceptable parameters – while we magically sit on our floating chair wearing an infinity gauntlet.

  • The user is your anchor – build security into and around their requirements.
  • The user is your friend, they just want to do things easily.
  • The user wants your help, they don’t want to get hacked any more than you do.

Be the security professional they need and deserve.


Around 2006/2007 I began blogging and tried to get into video blogging. Although I’d been working in information security for seven years by that point, I wasn’t well connected in terms of what conferences ran, who the influencers were, or who edited any of the numerous security magazines or websites.

I wanted to go to Infosec Europe and interview a few people on camera, but didn’t know the best way to go about it. I sent out a few dozen emails but didn’t get a single response.

Eventually I found a contact for the PR agency in charge of Infosec Europe: a company named Eskenzi PR. Why would they respond to someone like me, when much smaller websites had refused to even acknowledge my email? I didn’t think about it too much and sent off an email, thinking that I’d have to make do another way.

Much to my surprise, I got a call back from Eskenzi’s Neil Stinchcombe. I’d later learn that Neil founded the company along with his wife, Yvonne Eskenzi.

Neil took the time to hear out my plan and proposal, and provided some valuable tips as well as pointing out some of the flaws in my approach. He offered me a press pass to Infosec and helped set up all the video meetings I requested, and some more for good measure.

Now, I may not have been noticed by the BBC and ended up with my own TV series, but I’ll always be grateful for the help Eskenzi provided when it would have been easy for them to ignore me with no consequence.

Over the years, my relationship with Eskenzi grew. When I moved to 451 Research as an analyst, they were one of the agencies that would pitch briefings and stories from their security clients – a job they consistently did to a high standard.

When I joined AlienVault three years ago, I was pleased to see that Eskenzi PR was our UK agency.

As I reflect, Eskenzi PR is the longest-standing professional relationship I have; one where I like to think of Yvonne, Neil, and many of their people as colleagues and friends.

So, I was particularly happy to hear that Eskenzi has been honoured by receiving the Queen’s Award for Enterprise 2018, recognising its achievements in International Trade.

I can’t think of a company more deserving, and I’m glad the Queen agrees.

Individuals and companies of all sizes often say they ‘think’ they’ve been hacked or breached, or have had some form of unwanted event.

There is usually a lack of conviction in this statement, and in hindsight it’s not easy to validate.

Sure, one could use a service like haveibeenpwned.com to retrospectively check, or wait for a service provider to announce that their data has been compromised – but there are better ways, if one is proactive in their approach.
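If you’d rather script the retrospective check, here’s a minimal sketch against the haveibeenpwned.com API (this assumes the v3 breachedaccount endpoint, which requires an API key; treat the exact endpoint and header names here as assumptions to verify against the HIBP docs):

```python
import requests  # third-party: pip install requests

def breaches_for(email: str, api_key: str) -> list:
    """Return names of known breaches for an address (assumed HIBP v3 API)."""
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{email}",
        headers={"hibp-api-key": api_key, "user-agent": "breach-check-demo"},
        timeout=10,
    )
    if resp.status_code == 404:
        return []  # 404 simply means the address isn't in any known breach
    resp.raise_for_status()
    return [b["Name"] for b in resp.json()]
```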

Perhaps one of the best features of Gmail is the ability to add +something to your email address, letting you identify which providers have been breached or have shared your email address.

For example, if my email address is example@gmail.com, then when signing up for BSidesLondon I’ll provide my email address as example+bsideslondon@gmail.com.
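The trick is easy to apply by hand, but if you sign up to a lot of services it’s worth being systematic about it. A minimal sketch of the pattern (the base address is a placeholder, not my real one):

```python
def tagged_address(base: str, service: str) -> str:
    """Derive a unique, trackable address per service via plus-addressing."""
    local, domain = base.split("@")
    return f"{local}+{service}@{domain}"

print(tagged_address("example@gmail.com", "bsideslondon"))
# -> example+bsideslondon@gmail.com
```

If spam ever turns up addressed to the tagged version, you know exactly which service leaked or shared it.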

It’s also worth looking at getting an adblocker (note: not all adblockers are created equal – look for a good one that won’t sell you out in other ways). Basically, the fewer scripts allowed to run in your browser, the less tracking there is, and the less opportunity anyone has to inject malicious content.

For those who have a bit more patience to validate every connection, get something like Little Snitch or Radio Silence (or similar – I’m not endorsing these products): anything that can detect the outbound connections that applications and software on your machine are making. This gives you the ability to control and decide which apps can communicate externally and send who knows what data.
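If you just want a quick look before committing to a dedicated tool, a few lines of Python can give you a crude, point-in-time snapshot of the same information (a sketch using the third-party psutil library; unlike Little Snitch it only observes rather than blocks, and it may need admin rights on some platforms):

```python
import psutil  # third-party: pip install psutil

# Snapshot every process that currently holds an established
# outbound TCP connection, and show where it's talking to.
for conn in psutil.net_connections(kind="tcp"):
    if conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
        try:
            name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        except psutil.NoSuchProcess:
            continue  # the process exited between snapshot and lookup
        print(f"{name} (pid {conn.pid}) -> {conn.raddr.ip}:{conn.raddr.port}")
```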

Finally, one of my favourite techniques is to use honey tokens. The free ones available at Canarytokens are super easy to use and set up.

Another way to set up your own honey tokens is to put a false customer record into your CRM. Set this customer’s email to an address that you control. That way, if you ever get emails sent to that particular address, you know that your customer records have been compromised – probably by your most recently departed salesperson.
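Checking whether the canary has tripped can be as simple as polling the mailbox behind that address. A minimal sketch using Python’s standard imaplib (the server, account, and canary address are all hypothetical):

```python
import imaplib

CANARY = "fake-customer@yourdomain.example"  # address planted in the CRM record

def canary_tripped(host: str, user: str, password: str) -> bool:
    """Return True if any mail addressed to the canary has arrived."""
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        imap.select("INBOX", readonly=True)
        status, data = imap.search(None, "TO", f'"{CANARY}"')
        return status == "OK" and bool(data[0].split())

if canary_tripped("imap.example.com", "you@example.com", "app-password"):
    print("ALERT: mail to the canary address - CRM records may have leaked")
```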

While there are many other things one can do to enable quick detection of compromises, I find these some of the easiest and quickest to set up and get running with.

Having an early warning system is good, but it’s only as good as the response. Therefore you should have a plan of action for what to do if you are notified that someone has accessed your files or compromised your accounts. Mainly this would include changing your passwords, notifying relevant parties, and putting your guard up. But it will depend on what is triggered, by whom, and what your personal risk tolerance is.

For small businesses, and even larger corporations, these techniques can still work – however, there are robust enterprise-grade offerings which are more suited to the task (maybe the Canary hardware device is good for you, or AlienVault USM Anywhere). Still, I wouldn’t be against having a few honey tokens scattered around a corporate network just to see who may be poking their nose where it doesn’t belong.

Fuelled by a Twitter conversation, both Adrian Sanabria and Anton Chuvakin posted articles (here and here) sharing some good tips on what makes a good briefing and common pitfalls to avoid.

As a former (recovering?) analyst, I thought it only right that I jump on the bandwagon and share my thoughts on the topic.

What is a vendor briefing?

If you’re not familiar with vendor briefings, it’s basically where a vendor speaks to an analyst and explains what their product does, how the company is structured, its financials, and so forth. Depending on how the analyst firm operates, the analyst will then either write up a piece on the company, reference it as part of a broader piece of research, or maintain the details in their database of companies they are tracking.

Analyst tips

Both Anton and Adrian were very thorough in their advice to vendors on how to deliver a good briefing. But I’d like to shift focus and point out a few things analysts could be mindful of during such briefings.

1. You don’t know everything. Yes, you speak to very smart people every day and your reports are widely read. But it’s very easy to get on a high horse and think you are all-seeing and all-knowing. If that were the case, you’d have raised millions in funding and solved all technology problems by now.

2. Let the vendor make their point. You may not agree with them, but let them present their perspective and give the courtesy of hearing them out.

3. A briefing isn’t a fight – it’s not an argument that needs to be “won”. If putting others down makes you sleep better at night that’s cool. But chill out a little, you’re meant to be impartial and balanced.

4. Set expectations – let the vendor know up front what you are hoping to get out of the call. Be open about whether you’re more interested in the product, or the company strategy, or the numbers. Vendors aren’t mind-readers.

5. One of the most useful phrases I learnt as an analyst was, “Can you help me to understand…” It’s a simple and effective line that can mean many things, such as “I don’t believe you”, “too many buzzwords”, or “maybe you need to think this through”. Whatever it may mean, it doesn’t come across as confrontational – it puts you on the same page, trying to work through a problem.

6. Be organised – be on time, have your notes in order, don’t just blunder through the briefing. Yes, you’re a busy analyst that has to do many of these a week – but a little organisation can go a long way.

7. Share your plans – be clear about what the vendor can expect. Do you plan on covering their company? Will you include them in a larger piece of research? How frequently would you like them to keep in touch with you? All this can go a long way in ensuring a long and meaningful relationship.

The numbers don’t lie

If I were to add one tip for vendors to Adrian and Anton’s respective blogs, it’s this: while an analyst may disagree on the effectiveness of your product, or its value, the numbers don’t lie. Analysts have a lot of numbers – they spend a lot of time sizing markets and analysing competitors’ growth projections and targets – and most will be able to analyse your numbers, or infer them, very quickly. So please don’t try to impress by claiming huge numbers or ridiculous growth. Don’t claim your TAM is your SAM or SOM.

I’ll digress and give an example of what I mean.

Say you are a producer of bottled water.

Every human needs to drink water, so the total available market (TAM) is around 7 billion.

But you’re restricted by geographical reach. Say you can only ship your bottled water to the whole of England; that is your serviceable available market (SAM).

However, there are other competitors in England, and there are many people who won’t buy bottled water – maybe they drink tap water, boil their own, or have their own water filters. So, in reality, you’re looking at a much smaller serviceable obtainable market (SOM).
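To put rough numbers on that funnel (the narrowing percentages below are entirely illustrative):

```python
# Illustrative TAM -> SAM -> SOM funnel for the bottled-water example.
tam = 7_000_000_000           # everyone on Earth needs to drink water
sam = 56_000_000              # roughly the population of England
som = int(sam * 0.10 * 0.05)  # say 10% buy bottled water, 5% pick your brand

print(f"TAM: {tam:,}  SAM: {sam:,}  SOM: {som:,}")
# TAM: 7,000,000,000  SAM: 56,000,000  SOM: 280,000
```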

Maybe you’re a vendor that secures IoT devices. Don’t start your pitch by saying that your market is 22 billion devices (or whatever the current estimate of IoT devices is), because it’s not. That may be the TAM, but your SOM will be much smaller. So think about how you will convince the analyst that your product has the right strategy to get there.

In my opinion, recklessly throwing around numbers is worse than buzzword bingo – you could end up in the vapourware category of my vendor hierarchy pyramid.


Market sizing

Seeing as I’ve kicked the hornet’s nest about numbers, I guess it’s a good time to talk about market sizing. I see a lot of weird and wonderful numbers thrown about, and sometimes I’m left scratching my rapidly balding head wondering how markets are sized up. Many times I’ll see claims that the {small infosec segment} industry will be worth {huge} billions by 2020 according to {analyst firm}.

I have typically been drawn more towards the bottom-up approach to market sizing; it can be more time-consuming, but it gives a saner answer.

It’s rather simple: you take the collective revenue of the current vendors in a given market segment to get today’s market size. If you know the rate at which each of the vendors is growing, or is predicted to grow, you can estimate how large the market will be in the future.

For example, if you take a list of security awareness providers and calculate their turnover (I’ll save that for another post), and add it all together, maybe the answer will be $200m (as an example). So that’s our market size.

On average, all the companies may be growing sales at 25% every year. Which means that, barring any major disruptions, in two years’ time the market would grow to roughly $310m.
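The arithmetic behind that projection is simple compounding. A minimal sketch (the vendor turnovers are invented to sum to the $200m example):

```python
# Bottom-up market sizing: sum today's vendor revenues, then project
# forward at the average annual growth rate.
vendor_revenue_m = [80, 60, 35, 25]  # hypothetical vendor turnovers, in $m
growth_rate = 0.25                   # average annual sales growth
years = 2

market_today = sum(vendor_revenue_m)                       # $200m
market_future = market_today * (1 + growth_rate) ** years  # compounds yearly
print(f"Today: ${market_today}m -> in {years} years: ~${market_future:.0f}m")
# Today: $200m -> in 2 years: ~$312m
```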

So, if a new security awareness vendor comes onto the scene, they shouldn’t claim the market is worth $5bn because every employee in every company in the world needs training, or that they plan on growing to $500m in revenue in five years – an analyst will be justified in rolling their eyes and being sceptical.


This tweet by Member of Parliament Nadine Dorries was enough to significantly raise the blood pressure of half the infosec professionals in the world.

After getting a bit of ‘stick’, the MP tried to defuse the situation by claiming she was a mere backbench MP – an insignificant minion.

Some other MPs jumped in to say it’s a common occurrence and that people are blowing it up into a major issue.

Maybe five or ten years ago this wouldn’t have been an issue at all. But the world is very different today – attacks are very different, and chaining together a series of attacks starting from even a compromised “low-level” employee isn’t all that difficult. Especially when MPs make an attractive target for unfriendly foreign agencies.

Like most things in life, nothing is ever black and white. Password sharing does occur, despite there being technology solutions in place to facilitate sharing in a manner whereby accountability remains. It happens in most companies. But that’s not quite what I take exception to here.

The attitude displayed by MPs is what is concerning. The casual brushing off, as if it is something that should be accepted.

It’s a bit like using a mobile phone while driving, or driving over the speed limit… or using a mobile phone while driving over the speed limit. Even though it puts lives at risk, most people have done it at some point. Completely eradicating such behaviour is impossible, but you wouldn’t accept the excuse of, “Well everybody else does it” especially if it came from a bus driver.

Similarly, society shouldn’t be willing to accept the risky behaviour displayed by people in government or other sensitive roles.

But maybe that is where infosec professionals can do a better job of educating the masses. Perhaps only when risky behaviour is shunned at a societal level – like the dirty looks you get for not separating your green from general waste – will people’s attitudes change.


I have Amazon Prime and quite like their shows; whenever I have some time to kill I’ll watch an episode or three.

A couple of weeks ago, I thought it would be a good idea to install the official Amazon Video app on my Android device, so that I could download episodes and watch them while travelling. I previously had it on my iPad, so I knew it worked well.

However, I wasn’t able to find the Amazon Video app in the Google Play store. Perplexed, I went hunting, and quickly found that Amazon does indeed have an app for Android – only it isn’t on the official store.

Amazon helpfully has instructions on how to install the app on your Android phone or tablet from its own Amazon Appstore.


For those of you playing along at home, you may have spotted the obvious flaw in this approach.

To install the app, Amazon is advising you to “Go to your device’s Settings, select Security or Applications (depending on device), and then check the Unknown Sources box.”

But there are others

Unfortunately, this isn’t isolated to Amazon. Ken Munro pointed out on Twitter that the National Lottery also asks you to download its app from a dark alley in the back.


Although, to its credit, the National Lottery does say, “Remember to change your security settings back to your preferred choice.”

Quentyn Taylor pointed out that Samsung does something similar.

So what’s the big deal?

The official Google Play store isn’t a complete safe haven; malicious apps have bypassed its security checks and ended up in the store many times. And companies like Amazon, the National Lottery, or Samsung aren’t fly-by-night operations that will deliberately push out insecure apps – so what’s the harm in downloading the app and switching the security setting back?

For most users who aren’t technically savvy, their Android device’s ability to block downloads from unknown sources is there to prevent them from accidentally downloading malicious software. – Strike one.

The security industry has spent a great deal of time and effort educating users about the dangers of downloading unknown files from untrusted sources, and this approach undermines a lot of those efforts. – Strike two.

Normalising such actions enables attackers to try similar tactics. After all, if companies like Amazon have legitimised the approach of turning off security settings and downloading apps from their own environments, it is something that any company could emulate. – Strike three.

The reality is that convenience trumps security most of the time. Users will intentionally or accidentally bypass security controls to get the app of their choosing, often leaving themselves vulnerable in the process. Which is why it’s important that manufacturers, providers, app stores, and everyone in between work together to help deliver a secure experience to users, instead of elements working against each other.


Whenever calamity strikes, it’s only natural for people to try to rationalise and identify the problem.

That’s now happening with the WannaCry ransomware outbreak, which affected the UK’s NHS and other services in over 100 countries. People are discussing what should have been done to prevent it.

On one hand, there’s an ongoing debate about responsible disclosure practices. Should the NSA have “sat on” the vulnerabilities for so long? When the Shadow Brokers released the details, it left only a small window for enterprises to upgrade their systems.

On the other hand, there are several so-called “simple” steps the NHS or other similar organisations could have taken to protect themselves, including:

  1. Upgrading systems
  2. Patching systems
  3. Maintaining support contracts for out-of-date operating systems
  4. Architecting infrastructure to be more secure
  5. Acquiring and implementing additional security tools.

The reality is that while any of these defensive measures could have prevented or minimised the attack, none of these are easy for many enterprises to implement.

… Read the rest of the post here

European startup CLTRe, founded by Kai Roer, has spent the last couple of years examining the security awareness and user behaviour problem through the lens of security culture.

Based on findings over the course of 2016, CLTRe has produced its first annual Security Culture report, co-written by Roer and Gregor Petric, Ph.D., an Associate Professor of Social Informatics and Chair of the Center for Methodology and Informatics at the Faculty of Social Sciences, University of Ljubljana (Slovenia).

Many existing security awareness reports measure and report on a few basic metrics – often based on the number of phishing emails users click on.

It is here that the CLTRe report differentiates itself, delving into statistics and metrics to provide a view that is probably the first of its kind. It takes into consideration not just behaviours, but adds insight to those behaviours based on gender, geographic location, age, duration of service, and acceptance of norms across seven dimensions.

The report has insightful nuggets of information scattered throughout, such as an examination of the cultural differences across various industries in Norway and Sweden.


The report explains at length why security culture metrics matter and the value they provide. It states that similar to technical controls, security culture must be measured in order to understand and measure change.

For example, reporting the number of clicks on a phishing exercise is useful but has its limits. Those metrics do not reveal the users’ motivations or drivers.


Thoughts

For a first effort, CLTRe has produced a great report with very detailed insights. It’s not light reading, and some segments feel quite academic in nature. That’s not a knock on the report – the rigour is needed to elevate the discussion to the higher level required.

For next year’s report, I’d like to see the inclusion of case studies or quotes from companies that have been measuring their security culture, and how they have used the information to improve it.

Check out the full report here (registration needed).

Let Kai and the CLTRe team know what you think: Click to tweet:

Great job @getcltre @kairoer on the human factor report. (via @J4vv4D)

Why did you write this report? @kairoer @getcltre (via @j4vv4d) 

I’ve followed Scott Helme’s work for a while now and have been impressed with his approach, so I was interested to find out that he had teamed up with BBC Click and Prof Alan Woodward to comprehensively dismantle a vendor’s claim of total security. Scott has published the whole story on his blog, and the BBC Click episode is live.

This was a well-researched and executed piece, but let’s take a step back and look at the wider picture and what this means for vendor-researcher relations.

So, I felt it was a good time to grab some time with Scott to seek his opinions on some of the questions that came to mind.

One of the first things that strikes me about Scott is his measured and calm demeanour. He has the look of a chess master who is quietly confident, knowing that he’s always seven moves ahead. The second thing I notice is that I really can’t gauge how old he is. I think it’s one of the things that happens as you grow older – I find it increasingly difficult to differentiate between someone who is 20 or 35. They all just look “young” to me. So I look for clues such as the ages of children, year of graduation, or years of experience to try to make an educated guess.

What is secure?

Not wanting to waste time with warm-up questions, I wanted to get to the heart of the matter. There is no benchmark or agreed standard for when it’s appropriate to use the word secure, or to claim a product is secure. The fact of the matter is that, as far as technology goes, nothing is ever truly secure. So does that mean no one should ever use the word secure at all?

On one hand one wants to avoid going down the route of legislation, or having stringent criteria in an industry that is constantly in a state of flux. On the other hand, Scott said, “We don’t see many car manufacturers rocking up with the safest car in the world that has no airbags or brakes.”

Which is a fair comment, but it is a lot easier for a lay person to see and understand security in physical products than in software.

The manufacturer’s dilemma
So what is a security startup to do? Nearly every security product has had vulnerabilities that needed patching – not even the largest of vendors are free of bugs.

Open-source products, where the code is available for all to audit, are no exception, with products such as OpenSSL having had long-standing vulnerabilities. Given the landscape, what’s the best way to approach this?

Scott gives a half smile, indicating that it’s something he may have been asked many times. He told me he believes that the more scrutiny a device or product gets, the more likely you are to become aware of issues. “Researchers and bug bounties are your friend. They can’t replace traditional penetration testing and other standards or compliance requirements, but they sure add a lot of extra coverage, often for not a lot of cash.”

It’s somewhat a balancing act. After all, security costs time and money in order to implement properly. Far too many startups are caught up in trying to prove that their product serves a viable market and that there is demand before pouring resources into security.

Scalability
But is relying on researchers to find vulnerabilities a scalable model? There are only so many Scotts in the world, and researchers are drawn to particular products out of personal curiosity, or where their expertise lies. Many products simply slip beneath the radar. The number of “secure” products being released outpaces the time and effort available to properly validate their claims.

Scott agrees with the sentiment, and states that it ties into the issue of lack of standards. “Right now there is no regulation or standards, so researchers are all we have. Even basic standards would start to raise that minimum bar and begin the process of filtering out the crud. I do it because I feel I can make a difference, I enjoy it and it helps me keep my skills sharp.”

With great power
With time running out, I wanted to go straight for the jugular with my last question.

While one can commend the good work Scott and others do, with this recent release we’ve effectively seen a company torn down. Surely that kind of approach risks scaring other startups away from releasing any product at all?

If I were considering starting up a secure product, I’d be scared that you, or other researchers, could shut my business down. That would leave me with the choice of either not producing anything at all, or trying to bribe you up front. And while you may be above bribery, I can’t say that for every researcher out there.

Despite my Jeremy-Paxman style attempt at getting a rise out of Scott, he remained patient with me.

“I certainly hope it wouldn’t scare them from releasing a product, but perhaps consider engaging a reputable security firm to analyse the product prior to release. I feel that’s the minimum any responsible company should be doing anyway. They can also engage the community with a bounty program to get private feedback prior to release too. If someone plans to bribe you, I guess you can’t do much about that, except beat them to the punch and fix the issues first. The internet is full of bad people :(”

The internet is indeed full of bad people, you cannot argue with that.

In between all the politics and memes on Twitter, you sometimes come across a genuinely interesting security conversation.

My friend Quentyn Taylor, who happens to be a CISO, posted this tweet, which generated a lot of great commentary.

I recommend having a read through the comments; there are some very valid points being made both in support of and against this viewpoint.

Personally, having worked in many banks which had huge legacy estates running critical banking applications, I agree with the statement. It’s easy to sit on the sidelines and say, “just upgrade” but it’s never really that simple. Security is often only a small consideration in the big scheme of things.

It’s why risk management is so important, it helps clarify what the tradeoffs are. A legacy system may be vulnerable, and that risk may equate to a dollar value. But the downtime, upgrade costs, and impact to associated systems of an upgrade may outweigh that considerably.

So many times it comes down to having a proper inventory, classifying data, and monitoring legacy systems with a closer eye.

However, this isn’t the whole reality.

It’s a reality based on my personal experience, which likely mirrors much of Quentyn’s. And that’s something many often forget – just because something works in one enterprise, or type of business, it doesn’t necessarily mean it will work in another.

Which is why I feel that when discussing security topics, it’s worthwhile to be specific and add context. Omitting it is something I’ve been guilty of in the past, and I’d like to change that.

For example, take these two statements:

Scanning QR codes is not popular.

Vs.

Scanning QR codes is not popular in the West.

That is because in some countries like China, QR codes are everywhere. The location adds that important bit of context by which the statement turns from a generality to something more specific.

The same logic can be applied to many of the broad security statements that are often made. So when someone says, “There’s a shortage of infosec talent”, the questions that come to mind are:

Which geographies does this apply to?

Is there a lack of red teamers, blue teamers, or risk managers?

Is it a lack of people with over five years’ experience, or are they simply too expensive?

If we stick to our own realities and speak only in general terms, we will remain adamant that our point of view is correct and never reach a consensus. And it’s probably about time we started having better conversations.

I’ve been reading up on GDPR lately and frequently use mind maps to organise my thoughts.

So, I thought I’d share the interactive mind map I created for GDPR, with its 11 chapters, 99 articles, and 187 recitals. Let me know if I’ve missed anything or should amend it for clarity.

A lot went down – some stories in the video and a ton of interesting links below. Enjoy!


Stories in Video

Tesco Bank Hacked

Adult Friend Finder hack

Facebook buying stolen passwords

IP Bill set to become law

Other interesting stories  

Cyber Security Challenge UK crowns youngest ever champion

GCHQ wants internet providers to rewrite systems to block hackers

Researchers’ Belkin Home Automation Hacks Show IoT Risks

UK halts Facebook’s WhatsApp data dip

Data Cleanliness and patch verification

A Cybercrime Report Template

Smart Light bulb worm hops from lamp to lamp

As if blogging and making videos weren’t enough, I’ve wanted to stretch my creative legs for a while and dip a toe into the world of podcasting.

So I jumped at the opportunity to start a new podcast at AlienVault. The AlienVault Security Perspectives podcast is out, with the first episode featuring special guest Wendy Nather – who also happens to be one of my favourite people in the world.

I’d be interested in your feedback and opinion.

Click here to go to the podcast and download it on iTunes. 

Recently, I caught up with Priority One IT Support to provide advice to business owners on how they can protect their business from a security attack.

A glance at the media will show that attacks are not only on the rise, but the types of companies under attack are increasingly varied. Whereas previously only the largest companies and financial institutions came under attack, these days companies of all sizes and industries are targeted.


Protecting your business

From a fundamental perspective it’s almost impossible to prevent 100% of attacks, but you can reduce the impact they have by:

  1. Understanding your key data elements and focusing your security controls around them.
  2. Putting in place controls that can isolate and closely monitor those critical systems.
  3. Understanding where you may be vulnerable. This will vary depending on your business, e.g. leaving a window open on the ground floor is riskier than doing so 10 floors up.

Common pitfalls

The most common pitfall is a lack of user education and awareness. For example, if a member of staff receives an email informing them they have won the lottery, they should know to ignore it. The basics of user behaviour and education often let a business down.

The second, often overlooked issue is the lack of robust monitoring controls. Many companies only discover they have been hacked months later, once it makes the news.

What to do in the event of an attack

A business should have a plan in place before an attack takes place.

  1. Formulate a plan that includes steps to inform internal staff, stakeholders, partners, and customers.
  2. Know how to isolate systems to limit the damage and assess the impact.
  3. Have backups in place from which services can be resumed as quickly as possible.