Attitudes to credential sharing

This tweet by Member of Parliament Nadine Dorries was enough to significantly raise the blood pressure of half the infosec professionals in the world.

After getting a bit of ‘stick’, the MP tried to defuse the situation by claiming she was a mere backbench MP – an insignificant minion.

Other MPs jumped in to say that it’s a common occurrence and that people are blowing it up into a major issue.

Maybe five or ten years ago this wouldn’t have been an issue at all. But the world is very different today – attacks are very different, and chaining together a series of attacks from even a compromised “low-level” employee isn’t all that difficult. Especially when MPs make an attractive target for unfriendly foreign agencies.

Like most things in life, nothing is ever black and white. Password sharing does occur, despite there being technology solutions in place that facilitate sharing while preserving accountability. It happens in most companies. But that’s not quite what I take exception to here.

The attitude displayed by MPs is what is concerning. The casual brushing off, as if it is something that should be accepted.

It’s a bit like using a mobile phone while driving, or driving over the speed limit… or using a mobile phone while driving over the speed limit. Even though it puts lives at risk, most people have done it at some point. Completely eradicating such behaviour is impossible, but you wouldn’t accept the excuse of, “Well everybody else does it” especially if it came from a bus driver.

Similarly, society shouldn’t be willing to accept the risky behaviour displayed by people in government or other sensitive roles.

But maybe that is where infosec professionals can do a better job of educating the masses. Perhaps it’s only when risky behaviour is shunned at a societal level – like the dirty looks you get for not separating your green waste from general waste – that people’s attitudes will change.



Undermining security and weakening Android

I have Amazon Prime, I quite like their shows, and whenever I have some time to kill I’ll watch an episode or three.

A couple of weeks ago, I thought it would be a good idea to install the official Amazon Video app on my Android device, so that I could download episodes and watch them when travelling. I previously had it on my iPad, so I knew it worked well.

However, I wasn’t able to find the Amazon video app in the Google Play store. Perplexed, I went hunting, and quickly found that Amazon does indeed have an app for Android, only it isn’t on the official store.

Amazon helpfully has instructions on how to install the app on your Android phone or tablet from its own Amazon Appstore.



[Screenshot: Amazon’s instructions for installing its app outside Google Play]

For those of you playing along at home, you may have spotted the obvious flaw in this approach.

To install the app, Amazon is advising you to “Go to your device’s Settings, select Security or Applications (depending on device), and then check the Unknown Sources box.”
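As an aside, on older Android versions (before 8.0, which replaced this with a per-app “Install unknown apps” permission) the setting Amazon is asking users to flip is a single, device-wide toggle. A rough sketch of how to inspect it from a developer machine with adb and a connected device – the namespace varies by Android version, so treat this as illustrative:

```shell
# Query the device-wide "Unknown Sources" toggle over adb.
# On Android 4.2-7.x the setting lives in the "global" namespace;
# on some older builds it is under "secure" instead.
adb shell settings get global install_non_market_apps
# 1 = installs from unknown sources allowed, 0 = blocked
```

The point being: Amazon’s instructions flip that value to 1 for the whole device, not just for the Amazon Appstore.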

But there are others

Unfortunately, this isn’t isolated to Amazon. Ken Munro pointed out on Twitter that the National Lottery also asks you to download its app from a dark alley round the back.


[Screenshot: the National Lottery’s app download instructions]

Although, to its credit, the National Lottery does add a reminder: “Remember to change your security settings back to your preferred choice.”

Quentyn Taylor pointed out that Samsung does something similar.

So what’s the big deal?

The official Google Play store isn’t a completely safe haven. Malicious apps have bypassed the security checks and ended up in the store many times. Companies like Amazon, the National Lottery, or Samsung aren’t fly-by-night outfits that will deliberately push out insecure apps; so what’s the harm in downloading the app and switching the security setting back?

For users who aren’t technically savvy, their Android device’s ability to block installs from unknown sources is there to prevent them from accidentally downloading malicious software. – Strike one.

The security industry has spent a great deal of time and effort educating users about the dangers of downloading unknown files from untrusted sources, and this approach undermines a lot of those efforts. – Strike two.

Normalising such actions enables attackers to try similar tactics. After all, if companies like Amazon have legitimised the approach of turning off security settings and downloading apps from their own environments, it is something that any company could emulate. – Strike three.

The reality is that convenience trumps security most of the time. Users will intentionally or accidentally bypass security controls to get the app of their choosing, often leaving themselves vulnerable in the process. Which is why it’s important that manufacturers, providers, app stores, and everyone in between work together to help deliver a secure experience to users, instead of elements working against each other.


When culture eats awareness for breakfast

European startup CLTRe, founded by Kai Roer, has spent the last couple of years examining the security awareness and user behaviour problem through the lens of security culture.

Based on findings over the course of 2016, CLTRe has produced its first annual Security Culture report, co-written by Roer and Gregor Petric, Ph.D., an Associate Professor of Social Informatics and Chair of the Center for Methodology and Informatics at the Faculty of Social Sciences, University of Ljubljana (Slovenia).

Many existing security awareness reports measure and report on a few basic metrics – often based on the number of phishing emails users click on.

It is here that the CLTRe report differentiates itself, delving into statistics and metrics to provide a view that is probably the first of its kind. It considers not just behaviours, but adds insight into those behaviours based on gender, geographic location, age, duration of service, and acceptance of norms across seven dimensions.

The report has insightful nuggets of information scattered throughout, such as an examination of the cultural differences across various industries in Norway and Sweden.

[Screenshot: excerpt from the report comparing security culture across industries in Norway and Sweden]

The report explains at length why security culture metrics matter and the value they provide. It states that, much like technical controls, security culture must be measured in order to understand and track change.

For example, reporting the number of clicks in a phishing exercise is useful but has its limits. Those metrics do not reveal users’ motivations or drivers.

[Screenshot: excerpt from the CLTRe Security Culture report]


For a first effort, CLTRe has produced a great report with very detailed insights. It’s not light reading, and some segments feel quite academic in nature. That’s not a knock on the report; the rigour is needed to elevate the discussion to a higher level.

For next year’s report, I’d like to see the inclusion of case studies or quotes from companies that have been measuring their security culture and how they have used the information to improve it.

Check out the full report here (registration needed).

Let Kai and the CLTRe team know what you think: Click to tweet:

Great job @getcltre @kairoer on the human factor report. (via @J4vv4D)

Why did you write this report? @kairoer @getcltre (via @j4vv4d) 


The Growing Impact of Security Researchers

I’ve followed Scott Helme’s work for a while now and have been impressed with his approach. So I was interested to find out that he had teamed up with BBC Click and Prof Alan Woodward to comprehensively dismantle a vendor’s claim of total security. Scott has published the whole story on his blog, and the BBC Click episode is live.

This was a well-researched and executed piece, but let’s take a step back and look at the wider picture and what this means for vendor-research relations.

So, I felt it was a good time to grab some time with Scott to seek his opinions on some of the questions that came to mind.

One of the first things that strikes me about Scott is his measured and calm demeanour. He has the look of a chess master who is quietly confident, knowing he’s always seven moves ahead. The second thing I note is that I really can’t gauge how old he is. I think it’s one of the things that happens as you grow older: I find it increasingly difficult to tell whether someone is 20 or 35. They all just look “young” to me. So I look for clues such as the ages of children, year of graduation, or years’ experience to try and make an educated guess.

What is secure?

Not wanting to waste time with warm-up questions, I wanted to get to the heart of the matter. There is no benchmark or agreed-upon standard for when it’s appropriate to use the word secure, or to claim a product is secure. The fact of the matter is that, as far as technology goes, nothing is ever truly secure. So does that mean no-one should ever use the word secure at all?

On the one hand, you want to avoid going down the route of legislation, or imposing stringent criteria on an industry that is constantly in a state of flux. On the other hand, as Scott said, “We don’t see many car manufacturers rocking up with the safest car in the world that has no airbags or brakes.”

Which is a fair comment, but it is a lot easier for a lay person to see and understand security in physical products than in software.

The manufacturer’s dilemma
So what is a security startup to do? Nearly every security product has had vulnerabilities that needed patching – not even the largest vendors are free of bugs.

Open source products, where the code is available for all to audit, are no exception, with products such as OpenSSL having had long-standing vulnerabilities. Given this landscape, what’s the best way to approach it?

Scott gives a half smile, indicating it’s something he’s probably been asked many times. He told me he believes that the more scrutiny a device or product gets, the more likely you are to become aware of issues. “Researchers and bug bounties are your friend. They can’t replace traditional penetration testing and other standards or compliance requirements, but they sure add a lot of extra coverage, often for not a lot of cash.”

It’s somewhat a balancing act. After all, security costs time and money in order to implement properly. Far too many startups are caught up in trying to prove that their product serves a viable market and that there is demand before pouring resources into security.

But is relying on researchers to find vulnerabilities a scalable model? There are only so many Scotts in the world, and researchers will be drawn to particular products out of personal curiosity, or where their expertise lies. Many products simply slip beneath the radar. The number of “secure” products being released outpaces the time and effort needed to properly validate their capabilities.

Scott agrees with the sentiment, and states that it ties into the issue of lack of standards. “Right now there is no regulation or standards, so researchers are all we have. Even basic standards would start to raise that minimum bar and begin the process of filtering out the crud. I do it because I feel I can make a difference, I enjoy it and it helps me keep my skills sharp.”

With great power
With time running out, I wanted to go straight for the jugular with my last question.

While one can commend the good work Scott and others do, with this recent release we’ve effectively seen a company torn down. Surely that kind of approach risks scaring other startups off releasing any product at all?

If I were considering launching a secure product, I’d be scared that you, or other researchers, could shut my business down. That would leave me with the choice of either not producing anything at all, or trying to bribe you up front. And while you may be above bribery, I can’t say that for every researcher out there.

Despite my Jeremy-Paxman style attempt at getting a rise out of Scott, he remained patient with me.

“I certainly hope it wouldn’t scare them from releasing a product, but perhaps consider engaging a reputable security firm to analyse the product prior to release. I feel that’s the minimum any responsible company should be doing anyway. They can also engage the community with a bounty program to get private feedback prior to release too. If someone plans to bribe you, I guess you can’t do much about that, except beat them to the punch and fix the issues first. The internet is full of bad people :(”

The internet is indeed full of bad people, you cannot argue with that.

Make your vote count

The prestigious European Security Blogger Awards are upon us. For those unfamiliar with them, it’s an award ceremony for bloggers who specialise in security and reside in Europe – at least that’s what I hope it means.

I am fortunate enough to have made it into the finals in five of the nine categories – which in itself feels like a great achievement considering how many super-awesome and cool security bloggers there are scattered around Europe. The categories I’m in are:

Best Security Video Blog
Most Entertaining Blog
Most Educational Blog
Best EU Security Tweeter
Grand Prix Prize for Best Overall Security Blog

Anyway, it would be a shame to let your vote go to waste so head over to  and make your vote count.


How to Fake Monitoring

You’re the new guy on the security ops team; they’re giving you training and have put you on a very crucial and important job… monitoring. You’ll be told how important the job is and how essential it is to do it correctly to ensure the ongoing safety of the company. But you notice that nobody really shows any interest in doing it. There are two reasons for this. Firstly, it’s usually a job that people don’t really understand how to do; but secondly, and more crucially, even when they do understand how it works, it makes watching grass grow look like an extreme sport in comparison.
Having been subjected to monitoring of all kinds early in my career, I developed a set of techniques which can be used to give the impression you’re a monitoring guru:
1. The Blink and Chin Rub:
Blink frequently and rub your chin. This tried-and-tested technique gives the impression that you’re deep in thought and analyzing each packet individually. Having a couple of crushed cans of Red Bull or Coke nearby will give the impression you’re a man on the edge, and very few people will interrupt or ignore you. Every now and then, let off a low-level grunt.
2. Look for Key Values and Strings
A quick find for key strings and values will save you trawling through gigs’ worth of logs. Identify the key ones first and type them up separately. That way, if anyone looks at what you’re doing, they will be impressed by your apparent ability to detect patterns. At the end of the day, simply delete it all and sound frustrated while muttering “false positive”; bang the table for dramatic effect before grabbing your coat and heading home.
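For the aspiring monitoring guru, the “quick find” step might look something like this – the log contents and search terms here are entirely made up for illustration:

```shell
# Fabricate a throwaway sample log (illustrative contents only)
printf '%s\n' \
  'sshd: Accepted publickey for alice' \
  'sshd: Failed password for root' \
  'sshd: Invalid user admin' > /tmp/sample.log

# Count lines matching the usual suspicious-sounding strings,
# case-insensitively (-i), using an extended regex (-E)
grep -ciE 'failed|denied|invalid' /tmp/sample.log
# prints 2
```

Run it loudly, frown at the number, then declare it a false positive.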
3. Be Vague When Questioned
When your boss asks for your thoughts on some anomalous network traffic, you need to tread carefully. Deliver a vague opinion, adding that you’ve been analyzing a list of key values and strings to get to the root cause (see 2). For good measure, ask a question that directs the conversation away from your view. Something like, “What made you think of that?” would be perfect. It gives the boss an opportunity to wax lyrical about how they arrived at their conclusion.
4. Blame A.P.T.
Should the unthinkable happen on your watch, blame it on an A.P.T. – a state-sponsored and highly sophisticated attack that has evaded all your detection controls. Turn it around on your boss and ask how you’re supposed to keep track of everything with such outdated hardware and software while the enemy has access to unlimited funds. If you’re lucky, you could end up with your own personal SOC being commissioned.
5. Harass an ISP
During a quiet patch, people will begin to get suspicious. So to shake things up, send a passive-aggressive email to a random ISP every few weeks threatening legal action unless they block the state-sponsored APTers constantly bombarding your network. When a complaint is filed with your CEO, simply point to the previous breach and say you suspect the ISP has been compromised. Be careful how you balance this, because you don’t want to end up looking like a crazed conspiracy theorist. Tell them you’ll withdraw the legal threat, but will be “keeping a close eye on them.” No-one will ever suspect you’ve got no idea how the IDS logs work.

Infosec Friends

For all the talk about it being an echo chamber and the like, I’ve met a ton of people in security whom I otherwise wouldn’t have. As I was pondering this over breakfast one morning, I came to the conclusion that I group my infosec friends into different categories. They probably look a bit like this:

Level 0 – These are your closest security friends. They are the guys who you look out for and they look out for you. If you see a bug in their code, you’ll sort it out for them. When they call you up at 3am because they need help with a security strategy presentation, you’ll stay up with them all night working on it. Whenever you are stuck for something, you’ll turn to them for help. They’re your best teacher and most annoying student rolled into one. You know how many kids each other have, their ages and names. You can never get rid of them and they can’t get rid of you. The amazing thing is that you may never have met some of these people in real life.

Level 1 – These are best friends. You hang out with them and connect with them on every social media channel. You bond with them on a personal level and hear out their problems. When you need a LinkedIn reference or someone to endorse your CISSP, you’ll go to them. If there’s a job going in their team, they’ll do what they can to get you on board. They are there for you when you are stuck pen testing a website, but won’t do much beyond getting pizza and running Nmap.

Level 2 – These are more friends of friends. You’ll meet them at conferences and local chapter meet-ups. Sometimes they may move up the ranks and get promoted to a level 1 friend or maybe not. They’ll retweet something witty you say and will like your blog posts. They’ll meet you for lunch but never offer to go any more than halves with you on the bill.

Level 3 – These are those security people you have to be friends with. Normally these are work colleagues. You learn nothing from them and often put up with their moaning and spreading of office gossip. Every morning whilst going into work you pray they will be sick and you don’t have to see them.

Level 4 – Anyone who follows you on social media like Twitter or Facebook and doesn’t fall into any of the other categories. They are the trolls who follow you and make smartass comments whenever they can. They contribute nothing positive to security, yet linger around like a bad smell. Secretly everyone hopes they fall down an open elevator shaft onto some bullets.