Blog Post

Around 2006 / 2007 I began blogging and tried to get into video blogging. Although I’d been working in information security for 7 years up to that point, I wasn’t well-connected in terms of what conferences ran, who the influencers were, or who the editors of any of the numerous security magazines or websites were.

I wanted to go to Infosec Europe and interview a few people on camera, but didn’t know the best way to go about it. I sent out a few dozen emails but didn’t hear anything back.

Eventually I found a contact for the PR agency in charge of Infosec Europe, a company named Eskenzi PR. Why would they respond to someone like me, when much smaller websites had refused to even acknowledge my emails? I didn’t think about it too much and sent off an email, thinking that I’d have to make do another way.

Much to my surprise, I got a call back from Eskenzi’s Neil Stinchcombe. I’d later learn that Neil founded the company along with his wife, Yvonne Eskenzi.

Neil took the time to hear out my plan and proposal, and provided some valuable tips as well as pointing out some of the flaws in my approach. He offered me a press pass to Infosec and helped set up all the video meetings I requested, and some more for good measure.

Now, maybe I didn’t end up getting noticed by the BBC or landing my own TV series. But I’ll always be grateful for the help Eskenzi provided, when it would have been easy for them to ignore me with no consequence.

Over the years, my relationship with Eskenzi grew. When I moved to 451 Research as an analyst, they would be one of the agencies that would pitch briefings and stories from their security clients. A job they would consistently do to a high standard.

When I joined AlienVault three years ago, I was pleased to see that Eskenzi PR was our UK agency.

As I reflect, Eskenzi PR is the longest-standing professional relationship I have; one where I like to think of Yvonne, Neil, and many of their people as colleagues and friends.

So, I was particularly happy to hear that Eskenzi has been honoured by receiving the Queen’s Award for Enterprise 2018, recognising its achievements in International Trade.

I can’t think of a company more deserving, and I’m glad the Queen agrees.

Individuals and companies of all sizes often say they ‘think’ they’ve been hacked or breached, or have suffered some form of unwanted event.

There is usually a lack of conviction in this statement, and in hindsight it’s not easy to validate.

Sure, one could use a service like haveibeenpwned.com to retrospectively check, or wait for a service provider to inform them that their data has been compromised – but there are better ways, if one is more proactive in their approach.

Perhaps one of the best features of Gmail is the ability to add a ‘+something’ to your email address, which lets you identify which providers have either been breached or have shared your email address.

For example, if my email is [email protected], then when signing up for BSidesLondon I’ll provide my email address as [email protected]
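To make the idea concrete, here’s a minimal Python sketch. The helper names `tag_address` and `identify_leak` are made up for this example, and `me@example.com` is a placeholder address:

```python
# Sketch of the Gmail plus-addressing trick: tag each signup with the
# service name, then map any leaked address back to the service.

def tag_address(base: str, service: str) -> str:
    """Return a plus-addressed variant of a Gmail address for one service."""
    local, _, domain = base.partition("@")
    return f"{local}+{service}@{domain}"

def identify_leak(leaked: str) -> str:
    """Recover the service tag from a leaked plus-address, if present."""
    local = leaked.partition("@")[0]
    return local.partition("+")[2] or "untagged"

signup = tag_address("me@example.com", "bsideslondon")
print(signup)                 # me+bsideslondon@example.com
print(identify_leak(signup))  # bsideslondon
```

If `me+bsideslondon@example.com` later starts receiving unrelated spam, you know exactly which provider leaked or shared it.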

It’s also worth getting an ad blocker (note that not all ad blockers are created equal – look for a good one that won’t sell you out in other ways). Basically, the fewer scripts allowed to run in your browser, the less tracking there is, and the less opportunity there is for anyone to inject malicious content.

For those with a bit more patience to validate every connection, get something like Little Snitch or Radio Silence (or similar – I’m not endorsing these products): anything that can detect the outbound connections that applications and software on your machine are making. It gives you the ability to control and decide which apps can communicate externally and send who knows what data.

Finally, one of my favourite techniques is to use honey tokens. The free ones available at Canarytokens are super easy to use and set up.

Another way to set up your own honey tokens is to put false customer records into your CRM. Set this customer’s email to an address that you control. That way, if you ever get email sent to that particular address, you know that your customer records have been compromised – probably by your most recently departed salesperson.
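A rough sketch of the CRM honey-token idea follows. The `make_honey_customer` helper, the field names, and the domain are all made up for illustration; the unique token just makes each planted record individually traceable:

```python
import uuid

def make_honey_customer(controlled_domain: str) -> dict:
    """Create a fake CRM record whose email routes to an address you control.
    Any mail arriving at this address signals the records were exfiltrated."""
    token = uuid.uuid4().hex[:12]  # unique per planted record
    return {
        "name": "Jane Canary",            # plausible but fictitious
        "company": "Acme Widgets Ltd",
        "email": f"honey-{token}@{controlled_domain}",
        "note": "do-not-contact",         # keeps real staff away from the trap
    }

record = make_honey_customer("alerts.example.com")
print(record["email"])  # e.g. honey-3f9a1c0d2b4e@alerts.example.com
```

Planting a few of these, each with a distinct token, also tells you which copy of the data walked out the door.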

While there are many other things one can do to enable quick detection of compromises, I find these some of the easiest and quickest to set up and get running with.

Having an early warning system is good, but it’s only as good as the response. Therefore you should have a plan of action as to what to do if you are notified that someone has accessed your files or compromised your accounts. Mainly this would include changing your passwords, notifying relevant parties, and putting your guard up. But it will depend on what is triggered, by who, and what your personal risk tolerance is.

For small businesses, and even larger corporations, these techniques can still work – however, there are robust enterprise-grade offerings available which are more suited to the task (maybe the Canary hardware device is good for you, or AlienVault USM Anywhere). Still, I wouldn’t be against having a few honey tokens scattered around a corporate network, just to see who may be poking their nose where it doesn’t belong.

Fuelled by a Twitter conversation, both Adrian Sanabria and Anton Chuvakin posted articles here and here, sharing some good tips on what makes a good briefing and common pitfalls to avoid.

As a former (recovering?) analyst, I thought it only right that I jump on the bandwagon and share my thoughts on the topic.

What is a vendor briefing?

If you’re not familiar with vendor briefings, it’s basically where a vendor speaks to an analyst and explains what their product does, how the company is structured, its financials, and so forth. Depending on how the analyst firm operates, the analyst will then either write up a piece on the company, reference it as part of a broader piece of research, or maintain the details in their database of companies they are tracking.

Analyst tips

Both Anton and Adrian were very thorough in their advice to vendors on how to deliver a good briefing. But I’d like to shift focus and point out a few things analysts could be mindful of during such briefings.

1. You don’t know everything. Yes, you speak to very smart people every day and your reports are widely read. But it’s very easy to get on a high horse and think you are all-seeing and all-knowing. If that were the case, you’d have raised millions in funding and solved all technology problems by now.

2. Let the vendor make their point. You may not agree with them, but let them present their perspective and give the courtesy of hearing them out.

3. A briefing isn’t a fight – it’s not an argument that needs to be “won”. If putting others down helps you sleep better at night, that’s cool. But chill out a little – you’re meant to be impartial and balanced.

4. Set expectations – let the vendor know up front what you are hoping to get out of the call. Be open about whether you’re more interested in the product, or the company strategy, or the numbers. Vendors aren’t mind-readers.

5. One of the most useful phrases I learnt as an analyst was, “Can you help me to understand…” It’s a simple and effective line that can mean many things, such as “I don’t believe you”, “too many buzzwords”, or “maybe you need to think this through”. Whatever it may mean, it doesn’t come across as confrontational – it puts you on the same page, trying to work through a problem together.

6. Be organised – be on time, have your notes in order, don’t just blunder through the briefing. Yes, you’re a busy analyst that has to do many of these a week – but a little organisation can go a long way.

7. Share your plans – be clear about what the vendor can expect. Do you plan on covering their company? Will you include them in a larger piece of research? How frequently would you like them to keep in touch? All this can go a long way in ensuring a long and meaningful relationship.

The numbers don’t lie

If I were to add a tip for vendors to Adrian and Anton’s respective blogs, it would be this: while an analyst may disagree on the effectiveness of your product, or its value, the numbers don’t lie. Analysts have a lot of numbers – they spend a lot of time sizing markets and analysing competitors’ growth projections and targets – and most will be able to analyse your numbers, or infer them, very quickly. So please don’t try to impress by claiming huge numbers or ridiculous growth. Don’t claim your TAM is your SAM or SOM.

I’ll digress and give an example of what I mean.

Say you are a producer of bottled water.

Every human needs to drink water, so the total available market (TAM) is around 7 billion.

But you’re restricted by geographical reach. Say you can only ship your bottled water to the whole of England; then that is your serviceable available market (SAM).

However, there are other competitors in England, and there are many people who won’t buy bottled water – maybe they drink tap water, boil their own, or have their own water filters. So, in reality, you’re looking at a much smaller serviceable obtainable market (SOM).

Maybe you’re a vendor that secures IoT devices. Don’t start your pitch by saying that your market is 22 billion devices (or whatever the estimated number of IoT devices is), because it’s not. That may be the TAM, but your SOM will be much smaller. So think about how you will convince the analyst your product has the right strategy to get there.
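The bottled-water funnel above can be sketched in a few lines of Python. All of the shares below are made-up assumptions for illustration; the point is only that each step shrinks the market:

```python
# Illustrative TAM -> SAM -> SOM funnel for the bottled-water example.

TAM = 7_000_000_000          # every human drinks water
SAM = 56_000_000             # rough population of England you can ship to
BUYER_SHARE = 0.30           # fraction who buy bottled water at all (assumed)
ACHIEVABLE_SHARE = 0.05      # share you could realistically win (assumed)

SOM = round(SAM * BUYER_SHARE * ACHIEVABLE_SHARE)
print(f"TAM={TAM:,}  SAM={SAM:,}  SOM={SOM:,}")
# TAM=7,000,000,000  SAM=56,000,000  SOM=840,000
```

Seven billion at the top, well under a million at the bottom – which is why leading a pitch with the TAM invites scepticism.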

In my opinion, recklessly throwing around numbers is worse than buzzword bingo – you could end up in the vapourware category of my vendor hierarchy pyramid.

[Image: vendor hierarchy pyramid]

Market sizing

Seeing as I’ve kicked the hornet’s nest about numbers, I guess it’s a good time to talk about market sizing. I see a lot of weird and wonderful numbers thrown about, and sometimes I’m left scratching my rapidly-balding head as to how markets are sized up. Many times I’ll see claims that the {small infosec segment} industry will be worth {huge} billions by 2020, according to {analyst firm}.

I have typically been drawn more towards the bottom-up approach to market sizing; it can be more time-consuming, but it gives a saner answer.

It’s rather simple: you take the collective revenue of the current vendors in a given market segment to get today’s market size. If you know the rate at which each of the vendors is growing, or is predicted to grow, you can estimate how large the market will be in the future.

For example, if you take a list of security awareness providers, calculate their turnover (I’ll save that for another post), and add it all together, maybe the answer will be $200m (as an example). So that’s our market size.

On average, all the companies may be growing sales at 25% every year. Which means that, barring any major disruptions, in two years’ time the market size would grow to roughly $312m ($200m × 1.25²).

So, if a new security awareness vendor comes onto the scene, they shouldn’t claim that the market is worth $5bn because every employee in every company in the world needs training, or that they plan on growing to $500m in revenue in five years – an analyst will be justified in rolling their eyes and being sceptical.
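The bottom-up projection described above boils down to compounding today’s combined vendor revenue forward. A quick sketch, using the illustrative $200m and 25% figures and assuming the growth compounds:

```python
def project_market(current_size_m: float, growth_rate: float, years: int) -> float:
    """Bottom-up projection: today's combined vendor revenue, in $m,
    compounded forward at the average annual growth rate."""
    return current_size_m * (1 + growth_rate) ** years

# $200m market growing at 25% a year, two years out:
print(round(project_market(200.0, 0.25, 2), 1))  # 312.5, i.e. ~$312m
```

A new entrant claiming they’ll take $500m of a roughly $312m market has some explaining to do.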


This tweet by Member of Parliament Nadine Dorries was enough to significantly raise the blood pressure of half the infosec professionals in the world.

After getting a bit of ‘stick’, the MP tried to defuse the situation by claiming she was a mere backbench MP – an insignificant minion.

Some other MPs jumped in to say that it’s a common occurrence and that people are blowing it up into a major issue.

Maybe five or ten years ago this wouldn’t have been an issue at all. But the world is very different today – attacks are very different and chaining together a series of attacks from even a compromised “low-level” employee isn’t all that difficult. Especially where MPs can make an attractive target to foreign, unfriendly agencies.

Like most things in life, nothing is ever black and white. Password sharing does occur, despite there being technology solutions in place to facilitate sharing in a manner whereby accountability remains. It happens in most companies. But that’s not quite what I take exception to here.

The attitude displayed by MPs is what is concerning. The casual brushing off, as if it is something that should be accepted.

It’s a bit like using a mobile phone while driving, or driving over the speed limit… or using a mobile phone while driving over the speed limit. Even though it puts lives at risk, most people have done it at some point. Completely eradicating such behaviour is impossible, but you wouldn’t accept the excuse of, “Well everybody else does it” especially if it came from a bus driver.

Similarly, society shouldn’t be willing to accept the risky behaviour displayed by people in government or other sensitive roles.

But maybe that is where infosec professionals can do a better job of educating the masses. Perhaps only when risky behaviour is shunned at a societal level – like the dirty looks you get for not separating your green waste from general waste – will people’s attitudes change.

 

I have Amazon Prime, I quite like their shows, and whenever I have some time to kill I’ll watch an episode or 3.

A couple of weeks ago, I thought it would be a good idea to install the official Amazon Video app on my Android device, so that I could download episodes and watch them when travelling. I previously had it on my iPad, so I knew it worked well.

However, I wasn’t able to find the Amazon Video app in the Google Play Store. Perplexed, I went hunting, and quickly found that Amazon does indeed have an app for Android – only it isn’t on the official store.

Amazon helpfully has instructions on how to install the app on your Android phone or tablet from its own Amazon Appstore.


[Image: Amazon’s instructions for installing its app from the Amazon Appstore]

For those of you playing along at home, you may have spotted the obvious flaw in this approach.

To install the app, Amazon is advising you to “Go to your device’s Settings, select Security or Applications (depending on device), and then check the Unknown Sources box.”

But there are others

Unfortunately, this isn’t isolated to Amazon. Ken Munro pointed out on Twitter that the National Lottery also asks you to download its app from a dark alley in the back.

 

[Image: National Lottery app download instructions]

Although, to its credit, the National Lottery does mention to, “Remember to change your security settings back to your preferred choice.”

Quentyn Taylor pointed out that Samsung does something similar.

So what’s the big deal?

The official Google Play Store isn’t a complete safe haven – malicious apps have bypassed the security checks and ended up in the store many times. But companies like Amazon, the National Lottery, or Samsung aren’t fly-by-night companies that will deliberately push out insecure apps; so what’s the harm in downloading the app and switching security back?

For most users who aren’t technically savvy, the ability of their Android device to block downloads from unknown sources is there to prevent them from accidentally downloading malicious software. – Strike one.

The security industry has spent a great deal of time and effort educating users on the dangers of downloading unknown files from untrusted sources, and this approach undermines a lot of those efforts. – Strike two.

Normalising such actions enables attackers to try similar tactics. After all, if companies like Amazon have legitimised the approach of turning off security settings and downloading apps from their own environments, it is something that any company could emulate. – Strike three.

The reality is that convenience trumps security most of the time. Users will intentionally or accidentally bypass security controls to get the app of their choosing, often leaving themselves vulnerable in the process. Which is why it’s important that manufacturers, providers, app stores, and everyone in between work together to help deliver a secure experience to users, instead of elements working against each other.


Whenever a calamity befalls us, it’s only natural for people to try to rationalise it and identify the problem.

That is exactly what is happening with the WannaCry ransomware outbreak that affected the UK’s NHS, and other organisations in over 100 countries. People are discussing what should have been done to prevent it.

On one hand, there’s an ongoing debate about responsible disclosure practices. Should the NSA have “sat on” the vulnerabilities for so long? Because when the Shadow Brokers released the details, it left only a small window for enterprises to upgrade their systems.

On the other hand, there are several so-called “simple” steps the NHS or other similar organisations could have taken to protect themselves, these would include:

  1. Upgrading systems
  2. Patching systems
  3. Maintaining support contracts for out-of-date operating systems
  4. Architecting infrastructure to be more secure
  5. Acquiring and implementing additional security tools

The reality is that while any of these defensive measures could have prevented or minimised the attack, none of these are easy for many enterprises to implement.

… Read the rest of the post here

European startup CLTRe, founded by Kai Roer, has spent the last couple of years examining the security awareness and user behaviour problem through the lens of security culture.

Based on findings over the course of 2016, CLTRe has produced its first annual Security Culture report, co-written by Roer and Gregor Petric, Ph.D., an Associate Professor of Social Informatics and Chair of the Center for Methodology and Informatics at the Faculty of Social Sciences, University of Ljubljana (Slovenia).

Many existing security awareness reports typically measure and report on a few basic metrics – often based around the number of phishing emails users click on.

It is here that the CLTRe report differentiates itself, by delving into statistics and metrics to provide a view that is probably the first of its kind. It takes into consideration not just behaviours, but adds insights based on gender, geographic location, age, duration of service, and acceptance of norms across seven dimensions.

The report has insightful nuggets of information scattered throughout, such as an examination of the cultural differences across various industries in Norway and Sweden.

[Image: chart comparing security culture across industries in Norway and Sweden]

The report explains at length why security culture metrics matter and the value they provide. It states that similar to technical controls, security culture must be measured in order to understand and measure change.

For example, reporting the number of clicks on a phishing exercise is useful but has its limits. Those metrics do not provide the motivations or drivers for the users.


Thoughts

For its first outing, CLTRe has produced a great report with very detailed insights. It’s not light reading, and some segments feel quite academic in nature. That’s not a knock on the report – the rigour is needed to elevate the discussion to the level required.

For next year’s report, I’d like to see the inclusion of case studies or quotes from companies that have been measuring their security culture, and how they have used the information to improve it.

Check out the full report here (registration needed).


I’ve followed Scott Helme’s work for a while now and have been impressed with his approach. So I was interested to learn that he had teamed up with BBC Click and Prof Alan Woodward to comprehensively dismantle a vendor’s claim of total security. Scott has published the whole story on his blog, and the BBC Click episode is live.

This was a well-researched and executed piece, but let’s take a step back and look at the wider picture and what this means for vendor-research relations.

So, I felt it was a good time to grab some time with Scott to seek his opinions on some of the questions that came to mind.

One of the first things that strikes me about Scott is his measured and calm demeanour. He has the look of a chess master who is quietly confident, knowing that he’s always 7 moves ahead. The second thing I note is that I really can’t gauge how old he is. I think it’s one of the things that happens as you grow older – I find it increasingly difficult to differentiate between someone who is 20 or 35. They all just look “young” to me. So I look for clues such as ages of children, year of graduation, or years of experience to try to make an educated guess.

What is secure?

Not wanting to waste time with warm-up questions, I wanted to get to the heart of the matter. There is no benchmark or agreed standard for when it’s appropriate to use the word secure, or to claim a product is secure. The fact of the matter is that, as far as technology goes, nothing is ever truly secure. So does that mean no one should ever use the word at all?

On one hand, one wants to avoid going down the route of legislation, or having stringent criteria in an industry that is constantly in a state of flux. On the other hand, as Scott said, “We don’t see many car manufacturers rocking up with the safest car in the world that has no airbags or brakes.”

Which is a fair comment – but it is a lot easier for a layperson to see and understand security in physical products than in software.

The manufacturer’s dilemma
So what is a security startup to do? Nearly every security product has had vulnerabilities that needed patching – not even the largest of vendors are free of bugs.

Open source products, where the code is available for all to audit, are no exception, with products such as OpenSSL having had long-standing vulnerabilities. Given the landscape, what’s the best way to approach this?

Scott gives a half smile, indicating that it’s something he may have been asked many times. He told me that he believes that the more scrutiny a device or product has, the more likely you are to become aware of issues. “Researchers and bug bounties are your friend. They can’t replace traditional penetration testing and other standards or compliance requirements, but they sure add a lot of extra coverage, often for not a lot of cash.”

It’s somewhat a balancing act. After all, security costs time and money in order to implement properly. Far too many startups are caught up in trying to prove that their product serves a viable market and that there is demand before pouring resources into security.

Scalability
But is relying on researchers to find vulnerabilities a scalable model? There are only so many Scotts in the world, and researchers will be drawn to particular products out of personal curiosity, or where their expertise lies. So many products simply slip beneath the radar. The number of “secure” products being released outpaces the time and effort needed to properly validate their capabilities.

Scott agrees with the sentiment, and states that it ties into the issue of lack of standards. “Right now there is no regulation or standards, so researchers are all we have. Even basic standards would start to raise that minimum bar and begin the process of filtering out the crud. I do it because I feel I can make a difference, I enjoy it and it helps me keep my skills sharp.”

With great power
With time running out, I wanted to go straight for the jugular with my last question.

While one can commend the good work Scott and others do, with the recent release we’ve effectively seen a company torn down. Surely that kind of approach can have a negative impact, scaring other startups away from releasing any product at all?

If I were considering starting up a secure product, I’d be scared that you, or other researchers, could shut my business down. That would leave me with the choice of either not producing anything at all, or trying to bribe you up front. While you may be above bribery, one can’t say that for every researcher out there.

Despite my Jeremy-Paxman style attempt at getting a rise out of Scott, he remained patient with me.

“I certainly hope it wouldn’t scare them from releasing a product, but perhaps consider engaging a reputable security firm to analyse the product prior to release. I feel that’s the minimum any responsible company should be doing anyway. They can also engage the community with a bounty program to get private feedback prior to release too. If someone plans to bribe you, I guess you can’t do much about that, except beat them to the punch and fix the issues first. The internet is full of bad people :(”

The internet is indeed full of bad people, you cannot argue with that.

In between all the politics and memes on Twitter, you sometimes come across a genuinely interesting security conversation.

My friend Quentyn Taylor, who happens to be a CISO, posted this tweet, which generated a lot of great commentary.

I recommend having a read through the comments; there are some very valid points being made both in support of and against this viewpoint.

Personally, having worked in many banks which had huge legacy estates running critical banking applications, I agree with the statement. It’s easy to sit on the sidelines and say, “just upgrade” but it’s never really that simple. Security is often only a small consideration in the big scheme of things.

It’s why risk management is so important, it helps clarify what the tradeoffs are. A legacy system may be vulnerable, and that risk may equate to a dollar value. But the downtime, upgrade costs, and impact to associated systems of an upgrade may outweigh that considerably.

So many times it comes down to having a proper inventory, classifying data, and monitoring legacy systems with a closer eye.

However, this isn’t the whole reality.

It’s a reality based on my personal experience, which likely mirrors much of Quentyn’s. And that’s something many often forget – just because something works in one enterprise, or type of business, it doesn’t necessarily mean it will work in another.

Which is why I feel that when discussing security topics, it’s worthwhile to be specific and add context. It’s something I’ve been guilty of omitting in the past, and I’d like to change that.

For example, take these two statements:

Scanning QR codes is not popular.

Vs.

Scanning QR codes is not popular in the West.

That is because in some countries like China, QR codes are everywhere. The location adds that important bit of context by which the statement turns from a generality to something more specific.

The logic can be applied to many of the broad security statements that are often made. So when someone makes a statement such as “there’s a shortage of infosec talent”, the questions that come to mind are:

Which geographies does this apply to?

Is there a lack of red teamers, blue teamers, or risk managers?

Is it a lack of people with over 5 years’ experience, or are they too expensive?

If we stick to our own realities and speak only in general terms, we will remain adamant that our point of view is correct and never reach a consensus. And it’s probably about time that we started having better conversations.

I’ve been reading up on GDPR lately and frequently use mind maps to organise my thoughts.

So, I thought I’d share the interactive mind map I created for GDPR, with its 11 chapters, 99 articles, and 173 recitals. Let me know if I’ve missed anything or should amend for clarity.

A lot went down – some stories in the video and a ton of interesting links below. Enjoy!

 

Stories in Video

Tesco Bank Hacked

Adult Friend Finder hack

Facebook buying stolen passwords

IP Bill set to become law

Other interesting stories  

Cyber Security Challenge UK crowns youngest ever champion

GCHQ wants internet providers to rewrite systems to block hackers

Researchers’ Belkin Home Automation Hacks Show IoT Risks

UK halts Facebook’s WhatsApp data dip

Data Cleanliness and patch verification

A Cybercrime Report Template

Smart Light bulb worm hops from lamp to lamp

As if blogging and making videos weren’t enough, I’ve wanted to stretch my creative legs for a while and dip a toe into the world of podcasting.

So, I jumped at the opportunity when there was a chance to start a new podcast at AlienVault. The AlienVault Security Perspectives podcast is out, with the first episode featuring special guest Wendy Nather – who also happens to be one of my favourite people in the world.

I’d be interested in your feedback and opinion.

Click here to go to the podcast and download it on iTunes. 

Recently, I caught up with Priority One IT Support to provide advice to business owners on how they can protect their business from a security attack.

A glance at the media will show that attacks are not only on the rise, but the types of companies under attack are also varied. Whereas previously only the largest of companies and financial institutions came under attack, these days, companies of all sizes and industries are targeted.

 

Protecting your business

From a fundamental perspective it’s almost impossible to prevent 100% of all attacks, but you can reduce the impact they have by:

  1. Understanding your key data elements and focusing your security controls around them.
  2. Putting in place controls that can isolate and closely monitor those critical systems.
  3. Understanding where you may be vulnerable. This will vary depending on your business – e.g. if you are on the ground floor, it is riskier to leave a window open than it is for someone 10 floors up.

Common pitfalls

The most common pitfall is a lack of user education and awareness. For example, if a member of staff receives an email informing them they have won the lottery, they should know to ignore it. The basics of user behaviour and education often let a business down.

The second, often overlooked issue is the lack of robust monitoring controls. Many companies only discover they have been hacked many months later, once it makes the news.

What to do in the event of an attack

A business should have a plan in place before an attack takes place.

  1. Formulate a plan that includes steps to inform internal staff, stakeholders, partners, and customers.
  2. Know how to isolate systems to limit the damage and assess the impact.
  3. Have backups in place from which services can be resumed as quickly as possible.

 

Things I Hearted has probably been one of the most regular series of posts I’ve done in recent times. At the same time, I was doing a weekly roundup over at my AlienVault blog. So, in the interest of saving time, energy, and preserving my youthful good looks, I decided not only to combine both into one weekly roundup – but also to add a video element to it.

It ends up being all the same links you love – just a new home and a new format. I’ll still be listing all the links and stories I found interesting during the week from the world of security and beyond, but this time with added video commentary.

Let me know what you think of the newish format.

For the week ending 25th September 2016

 

On one hand, vendors want users to patch their systems and keep them secure. On the other hand, actions like this cause people not to want to apply official updates.

 

North Korea just accidentally turned on global zone transfers for its top-level domain – archive of the data here.

 

My good friend James McQuiggan attended (ISC)² Congress, where he not only MC’d the leadership awards, but also won the President’s Award for a volunteer who has contributed to advancing the security profession. He wrote a nice writeup of the event.

 

The war Microsoft should have won.

 

Over 60k vulnerabilities went unassigned by MITRE’s CVE project in 2015. Good research on the issues with CVE and what needs to be fixed.

 

Building Spring Cloud Microservices That Strangle Legacy Systems – a good post on legacy systems, handling data, etc. Worth bookmarking this one.

 

Well-written piece on how terrorists use encryption.

 

2016’s best WiFi hacking and defending Android applications.