Blog Post


Undermining security and weakening Android

I have Amazon Prime, I quite like their shows, and whenever I have some time to kill I’ll watch an episode or 3.

A couple of weeks ago, I thought it would be a good idea to install the official Amazon video app on my Android device, so that I could download episodes and watch them when travelling. I'd previously had it on my iPad, so I knew it worked well.

However, I wasn’t able to find the Amazon video app in the Google Play store. Perplexed, I went hunting, and quickly found that Amazon does indeed have an app for Android, only it isn’t on the official store.

Amazon helpfully has instructions on how to install the app on your Android phone or tablet from its own Amazon Appstore.

 

 


For those of you playing along at home, you may have spotted the obvious flaw in this approach.

To install the app, Amazon is advising you to “Go to your device’s Settings, select Security or Applications (depending on device), and then check the Unknown Sources box.”

But there are others

Unfortunately, this isn’t isolated to Amazon. Ken Munro pointed out on Twitter that the National Lottery also asks you to download its app from a dark alley in the back.

 


Although, to its credit, the National Lottery does remind users to, “Remember to change your security settings back to your preferred choice.”

Quentyn Taylor pointed out that Samsung does something similar.

So what’s the big deal?

The official Google Play store isn’t a complete safe haven either: malicious apps have bypassed its security checks and ended up in the store many times. And companies like Amazon, the National Lottery, or Samsung aren’t fly-by-night operations that will deliberately push out insecure apps; so what’s the harm in downloading the app and switching security back on?

For users who aren’t technically savvy, the Android setting that blocks downloads from unknown sources exists to prevent them from accidentally installing malicious software. – Strike one.

The security industry has spent a great deal of time and effort educating users about the dangers of downloading unknown files from untrusted sources, and this approach undermines a lot of those efforts. – Strike two.

Normalising such actions enables attackers to try similar tactics. After all, if companies like Amazon have legitimised the approach of turning off security settings and downloading apps from their own environments, it is an approach that anyone, including attackers, could emulate. – Strike three.

The reality is that convenience trumps security most of the time. Users will intentionally or accidentally bypass security controls to get the app of their choosing, often leaving themselves vulnerable in the process. Which is why it’s important that manufacturers, providers, app stores, and everyone in between work together to help deliver a secure experience to users, instead of elements working against each other.


Making Sense of WannaCry


Whenever calamity strikes, it’s only natural for people to try to rationalise it and identify the problem.

That is what’s now happening with the WannaCry ransomware outbreak that affected the UK’s NHS and other services in over 100 countries: people are discussing what should have been done to prevent it.

On one hand, there’s an ongoing debate about responsible disclosure practices. Should the NSA have “sat on” the vulnerabilities for so long? By the time the Shadow Brokers released the details, enterprises were left with only a small window to upgrade their systems.

On the other hand, there are several so-called “simple” steps the NHS and similar organisations could have taken to protect themselves, including:

  1. Upgrading systems
  2. Patching systems
  3. Maintaining support contracts for out-of-date operating systems
  4. Architecting infrastructure to be more secure
  5. Acquiring and implementing additional security tools

The reality is that while any of these defensive measures could have prevented or minimised the attack, none of these are easy for many enterprises to implement.

… Read the rest of the post here


When culture eats awareness for breakfast

European startup CLTRe, founded by Kai Roer, has spent the last couple of years examining the security awareness and user behaviour problem through the lens of security culture.

Based on findings over the course of 2016, CLTRe has produced its first annual Security Culture report, co-written by Roer and Gregor Petric, Ph.D., an Associate Professor of Social Informatics and Chair of the Center for Methodology and Informatics at the Faculty of Social Sciences, University of Ljubljana (Slovenia).

Many existing security awareness reports measure and report on a few basic metrics, often based on the number of phishing emails users click on.

It is here that the CLTRe report differentiates itself, delving into statistics and metrics to provide a view that is probably the first of its kind. It considers not just behaviours, but adds insight into those behaviours based on gender, geographic location, age, duration of service, and acceptance of norms across seven dimensions.

The report has insightful nuggets of information scattered throughout, such as an examination of the cultural differences across various industries in Norway and Sweden.


The report explains at length why security culture metrics matter and the value they provide. It states that, much like technical controls, security culture must be measured in order to understand and track change.

For example, reporting the number of clicks in a phishing exercise is useful but has its limits. Those metrics reveal nothing about users’ motivations or drivers.
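To illustrate that limitation with a toy example (all the campaign numbers below are invented), a phishing metric boils an exercise down to a couple of rates and says nothing about why anyone clicked:

```python
# Toy phishing-campaign results; every figure is invented for illustration.
campaign = {
    "emails_sent": 500,
    "clicked": 42,
    "reported": 30,
}

click_rate = campaign["clicked"] / campaign["emails_sent"]
report_rate = campaign["reported"] / campaign["emails_sent"]

print(f"Click rate:  {click_rate:.1%}")   # 8.4%
print(f"Report rate: {report_rate:.1%}")  # 6.0%

# Nothing here tells us *why* users clicked: curiosity, habit, time
# pressure, or trust in a spoofed sender. Those are exactly the
# motivations the report argues culture metrics are needed to surface.
```

The numbers alone can't distinguish a culture problem from a busy Tuesday, which is the report's point.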


Thoughts

For its first edition, CLTRe has produced a great report with very detailed insights. It’s not light reading, and some segments feel quite academic in nature. That’s not a knock on the report; it’s what’s needed to elevate the discussion to a higher level.

For next year’s report, I’d like to see case studies or quotes from companies that have been measuring their security culture and using the information to improve it.

Check out the full report here (registration needed).

Let Kai and the CLTRe team know what you think. Click to tweet:

Great job @getcltre @kairoer on the human factor report. (via @J4vv4D)

Why did you write this report? @kairoer @getcltre (via @j4vv4d) 


The Growing Impact of Security Researchers

I’ve followed Scott Helme’s work for a while now and have been impressed with his approach, so I was interested to find out that he had teamed up with BBC Click and Prof Alan Woodward to comprehensively dismantle a vendor’s claim to total security. Scott has published the whole story on his blog, and the BBC Click episode is live.

This was a well-researched and executed piece, but let’s take a step back and look at the wider picture and what this means for vendor-research relations.

I felt it was a good time to sit down with Scott and seek his opinions on some of the questions that came to mind.

One of the first things that strikes me about Scott is his measured and calm demeanour. He has the look of a chess master who is quietly confident, knowing he’s always seven moves ahead. The second thing I note is that I really can’t gauge how old he is. I think it’s one of those things that happens as you grow older; I find it increasingly difficult to tell whether someone is 20 or 35. They all just look “young” to me. So I look for clues such as the ages of their children, year of graduation, or years of experience to make an educated guess.

What is secure?

Not wanting to waste time with warm-up questions, I wanted to get to the heart of the matter. There is no benchmark or agreed standard for when it’s appropriate to use the word secure, or to claim a product is secure. The fact of the matter is that, as far as technology goes, nothing is ever truly secure. So does that mean no one should ever use the word at all?

On one hand, one wants to avoid going down the route of legislation, or imposing stringent criteria on an industry that is constantly in a state of flux. On the other hand, as Scott said, “We don’t see many car manufacturers rocking up with the safest car in the world that has no airbags or brakes.”

Which is a fair comment, but it is a lot easier for a layperson to see and understand security in physical products than in software.

The manufacturer’s dilemma
So what is a security startup to do? Nearly every security product has had vulnerabilities that needed patching; not even the largest vendors are free of bugs.

Open source products, where the code is available for all to audit, are no exception, with products such as OpenSSL having had long-standing vulnerabilities. Given this landscape, what’s the best way forward?

Scott gives a half smile, indicating it’s something he may have been asked many times. He told me he believes that the more scrutiny a device or product has, the more likely you are to become aware of issues. “Researchers and bug bounties are your friend. They can’t replace traditional penetration testing and other standards or compliance requirements, but they sure add a lot of extra coverage, often for not a lot of cash.”

It’s somewhat of a balancing act. After all, security costs time and money to implement properly, and far too many startups are caught up in trying to prove that their product serves a viable market and that there is demand before pouring resources into security.

Scalability
But is relying on researchers to find vulnerabilities a scalable model? There are only so many Scotts in the world, and researchers are drawn to particular products out of personal curiosity, or where their expertise lies, so many products simply slip beneath the radar. The number of “secure” products being released outpaces the time and effort needed to properly validate their claims.

Scott agrees with the sentiment, and says it ties into the lack of standards. “Right now there is no regulation or standards, so researchers are all we have. Even basic standards would start to raise that minimum bar and begin the process of filtering out the crud. I do it because I feel I can make a difference, I enjoy it and it helps me keep my skills sharp.”

With great power
With time running out, I wanted to go straight for the jugular with my last question.

While one can commend the good work Scott and others do, with this recent release we’ve effectively seen a company torn down. Surely that kind of approach risks scaring other startups out of releasing any product at all?

If I were considering starting up a secure product, I’d be scared that you, or other researchers, could shut my business down. That would leave me with the choice of either not producing anything at all, or trying to bribe you up front. And while you may be above bribery, I can’t say that for every researcher out there.

Despite my Jeremy Paxman-style attempt at getting a rise out of Scott, he remained patient with me.

“I certainly hope it wouldn’t scare them from releasing a product, but perhaps consider engaging a reputable security firm to analyse the product prior to release. I feel that’s the minimum any responsible company should be doing anyway. They can also engage the community with a bounty program to get private feedback prior to release too. If someone plans to bribe you, I guess you can’t do much about that, except beat them to the punch and fix the issues first. The internet is full of bad people :(”

The internet is indeed full of bad people, you cannot argue with that.

Understanding realities

In between all the politics and memes on twitter, you sometimes come across a genuinely interesting security conversation.

My friend Quentyn Taylor, who happens to be a CISO, posted this tweet, which generated a lot of great commentary.

I recommend having a read through the comments; there are some very valid points being made both in support of and against this viewpoint.

Personally, having worked in many banks with huge legacy estates running critical banking applications, I agree with the statement. It’s easy to sit on the sidelines and say, “just upgrade”, but it’s never really that simple. Security is often only a small consideration in the grand scheme of things.

It’s why risk management is so important: it helps clarify the tradeoffs. A legacy system may be vulnerable, and that risk may equate to a dollar value. But the downtime, upgrade costs, and impact on associated systems of an upgrade may outweigh it considerably.
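To make that tradeoff concrete, here's a minimal back-of-the-envelope sketch of the comparison a risk manager might run; every figure in it is hypothetical, invented purely for illustration:

```python
# Hypothetical comparison: annualised loss expectancy (ALE) of keeping a
# vulnerable legacy system versus the one-off cost of upgrading it.
# All numbers are made up for illustration.

def annualised_loss_expectancy(single_loss_expectancy, annual_rate_of_occurrence):
    """ALE = expected cost of one incident x expected incidents per year."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Expected yearly cost of leaving the legacy system in place
ale_legacy = annualised_loss_expectancy(
    single_loss_expectancy=250_000,   # cleanup, downtime, fines per breach
    annual_rate_of_occurrence=0.1,    # roughly one breach per ten years
)

# One-off upgrade cost: licences, downtime, re-testing dependent systems
upgrade_cost = 400_000

print(f"ALE of legacy system: {ale_legacy:,.0f} per year")  # 25,000 per year
print(f"Years of avoided losses to recoup the upgrade: "
      f"{upgrade_cost / ale_legacy:.0f}")                   # 16
```

On numbers like these the "just upgrade" advice looks a lot less obvious, which is exactly the point: the output is only as good as the inventory and classification feeding the estimates.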

So often it comes down to having a proper inventory, classifying data, and keeping a closer eye on legacy systems.

However, this isn’t the whole reality.

It’s a reality based on my personal experience, which likely mirrors many of Quentyn’s experiences. And that’s something many often forget: just because something works in one enterprise or type of business, it doesn’t necessarily mean it will work in another.

Which is why I feel that when discussing security topics, it’s worthwhile to be specific and add context. It’s something I’ve been guilty of omitting in the past, and I’d like to change that.

For example, take these two statements:

Scanning QR codes is not popular.

Vs.

Scanning QR codes is not popular in the West.

That is because in some countries, like China, QR codes are everywhere. The location adds that important bit of context, turning the statement from a generality into something more specific.

The same logic can be applied to many of the broad security statements that are often made. So when someone says, “there’s a shortage of infosec talent”, the questions that come to mind are:

Which geographies does this apply to?

Is there a lack of red teamers, blue teamers, or risk managers?

Is it a lack of people with over five years’ experience, or are they too expensive?

If we stick to our own realities and speak only in general terms, we will remain adamant that our own point of view is correct and never reach a consensus. It’s probably about time we started having better conversations.

GDPR Mind Map

I’ve been reading up on GDPR lately and frequently use mind maps to organise my thoughts.

So, I thought I’d share the interactive mind map I created for GDPR, with its 11 chapters, 99 articles and 187 recitals. Let me know if I’ve missed anything or should amend anything for clarity.
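For anyone who'd rather have the skeleton as data than as a diagram, here's a minimal sketch of the regulation's chapter structure (the short titles are my own abbreviations of the official headings), with a sanity check that the article ranges add up to the full 99:

```python
# GDPR structure: chapter number -> (short title, first article, last article).
# Titles are abbreviated; check the official text for the full headings.
gdpr_chapters = {
    1:  ("General provisions", 1, 4),
    2:  ("Principles", 5, 11),
    3:  ("Rights of the data subject", 12, 23),
    4:  ("Controller and processor", 24, 43),
    5:  ("Transfers to third countries", 44, 50),
    6:  ("Independent supervisory authorities", 51, 59),
    7:  ("Cooperation and consistency", 60, 76),
    8:  ("Remedies, liability and penalties", 77, 84),
    9:  ("Specific processing situations", 85, 91),
    10: ("Delegated and implementing acts", 92, 93),
    11: ("Final provisions", 94, 99),
}

# Sanity check: the ranges are contiguous and cover articles 1 to 99
articles = [a for _, first, last in gdpr_chapters.values()
            for a in range(first, last + 1)]
assert articles == list(range(1, 100))

print(f"{len(gdpr_chapters)} chapters, {len(articles)} articles")
# 11 chapters, 99 articles
```

The 187 recitals sit outside this structure, as they preface the regulation rather than belonging to any chapter.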


Alien Eye in the Sky

A lot went down – some stories in the video and a ton of interesting links below. Enjoy!

 

Stories in Video

Tesco Bank Hacked

Adult Friend Finder hack

Facebook buying stolen passwords

IP Bill set to become law

Other interesting stories  

Cyber Security Challenge UK crowns youngest ever champion

GCHQ wants internet providers to rewrite systems to block hackers

Researchers’ Belkin Home Automation Hacks Show IoT Risks

UK halts Facebook’s WhatsApp data dip

Data Cleanliness and patch verification

A Cybercrime Report Template

Smart Light bulb worm hops from lamp to lamp