The Problem with Weaponised Malware

In May we wrote a simple explanation of the WannaCrypt malware, and part of that article described how the self-replicating worm that made the malware so prolific was developed by the US NSA for national security purposes.  Creating malware as a weapon for government use raises significant security issues that deserve close scrutiny, especially against the backdrop of national security.

Weaponised Malware

What's the big problem?

What?

Weaponised malware refers to malicious software tools created to attack network assets.  Many security researchers consider 'weaponised' a misnomer; even the NSA itself stated that the exploits it created were purely for surveillance purposes.  That may well be true, but the fact that malicious technology was created and could theoretically be used in an attack scenario is more than enough justification for the term.

Why?

The question of why a government needs weaponised malware is one that anyone outside of the national security services is unlikely to be able to answer.  Without understanding the risks faced by a given agency, we cannot properly judge the scale of the countermeasures used against them.  Many groups have speculated about the exact nature of these threats, but as cyber security professionals rather than socio-political experts we are not in a position to discuss those aspects in an informed way.  Suffice it to say that agencies such as the US NSA consider the development and deployment of weaponised malware a valuable asset in their armoury against these threats.

So What?

Asking why we should care if this is going on is a perfectly reasonable question, and one that was answered quite poetically by the WannaCrypt attacks in May.  That attack, which caused millions of pounds' worth of damage worldwide and potentially put thousands of lives at risk in NHS hospitals in the UK, was the result of malware developed by the NSA.  Weaponised malware is something we should all care deeply about, and its effects are only going to become more damaging as we move to an ever more connected world.  If governments are to develop weapons of any type, should they be deployed if they could potentially cause damage on this scale?

What Needs to Change?

Judging by the sheer number of exploits leaked last year by the ShadowBrokers, a lot needs to change.  Weaponised malware should be treated like any other weapon: kept under lock and key and used only by those authorised to do so.  The unique nature of software weapons, however, makes this problem far more difficult than with any other type of weapon.  Software can be stolen without removing the original, and the theft can be carried out remotely.  Governments cannot simply hide weaponised malware as they do other weapons; it is not a nuclear warhead that can be placed on a submarine, or a rifle locked in an armoury.  When considering how to secure any weapon, how easily it can be stolen has to be part of the security process.  Should governments invest as much in the network security of weaponised malware as they invest in the secrecy surrounding the locations of nuclear weapons?

The NSA got off lightly when the ShadowBrokers leaked the malware.  Under no circumstances should cutting-edge weaponry be allowed to fall into public hands, regardless of whether it is software or not.  The scale of this leak was not fully appreciated until the WannaCrypt attacks occurred, and even then, had a more capable attacker wanted to cause real damage, it could have been a lot worse.

The more contentious question of what needs to change is whether the developers of weaponised malware should inform the creators of the target systems about the vulnerability.  That is to say, if a government agency discovers a flaw in a piece of software that allows it to attack an enemy, should it tell the manufacturer of that software immediately?  On one hand, disclosure would essentially sterilise the malware, removing its value as a weapon (as happened when Microsoft released patches for the versions of Windows affected by WannaCrypt); on the other hand, networks around the world are made more secure by having fixes rolled out via updates.  This is the dilemma that national security agencies worldwide need to consider very closely, and it is the real question at the heart of this problem.  There will be instances where they fall on one side of the fence and instances where they fall on the other, and both situations will have valid arguments for and against.  In the end, all we can conclude is that this issue has a very large moral grey area at its heart, and it is not going away any time soon.

WannaCrypt, a simple explanation of the attack that took the NHS offline

Before we start: Microsoft have released an emergency patch for unsupported versions of Windows (XP, 2003, Vista, 2008) here, and in March Microsoft released a patch for supported versions of Windows that stops the exploit used in the WannaCrypt attacks; details here.

WannaCrypt

Everything you need to know

WannaCrypt (aka WannaCrypt0r, WannaCry, Wcry) is a type of ransomware that proliferated very rapidly, with reports that it had affected several high-profile organisations as of 12th May.  Put simply, ransomware is an attack that encrypts files on a machine so they can’t be used, then demands a ransom be paid for them to be decrypted.  These types of attacks are common, but this month’s attacks in particular are noteworthy for a number of reasons.

Typically, ransomware is what's known as a Trojan, delivered via email: hundreds of thousands (or potentially millions) of malicious phishing emails are sent with attachments or links, affecting those unfortunate enough to open them.  WannaCrypt had an additional capability, a self-replicating payload (known as a worm), which meant that once it was inside a network it could propagate to other machines on that network.  In practice, this meant it only took one person in a business to be affected before the malware could spread to every vulnerable machine in that business.  The worm could also self-replicate to other networks via the internet, depending on each network's configuration.

There are multiple conflicting reports on whether WannaCrypt was delivered via email or by another method; however, the impact on businesses was largely caused by the self-propagating addition to the ransomware, since many machines could be taken out of action even if only one machine was initially infected.

The self-propagating component of the ransomware uses a vulnerability that was discovered by the US National Security Agency, who also developed an associated exploit.  We do not know how long they knew about the vulnerability, but unlike security researchers, the NSA tend to keep newly discovered exploits to themselves in order to use them for intelligence activities.  The particular exploit used by WannaCrypt was known internally by the codename ‘EternalBlue’.  Last year the NSA themselves were hacked by a group called the ShadowBrokers, who released details of EternalBlue to the public in April, which is why we are now seeing malicious attacks using the same methods.

WannaCrypt can affect all unpatched versions of Windows from XP to Windows 8.  Microsoft had patched the vulnerabilities exploited by EternalBlue in March, before the exploit was publicly released by the ShadowBrokers, and in the wake of the attack Microsoft released patches for unsupported versions of Windows (it is rare for Microsoft to patch older versions of Windows, but they did so due to the large-scale impact of the WannaCrypt attacks).

Multiple organisations were affected by the attack; however, it is not yet known (and it is unlikely we’ll ever know) whether these were targeted directly or just happened to be affected.  They include Telefonica in Spain, FedEx in the US and the NHS in the UK, to name but a few.  Remediation and disaster recovery strategies were put in place in affected businesses, such as turning off all IT equipment and rolling back to pre-attack backups, actions which were hugely costly to those affected and may result in data loss that is not identified immediately.

WannaCry infections worldwide (Source: https://intel.malwaretech.com/botnet/wcrypt)

As WannaCrypt started to spread uncontrollably, cyber security researchers began digging into the malware to see how it worked.  One of these researchers, MalwareTech, noticed that WannaCrypt contacts an external website before activating on a victim machine; however, when they looked to see who owned this domain, they found it was unregistered.  They thought it would be useful to register the domain so they could see how many connections it received and consequently estimate how many machines were being affected by WannaCrypt.  In an odd turn of events, WannaCrypt stops running if the domain is registered when the malware starts, which stopped the malware activating on internet-connected devices that were subsequently hit by it.  There are many possible reasons for putting this ‘killswitch’ mechanism in malware; the leading theory is that it is a way of checking whether the machine being infected is a test environment.  Since these test environments seldom have real internet connections for security reasons, the malware can hide from analysis by not activating when there is no external internet connection.  By registering this domain, MalwareTech may have vastly reduced the infection rate of the initial version of the malware.
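
To make the mechanism concrete, here is a minimal sketch of the kind of check a killswitch performs, written in Python purely for illustration; the domain used is a placeholder, not the real killswitch domain, and this is not how the actual malware is implemented.

```python
# Minimal sketch of a WannaCrypt-style killswitch check (illustration only).
# The URL below is a placeholder, not the real killswitch domain.
import urllib.request
import urllib.error

KILLSWITCH_URL = "http://example-killswitch-domain.invalid/"  # hypothetical

def killswitch_domain_is_live(url: str, timeout: float = 5.0) -> bool:
    """Return True if the domain resolves and answers an HTTP request."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except (urllib.error.URLError, OSError):
        return False

if killswitch_domain_is_live(KILLSWITCH_URL):
    # Domain registered and reachable: the malware exits without activating.
    print("Domain is live - do nothing.")
else:
    # Domain unreachable: the malware would continue with its payload.
    print("Domain unreachable - the malware would continue.")
```

Registering the domain flips that check for every infected machine that can reach the internet, which is why MalwareTech's registration acted as a brake on the initial variant.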

That’s not likely the end of the story for WannaCrypt: in the weeks since the initial infections were identified, variations with alternative killswitches have been created, and some variations have had the killswitch removed entirely.  In essence, WannaCrypt is a combination of two attacks, ransomware and a self-replicating worm, and both will continue to be produced by malicious actors.

So what can we do to stop these types of attacks going forward?  It goes without saying that good security procedures need to be adhered to: update software as soon as possible and don't open links or attachments you weren't expecting to receive.  From a business perspective the same advice applies, but where older software must be used, for example to control systems with lifespans of several decades, there must be a way to identify these vulnerabilities and put protections in place to stop them being attacked.  Tools such as Perception can identify vulnerabilities on a network before they are attacked, giving businesses the chance to protect themselves where software updates aren’t possible.  If the worst does happen, these types of network monitoring tools can alert an analyst to exactly which files have been encrypted and which hosts have been affected, assisting greatly in remediation activities.

Cyber Insurance is Changing, Here’s How You Can Lower Your Cyber Security Insurance Premium

The number of companies in the UK investing in Cyber Insurance cover is rising fast, and such cover is rapidly becoming a necessity for any business.  As these policies become more popular, they are also coming under more and more scrutiny, with not only the number of claims increasing but also the number of disputed or denied pay-outs.  With the scope of cyber security being so broad and often misunderstood, underwriters of these policies are often working with far less information when pricing premiums than for other types of insurance policy such as motor or health plans.

So how are these premiums calculated?  Currently there are two ways: one based on a percentage of total revenue (the easy way), and the other based on the perceived risk to the business (the not-quite-as-easy way).  However, with the latter only taking into account assumed reputational harm and immediate financial implications, rather than quantifying the actual likelihood of a breach, there is little impetus for businesses today to improve network security in order to reduce premiums.  This is the equivalent of a dangerous driver investing in more comprehensive cover rather than improving their driving, or a heavy smoker buying more health insurance instead of stopping smoking.
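
As a toy illustration of the difference between the two approaches (all rates, costs and likelihoods below are made up for the example, not real underwriting figures), the calculation might look something like this:

```python
# Toy comparison of the two pricing approaches described above.
# All rates, costs and likelihoods are hypothetical, purely for illustration.

def flat_rate_premium(annual_revenue: float, rate: float = 0.001) -> float:
    """'The easy way': a fixed percentage of total revenue."""
    return annual_revenue * rate

def risk_based_premium(expected_breach_cost: float, breach_likelihood: float,
                       insurer_loading: float = 1.3) -> float:
    """'The not-quite-as-easy way': expected loss scaled by an insurer's loading."""
    return expected_breach_cost * breach_likelihood * insurer_loading

revenue = 10_000_000          # hypothetical £10m turnover
print(flat_rate_premium(revenue))            # £10,000 regardless of security posture
print(risk_based_premium(2_000_000, 0.02))   # £52,000 - falls as breach likelihood falls
```

The point of the second approach is that a genuinely lower breach likelihood, earned through better security, directly lowers the premium.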

The situation is improving, though: underwriters are now taking more steps to understand how businesses approach network security, and to offer better value to more secure networks.  With such a major step change occurring in the fastest growing insurance sector, how can companies prepare for the increase in scrutiny?

Improving Basic Cyber Security Policy

The first point is probably the most obvious, and many insurers already insist on basic cyber policies being in place.  There are multiple guides on how to build these policies, but the basic steps always remain the same.  What data needs to be protected at all costs (customer information, valuable IP)?  Who can access this and other sensitive data?  How are confidential communications and data movements protected?  It’s always good to think beyond the mandatory as well: building a cyber policy to the lowest common denominator is the most cost-efficient approach in the short term, but it might not be sufficient for your business.  Furthermore, the policy needs regular review; the cyber landscape is vastly different today than it was even a year ago, so how those risks are approached needs to change too.

Enforcing the Policy

Creating a document to manage cyber risk is all well and good, but it’s all for nothing if that policy is not upheld.  The biggest problem most businesses have is knowing when the policy has been breached: what is to stop someone with access to sensitive data sending it unencrypted across an unprotected or uncontrolled part of the network?  Often, network users will find the easiest way to do their jobs rather than the most secure, and this results in unforeseen breaches of cyber policy.  The best course of action here is to make sure system administrators have visibility of what occurs on the network and are properly incentivised to investigate anything they find suspicious.  Regular testing of the network can also be invaluable in understanding where vulnerabilities lie, and best of all this can be done with internal resources rather than forking out for expensive pen testers.

Training the Users

Often seen as the most vulnerable part of a network, users themselves need to be trained to work according to network security basics.  Helping users understand not just what to do but also why they need to do it can vastly improve how secure the network is as a whole.  For example, telling users why USB sticks cannot be used will improve adherence to a no-USB policy.  Likewise, training users on why Dropbox should be avoided, instead of just applying a blanket block on Dropbox IPs, will likely stop the inevitable workarounds users will try to find.  Basic cyber awareness training can also be cheap and effective: making sure users are aware of phishing emails can radically reduce exposure to ransomware, and will protect them in their personal lives too.

Understanding the Risk

Without understanding how a compromise might occur, you cannot properly protect yourself against it.  Things that are often missed when building this picture include uncontrolled parts of the network: should we be responsible if AWS or Office cloud services are breached?  What steps can be taken to ensure data stored outside the business remains secure?  Understanding how the network is accessed externally is also useful for striking a good balance between the usability of network assets from outside and the protection of those same assets from external actors.

Will this Actually Save Money?

Yes.  Even going through the above steps on an occasional basis will put a business streets ahead of the average enterprise network.  Considering that the insurance market is mostly about keeping premiums cheap for those above the average in the bell curve, massive savings can be made as more and more focus is put on how data is protected rather than what data is being held.

Perception Update - Network Drive Activity Classifiers

A suite of behavioural classifiers has been developed for the Perception sensors to detect suspicious activity based on the information gathered by the Network Drive Activity Cache.  These classifiers monitor behaviours such as file access, modification, upload and download, and report on potential policy breaches and/or unusual activity.

Perception can now attribute user network activity to specific Windows file-sharing operations, allowing for enhanced detection of ransomware while the ransomware payload is executing.
Additionally, policy-based classifiers can assist in ensuring that your company processes are being followed; for example, search patterns can be set up to look for certain filenames, users or extensions of interest that have been seen in use within your network.
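
As a rough illustration of the kind of policy-based pattern matching described above, here is a sketch in Python; the event format, patterns and user list are invented for the example and are not Perception's actual data model or rules.

```python
# Sketch of a simple policy-based classifier over drive-activity events.
# The event fields, patterns and watched users are invented for illustration.
import fnmatch

WATCHED_PATTERNS = ["*passwords*", "*confidential*", "*.kdbx", "*.pst"]  # hypothetical
WATCHED_USERS = {"contractor01", "temp-admin"}                           # hypothetical

def classify(event: dict) -> list[str]:
    """Return the alerts raised for one file-activity event."""
    alerts = []
    filename = event.get("filename", "").lower()
    if any(fnmatch.fnmatch(filename, pattern) for pattern in WATCHED_PATTERNS):
        alerts.append(f"watched filename pattern matched: {filename}")
    if event.get("user") in WATCHED_USERS:
        alerts.append(f"activity by watched user: {event['user']}")
    return alerts

# Example event, shaped roughly like a drive-activity record
event = {"user": "contractor01", "filename": "Q3-Confidential-Forecast.xlsx",
         "operation": "read", "share": r"\\fileserver\finance"}
print(classify(event))
```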

So as we said last week, we’ve implemented a Network Drive Activity Cache and naturally, because we have a behavioural engine, we can now identify behaviours based on the information in that cache.  We’ve put together a number of behavioural classifiers already based on some real world threats we’ve seen in the wild, but expect more of these classifiers to be implemented over time as we discover more vulnerabilities and scenarios we want to alert on.

What Perception customers love most isn’t just its ability to pick up on malicious activity, but its ability to discover network vulnerabilities before they are exploited by a malicious actor.  These classifiers can also be used to uncover poor network security practice, such as users storing confidential information in unencrypted files; it’s little things like that which make Perception so useful.

This update is CCS and sensor based, and will be pushed to all managed customers at the pre-agreed upgrade time.  Self-monitored customers can update their own sensors and CCSs using the software upgrade process.  Please be aware, this feature requires the Network Drive Activity Cache to be active.  If you have any further questions about this upgrade please contact us at info@perceptioncybersecurity.com

Perception Update - Network Drive Activity Cache

A new mechanism has been developed on Perception sensors to allow file sharing activity between client machines and Windows network drives to be stored.

Enhanced visibility of network drive access provides the Perception classifiers with a huge amount of insight into a client machine’s behaviour.  This in turn allows classifiers to detect potential threat behaviours, such as accessing and downloading large parts of a network share, or repeated download/upload activity that can often be indicative of malicious behaviour.
This feature also facilitates the inclusion of additional associated meta-data in the events generated by the system, such as the names and locations of the files accessed, which can be vital in cases where data exfiltration has taken place.

The Network Drive Activity Cache gives Perception an extra level of information on top of the existing meta-data it gathers.  When files are transferred to or from Windows-based machines on a network, information about that transfer is visible on the network.  Perception now includes this information in any behaviours that identify file movement across the network.  As a result, any behaviour that saw data movement can now also tell which files were accessed, and whether they were read or written.
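
To make that concrete, a single cache entry might conceptually hold fields like the ones below; the names and structure are illustrative only, not Perception's actual schema.

```python
# Conceptual sketch of what one Network Drive Activity Cache entry might hold.
# Field names and structure are illustrative, not Perception's actual schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DriveActivityRecord:
    timestamp: datetime       # when the file operation was observed on the wire
    client_ip: str            # machine performing the operation
    server_ip: str            # Windows file server hosting the share
    share: str                # e.g. \\fileserver\finance
    path: str                 # file path within the share
    operation: str            # "read" or "write"
    bytes_transferred: int    # size of the transfer

record = DriveActivityRecord(
    timestamp=datetime(2017, 6, 1, 9, 30),
    client_ip="10.0.4.23",
    server_ip="10.0.1.5",
    share=r"\\fileserver\finance",
    path="reports/2017/board-pack.xlsx",
    operation="read",
    bytes_transferred=2_400_000,
)
print(record.operation, record.path)
```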

Our analysts are already seeing great benefit from this feature, as it immediately identifies which files have been accessed in data movement events, making the investigation of suspicious events far faster.  Rather than having to trawl through capture files looking for which data has been accessed, the file information is right there, front and centre.

The information provided by this feature enables a number of additional capabilities, the first set of which we’ll tell you about next week.  The system can now also build intelligence around who accesses which files, when, and how unusual that is for that person.  How we utilise the Network Drive Activity Cache will become more and more sophisticated and beneficial as the system continues to improve, but it’s already showing great results.

This update is CCS and sensor based, and will be pushed to all managed customers at the pre-agreed upgrade time.  Self-monitored customers can update their own sensors and CCSs using the software upgrade process.  Please be aware, this feature may change the performance requirement of the sensor, and can therefore be turned on or off as required.  If you have any further questions about this upgrade please contact us at info@perceptioncybersecurity.com

Deliveroo get hit with the unhackiest hack of the year

The BBC consumer advice show Watchdog found hundreds of examples of customers being billed for food that they didn’t order via the restaurant delivery app Deliveroo, forcing the foodies’ favourite business to deny that it has been targeted by hackers.  The company claimed that the fraudulent orders were made using credentials stolen in other attacks, and only worked on customers that used the same email/password combination for their Deliveroo account.

The customers contacted by the programme, which aired on 23rd November (you can watch it on iPlayer here until 23rd December if you are in the UK), all had their money refunded, which is good news, but we don’t know how much has had to be forked out in refunds to affected customers.  Deliveroo have since denied that any payment information was taken; the transactions were made using a one-click style payment process that doesn’t require customers to input their payment information again for every order.

The advice remains that every online account should be protected by a unique password.  Although this can rapidly become unmanageable, several password managers are available to stop you forgetting the unique password for that one website you only use once a year.  Apple users can use iCloud Keychain, although cross-application support is often lacking, and several Perception staff members use and can vouch for 1Password.

The use of stolen credentials raises an interesting issue for online businesses.  Deliveroo obviously benefits from a massively streamlined ordering process; however, is this done to the detriment of security?  Deliveroo have stated they will ask for verification when orders are made to new addresses, which should help to stop the fraud entirely (although it still leaves the door open to send as much food as possible to a hacked customer's genuine address, in the weirdest hacking prank ever).  If Deliveroo is able to prove where the passwords were stolen from, should they be able to make a claim against that organisation, since it was technically their fault?  Should every breached company be forced to immediately contact all customers and let them know that a compromised password should no longer be used on any other site?

The European Banking Authority plans to require two-factor authentication on all orders over €10 in the near future, but that already has many businesses that favour one-click ordering up in arms, claiming more business will be lost than will be saved on fraud refunds.  Perhaps the responsibility for security lies solely with consumers themselves, and those who reuse passwords have only themselves to blame; we can hardly expect businesses to check all new accounts against haveibeenpwned.com and refuse service to those that have been hacked in the past, can we?

Source

Major Vulnerabilities found in Samsung KNOX Software

Security experts have disclosed three vulnerabilities in Samsung Knox, a piece of software deployed on phones to separate personal and professional data for security purposes, according to Wired.

The Israeli security firm Viral Security Group demonstrated the flaws on a Samsung Galaxy S6 and a Galaxy Note 5, gaining full control of each device.  Considering the purpose of the software is to maintain the security of a business-issued handset whilst allowing the flexibility of a personal device, the businesses that deploy this system may be assuming these devices are safe despite moving between internal and external (protected and unprotected) network connections.

It's important to note that these vulnerabilities have since been patched in a security update; before the patch, however, the researchers at Viral Security Group were able to replace legitimate applications with rogue versions, with access to all available permissions, without the user noticing.  Many businesses rely on the Knox software to make sure any connection to a business network is made from the "safe zone" of the phone, with the personal segment of the phone used once outside of that protective environment.  If the separation between these two parts of the device's software is breached, the protections are essentially useless and the device once again becomes a BYOD-type threat.

The takeaway from all this is that you can't assume your security measures are foolproof; even once protections are put in place, a significant responsibility still lies in understanding, controlling, and analysing network traffic.

The full white paper describing the flaws is well worth a read if you have time, but first make sure any devices on your network have fully up-to-date software.

Perception Update - DNS Behaviour Classification

Perception now includes several classification methods to detect various types of behaviour that rely on DNS use.

We have added enhanced DNS behavioural detection capability to detect malware behaviours such as DNS tunnelling.  These methods are typically used to circumvent traditional security defences, allowing command-and-control channels to be set up on even very ‘locked down’ networks.

The detection of low and slow DNS tunnelling is complex, and we have developed a number of Perception Behavioural Classifiers to assist with it.  In addition, Forensic AI High Level Classifiers have been developed to allow for long-term correlation.  What this means is that this very advanced exfiltration technique is now identified by Perception and clearly explained to the analyst.  You can learn more about DNS misuse as a data exfiltration technique by reading our blog post on the topic.
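
For readers curious about what detection means in practice, two common heuristics for spotting DNS tunnelling are the entropy and length of queried subdomains, and the number of unique subdomains asked of a single parent domain.  The sketch below illustrates the idea only; the thresholds and scoring are invented and are not Perception's classifier logic.

```python
# Illustrative sketch of two common DNS-tunnelling heuristics: high-entropy,
# long subdomain labels, and large numbers of unique subdomains per parent
# domain. Thresholds are invented and are not Perception's classifier logic.
import math
from collections import defaultdict

def shannon_entropy(text: str) -> float:
    """Bits of entropy per character of the string."""
    if not text:
        return 0.0
    counts = {ch: text.count(ch) for ch in set(text)}
    return -sum((n / len(text)) * math.log2(n / len(text)) for n in counts.values())

def tunnel_like_domains(queries: list[str],
                        entropy_threshold: float = 3.5,
                        unique_threshold: int = 50) -> set[str]:
    """Return parent domains whose query pattern looks tunnel-like."""
    unique_subdomains = defaultdict(set)
    suspicious = set()
    for query in queries:
        labels = query.lower().rstrip(".").split(".")
        if len(labels) < 3:
            continue
        subdomain, parent = labels[0], ".".join(labels[-2:])
        unique_subdomains[parent].add(subdomain)
        # Long, random-looking labels suggest encoded data rather than names
        if len(subdomain) > 30 and shannon_entropy(subdomain) > entropy_threshold:
            suspicious.add(parent)
    for parent, subs in unique_subdomains.items():
        # Many one-off subdomains against a single domain is a classic sign
        if len(subs) > unique_threshold:
            suspicious.add(parent)
    return suspicious
```

The hard part in a real deployment is the low and slow case, where these signals only emerge when queries are correlated over days or weeks, which is why the long-term Forensic AI correlation matters.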

This update is CCS and sensor based, and will be pushed to all managed customers at the pre-agreed upgrade time.  Self-monitored customers can update their own sensors and CCSs using the software upgrade process.  If you have any further questions about this upgrade please contact us at info@perceptioncybersecurity.com

Perception Update - Domain Classification

Perception allows analysts to assign network trust categories to assets, improving threat behaviour attribution.

The ability for the analyst to assign domain types and trust levels to IP ranges has been added to the system.  This introduces the basis for assigning security layers to better attribute behaviours to risk factors.

Perception can mark various parts of a network as ‘trusted’ or ‘untrusted’.  This feature enriches the information delivered in the behavioural events generated by the system, enabling the analyst to better categorise potential threats.  It also enhances the ForensicAI engine’s ability to detect potential threats based on the source and destination domain types and trust levels.

For example, the system might see a data movement between two ‘trusted’ parts of the network as not threat-like, whereas a data movement from a ‘trusted’ internal server to an ‘untrusted’ public WiFi network is far more interesting.  ForensicAI also leverages this new data, understanding the relevance of multiple data movements and correlating data moving between different trust levels of the network over time.
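
As a rough sketch of how trust levels might feed into a score for a data-movement event (the labels, weights and formula below are illustrative inventions, not ForensicAI's actual logic):

```python
# Sketch of weighting a data-movement event by the trust levels of its
# endpoints. Labels, weights and the scoring formula are illustrative only.
import math

SEVERITY = {
    ("trusted", "trusted"): 1,      # internal movement: usually uninteresting
    ("untrusted", "untrusted"): 2,
    ("untrusted", "trusted"): 5,    # inbound from an untrusted segment
    ("trusted", "untrusted"): 8,    # trusted data landing somewhere untrusted
}

def score_data_movement(src_trust: str, dst_trust: str, megabytes: float) -> float:
    """Combine the trust-pair severity with the volume of data moved."""
    return SEVERITY[(src_trust, dst_trust)] * math.log10(1 + megabytes)

# A large transfer from a trusted server to the guest WiFi scores highest
print(score_data_movement("trusted", "untrusted", 500.0))
print(score_data_movement("trusted", "trusted", 500.0))
```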

This update is CCS and sensor based, and will be pushed to all managed customers at the pre-agreed upgrade time.  Self-monitored customers can update their own sensors and CCSs using the software upgrade process.  If you have any further questions about this upgrade please contact us at info@perceptioncybersecurity.com

Perception Update - Web Proxy

Perception removes the web proxy blindspot

This feature adds a layer in front of the classification engine which enables the actual session destination IP addresses to be resolved from the hostnames visible in the traffic monitored behind the web proxy.  The classification engine is then able to process session information as if the clients were communicating directly with the destination servers.

Monitoring networks where web proxies are deployed presented an issue: the actual destination IP addresses were hidden from the system.  Traffic monitored behind a web proxy always presents the same destination IP address, that of the web proxy itself, rather than the real destination.  This results in poor performance for network monitoring systems, because a significant chunk of data appears to be targeted at a single destination when in reality it is going to many different places.

Monitoring behind a web proxy may be the only available option for a given customer, as the proxy itself may be located on the internet (e.g. cloud-based proxies) and access to the output side of the proxy may therefore not be available.  This previously presented a potential blind spot to the Perception classification engine.

This update solves this problem by delivering accurate IP information to Perception regardless of proxy use.  As a result, Perception provides the same level of coverage and accuracy when used behind proxies as it does when deployed in a typical network.
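
Conceptually, the fix amounts to swapping the proxy's address for one resolved from the hostname the client actually asked for (for example from the HTTP Host header or TLS SNI).  The sketch below shows the idea only; it is not the actual sensor implementation, and the field names are invented.

```python
# Conceptual sketch of restoring the true destination behind a web proxy by
# resolving the requested hostname. Field names are invented for illustration;
# this is not the actual Perception sensor implementation.
import socket

PROXY_IP = "192.0.2.10"  # documentation-range address standing in for the proxy

def real_destination(session: dict) -> str:
    """Return the session's true destination IP, looking past the proxy if needed."""
    if session["dst_ip"] != PROXY_IP:
        return session["dst_ip"]               # not proxied: keep the observed address
    hostname = session.get("host_header") or session.get("sni")
    if not hostname:
        return session["dst_ip"]               # nothing to resolve from: leave unchanged
    try:
        return socket.gethostbyname(hostname)  # resolve the requested hostname
    except socket.gaierror:
        return session["dst_ip"]

session = {"dst_ip": PROXY_IP, "host_header": "example.com", "sni": None}
print(real_destination(session))
```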

This update is sensor based, and will be pushed to all managed customers at the pre-agreed upgrade time.  Self-monitored customers can update their own sensors using the software upgrade process.  Please note that Perception may need some extra configuration to function with proxy networks.  If you have any further questions about this upgrade please contact us at info@perceptioncybersecurity.com