One of the latest trends to hit the cyber security landscape is that of the Internet-of-Things (IoT) device. We take a look at what IoT really means, why it matters to us, and what can be done to protect against the new threat that it presents. 



In short, IoT refers to the many different types of ‘Smart’ devices that surround us in our daily lives. Figuratively, ‘smart’ means they are likely to be innovative and somehow make our lives easier than they were with the incumbent ‘dumb’ devices. Literally, ‘smart’ means the device has a computer in it.  Typical IoT devices that we are likely to see in our daily lives are:

  • Home Automation Systems - such as Wireless Thermostats and Intelligent Light bulbs

  • Wearable Devices - such as watches and health monitors

  • Internet Connected Electronics - Smart TVs, speakers, and virtual assistants like Amazon’s Alexa.

What these devices all have in common is that they run software written by humans, and since to err is human, they will have vulnerabilities that can be exploited. To make the situation even more problematic, these devices are connected to the internet, rarely have the capacity to run anti-virus software, and are developed with function, rather than security, as the priority. This means that these devices usually offer a better opportunity for malicious actors to exploit the device and get a foothold into your private world.  Additionally, IoT devices are often not centrally managed or monitored, which means that software security updates are rarely applied - that is, of course, if they are made available at all by the vendors.


“Connected devices will have vulnerabilities that can be exploited”


There are billions of IoT devices assisting businesses in their day-to-day activities. Many businesses are embracing the new opportunities that these devices bring; for example, road haulage companies can use IoT to track drivers’ locations and reduce insurance premiums. Other companies are utilising IoT devices in the domain of Building Management Systems, which includes control of heating/ventilation and site security (video cameras and door access systems, for example). There is also a plethora of devices within the enterprise that you may not think of as smart devices. For example, video projectors and TVs often have a network connection that could provide a malicious actor with the perfect backdoor and pivot point to move around your network environment.


“There are a plethora of devices you may not realise are ‘smart’ in the enterprise”


It is useful to outline a typical attack vector to demonstrate the vulnerabilities that exist within many businesses as a result of their IoT devices. For some background, most modern meeting rooms have either a high-end projector or a TV to enable the traditional PowerPoint presentations to be shown in all their glory. As such, companies have been moving to high-end consumer devices so their 60 inch displays and vibrant colours will wow customers and colleagues alike. However, many of these high-end devices are ‘Smart’ TVs whose software was developed to allow home users to stream video from the internet or catch up on the latest box sets. This means that they are running a full operating system that has been developed with consumer features in mind, with enterprise security a secondary concern.

In this scenario, let’s imagine that a smart TV has been installed in a board room for a year and has been disconnected from the internet. Within the last few weeks the TV has been displaying a message that the on-board software is out of date and urgently needs an update to improve security. Helpfully, a member of staff realised that this message was getting on the nerves of presenters and thought the easiest way to solve the issue was to plug the TV into the spare network connection sitting right beside it. This in itself is not an issue, as of course patching to the latest software is great security practice. Or is it?


“Plugging the smart TV into the network allowed it to install important security updates”

Behind the scenes, the smart TV now happily goes off to the internet and downloads a software update that enables a new feature of the device: voice recognition, for hands-free control of the TV. Voice recognition works by sending a stream of audio from the microphone on the TV to the internet (typically to a server geographically distant from where the TV is located), where the number crunching for the recognition is actually done; the results are streamed back to the TV to decide what operation to perform (change channel, volume up/down, etc.). Interestingly, this loss of control of data may be considered a breach (under GDPR, for example) depending on the data, its classification, and the regulations a company may need to comply with.

In effect, what you now have is a spy in the board room. Every conversation that you have in that room is now streamed to another company, potentially in another country, for detailed analysis. This seems a great way to lose important intellectual property or business-confidential information. And the risk does not diminish over time: there is also the potential for malicious software to identify this device, exploit any vulnerabilities that are present, and then pivot into the connected network, opening up a whole other set of risks.

This scenario outlines just a single case of how the advent of smart devices can open up a new attack vector within your business and additionally how hard it is to prevent this sort of threat being realised.  Before you think, “that will never happen to us,” we’ve seen this happen on more than one occasion.


Whilst we cannot cover all of the different IoT attack vectors (there are likely to be thousands) there are some steps that your business can take to reduce the risks associated with the rise of IoT devices.

Here are our top five things to think about when you are looking at protecting yourself from the IoT based threats:

  1. Know what devices you have in your business – at the end of the day you cannot protect what you do not understand. This means that you should be keeping an Asset Register/Inventory and network diagram of all devices in your company so you can look for vulnerable devices and weaknesses that present themselves.

  2. Training and Policy Definition – work with your team to recognise where the risks of smart devices lie. Specifically, tell users to check with IT before connecting new devices to networks or using company credentials to create accounts on IoT portals, and put policies in place to stop unauthorised devices being connected to the network.

  3. Invest in understanding your network and protecting it – a simple penetration test on the inside of your network can tell you a lot about what IoT devices you have, but this is fairly limited. Really, you want to be monitoring the network continuously, looking for threatening behaviours from new devices and unusual device behaviour, so you can assess the risk quickly and mitigate where necessary.

  4. Isolation of devices – design security in from the outset. Talk to your own departments and also subcontractors about whether they need to use smart devices and if so how they manage the security of the devices. Consider implementing network segmentation and multi-layered network protection, ideally by investing in a separate network that is dedicated to these types of device where they can be easily monitored and contained if required.

  5. Create policies that can be adhered to - don’t just ban IoT devices! The prevalence of IoT will mean that you will encounter them at some point and if you have not thought about risk mitigation then you will have an unpleasant surprise. Create some simple guidelines that users can follow to assist them in adding and managing IoT devices on the network.
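The first point can start as something very simple. As a minimal illustration, an asset register can be a mapping of approved device addresses, with anything seen on the network checked against it. The sketch below is purely illustrative; the MAC addresses and the `approved_assets` register are hypothetical examples, and in practice the observed list would come from an ARP scan or switch tables.

```python
# Sketch: flag devices seen on the network that are absent from the asset register.
# The MAC addresses and register below are hypothetical, illustrative examples.

approved_assets = {
    "aa:bb:cc:00:00:01": "Reception desktop",
    "aa:bb:cc:00:00:02": "Meeting room projector",
}

def find_unknown_devices(observed_macs, register):
    """Return observed MAC addresses that are not in the asset register."""
    return sorted(mac for mac in observed_macs if mac not in register)

observed = ["aa:bb:cc:00:00:01", "aa:bb:cc:00:00:99"]  # e.g. from an ARP scan
unknown = find_unknown_devices(observed, approved_assets)
print(unknown)  # any unregistered device warrants investigation
```

Even a basic check like this turns the asset register from a static document into something that actively surfaces the unknown devices discussed above.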

While not an exhaustive list, these simple points can significantly assist you in identifying and protecting yourself against new and emerging threats.


IoT devices need to be embraced; that way, they can be managed. Managing the implementation of IoT devices securely from the outset can save a lot of headaches down the line.


Predicting the future is difficult; however, some common near-term trends in IoT are:

  • Automation – devices interacting with each other to provide autonomous services. For example, your car will tell your home heating to turn up when the driver is close to home and has the car’s heating on high. These features are likely to be enabled out of the box, so it will be important to know what communications devices will carry out automatically before bringing them into a network.

  • Smaller and smarter – devices are likely to get smaller and more disposable and existing devices will become more powerful. Networks of devices will ‘mesh’ to provide more advanced computing power.  This will likely mean devices will become harder to track, and harder to discover on a network.

  • More vulnerabilities and exploits – as the complexity and prevalence of IoT devices increase, so will the ability to exploit the devices. As devices become more prevalent, this in turn will incentivise hackers to create more targeted malware to take advantage of this new generation of exploitable computers.

Innovate UK Turn to Perception for Their Essential Tips for Cyber Security

Innovate UK, the UK Government’s innovation agency, have curated a list of essential tips for cyber security for small businesses.


Innovate UK work with people, companies, and partner organisations to find and drive the science and technology innovations that will grow the UK economy.  Over the last 11 years they have invested £1.5 billion in innovation, working to determine which science and technology developments will drive future economic growth.  Alongside this investment work, they work closely with innovative companies to advise on how to improve their business.  With the growing threat of cyber-crime and intellectual property theft, the organisation decided to create a short list of easy-to-follow guidance on how companies can protect themselves from this threat.

To create their shortlist, Innovate UK contacted the cyber industry, thought leaders, and heads of digital risk.  After this process, they developed 4 key points for innovative companies to adhere to in order to improve their security:

  • Identify all possible threats
  • Make cyber security a business priority
  • Leverage existing schemes
  • Assume you’ll be hacked

Along with an in-depth article, which can be read here, they created a short-form animated video that’s simple to understand without requiring a detailed understanding of the cyber threat.

Perception’s team lead, Dan Driver, was contacted by Innovate UK during the preparation of this advice and was quoted in the explanation of the second point, “make cyber security a business priority.”

The point in question recommends that action is taken in advance of any attack: simple steps can be taken to reduce the chance of an attack taking place, or of data mistakenly leaving a network.  Furthermore, this proactive approach to network security can reduce the impact in the unlikely event that an incident does occur.

In the article, Driver said, “Don't wait for an incident to occur, act now to protect the network and assets within it.  Failure to do so can have significant impacts financially and impact the reputation of an organisation to a degree which they may not recover from.”

Both the article and the video are well worth a look and the advice, although seemingly basic, can go a long way to protecting a network.  Perception itself helps organisations move to a more secure and proactive network security model by informing the user not only of in progress attacks, but also points of weakness and poor internal user behaviour, to minimise the risks at their source.

There’s a good chance you or someone you know has mined cryptocurrency, and you may not have even been aware of it.

There are thousands of cryptocurrencies around today, following in the footsteps of the hugely successful Bitcoin, but they have really risen to prominence over the last 5 years.  Cryptocurrencies are, with few exceptions, decentralised digital currencies that don’t rely on a central administrator, where transactions take place directly between users.  Their prospect of being a worldwide currency with freedom of exchange and no control from governments or banks has made them massively popular, as they are theoretically immune from the instability of fractional reserve banking.

Bitcoin, the largest and most popular cryptocurrency has rapidly grown in value over the last few years, making mining more and more popular


Cryptocurrencies generally all function in the same way: a finite number of coins are ‘mined’ by computers solving difficult equations that get incrementally harder as the number of remaining coins reduces.  As a result, most mature cryptocurrencies like Bitcoin, Ethereum, Ripple, and Litecoin take an enormous amount of computing power to mine new coins.  For a typical person attempting to make money by creating new coins using a home PC, the cost of power is far greater than the value of the coins created.  However, utilising tools such as free sustainable energy powering advanced graphics cards or custom-built ASICs can make this a profitable activity.
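The “difficult equations” behind most mining are proof-of-work puzzles: finding an input whose hash starts with a required number of zeros, with difficulty raised by demanding more zeros. The toy sketch below illustrates the idea only; it is not any real coin’s actual algorithm, and real difficulty targets are vastly harder.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Toy proof-of-work: find a nonce so that SHA-256(block_data + nonce)
    starts with `difficulty` hex zeros. Expected work grows roughly
    16x for every extra zero demanded - this is why mining gets expensive."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# A low difficulty finishes in a fraction of a second on a home PC;
# real networks demand targets that take specialised hardware years alone.
nonce = mine("example block", 4)
print(nonce)
```

The exponential growth in expected work per extra zero is exactly why home-PC electricity costs outstrip the value of the coins produced.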

Which brings us onto the first example of mining cryptocurrencies you may have carried out.

Mining cryptocurrencies with proper authorisation.

There are a number of businesses that mine cryptocurrencies on an industrial scale, using custom built hardware and cheap or free energy.  They could try to find the most economical way of mining coins for profit in established cryptocurrencies, or they may be speculating and looking at the new and latest cryptocurrencies being released and estimating which ones will grow, and mine those while they are computationally cheap.

It’s not just dedicated businesses that do this; anyone can mine any cryptocurrency.  A single user may look to become part of a mining pool, where hundreds or thousands of different users share the computational effort of mining, and then share the spoils when a new coin is mined.  They could even single-handedly try to find a way to mine coins using power cheap enough that it’s profitable without the help of a mining pool.  Which brings us onto the next method of mining cryptocurrency that you may have encountered (but hopefully not).

Mining cryptocurrencies without proper authorisation.

Another way of reducing the personal cost of mining is to use power that you do not pay for.  This makes it free for the user in the most unethical sense of the word.

When Bitcoin first grew quickly in late 2013, it caught the eye of a large number of speculative miners.  In November 2013 one Bitcoin was worth $200, within a month it had surged to over $1000.  This was the start of a large amount of mining, as people scrabbled to find cheap ways to mine Bitcoin fast (incidentally this rush reduced the price, it didn’t return to $1000 until another large spike in early 2017).

It was at this time that people started using hardware or power they didn’t own to mine Bitcoin.  This is at best unethical and at worst illegal.  Last year Vladimir Ilyayev, a computer-systems manager for the New York City Department of Education, was fined for using his work computer to mine Bitcoins in 2014.  Users with access to large cloud computing platforms have also been using spare computational resources to do the same.  Even here at Perception we see cryptocurrency mining on corporate networks that should have nothing to do with cryptocurrencies or even finance.

In these examples, cryptocurrency mining is a policy violation on networks, but since early last year the growth in malicious use of mining has been massive.

There are a large number of cryptocurrencies available today, and people have used machines they don't own to mine them


Mining cryptocurrencies using malware.

Typically, malicious hackers make their living by holding organisations or individuals to ransom, stealing and selling data, or just buying easily liquidated goods using stolen information.

With the rise of cryptocurrencies however, one fact has opened up a new way for malicious hackers to make money: computational power can be directly exchanged for something of monetary value.  As a result, if hackers can create malware to leverage computing power, they can make money.

Although it had happened in minor cases earlier, this started in earnest in early 2017.  The most common examples use a tool called Coin Hive, a script originally designed for people to run on their own machines in order to become part of a mining pool as described above.  What malicious users do is hack into websites, install this script, and then any visitor to that site will be inadvertently mining cryptocurrencies.
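One simple defensive measure against this class of attack is to scan the pages you serve (or fetch) for references to known in-browser mining scripts. The sketch below is illustrative only: the indicator strings are a small, hypothetical sample, not a maintained blocklist, and a real deployment would use a curated threat-intelligence feed.

```python
# Sketch: flag HTML that references known browser-mining scripts.
# The indicator list is a hypothetical, illustrative sample - not a real blocklist.
MINER_INDICATORS = ["coinhive.min.js", "coin-hive.com", "cryptoloot"]

def mining_indicators_in_page(html: str, indicators=MINER_INDICATORS):
    """Return the list of mining-script indicators found in a page's HTML."""
    lowered = html.lower()
    return [ind for ind in indicators if ind in lowered]

page = '<html><script src="https://example.invalid/coinhive.min.js"></script></html>'
print(mining_indicators_in_page(page))
```

The same string-matching idea underpins how many ad-blockers and web proxies began filtering these scripts once the attacks became widespread.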

Multiple websites have fallen victim to this; in October 2017 the BBC reported that the websites of schools, charities, and file-sharing sites were running the script.  Even the Information Commissioner’s Office (ICO) had its website affected in February, somewhat ironically, given that it is the bastion of data protection in the UK.

As cryptocurrencies gain in value, the use of this type of attack will grow, since the rewards become greater.  Another massive spike in cryptocurrency value in December 2017 (Bitcoin rose to over $20,000 per coin at one point) only increased the number of cryptocurrency mining attacks observed.

But there could be a good reason to use these scripts on websites legitimately.

Mining cryptocurrencies on other users’ machines with their permission.

The internet is a colossal pool of information and content, but in the majority of cases, those who generate the content need to be compensated for their efforts.  Since the birth of the web, the way to do this has been advertising.  However, advertisements on the web have their drawbacks: not only can they be distracting for the user, they are also the most common vehicle for web-based cyber-attacks.  In many cases, ads served on websites can be used to execute malicious code on the viewer’s machine without their knowledge.  The consequence of these drawbacks has been the rise of ad-blocking software in browsers.  Due to the security concerns, many IT teams mandate the use of up-to-date ad-blockers on their organisation’s devices.

So where does the money come from when all the ads are being blocked?  Cryptocurrency mining could, oddly, be the answer.  Websites can ask users that have ad-blockers to run cryptocurrency mining scripts on their machines while they browse, as a way to bring in income to the website.  This has been in use for a while by cryptocurrency-focussed sites using tools specifically designed for this purpose, such as JSEcoin.  In February this year, however, the US news website Salon implemented a feature where it asked users to either deactivate their ad-blockers or mine cryptocurrency to access its content.  A site with approximately one million viewers a month can make approximately £75-100 per month using these tools, putting them behind traditional advertising by a factor of between 2 and 10 in terms of profitability, but these tools use lesser-known cryptocurrencies such as Monero, and the value could change very rapidly.

US news website briefly gave visitors the option to allow Salon to use their machines to mine cryptocurrencies in lieu of seeing advertisements on the site


It’s not just websites that are looking towards mining cryptocurrency with the user’s permission.  This month, the popular third-party Mac calendar app ‘Calendar 2’ gave users the option to unlock premium features (worth around £15) by allowing the app to mine cryptocurrency.  Unfortunately, the execution didn’t go entirely to plan and the app mined cryptocurrency even when users opted out.  The developers, Qbix, have since removed this version of the app, but it does give us a look into a possible future where users sell their unused processing power for software.


So in conclusion, someone on your network may be intentionally mining cryptocurrencies, inadvertently mining cryptocurrencies, or permitting a third party to use their machine to mine cryptocurrencies.  This isn’t likely to stop anytime soon, so it may be worth finding a way to detect when it’s happening.

Questions every network security professional should ask themselves when setting up layered network protection.

Any information security strategy must be defined to support the growth and direction of the organisation.  This strategy should look at all the risks that may impact the organisation and implement controls to mitigate those risks.  Today, these risks are far more diverse and varied, and as such a mix of technical and non-technical controls is needed to safeguard the business, its data, and its ability to operate.  It is critical to develop a strategy that mitigates or transfers as much risk as possible while keeping cost and disruption as low as reasonably possible.  As a result, multiple different security measures need to be combined to mitigate the relevant risks as efficiently as possible.  Every measure will naturally have its blind spots and weaknesses, and each of these must be covered by another system.  Understandably then, when setting up a network security system, the risks, threats, and impact must be understood in as much detail as possible, and controls applied only where it makes financial sense and/or there is a regulatory demand.

So we have a multi-product, layered approach to network protection, but there are still some serious questions that must be asked when deploying these solutions across physical security, technical security, and administrative measures.  This article was written to collate some of those questions that might be forgotten during this process.



Physical controls are a first line of defence and range from access controls such as doors, locks, passwords, signage, and security guards to site facilities such as power, HVAC, and resilient services to ensure that service remains uninterrupted.

Do I know who is accessing my physical network?

In many businesses it is all too easy to walk into a room and plug into a spare RJ45 network connection on the wall, potentially gaining a vantage point into your network. It is important to understand what is patched where, and to properly disconnect or limit access to physical connections. In some cases a physical audit may be necessary to confirm that what you think is plugged in is actually plugged in.

Do I have a way of controlling access to my physical network?

Nearly every IoT device seems to have a connection to the internet these days, and many devices have a physical RJ45 network connection. Smart TVs, for example, we often find beaconing back home with potentially sensitive information. It is important to ensure you have some form of policy on the connection of new devices to your network, which may include a risk assessment of what the device has access to and whether it should actually be allowed.

How would I know if physical security measures have been breached?

This is a difficult question to answer, but the best way to test how prepared you are is to ‘red team’ your site, inviting teams of people in to the business to see how much of the business they can access, what information they can get out of the organisation, and how far an unauthorised person can get within your site before you are alerted to their presence.  Even beyond these tests, it is important to understand how you could tell if someone is on your site who shouldn’t be, whether it’s by detecting them accessing your IT infrastructure, or physically detecting them walking around.



Technical controls, whether active or passive, can be implemented to enforce, monitor, and understand an environment.  In modern businesses, the biggest risk is often the loss of data or service on IT systems, which means businesses will focus on IT-related technical controls such as firewalls to protect the perimeter, IPS/IDS to identify attacks, proxy servers to monitor and control internet usage, and endpoint protection to safeguard user devices, whether from loss, attack, or intentional deviation from policy.

How many technical controls do I really need?

The quantity of technical controls available is vast, and the degree of active enforcement is dependent on the risk and the policies of each organisation.  How many are deployed largely rests on balancing risk and investment; the best way to approach this is to deploy more than expected initially, then review the deployment, see how much value each system is delivering, and work backwards from there.

Which layers of security require technical controls?

Technical controls can be used at all layers of network security: active preventative controls that stop a detected threat; containment that identifies a threat and quarantines it; detection and reporting that allow for analysis; and recovery and restoration should it be necessary.   Network monitoring systems can complement these technical controls by offering passive detection and monitoring of network behaviours.  This allows analysts to better understand the actions of a device or user, using the data to identify risks and proactively mitigate them, as well as to understand what has happened should an incident occur.
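As one concrete example of passive behavioural detection, a monitoring system can flag hosts whose outbound connections occur at suspiciously regular intervals (classic malware or device “beaconing”). The sketch below is a simplified heuristic of our own, assuming connection timestamps (in seconds) have already been extracted from flow records; real systems use far more robust statistics.

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, max_jitter=0.1):
    """Heuristic: connection times are 'beacon-like' if the gaps between
    them are nearly constant (std dev small relative to the mean gap)."""
    if len(timestamps) < 4:
        return False  # too few samples to judge regularity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) <= max_jitter * mean(gaps)

# A device phoning home roughly every 60 seconds vs. bursty user traffic.
print(looks_like_beaconing([0, 60, 120, 181, 240]))  # near-regular intervals
print(looks_like_beaconing([0, 5, 90, 95, 300]))     # irregular intervals
```

A flag like this is not proof of compromise on its own, which is why such detections are best treated as behaviours to be correlated with other evidence.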



Administrative controls can have a massive effect on the effectiveness of information security strategy, but how effective these controls are varies greatly across organisations based on how they are implemented.

To what extent can administrative controls remove the need for technical controls?

Deploying policies can remove the need for a number of technical controls; however, some policies can be pervasive and enforced using technical measures such as group policy (e.g. change password every 30 days), while others cannot be enforced with technical systems (e.g. no system changes during the Christmas shutdown).


Do I have a way of knowing when administrative controls aren’t effective?

Deploying solutions that can show how many users are not adhering to training, or how many policies are being breached and how regularly, can point you towards simple measures such as retraining or policy renewal to improve information security.  Network monitoring systems that can tell the user how many people are breaching policy, for example, can inform a system admin that they may need to deploy systems to stop these policy breaches from happening.  A good example of this particular issue is monitoring the use of cloud storage solutions that breach policy; if this is happening often, perhaps it’s time to deploy a private cloud storage solution?
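The cloud-storage example can be measured with something as lightweight as counting disallowed-domain lookups per user. The sketch below is illustrative: the disallowed domain list, user names, and the `(user, domain)` log format are all hypothetical assumptions, not any particular product’s data model.

```python
from collections import Counter

# Hypothetical policy: these consumer cloud-storage domains are disallowed.
DISALLOWED = {"dropbox.com", "drive.google.com"}

def policy_breaches(dns_log):
    """Count disallowed-domain lookups per user from (user, domain) records."""
    return Counter(user for user, domain in dns_log if domain in DISALLOWED)

log = [
    ("alice", "dropbox.com"),
    ("bob", "example.com"),
    ("alice", "drive.google.com"),
]
print(policy_breaches(log))  # frequent offenders may signal a policy or tooling gap
```

If the counts are high across many users, that is the signal, per the point above, that retraining or an approved private cloud storage alternative is needed.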

Perception and the Cyber Security Challenge Face to Face, Roke Manor, 7th July 2017

This post was originally created for assessors, organisers, and participants of the challenge.  If you'd like to be sent an electronic copy containing full-size images, please contact us.


Roke Manor Research Limited hosted a Cyber Security Challenge event on Friday 7th July 2017 in which a scenario was created for teams of participants to understand the vulnerabilities in a fictional company’s internet-of-things products.  In order to understand the events of the day, the Perception Cyber Security team were asked to deploy a Perception sensor to the challenge network to record all network activity that occurred during the challenge, both live as it happened as well as for later analysis.

The purpose of this document is to describe the activities seen by Perception throughout the course of the challenge, as a way of demonstrating the simplicity and coverage of a Perception deployment.

About Perception


Perception is a network security tool designed to give an analyst complete visibility of their network and potential threats that they may face.  Perception was initially designed by Roke Manor Research Limited (Roke) for the Defence Science and Technology Laboratory (Dstl), part of the UK Ministry of Defence (MOD), in order to detect anomalies on a network.  After successfully trialling the prototype systems, Perception was developed into a full product that combines multiple cutting edge technologies with the original anomaly detection system to provide one of the most advanced network security capabilities in the world.

Perception can be broken down into 3 distinct parts:

Data Collection

Using data collection technology initially developed by Roke for Lawful Intercept (LI) purposes for law enforcement agencies, Perception collects and analyses all network traffic at the core of the network at very high speeds.  This ensures the system has the best data pool to work from in order to make logical decisions later on.  Although an analyst is unlikely to pore over this low level information, this information is available to the user for analysis and incident response activities.

Behavioural Classification

By using Roke's expertise in cyber research for national security agencies worldwide, behavioural classifiers were developed that would understand the context of communications passing over the core of the network.  This is done by using a combination of anomaly detection, deep packet inspection, and database querying, rather than a single technology.  Looking at traffic behaviourally, rather than using signatures of known threats, is useful because it allows the system to identify threats without any prior knowledge of how they work.  The user is able to see a complete list of behaviours on the network in order to understand what may be threat like, indicative of misconfigurations, or indicative of vulnerabilities.

Artificial Intelligence

The final part of Perception is an Artificial Intelligence (AI) that constantly looks for correlations between the behaviours being stored on the system.  This AI is constantly being updated to mimic the activities of an analyst, in order to automatically and immediately identify links between multiple behaviours in order to detect vulnerabilities and threats.  This AI vastly reduces the time burden on analysts who would normally have to manually find linked behaviours, and allows Perception to alert with a very high detection rate and an incredibly low false alarm rate.
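The correlation step can be illustrated in vastly simplified form as grouping detected behaviours by host and raising an alert when one host accumulates several distinct behaviour types. To be clear, the sketch below is our own illustrative toy, not Perception’s actual correlation logic; the behaviour names and threshold are hypothetical.

```python
from collections import defaultdict

def correlate(events, threshold=3):
    """Toy correlation: alert on hosts exhibiting `threshold` or more
    distinct behaviour types. Illustrative only - not the product's
    actual AI, which learns links between behaviours over time."""
    by_host = defaultdict(set)
    for host, behaviour in events:
        by_host[host].add(behaviour)
    return {host for host, behaviours in by_host.items() if len(behaviours) >= threshold}

events = [
    ("10.0.0.5", "beaconing"),
    ("10.0.0.5", "port-scan"),
    ("10.0.0.5", "new-external-host"),
    ("10.0.0.9", "port-scan"),
]
print(correlate(events))
```

Even this crude version shows the value of correlation: a single port scan is noise, but one host exhibiting several distinct suspicious behaviours is worth an analyst’s attention.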

By combining these key technologies, Perception can rapidly draw a user's attention to indicators of threats, compromise, and vulnerabilities so that network security issues can be addressed before they become a serious problem.  The behavioural nature of the system allows Perception to detect zero-day threats without any prior knowledge of the malware, as well as detecting user error or malicious user behaviour that provide significant detection problems for firewalls and antivirus systems.  The ability for an analyst to identify misconfigurations or vulnerabilities represents a general theme within the network security industry to move towards a more proactive approach to the problem of protecting networks, closing vulnerabilities before they are exploited by an attacker, rather than just responding to threats as they happen. 

Perception sensors are easily deployed, each consisting of a 1U rack-mounted device.

Perception and the Cyber Security Challenge

The Challenge itself is one of many events set up by Cyber Security Challenge UK, a not-for-profit organisation with the aim of bolstering the national pool of cyber skills.  As sponsors of the event, Roke agreed to host this particular event, testing 42 participants from around the UK.  Challengers were selected from a larger group of applicants who had successfully completed pre-event challenges, and none were working in the network security industry at the time.

The Roke organisers of the Cyber Security Challenge Face to Face (F2F) event contacted the Perception team to discuss using Perception as the “all seeing eye” overseeing the challenge as it took place. With the high volume of hacking activity taking place on the day, it was vital for the assessors to have a tool that could quickly identify and home in on participants' actions.  The assessors were tasked with ensuring the rules of engagement were adhered to and that claimed courses of action could be validated, and Perception was used to carry out this task.

The Perception team were keen to exercise Perception on a network with such a high volume of potentially malicious activity.  They were also interested in better understanding which behaviours would be triggered where Internet of Things (IoT) devices were deployed.  Based on the brief provided by the Cyber Security Challenge team, the Perception team's main objective was to alert the assessors to any rule breaking as it happened, thereby demonstrating Perception's ability to detect proactively.  In addition, the Perception team sought to provide detailed post-event analysis of the day's activities, giving the assessors the evidence needed to back up claims made by the participants.

Differences from a Real-World Deployment

The Cyber Security Challenge was an unusual deployment for Perception, which is typically deployed within the networks of commercial organisations.  Whereas normally Perception would sit at the core of a network with multiple users carrying out their normal day-to-day business, in this scenario it was deployed on a tiny network hosting a large number of hackers, a number of IoT devices, active malware, and no normal user traffic.  Although it is worth noting the difference between this scenario and a standard deployment, there are salient commonalities and threat scenarios present in both Perception's natural habitat on a commercial network and the Cyber Security Challenge F2F's infected, hacker-dense, IoT-focussed network.

Firstly, the activity of a large number of hackers allowed Perception to prove that it could detect the activities of many attackers and multiple pieces of active malware, rather than a single attacker or piece of malware.  Although Perception is well suited to networks of all sizes, it is very seldom deployed on networks where multiple malicious actors are present simultaneously, and this gave it the opportunity to demonstrate that even in extreme circumstances it could still accurately detect multiple threat sources.  In real networks there have been instances of an attacker infecting a high number of devices simultaneously in order to cause maximum damage or to hide their true intentions, and it is important to the Perception team, as designers of network security systems, to demonstrate they have a tool that can handle these types of scenario.

Secondly, IoT devices are usually thought of as being used in homes rather than businesses.  This is only partly true: a huge number of businesses deploy network-attached devices such as smart TVs, IP cameras, and access control systems in their offices.  The management of these devices is usually seen as the responsibility of a facilities department, which typically means they aren't subject to the same security and software update controls that would be enforced by an IT team.  Even amongst Perception's current customers it has detected IoT devices running old, potentially vulnerable software, and as IoT devices become more mainstream this is only going to become more of an issue.  It is a very common occurrence in today's security landscape for an IoT device to be the first point of infection within a network, due to poor design, a relative lack of security updates, and the inability to install anti-virus software on the devices.  The Cyber Security Challenge F2F event was a perfect opportunity to show Perception can identify these types of threats where other protections aren't suitable.

The logistics of running a challenge of this type also raised some minor differences, for example the networks themselves were not connected to the internet, providing a safe environment for the event.  Participants were only allowed to use the provided Internet laptop for research on a separate Internet connected network outside of the challenge network.

About the Cyber Security Challenge Face to Face Event


The Cyber Security Challenge F2F event was a day-long event based around a smart home.  A fictional IoT device manufacturer, EKOR, had heard reports that some of its devices were less secure than initially thought.  During the course of the day a malicious actor would exploit the home network, using these exploits to physically break into the home by hacking the smart lock on the front door.  The attacker would achieve this by exploiting a vulnerable server and then using a separate vulnerability in the update mechanism to deploy malware to the IoT devices in the victim's home.

Seven teams of six participants each were ‘hired’ by EKOR to find system vulnerabilities and feed back what EKOR should do to solve the issues, preferably before the attacker gained access to the home. The participants were briefed that EKOR suspected there were vulnerabilities in its products, but had no information on what activity was to happen on the day.  Their activity would include looking at the EKOR network and how the smart devices worked in order to understand what the vulnerabilities might be. The teams were against the clock to get the information to EKOR, as there was a set time at which the attacker would break into the home.

Perception Deployment on the Cyber Security Challenge Network

Each of the seven teams had six laptops to work on (one for each participant) and a scaled-down version of EKOR's smart home products: a hub, a light, a door lock, and a camera.  All of these devices were connected to a switch specific to that team.  Finally, each team was given an internet-connected laptop, separate from their switch, so they could look up anything they needed to.  The seven team switches then fed into an eighth ‘core’ switch.

Simulated EKOR internal servers and other simulated external servers were also connected to each team switch to give the illusion of a real-world network, as well as to facilitate the activity planned for the day.  This gave the teams a realistic environment to work with while ensuring isolation between teams.  Other than the separate internet-connected laptops, the challenge was conducted on a standalone network with no connection to the Internet.  Any IP addresses, domain names, etc. used for the ‘external’ devices were purely fictional.

The Perception sensor took a SPAN feed from the core switch, meaning it could monitor all activity on the network.  The sensor then used a virtual private network (VPN) to communicate with the Perception Central Correlation Server (CCS), which aggregated behaviours and displayed them in the UI for the Perception team to view.  The CCS can be hosted locally or remotely; in the interest of keeping the challenge network as simple as possible, it was deployed remotely and reached via a VPN in this instance.

A live stream of the Perception UI was shown in the lobby of the event location alongside the assessors, allowing rapid communication between the Perception team and the assessors about breaches of the event's rules and the teams' progress during the day.

As it Happened


Stage 1

During the first stage, teams were asked to use provided tools and documentation to gain an understanding of the EKOR network. They needed to request access to certain compressed (.zip) files and packet capture (.pcap) files which gave vital information about their network. Using these files they should have gained a good understanding of the devices as well as how the network behaves. The packet captures provided were designed to give the teams an indication of which servers might be vulnerable.

Perception Analysis

During this stage Perception discovered data being transferred from EKOR servers to the teams' devices as the teams downloaded the packet captures and the .zip files. By analysing this data the Perception team could see which teams were further ahead and which needed further guidance. The judges also used this information to identify who had broken the rules of engagement by downloading information prior to being granted permission. Data was transferred from the EKOR servers to the teams via an unencrypted service, HTTP over port 80.  Since Perception captures a sample of the packets passing across the sensor, it was possible for the analyst to view the actual file details and confirm their contents.
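To illustrate why an unencrypted service makes this kind of inspection possible, the sketch below pulls basic file-transfer details straight out of a cleartext HTTP response payload. It is a simplified stand-in for what a packet-sampling analyst can see on port 80, not Perception's actual parsing code:

```python
def summarise_http_response(payload: bytes):
    """Extract basic details from a cleartext HTTP response payload.
    Returns None if the bytes don't look like an HTTP response
    (e.g. TLS traffic, which starts with a binary record header)."""
    if not payload.startswith(b"HTTP/"):
        return None
    # Everything before the first blank line is the status line + headers.
    head, _, _ = payload.partition(b"\r\n\r\n")
    lines = head.split(b"\r\n")
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(b": ")
        headers[name.decode().lower()] = value.decode()
    return {
        "status": lines[0].decode(),
        "content_type": headers.get("content-type"),
        "length": headers.get("content-length"),
    }
```

With HTTPS the same payload would be opaque ciphertext and the function would simply return None, which is exactly why the cleartext transfers at the event were so easy to audit.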

Figure 1: This screenshot of Perception’s UI shows .zip files and packet captures being downloaded by one of the teams

1) These micro-controls in the header provide a quick reference to the key metrics for the event such as source and destination of the data transfer, the number of sessions over which the transfer was made and the data volumes in both directions.  The button on the far right downloads the actual packet captures so they can be viewed in a packet analysis tool such as Wireshark.

2) This Data Transfer diagram shows the direction of the connection, the service used for the transfer (HTTP port 80) and the number of sessions used between the source machine (left green box) and the destination machine (right green box).  The larger orange bar shows the high volume of data downloaded relative to the low upload volume indicated by the thin blue bar.

3&4) These bar charts show the volumes and duration metrics of the transfer.  These charts are particularly useful when analysing data transfers over multiple sessions.

Stage 2

Teams were then given disk images that they could run, only some of which had been infected with malware. This should have given the teams some idea as to what the malware does as it becomes active on the network. The teams were then allowed access to EKOR’s software code base, allowing for manual code review to look for vulnerabilities in the systems. The malware would connect out to an external Command and Control (CnC) server to receive instructions on what to do.  Over a half-hour period the malware began to turn the lights of each team’s scaled-down EKOR products on and off.  This should have indicated to the teams that the malware was present on the network as well as indicating which devices were infected.

Perception Analysis

Once the malware became active on the network, Perception saw connections to the CnC server. This allowed the Perception team to understand which devices were infected.  On a real-world network this information would be an indicator of compromise, enabling the analyst to see which other devices were connecting out to the malicious server, and therefore which devices had been infected.
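One generic way to separate machine-like CnC beaconing from human-driven traffic is to look at the regularity of connection intervals: malware tends to call home on a timer, people don't. The heuristic below is a hypothetical sketch of that idea, not a description of Perception's classifier, and the thresholds are illustrative:

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, max_jitter=0.1, min_connections=4):
    """Heuristic: near-constant gaps between connections to an external
    host suggest automated CnC beaconing rather than human browsing.
    timestamps: sorted connection times in seconds."""
    if len(timestamps) < min_connections:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    # Coefficient of variation: low value means very regular intervals.
    return avg > 0 and pstdev(gaps) / avg < max_jitter
```

Collecting the set of internal hosts whose traffic to one external address passes this test would give the same "which devices are infected" view described above.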

Figure 2: This screenshot of Perception’s UI shows a behaviour that indicates an internal device has connected to an external device, in this case a compromised device connecting to the malicious CnC server.

1)  From these micro-controls in the top bar it is easy to identify the source and destination IP addresses, device hostnames, the service (port) being communicated with, and the number of other hosts talking to that same service.

2) This network diagram shows a source host (black circle) on the internal (trusted) network communicating with a destination host (red dot) on the external (untrusted) network.  This diagram also shows a number of other hosts on the internal network, (green dots) also communicating with this external device.  This is useful for quickly identifying which other devices have connected to this external host.

3) The summary information here identifies the key attributes of the main communication between the internal and external hosts, namely the IP addresses, hostnames, number of sessions, and number of other hosts connected to the same destination.


Stage 3

The first task of the afternoon was to begin penetration testing, the process of actively testing devices for potential vulnerabilities.  Teams were supplied with rules of engagement and were expected to ask for permission before actively communicating with the devices under assessment; this is typical in a penetration test, to ensure there is no unwanted impact on service.  Permission was granted, providing a narrow subnet to test against.  This stage consisted of a lot of information gathering by the teams, using techniques such as port scans to identify active systems within that subnet and the services they might be running.

Perception Analysis

During this stage Perception reported, somewhat unsurprisingly, a large number of scanning activities within the specified subnet.  In addition, Perception was able to spot teams scanning wider than the subnet specified by EKOR, along with any teams scanning before EKOR had given permission for the penetration testing to begin.
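Both checks can be sketched generically: count the distinct ports a source touches on each destination, and test each scanned target against the permitted range. The function name and threshold are illustrative, and the subnet used below is hypothetical (the event's actual subnet isn't given here):

```python
from collections import defaultdict
import ipaddress

def scan_report(flows, allowed_subnet, port_threshold=100):
    """Given (src_ip, dst_ip, dst_port) flow records, identify sources
    whose behaviour looks like a port scan, and note whether each
    scanned target sat inside the permitted subnet."""
    allowed = ipaddress.ip_network(allowed_subnet)
    ports = defaultdict(set)
    for src, dst, dport in flows:
        ports[(src, dst)].add(dport)
    report = []
    for (src, dst), seen in ports.items():
        if len(seen) >= port_threshold:  # many distinct ports -> scan-like
            in_scope = ipaddress.ip_address(dst) in allowed
            report.append((src, dst, len(seen), in_scope))
    return report
```

An out-of-scope entry (`in_scope == False`) corresponds to the rule breaking Perception surfaced at the event: a scan that was technically competent but outside the agreed range.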

Figure 3: This screenshot from the Perception UI shows a behaviour that indicates a port scan has occurred.

1) These micro-controls in the top bar show the key information about this behaviour, the source, number of destinations – separated by the defined network range, and data volumes.

2) This network diagram shows an internal host (black circle) communicating with one other internal host (green dot) on 503 unique sockets (number on the green line) using over 500 ports (number in the green dot).

3) This summary information shows the overall number of sessions generated by this host is 999, all of which were reset by the server.

4) This details table shows each scanned port as a separate row as the source device cycles through the available port range on the destination.  The analyst can easily use this table to identify any ports that elicit a response by sorting the table by TCP Flags B>A.

Stage 4

EKOR released more packet captures to participants, displaying activity from the malware and allowing the teams to gain more information about the malware and the CnC server. Some teams may already have been aware from the captures that there were more systems in the upper parts of the subnet range. At this point teams were expected to request authorisation to start penetration testing on the wider subnet, having discovered that there might be a vulnerable server outside the permitted range. Once requested, EKOR gave permission to scan the wider subnet; if teams had not found the vulnerable devices, EKOR eventually requested that a wider subnet be penetration tested. On the wider subnet there was a legacy server that had not been disconnected from the network when its replacement was commissioned. The legacy server contained vulnerabilities that allowed it to be exploited, letting an attacker steal the password database for offline brute forcing.  EKOR had used the same administrator password on both servers, so by gaining access to the legacy server the attacker could reuse what they learned to access the new server.

Perception Analysis

Perception observed the .pcap files being downloaded from EKOR's file share at the point they were released. Perception then raised events similar to those in the last stage, indicating scanning activities but this time on the wider subnet. These events were used to verify with the judges whether teams had prior permission to run the scans on the wider subnet.

End of the Challenge

The last stage of the challenge was for the participants to verbally present their findings to EKOR.  These would have included information about the vulnerable server, the malware deployed, and urgent remediation activity required to solve the issue.


Throughout the day Perception was monitoring the network to ensure that all teams followed the rules and to help with the scoring of teams. Examples of behaviours spotted included port scans before permission was given, scanning of systems that did not belong to EKOR, and not asking for the passwords for downloaded files.

-          Each team was required to gain permission from EKOR before scanning any device on the network.  During the morning no team was expected to be scanning the network; their aim was to gain information from the documentation provided by EKOR. Throughout the day there were many scanning activities captured by Perception, which allowed checking that the teams generating these behaviours had asked for the relevant permissions. Some teams had asked EKOR's permission to run scans but were only given permission for a small subnet of IP addresses. This meant Perception saw two types of rule breaking: scanning without permission, and scanning a wider subnet than permitted.

-          Teams were allowed to download documents from the EKOR file server to help them throughout the task, however these documents were password protected and access to them required asking EKOR for the passwords. This allowed the Perception team to check with the judges whether teams had asked for the passwords once they had downloaded the files. If these passwords had not been requested it may be assumed the teams used different means to open the document and thus broke the rules of engagement.

-          Some teams also began to scan a Domain Name System (DNS) server that did not belong to EKOR, which would have broken the penetration testing rules of engagement. Perception raised this event, which was then forwarded to the judges, giving them valuable information they may not have had access to otherwise.  The participants were not working full time in network security roles at the time, and perhaps were not used to the stringent rules network security professionals are subject to in the real world.  Actively trying to detect vulnerabilities in devices where the owner has not granted permission (such as this DNS server) is an offence under the Computer Misuse Act.

Figure 4: This screenshot from the Perception UI shows a behaviour that was generated when a team started port scanning an external DNS server

1) This behaviour is almost identical to the other port scan shown in Figure 3, however the IP address here shows that this port scan was carried out on a device outside of EKOR’s network range.

Actions of malicious third parties on the network

Behaviours from third party hosts were identified by Perception early in the task. These included CnC connections from the infected IoT devices once the malware became active on the network. Perception raised events indicating which IoT devices had connected to the third party CnC server; these showed that at least one device from each team connected out to it. If the participants had had access to a Perception device they would have been able to verify instantly which devices were connecting to the CnC server, and therefore which were compromised, substantially reducing the time taken to investigate the problem.  Likewise, if Perception had been used as a vulnerability detector by EKOR, it is unlikely these issues would have been open for long at all, since Perception is designed to draw attention to vulnerabilities before they are breached.


The Cyber Security Challenge as a whole was a huge success.  The organisers were pleasantly surprised by the outstanding capabilities of the participants, and the event as a whole points to a bright future for network security professionals within the UK.  The format of the event, based around the growing threat of IoT devices, was a welcome change from similar events held in the past and tested aspects of the participants' capabilities that perhaps hadn't been scrutinised before.  Although some rules were broken along the way, the event gave the participants an opportunity to make these sorts of mistakes in a ‘safe’ environment while honing their skills as security professionals.

Teams were scored accurately based on good behaviours shown and marked down where necessary when rules of engagement were broken. Perception assisted the judges in making these decisions by providing them with definitive proof of an activity that occurred and identifying the teams involved. In cases where teams denied any rule breaking, Perception was able to provide a record, often in the form of actual packets collected from the network, showing what they had done.  Perception observed a huge number of behaviours on the network and correlated them into an actionable format to ensure its users could work efficiently. Perception performed very well in this environment due to its ability to begin identifying behaviours and generating alerts almost immediately with little or no configuration; this was vital given the short duration of the event.

Alex Collins, who helped organise the event for Roke, commented, “Perception provided excellent network visibility throughout Roke’s Cyber Security Challenge. Perception discovered the malware on the compromised devices and enabled us to quickly detect, investigate, and understand the activities of contestants throughout the day, as they tried to assess the security of our fictional Internet of Things product line and services.”

To learn more about Perception, please contact us.

Collated and written by:

James Crawford, Perception Analyst

Glynn Barrett, Perception Software Engineering Team Lead

Dan Driver, Head of Perception


The Perception team would like to extend their sincerest thanks to the Cyber Security Challenge for the event itself and the provision of assessors, as well as to Roke Manor Research Limited for putting the F2F event together, we know how much of an epic task this was.  They’d also like to give their utmost congratulations to all participants that took part in the event, their skillset was truly incredible, even more so given a relative lack of experience in the field, and it was an absolute pleasure to spy on you all for the day.

The Problem with Weaponised Malware

In May we wrote a simple explanation of the WannaCrypt malware, and part of that article described how the self-replicating worm that made the malware so prolific was developed by the US NSA for national security purposes.  This act of creating malware as a weapon to be used by governments raises some significant security issues that need to be looked at closely, especially given the backdrop of national security.

Weaponised Malware

What's the big problem?


Weaponised malware refers to the creation of malicious software tools used to attack network assets.  Many security researchers consider 'weaponised' a misnomer; even the NSA itself stated that the exploits it created were purely for surveillance purposes.  Although this may well be true, the fact that malicious technology was created and could theoretically be used in an attack scenario can be considered more than enough justification for the term 'weaponised'.


The question of why a government needs weaponised malware is one that anyone outside of the national security services is unlikely to be able to answer.  Without understanding the risks faced by a given agency, we cannot properly judge whether the countermeasures used against them are proportionate.  Many groups have, however, speculated about the exact nature of these threats, but as cyber security professionals rather than socio-political experts we are not in a position to discuss these aspects in an informed way.  Suffice it to say that agencies such as the US NSA consider the development and deployment of weaponised malware a valuable asset in their armoury to defeat these threats.

So What?

Asking why we should care if this is going on is a perfectly reasonable question, and one that was answered quite poetically by the WannaCrypt attacks in May.  That attack, which caused millions of pounds' worth of damage worldwide and potentially put thousands of lives at risk in NHS hospitals in the UK, was a result of malware developed by the NSA.  Weaponised malware is something we should all care deeply about, and its effects are only going to get more damaging as we move to an ever more connected world.  If governments are to develop weapons of any type, should they be deployed if they could potentially cause damage on this scale?

What Needs to Change?

Judging by the sheer number of exploits leaked last year by the ShadowBrokers, a lot needs to change.  Weaponised malware should be treated like any other weapon: kept under lock and key and used only by those authorised to do so.  The unique nature of software weapons, however, makes this problem far more difficult than with any other type of weapon.  Software can be stolen without removing the original, and it can all be done remotely.  Governments cannot simply hide weaponised malware as they do other weapons; it is not a nuclear weapon that can be placed on a submarine, or a rifle locked in an armoury.  When considering how any weapon is secured, how easily it can be stolen must be part of the security process.  Do governments need to invest as much in the network security of weaponised malware as they invest in the secrecy surrounding the locations of nuclear weapons?

The NSA got off easy when the ShadowBrokers leaked the malware.  Under no circumstances can cutting-edge weaponry be allowed to fall into public hands, regardless of whether it is software or not.  The scale of this leak was never fully appreciated until the WannaCrypt attacks occurred, and even then, had a more capable attacker wanted to cause real damage, it could have been a lot worse.

The more contentious suggestion of what needs to change is whether the developers of weaponised malware should inform the creators of the target systems of the vulnerability.  That is to say, if a government agency discovers a flaw in a piece of software that allows it to attack an enemy, does it tell the manufacturer of that software immediately?  On one hand, disclosure would essentially sterilise the malware (for example when Microsoft released patches for the versions of Windows affected by WannaCrypt); on the other, networks around the world are made more secure by having fixes rolled out via updates.  This is the dilemma that national security agencies worldwide need to consider very closely, and it is the real question at the heart of this problem.  There will be instances where they fall on one side of the fence and instances where they fall on the other, and both situations will have valid arguments for and against.  In the end all we can conclude is that this issue has a very large moral grey area at its heart, and it is not going away any time soon.

Cyber Insurance is Changing, Here’s How You Can Lower Your Cyber Security Insurance Premium

The number of companies in the UK investing in Cyber Insurance cover is rising fast, and is rapidly becoming a necessity for any business.  As these policies become more popular, they are also under more and more scrutiny, with not only the number of claims increasing but also the number of disputed or denied pay-outs.  With the scope of cyber security being so broad and often misunderstood, underwriters of policies are often working with far less information when valuing premiums compared to other types of insurance policy such as motor or health plans.

So how are these premiums calculated?  Currently there are two ways: one based on a percentage of total revenue (the easy way), and the other based on the perceived risk to the business (the not-quite-as-easy way).  However, with the latter only taking into account assumed reputational harm and immediate financial implications, rather than quantifying the actual likelihood of a breach, there is little impetus for businesses today to improve network security in order to reduce premiums.  This is the equivalent of a dangerous driver investing in more comprehensive cover rather than improving their driving, or a heavy smoker buying more health insurance instead of stopping smoking.
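The two pricing models can be sketched in a few lines. Every rate and figure below is purely illustrative and not drawn from any real policy:

```python
def revenue_based_premium(annual_revenue, rate=0.002):
    """The 'easy way': a flat fraction of turnover.
    The 0.2% rate is a made-up example, not a market figure."""
    return annual_revenue * rate

def risk_based_premium(expected_loss, breach_probability, loading=1.3):
    """The 'not-quite-as-easy way': the expected cost of a breach,
    scaled by a loading factor covering the insurer's overheads and
    margin.  All inputs here are hypothetical."""
    return expected_loss * breach_probability * loading
```

The point of the contrast: only the second model rewards a business for actually reducing its breach probability, which is exactly the incentive the article argues is currently missing.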

The situation is improving, though: underwriters are now taking more steps to understand how businesses approach network security, in order to offer better value to more secure networks.  With such a major step change occurring in the fastest-growing insurance sector, how can companies prepare for the increased scrutiny?

Improving Basic Cyber Security Policy

The first point is probably the most obvious, and many insurers already insist on a basic level of cyber policy being in place.  There are multiple guides on how to build these policies, but the basic steps always remain the same.  What data needs to be protected at all costs (customer information, valuable IP)? Who can access this and other sensitive data?  How are confidential communications and data movements protected?  It's always good to think beyond the mandatory as well: whilst building a cyber policy to the lowest common denominator is the most cost-efficient approach in the short term, it might not be sufficient for your business.  Furthermore, the policy needs regular review; the cyber landscape is vastly different today than it was even a year ago, so how those risks are approached needs to change too.

Enforcing the Policy

Creating a document to manage cyber risk is all well and good, but it's all for nothing if that policy is not upheld.  The biggest problem most businesses have is knowing when the policy has been breached: what is to stop someone with access to sensitive data sending it unencrypted across parts of an unprotected or uncontrolled network?  Often, network users will find the easiest way to do their jobs rather than the most secure, and this results in unforeseen breaches of cyber policy.  The best course of action here is to make sure system administrators have visibility of what occurs on the network and are properly incentivised to investigate anything they find suspicious.  Regular testing of the network can also be invaluable in understanding where vulnerabilities lie, and best of all this can be done with internal resources rather than forking out for expensive pen testers.

Training the Users

Often seen as the most vulnerable part of a network, the users themselves need to be trained on how to work according to network security basics.  Helping users to understand not just what to do but also why they need to do it can vastly improve how secure the network is as a whole.  For example, telling users why USB sticks cannot be used will improve adherence to a no-USB policy.  Likewise, training users on why Dropbox should be avoided instead of just a blanket block on Dropbox IPs will likely stop the inevitable workarounds the users will try to find.  Basic cyber awareness training can also be cheap and effective, making sure users are aware of phishing emails can radically reduce exposure to ransomware, and will protect them in their personal lives too.

Understanding the Risk

Without understanding how a compromise might occur, you cannot properly protect against one. Things that are often missed when building this picture include the uncontrolled parts of a network: are we responsible if AWS or Office cloud services are breached? What steps can be taken to ensure that data stored outside the business remains secure? Understanding how the network is accessed externally is also useful for striking a good balance between the usability of network assets from outside and the protection of those same assets from external actors.

Will this Actually Save Money?

Yes. Even going through the above steps on an occasional basis will put a business streets ahead of the average enterprise network. Given that the insurance market is largely about keeping premiums cheap for those above the average in the bell curve, massive savings can be made as more and more focus is put onto how data is protected rather than what data is being held.

DNS Tunnelling, the exfiltration route you never think about

DNS has long been used as a way of sending data out of a network, largely because of how open port 53 tends to be. Firewalls and in-line perimeter systems typically leave DNS well alone, because networks need outbound access on that port. DNS is therefore well suited to data theft: it's almost always open, and it can be misused. As a result, there's a reasonably good chance that exfiltrating data over DNS won't be detected on most enterprise networks. Granted, you're unlikely to see fast data transfers using the protocol, but many attackers are willing to accept that trade-off for the increased stealth.
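To make the mechanism concrete, here is a minimal sketch of how tunnelling tools typically smuggle data out: the payload is base32-encoded (DNS names are case-insensitive and limited in character set) and split into subdomain labels of a domain the attacker controls. The domain `evil-example.com` is purely hypothetical, and real tools add compression and error handling on top of this idea.

```python
import base64

def encode_for_dns(data: bytes, domain: str, chunk: int = 30) -> list[str]:
    """Split data into DNS-safe base32 labels under an attacker-controlled
    domain. Each label stays well under the 63-character label limit, and
    each full name under the 253-character name limit."""
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    labels = [encoded[i:i + chunk] for i in range(0, len(encoded), chunk)]
    # Each chunk becomes a lookup like "<seq>-<payload>.evil-example.com";
    # the attacker's nameserver logs the queries and reassembles the data.
    return [f"{seq}-{label}.{domain}" for seq, label in enumerate(labels)]

queries = encode_for_dns(b"confidential: Q3 payroll export", "evil-example.com")
```

Each query looks like an ordinary lookup to a resolver, which is exactly why volume- and content-based inspection (covered below) is needed to spot it.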

The most effective way of defending against this type of attack is to use proxies and make sure you can control DNS traffic, giving you an opportunity to stop suspicious traffic being sent. In most large network environments this isn't possible, and whitelisting safe domains is ultimately too restrictive, so data theft needs to be detected even when it travels through a normal, open port 53.

But what can you do to protect against data leaving your network via DNS tunnelling? There are two ways to detect it: Policy or Behaviour.

Policy and Signatures

Blocking DNS activity based on an existing policy is very much the traditional approach to protection: understanding where the traffic is being directed, checking against a blacklist, and blocking as necessary. Some major perimeter protection systems include this type of protection by default, but the ease of setting up new DNS servers means that any blacklist will always be playing catch-up, leaving this approach vulnerable to targeted attacks.
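The policy approach can be sketched in a few lines: check each queried name against a blacklist of known-bad zones, matching subdomains as well as exact names. The blacklisted domains below are hypothetical placeholders, and a real deployment would pull its list from a threat-intelligence feed.

```python
# Hypothetical blacklist of known-bad zones (a real system would use a
# regularly updated threat-intelligence feed, not a hard-coded set).
BLACKLIST = {"evil-example.com", "known-tunnel.net"}

def is_blocked(query_name: str) -> bool:
    """Block the exact domain and any subdomain of a blacklisted zone."""
    name = query_name.rstrip(".").lower()
    parts = name.split(".")
    # Walk up the label hierarchy: a.b.evil-example.com matches evil-example.com
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    return bool(candidates & BLACKLIST)
```

The weakness described above is visible in the code itself: a tunnel set up on a freshly registered domain passes this check until someone adds it to the list.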


Behaviour

If you have network monitoring that includes an inspection system looking at the content leaving via DNS, then you may be able to profile the traffic that leaves your network over port 53 effectively. DNS misuse may be very obvious at high data rates, but it is more difficult to find at lower data rates, so looking for misuse of DNS should be a routine exercise for analysts monitoring a network.
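One simple behavioural signal that such profiling can use: encoded payloads produce long, high-entropy labels, unlike ordinary hostnames. The sketch below flags names on that basis; the length and entropy thresholds are illustrative, not tuned values, and a production system would combine several signals (query volume, unique-subdomain counts, timing) rather than rely on one heuristic.

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a DNS label."""
    counts = Counter(label)
    total = len(label)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_like_tunnelling(query_name: str,
                          max_label_len: int = 40,
                          entropy_threshold: float = 3.8) -> bool:
    """Heuristic flag: any label that is both unusually long and
    high-entropy is treated as a possible encoded payload.
    Thresholds are illustrative, not tuned."""
    labels = [l for l in query_name.rstrip(".").split(".") if l]
    return any(len(l) > max_label_len and label_entropy(l) > entropy_threshold
               for l in labels)
```

Run over resolver logs, even a crude check like this surfaces candidates for an analyst to review, which is the routine exercise suggested above.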

Perception is able to find misuse of DNS using its data movement technologies.

Sage Suffers Alleged Data Breach From Malicious Insider, What Can Businesses Do to Protect Themselves?

Last week's data breach from the accountancy and payroll software firm Sage seems to have come from a malicious insider, if the arrest of a company employee at Heathrow airport is anything to go by.

Whilst it is still unclear what information may have been leaked, Sage started notifying the affected customers earlier in the month that some of their information, possibly including names, addresses, and bank account details, may have been compromised.  Exact numbers of affected companies and individuals remain unknown, but 280 businesses are thought to have had personal information of their employees compromised.

The first thought for anyone in network security is naturally the question, "how can I stop this happening to me?" While firewalls and endpoint protection can guard against malicious software and user-driven policy breaches, little protection exists against an employee with legitimate access to sensitive data leaking information.

First, as always, is training.  Employees that understand the implications of data breaches, and how to protect themselves can be a better network security system than even the most advanced protection software.  This advice remains the same for protecting against intentional data exfiltration too.  Employees that understand how seriously their company takes data protection are less likely to run the risk of breaching company policy.  Of course, this won't be true in every case, so given a determined insider, what's next?

Companies need to restrict who can access what data. Locking down sensitive information to only those who need it greatly reduces the number of potential leaks. Not only does this make incident response easier, but a 50% reduction in how many employees can access sensitive data means halving the number of employees who could leak it in the first place. Tying data access to individual accounts is a must when dealing with data that is considered sensitive, whether it's customer data, company data, or valuable intellectual property.

There are also internal systems that can restrict how much data is sent from a network, as well as where data can be sent. Locking down services such as Dropbox, OneDrive, or iCloud Drive can cut off an exfiltration route immediately, and the same goes for restricting USB use on client devices. Proper deployment of policy management can reduce exfiltration vectors across the board, making large external data transfers far easier to spot with network monitoring techniques.

Which brings us to the last point: network monitoring systems. Proper visibility of network activity is the key to understanding data flow throughout a network, as well as into and, crucially, out of it. Deploying tools that can carry out this task has the dual benefit of catching the attack phase of data-theft malware as well as insiders intentionally leaking data. For the more advanced thief, slow leaking of data can also be picked up, often reducing the number of affected customers. Perhaps Sage could have picked up on this activity earlier and reduced the number of affected customers to double figures instead of hundreds.

Large numbers of businesses around the world aren't equipped to counter these types of threats; our conversations with the market suggest that most UK businesses have no method of detecting authorised personnel leaking data, preferring instead to focus network security on known malware.