The Problem with Weaponised Malware

In May we wrote a simple explanation of the WannaCrypt malware, and part of that article described how the self-replicating worm that made the malware so prolific was developed by the US NSA for national security purposes.  This act of creating malware as a weapon for government use raises significant security issues that deserve close examination, especially against the backdrop of national security.
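The self-replicating behaviour that made the worm so prolific can be illustrated with a toy simulation (hypothetical hosts and a made-up network graph, not real exploit code): each compromised machine probes its reachable neighbours and copies itself to any that remain unpatched, which is why a single exposed network can be overrun quickly, and why a vendor patch halts the spread.

```python
from collections import deque

def simulate_spread(network, vulnerable, start):
    """Breadth-first spread of a self-replicating worm over a toy
    network graph. `network` maps host -> reachable neighbours;
    only hosts in `vulnerable` can be compromised, so patched
    hosts halt propagation. Returns the set of infected hosts."""
    if start not in vulnerable:
        return set()
    infected = {start}
    queue = deque([start])
    while queue:
        host = queue.popleft()
        for neighbour in network.get(host, []):
            if neighbour in vulnerable and neighbour not in infected:
                infected.add(neighbour)
                queue.append(neighbour)
    return infected

# Hypothetical network: five hosts in a chain.
network = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
           "d": ["c", "e"], "e": ["d"]}

# Every host unpatched: the worm reaches all of them.
print(sorted(simulate_spread(network, {"a", "b", "c", "d", "e"}, "a")))
# ['a', 'b', 'c', 'd', 'e']

# Patching host "c" cuts off everything beyond it.
print(sorted(simulate_spread(network, {"a", "b", "d", "e"}, "a")))
# ['a', 'b']
```

The second run shows the point the article returns to later: a patch does not just protect the patched machine, it severs the propagation path for every host behind it.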

Weaponised Malware

What's the big problem?


Weaponised malware refers to malicious software created deliberately as a tool to attack network assets.  Many security researchers consider 'weaponised' a misnomer; even the NSA itself has stated that the exploits it created were purely for surveillance purposes.  Although this may very well be true, the fact that malicious technology was created and could theoretically be used in an attack scenario can be considered more than enough justification for the term.


The question of why a government needs weaponised malware is one that anyone outside of the national security services is unlikely to be able to answer.  Without understanding the risks faced by a given agency, we cannot properly judge the proportionality of the countermeasures used against them.  Many groups have, however, speculated about the exact nature of these threats; but as cyber security professionals and not socio-political experts we are not in a position to discuss these aspects in an informed way.  Suffice it to say that agencies such as the US NSA consider the development and deployment of weaponised malware a valuable asset in their armoury to defeat these threats.

So What?

Asking why we should care if this is going on is a perfectly reasonable question, and one that was answered quite poetically by the WannaCrypt attacks in May.  That attack, which caused millions of pounds' worth of damage worldwide and potentially put thousands of lives at risk in NHS hospitals in the UK, was built on an exploit developed by the NSA.  Weaponised malware is something we should all care deeply about, and its effects are only going to get more damaging as we move to an ever more connected world.  If governments are to develop weapons of any type, should they be deployed if they could potentially cause damage on this scale?

What Needs to Change?

Judging by the sheer number of exploits leaked last year by the ShadowBrokers, a lot needs to change.  Weaponised malware should be treated like any other weapon: kept under lock and key and used only by those authorised to do so.  The unique nature of software weapons, however, makes this problem far more difficult than with any other type of weapon.  Software can be stolen without removing the original, and the theft can be carried out remotely.  Governments cannot simply hide weaponised malware as they do other weapons; it is not a nuclear warhead that can be placed on a submarine, or a rifle locked in an armoury.  How easily a weapon can be stolen must factor into how it is secured.  Do governments need to invest as much in the network security of weaponised malware as they invest in the secrecy surrounding the locations of nuclear weapons?

The NSA got off lightly when the ShadowBrokers leaked the malware.  Under no circumstances should cutting-edge weaponry fall into public hands, regardless of whether it is software or not.  The scale of the leak was not fully appreciated until the WannaCrypt attacks occurred, and even then, had a more capable attacker wanted to cause real damage, it could have been far worse.

The more contentious suggestion of what needs to change is whether the developers of weaponised malware should inform the vendors of target systems about the underlying vulnerability.  That is to say, if a government agency discovers a flaw in a piece of software that allows it to attack an enemy, should it tell the manufacturer of that software immediately?  On one hand, disclosure would essentially sterilise the malware and destroy its offensive value (as happened when Microsoft released patches for the versions of Windows affected by WannaCrypt); on the other hand, networks around the world are made more secure by having fixes rolled out via updates.  This is the dilemma that national security agencies worldwide need to consider very closely, and it is the real question at the heart of this problem.  There will be instances where they fall on one side of the fence, and instances where they fall on the other, and both situations will have valid arguments for and against.  In the end all we can conclude is that this issue has a very large moral grey area at the heart of it, and it is one that will not be going away any time soon.