Report 28

Malware in the WA State Government

Appendix 1: How we conducted this audit

Assessing agencies’ ability to prevent, detect, and respond to malware

We reviewed the security controls, both technical and non-technical, within agencies. This testing built on the work we already conduct during our annual general computer controls audits.

We reviewed some additional controls specific to preventing and detecting malware. The controls we assessed are seen as good practice by the industry. Our sources included the:

  • Australian Signals Directorate (ASD) Strategies to Mitigate Targeted Cyber Intrusions
  • Australian Government Information Security Manual (ISM), published by the ASD
  • National Institute of Standards and Technology (NIST) Guide to Malware Incident Prevention and Handling (SP 800-83)
  • ISO/IEC 27001:2013 Information Security Management Systems – Requirements
  • ISO/IEC 27002:2013 Code of Practice for Information Security Controls.

Capturing network traffic

The most practical way to look for malware activity inside agencies was to examine their network traffic.

Most malware will enter networks via the internet or email. Once activated, malware will ‘phone home’ to servers on the internet. This allows it to join what is known as a botnet: a network of infected machines controlled by the same attacker. The owner of the botnet uses these servers to send commands to the malware, update it with new features, or allow the malware to upload data that it collects. These malicious servers are known as command and control systems.
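The 'phone home' behaviour described above is often periodic: infected machines poll their command and control server at regular intervals. A minimal sketch of how such beaconing can be flagged in connection timestamps is shown below; the function, thresholds, and sample data are illustrative assumptions, not the actual tooling used in the audit.

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, max_jitter=0.1):
    """Flag a series of connection times (in seconds) whose intervals are
    suspiciously regular, as when malware polls its command and control server.
    The 10% jitter threshold is an assumption for illustration."""
    if len(timestamps) < 4:
        return False  # too few connections to judge periodicity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    # Low variation relative to the mean interval suggests automated polling
    return avg > 0 and pstdev(gaps) / avg < max_jitter

# Hypothetical example: one outbound connection roughly every 300 seconds
regular = [0, 300, 601, 899, 1200, 1501]
print(looks_like_beaconing(regular))  # near-constant intervals

# Ordinary browsing tends to produce irregular gaps
irregular = [0, 5, 400, 420, 2000]
print(looks_like_beaconing(irregular))
```

Real beaconing detection must also contend with deliberate jitter added by malware authors, which is one reason rules-based detection alone is insufficient.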

Figure 5 - The typical deployment of our traffic collector device

To investigate agency traffic, we installed a collection device on each agency's network that received a feed of all internet traffic. This device included 3 different technologies to analyse traffic and generate alerts on suspicious behaviour. See 'Tools and technology used' below.

We tried to position the device ‘inside’ an agency’s network security systems. This way, any inbound attacks would need to have passed through the systems before hitting our device.

Our device would also capture outbound connections from malware on agency computers and servers. Agency network security systems might block these connections after they passed through our device. However, the traffic still indicates that malware was able to install itself and activate.

We left the device to collect network traffic for a period of at least 10 days, including 2 weekends and a working week. After this period, we uploaded the data to a data analysis system. This allowed us to categorise, visualise, and sort alert data.
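The categorise-and-sort step can be illustrated with a small sketch. The alert fields, signature names, and severity labels below are hypothetical, not the actual schema of the analysis system used.

```python
from collections import Counter

# Hypothetical alert records, loosely mirroring what an analysis
# system might ingest from the collector device
alerts = [
    {"agency": "A", "signature": "ET TROJAN CnC Beacon", "severity": "high"},
    {"agency": "A", "signature": "Policy: P2P traffic",  "severity": "low"},
    {"agency": "B", "signature": "ET TROJAN CnC Beacon", "severity": "high"},
    {"agency": "B", "signature": "Scan detected",        "severity": "medium"},
]

# Categorise: count how often each signature fired
by_signature = Counter(a["signature"] for a in alerts)

# Prioritise: high-severity alerts first for manual review
order = {"high": 0, "medium": 1, "low": 2}
triaged = sorted(alerts, key=lambda a: order[a["severity"]])

print(by_signature.most_common(1))
print(triaged[0]["signature"])
```

With millions of alerts, this kind of automatic grouping is what makes subsequent manual review feasible.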

Analysing the results

The collection devices generated a significant number of alerts of suspicious behaviour. The data analytics system provided some initial sorting and prioritising; we manually assessed the rest.

We looked for any evidence of suspicious or potentially malicious behaviour by analysing traffic patterns and connection protocols. This work was limited due to the sheer volume of data collected, most of which was legitimate traffic.

We provided the details of any alerts or issues we found to agency technical staff. They had the opportunity to investigate and act on the alerts. They were also able to tell us if any alerts were incorrect, known as 'false positives'.

Our approach had limitations

This assessment of agencies was limited, and it is entirely possible that some infections went undetected.

Our network captures were limited to a period of 10 to 12 days, and even this produced enormous volumes of data to process. In total, we collected almost 50 terabytes of data, which was automatically processed to generate almost 10 million alerts.

To analyse such a large amount of data efficiently, we were forced to rely on pattern-based tools and published lists of websites known to be malicious. This increased the risk that attackers using newer techniques or unknown addresses would go undetected. Malware creators go to great lengths to evade detection by rules-based tools.
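Relying on published lists of known-malicious sites amounts to a simple lookup, as the sketch below shows. The blocklist and observed destinations are invented for illustration; the limitation described above follows directly, since a brand-new command and control domain would not appear on any list.

```python
# Hypothetical published blocklist of known command-and-control domains
blocklist = {"evil-c2.example", "malware-drop.example"}

# Destinations extracted from captured agency traffic (invented)
observed = ["intranet.wa.gov.au", "evil-c2.example", "news.example.com"]

# A destination not yet on any published list would pass this check silently
hits = [host for host in observed if host in blocklist]
print(hits)
```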

Malware can be located by analysing traffic patterns for unusual connections or uploads and downloads at suspicious times of the day. However, this is very time consuming and requires in-depth knowledge of the network. This was not feasible for our audit.
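The kind of traffic-pattern analysis described above can be sketched as a filter over connection summaries: for example, flagging large uploads outside business hours. The records, field names, and thresholds here are assumptions for illustration only.

```python
def out_of_hours_uploads(records, start=7, end=19, min_bytes=50_000_000):
    """Flag connections that upload a large volume of data outside
    business hours -- a pattern worth manual investigation.
    The hour window and byte threshold are illustrative assumptions."""
    return [
        r for r in records
        if r["bytes_out"] >= min_bytes and not (start <= r["hour"] < end)
    ]

# Hypothetical per-connection summaries (hour of day, bytes uploaded)
records = [
    {"host": "10.0.0.5", "hour": 14, "bytes_out": 80_000_000},   # workday upload
    {"host": "10.0.0.9", "hour": 3,  "bytes_out": 120_000_000},  # 3 am upload
    {"host": "10.0.0.7", "hour": 2,  "bytes_out": 1_000},        # trivial volume
]
for r in out_of_hours_uploads(records):
    print(r["host"])
```

Judging which flagged hosts are genuinely suspicious still requires knowing the network, for instance whether overnight backups legitimately upload data at 3 am, which is why this approach was not feasible within the audit.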

Finally, our device could not read or decrypt encrypted data. Web traffic secured using the HTTPS protocol is encrypted. At some agencies, this traffic makes up almost 50% of web browsing. Websites that handle sensitive information, like banks, use this encryption to protect their customers. Increasingly, attackers are also using encryption to hide malicious communications from security systems. During this audit, we observed encrypted connections to malicious websites.

Tools and technology used

The traffic collector device contained 3 different technologies:

A network traffic recorder: This system kept a copy of agency traffic, logging the details of all connections into a database. This database could be queried to search for and investigate suspicious traffic patterns.
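A connection-logging database of this kind can be sketched with SQLite. The schema, sample rows, and query below are assumptions for illustration, not the recorder's actual format.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE connections (
    src TEXT, dst TEXT, dst_port INTEGER, bytes_out INTEGER)""")
db.executemany(
    "INSERT INTO connections VALUES (?, ?, ?, ?)",
    [
        ("10.0.0.5", "203.0.113.7",  443,  1200),
        ("10.0.0.5", "198.51.100.9", 6667, 45000),  # IRC port, odd for an office
        ("10.0.0.8", "203.0.113.7",  80,   300),
    ],
)

# Example query: traffic to ports commonly abused by malware
rows = db.execute(
    "SELECT src, dst FROM connections WHERE dst_port IN (6667, 4444)"
).fetchall()
print(rows)
```

The value of logging every connection is precisely this after-the-fact queryability: an analyst can hunt for suspicious patterns long after the packets themselves are gone.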

An Intrusion Detection System (IDS): This tool uses pre-set rules to look for malicious behaviour in network traffic. The IDS that we deployed is 'open source', meaning it is freely available to download and install. The detection rules for this system were compiled from a variety of sources, both commercial and freely available.
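The rule-based matching an IDS performs can be caricatured as pattern matching over packet payloads. The two rules below are invented and far simpler than real IDS signatures, which also match on ports, protocols, and byte offsets.

```python
import re

# Invented detection rules: (name, compiled pattern over payload text)
rules = [
    ("Suspicious user agent", re.compile(r"User-Agent: evilbot", re.I)),
    ("Base64 PE header",      re.compile(r"TVqQ")),  # 'MZ..' base64-encoded
]

def match_rules(payload):
    """Return the names of all rules that fire on a payload."""
    return [name for name, pattern in rules if pattern.search(payload)]

payload = "GET / HTTP/1.1\r\nUser-Agent: EvilBot/1.0\r\n\r\n"
print(match_rules(payload))
```

Because each rule describes a known pattern, traffic that matches no rule generates no alert; this is the same rules-based limitation noted earlier in the appendix.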

A ‘Sandbox Appliance’: This technology looks for files in network traffic that it does not recognise. If it finds an unknown file, it will take a copy of the file and run it inside an isolated virtual computer. The behaviour of the file is monitored to see if it behaves like malware. The sandbox appliance that we used was a commercial product.

Page last updated: December 7, 2016