Attackers use command and control (C2) servers to maintain communications with compromised systems inside a targeted network. This allows them to “direct” malware that can enter enterprise systems via a number of channels.
Identifying these types of attack requires a two-pronged approach of prevention and detection to reduce risks to a tolerable level. The right preventative controls stop the code or malware deployment in the first place; these need to be combined with effective monitoring and scanning (the detective element).
The minimum level of threat detection capability deemed necessary to protect the organisation (which could be influenced by the criticality of the server, the application running on it, the type of data stored or the frequency of use) needs to be deployed at all network access points and updated regularly.
Activities could include scanning all incoming emails for viruses, as well as on-access virus scanning, which checks files whenever access to a server is requested.
For particularly sensitive systems, such as customer web portals or core financial applications, it is also important to reduce the surface area across which an attack is possible in order to limit the chance of it taking place. This could be through access restrictions and network segregation, thus reducing the ability to access the infrastructure or the application directly.
Effective firewalls play a critical role in preventing unwanted external communications, but traffic from internal network destinations also needs to be restricted to prevent a potential attack via a compromised laptop or device from within the network.
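The egress restriction described above can be sketched as an allow-list check on outbound destinations. This is a minimal illustration only: the permitted networks and the sample flow records are invented assumptions, not real organisational policy, and a production deployment would enforce this at the firewall itself rather than in application code.

```python
# Hypothetical sketch of an egress allow-list for internal hosts.
# The networks and sample destinations below are illustrative assumptions.
import ipaddress

# Only these destination networks are permitted for outbound traffic.
ALLOWED_EGRESS = [
    ipaddress.ip_network("10.0.0.0/8"),      # internal ranges
    ipaddress.ip_network("203.0.113.0/24"),  # approved partner network (example)
]

def is_permitted(dest_ip: str) -> bool:
    """Return True if the destination falls inside an approved network."""
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in ALLOWED_EGRESS)

# A compromised laptop beaconing to an unknown external host is flagged.
flows = ["10.1.2.3", "203.0.113.9", "198.51.100.77"]
for dest in flows:
    verdict = "allow" if is_permitted(dest) else "block and alert"
    print(f"{dest}: {verdict}")
```

The same check applied to internal-to-internal traffic is what stops a compromised device from reaching servers it has no business contacting.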
Adopting tactics of this nature provides a proactive defence line that minimises the potential for such threats to be launched into the operating environment.
But keeping out unwanted entities is only half the equation when it comes to preventing any kind of attack. Based on the general consensus that in today’s environment it is not “if” an attack takes place but “when”, it is also critical to ensure that monitoring systems are in place to quickly identify any incidents once they occur.
This could be in the form of a SIEM (security information and event management) tool that looks for suspicious behaviours in the operational systems. For example, a server might suddenly switch from focusing on internal traffic to communicating with the “outside world”.
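A rule of the kind just described can be sketched by comparing the share of external traffic in a server’s current activity against its baseline. The threshold and the sample flow counts here are assumptions for demonstration; a real SIEM rule would be tuned to the environment.

```python
# Illustrative sketch: flag a server whose traffic shifts from mostly
# internal to mostly external. Threshold and sample data are assumptions.
import ipaddress

def external_share(flows):
    """Fraction of flows whose destination lies outside private ranges."""
    ext = sum(1 for dest in flows if not ipaddress.ip_address(dest).is_private)
    return ext / len(flows) if flows else 0.0

def suspicious_shift(baseline_flows, current_flows, jump=0.3):
    """Alert if the external-traffic share rises by more than `jump`."""
    return external_share(current_flows) - external_share(baseline_flows) > jump

baseline = ["10.0.0.5"] * 95 + ["8.8.8.8"] * 5   # ~5% external historically
current  = ["10.0.0.5"] * 40 + ["8.8.8.8"] * 60  # ~60% external now
if suspicious_shift(baseline, current):
    print("alert: server has shifted to external communication")
```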
Here it is important to take into account the logic behind the rules. The deployed virus or malware might be designed to mimic “usual” behaviours and actions, so a first glance might not detect a significant deviation from normal processing.
However, changes to the pattern of network traffic over time may highlight that something is not quite right. The traffic itself may appear to be legitimate, but if activity is at unusual times of the working day or month that are not easily explained by standard business processes, it needs to be flagged as suspicious.
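Time-based flagging of this sort can be sketched very simply. The business-hours window and the timestamps below are invented examples for illustration; real rules would be derived from the organisation’s actual working patterns and scheduled batch processes.

```python
# Minimal sketch of time-based anomaly flagging, assuming business hours
# of 08:00-18:00 on weekdays. All values here are illustrative assumptions.
from datetime import datetime

BUSINESS_START, BUSINESS_END = 8, 18  # assumed working hours

def is_off_hours(ts: datetime) -> bool:
    """True when activity falls outside the assumed business window."""
    weekend = ts.weekday() >= 5  # Saturday or Sunday
    return weekend or not (BUSINESS_START <= ts.hour < BUSINESS_END)

events = [
    datetime(2024, 3, 12, 10, 30),  # Tuesday morning - expected
    datetime(2024, 3, 12, 3, 15),   # Tuesday 03:15 - suspicious
    datetime(2024, 3, 16, 14, 0),   # Saturday afternoon - suspicious
]
for ts in events:
    if is_off_hours(ts):
        print(f"flag for review: {ts.isoformat()}")
```

Even legitimate-looking traffic that repeatedly falls outside this window, with no standard business process to explain it, warrants investigation.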
Further controls include applying regular security updates (patches) to an organisation’s own assets, so that fixes for vulnerabilities in the wider software environment are not left outstanding, thus reducing both the likelihood and the impact of any incident.
Bad actors are ever more nimble and organisations must be continually vigilant with regard to all potential new threats, so they can do all they can to prevent and detect infections.
New technologies should be embraced as soon as they become available, and technical solutions can be enhanced by keeping up to date with cyber security information, for example by using the services of specialised professionals, creating internal security committees and taking part in industry conferences.
Overall, preventing this type of hacking event draws on our consistent message: preventing attacks of any kind is centred around good business practice.