Software vulnerabilities are a given, and patches should be applied to address them as soon as practicable. However, that is sometimes easier said than done.
Perhaps you are reliant on an unsupported operating system or software; you might have software installed that you don't know about; or you may run bespoke software with unidentified vulnerabilities. Operational priorities may also prevent timely patching, meaning short-term mitigation is needed.
As always, good practice should be followed, such as having a standard build and removing unnecessary user privileges (where possible) to prevent unknown software being installed.
Using two-factor authentication for admin logins, giving admins separate lower-privileged accounts for non-administrative functions, zoning networks, internal monitoring and so on will all help to reduce the risk of unknown vulnerabilities and the impact of their exploitation.
With these basics covered, the next step is to establish a baseline of your assets, with a corresponding list of known vulnerabilities in those assets. This should include the services and protocols that are exposed, such as web servers, PHP applications, SQL databases or other business services.
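As a concrete illustration, an asset baseline can be as simple as a structured record per asset of its exposed services, installed software versions and any known CVEs. The sketch below is a minimal example; the asset names, versions and CVE IDs are illustrative placeholders, not real inventory data.

```python
# A minimal sketch of an asset baseline: each asset records the exposed
# services and software it runs, plus any known vulnerabilities (CVE IDs).
# All names, versions and CVE IDs here are illustrative only.

baseline = {
    "web-server-01": {
        "services": ["http", "https"],
        "software": {"nginx": "1.18.0", "php": "7.4.3"},
        "known_cves": ["CVE-2021-23017"],  # illustrative; verify against a CVE database
    },
    "db-server-01": {
        "services": ["mysql"],
        "software": {"mysql": "5.7.33"},
        "known_cves": [],
    },
}

def assets_with_known_vulnerabilities(baseline):
    """Return the names of assets that have at least one recorded CVE."""
    return [name for name, info in baseline.items() if info["known_cves"]]

print(assets_with_known_vulnerabilities(baseline))  # ['web-server-01']
```

Even a simple structure like this makes it possible to answer the key question quickly: which assets currently carry known, unmitigated vulnerabilities?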
One approach is a manual assessment: start from an asset list and the software you have installed, then use internet resources such as Common Vulnerabilities and Exposures (CVE) databases and Warning, Advice and Reporting Points (WARPs) to identify vulnerabilities. While this may not incur capital expense, it will take a lot of effort to do well and will probably be less accurate than tool-supported methods.
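CVE databases can also be queried programmatically, which makes even a manual assessment more repeatable. The sketch below builds a keyword-search URL for the public NVD CVE API (v2.0); it only constructs the URL rather than calling it, since live queries need network access and are subject to NVD's rate limits.

```python
# Sketch: building a keyword-search query against the NVD CVE API (v2.0).
# The endpoint and the keywordSearch/resultsPerPage parameters are part of
# the public NVD API; the product keyword below is just an example.
from urllib.parse import urlencode

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cve_search_url(keyword, results_per_page=20):
    """Build an NVD keyword-search URL for the given product keyword."""
    params = urlencode({"keywordSearch": keyword,
                        "resultsPerPage": results_per_page})
    return f"{NVD_API}?{params}"

print(cve_search_url("nginx 1.18.0"))
```

Fetching and parsing the JSON response for each software item in your asset list turns an ad-hoc manual lookup into a repeatable process.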
Instead, vulnerability scanning tools or services, run internally or procured as a managed service, will allow you to scan your assets, identify what is installed and flag any vulnerabilities across a wide range of software.
Open source tools are available for those on a limited budget, as well as commercial tools, but there is still an operational overhead. As vulnerability scanning is not a one-off but needs to be done regularly, vulnerability scanning services are becoming more popular, including cloud-based services. However, while cloud-based services tend to be lower in cost and capable of identifying the latest vulnerabilities and matching patches, many will address only internet-facing assets.
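To make the scanning idea concrete, the first thing any scanner does is discover which ports, and hence services, a host exposes. The toy sketch below probes a few TCP ports on the local machine; real scanners go much further, fingerprinting software versions and matching them against vulnerability databases. Only scan hosts you are authorised to test.

```python
# A toy illustration of a scanner's first step: probing which TCP ports
# a host exposes. Real vulnerability scanners then fingerprint service
# versions and match them against known-vulnerability databases.
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                found.append(port)
    return found

# Probe a few common service ports on the local machine.
print(open_ports("127.0.0.1", [22, 80, 443, 3306]))
```

The gap between this sketch and a real tool (version detection, CVE matching, reporting, scheduling) is exactly the operational overhead the scanning tools and services absorb for you.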
The issue with scanning approaches is that, while they may include generic vulnerability detection, many will not address unknown vulnerabilities. Detecting zero-days may not be practical, or even necessary, for most people. However, it becomes exceptionally important if you are using bespoke software, particularly a database back end for a website, where the biggest risks are cross-site scripting (XSS), cross-site request forgery (CSRF) or SQL injection (SQLi) vulnerabilities in the bespoke code.
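SQL injection in bespoke code is worth a brief illustration. It arises when user input is concatenated directly into a query string; the fix is parameterised queries, which every mainstream database driver supports. The sketch below uses Python's built-in sqlite3 with an in-memory database and made-up table data to show both patterns side by side.

```python
# SQL injection demonstration: the vulnerable pattern (string
# concatenation) versus the fix (parameterised queries). Uses an
# in-memory SQLite database with illustrative data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "bob' OR '1'='1"  # a classic injection payload

# VULNERABLE: the payload rewrites the query, returning every row.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# SAFE: the driver treats the payload as a literal value, matching nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('alice',), ('bob',)]
print(safe)        # []
```

No scanner is guaranteed to find this in bespoke code, which is why the code review and penetration testing discussed next matter.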
Weaknesses in such code can only be detected by examination of the code, or by penetration testing. Ideally, this should be part of the development process, using an independent penetration tester. If you use an external supplier for such development, make sure they are responsible for remediating any vulnerabilities found during penetration testing, so they are incentivised to get it right first time.
Penetration testing can also be used to detect other vulnerabilities, but it is expensive, so it should be limited to things specific to your system that cannot be detected by other means.
Once you have identified, patched or eliminated as many vulnerabilities as you can, some are likely to remain that will need to be mitigated. This can usually be done using existing capabilities to deny an attacker access to the vulnerability, to support detection of the attack, or to minimise its impact. The simplest example I've seen was a Windows XP machine used for a physical access control system.
The operating system could not be updated when XP became obsolete, so the mitigation was simply to disconnect the machine from the network, physically limiting access to the two users who needed to maintain it.

Pass the Hash
More complex is the well-known Pass the Hash problem. Here the attacker gains access to password hashes and then uses these to gain access to other machines, typically through local admin accounts.
While it is harder to capture hashes in later versions of Windows, common attack tools can still steal hashes from the memory of single sign-on applications. It is therefore necessary to minimise the impact by limiting access and minimising privileges. This can be achieved through good practice, as mentioned earlier, and by disabling local administrator accounts. (You will still be able to gain admin access by booting into safe mode if necessary.) This limits what an attacker gains to the access of a single low-privileged user.
Another example is WannaCry, which used the EternalBlue vulnerability to spread. While patches were available, they did not cover older operating systems, or had not been deployed. To mitigate this, IPS signatures that blocked EternalBlue's network activity were developed and deployed by managed service providers within hours of the attack.
This did not stop the initial delivery of WannaCry by phishing email, but did stop it spreading over the local network.
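To give a sense of what such a signature looks like, the fragment below is a simplified Snort-style rule. It is illustrative only: published EternalBlue signatures match specific SMB payload bytes, whereas this sketch merely alerts on inbound SMB traffic from outside the home network, showing the general shape of a rule rather than the real detection logic.

```
# Illustrative only: a simplified Snort-style rule alerting on inbound
# SMB (port 445) traffic from external hosts. Real EternalBlue
# signatures match specific SMB payload content, not just the port.
alert tcp $EXTERNAL_NET any -> $HOME_NET 445 (msg:"Inbound SMB from external network - possible EternalBlue probe"; flow:to_server,established; sid:1000001; rev:1;)
```

In practice you would deploy the vendor- or community-published signature rather than writing your own, but the point stands: a single rule at the network boundary can blunt an entire class of attack.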
These mitigations typically require configuration of tools already in place (IDS, firewalls, logging, SIEM, etc.), though in some cases network re-configuration or additional tools may be necessary. They also provide protection against the general vulnerability, not just a specific attack: the WannaCry IPS signature, for example, will protect against any attack using EternalBlue.

To summarise, vulnerability protection does not generally require a large investment in new software or equipment. It does need an operational team with the right skills to develop and apply the mitigations, but those skills typically have value beyond the specific vulnerabilities being mitigated.