Security teams are struggling to prioritise high-risk and high-impact cloud alerts. We look at how threat prioritisation using risk scoring can help.
Migration to the cloud has seen enterprises deploy multiple third-party vendor solutions, but because many of these aren’t adequately integrated, alert volumes have risen sharply.
It’s not out of the ordinary for companies to be dealing with 100,000 alerts, and it isn’t humanly possible for security teams to pick through that volume with any speed, or with any confidence that they are dealing with the alerts that carry the greatest risk to the cloud infrastructure and the enterprise.
It’s an issue made all the more difficult by the fact that the majority of these alerts can be false positives, leading to valuable time being wasted on triage and investigation. A survey by the Cloud Security Alliance (CSA) found that 23.2% of threats weren’t even real.
False positives also increase the likelihood that future alerts may be ignored – what is commonly referred to as alert fatigue – with the same survey finding that 31.9% of analysts don’t pay attention to alerts anymore due to the number of false alarms.
If alerts are ignored, the risk of vulnerabilities and attacks on the network increases, particularly if the organisation has poor visibility of its cloud environment. Without oversight, spotting and stopping these issues becomes all but impossible. The result is gaps and blind spots in your security posture, making the network and the digital supply chain more susceptible to attacks such as ransomware.
When looking at thousands of critical alerts across your large cloud estate, where should the security team start? What is the basis for prioritisation? How do you wade through this deluge of alerts and segregate those that pose an actual risk to the enterprise from those that are probably associated with the observance of compliance and best practice?
In an ideal world, the security teams would be able to detect and apply importance to alerts so that the most critical vulnerabilities could be dealt with and resolved first. In reality, they can’t. There is a huge disconnect between the security alerts being generated and being able to actually grade and manage them effectively. As a result, the real risks remain hidden – causing significant security and compliance implications.
Wouldn’t security alerts be more useful – and indeed welcome – if you could identify and prioritise the vulnerabilities that pose the greatest risk and impact to the business? The security team could then focus its efforts where they are needed: deal with high-risk threats first, defer lower-risk alerts until later, and even identify and disregard false positives.
The problem is that in order to prioritise alerts, you need to be able to evaluate them first, and that requires additional information to build up a 360-degree view, in the form of:
- Context: Most cloud alerts lack context because most security solutions look only at misconfigurations that affect a resource and do not report on risks from associated or connected cloud resources. Building a picture of how an alert relates to the cloud infrastructure and the resources that could be affected enables the assessment of impact.
- Risk: Measuring the extent of the risk posed from this contextual information is key. To do this, you need to be able to assess the threat posed by the alert and its likely impact on resources and the wider business if it were left unchecked. This ‘risk scoring’ requires access to information on the latest vulnerabilities so that benchmarks can be set.
How risk scoring can help
Security teams need to be able to quickly detect, investigate, triage, and resolve high-risk, high-impact vulnerabilities. That requires real-time prioritisation and identification of high-risk alerts using an industry-recognised scoring system.
The CVSS (Common Vulnerability Scoring System) framework is a published standard used by organisations worldwide. It “provides a way to capture the principal characteristics of a vulnerability and produce a numerical score reflecting its severity”, which can then be “translated into a qualitative representation (such as low, medium, high, and critical) to help organisations properly assess and prioritise their vulnerability management processes,” according to FIRST.org.
Using the CVSS and considering factors such as exploitability, impact and scope, the business can determine the CVSS score of an alert on a scale of 0 to 10. Exploitability measures how readily attackers can leverage vulnerabilities in cloud resources. Scope determines the blast radius of an exploit. And impact determines the consequences of a successful exploit for the confidentiality, integrity, and availability of a resource.
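For reference, the CVSS 3.1 base score can be computed directly from the equations published by FIRST. The sketch below follows that specification, with metric values abbreviated to their CVSS vector letters (e.g. "N" for Network under Attack Vector):

```python
# CVSS 3.1 metric values, as published in FIRST's specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}  # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # Confidentiality / Integrity / Availability

def privileges_required(pr, scope_changed):
    # Privileges Required is weighted higher when scope changes.
    return {"N": 0.85,
            "L": 0.68 if scope_changed else 0.62,
            "H": 0.5 if scope_changed else 0.27}[pr]

def roundup(x):
    # CVSS 3.1 "Roundup": smallest one-decimal value >= x,
    # done with integer arithmetic to avoid float artefacts.
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def cvss_base_score(av, ac, pr, ui, scope_changed, c, i, a):
    # Impact Sub-Score from the three CIA impact metrics.
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if scope_changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    exploitability = (8.22 * AV[av] * AC[ac]
                      * privileges_required(pr, scope_changed) * UI[ui])
    if impact <= 0:
        return 0.0
    if scope_changed:
        return roundup(min(1.08 * (impact + exploitability), 10))
    return roundup(min(impact + exploitability, 10))
```

For example, a network-reachable, low-complexity vulnerability requiring no privileges or user interaction, with high impact across confidentiality, integrity and availability, scores 9.8 (Critical) under CVSS 3.1.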
However, while the CVSS score provides a strong basis, layering an additional mechanism on top allows the assessment to take cloud-specific resource information into account. A second, points-based scoring system lets the business assign points for the attributes and risk factors of a given cloud resource to determine an impact factor (0-10).
Questions asked may include: Is the resource public? Are any privileges required to access the resources? For example, the assessment could be used to ascertain the likelihood and impact of a vulnerability in a Virtual Machine opening up an attack path to a storage bucket storing sensitive information.
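A points-based impact factor of this kind could be sketched as follows. The attribute names and point values here are hypothetical illustrations, not the actual criteria used by any particular platform:

```python
# Hypothetical attribute weights for a cloud resource; real criteria
# and point values would be defined by the scoring platform or the
# organisation's own policies.
ATTRIBUTE_POINTS = {
    "publicly_accessible": 4,
    "no_privileges_required": 2,
    "stores_sensitive_data": 3,
    "on_vm_attack_path": 1,
}

def impact_factor(resource_attributes):
    """Sum the points for attributes present on the resource, capped at 10."""
    points = sum(ATTRIBUTE_POINTS[attr]
                 for attr in resource_attributes
                 if attr in ATTRIBUTE_POINTS)
    return min(points, 10)
```

Under this scheme, a public storage bucket holding sensitive data would score 4 + 3 = 7, while a resource exhibiting every listed risk factor would cap out at 10.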
Using defined policies, it’s then possible to determine the severity factor (0-10) of the alert and to prioritise a response accordingly.
Triple risk scoring in this way – using the CVSS framework, a proprietary cloud risk scoring mechanism and a risk impact score – provides both the context and risk needed to prioritise alerts. But it also enables the business to customise the weight of the different components according to risk appetite or organisational needs to fine-tune the outcomes still further.
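Combining the three components with customisable weights might look like the following sketch. The weights shown are illustrative defaults, not values prescribed by any vendor; in practice they would be tuned to the organisation's risk appetite:

```python
def composite_risk_score(cvss, impact_factor, severity_factor,
                         weights=(0.5, 0.3, 0.2)):
    """Blend three 0-10 scores into a single risk score.

    The weights (CVSS, cloud impact factor, policy severity factor)
    are hypothetical and would be adjusted per organisation.
    """
    w_cvss, w_impact, w_sev = weights
    total = w_cvss + w_impact + w_sev
    blended = (cvss * w_cvss
               + impact_factor * w_impact
               + severity_factor * w_sev) / total
    return round(blended, 1)
```

A critical CVSS finding (9.8) on a moderately exposed resource (impact factor 7) under a mid-severity policy (5) would, with the default weights, blend to a composite score of 8.0.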
This unique approach to threat prioritisation is embodied in C3M’s Risk Score, which forms part of its Cloud Control platform. Risk Score assesses each alert automatically using:
- CVSS 3.1 Framework – using Exploitability, Impact and Scope criteria
- C3M Intelligence – a proprietary ‘next level’ set of customisable assessment criteria that attributes risk factors to resources
- Alert Severity – a risk weighting based upon policies defined in C3M
The resultant risk score is rated between 1 and 10 and allocated one of four levels of severity – Minor, Moderate, Major and Severe – allowing security teams to immediately identify and resolve the most critical, high-risk threats. Putting a systematic process in place focuses precious resources on the alerts that matter, mitigates the problem of false positives and reduces alert fatigue, lessening burnout and helping you retain your security personnel.
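Mapping the numeric score onto the four severity bands is then a simple threshold lookup. The cut-off values below are illustrative only; the article does not publish the actual thresholds used:

```python
def severity_band(score):
    """Map a 1-10 risk score to one of four severity levels.

    Thresholds are hypothetical; real cut-offs would be defined by
    the platform or the organisation's policies.
    """
    if score >= 8:
        return "Severe"
    if score >= 6:
        return "Major"
    if score >= 3:
        return "Moderate"
    return "Minor"
```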
If you’d like to find out more about risk scoring and C3M’s Cloud Control, download our Cloud Control datasheet or our Cloud in Crisis whitepaper. Or, if you’d like a demo of the risk scoring process and/or to discuss your own requirements, do contact us.