Ben Dickson, Crunch Network Contributor
Ben Dickson is a software engineer and freelance writer. He writes regularly on business, technology and politics.
Insider threats are the biggest cybersecurity threats to firms, organizations and government agencies. This is something you hear a lot at security conference keynotes and read about in data breach reports, white papers and surveys ― and these insider threats are becoming more difficult to detect and prevent, as well as more frequent.
This seemingly unstoppable growth accentuates the problem and shortcomings of current solutions, and warrants the need for new defensive technologies to detect and stop the digital daggers aimed at our backs.
Data science ― the application of mathematics, big data analytics and machine learning to extract knowledge and detect patterns ― is an emergent, advanced technology area that is proving its effectiveness in the realm of cybersecurity , including fighting insider threats. Here’s how it succeeds where legacy solutions fail.
The need to focus on user behavior

The wide adoption of cloud services and mobile technology in companies has transformed IT infrastructures considerably.
With physical boundaries of corporate networks and digital assets not as clearly defined as they once used to be, the focus in fighting insider threats needs to shift toward protecting user accounts. “Now that the traditional security perimeter has been erased by mobile and cloud computing, identities have become both an attack vector and security perimeter,” says Tom Clare, VP of marketing at cybersecurity startup Gurucul .
“What has changed recently is the fact that control of user accounts has become far more valuable than control of devices,” says Jarno Niemelä, lead researcher at F-Secure Labs. “Years back, we were fighting against keeping computers clean from infection just to keep the computers clean. Nowadays, we are protecting computers just to be able to protect the user accounts that are on the computer.”
Organizations try hard to protect user identities by adopting different security solutions and training employees on the basics of cybersecurity, but it’s not enough.
“Good data hygiene is critical, but it is not enough,” says Stephan Jou, CTO at Interset . “A negligent employee is unlikely to change regardless of training, and a third-party attacker often can operate outside employee-focused processes. More importantly, the insider stealing for espionage is motivated to break rules.”
The truth is that credential theft does happen, and it happens a lot. In fact, Verizon’s 2015 data breach report found that the majority of confirmed security incidents occur as a result of compromised user accounts. Massive lists of user credentials and passwords are being sold on the Dark Web at low prices, and, for a small fee, anyone can obtain access to all sorts of enterprise networks and cloud services and impersonate legitimate users.
Therefore, fighting insider attacks hinges on detecting anomalous user behavior. But this again presents its own set of challenges, because defining normal and malicious behavior is not an exact science and involves a lot of intricacies.
Traditional security defenses rely on setting static rules and alerts on user activities in order to define and identify indicators of compromise (IoCs). But when applied to tens, hundreds or thousands of users, this model generates a noisy flood of alerts, and security teams waste time sorting through piles of unimportant events that are mostly false positives. Meanwhile, actions don’t necessarily reveal intent, and savvy attackers can cloak their malicious activities by keeping them within the defined set of rules.
The use of data science can help move away from static models toward dynamic ones that define normal user behavior based on identities, roles and working circumstances. This approach is very effective in reducing false positives and highlighting behavior that truly indicates malicious activity.
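The difference between a static rule and a dynamic, per-user baseline can be sketched in a few lines. The thresholds, feature (daily file accesses) and sample histories below are invented for illustration and are not any vendor's actual model; the point is only that a fixed cutoff misclassifies both a heavy-but-normal user and a quiet user who suddenly spikes, while a baseline learned from each user's own history catches the right case.

```python
# Illustrative sketch: static rule vs. per-user dynamic baseline.
# All thresholds, features and data are assumptions for demonstration,
# not a real product's detection logic.
from statistics import mean, stdev

STATIC_THRESHOLD = 100  # one-size-fits-all rule: flag >100 file accesses/day

def static_rule(accesses_today):
    """Fires purely on a fixed, global threshold."""
    return accesses_today > STATIC_THRESHOLD

def dynamic_baseline(history, accesses_today, z_cutoff=3.0):
    """Fires only if today deviates strongly from THIS user's own history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return accesses_today != mu
    return abs(accesses_today - mu) / sigma > z_cutoff

# A power user who routinely touches ~150 files a day:
power_user_history = [140, 155, 148, 152, 160, 145, 150]
print(static_rule(150))                           # fires every day -> false positive
print(dynamic_baseline(power_user_history, 150))  # normal for this user -> no alert

# A clerk who averages ~10 files a day and suddenly pulls 90:
clerk_history = [8, 12, 9, 11, 10, 13, 9]
print(static_rule(90))                            # under the static cutoff -> missed
print(dynamic_baseline(clerk_history, 90))        # wildly anomalous for this user -> alert
```

Production systems model far richer features (time of day, resources touched, peer-group behavior), but the shift from global rules to per-identity baselines is the same idea.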
Cybersecurity firms are increasingly leveraging this technology to deal with insider threats.
Analyzing user behavior through machine learning

Gurucul’s Risk Analytics security platform combines machine learning models with big data to understand normal baselines of behavior and uncover anomalies, and to provide visibility that spans identities, accounts, access and activity. “This behavioral analytics approach, sometimes called user behavior analytics or UBA, can detect excess access permissions and activity, define roles and detect unknown threats,” says Gurucul’s Clare.
Gurucul’s Risk Analytics also gathers and monitors identity-based data and activity from both on-premises and cloud environments. Its machine learning algorithms, including self-learning and training behavioral profile algorithms, assign a risk score to every new transaction. Clustering and outlier-detection techniques make suspicious behavior stand out from benign activity.
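One simple way to see how outlier detection turns activity into a risk score is to measure how far a new event sits from the center of recent baseline activity. The sketch below is a minimal, assumed illustration (the feature set, sample values and distance-to-centroid scoring are mine, not Gurucul's algorithm): each event is a small feature vector, and its risk score is its Euclidean distance from the centroid of the baseline.

```python
# Hedged sketch of outlier-based risk scoring: score each event by its
# distance from the centroid of recent baseline activity. Features and
# values are invented for illustration, not a real product's method.
import math

def centroid(points):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def risk_score(event, baseline_points):
    """Euclidean distance from the baseline centroid; higher = more suspicious."""
    c = centroid(baseline_points)
    return math.sqrt(sum((event[i] - c[i]) ** 2 for i in range(len(c))))

# Assumed features per event: (logins, MB downloaded, after-hours accesses)
baseline = [(3, 20, 0), (4, 25, 1), (2, 18, 0), (3, 22, 0)]
normal_event = (3, 21, 0)
odd_event = (3, 500, 6)  # huge download plus heavy after-hours use

print(round(risk_score(normal_event, baseline), 2))  # small: blends into the cluster
print(round(risk_score(odd_event, baseline), 2))     # large: stands far outside it
```

Real platforms replace the raw distance with normalized, multi-model scores and learn clusters per role or peer group, but the principle is the same: suspicious transactions are the ones far from where a user's activity normally clusters.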
One of the features of Gurucul is its concept of dynamic pee