
The increasing role of AI in cyber security [Q&A]

As attacks become more frequent and sophisticated, conventional security techniques and human analysis struggle to keep pace.

As a result, many companies are turning to artificial intelligence methods to help them defend their systems effectively. We spoke to Peter Gyongyosi, product manager at security intelligence specialist Balabit, to find out more about why AI is increasingly seen as the future of cyber security.

BN: Why is AI becoming more popular in the security space?

PG: AI lets us create software that's capable of intelligent behavior. That intelligence can take many shapes, from suggesting music to you to beating humans at games like Go. In security there are two main challenges: there's a huge amount of data, and the signal-to-noise ratio in that data is pretty low. For security analysts it's like trying to find a needle in a haystack full of needles.

The other big problem is that it's a cat-and-mouse game in which the defenders constantly have to play catch-up with the attackers. AI can help with both of these issues. It can tackle information overload by pre-filtering the data, so instead of presenting everything it delivers easily understandable views that prioritize system events. It can produce insight based on the data, which frees up security teams to focus on the important tasks.
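As a rough illustration of the kind of pre-filtering and prioritization described above (the event fields and the simple -log-frequency scoring here are hypothetical assumptions, not a description of Balabit's actual method), a triage tool might rank incoming events by how unusual they are for the user who generated them:

```python
from collections import Counter
from math import log

# Hypothetical event records: (user, action) pairs taken from historical logs.
history = [
    ("alice", "login"), ("alice", "read_report"), ("alice", "login"),
    ("bob", "login"), ("bob", "db_export"), ("alice", "read_report"),
]

# Count how often each (user, action) pair has occurred before.
baseline = Counter(history)
total = sum(baseline.values())

def rarity_score(user: str, action: str) -> float:
    """Higher score = rarer event; a simple -log(frequency) surprise measure."""
    count = baseline.get((user, action), 0)
    # Add-one smoothing so previously unseen events get the highest finite score.
    return -log((count + 1) / (total + 1))

# New events to triage: rank them so analysts see the most unusual first.
incoming = [("alice", "login"), ("alice", "db_export"), ("bob", "db_export")]
for user, action in sorted(incoming, key=lambda e: rarity_score(*e), reverse=True):
    print(f"{rarity_score(user, action):5.2f}  {user:6s} {action}")
```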

In addition, an AI system can 'learn' what is normal and spot events that fall outside that pattern of behavior. This allows security teams to catch zero-day attacks, the 'unknown unknowns' they didn't even know they should be looking for. AI can also add a new, harder-to-beat layer of security by monitoring the behavior of users as they interact with the system and using that as a form of ongoing authentication.
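A minimal sketch of that "learn what is normal" idea, using scikit-learn's IsolationForest as a stand-in for whatever model a real product would use; the per-session features (login hour, data volume) and the contamination setting are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative per-session features for one user: [login hour, MB transferred].
# In practice these would come from weeks of audit logs, not a handful of rows.
normal_sessions = np.array([
    [9, 12], [10, 8], [9, 15], [11, 10], [14, 9], [10, 11], [9, 13],
])

# Learn what "normal" looks like for this user; contamination is a guess
# at how much of the training data is itself anomalous.
model = IsolationForest(contamination=0.05, random_state=0)
model.fit(normal_sessions)

# Score new sessions: a prediction of -1 means the model flags the session.
new_sessions = np.array([
    [10, 11],   # typical mid-morning session
    [3, 800],   # 3 a.m. login moving far more data than usual
])
for session, label in zip(new_sessions, model.predict(new_sessions)):
    status = "ANOMALOUS" if label == -1 else "normal"
    print(f"hour={session[0]:>2}, MB={session[1]:>4}: {status}")
```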

BN: Can this monitoring of behavior help to spot insider threats?

PG: Absolutely. In my view there is no big difference between insider and external attacks: at some point legitimate credentials to the system are involved, so by monitoring behavior, such as when and where users log in, it's possible to protect against insiders with malicious intent.

BN: How does AI work alongside human security teams?

PG: The two things have to interact. We are developing a system that uses AI to carry out security analysis, and having talked to a huge number of customers, our general experience is that people are not yet ready to blindly trust a system to make security decisions, no matter how sophisticated it is.

That attitude might change in the future, but we expect security to require quite a lot of human interaction for a while. Today there is more need for tools that help security analysts than for tools that make decisions on their behalf. In the long run we will probably be able to build systems that make better decisions than any human could. However, that raises a lot of ethical and other questions, so a high level of human interaction will still be required.

BN: Various high-profile people have expressed concerns about artificial intelligence. Is there a danger that we could end up becoming too reliant on it?

PG: That is a real danger, and it will happen to some extent. Still, we're facing systems that produce terabytes of data daily; there's just no way humans can review all of it. We need help, and we need machines to do a large portion of the job. To an extent we've already become blind to certain types of attacks: we can only spot them when they're highlighted for us, and that is a real danger. I think the focus should be on creating tools that enable us to concentrate on the important things and make more informed decisions; that's where these systems can help a lot.

BN: So AI can help to cope with the speed of change?

PG: Yes, first by building profiles of users and learning what they're doing, what systems they're using and so on, and then by comparing their activities in real time to that baseline. Knowing that profile and showing a human analyst who the person is are really valuable by themselves, because they allow the analyst to act and make decisions faster. The system learns the infrastructure for the analyst, and it's much quicker for the system to pick up discrepancies than for a human. In a large organization humans don't stand a chance without computer assistance.
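A toy sketch of that baseline comparison; the UserProfile structure, the hosts-and-hours features and the deviation scoring are assumptions made for illustration rather than a description of Balabit's system:

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Baseline learned from historical activity for a single user."""
    usual_hosts: set = field(default_factory=set)
    usual_hours: set = field(default_factory=set)   # hours of day seen before

    def learn(self, host: str, hour: int) -> None:
        self.usual_hosts.add(host)
        self.usual_hours.add(hour)

    def deviation(self, host: str, hour: int) -> int:
        """Count how many aspects of this activity fall outside the baseline."""
        return (host not in self.usual_hosts) + (hour not in self.usual_hours)

# Build a profile from past activity, then score live events against it.
profile = UserProfile()
for host, hour in [("build-01", 9), ("build-01", 10), ("wiki", 14)]:
    profile.learn(host, hour)

for host, hour in [("build-01", 10), ("hr-db", 2)]:
    score = profile.deviation(host, hour)
    flag = "investigate" if score >= 1 else "ok"
    print(f"{host:10s} hour={hour:>2}  deviation={score}  -> {flag}")
```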

BN: Will we see all organizations having some kind of AI security in the future?

PG: That's the direction we're moving in, though maybe not in really small organizations, which are likely to move entirely to the cloud if they haven't already. They won't have their own security systems set up, but their service providers will definitely use the technology. You can already see this happening, such as when additional authentication is required if you log in from a different machine or location. Larger organizations with their own security setups will start to use systems that are less and less rule-based and more and more intelligent, allowing them to make better decisions.

Image Credit: Mopic / Shutterstock

