Why security will always be a people problem
We've heard the phrase "users are the weakest link" more times than we can count. Building a more resilient cyber security strategy means flipping that model on its head and making people part of the solution. Instead of starting with a technology-based strategy, Absolute discusses how and why organizations can adopt a people-first security strategy.

Paul Proctor, chief of research for risk and security at Gartner, was quoted as saying: "we are facing a cultural disconnect [...] executives believe that IT risk and security is a technical problem." Of course, that’s wrong. Deep down, we know it’s wrong. Security is, and always will be, a people problem. At least until the robots fully take over. Until then, though, we have to come to grips with a simple fact: given the way security is typically deployed in enterprises today, users will continue to click on things they shouldn’t, visit sites they shouldn’t, and make other uninformed or careless choices that lead to breaches, incidents, or loss of availability of systems and data. Attackers know that users still have a propensity to "click first and ask questions later" -- which means focusing attack resources on users is almost always the first vector tried when attempting to breach a target. Rob Joyce, head of the elite NSA group Tailored Access Operations (TAO), recently gave a talk in which he stated that groups like his typically do not need zero-day exploits to gain access to a target: "[there are] so many more vectors that are easier, less risky and quite often more productive than going down that route."

Risky Choices

But why is this the case? After the annual security review training, after the phishing training, after all the education that security teams give users, why do users continue to make risky choices? It’s not an easy question to answer, but a large part of it comes down to a simple fact: users don’t feel like they are part of the overall security equation. They’re not invested in the consequences of their actions (or inactions, as the case may be). We ask people in the real world to be responsible for their own personal security when they’re walking and driving, so why not ask them to do the same virtually? What if we could convince users to pause and consider the consequences of risky behaviors? Can we get users to ask: "Am I in a position where someone may want to target me? Am I part of a team with critical assets, systems, tools, or data that would cause everyone around me harm if something bad were to happen?"

Current security strategies are often excessively onerous, unwieldy, and difficult to manage -- there are simply too many "things" to manage today, and not enough hours in the day or resources to do it. No CISO or security team wants to be seen as a heavy-handed dictator ruling with an iron fist; it makes for strained relationships, difficult discussions, and job dissatisfaction. How can security teams change this? Maybe it’s time to flip the model around: focus on the people in the organization first, and the technology second. The idea of People-Centric Security (PCS) has been around for a few years now, and many organizations that have tried it have had a significant measure of success.

I think it’s reasonable to presume that we all want fully engaged employees practicing good security hygiene. We want to give users as much freedom as possible, and more control over the devices they use daily. But at the same time, we must make people understand the consequences of their actions if we grant them those freedoms and latitude. A trust-based, people-first model shifts the responsibility to individual business units and teams and allows them to make security decisions based on the unique risks and risk appetites of each group. In a PCS model, we must support and advise teams, not dictate policy and procedure.

At the same time, because each team has unique security needs, it’s critical that every team understands this. If one group of employees is denied something that another team gets (cloud-based file sharing, looser BYOD policies, and so on), that can lead to animosity, decreased morale, or worse: attempts to circumvent the limited controls you keep in place. People need to understand why each team decided what it did, and why those decisions are critical to the security of the entire company.

Fundamental Principles

At the CISO or CIO/CSO level, you must provide business units and teams with a set of core standards, fundamental principles, and ethical guidelines to follow, and then allow them to decide what else they want or need to add to their unique security strategies. Clearly, teams dealing with sensitive data like Human Resources information or confidential financial data will have unique regulatory or compliance obligations that other teams will not, requiring them to think more deeply about what should or should not be allowed within their teams.

Remember, though: you must govern and guide. If it becomes clear a team isn’t going to meet its regulatory obligations -- for example, if there are unique data residency requirements, or compliance issues around the encryption of data at rest or in transit -- you must ensure the team understands the minimum requirements it has to follow. Holding regular meetings and including resources from business units like Legal and HR may help reinforce those minimum requirements.
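To make "govern and guide" concrete, here is a minimal sketch (in Python) of one way a baseline could be expressed as data and a team's self-chosen policy checked against it. Every name, field, and value below is a hypothetical illustration for this article, not any particular product's API:

    # Hypothetical CISO-level baseline: the minimum every team must meet.
    BASELINE = {
        "encryption_at_rest": True,
        "encryption_in_transit": True,
        "approved_residency": {"EU", "US"},  # regions where data may reside
    }

    def gaps_against_baseline(team_policy):
        """Return the baseline requirements a team's policy fails to meet."""
        gaps = []
        for control in ("encryption_at_rest", "encryption_in_transit"):
            if not team_policy.get(control, False):
                gaps.append(f"{control} is required but not enforced")
        illegal = set(team_policy.get("residency", ())) - BASELINE["approved_residency"]
        if illegal:
            gaps.append(f"data stored in non-approved regions: {sorted(illegal)}")
        return gaps

    # Example: an HR team's draft policy, reviewed in a regular governance meeting.
    hr_policy = {"encryption_at_rest": True,
                 "encryption_in_transit": False,
                 "residency": {"EU", "APAC"}}
    for gap in gaps_against_baseline(hr_policy):
        print("minimum requirement not met:", gap)

The point is not the code itself but the shape of the arrangement: the baseline is small and non-negotiable, and everything beyond it is the team's to decide.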

BYOD and SaaS are here to stay, driven by the unique needs of users and individual business units. In the past, those users and units may have simply adopted both, to the chagrin and consternation of IT and IT security. Allowing teams to decide for themselves whether these tools are right for them builds a much stronger culture of trust and collaboration with your security teams. That said, moving toward this type of model is neither for the faint-hearted security team nor trivial to implement, and it requires an underlying culture of trust with your users. If you don’t already have that culture, you will need to build it long before thinking about moving to a model like this; without the inherent trust of your users, you will not be able to effectively monitor for non-compliant actions.

For PCS to be truly successful, you must deploy monitoring mechanisms that identify violations and let you respond rapidly. Use violations as an education tool, not a tool to punish users -- at least until they become a regular occurrence with the same person or team. Start from the assumption that any inappropriate action or activity was an honest mistake, and use that mistake as a teaching tool.
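As one illustration of "education first, escalation only on repeats," here is a minimal sketch, again in Python. The threshold, event format, and response wording are all assumptions made for the example, not features of any specific monitoring tool:

    from collections import Counter

    EDUCATION_THRESHOLD = 3  # assumed number of repeats before escalating

    violation_counts = Counter()

    def handle_violation(user, action):
        """Treat early offenses as teaching moments; escalate repeat offenders."""
        violation_counts[user] += 1
        if violation_counts[user] < EDUCATION_THRESHOLD:
            # Default to the honest-mistake assumption the article argues for.
            return f"educate: walk {user} through why '{action}' is risky"
        return f"escalate: review '{action}' by {user} with their team lead"

    # Example stream of monitored events.
    events = [("alice", "shared a file to a personal cloud account")] * 3
    for user, action in events:
        print(handle_violation(user, action))

The first two events produce an educational response; only the third escalates, mirroring the "regular occurrence" caveat above.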

Remember, your users are professionals and want to be treated like professionals. If you give them the understanding and the autonomy to make good decisions about how they shepherd the data and information they are entrusted with, the vast majority will make good decisions. At the same time, you can reduce the number of security controls in your infrastructure -- leading to an easier-to-manage security infrastructure, reduced costs, and improved morale. Maybe we can get users to take that extra split second before clicking the link after all.

Richard Henderson is the global security strategist at Absolute, where he is responsible for trend-spotting, industry-watching and idea-creating. He has nearly two decades of experience and involvement in the global hacker community.

Published under license from ITProPortal.com, a Future plc Publication. All rights reserved.

Image Credit: Manczurov / Shutterstock

