Kevin O'Brien currently heads product marketing for the cloud security vendor CloudLock and has been part of the security community for more than a decade. He contributed this article to BusinessNewsDaily's Expert Voices: Op-Ed & Insights.
People-centric security emerged on my radar in 2012, when Tom Scholtz, research vice president at Gartner (with a focus on security trends), presented the idea at the company's symposium.
In brief, the concept is that employees will take more responsibility for their data when a company minimizes IT-driven controls and security standards. In other words, effective security increases in the absence of heavy-handed policies.
It sounds outlandish, doesn't it?
It should. Scholtz introduced the idea of reducing security controls to make an environment more secure as part of Gartner's Maverick Research Program, which is not home to the typical sedate, safe, self-evident analyst content. Gartner describes the program as "something different," with a mission focused on "high-impact future scenarios that help [clients] think differently to uncover opportunity and enable innovation." As a result, much of the research performed under its banner exists only for a moment, relatively speaking. Trends are as often dismissed as they are debated, and even when a forecast gets it right, much of what is predicted is received as an interesting idea that the industry at large is not prepared to adopt.
Such was not the fate of Scholtz's presentation. In Barcelona a month ago, he expanded on the topic, drawing evidence from social scientists, traffic planners and more than a half-dozen large companies that have begun implementing the ideas he covered a year ago. "People-centric security" seems to have survived its infancy, and while it may not be widespread yet, it is both viable and worthy of a second look.
Why? Simply put, as the adoption of cloud computing platforms continues to accelerate, Gartner predicts that cloud-security spending will increase as well, reaching $4.2 billion by 2016. This poses an existential question about what that market will contain and how it will respond to risk.
One approach centers on creating an ecosystem based around tokenization and encryption, portraying risk as something fundamentally external. The alternative approach considers risk to be fundamentally related to the forces driving the growth curve in cloud computing overall: user insistence on mobile data-access, combined with social and collaborative tools that help employees work more effectively. In other words, it emphasizes insider threat, not external-actor risk.
As a first-hand participant in the last 15 years of enterprise security developments, I think the field is approaching an inflection point in answering that question. The decisions of mainstream Fortune 500 CIOs, CISOs (chief information security officers) and cybersecurity experts over the next 12 months will determine how quickly the industry reaches that point, and whether the security community or its end users make the choice.
Why people-centric security makes sense
"Why not whip the teacher when the pupil misbehaves?" — Diogenes of Sinope
IT security is necessarily a reactive art. Traditionally, it has been built upon two pillars: data encryption and heavily regulated data access.
Historically, the thinking was that users are vectors for risk, and the statistics to back that view up are readily available: According to Heidi Shey's "Understand the State of Data Security and Privacy: 2013 To 2014," research conducted for Forrester and released this year, "inadvertent misuse of data from insiders tops the list of breach causes in 2013, responsible for 36 percent of breaches seen."
What might be termed "classical security thinking" would suggest that the problem is that users are untrustworthy actors, making bad decisions for self-benefit, bereft of consideration for the organization or its need for security. Clearly, the right reaction here is to lock down what those users can do, imposing ever-more controls and monitoring tools to ensure that these users' ability to do harm is contained, if not eliminated. In the cloud, this often means tokenization, whereby the organization eliminates the pesky problem of housing data in externally hosted servers by keeping data on-premise, or at least under IT control again, with only encrypted "pointer" data stored by an external platform provider.
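To make the mechanism concrete, here is a minimal sketch of the tokenization pattern just described: real values stay in an on-premise vault, and only opaque "pointer" tokens leave for the cloud. The names and structure are illustrative, not any vendor's API; real products add encryption, persistence, key management and format-preserving tokens.

```python
import secrets


class TokenVault:
    """Toy on-premise token vault (illustrative only)."""

    def __init__(self):
        # Maps opaque tokens back to real values; this mapping never
        # leaves the organization's own infrastructure.
        self._store = {}

    def tokenize(self, value: str) -> str:
        """Swap a sensitive value for a random token the cloud can hold."""
        token = secrets.token_hex(8)  # reveals nothing about the value
        self._store[token] = value
        return token

    def detokenize(self, token: str) -> str:
        """Recover the original value; only possible with the vault.

        If the vault is corrupted or lost, the cloud-side copy becomes
        useless -- the tokens alone cannot be reversed.
        """
        return self._store[token]


vault = TokenVault()
cloud_record = vault.tokenize("ssn:123-45-6789")  # only this leaves the building
assert cloud_record != "ssn:123-45-6789"
assert vault.detokenize(cloud_record) == "ssn:123-45-6789"
```

Note that the vault is now a single point of failure, which is exactly the tactical weakness discussed below: every read and write must round-trip through it, and losing it destroys access to the "real" data.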
In reality, this results in an environment in which one of the following can happen: (a) It becomes impossible to use the platform at all, such as if the keys to the unencrypted "real" data are corrupted or lost; (b) the environment is full of holes as new apps and mechanisms for accessing apps come into existence, and users find ways to bypass the tokenization solution; or (c) all user activities take place "underground," where employees can and will still be sharing sensitive data, but will be doing so in unsanctioned consumer-grade applications that are selected because they aren't subject to enterprise control.
The problem seems intractable. It isn't.
The security officers of any organization that takes security seriously need to think about and manage risk thoroughly, and not merely fall back on what they know. In essence, there are two reasons why solutions such as tokenization are a poor remedy for what ails the modern, cloud-enabled company. The first is tactical, and the second is philosophical.
The tactical reason is about cost-to-benefit ratios. The level of effort to properly encrypt and tokenize data is non-zero; while there are vendors who specialize in offering this type of service, they put the burden of managing its implementation and ongoing use back on the IT or information security team at the customer's site. While this may justify an IT budget, the work required represents a massive waste of time and money, disproportionate to the cost savings realized by moving to the cloud in the first place.
In other words, it's based on fear rather than data. Some nebulous "other" is imagined, rather than the (proven) insider who is apt to accidentally overshare or externalize something and cause a data breach. The truth is that while outsider risk is real, the majority of data breaches are caused by users trying to get their work done — the "98 percent," in Scholtz's words, who are not criminals but rather employees using the technology at their disposal to do what they're tasked with accomplishing.
This leads to the second, philosophical issue. There is a simple reason why it is wrong to attempt to lock down data and enforce as many controls as possible: Human beings have a tendency to act in ways that reflect the context and environment in which they find themselves. Treat your employees and users like criminals, and they'll behave accordingly, finding ways to circumvent you. Treat them as essentially good and trustworthy, and they'll act that way.
The people-centric approach
This all leads to a more reasonable alternative to the (rather unwieldy) model outlined above. Scholtz sees organizations beginning to adopt the core concept that people should be accountable for their actions. But instead of imposing new controls that try to account for every possible violation, IT should reduce how many such rules are in place. Paradoxically, this leads to a net increase in organizational security.
Controls, where they exist, should be designed to remediate risk, not block it.
It appears to be a truth that transcends software security. Take, for example, Ashford, Kent, in England. As the Financial Times reported in mid-2011, "Ashford has pioneered a so-called 'shared space' scheme on its busy ring road, removing almost all street [signs] and giving no priority to vehicles, cyclists or pedestrians."
The result? No fatalities, despite the more than 10,000 cars that use the space daily; a 76-percent decrease in accidents, the most severe of which was a broken ankle; and a track record that has been replicated in many countries across Europe. The idea is compelling in its simplicity. Scholtz references it in his presentations, and like all good stories, its moral is undeniable.
"Worse is better." — Richard P. Gabriel, software designer attributed with the creation of the "New Jersey style" school of thought, which claims that less functionality ("worse") is a preferable option ("better") in terms of practicality and usability.
The essential point is this: Either security professionals will find ways to be relevant in a post-control, people-centric model wherein they drive business efficiencies, or they will resist the paradigm until it, or they, are made irrelevant. Cloud platforms are all but certain to keep growing: Cost savings, worker mobility and efficiency, and the various app markets that have sprung up to meet the needs of this new workforce are powerful forces.
That said, change can be frightening, and the number of vendors promising to return corporate data to expensive, often-breached and hard-to-manage on-premise systems is growing. Also growing, however, is the number of providers embracing Scholtz's ideas, pushing for accountability instead of prohibition, a belief in the goodness of people rather than innate malice, and a "trust but verify" response to risk instead of command and control. What this approach requires is a willingness to change the role of IT and the relationship between users and their data, and the adaptability to let efficiency, rather than security, drive that connection.
Look to your vendors and peers in 2014. Are they offering a vision that recognizes that the rules have changed, or are they struggling to find a way to keep older ideas relevant in spite of that shift? If Scholtz is right, and I believe that he is, this shift will happen. What those of us in the security industry can influence is when, how and whether it is a change that we control, or one that controls us.
The views expressed are those of the author and do not necessarily reflect the views of the publisher. This version of the article was originally published on BusinessNewsDaily.