Industry Insight: How Will Cloud Security Evolve in 2017?

The coming year promises substantial growth for public cloud service providers and Software-as-a-Service (SaaS) solution vendors. For one, new foundation-level technologies such as microservice deployments and blockchain are opening untapped avenues for innovation. But perhaps even more important, one of the cloud adoption blockers CIOs cite most often (namely, security and data safety) finally appears to be moving to the background, especially for enterprises and midsize businesses.

While analysts agree that most businesses today—including the enterprise and midsize segments—have cloud deployments to varying degrees, they also agree that larger organizations have been slow to move major workloads to the cloud, primarily because of concerns about cloud security and data safety. That matters to these customers not just because of the massive volumes of data they would be migrating, but also because passing rigorous compliance and regulatory checks, such as the Health Insurance Portability and Accountability Act (HIPAA) and ISO 27001, is critical for them to do business. Security is top of mind for these CIOs and, until recently, it simply wasn't robust enough for them to adopt the cloud in a large-scale way.

But, according to analyst predictions for 2017, that's all about to change. Cloud security has come a very long way in the last half-decade, and many IT professionals and CIOs seem to agree. As a result, analysts predict we'll see much greater uptake of cloud infrastructure and services by the enterprise sector in 2017.

I conducted an email interview with Brian Kelly, Chief Security Officer at well-known managed cloud provider Rackspace, to find out what's changing about cloud security in the coming year—and to see if he agreed with these analysts' predictions.

PCMag: Exactly how does Rackspace view its role versus that of its customers' IT staff when it comes to data safety and security?

Brian Kelly (BK): We are seeing direct evidence that customers are coming to the cloud because of security rather than running away from it. With few exceptions, companies simply do not have the resources and skills to effectively defend their organizations from the more sophisticated and persistent threats. Similarly, cloud providers recognize that the future of our businesses depends on delivering trust and confidence through effective security practices. Despite cloud providers' increased investments in security, protecting organizational assets will always remain a shared responsibility. While the cloud provider is directly responsible for the protection of facilities, data centers, networks, and virtual infrastructure, consumers also have a responsibility to protect operating systems, applications, data, access, and credentials.

Forrester coined the term "uneven handshake" in reference to this shared responsibility. In some regards, consumers believe they are shouldering the burden for the security of their data. This may have been true a few years ago; however, we are witnessing a balancing of the handshake. That is, cloud providers can and should do more for consumers to share responsibility for security. This can take the form of simply providing greater visibility and transparency into hosted workloads, providing access to control planes, or offering managed security services. While a consumer's security responsibilities will never disappear, cloud providers will continue to take on more responsibility and deliver value-added managed security offerings to build the trust necessary for both sides to operate safely in the cloud.

PCMag: Do you have any advice for IT professionals and business customers about what they can do, beyond what a provider delivers, to help protect their cloud-based data themselves?

BK: They must continue to implement security best practices within their enclaves. They need to segment the workloads in the enclave responsibly to limit the scope of compromises, ensure the workload environments (operating systems, containers, virtual LANs) are properly secured and patched, leverage endpoint- and network-level sensing and response technologies (IDS/IPS, malware detection and containment), and actively manage accounts and access. Often, customers can include these services and technologies in their cloud usage contracts; if not, consumers must ensure these measures happen on their side.
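
To make that advice concrete, here is a rough sketch (ours, not Rackspace's) of how an IT team might spot-check that sensitive ports in one enclave aren't reachable from another. The hosts and ports are placeholder assumptions, and a real audit would go well beyond a simple TCP connect test.

```python
# Minimal sketch: verify that enclave segmentation actually blocks
# cross-segment access to sensitive ports. Hosts and ports below are
# hypothetical placeholders, not values from the interview.
import socket

SENSITIVE_PORTS = [22, 3306, 5432]                 # SSH, MySQL, PostgreSQL
OTHER_ENCLAVE_HOSTS = ["10.1.20.5", "10.1.20.6"]   # hosts in a different segment

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in OTHER_ENCLAVE_HOSTS:
        for port in SENSITIVE_PORTS:
            if is_reachable(host, port):
                # A successful connection means the segmentation policy has a hole.
                print(f"WARNING: {host}:{port} is reachable across the enclave boundary")
            else:
                print(f"OK: {host}:{port} blocked as expected")
```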

PCMag: One key question we've seen readers ask is about effective defense against massive Internet of Things (IoT)-powered distributed denial of service (DDoS) attacks, similar to the incident this past October in which devices from a Chinese IoT vendor inadvertently contributed heavily to the attack. Do such defenses involve working with upstream Internet Service Providers (ISPs)? And how do they keep an attack on one client from taking down everyone in a facility?

BK: The main goal of DDoS defense is maintaining availability while under attack. The DDoS attack capabilities of IoT devices are well known, and they can be successfully mitigated by implementing security best practices and by using intelligent DDoS mitigation systems. The biggest threat is not the method of the attacks from IoT but the immense number of vulnerable internet-enabled devices. Networks need to be locked down to limit their exposure to threats on the internet. Network operators need to be proactive in detecting all possible threats and knowing the most effective techniques to mitigate them, while maintaining the ability to analyze and classify all network traffic.
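
As a rough illustration of that "analyze and classify" idea (our sketch, not a Rackspace tool), the snippet below counts requests per source address in a log window and flags obvious outliers. The log format and threshold are assumptions made purely for illustration.

```python
# Minimal sketch: tally requests per source IP over a log window and flag
# sources that exceed a simple volume threshold. Real DDoS analytics classify
# traffic far more richly; this only shows the counting idea.
from collections import Counter

REQUESTS_PER_WINDOW_THRESHOLD = 1000  # assumed threshold; tune against a real baseline

def flag_noisy_sources(log_lines):
    """log_lines: iterable of 'timestamp source_ip request ...' strings (assumed format)."""
    counts = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2:
            counts[parts[1]] += 1   # second field is assumed to be the source IP
    return [ip for ip, n in counts.items() if n > REQUESTS_PER_WINDOW_THRESHOLD]

if __name__ == "__main__":
    # Synthetic example: one chatty source and one quiet one.
    sample = (["2017-01-01T00:00:00 203.0.113.7 GET /"] * 1500 +
              ["2017-01-01T00:00:01 198.51.100.2 GET /"] * 20)
    print(flag_noisy_sources(sample))   # -> ['203.0.113.7']
```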

A strong DDoS mitigation strategy requires a layered, defensive approach. The extensive number of IoT devices makes mitigating IoT attacks difficult for small-scale networks. The effectiveness of an IoT attack lies in its flexibility to generate different attack vectors and produce massive, high-volume DDoS traffic. Even the most hardened network can quickly be overwhelmed by the enormous volume of traffic that IoT can generate in the hands of a capable attacker. Upstream ISPs are often better equipped and staffed to deal with these large-scale attacks, which would rapidly saturate small network links. Furthermore, the scale of operating such a network and the tools needed to mitigate such attacks put effective detection and response out of reach of most organizations. A better solution is outsourcing such operations to the upstream ISPs or cloud providers that are already working with networks of this scale.

Upstream ISPs have many advantages, including a robust diversity of internet access points across which they can shift traffic. They also generally have large enough data pipes to absorb a lot of DDoS traffic initially while the response activities of re-routing traffic are spinning up. "Upstream" is a good term because it is somewhat analogous to a series of dams along a river. During a flood, you can protect the houses downstream by using each dam to capture progressively more water in the lake it creates and metering the flow to prevent downstream flooding. Bandwidth and access-point diversity give upstream ISPs the same kind of resilience. They also have protocols, negotiated across the internet community, that they can activate to shunt DDoS traffic closer to its sources.

As with other incident response activities, planning, preparation, and practice are essential. No two attacks are exactly the same; therefore, anticipating options and circumstances, then planning and practicing for them, is crucial. For IoT attack scenarios, that includes scanning your network for vulnerable devices and taking corrective action. You should also be sure to block outsiders from scanning your network for vulnerable IoT devices. To help, implement rigorous access control and operating system hardening, and develop procedures for patching different code versions, networked devices, and applications.
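
Here is a minimal sketch (again ours, not Rackspace's) of that scanning step: checking a local segment for devices that expose Telnet, the service Mirai-style IoT botnets abused via default credentials. The subnet is a placeholder, and the corrective action (changing credentials, firewalling, patching firmware) still has to happen outside the script.

```python
# Minimal sketch: sweep a small subnet for devices exposing Telnet (port 23),
# the service abused by Mirai-style IoT botnets via default credentials.
# The subnet below is a hypothetical placeholder.
import socket
from ipaddress import ip_network

SUBNET = "192.168.1.0/28"   # assumed local segment; adjust for your network
TELNET_PORT = 23

def telnet_open(host: str, timeout: float = 0.5) -> bool:
    """Return True if the host accepts a TCP connection on the Telnet port."""
    try:
        with socket.create_connection((host, TELNET_PORT), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for addr in ip_network(SUBNET).hosts():
        if telnet_open(str(addr)):
            print(f"Possible vulnerable IoT device: {addr} has Telnet open")
```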


PCMag: Another question readers ask us is about container security. Do you worry about weaponized containers that could contain complex attack systems or do you think the architecture protects against exploits like that?

BK: Security with any newly emphasized technology is always a heightened concern, and containers are not unique in this respect. But, as with many security challenges, there are trade-offs. While there may be increased risk, we also believe there are effective mitigation strategies for the risks we can control.

A container, essentially, is a highly transient and lightweight, virtualized operating system environment. Are virtual machines less secure than separate physical servers? They are, in most cases. However, many businesses see the cost benefits from virtualization (less spend, easier to manage, can re-purpose machines easily) and they choose to leverage those while mitigating as many risks as they can. Intel even realized they could help mitigate some of the risks themselves and that's where Intel VT came from.

Containers take the initial cost savings and flexibility of virtualization further. However, they're also riskier because there is a very thin wall between each container and the host operating system. I'm not aware of any hardware support for isolation, so it's up to the kernel to keep everyone in line. Companies have to weigh the cost and flexibility benefits of this new technology against these risks.

Linux experts are concerned because each container shares the host's kernel, which makes the surface area for exploits much larger than with traditional virtualization technologies, such as KVM and Xen. So, there's potential for a new attack wherein an attacker hacks privileges in one container to access—or affect conditions within—another container.

We don't yet have much in the way of intra-container security sensors. That area of the market must mature, in my opinion. In addition, containers cannot use the security features built into CPUs (like Intel VT) that allow code to be executed in different rings depending on its privilege level.

In the end, there are tons of exploits for physical servers, virtual machines, and containers. New ones crop up all the time. Even air-gapped machines get exploited. IT professionals should be worried about security compromises at all of these levels. Many of the defenses are the same for all of these deployment types, but each one has its own extra security defenses that must be applied.

The hosting provider must use Linux Security Modules (such as SELinux or AppArmor) to isolate containers, and that system must be closely monitored. It's also critical to keep the host kernel updated to avoid local privilege escalation exploits. User ID (UID) isolation also helps, since it prevents a root user in the container from actually being root on the host.
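
For readers curious what UID isolation looks like in practice, this small sketch (ours, not part of Rackspace's tooling) reads Linux's /proc/self/uid_map from inside a container to check whether container root is remapped to an unprivileged host UID. It assumes a Linux kernel with user-namespace support.

```python
# Minimal sketch: inspect Linux's /proc/self/uid_map to see how user IDs
# inside this process's namespace map to user IDs on the host. With
# user-namespace remapping, UID 0 (root) inside a container maps to an
# unprivileged UID outside; without it, 0 maps straight to 0.
def read_uid_map(path: str = "/proc/self/uid_map"):
    mappings = []
    with open(path) as f:
        for line in f:
            inside, outside, count = (int(x) for x in line.split())
            mappings.append((inside, outside, count))
    return mappings

if __name__ == "__main__":
    for inside, outside, count in read_uid_map():
        print(f"namespace UID {inside} -> host UID {outside} (range of {count})")
        if inside == 0 and outside == 0:
            print("WARNING: root in this namespace is also root on the host "
                  "(no user-namespace remapping)")
```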

PCMag: One reason PCMag.com hasn't run a large-scale comparison of Managed Security Service Providers (MSSPs) is that there's confusion in the industry about exactly what that term means and what that class of provider can and should deliver. Can you break down Rackspace's managed security service: what it does, how it differs from other providers' offerings, and where you see it going? That way, readers can get a good idea of just what they're signing up for when they employ such a service.

BK: MSSPs have to accept that security hasn't been working and adjust their strategy and operations to be more effective in today's threat landscape—which contains more sophisticated and persistent adversaries. At Rackspace, we acknowledged this shift in threats and developed the new capabilities needed to mitigate them. Rackspace Managed Security is a 24/7/365 advanced Detect and Respond operation. It's been designed not only to protect companies from attacks, but to minimize business impact when attacks happen, even after an environment is successfully hacked.

To achieve this, we adjusted our strategy in three ways:

  1. We focus on the data, not on the perimeter. To effectively respond to attacks, the goal must be to minimize business impact. This requires a comprehensive understanding of the company's business and the context of the data and systems we're protecting. Only then can we understand what normal looks like, understand an attack, and respond in a way that minimizes impact to the business.
  2. We assume attackers have gained entry to the network and use highly skilled analysts to hunt them down. Once on the network, attackers are hard for tools to identify because, to security tools, advanced attackers look like administrators conducting normal business functions. Our analysts actively search for patterns of activity that tools cannot alert on—these patterns are the footprints that lead us to the attacker.
  3. Knowing you are under attack isn't enough. It's critical to respond to attacks when they occur. Our Customer Security Operations Center uses a portfolio of "preapproved actions" to respond to attacks as soon as they see them. These are essentially run books we've tried and tested to successfully deal with attacks when they happen. Our customers see these run books and approve our analysts to execute them during the onboarding process. As a result, analysts are no longer passive observers—they can actively shut down an attacker as soon as they're detected, and often before persistence is achieved and before the business is impacted. This ability to respond to attacks is unique to Rackspace because we also manage the infrastructure that we're protecting for our customers. In addition, we find that compliance is a byproduct of security done well. We have a team that capitalizes on the rigor and best practices we implement as part of the security operation, by evidencing and reporting on the compliance requirements we help our customers meet.

PCMag: Rackspace is a big proponent, indeed a credited founder, of OpenStack. Some of our IT readers have asked whether security development for such an open platform is actually slower and less effective than that of a closed system such as Amazon Web Services (AWS) or Microsoft Azure because of the perceived "too many cooks" dilemma that plagues many large open-source projects. How do you respond to that?

BK: With open-source software, "bugs" are found in the open community and fixed in the open community. There is no way to hide the extent or impact of a security issue. With proprietary software, you're at the mercy of the software provider to fix vulnerabilities. What if they do nothing about a vulnerability for six months? What if they miss a report from a researcher? We view all those "too many cooks" you refer to as a huge software security enabler. Hundreds of smart engineers often look at each part of a major open-source package like OpenStack, which makes it really difficult for flaws to slip through the cracks. The discussion of the flaw and the evaluation of options to repair it both happen in the open. Private software packages can never receive this sort of per-line-of-code analysis, and the fixes will never get such open vetting.

Open-source software also allows for mitigations outside the software stack. For example, if an OpenStack security issue appears but a cloud provider cannot upgrade or patch the vulnerability immediately, other changes can be made. The affected function could be temporarily disabled, or users could be prevented from using it via policy files. The attack could be effectively mitigated until a long-term fix is applied. Closed-source software often doesn't allow for that, since it's difficult to see what needs to be mitigated.
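
As a rough illustration of that policy-file mitigation (our sketch, not an official OpenStack procedure), the snippet below sets a Compute API rule to "!", which oslo.policy treats as "never allowed", to switch the action off until a patch lands. The file path and rule name are assumptions that vary by service and release.

```python
# Minimal sketch: temporarily deny an OpenStack API action by setting its
# policy rule to "!" (treated by oslo.policy as "never allowed"). The path
# and rule name are illustrative assumptions, not universal values.
import json
import shutil

POLICY_FILE = "/etc/nova/policy.json"                 # assumed location
RULE_TO_DISABLE = "os_compute_api:os-console-output"  # assumed rule name

def disable_rule(policy_path: str, rule: str) -> None:
    """Back up the policy file, then set the given rule to deny everyone."""
    shutil.copy(policy_path, policy_path + ".bak")     # keep a rollback copy
    with open(policy_path) as f:
        policy = json.load(f)
    policy[rule] = "!"
    with open(policy_path, "w") as f:
        json.dump(policy, f, indent=2, sort_keys=True)

if __name__ == "__main__":
    disable_rule(POLICY_FILE, RULE_TO_DISABLE)
    print(f"{RULE_TO_DISABLE} disabled until a long-term fix is applied")
```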

Also, open-source communities spread knowledge of these security vulnerabilities quickly. The question of "How do we prevent this from happening later?" gets asked quickly, and deliberation is conducted collaboratively and in the open.

PCMag: Let's end on the original question for this interview: Do you agree with analysts that 2017 will be a "breakout" year in terms of enterprise cloud adoption, mainly or at least partially due to enterprise acceptance of cloud provider security?

BK: Let us step back for a moment to discuss the different cloud environments. Most of your question points to the public cloud market. As I mentioned above, Forrester researchers have noted the "uneven handshake" between cloud providers and consumers in that the cloud providers provide a set of services, but cloud consumers often assume they're receiving much more in terms of security, backup, resiliency, etc. I have advocated since joining Rackspace that cloud providers must even out that handshake by being more transparent with our consumers. Nowhere is the handshake less even, still today, than in public cloud environments.

Private cloud environments, however, and especially those implemented in the consumer's own data center, don't suffer as much from such illusions. Consumers are much clearer about what they are buying and what the providers are giving them. Still, as consumers have raised expectations in the purchase process and cloud providers have stepped up our game to deliver more complete services and transparency, the emotional and risk-related barriers to moving workloads from a traditional data center to a public cloud environment are falling rapidly.

But I don't think this will create a stampede toward the cloud in 2017. Moving workloads and entire data centers entails significant planning and organizational change. It is far different from upgrading the hardware in a data center. I encourage your readers to study the Netflix transition; they transformed their business by moving to the cloud, but it took them seven years of hard work. For one, they refactored and rewrote most of their apps to make them more efficient and better adapted to the cloud.

We also see many consumers adopting private clouds in their data centers, using a hybrid cloud architecture as their starting point. This adoption seems to be accelerating. I believe the adoption curve could see an elbow upward in 2017, but it will take a few years for the swell to really build.

Brian Kelly is Chief Security Officer (CSO) of Rackspace. Kelly is responsible for the safety and security of Rackers, facilities, infrastructure, and data. Rackspace's 200,000 customers around the globe trust the company to provide and maintain mission-critical compute, network, and storage workloads.

This article originally appeared on PCMag.com.