Thought Leadership

Aug 5, 2019

The Massive Capital One Breach Post-Mortem: Three Lessons For Infosec Teams


If you work in IT, DevOps or information security, the tale of Paige “erratic” Thompson and her breach of a sensitive Capital One database reads like a nightmare. A formerly privileged insider uses her knowledge of security gaps common to databases deployed on Amazon Web Services to steal personally identifiable data on 106 million consumers and small businesses, all stored in a cloud database that was not securely configured.

This attack could cost Capital One millions of dollars to remediate and will likely draw a massive fine from a government enforcement body (three such fines of over $100 million each have been announced in just the last month). The constant negative coverage will most likely damage Capital One's brand and monopolize time and energy for internal development and operations teams as they scramble to check every potentially exposed system to ensure they don't get hacked again.

At SafeBreach, we spend most of our time thinking about how to prevent incidents like this breach. We do not have detailed forensic information about what actually happened. But based on news reports and other insights, here are three lessons that we think are critical to understand if you want to prevent this type of nightmare from happening to your business.

Lesson 1: Focus On Protecting The Most Critical Assets To Minimize Exposure

We know that humans cannot possibly maintain proper configurations of all their systems. We also know that even test results and guidance can be overwhelming without prioritization based on business logic. Capital One, and every other financial services company, should have a detailed assessment process that identifies the most critical data assets it holds and assigns those a higher priority. For example, any database holding Social Security numbers or other unique government identifiers, like the ones exposed in the Capital One breach, should be a top priority for protection.

It appears, from her chat logs, that Thompson may have breached a Web Application Firewall that was not properly configured, or perhaps was left with its default admin password unchanged. While I noted above that proper configuration everywhere for everything is no longer manageable, part of the way we must deal with this is by identifying what is most important based on the business risk it carries, dedicating more resources and care to testing those assets, and remediating their security gaps quickly.

Our Advice: Create a system or leverage technology tools to analyze all security gaps and prioritize them for remediation based on the business risk the gap creates.
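To make that advice concrete, here is a minimal sketch of what risk-based prioritization can look like. The asset names, weights and scoring formula below are hypothetical illustrations, not a description of Capital One's environment or of any particular product:

```python
# Hypothetical sketch: rank open security gaps by the business risk of the asset they expose.
# Asset names, weights, and findings are illustrative only.

# Business-impact weight per asset, driven by the sensitivity of the data it holds.
ASSET_RISK = {
    "customer-pii-db": 10,   # Social Security numbers, government IDs
    "payments-db": 9,
    "marketing-analytics": 4,
    "internal-wiki": 2,
}

# Likelihood weight per finding type (how easily the gap can be exploited).
FINDING_LIKELIHOOD = {
    "default-admin-password": 10,
    "waf-misconfiguration": 8,
    "missing-patch": 6,
    "verbose-error-messages": 3,
}

def prioritize(findings):
    """Return findings sorted by risk score = asset impact x exploit likelihood."""
    scored = []
    for asset, finding_type in findings:
        score = ASSET_RISK.get(asset, 1) * FINDING_LIKELIHOOD.get(finding_type, 1)
        scored.append((score, asset, finding_type))
    return sorted(scored, reverse=True)

if __name__ == "__main__":
    findings = [
        ("internal-wiki", "missing-patch"),
        ("customer-pii-db", "waf-misconfiguration"),
        ("marketing-analytics", "default-admin-password"),
    ]
    for score, asset, finding_type in prioritize(findings):
        print(f"risk={score:3d}  fix '{finding_type}' on '{asset}'")
```

However it is implemented, the point is the same: a misconfigured WAF in front of a database full of government identifiers should surface at the top of the queue, not in an undifferentiated list of thousands of findings.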

Lesson 2: Breaches Are Getting More Expensive, Fast – But More Humans And More Security Controls Are Not The Answer

In the past few months, we have seen nearly $1 billion in fines levied for security breaches that exposed consumer data: British Airways ($230 million), Marriott ($124 million) and Equifax ($575 million). All three fines constitute considerable hits to the bottom lines of those businesses. And that's just the beginning. As part of its settlement, for example, Equifax has to set up a huge process to help consumers claim damages, which has spawned even more negative news coverage about the company. How much time and effort Equifax is spending on the breach is anyone's guess, but considering the size of the fines, it's likely that CISOs everywhere are frightened and throwing even more bodies at security remediation. But the reality is that CISOs can't hire their way out of the problem, and buying more security controls is not going to help either.

Lesson 3: Cloud Sprawl Has Massively Expanded The Attack Surface, So You Must Have Tight Governance Of Cloud Resources

A decade ago, few businesses of any size had critical systems and data living in the cloud. Today, you would be hard-pressed to find a company that is not doing this, and many businesses run multiple flavors of cloud. Because cloud resources are ephemeral and often hard to track effectively, this places an additional burden on security teams charged with ensuring that every server (cloud or not) holding any type of company data is accounted for and locked down. It often means incorporating entirely new types of breach testing that account for highly ephemeral and mobile resources; in Kubernetes, for example, the entire container orchestration system is designed to leverage ephemerality to maximize resource use. Further, it's important to realize that default cloud instances are massive risks, and tight governance of cloud resources is perhaps even more important than governance of physical assets, which, if push comes to shove, can be air-gapped in an emergency.
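To give "tight governance" a concrete shape, here is a minimal inventory-check sketch. It assumes an AWS environment with the boto3 SDK and credentials already configured, and a hypothetical policy that every instance must carry an "owner" tag and must not sit in the default security group; your own governance rules will differ:

```python
# Minimal cloud-governance sketch (assumes AWS credentials are configured and boto3 is installed).
# The "owner" tag requirement and default-security-group rule are hypothetical policy examples.
import boto3

def find_ungoverned_instances(region="us-east-1"):
    """Return (instance_id, problems) pairs for instances that violate the example policy."""
    ec2 = boto3.client("ec2", region_name=region)
    ungoverned = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                groups = [g["GroupName"] for g in instance.get("SecurityGroups", [])]
                problems = []
                if "owner" not in tags:
                    problems.append("missing 'owner' tag")
                if "default" in groups:
                    problems.append("attached to default security group")
                if problems:
                    ungoverned.append((instance["InstanceId"], problems))
    return ungoverned

if __name__ == "__main__":
    for instance_id, problems in find_ungoverned_instances():
        print(instance_id, "->", "; ".join(problems))
```

The check itself is trivial; the governance value comes from running something like it continuously, across every region and account, so that an unowned or default-configured instance cannot quietly accumulate sensitive data.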

The Bottom Line: Continuous Testing To Ensure Controls Are Configured Properly Is More Critical Than Ever

All three of the above lessons point to the same thing. The only way to effectively prevent a Capital One-style breach is to adopt more powerful systems and tools for testing and configuration, and to use them aggressively to protect your IT assets.

Here is the basic reality. In the case of Capital One, yes, someone made a terrible mistake by misconfiguring, or failing to properly configure, an Amazon database that held critical information. Amazon said it is not to blame and that its customers should understand they cannot trust the security settings of cloud instances out of the box. Capital One accepted the blame.

But the reality is, the average IT department or infosec team must contend with properly setting dozens or even hundreds of configuration options across 70 to 100 security and control systems. That includes constantly updating those systems to shore up the security gaps that appear naturally as IT and information security configurations “drift”. Mind you, these are only the security and control systems; we're not even talking about properly configuring and securing the core assets those tools protect: the servers, networks and databases.
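In its simplest form, catching that drift means keeping an approved baseline for each control and diffing the live settings against it. The control names and settings below are made up purely for illustration:

```python
# Hypothetical drift check: compare a control's live settings against an approved baseline.
# The control names and settings are illustrative, not tied to any specific product.

BASELINE = {
    "web-application-firewall": {
        "admin_password_changed": True,
        "block_mode": "enforce",
        "tls_min_version": "1.2",
    },
    "cloud-storage": {
        "public_access": False,
        "encryption_at_rest": True,
    },
}

def detect_drift(control, live_settings):
    """Return (setting, expected, actual) tuples for every setting that has drifted."""
    expected = BASELINE.get(control, {})
    drifted = []
    for setting, expected_value in expected.items():
        actual = live_settings.get(setting)
        if actual != expected_value:
            drifted.append((setting, expected_value, actual))
    return drifted

if __name__ == "__main__":
    live = {"admin_password_changed": False, "block_mode": "monitor", "tls_min_version": "1.2"}
    for setting, expected, actual in detect_drift("web-application-firewall", live):
        print(f"DRIFT: {setting} expected {expected!r}, found {actual!r}")
```

Multiply that by 70 to 100 tools, each with its own settings, APIs and release cadence, and it becomes clear why hand-checking configurations does not scale.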

This is not a job humans can expect to master without help from intelligent systems that can automate tests, probe defenses constantly and run those tests around the clock. Equally important, we need systems that can guide information security teams with precise steps to fix the problems that carry the biggest business risk.
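As a toy illustration of that round-the-clock loop, the sketch below runs a set of probes on a schedule and surfaces the highest-risk failures with suggested next steps. The probes are stubs and every name and value is hypothetical:

```python
# Illustrative continuous-testing loop: run lightweight probes on a schedule and
# surface the highest-risk failures with suggested next steps. All probes are stubs.
import time

def probe_waf_default_credentials():
    # In practice this would attempt a login with known default credentials.
    return {"name": "WAF default credentials", "passed": False, "risk": 10,
            "fix": "Rotate the WAF admin password and disable the default account."}

def probe_storage_public_access():
    # In practice this would query the storage service's access policy.
    return {"name": "Storage public access", "passed": True, "risk": 9,
            "fix": "Block public access at the account level."}

PROBES = [probe_waf_default_credentials, probe_storage_public_access]

def run_once():
    results = [probe() for probe in PROBES]
    failures = [r for r in results if not r["passed"]]
    for finding in sorted(failures, key=lambda f: f["risk"], reverse=True):
        print(f"[risk {finding['risk']}] {finding['name']}: {finding['fix']}")

if __name__ == "__main__":
    while True:
        run_once()
        time.sleep(3600)  # re-test every hour, around the clock
```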

It's very easy to bash Capital One for failing to protect key customer data, but Capital One is hardly alone in struggling with the three problems we address above. With systems, networks and device counts exploding, not to mention the number of security and control systems running in a corporate IT footprint, the only way to keep on top of this and prevent more breaches like this one is to automate, prioritize and remediate intelligently. That will allow your security team to make better decisions and focus on what really matters before, not after, the breach.
