by Tim Reilly

You Are Just Another Breach

Many data breach attacks involve arguably the weakest link in the entire security chain: the human. Addressing these attacks by altering the behavior of the target proves to be extraordinarily complex. Humans continue to click on phishing emails, respond to malicious text messages, and fall prey to attackers exploiting the most basic vulnerabilities of human nature. This raises the question: why do these attacks keep happening?

The first step is to understand and embrace the reality: humans are vulnerable, computer systems are not perfect, and corporations choose to focus on profits rather than on the protection of their customer data. These three realities are a recipe for disaster. An attacker who is motivated to get into your system will get into your system. Having accepted these as inevitable, we can shift our focus from mostly futile attempts to prevent an attack to the much more feasible strategies of quick detection, targeted response, and timely remediation.

What was interesting about Uber’s recent breach was that it did not just go after the gold: customer data, trip histories, credit card numbers, and Social Security numbers. It also went after the map to where that gold is held: the vulnerability reports that expose known weak spots and the priorities assigned to fixing each of them. This may indicate a larger, longer-term goal of simplifying further infiltration attempts, or of broadening the pool of malicious actors and aiding their efforts to infiltrate critical systems.

Additionally, systems containing source code were infiltrated. This highlights reasoning and intent that go beyond data theft: armed with knowledge of the proprietary source code, attackers can discover additional vulnerabilities and exploit them in follow-on attacks.

Let’s not concern ourselves with either the human or the corporate aspects of the breach. Let’s not look at whether employees were sufficiently trained in modern attack vectors. Let’s not consider whether corporations put profits and feature delivery ahead of security initiatives. Instead, let’s focus on what can be done programmatically: what systems can be deployed to protect (granularly), detect (faster), respond (in a targeted fashion), and recover (without waiting days for a backup to restore).

Enter Zero Trust. Not the hyped-up buzzword “Zero Trust” that everyone adopted overnight, with every application suddenly listing it as a core feature just for implementing rudimentary two-factor authentication. The real “Zero Trust” that was outlined as a blueprint in a joint paper by the Department of Defense and the National Security Agency. We quickly realize that borrowing just a few core tenets from the Zero Trust blueprint leads to a much more secure and responsive system.

The basic Zero Trust principles of “never trust, always verify” and “assume a hostile environment and presume breach” quickly get non-believers past the overwhelming desire to focus on perimeter protection and credential security, and to rely on the supposedly impenetrable walls of data centers.

The emphasis shifts to constant verification and dynamic adjustment of access policies. The basic enabling capability for both can be summed up as “monitor everything, all the time.”

A side note before we dive in: logging does not equal monitoring. Just because a system logs every event in excruciating detail does not mean that information is useful, let alone actionable. Logs and log analysis are only a small part of a properly implemented monitoring system.

Assuming that all logging and perimeter monitoring systems are already in place, and that your superuser passwords are not written on sticky notes, we focus on the scenario in which these systems have been breached.

We start by monitoring the authorization and authentication systems, looking for out-of-band entity behaviors (entities can be anything from users to services to devices). Examples of such out-of-band behaviors are frequent attempts by a user to switch context, change passwords, or change authentication methods or sources.
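
As a minimal sketch of what this kind of monitoring might look like, the snippet below flags an entity that repeats a sensitive authentication action too often within a sliding window. The event fields, thresholds, and window size are illustrative assumptions; in practice these events would come from an identity provider or SIEM.

```python
# Minimal sketch of out-of-band behavior detection on auth events.
# Event shape, thresholds, and window size are assumptions for illustration.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(hours=1)
THRESHOLDS = {"password_change": 2, "auth_method_change": 1, "context_switch": 5}

recent = defaultdict(deque)  # (entity, action) -> timestamps within the window

def observe(entity: str, action: str, ts: datetime) -> bool:
    """Record an auth event; return True if the entity should be escalated."""
    q = recent[(entity, action)]
    q.append(ts)
    while q and ts - q[0] > WINDOW:   # drop events outside the sliding window
        q.popleft()
    return len(q) > THRESHOLDS.get(action, float("inf"))

# Example: a user changing authentication methods twice within the hour is escalated.
now = datetime.utcnow()
observe("alice", "auth_method_change", now)
print(observe("alice", "auth_method_change", now + timedelta(minutes=10)))  # True
```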

Further downstream, we monitor data protection systems, which we would like to believe are deployed. We also make sure to eliminate reliance on legacy data protection systems that lack appropriate granularity and context about the data being protected. A prime example of such oversight is using partition-level encryption, storage-level encryption, or cloud volume encryption to protect data in containerized or microservices environments. A granular data protection system allows for monitoring of access to protection keys and detecting out-of-policy access to these keys. For example, a database container that connects to three persistent volumes during normal business hours can be considered normal behavior, while the same database container attempting to connect to ten different persistent volumes over the weekend is something that needs to be immediately escalated for further analysis and potential threat response.
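
The volume example above can be expressed as a simple policy check. The sketch below is illustrative only: the policy values, workload name, and event shape are assumptions, and a real system would learn the baseline from observed key and volume requests rather than hard-code it.

```python
# Minimal sketch of a per-workload volume-access policy check.
# The policy values and workload name are hypothetical.
from datetime import datetime

POLICY = {
    "db-container": {"max_volumes": 3, "allowed_hours": range(8, 18), "weekdays_only": True},
}

def out_of_policy(workload: str, volumes_requested: int, ts: datetime) -> bool:
    p = POLICY.get(workload)
    if p is None:
        return True  # unknown workload: never trust, always verify
    if volumes_requested > p["max_volumes"]:
        return True
    if p["weekdays_only"] and ts.weekday() >= 5:
        return True
    return ts.hour not in p["allowed_hours"]

# Three volumes on a Tuesday morning: normal. Ten volumes on a Saturday night: escalate.
print(out_of_policy("db-container", 3, datetime(2023, 3, 7, 10)))   # False
print(out_of_policy("db-container", 10, datetime(2023, 3, 11, 2)))  # True
```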

More information can be inferred by monitoring and analyzing access to the data itself. Access patterns can be established, and out-of-policy violations should be escalated and evaluated. These include a process opening too many files, reading large amounts of data, attempting to send that data over the network, or simply processing it in bulk locally. Profiling such access patterns can help detect ransomware and ensure it does not run unseen in your environment for months, reducing your backups to worthless bitstreams.
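
A minimal sketch of such access-pattern profiling follows. The per-process baselines for files touched and bytes read per window are hypothetical; real telemetry would come from file-audit or EDR tooling, and the baselines would be learned rather than fixed.

```python
# Minimal sketch of access-pattern profiling for ransomware-like bulk activity.
# Baselines and event format are assumptions for illustration.
from collections import defaultdict

BASELINE = {"files_per_window": 50, "bytes_per_window": 200 * 1024 * 1024}

class AccessProfiler:
    def __init__(self):
        self.files = defaultdict(set)    # process -> files touched this window
        self.bytes = defaultdict(int)    # process -> bytes read this window

    def record(self, process: str, path: str, bytes_read: int) -> bool:
        """Return True if the process exceeds its per-window baseline."""
        self.files[process].add(path)
        self.bytes[process] += bytes_read
        return (len(self.files[process]) > BASELINE["files_per_window"]
                or self.bytes[process] > BASELINE["bytes_per_window"])

    def reset(self):
        """Call at the start of each measurement window."""
        self.files.clear()
        self.bytes.clear()

profiler = AccessProfiler()
suspicious = any(profiler.record("backup-job", f"/data/file{i}", 4096) for i in range(60))
print(suspicious)  # True: 60 distinct files in one window exceeds the baseline
```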

The more monitoring and detection mechanisms are put in place, the better. There is an understanding that a breach cannot be prevented. Armed with that realization, and with proper monitoring and analysis systems in place, a backup and recovery system can be warned to stand by with a clean backup copy of the compromised dataset, so that when the decision to restore is made, the application is restored quickly, reducing the impact to the business. And remember that granular data protection system you put in place when deploying your legacy applications in microservices environments? It really comes in handy here, because it allows you to decommission only a handful of compromised volumes without having to destroy and recover the entire environment.
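
One way to wire detection to this kind of targeted recovery might look like the sketch below. The stage_clean_copy and restore_volume helpers are hypothetical placeholders for whatever backup and storage APIs are actually in use; the point is staging the clean copy at detection time and restoring only the compromised volumes.

```python
# Minimal sketch of wiring breach detection to targeted, volume-level recovery.
# stage_clean_copy() and restore_volume() are hypothetical placeholder APIs.
from datetime import datetime, timedelta

def stage_clean_copy(dataset: str, before: datetime) -> str:
    """Ask the backup system to stand by the latest copy taken before the breach."""
    print(f"staging clean copy of {dataset} from before {before.isoformat()}")
    return f"{dataset}-snapshot"

def restore_volume(volume: str, snapshot: str) -> None:
    print(f"restoring {volume} from {snapshot}")

def on_breach_detected(dataset: str, compromised_volumes: list[str], detected_at: datetime) -> None:
    # Stage recovery immediately so the restore decision, once made, executes fast.
    snapshot = stage_clean_copy(dataset, before=detected_at - timedelta(hours=1))
    # Decommission and restore only the handful of compromised volumes,
    # not the entire environment.
    for vol in compromised_volumes:
        restore_volume(vol, snapshot)

on_breach_detected("orders-db", ["pv-7", "pv-9"], datetime.utcnow())
```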

Looking forward, all of this monitored data, aggregated, analyzed, and packed with actionable context, can be relayed to Zero Trust policy engines that adjust access policies on the fly. The goal is to close the loop so that healing and learning from a breach come full circle. With each breach, the system becomes smarter and more capable of self-healing.
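
As a rough illustration of that feedback loop, the sketch below turns analyzed findings into a risk score and a policy adjustment. The scoring weights and the actions taken are assumptions for illustration, not any particular policy engine's API.

```python
# Minimal sketch of feeding analyzed findings back into dynamic access policy.
# Weights, thresholds, and actions are illustrative assumptions.
RISK_WEIGHT = {"auth_anomaly": 30, "key_access_violation": 40, "bulk_read": 30}

def risk_score(findings: list[str]) -> int:
    return min(100, sum(RISK_WEIGHT.get(f, 10) for f in findings))

def adjust_policy(entity: str, findings: list[str]) -> str:
    score = risk_score(findings)
    if score >= 70:
        return f"{entity}: revoke sessions, quarantine workload"
    if score >= 40:
        return f"{entity}: step-up authentication, read-only access"
    return f"{entity}: continue monitoring"

print(adjust_policy("db-container", ["key_access_violation", "bulk_read"]))
```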

Some of this may be just a set of goals for your organization. More likely, though, you are going through a compliance initiative, recovering from a data breach or ransomware attack, or re-architecting a legacy system to take full advantage of microservices environments. In any of these scenarios, you are in a perfect position to implement some, or all, of the methods discussed in this brief.

The good news is that the technologies required to implement these methods exist today. And while nothing will protect you from a malicious individual or group that sets out to break into your environment, these technologies will allow you to contain the intrusion, detect it sooner, and recover from it more quickly. This amounts to increased business resilience. In practical terms, it can reduce the cost of a breach by about 30 percent; with the average cost of a breach around $4 million, that is roughly $1.2 million saved, which is not a small number. As an added bonus, you will avoid bad publicity and will not become another line item on the ever-growing list of enterprises that believed all vulnerabilities could be remediated by securing the perimeter and credentials or putting higher walls around data centers.