Over the past few years, the concept of “zero trust” architecture has gone through a number of evolutionary phases. It has gone from being the hot new fad, to being trite (in large part due to a deluge of marketing from those looking to cash in on the trend), to passé, and has now ultimately settled into what it probably should have been all along: a solid, workmanlike security option with discrete, observable advantages and disadvantages that can be folded into an organization’s security approach.
Zero trust, as the name implies, is a security model in which all assets, even managed endpoints that you provision and on-premises networks that you configure, are considered hostile, untrustworthy and potentially already compromised by attackers. Where legacy security models differentiate a “trusted” internal network from an untrusted external one, zero trust instead assumes that all networks and hosts are equally untrustworthy.
Once you make this fundamental shift in assumptions, you start to make different decisions about what, whom, and when to trust, and about which validation methods are acceptable to confirm that a request or transaction should be allowed.
As a security mindset, this has advantages and disadvantages.
One advantage is that it lets you strategically apply security resources where you need them most. It also increases resistance to lateral movement, since an attacker who establishes a beachhead must compromise each resource anew.
There are disadvantages too. Policy enforcement is required on every system and application, and older legacy components built with different security assumptions (e.g., that the internal network is trustworthy) may not fit in well.
One of the most potentially problematic downsides has to do with validation of the security posture, i.e., situations where the security model must be reviewed by older, more legacy-focused organizations. The dynamic is unfortunate: the organizations most likely to find the model compelling are the very ones that, in adopting it, set themselves up for vetting challenges.
Validation and Minimizing Exposure
To understand this dynamic, it’s useful to consider the next logical step once zero trust has been embraced. Specifically, if you assume that all endpoints are potentially compromised and all networks are likely hostile, a natural consequence is to minimize where sensitive data can go.
You might, for example, decide that certain environments aren’t sufficiently protected to store, process, or transmit sensitive data other than through very narrowly defined channels, such as authenticated HTTPS access to a web application.
Where heavy use is made of cloud services, it is quite logical to decide that sensitive data may only be stored in the cloud, subject of course to appropriate access controls, since those services are built explicitly for this purpose and have security measures and operational staff that you can’t afford to deploy or maintain just for your own use.
As an example, consider a hypothetical younger organization in the mid-market. By “younger,” we mean that only a few years have passed since the organization was established. Say this organization is “cloud native,” that is, 100% externalized for all business applications and architected entirely around the use of the cloud.
For an organization like this, zero trust is compelling. Since it is 100% externalized, it has no datacenters or internal servers and maintains only the most minimal on-premises technology footprint. This organization might explicitly require that no sensitive data “live” on endpoints or inside its office network; instead, all such data must reside in the subset of known, defined cloud services explicitly approved for that purpose.
Doing this means the entity can focus all of its resources on hardening cloud infrastructure, gate services so that all access (regardless of source) is protected in a robust way, and deprioritize things like physical security, hardening the internal network (assuming there even is one), deploying internal monitoring controls, etc. Assuming a diligent, workmanlike process is followed to secure the cloud components, such an approach can help focus limited resources.
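To make the “gate all access, regardless of source” idea concrete, here is a minimal sketch, assuming a Python/Flask web service, of what it looks like to authenticate every request with no notion of a trusted internal network. The token check and the VALID_TOKENS set are hypothetical placeholders for whatever identity provider or identity-aware proxy you actually use; this illustrates the shape of the control, not a production implementation.

```python
# Minimal sketch: every request must present a credential; network origin is ignored.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Hypothetical placeholder credential store. In practice you would validate a
# signed token (e.g., a JWT) against your identity provider, not compare strings.
VALID_TOKENS = {"example-token-for-illustration-only"}


@app.before_request
def authenticate_every_request():
    # Deliberately no check of request.remote_addr and no allowlist of
    # "internal" IP ranges: a request from the office LAN gets the same
    # scrutiny as one from the public internet.
    auth_header = request.headers.get("Authorization", "")
    token = auth_header.removeprefix("Bearer ").strip()
    if token not in VALID_TOKENS:
        abort(401)


@app.route("/customer-data")
def customer_data():
    # Only reachable once the gate above has passed.
    return jsonify({"status": "ok"})


if __name__ == "__main__":
    app.run()
```

The important design choice is what’s absent: there is no IP allowlist and no special treatment for traffic that happens to originate inside the office.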
However, the example organization above doesn’t operate in a vacuum; no organization does. It works with customers, sales prospects, business partners, and numerous others. Since the organization is a smaller one, many of its customers might be larger organizations, potentially with stringent requirements for securing external service providers and validating their security, perhaps even a regulatory obligation to do so depending on their industry. Some of these customers might be fully externalized, but the majority won’t be; they’ll have legacy applications, unique constraints, specialized requirements, and other business reasons why they can’t support a fully external model.
What results is often a perfectly understandable, but nevertheless counterproductive, discussion at cross purposes between the organization doing the assessment (the potential customer) and the one being assessed (the service provider). A service provider might very reasonably argue that physical security controls, to pick just one example, are out of scope for the purposes of the assessment, on the basis that the only physical security controls that matter are the ones at the cloud providers it employs since, after all, that is the only place where data is allowed to reside.
The customer, on the other hand, might also reasonably worry about aspects of physical security that do relate to the service provider’s environment, such as visitor access to facilities where customer data might be viewed on screen even if it isn’t stored there. They might envision a scenario where an unauthorized visitor to the office “shoulder surfs” data as it’s being entered by a legitimate user.
A conversation like this, even when it doesn’t become contentious, is suboptimal for both parties. For the service provider, it slows down the sales process and saps time from engineers who would otherwise be focused on product development. For the potential customer, it raises concerns about unaccounted-for risk while generating ill will among internal business partners who are anxious to onboard the service and would like to see vetting happen quickly.
Principal Strategies
So the question becomes: How do we effectively communicate a zero-trust model if we wish to employ one in this way? And if we’re validating such an approach, how do we get the right questions answered so that we can reach a determination quickly and (ideally) enable business use of the service? It turns out there are a few approaches we can leverage. None of them is rocket science, but they do require empathy, and some legwork, to pull off.
From a service provider’s point of view, there are three useful principles to keep in mind: 1) be forthcoming, 2) demonstrate validation of your assumptions, and 3) back up your assertions with documentation.
By “forthcoming,” we mean a willingness to share information beyond the specific set of items a customer might ask for. If you provide a cloud SaaS offering as in the example above, this lets you “genericize” information, even to the point of leveraging standard deliverables. For example, you might consider participating in the CSA STAR registry, or preparing standard information-gathering artifacts like the CSA CAIQ, the Shared Assessments Standardized Information Gathering (SIG) questionnaire, or (in the healthcare space) the HITRUST Third Party Assessment Program.
The second principle, demonstrating validation, means showing that you’ve verified the assumptions that went into your security model. In the example above, that means backing up the claim of “no data stored internally” with evidence. An assessor from a customer is much more likely to believe the statement if a control like data loss prevention (DLP) is in place to confirm it.
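As a loose illustration of what that kind of evidence might look like, below is a minimal sketch in Python of an endpoint spot check that scans for sensitive-looking data where policy says none should exist. The patterns, file extensions, and scan path are hypothetical assumptions, and a commercial DLP product is the more realistic control; the point is the flavor of proof an assessor finds persuasive: not “we told people not to store data locally,” but “we check, and here is the output.”

```python
# Rough sketch of an endpoint spot check backing up a "no sensitive data on
# endpoints" claim. Real DLP tooling does far more (content inspection across
# formats, egress monitoring, etc.); these rules are illustrative only.
import re
from pathlib import Path

# Hypothetical example patterns: US SSN-like and 16-digit card-like strings.
SENSITIVE_PATTERNS = {
    "ssn-like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card-like": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

SCAN_EXTENSIONS = {".txt", ".csv", ".log", ".md"}


def scan_endpoint(root: str) -> list[tuple[str, str]]:
    """Walk a directory tree and report files containing sensitive-looking data."""
    findings = []
    for path in Path(root).rglob("*"):
        if not (path.is_file() and path.suffix.lower() in SCAN_EXTENSIONS):
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than fail the whole scan
        for label, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), label))
    return findings


if __name__ == "__main__":
    for file_path, label in scan_endpoint("/home"):  # scan root is an assumption
        print(f"policy violation ({label}): {file_path}")
```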
The last principle, documentation, means documenting the model you espouse: for example, an architecture document that describes your approach, why you employ it, the risk analysis you performed beforehand, and the controls in place to validate it. Back it up with a defined policy that sets forth security principles and expectations.
From the assessor’s side, there’s really only one principle: embrace flexibility where you can. If a service provider meets the intent of the controls you expect, at the same level of rigor, but in a different way than you anticipated, it helps to give them options other than requiring them to purchase and install controls they don’t need.
None of this advice is rocket science, of course. But just because it’s obvious doesn’t mean everyone does it. By doing some legwork ahead of time and looking at the problem through an empathetic lens, you can streamline the assessment process in a situation like this.