
Microservice Security in Action

Microservices Security landscape

In the era of monolithic applications, security was a much simpler concern than it is today, due to the smaller exposed attack surface.

With a microservice application we have more potential points of attack: multiple endpoints across multiple services, and each endpoint might be used in several separate business flows.

One solution to the problem is to centralize security handling in a single microservice that every other microservice calls whenever it needs to resolve a security concern. Understandably, that itself introduces latency into request handling.

As a reaction, teams quite often follow an approach where they split the network into private and public segments and trust all requests coming from the private network. This is a huge antipattern.

A better approach is to build a zero-trust network where each service works as a security enforcer. Of course, in reality, quite a big part of security enforcement is done not by the service itself but by a proxy deployed as a sidecar alongside the service.

Microservice architecture benefits from having immutable services. However, immutability can work against security here: security demands an easy way to rotate certificates, secrets, and passwords.

In a good security setup we should care about nonrepudiation – the ability to attribute any past transaction to its actor in a way the actor cannot deny. To achieve this, we need to store an audit trace of each transaction with a timestamp and the actor's signature.
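
As a minimal sketch of such an audit trace, the record below carries a timestamp and a signature over its contents. Note the hedge: an HMAC with a shared key (used here so the example runs with the standard library alone) proves integrity and origin only to holders of that key; true nonrepudiation needs an asymmetric signature (e.g. Ed25519), because a shared secret lets the verifier forge entries. The key and field names are illustrative assumptions, not from the book.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key; in practice it would come from a secrets
# manager and be rotated regularly.
ACTOR_KEY = b"example-shared-secret"

def audit_record(actor: str, action: str) -> dict:
    """Build an audit entry with a timestamp and a signature, so the
    transaction can be attributed to the actor later."""
    entry = {
        "actor": actor,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["signature"] = hmac.new(ACTOR_KEY, payload, hashlib.sha256).hexdigest()
    return entry

def verify_record(entry: dict) -> bool:
    """Recompute the signature to confirm the entry was not tampered with."""
    unsigned = {k: v for k, v in entry.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(ACTOR_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, entry["signature"])
```

Any later modification of the stored entry invalidates the signature, which is exactly what makes the trace usable as evidence.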

Securing data-in-transit means handling TLS at the proxy. Typically, proxies can work in two modes: TLS bridging and TLS tunneling.

TLS bridging – The proxy terminates the TLS session, decrypts the data, and then re-encrypts it when passing the data to the service. This method is more dangerous, as anyone who gets access to the proxy can control the data passing through it. However, it also allows the proxy to work as a more powerful security enforcer. When picking this option, the proxy's own security must be treated with extreme care.

TLS tunneling – The proxy does not terminate the TLS session, so it is not able to decrypt the data passing through it. This is a safer and more secure setup for a proxy, but it means the proxy itself has only a limited ability to enforce authorisation based on the content of a request.
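
The difference between the two modes can be sketched with Python's standard `ssl` module. This is an illustration, not a production proxy: in bridging mode the proxy needs *two* TLS contexts (it is a TLS server toward the client and a TLS client toward the backend, seeing plaintext in between), while in tunneling mode it merely relays encrypted bytes it cannot read. The file-path parameters are hypothetical.

```python
import ssl

def bridging_contexts(cert_file: str, key_file: str):
    """TLS bridging: the proxy terminates the client's TLS session with
    its own certificate, then opens a *new* TLS session to the backend,
    so plaintext is visible inside the proxy."""
    # Client-facing side: proxy acts as a TLS server.
    server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server_ctx.load_cert_chain(cert_file, key_file)
    # Service-facing side: proxy acts as a TLS client.
    upstream_ctx = ssl.create_default_context()
    return server_ctx, upstream_ctx

def tunnel_chunk(client_bytes: bytes) -> bytes:
    """TLS tunneling: the proxy never terminates TLS; it relays the
    encrypted bytes verbatim and cannot inspect request content."""
    return client_bytes  # forwarded unchanged to the backend
```

The inability of `tunnel_chunk` to do anything except forward the bytes is precisely why a tunneling proxy cannot authorize based on request content.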

Securing data-at-rest means encrypting stored data. There are multiple variants that can be used:

  • Encryption on disk done by the OS – should be enabled by default;
  • Encryption done by the DB – also needed in pretty much every case;
  • Encryption at the application level – useful for securing sensitive data (secrets, tokens). The downside is that encryption/decryption can be quite resource-intensive, so this use of encryption should be aligned with business needs.

Availability of the system is one of the goals that security sets out to achieve. We need to protect the system from DoS and DDoS attacks and make sure a bug in one service does not cause cascading failures in other services.
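
One common availability protection (an illustration, not something prescribed by the book) is per-client rate limiting with a token bucket: each request spends a token, tokens refill at a fixed rate, and anything beyond the budget is rejected, which caps the damage a flood of requests can do.

```python
import time

class TokenBucket:
    """Simple rate limiter: each request consumes one token; tokens
    refill at a fixed rate up to a burst capacity."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        """Return True if the request fits in the budget, False otherwise."""
        now = time.monotonic()
        elapsed = now - self.updated
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Rate limiting addresses the flood side; the cascading-failure side is typically handled by a separate mechanism such as a circuit breaker that stops calling an unhealthy downstream service.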

One of the most popular patterns for securing microservices is an API gateway. This is the service that accepts incoming requests from users, performs authentication and potentially applies other security policies, and forwards the request to the target service inside the private network.

There are several typical ways API gateways may perform the authentication:

  • Certificate-based authentication.
  • OAuth 2.0-based.

Service-to-service communication can be authenticated with two main approaches:

  • Trust the network, where requests inside the private network are trusted by default. The opposite approach is called zero-trust networking, where it is assumed that any request may come from a hostile actor and thus must be verified.
  • mTLS – Mutual TLS is currently the de-facto standard for service-to-service communication. TLS protects the data in transit, but it does not solve the problem of identifying the author of the request. mTLS solves this issue.
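
In Python's standard `ssl` module, the difference between plain TLS and mTLS comes down to the server also demanding a client certificate signed by a CA it trusts. A hedged sketch (the file paths are placeholders for a real certificate setup, e.g. one provisioned by a service mesh):

```python
import ssl

def require_client_cert(ctx: ssl.SSLContext) -> ssl.SSLContext:
    """The key mTLS switch: demand and verify a client certificate."""
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx

def mtls_server_context(ca_file: str, cert_file: str, key_file: str) -> ssl.SSLContext:
    """Server side of mTLS: present our own certificate AND require the
    client to present one signed by our trusted CA."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(cert_file, key_file)   # our identity
    ctx.load_verify_locations(ca_file)         # CA we trust for client certs
    return require_client_cert(ctx)

def mtls_client_context(ca_file: str, cert_file: str, key_file: str) -> ssl.SSLContext:
    """Client side: verify the server as usual, and also present our own
    certificate so the server can authenticate us."""
    ctx = ssl.create_default_context(cafile=ca_file)
    ctx.load_cert_chain(cert_file, key_file)
    return ctx
```

Without `CERT_REQUIRED` the handshake degrades to ordinary one-way TLS: the data is still encrypted, but the server learns nothing about who is calling it.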

Service-to-service communication can be authorized with these main approaches:

  • Authorization at the gateway (coarse-grained)
  • Authorization at the service (fine-grained)

Authorization at the service can be decentralized or centralized, and there are two ways to do it centrally. The first is to have a single Policy Decision Point (PDP) service, which other services call to get a judgement on a request. This setup introduces an additional network hop to the PDP for each request, which affects latency. Caching may help alleviate this, but the network hop still remains.

The other way to centralize the judgement is to embed the PDP inside the service as a library, although this raises the question of how to update policies. In more advanced setups this is solved either by polling the policy master data or via a topic that distributes policy updates. Many teams, however, prefer the simpler setup where policies are fetched only once, at service startup.
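
A minimal sketch of the embedded-PDP approach, assuming a deliberately simple policy model (role → set of allowed "service:action" permissions, which is an illustrative assumption): decisions are made in-process with no network hop, and a `refresh` hook swaps in a new policy snapshot, whether it arrives via polling or a distribution topic.

```python
class PolicyDecisionPoint:
    """PDP embedded as a library: authorization decisions happen
    in-process; policies are replaced wholesale on refresh."""

    def __init__(self, policies: dict):
        # role -> set of allowed "service:action" permissions
        self._policies = policies

    def refresh(self, policies: dict) -> None:
        """Swap in a new policy snapshot (e.g. on a poll or topic event)."""
        self._policies = policies

    def is_allowed(self, role: str, permission: str) -> bool:
        """Judge a request: does this role hold this permission?"""
        return permission in self._policies.get(role, set())
```

Replacing the whole snapshot atomically keeps each decision consistent with exactly one policy version, which is simpler to reason about than patching individual rules in place.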

In big organisations it is common to have multiple trust domains, where each trust domain has its own security provider (and probably a separate cluster).

Microservice architecture requires multiple observability tools to be in place (a.k.a. the pillars of observability):

  • logs
  • metrics
  • traces