Microservices work well for cloud-native application development, overcoming many of the issues associated with traditional monolithic architecture. However, they also pose their own challenges in terms of scalability, operational complexity, networking, data consistency, and security.

Containers and container orchestration systems such as Kubernetes help overcome many of these problems by packaging services with their own runtimes, executing containers, and mapping them to machines. However, they leave a gap in the operational logic: managing service-to-service communication.

With microservices, app components don’t communicate through fast, secure in-process function calls. Instead, they talk to each other over an inherently unreliable network that provides little security on its own. That makes visibility into network communication critical for identifying and addressing potential issues. A service mesh can provide exactly that.

The Why of a Service Mesh

In a microservices-based architecture, each part of an app is called a “service” and performs a specific business function. To execute its function, one service might need to request data from other services. If any service gets overloaded with requests, performance and the user experience can suffer.

A service mesh extracts the complicated logic governing how services communicate in the cloud into a separate infrastructure layer. This dedicated layer records how well the different parts of an app interact, making it easier to optimize communication and avoid downtime as the app grows.

Without a service mesh, that logic has to be coded into each individual service. A service mesh pulls the logic governing service-to-service communication out of the individual services and abstracts it into an infrastructure layer, without introducing new functionality into the app’s runtime environment.
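To make the difference concrete, here’s a minimal sketch, in Go, of the retry-and-timeout plumbing each service would otherwise carry itself. The “orders.internal” URL, timeout, and retry budget are illustrative, not a prescribed configuration; a mesh supplies equivalent behavior from its infrastructure layer instead of from application code:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

// doOnce issues a single GET with a hard 2-second timeout.
func doOnce(ctx context.Context, url string) ([]byte, error) {
	reqCtx, cancel := context.WithTimeout(ctx, 2*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(reqCtx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode >= 500 {
		io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
		return nil, fmt.Errorf("upstream returned %d", resp.StatusCode)
	}
	return io.ReadAll(resp.Body)
}

// callWithRetries retries transient failures with a simple linear backoff,
// the kind of logic a mesh applies so each service doesn't have to.
func callWithRetries(ctx context.Context, url string, attempts int) ([]byte, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		body, err := doOnce(ctx, url)
		if err == nil {
			return body, nil
		}
		lastErr = err
		time.Sleep(time.Duration(i+1) * 100 * time.Millisecond)
	}
	return nil, lastErr
}

func main() {
	// "orders.internal" is a hypothetical downstream service.
	body, err := callWithRetries(context.Background(), "http://orders.internal/api/orders", 3)
	if err != nil {
		fmt.Println("request failed after retries:", err)
		return
	}
	fmt.Printf("got %d bytes\n", len(body))
}
```

Multiply that boilerplate by every service, and every language, in the app, and the value of centralizing it becomes clear.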

How Does a Service Mesh Work?

In containerized environments like Kubernetes, which are commonly used in cloud-native application development, a service mesh works by deploying a proxy alongside every instance of a service. The proxy can handle everything from encrypting traffic to managing latency.

Requests are routed between microservices through these proxies, which form their own infrastructure layer. Because each proxy runs alongside its service rather than inside it, this pattern is known as a “sidecar.”
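The snippet below sketches the sidecar idea in Go: a small reverse proxy that accepts inbound traffic on one port and forwards it to the application listening locally. The ports are illustrative, and a real mesh uses a full-featured proxy such as Envoy, configured dynamically by a control plane:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The application itself listens on localhost:8080 (illustrative).
	app, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(app)

	// All inbound traffic enters through the sidecar, where policy
	// (auth, routing, telemetry) can be applied before forwarding.
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		log.Printf("inbound %s %s", r.Method, r.URL.Path) // telemetry hook
		proxy.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":15001", mux))
}
```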

As traffic passes between services, each proxy applies clear, locally enforced rules for routing, throttling, and security. This rapid, real-time direction and control optimizes the application’s responsiveness.
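As an illustration of one such locally applied rule, here’s a sketch in Go of a weighted routing decision, the mechanism behind canary releases and traffic splits. The “reviews-v1”/“reviews-v2” upstream names and the 90/10 split are hypothetical; in a real mesh, these weights arrive from the control plane and can change without redeploying any service:

```go
package main

import (
	"fmt"
	"math/rand"
)

type route struct {
	upstream string
	weight   int // relative share of traffic
}

// pickUpstream selects a destination in proportion to the rule weights.
func pickUpstream(routes []route) string {
	total := 0
	for _, r := range routes {
		total += r.weight
	}
	n := rand.Intn(total)
	for _, r := range routes {
		if n < r.weight {
			return r.upstream
		}
		n -= r.weight
	}
	return routes[len(routes)-1].upstream
}

func main() {
	rules := []route{
		{upstream: "reviews-v1", weight: 90},
		{upstream: "reviews-v2", weight: 10}, // canary
	}
	counts := map[string]int{}
	for i := 0; i < 10000; i++ {
		counts[pickUpstream(rules)]++
	}
	fmt.Println(counts) // roughly a 90/10 split
}
```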

Service Mesh Benefits

While the benefits a service mesh offers vary from one deployment to the next, the following are some of the most common:

  • A certificate authority that generates a per-service certificate for TLS communication between services
  • Delay and fault injection, allowing teams to simulate how the app would behave in a real failure
  • Granular traffic control to determine where a request is routed
  • Observability, which eliminates the need for separate tools to track logging, tracing, metrics, and security controls
  • Resiliency features, including circuit breaking (see the sketch after this list), latency-aware load balancing, eventually consistent service discovery, retries, timeouts, and deadlines
  • More reliable, efficient service requests, as the data the mesh makes visible can inform the rules for service-to-service communication
  • Less communication code to write, freeing up developers’ time and increasing their productivity
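To show what one of those resiliency features looks like in practice, here’s a minimal circuit-breaker sketch in Go. The threshold and cooldown values are illustrative; a mesh proxy keeps equivalent state per upstream so that application code never has to:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

var ErrOpen = errors.New("circuit open: failing fast")

type Breaker struct {
	mu        sync.Mutex
	failures  int
	threshold int           // consecutive failures before opening
	cooldown  time.Duration // how long to stay open
	openedAt  time.Time
	open      bool
}

// Call runs fn unless the circuit is open, tracking consecutive failures.
func (b *Breaker) Call(fn func() error) error {
	b.mu.Lock()
	if b.open {
		if time.Since(b.openedAt) < b.cooldown {
			b.mu.Unlock()
			return ErrOpen // fail fast instead of hammering a sick upstream
		}
		b.open = false // half-open: let one trial request through
	}
	b.mu.Unlock()

	err := fn()

	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.failures++
		if b.failures >= b.threshold {
			b.open = true
			b.openedAt = time.Now()
		}
		return err
	}
	b.failures = 0 // a success resets the count
	return nil
}

func main() {
	b := &Breaker{threshold: 3, cooldown: 2 * time.Second}
	flaky := func() error { return errors.New("upstream timeout") }

	// After three consecutive failures, subsequent calls fail fast.
	for i := 0; i < 5; i++ {
		fmt.Println(b.Call(flaky))
	}
}
```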

Service Mesh Challenges

Despite that long list of benefits, a service mesh also brings challenges that must be considered. As stated upfront in this post, these include scalability, operational complexity, networking, data consistency, and security.

Operational issues pose a particular challenge. A service mesh adds one more level of abstraction and complexity: it isn’t easy to understand how it works, how it should be set up, how to debug connectivity issues, or how to upgrade clusters such as Amazon EKS correctly.

A service mesh also adds overhead: every request takes an extra hop through a proxy, and the proxies themselves consume resources. Be aware that this solution carries real operational costs.

Getting Started

There are various ways to get started with a service mesh when developing cloud-native apps built on a microservices architecture, including numerous open-source projects such as Istio and Linkerd. Many cloud service providers also offer service mesh capabilities within their Kubernetes platforms.

One to consider, particularly if AWS is your cloud provider of choice, is AWS App Mesh. In addition to the standard benefits of a service mesh, App Mesh uses Envoy, an open-source proxy. This enables exporting observability data to multiple AWS and third-party tools, including Amazon CloudWatch, AWS X-Ray, and any third-party monitoring or tracing tool that integrates with Envoy.

AWS App Mesh can be used with AWS Fargate, Amazon EC2, Amazon ECS, Amazon EKS, and Kubernetes running on AWS, making it easier to run an app at scale. It also integrates with AWS Outposts for apps running on-premises.

If your team isn’t ready to dive into service mesh use or is unsure of how to maximize it in a cloud application development project, ClearScale can help.


Why ClearScale

ClearScale has extensive experience in cloud-native app development. That includes using microservices, containers, and container orchestration systems like Kubernetes.

We customize our approach to application development to ensure we deliver the optimal project – and results – for our clients. You’ll find some good examples in our case studies here.

Get in touch today to speak with a cloud expert and discuss how we can help:

  • Call us at 1-800-591-0442
  • Send us an email at sales@clearscale.com
  • Fill out a Contact Form
  • Read our Customer Case Studies