Distinguishing Microservices: It’s in the Behavior

This post originally appeared on DZone

Microservices is more than just an academic topic. It was born out of the challenges of running distributed applications at scale, enabled by recent advancements in cloud-native technologies. What started as a hot topic among developers, operators, and architects alike is now catching on within the enterprise because of what the shift in culture promises: the ability to deliver software quickly, effectively, and continuously. In today’s fast-paced and ever-changing landscape, that is more than just desirable; it’s required to stay competitive.

Culture shifts alone are not enough to make a real impact, so organizations embarking on this path must also examine what it actually means for the inner workings of their processes and systems. Dealing with immutable infrastructure and composable services at scale means investing in operational changes. While containers and their surrounding tools provide the building blocks through an independent, portable, and consistent workflow and runtime, there’s more to it than simply “build, ship, run.”

Making the Distinction

Where we as a community are pretty much in agreement is on the characteristics that make up a microservice – a loosely coupled, stateless service that is independently developed and deployed, while remaining individually scalable, upgradeable, and replaceable. It’s best practice to apply the Unix philosophy of doing one thing and doing it well, and of course… containerize all the things. The path to microservices starts by breaking an application down into components that can (and should) be decoupled based on these characteristics.

What’s often missing from the conversation, however, is their behavior in a real-world production environment. Each independent microservice takes on a life of its own once shipped, introducing a whole new set of operational complexities. Despite any attempts to generalize microservices into a single behavioral pattern, there is no “one size fits all” way to deal with so many moving parts. With that in mind, what I’d like to accomplish here is a basic framework for distinguishing between two flavors of microservices: real-time requests (“app-centric”) and background processes (“job-centric”).

The key differentiator lies in the call: synchronous or asynchronous. Beyond being simply a communication method, that single-letter diff makes a world of difference in how the workload behaves once it’s been shipped. The primary question to ask when examining your own workloads is simply: “Do I need an immediate response?” If so, follow the app-centric patterns. If not, follow the job-centric patterns. Once this baseline is established, there are a number of contrasting methodologies for managing each microservice throughout its entire lifecycle.

Build and Deploy

We build our microservices for their intended production runtime, commonly through a CI/CD pipeline to ensure consistency. A container image is the portable unit that gets deployed, but when creating the environment via a Dockerfile, there is a key distinction between one that is ready to serve requests and one that is ready to execute a job.

App-centric microservices are pushed to a staged runtime, at which point they are ready to accept requests from a client. Setting up the environment means pulling the image layers, importing any external dependencies, running the process, and exposing a port to accept incoming requests. This is similar to deploying an application via a buildpack, but Docker gives us more granular flexibility and control.
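
To make that concrete, here’s a minimal sketch in Go (the endpoint and port are illustrative): an app-centric service is just a long-running process that binds a port and waits for clients.

```go
// A minimal app-centric service: a long-running process that
// exposes a port and stays up to serve incoming requests.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok") // respond to the caller immediately
	})

	// The container lives for as long as this process runs.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```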

Job-centric microservices are uploaded to an image repository, at which point they are ready to execute from an event trigger. Because the container is spun up on demand, it’s best practice to keep startup time to a minimum by using the smallest base layer possible and consolidating operations inline where applicable. Depending on your language, it’s also recommended to vendor the dependencies so there’s no additional startup cost when the job is invoked.
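
By contrast, here’s a rough sketch of a job-centric process (the payload shape and stdin delivery are assumptions; platforms differ): it reads its input, executes once, and exits so the container can be torn down.

```go
// A minimal job-centric process: read the payload, do the work
// once, and exit so the container can be reclaimed.
package main

import (
	"encoding/json"
	"log"
	"os"
)

// Payload is a hypothetical job input; real shapes vary by workload.
type Payload struct {
	ImageURL string `json:"image_url"`
}

func main() {
	// Payload delivery varies by platform; stdin is one common option.
	var p Payload
	if err := json.NewDecoder(os.Stdin).Decode(&p); err != nil {
		log.Fatalf("bad payload: %v", err)
	}

	log.Printf("processing %s", p.ImageURL)
	// ... the actual work happens here ...

	// A clean exit signals success; a non-zero exit signals failure.
}
```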

Tip: Consider using Alpine Linux as the base layer for your Docker images. It is an extremely lightweight distribution that still provides a package manager for external dependencies.

Request and Invoke

A well-defined API is crucial to a microservices architecture because of its modular and distributed makeup. This goes beyond good documentation, logical resource naming, and semantic versioning. It is equally important to understand the manner in which endpoints are triggered and workloads are initiated.

App-centric microservices follow a synchronous request/response model. API endpoints are directly invoked by the client, usually over plain HTTP. There is an expected real-time response, so it is best to use point-to-point communication. Because the system is distributed, it’s important to factor in latency and potentially unreachable endpoints.
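
Because a hung endpoint would otherwise block the caller indefinitely, every point-to-point call should carry a deadline. A small sketch (the service URL is illustrative):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Always set a timeout on synchronous calls so a slow or
	// unreachable endpoint can't stall the client forever.
	client := &http.Client{Timeout: 2 * time.Second}

	resp, err := client.Get("http://users-service:8080/users/42") // illustrative URL
	if err != nil {
		// This branch covers both timeouts and unreachable endpoints.
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	fmt.Println("status:", resp.Status)
}
```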

Job-centric microservices follow an event-driven model, where an action automatically triggers an asynchronous workflow. Events come in many forms from a wide range of sources – schedule, webhook, callback, notification, sensor, or direct API call. Due to its asynchronous nature, the job is placed in a message queue, which persists the request until a worker is ready to execute it.
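
Here’s a sketch of that intake pattern (the Queue interface is a stand-in invented for illustration; in practice it would be RabbitMQ, SQS, or similar): the endpoint doesn’t do the work, it persists the event and returns immediately.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Queue is a hypothetical stand-in for a real message broker.
type Queue interface {
	Push(body []byte) error
}

// chanQueue is an in-memory Queue, just enough to run the sketch.
type chanQueue chan []byte

func (q chanQueue) Push(body []byte) error {
	q <- body
	return nil
}

func webhookHandler(q Queue) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var event json.RawMessage
		if err := json.NewDecoder(r.Body).Decode(&event); err != nil {
			http.Error(w, "bad event", http.StatusBadRequest)
			return
		}

		// Persist the request on the queue; a worker executes it later.
		if err := q.Push(event); err != nil {
			http.Error(w, "queue unavailable", http.StatusServiceUnavailable)
			return
		}

		// 202 Accepted means "received, will process", not "done".
		w.WriteHeader(http.StatusAccepted)
	}
}

func main() {
	q := make(chanQueue, 1024)
	http.Handle("/events", webhookHandler(q))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```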

Tip: Consider using an API Gateway as the single entry point for all requests for added features such as monitoring, authentication, security, throttling, and more.

Discover and Route

Ensuring that containerized microservices are properly distributed across a number of dynamic hosts takes a fair amount of finesse. The underlying system must be intelligent enough to schedule workloads on demand within the available container fleet without underutilizing or overcommitting resources.

App-centric microservices are distributed as running containers. This means that when a request comes in, the system needs to know where the container process is located (IP address and port) so it can route accordingly. There is an entire ecosystem centered around service registries and container orchestration, so picking the right tool for the job often comes down to how much abstraction versus how much control you want.
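
One common mechanism is DNS-based discovery. As a sketch (the service and domain names are illustrative; registries such as Consul can expose registered instances through DNS SRV records like this), the standard library is enough to resolve a service name to live host/port pairs:

```go
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Resolve the SRV record for a hypothetical "users" service.
	// Each returned record carries the host and port of one
	// registered instance, which is exactly what the router needs.
	_, addrs, err := net.LookupSRV("users", "tcp", "service.consul")
	if err != nil {
		log.Fatalf("discovery failed: %v", err)
	}

	for _, a := range addrs {
		fmt.Printf("route to %s:%d\n", a.Target, a.Port)
	}
}
```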

Job-centric microservices are queued up prior to execution, meaning the orchestration question is not “where is the service?” but rather “where can I run the service?” Worker nodes are registered with the system and pull from the queue when there are jobs to run. This means the system needs to know the available capacity across the entire pool, so it can spin up a fresh container within those bounds to execute the process.
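
A toy version of that pull model (the queue and capacity are simulated in memory for illustration) looks like this: each worker takes a new job only when it has room to run one, so the node never overcommits.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	jobs := make(chan string, 100) // stand-in for a shared message queue

	// Seed a few sample jobs; in production these arrive from event triggers.
	for i := 1; i <= 5; i++ {
		jobs <- fmt.Sprintf("job-%d", i)
	}
	close(jobs)

	const capacity = 2 // how many containers this node can run at once
	var wg sync.WaitGroup

	for w := 0; w < capacity; w++ {
		wg.Add(1)
		go func(worker int) {
			defer wg.Done()
			// Pull from the queue only when there's capacity to
			// spin up another container on this node.
			for job := range jobs {
				fmt.Printf("worker %d running %s\n", worker, job)
				time.Sleep(100 * time.Millisecond) // simulate execution
			}
		}(w)
	}

	wg.Wait()
}
```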

Tip: Profiling each microservice for its optimal compute environment will help allocate resources effectively. For example, memory- and/or CPU-intensive workloads should run on more powerful hardware.

Run and Scale

A key benefit of microservices is the ability to get the most out of your infrastructure resources, but anyone who has ever operated a distributed system in some form or fashion knows the gap between “run” and “run at scale” involves far more than raw capacity.

App-centric microservices are container processes running continually across a shared resource pool. Scaling is traffic-driven, spinning containers up and down accordingly. To handle volume, a load balancer is placed in front, distributing requests to available nodes.

Job-centric microservices execute and finish, needing a compute environment only for the duration of the process. Scaling is concurrency-driven, spinning up n containers based on how many processes should run at a given time. The queue acts as a buffer in case there are more jobs to run than available containers.

Tip: Incorporate proactive and reactive autoscaling mechanisms based on known and unknown volume. Use traffic metrics (volume, rate) for app-centric microservices and queue metrics (size, rate) for job-centric microservices.
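
The reactive half of that tip can be as simple as dividing observed load by per-instance capacity. A sketch with illustrative numbers (the thresholds are assumptions, not recommendations):

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas is the heart of a simple reactive autoscaler:
// divide the observed load by what a single instance can handle.
func desiredReplicas(load, perInstance float64) int {
	n := int(math.Ceil(load / perInstance))
	if n < 1 {
		n = 1 // keep a floor of one instance
	}
	return n
}

func main() {
	// App-centric: scale on traffic (requests per second).
	fmt.Println("app-centric replicas:", desiredReplicas(4200, 500))

	// Job-centric: scale on the queue (waiting jobs per worker).
	fmt.Println("job-centric workers:", desiredReplicas(900, 50))
}
```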

Monitor and Fail

Visibility is one of those key traits that makes something enterprise-grade. With microservices, there are more locations, hosts, environments, sources, and endpoints to keep track of. A resilient system is one that can react to failures accordingly.

App-centric microservices are live processes invoked by real-time requests. When an endpoint can’t be reached, the system needs to be able to fail over to another running instance so the request is not lost. The container process must also be monitored in real time, with proper alerting mechanisms in place.
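
In code, that failover can look like the following sketch (the instance addresses are illustrative; in practice a load balancer usually does this for you):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// getWithFailover tries each known instance in turn so that one
// unreachable endpoint doesn't lose the request.
func getWithFailover(client *http.Client, instances []string, path string) (*http.Response, error) {
	var lastErr error
	for _, host := range instances {
		resp, err := client.Get("http://" + host + path)
		if err == nil {
			return resp, nil
		}
		lastErr = err // record the failure and try the next instance
	}
	return nil, fmt.Errorf("all instances failed, last error: %v", lastErr)
}

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	instances := []string{"10.0.1.5:8080", "10.0.1.6:8080"} // illustrative addresses

	resp, err := getWithFailover(client, instances, "/status")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```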

Job-centric microservices can fail in the same manner; however, the distinction is that the job state is preserved in a queue until completed. This means the job can be retried when a failure occurs, either automatically or manually. Given the asynchronous nature of the workload, monitoring is more log-oriented, so developers can look back at what happened.
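
Because the job state survives in the queue, the retry itself is straightforward. A sketch (the backoff policy and attempt limit are illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// runWithRetry executes a job and retries it on failure, mirroring
// what a queue-backed worker does by re-delivering the message.
func runWithRetry(job func() error, maxAttempts int) error {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		err := job()
		if err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		if attempt < maxAttempts {
			// Back off before the next try; real systems often grow
			// this delay exponentially and eventually dead-letter the job.
			time.Sleep(time.Duration(attempt) * time.Second)
		}
	}
	return errors.New("job exhausted all retries")
}

func main() {
	flaky := func() error { return errors.New("downstream unavailable") } // illustrative job
	if err := runWithRetry(flaky, 3); err != nil {
		fmt.Println(err)
	}
}
```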

Tip: Make sure that monitoring, logging, alerting, and reporting are done at the individual microservice level in order to isolate failures, and also as a collection in order to have real-time visibility into the state of the whole system.

Bringing it All Together

Understanding these behavioral patterns allows you to make more educated decisions around how to manage various types of workloads in a highly scalable production environment. With microservices, the role of DevOps becomes increasingly important, evolving the “Infrastructure as Code” mantra even further.

Contrary to how it may sound, though, the holy grail of DevOps isn’t NoOps. The real point is to make operations such a natural extension of the development process that running code at scale in any environment doesn’t require a special skill set. The continued evolution of cloud infrastructure services and development platforms is enabling this type of “serverless” future, where developers can build and deploy distributed applications with the confidence of knowing how each workload will behave in the real world.
