Unlocking the full potential of microservices architecture in application development has been a topic of extensive discussion. The benefits are widely acknowledged, but the challenge lies in executing it effectively. Fortunately, at ClearScale, we have discovered a highly advantageous approach: leveraging the 12-Factor App methodology. Originally devised by developers at Heroku, this methodology identifies key principles shared by successful applications.

While the 12-Factor App principles were initially tailored for apps developed on the Heroku platform, their versatility transcends specific technologies and programming languages. They prove invaluable in breaking down monolithic applications into microservices and in constructing cloud-native apps entirely from scratch, embracing the microservices paradigm.

In this blog post, we provide an overview of applying the 12-factor methodology, focusing specifically on its use with microservices. For a comprehensive understanding of the principles, we encourage you to refer to the complete document authored by the Heroku team at 12factor.net.

1. Codebase

One codebase tracked in revision control, many deploys

There is only one codebase per app (in a microservices architecture, one codebase per service), but there can be many deploys. A deploy is a running instance of the app, typically a production site plus one or more staging sites. A copy of the app running in a developer’s local development environment also counts as a deploy.

The codebase must be the same across all deployments, although different versions may be active in each deploy. For this reason, it’s essential to track the app in a version control system, such as Git or Subversion.

2. Dependencies

Explicitly declare and isolate dependencies

Declare all dependencies precisely via a dependency declaration manifest. Use a dependency isolation tool during execution to ensure that no implicit dependencies enter from the surrounding system. This applies to both production and development.

Don’t rely on the implicit existence of system-wide packages or system tools. While these tools may exist on most systems, there’s no guarantee they’ll exist on the systems where the app runs in the future. Nor is it a sure thing that the version found on a future system will be compatible with the app.
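
To make this concrete, here is a minimal Python sketch, assuming a Python service whose dependencies are pinned in a manifest such as requirements.txt; the package names and versions below are purely illustrative. The idea is to fail fast at startup if a declared dependency is missing or at the wrong version, rather than silently relying on whatever happens to be installed system-wide.

```python
# Illustrative only: fail fast if a declared dependency is missing or at an
# unexpected version, instead of relying on packages that happen to exist
# system-wide. The package names and pins below are hypothetical.
from importlib.metadata import PackageNotFoundError, version

REQUIRED = {
    "requests": "2.31.0",  # declare the same pins in the dependency manifest
    "redis": "5.0.1",
}

def check_dependencies() -> None:
    for package, expected in REQUIRED.items():
        try:
            installed = version(package)
        except PackageNotFoundError:
            raise SystemExit(f"Missing dependency: {package}=={expected}")
        if installed != expected:
            raise SystemExit(
                f"Version mismatch for {package}: expected {expected}, found {installed}"
            )

if __name__ == "__main__":
    check_dependencies()
    print("All declared dependencies are present at their pinned versions.")
```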

3. Config

Store config in the environment

Store all configuration data separately from the code, as environment variables rather than entries in the code repository, and have the code read it in at runtime. Keeping config outside the codebase makes it easy to update config values without touching the code itself, eliminating the need to redeploy the app when config values change.

The following are some additional best practices to consider:

  • Use an environment variable for anything that can change at runtime, and for secrets that shouldn’t be committed to a shared repository.
  • Use non-version-controlled .env files for local development.
  • Keep .env files in a secure storage system so they’re available to development teams but never committed to the code repository.
  • Once an app is deployed to a delivery platform, use the platform’s mechanism for managing environment variables.

Config stored in environment variables is unlikely to be checked into the repository accidentally. Another bonus: environment variables are independent of language and operating system.
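
As a minimal sketch of this factor, the Python snippet below reads its config from environment variables at runtime; the variable names (DATABASE_URL, FEATURE_FLAG, PORT) are illustrative assumptions, not required names.

```python
# A minimal sketch of reading config from the environment at runtime.
# DATABASE_URL, FEATURE_FLAG, and PORT are illustrative variable names.
import os

class Config:
    def __init__(self) -> None:
        # Required values: fail fast if the environment is missing them.
        self.database_url = os.environ["DATABASE_URL"]
        # Optional values can fall back to safe defaults.
        self.feature_flag = os.environ.get("FEATURE_FLAG", "off") == "on"
        self.port = int(os.environ.get("PORT", "8080"))

config = Config()  # changing a value means updating the deploy's environment, not the code
```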

4. Backing services

Treat backing services as attached resources

A backing service is any service the app consumes over the network as part of its normal operation, such as a database, message queue, or cache. Some backing services are managed locally by the same systems administrators who deploy the app’s runtime; others are provided and managed by third parties.

For microservices, anything external to a service is treated as an attached resource. This ensures that every service is completely portable and loosely coupled to the other resources in the system. In addition, the strict separation increases flexibility during development; developers only need to run the services they’re modifying.
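
The sketch below illustrates the idea in Python, assuming hypothetical DATABASE_URL and CACHE_URL variables: the service locates each backing service only by a URL supplied in its config, so swapping a local database for a managed one is a config change rather than a code change.

```python
# Sketch: backing services are located only by URLs supplied in the environment.
# DATABASE_URL and CACHE_URL are illustrative names with illustrative defaults.
import os
from urllib.parse import urlparse

DATABASE_URL = os.environ.get("DATABASE_URL", "postgres://localhost:5432/orders")
CACHE_URL = os.environ.get("CACHE_URL", "redis://localhost:6379/0")

def describe_resource(url: str) -> str:
    """The code knows each resource only by its locator, not its identity."""
    parsed = urlparse(url)
    return f"{parsed.scheme} service at {parsed.hostname}:{parsed.port}"

# Swapping a local database for a managed cloud one is a config change only:
# a different DATABASE_URL in the deploy's environment, with no code change.
print(describe_resource(DATABASE_URL))
print(describe_resource(CACHE_URL))
```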

5. Build, release, run

Strictly separate build and run stages

Use a continuous integration/continuous delivery (CI/CD) tool to automate builds and support a strict separation of the build, release, and run stages. Docker images make it easy to separate the build and run stages; images should be created from every commit and treated as deployment artifacts.

The build stage starts with the app in source control and pulls in its dependencies to produce a build. Because the config is kept separate, it can be combined with the build to create a release, which is then ready for the run stage. Each release should have a unique ID.
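
As a rough illustration of the separation, the Python sketch below models a release as an immutable build artifact combined with a deploy’s config and stamped with a unique ID; the field names and ID format are assumptions for illustration only.

```python
# Illustrative sketch of build vs. release vs. run: a release pairs an
# immutable build artifact with a deploy's config and gets a unique ID.
import os
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Build:
    image: str        # e.g. a Docker image built from a specific commit
    commit_sha: str

@dataclass(frozen=True)
class Release:
    release_id: str
    build: Build
    config: dict

def cut_release(build: Build, config: dict) -> Release:
    # Unique release ID from a UTC timestamp plus a short commit SHA.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    return Release(f"v{stamp}-{build.commit_sha[:7]}", build, config)

build = Build(image="registry.example.com/orders:abc1234", commit_sha="abc1234def")
release = cut_release(build, {"DATABASE_URL": os.environ.get("DATABASE_URL", "")})
print(release.release_id)  # the run stage launches exactly this release, unchanged
```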

6. Processes

Execute the app as one or more stateless processes

Ensuring the app is stateless makes it easy to scale a service horizontally by simply adding more instances of that service. Store any stateful data, or data that needs to be shared between instances, in a backing service such as a database. The process memory space or filesystem can be used only as a brief, single-transaction cache. Session state data is a good candidate for a datastore that offers time expiration, such as Redis or Memcached.

Never assume that anything cached in memory or on disk will be available on a future request or job.
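
Here is a minimal sketch of keeping session state out of process memory, assuming Redis is the chosen backing service and is reachable via a REDIS_URL environment variable; the key names and TTL are illustrative.

```python
# Sketch of keeping session state in a backing service with time expiration,
# assuming a Redis instance reachable via REDIS_URL (requires the redis package).
import json
import os

import redis

r = redis.Redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379/0"))

SESSION_TTL_SECONDS = 1800  # sessions expire automatically after 30 minutes

def save_session(session_id: str, data: dict) -> None:
    # Any instance of the service can read this later; nothing is kept in local memory.
    r.setex(f"session:{session_id}", SESSION_TTL_SECONDS, json.dumps(data))

def load_session(session_id: str) -> dict | None:
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```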

7. Port binding

Export services via port binding

The app should be completely self-contained and should not rely on the runtime injection of a web server into the execution environment to create a web-facing service. Instead, the web app exports HTTP as a service by binding to a port and listening for requests coming in on that port. Nearly any kind of server software can be run this way, with a process binding to a port and awaiting incoming requests.

To make the port-binding factor more useful for microservices, allow access to the persistent data owned by a service only through that service’s API. This prevents implicit service contracts between microservices and keeps them from becoming tightly coupled. Data isolation also lets developers choose the type of data store that best suits each service’s needs.
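
The following self-contained Python sketch shows the port-binding idea using only the standard library: the service creates its own HTTP listener on a port taken from the environment instead of relying on an injected web server. The PORT variable and its default are assumptions.

```python
# Self-contained sketch of port binding using only the standard library:
# the service creates its own HTTP listener rather than relying on an
# injected web server. PORT and its default are assumptions.
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self) -> None:
        body = json.dumps({"status": "ok", "path": self.path}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))
    # Binding to 0.0.0.0 exports the service to whatever routes traffic to this port.
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```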

8. Concurrency

Scale out via the process model

Supporting concurrency means that different parts of an app can scale up to meet the demand at hand. When you develop the app to be concurrent, you can spin up new instances in the cloud effortlessly.

This principle draws from the UNIX model for running service daemons, enabling an app to be architected to handle diverse workloads by assigning each type of work to a process type. To do this, organize processes according to their purpose. Then separate them so that they can scale up and down according to need.

Docker and other container technologies make it straightforward to run and scale these process types concurrently.
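
As a simplified sketch of the process model, the Python example below defines two hypothetical process types, web and worker, and scales each by running more instances; in practice a container platform or process manager would start these rather than a single parent script.

```python
# Simplified sketch of the process model: each workload type is its own
# process type, scaled independently by running more instances of it.
import time
from multiprocessing import Process

def web_process() -> None:
    while True:
        time.sleep(1)   # stand-in for handling HTTP requests

def worker_process() -> None:
    while True:
        time.sleep(5)   # stand-in for draining a background job queue

if __name__ == "__main__":
    # The "formation" says how many instances of each process type to run.
    formation = {"web": (web_process, 2), "worker": (worker_process, 3)}
    procs = [
        Process(target=fn, name=f"{kind}-{i}")
        for kind, (fn, count) in formation.items()
        for i in range(count)
    ]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```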

9. Disposability

Maximize robustness with fast startup and graceful shutdown

An app’s processes should be disposable so they can be started, stopped, and redeployed quickly with no loss of data. This facilitates fast elastic scaling, rapid deployment of code and config changes, and robustness of production deploys.

Services deployed in Docker containers do this automatically. It’s an inherent feature of containers that they can be stopped and started instantly.

Disposable processes mean that an app can die at any time without affecting users: another instance can pick up the work, or the process can start right back up again. Building disposability into your app also means it shuts down gracefully, cleaning up all the resources it holds before exiting.

When designed this way, the app comes back up again quickly. Likewise, when processes terminate, they should finish their current request, refuse any incoming request, and exit.
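
A minimal sketch of graceful shutdown in Python might look like the following: on SIGTERM the process stops accepting new work, finishes what it has in flight, cleans up, and exits quickly. The work loop is a stand-in for a real request or job loop.

```python
# Minimal sketch of graceful shutdown: on SIGTERM, stop taking new work,
# finish the current unit, clean up, and exit quickly.
import signal
import sys
import time

shutting_down = False

def handle_sigterm(signum, frame) -> None:
    global shutting_down
    shutting_down = True  # refuse new work; the loop below finishes its current unit

signal.signal(signal.SIGTERM, handle_sigterm)

while not shutting_down:
    time.sleep(0.5)  # stand-in for handling one request or job

# Close connections, flush buffers, release resources, then exit promptly.
print("Shutdown complete.")
sys.exit(0)
```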

10. Dev/prod parity

Keep development, staging, and production as similar as possible

Continuous deployment requires continuous integration based on matching environments to limit deviation and errors. As such, dev, staging, and production should be as similar as possible.

Containers work well for this as they enable you to run the exact same execution environment all the way from local development through production. Note: the differences in the underlying data can still cause differences at runtime.

Don’t use different backing services between development and production, even when adapters theoretically abstract away the differences. Even minor deviations can introduce incompatibilities, causing code that worked in development or staging to fail in production.

11. Logs

Treat logs as event streams

Stream logs to a chosen destination rather than dumping them into a log file: each process writes its event stream to stdout, and the execution environment captures and routes it. Logs can be directed anywhere, for example to a NoSQL database, another service, a file in a repository, a log-indexing-and-analysis system, or a data-warehousing system.

Use a log-management solution for routing and storing logs, and keep the routing of log data separate from the processing of it. Define your logging strategy as part of your architecture standards so that all services generate logs in a similar fashion.
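
As an illustrative sketch, the Python snippet below writes structured log events to stdout and leaves routing and storage to the execution environment; the logger name and JSON field names are assumptions, not a required schema.

```python
# Sketch of logging as an event stream: write structured events to stdout
# and let the execution environment capture and route them.
import logging
import sys

handler = logging.StreamHandler(sys.stdout)  # stdout, never a local log file
handler.setFormatter(logging.Formatter(
    '{"time": "%(asctime)s", "level": "%(levelname)s", "message": "%(message)s"}'
))

logger = logging.getLogger("orders-service")  # illustrative service name
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("order created")  # the platform routes this wherever logs should go
```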

12. Admin processes

Run admin/management tasks as one-off processes

The idea here is to separate administrative tasks from the rest of the app to prevent one-off tasks from causing issues with your running apps. Containers make this easy, as you can spin up a container just to run a task and then shut it down.

Examples include doing data cleanup, running analytics for a presentation, or turning on and off features for A/B testing.

Though admin processes are separate, run them in the same environment and against the same codebase and config as the app itself. Shipping the admin task code alongside the app prevents drift.
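
Here is a hedged sketch of a one-off admin task in Python: a session-cleanup script that reads the same config source as the app itself, so it runs in the same environment and cannot drift. The function, retention period, and variable names are hypothetical.

```python
# Sketch of a one-off admin task that shares the app's environment and config.
# The database URL, table, and retention period are hypothetical.
import os
from datetime import datetime, timedelta, timezone

def purge_expired_sessions(database_url: str, older_than_days: int = 30) -> None:
    cutoff = datetime.now(timezone.utc) - timedelta(days=older_than_days)
    # A real task would connect to database_url and delete stale session rows;
    # printing keeps this sketch self-contained and side-effect free.
    print(f"Would purge sessions older than {cutoff:%Y-%m-%d} from {database_url}")

if __name__ == "__main__":
    # Same config source as the long-running app, so the task cannot drift.
    purge_expired_sessions(os.environ.get("DATABASE_URL", "postgres://localhost/app"))
```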

Learn More About Our 12-Factor App Methodology

At ClearScale, we are dedicated to assisting organizations in various industries by providing innovative solutions that encompass everything from cloud migrations to the development of machine learning-powered applications. With a deep understanding of the importance of cloud-native apps, we leverage a wide array of methodologies and best practices, including the principles of microservices.

Our approach centers around our clients’ unique needs and objectives. We understand that every organization has distinct requirements when it comes to application development. Whether you are seeking to enhance scalability, improve agility, or optimize resource utilization, ClearScale is committed to delivering tailored solutions that align with your business goals.

To explore how ClearScale can support you in achieving your specific application development objectives, we invite you to reach out to us today. Our team is ready to collaborate with you, whether by taking on full project responsibility or working alongside your in-house team. By leveraging our expertise and collaborative approach, you can unlock the full potential of your application development initiatives and drive innovation within your organization.

Contact ClearScale now and embark on a journey towards scalable, resilient, and future-proof applications that will propel your business forward in the dynamic landscape of today’s digital era.
