Architecture

Monolith vs Microservices: the path to teams & services autonomy

Defining the steps & challenges to transition from one architecture style to the other to achieve the right outcomes.

by Grégoire Mielle, last updated on 5/5/2021

Large monoliths suck, but badly designed microservices architectures suck even more. While the idea of starting from scratch with microservices is tempting, it does not represent reality:

  • No one wants you to spend 6 months setting up Kubernetes for an idea that has not been validated by your market
  • There's nothing natural in breaking up an application into different pieces without experiencing the ins & outs of it

Indeed, as an industry, we love to focus on the craft instead of the outcomes: managing multiple services, communicating using message brokers, triggering serverless functions, the whole thing being managed with Kubernetes.

Buying into these tools has clear benefits but hard-to-estimate costs. Unmanaged evolution within a monolith also incurs costs, which can lead to complete chaos.

The real challenge is thus to know when & how to transition from one to the other, making sure you & your team are ready from an organizational & technical standpoint, in order to achieve the right outcomes:

  • Autonomous teams built around products
  • Independent parts, built for resilience & evolution

Let's try to define which investments make sense from the start, the challenges you might face adopting microservices within your organization, & the questions to ask yourself to know where you are on that path.

What to focus on, whatever the architecture style

Defining your domain, its subparts & boundaries

If you're in business today, you must be bringing value to some users in a specific market. This value is the sum of multiple features closely linked to each other (e.g. invoice generation, expense management, etc.).

Encapsulating the logic for each of these subparts of your domain is essential.

By having clear boundaries between them, you make sure no part of your system depends on the internals of other parts; such dependencies can drastically increase the complexity of the whole system. Depending on the size of your subparts, this encapsulation can take multiple forms: methods, modules or components.

For example, generating invoices can be a single module whereas managing expenses can be a set of interconnected components. Being able to update these parts independently is key to your organization's & application's growth: imagine the implications of a major refactor of your invoice generation module if its internal currency conversion mechanism is used by your expense management components.

It doesn't mean these two subparts cannot communicate; it means their interaction mechanisms must be clear.
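
To make this concrete, here is a minimal sketch of such a boundary in TypeScript. The module shape, field names & exchange rates are all illustrative assumptions, not a prescribed design: the point is that the currency conversion helper stays private to the invoicing module, so an expense management component can only depend on the public contract.

```typescript
// Illustrative money type shared across subparts.
type Money = { amount: number; currency: string };

// Internal detail of the invoicing subpart: not part of its public
// contract, so it can be refactored freely (rates are made up here).
const ratesToEUR: Record<string, number> = { EUR: 1, USD: 0.9 };
function toEUR(m: Money): Money {
  return { amount: m.amount * (ratesToEUR[m.currency] ?? 1), currency: "EUR" };
}

// The invoicing subpart's public contract: the only surface other
// subparts (e.g. expense management) are allowed to rely on.
interface InvoiceModule {
  generateInvoice(lines: Money[]): { totalEUR: number };
}

const invoices: InvoiceModule = {
  generateInvoice(lines) {
    const totalEUR = lines.reduce((sum, l) => sum + toEUR(l).amount, 0);
    return { totalEUR };
  },
};
```

Because `toEUR` is never exposed, replacing the conversion mechanism later has no impact on consumers of `generateInvoice`.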

Establishing how subparts interact with each other

Your subparts need to communicate with each other. Whether it's by calling a function or making a network call to a REST endpoint, it all comes down to contracts.

These contracts must describe their effect & purpose explicitly. By using intention revealing interfaces, you avoid subparts assuming internal behaviours of others.

For example, do your expense management components expose an updateExpense command or a validateExpenseRequest command? While the first one can easily be implemented as a REST endpoint, it is somewhat vague about its effect: at which step of the expense request flow is it used? Does it have an impact on invoices? Can it be called multiple times for the same expense?

With the second one, it's easier to assume it's only used once an expense request has been made & it needs to be validated, encapsulating the necessary updates & side-effects to perform.

If these data contracts are not clear in the context of a monolith, they will certainly not be with microservices.
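
The contrast between the two commands can be sketched as follows. The `Expense` type & its status flow are assumptions made for the example, not an actual expense model:

```typescript
// Illustrative expense model with a simple status flow.
type ExpenseStatus = "requested" | "validated" | "invoiced";
interface Expense { id: string; status: ExpenseStatus; amountEUR: number }

// Vague contract: callers must guess which fields & transitions are
// legal, and internal assumptions leak to every consumer.
function updateExpense(e: Expense, patch: Partial<Expense>): Expense {
  return { ...e, ...patch };
}

// Intention-revealing contract: one legal transition, with its
// preconditions & side effects encapsulated behind the command.
function validateExpenseRequest(e: Expense): Expense {
  if (e.status !== "requested") {
    throw new Error(`cannot validate an expense in status "${e.status}"`);
  }
  return { ...e, status: "validated" };
}
```

With `validateExpenseRequest`, calling it twice for the same expense is rejected by the contract itself, instead of being a question every consumer has to answer.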

The tools & processes supporting high velocity & confidence

Finally, there are tools & processes that can make or break engineers' velocity & confidence. Which side of the fence you are on depends on how much your organization & team are willing to invest in them:

  • Feeling confident about iterating on new features thanks to a fast CI pipeline
  • Being able to deploy (& rollback if necessary) new releases of your application
  • Knowing that your monitoring stack will alert you on critical issues

They represent an invisible, yet essential part of how you operate your application. If you don't have the tools to observe your monolith with metrics like average latency or HTTP request throughput, using microservices will only multiply the uncertainty by the number of services.
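
As a sketch of what "observing your monolith" means at its simplest, here is a toy in-memory recorder for the two metrics mentioned above. Real setups would use a metrics library & a time-series backend; the class name & methods are invented for the example:

```typescript
// Toy request-metrics recorder: enough to compute average latency &
// request throughput, the baseline signals mentioned in the article.
class RequestMetrics {
  private durationsMs: number[] = [];

  // Call once per handled HTTP request with its duration.
  record(durationMs: number): void {
    this.durationsMs.push(durationMs);
  }

  averageLatencyMs(): number {
    if (this.durationsMs.length === 0) return 0;
    return this.durationsMs.reduce((a, b) => a + b, 0) / this.durationsMs.length;
  }

  // Requests per second over a given observation window.
  throughput(windowSeconds: number): number {
    return this.durationsMs.length / windowSeconds;
  }
}
```

If even this baseline is missing for one monolith, keeping it accurate across dozens of services is unlikely to happen by accident.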

Having a consistent approach for these subjects at the beginning makes it easy for product teams to ship fast & leave the burden to experts.

Challenges you should be ready to deal with using Microservices

Avoiding the distributed monolith paradigm

We tend not to respect encapsulation within monoliths, for various reasons. Breaking the application into microservices makes violating encapsulation harder, but not impossible.

Indeed, the goal of microservices is to deploy changes in isolation. If your update requires consumers of your service to update a client library in a coordinated way, this promise goes away. Imagine what a real-life scenario looks like when tens of these client libraries are in use.

In this situation, having a versioning strategy & focusing on remote data contracts (REST, gRPC) over client libraries lets consumers choose to use or reimplement independent libraries depending on their timeline & architectural decisions.
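
One common shape such a versioning strategy takes is serving two contract versions from the same internal data, so existing consumers keep working while others migrate. The invoice shapes & route handling below are illustrative assumptions:

```typescript
// v1 contract: total is a bare number (currency implicit).
interface InvoiceV1 { id: string; total: number }
// v2 contract: total carries an explicit currency.
interface InvoiceV2 { id: string; total: { amount: number; currency: string } }

// The service adapts its current representation down to v1, so v1
// consumers never need a coordinated client-library update.
function toV1(invoice: InvoiceV2): InvoiceV1 {
  return { id: invoice.id, total: invoice.total.amount };
}

// Sketch of routing both /v1 & /v2 endpoints from one representation.
function handleGetInvoice(
  version: "v1" | "v2",
  invoice: InvoiceV2,
): InvoiceV1 | InvoiceV2 {
  return version === "v1" ? toV1(invoice) : invoice;
}
```

Each consumer picks the version matching its timeline; retiring v1 becomes an explicit, observable decision rather than a forced lockstep upgrade.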

If your microservices are highly temporally coupled, this promise also goes away: a service being unavailable has a direct impact on services consuming it. Analyzing the chattiness of your system can help detect this pattern and avoid it by using asynchronous communication or redrawing the boundaries around services.

Monitoring distributed services & their interactions

While getting the big picture within a monolith can be done by analyzing call stacks, doing so in the context of microservices is hard: your services could all be up & running while the application is down for users. Monitoring a microservices architecture involves observing the interactions between its parts, in a distributed way.

New solutions, like distributed tracing, need to be considered to pinpoint new potential flaws like network latency, congestion & scaling failures.
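
The core mechanism behind distributed tracing is simple: every service forwards the incoming trace identifier (standardized as the W3C `traceparent` header; a simplified custom header is assumed here), so spans emitted by different services can be stitched into one request timeline:

```typescript
// Simplified trace propagation: reuse the incoming trace id, or mint
// one at the edge. Header name & id format are illustrative, not the
// actual W3C traceparent format.
type TraceHeaders = Record<string, string>;

function ensureTraceId(incoming: TraceHeaders): TraceHeaders {
  const traceId = incoming["x-trace-id"] ?? Math.random().toString(16).slice(2);
  return { ...incoming, "x-trace-id": traceId };
}

// Each hop tags its span with the shared trace id before logging or
// exporting it to a tracing backend.
function callDownstream(headers: TraceHeaders, service: string): string {
  const h = ensureTraceId(headers);
  return `${service} span trace=${h["x-trace-id"]}`;
}
```

Because every hop carries the same id, a single slow user request can be followed across services, which a per-service dashboard alone cannot show.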

Maintaining distributed knowledge on central subjects

Faced with too much rigidity, such as a single team owning a shared subject, engineers always find a way around the rules. While you want to avoid such situations, there are subjects for which central ownership can make sense: application security & site reliability, to name a few.

These experts are key to your organization's velocity, as described earlier, but this model can easily fail in a distributed environment.

If product teams don't know anything about these subjects, they will always have to rely on these experts, making them a single point of failure (or slowness, in this case). Of course, experts remain essential for building solutions, defining guidelines & reviewing the work of others, but teams need to learn about these subjects & develop a sense of autonomy around them.

7 questions to know where you are on the microservices path

  • Do we, as a team, have a good understanding of our domain & how it could be broken down into pieces?
  • Is it hard to coordinate deployments with the current team's size?
  • Could new features benefit from being isolated given the current stack? (eg. Machine learning, batch processing)
  • Are new releases a source of uncertainty given the impact they have?
  • Is our CI/CD pipeline approach scalable?
  • What is the state of our monitoring stack & observability practices?
  • How do engineers feel about operating their own services?
