Microservices vs. Monolith
Status: Complete
Category: Architecture
Default enforcement: Advisory
Author: PushBackLog team
Tags
- Topic: architecture
- Skillset: backend, fullstack, devops
- Technology: generic
- Stage: planning
Summary
The choice between a monolith and a microservices architecture is an architectural bet with long-term operational consequences. Neither is inherently superior — the decision should be driven by team size, domain complexity, operational maturity, and rate of change, not by fashionability. Most systems should start as a well-structured monolith and decompose into services only when concrete pain points emerge.
Rationale
Microservices are a scaling solution, not a default
Microservices decompose a system into independently deployable services, each owning a bounded slice of the domain. They enable independent scaling, independent deployment, and independent team ownership. These are genuine advantages — at the right scale. Below that scale, they predominantly add overhead: distributed systems complexity, network latency, operationally expensive service mesh configuration, multiple databases to manage, and cross-service debugging.
The industry’s widespread adoption of microservices in the mid-2010s left many teams operating distributed architectures at a scale that did not justify them. That experiment has produced a growing “modular monolith” and “majestic monolith” counter-movement: not because microservices are wrong, but because they were applied without honest evaluation of the trade-offs.
Monoliths are not legacy
A monolith is a system deployed as a single unit. “Monolith” is not a synonym for “big ball of mud.” A well-structured modular monolith — with clear internal boundaries between domains, enforced module dependencies, and clean APIs between layers — provides many of the benefits of microservices with far less operational overhead. It can be refactored and eventually split if the need arises. Splitting a poorly structured monolith is much harder than splitting a modular one.
The distribution penalty is real and usually underestimated
Distributed systems must handle network failures, partial failures, inconsistent state, and asynchronous coordination. A method call that takes 0.1ms becomes an HTTP call that takes 3ms and can fail in eight different ways. Transactions that span services cannot rely on database ACID guarantees; they require sagas, eventual consistency, or other coordination patterns with significant design and testing overhead. Every service boundary is a place where deployment, monitoring, security, and debugging become more expensive.
Teams that have not operated distributed systems at scale consistently underestimate these costs.
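To make the penalty concrete, here is a minimal retry wrapper, a sketch of the coordination a remote call needs that an in-process method call does not. The `RemoteCall` type and retry policy are illustrative; a real client would also need timeouts, backoff, and circuit breaking.

```typescript
// A remote call is just a function that may throw: timeouts, connection
// refusals, 5xx responses, DNS failures, and so on all surface as errors.
type RemoteCall<T> = () => T;

// Retries a remote call up to maxAttempts times, rethrowing the last error.
// An in-process method call needs none of this machinery.
function callWithRetry<T>(call: RemoteCall<T>, maxAttempts: number): T {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return call();
    } catch (err) {
      lastError = err; // record and retry; a real client would back off here
    }
  }
  throw lastError;
}
```

Note that retrying is only safe when the operation is idempotent; a non-idempotent call that times out after the server committed its work is exactly the kind of partial failure that forces teams into sagas and reconciliation jobs.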
Guidance
Decision criteria
| Factor | Monolith | Microservices |
|---|---|---|
| Team size | < 8 engineers | Multiple independently operating teams |
| Domain complexity | Single coherent domain | Multiple clearly bounded sub-domains |
| Operational maturity | Basic CI/CD | Mature DevOps, observability, service mesh |
| Deployment independence | Not needed | Critical for team autonomy |
| Scaling requirements | Whole-system scaling acceptable | Individual components need independent scaling |
| Stage of product | Early / uncertain | Established, stable domain model |
The modular monolith: the pragmatic middle ground
A modular monolith enforces domain boundaries in code without the operational cost of distribution:
```
src/
  billing/
    BillingService.ts      # public API for billing module
    internal/              # private: no external imports allowed
      InvoiceCalculator.ts
      PaymentProcessor.ts
  fulfilment/
    FulfilmentService.ts
    internal/
      WarehouseRouter.ts
  shared/
    types.ts               # shared value types only, no behaviour
```
Enforce boundaries with lint rules (`import/no-restricted-paths` from eslint-plugin-import, or `@nx/enforce-module-boundaries` in an Nx workspace) so the modular structure is maintained mechanically rather than by convention.
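A minimal sketch of the first option, using eslint-plugin-import’s `no-restricted-paths` rule; the paths assume the `src/` layout shown earlier and would need adjusting to your own module layout:

```javascript
// .eslintrc.js — forbid cross-module imports of internal/ directories
module.exports = {
  plugins: ['import'],
  rules: {
    'import/no-restricted-paths': ['error', {
      zones: [
        // fulfilment code may not reach into billing's private internals
        { target: './src/fulfilment', from: './src/billing/internal' },
        // and billing may not reach into fulfilment's
        { target: './src/billing', from: './src/fulfilment/internal' },
      ],
    }],
  },
};
```

Each `zone` declares that files under `target` may not import from `from`; adding a zone per module pair keeps the public `*Service.ts` files as the only sanctioned entry points.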
Decomposition triggers
Consider extracting a service when:
- A specific component has a substantially different scaling profile from the rest of the system
- Two teams are constantly blocked on each other due to shared ownership of a module
- A component has drastically different reliability or compliance requirements (e.g., a PCI-scoped service)
- The module needs to be deployed on a different cadence with full autonomy
- The domain has stabilised sufficiently that the service boundary is unlikely to shift
Anti-patterns to avoid
| Anti-pattern | Problem |
|---|---|
| Distributed monolith | Services deployed separately but tightly coupled — all the cost of microservices, none of the benefit |
| Nano-services | Services too granular to own a meaningful domain concept; high coupling, high overhead |
| Shared database | Multiple services writing to the same database — breaks service autonomy |
| Synchronous chains | Service A calls B calls C calls D — latency compounds; one failure cascades |
| Premature decomposition | Splitting before the domain model is stable forces expensive boundary rewrites |
Migration path: monolith to services
A strangler fig migration is the safest path:
1. Identify a well-bounded sub-domain with a clean internal API
2. Extract it behind an interface; replace the implementation with a call to the new service
3. Route traffic to the new service incrementally
4. Remove the original code once the service is proven in production
Never attempt a “big bang” rewrite of a monolith into microservices.
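The incremental routing step can be sketched as a shim that sits behind the extracted interface and shifts a configurable fraction of calls to the new service. All names here are illustrative; in production the rollout fraction would come from a feature flag or configuration service.

```typescript
// The interface the monolith's callers already depend on after extraction.
interface BillingGateway {
  createInvoice(orderId: string): string;
}

// Original in-process implementation, still living in the monolith.
class MonolithBilling implements BillingGateway {
  createInvoice(orderId: string): string {
    return `monolith:${orderId}`;
  }
}

// Client for the newly extracted service (in reality an HTTP/gRPC call).
class RemoteBillingService implements BillingGateway {
  createInvoice(orderId: string): string {
    return `service:${orderId}`;
  }
}

// Strangler-fig router: sends rolloutFraction of traffic to the new service.
class StranglerRouter implements BillingGateway {
  constructor(
    private legacy: BillingGateway,
    private extracted: BillingGateway,
    private rolloutFraction: number, // 0.0 = all legacy, 1.0 = all extracted
    private rng: () => number = Math.random,
  ) {}

  createInvoice(orderId: string): string {
    const impl = this.rng() < this.rolloutFraction ? this.extracted : this.legacy;
    return impl.createInvoice(orderId);
  }
}
```

Injecting the random source makes the routing decision testable, and dialling `rolloutFraction` from 0 to 1 over days or weeks lets the new service prove itself before the original code is deleted.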