Assume an Application Programming Interface (API) includes one or more of the following pieces:
- Data type definitions (structures, enums, etc.)
- Function declarations
- Specifications for remote request-response formats, such as REST1 or gRPC2
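As a concrete sketch of the first two pieces, here's a tiny hypothetical Rust API (the types and names are ours, invented purely for illustration):

```rust
/// Data type definition: supported sensor kinds (an enum).
/// (Hypothetical example, not from any real library.)
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum SensorKind {
    Temperature,
    Humidity,
}

/// Data type definition: a single reading (a structure).
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct Reading {
    pub kind: SensorKind,
    pub value: f64,
}

/// Function declaration: the signature is the contract callers rely on.
pub fn mean_value(readings: &[Reading]) -> Option<f64> {
    if readings.is_empty() {
        return None;
    }
    let sum: f64 = readings.iter().map(|r| r.value).sum();
    Some(sum / readings.len() as f64)
}

fn main() {
    let data = [
        Reading { kind: SensorKind::Temperature, value: 20.0 },
        Reading { kind: SensorKind::Temperature, value: 22.0 },
    ];
    assert_eq!(mean_value(&data), Some(21.0));
}
```

Together, the types and the function signature form the API surface: everything a caller can see and depend on.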
"Deep modules", according to John Ousterhout3, are those in which well-designed APIs hide, or abstract, an iceberg of underlying complexity.
Deep modules often have the advantage of making codebases easier to maintain and refactor. Good abstractions also make APIs straightforward to learn initially and use correctly, whether the interface serves an external customer or another component within the same codebase.
Depth becomes especially important once we consider the cumulative complexity of a large system with multiple components.
In most contexts, module and component are synonymous. But for our purposes, a component is composed of one or more modules: a bigger, potentially multi-module, piece. We'll soon see diagrams that make this distinction clearer.
What's a real-world example of a "deep module"?
Ousterhout cites system calls3, the OS mechanism by which userspace applications request hardware-related and/or privileged services, as a prototypical example.
A small handful of calls abstract the gory details of, say, writing a file to a physical hard disk of a particular variety and manufactured by a particular vendor. The OS provides a small and stable API externally, while being free to manage the inherent complexity internally.
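In Rust, we sit one abstraction layer above even that: a single standard-library call like `std::fs::write` issues the appropriate system calls for the current platform (e.g. open, write, and close on Unix), which in turn hide the driver- and hardware-level details:

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    // One line of "deep" API: open + write + close, on any platform.
    // (The file name here is our own throwaway example.)
    let path = std::env::temp_dir().join("deep_module_demo.txt");
    fs::write(&path, b"hello")?;

    // ...abstracting filesystem drivers, disk scheduling, caching, etc.
    let contents = fs::read(&path)?;
    assert_eq!(contents, b"hello");

    fs::remove_file(&path)?;
    Ok(())
}
```

Two stacked deep interfaces (the standard library over the OS, the OS over the hardware), and the caller sees none of the underlying churn.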
One could argue that the core project of this book is, like many other dynamic collections, also a deep module. We provide an API-compatible alternative to a standard library collection, but abstract away the specifics of the underlying data structure and its memory management strategy.
So, when leveraging any code organization facility, be it Rust's module system or some other language's equivalent, our goal is to create "deep" modules and collect them into components with "loose" coupling: well-isolated pieces that offer rich functionality via a small API surface.
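Here's a minimal sketch of what that looks like with Rust's module system (the `cache` module and its eviction policy are hypothetical, chosen only for illustration). Private fields and a private policy sit behind a three-function public API:

```rust
// A "deep" Rust module sketch: a tiny public surface (new, put, get)
// hides storage and eviction details. All names are illustrative.
mod cache {
    use std::collections::VecDeque;

    pub struct Cache {
        // Private internals: callers can't couple to them.
        entries: VecDeque<(String, i64)>,
        capacity: usize,
    }

    impl Cache {
        pub fn new(capacity: usize) -> Self {
            Cache { entries: VecDeque::new(), capacity }
        }

        pub fn put(&mut self, key: &str, value: i64) {
            // FIFO eviction today; swapping in LRU tomorrow is not a
            // breaking change, because the policy isn't exposed.
            if self.entries.len() == self.capacity {
                self.entries.pop_front();
            }
            self.entries.push_back((key.to_string(), value));
        }

        pub fn get(&self, key: &str) -> Option<i64> {
            self.entries.iter().rev()
                .find(|(k, _)| k == key)
                .map(|(_, v)| *v)
        }
    }
}

fn main() {
    let mut c = cache::Cache::new(2);
    c.put("a", 1);
    c.put("b", 2);
    c.put("c", 3); // evicts "a"
    assert_eq!(c.get("a"), None);
    assert_eq!(c.get("c"), Some(3));
}
```

Because `entries` and `capacity` aren't `pub`, the compiler itself enforces the abstraction boundary: external code can only speak through the small API.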
In general, this approach results in a codebase that's easier to work with for new team members and easier to improve for everyone. That means less time firefighting and more time shipping new features. And, by keeping complexity in check, we also reduce security and reliability risks.
Depth naturally tends to minimize coupling and maximize cohesion, defined as4:
Coupling: a measure of interdependency between APIs.
- E.g. mutual reliance on the same custom data types in public signatures, or on private, global shared state.
Cohesion: a measure of the commonality between individual elements of an API.
- E.g. Do functions exposed by a module have a clear logical relationship to each other? If so, the module has high cohesion.
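A small illustration of both properties, using made-up names. The module below is cohesive (every public function concerns temperature conversion) and loosely coupled (its signatures use only primitive types, so callers never depend on a custom type it defines):

```rust
// A cohesive module: all public functions relate to one concern.
// Module and function names are ours, invented for illustration.
mod temperature {
    pub fn celsius_to_fahrenheit(c: f64) -> f64 {
        c * 9.0 / 5.0 + 32.0
    }

    pub fn fahrenheit_to_celsius(f: f64) -> f64 {
        (f - 32.0) * 5.0 / 9.0
    }
}

// Low coupling: the signatures above take and return plain f64, so
// callers share no custom data types with this module. Contrast a
// design that forced every caller to import a shared `TempReading`
// struct - that mutual reliance on a custom type in public
// signatures is exactly the coupling defined above.

fn main() {
    assert_eq!(temperature::celsius_to_fahrenheit(100.0), 212.0);
    assert_eq!(temperature::fahrenheit_to_celsius(32.0), 0.0);
}
```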
While low coupling, high cohesion components are generally desirable, they may not always be practical. For example, a centralized piece of functionality can be more easily replaced with a faster algorithm or a more secure implementation. But centralization sometimes increases coupling.
Similarly, an API that's overly specific is hard to future-proof: new requirements can mean a breaking change. But if an API is too general, it likely requires cumbersome wrappers to meet current, specific needs without reducing cohesion.
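One way this tension shows up in Rust signatures (a sketch; both functions are invented for illustration):

```rust
// Overly specific: tied to Vec<String>. Callers holding slices,
// iterators, or &str data need conversions, and supporting them
// later means a breaking signature change.
fn longest_owned(words: Vec<String>) -> Option<String> {
    words.into_iter().max_by_key(|w| w.len())
}

// More general: any iterator of string-like items works, so new
// callers are served without wrappers or breaking changes.
fn longest<I, S>(words: I) -> Option<S>
where
    I: IntoIterator<Item = S>,
    S: AsRef<str>,
{
    words.into_iter().max_by_key(|w| w.as_ref().len())
}

fn main() {
    assert_eq!(longest(["a", "abc", "ab"]), Some("abc"));
    assert_eq!(
        longest_owned(vec!["x".to_string(), "xyz".to_string()]),
        Some("xyz".to_string())
    );
}
```

The generic version is harder to read and to bound correctly, which is the flip side of generality the paragraph above warns about.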
Components whose modules have fewer and simpler public APIs often entail less stability burden and lower chance of misuse. Such components help us more effectively compose large, ambitious, multi-component systems.
Visually, that entails moving away from fragile systems where components expose and rely on each other's internals:
And toward agile systems that abstract away internal complexity (while delivering the same functionality):
Here, "agile" means a codebase that's easy to onboard for, extend, and refactor. Not Agile5, the umbrella term for a set of software development frameworks.
Note how, in both designs, the number of modules within each component (six) didn't change. We're not removing functionality, just external-facing complexity. The end user's cognitive load is reduced. Total "work done" isn't.
Complexity is the enemy of both productivity and security. But the first iteration of a feature to hit production likely won't be elegantly crafted. Aiming for perfection is unrealistic in most commercial contexts.
Instead, we can aim to make our first version well-designed. That may mean using our organization's or team's current quality bar as a watermark. And striving to push it a bit higher while still delivering on time.
Now, the first architecture is sometimes the one a system gets stuck with for its entire lifecycle. So budgeting design time up front can pay significant dividends. For production infrastructure, the result could be fewer 3:00am phone calls for outages and breaches. And even the average case, planned maintenance, is cheaper for well-designed systems.
The best designs for high-value systems are almost always a result of iteration. When we have the opportunity to significantly refactor an existing system, or create a successor from scratch, we can apply lessons learned. So even if you can't justify a sweeping change today, it's worth noting current limitations for tomorrow.
Low-complexity systems tend to be more reliable, maintainable, and secure. Keeping complexity in check typically means designing for low coupling and high cohesion. Deep modules lend themselves well to both goals.
1. What is a REST API? RedHat (2020).
2. Core concepts, architecture and lifecycle. Google (Accessed 2022).
3. [PERSONAL FAVORITE] A Philosophy of Software Design. John Ousterhout (2021).
4. [PERSONAL FAVORITE] Effective C: An Introduction to Professional C Programming. Robert Seacord (2020).
5. What is Agile? Agile Alliance (Accessed 2022).