
The Grain of a System: Why Some Platforms Absorb Change and Others Fight It

(updated Apr 21, 2026)

If you’ve ever worked with wood, you know the grain matters. Cut with it and the blade glides through. Cut against it and the wood splinters, the edge rips, and what should have been a clean joint becomes a mess. You can force your way through — you’ve got the tools, the power, the determination — but the result is never as good, takes twice as long, and everybody can see the difference.

Software systems have grain too. And just like woodworking, the most important skill isn’t raw ability — it’s learning to read the grain and work with it.

This isn’t a metaphor I hear used often in technology, but it captures something I’ve experienced across every platform I’ve built, assessed, or tried to fix over the past 25 years. Some systems welcome new features like old friends. Others resist every change like a body rejecting a transplant. The difference isn’t the quality of the engineers or the sophistication of the tooling. It’s the grain.

The Quality Without a Name

Christopher Alexander, the architect and design theorist, spent decades trying to articulate why some buildings feel right — alive, whole, comfortable — while others, technically competent, feel dead. He called this elusive property “the quality without a name.” It wasn’t beauty in the conventional sense. It wasn’t compliance with any particular style. It was something deeper: a coherence between the structure and the life it was meant to support.

Alexander’s work profoundly influenced software design — the Gang of Four’s Design Patterns book explicitly credits him, and Kieran Potts has written a thoughtful essay tracing that lineage — but somewhere along the way we lost the philosophical core and kept only the catalogue. We adopted his patterns but forgot his central insight: that the patterns exist to serve a deeper quality, not the other way around.

In software, the quality without a name manifests as that feeling when a system just works. Not “works” as in “passes the tests,” but works in the sense that its concepts map cleanly to the problem domain, its structure supports the operations people actually need to perform, and extending it feels like a natural continuation rather than a forced graft.

My Personal Experience

TL;DR: The grain of a system — its fundamental architectural character — is shaped primarily by the data model and the early design choices that persist throughout its life. You can read the grain from a 30-minute demo faster than from a week in the codebase.

I’ve worked on systems with beautiful grain that absorbed radical new capabilities in months, and systems with no grain at all where every feature was a battle. The difference isn’t about technology choices or team size — it’s about whether the system’s fundamental concepts are coherent and well-chosen.

Where the Grain Lives

So where does the grain actually come from? In my experience, it lives primarily in two places: the data model and the conceptual model that the system presents to its users.

These two things are more connected than most people realise. In well-designed systems, the UI maps closely to the underlying domain model. The concepts users see — the nouns they interact with, the relationships between them, the operations they can perform — reflect the actual structure of the data beneath. When someone navigates the system, they’re essentially navigating the data model. And when the data model is coherent, the navigation feels natural. When it isn’t, users end up confused, developers end up frustrated, and product teams end up building elaborate workarounds.

The data model is where the grain is set. Get it right early and you’ve created a system that can absorb decades of change. Get it wrong and you’ve created a system that will fight you on every feature, every integration, every pivot.

Early Choices Persist

One of the things that consistently surprises me — particularly in my PE consulting work where I assess platforms across many different companies — is how resilient the grain is. Architectural choices made at the very outset of a project tend to survive everything: team turnover, technology migrations, rewrites, acquisitions. The data model from year one is often still visible, structurally, in year ten.

This can be wonderful or terrible depending on whether those early choices were good. But the persistence itself is worth understanding: you don’t get to choose your grain twice. The founding team’s understanding of the problem domain — the entities they chose, the relationships they modelled, the boundaries they drew — becomes the DNA of the system.

This is why the first architect matters so much. Not because they write the most code, but because they set the grain that everyone else will work with (or against) for years to come.

When the Grain Is Right

When a system has good grain, new features don’t just fit — they feel inevitable. As if the system was always meant to do this; it just hadn’t been asked yet.

10x Banking: Current Accounts to Full Product Suite in 18 Months

The most dramatic example I’ve experienced was at 10x Banking, where we built a core banking platform for Tier 1 banks. The architecture was heavily event-sourced, rigorously decoupled, with each component maintaining its own data store. The grain of the system — its fundamental model of financial events, accounts, and products — was designed with care and defended with discipline.

Within 18 months, we went from a platform that supported basic current accounts to one that could manage all asset and liability products: savings, secured lending, unsecured lending, credit cards, the lot. That expansion didn’t require heroic engineering or constant refactoring. The new product types slid into the existing model because the grain supported them. The abstractions were right. An “account” meant what it needed to mean. A “transaction” behaved consistently regardless of product type. The event model captured the essential operations without being coupled to any specific product’s quirks.

That’s what good grain gives you: leverage. The initial investment in getting the model right paid for itself many times over as the platform absorbed capability after capability. And crucially, iteration was cheap — each product type could be refined and extended without the ship-it-and-move-on pattern that plagues systems where revisiting anything is painful.
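A toy sketch can make the idea concrete. This is not 10x's actual design — the names, event types, and amounts below are invented for illustration — but it shows what a product-agnostic grain looks like: a "posting" is the same event regardless of product, and an account's balance is just a fold over its event stream, so a new product type is mostly configuration rather than new machinery.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified sketch of a product-agnostic event model.
# A posting is the same event regardless of product type; balance is
# derived state, computed by folding over the event stream.

@dataclass(frozen=True)
class Posting:
    account_id: str
    amount: int  # minor units; negative = debit, positive = credit

@dataclass
class Account:
    account_id: str
    product_type: str  # "current", "savings", "credit_card", ...
    events: list = field(default_factory=list)

    def apply(self, event: Posting) -> None:
        self.events.append(event)

    @property
    def balance(self) -> int:
        # Derived state: a fold over the events, identical for every product.
        return sum(e.amount for e in self.events)

# A savings account and a credit card share the same machinery; only the
# product_type (and, in a real system, product configuration) differs.
savings = Account("A1", "savings")
savings.apply(Posting("A1", 10_000))
savings.apply(Posting("A1", -2_500))

card = Account("A2", "credit_card")
card.apply(Posting("A2", -5_000))  # a purchase is just a debit posting

print(savings.balance)  # 7500
print(card.balance)     # -5000
```

The point is the shape, not the code: when "transaction" means one consistent thing, adding credit cards doesn't require touching what "account" means.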

RoadmapOne: Features That Slide In Without Touching the Sides

I see the same pattern in RoadmapOne. When I needed to add team location and cost information — understanding where squads are based and what they cost — the feature just slid in. The existing data model of workspaces, roadmaps, squads, and sprints had enough coherence that location and cost were natural extensions, not awkward additions.

That feeling — when a feature arrives and the system says “of course, where else would this go?” — is the quality without a name in action. Alexander would recognise it immediately. It’s the software equivalent of a room that just works: the light falls right, the proportions feel natural, and you can’t quite explain why, but you don’t want to leave.

When the Grain Is Wrong

Working against the grain is miserable. And I don’t mean that as hyperbole — I mean it quite literally destroys morale, productivity, and eventually the product itself. When every feature is a battle, teams settle for the minimum that technically works — and before long you’ve got a culture of adequacy where nobody even remembers what “beautiful” looks like.

The Deal Tree That Wasn’t

My early days at Trayport, a gas and power trading platform, were marked by a system with no discernible grain at all. It was a hodgepodge of different approaches to different problems, each locally sensible but collectively incoherent. We had a data structure called the “deal tree” that was, I kid you not, neither a tree nor a container of deals. The name had survived from some earlier incarnation, but the thing it described had mutated beyond recognition while the label stayed the same.

When your core concepts don’t mean what they say — when the vocabulary of the system lies to you — you’ve got no grain to work with. Every new feature becomes a negotiation with ghosts: design decisions made by people who’ve long since left, for reasons nobody remembers, creating constraints nobody can explain.

The Eight-Figure Cache Invalidation Disaster

I once interviewed a potential architect who walked me through the system they’d been building. It depended massively on caching for performance, and as they talked through the design, every part of me was screaming: this isn’t going to work. There had been no attention paid to cache invalidation — the architecture fundamentally assumed that cached data would remain correct, which anyone who’s worked with distributed systems knows is a fantasy.

The poor chap was completely oblivious to the issues. Eighteen months later, I interviewed someone else from the same company. The project had been killed with an eight-figure loss.

The grain of that system was wrong from the start. Not because caching is bad — it’s an essential technique — but because the fundamental model didn’t account for how data actually changed. The grain was set against reality, and no amount of engineering talent could fix that.
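The failure mode is easy to reproduce in miniature. The sketch below is entirely hypothetical — nothing here resembles the actual system from the anecdote — but it shows why "cached data stays correct" is a fantasy: a read-through cache happily serves stale values after a write unless someone invalidates.

```python
# Hypothetical minimal example of the stale-cache failure mode.
db = {"price": 100}  # the source of truth
cache = {}           # read-through cache

def get_price():
    if "price" not in cache:           # cache miss: read through to the DB
        cache["price"] = db["price"]
    return cache["price"]

def set_price_naive(value):
    db["price"] = value                # the broken model assumed this was enough

def set_price_correct(value):
    db["price"] = value
    cache.pop("price", None)           # invalidate on write

print(get_price())        # 100 — cache populated
set_price_naive(120)
print(get_price())        # still 100: stale, and silently wrong
set_price_correct(120)
print(get_price())        # 120 — invalidation forces a fresh read
```

At toy scale the fix is one line; in a distributed system with many writers and many caches, invalidation is a core architectural concern, which is exactly why it can't be bolted on later.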

Trainline: Coupling as Anti-Grain

During my early days at Trainline, the system was characterised by extreme coupling. There was no locality of data — services reached into each other’s stores, shared assumptions leaked across boundaries, and a change in one area could cascade unpredictably across the platform. This wasn’t because the engineers were bad. It was because there had been no architectural oversight to maintain the grain.

Without someone “herding the cats” — ensuring that developers understood the key patterns and maintained consistency — individual engineers made changes that were perfectly sensible in isolation but collectively destroyed the system’s coherence. Each change cut across the grain, splintering it further, until there was no grain left to work with. Fixing this required not just an architectural overhaul but a complete transformation of how the organisation worked — from specs-over-the-wall to empowered teams who understood and respected the system they were building.

Broken Windows and Resilience

There’s a direct parallel to the broken windows theory in criminology: once the grain starts to degrade, degradation accelerates. When everything is hard and everything is ugly, adding new features that are ugly becomes acceptable. Why spend three days finding the elegant solution when the system is already a mess? Just hack it in and move on. The standards slip, the shortcuts accumulate, the team slides into feature factory mode — shipping output without impact — and the grain disappears entirely. Meanwhile, deferred maintenance that might have preserved the grain gets deprioritised in favour of yet more features, and the spiral accelerates.

But here’s the paradox: the grain is also remarkably resilient. In my PE consulting work, I regularly see products where the original architectural choices — the founding data model, the core abstractions — have survived years of neglect, team turnover, and feature accretion. The grain is battered and obscured, but it’s still there, still shaping what the system can and can’t do easily.

This persistence cuts both ways. A good early grain protects the system even when subsequent development is mediocre. A bad early grain constrains the system even when brilliant engineers are fighting to improve it. The grain doesn’t care about your sprint velocity or your hiring plans — it just is.

“10% Differently and It Will Be Beautiful”

This is perhaps the most important practical insight about grain, and it sits squarely at the intersection of product and technology.

When a product team brings a feature request to engineering, there’s often a version of that feature that works beautifully with the system’s grain and a version that fights it. The difference between these two versions can be surprisingly small — sometimes just a 10% adjustment to the specification — but the impact on implementation is enormous.

“Yes, we can do it that way and it will be a nightmare, or we do it 10% differently and it will be beautiful.”

This is where the relationship between product managers and architects matters enormously. A product team that understands the grain — that can see why one formulation of a requirement is dramatically easier than another — will consistently ship faster, with fewer bugs, and with less technical debt than a team that treats the system as a black box.

This doesn’t mean product should be constrained by engineering. It means that the framing of a capability — the specific model through which it’s expressed — should be informed by the system’s grain. This is outcome-based thinking at its most practical: committed to the outcome, flexible on the implementation. The outcome for the user can be identical. The underlying model that supports it can be radically different. And that difference determines whether the feature takes two weeks or six months.

Raj Nandan recently wrote about taste in the age of AI, describing it as “distinction under uncertainty” — the ability to recognise what feels generic or wrong and articulate why. This maps directly to reading the grain. An experienced architect doesn’t just know that a design is wrong — they can feel the resistance, the way the proposed feature cuts across the grain, and they can often see the 10% adjustment that would make it right.

This kind of taste can’t be reduced to checklists or static analysis scores. It’s pattern recognition built from years of working with systems that have good grain and systems that don’t. It’s the quality without a name applied not just to the system, but to the act of extending it.

Reading the Grain: A Practical Assessment

If you’re a CTO walking into a new role, a PE operating partner assessing an acquisition, or an architect evaluating a platform, how do you actually read the grain?

Start with the Demo, Not the Code

This might sound counterintuitive, but a 30-minute demo of the user interface tells me more about a system’s grain than a week spelunking through the codebase. Here’s what I’m looking for:

The key concepts. What nouns does the system surface? Are they coherent and well-defined, or are there overlapping concepts that seem to mean similar things? If the system has “projects” and “programmes” and “initiatives” and “workstreams” that all seem to be roughly the same thing, the grain is confused.

Navigation patterns. Can you reach the same information from multiple paths, or is there only one way in? Multiple natural paths suggest the data model has well-defined relationships. A single rigid path suggests the model is linear and brittle.

Consistency of interaction. Do similar things behave similarly? When you edit one type of entity, does it work the same way as editing another? Inconsistency in the UI almost always reflects inconsistency in the underlying model.

The vocabulary. Does the language of the system match the language of the domain? Or has the system developed its own jargon that users have had to learn? A system whose vocabulary matches its users’ mental model has good grain.

Then Look at the Data Model

Once you’ve formed an impression from the UI, validate it against the data model. Look for:

  • Coherent entities with clear boundaries and well-defined relationships
  • Consistent patterns — do similar concepts use similar structures?
  • Appropriate normalisation — not over-normalised (which makes the grain too fine to work with) and not under-normalised (which tangles concepts together)
  • Evidence of evolution — has the model been extended thoughtfully, or is it littered with bolt-on tables and polymorphic columns that don’t quite fit?

Red Flags

  • A table called “misc_data” or “extended_attributes” — someone ran out of grain and started dumping
  • Multiple representations of the same concept across different services
  • Naming that doesn’t match the domain (remember Trayport’s “deal tree”)
  • Features that require touching five or more services to implement — a sign that the grain has been cut across too many times
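Some of these red flags can even be checked mechanically. A hypothetical sketch (the table names, the regex, and the notion of a "concept owners" map are all invented for illustration — real schema analysis would pull these from the database catalogue and service inventory):

```python
import re

# Heuristic scan for two of the red flags above: dumping-ground tables,
# and the same domain concept owned by multiple services.
DUMPING_GROUNDS = re.compile(r"(misc|extra|extended|custom)_(data|attributes|fields)")

def red_flags(tables, concept_owners):
    """tables: list of table names.
    concept_owners: dict mapping a domain concept to the set of services
    that each keep their own representation of it."""
    flags = []
    for t in tables:
        if DUMPING_GROUNDS.search(t):
            flags.append(f"dumping ground: {t}")
    for concept, services in concept_owners.items():
        if len(services) > 1:
            flags.append(f"duplicated concept: {concept} in {sorted(services)}")
    return flags

flags = red_flags(
    tables=["accounts", "postings", "misc_data"],
    concept_owners={"customer": {"billing", "crm", "auth"}, "invoice": {"billing"}},
)
print(flags)
# ['dumping ground: misc_data',
#  "duplicated concept: customer in ['auth', 'billing', 'crm']"]
```

A script like this is no substitute for reading the grain — it only catches the flags that leave lexical fingerprints — but it makes a useful first pass in due diligence.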

Grain in M&A and Platform Strategy

For anyone involved in mergers, acquisitions, or PE platform building, grain is perhaps the single most important technical concept to understand.

Bolt-On Acquisition Compatibility

In private equity, a common strategy is to acquire a platform company and then execute a series of bolt-on acquisitions, integrating smaller companies’ capabilities into the platform. The success of this strategy depends almost entirely on whether the bolt-ons’ concepts are compatible with the platform’s grain.

If the key concepts are really different — if the platform models the world one way and the bolt-on models it another — then integration costs are going to be astronomical. I’ve seen deals where the integration was budgeted at six months and took three years, entirely because the grain of the two systems was incompatible. Not because the technology was different (technology can be bridged) but because the conceptual models were fundamentally at odds.

Before acquiring a bolt-on, ask: does this system’s grain align with our platform’s grain? Do the core concepts — the entities, relationships, and operations — map naturally, or would integration require us to reshape one system to match the other’s model?

Value Creation Plan Alignment

Similarly, when assessing whether a platform can support a PE value creation plan, the question isn’t just “can this system scale?” It’s “does the VCP require capabilities that are orthogonal to the existing platform’s grain?”

If the value creation plan calls for adding capabilities that work with the grain — that extend naturally from the existing data model and conceptual framework — then the plan is realistic. If the VCP requires capabilities that cut across the grain — that demand fundamentally new concepts or relationships the system was never designed to support — then either the plan or the platform needs to change. Ignoring this mismatch is how eight-figure write-offs happen.

Integration Costs as a Function of Grain Compatibility

Here’s a rough heuristic I use:

  • Aligned — same core concepts, similar data model. Integration effort: months. Typical outcome: smooth integration, shared features.
  • Adjacent — overlapping concepts, different emphasis. Integration effort: 6–12 months. Typical outcome: workable with adapter layers.
  • Orthogonal — fundamentally different conceptual models. Integration effort: 18+ months. Typical outcome: costly, often incomplete.
  • Contradictory — conflicting assumptions about the domain. Integration effort: years, if ever. Typical outcome: rewrite or abandon one system.

The difference between a successful bolt-on strategy and an expensive disaster is often determined before a single line of integration code is written. It’s determined by the grain.

What AI Means for Grain

There’s a question I keep turning over: what happens to the grain of a system in an age of AI-assisted development?

On one hand, AI could make grain matter more. Large language models work better with coherent, well-structured codebases — they can understand the patterns, follow the conventions, and generate code that fits. A system with good grain gives AI a template to follow. A system with no grain gives AI nothing to latch onto, and the generated code will be just as incoherent as everything else.

On the other hand, there’s a risk that AI makes grain matter less — or rather, that it accelerates the broken windows problem. When generating code is cheap and fast, the temptation to just add features without considering the grain becomes overwhelming. Raj Nandan’s “crowded 7 out of 10” problem applies here: AI can produce competent, functional code that works in isolation but doesn’t respect the grain of the system it’s being added to.

The answer, I think, is that AI makes taste more important, not less. When anyone can generate code, the scarce skill becomes recognising whether that code works with the grain. The architect’s role shifts from writing code to curating it — from generation to judgement, from creation to curation.

This maps directly to the quality without a name. Alexander wasn’t concerned with whether individual buildings were technically competent. He was concerned with whether they contributed to the wholeness of their environment. In the age of AI, the same question applies to code: does this addition make the system more whole, or does it just make it bigger?

Conclusion

Every system has a grain. It’s set early, it persists stubbornly, and it determines — more than any other single factor — how easily the system absorbs change. Learning to read the grain is perhaps the most valuable skill a technical leader can develop: more useful than knowing any particular technology, more predictive than any metric, more revealing than any architecture diagram.

The quality without a name isn’t mystical. It’s the feeling you get when the concepts are right, the model is coherent, and new capabilities extend naturally from what’s already there. It’s the 10x platform going from current accounts to full product suite in 18 months. It’s the feature that slides in without touching the sides. It’s the 10% adjustment to a spec that turns a six-month nightmare into a two-week delight.

And it’s the absence you feel in the deal tree that isn’t a tree, the cache layer that ignores invalidation, the coupled system where every change cascades unpredictably across boundaries.

Whether you’re building a new system, assessing an acquisition, or trying to understand why your team can’t ship, start by reading the grain. Everything else follows from there.