Event Driven Design
“If I had six hours to chop down a tree, I’d spend four hours sharpening the axe.” (attributed to Abraham Lincoln)
Avoid This: Point-to-Point Integrations
Do This: Build an Event Handling Service that ties everything together.
This post is not a technical document or a how-to guide. This is not a shortcut to a better implementation or a blueprint for one. This is an effort to document a key learning from dozens of projects over my career. It is almost an act of venting, but with the hope that giving it some structure and context will make it reusable for others. This is a humanist storyteller’s take on a technological topic.
Just applying technology to a problem is not a Silver Bullet towards a solution. Complexity is bound to arise. Essential Complexity is inherent in the problem itself; without it, the complete picture cannot be seen. Accidental Complexity arises when we introduce concepts that do not serve the overall design. Therefore, it is imperative to justify any complexity by demonstrating where it fits into the big picture and how it helps accomplish the larger vision.
Across many of the consulting engagements I have worked on, I have seen more instances of Accidental Complexity than of the Essential kind. Often we are dealing with Business Requirements being taken as gospel and not challenged. The other driver is a flaw in the fundamental system architecture itself: specifically, the way integration is handled between disparate applications, systems, and layers during any given process flow.
Invoking a “Challenger Mindset” during the design process can help quash Accidental Complexity driven by Business Requirements. This means asking insightful questions that require the business to rationalize its needs rather than simply list them, or applying techniques such as the “How Might We…” questioning method when solving use case problems. I learned many of these techniques from books like “A More Beautiful Question” by Warren Berger, a thought-provoking read filled with ideation-accelerating strategies like the one above.
To tackle complex integrations and system architecture, I lean on Event Driven Architecture as my preferred design pattern when consulting with customers. In the past it has been referred to as the “Publish and Subscribe” framework, where some systems publish “events” and other systems subscribe to those events. This approach is often a foundation for environments comprised of microservices. It can also take much longer to design and implement than direct integrations. But as my lead quote suggests, it is a better application of your time than simply coding away.
Publish and Subscribe
Understanding the Publish and Subscribe design principle of Event Driven Design is step one. Some components of your architecture publish events, some subscribe to events, and some do both. Though this approach is not new, there is now a cottage industry of services and startups building solutions to enhance such architectures. Companies and projects like Honeycomb, Apache Kafka, and Axon are making it easier than ever to design, build, and maintain an event driven ecosystem.
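The principle above can be sketched in a few lines. This is a minimal, illustrative in-memory event bus, not any particular product's API; the topic name and payload are made up for the example. Publishers never reference subscribers directly, which is the decoupling the pattern buys you.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-memory publish/subscribe bus (illustrative sketch only)."""

    def __init__(self) -> None:
        # Maps a topic name to the list of handlers subscribed to it.
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # The publisher never knows who is listening; the bus fans the event out.
        for handler in self._subscribers[topic]:
            handler(event)

# Hypothetical usage: a checkout component publishes, a fulfillment component listens.
bus = EventBus()
received = []
bus.subscribe("order.created", lambda e: received.append(e))
bus.publish("order.created", {"order_id": 42, "total": 19.99})
```

In a production system the bus would be a durable broker such as Kafka, but the shape of the relationship between publisher and subscriber stays the same.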
Another consideration in this approach is the “Domain Expertise” of each Publisher, Subscriber, and adjacent system in your ecosystem. More simply, let every component do what it is best at and nothing more. Using this approach, the relationship between the entities in a system is value driven or causal rather than a hard dependency on one another, which reduces the number of failure points.
This idea is, in a way, representative of Design Minimalism: focusing on the core function of a thing and designing little else on top of or around it. It is a methodology that highlights the core essence of a system or service in order to deliver business value. The effort is a delicate balancing act, coordinating requirements and best practices.
Often, when we deviate from this rule, we bastardize systems and applications. They morph from something simple and elegant into a Hydra of a system that can never be improved, rationalized, or scaled. Such a major component does more than it should, and the business becomes dependent upon it, sometimes mistaking the functionality forced upon it as integral to the application itself.
When we need to replace an appliance in our kitchen (our cooking ecosystem), we don’t rebuild the whole kitchen; we usually replace just that appliance. Most appliances operate independently. When we witness true mastery of a kitchen, the cook is like a conductor bringing multiple instruments into harmony. The chef is the event handler, directing the traffic of meal prep from one appliance to another. Moreover, not every appliance is used in every meal, and any one can be replaced if a newer version does the job better.
The metaphor is crude but the thinking is accurate. We need to design system architectures whose components can be pulled out and replaced individually, rather than re-architecting large swaths of the landscape and re-integrating tightly coupled components. A previous boss taught me to think this way, and I immediately saw the benefits the first time we needed to replace a major system.
Careful upfront design of the data model is critical. Remember, none of the Publishers in this model will be writing for, or directly to, any specific system. Therefore, you have to think broadly about the uniformity required in the model so that Subscribers can consume events with little to no transformation.
I recommend that you think about your event stream for one process first. If you wanted to log all of the stages and sequence of events to a data warehouse, you would need to make sure that each Publisher writes in a way that is consistent and simple. Essentially, all of the actors in an ecosystem should speak the same data language.
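One way to enforce that shared data language is a common event envelope that every publisher emits, regardless of domain. The sketch below is an assumption about what such an envelope might contain (the field names `source`, `event_type`, `payload`, and so on are illustrative, loosely inspired by common event-schema conventions); the point is that subscribers and the warehouse can parse any event the same way, and only the `payload` varies by domain.

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
from uuid import uuid4

@dataclass
class Event:
    """A common envelope every publisher emits, so subscribers and the
    data warehouse can consume any event without per-source transformation.
    Field names here are illustrative, not a standard."""
    source: str      # which component published the event, e.g. "checkout"
    event_type: str  # a stable, namespaced name, e.g. "order.created"
    payload: dict    # the domain-specific body; the only part that varies
    event_id: str = field(default_factory=lambda: str(uuid4()))
    occurred_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        return json.dumps(asdict(self))

# Hypothetical usage: any subscriber can read the envelope fields uniformly.
evt = Event(source="checkout", event_type="order.created",
            payload={"order_id": 42})
record = json.loads(evt.to_json())
```

Keeping the envelope stable while letting the payload vary is what lets you bolt on a new subscriber, such as warehouse logging, without touching any publisher.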
Buy or Build
So, we now have a sophisticated system architecture whose components will Publish and/or Subscribe to events. The actors will be focused on their individual domain expertise. Lastly, they will talk the same talk when it comes to data. The question that remains is, are you going to Buy or Build all of these components?
It is probable that you will be doing some combination of both. A benefit of Event Driven design is that you can look at components independently without worrying about rebuilding direct integrations. When broken down this way, we have more flexibility to determine what is best for each part of the larger system: buy or build.
Over time, you may replace a home grown service with a SaaS one, or replace a SaaS solution with something you built yourself. The replacement in either scenario does not require the reworking of the entire architecture. We are free to swap out these appliances, whether we build or buy them.
To reiterate, the goal of this approach is to allow each system, and subsequently each development team, to operate independently and reduce failure points. When starting from scratch, it is the best architecture for scale and future proofing, but it can take the most upfront design effort.
It is quick and easy to simply integrate components directly with each other when trying to get out a Minimum Viable Product or Proof of Concept. Often we will employ synchronous or asynchronous hand-offs of large chunks of data that, if they fail, require retrying from the most upstream component and back down through the whole chain again.
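The contrast with a durable event log is worth making concrete. In the toy sketch below (all names are invented for illustration; a real system would use a broker like Kafka, where consumers track offsets similarly), each subscriber remembers how far it has read, so a failure means resuming from that offset rather than re-running every upstream step in the chain.

```python
class DurableLog:
    """Toy append-only event log. Events stay in the log after delivery,
    so subscribers can re-read from any position."""

    def __init__(self) -> None:
        self.events: list[dict] = []

    def append(self, event: dict) -> None:
        self.events.append(event)

class Subscriber:
    """Tracks its own read offset into the log, so recovery after a
    failure resumes here, not at the top of the processing chain."""

    def __init__(self, log: DurableLog) -> None:
        self.log = log
        self.offset = 0
        self.processed: list[dict] = []

    def poll(self) -> None:
        while self.offset < len(self.log.events):
            event = self.log.events[self.offset]
            try:
                self.processed.append(event)  # real domain work would go here
            except Exception:
                return  # stop; the next poll retries from self.offset
            self.offset += 1  # advance only after successful processing

# Hypothetical usage: three events land in the log, one subscriber drains them.
log = DurableLog()
for i in range(3):
    log.append({"seq": i})
sub = Subscriber(log)
sub.poll()
```

This is the failure-isolation property the post argues for: the retry boundary is a single consumer's offset, not the whole point-to-point chain.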
But if the MVP takes off, we often get a second chance to reimagine the whole product before scaling it up. At that moment, when it comes to the final product or system design, we have to ask ourselves: are we going to sharpen this axe first, or waste all of our time hacking away?