In my last article I introduced the concept of Architecture Shelf Life and put forth the postulate that for most patterns and technologies it is approximately five years. If you buy that, as well as the premise that architectures can only survive two or three shelf-life refreshes before they become dated and impractical, then you must be asking: how can I hedge against that? I want to extend the life of my architecture in some way.
The most critical property of any architecture that is to survive is the ability to be refreshed with new technologies incrementally. The most direct way to achieve this is to follow the standard architectural principles of good component design, with the loosest possible coupling between components. This allows components to be implemented independently of each other and lets each follow its own technology curve for replacement. Unfortunately, architectures rarely have good components that are truly loosely coupled, usually as a result of the organic growth of the system. I have noticed some specific patterns emerge, though.
Interface Implementation Coupling

This one is probably the most common. The components are reasonable and the interfaces have decent semantics, but then the implementation choice couples the components to a specific technology or implementation strategy. In most cases the decision is driven by a desire to optimize interface speed, and in the majority of those cases it was probably an unnecessary optimization that bought a little speed at the expense of long-term architectural flexibility.
Interfaces between components should be text-based (e.g., XML, JSON), as text formats offer the maximum flexibility for migrating components to alternate implementation technologies. Standard text-format parsers and generators have become efficient enough that the overhead they introduce is nominal compared to the processing time of most components. We have reached a point where we can move past the efficiency debate and focus on the architectural benefits we gain from decoupling our interface protocols from any one implementation strategy.
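To make the idea concrete, here is a minimal sketch (component, field, and threshold names are all invented for illustration) of a component boundary defined by JSON text rather than language-native objects:

```python
import json

# Hypothetical boundary: a fraud-check component that accepts and returns
# JSON text, so callers never depend on its implementation language or
# in-memory object model.

def check_order(request_json: str) -> str:
    """Toy fraud check behind a text-based interface (illustrative only)."""
    request = json.loads(request_json)
    # Invented rule: flag orders over an arbitrary threshold.
    flagged = request["amount"] > 10_000
    return json.dumps({"order_id": request["order_id"], "flagged": flagged})

# The caller speaks JSON, not Python objects, so the implementation behind
# this contract could be rewritten in another language or moved behind a
# network protocol without changing a single caller.
response = json.loads(check_order(json.dumps({"order_id": 42, "amount": 25_000})))
```

Because the contract is the text message itself, either side of the boundary can be replaced on its own technology curve.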
Seeds of Destruction
The business needs a simple new feature, say a trivial fraud check, so it is added to an existing component. It works great, but now the business needs a few more capabilities, so the component is enhanced. This cycle continues for 18 months, and suddenly you have what really amounts to two components, but implemented as one, badly coupled and intertwined. Unfortunately, the problem is usually worse than that, because customers have come to expect behavior that will be hard to maintain if you decouple. Whether because of latency, availability, or both, you find yourself with an architectural conundrum that could have been avoided had you maintained a separation of concerns, regardless of the relative size of the two concerns involved.
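One way to avoid planting those seeds is to put even the "trivial" feature behind its own interface from day one. The following sketch (all class and field names are hypothetical) keeps the fraud check separable from order processing, however small it starts:

```python
from abc import ABC, abstractmethod

class FraudCheck(ABC):
    """The trivial feature gets its own interface anyway."""
    @abstractmethod
    def is_suspicious(self, order: dict) -> bool: ...

class ThresholdFraudCheck(FraudCheck):
    """Day-one implementation: one rule, a few lines."""
    def is_suspicious(self, order: dict) -> bool:
        return order["amount"] > 10_000  # invented threshold

class OrderProcessor:
    # Depends only on the FraudCheck abstraction, so the check can grow,
    # be rewritten, or move to its own component without untangling it
    # from order processing 18 months from now.
    def __init__(self, fraud_check: FraudCheck):
        self._fraud_check = fraud_check

    def process(self, order: dict) -> str:
        return "rejected" if self._fraud_check.is_suspicious(order) else "accepted"

processor = OrderProcessor(ThresholdFraudCheck())
```

The cost of the extra interface is a few lines today; the payoff is that the two concerns never become one intertwined component.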
Unintentional Vendor Lock
How can it be unintentional? That is a question I have often asked as well, but the reality is that it can happen very easily. I have nothing against Oracle per se, but it serves to illustrate my point. Oracle provides a few unique features that are in some cases tempting and in other cases unavoidable. Anonymous PL/SQL blocks, for example, are effectively stored procedures that live in your application code and are delivered to the database at run time. This is great from an application maintenance perspective because you get the benefits of a stored procedure without the revision management issues. But this feature is unique to Oracle and, as far as I know, one competitor (Enterprise DB), which means that if you choose to leverage it, you will have to devise workarounds if you ever want to migrate to another database.
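For readers who have not used the feature, here is a hedged sketch of what it looks like from application code. The table, column, and function names are invented, and actually running the block would require an Oracle connection (e.g. via the python-oracledb driver), which is not established here:

```python
# An anonymous PL/SQL block kept in application source and shipped to the
# database at run time: no stored procedure to create, install, or version
# in the database, but the block syntax itself is Oracle-specific.
ANONYMOUS_BLOCK = """
BEGIN
    UPDATE orders
       SET status = 'FLAGGED'
     WHERE order_id = :order_id;
    COMMIT;
END;
"""

def flag_order(connection, order_id):
    """Deliver the block at run time (hypothetical schema, not executed here)."""
    with connection.cursor() as cursor:
        cursor.execute(ANONYMOUS_BLOCK, order_id=order_id)
```

The convenience is real, which is exactly how the lock-in becomes unintentional: every such block is code you will have to rework on any other database.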
But that feature is overt, and you can choose not to use it. A more covert feature is how Oracle manages concurrency. It relies on rollback segments that provide read-committed consistency while sparing readers row-level locks for the duration of the transaction. This provides excellent performance, especially in high-concurrency situations. If your update patterns are not particularly contentious, the difference in performance between ordinary row-level locking and rollback segments may be nominal; but if you do have records that receive high update loads, you will experience a drop in performance on a database without this behavior.
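The mechanism can be illustrated with a toy multi-version store (a loose sketch of the idea, not of Oracle's actual implementation): writers append new versions rather than overwriting in place, so a reader sees a consistent snapshot without ever waiting on a lock.

```python
class MultiVersionRow:
    """Toy multi-version row: readers never block on writers."""

    def __init__(self, value, version=0):
        self._versions = [(version, value)]  # ordered by version number

    def write(self, value, version):
        # Writers append a new version; the versions readers are using
        # remain untouched, so no read lock is needed.
        self._versions.append((version, value))

    def read(self, snapshot_version):
        # A reader sees the newest version committed at or before its
        # snapshot, giving a consistent view with no waiting.
        for version, value in reversed(self._versions):
            if version <= snapshot_version:
                return value
        raise KeyError("no version visible at this snapshot")

row = MultiVersionRow("pending", version=1)
row.write("shipped", version=5)
```

An application tuned against this behavior can come to depend on it without anyone making a deliberate choice, which is the covert lock-in.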
The point, though, is that it is important to understand the behavior of your vendor's products and the implications they will have for your architecture, both short and long term. It is not just the overt features, which you can avoid, but the more subtle implementation details that you cannot avoid and may come to rely upon.
Persistence Lock

This one is a bit harder to avoid, but it is worth the investment to minimize. Until recently, most architectures considered persistence to mean a database. Many architects would apply best practices to avoid vendor lock but make little to no effort to avoid database lock. And why would you worry about database lock? Because for many classes of persistence, a database is not the most cost-effective storage. It may be the easiest to implement initially, but as other forms of persistence mature, you may want to take advantage of them. Yet if you have assumed you have a database with SQL available, you may find this difficult to do.
One of the best ways to minimize persistence lock is to separate the access paths in your resource tier. The primary access path to any entity should be via its primary key. All other access paths should be added with care and carefully delineated within the implementation of the resource tier. This allows the alternate paths to be moved to other forms of persistence in the future.
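A minimal sketch of that separation might look like the following (repository, entity, and field names are all invented for illustration):

```python
class CustomerRepository:
    """Resource tier with the access paths explicitly delineated."""

    def __init__(self):
        self._by_id = {}         # primary path: pure key-value, portable
        self._ids_by_email = {}  # secondary path: isolated and replaceable

    def save(self, customer: dict) -> None:
        self._by_id[customer["id"]] = customer
        self._ids_by_email[customer["email"]] = customer["id"]

    def get(self, customer_id) -> dict:
        """Primary access path: by primary key only."""
        return self._by_id[customer_id]

    def find_by_email(self, email) -> dict:
        """Secondary access path, kept separate so it could later be
        served by a search index or another store without touching
        the primary-key path."""
        return self.get(self._ids_by_email[email])

repo = CustomerRepository()
repo.save({"id": 1, "email": "a@example.com", "name": "Ada"})
```

Because the primary path is a plain key lookup, it maps onto almost any store; only the delineated secondary paths need rework if the persistence technology changes.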
I am sure my readers have other suggestions. The general theme, as you can see, is to minimize coupling. That keeps your architecture flexible and leaves the door open for integrating newer technologies and patterns as they emerge.