[Updated 6/18/2012 — Huge shout-out to Stan Swete of Workday and Raul Duque of Ultimate (both of whose firms have been clients, as was Raul’s former employer, Meta4) for their review and feedback on my much shorter first draft of this post, which addressed only models-driven development. Based on their feedback, I’ve now touched on agile software development and metadata-driven definitional development. And I would also like to thank Steve Miranda of Oracle for writing to me about the progress that Oracle is making along these same lines and Mike Rossi of SFSF/SAP for briefing me on their aggressive pursuit of metadata-driven applications. As I learn more, outside of NDAs, about what Oracle, SFSF/SAP, and others are doing to adopt these approaches, I hope to update this post further.
These three approaches, models-driven development, agile software development, and metadata-driven definitional development, combined and managed properly, are at the heart of achieving the order of magnitude improvement in software economics, including time-to-market and quality-to-market, which is the real focus of this post. If I embarrass myself in front of the real experts in these areas, the fault is entirely my own, and largely due to my own lack of expertise. Hopefully, you’ll educate me and my readers with your comments. But I should also say that I wrote this post to be accessible to as wide an audience as possible, to respect all the NDAs under which I operate, and to be just a short introduction to these important ideas.
Please note that I’ve added several additional resources at the end of this post.]
A Little History
In 1984 I published an article in Computerworld entitled “Secret to cutting backlog? Write less Code!” The entire article may be worth your time, but then perhaps I’m the only one who is amused by the memories of so distant a past in the history of business applications. What’s relevant to the here and now is that, even then, I was painfully aware that we were buried in demands for business software — and in ever-changing requirements to extend and change that software — that we were never going to be able to meet unless we fundamentally changed our whole approach to designing and developing same.
Thus began the intellectual journey that led to this post. AMS, my employer in 1984, had been given a ton of money by the US Army to develop the requirements for a new personnel system. But an important requirement of that contract — important to me personally because, on some level, it was the making of my career — was that we launch two parallel projects to define those requirements. One project used traditional (what we now call Victorian novel) requirements definition, which was the norm in the then standard waterfall systems lifecycle. The other project used one of the original requirements gathering CASE tools, PSL/PSA, along with the best (then) available event-partitioned data and process modeling techniques. I ran both projects at AMS, and I learned, quite painfully, three professional life lessons about the limitations, even wrongness, of the then current approaches to systems design and development.
Lessons Learned At AMS
The first big lesson was the wrongness of depending entirely, for system requirements, on asking customers what they want “the system” to do. What I learned on that project and have practiced ever since is to study the customer’s business, represent the essence of that business in models, and then conceive of how available — and even not quite yet available — technology could be used to reinvent that business. And while the techniques of modeling have evolved considerably, great domain models have always been intended to let us experiment with a business domain, to understand it and to reinvent it, in ways that we could never do with an existing organization.
The second big lesson was to focus on the pattern in the problem rather than to get buried in all of the details, to see the essential nature of HRM rather than just the surface confusion. It’s that study of the pattern in the problem that was the genesis for so much of my thinking about preferred architectural behaviors. For example, determining eligibility for something is a foundational pattern in HRM, e.g. for participation in a specific developmental event, qualifying for the payout in a specific project success compensation plan, enrollment in a specific health care benefits plan, or getting a yearly replacement for your smartphone. Once it becomes obvious that the eligibility pattern will be needed across HRM, and that the eligibility criteria represent a bounded albeit large and complex set of Boolean expressions across domain objects and their attributes, one can conceive of a metadata-driven eligibility “engine” which can be used across all of these examples and many more. My emphasis on these preferred architectural behaviors — and there are many — is central to my quest not only for writing less code but also for elevating the use of metadata to drive HRM software.
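To make the eligibility idea concrete, here’s a minimal sketch in Python of what such an “engine” might look like. This is illustrative only, not any vendor’s actual implementation; every rule, plan, and attribute name below is invented. The point is that the criteria are pure data (metadata), so a single evaluator can answer every eligibility question in the domain:

```python
# A minimal, illustrative metadata-driven eligibility "engine": the
# criteria are data, not code, so one evaluator serves benefits plans,
# device refresh policies, event participation, and so on.

OPS = {
    "==": lambda a, b: a == b,
    ">=": lambda a, b: a >= b,
    "<=": lambda a, b: a <= b,
    "in": lambda a, b: a in b,
}

def is_eligible(person, criteria):
    """criteria: a list of (attribute, operator, value) triples, ANDed together."""
    return all(OPS[op](person[attr], value) for attr, op, value in criteria)

# The same engine, driven by different metadata, answers different questions.
benefits_plan = [("employment_status", "==", "active"),
                 ("hours_per_week", ">=", 30)]
phone_refresh = [("grade", "in", {"E4", "E5", "E6"}),
                 ("months_since_last_phone", ">=", 12)]

employee = {"employment_status": "active", "hours_per_week": 40,
            "grade": "E5", "months_since_last_phone": 14}
```

Calling `is_eligible(employee, benefits_plan)` or `is_eligible(employee, phone_refresh)` evaluates the metadata against the employee’s attributes; adding a new eligibility question means authoring new criteria, not writing new code. A production engine would of course need nested AND/OR expressions, effective dating, and much more.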
The third big lesson was that the then best practice waterfall lifecycle was based on three fallacies and was, therefore, a complete fool’s paradise. In the waterfall lifecycle, you spent a ton of time and resources pinning down requirements, written in a Victorian novel style that business users of the era could understand and validate, and then got them signed off in blood before moving on to design, development and more. The first fallacy was that the users knew what they wanted when they mostly had no clue what was possible. The second fallacy was that those pesky requirements, even assuming they were correct, would stand still long enough for us to build and deliver the software — let alone continue to stand still once that software was delivered. We were awash in requirements documents that demanded traceability throughout the lifecycle, and very little energy or time was left for innovation. But the third fallacy was the real killer. By the time you were awash in requirements, it was practically impossible to discern those patterns in the problem that would have led to elegant designs, to producing less code because we could build and reuse those patterns. The evolution of software lifecycles to what we now refer to as agile is a direct response to these and many more fallacies of that older waterfall approach.
AMS made major breakthroughs in all these areas, and the learnings noted above were definitely not achieved on my own. But I do believe that I was early in applying these learnings to the HRM domain.
Lessons Learned At Bloom & Wallace
When I left AMS, I proceeded to build upon what I had learned to model and remodel the HRM domain, using better modeling techniques as I went along and testing my thinking with a series of large, global clients and their strategic HRM/HRMDS planning projects. And, something I hadn’t been able to do as fully while still at AMS, I saw many more patterns in the domain, both as to the subject matter and the system capabilities needed to deliver that subject matter, which I expressed as part of a growing body of preferred architectural behaviors.
Over the last twenty-five years, my work in modeling the HRM domain and seeing those patterns was the basis for creating and supporting my widely-licensed “starter kit.” I believe this work has had some impact on the underpinnings of the best of HRM enterprise software, and influenced (again, I hope, at least in a small way) some HRM software product architects and business analysts. Earlier this year, I announced that my domain model IP would not be licensed beyond 2012, but that doesn’t mean that our work in this area is done — not by a mile. The good news is that there are now a number of HRM software vendors on this path to getting a lot more bang for their buck — and for their customers.
The promise of CASE tools, which had been the holy grail of software engineering, was that it would be possible to go directly from a completely modeled expression of the desired aspects of the domain to usable, delivered functionality, without being touched by human hands. The hope was that we could build a set of tools, putting all of our computer science and engineering talent to work on those tools, which would be able to gobble up those models, themselves defined to these tools, and presto, chango, out pops the application. No compilation, no hand-tuning, and no messy/expensive/error-prone/slow-to-market applications programming.
This concept, which was called models-driven development, has evolved into the more definitional development approaches used first (to my knowledge in the HRM domain) by Meta4 in the late 90’s and, more recently and with much greater visibility in the US by Workday. We’ve also added to our toolkit the creation of metadata-driven “engines” that can be used and reused across the HRM domain. There are other HRM software vendors working with these techniques, to include reports on same from both Oracle and SFSF/SAP, and there’s a lot more of such work that is still in stealth mode but with very promising early results. I hope to see a lot more of this become visible to the market before the end of 2012.
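In spirit, definitional development means the application’s behavior is interpreted from a definition at runtime rather than hand-coded or even generated. Here’s a deliberately tiny Python sketch of that idea, with all entity and field names invented for illustration; real definitional platforms are vastly richer than this:

```python
# Toy sketch of "definitional development": the application is a definition
# (metadata) that a generic engine interprets at runtime. Changing the
# definition changes the application's behavior -- no code is written or
# generated. The Position entity and its fields are invented examples.

POSITION = {
    "entity": "Position",
    "fields": {
        "title": {"type": str, "required": True},
        "grade": {"type": str, "required": True},
        "fte":   {"type": float, "required": False, "default": 1.0},
    },
}

def create(definition, **values):
    """Generic 'create a record' behavior, driven entirely by the definition."""
    record = {}
    for name, spec in definition["fields"].items():
        if name in values:
            value = values[name]
            if not isinstance(value, spec["type"]):
                raise TypeError(f"{name} must be {spec['type'].__name__}")
            record[name] = value
        elif spec["required"]:
            raise ValueError(f"{name} is required on {definition['entity']}")
        else:
            record[name] = spec.get("default")
    return record
```

So `create(POSITION, title="Recruiter", grade="E3")` yields a valid record with the defaulted `fte`, while omitting a required field raises an error. To add a field or tighten a rule, you edit the definition, and every behavior that interprets it (create, validate, display, and so on in a real system) follows along.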
I believe that this combination of an agile lifecycle with models-driven and definitional development, to include the development of metadata-driven “engines,” is our best hope for being able, finally, to wrestle to the ground the very real challenge of having our business applications evolve as quickly as do our businesses — and without adding to the tremendous technical debt burden which so many HRM software vendors are facing. Writing less code to achieve great business applications was my focus in that 1984 article, and it remains so today. Being able to do this is critical if we’re going to realize the full potential of information technology — and not just in HRM.
There’s so much more that I should write about the strengths and some pitfalls of agile software lifecycles, about how modeling a domain in objects helps us see the patterns in that problem domain with enough clarity to build metadata-driven “engines” from those patterns (e.g. an eligibility, calculation or workflow engine) rather than creating lots of single purpose applications, and how those models can become applications without any code being written or even generated. Hopefully, the real experts in our industry will jump in to correct what I’ve written and to expand upon it. And I’d sure love to hear from HRM software vendors who aren’t my clients but who are practicing and advancing these techniques.
What’s important here is to make as clear as I can the power of any HRM software architecture, of any development approach, whose robust domain object models become the functionality of the applications with a minimum of human intervention, whose business functionality is therefore built and modified only at the models level. Such an approach can be very flexible initially and over time, easy and fast to implement, and inexpensive for both the vendor/provider and customer to acquire and maintain. And this approach provides full employment for anyone who really knows how to elicit well-constructed domain models from the business ramblings of subject matter experts. Most important, such an approach shortens the intellectual distance between our understanding of the problem domain and our automation of that domain.
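The calculation “engine” mentioned above makes a good last illustration of building and modifying functionality only at the models level. In this hedged Python sketch (the plan, formulas, and attribute names are all invented), a compensation plan is just ordered metadata, so changing the plan is a data change, not a code change:

```python
# Toy metadata-driven calculation engine: a plan is an ordered list of
# (component, expression) pairs evaluated against a context of domain
# attributes; later components may reference earlier ones. Plan contents
# and attribute names are invented for illustration, and eval() is used
# only to keep the sketch short -- a real engine would parse a restricted
# expression language, not evaluate arbitrary Python.

def run_plan(plan, context):
    values = dict(context)
    for component, expression in plan:
        values[component] = eval(expression, {"__builtins__": {}}, values)
    return {component: values[component] for component, _ in plan}

bonus_plan = [
    ("target_bonus", "salary * bonus_rate"),
    ("payout", "target_bonus * performance_factor"),
]

result = run_plan(bonus_plan, {"salary": 100000, "bonus_rate": 0.10,
                               "performance_factor": 1.2})
```

Swapping in a different plan definition, say one that adds a proration component, changes what the application computes without touching the engine; that is precisely the flexibility, for vendor and customer alike, that the paragraph above is describing.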
I would be remiss if I didn’t point out that the challenges to accomplishing this are huge (but the moat created by any vendors who succeed is equally huge):
- difficulty of accurately representing the domain in a rigorous modeling methodology, along with the need for extensibility, evolution and modifications over time;
- difficulty of building tools which can translate those models into operating objects;
- difficulty of seeing the patterns in the domain with enough clarity to recognize the needed “engines” — and then in building “engines” which can operate solely on metadata;
- difficulty in abstracting complex HRM business rules to metadata;
- difficulty of achieving operational performance with large volumes (although in-memory data/object management opens up a lot of possibilities here as well as with both embedded and predictive analytics);
- difficulty of adjusting those operational objects as the models evolve without human intervention, i.e. without coding; and
- many more that keep software engineers awake at night.
As big as the challenges are, the benefits, if those challenges can be met, are bigger still. And it’s my opinion that the amount of lift I mentioned above, if it can be achieved and sustained, made to scale both operationally and in a business sense (e.g. finding those few individuals who understand the HRM domain in a profound way and who are able to express that domain in fully articulated models is a huge challenge to scaling these approaches), will change in a fundamental way the economics of the HRM enterprise software business. If I’m right, you’ll want to be on the agile, models-driven, definitional development side of the moat thus created, whether you’re an HR leader, working in the HRM software vendor community, or an investor in that community.
Some suggested readings:
Metadata-Driven Application Design and Development by Kevin S Perera of Temenos, January 2004
Summary: Presents an overview of using a metadata-driven approach to designing applications. If the use of software frameworks can be defined by patterns, then metadata is the language used to describe those patterns.
Workday’s Technology Strategy: A Revolutionary Approach To Redefining Enterprise Software from Workday, 2006
The Work Of Jean Bezivin
The work of Jean Bezivin at the University of Nantes, France, where he is now an emeritus professor. You can reach him at JBezivin@gmail.com or follow him on Twitter @JBezivin, which I do religiously even though I can’t fathom a good bit of his citations, and not just because some of them are in French. But this blog is in English.
The Work Of Curt Monash
Curt, an expert in all things database-related, has written some of the best pieces I’ve seen on the underlying Workday architecture, especially as regards their data architecture. Of particular interest may be his most recent piece on this.
The Work of Johan Den Haan
Johan is the CTO of Mendix, a vendor of an applications delivery PaaS. He writes on a wide range of topics related to the subject matter of my post, and I’ve learned a ton from following his work.