At the start of the millennium there seemed to be the promise of methods that would change the way we built software.
Model Driven Development promised an end to much of traditional software drudgery.
No longer would we have to hand code and manually test our applications. Instead we’d model them, push a button and as if by magic a fully working application would appear.
Well, it hasn’t quite worked out like that, has it?
Writing code is still largely a bespoke manual process. And it is expensive, not just writing the code but testing it, deploying it, documenting it, and so on.
It got me thinking. What was so good about MDD? Has it died a death? Or has it just morphed into something else?
Whatever happened to model driven development?
Deconstructing the promise
Let’s try to answer these questions, a bit like a television chef deconstructs a recipe, by ripping apart the key ingredients of the topic.
Firstly, it’s worth clarifying what we mean by a model.
Here we consider a model simply as an abstraction of something — essentially a simplified blueprint of a more complex entity or process.
From a blueprint of a house we can follow a set of procedures and build a house. And we can repeat this as many times as we want. Further, if we customize our plan or change our procedures we can end up with a radically different type of building, but one still built to the same repeatable and trusted standards.
Suddenly this is starting to sound interesting and the power of having a model at the heart of the development process becomes clearer.
A plurality of representations
Possibly the most important aspect of MDD is that the model contains enough unique information to be able to derive the concrete things you want to implement.
All of the key attributes of the entities should be included in the model. But isn’t this just configuration? Sort of, but not quite. Configuration becomes modelling when it sits at the heart of your design, and especially when you use it to create multiple outputs.
Much of the power of MDD then comes from the fact that a single model may be transformed in many ways. As they say in the world of patents, a model transformed into a plurality of representations.
As a minimum the model can be transformed into the target application. Incidentally application doesn’t necessarily mean a full-stack application. It could of course be just a data processing activity that the model is abstracting.
Further to the main event, significant time and cost benefits can be realized by also transforming the model into other representations such as test suites. Even more value can be extracted from the model by generating textual documentation or graphical views of the system of interest.
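As a concrete sketch of this plurality, here is a toy model transformed into two representations: target code and documentation. The `User` entity, its fields, and the output formats are all invented for illustration.

```python
# A minimal sketch: one model, several generated representations.
# The "User" model and its fields are illustrative, not from any real system.
model = {
    "name": "User",
    "fields": [
        {"name": "id", "type": "int", "doc": "unique identifier"},
        {"name": "email", "type": "str", "doc": "contact address"},
    ],
}

def to_class_source(m):
    """Transform the model into target application code (a Python class)."""
    lines = [f"class {m['name']}:"]
    params = ", ".join(f"{f['name']}: {f['type']}" for f in m["fields"])
    lines.append(f"    def __init__(self, {params}):")
    for f in m["fields"]:
        lines.append(f"        self.{f['name']} = {f['name']}")
    return "\n".join(lines)

def to_markdown_doc(m):
    """Transform the same model into documentation (a markdown table)."""
    rows = [f"| {f['name']} | {f['type']} | {f['doc']} |" for f in m["fields"]]
    return "\n".join([f"## {m['name']}", "| field | type | description |",
                      "| --- | --- | --- |", *rows])

print(to_class_source(model))
print(to_markdown_doc(model))
```

Adding a third transformation, say to a test suite, would reuse exactly the same model with no changes to it.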
Now that we have an understanding of what a model is, the question becomes how one can represent it. As MDD is a set of concepts rather than a standard, the format of a model depends on the nature of the problem and can therefore be whatever best suits the domain. However, there have been attempts to produce common modelling languages that span many domains.
In the early days of MDD, the Unified Modelling Language came to the fore. Personally, I struggled with UML. More specifically, I struggled with the promise of UML. I am sure there are many out there who got it, and some parts of UML have at least persisted, but back in the day I just could not get over some key barriers:
- You needed tooling that was either expensive, or free but unreliable.
- You got some diagrams from it, but not complete implementations.
- Round trip engineering was frankly a pain.
I don’t wish to be too dismissive but fundamentally there was a lot of hype around UML. At the time it seemed to promise a complete solution, but in the end, it just appeared to get in the way. Once people start believing that, an approach is going to struggle.
However, there have been plenty of examples where a model driven approach has proven successful.
Where I’ve personally had more success is through modelling with Domain Specific Languages (DSL), especially in modelling the realms of multi-step web applications, voice apps and also with agents/bots on messaging platforms. In these cases, a model that abstracts away the platform specific details can be a huge productivity boost by providing a simple and trusted language for rapidly building fully functional, production ready services across many target platforms.
A more recent discovery for me is that model driven techniques are a fantastic match for data processing problems. Specifically, in the Big Data world where Extract Transform Load problems are commonplace. The ability to transform and enrich data reliably from one message format to another without having to hand crank complex code over and over again is incredibly powerful.
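A minimal sketch of what this can look like: the mapping table below is the model, and a small generic engine applies it to each record. The field names and transforms are invented for illustration.

```python
# Sketch of a model-driven ETL step: the mapping below is the "model",
# and a generic engine applies it to every record, so new message formats
# mean a new mapping, not new hand-cranked code.
mapping = [
    # (source_field, target_field, transform)
    ("cust_name", "customerName", str.strip),
    ("amt",       "amountPence",  lambda v: int(float(v) * 100)),
]

def transform(record, mapping):
    """Convert and enrich one source record into the target format."""
    return {target: fn(record[source]) for source, target, fn in mapping}

source_record = {"cust_name": "  Ada Lovelace ", "amt": "12.50"}
print(transform(source_record, mapping))
# → {'customerName': 'Ada Lovelace', 'amountPence': 1250}
```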
One of the common threads between DSLs and data models is that processing is completely driven by the modelling. Here round-trip engineering no longer means manually changing your code and then reflecting this back in your model. Instead a change to the model is immediately reflected in the implementation, and the two are by nature in sync — a much more satisfying relationship.
Now that we have a model in a given format, it should be possible to automatically generate code in a target implementation language. The aim is to produce exemplar code that you trust and that is guaranteed to work on your target platform.
The rules for transforming your model into the implementation language are typically encoded as templates or programmatic business rules. The fundamental principle here is that if you produce well tested transformation rules once, they can be used many times to solve related problems.
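For instance, a transformation rule encoded as a template might look like the following sketch. The accessor pattern and the names are purely illustrative, using Python's standard `string.Template`.

```python
from string import Template

# A transformation rule encoded as a template: write and test it once,
# then apply it to every field in the model. The accessor pattern shown
# here is an invented example.
accessor_template = Template(
    "def get_${field}(self):\n"
    "    return self._${field}\n"
)

def generate_accessors(fields):
    """Apply the template once per field named in the model."""
    return "\n".join(accessor_template.substitute(field=f) for f in fields)

print(generate_accessors(["name", "email"]))
```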
Another way I like to think of these transformation rules is as what copy & paste should have been.
On those projects I’ve worked on where code generation has been done well, it has resulted in significant productivity and reliability gains. As such, it’s hard to understand why we don’t do more of this as an industry. Perhaps, though, it is a simple trust issue? As developers we are happy to trust that the code generated by our compiler or interpreter is correct. So why do we seem much less likely to trust higher level abstractions?
As a variation on code generation in some situations it is appropriate to write a model interpreter that knows how to execute the model directly. Updates to the model are reflected immediately in the application.
This can be an incredibly powerful mechanism and works particularly well when the model is written in the form of a DSL that describes a multi-step activity such as the pages of a web site or dialogues in a voice application or bot.
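A toy illustration of the idea, assuming an invented dialogue model for an ordering flow: the interpreter walks the model directly, so editing the model changes the behaviour with no regeneration step.

```python
# Sketch of a model interpreter: the dialogue "model" below is executed
# directly rather than being compiled into code. The flow, prompts and
# answers are invented for illustration.
flow = {
    "start": {"prompt": "Large or small?", "next": {"large": "pay", "small": "pay"}},
    "pay":   {"prompt": "Card or cash?",   "next": {"card": "done", "cash": "done"}},
    "done":  {"prompt": "Thanks!",         "next": {}},
}

def run(flow, answers):
    """Walk the model step by step, collecting each prompt.
    Editing `flow` changes behaviour immediately."""
    state, transcript = "start", []
    for answer in answers:
        transcript.append(flow[state]["prompt"])
        state = flow[state]["next"][answer]
    transcript.append(flow[state]["prompt"])
    return transcript

print(run(flow, ["large", "card"]))
# → ['Large or small?', 'Card or cash?', 'Thanks!']
```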
Possibly the feature of MDD that has seen least adoption as far as I can tell is that of automating the testing process.
If a model describes what it is supposed to do, then it should be possible to generate tests to show what it should do. Just as importantly it should be possible to infer what it should not do.
A further possibility is to infer and generate suitable input test data for both happy and sad cases. This can be incredibly useful in big data or distributed messaging applications, where availability of data at the right volumes is essential.
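One way this can work, sketched with an invented constraint vocabulary: from each field's constraints we derive a boundary-valid record (the happy case) and, for each constraint, a record that violates it (the sad cases, showing what the system should not accept).

```python
# Sketch: inferring happy and sad test inputs from field constraints in a
# model. The constraint vocabulary ("min", "min_len") is invented.
model = {
    "age":  {"type": int, "min": 0, "max": 120},
    "name": {"type": str, "min_len": 1},
}

def happy_value(rules):
    """A minimal valid value inferred from the constraints."""
    return rules["min"] if rules["type"] is int else "x" * rules["min_len"]

def sad_value(rules):
    """A value that violates one constraint (what the system should reject)."""
    return rules["min"] - 1 if rules["type"] is int else ""

def generate_cases(model):
    happy = {f: happy_value(r) for f, r in model.items()}
    sad = [{**happy, f: sad_value(r)} for f, r in model.items()]
    return happy, sad

happy, sad = generate_cases(model)
print(happy)  # one boundary-valid record
print(sad)    # one invalid record per field
```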
Unfortunately, it seems this use of MDD has not really taken a foothold in our industry yet.
Seeds of time
Predicting which technologies and techniques will succeed is like looking into the seeds of time. The specific goals of new approaches and tools may vary but in general the outcome we hope for is some combination of faster, cheaper, simpler.
As we have intimated, some of the early model driven approaches just didn’t seem to make things simpler. As a consequence, development speed and cost suffered, and ultimately developer confidence did too. The software community reacted accordingly.
But is model driven development dead? Certainly not, but like the television chef identifying the key elements of a recipe, I have learnt to identify those problems it is best suited for.
For instance, after everything I have said about UML, I have recently been enjoying somewhat of a renaissance with it after being introduced to C4 modelling (https://c4model.com/). By scaling back my ambitions and not expecting full round-trip engineering I have found this approach very effective at helping me to understand and communicate the architectures I am working on.
Further, I’ve also been reflecting on the types of problems where I have ended up using a model-based approach. Often these projects have appeared from the outside to be almost magical, in that they solve not only the immediate problem but also a host of related problems with minimal change. And therein lies one of the major indicators that a problem is ripe for a model driven approach: multi-tenancy, which in this context means deploying or executing multiple models on the same infrastructure. So, if you are in a problem space where you have not just a single problem to solve but many related ones, then a model driven approach may well still be relevant today.