At the end of Distributed Programming & CALM, I mentioned how organically growing software often fails to produce a system with clear semantics.
So I count myself lucky that Mark Fernandes recently mentioned the Vienna Development Method (VDM). It’s like if design by contract, abstract data types, first-order logic, and a small imperative/OO language with a collection library had a lovechild.
Developing an application using VDM starts like this:
- State some overall claims about your system's semantics, introducing state/objects, types, and functions as needed to flesh things out.
- Use preconditions and postconditions (but preferably not any implementation) to define the functions.
- Use invariants to clarify the meaning of your types.
- Get it all to preserve the desired semantics.
At this point, you’re several removes away from a runnable application.
You’ve got some types and some operations on those types that have been implicitly defined using assertions.
Progressive refinement is how you work your way to more concrete data types and more explicitly (operationally) defined functions, step by step.
In concrete terms, you actually start writing some code in between the bunch of NSParameterAssert() and NSAssert() statements that currently comprise the entirety of your program’s functions.
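To make that concrete (a hedged sketch in Python rather than Objective-C; the registry and all its names are made up for illustration), a function whose meaning lives almost entirely in its assertions might look like this:

```python
# Hypothetical high-level model: a registry of member names, held as a set.
# The add operation is "defined" by its pre- and post-condition assertions;
# the one-line body between them is almost incidental at this stage.

class MemberRegistry:
    def __init__(self) -> None:
        self.members: set[str] = set()

    def _invariant(self) -> bool:
        # Type invariant: every member name is a non-empty string.
        return all(isinstance(m, str) and m for m in self.members)

    def add(self, name: str) -> None:
        # Pre-condition: non-empty name, not yet registered.
        assert name and name not in self.members
        old = set(self.members)  # snapshot, so the post-condition can refer to it
        self.members.add(name)
        # Post-condition: exactly `name` was added, and the invariant still holds.
        assert self.members == old | {name}
        assert self._invariant()
```

The actual data structure and body are placeholders; the assertions are the spec.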
The first part of refinement is coming up with a lower-level (closer to the implementation language) representation of the same system.
You might convert a set into an array with a “no duplicates” invariant, or use strings to represent a certain enumeration.
You’ll update functions or add new ones as needed, and rephrase pre/postconditions in terms of the new data model, and flesh out the actual implementation of the functions in terms of your new data types.
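For instance (a hypothetical sketch, assuming the high-level model kept names in a set), the set-to-array refinement might look like this, with the former "set-ness" now stated as an explicit no-duplicates invariant:

```python
# Refined model: an abstract set of member names becomes a list, which is
# closer to a concrete implementation. What the set gave us for free is now
# an explicit invariant: no duplicates.

class MemberRegistryV2:
    def __init__(self) -> None:
        self.members: list[str] = []

    def _invariant(self) -> bool:
        return (all(isinstance(m, str) and m for m in self.members)
                and len(self.members) == len(set(self.members)))  # no duplicates

    def add(self, name: str) -> None:
        # Pre-condition, rephrased over the list representation.
        assert name and name not in self.members
        old = list(self.members)
        self.members.append(name)
        # Post-condition, rephrased over the list representation.
        assert self.members == old + [name]
        assert self._invariant()
```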
Then we get to the real heart of the matter: proving that the new model is truly a refinement of the last one, i.e., that it preserves all the properties of the original system.
…so I’ve just been cribbing from the Wikipedia page I linked above, and it’s pretty cryptic about that refinement step. (See the end of this article for my best guess at how to interpret that section of the Wikipedia VDM article.)
Looking elsewhere, it appears that in addition to straight-up proof, model-checking and simulation are also used to validate the model. This makes sense given that, even before we begin refining our system, we need to convince ourselves that we’ve actually modeled what we intended to model. If we haven’t succeeded at that, all we’ll be refining is high-level garbage into lower-level garbage.
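As a sketch of what “simulation” can mean here (hypothetical; the tiny model and its names are made up), one can drive a model with random operations and let its assertions do the judging:

```python
import random
import string

# A tiny set-based model: add() carries its contract as assertions, so any
# modelling mistake surfaces as an AssertionError during simulation.

def add(members: set[str], name: str) -> set[str]:
    assert name and name not in members          # pre-condition
    result = members | {name}
    assert name in result and members <= result  # post-condition
    return result

def simulate(steps: int = 500, seed: int = 0) -> set[str]:
    """Drive the model with random operations that respect the pre-condition."""
    rng = random.Random(seed)
    members: set[str] = set()
    for _ in range(steps):
        name = ''.join(rng.choices(string.ascii_lowercase, k=4))
        if name not in members:  # only take steps the pre-condition allows
            members = add(members, name)
    return members
```

A failed run doesn’t prove correctness, but it cheaply catches a model that doesn’t say what we meant.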
The Agile Spec Problem
Key to the Vienna Development Method is starting from a known-good high-level model, before the first line of implementation code is written.
After the first step of formally stating the specification of the application, you function as a manual compiler. At each step, you apply a program transformation, then prove it preserves your application’s semantics.
You start with a high-level spec, and you push it down the abstraction hierarchy till you finally end up with an actual, executable implementation in your target language.
But that’s not terribly agile.
If you know of any work bridging the very different worlds of “specs are great, we can lower them down into working software” and “stakeholder-driven organic growth is necessary for software to create value” – or how to iteratively, agilely develop a spec, with non-expert stakeholder validation – shoot me an email or reach out to me on App.net, where I’m @jws.
Refinement in VDM: This bit is a mess in the Wikipedia article. Here’s my understanding of it:
- A mapping must exist from your new data representation back to the old one, and it must cover every instance possible in the old representation. The mapping function is called the “retrieval” function, since it retrieves the original/old version of the state.
- Once you have such a mapping, you must show that, for all your new data and all your new functions, the old semantics are preserved:
  - Whenever an old pre-condition holds on the retrieval of a new state, the new pre-condition should hold on that state.
  - Whenever, from a valid starting point, a post-condition holds in the new version, the old post-condition should also hold on the retrievals of the input and output values.
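Putting my reading into Python (hypothetical and illustrative; the state and operations are made up), with the concrete state a duplicate-free list and the retrieval function recovering the abstract set, the obligations can at least be spot-checked by brute force rather than proved:

```python
# Concrete state: a duplicate-free list of names. Retrieval recovers the
# abstract set it represents.

def retrieve(concrete: list[str]) -> set[str]:
    return set(concrete)

# Abstract pre-condition for "add name": name not already a member.
def pre_add_abstract(state: set[str], name: str) -> bool:
    return name not in state

# Concrete pre-condition, phrased over the list representation.
def pre_add_concrete(state: list[str], name: str) -> bool:
    return name not in state

# Concrete operation.
def add_concrete(state: list[str], name: str) -> list[str]:
    return state + [name]

# Abstract post-condition: exactly `name` was added.
def post_add_abstract(before: set[str], after: set[str], name: str) -> bool:
    return after == before | {name}

def check_obligations(states: list[list[str]], names: list[str]) -> bool:
    """Brute-force the two obligations over some small cases:
    1. if the abstract pre-condition holds on the retrieval, the concrete
       pre-condition holds on the concrete state;
    2. running the concrete operation from such a state, the abstract
       post-condition holds on the retrievals of input and output."""
    for s in states:
        for n in names:
            if pre_add_abstract(retrieve(s), n):
                if not pre_add_concrete(s, n):
                    return False
                s2 = add_concrete(s, n)
                if not post_add_abstract(retrieve(s), retrieve(s2), n):
                    return False
    return True
```

Real VDM refinement proves these for all states; exhaustively checking small cases is just a sanity test of the same obligations.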