Category Archives: Model Driven Development

LWC2014: wrap-up

It’s a wrap!

The Language Workbench Challenge, co-located with the Code Generation conference at Churchill College, Cambridge, UK, once again was a very nice day featuring demos of 7 language workbenches and their newest features. But above all it was a joy to meet all the participants and “non-challenging” attendees and exchange knowledge, views and experiences.

LWC 2014 organisers, speakers and delegates


The LWC 2013 had a record number of participants. There were three problems associated with this:

  1. It was a challenge to give each participant a fair amount of presentation and demo time. It was equally hard to properly evaluate all workbenches, both in their own right and against each other, tallying their strengths and weaknesses.
  2. Some of the contributions definitely stretched the idea of a language workbench: some were not quite a workbench (in the sense of an integrated tool), others were not quite “language-y” (e.g., the contents of a database is not prose per se).
  3. The sheer number of contributions indicated significant fragmentation in a field that can hardly be considered large enough for that.

Because of the first two reasons, the program committee decided to draw up selection criteria. Participants were asked to submit a short report explaining their (future) implementation of the assignment, also addressing how they fulfilled these criteria.

After an initial misunderstanding in the assignment’s text was rectified, this turned out to work quite well: this year we had a smaller number of candidates (7), all of which were promoted to participants based on their reports. Although five of these had participated in previous years, there were also two new ones, showing that the selection criteria weren’t prohibitive.

For space reasons, we’ll defer discussing all contributions to the next blog.


This year’s assignment consisted of a base part and a focus assignment. The base part was essentially the same as last year: the implementation of a Questionnaire Language (QL) and a styling language for that (QLS).

We (i.e., the program committee) had provided a reference implementation of an example questionnaire, using HTML5 and based on a simple questionnaire framework, to save participants the effort of creating one themselves. In practice, this meant that participants could either interpret the model or generate code from it in a very straightforward way, requiring a minimum amount of effort. Use of the reference implementation/framework was optional, but we were happy to see that most participants used it and that it indeed allowed them to focus on the essence of implementing the languages.
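To make the interpret-versus-generate distinction concrete, here is a minimal sketch in Python. The model layout, question texts and rendering below are invented purely for illustration; actual submissions used their own workbench facilities and the HTML5 reference framework:

```python
# A tiny questionnaire "model": (question id, label, type) tuples.
# This structure is illustrative, not the actual QL metamodel.
MODEL = [
    ("hasSoldHouse", "Did you sell a house in 2010?", "boolean"),
    ("sellingPrice", "What was the selling price?", "money"),
]

def interpret(model):
    """Interpretation: walk the model at runtime and render widgets on the fly."""
    widgets = {"boolean": "checkbox", "money": "number"}
    return [f'<label>{label}</label><input type="{widgets[qtype]}" name="{qid}">'
            for qid, label, qtype in model]

def generate(model):
    """Generation: emit a standalone HTML page from the same model, once."""
    body = "\n".join(interpret(model))
    return f"<!DOCTYPE html>\n<form>\n{body}\n</form>"
```

The difference is mainly one of timing: the interpreter consults the model every time the questionnaire runs, while the generator bakes the model into an artifact that can then run without it.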

The focus assignment consisted of two parts: one focusing on “groupware” capabilities that allow models to be worked on by several people, and one focusing on “scalability” in the sense that one could work (either alone or as a team) on large models. These topics touch on concepts like version control (including “semantic”/language-aware diffing), model persistence and performance. For the latter part, we asked for the implementation of a binary search in questionnaire form.
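To give a feel for what “binary search in questionnaire form” means (this is our illustrative reading, not the assignment text itself): think of a questionnaire that pins down a number through a chain of yes/no questions. The full model is a binary tree of questions, which is exactly what makes it a useful scalability stress test. A small Python sketch of the question chain along one branch:

```python
def binary_search_questions(lo, hi):
    """Generate the yes/no questions a binary-search questionnaire would
    ask to pin down a number in [lo, hi]. Illustrative only; we follow a
    single branch (always answering "no") just to count the depth."""
    questions = []
    while lo < hi:
        mid = (lo + hi) // 2
        questions.append(f"Is your number greater than {mid}?")
        hi = mid  # in the real questionnaire, each answer branches
    return questions
```

A range of 1..128 needs only 7 questions along any one branch, but the complete questionnaire model contains a question node for every branch point – on the order of 2^7 nodes – and that exponential blow-up is what exercises model persistence and performance.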

Persistence of large-scale models

After the last presentation, we had an impromptu presentation by Jürgen Mutschall from MetaModules on the persistence of large models. He demonstrated a system which held the complete Kepler code base, with the parsed Java code viewed as a model, in a database which could then be used to inspect, reason about and manipulate it (e.g., refactoring). It was interesting to see the good ol’ JEE stack being used for model storage, and to see it perform really well.

Hands-on session and group discussions

A large part of the afternoon was given over to a hands-on session where attendees could look at the language workbenches and discuss possibilities etc. Although this part was not structured, it certainly led to interesting discussions.


In order to have a good assignment for the LWC 2015 edition, we held a little “Agile-style” brainstorm on which subjects could be interesting. The list we came up with (in no particular order):

  • Evolution
  • Autotesting
  • Interchange
  • Life Cycle Management
  • Mixing of different notations
  • What is a language workbench?
  • Business users?
  • Interpretation vs. generation
  • Debugging on the model level

Also, the question was raised whether it would be possible to focus on ‘addressing a question rather than building a lot of stuff’.

All of these suggestions, and more ideas, will be taken into consideration for LWC 2015.

Why would I need a Language Workbench?

That’s a question I never really asked myself, but I do get it from other people every once in a while, when I tell them about Code Generation, the Language Workbench Challenge or just model driven software development in general.

The answer could be summarized as ‘for the same reason a blacksmith needs a hammer and an anvil, and a shoemaker needs a boot tree and a needle’. Every tradesman needs the right tools for the job.

Boot tree

Just imagine this: you are working on a piece of software and you find yourself repeatedly going to your customer to ask how they want your application to perform a certain task. Then you find out that most of the implementations resulting from this are quite similar in nature, and you decide to create a small DSL to address that fact and speed up both your talks and your development.

Pah! Instead of repeatedly writing the same code, you now write small statements (either graphical or textual) that describe the problems your customer wants you to solve and from which you can generate working software. Nice!
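The step from “repeated similar code” to “small statements plus a generator” can be sketched in a few lines. Everything below – the one-line DSL, its statement syntax, the generated validator – is invented purely for illustration:

```python
import re

# One made-up DSL statement (syntax invented for this example):
STATEMENT = 'validate "email" matches ".+@.+"'

def generate_validator(statement):
    """Translate one DSL statement into Python source for a validator."""
    m = re.fullmatch(r'validate "(\w+)" matches "(.+)"', statement)
    field, pattern = m.group(1), m.group(2)
    return (f"def validate_{field}(value):\n"
            f"    import re\n"
            f"    return re.fullmatch(r'{pattern}', value) is not None\n")

# The generated source is working software:
namespace = {}
exec(generate_validator(STATEMENT), namespace)
```

Exec-ing the generated source yields a callable `validate_email` function; in a real setting the generator would of course write source files rather than exec strings, but the principle is the same.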

Now, let’s take a step forward. Suppose you are still doing the same thing, about a year later. Because of your good work, you now have more customers, and each of them requests more features than the initial one. To cope with that, you have set up a team of developers: despite the use of DSLs and code generators, you can’t do it all by yourself.

And suddenly, you find yourself in a situation where the small models, because that’s what they are, that you expressed in your DSL start referring to each other, and being (re)used by multiple people on your team. Now which version of which model do you need for product A for customer B, and which one do you need for product C of customer D? And how do you solve the fact that people are making changes to the same model for different products of the same customer?

There are different solutions to that problem, but I reckon most of them involve (re)dividing up the models, and performing version control on them. Maybe even real configuration management, allowing you to mix and match different versions of models. If only you had the tools to support you in that – models are often not stored in text format, like traditional program code, so tools like diff and merge often don’t work, leading to all kinds of issues and additional manual labour.

In a similar fashion, you may run into problems not because your team grows, but because models grow bigger – another reason to split them up and maybe start doing version control on the parts. Also, you may find that your models become better if certain parts are expressed in another, dedicated DSL (just think of a DSL for specifying the pages of an app, and another one to define application flow). How do you mix these DSLs in such a way that you can easily define the models required to solve your customer’s problem and generate the code?

This is a very short and possibly rather abstract example, but it is exactly what the case we put up for the Language Workbench Challenge is about. We work with (sets of) domain specific languages, we work in teams, and we work on models that are getting bigger. That will only help us if we can do it efficiently, and if we use tools that handle issues like the ones described above – issues that distract us from our real goal: solving the customer’s problem. These tools are the language workbenches we discuss and compare in the LWC workshop, for the fourth year in a row. So, if you are interested in this topic, because you work with domain specific languages or are working on a workbench yourself – join us in Cambridge on April 8th, 2014. Registration details and information about the workshop can be found here.

And while you’re at it – it’s worth considering combining a visit to this workshop with a visit to Code Generation, or vice versa.

See you all in Cambridge!