All posts by Meinte Boersma

LWC2014: the participants

In this blog post we’ll discuss the presentations and demos of the participants of this year’s Language Workbench Challenge. Note that this post has been written over a period of time, which means that some participants were more clearly represented in my memory at the time of writing than others; for the latter I had to rely more on my notes. I apologise in advance if you feel your tool has been misrepresented, and I’m open to comments.


Rascal

Tijs van der Storm of CWI presented the latest developments on Rascal. Rascal doesn’t call itself a language workbench but rather a programming language aimed at source code and meta-programming. Nevertheless, thanks to libraries enabling integration into the Eclipse IDE, it can be construed as a language workbench after all.

The biggest change in Rascal is the advent of a VM and a compiler (in “pre^4 alpha” stage) that compiles Rascal code to bytecode for that VM. Previously, Rascal code was executed by a JVM-based interpreter. This propels Rascal’s performance into the “realm of reasonability”, corresponding to sub-second response times for the scalability part of the focus assignment, and performance is continuously being worked on. Tijs showed statistics indicating that the VM scaled linearly on larger inputs and was appreciably more performant than the interpreter (although that scaled linearly as well).

The implementation of the QL and QLS DSLs has been optimised down to an impressively measly 450 SLoC, including the IDE and the semantics – i.e., code generation that takes advantage of the questionnaire reference implementation.

For the collaboration part, Tijs demonstrated a “freshly out of the oven/bleeding edge” language-aware generic diffing capability on top of standard version control systems, like Git.

All in all, it’s certainly nice to see Rascal making strides towards maturity while keeping close to its core principles.

Whole Platform

Enrico Persiani and Riccardo Solmi have been two of the mainstays of this challenge since the first edition in 2011. That year they blew everyone away with their Whole Platform, which no one had heard of before but which turned out to be surprisingly mature and, above all, different.

The Whole Platform (from now on: WP), like MPS, uses projectional visualisation for the concrete syntax of models. Whereas MPS tries to emulate a text-like look & feel, the WP favours a “table-y” look, requiring a lot of whitespace and relying more on visual cues like lines and fold/unfold buttons for navigation.

Interestingly, Enrico stressed the importance of re-use, not just for the sake of modularisation but also to achieve scalability. The WP provides a new feature that enables one to reference a part of another model inside a model and use it either as-is or after a model transformation. This enabled them to build up the binary search tree questionnaire model for the 1024=32×32 case by first constructing the 32 case and then re-using that recursively in combination with a transformation. Even with the model for the 1024 case, the editor looked quite responsive, even though the syntax was of course rather graphical.

Finally, Enrico showed off the advanced model diffing capability (which worked on top of Git), with features like customisability, three-way comparison and a unified “1-way” compare that allows the user to defer merge conflicts and commit models that still contain them.


Metarepository

The presenter of this contribution, Petr Smolík, has attended the LWC in past years, but this year he promoted himself to participant. For his presentation and demo of the tool he used an iPad, proving that in this day and age you don’t need a lot of “iron on the desk” to run language workbenches.

The Metarepository tool has been used for years by Petr’s company Metadata to service their customers in the financial industry. The tool is based on the principle of multi-level modelling, which means that there is no real distinction between data, model and meta model. E.g., concepts like “entity” and “attribute” can be defined in the meta model; their instances reside in the model, and those instances can in turn be instantiated again, which means you get (persistent) semantics “for free”.
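To make the multi-level idea concrete, here is a minimal Python sketch – all names are mine, not Metarepository’s actual API – in which every element is both an instance of something and a potential type for further instances:

```python
# Minimal multi-level modelling sketch; all names are illustrative,
# not Metarepository's actual API.

class Element:
    def __init__(self, name, meta=None, **slots):
        self.name = name    # the element's own name
        self.meta = meta    # the element this one is an instance of
        self.slots = slots  # attribute values

    def instantiate(self, name, **slots):
        """Create a new element one level down, typed by this one."""
        return Element(name, meta=self, **slots)

# Meta-model level: the concept "Entity" itself.
entity = Element("Entity")

# Model level: "Customer" is an instance of "Entity"...
customer = entity.instantiate("Customer", attributes=["id", "email"])

# ...and data level: a concrete customer is an instance of "Customer".
alice = customer.instantiate("alice", id=1, email="alice@example.com")

assert alice.meta is customer and customer.meta is entity
```

Because every level uses the same mechanism, any generic behaviour defined against the top level (persistence, editing, versioning) automatically applies all the way down – which is what “semantics for free” amounts to.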

Metarepository runs on the Web and uses projectional editing to visualise and manipulate the models. The concrete syntax of the models seems really “formy”, which makes getting an overview a bit cumbersome. In fact, it seemed that one level of containment corresponds to one separate Web page. Especially for expressions this would require some work, but this is the typical painful “edge case” for projectional tools.

Metarepository has built-in versioning (internally bound to Git) and it’s really an intrinsic part of the whole user experience, not something that’s essentially “tacked on”, which makes for a strong proposition. In fact, the tool seems geared towards providing a software development tooling experience in the first place, with the modelling capability being more of a happy circumstance.

To get from models to running software, you can write semantics in Java or Groovy. This comes out-of-the-box with caching, so that unchanged parts of the generated code don’t need to be re-evaluated every time, which is obviously good for performance.


MetaEdit+

The guys from MetaCase are true regulars of the LWC – it just wouldn’t be the same without them, especially since they tend to attack the challenge with extreme vigour.

Apparently, being able to do the binary search questionnaire for the numbers 1..2^10 wasn’t enough, so they generated one for 1..2^20, resulting in a huge model that could nevertheless be opened and manipulated in a very performant manner – impressive, given that the model visualisation in MetaEdit+ is intrinsically graphical. In fact, they used a model repository of 5 Gigabytes and opened models of over 1 million model elements to prove the scalability of the tool. Interestingly, they had to implement an H-tree visualisation algorithm to lay out the questionnaire model without overlap, using as little whitespace as possible.
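For the curious: an H-tree lays out a binary tree as a fractal of alternating horizontal and vertical segments, which keeps even a tree of 2^20 leaves compact and overlap-free. Here is a small Python sketch of the classic construction (MetaCase’s actual algorithm may well differ):

```python
from math import sqrt

def h_tree(depth, x=0.0, y=0.0, length=1.0, horizontal=True):
    """Yield the line segments (x1, y1, x2, y2) of an H-tree: each
    segment spawns two shorter segments, rotated 90 degrees, at its
    endpoints.  Illustrative sketch; MetaCase's algorithm may differ."""
    if depth == 0:
        return
    if horizontal:
        x1, y1, x2, y2 = x - length / 2, y, x + length / 2, y
    else:
        x1, y1, x2, y2 = x, y - length / 2, x, y + length / 2
    yield (x1, y1, x2, y2)
    for ex, ey in ((x1, y1), (x2, y2)):
        yield from h_tree(depth - 1, ex, ey, length / sqrt(2), not horizontal)

segments = list(h_tree(4))
assert len(segments) == 2 ** 4 - 1  # one segment per inner tree node
```

Because the segment length shrinks by a factor of √2 at each level, the whole layout fits in a bounded area regardless of tree depth.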

MetaEdit+ comes in a single-user and a multi-user version, with the latter using a centralised model repository server to facilitate concurrent versioning. Tight integration of the editor with this server enables MetaEdit+ to use a very fine-grained locking approach that minimises prohibitive locking: any one model detail can be edited by only one user at a time, but other users can still edit other details that are directly coupled with it.

Changes are transmitted and visible to other collaborators upon a commit – obviously, this necessitates constant network access. Internally, this is all based on ACID principles, with a commit being the transaction boundary. Part of the collaboration capabilities is the “semi-diff”, which graphically shows differences in objects that persist across versions.

By the way: you can see this presentation on YouTube.


MontiCore

MontiCore is made by the Software Engineering Lab of RWTH Aachen University and is a new contribution to the LWC, although it has been in existence for several years: version 1.0 dates from 2006. Right now we’re at v3.2.0 (v3.1.1 at the time of the LWC).

MontiCore caters for textual DSLs only. Its grammar language is a (slight) abstraction of ANTLR’s, which is used “under the hood” – which probably explains why it looks quite a bit like Xtext.

The MontiCore people didn’t address scalability and collaboration directly in the tool; instead, they urged the use of best practices (such as modularisation). This gives rise to the feeling that the tool is not quite there yet in terms of product maturity for a general (non-academic) audience. It also occupies the same technological space as Xtext (Java and Eclipse), making it hard to see the added value relative to Xtext. Maybe next year that’ll be addressed more clearly.


MPS

Although a few members of the MPS development team were present, the presenting, demoing and (presumably) the implementation of the challenge were once again done by people from the company Sioux. This is interesting, since MPS is thereby the only workbench in the challenge that’s represented by actual users instead of by the tool builders. The presenters dove right in, showing off the various scalability- and collaboration-related features.

Collaboration is based completely on Git – not entirely surprising, since Git is already nicely integrated into IntelliJ IDEA. The collaboration plays nicely with the projectional editing that MPS does and features a structure diff which honours the projection.

Sioux had prepared a couple of models for the questionnaire challenge, some of them monolithic, some of them modularised. They stressed that model architecture is important and that modularisation, as well as type and constraint checking, often comes at a cost in flexibility. The editor and its language services seem to scale roughly linearly with model size for the monolithic models, a bit less so for the modularised models. Field data suggests linear scaling as well.

Another nice feature of MPS is that the templating language used to generate code according to the reference implementation is target-language-aware.


Spoofax

In recent years the Spoofax tooling, which is actively being developed at Delft University of Technology under the leadership of Eelco Visser, has seen definite growth towards maturity. This is obviously a good thing, as it prevents the tool going “Poof!” 😉

Gabriël Konat started off with an overview of Spoofax and was eager to show the newer features: an evolution of the NaBL name binding language and a type system language. The latter integrates nicely with the rest of the tool, provides both type calculation and type checking, and works incrementally after the initial (slow) parsing and checking – this seems to work really well and obviously benefits scalability and collaboration. The type system language can also provide custom functionality like dependency calculation.

Scalability and Git-based collaboration look really solid – so much so that it’s tedious to say anything specific about them. In fact, Spoofax seems to be as boring as a language workbench should be: a set of meta languages that exhibits good separation of concerns and an even learning curve. And to quote Gabriël: “in case of stupidity, just revert”!


LWC2014: wrap-up

It’s a wrap!

The Language Workbench Challenge, co-located with the Code Generation conference at Churchill College, Cambridge, UK, once again was a very nice day featuring demos of 7 language workbenches and their newest features. But above all it was a joy to meet all the participants and “non-challenging” attendees and to exchange knowledge, views and experiences.

LWC 2014 organisers, speakers and delegates


The LWC 2013 had a record number of participants. There were three problems associated with this:

  1. It was a challenge to give each participant a respectful amount of presentation and demo time. It was equally hard to properly evaluate all workbenches in their own right as well as against each other, tallying their strengths and weaknesses.
  2. Some of the contributions definitely stretched the idea of a language workbench: some were not quite a workbench (in the sense of an integrated tool), others were not quite “language-y” (e.g., the contents of a database is not prose per se).
  3. It is an indication of significant fragmentation in a field that can hardly be considered to be large enough for that.

Because of the first two reasons, the program committee decided to draft selection criteria. Participants were asked to submit a short report explaining their (future) implementation of the assignment, also addressing how they fulfilled these criteria.

After an initial misunderstanding in the assignment’s text was rectified, this turned out to work quite well: this year we had a smaller number of candidates (7), all of which were promoted to participants based on their reports. Although 5 of these had participated in previous years, there were also two new ones, showing that the selection criteria weren’t inhibitive.

For space reasons, we’ll defer discussing all contributions to the next blog.


The assignment

This year’s assignment consisted of a base part and a focus assignment. The base part was essentially the same as last year’s: the implementation of a Questionnaire Language (QL) and a styling language for it (QLS).

We (i.e., the program committee) had provided a reference implementation of an example questionnaire using HTML5, based on a simple questionnaire framework, so that participants didn’t have to spend a lot of effort on creating that themselves. In practice, this meant that participants could either interpret the model or generate code from it in a very straightforward way, requiring a minimum amount of effort. The use of the reference implementation/framework was optional, but we were happy to see that most participants used it and that it indeed allowed them to focus on the essence of implementing the languages.

The focus assignment consisted of two parts: one focusing on “groupware” capabilities that allow models to be worked on by various people, and one focusing on “scalability” in the sense that one could work (either alone or as a team) on large models. These topics touch on concepts like version control (including “semantic”/language-aware diffing), model persistence and performance. For the latter part, we asked for the implementation of a binary search in questionnaire form.
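To give an idea of the shape of that focus model: a binary-search questionnaire pins down a number in 1..2^N with questions that halve the range at every step. A Python sketch of the generation (illustrative only; each participant built this in their own workbench’s languages):

```python
def binary_search_questions(lo, hi):
    """Build a nested question structure that pins down a number in
    [lo, hi] by halving the range at every step (illustrative sketch)."""
    if lo == hi:
        return {"answer": lo}
    mid = (lo + hi) // 2
    return {
        "question": f"Is your number greater than {mid}?",
        "yes": binary_search_questions(mid + 1, hi),
        "no": binary_search_questions(lo, mid),
    }

def count_nodes(q):
    """Count questions and answers in the generated structure."""
    if "answer" in q:
        return 1
    return 1 + count_nodes(q["yes"]) + count_nodes(q["no"])

tree = binary_search_questions(1, 1024)
assert tree["question"] == "Is your number greater than 512?"
assert count_nodes(tree) == 2 * 1024 - 1  # 1024 answers, 1023 questions
```

This makes the scalability pressure obvious: a questionnaire for 1..2^10 already contains over two thousand model elements, and doubling the range roughly doubles the model size.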

Persistence of large-scale models

After the last presentation, we had an impromptu presentation by Jürgen Mutschall from MetaModules on the persistence of large models. He demonstrated a system which held the complete Kepler code base, with parsed Java code viewed as a model, in a database which could then be used to inspect, reason about and manipulate (e.g., refactor) it. It was interesting to see the good ol’ JEE stack being used for model storage and to see it perform really well.

Hands-on session and group discussions

A large part of the afternoon was given over to a hands-on session where attendees could look at the language workbenches and discuss possibilities etc. Although this part was not structured, it certainly led to interesting discussions.


In order to have a good assignment for the LWC 2015 edition, we held a little “Agile-style” brainstorm on what subjects could be interesting. The list we came up with (in no particular order):

  • Evolution
  • Autotesting
  • Interchange
  • Life Cycle Management
  • Mixing of different notations
  • What is a language workbench?
  • Business users?
  • Interpretation vs. generation
  • Debugging on the model level

Also, the question was raised whether it would be possible to focus on ‘addressing a question rather than building a lot of stuff’.

All of these suggestions, and more ideas, will be taken into consideration for LWC 2015.