In this blog we’ll discuss the presentations and demos of the participants of this year’s Language Workbench Challenge. Note that this blog post was written over a period of time, which means that some participants were more clearly represented in my memory at the time of writing than others; for the latter I had to rely more on my notes. My apologies in advance if you feel your tool has been misrepresented; I’m open to comments.
Tijs van der Storm of the CWI presented the latest developments on Rascal. Rascal doesn’t call itself a language workbench but rather a programming language aimed at source code and meta programming. Nevertheless, because of libraries enabling integration with the Eclipse IDE, it can be construed as a language workbench after all.
The biggest change in Rascal is the advent of a VM and a compiler (in “pre^4 alpha” stage) that compiles Rascal code to bytecode on that VM. Previously, Rascal code was executed by a JVM-based interpreter. This propels Rascal’s performance into the “realm of reasonability”, corresponding to sub-second response times for the scalability part of the focus assignment, and performance work is ongoing. Tijs showed statistics indicating that the VM scaled linearly on larger inputs and was appreciably more performant than the interpreter (which scaled linearly as well).
The implementation of the QL and QLS DSLs has been optimised to an impressively measly 450 SLoC, including IDE integration and semantics – i.e., code generation that takes advantage of the questionnaire reference implementation.
For the collaboration part, Tijs demonstrated a “freshly out of the oven/bleeding edge” language-aware generic diffing capability on top of standard version control systems, like Git.
All in all, it’s certainly nice to see Rascal making strides towards maturity while keeping close to its core principles.
Enrico Persiani and Riccardo Solmi are two of the mainstays of this challenge since the first edition in 2011. That year they blew everyone away with their Whole Platform, which no-one had heard of before but which turned out to be surprisingly mature and, above all, different.
The Whole Platform (from now on: WP), like MPS, uses projectional visualisation for the concrete syntax of models. Whereas MPS tries to emulate a text-like look & feel, the WP favours a “table-y” look, requiring a lot of whitespace and relying more on visual cues like lines and fold/unfold buttons for navigation.
Interestingly, Enrico stressed the importance of re-use, not just for sake of modularisation but also to achieve scalability. The WP provides a new feature that enables one to reference a part of another model inside a model and use it either as-is or after a model transformation. This enabled them to build up the binary search tree questionnaire model for the 1024=32×32 case by first constructing the 32 case and then re-using that recursively in combination with a transformation. Even with the model for the 1024 case, the editor looked rather responsive, even though the syntax was of course quite graphical.
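For illustration, the re-use trick can be mimicked in a few lines of plain Python (my own sketch, not WP code – the function and tuple shapes are made up for this example): build the binary-search questionnaire for the 32 case once, then obtain the 1024 = 32×32 case by re-using that structure after a transformation (here, a shift of the value range).

```python
def build_tree(lo, hi):
    """Balanced binary decision tree asking 'is it <= mid?' over lo..hi."""
    if lo == hi:
        return ("answer", lo)
    mid = (lo + hi) // 2
    return ("ask", mid, build_tree(lo, mid), build_tree(mid + 1, hi))

def depth(tree):
    return 0 if tree[0] == "answer" else 1 + max(depth(tree[2]), depth(tree[3]))

# The 1..32 case, built directly:
small = build_tree(1, 32)

# Re-use via transformation: shift the small tree into each of 32 "blocks".
def shift(tree, offset):
    if tree[0] == "answer":
        return ("answer", tree[1] + offset)
    return ("ask", tree[1] + offset, shift(tree[2], offset), shift(tree[3], offset))

def compose(outer_leaves, inner, inner_size):
    blocks = [shift(inner, b * inner_size) for b in range(outer_leaves)]
    def merge(lo, hi):  # balanced tree over the blocks themselves
        if lo == hi:
            return blocks[lo]
        mid = (lo + hi) // 2
        return ("ask", (mid + 1) * inner_size, merge(lo, mid), merge(mid + 1, hi))
    return merge(0, outer_leaves - 1)

large = compose(32, small, 32)
print(depth(small), depth(large))  # 5 10 -- i.e. 2^5 = 32 and 2^10 = 1024
```

The point of the sketch is that `large` is never written out by hand: the 32 case is constructed once and re-used 32 times, which is exactly the kind of compositional build-up that keeps a 1024-element model manageable.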
Finally, Enrico showed off the advanced model diffing capability (which works on top of Git), with features like customisability, three-way comparison and a unified “1-way” compare that allows the user to defer merge conflicts and commit models that way.
The presenter of this contribution, Petr Smolík, has attended the LWC in past years, but this year he promoted himself to participant. For his presentation and demo of the tool he used an iPad, proving that in this day and age you don’t need a lot of “iron-on-the-desk” to run language workbenches.
The Metarepository tool has been used for years by Petr’s company Metadata to service their customers in the financial industry. The tool is based on the principle of multi-level modelling, which means that there is no real distinction between data, model and meta model. E.g., concepts like “entity” and “attribute” can be defined in the meta model; instances of these reside in the model, and those instances can in turn be instantiated, which means you get (persistent) semantics “for free”.
Metarepository runs on the Web and uses projectional editing to visualise and manipulate the models. The concrete syntax of the models seems really “formy”, which makes getting an overview a bit cumbersome. In fact, it seemed that one level of containment corresponds to one separate Web page. Especially for expressions this would require some work, but this is the typical painful “edge case” for projectional tools.
Metarepository has built-in versioning (internally bound to Git) and it’s really an intrinsic part of the whole user experience, not something that’s essentially “tacked on”, which makes for a strong proposition. In fact, the tool seems geared towards providing a software development tooling experience in the first place, with the modelling capability being more of a happy circumstance.
To get from models to running software, you can write semantics in Java or Groovy. This comes with out-of-the-box caching, so that unchanged parts of the generated code don’t need to be re-evaluated, which is obviously good for performance.
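The caching idea itself is simple enough to sketch in a few lines (my own illustration in Python, not how Metarepository actually does it – the hash-keyed cache is an assumption): key the generator result on a hash of the source element, so that only changed elements trigger the expensive step.

```python
import hashlib

cache = {}

def compile_element(src):
    """Stand-in for the expensive generation/evaluation step."""
    return f"// generated from: {src}"

def generate(element_source):
    # Key the result on a content hash; unchanged input -> cache hit.
    key = hashlib.sha256(element_source.encode()).hexdigest()
    if key not in cache:
        cache[key] = compile_element(element_source)
    return cache[key]

generate("entity Customer")
generate("entity Customer")   # second call is a cache hit
print(len(cache))             # 1
```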
The guys from MetaCase are true regulars of the LWC – it just wouldn’t be the same without them, especially since they tend to attack the challenge with extreme vigour.
Apparently, being able to do the binary search questionnaire for numbers 1..2^10 wasn’t enough, so they generated one for 1..2^20, resulting in a huge model that nevertheless could be opened and manipulated in a very performant manner – impressive given that the model visualisation in MetaEdit+ is intrinsically graphical. In fact, they used a model repository of 5 Gigabytes and opened models of over 1 million model elements to prove the scalability of the tool. Interestingly, they had to implement an H-tree visualisation algorithm to lay out the questionnaire model without overlap while using as little whitespace as possible.
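An H-tree layout places the children of each node alternately along the horizontal and vertical axis, halving the spacing at every level, so a complete binary tree fits in a bounded area. A rough sketch (my own illustration, not MetaEdit+’s implementation) of the position calculation:

```python
def h_tree(depth, x=0.0, y=0.0, dist=1.0, horizontal=True, out=None):
    """Collect (x, y) positions for a complete binary tree, H-tree style:
    alternate the axis at each level and halve the spacing."""
    if out is None:
        out = []
    out.append((x, y))
    if depth == 0:
        return out
    dx, dy = (dist, 0.0) if horizontal else (0.0, dist)
    h_tree(depth - 1, x - dx, y - dy, dist / 2, not horizontal, out)
    h_tree(depth - 1, x + dx, y + dy, dist / 2, not horizontal, out)
    return out

positions = h_tree(10)          # a tree with 2^10 leaves
print(len(positions))           # 2^11 - 1 = 2047 node positions
print(len(set(positions)) == len(positions))  # True: no two nodes coincide
```

Because the spacing shrinks geometrically, the whole tree stays within a fixed bounding box regardless of depth – which is precisely the “little whitespace, no overlap” property mentioned above.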
MetaEdit+ comes in a single-user and a multi-user version, with the latter using a centralised model repository server to facilitate concurrent versioning. Tight integration of the editor with this server enables MetaEdit+ to use a very fine-grained locking approach that minimises prohibitive locking: any one model detail can only be edited by one user at a time, but other users can still edit other details, even ones directly coupled with it.
Changes are transmitted and visible to other collaborators upon a commit – obviously, this necessitates constant network access. Internally, this is all based on ACID principles, with a commit being the transaction boundary. Part of the collaboration capabilities is the “semi-diff” that graphically shows differences in objects (which keep existing).
By the way: you can see this presentation on YouTube.
Monticore is made by the Software Engineering group of RWTH Aachen University and is a new contribution to the LWC, although it has been in existence for several years: version 1.0 dates from 2006. Right now we’re at v3.2.0 (v3.1.1 at the time of the LWC).
Monticore caters for textual DSLs only. Its grammar language is a (slight) abstraction of ANTLR, which is used “under the hood”, and that probably explains why it looks quite a bit like Xtext.
The Monticore people didn’t address scalability and collaboration directly in the tool; instead, they urged users to apply best practices (such as modularisation). This gives rise to the feeling that the tool is not quite there yet in terms of product maturity for a general (non-academic) audience. It occupies the same technological space as Xtext (Java and Eclipse), making it hard to see the added value relative to Xtext. Maybe next year that’ll be addressed more clearly.
Although a few members of the MPS development team were in attendance, the presenting, demoing and (presumably) the implementation of the challenge were once again done by people from the Sioux company. This is interesting, since MPS is thus the only workbench in the challenge represented by actual users instead of the tool builders. The presenters dove right in, showing off the various scalability- and collaboration-related features.
Collaboration is based completely on Git, which is not entirely surprising since Git is already nicely integrated into IntelliJ IDEA. The collaboration plays nicely with the projectional editing that MPS does and features a structural diff that honours the projection.
Sioux had prepared a couple of models for the questionnaire challenge, some of them monolithic, some of them modularised. They stressed that model architecture is important and that modularisation, as well as type and constraint checking, often come at a cost in flexibility. The editor and its language services seem to scale roughly linearly with model size for the monolithic models, a bit less so for the modularised ones. Field data suggests linear scaling as well.
Another nice feature of MPS is that the templating language used to generate code according to the reference implementation is target-language-aware.
In recent years the Spoofax tooling, which is actively being developed at Delft University of Technology under the leadership of Eelco Visser, has seen definite growth towards maturity. This is obviously a good thing as it prevents the tool going “Poof!” 😉
Gabriël Konat started off with an overview of the features of Spoofax and was eager to show the newer ones: an evolution of the NaBL name binding language and a type system language. The latter integrates nicely with the rest of the tool, provides both type calculation and type checking, and works incrementally after the initial (slower) parse and check – this seems to work really well and obviously benefits scalability and collaboration. The type system language can also provide custom functionality like dependency calculation.
Scalability and Git-based collaboration look really solid, so much so that it’s tedious to say anything specific about them. In fact, Spoofax seems to be as boring as a language workbench should be: a set of meta languages that exhibits good separation of concerns and an even learning curve. And to quote Gabriël: “in case of stupidity, just revert”!