LWC2014: the participants

In this blog we’ll discuss the presentations and demos of the participants of this year’s Language Workbench Challenge. Note that this blog post has been written over some period of time, which means that some participants were more clearly represented in my memory at the time of writing than others; for the latter I had to rely more on my notes. I apologise beforehand if you think your tool has been misrepresented, and I’m open to comments.

Rascal

Tijs van der Storm of CWI presented the latest developments on Rascal. Rascal doesn’t call itself a language workbench but rather a programming language aimed at source code analysis and meta programming. Nevertheless, because of libraries enabling integration with the Eclipse IDE, it can be construed as a language workbench after all.

The biggest change in Rascal is the advent of a VM and a compiler (in “pre^4 alpha” stage) that compiles Rascal code to bytecode for that VM. Previously, Rascal code was executed by a JVM-based interpreter. This propels Rascal’s performance into the “realm of reasonability”, corresponding to sub-second response times for the scalability part of the focus assignment, and performance is continuously being worked on. Tijs had some statistics showing that the VM scaled linearly with input size and was appreciably more performant than the interpreter (although that scaled linearly as well).

The implementation of the QL and QLS DSLs has been optimised to an impressively measly 450 SLoC, including IDE integration and semantics, i.e. code generation that takes advantage of the questionnaire reference implementation.

For the collaboration part, Tijs demonstrated a “freshly out of the oven/bleeding edge” language-aware generic diffing capability on top of standard version control systems, like Git.
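
To give an idea of what “language-aware” means here: instead of comparing lines of text, such a diff compares the trees behind the text, so that e.g. a changed question shows up as a single change instead of a remove-plus-add of text lines. Below is a minimal sketch in Java of the general idea – my own illustration, not Rascal’s actual implementation:

    import java.util.*;

    // A minimal sketch of a language-aware (structural) diff: nodes are compared
    // by their position in the tree, not by their textual rendering.
    // Illustrative only; this is not how Rascal implements it.
    public class StructuralDiff {

        record Node(String label, List<Node> children) {
            Node(String label, Node... children) { this(label, List.of(children)); }
        }

        static void diff(Node a, Node b, String path, List<String> report) {
            if (a == null && b == null) return;
            if (a == null) { report.add("added   " + path + ": " + b.label()); return; }
            if (b == null) { report.add("removed " + path + ": " + a.label()); return; }
            if (!a.label().equals(b.label()))
                report.add("changed " + path + ": " + a.label() + " -> " + b.label());
            int n = Math.max(a.children().size(), b.children().size());
            for (int i = 0; i < n; i++) {
                Node ca = i < a.children().size() ? a.children().get(i) : null;
                Node cb = i < b.children().size() ? b.children().get(i) : null;
                diff(ca, cb, path + "/" + i, report);
            }
        }

        public static void main(String[] args) {
            Node v1 = new Node("form", new Node("question: name"), new Node("question: age"));
            Node v2 = new Node("form", new Node("question: name"), new Node("question: birthdate"));
            List<String> report = new ArrayList<>();
            diff(v1, v2, "", report);
            report.forEach(System.out::println);
            // prints: changed /1: question: age -> question: birthdate
        }
    }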

All in all, it’s certainly nice to see Rascal making strides towards maturity while keeping close to its core principles.

Whole Platform

Enrico Persiani and Riccardo Solmi have been two of the mainstays of this challenge since the first edition in 2011. That year they blew everyone away with their Whole Platform, which no one had heard of before but which was surprisingly mature and, above all, different.

The Whole Platform (from now on: WP), like MPS, uses projectional visualisation for the concrete syntax of models. Whereas MPS tries to emulate a text-like look & feel, the WP favours a “table-y” look that uses a lot of whitespace and relies more on visual cues like lines and fold/unfold buttons for navigation.

Interestingly, Enrico stressed the importance of re-use, not just for the sake of modularisation but also to achieve scalability. The WP provides a new feature that enables one to reference a part of another model inside a model and use it either as-is or after a model transformation. This enabled them to build up the binary search tree questionnaire model for the 1024 = 32×32 case by first constructing the 32 case and then re-using that recursively in combination with a transformation. Even with the model for the 1024 case, the editor looked rather responsive, even though the syntax was of course quite graphical.
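
To make that composition idea concrete: below is a little sketch in Java (my own reconstruction of the idea, not the WP’s model transformation language) that builds the 1024-leaf binary search tree by constructing the 32-leaf case once and grafting shifted copies of it onto its own leaves:

    import java.util.function.IntFunction;

    // Sketch of building a 1024-leaf binary search (decision) tree as 32 x 32:
    // build the 32-leaf tree once, then replace each of its leaves by a shifted
    // copy of itself. Illustrative only; the Whole Platform does this with model
    // references plus a model transformation, not with plain Java objects.
    public class ComposedTree {

        record Node(int lo, int hi, Node left, Node right) {}  // covers values lo..hi

        static Node build(int lo, int hi) {                    // plain binary search tree
            if (lo == hi) return new Node(lo, hi, null, null);
            int mid = (lo + hi) / 2;
            return new Node(lo, hi, build(lo, mid), build(mid + 1, hi));
        }

        static Node shift(Node t, int offset) {                // the "model transformation"
            if (t == null) return null;
            return new Node(t.lo() + offset, t.hi() + offset,
                            shift(t.left(), offset), shift(t.right(), offset));
        }

        // Replace every leaf of the outer tree (over block indices 0..31) by an
        // inner tree covering that block's value range.
        static Node graft(Node outer, int blockSize, IntFunction<Node> innerFor) {
            if (outer.left() == null) return innerFor.apply(outer.lo());
            return new Node(outer.lo() * blockSize, outer.hi() * blockSize + blockSize - 1,
                            graft(outer.left(), blockSize, innerFor),
                            graft(outer.right(), blockSize, innerFor));
        }

        public static void main(String[] args) {
            Node inner = build(0, 31);                                      // the 32 case, built once
            Node big = graft(build(0, 31), 32, b -> shift(inner, b * 32));  // the 1024 case
            System.out.println(big.lo() + ".." + big.hi());                 // 0..1023
        }
    }

The point being: the 32 case exists only once and is re-used by reference plus transformation, rather than being copied out 32 times by hand.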

Finally, Enrico showed off the advanced model diffing capability (which works on top of Git), with features like customisability, three-way comparison and a unified “1-way” compare that allows the user to defer merging conflicts and commit models that way.

Metarepository

The presenter of this contribution, Petr Smolík, has attended the LWC in past years, but this year he promoted himself to participant. For his presentation and demo of the tool he used an iPad, proving that in this day and age you don’t need a lot of “iron-on-the-desk” to run language workbenches.

The Metarepository tool has been used for years by Petr’s company Metadata to service their customers in the financial industry. The tool is based on the principle of multi-level modelling, which means that there is no real distinction between data, model and meta model. For example, concepts like “entity” and “attribute” can be defined in the meta model; instances of those reside in the model, and those instances can be instantiated again, which means you get (persistent) semantics “for free”.
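
A minimal sketch of the multi-level idea, in Java and purely for illustration (Metarepository’s actual data model is of course richer): every element points to the element it instantiates, so “meta model”, “model” and “data” are just levels of one and the same structure:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Minimal multi-level modelling sketch: there is no hard-wired distinction
    // between meta model, model and data; every element can be instantiated again.
    public class MultiLevel {

        static class Element {
            final String name;
            final Element metaOf;                 // the element this one is an instance of
            final Map<String, Object> slots = new LinkedHashMap<>();

            Element(String name, Element metaOf) { this.name = name; this.metaOf = metaOf; }

            Element instantiate(String name) { return new Element(name, this); }
        }

        public static void main(String[] args) {
            Element entity   = new Element("Entity", null);       // meta model level
            Element customer = entity.instantiate("Customer");    // model level
            Element john     = customer.instantiate("john");      // data level
            john.slots.put("name", "John Doe");
            System.out.println(john.name + " is a " + john.metaOf.name
                    + ", which is an " + john.metaOf.metaOf.name);
            // prints: john is a Customer, which is an Entity
        }
    }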

Metarepository runs on the Web and uses projectional editing to visualise and manipulate the models. The concrete syntax of the models seems really “formy”, which makes getting an overview a bit cumbersome. In fact, it seemed that one level of containment corresponds to a separate Web page. Expressions especially would require some work, but these are the typical painful “edge case” for projectional tools.

Metarepository has built-in versioning (internally bound to Git) that is really an intrinsic part of the whole user experience, not something that’s essentially “tacked on”, which makes for a strong proposition. In fact, the tool seems geared towards providing a software development tooling experience in the first place, with the modelling capability being more of a happy circumstance.

To get from models to running software, you can write semantics in Java or Groovy. This comes with caching out of the box, so that not all of the generated code needs to be re-evaluated on every change, which is obviously good for performance.
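
The caching presumably works along these lines – a hedged sketch in Java with all names made up, so not Metarepository’s actual API: generation results are memoised per model element, and only elements whose content changed are regenerated:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    // Sketch of cached code generation: the output for a model element is only
    // recomputed when the element itself changed. Names and structure are made up
    // for illustration and do not reflect Metarepository's actual API.
    public class CachedGenerator {

        record CacheKey(String elementId, int contentHash) {}

        private final Map<CacheKey, String> cache = new ConcurrentHashMap<>();

        String generate(String elementId, int contentHash, Function<String, String> generator) {
            return cache.computeIfAbsent(new CacheKey(elementId, contentHash),
                                         key -> generator.apply(key.elementId()));
        }

        public static void main(String[] args) {
            CachedGenerator gen = new CachedGenerator();
            Function<String, String> expensive = id -> {
                System.out.println("generating " + id);   // only printed on a cache miss
                return "code for " + id;
            };
            gen.generate("question1", 42, expensive);     // miss: generates
            gen.generate("question1", 42, expensive);     // hit: returns cached result
            gen.generate("question1", 43, expensive);     // content changed: regenerates
        }
    }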

MetaEdit+

The guys from MetaCase are true regulars of the LWC – it just wouldn’t be the same without them, especially since they tend to attack the challenge with extreme vigour.

Apparently, being able to do the binary search questionnaire for the numbers 1..2^10 wasn’t enough, so they generated one for 1..2^20, resulting in a huge model that nevertheless could be opened and manipulated in a very performant manner – impressive given that the model visualisation in MetaEdit+ is intrinsically graphical. In fact, they used a model repository of 5 gigabytes and opened models of over 1 million model elements to prove the scalability of the tool. Interestingly, they had to implement an H-tree visualisation algorithm to lay out the questionnaire model without overlap while using as little whitespace as possible.
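
For those unfamiliar with it: an H-tree layout places a binary tree recursively in the shape of the letter H, alternating between horizontal and vertical placement and halving the distance every other level, which keeps the drawing compact and overlap-free. A small sketch in Java – my own reconstruction, not MetaCase’s code:

    // Sketch of an H-tree layout for a binary tree: children are placed at a
    // distance d from their parent, alternating between the horizontal and the
    // vertical axis; d is halved after every horizontal/vertical pair of levels.
    // My own reconstruction for illustration, not MetaCase's actual algorithm.
    public class HTreeLayout {

        static class Node {
            Node left, right;
            double x, y;
            Node(Node left, Node right) { this.left = left; this.right = right; }
        }

        static void layout(Node n, double x, double y, double d, boolean horizontal) {
            if (n == null) return;
            n.x = x;
            n.y = y;
            double dx = horizontal ? d : 0;
            double dy = horizontal ? 0 : d;
            double nextD = horizontal ? d : d / 2;   // halve d after each H/V pair
            layout(n.left,  x - dx, y - dy, nextD, !horizontal);
            layout(n.right, x + dx, y + dy, nextD, !horizontal);
        }

        static Node complete(int depth) {            // a complete binary tree to lay out
            return depth == 0 ? null : new Node(complete(depth - 1), complete(depth - 1));
        }

        public static void main(String[] args) {
            Node root = complete(5);                 // 31 nodes
            layout(root, 0, 0, 16, true);            // root at the origin
            System.out.println(root.left.x + "," + root.left.y);   // -16.0,0.0
        }
    }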

MetaEdit+ comes in a single-user and a multi-user version, with the latter using a centralised model repository server to facilitate concurrent versioning. Tight integration of the editor with this server enables MetaEdit+ to use a very fine-grained locking approach that minimises prohibitive locking: any one model detail can be edited by only one user at a time, but other users can still edit other details, even ones that are directly coupled with it.
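
Conceptually, such fine-grained locking amounts to keeping a lock per model element instead of per model – something like the following Java sketch, which is entirely my own illustration of the concept and not how MetaEdit+’s repository server is actually implemented:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Conceptual sketch of fine-grained, per-element locking: a user may edit a
    // model element only while holding its lock, but locks on other elements are
    // unaffected. Illustration only, not MetaEdit+'s actual implementation.
    public class ElementLocks {

        private final Map<String, String> locks = new ConcurrentHashMap<>();  // elementId -> userId

        boolean tryLock(String elementId, String userId) {
            return locks.putIfAbsent(elementId, userId) == null  // acquired just now...
                    || userId.equals(locks.get(elementId));      // ...or already held by us
        }

        void unlock(String elementId, String userId) {
            locks.remove(elementId, userId);                     // only the holder can release
        }

        public static void main(String[] args) {
            ElementLocks repo = new ElementLocks();
            System.out.println(repo.tryLock("question1", "alice"));  // true
            System.out.println(repo.tryLock("question1", "bob"));    // false: alice holds it
            System.out.println(repo.tryLock("question2", "bob"));    // true: another detail
            repo.unlock("question1", "alice");
            System.out.println(repo.tryLock("question1", "bob"));    // true
        }
    }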

Changes are transmitted and visible to other collaborators upon a commit – obviously, this necessitates constant network access. Internally, this is all based on ACID principles, with a commit being the transaction boundary. Part of the collaboration capabilities is the “semi-diff”, which graphically shows differences in objects (that continue to exist across versions).

By the way: you can see this presentation on YouTube.

Monticore

Monticore is made by the Software Engineering Lab of RWTH Aachen University and is a new contribution to the LWC, although it has been in existence for several years: version 1.0 dates from 2006. Right now we’re at v3.2.0 (v3.1.1 at the time of the LWC).

Monticore caters for textual DSLs only. The grammar language is a (slight) abstraction of the ANTLR grammar language that is used “under the hood”, and that probably explains why it looks quite a bit like Xtext.

The Monticore people didn’t address scalability and collaboration directly in the tool; instead, they urged the use of best practices (such as modularisation). This gives rise to the feeling that the tool is not quite there yet in terms of product maturity for a general (non-academic) audience. It occupies the same technological space as Xtext (Java and Eclipse), making it hard to see its added value relative to Xtext. Maybe next year that’ll be addressed more clearly.

MPS

Although a few members of the MPS development team were present, the presenting, demoing and (presumably) the implementation of the challenge were once again done by people from the Sioux company. This is interesting since it makes MPS the only workbench in the challenge that’s represented by actual users instead of the tool builders. The presenters dove right in, showing off the various scalability- and collaboration-related features.

Collaboration is based completely on Git – not entirely surprising since Git is already nicely integrated in IntelliJ/IDEA. The collaboration plays nice with the projectional editing that MPS does and features a structure diff which honours the projection.

Sioux had prepared a couple of models for the questionnaire challenge, some of them monolithic, some of them modularised. They stressed that model architecture is important and that modularisation, as well as type and constraint checking, often comes at a cost in flexibility. The editor and its language services seem to scale roughly linearly with model size for the monolithic models, a bit less so for the modularised models. Field data suggests linear scaling as well.

Another nice feature of MPS is that the templating language, which is used to generate code according to the reference implementation, is target-language-aware.

Spoofax

In recent years the Spoofax tooling, which is actively being developed at Delft University of Technology under the leadership of Eelco Visser, has seen definite growth towards maturity. This is obviously a good thing as it prevents the tool going “Poof!” ;)

Gabriël Konat started off with an overview of Spoofax and was eager to show the newer features: an evolution of the NaBL name binding language and a type system language. The latter integrates nicely with the rest of the tool, provides both type calculation and type checking, and works incrementally after the initial, slower parse and check – this seems to work really well and obviously benefits scalability and collaboration. The type system language can also provide custom functionality like dependency calculation.

Scalability and Git-based collaboration look really solid, so much so that it’s tedious to say anything specific about them. In fact, Spoofax seems to be as boring as a language workbench should be: a set of meta languages that exhibits good separation of concerns and an even learning curve. And to quote Gabriël: “in case of stupidity, just revert”!

 

Dutch and German Languages, workbenches, books and Ph.D.’s

During the last edition of the Language Workbench Challenge and Code Generation, we noticed that quite a significant part of the delegates were from The Netherlands. We also concluded that, despite the fact that the Xtext folks were not there this year, there is quite a bit of language construction and code generation going on in Germany. Last year (or was it two years ago?) I discussed with Steven Kelly how a lot of DSL- and code generation-related work comes from Europe, and not so much from e.g. the United States; it now seems that within Europe things are narrowing down to The Netherlands and Germany (and Jyväskylä) – or is this only because of the way in which, and the locations where, Code Generation and the Language Workbench Challenge are advertised?

However, it is a fact that quite a few Dutch people have built their Ph.D. theses around environments like Spoofax (Delft University), that Rascal is used extensively for training purposes at the University of Amsterdam, that companies like Océ, FEI Company and ASML are increasing their MDD activities, and that Xtext and MPS are being used extensively in Germany. Actually, we should all attend another Ph.D. defense on June 18th, when Markus Völter is defending his thesis at Delft University. Hmmm, that’s Germany and The Netherlands again…


Markus Völter’s 2012 book; in 2014 he’ll defend his Ph.D. thesis

LWC2014: wrap-up

It’s a wrap!

The Language Workbench Challenge, co-located with the Code Generation conference at Churchill College, Cambridge, UK, once again was a very nice day featuring demos of 7 language workbenches and their newest features. But above all it was a joy to meet all the participants and “non-challenging” attendees and exchange knowledge, views and experiences.

LWC 2014 organisers, speakers and delegates

Lead-up

The LWC 2013 had a record number of participants. There were three problems associated with this:

  1. It was a challenge to give each participant a respectable amount of presentation and demo time. It was equally hard to properly evaluate all workbenches in their own right as well as against each other, tallying their strengths and weaknesses.
  2. Some of the contributions definitely stretched the idea of a language workbench: some were not quite a workbench (in the sense of an integrated tool), others were not quite “language-y” (e.g., the contents of a database are not prose per se).
  3. The number of participants is an indication of significant fragmentation in a field that can hardly be considered large enough for that.

Because of the first two reasons, the program committee decided to draft selection criteria. Participants were asked to submit a short report explaining their (future) implementation of the assignment, also addressing how they fulfilled these criteria.

After an initial misunderstanding in the assignment’s text was rectified, this turned out to work quite well: this year we had a smaller number of candidates (7), all of which were promoted to participants based on their reports. Although 5 of these had participated in previous years, there were also two new ones, showing that the selection criteria weren’t prohibitive.

For space reasons, we’ll defer discussing all contributions to the next blog.

Assignment

This year’s assignment consisted of a base part and a focus assignment. The base part was essentially the same as last year: the implementation of a Questionnaire Language (QL) and a styling language for that (QLS).

We (i.e., the program committee) had provided a reference implementation of an example questionnaire using HTML5, based on a simple questionnaire framework, to avoid participants having to spend a lot of effort on creating that themselves. In practice, this meant that participants could either interpret or generate code from the model in a very straightforward way, requiring a minimum amount of effort. The use of the reference implementation/framework was optional, but we were happy to see that most participants used it and that it indeed allowed them to focus on the essence of implementing the languages.

The focus assignment consisted of two parts: one focusing on “groupware” capabilities that allow models to be worked on by various people, and one focusing on “scalability” in the sense that one could work (either alone or as a team) on large models. These topics touch on concepts like version control (including “semantic”/language-aware diffing), model persistence and performance. For the latter part, we asked for the implementation of a binary search in questionnaire form.
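
To illustrate what “a binary search in questionnaire form” amounts to: the questionnaire asks a series of yes/no questions, each of which halves the range the user’s number can be in, so 1..2^10 requires 10 levels of nested conditional questions. A small sketch in Java that prints such a questionnaire as indented plain text (an actual submission would of course produce a QL model instead):

    // Sketch of a binary-search questionnaire over [lo, hi]: each yes/no question
    // halves the remaining range, so 1..2^10 needs 10 levels of nesting. Printed
    // as indented plain text; the real assignment asked for a QL model.
    public class BinarySearchQuestionnaire {

        static void emit(int lo, int hi, String indent) {
            if (lo == hi) {
                System.out.println(indent + "Your number is " + lo + ".");
                return;
            }
            int mid = (lo + hi) / 2;
            System.out.println(indent + "Is your number greater than " + mid + "? (yes/no)");
            System.out.println(indent + "  if yes:");
            emit(mid + 1, hi, indent + "    ");
            System.out.println(indent + "  if no:");
            emit(lo, mid, indent + "    ");
        }

        public static void main(String[] args) {
            emit(1, 8, "");   // small, readable example; the assignment asked for 1..2^10
        }
    }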

Persistence of large-scale models

After the last presentation, we had an impromptu presentation by Jürgen Mutschall from MetaModules on the persistence of large models. He demonstrated a system holding the complete Kepler code base, with the parsed Java code viewed as a model, in a database which could then be used to inspect, reason about and manipulate (such as refactor) it. It was interesting to see the good ol’ JEE stack being used for model storage and to see it perform really well.

Hands-on session and group discussions

A large part of the afternoon was given over to a hands-on session where attendees could look at the language workbenches and discuss possibilities etc. Although this part was not structured, it certainly led to interesting discussions.

Brainstorm

In order to have a good assignment for the LWC 2015 edition we held a little “Agile-style” brainstorm on what subjects could be interesting. The list we came up with (in no particular order):

  • Evolution
  • Autotesting
  • Interchange
  • Life Cycle Management
  • Mixing of different notations
  • What is a language workbench?
  • Business users?
  • Interpretation vs. generation
  • Debugging on the model level

Also, the question was raised whether it would be possible to focus on ‘addressing a question rather than building a lot of stuff’.

All of these suggestions, and more ideas, will be taken into consideration for LWC 2015.

LWC2014 Reference implementation updated

Today, we made a small update to the reference implementation of the LWC 2014 assignment. The reference implementation now includes a conditionalFormElementWidget, alongside the simpleFormElementWidget, which allows the use of individual conditional questions without having to wrap them in a conditionalGroupWidget.

It’s a small change that will not affect the models created by LWC participants, but it may affect their code generators if they want to use it. Did we cause problems for them now? Hopefully not – this is the type of change to be expected in a real-life situation as well.

Updates are available in the LWC2014 Git repository: https://github.com/dslmeinte/LWC2014

 

Why would I need a Language Workbench?

That’s a question I never really asked myself, but I do get it from other people every once in a while, when I tell them about Code Generation, the Language Workbench Challenge or just model driven software development in general.

The answer could be summarised as ‘for the same reason a blacksmith needs a hammer and an anvil, and a shoemaker needs a boot tree and a needle’. Every tradesman needs the right tools for the job.

Boot tree

Just imagine this: you are working on a piece of software and you find yourself repeatedly going to your customer to ask how they want your application to perform a certain task. Then you find out that most of the implementations resulting from this are quite similar in nature, and you decide to create a small DSL to address that fact and speed up both your talks and your development.

Pah! Instead of repeatedly writing the same code, you now write small statements (either graphical or textual) that describe the problems your customer wants you to solve and from which you can generate working software. Nice!

Now, let’s take a step ahead. Suppose you are still doing the same thing about a year later. Because of your good work, you now have more customers, and each of them requests more features than the initial one. In order to cover that, you have set up a team of developers, because despite the use of DSLs and code generators, you can’t do it all by yourself.

And suddenly you find yourself in a situation where the small models (because that’s what they are) that you expressed in your DSL start referring to each other and being (re)used by multiple people on your team. Now which version of which model do you need for product A of customer B, and which one do you need for product C of customer D? And how do you deal with the fact that people are making changes to the same model for different products of the same customer?

There are different solutions to that problem, but I reckon most of them involve (re)dividing up the models and performing version control on them. Maybe even real configuration management, allowing you to mix and match different versions of models. If only you had the tools to support you in that – models are often not stored in a text format like traditional program code, so tools like diff and merge often don’t work, leading to all kinds of issues and additional manual labour.

In a similar fashion, you may run into problems not because your team grows, but because your models grow bigger – another reason to split them up and maybe start doing version control on the parts. Also, you may find that your models become better if certain parts are expressed in another, dedicated DSL (just think of a DSL for specifying the pages of an app, and another one to define application flow). How do you mix these DSLs in such a way that you can easily define the models required to solve your customer’s problem and generate the code?

This is a very short and possibly rather abstract example, but it is exactly what the case we put up for the Language Workbench Challenge is about. We work with (sets of) domain specific languages, we work in teams, and we work on models that are getting bigger. This will only help us if we can do it efficiently, and if we use tools that help us with issues like the ones described above, which distract us from our real goal: solving the customer’s problem. These tools are the language workbenches we discuss and compare in the LWC workshop, for the fourth year in a row. So, if you are interested in this topic, because you work with domain specific languages or are working on a workbench yourself: join us in Cambridge on April 8th, 2014. Registration details can be found here, and information about the workshop can be found here.

And while you’re at it, it’s worth considering combining a visit to this workshop with a visit to Code Generation, or vice versa.

See you all in Cambridge!

Accepted submissions

Today, we sent out the confirmations of the accepted proposals for LWC 2014. We are very pleased to announce that the following language workbenches will be present at the workshop:

  • MetaEdit
  • MetaRepository
  • Monticore
  • MPS
  • Rascal
  • Spoofax
  • WholePlatform

 

Extended submission deadline

Let’s start off with a Happy New Year to the MDSD and LWB community, before bringing this week’s good news.

Over the weekend, we decided to extend the submission deadline for LWC 2014 papers by one week. The reason is plain and simple: the deadline was tight to begin with, and we learned that, despite explicit announcements, some teams missed the fact that a paper submission was added to the process this year.

The deadline for submitting papers is now January 10th, midnight. The deadline for the response to authors remains unchanged.

 

Reference implementation and focus assignment online

As promised, around November 29th we published the reference implementation of the questionnaire platform, as well as some guidelines for the team support and scalability issues. These can be found on GitHub, at https://github.com/dslmeinte/LWC2014.

The file README.md that can be found there explains the design of the reference implementation, while the file “scalability and teamwork.md” gives suggestions and requirements (mainly for model size scalability) on how to deal with the focus assignment.

If you have any questions regarding this, please post them on the Google Group for the Language Workbench Challenge.

Meinte and Angelo will maintain the reference implementation and the focus assignment document – major changes will be largely avoided. If that is not possible, they will be announced here. Cosmetic and minor changes will appear on GitHub without further announcement.

Good luck with the assignment – and we hope to see you in Cambridge in a few months’ time!

Correction update on the CfP for LWC 2014

About a week ago, we released the CfP for the Language Workbench Challenge 2014. As some people have pointed out, the rules are a bit stricter than in past years, because we feel we need to focus more on the actual subject of the LWC: Language Workbenches (LWB). There are many ways to do model driven development and code generation, of which LWB are only one. All of these are candidates to be discussed at the Code Generation conference, with which we have co-located the LWC.

However, we did find a small flaw in the CfP that may lead to confusion. The criteria state that

  “participating tools should be language workbenches, that is, tools that are designed specifically for efficiently defining, integrating and using domain specific languages in an Integrated Development Environment (IDE). Embedded, fixed-language or library-based solutions do not meet this criterion.”

I’d like to point out here (and have updated the CfP PDF accordingly) that the word EMBEDDED here refers to EMBEDDED DSLs, i.e. DSLs that are used to extend a (3GL) programming language using macros, pragmas or whatever facilities that programming language provides. It does not refer to the embedding of a language workbench into an existing IDE that may also be used for purposes other than DSL creation and use, such as Eclipse or Visual Studio.

Should there be any other doubts or questions about the CfP, don’t hesitate to contact us via submission@languageworkbenches.net!

Regards on behalf of the program committee,

Angelo Hulshout