During the last editions of the Language Workbench Challenge and Code Generation, we noticed that a significant part of the delegates were from the Netherlands. We also concluded that, even though the Xtext folks were not there this year, there is quite a bit of language construction and code generation going on in Germany. Last year (or was it two years ago?), I discussed with Steven Kelly how a lot of DSL and code generation work comes from Europe rather than from, e.g., the United States; now it seems that within Europe things are narrowing down to the Netherlands and Germany (and Jyväskylä) – or is this only because of the way in which, and the locations where, Code Generation and the Language Workbench Challenge are advertised? It is a fact, however, that quite a few Dutch people have built their Ph.D. theses around environments like Spoofax (Delft University), that Rascal is used extensively for training purposes at the University of Amsterdam, that companies like Océ, FEI Company and ASML are increasing their MDD activities, and that Xtext and MPS are used extensively in Germany. Actually, we should all attend another Ph.D. defense on June 18th, when Markus Völter defends his thesis at Delft University. Hmmm, that’s Germany and the Netherlands again…
It’s a wrap!
The Language Workbench Challenge, co-located with the Code Generation conference at Churchill College, Cambridge, UK, once again was a very nice day featuring demos of 7 language workbenches and their newest features. But above all it was a joy to meet all the participants and “non-challenging” attendees and to exchange knowledge, views and experiences.
The LWC 2013 had a record number of participants. There were three problems associated with this:
- It was a challenge to give each participant a fair amount of presentation and demo time. It was equally hard to properly evaluate all workbenches both in their own right and against each other, tallying their strengths and weaknesses.
- Some of the contributions definitely stretched the idea of a language workbench: some were not quite a workbench (in the sense of an integrated tool), others were not quite “language-y” (e.g., the contents of a database is not prose per se).
- The record number is an indication of significant fragmentation in a field that can hardly be considered large enough to sustain it.
Because of the first two issues, the program committee decided to draw up selection criteria. Participants were asked to submit a short report explaining their (future) implementation of the assignment, and addressing how they fulfilled these criteria.
After an initial misunderstanding in the assignment’s text was rectified, this turned out to work quite well: this year we had a smaller number of candidates (7), all of which were promoted to participants based on their reports. Although five of these had participated in previous years, there were also two new ones, showing that the selection criteria weren’t prohibitive.
For space reasons, we’ll defer discussing all contributions to the next blog.
This year’s assignment consisted of a base part and a focus assignment. The base part was essentially the same as last year: the implementation of a Questionnaire Language (QL) and a styling language for that (QLS).
We (i.e., the program committee) had provided a reference implementation of an example questionnaire, using HTML5 and based on a simple questionnaire framework, to save participants the effort of creating one themselves. In practice, this meant that participants could either interpret the model or generate code from it in a very straightforward way, requiring a minimal amount of effort. Use of the reference implementation/framework was optional, but we were happy to see that most participants used it and that it indeed allowed them to focus on the essence of implementing the languages.
The focus assignment consisted of two parts: one focusing on “groupware” capabilities that allow models to be worked on by various people, and one focusing on “scalability” in the sense that one could work (either alone or as a team) on large models. These topics touch on concepts like version control (including “semantic”/language-aware diffing), model persistence and performance. For the latter part, we asked for the implementation of a binary search in questionnaire form.
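To make that last part a bit more concrete: a binary search can be phrased as a questionnaire whose yes/no questions successively halve a numeric range. The sketch below (in Python, using a completely made-up QL-ish syntax – not any workbench’s actual grammar) shows how such a model could be generated; the deeply nested conditionals it produces are exactly the kind of structure that stresses scalability:

```python
# Hedged sketch: emit a QL-like questionnaire performing a binary search
# over the range lo..hi via nested yes/no questions. The syntax is
# invented for illustration only.

def binary_search_questionnaire(lo, hi, indent=1):
    pad = "  " * indent
    if lo == hi:
        # Leaf: the range has narrowed to a single number.
        return f'{pad}"Your number is {lo}!" found_{lo}: boolean\n'
    mid = (lo + hi) // 2
    name = f"gt_{mid}"
    text = f'{pad}"Is your number greater than {mid}?" {name}: boolean\n'
    text += f"{pad}if ({name}) {{\n"
    text += binary_search_questionnaire(mid + 1, hi, indent + 1)
    text += f"{pad}}} else {{\n"
    text += binary_search_questionnaire(lo, mid, indent + 1)
    text += f"{pad}}}\n"
    return text

model = "form BinarySearch {\n" + binary_search_questionnaire(1, 16) + "}\n"
print(model)
```

For a range of 1..16 this already yields 15 questions and 16 result leaves; scaling the range up gives an easy way to produce arbitrarily large models.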
Persistence of large-scale models
After the last presentation, we had an impromptu presentation by Jürgen Mutschall from MetaModules on the persistence of large models. He demonstrated a system which held the complete Kepler code base, with parsed Java code viewed as a model, in a database which could then be used to inspect, reason about and manipulate (e.g., refactor) it. It was interesting to see the good ol’ JEE stack being used for model storage, and to see it perform really well.
Hands-on session and group discussions
A large part of the afternoon was given over to a hands-on session where attendees could look at the language workbenches and discuss possibilities etc. Although this part was not structured, it certainly led to interesting discussions.
In order to have a good assignment for the LWC 2015 edition we held a little “Agile-style” brainstorm on what subjects could be interesting. The list we came up with (in no particular order):
- Life Cycle Management
- Mixing of different notations
- What is a language workbench?
- Business users?
- Interpretation vs. generation
- Debugging on the model level
Also, the question was raised whether it would be possible to focus on ‘addressing a question rather than building a lot of stuff’.
All of these suggestions, and more ideas, will be taken into consideration for LWC 2015.
Today, we made a small update to the reference implementation of the LWC 2014 assignment. The reference implementation now includes a conditionalFormElementWidget, next to the existing simpleFormElementWidget, which allows the use of individual conditional questions without having to wrap them in a conditionalGroupWidget.
It’s a small change that will not affect the models created by LWC participants, but it may affect their code generators if they want to use it. Did we cause problems for them? Hopefully not – this is the type of change to be expected in a real-life situation as well.
Updates are available in the LWC2014 Git repository: https://github.com/dslmeinte/LWC2014
That’s a question I never really asked myself, but I do get it from other people every once in a while, when I tell them about Code Generation, the Language Workbench Challenge or just model driven software development in general.
The answer could be summarized as ‘for the same reason a blacksmith needs a hammer and an anvil, and a shoemaker needs a boot tree and a needle’. Every tradesman needs the right tools for the job.
Just imagine this: you are working on a piece of software and you find yourself repeatedly going to your customer to ask how they want your application to perform a certain task. Then you find out that most of the implementations resulting from this are quite similar in nature, and you decide to create a small DSL to address that fact and speed up both your talks and your development.
Pah! Instead of repeatedly writing the same code, you now write small statements (either graphical or textual) that describe the problems your customer wants you to solve and from which you can generate working software. Nice!
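To make the idea tangible, here is a deliberately tiny, made-up example: one line of DSL describing a recurring customer request, and a generator that turns it into working code. Both the DSL syntax and the generated function are invented for this illustration only.

```python
import re

def generate(dsl_line):
    # Parse a made-up one-line DSL: ask "<question>" into <field>
    m = re.match(r'ask "(?P<label>[^"]+)" into (?P<field>\w+)$', dsl_line)
    if m is None:
        raise ValueError(f"cannot parse DSL line: {dsl_line!r}")
    # Emit a plain Python function that asks the question and returns the answer.
    return (f"def collect_{m['field']}():\n"
            f"    return input({m['label']!r} + ' ')\n")

code = generate('ask "What is your date of birth?" into birthdate')
print(code)
```

One statement in, one working routine out – which is the whole point: the customer conversation happens at the level of the statement, not the generated code.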
Now, let’s take a step forward. Suppose you are still doing the same thing, about a year later. Because of your good work, you now have more customers, and each of them requests more features than the initial one. To cope with that, you have set up a team of developers, because despite the use of DSLs and code generators, you can’t do it all by yourself.
And suddenly, you find yourself in a situation where the small models (because that’s what they are) that you expressed in your DSL start referring to each other and being (re)used by multiple people on your team. Now which version of which model do you need for product A of customer B, and which one do you need for product C of customer D? And how do you deal with the fact that people are making changes to the same model for different products of the same customer?
There are different solutions to that problem, but I reckon most of them involve (re)dividing the models and performing version control on them – maybe even real configuration management, allowing you to mix and match different versions of models. If only you had the tools to support you in that: models are often not stored in a text format like traditional program code, so tools like diff and merge often don’t work, leading to all kinds of issues and additional manual labour.
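What a language-aware alternative to textual diff could look like: compare models as structures rather than lines of text. The sketch below is a toy stand-in – it assumes models are nested dicts of named properties, which is of course far simpler than the real model structures a language workbench would compare.

```python
# Minimal sketch of a structural ("semantic") diff between two model
# versions, each represented as a nested dict of named properties.

def model_diff(old, new, path=""):
    changes = []
    for key in sorted(set(old) | set(new)):
        here = f"{path}/{key}"
        if key not in old:
            changes.append(("added", here, new[key]))
        elif key not in new:
            changes.append(("removed", here, old[key]))
        elif isinstance(old[key], dict) and isinstance(new[key], dict):
            changes.extend(model_diff(old[key], new[key], here))  # recurse into children
        elif old[key] != new[key]:
            changes.append(("changed", here, (old[key], new[key])))
    return changes

# Two versions of a (made-up) questionnaire model:
v1 = {"form": {"q1": {"label": "Did you sell a house?", "type": "boolean"}}}
v2 = {"form": {"q1": {"label": "Did you sell a house in 2010?", "type": "boolean"},
               "q2": {"label": "What was the selling price?", "type": "money"}}}
print(model_diff(v1, v2))
```

Instead of a wall of changed lines, you get changes expressed in the model’s own terms: a relabelled question here, an added question there – which is precisely what makes merging by multiple team members feasible.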
In a similar fashion, you may run into problems not because your team grows, but because your models grow bigger – another reason to split them up and maybe start doing version control on the parts. Also, you may find that your models become better if certain parts are expressed in another, dedicated DSL (just think of a DSL for specifying the pages of an app, and another one for defining application flow). How do you mix these DSLs in such a way that you can easily define the models required to solve your customer’s problem and generate the code?
This is a very short and possibly rather abstract example, but it is exactly what the case we put up for the Language Workbench Challenge is about. We work with (sets of) domain specific languages, we work in teams, and we work on models that are getting bigger. This will only help us if we can do it efficiently, and if we use tools that help us with issues like the ones described above – issues that distract us from our real goal: solving the customer’s problem. These tools are the language workbenches we discuss and compare in the LWC workshop, now in its fourth year. So, if you are interested in this topic, because you work with domain specific languages or are working on a workbench yourself – join us in Cambridge on April 8th, 2014. Registration details can be found here, and information about the workshop can be found here.
And while you’re at it – it’s worth considering combining a visit to this workshop with a visit to Code Generation, or vice versa.
See you all in Cambridge!
Today, we sent out the confirmations of the accepted proposals for LWC 2014. We are very pleased to announce that the following Language Workbenches will be present at the workshop:
Let’s start off with a Happy New Year to the MDSD and LWB community, before bringing this week’s good news.
Over the weekend, we decided to extend the submission deadline for LWC 2014 papers by one week. The reason is plain and simple: the deadline was tight to begin with, and we learned that, despite explicit announcements, some teams missed the fact that a paper submission was added to the process this year.
The deadline for submitting papers is now January 10th, midnight. The deadline for responses to authors remains unchanged.
As promised, around November 29th we have published the reference implementation of the questionnaire platform, as well as some guidelines for the team support and scalability issues. These can be found on Git, at https://github.com/dslmeinte/LWC2014.
The file README.md found there explains the design of the reference implementation, while the file “scalability and teamwork.md” gives suggestions and requirements (mainly regarding model size scalability) for dealing with the focus assignment.
If you have any questions regarding this, please post them on the Google Group for the Language Workbench Challenge.
Meinte and Angelo will maintain the reference implementation and the focus assignment document. Major changes will be largely avoided; if that turns out not to be possible, they will be announced here. Cosmetic and minor changes will appear on Git without further announcement.
Good luck with the assignment – and we hope to see you in Cambridge in a few months’ time!
About a week ago, we released the CfP for the Language Workbench Challenge 2014. As some people have pointed out, the rules are a bit stricter than in past years, because we feel we need to focus more on the actual subject of the LWC: Language Workbenches (LWBs). There are many ways to do model driven development and code generation, of which LWBs are only one. All of these are candidates to be discussed at the Code Generation conference, with which we have co-located the LWC.
However, we did find a small flaw in the CfP that may lead to confusion. The criteria state that
“participating tools should be language workbenches, that is, tools that are designed
specifically for efficiently defining, integrating and using domain specific languages in an
Integrated Development Environment (IDE). Embedded, fixed-language or library-based
solutions do not meet this criterion.”
I’d like to point out here (and have updated the CfP PDF accordingly) that the word EMBEDDED refers to embedded DSLs, i.e. DSLs that are used to extend a (3GL) programming language using macros, pragmas or whatever facilities that programming language provides. It does not refer to the embedding of a language workbench into an existing IDE that may also be used for purposes other than DSL creation and use, such as Eclipse or Visual Studio.
Should there be any other doubts or questions about the CfP, don’t hesitate to contact us via email@example.com!
Regards on behalf of the program committee,