Surface+Skype=Too much stuff

Dear Skype,

Thanks for assuring me that you will take care of my issue. It is also great that you send this confirmation (see below) in response to the painful support chat you impressively offered. Cleverly, you do not provide any means of updating you on the matter. This is troublesome insofar as I managed to work around the issue, and I would prefer to let you know: please do not change this running system, so that I can stop wasting my time on getting the Surface working. Anyway, if you run into customers with my issue (such as Microsoft accounts with confused country/region codes and trouble merging or unlinking Skype and Microsoft accounts), please get in touch; I might be up for some moonlighting. :-)


PS: I do think that the Microsoft Surface RT is pretty impressive, but it looks like the efforts towards identity integration cannot work as they stand. In fact, this platform seems to have amassed so much legacy in terms of identity platforms, constraints, and aggressive attempts at identity integration that I close by saying: I would love to be part of the experiment, but please simplify so that it works.

On Wed, Oct 31, 2012 at 6:31 PM, Skype Customer Service <noreply@skype.net> wrote:
This is an automatically generated email. Please do not reply.

Thanks for contacting Skype.

Just to let you know, we've received the support request you submitted on our website. We'll get back to you with more information in the next 24 hours.

In the meantime, please click here to discover a world of features which will help you enrich your everyday communications.

Hold tight!

The people at Skype

Need a little help right now?
Get Skype Premium and have 24/7 live chat support to help you out whenever you need it, as well as other great features like group video calling.
Skype Premium is available as a daily pass or as a monthly subscription.
Get Skype premium


101companies vNext Pre-alpha


In this post, I sketch a vision for 101companies vNext. The vision's version is Pre-alpha; hence the title of this post. I am submitting this post as a position paper to SL(E)BOK @ SLE2012. Before I start, I want to list the individuals who have helped me arrive at this vision: Jean-Marie Favre, Dragan Gasevic, Martin Leinberger, Thomas Schmorleiz, and Andrei Varanovich.

Good news

The 101companies community project aims at aggregating and organizing knowledge about software technologies, software languages, and software concepts. The project is getting somewhere: "interesting" source code has been added to the 101repo, and "relevant" technologies, languages, and concepts have been added to the 101wiki, continuously for the last two years or so. Also, the project is starting to "make sense" for teaching and professional education. Further, it is starting to be "a viable scientific matter"; see recent conference publications on the general 101companies idea, megamodeling, and linguistic architecture. Yet further, think of the SoTeSoLa 2012 summer school, which ran the reverse/re-engineering dimension on top of 101companies as the theme of choice for its extended hackathon.

Bad news

Not at all surprisingly (to the 101companies initiators anyway), the 101companies project faces huge challenges. For instance, the 101companies implementations are of quite different quality, and it is not even known what their quality is. Also, the documentation of the implementations is often weak. Further, the actual document structure of wiki pages varies beyond repair, which is a consequence of several factors: lack of tool support for checking pages, lack of (complete) agreement on a documentation model, and inconsistency due to the evolving (partial) agreement. Moreover, the process or workflow of becoming a contributor and maintaining contributions is opaque or miserable. Community features are also sorely missing: anyone should easily be able to discuss 101companies implementations and content. Finally, 101companies looks odd without integration with major resources such as Wikipedia and StackOverflow.

101companies vNext Pre-alpha

The occasional 101implementation is still appreciated and those uber-active 101ers will continue to bring them in, but that's no longer a priority. What's much more important now is to put into place the infrastructure, the process (documentation), and the community incentives so that many others can leverage, contribute to, and improve the 101companies project (the 101repo and the 101wiki). I will use the rest of my sabbatical and the attendance at SL(E)BOK @ SLE2012 specifically to connect further, to engage in more discussion, to guide the overall effort, and to ultimately arrive at a plan that solves most of the current issues rapidly and cheaply. For instance, we plan to set up a weekly Google Hangout soon so that new contributors can chime in easily. If you are reading this and have any smart ideas about taking this project to the next level, based on your interests and skills, please get in touch.

A conceivable synergy: 101companies and BOK

Our understanding of what we are doing has grown too much to further ignore the fact that the 101companies ambition is too fat: aggregation of tiny HRMS systems is one challenge; aggregation of a Body of Knowledge (BOK) for software languages, software technologies, and software concepts is another, related but quite distinct challenge. Perhaps the 101wiki shows a humble attempt at assembling some pieces of the BOK, but there can be no doubt that the current content is not much more than an illustration of what's needed, and the current process is clearly not fit to improve things sufficiently. Some basic ideas are in the air as to how to improve things interestingly, but they all require broad discussion and broad involvement. I am looking forward to some discussions at SL(E)BOK @ SLE2012, and perhaps the available 101companies bits of a BOK can just be removed from the 101wiki and soon find a more prosperous home in an emerging BOK.


A software language engineer's potpourri

I am visiting LogicBlox (and hence, transitively, Predictix) in Atlanta to re-learn logic programming properly and see what our SLE super-weapons of massive engineering can do for them. Hence, it probably makes sense to give a talk as a kind of potpourri.

Speaker: Ralf Lämmel (University of Koblenz-Landau)

Title: A software language engineer's potpourri

Abstract: In this talk, I present some of our recent research results and interests; they all relate to and, in fact, enhance software language engineering in a broad sense.

The first topic is the 101companies project, which is developing into an advanced, structured, linked knowledge resource for software developers. At its heart, 101companies is a software chrestomathy, which illustrates 'many' software languages, technologies, and concepts by implementing a Human Resources Management System 'many' times; each implementation selects from the set of optional features for such a system.

The next topic is linguistic architecture or megamodeling, which helps with understanding software technologies at a high level of abstraction by focusing on entities (e.g., languages, technologies, programs, metamodels, files) and relationships between these entities (e.g., domainOf, codomainOf, inputOf, outputOf, conformsTo, correspondsTo). Megamodeling can be powerfully demonstrated by using the 101companies chrestomathy for illustration.
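To make the entity/relationship view of megamodeling a bit more concrete, here is a back-of-the-envelope sketch of a megamodel as a typed graph in Python. This is emphatically not MegaL syntax; the entity names and the well-formedness check are made up for illustration only.

```python
# Hypothetical sketch: a megamodel as declared entities (with their kinds)
# plus typed relationships between them.
entities = {
    "Java": "Language",
    "aProgram.java": "Artifact",
    "javac": "Technology",
}
relationships = [
    ("aProgram.java", "conformsTo", "Java"),  # artifact conforms to language
    ("aProgram.java", "inputOf", "javac"),    # artifact is input of technology
]

def well_formed(entities, relationships):
    """A minimal well-formedness check: both ends of every relationship
    must be declared entities."""
    return all(s in entities and o in entities for s, _, o in relationships)

print(well_formed(entities, relationships))  # True
```

A real megamodeling language would of course also constrain which entity kinds a relationship like conformsTo may connect; the sketch only checks that relationship ends are declared.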

The next topic is linguistic architecture recovery for software products and software repositories: basic aspects of linguistic architecture (e.g., links from source-code artifacts to languages, technologies, and concepts) are recovered semi-automatically in a scalable manner, so that possibly very heterogeneous repositories with diverse languages and technologies can be analyzed. The approach relies on an easily configurable rule-based system that performs various analyses on a product or repository of interest. Such architecture recovery can again be powerfully demonstrated by using the 101companies chrestomathy for illustration.

The last topic is about drilling deeper into language-usage analysis such that a given corpus is understood in terms of coverage of the language constructs, metrics, cloning, validity, and other dimensions of language usage.

Slides: [.pdf]



Various members of the SoftLang Team at Koblenz and collaborators have contributed to the results and visions presented in this talk. The aforementioned papers are coauthored with these researchers:
  • Jean-Marie Favre (University of Grenoble)
  • Dragan Gašević (Athabasca University)
  • Martin Leinberger (Master student in the team)
  • Ekaterina Pek (PhD student in the team) 
  • Thomas Schmorleiz (Master student in the team)
  • Andrei Varanovich (PhD student in the team)


Revealing 101meta and 101explorer

This is an announcement for a talk at University of Brasilia on 8 Aug 2012.

Title: Rule-based metadata annotation for software repositories

Abstract: Take any non-trivial software project: how do we quickly and usefully understand what software languages and software technologies are at work in the project? How can we systematically represent closely related knowledge about software concepts or product features exercised in directories, files, or fragments thereof? How can we, in fact, gather architectural understanding on the grounds of "tags" for languages, technologies, concepts, and features? What can we do to visualize, validate, and otherwise leverage such information for the benefit of understanding projects specifically and computer science generally? In this talk, the language 101meta and the technology 101explorer will be described in an effort to respond to the aforementioned challenges; 101meta and 101explorer are being grown in the 101companies community project.
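The rule-based tagging idea behind the abstract can be approximated in a few lines of Python. This is only a hedged sketch of the general approach, not 101meta's actual notation; the file suffixes, content patterns, and tags below are invented for illustration.

```python
import re

# Hypothetical rules in the spirit of rule-based metadata annotation:
# each rule maps a predicate on a file (suffix and/or content pattern)
# to metadata tags such as languages or technologies.
RULES = [
    {"suffix": ".java", "metadata": {"language": "Java"}},
    {"suffix": ".xsd", "metadata": {"language": "XML Schema"}},
    {"content": re.compile(r"@Entity\b"), "metadata": {"technology": "JPA"}},
]

def annotate(filename, text):
    """Collect the metadata of all rules that match the given file."""
    tags = {}
    for rule in RULES:
        if "suffix" in rule and not filename.endswith(rule["suffix"]):
            continue
        if "content" in rule and not rule["content"].search(text):
            continue
        tags.update(rule["metadata"])
    return tags

print(annotate("Company.java", "@Entity public class Company {}"))
# {'language': 'Java', 'technology': 'JPA'}
```

Applied over a whole repository, such rules yield the "tags" per directory, file, or fragment that the abstract alludes to, which can then be visualized and validated.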

Acknowledgement: This is joint work with Jean-Marie Favre, Martin Leinberger, Thomas Schmorleiz, and Andrei Varanovich.


- 101meta: http://101companies.org/index.php/Language:101meta
- 101explorer: http://101companies.org/index.php/Technology:101explorer
- 101companies: http://101companies.org

Bio of the speaker: see here.


Meeting the shark

Tonight, Henrique Rebêlo will take me to the sea in Recife, which is notorious for the occasional shark attack. This means that this could be my last post, so I am trying to get some stuff done before we go. This includes posting the abstract of the talk that I just gave. There is, in fact, a submitted paper to back up the new content in the talk (such as a rule-based language for metadata association with repository and wiki entities), but I was planning to work a bit more on the paper before revealing it. Chances are that this will never happen; please contact the co-authors if necessary.

Title: Understanding a multi-language, multi-technology software chrestomathy

Abstract: The 101companies community project implements a human-resources management system time and again while using many different software languages and software technologies. A key challenge of this project is to handle, in fact, to make good use of the diversity of languages and technologies involved. There are some emerging techniques of tagging, rendering, browsing, validation, fact extraction, fragment location, etc. such that the 101companies software chrestomathy can be explored richly and insights can be gathered. This presentation will take 42 minutes, cover 42 languages and exercise 42 technologies. This is how long a talk may take; this is how many languages one easily runs into; this is how many technologies one may need to struggle with. Further, the presentation will exercise 7 technological spaces as well as 7 themes of 101companies implementations. All such diversity is by design: it allows us to demonstrate the characteristics of a multi-language, multi-technology software chrestomathy as well as the means of specifically dealing with such a chrestomathy.

Acknowledgement: Joint work with Jean-Marie Favre, Martin Leinberger, Thomas Schmorleiz, and Andrei Varanovich


Movie w/ animated slides subject to manual advancement:


Megamodeling for software technologies

I am visiting Zinovy Diskin and Tom Maibaum at the Department of Computing and Software at McMaster University to get some work going on patterns of bidirectional transformations (BX), which we started at CSXW 2011, a satellite event of GTTSE 2011. I am not going to say more about BX here, though. Instead, I would like to announce my talk at McMaster. Not too surprisingly, I am going to speak about megamodeling. As it happens, and just in time, the megamodeling paper by Jean-Marie Favre, Andrei Varanovich, and me has been accepted for MODELS 2012.

Title: Megamodeling for software technologies

Abstract: The term megamodeling arose specifically in the MDE context, referring to a form of modeling at the macroscopic level such that the model elements in megamodels are models themselves, e.g., metamodels, conformant models, and transformation models. A more general notion of megamodeling takes shape when we go beyond MDE, i.e., when we generalize the scope to arbitrary software technologies and languages. For instance, the notion of model generalizes to the notion of artifact; the notion of metamodel generalizes to the notion of an artifact to relate to in a conformance relationship; the notion of transformation model generalizes to the notion of an artifact that implements a function on artifacts. Further, the notions of language and technology are vital for a general notion of megamodeling. In this talk, we present a general notion of megamodeling, as it is embodied by the MegaL language, as it is utilized within the 101companies Project, and as it has been validated for Object/Relational/XML mapping technologies.

Bio: see here.

Acknowledgment: MegaL is joint work with Jean-Marie Favre and Andrei Varanovich. MegaL is part of the 101companies Project, which is a community project involving many contributors. As far as the subject of the talk is concerned, thanks are due to Martin Leinberger, Marius Rackwitz, and Thomas Schmorleiz.



Design and execution qualities in the 101companies project

I am visiting the Generative Software Development Lab in Waterloo.

Specifically, I am visiting and working with Krzysztof Czarnecki, Zinovy Diskin, and Thiago Tonelli Bartolomei.

Today, I gave a lecture on 101companies, and because it was part of Krzysztof's advanced software architecture/quality class, I focused on 101companies' execution and design qualities. This is an area of 101companies, like so many others, which is still under development. Hence, I closed my lecture with a kind request, which I also open up to others:

Request for help: Submit software quality-related feature proposals for 101companies. Each proposal should contain information like the following: a "headline" (<= 66 characters) and a "description" (What is it? How does it make sense for 101companies' HRMS?). Send your requests to gatekeepers@101companies.org. Do you have an implementation handy?


Slides: [.pdf]

Title: Design and execution qualities in the 101companies project


101companies (101companies.org) is a community project in computer science (or software science) with the objective of developing a free, structured, online knowledge resource, including an open-source repository, for different stakeholders with interests in software technologies, software languages, and technological spaces; notably teachers and learners in software engineering or software languages, as well as software developers, software technologists, and ontologists.

In this lecture, design and execution qualities as covered by 101companies are discussed. The corresponding list of qualities is by no means complete; instead, it is driven by a more technology-centric point of view: certain popular technologies were to be covered, and some emphasis was placed on the notion of technological spaces. Nevertheless, or perhaps specifically as a result of such an approach, an original corpus of illustrations of software qualities with semantically enriched, structured content is now available.

The audience is strongly encouraged to engage with the lecturer and the community project during and after the lecture. It is clear that 101companies can benefit from informed proposals and contributions on software qualities specifically.


Understanding Technological Spaces

On 14 June, CWI (the Dutch Center for Mathematics and Computer Science in Amsterdam) organizes the CWI Lectures on Understanding Software. I quote from the website: "The lectures are organized in honor of Paul Klint having been awarded a CWI research fellowship, as well as his 40th anniversary at CWI. The program features internationally renowned speakers, that represent different visions on understanding software. Understanding how to build better software is as important as understanding how existing software can be maintained and improved. The lectures are accessible for software engineering practitioners, software engineering students, business practitioners and academic researchers alike."

My years in Amsterdam (1999-2004), at CWI and VU, in the research groups of Paul Klint and Chris Verhoef have profoundly shaped me as a scientist and a software engineer. After all these years, I feel very much honored to deliver a lecture at Paul Klint's distinguished anniversary.

Details of my talk follow below.


Title: Understanding Technological Spaces

Abstract: Comprehensive understanding of software necessitates understanding of technological spaces, i.e., community and technology contexts, as they include specific software languages, software technologies, software development methods, related knowledge resources, and community venues and fora. In this lecture, I will motivate and demonstrate the emerging community project 101companies as a means of gathering and organizing knowledge about technological spaces for the broad benefit of software engineers, software scientists, educators, and learners. Further, I will discuss some methods of knowledge acquisition and representation, e.g., megamodeling, API analysis, and language-usage analysis.

Acknowledgment: 101companies is a community project which I am happy to represent in this talk. I am grateful for broader collaboration with (in alphabetical order) Coen De Roover, Jean-Marie Favre, Ekaterina Pek, Thomas Schmorleiz, and Andrei Varanovich.

Bio of the speaker: See here.

Related links:

The 101companies Project:

An introduction to the project:

A related megamodeling effort:


Should I declare defeat on the research topic of API migration?

Of course, I won't, but perhaps I should! Then I could turn to lower-hanging fruit in research, which I would first need to spot, which I can't, though, because I am a bit obsessed with API migration (and admittedly some other stuff such as megamodeling). Sigh!

It was around 2004 that I became interested in API migration, and I have talked about it here and there ever since. Perhaps I keep thinking that talking about a difficult problem of interest helps in discovering the solution to the problem, or at least a sensible path forward. Wishful thinking so far!

In theory, the objective of API migration made a lot of sense while I was on the XML team at Microsoft because there are obviously way too many XML APIs. In practice, nothing happened on this front because I didn't understand automated API migration well enough back then. Add to this that API migration is something that is potentially risky for the API provider and the API migrator. So you need to mash up a rocket scientist and top politician to succeed. I am not yet there.

Back in academia, it took until around 2009 before we had a useful and publishable effort on API migration (see the SLE 2009 paper); just a year later, another one followed (see the ICSM 2010 paper). I kept on working with Thiago in 2010-2012, but our efforts on language support for wrapper- and transformation-based migration hit a sort of brick wall. At least for now, we are taking some rest. We have submitted another API migration paper; it is about an advanced technique for automated testing in wrapper development. This research is also backed up by additional wrapper studies.

So we haven't failed by any means, but we are, depressingly, just at the stage of wrapper designers and engineers: we understand how to design wrappers (using patterns, for example), how to test wrappers (on the grounds of sophisticated test-data generation and contract assertions), what API differences to expect, how to spot them, and how to respond to them. We would like to be at the stage of language-based API migrators.

What am I supposed to do when a research effort hasn't made the progress that I expected years back, when I was too naive? Rather than bailing out, I am going to do two things: a) I am going to compile a talk that deeply analyzes what I have learned and what I think could or should be done; b) I am going to compile a funding application so that focused research efforts can target the interesting topic of API migration.

As to the talk, I am looking forward to a visit with Suraj C. Kothari at Iowa State University in Ames next week, and here is the plan for this talk. (The trip to Ames is a trip during the trip, because I am going to Ames during a trip to Omaha. From a recursion-theoretic point of view, I am obviously interested in carrying out a trip during the trip during the trip. This is certainly a good exercise in trying to understand the difference between left- and right-associativity.)


Title: API migration is a hard problem!

Slides: [.pdf]

Abstract: API migration refers to software migration in the sense of software reengineering: the objective is to eliminate an application's dependencies on a given API and make it depend instead on another API. Hence, we may speak of the original API versus the replacement API. In principle, migration can be achieved by a wrapping approach (the original API is re-implemented in terms of the replacement API, so that the original implementation becomes obsolete and the application itself does not need to be changed) or by a transformation approach (the code of the application is rewritten, so that references to the original API are replaced by references to the replacement API). A degenerate case of API migration would be API upgrade or downgrade, where the two APIs are essentially versions of each other with an effective relationship between the versions, such that the wrapper or the transformation for migration can be derived from a suitably recorded, inferred, or specified relationship. The synthesis of a transformation or a wrapper is considerably more involved when the APIs at hand do not relate in such an "obvious" manner, i.e., when they have been developed more or less independently. The two APIs still serve the same domain (e.g., GUI or XML programming), but they differ in terms of features, protocols, contracts, type hierarchies, and other aspects. In this talk, I provide insight into such differences and explain existing, often primitive (laborious) migration techniques, which are mostly focused on wrapping. I use a number of case studies for empirical substantiation. I conclude with an outlook on the challenges ahead, with indications as to the techniques and methods to be used or developed. Program analysis must provide the heavy lifting to make progress on the hard problem of API migration.
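To illustrate the wrapping approach described in the abstract, here is a minimal Python sketch: the original API's interface is re-implemented on top of the replacement API, so that application code keeps calling the original names unchanged. Both logging APIs are hypothetical toys, invented solely for this illustration.

```python
class ReplacementLogger:
    """The replacement API: a single method with a level parameter."""
    def write(self, level, text):
        return f"[{level}] {text}"

class OriginalLogger:
    """The original API, re-implemented as a wrapper over the replacement
    API. The API difference (one method per level vs. a level parameter)
    is bridged inside the wrapper."""
    def __init__(self):
        self._delegate = ReplacementLogger()
    def info(self, text):
        return self._delegate.write("INFO", text)
    def error(self, text):
        return self._delegate.write("ERROR", text)

# Application code is untouched by the migration:
log = OriginalLogger()
print(log.info("migration works"))  # [INFO] migration works
```

The transformation approach would instead rewrite the application's calls to use ReplacementLogger directly; the hard part, in either approach, is bridging the deeper differences in features, protocols, and contracts that the abstract mentions.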

Acknowledgement: This is joint work with (in alphabetic order) Thiago Tonelli Bartolomei (University of Waterloo, Canada), Krzysztof Czarnecki (University of Waterloo, Canada), Tijs van der Storm (CWI, Amsterdam, The Netherlands). I also acknowledge joint work within the Software Languages Team on the related subject of API (usage) analysis; special thanks are due to Ekaterina Pek.



More than you ever wanted to know about grammar-based testing

Preamble: Ever since 1999 +/- 100 years, I have been working (sporadically, intensively) on grammar-based testing. The latest result was our SLE'11 paper on grammar comparison (joint work with Bernd Fischer and Vadim Zaytsev). I had previously tried to compile a comprehensive slide deck on grammar-based testing, also with coverage on this blog, but that was relatively unambitious. With the new SLE'11 results at hand, and with the firm goal of pushing grammar-based testing further into CS education (in the context of both formal language theory and software language engineering), I have now developed an uber-comprehensive slide deck with awesome illustrations for the kids. If you are reading this post ahead of the lecture, and if you are still planning to attend, then you are well advised to bring brains and coffee. You may also bring a body bag, in case you pass out or worse. As it happens, this is "too much stuff" for a regular talk, lecture, or any reasonable format for that matter. I will run a first "user study" on this slide deck in a class on formal language theory in Omaha this Thursday; thanks to Victor Winter's trust in the survivability of this stuff, or why would he share his class with me otherwise? As a last resort and an exercise in adaptive talking, I am just going to skip major parts based on the (missing) background of my audience. To summarize, if I end up under a bus today, then all the grammar-based testing stuff is documented for mankind. (That's what Victor said.)

Title of the lecture: Quite an introduction to grammar-based testing

Slides of the lecture: [.pdf]

Elevator pitch for the lecture: Grammars are everywhere; resistance is futile. (More verbosely: If it feels like a grammar (after due consideration and subject to a Bachelor+ degree in CS), then it's probably just one. Just because some grammars mask themselves as object models, schemas, ducks, and friends, you should not move over to the dark side.) Seriously, non-grammars are cool, but life is short, so we need to focus. (I am sort of focusing on grammars and I am not even @grammarware.) Now, even grammars and grammar-based software components have problems, and testing may come to rescue. Perhaps, you think you know what's coming, but you don't have a clue.

Abstract of the lecture: Depending on where you draw the line, grammar-based testing was born, proper, in 1972 with Purdom's article on sentence generation for testing parsers. Computer scientists were really obsessed with parsers and compilers in the last millennium, and much work followed in the seventies, eighties, and early nineties. Burgess' survey on the automated generation of test cases for compilers summarized this huge area in 1994. Why would you want to test a compiler? It could suffer from regressions along evolution; it could differ from another compiler that serves as a reference; it could fail to comply with the language specification (perhaps even the grammar in there); it could break when being stressed; it could simply miss some important case. Non-automated testing really does not suffice in these cases. You cannot possibly (certainly not systematically) test a grammar-based software component other than by generating test data (in fact, test cases) automatically, unless the underlying grammar is trivial. Grammar-based testing suddenly becomes super-important when much software turns out to be grammar-based (other than parsers and compilers): virtual machines, de-/serialization frameworks, reverse and re-engineering tools, embedded DSL interpreters, APIs, and what have you. Such promotion of grammar-based testing to the horizon of software engineering was perhaps first pronounced by Sirer and Bershad's paper on using grammars for software testing in 1999. Grammar-based testing is not straightforward, by any means, in several dimensions. For instance, coverage criteria for test-data generation must be convincing in terms of "covering all the important cases" and "scaling for non-trivial grammars". Also, the forms of grammars found in practice are "impure" more often than not; think of semantic constraints represented in different ways. Related to the matter of semantics, any automated test-data generation approach relies on an automatic oracle, and getting such an oracle is never easy. This lecture is going to present a certain view on grammar-based testing, which is heavily influenced by the speaker's research and studies. In addition to the speaker's principal admiration of grammars and grammar-based software, the reason for such obsession with grammar-based testing is that this domain is so exciting in terms of combining formal language theory, (automated) software engineering, and declarative programming. This lecture is an attempt to convey important techniques and interesting challenges in grammar-based testing.
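The core idea of grammar-based test-data generation can be conveyed in a few lines. Below is a naive bounded-depth enumerator over a tiny, invented expression grammar; it is not Purdom's algorithm (which cleverly covers every production with few sentences), but it shows how sentences fall out of a grammar mechanically.

```python
# A toy context-free grammar: nonterminals map to lists of productions;
# symbols absent from the grammar are terminals.
GRAMMAR = {
    "expr": [["term"], ["term", "+", "expr"]],
    "term": [["x"], ["(", "expr", ")"]],
}

def generate(symbol, depth):
    """Enumerate all sentences derivable from symbol within the depth bound.
    Each sentence is a list of terminal symbols."""
    if symbol not in GRAMMAR:           # terminal symbol
        return [[symbol]]
    if depth == 0:                      # depth exhausted: prune derivations
        return []
    sentences = []
    for production in GRAMMAR[symbol]:
        # Cross product of the sentence sets of the production's symbols.
        partials = [[]]
        for sym in production:
            partials = [p + s for p in partials
                        for s in generate(sym, depth - 1)]
        sentences.extend(partials)
    return sentences

print(sorted(" ".join(s) for s in generate("expr", 3)))
# ['x', 'x + x']
```

Real generators replace the crude depth bound with proper coverage criteria (e.g., exercising every production at least once), which is exactly where the "covering all the important cases" versus "scaling for non-trivial grammars" tension from the abstract shows up.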

Bio of the speaker: As earlier this week. (Nothing much has happened very recently.)

Acknowledgement: The presented work was carried out over several years in collaboration with (in alphabetical order) Bernd Fischer (University of Southampton, UK), Jörg Harm (akquinet AG, Hamburg, Germany), Wolfram Schulte (MSR, Redmond, WA, USA), Chris Verhoef (Vrije Universiteit, Amsterdam, NL), Vadim Zaytsev (CWI, Amsterdam, NL)

Related papers by the speaker (and collaborators):

Related patent:

Have fun!



Technical space travel for developers, researchers, and educators

The inevitable has happened.
I have committed myself to giving the first major talk on 101companies (not counting the AOSD 2011 tutorial, which described an early view on the universe).
This outing talk happens to be at the CS Department at University of Nebraska at Omaha, as I will be visiting Victor Winter the next two weeks.

Ralf Lämmel (University of Koblenz-Landau)

Joint work with Jean-Marie Favre, Thomas Schmorleiz, and Andrei Varanovich.

Technical space travel for developers, researchers, and educators

A technical space is a technology and community context in computer science and information technology. For example, the technical space of XMLware deals with data representation in XML, data modeling with XML schema, and data processing with XQuery, XSLT, DOM, and LINQ to XML. Likewise, the technical space of tableware deals with data representation in a relational database, data modeling according to the relational model or the ER model, and data processing with SQL and friends. There are various other, not necessarily orthogonal technical spaces: Javaware, grammarware, objectware, lambdaware, serviceware, etc. How can we easily travel between spaces, given that software products may involve multiple spaces? How can we deal reasonably with the plethora of technologies and languages in computer science and information technology? How can we profoundly experience the universe in a scientifically and educationally relevant manner? We approach these questions in the emerging 101companies project for space-traveling developers, researchers, and educators on the grounds of a wiki, a source-code repository, and an ontology.

Slides: [.pdf]


More of a discussion on web privacy

I had the pleasure of giving a talk today on web privacy and P3P at Ecole des Mines de Nantes, in the ASCOLA research team, by kind invitation of Mario Südholt. The hidden agenda was to promote our empirical research on P3P, but we also agreed upfront to attempt a more general discussion of web privacy. So you will find little empirical stuff in the early parts of the slide deck.

Title: More of a discussion on web privacy

Abstract: The presentation begins with observations about the current state of web privacy on the internet today. The presentation continues to set up some challenges for web privacy to be addressed in practice, subject to contributions by CS research. The technical core of the presentation is a language engineer's approach to understanding W3C's P3P language for privacy policies of web-based systems. Discussion during and after the talk is strongly appreciated.

Acknowledgement: This is joint work with Ekaterina Pek, ADAPT Team, University of Koblenz-Landau



MegaL goes Nantes

The Software Languages Team in Koblenz, with potent support from visiting scientist Jean-Marie Favre, is getting increasingly excited and knowledgeable about megamodels for software technologies and software products. MegaL is the megamodeling language under development. During upcoming research visits, I expect to present MegaL: its rationale, some applications, and ongoing research. The first presentation of this kind is to take place in Nantes, in the AtlanMod team. The talk announcement follows.

Title: A megamodel of the ATL model transformation language and toolkit

Abstract: According to http://www.eclipse.org/atl/, "ATL (ATL Transformation Language) is a model transformation language and toolkit. In the field of Model-Driven Engineering (MDE), ATL provides ways to produce a set of target models from a set of source models." We would like to deeply understand the linguistic architecture of ATL in terms of all the involved software languages, metamodels, technologies, and relationships between all of them. To this end, we leverage a suitable form of megamodeling, as it is supported by the (mega)modeling language MegaL. In this manner, we discover some shortcomings of common, informal explanations of ATL and opportunities for highly systematic discussion of ATL.

Acknowledgements: Joint work with Jean-Marie Favre, Martin Leinberger, Thomas Schmorleiz, and Andrei Varanovich.

  • A related paper on megamodeling: [.html]
  • Slide deck of the talk: [.pdf]

Bio: Ralf Lämmel has been Professor of Computer Science at the Department of Computer Science at the University of Koblenz-Landau since July 2007. In the past, he held positions at Microsoft Corp., the Free University of Amsterdam, CWI (the Dutch Center for Mathematics and Computer Science), and the University of Rostock, Germany. Ralf Lämmel's speciality is "software language engineering", but he is generally interested in themes that combine software engineering and programming languages. His research and teaching interests include program transformation, software re-engineering, and grammar-based methods, as well as model-driven and model-based methods. Ralf Lämmel is a committed member of the research community; he is one of the founding fathers of the international summer school series on Generative and Transformational Techniques in Software Engineering (GTTSE) as well as the international conference on Software Language Engineering (SLE).