Domain-specific modeling languages represent domain knowledge abstractly, so that users without technical expertise can more easily understand model content. These languages can be created for any domain, provided the necessary knowledge is available. This research uses educational game design to demonstrate the power of domain-specific modeling. Games are useful tools for supplementing traditional education; however, many educators do not possess the design or technical skills to develop a custom game for their own use. MOLEGA (the Modeling Language for Educational Card Games) is a domain-specific modeling language that provides a guided model design environment for these users. Using MOLEGA, users create visual models, inspired by UML class diagrams, to represent their desired card game, based on two selected game variants. The user models are then used to generate executable source code for a mobile-compatible, browser-based game that can be deployed on a server by following the provided instructions. MOLEGA is evaluated for validity and correctness using a suite of example models.
We present PrintTalk, a DSL to "program" 3D objects, called "gadgets". PrintTalk also features "topologies", which are predefined spatial arrangements of gadgets. Gadgets are composed by executing a gadget script (possibly consisting of subscripts) that "draws" the gadget in the 3D scene. Executing the script also returns a number of constraint variables. These variables can be constrained inside the gadget and can also be bound outside it in order to constrain the produced gadgets after the fact. This is the essence of PrintTalk's gadget composition mechanism.
PrintTalk is implemented in DrRacket. Running a PrintTalk program generates a file that is sent to the 3D printer. We validate PrintTalk qualitatively by comparing the code for complex gadgets with the code needed to print those gadgets in existing languages.
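The composition mechanism described above — a gadget script that draws into a scene and returns constraint variables bindable after the fact — can be illustrated with a minimal sketch. This is not PrintTalk code (PrintTalk is Racket-based); all names here are hypothetical, and the sketch only shows the general idea of post-hoc constraining.

```python
# Illustrative sketch (NOT actual PrintTalk code): a gadget script "draws"
# into a scene and returns constraint variables that callers bind afterwards.

class Var:
    """A constraint variable that may be bound after gadget creation."""
    def __init__(self, name):
        self.name = name
        self.value = None

    def bind(self, value):
        # Reject conflicting bindings, as a constraint system would.
        if self.value is not None and self.value != value:
            raise ValueError(f"conflicting binding for {self.name}")
        self.value = value

def box_gadget(scene):
    """Gadget script: appends a drawing command to the scene and
    returns its constraint variables (width, height)."""
    width, height = Var("width"), Var("height")
    scene.append(("box", width, height))
    return width, height

# Compose: run the script, then constrain the result "after the fact".
scene = []
w, h = box_gadget(scene)
w.bind(20)           # bound outside the gadget
h.bind(w.value * 2)  # constrained relative to another variable
```

The point of the mechanism is that the drawing and the dimensioning are decoupled: a composite gadget can run subscripts first and impose relationships between their variables afterwards.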
At Philips IGT, we develop and produce interventional X-ray systems. For a controller in these systems, we have a domain-specific language that is approximately five years old. Like general-purpose programming languages, domain-specific languages also evolve; they co-evolve with their domain. The language used at IGT was initially created for a single system instance. Because of our positive experiences with the language, we want to evolve it to support a family of systems. In this paper, we report on our experiences with the modifications we made to the original language. We made these changes while preserving the behavior of the existing system instance. To avoid confidentiality issues, we use a Lego robot in our examples.
Domain-specific languages seek to provide domain guarantees that eliminate many errors allowed by general-purpose languages. Still, a domain-specific language requires additional quality assurance measures to ensure that specifications behave as intended by the users. However, some domains have their own quality assurance measures (e.g., proofs, experiments, or case studies) and little tradition of using the quality assurance measures customary to software engineering. We investigate the possibility of accommodating such domains by conducting a workshop with 11 prospective users of MAL, a domain-specific language for the pension industry. The workshop emphasised the need to support actuaries with new analytical tools for quality assurance and resulted in three designs: quantity monitors let users identify outlier behaviour, fragment debugging lets users debug with limited evaluative power, and debugging spreadsheets let users visualise, analyse, and remodel concrete calculations with an established domain tool. Based on our experiences, we hypothesise that co-design workshops are a viable approach for DSLs in a similar situation.
Model-based systems engineering (MBSE) enables verification of system performance using system behavior models, which can identify design faults that violate stakeholders’ requirements as early as possible, thus reducing R&D cost and error risks. Currently, engineers in different domains use different modeling languages to create their own behavior models, and these heterogeneous models are verified by different approaches; it is therefore difficult to adopt a unified, integrated platform for modeling and verifying heterogeneous behavior models during the conceptual design phase. This paper proposes a unified modeling and verification approach for such models. The KARMA language is used to provide unified formalisms across MBSE models and dynamic simulations for different domain-specific models. To describe behavior models more precisely and to facilitate verification, the syntax of hybrid automata is integrated into KARMA. We implemented behavior models and their verification in MetaGraph, a multi-architecture modeling tool. Finally, the effectiveness of the proposed approach is validated by two cases: 1) the scenario of booking railway tickets using BPMN models; 2) the behavior performance simulation of unmanned vehicles using a SysML state machine diagram.
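A hybrid automaton, the formalism whose syntax the abstract mentions integrating into KARMA, combines continuous flows with discrete, guarded transitions. The following is a generic textbook-style sketch (a bouncing ball with Euler integration), not KARMA syntax and not an example from the paper.

```python
# Illustrative hybrid automaton (NOT KARMA syntax): a bouncing ball with a
# continuous "falling" mode and a discrete "bounce" transition when y hits 0.

def simulate_bounce(y0, v0, dt=0.001, t_end=1.0, g=9.81, damping=0.8):
    """Integrate the falling mode; take the bounce transition on y <= 0."""
    y, v, t = y0, v0, 0.0
    bounces = 0
    while t < t_end:
        y += v * dt           # continuous flow: dy/dt = v
        v -= g * dt           # continuous flow: dv/dt = -g
        if y <= 0 and v < 0:  # guard of the discrete transition
            y = 0.0           # reset map: clamp position,
            v = -v * damping  # reverse and damp velocity
            bounces += 1
        t += dt
    return y, v, bounces
```

Verification of such models typically asks reachability questions, e.g., whether the guard can fire a given number of times within a time bound.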
This paper presents our preliminary results on developing an incremental query and transformation engine for our modeling framework. Our prior framework combined WebGME, a cloud-based collaborative modeling tool, with FORMULA, a language and tool for specifying and analyzing domain-specific modeling languages. While this arrangement has been successful for defining non-trivial languages in domains like CPS, one ongoing challenge is the scalability of executing model queries and transformations on large models. The inherently incremental nature of the modeling process exacerbates this scalability issue: model queries and transformations are repeatedly performed on incrementally updated models. To address this issue, we are developing an incremental version of FORMULA that can perform efficient model queries and transformations in the face of continual model updates. This paper describes our experiences designing this incremental version, including the challenges we faced and the design decisions we made. We also report encouraging benchmark results.
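The core idea of incremental evaluation — reusing cached query results and recomputing only what an update invalidates — can be sketched in miniature. This is a toy analogue with hypothetical names, not the actual FORMULA or WebGME API.

```python
# Toy analogue of incremental query evaluation (NOT the FORMULA engine):
# cached results are invalidated only when an update touches their read set.

class IncrementalEngine:
    def __init__(self):
        self.nodes = {}    # node id -> attributes
        self.queries = {}  # query name -> function(nodes, read_set)
        self.cache = {}    # query name -> cached result
        self.deps = {}     # query name -> node ids (or "*") it read

    def register(self, name, fn):
        self.queries[name] = fn

    def run(self, name):
        if name in self.cache:          # nothing it read has changed: reuse
            return self.cache[name]
        read = set()
        result = self.queries[name](self.nodes, read)
        self.cache[name], self.deps[name] = result, read
        return result

    def update(self, node_id, attrs):
        self.nodes[node_id] = attrs
        # invalidate only the queries affected by this update
        stale = [q for q, r in self.deps.items() if node_id in r or "*" in r]
        for q in stale:
            self.cache.pop(q, None)
            self.deps.pop(q, None)

def count_states(nodes, read):
    read.add("*")  # scans the whole model, so any update affects it
    return sum(1 for a in nodes.values() if a.get("kind") == "State")
```

A real engine tracks read sets at much finer granularity (rules, patterns, match bindings), which is exactly where the design challenges the abstract alludes to arise; the sketch only conveys the cache-and-invalidate principle.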