Research Interest

  • Generative AI:
    In the past I created various textual and graphical domain-specific languages using classical tools such as lexers, parsers, and meta-modeling. With GenAI, new possibilities and risks arise: computers can now act on vague, unstructured input and deliver both structured, machine-readable output and unstructured, human-readable output. The risks stem from non-determinism, hallucinations, and alignment, to name a few.

    Since GPT-3 came out I have been using, testing, evaluating, and integrating GenAI in many different ways, with a variety of models and development environments:
    • Optimizing my own workflows
    • Coding
    • Enhancing information systems and applications in general (e.g., ad-hoc code)
    • Building special-purpose agents
    • Enabling speech as input for software systems
    • Applying embedding models and vector embeddings for cheap semantic search (in terms of compute time, energy, and memory consumption); see the sketch below
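
    To illustrate what I mean by cheap semantic search, here is a minimal sketch in Python. The library and model name are just one possible choice, not a fixed part of the approach; any provider of fixed-size embedding vectors works the same way.

      # Embed documents once, then answer queries by cosine similarity.
      # Library and model name are illustrative; any embedding provider works.
      import numpy as np
      from sentence_transformers import SentenceTransformer

      model = SentenceTransformer("all-MiniLM-L6-v2")  # small, cheap local model

      documents = [
          "How to configure the application server",
          "Business process modeling with DIME",
          "Optimizing database queries for analytics workloads",
      ]
      # Normalized embeddings turn cosine similarity into a plain dot product.
      doc_vectors = model.encode(documents, normalize_embeddings=True)

      def search(query: str, top_k: int = 2) -> list[tuple[float, str]]:
          """Return the top_k documents ranked by similarity to the query."""
          query_vector = model.encode([query], normalize_embeddings=True)[0]
          scores = doc_vectors @ query_vector
          best = np.argsort(scores)[::-1][:top_k]
          return [(float(scores[i]), documents[i]) for i in best]

      print(search("speed up slow SQL statements"))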
  • Tools for model-driven design:
    The main motivation is to extend the essentially control-oriented eXtreme Model-Driven Design (XMDD) paradigm with data-orientation, adapting well-known paradigms from programming languages in a simplicity-first fashion, ranging from features of object-orientation to functional programming.

    The new approach is type-aware, so that type-safety can be validated at design time. The central achievement, however, is the introduction of higher-order semantics by treating services and processes as first-class citizens. They may be passed around just like data and plugged into activities at runtime, thus enabling higher-order process engineering (HOPE); a toy sketch of this idea follows the list below.

    • Rich (higher-order) process modeling
    • Adapting programming language features
    • Lightweight formal methods
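
    The following toy sketch in plain Python is purely illustrative and far simpler than the actual HOPE machinery: it only shows the core of the higher-order idea, namely that a process is a value which can be passed into another process and plugged into an activity at runtime.

      # Toy illustration of higher-order processes: a "process" is a value
      # (here: a callable from data to data) that can be passed around like
      # data and plugged into another process at runtime.
      from typing import Callable

      Process = Callable[[dict], dict]

      def validate_order(data: dict) -> dict:
          data["valid"] = bool(data.get("items"))
          return data

      def ship_order(data: dict) -> dict:
          data["shipped"] = data.get("valid", False)
          return data

      def checkout(payment_step: Process) -> Process:
          """Build a concrete checkout process from a payment process that is
          supplied as a first-class argument -- the 'higher-order' part."""
          def run(data: dict) -> dict:
              for step in (validate_order, payment_step, ship_order):
                  data = step(data)
              return data
          return run

      def pay_by_invoice(data: dict) -> dict:
          data["paid"] = "invoice"
          return data

      process = checkout(pay_by_invoice)  # plug a process into a process
      print(process({"items": ["book"]}))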
  • Data:
    With HOPE, data-orientation was introduced to enable rich process modeling, but non-programmers should also be able to design, model, change, and maintain the respective data structures (and the corresponding database). Therefore, DyWA supports the user-driven development of process-oriented web applications, combining business process modeling with user-side application domain evolution. Using DyWA, application experts without programming knowledge are able to model, according to their professional knowledge and understanding, both domain-specific data models and the business process models that act on the data (a conceptual sketch follows the list below).
    • Design
    • Modeling
    • Manipulation
    • Access servification
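
    The following is a conceptual sketch only, not DyWA's actual representation or API: it merely illustrates the idea that the domain model is itself data, so an application expert can evolve it at runtime without touching or redeploying code.

      # Conceptual sketch (not DyWA's actual format): the domain model is data,
      # so a type can gain an attribute at runtime and the generic persistence
      # layer simply follows.
      domain_model = {
          "Book": {"title": "string", "pages": "integer"},
          "Author": {"name": "string"},
      }

      objects: list[dict] = []

      def add_attribute(type_name: str, attribute: str, attr_type: str) -> None:
          """User-side domain evolution: extend a type without changing code."""
          domain_model[type_name][attribute] = attr_type

      def create(type_name: str, **values) -> dict:
          schema = domain_model[type_name]
          unknown = set(values) - set(schema)
          if unknown:
              raise ValueError(f"unknown attributes for {type_name}: {unknown}")
          obj = {"_type": type_name, **values}
          objects.append(obj)
          return obj

      add_attribute("Book", "isbn", "string")  # evolve the model at runtime
      print(create("Book", title="HOPE", isbn="978-0000000000"))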
  • Information systems:
    I am interested in bringing together all the different parts of software engineering into a complete product (user interface, business logic, persistence, authentication & authorization, …) in an agile manner, while at the same time ensuring high quality over the whole life cycle of software systems, which may change continuously, via Active Continuous Quality Control (ACQC).

    ACQC aims at providing evolving systems with efficient and effective quality assurance facilities. This approach makes it possible not only to check for designated regressions and bugs, but also to validate whether the system behaves as expected across versions and platform changes, without the need to manually create a specification for every system version. ACQC allows one to obtain hypothesis models via active automata learning for a new version at essentially the same level of detail as the hypothesis models for the previous version, without requiring further time-consuming searches for counterexamples. This means that guiding this search by previously learned models successively accelerates the learning process, despite the fact that the underlying system continuously changes (a conceptual sketch follows the list below).

    • Programming in the large
    • Quality assurance
    • Automata learning
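
    The following is a conceptual sketch of that guidance idea in plain Python, not the actual implementation (which builds on active automata learning such as L*); the MealyModel class and its helpers are illustrative.

      import random

      class MealyModel:
          """Tiny Mealy machine: transitions[state][input] = (next_state, output)."""
          def __init__(self, transitions, initial=0):
              self.transitions, self.initial = transitions, initial

          def output(self, word):
              state, outputs = self.initial, []
              for symbol in word:
                  state, out = self.transitions[state][symbol]
                  outputs.append(out)
              return tuple(outputs)

          def transition_cover(self):
              """Short words reaching every transition: cheap tests that can be
              derived from an already learned model."""
              access, queue, seen = {self.initial: ()}, [self.initial], {self.initial}
              while queue:
                  state = queue.pop(0)
                  for symbol, (nxt, _) in self.transitions[state].items():
                      yield access[state] + (symbol,)
                      if nxt not in seen:
                          seen.add(nxt)
                          access[nxt] = access[state] + (symbol,)
                          queue.append(nxt)

      def find_counterexample(previous_model, hypothesis, new_system, alphabet, max_len=6):
          """Word on which the current hypothesis and the new system version disagree;
          words derived from the previous version's model are tried first, random
          exploration is only the fallback."""
          for word in previous_model.transition_cover():
              if hypothesis.output(word) != new_system.output(word):
                  return word
          for _ in range(1000):
              word = tuple(random.choices(alphabet, k=random.randint(1, max_len)))
              if hypothesis.output(word) != new_system.output(word):
                  return word
          return None

      # Toy usage: the model learned for version n seeds the search for version n+1.
      old = MealyModel({0: {"a": (1, "x"), "b": (0, "y")}, 1: {"a": (0, "x"), "b": (1, "y")}})
      new = MealyModel({0: {"a": (1, "x"), "b": (0, "z")}, 1: {"a": (0, "x"), "b": (1, "y")}})
      print(find_counterexample(old, old, new, alphabet=["a", "b"]))  # -> ('b',)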
  • Systemic Thinking, Holistic Approaches, and Meta-Modeling:
    My observations of the IT industry lead to a very simple conclusion: the participating people, with their varying disciplines, do not communicate (well) enough. Sometimes this effect is amplified by temporal, spatial, or organizational distance. This leads to separate optimizations and therefore to an isolated consideration of issues. But optimizing a database on its own, for example, is sometimes not enough; instead, a query has to be changed in the application. Hence, some of the optimization potential of the domain, the platform, or the infrastructure cannot be exploited. That is due to missing information at the generic level of a virtualizer, a database operator, a clusterer, a container, an application server, an Apache server, or a caching facility.

    Therefore, I have lately been eager to combine my knowledge of process and data modeling, information systems, and quality assurance in a family of domain-specific modeling / programming languages in DIME (DyWA Integrated Modeling Environment), together with a great team at TU Dortmund. We build on the open-source meta-modeling tool suite Cinco, following a generative approach. In DIME, the whole development process (see information systems above) is elevated to a language-, technology-, and platform-independent description of what an application should do, instead of an unmanageable and unmaintainable implementation of how an application should do something. This way, business logic, implementation details, and optimizations can be handled completely orthogonally and holistically. The DIME approach allows for prototype-driven agile software development with true incorporation of all participants, not only IT. We plan to go open source with this soon.