TP3.6: Technical Platform Architecture

Subproject managers
Prof. Dr. Dr. h.c. Manfred Broy
Dr. habil. Christian Prehofer
Ilias Gerostathopoulos, Ph.D.

Connected Mobility Systems (CMS) refer to the orchestration of devices and services to offer value-added functionalities in the mobility market [1]. An example of a CMS is a smart parking system in which external Web services provide a homogeneous view of the availability of parking slots in a city, as recorded by roadside sensors, to approaching cars.

CMS are difficult to engineer: they are complex systems with many stakeholders, and they are distributed with respect to both devices and data.

Figure 1: Overview of Continuous Architecture Engineering
They also have requirements and needs that are often conflicting:
  • Need to build on open platforms (w.r.t. management) in order to benefit from ecosystem effects.
  • Need to be always on and quick to respond, in order to satisfy end users.
  • Need to be extensible w.r.t. new features and changing requirements, in order to keep up with today’s pace of innovation.
  • Need to analyze the large amounts of data they produce, in order to learn how to improve their functioning.
To address these challenges, in this WP we focus on the following (broad) research questions:
  • How can we ensure the quality and performance of a CMS platform, especially regarding analytics and cooperation with the platform?
  • How can we provide an integrated development environment with Continuous Delivery of new functionalities across all devices and platform parts?
  • How can we ensure the adaptability and evolution of a CMS platform?
Our approach takes the form of what we call Data-Driven Continuous Engineering. According to this approach, data related to both the runtime phase of a CMS and its development life cycle should be recorded and analyzed in order to identify correlations between development methods and end products, to run experiments measuring end-user behavior, and, more generally, to assess the value a new development delivers. The analytics results should then feed back into improving development methods, approving or discarding features, prioritizing test activities, etc. A graphical overview of the approach is given in Figure 1. The proposed approach relates to evidence-based software engineering [2] and to techniques used in industry, such as A/B testing [3].
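As a tiny illustration of the data-driven idea, the following sketch compares a quality metric between two hypothetical feature variants using Welch's t statistic, the kind of analysis behind A/B testing [3]. The data and the metric are invented for the example:

```python
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical response-time samples (seconds) for two feature variants
control   = [1.9, 2.1, 2.0, 2.2, 1.8, 2.0]
treatment = [1.6, 1.7, 1.8, 1.5, 1.7, 1.6]

t = welch_t(control, treatment)
print(f"t = {t:.2f}")  # a |t| well above 2 suggests a real difference
```

In a real CMS setting, the samples would come from operational data recorded at runtime rather than from hard-coded lists.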

We have so far been conducting research on:
  1. How to enable fast, high-quality development through fast feedback. For this, we are investigating Continuous Integration modeling and best practices.
  2. How to evaluate a CMS architecture and development process based on real data. For this, we are looking into Big Data analytics for assessing a system’s quality based on its operational data.

Figure 3: Tool-agnostic Continuous Integration modeling [4]

Figure 2: Continuous Integration: quick iterations

Research on Continuous Integration modeling and best practices

Continuous Integration (Figure 2) as a software practice holds great potential for improving developer productivity and reducing costs. However, it is very hard to customize it for a particular setting (domain/company/project) so that most of the associated benefits are gained. In particular, Continuous Integration systems may differ in:
  • Build duration: minutes, hours, …
  • Build frequency: daily, weekly, …
  • Build triggering: source code changes, …
  • Scope: compilation, unit testing, integration testing, code analysis
  • Definition of failure: build fails, single test fails, …
  • Fault handling: by the developer checking in the code, dedicated team, …
The question then becomes “How do we pick one option at each variation point so that the benefits of using CI in a domain/company/project are maximized?”
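One way to document a concrete CI setup along these variation points, in the spirit of a tool-agnostic model [4], is a simple configuration record. The field names and example values below are illustrative assumptions, not a fixed schema:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CIConfiguration:
    """One field per variation point listed above (illustrative)."""
    build_duration: str      # e.g. "minutes", "hours"
    build_frequency: str     # e.g. "on every commit", "daily", "weekly"
    build_trigger: str       # e.g. "source code change", "timer"
    scope: List[str]         # e.g. ["compile", "unit tests", ...]
    failure_definition: str  # e.g. "build fails", "any test fails"
    fault_handling: str      # e.g. "committer fixes", "dedicated team"

# A fast-feedback oriented choice at each variation point
fast_feedback = CIConfiguration(
    build_duration="minutes",
    build_frequency="on every commit",
    build_trigger="source code change",
    scope=["compile", "unit tests"],
    failure_definition="build or any unit test fails",
    fault_handling="committer fixes",
)
print(fast_feedback.scope)
```

Capturing several such configurations in a uniform record makes the trade-offs between them (e.g. build duration vs. scope) explicit and comparable across projects.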

We are currently experimenting with the application of a modeling approach for documenting Continuous Integration systems (Figure 3) as a first step towards improving them.

Figure 4: Architecture of RTX tool

Research on Big Data analytics for analyzing a system’s quality based on operational data from the system

We have created an open-source tool called RTX that simplifies experimentation with (self-)adaptation [5] based on Big Data analytics. In the tool’s architecture (Figure 4), a Kafka broker facilitates communication with any data-intensive system that can be adapted based on the results of Big Data analysis. For the data analytics part, simple Python scripts or more sophisticated Spark jobs can be used.

In a nutshell, each “experiment” in RTX:
  • sets a parameter value in the running system
  • collects data from the system for some time
  • analyzes the collected data to calculate an overall utility
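The experiment loop above can be sketched as follows, with the real system and its Kafka connection replaced by an in-memory stub. `FakeSystem`, `sample_metric`, and the utility definition are illustrative assumptions for this sketch, not the actual RTX API:

```python
import random

class FakeSystem:
    """Stands in for a data-intensive system reachable via Kafka."""
    def __init__(self):
        self.knob = 0.5  # the adaptable parameter

    def set_parameter(self, value):
        self.knob = value

    def sample_metric(self):
        # Invented metric: an overhead minimized near knob = 0.3, plus noise
        return (self.knob - 0.3) ** 2 + random.gauss(0, 0.001)

def run_experiment(system, value, samples=50):
    system.set_parameter(value)                              # 1. set a parameter value
    data = [system.sample_metric() for _ in range(samples)]  # 2. collect data for some time
    return -sum(data) / len(data)                            # 3. utility = negated mean overhead

random.seed(42)
system = FakeSystem()
candidates = [0.1, 0.3, 0.5, 0.7]
best = max(candidates, key=lambda v: run_experiment(system, v))
print(best)  # → 0.3, the knob value with the lowest overhead
```

In the actual tool, steps 1 and 2 go through the Kafka broker, and step 3 can be delegated to a Python script or a Spark job.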
The GitHub project hosting RTX can be found at


Please contact Dr. Ilias Gerostathopoulos and Dr. habil. Christian Prehofer if you are interested in more details, in cooperating on the topic of platform architecture for mobility systems, or in related BS or MS projects or Guided Research topics.


[1] Roland Berger Strategy Consultants. Connected Mobility 2025 (January 2013).
[2] Bosch, J. and Holmstrom Olsson, H. Data-driven continuous evolution of smart systems. pages 28-34. ACM Press, 2016.
[3] Kohavi, R., Crook, T., and Longbotham, R. Online Experimentation at Microsoft. In Third Workshop on Data Mining Case Studies and Practice, 2009.
[4] Stahl, D., and Bosch, J. Automated Software Integration Flows in Industry: A Multiple-case Study. Companion Proceedings of ICSE’14, 54-63, 2014.
[5] De Lemos, et al. Software Engineering for Self-Adaptive Systems: A Second Research Roadmap. In: Software Engineering for Self-Adaptive Systems II. pp. 1-32. Springer Berlin Heidelberg, 2013.