TP4.1: Models & Tools for Indoor-Maps

Subproject manager
Prof. Dr. Dr. h.c. Manfred Broy
Subproject manager
Dr. habil. Christian Prehofer
Researcher
Georgios Pipelidis


Indoor Mapping


Figure 1: Location Based Services, Robotics, Augmented Reality and Mobile Ad-hoc Networks all depend strongly on the availability of indoor maps.
Location Based Services (LBS), such as navigation devices, are widely used outdoors in our daily lives. The main components of an LBS are a localization method and a map. However, humans spend approximately 80% of their time in indoor environments, where both localization technologies and maps are lacking. The main reason is that established localization technologies such as GPS cannot deliver reliable data indoors, because the signal is strongly attenuated by building walls. Additionally, indoor places cannot be mapped as scalably as outdoor places. Even though various approaches have been suggested for indoor localization based on different technologies, such as BLE beacons, WiFi RSS, GSM antennas, magnetic field fingerprints, light, UWB, and dead reckoning, the problem of scalable mapping has not been efficiently addressed. In recent years, techniques for crowd-sourced indoor mapping have been suggested, based on volunteered geographic information, crowdsourcing with smartphone cameras, or even the natural movement of humans in indoor places [1], [2], [3]. The latter works as follows: measurements from embedded mobile device sensors are collected while users move naturally inside buildings. The collected data can then be used for estimating the traces of the users. Movements can be seen as motion constraints, while different kinds of POIs (e.g. doors, stairs) serve as landmarks in a SLAM algorithm. The traces consist of individual steps which, if annotated with their exact locations, produce a point cloud of the visited surface.
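As a small illustration of this idea, the sketch below anchors a chain of per-step displacement estimates (motion constraints) to a few known POI positions (landmarks) by solving a small linear least-squares problem; the resulting positions form one trace of the point cloud. The function name and the toy values are our own illustrative assumptions, not part of an existing pipeline.

```python
# Minimal sketch: a step trace as a chain of motion constraints, anchored by
# known POIs (e.g. the building entrance). All positions are solved jointly.
import numpy as np

def reconstruct_trace(step_vectors, landmark_obs, landmark_weight=100.0):
    """step_vectors: (N, 2) array of per-step displacement estimates (dx, dy).
    landmark_obs: list of (step_index, (x, y)) pairs with known POI positions.
    Returns an (N + 1, 2) array of estimated positions p_0 .. p_N."""
    n = len(step_vectors) + 1            # number of positions to estimate
    rows, b = [], []

    # Motion constraints: p_{i+1} - p_i should equal the measured step vector.
    for i, (dx, dy) in enumerate(step_vectors):
        for dim, d in enumerate((dx, dy)):
            row = np.zeros(2 * n)
            row[2 * (i + 1) + dim] = 1.0
            row[2 * i + dim] = -1.0
            rows.append(row)
            b.append(d)

    # Landmark constraints: p_k should coincide with a known POI location.
    for k, (x, y) in landmark_obs:
        for dim, v in enumerate((x, y)):
            row = np.zeros(2 * n)
            row[2 * k + dim] = landmark_weight
            rows.append(row)
            b.append(landmark_weight * v)

    A = np.vstack(rows)
    sol, *_ = np.linalg.lstsq(A, np.array(b), rcond=None)
    return sol.reshape(n, 2)             # point cloud of visited positions

# Toy usage: four steps heading roughly east, anchored at an entrance at (0, 0).
steps = np.array([[0.75, 0.0], [0.75, 0.1], [0.8, -0.1], [0.7, 0.0]])
print(reconstruct_trace(steps, landmark_obs=[(0, (0.0, 0.0))]))
```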

Challenges and Objectives


Figure 2: An example of an enhanced CityGML Level of Detail 2+ (LoD2+) model, which carries information about the number of floors and their altitudes.
The problem of scalable mapping outdoors has been successfully addressed, thanks to scalable technologies such as airborne photography and satellite imagery. Unfortunately, there is no comparable scalable technology for mapping indoor places, mainly because there is no accurate indoor localization technology. Additionally, combining technologies for mapping and localization requires models that are robust to the uncertainty of dynamic approaches to indoor mapping. Finally, indoor places are more dynamic than outdoor places, and their vertical connections must also be taken into consideration.

This project aims to introduce a model that enables the integration of dynamically generated, semantically annotated indoor maps. In this way, other platform services will be enhanced with semantic geo-information. After the introduction of such a model, the integration of existing indoor maps by service providers will be enabled. Those maps will follow existing standards, such as CityGML and IndoorGML. Finally, the homogeneous B2B integration of indoor maps with semantic descriptions will be enabled (e.g. Figure 2).



Our Goal


Figure 3: An example data flow diagram of how a dynamic mapping process could come into reality. The chart presents the entire flow of information as gradual steps, beginning with the identification of reference locations such as entrances or other uniquely identifiable locations that belong to the essential outdoor-indoor transition. After a reference location has been obtained, localizing a human in the most infrastructure-independent way requires 01 the walking direction and 02 the step length. Having successfully localized a human indoors and 03 examined the properties of the visited places, characteristic indoor locations can be recognized. Additionally, by mapping human traces indoors, 04 a point cloud of the building can be generated. By segmenting this point cloud based on WiFi signal reflections on walls, 05 the rooms of the structure can be recognized and 06 a map that describes the geometry of the indoor place can emerge. 07 By grouping the human traces based on their time characteristics, 08 the topology of the indoor place can be recognized. 09 Finally, by identifying the context of the person at specific locations, a semantic map can be generated.

Current focus

We believe that there will be no single way for mapping indoor places, but rather a diverse set of techniques and services will be used to build up maps and services for indoor locations in a customized way [6].

Figure 4: [4] An example of a "take me to the exit" service. ariadne provides the user with the exit of the subway station nearest to their destination, the train compartment nearest to that exit, as well as indoor routing from the subway platform to the exit.

Some services may not even require proper maps, as in the case of a "take me to the exit" service (Figure 4), for which user traces alone can be sufficient. We also posit that we will move towards custom solutions that combine indoor mapping techniques in order to improve accuracy and enable a number of diverse services.

An example of such an intermediate service is the semantic annotation of an indoor place. We hypothesize that user context is place-dependent. Hence, semantic annotation can be attempted based on user activities; for example, a stair can be identified by detecting the "climbing stairs" activity. User context can be extracted in various ways, using calendar data, location and time. In our research, we opportunistically extract user context via reasoning on activities, through a mobile application (Figure 5). The user activities are recognized from smartphone data after these data are segmented into clusters based on defined constraints, as sketched below.
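A minimal sketch of such a segmentation step is given below, assuming raw samples from the accelerometer, barometer and gravity sensor; the two-second window length and the chosen features are illustrative assumptions, not the exact constraints used in recordData.

```python
# Minimal sketch: cut raw smartphone samples into fixed-length windows and
# reduce each window to a few features for later activity classification.
import numpy as np

def window_features(t, accel, pressure, gravity, window_s=2.0):
    """t: (N,) timestamps in seconds; accel: (N, 3) accelerometer in m/s^2;
    pressure: (N,) ambient pressure in hPa; gravity: (N, 3) gravity vector.
    Yields one feature dict per window."""
    start = t[0]
    while start < t[-1]:
        mask = (t >= start) & (t < start + window_s)
        if mask.sum() > 1:
            acc_mag = np.linalg.norm(accel[mask], axis=1)
            # Approximate pressure change rate over the window (hPa/s).
            dp_dt = (pressure[mask][-1] - pressure[mask][0]) / window_s
            # Angle between gravity and the phone's long (y) axis, in degrees:
            # ~0 deg when the long edge is vertical, ~90 deg when the phone lies flat.
            g = gravity[mask].mean(axis=0)
            angle = np.degrees(np.arccos(np.clip(abs(g[1]) / np.linalg.norm(g), 0.0, 1.0)))
            yield {
                "t_start": float(start),
                "accel_var": float(acc_mag.var()),   # high while walking
                "pressure_rate": float(dp_dt),       # sign indicates up/down
                "gravity_angle": float(angle),
            }
        start += window_s
```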

Figure 5: [5] The recordData application. Its goal is to collect data from users' smartphones and stream them to our server. It can recognize the following set of activities: sitting, standing, walking, walking upstairs, walking downstairs, using the elevator up, and using the elevator down.

For example (Figure 5), the walking activity manifests as high disturbances on the smartphone accelerometer, while sitting maps to low disturbances. Additionally, the pressure derivative is negative when climbing stairs up and positive when climbing stairs down (since air pressure decreases with height). The gravity vector is perpendicular to the long edge of the phone when the user is sitting and parallel to it when standing. Finally, a combination of these observations (e.g. gravity parallel to the long edge combined with a non-zero pressure derivative) signifies more complex activities such as using the elevator. The segmented features are then used for classification, so that activities such as walking, sitting, standing, climbing stairs, and using the elevator are recognized. We aim to fuse activities from multiple users, based on their location and characteristics, and extract higher-level information that indicates the user's context.
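The heuristics above can be captured by a simple rule-based classifier, sketched below; the thresholds are illustrative placeholders rather than tuned values from our experiments.

```python
# Minimal rule-based sketch of the heuristics described in the text.
ACC_WALK = 1.0       # accelerometer-magnitude variance above which we assume motion
PRESSURE_EPS = 0.02  # |dp/dt| in hPa/s below which altitude is treated as constant
UPRIGHT_DEG = 45.0   # gravity/long-edge angle separating "upright" from "lying flat"

def classify(accel_var, pressure_rate, gravity_angle):
    """Map one window's features to an activity label."""
    moving = accel_var > ACC_WALK
    altitude_changing = abs(pressure_rate) > PRESSURE_EPS
    going_up = pressure_rate < 0           # pressure drops while ascending
    upright = gravity_angle < UPRIGHT_DEG  # gravity roughly parallel to the long edge

    if moving and altitude_changing:
        return "walking upstairs" if going_up else "walking downstairs"
    if moving:
        return "walking"
    if upright and altitude_changing:
        # Standing still while the altitude changes: riding the elevator.
        return "elevator up" if going_up else "elevator down"
    return "standing" if upright else "sitting"
```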

Obtaining data from the compass, accelerometer, gyroscope, pedometer and ambient pressure sensor through the recordData application enables Dead Reckoning. Dead Reckoning is a localization method in which the current location is estimated from the previous location, the heading direction and the distance traveled.
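A minimal Dead Reckoning sketch is shown below, assuming a heading angle already fused from compass, accelerometer and gyroscope, and the fixed step lengths used in Figure 7 (0.75 m on level ground, 0.45 m on stairs).

```python
# Minimal Dead Reckoning sketch: accumulate step vectors from a reference location.
import math

STEP_LENGTH = {"walking": 0.75, "walking upstairs": 0.45, "walking downstairs": 0.45}

def dead_reckon(start, steps):
    """start: (x, y) of a reference location such as the building entrance.
    steps: iterable of (heading_rad, activity) pairs, one per detected step.
    Returns the list of estimated positions, i.e. one trace of the point cloud."""
    x, y = start
    trace = [(x, y)]
    for heading, activity in steps:
        length = STEP_LENGTH.get(activity, 0.75)
        x += length * math.cos(heading)   # heading measured from the x axis
        y += length * math.sin(heading)
        trace.append((x, y))
    return trace

# Toy usage: five steps heading north-east from the entrance at (0, 0).
print(dead_reckon((0.0, 0.0), [(math.pi / 4, "walking")] * 5))
```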

Figure 6: Segmentation of data for activity recognition. Yellow: Sitting, Magenta: Standing, Blue: Walking, Cyan: Walking Upstairs, Red: Walking Downstairs, Green: Elevator Up, White: Elevator Down

Figure 7 presents such an approach. In this figure, the black continuous lines correspond to the outline of the building. Every point corresponds to one step, detected via the pedometer. The location of each point was estimated based on the previous location, the heading direction (estimated via fusion of compass, accelerometer and gyroscope) and the distance traveled (assumed to be 0.75 m for level walking and 0.45 m for stairs). The colors of the points signify activities, where blue is walking on a level floor and red is walking upstairs. These activities are then used to reset the accumulated error of the Dead Reckoning. Finally, the entrance was chosen as the reference location, since entrances can be dynamically estimated with high accuracy.


Figure 7: Example of user traces estimated with Dead Reckoning, Activity Recognition and Altitude estimation using the barometric formula. Blue: Walking, Red: Walking upstairs.
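For the altitude estimation mentioned in the caption of Figure 7, a minimal sketch based on the international barometric formula is given below; the reference pressure and the assumed storey height are placeholders that would in practice be calibrated, e.g. at the entrance.

```python
# Minimal sketch: relative floor estimation from barometric pressure.
SEA_LEVEL_HPA = 1013.25   # reference pressure; in practice calibrated at the entrance
FLOOR_HEIGHT_M = 3.0      # assumed storey height of the building

def altitude_m(pressure_hpa, p0=SEA_LEVEL_HPA):
    # International barometric formula, valid in the lower troposphere.
    return 44330.0 * (1.0 - (pressure_hpa / p0) ** (1.0 / 5.255))

def relative_floor(pressure_hpa, entrance_pressure_hpa):
    """Floor offset with respect to the entrance, rounded to whole storeys."""
    dh = altitude_m(pressure_hpa) - altitude_m(entrance_pressure_hpa)
    return round(dh / FLOOR_HEIGHT_M)

# Toy usage: roughly 0.36 hPa less pressure corresponds to about one storey (3 m) higher.
print(relative_floor(1012.89, 1013.25))
```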

From the collected traces, a point cloud can be dynamically generated. Grouping these points based on the received signal strength (RSS) characteristics of the visible access points can reveal cellular spaces. A cellular space is the smallest subdivision of the space necessary for navigation. Different rooms form different patterns in these data, due to reflections of the signal by solid objects in the area, such as walls, doors and windows.
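One simple way to realize such a grouping is sketched below: each trace point is represented by its vector of per-access-point RSS values, and these fingerprints are clustered. DBSCAN is used here only as a stand-in, since the project does not prescribe a specific algorithm, and the eps/min_samples values are illustrative.

```python
# Minimal sketch: cluster trace points by their WiFi RSS fingerprints to
# obtain candidate cellular spaces.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_cellular_spaces(rss_fingerprints, eps=6.0, min_samples=5):
    """rss_fingerprints: (N, A) array with one RSS value (dBm) per access point,
    using e.g. -100 dBm for APs that were not heard at a given point.
    Returns one cluster label per point (-1 marks noise)."""
    X = np.asarray(rss_fingerprints, dtype=float)
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
```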

Finally, through computational geometry algorithms, the geometry, the topology and the semantics of the indoor space will be identified and mapped following existing standards, such as CityGML, IndoorGML and others.
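As a very reduced illustration of this step, the sketch below derives a 2D outline for one cellular space from its trace points via a convex hull; real rooms are frequently non-convex, so an alpha shape or a similar algorithm would be a closer fit in practice.

```python
# Minimal geometric sketch: derive a 2D outline for one cellular space.
import numpy as np
from scipy.spatial import ConvexHull

def room_outline(points_2d):
    """points_2d: (N, 2) positions assigned to one cellular space.
    Returns the outline as an ordered list of (x, y) corner points."""
    hull = ConvexHull(np.asarray(points_2d, dtype=float))
    return [tuple(hull.points[i]) for i in hull.vertices]
```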

If you would like to get more information about this subproject or if you are interested in a cooperation to develop indoor mapping/localization/navigation solutions, please contact Dr. Christian Prehofer or Georgios Pipelidis. Motivated students who are looking for a thesis or a guided research in one of the presented topics are always invited to contact us as well.



References
[1] G. Pipelidis and C. Prehofer, "Models and Tools for Indoor Maps", Digital Mobility Platforms and Ecosystems, 2016, p. 154.
[2] M. Alzantot and M. Youssef, "CrowdInside: Automatic Construction of Indoor Floorplans", in Proceedings of the 20th International Conference on Advances in Geographic Information Systems, New York, NY, USA, 2012, pp. 99-108.
[3] D. Philipp et al., "MapGenie: Grammar-enhanced Indoor Map Construction from Crowd-sourced Data", in 2014 IEEE International Conference on Pervasive Computing and Communications (PerCom), 2014, pp. 139-147.
[4] http://ariadne.one
[5] https://play.google.com/store/apps/details?id=com.recordData.basic&hl=en
[6] G. Pipelidis, S. Xiang, and C. Prehofer, "Generation of Indoor Navigable Maps with Crowdsourcing", in Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia, ACM, 2016.

Own Publications
[1] G. Pipelidis, X. Su, and C. Prehofer, "Generation of Indoor Navigable Maps with Crowdsourcing", 15th International Conference on Mobile and Ubiquitous Multimedia, Rovaniemi, Finland, December 12-15, 2016.
[2] G. Pipelidis, C. Prehofer, and I. Gerostathopoulos, "Adaptive Bootstrapping for Crowdsourced Indoor Maps", 3rd International Conference on Geographical Information Systems Theory, Applications and Management, Porto, Portugal, April 27-28, 2017.
[3] G. Pipelidis, O. R. Moslehi Rad, D. Iwaszczuk, C. Prehofer, and U. Hudentobler, "A Novel Approach for Dynamic Vertical Indoor Mapping through Crowd-sourced Smartphone Sensor Data", 8th International Conference on Indoor Positioning and Indoor Navigation, Sapporo, Japan, September 18, 2017.
[3] G. Pipelidis, O. R. Moslehi Rad, D. Iwaszczuk, C. Prehofer, U. Hudentobler, “A Novel Approach for Dynamic Vertical Indoor Mapping through Crowd-sourced Smartphone Sensor Data”, 8th International Conference on Indoor Position Indoor Navigation, 18-09-2017, Sapporo, Japan.