Done in collaboration with Deltares.
Given a (classified) LAS pointcloud, how to (re)generate its alpha shape?
Finding the bounding box (or the convex hull, or a raster-based solution) is in most cases not sufficient, as the image above shows. At the moment, Deltares could use the results to combat edge effects from classification filter algorithms. Deltares is now using a simplified outline, but that results in artefacts in overlapping flight strips. The project involves investigating how to construct alpha shapes for massive real-world datasets, e.g. 100M+ points. As the programming language in this project, Deltares uses Julia, a language that is fast and easy to pick up (especially coming from Python).
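To illustrate the underlying idea (not a solution that scales to 100M+ points), a 2D alpha shape can be derived from a Delaunay triangulation by keeping only the triangles whose circumradius is below 1/alpha. A minimal sketch in Python (the function name and demo points are our own):

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_triangles(points, alpha):
    """Return the triangles of the 2D alpha complex: Delaunay triangles
    whose circumradius is smaller than 1/alpha."""
    tri = Delaunay(points)
    keep = []
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        # edge lengths, then circumradius via Heron's formula
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(a - c)
        lc = np.linalg.norm(a - b)
        s = (la + lb + lc) / 2.0
        area = max(s * (s - la) * (s - lb) * (s - lc), 1e-12) ** 0.5
        circumradius = la * lb * lc / (4.0 * area)
        if circumradius < 1.0 / alpha:
            keep.append(simplex)
    return np.array(keep)

# tiny demo: a unit square with its centre; a permissive alpha keeps all
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], dtype=float)
kept = alpha_shape_triangles(pts, alpha=0.5)   # radius threshold = 2
```

For a massive dataset the loop over all triangles (and the Delaunay triangulation itself) would have to be replaced by a streaming or out-of-core approach, which is exactly the research question.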
Augmented Reality is an emerging technology for the visualisation and perception of digital worlds. City models could benefit from such a technology, as users could better explore and understand the data integrated in a virtual model of a city. This project is about the development of a visualisation application on the Microsoft HoloLens platform, exploring the potential benefits and issues of such an application and its potential use cases. The application should focus on the visualisation of the geometry and attributes of a city model. The implementation will be done with the Unity 3D engine and the C# programming language.
Contact: Stelios Vitalis
Road networks are utilised within a wide range of applications for navigation, city planning, and visibility analysis. There is a growing need for road networks within 3D city models for cases such as autonomous vehicle routing and road maintenance and repair. At the same time, while the concept of Levels of Detail (LoD) for buildings in CityGML has been extensively studied, this is not the case for roads. This project will examine a multitude of road standards, in both 2D and 3D, to refine and enhance the concept of LoDs for roads. A road network at various levels will then be created (with procedural modelling utilising ESRI CityEngine if there is interest) to test within an application of the student’s choice.
The project is about exploring NoSQL databases for storing massive 3D data. The main test dataset is the TIN generated from the national elevation model of the Netherlands (AHN3), which has a point density of over 10 points/m2. Several data structures have been proposed for the representation and storage of TINs in memory and in databases. A few of those data structures are to be tested with the generated TIN models to account for their geometry, topology, storage, indexing, and loading times in a NoSQL database. The results will be compared with the already available results of testing with a Postgres/PostGIS database to analyse the performance of NoSQL vs. SQL databases.
Prior knowledge of databases and programming in Python or C++ is required.
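To make the kind of structure concrete, below is a minimal sketch (our own hypothetical encoding, not necessarily one of the data structures to be tested) of a TIN stored as an indexed triangle list with neighbour topology, serialised to the JSON that a document store would hold:

```python
import json

# hypothetical document: a TIN of two triangles sharing the edge (0, 2),
# stored as an indexed triangle list plus triangle-to-triangle topology
tin_document = {
    "vertices": [[0.0, 0.0, 1.2], [1.0, 0.0, 1.5],
                 [1.0, 1.0, 1.1], [0.0, 1.0, 0.9]],
    "triangles": [[0, 1, 2], [0, 2, 3]],
    # neighbours[i][j]: triangle opposite vertex j of triangle i (-1 = hull)
    "neighbours": [[-1, 1, -1], [-1, -1, 0]],
}

payload = json.dumps(tin_document)   # what would be sent to the store
restored = json.loads(payload)
```

Alternative candidates (e.g. storing each triangle as its own document, or a vertex-based star structure) trade document size against query granularity, which is what the benchmarks should quantify.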
Based on some preliminary and limited tests, we concluded that it can compress many datasets by a factor of at least 4X, often 10X or more. The aim of this MSc project is to verify (or disprove!) this claim by:
Knowledge of Python is enough.
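As a starting point, the claimed factor is simply the ratio of the original size to the compressed size. A minimal sketch with Python's built-in gzip (the actual project would of course measure the compressor under study on real datasets, not synthetic bytes):

```python
import gzip
import os

def compression_factor(raw: bytes, level: int = 6) -> float:
    """Ratio original/compressed size; higher means better compression."""
    compressed = gzip.compress(raw, compresslevel=level)
    return len(raw) / len(compressed)

# highly regular data compresses very well...
regular = b"0.00 0.00 0.00\n" * 10000
factor_regular = compression_factor(regular)

# ...while random data barely compresses at all
noise = os.urandom(100000)
factor_noise = compression_factor(noise)
```

The interesting question is where real point clouds fall between these two extremes, and how that depends on point ordering, precision and attributes.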
Point clouds, unstructured collections of 3D points in space, are nowadays collected with different acquisition methods, e.g. photogrammetry and LiDAR, and contain a wealth of information on both natural and man-made structures.
The aim of this project is to extract 3D breaklines directly from a point cloud such as the national AHN3. Breaklines indicate discontinuities in a terrain (such as the ridges in a mountain) and are needed for applications such as flood simulations and noise simulations.
As a starting point, the 3D medial-axis transform (MAT), which was used to generate the image above, can be used.
Prior knowledge of programming in Python or C++ is required.
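A generic first step (not the MAT-based approach itself) is to flag candidate breakline points where the local neighbourhood is not planar, e.g. with the smallest-eigenvalue ratio of a local PCA. A sketch on a synthetic ridge (all names are our own):

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, k=16):
    """Per-point 'surface variation' l1/(l1+l2+l3) from a local PCA.

    Values near 0 mean the neighbourhood is planar; higher values flag
    candidate breakline points (discontinuities such as ridges)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    var = np.empty(len(points))
    for i, nb in enumerate(idx):
        nbrs = points[nb] - points[nb].mean(axis=0)
        eigvals = np.linalg.eigvalsh(nbrs.T @ nbrs)  # ascending order
        var[i] = eigvals[0] / eigvals.sum()
    return var

# synthetic roof: two planes z = 1 - |x| meeting in a ridge at x = 0
gx, gy = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))
pts = np.column_stack([gx.ravel(), gy.ravel(), 1.0 - np.abs(gx.ravel())])
var = surface_variation(pts)
ridge = np.abs(pts[:, 0]) < 1e-9   # points exactly on the ridge line
```

Turning such per-point scores into clean 3D polylines is the hard part, and where the MAT can help.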
Point clouds, unstructured collections of 3D points in space, are nowadays collected with different acquisition methods, e.g. photogrammetry and LiDAR. While current point clouds are dense and offer an accurate representation of real-world objects and landscapes, they lack structure and semantics.
The aim of this project is to properly orient a point cloud, i.e. to find an approximation of the normal at each point; this normal should point outwards. Surface normals are essential for many point-cloud processing tasks, e.g. visualisation, shadow analysis and segmentation.
“Standard” methods, e.g. those implemented in PCL, usually find the nearest points of a given point, fit a plane, and choose between the two possible normals (up or down) based on a viewpoint. The problem is that in practice, e.g. with the AHN3 dataset, we do not have that viewpoint information.
The topic involves building on our work with the 3D medial-axis transform (MAT) and using the 3D MAT of a point cloud as a base to obtain high-quality normals with a proper orientation.
It is possible to use Python for this project, although some knowledge of C++ would surely help.
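For reference, the “standard” PCA-based estimation can be sketched as follows, with the simplest possible orientation heuristic: flip every normal upwards. This works for terrain-like airborne data but fails on vertical walls and overhangs, which is exactly the limitation a MAT-based orientation aims to overcome. All names here are our own:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals_up(points, k=10):
    """Estimate a unit normal per point via local PCA and flip it so the
    z-component is non-negative (crude heuristic for airborne data)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        nbrs = points[nb] - points[nb].mean(axis=0)
        # eigenvector of the smallest eigenvalue = fitted-plane normal
        _, vecs = np.linalg.eigh(nbrs.T @ nbrs)
        n = vecs[:, 0]
        normals[i] = -n if n[2] < 0 else n
    return normals

# demo: a tilted plane z = 0.5 * x sampled on a grid
gx, gy = np.meshgrid(np.linspace(0, 1, 15), np.linspace(0, 1, 15))
pts = np.column_stack([gx.ravel(), gy.ravel(), 0.5 * gx.ravel()])
normals = estimate_normals_up(pts)
```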
We have developed a software package, 3dfier, to automatically construct 3D city models from 2D GIS datasets (e.g. topographical datasets) and LiDAR/point cloud datasets. The software creates a 3D model by lifting every polygon to 3D, and the semantics of every polygon is used to perform the lifting. That is, water polygons are lifted to horizontal polygons, buildings to LOD1 blocks, roads to smooth surfaces, etc. Every polygon is triangulated (constrained Delaunay triangulation) and the lifted polygons are “stitched” together so that one digital surface model (DSM) is constructed. Our aim is to obtain one DSM that is error-free, i.e. with no intersecting triangles, no holes (the surface is watertight), buildings integrated in the surface, etc.
The aim of this MSc project is to further develop some of 3dfier's missing features; the topic could be one of the following:
These topics can be done with Python as a post-processing of 3dfier.
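To illustrate the lifting step for buildings only, a footprint can be extruded into the faces of an LOD1 block. This is a much-simplified sketch of our own (3dfier additionally triangulates the polygons and stitches adjacent ones together); the function is not 3dfier's API:

```python
def extrude_footprint(footprint, base, height):
    """Turn a counter-clockwise 2D footprint into the faces of an LOD1
    block: one floor, one roof, and one wall per edge. Faces are lists
    of (x, y, z) tuples with outward-consistent orientation."""
    n = len(footprint)
    floor = [(x, y, base) for x, y in reversed(footprint)]   # faces down
    roof = [(x, y, base + height) for x, y in footprint]     # faces up
    walls = []
    for i in range(n):
        (x1, y1), (x2, y2) = footprint[i], footprint[(i + 1) % n]
        walls.append([(x1, y1, base), (x2, y2, base),
                      (x2, y2, base + height), (x1, y1, base + height)])
    return [floor, roof] + walls

# a 2 m x 1 m footprint extruded to a 3 m high block: 6 faces in total
faces = extrude_footprint([(0, 0), (2, 0), (2, 1), (0, 1)],
                          base=0.0, height=3.0)
```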
The size of 3D city models makes it time-consuming and computationally expensive to quickly parse a dataset to understand whether it is suitable for a specific application. Furthermore, datasets are created and modified by different users, which makes it difficult to know what level of processing a dataset has undergone and to track its lineage to understand changes over time. Metadata is crucial for establishing data confidence, estimating fitness-for-purpose, maintaining dataset lineage and credit recognition, amongst many other benefits. The problem is that it sounds boring and can be laborious to produce and maintain.
The aim of this project is to:
Develop a methodology that parses a 3D City Model and automatically derives metadata, e.g. spatial extent, the presence of semantic information, level of detail, etc.
Discover a method of storing metadata that a) ensures it is still coupled with its parent dataset and b) is easily queried.
Examine the limitations of 2D metadata standards and make recommendations about how to extend standards to include 3D data.
If there is interest, the student may wish to examine the Resource Description Framework (RDF) as the metadata data model.
For this position we ask for programming skills, preferably in Python. SQL is an asset but you can easily learn it over the course of the project.
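A first version of the automatic derivation could, for example, walk over a CityJSON-style file and collect the spatial extent and the object types. A minimal sketch, with an illustrative fragment following the general structure of CityJSON (a vertex list plus typed CityObjects); the derived fields are our own choice:

```python
# hypothetical CityJSON-flavoured fragment
citymodel = {
    "type": "CityJSON",
    "vertices": [[0, 0, 0], [10, 0, 0], [10, 5, 0], [0, 5, 3]],
    "CityObjects": {
        "b1": {"type": "Building", "attributes": {"yearOfConstruction": 1990}},
        "r1": {"type": "Road"},
    },
}

def derive_metadata(cm):
    """Derive basic metadata: spatial extent, object-type counts, and
    whether any semantic attributes are present."""
    xs, ys, zs = zip(*cm["vertices"])
    extent = [min(xs), min(ys), min(zs), max(xs), max(ys), max(zs)]
    types = {}
    for obj in cm["CityObjects"].values():
        types[obj["type"]] = types.get(obj["type"], 0) + 1
    has_attributes = any("attributes" in o for o in cm["CityObjects"].values())
    return {"extent": extent, "types": types, "has_attributes": has_attributes}

meta = derive_metadata(citymodel)
```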
Netatmo sells low-cost personal weather stations that anyone can buy, and so there are now close to 1000 stations in The Hague, Delft and the surrounding areas! By default, Netatmo owners share their data publicly, but like all crowdsourced datasets, it is all quite messy. The stations’ locations are only roughly known, some outdoor modules are in the sunlight (at certain times), have different exposure to the wind, etc.
The aim of the project is to improve the dataset by using the 3D model of The Hague to filter the data in a smart way. For instance, maybe we can find out if some of the stations are not located in the places where they say they are, or maybe some of the stations’ readings are reliable only during certain periods of the day due to the sunlight hitting them directly.
For this project, the student should be able to program well enough to read a CityGML model and to do analyses with geometric operations (such as casting shadows). C++ programming would be ideal but any other language should be good enough.
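For instance, a crude version of the sunlight check reduces to a ray/box intersection: does the ray from a station towards the sun hit a building volume? A sketch with an axis-aligned box as a stand-in for a building (the geometry and numbers are our own illustration):

```python
def in_shadow(point, sun_dir, box_min, box_max):
    """Check whether the ray from `point` towards the sun hits an
    axis-aligned box, using the classic slab-based ray/AABB test."""
    t_near, t_far = 0.0, float("inf")
    for p, d, lo, hi in zip(point, sun_dir, box_min, box_max):
        if abs(d) < 1e-12:
            if p < lo or p > hi:
                return False          # ray parallel to slab and outside it
            continue
        t1, t2 = (lo - p) / d, (hi - p) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

# a station east of a 10 m high building, with the sun low in the west
building_lo, building_hi = (0.0, 0.0, 0.0), (5.0, 5.0, 10.0)
shaded = in_shadow((8.0, 2.5, 1.5), (-1.0, 0.0, 0.3),
                   building_lo, building_hi)
lit = in_shadow((8.0, 2.5, 1.5), (1.0, 0.0, 0.3),
                building_lo, building_hi)
```

A real analysis would use the actual building geometry from the city model and the sun position per timestamp, but the per-building test stays essentially this.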
Contact: Ken Arroyo Ohori or Alexander Wandl
Visualisation is an important and complex issue in the context of 3D city models. The enormous amount of data to be fetched, the heterogeneity of data sources, and the complexity of rendering are only a few parts of this challenge. The project aims at investigating 3D tiling schemes for efficiently visualising massive TINs using the Cesium 3D web globe. Knowledge of C++ programming is required.
For large parts of the world, the available 3D geoinformation is limited, outdated or inaccurate. To cost-effectively obtain an up-to-date and high-resolution 3D (urban) environment model, automated 3D reconstruction techniques need to be applied to raw elevation and imagery sensor data, in order to find which features (vegetation, buildings, etc.) are present in the terrain and to derive a representation of the relevant feature properties (tree species, roof type). Semantic model-based feature matching is a reconstruction approach where a priori knowledge of the environment is used to represent and constrain a search space of possible feature models that can be found in the terrain.
The key in this research is to devise semantic model representations and search algorithms that explore the search space and find the instances of the semantic feature model that best match the available sensor data. The project will focus on encoding and using object relations (e.g. between a building and an adjacent street) in semantic feature models to improve the correctness of the matches. The project is carried out in co-operation with TNO in The Hague, where these techniques are applied in the field of gaming and simulation.
Contact: Jantien Stoter
Although laser point clouds have become a common data resource for world modelling, photogrammetrically derived digital surface models are still widely used as the basis for a terrain modelling workflow. With current high-resolution camera systems, highly detailed Digital Surface Models (DSMs) can be obtained. Digital Terrain Model (DTM) filtering is used to remove features from the DSM and obtain a ground-level elevation model. Although DTM filtering is a very basic step in terrain modelling, it is still a challenging task. One of the difficulties is the filtering of forested areas in hilly terrain, but rough terrain is often not a trivial case either.
This research will aim at finding improved filtering and interpolation techniques to resolve the difficulties in DTM filtering for photogrammetrically derived DSMs. The project is carried out in co-operation with TNO in The Hague, where these techniques are applied in the field of gaming and simulation.
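A baseline to improve upon is a plain morphological filter: a grey opening removes objects smaller than the structuring element, and cells far above the opened surface are classified as non-ground. A sketch of our own (real methods, e.g. progressive morphological filtering, vary the window size and threshold with slope):

```python
import numpy as np
from scipy import ndimage

def simple_dtm(dsm, window=5, threshold=1.0):
    """Very simple morphological DTM filter: grey opening suppresses
    objects smaller than `window` cells; cells more than `threshold`
    above the opened surface are replaced by it."""
    opened = ndimage.grey_opening(dsm, size=(window, window))
    return np.where(dsm - opened > threshold, opened, dsm)

# flat terrain at 0 m with a 3x3-cell 'building' of 10 m
dsm = np.zeros((20, 20))
dsm[5:8, 5:8] = 10.0
dtm = simple_dtm(dsm)
```

The failure cases named above (forest on hilly terrain, rough terrain) are precisely where a fixed window and threshold break down.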
Contact: Jantien Stoter
For some years, we have been working on a novel method to automatically generate “good” depth contours for hydrographic charts. Our latest results, based on the MSc thesis of Ravi Peters and the paper that followed it, have been picked up by major companies, who are implementing the method.
The aim of the proposed project is to improve the results. That is, we can at this moment generate smooth contours for most seabed types, but the generation is applied to the whole dataset and a human must decide when the results are okay. The student would have to focus on automatically applying the algorithms only where they are needed, and on designing methods to assess when sufficiently good results have been achieved.
The code of the project is in C++, but it is probably possible to make do with Python.
The most common way to do edge-matching or to clean small inconsistencies within and between datasets is to apply snapping (point-to-point or point-to-line). However, simple snapping creates many problems, including topological changes and inconsistencies. Snap rounding extends this method in order to give robustness guarantees, but current implementations, such as the one in CGAL, are extremely slow. Related to this, in the project pprepair, we have previously used a constrained triangulation as a robust method to repair polygons and planar partitions. Using this approach topological errors are automatically fixed. We therefore believe that using a triangulation as a base structure is an intuitive and efficient way to optimize snap rounding, since we can perform simple snapping and recover from topological errors afterwards.
The existing prototype (pprepair) that needs to be extended has been developed in C++; thus knowledge of C++, or a strong desire to learn it, is necessary.
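The core of snap rounding is easy to state: endpoints are moved to grid positions, after which some segments degenerate or change topology, and those cases must be detected and repaired. A sketch of the snapping step only, in Python for illustration (the triangulation-based recovery is the actual research):

```python
def snap_to_grid(segments, cell=1.0):
    """Snap segment endpoints to grid positions (the basic step of snap
    rounding). Segments whose endpoints collapse into the same grid
    position degenerate and are set aside -- one of the topological
    side effects the project wants to recover from."""
    def snap(p):
        return (round(p[0] / cell) * cell, round(p[1] / cell) * cell)
    snapped, dropped = [], []
    for a, b in segments:
        sa, sb = snap(a), snap(b)
        (snapped if sa != sb else dropped).append((sa, sb))
    return snapped, dropped

segs = [((0.1, 0.2), (3.9, 0.1)),   # survives snapping
        ((0.2, 0.3), (0.4, 0.1))]   # collapses into one grid position
kept, dropped = snap_to_grid(segs)
```

Note what the sketch does not handle: snapped segments may newly intersect or pass through other vertices, which is where the constrained-triangulation approach comes in.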
Contact: Ken Arroyo Ohori
Calculating the visibility between two points using 3D city models provides valuable input to many application domains, such as solar analyses (shadowing) and finding the optimal place to install a surveillance camera or a billboard. This list is growing, e.g. a potential application could be to estimate the visibility of an urban canyon from a satellite.
Knowledge of Python and FME is sufficient.
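The basic operation is a line-of-sight test. Over a raster DSM it can be sketched by sampling the sight line and comparing its height with the terrain underneath (a 3D city model version would intersect the line with the model's triangles instead); the code below is our own illustration:

```python
import numpy as np

def line_of_sight(dsm, a, b, samples=100):
    """Check visibility between two 3D points (x, y, z) above a raster
    DSM with cell size 1: sample the sight line and test whether the
    terrain rises above it anywhere."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    for t in np.linspace(0.0, 1.0, samples):
        x, y, z = a + t * (b - a)
        i, j = int(round(y)), int(round(x))   # row = y, column = x
        if dsm[i, j] > z:
            return False
    return True

# flat 0 m terrain with a 5 m wall at x = 5
dsm = np.zeros((11, 11))
dsm[:, 5] = 5.0
blocked = line_of_sight(dsm, (0, 5, 2.0), (10, 5, 2.0))
visible = line_of_sight(dsm, (0, 5, 6.0), (10, 5, 6.0))
```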
Indoor environments in public buildings consist of very large spaces, and it is usually difficult to give instructions on how to get to a specific part of such a space. Therefore, for indoor localisation and navigation, spaces are subdivided into functional areas. There are several approaches to subdividing spaces.
This research will concentrate on space subdivision using a shape grammar. A shape grammar consists of a number of shape rules and a generation process that selects and processes rules. In general, a shape rule specifies transformations on an existing (part of a) shape. This research is part of the SIMs3D project.
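A toy formulation of our own of these two ingredients, a shape rule and a generation process, restricted to rectangular spaces:

```python
def split_rule(shape, axis, at):
    """A minimal shape rule: replace one rectangular space by two,
    splitting it at `at` along axis 0 (x) or 1 (y).
    A shape is ((xmin, ymin), (xmax, ymax))."""
    (x0, y0), (x1, y1) = shape
    if axis == 0:
        return [((x0, y0), (at, y1)), ((at, y0), (x1, y1))]
    return [((x0, y0), (x1, at)), ((x0, at), (x1, y1))]

def derive(shapes, rules):
    """Generation process: apply each (index, rule) pair in turn,
    replacing the selected shape by the rule's output."""
    for idx, rule in rules:
        shapes = shapes[:idx] + rule(shapes[idx]) + shapes[idx + 1:]
    return shapes

# subdivide a 10 x 6 hall into a corridor and two functional areas
hall = [((0, 0), (10, 6))]
areas = derive(hall, [
    (0, lambda s: split_rule(s, 1, 2)),   # corridor strip along the bottom
    (1, lambda s: split_rule(s, 0, 5)),   # split the remaining space in two
])
```

Real shape grammars support richer shapes, labels and rule-matching conditions; the research question is which rules produce functionally meaningful subdivisions.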
Contact: Abdoulaye Diakité
3D raster representations are increasingly gaining the interest of researchers. They are simple structures, but usually result in large datasets. Therefore, in previous research we have investigated the Octree data structure and its use for indoor path computation.
This research topic will continue and extend previous work by considering rasterization of outdoor space. The goal is to investigate what kind of Octree would be needed for seamless indoor/outdoor navigation.
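A minimal Python sketch of the data structure in question: a region octree that keeps subdividing while a node holds too many points and its cell is above a minimum size (the capacity and size parameters are illustrative; a navigation variant would additionally label leaves as free or occupied):

```python
class Octree:
    """Minimal region octree over 3D points in a cubic domain."""
    def __init__(self, lo, hi, points, capacity=4, min_size=1.0):
        self.lo, self.hi, self.points = lo, hi, points
        self.children = []
        size = hi[0] - lo[0]              # cubic cells: one size suffices
        if len(points) > capacity and size > min_size:
            mid = tuple((l + h) / 2 for l, h in zip(lo, hi))
            for octant in range(8):       # bit d of `octant` picks the
                clo, chi = [], []         # upper/lower half in dimension d
                for d in range(3):
                    if (octant >> d) & 1:
                        clo.append(mid[d])
                        chi.append(hi[d])
                    else:
                        clo.append(lo[d])
                        chi.append(mid[d])
                inside = [p for p in points
                          if all(clo[d] <= p[d] < chi[d] or
                                 (p[d] == chi[d] == hi[d]) for d in range(3))]
                self.children.append(Octree(tuple(clo), tuple(chi), inside,
                                            capacity, min_size))

    def leaf_count(self):
        if not self.children:
            return 1
        return sum(c.leaf_count() for c in self.children)

# 20 points clustered in one corner of an 8 m cube: the tree refines
# only there, which is exactly the appeal over a uniform 3D raster
pts = [(0.1 * i, 0.1 * i, 0.1 * i) for i in range(20)]
tree = Octree((0.0, 0.0, 0.0), (8.0, 8.0, 8.0), pts)
```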
Contact: Sisi Zlatanova
The Google Tango tablet is a very intuitive, interactive and interesting tool for indoor scanning. The device contains suitable sensors to rapidly produce 3D models. But the few apps available for this task are very limited, and only provide either an already-processed mesh or point cloud samples.
The goal of this research is to evaluate to what extent the tablet can be used for indoor modelling. It is about fully exploiting the capabilities of the tablet to extract proper point clouds and perform semantically rich surface reconstruction, by relying on additional information such as the scanning trajectories, the coloured pictures of the environment, etc.
Contact: Abdoulaye Diakité
In order to perform fine-grained indoor navigation, one needs to consider the entire 3D free and non-free space. The non-free space is often characterised by furniture elements and people's activities (crowds, queues, etc.). The free space that is available for navigation cannot be evaluated without considering the obstacles. This problem is even more complex because the obstacles can move in the indoor space.
The goal of this research is to investigate the best way to take moving objects in an indoor environment into account in order to optimally evaluate the real free space available for navigation. This research is part of the SIMs3D project.
Contact: Abdoulaye Diakité
Most building managers have little to no information about the indoor status of their buildings: the number of buildings, their structure and rooms, the size of windows and doors, the area of room floors, etc. Many daily and yearly maintenance tasks such as cleaning, renovation, painting, refurnishing and safety are performed in an ad-hoc manner, which costs extra effort and money. There is great interest in a 3D indoor model whose geometry, topology and semantic information would serve the tasks of the building managers.
Currently two international standards for 3D indoor modelling are available: IFC and CityGML LOD4. Both standards have their advantages and disadvantages for such purposes. IFC has too many details and is difficult to integrate with GIS. CityGML LOD4 is relatively simple, but hardly supported by major vendors.
* First option: Is it possible to establish a 3D indoor model that can serve facility management purposes? This research will evaluate the suitability of the two standards (and available database implementations such as 3DCityDB) and propose a solution: extend/adapt one of them or design a new model combining the best characteristics of the two. The proposed approach should be realised as a data model in a DBMS and validated against a predefined set of user requirements. A simple web application allowing access to and viewing of the database would be recommendable. (Required skills: UML, SQL, basic programming)
* Second option: What kinds of algorithms are needed to automatically convert IFC to CityGML LOD4, taking care of valid geometry and semantics? It is expected that recommendations for the design of a building model will be derived, which will facilitate a fully automatic conversion. This might also result in stricter rules for representing indoor objects in CityGML. (Required skills: computational geometry, programming)
Contact: Sisi Zlatanova (in cooperation with More For You, Charim)
Many indoor navigation apps are currently available, but they are focussed on specific applications (e.g. shopping, tourism) and have no flexibility. They cannot be adapted to the profile of the user or the task he/she is completing. They do not take into consideration temporary or permanent changes of the environment, such as renovations, the reorganisation of spaces, or emergencies. User profiles and environmental changes should be reflected in the 3D indoor model (geometry and network) that is used to compute the navigation paths.
This research will investigate what kind of 3D indoor model can provide sufficient information to navigate different users through changing indoor environments. Are different LODs/layers needed? How can the connectivity of spaces be maintained? Can the network be derived on the fly, or should it be stored with the geometry model? Fundamental concepts of IndoorGML such as the dual graph and the Multi-Layered Space Model will be the starting point of the research. Generic user profiles and parameters reflecting changing indoor environments need to be identified. A final app should demonstrate the flexibility of the model and the proposed strategy for path navigation. (Required skills: UML, SQL, programming)
Contact: Sisi Zlatanova (in cooperation with CGI)
Current outdoor and indoor navigation systems are relatively simple and offer limited choices. Car navigation systems offer a choice between the shortest path and the fastest path, or avoiding tolls and highways. Indoor navigation apps can compute an optimal path to visit a sequence of targets (e.g. in shopping). Various other options have been reported in the literature, such as the least-turns path, the most interesting path, the least-traversing path or the least-obstruction path. Furthermore, current navigation systems mostly consider that one person is navigated to one static target point. But are these options sufficient for indoor navigation? How do people want to move indoors? What kinds of algorithms should be used, or developed? What kind of network is needed?
This research should investigate conditions for indoor navigation, and the corresponding algorithms and networks. The starting point of the research will be the taxonomy for navigation of emergency responders developed at the 3D geoinformation group. The research should identify, implement and validate new ways for indoor navigation. (Required skills: graph theory, programming)
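One of the reported options, the least-turns path, can for instance be computed with Dijkstra's algorithm over (cell, heading) states with a lexicographic cost. A sketch of our own on a 4-connected grid (a real system would run on the indoor network rather than a grid):

```python
import heapq

def least_turns_path(grid, start, goal):
    """Dijkstra over (cell, heading) states with lexicographic cost
    (turns, length): minimise turns first, then path length.
    grid[i][j] == 0 means free space; cells are (row, col) tuples."""
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    rows, cols = len(grid), len(grid[0])
    INF = (float("inf"), float("inf"))
    pq = [((0, 0), start, None)]        # (cost, cell, heading)
    best = {}
    while pq:
        cost, cell, heading = heapq.heappop(pq)
        if cell == goal:
            return cost                 # (turns, length) of an optimal path
        if best.get((cell, heading), INF) < cost:
            continue                    # stale queue entry
        for m in moves:
            ni, nj = cell[0] + m[0], cell[1] + m[1]
            if not (0 <= ni < rows and 0 <= nj < cols) or grid[ni][nj]:
                continue
            turn = 1 if heading is not None and m != heading else 0
            ncost = (cost[0] + turn, cost[1] + 1)
            if ncost < best.get(((ni, nj), m), INF):
                best[((ni, nj), m)] = ncost
                heapq.heappush(pq, (ncost, (ni, nj), m))
    return None

# 0 = free, 1 = obstacle; the wall forces a detour with two turns
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
turns, length = least_turns_path(grid, (0, 0), (2, 0))
```

Swapping the cost tuple (e.g. to penalise obstruction instead of turns) yields the other path variants mentioned above, which is why the choice of network and state space matters more than the algorithm itself.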
Contact: Sisi Zlatanova, Liu Liu
3D indoor models are still very rare and difficult to obtain. 3D BIM models hardly contain recent modifications, and 2D floor plans are not accurate and lack 3D. Range or optical images are a relatively easy and cheap way to collect 3D data to be provided to 3D modelling algorithms. There are several interesting technologies currently on the market that seem suited for indoor modelling, but they have never been thoroughly investigated and compared.
This research will concentrate on several technologies for collecting point clouds: ZEB1, Tango, terrestrial scanners, and optical video/images. A comparative study will be completed on the basis of a set of parameters, including the time for collecting and processing the data into a uniform point cloud. The four technologies will be tested on the same building under the same conditions. The most prominent technology for the quick update of parts of 3D models should be identified. A test bed for testing and evaluation should be set up. (Required skills: interest in scanning technology, processing of point clouds)
Contact: Sisi Zlatanova (related to M4C project SIMs3D)
3D reconstruction of indoor environments is a complex task: rooms contain many objects, free-standing or attached to walls and ceilings; many indoor spaces are half-open; and there are intermediate floors, balconies and stairs. Manual reconstruction is tedious and time-consuming, and no automatic or semi-automatic approaches currently exist. This research will investigate and design/adapt approaches for identifying the walls, ceilings and floors belonging to an indoor space and for constructing solids (where possible). The research should also suggest a data structure for the integrated management of vector geometry and the corresponding point clouds. Several approaches can be addressed, which can lead to different Master topics: shape grammars, voxels, segmentation/classification, etc. (Required skills: interest in processing point clouds, programming or alternatively using existing software)
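One common building block for identifying walls, ceilings and floors is RANSAC plane detection. A minimal sketch of our own on synthetic points (the parameters are illustrative; libraries such as PCL offer production implementations):

```python
import numpy as np

def ransac_plane(points, iterations=200, tol=0.05, rng=None):
    """Find the dominant plane (e.g. a floor, wall or ceiling) with
    RANSAC: sample 3 points, build a plane, count points within `tol`."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(iterations):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                    # degenerate (collinear) sample
        n = n / norm
        inliers = np.abs((points - a) @ n) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, a)
    return best_plane, best_inliers

# synthetic room slice: 400 floor points (z = 0) plus 100 noise points
rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(0, 5, 400), rng.uniform(0, 5, 400),
                         np.zeros(400)])
pts = np.vstack([floor, rng.uniform(0, 5, (100, 3))])
(plane_n, _), inliers = ransac_plane(pts, rng=1)
```

Detecting one plane is the easy part; grouping the detected planes into per-room solids and handling half-open spaces is where the research lies.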
Contact: Sisi Zlatanova, Ben Gorte (related to M4C project SIMs3D)
Doors and windows are of critical importance for indoor navigation and localisation, but very difficult to identify from point clouds: doors and windows can be closed or open during the scanning, and windows can be covered with curtains or sun shutters. This research will investigate which outdoor approaches could be appropriate for indoor environments. The recently completed research of Kaixuan Zhou can be used as a starting point. Different approaches can lead to different MSc topics: the use of floor plans, semi-automatic methods, feature detection, etc. (Required skills: interest in processing point clouds, programming or alternatively using existing software)
Contact: Sisi Zlatanova, Ben Gorte (related to M4C project SIMs3D)
IndoorGML is a recently accepted standard for the exchange of indoor information, needed mostly for navigation. The standard suggests three options for encoding the geometry and network information that is needed for navigation: 1) the geometry is provided by an IFC, KML or CityGML file and the IndoorGML file contains only the path/network for navigation; 2) simple geometry & semantics are stored in the IndoorGML file itself; and 3) no geometry is provided.
The three options have their pros and cons for different types of server-client applications. Server-based or client-based computations? How much information should be sent to the client: only the navigation path, or the navigation path and the building model? Should the whole path be provided, or should the server wait for requests from the user? An Android application should be developed that demonstrates the different approaches. (Required skills: web technology, app development)
Contact: Sisi Zlatanova (related to OGC pilot project)
Scanning indoor environments with ZEB1 is a quick and accurate method for obtaining indoor point clouds. However, the point clouds have no colour. This research will investigate an approach for integrating images with the ZEB1 point clouds to obtain coloured points. The research will be in collaboration with CSIRO, GeoSLAM and the University of Picardie. (Required skills: interest in processing point clouds, programming)
Contact: Sisi Zlatanova (in cooperation with LEAP3D)
This project is about connecting real-time sensors with point clouds and visualising them together, to be used for the analysis of data and for influencing the environment (the smart city concept). Details to be provided later.
Contact: Sisi Zlatanova and Ester de Bruin (LEAP3D)