Potential MSc topics



Calculating Street Widths

This project will be in partnership with researchers in the Department of Urbanism examining the relationship between street design and automated vehicles. As part of this thesis you will examine ways to automatically calculate the width of roads (i.e. the total width, including pedestrian walkways and cycling lanes). Part of this process will require you to find ways to automatically classify intersection areas, as well as to develop methods to automatically partition road segments.
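To give an idea of a possible starting point, here is a minimal sketch (not the project's method) that estimates widths along a road segment with shapely; the polygon and centreline are made-up inputs, and the distorted values near the ends of the polygon illustrate why intersection areas need special treatment.

```python
from shapely.geometry import LineString, Polygon

# hypothetical inputs: a road-surface polygon (incl. walkways and cycle lanes)
# and a pre-computed centreline for one road segment
road = Polygon([(0, 0), (100, 0), (100, 12), (0, 12)])
centreline = LineString([(0, 6), (100, 6)])

def width_profile(road_polygon, line, step=5.0):
    """Estimate the street width every `step` metres along the centreline
    as twice the distance from the centreline to the road boundary."""
    profile = []
    d = 0.0
    while d <= line.length:
        p = line.interpolate(d)
        profile.append((d, 2.0 * road_polygon.exterior.distance(p)))
        d += step
    return profile

# note: near the ends of the polygon the end caps (i.e. intersection areas)
# distort the estimate; detecting and handling those areas is part of the topic
print(width_profile(road, centreline))
```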

Knowledge of a programming language (preferably Python) is highly recommended.

Contact: Anna Labetski

Versioning of 3D spatial data


This topic is about the investigation of versioning mechanisms for 3D spatial data, with an emphasis on city models and datasets produced by municipalities. The main aim of the project is to implement existing versioning solutions across different 3D exchange formats and evaluate them. It also involves proposing new methods to handle the issue.

As part of the thesis you will have to identify specific use cases, such as updating an existing 3D city model with new data in a way that ensures that revisions are maintained. You will have to communicate with stakeholders, such as municipalities or organisations that deal with multiple iterations of 3D data.
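To illustrate one possible direction (not an existing solution), a git-like, content-addressed bookkeeping of CityJSON city objects could look as follows; all names and structures are hypothetical.

```python
import hashlib
import json

def object_hash(city_object):
    """Hash the canonical JSON serialisation of one CityJSON city object."""
    canonical = json.dumps(city_object, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def version(city_objects, parent=None, message=""):
    """A 'version' records, per object id, the hash of that object's content."""
    return {
        "parent": parent,                  # previous version, if any
        "message": message,
        "objects": {oid: object_hash(obj) for oid, obj in city_objects.items()},
    }

v1 = version({"bldg-1": {"type": "Building", "attributes": {"height": 10}}},
             message="initial delivery")
v2 = version({"bldg-1": {"type": "Building", "attributes": {"height": 12}}},
             parent=object_hash(v1), message="height corrected")

# objects whose hash changed between the two versions
changed = [oid for oid, h in v2["objects"].items() if v1["objects"].get(oid) != h]
print(changed)   # ['bldg-1']
```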

Knowledge of programming in Python is required.

Contact: Stelios Vitalis

Handling massive 3Di datasets in 3D City Database

The project is about exploring 3DCityDB for managing massive input and output data for 3Di. 3Di is a web-based application for water management, performing detailed hydraulic computations using high-resolution datasets. The aim is to: (1) create a data warehouse (i.e. a central repository) for 3D datasets used in hydraulic modelling by integrating 3DCityDB with 3Di; (2) use the integrated 3D city and 3Di area models for different applications such as visualization.

The project is in collaboration with Nelen en Schuurmans, Utrecht. Prior knowledge of databases and programming in Python or C/C++ is required.

Contact: Jonas van Schrojenstein and Kavisha

Sensor standards overview

Create an overview of the standards landscape related to sensors and observations that explains the scope of each of these standards, their application to practical use cases, their impact on Spatial Data Infrastructures, and the mechanisms by which they may be combined. Are these standards, for example, overlapping in their application domain, or are they complementary? Are there gaps that need to be addressed?

Different standards organizations are working on standards related to sensors and the measurements they produce, among them ISO, the OGC, the W3C and the IETF; in addition there are countless non-standardized, community- or platform-specific protocols and formats. These standards range from mature to early development stage, and from low-level IoT communication protocols to ontologies describing sensor semantics.

This project is done in cooperation with Geonovum, the governmental organisation responsible for developing geo-standards.

Contact: Linda van den Brink and Jantien Stoter

Moving objects on the Web

Describing trajectories and paths of moving objects requires a different approach than describing static ones. Research how best to support Web applications that generate or use data concerning moving objects. Use cases include transportation, tourism, migration, location-based services, travel blogs and wildlife tracking. There is an OGC standard for Moving Features, but its XML encoding is complex and verbose: not lightweight enough for, for example, (near) real-time operations involving moving objects via the Web.

This project is done in cooperation with Geonovum, the governmental organisation responsible for developing geo-standards.

Contact: Linda van den Brink and Jantien Stoter

Develop a framework for sharing sensor data

The ISO/OGC standard Observations and Measurements (O&M) provides a model for the exchange of information about sensor observations. It is a rather concise and abstract model, and it has always raised questions about how to create a profile in order to use it in practice. A framework for this is needed.

Creating a profile involves the definition of an information model that extends the abstract O&M model. O&M is defined in UML. In addition, there is an XML-based exchange format (the O&M GML encoding), a JSON implementation, and a linked-data ontology called the Semantic Sensor Network ontology. All of these may play a role in the framework, but the central question of this MSc topic is how to create a working O&M profile.

A practical case for the study could be the Base Registry Underground, which will contain a lot of sensor data such as groundwater measurements and soil quality observations.

This project is done in cooperation with Geonovum, the governmental organisation responsible for developing geo-standards.

Contact: Linda van den Brink and Jantien Stoter


Linked data: Extend CityJSON with machine-readable semantics

CityJSON is based on JavaScript Object Notation, a lightweight data-interchange format primarily used on the Web. However, JSON is just syntax, without any machine-readable knowledge about the meaning (semantics) of the data. Currently, the only way to figure out what the data in a CityJSON file means (e.g. what is a building?) is to read the CityGML specification (assuming you know where to find it), something only humans can do.

The aim of this MSc project is to find out how to encode the meaning of CityJSON files in a machine-readable way, directly embedded or linked in the JSON document, and to discover what benefits (or disadvantages) this would bring. This could be done by creating a vocabulary that describes the keys that can be used in CityJSON (basically a CityGML vocabulary or simple ontology), and by using JSON-LD to map the keys in CityJSON to this vocabulary.
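As an illustration of the JSON-LD idea, a minimal sketch could attach an @context that maps CityJSON keys to a CityGML vocabulary; the vocabulary URL and the chosen terms below are assumptions, not an existing vocabulary.

```python
# A minimal sketch, assuming a (hypothetical) CityGML vocabulary published at
# https://example.org/citygml# : attach a JSON-LD @context that maps CityJSON keys
# and city-object types to IRIs, so generic linked-data tools can interpret the file.
import json

VOCAB = "https://example.org/citygml#"          # hypothetical vocabulary / simple ontology

def add_jsonld_context(citymodel):
    citymodel["@context"] = {
        "@vocab": VOCAB,                         # default: every key maps to VOCAB + key
        "CityObjects": {"@id": VOCAB + "cityObjectMember", "@container": "@index"},
        "Building": VOCAB + "Building",
        "measuredHeight": {"@id": VOCAB + "measuredHeight",
                           "@type": "http://www.w3.org/2001/XMLSchema#double"},
    }
    return citymodel

with open("model.city.json") as f:               # any CityJSON file
    cm = add_jsonld_context(json.load(f))
print(json.dumps(cm["@context"], indent=2))
```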

The “LD” in JSON-LD stands for “linked data”. Once CityJSON-LD is created, we effectively have a lightweight linked data format for CityGML. But this is not a benefit in itself. The project would go on to explore the advantages and disadvantages of working with CityJSON-LD, as opposed to just CityJSON.

This project is done in cooperation with Geonovum, the governmental organisation responsible for developing geo-standards.

Contact: Linda van den Brink and Jantien Stoter


3D city model to BIM to 3D city model

The interoperability of 3D city models (in CityGML) and detailed building information models (BIM in IFC) has a lot of potential, but there are several unresolved problems. Many software tools exist to manage CityGML and IFC data. The aim of this project is to design and test workflows for conversions from CityGML to IFC and vice versa without loss of data quality (in geometry or semantics), through existing software's import and export functions. In collaboration with AMS and ISPRS.

Contact: Francesca Noardo, Ken Arroyo Ohori and Jantien Stoter


Integrated 3D geo-information for issuing building permits

The aim of the project is to design a platform for automatically checking building information models (BIM) against city regulations. By analysing the requirements and needs for building permits (in Europe), the characteristics of the needed information must be established. Procedures, tools and methodologies need to be defined to automatically check a building design against spatial regulations (especially height regulations). In collaboration with EuroSDR and the Kadaster.
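As a rough indication of the kind of check involved, here is a minimal sketch (not a real compliance checker) that compares the overall height of an IFC design against a hypothetical limit using ifcopenshell; units, georeferencing and the actual regulation texts are ignored, and API details may differ between ifcopenshell versions.

```python
import ifcopenshell
import ifcopenshell.geom

MAX_HEIGHT = 12.0                                  # hypothetical limit, in model units

settings = ifcopenshell.geom.settings()
settings.set(settings.USE_WORLD_COORDS, True)      # place all geometry in one frame

model = ifcopenshell.open("design.ifc")            # hypothetical BIM submission
zs = []
for elem in model.by_type("IfcBuildingElement"):
    try:
        shape = ifcopenshell.geom.create_shape(settings, elem)
    except RuntimeError:
        continue                                   # skip elements without geometry
    zs.extend(shape.geometry.verts[2::3])          # flat [x, y, z, x, y, z, ...] list

height = max(zs) - min(zs)
print("height %.2f -> %s" % (height, "OK" if height <= MAX_HEIGHT else "violates limit"))
```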

Contact: Francesca Noardo, Ken Arroyo Ohori and Jantien Stoter


Flat roofs

Several applications require information about the geometry of a building's roof, particularly whether it is flat or not. In theory this information can be derived from a point cloud. But deciding if a roof is flat is not always straightforward, and different domains have different definitions of flat. Think about calculating water runoff, or finding roofs with potential for terraces. It is therefore better if we can say something about the area of the flat surface of a roof, if there is any.

The aim of this MSc project is to develop a method for computing the area of the flat surface of a roof from a point cloud, and to identify whether the roof consists of multiple levels. Additionally, the work can be extended to modelling multi-level roofs (a.k.a. LoD1.3). Ultimately, the goal is to incorporate such a method in 3dfier, and it should therefore be computationally efficient.
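As an indication of a possible baseline (not the method to be developed), the height histogram of a single roof already says something about flat parts and levels; the thresholds below are arbitrary assumptions.

```python
import numpy as np

def flat_levels(roof_points, bin_size=0.10, min_fraction=0.15):
    """roof_points: (n, 3) array of x, y, z for one roof.
    Returns [(level_height, fraction_of_points), ...]; several entries
    suggest a multi-level roof, an empty list suggests a sloped roof."""
    z = roof_points[:, 2]
    edges = np.arange(z.min(), z.max() + 2 * bin_size, bin_size)
    hist, edges = np.histogram(z, bins=edges)
    levels = []
    for count, lo in zip(hist, edges[:-1]):
        frac = count / len(z)
        if frac >= min_fraction:            # many points at roughly one height
            levels.append((float(lo + bin_size / 2), float(frac)))
    return levels

pts = np.loadtxt("roof_points.xyz")          # hypothetical x y z file for one roof
print(flat_levels(pts))
```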

Knowledge of a programming language such as Python is required. You should also be comfortable with, or at least have a strong desire to learn, some statistics.

The topic is done in collaboration with Rijkswaterstaat.

Contact: Balázs Dukai and Ravi Peters


Develop a framework to handle massive CityJSON files

As an alternative encoding of the CityGML data model, we have recently developed CityJSON, which uses JavaScript Object Notation (JSON). The aim of CityJSON is to offer an alternative to the GML encoding of CityGML, which can be verbose and complex (and thus rather frustrating to work with). CityJSON aims at being easy to use, both for reading datasets and for creating them. It was designed with programmers in mind, so that tools and APIs supporting it can be quickly built.

While a CityJSON file is about 7 times more compact than the equivalent CityGML file, very large areas (like the whole city of Berlin) are still problematic.

The aim of this MSc project is to design a framework to deal with such massive CityJSON files. A potential solution is to design a tiling scheme and find a way to make it work with a web-based viewer, e.g. Cesium or three.js. There is an emerging standard for the tiling of 3D GIS datasets (3D Tiles), which should probably be reused or adapted.
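To make the tiling idea concrete, a minimal sketch (unrelated to the 3D Tiles specification) could assign CityObjects to a regular grid by the centroid of the vertices they reference; writing valid per-tile CityJSON files (re-indexing vertices, handling the "transform", parent/child objects, etc.) is left out here.

```python
import json
from collections import defaultdict

def vertex_indices(boundaries):
    """Recursively collect the vertex indices used by a geometry's boundaries."""
    if isinstance(boundaries, int):
        return [boundaries]
    out = []
    for b in boundaries:
        out.extend(vertex_indices(b))
    return out

def assign_to_tiles(cm, tile_size=1000.0):
    """Assumes the file has no 'transform', i.e. vertices are real-world coordinates."""
    verts = cm["vertices"]
    tiles = defaultdict(list)
    for oid, obj in cm["CityObjects"].items():
        idx = [i for g in obj.get("geometry", []) for i in vertex_indices(g["boundaries"])]
        if not idx:
            continue
        cx = sum(verts[i][0] for i in idx) / len(idx)
        cy = sum(verts[i][1] for i in idx) / len(idx)
        tiles[(int(cx // tile_size), int(cy // tile_size))].append(oid)
    return tiles

with open("berlin.city.json") as f:              # hypothetical massive CityJSON file
    print({k: len(v) for k, v in assign_to_tiles(json.load(f)).items()})
```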

Knowledge of Python and of web technologies (JavaScript, although that can be learned during the project) is enough.

Contact: Hugo Ledoux and Stelios Vitalis


Develop a Blender addon with a complete set of tools for CityJSON files

Blender is a 3D modelling tool that can be extended through the Python programming language. Through the BlenderGIS addon it can incorporate GIS functionality for the manipulation of 3D geospatial datasets.

The aim of this MSc project is to create an addon that adds the ability in Blender to import/export and manipulate all aspects of a 3D city model in CityJSON format. The addon should be able to:

Knowledge of Python is enough.
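To give an idea of the scaffolding involved, here is a minimal sketch of a Blender (2.8+) import operator; the identifier is hypothetical, and the actual CityJSON parsing and mesh creation, which are the substance of the thesis, are omitted.

```python
import json
import bpy
from bpy_extras.io_utils import ImportHelper

class ImportCityJSON(bpy.types.Operator, ImportHelper):
    """Import a CityJSON file as Blender objects"""
    bl_idname = "import_scene.cityjson_sketch"    # hypothetical identifier
    bl_label = "CityJSON (.json)"
    filename_ext = ".json"

    def execute(self, context):
        with open(self.filepath) as f:
            cm = json.load(f)
        # TODO: turn cm["CityObjects"] / cm["vertices"] into Blender meshes,
        # keeping semantics and attributes so they can be edited and exported again
        self.report({'INFO'}, "%d city objects read" % len(cm.get("CityObjects", {})))
        return {'FINISHED'}

def menu_import(self, context):
    self.layout.operator(ImportCityJSON.bl_idname)

def register():
    bpy.utils.register_class(ImportCityJSON)
    bpy.types.TOPBAR_MT_file_import.append(menu_import)

def unregister():
    bpy.types.TOPBAR_MT_file_import.remove(menu_import)
    bpy.utils.unregister_class(ImportCityJSON)
```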

Contact: Stelios Vitalis and Hugo Ledoux


Extraction of characteristics of buildings from aerial imagery

This project is done in cooperation with Readaar. Readaar was founded in 2016 and extracts all kinds of information from aerial imagery. To do this they combine remote sensing with machine learning.

Readaar has already developed a method to generate point clouds and 3D building models from stereo imagery. The next step is to translate this into useful insights like:

The focus of the student within this project will be on using the datasets to develop data-mining methods that extract the insights that Readaar’s customers want to have.

More information can be found there.

Contact: Hugo Ledoux and Sven Briels


Interoperability between BIM IFC & BEM gbXML

The aim of the project is to investigate the interoperability between two popular standards in the BIM (Building Information Modelling) and BEM (Building Energy Modelling) domains: IFC and gbXML. The goal is to compare the two standards, identify the schema mapping between them, and develop a (Python/C++) prototype based on these mappings for data conversion. Real-world gbXML datasets are to be generated and tested in energy simulations using tools such as EnergyPlus, Revit or IES.
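As a very rough indication of one small corner of such a conversion, here is a minimal sketch that maps IfcSpace entities to gbXML Space elements using ifcopenshell; the gbXML element layout shown is indicative only and would have to be checked against the gbXML schema, and the geometry mapping (where most of the work lies) is omitted.

```python
import xml.etree.ElementTree as ET
import ifcopenshell

model = ifcopenshell.open("building.ifc")        # hypothetical input file

root = ET.Element("gbXML", version="6.01")
campus = ET.SubElement(root, "Campus", id="campus-1")
building = ET.SubElement(campus, "Building", id="building-1", buildingType="Unknown")

for space in model.by_type("IfcSpace"):
    sp = ET.SubElement(building, "Space", id="space-%s" % space.GlobalId)
    ET.SubElement(sp, "Name").text = space.LongName or space.Name or space.GlobalId
    # the real mapping also needs geometry (space boundaries, shells, openings),
    # which is where most of the schema-mapping work lies

ET.ElementTree(root).write("building.gbxml", encoding="utf-8", xml_declaration=True)
```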

Prior knowledge of programming in Python or C++ is required.

Contact: Kavisha, Francesca Noardo and Hugo Ledoux

Level of Detail of Roads in CityGML

Road networks are utilised within a wide range of applications for navigation, city planning, and visibility analysis. There is a growing need for road networks within 3D city models for cases such as autonomous vehicle routing and road maintenance and repair. At the same time, while the concept of Levels of Detail (LoD) for buildings in CityGML has been extensively studied, this is not the case for roads. This project will examine a multitude of road standards, in both 2D and 3D, to refine and enhance the concept of LoDs for roads. A road network at various LoDs will then be created (with procedural modelling using Esri CityEngine, if there is interest) and tested within an application of the student’s choice.

Contact: Anna Labetski and Hugo Ledoux


Handling massive 3D data using NoSQL databases

The project is about exploring NoSQL databases for storing massive 3D data. The main test dataset is the TIN generated from the national elevation model of the Netherlands (AHN3), which has a point density of over 10 points/m2. Several data structures have been proposed for the representation and storage of TINs in memory and in databases. A few of these data structures (here) are to be tested with the generated TIN models, evaluating geometry, topology, storage, indexing, and loading times in a NoSQL database. The results are then to be compared with the already available results for a PostgreSQL/PostGIS database, to analyse the performance of NoSQL vs. SQL databases.
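As an illustration of one possible (untested) document design, each triangle could be stored as a MongoDB document with its geometry, vertex indices and a 2D bounding box; the database and collection names below are made up.

```python
import numpy as np
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
triangles = client["ahn3"]["tin_triangles"]

def insert_tin(vertices, faces):
    """vertices: (n, 3) array of coordinates; faces: (m, 3) array of vertex indices."""
    docs = []
    for a, b, c in faces:
        tri = vertices[[a, b, c]]
        docs.append({
            "v": tri.tolist(),                               # geometry
            "n": [int(a), int(b), int(c)],                   # topology (shared vertices)
            "bbox": {"minx": float(tri[:, 0].min()), "maxx": float(tri[:, 0].max()),
                     "miny": float(tri[:, 1].min()), "maxy": float(tri[:, 1].max())},
        })
    triangles.insert_many(docs)
    triangles.create_index([("bbox.minx", ASCENDING), ("bbox.miny", ASCENDING)])

# example range query: all triangles whose bbox falls inside one tile
tile = triangles.find({"bbox.minx": {"$gte": 120000}, "bbox.maxx": {"$lte": 121000},
                       "bbox.miny": {"$gte": 480000}, "bbox.maxy": {"$lte": 481000}})
```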

Prior knowledge of databases and programming in Python or C++ is required.

Contact: Kavisha and Hugo Ledoux


3D breakline extraction from point clouds

Point clouds, unstructured collections of 3D points in space, are nowadays collected with different acquisition methods, e.g. photogrammetry and LiDAR, and contain a wealth of information on both natural and man-made structures.

The aim of this project is to extract 3D breaklines directly from a point cloud such as the national AHN3. Breaklines indicate discontinuities in a terrain (such as the ridges in a mountain) and are needed for applications such as flood simulations and noise simulations.

As a starting point the 3D medial axis transform (MAT) can be used.

Prior knowledge of programming in Python or C++ is required.

Contact: Ravi Peters and Hugo Ledoux


Point cloud normal estimation based on the 3D medial axis transform

Point clouds, unstructured collections of 3D points in space, are nowadays collected with different acquisition methods, e.g. photogrammetry and LiDAR. While current point clouds are dense and offer an accurate representation of real-world objects and landscapes, they lack structure and semantics.

The aim of this project is to properly orient a point cloud, i.e. to find an approximation of the normal at each point; this normal should point outwards. Surface normals are essential for different kinds of processing of a point cloud, e.g. visualisation, shadow analysis or segmentation.

“Standard” methods, e.g. those implemented in PCL, usually find the nearest points of a given point, fit a plane, and choose between the two possible normals (up or down) based on a viewpoint. The problem is that in practice, e.g. with the AHN3 dataset, we do not have that information.
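For reference, a minimal sketch of that baseline (k nearest neighbours, plane fit, viewpoint-based flip) could look as follows; the parameters are arbitrary, and this is not the MAT-based method that the thesis is about.

```python
import numpy as np
from scipy.spatial import cKDTree

def pca_normals(points, k=16, viewpoint=(0.0, 0.0, 1e6)):
    """Estimate one normal per point by fitting a plane to its k nearest neighbours."""
    tree = cKDTree(points)
    _, nbrs = tree.query(points, k=k)            # indices of the k nearest neighbours
    normals = np.empty_like(points)
    for i, idx in enumerate(nbrs):
        nb = points[idx] - points[idx].mean(axis=0)
        # the normal is the direction of least variance of the neighbourhood
        _, _, vt = np.linalg.svd(nb, full_matrices=False)
        n = vt[-1]
        # flip towards the (assumed) viewpoint; this is exactly the step that fails
        # when, as with AHN3, no sensor/viewpoint position is available
        if np.dot(n, np.asarray(viewpoint) - points[i]) < 0:
            n = -n
        normals[i] = n
    return normals

pts = np.loadtxt("tile.xyz")                      # hypothetical x y z point cloud
nrm = pca_normals(pts)
```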

The topic involves building on our work with the 3D medial axis transform (MAT) and using the 3D MAT of a point cloud as a basis to obtain high-quality normals with a proper orientation.

It is possible to use Python for this project, although some knowledge of C++ would surely help.

Contact: Ravi Peters and Hugo Ledoux


Improvements (trees, bridges, viaducts) to 3dfier

We have developed a software tool, 3dfier, to automatically construct 3D city models from 2D GIS datasets (e.g. topographic datasets) and LiDAR/point cloud datasets. The software creates a 3D model by lifting every polygon to 3D, and the semantics of every polygon is used to perform the lifting. That is, water polygons become horizontal polygons, buildings become LoD1 blocks, roads become smooth surfaces, etc. Every polygon is triangulated (constrained Delaunay triangulation) and the lifted polygons are “stitched” together so that one digital surface model (DSM) is constructed. Our aim is to obtain one DSM that is error-free, i.e. no intersecting triangles, no holes (the surface is watertight), buildings integrated in the surface, etc.
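To give an idea of the lifting step for a single building, here is a minimal sketch (not 3dfier's actual code) that determines the extrusion height of a footprint from the LiDAR points inside it; the 90th-percentile rule is an assumption for illustration.

```python
import numpy as np
from shapely.geometry import Point, Polygon

def lod1_height(footprint: Polygon, points: np.ndarray, percentile=90):
    """points: (n, 3) array of x, y, z; returns the extrusion height for an LoD1 block."""
    inside = [z for x, y, z in points if footprint.contains(Point(x, y))]
    if not inside:
        return None                               # no points inside: needs a fallback rule
    return float(np.percentile(inside, percentile))

# usage: lod1_height(building_polygon, np.loadtxt("tile.xyz"))
```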

The aim of this MSc project is to add some of the currently missing features to 3dfier; it could be one of the following:

  1. adding a 3D representation of trees by iconisation, i.e. by inserting a parametric template that has the general shape/dimensions of a tree. This implies automatically detecting the trees, and a good start is the methodology described in this paper (Section 4.2)
  2. adding bridges and other man-made structures (such as viaducts) by first modelling them with Esri CityEngine and then “stitching” them to the 3D model.

These topics can be done with Python as a post-processing step of 3dfier.

Contact: Hugo Ledoux and Tom Commandeur


3D visualization of massive TINs

Visualization is an important and complex issue in the context of 3D city models. The enormous amount of data to be fetched, the heterogeneity of data sources, and the complexity of rendering are only a few parts of this challenge. The project aims at investigating 3D tiling schemes for efficiently visualizing massive TINs using the Cesium 3D web globe. Knowledge of programming in C++ is required.

Contact: Kavisha and Hugo Ledoux


3D Cadastre

For more than 15 years, many studies have been done on 3D cadastre to register multi-level ownership in a transparent and proper way. In 2016, we realised the first 3D cadastral registration in the Netherlands. But research is still needed to develop a solution for 3D cadastral registration that covers all issues. An MSc thesis could focus on one of them, such as: how to validate a 3D cadastral plan that was created from a BIM model? Traditionally, a 2D cadastral boundary is checked by surveyors in the field. What are the requirements for 3D cadastral boundaries? How can they be generated accordingly? And how can they be validated? How can a BIM model serve as input for this? Another issue is how to maintain and exchange 3D data about property boundaries, and how to go beyond the limited visualisation and navigation possibilities of 3D PDF.

Contact: Jantien Stoter and Hugo Ledoux


Automatic generalisation of depth contours

For some years, we have been working on a novel method to automatically generate “good” depth contours for hydrographic charts. Our latest results, based on the MSc thesis of Ravi Peters and published in that paper, have been picked up by major companies, who are implementing them.

The aim of the proposed project is to improve the results. That is, we can at this moment generate smooth contours for most seabed types, but the generation is applied to the whole dataset and a human must decide when the results are okay. The student would have to focus on automatically applying the algorithms only where they are needed, and on designing methods to assess when sufficiently good results have been achieved.

The code of the project is in C++, but it is probably possible to make do with Python.

Contact: Hugo Ledoux and Ravi Peters


Snap rounding in a triangulation

The most common way to do edge-matching or to clean small inconsistencies within and between datasets is to apply snapping (point-to-point or point-to-line). However, simple snapping creates many problems, including topological changes and inconsistencies. Snap rounding extends this method in order to give robustness guarantees, but current implementations, such as the one in CGAL, are extremely slow. Related to this, in the project pprepair, we have previously used a constrained triangulation as a robust method to repair polygons and planar partitions. Using this approach, topological errors are automatically fixed. We therefore believe that using a triangulation as a base structure is an intuitive and efficient way to optimize snap rounding, since we can perform simple snapping and recover from topological errors afterwards.
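For illustration, a minimal sketch of plain vertex snap rounding to a grid could look as follows; note that full snap rounding also snaps segments that pass through occupied ("hot") pixels, which is omitted here, and the tolerance is an arbitrary assumption.

```python
def snap_round(vertices, cell=0.01):
    """vertices: list of (x, y) tuples; returns snapped vertices and a merge mapping."""
    snapped, mapping, seen = [], {}, {}
    for i, (x, y) in enumerate(vertices):
        cx = (int(x // cell) + 0.5) * cell        # centre of the grid cell (hot pixel)
        cy = (int(y // cell) + 0.5) * cell
        key = (round(cx, 9), round(cy, 9))
        if key not in seen:
            seen[key] = len(snapped)
            snapped.append((cx, cy))
        mapping[i] = seen[key]                     # old vertex index -> new vertex index
    return snapped, mapping

# the first two vertices fall into the same cell and are merged
print(snap_round([(0.0, 0.0), (0.004, 0.003), (1.0, 1.0)]))
```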

The existing prototype (pprepair) that needs to be extended has been developed in C++, thus knowledge of C++—or a strong desire to learn it—is necessary.

Contact: Ken Arroyo Ohori


Line of sight (visibility) and raytracing analyses on a point cloud dataset

Calculating the visibility between two points using 3D city models provides valuable input to many application domains, such as solar analyses (shadowing) and finding the optimal place to install a surveillance camera or a billboard. This list is growing; for example, a potential application could be to estimate the visibility of an urban canyon from a satellite.

We have developed a 3D skeleton-based approach (part of that research project; PDF here) that would be the starting point of the project.
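As a point of comparison, a minimal brute-force baseline (not the skeleton-based approach) could test a sightline directly against the point cloud as follows; the 2.5D blocking test and all thresholds are simplifying assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def line_of_sight(points, a, b, step=0.5, radius=0.5):
    """points: (n, 3) point cloud; a, b: (x, y, z) endpoints.
    Returns True if b is (approximately) visible from a."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    tree = cKDTree(points[:, :2])                  # search in 2D around each sample
    n = max(int(np.linalg.norm(b - a) / step), 2)
    for t in np.linspace(0.0, 1.0, n)[1:-1]:       # skip the endpoints themselves
        s = a + t * (b - a)                        # sample on the sightline
        for i in tree.query_ball_point(s[:2], radius):
            if points[i, 2] > s[2] + 0.1:          # a point sticks out above the line
                return False
    return True

# usage: line_of_sight(np.loadtxt("tile.xyz"), (0, 0, 2), (50, 30, 15))
```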

Knowledge of Python and FME is sufficient.

Contact: Ravi Peters and Hugo Ledoux


Structure-aware Urban Model Simplification

With the advances of data acquisition technologies, point clouds of urban scenes can easily be captured by various means (e.g., laser scanners, drones, and various cameras). With such data, it is possible to automatically and quickly reconstruct 3D models of urban buildings (e.g., the 3D models in Google Earth). Though such models admittedly look great with textures, they cannot be used in several applications due to their imperfections and complexity (e.g., gigantic meshes, missing regions, noise, and undesired structures).

The aim of this project is to develop algorithms and tools to convert dense triangular mesh models into lightweight polygonal models, by assuming that the building surface is piecewise planar. We consider a method effective for this problem if it meets the following requirements:

Required Skills: Proficiency in one programming language (e.g., C/C++, Python, Java); experience with CGAL and numerical optimization is a bonus.

Contact: Liangliang Nan


Repairing 3D Urban Models

Nowadays the number of 3D building models is increasing explosively (e.g., the 3D models in Google Earth). These models can easily be obtained by applying state-of-the-art modeling/reconstruction techniques, or by manual creation using various software packages. It is quite common to observe errors and imperfections in these models, such as gaps, holes, self-intersections, duplicated geometry (e.g., double walls), T-junctions, degeneracies, non-manifoldness (e.g., more than two polygons meeting at the same edge), etc. Unfortunately, most applications, such as physics-based simulation, digital fabrication (e.g., 3D printing), and intelligent model-editing tools, can only accept clean surface models as input, which restricts the existing models mainly to visualization purposes. It becomes extremely difficult to eliminate these flaws when a certain combination of them is present. Thus, the restoration of these 3D models remains an open problem.

In this project, we would like to develop robust algorithms and tools for the automatic restoration and cleaning of such models. We would expect the method to produce a closed surface representation of a building that unambiguously partitions space into disjoint interior and exterior regions.

Required Skills: Proficiency in one programming language (e.g., C/C++, Python, Java); experience with mesh processing and machine learning (in particular deep learning) is a bonus.

Contact: Liangliang Nan


Coupling 3D city models with Ladybug tools for environmental analyses

The MSc thesis will focus on interoperability between the Ladybug tools and CityGML-based 3D city models. The Ladybug Tools are a collection of free applications that support environmental design and education. They are among the most comprehensive, connecting 3D Computer-Aided Design (CAD) interfaces to a host of validated simulation engines.

Particular attention will be paid to energy-related topics, in order to verify how and to what extent the CityGML Energy ADE (Application Domain Extension) can be used to deliver and store the additional energy-related data needed by the Ladybug tools.

The student's task will consist of choosing (together with the supervisors) a specific application covered by a Ladybug tool, analysing the software and data requirements of the selected Ladybug tool(s), and performing a mapping to the CityGML/Energy ADE data model. In addition, proper interfaces will have to be developed and tested by means of a concrete case study. This topic is available for up to two students (each one choosing a different application area).

Prerequisites: Knowledge of CityGML and its ADE mechanism. A bonus is experience with the CityGML 3D City Database and the associated tools. A programming language of choice (e.g. Java or Python) will be used.

Contact: Giorgio Agugiaro and Jantien Stoter


Interaction between urban heat islands and semantic 3D city models

This summer was exceptionally hot and the Netherlands suffered two consecutive heat waves, which had severe negative impacts on human health and, through drought, on the urban environment. The Department of Urbanism is running a Netatmo weather-station network with more than 100 stations across the city of The Hague. This means there is a rich dataset that allows studying the formation of urban heat islands in relation to the direct, local built environment around these sensors.

The MSc thesis will focus on investigating how a semantically enriched, CityGML-based 3D city model can help in understanding and forecasting urban heat islands. Additionally, based on a real case study, the 3D city model will be used to analyse qualitatively and quantitatively how certain physical urban conditions can contribute to (reducing) the heat island effect.

This MSc thesis will be jointly supervised by the 3D Geoinformation group and the group of Environmental Technology and Design.

Contact: Giorgio Agugiaro and Alexander Wandl


Urban metabolism and semantic 3D city models

This thesis topic is connected to the Horizon 2020 Research Project REPAiR: Resource management in peri-urban areas, going beyond urban metabolism. The project has developed a 2D urban mining model of the Amsterdam metropolitan area.

The MSc thesis will focus on defining and implementing a 3D “urban mining” model, which will help to investigate and quantitatively describe where, when, and how many critical materials can be obtained (“extracted”) from existing, ageing cities/neighbourhoods in order to be directly reused or recycled in the context of the circular economy. A CityGML-based 3D city model of a real case-study area will serve as the main source of integrated spatial and non-spatial information onto which the 3D urban mining model will be implemented.

This MSc thesis will be jointly supervised by the 3D Geoinformation group and the group of Environmental Technology and Design.

Contact: Giorgio Agugiaro and Alexander Wandl


Integrated modelling of utility networks in the urban environment

In the framework of Smart Cities, the MSc thesis will focus on interoperability issues when it comes to the heterogeneous utility networks (e.g. gas, water, electricity, sewage, district heating, telecommunications, etc.) that are found in the urban environment. Starting from a CityGML-based 3D city model, the MSc thesis will focus on testing and further extending the Utility Network ADE (Application Domain Extension), based on a concrete case study which will be agreed upon with the student. A possible application area is the energy sector, e.g. when it comes to coupling networks to specific simulation programs. The image shown here is taken from the MSc thesis of Xander van den Duijn (2018) and is an example (and a starting point) of the overall topic of the thesis proposed here.

Prerequisites: Knowledge of CityGML and its ADE mechanism, FME and Enterprise Architect is required. A bonus is experience with the CityGML 3D City Database and the associated tools.

Contact: Giorgio Agugiaro and Jantien Stoter