Connie Ko at Faculty of Environmental and Urban Change, York University

https://sway.office.com/lmKTr82cgWOVVptt?ref=Link&loc=play

https://euc.yorku.ca/research-spotlight/artificial-intelligence-ai-in-geomatics-at-york-university/


Terrain Surface Refinement

Ground points are not just ground points; they can be subcategorized into subclasses such as parking spaces, sidewalks, drivable roads, and many more.

Here is an illustration of how I manipulated the ground data:

Parking lots – get parking lot GIS data (2D)

Red points = surface parking lot points

Grey = all other points

Note that you can see the tiny cars on the parking lot; they are classified as non-ground points, not as parking lot (ground) points.

Similarly, you can do the same thing with sidewalks and drivable roads:

Objects on the road (e.g. trees) are not classified as road.
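
As a concrete illustration, here is a minimal Python sketch of this polygon-based refinement: ground-classified points that fall inside 2D parking lot polygons get their own subclass. The file names and the class code 30 are hypothetical, and it assumes the LAS tile and the polygon layer share the same CRS; the same clip can then be repeated with sidewalk and road polygons.

```python
# Hedged sketch: subclass ground points using 2D parking lot polygons.
# File names and the class code 30 are placeholders, not real data.
import laspy
import numpy as np
import geopandas as gpd
import shapely

las = laspy.read("tile.las")
ground_mask = np.asarray(las.classification) == 2   # ASPRS class 2 = ground

lots = gpd.read_file("parking_lots.shp")            # 2D polygons, same CRS as the tile
lots_union = lots.geometry.unary_union              # dissolve into one geometry

# Vectorized point-in-polygon test on the ground points only
x, y = np.asarray(las.x), np.asarray(las.y)
pts = shapely.points(x[ground_mask], y[ground_mask])
in_lot = shapely.contains(lots_union, pts)

# Assign a user-defined class code to parking lot ground points;
# cars on the lot stay non-ground, so they are never relabeled
PARKING_CLASS = 30
cls = np.asarray(las.classification).copy()
idx = np.flatnonzero(ground_mask)
cls[idx[in_lot]] = PARKING_CLASS
las.classification = cls
las.write("tile_refined.las")
```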

3D Bounding Box Labeling

3D annotation is such a time-consuming task; hats off to all the hard-working summer students for the enormous effort and time they have put in. This is just the beginning!

We are interested in annotating bounding boxes around street furniture; it will be very interesting to see how the training and predictions come out.

More validation visuals:

Even objects with lower point density

3D Bounding Box Labeling Tools

Labeling point clouds is very labor-expensive work. In the image domain, much work has been put into developing tools and algorithms for annotation, e.g. VoTT by Microsoft (https://github.com/microsoft/VoTT), Pixel Perfect (https://blogs.nvidia.com/blog/2020/08/28/v7-labs-image-annotation/), LabelMe (http://labelme2.csail.mit.edu/Release3.0/browserTools/php/publications.php), and many more if you just do a Google search.

Drawing 3D bounding boxes is even more tedious than image-based annotation because a box has six degrees of freedom (three for translation and three for rotation), plus its dimensions. From one perspective, e.g. the bird's-eye view (BEV), your box can look perfect, but when you rotate the view, it is not.
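
To make the degrees-of-freedom point concrete, here is a small sketch of a 3D box parameterization; this is my own illustrative form (center, size, and yaw about the vertical axis), not the format of any particular labeling tool:

```python
# Illustrative 7-parameter box: center (cx, cy, cz), size, and yaw.
# This parameterization is a sketch, not a specific tool's format.
from dataclasses import dataclass
import numpy as np

@dataclass
class Box3D:
    cx: float
    cy: float
    cz: float
    length: float
    width: float
    height: float
    yaw: float  # rotation about the vertical (z) axis, in radians

    def corners(self) -> np.ndarray:
        """Return the 8 box corners as an (8, 3) array."""
        l, w, h = self.length / 2, self.width / 2, self.height / 2
        x = np.array([ l,  l,  l,  l, -l, -l, -l, -l])
        y = np.array([ w, -w,  w, -w,  w, -w,  w, -w])
        z = np.array([ h,  h, -h, -h,  h,  h, -h, -h])
        c, s = np.cos(self.yaw), np.sin(self.yaw)
        # Rotate about z, then translate to the box center
        return np.stack([c * x - s * y + self.cx,
                         s * x + c * y + self.cy,
                         z + self.cz], axis=1)

# Example: a 4 m x 2 m x 1.5 m box rotated 30 degrees
box = Box3D(10.0, 5.0, 0.75, 4.0, 2.0, 1.5, np.radians(30))
print(box.corners().round(2))
```

Note that a box that looks right in BEV only constrains the planimetric parameters (cx, cy, length, width, yaw); cz and height can still be wrong, which is exactly why rotating the view exposes errors.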

So far I have tried two tools. The first is a MATLAB app called “Ground Truth Labeler”:

https://www.mathworks.com/help/driving/ref/groundtruthlabeler-app.html

and the second is from pointly.ai.

MATLAB Results

pointly.ai Results

Both are very easy to use, but Pointly.ai is much easier to navigate; however, it is also very costly.

Georeferencing the SketchUp Buildings

After the buildings are built in SketchUp, they need to be georeferenced before they can be displayed on a map in the proper location. Otherwise, they will show up in local coordinates, which in most cases are centered at (0, 0, 0).
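
Conceptually, every approach below ends up applying the same rigid transform: rotate the model about the vertical axis, then translate its local origin to real-world coordinates. A minimal sketch, using a made-up UTM zone 17N anchor near York University:

```python
# Conceptual sketch of georeferencing: rotate the local model about z,
# then translate its (0, 0, 0) origin to a real-world anchor. The UTM
# zone 17N anchor values below are assumptions, not surveyed numbers.
import numpy as np

def georeference(local_xyz, easting, northing, elevation, rotation_deg=0.0):
    """Map a local model coordinate into a projected CRS."""
    t = np.radians(rotation_deg)
    c, s = np.cos(t), np.sin(t)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return R @ np.asarray(local_xyz) + np.array([easting, northing, elevation])

# A vertex 10 m east of the model origin, placed at the anchor
print(georeference([10.0, 0.0, 0.0], 620_800.0, 4_847_800.0, 200.0, rotation_deg=15.0))
```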

Traditionally, there are two approaches:

Approach 1: Georeference in SketchUp and export as .kmz

Approach 2: From SketchUp, export in whichever format you want, then georeference in mapping software (e.g. ArcGIS)

My approach is a bit out of the box. Approach 3: use OSM (OpenStreetMap).

Approach 1:

SketchUp's “Add Location” documentation can be found here.

Pros: The georeferencing procedure is very easy; in my opinion, it is easier than using ArcGIS.

Cons: The only export format is .kmz, and the exported model does not look as detailed as in other formats.

Approach 2:

The second approach is to export the SketchUp model first, then georeference it in ArcGIS Pro. The documentation can be found here.

This link is also useful: https://www.esri.com/arcgis-blog/products/arcgis-pro/3d-gis/geolocating-revit-files-in-arcgis-pro/

Building model: not georeferenced vs. georeferenced

The procedure is also straightforward, but the translation from (0, 0, 0) to the appropriate place is rather cumbersome; personally, I did not like it.

Pros: SketchUp files can be exported with better graphics (I tried .obj); compared to .kmz (Approach 1), the .obj results look better visually.

Cons: The georeferencing procedure is not as easy as in Approach 1; it took me a lot more time to put the building in the right location using this method.

Approach 3: Out-of-the-box thinking

  1. Export the SketchUp building model as .obj.
  2. Use CityEngine to download OSM building models and export them as .gdb. Note that the downloaded buildings have no texture.
  3. Open the model in ArcGIS Pro and use the “Replace Multipatch” function to replace the OSM model with your SketchUp building model (.obj, from step 1).
I tested this with my SketchUp model of the Scott Library, and it worked.

Pros: I did not have to do any manual georeferencing in SketchUp or ArcGIS Pro; it's much faster.

I also did not have to use .kmz (which does not look good).
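
For anyone who wants to script step 2 instead of using CityEngine, a hedged alternative is to pull OSM building footprints directly in Python with osmnx (assuming a recent osmnx, ≥ 1.3). Note this yields 2D footprints, not the extruded multipatch models CityEngine produces, so it is a starting point rather than a drop-in replacement; the place query and output path are assumptions.

```python
# Hedged alternative to step 2: fetch OSM building footprints with
# osmnx instead of CityEngine. Gives 2D footprints, not extruded
# multipatch models; place name and output path are assumptions.
import osmnx as ox

buildings = ox.features_from_place(
    "York University, Toronto, Canada", tags={"building": True}
)
# Keep polygon footprints only and write geometry to a GeoPackage
polys = buildings[buildings.geometry.geom_type.isin(["Polygon", "MultiPolygon"])]
polys[["geometry"]].to_file("osm_buildings.gpkg", driver="GPKG")
```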

Scott Library

Original Massing Model
Google Earth Model

Google Street View

This is my corrected, textured model. A large portion of the building is covered by trees in both Google Street View and Google Earth, so it was hard to figure out what is behind those trees. The roof was reconstructed by estimation from Google Earth.

Benchmarking Datasets

Benchmarking datasets are essential for standardizing performance evaluation in machine learning.

For example:

Source: https://www.visualdata.io/discovery

Work in progress:

What I am working on right now is a LiDAR-centric benchmarking dataset. The motivation is that deep learning relies on large amounts of good-quality training data for semantic segmentation, object detection, and object classification. The layers I am assembling are listed below; a small gridding sketch follows the list.

LiDAR intensity
LiDAR Height
LiDAR Class Labels
Base Map
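
As a rough sketch of how per-pixel layers like these can be gridded from a raw LAS tile (the resolution, file name, and max-per-cell reduction are my assumptions, not the actual pipeline):

```python
# Rough sketch: grid LiDAR returns into per-pixel layers like those
# above. Resolution, file name, and the max-per-cell reduction are
# assumptions, not the actual pipeline.
import laspy
import numpy as np

las = laspy.read("tile.las")
x, y, z = np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)
res = 0.5  # raster cell size in metres (assumption)

cols = np.floor((x - x.min()) / res).astype(int)
rows = np.floor((y - y.min()) / res).astype(int)
shape = (rows.max() + 1, cols.max() + 1)

def grid_max(values, fill=0.0):
    """Reduce points to cells by keeping the maximum value per cell."""
    out = np.full(shape, fill)
    np.maximum.at(out, (rows, cols), np.asarray(values, dtype=float))
    return out

intensity = grid_max(las.intensity)
height = grid_max(z - z.min())             # height above the tile minimum
labels = grid_max(las.classification)      # crude: highest class code wins
```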

3D Textured Building Models

York Lanes

Stedman Lecture Halls

Chemistry Building

Steacie Science and Engineering Library

Accolade West Building and Centre for Film and Theatre

Google Earth Model
Final Model

Accolade East