Ground points are not just ground points: they can be subcategorized into subclasses such as parking spaces, sidewalks, drivable road, and many more.
Here is an illustration of how I manipulated the ground data:
Parking lots – get parking lot GIS data (2D)
Red points = surface parking lot points
Grey = all other points
Note that you can see the tiny cars in the parking lot: they are classified as non-ground points, not as parking lot points.
The same approach works for sidewalks and drivable road:
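The core of the parking-lot step above is a 2D point-in-polygon test: a ground point becomes a parking lot point if its (x, y) footprint falls inside a parking lot polygon. Here is a minimal, self-contained sketch with made-up coordinates (a real pipeline would read the GIS polygons and LiDAR points from files and likely use a library such as shapely or GDAL):

```python
# Sketch: reclassify ground points that fall inside 2D parking-lot polygons.
# All coordinates below are made-up illustrative values.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon (list of (x, y) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edges that a horizontal ray from (x, y) crosses.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def label_ground_points(points, parking_polygons):
    """Label each (x, y, z) ground point 'parking' if its 2D footprint
    lies in any parking-lot polygon, else 'other'."""
    labels = []
    for x, y, z in points:
        hit = any(point_in_polygon(x, y, poly) for poly in parking_polygons)
        labels.append("parking" if hit else "other")
    return labels

# Toy example: one square parking lot and two ground points.
lot = [(0, 0), (10, 0), (10, 10), (0, 10)]
pts = [(5.0, 5.0, 0.1), (20.0, 5.0, 0.2)]
print(label_ground_points(pts, [lot]))  # ['parking', 'other']
```

The same function can be reused for sidewalk and drivable-road polygons by swapping in the corresponding GIS layer.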
3D annotation is such a time-consuming task; hats off to all the hard-working summer students putting in enormous effort and time. This is just the beginning!
We are interested in annotating bounding boxes around street furniture; it will be very interesting to see how the training and predictions come out.
More validation visuals:
Even objects with lower point density
Tomorrow is the workshop, where we will cover material for both 2D images and 3D point clouds. I have put together lecture material that tells a good story in the context of remote sensing applications. I am very excited to meet some new people tomorrow and inspire scientists to apply deep learning in their research! Tomorrow will be great.
Labeling point clouds is very labor-expensive work. In the image domain, much work has been put into developing tools and algorithms for annotation, e.g. VoTT by Microsoft (https://github.com/microsoft/VoTT), Pixel Perfect (https://blogs.nvidia.com/blog/2020/08/28/v7-labs-image-annotation/), LabelMe (http://labelme2.csail.mit.edu/Release3.0/browserTools/php/publications.php), and many more if you just do a Google search.
Drawing 3D bounding boxes is even more tedious than image-based annotation because a box has 6 degrees of freedom. From one perspective, e.g. bird's-eye view (BEV), your box can look perfect, but when you rotate the view, it is not.
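To see why one view is not enough, it helps to write the box down explicitly. A common minimal parameterization (an illustrative sketch, not any particular tool's format) is centre, size, and a yaw angle; changing the yaw moves every corner, so a box that looks aligned in BEV can still be visibly off once the viewpoint rotates:

```python
import math

# Sketch: a 3D bounding box parameterized by centre (cx, cy, cz),
# size (l, w, h), and yaw about the vertical axis. Values are illustrative.

def box_corners(cx, cy, cz, l, w, h, yaw):
    """Return the 8 corners of the box as (x, y, z) tuples."""
    c, s = math.cos(yaw), math.sin(yaw)
    corners = []
    for dx in (-l / 2, l / 2):
        for dy in (-w / 2, w / 2):
            for dz in (-h / 2, h / 2):
                # Rotate the local offset about the vertical axis, then translate.
                corners.append((cx + dx * c - dy * s,
                                cy + dx * s + dy * c,
                                cz + dz))
    return corners

aligned = box_corners(0, 0, 0, 4.0, 2.0, 1.5, 0.0)
tilted = box_corners(0, 0, 0, 4.0, 2.0, 1.5, math.radians(10))
print(aligned[0], tilted[0])  # same box, different yaw: every corner moves
```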
So far I have tried two tools. One is a MATLAB app called “Ground Truth Labeler”,
and the other is from pointly.ai.
Both are very easy to use, but Pointly.ai is much easier to navigate; however, it is also very costly.
After the buildings are built in SketchUp, they need to be georeferenced before they can be displayed on a map in their proper location. Otherwise they will show up in local coordinates, which in most cases are centered at (0, 0, 0).
Traditionally, there are two approaches:
Approach 1: Georeference in SketchUp and export as .kmz
Approach 2: From SketchUp, export in whichever format you want, then georeference in mapping software (e.g. ArcGIS)
My approach is a bit out of the box. Approach 3: use OSM (OpenStreetMap).
SketchUp's Add Location documentation can be found here.
Pros: The georeferencing procedure is very easy; in my opinion, easier than using ArcGIS.
Cons: The only export format is .kmz, and the exported model does not look as detailed as other formats.
The second approach is to export the SketchUp model first, then georeference it in ArcGIS Pro. The documentation can be found here.
This link is also useful: https://www.esri.com/arcgis-blog/products/arcgis-pro/3d-gis/geolocating-revit-files-in-arcgis-pro/
The procedure is also straightforward, but translating the model from (0, 0, 0) to the appropriate place is rather cumbersome; personally, I did not like it.
Pros: SketchUp files can be exported with better graphics (I tried .obj); compared to .kmz (Approach 1), the .obj results look better visually.
Cons: the georeferencing procedure is not as easy as Approach 1; it took me a lot more time to put the building in the right location using this method.
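For what it's worth, the cumbersome "move from (0, 0, 0)" step can also be done in code rather than interactively, by shifting every vertex of the exported .obj by a known anchor point in projected coordinates. This is a hypothetical sketch: the anchor values are placeholders, and it assumes the model's axes already match the projected coordinate system (real .obj exports may be Y-up and need an axis swap first):

```python
# Sketch: georeference an OBJ model by translating every vertex ('v x y z'
# line) by an assumed anchor in projected coordinates. Placeholder values.

ANCHOR = (620000.0, 4847000.0, 90.0)  # assumed UTM easting, northing, elevation

def georeference_obj_lines(lines, anchor):
    """Add `anchor` to every vertex line of an OBJ file; pass other lines through."""
    ex, ny, el = anchor
    out = []
    for line in lines:
        parts = line.split()
        if parts and parts[0] == "v":
            x, y, z = (float(p) for p in parts[1:4])
            out.append(f"v {x + ex} {y + ny} {z + el}")
        else:
            out.append(line)  # faces, normals, comments, etc. are unchanged
    return out

# Toy model: two vertices and a face record.
model = ["v 0 0 0", "v 1.5 0 3.2", "f 1 2 1"]
print(georeference_obj_lines(model, ANCHOR))
```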
Approach 3: Out-of-the-box thinking
- Export the SketchUp building model as .obj
- Use CityEngine to download OSM building models and export them as .gdb. Note that the downloaded buildings have no texture
- Then open the model in ArcGIS Pro and use the “Replace Multipatch” function to replace the OSM model with your SketchUp building model (.obj, from step 1).
Pros: I did not have to do any manual georeferencing in SketchUp OR ArcGIS Pro; it's much faster.
I also did not have to use .kmz (which does not look good).
Benchmarking datasets are essential for standardizing performance evaluation in machine learning.
Work in progress:
What I am working on right now is a LiDAR-centric benchmarking dataset. The motivation is that deep learning relies on a large amount of good-quality training data for semantic segmentation, object detection, and object classification.
Stedman Lecture Halls
Steacie Science and Engineering Library
Accolade West Building and Centre for Film and Theatre