Texturizing building models

3D building models covering the City of Toronto are available for download from Open Data Toronto (https://open.toronto.ca/dataset/3d-massing/). According to the website, “The Open Data site will enable access to application developers, designers, urban planners and architects, and the public. Ideally this will enable the creation of a visual portal and access to a large collection of city building ideas.” I visualized part of the York University campus with ArcGIS Pro; the buildings show up as multipatch features.

Here is an interactive version:

https://arcg.is/1C0rjX

Scroll to zoom in and out, left-click to grab, right-click to rotate.

One thing you will notice is that the buildings are basically white blocks; the process of adding colors and details (e.g. windows, doors, building materials, etc.) is called texturizing. If you google “texture mapping” you will find many methods for doing this task.

For the purpose of learning, I am using SketchUp to perform the task.

https://www.sketchup.com/plans-and-pricing

It is a manual process, but the results make for a substantial visual improvement:

This is the original 3D model from the Toronto Massing data.

This is a screen capture of the Google Earth model.

This is the texturized 3D model.

The model isn’t perfect, and I didn’t spend a lot of time on it, but it is definitely an improvement over the massing data. The textured model can then be uploaded back into ArcGIS Pro for georeferencing.

Effective Communications Through Academic Posters

Once in a while I get invited to talk about how to make media in academia; this time I am talking about making posters to a class of 3rd-year university students. Through a hands-on workshop, we will discuss the components of an academic poster and ways to present your work effectively and in a way that is visually pleasing to your audience. At the end of the workshop, I will show them how to make one using PowerPoint. Tips and tricks too!

Using LiDAR to Update Tree Inventory

Tree inventories are extremely expensive (labour) and require regular field visits. In this case, the tree inventory was collected in 2015. I have used LiDAR to update the tree inventory in several ways:

  1. GPS measurements from field data are not accurate; the GPS errors range from 1 m to 5 or 6 m. Hence, I use LiDAR to correct the tree-top positions (see the sketch after this list).
  2. Tree height changes over time as trees grow. Here I use the 2018 LiDAR dataset to update the tree heights.
  3. Trees that are no longer on the site are removed from the inventory.
  4. Tree crown size is better delineated from LiDAR than from field measurements.
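For idea 1, here is a minimal Python sketch of one way to snap surveyed stem positions to canopy height model (CHM) tree tops; the file names, the 5-by-5-pixel search window, the 2 m height floor, and the survey coordinate are my own assumptions, not values from this project:

import numpy as np
import rasterio
from scipy.ndimage import maximum_filter
from scipy.spatial import cKDTree

# hypothetical CHM raster derived from the 2018 LiDAR
with rasterio.open("chm_2018.tif") as src:
    chm = src.read(1)
    # candidate tree tops = local maxima of the CHM above a 2 m floor
    rows, cols = np.nonzero((chm == maximum_filter(chm, size=5)) & (chm > 2.0))
    xs, ys = rasterio.transform.xy(src.transform, rows, cols)

tops = np.column_stack([xs, ys])

# snap each GPS-surveyed stem (with its 1-6 m of error) to the nearest LiDAR tree top
gps_trees = np.array([[623100.2, 4847410.7]])  # hypothetical survey point
dist, idx = cKDTree(tops).query(gps_trees)
corrected_positions = tops[idx]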

I generated 3D tree models and placed them on a terrain with the LiDAR data overlaid on top; compared to the previous 3D model generated from field measurements, you can see that the point cloud and the trees now align very well.

3D Symbols for ArcGIS Pro

The aesthetics of a 3D scene can be much improved by 3D symbols. ArcGIS Pro (https://support.esri.com/en/technical-article/000016699) has a collection of preset 3D objects as symbols, where you can customize sizes and colours. You can even add size and angle attributes to your feature class to customize the size and orientation of each individual object, which I had never had to do before but just learnt how to do (a sketch follows below).
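As a rough illustration of the attribute part, here is a minimal arcpy sketch that adds size and angle fields to a hypothetical tree feature class; the path, field names, and expressions are my own assumptions, and the actual connection between the fields and the 3D symbol is made in the Symbology pane, not in code:

import arcpy

fc = r"C:\data\campus.gdb\trees"  # hypothetical feature class

# fields that the symbology can later read for per-feature size and rotation
arcpy.management.AddField(fc, "SYM_SIZE", "DOUBLE")
arcpy.management.AddField(fc, "SYM_ANGLE", "DOUBLE")

# e.g. scale each symbol by a measured height field (assumed to exist)
arcpy.management.CalculateField(fc, "SYM_SIZE", "!HEIGHT_M!", "PYTHON3")

# and give every tree a random rotation so the models don't all face one way
arcpy.management.CalculateField(
    fc, "SYM_ANGLE", "random.uniform(0, 360)", "PYTHON3",
    code_block="import random")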

You can even draw your own 3D object and import it as a symbol, which is what I am learning now.

Here I created a scene with some symbols created by ESRI and some created by importing object files (3D objects).

Tree Detection in 3D

I have been working on detecting trees from LiDAR data for a while now. One of the biggest challenges still not completely solved by the community is the detection of individual trees where the trees highly overlap. Most detection methods underperform for overlapping trees; we will continue to work on improving the detection accuracy for occluded trees, using some of the techniques adopted in occluded-human detection.

Cross labeling LiDAR datasets

What? You can do that?

Actually, kind of. Anyone who has done any geomatics work will know some distance-search function. So if you have a dataset that has classification labels (dataset A), you can theoretically transfer the labels to a second dataset (dataset B) that doesn’t contain any labels yet. Simple.

For each point in dataset B, find the closest point in dataset A, copy the label over, and it’s done. There are many existing functions for calculating distances; my colleague started with MATLAB. However, searching 500,000,000 points that way takes forever. After looking through several methods, I ended up using “Closest Point Set” in CloudCompare. Although I still have to split the data into 1 GB tiles, the processing time is much faster!
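For a sense of how simple the underlying idea is, here is a minimal Python sketch of the same nearest-neighbour label transfer using laspy and a k-d tree; the file names are hypothetical, and for half a billion points you would still want to tile first:

import laspy
import numpy as np
from scipy.spatial import cKDTree

labelled = laspy.read("dataset_A.las")    # has classification labels
unlabelled = laspy.read("dataset_B.las")  # no labels yet

# index dataset A, then find each B point's closest A point
tree = cKDTree(np.column_stack([labelled.x, labelled.y, labelled.z]))
_, idx = tree.query(np.column_stack([unlabelled.x, unlabelled.y, unlabelled.z]))

# copy the labels over and write the result
unlabelled.classification = np.asarray(labelled.classification)[idx]
unlabelled.write("dataset_B_labelled.las")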

Example data tile coloured by height, without labels
Same example data tile, with labels copied from a previously labelled dataset (orange = building, light blue = ground, green = vegetation, dark blue = unclassified)

LiDAR Self-Learn Starter (2 of 2)

http://lastools.org/

The tool I want to write about today is the LAStools software suite, the earliest set of tools I used when I first learned about LiDAR. The very first tool I used was las2txt (https://rapidlasso.com/las2txt/), because back in the day I didn’t know what a *.las file looked like, so I converted the file into text to study it. In this post, I will talk about a few of the commands I use most often and include my study notes for you.

las2txt

Interface for las2txt

Download the tools and double-click las2txt.exe; you will be able to input your file (red box 1) and select the attributes (e.g. x, y, z, intensity…) (red box 2) that you want to write to your text file.

command prompt for las2txt

Another way to run las2txt (or any of the LAStools) is in the command prompt, as in the figure above. The syntax is straightforward: call the exe (las2txt), give the input file (-i), the parse string of attributes to write (-parse), and the output file (-o), e.g. las2txt -i lidar.las -parse xyzi -o lidar.txt for x, y, z and intensity. The details can be found in the README file that comes with the download.

lasground

  • This is a tool for bare-earth extraction: it classifies LiDAR points into ground points (class = 2) and non-ground points (class = 1)
  • lasground -i terrain.las -o classified_terrain.las
  • Options to modify the classification, e.g.:
    • -set_classification 2
    • -change_classification_from_to 2 4
    • -classify_z_below_as -5.0 7
    • -classify_z_above_as 70.0 7
    • -classify_z_between_as 2.0 5.0 4
    • -classify_intensity_above_as 200 9
    • -classify_intensity_below_as 30 11
    • -nature (default step size 5 m)
    • -town (step size 10 m)
    • -city (step size 25 m)
    • -metro (step size 35 m)
An example of the output file: green = ground, blue = non-ground

More information: https://rapidlasso.com/lastools/lasground/

http://lastools.org/download/lasground_README.txt

lasnoise

  • This tool flags or removes noise points in LAS/LAZ/BIN/ASCII files
  • ‘-step_xy 2’ and ‘-step_z 0.3’ would create cells of size 2 by 2 by 0.3 units
  • Noise = class 7
  • lasnoise -i lidar.las -o lidar_no_noise.las
  • lasnoise -i *.laz -step 4 -isolated 5 -olaz -remove_noise
    • The maximal number of other points in the 3 by 3 by 3 grid of cells that still designates a point as isolated is set with ‘-isolated 5’

More information: https://rapidlasso.com/lastools/lasnoise/

http://lastools.org/download/lasnoise_README.txt

lasheight

  • Computes the height of each LAS point above the ground.
  • Assumes class 2 = ground points so that a ground TIN can be constructed (the idea is sketched below)
    • Ground points can be in a separate file ‘-ground_points ground.las’ or ‘-ground_points dtm.csv -parse ssxyz’
    • e.g. lasheight -i *.las -ground_points ground.las -replace_z
  • More information: https://rapidlasso.com/lastools/lasheight/

An example of the output file. lasheight calculates the height above ground level (using the ground classified by lasground); red points are far above the ground and blue/green points are close to ground level.
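Conceptually, lasheight triangulates the class-2 ground points into a TIN, interpolates a ground elevation under every point, and subtracts. Here is a minimal scipy sketch of that idea, with a hypothetical input file (note that LinearNDInterpolator returns NaN outside the ground points’ convex hull):

import laspy
import numpy as np
from scipy.interpolate import LinearNDInterpolator

las = laspy.read("classified_terrain.las")  # hypothetical output of lasground
x, y, z = np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)
ground = np.asarray(las.classification) == 2

# TIN over the ground points, then the ground elevation under every point
ground_tin = LinearNDInterpolator(np.column_stack([x[ground], y[ground]]),
                                  z[ground])
height_above_ground = z - ground_tin(x, y)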

lasclassify

  • This tool classifies buildings and high vegetation (i.e. trees) in LAS/LAZ files. It requires that lasground and lasheight have already been run
  • Default parameters
    • ‘-planar 0.1’ classifies planar regions as roofs and ‘-rugged 0.4’ classifies rugged regions as trees. Set the above-ground threshold with ‘-ground_offset 5’; the default is 2.
    • e.g. ‘-planar 0.2’ allows neighboring points a deviation of up to 0.2 meters from the planar region they share; the default is 0.1 meters.
    • e.g. lasclassify -i in.laz -o out.las -planar 0.15 -ground_offset 1.5
    • “Please note that the unlicensed version will set intensity, gps_time, user data, and point source ID to zero.”

More information: https://rapidlasso.com/lastools/lasclassify/

An example output for lasclassify. Orange = building; yellow = vegetation; blue = ground; dark blue = others

I hope my notes can help you get started with LiDAR data.

LiDAR Self-Learn Starter (1 of 2)

I decided to write this post because many students have asked how they can start learning about and visualizing LiDAR data. LiDAR data is becoming more and more available to the public, and many people want to learn about it.

Software: CloudCompare

https://www.danielgm.net/cc/

According to their website, you are “free to use them for any purpose, including commercially or for education”. This is good because if you are just trying to learn and want to make some maps, it is a good starting point that doesn’t cost anything.

CloudCompare reads and writes the following file formats; personally, I have mostly dealt with the *.las format in the past. *.laz is the “zipped” version of *.las.

Modified from: http://pcp2019.ifp.uni-stuttgart.de/presentations/04-CloudCompare_PCP_2019_public.pdf

I want to go over a few of the most common functions that I use.

1. Cleaning

Let’s first talk about the interface. You can click the “Properties” tab to look at the properties of the las file. Then you can change the colorization according to the attributes that come with the file; here I change the colorization to “classification”, so 1) tree points are yellow, 2) ground points are green, and 3) small buildings are orange.

Note that there is a bounding box around the extent of the file, and the box is bigger than it should be. This is because there are “noise” points in the air; there are not many of them, so you cannot see them very well, but the oversized bounding box indicates that there is probably noise above the trees.

You can clean them with either of these filters (a scripted version of the same idea follows the list):

• Tools -> SOR Filter (https://www.cloudcompare.org/doc/wiki/index.php?title=SOR_filter)

• Tools -> Noise Filter (https://www.cloudcompare.org/doc/wiki/index.php?title=Noise_filter)
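Both filters are interactive, but the idea behind SOR (discard points whose mean distance to their k nearest neighbours is abnormally large) is easy to script as well. Here is a minimal sketch with Open3D; the file name is hypothetical and the parameter values are just starting points:

import open3d as o3d

pcd = o3d.io.read_point_cloud("tile.ply")  # hypothetical cloud

# keep points whose mean distance to their 20 nearest neighbours is within
# 2 standard deviations of the global mean (same idea as CloudCompare's SOR)
clean, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
o3d.io.write_point_cloud("tile_clean.ply", clean)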

2. Distances

According to https://www.cloudcompare.org/doc/wiki/index.php?title=Cloud-to-Cloud_Distance, the tool computes the distance between two point clouds.

source: https://www.cloudcompare.org/doc/wiki/index.php?title=File:Cc_cloud_cloud_dist_result.jpg

Using their example, two point clouds are being compared: the blue areas have very small distance differences (the two point clouds are almost the same there) and the red areas have large differences (the two point clouds differ most there). This tool is useful for change detection as well: imagine you acquire one set of LiDAR points in summer and one in winter over a forested area; you would be able to detect all the missing tree crowns where the leaves have fallen to the ground.
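In its simplest form, the C2C distance is just the distance from every point in the compared cloud to its nearest neighbour in the reference cloud (CloudCompare also offers more refined local models). Here is a minimal numpy/scipy sketch of that idea, with random stand-in data:

import numpy as np
from scipy.spatial import cKDTree

reference = np.random.rand(100_000, 3)  # stand-in reference cloud
compared = np.random.rand(100_000, 3)   # stand-in compared cloud

# nearest-neighbour distance from each compared point to the reference
dist, _ = cKDTree(reference).query(compared)
print(dist.mean(), dist.max())  # large distances flag areas of change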

CloudCompare also lets you do the following comparisons which I often use:

• Cloud-to-cloud (C2C)

• Cloud-to-mesh (C2M)

• Cloud-to-primitive (C2P)

• Robust cloud-to-cloud (M3C2) – using ‘clean’ normals is very important in M3C2

3. Rigid Transformation Matrices

https://www.cloudcompare.org/doc/wiki/index.php?title=Apply_Transformation

This is a useful tool if you simply want to apply a transformation matrix to a point cloud.

‘Edit -> Apply Transformation’ tool
You can even apply the inverse transformation by checking the corresponding checkbox (e.g. to go back to the previous position of the entity)

In this example, I am applying a translation of 100 in the x-direction.
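For reference, the matrix in question is an ordinary 4x4 homogeneous transform. Here is a minimal numpy sketch of applying this example’s translation, and of what the inverse-transformation checkbox corresponds to; the point cloud is a random stand-in:

import numpy as np

points = np.random.rand(1000, 3)  # stand-in point cloud

T = np.eye(4)      # identity rotation...
T[0, 3] = 100.0    # ...plus a translation of 100 in x

# apply: append a homogeneous 1, multiply, drop the extra coordinate
homog = np.hstack([points, np.ones((len(points), 1))])
moved = (homog @ T.T)[:, :3]

# the inverse-transformation checkbox amounts to applying T^-1
restored = (np.hstack([moved, np.ones((len(moved), 1))]) @ np.linalg.inv(T).T)[:, :3]
assert np.allclose(restored, points)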

4. Alignment

Alignment is used when I want to align one point cloud with respect to another, or to some known points (reference points – think of ground control points, GCPs, used for referencing an image). https://www.cloudcompare.org/doc/wiki/index.php?title=Align

This can be done in three ways:

• Match bounding-box centers – simply select two (or more) entities, then call ‘Tools > Register > Match bounding-box centers’

• Manual transformation – ‘Edit > Rotate/Translate’ menu

• Picking point pairs – ‘Tools > Registration > Align (point pairs picking)’. This is the most useful one: you have to decide which cloud is the reference and which one you want to align. Then pick the point pairs (at least 3 pairs), click the “Align” button, and you will also get RMS results for evaluation.


In this example provided from CloudCompare:

“As soon as you have selected at least 3 or more pairs (you must have picked exactly the same number of points in each cloud) CC will display the resulting RMS and you can preview the result with the ‘align’ button. You’ll also see the error contribution for each pair next to each point in the table (so as to remove and pick again the worst pairs for instance). You can add new points to both sets anytime (even after pressing the ‘align’ button) in order to add more constraints and get a more reliable result. And as suggested before, you can also remove points with the dedicated icons above each tables or with the ‘X’ icon next to each point.”

Also, if you have reference points in 3D that you already know (e.g. from a ground survey, or the corners of a building), you may use them as the reference and enter those points manually for alignment.
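Under the hood, aligning from picked point pairs means solving for the rigid transform that minimizes the RMS between the pairs, which has a closed-form SVD solution (the Kabsch/Horn method). A minimal sketch assuming at least three pairs and no scaling; the corner coordinates are made up:

import numpy as np

def rigid_from_pairs(src, dst):
    """Best-fit rotation R and translation t such that dst ~ src @ R.T + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)   # cross-covariance of the pairs
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

# three hypothetical picked pairs (e.g. building corners)
src = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0]])
dst = np.array([[100.0, 0, 0], [110, 0, 0], [100, 10, 0]])
R, t = rigid_from_pairs(src, dst)
rms = np.sqrt(((src @ R.T + t - dst) ** 2).sum(axis=1).mean())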

5. Automatic Registration

https://www.cloudcompare.org/doc/wiki/index.php?title=ICP

This tool registers two point clouds automatically, using ICP (Iterative Closest Point).

  • Tools > Fine Registration (ICP)
  • There are three common parameters you may want to adjust:
  1. Number of iterations/RMS difference – ICP is an iterative process; stop it either after a maximum number of iterations or as soon as the error (RMS) falls below a given threshold.
  2. Final overlap – the user specifies the portion of the data/registered cloud that would actually overlap the model/reference cloud if both clouds were registered.
  3. Adjust scale – determines a potential difference in scale. If your clouds have different scales (e.g. photogrammetry clouds), you can check this option to resolve the scaling as well.

The rest is pretty straightforward.

In this example provided by CloudCompare, the final transformation matrix is calculated with the RMS produced automatically. A theoretical overlap of 50% means the two point clouds overlap by 50%. I found this an important parameter to set for obtaining a “good” RMS.
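If you ever want to script this step outside CloudCompare, Open3D exposes a comparable ICP routine. Here is a minimal sketch with hypothetical files, an assumed 1 m correspondence distance, and the iteration cap from point 1 above:

import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("scan_to_align.ply")  # data cloud
target = o3d.io.read_point_cloud("reference.ply")      # model cloud

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=1.0,  # in data units, very data-dependent
    init=np.eye(4),                   # initial guess: no transform
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    criteria=o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=50))

print(result.transformation)  # final 4x4 matrix
print(result.inlier_rmse)     # RMS over matched points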

These are my top five picks for CloudCompare; stay tuned for part 2 of the self-learn starter.

3D Indoor mapping using GIS etc..

I didn’t realize how difficult it is to map indoors (in 3D) from AutoCAD floor-plan drawings until I had to work with them this summer. It is just the beginning, and it has already taken us four months to convert 80+ buildings into somewhat-3D models, floor by floor. Topologically, the original AutoCAD drawings have a lot of problems from a GIS perspective, e.g. non-closing polygons, non-connecting lines, annotations in the wrong spots, etc. We managed to get a kick start this summer, and there is definitely a lot more work to come in the near future!

Creating 3D floor plans has a lot of benefits, from simulation to modeling to navigation to visualization. I was very lucky to work with two talented undergraduate students at York University.