LiDAR Self-Learn Starter (2 of 2)

http://lastools.org/

The tool I want to write about today is the LAStools software suite, which was one of the earliest tools I explored when I first learned about LiDAR. The very first tool I used was las2txt (https://rapidlasso.com/las2txt/) because back in the day I didn’t know what a *.las file looked like, so I converted the file into text to study it. In this post, I will talk about a few of the commands that I use most often and include my study notes for you.

las2txt

Interface for las2txt

Download the tools and double-click las2txt.exe; you will be able to input your file (red box 1) and select the attributes (e.g. x, y, z, intensity…) (red box 2) that you want to write to your text file.

command prompt for las2txt

Another way to run las2txt (or any of the LAStools) is in the command prompt, as in the figure above. The syntax is straightforward: it involves calling the exe (las2txt), the input file (-i), the parse string (-parse), and the output file (-o). The details can be found in the README file included in the download.
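For example, a minimal call (with a hypothetical input file named lidar.las, assuming you want the x, y, z coordinates plus intensity in the text file) might look like:

  las2txt -i lidar.las -parse xyzi -o lidar.txt

The parse string ‘xyzi’ writes one line per point with x, y, z followed by the intensity; the README lists all available parse characters.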

lasground

  • This is a tool for bare-earth extraction: it classifies LIDAR points into ground points (class = 2) and non-ground points (class = 1)
  • lasground -i terrain.las -o classified_terrain.las
  • Modify the classification with transform options (e.g.; see the combined sketch after this list)
    • -set_classification 2
    • -change_classification_from_to 2 4
    • -classify_z_below_as -5.0 7
    • -classify_z_above_as 70.0 7
    • -classify_z_between_as 2.0 5.0 4
    • -classify_intensity_above_as 200 9
    • -classify_intensity_below_as 30 11
  • Terrain-type presets set the step size (e.g.)
    • -nature (default step size 5m)
    • -town (step size 10m)
    • -city (step size 25m)
    • -metro (step size 35m)
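A rough sketch combining a terrain preset with one of the classification transforms above (hypothetical file names; check the lasground README to confirm which transforms your LAStools version accepts on the command line):

  lasground -i terrain.las -o classified_terrain.las -town
  lasground -i terrain.las -o classified_terrain.las -city -classify_z_below_as -5.0 7

The first call classifies ground with the 10m step size; the second also flags everything below an elevation of -5.0 as class 7 (noise).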
An example of the output file, green = ground, blue = non-ground

More information: https://rapidlasso.com/lastools/lasground/

http://lastools.org/download/lasground_README.txt

lasnoise

  • This tool flags or removes noise points in LAS/LAZ/BIN/ASCII files
  • The cell size can be set with ‘-step_xy 2’ and ‘-step_z 0.3’, which would create cells of size 2 by 2 by 0.3 units (combined in the sketch after this list)
  • Noise = class 7
  • lasnoise -i lidar.las -o lidar_no_noise.las
  • lasnoise -i *.laz -step 4 -isolated 5 -olaz -remove_noise
    • The maximum number of other points allowed in the surrounding 3 by 3 by 3 grid of cells for a point to still be considered isolated is set with ‘-isolated’, e.g. ‘-isolated 5’ in the example above
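Putting the cell size and isolation threshold together, a sketch with hypothetical file names might look like:

  lasnoise -i lidar.las -o lidar_flagged.las -step_xy 2 -step_z 0.3 -isolated 5

By default the isolated points are classified as noise (class 7); adding ‘-remove_noise’ would drop them from the output instead.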

More information: https://rapidlasso.com/lastools/lasnoise/

http://lastools.org/download/lasnoise_README.txt

lasheight

  • Computes the height of each LAS point above the ground.
  • Assumes class 2 = ground points so that a ground TIN can be constructed
    • Ground points can be in a separate file: ‘-ground_points ground.las’ or ‘-ground_points dtm.csv -parse ssxyz’ (see the sketch after this list)
    • e.g. lasheight -i *.las -ground_points ground.las -replace_z
  • More information: https://rapidlasso.com/lastools/lasheight/
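If the ground is stored as a text DTM rather than a LAS file, a sketch (hypothetical file names) would be:

  lasheight -i tile.las -o tile_height.las -ground_points dtm.csv -parse ssxyz

where ‘-parse ssxyz’ skips the first two columns (s) of the CSV and reads x, y, z from the remaining ones; ‘-replace_z’ could be appended to store the computed heights in place of the elevations, as in the example above.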

An example of the output file. lasheight calculates the height above ground level (using the ground classified by lasground); red points are farther above the ground and blue/green points are closer to ground level

lasclassify

  • This tool classifies buildings and high vegetation (i.e. trees) in LAS/LAZ files. It requires that lasground and lasheight have already been run (see the pipeline sketch after this list)
  • Default parameters
    • Regions are labelled as planar (‘-planar 0.1’, roofs) or rugged (‘-rugged 0.4’, trees). The above-ground threshold is set with ‘-ground_offset 5’; the default is 2.
    • For example, ‘-planar 0.2’ allows neighboring points to deviate by up to 0.2 meters from the planar region they share. The default is 0.1 meters.
    • e.g. lasclassify -i in.laz -o out.las -planar 0.15 -ground_offset 1.5
    • “Please note that the unlicensed version will set intensity, gps_time, user data, and point source ID to zero.”
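A minimal sketch of the full pipeline described above, with hypothetical file names and the example parameters from these notes:

  lasground   -i tile.las        -o tile_ground.las
  lasheight   -i tile_ground.las -o tile_height.las
  lasclassify -i tile_height.las -o tile_classified.las -planar 0.15 -ground_offset 1.5

If I remember the class codes correctly, buildings end up in class 6 and high vegetation in class 5 after lasclassify; confirm against the lasclassify README for your version.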

More information: https://rapidlasso.com/lastools/lasclassify/

An example output for lasclassify. Orange = building; yellow = vegetation; blue = ground; dark blue = others

I hope my notes can help you get started with LiDAR data.

LiDAR Self-Learn Starter (1 of 2)

I decided to write up this post because many students have asked how they can start learning and visualizing LiDAR data. LiDAR data is becoming more and more available to the public, and many want to learn about it.

Software: CloudCompare

https://www.danielgm.net/cc/

According to their website, CloudCompare is “free to use them for any purpose, including commercially or for education”. This is good because if you are just trying to learn and want to make some maps, it is a good starting point without paying for software.

CloudCompare will read and write the following file formats; personally, I have mostly dealt with the *.las format in the past. *.laz is the compressed (“zipped”) version of *.las.

Modified from: http://pcp2019.ifp.uni-stuttgart.de/presentations/04-CloudCompare_PCP_2019_public.pdf

I want to go over a few of the most common functions that I use.

1. Cleaning

Let’s first talk about the interface. You can click the “Properties” tab to look at the properties of the las file. Then you can change the colorization according to the attributes that come with the file; here I change the colorization to “classification”, so 1) tree points are yellow, 2) ground points are green, and 3) small buildings are orange.

Note that there is a bounding box around the extent of the file, and the box is bigger than it should be. This is because there are “noise” points in the air. There are not many of them, so you cannot see them very well, but the oversized bounding box indicates that there is probably noise above the trees.

You can clean them with either of the following filters (or from the command line, as sketched after this list):

• Tools -> SOR Filter: https://www.cloudcompare.org/doc/wiki/index.php?title=SOR_filter

• Tools -> Noise Filter: https://www.cloudcompare.org/doc/wiki/index.php?title=Noise_filter
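If you would rather script the cleanup, CloudCompare also has a command-line mode. A rough sketch, with flag names taken from the command-line wiki page (treat the exact syntax and defaults as assumptions to verify against your version):

  CloudCompare -SILENT -O lidar.las -C_EXPORT_FMT LAS -SOR 6 1.0

Here ‘-SOR 6 1.0’ applies the statistical outlier removal filter using 6 neighbours and a standard deviation multiplier of 1.0, and the cleaned cloud is saved back out in LAS format.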

2. Distances

According to https://www.cloudcompare.org/doc/wiki/index.php?title=Cloud-to-Cloud_Distance, the tool computes the distance between two point clouds.

source: https://www.cloudcompare.org/doc/wiki/index.php?title=File:Cc_cloud_cloud_dist_result.jpg

Using their example, two point clouds are compared: blue areas have very small distance differences (the two clouds are almost the same there) and red areas have large differences (the two clouds differ most there). This tool is also useful for change detection; imagine you acquire one set of LiDAR points in summer and one set in winter over a forested area, and you will be able to detect all the missing tree crowns where the leaves have fallen to the ground.

CloudCompare also lets you do the following comparisons, which I often use (a command-line sketch follows this list):

• Cloud-to-cloud (C2C)

• Cloud-to-mesh (C2M)

• Cloud-to-primitive (C2P)

• Robust cloud-to-cloud (M3C2) – using ‘clean’ normals is very important in M3C2
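For scripted comparisons, a rough cloud-to-cloud distance sketch in the same command-line mode (again, flag names as I understand them from the wiki; verify against your version):

  CloudCompare -SILENT -O summer.las -O winter.las -C2C_DIST

As far as I know, the first cloud opened is treated as the compared cloud and the second as the reference, and the distances are stored as a scalar field on the compared cloud.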

3. Rigid Transformation Matrices

https://www.cloudcompare.org/doc/wiki/index.php?title=Apply_Transformation

This is a useful tool if you simply want to apply a transformation matrix to a point cloud.

‘Edit -> Apply Transformation’ tool
You can even apply the inverse transformation by checking the corresponding checkbox (e.g. to go back to the previous position of the entity)

In this example, I am applying a translation of 100 units in the x-direction.
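For reference, the 4x4 matrix I would paste into the dialog for that translation (assuming the usual convention of the translation sitting in the last column) is:

  1 0 0 100
  0 1 0 0
  0 0 1 0
  0 0 0 1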

4. Alignment

Alignment is used when I want to align one point cloud with respect to another, or to some known points (reference points; think of ground control points, GCPs, used for referencing an image). https://www.cloudcompare.org/doc/wiki/index.php?title=Align

This can be done three ways:

• Match bounding-box centers – simply select two (or more) entities, then call ‘Tools > Register > Match bounding-box centers’

• Manual transformation – via the ‘Edit > Rotate/Translate’ menu

• Picking point pairs – ‘Tools > Registration > Align (point pairs picking)’. This is the most useful one: you have to decide which cloud is the reference and which is the one you want to align. Then, pick the point pairs (at least 3 pairs). Click the “Align” button and you will also get RMS results for evaluation.

https://www.cloudcompare.org/doc/wiki/index.php?title=Align

In this example provided by CloudCompare:

“As soon as you have selected at least 3 or more pairs (you must have picked exactly the same number of points in each cloud) CC will display the resulting RMS and you can preview the result with the ‘align’ button. You’ll also see the error contribution for each pair next to each point in the table (so as to remove and pick again the worst pairs for instance). You can add new points to both sets anytime (even after pressing the ‘align’ button) in order to add more constraints and get a more reliable result. And as suggested before, you can also remove points with the dedicated icons above each tables or with the ‘X’ icon next to each point.”

Also, if you have reference points in 3D that you already know (e.g. from a ground survey, or corners of a building), you may use them as the reference and enter those points manually for alignment.

5. Automatic Registration

https://www.cloudcompare.org/doc/wiki/index.php?title=ICP

This tool registers two point clouds automatically using ICP (Iterative Closest Point).

  • Tools > Fine Registration (ICP)
  • There are three common parameters you may want to adjust:
  1. Number of iterations / RMS difference – ICP is an iterative process; stop it either after a maximum number of iterations or as soon as the error (RMS) drops below a given threshold
  2. Final overlap – the user specifies the portion of the data/registered cloud that would actually overlap the model/reference cloud once both clouds are registered
  3. Adjust scale – resolves a potential difference in scaling. If your clouds have different scales (e.g. photogrammetry clouds), you can check this option to resolve the scaling as well

The rest is pretty straightforward.

In this example provided by CloudCompare, the final transformation matrix is calculated and the RMS is produced automatically. The theoretical overlap is 50%, which means the two point clouds overlap by 50%. I found this to be an important parameter to set for obtaining a “good” RMS.
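The same registration can also be run headlessly; a rough sketch in command-line mode (flag names from the command-line wiki, so treat the exact syntax as an assumption to verify):

  CloudCompare -SILENT -O data.las -O model.las -ICP -OVERLAP 50 -ITER 50

where ‘-OVERLAP 50’ corresponds to the 50% final overlap discussed above and ‘-ITER 50’ caps the number of iterations.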

These are my top five picks for CloudCompare; stay tuned for Self-Learn part 2.

3D Indoor mapping using GIS etc..

I didn’t realize how difficult it is to map indoors (in 3D) from AutoCAD floor plan drawings until I had to work with them this summer. It is just the beginning, and it has already taken us four months to convert 80+ buildings into rough 3D, floor by floor. Topologically, there are a lot of problems with the original AutoCAD drawings from a GIS perspective, e.g. non-closing polygons, non-connecting lines, annotations in the wrong spot, etc. We managed to get a kick start this summer, and there is definitely a lot more work to come in the near future!

Creating 3D floor plans has a lot of benefits, from simulation to modeling to navigation to visualization. I was very lucky to work with two talented undergraduate students at York University.

Single 3D Tree Detection From LIDAR Point Cloud Using Deep Learning Network. CRSS-SCT 2020

ABSTRACT

The automatic creation and update of tree inventories provides an opportunity to better manage trees as a precious resource in both natural and urban environments. Detecting and classifying individual tree species automatically is still one of the most challenging problems to be resolved. The goal of this project is to automatically detect 3D bounding boxes for individual trees from airborne LiDAR data using state-of-the-art single-stage 3D object detection methods. According to Guo et al. (2019), there are five broad categories of 3D object detection methods: 1) multi-view methods, 2) segmentation-based methods, 3) frustum-based methods, 4) point cloud-based methods, and 5) BEV (bird’s eye view)-based methods. The Deep Convolutional Neural Network (DCNN) we are choosing falls into the point cloud-based category, using the point cloud directly as input. Our study area is the York University Keele Campus, Toronto, Ontario, Canada. We acquired three sets of data for the project: 1) airborne LiDAR, 2) mobile LiDAR, and 3) field data. We acquired the two sets of LiDAR data during September 2018, the ground (mobile) data and the airborne data. The ground data collection was carried out on 8th September 2018 by a mobile mapping system consisting of a Lynx HS600 (Teledyne Optech, Toronto, Ontario) and a Ladybug 360° camera with six integrated cameras. The airborne mission took place on 23rd September 2018 with two ALS sensors, Galaxy-PRIME and Titan (Teledyne Optech, Toronto, Ontario). The average flying height for the mission was 1829m (6000ft) AGL. For the field data, 5717 trees on campus were recorded with species name, tree crown width [m], and locations obtained by a handheld GPS.

Training data are generated semi-automatically with a marker-controlled watershed segmentation method (Markan, 2018) that first over-generates detected local maxima (treetops) and watersheds (tree crown boundaries). By merging the representative watershed candidates, we produced 5717 tree crown boundaries. One of the advantages of this method over a simple marker-controlled watershed segmentation is that it allows delineated tree crowns to overlap, which is common even in the urban environment. Ground truth bounding boxes are then generated from each 2D watershed and the height of the individual tree. While current 3D object detection methods focus on the detection of vehicles and pedestrians for autonomous driving applications, we aim to test the applicability of existing methods on tree objects. Current state-of-the-art methods we are considering include VoxelNet (Zhou and Tuzel, 2018), SECOND (Yan et al., 2018), PointPillars (Lang et al., 2018), and Part-A^2 (Shi et al., 2019). Regarding the 3D object detection modules, tree point cloud data are sampled and processed into representations suitable for deep convolutional neural networks. Several representations, such as 3D-voxelization-based and bird’s-eye-view-based representations, are being explored. The representations are passed through a trainable neural network to generate predicted 3D bounding boxes for trees. Improving detection accuracy not only affects classification results but will also improve the existing tree locations that were traditionally recorded by handheld GPS or estimated from Google Street View imagery.

Using Toronto Street Tree Data

A randomly zoomed location in Toronto; the tree locations (red dots) and the actual trees are not aligned

For the map, please visit: https://arcg.is/0KbH0q

The tree data was collected three years ago and is hosted by ESRI (http://hub.arcgis.com/datasets/bf0f4752b85c44318918e2a8be2847dc_0). I quickly created a web map to show how the tree locations are not aligned with the treetops. In order to use this dataset (along with its attributes, e.g. DBH, height, species) for validation (of detection and classification), this problem must be addressed and corrected first.

Detection, Segmentation and Classification

Using airborne view, they can be identified as:

Using Airborne LiDAR and after conversion to canopy height model:

I have been studying these three separate but related problems for trees for a while now. After years of researchers in the community trying to solve these problems, they are still not completely solved. It is actually an extremely difficult task. One of the biggest challenges is the high intra-class variation combined with the small inter-class variation.

Given that a common pipeline is Detection -> Segmentation -> Classification, errors always propagate from one process to the next. This approach suffers from this inherent architectural problem. However, the benefit is that the stages can be trained on three separate datasets, diversifying the features of representation. Ideally, training each stage would then require a perfectly annotated dataset, which is very rare.

41st Canadian Symposium On Remote Sensing. Call for Abstract: Special Session (AI in Remote Sensing)

Session Chairs: Dr. Connie Ko, Dr. Gunho Sohn. York University

The accessibility of remote sensing data has expanded, and the quantity and complexity of the incoming data have also increased. This challenge requires new theory, methods and system implementations in remote sensing data analytics. Recently, deep learning has demonstrated enormous success in visual data analytics. Deep learning is a representation learning model, which allows the representation to be learned from the data itself. Due to its generalizability, this technology has been widely used in both the computer science and remote sensing communities. However, when applying these methods to outdoor natural environments and natural objects, there are still many problems to be solved, which include, but are not limited to: learning with small, noisy, out-of-distribution training data, domain and knowledge transfer, active, continual and fine-grained learning, integrating physical priors, and data fusion. This special session will provide an opportunity to discuss the current status of the AI technology adopted in the remote sensing research community and to discuss its limitations and potential for directing our future research efforts.

How to Apply?
To submit your abstract, please visit https://csrs2020.exordo.com, where you will find the 2020 ExOrdo abstract submission link. Note that you will be required to set up an account first.

Extended deadline for submission: Feb 28, 2020. Extended abstracts up to 400 words are invited.

The details of the conference can be found at https://crss-sct.ca/conferences/csrs-2020/. Please share widely with any students or researchers who may be interested.