
Installation

To use the pywhu3d tool, you need to install the pywhu3d library for your interpreter. We recommend Python 3.7 to follow this tutorial.
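One way to set this up is sketched below; the conda environment name is arbitrary, and the assumption that the package is published on PyPI under the name pywhu3d may not hold (install from source otherwise).

```shell
# Create an isolated environment with the recommended interpreter (conda assumed available)
conda create -n whu3d python=3.7 -y
conda activate whu3d

# Install pywhu3d (assumes the package is published on PyPI under this name)
pip install pywhu3d
```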

Usage

Initialization

Create a WHU3D object:

Parameters:

  • data_root: path to the data root folder

  • data_type: one of als, mls, pc, img

  • format: one of txt, ply, npy, h5, pickle

  • scenes (optional): a list of scene names; if not specified, all scenes found in the data folder are used
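A minimal construction sketch follows; the import path, data path, and scene names are assumptions — only the parameter names come from the list above.

```python
# Hypothetical usage sketch; the import path and concrete values are assumptions.
from pywhu3d.tool import WHU3D  # import path assumed

whu3d = WHU3D(
    data_root='/data/whu3d',   # path to your data root folder (assumed)
    data_type='mls',           # one of: als, mls, pc, img
    format='h5',               # one of: txt, ply, npy, h5, pickle
    scenes=['0404', '0940'],   # optional; defaults to all scenes in the folder
)
```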

The structure of the data folder should be like this:

It is also recommended to create a whu3d object from the default split scenes, e.g. by passing whu3d.train_split.

Some attributes can then be accessed directly, including data_root, data_type, scenes, and download_link.

Attributes

The attributes of whu3d may differ depending on your operations (e.g., after applying the compute_normals function, the attributes may include normals, which did not exist before). Nonetheless, you can always use the list_attributes function to see which attributes are currently accessible.

You can get a specific attribute across all scenes by using the get_attribute function.
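Continuing from a constructed whu3d object, a sketch of the two calls above; the attribute name 'coords' is an assumption.

```python
# Inspect which attributes are currently available, then fetch one across
# all scenes (the attribute name 'coords' is an assumption).
whu3d.list_attributes()
coords = whu3d.get_attribute('coords')
```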

Data

You could access the data of a specific scene by using whu3d.data[scene][attribute].
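For example (continuing from a constructed whu3d object; the attribute keys are assumptions):

```python
# Access raw per-scene data; attribute key names are assumptions.
scene = whu3d.scenes[0]
xyz = whu3d.data[scene]['coords']          # e.g. per-point coordinates
intensity = whu3d.data[scene]['intensity']  # e.g. per-point intensity
```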

Labels

Labels could also be directly accessed.

If you have interpreted the labels using the interprete_labels function, you can also get the interpreted labels.
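A sketch of both access paths (the label key name is an assumption; whu3d.gt is described in the 'Labels interpretation' section):

```python
# Raw labels, per scene (the key name 'semantics' is an assumption):
sem = whu3d.data[scene]['semantics']

# After interpretation, remapped labels are exposed via whu3d.gt:
whu3d.interprete_labels()
gt_sem = whu3d.gt[scene]['semantics']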

Visualization

Point cloud

You can visualize a specific scene or a list of scenes using the vis function. By default, this function shows both the point cloud and the image frames, and the points are randomly sampled with sample_ratio = 0.01 for faster visualization. If color is not specified, points are colored by height; alternatively, you can choose a specific color source, including intensity, normals, semantics, instances, and other features (some features must be computed first via whu3d functions if they do not exist; use whu3d.list_attributes() to check the current attributes first).
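For instance (the scene id is an assumption, and the keyword names besides the documented sample_ratio are assumptions):

```python
# Visualize one scene colored by intensity, sampling more densely than
# the default sample_ratio = 0.01.
whu3d.vis(scenes=['0404'], color='intensity', sample_ratio=0.05)
```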

Alternatively, you can use the remote visualization function, which allows you to visualize the scene on your local machine when the script runs on a remote server.

Before running the remote_vis function on your remote machine, you should start another SSH connection to your remote machine and launch Open3D on your local machine.
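One way to establish that second connection is an SSH reverse tunnel so the remote process can reach the visualizer on your local machine; the port number here is an assumption (use whatever port the remote visualization function actually sends to).

```shell
# Run on your LOCAL machine: forward connections made to port 51454 on the
# remote server back to port 51454 on this machine (port is an assumption).
ssh -R 51454:localhost:51454 user@remote-server
```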

Images

Similarly, you could use the vis function to see a series of images of a specific scene.

BEV

[Will be available soon.]

Renderings

[Will be available soon.]

Labels

If you want to visualize the labels of semantics or instances, you must run the interprete_labels function first (please refer to the 'labels interpretation' section).

Export

Note that all export functions write to self.data_path by default; it is best not to change this if you want to load the data later via pywhu3d.

Export data

You can export whu3d data in other formats, including las, ply, numpy, pickle, h5py, and images, simply by using the corresponding export_[type] function.

If scenes is not specified, it will export all the scenes by default.
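For example (the exact method suffixes follow the export_[type] pattern described above; which suffix matches each format, and the scene ids, are assumptions):

```python
# Export every scene as PLY, then two specific scenes as HDF5.
whu3d.export_ply()
whu3d.export_h5(scenes=['0404', '0940'])
```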

Export labels

The export_labels function can export either raw labels or interpreted labels.
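For example (the 'interpreted' flag is an assumption — the documented behaviour is only that raw or interpreted labels can be exported):

```python
# Export the interpreted labels of all scenes (keyword name assumed).
whu3d.export_labels(interpreted=True)
```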

Export statistics

You can also export detailed statistics of the data and labels to an Excel file by using the export_statistics function.

For the export of metrics, you can refer to the 'Evaluation' section.

Custom export

You could use the export function to export a specified type of data.
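A hypothetical sketch; every argument name here is an assumption:

```python
# Export a specified type of data for selected scenes (signature assumed).
whu3d.export(scenes=['0404'], type='ply')
```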

Labels interpretation

You can use the interprete_labels function to merge similar categories and remap the labels to consecutive numbers (0, 1, 2, ...).

After applying this function, you can access the interpreted labels via whu3d.gt. For more information, use the get_label_map function to see the interpretation table.
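The remapping itself can be pictured with a small standalone sketch (this is not pywhu3d code, and the merge table is hypothetical): similar raw categories are merged, then the surviving categories are renumbered consecutively.

```python
# Standalone sketch of label interpretation: merge similar raw categories,
# then remap the survivors to consecutive ids 0, 1, 2, ...
raw_labels = [3, 7, 7, 12, 3, 12, 12]

# Hypothetical merge table: raw category id -> merged category name
merge = {3: 'car', 7: 'tree', 12: 'building'}
merged = [merge[l] for l in raw_labels]

# Assign consecutive ids in order of first appearance
id_map = {}
for name in merged:
    id_map.setdefault(name, len(id_map))

interpreted = [id_map[name] for name in merged]
print(interpreted)  # -> [0, 1, 1, 2, 0, 2, 2]
```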

Block division

If you want to divide a whole scene into rectangular blocks along the XY plane, you can use the save_divided_blocks function. This function saves the divided blocks directly into .h5 files.
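The division itself amounts to bucketing points by their block index on the XY plane; a standalone sketch (not pywhu3d code, block size chosen arbitrarily):

```python
# Standalone sketch of XY block division: group (x, y, z) points into
# square blocks of a given size on the XY plane.
from collections import defaultdict

def divide_blocks(points, block_size=10.0):
    """Group points by their (col, row) block index on the XY plane."""
    blocks = defaultdict(list)
    for x, y, z in points:
        key = (int(x // block_size), int(y // block_size))
        blocks[key].append((x, y, z))
    return dict(blocks)

pts = [(1.0, 2.0, 0.5), (11.0, 2.0, 0.7), (12.5, 3.0, 0.9)]
blocks = divide_blocks(pts, block_size=10.0)
print(sorted(blocks))  # -> [(0, 0), (1, 0)]
```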

Custom interpretation

If you want to use your own file to interpret the labels, follow these steps:

Step 1: Create label_interpretion.json. This file should include:

sem_no_list_ins excludes the categories that should not be interpreted as instances; sem_label_mapping specifies the mapping rules for semantic labels.
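For illustration, a minimal file might look like this (the two key names come from the description above; the structure of the values and all concrete numbers are hypothetical):

```json
{
  "sem_no_list_ins": [0, 1],
  "sem_label_mapping": {
    "3": 0,
    "7": 1,
    "12": 2
  }
}
```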

Step 2: Put the JSON file into the data root folder.

Step 3: Run the interprete_labels function.

Evaluation

The interpretation of predicted results should be consistent with that of the interpreted labels.

Semantic segmentation evaluation

You can also use the evaluation tool as in the 'Instance segmentation evaluation' section, just by replacing the instance results with semantic ones.

Instance segmentation evaluation

For instance segmentation evaluation, you should use our evaluation.Evaluator tool.

You could get metrics, including:

  • instance metrics: MUCov, MWCov, Pre, Rec, F1-score

  • semantic metrics: oAcc, mAcc, mIoU
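The semantic metrics above can be pictured with a standalone sketch (this is not the evaluation.Evaluator API): oAcc is the fraction of correctly labelled points, and mIoU averages per-class intersection-over-union.

```python
# Standalone sketch: compute oAcc and mIoU from predicted and ground-truth
# semantic labels (classes absent from both are skipped in the mean).
def semantic_metrics(pred, gt, num_classes):
    correct = sum(p == g for p, g in zip(pred, gt))
    oacc = correct / len(gt)
    ious = []
    for c in range(num_classes):
        tp = sum(p == c and g == c for p, g in zip(pred, gt))
        fp = sum(p == c and g != c for p, g in zip(pred, gt))
        fn = sum(p != c and g == c for p, g in zip(pred, gt))
        denom = tp + fp + fn
        if denom:
            ious.append(tp / denom)
    return oacc, sum(ious) / len(ious)

pred = [0, 0, 1, 1, 2, 2]
gt   = [0, 1, 1, 1, 2, 0]
oacc, miou = semantic_metrics(pred, gt, 3)
```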

You could also export evaluation results.

Custom dataset

You can also use the whu3d tool to adapt your own dataset so that all pywhu3d features are available, simply by using the format function.

After applying the format function, you can use all the features the whu3d tool provides, just as with the WHU3D dataset.

Demo

This is a demo for preprocessing the MLS dataset.
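A sketch of what such a preprocessing pipeline could look like, chaining the functions described in the sections above; the import path, data path, and argument values are assumptions — only the function names come from this document.

```python
# Hypothetical end-to-end MLS preprocessing sketch.
from pywhu3d.tool import WHU3D  # import path assumed

whu3d = WHU3D(data_root='/data/whu3d', data_type='mls', format='h5')

whu3d.interprete_labels()    # merge categories, remap labels to 0, 1, 2, ...
whu3d.compute_normals()      # add a 'normals' attribute
whu3d.list_attributes()      # check which attributes are now available
whu3d.save_divided_blocks()  # divide scenes into XY blocks, saved as .h5
```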

More

pywhu3d is a tool for managing the whu3d dataset, with limited ability to process it (e.g., segmentation). If you need more features for processing outdoor scene datasets, you could refer to [will soon be available]. For more details about our dataset, please refer to our website. Contact Xu Han (hanxuwhu[at]whu[dot]edu[dot]com or hanxu@whu3d.com) if you have any questions.

© 2023 WHU3D. All rights reserved.