Steps to Load LiDAR Data in the Tool (Using NuScenes)


Pre-requisite:

All uploads must grant full control to the bucket owner. Refer to the command below as an example:

aws s3 cp --recursive . s3://{{client-dedicated-bucket}}/ --acl bucket-owner-full-control

Step 1: Convert Point Cloud Files to .las Format

The input point cloud files are in .bin format. Convert all point cloud files to .las, as .las is the format supported by the iMerit tool.

  • .bin to .pcd conversion code for the NuScenes dataset:

https://forum.nuscenes.org/t/how-do-i-convert-nuscenes-lidar-data-to-pcd-file/785/2

  • Example command to convert a .pcd file to .las using CloudCompare:

cloudcompare.CloudCompare -SILENT -O <filename>.pcd -APPLY_TRANS trans.txt -C_EXPORT_FMT LAS -SAVE_CLOUDS FILE <filename>.las

In the above command, -APPLY_TRANS trans.txt is an optional parameter, needed only if a transformation must be applied to the point cloud.

Sample trans.txt (a 4x4 transformation matrix):

-0.7147231213 0.6993243435 0.007369234 -37955.512552122
-0.6993234435 -0.714738245 0.015186255 123474.421875121
0.01588712233 0.005701212 0.999858342 959.7973021222
0 0 0 1

Step 2: Merge Multi-Sensor Data

The input includes a .bin file for lidar and a .pcd file for radar. Since neither is in .las format, convert both files to .las and merge them into a single .las file, using PointSourceID to distinguish the sensors so that an annotator can toggle the points in the tool.
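The merge step can be sketched as follows. This is an illustration only: the point lists, the PointSourceID values (1 for lidar, 2 for radar), and the function name are assumptions, and a real pipeline would write the merged result with a LAS library such as laspy:

```python
def merge_with_source_id(lidar_points, radar_points):
    """Merge two point lists into one, tagging each point with a
    PointSourceID so annotators can toggle sensors in the tool.

    Each input is a list of (x, y, z) tuples; the output is a list of
    (x, y, z, point_source_id) tuples.
    """
    merged = []
    for x, y, z in lidar_points:
        merged.append((x, y, z, 1))  # PointSourceID 1 -> lidar (assumed label)
    for x, y, z in radar_points:
        merged.append((x, y, z, 2))  # PointSourceID 2 -> radar (assumed label)
    return merged
```

The key design point is that PointSourceID is a per-point attribute in the LAS format, so one merged file can still be filtered by originating sensor.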

Step 3: Upload Corresponding Image Files

If there are corresponding image files for the point cloud files, ensure they are uploaded with the same base name as the .las file. The supported image file formats are:

  • .jpg

  • .png

For example, if the point cloud file is named

00000-ca9a282c9e77460f8360f564131a8af5.las

the corresponding image file should have the same base name with the appropriate extension:

00000-ca9a282c9e77460f8360f564131a8af5.jpg or 00000-ca9a282c9e77460f8360f564131a8af5.png

For multiple camera sensors

  1. Keep each camera sensor's image folder separate; do not merge all images into one folder.

  2. Each image within a folder should correspond to the .las filename following the above naming convention.

├── CAM_BACK
│   ├── 00000-ca9a282c9e77460f8360f564131a8af5.jpg
│   └── 00001-39586f9d59004284a7114a68825e8eec.jpg
├── CAM_BACK_LEFT
│   ├── 00000-ca9a282c9e77460f8360f564131a8af5.jpg
│   └── 00001-39586f9d59004284a7114a68825e8eec.jpg
├── CAM_FRONT
│   ├── 00000-ca9a282c9e77460f8360f564131a8af5.jpg
│   └── 00001-39586f9d59004284a7114a68825e8eec.jpg
├── CAM_BACK_RIGHT
│   ├── 00000-ca9a282c9e77460f8360f564131a8af5.jpg
│   └── 00001-39586f9d59004284a7114a68825e8eec.jpg
├── CAM_FRONT_LEFT
│   ├── 00000-ca9a282c9e77460f8360f564131a8af5.jpg
│   └── 00001-39586f9d59004284a7114a68825e8eec.jpg
├── CAM_FRONT_RIGHT
│   ├── 00000-ca9a282c9e77460f8360f564131a8af5.jpg
│   └── 00001-39586f9d59004284a7114a68825e8eec.jpg

Step 4: Create Calibration File

If extrinsic and intrinsic information is available for the image files, create a folder named calibration and, within it, a calibration.json file. The tool reads this file to enable calibration-related features. If the calibration information is not the same for all frames, create a separate JSON file for each frame within the calibration folder, named after the corresponding point cloud file.

Case 1 - Calibration Information is the same for all frames

├── calibration

│ ├── calibration.json

Case 2 - Calibration Information differs across frames

├── calibration

│ ├── 00000-ca9a282c9e77460f8360f564131a8af5.json

│ ├── 00001-39586f9d59004284a7114a68825e8eec.json

Algorithm: Compute Calibration Matrix for Old Format ONLY

Input:

- rotation and translation (cam_extrinsic)

- camera intrinsic Matrix (cam_intrinsic)

Output:

- Calibration Matrix (calibration_matrix)

Note: The translation and rotation parameters are given with respect to the ego vehicle body frame.

Compute the calibration matrix by multiplying the cam_intrinsic matrix with the inverse of cam_extrinsic:

  • calibration_matrix = cam_intrinsic * inverse(cam_extrinsic)
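A minimal sketch of this computation, assuming cam_extrinsic is a rigid 4x4 pose [R | t] (so its inverse is [Rᵀ | -Rᵀt]) and cam_intrinsic is a 3x3 K matrix padded with a zero column so the product is a 3x4 projection matrix:

```python
def invert_extrinsic(ext):
    """Invert a rigid 4x4 transform [R | t] analytically: [R^T | -R^T t]."""
    R = [row[:3] for row in ext[:3]]
    t = [row[3] for row in ext[:3]]
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]              # R^T
    tt = [-sum(Rt[i][k] * t[k] for k in range(3)) for i in range(3)]  # -R^T t
    return [Rt[0] + [tt[0]], Rt[1] + [tt[1]], Rt[2] + [tt[2]], [0, 0, 0, 1]]

def calibration_matrix(cam_intrinsic, cam_extrinsic):
    """calibration_matrix = cam_intrinsic (padded to 3x4) * inverse(cam_extrinsic)."""
    K = [row + [0.0] for row in cam_intrinsic]  # pad 3x3 K to 3x4
    inv = invert_extrinsic(cam_extrinsic)
    return [[sum(K[i][k] * inv[k][j] for k in range(4)) for j in range(4)]
            for i in range(3)]
```

The analytic inverse avoids a general matrix-inversion routine and is exact for rigid transforms.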

Calibration Format

calibration.json (OLD) format:

calibration.json (NEW) format:

Note: The calibration file format above is for a single camera. For multiple cameras, each camera object has the same structure as CAM_FRONT. Extrinsic elements form a 4x4 matrix in column-major order.

Step 5: Upload Ego data for calculating Vehicle Velocity

To calculate the reference velocity of objects around the ego vehicle, ego data for each frame should be provided with the dataset and placed in the ego_data folder (this exact folder name is required for the tool to read it). Each file must include a "timestamp_epoch_ns" field: the timestamp at which the frame was captured, expressed as a Unix epoch timestamp in nanoseconds (ns). This timestamp is used to calculate the velocity of objects around the ego vehicle. The name of each ego data file must match the point cloud file, e.g.,

00000-ca9a282c9e77460f8360f564131a8af5.json

├── ego_data

│ ├── 00000-ca9a282c9e77460f8360f564131a8af5.json

Step 6: Compute and Upload Ego Vehicle for Merge Point Cloud

To enable the merge point cloud functionality, ego data for each frame is either calculated using the Iterative Closest Point (ICP) registration algorithm or provided with the dataset, and must be placed in the ego_data folder (this exact folder name is required for the tool to read it). The name of each ego data file must match the point cloud file.

Lidar points must be in the ego/sensor coordinate system.

├── ego_data

│ ├── 00000-ca9a282c9e77460f8360f564131a8af5.json

The fields of a sample ego data file are described below.

utmX_m: The distance the ego vehicle has moved along the x-axis (in meters) with respect to the 1st frame.

utmY_m: The distance the ego vehicle has moved along the y-axis (in meters) with respect to the 1st frame.

utmZ_m: The distance the ego vehicle has moved along the z-axis (in meters) with respect to the 1st frame.

utmHeading_deg: Angle of rotation about the Z-axis (yaw), in degrees.

transformationMatrix: Displacement of the current frame's lidar coordinate system from the reference frame's lidar coordinate system. The matrix is in row-major format.

Element order:

[ R11, R12, R13, Tx,
  R21, R22, R23, Ty,
  R31, R32, R33, Tz,
  0,   0,   0,   1 ]

If the complete transformationMatrix is available, utmX_m, utmY_m, utmZ_m, and utmHeading_deg are ignored; hence in the sample those values are set to zero.

timestamp_epoch_ns: Timestamp of the frame capture, in nanoseconds.
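As an illustration of how these fields combine, the ego speed between two consecutive frames follows directly from the utm displacements and timestamp_epoch_ns. The field names mirror the ego data description above; the frame values and the function itself are hypothetical, not the tool's internal implementation:

```python
def ego_speed_mps(frame_a, frame_b):
    """Estimate ego speed in m/s between two consecutive ego_data frames."""
    dx = frame_b["utmX_m"] - frame_a["utmX_m"]
    dy = frame_b["utmY_m"] - frame_a["utmY_m"]
    dz = frame_b["utmZ_m"] - frame_a["utmZ_m"]
    # timestamp_epoch_ns is in nanoseconds; convert the delta to seconds.
    dt_s = (frame_b["timestamp_epoch_ns"] - frame_a["timestamp_epoch_ns"]) / 1e9
    return (dx * dx + dy * dy + dz * dz) ** 0.5 / dt_s
```

The same displacement-over-time idea extends to annotated objects once their positions are known per frame.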

Please reach out to iMerit to help upload the data to facilitate the Merge Point Cloud feature.

If ego data is present for each point cloud, the tool shows a Merged PC option at the top and users can use the merged point cloud feature. Otherwise, the Merged PC option is not visible.

Naming convention

  1. The folder containing calibration information must be named ‘calibration’.

  2. The ego data folder must be named ‘ego_data’.

  3. The folder containing the .las files should preferably be named ‘lidar’.

  4. There is no specific naming convention for image folders.

  5. All file names associated with a given .las file should share the same base name.

Now we have all data (las, camera folders, ego_data, calibration) for all frames. A mapping file is provided to order the frames chronologically. Based on it, a zero-padded prefix (00000, 00001, 00002, ...) is added to each frame name throughout the dataset so the tool loads the data chronologically. The 1st frame is therefore 00000-ca9a282c9e77460f8360f564131a8af5.las, followed by 00001-39586f9d59004284a7114a68825e8eec.las, and so on.
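The padding step can be sketched as follows (the token list in the example is hypothetical; the real order comes from the mapping file):

```python
def padded_names(ordered_tokens, ext=".las"):
    """Prefix each frame token with a 5-digit, zero-padded chronological
    index so lexicographic order matches frame order."""
    return ["%05d-%s%s" % (i, token, ext)
            for i, token in enumerate(ordered_tokens)]
```

The same prefix must be applied consistently across the lidar, camera, ego_data, and calibration files for each frame.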

Folder Structure
