
Detecting Objects in Point Clouds with NVIDIA CUDA-PointPillars

Use long-range and high-precision data sets to achieve 3D object detection for perception, mapping, and localization algorithms.

A point cloud is a data set of points in a coordinate system. Points contain a wealth of information, including three-dimensional coordinates X, Y, Z; color; classification value; intensity value; and time. Point clouds mostly come from lidars that are commonly used in various NVIDIA Jetson use cases, such as autonomous machines, perception modules, and 3D modeling.
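The per-point attributes listed above can be sketched as a simple struct. The field names and layout below are illustrative only; CUDA-PointPillars itself consumes just four floats per point (x, y, z, intensity).

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical layout of a single lidar return. Only x, y, z, and
// intensity are used by CUDA-PointPillars; the other fields show the
// kinds of extra attributes a point cloud can carry.
struct LidarPoint {
    float x, y, z;          // 3D coordinates (meters)
    float intensity;        // return strength, often normalized to [0, 1]
    uint8_t classification; // semantic class value assigned upstream
    double timestamp;       // capture time (seconds)
};
```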

One of the key applications is to leverage long-range and high-precision data sets to achieve 3D object detection for perception, mapping, and localization algorithms.

PointPillars is one of the most common models used for point cloud inference. This post discusses an NVIDIA CUDA-accelerated PointPillars model for Jetson developers. Download the CUDA-PointPillars model today.

What is CUDA-PointPillars?

In this post, we introduce CUDA-PointPillars, which detects objects in point clouds. The pipeline has four stages:

  • Base preprocessing: Generates pillars.
  • Preprocessing: Generates BEV feature maps (10 channels).
  • ONNX model for TensorRT: An ONNX model that can be consumed by TensorRT.
  • Post-processing: Generates bounding boxes by parsing the output of the TensorRT engine.
Figure 1. Pipeline of CUDA-PointPillars

Base preprocessing

The base preprocessing step converts the raw point cloud into base feature maps. It produces the following outputs:

  • Base feature maps: Feature maps (four channels) for each pillar.
  • Pillar coordinates: The coordinates of each pillar.
  • Parameters: The number of pillars.
Figure 2. Converting point clouds into base feature maps

Preprocessing

The preprocessing step converts the base feature maps (four channels) into BEV feature maps (10 channels).

Figure 3. Converting base feature maps into BEV feature maps
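The 4-to-10 channel expansion can be sketched as follows. The channel layout is an assumption based on the PointPillars decorated-feature scheme: the four base features, plus each point's offset from the pillar's point centroid (3 channels) and from the pillar's geometric center (3 channels).

```cpp
#include <array>
#include <cassert>
#include <vector>

struct Point { float x, y, z, intensity; };

// Expand one pillar's points from 4 to 10 features per point.
// Assumed layout: [x, y, z, intensity,
//                  dx_mean, dy_mean, dz_mean,   // offset from point centroid
//                  dx_ctr,  dy_ctr,  dz_ctr]    // offset from pillar center
std::vector<std::array<float, 10>> augment(const std::vector<Point>& pts,
                                           float cx, float cy, float cz) {
    float mx = 0, my = 0, mz = 0;
    for (const Point& p : pts) { mx += p.x; my += p.y; mz += p.z; }
    mx /= pts.size(); my /= pts.size(); mz /= pts.size();

    std::vector<std::array<float, 10>> out;
    for (const Point& p : pts)
        out.push_back({p.x, p.y, p.z, p.intensity,
                       p.x - mx, p.y - my, p.z - mz,
                       p.x - cx, p.y - cy, p.z - cz});
    return out;
}
```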

ONNX model for TensorRT

The native PointPillars model from OpenPCDet was modified for the following reasons:

  • It contains too many small operations with low memory-bandwidth utilization.
  • Some operations, like NonZero, are not supported by TensorRT.
  • Some operations, like ScatterND, have low performance.
  • The model uses “dict” as input and output, which prevents ONNX export.

To export ONNX from native OpenPCDet, we modified the model (Figure 4).

Figure 4. Overview of the ONNX model in CUDA-PointPillars

You can divide the whole ONNX file into the following parts:

  • Inputs: BEV feature maps, pillar coordinates, parameters. These are all generated in preprocessing.
  • Outputs: Class, Box, Dir_class. These are parsed by post-processing to generate a bounding box.
  • ScatterBEV: Converts the point pillars (1D) into a 2D image; implemented as a TensorRT plug-in.
  • Other operations: Supported natively by TensorRT.
Figure 5. Scattering point pillar data into a 2D image for the 2D backbone.

Post-processing

The post-processing step parses the output of the TensorRT engine (class, box, and dir_class) and outputs bounding boxes. Figure 6 shows example parameters.

Figure 6. Parameters of a bounding box.
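A sketch of the decoded detection and a simple confidence filter over it. The field names mirror the parameters in Figure 6 and common PointPillars conventions (center position, size, heading angle, class id, score), but are not necessarily the exact struct used inside CUDA-PointPillars.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Assumed layout of one decoded detection.
struct Bndbox {
    float x, y, z;  // box center in lidar coordinates (meters)
    float w, l, h;  // width, length, height (meters)
    float rt;       // heading (yaw) angle in radians
    int id;         // class index (e.g., car, pedestrian, cyclist)
    float score;    // detection confidence
};

// Keep only boxes above a confidence threshold, highest score first.
// (Real post-processing also applies NMS across overlapping boxes.)
std::vector<Bndbox> filterByScore(std::vector<Bndbox> boxes, float thresh) {
    boxes.erase(std::remove_if(boxes.begin(), boxes.end(),
                               [&](const Bndbox& b) { return b.score < thresh; }),
                boxes.end());
    std::sort(boxes.begin(), boxes.end(),
              [](const Bndbox& a, const Bndbox& b) { return a.score > b.score; });
    return boxes;
}
```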

Using CUDA-PointPillars

To use CUDA-PointPillars, provide the ONNX model file and the data buffer for the point clouds:

    std::vector<Bndbox> nms_pred;                          // detections after NMS
    PointPillar pointpillar(ONNXModel_File, cuda_stream);  // build/load the TensorRT engine
    pointpillar.doinfer(points_data, points_count, nms_pred);

Converting a native model trained by OpenPCDet into an ONNX file for CUDA-PointPillars

In our project, we provide a Python script that can convert a native model trained by OpenPCDet into an ONNX file for CUDA-PointPillars. Find the exporter.py script in the /tool directory of CUDA-PointPillars.

To get a pointpillar.onnx file in the current directory, run the following command:

$ python exporter.py --ckpt ./*.pth

Performance

The table shows the test environment and performance. Boost the CPU and GPU clocks before testing.

  • Jetson: Xavier NVIDIA AGX 8GB
  • Release: NVIDIA JetPack 4.5
  • CUDA: 10.2
  • TensorRT: 7.1.3
  • Inference time: 33 ms
Table 1. Test platform and performance

Get started with CUDA-PointPillars

In this post, we showed you what CUDA-PointPillars is and how to use it to detect objects in point clouds.

Because native OpenPCDet cannot export ONNX files and contains many small operations that perform poorly with TensorRT, we developed CUDA-PointPillars. This application exports native models trained by OpenPCDet to a specialized ONNX model and runs inference on that ONNX model with TensorRT.

Download CUDA-PointPillars today.
