Dong Li committed on 2018-07-27 19:21: build: use a general USE_GPU compile flag

How to Run the Fusion Obstacle Visualization Tool

Apollo created the LiDAR Obstacle Visualization Tool, an offline visualization tool that shows LiDAR-based obstacle perception results (see How to Run Offline Perception Visualizer). However, that tool cannot visualize the radar-based obstacle perception results or the fusion results based on the two sensors.

Apollo has developed a second visualization tool, the Fusion Obstacle Visualization Tool, to complement the LiDAR Obstacle Visualization Tool. The Fusion Obstacle Visualization Tool shows obstacle perception results from these modules:

  • LiDAR-based algorithm module
  • Radar-based algorithm module
  • Fusion algorithm module for debugging and testing the complete obstacle perception algorithms

All of the visualization is built on top of the LiDAR data visualization, because the source data from LiDAR (a set of 3D points outlining each object in the scene) depicts the visual features of the entire scene better than radar data does. Go to the Apollo web site to see the demo videos for the Fusion Obstacle Visualization Tool.

In general, you follow three steps to build and run the Fusion Obstacle Visualization Tool in Docker:

  1. Prepare the source data.
  2. Build the Fusion Obstacle Visualization Tool.
  3. Run the tool.

The next three sections provide the details for each of the three steps.

Prepare the Source Data

Before running the Fusion Obstacle Visualization Tool, you need to prepare the following source data:

LiDAR-Based Obstacle Perception
  • Point Cloud Data (PCD) file
  • Host vehicle pose
Radar-Based Obstacle Perception
  • Radar source obstacle data in the protocol buffers (protobuf) format
  • Host vehicle pose
  • Host vehicle velocity

To facilitate the data extraction, Apollo provides a tool named export_sensor_data to export the data from a ROS bag.

Steps

  1. Build the data exporter using these commands:
cd /apollo
bazel build //modules/perception/tool/export_sensor_data:export_sensor_data
  2. Run the data exporter using this command:
/apollo/bazel-bin/modules/perception/tool/export_sensor_data/export_sensor_data
  3. Play the ROS bag.

The default directory of the ROS bag is /apollo/data/bag. In the following example, the file name of the ROS bag is example.bag.

Use these commands:

cd /apollo/data/bag
rosbag play --clock example.bag --rate=0.1

To ensure that you do not miss any frame data in the ROS message callbacks, it is recommended that you reduce the playback rate; it is set to 0.1 in the example above.
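A reduced rate multiplies the wall-clock playback time accordingly. As a quick sanity check (the bag length of 120 seconds here is a made-up example value):

```shell
# Estimate wall-clock playback time for a given bag length and rate.
bag_length=120   # seconds of data in the bag (hypothetical example value)
rate=0.1         # the --rate passed to rosbag play
awk -v len="$bag_length" -v r="$rate" \
    'BEGIN { printf "playback takes about %.0f seconds\n", len / r }'
```

So at rate 0.1 a two-minute bag takes about twenty minutes to replay, which is the price of not dropping frames.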

When you play the bag, all data files are dumped to the export directory, using the timestamp as the file name, frame by frame.

The default LiDAR data export directory is /apollo/data/lidar.

The radar directory is /apollo/data/radar.

The directories can be defined in /apollo/modules/perception/tool/export_sensor_data/conf/export_sensor_data.flag using the flags lidar_path and radar_path.
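As a concrete example, a flag file overriding the export directories would contain gflags-style lines like the following (the two flag names are the ones given above; the paths shown are simply the defaults):

```
--lidar_path=/apollo/data/lidar
--radar_path=/apollo/data/radar
```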

In the lidar_path, two types of files are generated, with the suffixes .pcd and .pose.

In the radar_path, three types of files are generated with the suffixes .radar, .pose, and .velocity.
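Since each frame's files share the same timestamp stem, a quick consistency check after an export run is to confirm that every .pcd has a matching .pose. The sketch below is illustrative and not part of Apollo; check_pairs is a hypothetical helper, and in practice you would point it at your actual lidar_path instead of the demo directory:

```shell
#!/bin/bash
# Sketch: verify that every exported LiDAR frame has both a .pcd and a
# matching .pose file (same timestamp stem). check_pairs is an illustrative
# helper, not part of Apollo; point it at your actual lidar_path.
check_pairs() {
  local dir=$1 missing=0 pcd stem
  for pcd in "$dir"/*.pcd; do
    [ -e "$pcd" ] || break          # glob did not match: no .pcd files at all
    stem=${pcd%.pcd}
    if [ ! -f "$stem.pose" ]; then
      echo "missing pose for $(basename "$pcd")"
      missing=$((missing + 1))
    fi
  done
  echo "$missing frame(s) missing a pose file"
}

# Demo on a throwaway directory; in practice use /apollo/data/lidar.
demo=$(mktemp -d)
touch "$demo/100.5.pcd" "$demo/100.5.pose" "$demo/100.6.pcd"
check_pairs "$demo"
rm -rf "$demo"
```

A missing .pose usually means the bag was played too fast and a callback was skipped; replaying at a lower rate, as described above, is the fix.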

Build the Fusion Obstacle Visualization Tool

Apollo uses the Bazel tool to build the Fusion Obstacle Visualization Tool.

  1. Build the Fusion Obstacle Visualization Tool using these commands:
cd /apollo
bazel build -c opt //modules/perception/tool/offline_visualizer_tool:offline_sequential_obstacle_perception_test

The -c opt option is used to build the program with optimized performance, which is important for the offline simulation and visualization of the perception module in real time.

  2. (Optional) If you want to run the perception module on the GPU, use this command:
bazel build -c opt --cxxopt=-DUSE_GPU //modules/perception/tool/offline_visualizer_tool:offline_sequential_obstacle_perception_test

Run the Tool

Before running the Fusion Obstacle Visualization Tool, you can set up the source data directories and the algorithm module settings in the configuration file: /apollo/modules/perception/tool/offline_visualizer_tool/conf/offline_sequential_obstacle_perception_test.flag.

The default source data directories are /apollo/data/lidar and /apollo/data/radar for lidar_path and radar_path, respectively.

By default, the Boolean flag that enables visualization is set to true, and the obstacle result type to be shown is fused (the fused obstacle results based on both the LiDAR and radar sensors). You can change fused to lidar or radar to visualize the pure obstacle results generated by the single-sensor-based obstacle perception.
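For reference, the relevant portion of the flag file might look like the fragment below. The lidar_path and radar_path names match the exporter flags mentioned above, but the names of the visualization and result-type flags here are guesses for illustration only; check the actual .flag file for the exact names:

```
--lidar_path=/apollo/data/lidar
--radar_path=/apollo/data/radar
# The next two flag names are hypothetical; verify them in the actual file.
--enable_visualization=true
# Change fused to lidar or radar to show single-sensor results:
--obstacle_result_type=fused
```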

Run the Fusion Obstacle Visualization Tool using this command:

/apollo/bazel-bin/modules/perception/tool/offline_visualizer_tool/offline_sequential_obstacle_perception_test

You see results such as:

  • A pop-up window showing the perception result with the point cloud, frame-by-frame
  • The raw point cloud shown in grey
  • Bounding boxes (with red arrows indicating the headings) around the detected obstacles:
    • Cars (green)
    • Pedestrians (pink)
    • Cyclists (blue)
    • Unknown elements (purple)