Apollo 5.0 June 27, 2019
Apollo 5.0 Perception module introduced a few major features to provide diverse functionality, a more reliable platform and a more robust solution to enhance your AV performance. These include:
Safety alert
Apollo 5.0 does not support high-curvature roads or roads without lane lines, including local roads and intersections. The perception module is based on visual detection using a deep network trained on limited data. Therefore, until a better network is released, the driver should be careful while driving and always be ready to disengage autonomous driving mode by intervening (hitting the brakes or turning the steering wheel). While testing Apollo 5.0, please choose a route that avoids the unsupported conditions mentioned above and stay vigilant.
The flow chart of Apollo 5.0 Perception module:
To learn more about individual sub-modules, please visit Perception - Apollo 3.0
The Apollo platform's perception module previously depended on Caffe for its models; it now also supports PaddlePaddle, an open-source deep learning platform developed by Baidu. Some features include:
- To use the PaddlePaddle model for the Camera Obstacle Detector, set `camera_obstacle_perception_conf_file` to `obstacle_paddle.pt` in the following configuration file
- To use the PaddlePaddle model for the LiDAR Obstacle Detector, set `use_paddle` to `true` in the following configuration file
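Both switches are single key-value entries in protobuf text format. A minimal sketch of what the edited lines look like (the enclosing file paths are not reproduced here, so locate the referenced configuration files in your Apollo checkout):

```protobuf
# In the camera perception configuration file:
camera_obstacle_perception_conf_file : "obstacle_paddle.pt"

# In the LiDAR perception configuration file:
use_paddle : true
```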
Apollo currently offers a robust calibration service to support your calibration requirements from LiDARs to IMU to Cameras. This service is currently being offered to select partners only. If you would like to learn more about the calibration service, please reach out to us via email: apollopartner@baidu.com
In Apollo 5.0, Perception launched a manual camera calibration tool for camera extrinsic parameters. This tool is simple, reliable, and user-friendly. It comes equipped with a visualizer, and the calibration can be performed using your keyboard. It helps estimate the camera's orientation (pitch, yaw, roll), and it provides a vanishing point, horizon, and top-down view as guidelines. Users adjust the three angles to align the horizon and make the lane lines parallel.
The process of manual calibration can be seen below:
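The three angles determine a camera rotation matrix. Below is a minimal sketch of composing one from pitch, yaw, and roll, assuming a yaw–pitch–roll (Z–Y–X) Euler convention; the convention the Apollo tool actually uses may differ:

```python
import math

def rot_x(a):
    """Rotation about the x-axis (roll)."""
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    """Rotation about the y-axis (pitch)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    """Rotation about the z-axis (yaw)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def extrinsic_rotation(pitch, yaw, roll):
    # Z-Y-X composition is an assumption, not Apollo's documented convention.
    return matmul(rot_z(yaw), matmul(rot_y(pitch), rot_x(roll)))
```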
CIPO (Closest In-Path Object) detection identifies the key objects on the road for longitudinal control. It uses the object and ego-lane-line detection output, and creates a virtual ego lane line from the vehicle's ego-motion prediction. Any vehicle model, including the sphere model, bicycle model, and 4-wheel tire model, can be used for ego-motion prediction. Based on the chosen vehicle model, the translational velocity and angular velocity determine the length and curvature of the pseudo lanes. Some examples of CIPO using pseudo lane lines can be seen below:
CIPO used for curved roads
CIPO for a street with no lane lines
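Under a simple constant-velocity, constant-yaw-rate motion model, the pseudo lane's curvature is the yaw rate divided by the speed, and its centerline is a circular arc. A minimal sketch under that assumption (the function name, preview horizon, and sampling are illustrative, not Apollo's implementation):

```python
import math

def pseudo_lane(v, omega, horizon_s=3.0, n=10):
    """Sample centerline points of a virtual ego lane ahead of the vehicle.

    v       -- forward speed [m/s]
    omega   -- yaw rate [rad/s]
    Returns (curvature [1/m], list of (x, y) points in the ego frame).
    Constant-velocity circular-arc motion is an assumption.
    """
    kappa = omega / v          # curvature of the arc
    length = v * horizon_s     # preview length of the pseudo lane
    pts = []
    for i in range(1, n + 1):
        s = length * i / n     # arc length travelled
        theta = kappa * s      # heading change after arc length s
        if abs(kappa) < 1e-9:  # straight-line limit
            x, y = s, 0.0
        else:
            x = math.sin(theta) / kappa
            y = (1 - math.cos(theta)) / kappa
        pts.append((x, y))
    return kappa, pts
```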
In Apollo 5.0, an additional network branch is attached to the end of the lane encoder to detect the vanishing point. This branch is composed of convolutional layers, which translate the lane features for the vanishing-point task, and fully connected layers, which summarize the whole image to output the vanishing point location. Instead of outputting the `(x, y)` coordinates directly, the network outputs `(dx, dy)`: the distances from the image center along the x and y axes. This branch is trained separately on pre-trained lane features, with the weights of the lane-line network kept fixed. The flow diagram is included below; the red color denotes the flow of the vanishing-point detection algorithm.
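Recovering the pixel location from the center offsets is a direct shift. A minimal sketch, assuming pixel units and the geometric image center (both assumptions, not Apollo's documented convention):

```python
def vp_from_offsets(dx, dy, image_w, image_h):
    """Convert the network's (dx, dy) image-center offsets back to
    vanishing-point pixel coordinates (x, y)."""
    cx, cy = image_w / 2.0, image_h / 2.0  # assumed image center
    return cx + dx, cy + dy
```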
Two challenging visual examples of our vanishing point detection with lane network output are shown below:
The case where the vanishing point can still be detected despite an obstacle blocking the view:
The case of a turning road with altitude changes:
Estimating `(dx, dy)` rather than `(x, y)` reduces the search space.
The input to the Planning and Control modules will be quite different from that of the previous LiDAR-based system in Apollo 3.0.
Lane line output
Object output
The world coordinate system is used as the 3D ego coordinate system, with the center of the rear axle as the origin.
If you want to try our perception modules and their associated visualizer, please refer to the following document