opencda.core.sensing.perception package

Submodules

opencda.core.sensing.perception.o3d_lidar_libs module

Utility functions for 3D lidar visualization and processing using Open3D.

opencda.core.sensing.perception.o3d_lidar_libs.o3d_camera_lidar_fusion(objects, yolo_bbx, lidar_3d, projected_lidar, lidar_sensor)

Use the 3D lidar points to extend the 2D bounding box from the camera into a 3D bounding box in world coordinates.

Parameters
  • objects (dict) – The dictionary containing all object detection results.

  • yolo_bbx (torch.Tensor) – Object detection bounding boxes from YOLOv5 for the current image, shape (n, 5) -> (n, [x1, y1, x2, y2, label]).

  • lidar_3d (np.ndarray) – Raw 3D lidar points in lidar coordinate system.

  • projected_lidar (np.ndarray) – 3D lidar points projected to the camera space.

  • lidar_sensor (carla.sensor) – The lidar sensor.

Returns

objects – The updated object dictionary containing 3D bounding boxes.

Return type

dict
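The fusion idea in a minimal, hypothetical sketch (not the module's exact code): keep the lidar points whose camera-space projection falls inside a detected 2D box, then wrap them in an Open3D box.

    import numpy as np
    import open3d as o3d

    def lift_2d_box_to_3d(bbx_2d, lidar_3d, projected_lidar):
        # bbx_2d: [x1, y1, x2, y2] from the detector.
        # projected_lidar: (N, 3) points as (u, v, depth) in camera space,
        # row-aligned with the raw (N, 3) lidar_3d points.
        x1, y1, x2, y2 = bbx_2d
        u, v, depth = projected_lidar.T
        mask = (u >= x1) & (u <= x2) & (v >= y1) & (v <= y2) & (depth > 0)
        hits = lidar_3d[mask]
        if len(hits) == 0:
            return None
        # Axis-aligned 3D box enclosing the lidar hits inside the 2D box.
        return o3d.geometry.AxisAlignedBoundingBox.create_from_points(
            o3d.utility.Vector3dVector(hits))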

opencda.core.sensing.perception.o3d_lidar_libs.o3d_pointcloud_encode(raw_data, point_cloud)

Encode the raw point cloud (np.ndarray) into an Open3D PointCloud object.

Parameters
  • raw_data (np.ndarray) – Raw lidar points, (N, 4).

  • point_cloud (o3d.PointCloud) – Open3d PointCloud.
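A minimal sketch of such an encoding with the real Open3D API (the actual function may additionally color points by intensity):

    import numpy as np
    import open3d as o3d

    def encode_pointcloud(raw_data, point_cloud):
        # raw_data: (N, 4) lidar points as (x, y, z, intensity);
        # only xyz goes into the Open3D container.
        xyz = np.ascontiguousarray(raw_data[:, :3]).astype(np.float64)
        point_cloud.points = o3d.utility.Vector3dVector(xyz)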

opencda.core.sensing.perception.o3d_lidar_libs.o3d_visualizer_init(actor_id)

Initialize the visualizer.

Parameters

actor_id (int) – Ego vehicle’s id.

Returns

vis – Initialized Open3D visualizer.

Return type

o3d.visualizer
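A sketch of such an initialization with the real Open3D API (the window size here is an arbitrary choice):

    import open3d as o3d

    def init_visualizer(actor_id):
        vis = o3d.visualization.Visualizer()
        vis.create_window(window_name=str(actor_id), width=960, height=540)
        return vis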

opencda.core.sensing.perception.o3d_lidar_libs.o3d_visualizer_show(vis, count, point_cloud, objects)

Visualize the point cloud at runtime.

Parameters
  • vis (o3d.Visualizer) – Visualization interface.

  • count (int) – Current step since simulation started.

  • point_cloud (o3d.PointCloud) – Open3d point cloud.

  • objects (dict) – The dictionary containing objects.
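The usual Open3D runtime-update pattern such a function relies on (a sketch; the first-frame condition is an assumption, and the real function also renders the detected objects' boxes):

    import open3d as o3d

    def show_frame(vis, count, point_cloud):
        # Register the geometry on the first call, then update it in place.
        if count == 0:
            vis.add_geometry(point_cloud)
        else:
            vis.update_geometry(point_cloud)
        vis.poll_events()
        vis.update_renderer()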

opencda.core.sensing.perception.obstacle_vehicle module

Obstacle vehicle class to store object detection results.

class opencda.core.sensing.perception.obstacle_vehicle.BoundingBox(corners)

Bases: object

Bounding box class for obstacle vehicle.

Parameters

corners (np.ndarray) – Eight corners of the bounding box, shape (8, 3).

location

The location of the object.

Type

carla.Location

extent

The extent of the object.

Type

carla.Vector3D
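One plausible way to derive the two attributes from the eight corners (a sketch, not necessarily the class's exact computation):

    import numpy as np

    # Eight corners of an axis-aligned box, shape (8, 3).
    corners = np.array([[sx, sy, sz] for sx in (-2.0, 2.0)
                                     for sy in (-0.9, 0.9)
                                     for sz in (-0.7, 0.7)])
    location = corners.mean(axis=0)                           # box center
    extent = (corners.max(axis=0) - corners.min(axis=0)) / 2  # half-sizes per axis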

class opencda.core.sensing.perception.obstacle_vehicle.ObstacleVehicle(corners, o3d_bbx, vehicle=None, lidar=None)

Bases: object

A class for obstacle vehicles. The attributes are designed to match the carla.Vehicle class.

Parameters
  • corners (np.ndarray) – Eight corners of the bounding box, shape (8, 3).

  • o3d_bbx (open3d.AlignedBoundingBox) – The bounding box object in Open3D. This is mainly used for visualization.

  • vehicle (carla.Vehicle) – The carla.Vehicle object.

  • lidar (carla.sensor.lidar) – The lidar sensor.

bounding_box

Bounding box of the obstacle vehicle.

Type

BoundingBox

location

Location of the object.

Type

carla.Location

velocity

Velocity of the object vehicle.

Type

carla.Vector3D

carla_id

The obstacle vehicle’s id. It should be the same as the corresponding carla.Vehicle’s id. If no carla vehicle is matched with the obstacle vehicle, it is set to -1.

Type

int

get_location()

Return the location of the object vehicle.

get_transform()

Return the transform of the object vehicle.

get_velocity()

Return the velocity of the object vehicle.

set_carla_id(id)

Set the carla id according to the matched carla.Vehicle.

Parameters

id (int) – The id from the matched carla.Vehicle.

set_vehicle(vehicle, lidar)

Assign the attributes from carla.Vehicle to ObstacleVehicle.

Parameters
  • vehicle (carla.Vehicle) – The carla.Vehicle object.

  • lidar (carla.sensor.lidar) – The lidar sensor, used to project world coordinates into sensor coordinates.

set_velocity(velocity)

Set the velocity of the vehicle.

Parameters

velocity (carla.Vector3D) – The target velocity in 3d vector format.

class opencda.core.sensing.perception.obstacle_vehicle.StaticObstacle(corner, o3d_bbx)

Bases: object

The general class for static obstacles. Currently, we regard all static obstacles such as stop signs and traffic lights as the same class.

Parameters
  • corner (np.ndarray) – Eight corners of the bounding box, shape (8, 3).

  • o3d_bbx (open3d.AlignedBoundingBox) – The bounding box object in Open3D. This is mainly used for visualization.

bounding_box

Bounding box of the static obstacle.

Type

BoundingBox

opencda.core.sensing.perception.obstacle_vehicle.is_vehicle_cococlass(label)

Check whether the label belongs to the vehicle class according to the COCO dataset.

Parameters

label (int) – The label of the detected object.

Returns

is_vehicle – Whether this label belongs to the vehicle class.

Return type

bool
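A minimal sketch under the assumption that the 80-class COCO label map used by YOLOv5 applies; the exact set of indices the library checks is an assumption:

    # COCO indices commonly treated as vehicles: bicycle, car,
    # motorcycle, bus, truck (assumed label set).
    VEHICLE_COCO_LABELS = {1, 2, 3, 5, 7}

    def is_vehicle_cococlass_sketch(label):
        return int(label) in VEHICLE_COCO_LABELS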

opencda.core.sensing.perception.perception_manager module

Perception module

class opencda.core.sensing.perception.perception_manager.CameraSensor(vehicle, position='front')

Bases: object

Camera manager.

Parameters
  • vehicle (carla.Vehicle) – The carla.Vehicle. We need this class to spawn sensors.

  • position (str) – Indicates the camera’s mounting position. Options: front, left, right.

image

Current received rgb image.

Type

np.ndarray

sensor

The carla sensor that mounts at the vehicle.

Type

carla.sensor
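A sketch of what such a manager does with the real CARLA API (the mount transform is an assumption):

    import numpy as np
    import carla

    def spawn_rgb_camera(world, vehicle):
        bp = world.get_blueprint_library().find('sensor.camera.rgb')
        mount = carla.Transform(carla.Location(x=2.5, z=1.0))  # assumed position
        return world.spawn_actor(bp, mount, attach_to=vehicle)

    def to_rgb_array(image):
        # carla.Image.raw_data is a BGRA byte buffer; reshape and drop alpha.
        array = np.frombuffer(image.raw_data, dtype=np.uint8)
        return array.reshape((image.height, image.width, 4))[:, :, :3]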

class opencda.core.sensing.perception.perception_manager.LidarSensor(vehicle, config_yaml)

Bases: object

Lidar sensor manager.

Parameters
  • vehicle (carla.Vehicle) – The carla.Vehicle; we need this to spawn sensors.

  • config_yaml (dict) – Configuration dictionary for lidar.

o3d_pointcloud

Received point cloud, saved in o3d.PointCloud format.

Type

o3d object

sensor

Lidar sensor that will be attached to the vehicle.

Type

carla.sensor
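A sketch of spawning such a lidar with the real CARLA blueprint attributes (the config keys and the mount height are assumptions):

    import carla

    def spawn_lidar(world, vehicle, config_yaml):
        bp = world.get_blueprint_library().find('sensor.lidar.ray_cast')
        bp.set_attribute('channels', str(config_yaml['channels']))
        bp.set_attribute('range', str(config_yaml['range']))
        bp.set_attribute('points_per_second', str(config_yaml['points_per_second']))
        bp.set_attribute('rotation_frequency', str(config_yaml['rotation_frequency']))
        mount = carla.Transform(carla.Location(z=1.9))  # assumed mount height
        return world.spawn_actor(bp, mount, attach_to=vehicle)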

class opencda.core.sensing.perception.perception_manager.PerceptionManager(vehicle, config_yaml, ml_manager, data_dump=False)

Bases: object

Default perception module. Currently only used to detect vehicles.

Parameters
  • vehicle (carla.Vehicle) – The carla.Vehicle; we need this to spawn sensors.

  • config_yaml (dict) – Configuration dictionary for perception.

  • ml_manager (opencda object) – Shared ML library and models across all CAVs.

  • data_dump (bool) – Whether to dump data; if true, a semantic lidar will be spawned.

lidar

Lidar sensor manager.

Type

opencda object

rgb_camera

RGB camera manager.

Type

opencda object

o3d_vis

Open3d point cloud visualizer.

Type

o3d object

activate_mode(objects)

Use YOLOv5 + lidar fusion to detect objects.

Parameters

objects (dict) – The dictionary that contains all categories of detected objects. The key is the object category name and the value is its 3D coordinates and confidence.

Returns

objects – Updated object dictionary.

Return type

dict
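The flow this method describes, sketched with the module's own helpers (argument plumbing simplified; yolo_bbx is assumed to already be in the (n, 5) format documented above):

    from opencda.core.sensing.perception.o3d_lidar_libs import o3d_camera_lidar_fusion
    from opencda.core.sensing.perception.sensor_transformation import project_lidar_to_camera

    def fuse_camera_lidar(objects, yolo_bbx, lidar_points, rgb_image,
                          lidar_sensor, camera_sensor):
        # 1. Project raw lidar points into the camera plane.
        rgb_image, projected_lidar = project_lidar_to_camera(
            lidar_sensor, camera_sensor, lidar_points, rgb_image)
        # 2. Lift each 2D detection to a 3D box using the in-box lidar points.
        return o3d_camera_lidar_fusion(
            objects, yolo_bbx, lidar_points, projected_lidar, lidar_sensor)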

deactivate_mode(objects)

Object detection using server information directly.

Parameters

objects (dict) – The dictionary that contains all categories of detected objects. The key is the object category name and the value is its 3D coordinates and confidence.

Returns

objects – Updated object dictionary.

Return type

dict

destroy()

Destroy sensors.

detect(ego_pos)

Detect surrounding objects. Currently only vehicle detection is supported.

Parameters

ego_pos (carla.Transform) – Ego vehicle pose.

Returns

objects – A list that contains all detected obstacle vehicles.

Return type

list
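A hedged usage sketch (assumes a synchronous CARLA world, the ego vehicle, and a constructed PerceptionManager; all three names are hypothetical):

    world.tick()                                  # advance the simulation
    ego_pos = vehicle.get_transform()             # real carla.Vehicle API
    objects = perception_manager.detect(ego_pos)  # detected obstacle vehicles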

dist(v)

A fast method to retrieve the distance between an obstacle vehicle and the ego vehicle directly from the server.

Parameters

v (carla.Vehicle) – The obstacle vehicle.

Returns

distance – The distance between ego and the obstacle vehicle.

Return type

float
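One plausible implementation via the real carla.Location.distance API (the ego-pose attribute name is an assumption):

    def dist(self, v):
        # Euclidean distance in meters between the obstacle and the ego pose.
        return v.get_location().distance(self.ego_pos.location)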

filter_vehicle_out_sensor(vehicle_list)

By utilizing the semantic lidar, we can retrieve from the server the objects that are within the lidar’s detection range. This function is important for collecting training data for object detection, as it filters out objects outside the sensor range.

Parameters

vehicle_list (list) – The list containing all vehicle information retrieved from the server.

Returns

new_vehicle_list – The list with out-of-range vehicles filtered out.

Return type

list
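A sketch of the filtering idea: CARLA's semantic lidar detections carry the hit actor's id in the real object_idx field, so vehicles never hit by a ray can be dropped. The measurement argument is hypothetical:

    def filter_vehicle_out_sensor_sketch(vehicle_list, semantic_lidar_data):
        # Collect the actor ids actually seen by the semantic lidar.
        visible_ids = {detection.object_idx for detection in semantic_lidar_data}
        return [v for v in vehicle_list if v.id in visible_ids]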

speed_retrieve(objects)

We don’t implement any obstacle speed calculation algorithm. The speed will be retrieved from the server directly.

Parameters

objects (dict) – The dictionary containing the objects.
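The retrieval presumably reduces to the real carla.Vehicle.get_velocity API; a minimal sketch of reading a speed from the server:

    import math

    def speed_from_server(v):
        # get_velocity() returns a carla.Vector3D in m/s; convert to km/h.
        vel = v.get_velocity()
        return 3.6 * math.sqrt(vel.x ** 2 + vel.y ** 2 + vel.z ** 2)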

visualize_3d_bbx_front_camera(objects, rgb_image)

Visualize the 3D bounding box on the front camera image.

Parameters
  • objects (dict) – The object dictionary.

  • rgb_image (np.ndarray) – Received rgb image at current timestamp.

class opencda.core.sensing.perception.perception_manager.SemanticLidarSensor(vehicle, config_yaml)

Bases: object

Semantic lidar sensor manager. This class is used when data dumping is needed.

Parameters
  • vehicle (carla.Vehicle) – The carla.Vehicle; we need this to spawn sensors.

  • config_yaml (dict) – Configuration dictionary, the same format as for the normal lidar.

o3d_pointcloud

Received point cloud, saved in o3d.PointCloud format.

Type

o3d object

sensor

Lidar sensor that will be attached to the vehicle.

Type

carla.sensor

opencda.core.sensing.perception.sensor_transformation module

This script contains the transformations between world and different sensors.

opencda.core.sensing.perception.sensor_transformation.bbx_to_world(cords, vehicle)

Convert bounding box coordinates from the vehicle reference to the world reference.

Parameters
  • cords (np.ndarray) – Bounding box coordinates with 8 vertices, shape (8, 4).

  • vehicle (opencda object) – Opencda ObstacleVehicle.

Returns

bb_world_cords – Bounding box coordinates under world reference.

Return type

np.ndarray

opencda.core.sensing.perception.sensor_transformation.create_bb_points(vehicle)

Extract the eight vertices of the bounding box from the vehicle.

Parameters

vehicle (opencda object) – The OpenCDA ObstacleVehicle whose bounding box vertices are extracted.

Returns

bbx – 3D bounding box, shape (8, 4).

Return type

np.ndarray
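A sketch of the standard construction, assuming a carla-style extent with half-sizes x, y, z:

    import numpy as np

    def create_bb_points_sketch(extent):
        # Eight signed corner combinations in the vehicle's local frame,
        # padded to homogeneous (8, 4) coordinates.
        signs = np.array([[sx, sy, sz] for sx in (1, -1)
                                       for sy in (1, -1)
                                       for sz in (1, -1)], dtype=float)
        bbx = np.ones((8, 4))
        bbx[:, :3] = signs * np.array([extent.x, extent.y, extent.z])
        return bbx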

opencda.core.sensing.perception.sensor_transformation.get_2d_bb(vehicle, sensor, sensor_transform)

Summarize the 2D bounding box creation: build the 3D bounding box, project it to the sensor image, and reduce it to 2D.

Parameters
  • vehicle (carla.Vehicle) – Ego vehicle.

  • sensor (carla.sensor) – Carla sensor.

  • sensor_transform (carla.Transform) – Sensor position.

Returns

p2d_bb – 2D bounding box.

Return type

np.ndarray

opencda.core.sensing.perception.sensor_transformation.get_bounding_box(vehicle, camera, sensor_transform)

Get vehicle bounding box and project to sensor image.

Parameters
  • vehicle (carla.Vehicle) – Ego vehicle.

  • camera (carla.sensor) – Carla RGB camera spawned on the vehicle.

  • sensor_transform (carla.Transform) – Sensor position in the world.

Returns

camera_bbx – Bounding box coordinates in sensor image.

Return type

np.ndarray

opencda.core.sensing.perception.sensor_transformation.get_camera_intrinsic(sensor)

Retrieve the camera intrinsic matrix.

Parameters

sensor (carla.sensor) – Carla rgb camera object.

Returns

matrix_x – The camera intrinsic matrix.

Return type

np.ndarray
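In CARLA the intrinsic matrix is conventionally built from the camera blueprint’s image_size_x, image_size_y, and fov attributes; a standard sketch:

    import numpy as np

    def camera_intrinsic(image_w, image_h, fov_deg):
        # Pinhole model: focal length from the horizontal field of view.
        focal = image_w / (2.0 * np.tan(np.radians(fov_deg) / 2.0))
        K = np.identity(3)
        K[0, 0] = K[1, 1] = focal
        K[0, 2] = image_w / 2.0
        K[1, 2] = image_h / 2.0
        return K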

opencda.core.sensing.perception.sensor_transformation.p3d_to_p2d_bb(p3d_bb)

Derive the 2D bounding box from the 3D bounding box (8 vertices). The 2D bounding box is represented by its two extreme corner points.

Parameters

p3d_bb (np.ndarray) – The 3D bounding box to be projected to 2D.

Returns

p2d_bb – Projected 2d bounding box.

Return type

np.ndarray
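The reduction is a min/max over the projected vertices; a minimal sketch:

    import numpy as np

    def p3d_to_p2d_sketch(p3d_bb):
        # p3d_bb: (8, 2) projected vertices; keep the two extreme corners.
        x_min, y_min = p3d_bb[:, 0].min(), p3d_bb[:, 1].min()
        x_max, y_max = p3d_bb[:, 0].max(), p3d_bb[:, 1].max()
        return np.array([[x_min, y_min], [x_max, y_max]])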

opencda.core.sensing.perception.sensor_transformation.project_lidar_to_camera(lidar, camera, point_cloud, rgb_image)

Project lidar to camera space.

Parameters
  • lidar (carla.sensor) – Lidar sensor.

  • camera (carla.sensor) – RGB camera.

  • point_cloud (np.ndarray) – Point cloud, shape (n, 4).

  • rgb_image (np.ndarray) – RGB image from camera.

Returns

  • rgb_image (np.ndarray) – New rgb image with lidar points projected.

  • points_2d (np.ndarray) – Point cloud projected to camera space.
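A sketch of the overlay step implied by the first return value (the marker color and the (u, v, depth) column layout are assumptions):

    import numpy as np

    def draw_projected_points(rgb_image, points_2d):
        # Keep points that land inside the frame and in front of the camera.
        h, w = rgb_image.shape[:2]
        u = points_2d[:, 0].astype(int)
        v = points_2d[:, 1].astype(int)
        mask = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (points_2d[:, 2] > 0)
        rgb_image[v[mask], u[mask]] = (0, 255, 0)  # paint lidar hits green
        return rgb_image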

opencda.core.sensing.perception.sensor_transformation.sensor_to_world(cords, sensor_transform)

Project coordinates from the sensor reference to the world reference.

Parameters
  • cords (np.ndarray) – Coordinates under sensor reference.

  • sensor_transform (carla.Transform) – Sensor position in the world.

Returns

world_cords – Coordinates projected to world space.

Return type

np.ndarray

opencda.core.sensing.perception.sensor_transformation.vehicle_to_sensor(cords, vehicle, sensor_transform)

Transform coordinates from vehicle reference to sensor reference.

Parameters
  • cords (np.ndarray) – Coordinates under vehicle reference, shape (n, 4).

  • vehicle (opencda object) – OpenCDA ObstacleVehicle.

  • sensor_transform (carla.Transform) – Sensor position in the world.

Returns

sensor_cord – Coordinates in the sensor reference, shape (4, n).

Return type

np.ndarray

opencda.core.sensing.perception.sensor_transformation.world_to_sensor(cords, sensor_transform)

Transform coordinates from world reference to sensor reference.

Parameters
  • cords (np.ndarray) – Coordinates under world reference, shape: (4, n).

  • sensor_transform (carla.Transform) – Sensor position in the world.

Returns

sensor_cords – Coordinates in the sensor reference.

Return type

np.ndarray
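A minimal sketch built on the module’s own x_to_world_transformation (documented below): invert the sensor-to-world matrix and apply it to the homogeneous points.

    import numpy as np
    from opencda.core.sensing.perception.sensor_transformation import x_to_world_transformation

    def world_to_sensor_sketch(cords, sensor_transform):
        # cords: (4, n) homogeneous world coordinates.
        sensor_world_matrix = x_to_world_transformation(sensor_transform)
        return np.linalg.inv(sensor_world_matrix) @ cords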

opencda.core.sensing.perception.sensor_transformation.x_to_world_transformation(transform)

Get the transformation matrix from x (which can be a vehicle or a sensor) coordinates to world coordinates.

Parameters

transform (carla.Transform) – The transform that contains location and rotation.

Returns

matrix – The transformation matrix.

Return type

np.ndarray
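A sketch of the standard construction, matching the rigid-transform matrix used in CARLA’s client-side bounding box examples (rotation angles are in degrees):

    import numpy as np

    def x_to_world_sketch(transform):
        loc, rot = transform.location, transform.rotation
        cy, sy = np.cos(np.radians(rot.yaw)), np.sin(np.radians(rot.yaw))
        cp, sp = np.cos(np.radians(rot.pitch)), np.sin(np.radians(rot.pitch))
        cr, sr = np.cos(np.radians(rot.roll)), np.sin(np.radians(rot.roll))
        matrix = np.identity(4)
        matrix[:3, 3] = (loc.x, loc.y, loc.z)  # translation part
        matrix[0, :3] = (cp * cy, cy * sp * sr - sy * cr, -cy * sp * cr - sy * sr)
        matrix[1, :3] = (sy * cp, sy * sp * sr + cy * cr, -sy * sp * cr + cy * sr)
        matrix[2, :3] = (sp, -cp * sr, cp * cr)
        return matrix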

Module contents