omni.isaac.lab.sensors.patterns#
Sub-module for ray-casting patterns used by the ray-caster.
Classes
PatternBaseCfg – Base configuration for a pattern.
GridPatternCfg – Configuration for the grid pattern for ray-casting.
PinholeCameraPatternCfg – Configuration for a pinhole camera depth image pattern for ray-casting.
BpearlPatternCfg – Configuration for the Bpearl pattern for ray-casting.
LidarPatternCfg – Configuration for the LiDAR pattern for ray-casting.
Pattern Base#
- class omni.isaac.lab.sensors.patterns.PatternBaseCfg#
Base configuration for a pattern.
Attributes:
func – Function to generate the pattern.
- func: Callable[[PatternBaseCfg, str], tuple[torch.Tensor, torch.Tensor]]#
Function to generate the pattern.
The function should take in the configuration and the device name as arguments. It should return the pattern’s starting positions and directions as a tuple of torch.Tensor.
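As an illustration, a user-defined pattern function matching this interface might look like the following sketch; the single downward ray is a hypothetical layout, not a pattern shipped with the module.

```python
import torch

from omni.isaac.lab.sensors.patterns import PatternBaseCfg


def single_ray_pattern(cfg: PatternBaseCfg, device: str) -> tuple[torch.Tensor, torch.Tensor]:
    """Hypothetical pattern: one ray cast straight down from the sensor origin."""
    ray_starts = torch.zeros(1, 3, device=device)                     # ray origin at the sensor frame origin
    ray_directions = torch.tensor([[0.0, 0.0, -1.0]], device=device)  # unit direction along -z
    return ray_starts, ray_directions
```

Such a function can then be assigned to the func field of a PatternBaseCfg subclass.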
Grid Pattern#
- omni.isaac.lab.sensors.patterns.grid_pattern(cfg: patterns_cfg.GridPatternCfg, device: str) → tuple[torch.Tensor, torch.Tensor]#
A regular grid pattern for ray casting.
The grid pattern is made from rays that are parallel to each other. They span a 2D grid in the sensor’s local coordinates from (-length/2, -width/2) to (length/2, width/2), which is defined by the size = (length, width) and resolution parameters in the config.
- Parameters:
cfg – The configuration instance for the pattern.
device – The device to create the pattern on.
- Returns:
The starting positions and directions of the rays.
- Raises:
ValueError – If the ordering is not “xy” or “yx”.
ValueError – If the resolution is less than or equal to 0.
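A minimal usage sketch, assuming the config's default ray direction; the resolution and size values below are arbitrary:

```python
from omni.isaac.lab.sensors.patterns import GridPatternCfg, grid_pattern

# 2 m x 2 m grid sampled every 0.1 m in the sensor's local x-y plane
cfg = GridPatternCfg(resolution=0.1, size=(2.0, 2.0))
ray_starts, ray_directions = grid_pattern(cfg, device="cpu")
print(ray_starts.shape, ray_directions.shape)  # both are (num_rays, 3)
```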
- class omni.isaac.lab.sensors.patterns.GridPatternCfg#
Configuration for the grid pattern for ray-casting.
Defines a 2D grid of rays in the coordinates of the sensor.
Note
The points are ordered based on the ordering attribute.
Attributes:
resolution – Grid resolution (in meters).
size – Grid size (length, width) (in meters).
direction – Ray direction.
ordering – Specifies the ordering of points in the generated grid.
- ordering: Literal['xy', 'yx']#
Specifies the ordering of points in the generated grid. Defaults to "xy".
Consider a grid pattern with points at \((x, y)\) where \(x\) and \(y\) are the grid indices. The ordering of the points can be specified as “xy” or “yx”. This determines the outer and inner loop order when iterating over the grid points.
If “xy” is selected, the points are ordered with outer loop over “x” and inner loop over “y”.
If “yx” is selected, the points are ordered with outer loop over “y” and inner loop over “x”.
For example, the grid pattern points with \(X = (0, 1, 2)\) and \(Y = (3, 4)\):
“xy” ordering: \([(0, 3), (0, 4), (1, 3), (1, 4), (2, 3), (2, 4)]\)
“yx” ordering: \([(0, 3), (1, 3), (2, 3), (0, 4), (1, 4), (2, 4)]\)
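The two orderings can be reproduced with torch.meshgrid as a rough illustration (this is not necessarily how the pattern is implemented internally):

```python
import torch

x = torch.tensor([0.0, 1.0, 2.0])
y = torch.tensor([3.0, 4.0])

# "xy" ordering: outer loop over x, inner loop over y
gx, gy = torch.meshgrid(x, y, indexing="ij")
points_xy = torch.stack([gx.flatten(), gy.flatten()], dim=-1)
# [[0, 3], [0, 4], [1, 3], [1, 4], [2, 3], [2, 4]]

# "yx" ordering: outer loop over y, inner loop over x
gx, gy = torch.meshgrid(x, y, indexing="xy")
points_yx = torch.stack([gx.flatten(), gy.flatten()], dim=-1)
# [[0, 3], [1, 3], [2, 3], [0, 4], [1, 4], [2, 4]]
```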
Pinhole Camera Pattern#
- omni.isaac.lab.sensors.patterns.pinhole_camera_pattern(cfg: patterns_cfg.PinholeCameraPatternCfg, intrinsic_matrices: torch.Tensor, device: str) → tuple[torch.Tensor, torch.Tensor]#
The image pattern for ray casting.
Caution
This function does not follow the standard pattern interface. It requires the intrinsic matrices of the cameras to be passed in. This is because we want to be able to randomize the intrinsic matrices of the cameras, which is not possible with the standard pattern interface.
- Parameters:
cfg – The configuration instance for the pattern.
intrinsic_matrices – The intrinsic matrices of the cameras. Shape is (N, 3, 3).
device – The device to create the pattern on.
- Returns:
The starting positions and directions of the rays. The shapes of the tensors are (N, H * W, 3) and (N, H * W, 3), respectively.
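A usage sketch with a single camera; the intrinsic values below are made up for illustration:

```python
import torch

from omni.isaac.lab.sensors.patterns import PinholeCameraPatternCfg, pinhole_camera_pattern

# illustrative intrinsics for one 640 x 480 camera, shape (N, 3, 3) with N = 1
intrinsics = torch.tensor([[[600.0, 0.0, 320.0],
                            [0.0, 600.0, 240.0],
                            [0.0, 0.0, 1.0]]])

cfg = PinholeCameraPatternCfg(width=640, height=480)
ray_starts, ray_directions = pinhole_camera_pattern(cfg, intrinsics, device="cpu")
print(ray_starts.shape, ray_directions.shape)  # both are (1, 640 * 480, 3)
```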
- class omni.isaac.lab.sensors.patterns.PinholeCameraPatternCfg#
Configuration for a pinhole camera depth image pattern for ray-casting.
Caution
Focal length as well as the aperture sizes and offsets are set as a tenth of the world unit. In our case, the world unit is meters, so all of these values are in cm. For more information, please check: https://docs.omniverse.nvidia.com/materials-and-rendering/latest/cameras.html
Attributes:
focal_length – Perspective focal length (in cm).
horizontal_aperture – Horizontal aperture (in cm).
vertical_aperture – Vertical aperture (in cm).
horizontal_aperture_offset – Offsets Resolution/Film gate horizontally.
vertical_aperture_offset – Offsets Resolution/Film gate vertically.
width – Width of the image (in pixels).
height – Height of the image (in pixels).
Methods:
from_intrinsic_matrix(intrinsic_matrix, ...) – Create a PinholeCameraPatternCfg class instance from an intrinsic matrix.
- focal_length: float#
Perspective focal length (in cm). Defaults to 24.0cm.
Longer focal lengths give a narrower field of view; shorter focal lengths give a wider field of view.
- horizontal_aperture: float#
Horizontal aperture (in cm). Defaults to 20.955 cm.
Emulates sensor/film width on a camera.
Note
The default value is the horizontal aperture of a 35 mm spherical projector.
- vertical_aperture: float | None#
Vertical aperture (in cm). Defaults to None.
Emulates sensor/film height on a camera. If None, then the vertical aperture is calculated based on the horizontal aperture and the aspect ratio of the image to maintain square pixels. In this case, the vertical aperture is calculated as:
\[\text{vertical aperture} = \text{horizontal aperture} \times \frac{\text{height}}{\text{width}}\]
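For example, with the default 20.955 cm horizontal aperture and a 640 x 480 image, this gives a vertical aperture of 20.955 × 480 / 640 ≈ 15.72 cm.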
- classmethod from_intrinsic_matrix(intrinsic_matrix: list[float], width: int, height: int, focal_length: float = 24.0) → PinholeCameraPatternCfg#
Create a PinholeCameraPatternCfg class instance from an intrinsic matrix.
The intrinsic matrix is a 3x3 matrix that defines the mapping between the 3D world coordinates and the 2D image. The matrix is defined as:
\[\begin{split}I_{cam} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix},\end{split}\]
where \(f_x\) and \(f_y\) are the focal lengths along the x and y directions, while \(c_x\) and \(c_y\) are the principal point offsets along the x and y directions, respectively.
- Parameters:
intrinsic_matrix – Intrinsic matrix of the camera in row-major format. The matrix is defined as [f_x, 0, c_x, 0, f_y, c_y, 0, 0, 1]. Shape is (9,).
width – Width of the image (in pixels).
height – Height of the image (in pixels).
focal_length – Focal length of the camera (in cm). Defaults to 24.0 cm.
- Returns:
An instance of the PinholeCameraPatternCfg class.
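A usage sketch; the intrinsic values below are made up for illustration:

```python
from omni.isaac.lab.sensors.patterns import PinholeCameraPatternCfg

# row-major intrinsics: [f_x, 0, c_x, 0, f_y, c_y, 0, 0, 1]
intrinsic_matrix = [600.0, 0.0, 320.0, 0.0, 600.0, 240.0, 0.0, 0.0, 1.0]

cfg = PinholeCameraPatternCfg.from_intrinsic_matrix(intrinsic_matrix, width=640, height=480)
print(cfg.horizontal_aperture, cfg.vertical_aperture)  # apertures derived from the intrinsics
```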
RS-Bpearl Pattern#
- omni.isaac.lab.sensors.patterns.bpearl_pattern(cfg: patterns_cfg.BpearlPatternCfg, device: str) → tuple[torch.Tensor, torch.Tensor]#
The RS-Bpearl pattern for ray casting.
The Robosense RS-Bpearl is a short-range LiDAR that has a 360 degrees x 90 degrees super wide field of view. It is designed for near-field blind-spots detection.
- Parameters:
cfg – The configuration instance for the pattern.
device – The device to create the pattern on.
- Returns:
The starting positions and directions of the rays.
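A usage sketch, assuming the config's default Bpearl scan parameters:

```python
from omni.isaac.lab.sensors.patterns import BpearlPatternCfg, bpearl_pattern

cfg = BpearlPatternCfg()  # default field of view, resolution, and vertical ray angles
ray_starts, ray_directions = bpearl_pattern(cfg, device="cpu")
print(ray_starts.shape, ray_directions.shape)  # both are (num_rays, 3)
```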
- class omni.isaac.lab.sensors.patterns.BpearlPatternCfg#
Configuration for the Bpearl pattern for ray-casting.
Attributes:
horizontal_fov – Horizontal field of view (in degrees).
horizontal_res – Horizontal resolution (in degrees).
vertical_ray_angles – Vertical ray angles (in degrees).
LiDAR Pattern#
- omni.isaac.lab.sensors.patterns.lidar_pattern(cfg: patterns_cfg.LidarPatternCfg, device: str) → tuple[torch.Tensor, torch.Tensor]#
Lidar sensor pattern for ray casting.
- Parameters:
cfg – The configuration instance for the pattern.
device – The device to create the pattern on.
- Returns:
The starting positions and directions of the rays.
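A usage sketch; the channel count and field-of-view values below are arbitrary, and the keyword names mirror the attributes listed for LidarPatternCfg:

```python
from omni.isaac.lab.sensors.patterns import LidarPatternCfg, lidar_pattern

# illustrative 32-channel LiDAR: +/- 15 deg vertical FOV, full 360 deg horizontal sweep
cfg = LidarPatternCfg(
    channels=32,
    vertical_fov_range=(-15.0, 15.0),
    horizontal_fov_range=(-180.0, 180.0),
    horizontal_res=0.2,
)
ray_starts, ray_directions = lidar_pattern(cfg, device="cpu")
print(ray_starts.shape, ray_directions.shape)  # both are (num_rays, 3)
```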
- class omni.isaac.lab.sensors.patterns.LidarPatternCfg#
Configuration for the LiDAR pattern for ray-casting.
Attributes:
channels – Number of channels (beams).
vertical_fov_range – Vertical field of view range (in degrees).
horizontal_fov_range – Horizontal field of view range (in degrees).
horizontal_res – Horizontal resolution (in degrees).