[isaacsim.sensors.physx] Isaac Sim PhysX Sensors#
Version: 2.5.0
The Isaac Sim PhysX Sensors extension provides APIs for PhysX-raycast-based sensors, including lidar, the Proximity Sensor, and the Lightbeam Sensor.
Enable Extension#
The extension can be enabled (if not already) in one of the following ways:
Pass the following entry as an application argument when launching from a terminal.
APP_SCRIPT.(sh|bat) --enable isaacsim.sensors.physx
Add the following entry under [dependencies] in an experience (.kit) file or an extension configuration (extension.toml) file.
[dependencies]
"isaacsim.sensors.physx" = {}
Open the Window > Extensions menu in a running application instance and search for isaacsim.sensors.physx.
Then, toggle the enable control button if it is not already active.
API#
Python API#
Commands
- RangeSensorCreatePrim: Base command for creating range sensor prims.
- RangeSensorCreateLidar: Command class to create a lidar sensor.
- RangeSensorCreateGeneric: Command class to create a generic range sensor.
- IsaacSensorCreateLightBeamSensor: Command class to create a light beam sensor.
Sensors
- ProximitySensor: A physics-based proximity sensor that detects overlapping objects using PhysX collision queries.
- RotatingLidarPhysX: A rotating lidar sensor using PhysX simulation for range detection.
Commands#
- class RangeSensorCreatePrim(*args: Any, **kwargs: Any)#
Bases: Command
Base command for creating range sensor prims.
This command is used to create each range sensor prim and handles undo operations so that individual prim commands don’t have to implement their own undo logic.
- Parameters:
path – Path for the new prim.
parent – Parent prim path.
schema_type – Schema type to use for the prim.
translation – Translation vector for the prim.
orientation – Orientation quaternion for the prim.
visibility – Whether the prim is visible.
min_range – Minimum range of the sensor.
max_range – Maximum range of the sensor.
draw_points – Whether to draw points for visualization.
draw_lines – Whether to draw lines for visualization.
- class RangeSensorCreateLidar(*args: Any, **kwargs: Any)#
Bases: Command
Command class to create a lidar sensor.
Typical usage example:
result, prim = omni.kit.commands.execute(
    "RangeSensorCreateLidar",
    path="/Lidar",
    parent=None,
    translation=Gf.Vec3d(0, 0, 0),
    orientation=Gf.Quatd(1, 0, 0, 0),
    min_range=0.4,
    max_range=100.0,
    draw_points=False,
    draw_lines=False,
    horizontal_fov=360.0,
    vertical_fov=30.0,
    horizontal_resolution=0.4,
    vertical_resolution=4.0,
    rotation_rate=20.0,
    high_lod=False,
    yaw_offset=0.0,
    enable_semantics=False,
)
- Parameters:
path – Path for the new lidar sensor prim.
parent – Parent prim path.
translation – Translation vector for the lidar sensor.
orientation – Orientation quaternion for the lidar sensor.
min_range – Minimum range of the sensor.
max_range – Maximum range of the sensor.
draw_points – Whether to draw points for visualization.
draw_lines – Whether to draw lines for visualization.
horizontal_fov – Horizontal field of view in degrees.
vertical_fov – Vertical field of view in degrees.
horizontal_resolution – Horizontal resolution in degrees per sample.
vertical_resolution – Vertical resolution in degrees per sample.
rotation_rate – Rotation rate of the sensor in Hz.
high_lod – Whether to enable high level of detail rendering.
yaw_offset – Yaw offset in degrees.
enable_semantics – Whether to enable semantic segmentation.
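The angular parameters above jointly determine how many rays the sensor casts. As a rough sketch (this helper is not part of the extension API, and the exact pattern is determined by the simulator), assuming samples are spaced evenly at the given angular resolutions:

```python
# Rough sketch: estimate how many rays a lidar configuration fires per full
# scan, assuming evenly spaced samples. Not part of the extension API.
def estimate_ray_counts(horizontal_fov, vertical_fov,
                        horizontal_resolution, vertical_resolution):
    cols = int(round(horizontal_fov / horizontal_resolution))  # samples per revolution
    rows = int(round(vertical_fov / vertical_resolution))      # vertical channels
    return rows, cols

# Using the values from the RangeSensorCreateLidar example:
rows, cols = estimate_ray_counts(360.0, 30.0, 0.4, 4.0)
```

With a 360-degree horizontal field of view at 0.4-degree resolution, each revolution covers 900 horizontal samples, which is also what get_num_cols() on the resulting sensor would reflect.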
- class RangeSensorCreateGeneric(*args: Any, **kwargs: Any)#
Bases: Command
Command class to create a generic range sensor.
Typical usage example:
result, prim = omni.kit.commands.execute(
    "RangeSensorCreateGeneric",
    path="/GenericSensor",
    parent=None,
    translation=Gf.Vec3d(0, 0, 0),
    orientation=Gf.Quatd(1, 0, 0, 0),
    min_range=0.4,
    max_range=100.0,
    draw_points=False,
    draw_lines=False,
    sampling_rate=60,
)
- Parameters:
path – Path for the new prim.
parent – Parent prim path.
translation – Translation vector for the prim.
orientation – Orientation quaternion for the prim.
min_range – Minimum range of the sensor.
max_range – Maximum range of the sensor.
draw_points – Whether to draw points for visualization.
draw_lines – Whether to draw lines for visualization.
sampling_rate – Sampling rate of the sensor in Hz.
- class IsaacSensorCreateLightBeamSensor(*args: Any, **kwargs: Any)#
Bases: Command
Command class to create a light beam sensor.
- Parameters:
path – Path for the new prim.
parent – Parent prim path.
translation – Translation vector for the prim.
orientation – Orientation quaternion for the prim.
num_rays – Number of rays for the light beam sensor.
curtain_length – Length of the curtain for multi-ray sensors.
forward_axis – Forward direction axis.
curtain_axis – Curtain direction axis.
min_range – Minimum range of the sensor.
max_range – Maximum range of the sensor.
draw_points – Whether to draw points for visualization.
draw_lines – Whether to draw lines for visualization.
**kwargs – Additional keyword arguments.
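Unlike the other commands, this entry ships without a usage example. A hypothetical sketch, assembled from the parameter list above (the prim path, axis encoding, and all values here are assumptions, not documented defaults):

```python
# Hypothetical usage sketch for IsaacSensorCreateLightBeamSensor; parameter
# names follow the documented signature, values are illustrative assumptions.
light_beam_kwargs = dict(
    path="/LightBeam",
    parent=None,
    num_rays=1,
    curtain_length=0.5,          # only meaningful for multi-ray sensors
    forward_axis=(1.0, 0.0, 0.0),  # assumed axis encoding
    curtain_axis=(0.0, 0.0, 1.0),  # assumed axis encoding
    min_range=0.4,
    max_range=100.0,
    draw_points=False,
    draw_lines=True,
)

# Inside a running Isaac Sim / Kit application:
# import omni.kit.commands
# result, prim = omni.kit.commands.execute(
#     "IsaacSensorCreateLightBeamSensor", **light_beam_kwargs
# )
```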
Sensors#
- class ProximitySensor(
- parent: pxr.Usd.Prim,
- callback_fns=[None, None, None],
- exclusions=[],
- )#
Bases: object
A physics-based proximity sensor that detects overlapping objects using PhysX collision queries.
The sensor performs box overlap queries to detect when other physics objects enter, remain within, or exit its detection zone. It provides callback functionality for handling entry, ongoing overlap, and exit events, along with tracking overlap duration and distance measurements.
The sensor uses the parent prim’s scale property to define the detection box size and performs continuous overlap detection through the PhysX scene query interface. It maintains internal state to track zone transitions and provides detailed overlap metadata including duration and distance.
- Parameters:
parent – The USD prim that defines the sensor’s position, orientation, and scale. The prim’s transform determines the sensor’s world position and the scale property defines the detection box dimensions.
callback_fns – List of three callback functions [on_enter, on_inside, on_exit]. Each callback receives the sensor instance as a parameter. Functions can be None to disable specific callbacks.
exclusions – List of prim paths to exclude from overlap detection. Objects at these paths will not trigger sensor events or appear in overlap data.
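The callback_fns list positions map to the enter, inside, and exit events in that order. A minimal sketch of wiring up handlers (the handler names are hypothetical; only the list shape and the sensor-instance argument follow the documentation above):

```python
# Sketch: the three slots of callback_fns are [on_enter, on_inside, on_exit];
# each receives the sensor instance, and None disables that slot.
events = []

def on_enter(sensor):
    events.append("enter")

def on_exit(sensor):
    events.append("exit")

callback_fns = [on_enter, None, on_exit]  # no on_inside handler

# The sensor invokes these during update(); simulated here for illustration:
for fn in (callback_fns[0], callback_fns[2]):
    if fn is not None:
        fn(sensor=None)
```

In a real scene, callback_fns would be passed to the ProximitySensor constructor alongside the parent prim.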
- check_for_overlap() int#
Performs a physics overlap box query to detect collisions.
Uses the parent prim’s transform and scale to create a box overlap query that detects collisions with other geometry in the physics scene.
- Returns:
Number of hits from the overlap query.
- get_active_zones() List[str]#
Returns a list of the prim paths of all the collision meshes the tracker is inside of.
- Returns:
Prim paths as strings.
- get_data() Dict[str, Dict[str, float]]#
Returns dictionary of overlapped geometry and respective metadata.
key: prim_path of the overlapped geometry
value: dictionary of metadata:
"duration": time in seconds since the overlap began
"distance": distance from the origin of the tracker to the origin of the overlapped geometry
- Returns:
Overlapped geometry and metadata.
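Consuming the returned structure is plain dictionary work. A sketch using a hand-built sample in the documented shape (the prim paths and values here are hypothetical):

```python
# Sketch: a sample dict in the shape documented for get_data(),
# {prim_path: {"duration": ..., "distance": ...}}. Paths are hypothetical.
sample = {
    "/World/Cube": {"duration": 1.25, "distance": 0.4},
    "/World/Sphere": {"duration": 0.10, "distance": 1.8},
}

# Find the overlapping prim closest to the sensor origin.
nearest = min(sample, key=lambda path: sample[path]["distance"])
```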
- get_entered_zones() List[str]#
Returns a list of the prim paths of all the collision meshes the tracker just entered.
- Returns:
Prim paths as strings.
- get_exited_zones() List[str]#
Returns a list of the prim paths of all the collision meshes the tracker just exited.
- Returns:
Prim paths as strings.
- is_overlapping() bool#
Whether the proximity sensor is currently overlapping with any geometry.
- Returns:
True if overlapping with any collision meshes.
- report_hit(hit) bool#
Reports a hit from the physics overlap query.
Processes a collision hit by adding the collided prim to active zones and starting a timer for duration tracking.
- Parameters:
hit – The physics hit result from the overlap query.
- Returns:
True to continue the physics query.
- reset()#
Resets the proximity sensor to its initial state.
Clears all active zones, entered zones, exited zones, and overlap data, and sets the internal overlapping state to false.
- status() tuple[bool, dict[str, dict[str, float]]]#
Current overlapping status and data.
- Returns:
A tuple containing the overlapping boolean state and the overlap data dictionary.
- to_string() str#
String representation of the proximity sensor state.
- Returns:
A formatted string containing the tracker path, name, and active zone information with duration and distance details.
- update()#
Updates the proximity sensor state by checking for overlaps and triggering callbacks.
Checks for overlap with collision meshes, updates active zones, determines entered and exited zones, and calls appropriate callback functions for zone transitions and while inside zones.
- class RotatingLidarPhysX(
- prim_path: str,
- name: str = 'rotating_lidar_physX',
- rotation_frequency: float | None = None,
- rotation_dt: float | None = None,
- position: ndarray | None = None,
- translation: ndarray | None = None,
- orientation: ndarray | None = None,
- fov: Tuple[float, float] | None = None,
- resolution: Tuple[float, float] | None = None,
- valid_range: Tuple[float, float] | None = None,
- )#
Bases: BaseSensor
A rotating lidar sensor using PhysX simulation for range detection.
This sensor provides rotating lidar functionality with configurable field of view, resolution, and rotation frequency. It captures depth, intensity, point cloud, and other lidar data types during simulation. The sensor can create a new lidar prim at the specified path or use an existing one.
- Parameters:
prim_path – Path to the lidar prim in the USD stage.
name – Name identifier for the sensor.
rotation_frequency – Rotation frequency of the lidar in Hz. Cannot be specified together with rotation_dt.
rotation_dt – Time step for rotation in seconds. Cannot be specified together with rotation_frequency.
position – Position of the sensor in 3D space.
translation – Translation offset for the sensor.
orientation – Orientation of the sensor as a quaternion or rotation matrix.
fov – Field of view as (horizontal_fov, vertical_fov) in degrees.
resolution – Resolution as (horizontal_resolution, vertical_resolution) in degrees.
valid_range – Valid detection range as (min_range, max_range) in meters.
- Raises:
Exception – If both rotation_frequency and rotation_dt are specified.
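rotation_frequency and rotation_dt describe the same quantity from opposite directions (dt = 1 / frequency), which is presumably why the constructor rejects specifying both. A hypothetical helper illustrating the relationship and the exclusivity check (not part of the API):

```python
# Sketch: resolve a (frequency, dt) pair from either form, rejecting the
# ambiguous case where both are given. Hypothetical helper, not API.
def resolve_rotation(rotation_frequency=None, rotation_dt=None):
    if rotation_frequency is not None and rotation_dt is not None:
        raise Exception("specify rotation_frequency or rotation_dt, not both")
    if rotation_frequency is not None:
        return rotation_frequency, 1.0 / rotation_frequency
    if rotation_dt is not None:
        return 1.0 / rotation_dt, rotation_dt
    return None, None  # both unset: leave the sensor's default unchanged

freq, dt = resolve_rotation(rotation_frequency=20.0)
```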
- add_azimuth_data_to_frame()#
Adds azimuth angle data to the current lidar frame for collection during data acquisition.
- add_depth_data_to_frame()#
Enable depth data collection in the sensor frame.
Adds a ‘depth’ key to the current frame dictionary for storing depth measurements.
- add_intensity_data_to_frame()#
Enable intensity data collection in the sensor frame.
Adds an ‘intensity’ key to the current frame dictionary for storing intensity measurements.
- add_linear_depth_data_to_frame()#
Enable linear depth data collection in the sensor frame.
Adds a ‘linear_depth’ key to the current frame dictionary for storing linear depth measurements.
- add_point_cloud_data_to_frame()#
Adds point cloud data to the current lidar frame for collection during data acquisition.
- add_semantics_data_to_frame()#
Adds semantic segmentation data to the current lidar frame for collection during data acquisition.
Automatically enables semantics on the lidar sensor if not already enabled.
- add_zenith_data_to_frame()#
Adds zenith angle data to the current lidar frame for collection during data acquisition.
- apply_visual_material(
- visual_material: VisualMaterial,
- weaker_than_descendants: bool = False,
Apply visual material to the held prim and optionally its descendants.
- Parameters:
visual_material (VisualMaterial) – visual material to be applied to the held prim. Currently supports PreviewSurface, OmniPBR and OmniGlass.
weaker_than_descendants (bool, optional) – True if the material shouldn’t override the descendants materials, otherwise False. Defaults to False.
Example:
>>> from isaacsim.core.api.materials import OmniGlass
>>>
>>> # create a dark-red glass visual material
>>> material = OmniGlass(
...     prim_path="/World/material/glass",  # path to the material prim to create
...     ior=1.25,
...     depth=0.001,
...     thin_walled=False,
...     color=np.array([0.5, 0.0, 0.0])
... )
>>> prim.apply_visual_material(material)
- disable_semantics()#
Disables semantic data collection for the lidar sensor.
- disable_visualization()#
Disables visualization of the lidar sensor data.
- enable_semantics()#
Enables semantic data collection for the lidar sensor.
- enable_visualization(
- high_lod: bool = False,
- draw_points: bool = True,
- draw_lines: bool = True,
Enables visualization of the lidar sensor data.
- Parameters:
high_lod – Whether to use high level of detail for visualization.
draw_points – Whether to draw point cloud visualization.
draw_lines – Whether to draw line visualization.
- get_applied_visual_material() VisualMaterial#
Return the current applied visual material in case it was applied using apply_visual_material or it’s one of the following materials that was already applied before: PreviewSurface, OmniPBR and OmniGlass.
- Returns:
the current applied visual material if its type is currently supported.
- Return type:
VisualMaterial
Example:
>>> # given a visual material applied
>>> prim.get_applied_visual_material()
<isaacsim.core.api.materials.omni_glass.OmniGlass object at 0x7f36263106a0>
- get_current_frame() dict#
Current frame data from the lidar sensor.
- Returns:
Dictionary containing the current frame data with keys like ‘time’, ‘physics_step’, and any enabled data types.
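A sketch of consuming a frame in the documented shape. The keys beyond 'time' and 'physics_step' depend on which add_*_data_to_frame() methods were called; the sample values below are illustrative only, not real sensor output:

```python
# Sketch: a frame dict in the shape documented for get_current_frame().
# 'depth' appears only after add_depth_data_to_frame(); values are made up.
frame = {
    "time": 0.5,
    "physics_step": 30,
    "depth": [[1000, 2000]],
}

# Separate the always-present bookkeeping keys from the enabled data channels.
enabled_channels = [k for k in frame if k not in ("time", "physics_step")]
```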
- get_default_state() XFormPrimState#
Get the default prim states (spatial position and orientation).
- Returns:
an object that contains the default state of the prim (position and orientation)
- Return type:
XFormPrimState
Example:
>>> state = prim.get_default_state()
>>> state
<isaacsim.core.utils.types.XFormPrimState object at 0x7f33addda650>
>>>
>>> state.position
[-4.5299529e-08 -1.8347054e-09 -2.8610229e-08]
>>> state.orientation
[1. 0. 0. 0.]
- get_fov() Tuple[float, float]#
Field of view of the lidar sensor.
- Returns:
Tuple of (horizontal_fov, vertical_fov) in degrees.
- get_local_pose() Tuple[ndarray, ndarray]#
Get prim’s pose with respect to the local frame (the prim’s parent frame)
- Returns:
first index is the position in the local frame (with shape (3, )). Second index is quaternion orientation (with shape (4, )) in the local frame
- Return type:
Tuple[np.ndarray, np.ndarray]
Example:
>>> # if the prim is in position (1.0, 0.5, 0.0) with respect to the world frame
>>> position, orientation = prim.get_local_pose()
>>> position
[0. 0. 0.]
>>> orientation
[0. 0. 0.]
- get_local_scale() ndarray#
Get prim’s scale with respect to the local frame (the parent’s frame)
- Returns:
scale applied to the prim’s dimensions in the local frame. shape is (3, ).
- Return type:
np.ndarray
Example:
>>> prim.get_local_scale()
[1. 1. 1.]
- get_num_cols() int#
Total number of columns in the lidar sensor.
- Returns:
The total number of columns.
- get_num_cols_in_last_step() int#
Number of columns processed in the last physics step.
- Returns:
The number of columns that were ticked in the last step.
- get_num_rows() int#
Number of vertical resolution rows in the lidar sensor.
- Returns:
The number of rows configured for the lidar sensor.
- get_resolution() float#
Resolution of the lidar sensor.
- Returns:
Tuple of (horizontal_resolution, vertical_resolution) in degrees per sample.
- get_rotation_frequency() int#
Rotation frequency of the lidar sensor in rotations per second.
- Returns:
The current rotation rate.
- get_valid_range() Tuple[float, float]#
Valid range of the lidar sensor.
- Returns:
Tuple of (minimum_range, maximum_range) in meters.
- get_visibility() bool#
- Returns:
true if the prim is visible in stage. false otherwise.
- Return type:
bool
Example:
>>> # get the visible state of a visible prim on the stage
>>> prim.get_visibility()
True
- get_world_pose() Tuple[ndarray, ndarray]#
Get prim’s pose with respect to the world’s frame
- Returns:
first index is the position in the world frame (with shape (3, )). Second index is quaternion orientation (with shape (4, )) in the world frame
- Return type:
Tuple[np.ndarray, np.ndarray]
Example:
>>> # if the prim is in position (1.0, 0.5, 0.0) with respect to the world frame
>>> position, orientation = prim.get_world_pose()
>>> position
[1. 0.5 0. ]
>>> orientation
[1. 0. 0. 0.]
- get_world_scale() ndarray#
Get prim’s scale with respect to the world’s frame
- Returns:
scale applied to the prim’s dimensions in the world frame. shape is (3, ).
- Return type:
np.ndarray
Example:
>>> prim.get_world_scale()
[1. 1. 1.]
- initialize(physics_sim_view=None)#
Initialize the rotating lidar sensor with physics simulation callbacks.
Sets up physics step callbacks for data acquisition and event observers for stage and timeline events.
- Parameters:
physics_sim_view – Physics simulation view for initialization.
- is_paused() bool#
Pause state of the lidar sensor.
- Returns:
True if the sensor is paused, False if it is actively collecting data.
- is_semantics_enabled() bool#
Whether semantic data collection is enabled for the lidar sensor.
- Returns:
True if semantics are enabled, False otherwise.
- is_valid() bool#
Check if the prim path has a valid USD Prim at it
- Returns:
True if the current prim path corresponds to a valid prim in stage. False otherwise.
- Return type:
bool
Example:
>>> # given an existing and valid prim
>>> prim.is_valid()
True
- is_visual_material_applied() bool#
Check if there is a visual material applied
- Returns:
True if there is a visual material applied. False otherwise.
- Return type:
bool
Example:
>>> # given a visual material applied
>>> prim.is_visual_material_applied()
True
- pause()#
Pauses lidar data acquisition.
Stops the sensor from collecting new data while keeping it initialized.
- post_reset()#
Reset the lidar sensor state after simulation reset.
Resets time and physics step counters to zero.
- remove_azimuth_data_from_frame()#
Removes azimuth angle data from the current lidar frame to stop collecting this data type.
- remove_depth_data_from_frame()#
Disable depth data collection in the sensor frame.
Removes the ‘depth’ key from the current frame dictionary.
- remove_intensity_data_from_frame()#
Disable intensity data collection in the sensor frame.
Removes the ‘intensity’ key from the current frame dictionary.
- remove_linear_depth_data_from_frame()#
Disable linear depth data collection in the sensor frame.
Removes the ‘linear_depth’ key from the current frame dictionary.
- remove_point_cloud_data_from_frame()#
Removes point cloud data from the current lidar frame to stop collecting this data type.
- remove_semantics_data_from_frame()#
Removes semantic segmentation data from the current lidar frame and disables semantics on the sensor.
- remove_zenith_data_from_frame()#
Removes zenith angle data from the current lidar frame to stop collecting this data type.
- resume()#
Resumes lidar data acquisition.
Unpauses the sensor to continue collecting data during physics steps.
- set_default_state(
- position: Sequence[float] | None = None,
- orientation: Sequence[float] | None = None,
- ) None#
Set the default state of the prim (position and orientation) that will be used after each reset.
- Parameters:
position (Optional[Sequence[float]], optional) – position in the world frame of the prim. shape is (3, ). Defaults to None, which means left unchanged.
orientation (Optional[Sequence[float]], optional) – quaternion orientation in the world frame of the prim. quaternion is scalar-first (w, x, y, z). shape is (4, ). Defaults to None, which means left unchanged.
Example:
>>> # configure default state
>>> prim.set_default_state(position=np.array([1.0, 0.5, 0.0]), orientation=np.array([1, 0, 0, 0]))
>>>
>>> # set default states during post-reset
>>> prim.post_reset()
- set_fov(value: Tuple[float, float])#
Sets the field of view for the lidar sensor.
- Parameters:
value – Tuple of (horizontal_fov, vertical_fov) in degrees.
- set_local_pose(
- translation: Sequence[float] | None = None,
- orientation: Sequence[float] | None = None,
- ) None#
Set prim’s pose with respect to the local frame (the prim’s parent frame).
Warning
This method will change (teleport) the prim pose immediately to the indicated value
- Parameters:
translation (Optional[Sequence[float]], optional) – translation in the local frame of the prim (with respect to its parent prim). shape is (3, ). Defaults to None, which means left unchanged.
orientation (Optional[Sequence[float]], optional) – quaternion orientation in the local frame of the prim. quaternion is scalar-first (w, x, y, z). shape is (4, ). Defaults to None, which means left unchanged.
Hint
This method belongs to the methods used to set the prim state
Example:
>>> prim.set_local_pose(translation=np.array([1.0, 0.5, 0.0]), orientation=np.array([1., 0., 0., 0.]))
- set_local_scale(
- scale: Sequence[float] | None,
- ) None#
Set prim’s scale with respect to the local frame (the prim’s parent frame).
- Parameters:
scale (Optional[Sequence[float]]) – scale to be applied to the prim’s dimensions. shape is (3, ). Defaults to None, which means left unchanged.
Example:
>>> # scale prim 10 times smaller
>>> prim.set_local_scale(np.array([0.1, 0.1, 0.1]))
- set_resolution(value: float)#
Sets the resolution for the lidar sensor.
- Parameters:
value – Resolution value in degrees per sample.
- set_rotation_frequency(value: int)#
Sets the rotation frequency of the lidar sensor.
- Parameters:
value – Rotation rate in rotations per second.
- set_valid_range(value: Tuple[float, float])#
Sets the valid range of the lidar sensor.
- Parameters:
value – Tuple of (minimum_range, maximum_range) in meters.
- set_visibility(visible: bool) None#
Set the visibility of the prim in stage
- Parameters:
visible (bool) – flag to set the visibility of the usd prim in stage.
Example:
>>> # make prim not visible in the stage
>>> prim.set_visibility(visible=False)
- set_world_pose(
- position: Sequence[float] | None = None,
- orientation: Sequence[float] | None = None,
- ) None#
Set prim’s pose with respect to the world’s frame
Warning
This method will change (teleport) the prim pose immediately to the indicated value
- Parameters:
position (Optional[Sequence[float]], optional) – position in the world frame of the prim. shape is (3, ). Defaults to None, which means left unchanged.
orientation (Optional[Sequence[float]], optional) – quaternion orientation in the world frame of the prim. quaternion is scalar-first (w, x, y, z). shape is (4, ). Defaults to None, which means left unchanged.
Hint
This method belongs to the methods used to set the prim state
Example:
>>> prim.set_world_pose(position=np.array([1.0, 0.5, 0.0]), orientation=np.array([1., 0., 0., 0.]))
- property name: str | None#
Returns: str: name given to the prim when instantiating it. Otherwise None.
- property non_root_articulation_link: bool#
Used to query if the prim is a non root articulation link
- Returns:
True if the prim itself is a non root link
- Return type:
bool
Example:
>>> # for a wrapped articulation (where the root prim has the Physics Articulation Root property applied)
>>> prim.non_root_articulation_link
False
- property prim: pxr.Usd.Prim#
Returns: Usd.Prim: USD Prim object that this object holds.
- property prim_path: str#
Returns: str: prim path in the stage
Omnigraph Nodes#
The extension exposes the following Omnigraph nodes: