Publishing Camera’s Data#
Learning Objectives#
In this tutorial, you learn how to programmatically set up publishers for Isaac Sim Cameras at an approximate frequency.
Getting Started#
Prerequisites
Completed the ROS 2 Cameras tutorial.
Completed ROS 2 Installation (Default) so that the necessary environment variables are set and sourced before launching NVIDIA Isaac Sim.
Read through the Sensor Axes Representation (LiDAR, Cameras).
Read through how to programmatically create a Camera sensor object in the scene.
ROS 2 Bridge is enabled.
Note
In Windows 10 or 11, depending on your machine’s configuration, RViz2 might not open properly.
Setup a Camera in a Scene#
To begin this tutorial, set up an environment with an isaacsim.sensors.camera Camera object. Running the following code loads a basic warehouse environment with a camera in the scene.
import argparse
import sys
parser = argparse.ArgumentParser()
parser.add_argument("--test", action="store_true", help="Run in test mode.")
args, unknown = parser.parse_known_args()
import carb
from isaacsim import SimulationApp
BACKGROUND_STAGE_PATH = "/background"
BACKGROUND_USD_PATH = "/Isaac/Environments/Simple_Warehouse/warehouse_with_forklifts.usd"
CONFIG = {"renderer": "RayTracedLighting", "headless": False}
# Example ROS 2 bridge sample demonstrating the manual loading of stages and manual publishing of images
simulation_app = SimulationApp(CONFIG)
import isaacsim.core.utils.numpy.rotations as rot_utils
import numpy as np
import omni
import omni.graph.core as og
import omni.replicator.core as rep
import omni.syntheticdata._syntheticdata as sd
from isaacsim.core.api import SimulationContext
from isaacsim.core.nodes.scripts.utils import set_target_prims
from isaacsim.core.utils import extensions, stage
from isaacsim.core.utils.prims import is_prim_path_valid, set_targets
from isaacsim.sensors.camera import Camera
from isaacsim.storage.native import get_assets_root_path
# Enable ROS 2 bridge extension
extensions.enable_extension("isaacsim.ros2.bridge")
simulation_app.update()
simulation_context = SimulationContext(stage_units_in_meters=1.0)
# Locate Isaac Sim assets folder to load environment and robot stages
assets_root_path = get_assets_root_path()
if assets_root_path is None:
    carb.log_error("Could not find Isaac Sim assets folder")
    simulation_app.close()
    sys.exit()
# Loading the environment
stage.add_reference_to_stage(assets_root_path + BACKGROUND_USD_PATH, BACKGROUND_STAGE_PATH)
###### Camera helper functions for setting up publishers. ########
# Paste functions from the tutorial here
# def publish_camera_tf(camera: Camera): ...
# def publish_camera_info(camera: Camera, freq): ...
# def publish_pointcloud_from_depth(camera: Camera, freq): ...
# def publish_depth(camera: Camera, freq): ...
# def publish_rgb(camera: Camera, freq): ...
###################################################################
# Create a Camera prim. The Camera class takes the position and orientation in the world axes convention.
camera = Camera(
    prim_path="/World/floating_camera",
    position=np.array([-3.11, -1.87, 1.0]),
    frequency=20,
    resolution=(256, 256),
    orientation=rot_utils.euler_angles_to_quats(np.array([0, 0, 0]), degrees=True),
)
camera.initialize()
simulation_app.update()
camera.initialize()
############### Calling Camera publishing functions ###############
# Call the publishers.
# Make sure you pasted in the helper functions above, and uncomment out the following lines before running.
approx_freq = 30
# publish_camera_tf(camera)
# publish_camera_info(camera, approx_freq)
# publish_rgb(camera, approx_freq)
# publish_depth(camera, approx_freq)
# publish_pointcloud_from_depth(camera, approx_freq)
####################################################################
# Initialize physics
simulation_context.initialize_physics()
simulation_context.play()
i = 0
while simulation_app.is_running() and (not args.test or i < 100):
    simulation_context.step(render=True)
    i += 1
simulation_context.stop()
simulation_app.close()
Publish Camera Intrinsics to CameraInfo Topic#
The following snippet publishes the camera intrinsics associated with an isaacsim.sensors.camera Camera to a sensor_msgs/CameraInfo topic.
def publish_camera_info(camera: Camera, freq):
    from isaacsim.ros2.core import read_camera_info

    # The following code will link the camera's render product and publish the data to the specified topic name.
    render_product = camera._render_product_path
    step_size = int(60 / freq)
    topic_name = camera.name + "_camera_info"
    queue_size = 1
    node_namespace = ""
    frame_id = camera.prim_path.split("/")[-1]  # This matches what the TF tree is publishing.

    writer = rep.writers.get("ROS2PublishCameraInfo")
    camera_info, _ = read_camera_info(render_product_path=render_product)
    writer.initialize(
        frameId=frame_id,
        nodeNamespace=node_namespace,
        queueSize=queue_size,
        topicName=topic_name,
        width=camera_info.width,
        height=camera_info.height,
        projectionType=camera_info.distortion_model,
        k=camera_info.k.reshape([1, 9]),
        r=camera_info.r.reshape([1, 9]),
        p=camera_info.p.reshape([1, 12]),
        physicalDistortionModel=camera_info.distortion_model,
        physicalDistortionCoefficients=camera_info.d,
    )
    writer.attach([render_product])

    gate_path = omni.syntheticdata.SyntheticData._get_node_path(
        "PostProcessDispatch" + "IsaacSimulationGate", render_product
    )

    # Set step input of the Isaac Simulation Gate nodes upstream of ROS publishers to control their execution rate
    og.Controller.attribute(gate_path + ".inputs:step").set(step_size)
    return
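The step_size = int(60 / freq) line above is why the publish frequency is only approximate: the Isaac Simulation Gate fires once every whole number of render ticks, so only integer divisors of the render rate are achievable. The following is a minimal sketch of that arithmetic, assuming the hard-coded 60 corresponds to a 60 FPS render rate; gate_step_size and effective_freq are illustrative names, not Isaac Sim API.

```python
# Sketch (not Isaac Sim API): how the SimulationGate step input turns the
# render rate into an approximate publish frequency.

def gate_step_size(target_freq, render_fps=60):
    """Replicates the helpers' step_size = int(60 / freq) computation."""
    return max(1, int(render_fps / target_freq))

def effective_freq(target_freq, render_fps=60):
    """Actual publish rate: the gate can only divide the render rate by an integer."""
    return render_fps / gate_step_size(target_freq, render_fps)

# A 30 Hz request divides 60 FPS exactly; a 25 Hz request truncates to the same
# step size of 2, so it effectively publishes at 30 Hz.
for f in (30, 25, 10):
    print(f, gate_step_size(f), effective_freq(f))
```

Note also that if rendering slows below 60 FPS, the actual topic rate drops proportionally, since the gate counts rendered frames rather than wall-clock time.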
Publish Pointcloud from Depth Images#
In the following snippet, a pointcloud is published to a sensor_msgs/PointCloud2 message. This pointcloud is reconstructed from the depth image using the intrinsics of the camera.
def publish_pointcloud_from_depth(camera: Camera, freq):
    # The following code will link the camera's render product and publish the data to the specified topic name.
    render_product = camera._render_product_path
    step_size = int(60 / freq)
    topic_name = camera.name + "_pointcloud"  # Set topic name to the camera's name
    queue_size = 1
    node_namespace = ""
    frame_id = camera.prim_path.split("/")[-1]  # This matches what the TF tree is publishing.

    # Note, this pointcloud publisher will convert the Depth image to a pointcloud using the Camera intrinsics.
    # This pointcloud generation method does not support semantic labeled objects.
    rv = omni.syntheticdata.SyntheticData.convert_sensor_type_to_rendervar(sd.SensorType.DistanceToImagePlane.name)

    writer = rep.writers.get(rv + "ROS2PublishPointCloud")
    writer.initialize(frameId=frame_id, nodeNamespace=node_namespace, queueSize=queue_size, topicName=topic_name)
    writer.attach([render_product])

    # Set step input of the Isaac Simulation Gate nodes upstream of ROS publishers to control their execution rate
    gate_path = omni.syntheticdata.SyntheticData._get_node_path(rv + "IsaacSimulationGate", render_product)
    og.Controller.attribute(gate_path + ".inputs:step").set(step_size)
    return
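For reference, the depth-to-pointcloud conversion the writer performs can be sketched with the standard pinhole back-projection model: each pixel (u, v) with depth z maps to a 3D point using the focal lengths and principal point from the camera intrinsics. This is not the writer's internal implementation; depth_to_pointcloud and the intrinsics below are hypothetical values for illustration.

```python
# Sketch of pinhole back-projection from a depth image to camera-frame points.
import numpy as np

def depth_to_pointcloud(depth, K):
    """Back-project an HxW depth image (meters along +Z) into an Nx3 array of camera-frame points."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Hypothetical intrinsics for a 4x4 depth image, all pixels 2 m away.
K = np.array([[2.0, 0.0, 2.0],
              [0.0, 2.0, 2.0],
              [0.0, 0.0, 1.0]])
depth = np.full((4, 4), 2.0)
points = depth_to_pointcloud(depth, K)
# The pixel at the principal point (u=2, v=2) back-projects to (0, 0, 2).
```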
Publish RGB Images#
def publish_rgb(camera: Camera, freq):
    # The following code will link the camera's render product and publish the data to the specified topic name.
    render_product = camera._render_product_path
    step_size = int(60 / freq)
    topic_name = camera.name + "_rgb"
    queue_size = 1
    node_namespace = ""
    frame_id = camera.prim_path.split("/")[-1]  # This matches what the TF tree is publishing.

    rv = omni.syntheticdata.SyntheticData.convert_sensor_type_to_rendervar(sd.SensorType.Rgb.name)
    writer = rep.writers.get(rv + "ROS2PublishImage")
    writer.initialize(frameId=frame_id, nodeNamespace=node_namespace, queueSize=queue_size, topicName=topic_name)
    writer.attach([render_product])

    # Set step input of the Isaac Simulation Gate nodes upstream of ROS publishers to control their execution rate
    gate_path = omni.syntheticdata.SyntheticData._get_node_path(rv + "IsaacSimulationGate", render_product)
    og.Controller.attribute(gate_path + ".inputs:step").set(step_size)
    return
Publish Depth Images#
def publish_depth(camera: Camera, freq):
    # The following code will link the camera's render product and publish the data to the specified topic name.
    render_product = camera._render_product_path
    step_size = int(60 / freq)
    topic_name = camera.name + "_depth"
    queue_size = 1
    node_namespace = ""
    frame_id = camera.prim_path.split("/")[-1]  # This matches what the TF tree is publishing.

    rv = omni.syntheticdata.SyntheticData.convert_sensor_type_to_rendervar(sd.SensorType.DistanceToImagePlane.name)
    writer = rep.writers.get(rv + "ROS2PublishImage")
    writer.initialize(frameId=frame_id, nodeNamespace=node_namespace, queueSize=queue_size, topicName=topic_name)
    writer.attach([render_product])

    # Set step input of the Isaac Simulation Gate nodes upstream of ROS publishers to control their execution rate
    gate_path = omni.syntheticdata.SyntheticData._get_node_path(rv + "IsaacSimulationGate", render_product)
    og.Controller.attribute(gate_path + ".inputs:step").set(step_size)
    return
Publish a TF Tree for the Camera Pose#
The pointcloud published by the function above is expressed in the ROS camera axes convention (-Y up, +Z forward). To make it easy to visualize in RViz, the following snippet publishes a TF tree to the /tf topic, containing two frames.
The two frames are:
{camera_frame_id}: This is the camera’s pose in the ROS camera convention (-Y up, +Z forward). Pointclouds are published in this frame.
{camera_frame_id}_world: This is the camera’s pose in the world axes convention (+Z up, +X forward). This reflects the true pose of the camera.
The TF Tree looks like this:
world -> {camera_frame_id} -> {camera_frame_id}_world
world -> {camera_frame_id} is a dynamic transform from the origin to the camera in the ROS camera convention, following any movement of the camera.
{camera_frame_id} -> {camera_frame_id}_world is a static transform consisting of only a rotation and zero translation. This static transform can be represented by the quaternion [0.5, -0.5, 0.5, 0.5] in the [w, x, y, z] convention.
Because the pointcloud is published in {camera_frame_id}, it is encouraged to set the frame_id of the pointcloud topic to {camera_frame_id} as well. The resulting pointclouds can then be visualized in the world frame in RViz.
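As a sanity check on that quaternion, you can expand [0.5, -0.5, 0.5, 0.5] (Hamilton [w, x, y, z] convention) into its rotation matrix: the result is a pure signed axis permutation, which is consistent with the rotation-only, zero-translation static transform described above. quat_to_matrix is a standalone helper written for this check, not Isaac Sim API.

```python
# Sanity check (not Isaac Sim code): expand the static quaternion into a matrix.
import numpy as np

def quat_to_matrix(w, x, y, z):
    """Rotation matrix of a unit quaternion in the [w, x, y, z] convention."""
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)],
    ])

R = quat_to_matrix(0.5, -0.5, 0.5, 0.5)
# R is a pure signed axis permutation (a rotation, no scaling or translation):
# [[ 0, -1, 0],
#  [ 0,  0, 1],
#  [-1,  0, 0]]
print(R)
```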
def publish_camera_tf(camera: Camera):
    camera_prim = camera.prim_path

    if not is_prim_path_valid(camera_prim):
        raise ValueError(f"Camera path '{camera_prim}' is invalid.")

    try:
        # Generate the camera_frame_id. OmniActionGraph will use the last part of
        # the full camera prim path as the frame name, so we will extract it here
        # and use it for the pointcloud frame_id.
        camera_frame_id = camera_prim.split("/")[-1]

        # Generate an action graph associated with camera TF publishing.
        ros_camera_graph_path = "/CameraTFActionGraph"

        # If a camera graph is not found, create a new one.
        if not is_prim_path_valid(ros_camera_graph_path):
            (ros_camera_graph, _, _, _) = og.Controller.edit(
                {
                    "graph_path": ros_camera_graph_path,
                    "evaluator_name": "execution",
                    "pipeline_stage": og.GraphPipelineStage.GRAPH_PIPELINE_STAGE_SIMULATION,
                },
                {
                    og.Controller.Keys.CREATE_NODES: [
                        ("OnTick", "omni.graph.action.OnTick"),
                        ("IsaacClock", "isaacsim.core.nodes.IsaacReadSimulationTime"),
                        ("RosPublisher", "isaacsim.ros2.bridge.ROS2PublishClock"),
                    ],
                    og.Controller.Keys.CONNECT: [
                        ("OnTick.outputs:tick", "RosPublisher.inputs:execIn"),
                        ("IsaacClock.outputs:simulationTime", "RosPublisher.inputs:timeStamp"),
                    ],
                },
            )

        # Generate 2 nodes associated with each camera: TF from world to ROS camera convention, and world frame.
        og.Controller.edit(
            ros_camera_graph_path,
            {
                og.Controller.Keys.CREATE_NODES: [
                    ("PublishTF_" + camera_frame_id, "isaacsim.ros2.bridge.ROS2PublishTransformTree"),
                    ("PublishRawTF_" + camera_frame_id + "_world", "isaacsim.ros2.bridge.ROS2PublishRawTransformTree"),
                ],
                og.Controller.Keys.SET_VALUES: [
                    ("PublishTF_" + camera_frame_id + ".inputs:topicName", "/tf"),
                    # Note if topic_name is changed to something else besides "/tf",
                    # it will not be captured by the ROS tf broadcaster.
                    ("PublishRawTF_" + camera_frame_id + "_world.inputs:topicName", "/tf"),
                    ("PublishRawTF_" + camera_frame_id + "_world.inputs:parentFrameId", camera_frame_id),
                    ("PublishRawTF_" + camera_frame_id + "_world.inputs:childFrameId", camera_frame_id + "_world"),
                    # Static transform from ROS camera convention to world (+Z up, +X forward) convention:
                    ("PublishRawTF_" + camera_frame_id + "_world.inputs:rotation", [0.5, -0.5, 0.5, 0.5]),
                ],
                og.Controller.Keys.CONNECT: [
                    (ros_camera_graph_path + "/OnTick.outputs:tick", "PublishTF_" + camera_frame_id + ".inputs:execIn"),
                    (
                        ros_camera_graph_path + "/OnTick.outputs:tick",
                        "PublishRawTF_" + camera_frame_id + "_world.inputs:execIn",
                    ),
                    (
                        ros_camera_graph_path + "/IsaacClock.outputs:simulationTime",
                        "PublishTF_" + camera_frame_id + ".inputs:timeStamp",
                    ),
                    (
                        ros_camera_graph_path + "/IsaacClock.outputs:simulationTime",
                        "PublishRawTF_" + camera_frame_id + "_world.inputs:timeStamp",
                    ),
                ],
            },
        )
    except Exception as e:
        print(e)

    # Add target prims for the USD pose. All other frames are static.
    set_target_prims(
        primPath=ros_camera_graph_path + "/PublishTF_" + camera_frame_id,
        inputName="inputs:targetPrims",
        targetPrimPaths=[camera_prim],
    )
    return
Running the Example#
Enable the isaacsim.ros2.bridge extension and set up the ROS 2 environment variables following this workflow tutorial. Save the above script and run it with python.sh from the Isaac Sim folder. In this example, {camera_frame_id} is the prim name of the camera, which is floating_camera.
Verify that you observe a floating camera with prim path /World/floating_camera in the scene, and verify that the camera sees a forklift:
If you open a terminal and type ros2 topic list, verify that you observe the following:
ros2 topic list
/camera_camera_info
/camera_depth
/camera_pointcloud
/camera_rgb
/clock
/parameter_events
/rosout
/tf
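To confirm that the publishers are running at the expected rate, you can inspect the topics from a second terminal while the simulation is playing. These are standard ROS 2 CLI commands; the topic names assume the default camera name camera used in this example.

```shell
# Measure the actual publish rate of the RGB topic; with approx_freq = 30 and a
# renderer ticking at 60 FPS, this should report close to 30 Hz.
ros2 topic hz /camera_rgb

# Print the published intrinsics to confirm the CameraInfo writer is configured.
ros2 topic echo /camera_camera_info
```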
The frames published by TF will look like the following:
Now, you can visualize the pointcloud and depth images using RViz2. Open RViz2, and set the Fixed Frame field to world.
Then, enable viewing /camera_depth, /camera_rgb, /camera_pointcloud, and /tf topics.
Verify that the depth image /camera_depth and RGB image /camera_rgb look like this:
The pointcloud will look like the following. Verify that the camera frames published by the TF publisher show the two frames. The image on the left shows the {camera_frame_id}_world frame, and the image on the right shows the {camera_frame_id} frame.
From the side view:
Summary#
This tutorial demonstrated how to programmatically set up ROS 2 publishers for Isaac Sim Cameras at an approximate frequency.
Next Steps#
Continue on to the next tutorial in our ROS 2 Tutorials series, ROS 2 Compressed Images, to learn how to publish H.264 compressed camera images from Isaac Sim.