Ray Caster
The Ray Caster sensor (and the ray caster camera) is similar to RTX-based rendering in that both involve casting rays. The difference is that the rays cast by the sensor return strictly collision information along the cast, and the direction of each ray can be specified. They do not bounce, and they are not affected by things like materials or opacity. For each ray specified by the sensor, a line is traced along the path of the ray, and the location of the first collision with the specified mesh is returned. This is the method used to measure the local height field in some of our quadruped examples.
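To build intuition for what a single cast returns, here is a minimal, self-contained sketch of one ray intersecting a flat ground plane at z = 0. This is plain NumPy, not the actual Warp implementation; the `cast_ray` name and `max_dist` parameter are illustrative only:

```python
import numpy as np

# Conceptual sketch of one cast: trace from an origin along a direction and
# return the first hit on a ground plane at z = 0. Misses are reported as inf,
# mirroring the sensor's convention for rays that never strike a tracked mesh.
def cast_ray(origin, direction, max_dist=100.0):
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    if direction[2] >= 0.0:  # pointing away from (or parallel to) the plane
        return np.full(3, np.inf)
    t = -origin[2] / direction[2]  # distance along the ray to z = 0
    if t > max_dist:
        return np.full(3, np.inf)
    return origin + t * direction

print(cast_ray([0.0, 0.0, 0.5], [0.0, 0.0, -1.0]))  # hits the ground directly below
```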
To keep the sensor performant when there are many cloned environments, the line tracing is done directly in Warp. This is why a specific set of meshes needs to be identified for collision: when the sensor is initialized, Warp loads that mesh data onto the device. Consequently, the current version of the sensor only works with literally static meshes (i.e., those that do not change from the defaults specified in their USD file). This restriction will be removed in future releases.
Using a ray caster sensor requires a pattern and a parent xform to attach to. The pattern defines how the rays are cast, while the prim properties of the parent define the orientation and position of the sensor (additional offsets can be specified for more exact placement). Isaac Lab supports a number of ray casting pattern configurations, including a generic LIDAR pattern and a grid pattern.
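As a sketch of the grid alternative, a height scanner like the one used in the quadruped examples would attach a GridPatternCfg instead of a lidar pattern. The parameter values below are illustrative, not prescriptive:

```python
# Hypothetical height-scanner configuration using the grid pattern.
# resolution is the spacing between rays (meters); size is the (x, y)
# extent of the grid; attach_yaw_only keeps the grid level as the base tilts.
height_scanner = RayCasterCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",
    offset=RayCasterCfg.OffsetCfg(pos=(0.0, 0.0, 20.0)),
    attach_yaw_only=True,
    pattern_cfg=patterns.GridPatternCfg(resolution=0.1, size=(1.6, 1.0)),
    mesh_prim_paths=["/World/Ground"],
    debug_vis=True,
)
```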
@configclass
class RaycasterSensorSceneCfg(InteractiveSceneCfg):
    """Design the scene with sensors on the robot."""

    # ground plane with texture for interesting casting results
    ground = AssetBaseCfg(
        prim_path="/World/Ground",
        spawn=sim_utils.UsdFileCfg(
            usd_path=f"{ISAAC_NUCLEUS_DIR}/Environments/Terrains/rough_plane.usd",
            scale=(1, 1, 1),
        ),
    )

    # lights
    dome_light = AssetBaseCfg(
        prim_path="/World/Light", spawn=sim_utils.DomeLightCfg(intensity=3000.0, color=(0.75, 0.75, 0.75))
    )

    # robot
    robot = ANYMAL_C_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")

    ray_caster = RayCasterCfg(
        prim_path="{ENV_REGEX_NS}/Robot/base/lidar_cage",
        update_period=1 / 60,
        offset=RayCasterCfg.OffsetCfg(pos=(0, 0, 0.5)),
        mesh_prim_paths=["/World/Ground"],
        pattern_cfg=patterns.LidarPatternCfg(
            channels=100,
            vertical_fov_range=[-90, 90],
            horizontal_fov_range=[-90, 90],
            horizontal_res=1.0,
        ),
        debug_vis=True,
    )
Notice that the units on the pattern configuration are in degrees! Also, we enable visualization here to explicitly show the pattern in the rendering, but this is not required and should be disabled for performance tuning.
Querying the sensor for data can be done at simulation run time like any other sensor.
def run_simulator(sim: sim_utils.SimulationContext, scene: InteractiveScene):
    .
    .
    .
    # Simulate physics
    while simulation_app.is_running():
        .
        .
        .
        # print information from the sensors
        print("-------------------------------")
        print(scene["ray_caster"])
        print("Ray cast hit results: ", scene["ray_caster"].data.ray_hits_w)
-------------------------------
Ray-caster @ '/World/envs/env_.*/Robot/base/lidar_cage':
view type : <class 'omni.isaac.core.prims.xform_prim_view.XFormPrimView'>
update period (s) : 0.016666666666666666
number of meshes : 1
number of sensors : 1
number of rays/sensor: 18000
total number of rays : 18000
Ray cast hit results: tensor([[[-0.3698, 0.0357, 0.0000],
[-0.3698, 0.0357, 0.0000],
[-0.3698, 0.0357, 0.0000],
...,
[ inf, inf, inf],
[ inf, inf, inf],
[ inf, inf, inf]]], device='cuda:0')
-------------------------------
Here we can see the data returned by the sensor itself. Notice first that there are 3 closed brackets at the beginning and at the end: this is because the data returned is batched by the number of sensors. The ray cast pattern itself has also been flattened, and so the dimensions of the array are [N, B, 3], where N is the number of sensors, B is the number of rays cast in the pattern, and 3 is the dimension of the casting space. Finally, notice that the first several values in this casting pattern are identical: this is because the lidar pattern is spherical and we have specified our FOV to be hemispherical, which includes the poles. In this configuration, the "flattening" of the pattern becomes apparent: the first 180 entries will be the same because it is the bottom pole of this hemisphere, and there will be 180 of them because our horizontal FOV is 180 degrees with a resolution of 1 degree.
You can use this script to experiment with different pattern configurations, and to build an intuition for how the data is stored, by modifying the triggered variable on line 80.
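Once the script has written cast_data.npy, the saved hits can be reloaded and the misses filtered out before any analysis. A sketch (the file written here is a small stand-in for a real capture):

```python
import numpy as np

# Stand-in for a capture saved by the script: one sensor, one hit, one miss.
data = np.array([[[0.0, 0.0, 0.0], [np.inf, np.inf, np.inf]]], dtype=np.float32)
np.save("cast_data.npy", data)

hits = np.load("cast_data.npy")
# Keep only rays that actually struck a mesh (all three coordinates finite).
valid = hits[np.isfinite(hits).all(axis=-1)]
print(valid.shape)  # (1, 3)
```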
Code for raycaster_sensor.py
1# Copyright (c) 2022-2024, The Isaac Lab Project Developers.
2# All rights reserved.
3#
4# SPDX-License-Identifier: BSD-3-Clause
5
6import argparse
7import numpy as np
8
9from omni.isaac.lab.app import AppLauncher
10
11# add argparse arguments
12parser = argparse.ArgumentParser(description="Tutorial on adding sensors on a robot.")
13parser.add_argument("--num_envs", type=int, default=1, help="Number of environments to spawn.")
14# append AppLauncher cli args
15AppLauncher.add_app_launcher_args(parser)
16# parse the arguments
17args_cli = parser.parse_args()
18
19# launch omniverse app
20app_launcher = AppLauncher(args_cli)
21simulation_app = app_launcher.app
22
23"""Rest everything follows."""
24
25import torch
26
27import omni.isaac.lab.sim as sim_utils
28from omni.isaac.lab.assets import AssetBaseCfg
29from omni.isaac.lab.scene import InteractiveScene, InteractiveSceneCfg
30from omni.isaac.lab.sensors.ray_caster import RayCasterCfg, patterns
31from omni.isaac.lab.utils import configclass
32from omni.isaac.lab.utils.assets import ISAAC_NUCLEUS_DIR
33
34##
35# Pre-defined configs
36##
37from omni.isaac.lab_assets.anymal import ANYMAL_C_CFG # isort: skip
38
39
40@configclass
41class RaycasterSensorSceneCfg(InteractiveSceneCfg):
42 """Design the scene with sensors on the robot."""
43
44 # ground plane
45 ground = AssetBaseCfg(
46 prim_path="/World/Ground",
47 spawn=sim_utils.UsdFileCfg(
48 usd_path=f"{ISAAC_NUCLEUS_DIR}/Environments/Terrains/rough_plane.usd",
49 scale=(1, 1, 1),
50 ),
51 )
52
53 # lights
54 dome_light = AssetBaseCfg(
55 prim_path="/World/Light", spawn=sim_utils.DomeLightCfg(intensity=3000.0, color=(0.75, 0.75, 0.75))
56 )
57
58 # robot
59 robot = ANYMAL_C_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
60
61 ray_caster = RayCasterCfg(
62 prim_path="{ENV_REGEX_NS}/Robot/base/lidar_cage",
63 update_period=1 / 60,
64 offset=RayCasterCfg.OffsetCfg(pos=(0, 0, 0.5)),
65 mesh_prim_paths=["/World/Ground"],
66 pattern_cfg=patterns.LidarPatternCfg(
67 channels=100, vertical_fov_range=[-90, 90], horizontal_fov_range=[-90, 90], horizontal_res=1.0
68 ),
69 debug_vis=not args_cli.headless,
70 )
71
72
73def run_simulator(sim: sim_utils.SimulationContext, scene: InteractiveScene):
74 """Run the simulator."""
75 # Define simulation stepping
76 sim_dt = sim.get_physics_dt()
77 sim_time = 0.0
78 count = 0
79
80 triggered = True
81 countdown = 42
82
83 # Simulate physics
84 while simulation_app.is_running():
85
86 if count % 500 == 0:
87 # reset counter
88 count = 0
89 # reset the scene entities
90 # root state
91 # we offset the root state by the origin since the states are written in simulation world frame
92 # if this is not done, then the robots will be spawned at the (0, 0, 0) of the simulation world
93 root_state = scene["robot"].data.default_root_state.clone()
94 root_state[:, :3] += scene.env_origins
95 scene["robot"].write_root_state_to_sim(root_state)
96 # set joint positions with some noise
97 joint_pos, joint_vel = (
98 scene["robot"].data.default_joint_pos.clone(),
99 scene["robot"].data.default_joint_vel.clone(),
100 )
101 joint_pos += torch.rand_like(joint_pos) * 0.1
102 scene["robot"].write_joint_state_to_sim(joint_pos, joint_vel)
103 # clear internal buffers
104 scene.reset()
105 print("[INFO]: Resetting robot state...")
106 # Apply default actions to the robot
107 # -- generate actions/commands
108 targets = scene["robot"].data.default_joint_pos
109 # -- apply action to the robot
110 scene["robot"].set_joint_position_target(targets)
111 # -- write data to sim
112 scene.write_data_to_sim()
113 # perform step
114 sim.step()
115 # update sim-time
116 sim_time += sim_dt
117 count += 1
118 # update buffers
119 scene.update(sim_dt)
120
121 # print information from the sensors
122 print("-------------------------------")
123 print(scene["ray_caster"])
124 print("Ray cast hit results: ", scene["ray_caster"].data.ray_hits_w)
125
126 if not triggered:
127 if countdown > 0:
128 countdown -= 1
129 continue
130 data = scene["ray_caster"].data.ray_hits_w.cpu().numpy()
131 np.save("cast_data.npy", data)
132 triggered = True
133 else:
134 continue
135
136
137def main():
138 """Main function."""
139
140 # Initialize the simulation context
141 sim_cfg = sim_utils.SimulationCfg(dt=0.005, device=args_cli.device)
142 sim = sim_utils.SimulationContext(sim_cfg)
143 # Set main camera
144 sim.set_camera_view(eye=[3.5, 3.5, 3.5], target=[0.0, 0.0, 0.0])
145 # design scene
146 scene_cfg = RaycasterSensorSceneCfg(num_envs=args_cli.num_envs, env_spacing=2.0)
147 scene = InteractiveScene(scene_cfg)
148 # Play the simulator
149 sim.reset()
150 # Now we are ready!
151 print("[INFO]: Setup complete...")
152 # Run the simulator
153 run_simulator(sim, scene)
154
155
156if __name__ == "__main__":
157 # run the main function
158 main()
159 # close sim app
160 simulation_app.close()