Using an operational space controller#
Sometimes, controlling the end-effector pose of the robot with a differential IK controller is not sufficient. For example, we may want to enforce a very specific pose tracking error dynamics in the task space, actuate the robot with joint effort/torque commands, or apply a contact force in a particular direction while controlling the motion in the remaining directions (e.g., wiping a table surface with a cloth). In such tasks, we can use an operational space controller (OSC).
References for operational space control:
O Khatib. A unified approach for motion and force control of robot manipulators: The operational space formulation. IEEE Journal of Robotics and Automation, 3(1):43–53, 1987. URL http://dx.doi.org/10.1109/JRA.1987.1087068.
Robot Dynamics Lecture Notes by Marco Hutter (ETH Zurich). URL https://ethz.ch/content/dam/ethz/special-interest/mavt/robotics-n-intelligent-systems/rsl-dam/documents/RobotDynamics2017/RD_HS2017script.pdf
In this tutorial, we will learn how to use an OSC to control the robot. We will use the controllers.OperationalSpaceController
class to apply a constant force perpendicular to a tilted wall surface while tracking the desired end-effector pose in all the other directions.
The Code#
The tutorial corresponds to the run_osc.py script in the source/standalone/tutorials/05_controllers directory.
Code for run_osc.py
1# Copyright (c) 2022-2024, The Isaac Lab Project Developers.
2# All rights reserved.
3#
4# SPDX-License-Identifier: BSD-3-Clause
5
6"""
7This script demonstrates how to use the operational space controller (OSC) with the simulator.
8
9The OSC controller can be configured in different modes. It uses the dynamical quantities such as Jacobians and
10mass matrices computed by PhysX.
11
12.. code-block:: bash
13
14 # Usage
15 ./isaaclab.sh -p source/standalone/tutorials/05_controllers/run_osc.py
16
17"""
18
19"""Launch Isaac Sim Simulator first."""
20
21import argparse
22
23from omni.isaac.lab.app import AppLauncher
24
25# add argparse arguments
26parser = argparse.ArgumentParser(description="Tutorial on using the operational space controller.")
27parser.add_argument("--num_envs", type=int, default=128, help="Number of environments to spawn.")
28# append AppLauncher cli args
29AppLauncher.add_app_launcher_args(parser)
30# parse the arguments
31args_cli = parser.parse_args()
32
33# launch omniverse app
34app_launcher = AppLauncher(args_cli)
35simulation_app = app_launcher.app
36
37"""Rest everything follows."""
38
39import torch
40
41import omni.isaac.lab.sim as sim_utils
42from omni.isaac.lab.assets import Articulation, AssetBaseCfg
43from omni.isaac.lab.controllers import OperationalSpaceController, OperationalSpaceControllerCfg
44from omni.isaac.lab.markers import VisualizationMarkers
45from omni.isaac.lab.markers.config import FRAME_MARKER_CFG
46from omni.isaac.lab.scene import InteractiveScene, InteractiveSceneCfg
47from omni.isaac.lab.sensors import ContactSensorCfg
48from omni.isaac.lab.utils import configclass
49from omni.isaac.lab.utils.math import (
50 combine_frame_transforms,
51 matrix_from_quat,
52 quat_inv,
53 quat_rotate_inverse,
54 subtract_frame_transforms,
55)
56
57##
58# Pre-defined configs
59##
60from omni.isaac.lab_assets import FRANKA_PANDA_HIGH_PD_CFG # isort:skip
61
62
63@configclass
64class SceneCfg(InteractiveSceneCfg):
65 """Configuration for a simple scene with a tilted wall."""
66
67 # ground plane
68 ground = AssetBaseCfg(
69 prim_path="/World/defaultGroundPlane",
70 spawn=sim_utils.GroundPlaneCfg(),
71 )
72
73 # lights
74 dome_light = AssetBaseCfg(
75 prim_path="/World/Light", spawn=sim_utils.DomeLightCfg(intensity=3000.0, color=(0.75, 0.75, 0.75))
76 )
77
78 # Tilted wall
79 tilted_wall = AssetBaseCfg(
80 prim_path="{ENV_REGEX_NS}/TiltedWall",
81 spawn=sim_utils.CuboidCfg(
82 size=(2.0, 1.5, 0.01),
83 collision_props=sim_utils.CollisionPropertiesCfg(),
84 visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(1.0, 0.0, 0.0), opacity=0.1),
85 rigid_props=sim_utils.RigidBodyPropertiesCfg(kinematic_enabled=True),
86 activate_contact_sensors=True,
87 ),
88 init_state=AssetBaseCfg.InitialStateCfg(
89 pos=(0.6 + 0.085, 0.0, 0.3), rot=(0.9238795325, 0.0, -0.3826834324, 0.0)
90 ),
91 )
92
93 contact_forces = ContactSensorCfg(
94 prim_path="/World/envs/env_.*/TiltedWall",
95 update_period=0.0,
96 history_length=2,
97 debug_vis=False,
98 )
99
100 robot = FRANKA_PANDA_HIGH_PD_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
101 robot.actuators["panda_shoulder"].stiffness = 0.0
102 robot.actuators["panda_shoulder"].damping = 0.0
103 robot.actuators["panda_forearm"].stiffness = 0.0
104 robot.actuators["panda_forearm"].damping = 0.0
105 robot.spawn.rigid_props.disable_gravity = True
106
107
108def run_simulator(sim: sim_utils.SimulationContext, scene: InteractiveScene):
109 """Runs the simulation loop.
110
111 Args:
112 sim: (SimulationContext) Simulation context.
113 scene: (InteractiveScene) Interactive scene.
114 """
115
116 # Extract scene entities for readability.
117 robot = scene["robot"]
118 contact_forces = scene["contact_forces"]
119
120 # Obtain indices for the end-effector and arm joints
121 ee_frame_name = "panda_leftfinger"
122 arm_joint_names = ["panda_joint.*"]
123 ee_frame_idx = robot.find_bodies(ee_frame_name)[0][0]
124 arm_joint_ids = robot.find_joints(arm_joint_names)[0]
125
126 # Create the OSC
127 osc_cfg = OperationalSpaceControllerCfg(
128 target_types=["pose_abs", "wrench_abs"],
129 impedance_mode="variable_kp",
130 inertial_dynamics_decoupling=True,
131 partial_inertial_dynamics_decoupling=False,
132 gravity_compensation=False,
133 motion_damping_ratio_task=1.0,
134 contact_wrench_stiffness_task=[0.0, 0.0, 0.1, 0.0, 0.0, 0.0],
135 motion_control_axes_task=[1, 1, 0, 1, 1, 1],
136 contact_wrench_control_axes_task=[0, 0, 1, 0, 0, 0],
137 )
138 osc = OperationalSpaceController(osc_cfg, num_envs=scene.num_envs, device=sim.device)
139
140 # Markers
141 frame_marker_cfg = FRAME_MARKER_CFG.copy()
142 frame_marker_cfg.markers["frame"].scale = (0.1, 0.1, 0.1)
143 ee_marker = VisualizationMarkers(frame_marker_cfg.replace(prim_path="/Visuals/ee_current"))
144 goal_marker = VisualizationMarkers(frame_marker_cfg.replace(prim_path="/Visuals/ee_goal"))
145
146 # Define targets for the arm
147 ee_goal_pose_set_tilted_b = torch.tensor(
148 [
149 [0.6, 0.15, 0.3, 0.0, 0.92387953, 0.0, 0.38268343],
150 [0.6, -0.3, 0.3, 0.0, 0.92387953, 0.0, 0.38268343],
151 [0.8, 0.0, 0.5, 0.0, 0.92387953, 0.0, 0.38268343],
152 ],
153 device=sim.device,
154 )
155 ee_goal_wrench_set_tilted_task = torch.tensor(
156 [
157 [0.0, 0.0, 10.0, 0.0, 0.0, 0.0],
158 [0.0, 0.0, 10.0, 0.0, 0.0, 0.0],
159 [0.0, 0.0, 10.0, 0.0, 0.0, 0.0],
160 ],
161 device=sim.device,
162 )
163 kp_set_task = torch.tensor(
164 [
165 [360.0, 360.0, 360.0, 360.0, 360.0, 360.0],
166 [420.0, 420.0, 420.0, 420.0, 420.0, 420.0],
167 [320.0, 320.0, 320.0, 320.0, 320.0, 320.0],
168 ],
169 device=sim.device,
170 )
171 ee_target_set = torch.cat([ee_goal_pose_set_tilted_b, ee_goal_wrench_set_tilted_task, kp_set_task], dim=-1)
172
173 # Define simulation stepping
174 sim_dt = sim.get_physics_dt()
175
176 # Update existing buffers
177 # Note: We need to update buffers before the first step for the controller.
178 robot.update(dt=sim_dt)
179
180 # get the updated states
181 jacobian_b, mass_matrix, gravity, ee_pose_b, ee_vel_b, root_pose_w, ee_pose_w, ee_force_b = update_states(
182 sim, scene, robot, ee_frame_idx, arm_joint_ids, contact_forces
183 )
184
185 # Track the given target command
186 current_goal_idx = 0 # Current goal index for the arm
187 command = torch.zeros(
188 scene.num_envs, osc.action_dim, device=sim.device
189 ) # Generic target command, which can be pose, position, force, etc.
190 ee_target_pose_b = torch.zeros(scene.num_envs, 7, device=sim.device) # Target pose in the body frame
191 ee_target_pose_w = torch.zeros(scene.num_envs, 7, device=sim.device) # Target pose in the world frame (for marker)
192
193 # Set joint efforts to zero
194 zero_joint_efforts = torch.zeros(scene.num_envs, robot.num_joints, device=sim.device)
195 joint_efforts = torch.zeros(scene.num_envs, len(arm_joint_ids), device=sim.device)
196
197 count = 0
198 # Simulation loop
199 while simulation_app.is_running():
200 # reset every 500 steps
201 if count % 500 == 0:
202 # reset joint state to default
203 default_joint_pos = robot.data.default_joint_pos.clone()
204 default_joint_vel = robot.data.default_joint_vel.clone()
205 robot.write_joint_state_to_sim(default_joint_pos, default_joint_vel)
206 robot.set_joint_effort_target(zero_joint_efforts) # Set zero torques in the initial step
207 robot.write_data_to_sim()
208 robot.reset()
209 # reset contact sensor
210 contact_forces.reset()
211 # reset target pose
212 robot.update(sim_dt)
213 _, _, _, ee_pose_b, _, _, _, _ = update_states(
214 sim, scene, robot, ee_frame_idx, arm_joint_ids, contact_forces
215 ) # at reset, the jacobians are not updated to the latest state
216 command, ee_target_pose_b, ee_target_pose_w, current_goal_idx = update_target(
217 sim, scene, osc, root_pose_w, ee_target_set, current_goal_idx
218 )
219 # set the osc command
220 osc.reset()
221 command, task_frame_pose_b = convert_to_task_frame(osc, command=command, ee_target_pose_b=ee_target_pose_b)
222 osc.set_command(command=command, current_ee_pose_b=ee_pose_b, current_task_frame_pose_b=task_frame_pose_b)
223 else:
224 # get the updated states
225 jacobian_b, mass_matrix, gravity, ee_pose_b, ee_vel_b, root_pose_w, ee_pose_w, ee_force_b = update_states(
226 sim, scene, robot, ee_frame_idx, arm_joint_ids, contact_forces
227 )
228 # compute the joint commands
229 joint_efforts = osc.compute(
230 jacobian_b=jacobian_b,
231 current_ee_pose_b=ee_pose_b,
232 current_ee_vel_b=ee_vel_b,
233 current_ee_force_b=ee_force_b,
234 mass_matrix=mass_matrix,
235 gravity=gravity,
236 )
237 # apply actions
238 robot.set_joint_effort_target(joint_efforts, joint_ids=arm_joint_ids)
239 robot.write_data_to_sim()
240
241 # update marker positions
242 ee_marker.visualize(ee_pose_w[:, 0:3], ee_pose_w[:, 3:7])
243 goal_marker.visualize(ee_target_pose_w[:, 0:3], ee_target_pose_w[:, 3:7])
244
245 # perform step
246 sim.step(render=True)
247 # update robot buffers
248 robot.update(sim_dt)
249 # update buffers
250 scene.update(sim_dt)
251 # update sim-time
252 count += 1
253
254
255# Update robot states
256def update_states(
257 sim: sim_utils.SimulationContext,
258 scene: InteractiveScene,
259 robot: Articulation,
260 ee_frame_idx: int,
261 arm_joint_ids: list[int],
262 contact_forces,
263):
264 """Update the robot states.
265
266 Args:
267 sim: (SimulationContext) Simulation context.
268 scene: (InteractiveScene) Interactive scene.
269 robot: (Articulation) Robot articulation.
270 ee_frame_idx: (int) End-effector frame index.
271 arm_joint_ids: (list[int]) Arm joint indices.
272 contact_forces: (ContactSensor) Contact sensor.
273
274 Returns:
275 jacobian_b (torch.tensor): Jacobian in the body frame.
276 mass_matrix (torch.tensor): Mass matrix.
277 gravity (torch.tensor): Gravity vector.
278 ee_pose_b (torch.tensor): End-effector pose in the body frame.
279 ee_vel_b (torch.tensor): End-effector velocity in the body frame.
280 root_pose_w (torch.tensor): Root pose in the world frame.
281 ee_pose_w (torch.tensor): End-effector pose in the world frame.
282 ee_force_b (torch.tensor): End-effector force in the body frame.
283
284 Raises:
285 ValueError: Undefined target_type.
286 """
287 # obtain dynamics related quantities from simulation
288 ee_jacobi_idx = ee_frame_idx - 1
289 jacobian_w = robot.root_physx_view.get_jacobians()[:, ee_jacobi_idx, :, arm_joint_ids]
290 mass_matrix = robot.root_physx_view.get_mass_matrices()[:, arm_joint_ids, :][:, :, arm_joint_ids]
291 gravity = robot.root_physx_view.get_generalized_gravity_forces()[:, arm_joint_ids]
292 # Convert the Jacobian from world to root frame
293 jacobian_b = jacobian_w.clone()
294 root_rot_matrix = matrix_from_quat(quat_inv(robot.data.root_state_w[:, 3:7]))
295 jacobian_b[:, :3, :] = torch.bmm(root_rot_matrix, jacobian_b[:, :3, :])
296 jacobian_b[:, 3:, :] = torch.bmm(root_rot_matrix, jacobian_b[:, 3:, :])
297
298 # Compute current pose of the end-effector
299 root_pose_w = robot.data.root_state_w[:, 0:7]
300 ee_pose_w = robot.data.body_state_w[:, ee_frame_idx, 0:7]
301 ee_pos_b, ee_quat_b = subtract_frame_transforms(
302 root_pose_w[:, 0:3], root_pose_w[:, 3:7], ee_pose_w[:, 0:3], ee_pose_w[:, 3:7]
303 )
304 ee_pose_b = torch.cat([ee_pos_b, ee_quat_b], dim=-1)
305
306 # Compute the current velocity of the end-effector
307 ee_vel_w = robot.data.body_vel_w[:, ee_frame_idx, :] # Extract end-effector velocity in the world frame
308 root_vel_w = robot.data.root_vel_w # Extract root velocity in the world frame
309 relative_vel_w = ee_vel_w - root_vel_w # Compute the relative velocity in the world frame
310 ee_lin_vel_b = quat_rotate_inverse(robot.data.root_quat_w, relative_vel_w[:, 0:3]) # From world to root frame
311 ee_ang_vel_b = quat_rotate_inverse(robot.data.root_quat_w, relative_vel_w[:, 3:6])
312 ee_vel_b = torch.cat([ee_lin_vel_b, ee_ang_vel_b], dim=-1)
313
314 # Calculate the contact force
315 ee_force_w = torch.zeros(scene.num_envs, 3, device=sim.device)
316 sim_dt = sim.get_physics_dt()
317 contact_forces.update(sim_dt) # update contact sensor
318 # Calculate the contact force by averaging over last four time steps (i.e., to smoothen) and
319 # taking the max of three surfaces as only one should be the contact of interest
320 ee_force_w, _ = torch.max(torch.mean(contact_forces.data.net_forces_w_history, dim=1), dim=1)
321
322 # This is a simplification, only for the sake of testing.
323 ee_force_b = ee_force_w
324
325 return jacobian_b, mass_matrix, gravity, ee_pose_b, ee_vel_b, root_pose_w, ee_pose_w, ee_force_b
326
327
328# Update the target commands
329def update_target(
330 sim: sim_utils.SimulationContext,
331 scene: InteractiveScene,
332 osc: OperationalSpaceController,
333 root_pose_w: torch.tensor,
334 ee_target_set: torch.tensor,
335 current_goal_idx: int,
336):
337 """Update the targets for the operational space controller.
338
339 Args:
340 sim: (SimulationContext) Simulation context.
341 scene: (InteractiveScene) Interactive scene.
342 osc: (OperationalSpaceController) Operational space controller.
343 root_pose_w: (torch.tensor) Root pose in the world frame.
344 ee_target_set: (torch.tensor) End-effector target set.
345 current_goal_idx: (int) Current goal index.
346
347 Returns:
348 command (torch.tensor): Updated target command.
349 ee_target_pose_b (torch.tensor): Updated target pose in the body frame.
350 ee_target_pose_w (torch.tensor): Updated target pose in the world frame.
351 next_goal_idx (int): Next goal index.
352
353 Raises:
354 ValueError: Undefined target_type.
355 """
356
357 # update the ee desired command
358 command = torch.zeros(scene.num_envs, osc.action_dim, device=sim.device)
359 command[:] = ee_target_set[current_goal_idx]
360
361 # update the ee desired pose
362 ee_target_pose_b = torch.zeros(scene.num_envs, 7, device=sim.device)
363 for target_type in osc.cfg.target_types:
364 if target_type == "pose_abs":
365 ee_target_pose_b[:] = command[:, :7]
366 elif target_type == "wrench_abs":
367 pass # ee_target_pose_b could stay at the root frame for force control, what matters is ee_target_b
368 else:
369 raise ValueError("Undefined target_type within update_target().")
370
371 # update the target desired pose in world frame (for marker)
372 ee_target_pos_w, ee_target_quat_w = combine_frame_transforms(
373 root_pose_w[:, 0:3], root_pose_w[:, 3:7], ee_target_pose_b[:, 0:3], ee_target_pose_b[:, 3:7]
374 )
375 ee_target_pose_w = torch.cat([ee_target_pos_w, ee_target_quat_w], dim=-1)
376
377 next_goal_idx = (current_goal_idx + 1) % len(ee_target_set)
378
379 return command, ee_target_pose_b, ee_target_pose_w, next_goal_idx
380
381
382# Convert the target commands to the task frame
383def convert_to_task_frame(osc: OperationalSpaceController, command: torch.tensor, ee_target_pose_b: torch.tensor):
384 """Converts the target commands to the task frame.
385
386 Args:
387 osc: OperationalSpaceController object.
388 command: Command to be converted.
389 ee_target_pose_b: Target pose in the body frame.
390
391 Returns:
392 command (torch.tensor): Target command in the task frame.
393 task_frame_pose_b (torch.tensor): Target pose in the task frame.
394
395 Raises:
396 ValueError: Undefined target_type.
397 """
398 command = command.clone()
399 task_frame_pose_b = ee_target_pose_b.clone()
400
401 cmd_idx = 0
402 for target_type in osc.cfg.target_types:
403 if target_type == "pose_abs":
404 command[:, :3], command[:, 3:7] = subtract_frame_transforms(
405 task_frame_pose_b[:, :3], task_frame_pose_b[:, 3:], command[:, :3], command[:, 3:7]
406 )
407 cmd_idx += 7
408 elif target_type == "wrench_abs":
409 # These are already defined in target frame for ee_goal_wrench_set_tilted_task (since it is
410 # easier), so not transforming
411 cmd_idx += 6
412 else:
413 raise ValueError("Undefined target_type within _convert_to_task_frame().")
414
415 return command, task_frame_pose_b
416
417
418def main():
419 """Main function."""
420 # Load kit helper
421 sim_cfg = sim_utils.SimulationCfg(dt=0.01, device=args_cli.device)
422 sim = sim_utils.SimulationContext(sim_cfg)
423 # Set main camera
424 sim.set_camera_view([2.5, 2.5, 2.5], [0.0, 0.0, 0.0])
425 # Design scene
426 scene_cfg = SceneCfg(num_envs=args_cli.num_envs, env_spacing=2.0)
427 scene = InteractiveScene(scene_cfg)
428 # Play the simulator
429 sim.reset()
430 # Now we are ready!
431 print("[INFO]: Setup complete...")
432 # Run the simulator
433 run_simulator(sim, scene)
434
435
436if __name__ == "__main__":
437 # run the main function
438 main()
439 # close sim app
440 simulation_app.close()
Creating an operational space controller#
The OperationalSpaceController
class computes the joint efforts/torques for the robot to perform simultaneous motion and force control in task space.
The reference frame of this task space, in general, can be an arbitrary coordinate frame in Euclidean space. By default, it is the robot's base frame. However, in certain cases, it can be easier to define the target coordinates w.r.t. a different frame. In such cases, the pose of this task reference frame, w.r.t. the robot base frame, should be provided in the current_task_frame_pose_b
argument of the set_command
method. For example, in this tutorial, it makes sense to define the target commands w.r.t. a frame that is parallel to the wall surface, as the force control direction is then nonzero only along the z-axis of that frame. The target pose, which is set to have the same orientation as the wall surface, is such a candidate and is used as the task frame in this tutorial. Therefore, all the arguments to OperationalSpaceControllerCfg
should be set with this task reference frame in mind.
For motion control, the task space targets can be given as absolute (i.e., defined w.r.t. the robot base frame, target_types: "pose_abs"
) or relative to the end-effector's current pose (i.e., target_types: "pose_rel"
). For force control, the task space targets can be given as absolute (i.e., defined w.r.t. the robot base frame, target_types: "wrench_abs"
). If it is desired to apply pose and force control simultaneously, target_types
should be a list such as ["pose_abs", "wrench_abs"]
or ["pose_rel", "wrench_abs"]
.
The axes on which motion and force control are applied can be specified using the motion_control_axes_task
and contact_wrench_control_axes_task
parameters, respectively. These lists should consist of 0/1 values for all six axes (position and rotation) and be complementary to each other (e.g., for the x-axis, if motion_control_axes_task
is 0
, then contact_wrench_control_axes_task
should be 1
).
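As an illustrative sketch (hypothetical axis choice, not the configuration used in this tutorial), applying the force along the task-frame x-axis instead of the z-axis would simply swap the corresponding entries of the two complementary lists:
from omni.isaac.lab.controllers import OperationalSpaceControllerCfg

# Hypothetical variation: force control along the task-frame x-axis and motion
# control on the remaining five axes. The two lists must stay complementary
# element by element.
example_cfg = OperationalSpaceControllerCfg(
    target_types=["pose_abs", "wrench_abs"],
    motion_control_axes_task=[0, 1, 1, 1, 1, 1],
    contact_wrench_control_axes_task=[1, 0, 0, 0, 0, 0],
)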
For the motion control axes, the desired stiffness and damping ratio values can be specified using the motion_control_stiffness
and motion_damping_ratio_task
parameters, which can be a scalar (the same value for all axes) or a list of six scalars, one per axis. If needed, the stiffness and damping ratio values can instead be part of the command (e.g., to learn them with RL or to change them at runtime). For this, impedance_mode
should be set to "variable_kp"
to include the stiffness values in the command, or to "variable"
to include both the stiffness and damping ratio values. In these cases, motion_stiffness_limits_task
and motion_damping_limits_task
should also be set, which bound the stiffness and damping ratio values.
For contact force control, open-loop force control can be applied by not setting contact_wrench_stiffness_task
, or closed-loop force control (with a feed-forward term) can be applied by setting the desired stiffness values using the contact_wrench_stiffness_task
parameter, which can be a scalar or a list of six scalars. Note that currently only the linear part of the contact wrench (hence the first three elements of contact_wrench_stiffness_task
) is considered in the closed-loop control, as the rotational part cannot be measured with the contact sensors.
For motion control, inertial_dynamics_decoupling
should be set to True
to use the robot's inertia matrix to decouple the desired accelerations in task space. This is important for the accuracy of the motion control, especially for fast motions. This inertial decoupling accounts for the coupling between all six motion axes. If desired, the inertial coupling between the translational and rotational axes can be ignored by setting partial_inertial_dynamics_decoupling
to True
.
If it is desired to include gravity compensation in the operational space command, gravity_compensation
should be set to True
.
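Conceptually, following the Khatib reference listed above (this is only a sketch of the standard operational space formulation, not a statement of the exact internals of the implementation), full inertial dynamics decoupling maps the desired task-space acceleration into joint torques through the task-space inertia matrix:

\Lambda = \left( J M^{-1} J^{\top} \right)^{-1}, \qquad \tau = J^{\top} \Lambda \, a_{\mathrm{des}} + g(q)

where J is the end-effector Jacobian, M the joint-space mass matrix, a_des the desired task-space acceleration, and g(q) the generalized gravity forces (the last term is added only when gravity compensation is enabled). Partial decoupling ignores the blocks of \Lambda that couple the translational and rotational axes.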
The included OSC implementation performs the computations in a batched format and uses PyTorch operations.
In this tutorial, we use "pose_abs"
for controlling the motion of all axes except the z-axis and "wrench_abs"
for controlling the force along the z-axis. Moreover, we include the full inertia decoupling in the motion control and do not include gravity compensation, as gravity is disabled in the robot configuration. Finally, we set the impedance mode to "variable_kp"
to dynamically change the stiffness values (motion_damping_ratio_task
is set to 1
: the kd values adapt to the kp values to maintain a critically damped response).
# Create the OSC
osc_cfg = OperationalSpaceControllerCfg(
target_types=["pose_abs", "wrench_abs"],
impedance_mode="variable_kp",
inertial_dynamics_decoupling=True,
partial_inertial_dynamics_decoupling=False,
gravity_compensation=False,
motion_damping_ratio_task=1.0,
contact_wrench_stiffness_task=[0.0, 0.0, 0.1, 0.0, 0.0, 0.0],
motion_control_axes_task=[1, 1, 0, 1, 1, 1],
contact_wrench_control_axes_task=[0, 0, 1, 0, 0, 0],
)
osc = OperationalSpaceController(osc_cfg, num_envs=scene.num_envs, device=sim.device)
Updating the states of the robot#
The OSC implementation is a computation-only class. Thus, it expects the user to provide the necessary information about the robot. This includes the robot's Jacobian matrix, mass/inertia matrix, end-effector pose, velocity, and contact force, all in the root frame. Moreover, the user should provide the gravity compensation vector, if desired.
# Update robot states
def update_states(
sim: sim_utils.SimulationContext,
scene: InteractiveScene,
robot: Articulation,
ee_frame_idx: int,
arm_joint_ids: list[int],
contact_forces,
):
"""Update the robot states.
Args:
sim: (SimulationContext) Simulation context.
scene: (InteractiveScene) Interactive scene.
robot: (Articulation) Robot articulation.
ee_frame_idx: (int) End-effector frame index.
arm_joint_ids: (list[int]) Arm joint indices.
contact_forces: (ContactSensor) Contact sensor.
Returns:
jacobian_b (torch.tensor): Jacobian in the body frame.
mass_matrix (torch.tensor): Mass matrix.
gravity (torch.tensor): Gravity vector.
ee_pose_b (torch.tensor): End-effector pose in the body frame.
ee_vel_b (torch.tensor): End-effector velocity in the body frame.
root_pose_w (torch.tensor): Root pose in the world frame.
ee_pose_w (torch.tensor): End-effector pose in the world frame.
ee_force_b (torch.tensor): End-effector force in the body frame.
Raises:
ValueError: Undefined target_type.
"""
# obtain dynamics related quantities from simulation
ee_jacobi_idx = ee_frame_idx - 1
jacobian_w = robot.root_physx_view.get_jacobians()[:, ee_jacobi_idx, :, arm_joint_ids]
mass_matrix = robot.root_physx_view.get_mass_matrices()[:, arm_joint_ids, :][:, :, arm_joint_ids]
gravity = robot.root_physx_view.get_generalized_gravity_forces()[:, arm_joint_ids]
# Convert the Jacobian from world to root frame
jacobian_b = jacobian_w.clone()
root_rot_matrix = matrix_from_quat(quat_inv(robot.data.root_state_w[:, 3:7]))
jacobian_b[:, :3, :] = torch.bmm(root_rot_matrix, jacobian_b[:, :3, :])
jacobian_b[:, 3:, :] = torch.bmm(root_rot_matrix, jacobian_b[:, 3:, :])
# Compute current pose of the end-effector
root_pose_w = robot.data.root_state_w[:, 0:7]
ee_pose_w = robot.data.body_state_w[:, ee_frame_idx, 0:7]
ee_pos_b, ee_quat_b = subtract_frame_transforms(
root_pose_w[:, 0:3], root_pose_w[:, 3:7], ee_pose_w[:, 0:3], ee_pose_w[:, 3:7]
)
ee_pose_b = torch.cat([ee_pos_b, ee_quat_b], dim=-1)
# Compute the current velocity of the end-effector
ee_vel_w = robot.data.body_vel_w[:, ee_frame_idx, :] # Extract end-effector velocity in the world frame
root_vel_w = robot.data.root_vel_w # Extract root velocity in the world frame
relative_vel_w = ee_vel_w - root_vel_w # Compute the relative velocity in the world frame
ee_lin_vel_b = quat_rotate_inverse(robot.data.root_quat_w, relative_vel_w[:, 0:3]) # From world to root frame
ee_ang_vel_b = quat_rotate_inverse(robot.data.root_quat_w, relative_vel_w[:, 3:6])
ee_vel_b = torch.cat([ee_lin_vel_b, ee_ang_vel_b], dim=-1)
# Calculate the contact force
ee_force_w = torch.zeros(scene.num_envs, 3, device=sim.device)
sim_dt = sim.get_physics_dt()
contact_forces.update(sim_dt) # update contact sensor
# Calculate the contact force by averaging over last four time steps (i.e., to smoothen) and
# taking the max of three surfaces as only one should be the contact of interest
ee_force_w, _ = torch.max(torch.mean(contact_forces.data.net_forces_w_history, dim=1), dim=1)
# This is a simplification, only for the sake of testing.
ee_force_b = ee_force_w
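As a quick orientation, the excerpt below repeats the call from the simulation loop and annotates the returned quantities with their expected batch shapes; the shapes are assumptions based on the 7-DoF Franka arm used in this tutorial (Isaac Lab quaternions are ordered (w, x, y, z)).
# Sketch: expected batch shapes of the quantities returned by update_states(),
# assuming num_envs parallel environments and the 7-DoF Franka arm of this tutorial.
# jacobian_b:  (num_envs, 6, 7)  end-effector Jacobian in the base frame
# mass_matrix: (num_envs, 7, 7)  joint-space mass matrix of the arm joints
# gravity:     (num_envs, 7)     generalized gravity forces on the arm joints
# ee_pose_b:   (num_envs, 7)     position (3) + quaternion (w, x, y, z)
# ee_vel_b:    (num_envs, 6)     linear (3) + angular (3) velocity
# ee_force_b:  (num_envs, 3)     measured contact force (linear part only)
jacobian_b, mass_matrix, gravity, ee_pose_b, ee_vel_b, root_pose_w, ee_pose_w, ee_force_b = update_states(
    sim, scene, robot, ee_frame_idx, arm_joint_ids, contact_forces
)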
Computing the robot commands#
The OSC separates the operations of setting the desired command and computing the desired joint efforts/torques. To set the desired command, the user should provide the command vector, which includes the target commands (in the order they appear in the target_types
argument of the OSC configuration) and, if impedance_mode is set to "variable_kp"
or "variable"
, the desired stiffness and damping ratio values. They should all be given in the same coordinate frame as the task frame (e.g., indicated with the _task
subscript) and concatenated together.
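For the configuration used in this tutorial, each command row is therefore the concatenation of the pose target (7 values), the wrench target (6 values), and the stiffness values (6 values), which together should match osc.action_dim. A minimal sketch built from the first entries of the goal sets defined in run_osc.py:
import torch

# Sketch: one command row for target_types=["pose_abs", "wrench_abs"] with
# impedance_mode="variable_kp": 7 pose + 6 wrench + 6 kp values.
ee_goal_pose = torch.tensor([0.6, 0.15, 0.3, 0.0, 0.92387953, 0.0, 0.38268343])
ee_goal_wrench = torch.tensor([0.0, 0.0, 10.0, 0.0, 0.0, 0.0])
kp = torch.tensor([360.0] * 6)
command_row = torch.cat([ee_goal_pose, ee_goal_wrench, kp])  # shape: (19,)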
In this tutorial, the desired wrench is already defined w.r.t. the task frame, and the desired pose is transformed into the task frame as below:
# Convert the target commands to the task frame
def convert_to_task_frame(osc: OperationalSpaceController, command: torch.tensor, ee_target_pose_b: torch.tensor):
"""Converts the target commands to the task frame.
Args:
osc: OperationalSpaceController object.
command: Command to be converted.
ee_target_pose_b: Target pose in the body frame.
Returns:
command (torch.tensor): Target command in the task frame.
task_frame_pose_b (torch.tensor): Target pose in the task frame.
Raises:
ValueError: Undefined target_type.
"""
command = command.clone()
task_frame_pose_b = ee_target_pose_b.clone()
cmd_idx = 0
for target_type in osc.cfg.target_types:
if target_type == "pose_abs":
command[:, :3], command[:, 3:7] = subtract_frame_transforms(
task_frame_pose_b[:, :3], task_frame_pose_b[:, 3:], command[:, :3], command[:, 3:7]
)
cmd_idx += 7
elif target_type == "wrench_abs":
# These are already defined in target frame for ee_goal_wrench_set_tilted_task (since it is
# easier), so not transforming
cmd_idx += 6
else:
raise ValueError("Undefined target_type within _convert_to_task_frame().")
return command, task_frame_pose_b
The OSC command is set using the command vector in the task frame, the end-effector pose in the base frame, and the task (reference) frame pose in the base frame, as below. This information is needed because the internal computations are done in the base frame.
# set the osc command
osc.reset()
command, task_frame_pose_b = convert_to_task_frame(osc, command=command, ee_target_pose_b=ee_target_pose_b)
osc.set_command(command=command, current_ee_pose_b=ee_pose_b, current_task_frame_pose_b=task_frame_pose_b)
The joint effort/torque values are computed using the provided robot states and the desired commands as below:
# compute the joint commands
joint_efforts = osc.compute(
jacobian_b=jacobian_b,
current_ee_pose_b=ee_pose_b,
current_ee_vel_b=ee_vel_b,
current_ee_force_b=ee_force_b,
mass_matrix=mass_matrix,
gravity=gravity,
)
The computed joint effort/torque targets can then be applied on the robot.
# apply actions
robot.set_joint_effort_target(joint_efforts, joint_ids=arm_joint_ids)
robot.write_data_to_sim()
The Code Execution#
You can now run the script and see the result:
./isaaclab.sh -p source/standalone/tutorials/05_controllers/run_osc.py --num_envs 128
The script will start a simulation with 128 robots. The robots are controlled using the OSC. The current and desired end-effector poses should be displayed using frame markers, in addition to the red tilted wall. You should see the robots reach the desired poses while exerting a constant force perpendicular to the wall surface.
To stop the simulation, you can either close the window or press Ctrl+C
in the terminal.