Using a task-space controller#

In the previous tutorials, we used joint-space controllers to control the robot. However, in many cases it is more intuitive to control the robot with a task-space controller. For example, when teleoperating the robot, it is easier to specify the desired end-effector pose than the desired joint positions.

In this tutorial, we will learn how to use a task-space controller to control the robot. We will use the controllers.DifferentialIKController class to track a desired end-effector pose command.

The Code#

The tutorial corresponds to the run_diff_ik.py script in the scripts/tutorials/05_controllers directory.

Code for run_diff_ik.py
# Copyright (c) 2022-2025, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTORS.md).
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

 11"""
 12This script demonstrates how to use the differential inverse kinematics controller with the simulator.
 13
 14The differential IK controller can be configured in different modes. It uses the Jacobians computed by
 15PhysX. This helps perform parallelized computation of the inverse kinematics.
 16
 17.. code-block:: bash
 18
 19    # Usage
 20    ./isaaclab.sh -p scripts/tutorials/05_controllers/run_diff_ik.py
 21
 22"""
 23
 24"""Launch Isaac Sim Simulator first."""
 25
 26import argparse
 27
 28from isaaclab.app import AppLauncher
 29
 30# add argparse arguments
 31parser = argparse.ArgumentParser(description="Tutorial on using the differential IK controller.")
 32parser.add_argument("--robot", type=str, default="franka_panda", help="Name of the robot.")
 33parser.add_argument("--num_envs", type=int, default=128, help="Number of environments to spawn.")
 34# append AppLauncher cli args
 35AppLauncher.add_app_launcher_args(parser)
 36# parse the arguments
 37args_cli = parser.parse_args()
 38
 39# launch omniverse app
 40app_launcher = AppLauncher(args_cli)
 41simulation_app = app_launcher.app
 42
 43"""Rest everything follows."""
 44
 45import torch
 46
 47import isaaclab.sim as sim_utils
 48from isaaclab.assets import AssetBaseCfg
 49from isaaclab.controllers import DifferentialIKController, DifferentialIKControllerCfg
 50from isaaclab.managers import SceneEntityCfg
 51from isaaclab.markers import VisualizationMarkers
 52from isaaclab.markers.config import FRAME_MARKER_CFG
 53from isaaclab.scene import InteractiveScene, InteractiveSceneCfg
 54from isaaclab.utils import configclass
 55from isaaclab.utils.assets import ISAAC_NUCLEUS_DIR
 56from isaaclab.utils.math import subtract_frame_transforms
 57
 58##
 59# Pre-defined configs
 60##
 61from isaaclab_assets import FRANKA_PANDA_HIGH_PD_CFG, UR10_CFG  # isort:skip
 62
 63
 64@configclass
 65class TableTopSceneCfg(InteractiveSceneCfg):
    """Configuration for a table-top scene."""

    # ground plane
    ground = AssetBaseCfg(
        prim_path="/World/defaultGroundPlane",
        spawn=sim_utils.GroundPlaneCfg(),
        init_state=AssetBaseCfg.InitialStateCfg(pos=(0.0, 0.0, -1.05)),
    )

    # lights
    dome_light = AssetBaseCfg(
        prim_path="/World/Light", spawn=sim_utils.DomeLightCfg(intensity=3000.0, color=(0.75, 0.75, 0.75))
    )

    # mount
    table = AssetBaseCfg(
        prim_path="{ENV_REGEX_NS}/Table",
        spawn=sim_utils.UsdFileCfg(
            usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/Mounts/Stand/stand_instanceable.usd", scale=(2.0, 2.0, 2.0)
        ),
    )

    # articulation
    if args_cli.robot == "franka_panda":
        robot = FRANKA_PANDA_HIGH_PD_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
    elif args_cli.robot == "ur10":
        robot = UR10_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
    else:
        raise ValueError(f"Robot {args_cli.robot} is not supported. Valid: franka_panda, ur10")


def run_simulator(sim: sim_utils.SimulationContext, scene: InteractiveScene):
    """Runs the simulation loop."""
    # Extract scene entities
    # note: we only do this here for readability.
    robot = scene["robot"]

    # Create controller
    diff_ik_cfg = DifferentialIKControllerCfg(command_type="pose", use_relative_mode=False, ik_method="dls")
    diff_ik_controller = DifferentialIKController(diff_ik_cfg, num_envs=scene.num_envs, device=sim.device)

    # Markers
    frame_marker_cfg = FRAME_MARKER_CFG.copy()
    frame_marker_cfg.markers["frame"].scale = (0.1, 0.1, 0.1)
    ee_marker = VisualizationMarkers(frame_marker_cfg.replace(prim_path="/Visuals/ee_current"))
    goal_marker = VisualizationMarkers(frame_marker_cfg.replace(prim_path="/Visuals/ee_goal"))

    # Define goals for the arm
    ee_goals = [
        [0.5, 0.5, 0.7, 0.707, 0, 0.707, 0],
        [0.5, -0.4, 0.6, 0.707, 0.707, 0.0, 0.0],
        [0.5, 0, 0.5, 0.0, 1.0, 0.0, 0.0],
    ]
    ee_goals = torch.tensor(ee_goals, device=sim.device)
    # Track the given command
    current_goal_idx = 0
    # Create buffers to store actions
    ik_commands = torch.zeros(scene.num_envs, diff_ik_controller.action_dim, device=robot.device)
    ik_commands[:] = ee_goals[current_goal_idx]

    # Specify robot-specific parameters
    if args_cli.robot == "franka_panda":
        robot_entity_cfg = SceneEntityCfg("robot", joint_names=["panda_joint.*"], body_names=["panda_hand"])
    elif args_cli.robot == "ur10":
        robot_entity_cfg = SceneEntityCfg("robot", joint_names=[".*"], body_names=["ee_link"])
    else:
        raise ValueError(f"Robot {args_cli.robot} is not supported. Valid: franka_panda, ur10")
    # Resolving the scene entities
    robot_entity_cfg.resolve(scene)
    # Obtain the frame index of the end-effector
    # For a fixed base robot, the frame index is one less than the body index. This is because
    # the root body is not included in the returned Jacobians.
    if robot.is_fixed_base:
        ee_jacobi_idx = robot_entity_cfg.body_ids[0] - 1
    else:
        ee_jacobi_idx = robot_entity_cfg.body_ids[0]

    # Define simulation stepping
    sim_dt = sim.get_physics_dt()
    count = 0
    # Simulation loop
    while simulation_app.is_running():
        # reset
        if count % 150 == 0:
            # reset time
            count = 0
            # reset joint state
            joint_pos = robot.data.default_joint_pos.clone()
            joint_vel = robot.data.default_joint_vel.clone()
            robot.write_joint_state_to_sim(joint_pos, joint_vel)
            robot.reset()
            # reset actions
            ik_commands[:] = ee_goals[current_goal_idx]
            joint_pos_des = joint_pos[:, robot_entity_cfg.joint_ids].clone()
            # reset controller
            diff_ik_controller.reset()
            diff_ik_controller.set_command(ik_commands)
            # change goal
            current_goal_idx = (current_goal_idx + 1) % len(ee_goals)
        else:
            # obtain quantities from simulation
            jacobian = robot.root_physx_view.get_jacobians()[:, ee_jacobi_idx, :, robot_entity_cfg.joint_ids]
            ee_pose_w = robot.data.body_pose_w[:, robot_entity_cfg.body_ids[0]]
            root_pose_w = robot.data.root_pose_w
            joint_pos = robot.data.joint_pos[:, robot_entity_cfg.joint_ids]
            # compute frame in root frame
            ee_pos_b, ee_quat_b = subtract_frame_transforms(
                root_pose_w[:, 0:3], root_pose_w[:, 3:7], ee_pose_w[:, 0:3], ee_pose_w[:, 3:7]
            )
            # compute the joint commands
            joint_pos_des = diff_ik_controller.compute(ee_pos_b, ee_quat_b, jacobian, joint_pos)

        # apply actions
        robot.set_joint_position_target(joint_pos_des, joint_ids=robot_entity_cfg.joint_ids)
        scene.write_data_to_sim()
        # perform step
        sim.step()
        # update sim-time
        count += 1
        # update buffers
        scene.update(sim_dt)

        # obtain quantities from simulation
        ee_pose_w = robot.data.body_state_w[:, robot_entity_cfg.body_ids[0], 0:7]
        # update marker positions
        ee_marker.visualize(ee_pose_w[:, 0:3], ee_pose_w[:, 3:7])
        goal_marker.visualize(ik_commands[:, 0:3] + scene.env_origins, ik_commands[:, 3:7])


def main():
    """Main function."""
    # Load kit helper
    sim_cfg = sim_utils.SimulationCfg(dt=0.01, device=args_cli.device)
    sim = sim_utils.SimulationContext(sim_cfg)
    # Set main camera
    sim.set_camera_view([2.5, 2.5, 2.5], [0.0, 0.0, 0.0])
    # Design scene
    scene_cfg = TableTopSceneCfg(num_envs=args_cli.num_envs, env_spacing=2.0)
    scene = InteractiveScene(scene_cfg)
    # Play the simulator
    sim.reset()
    # Now we are ready!
    print("[INFO]: Setup complete...")
    # Run the simulator
    run_simulator(sim, scene)


if __name__ == "__main__":
    # run the main function
    main()
    # close sim app
    simulation_app.close()

The Code Explained#

When using any task-space controller, it is important to ensure that the provided quantities are expressed in the correct frames. When environment instances are parallelized, they all exist in the same simulation world frame. However, we typically want each environment to have its own local frame. The origin of this frame is accessible through the scene.InteractiveScene.env_origins attribute.

In our APIs, we use the following notation for frames:

  • The simulation world frame (denoted as w), which is the frame of the entire simulation.

  • The local environment frame (denoted as e), which is the frame of the local environment.

  • The robot’s base frame (denoted as b), which is the frame of the robot’s base link.

Since the asset instances are not “aware” of the local environment frame, they return their states in the simulation world frame. Thus, we need to convert the obtained quantities to the local environment frame. This is done by subtracting the local environment origin from the obtained quantities.
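
For example, a position reported in the world frame can be expressed in an environment's local frame by subtracting that environment's origin. Below is a minimal sketch; the goal_pos_e buffer is illustrative:

    # positions of the robot's root in the world frame, shape (num_envs, 3)
    pos_w = robot.data.root_pos_w
    # express the positions in each environment's local frame
    pos_e = pos_w - scene.env_origins
    # conversely, a goal defined in the local frame maps back to the world frame
    goal_pos_w = goal_pos_e + scene.env_origins

Since the environment origins are pure translations, orientations are unaffected by this conversion.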

Creating an IK controller#

The DifferentialIKController class computes the desired joint positions for a robot to reach a desired end-effector pose. The included implementation performs the computation in a batched format and uses PyTorch operations. It supports different types of inverse kinematics solvers, including the damped least-squares method and the pseudo-inverse method. These solvers can be specified using the ik_method argument. Additionally, the controller can handle commands as both relative and absolute poses.

In this tutorial, we will use the damped least-squares method to compute the desired joint positions. Additionally, since we want to track desired end-effector poses, we will use the absolute pose command mode.

    # Create controller
    diff_ik_cfg = DifferentialIKControllerCfg(command_type="pose", use_relative_mode=False, ik_method="dls")
    diff_ik_controller = DifferentialIKController(diff_ik_cfg, num_envs=scene.num_envs, device=sim.device)
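
Other configurations follow the same pattern. As a sketch, a controller that tracks position-only commands with the pseudo-inverse solver could be configured as follows (we assume here that "position" and "pinv" are the respective identifiers accepted by command_type and ik_method):

    # position-only tracking with the pseudo-inverse solver
    pos_ik_cfg = DifferentialIKControllerCfg(command_type="position", use_relative_mode=False, ik_method="pinv")
    pos_ik_controller = DifferentialIKController(pos_ik_cfg, num_envs=scene.num_envs, device=sim.device)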

Obtaining the robot’s joint and body indices#

The IK controller implementation is a computation-only class. Thus, it expects the user to provide the necessary information about the robot. This includes the robot’s joint positions, current end-effector pose, and the Jacobian matrix.

While the attribute assets.ArticulationData.joint_pos provides the joint positions, we only want the joint positions of the robot's arm, not the gripper. Similarly, while the attribute assets.ArticulationData.body_state_w provides the state of all the robot's bodies, we only want the state of the robot's end-effector. Thus, we need to index into these arrays to obtain the desired quantities.

For this, the articulation class provides the methods find_joints() and find_bodies(). These methods take in the names of the joints and bodies and return their corresponding indices.
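
For example, the indices could be queried directly from the articulation; a minimal sketch, assuming the Franka arm's joint and body names:

    # query joint and body indices using name patterns (regular expressions)
    arm_joint_ids, arm_joint_names = robot.find_joints(["panda_joint.*"])
    ee_body_ids, ee_body_names = robot.find_bodies(["panda_hand"])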

While you may directly use these methods to obtain the indices, we recommend using the SceneEntityCfg class to resolve the indices. This class is used in various places in the APIs to extract certain information from a scene entity. Internally, it calls the above methods to obtain the indices. However, it also performs some additional checks to ensure that the provided names are valid. Thus, it is a safer option to use this class.

    # Specify robot-specific parameters
    if args_cli.robot == "franka_panda":
        robot_entity_cfg = SceneEntityCfg("robot", joint_names=["panda_joint.*"], body_names=["panda_hand"])
    elif args_cli.robot == "ur10":
        robot_entity_cfg = SceneEntityCfg("robot", joint_names=[".*"], body_names=["ee_link"])
    else:
        raise ValueError(f"Robot {args_cli.robot} is not supported. Valid: franka_panda, ur10")
    # Resolving the scene entities
    robot_entity_cfg.resolve(scene)
    # Obtain the frame index of the end-effector
    # For a fixed base robot, the frame index is one less than the body index. This is because
    # the root body is not included in the returned Jacobians.
    if robot.is_fixed_base:
        ee_jacobi_idx = robot_entity_cfg.body_ids[0] - 1
    else:
        ee_jacobi_idx = robot_entity_cfg.body_ids[0]

Computing robot command#

The IK controller separates setting the desired command from computing the desired joint positions. This allows the user to run the IK controller at a different frequency than the robot's control frequency.

The set_command() method takes in the desired end-effector pose as a single batched array. The pose is specified in the robot’s base frame.

            # reset controller
            diff_ik_controller.reset()
            diff_ik_controller.set_command(ik_commands)
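
Because the command is cached inside the controller, set_command() does not need to be called at every simulation step: the same target can be tracked over many subsequent compute() calls. When the controller is configured with use_relative_mode=True, the command is interpreted as an offset from the current end-effector pose, which then also needs to be passed to set_command(). A hedged sketch, where delta_pose_commands is an illustrative buffer of relative pose commands:

    # relative mode: anchor the delta command at the current end-effector pose
    diff_ik_controller.set_command(delta_pose_commands, ee_pos_b, ee_quat_b)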

We can then compute the desired joint positions using the compute() method. The method takes in the current end-effector pose (in base frame), Jacobian, and current joint positions. We read the Jacobian matrix from the robot’s data, which uses its value computed from the physics engine.

            # obtain quantities from simulation
            jacobian = robot.root_physx_view.get_jacobians()[:, ee_jacobi_idx, :, robot_entity_cfg.joint_ids]
            ee_pose_w = robot.data.body_pose_w[:, robot_entity_cfg.body_ids[0]]
            root_pose_w = robot.data.root_pose_w
            joint_pos = robot.data.joint_pos[:, robot_entity_cfg.joint_ids]
            # compute frame in root frame
            ee_pos_b, ee_quat_b = subtract_frame_transforms(
                root_pose_w[:, 0:3], root_pose_w[:, 3:7], ee_pose_w[:, 0:3], ee_pose_w[:, 3:7]
            )
            # compute the joint commands
            joint_pos_des = diff_ik_controller.compute(ee_pos_b, ee_quat_b, jacobian, joint_pos)

The computed joint position targets can then be applied on the robot, as done in the previous tutorials.

        # apply actions
        robot.set_joint_position_target(joint_pos_des, joint_ids=robot_entity_cfg.joint_ids)
        scene.write_data_to_sim()

The Code Execution#

Now that we have gone through the code, let’s run the script and see the result:

    ./isaaclab.sh -p scripts/tutorials/05_controllers/run_diff_ik.py --robot franka_panda --num_envs 128

The script will start a simulation with 128 robots. The robots will be controlled using the IK controller. The current and desired end-effector poses should be displayed using frame markers. When the robot reaches the desired pose, the command should switch to the next pose specified in the script.

result of run_diff_ik.py
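
You can also control the UR10 arm instead of the Franka by passing the corresponding option to the script:

    ./isaaclab.sh -p scripts/tutorials/05_controllers/run_diff_ik.py --robot ur10 --num_envs 128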

To stop the simulation, you can either close the window or press Ctrl+C in the terminal.