Training with an RL Agent#

In the previous tutorials, we covered how to define an RL task environment, register it into the gym registry, and interact with it using a random agent. We now move on to the next step: training an RL agent to solve the task.

Although the envs.ManagerBasedRLEnv conforms to the gymnasium.Env interface, it is not exactly a gym environment. The inputs and outputs of the environment are not numpy arrays, but torch tensors whose first dimension is the number of environment instances.
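To make this concrete, here is a minimal, illustrative sketch (not part of the tutorial scripts) that creates and steps the Isaac-Cartpole-v0 task and inspects the batched torch tensors it returns. The number of environments and the zero actions are arbitrary choices for illustration, and the parse_env_cfg utility from isaaclab_tasks is used to build the environment configuration.

from isaaclab.app import AppLauncher

# launch the simulator (headless) before importing anything that depends on it
app_launcher = AppLauncher(headless=True)
simulation_app = app_launcher.app

import gymnasium as gym
import torch

import isaaclab_tasks  # noqa: F401  (registers the tasks into the gym registry)
from isaaclab_tasks.utils import parse_env_cfg

# illustrative sketch: the number of environments here is an arbitrary choice
env_cfg = parse_env_cfg("Isaac-Cartpole-v0", num_envs=16)
env = gym.make("Isaac-Cartpole-v0", cfg=env_cfg)

obs, _ = env.reset()
# apply zero actions to all environment instances in a single batched call
actions = torch.zeros(env.action_space.shape, device=env.unwrapped.device)
obs, rew, terminated, truncated, info = env.step(actions)
# the rewards and termination flags are torch tensors of shape (num_envs,), and the
# observations are a dictionary of torch tensors with a leading num_envs dimension
print(type(rew), rew.shape)

env.close()
simulation_app.close()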

Additionally, most RL libraries expect their own variation of an environment interface. For example, Stable-Baselines3 expects the environment to conform to its VecEnv API, which expects a list of numpy arrays instead of a single tensor. Similarly, RSL-RL, RL-Games and SKRL each expect a different interface. Since there is no one-size-fits-all solution, we do not base envs.ManagerBasedRLEnv on any particular learning library. Instead, we implement wrappers that convert the environment into the expected interface. These are specified in the isaaclab_rl module.

In this tutorial, we will use Stable-Baselines3 to train an RL agent to solve the cartpole balancing task.

Caution

Wrapping the environment with the respective learning framework’s wrapper should happen last, i.e. after all other wrappers have been applied. This is because the learning framework’s wrapper modifies the interpretation of the environment’s APIs, which may no longer be compatible with gymnasium.Env.

The Code#

For this tutorial, we use the training script from the Stable-Baselines3 workflow in the scripts/reinforcement_learning/sb3 directory.

Code for train.py
# Copyright (c) 2022-2025, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTORS.md).
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause


"""Script to train RL agent with Stable Baselines3."""

"""Launch Isaac Sim Simulator first."""

import argparse
import contextlib
import signal
import sys
from pathlib import Path

from isaaclab.app import AppLauncher

# add argparse arguments
parser = argparse.ArgumentParser(description="Train an RL agent with Stable-Baselines3.")
parser.add_argument("--video", action="store_true", default=False, help="Record videos during training.")
parser.add_argument("--video_length", type=int, default=200, help="Length of the recorded video (in steps).")
parser.add_argument("--video_interval", type=int, default=2000, help="Interval between video recordings (in steps).")
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument(
    "--agent", type=str, default="sb3_cfg_entry_point", help="Name of the RL agent configuration entry point."
)
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
parser.add_argument("--log_interval", type=int, default=100_000, help="Log data every n timesteps.")
parser.add_argument("--checkpoint", type=str, default=None, help="Continue the training from checkpoint.")
parser.add_argument("--max_iterations", type=int, default=None, help="RL Policy training iterations.")
parser.add_argument("--export_io_descriptors", action="store_true", default=False, help="Export IO descriptors.")
parser.add_argument(
    "--keep_all_info",
    action="store_true",
    default=False,
    help="Use a slower SB3 wrapper but keep all the extra training info.",
)
parser.add_argument(
    "--ray-proc-id", "-rid", type=int, default=None, help="Automatically configured by Ray integration, otherwise None."
)
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli, hydra_args = parser.parse_known_args()
# always enable cameras to record video
if args_cli.video:
    args_cli.enable_cameras = True

# clear out sys.argv for Hydra
sys.argv = [sys.argv[0]] + hydra_args

# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app


def cleanup_pbar(*args):
    """
    A small helper to stop training and
    cleanup progress bar properly on ctrl+c
    """
    import gc

    tqdm_objects = [obj for obj in gc.get_objects() if "tqdm" in type(obj).__name__]
    for tqdm_object in tqdm_objects:
        if "tqdm_rich" in type(tqdm_object).__name__:
            tqdm_object.close()
    raise KeyboardInterrupt


# disable KeyboardInterrupt override
signal.signal(signal.SIGINT, cleanup_pbar)

"""Rest everything follows."""

import gymnasium as gym
import logging
import numpy as np
import os
import random
from datetime import datetime

from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import CheckpointCallback, LogEveryNTimesteps
from stable_baselines3.common.vec_env import VecNormalize

from isaaclab.envs import (
    DirectMARLEnv,
    DirectMARLEnvCfg,
    DirectRLEnvCfg,
    ManagerBasedRLEnvCfg,
    multi_agent_to_single_agent,
)
from isaaclab.utils.dict import print_dict
from isaaclab.utils.io import dump_yaml

from isaaclab_rl.sb3 import Sb3VecEnvWrapper, process_sb3_cfg

import isaaclab_tasks  # noqa: F401
from isaaclab_tasks.utils.hydra import hydra_task_config

# import logger
logger = logging.getLogger(__name__)
# PLACEHOLDER: Extension template (do not remove this comment)


@hydra_task_config(args_cli.task, args_cli.agent)
def main(env_cfg: ManagerBasedRLEnvCfg | DirectRLEnvCfg | DirectMARLEnvCfg, agent_cfg: dict):
    """Train with stable-baselines agent."""
    # randomly sample a seed if seed = -1
    if args_cli.seed == -1:
        args_cli.seed = random.randint(0, 10000)

    # override configurations with non-hydra CLI arguments
    env_cfg.scene.num_envs = args_cli.num_envs if args_cli.num_envs is not None else env_cfg.scene.num_envs
    agent_cfg["seed"] = args_cli.seed if args_cli.seed is not None else agent_cfg["seed"]
    # max iterations for training
    if args_cli.max_iterations is not None:
        agent_cfg["n_timesteps"] = args_cli.max_iterations * agent_cfg["n_steps"] * env_cfg.scene.num_envs

    # set the environment seed
    # note: certain randomizations occur in the environment initialization so we set the seed here
    env_cfg.seed = agent_cfg["seed"]
    env_cfg.sim.device = args_cli.device if args_cli.device is not None else env_cfg.sim.device

    # directory for logging into
    run_info = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    log_root_path = os.path.abspath(os.path.join("logs", "sb3", args_cli.task))
    print(f"[INFO] Logging experiment in directory: {log_root_path}")
    # The Ray Tune workflow extracts experiment name using the logging line below, hence, do not change it (see PR #2346, comment-2819298849)
    print(f"Exact experiment name requested from command line: {run_info}")
    log_dir = os.path.join(log_root_path, run_info)
    # dump the configuration into log-directory
    dump_yaml(os.path.join(log_dir, "params", "env.yaml"), env_cfg)
    dump_yaml(os.path.join(log_dir, "params", "agent.yaml"), agent_cfg)

    # save command used to run the script
    command = " ".join(sys.orig_argv)
    (Path(log_dir) / "command.txt").write_text(command)

    # post-process agent configuration
    agent_cfg = process_sb3_cfg(agent_cfg, env_cfg.scene.num_envs)
    # read configurations about the agent-training
    policy_arch = agent_cfg.pop("policy")
    n_timesteps = agent_cfg.pop("n_timesteps")

    # set the IO descriptors export flag if requested
    if isinstance(env_cfg, ManagerBasedRLEnvCfg):
        env_cfg.export_io_descriptors = args_cli.export_io_descriptors
    else:
        logger.warning(
            "IO descriptors are only supported for manager based RL environments. No IO descriptors will be exported."
        )

    # set the log directory for the environment (works for all environment types)
    env_cfg.log_dir = log_dir

    # create isaac environment
    env = gym.make(args_cli.task, cfg=env_cfg, render_mode="rgb_array" if args_cli.video else None)

    # convert to single-agent instance if required by the RL algorithm
    if isinstance(env.unwrapped, DirectMARLEnv):
        env = multi_agent_to_single_agent(env)

    # wrap for video recording
    if args_cli.video:
        video_kwargs = {
            "video_folder": os.path.join(log_dir, "videos", "train"),
            "step_trigger": lambda step: step % args_cli.video_interval == 0,
            "video_length": args_cli.video_length,
            "disable_logger": True,
        }
        print("[INFO] Recording videos during training.")
        print_dict(video_kwargs, nesting=4)
        env = gym.wrappers.RecordVideo(env, **video_kwargs)

    # wrap around environment for stable baselines
    env = Sb3VecEnvWrapper(env, fast_variant=not args_cli.keep_all_info)

    norm_keys = {"normalize_input", "normalize_value", "clip_obs"}
    norm_args = {}
    for key in norm_keys:
        if key in agent_cfg:
            norm_args[key] = agent_cfg.pop(key)

    if norm_args and norm_args.get("normalize_input"):
        print(f"Normalizing input, {norm_args=}")
        env = VecNormalize(
            env,
            training=True,
            norm_obs=norm_args["normalize_input"],
            norm_reward=norm_args.get("normalize_value", False),
            clip_obs=norm_args.get("clip_obs", 100.0),
            gamma=agent_cfg["gamma"],
            clip_reward=np.inf,
        )

    # create agent from stable baselines
    agent = PPO(policy_arch, env, verbose=1, tensorboard_log=log_dir, **agent_cfg)
    if args_cli.checkpoint is not None:
        agent = agent.load(args_cli.checkpoint, env, print_system_info=True)

    # callbacks for agent
    checkpoint_callback = CheckpointCallback(save_freq=1000, save_path=log_dir, name_prefix="model", verbose=2)
    callbacks = [checkpoint_callback, LogEveryNTimesteps(n_steps=args_cli.log_interval)]

    # train the agent
    with contextlib.suppress(KeyboardInterrupt):
        agent.learn(
            total_timesteps=n_timesteps,
            callback=callbacks,
            progress_bar=True,
            log_interval=None,
        )
    # save the final model
    agent.save(os.path.join(log_dir, "model"))
    print("Saving to:")
    print(os.path.join(log_dir, "model.zip"))

    if isinstance(env, VecNormalize):
        print("Saving normalization")
        env.save(os.path.join(log_dir, "model_vecnormalize.pkl"))

    # close the simulator
    env.close()


if __name__ == "__main__":
    # run the main function
    main()
    # close sim app
    simulation_app.close()

The Code Explained#

Most of the code above is boilerplate for creating logging directories, saving the parsed configurations, and setting up different Stable-Baselines3 components. For this tutorial, the important part is creating the environment and wrapping it with the Stable-Baselines3 wrapper.

There are three wrappers used in the code above:

  1. gymnasium.wrappers.RecordVideo: This wrapper records a video of the environment and saves it to the specified directory. This is useful for visualizing the agent’s behavior during training.

  2. isaaclab_rl.sb3.Sb3VecEnvWrapper: This wrapper converts the environment into a Stable-Baselines3 compatible environment.

  3. stable_baselines3.common.vec_env.VecNormalize: This wrapper normalizes the environment’s observations and rewards.

Each of these wrappers wraps around the previous one by following env = wrapper(env, *args, **kwargs) repeatedly. The final environment is then used to train the agent. For more information on how these wrappers work, please refer to the Wrapping environments documentation.
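To make the wrapping order concrete, here is a condensed, illustrative sketch of the chain used in train.py above. The helper function make_sb3_env and its arguments are hypothetical; in the actual script the same steps are driven by the CLI flags and the agent configuration.

import gymnasium as gym

from stable_baselines3.common.vec_env import VecNormalize

from isaaclab_rl.sb3 import Sb3VecEnvWrapper


def make_sb3_env(task_name: str, env_cfg, video_folder: str | None = None):
    """Create an Isaac Lab environment and wrap it for Stable-Baselines3 (illustrative only)."""
    # 1. create the Isaac Lab environment: a gymnasium.Env with torch-tensor inputs and outputs
    env = gym.make(task_name, cfg=env_cfg, render_mode="rgb_array" if video_folder else None)
    # 2. gymnasium-level wrappers (such as video recording) are applied first
    if video_folder is not None:
        env = gym.wrappers.RecordVideo(env, video_folder=video_folder, video_length=200)
    # 3. the learning framework's wrapper is applied last: the result is an SB3 VecEnv, not a gymnasium.Env
    env = Sb3VecEnvWrapper(env)
    # 4. SB3's own VecEnv wrappers (e.g. VecNormalize) can then wrap the resulting VecEnv
    env = VecNormalize(env, training=True, norm_obs=True, norm_reward=False, clip_obs=100.0)
    return env

After Sb3VecEnvWrapper is applied, the observations and rewards follow the numpy-based VecEnv convention, which is why gymnasium wrappers can no longer be applied afterwards (see the caution at the top of this tutorial).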

The Code Execution#

We train a PPO agent from Stable-Baselines3 to solve the cartpole balancing task.

Training the agent#

There are three main ways to train the agent. Each has its own advantages and disadvantages, and it is up to you to decide which one you prefer based on your use case.

Headless execution#

If the --headless flag is set, the simulation is not rendered during training. This is useful when training on a remote server or when you do not want to see the simulation. Typically, it speeds up the training process since only the physics simulation step is performed.

./isaaclab.sh -p scripts/reinforcement_learning/sb3/train.py --task Isaac-Cartpole-v0 --num_envs 64 --headless

Headless execution with off-screen render#

Since the above command does not render the simulation, it is not possible to visualize the agent’s behavior during training. To visualize it, we pass the --video flag, which records a video of the agent’s behavior during training. In this script, passing --video automatically sets the --enable_cameras flag, which enables off-screen rendering.

./isaaclab.sh -p scripts/reinforcement_learning/sb3/train.py --task Isaac-Cartpole-v0 --num_envs 64 --headless --video

The videos are saved to the logs/sb3/Isaac-Cartpole-v0/<run-dir>/videos/train directory. You can open these videos using any video player.

Interactive execution#

While the above two methods are useful for training the agent, they don’t allow you to interact with the simulation to see what is happening. In this case, you can omit the --headless flag and run the training script as follows:

./isaaclab.sh -p scripts/reinforcement_learning/sb3/train.py --task Isaac-Cartpole-v0 --num_envs 64

This will open the Isaac Sim window and you can see the agent being trained in the environment. However, this will slow down training since the simulation is rendered on the screen. As a workaround, you can switch between different render modes in the "Isaac Lab" window that is docked in the bottom-right corner of the screen. To learn more about these render modes, please check the sim.SimulationContext.RenderMode class.

Viewing the logs#

On a separate terminal, you can monitor the training progress by executing the following command:

# execute from the root directory of the repository
./isaaclab.sh -p -m tensorboard.main --logdir logs/sb3/Isaac-Cartpole-v0

Playing the trained agent#

Once the training is complete, you can visualize the trained agent by executing the following command:

# execute from the root directory of the repository
./isaaclab.sh -p scripts/reinforcement_learning/sb3/play.py --task Isaac-Cartpole-v0 --num_envs 32 --use_last_checkpoint

The above command will load the latest checkpoint from the logs/sb3/Isaac-Cartpole-v0 directory. You can also load a specific checkpoint by passing the --checkpoint flag.
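For example, to play the final model that train.py saves as model.zip in the run directory (the run directory name is a timestamp; the placeholder below must be replaced with your actual run directory):

# execute from the root directory of the repository
./isaaclab.sh -p scripts/reinforcement_learning/sb3/play.py --task Isaac-Cartpole-v0 --num_envs 32 --checkpoint logs/sb3/Isaac-Cartpole-v0/<run-dir>/model.zip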