Configuring an RL Agent
In the previous tutorial, we saw how to train an RL agent to solve the cartpole balancing task using the Stable-Baselines3 library. In this tutorial, we will see how to configure the training process to use different RL libraries and different training algorithms.
In the directory scripts/reinforcement_learning, you will find the scripts for different RL libraries. These are organized into subdirectories named after the library. Each subdirectory contains the training and playing scripts for the library.
To configure a learning library with a specific task, you need to create a configuration file for the learning agent. This configuration file is used to create an instance of the learning agent and to configure the training process. Similar to the environment registration shown in the Registering an Environment tutorial, you can register the learning agent's configuration with the gymnasium.register method.
The Code
As an example, we will look at the configuration included for the task Isaac-Cartpole-v0 in the isaaclab_tasks package. This is the same task that we used in the Training with an RL Agent tutorial.
gym.register(
    id="Isaac-Cartpole-v0",
    entry_point="isaaclab.envs:ManagerBasedRLEnv",
    disable_env_checker=True,
    kwargs={
        "env_cfg_entry_point": f"{__name__}.cartpole_env_cfg:CartpoleEnvCfg",
        "rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
        "rsl_rl_cfg_entry_point": f"{agents.__name__}.rsl_rl_ppo_cfg:CartpolePPORunnerCfg",
        "rsl_rl_with_symmetry_cfg_entry_point": f"{agents.__name__}.rsl_rl_ppo_cfg:CartpolePPORunnerWithSymmetryCfg",
        "skrl_cfg_entry_point": f"{agents.__name__}:skrl_ppo_cfg.yaml",
        "sb3_cfg_entry_point": f"{agents.__name__}:sb3_ppo_cfg.yaml",
    },
)
The Code Explained
Under the attribute kwargs, we can see the configurations for the different learning libraries. Each key is named after the library and each value is the path to the configuration instance. This configuration instance can be a string, a class, or an instance of a class. For example, the value of the key "rl_games_cfg_entry_point" is a string that points to the configuration YAML file for the RL-Games library. Meanwhile, the value of the key "rsl_rl_cfg_entry_point" points to the configuration class for the RSL-RL library.
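Once a task is registered, these entry points can be inspected directly through the Gymnasium registry. Below is a minimal sketch; note that importing isaaclab_tasks in practice requires launching the simulator app first, as the training scripts do:

import gymnasium as gym

import isaaclab_tasks  # noqa: F401  (importing this package registers the Isaac Lab tasks)

# look up the registered specification for the task
spec = gym.spec("Isaac-Cartpole-v0")

# the agent entry points remain plain strings (or classes) until a script resolves them
print(spec.kwargs["rl_games_cfg_entry_point"])  # a "<module>:<file>.yaml" string
print(spec.kwargs["rsl_rl_cfg_entry_point"])    # a "<module>:<class>" string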
The pattern used for specifying an agent configuration entry point closely follows the one used for specifying the environment configuration entry point. In particular, the following two registrations are equivalent:
Specifying the configuration entry point as a string
from . import agents

gym.register(
    id="Isaac-Cartpole-v0",
    entry_point="isaaclab.envs:ManagerBasedRLEnv",
    disable_env_checker=True,
    kwargs={
        "env_cfg_entry_point": f"{__name__}.cartpole_env_cfg:CartpoleEnvCfg",
        "rsl_rl_cfg_entry_point": f"{agents.__name__}.rsl_rl_ppo_cfg:CartpolePPORunnerCfg",
    },
)
Specifying the configuration entry point as a class
from . import agents

gym.register(
    id="Isaac-Cartpole-v0",
    entry_point="isaaclab.envs:ManagerBasedRLEnv",
    disable_env_checker=True,
    kwargs={
        "env_cfg_entry_point": f"{__name__}.cartpole_env_cfg:CartpoleEnvCfg",
        "rsl_rl_cfg_entry_point": agents.rsl_rl_ppo_cfg.CartpolePPORunnerCfg,
    },
)
The first code block is the preferred way to specify the configuration entry point. The second code block is equivalent, but it imports the configuration class at registration time, which slows down the import of the package. This is why we recommend using strings for the configuration entry point.
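To illustrate why strings are cheaper, the sketch below shows how a "module:attribute" string can be resolved on demand. This is an illustrative helper, not Isaac Lab's actual loader:

import importlib


def resolve_entry_point(entry_point: str):
    """Resolve a "module.path:attribute" string to the object it names."""
    module_name, attr_name = entry_point.split(":")
    # the module is imported only here, when the configuration is first needed,
    # so registering the string costs nothing at package-import time
    module = importlib.import_module(module_name)
    return getattr(module, attr_name)


# works with any importable module, e.g. the standard library:
ordered_dict_cls = resolve_entry_point("collections:OrderedDict")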
All the scripts in the scripts/reinforcement_learning directory are configured by default to read the <library_name>_cfg_entry_point from the kwargs dictionary to retrieve the configuration instance. For instance, the following code block shows how the train.py script reads the configuration instance for the Stable-Baselines3 library:
Code for train.py with SB3
# Copyright (c) 2022-2025, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTORS.md).
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause


"""Script to train RL agent with Stable Baselines3."""

"""Launch Isaac Sim Simulator first."""

import argparse
import contextlib
import signal
import sys
from pathlib import Path

from isaaclab.app import AppLauncher

# add argparse arguments
parser = argparse.ArgumentParser(description="Train an RL agent with Stable-Baselines3.")
parser.add_argument("--video", action="store_true", default=False, help="Record videos during training.")
parser.add_argument("--video_length", type=int, default=200, help="Length of the recorded video (in steps).")
parser.add_argument("--video_interval", type=int, default=2000, help="Interval between video recordings (in steps).")
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument(
    "--agent", type=str, default="sb3_cfg_entry_point", help="Name of the RL agent configuration entry point."
)
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
parser.add_argument("--log_interval", type=int, default=100_000, help="Log data every n timesteps.")
parser.add_argument("--checkpoint", type=str, default=None, help="Continue the training from checkpoint.")
parser.add_argument("--max_iterations", type=int, default=None, help="RL Policy training iterations.")
parser.add_argument(
    "--keep_all_info",
    action="store_true",
    default=False,
    help="Use a slower SB3 wrapper but keep all the extra training info.",
)
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli, hydra_args = parser.parse_known_args()
# always enable cameras to record video
if args_cli.video:
    args_cli.enable_cameras = True

# clear out sys.argv for Hydra
sys.argv = [sys.argv[0]] + hydra_args

# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app


def cleanup_pbar(*args):
    """
    A small helper to stop training and
    cleanup progress bar properly on ctrl+c
    """
    import gc

    tqdm_objects = [obj for obj in gc.get_objects() if "tqdm" in type(obj).__name__]
    for tqdm_object in tqdm_objects:
        if "tqdm_rich" in type(tqdm_object).__name__:
            tqdm_object.close()
    raise KeyboardInterrupt


# disable KeyboardInterrupt override
signal.signal(signal.SIGINT, cleanup_pbar)

"""Rest everything follows."""

import gymnasium as gym
import numpy as np
import os
import random
from datetime import datetime

from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import CheckpointCallback, LogEveryNTimesteps
from stable_baselines3.common.vec_env import VecNormalize

from isaaclab.envs import (
    DirectMARLEnv,
    DirectMARLEnvCfg,
    DirectRLEnvCfg,
    ManagerBasedRLEnvCfg,
    multi_agent_to_single_agent,
)
from isaaclab.utils.dict import print_dict
from isaaclab.utils.io import dump_pickle, dump_yaml

from isaaclab_rl.sb3 import Sb3VecEnvWrapper, process_sb3_cfg

import isaaclab_tasks  # noqa: F401
from isaaclab_tasks.utils.hydra import hydra_task_config

# PLACEHOLDER: Extension template (do not remove this comment)


@hydra_task_config(args_cli.task, args_cli.agent)
def main(env_cfg: ManagerBasedRLEnvCfg | DirectRLEnvCfg | DirectMARLEnvCfg, agent_cfg: dict):
    """Train with stable-baselines agent."""
    # randomly sample a seed if seed = -1
    if args_cli.seed == -1:
        args_cli.seed = random.randint(0, 10000)

    # override configurations with non-hydra CLI arguments
    env_cfg.scene.num_envs = args_cli.num_envs if args_cli.num_envs is not None else env_cfg.scene.num_envs
    agent_cfg["seed"] = args_cli.seed if args_cli.seed is not None else agent_cfg["seed"]
    # max iterations for training
    if args_cli.max_iterations is not None:
        agent_cfg["n_timesteps"] = args_cli.max_iterations * agent_cfg["n_steps"] * env_cfg.scene.num_envs

    # set the environment seed
    # note: certain randomizations occur in the environment initialization so we set the seed here
    env_cfg.seed = agent_cfg["seed"]
    env_cfg.sim.device = args_cli.device if args_cli.device is not None else env_cfg.sim.device

    # directory for logging into
    run_info = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    log_root_path = os.path.abspath(os.path.join("logs", "sb3", args_cli.task))
    print(f"[INFO] Logging experiment in directory: {log_root_path}")
    # The Ray Tune workflow extracts experiment name using the logging line below, hence, do not change it (see PR #2346, comment-2819298849)
    print(f"Exact experiment name requested from command line: {run_info}")
    log_dir = os.path.join(log_root_path, run_info)
    # dump the configuration into log-directory
    dump_yaml(os.path.join(log_dir, "params", "env.yaml"), env_cfg)
    dump_yaml(os.path.join(log_dir, "params", "agent.yaml"), agent_cfg)
    dump_pickle(os.path.join(log_dir, "params", "env.pkl"), env_cfg)
    dump_pickle(os.path.join(log_dir, "params", "agent.pkl"), agent_cfg)

    # save command used to run the script
    command = " ".join(sys.orig_argv)
    (Path(log_dir) / "command.txt").write_text(command)

    # post-process agent configuration
    agent_cfg = process_sb3_cfg(agent_cfg, env_cfg.scene.num_envs)
    # read configurations about the agent-training
    policy_arch = agent_cfg.pop("policy")
    n_timesteps = agent_cfg.pop("n_timesteps")

    # create isaac environment
    env = gym.make(args_cli.task, cfg=env_cfg, render_mode="rgb_array" if args_cli.video else None)

    # convert to single-agent instance if required by the RL algorithm
    if isinstance(env.unwrapped, DirectMARLEnv):
        env = multi_agent_to_single_agent(env)

    # wrap for video recording
    if args_cli.video:
        video_kwargs = {
            "video_folder": os.path.join(log_dir, "videos", "train"),
            "step_trigger": lambda step: step % args_cli.video_interval == 0,
            "video_length": args_cli.video_length,
            "disable_logger": True,
        }
        print("[INFO] Recording videos during training.")
        print_dict(video_kwargs, nesting=4)
        env = gym.wrappers.RecordVideo(env, **video_kwargs)

    # wrap around environment for stable baselines
    env = Sb3VecEnvWrapper(env, fast_variant=not args_cli.keep_all_info)

    norm_keys = {"normalize_input", "normalize_value", "clip_obs"}
    norm_args = {}
    for key in norm_keys:
        if key in agent_cfg:
            norm_args[key] = agent_cfg.pop(key)

    if norm_args and norm_args.get("normalize_input"):
        print(f"Normalizing input, {norm_args=}")
        env = VecNormalize(
            env,
            training=True,
            norm_obs=norm_args["normalize_input"],
            norm_reward=norm_args.get("normalize_value", False),
            clip_obs=norm_args.get("clip_obs", 100.0),
            gamma=agent_cfg["gamma"],
            clip_reward=np.inf,
        )

    # create agent from stable baselines
    agent = PPO(policy_arch, env, verbose=1, tensorboard_log=log_dir, **agent_cfg)
    if args_cli.checkpoint is not None:
        agent = agent.load(args_cli.checkpoint, env, print_system_info=True)

    # callbacks for agent
    checkpoint_callback = CheckpointCallback(save_freq=1000, save_path=log_dir, name_prefix="model", verbose=2)
    callbacks = [checkpoint_callback, LogEveryNTimesteps(n_steps=args_cli.log_interval)]

    # train the agent
    with contextlib.suppress(KeyboardInterrupt):
        agent.learn(
            total_timesteps=n_timesteps,
            callback=callbacks,
            progress_bar=True,
            log_interval=None,
        )
    # save the final model
    agent.save(os.path.join(log_dir, "model"))
    print("Saving to:")
    print(os.path.join(log_dir, "model.zip"))

    if isinstance(env, VecNormalize):
        print("Saving normalization")
        env.save(os.path.join(log_dir, "model_vecnormalize.pkl"))

    # close the simulator
    env.close()


if __name__ == "__main__":
    # run the main function
    main()
    # close sim app
    simulation_app.close()
The --agent argument specifies the name of the configuration entry point to read from the kwargs dictionary, and hence which learning library configuration to use. You can select an alternate configuration instance by passing a different value to the --agent argument.
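For example, since the train.py script shown above defaults --agent to "sb3_cfg_entry_point", training the cartpole task with Stable-Baselines3 needs no extra flags (the script path below assumes the directory layout described at the top of this tutorial):

# uses the default agent configuration entry point "sb3_cfg_entry_point"
./isaaclab.sh -p scripts/reinforcement_learning/sb3/train.py --task Isaac-Cartpole-v0 --headless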
The Code Execution
Since the RSL-RL library offers two configuration instances for the cartpole balancing task, we can use the --agent argument to specify which configuration instance to use.
Training with the standard PPO configuration:
# standard PPO training
./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Cartpole-v0 --headless \
    --run_name ppo
Training with the PPO configuration with symmetry augmentation:
# PPO training with symmetry augmentation
./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Cartpole-v0 --headless \
    --agent rsl_rl_with_symmetry_cfg_entry_point \
    --run_name ppo_with_symmetry_data_augmentation

# you can use hydra to disable symmetry augmentation but enable mirror loss computation
./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Cartpole-v0 --headless \
    --agent rsl_rl_with_symmetry_cfg_entry_point \
    --run_name ppo_without_symmetry_data_augmentation \
    agent.algorithm.symmetry_cfg.use_data_augmentation=false
The --run_name argument is used to specify the name of the run. It is used to create a directory for the run inside the logs/rsl_rl/cartpole directory.