Configuring an RL Agent#
In the previous tutorial, we saw how to train an RL agent to solve the cartpole balancing task using the Stable-Baselines3 library. In this tutorial, we will see how to configure the training process to use different RL libraries and different training algorithms.
In the directory scripts/reinforcement_learning, you will find the scripts for the different RL libraries. These are organized into subdirectories named after the library. Each subdirectory contains the training and playing scripts for that library.
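For orientation, the layout looks roughly like the following. The subdirectory names below are inferred from the libraries used in this tutorial and the exact contents may differ between releases:

scripts/reinforcement_learning/
├── rl_games/
│   ├── train.py
│   └── play.py
├── rsl_rl/
│   ├── train.py
│   └── play.py
├── sb3/
│   ├── train.py
│   └── play.py
└── skrl/
    ├── train.py
    └── play.py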
To configure a learning library for a specific task, you need to create a configuration file for the learning agent. This configuration file is used to create an instance of the learning agent and to configure the training process. Similar to the environment registration shown in the Registering an Environment tutorial, you register the learning agent's configuration through the gymnasium.register method.
The Code#
As an example, we will look at the configuration included for the task Isaac-Cartpole-v0 in the isaaclab_tasks package. This is the same task that we used in the Training with an RL Agent tutorial.
gym.register(
    id="Isaac-Cartpole-v0",
    entry_point="isaaclab.envs:ManagerBasedRLEnv",
    disable_env_checker=True,
    kwargs={
        "env_cfg_entry_point": f"{__name__}.cartpole_env_cfg:CartpoleEnvCfg",
        "rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
        "rsl_rl_cfg_entry_point": f"{agents.__name__}.rsl_rl_ppo_cfg:CartpolePPORunnerCfg",
        "rsl_rl_with_symmetry_cfg_entry_point": f"{agents.__name__}.rsl_rl_ppo_cfg:CartpolePPORunnerWithSymmetryCfg",
        "skrl_cfg_entry_point": f"{agents.__name__}:skrl_ppo_cfg.yaml",
        "sb3_cfg_entry_point": f"{agents.__name__}:sb3_ppo_cfg.yaml",
    },
)
The Code Explained#
Under the attribute kwargs, we can see the configuration entries for the different learning libraries. The key is the name of the library and the value is the path to the configuration instance. This configuration instance can be a string, a class, or an instance of the class. For example, the value of the key "rl_games_cfg_entry_point" is a string that points to the configuration YAML file for the RL-Games library. Meanwhile, the value of the key "rsl_rl_cfg_entry_point" points to the configuration class for the RSL-RL library.
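To see what these registrations look like at runtime, the following snippet reads the entry points back from the Gymnasium registry. This is only an illustrative sketch: it relies on the standard gymnasium.spec API and assumes that isaaclab_tasks has been imported so that the task is registered (which in practice requires the simulation app to be launched first).

import gymnasium as gym

import isaaclab_tasks  # noqa: F401  (importing the package registers the tasks)

# look up the registration entry for the cartpole task
spec = gym.spec("Isaac-Cartpole-v0")

# print the configuration entry point registered for each learning library
for key, value in spec.kwargs.items():
    if key.endswith("_cfg_entry_point"):
        print(f"{key}: {value}")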
The pattern used for specifying an agent configuration class closely follows the one used for specifying the environment configuration entry point. This means that the following two approaches are equivalent:
Specifying the configuration entry point as a string
from . import agents

gym.register(
    id="Isaac-Cartpole-v0",
    entry_point="isaaclab.envs:ManagerBasedRLEnv",
    disable_env_checker=True,
    kwargs={
        "env_cfg_entry_point": f"{__name__}.cartpole_env_cfg:CartpoleEnvCfg",
        "rsl_rl_cfg_entry_point": f"{agents.__name__}.rsl_rl_ppo_cfg:CartpolePPORunnerCfg",
    },
)
Specifying the configuration entry point as a class
from . import agents

gym.register(
    id="Isaac-Cartpole-v0",
    entry_point="isaaclab.envs:ManagerBasedRLEnv",
    disable_env_checker=True,
    kwargs={
        "env_cfg_entry_point": f"{__name__}.cartpole_env_cfg:CartpoleEnvCfg",
        "rsl_rl_cfg_entry_point": agents.rsl_rl_ppo_cfg.CartpolePPORunnerCfg,
    },
)
The first code block is the preferred way to specify the configuration entry point. The second code block is equivalent, but it forces the configuration class to be imported at registration time, which slows down the import of the task package. This is why we recommend using strings for the configuration entry point.
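To illustrate why the string form defers the import, here is a minimal sketch of how a "module:attribute" entry-point string can be resolved lazily. This is not Isaac Lab's actual loading code (its utilities handle this internally, including the YAML-style entry points); it only shows the general pattern:

import importlib


def resolve_entry_point(entry_point: str):
    """Resolve a "some.module:SomeAttribute" string into the object it names.

    The module is imported only when this function is called, i.e. when the
    configuration is actually needed, not when the task is registered.
    """
    module_name, attr_name = entry_point.split(":")
    module = importlib.import_module(module_name)
    return getattr(module, attr_name)


# hypothetical usage (the module path here is just for illustration):
# cfg_class = resolve_entry_point("my_tasks.cartpole.agents.rsl_rl_ppo_cfg:CartpolePPORunnerCfg")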
All the scripts in the scripts/reinforcement_learning directory are configured by default to read the <library_name>_cfg_entry_point from the kwargs dictionary to retrieve the configuration instance.
For instance, the following code block shows how the train.py script reads the configuration instance for the Stable-Baselines3 library:
Code for train.py with SB3
# Copyright (c) 2022-2025, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTORS.md).
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause


"""Script to train RL agent with Stable Baselines3."""

"""Launch Isaac Sim Simulator first."""

import argparse
import contextlib
import signal
import sys
from pathlib import Path

from isaaclab.app import AppLauncher

# add argparse arguments
parser = argparse.ArgumentParser(description="Train an RL agent with Stable-Baselines3.")
parser.add_argument("--video", action="store_true", default=False, help="Record videos during training.")
parser.add_argument("--video_length", type=int, default=200, help="Length of the recorded video (in steps).")
parser.add_argument("--video_interval", type=int, default=2000, help="Interval between video recordings (in steps).")
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument(
    "--agent", type=str, default="sb3_cfg_entry_point", help="Name of the RL agent configuration entry point."
)
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
parser.add_argument("--log_interval", type=int, default=100_000, help="Log data every n timesteps.")
parser.add_argument("--checkpoint", type=str, default=None, help="Continue the training from checkpoint.")
parser.add_argument("--max_iterations", type=int, default=None, help="RL Policy training iterations.")
parser.add_argument("--export_io_descriptors", action="store_true", default=False, help="Export IO descriptors.")
parser.add_argument(
    "--keep_all_info",
    action="store_true",
    default=False,
    help="Use a slower SB3 wrapper but keep all the extra training info.",
)
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli, hydra_args = parser.parse_known_args()
# always enable cameras to record video
if args_cli.video:
    args_cli.enable_cameras = True

# clear out sys.argv for Hydra
sys.argv = [sys.argv[0]] + hydra_args

# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app


def cleanup_pbar(*args):
    """
    A small helper to stop training and
    cleanup progress bar properly on ctrl+c
    """
    import gc

    tqdm_objects = [obj for obj in gc.get_objects() if "tqdm" in type(obj).__name__]
    for tqdm_object in tqdm_objects:
        if "tqdm_rich" in type(tqdm_object).__name__:
            tqdm_object.close()
    raise KeyboardInterrupt


# disable KeyboardInterrupt override
signal.signal(signal.SIGINT, cleanup_pbar)

"""Rest everything follows."""

import gymnasium as gym
import numpy as np
import os
import random
from datetime import datetime

import omni
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import CheckpointCallback, LogEveryNTimesteps
from stable_baselines3.common.vec_env import VecNormalize

from isaaclab.envs import (
    DirectMARLEnv,
    DirectMARLEnvCfg,
    DirectRLEnvCfg,
    ManagerBasedRLEnvCfg,
    multi_agent_to_single_agent,
)
from isaaclab.utils.dict import print_dict
from isaaclab.utils.io import dump_pickle, dump_yaml

from isaaclab_rl.sb3 import Sb3VecEnvWrapper, process_sb3_cfg

import isaaclab_tasks  # noqa: F401
from isaaclab_tasks.utils.hydra import hydra_task_config

# PLACEHOLDER: Extension template (do not remove this comment)


@hydra_task_config(args_cli.task, args_cli.agent)
def main(env_cfg: ManagerBasedRLEnvCfg | DirectRLEnvCfg | DirectMARLEnvCfg, agent_cfg: dict):
    """Train with stable-baselines agent."""
    # randomly sample a seed if seed = -1
    if args_cli.seed == -1:
        args_cli.seed = random.randint(0, 10000)

    # override configurations with non-hydra CLI arguments
    env_cfg.scene.num_envs = args_cli.num_envs if args_cli.num_envs is not None else env_cfg.scene.num_envs
    agent_cfg["seed"] = args_cli.seed if args_cli.seed is not None else agent_cfg["seed"]
    # max iterations for training
    if args_cli.max_iterations is not None:
        agent_cfg["n_timesteps"] = args_cli.max_iterations * agent_cfg["n_steps"] * env_cfg.scene.num_envs

    # set the environment seed
    # note: certain randomizations occur in the environment initialization so we set the seed here
    env_cfg.seed = agent_cfg["seed"]
    env_cfg.sim.device = args_cli.device if args_cli.device is not None else env_cfg.sim.device

    # directory for logging into
    run_info = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    log_root_path = os.path.abspath(os.path.join("logs", "sb3", args_cli.task))
    print(f"[INFO] Logging experiment in directory: {log_root_path}")
    # The Ray Tune workflow extracts experiment name using the logging line below, hence, do not change it (see PR #2346, comment-2819298849)
    print(f"Exact experiment name requested from command line: {run_info}")
    log_dir = os.path.join(log_root_path, run_info)
    # dump the configuration into log-directory
    dump_yaml(os.path.join(log_dir, "params", "env.yaml"), env_cfg)
    dump_yaml(os.path.join(log_dir, "params", "agent.yaml"), agent_cfg)
    dump_pickle(os.path.join(log_dir, "params", "env.pkl"), env_cfg)
    dump_pickle(os.path.join(log_dir, "params", "agent.pkl"), agent_cfg)

    # save command used to run the script
    command = " ".join(sys.orig_argv)
    (Path(log_dir) / "command.txt").write_text(command)

    # post-process agent configuration
    agent_cfg = process_sb3_cfg(agent_cfg, env_cfg.scene.num_envs)
    # read configurations about the agent-training
    policy_arch = agent_cfg.pop("policy")
    n_timesteps = agent_cfg.pop("n_timesteps")

    # set the IO descriptors export flag if requested
    if isinstance(env_cfg, ManagerBasedRLEnvCfg):
        env_cfg.export_io_descriptors = args_cli.export_io_descriptors
    else:
        omni.log.warn(
            "IO descriptors are only supported for manager based RL environments. No IO descriptors will be exported."
        )

    # set the log directory for the environment (works for all environment types)
    env_cfg.log_dir = log_dir

    # create isaac environment
    env = gym.make(args_cli.task, cfg=env_cfg, render_mode="rgb_array" if args_cli.video else None)

    # convert to single-agent instance if required by the RL algorithm
    if isinstance(env.unwrapped, DirectMARLEnv):
        env = multi_agent_to_single_agent(env)

    # wrap for video recording
    if args_cli.video:
        video_kwargs = {
            "video_folder": os.path.join(log_dir, "videos", "train"),
            "step_trigger": lambda step: step % args_cli.video_interval == 0,
            "video_length": args_cli.video_length,
            "disable_logger": True,
        }
        print("[INFO] Recording videos during training.")
        print_dict(video_kwargs, nesting=4)
        env = gym.wrappers.RecordVideo(env, **video_kwargs)

    # wrap around environment for stable baselines
    env = Sb3VecEnvWrapper(env, fast_variant=not args_cli.keep_all_info)

    norm_keys = {"normalize_input", "normalize_value", "clip_obs"}
    norm_args = {}
    for key in norm_keys:
        if key in agent_cfg:
            norm_args[key] = agent_cfg.pop(key)

    if norm_args and norm_args.get("normalize_input"):
        print(f"Normalizing input, {norm_args=}")
        env = VecNormalize(
            env,
            training=True,
            norm_obs=norm_args["normalize_input"],
            norm_reward=norm_args.get("normalize_value", False),
            clip_obs=norm_args.get("clip_obs", 100.0),
            gamma=agent_cfg["gamma"],
            clip_reward=np.inf,
        )

    # create agent from stable baselines
    agent = PPO(policy_arch, env, verbose=1, tensorboard_log=log_dir, **agent_cfg)
    if args_cli.checkpoint is not None:
        agent = agent.load(args_cli.checkpoint, env, print_system_info=True)

    # callbacks for agent
    checkpoint_callback = CheckpointCallback(save_freq=1000, save_path=log_dir, name_prefix="model", verbose=2)
    callbacks = [checkpoint_callback, LogEveryNTimesteps(n_steps=args_cli.log_interval)]

    # train the agent
    with contextlib.suppress(KeyboardInterrupt):
        agent.learn(
            total_timesteps=n_timesteps,
            callback=callbacks,
            progress_bar=True,
            log_interval=None,
        )
    # save the final model
    agent.save(os.path.join(log_dir, "model"))
    print("Saving to:")
    print(os.path.join(log_dir, "model.zip"))

    if isinstance(env, VecNormalize):
        print("Saving normalization")
        env.save(os.path.join(log_dir, "model_vecnormalize.pkl"))

    # close the simulator
    env.close()


if __name__ == "__main__":
    # run the main function
    main()
    # close sim app
    simulation_app.close()
The --agent argument specifies which learning-agent configuration to use: its value is the key used to retrieve the configuration instance from the kwargs dictionary. By passing a different value for --agent, you can select an alternate configuration instance.
The Code Execution#
Since the RSL-RL library offers two configuration instances for the cartpole balancing task, we can use the --agent argument to specify which one to use.
Training with the standard PPO configuration:
# standard PPO training
./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Cartpole-v0 --headless \
    --run_name ppo
Training with the PPO configuration with symmetry augmentation:
# PPO training with symmetry augmentation
./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Cartpole-v0 --headless \
    --agent rsl_rl_with_symmetry_cfg_entry_point \
    --run_name ppo_with_symmetry_data_augmentation

# you can use hydra to disable symmetry augmentation but enable mirror loss computation
./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Cartpole-v0 --headless \
    --agent rsl_rl_with_symmetry_cfg_entry_point \
    --run_name ppo_without_symmetry_data_augmentation \
    agent.algorithm.symmetry_cfg.use_data_augmentation=false
The --run_name argument is used to specify the name of the run. This is used to create a directory for the run in the logs/rsl_rl/cartpole directory.