omni.isaac.lab_tasks.utils.wrappers#
Sub-module for environment wrappers to different learning frameworks.
Wrappers allow you to modify the behavior of an environment without modifying the environment itself. This is useful for modifying the observation space, action space, or reward function. Additionally, they can be used to cast a given environment into the environment class definition used by a particular learning framework. This operation may include handling asymmetric actor-critic observations, casting data between backends such as NumPy and PyTorch, or organizing the returned data into the structure expected by the learning framework.
All wrappers work similarly to the gymnasium.Wrapper class. Using a wrapper is as simple as passing the initialized environment instance to the wrapper constructor. However, since learning frameworks expect different input and output data structures, their wrapper classes are not compatible with each other. Thus, they should always be used in conjunction with the respective learning framework.
For instance, to wrap an environment in the Stable-Baselines3 wrapper, you can do the following:
from omni.isaac.lab_tasks.utils.wrappers.sb3 import Sb3VecEnvWrapper
env = Sb3VecEnvWrapper(env)
RL-Games Wrapper#
Wrapper to configure a ManagerBasedRLEnv or DirectRLEnv instance as an RL-Games vectorized environment.
The following example shows how to wrap an environment for RL-Games and register the environment construction for the RL-Games Runner class:
from rl_games.common import env_configurations, vecenv
from omni.isaac.lab_tasks.utils.wrappers.rl_games import RlGamesGpuEnv, RlGamesVecEnvWrapper
# configuration parameters
rl_device = "cuda:0"
clip_obs = 10.0
clip_actions = 1.0
# wrap around environment for rl-games
env = RlGamesVecEnvWrapper(env, rl_device, clip_obs, clip_actions)
# register the environment to rl-games registry
# note: in agents configuration: environment name must be "rlgpu"
vecenv.register(
"IsaacRlgWrapper", lambda config_name, num_actors, **kwargs: RlGamesGpuEnv(config_name, num_actors, **kwargs)
)
env_configurations.register("rlgpu", {"vecenv_type": "IsaacRlgWrapper", "env_creator": lambda **kwargs: env})
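Once registered, the environment can be consumed by the RL-Games runner. The following is a minimal sketch, assuming agent_cfg is a dictionary holding an RL-Games agent configuration (for example loaded from YAML) whose params.config.env_name entry is set to "rlgpu":
from rl_games.torch_runner import Runner

# create the runner, load the agent configuration and start training
runner = Runner()
runner.load(agent_cfg)
runner.reset()
runner.run({"train": True, "play": False, "sigma": None})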
Classes:
RlGamesVecEnvWrapper: Wraps around the Isaac Lab environment for RL-Games.
RlGamesGpuEnv: Thin wrapper to create an instance of the environment to fit the RL-Games runner.
- class omni.isaac.lab_tasks.utils.wrappers.rl_games.RlGamesVecEnvWrapper[source]#
Bases: IVecEnv
Wraps around Isaac Lab environment for RL-Games.
This class wraps around the Isaac Lab environment. Since RL-Games works directly on GPU buffers, the wrapper handles moving of buffers from the simulation environment to the same device as the learning agent. Additionally, it performs clipping of observations and actions.
For algorithms like asymmetric actor-critic, RL-Games expects a dictionary for observations. This dictionary contains “obs” and “states” which typically correspond to the actor and critic observations respectively.
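As an illustration, the observation dictionary handed to RL-Games has roughly the following structure when the environment defines a "critic" observation group (the shapes and device below are placeholders, not values enforced by the wrapper):
import torch

num_envs, num_obs, num_states = 4096, 48, 96
obs_dict = {
    "obs": torch.zeros(num_envs, num_obs, device="cuda:0"),        # actor observations
    "states": torch.zeros(num_envs, num_states, device="cuda:0"),  # critic (privileged) observations
}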
To use asymmetric actor-critic, the environment observations from ManagerBasedRLEnv or DirectRLEnv must have the key or group name "critic". The observation group is used to set the num_states (int) and state_space (gym.spaces.Box). These are used by the learning agent in RL-Games to allocate buffers in the trajectory memory. Since this is optional for some environments, the wrapper checks if these attributes exist. If they don't, the wrapper defaults to zero privileged observations.
Caution
This class must be the last wrapper in the wrapper chain, because it does not follow the gym.Wrapper interface. Any subsequent wrappers will need to be modified to work with this wrapper.
Methods:
__init__(env, rl_device, clip_obs, clip_actions): Initializes the wrapper instance.
class_name(): Returns the class name of the wrapper.
get_number_of_agents(): Returns the number of actors in the environment.
get_env_info(): Returns the Gym spaces for the environment.
Attributes:
render_mode: Returns the Env render_mode.
observation_space: Returns the Env observation_space.
action_space: Returns the Env action_space.
unwrapped: Returns the base environment of the wrapper.
num_envs: Returns the number of sub-environment instances.
device: Returns the base environment simulation device.
state_space: Returns the Env state_space.
- __init__(env: ManagerBasedRLEnv | DirectRLEnv, rl_device: str, clip_obs: float, clip_actions: float)[source]#
Initializes the wrapper instance.
- Parameters:
env – The environment to wrap around.
rl_device – The device on which agent computations are performed.
clip_obs – The clipping value for observations.
clip_actions – The clipping value for actions.
- Raises:
ValueError – The environment is not inherited from ManagerBasedRLEnv or DirectRLEnv.
ValueError – If specified, the privileged observations (critic) are not of type gym.spaces.Box.
- property render_mode: str | None#
Returns the Env render_mode.
- property observation_space: gym.spaces.Box#
Returns the Env observation_space.
- property action_space: gym.Space#
Returns the Env action_space.
- property unwrapped: ManagerBasedRLEnv | DirectRLEnv#
Returns the base environment of the wrapper.
This will be the bare gymnasium.Env environment, underneath all layers of wrappers.
- property state_space: gym.spaces.Box | None#
Returns the Env state_space.
- class omni.isaac.lab_tasks.utils.wrappers.rl_games.RlGamesGpuEnv[source]#
Bases: IVecEnv
Thin wrapper to create instance of the environment to fit RL-Games runner.
Methods:
__init__(config_name, num_actors, **kwargs): Initialize the environment.
get_number_of_agents(): Get the number of agents in the environment.
get_env_info(): Get the Gym spaces for the environment.
- __init__(config_name: str, num_actors: int, **kwargs)[source]#
Initialize the environment.
- Parameters:
config_name – The name of the environment configuration.
num_actors – The number of actors in the environment. This is not used in this wrapper.
RSL-RL Wrapper#
Wrappers and utilities to configure a ManagerBasedRLEnv instance for the RSL-RL library.
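Analogous to the other frameworks, an environment is wrapped by passing it to the RslRlVecEnvWrapper documented below:
from omni.isaac.lab_tasks.utils.wrappers.rsl_rl import RslRlVecEnvWrapper

env = RslRlVecEnvWrapper(env)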
Functions:
export_policy_as_jit: Export policy into a Torch JIT file.
export_policy_as_onnx: Export policy into a Torch ONNX file.
Classes:
RslRlOnPolicyRunnerCfg: Configuration of the runner for on-policy algorithms.
RslRlPpoActorCriticCfg: Configuration for the PPO actor-critic networks.
RslRlPpoAlgorithmCfg: Configuration for the PPO algorithm.
RslRlVecEnvWrapper: Wraps around the Isaac Lab environment for the RSL-RL library.
- omni.isaac.lab_tasks.utils.wrappers.rsl_rl.export_policy_as_jit(actor_critic: object, normalizer: object | None, path: str, filename='policy.pt')[source]#
Export policy into a Torch JIT file.
- Parameters:
actor_critic – The actor-critic torch module.
normalizer – The empirical normalizer module. If None, Identity is used.
path – The path to the saving directory.
filename – The name of the exported JIT file. Defaults to "policy.pt".
- omni.isaac.lab_tasks.utils.wrappers.rsl_rl.export_policy_as_onnx(actor_critic: object, path: str, normalizer: object | None = None, filename='policy.onnx', verbose=False)[source]#
Export policy into a Torch ONNX file.
- Parameters:
actor_critic – The actor-critic torch module.
normalizer – The empirical normalizer module. If None, Identity is used.
path – The path to the saving directory.
filename – The name of the exported ONNX file. Defaults to "policy.onnx".
verbose – Whether to print the model summary. Defaults to False.
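A usage sketch for both export functions; the runner object and its alg.actor_critic and obs_normalizer attributes are assumptions about a trained RSL-RL OnPolicyRunner, not part of this module:
from omni.isaac.lab_tasks.utils.wrappers.rsl_rl import export_policy_as_jit, export_policy_as_onnx

# `runner` is assumed to be a trained rsl_rl OnPolicyRunner instance
export_policy_as_jit(runner.alg.actor_critic, runner.obs_normalizer, path="exported", filename="policy.pt")
export_policy_as_onnx(runner.alg.actor_critic, path="exported", normalizer=runner.obs_normalizer, filename="policy.onnx")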
- class omni.isaac.lab_tasks.utils.wrappers.rsl_rl.RslRlOnPolicyRunnerCfg[source]#
Bases: object
Configuration of the runner for on-policy algorithms.
Methods:
__init__([seed, device, num_steps_per_env, ...])
Attributes:
seed: The seed for the experiment.
num_steps_per_env: The number of steps per environment per update.
max_iterations: The maximum number of iterations.
empirical_normalization: Whether to use empirical normalization.
policy: The policy configuration.
algorithm: The algorithm configuration.
save_interval: The number of iterations between saves.
experiment_name: The experiment name.
run_name: The run name.
logger: The logger to use.
neptune_project: The neptune project name.
wandb_project: The wandb project name.
resume: Whether to resume.
load_run: The run directory to load.
load_checkpoint: The checkpoint file to load.
- __init__(seed: int = <factory>, device: str = <factory>, num_steps_per_env: int = <factory>, max_iterations: int = <factory>, empirical_normalization: bool = <factory>, policy: RslRlPpoActorCriticCfg = <factory>, algorithm: RslRlPpoAlgorithmCfg = <factory>, save_interval: int = <factory>, experiment_name: str = <factory>, run_name: str = <factory>, logger: Literal['tensorboard', 'neptune', 'wandb'] = <factory>, neptune_project: str = <factory>, wandb_project: str = <factory>, resume: bool = <factory>, load_run: str = <factory>, load_checkpoint: str = <factory>) → None#
- policy: RslRlPpoActorCriticCfg#
The policy configuration.
- algorithm: RslRlPpoAlgorithmCfg#
The algorithm configuration.
- run_name: str#
The run name. Defaults to an empty string.
The name of the run directory is typically the time-stamp at execution. If the run name is not empty, it is appended to the run directory's name, i.e. the logging directory's name becomes {time-stamp}_{run_name}.
- class omni.isaac.lab_tasks.utils.wrappers.rsl_rl.RslRlPpoActorCriticCfg[source]#
Bases: object
Configuration for the PPO actor-critic networks.
Attributes:
class_name: The policy class name.
init_noise_std: The initial noise standard deviation for the policy.
actor_hidden_dims: The hidden dimensions of the actor network.
critic_hidden_dims: The hidden dimensions of the critic network.
activation: The activation function for the actor and critic networks.
Methods:
__init__([class_name, init_noise_std, ...])
- class omni.isaac.lab_tasks.utils.wrappers.rsl_rl.RslRlPpoAlgorithmCfg[source]#
Bases: object
Configuration for the PPO algorithm.
Attributes:
class_name: The algorithm class name.
value_loss_coef: The coefficient for the value loss.
use_clipped_value_loss: Whether to use clipped value loss.
clip_param: The clipping parameter for the policy.
entropy_coef: The coefficient for the entropy loss.
num_learning_epochs: The number of learning epochs per update.
num_mini_batches: The number of mini-batches per update.
learning_rate: The learning rate for the policy.
schedule: The learning rate schedule.
gamma: The discount factor.
lam: The lambda parameter for Generalized Advantage Estimation (GAE).
desired_kl: The desired KL divergence.
max_grad_norm: The maximum gradient norm.
Methods:
__init__([class_name, value_loss_coef, ...])
- __init__(class_name: str = <factory>, value_loss_coef: float = <factory>, use_clipped_value_loss: bool = <factory>, clip_param: float = <factory>, entropy_coef: float = <factory>, num_learning_epochs: int = <factory>, num_mini_batches: int = <factory>, learning_rate: float = <factory>, schedule: str = <factory>, gamma: float = <factory>, lam: float = <factory>, desired_kl: float = <factory>, max_grad_norm: float = <factory>) → None#
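The three configuration classes above are typically combined into a task-specific runner configuration, as in the following sketch. The class name, task, hyperparameter values, and the configclass decorator import are illustrative assumptions based on common Isaac Lab usage, not defaults prescribed by this module:
from omni.isaac.lab.utils import configclass
from omni.isaac.lab_tasks.utils.wrappers.rsl_rl import (
    RslRlOnPolicyRunnerCfg,
    RslRlPpoActorCriticCfg,
    RslRlPpoAlgorithmCfg,
)


@configclass
class CartpolePPORunnerCfg(RslRlOnPolicyRunnerCfg):
    # runner settings (values are placeholders)
    num_steps_per_env = 16
    max_iterations = 150
    save_interval = 50
    experiment_name = "cartpole"
    empirical_normalization = False
    # actor-critic network configuration
    policy = RslRlPpoActorCriticCfg(
        init_noise_std=1.0,
        actor_hidden_dims=[32, 32],
        critic_hidden_dims=[32, 32],
        activation="elu",
    )
    # PPO algorithm configuration
    algorithm = RslRlPpoAlgorithmCfg(
        value_loss_coef=1.0,
        use_clipped_value_loss=True,
        clip_param=0.2,
        entropy_coef=0.005,
        num_learning_epochs=5,
        num_mini_batches=4,
        learning_rate=1.0e-3,
        schedule="adaptive",
        gamma=0.99,
        lam=0.95,
        desired_kl=0.01,
        max_grad_norm=1.0,
    )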
- class omni.isaac.lab_tasks.utils.wrappers.rsl_rl.RslRlVecEnvWrapper[source]#
Bases: VecEnv
Wraps around the Isaac Lab environment for the RSL-RL library.
To use asymmetric actor-critic, the environment instance must have the attribute num_privileged_obs (int). This is used by the learning agent to allocate buffers in the trajectory memory. Additionally, the returned observations should have the key "critic", which corresponds to the privileged observations. Since this is optional for some environments, the wrapper checks if these attributes exist. If they don't, the wrapper defaults to zero privileged observations.
Caution
This class must be the last wrapper in the wrapper chain, because it does not follow the gym.Wrapper interface. Any subsequent wrappers will need to be modified to work with this wrapper.
Methods:
__init__(env): Initializes the wrapper.
class_name(): Returns the class name of the wrapper.
get_observations(): Returns the current observations of the environment.
Attributes:
cfg: Returns the configuration class instance of the environment.
render_mode: Returns the Env render_mode.
observation_space: Returns the Env observation_space.
action_space: Returns the Env action_space.
unwrapped: Returns the base environment of the wrapper.
episode_length_buf: The episode length buffer.
- __init__(env: ManagerBasedRLEnv | DirectRLEnv)[source]#
Initializes the wrapper.
Note
The wrapper calls reset() at the start since the RSL-RL runner does not call reset.
- Parameters:
env – The environment to wrap around.
- Raises:
ValueError – When the environment is not an instance of ManagerBasedRLEnv or DirectRLEnv.
- property render_mode: str | None#
Returns the Env render_mode.
- property observation_space: Space#
Returns the Env observation_space.
- property action_space: Space#
Returns the Env action_space.
- property unwrapped: ManagerBasedRLEnv | DirectRLEnv#
Returns the base environment of the wrapper.
This will be the bare gymnasium.Env environment, underneath all layers of wrappers.
- get_observations() → tuple[torch.Tensor, dict][source]#
Returns the current observations of the environment.
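A small sketch of how the return value is typically consumed; the "observations"/"critic" keys in the extras dictionary follow the RSL-RL VecEnv convention and are stated here as an assumption:
obs, extras = env.get_observations()
# privileged (critic) observations, if the environment defines a "critic" group
critic_obs = extras["observations"].get("critic", obs)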
- property episode_length_buf: torch.Tensor#
The episode length buffer.
SKRL Wrapper#
Wrapper to configure an Isaac Lab environment instance as a skrl environment.
The following example shows how to wrap an environment for skrl:
from omni.isaac.lab_tasks.utils.wrappers.skrl import SkrlVecEnvWrapper
env = SkrlVecEnvWrapper(env, ml_framework="torch") # or ml_framework="jax"
Or, equivalently, by directly calling the skrl library API as follows:
from skrl.envs.torch.wrappers import wrap_env # for PyTorch, or...
from skrl.envs.jax.wrappers import wrap_env # for JAX
env = wrap_env(env, wrapper="isaaclab")
Functions:
SkrlVecEnvWrapper: Wraps around the Isaac Lab environment for skrl.
- omni.isaac.lab_tasks.utils.wrappers.skrl.SkrlVecEnvWrapper(env: ManagerBasedRLEnv | DirectRLEnv | DirectMARLEnv, ml_framework: Literal['torch', 'jax', 'jax-numpy'] = 'torch', wrapper: Literal['auto', 'isaaclab', 'isaaclab-single-agent', 'isaaclab-multi-agent'] = 'isaaclab')[source]#
Wraps around Isaac Lab environment for skrl.
This function wraps around the Isaac Lab environment. Since the wrapping functionality is defined within the skrl library itself, this implementation is maintained for compatibility with the structure of the extension that contains it. Internally it calls wrap_env() from the skrl library API.
- Parameters:
env – The environment to wrap around.
ml_framework – The ML framework to use for the wrapper. Defaults to “torch”.
wrapper – The wrapper to use. Defaults to “isaaclab”: leave it to skrl to determine if the environment will be wrapped as single-agent or multi-agent.
- Raises:
ValueError – When the environment is not an instance of any Isaac Lab environment interface.
ValueError – If the specified ML framework is not valid.
Stable-Baselines3 Wrapper#
Wrapper to configure a ManagerBasedRLEnv or DirectRLEnv instance as a Stable-Baselines3 vectorized environment.
The following example shows how to wrap an environment for Stable-Baselines3:
from omni.isaac.lab_tasks.utils.wrappers.sb3 import Sb3VecEnvWrapper
env = Sb3VecEnvWrapper(env)
Functions:
process_sb3_cfg: Convert simple YAML types to Stable-Baselines classes/components.
Classes:
Sb3VecEnvWrapper: Wraps around the Isaac Lab environment for Stable-Baselines3.
- omni.isaac.lab_tasks.utils.wrappers.sb3.process_sb3_cfg(cfg: dict) → dict[source]#
Convert simple YAML types to Stable-Baselines classes/components.
- Parameters:
cfg – A configuration dictionary.
- Returns:
A dictionary containing the converted configuration.
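A hedged usage sketch; the contents of agent_cfg below are illustrative placeholders (as if loaded from a YAML file), not a recommended configuration:
from stable_baselines3 import PPO

from omni.isaac.lab_tasks.utils.wrappers.sb3 import Sb3VecEnvWrapper, process_sb3_cfg

# illustrative agent configuration
agent_cfg = {"policy": "MlpPolicy", "n_steps": 16, "batch_size": 4096, "learning_rate": 3.0e-4}
agent_cfg = process_sb3_cfg(agent_cfg)
policy_arch = agent_cfg.pop("policy")

env = Sb3VecEnvWrapper(env)  # `env` is assumed to be a ManagerBasedRLEnv or DirectRLEnv instance
agent = PPO(policy_arch, env, verbose=1, **agent_cfg)
agent.learn(total_timesteps=1_000_000)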
- class omni.isaac.lab_tasks.utils.wrappers.sb3.Sb3VecEnvWrapper[source]#
Bases: VecEnv
VecEnv
Wraps around Isaac Lab environment for Stable Baselines3.
Isaac Sim internally implements a vectorized environment. However, since it is still considered a single environment instance, Stable-Baselines3 tries to wrap around it using the DummyVecEnv. This is only done if the environment does not inherit from their VecEnv. Thus, this class thinly wraps over the environment from ManagerBasedRLEnv or DirectRLEnv.
Note
While Stable-Baselines3 supports Gym 0.26+ API, their vectorized environment still uses the old API (i.e. it is closer to Gym 0.21). Thus, we implement the old API for the vectorized environment.
We also add monitoring functionality that computes the un-discounted episode return and length. This information is added to the info dicts under the key "episode".
In contrast to the Isaac Lab environment, Stable-Baselines3 expects the following:
numpy datatype for MDP signals
a list of info dicts for each sub-environment (instead of a dict)
when an environment has terminated, the observations from the environment should correspond to the ones after reset. The "real" final observation is passed using the info dicts under the key terminal_observation (see the sketch below).
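For example, the true final observation of a terminated sub-environment can be recovered from the info dicts following the standard Stable-Baselines3 VecEnv convention (a sketch; actions is assumed to be a valid action batch):
obs, rewards, dones, infos = env.step(actions)
for env_idx, done in enumerate(dones):
    if done:
        # `obs[env_idx]` is already the post-reset observation; the actual
        # final observation of the episode is provided here instead:
        final_obs = infos[env_idx]["terminal_observation"]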
Warning
By the nature of physics stepping in Isaac Sim, it is not possible to forward the simulation buffers without performing a physics step. Thus, reset is performed inside the step() function after the actual physics step is taken. As a result, the returned observations for terminated environments are the ones after the reset.
Caution
This class must be the last wrapper in the wrapper chain, because it does not follow the gym.Wrapper interface. Any subsequent wrappers will need to be modified to work with this wrapper.
Reference:
https://stable-baselines3.readthedocs.io/en/master/guide/vec_envs.html
https://stable-baselines3.readthedocs.io/en/master/common/monitor.html
Methods:
__init__(env): Initialize the wrapper.
class_name(): Returns the class name of the wrapper.
get_episode_rewards(): Returns the rewards of all the episodes.
get_episode_lengths(): Returns the number of time-steps of all the episodes.
Attributes:
unwrapped: Returns the base environment of the wrapper.
- __init__(env: ManagerBasedRLEnv | DirectRLEnv)[source]#
Initialize the wrapper.
- Parameters:
env – The environment to wrap around.
- Raises:
ValueError – When the environment is not an instance of ManagerBasedRLEnv or DirectRLEnv.
- property unwrapped: ManagerBasedRLEnv | DirectRLEnv#
Returns the base environment of the wrapper.
This will be the bare gymnasium.Env environment, underneath all layers of wrappers.