Environment#
The MujocoEnv class defines a top-level simulation object that encapsulates an MjSim object. Robosuite extends this base class with RobotEnv, which additionally encompasses Robot objects, and with the top-level task environments, which capture specific ManipulationTask instances and additional objects.
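For orientation, the sketch below shows how this hierarchy surfaces in practice, assuming the packaged “Lift” task and “Panda” robot shipped with robosuite; the factory call and class paths follow the API documented on this page.

```python
import robosuite as suite
from robosuite.environments.base import MujocoEnv
from robosuite.environments.robot_env import RobotEnv
from robosuite.environments.manipulation.manipulation_env import ManipulationEnv

# A task environment created through the factory is a ManipulationEnv,
# which is a RobotEnv, which in turn is a MujocoEnv.
env = suite.make("Lift", robots="Panda", has_offscreen_renderer=False, use_camera_obs=False)
assert isinstance(env, ManipulationEnv)
assert isinstance(env, RobotEnv)
assert isinstance(env, MujocoEnv)
env.close()
```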
Base Environment#
- class robosuite.environments.base.MujocoEnv(has_renderer=False, has_offscreen_renderer=True, render_camera='frontview', render_collision_mesh=False, render_visual_mesh=True, render_gpu_device_id=-1, control_freq=20, lite_physics=True, horizon=1000, ignore_done=False, hard_reset=True, renderer='mjviewer', renderer_config=None, seed=None)#
Initializes a Mujoco environment.
- Parameters:
has_renderer (bool) – If true, render the simulation state in a viewer instead of headless mode.
has_offscreen_renderer (bool) – True if using off-screen rendering.
render_camera (str) – Name of camera to render if has_renderer is True. Setting this value to None applies the default camera angle, which the user can freely drag / pan with the mouse
render_collision_mesh (bool) – True if rendering collision meshes in camera. False otherwise.
render_visual_mesh (bool) – True if rendering visual meshes in camera. False otherwise.
render_gpu_device_id (int) – corresponds to the GPU device id to use for offscreen rendering. Defaults to -1, in which case the device will be inferred from environment variables (GPUS or CUDA_VISIBLE_DEVICES).
control_freq (float) – how many control signals to receive in every simulated second. This sets the amount of simulation time that passes between every action input.
lite_physics (bool) – Whether to optimize for mujoco forward and step calls to reduce total simulation overhead. Set to False to preserve backward compatibility with datasets collected in robosuite <= 1.4.1.
horizon (int) – Every episode lasts for exactly @horizon timesteps.
ignore_done (bool) – True if never terminating the environment (ignore @horizon).
hard_reset (bool) – If True, re-loads model, sim, and render object upon a reset call, else, only calls sim.reset and resets all robosuite-internal variables
renderer (str) – string for the renderer to use
renderer_config (dict) – dictionary for the renderer configurations
seed (int) – environment seed. Default is None, in which case the environment is unseeded, i.e., random
- Raises:
ValueError – [Invalid renderer selection]
- initialize_time(control_freq)#
Initializes the time constants used for simulation.
- Parameters:
control_freq (float) – Hz rate to run the control loop at within the simulation
- reset()#
Resets the simulation.
- Returns:
Environment observations after the reset occurs
- Return type:
OrderedDict
- step(action)#
Takes a step in simulation with control command @action.
- Parameters:
action (np.array) – Action to execute within the environment
- Returns:
(OrderedDict) observations from the environment
(float) reward from the environment
(bool) whether the current episode is completed or not
(dict) misc information
- Return type:
4-tuple
- Raises:
ValueError – [Steps past episode termination]
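A typical rollout built on reset(), step(), and render() is sketched below; the “Lift” task, “Panda” robot, and the availability of an on-screen viewer are assumptions.

```python
import numpy as np
import robosuite as suite

env = suite.make(
    "Lift",
    robots="Panda",
    has_renderer=True,            # on-screen viewer so render() can draw
    has_offscreen_renderer=False,
    use_camera_obs=False,
    horizon=200,                  # episode terminates after 200 timesteps
)

obs = env.reset()                 # OrderedDict of observations
low, high = env.action_spec       # per-dimension action limits
for _ in range(200):
    action = np.random.uniform(low, high)       # random action within limits
    obs, reward, done, info = env.step(action)  # 4-tuple documented above
    env.render()                                # draw the current simulation state
    if done:
        break
env.close()
```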
- reward(action)#
Reward should be a function of state and action.
- Parameters:
action (np.array) – Action to execute within the environment
- Returns:
Reward from environment
- Return type:
float
- render()#
Renders to an on-screen window.
- observation_spec()#
Returns an observation as an observation specification. An alternative design is to return an OrderedDict where the keys are the observation names and the values are the shapes of the observations. We leave this alternative implementation commented out, as we find the current design easier to use in practice.
- Returns:
Observations from the environment
- Return type:
OrderedDict
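A short sketch of inspecting the specification returned by observation_spec(); the task and robot names are assumptions, and the printed shapes depend on the chosen environment.

```python
import robosuite as suite

env = suite.make("Lift", robots="Panda", has_offscreen_renderer=False, use_camera_obs=False)

# Keys are observation names; values are sample observations, so their shapes
# describe what reset() and step() will return.
spec = env.observation_spec()
for name, value in spec.items():
    print(name, getattr(value, "shape", type(value)))
env.close()
```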
- clear_objects(object_names)#
Clears objects with the name @object_names out of the task space. This is useful for supporting task modes with single types of objects, as in @self.single_object_mode, without changing the model definition.
- Parameters:
object_names (str or list of str) – Name of object(s) to remove from the task workspace
- visualize(vis_settings)#
Performs any needed visualization.
- Parameters:
vis_settings (dict) – Visualization keywords mapped to True/False, determining whether that specific component should be visualized. Should have an “env” keyword as well as any other relevant options specified.
- reset_from_xml_string(xml_string)#
Reloads the environment from an XML description of the environment.
- Parameters:
xml_string (str) – XML string describing the environment, which will be loaded directly into the sim
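A hedged sketch of reloading the simulation from an XML description; the file path my_env.xml is hypothetical, and any valid MJCF string describing the environment could be substituted.

```python
import robosuite as suite

env = suite.make("Lift", robots="Panda", has_offscreen_renderer=False, use_camera_obs=False)

# Load an MJCF description from disk (hypothetical path) and rebuild the sim from it.
with open("my_env.xml") as f:
    xml_string = f.read()
env.reset_from_xml_string(xml_string)
```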
- check_contact(geoms_1, geoms_2=None)#
Finds contact between two geom groups.
- Parameters:
geoms_1 (str or list of str or MujocoModel) – an individual geom name or list of geom names or a model. If a MujocoModel is specified, the geoms checked will be its contact_geoms
geoms_2 (str or list of str or MujocoModel or None) – another individual geom name or list of geom names. If a MujocoModel is specified, the geoms checked will be its contact_geoms. If None, will check any collision with @geoms_1 to any other geom in the environment
- Returns:
True if any geom in @geoms_1 is in contact with any geom in @geoms_2.
- Return type:
bool
- get_contacts(model)#
Checks for any contacts with @model (as defined by @model’s contact_geoms) and returns the set of geom names currently in contact with that model (excluding the geoms that are part of the model itself).
- Parameters:
model (MujocoModel) – Model to check contacts for.
- Returns:
Unique geoms that are actively in contact with this model.
- Return type:
set
- Raises:
AssertionError – [Invalid input type]
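The sketch below exercises both check_contact() and get_contacts(); the cube attribute is specific to the packaged Lift task and is used here only as a convenient MujocoModel.

```python
import robosuite as suite

env = suite.make("Lift", robots="Panda", has_offscreen_renderer=False, use_camera_obs=False)
env.reset()

cube = env.cube                        # Lift's cube object (a MujocoModel)

# True if any of the cube's contact_geoms touches any other geom in the scene
in_contact = env.check_contact(cube)

# Set of geom names currently touching the cube (excluding the cube's own geoms)
touching = env.get_contacts(cube)
print(in_contact, touching)
env.close()
```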
- modify_observable(observable_name, attribute, modifier)#
Modifies the observable with associated name @observable_name, replacing the given @attribute with @modifier.
- Parameters:
observable_name (str) – Observable to modify
attribute (str) – Observable attribute to modify. Options are {‘sensor’, ‘corrupter’, ‘filter’, ‘delayer’, ‘sampling_rate’, ‘enabled’, ‘active’}
modifier (any) – New function / value to replace with for observable. If a function, the new signature should match the function being replaced.
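A sketch of corrupting a proprioceptive observable with Gaussian noise and lowering its sampling rate; the create_gaussian_noise_corrupter helper and the robot0_joint_pos observable name are assumptions based on robosuite’s observable utilities and naming conventions.

```python
import robosuite as suite
from robosuite.utils.observables import create_gaussian_noise_corrupter

env = suite.make("Lift", robots="Panda", has_offscreen_renderer=False, use_camera_obs=False)

# Replace the corrupter so every joint-position reading has additive Gaussian noise,
# and reduce how often that observable is refreshed.
corrupter = create_gaussian_noise_corrupter(mean=0.0, std=0.01)
env.modify_observable("robot0_joint_pos", attribute="corrupter", modifier=corrupter)
env.modify_observable("robot0_joint_pos", attribute="sampling_rate", modifier=10)

obs = env.reset()
env.close()
```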
- close()#
Do any cleanup necessary here.
- property observation_modalities#
Modalities for this environment’s observations.
- Returns:
All observation modalities
- Return type:
set
- property observation_names#
Grabs all names for this environment’s observables.
- Returns:
All observation names
- Return type:
set
- property enabled_observables#
Grabs all names of enabled observables for this environment. An observable is considered enabled if its values are being continually computed / updated at each simulation timestep.
- Returns:
All enabled observation names
- Return type:
set
- property active_observables#
Grabs all names of active observables for this environment. An observable is considered active if its value is being returned in the observation dict from the _get_observations() call or from the step() call (assuming the observable is enabled).
- Returns:
All active observation names
- Return type:
set
- property action_spec#
Action specification should be implemented in subclasses. Action space is represented by a tuple of (low, high), which are two numpy vectors that specify the min/max action limits per dimension.
- property action_dim#
Size of the action space.
- Returns:
Action space dimension
- Return type:
int
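The sketch below reads these properties on a live environment; the task and robot names are assumptions, and the exact modality and observable names depend on the environment configuration.

```python
import robosuite as suite

env = suite.make("Lift", robots="Panda", has_offscreen_renderer=False, use_camera_obs=False)
env.reset()

low, high = env.action_spec              # (low, high) numpy vectors of per-dimension limits
assert env.action_dim == low.shape[0]    # action_dim matches the length of the limit vectors

print(env.observation_modalities)        # modality names (e.g., proprioceptive vs. object)
print(env.observation_names)             # every observable defined for this environment
print(env.enabled_observables)           # observables updated each simulation step
print(env.active_observables)            # observables returned by reset()/step()
env.close()
```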
Robot Environment#
- class robosuite.environments.robot_env.RobotEnv(robots, env_configuration='default', base_types='default', controller_configs=None, initialization_noise=None, use_camera_obs=True, has_renderer=False, has_offscreen_renderer=True, render_camera='frontview', render_collision_mesh=False, render_visual_mesh=True, render_gpu_device_id=-1, control_freq=20, lite_physics=True, horizon=1000, ignore_done=False, hard_reset=True, camera_names='agentview', camera_heights=256, camera_widths=256, camera_depths=False, camera_segmentations=None, robot_configs=None, renderer='mjviewer', renderer_config=None, seed=None)#
Initializes a robot environment in Mujoco.
- Parameters:
robots – Specification for specific robot(s) to be instantiated within this env
env_configuration (str) – Specifies how to position the robot(s) within the environment. Default is “default”, which should be interpreted accordingly by any subclasses.
controller_configs (str or list of dict) – If set, contains relevant controller parameters for creating a custom controller. Else, uses the default controller for this specific task. Should either be single dict if same controller is to be used for all robots or else it should be a list of the same length as “robots” param
base_types (None or str or list of str) – type of base, used to instantiate base models from the base factory. Default is “default”, which is the default base associated with the robot(s) in the ‘robots’ specification. None results in no base, and any other (valid) model overrides the default base. Should either be single str if same base type is to be used for all robots or else it should be a list of the same length as “robots” param
initialization_noise (dict or list of dict) –
Dict containing the initialization noise parameters. The expected keys and corresponding value types are specified below:
- ’magnitude’:
The scale factor of uni-variate random noise applied to each of a robot’s given initial joint positions. Setting this value to None or 0.0 results in no noise being applied. If “gaussian” noise is applied, this magnitude scales the standard deviation; if “uniform” noise is applied, this magnitude sets the bounds of the sampling range
- ’type’:
Type of noise to apply. Can either specify “gaussian” or “uniform”
Should either be single dict if same noise value is to be used for all robots or else it should be a list of the same length as “robots” param
- Note:
Specifying “default” will automatically use the default noise settings. Specifying None will automatically create the required dict with “magnitude” set to 0.0.
use_camera_obs (bool) – if True, every observation includes rendered image(s)
has_renderer (bool) – If true, render the simulation state in a viewer instead of headless mode.
has_offscreen_renderer (bool) – True if using off-screen rendering
render_camera (str) – Name of camera to render if has_renderer is True. Setting this value to None applies the default camera angle, which the user can freely drag / pan with the mouse
render_collision_mesh (bool) – True if rendering collision meshes in camera. False otherwise.
render_visual_mesh (bool) – True if rendering visual meshes in camera. False otherwise.
render_gpu_device_id (int) – corresponds to the GPU device id to use for offscreen rendering. Defaults to -1, in which case the device will be inferred from environment variables (GPUS or CUDA_VISIBLE_DEVICES).
control_freq (float) – how many control signals to receive in every second. This sets the amount of simulation time that passes between every action input.
lite_physics (bool) – Whether to optimize for mujoco forward and step calls to reduce total simulation overhead. Set to False to preserve backward compatibility with datasets collected in robosuite <= 1.4.1.
horizon (int) – Every episode lasts for exactly @horizon timesteps.
ignore_done (bool) – True if never terminating the environment (ignore @horizon).
hard_reset (bool) – If True, re-loads model, sim, and render object upon a reset call, else, only calls sim.reset and resets all robosuite-internal variables
camera_names (str or list of str) –
name of camera to be rendered. Should either be single str if same name is to be used for all cameras’ rendering or else it should be a list of cameras to render.
- Note:
At least one camera must be specified if @use_camera_obs is True.
- Note:
To render all robots’ cameras of a certain type (e.g.: “robotview” or “eye_in_hand”), use the convention “all-{name}” (e.g.: “all-robotview”) to automatically render all camera images from each robot’s camera list.
camera_heights (int or list of int) – height of camera frame. Should either be single int if same height is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.
camera_widths (int or list of int) – width of camera frame. Should either be single int if same width is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.
camera_depths (bool or list of bool) – True if rendering RGB-D, and RGB otherwise. Should either be single bool if same depth setting is to be used for all cameras or else it should be a list of the same length as “camera names” param.
camera_segmentations (None or str or list of str or list of list of str) –
Camera segmentation(s) to use for each camera. Valid options are:
None: no segmentation sensor used
‘instance’: segmentation at the class-instance level
‘class’: segmentation at the class level
‘element’: segmentation at the per-geom level
If not None, multiple types of segmentations can be specified. A [list of str / str or None] specifies [multiple / a single] segmentation(s) to use for all cameras. A list of list of str specifies per-camera segmentation setting(s) to use.
robot_configs (list of dict) – Per-robot configurations set from any subclass initializers.
seed (int) – environment seed. Default is None, in which case the environment is unseeded, i.e., random
- Raises:
ValueError – [Camera obs require offscreen renderer]
ValueError – [Camera name must be specified to use camera obs]
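RobotEnv is typically not instantiated directly; its robot and camera arguments are passed through the task-level factory call. The sketch below enables camera observations; the image/depth key naming of the form “{camera}_image” / “{camera}_depth” is an assumption based on robosuite’s observable naming.

```python
import robosuite as suite

env = suite.make(
    "Lift",
    robots="Panda",
    use_camera_obs=True,                  # camera obs require the offscreen renderer
    has_offscreen_renderer=True,
    has_renderer=False,
    camera_names=["agentview", "robot0_eye_in_hand"],
    camera_heights=128,
    camera_widths=128,
    camera_depths=[False, True],          # RGB for agentview, RGB-D for the hand camera
)

obs = env.reset()
print(obs["agentview_image"].shape)           # (H, W, 3) RGB frame
print(obs["robot0_eye_in_hand_depth"].shape)  # depth map for the hand camera
env.close()
```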
- _load_robots()#
Instantiates robots and stores them within the self.robots attribute
- _check_robot_configuration(robots)#
Sanity check to make sure that the inputted robots and the corresponding requested task/configuration combination are legal. Should be implemented in every specific task module.
- Parameters:
robots (str or list of str) – Inputted requested robots at the task-level environment
Manipulator Environment#
- class robosuite.environments.manipulation.manipulation_env.ManipulationEnv(robots, env_configuration='default', controller_configs=None, base_types='default', gripper_types='default', initialization_noise=None, use_camera_obs=True, has_renderer=False, has_offscreen_renderer=True, render_camera='frontview', render_collision_mesh=False, render_visual_mesh=True, render_gpu_device_id=-1, control_freq=20, lite_physics=True, horizon=1000, ignore_done=False, hard_reset=True, camera_names='agentview', camera_heights=256, camera_widths=256, camera_depths=False, camera_segmentations=None, renderer='mjviewer', renderer_config=None, seed=None)#
Initializes a manipulation-specific robot environment in Mujoco.
- Parameters:
robots – Specification for specific robot arm(s) to be instantiated within this env (e.g., “Sawyer” would generate one arm; [“Panda”, “Panda”, “Sawyer”] would generate three robot arms)
env_configuration (str) – Specifies how to position the robot(s) within the environment. Default is “default”, which should be interpreted accordingly by any subclasses.
controller_configs (str or list of dict) – If set, contains relevant controller parameters for creating a custom controller. Else, uses the default controller for this specific task. Should either be single dict if same controller is to be used for all robots or else it should be a list of the same length as “robots” param
base_types (None or str or list of str) – type of base, used to instantiate base models from the base factory. Default is “default”, which is the default base associated with the robot(s) in the ‘robots’ specification. None results in no base, and any other (valid) model overrides the default base. Should either be single str if same base type is to be used for all robots or else it should be a list of the same length as “robots” param
gripper_types (None or str or list of str) – type of gripper, used to instantiate gripper models from the gripper factory. Default is “default”, which is the default gripper(s) associated with the robot(s) in the ‘robots’ specification. None removes the gripper, and any other (valid) model overrides the default gripper. Should either be single str if same gripper type is to be used for all robots or else it should be a list of the same length as “robots” param
initialization_noise (dict or list of dict) –
Dict containing the initialization noise parameters. The expected keys and corresponding value types are specified below:
- ’magnitude’:
The scale factor of uni-variate random noise applied to each of a robot’s given initial joint positions. Setting this value to None or 0.0 results in no noise being applied. If “gaussian” noise is applied, this magnitude scales the standard deviation; if “uniform” noise is applied, this magnitude sets the bounds of the sampling range
- ’type’:
Type of noise to apply. Can either specify “gaussian” or “uniform”
Should either be single dict if same noise value is to be used for all robots or else it should be a list of the same length as “robots” param
- Note:
Specifying “default” will automatically use the default noise settings. Specifying None will automatically create the required dict with “magnitude” set to 0.0.
use_camera_obs (bool) – if True, every observation includes rendered image(s)
has_renderer (bool) – If true, render the simulation state in a viewer instead of headless mode.
has_offscreen_renderer (bool) – True if using off-screen rendering
render_camera (str) – Name of camera to render if has_renderer is True. Setting this value to None applies the default camera angle, which the user can freely drag / pan with the mouse
render_collision_mesh (bool) – True if rendering collision meshes in camera. False otherwise.
render_visual_mesh (bool) – True if rendering visual meshes in camera. False otherwise.
render_gpu_device_id (int) – corresponds to the GPU device id to use for offscreen rendering. Defaults to -1, in which case the device will be inferred from environment variables (GPUS or CUDA_VISIBLE_DEVICES).
control_freq (float) – how many control signals to receive in every second. This sets the amount of simulation time that passes between every action input.
lite_physics (bool) – Whether to optimize for mujoco forward and step calls to reduce total simulation overhead. Set to False to preserve backward compatibility with datasets collected in robosuite <= 1.4.1.
horizon (int) – Every episode lasts for exactly @horizon timesteps.
ignore_done (bool) – True if never terminating the environment (ignore @horizon).
hard_reset (bool) – If True, re-loads model, sim, and render object upon a reset call, else, only calls sim.reset and resets all robosuite-internal variables
camera_names (str or list of str) –
name of camera to be rendered. Should either be single str if same name is to be used for all cameras’ rendering or else it should be a list of cameras to render.
- Note:
At least one camera must be specified if @use_camera_obs is True.
- Note:
To render all robots’ cameras of a certain type (e.g.: “robotview” or “eye_in_hand”), use the convention “all-{name}” (e.g.: “all-robotview”) to automatically render all camera images from each robot’s camera list.
camera_heights (int or list of int) – height of camera frame. Should either be single int if same height is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.
camera_widths (int or list of int) – width of camera frame. Should either be single int if same width is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.
camera_depths (bool or list of bool) – True if rendering RGB-D, and RGB otherwise. Should either be single bool if same depth setting is to be used for all cameras or else it should be a list of the same length as “camera names” param.
camera_segmentations (None or str or list of str or list of list of str) –
Camera segmentation(s) to use for each camera. Valid options are:
None: no segmentation sensor used
‘instance’: segmentation at the class-instance level
‘class’: segmentation at the class level
‘element’: segmentation at the per-geom level
If not None, multiple types of segmentations can be specified. A [list of str / str or None] specifies [multiple / a single] segmentation(s) to use for all cameras. A list of list of str specifies per-camera segmentation setting(s) to use.
seed (int) – environment seed. Default is None, in which case the environment is unseeded, i.e., random
- Raises:
ValueError – [Camera obs require offscreen renderer]
ValueError – [Camera name must be specified to use camera obs]
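The gripper_types argument is the most common per-robot override at this level; the sketch below swaps in a different gripper model, with the “RethinkGripper” name assumed to exist in robosuite’s gripper factory.

```python
import robosuite as suite

# Replace the Panda's default gripper with the Rethink gripper; passing
# gripper_types=None instead would remove the gripper entirely.
env = suite.make(
    "Lift",
    robots="Panda",
    gripper_types="RethinkGripper",
    use_camera_obs=False,
    has_offscreen_renderer=False,
)
env.reset()
env.close()
```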
- _check_grasp(gripper, object_geoms)#
Checks whether the specified gripper as defined by @gripper is grasping the specified object in the environment. If multiple grippers are specified, will return True if at least one gripper is grasping the object.
By default, this will return True if at least one geom in both the “left_fingerpad” and “right_fingerpad” geom groups are in contact with any geom specified by @object_geoms. Custom gripper geom groups can be specified with @gripper as well.
- Parameters:
gripper (GripperModel or str or list of str or list of list of str or dict) – If a MujocoModel, this is the specific gripper to check for grasping (as defined by “left_fingerpad” and “right_fingerpad” geom groups). Otherwise, this sets custom gripper geom groups which together define a grasp. This can be a string (one group of a single gripper geom), a list of strings (multiple groups of single gripper geoms), a list of lists of strings (multiple groups of multiple gripper geoms), or a dictionary in the case where the robot has multiple arms/grippers. At least one geom from each group must be in contact with any geom in @object_geoms for this method to return True.
object_geoms (str or list of str or MujocoModel) – If a MujocoModel is inputted, will check for any collisions with the model’s contact_geoms. Otherwise, this should be specific geom name(s) composing the object to check for contact.
- Returns:
True if the gripper is grasping the given object
- Return type:
bool
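Inside a ManipulationEnv subclass, _check_grasp is typically used for reward shaping. The sketch below follows the pattern of the packaged Lift task; self.cube and the 0.25 grasp bonus are assumptions borrowed from that task.

```python
# Sketch of a reward term inside a ManipulationEnv subclass (Lift-style).
def reward(self, action=None):
    reward = 0.0
    # Bonus when both fingerpad groups of the robot's gripper touch the cube.
    if self._check_grasp(
        gripper=self.robots[0].gripper,
        object_geoms=self.cube,
    ):
        reward += 0.25
    return reward
```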
- _gripper_to_target(gripper, target, target_type='body', return_distance=False)#
Calculates the (x,y,z) Cartesian distance (target_pos - gripper_pos) from the specified @gripper to the specified @target. If @return_distance is set, will return the Euclidean (scalar) distance instead. If the @gripper is a dict, will return the minimum distance across all grippers.
- Parameters:
gripper (MujocoModel or dict) – Gripper model (or dict of gripper models) to measure the distance from
target (MujocoModel or str) – Either a site / geom / body name, or a model that serves as the target. If a model is given, then the root body will be used as the target.
target_type (str) – One of {“body”, “geom”, or “site”}, corresponding to the type of element @target refers to.
return_distance (bool) – If set, will return Euclidean distance instead of Cartesian distance
- Returns:
(Cartesian or Euclidean) distance from gripper to target
- Return type:
np.array or float
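_gripper_to_target is commonly used for reaching-style shaping terms. The sketch below mirrors the reaching reward of the packaged Lift task; self.cube and the helper name are assumptions.

```python
import numpy as np

# Sketch of a reaching term inside a ManipulationEnv subclass (hypothetical helper name).
def reaching_reward(self):
    dist = self._gripper_to_target(
        gripper=self.robots[0].gripper,
        target=self.cube.root_body,
        target_type="body",
        return_distance=True,   # scalar Euclidean distance instead of an (x, y, z) vector
    )
    return 1 - np.tanh(10.0 * dist)
```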
- _visualize_gripper_to_target(gripper, target, target_type='body')#
Colors the grip visualization site proportional to the Euclidean distance to the specified @target. Colors go from red to green as the gripper gets closer. If a dict of grippers is given, will visualize all grippers to the target.
- Parameters:
gripper (MujocoModel or dict) – Gripper model to update grip site rgb
target (MujocoModel or str) – Either a site / geom / body name, or a model that serves as the target. If a model is given, then the root body will be used as the target.
target_type (str) – One of {“body”, “geom”, or “site”}, corresponding to the type of element @target refers to.
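A sketch of wiring this helper into a subclass’s visualize() override, following the pattern of the packaged Lift task; the “grippers” visualization keyword and self.cube are assumptions based on that task.

```python
# Sketch of a visualize() override inside a ManipulationEnv subclass (Lift-style).
def visualize(self, vis_settings):
    # Run the base visualizations (e.g., the "env" keyword) first.
    super().visualize(vis_settings=vis_settings)
    # Then color the grip site by distance to the cube when gripper viz is enabled.
    if vis_settings["grippers"]:
        self._visualize_gripper_to_target(
            gripper=self.robots[0].gripper,
            target=self.cube,
        )
```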