robosuite.environments.manipulation package#

Submodules#

robosuite.environments.manipulation.door module#

class robosuite.environments.manipulation.door.Door(robots, env_configuration='default', controller_configs=None, gripper_types='default', initialization_noise='default', use_latch=True, use_camera_obs=True, use_object_obs=True, reward_scale=1.0, reward_shaping=False, placement_initializer=None, has_renderer=False, has_offscreen_renderer=True, render_camera='frontview', render_collision_mesh=False, render_visual_mesh=True, render_gpu_device_id=- 1, control_freq=20, horizon=1000, ignore_done=False, hard_reset=True, camera_names='agentview', camera_heights=256, camera_widths=256, camera_depths=False, camera_segmentations=None, renderer='mujoco', renderer_config=None)#

Bases: robosuite.environments.manipulation.single_arm_env.SingleArmEnv

This class corresponds to the door opening task for a single robot arm.

Parameters
  • robots (str or list of str) – Specification for specific robot arm(s) to be instantiated within this env (e.g.: “Sawyer” would generate one arm; [“Panda”, “Panda”, “Sawyer”] would generate three robot arms). Note: Must be a single single-arm robot!

  • env_configuration (str) – Specifies how to position the robots within the environment (default is “default”). For most single arm environments, this argument has no impact on the robot setup.

  • controller_configs (str or list of dict) – If set, contains relevant controller parameters for creating a custom controller. Else, uses the default controller for this specific task. Should either be single dict if same controller is to be used for all robots or else it should be a list of the same length as “robots” param

  • gripper_types (str or list of str) – type of gripper, used to instantiate gripper models from gripper factory. Default is “default”, which is the default gripper(s) associated with the robot(s) in the ‘robots’ specification. None removes the gripper, and any other (valid) model overrides the default gripper. Should either be single str if same gripper type is to be used for all robots or else it should be a list of the same length as “robots” param

  • initialization_noise (dict or list of dict) –

    Dict containing the initialization noise parameters. The expected keys and corresponding value types are specified below:

    ’magnitude’

    The scale factor of uni-variate random noise applied to each of a robot’s given initial joint positions. Setting this value to None or 0.0 results in no noise being applied. If “gaussian” type of noise is applied, then this magnitude scales the standard deviation applied; if “uniform” type of noise is applied, then this magnitude sets the bounds of the sampling range.

    ’type’

    Type of noise to apply. Can either specify “gaussian” or “uniform”

    Should either be single dict if same noise value is to be used for all robots or else it should be a list of the same length as “robots” param

    Note

    Specifying “default” will automatically use the default noise settings. Specifying None will automatically create the required dict with “magnitude” set to 0.0.

  • use_latch (bool) – if True, uses a spring-loaded handle and latch to “lock” the door closed initially. Otherwise, the door is instantiated with a fixed handle.

  • use_camera_obs (bool) – if True, every observation includes rendered image(s)

  • use_object_obs (bool) – if True, include object (door and handle) information in the observation.

  • reward_scale (None or float) – Scales the normalized reward function by the amount specified. If None, environment reward remains unnormalized

  • reward_shaping (bool) – if True, use dense rewards.

  • placement_initializer (ObjectPositionSampler) – if provided, will be used to place objects on every reset, else a UniformRandomSampler is used by default.

  • has_renderer (bool) – If true, render the simulation state in a viewer instead of headless mode.

  • has_offscreen_renderer (bool) – True if using off-screen rendering

  • render_camera (str) – Name of camera to render if has_renderer is True. Setting this value to ‘None’ will result in the default angle being applied, which is useful as it can be dragged / panned by the user using the mouse

  • render_collision_mesh (bool) – True if rendering collision meshes in camera. False otherwise.

  • render_visual_mesh (bool) – True if rendering visual meshes in camera. False otherwise.

  • render_gpu_device_id (int) – corresponds to the GPU device id to use for offscreen rendering. Defaults to -1, in which case the device will be inferred from environment variables (GPUS or CUDA_VISIBLE_DEVICES).

  • control_freq (float) – how many control signals to receive in every second. This sets the amount of simulation time that passes between every action input.

  • horizon (int) – Every episode lasts for exactly @horizon timesteps.

  • ignore_done (bool) – True if never terminating the environment (ignore @horizon).

  • hard_reset (bool) – If True, re-loads model, sim, and render object upon a reset call, else, only calls sim.reset and resets all robosuite-internal variables

  • camera_names (str or list of str) –

    name of camera to be rendered. Should either be single str if same name is to be used for all cameras’ rendering or else it should be a list of cameras to render.

    Note

    At least one camera must be specified if @use_camera_obs is True.

    Note

    To render all robots’ cameras of a certain type (e.g.: “robotview” or “eye_in_hand”), use the convention “all-{name}” (e.g.: “all-robotview”) to automatically render all camera images from each robot’s camera list.

  • camera_heights (int or list of int) – height of camera frame. Should either be single int if same height is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_widths (int or list of int) – width of camera frame. Should either be single int if same width is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_depths (bool or list of bool) – True if rendering RGB-D, and RGB otherwise. Should either be single bool if same depth setting is to be used for all cameras or else it should be a list of the same length as “camera names” param.

  • camera_segmentations (None or str or list of str or list of list of str) –

    Camera segmentation(s) to use for each camera. Valid options are:

    None: no segmentation sensor used

    ‘instance’: segmentation at the class-instance level

    ‘class’: segmentation at the class level

    ‘element’: segmentation at the per-geom level

    If not None, multiple types of segmentations can be specified. A single str (or None) or a list of str specifies the segmentation(s) to use for all cameras; a list of list of str specifies per-camera segmentation setting(s) to use.

Raises

AssertionError – [Invalid number of robots specified]
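
As a usage illustration (not part of the generated docstring), the following sketch shows how this environment might be created through robosuite.make and stepped once with a random action. Parameter values are illustrative, and the sketch assumes the standard suite.make / env.reset / env.step / env.action_spec interface:

    import numpy as np
    import robosuite as suite

    # Door-opening task with a latched handle and dense (shaped) rewards.
    # Camera observations are disabled, so no offscreen renderer is needed.
    env = suite.make(
        "Door",
        robots="Panda",              # must be a single single-arm robot
        use_latch=True,              # handle must be rotated before the door can open
        reward_shaping=True,         # dense rewards instead of the sparse success reward
        use_camera_obs=False,
        has_offscreen_renderer=False,
        has_renderer=False,
        horizon=500,
    )

    obs = env.reset()
    low, high = env.action_spec     # per-dimension action bounds
    action = np.random.uniform(low, high)
    obs, reward, done, info = env.step(action)
    env.close()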

reward(action=None)#

Reward function for the task.

Sparse un-normalized reward:

  • a discrete reward of 1.0 is provided if the door is opened

Un-normalized summed components if using reward shaping:

  • Reaching: in [0, 0.25], proportional to the distance between door handle and robot arm

  • Rotating: in [0, 0.25], proportional to the angle rotated by the door handle. Note that this component is only relevant if the environment is using the locked door version (use_latch=True)

Note that a successfully completed task (door opened) will return 1.0 regardless of whether the environment is using sparse or shaped rewards

Note that the final reward is normalized and scaled by reward_scale / 1.0 as well so that the max score is equal to reward_scale

Parameters

action (np.array) – [NOT USED]

Returns

reward value

Return type

float

visualize(vis_settings)#

In addition to super call, visualize gripper site proportional to the distance to the door handle.

Parameters

vis_settings (dict) – Visualization keywords mapped to T/F, determining whether that specific component should be visualized. Should have “grippers” keyword as well as any other relevant options specified.

robosuite.environments.manipulation.lift module#

class robosuite.environments.manipulation.lift.Lift(robots, env_configuration='default', controller_configs=None, gripper_types='default', initialization_noise='default', table_full_size=(0.8, 0.8, 0.05), table_friction=(1.0, 0.005, 0.0001), use_camera_obs=True, use_object_obs=True, reward_scale=1.0, reward_shaping=False, placement_initializer=None, has_renderer=False, has_offscreen_renderer=True, render_camera='frontview', render_collision_mesh=False, render_visual_mesh=True, render_gpu_device_id=- 1, control_freq=20, horizon=1000, ignore_done=False, hard_reset=True, camera_names='agentview', camera_heights=256, camera_widths=256, camera_depths=False, camera_segmentations=None, renderer='mujoco', renderer_config=None)#

Bases: robosuite.environments.manipulation.single_arm_env.SingleArmEnv

This class corresponds to the lifting task for a single robot arm.

Parameters
  • robots (str or list of str) – Specification for specific robot arm(s) to be instantiated within this env (e.g.: “Sawyer” would generate one arm; [“Panda”, “Panda”, “Sawyer”] would generate three robot arms). Note: Must be a single single-arm robot!

  • env_configuration (str) – Specifies how to position the robots within the environment (default is “default”). For most single arm environments, this argument has no impact on the robot setup.

  • controller_configs (str or list of dict) – If set, contains relevant controller parameters for creating a custom controller. Else, uses the default controller for this specific task. Should either be single dict if same controller is to be used for all robots or else it should be a list of the same length as “robots” param

  • gripper_types (str or list of str) – type of gripper, used to instantiate gripper models from gripper factory. Default is “default”, which is the default gripper(s) associated with the robot(s) in the ‘robots’ specification. None removes the gripper, and any other (valid) model overrides the default gripper. Should either be single str if same gripper type is to be used for all robots or else it should be a list of the same length as “robots” param

  • initialization_noise (dict or list of dict) –

    Dict containing the initialization noise parameters. The expected keys and corresponding value types are specified below:

    ’magnitude’

    The scale factor of uni-variate random noise applied to each of a robot’s given initial joint positions. Setting this value to None or 0.0 results in no noise being applied. If “gaussian” type of noise is applied, then this magnitude scales the standard deviation applied; if “uniform” type of noise is applied, then this magnitude sets the bounds of the sampling range.

    ’type’

    Type of noise to apply. Can either specify “gaussian” or “uniform”

    Should either be single dict if same noise value is to be used for all robots or else it should be a list of the same length as “robots” param

    Note

    Specifying “default” will automatically use the default noise settings. Specifying None will automatically create the required dict with “magnitude” set to 0.0.

  • table_full_size (3-tuple) – x, y, and z dimensions of the table.

  • table_friction (3-tuple) – the three mujoco friction parameters for the table.

  • use_camera_obs (bool) – if True, every observation includes rendered image(s)

  • use_object_obs (bool) – if True, include object (cube) information in the observation.

  • reward_scale (None or float) – Scales the normalized reward function by the amount specified. If None, environment reward remains unnormalized

  • reward_shaping (bool) – if True, use dense rewards.

  • placement_initializer (ObjectPositionSampler) – if provided, will be used to place objects on every reset, else a UniformRandomSampler is used by default.

  • has_renderer (bool) – If true, render the simulation state in a viewer instead of headless mode.

  • has_offscreen_renderer (bool) – True if using off-screen rendering

  • render_camera (str) – Name of camera to render if has_renderer is True. Setting this value to ‘None’ will result in the default angle being applied, which is useful as it can be dragged / panned by the user using the mouse

  • render_collision_mesh (bool) – True if rendering collision meshes in camera. False otherwise.

  • render_visual_mesh (bool) – True if rendering visual meshes in camera. False otherwise.

  • render_gpu_device_id (int) – corresponds to the GPU device id to use for offscreen rendering. Defaults to -1, in which case the device will be inferred from environment variables (GPUS or CUDA_VISIBLE_DEVICES).

  • control_freq (float) – how many control signals to receive in every second. This sets the amount of simulation time that passes between every action input.

  • horizon (int) – Every episode lasts for exactly @horizon timesteps.

  • ignore_done (bool) – True if never terminating the environment (ignore @horizon).

  • hard_reset (bool) – If True, re-loads model, sim, and render object upon a reset call, else, only calls sim.reset and resets all robosuite-internal variables

  • camera_names (str or list of str) –

    name of camera to be rendered. Should either be single str if same name is to be used for all cameras’ rendering or else it should be a list of cameras to render.

    Note

    At least one camera must be specified if @use_camera_obs is True.

    Note

    To render all robots’ cameras of a certain type (e.g.: “robotview” or “eye_in_hand”), use the convention “all-{name}” (e.g.: “all-robotview”) to automatically render all camera images from each robot’s camera list.

  • camera_heights (int or list of int) – height of camera frame. Should either be single int if same height is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_widths (int or list of int) – width of camera frame. Should either be single int if same width is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_depths (bool or list of bool) – True if rendering RGB-D, and RGB otherwise. Should either be single bool if same depth setting is to be used for all cameras or else it should be a list of the same length as “camera names” param.

  • camera_segmentations (None or str or list of str or list of list of str) –

    Camera segmentation(s) to use for each camera. Valid options are:

    None: no segmentation sensor used

    ‘instance’: segmentation at the class-instance level

    ‘class’: segmentation at the class level

    ‘element’: segmentation at the per-geom level

    If not None, multiple types of segmentations can be specified. A single str (or None) or a list of str specifies the segmentation(s) to use for all cameras; a list of list of str specifies per-camera segmentation setting(s) to use.

Raises

AssertionError – [Invalid number of robots specified]
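
For illustration only, the sketch below instantiates Lift with an explicit initialization_noise dict in the “magnitude” / “type” format described above; the numeric values are arbitrary examples, not recommended settings:

    import robosuite as suite

    # Gaussian joint-position noise whose standard deviation is scaled by 0.02,
    # applied to the robot's initial joint configuration on every reset.
    init_noise = {"magnitude": 0.02, "type": "gaussian"}

    env = suite.make(
        "Lift",
        robots="Panda",
        initialization_noise=init_noise,
        table_full_size=(0.8, 0.8, 0.05),
        table_friction=(1.0, 0.005, 0.0001),
        reward_shaping=True,
        use_camera_obs=False,
        has_offscreen_renderer=False,
    )
    obs = env.reset()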

reward(action=None)#

Reward function for the task.

Sparse un-normalized reward:

  • a discrete reward of 2.25 is provided if the cube is lifted

Un-normalized summed components if using reward shaping:

  • Reaching: in [0, 1], to encourage the arm to reach the cube

  • Grasping: in {0, 0.25}, non-zero if arm is grasping the cube

  • Lifting: in {0, 1}, non-zero if arm has lifted the cube

The sparse reward only consists of the lifting component.

Note that the final reward is normalized and scaled by reward_scale / 2.25 as well so that the max score is equal to reward_scale

Parameters

action (np.array) – [NOT USED]

Returns

reward value

Return type

float
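
To make the normalization above concrete, here is a small illustrative sketch of the documented scaling formula (the real computation happens inside the environment; this only restates reward_scale / 2.25 in code):

    # Illustrative only: the un-normalized shaped reward is at most
    # 1.0 (reaching) + 0.25 (grasping) + 1.0 (lifting) = 2.25, and the sparse
    # success reward is exactly 2.25, so dividing by 2.25 and multiplying by
    # reward_scale makes the maximum achievable reward equal reward_scale.
    def scaled_lift_reward(raw_reward, reward_scale=1.0):
        return reward_scale * raw_reward / 2.25

    assert scaled_lift_reward(2.25, reward_scale=1.0) == 1.0   # successful lift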

visualize(vis_settings)#

In addition to super call, visualize gripper site proportional to the distance to the cube.

Parameters

vis_settings (dict) – Visualization keywords mapped to T/F, determining whether that specific component should be visualized. Should have “grippers” keyword as well as any other relevant options specified.

robosuite.environments.manipulation.manipulation_env module#

class robosuite.environments.manipulation.manipulation_env.ManipulationEnv(robots, env_configuration='default', controller_configs=None, mount_types='default', gripper_types='default', initialization_noise=None, use_camera_obs=True, has_renderer=False, has_offscreen_renderer=True, render_camera='frontview', render_collision_mesh=False, render_visual_mesh=True, render_gpu_device_id=- 1, control_freq=20, horizon=1000, ignore_done=False, hard_reset=True, camera_names='agentview', camera_heights=256, camera_widths=256, camera_depths=False, camera_segmentations=None, renderer='mujoco', renderer_config=None)#

Bases: robosuite.environments.robot_env.RobotEnv

Initializes a manipulation-specific robot environment in MuJoCo.

Parameters
  • robots – Specification for specific robot arm(s) to be instantiated within this env (e.g.: “Sawyer” would generate one arm; [“Panda”, “Panda”, “Sawyer”] would generate three robot arms)

  • env_configuration (str) – Specifies how to position the robot(s) within the environment. Default is “default”, which should be interpreted accordingly by any subclasses.

  • controller_configs (str or list of dict) – If set, contains relevant controller parameters for creating a custom controller. Else, uses the default controller for this specific task. Should either be single dict if same controller is to be used for all robots or else it should be a list of the same length as “robots” param

  • mount_types (None or str or list of str) – type of mount, used to instantiate mount models from mount factory. Default is “default”, which is the default mount associated with the robot(s) in the ‘robots’ specification. None results in no mount, and any other (valid) model overrides the default mount. Should either be single str if same mount type is to be used for all robots or else it should be a list of the same length as “robots” param

  • gripper_types (None or str or list of str) – type of gripper, used to instantiate gripper models from gripper factory. Default is “default”, which is the default gripper(s) associated with the robot(s) in the ‘robots’ specification. None removes the gripper, and any other (valid) model overrides the default gripper. Should either be single str if same gripper type is to be used for all robots or else it should be a list of the same length as “robots” param

  • initialization_noise (dict or list of dict) –

    Dict containing the initialization noise parameters. The expected keys and corresponding value types are specified below:

    ’magnitude’

    The scale factor of uni-variate random noise applied to each of a robot’s given initial joint positions. Setting this value to None or 0.0 results in no noise being applied. If “gaussian” type of noise is applied, then this magnitude scales the standard deviation applied; if “uniform” type of noise is applied, then this magnitude sets the bounds of the sampling range.

    ’type’

    Type of noise to apply. Can either specify “gaussian” or “uniform”

    Should either be single dict if same noise value is to be used for all robots or else it should be a list of the same length as “robots” param

    Note

    Specifying “default” will automatically use the default noise settings. Specifying None will automatically create the required dict with “magnitude” set to 0.0.

  • use_camera_obs (bool) – if True, every observation includes rendered image(s)

  • has_renderer (bool) – If true, render the simulation state in a viewer instead of headless mode.

  • has_offscreen_renderer (bool) – True if using off-screen rendering

  • render_camera (str) – Name of camera to render if has_renderer is True. Setting this value to ‘None’ will result in the default angle being applied, which is useful as it can be dragged / panned by the user using the mouse

  • render_collision_mesh (bool) – True if rendering collision meshes in camera. False otherwise.

  • render_visual_mesh (bool) – True if rendering visual meshes in camera. False otherwise.

  • render_gpu_device_id (int) – corresponds to the GPU device id to use for offscreen rendering. Defaults to -1, in which case the device will be inferred from environment variables (GPUS or CUDA_VISIBLE_DEVICES).

  • control_freq (float) – how many control signals to receive in every second. This sets the amount of simulation time that passes between every action input.

  • horizon (int) – Every episode lasts for exactly @horizon timesteps.

  • ignore_done (bool) – True if never terminating the environment (ignore @horizon).

  • hard_reset (bool) – If True, re-loads model, sim, and render object upon a reset call, else, only calls sim.reset and resets all robosuite-internal variables

  • camera_names (str or list of str) –

    name of camera to be rendered. Should either be single str if same name is to be used for all cameras’ rendering or else it should be a list of cameras to render.

    Note

    At least one camera must be specified if @use_camera_obs is True.

    Note

    To render all robots’ cameras of a certain type (e.g.: “robotview” or “eye_in_hand”), use the convention “all-{name}” (e.g.: “all-robotview”) to automatically render all camera images from each robot’s camera list.

  • camera_heights (int or list of int) – height of camera frame. Should either be single int if same height is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_widths (int or list of int) – width of camera frame. Should either be single int if same width is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_depths (bool or list of bool) – True if rendering RGB-D, and RGB otherwise. Should either be single bool if same depth setting is to be used for all cameras or else it should be a list of the same length as “camera names” param.

  • camera_segmentations (None or str or list of str or list of list of str) –

    Camera segmentation(s) to use for each camera. Valid options are:

    None: no segmentation sensor used

    ‘instance’: segmentation at the class-instance level

    ‘class’: segmentation at the class level

    ‘element’: segmentation at the per-geom level

    If not None, multiple types of segmentations can be specified. A single str (or None) or a list of str specifies the segmentation(s) to use for all cameras; a list of list of str specifies per-camera segmentation setting(s) to use.

Raises
  • ValueError – [Camera obs require offscreen renderer]

  • ValueError – [Camera name must be specified to use camera obs]
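
The camera-related arguments shared by all manipulation environments can be illustrated with a hypothetical configuration like the one below. It assumes the usual robosuite convention that each per-camera image appears in the observation dict under the key “{camera_name}_image” (with depth under “{camera_name}_depth”); camera names and resolutions here are examples only:

    import robosuite as suite

    # Two cameras with different resolutions; RGB-D for the first, RGB only for the second.
    env = suite.make(
        "Lift",
        robots="Panda",
        use_camera_obs=True,
        has_offscreen_renderer=True,      # required whenever use_camera_obs=True
        camera_names=["agentview", "robot0_eye_in_hand"],
        camera_heights=[256, 128],
        camera_widths=[256, 128],
        camera_depths=[True, False],
    )

    obs = env.reset()
    print(obs["agentview_image"].shape)            # RGB image, e.g. (256, 256, 3)
    print(obs["agentview_depth"].shape)            # depth map for the same camera
    print(obs["robot0_eye_in_hand_image"].shape)   # RGB image, e.g. (128, 128, 3)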

robosuite.environments.manipulation.nut_assembly module#

class robosuite.environments.manipulation.nut_assembly.NutAssembly(robots, env_configuration='default', controller_configs=None, gripper_types='default', initialization_noise='default', table_full_size=(0.8, 0.8, 0.05), table_friction=(1, 0.005, 0.0001), use_camera_obs=True, use_object_obs=True, reward_scale=1.0, reward_shaping=False, placement_initializer=None, single_object_mode=0, nut_type=None, has_renderer=False, has_offscreen_renderer=True, render_camera='frontview', render_collision_mesh=False, render_visual_mesh=True, render_gpu_device_id=- 1, control_freq=20, horizon=1000, ignore_done=False, hard_reset=True, camera_names='agentview', camera_heights=256, camera_widths=256, camera_depths=False, camera_segmentations=None, renderer='mujoco', renderer_config=None)#

Bases: robosuite.environments.manipulation.single_arm_env.SingleArmEnv

This class corresponds to the nut assembly task for a single robot arm.

Parameters
  • robots (str or list of str) – Specification for specific robot arm(s) to be instantiated within this env (e.g.: “Sawyer” would generate one arm; [“Panda”, “Panda”, “Sawyer”] would generate three robot arms). Note: Must be a single single-arm robot!

  • env_configuration (str) – Specifies how to position the robots within the environment (default is “default”). For most single arm environments, this argument has no impact on the robot setup.

  • controller_configs (str or list of dict) – If set, contains relevant controller parameters for creating a custom controller. Else, uses the default controller for this specific task. Should either be single dict if same controller is to be used for all robots or else it should be a list of the same length as “robots” param

  • gripper_types (str or list of str) – type of gripper, used to instantiate gripper models from gripper factory. Default is “default”, which is the default gripper(s) associated with the robot(s) in the ‘robots’ specification. None removes the gripper, and any other (valid) model overrides the default gripper. Should either be single str if same gripper type is to be used for all robots or else it should be a list of the same length as “robots” param

  • initialization_noise (dict or list of dict) –

    Dict containing the initialization noise parameters. The expected keys and corresponding value types are specified below:

    ’magnitude’

    The scale factor of uni-variate random noise applied to each of a robot’s given initial joint positions. Setting this value to None or 0.0 results in no noise being applied. If “gaussian” type of noise is applied, then this magnitude scales the standard deviation applied; if “uniform” type of noise is applied, then this magnitude sets the bounds of the sampling range.

    ’type’

    Type of noise to apply. Can either specify “gaussian” or “uniform”

    Should either be single dict if same noise value is to be used for all robots or else it should be a list of the same length as “robots” param

    Note

    Specifying “default” will automatically use the default noise settings. Specifying None will automatically create the required dict with “magnitude” set to 0.0.

  • table_full_size (3-tuple) – x, y, and z dimensions of the table.

  • table_friction (3-tuple) – the three mujoco friction parameters for the table.

  • use_camera_obs (bool) – if True, every observation includes rendered image(s)

  • use_object_obs (bool) – if True, include object (nut) information in the observation.

  • reward_scale (None or float) – Scales the normalized reward function by the amount specified. If None, environment reward remains unnormalized

  • reward_shaping (bool) – if True, use dense rewards.

  • placement_initializer (ObjectPositionSampler) – if provided, will be used to place objects on every reset, else a UniformRandomSampler is used by default.

  • single_object_mode (int) –

    specifies which version of the task to do. Note that the observations change accordingly.

    0

    corresponds to the full task with both types of nuts.

    1

    corresponds to an easier task with only one type of nut initialized on the table with every reset. The type is randomized on every reset.

    2

    corresponds to an easier task with only one type of nut initialized on the table with every reset. The type is kept constant and will not change between resets.

  • nut_type (string) – if provided, should be either “round” or “square”. Determines which type of nut (round or square) will be spawned on every environment reset. Only used if @single_object_mode is 2.

  • has_renderer (bool) – If true, render the simulation state in a viewer instead of headless mode.

  • has_offscreen_renderer (bool) – True if using off-screen rendering

  • render_camera (str) – Name of camera to render if has_renderer is True. Setting this value to ‘None’ will result in the default angle being applied, which is useful as it can be dragged / panned by the user using the mouse

  • render_collision_mesh (bool) – True if rendering collision meshes in camera. False otherwise.

  • render_visual_mesh (bool) – True if rendering visual meshes in camera. False otherwise.

  • render_gpu_device_id (int) – corresponds to the GPU device id to use for offscreen rendering. Defaults to -1, in which case the device will be inferred from environment variables (GPUS or CUDA_VISIBLE_DEVICES).

  • control_freq (float) – how many control signals to receive in every second. This sets the amount of simulation time that passes between every action input.

  • horizon (int) – Every episode lasts for exactly @horizon timesteps.

  • ignore_done (bool) – True if never terminating the environment (ignore @horizon).

  • hard_reset (bool) – If True, re-loads model, sim, and render object upon a reset call, else, only calls sim.reset and resets all robosuite-internal variables

  • camera_names (str or list of str) –

    name of camera to be rendered. Should either be single str if same name is to be used for all cameras’ rendering or else it should be a list of cameras to render.

    Note

    At least one camera must be specified if @use_camera_obs is True.

    Note

    To render all robots’ cameras of a certain type (e.g.: “robotview” or “eye_in_hand”), use the convention “all-{name}” (e.g.: “all-robotview”) to automatically render all camera images from each robot’s camera list.

  • camera_heights (int or list of int) – height of camera frame. Should either be single int if same height is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_widths (int or list of int) – width of camera frame. Should either be single int if same width is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_depths (bool or list of bool) – True if rendering RGB-D, and RGB otherwise. Should either be single bool if same depth setting is to be used for all cameras or else it should be a list of the same length as “camera names” param.

  • camera_segmentations (None or str or list of str or list of list of str) –

    Camera segmentation(s) to use for each camera. Valid options are:

    None: no segmentation sensor used

    ‘instance’: segmentation at the class-instance level

    ‘class’: segmentation at the class level

    ‘element’: segmentation at the per-geom level

    If not None, multiple types of segmentations can be specified. A single str (or None) or a list of str specifies the segmentation(s) to use for all cameras; a list of list of str specifies per-camera segmentation setting(s) to use.

Raises
  • AssertionError – [Invalid nut type specified]

  • AssertionError – [Invalid number of robots specified]
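
As an illustrative sketch of the single_object_mode / nut_type options described above (values hypothetical), the configuration below restricts the task to the round nut; the registered NutAssemblyRound class documented further down is the pre-packaged equivalent:

    import robosuite as suite

    # Easier variant: only the round nut is spawned, and the type never changes across resets.
    env = suite.make(
        "NutAssembly",
        robots="Sawyer",
        single_object_mode=2,    # one nut type, kept constant between resets
        nut_type="round",        # only consulted when single_object_mode == 2
        reward_shaping=True,
        use_camera_obs=False,
        has_offscreen_renderer=False,
    )
    obs = env.reset()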

on_peg(obj_pos, peg_id)#

reward(action=None)#

Reward function for the task.

Sparse un-normalized reward:

  • a discrete reward of 1.0 per nut if it is placed around its correct peg

Un-normalized components if using reward shaping, where the maximum is returned if not solved:

  • Reaching: in [0, 0.1], proportional to the distance between the gripper and the closest nut

  • Grasping: in {0, 0.35}, nonzero if the gripper is grasping a nut

  • Lifting: in {0, [0.35, 0.5]}, nonzero only if nut is grasped; proportional to lifting height

  • Hovering: in {0, [0.5, 0.7]}, nonzero only if nut is lifted; proportional to distance from nut to peg

Note that a successfully completed task (nut around peg) will return 1.0 per nut regardless of whether the environment is using sparse or shaped rewards

Note that the final reward is normalized and scaled by reward_scale / 2.0 (or 1.0 if only a single nut is being used) as well so that the max score is equal to reward_scale

Parameters

action (np.array) – [NOT USED]

Returns

reward value

Return type

float

staged_rewards()#

Calculates staged rewards based on current physical states. Stages consist of reaching, grasping, lifting, and hovering.

Returns

  • (float) reaching reward

  • (float) grasping reward

  • (float) lifting reward

  • (float) hovering reward

Return type

4-tuple

visualize(vis_settings)#

In addition to super call, visualize gripper site proportional to the distance to the closest nut.

Parameters

vis_settings (dict) – Visualization keywords mapped to T/F, determining whether that specific component should be visualized. Should have “grippers” keyword as well as any other relevant options specified.

class robosuite.environments.manipulation.nut_assembly.NutAssemblyRound(**kwargs)#

Bases: robosuite.environments.manipulation.nut_assembly.NutAssembly

Easier version of task - place one round nut into its peg.

class robosuite.environments.manipulation.nut_assembly.NutAssemblySingle(**kwargs)#

Bases: robosuite.environments.manipulation.nut_assembly.NutAssembly

Easier version of task - place either one round nut or one square nut into its peg.

class robosuite.environments.manipulation.nut_assembly.NutAssemblySquare(**kwargs)#

Bases: robosuite.environments.manipulation.nut_assembly.NutAssembly

Easier version of task - place one square nut into its peg.

robosuite.environments.manipulation.pick_place module#

class robosuite.environments.manipulation.pick_place.PickPlace(robots, env_configuration='default', controller_configs=None, gripper_types='default', initialization_noise='default', table_full_size=(0.39, 0.49, 0.82), table_friction=(1, 0.005, 0.0001), bin1_pos=(0.1, - 0.25, 0.8), bin2_pos=(0.1, 0.28, 0.8), use_camera_obs=True, use_object_obs=True, reward_scale=1.0, reward_shaping=False, single_object_mode=0, object_type=None, has_renderer=False, has_offscreen_renderer=True, render_camera='frontview', render_collision_mesh=False, render_visual_mesh=True, render_gpu_device_id=- 1, control_freq=20, horizon=1000, ignore_done=False, hard_reset=True, camera_names='agentview', camera_heights=256, camera_widths=256, camera_depths=False, camera_segmentations=None, renderer='mujoco', renderer_config=None)#

Bases: robosuite.environments.manipulation.single_arm_env.SingleArmEnv

This class corresponds to the pick place task for a single robot arm.

Parameters
  • robots (str or list of str) – Specification for specific robot arm(s) to be instantiated within this env (e.g.: “Sawyer” would generate one arm; [“Panda”, “Panda”, “Sawyer”] would generate three robot arms). Note: Must be a single single-arm robot!

  • env_configuration (str) – Specifies how to position the robots within the environment (default is “default”). For most single arm environments, this argument has no impact on the robot setup.

  • controller_configs (str or list of dict) – If set, contains relevant controller parameters for creating a custom controller. Else, uses the default controller for this specific task. Should either be single dict if same controller is to be used for all robots or else it should be a list of the same length as “robots” param

  • gripper_types (str or list of str) – type of gripper, used to instantiate gripper models from gripper factory. Default is “default”, which is the default gripper(s) associated with the robot(s) in the ‘robots’ specification. None removes the gripper, and any other (valid) model overrides the default gripper. Should either be single str if same gripper type is to be used for all robots or else it should be a list of the same length as “robots” param

  • initialization_noise (dict or list of dict) –

    Dict containing the initialization noise parameters. The expected keys and corresponding value types are specified below:

    ’magnitude’

    The scale factor of uni-variate random noise applied to each of a robot’s given initial joint positions. Setting this value to None or 0.0 results in no noise being applied. If “gaussian” type of noise is applied, then this magnitude scales the standard deviation applied; if “uniform” type of noise is applied, then this magnitude sets the bounds of the sampling range.

    ’type’

    Type of noise to apply. Can either specify “gaussian” or “uniform”

    Should either be single dict if same noise value is to be used for all robots or else it should be a list of the same length as “robots” param

    Note

    Specifying “default” will automatically use the default noise settings. Specifying None will automatically create the required dict with “magnitude” set to 0.0.

  • table_full_size (3-tuple) – x, y, and z dimensions of the table.

  • table_friction (3-tuple) – the three mujoco friction parameters for the table.

  • bin1_pos (3-tuple) – Absolute cartesian coordinates of the bin initially holding the objects

  • bin2_pos (3-tuple) – Absolute cartesian coordinates of the goal bin

  • use_camera_obs (bool) – if True, every observation includes rendered image(s)

  • use_object_obs (bool) – if True, include object information in the observation.

  • reward_scale (None or float) – Scales the normalized reward function by the amount specified. If None, environment reward remains unnormalized

  • reward_shaping (bool) – if True, use dense rewards.

  • single_object_mode (int) –

    specifies which version of the task to do. Note that the observations change accordingly.

    0

    corresponds to the full task with all types of objects.

    1

    corresponds to an easier task with only one type of object initialized on the table with every reset. The type is randomized on every reset.

    2

    corresponds to an easier task with only one type of object initialized on the table with every reset. The type is kept constant and will not change between resets.

  • object_type (string) – if provided, should be one of “milk”, “bread”, “cereal”, or “can”. Determines which type of object will be spawned on every environment reset. Only used if @single_object_mode is 2.

  • has_renderer (bool) – If true, render the simulation state in a viewer instead of headless mode.

  • has_offscreen_renderer (bool) – True if using off-screen rendering

  • render_camera (str) – Name of camera to render if has_renderer is True. Setting this value to ‘None’ will result in the default angle being applied, which is useful as it can be dragged / panned by the user using the mouse

  • render_collision_mesh (bool) – True if rendering collision meshes in camera. False otherwise.

  • render_visual_mesh (bool) – True if rendering visual meshes in camera. False otherwise.

  • render_gpu_device_id (int) – corresponds to the GPU device id to use for offscreen rendering. Defaults to -1, in which case the device will be inferred from environment variables (GPUS or CUDA_VISIBLE_DEVICES).

  • control_freq (float) – how many control signals to receive in every second. This sets the amount of simulation time that passes between every action input.

  • horizon (int) – Every episode lasts for exactly @horizon timesteps.

  • ignore_done (bool) – True if never terminating the environment (ignore @horizon).

  • hard_reset (bool) – If True, re-loads model, sim, and render object upon a reset call, else, only calls sim.reset and resets all robosuite-internal variables

  • camera_names (str or list of str) –

    name of camera to be rendered. Should either be single str if same name is to be used for all cameras’ rendering or else it should be a list of cameras to render.

    Note

    At least one camera must be specified if @use_camera_obs is True.

    Note

    To render all robots’ cameras of a certain type (e.g.: “robotview” or “eye_in_hand”), use the convention “all-{name}” (e.g.: “all-robotview”) to automatically render all camera images from each robot’s camera list.

  • camera_heights (int or list of int) – height of camera frame. Should either be single int if same height is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_widths (int or list of int) – width of camera frame. Should either be single int if same width is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_depths (bool or list of bool) – True if rendering RGB-D, and RGB otherwise. Should either be single bool if same depth setting is to be used for all cameras or else it should be a list of the same length as “camera names” param.

  • camera_segmentations (None or str or list of str or list of list of str) –

    Camera segmentation(s) to use for each camera. Valid options are:

    None: no segmentation sensor used

    ‘instance’: segmentation at the class-instance level

    ‘class’: segmentation at the class level

    ‘element’: segmentation at the per-geom level

    If not None, multiple types of segmentations can be specified. A single str (or None) or a list of str specifies the segmentation(s) to use for all cameras; a list of list of str specifies per-camera segmentation setting(s) to use.

Raises
  • AssertionError – [Invalid object type specified]

  • AssertionError – [Invalid number of robots specified]
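
Analogously to NutAssembly, the single-object variants can be selected either through single_object_mode / object_type or through the registered subclasses listed below. A hypothetical sketch (both environments should correspond to picking and placing a single can):

    import robosuite as suite

    # Full task restricted to the can object ...
    env_a = suite.make(
        "PickPlace",
        robots="Panda",
        single_object_mode=2,
        object_type="can",
        use_camera_obs=False,
        has_offscreen_renderer=False,
    )

    # ... which is what the registered PickPlaceCan variant provides out of the box.
    env_b = suite.make(
        "PickPlaceCan",
        robots="Panda",
        use_camera_obs=False,
        has_offscreen_renderer=False,
    )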

not_in_bin(obj_pos, bin_id)#

reward(action=None)#

Reward function for the task.

Sparse un-normalized reward:

  • a discrete reward of 1.0 per object if it is placed in its correct bin

Un-normalized components if using reward shaping, where the maximum is returned if not solved:

  • Reaching: in [0, 0.1], proportional to the distance between the gripper and the closest object

  • Grasping: in {0, 0.35}, nonzero if the gripper is grasping an object

  • Lifting: in {0, [0.35, 0.5]}, nonzero only if object is grasped; proportional to lifting height

  • Hovering: in {0, [0.5, 0.7]}, nonzero only if object is lifted; proportional to distance from object to bin

Note that a successfully completed task (object in bin) will return 1.0 per object regardless of whether the environment is using sparse or shaped rewards

Note that the final reward is normalized and scaled by reward_scale / 4.0 (or 1.0 if only a single object is being used) as well so that the max score is equal to reward_scale

Parameters

action (np.array) – [NOT USED]

Returns

reward value

Return type

float

staged_rewards()#

Returns staged rewards based on current physical states. Stages consist of reaching, grasping, lifting, and hovering.

Returns

  • (float) reaching reward

  • (float) grasping reward

  • (float) lifting reward

  • (float) hovering reward

Return type

4-tuple

visualize(vis_settings)#

In addition to super call, visualize gripper site proportional to the distance to the closest object.

Parameters

vis_settings (dict) – Visualization keywords mapped to T/F, determining whether that specific component should be visualized. Should have “grippers” keyword as well as any other relevant options specified.

class robosuite.environments.manipulation.pick_place.PickPlaceBread(**kwargs)#

Bases: robosuite.environments.manipulation.pick_place.PickPlace

Easier version of task - place one bread into its bin.

class robosuite.environments.manipulation.pick_place.PickPlaceCan(**kwargs)#

Bases: robosuite.environments.manipulation.pick_place.PickPlace

Easier version of task - place one can into its bin.

class robosuite.environments.manipulation.pick_place.PickPlaceCereal(**kwargs)#

Bases: robosuite.environments.manipulation.pick_place.PickPlace

Easier version of task - place one cereal into its bin.

class robosuite.environments.manipulation.pick_place.PickPlaceMilk(**kwargs)#

Bases: robosuite.environments.manipulation.pick_place.PickPlace

Easier version of task - place one milk into its bin.

class robosuite.environments.manipulation.pick_place.PickPlaceSingle(**kwargs)#

Bases: robosuite.environments.manipulation.pick_place.PickPlace

Easier version of task - place one object into its bin. A new object is sampled on every reset.

robosuite.environments.manipulation.single_arm_env module#

class robosuite.environments.manipulation.single_arm_env.SingleArmEnv(robots, env_configuration='default', controller_configs=None, mount_types='default', gripper_types='default', initialization_noise=None, use_camera_obs=True, has_renderer=False, has_offscreen_renderer=True, render_camera='frontview', render_collision_mesh=False, render_visual_mesh=True, render_gpu_device_id=- 1, control_freq=20, horizon=1000, ignore_done=False, hard_reset=True, camera_names='agentview', camera_heights=256, camera_widths=256, camera_depths=False, camera_segmentations=None, renderer='mujoco', renderer_config=None)#

Bases: robosuite.environments.manipulation.manipulation_env.ManipulationEnv

A manipulation environment intended for a single robot arm.
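
Every SingleArmEnv subclass documented in this package accepts the controller_configs argument described above. A minimal sketch, assuming robosuite’s load_controller_config helper (available in robosuite 1.x) for loading a default operational-space controller:

    import robosuite as suite
    from robosuite import load_controller_config

    # Load the default OSC_POSE controller configuration and pass it to any
    # single-arm task (Door, Lift, NutAssembly, PickPlace, Stack, ...).
    controller_config = load_controller_config(default_controller="OSC_POSE")

    env = suite.make(
        "Lift",
        robots="Panda",
        controller_configs=controller_config,   # a single dict applied to the single robot
        use_camera_obs=False,
        has_offscreen_renderer=False,
    )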

robosuite.environments.manipulation.stack module#

class robosuite.environments.manipulation.stack.Stack(robots, env_configuration='default', controller_configs=None, gripper_types='default', initialization_noise='default', table_full_size=(0.8, 0.8, 0.05), table_friction=(1.0, 0.005, 0.0001), use_camera_obs=True, use_object_obs=True, reward_scale=1.0, reward_shaping=False, placement_initializer=None, has_renderer=False, has_offscreen_renderer=True, render_camera='frontview', render_collision_mesh=False, render_visual_mesh=True, render_gpu_device_id=- 1, control_freq=20, horizon=1000, ignore_done=False, hard_reset=True, camera_names='agentview', camera_heights=256, camera_widths=256, camera_depths=False, camera_segmentations=None, renderer='mujoco', renderer_config=None)#

Bases: robosuite.environments.manipulation.single_arm_env.SingleArmEnv

This class corresponds to the stacking task for a single robot arm.

Parameters
  • robots (str or list of str) – Specification for specific robot arm(s) to be instantiated within this env (e.g.: “Sawyer” would generate one arm; [“Panda”, “Panda”, “Sawyer”] would generate three robot arms). Note: Must be a single single-arm robot!

  • env_configuration (str) – Specifies how to position the robots within the environment (default is “default”). For most single arm environments, this argument has no impact on the robot setup.

  • controller_configs (str or list of dict) – If set, contains relevant controller parameters for creating a custom controller. Else, uses the default controller for this specific task. Should either be single dict if same controller is to be used for all robots or else it should be a list of the same length as “robots” param

  • gripper_types (str or list of str) – type of gripper, used to instantiate gripper models from gripper factory. Default is “default”, which is the default gripper(s) associated with the robot(s) in the ‘robots’ specification. None removes the gripper, and any other (valid) model overrides the default gripper. Should either be single str if same gripper type is to be used for all robots or else it should be a list of the same length as “robots” param

  • initialization_noise (dict or list of dict) –

    Dict containing the initialization noise parameters. The expected keys and corresponding value types are specified below:

    ’magnitude’

    The scale factor of uni-variate random noise applied to each of a robot’s given initial joint positions. Setting this value to None or 0.0 results in no noise being applied. If “gaussian” type of noise is applied, then this magnitude scales the standard deviation applied; if “uniform” type of noise is applied, then this magnitude sets the bounds of the sampling range.

    ’type’

    Type of noise to apply. Can either specify “gaussian” or “uniform”

    Should either be single dict if same noise value is to be used for all robots or else it should be a list of the same length as “robots” param

    Note

    Specifying “default” will automatically use the default noise settings. Specifying None will automatically create the required dict with “magnitude” set to 0.0.

  • table_full_size (3-tuple) – x, y, and z dimensions of the table.

  • table_friction (3-tuple) – the three mujoco friction parameters for the table.

  • use_camera_obs (bool) – if True, every observation includes rendered image(s)

  • use_object_obs (bool) – if True, include object (cube) information in the observation.

  • reward_scale (None or float) – Scales the normalized reward function by the amount specified. If None, environment reward remains unnormalized

  • reward_shaping (bool) – if True, use dense rewards.

  • placement_initializer (ObjectPositionSampler) – if provided, will be used to place objects on every reset, else a UniformRandomSampler is used by default.

  • has_renderer (bool) – If true, render the simulation state in a viewer instead of headless mode.

  • has_offscreen_renderer (bool) – True if using off-screen rendering

  • render_camera (str) – Name of camera to render if has_renderer is True. Setting this value to ‘None’ will result in the default angle being applied, which is useful as it can be dragged / panned by the user using the mouse

  • render_collision_mesh (bool) – True if rendering collision meshes in camera. False otherwise.

  • render_visual_mesh (bool) – True if rendering visual meshes in camera. False otherwise.

  • render_gpu_device_id (int) – corresponds to the GPU device id to use for offscreen rendering. Defaults to -1, in which case the device will be inferred from environment variables (GPUS or CUDA_VISIBLE_DEVICES).

  • control_freq (float) – how many control signals to receive in every second. This sets the amount of simulation time that passes between every action input.

  • horizon (int) – Every episode lasts for exactly @horizon timesteps.

  • ignore_done (bool) – True if never terminating the environment (ignore @horizon).

  • hard_reset (bool) – If True, re-loads model, sim, and render object upon a reset call, else, only calls sim.reset and resets all robosuite-internal variables

  • camera_names (str or list of str) –

    name of camera to be rendered. Should either be single str if same name is to be used for all cameras’ rendering or else it should be a list of cameras to render.

    Note

    At least one camera must be specified if @use_camera_obs is True.

    Note

    To render all robots’ cameras of a certain type (e.g.: “robotview” or “eye_in_hand”), use the convention “all-{name}” (e.g.: “all-robotview”) to automatically render all camera images from each robot’s camera list.

  • camera_heights (int or list of int) – height of camera frame. Should either be single int if same height is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_widths (int or list of int) – width of camera frame. Should either be single int if same width is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_depths (bool or list of bool) – True if rendering RGB-D, and RGB otherwise. Should either be single bool if same depth setting is to be used for all cameras or else it should be a list of the same length as “camera names” param.

  • camera_segmentations (None or str or list of str or list of list of str) –

    Camera segmentation(s) to use for each camera. Valid options are:

    None: no segmentation sensor used

    ‘instance’: segmentation at the class-instance level

    ‘class’: segmentation at the class level

    ‘element’: segmentation at the per-geom level

    If not None, multiple types of segmentations can be specified. A single str (or None) or a list of str specifies the segmentation(s) to use for all cameras; a list of list of str specifies per-camera segmentation setting(s) to use.

Raises

AssertionError – [Invalid number of robots specified]
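
The placement_initializer argument above can be used to control where the cubes are sampled on each reset. The sketch below assumes the UniformRandomSampler constructor from robosuite.utils.placement_samplers keeps its usual keyword arguments; the ranges and offsets are arbitrary examples, not tuned values:

    import robosuite as suite
    from robosuite.utils.placement_samplers import UniformRandomSampler

    # Sample the cubes within a small square around the table center, slightly above the surface.
    sampler = UniformRandomSampler(
        name="ObjectSampler",
        x_range=[-0.08, 0.08],
        y_range=[-0.08, 0.08],
        rotation=None,                         # None -> random rotation about the z-axis
        ensure_object_boundary_in_range=False,
        ensure_valid_placement=True,
        reference_pos=(0, 0, 0.8),             # roughly the table-top height
        z_offset=0.01,
    )

    env = suite.make(
        "Stack",
        robots="Panda",
        placement_initializer=sampler,         # the env adds its cubes to this sampler
        use_camera_obs=False,
        has_offscreen_renderer=False,
    )
    obs = env.reset()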

reward(action)#

Reward function for the task.

Sparse un-normalized reward:

  • a discrete reward of 2.0 is provided if the red block is stacked on the green block

Un-normalized components if using reward shaping:

  • Reaching: in [0, 0.25], to encourage the arm to reach the cube

  • Grasping: in {0, 0.25}, non-zero if arm is grasping the cube

  • Lifting: in {0, 1}, non-zero if arm has lifted the cube

  • Aligning: in [0, 0.5], encourages aligning one cube over the other

  • Stacking: in {0, 2}, non-zero if cube is stacked on other cube

The reward is max over the following:

  • Reaching + Grasping

  • Lifting + Aligning

  • Stacking

The sparse reward only consists of the stacking component.

Note that the final reward is normalized and scaled by reward_scale / 2.0 as well so that the max score is equal to reward_scale

Parameters

action (np.array) – [NOT USED]

Returns

reward value

Return type

float

staged_rewards()#

Helper function to calculate staged rewards based on current physical states.

Returns

  • (float): reward for reaching and grasping

  • (float): reward for lifting and aligning

  • (float): reward for stacking

Return type

3-tuple
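
As an illustration of how the three staged components combine into the shaped reward described in reward() above, a sketch of the documented formula (not the library’s exact implementation; it assumes reward_scale is stored on the environment instance, as in robosuite 1.x):

    import robosuite as suite

    env = suite.make(
        "Stack",
        robots="Panda",
        reward_shaping=True,
        use_camera_obs=False,
        has_offscreen_renderer=False,
    )
    env.reset()

    # Shaped reward = max over the three staged components, then scaled by
    # reward_scale / 2.0 so that a completed stack yields exactly reward_scale.
    r_reach_grasp, r_lift_align, r_stack = env.staged_rewards()
    shaped = max(r_reach_grasp, r_lift_align, r_stack)
    reward = env.reward_scale * shaped / 2.0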

visualize(vis_settings)#

In addition to super call, visualize gripper site proportional to the distance to the cube.

Parameters

vis_settings (dict) – Visualization keywords mapped to T/F, determining whether that specific component should be visualized. Should have “grippers” keyword as well as any other relevant options specified.

robosuite.environments.manipulation.two_arm_env module#

class robosuite.environments.manipulation.two_arm_env.TwoArmEnv(robots, env_configuration='default', controller_configs=None, mount_types='default', gripper_types='default', initialization_noise=None, use_camera_obs=True, has_renderer=False, has_offscreen_renderer=True, render_camera='frontview', render_collision_mesh=False, render_visual_mesh=True, render_gpu_device_id=- 1, control_freq=20, horizon=1000, ignore_done=False, hard_reset=True, camera_names='agentview', camera_heights=256, camera_widths=256, camera_depths=False, camera_segmentations=None, renderer='mujoco', renderer_config=None)#

Bases: robosuite.environments.manipulation.manipulation_env.ManipulationEnv

A manipulation environment intended for two robot arms.

robosuite.environments.manipulation.two_arm_handover module#

class robosuite.environments.manipulation.two_arm_handover.TwoArmHandover(robots, env_configuration='default', controller_configs=None, gripper_types='default', initialization_noise='default', prehensile=True, table_full_size=(0.8, 1.2, 0.05), table_friction=(1.0, 0.005, 0.0001), use_camera_obs=True, use_object_obs=True, reward_scale=1.0, reward_shaping=False, placement_initializer=None, has_renderer=False, has_offscreen_renderer=True, render_camera='frontview', render_collision_mesh=False, render_visual_mesh=True, render_gpu_device_id=- 1, control_freq=20, horizon=1000, ignore_done=False, hard_reset=True, camera_names='agentview', camera_heights=256, camera_widths=256, camera_depths=False, camera_segmentations=None, renderer='mujoco', renderer_config=None)#

Bases: robosuite.environments.manipulation.two_arm_env.TwoArmEnv

This class corresponds to the handover task for two robot arms.

Parameters
  • robots (str or list of str) – Specification for specific robot arm(s) to be instantiated within this env (e.g.: “Sawyer” would generate one arm; [“Panda”, “Panda”, “Sawyer”] would generate three robot arms). Note: Must be either two single-arm robots or one bimanual robot!

  • env_configuration (str) –

    Specifies how to position the robots within the environment. Can be either:

    ’bimanual’

    Only applicable for bimanual robot setups. Sets up the (single) bimanual robot on the -x side of the table

    ’single-arm-parallel’

    Only applicable for multi single arm setups. Sets up the (two) single armed robots next to each other on the -x side of the table

    ’single-arm-opposed’

    Only applicable for multi single arm setups. Sets up the (two) single armed robots opposed from each other on the opposite +/-y sides of the table.

    Note

    “default” corresponds to either “bimanual” if a bimanual robot is used, or “single-arm-opposed” if two single-arm robots are used.

  • controller_configs (str or list of dict) – If set, contains relevant controller parameters for creating a custom controller. Else, uses the default controller for this specific task. Should either be single dict if same controller is to be used for all robots or else it should be a list of the same length as “robots” param

  • gripper_types (str or list of str) – type of gripper, used to instantiate gripper models from gripper factory. Default is “default”, which is the default gripper(s) associated with the robot(s) in the ‘robots’ specification. None removes the gripper, and any other (valid) model overrides the default gripper. Should either be single str if same gripper type is to be used for all robots or else it should be a list of the same length as “robots” param

  • initialization_noise (dict or list of dict) –

    Dict containing the initialization noise parameters. The expected keys and corresponding value types are specified below:

    ’magnitude’

    The scale factor of uni-variate random noise applied to each of a robot’s given initial joint positions. Setting this value to None or 0.0 results in no noise being applied. If “gaussian” type of noise is applied, then this magnitude scales the standard deviation applied; if “uniform” type of noise is applied, then this magnitude sets the bounds of the sampling range.

    ’type’

    Type of noise to apply. Can either specify “gaussian” or “uniform”

    Should either be single dict if same noise value is to be used for all robots or else it should be a list of the same length as “robots” param

    Note

    Specifying “default” will automatically use the default noise settings. Specifying None will automatically create the required dict with “magnitude” set to 0.0.

  • prehensile (bool) – If true, handover object starts on the table. Else, the object starts in Arm0’s gripper

  • table_full_size (3-tuple) – x, y, and z dimensions of the table.

  • table_friction (3-tuple) – the three mujoco friction parameters for the table.

  • use_camera_obs (bool) – if True, every observation includes rendered image(s)

  • use_object_obs (bool) – if True, include object (hammer) information in the observation.

  • reward_scale (None or float) – Scales the normalized reward function by the amount specified. If None, environment reward remains unnormalized

  • reward_shaping (bool) – if True, use dense rewards.

  • placement_initializer (ObjectPositionSampler) – if provided, will be used to place objects on every reset, else a UniformRandomSampler is used by default.

  • has_renderer (bool) – If true, render the simulation state in a viewer instead of headless mode.

  • has_offscreen_renderer (bool) – True if using off-screen rendering

  • render_camera (str) – Name of camera to render if has_renderer is True. Setting this value to ‘None’ will result in the default angle being applied, which is useful as it can be dragged / panned by the user using the mouse

  • render_collision_mesh (bool) – True if rendering collision meshes in camera. False otherwise.

  • render_visual_mesh (bool) – True if rendering visual meshes in camera. False otherwise.

  • render_gpu_device_id (int) – corresponds to the GPU device id to use for offscreen rendering. Defaults to -1, in which case the device will be inferred from environment variables (GPUS or CUDA_VISIBLE_DEVICES).

  • control_freq (float) – how many control signals to receive in every second. This sets the amount of simulation time that passes between every action input.

  • horizon (int) – Every episode lasts for exactly @horizon timesteps.

  • ignore_done (bool) – True if never terminating the environment (ignore @horizon).

  • hard_reset (bool) – If True, re-loads model, sim, and render object upon a reset call, else, only calls sim.reset and resets all robosuite-internal variables

  • camera_names (str or list of str) –

    name of camera to be rendered. Should either be single str if same name is to be used for all cameras’ rendering or else it should be a list of cameras to render.

    Note

    At least one camera must be specified if @use_camera_obs is True.

    Note

    To render all robots’ cameras of a certain type (e.g.: “robotview” or “eye_in_hand”), use the convention “all-{name}” (e.g.: “all-robotview”) to automatically render all camera images from each robot’s camera list.

  • camera_heights (int or list of int) – height of camera frame. Should either be single int if same height is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_widths (int or list of int) – width of camera frame. Should either be single int if same width is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_depths (bool or list of bool) – True if rendering RGB-D, and RGB otherwise. Should either be single bool if same depth setting is to be used for all cameras or else it should be a list of the same length as “camera names” param.

  • camera_segmentations (None or str or list of str or list of list of str) –

    Camera segmentation(s) to use for each camera. Valid options are:

    None: no segmentation sensor used
    ‘instance’: segmentation at the class-instance level
    ‘class’: segmentation at the class level
    ‘element’: segmentation at the per-geom level

    If not None, multiple types of segmentations can be specified. A [list of str / str or None] specifies [multiple / a single] segmentation(s) to use for all cameras. A list of list of str specifies per-camera segmentation setting(s) to use.

Raises
  • ValueError – [Invalid number of robots specified]

  • ValueError – [Invalid env configuration]

  • ValueError – [Invalid robots for specified env configuration]
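
A minimal usage sketch for this environment (the robot names, the random-action loop, and the specific keyword values below are illustrative, not defaults to rely on):

    import numpy as np
    import robosuite as suite

    # Two single-arm robots facing each other across the table; "default" would
    # resolve to "single-arm-opposed" for a pair of single-arm robots anyway.
    env = suite.make(
        "TwoArmHandover",
        robots=["Panda", "Sawyer"],
        env_configuration="single-arm-opposed",
        prehensile=True,          # handover object starts on the table
        use_camera_obs=False,     # skip image observations for this sketch
        has_renderer=False,
        reward_shaping=True,      # dense rewards as described below
    )

    obs = env.reset()
    low, high = env.action_spec  # concatenated action bounds for both arms
    for _ in range(10):
        action = np.random.uniform(low, high)
        obs, reward, done, info = env.step(action)
    env.close()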

reward(action=None)#

Reward function for the task.

Sparse un-normalized reward:

  • a discrete reward of 2.0 is provided when only Arm 1 is gripping the handle and has the handle lifted above a certain threshold

Un-normalized max-wise components if using reward shaping:

  • Arm0 Reaching: (1) in [0, 0.25] proportional to the distance between Arm 0 and the handle

  • Arm0 Grasping: (2) in {0, 0.5}, nonzero if Arm 0 is gripping the hammer (any part).

  • Arm0 Lifting: (3) in {0, 1.0}, nonzero if Arm 0 lifts the handle from the table past a certain threshold

  • Arm0 Hovering: (4) in {0, [1.0, 1.25]}, nonzero only if Arm0 is actively lifting the hammer, and is proportional to the distance between the handle and Arm 1 conditioned on the handle being lifted from the table and being grasped by Arm 0

  • Mutual Grasping: (5) in {0, 1.5}, nonzero if both Arm 0 and Arm 1 are gripping the hammer (Arm 1 must be gripping the handle) while lifted above the table

  • Handover: (6) in {0, 2.0}, nonzero when only Arm 1 is gripping the handle and has the handle lifted above the table

Note that the final reward is normalized and scaled by reward_scale / 2.0 as well so that the max score is equal to reward_scale

Parameters

action (np array) – [NOT USED]

Returns

reward value

Return type

float
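
To make the max-wise composition above concrete, here is a rough, hypothetical sketch of how such a staged reward could be assembled and normalized so the maximum equals reward_scale; it illustrates the structure only and is not the environment’s actual implementation:

    def handover_shaped_reward(reach_frac, arm0_grasp, arm0_lift, hover_frac,
                               both_grasp, handover_done, reward_scale=1.0):
        """Hypothetical max-wise staged reward capped at 2.0 before scaling."""
        components = [
            0.25 * reach_frac,                       # (1) Arm0 reaching
            0.5 if arm0_grasp else 0.0,              # (2) Arm0 grasping
            1.0 if arm0_lift else 0.0,               # (3) Arm0 lifting
            (1.0 + 0.25 * hover_frac) if (arm0_grasp and arm0_lift) else 0.0,  # (4) hovering
            1.5 if both_grasp else 0.0,              # (5) mutual grasping
            2.0 if handover_done else 0.0,           # (6) handover complete
        ]
        reward = max(components)                     # max-wise, not summed
        return reward * reward_scale / 2.0           # max score equals reward_scale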

robosuite.environments.manipulation.two_arm_lift module#

class robosuite.environments.manipulation.two_arm_lift.TwoArmLift(robots, env_configuration='default', controller_configs=None, gripper_types='default', initialization_noise='default', table_full_size=(0.8, 0.8, 0.05), table_friction=(1.0, 0.005, 0.0001), use_camera_obs=True, use_object_obs=True, reward_scale=1.0, reward_shaping=False, placement_initializer=None, has_renderer=False, has_offscreen_renderer=True, render_camera='frontview', render_collision_mesh=False, render_visual_mesh=True, render_gpu_device_id=- 1, control_freq=20, horizon=1000, ignore_done=False, hard_reset=True, camera_names='agentview', camera_heights=256, camera_widths=256, camera_depths=False, camera_segmentations=None, renderer='mujoco', renderer_config=None)#

Bases: robosuite.environments.manipulation.two_arm_env.TwoArmEnv

This class corresponds to the lifting task for two robot arms.

Parameters
  • robots (str or list of str) – Specification for specific robot arm(s) to be instantiated within this env (e.g: “Sawyer” would generate one arm; [“Panda”, “Panda”, “Sawyer”] would generate three robot arms) Note: Must be either 2 single-arm robots or 1 bimanual robot!

  • env_configuration (str) –

    Specifies how to position the robots within the environment. Can be either:

    ’bimanual’

    Only applicable for bimanual robot setups. Sets up the (single) bimanual robot on the -x side of the table

    ’single-arm-parallel’

    Only applicable for multi single-arm setups. Sets up the (two) single-arm robots next to each other on the -x side of the table

    ’single-arm-opposed’

    Only applicable for multi single-arm setups. Sets up the (two) single-arm robots opposite each other on the +/-y sides of the table.

    Note that “default” corresponds to either “bimanual” if a bimanual robot is used, or “single-arm-opposed” if two single-arm robots are used.

  • controller_configs (str or list of dict) – If set, contains relevant controller parameters for creating a custom controller. Else, uses the default controller for this specific task. Should either be single dict if same controller is to be used for all robots or else it should be a list of the same length as “robots” param

  • gripper_types (str or list of str) – type of gripper, used to instantiate gripper models from gripper factory. Default is “default”, which is the default gripper(s) associated with the robot(s) in the ‘robots’ specification. None removes the gripper, and any other (valid) model overrides the default gripper. Should either be single str if same gripper type is to be used for all robots or else it should be a list of the same length as “robots” param

  • initialization_noise (dict or list of dict) –

    Dict containing the initialization noise parameters. The expected keys and corresponding value types are specified below:

    ’magnitude’

    The scale factor of uni-variate random noise applied to each of a robot’s given initial joint positions. Setting this value to None or 0.0 results in no noise being applied. If “gaussian” type of noise is applied then this magnitude scales the standard deviation applied, If “uniform” type of noise is applied then this magnitude sets the bounds of the sampling range

    ’type’

    Type of noise to apply. Can either specify “gaussian” or “uniform”

    Should either be single dict if same noise value is to be used for all robots or else it should be a list of the same length as “robots” param

    Note

    Specifying “default” will automatically use the default noise settings. Specifying None will automatically create the required dict with “magnitude” set to 0.0.

  • table_full_size (3-tuple) – x, y, and z dimensions of the table.

  • table_friction (3-tuple) – the three mujoco friction parameters for the table.

  • use_camera_obs (bool) – if True, every observation includes rendered image(s)

  • use_object_obs (bool) – if True, include object (pot) information in the observation.

  • reward_scale (None or float) – Scales the normalized reward function by the amount specified. If None, environment reward remains unnormalized

  • reward_shaping (bool) – if True, use dense rewards.

  • placement_initializer (ObjectPositionSampler) – if provided, will be used to place objects on every reset, else a UniformRandomSampler is used by default.

  • has_renderer (bool) – If true, render the simulation state in a viewer instead of headless mode.

  • has_offscreen_renderer (bool) – True if using off-screen rendering

  • render_camera (str) – Name of camera to render if has_renderer is True. Setting this value to ‘None’ will result in the default angle being applied, which is useful as it can be dragged / panned by the user using the mouse

  • render_collision_mesh (bool) – True if rendering collision meshes in camera. False otherwise.

  • render_visual_mesh (bool) – True if rendering visual meshes in camera. False otherwise.

  • render_gpu_device_id (int) – corresponds to the GPU device id to use for offscreen rendering. Defaults to -1, in which case the device will be inferred from environment variables (GPUS or CUDA_VISIBLE_DEVICES).

  • control_freq (float) – how many control signals to receive in every second. This sets the amount of simulation time that passes between every action input.

  • horizon (int) – Every episode lasts for exactly @horizon timesteps.

  • ignore_done (bool) – True if never terminating the environment (ignore @horizon).

  • hard_reset (bool) – If True, re-loads model, sim, and render object upon a reset call, else, only calls sim.reset and resets all robosuite-internal variables

  • camera_names (str or list of str) –

    name of camera to be rendered. Should either be single str if same name is to be used for all cameras’ rendering or else it should be a list of cameras to render.

    Note

    At least one camera must be specified if @use_camera_obs is True.

    Note

    To render all robots’ cameras of a certain type (e.g.: “robotview” or “eye_in_hand”), use the convention “all-{name}” (e.g.: “all-robotview”) to automatically render all camera images from each robot’s camera list.

  • camera_heights (int or list of int) – height of camera frame. Should either be single int if same height is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_widths (int or list of int) – width of camera frame. Should either be single int if same width is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_depths (bool or list of bool) – True if rendering RGB-D, and RGB otherwise. Should either be single bool if same depth setting is to be used for all cameras or else it should be a list of the same length as “camera names” param.

  • camera_segmentations (None or str or list of str or list of list of str) –

    Camera segmentation(s) to use for each camera. Valid options are:

    None: no segmentation sensor used
    ‘instance’: segmentation at the class-instance level
    ‘class’: segmentation at the class level
    ‘element’: segmentation at the per-geom level

    If not None, multiple types of segmentations can be specified. A [list of str / str or None] specifies [multiple / a single] segmentation(s) to use for all cameras. A list of list of str specifies per-camera segmentation setting(s) to use.

Raises
  • ValueError – [Invalid number of robots specified]

  • ValueError – [Invalid env configuration]

  • ValueError – [Invalid robots for specified env configuration]
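
With a bimanual robot the same task is driven through a single robot entry; a short sketch, assuming “Baxter” is available as a bimanual model in your robosuite installation:

    import robosuite as suite

    # One bimanual robot mounted on the -x side of the table; for a bimanual
    # robot, "default" resolves to this configuration as well.
    env = suite.make(
        "TwoArmLift",
        robots="Baxter",
        env_configuration="bimanual",
        use_camera_obs=False,
        reward_shaping=True,
    )
    obs = env.reset()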

reward(action=None)#

Reward function for the task.

Sparse un-normalized reward:

  • a discrete reward of 3.0 is provided if the pot is lifted and is parallel within 30 deg to the table

Un-normalized summed components if using reward shaping:

  • Reaching: in [0, 0.5], per-arm component that is proportional to the distance between each arm and its respective pot handle, and exactly 0.5 when grasping the handle - Note that the agent only gets the lifting reward when flipping no more than 30 degrees.

  • Grasping: in {0, 0.25}, binary per-arm component awarded if the gripper is grasping its correct handle

  • Lifting: in [0, 1.5], proportional to the pot’s height above the table, and capped at a certain threshold

Note that the final reward is normalized and scaled by reward_scale / 3.0 as well so that the max score is equal to reward_scale

Parameters

action (np array) – [NOT USED]

Returns

reward value

Return type

float

visualize(vis_settings)#

In addition to super call, visualize gripper site proportional to the distance to each handle.

Parameters

vis_settings (dict) – Visualization keywords mapped to T/F, determining whether that specific component should be visualized. Should have “grippers” keyword as well as any other relevant options specified.
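
A hedged example of calling visualize directly; the exact keyword set comes from the base environment, and the “env”/“robots”/“grippers” keys shown here are an assumption about that set:

    # Keep only the gripper-site visualization (the distance-to-handle cue) on.
    env.visualize(vis_settings={"env": False, "robots": False, "grippers": True})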

robosuite.environments.manipulation.two_arm_peg_in_hole module#

class robosuite.environments.manipulation.two_arm_peg_in_hole.TwoArmPegInHole(robots, env_configuration='default', controller_configs=None, gripper_types=None, initialization_noise='default', use_camera_obs=True, use_object_obs=True, reward_scale=1.0, reward_shaping=False, peg_radius=(0.015, 0.03), peg_length=0.13, has_renderer=False, has_offscreen_renderer=True, render_camera='frontview', render_collision_mesh=False, render_visual_mesh=True, render_gpu_device_id=- 1, control_freq=20, horizon=1000, ignore_done=False, hard_reset=True, camera_names='agentview', camera_heights=256, camera_widths=256, camera_depths=False, camera_segmentations=None, renderer='mujoco', renderer_config=None)#

Bases: robosuite.environments.manipulation.two_arm_env.TwoArmEnv

This class corresponds to the peg-in-hole task for two robot arms.

Parameters
  • robots (str or list of str) – Specification for specific robot arm(s) to be instantiated within this env (e.g: “Sawyer” would generate one arm; [“Panda”, “Panda”, “Sawyer”] would generate three robot arms) Note: Must be either 2 single-arm robots or 1 bimanual robot!

  • env_configuration (str) –

    Specifies how to position the robots within the environment. Can be either:

    ’bimanual’

    Only applicable for bimanual robot setups. Sets up the (single) bimanual robot on the -x side of the table

    ’single-arm-parallel’

    Only applicable for multi single-arm setups. Sets up the (two) single-arm robots next to each other on the -x side of the table

    ’single-arm-opposed’

    Only applicable for multi single-arm setups. Sets up the (two) single-arm robots opposite each other on the +/-y sides of the table.

    Note that “default” corresponds to either “bimanual” if a bimanual robot is used, or “single-arm-opposed” if two single-arm robots are used.

  • controller_configs (str or list of dict) – If set, contains relevant controller parameters for creating a custom controller. Else, uses the default controller for this specific task. Should either be single dict if same controller is to be used for all robots or else it should be a list of the same length as “robots” param

  • gripper_types (str or list of str) – type of gripper, used to instantiate gripper models from gripper factory. For this environment, setting a value other than the default (None) will raise an AssertionError, as this environment is not meant to be used with any gripper at all.

  • initialization_noise (dict or list of dict) –

    Dict containing the initialization noise parameters. The expected keys and corresponding value types are specified below:

    ’magnitude’

    The scale factor of uni-variate random noise applied to each of a robot’s given initial joint positions. Setting this value to None or 0.0 results in no noise being applied. If “gaussian” type of noise is applied then this magnitude scales the standard deviation applied, If “uniform” type of noise is applied then this magnitude sets the bounds of the sampling range

    ’type’

    Type of noise to apply. Can either specify “gaussian” or “uniform”

    Should either be single dict if same noise value is to be used for all robots or else it should be a list of the same length as “robots” param

    Note

    Specifying “default” will automatically use the default noise settings. Specifying None will automatically create the required dict with “magnitude” set to 0.0.

  • use_camera_obs (bool or list of bool) – if True, every observation for a specific robot includes a rendered image. Should either be single bool if same camera obs value is to be used for all robots or else it should be a list of the same length as “robots” param

  • use_object_obs (bool) – if True, include object (peg and hole) information in the observation.

  • reward_scale (None or float) – Scales the normalized reward function by the amount specified. If None, environment reward remains unnormalized

  • reward_shaping (bool) – if True, use dense rewards.

  • peg_radius (2-tuple) – low and high limits of the (uniformly sampled) radius of the peg

  • peg_length (float) – length of the peg

  • has_renderer (bool) – If true, render the simulation state in a viewer instead of headless mode.

  • has_offscreen_renderer (bool) – True if using off-screen rendering

  • render_camera (str) – Name of camera to render if has_renderer is True. Setting this value to ‘None’ will result in the default angle being applied, which is useful as it can be dragged / panned by the user using the mouse

  • render_collision_mesh (bool) – True if rendering collision meshes in camera. False otherwise.

  • render_visual_mesh (bool) – True if rendering visual meshes in camera. False otherwise.

  • render_gpu_device_id (int) – corresponds to the GPU device id to use for offscreen rendering. Defaults to -1, in which case the device will be inferred from environment variables (GPUS or CUDA_VISIBLE_DEVICES).

  • control_freq (float) – how many control signals to receive in every second. This sets the amount of simulation time that passes between every action input.

  • horizon (int) – Every episode lasts for exactly @horizon timesteps.

  • ignore_done (bool) – True if never terminating the environment (ignore @horizon).

  • hard_reset (bool) – If True, re-loads model, sim, and render object upon a reset call, else, only calls sim.reset and resets all robosuite-internal variables

  • camera_names (str or list of str) –

    name of camera to be rendered. Should either be single str if same name is to be used for all cameras’ rendering or else it should be a list of cameras to render.

    Note

    At least one camera must be specified if @use_camera_obs is True.

    Note

    To render all robots’ cameras of a certain type (e.g.: “robotview” or “eye_in_hand”), use the convention “all-{name}” (e.g.: “all-robotview”) to automatically render all camera images from each robot’s camera list.

  • camera_heights (int or list of int) – height of camera frame. Should either be single int if same height is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_widths (int or list of int) – width of camera frame. Should either be single int if same width is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_depths (bool or list of bool) – True if rendering RGB-D, and RGB otherwise. Should either be single bool if same depth setting is to be used for all cameras or else it should be a list of the same length as “camera names” param.

  • camera_segmentations (None or str or list of str or list of list of str) –

    Camera segmentation(s) to use for each camera. Valid options are:

    None: no segmentation sensor used
    ‘instance’: segmentation at the class-instance level
    ‘class’: segmentation at the class level
    ‘element’: segmentation at the per-geom level

    If not None, multiple types of segmentations can be specified. A [list of str / str or None] specifies [multiple / a single] segmentation(s) to use for all cameras. A list of list of str specifies per-camera segmentation setting(s) to use.

Raises
  • AssertionError – [Gripper specified]

  • ValueError – [Invalid number of robots specified]

  • ValueError – [Invalid env configuration]

  • ValueError – [Invalid robots for specified env configuration]
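
A minimal construction sketch; gripper_types is left at None because any other value is rejected, and the robot names and peg parameters below are illustrative:

    import robosuite as suite

    env = suite.make(
        "TwoArmPegInHole",
        robots=["Sawyer", "Panda"],
        gripper_types=None,          # any other value raises AssertionError
        peg_radius=(0.015, 0.03),    # sampling range for the peg radius
        peg_length=0.13,
        use_camera_obs=False,
    )
    obs = env.reset()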

reward(action=None)#

Reward function for the task.

Sparse un-normalized reward:

  • a discrete reward of 5.0 is provided if the peg is inside the plate’s hole - Note that we enforce that it’s inside at an appropriate angle (cos(theta) > 0.95).

Un-normalized summed components if using reward shaping:

  • Reaching: in [0, 1], to encourage the arms to approach each other

  • Perpendicular Distance: in [0,1], to encourage the arms to approach each other

  • Parallel Distance: in [0,1], to encourage the arms to approach each other

  • Alignment: in [0, 1], to encourage having the right orientation between the peg and hole.

  • Placement: in {0, 1}, nonzero if the peg is in the hole with a relatively correct alignment

Note that the final reward is normalized and scaled by reward_scale / 5.0 as well so that the max score is equal to reward_scale
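
The angular part of the sparse success condition can be expressed as a cosine test between the peg axis and the hole axis; the helper below is a hypothetical numpy sketch of that criterion, not the environment’s internal success check:

    import numpy as np

    def peg_aligned(peg_axis, hole_axis, cos_threshold=0.95):
        """True if the angle between the axes satisfies cos(theta) > 0.95 (about 18 degrees)."""
        peg_axis = np.asarray(peg_axis, dtype=float)
        hole_axis = np.asarray(hole_axis, dtype=float)
        cos_theta = np.dot(peg_axis, hole_axis) / (
            np.linalg.norm(peg_axis) * np.linalg.norm(hole_axis)
        )
        return cos_theta > cos_threshold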

robosuite.environments.manipulation.wipe module#

class robosuite.environments.manipulation.wipe.Wipe(robots, env_configuration='default', controller_configs=None, gripper_types='WipingGripper', initialization_noise='default', use_camera_obs=True, use_object_obs=True, reward_scale=1.0, reward_shaping=True, has_renderer=False, has_offscreen_renderer=True, render_camera='frontview', render_collision_mesh=False, render_visual_mesh=True, render_gpu_device_id=- 1, control_freq=20, horizon=1000, ignore_done=False, hard_reset=True, camera_names='agentview', camera_heights=256, camera_widths=256, camera_depths=False, camera_segmentations=None, task_config=None, renderer='mujoco', renderer_config=None)#

Bases: robosuite.environments.manipulation.single_arm_env.SingleArmEnv

This class corresponds to the Wiping task for a single robot arm

Parameters
  • robots (str or list of str) – Specification for specific robot arm(s) to be instantiated within this env (e.g: “Sawyer” would generate one arm; [“Panda”, “Panda”, “Sawyer”] would generate three robot arms) Note: Must be a single single-arm robot!

  • env_configuration (str) – Specifies how to position the robots within the environment (default is “default”). For most single arm environments, this argument has no impact on the robot setup.

  • controller_configs (str or list of dict) – If set, contains relevant controller parameters for creating a custom controller. Else, uses the default controller for this specific task. Should either be single dict if same controller is to be used for all robots or else it should be a list of the same length as “robots” param

  • gripper_types (str or list of str) – type of gripper, used to instantiate gripper models from gripper factory. For this environment, setting a value other than the default (“WipingGripper”) will raise an AssertionError, as this environment is not meant to be used with any other alternative gripper.

  • initialization_noise (dict or list of dict) –

    Dict containing the initialization noise parameters. The expected keys and corresponding value types are specified below:

    ’magnitude’

    The scale factor of uni-variate random noise applied to each of a robot’s given initial joint positions. Setting this value to None or 0.0 results in no noise being applied. If “gaussian” type of noise is applied then this magnitude scales the standard deviation applied, If “uniform” type of noise is applied then this magnitude sets the bounds of the sampling range

    ’type’

    Type of noise to apply. Can either specify “gaussian” or “uniform”

    Should either be single dict if same noise value is to be used for all robots or else it should be a list of the same length as “robots” param

    Note

    Specifying “default” will automatically use the default noise settings. Specifying None will automatically create the required dict with “magnitude” set to 0.0.

  • use_camera_obs (bool) – if True, every observation includes rendered image(s)

  • use_object_obs (bool) – if True, include object information in the observation.

  • reward_scale (None or float) – Scales the normalized reward function by the amount specified. If None, environment reward remains unnormalized

  • reward_shaping (bool) – if True, use dense rewards.

  • has_renderer (bool) – If true, render the simulation state in a viewer instead of headless mode.

  • has_offscreen_renderer (bool) – True if using off-screen rendering

  • render_camera (str) – Name of camera to render if has_renderer is True. Setting this value to ‘None’ will result in the default angle being applied, which is useful as it can be dragged / panned by the user using the mouse

  • render_collision_mesh (bool) – True if rendering collision meshes in camera. False otherwise.

  • render_visual_mesh (bool) – True if rendering visual meshes in camera. False otherwise.

  • render_gpu_device_id (int) – corresponds to the GPU device id to use for offscreen rendering. Defaults to -1, in which case the device will be inferred from environment variables (GPUS or CUDA_VISIBLE_DEVICES).

  • control_freq (float) – how many control signals to receive in every second. This sets the amount of simulation time that passes between every action input.

  • horizon (int) – Every episode lasts for exactly @horizon timesteps.

  • ignore_done (bool) – True if never terminating the environment (ignore @horizon).

  • hard_reset (bool) – If True, re-loads model, sim, and render object upon a reset call, else, only calls sim.reset and resets all robosuite-internal variables

  • camera_names (str or list of str) –

    name of camera to be rendered. Should either be single str if same name is to be used for all cameras’ rendering or else it should be a list of cameras to render.

    Note

    At least one camera must be specified if @use_camera_obs is True.

    Note

    To render all robots’ cameras of a certain type (e.g.: “robotview” or “eye_in_hand”), use the convention “all-{name}” (e.g.: “all-robotview”) to automatically render all camera images from each robot’s camera list.

  • camera_heights (int or list of int) – height of camera frame. Should either be single int if same height is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_widths (int or list of int) – width of camera frame. Should either be single int if same width is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_depths (bool or list of bool) – True if rendering RGB-D, and RGB otherwise. Should either be single bool if same depth setting is to be used for all cameras or else it should be a list of the same length as “camera names” param.

  • camera_segmentations (None or str or list of str or list of list of str) –

    Camera segmentation(s) to use for each camera. Valid options are:

    None: no segmentation sensor used
    ‘instance’: segmentation at the class-instance level
    ‘class’: segmentation at the class level
    ‘element’: segmentation at the per-geom level

    If not None, multiple types of segmentations can be specified. A [list of str / str or None] specifies [multiple / a single] segmentation(s) to use for all cameras. A list of list of str specifies per-camera segmentation setting(s) to use.

  • task_config (None or dict) – Specifies the parameters relevant to this task. For a full list of expected parameters, see the default configuration dict at the top of this file. If None is specified, the default configuration will be used.

Raises
  • AssertionError – [Gripper specified]

  • AssertionError – [Bad reward specification]

  • AssertionError – [Invalid number of robots specified]
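
The reward constants referenced in the reward description below (unit_wiped_reward, task_complete_reward, num_markers, and so on) come from task_config. A hedged sketch that starts from the module’s default configuration and overrides a few entries; it assumes the default dict is exposed as DEFAULT_WIPE_CONFIG, and the chosen values are purely illustrative:

    import robosuite as suite
    from robosuite.environments.manipulation.wipe import DEFAULT_WIPE_CONFIG  # assumed name

    # The environment expects a complete config dict, so copy the default and override.
    task_config = dict(DEFAULT_WIPE_CONFIG)
    task_config.update({
        "num_markers": 50,             # amount of dirt placed on the table
        "unit_wiped_reward": 50.0,     # reward per marker wiped
        "task_complete_reward": 50.0,  # bonus when the table is fully clean
    })

    env = suite.make(
        "Wipe",
        robots="Panda",
        gripper_types="WipingGripper",  # the only gripper this task accepts
        task_config=task_config,
        use_camera_obs=False,
    )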

reward(action=None)#

Reward function for the task.

Sparse un-normalized reward:

  • a discrete reward of self.unit_wiped_reward is provided per single dirt (peg) wiped during this step

  • a discrete reward of self.task_complete_reward is provided if all dirt is wiped

Note that if the arm is either colliding or near its joint limit, a reward of 0 will be automatically given

Un-normalized summed components if using reward shaping (individual components can be set to 0):

  • Reaching: in [0, self.distance_multiplier], proportional to distance between wiper and centroid of dirt and zero if the table has been fully wiped clean of all the dirt

  • Table Contact: in {0, self.wipe_contact_reward}, non-zero if wiper is in contact with table

  • Wiping: in {0, self.unit_wiped_reward}, non-zero for each dirt (peg) wiped during this step

  • Cleaned: in {0, self.task_complete_reward}, non-zero if no dirt remains on the table

  • Collision / Joint Limit Penalty: in {self.arm_limit_collision_penalty, 0}, nonzero if robot arm is colliding with an object - Note that if this value is nonzero, no other reward components can be added

  • Large Force Penalty: in [-inf, 0], scaled by wiper force and directly proportional to self.excess_force_penalty_mul if the current force exceeds self.pressure_threshold_max

  • Large Acceleration Penalty: in [-inf, 0], scaled by estimated wiper acceleration and directly proportional to self.ee_accel_penalty

Note that the final per-step reward is normalized given the theoretical best episode return and then scaled: reward_scale * (horizon / (num_markers * unit_wiped_reward + horizon * (wipe_contact_reward + task_complete_reward)))

Parameters

action (np array) – [NOT USED]

Returns

reward value

Return type

float
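
A worked example of the normalization factor quoted above, using hypothetical config values, to show how the theoretical best episode return bounds the scaled per-step reward:

    # Hypothetical values plugged into the normalization formula quoted above.
    reward_scale = 1.0
    horizon = 1000
    num_markers = 100
    unit_wiped_reward = 50.0
    wipe_contact_reward = 0.01
    task_complete_reward = 50.0

    best_episode_return = (num_markers * unit_wiped_reward
                           + horizon * (wipe_contact_reward + task_complete_reward))
    per_step_factor = reward_scale * horizon / best_episode_return

    # A raw shaped per-step reward would then be multiplied by per_step_factor,
    # here roughly 1000 / 55010, i.e. about 0.018.
    print(per_step_factor)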

Module contents#