# Projects using robosuite
A list of projects and papers that use robosuite. If you would like your work added to this list, please send the paper information to Yuke Zhu (yukez@cs.utexas.edu).
## 2022
- Robot Learning on the Job: Human-in-the-Loop Autonomy and Learning During Deployment. Huihan Liu, Soroush Nasiriany, Lance Zhang, Zhiyao Bao, Yuke Zhu
- Geometric Impedance Control on SE(3) for Robotic Manipulators. Joohwan Seo, Nikhil Potu Surya Prakash, Alexander Rose, Roberto Horowitz
- Guided Skill Learning and Abstraction for Long-Horizon Manipulation. Shuo Cheng, Danfei Xu
- VIOLA: Imitation Learning for Vision-Based Manipulation with Object Proposal Priors. Yifeng Zhu, Abhishek Joshi, Peter Stone, Yuke Zhu
- Monte Carlo Augmented Actor-Critic for Sparse Reward Deep Reinforcement Learning from Suboptimal Demonstrations. Albert Wilcox, Ashwin Balakrishna, Jules Dedieu, Wyame Benslimane, Daniel S. Brown, Ken Goldberg
- ASPiRe: Adaptive Skill Priors for Reinforcement Learning. Mengda Xu, Manuela Veloso, Shuran Song
- Active Predicting Coding: Brain-Inspired Reinforcement Learning for Sparse Reward Robotic Control Problems. Alexander Ororbia, Ankur Mali
- Spatial and Temporal Features Unified Self-Supervised Representation Learning Network. Rahul Choudhary, Rahee Walambe, Ketan Kotecha
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration. Xingyu Liu, Deepak Pathak, Kris M. Kitani
- A Dual Representation Framework for Robot Learning with Human Guidance. Ruohan Zhang, Dhruva Bansal, Yilun Hao, Ayano Hiranaka, Jialu Gao, Chen Wang, Roberto Martín-Martín, Li Fei-Fei, Jiajun Wu
- CompoSuite: A Compositional Reinforcement Learning Benchmark. Jorge A. Mendez, Marcel Hussing, Meghna Gummadi, Eric Eaton
- Causal Dynamics Learning for Task-Independent State Abstraction. Zizhao Wang, Xuesu Xiao, Zifan Xu, Yuke Zhu, Peter Stone
- Latent Policies for Adversarial Imitation Learning. Tianyu Wang, Nikhil Karnwal, Nikolay Atanasov
- Play it by Ear: Learning Skills amidst Occlusion through Audio-Visual Imitation Learning. Maximilian Du, Olivia Y. Lee, Suraj Nair, Chelsea Finn
- Visuotactile-RL: Learning Multimodal Manipulation Policies with Deep Reinforcement Learning. Johanna Hansen, Francois Hogan, Dmitriy Rivkin, David Meger, Michael Jenkin, Gregory Dudek
- DreamingV2: Reinforcement Learning with Discrete World Models without Reconstruction. Masashi Okada, Tadahiro Taniguchi
- Ditto: Building Digital Twins of Articulated Objects from Interaction. Zhenyu Jiang, Cheng-Chun Hsu, Yuke Zhu
- A Ranking Game for Imitation Learning. Harshit Sikchi, Akanksha Saran, Wonjoon Goo, Scott Niekum
- Learning to Grasp the Ungraspable with Emergent Extrinsic Dexterity. Wenxuan Zhou, David Held
- Efficiently Learning Recoveries from Failures Under Partial Observability. Shivam Vats, Maxim Likhachev, Oliver Kroemer
- Learning Representations via a Robust Behavioral Metric for Deep Reinforcement Learning. Jianda Chen, Sinno Pan
- Synthesizing Adversarial Visual Scenarios for Model-Based Robotic Control. Shubhankar Agarwal, Sandeep P. Chinchali
## 2021
- Guided Imitation of Task and Motion Planning. Michael McDonald, Dylan Hadfield-Menell
- V-MAO: Generative Modeling for Multi-Arm Manipulation of Articulated Objects. Xingyu Liu, Kris M. Kitani
- Koopman Q-learning: Offline Reinforcement Learning via Symmetries of Dynamics. Matthias Weissenbacher, Samarth Sinha, Animesh Garg, Yoshinobu Kawahara
- Validate on Sim, Detect on Real – Model Selection for Domain Randomization. Gal Leibovich, Guy Jacob, Shadi Endrawis, Gal Novik, Aviv Tamar
- Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives. Murtaza Dalal, Deepak Pathak, Ruslan Salakhutdinov
- Towards More Generalizable One-shot Visual Imitation Learning. Zhao Mandi, Fangchen Liu, Kimin Lee, Pieter Abbeel
- Decentralized Multi-Agent Control of a Manipulator in Continuous Task Learning. Asad Ali Shahid, Jorge Said Vidal Sesin, Damjan Pecioski, Francesco Braghin, Dario Piga, Loris Roveda
- Augmenting Reinforcement Learning with Behavior Primitives for Diverse Manipulation Tasks. Soroush Nasiriany, Huihan Liu, Yuke Zhu
- Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation. Yifeng Zhu, Peter Stone, Yuke Zhu
- Lifelong Robotic Reinforcement Learning by Retaining Experiences. Annie Xie, Chelsea Finn
- ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning. Ryan Hoque, Ashwin Balakrishna, Ellen Novoseller, Albert Wilcox, Daniel S. Brown, Ken Goldberg
- What Matters in Learning from Offline Human Demonstrations for Robot Manipulation. Ajay Mandlekar, Danfei Xu, Josiah Wong, Soroush Nasiriany, Chen Wang, Rohun Kulkarni, Li Fei-Fei, Silvio Savarese, Yuke Zhu, Roberto Martín-Martín
- Multi-Modal Mutual Information (MuMMI) Training for Robust Self-Supervised Deep Reinforcement Learning. Kaiqi Chen, Yong Lee, Harold Soh
- SECANT: Self-Expert Cloning for Zero-Shot Generalization of Visual Policies. Linxi Fan, Guanzhi Wang, De-An Huang, Zhiding Yu, Li Fei-Fei, Yuke Zhu, Anima Anandkumar
- What Can I Do Here? Learning New Skills by Imagining Visual Affordances. Alexander Khazatsky, Ashvin Nair, Daniel Jing, Sergey Levine
- Calibration-Free Monocular Vision-Based Robot Manipulations With Occlusion Awareness. Yongle Luo, Kun Dong, Lili Zhao, Zhiyong Sun, Erkang Cheng, Honglin Kan, Chao Zhou, Bo Song
- Learning a Skill-sequence-dependent Policy for Long-horizon Manipulation Tasks. Zhihao Li, Zhenglong Sun, Jionglong Su, Jiaming Zhang
- Efficient Self-Supervised Data Collection for Offline Robot Learning. Shadi Endrawis, Gal Leibovich, Guy Jacob, Gal Novik, Aviv Tamar
- Learning Visually Guided Latent Actions for Assistive Teleoperation. Siddharth Karamcheti, Albert J. Zhai, Dylan P. Losey, Dorsa Sadigh
- LASER: Learning a Latent Action Space for Efficient Reinforcement Learning. Arthur Allshire, Roberto Martín-Martín, Charles Lin, Shawn Manuel, Silvio Savarese, Animesh Garg
- S4RL: Surprisingly Simple Self-Supervision for Offline Reinforcement Learning. Samarth Sinha, Ajay Mandlekar, Animesh Garg
- Generalization Through Hand-Eye Coordination: An Action Space for Learning Spatially-Invariant Visuomotor Control. Chen Wang, Rui Wang, Ajay Mandlekar, Li Fei-Fei, Silvio Savarese, Danfei Xu
- Interpreting Contact Interactions to Overcome Failure in Robot Assembly Tasks. Peter A. Zachares, Michelle A. Lee, Wenzhao Lian, Jeannette Bohg
- Learning Contact-Rich Assembly Skills Using Residual Admittance Policy. Oren Spector, Miriam Zacksenhouse
- OSCAR: Data-Driven Operational Space Control for Adaptive and Robust Robot Manipulation. Josiah Wong, Viktor Makoviychuk, Anima Anandkumar, Yuke Zhu
- OPIRL: Sample Efficient Off-Policy Inverse Reinforcement Learning via Distribution Matching. Hana Hoshino, Kei Ota, Asako Kanezaki, Rio Yokota
- RLDS: an Ecosystem to Generate, Share and Use Datasets in Reinforcement Learning. Sabela Ramos, Sertan Girgin, Léonard Hussenot, Damien Vincent, Hanna Yakubovich, Daniel Toyama, Anita Gergely, Piotr Stanczyk, Raphael Marinier, Jeremiah Harmsen, Olivier Pietquin, Nikola Momchev
- RMPs for Safe Impedance Control in Contact-Rich Manipulation. Seiji Shaw, Ben Abbatematteo, George Konidaris
- Learning Robotic Manipulation Skills Using an Adaptive Force-Impedance Action Space. Maximilian Ulmer, Elie Aljalbout, Sascha Schwarz, Sami Haddadin
## 2020
- On the Impact of Gravity Compensation on Reinforcement Learning in Goal-Reaching Tasks for Robotic Manipulators. Jonathan Fugal, Jihye Bae, Hasan A. Poonawala
- Learning Multi-Arm Manipulation Through Collaborative Teleoperation. Albert Tung, Josiah Wong, Ajay Mandlekar, Roberto Martín-Martín, Yuke Zhu, Li Fei-Fei, Silvio Savarese
- Human-in-the-Loop Imitation Learning using Remote Teleoperation. Ajay Mandlekar, Danfei Xu, Roberto Martín-Martín, Yuke Zhu, Li Fei-Fei, Silvio Savarese
- Transformers for One-Shot Visual Imitation. Sudeep Dasari, Abhinav Gupta
- Conservative Safety Critics for Exploration. Homanga Bharadhwaj, Aviral Kumar, Nicholas Rhinehart, Sergey Levine, Florian Shkurti, Animesh Garg
- Continual Model-Based Reinforcement Learning with Hypernetworks. Yizhou Huang, Kevin Xie, Homanga Bharadhwaj, Florian Shkurti
- Hierarchical 6-DoF Grasping with Approaching Direction Selection. Yunho Choi, Hogun Kee, Kyungjae Lee, JaeGoo Choy, Junhong Min, Sohee Lee, Songhwai Oh
- Residual Learning from Demonstration. Todor Davchev, Kevin Sebastian Luck, Michael Burke, Franziska Meier, Stefan Schaal, Subramanian Ramamoorthy
- Crossing the Gap: A Deep Dive into Zero-Shot Sim-to-Real Transfer for Dynamics. Eugene Valassakis, Zihan Ding, Edward Johns
- Deep Reinforcement Learning for Contact-Rich Skills Using Compliant Movement Primitives. Oren Spector, Miriam Zacksenhouse
- Learning Robot Skills with Temporal Variational Inference. Tanmay Shankar, Abhinav Gupta
- Long-Horizon Visual Planning with Goal-Conditioned Hierarchical Predictors. Karl Pertsch, Oleh Rybkin, Frederik Ebert, Chelsea Finn, Dinesh Jayaraman, Sergey Levine
- Variational Imitation Learning with Diverse-quality Demonstrations. Voot Tangkaratt, Bo Han, Mohammad Emtiyaz Khan, Masashi Sugiyama
- Balance Between Efficient and Effective Learning: Dense2Sparse Reward Shaping for Robot Manipulation with Environment Uncertainty. Yongle Luo, Kun Dong, Lili Zhao, Zhiyong Sun, Chao Zhou, Bo Song
- Intrinsic Motivation for Encouraging Synergistic Behavior. Rohan Chitnis, Shubham Tulsiani, Saurabh Gupta, Abhinav Gupta
- Learning Continuous Control Actions for Robotic Grasping with Reinforcement Learning. Asad Ali Shahid, Loris Roveda, Dario Piga, Francesco Braghin
- Combining Reinforcement Learning and Rule-based Method to Manipulate Objects in Clutter. Yiwen Chen, Zhaojie Ju, Chenguang Yang
- Research on Complex Robot Manipulation Tasks Based on Hindsight Trust Region Policy Optimization. Deyu Yang, Hanbo Zhang, Xuguang Lan
## 2019
- To Follow or not to Follow: Selective Imitation Learning from Observations. Youngwoon Lee, Edward S. Hu, Zhengyu Yang, Joseph J. Lim
- IKEA Furniture Assembly Environment for Long-Horizon Complex Manipulation Tasks. Youngwoon Lee, Edward S. Hu, Zhengyu Yang, Alex Yin, Joseph J. Lim
- IRIS: Implicit Reinforcement without Interaction at Scale for Learning Control from Offline Robot Manipulation Data. Ajay Mandlekar, Fabio Ramos, Byron Boots, Silvio Savarese, Li Fei-Fei, Animesh Garg, Dieter Fox
- Network Randomization: A Simple Technique for Generalization in Deep Reinforcement Learning. Kimin Lee, Kibok Lee, Jinwoo Shin, Honglak Lee
- Efficient Bimanual Manipulation Using Learned Task Schemas. Rohan Chitnis, Shubham Tulsiani, Saurabh Gupta, Abhinav Gupta
- SURREAL-System: Fully-Integrated Stack for Distributed Deep Reinforcement Learning. Linxi Fan*, Yuke Zhu*, Jiren Zhu, Zihua Liu, Orien Zeng, Anchit Gupta, Joan Creus-Costa, Silvio Savarese, Li Fei-Fei
- Memory Based Trajectory-conditioned Policies for Learning from Sparse Rewards. Yijie Guo, Jongwook Choi, Marcin Moczulski, Shengyu Feng, Samy Bengio, Mohammad Norouzi, Honglak Lee
- Variable Impedance Control in End-Effector Space: An Action Space for Reinforcement Learning in Contact-Rich Tasks. Roberto Martín-Martín, Michelle A. Lee, Rachel Gardner, Silvio Savarese, Jeannette Bohg, Animesh Garg
## 2018
- RoboTurk: A Crowdsourcing Platform for Robotic Skill Learning through Imitation. Ajay Mandlekar, Yuke Zhu, Animesh Garg, Jonathan Booher, Max Spero, Albert Tung, Julian Gao, John Emmons, Anchit Gupta, Emre Orbay, Silvio Savarese, Li Fei-Fei
- SURREAL: Open-Source Reinforcement Learning Framework and Robot Manipulation Benchmark. Linxi Fan*, Yuke Zhu*, Jiren Zhu, Zihua Liu, Orien Zeng, Anchit Gupta, Joan Creus-Costa, Silvio Savarese, Li Fei-Fei