We present a task-oriented grasp model that encodes grasps that are configurationally compatible with a given task. For instance, if the task is to pour liquid from a container, the model encodes grasps that leave the opening of the container unobstructed. The model consists of two independent agents. The first is a geometric grasp model that computes, from a depth image, a distribution of 6D grasp poses for which the shape of the gripper matches the shape of the underlying surface; it relies on a dictionary of geometric object parts annotated with workable gripper poses and preshape parameters, and it is learned from experience via kinesthetic teaching. The second is a CNN-based semantic model that identifies grasp-suitable regions in a depth image, i.e., regions where a grasp will not impede the execution of the task. The semantic model allows us to encode relationships such as "grasp from the handle." A key element of this work is to use a deep network to integrate contextual task cues, and to defer the structured-output problem of gripper pose computation to an explicit (learned) geometric model. Jointly, the two models generate grasps that are mechanically fit and that grip the object in a way that enables the intended task.
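To make the interplay of the two models concrete, below is a minimal sketch of how their outputs could be fused. The function name, variable names, and the product scoring rule are illustrative assumptions, not the method published in the paper; the geometric model is assumed to return candidate 6D poses with fit scores, and the semantic model a per-pixel suitability map with values in [0, 1].

```python
import numpy as np

def combine_grasps(grasp_poses, geometric_scores, suitability_map, pixel_coords):
    """Rank 6D grasp candidates by geometric fit weighted by the semantic
    suitability of the image region each grasp touches (hypothetical fusion)."""
    # Semantic score: look up the CNN suitability map at each grasp's
    # projected (u, v) pixel location; values are assumed to lie in [0, 1].
    semantic_scores = suitability_map[pixel_coords[:, 1], pixel_coords[:, 0]]
    # Joint score: the product rewards grasps that are both mechanically
    # fit and placed on a task-compatible region.
    joint = geometric_scores * semantic_scores
    order = np.argsort(joint)[::-1]  # best first
    return grasp_poses[order], joint[order]

# Toy example: three candidate grasps on a 4x4 suitability map.
suitability = np.array([
    [0.1, 0.1, 0.9, 0.9],   # top-right corner plays the role of a "handle"
    [0.1, 0.1, 0.9, 0.9],
    [0.1, 0.1, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.1],
])
poses = np.arange(3)                      # stand-ins for 6D grasp poses
geo = np.array([0.8, 0.9, 0.5])           # geometric fit scores
px = np.array([[2, 0], [0, 0], [3, 1]])   # (u, v) pixel locations
ranked_poses, scores = combine_grasps(poses, geo, suitability, px)
print(ranked_poses, scores)  # grasps on the handle-like region rank first
```

Multiplying the two terms acts as a conjunction: a grasp ranks highly only if it is both mechanically sound and located on a task-compatible region, mirroring the requirement above that grasps be mechanically fit while leaving task-relevant parts unobstructed.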
Main reference:
- [detry2017c] R. Detry, J. Papon and L. Matthies, Task-oriented Grasping with Semantic and Geometric Scene Understanding. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017. (Best Paper Award in Cognitive Robotics.)
@inproceedings{detry2017c,
author = {Renaud Detry and Jeremie Papon and Larry Matthies},
booktitle = {{IEEE/RSJ} International Conference on Intelligent Robots and Systems},
doi = {10.1109/IROS.2017.8206162},
note = {(Best Paper Award in Cognitive Robotics)},
title = {Task-oriented Grasping with Semantic and Geometric Scene Understanding},
year = {2017}}
Papers covering this topic:
- [bowkett2018a] J. Bowkett, J. Burdick, L. Matthies and R. Detry, Semantic Understanding of Task Outcomes: Visually Identifying Failure Modes Autonomously Discovered in Simulation. In Representing a Complex World: Perception, Inference, and Learning for Joint Semantic, Geometric, and Physical Understanding (ICRA 2018 Workshop), 2018.
@inproceedings{bowkett2018a,
author = {Joseph Bowkett and Joel Burdick and Larry Matthies and Renaud Detry},
booktitle = {Representing a Complex World: Perception, Inference, and Learning for Joint Semantic, Geometric, and Physical Understanding (ICRA 2018 Workshop)},
title = {Semantic Understanding of Task Outcomes: Visually Identifying Failure Modes Autonomously Discovered in Simulation},
year = {2018}}
- [detry2017a] R. Detry, J. Papon and L. Matthies, Learning to Grasp with a Deep Network for 2D Context and Geometric Prototypes for 3D Structure. In Learning and Control for Autonomous Manipulation Systems: The Role of Dimensionality Reduction (ICRA 2017 Workshop), 2017.
@inproceedings{detry2017a,
author = {Renaud Detry and Jeremie Papon and Larry Matthies},
booktitle = {Learning and control for autonomous manipulation systems: the role of dimensionality reduction (ICRA 2017 Workshop)},
title = {Learning to Grasp with a Deep Network for 2D Context and Geometric Prototypes for 3D Structure},
year = {2017}}
- [detry2017b] R. Detry, J. Papon and L. Matthies, Semantic and Geometric Scene Understanding for Task-oriented Grasping of Novel Objects from a Single View. In Learning and Control for Autonomous Manipulation Systems: The Role of Dimensionality Reduction (ICRA 2017 Workshop), 2017.
@inproceedings{detry2017b,
author = {Renaud Detry and Jeremie Papon and Larry Matthies},
booktitle = {Learning and control for autonomous manipulation systems: the role of dimensionality reduction (ICRA 2017 Workshop)},
title = {Semantic and Geometric Scene Understanding for Task-oriented Grasping of Novel Objects from a Single View},
year = {2017}}
- [detry2017d] R. Detry, J. Papon and L. Matthies, Semantic and Geometric Scene Understanding for Single-view Task-oriented Grasping of Novel Objects. In Workshop on Spatial-Semantic Representations in Robotics (RSS 2017 Workshop), 2017.
@inproceedings{detry2017d,
author = {Renaud Detry and Jeremie Papon and Larry Matthies},
booktitle = {Workshop on Spatial-Semantic Representations in Robotics (RSS 2017 Workshop)},
title = {Semantic and Geometric Scene Understanding for Single-view Task-oriented Grasping of Novel Objects},
year = {2017}}
- [zhang2017a] M. Zhang, R. Detry, L. Matthies and K. Daniilidis, Tactile-Vision Integration for Task-Compatible Fine-Part Manipulation. In Revisiting Contact – Turning a Problem into a Solution (RSS 2017 Workshop), 2017.
@inproceedings{zhang2017a,
author = {Mabel Zhang and Renaud Detry and Larry Matthies and Kostas Daniilidis},
booktitle = {Revisiting Contact -- Turning a problem into a solution (RSS 2017 Workshop)},
title = {Tactile-Vision Integration for Task-Compatible Fine-Part Manipulation},
year = {2017}}
- [zhang2018a] M. Zhang, A. ten Pas, R. Detry and K. Daniilidis, Tactile-Visual Integration for Task-Aware Grasping. In RSS Pioneers (RSS 2018 Workshop), 2018.
@inproceedings{zhang2018a,
author = {Mabel Zhang and Andreas ten Pas and Renaud Detry and Kostas Daniilidis},
booktitle = {RSS Pioneers (RSS 2018 Workshop)},
title = {Tactile-Visual Integration for Task-Aware Grasping},
year = {2018}}
Many of these publications are copyrighted by their respective publishers. Downloadable versions are not necessarily identical to the published versions. They are made available here for personal use only.