Generalization

Learning Task Pre- and Post-conditions / Task-oriented Grasping


We present a task-oriented grasp model that encodes grasps that are configurationally compatible with a given task. For instance, if the task is to pour liquid from a container, the model encodes grasps that leave the opening of the container unobstructed. The model consists of two independent agents. The first is a geometric grasp model that computes, from a depth image, a distribution of 6D grasp poses for which the shape of the gripper matches the shape of the underlying surface. It relies on a dictionary of geometric object parts annotated with workable gripper poses and preshape parameters, and it is learned from experience via kinesthetic teaching. The second agent is a CNN-based semantic model that identifies grasp-suitable regions in a depth image, i.e., regions where a grasp will not impede the execution of the task. The semantic model allows us to encode relationships such as "grasp from the handle." A key element of this work is to use a deep network to integrate contextual task cues, while deferring the structured-output problem of gripper-pose computation to an explicit, learned geometric model. Jointly, these two models generate grasps that are mechanically sound and that grip the object in a way that enables the intended task.
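To make the combination of the two agents concrete, the Python sketch below shows one plausible way to fuse a geometric grasp distribution with a CNN-produced suitability map: geometric candidates are re-scored by the task suitability of the image region they grip. This is an illustrative sketch under assumed interfaces; the class names (GeometricGraspModel, SemanticSuitabilityNet), the scoring rule, and the data layouts are hypothetical and are not taken from the paper.

    # Illustrative sketch only (not the authors' implementation): one way the two
    # models described above could be combined. All names here are hypothetical.
    import numpy as np

    class GeometricGraspModel:
        """Stand-in for the part-dictionary grasp model: proposes candidate 6D
        grasp poses with preshape parameters and a geometric fit score."""
        def sample_grasps(self, depth_image, n=100):
            rng = np.random.default_rng(0)
            h, w = depth_image.shape
            poses = rng.uniform(-1.0, 1.0, size=(n, 7))     # x, y, z + quaternion (placeholder values)
            preshapes = rng.uniform(0.0, 1.0, size=(n, 3))  # gripper preshape parameters
            fit = rng.uniform(0.0, 1.0, size=n)             # how well the gripper matches the surface
            pixels = np.stack([rng.integers(0, h, size=n),  # image location of each grasp's contact region
                               rng.integers(0, w, size=n)], axis=1)
            return poses, preshapes, fit, pixels

    class SemanticSuitabilityNet:
        """Stand-in for the CNN that marks task-compatible grasp regions."""
        def predict(self, depth_image, task):
            # Per-pixel probability that grasping there does not impede the task.
            return np.full(depth_image.shape, 0.5)

    def task_oriented_grasps(depth_image, task, geo_model, sem_model, top_k=5):
        """Rank geometric grasp candidates by task suitability at their contact pixel."""
        poses, preshapes, fit, pixels = geo_model.sample_grasps(depth_image)
        suitability = sem_model.predict(depth_image, task)
        scores = fit * suitability[pixels[:, 0], pixels[:, 1]]
        order = np.argsort(-scores)[:top_k]
        return poses[order], preshapes[order], scores[order]

    if __name__ == "__main__":
        depth = np.zeros((480, 640))
        best_poses, best_preshapes, best_scores = task_oriented_grasps(
            depth, task="pour", geo_model=GeometricGraspModel(),
            sem_model=SemanticSuitabilityNet())
        print(best_poses.shape, best_scores)

The key design point illustrated here is the division of labor: the semantic network only has to output a per-pixel suitability map, while the explicit geometric model handles the structured output of 6D poses and preshapes.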

Main reference:

detry2017c 
R. Detry, J. Papon and L. Matthies, Task-oriented Grasping with Semantic and Geometric Scene Understanding. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017. (Best Paper Award in Cognitive Robotics).

Papers covering this topic:

bowkett2018a 
J. Bowkett, J. Burdick, L. Matthies and R. Detry, Semantic Understanding of Task Outcomes: Visually Identifying Failure Modes Autonomously Discovered in Simulation. In Representing a Complex World: Perception, Inference, and Learning for Joint Semantic, Geometric, and Physical Understanding (ICRA 2018 Workshop), 2018.
detry2017a 
R. Detry, J. Papon and L. Matthies, Learning to Grasp with a Deep Network for 2D Context and Geometric Prototypes for 3D Structure. In Learning and control for autonomous manipulation systems: the role of dimensionality reduction (ICRA 2017 Workshop), 2017.
detry2017b 
R. Detry, J. Papon and L. Matthies, Semantic and Geometric Scene Understanding for Task-oriented Grasping of Novel Objects from a Single View. In Learning and control for autonomous manipulation systems: the role of dimensionality reduction (ICRA 2017 Workshop), 2017.
detry2017d 
R. Detry, J. Papon and L. Matthies, Semantic and Geometric Scene Understanding for Single-view Task-oriented Grasping of Novel Objects. In Workshop on Spatial-Semantic Representations in Robotics (RSS 2017 Workshop), 2017.
zhang2017a 
M. Zhang, R. Detry, L. Matthies and K. Daniilidis, Tactile-Vision Integration for Task-Compatible Fine-Part Manipulation. In Revisiting Contact – Turning a problem into a solution (RSS 2017 Workshop), 2017.
zhang2018a 
M. Zhang, A. ten Pas, R. Detry and K. Daniilidis, Tactile-Visual Integration for Task-Aware Grasping. In RSS Pioneers (RSS 2018 Workshop), 2018.

Many of these publications are copyrighted by their respective publishers. Downloadable versions are not necessarily identical to the published versions. They are made available here for personal use only.


Page last modified: August 06, 2019