Grasp

Task-oriented Grasping: learning to use new tools and objects.

Period of Performance: 2011–2015
Role: PI
Additional Contributors: Publication authors (see References below).
Funded by the Belgian National Fund for Scientific Research (FRS-FNRS).

In this project, we address (1) the problem of grasping unknown objects, and then (2) the problem of grasping new objects while respecting the constraints imposed by a task.

Grasping New Objects

We present a real-world robotic agent that is capable of transferring grasping strategies across objects that share similar parts. The agent transfers grasps across objects by identifying, from examples provided by a teacher, parts by which objects are often grasped in a similar fashion. It then uses these parts to identify grasping points on novel objects. While prior work in this area focused primarily on shape analysis (parts identified, e.g., through visual clustering or salient structure analysis), the key aspect of this work is the emergence of parts from both object shape and grasp examples. As a result, parts intrinsically encode the intention of executing a grasp.
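As an illustration of the transfer step, here is a minimal sketch that assumes each learned prototype part is stored as a small point cloud with the grasp pose demonstrated on it; the data structures and the nearest-neighbor fitting score (`part_fit_score`, `transfer_grasp`) are simplifying assumptions, not the project's actual probabilistic models.

```python
# Hypothetical sketch: reuse the grasp attached to the prototype part that
# best fits the novel object's point cloud (crude nearest-neighbor score).
import numpy as np
from scipy.spatial import cKDTree

def part_fit_score(part_points, object_points):
    """Mean distance from the part's points to their nearest neighbors on
    the novel object; lower means the part fits the object better."""
    dists, _ = cKDTree(object_points).query(part_points)
    return dists.mean()

def transfer_grasp(prototype_parts, object_points):
    """Select the best-fitting prototype and return its associated grasp."""
    best = min(prototype_parts,
               key=lambda p: part_fit_score(p["points"], object_points))
    return best["grasp_pose"]  # pose recorded with the part during teaching

# Toy usage with random stand-in data.
rng = np.random.default_rng(0)
parts = [{"points": rng.normal(size=(50, 3)), "grasp_pose": np.eye(4)}
         for _ in range(5)]
grasp = transfer_grasp(parts, rng.normal(size=(500, 3)))
```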

We devise a similarity measure that reflects whether the shapes of two parts resemble each other, and whether their associated grasps are applied near one another. We discuss a nonlinear clustering procedure that allows groups of similar part-grasp associations to emerge from the space induced by the similarity measure. We present an experiment in which our agent extracts five prototypical parts from thirty-two grasp examples, and we demonstrate the applicability of the prototypical parts for grasping novel objects.

Video illustrating the part learning process.
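For concreteness, below is a minimal sketch of the part-learning step under simplifying assumptions: each teacher example is reduced to a (part point cloud, grasp position) pair, the similarity measure is a product of two Gaussian kernels (one on a crude shape distance, one on the distance between grasp positions), and off-the-shelf spectral clustering stands in for the nonlinear clustering procedure. The function names and kernel bandwidths are illustrative.

```python
# Hypothetical sketch of part learning: pairwise similarities between
# (part shape, grasp position) examples, then clustering into prototypes.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import SpectralClustering

def similarity(ex_a, ex_b, sigma_shape=0.02, sigma_grasp=0.05):
    """High when the two part shapes resemble each other AND their grasps
    are applied near one another (both factors are Gaussian kernels)."""
    pts_a, grasp_a = ex_a
    pts_b, grasp_b = ex_b
    shape_dist = cKDTree(pts_b).query(pts_a)[0].mean()  # crude shape distance
    grasp_dist = np.linalg.norm(np.asarray(grasp_a) - np.asarray(grasp_b))
    return (np.exp(-(shape_dist / sigma_shape) ** 2)
            * np.exp(-(grasp_dist / sigma_grasp) ** 2))

def learn_prototype_parts(examples, n_parts=5):
    """Group similar part-grasp associations; each cluster yields a prototype."""
    n = len(examples)
    S = np.array([[similarity(examples[i], examples[j]) for j in range(n)]
                  for i in range(n)])
    S = 0.5 * (S + S.T)  # symmetrize (the crude shape distance is not)
    labels = SpectralClustering(n_clusters=n_parts, affinity="precomputed",
                                random_state=0).fit_predict(S)
    return labels  # cluster assignment for each teacher example
```

Each cluster can then be summarized into a prototype part and matched against novel objects, as in the transfer sketch above.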

Task-oriented Grasping

We address the problem of generalizing manipulative actions across different tasks and objects. Our robotic agent acquires task-oriented skills from a teacher, and it abstracts skill parameters away from the specificity of the objects and tools used by the teacher. This process enables the transfer of skills to novel objects. Our method relies on the modularization of a task’s representation: through modularization, we associate each action parameter with a narrow visual modality, thereby facilitating transfer across different objects or tasks.

Transferring task parameters. This figure is organized around a simplified representation of the product space of tasks and objects. The robot is taught two task instances, namely pouring from a bottle and storing a carton in a fridge door. The robot is then asked to store the bottle in the fridge door, a task/object combination that it has not been taught. However, from its experience with the carton, the robot has learned that, in order to store an object, it needs to grasp the object near its top, to avoid colliding with the door while inserting the object. “Grasping near the top” is a task constraint that is potentially transferable to other objects, including a bottle. Yet the experience acquired with the carton will not allow the robot to adequately grasp the bottle, as the shapes of the carton and the bottle differ substantially. A set of finger placements compatible with the cylindrical shape of the bottle is instead derived from the first task instance taught to the robot: from its experience with the bottle, the robot has learned how to place its fingers around a cylindrical shape, and this placement is not specific to the action of pouring liquid. The robot eventually combines parts of the experience gained through two different task instances to plan a grasp for a previously unobserved task/object combination.
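The sketch below illustrates this compositional reuse with toy data structures; the task names, parameter names, and modality labels are illustrative assumptions, not the project's actual representation. Each demonstrated task instance is split into parameters tied to narrow visual modalities, and a plan for an unseen task/object pair is assembled by pulling each parameter from whichever demonstration matches along that parameter's modality.

```python
# Hypothetical sketch of modular task-parameter transfer (toy values).
demos = {
    ("pour", "bottle"): {
        "grasp_region":     {"modality": "object frame",    "value": "middle"},
        "finger_placement": {"modality": "local 3-D shape", "value": "cylinder wrap"},
    },
    ("store", "carton"): {
        "grasp_region":     {"modality": "object frame",    "value": "near top"},  # clears the fridge door
        "finger_placement": {"modality": "local 3-D shape", "value": "box pinch"},
    },
}

def plan(task, obj, shape_of):
    """Compose a plan for an unseen task/object pair: task-level constraints
    come from a demo of the same task, shape-dependent parameters from a
    demo on an object of matching shape."""
    same_task  = next(p for (t, _), p in demos.items() if t == task)
    same_shape = next(p for (_, o), p in demos.items()
                      if shape_of[o] == shape_of[obj])
    return {"grasp_region":     same_task["grasp_region"]["value"],
            "finger_placement": same_shape["finger_placement"]["value"]}

shapes = {"bottle": "cylinder", "carton": "box"}
print(plan("store", "bottle", shapes))
# -> {'grasp_region': 'near top', 'finger_placement': 'cylinder wrap'}
```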

References

  1. Invariant Feature Mappings for Generalizing Affordance Understanding Using Regularized Metric Learning.
    Martin Hjelm, Carl Henrik Ek, Renaud Detry, and Danica Kragic.
    arXiv preprint arXiv:1901.10673, 2019.
  2. One Shot Learning and Generation of Dexterous Grasps for Novel Objects.
    Marek Kopicki, Renaud Detry, Maxime Adjigble, Rustam Stolkin, Ales Leonardis, and Jeremy Wyatt.
    International Journal of Robotics Research, 2015.
  3. Learning Human Priors for Task-Constrained Grasping.
    Martin Hjelm, Carl Henrik Ek, Renaud Detry, and Danica Kragic.
    In International Conference on Computer Vision Systems, 2015.
  4. Representations for Cross-task, Cross-object Grasp Transfer.
    Martin Hjelm, Renaud Detry, Carl Henrik Ek, and Danica Kragic.
    In IEEE International Conference on Robotics and Automation, 2014.
  5. Sparse Summarization of Robotic Grasping Data.
    Martin Hjelm, Carl Henrik Ek, Renaud Detry, Hedvig Kjellström, and Danica Kragic.
    In IEEE International Conference on Robotics and Automation, 2013.
  6. Unsupervised Learning of Predictive Parts for Cross-object Grasp Transfer.
    Renaud Detry and Justus Piater.
    In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013 (finalist for the Best Cognitive Robotics Paper award).
  7. Learning a Dictionary of Prototypical Grasp-predicting Parts from Grasping Experience.
    Renaud Detry, Carl Henrik Ek, Marianna Madry, and Danica Kragic.
    In IEEE International Conference on Robotics and Automation, 2013.
  8. Generalizing Task Parameters Through Modularization.
    Renaud Detry, Martin Hjelm, Carl Henrik Ek, and Danica Kragic.
    In Autonomous Learning Workshop (workshop at ICRA 2013), 2013.
  9. Generalizing Grasps Across Partly Similar Objects.
    Renaud Detry, Carl Henrik Ek, Marianna Madry, Justus Piater, and Danica Kragic.
    In IEEE International Conference on Robotics and Automation, 2012.