Feature Hierarchies

Learning of multi-dimensional, multi-modal features for robotic grasping.

Period of Performance: 2006–2010
My role: Ph.D. student (Ph.D. scholarship)
Additional Contributors: Publication authors (see References below).
Funded by the Belgian National Fund for Scientific Research (FRS-FNRS).

This project addresses 3D object modeling and object grasping.

Hierarchical 3D Object Models

This work introduces a generative object model for 6D pose estimation in stereo views of cluttered scenes (Detry et al., 2009). We model an object as a hierarchy of increasingly expressive object parts, where parts represent the 3D geometry and appearance of object edges. At the bottom of the hierarchy, each part encodes the spatial distribution of short segments of object edges of a specific color. Higher-level parts are formed by recursively combining simpler parts, with the top-level part representing the whole object. The hierarchy is encoded in a Markov random field whose edges parametrize relative part configurations.
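
To make the representation concrete, here is a minimal Python sketch of the part hierarchy. The class names, fields, and the toy "pan" instance are hypothetical illustrations; the actual model stores nonparametric densities over edge-segment poses rather than raw sample lists.

```python
# Hypothetical sketch of the part hierarchy: a part is either a primitive
# encoding short edge segments of one color, or a composite that binds child
# parts through relative-pose parameters, as in a pairwise Markov random field.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class EdgePart:
    """Bottom-level part: spatial distribution of short, colored edge segments."""
    color: Tuple[float, float, float]           # RGB of the encoded edges
    segments: List[Tuple[float, float, float]]  # sample of 3D segment positions

@dataclass
class CompositePart:
    """Higher-level part: children plus the relative configurations that bind them."""
    children: List[object] = field(default_factory=list)
    # One relative 6D configuration (x, y, z plus an orientation) per child;
    # these parameters label the MRF edges between the composite and its children.
    relative_poses: List[Tuple[float, ...]] = field(default_factory=list)

# The top-level CompositePart stands for the whole object.
pan = CompositePart(
    children=[EdgePart(color=(0.8, 0.1, 0.1), segments=[(0.0, 0.0, 0.0)])],
    relative_poses=[(0.0, 0.0, 0.0, 0.0, 0.0, 0.0)],
)
```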

Pose inference is implemented with generic probabilistic and machine-learning techniques, including belief propagation, Monte Carlo integration, and kernel density estimation. The model is learned autonomously from a set of segmented views of an object. A 3D object model is a useful asset in the context of robotic grasping, as it allows a grasp model to be aligned to arbitrary object positions and orientations. Several aspects of this work are inspired by biology, which makes it a good building block for cognitive robotic platforms.
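
For intuition, the following sketch shows the kernel-density side of this machinery on position-only data. `gaussian_kde` stands in for the model's full 6D pose kernels (which also cover orientation), and the particle set is fabricated for the example.

```python
# Minimal sketch: extract a maximum-likelihood pose from a particle-based
# pose distribution via kernel density estimation (positions only).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical evidence: particles accumulated by belief propagation and
# Monte Carlo integration, faked here as a noisy cluster around the object.
particles = rng.normal(loc=[0.2, -0.1, 0.5], scale=0.02, size=(500, 3))

density = gaussian_kde(particles.T)     # kernel density estimate over poses
scores = density(particles.T)           # density value at each particle
ml_pose = particles[np.argmax(scores)]  # largest mode: maximum-likelihood pose
print("maximum-likelihood position:", ml_pose)
```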

Pose estimation. The right image shows the maximum-likelihood pose of the toy pan, extracted from the largest mode of the pose distribution computed over the scene shown on the left.

Grasp Densities

Here, we study means of modeling and learning object grasp affordances, i.e., relative object-gripper poses that lead to stable grasps. Affordances are represented probabilistically with grasp densities (Detry et al., 2011): continuous probability density functions defined on the space of 6D gripper poses (3D position and 3D orientation).

Projection of a 6D grasp density onto a 2D image. Grasp-success likelihood is proportional to the intensity of the green mask.
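
For concreteness, a particle-based grasp density can be sketched as follows. The pose layout (position plus unit quaternion), the weights, and `sample_grasp` are hypothetical simplifications of the kernel-density representation used in the papers.

```python
# Toy sketch of a grasp density carried by weighted 6D pose particles,
# smoothed by a position kernel when a grasp is drawn from it.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical grasp records: (x, y, z, qw, qx, qy, qz) per row.
poses = np.array([
    [0.10, 0.00, 0.15, 1.0, 0.0, 0.0, 0.0],
    [0.12, 0.01, 0.14, 0.9, 0.1, 0.0, 0.0],
])
weights = np.array([0.7, 0.3])

def sample_grasp(poses, weights, pos_sigma=0.005):
    """Draw one gripper pose: pick a particle by weight, jitter its position."""
    i = rng.choice(len(poses), p=weights / weights.sum())
    pose = poses[i].copy()
    pose[:3] += rng.normal(scale=pos_sigma, size=3)  # position kernel
    pose[3:] /= np.linalg.norm(pose[3:])             # keep quaternion unit-norm
    return pose

print(sample_grasp(poses, weights))
```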

Grasp densities are linked to visual stimuli through registration with a visual model of the object they characterize, which allows the robot to grasp objects lying in arbitrary poses: to grasp an object, the robot visually aligns the object model to the observed pose, then combines the aligned grasp density with reaching constraints to select the maximum-likelihood achievable grasp. Grasp densities are learned and refined through exploration: grasps sampled randomly from a density are executed, and an importance-sampling algorithm learns a refined density from the outcomes of these experiences. Initial grasp densities are computed from the visual model of the object.
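
The refinement loop can be summarized with the toy sketch below. Here `execute` is a stand-in for running a grasp on the physical robot, and the unit outcome weights gloss over the proposal-density correction that a full importance-sampling estimator would apply.

```python
# Toy sketch of learning a refined grasp density from grasp outcomes.
import numpy as np

rng = np.random.default_rng(2)

def refine(poses, weights, execute, n_trials=50, pos_sigma=0.005):
    """Sample grasps from the current density; keep successes as new particles."""
    new_poses, new_weights = [], []
    for _ in range(n_trials):
        i = rng.choice(len(poses), p=weights / weights.sum())
        grasp = poses[i] + rng.normal(scale=pos_sigma, size=poses.shape[1])
        if execute(grasp):           # True if the physical grasp succeeded
            new_poses.append(grasp)
            new_weights.append(1.0)  # stand-in for the true importance weight
    return np.array(new_poses), np.array(new_weights)

# Toy stand-in: grasps succeed when close to a hidden "good" grasp point.
good = np.array([0.10, 0.00, 0.15])
execute = lambda g: np.linalg.norm(g - good) < 0.02
poses0 = rng.normal(good, 0.03, size=(20, 3))       # initial (visual) density
poses1, w1 = refine(poses0, np.ones(20), execute)   # refined density
```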

We demonstrated that grasp densities can be learned autonomously from experience. Our experiment showed that, through learning, the robot becomes increasingly efficient at inferring grasp parameters from visual evidence. The experiment also yielded conclusive results in practical scenarios where the robot must repeatedly grasp an object lying in an arbitrary pose, with each pose imposing a specific reaching constraint and thus forcing the robot to exploit the entire grasp density to select the most promising achievable grasp. This work led to publications in the fields of robotics (Detry et al., 2010; Detry et al., 2010; Detry et al., 2011) and developmental learning (Detry et al., 2009).


References

  1. What a Successful Grasp Tells About the Success Chances of Grasps in Its Vicinity.
    Leon Bodenhagen, Renaud Detry, Justus Piater, and Norbert Krüger.
    In IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), 2011.
  2. Learning Grasp Affordance Densities.
    Renaud Detry, Dirk Kraft, Oliver Kroemer, Leon Bodenhagen, Jan Peters, Norbert Krüger, and Justus Piater.
    Paladyn, Journal of Behavioral Robotics, 2011.
  3. Development of Object and Grasping Knowledge by Robot Exploration.
    Dirk Kraft, Renaud Detry, Nicolas Pugeault, Emre Başeski, Frank Guerin, Justus Piater, and Norbert Krüger.
    IEEE Transactions on Autonomous Mental Development, 2010.
  4. Learning of Multi-Dimensional, Multi-Modal Features for Robotic Grasping.
    Renaud Detry.
    University of Liège, 2010.
  5. Continuous Surface-point Distributions for 3D Object Pose Estimation and Recognition.
    Renaud Detry, and Justus Piater.
    In Asian Conference on Computer Vision, 2010.
  6. Refining Grasp Affordance Models by Experience.
    Renaud Detry, Dirk Kraft, Anders Glent Buch, Norbert Krüger, and Justus Piater.
    In IEEE International Conference on Robotics and Automation, 2010.
  7. Learning Continuous Grasp Affordances by Sensorimotor Exploration.
    Renaud Detry, Emre Başeski, Mila Popović, Younes Touati, Norbert Krüger, Oliver Kroemer, Jan Peters, and Justus Piater.
    In From Motor Learning to Interaction Learning in Robots, 2010.
  8. Learning Objects and Grasp Affordances through Autonomous Exploration.
    Dirk Kraft, Renaud Detry, Nicolas Pugeault, Emre Başeski, Justus Piater, and Norbert Krüger.
    In International Conference on Computer Vision Systems, 2009.
  9. Learning Object-specific Grasp Affordance Densities.
    Renaud Detry, Emre Başeski, Norbert Krüger, Mila Popović, Younes Touati, Oliver Kroemer, Jan Peters, and Justus Piater.
    In IEEE International Conference on Development and Learning, 2009.
  10. Autonomous Learning of Object-specific Grasp Affordance Densities.
    Renaud Detry, Emre Başeski, Norbert Krüger, Mila Popović, Younes Touati, and Justus Piater.
    In Approaches to Sensorimotor Learning on Humanoid Robots (Workshop at the IEEE International Conference on Robotics and Automation), 2009.
  11. A Probabilistic Framework for 3D Visual Object Representation.
    Renaud Detry, Nicolas Pugeault, and Justus Piater.
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009.
  12. Learning Visual Representations for Interactive Systems.
    Justus Piater, Sébastien Jodogne, Renaud Detry, Dirk Kraft, Norbert Krüger, Oliver Kroemer, and Jan Peters.
    In International Symposium on Robotics Research, 2009.
  13. 3D Probabilistic Representations for Vision and Action.
    Justus Piater, and Renaud Detry.
    In Robotics Challenges for Machine Learning II (Workshop at the IEEE/RSJ International Conference on Intelligent Robots and Systems), 2008.
  14. Vision as Inference in a Hierarchical Markov Network.
    Justus Piater, Fabien Scalzo, and Renaud Detry.
    In International Conference on Cognitive and Neural Systems, 2008.
  15. Probabilistic Pose Recovery Using Learned Hierarchical Object Models.
    Renaud Detry, Nicolas Pugeault, and Justus Piater.
    In International Cognitive Vision Workshop (Workshop at the 6th International Conference on Computer Vision Systems), 2008.
  16. Hierarchical Integration of Local 3D Features for Probabilistic Pose Recovery.
    Renaud Detry, and Justus Piater.
    In Robot Manipulation: Sensing and Adapting to the Real World (Workshop at Robotics: Science and Systems), 2007.