Plexegen

Grasping novel objects with tactile-based grasp stability estimates.

Period of Performance: 2012–2016
My role: PI
Additional Contributors: Publication authors (see References below).
Funded by the Swedish Research Council (VR).

To grasp an object, an agent typically first devises a grasping plan from visual data, then executes this plan, and finally assesses the success of its action. Planning relies on (1) the extraction of object information from vision, and (2) the recovery of memories related to the current visual context, such as previous attempts to grasp a similar object. Because of the uncertainty inherent in these two processes, it is difficult to design grasp plans that are guaranteed to work in open loop. Grasp execution therefore benefits greatly from a closed-loop controller that considers sensory feedback before and while issuing motor commands.

In this project, we study means of monitoring the execution of a grasp plan using vision and touch. By pointing a camera at the robot's workspace, we can track the 6D pose of visible objects in real time. Touch data are captured by sensors placed on the robot's fingers. These two modalities are complementary: during a grasp, the object is partly occluded by the hand and visual cues become uncertain, while tactile readings remain informative. Monitoring the execution of a grasp allows the agent to abort grasps that are unlikely to succeed, thus preventing potential damage to the objects or the robot.
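The monitoring logic can be summarized as a simple decision loop: read the tracked object pose and the tactile imprint once the hand is closed, estimate the probability that the grasp will hold, and abort before lifting if that probability is too low. Below is a minimal Python sketch of this loop; the perception functions, the classifier, and the threshold are hypothetical placeholders standing in for the real tracker, sensor drivers, and learned model.

```python
import numpy as np

# Hypothetical stand-ins for the real perception pipeline: on the actual
# platform these would query the camera-based 6D pose tracker and the
# tactile arrays on the gripper's fingers.
def read_object_pose():
    # Placeholder: object position (x, y, z) and orientation quaternion.
    return np.array([0.4, 0.0, 0.05, 0.0, 0.0, 0.0, 1.0])

def read_tactile_imprint():
    # Placeholder: pressure readings from the finger arrays, flattened.
    return np.random.rand(252)

def estimate_stability(pose, tactile):
    # Placeholder for the learned grasp-stability classifier (see below);
    # returns the estimated probability that the grasp survives lifting.
    return 0.5

def monitor_grasp(threshold=0.8):
    """Decide, once the hand is closed, whether to proceed to lifting."""
    pose = read_object_pose()
    tactile = read_tactile_imprint()
    p_success = estimate_stability(pose, tactile)
    if p_success < threshold:
        print(f"Grasp looks unstable (p = {p_success:.2f}): aborting.")
        return False
    print(f"Grasp looks stable (p = {p_success:.2f}): lifting.")
    return True

if __name__ == "__main__":
    monitor_grasp()
```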

Our robot platform is composed of an industrial arm, a three-finger gripper equipped with tactile sensing arrays, and a camera.

We aim to establish the likelihood of success of a grasp before attempting to lift an object. Our agent learns and memorizes what it feels like to grasp objects from various sides. Tactile data are recorded once the hand is fully closed around the object. Because the object often moves while the hand closes, we track the object pose throughout the grasp and record it once the hand is fully closed. The robot then lifts the object and turns it upside down. If the object stays rigidly bound to the hand during this movement, the grasp is considered successful. During training, the agent encounters both successful and unsuccessful grasps, which provide it with input-output pairs: tactile imprints and relative object-gripper configurations as input, and success/failure labels as output. These data are used to train a classifier, which is subsequently used to decide whether a grasp feels stable enough to proceed with lifting the object.
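A minimal sketch of this training setup is given below, using synthetic placeholder data and a generic kernel classifier from scikit-learn. The array sizes, feature encoding, and model choice are illustrative assumptions; they do not reproduce the exact models from the publications listed below.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder dataset. In the project, each sample pairs a tactile imprint
# (recorded once the hand is fully closed) with the relative object-gripper
# pose tracked at that moment, and a binary label from the lift-and-rotate
# test (1 = object stayed rigidly bound to the hand, 0 = it did not).
rng = np.random.default_rng(0)
n_grasps = 200
tactile = rng.random((n_grasps, 252))     # assumed imprint dimensionality
pose = rng.random((n_grasps, 7))          # position + orientation quaternion
labels = rng.integers(0, 2, n_grasps)

features = np.hstack([tactile, pose])

# Generic probabilistic classifier standing in for the project's models.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
scores = cross_val_score(clf, features, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")

# At execution time, the trained classifier is queried before lifting.
clf.fit(features, labels)
p_success = clf.predict_proba(features[:1])[0, 1]
print(f"estimated probability of a stable grasp: {p_success:.2f}")
```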

Our experiments demonstrate that joint tactile- and pose-based perceptions carry valuable grasp-related information: models trained on both hand poses and tactile readings perform better than models trained on either perceptual input alone.
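The comparison behind this finding can be sketched as a simple feature ablation: train the same classifier on tactile features only, pose features only, and their concatenation, and compare cross-validated accuracy. The snippet below illustrates the procedure on synthetic placeholder data (with the real grasp data, the joint feature set is the one that scored highest); it is an assumption-laden sketch, not the project's evaluation code.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic placeholder data with the same shapes as in the previous sketch.
rng = np.random.default_rng(0)
n_grasps = 200
tactile = rng.random((n_grasps, 252))
pose = rng.random((n_grasps, 7))
labels = rng.integers(0, 2, n_grasps)

feature_sets = {
    "tactile only": tactile,
    "pose only": pose,
    "tactile + pose": np.hstack([tactile, pose]),
}

# Same classifier and protocol for each feature set, so any difference in
# accuracy reflects the information carried by the features themselves.
for name, X in feature_sets.items():
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name:>15}: {acc:.2f}")
```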

Video illustrating pose- and touch-based grasp stability estimation.

References

  1. Estimating tactile data for adaptive grasping of novel objects.
    Emil Hyttinen, Danica Kragic, and Renaud Detry.
    In IEEE/RAS International Conference on Humanoid Robots, 2017.
  2. Probabilistic consolidation of grasp experience.
    Yasemin Bekiroglu, Andreas Damianou, Renaud Detry, Johannes A Stork, Danica Kragic, and Carl Henrik Ek.
    In IEEE International Conference on Robotics and Automation, 2016.
  3. Learning the Tactile Signatures of Prototypical Object Parts for Robust Part-based Grasping of Novel Objects.
    Emil Hyttinen, Danica Kragic, and Renaud Detry.
    In IEEE International Conference on Robotics and Automation, 2015.
  4. Grasp Stability from Vision and Touch.
    Yasemin Bekiroglu, Renaud Detry, and Danica Kragic.
    In Advances in Tactile Sensing and Touch-based Human Robot Interaction (Workshop at IROS 2012), 2012.
  5. Joint Observation of Object Pose and Tactile Imprints for Online Grasp Stability Assessment.
    Yasemin Bekiroglu, Renaud Detry, and Danica Kragic.
    In Manipulation Under Uncertainty (Workshop at IEEE ICRA 2011), 2011.
  6. Learning Tactile Characterizations of Object- and Pose-specific Grasps.
    Yasemin Bekiroglu, Renaud Detry, and Danica Kragic.
    In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011.