Viewpoint optimization for aiding grasp synthesis algorithms using reinforcement learning

B. Calli*, W. Caarls, M. Wisse, P. Jonker

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

10 Citations (Scopus)
89 Downloads (Pure)

Abstract

Grasp synthesis for unknown objects is a challenging problem, as the algorithms are expected to cope with missing object shape information. This missing information is a function of the vision sensor's viewpoint. The majority of grasp synthesis algorithms in the literature synthesize a grasp using a single image of the target object and make assumptions about the missing shape information. In contrast, this paper proposes using the robot's depth sensor actively: we present an active vision methodology that optimizes the viewpoint of the sensor to increase the quality of the synthesized grasp over time. In this way, we aim to relax the assumptions on the sensor's viewpoint and boost the success rates of grasp synthesis algorithms. A reinforcement learning technique is employed to obtain a viewpoint optimization policy, and a training process and an automated training data generation procedure are presented. The methodology is applied to a simple force-moment balance-based grasp synthesis algorithm, and a thousand simulations are conducted with five objects in random initial poses for which the grasp synthesis algorithm could not obtain a good grasp from the initial viewpoint. In 94% of these cases, the policy succeeded in finding a successful grasp.
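To illustrate the core idea in the abstract, the following is a minimal, hypothetical sketch of learning a viewpoint optimization policy with tabular Q-learning. Everything here is an assumption for illustration, not the paper's method or simulator: viewpoints are discretized to 8 azimuths on a circle, the "true" best viewpoint is hidden, grasp quality is a toy function of angular distance to it, and the reward is the improvement in grasp quality after each camera move.

```python
import random

# Hypothetical toy environment (NOT the paper's setup): 8 discrete camera
# azimuths on a circle; grasp quality decays with angular distance to a
# hidden best viewpoint where the object's shape is fully visible.
AZIMUTHS = list(range(8))
ACTIONS = [-1, 0, +1]   # rotate camera left, stay, rotate right
BEST = 5                # hidden best viewpoint (assumed for the toy)

def quality(az):
    # Grasp quality in [0, 1], maximal at the hidden best viewpoint.
    d = min(abs(az - BEST), len(AZIMUTHS) - abs(az - BEST))
    return 1.0 - d / 4.0

def step(az, a):
    # Reward is the change in grasp quality caused by the camera move.
    nxt = (az + a) % len(AZIMUTHS)
    return nxt, quality(nxt) - quality(az)

# Tabular Q-learning over (viewpoint, action) pairs.
Q = {(s, a): 0.0 for s in AZIMUTHS for a in ACTIONS}
alpha, gamma, eps = 0.2, 0.9, 0.1
random.seed(0)

for episode in range(500):
    s = random.choice(AZIMUTHS)
    for _ in range(10):
        if random.random() < eps:                      # explore
            a = random.choice(ACTIONS)
        else:                                          # exploit
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# Greedy rollout from a poor starting viewpoint: the learned policy
# should move the camera toward higher grasp quality.
s = (BEST + 4) % len(AZIMUTHS)
for _ in range(6):
    s, _ = step(s, max(ACTIONS, key=lambda a: Q[(s, a)]))
print("final viewpoint:", s, "quality:", round(quality(s), 2))
```

The paper's actual state space, actions, and reward are defined over depth-sensor observations and the quality score of the underlying grasp synthesis algorithm; this sketch only conveys the structure of the learning loop.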

Original language: English
Pages (from-to): 1077-1089
Journal: Advanced Robotics
Volume: 32
Issue number: 20
DOIs
Publication status: Published - 2018

Bibliographical note

Accepted Author Manuscript

Keywords

  • Active vision
  • reinforcement learning
  • robotic grasping
  • viewpoint optimization
