Fast robust peg-in-hole insertion with continuous visual servoing

Rasmus Laurvig Haugaard*, Jeppe Langaa, Christoffer Sloth, Anders Glent Buch

*Corresponding author

Publication: Conference contribution without a publisher/journal · Paper · Research · peer review

Abstract

This paper demonstrates a visual servoing method which is robust to uncertainties related to system calibration and grasping, while significantly reducing the peg-in-hole time compared to classical methods and recent attempts based on deep learning. The proposed visual servoing method is based on peg and hole point estimates from a deep neural network in a multi-cam setup, where the model is trained on purely synthetic data. Empirical results show that the learnt model generalizes to the real world, allowing for higher success rates and lower cycle times than existing approaches.
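
The record contains no code; the following is a minimal sketch of how one iteration of such a multi-camera, point-based servoing loop could look. It assumes a hypothetical keypoint network `point_net` returning 2D peg and hole points per camera, calibrated projection matrices, and a `robot` interface accepting Cartesian velocity commands; none of these names come from the paper. The 3D points are recovered by standard linear (DLT) triangulation and the peg is driven toward the hole with a simple proportional controller.

```python
import numpy as np

def triangulate(points_2d, projection_matrices):
    """Linear (DLT) triangulation of one 3D point from 2D estimates in several cameras."""
    A = []
    for (u, v), P in zip(points_2d, projection_matrices):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(A))
    X = vt[-1]                      # homogeneous solution
    return X[:3] / X[3]

def servo_step(frames, projection_matrices, point_net, robot, gain=1.0):
    """One iteration of a position-based visual servoing loop (sketch, not the paper's code)."""
    peg_px, hole_px = [], []
    for frame in frames:
        peg_uv, hole_uv = point_net(frame)   # hypothetical: 2D peg / hole point estimates
        peg_px.append(peg_uv)
        hole_px.append(hole_uv)
    peg_3d = triangulate(peg_px, projection_matrices)
    hole_3d = triangulate(hole_px, projection_matrices)
    error = hole_3d - peg_3d                 # residual peg-to-hole alignment error
    robot.set_cartesian_velocity(gain * error)  # hypothetical proportional velocity command
    return np.linalg.norm(error)
```

In a continuous servoing setting this step would be repeated at the camera frame rate until the residual error falls below an insertion threshold.
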
Original language: English
Publication date: 2020
Number of pages: 10
Status: Published - 2020
Event: 4th Conference on Robot Learning (CoRL 2020) - Cambridge, USA
Duration: 16 Nov 2020 – 18 Nov 2020

Conference

Conference: 4th Conference on Robot Learning (CoRL 2020)
Country/Territory: USA
City: Cambridge
Period: 16/11/2020 – 18/11/2020
