Synthetic Ground Truth for Presegmentation of Known Objects for Effortless Pose Estimation

Publication: Chapter in book/report/conference proceeding › Conference contribution in proceedings › Research › peer review


Abstract

We present a method for generating synthetic ground truth for training segmentation networks that presegment point clouds in pose estimation problems. Our method replaces global pose estimation algorithms such as RANSAC, which require manual fine-tuning, with a robust CNN, without having to hand-label segmentation masks for the given object. The data is generated by blending cropped images of the objects with arbitrary backgrounds. We test the method in two scenarios and show that networks trained on the generated data segment the objects with high accuracy, allowing them to be used in a pose estimation pipeline.
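
The data generation step described in the abstract can be illustrated with a minimal sketch: an object crop with an alpha matte is pasted onto an arbitrary background, and the segmentation mask is derived directly from the pasted alpha channel. The file layout, image formats, and placement strategy below are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch of blending cropped object images with arbitrary backgrounds
# to produce (image, segmentation mask) training pairs. Paths and the RGBA
# crop convention are assumptions made for this example only.
import random
from pathlib import Path

from PIL import Image


def compose_sample(object_crop_path, background_path, out_size=(640, 480)):
    """Blend one object crop onto one background; return composite and mask."""
    background = Image.open(background_path).convert("RGB").resize(out_size)
    crop = Image.open(object_crop_path).convert("RGBA")  # alpha = object cut-out

    # Place the crop at a random position as a simple form of augmentation.
    max_x = max(out_size[0] - crop.width, 0)
    max_y = max(out_size[1] - crop.height, 0)
    offset = (random.randint(0, max_x), random.randint(0, max_y))

    composite = background.copy()
    composite.paste(crop, offset, mask=crop)  # alpha-blend object onto background

    # The ground-truth mask is simply the pasted alpha channel.
    mask = Image.new("L", out_size, 0)
    mask.paste(crop.getchannel("A"), offset)
    return composite, mask


if __name__ == "__main__":
    # Hypothetical directory layout used only for this example.
    crops = list(Path("object_crops").glob("*.png"))
    backgrounds = list(Path("backgrounds").glob("*.jpg"))
    image, mask = compose_sample(random.choice(crops), random.choice(backgrounds))
    image.save("sample_image.png")
    mask.save("sample_mask.png")
```

Repeating this composition with varied backgrounds, positions, and crops would yield labeled segmentation data without any manual annotation, which is the effort the paper aims to remove.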
Original language: English
Title: Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP
Editors: Giovanni Maria Farinella, Petia Radeva, Jose Braz
Volume: 4
Publisher: SCITEPRESS Digital Library
Publication date: 2020
Pages: 482-489
ISBN (Electronic): 978-989-758-402-2
DOI
Status: Published - 2020
Event: 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Valletta, Malta
Duration: 27 Feb 2020 - 29 Feb 2020
Conference number: 15

Conference

Conference: 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Number: 15
Country/Territory: Malta
City: Valletta
Period: 27/02/2020 - 29/02/2020
Name: VISIGRAPP 2020 - Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Volume: 4
