Abstract
It is important for a social robot to know whether a nearby human is interested in interacting with it. We approximate this interest with expressed visual interest. To detect it, we train several classifiers on previously labeled data. The input features are facial cues such as head orientation, eye gaze, and facial action units, provided by the OpenFace library. As training data, we use video footage collected during an in-the-wild human-robot interaction scenario in which a social robot approached people in a cafeteria to serve them water. The most successful classifier reached 94% accuracy for detecting interest on an unrelated test dataset. This gives our social robot an effective tool: it starts talking to people only when it is fairly certain that the addressed persons are interested in talking to it.
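The abstract does not name the tooling beyond OpenFace, but the described pipeline (per-frame OpenFace feature extraction followed by supervised classification) can be sketched compactly. The example below is a minimal, hypothetical reconstruction: it assumes scikit-learn as the learning library, OpenFace's standard per-frame CSV output (columns such as `gaze_angle_x`, `pose_Rx`, `AU06_r`), a random-forest classifier, and a made-up `interested` label column added during annotation; none of these choices are confirmed by the paper.

```python
# Hedged sketch of the pipeline described in the abstract: train a
# classifier on OpenFace facial features to detect expressed visual
# interest. File names and the "interested" label are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# OpenFace writes one CSV row per video frame; these column prefixes
# cover the cues named in the abstract: eye gaze (gaze_angle_*),
# head orientation (pose_R*), and facial action units (AU*).
FEATURE_PREFIXES = ("gaze_angle", "pose_R", "AU")

def load_features(csv_path):
    df = pd.read_csv(csv_path)
    df.columns = df.columns.str.strip()  # OpenFace pads header names with spaces
    feature_cols = [c for c in df.columns if c.startswith(FEATURE_PREFIXES)]
    # "interested" is a hypothetical per-frame label from manual annotation.
    return df[feature_cols], df["interested"]

X_train, y_train = load_features("train_frames.csv")  # labeled training footage
X_test, y_test = load_features("test_frames.csv")     # unrelated test dataset

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"Test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2%}")
```

In deployment, the robot would run the same feature extraction on live OpenFace output and, for instance, gate conversation on `clf.predict_proba` exceeding a threshold, matching the "only when it is fairly certain" behaviour the abstract describes.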
Original language | English
---|---
Title | Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - HUCAPP
Editors | Alexis Paljic, Tabitha Peck, Jose Braz, Kadi Bouatouch
Volume | 2
Publisher | SCITEPRESS Digital Library
Publication date | 2021
Pages | 198-204
ISBN (electronic) | 9789897584886
DOI |
Status | Published - 2021
Event | 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2021 - Virtual, Online. Duration: 8 Feb 2021 → 10 Feb 2021
Conference

Conference | 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2021
---|---
City | Virtual, Online
Period | 08/02/2021 → 10/02/2021
Sponsor | Institute for Systems and Technologies of Information, Control and Communication (INSTICC)
Name | IVAPP
---|---
Volume | 2
ISSN | 2184-4321
Bibliographical note
Publisher Copyright: Copyright © 2021 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved.