This project aims to use computational intelligence techniques to reliably learn natural, adaptive human pointing and gestures for controlling an interface on a pseudo-3D display. Highly complex data with many interconnections between elements is hard to visualise on screens, and most current tools are operated by point/click/drag on 2D displays. The physical technology to capture the relevant human behaviours already exists; what is missing is the adaptive learning of the syntax and semantics of individual gestures and actions, and the multi-gesture information fusion required for understanding them. Such capabilities could significantly improve efficiency, for example when sorting through named entities in an investigation. All of this is done naturally by most human beings, using biological neural networks.
Effective start/end date: 10/06/2015 → 10/06/2020