Research

CORSMAL

Principal investigator: Andrea Cavallaro
Co-investigator(s): Kaspar Althoefer
Funding source(s): EPSRC
Start: 11-02-2019 / End: 31-12-2022
[Image: robot passing a flexible cup to a human]

CORSMAL proposes to develop and validate a new framework for collaborative recognition and manipulation of objects via cooperation with humans. The project will explore the fusion of multiple sensing modalities (touch, sound, and first/third-person vision) to accurately and robustly estimate the physical properties of objects in noisy and potentially ambiguous environments.

The framework will mimic the human capability of learning and adapting across different manipulators, tasks, sensing configurations and environments. In particular, we will address the problems of (1) learning shared-autonomy models via observations of, and interactions with, humans, and (2) generalising capabilities across tasks and sites by aggregating data and abstracting models, to enable accurate recognition and manipulation of unknown objects in unknown environments.

The focus of CORSMAL is to define learning architectures for multimodal sensory data as well as for data aggregated from different environments. A key aim of the project is to identify the most suitable framework for learning across environments and the optimal trade-off between specialised local models and generalised global models, so as to continually improve the adaptability and robustness of the models.

The robustness of the proposed framework will be evaluated with prototype implementations in different environments. Importantly, during the project we will organise two community challenges to encourage data sharing and to support reproducibility of the experiments at additional sites.
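To make the multimodal fusion idea concrete, the following is a minimal, illustrative sketch (in PyTorch) of late fusion of touch, sound and vision features to regress physical properties of a handled object, such as its mass and filling level. This is not the project's architecture: the module names, feature dimensions and the choice of two output properties are all assumptions made for illustration.

    # A minimal sketch, NOT the CORSMAL architecture: late fusion of touch,
    # sound and vision features to regress physical properties of an object.
    # All dimensions and names below are illustrative assumptions.
    import torch
    import torch.nn as nn

    class MultimodalPropertyEstimator(nn.Module):
        def __init__(self, touch_dim=32, sound_dim=128, vision_dim=512, n_props=2):
            super().__init__()
            # One encoder per modality maps raw features to a shared embedding size.
            self.touch_enc = nn.Sequential(nn.Linear(touch_dim, 64), nn.ReLU())
            self.sound_enc = nn.Sequential(nn.Linear(sound_dim, 64), nn.ReLU())
            self.vision_enc = nn.Sequential(nn.Linear(vision_dim, 64), nn.ReLU())
            # Fusion head: concatenated embeddings -> property estimates.
            self.head = nn.Sequential(nn.Linear(3 * 64, 64), nn.ReLU(),
                                      nn.Linear(64, n_props))

        def forward(self, touch, sound, vision):
            z = torch.cat([self.touch_enc(touch),
                           self.sound_enc(sound),
                           self.vision_enc(vision)], dim=-1)
            return self.head(z)

    # Usage with random stand-in data for a batch of 4 observations.
    model = MultimodalPropertyEstimator()
    props = model(torch.randn(4, 32), torch.randn(4, 128), torch.randn(4, 512))
    print(props.shape)  # torch.Size([4, 2]), e.g. (mass, filling level)

Late fusion by simple concatenation is only one design option; the project's stated goal of robustness in noisy and ambiguous environments would, in practice, favour fusion schemes that weight each modality by its estimated reliability.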
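The trade-off between specialised local models and generalised global models can likewise be illustrated with a small sketch: below, predictions from a per-site (local) regressor and a pooled (global) one are blended with a weight chosen on held-out data from the target site. The ridge estimator, the convex blending scheme and all data shapes are illustrative assumptions, not the project's method.

    # A minimal sketch, NOT the CORSMAL method: blending a specialised local
    # model with a generalised global model, with the blend weight chosen on
    # held-out data from the target site. All data here is random stand-in.
    import numpy as np

    rng = np.random.default_rng(0)

    def fit_ridge(X, y, lam=1e-2):
        """Ridge regression weights; stands in for any local/global estimator."""
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

    # Global model: data pooled across many sites. Local model: one site only.
    X_global, y_global = rng.normal(size=(500, 8)), rng.normal(size=500)
    X_local, y_local = rng.normal(size=(40, 8)), rng.normal(size=40)
    X_val, y_val = rng.normal(size=(20, 8)), rng.normal(size=20)

    w_global = fit_ridge(X_global, y_global)
    w_local = fit_ridge(X_local, y_local)

    # Pick the convex blend weight that minimises validation error at this site:
    # alpha = 1 trusts only the local model, alpha = 0 only the global one.
    alphas = np.linspace(0.0, 1.0, 21)
    errs = [np.mean((a * X_val @ w_local
                     + (1 - a) * X_val @ w_global - y_val) ** 2)
            for a in alphas]
    best = alphas[int(np.argmin(errs))]
    print(f"best local/global blend alpha = {best:.2f}")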