Embodied VR environment facilitates motor imagery brain–computer interface training

Motor imagery (MI) is the predominant control paradigm for brain–computer interfaces (BCIs). After sufficient training, the accuracy of commands mediated by mental imagery of bodily movements reaches a satisfactory level. However, many issues with MI-BCIs persist, e.g., a low bit transfer rate, BCI illiteracy, and a sub-optimal training procedure. The training process in particular requires improvement: in its current form it places high mental and temporal demands on users (weeks of training are required to gain control). This study aims to address these issues with MI-BCI training. To support the learning process, an embodied training environment was created. Participants were placed in a virtual reality environment observed from the first-person view of a human-like avatar, and their rehearsal of MI actions was reflected by corresponding movements of the avatar. Extending the senses of ownership, agency, and self-location towards a non-body object (principles known from the rubber hand illusion) has already been shown to help produce stronger EEG correlates of MI. Here, these principles were used to facilitate the MI-BCI training process for the first time. The performance of 30 healthy participants after two sessions of training was measured using an online BCI scenario. The group trained in our embodied VR environment achieved significantly higher accuracy for BCI actions (58.3%) than the control group trained with a standard MI-BCI training protocol (52.9%).

doi:10.1016/j.cag.2018.05.024