Several methods have been proposed for generating high-quality images with Stable Diffusion directly from electroencephalogram (EEG) signals recorded with medical-grade instruments. However, gathering EEG data with such instruments involves time-consuming and complex procedures.
In this work, we explore a novel approach for generating and editing facial images using portable EEG devices instead of medical-grade ones. We establish a paradigm for collecting EEG-facial attribute pairs. Using deep neural networks, the approach infers facial attribute scores from EEG signals and uses the predicted scores to edit faces with a GAN.
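As a minimal sketch of the score-to-edit step (the paper's actual decoder and GAN architecture are not specified here; the function and attribute names below are hypothetical), EEG-predicted attribute scores can be used to shift a GAN latent code along per-attribute directions in latent space:

```python
import numpy as np

def edit_latent(latent, attribute_directions, scores, strength=1.0):
    """Shift a GAN latent code along per-attribute directions.

    Hypothetical sketch: `attribute_directions` maps attribute names to
    direction vectors in latent space (e.g. fitted attribute boundaries),
    and `scores` are the EEG-predicted attribute scores.
    """
    edited = latent.copy()
    for name, direction in attribute_directions.items():
        # Each attribute contributes a shift proportional to its score.
        edited += strength * scores.get(name, 0.0) * direction
    return edited

# Toy example with a 4-D latent space (real GANs use e.g. 512-D codes).
rng = np.random.default_rng(0)
latent = rng.normal(size=4)
directions = {
    "smile": np.array([1.0, 0.0, 0.0, 0.0]),
    "age":   np.array([0.0, 1.0, 0.0, 0.0]),
}
scores = {"smile": 0.8, "age": -0.3}  # assumed output of the EEG decoder
edited = edit_latent(latent, directions, scores)
```

The edited latent code would then be passed through the GAN generator to synthesize the modified face.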
The real-time facial-editing demonstration and the quantitative results represent a promising step toward wearable and affordable 'thought-to-face editing', with potential applications in VR products.
For more information about the device, please see the official website.