The advancement of AI has paved the way for visual reconstruction from EEG. Recent studies have demonstrated the feasibility of reconstructing images from EEG recorded in designed experiments. Nevertheless, even though breakthroughs in AI began with imitating the human system, these frameworks bear little resemblance to the human visual system. To narrow this gap, this research proposes a novel framework, EEmaGe, which uses self-supervised learning to reconstruct images from raw EEG data. Unlike previous methods, which rely on supervised learning over labeled data collected with visual cues, the framework employs self-supervised autoencoders and their downstream tasks to extract robust EEG features. The experimental results demonstrate that the EEG encoder trains better with similar images than with EEG alone, even when the labeled data (EEG-image pairs) are shuffled. Through this RE2I approach, the research has the potential to advance our knowledge of the intricacies of the human brain and to support the development of more sophisticated AI systems that effectively mimic human visual perception.
Research - EEmaGe: EEG-based Image Generation for Visual Reconstruction Using Convolutional Autoencoders (and GANs)
dev-onejun/EEmaGe
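To make the self-supervised autoencoder idea from the abstract concrete, below is a minimal PyTorch sketch. It is not the repository's actual implementation: the class names, EEG channel count, window length, latent size, and output image resolution are all illustrative assumptions. The EEG encoder is pretrained by reconstructing the raw EEG itself (no image labels needed), and the same encoder is then reused with an image decoder for the downstream image-reconstruction task.

```python
# A minimal sketch (illustrative assumptions, not the EEmaGe release) of
# self-supervised autoencoder pretraining on raw EEG followed by a downstream
# image-reconstruction head that reuses the pretrained EEG encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 128                        # assumed shared latent size
EEG_CHANNELS, EEG_SAMPLES = 128, 440    # assumed raw EEG window shape

class EEGEncoder(nn.Module):
    """1-D convolutional encoder over a raw EEG window (channels x samples)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(EEG_CHANNELS, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(64, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(32, LATENT_DIM),
        )

    def forward(self, x):
        return self.net(x)

class EEGDecoder(nn.Module):
    """Maps the latent back to the EEG window for self-supervised pretraining."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 512),
            nn.ReLU(),
            nn.Linear(512, EEG_CHANNELS * EEG_SAMPLES),
        )

    def forward(self, z):
        return self.net(z).view(-1, EEG_CHANNELS, EEG_SAMPLES)

class ImageDecoder(nn.Module):
    """Transposed-convolution decoder from the latent to a 64x64 RGB image."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT_DIM, 128 * 4 * 4)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 4x4 -> 8x8
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),   # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),    # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 4, 4))

if __name__ == "__main__":
    eeg = torch.randn(8, EEG_CHANNELS, EEG_SAMPLES)  # dummy batch of raw EEG windows
    encoder, eeg_dec, img_dec = EEGEncoder(), EEGDecoder(), ImageDecoder()

    # Self-supervised pretraining objective: reconstruct the EEG input itself,
    # so no image labels are required at this stage.
    pretrain_loss = F.mse_loss(eeg_dec(encoder(eeg)), eeg)

    # Downstream task: reuse the pretrained encoder with an image decoder to
    # reconstruct the viewed image from the EEG-derived latent.
    images = img_dec(encoder(eeg))                   # (8, 3, 64, 64)
    print(pretrain_loss.item(), images.shape)
```

In this sketch the image decoder is trained with a plain reconstruction loss; it could equally be trained adversarially (per the "and GANs" in the title). The key point the sketch illustrates is that the pretraining step lets the EEG encoder learn features without requiring aligned EEG-image labels.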