Search Results for author: Young-Eun Lee

Found 8 papers, 2 papers with code

Neural Speech Embeddings for Speech Synthesis Based on Deep Generative Networks

no code implementations • 10 Dec 2023 • Seo-Hyun Lee, Young-Eun Lee, Soowon Kim, Byung-Kwan Ko, Jun-Young Kim, Seong-Whan Lee

Brain-to-speech technology represents a fusion of interdisciplinary applications encompassing the fields of artificial intelligence, brain-computer interfaces, and speech synthesis.

Representation Learning • Speech Synthesis

Enhanced Generative Adversarial Networks for Unseen Word Generation from EEG Signals

no code implementations • 14 Nov 2023 • Young-Eun Lee, Seo-Hyun Lee, Soowon Kim, Jung-Sun Lee, Deok-Seon Kim, Seong-Whan Lee

Recent advances in brain-computer interface (BCI) technology, particularly those based on generative adversarial networks (GANs), have shown great promise for improving BCI decoding performance.

Brain Computer Interface • Data Augmentation • +3

Brain-Driven Representation Learning Based on Diffusion Model

no code implementations • 14 Nov 2023 • Soowon Kim, Seo-Hyun Lee, Young-Eun Lee, Ji-Won Lee, Ji-Ha Park, Seong-Whan Lee

Interpreting EEG signals linked to spoken language presents a complex challenge, given the data's intricate temporal and spatial attributes as well as various noise factors.

Denoising • EEG • +1

Diff-E: Diffusion-based Learning for Decoding Imagined Speech EEG

1 code implementation • 26 Jul 2023 • Soowon Kim, Young-Eun Lee, Seo-Hyun Lee, Seong-Whan Lee

Decoding EEG signals for imagined speech is a challenging task due to the high-dimensional nature of the data and low signal-to-noise ratio.

Denoising • EEG • +1

Subject-Independent Classification of Brain Signals using Skip Connections

no code implementations • 19 Jan 2023 • Soowon Kim, Ji-Won Lee, Young-Eun Lee, Seo-Hyun Lee

A brain-computer interface system can be implemented using electroencephalogram signals because they pose less clinical risk and can be acquired with portable instruments.

Brain Computer Interface • Classification • +1

Towards Voice Reconstruction from EEG during Imagined Speech

1 code implementation • 2 Jan 2023 • Young-Eun Lee, Seo-Hyun Lee, Sang-Ho Kim, Seong-Whan Lee

Translating imagined speech from human brain activity into voice is a challenging and compelling research problem that could provide a new means of human communication via brain signals.

Automatic Speech Recognition (ASR) • +4

Reconstructing ERP Signals Using Generative Adversarial Networks for Mobile Brain-Machine Interface

no code implementations • 18 May 2020 • Young-Eun Lee, Minji Lee, Seong-Whan Lee

As a result, the reconstructed signals contained key components, such as the N200 and P300, similar to the ERPs recorded during standing.

EEG • ERP
