no code implementations • 1 Apr 2024 • Richard Kimera, Yun-Seon Kim, Heeyoul Choi
Additionally, the paper probes the distribution of responsibility between AI systems and humans, underscoring the essential role of human oversight in upholding NMT ethical standards.
no code implementations • 25 Jan 2024 • Richard Kimera, Daniela N. Rim, Joseph Kirabira, Ubong Godwin Udomah, Heeyoul Choi
Depression is a global burden and one of the most challenging mental health conditions to control.
no code implementations • 16 Aug 2023 • Daniela N. Rim, Kimera Richard, Heeyoul Choi
The Transformer model has revolutionized Natural Language Processing tasks such as Neural Machine Translation, and many efforts have been made to study the Transformer architecture and improve its efficiency and accuracy.
no code implementations • 2 May 2023 • DongNyeong Heo, Heeyoul Choi
Latent variable modeling in non-autoregressive neural machine translation (NAT) is a promising approach to mitigate the multimodality problem.
no code implementations • 14 Feb 2023 • Malintha Fernando, Ransalu Senanayake, Heeyoul Choi, Martin Swany
Autonomous mobility is emerging as a new disruptive mode of urban transportation for moving cargo and passengers.
no code implementations • 19 Jan 2023 • Namjin Seo, DongNyeong Heo, Heeyoul Choi
Network function virtualization (NFV) and software-defined network (SDN) have become emerging network paradigms, allowing virtualized network function (VNF) deployment at a low cost.
no code implementations • 7 Jan 2023 • Richard Kimera, Daniela N. Rim, Heeyoul Choi
Neural machine translation (NMT) has achieved great success with large datasets, so NMT is largely premised on high-resource languages.
no code implementations • 20 Mar 2022 • Hyunsub Lee, Heeyoul Choi
In this paper, we introduce a new representation, partitioned representation, which can learn both common and unique features of the anchor and positive samples in contrastive learning.
no code implementations • 17 Feb 2022 • DongNyeong Heo, Heeyoul Choi
Back-translation is an effective semi-supervised learning framework in neural machine translation (NMT).
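The back-translation data flow can be sketched in a few lines (a minimal illustration; the function name and the toy reverse-string "model" below are placeholders, not the paper's actual NMT models):

```python
def back_translate(monolingual_tgt, tgt2src_model):
    """Build synthetic (source, target) training pairs from target-side
    monolingual sentences by translating them backwards with a
    target-to-source model."""
    return [(tgt2src_model(t), t) for t in monolingual_tgt]

# Toy stand-in for a trained target->source model (illustration only).
toy_model = lambda s: s[::-1]
pairs = back_translate(["abc", "xy"], toy_model)
```

The synthetic pairs are then mixed with genuine parallel data to train the source-to-target model.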
no code implementations • 29 Oct 2021 • Jin Ha Woo, Heeyoul Choi
The Second Language Acquisition field has been significantly impacted by a greater emphasis on individualized learning and rapid developments in artificial intelligence (AI).
no code implementations • 29 Sep 2021 • Chungjun Lee, Jibum Hong, DongNyeong Heo, Heeyoul Choi
We propose several sequential deep learning models to learn the time-series and sequential patterns of the virtual network functions (VNFs) in chains of variable length.
no code implementations • 19 Sep 2021 • Daniela N. Rim, DongNyeong Heo, Heeyoul Choi
In this work, we propose adversarial training with contrastive learning (ATCL), which adversarially trains models for language processing tasks while leveraging the benefits of contrastive learning.
no code implementations • 25 May 2021 • Daniela N. Rim, Inseon Jang, Heeyoul Choi
We apply a reparametrization trick for the Bernoulli distribution for the discrete representations, which allows smooth backpropagation.
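A common way to realize this trick is the binary Concrete (Gumbel-Sigmoid) relaxation; the sketch below is a generic illustration, and the function name and temperature value are assumptions rather than the paper's exact formulation:

```python
import numpy as np

def relaxed_bernoulli(logits, temperature=0.5, rng=None):
    """Differentiable surrogate for sampling Bernoulli(sigmoid(logits)):
    inject Logistic(0, 1) noise into the logits and squash with a
    temperature-controlled sigmoid, so gradients can flow through."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(1e-8, 1 - 1e-8, size=np.shape(logits))
    logistic_noise = np.log(u) - np.log(1.0 - u)  # Logistic(0, 1) sample
    return 1.0 / (1.0 + np.exp(-(logits + logistic_noise) / temperature))
```

As the temperature approaches zero, samples approach hard 0/1 values while remaining differentiable at any positive temperature.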
no code implementations • 17 Nov 2020 • DongNyeong Heo, Doyoung Lee, Hee-Gon Kim, Suhyun Park, Heeyoul Choi
In the management of computer network systems, the service function chaining (SFC) modules play an important role by generating efficient paths for network traffic through physical servers with virtualized network functions (VNF).
no code implementations • 11 Sep 2020 • DongNyeong Heo, Stanislav Lange, Hee-Gon Kim, Heeyoul Choi
Moreover, the GNN based model can be applied to a new network topology without re-designing and re-training.
no code implementations • 1 Dec 2019 • Hayoung Um, Heeyoul Choi
In pattern recognition and machine learning, finding the nearest neighbors of a given point is a fundamental task.
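For reference, the brute-force version of this task is a few lines (a minimal sketch; practical systems use index structures such as k-d trees or approximate methods to avoid the linear scan):

```python
import numpy as np

def nearest_neighbor(query, points):
    """Return the index of the row of `points` closest to `query`
    in Euclidean distance (exhaustive linear scan)."""
    dists = np.linalg.norm(points - query, axis=1)
    return int(np.argmin(dists))
```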
no code implementations • 26 Nov 2019 • Hyeokmin Gwon, Chungjun Lee, Rakun Keum, Heeyoul Choi
The growing number of network devices and services has led to an increasing demand for protective measures, as hackers launch attacks to paralyze victim systems or steal information from them.
no code implementations • RANLP 2019 • Sangchul Hahn, Heeyoul Choi
Since deep learning became a key player in natural language processing (NLP), many deep learning models have been showing remarkable performances in a variety of NLP tasks, and in some cases, they are even outperforming humans.
no code implementations • 8 Nov 2018 • Hayoung Eom, Heeyoul Choi
$\alpha$-pooling is a general pooling method that includes max-pooling and arithmetic average-pooling as special cases, depending on the parameter $\alpha$.
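One concrete family with this property is the power mean; the sketch below is a hedged stand-in (the paper's exact $\alpha$-integration formula may differ), shown for positive inputs:

```python
import numpy as np

def power_mean_pool(x, p):
    """Generalized (power) mean over the last axis of positive inputs:
    p = 1 gives arithmetic average-pooling, and p -> infinity
    approaches max-pooling."""
    x = np.asarray(x, dtype=float)
    return np.mean(x ** p, axis=-1) ** (1.0 / p)
```

Making the exponent a trainable parameter lets each feature map choose its own point on the average-to-max spectrum.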
no code implementations • 8 Nov 2018 • Sangchul Hahn, Heeyoul Choi
The proposed method is applied to the latent variables of variational auto-encoder (VAE), although it can be applied to any generative models with latent variables.
no code implementations • 27 Sep 2018 • Sangchul Hahn, Heeyoul Choi
Based on this explanation, we propose a new technique for activation functions, {\em gradient acceleration in activation function (GAAF)}, that accelerates gradients to flow even in the saturation area.
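The idea can be illustrated by adding a tiny-amplitude term whose value is negligible but whose slope is nonzero in the saturated regions; this is a hedged sketch of the principle, not the paper's exact GAAF formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gaaf_sigmoid(x, k=100.0):
    """Sigmoid plus a sawtooth of amplitude ~1/k whose slope is 1 almost
    everywhere, so some gradient flows even where the sigmoid saturates."""
    saw = (np.mod(k * x, 1.0) - 0.5) / k  # value in [-0.5/k, 0.5/k), slope 1
    return sigmoid(x) + saw
```

Because the added term stays within about ±0.5/k, the forward activation is essentially unchanged while the backward pass gains a constant gradient contribution.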
no code implementations • 26 Jun 2018 • Sangchul Hahn, Heeyoul Choi
Based on this explanation, we propose a new technique for activation functions, {\em gradient acceleration in activation function (GAAF)}, that accelerates gradients to flow even in the saturation area.
no code implementations • 22 Jun 2018 • Heeyoul Choi
LSTM transforms the input and the previous hidden state into the next state through an affine transformation, multiplicative gating operations, and a nonlinear activation function, which yields a good data representation for a given task.
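That transformation can be written out as a single LSTM step (a generic sketch using the common gate ordering; the weight shapes and ordering are assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step: an affine transformation of the input and previous
    hidden state, elementwise multiplicative gating, and tanh nonlinearities.
    Shapes: W (4H, D), U (4H, H), b (4H,); gates ordered [i, f, o, g]."""
    z = W @ x + U @ h_prev + b            # affine transformation
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c_prev + i * np.tanh(g)       # multiplicative gate updates
    h = o * np.tanh(c)                    # next hidden state
    return h, c
```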
no code implementations • 30 Mar 2018 • Heeyoul Choi, Kyunghyun Cho, Yoshua Bengio
Neural machine translation (NMT) has become a new paradigm in machine translation, and the attention mechanism has emerged as the dominant approach, setting state-of-the-art records in many language pairs.
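The core of that mechanism is a softmax over alignment scores; a minimal dot-product variant (the names and scoring function are generic, not this paper's exact model) looks like:

```python
import numpy as np

def attention(query, keys, values):
    """Dot-product attention: score each key against the query,
    softmax the scores into weights, and average the values."""
    scores = keys @ query
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ values
```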
no code implementations • 1 Aug 2016 • Sungjin Ahn, Heeyoul Choi, Tanel Pärnamaa, Yoshua Bengio
Current language models have a significant limitation in their ability to encode and decode factual knowledge.
1 code implementation • 3 Jul 2016 • Heeyoul Choi, Kyunghyun Cho, Yoshua Bengio
Based on this observation, in this paper we propose to contextualize the word embedding vectors using a nonlinear bag-of-words representation of the source sentence.
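A minimal version of that idea gates each word vector with a nonlinear summary of the whole sentence (a hedged sketch; the paper learns the gating network rather than using fixed nonlinearities):

```python
import numpy as np

def contextualize(word_embs):
    """Gate each word embedding elementwise with a nonlinear
    bag-of-words summary of the sentence."""
    bow = np.tanh(word_embs.mean(axis=0))   # nonlinear BoW of the sentence
    gate = 1.0 / (1.0 + np.exp(-bow))       # per-dimension context gate
    return word_embs * gate                 # context-dependent embeddings
```

Because the gate depends on the whole sentence, the same word receives different effective embeddings in different contexts.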