2 code implementations • 19 Dec 2023 • Viet Nguyen, Giang Vu, Tung Nguyen Thanh, Khoat Than, Toan Tran
To minimize that gap, we propose a novel "sequence-aware" loss that reduces the estimation gap and thereby enhances sampling quality.
no code implementations • 26 Nov 2023 • Quyen Tran, Lam Tran, Khoat Than, Toan Tran, Dinh Phung, Trung Le
Drawing inspiration from prompt tuning techniques applied to Large Language Models, recent methods based on pre-trained ViT networks have achieved remarkable results in the field of Continual Learning.
no code implementations • 30 Nov 2022 • Quyen Tran, Hoang Phan, Khoat Than, Dinh Phung, Trung Le
To address this issue, in this work, we first propose an online mixture model learning approach based on well-established properties of optimal transport theory (OT-MM).
1 code implementation • 19 Nov 2022 • Truong Vu, Kien Do, Khang Nguyen, Khoat Than
We propose a novel high-fidelity face swapping method called "Arithmetic Face Swapping" (AFS) that explicitly disentangles the intermediate latent space W+ of a pretrained StyleGAN into the "identity" and "style" subspaces so that a latent code in W+ is the sum of an "identity" code and a "style" code in the corresponding subspaces.
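The decomposition behind AFS, a latent code in W+ being the sum of an "identity" code and a "style" code, can be sketched with complementary projectors. This is a minimal illustration under assumed shapes; in AFS the subspaces are learned, whereas here the coordinates are simply split in half.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512  # dimensionality of one W+ style vector (StyleGAN uses 512)

# Hypothetical orthogonal projectors onto the "identity" and "style"
# subspaces; complementary, so P_id + P_style = I.
P_id = np.diag(np.r_[np.ones(D // 2), np.zeros(D // 2)])
P_style = np.eye(D) - P_id

w_source = rng.standard_normal(D)  # latent code of the identity donor
w_target = rng.standard_normal(D)  # latent code providing the style

# Face swap as latent arithmetic: take the identity component of one
# face plus the style component of the other (w = w_id + w_style).
w_swapped = P_id @ w_source + P_style @ w_target
```

Because the projectors are complementary, the swapped code keeps the source's identity component and the target's style component exactly.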
no code implementations • Pacific-Asia Conference on Knowledge Discovery and Data Mining 2022 • Hoang Phan, Anh Phan Tuan, Son Nguyen, Ngo Van Linh, Khoat Than
Our paper studies the continual learning (CL) problem, in which data arrives in sequence and trained models are expected to utilize existing knowledge to solve new tasks without losing performance on previous ones.
1 code implementation • 26 Jul 2021 • Quyen Tran, Lam Tran, Linh Chu Hai, Linh Ngo Van, Khoat Than
In addition, we start from the hypothesis that a user's preference at a given time is a combination of long-term and short-term interests.
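The hypothesis above can be sketched as a convex combination of two interest vectors. The function name, the weighting scheme, and the example vectors are all hypothetical; the paper's actual combination mechanism is learned, not fixed.

```python
import numpy as np

def combine_preference(long_term, short_term, alpha=0.7):
    """Convex combination of a stable long-term profile and a
    short-term (current-session) interest vector; alpha weights
    the long-term side."""
    long_term = np.asarray(long_term, dtype=float)
    short_term = np.asarray(short_term, dtype=float)
    return alpha * long_term + (1.0 - alpha) * short_term

long_term = np.array([0.9, 0.1, 0.0])   # e.g. stable genre affinities
short_term = np.array([0.0, 0.5, 0.5])  # e.g. interests in the current session
pref = combine_preference(long_term, short_term)
```

A learned model would typically infer the mixing weight per user and per time step rather than fixing it.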
no code implementations • 6 Apr 2021 • Khoat Than, Nghia Vu
This result resolves the long-standing mystery of why the widely used Lipschitz constraint for GANs often leads to empirical success despite lacking a solid theoretical foundation.
no code implementations • NeurIPS 2021 • Son Nguyen, Duong Nguyen, Khai Nguyen, Khoat Than, Hung Bui, Nhat Ho
Approximate inference in Bayesian deep networks faces a dilemma: how to yield high-fidelity posterior approximations while maintaining computational efficiency and scalability.
no code implementations • 1 Jan 2021 • Khoat Than, Nghia Vu
Finally, we show why data augmentation can ensure Lipschitz continuity on both the discriminator and generator.
no code implementations • 26 Mar 2020 • Anh Phan Tuan, Bach Tran, Thien Nguyen Huu, Linh Ngo Van, Khoat Than
Furthermore, many applications must handle massive and dynamic short texts, posing various computational challenges to current batch learning algorithms.
1 code implementation • 13 Mar 2020 • Tran Xuan Bach, Nguyen Duc Anh, Ngo Van Linh, Khoat Than
We show that some existing approaches can forget knowledge very quickly.
1 code implementation • 13 Mar 2020 • Ngo Van Linh, Tran Xuan Bach, Khoat Than
In this paper, to exploit a knowledge graph effectively, we propose a novel graph convolutional topic model (GCTM) that integrates graph convolutional networks (GCN) into a topic model, together with a learning method that trains the networks and the topic model simultaneously on data streams.
1 code implementation • ICML 2020 • Rui Shu, Tung Nguyen, Yin-Lam Chow, Tuan Pham, Khoat Than, Mohammad Ghavamzadeh, Stefano Ermon, Hung H. Bui
High-dimensional observations and unknown dynamics are major challenges when applying optimal control to many real-world decision-making tasks.
no code implementations • ACL 2019 • Linh The Nguyen, Linh Van Ngo, Khoat Than, Thien Huu Nguyen
It has been shown that implicit connectives can be exploited to improve the performance of the models for implicit discourse relation recognition (IDRR).
1 code implementation • 10 Dec 2015 • Khoat Than, Tu Bao Ho
One of the core problems in this field is the posterior inference for individual data instances.
1 code implementation • 10 Dec 2015 • Khoat Than, Tung Doan
One of the core problems in statistical models is the estimation of a posterior distribution.
no code implementations • 16 Dec 2013 • Khoat Than, Tu Bao Ho
Contrary to the common belief that it is intractable, we show that this inference problem is concave under certain conditions.
no code implementations • 26 Oct 2012 • Khoat Than, Tu Bao Ho
In this article, we introduce a simple framework for inference in probabilistic topic models, denoted by FW.