Search Results for author: Khoat Than

Found 18 papers, 8 papers with code

On Inference Stability for Diffusion Models

2 code implementations • 19 Dec 2023 • Viet Nguyen, Giang Vu, Tung Nguyen Thanh, Khoat Than, Toan Tran

To minimize that gap, we propose a novel sequence-aware loss that reduces the estimation gap and thereby enhances sampling quality; a rough sketch of the general idea follows below.

Denoising
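
The excerpt does not spell out the proposed loss, so the following is only a rough, hypothetical illustration of what making a denoising loss "sequence-aware" could mean: accumulating prediction errors over an ordered set of timesteps instead of a single random one. The model, noise schedule, and averaging are placeholders, not the construction from the paper.

    # Hypothetical sketch only: contrasting a single-step denoising loss with a
    # loss accumulated over a sequence of timesteps. NOT the loss proposed in
    # "On Inference Stability for Diffusion Models"; model and schedule are placeholders.
    import torch

    def single_step_loss(model, x0, alphas_bar, t):
        # Standard DDPM-style objective: predict the noise added at one step t.
        noise = torch.randn_like(x0)
        a = alphas_bar[t].view(1, 1)
        x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise
        return ((model(x_t, t) - noise) ** 2).mean()

    def sequence_accumulated_loss(model, x0, alphas_bar, ts):
        # Toy "sequence-aware" variant: penalise errors over an ordered list of
        # timesteps ts (e.g. [999, 750, 500, 250]) jointly rather than independently.
        loss = 0.0
        for t in ts:
            loss = loss + single_step_loss(model, x0, alphas_bar, t)
        return loss / len(ts)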

KOPPA: Improving Prompt-based Continual Learning with Key-Query Orthogonal Projection and Prototype-based One-Versus-All

no code implementations • 26 Nov 2023 • Quyen Tran, Lam Tran, Khoat Than, Toan Tran, Dinh Phung, Trung Le

Drawing inspiration from prompt tuning techniques applied to Large Language Models, recent methods based on pre-trained ViT networks have achieved remarkable results in the field of Continual Learning.

Continual Learning • Meta-Learning

Continual Learning with Optimal Transport based Mixture Model

no code implementations • 30 Nov 2022 • Quyen Tran, Hoang Phan, Khoat Than, Dinh Phung, Trung Le

To address this issue, in this work we first propose an online mixture model learning approach built on well-established properties of optimal transport theory (OT-MM); an illustrative sketch of OT-driven mixture updates follows below.

Class Incremental Learning • Incremental Learning
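
The excerpt only names the idea, so purely as a hedged illustration of how optimal transport can drive online mixture-model updates, the sketch below soft-assigns each mini-batch of points to component means with entropic OT (Sinkhorn iterations) and moves the means toward their assigned mass. It is not the OT-MM algorithm from the paper; the regularisation, step size, and uniform marginals are assumptions.

    # Hedged illustration only: online mixture updates driven by entropic optimal
    # transport (Sinkhorn). This is NOT the OT-MM algorithm from the paper.
    import numpy as np

    def sinkhorn_plan(cost, reg=0.1, n_iter=50):
        # Entropic OT plan between uniform marginals for the given cost matrix.
        n, k = cost.shape
        a, b = np.ones(n) / n, np.ones(k) / k           # uniform source/target marginals
        K = np.exp(-cost / (reg * cost.max() + 1e-12))  # Gibbs kernel (cost rescaled for stability)
        v = np.ones(k)
        for _ in range(n_iter):
            u = a / (K @ v)
            v = b / (K.T @ u)
        return u[:, None] * K * v[None, :]              # transport plan, shape (n, k)

    def online_mixture_step(means, batch, lr=0.5):
        # Move each component mean toward the batch points transported to it.
        cost = ((batch[:, None, :] - means[None, :, :]) ** 2).sum(-1)
        plan = sinkhorn_plan(cost)
        weights = plan / plan.sum(axis=0, keepdims=True)  # per-component assignment weights
        return (1 - lr) * means + lr * (weights.T @ batch)

    rng = np.random.default_rng(0)
    means = rng.normal(size=(3, 2))                       # 3 components in 2-D
    for _ in range(100):                                  # a stream of mini-batches
        batch = np.concatenate([rng.normal(loc=c, size=(20, 2)) for c in ([0, 0], [4, 4], [-4, 4])])
        means = online_mixture_step(means, batch)
    print(means)                                          # means drift toward the three clusters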

Face Swapping as A Simple Arithmetic Operation

1 code implementation • 19 Nov 2022 • Truong Vu, Kien Do, Khang Nguyen, Khoat Than

We propose a novel high-fidelity face swapping method called "Arithmetic Face Swapping" (AFS) that explicitly disentangles the intermediate latent space W+ of a pretrained StyleGAN into "identity" and "style" subspaces, so that a latent code in W+ is the sum of an "identity" code and a "style" code in the corresponding subspaces; a schematic of this arithmetic is sketched below.

Disentanglement • Face Swapping
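
Taking the excerpt literally, a swap amounts to adding the identity component of one W+ code to the style component of another. The sketch below is only a schematic of that arithmetic; project_identity, project_style, and the placeholder projection matrices are hypothetical stand-ins for the learned subspace projections, not the paper's implementation.

    # Schematic of "identity + style" arithmetic in the StyleGAN W+ space.
    # The projections here are hypothetical placeholders, not AFS itself.
    import numpy as np

    def project_identity(w, P_id):
        # Project a W+ code onto the (learned) identity subspace.
        return w @ P_id

    def project_style(w, P_style):
        # Project a W+ code onto the (learned) style subspace.
        return w @ P_style

    def arithmetic_face_swap(w_source, w_target, P_id, P_style):
        # Identity of the source face combined with the style of the target face.
        return project_identity(w_source, P_id) + project_style(w_target, P_style)

    rng = np.random.default_rng(0)
    w_src, w_tgt = rng.normal(size=(18, 512)), rng.normal(size=(18, 512))  # W+ codes: 18 layers x 512 dims
    P_id, P_style = 0.5 * np.eye(512), 0.5 * np.eye(512)                   # placeholder projections
    w_swap = arithmetic_face_swap(w_src, w_tgt, P_id, P_style)
    print(w_swap.shape)  # (18, 512) -> would be fed to the StyleGAN synthesis network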

Reducing Catastrophic Forgetting in Neural Networks via Gaussian Mixture Approximation

no code implementations • Pacific-Asia Conference on Knowledge Discovery and Data Mining 2022 • Hoang Phan, Anh Phan Tuan, Son Nguyen, Ngo Van Linh, Khoat Than

Our paper studies the continual learning (CL) problem, in which data arrives in sequence and trained models are expected to utilize existing knowledge to solve new tasks without losing performance on previous ones.

Computational Efficiency • Continual Learning • +1

Generalization of GANs and overparameterized models under Lipschitz continuity

no code implementations • 6 Apr 2021 • Khoat Than, Nghia Vu

This result answers the long-standing mystery of why the popular use of the Lipschitz constraint for GANs often leads to great empirical success despite the lack of a solid theory; a generic example of enforcing such a constraint is sketched below.

Data Augmentation • Generalization Bounds
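
For context on the Lipschitz constraint the excerpt refers to: in practice it is commonly enforced on the discriminator via spectral normalisation of each layer, as in the generic PyTorch sketch below. This is standard practice rather than anything introduced by the paper.

    # Generic example: constraining a GAN discriminator to be (roughly) 1-Lipschitz
    # by spectral-normalising every linear layer. Standard practice, not this paper's method.
    import torch.nn as nn
    from torch.nn.utils import spectral_norm

    discriminator = nn.Sequential(
        spectral_norm(nn.Linear(784, 256)),  # spectral norm of each weight matrix is capped at 1
        nn.LeakyReLU(0.2),                   # LeakyReLU is itself 1-Lipschitz
        spectral_norm(nn.Linear(256, 256)),
        nn.LeakyReLU(0.2),
        spectral_norm(nn.Linear(256, 1)),    # so the composed map stays (roughly) 1-Lipschitz
    )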

Structured Dropout Variational Inference for Bayesian Neural Networks

no code implementations • NeurIPS 2021 • Son Nguyen, Duong Nguyen, Khai Nguyen, Khoat Than, Hung Bui, Nhat Ho

Approximate inference in Bayesian deep networks faces a dilemma: how to yield high-fidelity posterior approximations while maintaining computational efficiency and scalability.

Bayesian Inference • Computational Efficiency • +2

Generalization and Stability of GANs: A theory and promise from data augmentation

no code implementations • 1 Jan 2021 • Khoat Than, Nghia Vu

Finally, we show why data augmentation can ensure Lipschitz continuity on both the discriminator and generator.

Data Augmentation

Bag of biterms modeling for short texts

no code implementations • 26 Mar 2020 • Anh Phan Tuan, Bach Tran, Thien Nguyen Huu, Linh Ngo Van, Khoat Than

Furthermore, many applications must handle massive and dynamic streams of short texts, which pose various computational challenges to current batch learning algorithms.
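
For reference, a biterm (in the sense popularised by biterm topic modelling) is an unordered pair of words co-occurring in the same short text, and a bag-of-biterms representation collects all such pairs. The small sketch below only extracts that representation; it is illustrative and not the paper's code.

    # Illustrative only: extract the bag of biterms (unordered co-occurring word
    # pairs) from short texts. Not the implementation from the paper.
    from collections import Counter
    from itertools import combinations

    def biterms(doc):
        # All unordered word pairs occurring in one short document.
        words = doc.lower().split()
        return [tuple(sorted(pair)) for pair in combinations(words, 2)]

    docs = ["apple banana cherry", "banana cherry date"]
    bag = Counter(b for d in docs for b in biterms(d))
    print(bag)  # e.g. ('banana', 'cherry') is counted in both short texts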

A Graph Convolutional Topic Model for Short and Noisy Text Streams

1 code implementation • 13 Mar 2020 • Ngo Van Linh, Tran Xuan Bach, Khoat Than

In this paper, aiming to exploit a knowledge graph effectively, we propose a novel graph convolutional topic model (GCTM), which integrates graph convolutional networks (GCN) into a topic model, together with a learning method that trains the network and the topic model simultaneously on data streams; one generic way such an integration could look is sketched below.

Topic Models • Word Embeddings
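
The excerpt names the integration but not its form. As a hedged sketch only, the code below shows one generic way a graph-convolution layer over a word knowledge graph could produce word embeddings that parameterise topic-word distributions; the single-layer design, shapes, and softmax parameterisation are assumptions, not the GCTM architecture.

    # Hedged sketch: a GCN layer over a word graph feeding topic-word distributions.
    # The architecture below is an assumption, not GCTM from the paper.
    import numpy as np

    def gcn_layer(A, X, W):
        # One graph convolution: ReLU(D^-1/2 (A + I) D^-1/2 X W).
        A_hat = A + np.eye(A.shape[0])                # add self-loops
        d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
        A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
        return np.maximum(A_norm @ X @ W, 0.0)

    def softmax(z, axis=-1):
        z = z - z.max(axis=axis, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    rng = np.random.default_rng(0)
    V, d_in, d_out, K = 500, 100, 64, 20              # vocab size, feature dims, number of topics
    A = (rng.random((V, V)) < 0.01).astype(float)     # toy word knowledge graph
    A = np.maximum(A, A.T)                            # symmetrise it
    X = rng.normal(size=(V, d_in))                    # pretrained word features
    W = 0.01 * rng.normal(size=(d_in, d_out))
    topic_emb = rng.normal(size=(K, d_out))

    H = gcn_layer(A, X, W)                            # graph-informed word embeddings
    beta = softmax(topic_emb @ H.T, axis=1)           # K x V topic-word distributions
    print(beta.shape, beta.sum(axis=1)[:3])           # each row sums to 1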

Predictive Coding for Locally-Linear Control

1 code implementation • ICML 2020 • Rui Shu, Tung Nguyen, Yin-Lam Chow, Tuan Pham, Khoat Than, Mohammad Ghavamzadeh, Stefano Ermon, Hung H. Bui

High-dimensional observations and unknown dynamics are major challenges when applying optimal control to many real-world decision making tasks.

Decision Making

Employing the Correspondence of Relations and Connectives to Identify Implicit Discourse Relations via Label Embeddings

no code implementations • ACL 2019 • Linh The Nguyen, Linh Van Ngo, Khoat Than, Thien Huu Nguyen

It has been shown that implicit connectives can be exploited to improve the performance of models for implicit discourse relation recognition (IDRR).

Multi-Task Learning

Guaranteed inference in topic models

1 code implementation • 10 Dec 2015 • Khoat Than, Tung Doan

One of the core problems in statistical models is the estimation of a posterior distribution.

Topic Models

Inference in topic models: sparsity and trade-off

1 code implementation • 10 Dec 2015 • Khoat Than, Tu Bao Ho

One of the core problems in this field is posterior inference for individual data instances.

Topic Models

Probable convexity and its application to Correlated Topic Models

no code implementations • 16 Dec 2013 • Khoat Than, Tu Bao Ho

Contrary to the existing belief of intractability, we show that this inference problem is concave under certain conditions.

Topic Models

Managing sparsity, time, and quality of inference in topic models

no code implementations • 26 Oct 2012 • Khoat Than, Tu Bao Ho

In this article, we introduce a simple framework for inference in probabilistic topic models, denoted by FW; a sketch of this style of inference appears below.

Topic Models
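
My reading, stated as an assumption: FW here abbreviates the Frank-Wolfe algorithm, applied to per-document inference of topic proportions over the probability simplex. The sketch below implements that generic style of inference (maximising the document log-likelihood with Frank-Wolfe steps); it is not the paper's code, and the step size and iteration count are illustrative.

    # Hedged sketch: Frank-Wolfe inference of a document's topic proportions theta,
    # assuming FW abbreviates Frank-Wolfe. Maximises sum_w d_w * log(theta^T beta[:, w]).
    # Not the paper's implementation.
    import numpy as np

    def frank_wolfe_inference(d, beta, n_iter=50):
        # d: word counts, shape (V,); beta: topic-word distributions, shape (K, V).
        K = beta.shape[0]
        theta = np.full(K, 1.0 / K)                   # start at the centre of the simplex
        for t in range(n_iter):
            p = theta @ beta                          # mixture probability of each word
            grad = beta @ (d / np.maximum(p, 1e-12))  # gradient of the log-likelihood
            k_star = int(np.argmax(grad))             # best simplex vertex (linear subproblem)
            step = 2.0 / (t + 2.0)                    # classic Frank-Wolfe step size
            theta = (1 - step) * theta
            theta[k_star] += step                     # move toward that vertex -> sparse theta
        return theta

    rng = np.random.default_rng(0)
    beta = rng.dirichlet(np.full(200, 0.1), size=10)  # 10 topics over a 200-word vocabulary
    d = rng.multinomial(50, beta[3])                  # a short document drawn from topic 3
    print(frank_wolfe_inference(d, beta).round(3))    # mass concentrates on a few topics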
