no code implementations • 18 Jan 2025 • Duy Nguyen, Trung T. Nguyen, Cuong V. Nguyen
In this paper, we propose FADAML, a novel end-to-end machine learning system to detect and filter out fake online advertisements.
no code implementations • 29 Oct 2024 • Vu C. Dinh, Lam Si Tung Ho, Cuong V. Nguyen
We analyze the error rates of the Hamiltonian Monte Carlo algorithm with leapfrog integrator for Bayesian neural network inference.
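Since the entry centers on HMC with the leapfrog integrator, here is a minimal sketch of that integrator inside a basic HMC loop, using a standard-normal target purely for illustration; the error-rate analysis in the paper is not reproduced here.

```python
import numpy as np

def leapfrog(theta, momentum, grad_log_prob, step_size, n_steps):
    """One HMC trajectory: leapfrog steps for d(theta)/dt = p, dp/dt = grad log p(theta)."""
    theta, momentum = theta.copy(), momentum.copy()
    momentum += 0.5 * step_size * grad_log_prob(theta)   # half step for momentum
    for _ in range(n_steps - 1):
        theta += step_size * momentum                     # full step for position
        momentum += step_size * grad_log_prob(theta)      # full step for momentum
    theta += step_size * momentum
    momentum += 0.5 * step_size * grad_log_prob(theta)    # final half step
    return theta, momentum

# Toy target: standard normal, so log p(theta) = -0.5 * ||theta||^2.
grad_log_prob = lambda theta: -theta
rng = np.random.default_rng(0)
theta, samples = np.zeros(2), []
for _ in range(1000):
    p0 = rng.standard_normal(2)
    theta_new, p_new = leapfrog(theta, p0, grad_log_prob, step_size=0.1, n_steps=20)
    # Metropolis correction based on the change in the Hamiltonian (kinetic + potential energy).
    h_old = 0.5 * p0 @ p0 + 0.5 * theta @ theta
    h_new = 0.5 * p_new @ p_new + 0.5 * theta_new @ theta_new
    if rng.random() < np.exp(h_old - h_new):
        theta = theta_new
    samples.append(theta.copy())
```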
1 code implementation • 3 Feb 2024 • Cuong V. Nguyen, Cuong D. Do
The adoption of deep learning in ECG diagnosis is often hindered by the scarcity of large, well-labeled datasets in real-world scenarios, leading to the use of transfer learning to leverage features learned from larger datasets.
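A minimal PyTorch-style sketch of the kind of transfer-learning setup the abstract alludes to: reuse a backbone pretrained on a larger ECG corpus and fine-tune a new classification head on the small target dataset. The architecture, checkpoint name, and hyperparameters below are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Placeholder 1-D CNN backbone standing in for a model pretrained on a large ECG corpus.
class ECGBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(12, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
    def forward(self, x):
        return self.features(x)

backbone = ECGBackbone()
# backbone.load_state_dict(torch.load("pretrained_ecg.pt"))  # hypothetical checkpoint

# Freeze the pretrained features and train only a new head for the target task.
for p in backbone.parameters():
    p.requires_grad_(False)
head = nn.Linear(64, 5)  # e.g. 5 target diagnostic classes (placeholder)

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 12, 1000)          # dummy batch of 12-lead ECGs, 1000 samples each
y = torch.randint(0, 5, (8,))
loss = loss_fn(head(backbone(x)), y)
loss.backward()
optimizer.step()
```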
no code implementations • 5 Dec 2023 • Hong Nguyen, Cuong V. Nguyen, Shrikanth Narayanan, Benjamin Y. Xu, Michael Pazzani
Primary open-angle glaucoma (POAG) is a chronic and progressive optic nerve condition that results in an acquired loss of optic nerve fibers and potential blindness.
1 code implementation • 1 Dec 2023 • Cuong N. Nguyen, Phong Tran, Lam Si Tung Ho, Vu Dinh, Anh T. Tran, Tal Hassner, Cuong V. Nguyen
We consider transferability estimation, the problem of estimating how well deep learning models transfer from a source to a target task.
1 code implementation • 27 Oct 2023 • Cuong V. Nguyen, Hieu Minh Duong, Cuong D. Do
In practical electrocardiography (ECG) interpretation, the scarcity of well-annotated data is a common challenge.
no code implementations • 4 Oct 2023 • Aman Khullar, Daniel Nkemelu, Cuong V. Nguyen, Michael L. Best
In this work, we propose a data augmentation approach that addresses the problem of lack of data for online hate speech detection in limited data contexts using synthetic data generation techniques.
1 code implementation • 27 Jul 2023 • Duc C. Hoang, Behzad Ousat, Amin Kharraz, Cuong V. Nguyen
The popularity of text-based CAPTCHA as a security mechanism to protect websites from automated bots has prompted research into CAPTCHA solvers, with the aim of understanding their failure cases and subsequently making CAPTCHAs more secure.
1 code implementation • 16 Mar 2023 • Cuong V. Nguyen, Khiem H. Le, Anh M. Tran, Quang H. Pham, Binh T. Nguyen
Transfer learning plays an essential role in deep learning: it can remarkably improve performance on a target domain for which training data is insufficient.
no code implementations • 13 Sep 2022 • Cuong N. Nguyen, Lam Si Tung Ho, Vu Dinh, Tal Hassner, Cuong V. Nguyen
We analyze new generalization bounds for deep learning models trained by transfer learning from a source to a target task.
1 code implementation • 24 Feb 2022 • Matthew Ashman, Thang D. Bui, Cuong V. Nguyen, Stratis Markou, Adrian Weller, Siddharth Swaroop, Richard E. Turner
Variational inference (VI) has become the method of choice for fitting many modern probabilistic models.
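As a reminder of the basic mechanics behind VI (not the partitioned scheme this paper develops), here is a minimal sketch that fits a mean-field Gaussian to a 1-D target by maximizing a Monte Carlo ELBO with the reparameterization trick; the target and step sizes are illustrative choices.

```python
import torch

# Unnormalized log target: a 1-D Gaussian N(2, 0.5^2) for illustration.
def log_p(z):
    return -0.5 * ((z - 2.0) / 0.5) ** 2

# Variational parameters of q(z) = N(mu, softplus(rho)^2).
mu = torch.zeros(1, requires_grad=True)
rho = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu, rho], lr=0.05)

for _ in range(2000):
    sigma = torch.nn.functional.softplus(rho)
    eps = torch.randn(64)
    z = mu + sigma * eps                      # reparameterized samples from q
    log_q = torch.distributions.Normal(mu, sigma).log_prob(z)
    elbo = (log_p(z) - log_q).mean()          # Monte Carlo estimate of the ELBO
    (-elbo).backward()
    opt.step()
    opt.zero_grad()

print(mu.item(), torch.nn.functional.softplus(rho).item())  # approaches 2.0 and 0.5
```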
1 code implementation • 21 Oct 2021 • Cuong V. Nguyen, Tien-Dung Cao, Tram Truong-Huu, Khanh N. Pham, Binh T. Nguyen
In this paper, we perform an empirical study on the impact of several loss functions on the performance of standard GAN models, Deep Convolutional Generative Adversarial Networks (DCGANs).
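A hedged sketch of how two commonly compared GAN losses differ at the code level (the standard non-saturating loss and the least-squares loss); which losses the paper actually benchmarks is not restated here.

```python
import torch
import torch.nn.functional as F

def ns_gan_losses(d_real_logits, d_fake_logits):
    """Standard non-saturating GAN: binary cross-entropy on discriminator logits."""
    d_loss = (F.binary_cross_entropy_with_logits(d_real_logits, torch.ones_like(d_real_logits))
              + F.binary_cross_entropy_with_logits(d_fake_logits, torch.zeros_like(d_fake_logits)))
    g_loss = F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
    return d_loss, g_loss

def ls_gan_losses(d_real_out, d_fake_out):
    """Least-squares GAN: squared error to 1/0 targets instead of cross-entropy."""
    d_loss = ((d_real_out - 1) ** 2).mean() + (d_fake_out ** 2).mean()
    g_loss = ((d_fake_out - 1) ** 2).mean()
    return d_loss, g_loss

# Dummy discriminator outputs for a batch of 16 real and 16 generated images.
real, fake = torch.randn(16, 1), torch.randn(16, 1)
print(ns_gan_losses(real, fake))
print(ls_gan_losses(real, fake))
```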
no code implementations • ICML 2020 • Cuong V. Nguyen, Tal Hassner, Matthias Seeger, Cedric Archambeau
We introduce a new measure to evaluate the transferability of representations learned by classifiers.
Ranked #4 on Transferability (classification benchmark)
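The snippet below sketches a log-expected-empirical-prediction style transferability score in the spirit of such measures, written from a general understanding of the area; it should not be read as the paper's exact definition.

```python
import numpy as np

def transferability_score(source_probs, target_labels, n_target_classes):
    """
    Score how well a source classifier's soft predictions explain target labels.
    source_probs: (n, Z) predicted source-class probabilities for the target examples.
    target_labels: (n,) integer target labels in {0, ..., n_target_classes - 1}.
    """
    n, _ = source_probs.shape
    # Empirical joint distribution over (target label y, source class z).
    joint = np.zeros((n_target_classes, source_probs.shape[1]))
    for probs, y in zip(source_probs, target_labels):
        joint[y] += probs / n
    cond = joint / joint.sum(axis=0, keepdims=True)      # empirical P(y | z)
    # Average log-likelihood of the resulting "expected empirical predictor" on the target data.
    scores = np.log((source_probs * cond[target_labels]).sum(axis=1))
    return scores.mean()

# Toy usage: random soft predictions for 100 target examples, 10 source classes, 3 target classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=100)
labels = rng.integers(0, 3, size=100)
print(transferability_score(probs, labels, n_target_classes=3))
```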
no code implementations • ICCV 2019 • Anh T. Tran, Cuong V. Nguyen, Tal Hassner
As a case study, we transfer a learned face recognition model to CelebA attribute classification tasks, showing state-of-the-art accuracy for tasks estimated to be highly transferable.
no code implementations • 2 Aug 2019 • Cuong V. Nguyen, Alessandro Achille, Michael Lam, Tal Hassner, Vijay Mahadevan, Stefano Soatto
As an application, we use our procedure to study two properties of a task sequence: (1) total complexity and (2) sequential heterogeneity.
no code implementations • 4 Jun 2019 • Cuong V. Nguyen, Lam Si Tung Ho, Huan Xu, Vu Dinh, Binh Nguyen
We study pool-based active learning with abstention feedbacks where a labeler can abstain from labeling a queried example with some unknown abstention rate.
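A generic sketch of a pool-based active learning loop in which the labeler may abstain; the uncertainty-sampling query strategy and constant abstention rate below are placeholders, not the algorithms analyzed in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic pool: 2-D points with a linear decision boundary, labels hidden from the learner.
pool_x = rng.standard_normal((500, 2))
pool_y = (pool_x @ np.array([1.5, -1.0]) > 0).astype(int)

def query_labeler(i, abstention_rate=0.3):
    """Labeler abstains with some unknown probability; simulated here as a constant rate."""
    return None if rng.random() < abstention_rate else pool_y[i]

labeled_x, labeled_y = [], []
unqueried = set(range(len(pool_x)))
model = LogisticRegression()

for _ in range(50):
    if len(set(labeled_y)) >= 2:
        model.fit(np.array(labeled_x), np.array(labeled_y))
        probs = model.predict_proba(pool_x)[:, 1]
        # Uncertainty sampling: query the unqueried point closest to the decision boundary.
        i = min(unqueried, key=lambda j: abs(probs[j] - 0.5))
    else:
        i = rng.choice(list(unqueried))        # bootstrap with random queries
    unqueried.discard(i)
    label = query_labeler(i)
    if label is not None:                      # abstentions yield no label
        labeled_x.append(pool_x[i])
        labeled_y.append(label)

print(f"collected {len(labeled_y)} labels from 50 queries")
```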
1 code implementation • 6 May 2019 • Siddharth Swaroop, Cuong V. Nguyen, Thang D. Bui, Richard E. Turner
In the continual learning setting, tasks are encountered sequentially.
no code implementations • ICLR 2019 • Tameem Adel, Cuong V. Nguyen, Richard E. Turner, Zoubin Ghahramani, Adrian Weller
We present a framework for interpretable continual learning (ICL).
no code implementations • 27 Nov 2018 • Thang D. Bui, Cuong V. Nguyen, Siddharth Swaroop, Richard E. Turner
Second, the granularity of the updates, e.g. whether the updates are local to each data point and employ message passing, or are global.
8 code implementations • ICLR 2018 • Cuong V. Nguyen, Yingzhen Li, Thang D. Bui, Richard E. Turner
This paper develops variational continual learning (VCL), a simple but general framework for continual learning that fuses online variational inference (VI) and recent advances in Monte Carlo VI for neural networks.
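The core recursion in VCL is that the approximate posterior after task t serves as the prior for task t+1. Below is a hedged sketch of that update for a single diagonal-Gaussian weight vector, with the per-task likelihood abstracted into a placeholder; it illustrates the bookkeeping, not the paper's full neural-network implementation or its coreset variant.

```python
import torch

def kl_diag_gaussians(mu_q, log_var_q, mu_p, log_var_p):
    """KL(q || p) between diagonal Gaussians, regularizing toward the previous posterior."""
    return 0.5 * torch.sum(
        log_var_p - log_var_q
        + (log_var_q.exp() + (mu_q - mu_p) ** 2) / log_var_p.exp()
        - 1.0
    )

def fit_task(log_likelihood, prior_mu, prior_log_var, n_steps=500, lr=1e-2):
    """Variational fit for one task: maximize E_q[log p(D_t | w)] - KL(q || previous posterior)."""
    mu = prior_mu.clone().requires_grad_(True)
    log_var = prior_log_var.clone().requires_grad_(True)
    opt = torch.optim.Adam([mu, log_var], lr=lr)
    for _ in range(n_steps):
        w = mu + (0.5 * log_var).exp() * torch.randn_like(mu)   # reparameterized weight sample
        loss = -log_likelihood(w) + kl_diag_gaussians(mu, log_var, prior_mu, prior_log_var)
        opt.zero_grad(); loss.backward(); opt.step()
    return mu.detach(), log_var.detach()

# Continual learning over a stream of toy tasks: each task's posterior becomes the next prior.
mu, log_var = torch.zeros(10), torch.zeros(10)                   # initial prior N(0, I)
for task_data in [torch.randn(50, 10), torch.randn(50, 10)]:     # placeholder "datasets"
    log_lik = lambda w, X=task_data: -((X @ w - 1.0) ** 2).mean()  # placeholder likelihood
    mu, log_var = fit_task(log_lik, mu, log_var)
```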
no code implementations • 23 May 2017 • Cuong V. Nguyen, Lam Si Tung Ho, Huan Xu, Vu Dinh, Binh Nguyen
We study pool-based active learning with abstention feedbacks, where a labeler can abstain from labeling a queried example with some unknown abstention rate.
3 code implementations • NeurIPS 2017 • Thang D. Bui, Cuong V. Nguyen, Richard E. Turner
Sparse pseudo-point approximations for Gaussian process (GP) models provide a suite of methods that support deployment of GPs in the large data regime and enable analytic intractabilities to be sidestepped.
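A minimal numpy sketch of one classic pseudo-point approximation (the subset-of-regressors / DTC predictive mean with M inducing inputs), to make the "sparse" idea concrete; the unifying framework developed in the paper is considerably more general.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0, variance=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))                 # large training set
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(2000)
Z = np.linspace(-3, 3, 20)[:, None]                    # M = 20 pseudo-inputs (inducing points)
Xs = np.linspace(-3, 3, 5)[:, None]                    # test inputs
noise = 0.1**2

Kmm = rbf(Z, Z) + 1e-8 * np.eye(len(Z))                # work with M x M matrices instead of N x N
Kmn = rbf(Z, X)
Ksm = rbf(Xs, Z)

# Subset-of-regressors / DTC predictive mean:
#   mean(x*) = K*m (Kmm + Kmn Knm / noise)^{-1} Kmn y / noise
A = Kmm + Kmn @ Kmn.T / noise
mean = Ksm @ np.linalg.solve(A, Kmn @ y) / noise
print(mean)                                            # roughly tracks sin at the test inputs
```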
no code implementations • 23 May 2016 • Le Thi Khanh Hien, Cuong V. Nguyen, Huan Xu, Can-Yi Lu, Jiashi Feng
Avoiding this device, we propose an accelerated randomized mirror descent method for solving this problem without the strong convexity assumption.
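For context, here is a minimal (non-accelerated, deterministic) mirror descent sketch with the entropy mirror map on the probability simplex, i.e. the exponentiated-gradient update that accelerated randomized variants build on; it is not the paper's algorithm.

```python
import numpy as np

def mirror_descent_simplex(grad, x0, step_size, n_iters):
    """Entropy-mirror-map mirror descent on the simplex: x <- x * exp(-eta * grad), renormalized."""
    x = x0.copy()
    for _ in range(n_iters):
        x = x * np.exp(-step_size * grad(x))
        x /= x.sum()
    return x

# Toy problem: minimize f(x) = <c, x> over the simplex; the optimum puts all mass on argmin c.
c = np.array([0.9, 0.1, 0.5, 0.7])
grad = lambda x: c
x0 = np.full(4, 0.25)
print(mirror_descent_simplex(grad, x0, step_size=0.5, n_iters=200))  # mass concentrates on index 1
```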