no code implementations • 4 Jan 2024 • Hien Dang, Tho Tran, Tan Nguyen, Nhat Ho
However, when the training dataset is class-imbalanced, some NC properties no longer hold.
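One NC property that imbalance disrupts is within-class variability collapse; the NumPy sketch below computes an illustrative within-/between-class scatter ratio from last-layer features (the metric name `nc1_ratio` and all shapes are assumptions for illustration, not the paper's exact formulation):

```python
import numpy as np

def nc1_ratio(features, labels):
    """Illustrative NC1-style metric: within-class scatter relative to
    between-class scatter of last-layer features. Values near zero
    indicate within-class variability collapse (one NC property)."""
    n, d = features.shape
    mu_g = features.mean(axis=0)                 # global feature mean
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(labels):
        Fc = features[labels == c]
        mu_c = Fc.mean(axis=0)
        Sw += (Fc - mu_c).T @ (Fc - mu_c) / n
        Sb += len(Fc) / n * np.outer(mu_c - mu_g, mu_c - mu_g)
    return np.trace(Sw) / np.trace(Sb)

# Toy check: tightly clustered features per class give a small ratio.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1, 2], 50)
means = 5.0 * rng.standard_normal((3, 8))
features = means[labels] + 0.01 * rng.standard_normal((150, 8))
print(nc1_ratio(features, labels))  # close to zero
```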
1 code implementation • 2 Jan 2024 • Ha Le, Bao Tran, Phuong Le, Tan Nguyen, Dac Nguyen, Ngoan Pham, Dang Huynh
Comparative opinion mining is a specialized field of sentiment analysis that aims to identify and extract sentiments expressed comparatively.
no code implementations • 8 Jun 2023 • Hien Dang, Tho Tran, Tan Nguyen, Nhat Ho
Specifically, via a non-trivial theoretical analysis of the linear conditional VAE and of a hierarchical VAE with two levels of latent variables, we prove that the causes of posterior collapse in these models include the correlation between the input and output of the conditional VAE and the effect of the learnable encoder variance in the hierarchical VAE.
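Posterior collapse is commonly diagnosed through the per-dimension KL divergence between the approximate posterior and the prior; the sketch below illustrates that diagnostic for a diagonal Gaussian encoder (the numbers are invented for illustration and are not from the paper):

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    """Per-dimension KL( N(mu, exp(logvar)) || N(0, 1) ). Posterior
    collapse shows up as dimensions whose KL stays near zero: there the
    encoder ignores its input and simply matches the prior."""
    return 0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar)

# Illustrative encoder outputs: two informative dims, two collapsed dims.
mu = np.array([0.9, -1.2, 0.0, 0.0])
logvar = np.array([-2.0, -1.5, 0.0, 0.0])
print(kl_to_standard_normal(mu, logvar).round(3))  # [0.973 1.082 0.    0.   ]
```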
2 code implementations • 1 Jan 2023 • Hien Dang, Tho Tran, Stanley Osher, Hung Tran-The, Nhat Ho, Tan Nguyen
Modern deep neural networks have achieved impressive performance on tasks from image classification to natural language processing.
1 code implementation • 28 Nov 2022 • Khang Nguyen, Hieu Nong, Vinh Nguyen, Nhat Ho, Stanley Osher, Tan Nguyen
Graph Neural Networks (GNNs) have been demonstrated to be inherently susceptible to the problems of over-smoothing and over-squashing.
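A quick way to see over-smoothing is to apply normalized-adjacency propagation repeatedly and watch node features become indistinguishable; below is a minimal NumPy illustration on a toy 4-node cycle graph (the setup is an assumption chosen for illustration, not the paper's method):

```python
import numpy as np

def propagate(X, A_hat, steps):
    """Repeatedly apply X <- A_hat @ X. With a symmetrically normalized
    adjacency, node features converge toward a common vector: over-smoothing."""
    for _ in range(steps):
        X = A_hat @ X
    return X

# Toy 4-node cycle graph with self-loops, symmetric normalization.
A = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]], dtype=float)
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))
X = np.random.default_rng(0).standard_normal((4, 3))
for k in (1, 10, 100):
    spread = np.ptp(propagate(X, A_hat, k), axis=0).max()
    print(k, round(float(spread), 6))  # spread across nodes shrinks toward 0
```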
no code implementations • 29 Sep 2022 • Anh Do, Duy Dinh, Tan Nguyen, Khuong Nguyen, Stanley Osher, Nhat Ho
Generative Flow Networks (GFlowNets) are recently proposed models for learning stochastic policies that generate compositional objects through sequences of actions, with probability proportional to a given reward function.
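The training target is a distribution over terminal objects proportional to the reward; the toy sketch below enumerates a tiny object space and samples from that target directly rather than learning a policy (the reward and object space are invented for illustration):

```python
import numpy as np

# Toy compositional space: binary strings of length 3, built bit by bit.
objects = [f"{i:03b}" for i in range(8)]

def reward(x):
    return 1.0 + x.count("1")   # invented reward: more 1-bits is better

R = np.array([reward(x) for x in objects])
target = R / R.sum()            # GFlowNet training aims at P(x) proportional to R(x)

rng = np.random.default_rng(0)
samples = rng.choice(objects, size=10_000, p=target)
freq = {x: round(float((samples == x).mean()), 3) for x in objects}
print(dict(zip(objects, target.round(3))))  # target probabilities
print(freq)                                 # empirical frequencies match
```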
1 code implementation • 27 Sep 2022 • Khai Nguyen, Tongzheng Ren, Huy Nguyen, Litu Rout, Tan Nguyen, Nhat Ho
We explain the usage of these projections by introducing the Hierarchical Radon Transform (HRT), which is constructed by applying Radon Transform variants recursively.
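For context, the single-level base case of such Radon-type projections is the familiar sliced Wasserstein distance; the sketch below implements only that base case, whereas HRT composes projections recursively on top of it (the function name and defaults are illustrative):

```python
import numpy as np

def sliced_w2(X, Y, n_proj=256, seed=0):
    """Single-level sliced Wasserstein-2 between two point clouds of
    equal size: project onto random directions (the Radon-transform
    view) and average the closed-form 1D Wasserstein-2 distances."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal((n_proj, X.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    xs = np.sort(X @ theta.T, axis=0)   # sorted 1D projections per direction
    ys = np.sort(Y @ theta.T, axis=0)
    return np.sqrt(((xs - ys) ** 2).mean())

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 10))
Y = rng.standard_normal((500, 10)) + 1.0   # shifted distribution
print(sliced_w2(X, Y))                     # > 0; shrinks as Y matches X
```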
no code implementations • 1 Aug 2022 • Tan Nguyen, Richard G. Baraniuk, Robert M. Kirby, Stanley J. Osher, Bao Wang
Transformers have achieved remarkable success in sequence modeling and beyond but suffer from quadratic computational and memory complexities with respect to the length of the input sequence.
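The quadratic cost comes from the n-by-n attention score matrix; a minimal NumPy sketch of standard softmax attention makes the bottleneck explicit (names and sizes are illustrative):

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard scaled dot-product attention for a length-n sequence.
    The score matrix S is (n, n), so time and memory grow quadratically
    in the sequence length n."""
    d = Q.shape[-1]
    S = Q @ K.T / np.sqrt(d)                    # (n, n): the quadratic bottleneck
    A = np.exp(S - S.max(axis=-1, keepdims=True))
    A /= A.sum(axis=-1, keepdims=True)          # row-wise softmax
    return A @ V                                # (n, d)

n, d = 1024, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
print(softmax_attention(Q, K, V).shape)  # (1024, 64); the (1024, 1024)
                                         # score matrix dominates the cost
```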
no code implementations • 1 Jun 2022 • Tan Nguyen, Minh Pham, Tam Nguyen, Khai Nguyen, Stanley J. Osher, Nhat Ho
Multi-head attention underpins the recent success of transformers, the state-of-the-art models that have achieved remarkable results in sequence modeling and beyond.
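For reference, a hedged sketch of vanilla multi-head attention: the model dimension is split across heads, each head attends in its own subspace, and the outputs are concatenated and projected back (weight shapes and names are assumptions for illustration):

```python
import numpy as np

def multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads):
    """Project X to queries/keys/values, split the model dimension into
    n_heads subspaces, attend in each, then concatenate and project."""
    n, d = X.shape
    dh = d // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(n_heads):
        q, k, v = (M[:, h * dh:(h + 1) * dh] for M in (Q, K, V))
        S = q @ k.T / np.sqrt(dh)
        A = np.exp(S - S.max(axis=-1, keepdims=True))
        A /= A.sum(axis=-1, keepdims=True)      # per-head softmax attention
        heads.append(A @ v)
    return np.concatenate(heads, axis=-1) @ Wo  # (n, d)

n, d, n_heads = 16, 32, 4
rng = np.random.default_rng(0)
X, Wq, Wk, Wv, Wo = (rng.standard_normal(s) for s in [(n, d)] + [(d, d)] * 4)
print(multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads).shape)  # (16, 32)
```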
2 code implementations • 16 Mar 2022 • Tan Nguyen, Binh-Son Hua, Ngan Le
Medical image segmentation has so far achieved promising results with Convolutional Neural Networks (CNNs).
1 code implementation • 16 Mar 2022 • Ngoc-Vuong Ho, Tan Nguyen, Gia-Han Diep, Ngan Le, Binh-Son Hua
In this paper, we propose Point-Unet, a novel method that incorporates the efficiency of deep learning with 3D point clouds into volumetric segmentation.
no code implementations • 13 Oct 2021 • Bao Wang, Hedi Xia, Tan Nguyen, Stanley Osher
As case studies, we consider how momentum can improve the architecture design for recurrent neural networks (RNNs), neural ordinary differential equations (ODEs), and transformers.
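As one concrete instance, a heavy-ball-style momentum state can be threaded through a recurrent update; the sketch below is in the spirit of momentum-augmented RNNs rather than the paper's exact architecture (mu, s, and all shapes are illustrative):

```python
import numpy as np

def momentum_rnn_step(h, v, x, W, U, mu=0.9, s=0.6):
    """One recurrent step with a heavy-ball-style momentum state v that
    accumulates the input drive before it enters the cell; mu is the
    momentum coefficient and s a step size (both illustrative)."""
    v = mu * v + s * (U @ x)
    h = np.tanh(W @ h + v)
    return h, v

# Roll out a short random sequence (weights and sizes invented).
rng = np.random.default_rng(0)
dh, dx, T = 8, 4, 5
W = 0.3 * rng.standard_normal((dh, dh))
U = 0.3 * rng.standard_normal((dh, dx))
h, v = np.zeros(dh), np.zeros(dh)
for _ in range(T):
    h, v = momentum_rnn_step(h, v, rng.standard_normal(dx), W, U)
print(h.round(3))
```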
2 code implementations • 10 Oct 2021 • Tuan Nguyen, Hanh Pham, Truong Bui, Tan Nguyen, Duc Luong, Phong Nguyen
Both automatic and human evaluations demonstrated that our approach generates poems with better cohesion without sacrificing quality, despite the additional loss term.
1 code implementation • NeurIPS 2020 • Yujia Huang, James Gornet, Sihui Dai, Zhiding Yu, Tan Nguyen, Doris Y. Tsao, Anima Anandkumar
This mechanism can be interpreted as a form of self-consistency between the maximum a posteriori (MAP) estimation of an internal generative model and the external environment.
1 code implementation • 4 Jun 2020 • Tan Nguyen, Ali Shameli, Yasin Abbasi-Yadkori, Anup Rao, Branislav Kveton
We study the sample complexity of optimizing "hill-climbing friendly" functions defined on a graph under noisy observations.
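A minimal version of the setting: greedy hill climbing on a graph where each query of the objective is corrupted by noise, so each candidate is evaluated by averaging repeated samples (the graph, noise level, and stopping rule below are assumptions for illustration; the paper analyzes sample complexity, not this exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
true_f = lambda i: -(i - 7) ** 2                      # unimodal on path graph 0..14
noisy_f = lambda i: true_f(i) + rng.normal(0.0, 2.0)  # noisy oracle
neighbors = lambda i: [j for j in (i - 1, i + 1) if 0 <= j <= 14]

def noisy_hill_climb(start, n_eval=30, max_steps=50):
    """Average n_eval noisy queries per node, then greedily move to the
    best-looking neighbor; stop when no neighbor appears better."""
    est = lambda v: sum(noisy_f(v) for _ in range(n_eval)) / n_eval
    cur = start
    for _ in range(max_steps):
        best = max(neighbors(cur), key=est)
        if est(best) <= est(cur):
            return cur
        cur = best
    return cur

print(noisy_hill_climb(0))  # typically ends at or near the maximizer, node 7
```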
1 code implementation • 9 Oct 2019 • Tan Nguyen, Nan Ye, Peter L. Bartlett
Theoretically, we first consider whether linear combinations can be used instead of convex ones while obtaining generalization results similar to existing results for learning from a convex hull.
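A toy contrast between the two hypothesis classes: fitting a target with an unconstrained linear combination of fixed base predictors versus a convex combination with simplex-constrained weights (the exponentiated-gradient fit and all data below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((200, 5))   # columns are fixed base predictors
y = H @ np.array([2.0, -1.0, 0.5, 0.0, 0.0]) + 0.1 * rng.standard_normal(200)

# Linear combination: unconstrained least squares over the span of H.
w_lin, *_ = np.linalg.lstsq(H, y, rcond=None)

# Convex combination: weights on the probability simplex, fitted by
# exponentiated gradient (multiplicative updates keep w >= 0, sum 1).
w = np.ones(5) / 5
for _ in range(2000):
    grad = H.T @ (H @ w - y) / len(y)
    w *= np.exp(-0.5 * grad)
    w /= w.sum()

print(np.mean((H @ w_lin - y) ** 2))  # linear fit: near the noise floor
print(np.mean((H @ w - y) ** 2))      # convex fit: worse, since the true
                                      # weights lie outside the simplex
```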
no code implementations • NeurIPS 2019 Workshop Neuro AI 2019 • Yujia Huang, Sihui Dai, Tan Nguyen, Pinglei Bao, Doris Y. Tsao, Richard G. Baraniuk, Anima Anandkumar
Primates have a remarkable ability to correctly classify images even in the presence of significant noise and degradation.
no code implementations • 10 Jul 2019 • Yujia Huang, Sihui Dai, Tan Nguyen, Richard G. Baraniuk, Anima Anandkumar
Our results show that when trained on CIFAR-10, lower likelihood (of latent variables) is assigned to SVHN images.
no code implementations • 10 Jul 2019 • Yue Wang, Jianghao Shen, Ting-Kuei Hu, Pengfei Xu, Tan Nguyen, Richard Baraniuk, Zhangyang Wang, Yingyan Lin
State-of-the-art convolutional neural networks (CNNs) yield record-breaking predictive performance, but at the cost of energy-intensive inference that prohibits their wide deployment in resource-constrained Internet of Things (IoT) applications.
no code implementations • ICLR 2019 • Nhat Ho, Tan Nguyen, Ankit B. Patel, Anima Anandkumar, Michael. I. Jordan, Richard G. Baraniuk
The conjugate prior yields a new regularizer for training CNNs, based on the paths rendered in the generative model: the Rendering Path Normalization (RPN).
no code implementations • 1 Nov 2018 • Tan Nguyen, Nhat Ho, Ankit Patel, Anima Anandkumar, Michael. I. Jordan, Richard G. Baraniuk
This conjugate prior yields a new regularizer for training CNNs, based on the paths rendered in the generative model: the Rendering Path Normalization (RPN).
no code implementations • NIPS Workshop CDNNRIA 2018 • Yue Wang, Tan Nguyen, Yang Zhao, Zhangyang Wang, Yingyan Lin, Richard Baraniuk
The prohibitive energy cost of running high-performance Convolutional Neural Networks (CNNs) has been limiting their deployment on resource-constrained platforms including mobile and wearable devices.
no code implementations • NeurIPS 2016 • Ankit B. Patel, Tan Nguyen, Richard G. Baraniuk
We develop a probabilistic framework for deep learning based on the Deep Rendering Mixture Model (DRMM), a new generative probabilistic model that explicitly captures variations in data due to latent task nuisance variables.
no code implementations • 6 Dec 2016 • Tan Nguyen, Wanjia Liu, Ethan Perez, Richard G. Baraniuk, Ankit B. Patel
Semi-supervised learning algorithms reduce the high cost of acquiring labeled training data by using both labeled and unlabeled data during learning.
1 code implementation • 2 Apr 2015 • Ankit B. Patel, Tan Nguyen, Richard G. Baraniuk
A grand challenge in machine learning is the development of computational algorithms that match or outperform humans in perceptual inference tasks that are complicated by nuisance variation.