Search Results for author: Tan Nguyen

Found 25 papers, 11 papers with code

Neural Collapse for Cross-entropy Class-Imbalanced Learning with Unconstrained ReLU Feature Model

no code implementations • 4 Jan 2024 • Hien Dang, Tho Tran, Tan Nguyen, Nhat Ho

However, when the training dataset is class-imbalanced, some Neural Collapse (NC) properties no longer hold.
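
For context, the Neural Collapse (NC) phenomenon this line of work studies is usually summarized by a few limiting properties of the last-layer features and classifier on balanced data; a sketch of the first two in standard notation (the class means and global mean below are standard background, not quoted from the paper):

```latex
% NC1: within-class variability collapse -- last-layer features converge to their class mean
\mathbf{h}_{c,i} \;\to\; \boldsymbol{\mu}_c \quad \text{for every sample } i \text{ of class } c,
% NC2: the centered, normalized class means form a simplex equiangular tight frame (ETF)
\big\langle \tilde{\boldsymbol{\mu}}_c, \tilde{\boldsymbol{\mu}}_{c'} \big\rangle \;\to\; -\tfrac{1}{C-1} \quad (c \neq c'),
\qquad \tilde{\boldsymbol{\mu}}_c = \frac{\boldsymbol{\mu}_c - \boldsymbol{\mu}_G}{\lVert \boldsymbol{\mu}_c - \boldsymbol{\mu}_G \rVert}.
```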

Unveiling Comparative Sentiments in Vietnamese Product Reviews: A Sequential Classification Framework

1 code implementation • 2 Jan 2024 • Ha Le, Bao Tran, Phuong Le, Tan Nguyen, Dac Nguyen, Ngoan Pham, Dang Huynh

Comparative opinion mining is a specialized field of sentiment analysis that aims to identify and extract sentiments expressed comparatively.

Opinion Mining • Sentence +1

Beyond Vanilla Variational Autoencoders: Detecting Posterior Collapse in Conditional and Hierarchical Variational Autoencoders

no code implementations • 8 Jun 2023 • Hien Dang, Tho Tran, Tan Nguyen, Nhat Ho

Specifically, via a non-trivial theoretical analysis of the linear conditional VAE and of the hierarchical VAE with two levels of latent variables, we prove that the causes of posterior collapse in these models include the correlation between the input and output of the conditional VAE and the effect of the learnable encoder variance in the hierarchical VAE.
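
As background, posterior collapse is commonly formalized as the approximate posterior of (some of) the latents degenerating to the prior, so the KL term of the ELBO vanishes; a minimal statement in standard VAE notation (assumed background, not quoted from the paper):

```latex
% ELBO of a VAE with encoder q_\phi and decoder p_\theta
\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\!\big[\log p_\theta(x \mid z)\big] - \mathrm{KL}\!\big(q_\phi(z \mid x) \,\|\, p(z)\big),
% posterior collapse: the approximate posterior no longer depends on x
q_\phi(z \mid x) \approx p(z) \;\; \text{for (almost) all } x \quad \Longleftrightarrow \quad \mathrm{KL}\!\big(q_\phi(z \mid x) \,\|\, p(z)\big) \approx 0 .
```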

Neural Collapse in Deep Linear Networks: From Balanced to Imbalanced Data

2 code implementations • 1 Jan 2023 • Hien Dang, Tho Tran, Stanley Osher, Hung Tran-The, Nhat Ho, Tan Nguyen

Modern deep neural networks have achieved impressive performance on tasks from image classification to natural language processing.

Image Classification

Revisiting Over-smoothing and Over-squashing Using Ollivier-Ricci Curvature

1 code implementation • 28 Nov 2022 • Khang Nguyen, Hieu Nong, Vinh Nguyen, Nhat Ho, Stanley Osher, Tan Nguyen

Graph Neural Networks (GNNs) have been shown to be inherently susceptible to the problems of over-smoothing and over-squashing.
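
For reference, the Ollivier-Ricci curvature of an edge compares the 1-Wasserstein distance between neighborhood measures to the graph distance; a standard definition, with a uniform-neighbor measure as one common (illustrative) choice:

```latex
\kappa(x, y) = 1 - \frac{W_1\!\big(m_x, m_y\big)}{d(x, y)},
\qquad m_x(u) = \frac{1}{\deg(x)} \;\; \text{for each neighbor } u \sim x .
```

In this literature, strongly negative edge curvature is typically associated with over-squashing bottlenecks, while uniformly positive curvature is linked to over-smoothing.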

Improving Generative Flow Networks with Path Regularization

no code implementations • 29 Sep 2022 • Anh Do, Duy Dinh, Tan Nguyen, Khuong Nguyen, Stanley Osher, Nhat Ho

Generative Flow Networks (GFlowNets) are recently proposed models for learning stochastic policies that generate compositional objects through sequences of actions, with probability proportional to a given reward function.

Active Learning
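
For orientation, the defining property of a GFlowNet and one widely used training criterion (trajectory balance) can be written as follows; this is generic GFlowNet background, not the path-regularized objective proposed in the paper:

```latex
% goal: sample terminal objects with probability proportional to the reward
P_T(x) \;\propto\; R(x),
% trajectory balance, for any complete trajectory \tau = (s_0 \to s_1 \to \dots \to s_n = x)
Z \prod_{t=0}^{n-1} P_F(s_{t+1} \mid s_t) \;=\; R(x) \prod_{t=0}^{n-1} P_B(s_t \mid s_{t+1}).
```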

Hierarchical Sliced Wasserstein Distance

1 code implementation • 27 Sep 2022 • Khai Nguyen, Tongzheng Ren, Huy Nguyen, Litu Rout, Tan Nguyen, Nhat Ho

We explain the usage of these projections by introducing the Hierarchical Radon Transform (HRT), which is constructed by applying Radon Transform variants recursively.
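
As a reminder of the object being generalized, the vanilla sliced Wasserstein distance averages one-dimensional Wasserstein distances over Radon projections; the HRT replaces the single projection step with a recursive one. The standard formula (background, not the hierarchical variant itself):

```latex
\mathrm{SW}_p(\mu, \nu) = \left( \int_{\mathbb{S}^{d-1}} W_p^p\!\big(\theta_{\#}\mu,\; \theta_{\#}\nu\big) \, \mathrm{d}\sigma(\theta) \right)^{1/p}
% \theta_{\#}\mu is the pushforward of \mu onto direction \theta; \sigma is the uniform measure on the sphere
```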

Momentum Transformer: Closing the Performance Gap Between Self-attention and Its Linearization

no code implementations • 1 Aug 2022 • Tan Nguyen, Richard G. Baraniuk, Robert M. Kirby, Stanley J. Osher, Bao Wang

Transformers have achieved remarkable success in sequence modeling and beyond but suffer from quadratic computational and memory complexities with respect to the length of the input sequence.

Image Generation • Machine Translation
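
To make the complexity claim concrete, below is a minimal NumPy sketch of standard softmax attention versus a kernelized (linear) attention, the family of linearizations whose performance gap momentum transformers aim to close; the feature map `phi` is an illustrative assumption, not the paper's construction.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard scaled dot-product attention: O(n^2) in sequence length n."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])            # (n, n) pairwise scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V                                 # (n, d_v)

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1.0):
    """Kernelized attention: associativity lets us form phi(K)^T V once,
    so the cost is linear in n for a fixed feature dimension.
    The positive feature map `phi` (ReLU + 1 here) is an illustrative choice."""
    Qf, Kf = phi(Q), phi(K)                            # (n, d)
    KV = Kf.T @ V                                      # (d, d_v) summary of keys and values
    normalizer = Qf @ Kf.sum(axis=0)                   # (n,) row normalizers
    return (Qf @ KV) / normalizer[:, None]             # (n, d_v)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 6, 4
    Q, K, V = rng.normal(size=(3, n, d))
    print(softmax_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)
```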

Transformer with Fourier Integral Attentions

no code implementations • 1 Jun 2022 • Tan Nguyen, Minh Pham, Tam Nguyen, Khai Nguyen, Stanley J. Osher, Nhat Ho

Multi-head attention underpins the recent success of transformers, the state-of-the-art models that have achieved remarkable results in sequence modeling and beyond.

Image Classification • Language Modelling +1
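
For completeness, the multi-head attention being generalized here is the standard scaled dot-product form (textbook background, not the Fourier integral formulation of the paper):

```latex
\mathrm{Attn}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V,
\qquad
\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}\big(\mathrm{head}_1, \dots, \mathrm{head}_h\big) W^{O}.
```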

3D-UCaps: 3D Capsules Unet for Volumetric Image Segmentation

2 code implementations • 16 Mar 2022 • Tan Nguyen, Binh-Son Hua, Ngan Le

Medical image segmentation has so far achieved promising results with Convolutional Neural Networks (CNNs).

Hippocampus • Image Segmentation +3

Point-Unet: A Context-aware Point-based Neural Network for Volumetric Segmentation

1 code implementation • 16 Mar 2022 • Ngoc-Vuong Ho, Tan Nguyen, Gia-Han Diep, Ngan Le, Binh-Son Hua

In this paper, we propose Point-Unet, a novel method that incorporates the efficiency of deep learning with 3D point clouds into volumetric segmentation.

Image Segmentation • Medical Image Segmentation +2

How Does Momentum Benefit Deep Neural Networks Architecture Design? A Few Case Studies

no code implementations • 13 Oct 2021 • Bao Wang, Hedi Xia, Tan Nguyen, Stanley Osher

As case studies, we consider how momentum can improve the architecture design for recurrent neural networks (RNNs), neural ordinary differential equations (ODEs), and transformers.

Computational Efficiency
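
The momentum referred to here is the classical heavy-ball update from optimization, which the paper carries over into architecture design; the standard update (background only) reads:

```latex
v_{k+1} = \beta\, v_k - \gamma\, \nabla f(x_k), \qquad x_{k+1} = x_k + v_{k+1}, \qquad 0 \le \beta < 1 .
```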

SP-GPT2: Semantics Improvement in Vietnamese Poetry Generation

1 code implementation • 10 Oct 2021 • Tuan Nguyen, Hanh Pham, Truong Bui, Tan Nguyen, Duc Luong, Phong Nguyen

Both automatic and human evaluations demonstrate that our approach can generate poems with better cohesion without losing quality due to the additional loss.

Text Generation

Neural Networks with Recurrent Generative Feedback

1 code implementation • NeurIPS 2020 • Yujia Huang, James Gornet, Sihui Dai, Zhiding Yu, Tan Nguyen, Doris Y. Tsao, Anima Anandkumar

This mechanism can be interpreted as a form of self-consistency between the maximum a posteriori (MAP) estimation of an internal generative model and the external environment.

Adversarial Robustness

Sample Efficient Graph-Based Optimization with Noisy Observations

1 code implementation • 4 Jun 2020 • Tan Nguyen, Ali Shameli, Yasin Abbasi-Yadkori, Anup Rao, Branislav Kveton

We study the sample complexity of optimizing "hill-climbing friendly" functions defined on a graph under noisy observations.

Re-Ranking
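
To illustrate the setting (not the algorithm analyzed in the paper), here is a hypothetical sketch of greedy hill climbing on a graph when each function evaluation is noisy, using repeated queries and empirical means; `neighbors`, `noisy_eval`, and `repeats` are assumed names and parameters.

```python
import numpy as np

def noisy_hill_climb(neighbors, noisy_eval, start, repeats=20, max_steps=100):
    """Greedy local search on a graph when the objective is only observed with noise.
    Each candidate vertex is queried `repeats` times and the empirical mean is used.
    Hypothetical illustration of the problem setting, not the paper's algorithm."""
    def estimate(v):
        return float(np.mean([noisy_eval(v) for _ in range(repeats)]))

    current, current_val = start, estimate(start)
    for _ in range(max_steps):
        best_nbr, best_val = None, current_val
        for u in neighbors(current):
            val = estimate(u)
            if val > best_val:
                best_nbr, best_val = u, val
        if best_nbr is None:            # no neighbor looks better: local maximum reached
            break
        current, current_val = best_nbr, best_val
    return current

if __name__ == "__main__":
    # Toy example: path graph 0-1-...-19, true value -(v - 13)^2, Gaussian observation noise.
    rng = np.random.default_rng(1)
    neighbors = lambda v: [u for u in (v - 1, v + 1) if 0 <= u < 20]
    noisy_eval = lambda v: -(v - 13) ** 2 + rng.normal(scale=1.0)
    print(noisy_hill_climb(neighbors, noisy_eval, start=0))   # should end near vertex 13
```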

Greedy Convex Ensemble

1 code implementation • 9 Oct 2019 • Tan Nguyen, Nan Ye, Peter L. Bartlett

Theoretically, we first consider whether we can use linear, instead of convex, combinations, and obtain generalization results similar to existing ones for learning from a convex hull.
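
To pin down the distinction in the snippet: learning from the convex hull of a base class restricts the ensemble weights to a simplex, while a linear combination drops that constraint (standard definitions, not results from the paper):

```latex
\mathrm{conv}(\mathcal{H}) = \Big\{ \textstyle\sum_{i=1}^{m} \alpha_i h_i \;:\; h_i \in \mathcal{H},\ \alpha_i \ge 0,\ \sum_i \alpha_i = 1 \Big\},
\qquad
\mathrm{lin}(\mathcal{H}) = \Big\{ \textstyle\sum_{i=1}^{m} \beta_i h_i \;:\; h_i \in \mathcal{H},\ \beta_i \in \mathbb{R} \Big\}.
```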

Dual Dynamic Inference: Enabling More Efficient, Adaptive and Controllable Deep Inference

no code implementations • 10 Jul 2019 • Yue Wang, Jianghao Shen, Ting-Kuei Hu, Pengfei Xu, Tan Nguyen, Richard Baraniuk, Zhangyang Wang, Yingyan Lin

State-of-the-art convolutional neural networks (CNNs) yield record-breaking predictive performance, yet at the cost of energy-hungry inference, which prohibits their wide deployment in resource-constrained Internet of Things (IoT) applications.

Neural Rendering Model: Joint Generation and Prediction for Semi-Supervised Learning

no code implementations • ICLR 2019 • Nhat Ho, Tan Nguyen, Ankit B. Patel, Anima Anandkumar, Michael I. Jordan, Richard G. Baraniuk

The conjugate prior yields a new regularizer for training CNNs, based on the paths rendered in the generative model: the Rendering Path Normalization (RPN).

Neural Rendering

A Bayesian Perspective of Convolutional Neural Networks through a Deconvolutional Generative Model

no code implementations • 1 Nov 2018 • Tan Nguyen, Nhat Ho, Ankit Patel, Anima Anandkumar, Michael I. Jordan, Richard G. Baraniuk

This conjugate prior yields a new regularizer for training CNNs, based on the paths rendered in the generative model: the Rendering Path Normalization (RPN).

EnergyNet: Energy-Efficient Dynamic Inference

no code implementations • NIPS Workshop CDNNRIA 2018 • Yue Wang, Tan Nguyen, Yang Zhao, Zhangyang Wang, Yingyan Lin, Richard Baraniuk

The prohibitive energy cost of running high-performance Convolutional Neural Networks (CNNs) has been limiting their deployment on resource-constrained platforms including mobile and wearable devices.

A Probabilistic Framework for Deep Learning

no code implementations • NeurIPS 2016 • Ankit B. Patel, Tan Nguyen, Richard G. Baraniuk

We develop a probabilistic framework for deep learning based on the Deep Rendering Mixture Model (DRMM), a new generative probabilistic model that explicitly captures variations in data due to latent task nuisance variables.

General Classification

Semi-Supervised Learning with the Deep Rendering Mixture Model

no code implementations • 6 Dec 2016 • Tan Nguyen, Wanjia Liu, Ethan Perez, Richard G. Baraniuk, Ankit B. Patel

Semi-supervised learning algorithms reduce the high cost of acquiring labeled training data by using both labeled and unlabeled data during learning.

Variational Inference

A Probabilistic Theory of Deep Learning

1 code implementation • 2 Apr 2015 • Ankit B. Patel, Tan Nguyen, Richard G. Baraniuk

A grand challenge in machine learning is the development of computational algorithms that match or outperform humans in perceptual inference tasks that are complicated by nuisance variation.

Object • Object Recognition +2
