Search Results for author: Vu Dinh

Found 17 papers, 5 papers with code

Simple Transferability Estimation for Regression Tasks

1 code implementation • 1 Dec 2023 • Cuong N. Nguyen, Phong Tran, Lam Si Tung Ho, Vu Dinh, Anh T. Tran, Tal Hassner, Cuong V. Nguyen

We consider transferability estimation, the problem of estimating how well deep learning models transfer from a source to a target task.

regression • Transfer Learning
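
The estimators studied are deliberately simple regression fits. Below is a minimal sketch of a linear-MSE-style transferability score, assuming the source model's frozen features and the target labels are available as arrays; the function name and setup are illustrative rather than taken from the paper's code.

```python
import numpy as np

def linear_mse_score(source_features, target_labels):
    """Transferability proxy: fit ordinary least squares from the frozen
    source model's features to the target labels and return the negative
    training MSE (higher score = better expected transfer)."""
    X = np.hstack([source_features, np.ones((len(source_features), 1))])  # bias term
    w, *_ = np.linalg.lstsq(X, target_labels, rcond=None)
    mse = np.mean((X @ w - target_labels) ** 2)
    return -mse
```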

Generalization Bounds for Deep Transfer Learning Using Majority Predictor Accuracy

no code implementations • 13 Sep 2022 • Cuong N. Nguyen, Lam Si Tung Ho, Vu Dinh, Tal Hassner, Cuong V. Nguyen

We analyze new generalization bounds for deep learning models trained by transfer learning from a source to a target task.

Generalization Bounds • Transfer Learning
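
Majority predictor accuracy is cheap to compute. A minimal sketch, assuming discrete labels on both tasks, with helper names that are mine rather than the paper's:

```python
from collections import Counter, defaultdict

def majority_predictor_accuracy(source_preds, target_labels):
    """Accuracy of the 'majority predictor': for each source-label value,
    predict the target label that most often co-occurs with it."""
    groups = defaultdict(list)
    for s, t in zip(source_preds, target_labels):
        groups[s].append(t)
    majority = {s: Counter(ts).most_common(1)[0][0] for s, ts in groups.items()}
    hits = sum(majority[s] == t for s, t in zip(source_preds, target_labels))
    return hits / len(target_labels)
```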

When can we reconstruct the ancestral state? Beyond Brownian motion

1 code implementation • 26 Jul 2022 • Nhat L. Vu, Thanh P. Nguyen, Binh T. Nguyen, Vu Dinh, Lam Si Tung Ho

Reconstructing the ancestral state of a group of species helps answer many important questions in evolutionary biology.
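
For intuition on the Brownian motion case, here is a toy sketch of the root-state estimate on a star phylogeny, where each tip equals the root state plus independent Gaussian noise with variance given by the branch length; this illustrates the setting only, not the paper's general framework.

```python
import numpy as np

def bm_root_estimate(tip_values, branch_lengths):
    """Root-state estimate on a star phylogeny under Brownian motion:
    tip_i = root + N(0, branch_lengths[i]), so the best linear unbiased
    estimate of the root is the inverse-variance-weighted mean of the tips."""
    tips = np.asarray(tip_values, dtype=float)
    w = 1.0 / np.asarray(branch_lengths, dtype=float)
    return np.sum(w * tips) / np.sum(w)
```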

Posterior concentration and fast convergence rates for generalized Bayesian learning

no code implementations • 19 Nov 2021 • Lam Si Tung Ho, Binh T. Nguyen, Vu Dinh, Duy Nguyen

We prove that under the multi-scale Bernstein's condition, the generalized posterior distribution concentrates around the set of optimal hypotheses and the generalized Bayes estimator can achieve a fast learning rate.

regression
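
For reference, the generalized (Gibbs) posterior is usually written as below, with λ > 0 a learning rate and ℓ a loss function; the notation is the standard one and is assumed here rather than copied from the paper.

```latex
% Generalized (Gibbs) posterior: the prior \pi reweighted by the
% exponentiated empirical loss; \lambda = 1 with a negative log-likelihood
% loss recovers the ordinary Bayesian posterior.
\pi_{n,\lambda}(h \mid Z_{1:n}) \;\propto\;
  \exp\!\Big( -\lambda \sum_{i=1}^{n} \ell(h, Z_i) \Big)\, \pi(h)
```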

When can we reconstruct the ancestral state? A unified theory

no code implementations • 14 Nov 2021 • Lam Si Tung Ho, Vu Dinh

Notably, we show that for a sequence of nested trees with bounded heights, the necessary and sufficient conditions for the existence of a consistent ancestral state reconstruction method under discrete models, the Brownian motion model, and the threshold model are equivalent.


Searching for Minimal Optimal Neural Networks

no code implementations • 27 Sep 2021 • Lam Si Tung Ho, Vu Dinh

Large neural network models have high predictive power but may suffer from overfitting if the training set is not large enough.

OASIS: An Active Framework for Set Inversion

no code implementations • 31 May 2021 • Binh T. Nguyen, Duy M. Nguyen, Lam Si Tung Ho, Vu Dinh

In this work, we introduce a novel method for solving the set inversion problem by formulating it as a binary classification problem.

Active Learning • Binary Classification
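
The reduction to binary classification can be sketched in a few lines: sample the search box, label each point by whether its image lies in the target set, and fit an off-the-shelf classifier. The sketch below uses passive uniform sampling for brevity, whereas OASIS chooses its queries actively; the names and the scikit-learn classifier are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def set_inversion_classifier(f, lower, upper, bounds, n_samples=2000, seed=0):
    """Set inversion as binary classification: sample the search box,
    label each point by whether f maps it into [lower, upper], and fit a
    classifier whose positive region approximates the preimage."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)              # shape (d, 2)
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_samples, len(bounds)))
    y = np.array([lower <= f(x) <= upper for x in X], dtype=int)
    return SVC(kernel="rbf").fit(X, y)   # .predict ~ preimage indicator
```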

Convergence of maximum likelihood supertree reconstruction

no code implementations • 4 May 2021 • Lam Si Tung Ho, Vu Dinh

Supertree methods are tree reconstruction techniques that combine several smaller gene trees (possibly on different sets of species) to build a larger species tree.

Consistent Feature Selection for Analytic Deep Neural Networks

1 code implementation • NeurIPS 2020 • Vu Dinh, Lam Si Tung Ho

One of the most important steps toward interpretability and explainability of neural network models is feature selection, which aims to identify the subset of relevant features.

feature selection

Consistent feature selection for neural networks via Adaptive Group Lasso

no code implementations • 30 May 2020 • Vu Dinh, Lam Si Tung Ho

In this work, we propose and establish a theoretical guarantee for the use of the adaptive group lasso for selecting important features of neural networks.

feature selection
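
The penalty is easy to state: group the first-layer weights by input feature and scale each group norm by an adaptive weight derived from an initial unpenalized fit. A minimal PyTorch sketch under that reading, with variable names that are assumptions rather than the paper's:

```python
import torch

def adaptive_group_lasso_penalty(first_layer_weight, init_weight, gamma=1.0):
    """Adaptive group lasso over input features: one group per input column
    of the first-layer weight matrix, with each group norm scaled by the
    inverse norm of an initial unpenalized fit, so features that looked
    irrelevant in the initial fit are shrunk more aggressively."""
    group_norms = first_layer_weight.norm(dim=0)           # ||W[:, j]|| per feature j
    adaptive_weights = 1.0 / (init_weight.norm(dim=0) ** gamma + 1e-12)
    return (adaptive_weights * group_norms).sum()
```

The penalty is added to the ordinary training loss; input features whose group norm is driven to zero are declared irrelevant.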

Bayesian Active Learning With Abstention Feedbacks

no code implementations • 4 Jun 2019 • Cuong V. Nguyen, Lam Si Tung Ho, Huan Xu, Vu Dinh, Binh Nguyen

We study pool-based active learning with abstention feedbacks where a labeler can abstain from labeling a queried example with some unknown abstention rate.

Active Learning • General Classification
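
A generic pool-based loop with abstention feedbacks looks roughly as follows. This is scaffolding for the setting, not the paper's Bayesian algorithm; all the callables (fit, select, query_label) are assumed interfaces.

```python
def pool_based_al_with_abstention(pool, query_label, fit, select, budget):
    """Generic pool-based active learning loop with abstention feedbacks:
    query_label(x) returns a label, or None when the labeler abstains;
    abstentions consume budget but add no training example."""
    labeled = []
    for _ in range(budget):
        model = fit(labeled)
        x = select(model, pool)      # e.g., a maximum-uncertainty query
        pool.remove(x)
        y = query_label(x)
        if y is not None:            # None encodes an abstention
            labeled.append((x, y))
    return fit(labeled)
```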

Non-bifurcating phylogenetic tree inference via the adaptive LASSO

1 code implementation • 28 May 2018 • Cheng Zhang, Vu Dinh, Frederick A. Matsen IV

Phylogenetic tree inference using deep DNA sequencing is reshaping our understanding of rapidly evolving systems, such as the within-host battle between viruses and the immune system.
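
The adaptive-lasso idea is to penalize branch lengths so that weakly supported branches shrink exactly to zero, collapsing bifurcations into multifurcations. Schematically, with q the branch lengths and q̃ a consistent initial estimate (notation assumed, not copied from the paper):

```latex
% Penalized phylogenetic likelihood with an adaptive lasso on branch
% lengths q_i; a branch with \hat{q}_i = 0 is collapsed, producing a
% non-bifurcating (multifurcating) tree.
\hat{q} \;=\; \arg\min_{q \ge 0}
  \Big\{ -\log L(q) \;+\; \lambda \sum_{i} \frac{q_i}{\tilde{q}_i^{\,\gamma}} \Big\}
```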

Bayesian Pool-based Active Learning With Abstention Feedbacks

no code implementations • 23 May 2017 • Cuong V. Nguyen, Lam Si Tung Ho, Huan Xu, Vu Dinh, Binh Nguyen

We study pool-based active learning with abstention feedbacks, where a labeler can abstain from labeling a queried example with some unknown abstention rate.

Active Learning • General Classification

Probabilistic Path Hamiltonian Monte Carlo

3 code implementations • ICML 2017 • Vu Dinh, Arman Bilge, Cheng Zhang, Frederick A. Matsen IV

Hamiltonian Monte Carlo (HMC) is an efficient and effective means of sampling posterior distributions on Euclidean space, which has been extended to manifolds with boundary.
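
For context, a single step of standard Euclidean HMC is sketched below (leapfrog integration plus a Metropolis correction); Probabilistic Path HMC extends this scheme across the boundaries of phylogenetic tree space, which this sketch does not attempt.

```python
import numpy as np

def hmc_step(log_post, grad_log_post, q, step=0.1, n_leapfrog=20, rng=None):
    """One step of standard HMC on Euclidean space: sample a Gaussian
    momentum, simulate Hamiltonian dynamics with the leapfrog integrator,
    and accept or reject with a Metropolis correction."""
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(q.shape)
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * step * grad_log_post(q_new)        # initial half step
    for _ in range(n_leapfrog - 1):
        q_new += step * p_new
        p_new += step * grad_log_post(q_new)
    q_new += step * p_new                             # final position step
    p_new += 0.5 * step * grad_log_post(q_new)        # final half step
    log_alpha = (log_post(q_new) - log_post(q)
                 - 0.5 * (p_new @ p_new - p @ p))
    return q_new if np.log(rng.uniform()) < log_alpha else q
```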

Fast learning rates with heavy-tailed losses

no code implementations • NeurIPS 2016 • Vu Dinh, Lam Si Tung Ho, Duy Nguyen, Binh T. Nguyen

We study fast learning rates when the losses are not necessarily bounded and may have a distribution with heavy tails.

Clustering • Quantization
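
The classical Bernstein condition that this line of work generalizes reads as follows, with ℓ_f the loss of hypothesis f and f* an optimal hypothesis; the paper's multi-scale variant extends it to unbounded, heavy-tailed losses, so the exact formulation should be taken from the paper.

```latex
% Classical Bernstein condition with exponent \beta \in (0, 1]: the
% variance of the excess loss is controlled by a power of its mean.
\mathbb{E}\big[(\ell_f - \ell_{f^*})^2\big] \;\le\;
  B \, \big( \mathbb{E}[\ell_f - \ell_{f^*}] \big)^{\beta}
```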

Learning From Non-iid Data: Fast Rates for the One-vs-All Multiclass Plug-in Classifiers

no code implementations • 12 Aug 2014 • Vu Dinh, Lam Si Tung Ho, Nguyen Viet Cuong, Duy Nguyen, Binh T. Nguyen

We prove new fast learning rates for the one-vs-all multiclass plug-in classifiers trained either from exponentially strongly mixing data or from data generated by a converging drifting distribution.
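
A one-vs-all plug-in classifier is simple to state: estimate each conditional class probability with a separate regressor and predict the argmax. A minimal sketch, with the per-class estimators passed in as callables (an assumed interface):

```python
import numpy as np

def one_vs_all_plugin_predict(prob_estimators, X):
    """One-vs-all plug-in classifier: each estimator approximates
    eta_k(x) = P(Y = k | X = x); the prediction is the class whose
    estimated conditional probability is largest."""
    scores = np.column_stack([est(X) for est in prob_estimators])
    return scores.argmax(axis=1)
```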

Generalization and Robustness of Batched Weighted Average Algorithm with V-geometrically Ergodic Markov Data

no code implementations • 12 Jun 2014 • Nguyen Viet Cuong, Lam Si Tung Ho, Vu Dinh

For the generalization of the algorithm, we prove a PAC-style bound on the training sample size for the expected $L_1$-loss to converge to the optimal loss when training data are V-geometrically ergodic Markov chains.

General Classification • regression
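
As a rough sketch of the exponential-weights flavor of a batched weighted average (the paper's batching scheme and loss differ in the details; the names and the L1 batch loss here are assumptions):

```python
import numpy as np

def batched_exponential_weights(hyp_preds, labels, n_batches=5, c=1.0):
    """Exponential-weights aggregation computed over batches: each
    hypothesis is weighted by exp(-c * cumulative batch loss), with the
    batching intended to weaken dependence within a Markov-chain sample."""
    losses = np.zeros(hyp_preds.shape[0])
    for batch in np.array_split(np.arange(len(labels)), n_batches):
        losses += np.abs(hyp_preds[:, batch] - labels[batch]).mean(axis=1)  # L1 loss
    weights = np.exp(-c * losses)
    return weights / weights.sum()        # mixture weights over hypotheses
```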
