1 code implementation • 29 Dec 2023 • Wensha Zhang, Lam Si Tung Ho, Toby Kenney
We use a multi-optima, multi-variance OU process model to describe trait evolution with shifts in both the optimal value and the variance, and analyze how the covariance between species changes when variance shifts occur along the path.
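The shifting dynamics can be pictured with a minimal simulation sketch: an Euler–Maruyama discretization of an Ornstein–Uhlenbeck process whose optimum and diffusion variance change partway along a lineage. The parameter values and the `schedule` format below are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def simulate_ou_with_shifts(x0, alpha, schedule, dt=0.01, seed=0):
    """Euler-Maruyama simulation of an OU process dX = alpha*(theta - X)dt + sigma dW,
    where the optimum theta and diffusion sigma shift between segments.

    schedule: list of (duration, theta, sigma) segments along one lineage.
    """
    rng = np.random.default_rng(seed)
    x, path = x0, [x0]
    for duration, theta, sigma in schedule:
        for _ in range(round(duration / dt)):
            x += alpha * (theta - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            path.append(x)
    return np.array(path)

# One lineage: optimum 0 with sigma 1 for 5 time units,
# then a shift to optimum 3 with a much smaller sigma of 0.2.
path = simulate_ou_with_shifts(x0=0.0, alpha=1.0,
                               schedule=[(5.0, 0.0, 1.0), (5.0, 3.0, 0.2)])
```

After the shift the trait is pulled toward the new optimum 3 and fluctuates with the smaller variance; running several lineages that share part of a path is what induces the between-species covariance the paper analyzes.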
1 code implementation • 1 Dec 2023 • Cuong N. Nguyen, Phong Tran, Lam Si Tung Ho, Vu Dinh, Anh T. Tran, Tal Hassner, Cuong V. Nguyen
We consider transferability estimation, the problem of estimating how well deep learning models transfer from a source to a target task.
no code implementations • 9 Oct 2023 • Quan Huu Do, Binh T. Nguyen, Lam Si Tung Ho
Existing generalization bounds for deep neural networks require data to be independent and identically distributed (iid).
1 code implementation • 11 Nov 2022 • Esha Saha, Lam Si Tung Ho, Giang Tran
The most popular tools for modelling and predicting infectious disease epidemics are compartmental models.
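As background, the simplest compartmental model is the classic SIR system, which tracks the susceptible, infectious, and recovered fractions of a population. This is a generic textbook sketch with illustrative parameters, not the model proposed in the paper.

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One Euler step of the SIR compartmental model:
    dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I.
    S, I, R are population fractions, so S + I + R stays equal to 1."""
    new_inf = beta * s * i * dt   # new infections this step
    new_rec = gamma * i * dt      # new recoveries this step
    return s - new_inf, i + new_inf - new_rec, r + new_rec

# Illustrative outbreak: beta=0.3, gamma=0.1 gives basic reproduction number R0 = 3.
s, i, r = 0.99, 0.01, 0.0
for _ in range(100):              # 100 days with dt = 1
    s, i, r = sir_step(s, i, r, beta=0.3, gamma=0.1, dt=1.0)
```

With R0 = 3 the epidemic takes off and most of the population is eventually infected; more elaborate compartmental models add compartments (e.g. exposed or vaccinated) to the same ODE skeleton.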
no code implementations • 13 Sep 2022 • Cuong N. Nguyen, Lam Si Tung Ho, Vu Dinh, Tal Hassner, Cuong V. Nguyen
We analyze new generalization bounds for deep learning models trained by transfer learning from a source to a target task.
1 code implementation • 26 Jul 2022 • Nhat L. Vu, Thanh P. Nguyen, Binh T. Nguyen, Vu Dinh, Lam Si Tung Ho
Reconstructing the ancestral state of a group of species helps answer many important questions in evolutionary biology.
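For concreteness, one classical ancestral state reconstruction method (not necessarily the estimator studied in the paper) is Fitch parsimony: a bottom-up pass assigns each internal node the set of states compatible with a most-parsimonious reconstruction. The tree encoding below is an assumption for illustration.

```python
def fitch_sets(tree, leaf_states):
    """Bottom-up pass of Fitch parsimony. Each node receives the set of states
    that can appear at that node in a most-parsimonious reconstruction.

    tree: dict mapping each internal node name -> (left child, right child);
          leaves are any names absent from the dict.
    leaf_states: dict mapping leaf name -> observed state.
    """
    def visit(node):
        if node not in tree:                      # leaf: its observed state
            return {leaf_states[node]}
        left, right = (visit(c) for c in tree[node])
        # Intersect the children's sets when possible; otherwise take the union.
        return left & right if left & right else left | right

    return visit("root")

# Four-taxon tree ((A,B),(C,D)) with binary states A=0, B=0, C=0, D=1.
tree = {"root": ("n1", "n2"), "n1": ("A", "B"), "n2": ("C", "D")}
states = {"A": 0, "B": 0, "C": 0, "D": 1}
root_set = fitch_sets(tree, states)   # parsimony reconstructs state 0 at the root
```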
1 code implementation • 12 Apr 2022 • Wensha Zhang, Toby Kenney, Lam Si Tung Ho
PhylogeneticEM is even more conservative with small signal sizes and falls between l1ou + pBIC and the Ensemble method + BIC with large signal sizes.
no code implementations • 19 Nov 2021 • Lam Si Tung Ho, Binh T. Nguyen, Vu Dinh, Duy Nguyen
We prove that under the multi-scale Bernstein's condition, the generalized posterior distribution concentrates around the set of optimal hypotheses and the generalized Bayes estimator achieves a fast learning rate.
no code implementations • 14 Nov 2021 • Lam Si Tung Ho, Vu Dinh
Notably, we show that for a sequence of nested trees with bounded heights, the necessary and sufficient conditions for the existence of a consistent ancestral state reconstruction method under discrete models, the Brownian motion model, and the threshold model are equivalent.
no code implementations • 27 Sep 2021 • Lam Si Tung Ho, Vu Dinh
Large neural network models have high predictive power but may suffer from overfitting if the training set is not large enough.
no code implementations • 24 Aug 2021 • Lam Si Tung Ho, Nicholas Richardson, Giang Tran
In this paper, we propose an adaptive group Lasso deep neural network for high-dimensional function approximation where input data are generated from a dynamical system and the target function depends on few active variables or few linear combinations of variables.
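The core regularizer can be sketched in a few lines: each input variable's first-layer weights form a group, and each group's Lasso penalty is weighted by the inverse norm of an initial unpenalized fit, so inputs that looked irrelevant initially are penalized more heavily and pushed to exactly zero. This is a minimal numerical sketch of the penalty only, with made-up weight matrices; the paper's actual training procedure and exponent are not reproduced here.

```python
import numpy as np

def adaptive_group_lasso_penalty(W, W_init, gamma=1.0, eps=1e-8):
    """Adaptive group Lasso penalty on first-layer weights W
    (shape: hidden_units x inputs). Group j is column j (all weights
    leaving input j); its adaptive weight is the inverse norm of an
    initial unpenalized fit W_init, raised to the power gamma."""
    group_norms = np.linalg.norm(W, axis=0)          # ||W[:, j]||_2 per input
    init_norms = np.linalg.norm(W_init, axis=0)
    adaptive_weights = 1.0 / (init_norms + eps) ** gamma
    return float(np.sum(adaptive_weights * group_norms))

# Toy check: input 1's initial weights are near zero, so it gets a large
# adaptive weight and its whole column is driven to zero during training.
W_init = np.array([[1.0, 0.01],
                   [0.5, -0.02]])
W = np.array([[0.9, 0.02],
              [0.4, -0.01]])
penalty = adaptive_group_lasso_penalty(W, W_init)
```

Because the penalty acts on whole columns, setting a group to zero removes that input variable from the network entirely, which is what yields variable selection in the high-dimensional setting.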
no code implementations • 31 May 2021 • Binh T. Nguyen, Duy M. Nguyen, Lam Si Tung Ho, Vu Dinh
In this work, we introduce a novel method for solving the set inversion problem by formulating it as a binary classification problem.
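The reduction itself is simple to sketch: sample points x, label each with 1 if f(x) lands in the target set Y, and train any binary classifier; its positive region then approximates the preimage f^{-1}(Y). The toy problem and the 1-nearest-neighbour "classifier" below are illustrative assumptions, not the paper's learner.

```python
import numpy as np

def label_set_inversion(f, in_target, X):
    """Cast set inversion as binary classification: label each sample x
    with 1 if f(x) lies in the target set Y, else 0. A classifier trained
    on (X, y) approximates the preimage f^{-1}(Y) by its positive region."""
    return np.array([1 if in_target(f(x)) else 0 for x in X])

# Toy problem: recover {x : f(x) in [0, 1]} for f(x) = x^2 on [-2, 2]
# (the true preimage is the interval [-1, 1]).
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=500)
y = label_set_inversion(lambda x: x * x, lambda v: 0.0 <= v <= 1.0, X)

def predict(x):
    """1-nearest-neighbour stand-in for a trained classifier."""
    return y[np.argmin(np.abs(X - x))]
```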
no code implementations • 4 May 2021 • Lam Si Tung Ho, Vu Dinh
Supertree methods are tree reconstruction techniques that combine several smaller gene trees (possibly on different sets of species) to build a larger species tree.
1 code implementation • NeurIPS 2020 • Vu Dinh, Lam Si Tung Ho
One of the most important steps toward interpretability and explainability of neural network models is feature selection, which aims to identify the subset of relevant features.
no code implementations • 30 May 2020 • Vu Dinh, Lam Si Tung Ho
In this work, we propose and establish a theoretical guarantee for the use of the adaptive group lasso for selecting important features of neural networks.
no code implementations • 4 Jun 2019 • Cuong V. Nguyen, Lam Si Tung Ho, Huan Xu, Vu Dinh, Binh Nguyen
We study pool-based active learning with abstention feedbacks, where a labeler can abstain from labeling a queried example with some unknown abstention rate.
no code implementations • 25 Nov 2018 • Lam Si Tung Ho, Hayden Schaeffer, Giang Tran, Rachel Ward
In this work, we study the problem of learning nonlinear functions from corrupted and dependent data.
no code implementations • 23 May 2017 • Cuong V. Nguyen, Lam Si Tung Ho, Huan Xu, Vu Dinh, Binh Nguyen
We study pool-based active learning with abstention feedbacks, where a labeler can abstain from labeling a queried example with some unknown abstention rate.
no code implementations • NeurIPS 2016 • Vu Dinh, Lam Si Tung Ho, Duy Nguyen, Binh T. Nguyen
We study fast learning rates when the losses are not necessarily bounded and may have a distribution with heavy tails.
no code implementations • 12 Aug 2014 • Vu Dinh, Lam Si Tung Ho, Nguyen Viet Cuong, Duy Nguyen, Binh T. Nguyen
We prove new fast learning rates for the one-vs-all multiclass plug-in classifiers trained either from exponentially strongly mixing data or from data generated by a converging drifting distribution.
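A plug-in classifier of this kind can be sketched directly: for each class k, estimate the regression function eta_k(x) = P(Y = k | X = x) from the one-vs-all binary labels, then predict the argmax. The Nadaraya–Watson kernel estimator and the toy 1-D data below are illustrative choices, not the estimator analyzed in the paper.

```python
import numpy as np

def one_vs_all_plugin(X_train, y_train, x, h=0.5):
    """One-vs-all plug-in classifier: estimate eta_k(x) = P(Y = k | X = x)
    for each class k via Nadaraya-Watson kernel regression on the binary
    labels 1{y_i = k}, then predict argmax_k eta_k(x)."""
    w = np.exp(-((X_train - x) ** 2) / (2 * h ** 2))   # Gaussian kernel weights
    classes = np.unique(y_train)
    etas = [np.sum(w * (y_train == k)) / np.sum(w) for k in classes]
    return classes[int(np.argmax(etas))]

# Toy 1-D data: class 0 clusters near -1, class 1 clusters near +1.
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(-1, 0.3, 50), rng.normal(1, 0.3, 50)])
y = np.concatenate([np.zeros(50, dtype=int), np.ones(50, dtype=int)])
```

The fast-rate analysis concerns how quickly such plug-in rules approach the Bayes classifier when the training data are dependent (mixing or drifting) rather than i.i.d.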
no code implementations • 12 Jun 2014 • Nguyen Viet Cuong, Lam Si Tung Ho, Vu Dinh
For the generalization of the algorithm, we prove a PAC-style bound on the training sample size for the expected $L_1$-loss to converge to the optimal loss when training data are V-geometrically ergodic Markov chains.