Search Results for author: Lam Si Tung Ho

Found 21 papers, 6 papers with code

Detection of evolutionary shifts in variance under an Ornstein-Uhlenbeck model

1 code implementation • 29 Dec 2023 • Wensha Zhang, Lam Si Tung Ho, Toby Kenney

We use a multi-optima and multi-variance OU process model to describe trait evolution with shifts in both the optimal value and the variance, and we analyze how the covariance between species changes when shifts in variance occur along the path.
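
To make the model concrete, here is a minimal simulation (Euler-Maruyama; all names and parameter values are illustrative, not from the paper) of an OU process along a single lineage whose optimum and variance both shift once:

```python
import numpy as np

def simulate_ou_with_shifts(t_max=10.0, dt=0.01, alpha=1.0, shift_time=5.0,
                            theta=(0.0, 2.0), sigma=(0.5, 1.5), rng=None):
    """Euler-Maruyama for dX = alpha*(theta - X) dt + sigma dW, where both
    the optimum theta and the diffusion sigma jump at shift_time."""
    rng = np.random.default_rng() if rng is None else rng
    n_steps = int(t_max / dt)
    x = np.empty(n_steps + 1)
    x[0] = theta[0]                      # start at the ancestral optimum
    for i in range(n_steps):
        before = i * dt < shift_time
        th, sg = (theta[0], sigma[0]) if before else (theta[1], sigma[1])
        x[i + 1] = x[i] + alpha * (th - x[i]) * dt \
                   + sg * np.sqrt(dt) * rng.standard_normal()
    return x

print(simulate_ou_with_shifts()[-5:])    # trait values near the end of the lineage
```

On a phylogeny the same dynamics run along every branch; a variance shift on an edge shared by two tips is what alters their covariance, which is the quantity the paper's analysis tracks.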

Simple Transferability Estimation for Regression Tasks

1 code implementation • 1 Dec 2023 • Cuong N. Nguyen, Phong Tran, Lam Si Tung Ho, Vu Dinh, Anh T. Tran, Tal Hassner, Cuong V. Nguyen

We consider transferability estimation, the problem of estimating how well deep learning models transfer from a source to a target task.

Regression, Transfer Learning
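
The estimators in the title are deliberately cheap to compute. One natural instance of the idea, sketched here under the assumption that transferability is proxied by how well a linear model on the source model's penultimate-layer features predicts the target labels (the function name is hypothetical), is a negative-MSE score:

```python
import numpy as np

def linear_mse_transferability(features, targets):
    """Fit ordinary least squares from source-model features (extracted on
    the target data) to the target labels; return negative MSE, so higher
    scores suggest better transferability."""
    X = np.hstack([features, np.ones((len(features), 1))])   # add a bias column
    w, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return -np.mean((X @ w - targets) ** 2)

rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 16))        # stand-in penultimate-layer features
y = 2.0 * feats[:, 0] + 0.1 * rng.standard_normal(100)
print(linear_mse_transferability(feats, y))
```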

A Generalization Bound of Deep Neural Networks for Dependent Data

no code implementations • 9 Oct 2023 • Quan Huu Do, Binh T. Nguyen, Lam Si Tung Ho

Existing generalization bounds for deep neural networks require data to be independent and identically distributed (iid).

Epidemiology, Generalization Bounds +1

SPADE4: Sparsity and Delay Embedding based Forecasting of Epidemics

1 code implementation • 11 Nov 2022 • Esha Saha, Lam Si Tung Ho, Giang Tran

The most popular tools for modelling and predicting infectious disease epidemics are compartmental models.
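
The "delay embedding" in the title refers to stacking lagged copies of a single observable (e.g., daily case counts) into a Hankel-style matrix. A minimal sketch with hypothetical names follows; SPADE4 itself couples such embeddings with sparse regression to produce forecasts:

```python
import numpy as np

def delay_embed(series, dim, lag=1):
    """Row i of the result is (y_i, y_{i+lag}, ..., y_{i+(dim-1)*lag})."""
    n_rows = len(series) - (dim - 1) * lag
    return np.stack([series[i:i + n_rows] for i in range(0, dim * lag, lag)],
                    axis=1)

y = np.sin(np.linspace(0, 20, 200))      # stand-in for an epidemic observable
print(delay_embed(y, dim=7).shape)       # (194, 7): 7-day embedded states
```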

Generalization Bounds for Deep Transfer Learning Using Majority Predictor Accuracy

no code implementations • 13 Sep 2022 • Cuong N. Nguyen, Lam Si Tung Ho, Vu Dinh, Tal Hassner, Cuong V. Nguyen

We analyze new generalization bounds for deep learning models trained by transfer learning from a source to a target task.

Generalization Bounds, Transfer Learning

When can we reconstruct the ancestral state? Beyond Brownian motion

1 code implementation • 26 Jul 2022 • Nhat L. Vu, Thanh P. Nguyen, Binh T. Nguyen, Vu Dinh, Lam Si Tung Ho

Reconstructing the ancestral state of a group of species helps answer many important questions in evolutionary biology.
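
For context on the Brownian motion baseline that this paper moves beyond (a standard result, not specific to this work): if tip traits on a fixed tree follow Brownian motion, then $Y \sim \mathcal{N}(\mu \mathbf{1}, \sigma^2 C)$, where $C_{ij}$ is the branch length shared from the root to the most recent common ancestor of tips $i$ and $j$, and the root (ancestral) state is estimated by generalized least squares:

\[
\hat{\mu} \;=\; \frac{\mathbf{1}^\top C^{-1} Y}{\mathbf{1}^\top C^{-1} \mathbf{1}}.
\]

Whether such estimators remain consistent as the tree grows is exactly the question this line of reconstruction papers addresses.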

Evolutionary shift detection with ensemble variable selection

1 code implementation • 12 Apr 2022 • Wensha Zhang, Toby Kenney, Lam Si Tung Ho

PhylogeneticEM is even more conservative for small signal sizes and falls between l1ou + pBIC and the Ensemble method + BIC for large signal sizes.

Variable Selection

Posterior concentration and fast convergence rates for generalized Bayesian learning

no code implementations • 19 Nov 2021 • Lam Si Tung Ho, Binh T. Nguyen, Vu Dinh, Duy Nguyen

We prove that under the multi-scale Bernstein's condition, the generalized posterior distribution concentrates around the set of optimal hypotheses, and the generalized Bayes estimator can achieve a fast learning rate.

Regression
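
For reference, the generalized (Gibbs) posterior studied in this line of work is usually written as follows, with $\pi$ a prior over hypotheses, $r_n$ the empirical risk, and $\lambda > 0$ a learning-rate parameter (the notation here is the standard one, not necessarily the paper's):

\[
\pi_n(h \mid Z_{1:n}) \;\propto\; \exp\{-\lambda\, n\, r_n(h)\}\, \pi(h),
\qquad
r_n(h) = \frac{1}{n}\sum_{i=1}^{n} \ell(h, Z_i).
\]

Taking $\ell$ to be a negative log-likelihood and $\lambda = 1$ recovers the ordinary Bayesian posterior.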

When can we reconstruct the ancestral state? A unified theory

no code implementations • 14 Nov 2021 • Lam Si Tung Ho, Vu Dinh

Notably, we show that for a sequence of nested trees with bounded heights, the necessary and sufficient conditions for the existence of a consistent ancestral state reconstruction method under discrete models, the Brownian motion model, and the threshold model are equivalent.

Searching for Minimal Optimal Neural Networks

no code implementations • 27 Sep 2021 • Lam Si Tung Ho, Vu Dinh

Large neural network models have high predictive power but may suffer from overfitting if the training set is not large enough.

Adaptive Group Lasso Neural Network Models for Functions of Few Variables and Time-Dependent Data

no code implementations • 24 Aug 2021 • Lam Si Tung Ho, Nicholas Richardson, Giang Tran

In this paper, we propose an adaptive group Lasso deep neural network for high-dimensional function approximation where the input data are generated by a dynamical system and the target function depends on a few active variables or a few linear combinations of variables.
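
A rough PyTorch sketch of the penalty (names and constants are illustrative; the paper's exact grouping and weighting may differ): the first-layer weights attached to each input variable form a group whose norm is penalized, reweighted by an initial unpenalized fit:

```python
import torch

def adaptive_group_lasso_penalty(weight, init_weight, gamma=1.0):
    """One group per input variable = one column of the first-layer weight
    matrix; groups that an initial fit already shrinks are penalized harder."""
    group_norms = weight.norm(dim=0)                       # one norm per input
    adaptive = 1.0 / (init_weight.norm(dim=0) ** gamma + 1e-12)
    return (adaptive * group_norms).sum()

net = torch.nn.Sequential(torch.nn.Linear(20, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, 1))
w0 = net[0].weight.detach().clone()      # stand-in for an initial, unpenalized fit
x, y = torch.randn(128, 20), torch.randn(128, 1)
loss = torch.nn.functional.mse_loss(net(x), y) \
       + 1e-2 * adaptive_group_lasso_penalty(net[0].weight, w0)
loss.backward()   # train with this loss; inputs whose group norm shrinks to
                  # zero are deselected
```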

OASIS: An Active Framework for Set Inversion

no code implementations • 31 May 2021 • Binh T. Nguyen, Duy M. Nguyen, Lam Si Tung Ho, Vu Dinh

In this work, we introduce a novel method for solving the set inversion problem by formulating it as a binary classification problem.

Active Learning, Binary Classification
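
A minimal passive rendering of that reformulation (hypothetical oracle and names; OASIS itself selects queries actively, concentrating them near the classifier's decision boundary): label sampled points by set membership and train a binary classifier whose decision boundary approximates the boundary of the set:

```python
import numpy as np
from sklearn.svm import SVC

f = lambda x: (x ** 2).sum(axis=1)       # stand-in oracle; the set is {x : f(x) <= c}
c = 1.0

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))    # passive sampling, unlike OASIS
y = (f(X) <= c).astype(int)              # binary labels: inside vs. outside

clf = SVC(probability=True).fit(X, y)
print(clf.predict([[0.1, 0.2], [1.5, 1.5]]))  # inside, then outside of the set
```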

Convergence of maximum likelihood supertree reconstruction

no code implementations • 4 May 2021 • Lam Si Tung Ho, Vu Dinh

Supertree methods are tree reconstruction techniques that combine several smaller gene trees (possibly on different sets of species) to build a larger species tree.

Consistent Feature Selection for Analytic Deep Neural Networks

1 code implementation • NeurIPS 2020 • Vu Dinh, Lam Si Tung Ho

One of the most important steps toward interpretability and explainability of neural network models is feature selection, which aims to identify the subset of relevant features.

Feature Selection

Consistent feature selection for neural networks via Adaptive Group Lasso

no code implementations • 30 May 2020 • Vu Dinh, Lam Si Tung Ho

In this work, we propose and establish a theoretical guarantee for the use of the adaptive group lasso for selecting important features of neural networks.

Feature Selection
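
In the usual adaptive group Lasso formulation (standard notation; the paper's exact setup may differ), the first-layer weights $w_j$ feeding from input feature $j$ form a group, and the objective reweights each group by an initial estimate $\hat{w}_j$:

\[
\min_{W} \; \frac{1}{n}\sum_{i=1}^{n} \ell\big(f_W(x_i), y_i\big)
\;+\; \lambda \sum_{j=1}^{d} \frac{\lVert w_j \rVert_2}{\lVert \hat{w}_j \rVert_2^{\gamma}} ,
\]

where a feature counts as selected when its group norm $\lVert w_j \rVert_2$ stays away from zero. A code-level sketch of this penalty appears under the time-dependent-data paper above.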

Bayesian Active Learning With Abstention Feedbacks

no code implementations • 4 Jun 2019 • Cuong V. Nguyen, Lam Si Tung Ho, Huan Xu, Vu Dinh, Binh Nguyen

We study pool-based active learning with abstention feedbacks, where a labeler can abstain from labeling a queried example with some unknown abstention rate.

Active Learning, General Classification
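
A bare-bones sketch of the pool-based protocol with abstentions (uncertainty sampling stands in for the paper's Bayesian query strategy; here abstained queries are simply dropped, whereas the paper models the unknown abstention rate):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.standard_normal((400, 2))
true_y = (X[:, 0] + X[:, 1] > 0).astype(int)
abstain_rate = 0.3                       # unknown to the learner in the paper's setting

pos, neg = int(np.argmax(true_y)), int(np.argmin(true_y))
labeled_X, labeled_y = [X[pos], X[neg]], [1, 0]   # tiny seed set with both classes
pool = set(range(len(X))) - {pos, neg}

clf = LogisticRegression()
for _ in range(50):
    clf.fit(np.array(labeled_X), np.array(labeled_y))
    # query the pool point the current model is least sure about
    idx = min(pool, key=lambda i: abs(clf.predict_proba(X[i:i + 1])[0, 1] - 0.5))
    pool.remove(idx)
    if rng.random() > abstain_rate:      # the labeler may abstain from answering
        labeled_X.append(X[idx]); labeled_y.append(int(true_y[idx]))

print(clf.score(X, true_y))
```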

Recovery guarantees for polynomial approximation from dependent data with outliers

no code implementations • 25 Nov 2018 • Lam Si Tung Ho, Hayden Schaeffer, Giang Tran, Rachel Ward

In this work, we study the problem of learning nonlinear functions from corrupted and dependent data.
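
A common recovery program in this setting, shown as an illustration rather than the paper's exact formulation: replace the least-squares fidelity with an $\ell_1$ fidelity so that sparse outliers contribute linearly rather than quadratically,

\[
\hat{c} \;\in\; \arg\min_{c} \; \lVert A c - y \rVert_1 ,
\]

where $A$ is the polynomial dictionary evaluated at the (possibly dependent) samples and $y$ the corrupted observations.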

Bayesian Pool-based Active Learning With Abstention Feedbacks

no code implementations • 23 May 2017 • Cuong V. Nguyen, Lam Si Tung Ho, Huan Xu, Vu Dinh, Binh Nguyen

We study pool-based active learning with abstention feedbacks, where a labeler can abstain from labeling a queried example with some unknown abstention rate.

Active Learning, General Classification

Fast learning rates with heavy-tailed losses

no code implementations • NeurIPS 2016 • Vu Dinh, Lam Si Tung Ho, Duy Nguyen, Binh T. Nguyen

We study fast learning rates when the losses are not necessarily bounded and may have a distribution with heavy tails.

Clustering, Quantization

Learning From Non-iid Data: Fast Rates for the One-vs-All Multiclass Plug-in Classifiers

no code implementations • 12 Aug 2014 • Vu Dinh, Lam Si Tung Ho, Nguyen Viet Cuong, Duy Nguyen, Binh T. Nguyen

We prove new fast learning rates for the one-vs-all multiclass plug-in classifiers trained either from exponentially strongly mixing data or from data generated by a converging drifting distribution.
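
The plug-in construction itself is short: estimate each conditional class probability $\eta_k(x) = P(Y = k \mid X = x)$ with a separate binary model and predict the argmax. A sketch with hypothetical data (the paper's results concern convergence rates under dependent sampling, not the construction):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.standard_normal((300, 2))
y = np.digitize(X[:, 0], [-0.5, 0.5])    # 3 classes from a simple rule

# one-vs-all: one binary probability model per class, then argmax over eta_k
models = [LogisticRegression().fit(X, (y == k).astype(int)) for k in range(3)]
eta = np.column_stack([m.predict_proba(X)[:, 1] for m in models])
pred = eta.argmax(axis=1)
print((pred == y).mean())
```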

Generalization and Robustness of Batched Weighted Average Algorithm with V-geometrically Ergodic Markov Data

no code implementations • 12 Jun 2014 • Nguyen Viet Cuong, Lam Si Tung Ho, Vu Dinh

For the generalization of the algorithm, we prove a PAC-style bound on the training sample size for the expected $L_1$-loss to converge to the optimal loss when training data are V-geometrically ergodic Markov chains.

General Classification, Regression
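
A loose sketch of the algorithm's flavor, not its exact procedure (the paper's batching and weighting details differ in ways this does not capture): thin the dependent sequence to one point per batch so the retained points are nearly independent for mixing chains, then combine hypotheses with exponential weights based on empirical loss:

```python
import numpy as np

def batched_weighted_average(hypotheses, X, y, batch_size=10, eta=1.0):
    """Keep one representative per batch, score each hypothesis by its L1
    loss on the thinned data, and mix predictions by exponential weights."""
    idx = np.arange(0, len(X), batch_size)
    losses = np.array([np.mean(np.abs(h(X[idx]) - y[idx])) for h in hypotheses])
    w = np.exp(-eta * losses)
    w /= w.sum()
    return lambda x: sum(wi * h(x) for wi, h in zip(w, hypotheses))

hyps = [lambda x, a=a: a * x for a in (0.5, 1.0, 2.0)]   # toy hypothesis class
X = np.linspace(0.0, 1.0, 200)
y = 1.1 * X                              # stand-in for chain-generated data
predict = batched_weighted_average(hyps, X, y)
print(predict(np.array([0.5])))
```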
