no code implementations • ICML 2020 • Matthew Jones, Thy Nguyen, Huy Nguyen

The field of algorithms has seen a push for fairness, or the removal of inherent bias, in recent years.

no code implementations • 23 May 2024 • Minh Le, An Nguyen, Huy Nguyen, Trang Nguyen, Trang Pham, Linh Van Ngo, Nhat Ho

Exploiting the power of pre-trained models, prompt-based approaches stand out compared to other continual learning solutions in effectively preventing catastrophic forgetting, even with very few learnable parameters and without the need for a memory buffer.

no code implementations • 23 May 2024 • Huy Nguyen, Pedram Akbarian, Trang Pham, Trang Nguyen, Shujian Zhang, Nhat Ho

The cosine router in sparse Mixture of Experts (MoE) has recently emerged as an attractive alternative to the conventional linear router.
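As a rough illustration (not the paper's implementation), the difference between the two routers can be sketched in numpy: a linear router scores each token by raw inner products with expert embeddings, while a cosine router L2-normalizes both sides and rescales by a temperature; all dimensions and the temperature value below are illustrative.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))      # 4 tokens, hidden dim 8 (illustrative)
W = rng.normal(size=(8, 3))      # router weights for 3 experts

# Linear router: raw inner products of tokens and expert embeddings
linear_logits = x @ W

# Cosine router: inner products of L2-normalized tokens and expert
# embeddings, rescaled by a temperature tau
tau = 0.07
xn = x / np.linalg.norm(x, axis=1, keepdims=True)
Wn = W / np.linalg.norm(W, axis=0, keepdims=True)
cosine_logits = (xn @ Wn) / tau   # entries bounded by 1/tau

p_lin, p_cos = softmax(linear_logits), softmax(cosine_logits)
```

Because the cosine logits are bounded, routing decisions depend only on directions, not on token or expert norms.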

no code implementations • 22 May 2024 • Huy Nguyen, Nhat Ho, Alessandro Rinaldo

The softmax gating function is arguably the most popular choice in mixture of experts modeling.

no code implementations • 7 Feb 2024 • Huy Nguyen, Khai Nguyen, Nhat Ho

We consider the parameter estimation problem in the deviated Gaussian mixture of experts, in which the data are generated from $(1 - \lambda^{\ast}) g_0(Y| X)+ \lambda^{\ast} \sum_{i = 1}^{k_{\ast}} p_{i}^{\ast} f(Y|(a_{i}^{\ast})^{\top}X+b_i^{\ast},\sigma_{i}^{\ast})$, where $X, Y$ are respectively a covariate vector and a response variable, $g_{0}(Y|X)$ is a known function, $\lambda^{\ast} \in [0, 1]$ is the true but unknown mixing proportion, and $(p_{i}^{\ast}, a_{i}^{\ast}, b_{i}^{\ast}, \sigma_{i}^{\ast})$ for $1 \leq i \leq k_{\ast}$ are the unknown parameters of the Gaussian mixture of experts.
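The generative model in the formula above can be simulated directly: with probability $\lambda^{\ast}$ a response is drawn from the Gaussian mixture of experts, otherwise from the known baseline $g_0$. All parameter values below are made-up illustrative choices (with $g_0$ taken as a standard normal), not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 0.7                                  # lambda*: mixing proportion (illustrative)
p = np.array([0.4, 0.6])                   # p_i*: expert weights, k* = 2
a = np.array([[1.0, -0.5], [0.3, 2.0]])    # a_i*: expert slopes
b = np.array([0.0, 1.0])                   # b_i*: intercepts
sigma = np.array([0.5, 1.0])               # sigma_i*: expert noise scales

def sample(n):
    X = rng.normal(size=(n, 2))
    Y = np.empty(n)
    from_mix = rng.random(n) < lam         # Bernoulli(lambda*) switch
    for t in range(n):
        if from_mix[t]:
            i = rng.choice(2, p=p)         # pick an expert
            Y[t] = rng.normal(a[i] @ X[t] + b[i], sigma[i])
        else:
            Y[t] = rng.normal(0.0, 1.0)    # g0(Y|X): known baseline, here N(0, 1)
    return X, Y

X, Y = sample(500)
```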

no code implementations • 5 Feb 2024 • Huy Nguyen, Nhat Ho, Alessandro Rinaldo

The mixture of experts (MoE) model is a statistical machine learning design that aggregates multiple expert networks through a softmax gating function to form a more intricate and expressive model.
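A minimal forward pass of this design can be sketched as follows; the experts here are simple linear maps and all shapes are illustrative assumptions, not the architecture studied in the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(2)
d, k = 8, 3                               # input dim, number of experts (illustrative)
W_gate = rng.normal(size=(d, k))          # softmax-gate parameters
W_experts = rng.normal(size=(k, d))       # each expert: a linear map to a scalar

def moe_forward(x):
    gates = softmax(x @ W_gate)           # per-input mixture weights over experts
    outs = x @ W_experts.T                # (n, k) expert outputs
    return (gates * outs).sum(axis=1)     # softmax-gated aggregation

y = moe_forward(rng.normal(size=(5, d)))
```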

no code implementations • 5 Feb 2024 • Xing Han, Huy Nguyen, Carl Harris, Nhat Ho, Suchi Saria

As machine learning models in critical fields increasingly grapple with multimodal data, they face the dual challenges of handling a wide array of modalities, often incomplete due to missing elements, and the temporal irregularity and sparsity of collected samples.

1 code implementation • 4 Feb 2024 • Quang Pham, Giang Do, Huy Nguyen, TrungTin Nguyen, Chenghao Liu, Mina Sartipi, Binh T. Nguyen, Savitha Ramasamy, XiaoLi Li, Steven Hoi, Nhat Ho

Sparse mixture of experts (SMoE) offers an appealing solution for scaling up model complexity beyond the means of increasing the network's depth or width.

no code implementations • 25 Jan 2024 • Huy Nguyen, Pedram Akbarian, Nhat Ho

We demonstrate that due to interactions between the temperature and other model parameters via some partial differential equations, the convergence rates of parameter estimations are slower than any polynomial rates, and could be as slow as $\mathcal{O}(1/\log(n))$, where $n$ denotes the sample size.
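To get a feel for how slow this is, a quick arithmetic comparison (illustrative only, not from the paper) contrasts the logarithmic rate $1/\log(n)$ with the parametric rate $1/\sqrt{n}$ at a few sample sizes.

```python
import math

# Compare the logarithmic rate 1/log(n) with the parametric rate 1/sqrt(n)
rows = [(n, 1 / math.log(n), 1 / math.sqrt(n)) for n in (10**2, 10**4, 10**6)]
for n, slow, fast in rows:
    print(f"n={n:>9}: 1/log(n)={slow:.4f}  1/sqrt(n)={fast:.6f}")
```

Even at a million samples, $1/\log(n) \approx 0.072$ while $1/\sqrt{n} = 0.001$, so the logarithmic rate remains orders of magnitude larger.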

1 code implementation • 5 Jan 2024 • Huy Nguyen, Kien Nguyen, Sridha Sridharan, Clinton Fookes

To address this, we introduce AG-ReID.v2, a dataset specifically designed for person Re-ID in mixed aerial and ground scenarios.

Ranked #1 on Person Re-Identification on AG-ReID.v2

no code implementations • 22 Oct 2023 • Huy Nguyen, Pedram Akbarian, TrungTin Nguyen, Nhat Ho

The mixture-of-experts (MoE) model incorporates the power of multiple submodels via gating functions to achieve greater performance in numerous regression and classification applications.

no code implementations • 25 Sep 2023 • Huy Nguyen, Pedram Akbarian, Fanqi Yan, Nhat Ho

When the true number of experts $k_{\ast}$ is known, we demonstrate that the convergence rates of density and parameter estimations are both parametric on the sample size.

no code implementations • 22 Sep 2023 • Huy Nguyen, Prince Grover, Devashish Khatwani

We introduce OpportunityFinder, a code-less framework for performing a variety of causal inference studies with panel data for non-expert users.

1 code implementation • 12 May 2023 • Huy Nguyen, TrungTin Nguyen, Khai Nguyen, Nhat Ho

Originally introduced as a neural network for ensemble learning, mixture of experts (MoE) has recently become a fundamental building block of highly successful modern deep neural networks for heterogeneous data analysis in several applications of machine learning and statistics.

1 code implementation • 15 Mar 2023 • Huy Nguyen, Kien Nguyen, Sridha Sridharan, Clinton Fookes

Our dataset presents a novel elevated-viewpoint challenge for person re-ID due to the significant difference in person appearance across these cameras.

Ranked #2 on Person Re-Identification on AG-ReID

no code implementations • 7 Feb 2023 • Abhinav Bohra, Huy Nguyen, Devashish Khatwani

Multiple techniques have been developed either to decrease the dependence on labeled data (zero/few-shot learning, weak supervision) or to improve the efficiency of the labeling process (active learning).

no code implementations • 19 Oct 2022 • Dung Le, Huy Nguyen, Khai Nguyen, Trang Nguyen, Nhat Ho

Generalized sliced Wasserstein distance is a variant of sliced Wasserstein distance that exploits the power of non-linear projection through a given defining function to better capture the complex structures of the probability distributions.
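For context, the plain sliced Wasserstein distance (which the generalized variant extends by replacing the linear projection with a non-linear defining function) can be estimated by Monte Carlo: project both samples onto random directions and average the 1D Wasserstein distances, which for equal-size samples reduce to sorted-projection differences. This is a generic sketch, not the paper's method.

```python
import numpy as np

def sliced_w1(X, Y, n_proj=100, rng=None):
    """Monte Carlo sliced 1-Wasserstein between equal-size samples X, Y."""
    rng = rng or np.random.default_rng(0)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)       # random direction on the sphere
        xp, yp = np.sort(X @ theta), np.sort(Y @ theta)
        total += np.abs(xp - yp).mean()      # 1D W1 via sorted projections
    return total / n_proj

rng = np.random.default_rng(3)
A = rng.normal(size=(200, 2))
B = rng.normal(size=(200, 2)) + 3.0          # same shape, shifted by (3, 3)
```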

1 code implementation • 27 Sep 2022 • Khai Nguyen, Tongzheng Ren, Huy Nguyen, Litu Rout, Tan Nguyen, Nhat Ho

We explain the usage of these projections by introducing Hierarchical Radon Transform (HRT) which is constructed by applying Radon Transform variants recursively.

no code implementations • 18 Sep 2022 • Nghia Chu, Binh Dao, Nga Pham, Huy Nguyen, Hien Tran

Predicting fund performance is beneficial to both investors and fund managers, and yet is a challenging task.

no code implementations • ECNLP (ACL) 2022 • Huy Nguyen, Devashish Khatwani

Training a product title classification model that is robust to noisy labels in the data is very important for making product classification applications more practical.

no code implementations • 8 Jun 2022 • Huy Nguyen, Fabio Di Troia, Genya Ishigaki, Mark Stamp

We also evaluate the utility of the GAN generative model for adversarial attacks on image-based malware detection.

no code implementations • 29 Oct 2021 • Trung Le, Dat Do, Tuan Nguyen, Huy Nguyen, Hung Bui, Nhat Ho, Dinh Phung

We study the label shift problem between the source and target domains in general domain adaptation (DA) settings.

no code implementations • 24 Aug 2021 • Khang Le, Dung Le, Huy Nguyen, Dat Do, Tung Pham, Nhat Ho

When the metric is the inner product, which we refer to as inner product Gromov-Wasserstein (IGW), we demonstrate that the optimal transportation plans of entropic IGW and its unbalanced variant are (unbalanced) Gaussian distributions.

no code implementations • 18 Aug 2021 • Khang Le, Huy Nguyen, Tung Pham, Nhat Ho

We demonstrate that the ApproxMPOT algorithm can approximate the optimal value of multimarginal POT problem with a computational complexity upper bound of the order $\tilde{\mathcal{O}}(m^3(n+1)^{m}/ \varepsilon^2)$ where $\varepsilon > 0$ stands for the desired tolerance.

no code implementations • NeurIPS 2021 • Khang Le, Huy Nguyen, Quang Nguyen, Tung Pham, Hung Bui, Nhat Ho

We consider robust variants of the standard optimal transport, named robust optimal transport, where marginal constraints are relaxed via Kullback-Leibler divergence.

no code implementations • 22 Dec 2020 • Hui Chen, Hongkuan Zhang, Qian Wu, Yu Huang, Huy Nguyen, Emil Prodan, Xiaoming Zhou, Guoliang Huang

Synthetic dimensions can be rendered in physical space, as has been achieved with photonics and cold atomic gases; however, little work has succeeded in acoustics because acoustic waveguides cannot be weakly coupled in a continuous fashion.

Mesoscale and Nanoscale Physics • Classical Physics

1 code implementation • 23 Sep 2020 • Thu Nguyen, Duy H. M. Nguyen, Huy Nguyen, Binh T. Nguyen, Bruce A. Wade

The problem of monotone missing data has been broadly studied during the last two decades and has many applications in fields such as bioinformatics and statistics.

no code implementations • 2 Sep 2020 • Anamay Chaturvedi, Huy Nguyen, Eric Xu

We introduce a new $(\epsilon_p, \delta_p)$-differentially private algorithm for the $k$-means clustering problem.

no code implementations • 29 May 2020 • Anamay Chaturvedi, Huy Nguyen, Lydia Zakynthinou

We extend this work by designing differentially private algorithms for both monotone and non-monotone decomposable submodular maximization under general matroid constraints, with competitive utility guarantees.

no code implementations • 11 Mar 2020 • Huy Nguyen, Nicholas Adrian, Joyce Lim Xin Yan, Jonathan M. Salfity, William Allen, Quang-Cuong Pham

With the rapid rise of 3D-printing as a competitive mass manufacturing method, manual "decaking" - i.e., removing the residual powder that sticks to a 3D-printed part - has become a significant bottleneck.

Robotics

no code implementations • WS 2019 • Farah Nadeem, Huy Nguyen, Yang Liu, Mari Ostendorf

Automated essay scoring systems typically rely on hand-crafted features to predict essay quality, but such systems are limited by the cost of feature engineering.

no code implementations • 10 Oct 2018 • Eric Dodds, Huy Nguyen, Simao Herdade, Jack Culpepper, Andrew Kae, Pierre Garrigues

Our approach significantly outperforms the state-of-the-art on the DeepFashion dataset.

1 code implementation • 25 Jun 2017 • Huy Nguyen, Minh-Le Nguyen

This paper introduces a novel deep learning framework including a lexicon-based approach for sentence-level prediction of sentiment label distribution.

no code implementations • 21 Apr 2016 • Yannis Kalantidis, Lyndon Kennedy, Huy Nguyen, Clayton Mellina, David A. Shamma

We propose a novel hashing-based matching scheme, called Locally Optimized Hashing (LOH), based on a state-of-the-art quantization algorithm that can be used for efficient, large-scale search, recommendation, clustering, and deduplication.

no code implementations • NeurIPS 2014 • Haim Avron, Huy Nguyen, David Woodruff

Sketching is a powerful dimensionality reduction tool for accelerating statistical learning algorithms.
