1 code implementation • 25 Jul 2024 • Yixin Liu, Thalaiyasingam Ajanthan, Hisham Husain, Vu Nguyen
Additionally, the sparsity inherent in tabular data poses challenges for diffusion models in accurately modeling the data manifold, impacting the robustness of these models for data imputation.
no code implementations • 29 May 2024 • Alexander Soen, Hisham Husain, Philip Schulz, Vu Nguyen
Instead, we propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
1 code implementation • 5 Feb 2024 • Lam Ngo, Huong Ha, Jeffrey Chan, Vu Nguyen, Hongyu Zhang
To address this issue, a promising solution is to use a local search strategy that partitions the search domain into local regions with a high likelihood of containing the global optimum, and then to use BO to optimize the objective function within these regions.
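As a rough illustration of this local-search idea, here is a minimal sketch of one local-region BO step using scikit-learn's Gaussian process and a GP-UCB acquisition; the region handling, candidate sampling, and hyperparameters are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch of one BO step restricted to a local region (illustrative only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def local_bo_step(f, X_obs, y_obs, center, radius, n_candidates=1000, beta=2.0, seed=None):
    """Pick and evaluate the next point inside [center - radius, center + radius] via GP-UCB."""
    rng = np.random.default_rng(seed)
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_obs, y_obs)
    # Candidate points sampled uniformly within the local region.
    cand = rng.uniform(center - radius, center + radius, size=(n_candidates, len(center)))
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(mu + beta * sigma)]   # upper confidence bound (maximization)
    return x_next, f(x_next)

# Toy usage: one step inside a single local region of a 2-D problem.
f = lambda x: -np.sum((x - 0.3) ** 2)
X = np.random.default_rng(0).uniform(0, 1, size=(5, 2))
y = np.array([f(x) for x in X])
x_next, y_next = local_bo_step(f, X, y, center=np.array([0.5, 0.5]), radius=0.25, seed=1)
```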
no code implementations • 12 Jun 2023 • Huong Ha, Vu Nguyen, Hung Tran-The, Hongyu Zhang, Xiuzhen Zhang, Anton Van Den Hengel
To address this issue, we propose a new BO method that can sub-linearly converge to the objective function's global optimum even when the true GP hyperparameters are unknown in advance and need to be estimated from the observed data.
1 code implementation • 9 Jun 2023 • Masaki Adachi, Satoshi Hayakawa, Martin Jørgensen, Xingchen Wan, Vu Nguyen, Harald Oberhauser, Michael A. Osborne
Active learning parallelization is widely used, but typically relies on fixing the batch size throughout experimentation.
1 code implementation • CVPR 2023 • Jingyi Xu, Hieu Le, Vu Nguyen, Viresh Ranjan, Dimitris Samaras
By applying this model to all the candidate patches, we can select the most suitable patches as exemplars for counting.
Ranked #7 on Zero-Shot Counting on FSC147
2 code implementations • 19 Jul 2022 • Xingchen Wan, Cong Lu, Jack Parker-Holder, Philip J. Ball, Vu Nguyen, Binxin Ru, Michael A. Osborne
Leveraging the new highly parallelizable Brax physics engine, we show that these innovations lead to large performance gains, significantly outperforming the tuned baseline while learning entire configurations on the fly.
1 code implementation • 13 Jun 2022 • Vu Nguyen, Hisham Husain, Sachin Farfade, Anton Van Den Hengel
CSA outperforms the current state-of-the-art in this practically important area of semi-supervised learning.
no code implementations • CVPR 2022 • Alexander Long, Wei Yin, Thalaiyasingam Ajanthan, Vu Nguyen, Pulak Purkait, Ravi Garg, Alan Blair, Chunhua Shen, Anton Van Den Hengel
We introduce Retrieval Augmented Classification (RAC), a generic approach to augmenting standard image classification pipelines with an explicit retrieval module.
Ranked #6 on Long-tail Learning on iNaturalist 2018
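As a loose illustration of what an explicit retrieval module can add to a classifier, the sketch below blends a base model's softmax with a label histogram over the query's nearest neighbours in a memory bank. The cosine retrieval, the fixed blending weight, and all names are assumptions made for illustration; RAC's actual retrieval module and fusion mechanism differ.

```python
# Minimal sketch of retrieval-augmented classification: base logits blended with the
# label histogram of the query's nearest neighbours in a memory bank (illustrative only).
import numpy as np

def retrieval_augmented_probs(query_emb, base_logits, memory_emb, memory_labels,
                              n_classes, k=10, weight=0.5):
    # Cosine similarity between the query embedding and every memory entry.
    sims = memory_emb @ query_emb / (
        np.linalg.norm(memory_emb, axis=1) * np.linalg.norm(query_emb) + 1e-12)
    topk = np.argsort(sims)[-k:]
    # Turn the retrieved labels into a class histogram.
    retrieved = np.bincount(memory_labels[topk], minlength=n_classes).astype(float)
    retrieved /= retrieved.sum()
    # Blend the base model's softmax with the retrieval signal.
    probs = np.exp(base_logits - base_logits.max())
    probs /= probs.sum()
    return (1 - weight) * probs + weight * retrieved

rng = np.random.default_rng(0)
memory_emb = rng.normal(size=(1000, 64))
memory_labels = rng.integers(0, 10, size=1000)
query = rng.normal(size=64)
print(retrieval_augmented_probs(query, rng.normal(size=10),
                                memory_emb, memory_labels, n_classes=10).argmax())
```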
no code implementations • 11 Jan 2022 • Jack Parker-Holder, Raghu Rajan, Xingyou Song, André Biedenkapp, Yingjie Miao, Theresa Eimer, Baohe Zhang, Vu Nguyen, Roberto Calandra, Aleksandra Faust, Frank Hutter, Marius Lindauer
The combination of Reinforcement Learning (RL) with deep learning has led to a series of impressive feats, with many believing (deep) RL provides a path towards generally capable agents.
no code implementations • 22 Oct 2021 • Vu Nguyen, Marc Peter Deisenroth, Michael A. Osborne
More specifically, we propose the first use of such bounds to improve Gaussian process (GP) posterior sampling and Bayesian optimization (BO).
1 code implementation • EMNLP 2021 • Maximilian Ahrens, Julian Ashwin, Jan-Peter Calliess, Vu Nguyen
To this end, we combine a supervised Bayesian topic model with a Bayesian regression framework and perform supervised representation learning for the text features jointly with the regression parameter training, respecting the Frisch-Waugh-Lovell theorem.
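Since the approach leans on the Frisch-Waugh-Lovell theorem, a quick numerical check of that theorem may help: the OLS coefficients on the covariates of interest from the full regression equal those obtained after residualizing both the outcome and those covariates on the remaining (here, text-feature) block. The synthetic data and variable names below are purely illustrative.

```python
# Numerical check of the Frisch-Waugh-Lovell theorem on synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 500
T = rng.normal(size=(n, 3))          # stand-in for learned text features
X = rng.normal(size=(n, 2))          # covariates of interest
y = X @ np.array([1.5, -2.0]) + T @ np.array([0.5, 0.3, -0.7]) + rng.normal(scale=0.1, size=n)

def ols(A, b):
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Full regression of y on [X, T]: keep the coefficients on X.
beta_full = ols(np.hstack([X, T]), y)[:2]

# FWL: residualize both y and X on T, then regress residuals on residuals.
P = T @ np.linalg.pinv(T)            # projection onto the column space of T
y_res, X_res = y - P @ y, X - P @ X
beta_fwl = ols(X_res, y_res)

assert np.allclose(beta_full, beta_fwl, atol=1e-6)   # identical coefficients, as FWL guarantees
```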
no code implementations • NeurIPS 2021 • Jack Parker-Holder, Vu Nguyen, Shaan Desai, Stephen Roberts
Despite a series of recent successes in reinforcement learning (RL), many RL algorithms remain sensitive to hyperparameters.
1 code implementation • 14 Feb 2021 • Xingchen Wan, Vu Nguyen, Huong Ha, Binxin Ru, Cong Lu, Michael A. Osborne
High-dimensional black-box optimisation remains an important yet notoriously challenging problem.
no code implementations • COLING 2020 • Kiet Nguyen, Vu Nguyen, Anh Nguyen, Ngan Nguyen
Due to the lack of benchmark datasets for Vietnamese, we present the Vietnamese Question Answering Dataset (UIT-ViQuAD), a new dataset for this low-resource language, to evaluate MRC models.
1 code implementation • NeurIPS 2020 • Vu Nguyen, Vaden Masrani, Rob Brekelmans, Michael A. Osborne, Frank Wood
Achieving the full promise of the Thermodynamic Variational Objective (TVO), a recently proposed variational lower bound on the log evidence involving a one-dimensional Riemann integral approximation, requires choosing a "schedule" of sorted discretization points.
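The bound itself is a left Riemann sum of per-beta expectations over the schedule. The sketch below computes such a bound from samples of log p(x, z) - log q(z|x) drawn from q, estimating each expectation with self-normalized importance sampling; the synthetic log-weights and the particular schedule are placeholders, and schedule selection (the focus of the paper) is not shown.

```python
# Sketch of a TVO-style left-Riemann-sum lower bound over a schedule of points in [0, 1].
# Per-beta expectations are estimated with self-normalized importance sampling from q,
# using weights proportional to w**beta with w = p(x, z) / q(z | x). Illustrative only.
import numpy as np

def tvo_lower_bound(log_w, betas):
    """log_w: samples of log p(x,z) - log q(z|x) drawn from q; betas: sorted schedule with betas[0] == 0."""
    expectations = []
    for b in betas:
        log_u = b * log_w
        u = np.exp(log_u - log_u.max())          # stabilized self-normalized weights ~ w**beta
        expectations.append(np.sum(u * log_w) / np.sum(u))
    expectations = np.array(expectations)
    widths = np.diff(np.append(betas, 1.0))      # interval widths of the left Riemann sum
    return np.sum(widths * expectations)

log_w = np.random.default_rng(0).normal(loc=-1.0, scale=1.0, size=10_000)   # synthetic log-weights
print(tvo_lower_bound(log_w, betas=np.array([0.0, 0.1, 0.3, 0.6])))
```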
1 code implementation • 13 Jun 2020 • Vu Nguyen, Tam Le, Makoto Yamada, Michael A. Osborne
Building upon tree-Wasserstein (TW), which is a negative definite variant of OT, we develop a novel discrepancy for neural architectures, and demonstrate it within a Gaussian process surrogate model for the sequential NAS settings.
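For reference, the tree-Wasserstein distance itself has a simple closed form: a sum over tree edges of the edge weight times the absolute difference in subtree mass between the two distributions. The sketch below computes it for a toy rooted tree; how architectures are mapped to trees and how the distance is wrapped into a GP kernel is not shown.

```python
# Minimal sketch of the closed-form tree-Wasserstein distance between two distributions
# over the nodes of a rooted tree (illustrative; nodes must be indexed children-after-parents).
import numpy as np

def tree_wasserstein(mu, nu, parent, edge_weight):
    """parent[i] is the parent of node i (-1 for the root); edge_weight[i] weights edge (i, parent[i])."""
    n = len(mu)
    sub_mu, sub_nu = np.array(mu, dtype=float), np.array(nu, dtype=float)
    # Accumulate subtree masses bottom-up (children have larger indices than their parents).
    for i in range(n - 1, 0, -1):
        sub_mu[parent[i]] += sub_mu[i]
        sub_nu[parent[i]] += sub_nu[i]
    # Sum over edges of edge weight times the absolute difference in subtree mass.
    return sum(edge_weight[i] * abs(sub_mu[i] - sub_nu[i]) for i in range(1, n))

# Toy tree: node 0 is the root; nodes 1-2 are its children; nodes 3-4 are children of node 1.
parent      = [-1, 0, 0, 1, 1]
edge_weight = [0.0, 1.0, 2.0, 1.0, 0.5]
mu = [0.0, 0.2, 0.3, 0.5, 0.0]
nu = [0.1, 0.0, 0.4, 0.2, 0.3]
print(tree_wasserstein(mu, nu, parent, edge_weight))
```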
no code implementations • 26 Feb 2020 • Cheng Li, Sunil Gupta, Santu Rana, Vu Nguyen, Antonio Robles-Kelly, Svetha Venkatesh
It remains unknown, however, how to incorporate expert prior knowledge about the global optimum into the Bayesian optimization process.
2 code implementations • NeurIPS 2020 • Jack Parker-Holder, Vu Nguyen, Stephen Roberts
A recent solution to this problem is Population Based Training (PBT) which updates both weights and hyperparameters in a single training run of a population of agents.
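For context, a generic PBT exploit/explore step looks roughly like the sketch below: poorly scoring members copy the weights and hyperparameters of top performers and then randomly perturb the hyperparameters. This is the vanilla baseline the snippet refers to, not necessarily the paper's own variant, and the truncation fraction and perturbation factors are arbitrary choices.

```python
# Minimal sketch of one PBT-style exploit/explore step over a population of agents (illustrative).
import copy
import random

def pbt_step(population, truncation=0.25, perturb=(0.8, 1.2)):
    """population: list of dicts with keys 'weights', 'hyperparams', 'score'."""
    ranked = sorted(population, key=lambda m: m["score"], reverse=True)
    n_cut = max(1, int(truncation * len(ranked)))
    top, bottom = ranked[:n_cut], ranked[-n_cut:]
    for loser in bottom:
        winner = random.choice(top)
        # Exploit: copy weights and hyperparameters from a top performer.
        loser["weights"] = copy.deepcopy(winner["weights"])
        loser["hyperparams"] = dict(winner["hyperparams"])
        # Explore: randomly perturb each continuous hyperparameter (range clipping omitted).
        for k in loser["hyperparams"]:
            loser["hyperparams"][k] *= random.choice(perturb)
    return population

pop = [{"weights": None, "hyperparams": {"lr": 1e-3, "gamma": 0.99}, "score": random.random()}
       for _ in range(8)]
pop = pbt_step(pop)
```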
no code implementations • 4 Dec 2019 • Samuel Kessler, Vu Nguyen, Stefan Zohren, Stephen Roberts
We place an Indian Buffet process (IBP) prior over the structure of a Bayesian Neural Network (BNN), thus allowing the complexity of the BNN to increase and decrease automatically.
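The stick-breaking construction of the IBP makes this growing and shrinking of structure concrete: activation probabilities decay multiplicatively across units, so later units are switched on only when the data warrant it. The sketch below just samples a binary unit-activation mask from that prior; the paper's inference over this structure is not shown, and alpha, the unit cap, and the number of rows are illustrative.

```python
# Minimal sketch of sampling a binary unit-activation mask from an Indian Buffet Process
# prior via its stick-breaking construction (illustrative; rows could be data points or tasks).
import numpy as np

def sample_ibp_mask(n_rows, max_units, alpha, seed=None):
    """Return an (n_rows, max_units) binary matrix; unit k is active with prob. prod_{j<=k} v_j."""
    rng = np.random.default_rng(seed)
    v = rng.beta(alpha, 1.0, size=max_units)       # stick-breaking variables
    pi = np.cumprod(v)                             # decreasing activation probabilities
    return (rng.uniform(size=(n_rows, max_units)) < pi).astype(int), pi

Z, pi = sample_ibp_mask(n_rows=5, max_units=20, alpha=3.0, seed=0)
print(Z.sum(axis=1))    # each row activates roughly alpha units in expectation
```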
1 code implementation • NeurIPS 2020 • Vu Nguyen, Sebastian Schulze, Michael A. Osborne
We demonstrate the efficiency of our algorithm by tuning hyperparameters for the training of deep reinforcement learning agents and convolutional neural networks.
no code implementations • 22 Jul 2019 • Cheng Li, Santu Rana, Sunil Gupta, Vu Nguyen, Svetha Venkatesh, Alessandra Sutti, David Rubin, Teo Slezak, Murray Height, Mazher Mohammed, Ian Gibson
In this paper, we consider a per-variable monotonic trend in the underlying property, which results in a unimodal trend in those variables when optimizing for a target value.
2 code implementations • ICML 2020 • Binxin Ru, Ahsan S. Alvi, Vu Nguyen, Michael A. Osborne, Stephen J. Roberts
Efficient optimisation of black-box problems that comprise both continuous and categorical inputs is important, yet poses significant challenges.
1 code implementation • ICML 2020 • Vu Nguyen, Michael A. Osborne
In this paper, we consider a new setting in BO in which the knowledge of the optimum output f* is available.
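One simple way to exploit a known optimum value f* is to score candidates by the expected gap between f* and the GP posterior at that point, which has a closed form for a Gaussian posterior. The sketch below implements that acquisition as a plausible stand-in; it is not necessarily the exact acquisition function proposed in the paper.

```python
# Minimal sketch of an acquisition exploiting a known optimum value f_star: minimize the
# expected gap E[max(f_star - f(x), 0)] under the GP posterior (illustrative only).
import numpy as np
from scipy.stats import norm

def expected_regret(mu, sigma, f_star):
    """E[max(f_star - f(x), 0)] for f(x) ~ N(mu, sigma^2); smaller is better."""
    sigma = np.maximum(sigma, 1e-12)
    gamma = (f_star - mu) / sigma
    return (f_star - mu) * norm.cdf(gamma) + sigma * norm.pdf(gamma)

# Pick the candidate whose posterior is expected to be closest to the known optimum value.
mu = np.array([0.2, 0.8, 0.95]); sigma = np.array([0.3, 0.1, 0.2]); f_star = 1.0
x_next = np.argmin(expected_regret(mu, sigma, f_star))
```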
1 code implementation • NeurIPS 2018 • Shivapratap Gopakumar, Sunil Gupta, Santu Rana, Vu Nguyen, Svetha Venkatesh
We address this problem by proposing an efficient framework for algorithmic testing.
no code implementations • 5 Nov 2018 • Vu Nguyen, Sunil Gupta, Santu Rana, Cheng Li, Svetha Venkatesh
Bayesian optimization (BO) and its batch extensions are successful for optimizing expensive black-box functions.
no code implementations • 15 Feb 2018 • Cheng Li, Sunil Gupta, Santu Rana, Vu Nguyen, Svetha Venkatesh, Alistair Shilton
Scaling Bayesian optimization to high dimensions is a challenging task, as the global optimization of a high-dimensional acquisition function can be expensive and often infeasible.
1 code implementation • ECCV 2018 • Hieu Le, Tomas F. Yago Vicente, Vu Nguyen, Minh Hoai, Dimitris Samaras
The A-Net modifies the original training images constrained by a simplified physical shadow model and is focused on fooling the D-Net's shadow predictions.
no code implementations • ICCV 2017 • Vu Nguyen, Tomas F. Yago Vicente, Maozheng Zhao, Minh Hoai, Dimitris Samaras
We introduce scGAN, a novel extension of conditional Generative Adversarial Networks (GAN) tailored for the challenging problem of shadow detection in images.
Ranked #6 on Shadow Detection on ISTD
no code implementations • ICML 2017 • Santu Rana, Cheng Li, Sunil Gupta, Vu Nguyen, Svetha Venkatesh
Bayesian optimization is an efficient way to optimize expensive black-box functions, such as designing a new product with the highest quality or tuning the hyperparameters of a machine learning algorithm.
no code implementations • 3 Apr 2017 • Le Hou, Vu Nguyen, Dimitris Samaras, Tahsin M. Kurc, Yi Gao, Tianhao Zhao, Joel H. Saltz
In this work, we propose a sparse Convolutional Autoencoder (CAE) for fully unsupervised, simultaneous nucleus detection and feature extraction in histopathology tissue images.
no code implementations • 31 Mar 2017 • Hieu Le, Vu Nguyen, Chen-Ping Yu, Dimitris Samaras
This paper proposes a geodesic-distance-based feature that encodes global information for improved video segmentation algorithms.
Ranked #1 on Video Segmentation on SegTrack v2
no code implementations • 15 Mar 2017 • Vu Nguyen, Santu Rana, Sunil Gupta, Cheng Li, Svetha Venkatesh
Current batch BO approaches are restrictive in that they fix the number of evaluations per batch, and this can be wasteful when the number of specified evaluations is larger than the number of real maxima in the underlying acquisition function.
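The sketch below illustrates the adaptive-batch idea in one dimension: detect the local maxima of the acquisition function and let the batch be exactly those peaks, so the batch size matches the number of real maxima. The grid search and peak detection are illustrative simplifications, not the paper's procedure for identifying acquisition maxima.

```python
# Minimal 1-D sketch: adapt the batch size to the number of maxima in the acquisition function.
import numpy as np
from scipy.signal import find_peaks

def adaptive_batch(acquisition, lo, hi, grid_size=2000, max_batch=10):
    xs = np.linspace(lo, hi, grid_size)
    vals = acquisition(xs)
    peaks, _ = find_peaks(vals)                          # indices of local maxima
    # Keep at most max_batch peaks, ordered by acquisition value.
    best = peaks[np.argsort(vals[peaks])[::-1][:max_batch]]
    return xs[best]                                      # batch size == number of detected maxima

acq = lambda x: np.sin(3 * x) * np.exp(-0.1 * x ** 2)
batch = adaptive_batch(acq, -5.0, 5.0)
print(len(batch), batch)
```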
no code implementations • NeurIPS 2016 • Trung Le, Tu Nguyen, Vu Nguyen, Dinh Phung
However, this approach still suffers from a serious shortcoming, as it needs a high-dimensional random feature space to achieve a sufficiently accurate kernel approximation.
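To make that trade-off concrete, the sketch below approximates an RBF kernel with random Fourier features and shows how the approximation error decays as the feature dimension D grows. This is the standard random-features construction, used here only to illustrate why a high-dimensional feature space is needed; it is not the paper's method.

```python
# Minimal sketch of random Fourier features approximating an RBF kernel (illustrative only).
import numpy as np

def rff_features(X, D, gamma, seed=None):
    """Map X (n, d) to D random features so that phi(x) @ phi(y) ~ exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], D))  # spectral samples of the RBF kernel
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
exact = np.exp(-0.5 * sq_dists)                          # exact RBF kernel with gamma = 0.5
for D in (10, 100, 1000):
    Phi = rff_features(X, D, gamma=0.5, seed=0)
    print(D, np.abs(Phi @ Phi.T - exact).max())          # error shrinks roughly like 1 / sqrt(D)
```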
no code implementations • 22 Jun 2016 • Trung Le, Khanh Nguyen, Van Nguyen, Vu Nguyen, Dinh Phung
Acquiring labels is often costly, whereas unlabeled data are usually easy to obtain in modern machine learning applications.
1 code implementation • 22 Apr 2016 • Trung Le, Tu Dinh Nguyen, Vu Nguyen, Dinh Phung
One of the most challenging problems in kernel online learning is to bound the model size and to promote the model sparsity.
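A naive way to bound the model size is a budgeted kernel perceptron that simply discards the oldest support vector once a fixed budget is exceeded. The sketch below shows that baseline only to make the problem concrete; it is not the method proposed in the paper, and the kernel, budget, and toy data are arbitrary.

```python
# Minimal sketch of a budgeted kernel perceptron: the support set is capped at a fixed
# budget B by discarding the oldest support vector (a naive baseline, illustrative only).
import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def budgeted_kernel_perceptron(stream, budget=50, gamma=1.0):
    support, alphas = [], []
    for x, y in stream:                                  # labels y in {-1, +1}
        score = sum(a * rbf(s, x, gamma) for s, a in zip(support, alphas))
        if y * score <= 0:                               # mistake-driven update
            support.append(x)
            alphas.append(float(y))
            if len(support) > budget:                    # enforce the model-size bound
                support.pop(0)
                alphas.pop(0)
    return support, alphas

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = np.sign(X[:, 0] * X[:, 1])                           # a simple nonlinear concept
support, alphas = budgeted_kernel_perceptron(zip(X, y), budget=50)
```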
no code implementations • 9 Jan 2014 • Vu Nguyen, Dinh Phung, XuanLong Nguyen, Svetha Venkatesh, Hung Hai Bui
We present a Bayesian nonparametric framework for multilevel clustering which utilizes group-level context information to simultaneously discover low-dimensional structures of the group contents and partitions groups into clusters.