1 code implementation • ICML 2020 • Thomas Quinn, Dang Nguyen, Santu Rana, Sunil Gupta, Svetha Venkatesh
Interpretability allows the domain-expert to directly evaluate the model's relevance and reliability, a practice that offers assurance and builds trust.
1 code implementation • 25 Jan 2025 • Bao Duong, Sunil Gupta, Thin Nguyen
Existing score-based methods for directed acyclic graph (DAG) learning from observational data struggle to recover the causal graph accurately and sample-efficiently.
no code implementations • 16 Jan 2025 • Thao Minh Le, Vuong Le, Kien Do, Sunil Gupta, Svetha Venkatesh, Truyen Tran
This paper introduces a new problem, Causal Abductive Reasoning on Video Events (CARVE), which involves identifying causal relationships between events in a video and generating hypotheses about causal chains that account for the occurrence of a target event.
no code implementations • 28 Dec 2024 • Dang Nguyen, Sunil Gupta
Given a distortion level, our goal is to predict if the model's accuracy on the set of distorted images is greater than a threshold.
no code implementations • 22 Dec 2024 • Dang Nguyen, Sunil Gupta, Kien Do, Svetha Venkatesh
In other words, we want to predict whether a distortion level makes the image-classifier "non-reliable" or "reliable".
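The reliability decision described here reduces to thresholding per-level accuracy. A minimal sketch (the threshold value and the accuracy figures are illustrative, not taken from the paper):

```python
def reliability_labels(accuracies, threshold=0.7):
    """Label each distortion level True ("reliable") if the classifier's
    accuracy on images distorted at that level meets the threshold."""
    return {level: acc >= threshold for level, acc in accuracies.items()}

# Hypothetical accuracies measured at three distortion levels.
labels = reliability_labels({0.0: 0.95, 0.3: 0.81, 0.6: 0.55}, threshold=0.7)
```

The interesting part of the paper is predicting these labels without evaluating the model at every level; the sketch only shows the labeling rule itself.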
no code implementations • 6 Nov 2024 • Tri Minh Nguyen, Sherif Abdulkader Tawfik, Truyen Tran, Sunil Gupta, Santu Rana, Svetha Venkatesh
We propose the Symmetry-aware Hierarchical Architecture for Flow-based Traversal (SHAFT), a novel generative model employing a hierarchical exploration strategy to efficiently exploit the symmetry of the materials space to generate crystal structures given desired properties.
no code implementations • 29 Oct 2024 • Dang Nguyen, Sunil Gupta, Kien Do, Thin Nguyen, Svetha Venkatesh
To address this problem, we propose an LLM-based method with three important improvements to correctly capture the ground-truth feature-class correlation in the real data.
no code implementations • 14 Oct 2024 • Hung Le, Kien Do, Dung Nguyen, Sunil Gupta, Svetha Venkatesh
To this end, we leverage the Hadamard product for calibrating and updating memory, specifically designed to enhance memory capacity while mitigating numerical and learning challenges.
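As a rough illustration of Hadamard-product memory calibration (the gating scheme, shapes, and function names here are hypothetical, not the paper's architecture): each memory slot is rescaled elementwise, blending the old content with a candidate write.

```python
import numpy as np

def hadamard_memory_update(memory, candidate, gate):
    """Illustrative elementwise (Hadamard) memory calibration.

    memory, candidate, gate: arrays of the same shape; gate entries in [0, 1].
    The gate scales each memory slot independently, so each slot can be
    preserved, overwritten, or partially updated.
    """
    return gate * candidate + (1.0 - gate) * memory

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 8))      # current memory slots
C = rng.standard_normal((4, 8))      # candidate content to write
G = 1.0 / (1.0 + np.exp(-rng.standard_normal((4, 8))))  # sigmoid gate

M_new = hadamard_memory_update(M, C, G)
```

Because the product is elementwise, the update cost is linear in the memory size, which is one reason Hadamard-style updates are attractive for large memories.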
no code implementations • 8 Oct 2024 • Giang Ngo, Dang Nguyen, Sunil Gupta
The objective of active level set estimation for a black-box function is to precisely identify regions where the function values exceed or fall below a specified threshold by iteratively performing function evaluations to gather more information about the function.
no code implementations • 24 May 2024 • Alistair Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh
The first model is exact (un-approximated) and global, casting the neural network as an element of a reproducing kernel Banach space (RKBS); we use this model to provide tight bounds on Rademacher complexity.
no code implementations • 27 Feb 2024 • Arun Kumar A V, Alistair Shilton, Sunil Gupta, Santu Rana, Stewart Greenhill, Svetha Venkatesh
Experimental (design) optimization is a key driver in designing and discovering new products and processes.
no code implementations • 26 Feb 2024 • Giang Ngo, Dang Nguyen, Dat Phan-Trong, Sunil Gupta
When the function is black-box and expensive to evaluate, the level sets need to be found using a minimum number of function evaluations.
1 code implementation • 16 Feb 2024 • Linh Le Pham Van, Hung The Tran, Sunil Gupta
It is anchored in the central concepts of skewing and extending the source support towards the target support to mitigate support deficiencies.
no code implementations • 5 Feb 2024 • Dat Phan-Trong, Hung The Tran, Alistair Shilton, Sunil Gupta
Black-box optimization is a powerful approach for discovering global optima in noisy and expensive black-box functions, a problem widely encountered in real-world scenarios.
no code implementations • 19 Dec 2023 • Phuoc Nguyen, Truyen Tran, Sunil Gupta, Thin Nguyen, Svetha Venkatesh
We then represent the functional form of a target outlier leaf as a function of the node and edge noises.
1 code implementation • 7 Dec 2023 • Tuan Hoang, Santu Rana, Sunil Gupta, Svetha Venkatesh
Recent data-privacy laws have sparked interest in machine unlearning, which involves removing the effect of specific training samples from a learnt model as if they were never present in the original training dataset.
1 code implementation • 21 Aug 2023 • Thommen George Karimpanal, Laknath Buddhika Semage, Santu Rana, Hung Le, Truyen Tran, Sunil Gupta, Svetha Venkatesh
To address this issue, we introduce SEQ (sample efficient querying), where we simultaneously train a secondary RL agent to decide when the LLM should be queried for solutions.
no code implementations • 1 Aug 2023 • Manisha Senadeera, Santu Rana, Sunil Gupta, Svetha Venkatesh
Specifically, we propose a novel way of integrating model selection and BO for the single goal of reaching the function optima faster.
no code implementations • 1 Jun 2023 • Manisha Senadeera, Thommen Karimpanal George, Sunil Gupta, Stephan Jacobs, Santu Rana
This involves learning an "Imagination Network" to transform the other agent's observed state in order to produce a human-interpretable "empathetic state" which, when presented to the learning agent, produces behaviours that mimic the other agent.
no code implementations • 3 Mar 2023 • Sunil Gupta, Alistair Shilton, Arun Kumar A V, Shannon Ryan, Majid Abdolshah, Hung Le, Santu Rana, Julian Berk, Mahad Rashid, Svetha Venkatesh
In this paper we introduce BO-Muse, a new approach to human-AI teaming for the optimization of expensive black-box functions.
1 code implementation • 3 Mar 2023 • Dat Phan-Trong, Hung Tran-The, Sunil Gupta
Bayesian Optimization (BO) is an effective approach for global optimization of black-box functions when function evaluations are expensive.
no code implementations • 1 Feb 2023 • Alistair Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh
The study of Neural Tangent Kernels (NTKs) has provided much needed insight into convergence and generalization properties of neural networks in the over-parametrized (wide) limit by approximating the network using a first-order Taylor expansion with respect to its weights in the neighborhood of their initialization values.
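The first-order Taylor expansion underlying the NTK view can be checked numerically on a toy example. The sketch below uses a hypothetical two-layer scalar network and finite-difference gradients (not the paper's setup) to compare the network with its linearization near initialization:

```python
import numpy as np

def net(w, x):
    """Tiny one-hidden-layer network; w packs both layers (3 + 3 weights)."""
    w1, w2 = w[:3], w[3:]
    return float(np.tanh(x * w1) @ w2)

def linearized(w, w0, x, eps=1e-6):
    """First-order Taylor expansion of net(., x) around w0,
    using central finite differences for the gradient."""
    grad = np.array([(net(w0 + eps * e, x) - net(w0 - eps * e, x)) / (2 * eps)
                     for e in np.eye(len(w0))])
    return net(w0, x) + grad @ (w - w0)

rng = np.random.default_rng(1)
w0 = rng.standard_normal(6)
w = w0 + 1e-4 * rng.standard_normal(6)  # small step from initialization
x = 0.7
```

Close to `w0` the remainder is quadratic in the step size, so `net(w, x)` and `linearized(w, w0, x)` agree to high precision; the NTK analyses quoted above make this approximation exact in the infinite-width limit.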
no code implementations • ICCV 2023 • Prashant W. Patil, Sunil Gupta, Santu Rana, Svetha Venkatesh, Subrahmanyam Murala
Therefore, effective restoration of multi-weather degraded images is an essential prerequisite for successful functioning of such systems.
no code implementations • 23 Nov 2022 • Thanh Nguyen-Tang, Ming Yin, Sunil Gupta, Svetha Venkatesh, Raman Arora
To the best of our knowledge, these are the first $\tilde{\mathcal{O}}(\frac{1}{K})$ bound and absolute zero sub-optimality bound respectively for offline RL with linear function approximation from adaptive data with partial coverage.
no code implementations • 16 Sep 2022 • Dupati Srikar Chandra, Sakshi Varshney, P. K. Srijith, Sunil Gupta
However, the continual learning performance of existing hypernetwork-based approaches is affected by the assumption that weights are independent across layers, made in order to maintain parameter efficiency.
1 code implementation • 25 Jul 2022 • Dang Nguyen, Sunil Gupta, Kien Do, Svetha Venkatesh
Traditional KD methods require a large number of labeled training samples and a white-box teacher (one whose parameters are accessible) to train a good student.
no code implementations • 8 Jul 2022 • Haripriya Harikumar, Santu Rana, Kien Do, Sunil Gupta, Wei Zong, Willy Susilo, Svetha Venkatesh
To defend against this attack, we first introduce a trigger reverse-engineering mechanism that uses multiple images to recover a variety of potential triggers.
no code implementations • 25 May 2022 • Thao Minh Le, Vuong Le, Sunil Gupta, Svetha Venkatesh, Truyen Tran
This grounding guides the attention mechanism inside VQA models through a duality of mechanisms: pre-training attention weight calculation and directly guiding the weights at inference time on a case-by-case basis.
no code implementations • 13 May 2022 • Phuoc Nguyen, Truyen Tran, Ky Le, Sunil Gupta, Santu Rana, Dang Nguyen, Trong Nguyen, Shannon Ryan, Svetha Venkatesh
We introduce a conditional compression problem and propose a fast framework for tackling it.
no code implementations • 20 Apr 2022 • Hung Le, Thommen Karimpanal George, Majid Abdolshah, Dung Nguyen, Kien Do, Sunil Gupta, Svetha Venkatesh
We introduce a constrained optimization method for policy gradient reinforcement learning, which uses a virtual trust region to regulate each policy update.
no code implementations • 15 Mar 2022 • Hung Tran-The, Sunil Gupta, Santu Rana, Svetha Venkatesh
In particular, whether the EI strategy with a standard incumbent converges in the noisy setting remains an open question in Gaussian process bandit optimization.
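For reference, the standard EI acquisition with a noise-free incumbent can be written in a few lines (this is the textbook rule for maximization, not the paper's analysis; stdlib only):

```python
import math

def _pdf(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, incumbent):
    """EI for maximization: E[max(f(x) - f_best, 0)] under a Gaussian
    posterior N(mu, sigma^2) at a candidate point; incumbent = best
    observed value f_best."""
    if sigma <= 0.0:
        return max(mu - incumbent, 0.0)
    z = (mu - incumbent) / sigma
    return (mu - incumbent) * _cdf(z) + sigma * _pdf(z)
```

The convergence question above concerns exactly this rule when the incumbent is taken from noisy observations rather than true function values.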
1 code implementation • NeurIPS 2021 • Arun Kumar Anjanapura Venkatesh, Alistair Shilton, Santu Rana, Sunil Gupta, Svetha Venkatesh
Traditional methods for kernel selection rely on parametric kernel functions or a combination thereof, and although the kernel hyperparameters are tuned, these methods often provide sub-optimal results due to the limitations induced by the parametric forms.
1 code implementation • ICLR 2022 • Thanh Nguyen-Tang, Sunil Gupta, A. Tuan Nguyen, Svetha Venkatesh
Moreover, we show that our method is more computationally efficient and has a better dependence on the effective dimension of the neural network than an online counterpart.
no code implementations • 3 Nov 2021 • Thommen George Karimpanal, Hung Le, Majid Abdolshah, Santu Rana, Sunil Gupta, Truyen Tran, Svetha Venkatesh
The optimistic nature of the $Q$-learning target leads to an overestimation bias, which is an inherent problem associated with standard $Q$-learning.
no code implementations • 26 Oct 2021 • Haripriya Harikumar, Kien Do, Santu Rana, Sunil Gupta, Svetha Venkatesh
In this paper, we propose a novel host-free Trojan attack with triggers that are fixed in the semantic space but not necessarily in the pixel space.
no code implementations • 13 Oct 2021 • Thomas P Quinn, Sunil Gupta, Svetha Venkatesh, Vuong Le
This article is a field guide to transparent model design.
no code implementations • 29 Sep 2021 • Majid Abdolshah, Hung Le, Thommen Karimpanal George, Vuong Le, Sunil Gupta, Santu Rana, Svetha Venkatesh
Whilst Generative Adversarial Networks (GANs) generate visually appealing high resolution images, the latent representations (or codes) of these models do not allow controllable changes on the semantic attributes of the generated images.
no code implementations • 29 Sep 2021 • Thommen Karimpanal George, Majid Abdolshah, Hung Le, Santu Rana, Sunil Gupta, Truyen Tran, Svetha Venkatesh
The objective in goal-based reinforcement learning is to learn a policy to reach a particular goal state within the environment.
no code implementations • 29 Sep 2021 • Hung Tran-The, Sunil Gupta, Santu Rana, Long Tran-Thanh, Svetha Venkatesh
With a linear reward function, we demonstrate that our algorithm achieves a near-optimal regret.
no code implementations • 20 Aug 2021 • Majid Abdolshah, Hung Le, Thommen Karimpanal George, Sunil Gupta, Santu Rana, Svetha Venkatesh
This is achieved by representing the global transition dynamics as a union of local transition functions, each with respect to one active object in the scene.
no code implementations • 28 Jul 2021 • Anusua Trivedi, Jocelyn Desbiens, Ron Gross, Sunil Gupta, Rahul Dodhia, Juan Lavista Ferres
Conclusion: In the case of DR, most of the disease biomarkers are related topologically to the vasculature.
no code implementations • 24 Jul 2021 • Hung Tran-The, Sunil Gupta, Thanh Nguyen-Tang, Santu Rana, Svetha Venkatesh
We propose a novel approach that uses a hybrid of offline learning with online exploration.
no code implementations • 18 Jul 2021 • Majid Abdolshah, Hung Le, Thommen Karimpanal George, Sunil Gupta, Santu Rana, Svetha Venkatesh
Transfer in reinforcement learning is usually achieved through generalisation across tasks.
no code implementations • 10 May 2021 • Hung Tran-The, Sunil Gupta, Santu Rana, Svetha Venkatesh
Bayesian optimisation (BO) is a well-known efficient algorithm for finding the global optimum of expensive, black-box functions.
no code implementations • 11 Apr 2021 • Huong Ha, Sunil Gupta, Santu Rana, Svetha Venkatesh
Machine learning models are being used extensively in many important areas, but there is no guarantee a model will always perform well or as its developers intended.
no code implementations • 11 Mar 2021 • Thanh Nguyen-Tang, Sunil Gupta, Hung Tran-The, Svetha Venkatesh
To the best of our knowledge, this is the first theoretical characterization of the sample complexity of offline RL with deep neural network function approximation under the general Besov regularity condition that goes beyond the linearity regime of traditional reproducing kernel Hilbert spaces and Neural Tangent Kernels.
1 code implementation • 17 Dec 2020 • Huong Ha, Sunil Gupta, Santu Rana, Svetha Venkatesh
In particular, we consider two types of LSE problems: (1) the explicit LSE problem, where the threshold level is a fixed user-specified value, and (2) the implicit LSE problem, where the threshold level is defined as a percentage of the (unknown) maximum of the objective function.
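The two threshold conventions can be sketched as follows (the function names and the plug-in estimate used for the implicit case are illustrative, not the paper's estimator):

```python
import numpy as np

def lse_threshold(observed, mode="explicit", level=0.5, alpha=0.8):
    """Threshold for level set estimation over observed function values.

    explicit: the threshold is the fixed user-specified `level`.
    implicit: the threshold is `alpha` times the best value seen so far,
              a plug-in estimate of alpha * max f (the true max is unknown).
    """
    if mode == "explicit":
        return level
    return alpha * float(np.max(observed))

def classify_superlevel(observed, threshold):
    """Mark each point as inside (True) or outside (False) the superlevel set."""
    return np.asarray(observed) >= threshold
```

In the implicit case the threshold moves as better points are found, which is what makes that variant harder to solve than the explicit one.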
no code implementations • 19 Nov 2020 • Anh-Cat Le-Ngo, Truyen Tran, Santu Rana, Sunil Gupta, Svetha Venkatesh
We propose a new model-agnostic logic constraint to tackle this issue by formulating a logically consistent loss in the multi-task learning framework as well as a data organisation called family-batch and hybrid-batch.
no code implementations • 20 Sep 2020 • Duc Nguyen, Phuoc Nguyen, Kien Do, Santu Rana, Sunil Gupta, Truyen Tran
These include the capacity of the compact matrix LSTM to compress noisy data near-perfectly, making the strategy of compressing and decompressing data ill-suited for anomaly detection in the presence of noise.
no code implementations • 8 Sep 2020 • Alistair Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh
We propose an algorithm for Bayesian functional optimisation - that is, finding the function to optimise a process - guided by experimenter beliefs and intuitions regarding the expected characteristics (length-scale, smoothness, cyclicity etc.)
no code implementations • NeurIPS 2020 • Hung Tran-The, Sunil Gupta, Santu Rana, Huong Ha, Svetha Venkatesh
To this end, we propose a novel BO algorithm which expands (and shifts) the search space over iterations by controlling the expansion rate through a hyperharmonic series.
1 code implementation • 24 Jul 2020 • Thanh Tang Nguyen, Sunil Gupta, Svetha Venkatesh
We consider the problem of learning a set of probability distributions from the empirical Bellman dynamics in distributional reinforcement learning (RL), a class of state-of-the-art methods that estimate the distribution, as opposed to only the expectation, of the total return.
no code implementations • 15 Jul 2020 • Alistair Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh
In this paper we explore a connection between deep networks and learning in reproducing kernel Krein space.
no code implementations • 19 Jun 2020 • Phuc Luong, Dang Nguyen, Sunil Gupta, Santu Rana, Svetha Venkatesh
In real-world applications, BO often faces a major problem of missing values in inputs.
no code implementations • 10 Jun 2020 • Haripriya Harikumar, Vuong Le, Santu Rana, Sourangshu Bhattacharya, Sunil Gupta, Svetha Venkatesh
Recently, it has been shown that deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
1 code implementation • 8 Jun 2020 • Julian Berk, Sunil Gupta, Santu Rana, Svetha Venkatesh
In order to improve the performance of Bayesian optimisation, we develop a modified Gaussian process upper confidence bound (GP-UCB) acquisition function.
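For context, the baseline GP-UCB rule that such work modifies scores each candidate as the posterior mean plus a scaled posterior standard deviation. A sketch with the common theoretical beta schedule (the paper's modification is not reproduced here; the posterior values below are illustrative):

```python
import math
import numpy as np

def gp_ucb(mu, sigma, t, d=1, delta=0.1):
    """Baseline GP-UCB acquisition: mu + sqrt(beta_t) * sigma, with the
    widely used schedule beta_t = 2 log(t^2 d pi^2 / (6 delta)).
    mu, sigma: GP posterior mean and std at candidate points; t: iteration."""
    beta = 2.0 * math.log((t ** 2) * d * (math.pi ** 2) / (6.0 * delta))
    return mu + math.sqrt(beta) * sigma

# Hypothetical posterior over three candidates at iteration 5.
mu = np.array([0.2, 0.5, 0.1])
sigma = np.array([0.3, 0.05, 0.6])
scores = gp_ucb(mu, sigma, t=5)
next_x = int(np.argmax(scores))  # candidate to evaluate next
```

Note how the high-variance candidate wins despite its low mean: the sqrt(beta) factor trades exploitation for exploration, and it is exactly this trade-off that modified GP-UCB variants retune.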
1 code implementation • 2 Jun 2020 • Thomas P. Quinn, Dang Nguyen, Santu Rana, Sunil Gupta, Svetha Venkatesh
We define personalized interpretability as a measure of sample-specific feature attribution, and view it as a minimum requirement for a precision health model to justify its conclusions.
no code implementations • 18 May 2020 • Phuoc Nguyen, Truyen Tran, Sunil Gupta, Santu Rana, Hieu-Chi Dam, Svetha Venkatesh
Given a target distribution, we predict the posterior distribution of the latent code, then use a matrix-network decoder to generate a posterior distribution $q(\theta)$.
no code implementations • 27 Mar 2020 • Anil Ramachandran, Sunil Gupta, Santu Rana, Cheng Li, Svetha Venkatesh
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
no code implementations • 26 Feb 2020 • Cheng Li, Sunil Gupta, Santu Rana, Vu Nguyen, Antonio Robles-Kelly, Svetha Venkatesh
Again, it is unknown how to incorporate expert prior knowledge about the global optimum into the Bayesian optimization process.
1 code implementation • 19 Jan 2020 • Thanh Tang Nguyen, Sunil Gupta, Huong Ha, Santu Rana, Svetha Venkatesh
We adopt the distributionally robust optimization perspective to this problem by maximizing the expected objective under the most adversarial distribution.
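On a discrete ambiguity set, the worst-case expectation in this kind of DRO formulation can be sketched directly (all payoffs, decisions, and candidate distributions below are illustrative, not the paper's setting):

```python
import numpy as np

def robust_value(payoffs, distributions):
    """Worst-case expected payoff: the minimum over candidate scenario
    distributions of E_p[payoff]. `payoffs` has shape (n_scenarios,);
    each distribution sums to 1."""
    return min(float(np.dot(p, payoffs)) for p in distributions)

def robust_choice(payoff_matrix, distributions):
    """Pick the decision (row) that maximizes the worst-case expectation."""
    values = [robust_value(row, distributions) for row in payoff_matrix]
    return int(np.argmax(values)), values

# Two decisions evaluated over three scenarios, with an ambiguity set of
# two candidate scenario distributions.
payoff_matrix = np.array([[3.0, 1.0, 0.0],
                          [2.0, 2.0, 2.0]])
dists = [np.array([0.8, 0.1, 0.1]), np.array([0.1, 0.1, 0.8])]
best, values = robust_choice(payoff_matrix, dists)
```

The flat decision wins: its high-mean rival collapses under the adversarial distribution, which is the max-min behaviour the DRO perspective is designed to capture.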
1 code implementation • 28 Nov 2019 • Dang Nguyen, Sunil Gupta, Santu Rana, Alistair Shilton, Svetha Venkatesh
To optimize such functions, we propose a new method that formulates the problem as a multi-armed bandit problem, wherein each category corresponds to an arm with its reward distribution centered around the optimum of the objective function in continuous variables.
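A minimal sketch of the arm-selection side of such a formulation, using the classic UCB1 rule over categories (a generic bandit rule, not the paper's exact method; in the full method each arm's mean reward would come from continuous optimization within that category):

```python
import math

def ucb_arm(counts, means, t, c=2.0):
    """UCB1 arm selection: pick the category maximizing
    mean + sqrt(c * log(t) / n_pulls), pulling any unpulled arm first.

    counts[a]: times category a was selected; means[a]: its empirical
    mean reward; t: total number of selections so far."""
    for a, n in enumerate(counts):
        if n == 0:
            return a  # ensure every category is tried at least once
    scores = [m + math.sqrt(c * math.log(t) / n)
              for m, n in zip(means, counts)]
    return max(range(len(scores)), key=scores.__getitem__)
```

A rarely pulled arm gets a large exploration bonus, so promising but under-sampled categories keep being revisited, mirroring how the proposed method balances categorical exploration with continuous optimization.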
no code implementations • 27 Nov 2019 • Hung Tran-The, Sunil Gupta, Santu Rana, Svetha Venkatesh
Optimising the acquisition function in low-dimensional subspaces allows our method to obtain accurate solutions within a limited computational budget.
1 code implementation • NeurIPS 2019 • Huong Ha, Santu Rana, Sunil Gupta, Thanh Nguyen, Hung Tran-The, Svetha Venkatesh
Applying Bayesian optimization in problems wherein the search space is unknown is challenging.
no code implementations • 10 Sep 2019 • Thommen George Karimpanal, Santu Rana, Sunil Gupta, Truyen Tran, Svetha Venkatesh
Prior access to domain knowledge could significantly improve the performance of a reinforcement learning agent.
no code implementations • 9 Sep 2019 • Majid Abdolshah, Alistair Shilton, Santu Rana, Sunil Gupta, Svetha Venkatesh
We introduce a cost-aware multi-objective Bayesian optimisation method with non-uniform evaluation costs over the objective functions, achieved by defining cost-aware constraints over the search space.
no code implementations • 22 Jul 2019 • Cheng Li, Santu Rana, Sunil Gupta, Vu Nguyen, Svetha Venkatesh, Alessandra Sutti, David Rubin, Teo Slezak, Murray Height, Mazher Mohammed, Ian Gibson
In this paper, we consider a per-variable monotonic trend in the underlying property, which results in a unimodal trend in those variables for target value optimization.
no code implementations • 21 Jun 2019 • Ang Yang, Cheng Li, Santu Rana, Sunil Gupta, Svetha Venkatesh
Since the balance between the predictive mean and the predictive variance is the key determinant of the success of Bayesian optimization, current sparse spectrum methods are less suitable for it.
no code implementations • 21 Feb 2019 • Alistair Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh, Majid Abdolshah, Dang Nguyen
In this paper we consider the problem of finding stable maxima of expensive (to evaluate) functions.
no code implementations • NeurIPS 2019 • Majid Abdolshah, Alistair Shilton, Santu Rana, Sunil Gupta, Svetha Venkatesh
We present a multi-objective Bayesian optimisation algorithm that allows the user to express preference-order constraints on the objectives of the type "objective A is more important than objective B".
no code implementations • 6 Feb 2019 • Tinu Theckel Joy, Santu Rana, Sunil Gupta, Svetha Venkatesh
We initially tune the hyperparameters on a small subset of training data using Bayesian optimization.
1 code implementation • NeurIPS 2018 • Shivapratap Gopakumar, Sunil Gupta, Santu Rana, Vu Nguyen, Svetha Venkatesh
We address this problem by proposing an efficient framework for algorithmic testing.
no code implementations • 5 Nov 2018 • Vu Nguyen, Sunil Gupta, Santu Rana, Cheng Li, Svetha Venkatesh
Bayesian optimization (BO) and its batch extensions are successful for optimizing expensive black-box functions.
no code implementations • 19 Sep 2018 • Pratibha Vellanki, Santu Rana, Sunil Gupta, David Rubin de Celis Leal, Alessandra Sutti, Murray Height, Svetha Venkatesh
Real-world experiments are expensive, and thus it is important to reach a target in a minimum number of experiments.
no code implementations • 21 May 2018 • Alistair Shilton, Sunil Gupta, Santu Rana, Pratibha Vellanki, Laurence Park, Cheng Li, Svetha Venkatesh, Alessandra Sutti, David Rubin, Thomas Dorin, Alireza Vahid, Murray Height, Teo Slezak
In this paper we show how such auxiliary data may be used to construct a GP covariance corresponding to a more appropriate weight prior for the objective function.
no code implementations • 16 Feb 2018 • Cheng Li, David Rubin de Celis Leal, Santu Rana, Sunil Gupta, Alessandra Sutti, Stewart Greenhill, Teo Slezak, Murray Height, Svetha Venkatesh
The discovery of processes for the synthesis of new materials involves many decisions about process design, operation, and material properties.
no code implementations • 15 Feb 2018 • Cheng Li, Sunil Gupta, Santu Rana, Vu Nguyen, Svetha Venkatesh, Alistair Shilton
Scaling Bayesian optimization to high dimensions is a challenging task, as the global optimization of a high-dimensional acquisition function can be expensive and often infeasible.
no code implementations • 15 Feb 2018 • Alistair Shilton, Sunil Gupta, Santu Rana, Pratibha Vellanki, Cheng Li, Laurence Park, Svetha Venkatesh, Alessandra Sutti, David Rubin, Thomas Dorin, Alireza Vahid, Murray Height
The paper presents a novel approach to direct covariance function learning for Bayesian optimisation, with particular emphasis on experimental design problems where an existing corpus of condensed knowledge is present.
no code implementations • NeurIPS 2017 • Pratibha Vellanki, Santu Rana, Sunil Gupta, David Rubin, Alessandra Sutti, Thomas Dorin, Murray Height, Paul Sanders, Svetha Venkatesh
We demonstrate the performance of both pc-BO(basic) and pc-BO(nested) by optimising benchmark test functions, tuning hyper-parameters of the SVM classifier, optimising the heat-treatment process for an Al-Sc alloy to achieve target hardness, and optimising the short polymer fibre production process.
no code implementations • ICML 2017 • Santu Rana, Cheng Li, Sunil Gupta, Vu Nguyen, Svetha Venkatesh
Bayesian optimization is an efficient way to optimize expensive black-box functions, such as designing a new product with the highest quality or tuning the hyperparameters of a machine learning algorithm.
no code implementations • 15 Mar 2017 • Vu Nguyen, Santu Rana, Sunil Gupta, Cheng Li, Svetha Venkatesh
Current batch BO approaches are restrictive in that they fix the number of evaluations per batch, and this can be wasteful when the number of specified evaluations is larger than the number of real maxima in the underlying acquisition function.