
1 code implementation • ICML 2020 • Thomas Quinn, Dang Nguyen, Santu Rana, Sunil Gupta, Svetha Venkatesh

Interpretability allows the domain-expert to directly evaluate the model's relevance and reliability, a practice that offers assurance and builds trust.

no code implementations • 5 Feb 2024 • Kien Do, Dung Nguyen, Hung Le, Thao Le, Dang Nguyen, Haripriya Harikumar, Truyen Tran, Santu Rana, Svetha Venkatesh

To overcome this challenge, we propose to approximate $\frac{1}{p(u|b)}$ using a biased classifier trained with "bias amplification" losses.
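The core idea — reweighting samples by the inverse of a biased classifier's predicted probability, so that bias-conflicting samples count more — can be sketched as follows. This is a minimal, generic illustration, not the paper's implementation; the probabilities `p_u_given_b` are hypothetical classifier outputs.

```python
import numpy as np

# Hypothetical predicted probabilities p(u|b) from a biased classifier,
# one per training sample.  Small probabilities mark the samples the
# biased model handles poorly, i.e. the bias-conflicting ones.
p_u_given_b = np.array([0.9, 0.5, 0.1, 0.02])

eps = 1e-3                                    # floor to keep weights finite
weights = 1.0 / np.clip(p_u_given_b, eps, 1.0)
weights /= weights.sum()                      # normalise to a distribution
```

Samples the biased classifier is confident about receive small weights, while bias-conflicting samples dominate the reweighted loss.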

1 code implementation • 7 Dec 2023 • Tuan Hoang, Santu Rana, Sunil Gupta, Svetha Venkatesh

Recent data-privacy laws have sparked interest in machine unlearning, which involves removing the effect of specific training samples from a learnt model as if they were never present in the original training dataset.

no code implementations • 8 Sep 2023 • Kishan R. Nagiredla, Buddhika L. Semage, Thommen G. Karimpanal, Arun Kumar A. V, Santu Rana

To improve sample-efficiency, we propose a multi-fidelity design exploration strategy based on Hyperband, in which we tie the controllers learnt across design spaces through a universal policy learner that warm-starts subsequent controller learning problems.

1 code implementation • 21 Aug 2023 • Thommen George Karimpanal, Laknath Buddhika Semage, Santu Rana, Hung Le, Truyen Tran, Sunil Gupta, Svetha Venkatesh

To address this issue, we introduce SEQ (sample efficient querying), where we simultaneously train a secondary RL agent to decide when the LLM should be queried for solutions.

no code implementations • 1 Aug 2023 • Manisha Senadeera, Santu Rana, Sunil Gupta, Svetha Venkatesh

Specifically, we propose a novel way of integrating model selection and BO for the single goal of reaching the function optima faster.

no code implementations • 1 Jun 2023 • Manisha Senadeera, Thommen Karimpanal George, Sunil Gupta, Stephan Jacobs, Santu Rana

This involves learning an "Imagination Network" to transform the other agent's observed state in order to produce a human-interpretable "empathetic state" which, when presented to the learning agent, produces behaviours that mimic the other agent.

no code implementations • 3 May 2023 • Kiran Purohit, Soumi Das, Sourangshu Bhattacharya, Santu Rana

We also show that LearnDefend is robust to size and noise in the marking of clean examples in the defense dataset.

1 code implementation • 7 Mar 2023 • Maxence Hussonnois, Thommen George Karimpanal, Santu Rana

Autonomously learning diverse behaviors without an extrinsic reward signal has been a problem of interest in reinforcement learning.

no code implementations • 3 Mar 2023 • Sunil Gupta, Alistair Shilton, Arun Kumar A V, Shannon Ryan, Majid Abdolshah, Hung Le, Santu Rana, Julian Berk, Mahad Rashid, Svetha Venkatesh

In this paper we introduce BO-Muse, a new approach to human-AI teaming for the optimization of expensive black-box functions.

no code implementations • 8 Feb 2023 • Buddhika Laknath Semage, Thommen George Karimpanal, Santu Rana, Svetha Venkatesh

However, simulators are generally incapable of accurately replicating real-world dynamics, and thus bridging the sim2real gap is an important problem in simulation based learning.

no code implementations • 1 Feb 2023 • Alistair Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh

The study of Neural Tangent Kernels (NTKs) has provided much needed insight into convergence and generalization properties of neural networks in the over-parametrized (wide) limit by approximating the network using a first-order Taylor expansion with respect to its weights in the neighborhood of their initialization values.

no code implementations • ICCV 2023 • Prashant W. Patil, Sunil Gupta, Santu Rana, Svetha Venkatesh, Subrahmanyam Murala

Therefore, effective restoration of multi-weather degraded images is an essential prerequisite for successful functioning of such systems.

no code implementations • 21 Sep 2022 • Kien Do, Hung Le, Dung Nguyen, Dang Nguyen, Haripriya Harikumar, Truyen Tran, Santu Rana, Svetha Venkatesh

Since the EMA generator can be considered an ensemble of the generator's old versions and often undergoes smaller changes per update than the generator itself, training on its synthetic samples can help the student recall past knowledge and prevent it from adapting too quickly to new updates of the generator.

no code implementations • 8 Jul 2022 • Haripriya Harikumar, Santu Rana, Kien Do, Sunil Gupta, Wei Zong, Willy Susilo, Svetha Venkatesh

To defend against this attack, we first introduce a trigger reverse-engineering mechanism that uses multiple images to recover a variety of potential triggers.

no code implementations • 13 May 2022 • Phuoc Nguyen, Truyen Tran, Ky Le, Sunil Gupta, Santu Rana, Dang Nguyen, Trong Nguyen, Shannon Ryan, Svetha Venkatesh

We introduce a conditional compression problem and propose a fast framework for tackling it.

no code implementations • 15 Mar 2022 • Hung Tran-The, Sunil Gupta, Santu Rana, Svetha Venkatesh

In particular, whether the EI strategy with a standard incumbent converges in the noisy setting remains an open question in Gaussian process bandit optimization.
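For context, the standard (noise-free) Expected Improvement acquisition referred to here has a closed form under the GP posterior. A minimal sketch, for maximization with `best` as the incumbent (best observed value):

```python
import math

def expected_improvement(mu, sigma, best):
    """Closed-form EI for maximization: (mu - best) * Phi(z) + sigma * phi(z),
    with z = (mu - best) / sigma.  Returns the zero-variance limit when sigma == 0."""
    if sigma <= 0.0:
        return max(mu - best, 0.0)
    z = (mu - best) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (mu - best) * cdf + sigma * pdf
```

With observation noise, the "standard incumbent" (best noisy observation) is itself a random quantity, which is precisely what complicates the convergence analysis.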

no code implementations • 24 Feb 2022 • Kien Do, Haripriya Harikumar, Hung Le, Dung Nguyen, Truyen Tran, Santu Rana, Dang Nguyen, Willy Susilo, Svetha Venkatesh

Trojan attacks on deep neural networks are both dangerous and surreptitious.

no code implementations • 11 Feb 2022 • Buddhika Laknath Semage, Thommen George Karimpanal, Santu Rana, Svetha Venkatesh

Adapting an agent's behaviour to new environments has been one of the primary focus areas of physics based reinforcement learning.

no code implementations • 11 Feb 2022 • Buddhika Laknath Semage, Thommen George Karimpanal, Santu Rana, Svetha Venkatesh

Sim2real transfer is primarily concerned with transferring policies trained in simulation to potentially noisy real world environments.

1 code implementation • NeurIPS 2021 • Arun Kumar Anjanapura Venkatesh, Alistair Shilton, Santu Rana, Sunil Gupta, Svetha Venkatesh

Traditional methods for kernel selection rely on parametric kernel functions or a combination thereof, and although the kernel hyperparameters are tuned, these methods often provide sub-optimal results due to the limitations induced by the parametric forms.

no code implementations • 3 Nov 2021 • Thommen George Karimpanal, Hung Le, Majid Abdolshah, Santu Rana, Sunil Gupta, Truyen Tran, Svetha Venkatesh

The optimistic nature of the Q-learning target leads to an overestimation bias, which is an inherent problem associated with standard $Q$-learning.
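The overestimation bias is easy to demonstrate numerically: even when every per-action Q estimate is unbiased, taking a max over noisy estimates is biased upward, whereas a double-estimator target (select with one estimate, evaluate with an independent one, as in Double Q-learning) is not. A small illustrative sketch, unrelated to the paper's specific remedy:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_actions = 10_000, 4
true_q = np.zeros(n_actions)          # every action is equally worthless

# Unbiased per-action estimates: true value plus zero-mean noise.
estimates = true_q + rng.normal(0.0, 1.0, size=(n_trials, n_actions))
single_max = estimates.max(axis=1).mean()        # standard Q-learning target

# Double estimator: argmax on one estimate, value from an independent one.
eval_est = true_q + rng.normal(0.0, 1.0, size=(n_trials, n_actions))
double_max = eval_est[np.arange(n_trials), estimates.argmax(axis=1)].mean()
```

Here `single_max` is substantially above the true value of 0, while `double_max` stays near 0.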

no code implementations • 26 Oct 2021 • Haripriya Harikumar, Kien Do, Santu Rana, Sunil Gupta, Svetha Venkatesh

In this paper, we propose a novel host-free Trojan attack with triggers that are fixed in the semantic space but not necessarily in the pixel space.

no code implementations • 29 Sep 2021 • Hung Tran-The, Sunil Gupta, Santu Rana, Long Tran-Thanh, Svetha Venkatesh

With a linear reward function, we demonstrate that our algorithm achieves a near-optimal regret.

no code implementations • 29 Sep 2021 • Thommen Karimpanal George, Majid Abdolshah, Hung Le, Santu Rana, Sunil Gupta, Truyen Tran, Svetha Venkatesh

The objective in goal-based reinforcement learning is to learn a policy to reach a particular goal state within the environment.

no code implementations • 29 Sep 2021 • Majid Abdolshah, Hung Le, Thommen Karimpanal George, Vuong Le, Sunil Gupta, Santu Rana, Svetha Venkatesh

Whilst Generative Adversarial Networks (GANs) generate visually appealing high resolution images, the latent representations (or codes) of these models do not allow controllable changes on the semantic attributes of the generated images.

no code implementations • 20 Aug 2021 • Majid Abdolshah, Hung Le, Thommen Karimpanal George, Sunil Gupta, Santu Rana, Svetha Venkatesh

This is achieved by representing the global transition dynamics as a union of local transition functions, each with respect to one active object in the scene.

no code implementations • 24 Jul 2021 • Hung Tran-The, Sunil Gupta, Thanh Nguyen-Tang, Santu Rana, Svetha Venkatesh

We propose a novel approach that uses a hybrid of offline learning with online exploration.

no code implementations • 18 Jul 2021 • Majid Abdolshah, Hung Le, Thommen Karimpanal George, Sunil Gupta, Santu Rana, Svetha Venkatesh

Transfer in reinforcement learning is usually achieved through generalisation across tasks.

no code implementations • 10 May 2021 • Hung Tran-The, Sunil Gupta, Santu Rana, Svetha Venkatesh

Bayesian optimisation (BO) is a well-known efficient algorithm for finding the global optimum of expensive, black-box functions.

no code implementations • 18 Apr 2021 • Buddhika Laknath Semage, Thommen George Karimpanal, Santu Rana, Svetha Venkatesh

Physics-based reinforcement learning tasks can benefit from simplified physics simulators as they potentially allow near-optimal policies to be learned in simulation.

no code implementations • 11 Apr 2021 • Huong Ha, Sunil Gupta, Santu Rana, Svetha Venkatesh

Machine learning models are being used extensively in many important areas, but there is no guarantee a model will always perform well or as its developers intended.

1 code implementation • 17 Dec 2020 • Huong Ha, Sunil Gupta, Santu Rana, Svetha Venkatesh

In particular, we consider two types of LSE problems: (1) the "explicit" LSE problem, where the threshold level is a fixed user-specified value, and (2) the "implicit" LSE problem, where the threshold level is defined as a percentage of the (unknown) maximum of the objective function.
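The distinction between the two threshold types can be illustrated on a toy grid, where the superlevel set is simply the points above the threshold. This is a generic sketch with a stand-in objective, not the paper's algorithm; in the implicit case the maximum is assumed known here only for illustration.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
f = np.sin(2.0 * np.pi * x)              # stand-in objective on a grid

# (1) Explicit LSE: the user fixes the threshold directly.
h_explicit = 0.5
super_explicit = x[f >= h_explicit]

# (2) Implicit LSE: threshold is a fraction of the maximum; in the real
# problem the maximum is unknown and must be estimated along the way.
h_implicit = 0.8 * f.max()
super_implicit = x[f >= h_implicit]
```

The implicit threshold here is the stricter one, so its superlevel set is a subset of the explicit one.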

no code implementations • 19 Nov 2020 • Anh-Cat Le-Ngo, Truyen Tran, Santu Rana, Sunil Gupta, Svetha Venkatesh

We propose a new model-agnostic logic constraint to tackle this issue by formulating a logically consistent loss in the multi-task learning framework as well as a data organisation called family-batch and hybrid-batch.

no code implementations • 20 Sep 2020 • Duc Nguyen, Phuoc Nguyen, Kien Do, Santu Rana, Sunil Gupta, Truyen Tran

These include the capacity of the compact matrix LSTM to compress noisy data near perfectly, making the strategy of compressing-decompressing data ill-suited for anomaly detection under the noise.

no code implementations • 8 Sep 2020 • Alistair Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh

We propose an algorithm for Bayesian functional optimisation - that is, finding the function to optimise a process - guided by experimenter beliefs and intuitions regarding the expected characteristics (length-scale, smoothness, cyclicity, etc.) of the optimal function.

no code implementations • NeurIPS 2020 • Hung Tran-The, Sunil Gupta, Santu Rana, Huong Ha, Svetha Venkatesh

To this end, we propose a novel BO algorithm which expands (and shifts) the search space over iterations by controlling the expansion rate through a hyperharmonic series.

no code implementations • 15 Jul 2020 • Alistair Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh

In this paper we explore a connection between deep networks and learning in reproducing kernel Krein space.

no code implementations • 19 Jun 2020 • Phuc Luong, Dang Nguyen, Sunil Gupta, Santu Rana, Svetha Venkatesh

In real-world applications, BO often faces a major problem of missing values in inputs.

no code implementations • 10 Jun 2020 • Haripriya Harikumar, Vuong Le, Santu Rana, Sourangshu Bhattacharya, Sunil Gupta, Svetha Venkatesh

Recently, it has been shown that deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.

1 code implementation • 8 Jun 2020 • Julian Berk, Sunil Gupta, Santu Rana, Svetha Venkatesh

To improve the performance of Bayesian optimisation, we develop a modified Gaussian process upper confidence bound (GP-UCB) acquisition function.
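For reference, the standard GP-UCB acquisition that this work modifies selects the candidate maximizing the posterior mean plus a scaled posterior standard deviation. A minimal sketch with hypothetical posterior values (the paper's modification is not reproduced here):

```python
import numpy as np

def gp_ucb(mu, sigma, beta):
    """Standard GP-UCB: pick argmax of mu + sqrt(beta) * sigma over candidates."""
    return int(np.argmax(mu + np.sqrt(beta) * sigma))

mu = np.array([0.2, 0.5, 0.4])       # GP posterior mean at candidate points
sigma = np.array([0.9, 0.1, 0.6])    # GP posterior standard deviation
next_idx = gp_ucb(mu, sigma, beta=4.0)   # beta=4 gives a 2-sigma bonus
```

With a large `beta` the uncertain first candidate wins (exploration); with `beta=0` the rule reduces to picking the best posterior mean (exploitation).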

1 code implementation • 2 Jun 2020 • Thomas P. Quinn, Dang Nguyen, Santu Rana, Sunil Gupta, Svetha Venkatesh

We define personalized interpretability as a measure of sample-specific feature attribution, and view it as a minimum requirement for a precision health model to justify its conclusions.

no code implementations • 18 May 2020 • Phuoc Nguyen, Truyen Tran, Sunil Gupta, Santu Rana, Hieu-Chi Dam, Svetha Venkatesh

Given a target distribution, we predict the posterior distribution of the latent code, then use a matrix-network decoder to generate a posterior distribution $q(\theta)$.

no code implementations • 27 Mar 2020 • Anil Ramachandran, Sunil Gupta, Santu Rana, Cheng Li, Svetha Venkatesh

In this paper, we represent the prior knowledge about the function optimum through a prior distribution.

no code implementations • 26 Feb 2020 • Cheng Li, Sunil Gupta, Santu Rana, Vu Nguyen, Antonio Robles-Kelly, Svetha Venkatesh

However, it remains unknown how to incorporate expert prior knowledge about the global optimum into the Bayesian optimization process.

1 code implementation • 19 Jan 2020 • Thanh Tang Nguyen, Sunil Gupta, Huong Ha, Santu Rana, Svetha Venkatesh

We adopt the distributionally robust optimization perspective to this problem by maximizing the expected objective under the most adversarial distribution.

1 code implementation • 28 Nov 2019 • Dang Nguyen, Sunil Gupta, Santu Rana, Alistair Shilton, Svetha Venkatesh

To optimize such functions, we propose a new method that formulates the problem as a multi-armed bandit problem, wherein each category corresponds to an arm with its reward distribution centered around the optimum of the objective function in continuous variables.
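Treating each category as an arm can be sketched with a standard bandit index rule such as UCB1 (shown here as a generic stand-in; the paper's arm-reward model, centered on the per-category continuous optimum, is not reproduced):

```python
import math

def ucb1(total_pulls, counts, means, c=2.0):
    """UCB1 index over category arms; any unpulled arm is tried first."""
    best, best_idx = -float("inf"), 0
    for i, (n, m) in enumerate(zip(counts, means)):
        idx = float("inf") if n == 0 else m + math.sqrt(c * math.log(total_pulls) / n)
        if idx > best:
            best, best_idx = idx, i
    return best_idx
```

After a category (arm) is chosen, the continuous variables within that category would be optimized to produce the arm's reward.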

no code implementations • 27 Nov 2019 • Hung Tran-The, Sunil Gupta, Santu Rana, Svetha Venkatesh

Optimising acquisition function in low dimensional subspaces allows our method to obtain accurate solutions within limited computational budget.

1 code implementation • NeurIPS 2019 • Huong Ha, Santu Rana, Sunil Gupta, Thanh Nguyen, Hung Tran-The, Svetha Venkatesh

Applying Bayesian optimization in problems wherein the search space is unknown is challenging.

no code implementations • 10 Sep 2019 • Thommen George Karimpanal, Santu Rana, Sunil Gupta, Truyen Tran, Svetha Venkatesh

Prior access to domain knowledge could significantly improve the performance of a reinforcement learning agent.

no code implementations • 9 Sep 2019 • Majid Abdolshah, Alistair Shilton, Santu Rana, Sunil Gupta, Svetha Venkatesh

We introduce a cost-aware multi-objective Bayesian optimisation method for non-uniform evaluation costs over objective functions, achieved by defining cost-aware constraints over the search space.

no code implementations • 22 Jul 2019 • Cheng Li, Santu Rana, Sunil Gupta, Vu Nguyen, Svetha Venkatesh, Alessandra Sutti, David Rubin, Teo Slezak, Murray Height, Mazher Mohammed, Ian Gibson

In this paper, we consider per-variable monotonic trend in the underlying property that results in a unimodal trend in those variables for a target value optimization.

no code implementations • 21 Jun 2019 • Ang Yang, Cheng Li, Santu Rana, Sunil Gupta, Svetha Venkatesh

Since the balance between the predictive mean and the predictive variance is the key determinant of the success of Bayesian optimization, current sparse spectrum methods are less suitable for it.

no code implementations • 21 Feb 2019 • Alistair Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh, Majid Abdolshah, Dang Nguyen

In this paper we consider the problem of finding stable maxima of expensive (to evaluate) functions.

no code implementations • NeurIPS 2019 • Majid Abdolshah, Alistair Shilton, Santu Rana, Sunil Gupta, Svetha Venkatesh

We present a multi-objective Bayesian optimisation algorithm that allows the user to express preference-order constraints on the objectives of the type "objective A is more important than objective B".

no code implementations • 6 Feb 2019 • Tinu Theckel Joy, Santu Rana, Sunil Gupta, Svetha Venkatesh

We initially tune the hyperparameters on a small subset of training data using Bayesian optimization.

1 code implementation • NeurIPS 2018 • Shivapratap Gopakumar, Sunil Gupta, Santu Rana, Vu Nguyen, Svetha Venkatesh

We address this problem by proposing an efficient framework for algorithmic testing.

no code implementations • 5 Nov 2018 • Vu Nguyen, Sunil Gupta, Santu Rana, Cheng Li, Svetha Venkatesh

Bayesian optimization (BO) and its batch extensions are successful for optimizing expensive black-box functions.

no code implementations • 19 Sep 2018 • Pratibha Vellanki, Santu Rana, Sunil Gupta, David Rubin de Celis Leal, Alessandra Sutti, Murray Height, Svetha Venkatesh

Real-world experiments are expensive, and thus it is important to reach a target in a minimum number of experiments.

no code implementations • 21 May 2018 • Alistair Shilton, Sunil Gupta, Santu Rana, Pratibha Vellanki, Laurence Park, Cheng Li, Svetha Venkatesh, Alessandra Sutti, David Rubin, Thomas Dorin, Alireza Vahid, Murray Height, Teo Slezak

In this paper we show how such auxiliary data may be used to construct a GP covariance corresponding to a more appropriate weight prior for the objective function.

no code implementations • 16 Feb 2018 • Cheng Li, David Rubin de Celis Leal, Santu Rana, Sunil Gupta, Alessandra Sutti, Stewart Greenhill, Teo Slezak, Murray Height, Svetha Venkatesh

The discovery of processes for the synthesis of new materials involves many decisions about process design, operation, and material properties.

no code implementations • 15 Feb 2018 • Cheng Li, Sunil Gupta, Santu Rana, Vu Nguyen, Svetha Venkatesh, Alistair Shilton

Scaling Bayesian optimization to high dimensions is a challenging task, as the global optimization of a high-dimensional acquisition function can be expensive and often infeasible.

no code implementations • 15 Feb 2018 • Alistair Shilton, Sunil Gupta, Santu Rana, Pratibha Vellanki, Cheng Li, Laurence Park, Svetha Venkatesh, Alessandra Sutti, David Rubin, Thomas Dorin, Alireza Vahid, Murray Height

The paper presents a novel approach to direct covariance function learning for Bayesian optimisation, with particular emphasis on experimental design problems where an existing corpus of condensed knowledge is present.

no code implementations • NeurIPS 2017 • Pratibha Vellanki, Santu Rana, Sunil Gupta, David Rubin, Alessandra Sutti, Thomas Dorin, Murray Height, Paul Sanders, Svetha Venkatesh

We demonstrate the performance of both pc-BO(basic) and pc-BO(nested) by optimising benchmark test functions, tuning hyper-parameters of the SVM classifier, optimising the heat-treatment process for an Al-Sc alloy to achieve target hardness, and optimising the short polymer fibre production process.

no code implementations • ICML 2017 • Santu Rana, Cheng Li, Sunil Gupta, Vu Nguyen, Svetha Venkatesh

Bayesian optimization is an efficient way to optimize expensive black-box functions such as designing a new product with highest quality or hyperparameter tuning of a machine learning algorithm.

no code implementations • 15 Mar 2017 • Vu Nguyen, Santu Rana, Sunil Gupta, Cheng Li, Svetha Venkatesh

Current batch BO approaches are restrictive in that they fix the number of evaluations per batch, and this can be wasteful when the number of specified evaluations is larger than the number of real maxima in the underlying acquisition function.

Papers With Code is a free resource with all data licensed under CC-BY-SA.