no code implementations • 9 Jan 2014 • Vu Nguyen, Dinh Phung, XuanLong Nguyen, Svetha Venkatesh, Hung Hai Bui
We present a Bayesian nonparametric framework for multilevel clustering which utilizes group-level context information to simultaneously discover low-dimensional structures of the group contents and partition groups into clusters.
no code implementations • 22 Jul 2014 • Truyen Tran, Dinh Phung, Svetha Venkatesh
The \emph{maximum a posteriori} (MAP) assignment for general structure Markov random fields (MRFs) is computationally intractable.
no code implementations • 23 Jul 2014 • Shivapratap Gopakumar, Truyen Tran, Dinh Phung, Svetha Venkatesh
Stability in clinical prediction models is crucial for transferability between studies, yet has received little attention.
no code implementations • 23 Jul 2014 • Truyen Tran, Svetha Venkatesh
Focusing on the core of the collaborative ranking process, the user and their community, we propose new models for representation of the underlying permutations and prediction of ranks.
no code implementations • 23 Jul 2014 • Truyen Tran, Dinh Phung, Svetha Venkatesh
In practical settings, the task often reduces to estimating a rank functional of an object with respect to a query.
no code implementations • 24 Jul 2014 • Truyen Tran, Dinh Phung, Svetha Venkatesh
Learning structured outputs with general structures is computationally challenging, except for tree-structured models.
no code implementations • 31 Jul 2014 • Truyen Tran, Dinh Phung, Svetha Venkatesh
Ranking over sets arises when users choose between groups of items.
no code implementations • 31 Jul 2014 • Truyen Tran, Dinh Phung, Svetha Venkatesh
Ordinal data is omnipresent in almost all multiuser-generated feedback: questionnaires, preferences, etc.
no code implementations • 1 Aug 2014 • Truyen Tran, Dinh Phung, Svetha Venkatesh
We introduce Thurstonian Boltzmann Machines (TBM), a unified architecture that can naturally incorporate a wide range of data inputs at the same time.
no code implementations • 6 Aug 2014 • Truyen Tran, Dinh Phung, Svetha Venkatesh, Hung H. Bui
In this contribution, we propose a new approximation technique that may have the potential to achieve sub-cubic time complexity in length and linear time depth, at the cost of some loss of quality.
no code implementations • 6 Aug 2014 • Truyen Tran, Dinh Phung, Svetha Venkatesh
Modern datasets are becoming heterogeneous.
no code implementations • 6 Aug 2014 • Truyen Tran, Hung Bui, Svetha Venkatesh
Learning and understanding the typical patterns in the daily activities and routines of people from low-level sensory data is an important problem in many application domains such as building smart environments, or providing intelligent assistance.
no code implementations • 6 Aug 2014 • Truyen Tran, Hung Bui, Svetha Venkatesh
We explore a framework called boosted Markov networks to combine the learning capacity of boosting and the rich modeling semantics of Markov networks, and apply the framework to video-based activity recognition.
no code implementations • 8 Feb 2015 • Adham Beykikhoshk, Ognjen Arandjelovic, Dinh Phung, Svetha Venkatesh
In this paper we describe a novel framework for the discovery of the topical content of a data corpus, and the tracking of its complex structural changes across the temporal dimension.
no code implementations • 21 Apr 2015 • Ognjen Arandjelovic, Duc-Son Pham, Svetha Venkatesh
This paper addresses the task of time separated aerial image registration.
no code implementations • 21 Apr 2015 • Ognjen Arandjelovic, Duc-Son Pham, Svetha Venkatesh
Our aim is to estimate the perspective-effected geometric distortion of a scene from a video feed.
no code implementations • 21 Apr 2015 • Ognjen Arandjelovic, Duc-Son Pham, Svetha Venkatesh
The need to estimate a particular quantile of a distribution is an important problem which frequently arises in many computer vision and signal processing applications.
no code implementations • 25 Dec 2015 • Adham Beykikhoshk, Ognjen Arandjelovic, Dinh Phung, Svetha Venkatesh
In this paper we describe a novel framework for the discovery of the topical content of a data corpus, and the tracking of its complex structural changes across the temporal dimension.
1 code implementation • 1 Feb 2016 • Trang Pham, Truyen Tran, Dinh Phung, Svetha Venkatesh
We introduce DeepCare, an end-to-end deep dynamic neural network that reads medical records, stores previous illness history, infers current illness states and predicts future medical outcomes.
no code implementations • 9 Feb 2016 • Truyen Tran, Dinh Phung, Svetha Venkatesh
Recommender systems play a central role in providing individualized access to information and services.
no code implementations • 17 Feb 2016 • Truyen Tran, Dinh Phung, Svetha Venkatesh
We introduce Neural Choice by Elimination, a new framework that integrates deep neural networks into probabilistic sequential choice models for learning to rank.
no code implementations • 4 Mar 2016 • Truyen Tran, Dinh Phung, Svetha Venkatesh
We introduce a deep multitask architecture to integrate multityped representations of multimodal objects.
no code implementations • 3 May 2016 • Thuong Nguyen, Truyen Tran, Shivapratap Gopakumar, Dinh Phung, Svetha Venkatesh
Accurate prediction of suicide risk in mental health patients remains an open problem.
no code implementations • 27 May 2016 • Duc-Son Pham, Ognjen Arandjelovic, Svetha Venkatesh
We propose an effective subspace selection scheme as a post-processing step to improve results obtained by sparse subspace clustering (SSC).
no code implementations • 26 Jul 2016 • Phuoc Nguyen, Truyen Tran, Nilmini Wickramasinghe, Svetha Venkatesh
On top of the sequence is a convolutional neural net that detects and combines predictive local clinical motifs to stratify the risk.
no code implementations • 28 Jul 2016 • Truyen Tran, Wei Luo, Dinh Phung, Jonathan Morris, Kristen Rickard, Svetha Venkatesh
Preterm births occur at an alarming rate of 10-15%.
no code implementations • 11 Aug 2016 • Trang Pham, Truyen Tran, Dinh Phung, Svetha Venkatesh
Gates are employed in many recent state-of-the-art recurrent models such as LSTM and GRU, and feedforward models such as Residual Nets and Highway Networks.
1 code implementation • 17 Aug 2016 • Kien Do, Truyen Tran, Dinh Phung, Svetha Venkatesh
We evaluate the proposed method on synthetic and real-world datasets and demonstrate that (a) proper handling of mixed types is necessary in outlier detection, and (b) the free energy of Mv.RBM is a powerful and efficient outlier scoring method, highly competitive against state-of-the-art methods.
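Mv.RBM extends the restricted Boltzmann machine to mixed types; as an illustrative sketch only, the snippet below computes the standard free energy of a plain binary RBM, which can serve directly as an outlier score (higher free energy means lower model probability). The weights here are random stand-ins, not a trained model.

```python
import numpy as np

def rbm_free_energy(v, W, a, b):
    """Free energy of a binary RBM: F(v) = -a.v - sum_j softplus(b_j + v.W_j).
    Higher free energy = lower probability under the model, so F(v) is
    usable as an outlier score."""
    pre = b + v @ W                                  # hidden pre-activations
    return -(v @ a) - np.logaddexp(0.0, pre).sum(axis=-1)

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3))                          # visible-to-hidden weights
a = rng.normal(size=5)                               # visible biases
b = rng.normal(size=3)                               # hidden biases
v = np.stack([np.zeros(5), np.ones(5)])              # two toy visible vectors
scores = rbm_free_energy(v, W, a, b)                 # one score per row
```

`np.logaddexp(0, x)` is a numerically stable softplus, avoiding overflow for large pre-activations.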
1 code implementation • 15 Sep 2016 • Trang Pham, Truyen Tran, Dinh Phung, Svetha Venkatesh
CLN has many desirable theoretical properties: (i) it encodes multi-relations between any two instances; (ii) it is deep and compact, allowing complex functions to be approximated at the network level with a small set of free parameters; (iii) local and relational features are learned simultaneously; (iv) long-range, higher-order dependencies between instances are supported naturally; and (v) crucially, learning and inference are efficient, linear in the size of the network and the number of relations.
no code implementations • 28 Sep 2016 • Shivapratap Gopakumar, Truyen Tran, Dinh Phung, Svetha Venkatesh
Using a linear model as basis for prediction, we achieve feature stability by regularising latent correlation in features.
no code implementations • 20 Oct 2016 • Kien Do, Truyen Tran, Svetha Venkatesh
We propose MIXMAD, which stands for MIXed data Multilevel Anomaly Detection, an ensemble method that estimates the sparse regions across multiple levels of abstraction of mixed data.
no code implementations • 2 Dec 2016 • Dang Nguyen, Wei Luo, Dinh Phung, Svetha Venkatesh
In this paper, we consider the patient similarity matching problem over a cancer cohort of more than 220,000 patients.
no code implementations • 22 Feb 2017 • Trang Pham, Truyen Tran, Svetha Venkatesh
Much recent machine learning research has been directed towards leveraging shared statistics among labels, instances and data views, commonly referred to as multi-label, multi-instance and multi-view learning.
no code implementations • 4 Mar 2017 • Kien Do, Truyen Tran, Svetha Venkatesh
We derive several new deep networks: (i) feed-forward nets that map an input matrix into an output matrix, (ii) recurrent nets which map a sequence of input matrices into a sequence of output matrices.
no code implementations • 15 Mar 2017 • Vu Nguyen, Santu Rana, Sunil Gupta, Cheng Li, Svetha Venkatesh
Current batch BO approaches are restrictive in that they fix the number of evaluations per batch, and this can be wasteful when the number of specified evaluations is larger than the number of real maxima in the underlying acquisition function.
no code implementations • 17 Jul 2017 • Phuoc Nguyen, Truyen Tran, Svetha Venkatesh
At the reasoning layer, evidences across time steps are weighted and combined.
no code implementations • ICML 2017 • Santu Rana, Cheng Li, Sunil Gupta, Vu Nguyen, Svetha Venkatesh
Bayesian optimization is an efficient way to optimize expensive black-box functions such as designing a new product with highest quality or hyperparameter tuning of a machine learning algorithm.
no code implementations • 14 Aug 2017 • Trang Pham, Truyen Tran, Hoa Dam, Svetha Venkatesh
The representation of the virtual node is then the representation of the entire graph.
no code implementations • 17 Aug 2017 • Hung Vu, Dinh Phung, Tu Dinh Nguyen, Anthony Trevors, Svetha Venkatesh
Automated detection of abnormalities in data has been an active research area in recent years because of its diverse practical applications, including video surveillance, industrial damage detection, and network intrusion detection.
no code implementations • 18 Aug 2017 • Tu Dinh Nguyen, Truyen Tran, Dinh Phung, Svetha Venkatesh
The analysis of mixed data has been raising challenges in statistics and machine learning.
no code implementations • 18 Aug 2017 • Tu Dinh Nguyen, Truyen Tran, Dinh Phung, Svetha Venkatesh
Of current representation learning schemes, restricted Boltzmann machines (RBMs) have proved to be highly effective in unsupervised settings.
no code implementations • 21 Nov 2017 • Phuoc Nguyen, Truyen Tran, Svetha Venkatesh
The interaction between diseases and treatments at a visit is modeled as the residual of the diseases minus the treatments.
no code implementations • NeurIPS 2017 • Pratibha Vellanki, Santu Rana, Sunil Gupta, David Rubin, Alessandra Sutti, Thomas Dorin, Murray Height, Paul Sanders, Svetha Venkatesh
We demonstrate the performance of both pc-BO(basic) and pc-BO(nested) by optimising benchmark test functions, tuning hyper-parameters of the SVM classifier, optimising the heat-treatment process for an Al-Sc alloy to achieve target hardness, and optimising the short polymer fibre production process.
no code implementations • 8 Jan 2018 • Trang Pham, Truyen Tran, Svetha Venkatesh
GraphMem is capable of jointly training on multiple datasets by using a specific-task query fed to the controller as an input.
no code implementations • 26 Jan 2018 • Kien Do, Truyen Tran, Svetha Venkatesh
Knowledge graphs contain rich relational structures of the world, and thus complement data-driven machine learning in heterogeneous data.
1 code implementation • 2 Feb 2018 • Hung Le, Truyen Tran, Svetha Venkatesh
One of the core tasks in multi-view learning is to capture relations among views.
no code implementations • 3 Feb 2018 • Phuoc Nguyen, Truyen Tran, Svetha Venkatesh
The same holds for the bag of treatments.
no code implementations • 11 Feb 2018 • Hung Le, Truyen Tran, Svetha Venkatesh
The decoding controller generates a treatment sequence, one treatment option at a time.
no code implementations • 15 Feb 2018 • Cheng Li, Sunil Gupta, Santu Rana, Vu Nguyen, Svetha Venkatesh, Alistair Shilton
Scaling Bayesian optimization to high dimensions is a challenging task, as the global optimization of a high-dimensional acquisition function can be expensive and often infeasible.
no code implementations • 15 Feb 2018 • Alistair Shilton, Sunil Gupta, Santu Rana, Pratibha Vellanki, Cheng Li, Laurence Park, Svetha Venkatesh, Alessandra Sutti, David Rubin, Thomas Dorin, Alireza Vahid, Murray Height
The paper presents a novel approach to direct covariance function learning for Bayesian optimisation, with particular emphasis on experimental design problems where an existing corpus of condensed knowledge is present.
no code implementations • 16 Feb 2018 • Cheng Li, David Rubin de Celis Leal, Santu Rana, Sunil Gupta, Alessandra Sutti, Stewart Greenhill, Teo Slezak, Murray Height, Svetha Venkatesh
The discovery of processes for the synthesis of new materials involves many decisions about process design, operation, and material properties.
no code implementations • 1 Apr 2018 • Kien Do, Truyen Tran, Thin Nguyen, Svetha Venkatesh
GAML regards labels as auxiliary nodes and models them in conjunction with the input graph.
no code implementations • 21 May 2018 • Alistair Shilton, Sunil Gupta, Santu Rana, Pratibha Vellanki, Laurence Park, Cheng Li, Svetha Venkatesh, Alessandra Sutti, David Rubin, Thomas Dorin, Alireza Vahid, Murray Height, Teo Slezak
In this paper we show how such auxiliary data may be used to construct a GP covariance corresponding to a more appropriate weight prior for the objective function.
1 code implementation • NeurIPS 2018 • Hung Le, Truyen Tran, Thin Nguyen, Svetha Venkatesh
Introducing variability while maintaining coherence is a core task in learning to generate utterances in conversation.
no code implementations • 10 Aug 2018 • Trang Pham, Truyen Tran, Svetha Venkatesh
Neural networks excel in detecting regular patterns but are less successful in representing and manipulating complex data structures, possibly due to the lack of an external memory.
no code implementations • 19 Sep 2018 • Pratibha Vellanki, Santu Rana, Sunil Gupta, David Rubin de Celis Leal, Alessandra Sutti, Murray Height, Svetha Venkatesh
Real-world experiments are expensive, and thus it is important to reach a target in a minimum number of experiments.
no code implementations • 5 Nov 2018 • Vu Nguyen, Sunil Gupta, Santu Rana, Cheng Li, Svetha Venkatesh
Bayesian optimization (BO) and its batch extensions are successful for optimizing expensive black-box functions.
1 code implementation • NeurIPS 2018 • Shivapratap Gopakumar, Sunil Gupta, Santu Rana, Vu Nguyen, Svetha Venkatesh
We address this problem by proposing an efficient framework for algorithmic testing.
no code implementations • 22 Dec 2018 • Kien Do, Truyen Tran, Svetha Venkatesh
We address a fundamental problem in chemistry known as chemical reaction product prediction.
1 code implementation • ICLR 2019 • Hung Le, Truyen Tran, Svetha Venkatesh
Memory-augmented neural networks consisting of a neural controller and an external memory have shown potentials in long-term sequential learning.
Ranked #5 on Text Classification on Yahoo! Answers
no code implementations • 6 Feb 2019 • Tinu Theckel Joy, Santu Rana, Sunil Gupta, Svetha Venkatesh
We initially tune the hyperparameters on a small subset of training data using Bayesian optimization.
1 code implementation • ICLR 2019 • Hoang Thanh-Tung, Truyen Tran, Svetha Venkatesh
We propose a zero-centered gradient penalty for improving the generalization of the discriminator by pushing it toward the optimal discriminator.
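A zero-centered gradient penalty adds E[||grad_x D(x)||^2] to the discriminator loss. As a hedged sketch (not the paper's implementation), the snippet below uses a toy discriminator D(x) = tanh(w.x) whose input gradient has the closed form (1 - tanh(w.x)^2) w, so the penalty can be computed without an autodiff framework; `w` and the data are hypothetical stand-ins.

```python
import numpy as np

def zero_centered_gp(x, w):
    """E[||grad_x D(x)||^2] for the toy discriminator D(x) = tanh(w . x).
    The input gradient is (1 - tanh(w.x)^2) * w, computed analytically."""
    s = 1.0 - np.tanh(x @ w) ** 2        # shape (n,): derivative of tanh
    grads = s[:, None] * w               # shape (n, d): per-sample input gradients
    return np.mean(np.sum(grads ** 2, axis=1))

rng = np.random.default_rng(1)
x = rng.normal(size=(64, 4))             # a minibatch of (fake or real) samples
w = rng.normal(size=4)
penalty = zero_centered_gp(x, w)         # added to the GAN loss, scaled by lambda
```

Because the penalty is centered at zero (rather than at norm 1, as in WGAN-GP), minimizing it pushes the discriminator's gradient toward zero on the sampled points.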
no code implementations • NeurIPS 2019 • Majid Abdolshah, Alistair Shilton, Santu Rana, Sunil Gupta, Svetha Venkatesh
We present a multi-objective Bayesian optimisation algorithm that allows the user to express preference-order constraints on the objectives of the type "objective A is more important than objective B".
no code implementations • 21 Feb 2019 • Alistair Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh, Majid Abdolshah, Dang Nguyen
In this paper we consider the problem of finding stable maxima of expensive (to evaluate) functions.
1 code implementation • CVPR 2019 • Romero Morais, Vuong Le, Truyen Tran, Budhaditya Saha, Moussa Mansour, Svetha Venkatesh
Appearance features have been widely used in video anomaly detection even though they contain complex entangled factors.
Ranked #6 on Video Anomaly Detection on HR-ShanghaiTech
5 code implementations • ICCV 2019 • Dong Gong, Lingqiao Liu, Vuong Le, Budhaditya Saha, Moussa Reda Mansour, Svetha Venkatesh, Anton Van Den Hengel
At the test stage, the learned memory will be fixed, and the reconstruction is obtained from a few selected memory records of the normal data.
1 code implementation • ICLR 2020 • Hung Le, Truyen Tran, Svetha Venkatesh
Neural networks powered with external memory simulate computer behaviors.
Ranked #5 on Question Answering on bAbi (Mean Error Rate metric)
no code implementations • 21 Jun 2019 • Ang Yang, Cheng Li, Santu Rana, Sunil Gupta, Svetha Venkatesh
Since the balance between the predictive mean and the predictive variance is the key determinant of the success of Bayesian optimization, current sparse spectrum methods are less suitable for it.
1 code implementation • bioRxiv 2019 • Thin Nguyen, Hang Le, Svetha Venkatesh
The results show that our proposed method can not only predict the affinity better than non-deep learning models, but also outperform competing deep learning approaches.
Ranked #4 on Drug Discovery on KIBA
no code implementations • 10 Jul 2019 • Thao Minh Le, Vuong Le, Svetha Venkatesh, Truyen Tran
While recent advances in lingual and visual question answering have enabled sophisticated representations and neural reasoning mechanisms, major challenges in Video QA remain on dynamic grounding of concepts, relations and actions to support the reasoning process.
no code implementations • 22 Jul 2019 • Cheng Li, Santu Rana, Sunil Gupta, Vu Nguyen, Svetha Venkatesh, Alessandra Sutti, David Rubin, Teo Slezak, Murray Height, Mazher Mohammed, Ian Gibson
In this paper, we consider per-variable monotonic trend in the underlying property that results in a unimodal trend in those variables for a target value optimization.
no code implementations • 9 Sep 2019 • Majid Abdolshah, Alistair Shilton, Santu Rana, Sunil Gupta, Svetha Venkatesh
We introduce a cost-aware multi-objective Bayesian optimisation with non-uniform evaluation cost over objective functions by defining cost-aware constraints over the search space.
no code implementations • 10 Sep 2019 • Thommen George Karimpanal, Santu Rana, Sunil Gupta, Truyen Tran, Svetha Venkatesh
Prior access to domain knowledge could significantly improve the performance of a reinforcement learning agent.
1 code implementation • NeurIPS 2019 • Huong Ha, Santu Rana, Sunil Gupta, Thanh Nguyen, Hung Tran-The, Svetha Venkatesh
Applying Bayesian optimization in problems wherein the search space is unknown is challenging.
no code implementations • 27 Nov 2019 • Hung Tran-The, Sunil Gupta, Santu Rana, Svetha Venkatesh
Optimising the acquisition function in low-dimensional subspaces allows our method to obtain accurate solutions within a limited computational budget.
1 code implementation • 28 Nov 2019 • Dang Nguyen, Sunil Gupta, Santu Rana, Alistair Shilton, Svetha Venkatesh
To optimize such functions, we propose a new method that formulates the problem as a multi-armed bandit problem, wherein each category corresponds to an arm with its reward distribution centered around the optimum of the objective function in continuous variables.
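As a rough sketch of the bandit view (under assumptions: a made-up mixed-variable objective, a cheap random search standing in for the continuous optimizer, and vanilla UCB1 rather than the paper's algorithm), each category is an arm and the reward is the value found for that category:

```python
import numpy as np

def objective(cat, x):
    # hypothetical mixed-variable function: the category shifts the optimum
    return -(x - cat) ** 2

def ucb_category_search(n_cats=3, rounds=60, seed=0):
    """UCB1 over categorical arms; each pull runs a cheap random probe
    of the continuous variable and feeds the value back as the reward."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(n_cats)
    sums = np.zeros(n_cats)
    for t in range(1, rounds + 1):
        if t <= n_cats:
            arm = t - 1                       # play each arm once first
        else:
            means = sums / counts
            bonus = np.sqrt(2.0 * np.log(t) / counts)
            arm = int(np.argmax(means + bonus))
        x = rng.uniform(-1.0, n_cats)         # stand-in for the inner optimizer
        reward = objective(arm, x)
        counts[arm] += 1
        sums[arm] += reward
    return int(np.argmax(sums / counts))      # best category found

best_cat = ucb_category_search()
```

In the paper's setting the per-arm reward distribution is centered around the optimum of the objective in the continuous variables; here the random probe only crudely approximates that.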
1 code implementation • 19 Jan 2020 • Thanh Tang Nguyen, Sunil Gupta, Huong Ha, Santu Rana, Svetha Venkatesh
We adopt the distributionally robust optimization perspective to this problem by maximizing the expected objective under the most adversarial distribution.
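A common closed form for KL-ball distributional robustness is the dual bound inf over q with KL(q||p) <= rho of E_q[f] >= -eta log E_p[exp(-f/eta)] - eta*rho. The sketch below evaluates this bound at a fixed dual variable eta; it is a generic illustration of the DRO principle, not the paper's exact formulation.

```python
import numpy as np

def kl_dro_value(f, eta, rho):
    """Lower bound on the worst-case expectation of f over distributions
    within a KL ball of radius rho around the empirical distribution,
    evaluated at a fixed dual variable eta (not tightened over eta)."""
    return -eta * np.log(np.mean(np.exp(-f / eta))) - eta * rho

f = np.array([1.0, 2.0, 3.0])       # objective values on empirical samples
robust = kl_dro_value(f, eta=1.0, rho=0.1)
```

The bound is always at most the empirical mean (the adversary can only hurt) and at least `min(f) - eta*rho`, so it interpolates between average-case and worst-case behavior as `rho` grows.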
1 code implementation • ICML 2020 • Hung Le, Truyen Tran, Svetha Venkatesh
Heretofore, neural networks with external memory have been restricted to a single memory with lossy representations of memory interactions.
Ranked #1 on Question Answering on bAbi
1 code implementation • CVPR 2020 • Thao Minh Le, Vuong Le, Svetha Venkatesh, Truyen Tran
Video question answering (VideoQA) is challenging as it requires modeling capacity to distill dynamic visual artifacts and distant relations and to associate them with linguistic concepts.
Ranked #3 on Audio-Visual Question Answering (AVQA) on AVQA
no code implementations • 26 Feb 2020 • Cheng Li, Sunil Gupta, Santu Rana, Vu Nguyen, Antonio Robles-Kelly, Svetha Venkatesh
Again, it is unknown how to incorporate expert prior knowledge about the global optimum into the Bayesian optimization process.
no code implementations • 27 Mar 2020 • Anil Ramachandran, Sunil Gupta, Santu Rana, Cheng Li, Svetha Venkatesh
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
1 code implementation • 30 Apr 2020 • Thao Minh Le, Vuong Le, Svetha Venkatesh, Truyen Tran
We present Language-binding Object Graph Network, the first neural reasoning method with dynamic relational structures across both visual and textual domains with applications in visual question answering.
no code implementations • 18 May 2020 • Phuoc Nguyen, Truyen Tran, Sunil Gupta, Santu Rana, Hieu-Chi Dam, Svetha Venkatesh
Given a target distribution, we predict the posterior distribution of the latent code, then use a matrix-network decoder to generate a posterior distribution q(\theta).
1 code implementation • 2 Jun 2020 • Thomas P. Quinn, Dang Nguyen, Santu Rana, Sunil Gupta, Svetha Venkatesh
We define personalized interpretability as a measure of sample-specific feature attribution, and view it as a minimum requirement for a precision health model to justify its conclusions.
1 code implementation • 8 Jun 2020 • Julian Berk, Sunil Gupta, Santu Rana, Svetha Venkatesh
In order to improve the performance of Bayesian optimisation, we develop a modified Gaussian process upper confidence bound (GP-UCB) acquisition function.
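The paper modifies the GP-UCB acquisition; shown below is only the vanilla form mu(x) + sqrt(beta)*sigma(x) on a 1-D grid, as a self-contained numpy sketch with an assumed RBF kernel and toy observations:

```python
import numpy as np

def rbf(A, B, ls=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_ucb(x_grid, X, y, beta=2.0, noise=1e-6):
    """Vanilla GP-UCB acquisition: posterior mean + sqrt(beta) * posterior std."""
    K = rbf(X, X) + noise * np.eye(len(X))           # train covariance
    Ks = rbf(x_grid, X)                              # grid-to-train covariance
    mu = Ks @ np.linalg.solve(K, y)                  # posterior mean
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu + np.sqrt(beta) * np.sqrt(np.clip(var, 0.0, None))

X = np.array([0.2, 0.5, 0.8])                        # toy observed inputs
y = np.sin(2 * np.pi * X)                            # toy observed values
grid = np.linspace(0.0, 1.0, 101)
acq = gp_ucb(grid, X, y)
next_x = grid[np.argmax(acq)]                        # next point to evaluate
```

At observed points the posterior variance collapses, so the acquisition there reduces to the observed value; elsewhere the variance term drives exploration.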
no code implementations • 10 Jun 2020 • Haripriya Harikumar, Vuong Le, Santu Rana, Sourangshu Bhattacharya, Sunil Gupta, Svetha Venkatesh
Recently, it has been shown that deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
no code implementations • 19 Jun 2020 • Phuc Luong, Dang Nguyen, Sunil Gupta, Santu Rana, Svetha Venkatesh
In real-world applications, BO often faces a major problem of missing values in inputs.
no code implementations • 15 Jul 2020 • Alistair Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh
In this paper we explore a connection between deep networks and learning in reproducing kernel Krein space.
1 code implementation • 24 Jul 2020 • Thanh Tang Nguyen, Sunil Gupta, Svetha Venkatesh
We consider the problem of learning a set of probability distributions from the empirical Bellman dynamics in distributional reinforcement learning (RL), a class of state-of-the-art methods that estimate the distribution, as opposed to only the expectation, of the total return.
1 code implementation • 20 Aug 2020 • Romero Morais, Vuong Le, Truyen Tran, Svetha Venkatesh
We propose Hierarchical Encoder-Refresher-Anticipator, a multi-level neural machine that can learn the structure of human activities by observing a partial hierarchy of events and roll-out such structure into a future prediction in multiple levels of abstraction.
no code implementations • NeurIPS 2020 • Hung Tran-The, Sunil Gupta, Santu Rana, Huong Ha, Svetha Venkatesh
To this end, we propose a novel BO algorithm which expands (and shifts) the search space over iterations based on controlling the expansion rate through a hyperharmonic series.
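The key property of a hyperharmonic schedule c/k^p with p > 1 is that the total expansion sum_k c/k^p is finite, so the search space grows but stays bounded. A minimal sketch (the interval, constants, and schedule are illustrative, not the paper's):

```python
def expanded_bounds(lo, hi, iters, c=1.0, p=1.5):
    """Expand a 1-D search interval at a hyperharmonic rate c / k**p.
    With p > 1 the cumulative expansion converges, so the final interval
    is bounded no matter how many iterations are run."""
    for k in range(1, iters + 1):
        step = c / k ** p
        lo -= step
        hi += step
    return lo, hi

lo, hi = expanded_bounds(0.0, 1.0, iters=1000)
```

With c = 1 and p = 1.5 the total one-sided expansion is bounded by zeta(1.5), roughly 2.612, for any number of iterations.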
no code implementations • 8 Sep 2020 • Alistair Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh
We propose an algorithm for Bayesian functional optimisation - that is, finding the function to optimise a process - guided by experimenter beliefs and intuitions regarding the expected characteristics (length-scale, smoothness, cyclicity etc.)
no code implementations • 16 Sep 2020 • Dung Nguyen, Svetha Venkatesh, Phuoc Nguyen, Truyen Tran
In psychological game theory, guilt aversion necessitates modelling of agents that have a theory about what other agents think, also known as Theory of Mind (ToM).
no code implementations • NeurIPS 2021 • Hung Le, Svetha Venkatesh
For the first time, a Neural Program is treated as a datum in memory, paving the way for modular, recursive and procedural neural programming.
no code implementations • 18 Oct 2020 • Thao Minh Le, Vuong Le, Svetha Venkatesh, Truyen Tran
Video QA challenges modelers on multiple fronts.
no code implementations • 19 Nov 2020 • Anh-Cat Le-Ngo, Truyen Tran, Santu Rana, Sunil Gupta, Svetha Venkatesh
We propose a new model-agnostic logic constraint to tackle this issue by formulating a logically consistent loss in the multi-task learning framework as well as a data organisation called family-batch and hybrid-batch.
1 code implementation • 3 Dec 2020 • Kien Do, Truyen Tran, Svetha Venkatesh
We propose two generic methods for improving semi-supervised learning (SSL).
1 code implementation • 17 Dec 2020 • Huong Ha, Sunil Gupta, Santu Rana, Svetha Venkatesh
In particular, we consider two types of LSE problems: (1) the explicit LSE problem, where the threshold level is a fixed user-specified value, and (2) the implicit LSE problem, where the threshold level is defined as a percentage of the (unknown) maximum of the objective function.
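The two threshold conventions can be sketched in a few lines; this is only an illustration of the explicit/implicit distinction, with the sample maximum standing in for the unknown true maximum that the paper's method must estimate:

```python
import numpy as np

def level_set(f_vals, threshold=None, fraction=None):
    """Label points as members of the superlevel set {x : f(x) >= level}.
    Explicit LSE: the level is a fixed user-specified threshold.
    Implicit LSE: the level is fraction * (maximum of f), with the sample
    maximum used here as a stand-in for the unknown true maximum."""
    if threshold is None:
        threshold = fraction * f_vals.max()
    return f_vals >= threshold

f = np.array([0.1, 0.5, 0.9, 1.0])
explicit = level_set(f, threshold=0.6)     # fixed level
implicit = level_set(f, fraction=0.45)     # level relative to the max
```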
no code implementations • CVPR 2021 • Romero Morais, Vuong Le, Svetha Venkatesh, Truyen Tran
Their interactions are sparse in time, hence more faithful to the true underlying nature and more robust in inference and learning.
no code implementations • 11 Mar 2021 • Thanh Nguyen-Tang, Sunil Gupta, Hung Tran-The, Svetha Venkatesh
To the best of our knowledge, this is the first theoretical characterization of the sample complexity of offline RL with deep neural network function approximation under the general Besov regularity condition that goes beyond the linearity regime of traditional reproducing kernel Hilbert spaces and Neural Tangent Kernels.
no code implementations • 11 Apr 2021 • Huong Ha, Sunil Gupta, Santu Rana, Svetha Venkatesh
Machine learning models are being used extensively in many important areas, but there is no guarantee a model will always perform well or as its developers intended.
no code implementations • 18 Apr 2021 • Buddhika Laknath Semage, Thommen George Karimpanal, Santu Rana, Svetha Venkatesh
Physics-based reinforcement learning tasks can benefit from simplified physics simulators as they potentially allow near-optimal policies to be learned in simulation.
no code implementations • 10 May 2021 • Hung Tran-The, Sunil Gupta, Santu Rana, Svetha Venkatesh
Bayesian optimisation (BO) is a well-known efficient algorithm for finding the global optimum of expensive, black-box functions.
1 code implementation • 20 May 2021 • Binh Nguyen-Thai, Vuong Le, Catherine Morgan, Nadia Badawi, Truyen Tran, Svetha Venkatesh
The absence or abnormality of fidgety movements of joints or limbs is strongly indicative of cerebral palsy in infants.
no code implementations • 18 Jul 2021 • Majid Abdolshah, Hung Le, Thommen Karimpanal George, Sunil Gupta, Santu Rana, Svetha Venkatesh
Transfer in reinforcement learning is usually achieved through generalisation across tasks.
no code implementations • 24 Jul 2021 • Hung Tran-The, Sunil Gupta, Thanh Nguyen-Tang, Santu Rana, Svetha Venkatesh
We propose a novel approach that uses a hybrid of offline learning with online exploration.
no code implementations • ICCV 2021 • Kien Do, Truyen Tran, Svetha Venkatesh
We propose a novel framework for image clustering that incorporates joint representation learning and clustering.
no code implementations • 20 Aug 2021 • Majid Abdolshah, Hung Le, Thommen Karimpanal George, Sunil Gupta, Santu Rana, Svetha Venkatesh
This is achieved by representing the global transition dynamics as a union of local transition functions, each with respect to one active object in the scene.
no code implementations • 29 Sep 2021 • Majid Abdolshah, Hung Le, Thommen Karimpanal George, Vuong Le, Sunil Gupta, Santu Rana, Svetha Venkatesh
Whilst Generative Adversarial Networks (GANs) generate visually appealing high resolution images, the latent representations (or codes) of these models do not allow controllable changes on the semantic attributes of the generated images.
no code implementations • 29 Sep 2021 • Hung Tran-The, Sunil Gupta, Santu Rana, Long Tran-Thanh, Svetha Venkatesh
With a linear reward function, we demonstrate that our algorithm achieves a near-optimal regret.
no code implementations • 29 Sep 2021 • Thommen Karimpanal George, Majid Abdolshah, Hung Le, Santu Rana, Sunil Gupta, Truyen Tran, Svetha Venkatesh
The objective in goal-based reinforcement learning is to learn a policy to reach a particular goal state within the environment.
no code implementations • ICLR 2022 • Kha Pham, Hung Le, Man Ngo, Truyen Tran, Bao Ho, Svetha Venkatesh
We propose Generative Pseudo-Inverse Memory (GPM), a class of deep generative memory models that are fast to write in and read out.
no code implementations • 13 Oct 2021 • Thomas P Quinn, Sunil Gupta, Svetha Venkatesh, Vuong Le
This article is a field guide to transparent model design.
no code implementations • 26 Oct 2021 • Haripriya Harikumar, Kien Do, Santu Rana, Sunil Gupta, Svetha Venkatesh
In this paper, we propose a novel host-free Trojan attack with triggers that are fixed in the semantic space but not necessarily in the pixel space.
no code implementations • NeurIPS 2021 • Hung Le, Thommen Karimpanal George, Majid Abdolshah, Truyen Tran, Svetha Venkatesh
Episodic control enables sample efficiency in reinforcement learning by recalling past experiences from an episodic memory.
no code implementations • 3 Nov 2021 • Thommen George Karimpanal, Hung Le, Majid Abdolshah, Santu Rana, Sunil Gupta, Truyen Tran, Svetha Venkatesh
The optimistic nature of the Q-learning target leads to an overestimation bias, which is an inherent problem associated with standard Q-learning.
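The bias comes from taking a max over noisy value estimates: even when all actions have the same true value, E[max_a Q_hat(a)] exceeds max_a E[Q_hat(a)]. A minimal numerical demonstration (toy noise model, not the paper's method):

```python
import numpy as np

# All 5 actions have true value 0, but each estimate carries N(0, 1) noise.
# Taking the max over the noisy estimates is biased upward:
#   E[max_a Q_hat(a)] > max_a E[Q_hat(a)] = 0.
rng = np.random.default_rng(0)
noisy_q = rng.normal(0.0, 1.0, size=(10000, 5))   # 10000 trials, 5 actions
overestimate = noisy_q.max(axis=1).mean()          # close to 1.16 for 5 arms
```

This is exactly the effect that double Q-learning and related corrections are designed to counteract.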
1 code implementation • ICLR 2022 • Thanh Nguyen-Tang, Sunil Gupta, A. Tuan Nguyen, Svetha Venkatesh
Moreover, we show that our method is more computationally efficient and has a better dependence on the effective dimension of the neural network than an online counterpart.
1 code implementation • NeurIPS 2021 • Arun Kumar Anjanapura Venkatesh, Alistair Shilton, Santu Rana, Sunil Gupta, Svetha Venkatesh
Traditional methods for kernel selection rely on parametric kernel functions or a combination thereof and although the kernel hyperparameters are tuned, these methods often provide sub-optimal results due to the limitations induced by the parametric forms.
1 code implementation • 3 Dec 2021 • Hung Le, Majid Abdolshah, Thommen K. George, Kien Do, Dung Nguyen, Svetha Venkatesh
We introduce a novel training procedure for policy gradient methods wherein episodic memory is used to optimize the hyperparameters of reinforcement learning algorithms on-the-fly.
no code implementations • 11 Feb 2022 • Buddhika Laknath Semage, Thommen George Karimpanal, Santu Rana, Svetha Venkatesh
Sim2real transfer is primarily concerned with transferring policies trained in simulation to potentially noisy real world environments.
no code implementations • 11 Feb 2022 • Buddhika Laknath Semage, Thommen George Karimpanal, Santu Rana, Svetha Venkatesh
Adapting an agent's behaviour to new environments has been one of the primary focus areas of physics-based reinforcement learning.
no code implementations • 24 Feb 2022 • Kien Do, Haripriya Harikumar, Hung Le, Dung Nguyen, Truyen Tran, Santu Rana, Dang Nguyen, Willy Susilo, Svetha Venkatesh
Trojan attacks on deep neural networks are both dangerous and surreptitious.
no code implementations • 15 Mar 2022 • Hung Tran-The, Sunil Gupta, Santu Rana, Svetha Venkatesh
In particular, whether the EI strategy with a standard incumbent converges in the noisy setting remains an open question in Gaussian process bandit optimization.
no code implementations • 17 Apr 2022 • Dung Nguyen, Phuoc Nguyen, Hung Le, Kien Do, Svetha Venkatesh, Truyen Tran
Inspired by the observation that humans often infer the character traits of others and then use them to explain behaviour, we propose a new neural ToM architecture that learns to generate a latent trait vector of an actor from its past trajectories.
no code implementations • 17 Apr 2022 • Dung Nguyen, Phuoc Nguyen, Svetha Venkatesh, Truyen Tran
In particular, we train a role assignment network for small teams by demonstration and transfer the network to larger teams, which continue to learn through interaction with the environment.
no code implementations • 20 Apr 2022 • Hung Le, Thommen Karimpanal George, Majid Abdolshah, Dung Nguyen, Kien Do, Sunil Gupta, Svetha Venkatesh
We introduce a constrained optimization method for policy gradient reinforcement learning, which uses a virtual trust region to regulate each policy update.
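One common way to realize a trust region is to shrink the policy update until the KL divergence from the old policy falls within a budget. The toy discrete policy, advantage values, and step-halving rule below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def kl(p, q):
    # KL(p || q) for discrete distributions.
    return float(np.sum(p * np.log(p / q)))

# Toy discrete policy over 3 actions, parameterized by logits.
old_logits = np.array([0.0, 0.0, 0.0])
old_pi = softmax(old_logits)

# Policy-gradient-style step on some advantage signal (illustrative values).
advantages = np.array([1.0, -0.5, -0.5])
step = 0.5
new_pi = softmax(old_logits + step * advantages)

# Trust-region check: halve the step until KL(old || new) is within budget.
kl_budget = 0.01
while kl(old_pi, new_pi) > kl_budget:
    step *= 0.5
    new_pi = softmax(old_logits + step * advantages)

print(f"step={step}, KL={kl(old_pi, new_pi):.4f}")
```

Constraining each update this way keeps the new policy close to the old one, trading raw step size for stability.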
no code implementations • 21 Apr 2022 • Hung Tran, Vuong Le, Svetha Venkatesh, Truyen Tran
We propose to model the persistent-transient duality in human behavior using a parent-child multi-channel neural network, which features a parent persistent channel that manages the global dynamics and child transient channels that are initiated and terminated on demand to handle detailed interactive actions.
no code implementations • 13 May 2022 • Phuoc Nguyen, Truyen Tran, Ky Le, Sunil Gupta, Santu Rana, Dang Nguyen, Trong Nguyen, Shannon Ryan, Svetha Venkatesh
We introduce a conditional compression problem and propose a fast framework for tackling it.
no code implementations • 25 May 2022 • Thao Minh Le, Vuong Le, Sunil Gupta, Svetha Venkatesh, Truyen Tran
This grounding guides the attention mechanism inside VQA models through a duality of mechanisms: pre-training attention weight calculation and directly guiding the weights at inference time on a case-by-case basis.
1 code implementation • 25 Jul 2022 • Dang Nguyen, Sunil Gupta, Kien Do, Svetha Venkatesh
Traditional KD methods require a large number of labeled training samples and a white-box teacher (i.e., one whose parameters are accessible) to train a good student.
no code implementations • 21 Sep 2022 • Kien Do, Hung Le, Dung Nguyen, Dang Nguyen, Haripriya Harikumar, Truyen Tran, Santu Rana, Svetha Venkatesh
Since the EMA generator can be viewed as an ensemble of the generator's past versions and typically changes less per update than the generator itself, training on its synthetic samples helps the student recall past knowledge and prevents the student from adapting too quickly to the generator's latest updates.
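The smoothing effect of an EMA copy can be seen on toy parameters: per update, the EMA copy moves far less than the rapidly changing original. The random-walk "training" below is an illustrative stand-in for real generator updates:

```python
import numpy as np

rng = np.random.default_rng(0)

decay = 0.99  # EMA decay; typical values are in the 0.99-0.999 range

# Toy "generator parameters" as a flat vector, plus an EMA shadow copy.
gen_params = np.zeros(4)
ema_params = gen_params.copy()

gen_total_change = 0.0
ema_total_change = 0.0
for _ in range(100):
    delta = rng.normal(0.0, 0.1, size=4)   # stand-in for a training update
    gen_params = gen_params + delta
    new_ema = decay * ema_params + (1.0 - decay) * gen_params
    gen_total_change += float(np.linalg.norm(delta))
    ema_total_change += float(np.linalg.norm(new_ema - ema_params))
    ema_params = new_ema

print(gen_total_change, ema_total_change)
```

The EMA copy's cumulative movement is a small fraction of the generator's, which is why samples drawn from it change slowly and act like an ensemble of past versions.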
no code implementations • 23 Nov 2022 • Thanh Nguyen-Tang, Ming Yin, Sunil Gupta, Svetha Venkatesh, Raman Arora
To the best of our knowledge, these are the first $\tilde{\mathcal{O}}(\frac{1}{K})$ bound and absolute zero sub-optimality bound respectively for offline RL with linear function approximation from adaptive data with partial coverage.
no code implementations • ICCV 2023 • Prashant W. Patil, Sunil Gupta, Santu Rana, Svetha Venkatesh, Subrahmanyam Murala
Therefore, effective restoration of multi-weather degraded images is an essential prerequisite for the successful functioning of such systems.
no code implementations • 17 Jan 2023 • Dung Nguyen, Phuoc Nguyen, Hung Le, Kien Do, Svetha Venkatesh, Truyen Tran
Social reasoning necessitates the capacity for theory of mind (ToM), the ability to contextualise and attribute mental states to others without having access to their internal cognitive structure.
no code implementations • 1 Feb 2023 • Alistair Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh
The study of Neural Tangent Kernels (NTKs) has provided much needed insight into convergence and generalization properties of neural networks in the over-parametrized (wide) limit by approximating the network using a first-order Taylor expansion with respect to its weights in the neighborhood of their initialization values.
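The empirical NTK underlying this linearization is just the Gram matrix of per-example parameter gradients, $K_{ij} = \langle \nabla_\theta f(x_i), \nabla_\theta f(x_j) \rangle$. A sketch for a tiny one-hidden-layer network, with gradients approximated by central finite differences (the architecture and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer tanh network with scalar output; 24 parameters total.
W1 = rng.normal(size=(8, 2)) / np.sqrt(2)
w2 = rng.normal(size=8) / np.sqrt(8)
theta = np.concatenate([W1.ravel(), w2])

def f(params, x):
    W1 = params[:16].reshape(8, 2)
    w2 = params[16:]
    return float(w2 @ np.tanh(W1 @ x))

# Empirical NTK: K[i, j] = <df(x_i)/dtheta, df(x_j)/dtheta>,
# with the Jacobian approximated by central finite differences.
X = rng.normal(size=(3, 2))
eps = 1e-5
J = np.zeros((3, theta.size))
for i, x in enumerate(X):
    for k in range(theta.size):
        e = np.zeros(theta.size); e[k] = eps
        J[i, k] = (f(theta + e, x) - f(theta - e, x)) / (2 * eps)

K = J @ J.T  # 3x3 empirical NTK Gram matrix: symmetric, PSD
print(np.round(K, 3))
```

Because $K = J J^\top$, the kernel matrix is positive semi-definite by construction; in the infinite-width limit it stays fixed at initialization, which is what makes the first-order Taylor analysis tractable.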
no code implementations • 8 Feb 2023 • Buddhika Laknath Semage, Thommen George Karimpanal, Santu Rana, Svetha Venkatesh
However, simulators are generally incapable of accurately replicating real-world dynamics, and thus bridging the sim2real gap is an important problem in simulation based learning.
no code implementations • 3 Mar 2023 • Sunil Gupta, Alistair Shilton, Arun Kumar A V, Shannon Ryan, Majid Abdolshah, Hung Le, Santu Rana, Julian Berk, Mahad Rashid, Svetha Venkatesh
In this paper we introduce BO-Muse, a new approach to human-AI teaming for the optimization of expensive black-box functions.
no code implementations • ICCV 2023 • Hung Tran, Vuong Le, Svetha Venkatesh, Truyen Tran
To bridge that gap, this work proposes to model two concurrent mechanisms that jointly control human motion: the Persistent process that runs continually on the global scale, and the Transient sub-processes that operate intermittently on the local context of the human while interacting with objects.
no code implementations • 1 Aug 2023 • Manisha Senadeera, Santu Rana, Sunil Gupta, Svetha Venkatesh
Specifically, we propose a novel way of integrating model selection and BO for the single goal of reaching the function optima faster.
1 code implementation • 9 Aug 2023 • Hung Le, Kien Do, Dung Nguyen, Svetha Venkatesh
We present a new computing model for intrinsic rewards in reinforcement learning that addresses the limitations of existing surprise-driven exploration.
1 code implementation • 21 Aug 2023 • Thommen George Karimpanal, Laknath Buddhika Semage, Santu Rana, Hung Le, Truyen Tran, Sunil Gupta, Svetha Venkatesh
To address this issue, we introduce SEQ (sample efficient querying), where we simultaneously train a secondary RL agent to decide when the LLM should be queried for solutions.
1 code implementation • 7 Dec 2023 • Tuan Hoang, Santu Rana, Sunil Gupta, Svetha Venkatesh
Recent data-privacy laws have sparked interest in machine unlearning, which involves removing the effect of specific training samples from a learnt model as if they were never present in the original training dataset.
no code implementations • 19 Dec 2023 • Phuoc Nguyen, Truyen Tran, Sunil Gupta, Thin Nguyen, Svetha Venkatesh
We then represent the functional form of a target outlier leaf as a function of the node and edge noises.
no code implementations • 5 Feb 2024 • Kien Do, Dung Nguyen, Hung Le, Thao Le, Dang Nguyen, Haripriya Harikumar, Truyen Tran, Santu Rana, Svetha Venkatesh
To overcome this challenge, we propose to approximate $\frac{1}{p(u|b)}$ using a biased classifier trained with "bias amplification" losses.
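The quantity $\frac{1}{p(u|b)}$ acts as a propensity-style importance weight: samples whose label $u$ is unlikely given the bias feature $b$ get up-weighted. A minimal sketch with discrete $b$ and $u$, where the empirical conditional frequency stands in for the paper's bias-amplified classifier (the 80/20 correlation and clipping threshold are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: binary bias feature b and binary label u, with u spuriously
# correlated with b (agreeing 80% of the time).
n = 10000
b = rng.integers(0, 2, size=n)
u = np.where(rng.random(n) < 0.8, b, 1 - b)

# "Classifier" here is the empirical conditional p(u | b); a bias-amplified
# network would play this role for high-dimensional inputs.
p_u_given_b = np.zeros(n)
for bv in (0, 1):
    for uv in (0, 1):
        mask = (b == bv) & (u == uv)
        p_u_given_b[mask] = mask.sum() / (b == bv).sum()

# Importance weights 1 / p(u|b): bias-conflicting samples are up-weighted,
# bias-aligned samples are down-weighted (clipped for numerical safety).
weights = 1.0 / np.clip(p_u_given_b, 1e-3, None)
aligned = weights[u == b].mean()
conflicting = weights[u != b].mean()
print(aligned, conflicting)
```

Here the bias-conflicting group receives roughly a 1/0.2 = 5x weight versus 1/0.8 = 1.25x for the aligned group, so a weighted loss no longer rewards exploiting the spurious correlation.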
no code implementations • 27 Feb 2024 • Arun Kumar A V, Alistair Shilton, Sunil Gupta, Santu Rana, Stewart Greenhill, Svetha Venkatesh
Experimental (design) optimization is a key driver in designing and discovering new products and processes.
no code implementations • 18 Apr 2024 • Hung Le, Dung Nguyen, Kien Do, Svetha Venkatesh, Truyen Tran
We propose Pointer-Augmented Neural Memory (PANM) to help neural networks understand and apply symbol processing to new, longer sequences of data.
1 code implementation • ICML 2020 • Thomas Quinn, Dang Nguyen, Santu Rana, Sunil Gupta, Svetha Venkatesh
Interpretability allows the domain-expert to directly evaluate the model's relevance and reliability, a practice that offers assurance and builds trust.