no code implementations • 10 Feb 2024 • Nam Phuong Tran, The Anh Ta, Shuqing Shi, Debmalya Mandal, Yali Du, Long Tran-Thanh
Reward allocation, also known as the credit assignment problem, has been an important topic in economics, engineering, and machine learning.
no code implementations • 29 Jan 2024 • Abhimanyu Pallavi Sudhir, Long Tran-Thanh
Prediction markets are useful for estimating probabilities of claims whose truth will be revealed at some fixed time -- this includes questions about the values of real-world events (i.e. statistical uncertainty), and questions about the values of primitive recursive functions (i.e. logical or algorithmic uncertainty).
no code implementations • 25 Sep 2023 • Khoi Do, Duong Nguyen, Hoa Nguyen, Long Tran-Thanh, Nguyen-Hoang Tran, Quoc-Viet Pham
This paper explores large batch training techniques using the layer-wise adaptive scaling ratio (LARS) across diverse settings, uncovering new insights.
1 code implementation • 17 Jul 2023 • Mustafa Yasir, John Palowitch, Anton Tsitsulin, Long Tran-Thanh, Bryan Perozzi
In this work we examine how two additional synthetic graph generators can improve GraphWorld's evaluation: LFR, a well-established model in the graph clustering literature, and CABAM, a recent adaptation of the Barabasi-Albert model tailored for GNN benchmarking.
1 code implementation • 12 May 2023 • Viet Bach Nguyen, Truong Son Hy, Long Tran-Thanh, Nhung Nghiem
In this work, we propose a novel deep learning architecture named Attention-based Multiresolution Graph Neural Networks (ATMGNN) that learns to combine the spatial graph information, i.e. geographical data, with the temporal information, i.e. time-series data of the number of COVID-19 cases, to predict the future dynamics of the pandemic.
no code implementations • 13 Feb 2023 • Le Cong Dinh, Tri-Dung Nguyen, Alain Zemkoho, Long Tran-Thanh
We study online learning problems in which the learner has extra knowledge about the adversary's behaviour, i.e., in game-theoretic settings where opponents typically follow some no-external regret learning algorithms.
no code implementations • 14 Dec 2022 • Nam Phuong Tran, Long Tran-Thanh
Using the side-observation approach, we prove an improved regret upper bound, which depends on the cardinality of the group, given that the group is finite.
no code implementations • 15 Nov 2022 • Shivakumar Mahesh, Anshuka Rangi, Haifeng Xu, Long Tran-Thanh
We provide the first decentralized and robust algorithm RESYNC for defenders whose performance deteriorates gracefully as $\tilde{O}(C)$ as the number of collisions $C$ from the attackers increases.
no code implementations • 29 Sep 2022 • Minh-Duong Nguyen, Quoc-Viet Pham, Dinh Thai Hoang, Long Tran-Thanh, Diep N. Nguyen, Won-Joo Hwang
Moreover, leveraging the advantages of hierarchical network design, we propose a new label-driven knowledge distillation (LKD) technique at the global server to address the second problem.
no code implementations • 29 Aug 2022 • Anshuka Rangi, Haifeng Xu, Long Tran-Thanh, Massimo Franceschetti
To understand the security threats to reinforcement learning (RL) algorithms, this paper studies poisoning attacks to manipulate \emph{any} order-optimal learning algorithm towards a targeted policy in episodic RL and examines the potential damage of two natural types of poisoning attacks, i.e., the manipulation of \emph{reward} and \emph{action}.
no code implementations • 31 May 2022 • Minh Huynh Nguyen, Nghi D. Q. Bui, Truong Son Hy, Long Tran-Thanh, Tien N. Nguyen
We propose a novel method for code summarization utilizing Heterogeneous Code Representations (HCRs) and our specially designed HierarchyNet.
1 code implementation • 30 May 2022 • Truong Son Hy, Viet Bach Nguyen, Long Tran-Thanh, Risi Kondor
In this paper, we introduce Temporal Multiresolution Graph Neural Networks (TMGNN), the first architecture that both learns to construct the multiscale and multiresolution graph structures and incorporates the time-series signals to capture the temporal changes of the dynamic graphs.
no code implementations • CVPR 2022 • Minh Hieu Phan, The-Anh Ta, Son Lam Phung, Long Tran-Thanh, Abdesselam Bouzerdoum
Our CSW-KD method distills the knowledge of a previous model on old classes that are similar to the new one.
no code implementations • 20 Oct 2021 • Thai Le, Long Tran-Thanh, Dongwon Lee
In answer to this question, we demonstrate that it is indeed possible for adversaries to exploit computational learning mechanisms such as reinforcement learning (RL) to maximize the influence of socialbots while avoiding detection.
no code implementations • 7 Oct 2021 • Le Cong Dinh, David Henry Mguni, Long Tran-Thanh, Jun Wang, Yaodong Yang
In this setting, we first demonstrate that MDP-Expert, an existing algorithm that works well with oblivious adversaries, can still apply and achieve a policy regret bound of $\mathcal{O}(\sqrt{T \log(L)}+\tau^2\sqrt{T \log(|A|)})$ where $L$ is the size of the adversary's pure strategy set and $|A|$ denotes the size of the agent's action space.
no code implementations • 29 Sep 2021 • Nicholas George Bishop, Lau Truong Nguyen, Hieu Trung Thai, Thomas Davies, Long Tran-Thanh
In this paper we investigate the connection of topological similarity between source and target tasks with the efficiency of vanilla transfer learning (i.e., transfer learning without retraining) between them.
no code implementations • 29 Sep 2021 • Hung Tran-The, Sunil Gupta, Santu Rana, Long Tran-Thanh, Svetha Venkatesh
With a linear reward function, we demonstrate that our algorithm achieves a near-optimal regret.
1 code implementation • 8 May 2021 • Lei Xun, Long Tran-Thanh, Bashir M Al-Hashimi, Geoff V. Merrett
Machine learning inference is increasingly being executed locally on mobile and embedded platforms, due to the clear advantages in latency, privacy and connectivity.
no code implementations • 8 May 2021 • Lei Xun, Long Tran-Thanh, Bashir M Al-Hashimi, Geoff V. Merrett
Compared to the existing works, our approach can provide up to 2.36x (energy) and 2.73x (time) wider dynamic range with a 2.4x smaller memory footprint at the same compression rate.
no code implementations • 15 Feb 2021 • Anshuka Rangi, Long Tran-Thanh, Haifeng Xu, Massimo Franceschetti
In particular, for the case of unlimited verifications, we show that with $O(\log T)$ expected number of verifications, a simple modified version of the ETC type bandit algorithm can restore the order optimal $O(\log T)$ regret irrespective of the amount of contamination used by the attacker.
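The paper's algorithm is a modified ETC-type bandit with a verification mechanism; that mechanism is not reproduced here, but as background, the plain explore-then-commit (ETC) template it builds on can be sketched as follows (the Gaussian arms, seed, and parameters are invented for illustration):

```python
import random

def explore_then_commit(arms, horizon, explore_rounds):
    """Explore-then-commit (ETC): pull every arm a fixed number of
    times, then commit to the empirically best arm for the rest."""
    k = len(arms)
    totals, counts = [0.0] * k, [0] * k
    t = 0
    # Exploration phase: round-robin over all arms.
    while t < explore_rounds * k and t < horizon:
        i = t % k
        totals[i] += arms[i]()
        counts[i] += 1
        t += 1
    # Commit phase: exploit the arm with the highest empirical mean.
    best = max(range(k), key=lambda i: totals[i] / counts[i])
    reward = sum(totals)
    while t < horizon:
        reward += arms[best]()
        t += 1
    return best, reward

random.seed(0)
arms = [lambda: random.gauss(0.2, 0.1), lambda: random.gauss(0.8, 0.1)]
best, total = explore_then_commit(arms, horizon=1000, explore_rounds=20)
```

In the adversarial setting of the paper, exploration samples can be contaminated, which is exactly what the proposed verification step guards against.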
no code implementations • 5 Jan 2021 • Anshuka Rangi, Massimo Franceschetti, Long Tran-Thanh
We then propose bandit algorithms for the two feedback models and show that upper and lower bounds on the regret are of the order of $\tilde{O}(N^{2/3})$ and $\tilde\Omega(N^{2/3})$, respectively, where $N$ is the total number of users.
no code implementations • NeurIPS 2020 • Nicholas Bishop, Long Tran-Thanh, Enrico Gerding
In attempts to relax this assumption, fields such as adversarial learning typically assume that data is provided by an adversary, whose sole objective is to fool a learning algorithm.
no code implementations • NeurIPS 2020 • Nicholas Bishop, Hau Chan, Debmalya Mandal, Long Tran-Thanh
On the other hand, when $B_T$ is not known, we show that the dynamic approximate regret of RGA-META is at most $O((K+\tilde{D})^{1/4}\tilde{B}^{1/2}T^{3/4})$, where $\tilde{B}$ is the maximal path variation budget within each batch of RGA-META (which is provably of order $o(\sqrt{T})$).
no code implementations • NeurIPS Workshop TDA_and_Beyond 2020 • Nicholas George Bishop, Thomas Davies, Long Tran-Thanh
The implicit role of a topological term in a loss function is to restrict the class of functions in which we are learning (the hypothesis class) to those with a specific topology.
no code implementations • 22 Jul 2020 • Le Cong Dinh, Nick Bishop, Long Tran-Thanh
We investigate a repeated two-player zero-sum game setting where the column player is also a designer of the system, and has full control on the design of the payoff matrix.
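As background for the repeated zero-sum setting, the value of a fixed payoff matrix can be approximated by letting the row player run multiplicative weights against a best-responding column player; the sketch below is illustrative only (the matching-pennies matrix, step size, and round count are assumptions, and the paper's payoff-matrix design problem is not implemented):

```python
import math

def approx_game_value(A, rounds=3000, eta=0.05):
    """Approximate the value of a two-player zero-sum game: the row
    (maximising) player updates multiplicative weights over rows while
    the column (minimising) player best-responds each round."""
    n, m = len(A), len(A[0])
    w = [1.0] * n
    avg = 0.0
    for _ in range(rounds):
        total = sum(w)
        x = [wi / total for wi in w]
        # Column player best-responds, minimising the row player's payoff.
        col = [sum(x[i] * A[i][j] for i in range(n)) for j in range(m)]
        j = min(range(m), key=lambda jj: col[jj])
        avg += col[j]
        # Row player reweights toward rows that did well against column j.
        w = [w[i] * math.exp(eta * A[i][j]) for i in range(n)]
        s = sum(w)                       # renormalise to avoid overflow
        w = [wi / s for wi in w]
    return avg / rounds

A = [[1, -1], [-1, 1]]                   # matching pennies, value 0
value = approx_game_value(A)
```

By the no-regret guarantee of multiplicative weights, the average payoff converges to the game value as the number of rounds grows.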
1 code implementation • 4 Jun 2020 • Thomas Davies, Jack Aspinall, Bryan Wilder, Long Tran-Thanh
We end with experiments on two datasets that utilise both the topological and fuzzy nature of our algorithm: pre-trained model selection in machine learning and lattice structures from materials science.
no code implementations • 27 Feb 2020 • Saaduddin Mahmud, Md. Mosaddek Khan, Moumita Choudhury, Long Tran-Thanh, Nicholas R. Jennings
Distributed Constraint Optimization Problems (DCOPs) are an important framework for modeling coordinated decision-making problems in multi-agent systems with a set of discrete variables.
no code implementations • 19 Nov 2019 • Minming Li, Long Tran-Thanh, Xiaowei Wu
For the case when defending resources cannot be shared, we present a max-flow-based exact algorithm.
no code implementations • 19 Nov 2019 • Dong Quan Vu, Patrick Loiseau, Alonso Silva, Long Tran-Thanh
Resource allocation games such as the famous Colonel Blotto (CB) and Hide-and-Seek (HS) games are often used to model a large variety of practical problems, but only in their one-shot versions.
Computer Science and Game Theory
no code implementations • NeurIPS 2019 • Edoardo Manino, Long Tran-Thanh, Nicholas R. Jennings
Third, SBIC has provable asymptotic guarantees both in the online and offline settings.
no code implementations • 13 Sep 2019 • Saaduddin Mahmud, Moumita Choudhury, Md. Mosaddek Khan, Long Tran-Thanh, Nicholas R. Jennings
Evolutionary optimization is a generic population-based metaheuristic that can be adapted to solve a wide variety of optimization problems and has proven very effective for combinatorial optimization problems.
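The population-based metaheuristic described above can be illustrated with a minimal genetic algorithm on the one-max toy problem (evolving bit-strings toward all ones); the operators and parameters below are standard textbook choices, not the algorithm proposed in the paper:

```python
import random

def genetic_maxone(n_bits=20, pop_size=30, generations=60, seed=1):
    """Minimal genetic algorithm for the one-max problem: tournament
    selection, one-point crossover, and per-bit mutation."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: keep the fitter of two random picks.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_bits)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n_bits):              # bit-flip mutation
                if rng.random() < 1.0 / n_bits:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_maxone()
```

Adapting such a scheme to DCOPs, as the paper does, requires encoding variable assignments as individuals and the constraint utilities as the fitness function.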
no code implementations • NeurIPS 2019 • Jiarui Gan, Qingyu Guo, Long Tran-Thanh, Bo An, Michael Wooldridge
We then apply a game-theoretic framework at a higher level to counteract such manipulation, in which the defender commits to a policy that specifies her strategy commitment according to the learned information.
2 code implementations • 27 May 2019 • Dong Quan Vu, Patrick Loiseau, Alonso Silva, Long Tran-Thanh
Resource allocation games such as the famous Colonel Blotto (CB) and Hide-and-Seek (HS) games are often used to model a large variety of practical problems, but only in their one-shot versions.
no code implementations • 5 May 2018 • Zheyuan Ryan Shi, Ziye Tang, Long Tran-Thanh, Rohit Singh, Fei Fang
We study Stackelberg Security Games where the defender, in addition to allocating defensive resources to protect targets from the attacker, can strategically manipulate the attacker's payoff under budget constraints, with the amount of change measured by a weighted L^p-norm.
no code implementations • 19 Oct 2016 • Edoardo Manino, Long Tran-Thanh, Nicholas R. Jennings
Crowdsourcing has been successfully employed in the past as an effective and cheap way to execute classification tasks and has therefore attracted the attention of the research community.
no code implementations • NeurIPS 2015 • Jaya Kawale, Hung H. Bui, Branislav Kveton, Long Tran-Thanh, Sanjay Chawla
Matrix factorization (MF) collaborative filtering is an effective and widely used method in recommendation systems.
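As background for the entry above, plain matrix-factorization collaborative filtering approximates each rating by a dot product of learned user and item factors; the SGD sketch below uses an invented toy rating set and hyperparameters, and does not implement the paper's bandit-based (Thompson sampling) extension:

```python
import random

def mf_sgd(ratings, n_users, n_items, k=2, lr=0.05, reg=0.02,
           epochs=200, seed=0):
    """Matrix factorization via SGD: fit r_ui ~ P[u] . Q[i] with
    L2 regularization on the factor vectors."""
    rng = random.Random(seed)
    P = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            err = r - pred
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                # Gradient step on the squared error plus regularizer.
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

# Tiny toy data: (user, item, rating) triples.
ratings = [(0, 0, 5), (0, 1, 1), (1, 0, 4), (1, 1, 1), (2, 1, 5)]
P, Q = mf_sgd(ratings, n_users=3, n_items=2)
pred = sum(P[0][f] * Q[0][f] for f in range(2))  # reconstruct r_00
```

The online variant studied in the paper must additionally decide which ratings to observe, which is where the bandit machinery enters.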
no code implementations • 10 May 2014 • Long Tran-Thanh, Jia Yuan Yu
We introduce the functional bandit problem, where the objective is to find an arm that optimises a known functional of the unknown arm-reward distributions.
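A minimal uniform-exploration sketch of the functional bandit idea: estimate the known functional (here, the variance, as an illustrative assumption) of each unknown arm distribution from samples, then return the arm that maximises it. This is a naive baseline, not the paper's algorithm:

```python
import random
import statistics

def functional_bandit(arms, functional, samples_per_arm):
    """Pull each arm uniformly, estimate the target functional of its
    reward distribution from the draws, and pick the maximiser."""
    estimates = []
    for arm in arms:
        draws = [arm() for _ in range(samples_per_arm)]
        estimates.append(functional(draws))
    return max(range(len(arms)), key=lambda i: estimates[i])

random.seed(0)
# Two Gaussian arms with equal means but different variances.
arms = [lambda: random.gauss(0, 1), lambda: random.gauss(0, 3)]
best = functional_bandit(arms, statistics.variance, samples_per_arm=200)
```

Note that a mean-maximising bandit cannot distinguish these arms at all, which is what makes the functional formulation interesting.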
1 code implementation • 18 Jun 2013 • Sasan Maleki, Long Tran-Thanh, Greg Hines, Talal Rahwan, Alex Rogers
While this algorithm provides a bound on the approximation error, this bound is \textit{asymptotic}, meaning that it only holds when the number of samples increases to infinity.
Computer Science and Game Theory
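The sampling-based Shapley value approximation discussed above (with its asymptotic error bound) is commonly implemented by averaging marginal contributions over random player orderings; the sketch below uses an invented additive toy game, for which the estimate is exact, and is not the paper's bounded-sample algorithm:

```python
import random

def shapley_monte_carlo(players, value_fn, n_samples, seed=0):
    """Estimate Shapley values by averaging each player's marginal
    contribution over randomly sampled permutations of the players."""
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_samples):
        order = list(players)
        rng.shuffle(order)
        coalition, prev = set(), 0.0
        for p in order:
            coalition.add(p)
            v = value_fn(coalition)
            phi[p] += v - prev       # marginal contribution of p
            prev = v
    return {p: phi[p] / n_samples for p in players}

# Additive toy game: v(S) = sum of member weights, so each player's
# exact Shapley value equals its own weight.
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
phi = shapley_monte_carlo(weights,
                          lambda S: sum(weights[p] for p in S), 2000)
```

For non-additive games the estimate is only approximate, and the error guarantee of such estimators is asymptotic in the number of sampled permutations, which is precisely the limitation the paper addresses.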