Search Results for author: Jinhang Zuo

Found 10 papers, 1 paper with code

CoRAST: Towards Foundation Model-Powered Correlated Data Analysis in Resource-Constrained CPS and IoT

no code implementations • 27 Mar 2024 • Yi Hu, Jinhang Zuo, Alanis Zhao, Bob Iannucci, Carlee Joe-Wong

Foundation models (FMs) have emerged as a promising way to harness distributed and diverse environmental data, leveraging prior knowledge to capture the complex temporal and spatial correlations within heterogeneous datasets.

Federated Learning • Representation Learning

Adversarial Attacks on Cooperative Multi-agent Bandits

no code implementations • 3 Nov 2023 • Jinhang Zuo, Zhiyao Zhang, Xuchuang Wang, Cheng Chen, Shuai Li, John C. S. Lui, Mohammad Hajiesmaili, Adam Wierman

Cooperative multi-agent multi-armed bandits (CMA2B) model the collaborative efforts of multiple agents in a shared multi-armed bandit game.

Multi-Armed Bandits
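
For readers new to the setting, here is a minimal sketch of the cooperation in CMA2B (a toy construction, not the paper's protocol or its attack model): several agents pull arms in the same stochastic bandit and pool their observations, so every agent's UCB index is computed from the group's aggregate statistics. All names and constants below are illustrative.

    import math
    import random

    def cooperative_ucb(n_agents=3, n_arms=5, horizon=1000, seed=0):
        # Toy CMA2B loop: agents share a single pool of statistics.
        rng = random.Random(seed)
        means = [rng.random() for _ in range(n_arms)]  # unknown to the agents
        counts = [0] * n_arms    # pulls aggregated over all agents
        sums = [0.0] * n_arms    # rewards aggregated over all agents
        pulls, total = 0, 0.0
        for _ in range(horizon):
            for _ in range(n_agents):
                pulls += 1
                def index(a):
                    if counts[a] == 0:
                        return float("inf")  # force initial exploration
                    bonus = math.sqrt(2 * math.log(pulls) / counts[a])
                    return sums[a] / counts[a] + bonus
                arm = max(range(n_arms), key=index)
                reward = 1.0 if rng.random() < means[arm] else 0.0
                counts[arm] += 1   # shared update: visible to every agent
                sums[arm] += reward
                total += reward
        return total / pulls

Intuitively, the shared statistics that make cooperation sample-efficient are also the attack surface: a few corrupted observations perturb every agent's index at once.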

Intelligent Communication Planning for Constrained Environmental IoT Sensing with Reinforcement Learning

no code implementations • 19 Aug 2023 • Yi Hu, Jinhang Zuo, Bob Iannucci, Carlee Joe-Wong

Internet of Things (IoT) technologies have enabled numerous data-driven mobile applications and have the potential to significantly improve environmental monitoring and hazard warnings through the deployment of a network of IoT sensors.

Intelligent Communication • Multi-agent Reinforcement Learning +1

Contextual Combinatorial Bandits with Probabilistically Triggered Arms

no code implementations • 30 Mar 2023 • Xutong Liu, Jinhang Zuo, Siwei Wang, John C. S. Lui, Mohammad Hajiesmaili, Adam Wierman, Wei Chen

We study contextual combinatorial bandits with probabilistically triggered arms (C$^2$MAB-T) under a variety of smoothness conditions that capture a wide range of applications, such as contextual cascading bandits and contextual influence maximization bandits.
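
For orientation, the canonical smoothness condition in this line of work is the triggering-probability-modulated (TPM) bounded-smoothness condition from the non-contextual CMAB-T literature, which the contextual conditions here generalize (the notation below is the standard one from that literature, not quoted from this paper):

$|r(S;\mu) - r(S;\mu')| \le B \sum_i p_i^{D,S}\,|\mu_i - \mu'_i|$

where $r(S;\mu)$ is the expected reward of action $S$ under mean vector $\mu$, $p_i^{D,S}$ is the probability that base arm $i$ is triggered when $S$ is played under outcome distribution $D$, and $B$ is a smoothness constant. Weighting each arm's estimation error by its triggering probability is what keeps rarely triggered arms from blowing up the regret bound.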

Hierarchical Conversational Preference Elicitation with Bandit Feedback

no code implementations • 6 Sep 2022 • Jinhang Zuo, Songwen Hu, Tong Yu, Shuai Li, Handong Zhao, Carlee Joe-Wong

To achieve this, the recommender system conducts conversations with users, asking about their preferences for different items or item categories.

Recommendation Systems
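
A toy sketch of the two-level conversational loop described above (the interfaces and the round-robin heuristic are mine for illustration; the paper's algorithm works from bandit feedback rather than this naive rule):

    def elicit(categories, feedback, n_category_queries=12):
        """categories: dict mapping category name -> list of item ids.
        feedback(x): hypothetical oracle, 1 if the user likes x else 0."""
        names = list(categories)
        score = {c: 0 for c in names}
        asked = {c: 0 for c in names}
        # Level 1: cheap coarse-grained questions about whole categories.
        for _ in range(n_category_queries):
            c = min(names, key=lambda name: asked[name])  # round-robin
            asked[c] += 1
            score[c] += feedback(c)
        best = max(names, key=lambda name: score[name] / max(asked[name], 1))
        # Level 2: fine-grained questions only inside the preferred category.
        return max(categories[best], key=feedback)

    # Example: a simulated user who likes anything jazz-related.
    cats = {"jazz": ["jazz-1", "jazz-2"], "rock": ["rock-1"]}
    print(elicit(cats, lambda x: int("jazz" in x)))   # -> "jazz-1"

The point of the hierarchy is that category-level answers prune most of the item space before any item-level question is spent.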

Batch-Size Independent Regret Bounds for Combinatorial Semi-Bandits with Probabilistically Triggered Arms or Independent Arms

no code implementations • 31 Aug 2022 • Xutong Liu, Jinhang Zuo, Siwei Wang, Carlee Joe-Wong, John C. S. Lui, Wei Chen

Under this new condition, we propose a BCUCB-T algorithm with variance-aware confidence intervals and conduct a regret analysis that reduces the $O(K)$ factor to $O(\log K)$ or $O(\log^2 K)$ in the regret bound, significantly improving the regret bounds for the above applications.
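
As a rough sketch of what "variance-aware" means here, an empirical-Bernstein-style confidence radius replaces the usual Hoeffding-style one; the constants below are illustrative, not the paper's:

    import math

    def variance_aware_radius(emp_var, n, t):
        # Empirical-Bernstein-style radius: a variance term plus a fast
        # 1/n range term. Compare sqrt(2 * log(t) / n), the Hoeffding-style
        # radius, which ignores the variance entirely.
        log_t = math.log(max(t, 2))
        return math.sqrt(2.0 * emp_var * log_t / n) + 3.0 * log_t / n

When an arm's empirical variance is small, this radius decays like $\log t / n$ rather than $\sqrt{\log t / n}$; loosely speaking, that tighter control over low-variance arms is the mechanism behind trading the $O(K)$ factor for a logarithmic one.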

Multi-layered Network Exploration via Random Walks: From Offline Optimization to Online Learning

no code implementations • 9 Jun 2021 • Xutong Liu, Jinhang Zuo, Xiaowei Chen, Wei Chen, John C. S. Lui

For the online learning setting, neither the network structure nor the node weights are known initially.

Combinatorial Multi-armed Bandits for Resource Allocation

1 code implementation • 10 May 2021 • Jinhang Zuo, Carlee Joe-Wong

In doing so, the decision maker should learn the value of the resources allocated to each user from feedback on each user's received reward.

Multi-Armed Bandits
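
A toy version of the allocation loop described above (the greedy hand-out stands in for an exact combinatorial oracle, and the reward model is invented for illustration):

    import math
    import random

    def allocate(n_users=4, budget=6, horizon=300, seed=0):
        rng = random.Random(seed)
        # Toy ground truth: value of giving `a` units to user `u`.
        scale = [rng.uniform(0.5, 1.5) for _ in range(n_users)]
        def value(u, a):
            return min(1.0, a * scale[u] / budget)
        counts = [[0] * (budget + 1) for _ in range(n_users)]
        sums = [[0.0] * (budget + 1) for _ in range(n_users)]
        alloc = [0] * n_users
        for t in range(1, horizon + 1):
            def ucb(u, a):
                if counts[u][a] == 0:
                    return float("inf")
                mean = sums[u][a] / counts[u][a]
                return mean + math.sqrt(2 * math.log(t) / counts[u][a])
            # Hand out units one at a time to the user whose next unit looks
            # best optimistically (a heuristic stand-in for an exact oracle).
            alloc = [0] * n_users
            for _ in range(budget):
                u = max(range(n_users), key=lambda v: ucb(v, alloc[v] + 1))
                alloc[u] += 1
            # Semi-bandit feedback: each user's reward is observed separately.
            for u in range(n_users):
                r = value(u, alloc[u]) + rng.gauss(0.0, 0.05)
                counts[u][alloc[u]] += 1
                sums[u][alloc[u]] += r
        return alloc

Each (user, amount) pair plays the role of a base arm here, and the per-user feedback line is the kind of observation the abstract refers to.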

Online Competitive Influence Maximization

no code implementations • 24 Jun 2020 • Jinhang Zuo, Xutong Liu, Carlee Joe-Wong, John C. S. Lui, Wei Chen

In this paper, we introduce a new Online Competitive Influence Maximization (OCIM) problem, where two competing items (e.g., products, news stories) propagate in the same network and the influence probabilities on edges are unknown.

Observe Before Play: Multi-armed Bandit with Pre-observations

no code implementations • 21 Nov 2019 • Jinhang Zuo, Xiaoxi Zhang, Carlee Joe-Wong

We consider the stochastic multi-armed bandit (MAB) problem in a setting where a player can pay to pre-observe arm rewards before playing an arm in each round.
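
A sketch of the interaction protocol (the peek-then-decide rule below is a naive stand-in for the paper's policy, and the cost model is illustrative):

    import random

    def observe_before_play(means=(0.3, 0.5, 0.7), cost=0.05,
                            horizon=2000, seed=0):
        rng = random.Random(seed)
        n = len(means)
        counts, sums = [0] * n, [0.0] * n
        def est(a):
            return sums[a] / counts[a] if counts[a] else 0.5
        net = 0.0
        for _ in range(horizon):
            realized = [1.0 if rng.random() < m else 0.0 for m in means]
            # Pay to peek at the empirically best arm's realization.
            best = max(range(n), key=est)
            net -= cost
            counts[best] += 1
            sums[best] += realized[best]
            if realized[best] == 1.0:
                arm = best          # the peek looked good, so commit to it
            else:
                # Otherwise fall back to the best of the remaining arms.
                arm = max((a for a in range(n) if a != best), key=est)
                counts[arm] += 1
                sums[arm] += realized[arm]
            net += realized[arm]
        return net / horizon

The tension the paper studies is visible even in this toy: pre-observations cost something every round, so they only pay off when the revealed information changes which arm gets played.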
