no code implementations • 9 Mar 2025 • Yang Xiao, Wang Lu, Jie Ji, Ruimeng Ye, Gen Li, Xiaolong Ma, Bo Hui
We believe our approach paves the way for a more precise understanding of brain signals in the future.
no code implementations • 25 Jan 2025 • Bohan Liu, Yang Xiao, Ruimeng Ye, Zinan Ling, Xiaolong Ma, Bo Hui
In this paper, we experimentally demonstrate that, when DBA is applied directly to decentralized FL, the attack success rate depends on the distribution of attackers in the network architecture.
3 code implementations • 13 Dec 2024 • Yuchen Fang, Yuxuan Liang, Bo Hui, Zezhi Shao, Liwei Deng, Xu Liu, Xinke Jiang, Kai Zheng
From the spatial data management perspective, we present a novel Transformer framework called PatchSTG to efficiently and dynamically model spatial dependencies for large-scale traffic forecasting with interpretability and fidelity.
Ranked #1 on Traffic Prediction on LargeST
1 code implementation • 16 Oct 2024 • Ruimeng Ye, Yang Xiao, Bo Hui
We remark that existing works investigate the phenomenon of weak-to-strong generalization in an analogous setup (i.e., binary classification) rather than on practical alignment-relevant tasks (e.g., safety).
1 code implementation • 10 May 2024 • Bo Hui, Haolin Yuan, Neil Gong, Philippe Burlina, Yinzhi Cao
As a result, a natural attack, called prompt leaking, is to steal the system prompt from an LLM application, which compromises the developer's intellectual property.
no code implementations • 7 Mar 2024 • Bohan Liu, Zijie Zhang, Peixiong He, Zhensen Wang, Yang Xiao, Ruimeng Ye, Yang Zhou, Wei-Shinn Ku, Bo Hui
The Lottery Ticket Hypothesis (LTH) states that a dense neural network model contains a highly sparse subnetwork (i.e., winning tickets) that can achieve even better performance than the original model when trained in isolation.
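The winning-ticket idea can be illustrated with a toy global magnitude-pruning step: keep only the largest-magnitude weights and zero out the rest. This is a hedged sketch of one common way candidate tickets are extracted, not the specific procedure used in the paper; `winning_ticket_mask` is a hypothetical helper name.

```python
import numpy as np

def winning_ticket_mask(weights, sparsity=0.9):
    """Return a boolean mask keeping only the largest-magnitude weights.

    Toy illustration: a candidate 'winning ticket' subnetwork is the set
    of weights surviving global magnitude pruning at the given sparsity.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of weights to prune
    threshold = np.partition(flat, k)[k]   # k-th smallest magnitude
    return np.abs(weights) >= threshold

rng = np.random.default_rng(0)
w = rng.normal(size=(100, 100))
mask = winning_ticket_mask(w, sparsity=0.9)
# Roughly 10% of the weights survive; training only these from their
# original initialization is the LTH "trained in isolation" experiment.
```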
1 code implementation • 28 Oct 2023 • Chao Jiang, Bo Hui, Bohan Liu, Da Yan
Therefore, we propose to find the winning ticket with varying sparsity along different layers in the model.
1 code implementation • 20 May 2023 • Yuchen Yang, Bo Hui, Haolin Yuan, Neil Gong, Yinzhi Cao
Text-to-image generative models such as Stable Diffusion and DALL·E raise many ethical concerns due to the generation of harmful images such as Not-Safe-for-Work (NSFW) ones.
no code implementations • 3 May 2023 • Bo Hui, Da Yan, Xiaolong Ma, Wei-Shinn Ku
Therefore, we propose two techniques to improve GNN performance when the graph sparsity is high.
1 code implementation • 26 Oct 2022 • Haolin Yuan, Bo Hui, Yuchen Yang, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao
Federated learning (FL) allows multiple clients to collaboratively train a deep learning model.
1 code implementation • 2 Apr 2022 • Bo Hui, Wenlu Wang, Jiao Yu, Zhitao Gong, Wei-Shinn Ku, Min-Te Sun, Hua Lu
Based on the inference method and tracking models, we develop innovative indoor range and k nearest neighbor (kNN) query algorithms.
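The query semantics can be illustrated with a brute-force kNN over 2-D points. This is only a sketch of what a kNN query returns; the paper's indoor algorithms additionally handle walls and uncertain object positions, which this toy (with the hypothetical name `knn_query`) ignores.

```python
import heapq

def knn_query(points, query, k):
    """Return the k points nearest to `query` by Euclidean distance.

    Brute-force baseline: real indoor query processing prunes candidates
    using the building topology instead of scanning every point.
    """
    def dist2(p):
        return sum((a - b) ** 2 for a, b in zip(p, query))
    return heapq.nsmallest(k, points, key=dist2)

pts = [(0, 0), (1, 1), (5, 5), (2, 2)]
nearest = knn_query(pts, (0, 0), 2)
```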
no code implementations • 6 Dec 2021 • Yuchen Fang, Yanjun Qin, Haiyong Luo, Fang Zhao, Liang Zeng, Bo Hui, Chenxing Wang
Besides, we propose a novel encoder-decoder architecture to incorporate the cross-time dynamic graph-based GCN for multi-step traffic forecasting.
1 code implementation • 14 Oct 2021 • Jie Zhang, Bo Hui, Po-Wei Harn, Min-Te Sun, Wei-Shinn Ku
We test our model on several graph datasets including directed homogeneous and heterogeneous graphs.
1 code implementation • 5 Jan 2021 • Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao
The success of the former heavily depends on the quality of the shadow model, i.e., the transferability between the shadow and the target. The latter, given only black-box probing access to the target model, cannot make an effective inference of unknowns compared with MI attacks using shadow models, due to the insufficient number of qualified samples labeled with ground-truth membership information.
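For intuition, the simplest metric-based membership-inference attack thresholds the target model's confidence: models tend to be more confident on their training data. This hedged toy (with the hypothetical name `infer_membership`) only illustrates the attack idea; shadow-model and black-box attacks discussed above are substantially more involved.

```python
def infer_membership(confidences, threshold=0.9):
    """Flag a sample as a training member if the model's top-class
    confidence on it exceeds a fixed threshold.

    Toy metric-based MI attack: overfit models assign systematically
    higher confidence to samples seen during training.
    """
    return [c >= threshold for c in confidences]

# Hypothetical top-class confidences for four probed samples.
guesses = infer_membership([0.99, 0.55, 0.95, 0.40])
```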