no code implementations • 9 May 2025 • Yiming Niu, Jinliang Deng, Lulu Zhang, Zimu Zhou, Yongxin Tong
In the online phase, FOCUS dynamically adapts these patterns to the current input and captures dependencies between the input segment and high-level events, enabling both accurate and efficient forecasting.
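Below is a minimal sketch of the online-phase idea described above, not the authors' code: a bank of offline-extracted patterns is adapted to the current input segment via cross-attention, and the attended patterns condition the forecast head. All module names and shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn


class OnlinePatternAdapter(nn.Module):
    def __init__(self, d_model: int, horizon: int, n_patterns: int):
        super().__init__()
        # Offline-learned pattern bank (frozen in the online phase).
        self.patterns = nn.Parameter(torch.randn(n_patterns, d_model), requires_grad=False)
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(d_model, horizon)

    def forward(self, segment_emb: torch.Tensor) -> torch.Tensor:
        # segment_emb: (batch, seq_len, d_model) embedding of the current input segment.
        bank = self.patterns.unsqueeze(0).expand(segment_emb.size(0), -1, -1)
        # Query the pattern bank with the input segment to capture dependencies
        # between the segment and high-level events.
        adapted, _ = self.attn(query=segment_emb, key=bank, value=bank)
        # Forecast from the last adapted step.
        return self.head(adapted[:, -1, :])


if __name__ == "__main__":
    model = OnlinePatternAdapter(d_model=64, horizon=24, n_patterns=16)
    print(model(torch.randn(8, 96, 64)).shape)  # torch.Size([8, 24])
```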
no code implementations • 23 Apr 2025 • Shuyue Wei, Yongxin Tong, Zimu Zhou, Tianran He, Yi Xu
Furthermore, existing solutions fail to achieve both high accuracy and efficiency, keeping practical use of SV out of reach, because they neither choose a suitable computation scheme for the approximation framework nor exploit the properties of the utility function in FL.
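For context, here is a minimal sketch, an assumption rather than the paper's algorithm, of Monte Carlo permutation sampling for Shapley values, the standard approximation framework this passage refers to. The `utility` callable maps a coalition of participant ids to a score, e.g. validation accuracy of a model aggregated over that coalition.

```python
import random
from typing import Callable, Dict, FrozenSet, List


def shapley_mc(players: List[int],
               utility: Callable[[FrozenSet[int]], float],
               n_permutations: int = 200,
               seed: int = 0) -> Dict[int, float]:
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_permutations):
        order = players[:]
        rng.shuffle(order)
        coalition: FrozenSet[int] = frozenset()
        prev_value = utility(coalition)
        for p in order:
            coalition = coalition | {p}
            value = utility(coalition)
            phi[p] += value - prev_value  # marginal contribution of p in this permutation
            prev_value = value
    return {p: v / n_permutations for p, v in phi.items()}


if __name__ == "__main__":
    # Toy utility: each participant contributes its own weight, plus a synergy bonus.
    weights = {0: 1.0, 1: 2.0, 2: 0.5}
    u = lambda s: sum(weights[p] for p in s) + (0.5 if len(s) >= 2 else 0.0)
    print(shapley_mc(list(weights), u))
```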
no code implementations • 17 Feb 2025 • Qianchi Zhang, Hainan Zhang, Liang Pang, Hongwei Zheng, Yongxin Tong, Zhiming Zheng
We optimize each module to tackle complex reasoning challenges: (1) the clue extractor first uses sentences containing the answer, together with semantically similar ones, as fine-tuning targets, aiming to extract sufficient potential clues; (2) the re-ranker is trained to prioritize effective clues based on real feedback from the generation module, treating clues capable of generating the correct answer as positive samples and the rest as negative; (3) the truncator takes the minimum number of clues needed to answer the question (the truncation point) as its fine-tuning target and truncates the re-ranked clues to achieve fine-grained noise filtering.
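An illustrative sketch of the three-stage inference pipeline described above (clue extraction, re-ranking, truncation). The module interfaces and the toy scoring are assumptions; the real system fine-tunes each stage as the abstract describes.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class CluePipeline:
    extract_clues: Callable[[str, List[str]], List[str]]   # question, passages -> candidate clue sentences
    score_clue: Callable[[str, str], float]                 # question, clue -> re-ranker relevance score
    truncation_point: Callable[[str, List[str]], int]       # question, ranked clues -> number of clues to keep

    def run(self, question: str, passages: List[str]) -> List[str]:
        # 1) Extract potential clue sentences from the retrieved passages.
        clues = self.extract_clues(question, passages)
        # 2) Re-rank clues by their usefulness for generating a correct answer.
        ranked = sorted(clues, key=lambda c: self.score_clue(question, c), reverse=True)
        # 3) Keep only the minimum clues needed (fine-grained noise filtering).
        k = self.truncation_point(question, ranked)
        return ranked[:k]


if __name__ == "__main__":
    pipe = CluePipeline(
        extract_clues=lambda q, ps: [s for p in ps for s in p.split(". ") if s],
        score_clue=lambda q, c: sum(w in c.lower() for w in q.lower().split()),
        truncation_point=lambda q, ranked: min(2, len(ranked)),
    )
    print(pipe.run("who wrote hamlet", ["Hamlet is a tragedy. It was written by Shakespeare."]))
```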
no code implementations • 14 Feb 2025 • Tao Fan, Hanlin Gu, Xuemei Cao, Chee Seng Chan, Qian Chen, Yiqiang Chen, Yihui Feng, Yang Gu, Jiaxiang Geng, Bing Luo, Shuoling Liu, Win Kent Ong, Chao Ren, Jiaqi Shao, Chuan Sun, Xiaoli Tang, Hong Xi Tae, Yongxin Tong, Shuyue Wei, Fan Wu, Wei Xi, Mingcong Xu, He Yang, Xin Yang, Jiangpeng Yan, Hao Yu, Han Yu, Teng Zhang, Yifei Zhang, Xiaojin Zhang, Zhenzhe Zheng, Lixin Fan, Qiang Yang
This paper provides a comprehensive summary of the ten challenging problems inherent in FedFMs, encompassing foundational theory, utilization of private data, continual learning, unlearning, Non-IID and graph data, bidirectional knowledge transfer, incentive mechanism design, game mechanism design, model watermarking, and efficiency.
1 code implementation • 16 Dec 2024 • Wentao Yu, Shuo Chen, Yongxin Tong, Tianlong Gu, Chen Gong
To address these issues, we propose a novel federated learning method that integrally models Inter-Intra Heterogeneity (FedIIH).
1 code implementation • 6 Dec 2024 • Ye Sun, Lei Shi, Yongxin Tong
Link prediction (LP) is crucial for knowledge graph (KG) completion but commonly suffers from interpretability issues.
no code implementations • 6 Apr 2024 • Yan Kang, Ziyao Ren, Lixin Fan, Linghua Yang, Yongxin Tong, Qiang Yang
This vulnerability may lead the current heuristic hyperparameter configuration of SecureBoost to a suboptimal trade-off between utility, privacy, and efficiency, all of which are pivotal to a trustworthy federated learning system.
no code implementations • 24 Jul 2023 • Wei Yuan, Liang Qu, Lizhen Cui, Yongxin Tong, Xiaofang Zhou, Hongzhi Yin
Owing to their privacy-preserving nature, federated recommender systems (FedRecs) have garnered increasing interest in the realm of on-device recommendation.
no code implementations • 20 Jul 2023 • Ziyao Ren, Yan Kang, Lixin Fan, Linghua Yang, Yongxin Tong, Qiang Yang
To fill this gap, we propose a Constrained Multi-Objective SecureBoost (CMOSB) algorithm to find Pareto-optimal solutions, where each solution is a set of hyperparameters achieving an optimal trade-off between utility loss, training cost, and privacy leakage.
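A minimal sketch, not the CMOSB implementation, of the Pareto-optimality notion used above: each candidate is a hyperparameter set evaluated on three objectives to be minimized (utility loss, training cost, privacy leakage), and only non-dominated candidates are kept. The objective values are made up for illustration.

```python
from typing import Dict, List, Tuple

Candidate = Tuple[Dict[str, float], Tuple[float, float, float]]  # (hyperparameters, objectives)


def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    # a dominates b if it is no worse in every objective and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def pareto_front(candidates: List[Candidate]) -> List[Candidate]:
    return [c for c in candidates
            if not any(dominates(o[1], c[1]) for o in candidates if o is not c)]


if __name__ == "__main__":
    candidates = [
        ({"max_depth": 3, "n_trees": 50},  (0.08, 12.0, 0.30)),
        ({"max_depth": 6, "n_trees": 100}, (0.05, 35.0, 0.55)),
        ({"max_depth": 6, "n_trees": 50},  (0.06, 40.0, 0.60)),  # dominated by the second candidate
    ]
    for hp, obj in pareto_front(candidates):
        print(hp, obj)
```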
no code implementations • 5 Jun 2023 • Liyue Chen, Jiangyi Fang, Zhe Yu, Yongxin Tong, Shaosheng Cao, Leye Wang
In this paper, we propose RegionGen, a data-driven region generation framework that can specify regions with key characteristics (e.g., good spatial semantic meaning and predictability) by modeling region generation as a multi-objective optimization problem.
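An illustrative sketch built on assumptions, not the RegionGen algorithm: candidate partitions of a city grid are scored on two of the objectives mentioned above, spatial-semantic coherence and predictability, and the best trade-off under a simple scalarization (a stand-in for the full multi-objective formulation) is selected.

```python
from typing import Callable, List

Partition = List[List[int]]  # each region is a list of grid-cell ids


def select_partition(candidates: List[Partition],
                     semantic_score: Callable[[Partition], float],
                     predictability_score: Callable[[Partition], float],
                     alpha: float = 0.5) -> Partition:
    # Scalarize the two objectives; alpha balances semantics vs. predictability.
    return max(candidates,
               key=lambda p: alpha * semantic_score(p) + (1 - alpha) * predictability_score(p))


if __name__ == "__main__":
    candidates = [[[0, 1], [2, 3]], [[0, 1, 2], [3]], [[0], [1], [2], [3]]]
    # Toy objectives: prefer balanced regions (semantics) and fewer regions (predictability).
    semantic = lambda p: min(len(r) for r in p) - max(len(r) for r in p)
    predict = lambda p: -len(p)
    print(select_partition(candidates, semantic, predict))  # [[0, 1], [2, 3]]
```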
no code implementations • 25 Jun 2022 • Zhongnan Qu, Zimu Zhou, Yongxin Tong, Lothar Thiele
Data collected by IoT devices are often private and exhibit large diversity across users.
no code implementations • Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining 2021 • Dingyuan Shi, Yongxin Tong, Zimu Zhou, Bingchen Song, Weifeng Lv, Qiang Yang
Ride hailing is a widespread shared-mobility application whose central issue is assigning taxi requests to drivers under various objectives.
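A minimal sketch of the core assignment step mentioned above: matching requests to drivers can be cast as bipartite matching, solved here with the Hungarian algorithm on a cost matrix (e.g., pickup time). The cost values and the single-objective setup are illustrative assumptions; the paper targets richer objectives.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j]: e.g. estimated pickup time if driver i serves request j.
cost = np.array([
    [4.0, 1.5, 9.0],
    [2.0, 6.0, 3.5],
    [5.5, 2.5, 1.0],
])

drivers, requests = linear_sum_assignment(cost)  # minimizes total assignment cost
for d, r in zip(drivers, requests):
    print(f"driver {d} -> request {r} (cost {cost[d, r]})")
print("total cost:", cost[drivers, requests].sum())
```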
2 code implementations • 6 Aug 2021 • Jiaqian Ren, Hao Peng, Lei Jiang, Jia Wu, Yongxin Tong, Lihong Wang, Xu Bai, Bo Wang, Qiang Yang
Experiments on both synthetic and real-world datasets show that the framework is highly effective at detection both in multilingual data and in languages where training samples are scarce.
2 code implementations • 2021 IEEE 37th International Conference on Data Engineering (ICDE) • Yansheng Wang, Yongxin Tong, Dingyuan Shi, Ke Xu
Traditional learning-to-rank (LTR) models are usually trained in a centralized manner on a large amount of data.
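A minimal sketch, based on assumptions rather than the paper's protocol, of the federated alternative implied above: each client takes local pairwise-ranking steps on its own data, and only model weights, never raw interaction data, are sent for averaging.

```python
import numpy as np


def local_ranknet_step(w: np.ndarray, X: np.ndarray, pairs, lr: float = 0.1) -> np.ndarray:
    # pairs: list of (i, j) where document i should rank above document j.
    w = w.copy()
    for i, j in pairs:
        diff = X[i] - X[j]
        p = 1.0 / (1.0 + np.exp(-w @ diff))   # P(i ranked above j)
        w += lr * (1.0 - p) * diff            # gradient ascent on the pairwise log-likelihood
    return w


def federated_average(client_weights) -> np.ndarray:
    return np.mean(client_weights, axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_global = np.zeros(4)
    for _ in range(5):       # communication rounds
        updates = []
        for _ in range(3):   # clients, each with private local data
            X = rng.normal(size=(6, 4))
            pairs = [(0, 1), (2, 3), (4, 5)]
            updates.append(local_ranknet_step(w_global, X, pairs))
        w_global = federated_average(updates)
    print("global ranking weights:", np.round(w_global, 3))
```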
no code implementations • 18 May 2021 • Xiaocheng Tang, Fan Zhang, Zhiwei Qin, Yansheng Wang, Dingyuan Shi, Bingchen Song, Yongxin Tong, Hongtu Zhu, Jieping Ye
In this paper we propose a unified value-based dynamic learning framework (V1D3) for tackling both tasks.
no code implementations • 23 May 2019 • Xiaoxi He, Dawei Gao, Zimu Zhou, Yongxin Tong, Lothar Thiele
Given a set of deep neural networks, each pre-trained for a single task, it is desirable that executing arbitrary combinations of tasks incurs minimal computation cost.
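An illustrative sketch, an assumption rather than the paper's merging method, of why sharing computation cuts the cost of executing task combinations: the expensive trunk runs once, and only light per-task heads run for each requested task.

```python
import torch
import torch.nn as nn


class MultiTaskNet(nn.Module):
    def __init__(self, n_tasks: int, d_in: int = 128, d_hidden: int = 256, d_out: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                      nn.Linear(d_hidden, d_hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(d_hidden, d_out) for _ in range(n_tasks))

    def forward(self, x: torch.Tensor, tasks):
        # Shared computation is paid once, regardless of how many tasks are requested.
        z = self.backbone(x)
        return {t: self.heads[t](z) for t in tasks}


if __name__ == "__main__":
    net = MultiTaskNet(n_tasks=4)
    x = torch.randn(2, 128)
    outs = net(x, tasks=[0, 2, 3])  # arbitrary task combination, one backbone pass
    print({t: o.shape for t, o in outs.items()})
```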
1 code implementation • 13 Feb 2019 • Qiang Yang, Yang Liu, Tianjian Chen, Yongxin Tong
We propose a possible solution to these challenges: secure federated learning.
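A minimal sketch, an assumption and not this paper's protocol, of one "secure" ingredient commonly used in secure federated learning: pairwise additive masking, where each client perturbs its update with masks that cancel in the sum, so the server only learns the aggregate. (Real secure aggregation derives the masks from pairwise shared seeds; this single-process simulation only illustrates the cancellation.)

```python
import numpy as np


def masked_updates(updates, seed: int = 0):
    rng = np.random.default_rng(seed)
    n = len(updates)
    masked = [u.astype(float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(size=updates[0].shape)
            masked[i] += mask   # client i adds the pairwise mask
            masked[j] -= mask   # client j subtracts the same mask
    return masked


if __name__ == "__main__":
    updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
    masked = masked_updates(updates)
    # Individual masked updates look random, but their sum equals the true sum.
    print("server sees e.g.:", np.round(masked[0], 2))
    print("aggregate:", np.round(sum(masked), 6))  # == [9. 12.]
```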