no code implementations • 2 Mar 2025 • Zhiqi Bu, Ruixuan Liu
Differential privacy (DP) is a privacy-preserving paradigm that protects the training data when training deep learning models.
no code implementations • 27 Feb 2025 • Toan Tran, Ruixuan Liu, Li Xiong
Large language models (LLMs) have become the backbone of modern natural language processing but pose privacy concerns about leaking sensitive training data.
no code implementations • 5 Dec 2024 • Christoph Breunig, Ruixuan Liu, Zhengfei Yu
The second method is a double robust Bayesian procedure that adjusts the prior distribution of the conditional mean function and subsequently corrects the posterior distribution of the resulting ATT.
no code implementations • 31 Oct 2024 • Zheng Ruan, Ruixuan Liu, Shimin Chen, Mengying Zhou, Xinquan Yang, Wei Li, Chen Chen, Wei Shen
For the dense video captioning task on the SoccerNet dataset, we propose to generate a video caption for each soccer action and to locate the timestamp of the caption.
1 code implementation • 19 Aug 2024 • Ruixuan Liu, Alan Chen, WeiYe Zhao, Changliu Liu
In the end, we apply the proposed method to Lego assembly with more than 250 3D structures.
no code implementations • 15 Aug 2024 • Shaojun Xu, Xusheng Luo, Yutong Huang, Letian Leng, Ruixuan Liu, Changliu Liu
To enable non-experts to specify long-horizon, multi-robot collaborative tasks, language models are increasingly used to translate natural language commands into formal specifications.
no code implementations • 4 Jun 2024 • Yixuan Liu, Li Xiong, YuHan Liu, Yujie Gu, Ruixuan Liu, Hong Chen
Third, the model is updated with the gradient reconstructed from recycled common knowledge and noisy incremental information.
no code implementations • 27 May 2024 • Haichao Sha, Yang Cao, Yong Liu, Yuncheng Wu, Ruixuan Liu, Hong Chen
However, recent studies have shown that gradients in deep learning exhibit a heavy-tail phenomenon, i.e., the tails of the gradient distribution have infinite variance, which can cause excessive clipping loss on the gradients under existing DPSGD mechanisms.
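The clipping step this entry refers to can be illustrated with a minimal sketch of per-sample clipping and noise addition in DP-SGD (function and parameter names here are illustrative, not taken from the paper): a heavy-tailed gradient whose norm far exceeds the clip threshold loses most of its magnitude at this step.

```python
import numpy as np

def dpsgd_clip_and_noise(per_sample_grads, clip_norm, noise_multiplier, rng):
    """Clip each per-sample gradient to clip_norm, average, add Gaussian noise."""
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Heavy-tailed gradients often have norm >> clip_norm, so this
        # rescaling discards most of their magnitude (the "clipping loss").
        clipped.append(g * min(1.0, clip_norm / norm))
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_sample_grads),
                       size=mean.shape)
    return mean + noise
```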
no code implementations • 19 Apr 2024 • Zeke Xia, Ming Hu, Dengke Yan, Ruixuan Liu, Anran Li, Xiaofei Xie, Mingsong Chen
To avoid catastrophic forgetting, the main server of KoReA-SFL selects multiple assistant devices for knowledge replay according to the training data distribution of each server-side branch-model portion.
no code implementations • 27 Feb 2024 • Ruixuan Liu, Zhengfei Yu
We consider a quasi-Bayesian method that combines a frequentist estimation in the first stage and a Bayesian estimation/inference approach in the second stage.
no code implementations • 6 Dec 2023 • Haichao Sha, Ruixuan Liu, Yixuan Liu, Hong Chen
We prove that pre-projection enhances the convergence of DP-SGD by confining the clipping error and bias to a fraction of the top gradient eigenspace and, in theory, limits cross-client variance to improve convergence under heterogeneous federation.
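A minimal sketch of the pre-projection idea, assuming an orthonormal low-dimensional basis is already available (how the basis is obtained, and all names below, are illustrative rather than the paper's method): the gradient is projected into the subspace, clipped there, and mapped back, so the clipping error lives only in that subspace.

```python
import numpy as np

def project_then_clip(grad, basis, clip_norm):
    """Project a gradient onto the subspace spanned by the rows of `basis`
    (assumed orthonormal), clip in that subspace, and map back."""
    coeffs = basis @ grad            # low-dimensional representation
    norm = np.linalg.norm(coeffs)
    if norm > clip_norm:
        coeffs *= clip_norm / norm   # clipping error confined to the subspace
    return basis.T @ coeffs          # back to the full parameter space
```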
no code implementations • 23 Nov 2023 • Ruixuan Liu, Ming Hu, Zeke Xia, Jun Xia, Pengyu Zhang, Yihao Huang, Yang Liu, Mingsong Chen
On the one hand, to achieve model training in all the diverse clients, mobile computing systems can only use small low-performance models for collaborative learning.
no code implementations • 20 Nov 2023 • Zhiqi Bu, Justin Chiu, Ruixuan Liu, Sheng Zha, George Karypis
Deep learning using large models has achieved great success in a wide range of domains.
no code implementations • 30 Oct 2023 • Zhiqi Bu, Ruixuan Liu, Yu-Xiang Wang, Sheng Zha, George Karypis
Recent advances have substantially improved the accuracy, memory cost, and training speed of differentially private (DP) deep learning, especially on large vision and language models with millions to billions of parameters.
no code implementations • 2 Oct 2023 • Ruixuan Liu, Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis
The success of large neural networks is crucially determined by the availability of data.
2 code implementations • 12 Sep 2023 • Anthony Cioppa, Silvio Giancola, Vladimir Somers, Floriane Magera, Xin Zhou, Hassan Mkhallati, Adrien Deliège, Jan Held, Carlos Hinojosa, Amir M. Mansourian, Pierre Miralles, Olivier Barnich, Christophe De Vleeschouwer, Alexandre Alahi, Bernard Ghanem, Marc Van Droogenbroeck, Abdullah Kamal, Adrien Maglo, Albert Clapés, Amr Abdelaziz, Artur Xarles, Astrid Orcesi, Atom Scott, Bin Liu, Byoungkwon Lim, Chen Chen, Fabian Deuser, Feng Yan, Fufu Yu, Gal Shitrit, Guanshuo Wang, Gyusik Choi, Hankyul Kim, Hao Guo, Hasby Fahrudin, Hidenari Koguchi, Håkan Ardö, Ibrahim Salah, Ido Yerushalmy, Iftikar Muhammad, Ikuma Uchida, Ishay Be'ery, Jaonary Rabarisoa, Jeongae Lee, Jiajun Fu, Jianqin Yin, Jinghang Xu, Jongho Nang, Julien Denize, Junjie Li, Junpei Zhang, Juntae Kim, Kamil Synowiec, Kenji Kobayashi, Kexin Zhang, Konrad Habel, Kota Nakajima, Licheng Jiao, Lin Ma, Lizhi Wang, Luping Wang, Menglong Li, Mengying Zhou, Mohamed Nasr, Mohamed Abdelwahed, Mykola Liashuha, Nikolay Falaleev, Norbert Oswald, Qiong Jia, Quoc-Cuong Pham, Ran Song, Romain Hérault, Rui Peng, Ruilong Chen, Ruixuan Liu, Ruslan Baikulov, Ryuto Fukushima, Sergio Escalera, Seungcheon Lee, Shimin Chen, Shouhong Ding, Taiga Someya, Thomas B. Moeslund, Tianjiao Li, Wei Shen, Wei zhang, Wei Li, Wei Dai, Weixin Luo, Wending Zhao, Wenjie Zhang, Xinquan Yang, Yanbiao Ma, Yeeun Joo, Yingsen Zeng, Yiyang Gan, Yongqiang Zhu, Yujie Zhong, Zheng Ruan, Zhiheng Li, Zhijian Huang, Ziyu Meng
More information on the tasks, challenges, and leaderboards is available at https://www.soccer-net.org.
no code implementations • 5 Sep 2023 • Ruixuan Liu, Yifan Sun, Changliu Liu
Experiments demonstrate that the EOAT can reliably manipulate Lego bricks and the learning framework can effectively and safely improve the manipulation performance to a 100% success rate.
no code implementations • 20 Aug 2023 • Xusheng Luo, Shaojun Xu, Ruixuan Liu, Changliu Liu
Our approach was experimentally applied to domains of navigation and manipulation.
no code implementations • 20 Jun 2023 • Ruixuan Liu, Rui Chen, Abulikemu Abuduweili, Changliu Liu
Second, it is difficult to ensure interactive safety due to uncertainty in human behaviors.
1 code implementation • 23 May 2023 • WeiYe Zhao, Yifan Sun, Feihan Li, Rui Chen, Ruixuan Liu, Tianhao Wei, Changliu Liu
Due to the diversity of algorithms and tasks, it remains difficult to compare existing safe RL algorithms.
no code implementations • 29 Nov 2022 • Christoph Breunig, Ruixuan Liu, Zhengfei Yu
We prove asymptotic equivalence of our Bayesian procedure and efficient frequentist ATE estimators by establishing a new semiparametric Bernstein-von Mises theorem under double robustness; i.e., the lack of smoothness of conditional mean functions can be compensated by high regularity of the propensity score and vice versa.
no code implementations • 18 Apr 2022 • Ruixuan Liu, Yanlin Wang, Yang Cao, Lingjuan Lyu, Weike Pan, Yun Chen, Hong Chen
Collecting and training over sensitive personal data raise severe privacy concerns in personalized recommendation systems, and federated learning can potentially alleviate the problem by training models over decentralized user data. However, a theoretically private solution covering both the training and serving stages of federated recommendation is essential but still lacking. Furthermore, naively applying differential privacy (DP) to the two stages in federated recommendation would fail to achieve a satisfactory trade-off between privacy and utility due to the high-dimensional characteristics of model gradients and hidden representations. In this work, we propose a federated news recommendation method for achieving better utility in model training and online serving under a DP guarantee. We first clarify the DP definition over behavior data for each round in the life-cycle of a federated recommendation system. Next, we propose a privacy-preserving online serving mechanism under this definition based on the idea of decomposing user embeddings with public basis vectors and perturbing the lower-dimensional combination coefficients.
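The serving mechanism described above can be sketched as follows, under the assumption of an orthonormal public basis and Laplace noise on the coefficients (the noise mechanism, sensitivity handling, and all names are illustrative simplifications, not the paper's exact construction): noise is added to k combination coefficients rather than to the full d-dimensional embedding, with k much smaller than d.

```python
import numpy as np

def private_user_embedding(user_emb, public_basis, epsilon, sensitivity, rng):
    """Decompose a user embedding over public basis vectors (rows of
    public_basis, assumed orthonormal) and perturb only the low-dimensional
    combination coefficients instead of the full embedding."""
    coeffs = public_basis @ user_emb                   # k coefficients, k << d
    noisy = coeffs + rng.laplace(0.0, sensitivity / epsilon, size=coeffs.shape)
    return public_basis.T @ noisy                      # reconstruct for serving
```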
no code implementations • 16 Feb 2022 • Ruixuan Liu, Fangzhao Wu, Chuhan Wu, Yanlin Wang, Lingjuan Lyu, Hong Chen, Xing Xie
In this way, all the clients can participate in the model learning in FL, and the final model can be big and powerful enough.
1 code implementation • EMNLP 2021 • Jingwei Yi, Fangzhao Wu, Chuhan Wu, Ruixuan Liu, Guangzhong Sun, Xing Xie
However, the computation and communication costs of directly learning many existing news recommendation models in a federated way are unacceptable for user clients.
no code implementations • 16 Aug 2021 • Ruixuan Liu, Changliu Liu
Predicting human intention is critical to facilitating safe and efficient human-robot collaboration (HRC).
1 code implementation • 17 Sep 2020 • Ruixuan Liu, Yang Cao, Hong Chen, Ruoyang Guo, Masatoshi Yoshikawa
In this work, by leveraging the privacy amplification effect in the recently proposed shuffle model of differential privacy, we achieve the best of both worlds, i.e., accuracy in the curator model and strong privacy without relying on any trusted party.
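The shuffle model mentioned here can be sketched with a classic local randomizer, randomized response on a single bit (this generic construction and all names below are illustrative, not the paper's specific protocol): each user randomizes locally, a shuffler permutes the reports to break the link to identities, and the analyzer debiases the aggregate.

```python
import math
import random

def randomized_response(bit, eps, rng):
    """Local randomizer: report the true bit with probability e^eps/(e^eps+1)."""
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if rng.random() < p else 1 - bit

def shuffle_and_estimate(bits, eps, rng):
    """Shuffle model: randomize each report locally, shuffle to detach reports
    from user identities, then debias the aggregate count."""
    reports = [randomized_response(b, eps, rng) for b in bits]
    rng.shuffle(reports)             # the shuffler's only job is permutation
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    n = len(reports)
    # Unbiased estimate of the true count under randomized response.
    return (sum(reports) - n * (1 - p)) / (2 * p - 1)
```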
no code implementations • 24 Mar 2020 • Ruixuan Liu, Yang Cao, Masatoshi Yoshikawa, Hong Chen
To prevent privacy leakages from gradients that are calculated on users' sensitive data, local differential privacy (LDP) has been considered as a privacy guarantee in federated SGD recently.