no code implementations • 18 Jun 2025 • Florin-Alexandru Vasluianu, Tim Seizinger, Zhuyun Zhou, Cailian Chen, Zongwei Wu, Radu Timofte, Mingjia Li, Jin Hu, Hainuo Wang, Hengxing Liu, Jiarui Wang, Qiming Hu, Xiaojie Guo, Xin Lu, Jiarong Yang, Yuanfei Bao, Anya Hu, Zihao Fan, Kunyu Wang, Jie Xiao, Xi Wang, Xueyang Fu, Zheng-Jun Zha, Yu-Fan Lin, Chia-Ming Lee, Chih-Chung Hsu, Xingbo Wang, Dong Li, Yuxu Chen, Bin Chen, Yuanbo Zhou, Yuanbin Chen, Hongwei Wang, Jiannan Lin, Qinquan Gao, Tong Tong, Zhao Zhang, Yanyan Wei, Wei Dong, Han Zhou, Seyed Amirreza Mousavi, Jun Chen, Haobo Liang, Jiajie Jing, Junyu Li, Yan Yang, Seoyeon Lee, Chaewon Kim, Ziyu Feng, Shidi Chen, Bowen Luan, Zewen Chen, Vijayalaxmi Ashok Aralikatti, G Gyaneshwar Rao, Nikhil Akalwadi, Chaitra Desai, Ramesh Ashok Tabib, Uma Mudenagudi, Anas M. Ali, Bilel Benjdira, Wadii Boulila, Alexandru Brateanu, Cosmin Ancuti, Tanmay Chaturvedi, Manish Kumar, Anmol Srivastav, Daksh Trivedi, Shashwat Thakur, Kishor Upla, Zeyu Xiao, Zhuoyuan Li, Boda Zhou, Shashank Shekhar, Kele Xu, Qisheng Xu, Zijian Gao, Tianjiao Wan, Suiyi Zhao, Bo Wang, Yan Luo, Mingshen Wang, Yilin Zhang
This work reviews the results of the NTIRE 2025 Shadow Removal Challenge.
1 code implementation • 2 Sep 2024 • Jiarong Yang, Yuan Liu
To address this issue, we consider an asynchronous SFL framework in which an activation buffer and a model buffer are maintained at the server to manage the asynchronously transmitted activations and client-side models, respectively.
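A minimal sketch of what such a buffered, asynchronous server could look like is given below; the class name, buffer capacity, and averaging rule are illustrative assumptions, not the paper's implementation:

```python
import queue
import torch

class AsyncSFLServer:
    """Illustrative asynchronous SFL server (hypothetical design): buffers
    activations and client-side models that arrive at different times."""

    def __init__(self, server_model, capacity=32):
        self.server_model = server_model  # server-side portion of the split model
        self.opt = torch.optim.SGD(server_model.parameters(), lr=0.01)
        self.activation_buffer = queue.Queue(maxsize=capacity)  # cut-layer activations
        self.model_buffer = []  # client-side models awaiting aggregation

    def receive_activation(self, client_id, acts, labels):
        # Clients push activations asynchronously; the buffer decouples
        # their pace from the server's update loop.
        self.activation_buffer.put((client_id, acts, labels))

    def server_step(self, loss_fn):
        # Pop one buffered activation, finish the forward/backward pass,
        # and return the cut-layer gradient to the originating client.
        client_id, acts, labels = self.activation_buffer.get()
        acts = acts.detach().requires_grad_(True)
        loss = loss_fn(self.server_model(acts), labels)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        return client_id, acts.grad

    def receive_client_model(self, state_dict):
        self.model_buffer.append(state_dict)

    def aggregate_clients(self):
        # Plain average of buffered client-side models (assumed rule).
        avg = {k: torch.stack([m[k].float() for m in self.model_buffer]).mean(0)
               for k in self.model_buffer[0]}
        self.model_buffer.clear()
        return avg
```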
no code implementations • 8 May 2024 • Jiarong Yang, Yuan Liu
Split Federated Learning (SFL) is a distributed machine learning framework that strategically divides the learning process between a server and clients, collaboratively training a shared model by aggregating client-side models updated on each client's local data.
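For concreteness, here is a minimal sketch of one SFL training round with an assumed cut-layer split; the toy model and split point are illustrative:

```python
import torch
import torch.nn as nn

# Illustrative split of a small network at a "cut layer".
client_part = nn.Sequential(nn.Linear(784, 128), nn.ReLU())  # runs on the client
server_part = nn.Sequential(nn.Linear(128, 10))              # runs on the server
c_opt = torch.optim.SGD(client_part.parameters(), lr=0.01)
s_opt = torch.optim.SGD(server_part.parameters(), lr=0.01)

def sfl_round(x, y):
    # Client-side forward up to the cut layer; activations go to the server.
    acts = client_part(x)
    acts_srv = acts.detach().requires_grad_(True)

    # Server-side forward/backward on the received activations.
    loss = nn.functional.cross_entropy(server_part(acts_srv), y)
    s_opt.zero_grad(); loss.backward(); s_opt.step()

    # The cut-layer gradient returns to the client, which finishes backprop.
    c_opt.zero_grad(); acts.backward(acts_srv.grad); c_opt.step()
    return loss.item()

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
print(sfl_round(x, y))
```

In a full SFL system, the clients' updated `client_part` models would then be aggregated into a shared model, as the abstract describes.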
no code implementations • 28 Nov 2023 • Jiarong Yang, Yuan Liu, Fangjiong Chen, Wen Chen, Changle Li
Federated learning (FL) is a promising distributed learning framework in which clients collaboratively train a machine learning model under the coordination of a server.
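As a generic illustration of this server-coordinated training loop, the sketch below shows a FedAvg-style round; the local-update details and size-based weighting are assumptions, not necessarily this paper's algorithm:

```python
import copy
import torch

def local_update(model, data, targets, lr=0.01, epochs=1):
    """One client's local training on its private data."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        loss = torch.nn.functional.cross_entropy(model(data), targets)
        opt.zero_grad(); loss.backward(); opt.step()
    return model.state_dict()

def fedavg(global_model, client_states, weights):
    """Server step: weighted average of client models, e.g. by dataset size."""
    total = sum(weights)
    avg = {k: sum(w * s[k].float() for w, s in zip(weights, client_states)) / total
           for k in client_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```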
1 code implementation • 1 Aug 2023 • Geyang Guo, Jiarong Yang, Fengyuan Lu, Jiaxin Qin, Tianyi Tang, Wayne Xin Zhao
From an evaluation perspective, we build a benchmark to judge ancient Chinese translation quality across different scenarios and evaluate the ancient Chinese translation capabilities of various existing models.
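As one possible shape for such an evaluation, the sketch below scores model outputs against references with corpus-level BLEU via sacrebleu; the file names and metric choice are assumptions, since the paper's benchmark may use different scenarios and metrics:

```python
import sacrebleu

# Hypothetical file layout: one model output and one reference
# translation per line, aligned by line number.
with open("model_outputs.txt", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("references.txt", encoding="utf-8") as f:
    references = [line.strip() for line in f]

# Corpus-level BLEU; sacrebleu expects a list of reference streams,
# and the "zh" tokenizer suits Chinese-language targets.
bleu = sacrebleu.corpus_bleu(hypotheses, [references], tokenize="zh")
print(f"BLEU: {bleu.score:.2f}")
```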
no code implementations • 11 Dec 2022 • Jiarong Yang, Yuan Liu, Rahif Kassab
Distributed Stein Variational Gradient Descent (DSVGD) is a non-parametric distributed learning framework for federated Bayesian learning, in which multiple clients jointly train a machine learning model by exchanging a set of deterministic, interacting particles with the server.
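To make the particle-based mechanism concrete, below is a minimal sketch of a single SVGD update with an RBF kernel; this is the standard (centralized) SVGD step, not DSVGD's client-server scheduling:

```python
import numpy as np

def svgd_step(particles, grad_log_p, step=0.1, bandwidth=1.0):
    """One SVGD update: particles follow the kernelized Stein direction.

    particles: (n, d) array of particle positions.
    grad_log_p: function mapping an (n, d) array to the (n, d) scores
                grad log p evaluated at each particle.
    """
    n = particles.shape[0]
    diffs = particles[:, None, :] - particles[None, :, :]  # diffs[j, i] = x_j - x_i
    k = np.exp(-(diffs ** 2).sum(-1) / (2 * bandwidth ** 2))  # RBF kernel k(x_j, x_i)
    grad_k = -diffs * (k / bandwidth ** 2)[..., None]       # grad_{x_j} k(x_j, x_i)
    scores = grad_log_p(particles)                          # grad log p(x_j)
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) * score(x_j) + grad_{x_j} k(x_j, x_i) ]
    phi = (k[..., None] * scores[:, None, :] + grad_k).sum(axis=0) / n
    return particles + step * phi

# Example: drive particles toward a standard Gaussian (score = -x).
pts = np.random.randn(50, 2) * 3
for _ in range(200):
    pts = svgd_step(pts, lambda x: -x)
```

The kernel term pulls particles toward high-density regions, while the kernel-gradient term repels them from each other, which is what keeps the particle set non-degenerate.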