no code implementations • 30 Sep 2023 • Xiang Liu, Liangxi Liu, Feiyang Ye, Yunheng Shen, Xia Li, Linshan Jiang, Jialin Li
Efficiently aggregating trained neural networks from local clients into a global model on a server is a widely researched topic in federated learning.
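For context, server-side aggregation in federated learning is most commonly a weighted average of client parameters (FedAvg-style). The sketch below illustrates that generic baseline only, not the method proposed in this paper; the names `aggregate`, `client_states`, and `client_sizes` are hypothetical.

```python
# Minimal sketch of generic FedAvg-style server aggregation
# (weighted average of client model weights; not the paper's method).
import torch

def aggregate(client_states, client_sizes):
    """Average client state_dicts, weighted by each client's local dataset size."""
    total = sum(client_sizes)
    global_state = {}
    for key in client_states[0]:
        # Weighted sum of the same parameter tensor across clients.
        global_state[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return global_state
```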
no code implementations • 28 Sep 2023 • Jialin Li, WeiFu Fu, Yuhuan Lin, Qiang Nie, Yong Liu
Query-based object detectors have made significant advancements since the publication of DETR.
no code implementations • 30 Aug 2023 • WeiFu Fu, Qiang Nie, Jialin Li, Yuhuan Lin, Kai Wu, Yong Liu, Chengjie Wang
Instead of solely using the scarce labeled data for supervision, we propose a novel SSDA framework that incorporates both inter-domain mixing and intra-domain mixing, where inter-domain mixing mitigates the source-target domain gap and intra-domain mixing enriches the available target domain information.
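To make the two mixing directions concrete, here is a minimal mixup-style interpolation sketch, assuming standard image batches; the function and variable names (`mix_batches`, `source_x`, `target_x`, and so on) are hypothetical and this is not the paper's exact formulation.

```python
# Illustrative sketch of inter-domain and intra-domain mixing
# via standard mixup-style convex combination of image batches.
import torch

def mix_batches(x_a, x_b, alpha=0.5):
    """Blend two image batches with a Beta-sampled mixing coefficient."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return lam * x_a + (1 - lam) * x_b, lam

# Inter-domain mixing: blend source-domain and target-domain images.
# mixed_inter, lam = mix_batches(source_x, target_x)
# Intra-domain mixing: blend labeled and unlabeled target-domain images.
# mixed_intra, lam = mix_batches(target_labeled_x, target_unlabeled_x)
```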
no code implementations • 23 Aug 2023 • Donghao Zhou, Jialin Li, Jinpeng Li, Jiancheng Huang, Qiang Nie, Yong Liu, Bin-Bin Gao, Qiong Wang, Pheng-Ann Heng, Guangyong Chen
Large-scale well-annotated datasets are of great importance for training an effective object detector.
no code implementations • 1 Apr 2023 • Jialin Li, Alia Waleed, Hanan Salam
In this paper, we discuss the need for personalization in affective and personality computing (hereinafter referred to as affective computing).
no code implementations • 1 Feb 2023 • Ziji Shi, Le Jiang, Ang Wang, Jie Zhang, Xianyan Jia, Yong Li, Chencan Wu, Jialin Li, Wei Lin
However, finding a suitable model parallel schedule for an arbitrary neural network is a non-trivial task due to the exploding search space.
1 code implementation • 15 Nov 2022 • Shuaichen Chang, David Palzer, Jialin Li, Eric Fosler-Lussier, Ningchuan Xiao
Our experimental results show that V-MODEQA achieves better overall performance and robustness on MapQA than the state-of-the-art ChartQA and VQA algorithms by capturing the unique properties of map question answering.
no code implementations • 18 Apr 2019 • Hang Zhu, Zhihao Bai, Jialin Li, Ellis Michael, Dan Ports, Ion Stoica, Xin Jin
Experimental results show that Harmonia improves the throughput of these protocols by up to 10X for a replication factor of 10, providing near-linear scalability up to the limit of our testbed.
no code implementations • 25 May 2018 • Furong Huang, Jialin Li, Xuchen You
We propose a Slicing Initialized Alternating Subspace Iteration (s-ASI) method that is guaranteed to recover the top $r$ components ($\epsilon$-close) simultaneously for (a)symmetric tensors, almost surely in the noiseless case (and with high probability under bounded noise), using $O(\log(\log \frac{1}{\epsilon}))$ tensor subspace iteration steps.
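To clarify what a "tensor subspace iteration" step looks like, the sketch below runs generic orthogonal (subspace) iteration on a mode-1 unfolding of a 3-way tensor. It is only an illustration of the underlying primitive; the slicing-based initialization and the guarantees of s-ASI are not reproduced, and the function name and random initialization are assumptions.

```python
# Generic orthogonal (subspace) iteration on the mode-1 unfolding of a
# d x d x d tensor; illustrative only, not the s-ASI algorithm itself.
import numpy as np

def subspace_iteration(T, r, n_iter=50, seed=0):
    """Estimate an r-dimensional mode-1 subspace of a 3-way tensor T."""
    d = T.shape[0]
    M = T.reshape(d, -1)                               # mode-1 unfolding, shape d x d^2
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((d, r)))   # random orthonormal start
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(M @ (M.T @ Q))             # power step, then re-orthogonalize
    return Q                                           # orthonormal basis of the estimated subspace
```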