1 code implementation • 23 Nov 2024 • Datao Tang, Xiangyong Cao, Xuan Wu, Jialin Li, Jing Yao, Xueru Bai, Deyu Meng
Although existing techniques, e.g., data augmentation and semi-supervised learning, can mitigate this scarcity issue to some extent, they depend heavily on high-quality labeled data and perform poorly on rare object classes.
no code implementations • 18 Nov 2024 • Xinhai Li, Jialin Li, Ziheng Zhang, Rui Zhang, Fan Jia, Tiancai Wang, Haoqiang Fan, Kuo-Kun Tseng, Ruiping Wang
To address these limitations, we introduce RoboGSim, a real2sim2real robotic simulator powered by 3D Gaussian Splatting and a physics engine.
no code implementations • 14 Nov 2024 • Yunuo Wang, Ningning Yang, Jialin Li
Generative Adversarial Networks (GANs) have emerged as a transformative tool in low-dose computed tomography (LDCT) imaging, offering a promising way to resolve the long-standing trade-off between radiation exposure and image quality.
no code implementations • 6 Nov 2024 • Ziji Shi, Jialin Li, Yang You
Recent advances in Generative Artificial Intelligence have fueled numerous applications, particularly those involving Generative Adversarial Networks (GANs), which are essential for synthesizing realistic photos and videos.
no code implementations • 18 Oct 2024 • Chenyang Zhang, Jiayi Lin, Haibo Tong, Bingxuan Hou, Dongyu Zhang, Jialin Li, Junli Wang
Augmented datasets are feasible for instruction tuning.
no code implementations • 15 Oct 2024 • Xiang Liu, Yijun Song, Xia Li, Yifei Sun, Huiying Lan, Zemin Liu, Linshan Jiang, Jialin Li
We conduct extensive experiments on five datasets with three model structures, demonstrating that our approach significantly reduces inference latency on edge devices and achieves model size reductions of up to 28.9 times and 34.1 times, respectively, while maintaining test accuracy comparable to the original Vision Transformer.
1 code implementation • 12 Oct 2024 • Xi Jiang, Jian Li, Hanqiu Deng, Yong Liu, Bin-Bin Gao, Yifeng Zhou, Jialin Li, Chengjie Wang, Feng Zheng
In the field of industrial inspection, Multimodal Large Language Models (MLLMs) have a high potential to renew the paradigms in practical applications due to their robust language capabilities and generalization abilities.
1 code implementation • 7 Oct 2024 • Ziyu Yao, Jialin Li, Yifeng Zhou, Yong Liu, Xi Jiang, Chengjie Wang, Feng Zheng, Yuexian Zou, Lei Li
To the best of our knowledge, we are the first to propose a control framework for pre-trained autoregressive visual generation models.
no code implementations • 26 Sep 2024 • Jialin Li, Marta Zagorowska, Giulia De Pasquale, Alisa Rupenyan, John Lygeros
Evaluation on a realistic case study with gas compressors confirms that TVSafeOpt ensures safety when solving time-varying optimization problems with unknown reward and safety functions.
no code implementations • 3 Sep 2024 • Bin Fu, Qiyang Wan, Jialin Li, Ruiping Wang, Xilin Chen
Categorization, a core cognitive ability in humans that organizes objects based on common features, is essential to cognitive science as well as computer vision.
no code implementations • 29 Aug 2024 • Yiran Xu, Haoxiang Zhong, Kai Wu, Jialin Li, Yong Liu, Chengjie Wang, Shu-Tao Xia, Hongen Liao
Object detectors have shown outstanding performance on various public datasets.
no code implementations • 19 Aug 2024 • Xuechao Chen, Ying Chen, Jialin Li, Qiang Nie, Yong Liu, QiXing Huang, Yang Li
Inspired by semi-supervised learning, which leverages limited labeled data together with a large amount of unlabeled data, we propose a novel self-supervised pre-training framework that utilizes real 3D data and pseudo-3D data lifted from images by a large depth estimation model.
1 code implementation • 9 Aug 2024 • Jialin Li, Junli Wang, Junjie Hu, Ming Jiang
In this study, we introduce a benchmark dataset CUNIT for evaluating decoder-only LLMs in understanding the cultural unity of concepts.
no code implementations • 5 Jun 2024 • Qiang Nie, WeiFu Fu, Yuhuan Lin, Jialin Li, Yifeng Zhou, Yong Liu, Lei Zhu, Chengjie Wang
Two issues have to be tackled in the new IIL setting: 1) the notorious catastrophic forgetting because of no access to old data, and 2) broadening the existing decision boundary to new observations because of concept drift.
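As a rough illustration of the first issue, one common way to counter catastrophic forgetting when old data is inaccessible is a distillation-style penalty that keeps the updating model's outputs close to those of a frozen copy of the old model. The sketch below is a generic example of that idea, not the paper's actual IIL method:

```python
import numpy as np

def softmax(z, T):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Cross-entropy between the softened output distributions of the
    # frozen old model (teacher) and the updating model (student).
    # Minimizing it discourages drift from past behavior even though
    # no old training data is available.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(-(p * np.log(q + 1e-12)).sum(axis=-1).mean())

old = np.array([[2.0, 0.5, -1.0]])
new_same = np.array([[2.0, 0.5, -1.0]])
new_drift = np.array([[-1.0, 2.0, 0.5]])
# A model that matches the old outputs incurs a lower penalty
print(distillation_loss(new_same, old) < distillation_loss(new_drift, old))  # True
```

In practice this term would be added, with some weight, to the task loss on the new observations, so the decision boundary can still broaden under concept drift.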
1 code implementation • 24 May 2024 • Hongyu Wang, Jiayu Xu, Senwei Xie, Ruiping Wang, Jialin Li, Zhaojie Xie, Bin Zhang, Chuyan Xiong, Xilin Chen
In this work, we introduce M4U, a novel and challenging benchmark for assessing multi-discipline, multilingual, and multimodal understanding and reasoning.
no code implementations • 13 Apr 2024 • Munachiso Nwadike, Jialin Li, Hanan Salam
This paper presents our work on an enhanced personalisation strategy that leverages data augmentation to develop tailored models for continuous valence and arousal prediction.
1 code implementation • CVPR 2024 • Jialin Li, Qiang Nie, WeiFu Fu, Yuhuan Lin, Guangpin Tao, Yong Liu, Chengjie Wang
Deep learning models, particularly those based on transformers, often employ numerous stacked structures, which possess identical architectures and perform similar functions.
no code implementations • 17 Jan 2024 • Yao Lu, Song Bian, Lequn Chen, Yongjun He, Yulong Hui, Matthew Lentz, Beibin Li, Fei Liu, Jialin Li, Qi Liu, Rui Liu, Xiaoxuan Liu, Lin Ma, Kexin Rong, Jianguo Wang, Yingjun Wu, Yongji Wu, Huanchen Zhang, Minjia Zhang, Qizhen Zhang, Tianyi Zhou, Danyang Zhuo
In this paper, we investigate the intersection of large generative AI models and cloud-native computing architectures.
1 code implementation • 30 Sep 2023 • Xiang Liu, Liangxi Liu, Feiyang Ye, Yunheng Shen, Xia Li, Linshan Jiang, Jialin Li
Efficiently aggregating trained neural networks from local clients into a global model on a server is a widely researched topic in federated learning.
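The standard baseline for this aggregation step is federated averaging (FedAvg), which weights each client's parameters by its local dataset size. The sketch below illustrates only that baseline; it is not the aggregation scheme proposed in the paper:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    # FedAvg-style aggregation: a weighted average of the clients'
    # parameter vectors, with each client weighted by the size of
    # its local dataset.
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    stacked = np.stack(client_weights)
    return (coeffs[:, None] * stacked).sum(axis=0)

w1 = np.array([1.0, 2.0])  # client with 1 local sample
w2 = np.array([3.0, 4.0])  # client with 3 local samples
global_w = fed_avg([w1, w2], [1, 3])
print(global_w)  # [2.5 3.5]
```

Research on aggregation typically improves on this baseline, e.g. by handling heterogeneous clients or reducing communication, but the weighted average above is the common starting point.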
no code implementations • 28 Sep 2023 • Jialin Li, WeiFu Fu, Yuhuan Lin, Qiang Nie, Yong Liu
Query-based object detectors have made significant advancements since the publication of DETR.
no code implementations • 30 Aug 2023 • WeiFu Fu, Qiang Nie, Jialin Li, Yuhuan Lin, Kai Wu, Jian Li, Yabiao Wang, Yong Liu, Chengjie Wang
In this paper, we highlight the significance of exploiting the intra-domain information between the labeled target data and unlabeled target data.
1 code implementation • 23 Aug 2023 • Donghao Zhou, Jialin Li, Jinpeng Li, Jiancheng Huang, Qiang Nie, Yong Liu, Bin-Bin Gao, Qiong Wang, Pheng-Ann Heng, Guangyong Chen
Unfortunately, the resultant noisy bounding boxes could cause corrupt supervision signals and thus diminish detection performance.
no code implementations • 1 Apr 2023 • Jialin Li, Alia Waleed, Hanan Salam
In this paper, we discuss the need for personalization in affective and personality computing (hereinafter referred to as affective computing).
no code implementations • 1 Feb 2023 • Ziji Shi, Le Jiang, Ang Wang, Jie Zhang, Xianyan Jia, Yong Li, Chencan Wu, Jialin Li, Wei Lin
However, finding a suitable model parallel schedule for an arbitrary neural network is a non-trivial task due to the exploding search space.
1 code implementation • 15 Nov 2022 • Shuaichen Chang, David Palzer, Jialin Li, Eric Fosler-Lussier, Ningchuan Xiao
Our experimental results show that V-MODEQA has better overall performance and robustness on MapQA than the state-of-the-art ChartQA and VQA algorithms by capturing the unique properties in map question answering.
no code implementations • 18 Apr 2019 • Hang Zhu, Zhihao Bai, Jialin Li, Ellis Michael, Dan Ports, Ion Stoica, Xin Jin
Experimental results show that Harmonia improves the throughput of these protocols by up to 10X for a replication factor of 10, providing near-linear scalability up to the limit of our testbed.
Distributed, Parallel, and Cluster Computing
no code implementations • 25 May 2018 • Furong Huang, Jialin Li, Xuchen You
We propose a Slicing Initialized Alternating Subspace Iteration (s-ASI) method that is guaranteed to recover top $r$ components ($\epsilon$-close) simultaneously for (a)symmetric tensors almost surely under the noiseless case (with high probability for a bounded noise) using $O(\log(\log \frac{1}{\epsilon}))$ steps of tensor subspace iterations.
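For intuition, the core primitive behind such methods is orthogonal subspace iteration: repeatedly multiplying an orthonormal basis by the operator and re-orthonormalizing recovers the top-$r$ invariant subspace. The sketch below runs it on a rank-2 symmetric matrix as a stand-in for a tensor unfolding; it is a generic illustration, not the authors' s-ASI algorithm:

```python
import numpy as np

def subspace_iteration(M, r, n_iter=50):
    # Orthogonal (subspace) iteration: recovers an orthonormal basis
    # of the top-r left singular subspace of M by alternating
    # multiplication with QR re-orthonormalization.
    n = M.shape[0]
    rng = np.random.default_rng(0)
    U, _ = np.linalg.qr(rng.standard_normal((n, r)))
    for _ in range(n_iter):
        U, _ = np.linalg.qr(M @ (M.T @ U))
    return U

# Rank-2 symmetric matrix with known components a and b
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
M = 3.0 * np.outer(a, a) + 1.5 * np.outer(b, b)

U = subspace_iteration(M, 2)
proj = U @ U.T  # projector onto the recovered subspace
# The recovered subspace should contain both components
print(np.allclose(proj @ a, a), np.allclose(proj @ b, b))  # True True
```

The s-ASI method adds a slicing-based initialization and per-iteration guarantees on top of this primitive, which is what yields the $O(\log(\log \frac{1}{\epsilon}))$ iteration bound quoted above.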