1 code implementation • 10 May 2025 • Xuefeng Jiang, Jia Li, Nannan Wu, Zhiyuan Wu, Xujing Li, Sheng Sun, Gang Xu, Yuwei Wang, Qi Li, Min Liu
There have been some early attempts to tackle noisy labels in FL.
1 code implementation • 6 May 2025 • Junqi Liu, Xiaohan Lin, Jonas Bayer, Yael Dillies, Weijie Jiang, Xiaodan Liang, Roman Soletskyi, Haiming Wang, Yunzhou Xie, Beibei Xiong, Zhengfeng Yang, Jujian Zhang, Lihong Zhi, Jia Li, Zhengying Liu
CombiBench is suitable for testing IMO solving capabilities since it includes all IMO combinatorial problems since 2000 (except IMO 2004 P3, as its statement contains an image).
1 code implementation • 5 May 2025 • Hao Cheng, Zhiwei Zhao, Yichao He, Zhenzhen Hu, Jia Li, Meng Wang, Richang Hong
Audiovisual emotion recognition (AVER) aims to infer human emotions from nonverbal visual-audio (VA) cues, offering modality-complementary and language-agnostic advantages.
Ranked #1 on Video Emotion Recognition on CREMA-D
1 code implementation • 24 Apr 2025 • Kai Cui, Jia Li, Yu Liu, Xuesong Zhang, Zhenzhen Hu, Meng Wang
Besides, it introduces Long- and Short-Term Temporal Contrastive Learning (LS-TCL) to capture emotional synchronization at different temporal resolutions within modalities.
no code implementations • 19 Apr 2025 • Yunhui Liu, Jiashun Cheng, Jia Li, Fugee Tsung, Hongzhi Yin, Tieke He
Graph anomaly detection (GAD) has garnered increasing attention in recent years, yet it remains challenging due to the scarcity of abnormal nodes and the high cost of label annotations.
1 code implementation • 16 Apr 2025 • Lei Sun, Hang Guo, Bin Ren, Luc van Gool, Radu Timofte, Yawei Li, Xiangyu Kong, Hyunhee Park, Xiaoxuan Yu, Suejin Han, Hakjae Jeon, Jia Li, Hyung-Ju Chun, Donghun Ryou, Inju Ha, Bohyung Han, JingYu Ma, Zhijuan Huang, Huiyuan Fu, Hongyuan Yu, Boqi Zhang, Jiawei Shi, Heng Zhang, Huadong Ma, Deepak Kumar Tyagi, Aman Kukretti, Gajender Sharma, Sriharsha Koundinya, Asim Manna, Jun Cheng, Shan Tan, Jun Liu, Jiangwei Hao, Jianping Luo, Jie Lu, Satya Narayan Tazi, Arnim Gautam, Aditi Pawar, Aishwarya Joshi, Akshay Dudhane, Praful Hambadre, Sachin Chaudhary, Santosh Kumar Vipparthi, Subrahmanyam Murala, Jiachen Tu, Nikhil Akalwadi, Vijayalaxmi Ashok Aralikatti, Dheeraj Damodar Hegde, G Gyaneshwar Rao, Jatin Kalal, Chaitra Desai, Ramesh Ashok Tabib, Uma Mudenagudi, Zhenyuan Lin, Yubo Dong, Weikun Li, Anqi Li, Ang Gao, Weijun Yuan, Zhan Li, Ruting Deng, Yihang Chen, Yifan Deng, Zhanglu Chen, Boyang Yao, Shuling Zheng, Feng Zhang, Zhiheng Fu, Anas M. Ali, Bilel Benjdira, Wadii Boulila, Jan Seny, Pei Zhou, Jianhua Hu, K. L. Eddie Law, Jaeho Lee, M. J. Aashik Rasool, Abdur Rehman, SMA Sharif, Seongwan Kim, Alexandru Brateanu, Raul Balmez, Ciprian Orhei, Cosmin Ancuti, Zeyu Xiao, Zhuoyuan Li, Ziqi Wang, Yanyan Wei, Fei Wang, Kun Li, Shengeng Tang, Yunkai Zhang, Weirun Zhou, Haoxuan Lu
This paper presents an overview of the NTIRE 2025 Image Denoising Challenge (σ = 50), highlighting the proposed methodologies and corresponding results.
1 code implementation • 15 Apr 2025 • Haiming Wang, Mert Unsal, Xiaohan Lin, Mantas Baksys, Junqi Liu, Marco Dos Santos, Flood Sung, Marina Vinyes, ZhenZhe Ying, Zekai Zhu, Jianqiao Lu, Hugues de Saxcé, Bolton Bailey, Chendong Song, Chenjun Xiao, Dehao Zhang, Ebony Zhang, Frederick Pu, Han Zhu, Jiawei Liu, Jonas Bayer, Julien Michel, Longhui Yu, Léo Dreyfus-Schmidt, Lewis Tunstall, Luigi Pagani, Moreira Machado, Pauline Bourigault, Ran Wang, Stanislas Polu, Thibaut Barroyer, Wen-Ding Li, Yazhe Niu, Yann Fleureau, Yangyang Hu, Zhouliang Yu, Zihan Wang, Zhilin Yang, Zhengying Liu, Jia Li
We introduce Kimina-Prover Preview, a large language model that pioneers a novel reasoning-driven exploration paradigm for formal theorem proving, as showcased in this preview release.
Ranked #1 on Automated Theorem Proving on miniF2F-test (using extra training data)
1 code implementation • 2 Apr 2025 • Zhengwei Tao, Zhi Jin, Bincheng Li, Xiaoying Bai, Haiyan Zhao, Chengfeng Dou, Xiancai Chen, Jia Li, Linyu Li, Chongyang Tao
In constructing this benchmark, we first collected recent trend forecasting questions and then filtered the data using CIL, resulting in an inferable benchmark for event prediction.
no code implementations • 26 Mar 2025 • Nan Gao, Yihua Bao, Dongdong Weng, Jiayi Zhao, Jia Li, Yan Zhou, Pengfei Wan, Di Zhang
Co-speech gesture generation enhances human-computer interaction realism through speech-synchronized gesture synthesis.
no code implementations • 19 Mar 2025 • Jiexia Ye, Weiqi Zhang, Ziyue Li, Jia Li, Fugee Tsung
Medical time series (MedTS) classification is crucial for improved diagnosis in healthcare, and yet it is challenging due to the varying granularity of patterns, intricate inter-channel correlation, information redundancy, and label scarcity.
no code implementations • 17 Mar 2025 • Fengyun Zhang, Jia Li, Xiaoqing Zhang, Shukai Duan, Shuang-Hua Yang
This paper presents a high-precision positioning system that integrates ultra-wideband (UWB) time difference of arrival (TDoA) measurements, inertial measurement unit (IMU) data, and ultrasonic sensors through factor graph optimization.
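As a rough illustration of the fusion idea only (not the paper's actual factor graph, which also folds in ultrasonic and inertial factors over time), the toy sketch below jointly fits UWB TDoA measurements and an IMU position prior with nonlinear least squares; the anchor layout, noise-free measurements, and prior weight are all invented for the example.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy 2-D stand-in for the fusion: solve for one position that jointly fits
# UWB TDoA measurements and an IMU position prior via nonlinear least squares.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
pairs = [(0, 1), (0, 2), (0, 3)]
tdoa = [np.linalg.norm(true_pos - anchors[i]) - np.linalg.norm(true_pos - anchors[j])
        for i, j in pairs]                       # noiseless measurements for clarity
imu_prior, imu_sigma = np.array([3.3, 3.8]), 0.5  # hypothetical IMU estimate

def residuals(p):
    r = [np.linalg.norm(p - anchors[i]) - np.linalg.norm(p - anchors[j]) - m
         for (i, j), m in zip(pairs, tdoa)]      # TDoA factors
    r.extend((p - imu_prior) / imu_sigma)        # prior factor from the IMU
    return r

print(least_squares(residuals, x0=imu_prior).x)  # converges close to [3, 4]
```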
1 code implementation • 14 Mar 2025 • Haihong Zhao, Chenyi Zi, Aochuan Chen, Jia Li
Graph learning plays a vital role in mining and analyzing complex relationships involved in graph data, which is widely used in many real-world applications like transaction networks and communication networks.
no code implementations • 3 Mar 2025 • Xiaobin Hong, Jiawen Zhang, Wenzhong Li, Sanglu Lu, Jia Li
The rise of foundation models has revolutionized natural language processing and computer vision, yet best practices for applying them to time series forecasting remain underexplored.
1 code implementation • 3 Mar 2025 • Yifan Niu, Ziqi Gao, Tingyang Xu, Yang Liu, Yatao Bian, Yu Rong, Junzhou Huang, Jia Li
In order to decode the complex knowledge of multiple properties in the inversion path, we propose a gradient-based Pareto search method to balance conflicting properties and generate Pareto optimal molecules.
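For readers unfamiliar with Pareto optimality over conflicting molecular properties, the small filter below extracts the Pareto-optimal candidates from a set of property vectors. It does not reproduce the paper's gradient-based search, and the property values in the example are invented.

```python
import numpy as np

def pareto_front(scores: np.ndarray) -> np.ndarray:
    """Boolean mask of Pareto-optimal rows, assuming every column is a
    property to maximize. Only checks optimality of given candidates;
    generating them (as the paper does) is a separate problem."""
    n = scores.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        dominated = np.all(scores >= scores[i], axis=1) & np.any(scores > scores[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

props = np.array([[0.9, 0.2], [0.7, 0.7], [0.3, 0.9], [0.5, 0.5]])  # e.g. QED vs. affinity
print(pareto_front(props))  # [ True  True  True False]
```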
1 code implementation • 2 Mar 2025 • Guanlue Li, Chenran Jiang, Ziqi Gao, Yu Liu, Chenyang Liu, Jiean Chen, Yong Huang, Jia Li
Effective generation of molecular structures, or new chemical entities, that bind to target proteins is crucial for lead identification and optimization in drug discovery.
1 code implementation • 27 Feb 2025 • Yang Liu, Zinan Zheng, Jiashun Cheng, Fugee Tsung, Deli Zhao, Yu Rong, Jia Li
Accurate Subseasonal-to-Seasonal (S2S) climate forecasting is pivotal for decision-making including agriculture planning and disaster preparedness but is known to be challenging due to its chaotic nature.
1 code implementation • 26 Feb 2025 • Ruifeng Tan, Weixiang Hong, Jiayue Tang, Xibin Lu, Ruijun Ma, Xiang Zheng, Jia Li, Jiaqiang Huang, Tong-Yi Zhang
Notably, BatteryLife is the first to release battery life datasets of zinc-ion batteries, sodium-ion batteries, and industry-tested large-capacity lithium-ion batteries.
no code implementations • 24 Feb 2025 • Qianhui Zhao, Li Zhang, Fang Liu, Xiaoli Lian, Qiaoyuanhe Meng, Ziqian Jiao, Zetong Zhou, Borui Zhang, Runlin Guo, Jia Li
Experimental results show that CodeSwift can reach up to 2.53x and 2.54x speedup compared to autoregressive decoding in repository-level and standalone code generation tasks, respectively, outperforming state-of-the-art inference acceleration approaches by up to 88%.
1 code implementation • 18 Feb 2025 • Yuhan Li, Xinni Zhang, Linhao Luo, Heng Chang, Yuxiang Ren, Irwin King, Jia Li
Moreover, existing methods often struggle with the integration of extracted CF information with LLMs due to its implicit representation and the modality gap between graph structures and natural language explanations.
1 code implementation • 18 Feb 2025 • Ruotian Ma, Peisong Wang, Cheng Liu, Xingyan Liu, Jiaqi Chen, Bang Zhang, Xin Zhou, Nan Du, Jia Li
In this work, we introduce S$^2$R, an efficient framework that enhances LLM reasoning by teaching models to self-verify and self-correct during inference.
1 code implementation • 11 Feb 2025 • Yong Lin, Shange Tang, Bohan Lyu, Jiayun Wu, Hongzhou Lin, Kaiyu Yang, Jia Li, Mengzhou Xia, Danqi Chen, Sanjeev Arora, Chi Jin
On the miniF2F benchmark, it achieves a 57.6% success rate (Pass@32), exceeding the previous best open-source model by 7.6%.
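Pass@32 is presumably computed with the standard unbiased pass@k estimator used in the code- and proof-generation literature; a minimal sketch of that estimator (not taken from the paper itself) is below, with made-up counts.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples drawn
    from n generations (of which c are correct) succeeds."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical example: 64 proof attempts per problem, 5 verified correct.
print(pass_at_k(n=64, c=5, k=32))
```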
no code implementations • 1 Feb 2025 • Jia Li, Wenjie Zhao, Ziru Huang, Yunhui Guo, Yapeng Tian
Our study reveals a fundamental bias in current methods: they tend to generate segmentation masks based predominantly on visual salience, irrespective of the audio context.
no code implementations • 29 Jan 2025 • Sebastian Pena, Lin Lin, Jia Li
Many existing algorithms cannot recognize new cell types present in only one of the two samples when establishing a correspondence between clusters obtained from two samples.
no code implementations • 26 Jan 2025 • Nan Gao, Jia Li, Huaibo Huang, Ke Shang, Ran He
Blind face restoration (BFR) is a highly challenging problem due to the uncertainty of data degradation patterns.
1 code implementation • 26 Jan 2025 • Yadong Li, Jun Liu, Tao Zhang, Song Chen, Tianpeng Li, zehuan li, Lijun Liu, Lingfeng Ming, Guosheng Dong, Da Pan, Chong Li, Yuanbo Fang, Dongdong Kuang, Mingrui Wang, Chenglin Zhu, Youwei Zhang, Hongyu Guo, Fengyu Zhang, Yuran Wang, Bowen Ding, Wei Song, Xu Li, Yuqi Huo, Zheng Liang, Shusen Zhang, Xin Wu, Shuai Zhao, Linchu Xiong, Yozhen Wu, Jiahui Ye, Wenhao Lu, Bowen Li, Yan Zhang, Yaqi Zhou, Xin Chen, Lei Su, Hongda Zhang, Fuzhong Chen, Xuezhen Dong, Na Nie, Zhiying Wu, Bin Xiao, Ting Li, Shunya Dang, Ping Zhang, Yijia Sun, Jincheng Wu, Jinjie Yang, Xionghai Lin, Zhi Ma, Kegeng Wu, Jia Li, Aiyuan Yang, Hui Liu, Jianqiang Zhang, Xiaoxi Chen, Guangwei Ai, Wentao Zhang, Yicong Chen, Xiaoqin Huang, Kun Li, Wenjing Luo, Yifei Duan, Lingling Zhu, Ran Xiao, Zhe Su, Jiani Pu, Dian Wang, Xu Jia, Tianyu Zhang, Mengyu Ai, Mang Wang, Yujing Qiao, Lei Zhang, Yanjun Shen, Fan Yang, Miao Zhen, Yijie Zhou, Mingyang Chen, Fei Li, Chenzheng Zhu, Keer Lu, Yaqi Zhao, Hao Liang, Youquan Li, Yanzhao Qin, Linzhuang Sun, Jianhua Xu, Haoze Sun, MingAn Lin, Zenan Zhou, WeiPeng Chen
We introduce Baichuan-Omni-1.5, an omni-modal model that not only has omni-modal understanding capabilities but also provides end-to-end audio generation capabilities.
no code implementations • 10 Jan 2025 • Yifan Zhao, Jia Li, Zeyin Song, Yonghong Tian
Depicting novel classes with language descriptions by observing few-shot samples is inherent in human-learning systems.
1 code implementation • 24 Dec 2024 • Xuefeng Jiang, Lvhua Wu, Sheng Sun, Jia Li, Jingjing Xue, Yuwei Wang, Tingting Wu, Min Liu
However, the effectiveness of LLMs in detecting code vulnerabilities is largely under-explored.
no code implementations • 21 Dec 2024 • Shengkun Yang, Zhichang Guo, Jia Li, Fanghui Song, Wenjuan Yao
We employ the second order SAV algorithm to further speed up the calculation while maintaining accuracy.
no code implementations • 19 Dec 2024 • Lecheng Wang, Xianjie Shi, Ge Li, Jia Li, Yihong Dong, Xuanming Zhang, Wenpin Jiao, Hong Mei
We present a new finding: the performance of LMs gradually declines when trained on recursively generated text until they perform no better than a randomly initialized LM.
1 code implementation • 9 Dec 2024 • Xuesong Zhang, Yunbo Xu, Jia Li, Zhenzhen Hu, Richang Hong
SUSA includes a Textual Semantic Understanding (TSU) module, which narrows the modality gap between instructions and environments by generating and associating descriptions of environmental landmarks in the agent's immediate surroundings.
Ranked #1 on Visual Navigation on R2R
1 code implementation • 3 Dec 2024 • Qisen Wang, Yifan Zhao, Jiawei Ma, Jia Li
However, as an external prior that can directly provide visual supervision, the diffusion model has consistently underperformed in sparse-view 3D reconstruction with Score Distillation Sampling (SDS): sparse views carry far less information than text, leading to optimization challenges caused by mode deviation.
no code implementations • 24 Nov 2024 • Yi Ran, Zhichang Guo, Jia Li, Yao Li, Martin Burger, Boying Wu
Adversarial attacks can be used as a criterion for judging the adaptability of neural networks to real data, since adversarial attacks can find the most extreme perturbations that make neural networks ineffective.
no code implementations • 21 Nov 2024 • Botao Wang, Jia Li, Heng Chang, Keli Zhang, Fugee Tsung
We then present an analysis of decomposing the optimization target into a consistency penalty and a structure modification based on cause-effect relations.
Ranked #50 on Node Classification on Squirrel
1 code implementation • 10 Nov 2024 • Yiqing Lin, Jianheng Tang, Chenyi Zi, H. Vicky Zhao, Yuan YAO, Jia Li
Existing methods generally focus on a single graph object type (node, edge, graph, etc.)
1 code implementation • 9 Nov 2024 • XiaoJun Wu, Junxi Liu, Huanyi Su, Zhouchi Lin, Yiyan Qi, Chengjin Xu, Jiajun Su, Jiajie Zhong, Fuwei Wang, Saizhuo Wang, Fengrui Hua, Jia Li, Jian Guo
As large language models become increasingly prevalent in the financial sector, there is a pressing need for a standardized method to comprehensively assess their performance.
1 code implementation • 4 Nov 2024 • Jiawen Zhang, Shun Zheng, Xumeng Wen, Xiaofang Zhou, Jiang Bian, Jia Li
Numerous industrial sectors necessitate models capable of providing robust forecasts across various horizons.
1 code implementation • 2 Nov 2024 • Mingze Gong, Lei Chen, Jia Li
Accurate forecasting of spatiotemporal data remains challenging due to complex spatial dependencies and temporal dynamics.
no code implementations • 30 Oct 2024 • Jia Li, Ge Li, Xuanming Zhang, YunFei Zhao, Yihong Dong, Zhi Jin, Binhua Li, Fei Huang, Yongbin Li
These evaluations help practitioners select superior LLMs in specific domains and discover the shortcomings of existing LLMs.
no code implementations • 24 Oct 2024 • Jiashun Cheng, Zinan Zheng, Yang Liu, Jianheng Tang, Hongwei Wang, Yu Rong, Jia Li, Fugee Tsung
Graph Anomaly Detection (GAD) is a challenging and practical research topic where Graph Neural Networks (GNNs) have recently shown promising results.
1 code implementation • 24 Oct 2024 • Qifan Zhang, Xiaobin Hong, Jianheng Tang, Nuo Chen, Yuhan Li, Wenzhong Li, Jing Tang, Jia Li
Furthermore, GCoder efficiently manages large-scale graphs with millions of nodes and diverse input formats, overcoming the limitations of previous models focused on the reasoning steps paradigm.
1 code implementation • 17 Oct 2024 • Siyuan Jiang, Jia Li, He Zong, Huanyu Liu, Hao Zhu, Shukai Hu, Erlu Li, Jiazheng Ding, Yu Han, Wei Ning, Gen Wang, Yihong Dong, Kechi Zhang, Ge Li
In this paper, we propose a lightweight and effective LLM for code completion named aiXcoder-7B.
no code implementations • 17 Oct 2024 • Kangkang Lu, Yanhua Yu, Zhiyong Huang, Jia Li, Yuling Wang, Meiyu Liang, Xiting Qin, Yimeng Ren, Tat-Seng Chua, Xidian Wang
Specifically, we propose a Heterogeneous Heterophilic Spectral Graph Neural Network (H2SGNN), which employs a dual-module approach: local independent filtering and global hybrid filtering.
1 code implementation • 11 Oct 2024 • Ziming Yu, Pan Zhou, Sike Wang, Jia Li, Hua Huang
Fine-tuning Large Language Models (LLMs) has proven effective for a variety of downstream tasks.
1 code implementation • 11 Oct 2024 • Jia Li, Yangchen Yu, Yin Chen, Yu Zhang, Peng Jia, Yunbo Xu, Ziqiang Li, Meng Wang, Richang Hong
Engagement estimation plays a crucial role in understanding human social behaviors, attracting increasing research interests in fields such as affective computing and human-computer interaction.
1 code implementation • 9 Oct 2024 • Jian Xiao, Zhenzhen Hu, Jia Li, Richang Hong
By replacing a single text query with a series of text proxies, TV-ProxyNet not only broadens the query scope but also achieves a more precise expansion.
no code implementations • 4 Oct 2024 • Jia Li, Ge Li, Lecheng Wang, Hao Zhu, Zhi Jin
In this paper, we propose a self-reflection approach to generating ERs of code.
no code implementations • 4 Oct 2024 • Jia Li, Yuqi Zhu, Yongmin Li, Ge Li, Zhi Jin
HonestCoder selectively shows the generated programs to developers based on LLMs' confidence.
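The abstract does not spell out how confidence is measured, so the sketch below uses a simple stand-in: the geometric mean of per-token probabilities, thresholded before a program is surfaced to the developer. Function names, the threshold, and the example completions are hypothetical.

```python
import math
from typing import List

def mean_token_confidence(token_logprobs: List[float]) -> float:
    """Geometric-mean token probability of a generated program (one simple
    notion of model confidence; the paper's actual estimator may differ)."""
    return math.exp(sum(token_logprobs) / max(len(token_logprobs), 1))

def select_programs(programs, logprobs, threshold=0.6):
    """Show a program to the developer only when confidence clears the bar."""
    return [p for p, lp in zip(programs, logprobs)
            if mean_token_confidence(lp) >= threshold]

# Hypothetical usage: two candidate completions with per-token log-probs.
progs = ["def add(a, b): return a + b", "def add(a, b): return a - b"]
lps = [[-0.05, -0.1, -0.02], [-1.2, -0.9, -1.5]]
print(select_programs(progs, lps))
```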
2 code implementations • 3 Oct 2024 • Yihong Dong, Ge Li, Yongding Tao, Xue Jiang, Kechi Zhang, Jia Li, Jinliang Deng, Jing Su, Jun Zhang, Jingjing Xu
Despite the remarkable successes of general-purpose neural networks, such as MLPs and Transformers, we find that they exhibit notable shortcomings in modeling and reasoning about periodic phenomena, achieving only marginal performance within the training domain and failing to generalize effectively to out-of-domain (OOD) scenarios.
no code implementations • 29 Sep 2024 • Arda Akman, Peyman Tehrani, Pablo Oliver, Marcin Hoffmann, Michael Jones, Jia Li
This paper discusses the use case of energy saving and traffic steering in O-RAN, the mechanism of multi-vendor interoperability needed to make it work, and depicts its test methodology.
no code implementations • 20 Sep 2024 • Nuo Chen, Ning Wu, Jianhui Chang, Jia Li
The module creates diverse equations, which the Problem-Crafter agent then transforms into math word problems.
no code implementations • 19 Sep 2024 • Chenyu Li, Shiming Ge, Daichi Zhang, Jia Li
Many real-world applications today, such as video surveillance and urban governance, need to address the recognition of masked faces, where content replacement by diverse masks often brings incomplete appearance and ambiguous representation, leading to a sharp drop in accuracy.
no code implementations • 18 Sep 2024 • Shiming Ge, Shengwei Zhao, Chenyu Li, Yu Zhang, Jia Li
Face recognition in the wild is now advancing towards light-weight models, fast inference speed and resolution-adapted capability.
1 code implementation • 10 Sep 2024 • Yin Chen, Jia Li, Yu Zhang, Zhenzhen Hu, Shiguang Shan, Meng Wang, Richang Hong
Dynamic facial expression recognition (DFER) infers emotions from the temporal evolution of expressions, unlike static facial expression recognition (SFER), which relies solely on a single snapshot.
Ranked #3 on Dynamic Facial Expression Recognition on DFEW
no code implementations • 9 Sep 2024 • Xuesong Zhang, Jia Li, Yunbo Xu, Zhenzhen Hu, Richang Hong
Autonomous navigation for an embodied agent guided by natural language instructions remains a formidable challenge in vision-and-language navigation (VLN).
1 code implementation • 8 Aug 2024 • Xuefeng Jiang, Sheng Sun, Jia Li, Jingjing Xue, Runhan Li, Zhiyuan Wu, Gang Xu, Yuwei Wang, Min Liu
Intuitively, the performance degradation is dominated by clients with higher noise rates since their trained models contain more misinformation from data, thus it is necessary to devise an effective optimization scheme to mitigate the negative impacts of these noisy clients.
no code implementations • 3 Aug 2024 • Yunshan Qi, Jia Li, Yifan Zhao, Yu Zhang, Lin Zhu
To effectively introduce event streams into the neural volumetric representation learning process, we propose an event-enhanced blur rendering loss and an event rendering loss, which guide the network via modeling the real blur process and event generation process, respectively.
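For context, the sketch below implements the standard idealized event-generation model (a pixel fires when its log intensity drifts from a reference level by more than a contrast threshold), which is the kind of process the proposed event rendering loss models; it is not the paper's loss itself, and the threshold value is arbitrary.

```python
import numpy as np

def simulate_events(frames: np.ndarray, threshold: float = 0.2):
    """Toy event-generation model: emit an event whenever a pixel's log
    intensity differs from its last reference level by more than the
    contrast threshold (at most one event per pixel per frame here)."""
    log_frames = np.log(frames.astype(np.float64) + 1e-6)
    ref = log_frames[0].copy()
    events = []  # (frame_index, y, x, polarity)
    for t in range(1, len(log_frames)):
        diff = log_frames[t] - ref
        ys, xs = np.where(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            events.append((t, y, x, int(np.sign(diff[y, x]))))
            ref[y, x] = log_frames[t, y, x]  # reset reference at fired pixels
    return events

frames = np.random.rand(5, 8, 8)  # 5 synthetic grayscale frames
print(len(simulate_events(frames)))
```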
no code implementations • 3 Aug 2024 • Jung In Park, Mahyar Abbasian, Iman Azimi, Dawn T. Bounds, Angela Jun, Jaesu Han, Robert M. McCarron, Jessica Borelli, Parmida Safavi, Sanaz Mirbaha, Jia Li, Mona Mahmoudi, Carmen Wiedenhoeft, Amir M. Rahmani
Conclusion: The study validated an evaluation framework for mental health chatbots, proving its effectiveness in improving safety and reliability.
no code implementations • 2 Aug 2024 • Changqun Xia, Chenxi Xie, Zhentao He, Tianshu Yu, Jia Li
To compensate for the lack of HRSOD datasets, we thoughtfully collect a large-scale high-resolution salient object detection dataset, called UHRSD, containing 5,920 images from real-world complex scenarios at 4K-8K resolutions.
no code implementations • 31 Jul 2024 • Di Chen, Jia Li, H. Michael Zhang
This study provides the first empirical understanding of collective cooperativeness in human-driven mixed traffic and points to new possibilities to manage mixed autonomy traffic systems.
no code implementations • 27 Jul 2024 • Aochuan Chen, Jiashun Cheng, Zijing Liu, Ziqi Gao, Fugee Tsung, Yu Li, Jia Li
Low-Rank Adaptation (LoRA) has gained popularity for fine-tuning large foundation models, leveraging low-rank matrices $\mathbf{A}$ and $\mathbf{B}$ to represent weight changes (i.e., $\Delta \mathbf{W} = \mathbf{B} \mathbf{A}$).
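As background for readers, a minimal LoRA-style linear layer is sketched below: the pretrained weight stays frozen and only the low-rank factors A and B are trained, so the effective update is ΔW = BA. This shows plain LoRA, not the variant proposed in the paper; the rank and scaling values are illustrative.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA layer: the frozen weight W is updated only through the
    low-rank product B @ A, i.e. the effective weight is W + (alpha/r) * BA."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)           # frozen pretrained weight
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

layer = LoRALinear(64, 32)
print(layer(torch.randn(4, 64)).shape)  # torch.Size([4, 32])
```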
1 code implementation • 16 Jul 2024 • Nuo Chen, Yan Wang, Yang Deng, Jia Li
This survey explores the burgeoning field of role-playing with language models, focusing on their development from early persona-based models to advanced character-driven simulations facilitated by Large Language Models (LLMs).
1 code implementation • 10 Jul 2024 • Yuhan Li, Peisong Wang, Xiao Zhu, Aochuan Chen, Haiyun Jiang, Deng Cai, Victor Wai Kin Chan, Jia Li
To bridge this gap, we introduce GLBench, the first comprehensive benchmark for evaluating GraphLLM methods in both supervised and zero-shot scenarios.
1 code implementation • 29 Jun 2024 • Jianheng Tang, Qifan Zhang, Yuhan Li, Jia Li
The "arms race" of Large Language Models (LLMs) demands novel, challenging, and diverse benchmarks to faithfully examine their progresses.
1 code implementation • 24 Jun 2024 • Zinan Zheng, Yang Liu, Jia Li, Jianhua Yao, Yu Rong
Moreover, we show that DEGNN is data efficient, learning with less data, and can generalize across scenarios such as unobserved orientation.
no code implementations • 20 Jun 2024 • Yunshan Qi, Lin Zhu, Yifan Zhao, Nan Bao, Jia Li
Neural Radiance Fields (NeRF) achieves impressive 3D representation learning and novel view synthesis results with high-quality multi-view images as input.
no code implementations • 16 Jun 2024 • Evgenii Kuriabov, Jia Li
Explainable machine learning (XML) has emerged as a major challenge in artificial intelligence (AI).
1 code implementation • 8 Jun 2024 • Chenyi Zi, Haihong Zhao, Xiangguo Sun, Yiqing Lin, Hong Cheng, Jia Li
Artificial general intelligence on graphs has shown significant advancements across various applications, yet the traditional 'Pre-train & Fine-tune' paradigm faces inefficiencies and negative transfer issues, particularly in complex and few-shot settings.
no code implementations • 7 Jun 2024 • Jiexia Ye, Weiqi Zhang, Ziyue Li, Jia Li, Meng Zhao, Fugee Tsung
The recent rapid advancements in language models (LMs) have garnered attention in medical time series-text multimodal learning.
1 code implementation • 30 May 2024 • Jia Li, Ge Li, YunFei Zhao, Yongmin Li, Huanyu Liu, Hao Zhu, Lecheng Wang, Kaibo Liu, Zheng Fang, Lanshen Wang, Jiazheng Ding, Xuanming Zhang, Yuqi Zhu, Yihong Dong, Zhi Jin, Binhua Li, Fei Huang, Yongbin Li
Our experiments reveal these LLMs' coding abilities in real-world code repositories.
no code implementations • 30 May 2024 • Ke Yi, Yuhui Xu, Heng Chang, Chen Tang, Yuan Meng, Tong Zhang, Jia Li
Large Language Models (LLMs) have advanced rapidly but face significant memory demands.
no code implementations • 30 May 2024 • Jia Li, Lijie Hu, Zhixian He, Jingfeng Zhang, Tianhang Zheng, Di Wang
With the advancement of image-to-image diffusion models guided by text, significant progress has been made in image editing.
1 code implementation • 28 May 2024 • Sike Wang, Pan Zhou, Jia Li, Hua Huang
In this paper, we propose the first 4-bit second-order optimizers, exemplified by 4-bit Shampoo, maintaining performance similar to that of 32-bit ones.
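The core trade-off being managed is compressing large second-order optimizer states to 4 bits without losing too much precision. The toy sketch below shows naive symmetric linear 4-bit quantization of a preconditioner-like matrix; 4-bit Shampoo itself uses a more careful scheme, so treat this only as an illustration of the memory/accuracy tension.

```python
import torch

def quantize_4bit(x: torch.Tensor):
    """Toy symmetric linear 4-bit quantization of an optimizer-state matrix
    (int4 symmetric range [-8, 7]); not the paper's block-wise scheme."""
    scale = x.abs().max() / 7.0 + 1e-12
    q = torch.clamp(torch.round(x / scale), -8, 7).to(torch.int8)
    return q, scale

def dequantize_4bit(q: torch.Tensor, scale: torch.Tensor):
    return q.float() * scale

state = torch.randn(256, 256)                 # e.g. a preconditioner factor
q, s = quantize_4bit(state)
err = (dequantize_4bit(q, s) - state).abs().mean()
print(float(err))                             # average reconstruction error
```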
no code implementations • 24 May 2024 • Jia Li, Lin Lin
Central to our investigation is dimension reduction within the Wasserstein metric space to enhance classification accuracy.
1 code implementation • 5 May 2024 • Ziqi Gao, Qichao Wang, Aochuan Chen, Zijing Liu, Bingzhe Wu, Liang Chen, Jia Li
Low-rank adaptation (LoRA) has recently gained much interest in fine-tuning foundation models.
1 code implementation • 3 May 2024 • Jiexia Ye, Weiqi Zhang, Ke Yi, Yongzi Yu, Ziyue Li, Jia Li, Fugee Tsung
There are two main research lines, namely pre-training foundation models from scratch for time series and adapting large language foundation models for time series.
1 code implementation • 26 Apr 2024 • Zhengwei Tao, Zhi Jin, Yifan Zhang, Xiancai Chen, Xiaoying Bai, Yue Fang, Haiyan Zhao, Jia Li, Chongyang Tao
It requires event schema knowledge to perform global reasoning and needs to deal with the diversity of the inter-event relations and the reasoning paradigms.
1 code implementation • 24 Apr 2024 • Marcos V. Conde, Florin-Alexandru Vasluianu, Radu Timofte, Jianxing Zhang, Jia Li, Fan Wang, Xiaopeng Li, Zikun Liu, Hyunhee Park, Sejun Song, Changho Kim, Zhijuan Huang, Hongyuan Yu, Cheng Wan, Wending Xiang, Jiamin Lin, Hang Zhong, Qiaosong Zhang, Yue Sun, Xuanwu Yin, Kunlong Zuo, Senyan Xu, Siyuan Jiang, Zhijing Sun, Jiaying Zhu, Liangyan Li, Ke Chen, Yunzhe Li, Yimo Ning, Guanhua Zhao, Jun Chen, Jinyang Yu, Kele Xu, Qisheng Xu, Yong Dou
This paper reviews the NTIRE 2024 RAW Image Super-Resolution Challenge, highlighting the proposed solutions and results.
1 code implementation • 23 Apr 2024 • Zhen Yang, Fang Liu, Zhongxing Yu, Jacky Wai Keung, Jia Li, Shuo Liu, Yifan Hong, Xiaoxue Ma, Zhi Jin, Ge Li
This paper investigates diverse LLMs and learning-based transpilers for automated code translation tasks, finding that: although certain LLMs have outperformed current transpilers, they still have some accuracy issues, where most of the failures are induced by a lack of comprehension of source programs, missing clear instructions on I/O types in translation, and ignoring discrepancies between source and target programs.
1 code implementation • 31 Mar 2024 • Jia Li, Ge Li, Xuanming Zhang, Yihong Dong, Zhi Jin
Existing benchmarks demonstrate poor alignment with real-world code repositories and are insufficient to evaluate the coding abilities of LLMs.
no code implementations • 15 Mar 2024 • Nan Gao, Jia Li, Huaibo Huang, Zhi Zeng, Ke Shang, Shuwu Zhang, Ran He
Experimental results demonstrate the superiority of DiffMAC over state-of-the-art methods, with a high degree of generalization in real-world and heterogeneous settings.
1 code implementation • 14 Mar 2024 • Mingyuan Sun, Donghao Zhang, ZongYuan Ge, Jiaxu Wang, Jia Li, Zheng Fang, Renjing Xu
Based on this, we propose EventRPG, which leverages relevance propagation on the spiking neural network for more efficient augmentation.
Ranked #1 on Action Recognition on SL-Animals
no code implementations • 11 Mar 2024 • Xiangguo Sun, Hong Cheng, Jia Li, Bo Liu, Jihong Guan
This paper is an extended abstract of our original work published in KDD23, where we won the best research paper award (Xiangguo Sun, Hong Cheng, Jia Li, Bo Liu, and Jihong Guan.
no code implementations • 11 Mar 2024 • Ziqi Gao, Tao Feng, Jiaxuan You, Chenyi Zi, Yan Zhou, Chen Zhang, Jia Li
In this work, by taking each chain as a node and assembly actions as edges, we show that an acyclic undirected connected graph can be used to predict the structure of multi-chain protein complexes (a.k.a. protein complex modelling, PCM).
4 code implementations • 29 Feb 2024 • Anton Lozhkov, Raymond Li, Loubna Ben allal, Federico Cassano, Joel Lamy-Poirier, Nouamane Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, Tianyang Liu, Max Tian, Denis Kocetkov, Arthur Zucker, Younes Belkada, Zijian Wang, Qian Liu, Dmitry Abulkhanov, Indraneil Paul, Zhuang Li, Wen-Ding Li, Megan Risdal, Jia Li, Jian Zhu, Terry Yue Zhuo, Evgenii Zheltonozhskii, Nii Osae Osae Dade, Wenhao Yu, Lucas Krauß, Naman jain, Yixuan Su, Xuanli He, Manan Dey, Edoardo Abati, Yekun Chai, Niklas Muennighoff, Xiangru Tang, Muhtasham Oblokulov, Christopher Akiki, Marc Marone, Chenghao Mou, Mayank Mishra, Alex Gu, Binyuan Hui, Tri Dao, Armel Zebaze, Olivier Dehaene, Nicolas Patry, Canwen Xu, Julian McAuley, Han Hu, Torsten Scholak, Sebastien Paquet, Jennifer Robinson, Carolyn Jane Anderson, Nicolas Chapados, Mostofa Patwary, Nima Tajbakhsh, Yacine Jernite, Carlos Muñoz Ferrandis, Lingming Zhang, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, Harm de Vries
Our large model, StarCoder2-15B, significantly outperforms other models of comparable size.
Ranked #35 on Code Generation on MBPP
1 code implementation • 25 Feb 2024 • Nuo Chen, Yuhan Li, Jianheng Tang, Jia Li
Large language models (LLMs) have achieved impressive success across several fields, but their proficiency in understanding and resolving complex graph problems is less explored.
1 code implementation • 19 Feb 2024 • Nuo Chen, Hongguang Li, Juhua Huang, Baoyuan Wang, Jia Li
Existing retrieval-based methods have made significant strides in maintaining long-term conversations.
1 code implementation • 17 Feb 2024 • Yuhan Li, Peisong Wang, ZHIXUN LI, Jeffrey Xu Yu, Jia Li
The results underscore the effectiveness of our model in achieving significant cross-dataset zero-shot transferability, opening pathways for the development of graph foundation models.
3 code implementations • 15 Feb 2024 • Haihong Zhao, Aochuan Chen, Xiangguo Sun, Hong Cheng, Jia Li
In response to this challenge, we propose a novel approach called Graph COordinators for PrEtraining (GCOPE), that harnesses the underlying commonalities across diverse graph datasets to enhance few-shot learning.
no code implementations • 6 Feb 2024 • Haihong Zhao, Chenyi Zi, Yang Liu, Chen Zhang, Yan Zhou, Jia Li
In this paper, we introduce a novel framework Knowledge-Data Alignment (KDAlign) to integrate rule knowledge, typically summarized by human experts, to supplement the limited labeled data.
no code implementations • 12 Jan 2024 • Jia Li, Ge Li, YunFei Zhao, Yongmin Li, Zhi Jin, Hao Zhu, Huanyu Liu, Kaibo Liu, Lecheng Wang, Zheng Fang, Lanshen Wang, Jiazheng Ding, Xuanming Zhang, Yihong Dong, Yuqi Zhu, Bin Gu, Mengfei Yang
Compared to previous benchmarks, DevEval aligns with practical projects in multiple dimensions, e.g., real program distributions, sufficient dependencies, and enough-scale project contexts.
no code implementations • 11 Jan 2024 • Lucas W. Remedios, Shunxing Bao, Samuel W. Remedios, Ho Hin Lee, Leon Y. Cai, Thomas Li, Ruining Deng, Can Cui, Jia Li, Qi Liu, Ken S. Lau, Joseph T. Roland, Mary K. Washington, Lori A. Coburn, Keith T. Wilson, Yuankai Huo, Bennett A. Landman
In this paper, we propose to use inter-modality learning to label previously un-labelable cell types on virtual H&E.
1 code implementation • 4 Jan 2024 • Heng Chang, Jiangnan Ye, Alejo Lopez Avila, Jinhua Du, Jia Li
Graph Neural Networks (GNNs) have achieved great success in Knowledge Graph Completion (KGC) by modelling how entities and relations interact in recent years.
no code implementations • 20 Dec 2023 • Vincent Pisztora, Jia Li
In this paper we propose a method for the optimal allocation of observations between an intrinsically explainable glass box model and a black box model.
no code implementations • 18 Dec 2023 • Nuo Chen, Hongguang Li, Baoyuan Wang, Jia Li
IMP-TIP follows the "From Good to Great" concept, collecting multiple potential solutions from both LLMs and their Tool-Augmented counterparts for the same math problem, and then selecting or re-generating the most accurate answer after cross-checking these solutions via tool-augmented interleaf prompting.
2 code implementations • 9 Dec 2023 • Yin Chen, Jia Li, Shiguang Shan, Meng Wang, Richang Hong
And the TMAs capture and model the relationships of dynamic changes in facial expressions, effectively extending the pre-trained image model for videos.
Ranked #2 on Dynamic Facial Expression Recognition on FERV39k
1 code implementation • 7 Dec 2023 • Nuo Chen, Ning Wu, Shining Liang, Ming Gong, Linjun Shou, Dongmei Zhang, Jia Li
This paper presents an in-depth analysis of Large Language Models (LLMs), focusing on LLaMA, a prominent open-source foundational model in natural language processing.
no code implementations • 29 Nov 2023 • Jia Li, Lijie Hu, Jingfeng Zhang, Tianhang Zheng, Hua Zhang, Di Wang
In this paper, we address the limitations of existing text-to-image diffusion models in generating demographically fair results when given human-related descriptions.
3 code implementations • 28 Nov 2023 • Xiangguo Sun, Jiawen Zhang, Xixi Wu, Hong Cheng, Yun Xiong, Jia Li
This paper presents a pioneering survey on the emerging domain of graph prompts in AGI, addressing key challenges and opportunities in harnessing graph data for AGI applications.
1 code implementation • CVPR 2024 • Wenjie Zhao, Jia Li, Xin Dong, Yu Xiang, Yunhui Guo
Semantic segmentation models, while effective for in-distribution categories, face challenges in real-world deployment due to encountering out-of-distribution (OoD) objects.
1 code implementation • 27 Nov 2023 • Jia Li, Yanyan Shen, Lei Chen, Charles Wang Wai Ng
Inspired by the Cloze task and BERT, we fully consider the characteristics of spatial interpolation and design the SpaFormer model based on the Transformer architecture as the core of SSIN.
3 code implementations • 21 Nov 2023 • Yuhan Li, ZHIXUN LI, Peisong Wang, Jia Li, Xiangguo Sun, Hong Cheng, Jeffrey Xu Yu
First of all, we propose a new taxonomy, which organizes existing methods into three categories based on the role (i.e., enhancer, predictor, and alignment component) played by LLMs in graph-related tasks.
no code implementations • 15 Nov 2023 • Yanlin Qi, Jia Li, Michael Zhang
This new data-driven framework provides a cost-effective and adaptable solution that complements the case-specific approaches for CMF estimation, which is particularly beneficial when availability of crash data or time imposes constraints.
no code implementations • 1 Nov 2023 • Zejun Wang, Jia Li, Ge Li, Zhi Jin
To help human users refine their requirements and improve large language models' code generation performances, we propose ChatCoder: a method to refine the requirements via chatting with large language models.
2 code implementations • 31 Oct 2023 • Nuo Chen, Zinan Zheng, Ning Wu, Ming Gong, Dongmei Zhang, Jia Li
This indicates that crafting multilingual corpora can be regarded as a vital strategy for enhancing model performance in a specific language, especially in mathematical reasoning tasks.
no code implementations • 15 Oct 2023 • Ge Li, Chongyang Tao, Jia Li, Huangzhao Zhang, Fang Liu, Zhi Jin
Large language models (LLMs) have shown impressive in-context learning (ICL) ability in code generation.
no code implementations • 13 Oct 2023 • Sheng Zhou, Dan Guo, Jia Li, Xun Yang, Meng Wang
The associations between these repetitive objects are superfluous for answer reasoning; (2) two spatially distant OCR tokens detected in the image frequently have weak semantic dependencies for answer reasoning; and (3) the co-existence of nearby objects and tokens may be indicative of important visual cues for predicting answers.
1 code implementation • 11 Oct 2023 • Jiawen Zhang, Xumeng Wen, Zhenwei Zhang, Shun Zheng, Jia Li, Jiang Bian
Delivering precise point and distributional forecasts across a spectrum of prediction horizons represents a significant and enduring challenge in the application of time-series forecasting within various industries.
1 code implementation • NeurIPS 2023 • Botao Wang, Jia Li, Yang Liu, Jiashun Cheng, Yu Rong, Wenjia Wang, Fugee Tsung
We first present the error analysis of PL strategy by showing that the error is bounded by the confidence of PL threshold and consistency of multi-view prediction.
1 code implementation • 12 Sep 2023 • Jiaxiu Li, Kun Li, Jia Li, Guoliang Chen, Dan Guo, Meng Wang
Compared with the general video grounding task, MTVG focuses on meticulous actions and changes on the face.
1 code implementation • 6 Sep 2023 • Yuqi Zhu, Ge Li, YunFei Zhao, Jia Li, Zhi Jin, Hong Mei
With an analysis of loss distributions of code tokens, we find that code tokens can be divided into two categories: challenging tokens that are difficult to predict and confident tokens that can be easily inferred.
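A minimal sketch of the token split described above: compute per-token cross-entropy and flag the high-loss tokens as "challenging". The quantile cut-off and tensor shapes are illustrative choices, not the paper's.

```python
import torch
import torch.nn.functional as F

def split_tokens_by_loss(logits, targets, quantile=0.7):
    """Split code tokens into confident (low loss) and challenging (high
    loss) groups using a per-token cross-entropy threshold."""
    per_token = F.cross_entropy(
        logits.view(-1, logits.size(-1)), targets.view(-1), reduction="none"
    )
    cut = torch.quantile(per_token, quantile)
    challenging = per_token > cut
    return per_token, challenging

logits = torch.randn(2, 10, 500)          # (batch, seq_len, vocab) placeholder
targets = torch.randint(0, 500, (2, 10))
losses, hard_mask = split_tokens_by_loss(logits, targets)
print(hard_mask.float().mean())           # roughly 30% flagged as challenging
```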
1 code implementation • 26 Aug 2023 • Chongyang Tao, Zhi Jin, Fang Liu, Jia Li, Ge Li
In this paper, we propose a novel method named ZC3 for Zero-shot Cross-language Code Clone detection.
no code implementations • 26 Aug 2023 • Jia Li, Yongmin Li, Ge Li, Xing Hu, Xin Xia, Zhi Jin
Besides the patternized words, a code summary also contains important keywords, which are the key to reflecting the functionality of the code.
no code implementations • 25 Aug 2023 • Yang Liu, Jiashun Cheng, Haihong Zhao, Tingyang Xu, Peilin Zhao, Fugee Tsung, Jia Li, Yu Rong
Furthermore, we offer theoretical insights into SEGNO, highlighting that it can learn a unique trajectory between adjacent states, which is crucial for model generalization.
no code implementations • 25 Aug 2023 • Jia Li, Wei Qian, Kun Li, Qi Li, Dan Guo, Meng Wang
Specifically, we achieve the results of 0.8492 and 0.8439 for MuSe-Personalisation in terms of arousal and valence CCC.
no code implementations • 24 Aug 2023 • Weiqi Zhang, Jianfeng Zhang, Jia Li, Fugee Tsung
Based on this, we create two views for the input time series through two different encoders.
no code implementations • 20 Aug 2023 • Shunxing Bao, Sichen Zhu, Vasantha L Kolachala, Lucas W. Remedios, Yeonjoo Hwang, Yutong Sun, Ruining Deng, Can Cui, Yike Li, Jia Li, Joseph T. Roland, Qi Liu, Ken S. Lau, Subra Kugathasan, Peng Qiu, Keith T. Wilson, Lori A. Coburn, Bennett A. Landman, Yuankai Huo
This analysis is based on data collected at the two research institutes.
no code implementations • 31 Jul 2023 • Tianshu Yu, Changqun Xia, Jia Li
That is, motion of different parts of the portraits is imbalanced.
no code implementations • 31 Jul 2023 • Jia Li, Xiang Li
The Observation-Oriented paradigm currently dominates relationship learning models, including AI-based ones, which inherently do not account for relationships with temporally nonlinear effects.
1 code implementation • 22 Jul 2023 • Jia Li, Yanhao Wang, Arpit Merchant
Normalized-cut graph partitioning aims to divide the set of nodes in a graph into $k$ disjoint clusters to minimize the fraction of the total edges between any cluster and all other clusters.
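Concretely, the k-way normalized-cut objective sums, over clusters, the weight of edges leaving a cluster divided by the cluster's volume. The sketch below computes that objective for a given partition under the common cut/volume formulation, which may differ in constants from the paper's exact definition.

```python
import numpy as np

def normalized_cut(adj: np.ndarray, labels: np.ndarray) -> float:
    """Normalized-cut objective: for each cluster C, add cut(C, rest) / vol(C),
    where vol(C) is the sum of degrees of nodes in C."""
    degrees = adj.sum(axis=1)
    total = 0.0
    for c in np.unique(labels):
        in_c = labels == c
        cut = adj[in_c][:, ~in_c].sum()
        vol = degrees[in_c].sum()
        total += cut / max(vol, 1e-12)
    return total

# Two triangles joined by a single edge: the natural 2-way partition.
adj = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0
print(normalized_cut(adj, np.array([0, 0, 0, 1, 1, 1])))  # small value = good cut
```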
1 code implementation • 15 Jul 2023 • Cheng Chen, Yifan Zhao, Jia Li
Learning multi-label image recognition with incomplete annotation is gaining popularity due to its superior performance and significant labor savings when compared to training with fully labeled datasets.
1 code implementation • 4 Jul 2023 • Xiangguo Sun, Hong Cheng, Jia Li, Bo Liu, Jihong Guan
Inspired by the prompt learning in natural language processing (NLP), which has presented significant effectiveness in leveraging prior knowledge for various NLP tasks, we study the prompting topic for graphs with the motivation of filling the gap between pre-trained models and various graph tasks.
no code implementations • 29 Jun 2023 • Enzhe Zhao, Zhichang Guo, Shengzhu Shi, Yao Li, Jia Li, Dazhi Zhang
SaaFormer applies a multi-level spectral extraction structure to segment the spectrum into multiple spectral clips, such that the wavelength continuity of the spectrum across channels is preserved.
1 code implementation • NeurIPS 2023 • Jianheng Tang, Fengrui Hua, Ziqi Gao, Peilin Zhao, Jia Li
With a long history of traditional Graph Anomaly Detection (GAD) algorithms and recently popular Graph Neural Networks (GNNs), it is still not clear (1) how they perform under a standard comprehensive setting, (2) whether GNNs can outperform traditional algorithms such as tree ensembles, and (3) how about their efficiency on large-scale graphs.
1 code implementation • 18 Jun 2023 • Yifan Zhao, Tong Zhang, Jia Li, Yonghong Tian
Recent progress in this setting assumes that the base knowledge and novel query samples are distributed in the same domains, which are usually infeasible for realistic applications.
1 code implementation • 14 Jun 2023 • Jiawen Zhang, Shun Zheng, Wei Cao, Jiang Bian, Jia Li
Irregularly sampled multivariate time series are ubiquitous in various fields, particularly in healthcare, and exhibit two key characteristics: intra-series irregularity and inter-series discrepancy.
no code implementations • 12 Jun 2023 • Yu Zhang, Jia Li, Jie Ding, Xiang Li
Learning and analysis of network robustness, including controllability robustness and connectivity robustness, is critical for various networked systems against attacks.
2 code implementations • 2 Jun 2023 • Amit Roy, Juan Shu, Jia Li, Carl Yang, Olivier Elshocht, Jeroen Smeets, Pan Li
Graph Anomaly Detection (GAD) is a technique used to identify abnormal nodes within graphs, finding applications in network security, fraud detection, social media spam detection, and various other domains.
no code implementations • 24 May 2023 • Zhengwei Tao, Zhi Jin, Xiaoying Bai, Haiyan Zhao, Yanlin Feng, Jia Li, Wenpeng Hu
In this paper, we propose an overarching framework for event semantic processing, encompassing understanding, reasoning, and prediction, along with their fine-grained aspects.
no code implementations • 19 May 2023 • Liangqi Yuan, Yuan Wei, Jia Li
Deep neural networks (DNNs) are used to fit and train the pressure image stream and recognize the corresponding human behavior.
no code implementations • 11 May 2023 • Jia Li, Ge Li, Yongmin Li, Zhi Jin
In this paper, we propose Structured CoTs (SCoTs) and present a novel prompting technique for code generation, named SCoT prompting.
1 code implementation • 11 May 2023 • Jianheng Tang, Kangfei Zhao, Jia Li
In this paper, we introduce FGWEA, an unsupervised entity alignment framework that leverages the Fused Gromov-Wasserstein (FGW) distance, allowing for a comprehensive comparison of entity semantics and KG structures within a joint optimization framework.
1 code implementation • 9 May 2023 • Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Bowen Cao, Jianhui Chang, Daxin Jiang, Jia Li
Currently, learning better unsupervised sentence representations is the pursuit of many natural language processing communities.
4 code implementations • 9 May 2023 • Raymond Li, Loubna Ben allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, Harm de Vries
The BigCode community, an open-scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder and StarCoderBase: 15.5B parameter models with 8K context length, infilling capabilities and fast large-batch inference enabled by multi-query attention.
Ranked #53 on Code Generation on MBPP
1 code implementation • 6 May 2023 • Kechi Zhang, Zhuo Li, Jia Li, Ge Li, Zhi Jin
Inspired by the process of human programming, we propose a generate-and-edit approach named Self-Edit that utilizes execution results of the generated code from LLMs to improve the code quality on the competitive programming task.
1 code implementation • 10 Apr 2023 • Weiqi Zhang, Guanlue Li, Jianheng Tang, Jia Li, Fugee Tsung
In our study, we examine this prevalent strategy through the lens of graph Dirichlet energy.
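For reference, graph Dirichlet energy measures how non-smooth node features are across edges; a common degree-normalized form is computed in the sketch below. The exact normalization used in the paper is not specified here, so this is only indicative.

```python
import numpy as np

def dirichlet_energy(adj: np.ndarray, x: np.ndarray) -> float:
    """Degree-normalized graph Dirichlet energy:
    E(X) = 0.5 * sum_ij A_ij * || x_i/sqrt(d_i) - x_j/sqrt(d_j) ||^2.
    Lower energy means smoother node features over the graph."""
    d = adj.sum(axis=1)
    xn = x / np.sqrt(np.maximum(d, 1e-12))[:, None]
    diff = xn[:, None, :] - xn[None, :, :]
    return 0.5 * float((adj * (diff ** 2).sum(-1)).sum())

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # small star graph
x = np.array([[1.0], [0.5], [-0.5]])
print(dirichlet_energy(adj, x))
```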
1 code implementation • 1 Apr 2023 • Ruining Deng, Can Cui, Lucas W. Remedios, Shunxing Bao, R. Michael Womick, Sophie Chiron, Jia Li, Joseph T. Roland, Ken S. Lau, Qi Liu, Keith T. Wilson, Yaohong Wang, Lori A. Coburn, Bennett A. Landman, Yuankai Huo
Analyzing high resolution whole slide images (WSIs) with regard to information across multiple scales poses a significant challenge in digital pathology.
no code implementations • 31 Mar 2023 • Jia Li, YunFei Zhao, Yongmin Li, Ge Li, Zhi Jin
A key question is how to make prompts (i.e., Prompting Techniques).
no code implementations • 25 Mar 2023 • Liangqi Yuan, Houlin Chen, Robert Ewing, Jia Li
Passive radio frequency (PRF)-based indoor positioning systems (IPS) have attracted researchers' attention due to their low price, easy and customizable configuration, and non-invasive design.
1 code implementation • 16 Mar 2023 • Jia Li, Yin Chen, Xuesong Zhang, Jiantao Nie, Ziqiang Li, Yangchen Yu, Yan Zhang, Richang Hong, Meng Wang
In this paper, we present our advanced solutions to the two sub-challenges of Affective Behavior Analysis in the wild (ABAW) 2023: the Emotional Reaction Intensity (ERI) Estimation Challenge and Expression (Expr) Classification Challenge.
1 code implementation • CVPR 2023 • Dingfeng Shi, Yujie Zhong, Qiong Cao, Lin Ma, Jia Li, DaCheng Tao
In this paper, we present a one-stage framework TriDet for temporal action detection.
Ranked #2 on Temporal Action Localization on EPIC-KITCHENS-100
2 code implementations • 12 Mar 2023 • Jiajin Li, Jianheng Tang, Lemin Kong, Huikang Liu, Jia Li, Anthony Man-Cho So, Jose Blanchet
This observation allows us to provide an approximation bound for the distance between the fixed-point set of BAPG and the critical point set of GW.
1 code implementation • 4 Mar 2023 • Tian Bian, Yuli Jiang, Jia Li, Tingyang Xu, Yu Rong, Yi Su, Timothy Kwok, Helen Meng, Hong Cheng
Many patients with chronic diseases resort to multiple medications to relieve various symptoms, which raises concerns about the safety of multiple medication use, as severe drug-drug antagonism can lead to serious adverse effects or even death.
1 code implementation • 27 Feb 2023 • Nuo Chen, Hongguang Li, Junqing He, Yinan Bao, Xinshi Lin, Qi Yang, Jianfeng Liu, Ruyi Gan, Jiaxing Zhang, Baoyuan Wang, Jia Li
Thus, models' comprehension ability in real scenarios is hard to evaluate reasonably.
no code implementations • 25 Feb 2023 • Heng Chang, Jie Cai, Jia Li
With a carefully designed instantiation of a causal model on the knowledge graph, we generate the counterfactual relations to answer the question by regarding the representations of entity pair given relation as context, structural information of relation-aware neighborhood as treatment, and validity of the composed triplet as the outcome.
1 code implementation • 17 Feb 2023 • Nuo Chen, Hongguang Li, Yinan Bao, Baoyuan Wang, Jia Li
To this end, we construct a new dataset called Penguin to promote the research of MRC, providing a training and test bed for natural response generation to real scenarios.
no code implementations • 16 Feb 2023 • Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Chenyu You, Jianhui Chang, Daxin Jiang, Jia Li
For instance, TPLMs jointly pre-trained with table and text input could be effective for tasks also with table-text joint input like table question answering, but it may fail for tasks with only tables or text as input such as table retrieval.
no code implementations • 8 Feb 2023 • Fang Liu, Jia Li, Li Zhang
The experimental results on function translation tasks between Python, Java, and C++ show that SDA-Trans outperforms many large-scale pre-trained models, especially for unseen language translation.
1 code implementation • 30 Jan 2023 • Jianheng Tang, Weiqi Zhang, Jiajin Li, Kangfei Zhao, Fugee Tsung, Jia Li
As the graphs to be aligned are usually constructed from different sources, the inconsistency issues of structures and features between two graphs are ubiquitous in real-world applications.
no code implementations • 17 Jan 2023 • Jia Li, Shengye Qiao, Zhirui Zhao, Chenxi Xie, Xiaowu Chen, Changqun Xia
To this end, we design a lightweight framework while maintaining satisfying competitive accuracy.
7 code implementations • 9 Jan 2023 • Loubna Ben allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra
The BigCode project is an open-scientific collaboration working on the responsible development of large language models for code.
no code implementations • 5 Jan 2023 • Yanhao Wang, Michael Mathioudakis, Jia Li, Francesco Fabbri
Diversity maximization aims to select a diverse and representative subset of items from a large dataset.
1 code implementation • ICCV 2023 • Yunshan Qi, Lin Zhu, Yu Zhang, Jia Li
To solve this problem, we propose a novel Event-Enhanced NeRF (E2NeRF) by utilizing the combination data of a bio-inspired event camera and a standard RGB camera.
1 code implementation • 28 Dec 2022 • Yifan Zhao, Jia Li, Xiaowu Chen, Yonghong Tian
This framework, namely PArt-guided Relational Transformers (PART), is proposed to learn the discriminative part features with an automatic part discovery module, and to explore the intrinsic correlations with a feature transformation module by adapting the Transformer models from the field of natural language processing.
Ranked #8 on Fine-Grained Image Classification on FGVC Aircraft
no code implementations • 28 Dec 2022 • Yifan Zhao, Jia Li, Yonghong Tian
Fine-grained visual parsing, including fine-grained part segmentation and fine-grained object recognition, has attracted considerable critical attention due to its importance in many real-world applications, e.g., agriculture, remote sensing, and space technologies.
no code implementations • 12 Dec 2022 • Yang Liu, Yu Rong, Zhuoning Guo, Nuo Chen, Tingyang Xu, Fugee Tsung, Jia Li
To address these challenges, we formulate the micro perspective mobility modeling into computing the relevance score between a diffusion and a location, conditional on a geometric graph.
no code implementations • 30 Nov 2022 • Ziqi Gao, Yifan Niu, Jiashun Cheng, Jianheng Tang, Tingyang Xu, Peilin Zhao, Lanqing Li, Fugee Tsung, Jia Li
In this work, we present a regularized graph autoencoder for graph attribute imputation, named MEGAE, which aims at mitigating spectral concentration problem by maximizing the graph spectral entropy.
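One common reading of graph spectral entropy is the Shannon entropy of the normalized Laplacian eigenvalue distribution; the sketch below computes that quantity for a small ring graph. MEGAE's precise definition may differ, so this is a background illustration rather than the paper's objective.

```python
import numpy as np

def graph_spectral_entropy(adj: np.ndarray) -> float:
    """Shannon entropy of the normalized Laplacian eigenvalue distribution
    (one common notion of graph spectral entropy)."""
    d = adj.sum(axis=1)
    lap = np.diag(d) - adj
    eigvals = np.clip(np.linalg.eigvalsh(lap), 0.0, None)
    p = eigvals / max(eigvals.sum(), 1e-12)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# 6-node ring graph: adjacency with ones at (i, i+1) and (i, i-1) mod 6.
ring = np.roll(np.eye(6), 1, axis=1) + np.roll(np.eye(6), -1, axis=1)
print(graph_spectral_entropy(ring))
```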
no code implementations • 20 Nov 2022 • Denis Kocetkov, Raymond Li, Loubna Ben allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, Dzmitry Bahdanau, Leandro von Werra, Harm de Vries
Large Language Models (LLMs) play an ever-increasing role in the field of Artificial Intelligence (AI)--not only for natural language processing but also for code understanding and generation.
1 code implementation • 15 Nov 2022 • Jia Li, Xiang Li, Xiaowei Jia, Michael Steinbach, Vipin Kumar
Causal DAGs (Directed Acyclic Graphs) are usually considered in a 2D plane.
1 code implementation • 13 Nov 2022 • Nuo Chen, Yan Wang, Haiyun Jiang, Deng Cai, Yuhan Li, Ziyang Chen, Longyue Wang, Jia Li
In this paper, we introduce the Harry Potter Dialogue (HPD) dataset, designed to advance the study of dialogue agents and character alignment.
no code implementations • 10 Nov 2022 • Lixiang Zhang, Jia Li
Furthermore, we adopt methods in adversarial training to enhance the robustness of DNN surrogate models.
1 code implementation • 3 Nov 2022 • Haojie Zhang, Ge Li, Jia Li, Zhongjin Zhang, Yuqi Zhu, Zhi Jin
Large-scale pre-trained language models have achieved impressive results on a wide range of downstream tasks recently.
no code implementations • Neurocomputing 2022 • Heng Wu, Yifan Zhao, Jia Li
Recent ideas propose to explore this problem in an unsupervised setting, i.e., without any labels in base classes, which reduces the heavy consumption of manual annotations.
no code implementations • 31 Oct 2022 • Jia Li, Zhuo Li, Huangzhao Zhang, Ge Li, Zhi Jin, Xing Hu, Xin Xia
The attackers aim to inject insidious backdoors into models by poisoning the training data with poison samples.
1 code implementation • 31 Oct 2022 • Jia Li, Ge Li, Zhuo Li, Zhi Jin, Xing Hu, Kechi Zhang, Zhiyi Fu
Pre-trained models are first pre-trained with pre-training tasks and fine-tuned with the code editing task.
no code implementations • 27 Sep 2022 • Junjie Wu, Changqun Xia, Tianshu Yu, Jia Li
Inspired by humans' observing process, we propose a view-aware salient object detection method based on a Sample Adaptive View Transformer (SAVT) module with two sub-modules to mitigate these issues.
1 code implementation • 16 Sep 2022 • Lanqing Li, Liang Zeng, Ziqi Gao, Shen Yuan, Yatao Bian, Bingzhe Wu, Hengtong Zhang, Yang Yu, Chan Lu, Zhipeng Zhou, Hongteng Xu, Jia Li, Peilin Zhao, Pheng-Ann Heng
The last decade has witnessed a prosperous development of computational methods and dataset curation for AI-aided drug discovery (AIDD).
1 code implementation • 15 Aug 2022 • Ruining Deng, Can Cui, Lucas W. Remedios, Shunxing Bao, R. Michael Womick, Sophie Chiron, Jia Li, Joseph T. Roland, Ken S. Lau, Qi Liu, Keith T. Wilson, Yaohong Wang, Lori A. Coburn, Bennett A. Landman, Yuankai Huo
Multi-instance learning (MIL) is widely used in the computer-aided interpretation of pathological Whole Slide Images (WSIs) to solve the lack of pixel-wise or patch-wise annotations.
no code implementations • 7 Aug 2022 • Yongjun Chen, Jia Li, Zhiwei Liu, Nitish Shirish Keskar, Huan Wang, Julian McAuley, Caiming Xiong
Due to the dynamics of users' interests and model updates during training, considering randomly sampled items from a user's non-interacted item set as negatives can be uninformative.
1 code implementation • 5 Aug 2022 • Jia Li, Ziyang Zhang, Junjie Lang, Yueqi Jiang, Liuwei An, Peng Zou, Yangyang Xu, Sheng Gao, Jie Lin, Chunxiao Fan, Xiao Sun, Meng Wang
In this paper, we present our solutions for the Multimodal Sentiment Analysis Challenge (MuSe) 2022, which includes MuSe-Humor, MuSe-Reaction and MuSe-Stress Sub-challenges.
no code implementations • 22 Jul 2022 • Jia Li, Jiantao Nie, Dan Guo, Richang Hong, Meng Wang
PF-ViT aims to separate and recognize the disturbance-agnostic emotion from a static facial image via generating its corresponding poker face, without the need for paired images.
Ranked #7 on Facial Expression Recognition (FER) on FER+
1 code implementation • 20 Jul 2022 • Xin Yu, Peng Dai, Wenbo Li, Lan Ma, Jiajun Shen, Jia Li, Xiaojuan Qi
With the rapid development of mobile devices, modern widely-used mobile phones typically allow users to capture 4K resolution (i.e., ultra-high-definition) images.
Ranked #1 on Image Restoration on UHDM
1 code implementation • 14 Jul 2022 • Dingfeng Shi, Yujie Zhong, Qiong Cao, Jing Zhang, Lin Ma, Jia Li, DaCheng Tao
Moreover, we propose two losses to facilitate and stabilize the training of action classification.
Ranked #17 on Temporal Action Localization on THUMOS’14
1 code implementation • 26 Jun 2022 • Jiashun Cheng, Man Li, Jia Li, Fugee Tsung
Graph self-supervised learning (SSL) has been vastly employed to learn representations from unlabeled graphs.
no code implementations • 21 Jun 2022 • Hongyu Li, Jia Li, Xin Ren, Long Xu
Inspired by the haze formation process on Earth, we formulate a similar visual degradation process on clean images and synthesize dusty images sharing a similar feature distribution with realistic dusty images.
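The visual degradation process alluded to above is presumably in the spirit of the classic atmospheric scattering model, I = J·t + A·(1 − t) with transmission t = exp(−β·depth); a toy synthesis along those lines is sketched below, with a made-up yellowish airlight to mimic dust. The paper's actual pipeline is more involved.

```python
import numpy as np

def synthesize_dusty(clean: np.ndarray, depth: np.ndarray,
                     airlight=(0.8, 0.7, 0.5), beta=1.2) -> np.ndarray:
    """Classic atmospheric scattering model: I = J*t + A*(1 - t),
    with per-pixel transmission t = exp(-beta * depth)."""
    t = np.exp(-beta * depth)[..., None]
    A = np.asarray(airlight).reshape(1, 1, 3)
    return clean * t + A * (1.0 - t)

clean = np.random.rand(64, 64, 3)                   # stand-in clean image
depth = np.tile(np.linspace(0.2, 2.0, 64), (64, 1))  # synthetic depth map
dusty = synthesize_dusty(clean, depth)
print(dusty.min(), dusty.max())
```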
no code implementations • 11 Jun 2022 • Jia Li, Yongfeng Huang, Heng Chang, Yu Rong
We study the node classification problem in the hierarchical graph where a 'node' is a graph instance.
1 code implementation • 31 May 2022 • Jianheng Tang, Jiajin Li, Ziqi Gao, Jia Li
Graph Neural Networks (GNNs) are widely applied for graph anomaly detection.
1 code implementation • 31 May 2022 • Wenzhuo Yang, Jia Li, Caiming Xiong, Steven C. H. Hoi
Counterfactual explanation is an important Explainable AI technique to explain machine learning predictions.
no code implementations • 17 May 2022 • Jiajin Li, Jianheng Tang, Lemin Kong, Huikang Liu, Jia Li, Anthony Man-Cho So, Jose Blanchet
In this paper, we study the design and analysis of a class of efficient algorithms for computing the Gromov-Wasserstein (GW) distance tailored to large-scale graph learning tasks.
1 code implementation • CVPR 2022 • Chenxi Xie, Changqun Xia, Mingcan Ma, Zhirui Zhao, Xiaowu Chen, Jia Li
An attention-based Cross-Model Grafting Module (CMGM) is proposed to enable CNN branch to combine broken detailed information more holistically, guided by different source feature during decoding process.
Ranked #8 on RGB Salient Object Detection on UHRSD (using extra training data)
1 code implementation • CVPR 2022 • Peng Dai, Xin Yu, Lan Ma, Baoheng Zhang, Jia Li, Wenbo Li, Jiajun Shen, Xiaojuan Qi
Moire patterns, appearing as color distortions, severely degrade image and video qualities when filming a screen with digital cameras.
1 code implementation • 5 Apr 2022 • Yongjun Chen, Jia Li, Caiming Xiong
A generator, as an auxiliary model, is trained jointly with the discriminator to sample plausible alternative next items and will be thrown out after training.
1 code implementation • 25 Mar 2022 • Zhiwei Liu, Yongjun Chen, Jia Li, Man Luo, Philip S. Yu, Caiming Xiong
However, existing methods all construct views by adopting augmentation from data perspectives, while we argue that 1) optimal data augmentation methods are hard to devise, 2) data augmentation methods destroy sequential correlations, and 3) data augmentation fails to incorporate comprehensive self-supervised signals.
no code implementations • ACL 2022 • Wenpeng Yin, Jia Li, Caiming Xiong
This work defines a new learning paradigm ConTinTin (Continual Learning from Task Instructions), in which a system should learn a sequence of new tasks one by one, each task is explained by a piece of textual instruction.
no code implementations • Multimedia Systems 2022 • Chunxiao Fan, zhenxing Wang, Jia Li, Shanshan Wang, Xiao Sun
In the proposed method, (1) the topological structure information and texture features of regions of interest (ROIs) are modeled as graphs and processed with a graph convolutional network (GCN) to retain the topological features.
no code implementations • 2 Mar 2022 • Jia Li, Jie Cao, Junxian Duan, Ran He
We propose a new challenging task namely IDentity Stylization (IDS) across heterogeneous domains.
no code implementations • 18 Feb 2022 • Zhuomin Zhang, Elizabeth C. Mansfield, Jia Li, John Russell, George S. Young, Catherine Adams, James Z. Wang
The British landscape painter John Constable is considered foundational for the Realist movement in 19th-century European painting.
1 code implementation • 5 Feb 2022 • Yongjun Chen, Zhiwei Liu, Jia Li, Julian McAuley, Caiming Xiong
Specifically, we introduce a latent variable to represent users' intents and learn the distribution function of the latent variable via clustering.
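As a rough analogue of learning intents via clustering, the sketch below runs k-means over hypothetical user sequence embeddings to obtain intent prototypes and assignments; the paper learns the latent-intent distribution jointly with the recommender rather than with a fixed post-hoc k-means, and all tensors here are random placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical sequence embeddings for 1,000 users (e.g. mean-pooled item
# embeddings). Clustering them yields prototype "intents".
user_embeddings = np.random.randn(1000, 64).astype(np.float32)

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(user_embeddings)
intent_prototypes = kmeans.cluster_centers_         # (8, 64) intent vectors
intent_assignment = kmeans.labels_                  # per-user intent id
print(intent_prototypes.shape, np.bincount(intent_assignment))
```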
1 code implementation • 12 Jan 2022 • Zohreh Ovaisi, Shelby Heinecke, Jia Li, Yongfeng Zhang, Elena Zheleva, Caiming Xiong
Robust machine learning is an increasingly important topic that focuses on developing models resilient to various forms of imperfect data.
no code implementations • 20 Nov 2021 • Wenpeng Yin, Shelby Heinecke, Jia Li, Nitish Shirish Keskar, Michael Jones, Shouzhong Shi, Stanislav Georgiev, Kurt Milich, Joseph Esposito, Caiming Xiong
The distribution gap between training datasets and data encountered in production is well acknowledged.
no code implementations • NeurIPS 2021 • Jia Li, Jiajin Li, Yang Liu, Jianwei Yu, Yueting Li, Hong Cheng
In this paper, we consider an inverse problem in the graph learning domain: "given the graph representations smoothed by a Graph Convolutional Network (GCN), how can we reconstruct the input graph signal?"
no code implementations • 15 Oct 2021 • Mingcan Ma, Changqun Xia, Chenxi Xie, Xiaowu Chen, Jia Li
Moreover, unlike multi-path parallel training, MHB randomly selects one branch each time for gradient back-propagation in a boosting way.
1 code implementation • 14 Oct 2021 • Shuyuan Xu, Juntao Tan, Shelby Heinecke, Jia Li, Yongfeng Zhang
Experiments on real-world datasets show that our method is able to deconfound unobserved confounders to achieve better recommendation performance.
1 code implementation • ICCV 2021 • Jiawei Zhao, Ke Yan, Yifan Zhao, Xiaowei Guo, Feiyue Huang, Jia Li
Different from these studies, in this paper, we propose a novel Transformer-based Dual Relation learning framework, constructing complementary relationships by exploring two aspects of correlation, i.e., a structural relation graph and a semantic relation graph.
Ranked #9 on Multi-Label Classification on PASCAL VOC 2007
no code implementations • 3 Oct 2021 • Jia Li, Huaibo Huang, Xiaofei Jia, Ran He
Blind face restoration (BFR) is a challenging problem because of the uncertainty of the degradation patterns.
1 code implementation • ACM MM 2021 • Jiawei Zhao, Yifan Zhao, Jia Li
Multi-label image recognition aims to recognize multiple objects simultaneously in one image.
Ranked #5 on Multi-Label Classification on PASCAL VOC 2007
no code implementations • 29 Sep 2021 • Chenyu Liu, Jia Li, Junxian Duan, Huaibo Huang
The first is that capturing the general clue of artifacts is difficult.
no code implementations • 29 Sep 2021 • Zhiwei Liu, Yongjun Chen, Jia Li, Man Luo, Philip S. Yu, Caiming Xiong
However, existing methods all construct views by adopting augmentation from data perspectives, while we argue that 1) optimal data augmentation methods are hard to devise, 2) data augmentation methods destroy sequential correlations, and 3) data augmentation fails to incorporate comprehensive self-supervised signals.
no code implementations • 23 Sep 2021 • Yongjun Chen, Jia Li, Chenghao Liu, Chenxi Li, Markus Anderle, Julian McAuley, Caiming Xiong
However, properly integrating them into user interest models is challenging since attribute dynamics can be diverse, such as time-interval-aware or periodic patterns, etc.
1 code implementation • ICCV 2021 • Jiajian Zhao, Yifan Zhao, Jia Li, Ke Yan, Yonghong Tian
The crucial problem in vehicle re-identification is to find the same vehicle identity when reviewing this object from cross-view cameras, which sets a higher demand for learning viewpoint-invariant representations.
1 code implementation • 15 Sep 2021 • Dong Zhao, Jia Li, Hongyu Li, Long Xu
In this paper, firstly, we propose a new complementary feature enhanced framework, in which the complementary features are learned by several complementary subtasks and then together serve to boost the performance of the primary task.
Ranked #1 on Image Dehazing on NH-HAZE