no code implementations • 26 May 2025 • Jingyu Liu, Ce Zhang
The growing demand for efficient Large Language Model (LLM) inference calls for holistic optimization across algorithms, systems, and hardware.
no code implementations • 23 May 2025 • Jingyu Liu, Yanglei Song
We study the stochastic linear bandit problem with multiple arms over $T$ rounds, where the covariate dimension $d$ may exceed $T$, but each arm-specific parameter vector is $s$-sparse.
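In standard bandit notation (a sketch of the setting; the symbols are assumed rather than taken from the paper), the model can be written as:

```latex
% At round t, pulling arm a_t with covariate x_t yields the reward
\[
  y_t = x_t^\top \theta_{a_t} + \varepsilon_t,
  \qquad \|\theta_a\|_0 \le s \;\; \text{for every arm } a,
\]
% where d = \dim(x_t) may exceed the horizon T and \varepsilon_t is noise.
```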
no code implementations • 28 Apr 2025 • Zuxing Lu, Xin Yuan, ShaoWen Yang, Jingyu Liu, Changyin Sun
Semantic-aware 3D scene reconstruction is essential for autonomous robots to perform complex interactions.
no code implementations • 5 Mar 2025 • Yanfei Li, Teng Yin, Wenyi Shang, Jingyu Liu, Xi Wang, Kaiyang Zhao
To address this, we propose a Prototype-Guided Adaptive Distillation (PGAD) framework that directly incorporates incomplete multi-modal data into training.
no code implementations • 4 Mar 2025 • Jingyu Liu, Xiaoting Wang, Xiaozhe Wang
Assessing the risk of low-probability high-impact transient instability (TI) events is crucial for ensuring robust and stable power system operation under high uncertainty.
no code implementations • 2 Mar 2025 • Jiaen Lin, Jingyu Liu
This observation suggests that the representations in intermediate layers contain richer information compared to those in other layers.
1 code implementation • 5 Feb 2025 • Jingyu Liu, Beidi Chen, Ce Zhang
In this work, we present SpecPrefill, a training free framework that accelerates the inference TTFT for both long and medium context queries based on the following insight: LLMs are generalized enough to preserve the quality given only a carefully chosen subset of prompt tokens.
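The core idea, prefilling on a carefully chosen subset of prompt tokens, can be sketched as follows. The scoring function and `keep_ratio` here are illustrative placeholders, not the paper's actual selection method:

```python
import numpy as np

def select_prompt_tokens(scores, keep_ratio=0.25):
    """Keep the highest-scoring fraction of prompt tokens, preserving order.

    `scores` is a hypothetical per-token importance estimate (e.g. from a
    lightweight speculator); how SpecPrefill actually scores tokens is not
    reproduced here.
    """
    n = len(scores)
    k = max(1, int(n * keep_ratio))
    top = np.argsort(scores)[-k:]   # indices of the k most important tokens
    return np.sort(top)             # restore original order for the LLM

scores = np.array([0.1, 0.9, 0.2, 0.8, 0.05, 0.7, 0.3, 0.6])
kept = select_prompt_tokens(scores, keep_ratio=0.5)
```

The prefill then runs attention only over the kept positions, shrinking TTFT roughly in proportion to `keep_ratio`.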
no code implementations • 17 Jan 2025 • Bishal Thapaliya, Esra Akbas, Ram Sapkota, Bhaskar Ray, Vince Calhoun, Jingyu Liu
Resting-state functional magnetic resonance imaging (rs-fMRI) offers valuable insights into the human brain's functional organization and is a powerful tool for investigating the relationship between brain function and cognitive processes, as it captures that organization without relying on a specific task or stimulus.
no code implementations • 1 Nov 2024 • Qiang Li, Jingyu Liu, Vince D. Calhoun
Moreover, after applying topological network analysis to the correlation of correlation networks, we observed that some high-order interaction hubs predominantly occurred in primary and high-level cognitive areas, such as the visual and fronto-parietal regions.
1 code implementation • 15 Oct 2024 • Bishal Thapaliya, Anh Nguyen, Yao Lu, Tian Xie, Igor Grudetskyi, Fudong Lin, Antonios Valkanas, Jingyu Liu, Deepayan Chakraborty, Bilel Fehri
We propose the Enhanced Cluster-aware Graph Network (ECGN), a novel method that addresses these issues by integrating cluster-specific training with synthetic node generation.
no code implementations • 13 Oct 2024 • Jingyu Liu, Xinyu Liu, Mingzhe Qu, Tianyi Lyu
To overcome these challenges, we propose the EITNet model, a deep learning framework that combines EfficientDet for object detection, I3D for spatiotemporal feature extraction, and TimeSformer for temporal analysis, all integrated with IoT technology for seamless real-time data collection and processing.
1 code implementation • 8 Oct 2024 • Yongxin Guo, Jingyu Liu, Mingda Li, Xiaoying Tang, Qingbin Liu, Xi Chen
To effectively handle various tasks simultaneously and enable zero-shot prediction, there is a growing trend toward employing video LLMs for VTG tasks.
no code implementations • 5 Oct 2024 • Yilong Li, Jingyu Liu, Hao Zhang, M Badri Narayanan, Utkarsh Sharma, Shuai Zhang, Pan Hu, Yijing Zeng, Jayaram Raghuram, Suman Banerjee
Deploying large language models (LLMs) locally on mobile devices is advantageous in scenarios where transmitting data to remote cloud servers is either undesirable due to privacy concerns or impractical due to unreliable network connectivity.
no code implementations • 3 Oct 2024 • Jingyu Liu, Jiaen Lin, Yong Liu
In this paper, we investigate this issue in depth and find that while RAG can assist with reasoning, the help is limited.
no code implementations • 10 Sep 2024 • Dingxin Cheng, Mingda Li, Jingyu Liu, Yongxin Guo, Bin Jiang, Qingbin Liu, Xi Chen, Bo Zhao
While this method excels at short video understanding, its coarse compression may blend information from multiple events in long videos, causing information redundancy.
no code implementations • 5 Sep 2024 • Mingze Gao, Jingyu Liu, Mingda Li, Jiangtao Xie, Qingbin Liu, Bo Zhao, Xi Chen, Hui Xiong
Multimodal Large Language Models (MLLMs) have significantly improved performance across various image-language applications.
no code implementations • 3 Sep 2024 • Xingjian Wu, Xiaoting Wang, Xiaozhe Wang, Peter E. Caines, Jingyu Liu
Transient stability and critical clearing time (CCT) are important concepts in power system protection and control.
no code implementations • 23 Aug 2024 • Jingyu Liu, Minquan Wang, Ye Ma, Bo Wang, Aozhu Chen, Quan Chen, Peng Jiang, Xirong Li
Previous studies on adding SFX to videos perform video-to-SFX matching at a holistic level and therefore cannot add SFX at a specific moment.
no code implementations • 13 Aug 2024 • Guozhen Zhang, Jingyu Liu, Shengming Cao, Xiaotong Zhao, Kevin Zhao, Kai Ma, LiMin Wang
On Kinetics-400, InTI reaches a top-1 accuracy of 87.1 with a remarkable 37.5% reduction in GFLOPs compared to naive adaptation.
1 code implementation • 22 May 2024 • Yongxin Guo, Jingyu Liu, Mingda Li, Dingxin Cheng, Xiaoying Tang, Dianbo Sui, Qingbin Liu, Xi Chen, Kevin Zhao
Video Temporal Grounding (VTG) strives to accurately pinpoint event timestamps in a specific video using linguistic queries, significantly impacting downstream tasks like video browsing and editing.
no code implementations • 19 May 2024 • Bishal Thapaliya, Robyn Miller, Jiayu Chen, Yu-Ping Wang, Esra Akbas, Ram Sapkota, Bhaskar Ray, Pranav Suresh, Santosh Ghimire, Vince Calhoun, Jingyu Liu
Resting-state functional magnetic resonance imaging (rs-fMRI) is a noninvasive technique pivotal for understanding human neural mechanisms of intricate cognitive processes.
1 code implementation • 16 May 2024 • Tao Feng, Chuanyang Jin, Jingyu Liu, Kunlun Zhu, Haoqin Tu, Zirui Cheng, GuanYu Lin, Jiaxuan You
The evolution of artificial intelligence (AI) has profoundly impacted human society, driving significant advancements in multiple sectors.
no code implementations • 15 May 2024 • Riyasat Ohib, Bishal Thapaliya, Gintare Karolina Dziugaite, Jingyu Liu, Vince Calhoun, Sergey Plis
In this work, we propose Salient Sparse Federated Learning (SSFL), a streamlined approach for sparse federated learning with efficient communication.
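The communication saving in sparse federated learning comes from transmitting only a salient subset of parameters. A minimal sketch, using gradient magnitude as the saliency proxy (a common choice; the paper's exact criterion may differ):

```python
import numpy as np

def salience_mask(grads, sparsity=0.9):
    """Keep only the most salient (1 - sparsity) fraction of parameters,
    scored here by |gradient| magnitude. Clients would then communicate
    only the masked entries, cutting bandwidth by roughly `sparsity`.
    """
    flat = np.abs(grads).ravel()
    k = max(1, int(flat.size * (1 - sparsity)))
    thresh = np.partition(flat, -k)[-k]   # k-th largest magnitude
    return np.abs(grads) >= thresh

mask = salience_mask(np.array([0.1, 5.0, 0.2, 3.0]), sparsity=0.5)
```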
no code implementations • 12 Apr 2024 • Jie Wang, Jun Ai, Minyan Lu, Haoran Su, Dan Yu, Yutao Zhang, Junda Zhu, Jingyu Liu
We investigate the perturbation metrics and range representations used to measure the degree of perturbations on images, as well as the robustness metrics specifically for the robustness conditions of classification models.
no code implementations • 18 Mar 2024 • Rao Fu, Jingyu Liu, Xilun Chen, Yixin Nie, Wenhan Xiong
This paper introduces Scene-LLM, a 3D-visual-language model that enhances embodied agents' abilities in interactive 3D indoor environments by integrating the reasoning strengths of Large Language Models (LLMs).
no code implementations • 19 Jan 2024 • Xiaoting Wang, Jingyu Liu, Xiaozhe Wang
This paper presents an adaptive stochastic spectral embedding (ASSE) method to solve the probabilistic AC optimal power flow (AC-OPF), a critical aspect of power system operation.
1 code implementation • 6 Nov 2023 • Bishal Thapaliya, Esra Akbas, Jiayu Chen, Raam Sapkota, Bhaskar Ray, Pranav Suresh, Vince Calhoun, Jingyu Liu
Resting-state functional magnetic resonance imaging (rsfMRI) is a powerful tool for investigating the relationship between brain function and cognitive processes, as it captures the functional organization of the brain without relying on a specific task or stimulus.
no code implementations • 23 Oct 2023 • Yulan Hu, Sheng Ouyang, Jingyu Liu, Ge Chen, Zhirui Yang, Junchen Wan, Fuzheng Zhang, Zhongyuan Wang, Yong Liu
Thus, we propose GraphRank, a simple yet efficient graph contrastive learning method that mitigates the problem of false negative samples by redefining, to a certain extent, the concept of negative samples.
1 code implementation • 6 Oct 2023 • Jingyu Liu, Huayi Tang, Yong Liu
Graph Contrastive Learning (GCL) aims to learn node representations by aligning positive pairs and separating negative ones.
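The align-positives/separate-negatives objective is commonly instantiated as an InfoNCE-style loss. A minimal NumPy sketch (the temperature `tau` and this exact loss form are generic GCL conventions, not necessarily this paper's formulation):

```python
import numpy as np

def infonce_loss(z1, z2, tau=0.5):
    """InfoNCE-style contrastive loss over two views of the same nodes.

    z1[i] and z2[i] are embeddings of node i under two augmentations;
    diagonal pairs are positives, all other pairs serve as negatives.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                            # pairwise cosine similarities
    logits = sim - sim.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))               # pull positives, push negatives
```

Minimizing this loss increases the diagonal (positive) similarities relative to every off-diagonal (negative) pair.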
2 code implementations • 27 Sep 2023 • Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, Madian Khabsa, Han Fang, Yashar Mehdad, Sharan Narang, Kshitiz Malik, Angela Fan, Shruti Bhosale, Sergey Edunov, Mike Lewis, Sinong Wang, Hao Ma
We also examine the impact of various design choices in the pretraining process, including the data mix and the training curriculum of sequence lengths -- our ablation experiments suggest that having abundant long texts in the pretrain dataset is not the key to achieving strong performance, and we empirically verify that long context continual pretraining is more efficient and similarly effective compared to pretraining from scratch with long sequences.
2 code implementations • 24 Aug 2023 • Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks.
Ranked #38 on Code Generation on MBPP (using extra training data)
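The infilling capability is driven by a fill-in-the-middle prompt format. A minimal sketch, assuming the `<PRE>`/`<SUF>`/`<MID>` sentinel spelling described in the Code Llama paper (exact formatting and spacing depend on the released tokenizer):

```python
def build_infill_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt: the model is asked to generate
    the code that belongs between `prefix` and `suffix`."""
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

prompt = build_infill_prompt(
    "def add(a, b):\n    return ",
    "\n\nprint(add(1, 2))",
)
```

The model's completion after `<MID>` is then spliced between the prefix and suffix.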
no code implementations • 23 May 2023 • Tsu-Jui Fu, Wenhan Xiong, Yixin Nie, Jingyu Liu, Barlas Oğuz, William Yang Wang
To address this T3H task, we propose Compositional Cross-modal Human (CCH).
Ranked #1 on Text-to-3D-Human Generation on SHHQ
no code implementations • 15 Apr 2023 • Riyasat Ohib, Bishal Thapaliya, Pratyush Gaggenapalli, Jingyu Liu, Vince Calhoun, Sergey Plis
Federated learning (FL) enables the training of a model leveraging decentralized data in client sites while preserving privacy by not collecting data.
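The privacy-preserving training loop described here is typified by the standard FedAvg aggregation scheme. A one-round sketch (generic FL, not this paper's specific algorithm): clients train locally, and only model parameters, never raw data, reach the server:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg aggregation round: the server averages client model
    parameters weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two clients with equal amounts of data; the global model is their plain average.
global_model = fedavg([np.array([0.0, 2.0]), np.array([2.0, 4.0])], [1, 1])
```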
1 code implementation • 7 Mar 2023 • Jingyu Liu, Wenhan Xiong, Ian Jones, Yixin Nie, Anchit Gupta, Barlas Oğuz
Whether heuristic or learned, these methods ignore instance-level visual attributes of objects, and as a result may produce visually less coherent scenes.
no code implementations • 15 Sep 2022 • Yuda Bi, Anees Abrol, Zening Fu, Jiayu Chen, Jingyu Liu, Vince Calhoun
Prior work has demonstrated that deep learning models that take advantage of the data's 3D structure can outperform standard machine learning on several learning tasks.
no code implementations • 9 Jun 2022 • Jingyu Liu, Xiaoting Wang, Xiaozhe Wang
This paper proposes an adaptive sparse polynomial chaos expansion (PCE)-based method to quantify the impacts of uncertainties on critical clearing time (CCT), an important index in transient stability analysis.
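In standard PCE notation (a generic sketch, not the paper's exact formulation), the CCT, viewed as a function of the random inputs, is approximated by a truncated series of orthogonal polynomials:

```latex
\[
  \mathrm{CCT}(\boldsymbol{\xi}) \;\approx\; \sum_{\alpha \in \mathcal{A}} c_{\alpha}\, \Psi_{\alpha}(\boldsymbol{\xi}),
\]
% where \xi collects the uncertain inputs, \Psi_\alpha are orthogonal polynomial
% basis functions, and an adaptive scheme keeps only a small active set \mathcal{A}
% of multi-indices, yielding a sparse surrogate that is cheap to sample.
```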
no code implementations • 10 Nov 2021 • Zijian Gao, Jingyu Liu, Weiqi Sun, Sheng Chen, Dedan Chang, Lili Zhao
Modern video-text retrieval frameworks basically consist of three parts: video encoder, text encoder and the similarity head.
Ranked #13 on Video Retrieval on MSR-VTT-1kA (using extra training data)
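The three-part pipeline named above can be sketched with stub encoders and a cosine similarity head. The stubs are placeholders; real frameworks use learned video and text transformers:

```python
import numpy as np

def video_encoder(frames):
    """Stub video encoder: mean-pool per-frame features into one embedding."""
    return frames.mean(axis=0)

def text_encoder(tokens):
    """Stub text encoder: mean-pool token embeddings into one embedding."""
    return tokens.mean(axis=0)

def similarity_head(v, t):
    """Cosine-similarity head used to rank videos against a text query."""
    return float(v @ t / (np.linalg.norm(v) * np.linalg.norm(t)))

frames = np.array([[1.0, 0.0], [1.0, 0.0]])   # two "frames", 2-dim features
tokens = np.array([[2.0, 0.0]])               # one "token" embedding
score = similarity_head(video_encoder(frames), text_encoder(tokens))
```

At retrieval time the head scores every candidate video against the query and the ranking is read off the scores.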
no code implementations • 14 Oct 2021 • Zijian Gao, Huanyu Liu, Jingyu Liu
The current state-of-the-art methods for video corpus moment retrieval (VCMR) often use a similarity-based feature alignment approach for the sake of convenience and speed.
1 code implementation • 11 Oct 2021 • Kaihao Zhang, Dongxu Li, Wenhan Luo, Jingyu Liu, Jiankang Deng, Wei Liu, Stefanos Zafeiriou
It is thus unclear how these algorithms perform on public face hallucination datasets.
Ranked #1 on Image Super-Resolution on WLFW
1 code implementation • 21 Apr 2021 • Jie Lian, Jingyu Liu, Shu Zhang, Kai Gao, Xiaoqing Liu, Dingwen Zhang, Yizhou Yu
Leveraging constant structure and disease relations extracted from domain knowledge, we propose a structure-aware relation network (SAR-Net) that extends Mask R-CNN.
1 code implementation • 19 Oct 2020 • Jie Lian, Jingyu Liu, Yizhou Yu, Mengyuan Ding, Yaoci Lu, Yi Lu, Jie Cai, Deshou Lin, Miao Zhang, Zhe Wang, Kai He, Yijie Yu
The thoracic abnormality detection challenge was organized by the Deepwise AI Lab.
1 code implementation • 17 Jun 2020 • Jingyu Liu, Jie Lian, Yizhou Yu
Instance-level detection of thoracic diseases or abnormalities is crucial for automatic diagnosis in chest X-ray images.
no code implementations • 6 Jan 2020 • Haleh Falakshahi, Victor M. Vergara, Jingyu Liu, Daniel H. Mathalon, Judith M. Ford, James Voyvodic, Bryon A. Mueller, Aysenil Belger, Sarah McEwen, Steven G. Potkin, Adrian Preda, Hooman Rokham, Jing Sui, Jessica A. Turner, Sergey Plis, Vince D. Calhoun
Through simulation and real data, we show our approach reveals important information about disease-related network disruptions that are missed with a focus on a single modality.
no code implementations • ICCV 2019 • Jingyu Liu, Gangming Zhao, Yu Fei, Ming Zhang, Yizhou Wang, Yizhou Yu
We show that the use of contrastive attention and alignment module allows the model to learn rich identification and localization information using only a small amount of location annotations, resulting in state-of-the-art performance in NIH chest X-ray dataset.
no code implementations • 12 Feb 2019 • Dongliang Xu, Bailing Wang, XiaoJiang Du, Xiaoyan Zhu, Zhitao Guan, Xiaoyan Yu, Jingyu Liu
However, the advantages of convolutional neural networks depend on the data used to train the classifier, particularly the size of the training set.
no code implementations • ICCV 2017 • Jingyu Liu, Liang Wang, Ming-Hsuan Yang
In this paper, we explore the role of attributes by incorporating them into both referring expression generation and comprehension.