Search Results for author: Jingyu Liu

Found 46 papers, 14 papers with code

HAMburger: Accelerating LLM Inference via Token Smashing

no code implementations26 May 2025 Jingyu Liu, Ce Zhang

The growing demand for efficient Large Language Model (LLM) inference requires holistic optimization across algorithms, systems, and hardware.

Large Language Model

Minimax Rate-Optimal Algorithms for High-Dimensional Stochastic Linear Bandits

no code implementations23 May 2025 Jingyu Liu, Yanglei Song

We study the stochastic linear bandit problem with multiple arms over $T$ rounds, where the covariate dimension $d$ may exceed $T$, but each arm-specific parameter vector is $s$-sparse.
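This setting can be written out with a standard formalization of the sparse linear bandit model (a common textbook form, not taken from the paper itself): at each round $t$ the learner pulls an arm $a_t$ and observes a noisy linear reward,

$$ y_t = x_t^\top \theta_{a_t} + \varepsilon_t, \qquad \|\theta_a\|_0 \le s \quad \text{for every arm } a, $$

where $x_t \in \mathbb{R}^d$ is the covariate (possibly with $d > T$), $\theta_a$ is the arm-specific parameter vector, and $\varepsilon_t$ is zero-mean noise.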

GSFF-SLAM: 3D Semantic Gaussian Splatting SLAM via Feature Field

no code implementations28 Apr 2025 Zuxing Lu, Xin Yuan, ShaoWen Yang, Jingyu Liu, Changyin Sun

Semantic-aware 3D scene reconstruction is essential for autonomous robots to perform complex interactions.

3D Scene Reconstruction Pose Tracking +2

PGAD: Prototype-Guided Adaptive Distillation for Multi-Modal Learning in AD Diagnosis

no code implementations5 Mar 2025 Yanfei Li, Teng Yin, Wenyi Shang, Jingyu Liu, Xi Wang, Kaiyang Zhao

To address this, we propose a Prototype-Guided Adaptive Distillation (PGAD) framework that directly incorporates incomplete multi-modal data into training.

Transfer Learning

Efficient Sampling and Sensitivity Analysis of Rare Transient Instability Events via Subset Simulation

no code implementations4 Mar 2025 Jingyu Liu, Xiaoting Wang, Xiaozhe Wang

Assessing the risk of low-probability high-impact transient instability (TI) events is crucial for ensuring robust and stable power system operation under high uncertainty.

Optimizing Multi-Hop Document Retrieval Through Intermediate Representations

no code implementations2 Mar 2025 Jiaen Lin, Jingyu Liu

This observation suggests that the representations in intermediate layers contain richer information compared to those in other layers.

Multi-hop Question Answering Question Answering +3

Speculative Prefill: Turbocharging TTFT with Lightweight and Training-Free Token Importance Estimation

1 code implementation5 Feb 2025 Jingyu Liu, Beidi Chen, Ce Zhang

In this work, we present SpecPrefill, a training-free framework that accelerates inference TTFT for both long- and medium-context queries based on the following insight: LLMs are generalized enough to preserve quality given only a carefully chosen subset of prompt tokens.
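The core mechanism described in this snippet, keeping only a high-importance subset of prompt tokens before prefill, can be illustrated with a minimal sketch. Note that the scoring function is a stand-in: SpecPrefill's actual importance estimator is not reproduced here, and `prune_prompt` is a hypothetical helper name.

```python
def prune_prompt(token_ids, importance, keep_ratio=0.25):
    """Keep the top keep_ratio fraction of prompt tokens by importance,
    preserving their original order in the prompt."""
    k = max(1, int(len(token_ids) * keep_ratio))
    # indices of the k highest-scoring tokens
    top = sorted(range(len(importance)), key=importance.__getitem__, reverse=True)[:k]
    # re-sort indices so surviving tokens keep their original order
    return [token_ids[i] for i in sorted(top)]

# Toy example: 8 token ids with scores from some hypothetical importance estimator.
tokens = [101, 7592, 2088, 999, 2023, 2003, 1037, 102]
scores = [0.9, 0.1, 0.2, 0.05, 0.8, 0.15, 0.1, 0.95]
print(prune_prompt(tokens, scores, keep_ratio=0.5))  # half the tokens survive, order preserved
```

The shortened prompt is then what the model actually prefills over, which is where the TTFT savings would come from.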

Benchmarking Large Language Model

Self-Clustering Graph Transformer Approach to Model Resting-State Functional Brain Activity

no code implementations17 Jan 2025 Bishal Thapaliya, Esra Akbas, Ram Sapkota, Bhaskar Ray, Vince Calhoun, Jingyu Liu

Resting-state functional magnetic resonance imaging (rs-fMRI) offers valuable insights into the human brain's functional organization and is a powerful tool for investigating the relationship between brain function and cognitive processes, as it captures this organization without relying on a specific task or stimulus.

Functional Connectivity Gender Classification

Correlation of Correlation Networks: High-Order Interactions in the Topology of Brain Networks

no code implementations1 Nov 2024 Qiang Li, Jingyu Liu, Vince D. Calhoun

Moreover, after applying topological network analysis to the correlation of correlation networks, we observed that some high-order interaction hubs predominantly occurred in primary and high-level cognitive areas, such as the visual and fronto-parietal regions.

ECGN: A Cluster-Aware Approach to Graph Neural Networks for Imbalanced Classification

1 code implementation15 Oct 2024 Bishal Thapaliya, Anh Nguyen, Yao Lu, Tian Xie, Igor Grudetskyi, Fudong Lin, Antonios Valkanas, Jingyu Liu, Deepayan Chakraborty, Bilel Fehri

We propose the Enhanced Cluster-aware Graph Network (ECGN), a novel method that addresses these issues by integrating cluster-specific training with synthetic node generation.

imbalanced classification

EITNet: An IoT-Enhanced Framework for Real-Time Basketball Action Recognition

no code implementations13 Oct 2024 Jingyu Liu, Xinyu Liu, Mingzhe Qu, Tianyi Lyu

To overcome these challenges, we propose the EITNet model, a deep learning framework that combines EfficientDet for object detection, I3D for spatiotemporal feature extraction, and TimeSformer for temporal analysis, all integrated with IoT technology for seamless real-time data collection and processing.

Action Recognition object-detection +2

TRACE: Temporal Grounding Video LLM via Causal Event Modeling

1 code implementation8 Oct 2024 Yongxin Guo, Jingyu Liu, Mingda Li, Xiaoying Tang, Qingbin Liu, Xi Chen

To effectively handle various tasks simultaneously and enable zero-shot prediction, there is a growing trend in employing video LLMs for VTG tasks.

Text Generation Video Understanding

PalmBench: A Comprehensive Benchmark of Compressed Large Language Models on Mobile Platforms

no code implementations5 Oct 2024 Yilong Li, Jingyu Liu, Hao Zhang, M Badri Narayanan, Utkarsh Sharma, Shuai Zhang, Pan Hu, Yijing Zeng, Jayaram Raghuram, Suman Banerjee

Deploying large language models (LLMs) locally on mobile devices is advantageous in scenarios where transmitting data to remote cloud servers is either undesirable due to privacy concerns or impractical due to unreliable network connections.

Benchmarking Quantization

How Much Can RAG Help the Reasoning of LLM?

no code implementations3 Oct 2024 Jingyu Liu, Jiaen Lin, Yong Liu

In this paper, we investigate this issue in depth and find that while RAG can assist with reasoning, the help is limited.

RAG Retrieval-augmented Generation

Enhancing Long Video Understanding via Hierarchical Event-Based Memory

no code implementations10 Sep 2024 Dingxin Cheng, Mingda Li, Jingyu Liu, Yongxin Guo, Bin Jiang, Qingbin Liu, Xi Chen, Bo Zhao

While this method excels in short video understanding, its coarse compression may blend information from multiple events in long videos, causing information redundancy.

Video Understanding

Comparative Analysis of Learning-Based Methods for Transient Stability Assessment

no code implementations3 Sep 2024 Xingjian Wu, Xiaoting Wang, Xiaozhe Wang, Peter E. Caines, Jingyu Liu

Transient stability and critical clearing time (CCT) are important concepts in power system protection and control.

feature selection

D&M: Enriching E-commerce Videos with Sound Effects by Key Moment Detection and SFX Matching

no code implementations23 Aug 2024 Jingyu Liu, Minquan Wang, Ye Ma, Bo Wang, Aozhu Chen, Quan Chen, Peng Jiang, Xirong Li

Previous studies on adding SFX to videos perform video-to-SFX matching at a holistic level and lack the ability to add SFX at a specific moment.

Highlight Detection Moment Retrieval

Dynamic and Compressive Adaptation of Transformers From Images to Videos

no code implementations13 Aug 2024 Guozhen Zhang, Jingyu Liu, Shengming Cao, Xiaotong Zhao, Kevin Zhao, Kai Ma, LiMin Wang

On Kinetics-400, InTI reaches a top-1 accuracy of 87.1 with a remarkable 37.5% reduction in GFLOPs compared to naive adaptation.

Image-text matching Text Matching

VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding

1 code implementation22 May 2024 Yongxin Guo, Jingyu Liu, Mingda Li, Dingxin Cheng, Xiaoying Tang, Dianbo Sui, Qingbin Liu, Xi Chen, Kevin Zhao

Video Temporal Grounding (VTG) strives to accurately pinpoint event timestamps in a specific video using linguistic queries, significantly impacting downstream tasks like video browsing and editing.

Dense Video Captioning Highlight Detection +2

DSAM: A Deep Learning Framework for Analyzing Temporal and Spatial Dynamics in Brain Networks

no code implementations19 May 2024 Bishal Thapaliya, Robyn Miller, Jiayu Chen, Yu-Ping Wang, Esra Akbas, Ram Sapkota, Bhaskar Ray, Pranav Suresh, Santosh Ghimire, Vince Calhoun, Jingyu Liu

Resting-state functional magnetic resonance imaging (rs-fMRI) is a noninvasive technique pivotal for understanding human neural mechanisms of intricate cognitive processes.

Functional Connectivity Graph Neural Network

How Far Are We From AGI: Are LLMs All We Need?

1 code implementation16 May 2024 Tao Feng, Chuanyang Jin, Jingyu Liu, Kunlun Zhu, Haoqin Tu, Zirui Cheng, GuanYu Lin, Jiaxuan You

The evolution of artificial intelligence (AI) has profoundly impacted human society, driving significant advancements in multiple sectors.

All

Unmasking Efficiency: Learning Salient Sparse Models in Non-IID Federated Learning

no code implementations15 May 2024 Riyasat Ohib, Bishal Thapaliya, Gintare Karolina Dziugaite, Jingyu Liu, Vince Calhoun, Sergey Plis

In this work, we propose Salient Sparse Federated Learning (SSFL), a streamlined approach for sparse federated learning with efficient communication.

Federated Learning

A Survey of Neural Network Robustness Assessment in Image Recognition

no code implementations12 Apr 2024 Jie Wang, Jun Ai, Minyan Lu, Haoran Su, Dan Yu, Yutao Zhang, Junda Zhu, Jingyu Liu

We investigate the perturbation metrics and range representations used to measure the degree of perturbations on images, as well as the robustness metrics specifically for the robustness conditions of classification models.

Adversarial Robustness image-classification +2

Scene-LLM: Extending Language Model for 3D Visual Understanding and Reasoning

no code implementations18 Mar 2024 Rao Fu, Jingyu Liu, Xilun Chen, Yixin Nie, Wenhan Xiong

This paper introduces Scene-LLM, a 3D-visual-language model that enhances embodied agents' abilities in interactive 3D indoor environments by integrating the reasoning strengths of Large Language Models (LLMs).

3D Question Answering (3D-QA) Dense Captioning +3

Efficient Probabilistic Optimal Power Flow Assessment Using an Adaptive Stochastic Spectral Embedding Surrogate Model

no code implementations19 Jan 2024 Xiaoting Wang, Jingyu Liu, Xiaozhe Wang

This paper presents an adaptive stochastic spectral embedding (ASSE) method to solve the probabilistic AC optimal power flow (AC-OPF), a critical aspect of power system operation.

Brain Networks and Intelligence: A Graph Neural Network Based Approach to Resting State fMRI Data

1 code implementation6 Nov 2023 Bishal Thapaliya, Esra Akbas, Jiayu Chen, Raam Sapkota, Bhaskar Ray, Pranav Suresh, Vince Calhoun, Jingyu Liu

Resting-state functional magnetic resonance imaging (rsfMRI) is a powerful tool for investigating the relationship between brain function and cognitive processes, as it allows the functional organization of the brain to be captured without relying on a specific task or stimulus.

Graph Neural Network

Graph Ranking Contrastive Learning: An Extremely Simple yet Efficient Method

no code implementations23 Oct 2023 Yulan Hu, Sheng Ouyang, Jingyu Liu, Ge Chen, Zhirui Yang, Junchen Wan, Fuzheng Zhang, Zhongyuan Wang, Yong Liu

Thus, we propose GraphRank, a simple yet efficient graph contrastive learning method that mitigates the problem of false negative samples by redefining, to a certain extent, the concept of negative samples.

Contrastive Learning Graph Learning +1

Perfect Alignment May be Poisonous to Graph Contrastive Learning

1 code implementation6 Oct 2023 Jingyu Liu, Huayi Tang, Yong Liu

Graph Contrastive Learning (GCL) aims to learn node representations by aligning positive pairs and separating negative ones.

Contrastive Learning

Effective Long-Context Scaling of Foundation Models

2 code implementations27 Sep 2023 Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, Madian Khabsa, Han Fang, Yashar Mehdad, Sharan Narang, Kshitiz Malik, Angela Fan, Shruti Bhosale, Sergey Edunov, Mike Lewis, Sinong Wang, Hao Ma

We also examine the impact of various design choices in the pretraining process, including the data mix and the training curriculum of sequence lengths -- our ablation experiments suggest that having abundant long texts in the pretrain dataset is not the key to achieving strong performance, and we empirically verify that long context continual pretraining is more efficient and similarly effective compared to pretraining from scratch with long sequences.

Continual Pretraining Language Modeling +1

Code Llama: Open Foundation Models for Code

2 code implementations24 Aug 2023 Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve

We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks.

Ranked #38 on Code Generation on MBPP (using extra training data)

16k Code Generation +3

SalientGrads: Sparse Models for Communication Efficient and Data Aware Distributed Federated Training

no code implementations15 Apr 2023 Riyasat Ohib, Bishal Thapaliya, Pratyush Gaggenapalli, Jingyu Liu, Vince Calhoun, Sergey Plis

Federated learning (FL) enables the training of a model leveraging decentralized data in client sites while preserving privacy by not collecting data.

Federated Learning

CLIP-Layout: Style-Consistent Indoor Scene Synthesis with Semantic Furniture Embedding

1 code implementation7 Mar 2023 Jingyu Liu, Wenhan Xiong, Ian Jones, Yixin Nie, Anchit Gupta, Barlas Oğuz

Whether heuristic or learned, these methods ignore instance-level visual attributes of objects, and as a result may produce visually less coherent scenes.

Indoor Scene Synthesis Scene Generation

Prediction of Gender from Longitudinal MRI data via Deep Learning on Adolescent Data Reveals Unique Patterns Associated with Brain Structure and Change over a Two-year Period

no code implementations15 Sep 2022 Yuda Bi, Anees Abrol, Zening Fu, Jiayu Chen, Jingyu Liu, Vince Calhoun

Prior work has demonstrated that deep learning models that take advantage of the data's 3D structure can outperform standard machine learning on several learning tasks.

Gender Prediction

A Sparse Polynomial Chaos Expansion-Based Method for Probabilistic Transient Stability Assessment and Enhancement

no code implementations9 Jun 2022 Jingyu Liu, Xiaoting Wang, Xiaozhe Wang

This paper proposes an adaptive sparse polynomial chaos expansion (PCE)-based method to quantify the impacts of uncertainties on critical clearing time (CCT), an important index in transient stability analysis.

CLIP2TV: Align, Match and Distill for Video-Text Retrieval

no code implementations10 Nov 2021 Zijian Gao, Jingyu Liu, Weiqi Sun, Sheng Chen, Dedan Chang, Lili Zhao

Modern video-text retrieval frameworks basically consist of three parts: video encoder, text encoder and the similarity head.

Ranked #13 on Video Retrieval on MSR-VTT-1kA (using extra training data)

Representation Learning Text Retrieval +1

Coarse to Fine: Video Retrieval before Moment Localization

no code implementations14 Oct 2021 Zijian Gao, Huanyu Liu, Jingyu Liu

The current state-of-the-art methods for video corpus moment retrieval (VCMR) often use a similarity-based feature alignment approach for the sake of convenience and speed.

Moment Retrieval Retrieval +2

A Structure-Aware Relation Network for Thoracic Diseases Detection and Segmentation

1 code implementation21 Apr 2021 Jie Lian, Jingyu Liu, Shu Zhang, Kai Gao, Xiaoqing Liu, Dingwen Zhang, Yizhou Yu

Leveraging constant structure and disease relations extracted from domain knowledge, we propose a structure-aware relation network (SAR-Net) extending Mask R-CNN.

Instance Segmentation Object Detection +2

ChestX-Det10: Chest X-ray Dataset on Detection of Thoracic Abnormalities

1 code implementation17 Jun 2020 Jingyu Liu, Jie Lian, Yizhou Yu

Instance-level detection of thoracic diseases or abnormalities is crucial for automatic diagnosis in chest X-ray images.

Classification General Classification

Align, Attend and Locate: Chest X-Ray Diagnosis via Contrast Induced Attention Network With Limited Supervision

no code implementations ICCV 2019 Jingyu Liu, Gangming Zhao, Yu Fei, Ming Zhang, Yizhou Wang, Yizhou Yu

We show that the use of contrastive attention and alignment module allows the model to learn rich identification and localization information using only a small amount of location annotations, resulting in state-of-the-art performance in NIH chest X-ray dataset.

Contrastive Learning

Verification Code Recognition Based on Active and Deep Learning

no code implementations12 Feb 2019 Dongliang Xu, Bailing Wang, XiaoJiang Du, Xiaoyan Zhu, Zhitao Guan, Xiaoyan Yu, Jingyu Liu

However, the advantages of convolutional neural networks depend on the data used to train the classifier, particularly the size of the training set.

Deep Learning

Referring Expression Generation and Comprehension via Attributes

no code implementations ICCV 2017 Jingyu Liu, Liang Wang, Ming-Hsuan Yang

In this paper, we explore the role of attributes by incorporating them into both referring expression generation and comprehension.

Attribute Referring Expression +1
