Search Results for author: Jingyu Liu

Found 20 papers, 7 papers with code

Brain Networks and Intelligence: A Graph Neural Network Based Approach to Resting State fMRI Data

1 code implementation • 6 Nov 2023 • Bishal Thapaliya, Esra Akbas, Jiayu Chen, Raam Sapkota, Bhaskar Ray, Pranav Suresh, Vince Calhoun, Jingyu Liu

Resting-state functional magnetic resonance imaging (rsfMRI) is a powerful tool for investigating the relationship between brain function and cognitive processes as it allows for the functional organization of the brain to be captured without relying on a specific task or stimuli.

Do We Really Need Contrastive Learning for Graph Representation?

no code implementations • 23 Oct 2023 • Yulan Hu, Sheng Ouyang, Jingyu Liu, Ge Chen, Zhirui Yang, Junchen Wan, Fuzheng Zhang, Zhongyuan Wang, Yong liu

In recent years, contrastive learning has emerged as a dominant self-supervised paradigm, attracting numerous research interests in the field of graph learning.

Contrastive Learning • Graph Learning

Perfect Alignment May be Poisonous to Graph Contrastive Learning

no code implementations • 6 Oct 2023 • Jingyu Liu, Huayi Tang, Yong liu

Graph Contrastive Learning (GCL) aims to learn node representations by aligning positive pairs and separating negative ones.
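
The align-positives/separate-negatives objective described here is typically an InfoNCE-style contrastive loss. A minimal numpy sketch (an illustration of the general objective, not the paper's exact formulation; all names are hypothetical):

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.5):
    """Toy InfoNCE-style contrastive loss: pull the positive pair
    together, push negatives away (cosine similarity, temperature tau)."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(anchor, positive) / tau)
    neg = sum(np.exp(cos(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(0)
z = rng.normal(size=8)
negs = [rng.normal(size=8) for _ in range(5)]
loss_aligned = info_nce(z, z, negs)                    # perfectly aligned positive
loss_random = info_nce(z, rng.normal(size=8), negs)    # uncorrelated "positive"
```

A perfectly aligned positive pair yields a lower loss than a random one, which is exactly the alignment property the paper argues can become poisonous when taken to the extreme.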

Contrastive Learning

Effective Long-Context Scaling of Foundation Models

1 code implementation • 27 Sep 2023 • Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, Madian Khabsa, Han Fang, Yashar Mehdad, Sharan Narang, Kshitiz Malik, Angela Fan, Shruti Bhosale, Sergey Edunov, Mike Lewis, Sinong Wang, Hao Ma

We also examine the impact of various design choices in the pretraining process, including the data mix and the training curriculum of sequence lengths -- our ablation experiments suggest that having abundant long texts in the pretrain dataset is not the key to achieving strong performance, and we empirically verify that long context continual pretraining is more efficient and similarly effective compared to pretraining from scratch with long sequences.

Continual Pretraining • Language Modelling

Code Llama: Open Foundation Models for Code

2 code implementations • 24 Aug 2023 • Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve

We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks.

Code Generation • Instruction Following

SalientGrads: Sparse Models for Communication Efficient and Data Aware Distributed Federated Training

no code implementations • 15 Apr 2023 • Riyasat Ohib, Bishal Thapaliya, Pratyush Gaggenapalli, Jingyu Liu, Vince Calhoun, Sergey Plis

Federated learning (FL) enables the training of a model leveraging decentralized data in client sites while preserving privacy by not collecting data.
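
The paper's contribution is the sparse, salient-gradient model exchange; the federated aggregation step that FL builds on can be sketched as plain FedAvg (an assumed baseline, not SalientGrads itself):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg round: clients train locally and send only model
    weights; the server averages them weighted by local dataset size,
    so raw data never leaves the client sites."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# three clients with different local models and dataset sizes
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 10, 20]
global_model = fedavg(updates, sizes)  # size-weighted average of updates
```

Communication-efficient variants such as the one proposed here reduce the cost of this exchange by transmitting sparse subsets of the weights instead of the full vectors.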

Federated Learning

CLIP-Layout: Style-Consistent Indoor Scene Synthesis with Semantic Furniture Embedding

no code implementations • 7 Mar 2023 • Jingyu Liu, Wenhan Xiong, Ian Jones, Yixin Nie, Anchit Gupta, Barlas Oğuz

Whether heuristic or learned, these methods ignore instance-level visual attributes of objects, and as a result may produce visually less coherent scenes.

Indoor Scene Synthesis • Scene Generation

Prediction of Gender from Longitudinal MRI data via Deep Learning on Adolescent Data Reveals Unique Patterns Associated with Brain Structure and Change over a Two-year Period

no code implementations • 15 Sep 2022 • Yuda Bi, Anees Abrol, Zening Fu, Jiayu Chen, Jingyu Liu, Vince Calhoun

Prior work has demonstrated that deep learning models that take advantage of the data's 3D structure can outperform standard machine learning on several learning tasks.

Gender Prediction

A Sparse Polynomial Chaos Expansion-Based Method for Probabilistic Transient Stability Assessment and Enhancement

no code implementations • 9 Jun 2022 • Jingyu Liu, Xiaoting Wang, Xiaozhe Wang

This paper proposes an adaptive sparse polynomial chaos expansion (PCE)-based method to quantify the impacts of uncertainties on the critical clearing time (CCT), an important index in transient stability analysis.
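
The core idea of PCE is to expand an uncertain output in orthogonal polynomials of the random inputs, after which statistics follow analytically from the coefficients. A minimal non-sparse sketch for a single Gaussian input (the paper's adaptive sparse basis selection is not shown, and the toy function stands in for the actual stability simulation):

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# PCE of y = f(x) with x ~ N(0, 1), using probabilists' Hermite
# polynomials He_k as the orthogonal basis.
rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
y = x**2                     # stand-in for an expensive CCT computation

degree = 3
V = hermevander(x, degree)   # design matrix with columns He_0(x)..He_3(x)
coef, *_ = np.linalg.lstsq(V, y, rcond=None)

# Moments follow analytically from the PCE coefficients:
mean = coef[0]               # E[y] = c_0
var = sum(coef[k]**2 * math.factorial(k) for k in range(1, degree + 1))
# here y = He_0(x) + He_2(x) exactly, so mean ~ 1 and var ~ 2
```

A sparse PCE keeps only the few basis terms with significant coefficients, which is what makes the approach tractable when the number of uncertain inputs grows.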

CLIP2TV: Align, Match and Distill for Video-Text Retrieval

no code implementations • 10 Nov 2021 • Zijian Gao, Jingyu Liu, Weiqi Sun, Sheng Chen, Dedan Chang, Lili Zhao

Modern video-text retrieval frameworks basically consist of three parts: a video encoder, a text encoder, and a similarity head.
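
The similarity head in such frameworks typically ranks candidates by cosine similarity between the two encoders' embeddings. A minimal sketch with hypothetical names and toy 2-D embeddings (not CLIP2TV's actual head):

```python
import numpy as np

def rank_videos(text_emb, video_embs):
    """Rank videos for a text query by cosine similarity of embeddings."""
    t = text_emb / np.linalg.norm(text_emb)
    V = video_embs / np.linalg.norm(video_embs, axis=1, keepdims=True)
    scores = V @ t               # cosine similarity per video
    return np.argsort(-scores)   # indices of best matches first

videos = np.array([[1.0, 0.0],   # video 0
                   [0.0, 1.0],   # video 1
                   [0.7, 0.7]])  # video 2
query = np.array([0.9, 0.1])     # text embedding close to video 0
order = rank_videos(query, videos)
```

The paper's align/match/distill components refine the embeddings that feed this head rather than replace the similarity ranking itself.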

Ranked #10 on Video Retrieval on MSR-VTT-1kA (using extra training data)

Representation Learning • Retrieval +2

Coarse to Fine: Video Retrieval before Moment Localization

no code implementations • 14 Oct 2021 • Zijian Gao, Huanyu Liu, Jingyu Liu

The current state-of-the-art methods for video corpus moment retrieval (VCMR) often use a similarity-based feature alignment approach for the sake of convenience and speed.

Moment Retrieval • Retrieval +2

A Structure-Aware Relation Network for Thoracic Diseases Detection and Segmentation

1 code implementation • 21 Apr 2021 • Jie Lian, Jingyu Liu, Shu Zhang, Kai Gao, Xiaoqing Liu, Dingwen Zhang, Yizhou Yu

Leveraging constant structure and disease relations extracted from domain knowledge, we propose a structure-aware relation network (SAR-Net) that extends Mask R-CNN.

Instance Segmentation • Object Detection +1

ChestX-Det10: Chest X-ray Dataset on Detection of Thoracic Abnormalities

1 code implementation • 17 Jun 2020 • Jingyu Liu, Jie Lian, Yizhou Yu

Instance-level detection of thoracic diseases or abnormalities is crucial for automatic diagnosis in chest X-ray images.

Classification • General Classification

Align, Attend and Locate: Chest X-Ray Diagnosis via Contrast Induced Attention Network With Limited Supervision

no code implementations • ICCV 2019 • Jingyu Liu, Gangming Zhao, Yu Fei, Ming Zhang, Yizhou Wang, Yizhou Yu

We show that the use of contrastive attention and an alignment module allows the model to learn rich identification and localization information using only a small amount of location annotations, resulting in state-of-the-art performance on the NIH chest X-ray dataset.

Contrastive Learning

Verification Code Recognition Based on Active and Deep Learning

no code implementations • 12 Feb 2019 • Dongliang Xu, Bailing Wang, XiaoJiang Du, Xiaoyan Zhu, zhitao Guan, Xiaoyan Yu, Jingyu Liu

However, the advantages of convolutional neural networks depend on the data used by the training classifier, particularly the size of the training set.


Referring Expression Generation and Comprehension via Attributes

no code implementations • ICCV 2017 • Jingyu Liu, Liang Wang, Ming-Hsuan Yang

In this paper, we explore the role of attributes by incorporating them into both referring expression generation and comprehension.

Referring Expression • Referring Expression Generation
