1 code implementation • 6 Nov 2023 • Bishal Thapaliya, Esra Akbas, Jiayu Chen, Raam Sapkota, Bhaskar Ray, Pranav Suresh, Vince Calhoun, Jingyu Liu
Resting-state functional magnetic resonance imaging (rsfMRI) is a powerful tool for investigating the relationship between brain function and cognitive processes as it allows for the functional organization of the brain to be captured without relying on a specific task or stimuli.
no code implementations • 23 Oct 2023 • Yulan Hu, Sheng Ouyang, Jingyu Liu, Ge Chen, Zhirui Yang, Junchen Wan, Fuzheng Zhang, Zhongyuan Wang, Yong Liu
In recent years, contrastive learning has emerged as a dominant self-supervised paradigm, attracting numerous research interests in the field of graph learning.
no code implementations • 6 Oct 2023 • Jingyu Liu, Huayi Tang, Yong Liu
Graph Contrastive Learning (GCL) aims to learn node representations by aligning positive pairs and separating negative ones.
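The align-positives/separate-negatives objective described in this abstract can be illustrated with a minimal InfoNCE-style loss. This is a generic sketch of contrastive alignment, not the paper's exact GCL objective; the embeddings, temperature, and batch construction here are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.5):
    """InfoNCE-style contrastive loss: pull each anchor toward its
    matching positive and push it away from the other in-batch samples."""
    # L2-normalize embeddings so dot products are cosine similarities.
    anchors = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    positives = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    # Similarity of every anchor to every candidate (row i's positive is column i).
    logits = anchors @ positives.T / temperature
    # Cross-entropy with the matching index as the label.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
# Positives are lightly perturbed copies of the anchors, as in augmentation-based GCL.
loss = info_nce_loss(z1, z1 + 0.01 * rng.normal(size=(8, 16)))
```

With well-matched positives the loss is small; replacing the positives with unrelated random embeddings drives it toward log(batch size).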
1 code implementation • 27 Sep 2023 • Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, Madian Khabsa, Han Fang, Yashar Mehdad, Sharan Narang, Kshitiz Malik, Angela Fan, Shruti Bhosale, Sergey Edunov, Mike Lewis, Sinong Wang, Hao Ma
We also examine the impact of various design choices in the pretraining process, including the data mix and the training curriculum of sequence lengths. Our ablation experiments suggest that having abundant long texts in the pretraining dataset is not the key to achieving strong performance, and we empirically verify that long-context continual pretraining is more efficient than, and similarly effective to, pretraining from scratch with long sequences.
2 code implementations • 24 Aug 2023 • Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
We release Code Llama, a family of large language models for code based on Llama 2, providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction-following ability for programming tasks.
Ranked #13 on Code Generation on HumanEval
no code implementations • 23 May 2023 • Tsu-Jui Fu, Wenhan Xiong, Yixin Nie, Jingyu Liu, Barlas Oğuz, William Yang Wang
To address this T3H task, we propose Compositional Cross-modal Human (CCH).
Ranked #1 on Text-to-3D-Human Generation on SHHQ
no code implementations • 15 Apr 2023 • Riyasat Ohib, Bishal Thapaliya, Pratyush Gaggenapalli, Jingyu Liu, Vince Calhoun, Sergey Plis
Federated learning (FL) enables training a model on decentralized data at client sites while preserving privacy, since the raw data is never collected centrally.
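The FL setup described here can be sketched with one round of Federated Averaging, the standard baseline: each client trains locally and only model weights (never data) are sent to the server. This is a generic FedAvg sketch with a toy linear-regression objective, not the method of the paper above.

```python
import numpy as np

def fedavg_round(global_w, client_datasets, lr=0.1, local_steps=5):
    """One round of Federated Averaging: clients run local SGD on their
    own data, then the server averages the returned weight vectors."""
    updates, sizes = [], []
    for X, y in client_datasets:
        w = global_w.copy()
        for _ in range(local_steps):
            # Local gradient step on a least-squares objective.
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        updates.append(w)
        sizes.append(len(y))
    # Aggregate weights, weighting each client by its dataset size.
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Toy simulation: three clients share the same underlying linear model.
rng = np.random.default_rng(1)
true_w = np.array([1.0, -2.0])
clients = [(X := rng.normal(size=(20, 2)), X @ true_w) for _ in range(3)]
w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
```

After a few rounds the global weights approach the shared optimum, even though the server never sees any client's raw samples.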
no code implementations • 7 Mar 2023 • Jingyu Liu, Wenhan Xiong, Ian Jones, Yixin Nie, Anchit Gupta, Barlas Oğuz
Whether heuristic or learned, these methods ignore instance-level visual attributes of objects, and as a result may produce scenes that are less visually coherent.
no code implementations • 15 Sep 2022 • Yuda Bi, Anees Abrol, Zening Fu, Jiayu Chen, Jingyu Liu, Vince Calhoun
Prior work has demonstrated that deep learning models that take advantage of the data's 3D structure can outperform standard machine learning on several learning tasks.
no code implementations • 9 Jun 2022 • Jingyu Liu, Xiaoting Wang, Xiaozhe Wang
This paper proposes an adaptive sparse polynomial chaos expansion (PCE)-based method to quantify the impact of uncertainties on the critical clearing time (CCT), an important index in transient stability analysis.
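The core PCE idea can be illustrated with a minimal one-dimensional sketch: expand an output in Hermite polynomials of a Gaussian input, fit the coefficients by least squares, and read the mean and variance directly off the coefficients. The adaptive sparse multi-dimensional PCE of the paper is considerably more involved; this toy function and degree are illustrative assumptions.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

def fit_pce(samples, outputs, degree=4):
    """Fit a 1-D polynomial chaos expansion y ~ sum_k c_k He_k(x) by least
    squares, using probabilists' Hermite polynomials (orthogonal under N(0,1))."""
    # Design matrix: column k holds He_k evaluated at each sample.
    Psi = np.stack([hermeval(samples, np.eye(degree + 1)[k])
                    for k in range(degree + 1)], axis=1)
    coeffs, *_ = np.linalg.lstsq(Psi, outputs, rcond=None)
    # Orthogonality gives the output moments directly from the coefficients:
    # E[y] = c_0,  Var[y] = sum_{k>=1} k! * c_k^2.
    mean = coeffs[0]
    variance = sum(factorial(k) * coeffs[k] ** 2
                   for k in range(1, degree + 1))
    return coeffs, mean, variance

# Toy example: y = x^2 with x ~ N(0,1), so E[y] = 1 and Var[y] = 2.
rng = np.random.default_rng(0)
x = rng.normal(size=50)
coeffs, mean, variance = fit_pce(x, x**2)
```

Because y = x^2 is exactly He_0 + He_2, the fit recovers the moments to numerical precision; in general, accuracy depends on the truncation degree and the sample size.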
no code implementations • 10 Nov 2021 • Zijian Gao, Jingyu Liu, Weiqi Sun, Sheng Chen, Dedan Chang, Lili Zhao
Modern video-text retrieval frameworks typically consist of three parts: a video encoder, a text encoder, and a similarity head.
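The similarity head named above is commonly just cosine similarity over a shared embedding space. The sketch below assumes the two encoders have already projected videos and texts into that space; it is a generic retrieval head, not the specific architecture of the paper.

```python
import numpy as np

def similarity_head(video_emb, text_emb):
    """Score every (video, text) pair by cosine similarity in the shared
    embedding space produced by the two encoders."""
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    return v @ t.T  # sims[i, j]: similarity of video i and text j

def retrieve(sims):
    """For each text query (column), return video indices ranked best-first."""
    return np.argsort(-sims, axis=0)

# Toy check: paired captions point in the same direction as their videos,
# so top-1 retrieval should recover the identity matching.
rng = np.random.default_rng(2)
videos = rng.normal(size=(3, 8))
texts = 2.0 * videos  # same direction, different magnitude
ranking = retrieve(similarity_head(videos, texts))
```

Cosine similarity ignores magnitude, which is why the scaled copies still retrieve their own videos first.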
Ranked #10 on Video Retrieval on MSR-VTT-1kA (using extra training data)
no code implementations • 14 Oct 2021 • Zijian Gao, Huanyu Liu, Jingyu Liu
The current state-of-the-art methods for video corpus moment retrieval (VCMR) often use a similarity-based feature alignment approach for the sake of convenience and speed.
1 code implementation • 11 Oct 2021 • Kaihao Zhang, Dongxu Li, Wenhan Luo, Jingyu Liu, Jiankang Deng, Wei Liu, Stefanos Zafeiriou
It is thus unclear how these algorithms perform on public face hallucination datasets.
Ranked #1 on Image Super-Resolution on WLFW
1 code implementation • 21 Apr 2021 • Jie Lian, Jingyu Liu, Shu Zhang, Kai Gao, Xiaoqing Liu, Dingwen Zhang, Yizhou Yu
Leveraging the constant structure and disease relations extracted from domain knowledge, we propose a structure-aware relation network (SAR-Net) extending Mask R-CNN.
1 code implementation • 19 Oct 2020 • Jie Lian, Jingyu Liu, Yizhou Yu, Mengyuan Ding, Yaoci Lu, Yi Lu, Jie Cai, Deshou Lin, Miao Zhang, Zhe Wang, Kai He, Yijie Yu
The thoracic abnormality detection challenge is organized by the Deepwise AI Lab.
1 code implementation • 17 Jun 2020 • Jingyu Liu, Jie Lian, Yizhou Yu
Instance-level detection of thoracic diseases or abnormalities is crucial for automatic diagnosis from chest X-ray images.
no code implementations • 6 Jan 2020 • Haleh Falakshahi, Victor M. Vergara, Jingyu Liu, Daniel H. Mathalon, Judith M. Ford, James Voyvodic, Bryon A. Mueller, Aysenil Belger, Sarah McEwen, Steven G. Potkin, Adrian Preda, Hooman Rokham, Jing Sui, Jessica A. Turner, Sergey Plis, Vince D. Calhoun
Through simulation and real data, we show our approach reveals important information about disease-related network disruptions that are missed with a focus on a single modality.
no code implementations • ICCV 2019 • Jingyu Liu, Gangming Zhao, Yu Fei, Ming Zhang, Yizhou Wang, Yizhou Yu
We show that the use of the contrastive attention and alignment modules allows the model to learn rich identification and localization information using only a small amount of location annotations, resulting in state-of-the-art performance on the NIH chest X-ray dataset.
no code implementations • 12 Feb 2019 • Dongliang Xu, Bailing Wang, Xiaojiang Du, Xiaoyan Zhu, Zhitao Guan, Xiaoyan Yu, Jingyu Liu
However, the advantages of convolutional neural networks depend on the data used by the training classifier, particularly the size of the training set.
no code implementations • ICCV 2017 • Jingyu Liu, Liang Wang, Ming-Hsuan Yang
In this paper, we explore the role of attributes by incorporating them into both referring expression generation and comprehension.