no code implementations • NAACL (ACL) 2022 • Weiyi Lu, Sunny Rajagopalan, Priyanka Nigam, Jaspreet Singh, Xiaodi Sun, Yi Xu, Belinda Zeng, Trishul Chilimbi
However, one issue that often arises in MTL is that convergence speed varies across tasks due to differences in task difficulty, so it can be a challenge to simultaneously achieve the best performance on all tasks with a single model checkpoint.
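A toy numerical sketch of the problem described above (the tasks and validation curves are invented for illustration, not taken from the paper): when an easy task converges and then starts to overfit while a hard task is still improving, the per-task best epochs diverge, so no single checkpoint is optimal for both.

```python
# Hypothetical validation curves for two jointly trained tasks: the easy task
# peaks early and then degrades, while the hard task keeps improving, so the
# best checkpoint differs per task.
import numpy as np

epochs = np.arange(1, 21)
val_curves = {
    "easy_task": 0.90 - 0.10 * np.exp(-epochs / 3) - 0.004 * epochs,
    "hard_task": 0.85 - 0.30 * np.exp(-epochs / 10),
}
best_epochs = {task: int(epochs[np.argmax(curve)]) for task, curve in val_curves.items()}
print(best_epochs)  # the argmax epochs differ, so one checkpoint cannot be best for both tasks
```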
no code implementations • NAACL (maiworkshop) 2021 • Han Ding, Li Erran Li, Zhiting Hu, Yi Xu, Dilek Hakkani-Tur, Zheng Du, Belinda Zeng
Recent vision-language understanding approaches adopt a multi-modal transformer pre-training and fine-tuning paradigm.
no code implementations • 7 Oct 2023 • Zixuan Liu, Gaurush Hiranandani, Kun Qian, Eddie W. Huang, Yi Xu, Belinda Zeng, Karthik Subbian, Sheng Wang
ForeSeer transfers reviews from similar products on a large product graph and exploits these reviews to predict aspects that might emerge in future reviews.
no code implementations • 5 Jun 2023 • Han Xie, Da Zheng, Jun Ma, Houyu Zhang, Vassilis N. Ioannidis, Xiang Song, Qing Ping, Sheng Wang, Carl Yang, Yi Xu, Belinda Zeng, Trishul Chilimbi
Model pre-training on large text corpora has been shown to be effective for various downstream applications in the NLP domain.
no code implementations • CVPR 2023 • Qian Jiang, Changyou Chen, Han Zhao, Liqun Chen, Qing Ping, Son Dinh Tran, Yi Xu, Belinda Zeng, Trishul Chilimbi
Hence we advocate that the key to better performance lies in meaningful latent modality structures rather than perfect modality alignment.
Few-Shot Image Classification • Open-Ended Question Answering • +6 more
no code implementations • 22 Jun 2022 • Vassilis N. Ioannidis, Xiang Song, Da Zheng, Houyu Zhang, Jun Ma, Yi Xu, Belinda Zeng, Trishul Chilimbi, George Karypis
The effectiveness of our framework is achieved by applying stage-wise fine-tuning of the BERT model, first with heterogeneous graph information and then with a GNN model.
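A minimal sketch of the stage-wise recipe named above, with a toy text encoder standing in for BERT and a single mean-aggregation layer standing in for a full GNN; all modules, shapes, and hyperparameters are illustrative assumptions rather than the authors' implementation.

```python
# Stage-wise sketch: (1) fine-tune a text encoder on node text, (2) freeze it
# and train a GNN on top of the cached text embeddings plus graph structure.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyTextEncoder(nn.Module):
    """Stand-in for BERT: mean-pooled token embeddings + a linear head."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim)   # mean pooling by default
        self.head = nn.Linear(dim, dim)

    def forward(self, token_ids, offsets):
        return self.head(self.emb(token_ids, offsets))

class MeanGNNLayer(nn.Module):
    """One mean-aggregation message-passing layer over a dense adjacency."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(2 * dim, dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = (adj @ x) / deg                       # mean of neighbor features
        return torch.relu(self.lin(torch.cat([x, neigh], dim=1)))

# Fake data: 10 nodes, 5 tokens of text each, binary node labels.
token_ids = torch.randint(0, 1000, (50,))
offsets = torch.arange(0, 50, 5)
labels = torch.randint(0, 2, (10,))
adj = (torch.rand(10, 10) > 0.7).float()

# Stage 1: fine-tune the text encoder alone on the node-level task.
encoder, clf = ToyTextEncoder(), nn.Linear(64, 2)
opt1 = torch.optim.Adam(list(encoder.parameters()) + list(clf.parameters()), lr=1e-3)
for _ in range(3):
    loss = F.cross_entropy(clf(encoder(token_ids, offsets)), labels)
    opt1.zero_grad(); loss.backward(); opt1.step()

# Stage 2: freeze the encoder, cache its embeddings, and train the GNN on top.
with torch.no_grad():
    node_feats = encoder(token_ids, offsets)
gnn, gnn_clf = MeanGNNLayer(64), nn.Linear(64, 2)
opt2 = torch.optim.Adam(list(gnn.parameters()) + list(gnn_clf.parameters()), lr=1e-3)
for _ in range(3):
    loss = F.cross_entropy(gnn_clf(gnn(node_feats, adj)), labels)
    opt2.zero_grad(); loss.backward(); opt2.step()
```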
no code implementations • 7 Jun 2022 • Xiaodi Sun, Sunny Rajagopalan, Priyanka Nigam, Weiyi Lu, Yi Xu, Belinda Zeng, Trishul Chilimbi
In this paper, we propose an improvement to prompt-based fine-tuning that addresses these two issues.
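For context, a minimal sketch of standard prompt-based fine-tuning, the setup this paper improves on: a cloze template plus a verbalizer mapping label words to classes. The model name, template, and label words below are illustrative choices, not the paper's.

```python
# Vanilla prompt-based classification with a masked LM: wrap the input in a
# cloze template, then score the [MASK] position against the verbalizer words.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

verbalizer = {"great": 1, "terrible": 0}            # label word -> class id
label_word_ids = tokenizer.convert_tokens_to_ids(list(verbalizer))

def prompt_logits(text):
    prompt = f"{text} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    logits = model(**inputs).logits[0, mask_pos]
    return logits[label_word_ids]                   # scores for the label words only

print(prompt_logits("The movie was a delight to watch."))
# Prompt-based fine-tuning would backpropagate a cross-entropy loss on these logits.
```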
no code implementations • CVPR 2022 • Jiali Duan, Liqun Chen, Son Tran, Jinyu Yang, Yi Xu, Belinda Zeng, Trishul Chilimbi
Aligning signals from different modalities is an important step in vision-language representation learning as it affects the performance of later stages such as cross-modality fusion.
1 code implementation • CVPR 2022 • Jinyu Yang, Jiali Duan, Son Tran, Yi Xu, Sampath Chanda, Liqun Chen, Belinda Zeng, Trishul Chilimbi, Junzhou Huang
Besides CMA, TCL introduces an intra-modal contrastive objective to provide complementary benefits in representation learning.
Ranked #1 on Zero-Shot Cross-Modal Retrieval on COCO 2014
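A minimal sketch of the loss structure TCL is described as adding: a cross-modal alignment (CMA-style) contrastive term combined with intra-modal contrastive terms. The encoders are replaced by random features and the temperature is an assumed value, so this shows only the shape of the objective, not the TCL implementation.

```python
# Combine a cross-modal contrastive (alignment) loss with intra-modal
# contrastive losses over two augmented views per modality.
import torch
import torch.nn.functional as F

def info_nce(q, k, temperature=0.07):
    """InfoNCE with in-batch negatives: matching rows of q and k are positives."""
    q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
    logits = q @ k.t() / temperature
    targets = torch.arange(q.size(0))
    return F.cross_entropy(logits, targets)

# Pretend these came from an image encoder and a text encoder
# (two augmented views per modality), batch of 8, embedding dim 128.
img_a, img_b = torch.randn(8, 128), torch.randn(8, 128)
txt_a, txt_b = torch.randn(8, 128), torch.randn(8, 128)

cross_modal = (info_nce(img_a, txt_a) + info_nce(txt_a, img_a)) / 2   # CMA-style alignment
intra_modal = (info_nce(img_a, img_b) + info_nce(txt_a, txt_b)) / 2   # complementary intra-modal term
loss = cross_modal + intra_modal
print(float(loss))
```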
no code implementations • 30 Oct 2021 • Xuanli He, Iman Keivanloo, Yi Xu, Xiang He, Belinda Zeng, Santosh Rajagopalan, Trishul Chilimbi
To achieve this, we propose a novel idea, Magic Pyramid (MP), to reduce both width-wise and depth-wise computation via token pruning and early exiting for Transformer-based models, particularly BERT.
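A toy sketch of how the two speed-ups named above can coexist in one forward pass: prune low-importance tokens between layers (width-wise) and exit from an intermediate classifier once it is confident (depth-wise). The importance score (token norm), keep ratio, and confidence threshold are placeholders, not the MP paper's recipe.

```python
# Toy Transformer encoder with per-layer token pruning and early-exit heads.
import torch
import torch.nn as nn

dim, n_layers, n_classes = 64, 6, 2
layers = nn.ModuleList(
    [nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True) for _ in range(n_layers)]
)
exit_heads = nn.ModuleList([nn.Linear(dim, n_classes) for _ in range(n_layers)])

def forward_with_pruning_and_exit(x, keep_ratio=0.7, exit_confidence=0.95):
    # x: (1, seq_len, dim); token 0 plays the role of [CLS].
    for layer, head in zip(layers, exit_heads):
        x = layer(x)
        # Depth-wise: stop as soon as this layer's classifier is confident enough.
        probs = head(x[:, 0]).softmax(dim=-1)
        if probs.max() >= exit_confidence:
            break
        # Width-wise: keep [CLS] plus the highest-norm remaining tokens
        # (a stand-in for an attention-based importance score).
        scores = x[0, 1:].norm(dim=-1)
        k = max(1, int(keep_ratio * scores.numel()))
        keep = 1 + scores.topk(k).indices
        x = torch.cat([x[:, :1], x[:, keep]], dim=1)
    return probs, x.size(1)

probs, tokens_left = forward_with_pruning_and_exit(torch.randn(1, 32, dim))
print(probs, tokens_left)  # fewer tokens (and possibly fewer layers) than a full forward pass
```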
no code implementations • 24 Sep 2021 • Tarik Arici, Mehmet Saygin Seyfioglu, Tal Neiman, Yi Xu, Son Tran, Trishul Chilimbi, Belinda Zeng, Ismail Tutar
Vision-and-Language Pre-training (VLP) improves model performance for downstream tasks that require image and text inputs.
1 code implementation • 2 Jul 2021 • Junya Chen, Zhe Gan, Xuan Li, Qing Guo, Liqun Chen, Shuyang Gao, Tagyoung Chung, Yi Xu, Belinda Zeng, Wenlian Lu, Fan Li, Lawrence Carin, Chenyang Tao
InfoNCE-based contrastive representation learners, such as SimCLR, have been tremendously successful in recent years.