no code implementations • 24 Jun 2023 • Liunian Harold Li, Zi-Yi Dou, Nanyun Peng, Kai-Wei Chang
In fact, our experiments show that GLIP, the state-of-the-art vision-language model for object detection, often disregards contextual information in the language descriptions and instead detects objects largely by their names alone.
no code implementations • 24 Jun 2023 • Liunian Harold Li, Jack Hessel, Youngjae Yu, Xiang Ren, Kai-Wei Chang, Yejin Choi
We release our corpus of chain-of-thought samples and code.
no code implementations • 2 Jun 2023 • Masoud Monajatipoor, Liunian Harold Li, Mozhdeh Rouhsedaghat, Lin F. Yang, Kai-Wei Chang
In this paper, we study an interesting hypothesis: can we transfer the in-context learning ability from the language domain to the VL domain?
1 code implementation • 12 Jun 2022 • Haotian Zhang, Pengchuan Zhang, Xiaowei Hu, Yen-Chun Chen, Liunian Harold Li, Xiyang Dai, Lijuan Wang, Lu Yuan, Jenq-Neng Hwang, Jianfeng Gao
We present GLIPv2, a grounded VL understanding model that serves both localization tasks (e.g., object detection, instance segmentation) and Vision-Language (VL) understanding tasks (e.g., VQA, image captioning).
Ranked #1 on Phrase Grounding on Flickr30k Entities Test (using extra training data)
no code implementations • 25 May 2022 • Jingnong Qu, Liunian Harold Li, Jieyu Zhao, Sunipa Dev, Kai-Wei Chang
Disinformation has become a serious problem on social media.
1 code implementation • 24 May 2022 • Da Yin, Hritik Bansal, Masoud Monajatipoor, Liunian Harold Li, Kai-Wei Chang
In this paper, we introduce a benchmark dataset, Geo-Diverse Commonsense Multilingual Language Models Analysis (GeoMLAMA), for probing the diversity of relational knowledge in multilingual pre-trained language models (PLMs).
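This kind of relational-knowledge probing can be approximated with a simple masked-language-model query; the sketch below uses Hugging Face's fill-mask pipeline on multilingual BERT with an illustrative, hand-written prompt (the model choice and the prompt are assumptions and are not drawn from the GeoMLAMA benchmark itself).

```python
# Minimal sketch of relational-knowledge probing in a multilingual PLM.
# The prompt is illustrative only; GeoMLAMA defines its own
# country-specific prompts and gold answers.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-multilingual-cased")

# Ask the model to complete a culture-dependent relational fact.
prompt = ("In the United States, people usually drive on the "
          f"{fill.tokenizer.mask_token} side of the road.")
for candidate in fill(prompt, top_k=3):
    print(f"{candidate['token_str']:>12s}  score={candidate['score']:.3f}")
```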
1 code implementation • 23 May 2022 • Honghua Zhang, Liunian Harold Li, Tao Meng, Kai-Wei Chang, Guy Van den Broeck
Logical reasoning is needed in a wide range of NLP tasks.
8 code implementations • 19 Apr 2022 • Chunyuan Li, Haotian Liu, Liunian Harold Li, Pengchuan Zhang, Jyoti Aneja, Jianwei Yang, Ping Jin, Houdong Hu, Zicheng Liu, Yong Jae Lee, Jianfeng Gao
In general, these language-augmented visual models demonstrate strong transferability to a variety of datasets and tasks.
Ranked #1 on Zero-Shot Image Classification on ODinW
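As a rough illustration of the zero-shot transfer such language-augmented visual models enable, the sketch below scores an image against free-form class prompts with a publicly available CLIP checkpoint (the checkpoint, image path, and class names are illustrative assumptions, not the specific models or datasets evaluated in the paper).

```python
# Zero-shot image classification sketch with a CLIP-style model.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # any local image
class_names = ["a photo of a dog", "a photo of a cat", "a photo of a car"]

inputs = processor(text=class_names, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image: similarity of the image to each text prompt
probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
for name, p in zip(class_names, probs.tolist()):
    print(f"{p:.3f}  {name}")
```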
1 code implementation • CVPR 2022 • Yiwu Zhong, Jianwei Yang, Pengchuan Zhang, Chunyuan Li, Noel Codella, Liunian Harold Li, Luowei Zhou, Xiyang Dai, Lu Yuan, Yin Li, Jianfeng Gao
However, we show that directly applying such models to recognize image regions for object detection leads to poor performance due to a domain shift: CLIP was trained to match an image as a whole to a text description, without capturing the fine-grained alignment between image regions and text spans.
Ranked #5 on Open Vocabulary Object Detection on MSCOCO (using extra training data)
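The domain shift described above corresponds to the naive baseline of cropping region proposals and scoring each crop with an image-level CLIP model; the sketch below shows that baseline (the boxes, prompts, and checkpoint are illustrative assumptions), which is exactly the setting the paper's region-text pre-training is meant to improve on.

```python
# Naive region classification with whole-image CLIP: crop each candidate
# box and score the crop against category prompts. The paper's observation
# is that an image-level model transfers poorly to this region-level use.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")
boxes = [(30, 40, 200, 220), (250, 60, 420, 300)]   # hypothetical proposals (x0, y0, x1, y1)
prompts = ["a photo of a person", "a photo of a bicycle", "a photo of a dog"]

for box in boxes:
    crop = image.crop(box)
    inputs = processor(text=prompts, images=crop, return_tensors="pt", padding=True)
    with torch.no_grad():
        probs = model(**inputs).logits_per_image.softmax(dim=-1).squeeze(0)
    best = int(probs.argmax())
    print(box, prompts[best], round(float(probs[best]), 3))
```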
no code implementations • 16 Dec 2021 • Zhecan Wang, Haoxuan You, Liunian Harold Li, Alireza Zareian, Suji Park, Yiqing Liang, Kai-Wei Chang, Shih-Fu Chang
As for pre-training, a scene-graph-aware pre-training method is proposed to leverage the structural knowledge extracted from the visual scene graph.
1 code implementation • CVPR 2022 • Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, Kai-Wei Chang, Jianfeng Gao
The unification brings two benefits: 1) it allows GLIP to learn from both detection and grounding data, improving both tasks and bootstrapping a good grounding model; 2) GLIP can leverage massive image-text pairs by generating grounding boxes in a self-training fashion, making the learned representations semantically rich.
Ranked #1 on 2D Object Detection on RF100
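At the heart of this reformulation, detection classes become words in a text prompt and classification logits become word-region alignment scores; the sketch below is a schematic of that idea with random tensors standing in for real backbone outputs (the shapes, names, and sigmoid scoring are assumptions, not GLIP's actual implementation).

```python
# Schematic of detection-as-grounding alignment: detection classes are
# written into a caption, and classification logits are dot products
# between region features and token features over that caption.
import torch

num_regions, num_tokens, dim = 100, 8, 256
prompt = "person . bicycle . dog"              # class names concatenated into a caption

region_feats = torch.randn(num_regions, dim)   # stand-in for visual backbone + region heads
token_feats = torch.randn(num_tokens, dim)     # stand-in for a text encoder over the prompt

# Word-region alignment scores replace a fixed classification layer,
# so adding a class only means adding words to the prompt.
alignment_logits = region_feats @ token_feats.T   # (num_regions, num_tokens)
scores = alignment_logits.sigmoid()               # per-token grounding confidence
print(scores.shape)  # torch.Size([100, 8])
```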
1 code implementation • EMNLP 2021 • Da Yin, Liunian Harold Li, Ziniu Hu, Nanyun Peng, Kai-Wei Chang
Commonsense is defined as the knowledge that is shared by everyone.
Ranked #1 on Visual Commonsense Reasoning on GD-VCR
no code implementations • 10 Aug 2021 • Masoud Monajatipoor, Mozhdeh Rouhsedaghat, Liunian Harold Li, Aichi Chien, C. -C. Jay Kuo, Fabien Scalzo, Kai-Wei Chang
Vision-and-language (V&L) models take an image and text as input and learn to capture the associations between them.
4 code implementations • 13 Jul 2021 • Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, Kurt Keutzer
Most existing Vision-and-Language (V&L) models rely on pre-trained visual encoders, which use a relatively small set of manually annotated data (compared to web-crawled data) to perceive the visual world.
Ranked #4 on Vision and Language Navigation on RxR (using extra training data)
1 code implementation • NAACL 2021 • Liunian Harold Li, Haoxuan You, Zhecan Wang, Alireza Zareian, Shih-Fu Chang, Kai-Wei Chang
Pre-trained contextual vision-and-language (V&L) models have achieved impressive performance on various benchmarks.
no code implementations • ACL 2020 • Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang
Pre-trained visually grounded language models such as ViLBERT, LXMERT, and UNITER have achieved significant performance improvements on vision-and-language tasks, but what they learn during pre-training remains unclear.
6 code implementations • 9 Aug 2019 • Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang
We propose VisualBERT, a simple and flexible framework for modeling a broad range of vision-and-language tasks.
Ranked #1 on Visual Reasoning on NLVR
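In practice, the framework amounts to feeding text tokens and detector region features through a single Transformer; a minimal sketch using the Hugging Face port of VisualBERT is below (the checkpoint name and the random stand-in region features are assumptions; real region features would come from an object detector).

```python
# Minimal VisualBERT forward pass: text tokens plus region features
# processed jointly by one Transformer encoder.
import torch
from transformers import BertTokenizer, VisualBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = VisualBertModel.from_pretrained("uclanlp/visualbert-vqa-coco-pre")

inputs = tokenizer("A person rides a horse on the beach.", return_tensors="pt")

# Visual features would normally be 2048-d detector region features;
# random tensors stand in here for 36 regions.
visual_embeds = torch.randn(1, 36, 2048)
visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long)
visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)

with torch.no_grad():
    outputs = model(
        **inputs,
        visual_embeds=visual_embeds,
        visual_token_type_ids=visual_token_type_ids,
        visual_attention_mask=visual_attention_mask,
    )
print(outputs.last_hidden_state.shape)  # (1, num_text_tokens + 36, 768)
```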
no code implementations • TACL 2019 • Liunian Harold Li, Patrick H. Chen, Cho-Jui Hsieh, Kai-Wei Chang
Contextual representation models have achieved great success in improving various downstream natural language processing tasks.
no code implementations • 28 Feb 2019 • Liunian Harold Li, Patrick H. Chen, Cho-Jui Hsieh, Kai-Wei Chang
Our framework reduces the time spent on the output layer to a negligible level, eliminates almost all the trainable parameters of the softmax layer and performs language modeling without truncating the vocabulary.
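One way to picture such a softmax-free output layer is to predict a fixed pre-trained word embedding with a distance loss and decode by nearest-neighbor lookup over the full vocabulary; the sketch below follows that idea with illustrative dimensions and a cosine loss (both assumptions, not necessarily the paper's exact configuration).

```python
# Sketch of a softmax-free LM output layer: project the hidden state into
# the space of fixed pre-trained word embeddings and train to match the
# target word's embedding. Prediction is a nearest-neighbor lookup, so the
# vocabulary never needs truncating.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, embed_dim, hidden_dim = 50_000, 300, 512
word_embeddings = torch.randn(vocab_size, embed_dim)   # pre-trained, frozen
proj = nn.Linear(hidden_dim, embed_dim)                # the only trainable output parameters

hidden = torch.randn(8, hidden_dim)                    # LM hidden states for 8 positions
targets = torch.randint(0, vocab_size, (8,))           # gold next-word ids

pred = proj(hidden)
# Train by pulling predictions toward the gold word embeddings (cosine distance here).
loss = 1.0 - F.cosine_similarity(pred, word_embeddings[targets], dim=-1).mean()

# Inference: nearest neighbor in embedding space over the full vocabulary.
scores = F.normalize(pred, dim=-1) @ F.normalize(word_embeddings, dim=-1).T
predicted_ids = scores.argmax(dim=-1)
print(loss.item(), predicted_ids.shape)
```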