no code implementations • 25 Jul 2024 • Xinyu Pi, Mingyuan Wu, Jize Jiang, Haozhen Zheng, Beitong Tian, ChengXiang Zhai, Klara Nahrstedt, Zhiting Hu
Smaller-scale Vision-Language Models (VLMs) often claim to perform on par with larger models on general-domain visual grounding and question-answering benchmarks while offering advantages in computational efficiency and storage.
no code implementations • 19 Dec 2023 • Ziyi Chen, Jize Jiang, Daqian Zuo, Heyi Tao, Jun Yang, Yuxiang Wei
This results in high computational costs and limits the amount of retrieved text, hindering accuracy.
no code implementations • 1 Mar 2023 • Adam Davies, Jize Jiang, ChengXiang Zhai
Our framework, CALM (Competence-based Analysis of Language Models), is designed to investigate LLM competence on specific tasks by using causal probing to intervene on models' internal representations of different linguistic properties, and by measuring how well models' behavior under these interventions aligns with a given ground-truth causal model of the task.
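As a rough illustration of the causal-probing idea described above, the sketch below trains a linear probe for a linguistic property on (synthetic stand-in) hidden states, intervenes by projecting out the probe's direction, and measures how much a downstream readout changes. All names and the nullifying-projection intervention are hypothetical simplifications, not the authors' CALM implementation.

```python
# Minimal sketch of a causal-probing intervention (hypothetical names;
# not the CALM implementation). Learn a linear probe for a linguistic
# property, remove the probe's direction from the representations, and
# check how much a downstream prediction shifts.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for model hidden states (n examples, d dims) and a
# binary linguistic-property label (e.g., subject number).
n, d = 500, 64
H = rng.normal(size=(n, d))
property_direction = rng.normal(size=d)
y_prop = (H @ property_direction > 0).astype(int)

# 1) Causal probe: a linear classifier that reads the property from H.
probe = LogisticRegression(max_iter=1000).fit(H, y_prop)
w = probe.coef_[0] / np.linalg.norm(probe.coef_[0])

# 2) Intervention: project the probe direction out of each representation
#    (a simple nullifying intervention; actual interventions may differ).
H_intervened = H - np.outer(H @ w, w)

# 3) Measure the effect: how often does a fixed downstream "task head"
#    change its prediction once the property is removed?
task_head = rng.normal(size=d)
pred_before = (H @ task_head > 0).astype(int)
pred_after = (H_intervened @ task_head > 0).astype(int)
print("probe accuracy:", probe.score(H, y_prop))
print("fraction of task predictions flipped:", (pred_before != pred_after).mean())
```

In a competence-style analysis, the flip rate would then be compared against what a ground-truth causal model of the task predicts should happen when that property is ablated.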