1 code implementation • ICCV 2023 • Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, Kai-Wei Chang
Multimodal contrastive pretraining has been used to train multimodal representation models, such as CLIP, on large amounts of paired image-text data.
no code implementations • 6 Feb 2023 • Nishad Singhi, Florian Mohnert, Ben Prystawski, Falk Lieder
This creates an untapped opportunity to use cognitive models of bounded rationality to derive practical recommendations for which subgoals managers and individuals should set.