11 Mar 2024 • Yuki Tatsukawa, I-Chao Shen, Anran Qi, Yuki Koyama, Takeo Igarashi, Ariel Shamir
To solve this problem, we present FontCLIP: a model that connects the semantic understanding of a large vision-language model with typographical knowledge.