no code implementations • 7 Jun 2024 • Gyutae Park, Seojin Hwang, Hwanhee Lee
We outline directions for future work: exploring more effective few-shot learning strategies and investigating the transfer learning capabilities of LLMs for cross-lingual summarization.
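As a rough illustration of the kind of few-shot strategy mentioned above, the sketch below assembles a prompt that pairs English articles with Korean summaries before asking an LLM to summarize a new article. The languages, example pairs, and helper code are assumptions for illustration only, not the paper's actual setup.

```python
# Minimal sketch of few-shot prompting for cross-lingual summarization.
# The example pairs, language pair, and prompt wording are illustrative
# assumptions; they are not taken from the paper.

FEW_SHOT_EXAMPLES = [
    {
        "article": "The city council approved a new recycling program on Monday.",
        "summary_ko": "시 의회가 월요일에 새로운 재활용 프로그램을 승인했다.",
    },
    {
        "article": "Researchers released an open-source dataset of annotated street scenes.",
        "summary_ko": "연구진이 주석이 달린 거리 장면의 오픈소스 데이터셋을 공개했다.",
    },
]

def build_prompt(article: str) -> str:
    """Assemble a few-shot prompt: English articles paired with Korean summaries."""
    parts = ["Summarize each English article in one Korean sentence.\n"]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Article: {ex['article']}\nSummary (Korean): {ex['summary_ko']}\n")
    parts.append(f"Article: {article}\nSummary (Korean):")
    return "\n".join(parts)

if __name__ == "__main__":
    prompt = build_prompt("A new study links regular exercise to improved sleep quality.")
    print(prompt)  # feed this string to any instruction-tuned LLM of your choice
```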
1 code implementation • CVPR 2022 • Gyutae Park, Sungjoon Son, Jaeyoung Yoo, SeHo Kim, Nojun Kwak
In this paper, we propose a transformer-based image matting model called MatteFormer, which takes full advantage of trimap information in the transformer block (a minimal illustrative sketch follows below).
Ranked #5 on Image Matting on Composition-1K
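As a rough illustration of how trimap information can be injected into a transformer block, the sketch below pools one global "prior token" per trimap region (background, unknown, foreground) and lets the patch tokens attend to those tokens alongside one another. This is a minimal approximation under my own assumptions (layer names, dimensions, trimap encoding), not the authors' MatteFormer implementation.

```python
# Sketch: attention over patch tokens plus trimap-derived prior tokens.
# All module names and shapes here are illustrative assumptions.

import torch
import torch.nn as nn

class TrimapPriorAttention(nn.Module):
    def __init__(self, dim: int = 96, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    @staticmethod
    def prior_tokens(feats: torch.Tensor, trimap: torch.Tensor) -> torch.Tensor:
        """Masked average pooling of features over each trimap region.

        feats:  (B, C, H, W) feature map
        trimap: (B, 1, H, W) with values {0: background, 1: unknown, 2: foreground}
        returns (B, 3, C), one global token per region
        """
        tokens = []
        for region in (0, 1, 2):
            mask = (trimap == region).float()                    # (B, 1, H, W)
            denom = mask.sum(dim=(2, 3)).clamp(min=1.0)          # (B, 1)
            pooled = (feats * mask).sum(dim=(2, 3)) / denom      # (B, C)
            tokens.append(pooled)
        return torch.stack(tokens, dim=1)                        # (B, 3, C)

    def forward(self, feats: torch.Tensor, trimap: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feats.shape
        patch = feats.flatten(2).transpose(1, 2)                 # (B, H*W, C)
        prior = self.prior_tokens(feats, trimap)                 # (B, 3, C)
        kv = torch.cat([patch, prior], dim=1)                    # keys/values include prior tokens
        out, _ = self.attn(patch, kv, kv)                        # patch tokens also attend to priors
        return out.transpose(1, 2).reshape(b, c, h, w)

# usage sketch:
#   x = torch.randn(2, 96, 32, 32)
#   t = torch.randint(0, 3, (2, 1, 32, 32))
#   y = TrimapPriorAttention()(x, t)   # same shape as x
```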