1 code implementation • 22 Nov 2023 • Qifan Yu, Juncheng Li, Longhui Wei, Liang Pang, Wentao Ye, Bosheng Qin, Siliang Tang, Qi Tian, Yueting Zhuang
Multi-modal Large Language Models (MLLMs) tuned on machine-generated instruction-following data have demonstrated remarkable performance in various multi-modal understanding and generation tasks.
no code implementations • 15 Aug 2023 • Bosheng Qin, Wentao Ye, Qifan Yu, Siliang Tang, Yueting Zhuang
Our approach employs a pretrained T2I diffusion model to generate each video frame in an autoregressive fashion.
1 code implementation • 22 May 2023 • Qifan Yu, Juncheng Li, Wentao Ye, Siliang Tang, Yueting Zhuang
Recent text-to-image generation models have shown promising results in generating high-fidelity photo-realistic images.
1 code implementation • ICCV 2023 • Qifan Yu, Juncheng Li, Yu Wu, Siliang Tang, Wei Ji, Yueting Zhuang
Building on this, we further introduce a novel Entangled cross-modal prompt approach for open-world predicate scene graph generation (Epic), in which models generalize to unseen predicates in a zero-shot manner.