Search Results for author: Chris Kelly

Found 5 papers, 1 paper with code

VisionGPT: Vision-Language Understanding Agent Using Generalized Multimodal Framework

no code implementations • 14 Mar 2024 • Chris Kelly, Luhui Hu, Bang Yang, Yu Tian, Deshun Yang, Cindy Yang, Zaoshan Huang, Zihao Li, Jiayin Hu, Yuexian Zou

With the emergence of large language models (LLMs) and vision foundation models, how to combine the intelligence and capacity of these open-sourced or API-available models to achieve open-world visual perception remains an open question.

Language Modelling · Large Language Model +2

VisionGPT-3D: A Generalized Multimodal Agent for Enhanced 3D Vision Understanding

no code implementations • 14 Mar 2024 • Chris Kelly, Luhui Hu, Jiayin Hu, Yu Tian, Deshun Yang, Bang Yang, Cindy Yang, Zihao Li, Zaoshan Huang, Yuexian Zou

It seamlessly integrates various SOTA vision models, automates the selection of SOTA vision models, identifies the suitable 3D mesh creation algorithms corresponding to 2D depth map analysis, and generates optimal results from diverse multimodal inputs such as text prompts.

WorldGPT: A Sora-Inspired Video AI Agent as Rich World Models from Text and Image Inputs

no code implementations • 10 Mar 2024 • Deshun Yang, Luhui Hu, Yu Tian, Zihao Li, Chris Kelly, Bang Yang, Cindy Yang, Yuexian Zou

Several text-to-video diffusion models have demonstrated commendable capabilities in synthesizing high-quality video content.

Video Generation

UnifiedVisionGPT: Streamlining Vision-Oriented AI through Generalized Multimodal Framework

1 code implementation • 16 Nov 2023 • Chris Kelly, Luhui Hu, Cindy Yang, Yu Tian, Deshun Yang, Bang Yang, Zaoshan Huang, Zihao Li, Yuexian Zou

In the current landscape of artificial intelligence, foundation models serve as the bedrock for advancements in both language and vision domains.
