no code implementations • 15 Oct 2024 • Zhiyuan Ma, Yuzhu Zhang, Guoli Jia, Liangliang Zhao, Yichao Ma, Mingjie Ma, Gaofeng Liu, Kaiyan Zhang, Jianjun Li, BoWen Zhou
As one of the most popular and sought-after families of generative models in recent years, diffusion models have sparked the interest of many researchers and steadily shown excellent advantages in various generative tasks such as image synthesis, video generation, molecule design, 3D scene rendering and multimodal generation, relying on their dense theoretical principles and reliable application practices.
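The models covered by this survey share the same forward noising principle; below is a minimal sketch of the standard DDPM forward process for context (the schedule length and beta range are illustrative choices, not values from the paper):

```python
import torch

# Standard DDPM forward (noising) process: q(x_t | x_0) = N(sqrt(a_bar_t) x_0, (1 - a_bar_t) I).
# T and the beta range are illustrative defaults, not taken from the surveyed papers.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)              # linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0, t, noise=None):
    """Draw x_t from q(x_t | x_0) for a batch of images x0 and integer timesteps t."""
    if noise is None:
        noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)    # broadcast over (B, C, H, W)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

# Example: noise a batch of 64x64 RGB images at a random timestep per sample.
x0 = torch.randn(8, 3, 64, 64)
t = torch.randint(0, T, (8,))
x_t = q_sample(x0, t)
```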
no code implementations • 19 Jun 2024 • Zhiyuan Ma, Liangliang Zhao, Biqing Qi, BoWen Zhou
The most advanced diffusion models have recently adopted increasingly deep stacked networks (e.g., U-Net or Transformer) to promote emergent generative capabilities in vision generation models, similar to large language models (LLMs).
no code implementations • 30 Apr 2024 • Lei Zhuang, Jingdong Zhao, Yuntao Li, Zichun Xu, Liangliang Zhao, Hong Liu
EISE and MPT are trained collaboratively, enabling EISE to autonomously learn and extract patterns from environmental data, thereby forming semantic representations that MPT can more effectively interpret and utilize for motion planning.
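A hedged sketch of what "collaborative training" amounts to in general: the environment encoder and the planner are optimized end-to-end under a single planning loss, so the encoder learns representations the planner can use. The module internals, input sizes, and loss below are placeholders, not the paper's architecture.

```python
import torch
from torch import nn

# Placeholder stand-ins for the encoder (EISE) and planner (MPT); shapes are illustrative only.
eise = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))
mpt = nn.Sequential(nn.Linear(64 + 10, 256), nn.ReLU(), nn.Linear(256, 7))

# One optimizer over both modules, so gradients from the planning loss also shape the encoder.
optimizer = torch.optim.Adam(list(eise.parameters()) + list(mpt.parameters()), lr=1e-4)

def training_step(env_obs, robot_state, target_waypoint):
    semantic = eise(env_obs)                               # semantic representation of the scene
    pred = mpt(torch.cat([semantic, robot_state], dim=-1))
    loss = nn.functional.mse_loss(pred, target_waypoint)   # placeholder planning loss
    optimizer.zero_grad()
    loss.backward()                                        # gradients flow into both modules
    optimizer.step()
    return loss.item()
```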
1 code implementation • 19 Jan 2024 • Junyu Gao, Liangliang Zhao, Xuelong Li
Considering the absence of a dataset for this task, a large-scale dataset (NWPU-MOC) is collected, consisting of 3,416 scenes at a resolution of 1024 $\times$ 1024 pixels, well annotated with 14 fine-grained object categories.
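A minimal sketch of how such a multi-category counting dataset might be wrapped for training; the directory layout, file extensions, and label format are assumptions for illustration, not the published release format of NWPU-MOC.

```python
import os
import numpy as np
from PIL import Image
from torch.utils.data import Dataset

class MultiCategoryCountingDataset(Dataset):
    """Hypothetical loader for an NWPU-MOC-style dataset: 1024x1024 RGB scenes,
    each paired with per-category counts for 14 fine-grained object classes."""

    NUM_CATEGORIES = 14

    def __init__(self, root):
        # Assumed layout: <root>/images/<id>.png and <root>/labels/<id>.txt
        self.image_dir = os.path.join(root, "images")
        self.label_dir = os.path.join(root, "labels")
        self.ids = sorted(os.path.splitext(f)[0] for f in os.listdir(self.image_dir))

    def __len__(self):
        return len(self.ids)

    def __getitem__(self, idx):
        scene_id = self.ids[idx]
        image = Image.open(os.path.join(self.image_dir, scene_id + ".png")).convert("RGB")
        # Assumed label format: one count per category, whitespace-separated.
        counts = np.loadtxt(os.path.join(self.label_dir, scene_id + ".txt"))
        assert counts.shape == (self.NUM_CATEGORIES,)
        return np.asarray(image), counts.astype(np.float32)
```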
no code implementations • COLING 2020 • Ruifang He, Liangliang Zhao, Huanyu Liu
In this paper, we construct TWEETSUM, a new event-oriented dataset for social summarization.