Search Results for author: Yuren Cong

Found 8 papers, 2 papers with code

Segment Any Object Model (SAOM): Real-to-Simulation Fine-Tuning Strategy for Multi-Class Multi-Instance Segmentation

no code implementations · 16 Mar 2024 · Mariia Khan, Yue Qiu, Yuren Cong, Jumana Abu-Khalaf, David Suter, Bodo Rosenhahn

The foundational Segment Anything Model (SAM) is designed for promptable multi-class multi-instance segmentation, but in "everything" mode it tends to output part or sub-part masks in various real-world applications.

Tasks: Instance Segmentation, Object, +3 more

FLATTEN: optical FLow-guided ATTENtion for consistent text-to-video editing

no code implementations · 9 Oct 2023 · Yuren Cong, Mengmeng Xu, Christian Simon, Shoufa Chen, Jiawei Ren, Yanping Xie, Juan-Manuel Perez-Rua, Bodo Rosenhahn, Tao Xiang, Sen He

In this paper, for the first time, we introduce optical flow into the attention module in the diffusion model's U-Net to address the inconsistency issue for text-to-video editing.

Tasks: Optical Flow Estimation, Text-to-Video Editing, +1 more
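The core idea of flow-guided attention — letting patches that lie on the same optical-flow trajectory attend to each other across frames — can be sketched as a boolean attention mask. The tensor shapes and the `trajectory_attention_mask` helper below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def trajectory_attention_mask(traj_ids):
    """Build a boolean attention mask from per-patch trajectory ids.

    traj_ids: (num_frames, num_patches) integer array in which patches
    linked by optical flow across frames share the same trajectory id
    (an illustrative labeling, not the authors' code).
    Returns an (F*P, F*P) mask that is True where attention is allowed.
    """
    flat = np.asarray(traj_ids).reshape(-1)   # flatten frames x patches
    # Patches may attend to one another only on a shared flow trajectory.
    return flat[:, None] == flat[None, :]
```

Such a mask could then gate the attention scores inside a U-Net attention block, so that a patch aggregates features only from its own trajectory, which is one way to encourage temporal consistency.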

Learning Similarity between Scene Graphs and Images with Transformers

no code implementations · 2 Apr 2023 · Yuren Cong, Wentong Liao, Bodo Rosenhahn, Michael Ying Yang

Scene graph generation is conventionally evaluated by (mean) Recall@K, which measures the fraction of ground-truth triplets recovered among the top-K predictions.

Tasks: Contrastive Learning, Graph Generation, +3 more
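The Recall@K metric mentioned above can be sketched as follows; the (subject, predicate, object) triplet format and top-K selection by confidence are standard conventions, but this helper is illustrative, not the authors' evaluation code:

```python
def recall_at_k(predicted, ground_truth, k=50):
    """Fraction of ground-truth triplets recovered in the top-K predictions.

    predicted: list of ((subj, pred, obj), score) pairs
    ground_truth: set of (subj, pred, obj) triplets
    """
    # Rank predictions by confidence and keep the K most confident.
    top_k = sorted(predicted, key=lambda t: t[1], reverse=True)[:k]
    hits = {triplet for triplet, _ in top_k} & ground_truth
    return len(hits) / len(ground_truth) if ground_truth else 0.0
```

Mean Recall@K averages this quantity per predicate class, which prevents frequent predicates from dominating the score.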

Attribute-Centric Compositional Text-to-Image Generation

no code implementations · 4 Jan 2023 · Yuren Cong, Martin Renqiang Min, Li Erran Li, Bodo Rosenhahn, Michael Ying Yang

We further propose an attribute-centric contrastive loss to avoid overfitting to overrepresented attribute compositions.

Tasks: Attribute, Fairness, +1 more
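A contrastive loss of this kind is typically built on an InfoNCE-style objective; the generic sketch below is a stand-in to show the mechanism — the paper's attribute-centric loss differs in how positive and negative pairs are formed, and `contrastive_loss` is a hypothetical helper name:

```python
import numpy as np

def contrastive_loss(anchor, candidates, positive_index, temperature=0.1):
    """Generic InfoNCE-style contrastive loss (illustrative only).

    anchor: (d,) embedding; candidates: (n, d) embeddings, one of which
    (positive_index) matches the anchor's attribute composition.
    """
    sims = candidates @ anchor / temperature   # (n,) scaled similarities
    sims -= sims.max()                         # numerical stability
    probs = np.exp(sims) / np.exp(sims).sum()  # softmax over candidates
    return -np.log(probs[positive_index])      # pull positive, push negatives
```

Down-weighting or resampling pairs from overrepresented attribute compositions is one way such a loss can be kept from overfitting to them.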

SSGVS: Semantic Scene Graph-to-Video Synthesis

no code implementations · 11 Nov 2022 · Yuren Cong, Jinhui Yi, Bodo Rosenhahn, Michael Ying Yang

A semantic scene graph-to-video synthesis framework (SSGVS), built on a pre-trained VSG encoder, a VQ-VAE, and an auto-regressive Transformer, is proposed to synthesize a video from an initial scene image and a variable number of semantic scene graphs.

Tasks: Image Generation

RelTR: Relation Transformer for Scene Graph Generation

1 code implementation · 27 Jan 2022 · Yuren Cong, Michael Ying Yang, Bodo Rosenhahn

Different objects in the same scene are more or less related to each other, but only a limited number of these relationships are noteworthy.

Tasks: Graph Generation, Object, +4 more

Spatial-Temporal Transformer for Dynamic Scene Graph Generation

1 code implementation · ICCV 2021 · Yuren Cong, Wentong Liao, Hanno Ackermann, Bodo Rosenhahn, Michael Ying Yang

Compared to scene graph generation from images, dynamic scene graph generation is more challenging because of the dynamic relationships between objects and the temporal dependencies between frames, which allow for a richer semantic interpretation.

Tasks: Scene Graph Generation, Video Understanding, +1 more
