Search Results for author: Jingjing Lv

Found 6 papers, 4 papers with code

Generate E-commerce Product Background by Integrating Category Commonality and Personalized Style

no code implementations • 20 Dec 2023 • Haohan Wang, Wei Feng, Yang Lu, Yaoyu Li, Zheng Zhang, Jingjing Lv, Xin Zhu, Junjie Shen, Zhangang Lin, Lixing Bo, Jingping Shao

Furthermore, for products with specific, fine-grained requirements on layout, elements, etc., a Personality-Wise Generator is devised to learn such personalized style directly from a reference image, resolving textual ambiguities; it is trained in a self-supervised manner for more efficient use of training data.
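The idea of picking up a "personalized style" from a single reference image can be loosely illustrated with AdaIN-style feature-statistics transfer. This is a generic technique, not the paper's actual architecture, and all names below are hypothetical; the self-supervised flavour shows up in that styling a feature map with itself reconstructs it exactly:

```python
import numpy as np

def style_stats(feat, eps=1e-5):
    """Per-channel mean and std of a (C, H, W) feature map."""
    mean = feat.mean(axis=(1, 2), keepdims=True)
    std = feat.std(axis=(1, 2), keepdims=True) + eps
    return mean, std

def adain(content_feat, reference_feat):
    """Re-normalize content features to match the reference image's
    channel statistics -- one simple way to inject a reference style."""
    c_mean, c_std = style_stats(content_feat)
    r_mean, r_std = style_stats(reference_feat)
    return (content_feat - c_mean) / c_std * r_std + r_mean

# Self-supervised sanity check: using the input as its own reference
# is an identity mapping, so reconstruction can supervise training.
rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 16, 16))
print(np.allclose(adain(feat, feat), feat, atol=1e-4))  # True
```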


Planning and Rendering: Towards End-to-End Product Poster Generation

no code implementations • 14 Dec 2023 • Zhaochen Li, Fengheng Li, Wei Feng, Honghe Zhu, An Liu, Yaoyu Li, Zheng Zhang, Jingjing Lv, Xin Zhu, Junjie Shen, Zhangang Lin, Jingping Shao, Zhenglu Yang

At the planning stage, we propose PlanNet to generate the layout of the product and the other visual components, taking into account both the appearance features of the product and the semantic features of the text, which improves the diversity and rationality of the layouts.
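A toy sketch of what a layout planner's interface might look like: fuse a product appearance vector with a text semantic vector and regress one normalized box per visual element. All shapes and the random projection are illustrative assumptions; PlanNet itself is a learned network, not this:

```python
import numpy as np

def plan_layout(product_feat, text_feat, n_elements=3, seed=0):
    """Toy layout planner: concatenate appearance and semantic features,
    then regress one (x, y, w, h) box in [0, 1] per visual element."""
    rng = np.random.default_rng(seed)
    fused = np.concatenate([product_feat, text_feat])       # (Dp + Dt,)
    w = rng.normal(scale=0.1, size=(n_elements * 4, fused.size))
    boxes = 1.0 / (1.0 + np.exp(-(w @ fused)))              # sigmoid keeps coords normalized
    return boxes.reshape(n_elements, 4)

boxes = plan_layout(np.ones(64), np.ones(32))
print(boxes.shape)  # (3, 4)
```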

Image Inpainting

RTQ: Rethinking Video-language Understanding Based on Image-text Model

2 code implementations • 1 Dec 2023 • Xiao Wang, Yaoyu Li, Tian Gan, Zheng Zhang, Jingjing Lv, Liqiang Nie

Recent advancements in video-language understanding have been established on the foundation of image-text models, resulting in promising outcomes due to the shared knowledge between images and videos.

Video Captioning · Video Question Answering +1

CBNet: A Plug-and-Play Network for Segmentation-Based Scene Text Detection

1 code implementation • 5 Dec 2022 • Xi Zhao, Wei Feng, Zheng Zhang, Jingjing Lv, Xin Zhu, Zhangang Lin, Jinghe Hu, Jingping Shao

Recently, segmentation-based methods have become quite popular in scene text detection; they mainly consist of two steps: text kernel segmentation and expansion.
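The expansion step in this family of detectors can be sketched minimally: grow the predicted text-kernel mask outwards by breadth-first search, but only into pixels that the full text-region mask marks as text. This is a generic illustration of the kernel-expansion idea, not CBNet's specific algorithm:

```python
import numpy as np
from collections import deque

def expand_kernel(kernel, text_mask):
    """Grow a binary kernel mask via 4-neighbour BFS, constrained to
    pixels where text_mask is 1 (the full text-region prediction)."""
    h, w = kernel.shape
    out = kernel.copy()
    queue = deque(zip(*np.nonzero(kernel)))
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and text_mask[ny, nx] and not out[ny, nx]:
                out[ny, nx] = 1
                queue.append((ny, nx))
    return out

# One kernel pixel expands to fill the 3-pixel text stripe around it.
kernel = np.zeros((5, 5), dtype=np.uint8); kernel[2, 2] = 1
text = np.zeros((5, 5), dtype=np.uint8); text[2, 1:4] = 1
print(expand_kernel(kernel, text)[2])  # [0 1 1 1 0]
```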

Scene Text Detection · Segmentation +1
