Search Results for author: Keren Ye

Found 10 papers, 3 papers with code

TIP: Text-Driven Image Processing with Semantic and Restoration Instructions

no code implementations 18 Dec 2023 Chenyang Qi, Zhengzhong Tu, Keren Ye, Mauricio Delbracio, Peyman Milanfar, Qifeng Chen, Hossein Talebi

Text-driven diffusion models have become increasingly popular for various image editing tasks, including inpainting, stylization, and object replacement.

Tasks: Deblurring, Denoising, +2

VILA: Learning Image Aesthetics from User Comments with Vision-Language Pretraining

1 code implementation CVPR 2023 Junjie Ke, Keren Ye, Jiahui Yu, Yonghui Wu, Peyman Milanfar, Feng Yang

Our results show that our pretrained aesthetic vision-language model outperforms prior work on image aesthetic captioning on the AVA-Captions dataset, and that it has strong zero-shot capability for aesthetic tasks such as style classification and image aesthetic assessment (IAA), surpassing many supervised baselines.

Tasks: Language Modelling, Video Quality Assessment

Weakly-Supervised Action Detection Guided by Audio Narration

no code implementations 12 May 2022 Keren Ye, Adriana Kovashka

We explore how to eliminate the expensive annotations in video detection data that provide refined boundaries.

Tasks: Action Detection

Linguistic Structures as Weak Supervision for Visual Scene Graph Generation

1 code implementation CVPR 2021 Keren Ye, Adriana Kovashka

Prior work in scene graph generation requires categorical supervision at the level of triplets: subjects, objects, and the predicates that relate them, either with or without bounding box information.

Tasks: Graph Generation, Scene Graph Generation
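
As a rough, illustrative sketch (not taken from the paper), the triplet-level supervision described above can be pictured as a list of (subject, predicate, object) records with bounding boxes optionally attached; the names below are hypothetical:

```python
# Illustrative only: a minimal representation of triplet-level scene graph
# supervision, with optional bounding boxes for subject and object.
from dataclasses import dataclass
from typing import Optional, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

@dataclass
class Triplet:
    subject: str                       # e.g. "person"
    predicate: str                     # e.g. "riding"
    obj: str                           # e.g. "horse"
    subject_box: Optional[Box] = None  # absent under weaker supervision
    object_box: Optional[Box] = None

# With box information (stronger supervision) ...
full = Triplet("person", "riding", "horse",
               subject_box=(10.0, 20.0, 110.0, 220.0),
               object_box=(0.0, 80.0, 200.0, 300.0))
# ... and without it (categorical triplets only).
weak = Triplet("person", "riding", "horse")
```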

SpotPatch: Parameter-Efficient Transfer Learning for Mobile Object Detection

no code implementations 4 Jan 2021 Keren Ye, Adriana Kovashka, Mark Sandler, Menglong Zhu, Andrew Howard, Marco Fornoni

In this paper, we address the question: can task-specific detectors be trained and represented as a shared set of weights, plus a very small set of additional weights for each task?

Tasks: Object, Object Detection, +2
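
As a rough, illustrative sketch of the "shared weights plus a small per-task set of weights" question posed above (this is not the SpotPatch method; the class and layer choices below are hypothetical), one frozen shared layer can be combined with a tiny trainable per-task correction:

```python
# Illustrative only: a frozen shared convolution plus a small per-task
# 1x1 "patch" that is the only trainable part for each new task.
import torch
import torch.nn as nn

class PatchedConv(nn.Module):
    def __init__(self, base_conv: nn.Conv2d):
        super().__init__()
        self.base = base_conv
        for p in self.base.parameters():
            p.requires_grad = False    # shared weights stay frozen
        # Per-task patch: few parameters relative to the shared layer.
        self.patch = nn.Conv2d(base_conv.out_channels, base_conv.out_channels,
                               kernel_size=1, bias=False)
        nn.init.zeros_(self.patch.weight)  # start as a no-op residual

    def forward(self, x):
        y = self.base(x)
        return y + self.patch(y)       # task-specific correction

shared = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # shared across tasks
task_a = PatchedConv(shared)                         # only task_a.patch trains
out = task_a(torch.randn(1, 3, 32, 32))
```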

Cap2Det: Learning to Amplify Weak Caption Supervision for Object Detection

1 code implementation ICCV 2019 Keren Ye, Mingda Zhang, Adriana Kovashka, Wei Li, Danfeng Qin, Jesse Berent

Learning to localize and name object instances is a fundamental problem in vision, but state-of-the-art approaches rely on expensive bounding box supervision.

Tasks: Object, Object Detection, +1

Learning to discover and localize visual objects with open vocabulary

no code implementations 25 Nov 2018 Keren Ye, Mingda Zhang, Wei Li, Danfeng Qin, Adriana Kovashka, Jesse Berent

To alleviate the cost of obtaining accurate bounding boxes for training today's state-of-the-art object detection models, recent weakly supervised detection work has proposed techniques to learn from image-level labels.

Tasks: Object, Object Detection, +1

Story Understanding in Video Advertisements

no code implementations 29 Jul 2018 Keren Ye, Kyle Buettner, Adriana Kovashka

We dedicate our study to automatically understanding the dynamic structure of video ads.

ADVISE: Symbolism and External Knowledge for Decoding Advertisements

no code implementations ECCV 2018 Keren Ye, Adriana Kovashka

In order to convey the most content in their limited space, advertisements embed references to outside knowledge via symbolism.

Tasks: Clustering, Image Captioning, +2

Automatic Understanding of Image and Video Advertisements

no code implementations CVPR 2017 Zaeem Hussain, Mingda Zhang, Xiaozhong Zhang, Keren Ye, Christopher Thomas, Zuha Agha, Nathan Ong, Adriana Kovashka

There is more to images than their objective physical content: for example, advertisements are created to persuade a viewer to take a certain action.
