Search Results for author: Erika Lu

Found 10 papers, 4 papers with code

ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs

1 code implementation, 22 Nov 2023 Viraj Shah, Nataniel Ruiz, Forrester Cole, Erika Lu, Svetlana Lazebnik, Yuanzhen Li, Varun Jampani

Experiments on a wide range of subject and style combinations show that ZipLoRA can generate compelling results with meaningful improvements over baselines in subject and style fidelity while preserving the ability to recontextualize.

Omnimatte3D: Associating Objects and Their Effects in Unconstrained Monocular Video

no code implementations CVPR 2023 Mohammed Suhail, Erika Lu, Zhengqi Li, Noah Snavely, Leonid Sigal, Forrester Cole

Instead, our method applies recent progress in monocular camera pose and depth estimation to create a full, RGBD video layer for the background, along with a video layer for each foreground object.

Depth Estimation
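The decomposition described above can be pictured as back-to-front alpha compositing: an opaque RGBD background layer with one RGBA layer blended over it per foreground object. The sketch below is only an illustration of that compositing structure under assumed array shapes; the function and variable names are not from the paper's implementation.

```python
import numpy as np

def composite_layers(background_rgb, foreground_layers):
    """Back-to-front alpha compositing: start from the opaque
    background layer and blend each (rgb, alpha) foreground
    layer over the running result."""
    out = background_rgb.astype(np.float64)
    for rgb, alpha in foreground_layers:  # ordered back to front
        a = alpha[..., None]              # (H, W, 1) for broadcasting
        out = a * rgb + (1.0 - a) * out
    return out

# Toy 2x2 frame: gray background, one half-transparent red object layer.
H, W = 2, 2
bg = np.full((H, W, 3), 0.5)
red = np.zeros((H, W, 3))
red[..., 0] = 1.0
alpha = np.full((H, W), 0.5)
frame = composite_layers(bg, [(red, alpha)])
```

With a 50% alpha, each output pixel is halfway between the red object and the gray background, which is the behavior a semi-transparent effect layer (smoke, reflections) relies on.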

Self-supervised AutoFlow

no code implementations CVPR 2023 Hsin-Ping Huang, Charles Herrmann, Junhwa Hur, Erika Lu, Kyle Sargent, Austin Stone, Ming-Hsuan Yang, Deqing Sun

Recently, AutoFlow has shown promising results on learning a training set for optical flow, but requires ground truth labels in the target domain to compute its search metric.

Optical Flow Estimation

Omnimatte: Associating Objects and Their Effects in Video

no code implementations CVPR 2021 Erika Lu, Forrester Cole, Tali Dekel, Andrew Zisserman, William T. Freeman, Michael Rubinstein

We show results on real-world videos containing interactions between different types of subjects (cars, animals, people) and complex effects, ranging from semi-transparent elements such as smoke and reflections, to fully opaque effects such as objects attached to the subject.

Self-supervised Video Object Segmentation by Motion Grouping

no code implementations ICCV 2021 Charig Yang, Hala Lamdouar, Erika Lu, Andrew Zisserman, Weidi Xie

We additionally evaluate on a challenging camouflage dataset (MoCA), significantly outperforming the other self-supervised approaches, and comparing favourably to the top supervised approach, highlighting the importance of motion cues, and the potential bias towards visual appearance in existing video segmentation models.

Motion Segmentation, Object +6

On the Origin of Species of Self-Supervised Learning

no code implementations, 31 Mar 2021 Samuel Albanie, Erika Lu, Joao F. Henriques

In the quiet backwaters of cs.CV, cs.LG and stat.ML, a cornucopia of new learning systems is emerging from a primordial soup of mathematics: learning systems with no need for external supervision.

Self-Supervised Learning

Layered Neural Rendering for Retiming People in Video

1 code implementation16 Sep 2020 Erika Lu, Forrester Cole, Tali Dekel, Weidi Xie, Andrew Zisserman, David Salesin, William T. Freeman, Michael Rubinstein

We present a method for retiming people in an ordinary, natural video -- manipulating and editing the time in which different motions of individuals in the video occur.

Neural Rendering

MAST: A Memory-Augmented Self-supervised Tracker

2 code implementations CVPR 2020 Zihang Lai, Erika Lu, Weidi Xie

Recent interest in self-supervised dense tracking has yielded rapid progress, but performance still remains far from supervised methods.

Semantic Segmentation, Semi-Supervised Video Object Segmentation +2

Class-Agnostic Counting

1 code implementation, 1 Nov 2018 Erika Lu, Weidi Xie, Andrew Zisserman

The model achieves competitive performance on cell and crowd counting datasets, and surpasses the state-of-the-art on the car dataset using only three training images.

Crowd Counting, Few-Shot Learning +2

Learning to See Physics via Visual De-animation

no code implementations NeurIPS 2017 Jiajun Wu, Erika Lu, Pushmeet Kohli, Bill Freeman, Josh Tenenbaum

At the core of our system is a physical world representation that is first recovered by a perception module and then utilized by physics and graphics engines.

Future prediction
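The system described above chains three stages: a perception module recovers a physical world representation from the input, a physics engine rolls that state forward, and a graphics engine re-renders it. A minimal sketch of that pipeline shape follows; the toy 1-D free-fall physics and every name in it are illustrative assumptions, not the paper's actual modules.

```python
from dataclasses import dataclass

@dataclass
class BallState:
    """Minimal stand-in for a 'physical world representation'."""
    x: float  # position
    v: float  # velocity

def perception_module(observation: dict) -> BallState:
    # Stand-in for a learned perception network: here the state is
    # simply read out of a dict observation.
    return BallState(x=observation["x"], v=observation["v"])

def physics_engine(state: BallState, dt: float, g: float = -9.8) -> BallState:
    # One explicit Euler step of free fall under gravity g.
    return BallState(x=state.x + state.v * dt, v=state.v + g * dt)

def graphics_engine(state: BallState) -> str:
    # Stand-in renderer: produce a textual "frame".
    return f"ball at x={state.x:.2f}"

# Perceive the scene, simulate one step, re-render the prediction.
state = perception_module({"x": 10.0, "v": 0.0})
state = physics_engine(state, dt=0.1)
frame = graphics_engine(state)
```

The point of the structure is that the recovered state is the only interface between stages, so the same simulate-then-render tail can be reused for future prediction from any perceived initial state.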
