Search Results for author: Alexander G. Anderson

Found 6 papers, 1 paper with code

PIM: Video Coding using Perceptual Importance Maps

no code implementations · 20 Dec 2022 · Evgenya Pergament, Pulkit Tandon, Oren Rippel, Lubomir Bourdev, Alexander G. Anderson, Bruno Olshausen, Tsachy Weissman, Sachin Katti, Kedar Tatwawadi

The contributions of this work are threefold: (1) we introduce a web tool which allows scalable collection of fine-grained perceptual importance, by having users interactively paint spatio-temporal maps over encoded videos; (2) we use this tool to collect a dataset of 178 videos (14,443 frames in total) with human-annotated spatio-temporal importance maps; and (3) we use our curated dataset to train a lightweight machine learning model which can predict these spatio-temporal importance regions.

Video Compression
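The dataset step above combines annotations from multiple users into a single map per video. A minimal sketch of such an aggregation, assuming per-user binary paint masks of shape (frames, height, width); the function name and normalization choice are illustrative, not from the paper:

```python
import numpy as np

def aggregate_importance(user_masks):
    """Combine per-user painted masks into one spatio-temporal map.

    user_masks: list of (T, H, W) binary arrays, one per annotator.
    Returns a (T, H, W) float map normalized to [0, 1].
    """
    stacked = np.stack(user_masks, axis=0).astype(np.float32)
    mean_map = stacked.mean(axis=0)  # fraction of users who painted each pixel
    peak = mean_map.max()
    return mean_map / peak if peak > 0 else mean_map

# Two annotators agree on one pixel and disagree on another:
masks = [np.zeros((2, 4, 4)), np.zeros((2, 4, 4))]
masks[0][0, 1, 1] = 1
masks[1][0, 1, 1] = 1
masks[1][0, 2, 2] = 1
m = aggregate_importance(masks)
print(m[0, 1, 1], m[0, 2, 2])  # 1.0 0.5
```

Averaging across annotators turns disagreement into a soft importance score, which is a natural regression target for the lightweight predictor mentioned in the abstract.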

An Interactive Annotation Tool for Perceptual Video Compression

1 code implementation · 8 May 2022 · Evgenya Pergament, Pulkit Tandon, Kedar Tatwawadi, Oren Rippel, Lubomir Bourdev, Bruno Olshausen, Tsachy Weissman, Sachin Katti, Alexander G. Anderson

We use this tool to collect data in the wild (10 videos, 17 users) and utilize the obtained importance maps in the context of x264 coding to demonstrate, through a subjective study, that the tool can indeed be used to generate videos which look perceptually better at the same bitrate, and which are 1.9 times more likely to be preferred by viewers.

Video Compression
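One concrete way importance maps can steer an x264-style encoder is through per-macroblock quantizer offsets (x264 exposes these as `quant_offsets` on the input picture). The mapping below, a linear zero-mean offset over a +/-`qp_range` span, is an illustrative choice for this sketch, not the paper's exact scheme:

```python
import numpy as np

def importance_to_qp_offsets(importance, qp_range=6.0):
    """Map a per-macroblock importance grid in [0, 1] to float QP offsets.

    Negative offset = finer quantization (more bits) where importance is
    high; zero-mean centering keeps the average rate roughly unchanged.
    """
    centered = importance - importance.mean()
    return -qp_range * centered

# A 2x2 macroblock grid: one very important block, one unimportant one.
imp = np.array([[1.0, 0.0],
                [0.5, 0.5]])
print(importance_to_qp_offsets(imp))  # [[-3.  3.] [ 0.  0.]]
```

In a real pipeline these offsets would be written into the encoder's picture properties before each frame is encoded.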

ELF-VC: Efficient Learned Flexible-Rate Video Coding

no code implementations · ICCV 2021 · Oren Rippel, Alexander G. Anderson, Kedar Tatwawadi, Sanjay Nair, Craig Lytle, Lubomir Bourdev

In this setting, for natural videos our approach compares favorably across the entire R-D curve under the PSNR, MS-SSIM and VMAF metrics against all mainstream video standards (H.264, H.265, AV1) and all ML codecs.

Computational Efficiency · MS-SSIM · +2

The High-Dimensional Geometry of Binary Neural Networks

no code implementations · ICLR 2018 · Alexander G. Anderson, Cory P. Berg

However, there is a dearth of theoretical analysis to explain why we can effectively capture the features in our data with binary weights and activations.

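The geometric intuition behind that paper is easy to check numerically: in high dimensions, binarizing a Gaussian weight vector with sign() roughly preserves its direction, with the cosine similarity concentrating near sqrt(2/pi) ≈ 0.798. A quick numeric check (not the paper's own code):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000                       # high dimension is what makes this work
w = rng.standard_normal(d)       # continuous weight vector
b = np.sign(w)                   # its binarized counterpart

# Cosine similarity between the vector and its binarization.
cos = w @ b / (np.linalg.norm(w) * np.linalg.norm(b))
print(cos)  # concentrates near sqrt(2/pi) ~ 0.798
```

The concentration follows from E|Z| = sqrt(2/pi) for a standard normal Z: the dot product w·sign(w) is the sum of |w_i|, so the angle between w and sign(w) is nearly deterministic in high dimensions.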

DeepMovie: Using Optical Flow and Deep Neural Networks to Stylize Movies

no code implementations · 26 May 2016 · Alexander G. Anderson, Cory P. Berg, Daniel P. Mossing, Bruno A. Olshausen

The other naive method, which initializes the optimization for the next frame using the rendered version of the previous frame, also produces poor results, because the texture features stay fixed relative to the frame of the movie instead of moving with objects in the scene.

Optical Flow Estimation · Style Transfer
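The fix the title implies is to warp the previous stylized frame along the optical flow before using it as the next initialization, so textures move with scene content. A minimal backward-warping sketch with nearest-neighbor sampling (real pipelines use bilinear sampling and occlusion masks; the function name and flow convention are illustrative):

```python
import numpy as np

def warp_with_flow(prev_frame, flow):
    """Backward-warp prev_frame along a dense flow field.

    prev_frame: (H, W, C) array; flow: (H, W, 2) giving (dy, dx) motion
    per pixel. Output pixel (y, x) samples prev_frame at (y-dy, x-dx).
    """
    H, W = prev_frame.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.round(ys - flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs - flow[..., 1]).astype(int), 0, W - 1)
    return prev_frame[src_y, src_x]

# Everything moved one pixel to the right between frames:
frame = np.arange(9).reshape(3, 3, 1)
flow = np.zeros((3, 3, 2))
flow[..., 1] = 1.0
print(warp_with_flow(frame, flow)[..., 0])  # each row shifts right by 1
```

Initializing the next frame's optimization from this warped render keeps strokes attached to objects rather than to the image grid.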
