Search Results for author: Jeremy Reizenstein

Found 10 papers, 5 papers with code

Common Objects in 3D: Large-Scale Learning and Evaluation of Real-life 3D Category Reconstruction

1 code implementation ICCV 2021 Jeremy Reizenstein, Roman Shapovalov, Philipp Henzler, Luca Sbordone, Patrick Labatut, David Novotny

Traditional approaches for learning 3D object categories have been predominantly trained and evaluated on synthetic datasets due to the unavailability of real 3D-annotated category-centric data.

3D Reconstruction · Neural Rendering +1

Accelerating 3D Deep Learning with PyTorch3D

3 code implementations 16 Jul 2020 Nikhila Ravi, Jeremy Reizenstein, David Novotny, Taylor Gordon, Wan-Yen Lo, Justin Johnson, Georgia Gkioxari

We address these challenges by introducing PyTorch3D, a library of modular, efficient, and differentiable operators for 3D deep learning.

Autonomous Vehicles
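PyTorch3D packages batched, differentiable 3D operators such as point-cloud losses. As a rough illustration of one such operator (not the library's implementation, which is batched, GPU-accelerated, and differentiable in PyTorch), here is a minimal NumPy sketch of a symmetric Chamfer distance:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point clouds p (n, 3) and q (m, 3).

    For each point, take the squared distance to its nearest neighbour in
    the other cloud, then average the two directions.
    """
    # pairwise squared distances, shape (n, m)
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

The library itself exposes this kind of loss (e.g. `pytorch3d.loss.chamfer_distance`) on batches of `torch` tensors, so gradients flow back to the predicted geometry.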

PerspectiveNet: A Scene-consistent Image Generator for New View Synthesis in Real Indoor Environments

no code implementations NeurIPS 2019 Ben Graham, David Novotny, Jeremy Reizenstein

Given a set of reference RGBD views of an indoor environment, and a new viewpoint, our goal is to predict the view from that location.

The iisignature library: efficient calculation of iterated-integral signatures and log signatures

2 code implementations 22 Feb 2018 Jeremy Reizenstein, Benjamin Graham

Iterated-integral signatures and log signatures are vectors calculated from a path that characterise its shape.

Data Structures and Algorithms · Mathematical Software · Rings and Algebras
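To make the object concrete: the depth-2 signature of a piecewise-linear path can be built segment by segment with Chen's identity. This is an illustrative NumPy sketch, not the iisignature implementation (which handles arbitrary depth in optimised C++):

```python
import numpy as np

def signature_depth2(path):
    """Depth-2 iterated-integral signature of a piecewise-linear path.

    path: (n, d) array of points. Returns (s1, s2), where s1[i] is the
    total increment in coordinate i and s2[i, j] is the double integral
    of dx^i dx^j over ordered pairs of times. Segments are combined
    with Chen's identity.
    """
    d = path.shape[1]
    s1 = np.zeros(d)
    s2 = np.zeros((d, d))
    for a, b in zip(path[:-1], path[1:]):
        dx = b - a
        # signature of a single line segment: level 2 is outer(dx, dx) / 2
        s2 += np.outer(s1, dx) + np.outer(dx, dx) / 2.0
        s1 += dx
    return s1, s2
```

With the library installed, `iisignature.sig(path, 2)` should return the same numbers as a flat vector.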

Invariants of multidimensional time series based on their iterated-integral signature

no code implementations18 Jan 2018 Joscha Diehl, Jeremy Reizenstein

We introduce a novel class of features for multidimensional time series, that are invariant with respect to transformations of the ambient space.

Time Series · Time Series Analysis
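A simple member of this class of invariants in two dimensions is the Lévy (signed) area, the antisymmetric part of the level-2 signature: it is unchanged by rotations of the ambient plane. The sketch below illustrates the flavour of such features, not the paper's full construction:

```python
import numpy as np

def levy_area(path):
    """Signed (Levy) area of a 2-D piecewise-linear path.

    Equals (S^12 - S^21) / 2, the antisymmetric part of the depth-2
    signature; invariant under rotations of the plane.
    """
    area = 0.0
    x = np.zeros(2)          # increment accumulated so far
    for p, q in zip(path[:-1], path[1:]):
        dx = q - p
        area += 0.5 * (x[0] * dx[1] - x[1] * dx[0])
        x += dx
    return area
```

Since a rotation acts linearly on increments with determinant 1, the wedge product `x ∧ dx`, and hence the area, is preserved.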

Calculation of Iterated-Integral Signatures and Log Signatures

2 code implementations 7 Dec 2017 Jeremy Reizenstein

We explain the algebra needed to make sense of the log signature of a path, with plenty of examples.

Rings and Algebras
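At depth 2 the algebra is small enough to spell out: the log signature is the truncated tensor logarithm of the signature, whose level-1 part is the increment and whose level-2 part is `s2 - s1 ⊗ s1 / 2` — antisymmetric by the shuffle identity. A hedged NumPy sketch (the library computes this at arbitrary depth via a Lyndon-basis projection):

```python
import numpy as np

def logsig_depth2(path):
    """Depth-2 log signature of a piecewise-linear path.

    Builds the depth-2 signature (s1, s2) with Chen's identity, then takes
    log(1 + s1 + s2) truncated at level 2: level 1 is s1 itself, level 2 is
    s2 - outer(s1, s1) / 2, which the shuffle identity makes antisymmetric.
    """
    d = path.shape[1]
    s1 = np.zeros(d)
    s2 = np.zeros((d, d))
    for a, b in zip(path[:-1], path[1:]):
        dx = b - a
        s2 += np.outer(s1, dx) + np.outer(dx, dx) / 2.0
        s1 += dx
    return s1, s2 - np.outer(s1, s1) / 2.0
```

For a 2-D path the level-2 part has a single independent coordinate, the Lévy area; with the library, `iisignature.logsig(path, iisignature.prepare(d, 2))` should agree up to basis choice.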

Efficient batchwise dropout training using submatrices

no code implementations9 Feb 2015 Ben Graham, Jeremy Reizenstein, Leigh Robinson

Dropout networks are generally trained by minibatch gradient descent with a dropout mask turning off some of the units; a different pattern of dropout is applied to every sample in the minibatch.
