Search Results for author: Alexander Kirillov

Found 28 papers, 16 papers with code

Point-Level Region Contrast for Object Detection Pre-Training

no code implementations · 9 Feb 2022 · Yutong Bai, Xinlei Chen, Alexander Kirillov, Alan Yuille, Alexander C. Berg

In this work we present point-level region contrast, a self-supervised pre-training approach for the task of object detection.

Contrastive Learning · Knowledge Distillation +1
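For context, self-supervised contrastive pre-training of this kind is typically built on an InfoNCE-style objective over matched features. The sketch below is a generic numpy version of that loss, where row i of one view is the positive for row i of the other and all remaining rows act as negatives; this is a common formulation, not necessarily the exact point-level loss used in the paper.

```python
import numpy as np

def info_nce(za, zb, tau=0.07):
    """Generic InfoNCE loss over two sets of matched feature vectors.

    za, zb: (N, D) arrays; row i of za is a positive pair with row i of zb,
    and every other row of zb serves as a negative for it.
    """
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / tau                     # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)            # row-wise softmax
    # cross-entropy with the matching (diagonal) entry as the target
    return float(-np.mean(np.log(np.diag(p))))
```

In a point-level setting, each row of `za` would hold the feature of a point sampled in one augmented view and the same-index row of `zb` the feature of the corresponding point in another view.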

SLIP: Self-supervision meets Language-Image Pre-training

1 code implementation · 23 Dec 2021 · Norman Mu, Alexander Kirillov, David Wagner, Saining Xie

Across ImageNet and a battery of additional datasets, we find that SLIP improves accuracy by a large margin.

Multi-Task Learning · Representation Learning +1

Mask2Former for Video Instance Segmentation

1 code implementation · 20 Dec 2021 · Bowen Cheng, Anwesa Choudhuri, Ishan Misra, Alexander Kirillov, Rohit Girdhar, Alexander G. Schwing

We find Mask2Former also achieves state-of-the-art performance on video instance segmentation without modifying the architecture, the loss or even the training pipeline.

Instance Segmentation · Panoptic Segmentation +3

Per-Pixel Classification is Not All You Need for Semantic Segmentation

3 code implementations · NeurIPS 2021 · Bowen Cheng, Alexander G. Schwing, Alexander Kirillov

Overall, the proposed mask classification-based method simplifies the landscape of effective approaches to semantic and panoptic segmentation tasks and shows excellent empirical results.

Classification · Panoptic Segmentation

Pointly-Supervised Instance Segmentation

1 code implementation · 13 Apr 2021 · Bowen Cheng, Omkar Parkhi, Alexander Kirillov

Our experiments show that the new module is more suitable for the proposed point-based supervision.

Instance Segmentation · Semantic Segmentation

Boundary IoU: Improving Object-Centric Image Segmentation Evaluation

1 code implementation · CVPR 2021 · Bowen Cheng, Ross Girshick, Piotr Dollár, Alexander C. Berg, Alexander Kirillov

We perform an extensive analysis across different error types and object sizes and show that Boundary IoU is significantly more sensitive than the standard Mask IoU measure to boundary errors for large objects, while not over-penalizing errors on smaller objects.

Panoptic Segmentation
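The metric in the entry above compares only the pixels near each mask's contour rather than whole masks. A minimal numpy sketch of this idea, assuming binary masks and a fixed pixel distance d, and approximating the paper's distance-transform band with iterated erosion, might look like:

```python
import numpy as np

def _erode(mask, d):
    """Erode a boolean mask d times with a 3x3 cross (4-connectivity)."""
    m = mask
    for _ in range(d):
        p = np.pad(m, 1, constant_values=False)
        m = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
             & p[1:-1, :-2] & p[1:-1, 2:])
    return m

def boundary_iou(gt, pred, d=2):
    """IoU restricted to the width-d inner band along each mask's contour."""
    band_gt = gt & ~_erode(gt, d)        # contour band of the ground truth
    band_pred = pred & ~_erode(pred, d)  # contour band of the prediction
    inter = np.logical_and(band_gt, band_pred).sum()
    union = np.logical_or(band_gt, band_pred).sum()
    return float(inter) / union if union else 1.0
```

Because only the thin contour bands enter the ratio, a small shift of a large mask changes this value far more than it changes whole-mask IoU, which is the sensitivity property the entry describes.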

On Interaction Between Augmentations and Corruptions in Natural Corruption Robustness

1 code implementation · NeurIPS 2021 · Eric Mintun, Alexander Kirillov, Saining Xie

Invariance to a broad array of image corruptions, such as warping, noise, or color shifts, is an important aspect of building robust models in computer vision.

TrackFormer: Multi-Object Tracking with Transformers

1 code implementation · 7 Jan 2021 · Tim Meinhardt, Alexander Kirillov, Laura Leal-Taixe, Christoph Feichtenhofer

The challenging task of multi-object tracking (MOT) requires simultaneous reasoning about track initialization, identity, and spatio-temporal trajectories.

Ranked #3 on Multi-Object Tracking on MOTS20 (using extra training data)

Frame · Multi-Object Tracking +1

Is Robustness Robust? On the interaction between augmentations and corruptions

no code implementations · 1 Jan 2021 · Eric Mintun, Alexander Kirillov, Saining Xie

Invariance to a broad array of image corruptions, such as warping, noise, or color shifts, is an important aspect of building robust models in computer vision.

Panoptic Feature Pyramid Networks

10 code implementations · CVPR 2019 · Alexander Kirillov, Ross Girshick, Kaiming He, Piotr Dollár

In this work, we perform a detailed study of this minimally extended version of Mask R-CNN with FPN, which we refer to as Panoptic FPN, and show it is a robust and accurate baseline for both tasks.

Instance Segmentation · Panoptic Segmentation +1

Calculated attributes of synonym sets

no code implementations · 5 Mar 2018 · Andrew Krizhanovsky, Alexander Kirillov

Several geometric characteristics of the synset words are introduced: the interior of a synset, the synset word rank, and centrality.

Analyzing Modular CNN Architectures for Joint Depth Prediction and Semantic Segmentation

no code implementations · 26 Feb 2017 · Omid Hosseini Jafari, Oliver Groth, Alexander Kirillov, Michael Ying Yang, Carsten Rother

Towards this end, we propose a Convolutional Neural Network (CNN) architecture that fuses the state-of-the-art results for depth estimation and semantic labeling.

Depth Estimation · Semantic Segmentation

Global Hypothesis Generation for 6D Object Pose Estimation

no code implementations · CVPR 2017 · Frank Michel, Alexander Kirillov, Eric Brachmann, Alexander Krull, Stefan Gumhold, Bogdan Savchynskyy, Carsten Rother

Most modern approaches solve this task in three steps: i) Compute local features; ii) Generate a pool of pose-hypotheses; iii) Select and refine a pose from the pool.

6D Pose Estimation using RGB

Joint Graph Decomposition and Node Labeling: Problem, Algorithms, Applications

1 code implementation · 14 Nov 2016 · Evgeny Levinkov, Jonas Uhrig, Siyu Tang, Mohamed Omran, Eldar Insafutdinov, Alexander Kirillov, Carsten Rother, Thomas Brox, Bernt Schiele, Bjoern Andres

In order to find feasible solutions efficiently, we define two local search algorithms that converge monotonically to a local optimum, offering a feasible solution at any time.

Combinatorial Optimization · Multiple Object Tracking +2

Joint M-Best-Diverse Labelings as a Parametric Submodular Minimization

no code implementations · NeurIPS 2016 · Alexander Kirillov, Alexander Shekhovtsov, Carsten Rother, Bogdan Savchynskyy

In particular, the joint $M$-best diverse labelings can be obtained by running a non-parametric submodular minimization solver (max-flow in the special case) for $M$ different values of $\gamma$ in parallel, for certain diversity measures.

M-Best-Diverse Labelings for Submodular Energies and Beyond

no code implementations · NeurIPS 2015 · Alexander Kirillov, Dmytro Shlezinger, Dmitry P. Vetrov, Carsten Rother, Bogdan Savchynskyy

In this work we show that the joint inference of $M$ best diverse solutions can be formulated as a submodular energy minimization if the original MAP-inference problem is submodular, hence fast inference techniques can be used.

Total Energy

Joint Training of Generic CNN-CRF Models with Stochastic Optimization

no code implementations · 16 Nov 2015 · Alexander Kirillov, Dmitrij Schlesinger, Shuai Zheng, Bogdan Savchynskyy, Philip H. S. Torr, Carsten Rother

We propose a new CNN-CRF end-to-end learning framework, which is based on joint stochastic optimization with respect to both Convolutional Neural Network (CNN) and Conditional Random Field (CRF) parameters.

Stochastic Optimization
