1387 papers with code • 0 benchmarks • 3 datasets


Most implemented papers

MMDetection: Open MMLab Detection Toolbox and Benchmark

open-mmlab/mmdetection 17 Jun 2019

In this paper, we introduce the various features of this toolbox.

Learning Transferable Visual Models From Natural Language Supervision

openai/CLIP 26 Feb 2021

State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories.

Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms

zalandoresearch/fashion-mnist 25 Aug 2017

We present Fashion-MNIST, a new dataset comprising 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category.

CIDEr: Consensus-based Image Description Evaluation

tylin/coco-caption CVPR 2015

We propose a novel paradigm for evaluating image descriptions that uses human consensus.

The StarCraft Multi-Agent Challenge

oxwhirl/pymarl 11 Feb 2019

In this paper, we propose the StarCraft Multi-Agent Challenge (SMAC) as a benchmark problem to fill this gap.

Benchmarking Graph Neural Networks

graphdeeplearning/benchmarking-gnns 2 Mar 2020

In the last few years, graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs.

Benchmarking Deep Reinforcement Learning for Continuous Control

rllab/rllab 22 Apr 2016

Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning.

Technical Report on the CleverHans v2.1.0 Adversarial Examples Library

tensorflow/cleverhans 3 Oct 2016

An adversarial example library for constructing attacks, building defenses, and benchmarking both.

Habitat: A Platform for Embodied AI Research

facebookresearch/habitat-sim ICCV 2019

We present Habitat, a platform for research in embodied artificial intelligence (AI).

MS MARCO: A Human Generated MAchine Reading COmprehension Dataset

AmenRa/rank_eval 28 Nov 2016

The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering.