General Classification

3768 papers with code • 10 benchmarks • 8 datasets

Algorithms for the general task of classification.


Most implemented papers

Very Deep Convolutional Networks for Large-Scale Image Recognition

tensorflow/models 4 Sep 2014

In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting.

YOLO9000: Better, Faster, Stronger

AlexeyAB/darknet CVPR 2017

On the 156 classes not in COCO, YOLO9000 gets 16.0 mAP.

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications

tensorflow/tensorflow 17 Apr 2017

We present a class of efficient models called MobileNets for mobile and embedded vision applications.
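The efficiency of MobileNets comes from replacing standard convolutions with depthwise separable convolutions (a depthwise filter per channel followed by a 1x1 pointwise convolution). A back-of-the-envelope parameter count shows the saving; the kernel size and channel counts below are illustrative assumptions, not figures from the paper:

```python
def conv_params(k, c_in, c_out):
    """Parameters in a standard k x k convolution."""
    return k * k * c_in * c_out

def separable_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel + 1x1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 64, 128)       # 73728
sep = separable_params(3, 64, 128)  # 8768, roughly 8x fewer parameters
```

The same ratio applies to multiply-adds, which is why the architecture suits mobile and embedded hardware.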

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization

ramprs/grad-cam ICCV 2017

For captioning and VQA, we show that even non-attention based models can localize inputs.

Convolutional Neural Networks for Sentence Classification

facebookresearch/pytext EMNLP 2014

We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks.

SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation

PaddlePaddle/PaddleSeg 2 Nov 2015

We show that SegNet provides good performance with competitive inference time and more efficient inference memory-wise as compared to other architectures.

Going Deeper with Convolutions

worksheets/0xbcd424d2 CVPR 2015

We propose a deep convolutional neural network architecture codenamed "Inception", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014).

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

tensorflow/models 11 Feb 2015

Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change.
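Batch normalization addresses this by normalizing each layer input over the mini-batch, then applying a learned scale and shift. A minimal NumPy sketch of the forward pass (training-time statistics only; the learned `gamma`/`beta` and running averages for inference are part of the full method):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch, then scale by gamma and shift by beta."""
    mean = x.mean(axis=0)                 # per-feature batch mean
    var = x.var(axis=0)                   # per-feature batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.randn(32, 4) * 10 + 5      # batch of 32 samples, 4 features, shifted/scaled
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
# y now has (approximately) zero mean and unit variance per feature
```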

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

cbfinn/maml ICML 2017

We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
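The core of MAML is a two-level loop: an inner gradient step adapts the parameters to each task, and an outer step updates the initialization so that this adaptation works well across tasks. A toy sketch with a scalar parameter and analytic gradients (the quadratic task family and hyperparameters are illustrative assumptions, not the paper's setup):

```python
# Toy MAML: adapt a scalar theta to tasks y = a, with per-task loss
# L(theta; a) = (theta - a)^2, so gradients can be written analytically.
def inner_update(theta, a, alpha=0.1):
    grad = 2 * (theta - a)           # dL/dtheta for this task
    return theta - alpha * grad      # one adaptation step

def maml_step(theta, tasks, alpha=0.1, beta=0.1):
    meta_grad = 0.0
    for a in tasks:
        theta_prime = inner_update(theta, a, alpha)
        # Differentiate through the inner step: d/dtheta L(theta') =
        # 2*(theta_prime - a) * (1 - 2*alpha)   (chain rule)
        meta_grad += 2 * (theta_prime - a) * (1 - 2 * alpha)
    return theta - beta * meta_grad / len(tasks)

theta = 0.0
tasks = [1.0, -1.0, 3.0]
for _ in range(200):
    theta = maml_step(theta, tasks)
# theta converges to the initialization from which one gradient step
# reaches each task best -- here, the mean of the task optima (1.0)
```

The key detail is that the meta-gradient differentiates *through* the inner update, which is what makes the learned initialization sensitive to how adaptation behaves rather than just average performance.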