Classification

3193 papers with code • 33 benchmarks • 117 datasets

Classification is the task of assigning data to predefined classes or groups. The goal is to train a model that correctly predicts the class of new, unseen data. The model is trained on a labeled dataset in which each instance is assigned a class label; the learning algorithm builds a mapping between the features of the data and the class labels, and this mapping is then used to predict the class of new, unseen data points. Prediction quality is usually evaluated with metrics such as accuracy, precision, and recall.
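
As a rough illustration of this workflow, the following minimal sketch uses scikit-learn and its built-in iris dataset (both arbitrary choices made here for illustration, not tied to any of the papers below): fit a classifier on labeled training data, predict on held-out instances, and report accuracy, precision, and recall.

```python
# Minimal classification sketch, assuming scikit-learn and the iris dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Labeled dataset: each instance (row of X) is assigned a class label (entry of y).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# The learning algorithm builds a mapping from the features to the class labels.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The mapping is used to predict the class of new, unseen data points.
y_pred = model.predict(X_test)

print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred, average="macro"))
print("recall   :", recall_score(y_test, y_pred, average="macro"))
```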

Libraries

Use these libraries to find Classification models and implementations
See all 13 libraries.

Most implemented papers

Deep Residual Learning for Image Recognition

tensorflow/models CVPR 2016

Deep residual nets are the foundations of our submissions to the ILSVRC & COCO 2015 competitions, where we also won 1st place on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
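
The excerpt above highlights competition results; the core idea named in the title is residual learning, where a block learns a residual function F(x) and adds it to an identity shortcut so the output is F(x) + x. The sketch below illustrates that idea in PyTorch; the framework and the exact block layout are assumptions for illustration, not the paper's reference implementation.

```python
# Minimal residual-block sketch (PyTorch assumed); output = F(x) + x via an identity shortcut.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # add the identity shortcut before the final activation

block = ResidualBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```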

YOLOv3: An Incremental Improvement

open-mmlab/mmdetection 8 Apr 2018

At 320x320, YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster.

Very Deep Convolutional Networks for Large-Scale Image Recognition

tensorflow/models 4 Sep 2014

In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting.

Densely Connected Convolutional Networks

liuzhuang13/DenseNet CVPR 2017

Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output.
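
A minimal sketch of that dense connectivity pattern, in PyTorch (the framework and layer layout are assumptions for illustration, not the paper's reference code): every layer receives the concatenation of all preceding feature maps, which is what creates the short connections between early and late layers.

```python
# Minimal dense-block sketch (PyTorch assumed): each layer sees all earlier feature maps.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth_rate: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            )
            for i in range(num_layers)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            # Concatenate every earlier output before applying the next layer.
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

block = DenseBlock(in_channels=16, growth_rate=12, num_layers=4)
print(block(torch.randn(1, 16, 32, 32)).shape)  # torch.Size([1, 64, 32, 32]): 16 + 4 * 12 channels
```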

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale

google-research/vision_transformer ICLR 2021

While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited.
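
The idea in the title is to treat an image as a sequence of fixed-size patches and feed them to a standard Transformer encoder. The sketch below illustrates that in PyTorch; the framework, dimensions, and use of a strided convolution for patch embedding are assumptions for illustration, and the class token and position embeddings of the full model are omitted.

```python
# Minimal patch-embedding sketch (PyTorch assumed): 16x16 patches become Transformer tokens.
import torch
import torch.nn as nn

patch_size, embed_dim = 16, 192

# A strided convolution cuts the image into non-overlapping 16x16 patches and
# linearly projects each one in a single step.
patch_embed = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True),
    num_layers=2,
)

image = torch.randn(1, 3, 224, 224)                     # one RGB image
tokens = patch_embed(image).flatten(2).transpose(1, 2)  # (1, 196, 192): a 14x14 grid of patch tokens
print(encoder(tokens).shape)                            # torch.Size([1, 196, 192])
```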

Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning

tensorflow/models 23 Feb 2016

Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network.

Searching for MobileNetV3

tensorflow/models ICCV 2019

We achieve new state of the art results for mobile classification, detection and segmentation.

Convolutional Pose Machines

CMU-Perceptual-Computing-Lab/convolutional-pose-machines-release CVPR 2016

Pose Machines provide a sequential prediction framework for learning rich implicit spatial models.

A ConvNet for the 2020s

facebookresearch/ConvNeXt CVPR 2022

The "Roaring 20s" of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model.

Xception: Deep Learning with Depthwise Separable Convolutions

tensorflow/models CVPR 2017

We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution).
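
A minimal sketch of the depthwise separable convolution described above, in PyTorch (an assumed framework, not the paper's reference implementation): a depthwise convolution applies one filter per input channel, and a 1x1 pointwise convolution then mixes information across channels.

```python
# Minimal depthwise separable convolution sketch (PyTorch assumed).
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # Depthwise: groups=in_channels gives one 3x3 filter per channel.
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=1, groups=in_channels, bias=False)
        # Pointwise: 1x1 convolution mixes channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

conv = DepthwiseSeparableConv(32, 64)
print(conv(torch.randn(1, 32, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```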