Classification

3288 papers with code • 38 benchmarks • 133 datasets

Classification is the task of categorizing data into predefined classes or groups. The aim is to train a model that correctly predicts the class of new, unseen data. The model is trained on a labeled dataset in which each instance is assigned a class label; the learning algorithm builds a mapping from the features of the data to the class labels, and this mapping is then used to predict the labels of unseen data points. Prediction quality is usually evaluated with metrics such as accuracy, precision, and recall, as in the sketch below.
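
A minimal sketch of this workflow, using scikit-learn on synthetic data; the library, model, and dataset here are illustrative assumptions, not tied to any paper on this page:

```python
# Minimal classification workflow: fit on labeled data, predict on unseen
# data, evaluate with accuracy/precision/recall.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Synthetic binary-classification data stands in for a real labeled dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # learn feature -> label mapping
y_pred = clf.predict(X_test)                                   # predict unseen data

print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
```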

Libraries

Use these libraries to find Classification models and implementations (14 libraries in total).

Most implemented papers

EfficientNetV2: Smaller Models and Faster Training

google/automl 1 Apr 2021

By pretraining on the same ImageNet21k, our EfficientNetV2 achieves 87.3% top-1 accuracy on ImageNet ILSVRC2012, outperforming the recent ViT by 2.0% accuracy while training 5x-11x faster using the same computing resources.
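
For readers who want to try the model, a hedged sketch of running a pretrained EfficientNetV2-S through tf.keras.applications; the weight source and input resolution are assumptions, and the paper's reference code lives in google/automl:

```python
# Classify an image with a pretrained EfficientNetV2-S (ImageNet-1k head).
import numpy as np
import tensorflow as tf

model = tf.keras.applications.EfficientNetV2S(weights="imagenet")

# Random noise stands in for a real 384x384 RGB image.
image = np.random.rand(1, 384, 384, 3).astype("float32") * 255.0
inputs = tf.keras.applications.efficientnet_v2.preprocess_input(image)

probs = model.predict(inputs)
print(tf.keras.applications.efficientnet_v2.decode_predictions(probs, top=3))
```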

Learning Transferable Architectures for Scalable Image Recognition

tensorflow/models CVPR 2018

In our experiments, we search for the best convolutional layer (or "cell") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of it, each with its own parameters, to design a convolutional architecture named the "NASNet architecture".
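
A toy PyTorch sketch of that scaling idea, stacking more copies of a cell where each copy gets its own parameters; the cell body below is a placeholder, not the actual searched NASNet cell:

```python
# Toy version of "stack more copies of the searched cell".
import torch
import torch.nn as nn

class Cell(nn.Module):
    """Placeholder for a searched cell; real NASNet cells are more complex."""
    def __init__(self, channels):
        super().__init__()
        self.op = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
        )

    def forward(self, x):
        return x + self.op(x)  # residual combination, common in searched cells

# Each Cell() call creates fresh parameters; stack more copies for a larger dataset.
backbone = nn.Sequential(*[Cell(64) for _ in range(12)])
out = backbone(torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```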

RegNet: Self-Regulated Network for Image Classification

keras-team/keras 3 Jan 2021

The ResNet and its variants have achieved remarkable successes in various computer vision tasks.

ResNet strikes back: An improved training procedure in timm

rwightman/pytorch-image-models NeurIPS Workshop ImageNet_PPF 2021

We share competitive training settings and pre-trained models in the timm open-source library, with the hope that they will serve as better baselines for future work.
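
A minimal sketch of pulling one of these baselines from timm; that recent timm versions serve the improved-recipe ResNet-50 weights by default is an assumption about the library:

```python
# Load a pretrained ResNet-50 baseline from the timm library.
import timm
import torch

model = timm.create_model("resnet50", pretrained=True)
model.eval()

x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image batch
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([1, 1000])
```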

GLM: General Language Model Pretraining with Autoregressive Blank Infilling

THUDM/GLM ACL 2022

On a wide range of tasks across NLU, conditional generation, and unconditional generation, GLM outperforms BERT, T5, and GPT given the same model sizes and data, and achieves the best performance from a single pretrained model with 1.25x the parameters of BERT-Large, demonstrating its generalizability to different downstream tasks.
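
A toy illustration of the autoregressive blank-infilling format (spans blanked from the input, then generated left to right per span); tokenization is simplified, and this is not the THUDM/GLM implementation:

```python
# Build (corrupted input, span targets) pairs for blank infilling.
tokens = ["the", "model", "is", "trained", "on", "labeled", "data"]
spans = [(1, 2), (5, 6)]  # (start, end) indices of blanked spans

corrupted, targets = [], []
last = 0
for start, end in spans:
    corrupted += tokens[last:start] + ["[MASK]"]  # blank out the span
    targets.append(tokens[start:end])             # span to generate autoregressively
    last = end
corrupted += tokens[last:]

print(corrupted)  # ['the', '[MASK]', 'is', 'trained', 'on', '[MASK]', 'data']
print(targets)    # [['model'], ['labeled']]
```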

ViViT: A Video Vision Transformer

google-research/scenic ICCV 2021

We present pure-transformer based models for video classification, drawing upon the recent success of such models in image classification.
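
A hedged sketch of one ingredient ViViT describes, embedding a video clip as transformer tokens with a 3D "tubelet" convolution; the sizes below are illustrative:

```python
# Turn a video clip into a sequence of transformer tokens.
import torch
import torch.nn as nn

# Non-overlapping 2x16x16 tubelets, each projected to a 768-dim token.
embed = nn.Conv3d(in_channels=3, out_channels=768,
                  kernel_size=(2, 16, 16), stride=(2, 16, 16))

video = torch.randn(1, 3, 16, 224, 224)     # (batch, channels, frames, H, W)
tokens = embed(video)                       # (1, 768, 8, 14, 14)
tokens = tokens.flatten(2).transpose(1, 2)  # (1, 8*14*14, 768) token sequence
print(tokens.shape)                         # torch.Size([1, 1568, 768])
```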

Resurrecting Recurrent Neural Networks for Long Sequences

Gothos/LRU-pytorch 11 Mar 2023

Recurrent Neural Networks (RNNs) offer fast inference on long sequences but are hard to optimize and slow to train.
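
A minimal illustration of the fast-inference claim: an RNN cell carries a fixed-size state, so per-step cost and memory stay constant however long the sequence gets. This toy uses a vanilla RNN cell, not the paper's linear recurrent unit:

```python
# Streaming RNN inference: O(1) work and memory per step.
import torch
import torch.nn as nn

cell = nn.RNNCell(input_size=32, hidden_size=64)
h = torch.zeros(1, 64)       # fixed-size state carried across steps

for _ in range(10_000):      # arbitrarily long input stream
    x_t = torch.randn(1, 32)
    h = cell(x_t, h)         # no growing cache, unlike attention

print(h.shape)  # torch.Size([1, 64])
```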

Revisiting RCNN: On Awakening the Classification Power of Faster RCNN

bowenc0221/Decoupled-Classification-Refinement ECCV 2018

Recent region-based object detectors are usually built with separate classification and localization branches on top of shared feature extraction networks.
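
A toy sketch of the design being revisited: separate classification and localization branches on top of shared features. The modules are stand-ins, not Faster R-CNN itself:

```python
# Two heads (class scores, box regression) over shared per-RoI features.
import torch
import torch.nn as nn

shared = nn.Sequential(nn.Linear(256, 1024), nn.ReLU())  # shared feature extractor
cls_head = nn.Linear(1024, 81)                           # class scores (e.g. 80 classes + background)
loc_head = nn.Linear(1024, 80 * 4)                       # per-class box deltas

roi_feats = torch.randn(16, 256)  # 16 pooled region proposals (stand-in)
feats = shared(roi_feats)
cls_logits, box_deltas = cls_head(feats), loc_head(feats)
print(cls_logits.shape, box_deltas.shape)  # torch.Size([16, 81]) torch.Size([16, 320])
```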

DropEdge: Towards Deep Graph Convolutional Networks on Node Classification

DropEdge/DropEdge ICLR 2020

\emph{Over-fitting} and \emph{over-smoothing} are two main obstacles of developing deep Graph Convolutional Networks (GCNs) for node classification.
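
A minimal sketch of the DropEdge operation itself, randomly removing a fraction of edges before each training pass, which acts like data augmentation against over-fitting and thins message passing against over-smoothing. Plain PyTorch on a COO edge list, not the authors' implementation:

```python
# Randomly drop a fraction of graph edges each epoch.
import torch

edge_index = torch.tensor([[0, 1, 1, 2, 3, 3],
                           [1, 0, 2, 1, 2, 0]])  # (2, num_edges) COO edges

def drop_edge(edge_index, p=0.3):
    keep = torch.rand(edge_index.size(1)) >= p   # keep each edge with prob 1 - p
    return edge_index[:, keep]

sparser = drop_edge(edge_index, p=0.3)
print(sparser.shape)  # (2, ~4): a random subset of the original edges
```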

Inverse Classification for Comparison-based Interpretability in Machine Learning

carla-recourse/CARLA 22 Dec 2017

In the context of post-hoc interpretability, this paper addresses the task of explaining the prediction of a classifier in the case where no information is available, neither on the classifier itself nor on the processed data (neither the training nor the test set).
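
A hedged toy of this black-box setting: with only query access to the classifier, grow a random perturbation until the prediction flips, then report which features moved. This illustrates the problem setting, not the paper's actual algorithm (see carla-recourse/CARLA for real recourse methods):

```python
# Query-only explanation: find a nearby input whose prediction differs.
import numpy as np

def black_box(x):                 # stand-in classifier we can only query
    return int(x.sum() > 1.0)

x = np.array([0.9, 0.3])          # instance to explain
label = black_box(x)

rng = np.random.default_rng(0)
for radius in np.linspace(0.05, 2.0, 40):   # widen the search gradually
    candidates = x + rng.normal(scale=radius, size=(256, x.size))
    flipped = [c for c in candidates if black_box(c) != label]
    if flipped:
        closest = min(flipped, key=lambda c: np.linalg.norm(c - x))
        print("minimal-ish change:", closest - x)  # features driving the decision
        break
```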