Fine-Grained Image Classification

170 papers with code • 35 benchmarks • 36 datasets

Fine-Grained Image Classification is a computer vision task whose goal is to classify images into subcategories within a larger category, for example, distinguishing different species of birds or different types of flowers. The task is considered fine-grained because the model must pick up on subtle differences in visual appearance and patterns, making it more challenging than standard image classification.

(Image credit: Looking for the Devil in the Details)
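
A common baseline for this task is to fine-tune an ImageNet-pretrained backbone on a fine-grained dataset. Below is a minimal PyTorch sketch along those lines; the choice of Flowers102, ResNet-50, and all hyperparameters are illustrative assumptions, not a prescribed recipe.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.Flowers102(root="data", split="train",
                                transform=transform, download=True)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Swap the ImageNet head for one matching the 102 flower subcategories.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 102)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```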

Most implemented papers

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale

google-research/vision_transformer ICLR 2021

While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited.
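
The paper's central move is to treat an image as a sequence of 16x16 patches, each linearly projected into a token, with a learned class token and position embeddings prepended before the standard Transformer encoder. A minimal sketch of that patch-embedding step (dimensions follow the ViT-Base configuration; the module name is ours):

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split an image into fixed-size patches and linearly embed each one.
    A stride-p convolution is equivalent to flattening non-overlapping
    p x p patches and applying a shared linear projection."""
    def __init__(self, img_size=224, patch_size=16, in_chans=3, dim=768):
        super().__init__()
        self.proj = nn.Conv2d(in_chans, dim,
                              kernel_size=patch_size, stride=patch_size)
        num_patches = (img_size // patch_size) ** 2
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))

    def forward(self, x):
        x = self.proj(x)                  # (B, dim, H/p, W/p)
        x = x.flatten(2).transpose(1, 2)  # (B, num_patches, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1)    # prepend classification token
        return x + self.pos_embed         # add learned position embeddings

tokens = PatchEmbedding()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 197, 768])
```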

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

tensorflow/tpu ICML 2019

Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available.
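
The paper's compound scaling rule grows depth, width, and input resolution together by constants alpha, beta, gamma raised to a single coefficient phi, with alpha * beta^2 * gamma^2 ~= 2 so FLOPs grow roughly 2^phi. A small sketch using the published coefficients; note the released B1-B7 models round these values rather than applying the formula verbatim:

```python
# Compound scaling: depth d = alpha^phi, width w = beta^phi,
# resolution r = gamma^phi. The constants below are the values the
# paper reports from a grid search on the B0 baseline.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def scale(phi, base_resolution=224):
    depth_mult = ALPHA ** phi
    width_mult = BETA ** phi
    resolution = int(round(base_resolution * GAMMA ** phi))
    return depth_mult, width_mult, resolution

for phi in range(4):
    d, w, r = scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, input {r}px")
```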

AutoAugment: Learning Augmentation Policies from Data

tensorflow/models 24 May 2018

In our implementation, we have designed a search space where a policy consists of many sub-policies, one of which is randomly chosen for each image in each mini-batch.
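
That policy structure is easy to sketch: each sub-policy is a short sequence of (operation, probability, magnitude) triples, and one sub-policy is sampled per image. The sub-policies below are invented for illustration; the learned policies are tabulated in the paper:

```python
import random
from PIL import Image, ImageOps

# Illustrative sub-policies, NOT the learned ones from the paper.
SUBPOLICIES = [
    [("posterize", 0.4, 4), ("rotate", 0.6, 30)],
    [("solarize", 0.6, 128), ("autocontrast", 0.6, 0)],
]

OPS = {
    "posterize": lambda img, m: ImageOps.posterize(img, int(m)),
    "rotate": lambda img, m: img.rotate(m),
    "solarize": lambda img, m: ImageOps.solarize(img, m),
    "autocontrast": lambda img, m: ImageOps.autocontrast(img),
}

def autoaugment(img: Image.Image) -> Image.Image:
    # Draw one sub-policy per image; apply each step with its probability.
    for name, prob, magnitude in random.choice(SUBPOLICIES):
        if random.random() < prob:
            img = OPS[name](img, magnitude)
    return img
```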

Training data-efficient image transformers & distillation through attention

facebookresearch/deit 23 Dec 2020

In this work, we produce a competitive convolution-free transformer by training on ImageNet only.
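
Alongside the data-efficient training recipe, DeiT introduces distillation through attention: a dedicated distillation token whose output is supervised by a teacher's hard predictions while the class token follows the ground truth. A sketch of the resulting loss (function and argument names are ours):

```python
import torch.nn.functional as F

def deit_hard_distillation_loss(cls_logits, dist_logits,
                                teacher_logits, targets):
    # All logits have shape (batch, num_classes).
    loss_cls = F.cross_entropy(cls_logits, targets)
    teacher_labels = teacher_logits.argmax(dim=1)  # hard teacher decision
    loss_dist = F.cross_entropy(dist_logits, teacher_labels)
    return 0.5 * loss_cls + 0.5 * loss_dist
```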

ResMLP: Feedforward networks for image classification with data-efficient training

facebookresearch/deit NeurIPS 2021

We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification.
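
Each ResMLP block alternates a linear layer that mixes patches (tokens) with a two-layer MLP that mixes channels, using a simple per-channel affine transform in place of LayerNorm. A simplified PyTorch sketch (the paper's LayerScale-style residual scaling is omitted; dimensions are assumptions):

```python
import torch
import torch.nn as nn

class Affine(nn.Module):
    """Per-channel affine transform; ResMLP uses this instead of LayerNorm."""
    def __init__(self, dim):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(dim))
        self.beta = nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        return self.alpha * x + self.beta

class ResMLPBlock(nn.Module):
    def __init__(self, num_patches=196, dim=384, expansion=4):
        super().__init__()
        self.norm1, self.norm2 = Affine(dim), Affine(dim)
        self.token_mix = nn.Linear(num_patches, num_patches)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, dim * expansion), nn.GELU(),
            nn.Linear(dim * expansion, dim),
        )

    def forward(self, x):  # x: (B, num_patches, dim)
        # Mix across patches: transpose so the linear layer acts on tokens.
        x = x + self.token_mix(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        # Mix across channels with a standard two-layer MLP.
        x = x + self.channel_mlp(self.norm2(x))
        return x
```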

Sharpness-Aware Minimization for Efficiently Improving Generalization

google-research/sam ICLR 2021

In today's heavily overparameterized models, the value of the training loss provides few guarantees on model generalization ability.
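
SAM therefore seeks parameters whose entire neighborhood has uniformly low loss rather than a single low-loss point: each update first perturbs the weights toward the locally worst case within a radius rho, then descends using the gradient measured there. A minimal sketch of one such update (function name and the rho default are assumptions; this is a sketch of the idea, not the authors' implementation):

```python
import torch

def sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
    # First forward/backward: gradient at the current weights.
    loss_fn(model(x), y).backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grads = [p.grad.clone() for p in params]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))

    # Ascend to the approximate worst case inside the rho-ball.
    with torch.no_grad():
        eps = [rho * g / (grad_norm + 1e-12) for g in grads]
        for p, e in zip(params, eps):
            p.add_(e)

    # Second forward/backward: gradient at the perturbed point.
    model.zero_grad()
    loss_fn(model(x), y).backward()

    # Undo the perturbation, then step with the sharpness-aware gradient.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    base_optimizer.step()
    model.zero_grad()
```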

Learning to Navigate for Fine-grained Classification

yangze0930/NTS-Net ECCV 2018

In consideration of the intrinsic consistency between the informativeness of a region and its probability of being the ground-truth class, we design a novel training paradigm that enables the Navigator to detect the most informative regions under the guidance of the Teacher.
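
One way to read that consistency requirement is as a pairwise ranking constraint: whenever the Teacher is more confident about a region, the Navigator should assign it a higher informativeness score. A hinge-style sketch of such a loss (this is our illustrative reading, not the authors' exact formulation):

```python
import torch
import torch.nn.functional as F

def ranking_loss(nav_scores, teacher_conf, margin=1.0):
    # nav_scores, teacher_conf: (num_regions,) for one image.
    s_i = nav_scores.unsqueeze(1)  # (R, 1)
    s_j = nav_scores.unsqueeze(0)  # (1, R)
    # better[i, j] = 1 where the Teacher prefers region i over region j.
    better = (teacher_conf.unsqueeze(1) > teacher_conf.unsqueeze(0)).float()
    # Penalize pairs whose Navigator scores violate the Teacher's order.
    hinge = F.relu(margin - (s_i - s_j))
    return (better * hinge).sum() / better.sum().clamp(min=1.0)
```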

GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism

tensorflow/lingvo NeurIPS 2019

Scaling up deep neural network capacity has been known as an effective approach to improving model quality for several different machine learning tasks.
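
GPipe makes that scaling practical by partitioning a sequential network into stages on different accelerators and splitting each mini-batch into micro-batches, so stages work on different micro-batches in parallel while gradients accumulate to match the full-batch update. A toy single-device sketch of the micro-batching and accumulation (stage sizes and the chunk count are assumptions):

```python
import torch
import torch.nn as nn

# Two "stages"; on real hardware each would live on its own accelerator.
stages = nn.ModuleList([
    nn.Sequential(nn.Linear(64, 128), nn.ReLU()),
    nn.Sequential(nn.Linear(128, 10)),
])
optimizer = torch.optim.SGD(stages.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 64)
y = torch.randint(0, 10, (32,))

optimizer.zero_grad()
for micro_x, micro_y in zip(x.chunk(4), y.chunk(4)):  # 4 micro-batches
    h = micro_x
    for stage in stages:  # in a real pipeline, stages overlap in time
        h = stage(h)
    loss = loss_fn(h, micro_y) / 4  # scale so grads match the full-batch mean
    loss.backward()                 # accumulate gradients per micro-batch
optimizer.step()
```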

Transformer in Transformer

huawei-noah/CV-Backbones NeurIPS 2021

In this paper, we point out that the attention inside these local patches is also essential for building visual transformers with high performance, and we explore a new architecture, namely Transformer iN Transformer (TNT).
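
Concretely, TNT runs an inner transformer over pixel-level embeddings within each patch and an outer transformer over patch-level embeddings, folding the inner output back into the patch tokens. A pared-down sketch built from stock PyTorch layers (real TNT blocks differ in normalization and projection details; all dimensions are assumptions):

```python
import torch
import torch.nn as nn

class SimplifiedTNTBlock(nn.Module):
    def __init__(self, inner_dim=24, outer_dim=384, pixels_per_patch=16):
        super().__init__()
        self.inner = nn.TransformerEncoderLayer(
            d_model=inner_dim, nhead=4, batch_first=True)
        self.outer = nn.TransformerEncoderLayer(
            d_model=outer_dim, nhead=6, batch_first=True)
        self.proj = nn.Linear(pixels_per_patch * inner_dim, outer_dim)

    def forward(self, pixel_tokens, patch_tokens):
        # pixel_tokens: (B * num_patches, pixels_per_patch, inner_dim)
        # patch_tokens: (B, num_patches, outer_dim)
        B, N, _ = patch_tokens.shape
        pixel_tokens = self.inner(pixel_tokens)            # attention inside each patch
        fused = self.proj(pixel_tokens.reshape(B, N, -1))  # fold pixels into patch dim
        patch_tokens = self.outer(patch_tokens + fused)    # attention across patches
        return pixel_tokens, patch_tokens
```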

ResNet strikes back: An improved training procedure in timm

rwightman/pytorch-image-models NeurIPS 2021 Workshop (ImageNet: Past, Present and Future)

We share competitive training settings and pre-trained models in the timm open-source library, with the hope that they will serve as better baselines for future work.
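
Those baselines are easiest to reach through timm itself. A brief sketch (the 200-class head, e.g. for CUB-200-2011, is an illustrative assumption, and which checkpoint `pretrained=True` resolves to depends on the installed timm version):

```python
import timm

# Browse available ResNet-50 variants shipped with timm.
print(timm.list_models("resnet50*")[:5])

# Load pretrained weights and replace the classifier head for a
# 200-class fine-grained dataset.
model = timm.create_model("resnet50", pretrained=True, num_classes=200)
```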