
Image Classification

272 papers with code · Computer Vision

Image classification is the task of assigning an input image a label from a fixed set of categories.

State-of-the-art leaderboards

Latest papers with code

Probabilistic Discriminative Learning with Layered Graphical Models

31 Jan 2019 · tum-vision/lgm

Probabilistic graphical models are traditionally known for their successes in generative modeling. In this work, we advocate layered graphical models (LGMs) for probabilistic discriminative learning.



Pathologist-level classification of histologic patterns on resected lung adenocarcinoma slides with deep neural networks

31 Jan 2019 · BMIRDS/deepslide

Our model achieved a kappa score of 0.525 and an agreement of 66.6% with three pathologists for classifying the predominant patterns, slightly higher than the inter-pathologist kappa score of 0.485 and agreement of 62.7% on this test set. If confirmed in clinical practice, our model can assist pathologists in improving classification of lung adenocarcinoma patterns by automatically pre-screening and highlighting cancerous regions prior to review.
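The kappa scores quoted above measure chance-corrected agreement. As a minimal sketch (the two-rater form of Cohen's kappa; the paper compares against three pathologists, which involves a multi-rater variant), kappa is the observed agreement minus the agreement expected by chance, rescaled:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # observed agreement
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    # expected agreement if both raters labeled independently at their base rates
    p_e = sum(count_a[c] * count_b[c] for c in set(rater_a) | set(rater_b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Perfect agreement gives kappa = 1, while agreement no better than chance gives kappa near 0, which is why a kappa of 0.525 is more informative than the raw 66.6% agreement figure.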



Evaluating Generalization Ability of Convolutional Neural Networks and Capsule Networks for Image Classification via Top-2 Classification

29 Jan 2019 · leftthomas/PSCapsNet

In this paper, we propose a new image classification task called Top-2 classification to evaluate the generalization ability of CNNs and CapsNets. The models are trained on single-label image samples, as in the traditional image classification task.
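The evaluation metric behind such a task can be sketched as top-k accuracy: a prediction counts as correct if the true label appears among the model's k highest-scoring classes. A minimal version (function name and array layout are illustrative, not from the paper):

```python
import numpy as np

def top_k_accuracy(scores, labels, k=2):
    """Fraction of samples whose true label is among the k highest-scoring classes.

    scores: (num_samples, num_classes) array of class scores
    labels: length-num_samples array of true class indices
    """
    topk = np.argsort(scores, axis=1)[:, -k:]               # indices of the k largest scores
    return float((topk == np.asarray(labels)[:, None]).any(axis=1).mean())
```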



Convolutional Neural Networks with Layer Reuse

28 Jan 2019 · okankop/CNN-layer-reuse

If the same patterns also occur at the deeper layers of the network, why shouldn't the same convolutional filters be used in those layers as well? In this paper, we propose a CNN architecture, Layer Reuse Network (LruNet), in which the convolutional layers are applied repeatedly, without introducing new layers, to achieve better performance.
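The core idea can be sketched in a few lines: one set of weights is applied several times in sequence, so the network gains depth without gaining parameters. The sketch below uses a dense matrix as a stand-in for a convolutional filter bank; all names and sizes are illustrative, not from LruNet:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(16, 16))   # one shared weight matrix (stand-in for one conv layer)

def layer_reuse_block(x, n_reuse=4):
    """Apply the same layer n_reuse times: n_reuse layers of depth, one layer of parameters."""
    for _ in range(n_reuse):
        x = np.maximum(W @ x, 0.0)         # shared weights followed by ReLU on every pass
    return x
```

With four reuses, this block has the effective depth of four layers while storing only one layer's parameters.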



Fixup Initialization: Residual Learning Without Normalization

ICLR 2019 · valilenk/fixup

Normalization layers are a staple of state-of-the-art deep neural network architectures. They are widely believed to stabilize training, enable higher learning rates, accelerate convergence, and improve generalization, though the reason for their effectiveness is still an active research topic.


27 Jan 2019

Equivariant Transformer Networks

25 Jan 2019 · stanford-futuredata/equivariant-transformers

How can prior knowledge about the transformation invariances of a domain be incorporated into the architecture of a neural network? We propose Equivariant Transformers (ETs), a family of differentiable image-to-image mappings that improve the robustness of models to pre-defined continuous transformation groups.



Understanding the Impact of Label Granularity on CNN-based Image Classification

21 Jan 2019 · cmu-enyac/Label-Granularity

In this paper, we conduct extensive experiments on various datasets to demonstrate and analyze how and why training on fine-grain labels, such as "Persian cat," can improve CNN accuracy on classifying coarse-grain classes, in this case "cat." For example, a CNN trained with fine-grain labels and only 40% of the total training data can achieve higher accuracy than a CNN trained with the full training dataset and coarse-grain labels.
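Evaluating a fine-grain-trained model on coarse-grain classes only requires a many-to-one label mapping at test time. A minimal sketch (the mapping and class names below are hypothetical, not taken from the paper's datasets):

```python
# Illustrative fine -> coarse label mapping; class names are hypothetical.
FINE_TO_COARSE = {
    "persian_cat": "cat", "siamese_cat": "cat",
    "beagle": "dog", "husky": "dog",
}

def coarse_accuracy(fine_predictions, fine_truth):
    """Coarse-level accuracy of a model that was trained to predict fine labels."""
    hits = sum(FINE_TO_COARSE[p] == FINE_TO_COARSE[t]
               for p, t in zip(fine_predictions, fine_truth))
    return hits / len(fine_truth)
```

Note that a fine-grain prediction can be wrong (e.g. "Persian cat" vs. "Siamese cat") while still being correct at the coarse level, which is exactly the regime the paper studies.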



Impact of Fully Connected Layers on Performance of Convolutional Neural Networks for Image Classification

21 Jan 2019 · shabbeersh/Impact-of-FC-layers

To automate the process of learning a CNN architecture, this letter attempts to find the relationship between Fully Connected (FC) layers and certain characteristics of the datasets: (i) What is the impact of deeper/shallower architectures on the performance of the CNN w.r.t. FC layers? (ii) How do deeper/wider datasets influence the performance of the CNN w.r.t. FC layers? (iii) Which kind of architecture (deeper/shallower) is better suited to which kind of (deeper/wider) dataset?



Class-Balanced Loss Based on Effective Number of Samples

16 Jan 2019 · richardaecn/class-balanced-loss

With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss.
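The re-weighting scheme described above can be sketched directly from its formula: the effective number of samples for a class with n samples is E_n = (1 − β^n) / (1 − β), and each class is weighted inversely to E_n. The normalization convention below (weights summing to the number of classes) is one common choice and is assumed here, not quoted from the abstract:

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    """Per-class loss weights from the effective number of samples.

    E_n = (1 - beta**n) / (1 - beta); weights are proportional to 1 / E_n,
    normalized so they sum to the number of classes.
    """
    n = np.asarray(samples_per_class, dtype=float)
    effective_num = (1.0 - np.power(beta, n)) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights * len(n) / weights.sum()
```

With β close to 1 the weights approach inverse-frequency weighting, while β = 0 recovers the unweighted loss, so β interpolates between the two extremes.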



FishNet: A Versatile Backbone for Image, Region, and Pixel Level Prediction

NeurIPS 2018 · zsef123/Fishnet-PyTorch

The basic principles for designing convolutional neural network (CNN) structures that predict objects at different levels, e.g., image level, region level, and pixel level, are diverging. Generally, network structures designed specifically for image classification are used directly as the default backbone for other tasks, including detection and segmentation, but few backbone structures are designed to unify the advantages of networks built for pixel-level or region-level prediction tasks, which may require very deep features at high resolution.


11 Jan 2019