Vision transformers (ViTs) have found only limited practical use in processing images, in spite of their state-of-the-art accuracy on certain benchmarks.
Ranked #10 on Image Classification on Tiny ImageNet Classification
Capturing high-resolution magnetic resonance (MR) images is a time-consuming process, which makes it unsuitable for medical emergencies and pediatric patients.
Training CNNs from scratch on new domains typically demands large numbers of labeled images and heavy computation, which is infeasible on low-power hardware.
Our work suggests that research on model structures that exploit the right inductive bias is far from over, and that such models can enable the training of computer vision models in settings with limited GPU resources.
Ranked #168 on Image Classification on CIFAR-10
Secondly, we introduced an inductive bias for images by replacing the initial linear embedding layer in ViX with convolutional layers, which significantly increased classification accuracy without increasing the model size.
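The ViX architecture and its exact convolutional stem are not specified in this snippet; the following is a minimal NumPy sketch, with arbitrarily chosen layer sizes, contrasting the standard ViT linear patch embedding with a stack of small strided convolutions that produces the same token grid. All names and shapes here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((3, 32, 32))  # CHW, e.g. one CIFAR-10 image

# Standard ViT embedding: split into non-overlapping 4x4 patches,
# flatten each patch, and apply a single linear projection.
def linear_patch_embed(img, patch=4, dim=64):
    c, h, w = img.shape
    W = rng.standard_normal((c * patch * patch, dim)) * 0.02
    patches = (img.reshape(c, h // patch, patch, w // patch, patch)
                  .transpose(1, 3, 0, 2, 4)
                  .reshape(-1, c * patch * patch))
    return patches @ W  # (num_patches, dim)

tokens = linear_patch_embed(image)  # (64, 64): 8x8 patches, 64-dim each

# Naive strided 2D convolution (loops for clarity, not speed).
def conv2d(x, w, stride=1, pad=0):
    c_out, c_in, k, _ = w.shape
    x = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    h_out = (x.shape[1] - k) // stride + 1
    w_out = (x.shape[2] - k) // stride + 1
    y = np.zeros((c_out, h_out, w_out))
    for o in range(c_out):
        for i in range(h_out):
            for j in range(w_out):
                y[o, i, j] = np.sum(
                    w[o] * x[:, i*stride:i*stride+k, j*stride:j*stride+k])
    return y

# Convolutional stem: two 3x3 stride-2 convolutions with ReLU in between
# reach the same 8x8 token grid as the 4x4 patch split, but with
# overlapping receptive fields (the image-specific inductive bias).
w1 = rng.standard_normal((16, 3, 3, 3)) * 0.02
w2 = rng.standard_normal((64, 16, 3, 3)) * 0.02
x1 = np.maximum(conv2d(image, w1, stride=2, pad=1), 0.0)  # (16, 16, 16)
x2 = conv2d(x1, w2, stride=2, pad=1)                      # (64, 8, 8)
tokens_conv = x2.reshape(64, -1).T                        # (64, 64)
```

Both embeddings yield 64 tokens of dimension 64, so the rest of the transformer is unchanged; only the tokenizer differs.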
Ranked #172 on Image Classification on CIFAR-10
The performance of deep learning algorithms decreases drastically if the data distributions of the training and testing sets differ.
To counter the paucity of data, we also deploy another head on the scoring network for regularization via multi-task learning and use an unusual self-balancing hybrid scoring function.
Current research suggests that the key factors in designing neural network architectures involve choosing the number of filters for each convolutional layer, the number of hidden neurons for each fully connected layer, and the dropout and pruning rates.
Remarkably, without retraining on target datasets, our pre-trained nucleus detector also outperformed existing nucleus detectors that were trained on at least some of the images from the target datasets.
With the increase in the use of deep learning for computer-aided diagnosis in medical images, the criticism of the black-box nature of the deep learning models is also on the rise.
Deep neural networks have revolutionized medical image analysis and disease diagnosis.
no code implementations • 6 Apr 2020 • Mukesh Kumar Vishal, Dipesh Tamboli, Abhijeet Patil, Rohit Saluja, Biplab Banerjee, Amit Sethi, Dhandapani Raju, Sudhir Kumar, R N Sahoo, Viswanathan Chinnusamy, J Adinarayana
The present investigation was carried out to discriminate between drought-tolerant and drought-susceptible genotypes.
Survival models are used in various fields, such as the development of cancer treatment protocols.
We aim to provide a better interpretation of classification results by providing localization on microscopic histopathology images.
One of the first steps in the diagnosis of most cardiac diseases, such as pulmonary hypertension and coronary heart disease, is the segmentation of the ventricles from cardiac magnetic resonance (MR) images.
The spatial arrangement of cells of various types, such as tumor-infiltrating lymphocytes and the advancing edge of a tumor, is an important feature for detecting and characterizing cancers.
Normalizing unwanted color variations due to differences in staining processes and scanner responses has been shown to aid machine learning in computational pathology.
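Stain-normalization methods vary; as an illustration, here is a minimal Reinhard-style sketch that matches the per-channel mean and standard deviation of a source image to those of a target image. The actual Reinhard method operates in the LAB color space rather than RGB, and the function name and simplification are assumptions for illustration only.

```python
import numpy as np

def normalize_stain(source, target):
    """Match per-channel mean and std of `source` to `target` (uint8 RGB).

    Simplified Reinhard-style statistic matching done directly in RGB;
    the original method converts to LAB first.
    """
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    out = np.empty_like(src)
    for c in range(3):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-8
        t_mu, t_sd = tgt[..., c].mean(), tgt[..., c].std()
        # Shift/scale source statistics onto the target's.
        out[..., c] = (src[..., c] - s_mu) / s_sd * t_sd + t_mu
    return np.clip(out, 0, 255).astype(np.uint8)
```

Applied before training, this maps slides from different scanners and staining batches toward a common color distribution, which is the effect the sentence above describes.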
While convolutional neural networks (CNNs) have recently made great strides in supervised classification of data structured on a grid (e.g., images composed of pixel grids), in several interesting datasets, the relations between features can be better represented as a general graph instead of a regular grid.
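To make the grid-to-graph generalization concrete, here is a minimal sketch of one GCN-style graph-convolution step, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W), on an arbitrary toy graph; the specific architecture in the snippet above is not given, so this is only the standard propagation rule.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy 4-node path graph with 2-dim node features projected to 3 dims.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 2))
W = rng.standard_normal((2, 3))
H1 = gcn_layer(A, H, W)  # (4, 3): each node now mixes its neighbors
```

Where a CNN averages a fixed pixel neighborhood, this layer averages over whatever neighborhood the adjacency matrix `A` defines.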
In this paper, we propose a deep learning-based method for classification of H&E-stained breast tissue images released for the BACH challenge 2018 by fine-tuning the Inception-v3 convolutional neural network (CNN) proposed by Szegedy et al.
Next, features are extracted from each frame using a convolutional neural network (CNN) that is trained to classify between normal and abnormal frames.
We propose a method to classify images from target classes with a small number of training examples based on transfer learning from non-target classes.