Search Results for author: Sara Atito

Found 12 papers, 4 papers with code

DiCoM -- Diverse Concept Modeling towards Enhancing Generalizability in Chest X-Ray Studies

no code implementations • 22 Feb 2024 • Abhijeet Parida, Daniel Capellan-Martin, Sara Atito, Muhammad Awais, Maria J. Ledesma-Carbayo, Marius G. Linguraru, Syed Muhammad Anwar

In this context, we introduce Diverse Concept Modeling (DiCoM), a novel self-supervised training paradigm that leverages a student-teacher framework for learning diverse concepts and hence an effective representation of the CXR data.
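
A minimal sketch of a generic student-teacher self-supervised update, in the spirit of the framework described in the abstract above; the encoder outputs, momentum value, and temperature below are illustrative assumptions rather than DiCoM's actual implementation.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def update_teacher(student, teacher, momentum=0.996):
    # The teacher's weights track the student via an exponential moving average.
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.data.mul_(momentum).add_(ps.data, alpha=1.0 - momentum)


def distillation_loss(student_out, teacher_out, temperature=0.1):
    # The student is trained to match the (stop-gradient) teacher distribution
    # produced from a differently augmented view of the same chest X-ray.
    teacher_probs = F.softmax(teacher_out.detach() / temperature, dim=-1)
    student_logp = F.log_softmax(student_out / temperature, dim=-1)
    return -(teacher_probs * student_logp).sum(dim=-1).mean()
```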

LT-ViT: A Vision Transformer for multi-label Chest X-ray classification

no code implementations • 13 Nov 2023 • Umar Marikkar, Sara Atito, Muhammad Awais, Adam Mahdi

Vision Transformers (ViTs) are widely adopted in medical imaging tasks, and some existing efforts have been directed towards vision-language training for Chest X-rays (CXRs).

Masked Momentum Contrastive Learning for Zero-shot Semantic Understanding

no code implementations • 22 Aug 2023 • Jiantao Wu, Shentong Mo, Muhammad Awais, Sara Atito, ZhenHua Feng, Josef Kittler

Self-supervised pretraining (SSP) has emerged as a popular technique in machine learning, enabling the extraction of meaningful feature representations without labelled data.

Contrastive Learning, Object, +6

Variational autoencoder with decremental information bottleneck for disentanglement

1 code implementation • 22 Mar 2023 • Jiantao Wu, Shentong Mo, Muhammad Awais, Sara Atito, Xingshen Zhang, Lin Wang, Xiang Yang

One major challenge of disentanglement learning with variational autoencoders is the trade-off between disentanglement and reconstruction fidelity.

Disentanglement
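
The trade-off mentioned in the abstract is commonly controlled through a weighted KL (information bottleneck) term. Below is a generic beta-VAE style loss with a weight that is annealed downwards over training, meant only to illustrate the idea of a decremental bottleneck; the schedule and weighting are assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F


def vae_loss(x, x_recon, mu, logvar, step, total_steps,
             beta_start=8.0, beta_end=1.0):
    # Reconstruction fidelity term.
    recon = F.mse_loss(x_recon, x, reduction="mean")
    # KL divergence between q(z|x) = N(mu, sigma^2) and the prior N(0, I).
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Linearly decrease the bottleneck weight as training progresses, trading
    # disentanglement pressure for reconstruction quality.
    beta = beta_start + (beta_end - beta_start) * min(step / total_steps, 1.0)
    return recon + beta * kl
```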

ASiT: Local-Global Audio Spectrogram vIsion Transformer for Event Classification

1 code implementation • 23 Nov 2022 • Sara Atito, Muhammad Awais, Wenwu Wang, Mark D Plumbley, Josef Kittler

Transformers, which were originally developed for natural language processing, have recently generated significant interest in the computer vision and audio communities due to their flexibility in learning long-range relationships.

Keyword Spotting, Self-Supervised Learning, +1

SPCXR: Self-supervised Pretraining using Chest X-rays Towards a Domain Specific Foundation Model

no code implementations • 23 Nov 2022 • Syed Muhammad Anwar, Abhijeet Parida, Sara Atito, Muhammad Awais, Gustavo Nino, Josef Kittler, Marius George Linguraru

However, traditional diagnostic tool design methods based on supervised learning are burdened by the need for annotated training data, which must be of high quality to achieve better clinical outcomes.

COVID-19 Diagnosis, Image Segmentation, +3

SB-SSL: Slice-Based Self-Supervised Transformers for Knee Abnormality Classification from MRI

no code implementations • 29 Aug 2022 • Sara Atito, Syed Muhammad Anwar, Muhammad Awais, Josef Kittler

The availability of large-scale data with high-quality ground truth labels is a challenge when developing supervised machine learning solutions for the healthcare domain.

Self-Supervised Learning

GMML is All you Need

1 code implementation • 30 May 2022 • Sara Atito, Muhammad Awais, Josef Kittler

This has motivated research into self-supervised transformer pretraining, which does not need to decode the semantic information conveyed by labels and link it to image properties, but instead focuses directly on extracting a concise representation of the image data that reflects the notion of similarity and is invariant to nuisance factors.

Data Augmentation, Self-Learning, +1
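
GMML (Group Masked Model Learning) pretraining is, at its core, a masked-reconstruction pretext task. The sketch below corrupts patch-aligned groups of pixels and penalises reconstruction error on the corrupted regions only; the patch size, mask ratio, and loss are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn.functional as F


def group_mask(images, patch=16, mask_ratio=0.5):
    # Zero out randomly chosen patch-aligned blocks of the input image.
    b, c, h, w = images.shape
    mask = (torch.rand(b, 1, h // patch, w // patch, device=images.device)
            < mask_ratio).float()
    mask = mask.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return images * (1.0 - mask), mask


def reconstruction_loss(model, images):
    corrupted, mask = group_mask(images)
    recon = model(corrupted)  # assumed to output the same shape as the input
    # Only the masked regions contribute to the loss.
    err = F.l1_loss(recon, images, reduction="none") * mask
    return err.sum() / (mask.sum() * images.size(1))
```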

MC-SSL0.0: Towards Multi-Concept Self-Supervised Learning

no code implementations • 30 Nov 2021 • Sara Atito, Muhammad Awais, Ammarah Farooq, ZhenHua Feng, Josef Kittler

In this respect, the proposed SSL framework MC-SSL0.0 is a step towards Multi-Concept Self-Supervised Learning (MC-SSL), which goes beyond modelling a single dominant label in an image to effectively utilise the information from all the concepts present in it.

Image Classification, Self-Supervised Learning, +1

SiT: Self-supervised vIsion Transformer

2 code implementations • 8 Apr 2021 • Sara Atito, Muhammad Awais, Josef Kittler

We also observed that SiT performs well in few-shot learning, and showed that it learns useful representations by simply training a linear classifier on top of the features learned by SiT.

Few-Shot Learning, Self-Supervised Learning
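
A minimal sketch of the linear-probe evaluation mentioned in the abstract above: a single linear classifier trained on frozen features from a pretrained encoder. The `encoder`, feature dimension, and optimiser settings are placeholders, not SiT's released evaluation code.

```python
import torch
import torch.nn as nn


def linear_probe(encoder, loader, feat_dim=768, num_classes=1000, epochs=10):
    encoder.eval()                          # the pretrained backbone stays frozen
    head = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.SGD(head.parameters(), lr=0.01, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            with torch.no_grad():
                feats = encoder(images)     # frozen representations
            loss = loss_fn(head(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```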
