Search Results for author: Anas Mahmoud

Found 5 papers, 1 paper with code

Decoding Data Quality via Synthetic Corruptions: Embedding-guided Pruning of Code Data

no code implementations • 5 Dec 2023 • Yu Yang, Aaditya K. Singh, Mostafa Elhoushi, Anas Mahmoud, Kushal Tirumala, Fabian Gloeckle, Baptiste Rozière, Carole-Jean Wu, Ari S. Morcos, Newsha Ardalani

Armed with this knowledge, we devise novel pruning metrics that operate in embedding space to identify and remove low-quality entries in the Stack dataset.
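The excerpt above describes pruning metrics that operate in embedding space. As a minimal illustrative sketch (the paper's actual metrics are not reproduced here; the centroid-distance scoring and the `keep_fraction` parameter are assumptions), one simple embedding-space pruning rule scores each entry by its distance to the dataset centroid and drops the farthest entries:

```python
import numpy as np

def prune_by_centroid_distance(embeddings, keep_fraction=0.8):
    """Keep the entries whose embeddings lie closest to the centroid.

    Hypothetical sketch of embedding-space pruning: distance to the
    centroid serves as a stand-in quality score. The paper's actual
    pruning metrics are more involved and are not reproduced here.
    """
    centroid = embeddings.mean(axis=0)
    dists = np.linalg.norm(embeddings - centroid, axis=1)
    n_keep = int(len(embeddings) * keep_fraction)
    # argsort ascending: smallest distances (assumed higher quality) first
    return np.argsort(dists)[:n_keep]

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16))  # stand-in for code-snippet embeddings
kept = prune_by_centroid_distance(emb, keep_fraction=0.8)
print(len(kept))  # 80
```

In practice the embeddings would come from a pretrained code encoder rather than random vectors, and the score could use per-cluster centroids instead of a single global one.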

Code Generation

Sieve: Multimodal Dataset Pruning Using Image Captioning Models

1 code implementation • 3 Oct 2023 • Anas Mahmoud, Mostafa Elhoushi, Amro Abbas, Yu Yang, Newsha Ardalani, Hugh Leather, Ari Morcos

We propose a pruning signal, Sieve, that employs synthetic captions generated by image-captioning models pretrained on small, diverse, and well-aligned image-text pairs to evaluate the alignment of noisy image-text pairs.
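The core idea above is to score each noisy image-text pair by how well its alt-text agrees with a caption generated for the same image by a well-trained captioning model. A toy sketch of that alignment signal (the bag-of-words embedding and `sieve_score` helper below are stand-ins for illustration; Sieve itself uses learned sentence embeddings):

```python
import math
from collections import Counter

def bow_embed(text):
    # Toy bag-of-words "embedding"; a real system would use a
    # sentence encoder rather than raw word counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def sieve_score(alt_text, synthetic_caption):
    """Alignment signal: similarity between the noisy web alt-text and
    a synthetic caption from a pretrained captioning model (sketch)."""
    return cosine(bow_embed(alt_text), bow_embed(synthetic_caption))

# A well-aligned pair scores higher than an off-topic one.
good = sieve_score("a dog running on the beach", "a dog runs along the beach")
bad = sieve_score("SALE 50% off shoes", "a dog runs along the beach")
print(good > bad)  # True
```

Pairs whose alt-text scores low against the synthetic caption would be pruned from the training set.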

Image Captioning • Language Modelling +1

Self-Supervised Image-to-Point Distillation via Semantically Tolerant Contrastive Loss

no code implementations • CVPR 2023 • Anas Mahmoud, Jordan S. K. Hu, Tianshu Kuai, Ali Harakeh, Liam Paull, Steven L. Waslander

However, image-to-point representation learning for autonomous driving datasets faces two main challenges: 1) the abundance of self-similarity, which results in the contrastive losses pushing away semantically similar point and image regions and thus disturbing the local semantic structure of the learned representations, and 2) severe class imbalance, as pretraining gets dominated by over-represented classes.
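The first challenge above motivates a contrastive loss that is tolerant of semantically similar negatives. A minimal NumPy sketch of one way to realize this (the `1 - sem_sim` weighting and all names below are assumptions for illustration; the paper's exact loss and its class-balancing terms are not reproduced):

```python
import numpy as np

def semantically_tolerant_nce(q, k_pos, k_neg, sem_sim, tau=0.1):
    """InfoNCE-style loss where each negative is down-weighted by its
    semantic similarity to the anchor, so self-similar regions are not
    pushed apart as hard. Hypothetical sketch, not the paper's loss.
    """
    def norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    q, k_pos, k_neg = norm(q), norm(k_pos), norm(k_neg)
    pos = np.exp(q @ k_pos / tau)            # positive logit
    neg_logits = np.exp(q @ k_neg.T / tau)   # one logit per negative
    weights = 1.0 - sem_sim                  # tolerant: similar negatives count less
    neg = (weights * neg_logits).sum()
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(1)
q = rng.normal(size=8)                       # anchor point feature
k_pos = q + 0.1 * rng.normal(size=8)         # matched image feature
k_neg = rng.normal(size=(5, 8))              # negative image features
sem_sim = np.array([0.9, 0.1, 0.1, 0.1, 0.1])  # first negative is self-similar

loss_tolerant = semantically_tolerant_nce(q, k_pos, k_neg, sem_sim)
loss_uniform = semantically_tolerant_nce(q, k_pos, k_neg, np.zeros(5))
print(loss_tolerant < loss_uniform)  # True — similar negatives penalized less
```

Down-weighting self-similar negatives reduces the gradient that would otherwise push semantically matching point and image regions apart.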

3D Semantic Segmentation • Autonomous Driving +4

Dense Voxel Fusion for 3D Object Detection

no code implementations • 2 Mar 2022 • Anas Mahmoud, Jordan S. K. Hu, Steven L. Waslander

Sequential fusion methods either suffer from a limited number of pixel-point correspondences due to point cloud sparsity, or have their performance strictly capped by the detections of one of the modalities.

3D Object Detection • Object +1
