Search Results for author: Samir Khaki

Found 14 papers, 12 with code

Dynamic Vision Mamba

1 code implementation • 7 Apr 2025 • Mengxuan Wu, Zekai Li, Zhiyuan Liang, Moyang Li, Xuanlei Zhao, Samir Khaki, Zheng Zhu, Xiaojiang Peng, Konstantinos N. Plataniotis, Kai Wang, Wangbo Zhao, Yang You

For block redundancy, we allow each image to select SSM blocks dynamically based on an empirical observation that the inference speed of Mamba-based vision models is largely affected by the number of SSM blocks.
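The per-image block selection can be pictured as a lightweight gate placed in front of each SSM block. The sketch below is a minimal illustration using an assumed pooled-feature linear gate with a fixed 0.5 threshold; it is not the paper's actual routing mechanism.

```python
import torch
import torch.nn as nn

class GatedSSMBlock(nn.Module):
    """Wrap an SSM block with a per-image gate that decides whether to run or skip it."""
    def __init__(self, ssm_block: nn.Module, dim: int):
        super().__init__()
        self.ssm_block = ssm_block
        self.gate = nn.Linear(dim, 1)  # scores pooled token features

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        score = torch.sigmoid(self.gate(x.mean(dim=1)))   # (batch, 1)
        keep = (score > 0.5).float().unsqueeze(-1)        # (batch, 1, 1) hard keep/skip decision at inference
        # For clarity the skip is written as a pass-through; a real implementation would
        # bypass the block entirely for skipped images, which is where the speedup comes from.
        return keep * self.ssm_block(x) + (1.0 - keep) * x
```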

Mamba

Data-to-Model Distillation: Data-Efficient Learning Framework

1 code implementation • 19 Nov 2024 • Ahmad Sajedi, Samir Khaki, Lucy Z. Liu, Ehsan Amjadian, Yuri A. Lawryshyn, Konstantinos N. Plataniotis

In this paper, we propose a novel framework called Data-to-Model Distillation (D2M) to distill the real dataset's knowledge into the learnable parameters of a pre-trained generative model by aligning rich representations extracted from real and generated images.
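The alignment objective can be sketched as matching features of real and generated batches while updating only the generator's parameters. The frozen feature extractor, mean pooling, and MSE loss below are simplifying assumptions rather than D2M's exact formulation.

```python
import torch
import torch.nn.functional as F

def representation_alignment_loss(feature_extractor, generator, latents, real_images):
    """Match mean feature representations of generated and real batches.
    feature_extractor is assumed frozen (eval mode, parameters excluded from the optimizer),
    so gradients flow back only into the generator."""
    fake_images = generator(latents)
    with torch.no_grad():                                  # the real branch needs no gradients
        real_feats = feature_extractor(real_images).mean(dim=0)
    fake_feats = feature_extractor(fake_images).mean(dim=0)
    return F.mse_loss(fake_feats, real_feats)
```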

Computational Efficiency • Dataset Distillation +2

Prioritize Alignment in Dataset Distillation

1 code implementation • 6 Aug 2024 • Zekai Li, Ziyao Guo, Wangbo Zhao, Tianle Zhang, Zhi-Qi Cheng, Samir Khaki, Kaipeng Zhang, Ahmad Sajedi, Konstantinos N. Plataniotis, Kai Wang, Yang You

To achieve this, existing methods use an agent model to extract information from the target dataset and embed it into the distilled dataset.

Dataset Distillation

ATOM: Attention Mixer for Efficient Dataset Distillation

1 code implementation • 2 May 2024 • Samir Khaki, Ahmad Sajedi, Kai Wang, Lucy Z. Liu, Yuri A. Lawryshyn, Konstantinos N. Plataniotis

To address these challenges in dataset distillation, we propose the ATtentiOn Mixer (ATOM) module to efficiently distill large datasets using a mixture of channel and spatial-wise attention in the feature matching process.
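A rough sketch of combining channel-wise and spatial-wise attention from a feature map and matching it between real and synthetic batches is given below; the squared-activation pooling and simple concatenation are assumptions, not ATOM's exact mixing rule.

```python
import torch
import torch.nn.functional as F

def mixed_attention(feats: torch.Tensor) -> torch.Tensor:
    """feats: (batch, channels, H, W) activations from one network layer."""
    spatial = feats.pow(2).mean(dim=1).flatten(1)    # (batch, H*W) spatial attention
    channel = feats.pow(2).mean(dim=(2, 3))          # (batch, channels) channel attention
    return torch.cat([F.normalize(spatial, dim=1), F.normalize(channel, dim=1)], dim=1)

def attention_matching_loss(real_feats: torch.Tensor, syn_feats: torch.Tensor) -> torch.Tensor:
    """Match the mean mixed-attention signatures of real and synthetic batches."""
    return F.mse_loss(mixed_attention(syn_feats).mean(dim=0),
                      mixed_attention(real_feats).mean(dim=0))
```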

Dataset Distillation • Neural Architecture Search

The Need for Speed: Pruning Transformers with One Recipe

1 code implementation • 26 Mar 2024 • Samir Khaki, Konstantinos N. Plataniotis

We introduce the One-shot Pruning Technique for Interchangeable Networks (OPTIN) framework as a tool to increase the efficiency of pre-trained transformer architectures without requiring re-training.

Image Classification +2

ProbMCL: Simple Probabilistic Contrastive Learning for Multi-label Visual Classification

1 code implementation • 2 Jan 2024 • Ahmad Sajedi, Samir Khaki, Yuri A. Lawryshyn, Konstantinos N. Plataniotis

We validate the effectiveness of our framework through experimentation with datasets from the computer vision and medical imaging domains.

Contrastive Learning • Image Classification +2

DataDAM: Efficient Dataset Distillation with Attention Matching

2 code implementations • ICCV 2023 • Ahmad Sajedi, Samir Khaki, Ehsan Amjadian, Lucy Z. Liu, Yuri A. Lawryshyn, Konstantinos N. Plataniotis

Emerging research on dataset distillation aims to reduce training costs by creating a small synthetic set that contains the information of a larger real dataset and ultimately achieves test accuracy equivalent to that of a model trained on the whole dataset.
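The outer loop of this kind of dataset distillation treats the small synthetic set itself as learnable pixels. The sketch below is a generic version in which `matching_loss` stands in for DataDAM's attention-matching objective; the optimizer and hyperparameters are illustrative assumptions.

```python
import torch

def distill_synthetic_set(real_loader, feature_net, matching_loss,
                          num_syn=100, image_shape=(3, 32, 32), steps=1000, lr=0.1):
    """Optimize a small synthetic image set so its features match those of real data."""
    syn = torch.randn(num_syn, *image_shape, requires_grad=True)  # synthetic images as parameters
    opt = torch.optim.SGD([syn], lr=lr)
    real_iter = iter(real_loader)
    for _ in range(steps):
        try:
            real_images, _ = next(real_iter)
        except StopIteration:                 # restart the loader when exhausted
            real_iter = iter(real_loader)
            real_images, _ = next(real_iter)
        loss = matching_loss(feature_net(real_images), feature_net(syn))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return syn.detach()
```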

Continual Learning • Dataset Distillation +1

CFDP: Common Frequency Domain Pruning

1 code implementation • 7 Jun 2023 • Samir Khaki, Weihan Luo

In this paper, we introduce a novel end-to-end pipeline for model pruning via the frequency domain.
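One way to picture pruning via the frequency domain is to score channels by the frequency content of their activations and keep only the highest-scoring ones; the 2-D FFT magnitude criterion below is an illustrative assumption, not the actual CFDP criterion or pipeline.

```python
import torch

def frequency_channel_scores(feats: torch.Tensor) -> torch.Tensor:
    """feats: (batch, channels, H, W). Returns one importance score per channel,
    computed from the magnitude of each channel's 2-D frequency spectrum."""
    spectrum = torch.fft.fft2(feats).abs()        # per-channel frequency content
    return spectrum.mean(dim=(0, 2, 3))           # average over batch and frequency bins

def channel_keep_mask(scores: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """Boolean mask that keeps the highest-scoring channels."""
    k = max(1, int(keep_ratio * scores.numel()))
    mask = torch.zeros_like(scores, dtype=torch.bool)
    mask[scores.topk(k).indices] = True
    return mask
```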

CONetV2: Efficient Auto-Channel Size Optimization for CNNs

1 code implementation • 13 Oct 2021 • Yi Ru Wang, Samir Khaki, Weihang Zheng, Mahdi S. Hosseini, Konstantinos N. Plataniotis

Neural Architecture Search (NAS) has been pivotal in finding optimal network configurations for Convolutional Neural Networks (CNNs).

Knowledge Distillation • Neural Architecture Search
