Search Results for author: Mostafa El-Khamy

Found 33 papers, 4 papers with code

SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models

no code implementations · 12 Aug 2023 · Sara Babakniya, Ahmed Roushdy Elkordy, Yahya H. Ezzeldin, Qingfeng Liu, Kee-Bong Song, Mostafa El-Khamy, Salman Avestimehr

In the absence of centralized data, Federated Learning (FL) can benefit from distributed and private data of the FL edge clients for fine-tuning.

Federated Learning, Transfer Learning

Zero-Shot Learning of a Conditional Generative Adversarial Network for Data-Free Network Quantization

no code implementations · 26 Oct 2022 · Yoojin Choi, Mostafa El-Khamy, Jungwon Lee

We propose a novel method for training a conditional generative adversarial network (CGAN) without the use of training data, called zero-shot learning of a CGAN (ZS-CGAN).

Data Free Quantization, Generative Adversarial Network, +1

Toward Sustainable Continual Learning: Detection and Knowledge Repurposing of Similar Tasks

no code implementations · 11 Oct 2022 · Sijia Wang, Yoojin Choi, Junya Chen, Mostafa El-Khamy, Ricardo Henao

This results in the eventual prohibitive expansion of the knowledge repository if we consider learning from a long sequence of tasks.

Continual Learning

Latent Feature Disentanglement For Visual Domain Generalization

no code implementations · 29 Sep 2021 · Behnam Gholami, Mostafa El-Khamy, Kee-Bong Song

We demonstrate the effectiveness of our approach on several widely used datasets for the domain generalization problem, on all of which we achieve competitive results with state-of-the-art models.

Data Augmentation, Disentanglement, +2

Dual-Teacher Class-Incremental Learning With Data-Free Generative Replay

no code implementations · 17 Jun 2021 · Yoojin Choi, Mostafa El-Khamy, Jungwon Lee

In the conventional generative replay, the generative model is pre-trained for old data and shared in extra memory for later incremental learning.

Class Incremental Learning, Incremental Learning, +2

Towards Fair Federated Learning with Zero-Shot Data Augmentation

no code implementations · 27 Apr 2021 · Weituo Hao, Mostafa El-Khamy, Jungwon Lee, Jianyi Zhang, Kevin J Liang, Changyou Chen, Lawrence Carin

Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.

Data Augmentation, Fairness, +1

WAFFLe: Weight Anonymized Factorization for Federated Learning

no code implementations · 13 Aug 2020 · Weituo Hao, Nikhil Mehta, Kevin J Liang, Pengyu Cheng, Mostafa El-Khamy, Lawrence Carin

Experiments on MNIST, FashionMNIST, and CIFAR-10 demonstrate WAFFLe's significant improvement to local test performance and fairness while simultaneously providing an extra layer of security.

Fairness, Federated Learning

Data-Free Network Quantization With Adversarial Knowledge Distillation

1 code implementation · 8 May 2020 · Yoojin Choi, Jihwan Choi, Mostafa El-Khamy, Jungwon Lee

The synthetic data are generated from a generator, while no data are used in training the generator and in quantization.

Knowledge Distillation, Model Compression, +1

GSANet: Semantic Segmentation with Global and Selective Attention

no code implementations · 14 Feb 2020 · Qing-Feng Liu, Mostafa El-Khamy, Dongwoon Bai, Jungwon Lee

The proposed Global and Selective Attention Network (GSANet) features Atrous Spatial Pyramid Pooling (ASPP) with a novel sparsemax global attention and a novel selective attention that deploys a condensation and diffusion mechanism to aggregate the multi-scale contextual information from the extracted deep features.

Segmentation, Semantic Segmentation

HyperCon: Image-To-Video Model Transfer for Video-To-Video Translation Tasks

no code implementations · 10 Dec 2019 · Ryan Szeto, Mostafa El-Khamy, Jungwon Lee, Jason J. Corso

To combine the benefits of image and video models, we propose an image-to-video model transfer method called Hyperconsistency (HyperCon) that transforms any well-trained image model into a temporally consistent video model without fine-tuning.

Image-to-Image Translation, Style Transfer, +4

End-to-End Multi-Task Denoising for the Joint Optimization of Perceptual Speech Metrics

no code implementations · Interspeech 2019 · Jaeyoung Kim, Mostafa El-Khamy, Jungwon Lee

Second, three loss functions based on SDR, PESQ and STOI are proposed to minimize the metric mismatch.

Sound, Audio and Speech Processing
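Of the three metrics the snippet mentions, SDR is the simplest to compute directly (PESQ and STOI are standardized perceptual metrics with far more involved computations). A minimal sketch of a negative-SDR training objective in plain Python — illustrative function names, not the paper's implementation:

```python
import math

def sdr_db(reference, estimate):
    """Signal-to-distortion ratio in dB between a clean reference
    signal and an enhanced estimate (higher is better)."""
    signal_power = sum(s * s for s in reference)
    noise_power = sum((s - e) ** 2 for s, e in zip(reference, estimate))
    return 10.0 * math.log10(signal_power / noise_power)

def sdr_loss(reference, estimate):
    """Negative SDR, so that minimizing the loss maximizes SDR."""
    return -sdr_db(reference, estimate)

clean = [0.1, 0.5, -0.3, 0.8]
enhanced = [0.12, 0.48, -0.33, 0.79]
print(sdr_loss(clean, enhanced))
```

Since the loss is computed on waveforms, training directly against it avoids optimizing a spectral proxy that may disagree with the evaluation metric.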

T-GSA: Transformer with Gaussian-weighted self-attention for speech enhancement

no code implementations · 13 Oct 2019 · Jaeyoung Kim, Mostafa El-Khamy, Jungwon Lee

Transformer neural networks (TNNs) have demonstrated state-of-the-art performance on many natural language processing (NLP) tasks, replacing recurrent neural networks (RNNs) such as LSTMs and GRUs.

Audio and Speech Processing, Sound

Wyner VAE: A Variational Autoencoder with Succinct Common Representation Learning

no code implementations · 25 Sep 2019 · J. Jon Ryu, Yoojin Choi, Young-Han Kim, Mostafa El-Khamy, Jungwon Lee

A new variational autoencoder (VAE) model is proposed that learns a succinct common representation of two correlated data variables for conditional and joint generation tasks.

Representation Learning

Variable Rate Deep Image Compression With a Conditional Autoencoder

no code implementations · ICCV 2019 · Yoojin Choi, Mostafa El-Khamy, Jungwon Lee

Our model also shows comparable and sometimes better performance than the state-of-the-art learned image compression models that deploy multiple networks trained for varying rates.

Image Compression, Quantization

TW-SMNet: Deep Multitask Learning of Tele-Wide Stereo Matching

no code implementations · 11 Jun 2019 · Mostafa El-Khamy, Haoyu Ren, Xianzhi Du, Jungwon Lee

In this paper, we introduce the problem of estimating the real-world depth of elements in a scene captured by two cameras with different fields of view, where the first field of view (FOV) is a wide FOV (WFOV) captured by a wide-angle lens, and the second FOV is contained in the first and is captured by a tele zoom lens.

Depth Estimation, Disparity Estimation, +2

Deep Robust Single Image Depth Estimation Neural Network Using Scene Understanding

no code implementations · 7 Jun 2019 · Haoyu Ren, Mostafa El-Khamy, Jungwon Lee

We introduce two different scene understanding modules based on scene classification and coarse depth estimation respectively.

Depth Estimation, General Classification, +2

Learning with Succinct Common Representation Based on Wyner's Common Information

no code implementations · 27 May 2019 · J. Jon Ryu, Yoojin Choi, Young-Han Kim, Mostafa El-Khamy, Jungwon Lee

A new bimodal generative model is proposed for generating conditional and joint samples, accompanied by a training method that learns a succinct bottleneck representation.

Density Ratio Estimation, Image Retrieval, +3

AMNet: Deep Atrous Multiscale Stereo Disparity Estimation Networks

no code implementations · 19 Apr 2019 · Xianzhi Du, Mostafa El-Khamy, Jungwon Lee

A stacked atrous multiscale network is proposed to aggregate rich multiscale contextual information from the cost volume which allows for estimating the disparity with high accuracy at multiple scales.

Disparity Estimation, Stereo Disparity Estimation, +2

Jointly Sparse Convolutional Neural Networks in Dual Spatial-Winograd Domains

no code implementations · 21 Feb 2019 · Yoojin Choi, Mostafa El-Khamy, Jungwon Lee

We consider the optimization of deep convolutional neural networks (CNNs) such that they provide good performance while having reduced complexity if deployed on either conventional systems with spatial-domain convolution or lower-complexity systems designed for Winograd convolution.

End-to-End Multi-Task Denoising for joint SDR and PESQ Optimization

no code implementations · 26 Jan 2019 · Jaeyoung Kim, Mostafa El-Khamy, Jungwon Lee

First, the network optimization is performed on the time-domain signals after ISTFT to avoid spectrum mismatch.

Denoising, Speech Enhancement

DN-ResNet: Efficient Deep Residual Network for Image Denoising

no code implementations · 16 Oct 2018 · Haoyu Ren, Mostafa El-Khamy, Jungwon Lee

The results show that DN-ResNets are more efficient and robust, and denoise better than current state-of-the-art deep learning methods, as well as the popular variants of the BM3D algorithm, in cases of blind and non-blind denoising of images corrupted with Poisson, Gaussian, or Poisson-Gaussian noise.

Computational Efficiency, Image Denoising, +2

Learning Sparse Low-Precision Neural Networks With Learnable Regularization

no code implementations · 1 Sep 2018 · Yoojin Choi, Mostafa El-Khamy, Jungwon Lee

In training low-precision networks, gradient descent in the backward pass is performed with high-precision weights while quantized low-precision weights and activations are used in the forward pass to calculate the loss function for training.

Image Super-Resolution, L2 Regularization, +1
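The forward/backward split the snippet describes can be sketched with a toy one-parameter model: the forward pass sees the quantized weight, while the gradient update is applied to the full-precision copy (a straight-through-style approximation). The uniform quantizer and step size below are illustrative assumptions, not the paper's method, which additionally learns a regularization coefficient:

```python
def quantize(w, step=0.25):
    """Uniform quantizer: snap a weight to the nearest multiple of `step`."""
    return step * round(w / step)

def train_step(w_fp, x, target, lr=0.1):
    """One SGD step on the toy model y = w * x with squared error."""
    w_q = quantize(w_fp)            # forward pass uses the quantized weight
    y = w_q * x
    grad = 2.0 * (y - target) * x   # straight-through: d(quantize)/dw treated as 1
    return w_fp - lr * grad         # update is applied to the full-precision weight

w = 0.9
for _ in range(50):
    w = train_step(w, x=1.0, target=0.5)
print(quantize(w))
```

Keeping the high-precision shadow weight is what lets small gradients accumulate across steps even when each one is too small to flip the quantized value.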

Compression of Deep Convolutional Neural Networks under Joint Sparsity Constraints

no code implementations · 21 May 2018 · Yoojin Choi, Mostafa El-Khamy, Jungwon Lee

In particular, the proposed framework produces one compressed model whose convolutional filters can be made sparse either in the spatial domain or in the Winograd domain.


Fused Deep Neural Networks for Efficient Pedestrian Detection

no code implementations · 2 May 2018 · Xianzhi Du, Mostafa El-Khamy, Vlad I. Morariu, Jungwon Lee, Larry Davis

The classification system further classifies the generated candidates based on opinions of multiple deep verification networks and a fusion network which utilizes a novel soft-rejection fusion method to adjust the confidence in the detection results.

Ensemble Learning, General Classification, +2

Universal Deep Neural Network Compression

no code implementations · NIPS Workshop CDNNRIA 2018 · Yoojin Choi, Mostafa El-Khamy, Jungwon Lee

In this paper, we investigate lossy compression of deep neural networks (DNNs) by weight quantization and lossless source coding for memory-efficient deployment.

Neural Network Compression, Quantization
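The two-stage pipeline the snippet describes (lossy weight quantization followed by lossless source coding) can be sketched end to end in a few lines. This is a toy illustration only: the step size and weight values are made up, and zlib stands in for whatever entropy coder the paper actually uses:

```python
import zlib

def quantize_weights(weights, step=0.1):
    """Lossy stage: map each float weight to a small integer codebook index."""
    return [round(w / step) for w in weights]

def compress_indices(indices):
    """Lossless stage: source-code the index stream (zlib as a stand-in
    for a dedicated entropy coder)."""
    raw = bytes(i % 256 for i in indices)
    return zlib.compress(raw, level=9)

# A repetitive toy weight tensor: quantization makes the redundancy explicit,
# which the lossless coder then exploits.
weights = [0.101, 0.099, 0.102, -0.198, 0.1, 0.098, -0.2, 0.103] * 64
packed = compress_indices(quantize_weights(weights))
print(len(weights) * 4, "->", len(packed), "bytes")  # 4 bytes per float32 weight
```

Only the quantization step loses information; the coded index stream decompresses back to exactly the indices that were stored.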

CT-SRCNN: Cascade Trained and Trimmed Deep Convolutional Neural Networks for Image Super Resolution

no code implementations · 11 Nov 2017 · Haoyu Ren, Mostafa El-Khamy, Jungwon Lee

We propose methodologies to train highly accurate and efficient deep convolutional neural networks (CNNs) for image super resolution (SR).

Image Super-Resolution

BridgeNets: Student-Teacher Transfer Learning Based on Recursive Neural Networks and its Application to Distant Speech Recognition

no code implementations · 27 Oct 2017 · Jaeyoung Kim, Mostafa El-Khamy, Jungwon Lee

Despite the remarkable progress achieved in automatic speech recognition, recognizing far-field speech mixed with various noise sources remains a challenging task.

Automatic Speech Recognition, Automatic Speech Recognition (ASR), +4

Residual LSTM: Design of a Deep Recurrent Architecture for Distant Speech Recognition

3 code implementations · 10 Jan 2017 · Jaeyoung Kim, Mostafa El-Khamy, Jungwon Lee

The residual LSTM provides an additional spatial shortcut path from lower layers for efficient training of deep networks with multiple LSTM layers.

Distant Speech Recognition, speech-recognition

Towards the Limit of Network Quantization

no code implementations · 5 Dec 2016 · Yoojin Choi, Mostafa El-Khamy, Jungwon Lee

Network quantization is one of the network compression techniques used to reduce the redundancy of deep neural networks.

Clustering, Quantization
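Given the tags above, the quantization here is clustering-based: weights are grouped into a small number of clusters and each weight is replaced by its cluster center. A minimal sketch with plain 1-D k-means in pure Python — an illustration only, with made-up weights and initial centers; the paper's analysis leads to a weighted variant of the clustering rather than this vanilla form:

```python
def kmeans_1d(values, centers, iters=20):
    """Plain 1-D k-means: assign each weight to its nearest center,
    then move each center to the mean of its assigned weights."""
    for _ in range(iters):
        clusters = {c: [] for c in centers}
        for v in values:
            nearest = min(centers, key=lambda c: abs(v - c))
            clusters[nearest].append(v)
        centers = [sum(vs) / len(vs) if vs else c for c, vs in clusters.items()]
    return sorted(centers)

# Toy weight values loosely grouped around three levels.
weights = [-0.52, -0.48, -0.5, 0.01, -0.02, 0.0, 0.49, 0.51, 0.53]
print(kmeans_1d(weights, centers=[-1.0, 0.0, 1.0]))
```

After clustering, each weight only needs an index into the (tiny) list of centers, which is where the compression comes from.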

Fused DNN: A deep neural network fusion approach to fast and robust pedestrian detection

no code implementations · 11 Oct 2016 · Xianzhi Du, Mostafa El-Khamy, Jungwon Lee, Larry S. Davis

A single-shot deep convolutional network is trained as an object detector to generate all possible pedestrian candidates of different sizes and occlusions.

Pedestrian Detection, Semantic Segmentation
