Search Results for author: Upamanyu Madhow

Found 19 papers, 10 papers with code

Improving Robustness via Tilted Exponential Layer: A Communication-Theoretic Perspective

1 code implementation • 2 Nov 2023 • Bhagyashree Puranik, Ahmad Beirami, Yao Qin, Upamanyu Madhow

State-of-the-art techniques for enhancing robustness of deep networks mostly rely on empirical risk minimization with suitable data augmentation.

Data Augmentation

Neuro-Inspired Deep Neural Networks with Sparse, Strong Activations

1 code implementation • 26 Feb 2022 • Metehan Cekic, Can Bakiskan, Upamanyu Madhow

While end-to-end training of Deep Neural Networks (DNNs) yields state of the art performance in an increasing array of applications, it does not provide insight into, or control over, the features being extracted.

Image Classification

Self-supervised Speaker Recognition Training Using Human-Machine Dialogues

no code implementations • 7 Feb 2022 • Metehan Cekic, Ruirui Li, Zeya Chen, Yuguang Yang, Andreas Stolcke, Upamanyu Madhow

Speaker recognition, recognizing speaker identities based on voice alone, enables important downstream applications, such as personalization and authentication.

Contrastive Learning • Speaker Recognition

Generalized Likelihood Ratio Test for Adversarially Robust Hypothesis Testing

no code implementations • 4 Dec 2021 • Bhagyashree Puranik, Upamanyu Madhow, Ramtin Pedarsani

We derive the worst-case attack for the GLRT defense, and show that its asymptotic performance (as the dimension of the data increases) approaches that of the minimax defense.

All-Digital LoS MIMO with Low-Precision Analog-to-Digital Conversion

no code implementations • 2 Aug 2021 • Ahmet Dundar Sezer, Upamanyu Madhow

Line-of-sight (LoS) multi-input multi-output (MIMO) systems exhibit attractive scaling properties with increase in carrier frequency: for a fixed form factor and range, the spatial degrees of freedom increase quadratically for 2D arrays, in addition to the typically linear increase in available bandwidth.

Quantization
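
A quick sanity check on the quadratic scaling claimed above, sketched with the standard paraxial estimate of LoS spatial degrees of freedom (the symbols $A_T$, $A_R$, $R$, $\lambda$ are generic, not the paper's notation): for transmit and receive apertures of areas $A_T$ and $A_R$ at range $R$ and wavelength $\lambda$, $\mathrm{DoF} \approx A_T A_R / (\lambda R)^2$. With form factor and range fixed, $\lambda = c/f_c$ gives $\mathrm{DoF} \propto f_c^2$, i.e., quadratic growth with carrier frequency for 2D arrays, on top of the typically linear growth in available bandwidth.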

Sparse Coding Frontend for Robust Neural Networks

1 code implementation • 12 Apr 2021 • Can Bakiskan, Metehan Cekic, Ahmet Dundar Sezer, Upamanyu Madhow

Deep Neural Networks are known to be vulnerable to small, adversarially crafted perturbations.

A Neuro-Inspired Autoencoding Defense Against Adversarial Perturbations

1 code implementation • 21 Nov 2020 • Can Bakiskan, Metehan Cekic, Ahmet Dundar Sezer, Upamanyu Madhow

Our nominal design is to train the decoder and classifier together in standard supervised fashion, but we also consider unsupervised decoder training based on a regression objective (as in a conventional autoencoder) with separate supervised training of the classifier.

Dictionary Learning
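
A minimal PyTorch-style sketch of the two training modes described in the snippet above; the module shapes, names, and optimizers are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

# Illustrative modules: a front-end encoder, a decoder that reconstructs the
# input, and a classifier that operates on the reconstruction.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
decoder = nn.Sequential(nn.Linear(128, 784))
classifier = nn.Sequential(nn.Linear(784, 10))

def joint_supervised_step(x, y, opt):
    """(a) Nominal design: decoder and classifier trained together on the
    classification loss (opt is assumed to hold their parameters)."""
    logits = classifier(decoder(encoder(x)))
    loss = nn.functional.cross_entropy(logits, y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss

def autoencoder_then_classifier_step(x, y, opt_dec, opt_cls):
    """(b) Alternative: decoder trained unsupervised on a regression (MSE)
    objective, classifier trained separately in supervised fashion."""
    x_flat = x.flatten(1)
    recon = decoder(encoder(x))
    rec_loss = nn.functional.mse_loss(recon, x_flat)
    opt_dec.zero_grad(); rec_loss.backward(); opt_dec.step()

    logits = classifier(recon.detach())  # classifier gradients do not reach the decoder
    cls_loss = nn.functional.cross_entropy(logits, y)
    opt_cls.zero_grad(); cls_loss.backward(); opt_cls.step()
    return rec_loss, cls_loss
```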

Adversarially Robust Classification based on GLRT

no code implementations • 16 Nov 2020 • Bhagyashree Puranik, Upamanyu Madhow, Ramtin Pedarsani

We evaluate the GLRT approach for the special case of binary hypothesis testing in white Gaussian noise under $\ell_{\infty}$ norm-bounded adversarial perturbations, a setting for which a minimax strategy optimizing for the worst-case attack is known.

Classification • General Classification • +2
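
For readers unfamiliar with the approach, a sketch of the generic GLRT rule in this setting (notation is illustrative, not the paper's): the adversarial perturbation is treated as a nuisance parameter and maximized out of the likelihood. For hypotheses $H_h: y = x_h + e + n$ with $\|e\|_{\infty} \le \epsilon$ and white Gaussian noise $n$, the detector is $\hat{h} = \arg\max_h \max_{\|e\|_{\infty} \le \epsilon} p(y \mid h, e) = \arg\min_h \min_{\|e\|_{\infty} \le \epsilon} \|y - x_h - e\|_2^2$.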

Multi-sensor Spatial Association using Joint Range-Doppler Features

no code implementations • 12 Jul 2020 • Anant Gupta, Ahmet Dundar Sezer, Upamanyu Madhow

We investigate the problem of localizing multiple targets using a single set of measurements from a network of radar sensors.

Wireless Fingerprinting via Deep Learning: The Impact of Confounding Factors

1 code implementation • 25 Feb 2020 • Metehan Cekic, Soorya Gopalakrishnan, Upamanyu Madhow

The opportunity for doing so arises due to subtle nonlinear variations across transmitters, even those made by the same manufacturer.

Polarizing Front Ends for Robust CNNs

1 code implementation • 22 Feb 2020 • Can Bakiskan, Soorya Gopalakrishnan, Metehan Cekic, Upamanyu Madhow, Ramtin Pedarsani

The vulnerability of deep neural networks to small, adversarially designed perturbations can be attributed to their "excessive linearity."

A design framework for all-digital mmWave massive MIMO with per-antenna nonlinearities

no code implementations • 25 Dec 2019 • Mohammed Abdelghany, Ali A. Farid, Upamanyu Madhow, Mark J. W. Rodwell

Millimeter wave MIMO combines the benefits of compact antenna arrays with a large number of elements and massive bandwidths, so that fully digital beamforming has the potential of supporting a large number of simultaneous users with per-user data rates of multiple gigabits/sec (Gbps).

Robust Wireless Fingerprinting via Complex-Valued Neural Networks

no code implementations • 19 May 2019 • Soorya Gopalakrishnan, Metehan Cekic, Upamanyu Madhow

A "wireless fingerprint" which exploits hardware imperfections unique to each device is a potentially powerful tool for wireless security.

Robust Adversarial Learning via Sparsifying Front Ends

1 code implementation • 24 Oct 2018 • Soorya Gopalakrishnan, Zhinus Marzi, Metehan Cekic, Upamanyu Madhow, Ramtin Pedarsani

We also devise attacks based on the locally linear model that outperform the well-known FGSM attack.
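
A minimal sketch of the standard FGSM baseline referenced above (the paper's stronger attacks based on a locally linear model are not reproduced here); `model` and `loss_fn` are placeholders, PyTorch assumed.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, eps):
    """One-step l_inf attack: x_adv = x + eps * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep the perturbed input in the valid image range
    return x_adv.detach()
```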

Combating Adversarial Attacks Using Sparse Representations

3 code implementations • 11 Mar 2018 • Soorya Gopalakrishnan, Zhinus Marzi, Upamanyu Madhow, Ramtin Pedarsani

It is by now well-known that small adversarial perturbations can induce classification errors in deep neural networks (DNNs).

General Classification

On the information in spike timing: neural codes derived from polychronous groups

no code implementations • 9 Mar 2018 • Zhinus Marzi, Joao Hespanha, Upamanyu Madhow

There is growing evidence regarding the importance of spike timing in neural information processing, with even a small number of spikes carrying information, but computational models lag significantly behind those for rate coding.

Sparsity-based Defense against Adversarial Attacks on Linear Classifiers

3 code implementations • 15 Jan 2018 • Zhinus Marzi, Soorya Gopalakrishnan, Upamanyu Madhow, Ramtin Pedarsani

In this paper, we study this phenomenon in the setting of a linear classifier, and show that it is possible to exploit sparsity in natural data to combat $\ell_{\infty}$-bounded adversarial perturbations.
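
A minimal NumPy sketch of a sparsifying front end for a linear classifier, in the spirit of the defense described above: project the input onto an orthonormal basis, keep only the $K$ largest-magnitude coefficients, and classify the reconstruction. The basis choice (here the DCT) and $K$ are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.fft import dct, idct

def sparsify(x, K):
    """Keep the K largest-magnitude DCT coefficients of x, zero out the rest."""
    c = dct(x, norm="ortho")
    keep = np.argsort(np.abs(c))[-K:]
    c_sparse = np.zeros_like(c)
    c_sparse[keep] = c[keep]
    return idct(c_sparse, norm="ortho")

def robust_linear_decision(w, x, K):
    """Linear classifier sign(w^T x_hat) applied after the sparsifying front end."""
    return np.sign(w @ sparsify(x, K))
```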

Learning Sparse, Distributed Representations using the Hebbian Principle

no code implementations • 14 Nov 2016 • Aseem Wadhwa, Upamanyu Madhow

The "fire together, wire together" Hebbian model is a central principle for learning in neuroscience, but surprisingly, it has found limited applicability in modern machine learning.

Compressive spectral embedding: sidestepping the SVD

1 code implementation • NeurIPS 2015 • Dinesh Ramasamy, Upamanyu Madhow

Spectral embedding based on the Singular Value Decomposition (SVD) is a widely used "preprocessing" step in many learning tasks, typically leading to dimensionality reduction by projecting onto a number of dominant singular vectors and rescaling the coordinate axes (by a predefined function of the singular value).

Clustering • Dimensionality Reduction
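
A minimal sketch of the conventional SVD-based spectral embedding that the paper above aims to sidestep: project onto the $k$ dominant singular vectors and rescale each coordinate by a function of the corresponding singular value (here $f(\sigma)=\sigma$, an illustrative choice).

```python
import numpy as np
from scipy.sparse.linalg import svds

def svd_spectral_embedding(A, k, f=lambda s: s):
    """Rows of the output are k-dimensional embeddings of the rows of A."""
    U, S, _ = svds(A, k=k)              # top-k singular triplets (not guaranteed sorted)
    order = np.argsort(S)[::-1]
    return U[:, order] * f(S[order])    # scale coordinate j by f(sigma_j)

# Usage:
A = np.random.default_rng(0).standard_normal((100, 50))
Z = svd_spectral_embedding(A, k=5)      # Z has shape (100, 5)
```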
