Search Results for author: Vahid Tarokh

Found 48 papers, 11 papers with code

Semi-Empirical Objective Functions for MCMC Proposal Optimization

no code implementations • 3 Jun 2021 • Chris Cannella, Vahid Tarokh

We introduce and demonstrate a semi-empirical procedure for determining approximate objective functions suitable for optimizing arbitrarily parameterized proposal distributions in MCMC methods.

Gradient Assisted Learning

no code implementations • 2 Jun 2021 • Enmao Diao, Jie Ding, Vahid Tarokh

In distributed settings, collaborations between different entities, such as financial institutions, medical centers, and retail markets, are crucial to providing improved service and performance.

SemiFL: Communication Efficient Semi-Supervised Federated Learning with Unlabeled Clients

no code implementations • 2 Jun 2021 • Enmao Diao, Jie Ding, Vahid Tarokh

Moreover, we demonstrate that SemiFL can outperform many existing FL results obtained with fully supervised training, and performs competitively with state-of-the-art centralized Semi-Supervised Learning (SSL) methods.

Federated Learning

Towards Explainable Convolutional Features for Music Audio Modeling

1 code implementation • 31 May 2021 • Anna K. Yanchenko, Mohammadreza Soltani, Robert J. Ravier, Sayan Mukherjee, Vahid Tarokh

In this work, we attempt to "open the black-box" on deep convolutional models to inform future architectures for music audio tasks, and explain the excellent performance of deep convolutions that model spectrograms as 2D images.

Neural Architecture Search From Task Similarity Measure

no code implementations • 27 Feb 2021 • Cat P. Le, Mohammadreza Soltani, Robert Ravier, Vahid Tarokh

In this paper, we propose a neural architecture search framework based on a similarity measure between the baseline tasks and the incoming target task.

Neural Architecture Search

Generative Archimedean Copulas

1 code implementation • 22 Feb 2021 • Yuting Ng, Ali Hasan, Khalil Elkhalil, Vahid Tarokh

We propose a new generative modeling technique for learning multidimensional cumulative distribution functions (CDFs) in the form of copulas.
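For context, an Archimedean copula couples marginals through a single generator psi, C(u1, ..., ud) = psi(psi^{-1}(u1) + ... + psi^{-1}(ud)); the paper learns such copulas generatively. A minimal NumPy sketch of the classical closed-form Clayton copula, for reference only, not the paper's learned model:

```python
import numpy as np

def clayton_cdf(u1, u2, theta=2.0):
    """Bivariate Clayton copula, a classical Archimedean copula with
    generator psi(t) = (1 + t)^(-1/theta), theta > 0:
        C(u1, u2) = (u1^(-theta) + u2^(-theta) - 1)^(-1/theta)."""
    return (u1 ** -theta + u2 ** -theta - 1.0) ** (-1.0 / theta)

# Joint CDF of two uniform marginals evaluated at (0.3, 0.7)
print(clayton_cdf(0.3, 0.7))
```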

Deep Extreme Value Copulas for Estimation and Sampling

no code implementations • 17 Feb 2021 • Ali Hasan, Khalil Elkhalil, Joao M. Pereira, Sina Farsiu, Jose H. Blanchet, Vahid Tarokh

We propose a new method for modeling the distribution function of high-dimensional extreme value distributions.

Model-Free Energy Distance for Pruning DNNs

1 code implementation • 1 Jan 2021 • Mohammadreza Soltani, Suya Wu, Yuerong Li, Jie Ding, Vahid Tarokh

We introduce a new model-free information measure between the feature maps and the output of the network.

On Statistical Efficiency in Learning

1 code implementation • 24 Dec 2020 • Jie Ding, Enmao Diao, Jiawei Zhou, Vahid Tarokh

We propose a generalized notion of Takeuchi's information criterion and prove that the proposed method can asymptotically achieve the optimal out-of-sample prediction loss under reasonable assumptions.

Model Selection
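For reference, the classical Takeuchi information criterion that the paper generalizes replaces AIC's parameter count with a trace correction that remains valid under model misspecification:

```latex
\mathrm{TIC} = -2 \log L(\hat{\theta}) + 2\,\mathrm{tr}\!\big(\hat{J}^{-1}\hat{I}\big),
```

where \hat{J} is the sample Hessian of the negative log-likelihood and \hat{I} the sample outer product of score vectors; under correct specification, tr(J^{-1}I) equals the parameter count k and TIC reduces to AIC.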

Task-Aware Neural Architecture Search

no code implementations • 27 Oct 2020 • Cat P. Le, Mohammadreza Soltani, Robert Ravier, Vahid Tarokh

The design of handcrafted neural networks requires considerable time and resources.

Neural Architecture Search

HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients

1 code implementation • ICLR 2021 • Enmao Diao, Jie Ding, Vahid Tarokh

In this work, we propose a new federated learning framework named HeteroFL to address heterogeneous clients equipped with very different computation and communication capabilities.

Federated Learning
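The core mechanism of HeteroFL is that each client trains a width-scaled sub-network whose parameters form a nested slice of the global model, and the server averages each global parameter over the clients that hold it. A minimal NumPy sketch of that nested-slice aggregation for a single weight matrix, an illustration of the idea under simplifying assumptions rather than the authors' implementation:

```python
import numpy as np

def aggregate_heterofl(global_shape, client_updates):
    """Average heterogeneous client updates into one global weight matrix.

    Each client trains only the upper-left slice of the global matrix
    (its width-scaled sub-network), so every global entry is averaged
    over the subset of clients whose slice covers it."""
    total = np.zeros(global_shape)
    count = np.zeros(global_shape)
    for w in client_updates:        # w is the client's (smaller) slice
        r, c = w.shape
        total[:r, :c] += w
        count[:r, :c] += 1
    count[count == 0] = 1           # entries no client trained stay zero
    return total / count

# Three clients at width ratios 1.0, 0.5, 0.25 of a 4x4 global layer
updates = [np.ones((4, 4)), 2 * np.ones((2, 2)), 4 * np.ones((1, 1))]
print(aggregate_heterofl((4, 4), updates))
```

Because the slices are nested, even the smallest-capacity clients still contribute to the most heavily shared parameters.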

Projected Latent Markov Chain Monte Carlo: Conditional Sampling of Normalizing Flows

no code implementations • ICLR 2021 • Chris Cannella, Mohammadreza Soltani, Vahid Tarokh

We introduce Projected Latent Markov Chain Monte Carlo (PL-MCMC), a technique for sampling from the high-dimensional conditional distributions learned by a normalizing flow.

Deep Cross-Subject Mapping of Neural Activity

no code implementations • 13 Jul 2020 • Marko Angjelichinoski, Bijan Pesaran, Vahid Tarokh

In this paper, we consider the problem of cross-subject decoding, where neural activity data collected from the prefrontal cortex of a given subject (destination) is used to decode motor intentions from the neural activity of a different subject (source).

GeoStat Representations of Time Series for Fast Classification

no code implementations • 13 Jul 2020 • Robert J. Ravier, Mohammadreza Soltani, Miguel Simões, Denis Garagic, Vahid Tarokh

GeoStat representations are based on a generalization of recent methods for trajectory classification; they summarize a time series through comprehensive statistics of (possibly windowed) distributions of easy-to-compute differential geometric quantities, requiring no dynamic time warping.

Classification • Dynamic Time Warping • +3
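As a rough illustration of this style of feature (the particular geometric quantities and summary statistics below are hypothetical choices, not the paper's exact construction):

```python
import numpy as np

def geostat_like_features(x, quantiles=(0.1, 0.5, 0.9)):
    """Toy GeoStat-flavored feature vector: summarize the distributions
    of simple differential-geometric quantities (velocity and curvature
    proxies via finite differences). Choices here are illustrative."""
    v = np.diff(x)            # discrete velocity
    a = np.diff(x, n=2)       # discrete curvature proxy (2nd difference)
    feats = []
    for q in (v, a):
        feats += [q.mean(), q.std()]
        feats += list(np.quantile(q, quantiles))
    return np.array(feats)    # fixed length, no dynamic time warping

print(geostat_like_features(np.sin(np.linspace(0, 10, 200))))
```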

Identifying Latent Stochastic Differential Equations with Variational Auto-Encoders

no code implementations • 12 Jul 2020 • Ali Hasan, João M. Pereira, Sina Farsiu, Vahid Tarokh

We present a method for learning latent stochastic differential equations (SDEs) from high-dimensional time series data.

Self-Supervised Learning • Time Series
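Once an SDE dx = f(x) dt + g(x) dW has been identified, sampling trajectories reduces to standard discretization. A minimal Euler-Maruyama sketch, with an Ornstein-Uhlenbeck drift standing in for learned drift and diffusion networks:

```python
import numpy as np

def euler_maruyama(f, g, x0, dt=0.01, steps=1000, seed=0):
    """Simulate dx = f(x) dt + g(x) dW by Euler-Maruyama:
    x_{k+1} = x_k + f(x_k) dt + g(x_k) sqrt(dt) * eps, eps ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        eps = rng.standard_normal(xs[-1].shape)
        xs.append(xs[-1] + f(xs[-1]) * dt + g(xs[-1]) * np.sqrt(dt) * eps)
    return np.array(xs)

# Ornstein-Uhlenbeck process as a stand-in for a learned latent SDE
path = euler_maruyama(lambda x: -x, lambda x: 0.5 * np.ones_like(x), [1.0])
print(path[-1])
```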

Fisher Auto-Encoders

no code implementations • 12 Jul 2020 • Khalil Elkhalil, Ali Hasan, Jie Ding, Sina Farsiu, Vahid Tarokh

It has been conjectured that the Fisher divergence is more robust to model uncertainty than the conventional Kullback-Leibler (KL) divergence.
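The Fisher divergence referred to here compares score functions rather than densities:

```latex
F(p \,\|\, q) = \mathbb{E}_{x \sim p}\, \big\| \nabla_x \log p(x) - \nabla_x \log q(x) \big\|^2 .
```

Since it depends on q only through \nabla_x \log q, it is insensitive to the model's normalizing constant, one commonly cited motivation for using it.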

Model Linkage Selection for Cooperative Learning

no code implementations • 15 May 2020 • Jiaying Zhou, Jie Ding, Kean Ming Tan, Vahid Tarokh

The crux is to sequentially incorporate additional learners that can enhance the prediction accuracy of an existing joint model, based on user-specified parameter-sharing patterns across a set of learners.

Proximal Gradient Algorithm with Momentum and Flexible Parameter Restart for Nonconvex Optimization

no code implementations • 26 Feb 2020 • Yi Zhou, Zhe Wang, Kaiyi Ji, Yingbin Liang, Vahid Tarokh

Our APG-restart is designed to 1) allow flexible parameter restart schemes that cover many existing ones; 2) achieve a global sub-linear convergence rate in nonconvex and nonsmooth optimization; and 3) guarantee convergence to a critical point, with asymptotic convergence rates that depend on the parameterization of the local geometry.
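For context, a minimal sketch of accelerated proximal gradient with one classical restart scheme (function-value restart), applied to a lasso-type toy problem; the step size and prox choice are illustrative assumptions, and the paper's framework covers more general restart schemes:

```python
import numpy as np

def apg_restart(grad_f, f, prox, x0, step, iters=500):
    """Accelerated proximal gradient with a function-value restart:
    whenever the objective increases, reset the momentum."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    f_prev = np.inf
    for _ in range(iters):
        x_new = prox(y - step * grad_f(y), step)        # prox-gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum
        f_cur = f(x_new)
        if f_cur > f_prev:                              # restart: kill momentum
            y, t_new = x_new.copy(), 1.0
        x, t, f_prev = x_new, t_new, f_cur
    return x

# Toy problem: min 0.5*||x - b||^2 + lam*||x||_1
b, lam = np.array([3.0, -0.2, 0.0]), 0.5
soft = lambda z, s: np.sign(z) * np.maximum(np.abs(z) - lam * s, 0.0)
f = lambda x: 0.5 * np.sum((x - b) ** 2) + lam * np.abs(x).sum()
print(apg_restart(lambda x: x - b, f, soft, np.zeros(3), step=1.0))
```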

Multimodal Controller for Generative Models

1 code implementation • 7 Feb 2020 • Enmao Diao, Jie Ding, Vahid Tarokh

Class-conditional generative models are crucial tools for data generation from user-specified class labels.

Robust Marine Buoy Placement for Ship Detection Using Dropout K-Means

no code implementations • 2 Jan 2020 • Yuting Ng, João M. Pereira, Denis Garagic, Vahid Tarokh

Marine buoys aid in the battle against Illegal, Unreported and Unregulated (IUU) fishing by detecting fishing vessels in their vicinity.

SpiderBoost and Momentum: Faster Variance Reduction Algorithms

no code implementations • NeurIPS 2019 • Zhe Wang, Kaiyi Ji, Yi Zhou, Yingbin Liang, Vahid Tarokh

SARAH and SPIDER are two recently developed stochastic variance-reduced algorithms, and SPIDER has been shown to achieve a near-optimal first-order oracle complexity in smooth nonconvex optimization.
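For reference, a schematic sketch of the SPIDER-style recursive gradient estimator that SpiderBoost builds on: a periodic full-gradient refresh, with the cheap recursive update v_t = grad_i(x_t) - grad_i(x_{t-1}) + v_{t-1} in between. The epoch length and step size below are illustrative assumptions:

```python
import numpy as np

def spider_sgd(grad_i, full_grad, x0, n, step=0.1, epoch=20, iters=200):
    """Schematic SPIDER/SpiderBoost loop: exact gradient every `epoch`
    steps, recursive variance-reduced estimator in between."""
    x = x0.copy()
    for t in range(iters):
        if t % epoch == 0:
            v = full_grad(x)              # periodic exact refresh
        else:
            i = np.random.randint(n)      # one sampled component
            v = grad_i(i, x) - grad_i(i, x_prev) + v
        x_prev, x = x, x - step * v       # SpiderBoost uses a constant step
    return x

# Toy least squares: f(x) = (1/n) * sum_i 0.5 * (a_i . x - b_i)^2
A = np.random.randn(50, 3)
b = A @ np.array([1.0, -2.0, 0.5])
gi = lambda i, x: (A[i] @ x - b[i]) * A[i]
fg = lambda x: A.T @ (A @ x - b) / len(b)
print(spider_sgd(gi, fg, np.zeros(3), n=50))
```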

Gradient Information for Representation and Modeling

no code implementations • NeurIPS 2019 • Jie Ding, Robert Calderbank, Vahid Tarokh

Motivated by Fisher divergence, in this paper we present a new set of information quantities which we refer to as gradient information.

A Distributed Online Convex Optimization Algorithm with Improved Dynamic Regret

no code implementations • 12 Nov 2019 • Yan Zhang, Robert J. Ravier, Michael M. Zavlanos, Vahid Tarokh

In this paper, we consider the problem of distributed online convex optimization, where a network of local agents aims to jointly optimize a convex function over multiple time steps.
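Dynamic regret measures performance against a time-varying comparator sequence rather than a single fixed decision:

```latex
R_T^{d} = \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(x_t^{*}),
\qquad x_t^{*} \in \arg\min_{x \in \mathcal{X}} f_t(x).
```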

Cross-subject Decoding of Eye Movement Goals from Local Field Potentials

no code implementations • 8 Nov 2019 • Marko Angjelichinoski, John Choi, Taposh Banerjee, Bijan Pesaran, Vahid Tarokh

We propose an efficient data-driven estimation approach for linear transfer functions that uses the first and second order moments of the class-conditional distributions.

Transfer Learning

Deep Clustering of Compressed Variational Embeddings

no code implementations • 23 Oct 2019 • Suya Wu, Enmao Diao, Jie Ding, Vahid Tarokh

Motivated by the ever-increasing demands for limited communication bandwidth and low-power consumption, we propose a new methodology, named joint Variational Autoencoders with Bernoulli mixture models (VAB), for performing clustering in the compressed data domain.

Deep Clustering

Learning Partial Differential Equations from Data Using Neural Networks

1 code implementation • 22 Oct 2019 • Ali Hasan, João M. Pereira, Robert Ravier, Sina Farsiu, Vahid Tarokh

We develop a framework for estimating unknown partial differential equations from noisy data, using a deep learning approach.
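A minimal sketch of the general recipe (observe or simulate a field, estimate derivatives by finite differences, then regress the time derivative on spatial derivatives with a small network); the heat-equation data, grid, and architecture below are illustrative assumptions, not the authors' setup:

```python
import numpy as np
import torch
import torch.nn as nn

# Synthetic data from the heat equation u_t = 0.1 * u_xx on a 1D grid
dx, dt = 0.1, 0.001
x = np.arange(0, 10, dx)
u = [np.exp(-(x - 5.0) ** 2)]
for _ in range(200):
    u_xx = np.gradient(np.gradient(u[-1], dx), dx)
    u.append(u[-1] + dt * 0.1 * u_xx)
u = np.array(u)

# Features: (u, u_x, u_xx) at each grid point; target: finite-difference u_t
u_x = np.gradient(u, dx, axis=1)
u_xx = np.gradient(u_x, dx, axis=1)
u_t = np.gradient(u, dt, axis=0)
X = torch.tensor(np.stack([u, u_x, u_xx], -1).reshape(-1, 3), dtype=torch.float32)
y = torch.tensor(u_t.reshape(-1, 1), dtype=torch.float32)

net = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(500):                  # fit u_t ~ N(u, u_x, u_xx)
    opt.zero_grad()
    loss = ((net(X) - y) ** 2).mean()
    loss.backward()
    opt.step()
print(loss.item())                    # small residual => PDE recovered
```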

Perception-Distortion Trade-off with Restricted Boltzmann Machines

no code implementations • 21 Oct 2019 • Chris Cannella, Jie Ding, Mohammadreza Soltani, Vahid Tarokh

In this work, we introduce a new procedure for applying Restricted Boltzmann Machines (RBMs) to missing data inference tasks, based on linearization of the effective energy function governing the distribution of observations.

Speech Emotion Recognition with Dual-Sequence LSTM Architecture

no code implementations • 20 Oct 2019 • Jianyou Wang, Michael Xue, Ryan Culhane, Enmao Diao, Jie Ding, Vahid Tarokh

Speech Emotion Recognition (SER) has emerged as a critical component of the next generation human-machine interfacing technologies.

Speech Emotion Recognition

Supervised Encoding for Discrete Representation Learning

1 code implementation • 15 Oct 2019 • Cat P. Le, Yi Zhou, Jie Ding, Vahid Tarokh

Classical supervised classification seeks a nonlinear mapping that takes each encoded feature directly to a probability mass over the labels.

Representation Learning • Style Transfer

Restricted Recurrent Neural Networks

1 code implementation • 21 Aug 2019 • Enmao Diao, Jie Ding, Vahid Tarokh

Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), have become standard building blocks for online learning from sequential data in many research areas, including natural language processing and speech analysis.

Language Modelling

DRASIC: Distributed Recurrent Autoencoder for Scalable Image Compression

1 code implementation • 23 Mar 2019 • Enmao Diao, Jie Ding, Vahid Tarokh

We propose a new architecture for distributed image compression from a group of distributed data sources.

Image Compression

Momentum Schemes with Stochastic Variance Reduction for Nonconvex Composite Optimization

no code implementations • 7 Feb 2019 • Yi Zhou, Zhe Wang, Kaiyi Ji, Yingbin Liang, Vahid Tarokh

In this paper, we develop novel momentum schemes with flexible coefficient settings to accelerate SPIDER for nonconvex and nonsmooth composite optimization, and show that the resulting algorithms achieve the near-optimal gradient oracle complexity for achieving a generalized first-order stationary condition.

Minimax-optimal decoding of movement goals from local field potentials using complex spectral features

no code implementations • 29 Jan 2019 • Marko Angjelichinoski, Taposh Banerjee, John Choi, Bijan Pesaran, Vahid Tarokh

We consider the problem of predicting eye movement goals from local field potentials (LFP) recorded through a multielectrode array in the macaque prefrontal cortex.

SGD Converges to Global Minimum in Deep Learning via Star-convex Path

no code implementations • ICLR 2019 • Yi Zhou, Junjie Yang, Huishuai Zhang, Yingbin Liang, Vahid Tarokh

Stochastic gradient descent (SGD) has been found to be surprisingly effective in training a variety of deep neural networks.
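A minimization path is star-convex when every iterate satisfies the first-order convexity inequality at a global minimizer x* alone:

```latex
f(x^{*}) \ge f(x_k) + \langle \nabla f(x_k),\, x^{*} - x_k \rangle
\quad \text{for all iterates } x_k,
```

so gradient steps make progress toward x* even if f is nonconvex elsewhere.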

SpiderBoost and Momentum: Faster Stochastic Variance Reduction Algorithms

no code implementations • 25 Oct 2018 • Zhe Wang, Kaiyi Ji, Yi Zhou, Yingbin Liang, Vahid Tarokh

SARAH and SPIDER are two recently developed stochastic variance-reduced algorithms, and SPIDER has been shown to achieve a near-optimal first-order oracle complexity in smooth nonconvex optimization.

Model Selection Techniques -- An Overview

no code implementations • 22 Oct 2018 • Jie Ding, Vahid Tarokh, Yuhong Yang

In the era of big data, analysts usually explore various statistical models or machine learning methods for observed data in order to facilitate scientific discoveries or gain predictive power.

Epidemiology • Model Selection

Learning Bounds for Greedy Approximation with Explicit Feature Maps from Multiple Kernels

no code implementations • NeurIPS 2018 • Shahin Shahrampour, Vahid Tarokh

We establish an out-of-sample error bound capturing the trade-off between the error in terms of explicit features (approximation error) and the error due to spectral properties of the best model in the Hilbert space associated with the combined kernel (spectral error).

Stationary Geometric Graphical Model Selection

no code implementations • 10 Jun 2018 • Ilya Soloveychik, Vahid Tarokh

We consider the problem of model selection in Gaussian Markov fields in the sample-deficient scenario.

Model Selection • Time Series

Region Detection in Markov Random Fields: Gaussian Case

no code implementations • 12 Feb 2018 • Ilya Soloveychik, Vahid Tarokh

Assuming that the entire graph can be partitioned into a number of spatial regions with similar edge parameters and reasonably regular boundaries, we develop new information-theoretic sample complexity bounds and show that a bounded number of samples can be sufficient to consistently recover these regions.

Model Selection

On Data-Dependent Random Features for Improved Generalization in Supervised Learning

no code implementations • 19 Dec 2017 • Shahin Shahrampour, Ahmad Beirami, Vahid Tarokh

The randomized-feature approach has been successfully employed in large-scale kernel approximation and supervised learning.
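For context, the classical data-independent instance of the randomized-feature approach is random Fourier features for the Gaussian kernel (Rahimi & Recht, 2007); the paper's contribution is choosing such features in a data-dependent way, which this sketch does not do:

```python
import numpy as np

def random_fourier_features(X, D=200, gamma=1.0, seed=0):
    """Random Fourier features z(x) whose inner products approximate the
    Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = np.random.randn(5, 3)
Z = random_fourier_features(X)
print(Z @ Z.T)   # approximates the Gaussian kernel Gram matrix of X
```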

On Optimal Generalizability in Parametric Learning

no code implementations • NeurIPS 2017 • Ahmad Beirami, Meisam Razaviyayn, Shahin Shahrampour, Vahid Tarokh

In practice, such bias is measured by cross-validation, where the data set is partitioned into a training set used to fit the model and a validation set that is held out from training to measure out-of-sample performance.

Dictionary Learning and Sparse Coding-based Denoising for High-Resolution Task Functional Connectivity MRI Analysis

no code implementations • 21 Jul 2017 • Seongah Jeong, Xiang Li, Jiarui Yang, Quanzheng Li, Vahid Tarokh

In order to address the limitations of the unsupervised DLSC-based fMRI studies, we utilize the prior knowledge of task paradigm in the learning step to train a data-driven dictionary and to model the sparse representation.

Denoising • Dictionary Learning

Nonlinear Sequential Accepts and Rejects for Identification of Top Arms in Stochastic Bandits

no code implementations • 9 Jul 2017 • Shahin Shahrampour, Vahid Tarokh

At each round, the budget is divided by a nonlinear function of remaining arms, and the arms are pulled correspondingly.

Multi-Armed Bandits

On Sequential Elimination Algorithms for Best-Arm Identification in Multi-Armed Bandits

no code implementations • 8 Sep 2016 • Shahin Shahrampour, Mohammad Noshad, Vahid Tarokh

Based on this result, we develop an algorithm that divides the budget according to a nonlinear function of remaining arms at each round.

Multi-Armed Bandits

Learning the Number of Autoregressive Mixtures in Time Series Using the Gap Statistics

no code implementations • 11 Sep 2015 • Jie Ding, Mohammad Noshad, Vahid Tarokh

We define a new distance measure between stable AR filters and draw a reference curve used to gauge how much adding a new AR filter improves the model's performance; we then choose the number of AR filters with the maximum gap from the reference curve.

Model Selection • Time Series

Bridging AIC and BIC: a new criterion for autoregression

no code implementations • 11 Aug 2015 • Jie Ding, Vahid Tarokh, Yuhong Yang

When the data is generated from a finite-order autoregression, the Bayesian information criterion is known to be consistent, and so is the new criterion.

Model Selection • Time Series
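For a model with k parameters, maximized likelihood \hat{L}, and sample size n, the two criteria being bridged are

```latex
\mathrm{AIC} = -2 \log \hat{L} + 2k, \qquad
\mathrm{BIC} = -2 \log \hat{L} + k \log n ,
```

with AIC favoring predictive efficiency and BIC favoring consistent order selection; the proposed criterion is designed to bridge these two behaviors.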

Data-Driven Learning of the Number of States in Multi-State Autoregressive Models

no code implementations • 6 Jun 2015 • Jie Ding, Mohammad Noshad, Vahid Tarokh

In this work, we consider the class of multi-state autoregressive processes that can be used to model non-stationary time-series of interest.

Model Selection • Time Series
