Search Results for author: Vahid Tarokh

Found 79 papers, 22 papers with code

Data-Driven Learning of the Number of States in Multi-State Autoregressive Models

no code implementations6 Jun 2015 Jie Ding, Mohammad Noshad, Vahid Tarokh

In this work, we consider the class of multi-state autoregressive processes that can be used to model non-stationary time-series of interest.

Model Selection Time Series +1

Bridging AIC and BIC: a new criterion for autoregression

no code implementations11 Aug 2015 Jie Ding, Vahid Tarokh, Yuhong Yang

When the data is generated from a finite order autoregression, the Bayesian information criterion is known to be consistent, and so is the new criterion.

Model Selection Time Series +1

Learning the Number of Autoregressive Mixtures in Time Series Using the Gap Statistics

no code implementations11 Sep 2015 Jie Ding, Mohammad Noshad, Vahid Tarokh

We define a new distance measure between stable AR filters and draw a reference curve that is used to measure how much adding a new AR filter improves the performance of the model, and then choose the number of AR filters that has the maximum gap with the reference curve.

Model Selection Time Series +1
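The selection rule this entry describes — score each candidate order against a reference curve and keep the one with the maximum gap — can be sketched generically. The inputs below are illustrative placeholders; in the paper the reference curve is derived from a distance measure between stable AR filters.

```python
def select_num_components(performance, reference):
    """Pick the model order whose improvement over a reference curve is largest.

    performance[k] is the fitted-model score with k+1 components and
    reference[k] is the corresponding point on the reference curve.
    """
    gaps = [p - r for p, r in zip(performance, reference)]
    return gaps.index(max(gaps)) + 1   # 1-based number of components

k = select_num_components(performance=[0.40, 0.62, 0.70, 0.72],
                          reference=[0.35, 0.50, 0.63, 0.70])
# gaps are 0.05, 0.12, 0.07, 0.02, so the second order wins
```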

On Sequential Elimination Algorithms for Best-Arm Identification in Multi-Armed Bandits

no code implementations8 Sep 2016 Shahin Shahrampour, Mohammad Noshad, Vahid Tarokh

Based on this result, we develop an algorithm that divides the budget according to a nonlinear function of remaining arms at each round.

Multi-Armed Bandits
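The budget-splitting idea above can be sketched in a few lines. The specific nonlinear rule below (pulls per arm proportional to a power of the number of remaining arms) is a hypothetical choice for illustration, not the paper's exact scheme.

```python
def allocate_budget(total_budget, n_arms, power=1.0):
    """Split a fixed pull budget across elimination rounds.

    At the round where m arms remain, each surviving arm is pulled a number
    of times proportional to 1 / (m ** power); power=1 gives a linear rule,
    power != 1 a nonlinear one. Illustrative only, not the paper's scheme.
    """
    remaining = list(range(n_arms, 1, -1))            # arms left at each round
    weights = [1.0 / (m ** power) for m in remaining]
    scale = total_budget / sum(w * m for w, m in zip(weights, remaining))
    # pulls per surviving arm in each round (rounded down; leftover unused)
    return [int(scale * w) for w in weights]

pulls = allocate_budget(total_budget=1000, n_arms=5, power=0.5)
# later rounds have fewer arms, so each survivor gets more pulls
```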

Nonlinear Sequential Accepts and Rejects for Identification of Top Arms in Stochastic Bandits

no code implementations9 Jul 2017 Shahin Shahrampour, Vahid Tarokh

At each round, the budget is divided by a nonlinear function of remaining arms, and the arms are pulled correspondingly.

Multi-Armed Bandits

Dictionary Learning and Sparse Coding-based Denoising for High-Resolution Task Functional Connectivity MRI Analysis

no code implementations21 Jul 2017 Seongah Jeong, Xiang Li, Jiarui Yang, Quanzheng Li, Vahid Tarokh

In order to address the limitations of the unsupervised DLSC-based fMRI studies, we utilize the prior knowledge of task paradigm in the learning step to train a data-driven dictionary and to model the sparse representation.

Denoising Dictionary Learning

On Optimal Generalizability in Parametric Learning

no code implementations NeurIPS 2017 Ahmad Beirami, Meisam Razaviyayn, Shahin Shahrampour, Vahid Tarokh

In practice, such bias is measured by cross-validation: the data set is partitioned into a training set used to fit the model and a held-out validation set used only to measure out-of-sample performance.
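The train/validation split this entry refers to can be sketched minimally. The fraction, toy model, and loss below are illustrative choices, not the paper's setup.

```python
def holdout_estimate(data, fit, loss, train_frac=0.8):
    """Estimate out-of-sample loss with a single holdout split:
    fit on the training part, evaluate only on the held-out part."""
    n_train = int(len(data) * train_frac)
    model = fit(data[:n_train])
    held_out = data[n_train:]
    return sum(loss(model, x) for x in held_out) / len(held_out)

# toy example: the "model" is the training mean, the loss is squared error
data = [1.0, 2.0, 3.0, 4.0, 5.0]
est = holdout_estimate(data,
                       fit=lambda d: sum(d) / len(d),
                       loss=lambda m, x: (m - x) ** 2)
```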

On Data-Dependent Random Features for Improved Generalization in Supervised Learning

no code implementations19 Dec 2017 Shahin Shahrampour, Ahmad Beirami, Vahid Tarokh

The randomized-feature approach has been successfully employed in large-scale kernel approximation and supervised learning.

Region Detection in Markov Random Fields: Gaussian Case

no code implementations12 Feb 2018 Ilya Soloveychik, Vahid Tarokh

Assuming that the entire graph can be partitioned into a number of spatial regions with similar edge parameters and reasonably regular boundaries, we develop new information-theoretic sample complexity bounds and show that a bounded number of samples can be sufficient to consistently recover these regions.

Model Selection

Stationary Geometric Graphical Model Selection

no code implementations10 Jun 2018 Ilya Soloveychik, Vahid Tarokh

We consider the problem of model selection in Gaussian Markov fields in the sample-deficient scenario.

Model Selection Time Series +1

Learning Bounds for Greedy Approximation with Explicit Feature Maps from Multiple Kernels

no code implementations NeurIPS 2018 Shahin Shahrampour, Vahid Tarokh

We establish an out-of-sample error bound capturing the trade-off between the error in terms of explicit features (approximation error) and the error due to spectral properties of the best model in the Hilbert space associated to the combined kernel (spectral error).

Model Selection Techniques -- An Overview

no code implementations22 Oct 2018 Jie Ding, Vahid Tarokh, Yuhong Yang

In the era of big data, analysts usually explore various statistical models or machine learning methods for observed data in order to facilitate scientific discoveries or gain predictive power.

Epidemiology Model Selection

SpiderBoost and Momentum: Faster Stochastic Variance Reduction Algorithms

1 code implementation25 Oct 2018 Zhe Wang, Kaiyi Ji, Yi Zhou, Yingbin Liang, Vahid Tarokh

SARAH and SPIDER are two recently developed stochastic variance-reduced algorithms, and SPIDER has been shown to achieve a near-optimal first-order oracle complexity in smooth nonconvex optimization.
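A SPIDER-style estimator maintains a running gradient estimate that is refreshed with a full gradient every q steps and updated recursively with small minibatches in between; SpiderBoost pairs it with a constant stepsize. The sketch below shows that recursion on a toy scalar finite sum; the hyperparameter defaults are illustrative, not the paper's.

```python
import random

def spider_boost(grad_i, n, x0, lr=0.1, epochs=5, q=10, batch=2):
    """SPIDER-style variance-reduced descent on a finite sum of n terms.

    grad_i(i, x) returns the gradient of the i-th component at x.
    Every q steps the estimator v is refreshed with the full gradient;
    between refreshes it is updated with a minibatch S:
        v <- mean_{i in S}[grad_i(i, x) - grad_i(i, x_prev)] + v
    """
    x, x_prev, v = x0, x0, 0.0
    for t in range(epochs * q):
        if t % q == 0:
            v = sum(grad_i(i, x) for i in range(n)) / n       # full refresh
        else:
            S = random.sample(range(n), batch)
            v = sum(grad_i(i, x) - grad_i(i, x_prev) for i in S) / batch + v
        x_prev, x = x, x - lr * v                             # constant stepsize
    return x

# minimize f(x) = (1/n) * sum_i (x - a_i)^2; the minimizer is mean(a) = 2.5
a = [1.0, 2.0, 3.0, 4.0]
x_star = spider_boost(lambda i, x: 2.0 * (x - a[i]), n=4, x0=0.0)
```

On this quadratic with identical per-component curvature the recursive update is exact, so the iterates match full gradient descent.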

SGD Converges to Global Minimum in Deep Learning via Star-convex Path

no code implementations ICLR 2019 Yi Zhou, Junjie Yang, Huishuai Zhang, Yingbin Liang, Vahid Tarokh

Stochastic gradient descent (SGD) has been found to be surprisingly effective in training a variety of deep neural networks.

Minimax-optimal decoding of movement goals from local field potentials using complex spectral features

no code implementations29 Jan 2019 Marko Angjelichinoski, Taposh Banerjee, John Choi, Bijan Pesaran, Vahid Tarokh

We consider the problem of predicting eye movement goals from local field potentials (LFP) recorded through a multielectrode array in the macaque prefrontal cortex.

Momentum Schemes with Stochastic Variance Reduction for Nonconvex Composite Optimization

no code implementations7 Feb 2019 Yi Zhou, Zhe Wang, Kaiyi Ji, Yingbin Liang, Vahid Tarokh

In this paper, we develop novel momentum schemes with flexible coefficient settings to accelerate SPIDER for nonconvex and nonsmooth composite optimization, and show that the resulting algorithms achieve the near-optimal gradient oracle complexity for achieving a generalized first-order stationary condition.

DRASIC: Distributed Recurrent Autoencoder for Scalable Image Compression

1 code implementation23 Mar 2019 Enmao Diao, Jie Ding, Vahid Tarokh

We propose a new architecture for distributed image compression from a group of distributed data sources.

Image Compression

Restricted Recurrent Neural Networks

1 code implementation21 Aug 2019 Enmao Diao, Jie Ding, Vahid Tarokh

Recurrent Neural Network (RNN) and its variations such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), have become standard building blocks for learning online data of sequential nature in many research areas, including natural language processing and speech data analysis.

Language Modelling

Supervised Encoding for Discrete Representation Learning

1 code implementation15 Oct 2019 Cat P. Le, Yi Zhou, Jie Ding, Vahid Tarokh

Classical supervised classification tasks search for a nonlinear mapping that maps each encoded feature directly to a probability mass over the labels.

Representation Learning Style Transfer

Speech Emotion Recognition with Dual-Sequence LSTM Architecture

no code implementations20 Oct 2019 Jianyou Wang, Michael Xue, Ryan Culhane, Enmao Diao, Jie Ding, Vahid Tarokh

Speech Emotion Recognition (SER) has emerged as a critical component of the next generation human-machine interfacing technologies.

Speech Emotion Recognition

Perception-Distortion Trade-off with Restricted Boltzmann Machines

no code implementations21 Oct 2019 Chris Cannella, Jie Ding, Mohammadreza Soltani, Vahid Tarokh

In this work, we introduce a new procedure for applying Restricted Boltzmann Machines (RBMs) to missing data inference tasks, based on linearization of the effective energy function governing the distribution of observations.

Learning Partial Differential Equations from Data Using Neural Networks

1 code implementation22 Oct 2019 Ali Hasan, João M. Pereira, Robert Ravier, Sina Farsiu, Vahid Tarokh

We develop a framework for estimating unknown partial differential equations from noisy data, using a deep learning approach.

Deep Clustering of Compressed Variational Embeddings

no code implementations23 Oct 2019 Suya Wu, Enmao Diao, Jie Ding, Vahid Tarokh

Motivated by the ever-increasing demands for limited communication bandwidth and low-power consumption, we propose a new methodology, named joint Variational Autoencoders with Bernoulli mixture models (VAB), for performing clustering in the compressed data domain.

Clustering Deep Clustering

Cross-subject Decoding of Eye Movement Goals from Local Field Potentials

no code implementations8 Nov 2019 Marko Angjelichinoski, John Choi, Taposh Banerjee, Bijan Pesaran, Vahid Tarokh

We propose an efficient data-driven estimation approach for linear transfer functions that uses the first and second order moments of the class-conditional distributions.

Transfer Learning

A Distributed Online Convex Optimization Algorithm with Improved Dynamic Regret

no code implementations12 Nov 2019 Yan Zhang, Robert J. Ravier, Michael M. Zavlanos, Vahid Tarokh

In this paper, we consider the problem of distributed online convex optimization, where a network of local agents aim to jointly optimize a convex function over a period of multiple time steps.

SpiderBoost and Momentum: Faster Variance Reduction Algorithms

no code implementations NeurIPS 2019 Zhe Wang, Kaiyi Ji, Yi Zhou, Yingbin Liang, Vahid Tarokh

SARAH and SPIDER are two recently developed stochastic variance-reduced algorithms, and SPIDER has been shown to achieve a near-optimal first-order oracle complexity in smooth nonconvex optimization.

Gradient Information for Representation and Modeling

no code implementations NeurIPS 2019 Jie Ding, Robert Calderbank, Vahid Tarokh

Motivated by Fisher divergence, in this paper we present a new set of information quantities which we refer to as gradient information.

Robust Marine Buoy Placement for Ship Detection Using Dropout K-Means

no code implementations2 Jan 2020 Yuting Ng, João M. Pereira, Denis Garagic, Vahid Tarokh

Marine buoys aid in the battle against Illegal, Unreported and Unregulated (IUU) fishing by detecting fishing vessels in their vicinity.

Clustering
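One way to read "dropout k-means" is Lloyd's algorithm in which each center is randomly omitted during the assignment step, so the final placement stays useful when some buoys fail. The sketch below follows that reading; it is our interpretation, not necessarily the paper's exact procedure.

```python
import random

def dropout_kmeans(points, k, p_drop=0.2, iters=50, seed=0):
    """Lloyd's k-means (1-D) with per-iteration center dropout.

    At each assignment step every center is independently dropped with
    probability p_drop (at least one always survives); the dropout
    mechanism here is an interpretation, not the paper's exact scheme.
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        alive = [c for c in centers if rng.random() > p_drop] or [centers[0]]
        clusters = {i: [] for i in range(len(centers))}
        for x in points:                       # assign to nearest survivor
            nearest = min(alive, key=lambda c: abs(x - c))
            clusters[centers.index(nearest)].append(x)
        centers = [sum(v) / len(v) if v else centers[i]   # recompute means
                   for i, v in clusters.items()]
    return sorted(centers)

pts = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
centers = dropout_kmeans(pts, k=2)
```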

Multimodal Controller for Generative Models

1 code implementation7 Feb 2020 Enmao Diao, Jie Ding, Vahid Tarokh

In the absence of the controllers, our model reduces to non-conditional generative models.

Proximal Gradient Algorithm with Momentum and Flexible Parameter Restart for Nonconvex Optimization

no code implementations26 Feb 2020 Yi Zhou, Zhe Wang, Kaiyi Ji, Yingbin Liang, Vahid Tarokh

Our APG-restart is designed to 1) allow for adopting flexible parameter restart schemes that cover many existing ones; 2) have a global sub-linear convergence rate in nonconvex and nonsmooth optimization; and 3) have guaranteed convergence to a critical point and have various types of asymptotic convergence rates depending on the parameterization of local geometry in nonconvex and nonsmooth optimization.

Model Linkage Selection for Cooperative Learning

no code implementations15 May 2020 Jiaying Zhou, Jie Ding, Kean Ming Tan, Vahid Tarokh

The main crux is to sequentially incorporate additional learners that can enhance the prediction accuracy of an existing joint model based on user-specified parameter sharing patterns across a set of learners.

Identifying Latent Stochastic Differential Equations

1 code implementation12 Jul 2020 Ali Hasan, João M. Pereira, Sina Farsiu, Vahid Tarokh

We present a method for learning latent stochastic differential equations (SDEs) from high-dimensional time series data.

Self-Supervised Learning Time Series +1

Fisher Auto-Encoders

no code implementations12 Jul 2020 Khalil Elkhalil, Ali Hasan, Jie Ding, Sina Farsiu, Vahid Tarokh

It has been conjectured that the Fisher divergence is more robust to model uncertainty than the conventional Kullback-Leibler (KL) divergence.

GeoStat Representations of Time Series for Fast Classification

no code implementations13 Jul 2020 Robert J. Ravier, Mohammadreza Soltani, Miguel Simões, Denis Garagic, Vahid Tarokh

GeoStat representations are based on a generalization of recent methods for trajectory classification; they summarize a time series with comprehensive statistics of (possibly windowed) distributions of easily computed differential-geometric quantities, requiring no dynamic time warping.

Classification Dynamic Time Warping +4

Projected Latent Markov Chain Monte Carlo: Conditional Sampling of Normalizing Flows

no code implementations ICLR 2021 Chris Cannella, Mohammadreza Soltani, Vahid Tarokh

We introduce Projected Latent Markov Chain Monte Carlo (PL-MCMC), a technique for sampling from the high-dimensional conditional distributions learned by a normalizing flow.

Deep Cross-Subject Mapping of Neural Activity

no code implementations13 Jul 2020 Marko Angjelichinoski, Bijan Pesaran, Vahid Tarokh

In this paper, we consider the problem of cross-subject decoding, where neural activity data collected from the prefrontal cortex of a given subject (destination) is used to decode motor intentions from the neural activity of a different subject (source).

HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients

3 code implementations ICLR 2021 Enmao Diao, Jie Ding, Vahid Tarokh

In this work, we propose a new federated learning framework named HeteroFL to address heterogeneous clients equipped with very different computation and communication capabilities.

Federated Learning
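HeteroFL's core idea is that weaker clients train width-reduced slices of the global model, and the server averages each parameter over the clients whose slice covers it. A minimal sketch, with slicing and scaling details simplified relative to the paper:

```python
def extract_submodel(global_w, ratio):
    """A low-capacity client receives only the upper-left
    (ratio * rows) x (ratio * cols) block of a weight matrix."""
    r = max(1, int(len(global_w) * ratio))
    c = max(1, int(len(global_w[0]) * ratio))
    return [row[:c] for row in global_w[:r]]

def aggregate(global_w, client_ws):
    """Average each entry over the clients whose slice covers it; entries
    covered by no client keep their previous global value."""
    out = [row[:] for row in global_w]
    for i in range(len(global_w)):
        for j in range(len(global_w[0])):
            vals = [w[i][j] for w in client_ws
                    if i < len(w) and j < len(w[0])]
            if vals:
                out[i][j] = sum(vals) / len(vals)
    return out

g = [[1.0] * 4 for _ in range(4)]
# full-width client reports weights 2.0; half-width client reports 4.0
clients = [[[2.0] * 4 for _ in range(4)], [[4.0] * 2 for _ in range(2)]]
new_g = aggregate(g, clients)
```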

Task-Aware Neural Architecture Search

1 code implementation27 Oct 2020 Cat P. Le, Mohammadreza Soltani, Robert Ravier, Vahid Tarokh

The design of handcrafted neural networks requires a lot of time and resources.

Neural Architecture Search

On Statistical Efficiency in Learning

1 code implementation24 Dec 2020 Jie Ding, Enmao Diao, Jiawei Zhou, Vahid Tarokh

We propose a generalized notion of Takeuchi's information criterion and prove that the proposed method can asymptotically achieve the optimal out-sample prediction loss under reasonable assumptions.

Model Selection

Model-Free Energy Distance for Pruning DNNs

1 code implementation1 Jan 2021 Mohammadreza Soltani, Suya Wu, Yuerong Li, Jie Ding, Vahid Tarokh

We measure a new model-free information between the feature maps and the output of the network.

Modeling Extremes with d-max-decreasing Neural Networks

no code implementations17 Feb 2021 Ali Hasan, Khalil Elkhalil, Yuting Ng, Joao M. Pereira, Sina Farsiu, Jose H. Blanchet, Vahid Tarokh

We propose a novel neural network architecture that enables non-parametric calibration and generation of multivariate extreme value distributions (MEVs).

Generative Archimedean Copulas

1 code implementation22 Feb 2021 Yuting Ng, Ali Hasan, Khalil Elkhalil, Vahid Tarokh

We propose a new generative modeling technique for learning multidimensional cumulative distribution functions (CDFs) in the form of copulas.

Computational Efficiency

Improved Automated Machine Learning from Transfer Learning

1 code implementation27 Feb 2021 Cat P. Le, Mohammadreza Soltani, Robert Ravier, Vahid Tarokh

In this paper, we propose a neural architecture search framework based on a similarity measure between some baseline tasks and a target task.

BIG-bench Machine Learning Neural Architecture Search +1

Fisher Task Distance and Its Application in Neural Architecture Search

1 code implementation23 Mar 2021 Cat P. Le, Mohammadreza Soltani, Juncheng Dong, Vahid Tarokh

Next, we construct an online neural architecture search framework using the Fisher task distance, in which we have access to the past learned tasks.

Neural Architecture Search Transfer Learning

A Methodology for Exploring Deep Convolutional Features in Relation to Hand-Crafted Features with an Application to Music Audio Modeling

1 code implementation31 May 2021 Anna K. Yanchenko, Mohammadreza Soltani, Robert J. Ravier, Sayan Mukherjee, Vahid Tarokh

In this work, we instead take the perspective of relating deep features to well-studied, hand-crafted features that are meaningful for the application of interest.

Feature Importance

GAL: Gradient Assisted Learning for Decentralized Multi-Organization Collaborations

1 code implementation2 Jun 2021 Enmao Diao, Jie Ding, Vahid Tarokh

However, the underlying organizations may have little interest in sharing their local data, models, and objective functions.

Semi-Empirical Objective Functions for MCMC Proposal Optimization

no code implementations3 Jun 2021 Chris Cannella, Vahid Tarokh

Current objective functions used for training neural MCMC proposal distributions implicitly rely on architectural restrictions to yield sensible optimization results, which hampers the development of highly expressive neural MCMC proposal architectures.

Semi-Empirical Objective Functions for Neural MCMC Proposal Optimization

no code implementations29 Sep 2021 Chris Cannella, Vahid Tarokh

Current objective functions used for training neural MCMC proposal distributions implicitly rely on architectural restrictions to yield sensible optimization results, which hampers the development of highly expressive neural MCMC proposal architectures.

Task Affinity with Maximum Bipartite Matching in Few-Shot Learning

1 code implementation ICLR 2022 Cat P. Le, Juncheng Dong, Mohammadreza Soltani, Vahid Tarokh

We propose an asymmetric affinity score for representing the complexity of utilizing the knowledge of one task for learning another one.

Few-Shot Learning

Benchmarking Data-driven Surrogate Simulators for Artificial Electromagnetic Materials

1 code implementation NeurIPS 2021 Yang Deng*, Juncheng Dong*, Simiao Ren*, Omar Khatib, Mohammadreza Soltani, Vahid Tarokh, Willie Padilla, Jordan Malof

Recently, it has been shown that deep learning can be an alternative solution to infer the relationship between an AEM geometry and its properties using a (relatively) small pool of CEMS data.

Benchmarking Neural Network simulation

Characteristic Neural Ordinary Differential Equations

no code implementations25 Nov 2021 Xingzi Xu, Ali Hasan, Khalil Elkhalil, Jie Ding, Vahid Tarokh

While NODEs model the evolution of latent variables as the solution to an ODE, C-NODE models the evolution of the latent variables as the solution of a family of first-order quasi-linear partial differential equations (PDEs) along curves on which the PDEs reduce to ODEs, referred to as characteristic curves.

Computational Efficiency Density Estimation

A Physics-Informed Vector Quantized Autoencoder for Data Compression of Turbulent Flow

1 code implementation10 Jan 2022 Mohammadreza Momenifar, Enmao Diao, Vahid Tarokh, Andrew D. Bragg

In this study, we apply a physics-informed Deep Learning technique based on vector quantization to generate a discrete, low-dimensional representation of data from simulations of three-dimensional turbulent flows.

Data Compression Quantization

Multi-Agent Adversarial Attacks for Multi-Channel Communications

no code implementations22 Jan 2022 Juncheng Dong, Suya Wu, Mohammadreza Soltani, Vahid Tarokh

In particular, by modeling the adversaries as learning agents, we show that the proposed MAAS is able to successfully choose the transmitted channel(s) and their respective allocated power(s) without any prior knowledge of the sender strategy.

Reinforcement Learning (RL)

Toward Data-Driven STAP Radar

no code implementations26 Jan 2022 Shyam Venkatasubramanian, Chayut Wongkamthong, Mohammadreza Soltani, Bosung Kang, Sandeep Gogineni, Ali Pezeshki, Muralidhar Rangaswamy, Vahid Tarokh

In this regard, we will generate a large, representative adaptive radar signal processing database for training and testing, analogous in spirit to the COCO dataset for natural images.

object-detection Object Detection +1

On The Energy Statistics of Feature Maps in Pruning of Neural Networks with Skip-Connections

no code implementations26 Jan 2022 Mohammadreza Soltani, Suya Wu, Yuerong Li, Jie Ding, Vahid Tarokh

We propose a new structured pruning framework for compressing Deep Neural Networks (DNNs) with skip connections, based on measuring the statistical dependency of hidden layers and predicted outputs.

Inference and Sampling for Archimax Copulas

no code implementations27 May 2022 Yuting Ng, Ali Hasan, Vahid Tarokh

Understanding multivariate dependencies in both the bulk and the tails of a distribution is an important problem for many applications, such as ensuring algorithms are robust to observations that are infrequent but have devastating effects.

Data-Driven Target Localization Using Adaptive Radar Processing and Convolutional Neural Networks

no code implementations7 Sep 2022 Shyam Venkatasubramanian, Sandeep Gogineni, Bosung Kang, Ali Pezeshki, Muralidhar Rangaswamy, Vahid Tarokh

Leveraging the advanced functionalities of modern radio frequency (RF) modeling and simulation tools, specifically designed for adaptive radar processing applications, this paper presents a data-driven approach to improve accuracy in radar target localization post adaptive radar detection.

Few-Shot Learning regression

Transfer Learning for Individual Treatment Effect Estimation

no code implementations1 Oct 2022 Ahmed Aloui, Juncheng Dong, Cat P. Le, Vahid Tarokh

To this end, we theoretically assess the feasibility of transferring ITE knowledge and present a practical framework for efficient transfer.

Causal Inference counterfactual +1

Minimax Concave Penalty Regularized Adaptive System Identification

no code implementations7 Nov 2022 Bowen Li, Suya Wu, Erin E. Tripp, Ali Pezeshki, Vahid Tarokh

We develop a recursive least square (RLS) type algorithm with a minimax concave penalty (MCP) for adaptive identification of a sparse tap-weight vector that represents a communication channel.

Time Series Time Series Analysis

Quickest Change Detection for Unnormalized Statistical Models

no code implementations1 Feb 2023 Suya Wu, Enmao Diao, Taposh Banerjee, Jie Ding, Vahid Tarokh

This paper develops a new variant of the classical Cumulative Sum (CUSUM) algorithm for quickest change detection.

Change Detection
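The classical CUSUM recursion underlying this entry is short enough to sketch. The paper's variant replaces the usual log-likelihood-ratio score with one suited to unnormalized models; `score` below is a generic placeholder for either choice.

```python
def cusum(samples, score, threshold):
    """Classical CUSUM: W_t = max(0, W_{t-1} + s(x_t)); declare a change
    the first time W_t crosses the threshold."""
    w = 0.0
    for t, x in enumerate(samples, start=1):
        w = max(0.0, w + score(x))
        if w >= threshold:
            return t          # first alarm time (1-based)
    return None               # no alarm raised

# the mean shifts from 0 to 1 after the fifth sample; drift score x - 0.5
data = [0.0] * 5 + [1.0] * 10
alarm = cusum(data, score=lambda x: x - 0.5, threshold=2.0)
```

Before the change the score is negative and the statistic is clipped at zero; after the change it drifts upward by 0.5 per sample and crosses the threshold four samples later.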

Domain Adaptation via Rebalanced Sub-domain Alignment

no code implementations3 Feb 2023 Yiling Liu, Juncheng Dong, Ziyang Jiang, Ahmed Aloui, Keyu Li, Hunter Klein, Vahid Tarokh, David Carlson

To address this limitation, we propose a novel generalization bound that reweights source classification error by aligning source and target sub-domains.

Unsupervised Domain Adaptation

PASTA: Pessimistic Assortment Optimization

no code implementations8 Feb 2023 Juncheng Dong, Weibin Mo, Zhengling Qi, Cong Shi, Ethan X. Fang, Vahid Tarokh

The objective is to use the offline dataset to find an optimal assortment.

Pruning Deep Neural Networks from a Sparsity Perspective

2 code implementations ICLR 2023 Enmao Diao, Ganghua Wang, Jiawei Zhan, Yuhong Yang, Jie Ding, Vahid Tarokh

Our extensive experiments corroborate the hypothesis that for a generic pruning procedure, PQI decreases first when a large model is being effectively regularized and then increases when its compressibility reaches a limit that appears to correspond to the beginning of underfitting.

Network Pruning
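The PQ Index (PQI) tracked in this entry is a norm-ratio sparsity measure. The formula below is our understanding of the paper's definition (with 0 < p < q) and should be treated as a sketch:

```python
def pq_index(w, p=0.5, q=1.0):
    """Sparsity measure in [0, 1): 0 for a uniform vector, approaching 1 as
    the mass concentrates on few entries. As we understand the definition:
        PQI = 1 - d**(1/q - 1/p) * ||w||_p / ||w||_q,  0 < p < q
    """
    d = len(w)
    norm_p = sum(abs(x) ** p for x in w) ** (1 / p)
    norm_q = sum(abs(x) ** q for x in w) ** (1 / q)
    return 1.0 - d ** (1 / q - 1 / p) * norm_p / norm_q

uniform = [1.0, 1.0, 1.0, 1.0]   # perfectly dense: index 0
sparse = [4.0, 0.0, 0.0, 0.0]    # one active entry: index close to 1
```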

Subspace Perturbation Analysis for Data-Driven Radar Target Localization

no code implementations14 Mar 2023 Shyam Venkatasubramanian, Sandeep Gogineni, Bosung Kang, Ali Pezeshki, Muralidhar Rangaswamy, Vahid Tarokh

Via the use of space-time adaptive processing (STAP) techniques and convolutional neural networks, these data-driven approaches to target localization have helped benchmark the performance of neural networks for matched scenarios.

Mode-Aware Continual Learning for Conditional Generative Adversarial Networks

no code implementations19 May 2023 Cat P. Le, Juncheng Dong, Ahmed Aloui, Vahid Tarokh

To this end, we introduce a new continual learning approach for conditional generative adversarial networks by leveraging a mode-affinity score specifically designed for generative modeling.

Continual Learning

Inference and Sampling of Point Processes from Diffusion Excursions

no code implementations1 Jun 2023 Ali Hasan, Yu Chen, Yuting Ng, Mohamed Abdelghani, Anderson Schneider, Vahid Tarokh

In this framework, we relate the return times of a diffusion in a continuous path space to new arrivals of the point process.

Point Processes

Robust Reinforcement Learning through Efficient Adversarial Herding

no code implementations12 Jun 2023 Juncheng Dong, Hao-Lun Hsu, Qitong Gao, Vahid Tarokh, Miroslav Pajic

In this work, we extend the two-player game by introducing an adversarial herd, which involves a group of adversaries, in order to address ($\textit{i}$) the difficulty of the inner optimization problem, and ($\textit{ii}$) the potential over pessimism caused by the selection of a candidate adversary set that may include unlikely scenarios.

reinforcement-learning Reinforcement Learning (RL)

Causal Mediation Analysis with Multi-dimensional and Indirectly Observed Mediators

no code implementations13 Jun 2023 Ziyang Jiang, Yiling Liu, Michael H. Klein, Ahmed Aloui, Yiman Ren, Keyu Li, Vahid Tarokh, David Carlson

This is important in many scientific applications to identify the underlying mechanisms of a treatment effect.

Individual Treatment Effects in Extreme Regimes

no code implementations20 Jun 2023 Ahmed Aloui, Ali Hasan, Yuting Ng, Miroslav Pajic, Vahid Tarokh

Understanding individual treatment effects in extreme regimes is important for characterizing risks associated with different interventions.

PrACTiS: Perceiver-Attentional Copulas for Time Series

no code implementations3 Oct 2023 Cat P. Le, Chris Cannella, Ali Hasan, Yuting Ng, Vahid Tarokh

Transformers incorporating copula structures have demonstrated remarkable performance in time series prediction.

Time Series Time Series Forecasting +1

Counterfactual Data Augmentation with Contrastive Learning

no code implementations7 Nov 2023 Ahmed Aloui, Juncheng Dong, Cat P. Le, Vahid Tarokh

To address this, we introduce a model-agnostic data augmentation method that imputes the counterfactual outcomes for a selected subset of individuals.

Contrastive Learning counterfactual +2

Random Linear Projections Loss for Hyperplane-Based Optimization in Neural Networks

no code implementations21 Nov 2023 Shyam Venkatasubramanian, Ahmed Aloui, Vahid Tarokh

Advancing loss function design is pivotal for optimizing neural network training and performance.

Classification

Large Deviation Analysis of Score-based Hypothesis Testing

no code implementations27 Jan 2024 Enmao Diao, Taposh Banerjee, Vahid Tarokh

We analyze the performance of this score-based hypothesis testing procedure and derive upper bounds on the probabilities of its Type I and II errors.
