Search Results for author: Hrayr Harutyunyan

Found 11 papers, 10 papers with code

Formal limitations of sample-wise information-theoretic generalization bounds

no code implementations · 13 May 2022 · Hrayr Harutyunyan, Greg Ver Steeg, Aram Galstyan

Remarkably, PAC-Bayes, single-draw, and expected squared generalization gap bounds that depend on information in pairs of examples do exist.

Generalization Bounds

Information-theoretic generalization bounds for black-box learning algorithms

1 code implementation · NeurIPS 2021 · Hrayr Harutyunyan, Maxim Raginsky, Greg Ver Steeg, Aram Galstyan

We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm.

Generalization Bounds
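The bound above is stated in terms of information carried by predictions rather than weights. As a toy illustration only (not the paper's estimator), the quantity at its core can be approximated for discrete predictions with a plug-in mutual-information estimate between a model's predictions and a binary train-membership variable; the function name is hypothetical:

```python
import numpy as np

def plugin_mi(preds, membership):
    """Plug-in estimate (in bits) of I(prediction; membership) for
    discrete predictions and a binary membership indicator.
    Toy illustration; not the estimator used in the paper."""
    preds = np.asarray(preds)
    membership = np.asarray(membership)
    mi = 0.0
    for p in np.unique(preds):
        for u in (0, 1):
            p_joint = np.mean((preds == p) & (membership == u))
            if p_joint > 0:
                p_pred = np.mean(preds == p)
                p_u = np.mean(membership == u)
                mi += p_joint * np.log2(p_joint / (p_pred * p_u))
    return mi
```

If predictions are independent of membership the estimate is zero, and it grows as predictions reveal which examples were trained on.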

Estimating informativeness of samples with Smooth Unique Information

1 code implementation · ICLR 2021 · Hrayr Harutyunyan, Alessandro Achille, Giovanni Paolini, Orchid Majumder, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto

We define a notion of information that an individual sample provides to the training of a neural network, and we specialize it to measure both how much a sample informs the final weights and how much it informs the function computed by the weights.

Informativeness
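The abstract above measures how much an individual sample informs the learned weights. A crude stand-in for that idea (explicitly not the paper's smooth unique information) is a leave-one-out proxy on a simple model: refit without sample i and measure how far the weights move. All names here are illustrative:

```python
import numpy as np

def loo_informativeness(X, y, ridge=1e-3):
    """Naive leave-one-out proxy for per-sample informativeness:
    how much ridge-regression weights move when sample i is removed.
    Illustrative only; not the paper's smooth unique information."""
    def fit(Xs, ys):
        d = Xs.shape[1]
        return np.linalg.solve(Xs.T @ Xs + ridge * np.eye(d), Xs.T @ ys)
    w_full = fit(X, y)
    scores = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i  # drop sample i
        scores.append(np.linalg.norm(w_full - fit(X[mask], y[mask])))
    return np.array(scores)
```

Redundant (duplicated) samples score low, while a sample that alone determines part of the fit scores high, matching the intuition the abstract describes.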

Efficient Covariance Estimation from Temporal Data

2 code implementations · 30 May 2019 · Hrayr Harutyunyan, Daniel Moyer, Hrant Khachatrian, Greg Ver Steeg, Aram Galstyan

Estimating the covariance structure of multivariate time series is a fundamental problem with a wide range of real-world applications -- from financial modeling to fMRI analysis.

Time Series
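For context on the problem the abstract poses, the simplest baseline is a sample covariance computed in sliding windows of the series; the paper's estimator improves on this by exploiting latent structure, which this sketch does not attempt:

```python
import numpy as np

def windowed_covariances(ts, window, step):
    """Baseline: sample covariance in sliding windows of a
    multivariate time series. ts is a (T, p) array; returns a
    (n_windows, p, p) stack of covariance matrices."""
    T, p = ts.shape
    covs = []
    for start in range(0, T - window + 1, step):
        seg = ts[start:start + window]
        covs.append(np.cov(seg, rowvar=False))  # columns are variables
    return np.stack(covs)
```

With short windows this baseline is noisy when p is large relative to the window length, which is exactly the regime where structured estimators pay off.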

MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing

3 code implementations · 30 Apr 2019 · Sami Abu-El-Haija, Bryan Perozzi, Amol Kapoor, Nazanin Alipourfard, Kristina Lerman, Hrayr Harutyunyan, Greg Ver Steeg, Aram Galstyan

Existing popular methods for semi-supervised learning with Graph Neural Networks (such as the Graph Convolutional Network) provably cannot learn a general class of neighborhood mixing relationships.

Node Classification
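The neighborhood mixing the abstract refers to can be sketched in NumPy: propagate features with several powers of a normalized adjacency matrix, transform each with its own weight matrix, and concatenate. This is a rough sketch of the MixHop-style layer under stated assumptions; the actual model's sparsification and per-power widths differ:

```python
import numpy as np

def mixhop_layer(A_hat, H, weights, powers=(0, 1, 2)):
    """MixHop-style propagation sketch: for each power j, propagate
    features j hops with normalized adjacency A_hat, apply a
    per-power weight matrix, and concatenate the results.
    weights: dict mapping power j -> (d_in, d_out_j) matrix."""
    outs = []
    P = H  # A_hat^0 @ H
    for j in range(max(powers) + 1):
        if j > 0:
            P = A_hat @ P  # raise the adjacency power incrementally
        if j in powers:
            outs.append(np.maximum(P @ weights[j], 0))  # ReLU
    return np.concatenate(outs, axis=1)
```

Including power 0 keeps each node's own features, while higher powers mix information from multi-hop neighborhoods that a single-hop GCN layer cannot express.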

Disentangled Representations via Synergy Minimization

1 code implementation · 10 Oct 2017 · Greg Ver Steeg, Rob Brekelmans, Hrayr Harutyunyan, Aram Galstyan

Scientists often seek simplified representations of complex systems to facilitate prediction and understanding.

Fast structure learning with modular regularization

3 code implementations · NeurIPS 2019 · Greg Ver Steeg, Hrayr Harutyunyan, Daniel Moyer, Aram Galstyan

We also use our approach for estimating covariance structure for a number of real-world datasets and show that it consistently outperforms state-of-the-art estimators at a fraction of the computational cost.
