Search Results for author: Jose Gallego-Posada

Found 8 papers, 6 papers with code

Balancing Act: Constraining Disparate Impact in Sparse Models

2 code implementations • 31 Oct 2023 • Meraj Hashemizadeh, Juan Ramirez, Rohan Sukumaran, Golnoosh Farnadi, Simon Lacoste-Julien, Jose Gallego-Posada

Model pruning is a popular approach to enable the deployment of large deep learning models on edge devices with restricted computational or storage capacities.
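For context, here is a minimal sketch of the kind of pruning being constrained in this work: global magnitude pruning, which zeroes out the smallest-magnitude weights. This is a generic illustration only, not the paper's method (which adds disparate-impact constraints on top of sparsity); the function name and parameters are hypothetical.

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float = 0.9) -> torch.Tensor:
    """Illustrative magnitude pruning: zero out the smallest-magnitude
    entries so that roughly `sparsity` fraction of the weights are 0."""
    k = int(sparsity * weight.numel())
    # k-th smallest absolute value serves as the pruning threshold.
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)
```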

A Distributed Data-Parallel PyTorch Implementation of the Distributed Shampoo Optimizer for Training Neural Networks At-Scale

1 code implementation • 12 Sep 2023 • Hao-Jun Michael Shi, Tsung-Hsien Lee, Shintaro Iwasaki, Jose Gallego-Posada, Zhijing Li, Kaushik Rangadurai, Dheevatsa Mudigere, Michael Rabbat

It constructs a block-diagonal preconditioner where each block consists of a coarse Kronecker product approximation to full-matrix AdaGrad for each parameter of the neural network.

Stochastic Optimization
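A minimal single-block sketch of the preconditioner described above, assuming a 2-D parameter: left and right Kronecker factors accumulate gradient statistics, and the update preconditions the gradient with their inverse fourth roots. This illustrates the idea, not the library's actual distributed interface; the function names are hypothetical.

```python
import numpy as np

def inv_fourth_root(mat: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Inverse fourth root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(mat + eps * np.eye(mat.shape[0]))
    return vecs @ np.diag(vals ** -0.25) @ vecs.T

def shampoo_step(W, G, L, R, lr=0.1):
    """One full-matrix Shampoo-style update for a single 2-D parameter W.

    L and R accumulate the left/right Kronecker factors of the gradient
    statistics; L^{-1/4} G R^{-1/4} is a Kronecker-product approximation
    to the full-matrix AdaGrad preconditioned gradient.
    """
    L += G @ G.T
    R += G.T @ G
    W -= lr * inv_fourth_root(L) @ G @ inv_fourth_root(R)
    return W, L, R
```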

L$_0$onie: Compressing COINs with L$_0$-constraints

1 code implementation • 8 Jul 2022 • Juan Ramirez, Jose Gallego-Posada

Advances in Implicit Neural Representations (INR) have motivated research on domain-agnostic compression techniques.

Equivariant Mesh Attention Networks

1 code implementation • 21 May 2022 • Sourya Basu, Jose Gallego-Posada, Francesco Viganò, James Rowbottom, Taco Cohen

Equivariance to symmetries has proven to be a powerful inductive bias in deep learning research.

Inductive Bias

Flexible Learning of Sparse Neural Networks via Constrained $L_0$ Regularization

no code implementations • NeurIPS Workshop LatinX_in_AI 2021 • Jose Gallego-Posada, Juan Ramirez De Los Rios, Akram Erraqabi

We propose to approach the problem of learning $L_0$-sparse networks using a constrained formulation of the optimization problem.
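A hedged sketch of what such a constrained formulation can look like: minimize the training loss subject to a budget on the expected $L_0$ norm, handled through a Lagrangian whose multiplier is updated by gradient ascent. The gate parameterization and names below are illustrative assumptions, not the paper's exact construction.

```python
import torch

def l0_lagrangian(task_loss, gate_logits, budget, lmbda):
    """Sketch of the constrained objective: min loss s.t. E[||theta||_0] <= budget.

    Each weight carries a stochastic gate with keep-probability
    sigmoid(gate_logits); the sum of keep-probabilities is the expected
    L0 norm of the network.
    """
    expected_l0 = torch.sigmoid(gate_logits).sum()
    return task_loss + lmbda * (expected_l0 - budget), expected_l0

# Illustrative alternating scheme: gradient descent on weights and gates,
# projected gradient ascent on the non-negative multiplier:
#   lmbda = max(0.0, lmbda + eta * (expected_l0 - budget))
```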

Simplicial Regularization

1 code implementation • ICLR Workshop GTRL 2021 • Jose Gallego-Posada, Patrick Forré

Inspired by the fuzzy topological representation of a dataset employed in UMAP (McInnes et al., 2018), we propose a regularization principle for supervised learning based on the preservation of the simplicial complex structure of the data.

Data Augmentation • Dimensionality Reduction
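One way to read "preservation of the simplicial complex structure" is as a mixup-style penalty over k-point simplices: the prediction at a convex combination of inputs should match the same convex combination of their labels. The sketch below is an assumption-laden illustration of that reading, not the paper's exact regularizer.

```python
import torch

def simplicial_penalty(model, x, y_onehot, k: int = 3):
    """Hypothetical sketch: sample barycentric weights over a k-point
    simplex of training examples and penalize the gap between the model's
    prediction at the mixed input and the mixed labels."""
    idx = torch.randint(0, x.shape[0], (k,))
    w = torch.distributions.Dirichlet(torch.ones(k)).sample()
    # Convex combination of k inputs and of their one-hot labels.
    x_mix = (w[:, None] * x[idx].flatten(1)).sum(0).view_as(x[0])
    y_mix = (w[:, None] * y_onehot[idx]).sum(0)
    pred = torch.softmax(model(x_mix.unsqueeze(0)), dim=-1)[0]
    return torch.nn.functional.mse_loss(pred, y_mix)
```

With k = 2 this reduces to a mixup-like consistency term; larger k probes higher-order simplices of the data.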

GANGs: Generative Adversarial Network Games

no code implementations • 2 Dec 2017 • Frans A. Oliehoek, Rahul Savani, Jose Gallego-Posada, Elise van der Pol, Edwin D. de Jong, Roderich Gross

We introduce Generative Adversarial Network Games (GANGs), which explicitly model a finite zero-sum game between a generator ($G$) and classifier ($C$) that use mixed strategies.

Generative Adversarial Network
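For intuition about mixed strategies in a finite zero-sum game, they can be approximated with classical fictitious play over a payoff matrix; the sketch below illustrates that primitive only, not the paper's GANG solver.

```python
import numpy as np

def fictitious_play(payoff: np.ndarray, iters: int = 1000):
    """Approximate mixed-strategy equilibrium of a finite zero-sum game.

    payoff[i, j] is the row player's payoff for pure strategies (i, j).
    Each player best-responds to the empirical mixture of the opponent's
    past play; in zero-sum games the empirical mixtures converge to an
    equilibrium."""
    m, n = payoff.shape
    row_counts, col_counts = np.zeros(m), np.zeros(n)
    row_counts[0] = col_counts[0] = 1
    for _ in range(iters):
        row_counts[np.argmax(payoff @ col_counts)] += 1  # row best response
        col_counts[np.argmin(row_counts @ payoff)] += 1  # column best response
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()
```

For example, on matching pennies, `payoff = np.array([[1., -1.], [-1., 1.]])`, both returned mixtures approach (0.5, 0.5).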
