Search Results for author: Luca Zappella

Found 13 papers, 4 papers with code

Robust multimodal models have outlier features and encode more concepts

no code implementations · 19 Oct 2023 · Jonathan Crabbé, Pau Rodríguez, Vaishaal Shankar, Luca Zappella, Arno Blaas

In this work, we bridge this gap by probing the representation spaces of 12 robust multimodal models with various backbones (ResNets and ViTs) and pretraining sets (OpenAI, LAION-400M, LAION-2B, YFCC15M, CC12M and DataComp).
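The "outlier features" in the title are commonly operationalized as representation dimensions whose typical magnitude sits far above the rest. A minimal sketch of that criterion in Python (the threshold and function name are illustrative, not the paper's exact definition):

import numpy as np

def outlier_features(acts, k=5.0):
    # acts: (n_samples, n_features) activations from a model's representation space.
    m = np.abs(acts).mean(axis=0)          # typical magnitude per feature
    z = (m - m.mean()) / m.std()           # standardize across features
    return np.where(z > k)[0]              # dimensions far above the rest

acts = np.random.randn(1000, 512)
acts[:, 42] *= 50                          # plant one outlier dimension
print(outlier_features(acts))              # -> [42]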

Spatial LibriSpeech: An Augmented Dataset for Spatial Audio Learning

1 code implementation · 18 Aug 2023 · Miguel Sarabia, Elena Menyaylenko, Alessandro Toso, Skyler Seto, Zakaria Aldeneh, Shadi Pirhosseinloo, Luca Zappella, Barry-John Theobald, Nicholas Apostoloff, Jonathan Sheaffer

We present Spatial LibriSpeech, a spatial audio dataset with over 650 hours of 19-channel audio, first-order ambisonics, and optional distractor noise.

Position

The Role of Entropy and Reconstruction in Multi-View Self-Supervised Learning

1 code implementation · 20 Jul 2023 · Borja Rodríguez-Gálvez, Arno Blaas, Pau Rodríguez, Adam Goliński, Xavier Suau, Jason Ramapuram, Dan Busbridge, Luca Zappella

We consider a different lower bound on the MI consisting of an entropy and a reconstruction term (ER), and analyze the main MVSSL families through its lens.

Self-Supervised Learning
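The entropy-and-reconstruction (ER) bound the abstract refers to follows from the standard Barber-Agakov variational argument. A sketch, with $Z_1, Z_2$ the representations of two views and $q$ any variational reconstruction distribution:

$$I(Z_1; Z_2) = H(Z_1) - H(Z_1 \mid Z_2) \geq \underbrace{H(Z_1)}_{\text{entropy}} + \underbrace{\mathbb{E}\left[\log q_{Z_1 \mid Z_2}(Z_1 \mid Z_2)\right]}_{\text{reconstruction}},$$

since $H(Z_1 \mid Z_2) \leq -\mathbb{E}[\log q_{Z_1 \mid Z_2}(Z_1 \mid Z_2)]$ for any choice of $q$.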

DUET: 2D Structured and Approximately Equivariant Representations

1 code implementation · 28 Jun 2023 · Xavier Suau, Federico Danieli, T. Anderson Keller, Arno Blaas, Chen Huang, Jason Ramapuram, Dan Busbridge, Luca Zappella

We propose 2D strUctured and EquivarianT representations (coined DUET), which are 2D representations organized in a matrix structure, and equivariant with respect to transformations acting on the input data.

Self-Supervised Learning · Transfer Learning
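As a toy illustration of what a matrix-structured, equivariant representation means (the encoder and transformation here are contrived so the property holds exactly; DUET's learned setup is only approximately equivariant):

import numpy as np

x = np.arange(12.0)                     # toy input signal
encode = lambda v: v.reshape(3, 4)      # contrived encoder with a 2D (matrix) output

z_of_t = encode(np.roll(x, 4))          # transform the input, then encode
t_of_z = np.roll(encode(x), 1, axis=0)  # encode, then act on one axis of the matrix

# Equivariance: the input transformation acts predictably on the representation.
assert np.allclose(z_of_t, t_of_z)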

Designing Data: Proactive Data Collection and Iteration for Machine Learning

no code implementations · 24 Jan 2023 · Aspen Hopkins, Fred Hohman, Luca Zappella, Xavier Suau Cuadros, Dominik Moritz

Lack of diversity in data collection has caused significant failures in machine learning (ML) applications.

Density Estimation

Homomorphic Self-Supervised Learning

no code implementations · 15 Nov 2022 · T. Anderson Keller, Xavier Suau, Luca Zappella

In this work, we observe that many existing self-supervised learning algorithms can be both unified and generalized when seen through the lens of equivariant representations.

Self-Supervised Learning
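The equivariance lens invoked here is the standard one: for a group element $g$ acting on inputs via $T_g$, an encoder $f$ is equivariant when there is a corresponding action $\Gamma_g$ on representations such that

$$f(T_g x) = \Gamma_g f(x) \quad \text{for all } g \in G,$$

with invariance (what many contrastive objectives encourage) as the special case $\Gamma_g = \mathrm{id}$.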

Fair SA: Sensitivity Analysis for Fairness in Face Recognition

no code implementations · 8 Feb 2022 · Aparna R. Joshi, Xavier Suau, Nivedha Sivakumar, Luca Zappella, Nicholas Apostoloff

One such high-impact domain is face recognition, where real-world applications involve images affected by various degradations, such as motion blur or high exposure.

Face Recognition · Fairness

Challenges of Adversarial Image Augmentations

no code implementations · NeurIPS Workshop ICBINB 2021 · Arno Blaas, Xavier Suau, Jason Ramapuram, Nicholas Apostoloff, Luca Zappella

Image augmentations applied during training are crucial for the generalization performance of image classifiers.

Self-conditioning pre-trained language models

1 code implementation · 30 Sep 2021 · Xavier Suau, Luca Zappella, Nicholas Apostoloff

We compare our method with FUDGE and PPLM-BoW, and show that our approach is able to achieve gender parity at a lower perplexity.

Text Generation
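Self-conditioning steers generation by intervening on "expert units" at inference time. A minimal PyTorch-style sketch, where the layer path, unit indices, and target activation are placeholders rather than the paper's exact intervention:

import torch

expert_idx = torch.tensor([12, 87, 301])   # hypothetical expert-unit indices
target_val = 2.0                           # hypothetical activation to impose

def clamp_experts(module, inputs, output):
    # Fix the expert units' activations so the concept stays "on" while decoding.
    output[..., expert_idx] = target_val
    return output

# Attach to a chosen layer of a pretrained LM (path is illustrative), generate,
# then remove the hook:
#   handle = model.transformer.h[6].mlp.register_forward_hook(clamp_experts)
#   model.generate(...)
#   handle.remove()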

Finding Experts in Transformer Models

no code implementations · 15 May 2020 · Xavier Suau, Luca Zappella, Nicholas Apostoloff

We show that expert units are important in several ways: (1) The presence of expert units is correlated ($r^2=0.833$) with the generalization power of the transformer model (TM), which allows ranking TMs without requiring fine-tuning on suites of downstream tasks.
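A natural way to score how much a single unit behaves as an "expert" for a concept is the average precision of its activation as a binary detector of that concept; a sketch, with illustrative data and names:

import numpy as np
from sklearn.metrics import average_precision_score

def expertness(unit_acts, concept_labels):
    # Average precision of one unit's activations as a classifier for the
    # concept: higher means the unit is more of an expert.
    return average_precision_score(concept_labels, unit_acts)

labels = np.array([0, 0, 1, 1, 0, 1])            # toy concept labels
acts = np.array([0.1, 0.2, 0.9, 0.8, 0.0, 0.7])  # toy unit activations
print(expertness(acts, labels))                  # perfect ranking -> 1.0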

Filter Distillation for Network Compression

no code implementations · ICLR 2019 · Xavier Suau, Luca Zappella, Nicholas Apostoloff

We propose two algorithms: the first lets users target compression to a specific network property, such as the number of trainable variables (footprint), and produces a compressed model that satisfies the requested property while preserving the maximum amount of spectral energy in the responses of each layer; the second is a parameter-free heuristic that selects the compression used at each layer by trying to mimic an ideal set of uncorrelated responses.

Domain Adaptation · Neural Network Compression +1
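One plausible reading of "preserving spectral energy in the responses of each layer" is PCA on a layer's responses, keeping just enough principal components to retain a requested energy fraction; a sketch under that assumption (function name and threshold are illustrative):

import numpy as np

def num_filters_to_keep(responses, energy=0.95):
    # responses: (n_samples, n_filters) activations of one layer.
    r = responses - responses.mean(axis=0)
    eigvals = np.linalg.eigvalsh(r.T @ r)[::-1]   # response spectrum, descending
    cum = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(cum, energy)) + 1  # components retaining `energy`

resp = np.random.randn(1000, 1) @ np.random.randn(1, 64)  # highly correlated responses
resp += 0.01 * np.random.randn(1000, 64)                  # plus small noise
print(num_filters_to_keep(resp))                          # ~1: compresses aggressively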
