Search Results for author: Haitz Sáez de Ocáriz Borde

Found 12 papers, 2 papers with code

Asymmetry in Low-Rank Adapters of Foundation Models

1 code implementation • 26 Feb 2024 • Jiacheng Zhu, Kristjan Greenewald, Kimia Nadjahi, Haitz Sáez de Ocáriz Borde, Rickard Brüel Gabrielsson, Leshem Choshen, Marzyeh Ghassemi, Mikhail Yurochkin, Justin Solomon

Specifically, when updating the parameter matrices of a neural network by adding a product $BA$, we observe that the $B$ and $A$ matrices have distinct functions: $A$ extracts features from the input, while $B$ uses these features to create the desired output.
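The distinct roles of $A$ and $B$ can be sketched in a few lines (a toy numpy illustration; the dimensions, rank, and zero-initialization of $B$ are common LoRA conventions assumed here, not details taken from this abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 32, 4               # rank r << min(d_out, d_in)
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((r, d_in))       # "A" extracts r features from the input
B = np.zeros((d_out, r))                 # "B" maps features to the output (zero init)

x = rng.standard_normal(d_in)

# Adapted forward pass: W x + B (A x)
features = A @ x                         # A: feature extraction
update = B @ features                    # B: produce the desired output shift
y = W @ x + update

# With B initialized to zero, the adapter starts as the identity update
assert np.allclose(y, W @ x)
```

During fine-tuning only $A$ and $B$ would be trained, so the adapted model departs from $W x$ only through the low-rank product $BA$.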

Breaking the Curse of Dimensionality with Distributed Neural Computation

no code implementations • 5 Feb 2024 • Haitz Sáez de Ocáriz Borde, Takashi Furuya, Anastasis Kratsios, Marc T. Law

This improves the optimal bounds for traditional non-distributed deep learning models, namely ReLU MLPs, which need $\mathcal{O}(\varepsilon^{-n/2})$ parameters to achieve the same accuracy.

AMES: A Differentiable Embedding Space Selection Framework for Latent Graph Inference

no code implementations • 20 Nov 2023 • Yuan Lu, Haitz Sáez de Ocáriz Borde, Pietro Liò

More importantly, our interpretability framework provides a general approach for quantitatively comparing embedding spaces across different tasks based on their contributions, a dimension that has been overlooked in previous literature on latent graph inference.

Neural Snowflakes: Universal Latent Graph Inference via Trainable Latent Geometries

no code implementations • 23 Oct 2023 • Haitz Sáez de Ocáriz Borde, Anastasis Kratsios

Furthermore, when the latent graph can be represented in the feature space of a sufficiently regular kernel, we show that the combined neural snowflake and MLP encoder do not succumb to the curse of dimensionality by using only a low-degree polynomial number of parameters in the number of nodes.

Inductive Bias • Metric Learning

Closed-Form Diffusion Models

no code implementations • 19 Oct 2023 • Christopher Scarvelis, Haitz Sáez de Ocáriz Borde, Justin Solomon

In this work, we instead explicitly smooth the closed-form score to obtain an SGM that generates novel samples without training.
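For intuition, the score of a Gaussian-smoothed empirical distribution has a well-known closed form: $\nabla \log p_\sigma(x) = \sum_i w_i(x)\,(x_i - x)/\sigma^2$ with softmax weights $w_i$. The sketch below (numpy; the data, smoothing scale $\sigma$, and function name are illustrative assumptions, not the authors' exact construction) evaluates it directly, with no training:

```python
import numpy as np

def closed_form_score(x, data, sigma):
    """Score grad log p_sigma(x) for the Gaussian-smoothed empirical
    distribution p_sigma = (1/N) sum_i N(x; x_i, sigma^2 I)."""
    diffs = data - x                                    # (N, d): x_i - x
    logits = -np.sum(diffs**2, axis=1) / (2 * sigma**2)
    w = np.exp(logits - logits.max())
    w /= w.sum()                                        # softmax responsibilities
    return (w[:, None] * diffs).sum(axis=0) / sigma**2

rng = np.random.default_rng(0)
data = rng.standard_normal((100, 2))                    # hypothetical samples
score = closed_form_score(np.zeros(2), data, sigma=0.5)
```

The resulting vector points toward high-density regions of the smoothed data distribution; the abstract's proposal is to smooth this closed-form score itself so that sampling produces novel points rather than memorized data.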

Capacity Bounds for Hyperbolic Neural Network Representations of Latent Tree Structures

no code implementations • 18 Aug 2023 • Anastasis Kratsios, Ruiyang Hong, Haitz Sáez de Ocáriz Borde

We find that the network complexity of HNNs implementing the graph representation is independent of the representation fidelity/distortion.

Projections of Model Spaces for Latent Graph Inference

no code implementations • 21 Mar 2023 • Haitz Sáez de Ocáriz Borde, Álvaro Arroyo, Ingmar Posner

Graph Neural Networks leverage the connectivity structure of graphs as an inductive bias.

Inductive Bias
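The connectivity-as-inductive-bias idea reduces, in its simplest form, to aggregating features over a node's neighbours. A toy sketch (numpy; the graph and features are arbitrary assumptions, not from the paper):

```python
import numpy as np

# Adjacency matrix of a 3-node path graph: 0 - 1 - 2
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
deg = A.sum(axis=1, keepdims=True)

X = np.array([[1.0], [0.0], [0.0]])   # one scalar feature per node

# One message-passing step: each node averages its neighbours' features,
# so information flows only along existing edges
X_next = (A / deg) @ X                # -> [[0.0], [0.5], [0.0]]
```

Only node 1, the sole neighbour of node 0, receives any signal after one step; the graph structure, not the feature values alone, dictates how information propagates.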

Latent Graph Inference using Product Manifolds

no code implementations • 26 Nov 2022 • Haitz Sáez de Ocáriz Borde, Anees Kazi, Federico Barbero, Pietro Liò

The original dDGM architecture used the Euclidean plane to encode latent features based on which the latent graphs were generated.

Graph Learning
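Latent graph inference over a Euclidean latent space can be sketched as a kNN construction on latent coordinates (a toy numpy illustration; the latent features, dimensionality, and choice of k are arbitrary assumptions, not the dDGM specifics):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((5, 2))       # latent features in the Euclidean plane
k = 2

# Pairwise Euclidean distances between latent points
D = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
np.fill_diagonal(D, np.inf)           # exclude self-loops

# Infer a latent graph: connect each node to its k nearest latent neighbours
neighbours = np.argsort(D, axis=1)[:, :k]
A = np.zeros((5, 5))
for i, nbrs in enumerate(neighbours):
    A[i, nbrs] = 1.0                  # directed kNN edges
```

Swapping the Euclidean distance here for a distance on a product manifold is the kind of generalization the abstract describes.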

Graph Neural Network Expressivity and Meta-Learning for Molecular Property Regression

no code implementations • 24 Sep 2022 • Haitz Sáez de Ocáriz Borde, Federico Barbero

We demonstrate the applicability of model-agnostic algorithms for meta-learning, specifically Reptile, to GNN models in molecular regression tasks.

Meta-Learning • regression
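The Reptile outer update is simply a step from the current meta-parameters toward task-adapted ones. A minimal sketch (numpy; the quadratic toy task stands in for the molecular regression problem and is purely an assumption):

```python
import numpy as np

def sgd_on_task(theta, target, lr=0.1, steps=10):
    """Inner loop: minimize ||theta - target||^2 by gradient descent."""
    for _ in range(steps):
        theta = theta - lr * 2 * (theta - target)
    return theta

rng = np.random.default_rng(0)
theta = np.zeros(3)                          # meta-parameters
task_targets = rng.standard_normal((20, 3))  # each task pulls theta elsewhere
eps = 0.5                                    # Reptile meta step size

for target in task_targets:
    adapted = sgd_on_task(theta.copy(), target)
    theta = theta + eps * (adapted - theta)  # Reptile: move toward adapted weights
```

Because the update needs only the adapted weights (no second-order gradients), it is model-agnostic and applies unchanged when the inner loop trains a GNN on a molecular regression task.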

Sheaf Neural Networks with Connection Laplacians

1 code implementation • 17 Jun 2022 • Federico Barbero, Cristian Bodnar, Haitz Sáez de Ocáriz Borde, Michael Bronstein, Petar Veličković, Pietro Liò

A Sheaf Neural Network (SNN) is a type of Graph Neural Network (GNN) that operates on a sheaf, an object that equips a graph with vector spaces over its nodes and edges and linear maps between these spaces.

Node Classification
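The sheaf structure can be made concrete for a single edge: restriction maps transport node features into the edge's vector space (the edge stalk), and the sheaf Laplacian measures their disagreement there. A toy sketch (numpy; the stalk dimension and the maps are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2                                  # stalk dimension

# Restriction maps for one edge e = (u, v): node stalk -> edge stalk
F_u = rng.standard_normal((d, d))
F_v = rng.standard_normal((d, d))

# Coboundary: delta([x_u; x_v])_e = F_v x_v - F_u x_u
delta = np.hstack([-F_u, F_v])         # (d, 2d), acting on stacked node features

# Sheaf Laplacian L = delta^T delta
# (blocks: F_u^T F_u, -F_u^T F_v, -F_v^T F_u, F_v^T F_v)
L = delta.T @ delta

# L vanishes exactly on signals that agree after transport into the edge stalk
x_u = rng.standard_normal(d)
x_v = np.linalg.solve(F_v, F_u @ x_u)  # choose x_v so F_v x_v = F_u x_u
x = np.concatenate([x_u, x_v])
assert np.allclose(L @ x, 0)
```

With identity restriction maps this reduces to the ordinary graph Laplacian; the paper's proposal concerns choosing these maps from connection Laplacians rather than learning them.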

Latent Space based Memory Replay for Continual Learning in Artificial Neural Networks

no code implementations • 26 Nov 2021 • Haitz Sáez de Ocáriz Borde

Memory replay may be key to learning in biological brains, which manage to learn new tasks continually without catastrophically interfering with previous knowledge.

Continual Learning
