Search Results for author: Simone Bombari

Found 6 papers, 3 papers with code

Towards Understanding the Word Sensitivity of Attention Layers: A Study via Random Features

1 code implementation • 5 Feb 2024 • Simone Bombari, Marco Mondelli

Understanding the reasons behind the exceptional success of transformers requires a better analysis of why attention layers are suitable for NLP tasks.

Generalization Bounds • Sentence

How Spurious Features Are Memorized: Precise Analysis for Random and NTK Features

1 code implementation • 20 May 2023 • Simone Bombari, Marco Mondelli

In this paper, we consider spurious features that are uncorrelated with the learning task, and we provide a precise characterization of how they are memorized via two separate terms: (i) the stability of the model with respect to individual training samples, and (ii) the feature alignment between the spurious feature and the full sample.

Learning Theory • Memorization
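The snippet above decomposes memorization of a spurious feature into model stability and feature alignment. A minimal toy sketch of the alignment term, using a random-features map (the setup, names, and cosine-similarity measure here are illustrative assumptions, not the paper's exact definitions):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100
x = rng.normal(size=d)            # a training sample
spurious = rng.normal(size=d)     # spurious direction, uncorrelated with the task
spurious /= np.linalg.norm(spurious)

# Random-features map phi(v) = ReLU(W v) with Gaussian weights W
k = 500
W = rng.normal(size=(k, d)) / np.sqrt(d)
phi = lambda v: np.maximum(W @ v, 0.0)

# Feature alignment: cosine similarity between the feature map of the
# spurious direction and the feature map of the full (corrupted) sample
a, b = phi(spurious), phi(x + spurious)
alignment = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(float(alignment))
```

With ReLU features both maps are non-negative, so this alignment lies in [0, 1]; the paper's analysis characterizes how such a quantity, together with stability, governs memorization.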

Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels

1 code implementation • 3 Feb 2023 • Simone Bombari, Shayan Kiyani, Marco Mondelli

However, this "universal" law provides only a necessary condition for robustness, and it is unable to discriminate between models.

Memorization and Optimization in Deep Neural Networks with Minimum Over-parameterization

no code implementations • 20 May 2022 • Simone Bombari, Mohammad Hossein Amani, Marco Mondelli

The Neural Tangent Kernel (NTK) has emerged as a powerful tool to provide memorization, optimization and generalization guarantees in deep neural networks.

Memorization • Open-Ended Question Answering

Sharp asymptotics on the compression of two-layer neural networks

no code implementations • 17 May 2022 • Mohammad Hossein Amani, Simone Bombari, Marco Mondelli, Rattana Pukdee, Stefano Rini

In this paper, we study the compression of a target two-layer neural network with N nodes into a compressed network with M < N nodes.

Vocal Bursts Valence Prediction

Towards Differential Relational Privacy and Its Use in Question Answering

no code implementations • 30 Mar 2022 • Simone Bombari, Alessandro Achille, Zijian Wang, Yu-Xiang Wang, Yusheng Xie, Kunwar Yashraj Singh, Srikar Appalaraju, Vijay Mahadevan, Stefano Soatto

While bounding general memorization can have detrimental effects on the performance of a trained model, bounding RM (relational memorization) does not prevent effective learning.

Memorization • Question Answering
