1 code implementation • 12 Dec 2023 • Pratyusha Das, Sarath Shekkizhar, Antonio Ortega
In this paper, we first propose to use a local Dataset Graph (DS-Graph), obtained from the feature representation of the input data at each layer, to study the layer-wise embedding geometry of the spatio-temporal graph convolutional network (STGCN).
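A minimal sketch of the general idea, under stated assumptions: the paper's DS-Graph construction itself is not reproduced here, so a plain k-nearest-neighbor graph built from one layer's activation matrix stands in for it, with illustrative names (`knn_graph`, `layer_feats`) and an arbitrary `k`.

```python
import numpy as np

def knn_graph(features, k=3):
    """Build a symmetric k-NN adjacency matrix from an (n, d) feature matrix.

    Generic stand-in for the paper's per-layer Dataset Graph (DS-Graph);
    the actual construction used in the paper is not shown here.
    """
    n = features.shape[0]
    # Pairwise squared Euclidean distances between all points.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)  # exclude self-loops
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        # Connect each point to its k closest neighbors.
        adj[i, np.argsort(d2[i])[:k]] = True
    return adj | adj.T  # symmetrize the directed k-NN relation

# One graph per layer: apply to each layer's activation matrix.
layer_feats = np.random.default_rng(0).normal(size=(10, 5))
A = knn_graph(layer_feats, k=3)
```

Building one such graph per layer lets the geometry of the embeddings be compared across depth.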
1 code implementation • 4 Dec 2023 • Randall Balestriero, Romain Cosentino, Sarath Shekkizhar
We obtain in closed form (i) the intrinsic dimension in which the Multi-Head Attention embeddings are constrained to exist and (ii) the partition and per-region affine mappings of the per-layer feedforward networks.
no code implementations • 31 Oct 2022 • Carlos Hurtado, Sarath Shekkizhar, Javier Ruiz-Hidalgo, Antonio Ortega
Modern machine learning systems are increasingly trained on large amounts of data embedded in high-dimensional spaces.
no code implementations • 18 Sep 2022 • Romain Cosentino, Sarath Shekkizhar, Mahdi Soltanolkotabi, Salman Avestimehr, Antonio Ortega
Self-supervised learning (SSL) has emerged as a desirable paradigm in computer vision due to the inability of supervised models to learn representations that can generalize in domains with limited labels.
no code implementations • 18 Oct 2021 • David Bonet, Antonio Ortega, Javier Ruiz-Hidalgo, Sarath Shekkizhar
Feature spaces in the deep layers of convolutional neural networks (CNNs) are often very high-dimensional and difficult to interpret.
no code implementations • 15 Oct 2021 • Sarath Shekkizhar, Antonio Ortega
An increasing number of systems are designed by gathering large amounts of data and optimizing the system parameters directly on that data.
1 code implementation • 27 Jul 2021 • David Bonet, Antonio Ortega, Javier Ruiz-Hidalgo, Sarath Shekkizhar
Motivated by our observations, we use CW-DeepNNK to propose a novel early stopping criterion that (i) does not require a validation set, (ii) is based on a task performance metric, and (iii) allows stopping to be reached at different points for each channel.
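The three properties above can be illustrated with a hedged skeleton: the actual CW-DeepNNK criterion (based on NNK label interpolation) is not reproduced, so a generic patience rule over a per-channel task metric stands in for it; `metric_history`, `patience`, and the example values are all illustrative.

```python
import numpy as np

def channelwise_early_stop(metric_history, patience=2):
    """Stopping epoch per channel: the first epoch at which a channel's
    task metric (higher is better) has failed to improve for `patience`
    consecutive epochs. Illustrative skeleton only; CW-DeepNNK's actual
    metric (NNK label interpolation) is not shown. No validation set is
    consulted -- the metric is computed on the training channels.

    metric_history: (epochs, channels) array of per-channel metrics.
    """
    epochs, channels = metric_history.shape
    stops = np.full(channels, epochs - 1)  # default: run to the end
    for c in range(channels):
        best, wait = -np.inf, 0
        for e in range(epochs):
            if metric_history[e, c] > best:
                best, wait = metric_history[e, c], 0
            else:
                wait += 1
                if wait >= patience:
                    stops[c] = e  # this channel stops early
                    break
    return stops

# Channel 0 keeps improving longer than channel 1, so they stop at
# different epochs -- property (iii) above.
hist = np.array([[0.1, 0.1],
                 [0.2, 0.1],
                 [0.2, 0.1],
                 [0.2, 0.1],
                 [0.3, 0.1]])
stops = channelwise_early_stop(hist, patience=2)
```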
1 code implementation • 20 Jul 2020 • Sarath Shekkizhar, Antonio Ortega
Modern machine learning systems based on neural networks have shown great success in learning complex data patterns and generalizing to unseen data points.
1 code implementation • 16 Feb 2020 • Sarath Shekkizhar, Antonio Ortega
Graphs are useful to interpret widely used image processing methods, e.g., bilateral filtering, or to develop new ones, e.g., kernel-based techniques.
3 code implementations • 21 Oct 2019 • Sarath Shekkizhar, Antonio Ortega
Data-driven neighborhood definitions and graph constructions are often used in machine learning and signal processing applications.
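As context for why neighborhood definitions matter, here is a minimal sketch of the two classical, parameter-dependent definitions (k-NN and epsilon-ball) that data-driven constructions such as NNK aim to improve upon; the function names and the parameter values `k` and `eps` are illustrative, and the paper's own construction is not shown.

```python
import numpy as np

def knn_neighbors(x, X, k):
    """Indices of the k nearest points to query x in dataset X (n, d)."""
    d = np.linalg.norm(X - x, axis=1)
    return np.argsort(d)[:k]

def eps_neighbors(x, X, eps):
    """Indices of all points within distance eps of query x."""
    d = np.linalg.norm(X - x, axis=1)
    return np.flatnonzero(d <= eps)

# Three clustered points and one outlier.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0], [5.0, 0.0]])

# Near the outlier, k-NN still forces k neighbors (some far away),
# while the eps-ball finds only the outlier itself: both definitions
# are sensitive to their parameter and to local density.
near_outlier = np.array([5.0, 0.0])
knn_out = knn_neighbors(near_outlier, X, k=2)
eps_out = eps_neighbors(near_outlier, X, eps=0.5)
```

This density sensitivity is the usual motivation for making the neighborhood definition adapt to the data rather than to a fixed k or eps.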