no code implementations • 16 Mar 2024 • Namyong Park, Xing Wang, Antoine Simoulin, Shuai Yang, Grey Yang, Ryan Rossi, Puja Trivedi, Nesreen Ahmed
To address these limitations, the forward-forward algorithm (FF), which trains NNs by performing two forward passes over positive and negative data, was recently proposed as an alternative to backpropagation (BP) in the image classification domain.
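The core FF idea can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: each layer is trained locally so that a "goodness" score (the sum of squared activations, following Hinton's original FF formulation) is high on positive data and low on negative data; the data, threshold, and learning rate below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def goodness(h):
    # Goodness of a layer's activations: sum of squares per sample.
    return (h ** 2).sum(axis=1)

def layer_forward(x, W):
    # A single ReLU layer; FF trains each layer with its own local objective.
    return np.maximum(0.0, x @ W)

# Toy positive/negative data (hypothetical stand-ins for, e.g., images with
# correct vs. corrupted labels).
x_pos = rng.normal(loc=1.0, size=(32, 8))
x_neg = rng.normal(loc=-1.0, size=(32, 8))
W = rng.normal(scale=0.1, size=(8, 16))

theta = 2.0  # goodness threshold
lr = 0.01
for _ in range(200):
    # Two forward passes: one over positive data, one over negative data.
    h_pos, h_neg = layer_forward(x_pos, W), layer_forward(x_neg, W)
    # Probability a sample is "positive": logistic of (goodness - threshold).
    p_pos = 1.0 / (1.0 + np.exp(-(goodness(h_pos) - theta)))
    p_neg = 1.0 / (1.0 + np.exp(-(goodness(h_neg) - theta)))
    # Local gradient step on -log p_pos and -log(1 - p_neg):
    # raise goodness on positives, lower it on negatives.
    grad_pos = x_pos.T @ ((1 - p_pos)[:, None] * 2 * h_pos)
    grad_neg = x_neg.T @ (p_neg[:, None] * 2 * h_neg)
    W += lr * (grad_pos - grad_neg) / len(x_pos)

# After training, positive samples score higher goodness than negative ones.
```

No backward pass through the layer is needed, which is what makes FF attractive in settings where BP is awkward, such as the graph domain the paper targets.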
no code implementations • 7 Jan 2024 • Puja Trivedi, Mark Heimann, Rushil Anirudh, Danai Koutra, Jayaraman J. Thiagarajan
While graph neural networks (GNNs) are widely used for node and graph representation learning tasks, the reliability of GNN uncertainty estimates under distribution shifts remains relatively under-explored.
no code implementations • 29 Nov 2023 • Puja Trivedi, Ryan Rossi, David Arbour, Tong Yu, Franck Dernoncourt, Sungchul Kim, Nedim Lipka, Namyong Park, Nesreen K. Ahmed, Danai Koutra
Most real-world networks are noisy and incomplete samples from an unknown target distribution.
no code implementations • 20 Sep 2023 • Jayaraman J. Thiagarajan, Vivek Narayanaswamy, Puja Trivedi, Rushil Anirudh
In this paper, we propose PAGER (Principled Analysis of Generalization Errors in Regressors), a framework to systematically detect and characterize failures in deep regression models.
no code implementations • 20 Sep 2023 • Puja Trivedi, Mark Heimann, Rushil Anirudh, Danai Koutra, Jayaraman J. Thiagarajan
Safe deployment of graph neural networks (GNNs) under distribution shift requires models to provide accurate confidence indicators (CIs).
no code implementations • 8 Jul 2023 • April Chen, Ryan A. Rossi, Namyong Park, Puja Trivedi, Yu Wang, Tong Yu, Sungchul Kim, Franck Dernoncourt, Nesreen K. Ahmed
In this article, we examine and categorize fairness techniques for improving the fairness of GNNs.
no code implementations • 23 Mar 2023 • Puja Trivedi, Danai Koutra, Jayaraman J. Thiagarajan
Advances in the expressivity of pretrained models have increased interest in the design of adaptation protocols which enable safe and effective transfer learning.
no code implementations • 23 Mar 2023 • Puja Trivedi, Danai Koutra, Jayaraman J. Thiagarajan
Overall, our work carefully studies the effectiveness of popular scoring functions in realistic settings and helps to better understand their limitations.
1 code implementation • 4 Aug 2022 • Puja Trivedi, Ekdeep Singh Lubana, Mark Heimann, Danai Koutra, Jayaraman J. Thiagarajan
Overall, our work rigorously contextualizes, both empirically and theoretically, the effects of data-centric properties on augmentation strategies and learning paradigms for graph SSL.
no code implementations • 26 Jul 2022 • Puja Trivedi, Danai Koutra, Jayaraman J. Thiagarajan
While directly fine-tuning (FT) large-scale, pretrained models on task-specific data is well-known to induce strong in-distribution task performance, recent works have demonstrated that different adaptation protocols, such as linear probing (LP) prior to FT, can improve out-of-distribution generalization.
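The LP-then-FT protocol can be sketched as follows. This is a hedged toy illustration under assumed data and a random "pretrained" feature map standing in for a real backbone; it is not the papers' experimental setup. The point is only the two-stage structure: fit the linear head on frozen features first, then fine-tune everything starting from that head rather than a random one.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 10))            # toy task data (assumption)
y = (X[:, 0] > 0).astype(float)          # toy binary targets (assumption)

# Stand-in for a pretrained backbone: a frozen random ReLU feature map.
W_backbone = rng.normal(scale=0.3, size=(10, 20))

def features(X, W):
    return np.maximum(0.0, X @ W)

# Stage 1: linear probing (LP) -- fit only the head on frozen features.
H = features(X, W_backbone)
w_head, *_ = np.linalg.lstsq(H, y, rcond=None)
loss_lp = ((H @ w_head - y) ** 2).mean()

# Stage 2: fine-tuning (FT) -- update backbone and head jointly,
# initialized from the LP solution instead of a random head.
lr = 1e-3
for _ in range(100):
    H = features(X, W_backbone)
    err = H @ w_head - y                                  # squared-error residual
    grad_head = H.T @ err / len(X)
    grad_backbone = X.T @ ((err[:, None] * w_head) * (H > 0)) / len(X)
    w_head -= lr * grad_head
    W_backbone -= lr * grad_backbone

loss_ft = ((features(X, W_backbone) @ w_head - y) ** 2).mean()
```

The intuition the works above study is that initializing FT from the LP head avoids large early gradients that distort pretrained features, which is one proposed mechanism for the improved out-of-distribution generalization.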
1 code implementation • 9 Nov 2021 • Fatemeh Vahedian, Ruiyu Li, Puja Trivedi, Di Jin, Danai Koutra
Understanding the training dynamics of deep neural networks (DNNs) is important as it can lead to improved training efficiency and task performance.
no code implementations • 5 Nov 2021 • Puja Trivedi, Ekdeep Singh Lubana, Yujun Yan, Yaoqing Yang, Danai Koutra
Unsupervised graph representation learning is critical to a wide range of applications where labels may be scarce or expensive to procure.
no code implementations • 29 Sep 2021 • Puja Trivedi, Mark Heimann, Danai Koutra, Jayaraman J. Thiagarajan
Using the recent population augmentation graph-based analysis of self-supervised learning, we show theoretically that the success of GCL with popular augmentations is bounded by the graph edit distance between different classes.
2 code implementations • 4 Feb 2021 • Ekdeep Singh Lubana, Puja Trivedi, Danai Koutra, Robert P. Dick
Catastrophic forgetting undermines the effectiveness of deep neural networks (DNNs) in scenarios such as continual learning and lifelong learning.
1 code implementation • 10 Sep 2020 • Ekdeep Singh Lubana, Puja Trivedi, Conrad Hougen, Robert P. Dick, Alfred O. Hero
To address this issue, we propose OrthoReg, a principled regularization strategy that enforces orthonormality on a network's filters to reduce inter-filter correlation, thereby allowing reliable, efficient determination of group importance estimates, improved trainability of pruned networks, and efficient, simultaneous pruning of large groups of filters.
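The orthonormality penalty at the heart of this idea can be illustrated with a small sketch (illustrative only, not the authors' OrthoReg implementation): penalize the Frobenius distance between the filter Gram matrix and the identity, which drives inter-filter correlations toward zero while keeping each filter at unit norm.

```python
import numpy as np

rng = np.random.default_rng(2)
# 6 filters, each flattened to a length-12 vector (toy sizes, an assumption).
W = rng.normal(scale=0.5, size=(6, 12))

def ortho_penalty(W):
    G = W @ W.T                         # filter Gram matrix
    R = G - np.eye(W.shape[0])          # deviation from orthonormality
    return (R ** 2).sum()               # squared Frobenius norm

lr = 0.01
for _ in range(500):
    # Gradient of ||W W^T - I||_F^2 with respect to W is 4 (W W^T - I) W.
    W -= lr * 4.0 * (W @ W.T - np.eye(W.shape[0])) @ W

# The filters converge toward an orthonormal set: off-diagonal correlations
# vanish and each filter reaches unit norm.
```

In practice this penalty would be added (with a weight) to the task loss during training, so that filter importance scores computed per group are not confounded by correlated filters.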