no code implementations • 8 Jun 2023 • Donald Loveland, Jiong Zhu, Mark Heimann, Benjamin Fish, Michael T. Schaub, Danai Koutra
We ground the practical implications of this work through granular analysis on five real-world datasets with varying global homophily levels, demonstrating that (a) GNNs can fail to generalize to test nodes that deviate from the global homophily of a graph, and (b) high local homophily does not necessarily confer high performance for a node.
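As an illustrative sketch (not the paper's exact definition, which may use normalized variants), the local homophily of a node can be computed as the fraction of its neighbors that share its label:

```python
def local_homophily(adj, labels, node):
    """Fraction of `node`'s neighbors sharing its label.

    Illustrative sketch of node-level (local) homophily; a node can
    deviate sharply from the graph's global homophily level.
    """
    nbrs = adj[node]
    if not nbrs:
        return 0.0
    return sum(labels[v] == labels[node] for v in nbrs) / len(nbrs)

# Toy graph as adjacency lists: node 0 has two same-label neighbors
# and one different-label neighbor.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
labels = {0: "a", 1: "a", 2: "a", 3: "b"}
print(local_homophily(adj, labels, 0))  # 2/3
```

Averaging this quantity over all nodes recovers a common global (edge-wise) homophily measure; the paper's point is that the per-node values can vary widely around that average.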
no code implementations • 12 Sep 2022 • Benjamin Fish, Luke Stark
Existing efforts to formulate computational definitions of fairness have largely focused on distributional notions of equality, where equality is defined by the resources or decisions given to individuals in the system.
no code implementations • 7 Apr 2020 • Benjamin Fish, Lev Reyzin
In the problem of learning with label proportions, which we call LLP learning, the training data is unlabeled, and only the proportions of examples receiving each label are given.
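A minimal sketch of the LLP supervision model described above (variable names are illustrative, not from the paper): examples are grouped into bags, and the learner observes the features and each bag's label proportion, never the individual labels.

```python
# Individual labels exist but are hidden from the learner.
true_labels = [1, 0, 1, 1, 0, 0]
bags = [[0, 1, 2], [3, 4, 5]]  # indices of the examples in each bag

# The only supervision available in LLP learning: per-bag proportions.
proportions = [sum(true_labels[i] for i in bag) / len(bag) for bag in bags]
print(proportions)  # roughly [0.67, 0.33]
```

A learner must then fit a per-example classifier consistent with these aggregate proportions rather than with example-level labels.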
no code implementations • 28 Sep 2017 • Benjamin Fish, Lev Reyzin, Benjamin I. P. Rubinstein
In this work, we study how sampling can be used to speed up mechanisms for answering adaptive queries against datasets without reducing the accuracy of those mechanisms.
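The basic idea can be sketched as follows; this is an illustrative subsampling estimator for a single statistical query, not the paper's mechanism, and the function names are hypothetical:

```python
import random

def subsampled_mean(data, query, sample_size, seed=0):
    """Answer a mean-style query on a random subsample of the data.

    The mechanism touches only `sample_size` records instead of the
    full dataset, trading a small amount of accuracy for speed.
    """
    rng = random.Random(seed)
    sample = rng.sample(data, sample_size)
    return sum(query(x) for x in sample) / sample_size

data = list(range(1000))
est = subsampled_mean(data, lambda x: x < 500, sample_size=100)
full = sum(x < 500 for x in data) / len(data)  # exact answer: 0.5
```

The subtlety in the adaptive setting is that later queries can depend on earlier answers, so the error analysis must hold even against such dependent query sequences.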
no code implementations • 24 Feb 2017 • Benjamin Fish, Rajmonda S. Caceres
We introduce a framework that tackles both of these issues: by measuring the performance of a time scale detection algorithm through how well a given task is accomplished on the resulting network, we are able, for the first time, to directly compare different time scale detection algorithms.
1 code implementation • 21 Jan 2016 • Benjamin Fish, Jeremy Kun, Ádám D. Lelkes
To help distinguish these naive algorithms from more sensible ones, we propose a new measure of fairness, called resilience to random bias (RRB).
no code implementations • 24 Apr 2015 • Benjamin Fish, Rajmonda S. Caceres
We show that not only does oversampling affect the quality of link prediction, but that we can use link prediction to recover from the effects of oversampling.
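For concreteness, a standard link prediction heuristic of the kind such a study might evaluate is common-neighbor scoring; this is a generic illustration, not the predictor used in the paper.

```python
from itertools import combinations

def common_neighbor_scores(adj):
    """Score each non-edge (u, v) by the number of neighbors that u
    and v share; higher scores suggest a more likely missing link."""
    scores = {}
    for u, v in combinations(sorted(adj), 2):
        if v not in adj[u]:
            scores[(u, v)] = len(adj[u] & adj[v])
    return scores

# Toy graph with one missing edge: 0 and 3 share neighbors 1 and 2,
# so (0, 3) is predicted as the most plausible new link.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
scores = common_neighbor_scores(adj)
print(scores)  # {(0, 3): 2}
```

Under oversampling, spurious edges inflate these neighborhood overlaps, which is how distortion of the sampled network propagates into the predictor's rankings.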