no code implementations • ECCV 2020 • Mingeun Kang, Kiwon Lee, Yong H. Lee, Changho Suh
We consider graph-based semi-supervised learning that leverages a similarity graph across data points to better exploit data structure exposed in unlabeled data.
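As an illustration of the general graph-based SSL idea (a minimal sketch, not this paper's algorithm; the toy graph, labels, and helper name below are ours), labels can be spread over a similarity graph by repeatedly averaging neighbors' scores while clamping the known labels:

```python
import numpy as np

def propagate_labels(W, y, n_iter=100):
    """Label propagation over a similarity graph.

    W : (n, n) symmetric nonnegative similarity matrix.
    y : length-n array with known labels in {0, 1} and -1 for unlabeled.
    Returns soft scores in [0, 1]; threshold at 0.5 for hard labels.
    """
    labeled = y >= 0
    f = np.where(labeled, y, 0.5).astype(float)      # unlabeled start at 0.5
    # Row-normalize W so each node averages its neighbors' scores.
    P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    for _ in range(n_iter):
        f = P @ f
        f[labeled] = y[labeled]                      # clamp known labels
    return f

# Two loosely connected clusters with one labeled node each.
W = np.array([[0,1,1,0,0,0],
              [1,0,1,0,0,0],
              [1,1,0,1,0,0],
              [0,0,1,0,1,1],
              [0,0,0,1,0,1],
              [0,0,0,1,1,0]], dtype=float)
y = np.array([1, -1, -1, -1, -1, 0])
scores = propagate_labels(W, y)
labels = (scores >= 0.5).astype(int)                 # [1, 1, 1, 0, 0, 0]
```

The clamping step is what injects the labeled data; the graph structure alone then determines how the labels diffuse into the unlabeled points.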
no code implementations • 5 Feb 2023 • Yuji Roh, Kangwook Lee, Steven Euijong Whang, Changho Suh
First, we analytically show that existing in-processing fair algorithms have fundamental limits in accuracy and group fairness.
1 code implementation • 12 Oct 2022 • Jaewoong Cho, Moonseok Choi, Changho Suh
We explore the fairness issue that arises in recommender systems.
no code implementations • NeurIPS 2020 • Adel Elmahdy, Junhyung Ahn, Changho Suh, Soheil Mohajer
We consider a matrix completion problem that exploits social or item similarity graphs as side information.
no code implementations • NeurIPS 2021 • Yuji Roh, Kangwook Lee, Steven Euijong Whang, Changho Suh
In this work, we propose a sample selection-based algorithm for fair and robust training.
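To make the idea concrete — as a toy sketch only, not the paper's algorithm (the function and data below are ours) — per-group loss-based selection can trim likely-poisoned points while keeping the group mix balanced:

```python
import numpy as np

def select_samples(losses, groups, keep_frac):
    """Illustrative fair-and-robust sample selection: within each
    sensitive group, keep the keep_frac fraction of samples with the
    smallest loss. Low-loss trimming discards likely-poisoned points
    (robustness); applying the same keep rate to every group avoids
    skewing the group proportions (fairness)."""
    keep = np.zeros(len(losses), dtype=bool)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        k = max(1, int(round(keep_frac * len(idx))))
        keep[idx[np.argsort(losses[idx])[:k]]] = True
    return keep

losses = np.array([0.1, 0.2, 5.0, 0.3, 0.15, 4.0])  # 5.0 and 4.0 look poisoned
groups = np.array([0, 0, 0, 1, 1, 1])
mask = select_samples(losses, groups, keep_frac=2/3)
# keeps the two lowest-loss samples in each group
```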
no code implementations • 29 Sep 2021 • Soobin Um, Changho Suh
We explore a fairness-related challenge that arises in generative models.
no code implementations • 12 Sep 2021 • Junhyung Ahn, Adel Elmahdy, Soheil Mohajer, Changho Suh
In the achievability proof, we demonstrate that the probability of error of the maximum likelihood estimator vanishes for a sufficiently large number of users and items, provided that the sufficient conditions hold.
1 code implementation • ICLR 2021 • Yuji Roh, Kangwook Lee, Steven Euijong Whang, Changho Suh
We address this problem via the lens of bilevel optimization.
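Schematically (notation ours, for illustration), such a bilevel view of fair training pairs an outer problem over per-group weights $\lambda$ with an inner model-fitting problem:

```latex
\min_{\lambda \in \Delta}\; F_{\mathrm{fair}}\bigl(\theta^{*}(\lambda)\bigr)
\quad \text{s.t.} \quad
\theta^{*}(\lambda) \in \arg\min_{\theta}\; \sum_{z} \lambda_{z}\, \mathcal{L}_{z}(\theta),
```

where $\mathcal{L}_{z}$ is the average training loss on sensitive group $z$, $\Delta$ is the probability simplex, and $F_{\mathrm{fair}}$ measures disparity (e.g., the gap between group losses): the outer level adapts the sampling weights while the inner level trains the model.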
no code implementations • NeurIPS 2020 • Jaewoong Cho, Gyeongjo Hwang, Changho Suh
As machine learning becomes prevalent in a widening array of sensitive applications such as job hiring and criminal justice, one critical property that machine learning classifiers should respect is fairness: guaranteeing that a prediction output is irrelevant to sensitive attributes such as gender and race.
no code implementations • 8 Jun 2020 • Qiaosheng Zhang, Geewon Suh, Changho Suh, Vincent Y. F. Tan
In this paper, we design and analyze MC2G (Matrix Completion with 2 Graphs), an algorithm that performs matrix completion in the presence of social and item similarity graphs.
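MC2G itself is more refined, but the role of the two graphs can be sketched as follows (an illustrative toy, with graphs, helper names, and thresholds chosen by us): cluster users and items from their similarity graphs, then fill each (user-cluster, item-cluster) block by majority vote over its observed entries:

```python
import numpy as np

def spectral_two_clusters(A):
    """Split a graph into two clusters by the sign of the Fiedler vector
    (eigenvector of the second-smallest Laplacian eigenvalue)."""
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)
    return (vecs[:, 1] >= 0).astype(int)

def complete_with_graphs(R_obs, A_user, A_item):
    """Toy two-graph completion: cluster users and items, then fill each
    (user-cluster, item-cluster) block with the majority of its observed
    binary ratings. R_obs: -1 = unobserved, {0, 1} = observed."""
    u, v = spectral_two_clusters(A_user), spectral_two_clusters(A_item)
    R = R_obs.copy()
    for cu in (0, 1):
        for cv in (0, 1):
            block = R[np.ix_(u == cu, v == cv)]
            seen = block[block >= 0]
            if seen.size:
                block[block < 0] = int(seen.mean() >= 0.5)  # majority vote
            R[np.ix_(u == cu, v == cv)] = block
    return R

# Two user communities (cliques joined by one bridge edge) and two item
# communities (a 4-node path splitting into two pairs).
A_user = np.array([[0,1,1,0,0,0],[1,0,1,0,0,0],[1,1,0,1,0,0],
                   [0,0,1,0,1,1],[0,0,0,1,0,1],[0,0,0,1,1,0]])
A_item = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]])
R_obs = np.full((6, 4), -1)
for i, j, r in [(0,0,1),(1,1,1),(0,2,0),(2,3,0),
                (3,0,0),(4,1,0),(5,2,1),(3,3,1)]:
    R_obs[i, j] = r
R = complete_with_graphs(R_obs, A_user, A_item)  # recovers the full matrix
```

With only two observed entries per block, the graph side information is what makes recovery possible at all — exactly the regime the paper quantifies.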
1 code implementation • ICML 2020 • Yuji Roh, Kangwook Lee, Steven Euijong Whang, Changho Suh
Trustworthy AI is a critical issue in machine learning where, in addition to training a model that is accurate, one must consider both fair and robust training in the presence of data bias and poisoning.
no code implementations • 6 Dec 2019 • Qiaosheng Zhang, Vincent Y. F. Tan, Changho Suh
We consider the problem of recovering a binary rating matrix as well as clusters of users and items based on a partially observed matrix together with side-information in the form of social and item similarity graphs.
no code implementations • 25 Sep 2019 • Yuji Roh, Kangwook Lee, Gyeong Jo Hwang, Steven Euijong Whang, Changho Suh
We consider the problem of fair and robust model training in the presence of data poisoning.
no code implementations • 25 Sep 2019 • Sunghyun Kim, Minje Jang, Changho Suh
As existing state-of-the-art algorithms are tailored to particular statistical models, the best-performing algorithm differs across scenarios.
no code implementations • 25 Feb 2019 • Jaewoong Cho, Changho Suh
Generative Adversarial Networks (GANs) have become a powerful framework to learn generative models that arise across a wide variety of domains.
no code implementations • NeurIPS 2018 • Kwangjun Ahn, Kangwook Lee, Hyunseung Cha, Changho Suh
Considering a simple correlation model between a rating matrix and a graph, we characterize the sharp threshold on the number of observed entries required to recover the rating matrix (called the optimal sample complexity) as a function of the quality of graph side information (to be detailed).
no code implementations • 23 May 2018 • Kwangjun Ahn, Kangwook Lee, Changho Suh
Our main contribution lies in the performance analysis of poly-time algorithms under a random hypergraph model, which we name the weighted stochastic block model, in which objects and multi-way measures are modeled as nodes and weights of hyperedges, respectively.
no code implementations • ICLR 2018 • Kangwook Lee, Hoon Kim, Changho Suh
Recently, Shrivastava et al. (2017) proposed Simulated+Unsupervised (S+U) learning: it first learns a mapping from synthetic data to real data, translates a large amount of labeled synthetic data into data that resemble real data, and then trains a learning model on the translated data.
no code implementations • NeurIPS 2017 • Minje Jang, Sunghyun Kim, Changho Suh, Sewoong Oh
As our main result, we characterize the minimax-optimal sample size for top-$K$ ranking.
no code implementations • 12 Sep 2017 • Kwangjun Ahn, Kangwook Lee, Changho Suh
The objective of the problem is to cluster data points into distinct communities based on a set of measurements, each of which is associated with the values of a certain number of data points.
1 code implementation • ICML 2017 • Soheil Mohajer, Changho Suh, Adel Elmahdy
We explore an active top-$K$ ranking problem based on pairwise comparisons that may be collected sequentially, as dictated by our design choices.
no code implementations • 14 Mar 2016 • Minje Jang, Sunghyun Kim, Changho Suh, Sewoong Oh
First, in a general comparison model where the item pairs to compare are given a priori, we derive upper and lower bounds on the sample size for reliable recovery of the top-$K$ ranked items.
no code implementations • 15 Feb 2016 • Changho Suh, Vincent Y. F. Tan, Renbo Zhao
We study the top-$K$ ranking problem where the goal is to recover the set of top-$K$ ranked items out of a large collection of items based on partially revealed preferences.
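The paper's focus is the sample complexity of this problem; to illustrate one standard estimator in the setting, here is a sketch of Rank-Centrality-style spectral scoring from pairwise win counts (function name and data are ours, and the paper's own estimator may differ):

```python
import numpy as np

def topk_spectral(wins, K, n_iter=1000):
    """Estimate the top-K items from pairwise comparisons via a
    Rank-Centrality-style random walk. wins[i, j] = #times i beat j."""
    n = wins.shape[0]
    total = wins + wins.T
    frac = wins / np.maximum(total, 1)         # empirical P(row beats col)
    P = frac.T / n                             # step toward likely winners
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))   # lazy self-loops -> stochastic
    pi = np.full(n, 1.0 / n)
    for _ in range(n_iter):                    # power-iterate to stationarity
        pi = pi @ P
    return np.argsort(pi)[::-1][:K]            # items with most stationary mass

# 10 comparisons per pair, with a clear underlying order 0 > 1 > 2 > 3.
wins = np.array([[0, 8, 9, 10],
                 [2, 0, 7,  9],
                 [1, 3, 0,  6],
                 [0, 1, 4,  0]])
top2 = topk_spectral(wins, 2)                  # -> items 0 and 1
```

The walk moves from an item toward the items that beat it, so stationary mass accumulates on stronger items; sorting the stationary distribution then yields the top-$K$ set.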
no code implementations • 11 Feb 2016 • Yuxin Chen, Govinda Kamath, Changho Suh, David Tse
Motivated by applications in domains such as social networks and computational biology, we study the problem of community recovery in graphs with locality.
no code implementations • 27 Apr 2015 • Yuxin Chen, Changho Suh
To approach this minimax limit, we propose a nearly linear-time ranking scheme, called \emph{Spectral MLE}, that returns the indices of the top-$K$ items in accordance with a careful score estimate.
no code implementations • 6 Apr 2015 • Yuxin Chen, Changho Suh, Andrea J. Goldsmith
In particular, our results isolate a family of \emph{minimum} \emph{channel divergence measures} to characterize the degree of measurement corruption, which together with the size of the minimum cut of $\mathcal{G}$ dictates the feasibility of exact information recovery.