Search Results for author: Kangwook Lee

Found 33 papers, 17 papers with code

Learning to Embed Multi-Modal Contexts for Situated Conversational Agents

no code implementations Findings (NAACL) 2022 Haeju Lee, Oh Joon Kwon, Yunseon Choi, Minho Park, Ran Han, Yoonhyung Kim, Jinhyeon Kim, Youngjune Lee, Haebin Shin, Kangwook Lee, Kee-Eung Kim

The Situated Interactive Multi-Modal Conversations (SIMMC) 2.0 aims to create virtual shopping assistants that can accept complex multi-modal inputs, i.e., visual appearances of objects and user utterances.

Coreference Resolution +2

Debiasing Pre-Trained Language Models via Efficient Fine-Tuning

1 code implementation LTEDI (ACL) 2022 Michael Gira, Ruisu Zhang, Kangwook Lee

An explosion in the popularity of transformer-based language models (such as GPT-3, BERT, RoBERTa, and ALBERT) has opened the doors to new machine learning applications involving language modeling, text generation, and more.

Language Modelling Text Generation

Improving Fair Training under Correlation Shifts

no code implementations 5 Feb 2023 Yuji Roh, Kangwook Lee, Steven Euijong Whang, Changho Suh

First, we analytically show that existing in-processing fair algorithms have fundamental limits in accuracy and group fairness.

Fairness

Optimizing DDPM Sampling with Shortcut Fine-Tuning

1 code implementation 31 Jan 2023 Ying Fan, Kangwook Lee

In this study, we propose Shortcut Fine-Tuning (SFT), a new approach for addressing the challenge of fast sampling of pretrained Denoising Diffusion Probabilistic Models (DDPMs).

Denoising

Looped Transformers as Programmable Computers

1 code implementation 30 Jan 2023 Angeliki Giannou, Shashank Rajput, Jy-yong Sohn, Kangwook Lee, Jason D. Lee, Dimitris Papailiopoulos

We present a framework for using transformer networks as universal computers by programming them with specific weights and placing them in a loop.
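
As a loose illustration of the loop described above (a minimal PyTorch sketch of ours; the paper hand-constructs the weights to implement specific programs, which this toy omits):

```python
import torch
import torch.nn as nn

# One weight-tied transformer block, reused at every step of the loop.
block = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)

def looped_run(block, scratchpad, n_steps):
    # Each pass reads and rewrites the same "scratchpad" of tokens,
    # so repeated applications can emulate the steps of a program.
    for _ in range(n_steps):
        scratchpad = block(scratchpad)
    return scratchpad

x = torch.randn(1, 16, 64)  # batch of 1, 16 tokens, width 64
y = looped_run(block, x, n_steps=10)
```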

Score-based Generative Modeling Secretly Minimizes the Wasserstein Distance

1 code implementation 13 Dec 2022 Dohyun Kwon, Ying Fan, Kangwook Lee

Specifically, we prove that the Wasserstein distance is upper bounded by the square root of the objective function up to multiplicative constants and a fixed constant offset.
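
In schematic form (notation ours; the paper gives the precise constants and assumptions), the bound described above reads

$W_2(p_{\mathrm{data}}, p_\theta) \le C_1 \sqrt{\mathcal{J}(\theta)} + C_2,$

where $W_2$ is the Wasserstein distance, $\mathcal{J}(\theta)$ is the score-based training objective, $C_1$ collects the multiplicative constants, and $C_2$ is the fixed offset.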

Image Generation

Equal Improvability: A New Fairness Notion Considering the Long-term Impact

1 code implementation 13 Oct 2022 Ozgur Guldogan, Yuchen Zeng, Jy-yong Sohn, Ramtin Pedarsani, Kangwook Lee

In order to promote long-term fairness, we propose a new fairness notion called Equal Improvability (EI), which equalizes the potential acceptance rate of the rejected samples across different groups, assuming a bounded level of effort will be spent by each rejected sample.
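
One way to write this notion down (our notation, with decision threshold $\tau$ and effort budget $\delta$; the paper's formal definition may differ in details): EI requires that

$\Pr\big(\max_{\|\Delta x\| \le \delta} f(x + \Delta x) \ge \tau \;\big|\; f(x) < \tau,\, A = a\big)$

take the same value for every group $a$, i.e., rejected samples in every group have an equal chance of becoming acceptable with bounded effort.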

Fairness

Outlier-Robust Group Inference via Gradient Space Clustering

1 code implementation 13 Oct 2022 Yuchen Zeng, Kristjan Greenewald, Kangwook Lee, Justin Solomon, Mikhail Yurochkin

Traditional machine learning models focus on achieving good performance on the overall training distribution, but they often underperform on minority groups.

A Better Way to Decay: Proximal Gradient Training Algorithms for Neural Nets

no code implementations 6 Oct 2022 Liu Yang, Jifan Zhang, Joseph Shenouda, Dimitris Papailiopoulos, Kangwook Lee, Robert D. Nowak

For neural networks with ReLU activations, solutions to the weight decay objective are equivalent to those of a different objective in which the regularization term is instead a sum of products of $\ell_2$ (not squared) norms of the input and output weights associated with each ReLU.
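
A sketch of why such an equivalence holds, using the positive homogeneity of ReLU (a standard rescaling argument, not a quote from the paper): for any $\alpha > 0$, replacing a unit's weights $(w_i, v_i)$ with $(\alpha w_i, v_i / \alpha)$ leaves the network function unchanged, and minimizing $\alpha^2 \|w_i\|_2^2 + \|v_i\|_2^2 / \alpha^2$ over $\alpha$ gives $2 \|w_i\|_2 \|v_i\|_2$ by AM-GM. Hence, at optimality,

$\sum_i \big( \|w_i\|_2^2 + \|v_i\|_2^2 \big) \quad\text{behaves as}\quad 2 \sum_i \|w_i\|_2 \, \|v_i\|_2 .$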

LIFT: Language-Interfaced Fine-Tuning for Non-Language Machine Learning Tasks

1 code implementation 14 Jun 2022 Tuan Dinh, Yuchen Zeng, Ruisu Zhang, Ziqian Lin, Michael Gira, Shashank Rajput, Jy-yong Sohn, Dimitris Papailiopoulos, Kangwook Lee

LIFT does not make any changes to the model architecture or loss function, and it solely relies on the natural language interface, enabling "no-code machine learning with LMs."
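
A toy illustration of the language-interfaced idea (the template below is ours, not the paper's exact prompt format): a tabular example is serialized into text so an off-the-shelf LM can be fine-tuned on it directly.

```python
# Serialize a feature dictionary and label into a prompt/completion pair.
def row_to_example(features: dict, label) -> dict:
    described = ", ".join(f"{k} is {v}" for k, v in features.items())
    return {
        "prompt": f"Given that {described}, the label is",
        "completion": f" {label}",
    }

print(row_to_example({"age": 37, "income": 54000}, "approved"))
# {'prompt': 'Given that age is 37, income is 54000, the label is',
#  'completion': ' approved'}
```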

BIG-bench Machine Learning General Classification +2

Breaking Fair Binary Classification with Optimal Flipping Attacks

no code implementations 12 Apr 2022 Changhun Jo, Jy-yong Sohn, Kangwook Lee

Minimizing risk with fairness constraints is one of the popular approaches to learning a fair classifier.
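
A generic instance of this formulation (ours, for concreteness; here with a demographic-parity constraint at tolerance $\epsilon$):

$\min_f \; \mathbb{E}\big[\ell(f(X), Y)\big] \quad \text{s.t.} \quad \big| \Pr(f(X) = 1 \mid A = 0) - \Pr(f(X) = 1 \mid A = 1) \big| \le \epsilon .$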

Classification Data Poisoning +1

Rare Gems: Finding Lottery Tickets at Initialization

1 code implementation 24 Feb 2022 Kartik Sreenivasan, Jy-yong Sohn, Liu Yang, Matthew Grinde, Alliot Nagle, Hongyi Wang, Eric Xing, Kangwook Lee, Dimitris Papailiopoulos

Frankle & Carbin conjecture that we can avoid this by training "lottery tickets", i.e., special sparse subnetworks found at initialization, that can be trained to high accuracy.

Improved Input Reprogramming for GAN Conditioning

1 code implementation 7 Jan 2022 Tuan Dinh, Daewon Seo, Zhixu Du, Liang Shang, Kangwook Lee

Motivated by real-world scenarios with scarce labeled data, we focus on the input reprogramming approach and carefully analyze the existing algorithm.

Improving Fairness via Federated Learning

2 code implementations 29 Oct 2021 Yuchen Zeng, Hongxu Chen, Kangwook Lee

We then theoretically and empirically show that the performance tradeoff of FedAvg-based fair learning algorithms is strictly worse than that of a fair classifier trained on centralized data.

Fairness Federated Learning

Gradient Inversion with Generative Image Prior

1 code implementation NeurIPS 2021 Jinwoo Jeon, Jaechang Kim, Kangwook Lee, Sewoong Oh, Jungseul Ok

Federated Learning (FL) is a distributed learning framework in which the local data never leaves clients' devices, preserving privacy; the server trains models on the data by accessing only the gradients of those local data.
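
A minimal sketch of that gradient-sharing round (a schematic FedAvg-style aggregation of ours in PyTorch; this is the setup the paper attacks, not its inversion method):

```python
import torch

def client_gradient(model, loss_fn, x_local, y_local):
    # Local data stays on the client; only gradients are returned.
    model.zero_grad()
    loss_fn(model(x_local), y_local).backward()
    return [p.grad.clone() for p in model.parameters()]

def server_step(model, client_grads, lr=0.1):
    # The server sees gradients only, never the raw data.
    with torch.no_grad():
        for p, *grads in zip(model.parameters(), *client_grads):
            p -= lr * torch.stack(grads).mean(dim=0)
```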

Federated Learning

Coded-InvNet for Resilient Prediction Serving Systems

no code implementations 11 Jun 2021 Tuan Dinh, Kangwook Lee

Inspired by a new coded computation algorithm for invertible functions, we propose Coded-InvNet, a new approach to designing resilient prediction serving systems that can gracefully handle stragglers or node failures.

Translation

Permutation-Based SGD: Is Random Optimal?

1 code implementation ICLR 2022 Shashank Rajput, Kangwook Lee, Dimitris Papailiopoulos

However, for general strongly convex functions, random permutations are optimal.
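
For context, "permutation-based" refers to epoch-wise orderings such as random reshuffling, sketched below (a generic sketch of ours, not the paper's algorithm):

```python
import random

def reshuffled_sgd_epoch(params, data, grad_fn, lr):
    # Each epoch visits every example exactly once,
    # in a freshly drawn random permutation.
    order = random.sample(range(len(data)), len(data))
    for i in order:
        g = grad_fn(params, data[i])
        params = [p - lr * gi for p, gi in zip(params, g)]
    return params
```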

SLM: Learning a Discourse Language Representation with Sentence Unshuffling

no code implementations EMNLP 2020 Haejun Lee, Drew A. Hudson, Kangwook Lee, Christopher D. Manning

We introduce Sentence-level Language Modeling, a new pre-training objective for learning a discourse language representation in a fully self-supervised manner.

Language Modelling

Accordion: Adaptive Gradient Communication via Critical Learning Regime Identification

2 code implementations 29 Oct 2020 Saurabh Agarwal, Hongyi Wang, Kangwook Lee, Shivaram Venkataraman, Dimitris Papailiopoulos

These techniques usually require choosing a static compression ratio, often forcing users to balance the trade-off between model accuracy and per-iteration speedup.
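
For concreteness, a static-ratio scheme of the kind referred to above might look like top-$k$ gradient sparsification (a generic example of ours, not Accordion's adaptive method):

```python
import torch

def compress_topk(grad: torch.Tensor, ratio: float) -> torch.Tensor:
    # Keep only the largest-magnitude fraction `ratio` of entries.
    k = max(1, int(grad.numel() * ratio))
    _, idx = grad.abs().flatten().topk(k)
    sparse = torch.zeros_like(grad).flatten()
    sparse[idx] = grad.flatten()[idx]
    return sparse.view_as(grad)
```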

Quantization

Discrete-Valued Latent Preference Matrix Estimation with Graph Side Information

no code implementations 16 Mar 2020 Changhun Jo, Kangwook Lee

Ahn et al. (2018) first characterized the optimal sample complexity in the presence of graph side information, but the results are limited due to strict, unrealistic assumptions made on the unknown latent preference matrix and the structure of user clusters.

Recommendation Systems

FR-Train: A Mutual Information-Based Approach to Fair and Robust Training

1 code implementation ICML 2020 Yuji Roh, Kangwook Lee, Steven Euijong Whang, Changho Suh

Trustworthy AI is a critical issue in machine learning where, in addition to training a model that is accurate, one must consider both fair and robust training in the presence of data bias and poisoning.

Data Poisoning Fairness

FR-GAN: Fair and Robust Training

no code implementations 25 Sep 2019 Yuji Roh, Kangwook Lee, Gyeong Jo Hwang, Steven Euijong Whang, Changho Suh

We consider the problem of fair and robust model training in the presence of data poisoning.

Data Poisoning Fairness

Binary Rating Estimation with Graph Side Information

no code implementations NeurIPS 2018 Kwangjun Ahn, Kangwook Lee, Hyunseung Cha, Changho Suh

Considering a simple correlation model between a rating matrix and a graph, we characterize the sharp threshold on the number of observed entries required to recover the rating matrix (called the optimal sample complexity) as a function of the quality of graph side information (to be detailed).

Hypergraph Spectral Clustering in the Weighted Stochastic Block Model

no code implementations 23 May 2018 Kwangjun Ahn, Kangwook Lee, Changho Suh

Our main contribution lies in performance analysis of the poly-time algorithms under a random hypergraph model, which we name the weighted stochastic block model, in which objects and multi-way measures are modeled as nodes and weights of hyperedges, respectively.

Stochastic Block Model

Simulated+Unsupervised Learning With Adaptive Data Generation and Bidirectional Mappings

no code implementations ICLR 2018 Kangwook Lee, Hoon Kim, Changho Suh

Recently, Shrivastava et al. (2017) proposed Simulated+Unsupervised (S+U) learning: it first learns a mapping from synthetic data to real data, translates a large amount of labeled synthetic data into data that resemble real data, and then trains a learning model on the translated data.
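
That three-step pipeline reads directly as pseudocode (names and helpers below are ours, purely illustrative):

```python
def s_plus_u(synthetic_labeled, real_unlabeled, train_mapper, train_model):
    G = train_mapper(synthetic_labeled, real_unlabeled)      # 1) learn synthetic-to-real mapping
    translated = [(G(x), y) for x, y in synthetic_labeled]   # 2) translate labeled synthetic data
    return train_model(translated)                           # 3) train on the translated data
```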

Gaze Estimation

Community Recovery in Hypergraphs

no code implementations 12 Sep 2017 Kwangjun Ahn, Kangwook Lee, Changho Suh

The objective of the problem is to cluster data points into distinct communities based on a set of measurements, each of which is associated with the values of a certain number of data points.

Face Clustering Motion Segmentation

Speeding Up Distributed Machine Learning Using Codes

no code implementations 8 Dec 2015 Kangwook Lee, Maximilian Lam, Ramtin Pedarsani, Dimitris Papailiopoulos, Kannan Ramchandran

We focus on two of the most basic building blocks of distributed learning algorithms: matrix multiplication and data shuffling.
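
As a toy example of coding one of these building blocks (an MDS-style coded matrix multiplication in the spirit of the paper; the exact scheme here is simplified by us):

```python
import numpy as np

A = np.random.randn(4, 3)
x = np.random.randn(3)

# Encode: split A in two and add one redundant coded task.
A1, A2 = A[:2], A[2:]
tasks = {"w1": A1, "w2": A2, "w3": A1 + A2}

# Suppose worker w2 straggles: any two results still suffice.
r1 = tasks["w1"] @ x
r3 = tasks["w3"] @ x
r2 = r3 - r1  # decode: (A1 + A2) x - A1 x = A2 x
assert np.allclose(np.concatenate([r1, r2]), A @ x)
```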

BIG-bench Machine Learning
