Search Results for author: Moontae Lee

Found 22 papers, 6 papers with code

Discriminator-Guided Multi-step Reasoning with Language Models

1 code implementation • 24 May 2023 • Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang

In the context of multi-step reasoning, the probabilities of language models (LMs) are often miscalibrated: solutions with high probabilities are not always correct.
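
A minimal sketch of the general idea (not the paper's exact method): since raw LM probabilities are miscalibrated, each candidate next reasoning step is scored by mixing the LM probability with a learned correctness discriminator. Both `lm_logprob` and `discriminator_score` are hypothetical placeholders.

```python
# Sketch: discriminator-guided selection of the next reasoning step.
# `lm_logprob` and `discriminator_score` are hypothetical stand-ins for
# a language model and a trained step-correctness discriminator.

def rank_steps(question, partial_solution, candidate_steps,
               lm_logprob, discriminator_score, alpha=0.5):
    """Score each candidate step by a convex mix of LM probability and
    discriminator confidence, then sort best-first."""
    scored = []
    for step in candidate_steps:
        p_lm = lm_logprob(question, partial_solution, step)             # miscalibrated on its own
        p_disc = discriminator_score(question, partial_solution, step)  # calibration signal
        scored.append((alpha * p_lm + (1 - alpha) * p_disc, step))
    return [step for _, step in sorted(scored, reverse=True)]
```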

Multimodal Subtask Graph Generation from Instructional Videos

no code implementations • 17 Feb 2023 • Yunseok Jang, Sungryull Sohn, Lajanugen Logeswaran, Tiange Luo, Moontae Lee, Honglak Lee

Real-world tasks consist of multiple inter-dependent subtasks (e.g., a dirty pan needs to be washed before it can be used for cooking).

Graph Generation

Exploring the Benefits of Training Expert Language Models over Instruction Tuning

1 code implementation • 7 Feb 2023 • Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo

Recently, Language Models (LMs) instruction-tuned on multiple tasks, an approach known as multitask-prompted fine-tuning (MT), have shown the capability to generalize to unseen tasks.

Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers

no code implementations • 27 Jan 2023 • Sungmin Cha, Sungjun Cho, Dasol Hwang, Honglak Lee, Taesup Moon, Moontae Lee

Since the recent advent of regulations for data protection (e.g., the General Data Protection Regulation), there has been increasing demand for deleting information learned from sensitive data in pre-trained models without retraining from scratch.

Image Classification

Transferring Pre-trained Multimodal Representations with Cross-modal Similarity Matching

no code implementations • 7 Jan 2023 • Byoungjip Kim, Sungik Choi, Dasol Hwang, Moontae Lee, Honglak Lee

Despite surprising performance on zero-shot transfer, pre-training a large-scale multimodal model is often prohibitive as it requires a huge amount of data and computing resources.

Language Modelling · Self-Supervised Learning

Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost

1 code implementation • 27 Oct 2022 • Sungjun Cho, Seonwoo Min, Jinwoo Kim, Moontae Lee, Honglak Lee, Seunghoon Hong

The forward and backward costs are thus linear in the number of edges, which each attention head can also choose flexibly based on the input.

Stochastic Block Model
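
To make the linear-in-edges claim concrete, here is a minimal NumPy sketch of attention evaluated only on a given edge list; the stochastic-block-model edge sampling itself is omitted, so this shows only why the cost scales with the number of edges rather than with n².

```python
import numpy as np

def sparse_attention(Q, K, V, edges):
    """Attention evaluated only on the given (i, j) edge list, so the
    forward cost is O(|edges|) rather than O(n^2)."""
    n, d = Q.shape
    src, dst = edges[:, 0], edges[:, 1]
    scores = (Q[src] * K[dst]).sum(-1) / np.sqrt(d)  # one score per edge
    # softmax normalization per query node, restricted to its edges;
    # subtracting a global constant leaves the softmax unchanged
    weights = np.exp(scores - scores.max())
    out = np.zeros_like(V)
    denom = np.zeros(n)
    np.add.at(denom, src, weights)
    np.add.at(out, src, weights[:, None] * V[dst])
    return out / np.maximum(denom, 1e-9)[:, None]
```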

Knowledge Unlearning for Mitigating Privacy Risks in Language Models

1 code implementation • 4 Oct 2022 • Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo

Pretrained Language Models (LMs) memorize a vast amount of knowledge during initial pretraining, including information that may violate the privacy of personal lives and identities.

Language Modelling

Grouping-matrix based Graph Pooling with Adaptive Number of Clusters

no code implementations • 7 Sep 2022 • Sung Moon Ko, Sungjun Cho, Dae-Woong Jeong, Sehui Han, Moontae Lee, Honglak Lee

Conventional methods ask users to specify an appropriate number of clusters as a hyperparameter, then assume that all input graphs share the same number of clusters.

Binary Classification · Molecular Property Prediction +1

Pure Transformers are Powerful Graph Learners

1 code implementation • 6 Jul 2022 • Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, Seunghoon Hong

We show that standard Transformers without graph-specific modifications can lead to promising results in graph learning both in theory and practice.

Graph Learning · Graph Regression +1
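
A rough sketch of the core idea as I read it (assumptions: learned node identifiers and a two-layer encoder are illustrative choices, not the paper's exact configuration): treat every node and every edge as a plain token, tag each with a type embedding and node-identifier features, and feed the sequence to an off-the-shelf Transformer.

```python
import torch
import torch.nn as nn

class GraphAsTokens(nn.Module):
    """Sketch: feed nodes and edges to a vanilla Transformer as plain
    tokens, with type embeddings and node-identifier features standing
    in for graph structure (no graph-specific attention)."""
    def __init__(self, feat_dim, d_model=128, n_ids=16, max_nodes=1024):
        super().__init__()
        self.ids = nn.Parameter(torch.randn(max_nodes, n_ids))  # node identifiers
        self.proj = nn.Linear(feat_dim + 2 * n_ids, d_model)
        self.type_emb = nn.Embedding(2, d_model)                # 0 = node, 1 = edge
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x_nodes, x_edges, edge_index):
        n = x_nodes.size(0)
        src, dst = edge_index                                   # (2, num_edges)
        node_tok = torch.cat([x_nodes, self.ids[:n], self.ids[:n]], dim=-1)
        edge_tok = torch.cat([x_edges, self.ids[src], self.ids[dst]], dim=-1)
        tokens = self.proj(torch.cat([node_tok, edge_tok], dim=0))
        types = torch.cat([torch.zeros(n, dtype=torch.long),
                           torch.ones(edge_tok.size(0), dtype=torch.long)])
        return self.encoder((tokens + self.type_emb(types)).unsqueeze(0))
```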

Towards More Objective Evaluation of Class Incremental Learning: Representation Learning Perspective

no code implementations • 16 Jun 2022 • Sungmin Cha, Jihwan Kwak, Dongsub Shim, Hyunwoo Kim, Moontae Lee, Honglak Lee, Taesup Moon

While the common method for evaluating CIL algorithms is based on average test accuracy for all learned classes, we argue that maximizing accuracy alone does not necessarily lead to effective CIL algorithms.

Class Incremental Learning +3

Few-shot Reranking for Multi-hop QA via Language Model Prompting

2 code implementations • 25 May 2022 • Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang

To alleviate the need for a large number of labeled question-document pairs for retriever training, we propose PromptRank, which relies on prompting large language models for multi-hop path reranking.

Open-Domain Question Answering · Passage Re-Ranking +2
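
A minimal sketch of the prompting-based reranking idea under my reading of the abstract (not the released code): a candidate passage path is scored by the likelihood the LM assigns to the question after reading the path. `lm_logprob` is a hypothetical helper.

```python
# Sketch: reranking multi-hop retrieval paths by LM prompting.
# `lm_logprob(prompt, continuation)` is a hypothetical helper returning
# log p(continuation | prompt) under a large language model.

def rerank_paths(question, paths, lm_logprob):
    """Score each candidate passage path by the likelihood the LM assigns
    to the question given the path, then return paths best-first."""
    def score(path):
        prompt = "\n".join(p["text"] for p in path) + "\nQuestion:"
        return lm_logprob(prompt, " " + question)
    return sorted(paths, key=score, reverse=True)
```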

Rebalancing Batch Normalization for Exemplar-based Class-Incremental Learning

no code implementations • CVPR 2023 • Sungmin Cha, Sungjun Cho, Dasol Hwang, Sunwon Hong, Moontae Lee, Taesup Moon

The main reason their method is ineffective is that it does not fully address the data imbalance issue, especially when computing the gradients for learning the affine transformation parameters of BN.

Class Incremental Learning +1
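
Not the paper's method, just a tiny illustration of the underlying problem: when exemplar-based class-incremental learning mixes a few old-class samples with many new-class samples in a mini-batch, the BN statistics (and the gradients flowing through them) are dominated by the new classes.

```python
import numpy as np

rng = np.random.default_rng(0)
old = rng.normal(loc=0.0, size=(4, 8))    # few exemplars of old classes
new = rng.normal(loc=3.0, size=(60, 8))   # many samples of new classes
batch = np.vstack([old, new])

# BN statistics computed on the imbalanced batch are pulled toward the
# new classes, so normalized old-class activations drift off-center.
mu, sigma = batch.mean(0), batch.std(0)
print("batch mean per feature ~", mu.round(2))   # ~2.8, not the class midpoint 1.5
print("old-class mean after BN ~", ((old - mu) / sigma).mean().round(2))  # strongly negative
```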

On-the-Fly Rectification for Robust Large-Vocabulary Topic Inference

no code implementations • 12 Nov 2021 • Moontae Lee, Sungjun Cho, Kun Dong, David Mimno, David Bindel

Across many data domains, co-occurrence statistics about the joint appearance of objects are powerfully informative.

Community Detection

Practical Correlated Topic Modeling and Analysis via the Rectified Anchor Word Algorithm

no code implementations • IJCNLP 2019 • Moontae Lee, Sungjun Cho, David Bindel, David Mimno

Despite great scalability on large data and their ability to capture correlations between topics, spectral topic models have not been widely used due to their unreliability on real data and the lack of practical implementations.

Topic Models

Prior-aware Dual Decomposition: Document-specific Topic Inference for Spectral Topic Models

no code implementations • 19 Nov 2017 • Moontae Lee, David Bindel, David Mimno

Spectral topic modeling algorithms operate on matrices/tensors of word co-occurrence statistics to learn topic-specific word distributions.

Topic Models
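
For concreteness, a minimal sketch of the statistic such algorithms consume: a word-word co-occurrence matrix accumulated from documents given as lists of word ids. This is a simplified construction; real implementations rescale the counts and, in this line of work, rectify the resulting matrix.

```python
import numpy as np

def cooccurrence(docs, vocab_size):
    """Accumulate joint word-appearance counts C[i, j] over documents;
    spectral topic models factorize (a rectified form of) this matrix."""
    C = np.zeros((vocab_size, vocab_size))
    for doc in docs:  # doc: list of word ids
        counts = np.bincount(doc, minlength=vocab_size).astype(float)
        C += np.outer(counts, counts) - np.diag(counts)  # drop self-pairs
    return C / C.sum()

docs = [[0, 1, 1, 2], [2, 3, 3, 0]]
print(cooccurrence(docs, vocab_size=4).round(3))
```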

Low-dimensional Embeddings for Interpretable Anchor-based Topic Inference

no code implementations • EMNLP 2014 • Moontae Lee, David Mimno

The anchor words algorithm performs provably efficient topic model inference by finding an approximate convex hull in a high-dimensional word co-occurrence space.
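
A compact sketch of that convex-hull intuition, using the generic greedy anchor-selection step (farthest-point search against the span of already chosen rows). This is the textbook anchor-words routine, not this paper's low-dimensional variant.

```python
import numpy as np

def greedy_anchors(Q, k):
    """Pick k rows of the row-normalized co-occurrence matrix Q that
    approximately span its convex hull: repeatedly take the row farthest
    from the span of the rows chosen so far."""
    Q = Q / Q.sum(1, keepdims=True)              # each row: p(word2 | word1)
    anchors = [int(np.argmax((Q ** 2).sum(1)))]  # farthest row from the origin
    basis = []
    for _ in range(k - 1):
        v = Q[anchors[-1]].copy()
        for b in basis:                          # Gram-Schmidt against chosen rows
            v -= (v @ b) * b
        v /= np.linalg.norm(v)
        basis.append(v)
        resid = Q - sum((Q @ b)[:, None] * b for b in basis)
        anchors.append(int(np.argmax((resid ** 2).sum(1))))
    return anchors
```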

Robust Spectral Inference for Joint Stochastic Matrix Factorization

no code implementations • NeurIPS 2015 • Moontae Lee, David Bindel, David Mimno

Spectral inference provides fast algorithms and provable optimality for latent topic analysis.

Beyond Exchangeability: The Chinese Voting Process

no code implementations • NeurIPS 2016 • Moontae Lee, Seok Hyun Jin, David Mimno

Many online communities present user-contributed responses such as reviews of products and answers to questions.

Basic Reasoning with Tensor Product Representations

no code implementations • 12 Jan 2016 • Paul Smolensky, Moontae Lee, Xiaodong He, Wen-tau Yih, Jianfeng Gao, Li Deng

In this paper we present the initial development of a general theory for mapping inference in predicate logic to computation over Tensor Product Representations (TPRs; Smolensky (1990), Smolensky & Legendre (2006)).

Question Answering
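
For a feel of what computing over TPRs means, here is a minimal sketch of the basic bind/unbind operations from Smolensky (1990): fillers are bound to roles by outer products, summed into a single tensor, and recovered by contracting with (orthonormal) role vectors. The example relation loves(john, mary) is my own illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
fillers = {"john": rng.normal(size=4), "mary": rng.normal(size=4)}
roles, _ = np.linalg.qr(rng.normal(size=(4, 2)))  # orthonormal role vectors
agent, patient = roles[:, 0], roles[:, 1]

# Bind: represent loves(john, mary) as a sum of filler (x) role outer products.
T = np.outer(fillers["john"], agent) + np.outer(fillers["mary"], patient)

# Unbind: contracting with a role vector recovers the bound filler
# exactly, because the roles are orthonormal.
print(np.allclose(T @ agent, fillers["john"]))    # True
print(np.allclose(T @ patient, fillers["mary"]))  # True
```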
