Search Results for author: Moontae Lee

Found 33 papers, 10 papers with code

Pure Transformers are Powerful Graph Learners

1 code implementation • 6 Jul 2022 • Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, Seunghoon Hong

We show that standard Transformers without graph-specific modifications can lead to promising results in graph learning both in theory and practice.

Graph Learning • Graph Regression • +1
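
The core idea, as described, is to serialize a graph into plain tokens — one token per node and per edge, augmented with node identifiers and type embeddings — and feed them to an unmodified Transformer encoder. A minimal PyTorch sketch of that tokenization follows; the dimensions, random node identifiers, and mean-pooled readout are illustrative assumptions, not the paper's exact recipe.

    # Minimal sketch: treat nodes and edges of a graph as plain tokens for a
    # standard Transformer encoder (no graph-specific attention or message passing).
    # The featurization below is an illustrative assumption, not the paper's exact one.
    import torch
    import torch.nn as nn

    d_model = 64
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
        num_layers=2,
    )

    num_nodes = 5
    edges = [(0, 1), (1, 2), (2, 3), (3, 4)]       # toy graph
    node_feat = torch.randn(num_nodes, d_model)     # per-node input features
    node_id = torch.randn(num_nodes, d_model)       # node identifiers (random here)
    type_emb = nn.Embedding(2, d_model)             # 0 = node token, 1 = edge token

    # Node tokens: feature + own identifier + "node" type embedding.
    node_tokens = node_feat + node_id + type_emb(torch.zeros(num_nodes, dtype=torch.long))
    # Edge tokens: sum of the two endpoint identifiers + "edge" type embedding.
    edge_tokens = torch.stack([node_id[u] + node_id[v] for u, v in edges]) \
        + type_emb(torch.ones(len(edges), dtype=torch.long))

    tokens = torch.cat([node_tokens, edge_tokens], dim=0).unsqueeze(0)  # (1, N+E, d)
    out = encoder(tokens)                   # ordinary self-attention over all tokens
    graph_repr = out.mean(dim=1)            # e.g. mean-pool for a graph-level head
    print(graph_repr.shape)                 # torch.Size([1, 64])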

Exploring the Benefits of Training Expert Language Models over Instruction Tuning

2 code implementations • 7 Feb 2023 • Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo

Recently, Language Models (LMs) instruction-tuned on multiple tasks, also known as multitask-prompted fine-tuning (MT), have shown the capability to generalize to unseen tasks.

Common Sense Reasoning • Coreference Resolution • +4

Knowledge Unlearning for Mitigating Privacy Risks in Language Models

1 code implementation • 4 Oct 2022 • Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo

Pretrained Language Models (LMs) memorize a vast amount of knowledge during initial pretraining, including information that may violate the privacy of personal lives and identities.

Ranked #3 on Language Modelling on The Pile (Test perplexity metric)

Language Modelling
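
In this line of work, unlearning a target sequence is typically realized as gradient ascent on the language-modeling loss for that sequence, i.e., the usual objective with its sign flipped. A minimal sketch under that assumption follows; the model choice, learning rate, and number of steps are placeholders rather than the paper's configuration.

    # Sketch of unlearning a target sequence by gradient ascent on the LM loss.
    # The general recipe (negate the usual objective) is assumed here; this is not
    # a reproduction of the paper's exact training setup or hyperparameters.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # illustrative; any causal LM works
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

    target_text = "the private string we want the model to forget"
    batch = tok(target_text, return_tensors="pt")

    model.train()
    for _ in range(3):                              # a few ascent steps
        out = model(**batch, labels=batch["input_ids"])
        loss = -out.loss                            # negate: ascend instead of descend
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(f"LM loss on target: {out.loss.item():.3f}")  # should increase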

GRACE: Discriminator-Guided Chain-of-Thought Reasoning

1 code implementation • 24 May 2023 • Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang

To address this issue, we propose Guiding chain-of-thought ReAsoning with a CorrectnEss Discriminator (GRACE), a stepwise decoding approach that steers the decoding process towards producing correct reasoning steps.

GSM8K • Math
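
As described above, GRACE interleaves generation with a trained correctness discriminator: the LM proposes candidate next reasoning steps, and the discriminator's scores steer which step is kept. The sketch below shows only that control flow; propose_steps and score_step are hypothetical placeholders for the sampler and the discriminator, and greedy selection over candidates is a simplification of the paper's scoring scheme.

    # Control-flow sketch of discriminator-guided stepwise decoding.
    # `propose_steps` and `score_step` are hypothetical stand-ins for an LM sampler
    # and a trained correctness discriminator.
    from typing import Callable, List

    def guided_decode(question: str,
                      propose_steps: Callable[[str, int], List[str]],
                      score_step: Callable[[str, str], float],
                      num_candidates: int = 8,
                      max_steps: int = 10) -> List[str]:
        solution: List[str] = []
        context = question
        for _ in range(max_steps):
            candidates = propose_steps(context, num_candidates)   # sample next steps
            if not candidates:
                break
            best = max(candidates, key=lambda s: score_step(context, s))
            solution.append(best)
            context = context + "\n" + best                       # extend the chain
            if "answer" in best.lower():                          # toy stop condition
                break
        return solution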

Few-shot Reranking for Multi-hop QA via Language Model Prompting

2 code implementations • 25 May 2022 • Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang

To alleviate the need for a large number of labeled question-document pairs for retriever training, we propose PromptRank, which relies on prompting large language models for multi-hop path reranking.

Open-Domain Question Answering • Passage Re-Ranking • +2
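
One way to realize the prompting-based reranking described above — an assumption for illustration, not necessarily PromptRank's exact prompt or scoring rule — is to score each candidate document path by the LM's log-likelihood of the question conditioned on that path:

    # Sketch: rerank candidate document paths by how likely an LM finds the question
    # given the path. The prompt template and the log-likelihood scoring rule are
    # illustrative assumptions.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")          # illustrative model choice
    lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    def path_score(path_docs, question):
        prompt = "Documents: " + " ".join(path_docs) + "\nQuestion:"
        full = prompt + " " + question
        prompt_ids = tok(prompt, return_tensors="pt")["input_ids"]
        full_ids = tok(full, return_tensors="pt")["input_ids"]
        with torch.no_grad():
            logits = lm(full_ids).logits
        # Log-probability of each next token, then keep only the question tokens.
        log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
        targets = full_ids[:, 1:]
        token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
        question_lp = token_lp[:, prompt_ids.shape[1] - 1:]   # tokens after the prompt
        return question_lp.sum().item()

    paths = [["Doc about Paris.", "Doc about France."],
             ["Doc about cooking.", "Doc about pans."]]
    question = "What is the capital of France?"
    ranked = sorted(paths, key=lambda p: path_score(p, question), reverse=True)
    print(ranked[0])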

Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost

1 code implementation • 27 Oct 2022 • Sungjun Cho, Seonwoo Min, Jinwoo Kim, Moontae Lee, Honglak Lee, Seunghoon Hong

The forward and backward costs are thus linear in the number of edges, which each attention head can choose flexibly based on the input.

Stochastic Block Model
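
The snippet above refers to attention evaluated only on a sparse set of query-key pairs, so the cost scales with the number of edges rather than with the square of the sequence length. The sketch below computes attention over a given edge list with scatter operations; how each head chooses its edges (in the paper, via a stochastic-block-model formulation) is outside this sketch.

    # Sketch of attention restricted to a sparse set of query-key edges, with cost
    # linear in the number of edges. The edge set is simply given here.
    import torch

    def sparse_attention(q, k, v, edges):
        """q, k, v: (T, d); edges: LongTensor (2, E) of (query_idx, key_idx) pairs."""
        qi, ki = edges
        d = q.shape[-1]
        scores = (q[qi] * k[ki]).sum(-1) / d ** 0.5          # (E,) one score per edge
        scores = scores - scores.max()                        # numerical stability
        exp = scores.exp()
        # Per-query softmax over that query's edges, via scatter-add denominators.
        denom = torch.zeros(q.shape[0]).index_add_(0, qi, exp)
        weights = exp / denom[qi].clamp_min(1e-12)            # (E,)
        out = torch.zeros_like(q).index_add_(0, qi, weights.unsqueeze(-1) * v[ki])
        return out

    T, d = 6, 8
    q, k, v = torch.randn(T, d), torch.randn(T, d), torch.randn(T, d)
    edges = torch.tensor([[0, 0, 1, 2, 3, 4, 5],
                          [0, 1, 1, 2, 2, 4, 5]])             # toy sparsity pattern
    print(sparse_attention(q, k, v, edges).shape)             # torch.Size([6, 8])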

YTCommentQA: Video Question Answerability in Instructional Videos

1 code implementation • 30 Jan 2024 • Saelyne Yang, Sunghyun Park, Yunseok Jang, Moontae Lee

Experiments with answerability classification tasks demonstrate the complexity of YTCommentQA and emphasize the need to comprehend the combined role of visual and script information in video reasoning.

Question Answering • Video Question Answering

Exploring Demonstration Ensembling for In-context Learning

1 code implementation • 17 Aug 2023 • Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang

The standard approach for ICL is to prompt the LM with concatenated demonstrations followed by the test input.

In-Context Learning
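
Demonstration ensembling replaces that single concatenated prompt with several smaller ones: the demonstrations are split into buckets, the LM is queried once per bucket, and the per-bucket label distributions are combined. A minimal sketch of that control flow follows; lm_label_probs is a hypothetical helper returning class probabilities for one prompt, and uniform averaging is only one possible combination rule.

    # Sketch of demonstration ensembling for in-context learning: split the demos
    # into buckets, prompt once per bucket, and average the predicted label
    # distributions. `lm_label_probs` is a hypothetical helper.
    from typing import Callable, Dict, List, Tuple

    def ensemble_predict(demos: List[Tuple[str, str]],
                         test_input: str,
                         labels: List[str],
                         lm_label_probs: Callable[[str, List[str]], Dict[str, float]],
                         num_buckets: int = 3) -> str:
        buckets = [demos[i::num_buckets] for i in range(num_buckets)]
        avg = {label: 0.0 for label in labels}
        for bucket in buckets:
            prompt = "".join(f"Input: {x}\nLabel: {y}\n\n" for x, y in bucket)
            prompt += f"Input: {test_input}\nLabel:"
            probs = lm_label_probs(prompt, labels)     # one LM call per bucket
            for label in labels:
                avg[label] += probs[label] / num_buckets
        return max(avg, key=avg.get)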

From Heuristic to Analytic: Cognitively Motivated Strategies for Coherent Physical Commonsense Reasoning

1 code implementation • 24 Oct 2023 • Zheyuan Zhang, Shane Storks, Fengyuan Hu, Sungryull Sohn, Moontae Lee, Honglak Lee, Joyce Chai

We incorporate these interlinked dual processes in fine-tuning and in-context learning with PLMs, applying them to two language understanding tasks that require coherent physical commonsense reasoning.

In-Context Learning • Physical Commonsense Reasoning

Prior-aware Dual Decomposition: Document-specific Topic Inference for Spectral Topic Models

no code implementations • 19 Nov 2017 • Moontae Lee, David Bindel, David Mimno

Spectral topic modeling algorithms operate on matrices/tensors of word co-occurrence statistics to learn topic-specific word distributions.

Topic Models

Low-dimensional Embeddings for Interpretable Anchor-based Topic Inference

no code implementations • EMNLP 2014 • Moontae Lee, David Mimno

The anchor words algorithm performs provably efficient topic model inference by finding an approximate convex hull in a high-dimensional word co-occurrence space.
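
The approximate convex hull mentioned above is commonly found with a greedy farthest-point search over rows of the word co-occurrence matrix: repeatedly pick the word whose row is farthest from the span of the anchors chosen so far. A small NumPy sketch of that greedy step follows; it omits the low-dimensional projections that are the subject of this paper.

    # Greedy anchor selection: pick rows of the (row-normalized) co-occurrence
    # matrix that successively maximize distance from the span of previously
    # chosen anchors. This sketches the classic anchor-finding step only.
    import numpy as np

    def find_anchors(Q, k):
        """Q: (V, V) word co-occurrence counts; k: number of anchors/topics."""
        Q_bar = Q / Q.sum(axis=1, keepdims=True)      # row-conditional distributions
        residual = Q_bar.copy()
        anchors = []
        for _ in range(k):
            idx = int(np.argmax(np.linalg.norm(residual, axis=1)))
            anchors.append(idx)
            basis = residual[idx] / np.linalg.norm(residual[idx])
            residual -= np.outer(residual @ basis, basis)   # project out new direction
        return anchors

    rng = np.random.default_rng(0)
    Q = rng.random((50, 50)) + np.eye(50)             # toy co-occurrence matrix
    print(find_anchors(Q, k=5))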

Robust Spectral Inference for Joint Stochastic Matrix Factorization

no code implementations • NeurIPS 2015 • Moontae Lee, David Bindel, David Mimno

Spectral inference provides fast algorithms and provable optimality for latent topic analysis.

Beyond Exchangeability: The Chinese Voting Process

no code implementations • NeurIPS 2016 • Moontae Lee, Seok Hyun Jin, David Mimno

Many online communities present user-contributed responses such as reviews of products and answers to questions.

Position

Basic Reasoning with Tensor Product Representations

no code implementations • 12 Jan 2016 • Paul Smolensky, Moontae Lee, Xiaodong He, Wen-tau Yih, Jianfeng Gao, Li Deng

In this paper we present the initial development of a general theory for mapping inference in predicate logic to computation over Tensor Product Representations (TPRs; Smolensky (1990), Smolensky & Legendre (2006)).

Question Answering

Practical Correlated Topic Modeling and Analysis via the Rectified Anchor Word Algorithm

no code implementations • IJCNLP 2019 • Moontae Lee, Sungjun Cho, David Bindel, David Mimno

Despite great scalability on large data and their ability to understand correlations between topics, spectral topic models have not been widely used due to their lack of reliability on real data and the absence of practical implementations.

Topic Models

On-the-Fly Rectification for Robust Large-Vocabulary Topic Inference

no code implementations • 12 Nov 2021 • Moontae Lee, Sungjun Cho, Kun Dong, David Mimno, David Bindel

Across many data domains, co-occurrence statistics about the joint appearance of objects are powerfully informative.

Community Detection

Rebalancing Batch Normalization for Exemplar-based Class-Incremental Learning

no code implementations • CVPR 2023 • Sungmin Cha, Sungjun Cho, Dasol Hwang, Sunwon Hong, Moontae Lee, Taesup Moon

The main reason for the ineffectiveness of their method lies in not fully addressing the data imbalance issue, especially in computing the gradients for learning the affine transformation parameters of BN.

Class Incremental Learning • Incremental Learning

Towards More Objective Evaluation of Class Incremental Learning: Representation Learning Perspective

no code implementations • 16 Jun 2022 • Sungmin Cha, Jihwan Kwak, Dongsub Shim, Hyunwoo Kim, Moontae Lee, Honglak Lee, Taesup Moon

While the common method for evaluating CIL algorithms is based on average test accuracy for all learned classes, we argue that maximizing accuracy alone does not necessarily lead to effective CIL algorithms.

Class Incremental Learning • Incremental Learning • +2
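
One concrete way to evaluate CIL methods from the representation-learning perspective argued for above is linear probing: freeze the continually learned backbone and measure how well a freshly trained linear head classifies all seen classes. The sketch below assumes that setup; the backbone, feature dimension, and data loaders are placeholders.

    # Sketch of a linear-probe evaluation of a continually learned representation:
    # freeze the backbone, fit only a linear head on all seen classes, report accuracy.
    import torch
    import torch.nn as nn

    def linear_probe_accuracy(backbone, train_loader, test_loader,
                              feat_dim, num_classes, epochs=5, lr=1e-2):
        backbone.eval()                                   # freeze the representation
        head = nn.Linear(feat_dim, num_classes)
        opt = torch.optim.SGD(head.parameters(), lr=lr, momentum=0.9)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for x, y in train_loader:
                with torch.no_grad():
                    feats = backbone(x)                   # no gradient into backbone
                opt.zero_grad()
                loss_fn(head(feats), y).backward()
                opt.step()
        correct = total = 0
        with torch.no_grad():
            for x, y in test_loader:
                pred = head(backbone(x)).argmax(dim=1)
                correct += (pred == y).sum().item()
                total += y.numel()
        return correct / total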

Grouping-matrix based Graph Pooling with Adaptive Number of Clusters

no code implementations • 7 Sep 2022 • Sung Moon Ko, Sungjun Cho, Dae-Woong Jeong, Sehui Han, Moontae Lee, Honglak Lee

Conventional methods ask users to specify an appropriate number of clusters as a hyperparameter, then assume that all input graphs share the same number of clusters.

Binary Classification • Molecular Property Prediction • +2

Transferring Pre-trained Multimodal Representations with Cross-modal Similarity Matching

no code implementations • 7 Jan 2023 • Byoungjip Kim, Sungik Choi, Dasol Hwang, Moontae Lee, Honglak Lee

Despite surprising performance on zero-shot transfer, pre-training a large-scale multimodal model is often prohibitive as it requires a huge amount of data and computing resources.

Language Modelling • Self-Supervised Learning
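
A plausible reading of cross-modal similarity matching — stated here as an assumption, not the paper's exact objective — is that the student is trained so that its similarity distribution over a batch of reference embeddings matches the pre-trained multimodal teacher's. A rough sketch of such a loss:

    # Rough sketch of a cross-modal similarity-matching loss: make the student's
    # softmax similarity distribution over a reference batch match the teacher's.
    # Temperature, normalization, and the KL form are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def similarity_matching_loss(student_emb, teacher_emb, ref_student, ref_teacher,
                                 temperature=0.1):
        """student/teacher_emb: (N, d); ref_student/ref_teacher: (R, d)."""
        s = F.normalize(student_emb, dim=-1) @ F.normalize(ref_student, dim=-1).T
        t = F.normalize(teacher_emb, dim=-1) @ F.normalize(ref_teacher, dim=-1).T
        log_p_student = F.log_softmax(s / temperature, dim=-1)
        p_teacher = F.softmax(t / temperature, dim=-1)
        return F.kl_div(log_p_student, p_teacher, reduction="batchmean")

    student_emb, teacher_emb = torch.randn(8, 32), torch.randn(8, 64)
    ref_student, ref_teacher = torch.randn(16, 32), torch.randn(16, 64)
    print(similarity_matching_loss(student_emb, teacher_emb, ref_student, ref_teacher))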

Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers

no code implementations • 27 Jan 2023 • Sungmin Cha, Sungjun Cho, Dasol Hwang, Honglak Lee, Taesup Moon, Moontae Lee

Since the recent advent of regulations for data protection (e.g., the General Data Protection Regulation), there has been increasing demand for deleting information learned from sensitive data in pre-trained models without retraining from scratch.

Image Classification

Multimodal Subtask Graph Generation from Instructional Videos

no code implementations • 17 Feb 2023 • Yunseok Jang, Sungryull Sohn, Lajanugen Logeswaran, Tiange Luo, Moontae Lee, Honglak Lee

Real-world tasks consist of multiple inter-dependent subtasks (e.g., a dirty pan needs to be washed before it can be used for cooking).

Graph Generation

When to Read Documents or QA History: On Unified and Selective Open-domain QA

no code implementations • 7 Jun 2023 • Kyungjae Lee, Sang-eun Han, Seung-won Hwang, Moontae Lee

This paper studies the problem of open-domain question answering, with the aim of answering a diverse range of questions leveraging knowledge resources.

Natural Questions • Open-Domain Question Answering • +2

3D Denoisers are Good 2D Teachers: Molecular Pretraining via Denoising and Cross-Modal Distillation

no code implementations • 8 Sep 2023 • Sungjun Cho, Dae-Woong Jeong, Sung Moon Ko, Jinwoo Kim, Sehui Han, Seunghoon Hong, Honglak Lee, Moontae Lee

Pretraining molecular representations from large unlabeled data is essential for molecular property prediction due to the high cost of obtaining ground-truth labels.

Denoising • Knowledge Distillation • +4

Code Models are Zero-shot Precondition Reasoners

no code implementations • 16 Nov 2023 • Lajanugen Logeswaran, Sungryull Sohn, Yiwei Lyu, Anthony Zhe Liu, Dong-Ki Kim, Dongsub Shim, Moontae Lee, Honglak Lee

One of the fundamental skills required for an agent acting in an environment to complete tasks is the ability to understand what actions are plausible at any given point.

Decision Making

Projection Regret: Reducing Background Bias for Novelty Detection via Diffusion Models

no code implementations • NeurIPS 2023 • Sungik Choi, Hankook Lee, Honglak Lee, Moontae Lee

Based on our observation that diffusion models can project any sample to an in-distribution sample with similar background information, we propose Projection Regret (PR), an efficient novelty detection method that mitigates the bias of non-semantic information.

Novelty Detection • Perceptual Distance
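
Reading the snippet above literally, one can sketch the score as follows: project the input through a partial noise-and-denoise pass of the diffusion model, measure the perceptual distance to that projection, and subtract the distance obtained by projecting the projection again so that background-only similarity cancels. Both helpers below (diffusion_project, perceptual_distance) are hypothetical placeholders, and the two-term form is an interpretation of the abstract rather than the paper's definition.

    # Sketch of a projection-based novelty score with a background-bias correction.
    # `diffusion_project` (partial noising + denoising with a pretrained diffusion
    # model) and `perceptual_distance` (e.g. an LPIPS-style metric) are hypothetical.
    from typing import Callable
    import torch

    def projection_regret_score(
            x: torch.Tensor,
            diffusion_project: Callable[[torch.Tensor], torch.Tensor],
            perceptual_distance: Callable[[torch.Tensor, torch.Tensor], float]) -> float:
        proj = diffusion_project(x)        # pull x toward the training distribution
        proj2 = diffusion_project(proj)    # project an already in-distribution sample
        # First term: how far x is from its projection (large for novel inputs).
        # Second term: how far projections drift on their own, subtracted to reduce
        # the bias toward non-semantic (background) information.
        return perceptual_distance(x, proj) - perceptual_distance(proj, proj2)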

Reinforcement Learning from Reflective Feedback (RLRF): Aligning and Improving LLMs via Fine-Grained Self-Reflection

no code implementations • 21 Mar 2024 • Kyungjae Lee, Dasol Hwang, Sunghyun Park, Youngsoo Jang, Moontae Lee

Despite the promise of RLHF in aligning LLMs with human preferences, it often leads to superficial alignment, prioritizing stylistic changes over improving downstream performance of LLMs.

Mathematical Reasoning
