Search Results for author: Lajanugen Logeswaran

Found 26 papers, 16 papers with code

Auto-Intent: Automated Intent Discovery and Self-Exploration for Large Language Model Web Agents

no code implementations 29 Oct 2024 Jaekyeom Kim, Dong-Ki Kim, Lajanugen Logeswaran, Sungryull Sohn, Honglak Lee

In this paper, we introduce Auto-Intent, a method to adapt a pre-trained large language model (LLM) as an agent for a target domain without direct fine-tuning, where we empirically focus on web navigation tasks.

Decision Making Intent Discovery +2

Understanding the Capabilities and Limitations of Large Language Models for Cultural Commonsense

no code implementations 7 May 2024 Siqi Shen, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Soujanya Poria, Rada Mihalcea

Large language models (LLMs) have demonstrated substantial commonsense understanding through numerous benchmark evaluations.

Small Language Models Need Strong Verifiers to Self-Correct Reasoning

1 code implementation 26 Apr 2024 Yunxiang Zhang, Muhammad Khalifa, Lajanugen Logeswaran, Jaekyeom Kim, Moontae Lee, Honglak Lee, Lu Wang

Self-correction has emerged as a promising solution to boost the reasoning performance of large language models (LLMs), where LLMs refine their solutions using self-generated critiques that pinpoint the errors.

Math
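The excerpt above describes self-correction as an LM refining its own solution using self-generated critiques, with a verifier deciding which solutions to trust. As a minimal, illustrative sketch of that loop (not the paper's released code), the snippet below assumes three hypothetical hooks: `generate` (an LM solver that optionally accepts feedback), `critique` (an LM critic), and `verifier_accepts` (a trained verifier):

```python
# Minimal sketch of a verifier-gated self-correction loop. The three hooks are
# hypothetical stand-ins for an LM solver, an LM critic, and a trained verifier;
# they are not the paper's actual interfaces.
def self_correct(problem, generate, critique, verifier_accepts, max_rounds=3):
    solution = generate(problem)
    for _ in range(max_rounds):
        if verifier_accepts(problem, solution):
            return solution                      # keep solutions the verifier trusts
        feedback = critique(problem, solution)   # self-generated critique pinpointing errors
        solution = generate(problem, feedback=feedback)
    return solution
```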

TOD-Flow: Modeling the Structure of Task-Oriented Dialogues

1 code implementation 7 Dec 2023 Sungryull Sohn, Yiwei Lyu, Anthony Liu, Lajanugen Logeswaran, Dong-Ki Kim, Dongsub Shim, Honglak Lee

Our TOD-Flow graph learns what a model can, should, and should not predict, effectively reducing the search space and providing a rationale for the model's prediction.

Dialog Act Classification Response Generation
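The excerpt above says the TOD-Flow graph constrains what a model can, should, and should not predict, which shrinks the search space for the next dialog act. Below is a rough, hedged sketch of that kind of constraint filtering; the graph encoding (per-act `can` and `should_not` sets over completed acts) is an illustrative assumption, not the paper's actual graph format:

```python
# Illustrative sketch: filter candidate next dialog acts with graph constraints.
# `graph[act]` is assumed to hold a "can" set (preconditions) and a "should_not"
# set (blocking acts); this encoding is hypothetical.
def allowed_next_acts(graph, completed_acts, all_acts):
    allowed = set()
    for act in all_acts:
        node = graph[act]
        preconditions_met = node["can"].issubset(completed_acts)
        blocked = bool(node["should_not"] & completed_acts)
        if preconditions_met and not blocked:
            allowed.add(act)
    return allowed
```

A dialog model's candidate predictions would then be restricted to `allowed_next_acts`, which is one simple way to realize the "reduced search space" described in the excerpt.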

Code Models are Zero-shot Precondition Reasoners

no code implementations 16 Nov 2023 Lajanugen Logeswaran, Sungryull Sohn, Yiwei Lyu, Anthony Zhe Liu, Dong-Ki Kim, Dongsub Shim, Moontae Lee, Honglak Lee

One of the fundamental skills required for an agent acting in an environment to complete tasks is the ability to understand what actions are plausible at any given point.

Decision Making Sequential Decision Making

Exploring Demonstration Ensembling for In-context Learning

1 code implementation 17 Aug 2023 Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang

The standard approach for ICL is to prompt the LM with concatenated demonstrations followed by the test input.

In-Context Learning
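The excerpt above states the standard ICL recipe: concatenate the demonstrations and append the test input. The sketch below shows that recipe and, for contrast, a simple ensembling variant that splits the demonstrations into buckets and averages label log-probabilities across prompts; the `lm_logprobs` hook and the bucketing scheme are illustrative assumptions, not the paper's exact method:

```python
# Sketch of standard ICL prompting plus a simple demonstration-ensembling variant.
# `lm_logprobs(prompt, labels)` is a hypothetical API returning per-label
# log-probabilities from a language model.
def format_demo(x, y):
    return f"Input: {x}\nLabel: {y}\n\n"

def standard_icl_prompt(demos, test_input):
    # Standard ICL: concatenated demonstrations followed by the test input.
    return "".join(format_demo(x, y) for x, y in demos) + f"Input: {test_input}\nLabel:"

def ensembled_prediction(demos, test_input, labels, lm_logprobs, n_buckets=3):
    # Illustrative ensembling: prompt once per demonstration bucket and average.
    buckets = [demos[i::n_buckets] for i in range(n_buckets)]
    scores = {label: 0.0 for label in labels}
    for bucket in buckets:
        prompt = standard_icl_prompt(bucket, test_input)
        for label, logprob in lm_logprobs(prompt, labels).items():
            scores[label] += logprob / n_buckets
    return max(scores, key=scores.get)
```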

GRACE: Discriminator-Guided Chain-of-Thought Reasoning

1 code implementation 24 May 2023 Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang

To address this issue, we propose Guiding chain-of-thought ReAsoning with a CorrectnEss Discriminator (GRACE), a stepwise decoding approach that steers the decoding process towards producing correct reasoning steps.

GSM8K Math
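The excerpt above describes GRACE as stepwise decoding steered toward correct reasoning steps by a correctness discriminator. The following is a minimal sketch of that idea under two assumed hooks, `sample_next_steps` (an LM step sampler) and `correctness_score` (a step-level discriminator); neither is the paper's actual interface:

```python
# Minimal sketch of discriminator-guided stepwise decoding.
# `sample_next_steps(prefix, n)` and `correctness_score(prefix, step)` are
# hypothetical hooks standing in for an LM and a trained discriminator.
def guided_decode(question, sample_next_steps, correctness_score,
                  n_candidates=5, max_steps=8):
    solution_steps = []
    for _ in range(max_steps):
        prefix = question + "\n" + "\n".join(solution_steps)
        candidates = sample_next_steps(prefix, n=n_candidates)
        if not candidates:
            break
        # Steer decoding toward the candidate step the discriminator rates highest.
        best = max(candidates, key=lambda step: correctness_score(prefix, step))
        solution_steps.append(best)
        if "answer" in best.lower():
            break
    return solution_steps
```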

A Picture is Worth a Thousand Words: Language Models Plan from Pixels

no code implementations 16 Mar 2023 Anthony Z. Liu, Lajanugen Logeswaran, Sungryull Sohn, Honglak Lee

Planning is an important capability of artificial agents that perform long-horizon tasks in real-world environments.

Multimodal Subtask Graph Generation from Instructional Videos

no code implementations 17 Feb 2023 Yunseok Jang, Sungryull Sohn, Lajanugen Logeswaran, Tiange Luo, Moontae Lee, Honglak Lee

Real-world tasks consist of multiple inter-dependent subtasks (e.g., a dirty pan needs to be washed before it can be used for cooking).

Graph Generation

Exploring the Benefits of Training Expert Language Models over Instruction Tuning

2 code implementations 7 Feb 2023 Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo

Recently, Language Models (LMs) instruction-tuned on multiple tasks, also known as multitask-prompted fine-tuning (MT), have shown the capability to generalize to unseen tasks.

Common Sense Reasoning Coreference Resolution +4

Knowledge Unlearning for Mitigating Privacy Risks in Language Models

2 code implementations 4 Oct 2022 Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo

Pretrained Language Models (LMs) memorize a vast amount of knowledge during initial pretraining, including information that may violate the privacy of personal lives and identities.

Ranked #3 on Language Modelling on The Pile (Test perplexity metric)

Language Modelling

Few-shot Reranking for Multi-hop QA via Language Model Prompting

2 code implementations 25 May 2022 Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang

To alleviate the need for a large number of labeled question-document pairs for retriever training, we propose PromptRank, which relies on large language model prompting for multi-hop path reranking.

Open-Domain Question Answering Passage Re-Ranking +2
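The excerpt above describes reranking multi-hop retrieval paths by prompting a large language model rather than training a dedicated retriever. One hedged sketch of such a scheme is below: each candidate path is scored by the LM's log-probability of generating the question given an instruction and the path's passages. The `question_logprob` hook and the instruction text are illustrative assumptions, not the paper's prompt:

```python
# Sketch of few-shot path reranking via LM prompting. `question_logprob(prompt, q)`
# is a hypothetical function returning the LM log-probability of generating `q`
# conditioned on `prompt`.
def rerank_paths(question, candidate_paths, question_logprob,
                 instruction="Read the passages and ask a question they answer."):
    scored = []
    for path in candidate_paths:              # a path is a list of passage strings
        context = "\n\n".join(path)
        prompt = f"{instruction}\n\n{context}\n\nQuestion:"
        scored.append((question_logprob(prompt, question), path))
    # Higher likelihood of the question given the path => better-supported path.
    return [path for _, path in sorted(scored, key=lambda t: t[0], reverse=True)]
```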

Few-shot Sequence Learning with Transformers

no code implementations 17 Dec 2020 Lajanugen Logeswaran, Ann Lee, Myle Ott, Honglak Lee, Marc'Aurelio Ranzato, Arthur Szlam

In the simplest setting, we append a token to an input sequence which represents the particular task to be undertaken, and show that the embedding of this token can be optimized on the fly given few labeled examples.

Few-Shot Learning
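The excerpt above describes appending a task token to each input and optimizing that token's embedding on the fly from a few labeled examples. Below is a minimal PyTorch sketch of that idea; the module shapes, the frozen-encoder assumption, and the small readout head are illustrative choices, not the paper's implementation:

```python
# Minimal sketch: a learnable task-token embedding appended to each input
# sequence, trained on a few labeled examples while the pretrained transformer
# stays frozen. Shapes and readout are illustrative assumptions.
import torch
import torch.nn as nn

class TaskTokenAdapter(nn.Module):
    def __init__(self, pretrained_encoder, d_model, n_classes):
        super().__init__()
        self.encoder = pretrained_encoder          # maps (B, S, D) -> (B, S, D)
        for p in self.encoder.parameters():
            p.requires_grad = False                # keep the pretrained model fixed
        self.task_embedding = nn.Parameter(torch.zeros(1, 1, d_model))
        self.readout = nn.Linear(d_model, n_classes)

    def forward(self, token_embeddings):           # (batch, seq_len, d_model)
        task = self.task_embedding.expand(token_embeddings.size(0), -1, -1)
        hidden = self.encoder(torch.cat([token_embeddings, task], dim=1))
        return self.readout(hidden[:, -1])         # read the prediction off the task token
```

Few-shot adaptation would then optimize only `task_embedding` (and here the small readout) on the handful of labeled examples.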

Zero-Shot Entity Linking by Reading Entity Descriptions

3 code implementations ACL 2019 Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, Honglak Lee

First, we show that strong reading comprehension models pre-trained on large unlabeled data can be used to generalize to unseen entities.

Entity Linking Reading Comprehension
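The title and excerpt above frame zero-shot entity linking as reading entity descriptions with a pre-trained reading comprehension model so that unseen entities can still be linked. A rough sketch of that setup is below: each candidate is scored by jointly encoding the mention context with the candidate's description. The `cross_encoder_score` hook and input formatting are assumptions for illustration, not the paper's released code:

```python
# Sketch of zero-shot entity linking by reading entity descriptions.
# `cross_encoder_score(text)` is a hypothetical pre-trained model that scores a
# mention-context/description pair.
def link_mention(mention_context, candidates, cross_encoder_score):
    # `candidates` maps entity name -> textual description (entities may be unseen in training).
    best_entity, best_score = None, float("-inf")
    for entity, description in candidates.items():
        score = cross_encoder_score(f"{mention_context} [SEP] {description}")
        if score > best_score:
            best_entity, best_score = entity, score
    return best_entity
```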

An efficient framework for learning sentence representations

6 code implementations ICLR 2018 Lajanugen Logeswaran, Honglak Lee

In this work we propose a simple and efficient framework for learning sentence representations from unlabelled data.

General Classification Representation Learning +1

Generative Adversarial Text to Image Synthesis

39 code implementations 17 May 2016 Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, Honglak Lee

Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal.

Adversarial Text Text-to-Image Generation
