no code implementations • 29 Oct 2024 • Jaekyeom Kim, Dong-Ki Kim, Lajanugen Logeswaran, Sungryull Sohn, Honglak Lee
In this paper, we introduce Auto-Intent, a method to adapt a pre-trained large language model (LLM) as an agent for a target domain without direct fine-tuning, where we empirically focus on web navigation tasks.
1 code implementation • 18 Oct 2024 • Lechen Zhang, Tolga Ergen, Lajanugen Logeswaran, Moontae Lee, David Jurgens
Past research has focused on optimizing prompts specific to a task.
no code implementations • 7 May 2024 • Siqi Shen, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Soujanya Poria, Rada Mihalcea
Large language models (LLMs) have demonstrated substantial commonsense understanding through numerous benchmark evaluations.
1 code implementation • 26 Apr 2024 • Yunxiang Zhang, Muhammad Khalifa, Lajanugen Logeswaran, Jaekyeom Kim, Moontae Lee, Honglak Lee, Lu Wang
Self-correction has emerged as a promising solution to boost the reasoning performance of large language models (LLMs), where LLMs refine their solutions using self-generated critiques that pinpoint the errors.
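The critique-then-refine loop described here can be sketched in a few lines. This is a minimal, hypothetical sketch of the general self-correction protocol (the `solve`, `critique`, and `refine` callables are illustrative stand-ins, not the paper's actual components):

```python
def self_correct(solve, critique, refine, problem, max_rounds=3):
    """Generic self-correction loop: generate a solution, then repeatedly
    critique it and refine it until no errors are flagged."""
    solution = solve(problem)
    for _ in range(max_rounds):
        feedback = critique(problem, solution)  # self-generated critique
        if not feedback:                        # no errors pinpointed: stop
            break
        solution = refine(problem, solution, feedback)
    return solution

# Toy usage: increment a number until it reaches the target value.
solve = lambda p: 0
critique = lambda p, s: "too low" if s < p else ""
refine = lambda p, s, fb: s + 1
result = self_correct(solve, critique, refine, problem=2, max_rounds=5)
```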
no code implementations • 13 Mar 2024 • Yao Fu, Dong-Ki Kim, Jaekyeom Kim, Sungryull Sohn, Lajanugen Logeswaran, Kyunghoon Bae, Honglak Lee
The primary limitation of large language models (LLMs) is their restricted understanding of the world.
1 code implementation • 7 Dec 2023 • Sungryull Sohn, Yiwei Lyu, Anthony Liu, Lajanugen Logeswaran, Dong-Ki Kim, Dongsub Shim, Honglak Lee
Our TOD-Flow graph learns what a model can, should, and should not predict, effectively reducing the search space and providing a rationale for the model's prediction.
no code implementations • 16 Nov 2023 • Lajanugen Logeswaran, Sungryull Sohn, Yiwei Lyu, Anthony Zhe Liu, Dong-Ki Kim, Dongsub Shim, Moontae Lee, Honglak Lee
One of the fundamental skills required for an agent acting in an environment to complete tasks is the ability to understand what actions are plausible at any given point.
1 code implementation • 16 Nov 2023 • Mingqian Zheng, Jiaxin Pei, Lajanugen Logeswaran, Moontae Lee, David Jurgens
In this study, we present a systematic evaluation of personas in system prompts.
1 code implementation • 16 Nov 2023 • Bangzhao Shu, Lechen Zhang, MinJe Choi, Lavinia Dunagan, Lajanugen Logeswaran, Moontae Lee, Dallas Card, David Jurgens
The versatility of Large Language Models (LLMs) on natural language understanding tasks has made them popular for research in social sciences.
no code implementations • 25 Oct 2023 • Dong-Ki Kim, Sungryull Sohn, Lajanugen Logeswaran, Dongsub Shim, Honglak Lee
Recently, there has been an increasing interest in automated prompt optimization based on reinforcement learning (RL).

1 code implementation • 22 Oct 2023 • Yunxiang Zhang, Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang
Open-domain question answering (QA) systems are often built with retrieval modules.
1 code implementation • 17 Aug 2023 • Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang
The standard approach for ICL is to prompt the LM with concatenated demonstrations followed by the test input.
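The standard ICL prompt format mentioned above is simple to make concrete: demonstrations are concatenated, then the test input is appended with its label left blank. A minimal sketch (the task and `Input:`/`Label:` template are hypothetical formatting choices):

```python
def build_icl_prompt(demonstrations, test_input):
    """Concatenate (input, label) demonstrations, then append the
    unlabeled test input for the LM to complete."""
    parts = [f"Input: {x}\nLabel: {y}" for x, y in demonstrations]
    parts.append(f"Input: {test_input}\nLabel:")
    return "\n\n".join(parts)

demos = [("great movie!", "positive"), ("terrible plot.", "negative")]
prompt = build_icl_prompt(demos, "loved the acting.")
```

The LM's continuation after the final `Label:` is then taken as its prediction.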
1 code implementation • 24 May 2023 • Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang
To address this issue, we propose Guiding chain-of-thought ReAsoning with a CorrectnEss Discriminator (GRACE), a stepwise decoding approach that steers the decoding process towards producing correct reasoning steps.
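The stepwise decoding idea can be sketched generically: at each reasoning step, candidate next steps are generated and a correctness discriminator scores them, steering the search toward correct steps. This is a simplified, hypothetical sketch in the spirit of that approach, not GRACE's actual algorithm (`generate_candidates` and `score_step` are illustrative stand-ins):

```python
def guided_decode(generate_candidates, score_step, prefix, n_steps):
    """Greedy stepwise decoding: at each step, keep the candidate
    next step that the discriminator scores highest."""
    steps = list(prefix)
    for _ in range(n_steps):
        candidates = generate_candidates(steps)
        best = max(candidates, key=lambda c: score_step(steps, c))
        steps.append(best)
    return steps

# Toy usage with a stand-in generator and discriminator.
gen = lambda steps: ["wrong", "right"]
score = lambda steps, c: 1.0 if c == "right" else 0.0
out = guided_decode(gen, score, prefix=["start"], n_steps=2)
```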
no code implementations • 16 Mar 2023 • Anthony Z. Liu, Lajanugen Logeswaran, Sungryull Sohn, Honglak Lee
Planning is an important capability of artificial agents that perform long-horizon tasks in real-world environments.
no code implementations • 17 Feb 2023 • Lajanugen Logeswaran, Sungryull Sohn, Yunseok Jang, Moontae Lee, Honglak Lee
This work explores the problem of generating task graphs of real-world activities.
no code implementations • 17 Feb 2023 • Yunseok Jang, Sungryull Sohn, Lajanugen Logeswaran, Tiange Luo, Moontae Lee, Honglak Lee
Real-world tasks consist of multiple inter-dependent subtasks (e.g., a dirty pan needs to be washed before it can be used for cooking).
2 code implementations • 7 Feb 2023 • Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo
Recently, Language Models (LMs) instruction-tuned on multiple tasks via multitask-prompted fine-tuning (MT) have shown the capability to generalize to unseen tasks.
2 code implementations • 4 Oct 2022 • Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo
Pretrained Language Models (LMs) memorize a vast amount of knowledge during initial pretraining, including information that may violate the privacy of personal lives and identities.
no code implementations • NAACL 2022 • Lajanugen Logeswaran, Yao Fu, Moontae Lee, Honglak Lee
Pre-trained large language models have shown successful progress in many language understanding benchmarks.
2 code implementations • 25 May 2022 • Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang
To alleviate the need for a large number of labeled question-document pairs for retriever training, we propose PromptRank, which relies on prompting large language models for multi-hop path reranking.
no code implementations • 17 Dec 2020 • Lajanugen Logeswaran, Ann Lee, Myle Ott, Honglak Lee, Marc'Aurelio Ranzato, Arthur Szlam
In the simplest setting, we append a token to an input sequence which represents the particular task to be undertaken, and show that the embedding of this token can be optimized on the fly given few labeled examples.
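The idea of optimizing only a task token's embedding while the model stays frozen can be illustrated with a toy example. Below, a fixed linear "model" plays the role of the pre-trained network, and gradient descent updates only the task embedding on a handful of labeled examples; the data, model, and learning rate are all hypothetical stand-ins, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4,))            # frozen "model" weights (2 for task token, 2 for input)
X = rng.normal(size=(8, 2))          # a few labeled examples
y = (X @ np.array([1.0, -1.0]) > 0).astype(float)

def loss(task_emb):
    """Logistic loss of the frozen model on [task_emb; x] inputs."""
    logits = np.concatenate([np.tile(task_emb, (len(X), 1)), X], axis=1) @ W
    probs = 1 / (1 + np.exp(-logits))
    return -np.mean(y * np.log(probs + 1e-9) + (1 - y) * np.log(1 - probs + 1e-9))

task_emb = np.zeros(2)               # learnable task-token embedding
initial_loss = loss(task_emb)
lr = 0.05
for _ in range(500):
    logits = np.concatenate([np.tile(task_emb, (len(X), 1)), X], axis=1) @ W
    probs = 1 / (1 + np.exp(-logits))
    grad = ((probs - y)[:, None] * np.tile(W[:2], (len(X), 1))).mean(axis=0)
    task_emb -= lr * grad            # update only the task embedding; W stays frozen
final_loss = loss(task_emb)
```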
3 code implementations • ACL 2019 • Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, Honglak Lee
First, we show that strong reading comprehension models pre-trained on large unlabeled data can be used to generalize to unseen entities.
1 code implementation • NeurIPS 2018 • Lajanugen Logeswaran, Honglak Lee, Samy Bengio
We propose an adversarial loss to enforce generated samples to be attribute compatible and realistic.
6 code implementations • ICLR 2018 • Lajanugen Logeswaran, Honglak Lee
In this work we propose a simple and efficient framework for learning sentence representations from unlabelled data.
2 code implementations • 8 Nov 2016 • Lajanugen Logeswaran, Honglak Lee, Dragomir Radev
Modeling the structure of coherent texts is a key NLP problem.
39 code implementations • 17 May 2016 • Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, Honglak Lee
Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal.