Search Results for author: Monica S. Lam

Found 17 papers, 14 papers with code

Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models

no code implementations • 22 Feb 2024 • Yijia Shao, Yucheng Jiang, Theodore A. Kanell, Peter Xu, Omar Khattab, Monica S. Lam

We study how to apply large language models to write grounded and organized long-form articles from scratch, with breadth and depth comparable to Wikipedia pages.

Retrieval

SUQL: Conversational Search over Structured and Unstructured Data with Large Language Models

1 code implementation • 16 Nov 2023 • Shicheng Liu, Jialiang Xu, Wesley Tjangnaka, Sina J. Semnani, Chen Jie Yu, Monica S. Lam

This paper presents the first conversational agent that supports the full generality of hybrid data access for large knowledge corpora, through a language we developed called SUQL (Structured and Unstructured Query Language).

Conversational Search • In-Context Learning • +1
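
As a rough illustration of the kind of hybrid query such a language enables, the sketch below mixes SQL-style structured filters with a free-text predicate over an unstructured column; the table, column names, and the answer() primitive are illustrative assumptions, not the paper's exact syntax.

```python
# Illustrative only: a SUQL-style query that mixes structured SQL filters with
# a free-text predicate over an unstructured column. Table, columns, and the
# answer() primitive are assumptions for illustration, not the paper's syntax.
suql_query = """
SELECT name, rating
FROM restaurants
WHERE cuisine = 'italian'
  AND answer(reviews, 'is this place good for a quiet dinner?') = 'yes'
ORDER BY rating DESC
LIMIT 3;
"""

# A hypothetical executor would evaluate the structured clauses against the
# database and delegate the free-text predicate to an LLM over the reviews.
print(suql_query.strip())
```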

ReactGenie: A Development Framework for Complex Multimodal Interactions Using Large Language Models

no code implementations • 16 Jun 2023 • Jackie Junrui Yang, Yingtian Shi, Yuhan Zhang, Karina Li, Daniel Wan Rosli, Anisha Jain, Shuning Zhang, Tianshi Li, James A. Landay, Monica S. Lam

This paper targets complex interactions, where users can issue multimodal commands that translate into one of an exponentially large number of possible combinations of actions/function invocations.

WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia

1 code implementation • 23 May 2023 • Sina J. Semnani, Violet Z. Yao, Heidi C. Zhang, Monica S. Lam

WikiChat generates a response from an LLM, retains only the grounded facts, and combines them with additional information it retrieves from the corpus to form factual and engaging responses.

Chatbot • Hallucination • +4
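
A minimal Python sketch of the grounding pipeline summarized above, assuming hypothetical llm, retrieve, and is_supported helpers (stubbed here) in place of a real LLM API, Wikipedia retriever, and entailment check:

```python
from typing import List

# Hypothetical placeholders: a real system would call an LLM API, a Wikipedia
# retriever, and an entailment/fact-checking model here.
def llm(prompt: str) -> str:
    return ""

def retrieve(query: str) -> List[str]:
    return []

def is_supported(claim: str, passage: str) -> bool:
    return False

def wikichat_reply(history: List[str], user_msg: str) -> str:
    """Sketch of the grounding pipeline described in the abstract snippet."""
    # 1. Draft a response with the LLM and split it into atomic claims.
    draft = llm(f"History: {history}\nRespond to: {user_msg}")
    claims = [c for c in llm(f"List the factual claims in: {draft}").splitlines() if c]

    # 2. Retrieve passages for the user turn and for each claim.
    passages = retrieve(user_msg) + [p for c in claims for p in retrieve(c)]

    # 3. Retain only the claims grounded in some retrieved passage.
    grounded = [c for c in claims if any(is_supported(c, p) for p in passages)]

    # 4. Form the final response from grounded facts plus retrieved information.
    return llm("Write a factual, engaging reply using only these facts:\n"
               + "\n".join(grounded + passages[:3]) + f"\nUser: {user_msg}")

print(wikichat_reply([], "Tell me about the history of Wikipedia."))
```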

Zero and Few-Shot Localization of Task-Oriented Dialogue Agents with a Distilled Representation

1 code implementation • 18 Feb 2023 • Mehrad Moradshahi, Sina J. Semnani, Monica S. Lam

We propose automatic methods that use ToD training data in a source language to build a high-quality, functioning dialogue agent in another target language that has no training data (i.e., zero-shot) or only a small training set (i.e., few-shot).

Dialogue State Tracking • Machine Translation • +1

ThingTalk: An Extensible, Executable Representation Language for Task-Oriented Dialogues

1 code implementation • 23 Mar 2022 • Monica S. Lam, Giovanni Campagna, Mehrad Moradshahi, Sina J. Semnani, Silei Xu

Task-oriented conversational agents rely on semantic parsers to translate natural language to formal representations.

Semantic Parsing
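
For flavor, here is a hypothetical (utterance, program) pair of the kind such a parser is trained on; the ThingTalk-style syntax and skill name below are approximate illustrations, not quoted from the paper or repository.

```python
# Illustrative only: an (utterance, formal program) pair of the kind a neural
# semantic parser is trained on. The ThingTalk-style syntax and skill name are
# approximate; see the paper and repository for the real grammar.
example = {
    "utterance": "find me an italian restaurant rated at least 4 stars",
    "program": '@com.yelp.restaurant() filter cuisine == "italian" && rating >= 4;',
}

# The parser maps the utterance to the program, and the dialogue runtime then
# executes the program against the corresponding skill/API.
print(example["utterance"], "=>", example["program"])
```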

Contextual Semantic Parsing for Multilingual Task-Oriented Dialogues

1 code implementation • 4 Nov 2021 • Mehrad Moradshahi, Victoria Tsai, Giovanni Campagna, Monica S. Lam

On the RiSAWOZ, CrossWOZ, CrossWOZ-EN, and MultiWOZ-ZH datasets, we improve the state of the art in joint goal accuracy by 11%, 17%, 20%, and 0.3%, respectively.

Dialogue State Tracking • Machine Translation • +3

Localizing Open-Ontology QA Semantic Parsers in a Day Using Machine Translation

1 code implementation • EMNLP 2020 • Mehrad Moradshahi, Giovanni Campagna, Sina J. Semnani, Silei Xu, Monica S. Lam

We propose Semantic Parser Localizer (SPL), a toolkit that leverages Neural Machine Translation (NMT) systems to localize a semantic parser for a new language.

Machine Translation • NMT • +3
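
One common way to machine-translate semantic-parsing training data while keeping utterances aligned with their logical forms is to mask entity spans before translation and restore them afterwards. The sketch below illustrates that general idea with a stub translate function and an ad-hoc placeholder scheme; these are assumptions for illustration, not the SPL toolkit's actual API.

```python
def translate(text: str, target_lang: str) -> str:
    return text  # placeholder: call an NMT system / translation API here

def localize_example(utterance: str, entities: dict, target_lang: str) -> str:
    """Translate an utterance while keeping entity values intact, so the
    paired logical form stays valid (placeholder scheme is illustrative)."""
    masked = utterance
    for i, value in enumerate(entities.values()):
        masked = masked.replace(value, f"E{i}")          # mask entity spans
    translated = translate(masked, target_lang)          # translate the masked template
    for i, value in enumerate(entities.values()):
        translated = translated.replace(f"E{i}", value)  # restore entity values
    return translated

print(localize_example("book a table at Tony's Pizza for 7pm",
                       {"restaurant": "Tony's Pizza", "time": "7pm"}, "it"))
```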

A Few-Shot Semantic Parser for Wizard-of-Oz Dialogues with the Precise ThingTalk Representation

1 code implementation • Findings (ACL) 2022 • Giovanni Campagna, Sina J. Semnani, Ryan Kearns, Lucas Jun Koba Sato, Silei Xu, Monica S. Lam

Previous attempts to build effective semantic parsers for Wizard-of-Oz (WOZ) conversations suffer from the difficulty in acquiring a high-quality, manually annotated training set.

Data Augmentation • Dialogue State Tracking

Zero-Shot Transfer Learning with Synthesized Data for Multi-Domain Dialogue State Tracking

1 code implementation • ACL 2020 • Giovanni Campagna, Agata Foryciarz, Mehrad Moradshahi, Monica S. Lam

We show that data augmentation through synthesized data can improve the accuracy of zero-shot learning for both the TRADE model and the BERT-based SUMBT model on the MultiWOZ 2.1 dataset.

Data Augmentation • Dialogue State Tracking • +3
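
A toy sketch of template-based dialogue synthesis for an unseen domain, in the spirit of the data augmentation described above; the templates, slot values, and output format are illustrative assumptions, not the paper's generator.

```python
import random

# Illustrative templates and slot values for a hypothetical unseen "hotel" domain.
TEMPLATES = [
    "i need a {price} hotel in the {area}",
    "can you find me a place to stay in the {area}? something {price}",
]
VALUES = {
    "price": ["cheap", "moderate", "expensive"],
    "area": ["north", "south", "centre"],
}

def synthesize(n: int = 5):
    """Generate (utterance, dialogue state) pairs to augment DST training data."""
    examples = []
    for _ in range(n):
        slots = {k: random.choice(v) for k, v in VALUES.items()}
        utterance = random.choice(TEMPLATES).format(**slots)
        state = {f"hotel-{k}": v for k, v in slots.items()}
        examples.append((utterance, state))
    return examples

for utt, state in synthesize():
    print(utt, "->", state)
```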

Schema2QA: High-Quality and Low-Cost Q&A Agents for the Structured Web

3 code implementations • 16 Jan 2020 • Silei Xu, Giovanni Campagna, Jian Li, Monica S. Lam

The key concept is to cover the space of possible compound queries on the database with a large number of in-domain questions synthesized with the help of a corpus of generic query templates.

Question Answering • Semantic Parsing • +1
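
A toy sketch of how generic per-field templates can be composed over a database schema to synthesize compound questions paired with formal queries; the schema, templates, and SQL-like targets are illustrative assumptions, not the actual Schema2QA toolchain.

```python
from itertools import combinations

# Illustrative field values and generic per-field templates; the real system
# uses a large corpus of generic query templates plus per-field annotations.
FIELDS = {"cuisine": "italian", "rating": "4", "area": "downtown"}
GENERIC = {
    "cuisine": ("that serves {v} food", 'cuisine == "{v}"'),
    "rating":  ("rated {v} stars or higher", "rating >= {v}"),
    "area":    ("in {v}", 'area == "{v}"'),
}

def synthesize_compound(table: str = "restaurant"):
    """Compose per-field templates into compound questions paired with
    formal queries, covering combinations of filters."""
    for f1, f2 in combinations(FIELDS, 2):
        p1, c1 = GENERIC[f1]
        p2, c2 = GENERIC[f2]
        question = f"show me a {table} {p1.format(v=FIELDS[f1])} {p2.format(v=FIELDS[f2])}"
        query = (f"SELECT * FROM {table} WHERE "
                 f"{c1.format(v=FIELDS[f1])} AND {c2.format(v=FIELDS[f2])}")
        yield question, query

for q, sql in synthesize_compound():
    print(q, "=>", sql)
```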

ImagineNet: Restyling Apps Using Neural Style Transfer

no code implementations • 14 Jan 2020 • Michael H. Fischer, Richard R. Yang, Monica S. Lam

This paper presents ImagineNet, a tool that uses a novel neural style transfer model to enable end-users and app developers to restyle GUIs using an image of their choice.

Style Transfer

HUBERT Untangles BERT to Improve Transfer across NLP Tasks

1 code implementation • 25 Oct 2019 • Mehrad Moradshahi, Hamid Palangi, Monica S. Lam, Paul Smolensky, Jianfeng Gao

We introduce HUBERT, which combines the structured-representational power of Tensor-Product Representations (TPRs) with BERT, a pre-trained bidirectional Transformer language model.

Language Modelling
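
A minimal NumPy sketch of the tensor-product binding and unbinding operation at the heart of TPRs, the structured-representation idea the entry refers to; the dimensions and random vectors are illustrative, not HUBERT's learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d_filler, d_role, n_symbols = 8, 4, 3

# "What" is stored (fillers) and "where" it is stored (roles).
fillers = rng.normal(size=(n_symbols, d_filler))
# Use rows of an orthogonal matrix so roles are orthonormal.
roles = np.linalg.qr(rng.normal(size=(d_role, d_role)))[0][:n_symbols]

# Bind each filler to its role with an outer product and superimpose them.
T = sum(np.outer(f, r) for f, r in zip(fillers, roles))  # shape (d_filler, d_role)

# Unbind: with orthonormal roles, T @ r_i recovers filler i.
recovered = T @ roles[1]
print(np.allclose(recovered, fillers[1]))  # True (up to numerical precision)
```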

Genie: A Generator of Natural Language Semantic Parsers for Virtual Assistant Commands

1 code implementation • 18 Apr 2019 • Giovanni Campagna, Silei Xu, Mehrad Moradshahi, Richard Socher, Monica S. Lam

We advocate formalizing the capability of virtual assistants with a Virtual Assistant Programming Language (VAPL) and using a neural semantic parser to translate natural language into VAPL code.

Data Augmentation • Translation
