Search Results for author: Hongyuan Mei

Found 30 papers, 17 papers with code

GMP-AR: Granularity Message Passing and Adaptive Reconciliation for Temporal Hierarchy Forecasting

no code implementations18 Jun 2024 Fan Zhou, Chen Pan, Lintao Ma, Yu Liu, James Zhang, Jun Zhou, Hongyuan Mei, Weitao Lin, Zi Zhuang, Wenxin Ning, Yunhua Hu, Siqiao Xue

These methods merely use the temporal hierarchical structure to maintain coherence, without improving forecasting accuracy.

Transcrib3D: 3D Referring Expression Resolution through Large Language Models

no code implementations30 Apr 2024 Jiading Fang, Xiangshan Tan, Shengjie Lin, Igor Vasiljevic, Vitor Guizilini, Hongyuan Mei, Rares Ambrus, Gregory Shakhnarovich, Matthew R Walter

We introduce Transcrib3D, an approach that brings together 3D detection methods and the emergent reasoning capabilities of large language models (LLMs).

Referring Expression

MANGO: A Benchmark for Evaluating Mapping and Navigation Abilities of Large Language Models

1 code implementation29 Mar 2024 Peng Ding, Jiading Fang, Peng Li, Kangrui Wang, Xiaochen Zhou, Mo Yu, Jing Li, Matthew R. Walter, Hongyuan Mei

The task is question-answering: for each maze, a large language model reads the walkthrough and answers hundreds of mapping and navigation questions such as "How should you go to Attic from West of House?"

Language Modelling, Large Language Model, +1 more
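For illustration, the ground truth for such a navigation question can be computed by breadth-first search over the maze graph. The map fragment below is a hypothetical Zork-style example sketched for this purpose, not data taken from the MANGO benchmark:

```python
from collections import deque

# Hypothetical fragment of a Zork-like maze: (location, direction) -> location.
EDGES = {
    ("West of House", "north"): "North of House",
    ("North of House", "east"): "Behind House",
    ("Behind House", "in"): "Kitchen",
    ("Kitchen", "up"): "Attic",
}

def navigate(start, goal):
    """Breadth-first search for the shortest direction sequence from start to goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        loc, path = queue.popleft()
        if loc == goal:
            return path
        for (src, direction), dst in EDGES.items():
            if src == loc and dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [direction]))
    return None  # goal unreachable from start

print(navigate("West of House", "Attic"))  # -> ['north', 'east', 'in', 'up']
```

A model that has built an accurate internal map from the walkthrough should be able to answer such questions consistently with this kind of graph search.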

EasyTPP: Towards Open Benchmarking Temporal Point Processes

1 code implementation16 Jul 2023 Siqiao Xue, Xiaoming Shi, Zhixuan Chu, Yan Wang, Hongyan Hao, Fan Zhou, Caigao Jiang, Chen Pan, James Y. Zhang, Qingsong Wen, Jun Zhou, Hongyuan Mei

In this paper, we present EasyTPP, the first central repository of research assets (e.g., data, models, evaluation programs, documentation) in the area of event sequence modeling.

Benchmarking, Point Processes
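As background for benchmarking temporal point processes: the log-likelihood of an event sequence under an intensity λ(t) is Σᵢ log λ(tᵢ) − ∫₀ᵀ λ(t) dt. The sketch below evaluates this for the simplest special case, a homogeneous Poisson process with constant rate — an illustrative assumption, not a model from EasyTPP:

```python
import math

def poisson_loglik(event_times, rate, horizon):
    """Log-likelihood of event times on [0, horizon] under a homogeneous
    Poisson process: n * log(rate) - rate * horizon."""
    return len(event_times) * math.log(rate) - rate * horizon

# Three events observed over a horizon of 10 time units, rate 0.3.
print(poisson_loglik([1.0, 4.2, 7.5], rate=0.3, horizon=10.0))
```

Richer models such as the neural Hawkes process make the rate history-dependent, but the same two-term likelihood structure is what TPP benchmarks evaluate.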

Autoregressive Modeling with Lookahead Attention

no code implementations20 May 2023 Li Du, Hongyuan Mei, Jason Eisner

To predict the next token, autoregressive models ordinarily examine the past.

Morphological Inflection
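A bigram counter is perhaps the simplest illustration of "examining the past": the next token is predicted from preceding context alone. The toy corpus and `predict_next` helper below are hypothetical; the paper's lookahead mechanism, which also attends to speculated futures, is not shown here:

```python
from collections import Counter, defaultdict

# Toy corpus; a real autoregressive LM would use a neural network.
corpus = "the cat sat on the mat the cat ran".split()

# Count bigrams: the model conditions only on the past
# (here, just the single previous token).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(prev):
    """Most likely next token given the immediately preceding token."""
    return bigrams[prev].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat"
```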

Can Large Language Models Play Text Games Well? Current State-of-the-Art and Open Questions

no code implementations6 Apr 2023 Chen Feng Tsai, Xiaochen Zhou, Sierra S. Liu, Jing Li, Mo Yu, Hongyuan Mei

Large language models (LLMs) such as ChatGPT and GPT-4 have recently demonstrated remarkable abilities to communicate with human users.

World Knowledge

Explicit Planning Helps Language Models in Logical Reasoning

2 code implementations28 Mar 2023 Hongyu Zhao, Kangrui Wang, Mo Yu, Hongyuan Mei

In this paper, we propose LEAP, a novel system that uses language models to perform multi-step logical reasoning and incorporates explicit planning into the inference procedure.

Logical Reasoning, Multiple-choice, +1 more

Robustness of Learning from Task Instructions

1 code implementation7 Dec 2022 Jiasheng Gu, Hongyu Zhao, Hanzi Xu, Liangyu Nie, Hongyuan Mei, Wenpeng Yin

To our knowledge, this is the first work to systematically study how robust a PLM is when supervised by instructions that vary along different factors.

Language Modelling

Tiny-Attention Adapter: Contexts Are More Important Than the Number of Parameters

no code implementations18 Oct 2022 Hongyu Zhao, Hao Tan, Hongyuan Mei

Our tiny-attention adapter learns to modify the hidden state at each position directly conditioned on the hidden states at all other positions, a capability missing from previously proposed adapters.

Language Modelling, Transfer Learning

HYPRO: A Hybridly Normalized Probabilistic Model for Long-Horizon Prediction of Event Sequences

3 code implementations4 Oct 2022 Siqiao Xue, Xiaoming Shi, James Y. Zhang, Hongyuan Mei

In this paper, we tackle the important yet under-investigated problem of making long-horizon prediction of event sequences.

Bellman Meets Hawkes: Model-Based Reinforcement Learning via Temporal Point Processes

1 code implementation29 Jan 2022 Chao Qu, Xiaoyu Tan, Siqiao Xue, Xiaoming Shi, James Zhang, Hongyuan Mei

We consider a sequential decision-making problem where the agent faces an environment characterized by stochastic discrete events and seeks an optimal intervention policy that maximizes its long-term reward.

Decision Making, Model-based Reinforcement Learning, +3 more

Transformer Embeddings of Irregularly Spaced Events and Their Participants

1 code implementation ICLR 2022 Chenghao Yang, Hongyuan Mei, Jason Eisner

The neural Hawkes process (Mei & Eisner, 2017) is a generative model of irregularly spaced sequences of discrete events.
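A minimal sketch of the classical (non-neural) Hawkes intensity with an exponential kernel, which the neural Hawkes process generalizes by letting an LSTM modulate the excitation; the parameter values below are arbitrary illustrations:

```python
import math

def hawkes_intensity(t, history, mu=0.2, alpha=0.8, beta=1.0):
    """Classical self-exciting Hawkes intensity with exponential kernel:
    lambda(t) = mu + sum over past events t_i < t of alpha * exp(-beta * (t - t_i))."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in history if ti < t)

# Each past event transiently raises the rate of future events,
# and the excitation decays exponentially with elapsed time.
print(hawkes_intensity(2.0, history=[0.5, 1.5]))
```

With no past events the intensity equals the base rate `mu`; each observed event adds a decaying bump, which is what makes the process "self-exciting".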

Personalized Dynamic Treatment Regimes in Continuous Time: A Bayesian Approach for Optimizing Clinical Decisions with Timing

no code implementations8 Jul 2020 William Hua, Hongyuan Mei, Sarah Zohar, Magali Giral, Yanxun Xu

In the second step, we propose a policy gradient method to learn the personalized optimal clinical decision that maximizes patient survival, by letting the MTPP interact with the model of clinical observations while accounting for the uncertainties learned from the posterior inference of the Bayesian joint model in the first step.

Methodology

Neural Datalog Through Time: Informed Temporal Modeling via Logical Specification

1 code implementation ICML 2020 Hongyuan Mei, Guanghui Qin, Minjie Xu, Jason Eisner

Learning how to predict future events from patterns of past events is difficult when the set of possible event types is large.

Imputing Missing Events in Continuous-Time Event Streams

2 code implementations14 May 2019 Hongyuan Mei, Guanghui Qin, Jason Eisner

On held-out incomplete sequences, our method is effective at inferring the ground-truth unobserved events, with particle smoothing consistently improving upon particle filtering.

On the Idiosyncrasies of the Mandarin Chinese Classifier System

no code implementations NAACL 2019 Shijia Liu, Hongyuan Mei, Adina Williams, Ryan Cotterell

While idiosyncrasies of the Chinese classifier system have been a richly studied topic among linguists (Adams and Conklin, 1973; Erbaugh, 1986; Lakoff, 1986), not much work has been done to quantify them with statistical methods.

Inference of unobserved event streams with neural Hawkes particle smoothing

no code implementations27 Sep 2018 Hongyuan Mei, Guanghui Qin, Jason Eisner

Particle smoothing is an extension of particle filtering in which proposed events are conditioned on the future as well as the past.

Decoder

Halo: Learning Semantics-Aware Representations for Cross-Lingual Information Extraction

no code implementations SEMEVAL 2018 Hongyuan Mei, Sheng Zhang, Kevin Duh, Benjamin Van Durme

Cross-lingual information extraction (CLIE) is an important and challenging task, especially in low resource scenarios.

TAG

Coherent Dialogue with Attention-based Language Models

no code implementations21 Nov 2016 Hongyuan Mei, Mohit Bansal, Matthew R. Walter

We model coherent conversation continuation via RNN-based dialogue models equipped with a dynamic attention mechanism.

Diversity, Language Modelling

Accurate Vision-based Vehicle Localization using Satellite Imagery

no code implementations30 Oct 2015 Hang Chu, Hongyuan Mei, Mohit Bansal, Matthew R. Walter

We propose a method for accurately localizing ground vehicles with the aid of satellite imagery.

Visual Localization

What to talk about and how? Selective Generation using LSTMs with Coarse-to-Fine Alignment

1 code implementation NAACL 2016 Hongyuan Mei, Mohit Bansal, Matthew R. Walter

We propose an end-to-end, domain-independent neural encoder-aligner-decoder model for selective generation, i.e., the joint task of content selection and surface realization.

Data-to-Text Generation, Decoder

Listen, Attend, and Walk: Neural Mapping of Navigational Instructions to Action Sequences

1 code implementation12 Jun 2015 Hongyuan Mei, Mohit Bansal, Matthew R. Walter

We propose a neural sequence-to-sequence model for direction following, a task that is essential to realizing effective autonomous agents.

Decoder, Natural Language Understanding, +1 more
