Search Results for author: Zeyi Liu

Found 9 papers, 4 papers with code

ContactHandover: Contact-Guided Robot-to-Human Object Handover

no code implementations · 1 Apr 2024 · Zixi Wang, Zeyi Liu, Nicolas Ouporov, Shuran Song

We propose ContactHandover, a robot-to-human handover system that consists of two phases: a contact-guided grasping phase and an object delivery phase.

Object

REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction

1 code implementation · 27 Jun 2023 · Zeyi Liu, Arpit Bahety, Shuran Song

The ability to detect and analyze failed executions automatically is crucial for an explainable and robust robotic system.

Common Sense Reasoning · Large Language Model · +1

CADM: Confusion Model-based Detection Method for Real-drift in Chunk Data Stream

no code implementations · 25 Mar 2023 · Songqiao Hu, Zeyi Liu, Xiao He

When a new data chunk arrives, we use both real labels and pseudo labels to update the model after prediction and drift detection.
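The chunk-stream update described above can be sketched in a few lines. This is a hedged illustration of the general idea only, not the authors' CADM implementation: `make_chunk`, the online logistic model, the 20% labelling rate, and the drift schedule are all hypothetical choices. Each arriving chunk is first predicted on, then the model is updated with a mix of real labels (a small labelled subset) and pseudo labels (the model's own predictions).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_chunk(n=200, shift=0.0):
    """Toy chunk: 2-D Gaussian features whose decision boundary moves with `shift`."""
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

w = np.zeros(3)  # two weights plus a bias, for a simple online logistic model

def predict(X):
    return (X @ w[:2] + w[2] > 0).astype(int)

def update(X, y, lr=0.1):
    """One gradient step of logistic regression on a chunk."""
    global w
    p = 1 / (1 + np.exp(-(X @ w[:2] + w[2])))
    g = p - y
    w[:2] -= lr * (X.T @ g) / len(y)
    w[2] -= lr * g.mean()

# warm-up on an initial, fully labelled chunk
X0, y0 = make_chunk()
for _ in range(50):
    update(X0, y0)

# stream of chunks with gradual real drift
for step in range(5):
    X, y = make_chunk(shift=0.1 * step)
    pred = predict(X)                      # predict before updating
    labelled = rng.random(len(y)) < 0.2    # only 20% of real labels available
    y_mix = np.where(labelled, y, pred)    # real labels + pseudo labels
    for _ in range(20):
        update(X, y_mix)

acc = (predict(X) == y).mean()             # accuracy on the final chunk
```

Pseudo labels keep the model updating between sparse annotations, at the cost of potentially reinforcing a stale boundary; in CADM the drift-detection step decides when that risk warrants a stronger correction.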

A Survey of Knowledge Enhanced Pre-trained Language Models

no code implementations · 11 Nov 2022 · Linmei Hu, Zeyi Liu, Ziwang Zhao, Lei Hou, Liqiang Nie, Juanzi Li

We introduce appropriate taxonomies respectively for Natural Language Understanding (NLU) and Natural Language Generation (NLG) to highlight these two main tasks of NLP.

Natural Language Understanding · Retrieval · +2

BusyBot: Learning to Interact, Reason, and Plan in a BusyBoard Environment

1 code implementation · 17 Jul 2022 · Zeyi Liu, Zhenjia Xu, Shuran Song

We introduce BusyBoard, a toy-inspired robot learning environment that leverages a diverse set of articulated objects and inter-object functional relations to provide rich visual feedback for robot interactions.

Causal Discovery · Robot Manipulation · +2

Fast Extraction of Word Embedding from Q-contexts

no code implementations · 15 Sep 2021 · Junsheng Kong, Weizhao Li, Zeyi Liu, Ben Liao, Jiezhong Qiu, Chang-Yu Hsieh, Yi Cai, Shengyu Zhang

In this work, we show that with merely a small fraction of contexts (Q-contexts) which are typical in the whole corpus (and their mutual information with words), one can construct high-quality word embeddings with negligible errors.
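The core idea of building word vectors from word-context statistics can be sketched with a classic PMI-plus-SVD construction restricted to a few high-frequency context columns. This is an illustrative analogue only, assuming a toy corpus, a ±1 co-occurrence window, and positive PMI; the paper's actual Q-context selection and factorisation differ.

```python
import numpy as np

# Toy corpus (hypothetical, for illustration only)
corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "a cat and a dog played",
    "the mat and the log",
]
tokens = [s.split() for s in corpus]
vocab = sorted({w for s in tokens for w in s})
idx = {w: i for i, w in enumerate(vocab)}

# Co-occurrence counts within a +/-1 window
C = np.zeros((len(vocab), len(vocab)))
for s in tokens:
    for i, w in enumerate(s):
        for j in (i - 1, i + 1):
            if 0 <= j < len(s):
                C[idx[w], idx[s[j]]] += 1

# Positive pointwise mutual information: max(0, log p(w,c) / (p(w) p(c)))
total = C.sum()
pw = C.sum(axis=1, keepdims=True) / total
pc = C.sum(axis=0, keepdims=True) / total
with np.errstate(divide="ignore"):
    ppmi = np.maximum(np.log((C / total) / (pw * pc)), 0)

# Keep only a handful of frequent context columns -- the "small fraction
# of typical contexts" idea -- then factorise to get dense embeddings
q = np.argsort(C.sum(axis=0))[-4:]
M = ppmi[:, q]
U, S, _ = np.linalg.svd(M, full_matrices=False)
emb = U[:, :2] * S[:2]   # 2-D word embeddings
```

Because only a few context columns enter the factorisation, the SVD is over a tall, narrow matrix, which is where the speed-up in this style of construction comes from.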
