Search Results for author: Haowei Lin

Found 8 papers, 6 papers with code

JARVIS-1: Open-World Multi-task Agents with Memory-Augmented Multimodal Language Models

no code implementations • 10 Nov 2023 • ZiHao Wang, Shaofei Cai, Anji Liu, Yonggang Jin, Jinbing Hou, Bowei Zhang, Haowei Lin, Zhaofeng He, Zilong Zheng, Yaodong Yang, Xiaojian Ma, Yitao Liang

Achieving human-like planning and control with multimodal observations in an open world is a key milestone for more functional generalist agents.

MCU: A Task-centric Framework for Open-ended Agent Evaluation in Minecraft

1 code implementation • 12 Oct 2023 • Haowei Lin, ZiHao Wang, Jianzhu Ma, Yitao Liang

To pursue the goal of creating an open-ended agent in Minecraft, a game environment with unlimited possibilities, this paper introduces a task-centric framework named MCU for Minecraft agent evaluation.

Out-of-Distribution Generalization

FLatS: Principled Out-of-Distribution Detection with Feature-Based Likelihood Ratio Score

1 code implementation • 8 Oct 2023 • Haowei Lin, Yuntian Gu

Backed by theoretical analysis, this paper advocates for the measurement of the "OOD-ness" of a test case $\boldsymbol{x}$ through the likelihood ratio between out-distribution $\mathcal P_{\textit{out}}$ and in-distribution $\mathcal P_{\textit{in}}$.
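The likelihood-ratio idea above can be illustrated with a minimal sketch. This is not the paper's FLatS estimator: the Gaussian density stand-ins and the synthetic features below are assumptions for illustration; FLatS builds its feature-based densities differently.

```python
import numpy as np

def gaussian_logpdf(x, mean, var):
    # Log-density of an isotropic Gaussian — a simple stand-in for the
    # paper's feature-based density estimators (illustrative assumption).
    d = x.shape[-1]
    return -0.5 * (d * np.log(2 * np.pi * var)
                   + np.sum((x - mean) ** 2, axis=-1) / var)

def ood_score(x, in_feats, out_feats):
    # Likelihood ratio log p_out(x) - log p_in(x); larger means more OOD.
    mu_in, var_in = in_feats.mean(0), in_feats.var()
    mu_out, var_out = out_feats.mean(0), out_feats.var()
    return (gaussian_logpdf(x, mu_out, var_out)
            - gaussian_logpdf(x, mu_in, var_in))

rng = np.random.default_rng(0)
in_feats = rng.normal(0.0, 1.0, size=(500, 8))   # synthetic in-distribution features
out_feats = rng.normal(4.0, 1.0, size=(500, 8))  # synthetic surrogate out-distribution
x_in, x_out = np.zeros(8), np.full(8, 4.0)
print(ood_score(x_in, in_feats, out_feats) < ood_score(x_out, in_feats, out_feats))
```

The key design point is that scoring by the ratio, rather than by $p_{\textit{in}}$ alone, discounts regions where both densities are low.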

Out-of-Distribution Detection

Class Incremental Learning via Likelihood Ratio Based Task Prediction

1 code implementation • 26 Sep 2023 • Haowei Lin, Yijia Shao, Weinan Qian, Ningxin Pan, Yiduo Guo, Bing Liu

An emerging, theoretically justified, and effective approach is to train a task-specific model for each task within a shared network, using a task-incremental learning (TIL) method to deal with forgetting.

Class Incremental Learning · Incremental Learning +1

Adapting a Language Model While Preserving its General Knowledge

2 code implementations • 21 Jan 2023 • Zixuan Ke, Yijia Shao, Haowei Lin, Hu Xu, Lei Shu, Bing Liu

This paper shows that the existing methods are suboptimal and proposes a novel method for a more informed adaptation of the knowledge in the LM: (1) soft-masking the attention heads based on their importance, to best preserve the general knowledge in the LM, and (2) contrasting the representations of the general knowledge and the full (general plus domain) knowledge, to learn an integrated representation that combines both.
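The soft-masking step (1) can be sketched as damping each attention head's parameter update in proportion to its importance to general knowledge. This is only an illustrative sketch under assumptions: the function and variable names are hypothetical, and the paper computes importance scores and applies the masks inside the transformer rather than as a bare gradient scale.

```python
import numpy as np

def soft_masked_update(grad, importance, lr=0.1):
    # grad: per-head gradients, shape (num_heads, dim).
    # importance: per-head importance to general knowledge (hypothetical
    # scores here; the paper derives them with a proxy objective).
    # Heads deemed important get smaller updates, so their general
    # knowledge is preserved during domain adaptation.
    mask = 1.0 - importance / importance.max()   # soft mask in [0, 1]
    return -lr * grad * mask[:, None]            # damped gradient step

importance = np.array([1.0, 0.5])   # head 0 fully important, head 1 half
grad = np.ones((2, 3))
update = soft_masked_update(grad, importance)
print(update)  # head 0's update is zeroed; head 1's is halved
```

The mask is soft rather than binary, so less important heads still adapt gradually instead of being frozen or fully overwritten.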

Continual Learning · General Knowledge +1

Continual Training of Language Models for Few-Shot Learning

3 code implementations • 11 Oct 2022 • Zixuan Ke, Haowei Lin, Yijia Shao, Hu Xu, Lei Shu, Bing Liu

Recent work applying large language models (LMs) has achieved impressive performance in many NLP applications.

Continual Learning · Continual Pretraining +2

Efficient Out-of-Distribution Detection via CVAE Data Generation

no code implementations • 29 Sep 2021 • Mengyu Wang, Yijia Shao, Haowei Lin, Wenpeng Hu, Bing Liu

Recently, contrastive loss with data augmentation and pseudo class creation has been shown to produce markedly better results for out-of-distribution (OOD) detection than previous methods.

Data Augmentation · Out-of-Distribution Detection +1
