Search Results for author: Yile Wang

Found 19 papers, 12 papers with code

Teaching CORnet Human fMRI Representations for Enhanced Model-Brain Alignment

no code implementations • 15 Jul 2024 • Zitong Lu, Yile Wang

To answer this question, this study proposed ReAlnet-fMRI, a model based on the SOTA vision model CORnet but optimized using human fMRI data through a multi-layer encoding-based alignment framework.

EEG • Object Recognition

Reasoning in Conversation: Solving Subjective Tasks through Dialogue Simulation for Large Language Models

no code implementations • 27 Feb 2024 • Xiaolong Wang, Yile Wang, Yuanchi Zhang, Fuwen Luo, Peng Li, Maosong Sun, Yang Liu

Based on the characteristics of the tasks and the strong dialogue-generation capabilities of LLMs, we propose RiC (Reasoning in Conversation), a method that focuses on solving subjective tasks through dialogue simulation.

Dark Humor Detection • Dialogue Generation • +3

DEEM: Dynamic Experienced Expert Modeling for Stance Detection

1 code implementation • 23 Feb 2024 • Xiaolong Wang, Yile Wang, Sijie Cheng, Peng Li, Yang Liu

Recent work has made a preliminary attempt to use large language models (LLMs) to solve the stance detection task, showing promising results.

Stance Detection

Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages

1 code implementation • 19 Feb 2024 • Yuanchi Zhang, Yile Wang, Zijun Liu, Shuo Wang, Xiaolong Wang, Peng Li, Maosong Sun, Yang Liu

While large language models (LLMs) have been pre-trained on multilingual corpora, their performance still lags behind in most languages compared to a few resource-rich languages.

Transfer Learning

Towards Unified Alignment Between Agents, Humans, and Environment

no code implementations • 12 Feb 2024 • Zonghan Yang, An Liu, Zijun Liu, Kaiming Liu, Fangzhou Xiong, Yile Wang, Zeyuan Yang, Qingyuan Hu, Xinrui Chen, Zhenhe Zhang, Fuwen Luo, Zhicheng Guo, Peng Li, Yang Liu

We also conduct proof-of-concept studies by introducing realistic features to WebShop, including user profiles to demonstrate intentions, personalized reranking for complex environmental dynamics, and runtime cost statistics to reflect self-constraints.

Decision Making

Achieving More Human Brain-Like Vision via Human EEG Representational Alignment

no code implementations • 30 Jan 2024 • Zitong Lu, Yile Wang, Julie D. Golomb

Despite advancements in artificial intelligence, object recognition models still lag behind in emulating visual information processing in human brains.

Adversarial Robustness • EEG • +1

Speak It Out: Solving Symbol-Related Problems with Symbol-to-Language Conversion for Language Models

1 code implementation • 22 Jan 2024 • Yile Wang, Sijie Cheng, Zixin Sun, Peng Li, Yang Liu

We propose symbol-to-language (S2L), a tuning-free method that enables large language models to solve symbol-related problems with information expressed in natural language.

ARC • Property Prediction • +2

Personal LLM Agents: Insights and Survey about the Capability, Efficiency and Security

2 code implementations • 10 Jan 2024 • Yuanchun Li, Hao Wen, Weijun Wang, Xiangyu Li, Yizhen Yuan, Guohong Liu, Jiacheng Liu, Wenxing Xu, Xiang Wang, Yi Sun, Rui Kong, Yile Wang, Hanfei Geng, Jian Luan, Xuefeng Jin, Zilong Ye, Guanjing Xiong, Fan Zhang, Xiang Li, Mengwei Xu, Zhijun Li, Peng Li, Yang Liu, Ya-Qin Zhang, Yunxin Liu

Next, we discuss several key challenges to achieve intelligent, efficient and secure Personal LLM Agents, followed by a comprehensive survey of representative solutions to address these challenges.

Self-Knowledge Guided Retrieval Augmentation for Large Language Models

1 code implementation • 8 Oct 2023 • Yile Wang, Peng Li, Maosong Sun, Yang Liu

Large language models (LLMs) have shown superior performance without task-specific fine-tuning.

Question Answering • Retrieval • +1

Prompt-Guided Retrieval Augmentation for Non-Knowledge-Intensive Tasks

1 code implementation • 28 May 2023 • Zhicheng Guo, Sijie Cheng, Yile Wang, Peng Li, Yang Liu

There are two main challenges to leveraging retrieval-augmented methods for NKI tasks: 1) the demand for diverse relevance score functions and 2) the dilemma between training cost and task performance.

Retrieval

CVT-SLR: Contrastive Visual-Textual Transformation for Sign Language Recognition with Variational Alignment

1 code implementation • CVPR 2023 • Jiangbin Zheng, Yile Wang, Cheng Tan, Siyuan Li, Ge Wang, Jun Xia, Yidong Chen, Stan Z. Li

In this work, we propose a novel contrastive visual-textual transformation for SLR, CVT-SLR, to fully explore the pretrained knowledge of both the visual and language modalities.

cross-modal alignment • Sign Language Recognition

Using Context-to-Vector with Graph Retrofitting to Improve Word Embeddings

1 code implementation • ACL 2022 • Jiangbin Zheng, Yile Wang, Ge Wang, Jun Xia, Yufei Huang, Guojiang Zhao, Yue Zhang, Stan Z. Li

Although contextualized embeddings generated from large-scale pre-trained models perform well in many tasks, traditional static embeddings (e.g., Skip-gram, Word2Vec) still play an important role in low-resource and lightweight settings due to their low computational cost, ease of deployment, and stability.

Word Embeddings • Word Similarity

YATO: Yet Another deep learning based Text analysis Open toolkit

1 code implementation • 28 Sep 2022 • Zeqiang Wang, Yile Wang, Jiageng Wu, Zhiyang Teng, Jie Yang

Designed in a hierarchical structure, YATO supports free combinations of three types of widely used features, including traditional neural networks (CNN, RNN, etc.).

Deep Learning

Can Offline Reinforcement Learning Help Natural Language Understanding?

no code implementations • 15 Sep 2022 • Ziqi Zhang, Yile Wang, Yue Zhang, Donglin Wang

Experimental results show that our RL pre-trained models achieve performance close to that of models trained with the LM objective, suggesting that common useful features exist across the two modalities.

Language Modelling • Natural Language Understanding • +4

Pre-Training a Graph Recurrent Network for Language Representation

1 code implementation • 8 Sep 2022 • Yile Wang, Linyi Yang, Zhiyang Teng, Ming Zhou, Yue Zhang

Transformer-based pre-trained models have advanced greatly in recent years, becoming one of the most important backbones in natural language processing.

Language Modelling • Sentence • +2

Lost in Context? On the Sense-wise Variance of Contextualized Word Embeddings

no code implementations • 20 Aug 2022 • Yile Wang, Yue Zhang

We quantify how much the contextualized embeddings of each word sense vary across contexts in typical pre-trained models.

Position • Sentence • +2

Does Chinese BERT Encode Word Structure?

1 code implementation • COLING 2020 • Yile Wang, Leyang Cui, Yue Zhang

Contextualized representations give significantly improved results for a wide range of NLP tasks.

Chunking • Natural Language Inference • +2

LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning

2 code implementations • 16 Jul 2020 • Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, Yue Zhang

Machine reading is a fundamental task for testing the capability of natural language understanding, which is closely related to human cognition in many aspects.

Logical Reasoning • Machine Reading Comprehension • +1

How Can BERT Help Lexical Semantics Tasks?

no code implementations • 7 Nov 2019 • Yile Wang, Leyang Cui, Yue Zhang

Contextualized embeddings such as BERT can serve as strong input representations to NLP tasks, outperforming their static embedding counterparts such as skip-gram, CBOW, and GloVe.

Sentence • Word Embeddings
