no code implementations • 15 Jul 2024 • Zitong Lu, Yile Wang
To answer this question, this study proposes ReAlnet-fMRI, a model based on the state-of-the-art vision model CORnet but optimized using human fMRI data through a multi-layer encoding-based alignment framework.
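A minimal sketch of what a multi-layer encoding-based alignment objective could look like is given below; the small stand-in backbone, the per-layer linear encoders, and the loss weighting are illustrative assumptions, not the authors' ReAlnet-fMRI implementation.

```python
# Sketch: per-layer linear encoders map model activations to measured fMRI
# responses, and the alignment loss is added to the usual task loss.
# Layer choice, encoder form, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn

class StandInBackbone(nn.Module):
    """Small CNN standing in for a CORnet-style vision model."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.v1 = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU())
        self.v4 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.it = nn.Sequential(nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.head = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x):
        a1 = self.v1(x)
        a4 = self.v4(a1)
        ait = self.it(a4)
        return {"v1": a1, "v4": a4, "it": ait, "logits": self.head(ait)}

class LayerEncoders(nn.Module):
    """One linear encoder per aligned layer, predicting voxel responses."""
    def __init__(self, feat_dims, n_voxels):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {name: nn.Linear(dim, n_voxels) for name, dim in feat_dims.items()}
        )

    def forward(self, acts):
        return {name: enc(acts[name].flatten(1)) for name, enc in self.encoders.items()}

def alignment_loss(pred_voxels, fmri):
    # MSE between predicted and measured voxel responses, averaged over layers.
    return sum(nn.functional.mse_loss(p, fmri) for p in pred_voxels.values()) / len(pred_voxels)

# Toy usage with random tensors in place of images, labels, and fMRI recordings.
model = StandInBackbone(num_classes=10)
feat_dims = {"v1": 16 * 112 * 112, "v4": 32 * 56 * 56, "it": 32 * 4 * 4}
encoders = LayerEncoders(feat_dims, n_voxels=200)
images, labels, fmri = torch.randn(2, 3, 224, 224), torch.randint(0, 10, (2,)), torch.randn(2, 200)

acts = model(images)
loss = nn.functional.cross_entropy(acts["logits"], labels) \
       + 0.5 * alignment_loss(encoders(acts), fmri)
loss.backward()
```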
no code implementations • 27 Feb 2024 • Xiaolong Wang, Yile Wang, Yuanchi Zhang, Fuwen Luo, Peng Li, Maosong Sun, Yang Liu
Based on the characteristics of the tasks and the strong dialogue-generation capabilities of LLMs, we propose RiC (Reasoning in Conversation), a method that focuses on solving subjective tasks through dialogue simulation.
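As a rough illustration of the dialogue-simulation idea, the sketch below first asks an LLM to simulate a short conversation about the question and then answers conditioned on that conversation; the prompts and the `chat` helper are hypothetical placeholders, not the RiC prompts or pipeline.

```python
# Illustrative sketch of answering a subjective task via dialogue simulation.
def chat(prompt: str) -> str:
    """Placeholder for a call to any instruction-following LLM."""
    raise NotImplementedError("wire this to your LLM of choice")

def answer_via_dialogue(question: str, turns: int = 3) -> str:
    # 1) Simulate a short conversation between two speakers about the question.
    dialogue = chat(
        f"Simulate a {turns}-turn conversation between two people discussing:\n"
        f"{question}\n"
        "Label the speakers A and B."
    )
    # 2) Answer the original question conditioned on the simulated dialogue.
    return chat(
        f"Conversation:\n{dialogue}\n\n"
        f"Based on the conversation above, answer the question: {question}"
    )
```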
1 code implementation • 23 Feb 2024 • Xiaolong Wang, Yile Wang, Sijie Cheng, Peng Li, Yang Liu
Recent work has made a preliminary attempt to use large language models (LLMs) to solve the stance detection task, showing promising results.
1 code implementation • 19 Feb 2024 • Yuanchi Zhang, Yile Wang, Zijun Liu, Shuo Wang, Xiaolong Wang, Peng Li, Maosong Sun, Yang Liu
While large language models (LLMs) have been pre-trained on multilingual corpora, their performance still lags behind in most languages compared to a few resource-rich languages.
no code implementations • 12 Feb 2024 • Zonghan Yang, An Liu, Zijun Liu, Kaiming Liu, Fangzhou Xiong, Yile Wang, Zeyuan Yang, Qingyuan Hu, Xinrui Chen, Zhenhe Zhang, Fuwen Luo, Zhicheng Guo, Peng Li, Yang Liu
We also conduct proof-of-concept studies by introducing realistic features to WebShop, including user profiles to demonstrate intentions, personalized reranking for complex environmental dynamics, and runtime cost statistics to reflect self-constraints.
no code implementations • 30 Jan 2024 • Zitong Lu, Yile Wang, Julie D. Golomb
Despite advancements in artificial intelligence, object recognition models still fall short of emulating visual information processing in the human brain.
1 code implementation • 22 Jan 2024 • Yile Wang, Sijie Cheng, Zixin Sun, Peng Li, Yang Liu
We propose symbol-to-language (S2L), a tuning-free method that enables large language models to solve symbol-related problems with information expressed in natural language.
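A minimal sketch of the symbol-to-language idea: first ask the model to restate the symbolic input in natural language, then solve the task on that restatement. The prompts and the `chat` helper below are hypothetical placeholders, not the S2L prompts from the paper.

```python
# Sketch: verbalize symbolic input, then solve the task on the verbalization.
def chat(prompt: str) -> str:
    """Placeholder for a call to any instruction-following LLM."""
    raise NotImplementedError("wire this to your LLM of choice")

def solve_symbolic(task: str, symbolic_input: str) -> str:
    # Step 1: translate symbols (tables, sequences, abstract tokens, ...) into prose.
    verbalized = chat(
        "Describe the following symbolic input in plain natural language, "
        f"keeping all of its information:\n{symbolic_input}"
    )
    # Step 2: solve the original task using the natural-language description.
    return chat(f"{task}\n\nInput (described in natural language):\n{verbalized}")
```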
2 code implementations • 10 Jan 2024 • Yuanchun Li, Hao Wen, Weijun Wang, Xiangyu Li, Yizhen Yuan, Guohong Liu, Jiacheng Liu, Wenxing Xu, Xiang Wang, Yi Sun, Rui Kong, Yile Wang, Hanfei Geng, Jian Luan, Xuefeng Jin, Zilong Ye, Guanjing Xiong, Fan Zhang, Xiang Li, Mengwei Xu, Zhijun Li, Peng Li, Yang Liu, Ya-Qin Zhang, Yunxin Liu
Next, we discuss several key challenges to achieve intelligent, efficient and secure Personal LLM Agents, followed by a comprehensive survey of representative solutions to address these challenges.
1 code implementation • 8 Oct 2023 • Yile Wang, Peng Li, Maosong Sun, Yang Liu
Large language models (LLMs) have shown superior performance without task-specific fine-tuning.
1 code implementation • 28 May 2023 • Zhicheng Guo, Sijie Cheng, Yile Wang, Peng Li, Yang Liu
There are two main challenges to leveraging retrieval-augmented methods for NKI tasks: 1) the demand for diverse relevance score functions and 2) the dilemma between training cost and task performance.
1 code implementation • CVPR 2023 • Jiangbin Zheng, Yile Wang, Cheng Tan, Siyuan Li, Ge Wang, Jun Xia, Yidong Chen, Stan Z. Li
In this work, we propose a novel contrastive visual-textual transformation for SLR, CVT-SLR, to fully explore the pretrained knowledge of both the visual and language modalities.
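As a generic illustration of visual-textual contrastive alignment, the sketch below computes a symmetric InfoNCE loss over paired video/text embeddings; it shows the underlying idea only and is not the exact CVT-SLR objective.

```python
# Generic contrastive visual-textual alignment loss (symmetric InfoNCE).
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(visual, textual, temperature=0.07):
    """visual, textual: (batch, dim) embeddings of paired samples."""
    v = F.normalize(visual, dim=-1)
    t = F.normalize(textual, dim=-1)
    logits = v @ t.T / temperature        # pairwise cross-modal similarities
    targets = torch.arange(v.size(0))     # i-th video matches i-th text
    # Symmetric cross-entropy: video-to-text and text-to-video directions.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2

# Toy usage with random features standing in for encoder outputs.
loss = contrastive_alignment_loss(torch.randn(8, 256), torch.randn(8, 256))
```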
1 code implementation • ACL 2022 • Jiangbin Zheng, Yile Wang, Ge Wang, Jun Xia, Yufei Huang, Guojiang Zhao, Yue Zhang, Stan Z. Li
Although contextualized embeddings generated from large-scale pre-trained models perform well in many tasks, traditional static embeddings (e.g., Skip-gram, Word2Vec) still play an important role in low-resource and lightweight settings due to their low computational cost, ease of deployment, and stability.
Ranked #1 on Word Similarity on WS353
1 code implementation • 28 Sep 2022 • Zeqiang Wang, Yile Wang, Jiageng Wu, Zhiyang Teng, Jie Yang
Designed in a hierarchical structure, YATO supports free combinations of three types of widely used features including 1) traditional neural networks (CNN, RNN, etc.
no code implementations • 15 Sep 2022 • Ziqi Zhang, Yile Wang, Yue Zhang, Donglin Wang
Experimental results show that our RL pre-trained models achieve performance close to models trained with the LM objective, indicating that common useful features exist across these two modalities.
1 code implementation • 8 Sep 2022 • Yile Wang, Linyi Yang, Zhiyang Teng, Ming Zhou, Yue Zhang
Transformer-based pre-trained models have advanced considerably in recent years, becoming one of the most important backbones in natural language processing.
no code implementations • 20 Aug 2022 • Yile Wang, Yue Zhang
We quantify how much the contextualized embeddings of each word sense vary across contexts in typical pre-trained models.
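A small sketch of one way to measure such variation: encode several sentences containing the same word, extract that word's hidden state from each, and compare them. The model, sentences, and similarity metric below are illustrative choices, not necessarily the paper's setup.

```python
# Sketch: how much does a word's contextualized embedding vary across contexts?
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

word = "bank"  # a single subword in this vocabulary, used in different senses
sentences = [
    "She sat on the bank of the river.",
    "He deposited the check at the bank.",
    "The bank approved the loan yesterday.",
]

word_id = tokenizer.convert_tokens_to_ids(word)
vectors = []
with torch.no_grad():
    for sent in sentences:
        enc = tokenizer(sent, return_tensors="pt")
        hidden = model(**enc).last_hidden_state[0]            # (seq_len, hidden)
        pos = (enc["input_ids"][0] == word_id).nonzero()[0].item()
        vectors.append(hidden[pos])

vecs = torch.stack(vectors)
# Pairwise cosine similarities: lower similarity means larger contextual variation.
sims = torch.nn.functional.cosine_similarity(vecs.unsqueeze(1), vecs.unsqueeze(0), dim=-1)
print(sims)
```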
1 code implementation • COLING 2020 • Yile Wang, Leyang Cui, Yue Zhang
Contextualized representations give significantly improved results for a wide range of NLP tasks.
2 code implementations • 16 Jul 2020 • Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, Yue Zhang
Machine reading is a fundamental task for testing the capability of natural language understanding, which is closely related to human cognition in many aspects.
no code implementations • 7 Nov 2019 • Yile Wang, Leyang Cui, Yue Zhang
Contextualized embeddings such as BERT can serve as strong input representations for NLP tasks, outperforming their static embedding counterparts such as skip-gram, CBOW, and GloVe.