Search Results for author: Shaonan Wang

Found 22 papers, 5 papers with code

Cross-Modal Cloze Task: A New Task to Brain-to-Word Decoding

1 code implementation Findings (ACL) 2022 Shuxian Zou, Shaonan Wang, Jiajun Zhang, Chengqing Zong

More importantly, it demonstrates that it is feasible to decode a specific word from a large vocabulary based on its corresponding brain activity.

Binary Classification • Language Modelling

How Does the Experimental Setting Affect the Conclusions of Neural Encoding Models?

no code implementations LREC 2022 Xiaohan Zhang, Shaonan Wang, Chengqing Zong

Based on these results, we recommend a block-wise cross-validation training method and an adequate data size to improve the performance of linear encoding models.
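The recommended block-wise cross-validation keeps temporally contiguous samples together, so that autocorrelated neighbors do not leak between training and test sets as they would under random shuffling. A minimal sketch, assuming time-ordered samples; the function name and fold layout are illustrative, not taken from the paper:

```python
def blockwise_folds(n_samples, n_folds):
    """Yield (train, test) index lists where each test set is one
    contiguous block of time points, never a random shuffle."""
    block = n_samples // n_folds
    folds = []
    for i in range(n_folds):
        start = i * block
        stop = n_samples if i == n_folds - 1 else start + block
        test = list(range(start, stop))
        train = [t for t in range(n_samples) if t < start or t >= stop]
        folds.append((train, test))
    return folds

# Example: 100 time points split into 5 contiguous held-out blocks of 20.
folds = blockwise_folds(100, 5)
```

Each fold holds out one unbroken stretch of the recording, which is the property the block-wise scheme relies on.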

MapGuide: A Simple yet Effective Method to Reconstruct Continuous Language from Brain Activities

no code implementations 26 Mar 2024 Xinpei Zhao, Jingyuan Sun, Shaonan Wang, Jing Ye, Xiaohan Zhang, Chengqing Zong

In contrast, we propose a simple yet effective method that guides text reconstruction by directly comparing candidate texts with the text embeddings predicted from brain activities.

Text Generation
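The guided-reconstruction idea above reduces, at its core, to ranking candidate texts by their similarity to the embedding predicted from brain activity. A toy sketch under that assumption; this is illustrative, not the paper's implementation, and all names are hypothetical:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def pick_best(predicted, candidates):
    """Return the candidate text whose embedding is closest to the
    embedding predicted from brain activity."""
    return max(candidates, key=lambda c: cosine(predicted, c[1]))[0]

# Toy example with 2-d "embeddings": the second candidate is nearly
# parallel to the predicted embedding, so it wins.
best = pick_best([1.0, 0.0], [("unrelated", [0.0, 1.0]), ("match", [1.0, 0.1])])
```

In practice the candidates would come from a text generator and the embeddings from a learned brain-to-embedding mapping, but the comparison step looks like the above.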

Computational Models to Study Language Processing in the Human Brain: A Survey

no code implementations 20 Mar 2024 Shaonan Wang, Jingyuan Sun, Yunhao Zhang, Nan Lin, Marie-Francine Moens, Chengqing Zong

Despite differing from the human language processing mechanism in implementation and algorithms, current language models demonstrate remarkable language capabilities that match or even surpass those of humans.

MulCogBench: A Multi-modal Cognitive Benchmark Dataset for Evaluating Chinese and English Computational Language Models

no code implementations 2 Mar 2024 Yunhao Zhang, Xiaohan Zhang, Chong Li, Shaonan Wang, Chengqing Zong

Results show that language models share significant similarities with human cognitive data, and that the similarity patterns are modulated by data modality and stimulus complexity.

Align after Pre-train: Improving Multilingual Generative Models with Cross-lingual Alignment

no code implementations 14 Nov 2023 Chong Li, Shaonan Wang, Jiajun Zhang, Chengqing Zong

It aligns the internal sentence representations across different languages via multilingual contrastive learning and aligns model outputs by answering prompts in different languages.

Contrastive Learning • Sentence
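The sentence-level alignment described above is contrastive in nature: a source sentence's embedding should score higher against its translation than against other sentences. A minimal InfoNCE-style sketch, assuming sentence embeddings are already computed; the names and temperature value are illustrative, not from the paper:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def contrastive_loss(anchor, candidates, pos_index, temperature=0.1):
    """Cross-entropy of a softmax over similarities: the loss is low when
    the positive (the anchor's translation) is the most similar candidate."""
    logits = [cosine(anchor, c) / temperature for c in candidates]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[pos_index] / sum(exps))

# Toy check: treating the near-parallel vector as the positive gives a
# smaller loss than treating the orthogonal one as the positive.
anchor = [1.0, 0.0]
candidates = [[1.0, 0.05], [0.0, 1.0]]
aligned = contrastive_loss(anchor, candidates, pos_index=0)
misaligned = contrastive_loss(anchor, candidates, pos_index=1)
```

Minimizing this loss pulls translations together in the shared representation space while pushing non-translations apart, which is the cross-lingual alignment effect the excerpt describes.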

Interpreting and Exploiting Functional Specialization in Multi-Head Attention under Multi-task Learning

1 code implementation 16 Oct 2023 Chong Li, Shaonan Wang, Yunhao Zhang, Jiajun Zhang, Chengqing Zong

We further propose a simple multi-task training method to increase functional specialization and mitigate negative information transfer in multi-task learning.

Multi-Task Learning

Language Cognition and Language Computation -- Human and Machine Language Understanding

no code implementations 12 Jan 2023 Shaonan Wang, Nai Ding, Nan Lin, Jiajun Zhang, Chengqing Zong

Language understanding is a key scientific issue in the fields of cognitive science and computer science.

Improved Target-specific Stance Detection on Social Media Platforms by Delving into Conversation Threads

no code implementations 6 Nov 2022 Yupeng Li, Haorui He, Shaonan Wang, Francis C. M. Lau, Yunya Song

In response, we address a new task called conversational stance detection, which is to infer the stance towards a given target (e.g., COVID-19 vaccination) from a data instance and its corresponding conversation thread.

Benchmarking • Opinion Mining +1

Multiple Sequential Learning Tasks Represented in Recurrent Neural Networks

no code implementations NeurIPS Workshop AI4Science 2021 Shaonan Wang, Bingyu Liu

From the computational perspective, we hypothesize that the working mechanism of a multitask model can shed light on that of the brain.

Towards Brain-to-Text Generation: Neural Decoding with Pre-trained Encoder-Decoder Models

no code implementations NeurIPS Workshop AI4Science 2021 Shuxian Zou, Shaonan Wang, Jiajun Zhang, Chengqing Zong

However, most existing studies have focused on discriminating which of two stimuli corresponds to a given brain image, which is far from directly generating text from neural activities.

Text Generation

NCLS: Neural Cross-Lingual Summarization

1 code implementation IJCNLP 2019 Junnan Zhu, Qian Wang, Yining Wang, Yu Zhou, Jiajun Zhang, Shaonan Wang, Chengqing Zong

Moreover, we propose to further improve NCLS by incorporating two related tasks, monolingual summarization and machine translation, into the training process of CLS under multi-task learning.

Machine Translation • Multi-Task Learning +1

Understanding Memory Modules on Learning Simple Algorithms

no code implementations 1 Jul 2019 Kexin Wang, Yu Zhou, Shaonan Wang, Jiajun Zhang, Chengqing Zong

Recent work has shown that memory modules are crucial for the generalization ability of neural networks on learning simple algorithms.

Dimensionality Reduction

Associative Multichannel Autoencoder for Multimodal Word Representation

1 code implementation EMNLP 2018 Shaonan Wang, Jiajun Zhang, Chengqing Zong

In this paper we address the problem of learning multimodal word representations by integrating textual, visual and auditory inputs.

Memory, Show the Way: Memory Based Few Shot Word Representation Learning

no code implementations EMNLP 2018 Jingyuan Sun, Shaonan Wang, Chengqing Zong

Distributional semantic models (DSMs) generally require sufficient examples for a word to learn a high quality representation.

General Classification • NER +4

Learning Multimodal Word Representation via Dynamic Fusion Methods

no code implementations 2 Jan 2018 Shaonan Wang, Jiajun Zhang, Chengqing Zong

Multimodal models have been shown to outperform text-based models at learning semantic word representations.

Investigating Inner Properties of Multimodal Representation and Semantic Compositionality with Brain-based Componential Semantics

no code implementations 15 Nov 2017 Shaonan Wang, Jiajun Zhang, Nan Lin, Chengqing Zong

Considering that multimodal models are originally motivated by human concept representations, we assume that correlating multimodal representations with brain-based semantics would interpret their inner properties to answer the above questions.

Learning Semantic Representations • Natural Language Understanding

Exploiting Word Internal Structures for Generic Chinese Sentence Representation

no code implementations EMNLP 2017 Shaonan Wang, Jiajun Zhang, Chengqing Zong

We introduce a novel mixed character-word architecture to improve Chinese sentence representations by utilizing the rich semantic information of word-internal structures.

Sentence • Sentence Similarity

Learning Sentence Representation with Guidance of Human Attention

no code implementations 29 Sep 2016 Shaonan Wang, Jiajun Zhang, Chengqing Zong

Recently, much progress has been made in learning general-purpose sentence representations that can be used across domains.

POS • Sentence
