1 code implementation • Findings (ACL) 2022 • Shuxian Zou, Shaonan Wang, Jiajun Zhang, Chengqing Zong
More importantly, it demonstrates that it is feasible to decode a specific word from a large vocabulary given its corresponding brain activity.
no code implementations • LREC 2022 • Xiaohan Zhang, Shaonan Wang, Chengqing Zong
Based on these results, we suggest a block-wise cross-validation training method and an adequate data size to improve the performance of linear encoding models.
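Below is a minimal sketch of block-wise cross-validation for a linear encoding model, assuming temporally ordered fMRI data; the ridge penalty, block count, and random data are illustrative assumptions, not the paper's setup.

```python
# Block-wise cross-validation sketch: contiguous time points share a block,
# so autocorrelated samples never straddle the train/test split.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
n_samples, n_features, n_voxels = 600, 50, 100
X = rng.standard_normal((n_samples, n_features))   # stimulus features
Y = rng.standard_normal((n_samples, n_voxels))     # voxel responses

n_blocks = 10
blocks = np.repeat(np.arange(n_blocks), n_samples // n_blocks)

scores = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, Y, groups=blocks):
    model = Ridge(alpha=1.0).fit(X[train_idx], Y[train_idx])
    pred = model.predict(X[test_idx])
    # Per-voxel Pearson correlation between predicted and observed responses.
    r = [np.corrcoef(pred[:, v], Y[test_idx, v])[0, 1] for v in range(n_voxels)]
    scores.append(np.mean(r))
print(f"mean encoding correlation: {np.mean(scores):.3f}")
```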
no code implementations • 26 Mar 2024 • Xinpei Zhao, Jingyuan Sun, Shaonan Wang, Jing Ye, Xiaohan Zhang, Chengqing Zong
In contrast, we propose a simple yet effective method that guides text reconstruction by directly comparing the embeddings of generated text with the text embeddings predicted from brain activities.
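A minimal sketch of the comparison idea, assuming a linear brain-to-embedding mapping and cosine ranking of candidate texts; both choices are illustrative, not the authors' exact pipeline.

```python
# Map brain activity into text-embedding space, then rank candidate texts
# by how close their embeddings are to the predicted embedding.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_voxels, emb_dim = 200, 500, 64
brain_train = rng.standard_normal((n_train, n_voxels))
text_emb_train = rng.standard_normal((n_train, emb_dim))

# Learn a linear map from brain responses to text embeddings.
mapper = Ridge(alpha=10.0).fit(brain_train, text_emb_train)

def rank_candidates(brain_response, candidate_embs):
    """Score candidate text embeddings by cosine similarity to the
    embedding predicted from a single brain response."""
    pred = mapper.predict(brain_response[None, :])[0]
    pred = pred / np.linalg.norm(pred)
    cands = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    return np.argsort(cands @ pred)[::-1]   # best candidate first

order = rank_candidates(rng.standard_normal(n_voxels),
                        rng.standard_normal((10, emb_dim)))
print("candidate ranking:", order)
```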
no code implementations • 20 Mar 2024 • Shaonan Wang, Jingyuan Sun, Yunhao Zhang, Nan Lin, Marie-Francine Moens, Chengqing Zong
Despite differing from the human language processing mechanism in implementation and algorithms, current language models demonstrate remarkable language capabilities that match or even surpass those of humans.
no code implementations • 2 Mar 2024 • Yunhao Zhang, Xiaohan Zhang, Chong Li, Shaonan Wang, Chengqing Zong
Results show that language models share significant similarities with human cognitive data, and that the similarity patterns are modulated by data modality and stimulus complexity.
no code implementations • 22 Feb 2024 • Chengzhang Yu, Xianjun Yang, Wenxia Bao, Shaonan Wang, Zhiming Yao
In environments where RGB images are inadequate, pressure maps are a viable alternative and have garnered scholarly attention.
no code implementations • 14 Nov 2023 • Chong Li, Shaonan Wang, Jiajun Zhang, Chengqing Zong
It aligns the internal sentence representations across different languages via multilingual contrastive learning and aligns model outputs by answering prompts in different languages.
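A minimal PyTorch sketch of the contrastive-alignment idea, treating translation pairs as positives in an InfoNCE-style loss; the encoder outputs, batch size, and temperature are placeholder assumptions.

```python
# Multilingual contrastive alignment: pull paired sentence representations
# from two languages together, push apart all other sentences in the batch.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(src_reps, tgt_reps, temperature=0.05):
    """InfoNCE loss: translation pairs are positives, all other
    in-batch sentences serve as negatives."""
    src = F.normalize(src_reps, dim=-1)
    tgt = F.normalize(tgt_reps, dim=-1)
    logits = src @ tgt.T / temperature           # (batch, batch) similarities
    labels = torch.arange(src.size(0))           # i-th source matches i-th target
    return F.cross_entropy(logits, labels)

src = torch.randn(8, 256)   # e.g. English sentence representations
tgt = torch.randn(8, 256)   # their translations in another language
print(contrastive_alignment_loss(src, tgt).item())
```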
1 code implementation • 16 Oct 2023 • Chong Li, Shaonan Wang, Yunhao Zhang, Jiajun Zhang, Chengqing Zong
We further propose a simple multi-task training method to increase functional specialization and mitigate negative information transfer in multi-task learning.
1 code implementation • NeurIPS 2023 • Jingyuan Sun, Mingxiao Li, Zijiao Chen, Yunhao Zhang, Shaonan Wang, Marie-Francine Moens
The second phase tunes the feature learner to attend to neural activation patterns most informative for visual reconstruction with guidance from an image auto-encoder.
Ranked #1 on Brain Visual Reconstruction from fMRI on GOD
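A minimal sketch of the second phase's guidance idea: tuning an fMRI feature learner toward the latents of a frozen image auto-encoder. All modules and sizes here are toy assumptions, not the paper's architecture.

```python
# Tune the fMRI feature learner so its output matches the latent that a
# frozen image encoder assigns to the image the subject actually saw.
import torch
import torch.nn as nn

fmri_dim, latent_dim = 4096, 128
feature_learner = nn.Sequential(nn.Linear(fmri_dim, 512), nn.GELU(),
                                nn.Linear(512, latent_dim))
image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, latent_dim))
for p in image_encoder.parameters():   # auto-encoder stays frozen as guidance
    p.requires_grad_(False)

opt = torch.optim.Adam(feature_learner.parameters(), lr=1e-4)
fmri = torch.randn(16, fmri_dim)          # brain responses
images = torch.randn(16, 3, 32, 32)       # the images that evoked them

target = image_encoder(images)            # frozen auto-encoder latents
loss = nn.functional.mse_loss(feature_learner(fmri), target)
opt.zero_grad(); loss.backward(); opt.step()
print(f"alignment loss: {loss.item():.3f}")
```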
no code implementations • 12 Jan 2023 • Shaonan Wang, Nai Ding, Nan Lin, Jiajun Zhang, Chengqing Zong
Language understanding is a key scientific issue in the fields of cognitive science and computer science.
no code implementations • 6 Nov 2022 • Yupeng Li, Haorui He, Shaonan Wang, Francis C. M. Lau, Yunya Song
In response, we address a new task called conversational stance detection, which aims to infer the stance towards a given target (e.g., COVID-19 vaccination) from a data instance and its corresponding conversation thread.
no code implementations • NeurIPS Workshop AI4Science 2021 • Shaonan Wang, Bingyu Liu
From a computational perspective, we hypothesize that the working mechanism of a multi-task model can offer a possible account of the working mechanism of the brain.
no code implementations • NeurIPS Workshop AI4Science 2021 • Shuxian Zou, Shaonan Wang, Jiajun Zhang, Chengqing Zong
However, most of the existing studies have focused on discriminating which of two stimuli corresponds to a given brain image, which is far from directly generating text from neural activities.
no code implementations • COLING 2020 • Jingyuan Sun, Shaonan Wang, Jiajun Zhang, Chengqing Zong
The framework is based on language models and can be smoothly built with different language model architectures.
1 code implementation • IJCNLP 2019 • Junnan Zhu, Qian Wang, Yining Wang, Yu Zhou, Jiajun Zhang, Shaonan Wang, Cheng-qing Zong
Moreover, we propose to further improve NCLS by incorporating two related tasks, monolingual summarization and machine translation, into the training process of CLS under multi-task learning.
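A minimal sketch of the multi-task setup, assuming a shared encoder with task-specific output heads and equal loss weights; all components here are toy stand-ins for the paper's models.

```python
# Multi-task training for cross-lingual summarization (CLS) with monolingual
# summarization (MS) and machine translation (MT) as auxiliary tasks.
import torch
import torch.nn as nn

vocab, d = 1000, 64
shared_encoder = nn.Embedding(vocab, d)           # stands in for a real encoder
heads = nn.ModuleDict({t: nn.Linear(d, vocab) for t in ("cls", "ms", "mt")})
opt = torch.optim.Adam(list(shared_encoder.parameters()) +
                       list(heads.parameters()), lr=1e-3)

batches = {t: (torch.randint(0, vocab, (8, 20)),      # source tokens
               torch.randint(0, vocab, (8, 20)))      # target tokens
           for t in heads}

opt.zero_grad()
total = 0.0
for task, (src, tgt) in batches.items():
    logits = heads[task](shared_encoder(src))         # (8, 20, vocab)
    loss = nn.functional.cross_entropy(logits.reshape(-1, vocab),
                                       tgt.reshape(-1))
    total = total + loss                              # equal task weighting
total.backward(); opt.step()
print(f"combined multi-task loss: {total.item():.3f}")
```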
no code implementations • 1 Jul 2019 • Kexin Wang, Yu Zhou, Shaonan Wang, Jiajun Zhang, Cheng-qing Zong
Recent work has shown that memory modules are crucial for the generalization ability of neural networks on learning simple algorithms.
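A minimal sketch of a differentiable key-value memory read, one standard form such memory modules take; the soft-attention addressing shown is an illustrative choice, not necessarily this paper's design.

```python
# Key-value memory: attend over learned slots by key similarity, then
# return the attention-weighted sum of stored values.
import torch
import torch.nn.functional as F

class KeyValueMemory(torch.nn.Module):
    def __init__(self, slots=32, key_dim=16, val_dim=16):
        super().__init__()
        self.keys = torch.nn.Parameter(torch.randn(slots, key_dim))
        self.values = torch.nn.Parameter(torch.randn(slots, val_dim))

    def forward(self, query):
        attn = F.softmax(query @ self.keys.T, dim=-1)   # (batch, slots)
        return attn @ self.values                        # (batch, val_dim)

mem = KeyValueMemory()
print(mem(torch.randn(4, 16)).shape)   # torch.Size([4, 16])
```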
1 code implementation • EMNLP 2018 • Shaonan Wang, Jiajun Zhang, Cheng-qing Zong
In this paper we address the problem of learning multimodal word representations by integrating textual, visual and auditory inputs.
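A minimal sketch of one way to integrate the three modalities, via concatenation and a learned projection; this fusion scheme is assumed for illustration rather than taken from the paper.

```python
# Fuse textual, visual and auditory word vectors into one multimodal
# representation by concatenating and projecting into a joint space.
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    def __init__(self, text_d=300, vis_d=128, aud_d=128, out_d=300):
        super().__init__()
        self.proj = nn.Linear(text_d + vis_d + aud_d, out_d)

    def forward(self, text_vec, vis_vec, aud_vec):
        fused = torch.cat([text_vec, vis_vec, aud_vec], dim=-1)
        return torch.tanh(self.proj(fused))

fusion = MultimodalFusion()
word_rep = fusion(torch.randn(1, 300), torch.randn(1, 128), torch.randn(1, 128))
print(word_rep.shape)   # torch.Size([1, 300])
```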
no code implementations • EMNLP 2018 • Jingyuan Sun, Shaonan Wang, Cheng-qing Zong
Distributional semantic models (DSMs) generally require sufficient examples for a word to learn a high quality representation.
no code implementations • 2 Jan 2018 • Shaonan Wang, Jiajun Zhang, Cheng-qing Zong
Multimodal models have been shown to outperform text-based models at learning semantic word representations.
no code implementations • 15 Nov 2017 • Shaonan Wang, Jiajun Zhang, Nan Lin, Cheng-qing Zong
Considering that multimodal models are originally motivated by human concept representations, we assume that correlating multimodal representations with brain-based semantics would interpret their inner properties to answer the above questions.
Tasks: Learning Semantic Representations • Natural Language Understanding
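A minimal sketch of representational similarity analysis (RSA), a standard way to correlate model representations with brain-based semantics; the random vectors and distance metrics are illustrative assumptions.

```python
# RSA: compare the pairwise-dissimilarity structure of model word vectors
# against that of brain responses to the same words.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_words = 40
model_vecs = rng.standard_normal((n_words, 300))   # multimodal word vectors
brain_vecs = rng.standard_normal((n_words, 500))   # voxel patterns per word

# Pairwise dissimilarity matrices (condensed form) for each space.
model_rdm = pdist(model_vecs, metric="cosine")
brain_rdm = pdist(brain_vecs, metric="correlation")

rho, p = spearmanr(model_rdm, brain_rdm)
print(f"model-brain RSA: rho={rho:.3f}, p={p:.3f}")
```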
no code implementations • EMNLP 2017 • Shaonan Wang, Jiajun Zhang, Cheng-qing Zong
We introduce a novel mixed character-word architecture to improve Chinese sentence representations by utilizing the rich semantic information in word-internal structures.
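A minimal sketch of mixing word-level and character-level embeddings with a learned gate; the vocabulary sizes and mean-pooling composition are toy assumptions, not the paper's exact architecture.

```python
# Mixed character-word representation: combine a word vector with a
# composition of its character vectors via a learned gate.
import torch
import torch.nn as nn

n_words, n_chars, d = 5000, 3000, 128
word_emb = nn.Embedding(n_words, d)
char_emb = nn.Embedding(n_chars, d)
gate = nn.Linear(2 * d, d)

def mixed_rep(word_id, char_ids):
    """Gate between a word embedding and the mean of its character embeddings."""
    w = word_emb(word_id)[0]                    # (d,) word vector
    c = char_emb(char_ids).mean(dim=0)          # (d,) composed characters
    g = torch.sigmoid(gate(torch.cat([w, c])))  # learned mixing gate
    return g * w + (1 - g) * c

rep = mixed_rep(torch.tensor([42]), torch.tensor([7, 19]))  # a 2-char word
print(rep.shape)   # torch.Size([128])
```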
no code implementations • 29 Sep 2016 • Shaonan Wang, Jiajun Zhang, Cheng-qing Zong
Recently, much progress has been made in learning general-purpose sentence representations that can be used across domains.