1 code implementation • ACL 2022 • Peng Qian, Roger Levy
We hypothesize that human performance is better characterized by flexible inference through composition of basic computational motifs available to the human language user.
no code implementations • 22 Mar 2024 • Dazhong Rong, Guoyao Yu, Shuheng Shen, Xinyi Fu, Peng Qian, Jianhai Chen, Qinming He, Xing Fu, Weiqiang Wang
To gather a significant quantity of annotated training data for high-performance image classification models, numerous companies opt to enlist third-party providers to label their unlabeled data.
1 code implementation • 22 Feb 2024 • Ningyu Xu, Qi Zhang, Menghan Zhang, Peng Qian, Xuanjing Huang
Here we re-purpose the reverse dictionary task as a case study to probe LLMs' capacity for conceptual inference.
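The reverse dictionary task itself is easy to illustrate: the model is given a definition and must name the word being defined. The sketch below uses a toy stand-in for the LLM; the prompt wording, example definitions, and `query_model` stub are illustrative assumptions, not the paper's protocol.

```python
# Minimal reverse-dictionary probe: give the model a definition and check
# whether it returns the target word. The model call is a stub; in practice
# it would be replaced by an actual LLM query.

def query_model(prompt: str) -> str:
    """Placeholder for an LLM call; returns a canned answer here."""
    toy_answers = {
        "a domesticated animal that barks": "dog",
        "a vehicle with two wheels powered by pedaling": "bicycle",
    }
    definition = prompt.split("Definition: ")[-1].split("\n")[0]
    return toy_answers.get(definition, "unknown")

probe_items = [
    ("a domesticated animal that barks", "dog"),
    ("a vehicle with two wheels powered by pedaling", "bicycle"),
]

correct = 0
for definition, target in probe_items:
    prompt = f"Definition: {definition}\nThe word being defined is:"
    answer = query_model(prompt).strip().lower()
    correct += int(answer == target)

print(f"Reverse-dictionary accuracy: {correct / len(probe_items):.2f}")
```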
1 code implementation • 26 Nov 2022 • Zhengjie Huang, Yunyang Huang, Peng Qian, Jianhai Chen, Qinming He
Finally, we aggregate all graph embeddings of an address into an address-level representation and feed it into a classification model to predict the address's behavior category.
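As a rough, self-contained sketch of that final aggregation step (mean pooling of per-graph embeddings followed by an off-the-shelf classifier; the embedding dimension, pooling choice, and logistic-regression head are assumptions rather than the paper's actual architecture):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def address_embedding(graph_embeddings: np.ndarray) -> np.ndarray:
    """Aggregate the embeddings of all graphs belonging to one address
    into a single address-level vector (mean pooling here)."""
    return graph_embeddings.mean(axis=0)

# Toy data: each address has a variable number of 16-d graph embeddings
# and a binary behavior label (e.g., benign vs. suspicious).
addresses = [rng.normal(size=(rng.integers(2, 6), 16)) for _ in range(200)]
labels = rng.integers(0, 2, size=200)

X = np.stack([address_embedding(a) for a in addresses])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("Training accuracy:", clf.score(X, labels))
```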
1 code implementation • NAACL 2022 • Mycal Tucker, Tiwalayo Eisape, Peng Qian, Roger Levy, Julie Shah
Recent causal probing literature reveals when language models and syntactic probes use similar representations.
1 code implementation • EMNLP 2021 • Yiwen Wang, Jennifer Hu, Roger Levy, Peng Qian
We find suggestive evidence that structural supervision helps with representing syntactic state across intervening content and improves performance in low-data settings, suggesting that the benefits of hierarchical inductive biases in acquiring dependency relationships may extend beyond English.
1 code implementation • ACL 2021 • Peng Qian, Tahira Naseem, Roger Levy, Ramón Fernandez Astudillo
Here we study whether structural guidance leads to more human-like systematic linguistic generalization in Transformer language models without resorting to pre-training on very large amounts of data.
1 code implementation • 24 Jul 2021 • Zhenguang Liu, Peng Qian, Xiaoyang Wang, Yuan Zhuang, Lin Qiu, Xun Wang
Then, we propose a novel temporal message propagation network to extract a graph feature from the normalized graph, and combine this feature with hand-designed expert patterns to yield the final detection system.
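As a heavily simplified sketch of that combination step (plain neighbor averaging stands in for the proposed temporal message propagation network, and the shapes, pooling, and concatenation scheme are illustrative assumptions, not the paper's design):

```python
import numpy as np

def propagate(node_feats: np.ndarray, adj: np.ndarray, steps: int = 2) -> np.ndarray:
    """Repeated one-hop neighbor averaging, a crude stand-in for a learned
    message-propagation network; nodes are then pooled into one graph feature."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    h = node_feats
    for _ in range(steps):
        h = (adj @ h) / deg
    return h.mean(axis=0)

rng = np.random.default_rng(1)
adj = (rng.random((5, 5)) > 0.6).astype(float)   # toy contract graph
node_feats = rng.normal(size=(5, 8))

graph_feature = propagate(node_feats, adj)
expert_patterns = np.array([1.0, 0.0, 1.0])       # e.g., hits from hand-written rules
combined = np.concatenate([graph_feature, expert_patterns])
print("Combined feature vector length:", combined.shape[0])
```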
1 code implementation • 17 Jun 2021 • Zhenguang Liu, Peng Qian, Xiang Wang, Lei Zhu, Qinming He, Shouling Ji
In this paper, we explore combining deep learning with expert patterns in an explainable fashion.
1 code implementation • 28 May 2021 • Mycal Tucker, Peng Qian, Roger Levy
Neural language models exhibit impressive performance on a variety of tasks, but their internal reasoning may be difficult to understand.
no code implementations • 10 Dec 2020 • Bing Chen, Shuo Li, Xianfei Hou, Feifei Zhou, Peng Qian, Feng Mei, Suotang Jia, Nanyang Xu, Heng Shen
Quantum simulators, with their ability to harness the dynamics of complex quantum systems, have emerged as a promising platform for probing exotic topological phases.
no code implementations • EMNLP 2020 • Ethan Wilcox, Peng Qian, Richard Futrell, Ryosuke Kohita, Roger Levy, Miguel Ballesteros
Humans can learn structural properties about a word from minimal experience, and deploy their learned syntactic representations uniformly in different grammatical contexts.
no code implementations • ACL 2020 • Jon Gauthier, Jennifer Hu, Ethan Wilcox, Peng Qian, Roger Levy
Targeted syntactic evaluations have yielded insights into the generalizations learned by neural network language models.
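The core logic of such targeted evaluations is a minimal-pair comparison: the model should assign higher probability to the grammatical member of a pair that differs only in one syntactic feature. A toy sketch, with a unigram scorer standing in for an actual neural language model and a made-up test item:

```python
import math

def sentence_logprob(sentence: str, unigram_probs: dict) -> float:
    """Sum of per-word log-probabilities under a toy unigram model,
    standing in for an actual neural language model."""
    return sum(math.log(unigram_probs.get(w, 1e-6)) for w in sentence.split())

# Subject-verb agreement minimal pair.
grammatical = "the keys to the cabinet are on the table"
ungrammatical = "the keys to the cabinet is on the table"

# Toy probabilities chosen so the grammatical variant scores higher.
unigram_probs = {"the": 0.1, "keys": 0.01, "to": 0.05, "cabinet": 0.005,
                 "are": 0.02, "is": 0.01, "on": 0.04, "table": 0.008}

passed = (sentence_logprob(grammatical, unigram_probs)
          > sentence_logprob(ungrammatical, unigram_probs))
print("Model prefers the grammatical variant:", passed)
```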
1 code implementation • 2 Jun 2020 • Ethan Gotlieb Wilcox, Jon Gauthier, Jennifer Hu, Peng Qian, Roger Levy
Human reading behavior is tuned to the statistics of natural language: the time it takes human subjects to read a word can be predicted from estimates of the word's probability in context.
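The usual linking quantity is surprisal, -log2 P(word | context): the less predictable a word is, the longer it should take to read. A minimal sketch of that linking function, with made-up bigram probabilities and reading-time coefficients standing in for a real language model and a fitted regression:

```python
import math

# Toy conditional probabilities P(word | previous word), standing in for
# a real language model's next-word distribution.
bigram_probs = {
    ("the", "dog"): 0.05,
    ("dog", "barked"): 0.20,
    ("dog", "meowed"): 0.001,
}

def surprisal(prev: str, word: str) -> float:
    """Surprisal in bits: -log2 P(word | prev)."""
    return -math.log2(bigram_probs.get((prev, word), 1e-6))

def predicted_reading_time(prev: str, word: str,
                           base_ms: float = 200.0, ms_per_bit: float = 20.0) -> float:
    """Linear linking function: predicted reading time grows with surprisal."""
    return base_ms + ms_per_bit * surprisal(prev, word)

for prev, word in [("dog", "barked"), ("dog", "meowed")]:
    print(f"{word!r}: {surprisal(prev, word):.2f} bits, "
          f"~{predicted_reading_time(prev, word):.0f} ms")
```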
1 code implementation • ACL 2020 • Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, Roger P. Levy
While state-of-the-art neural network models continue to achieve lower perplexity scores on language modeling benchmarks, it remains unknown whether optimizing for broad-coverage predictive performance leads to human-like syntactic knowledge.
1 code implementation • IJCNLP 2019 • Aixiu An, Peng Qian, Ethan Wilcox, Roger Levy
We assess whether different neural language models trained on English and French represent phrase-level number and gender features, and use those features to drive downstream expectations.
2 code implementations • NAACL 2019 • Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, Roger Levy
We deploy the methods of controlled psycholinguistic experimentation to shed light on the extent to which the behavior of neural network language models reflects incremental representations of syntactic state.
no code implementations • NAACL 2019 • Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballesteros, Roger Levy
State-of-the-art LSTM language models trained on large corpora learn sequential contingencies in impressive detail and have been shown to acquire a number of non-local grammatical dependencies with some success.
no code implementations • 22 Apr 2016 • Peng Qian, Xipeng Qiu, Xuanjing Huang
Recently, the long short-term memory neural network (LSTM) has attracted wide interest due to its success in many tasks.
no code implementations • 28 May 2015 • Xipeng Qiu, Peng Qian, Liusong Yin, Shiyu Wu, Xuanjing Huang
In this paper, we give an overview of the shared task at the 4th CCF Conference on Natural Language Processing & Chinese Computing (NLPCC 2015): Chinese word segmentation and part-of-speech (POS) tagging for micro-blog texts.