Search Results for author: Shuailong Liang

Found 7 papers, 5 papers with code

Exploring COVID-19-related Twitter Topic Dynamics across Countries (新型冠状病毒肺炎相关的推特主题与情感研究)

no code implementations · CCL 2020 · Shuailong Liang, Derek F. Wong, Yue Zhang

Based on 500,000 tweets posted in different countries and regions, crawled from Twitter between January 22, 2020 and April 30, 2020, we study COVID-19-related topics and people's opinions. We find both similarities and differences in the common concerns and views of Twitter users across countries, as well as differing sentiment toward different topics. We observe that most tweets carry strong emotions, among which expressions of love and support are common. Overall, sentiment grows more positive over time.

Attention Guided Dialogue State Tracking with Sparse Supervision

no code implementations · 28 Jan 2021 · Shuailong Liang, Lahari Poddar, Gyuri Szarvas

We present results on two public multi-domain DST datasets (MultiWOZ and Schema Guided Dialogue) in both settings, i.e., training with turn-level and with sparse supervision.

Dialogue State Tracking

SemEval-2020 Task 4: Commonsense Validation and Explanation

2 code implementations · SemEval 2020 · Cunxiang Wang, Shuailong Liang, Yili Jin, Yilong Wang, Xiaodan Zhu, Yue Zhang

In this paper, we present SemEval-2020 Task 4, Commonsense Validation and Explanation (ComVE), which includes three subtasks, aiming to evaluate whether a system can distinguish a natural language statement that makes sense to humans from one that does not, and provide the reasons.

Who Blames Whom in a Crisis? Detecting Blame Ties from News Articles Using Neural Networks

1 code implementation · 24 Apr 2019 · Shuailong Liang, Olivia Nicol, Yue Zhang

Blame games tend to follow major disruptions, be they financial crises, natural disasters or terrorist attacks.

Subword Encoding in Lattice LSTM for Chinese Word Segmentation

1 code implementation · NAACL 2019 · Jie Yang, Yue Zhang, Shuailong Liang

The previous lattice LSTM model takes word embeddings as the lexicon input; we show that subword encoding achieves comparable performance and has the benefit of not relying on any external segmenter.

Chinese Word Segmentation · Word Embeddings

Design Challenges and Misconceptions in Neural Sequence Labeling

2 code implementations · COLING 2018 · Jie Yang, Shuailong Liang, Yue Zhang

We investigate the design challenges of constructing effective and efficient neural sequence labeling systems by reproducing twelve neural sequence labeling models, which include most of the state-of-the-art structures, and conducting a systematic model comparison on three benchmarks (i.e., NER, Chunking, and POS tagging).

Chunking · Misconceptions · +3
