no code implementations • CoNLL 2020 • Qile Zhu, Haidar Khan, Saleh Soltan, Stephen Rawls, Wael Hamza
For complex parsing tasks, the state-of-the-art approach uses autoregressive sequence-to-sequence models to generate the parse directly.
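For illustration only, the greedy decoding loop that such an autoregressive parser relies on might look like the PyTorch sketch below; the tiny model, vocabulary size, and special tokens are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of greedy autoregressive decoding for seq2seq parsing.
# Module names, vocabulary, and special tokens are illustrative, not the paper's.
import torch
import torch.nn as nn


class TinySeq2SeqParser(nn.Module):
    def __init__(self, vocab_size=100, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def greedy_parse(self, src_ids, bos_id=1, eos_id=2, max_len=50):
        # Encode the utterance once, then emit parse tokens one at a time,
        # feeding each prediction back in (the autoregressive step).
        _, h = self.encoder(self.embed(src_ids))
        tok = torch.full((src_ids.size(0), 1), bos_id, dtype=torch.long)
        parse = []
        for _ in range(max_len):
            dec_out, h = self.decoder(self.embed(tok), h)
            tok = self.out(dec_out[:, -1]).argmax(-1, keepdim=True)
            parse.append(tok)
            if (tok == eos_id).all():
                break
        return torch.cat(parse, dim=1)


model = TinySeq2SeqParser()
print(model.greedy_parse(torch.randint(3, 100, (1, 8))).shape)
```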
1 code implementation • 16 Sep 2020 • Xiyao Ma, Qile Zhu, Yanlin Zhou, Xiaolin Li, Dapeng Wu
Asking questions from natural language text has attracted increasing attention recently, and several schemes have been proposed with promising results by selecting the right question words and copying relevant words from the input into the question.
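As a rough sketch of the copy idea, a pointer-generator-style mixture of a vocabulary distribution and an attention-derived copy distribution over source tokens could be written as follows; the tensor names and sizes are illustrative, not taken from the paper.

```python
# Illustrative pointer-generator style mixing of "generate" and "copy" distributions.
# Tensor names and sizes are assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F

vocab_size, src_len = 50, 6
src_ids = torch.randint(0, vocab_size, (1, src_len))   # source token ids
gen_logits = torch.randn(1, vocab_size)                # decoder vocabulary logits
attn = F.softmax(torch.randn(1, src_len), dim=-1)      # attention over source tokens
p_gen = torch.sigmoid(torch.randn(1, 1))               # soft switch: generate vs. copy

vocab_dist = p_gen * F.softmax(gen_logits, dim=-1)
copy_dist = torch.zeros(1, vocab_size).scatter_add(1, src_ids, (1 - p_gen) * attn)
final_dist = vocab_dist + copy_dist                    # P(word) = generate mass + copy mass
print(final_dist.sum())                                # ~1.0
```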
1 code implementation • ACL 2020 • Qile Zhu, Jianlin Su, Wei Bi, Xiaojiang Liu, Xiyao Ma, Xiaolin Li, Dapeng Wu
The Variational Autoencoder (VAE) is widely used as a generative model that approximates the posterior over latent variables by combining amortized variational inference with deep neural networks.
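As a generic reminder of the objective being discussed (a textbook VAE, not the paper's specific training scheme), the amortized encoder predicts a Gaussian q(z|x) and the loss combines a reconstruction term with a KL term; the architecture and sizes below are illustrative only.

```python
# Standard VAE ELBO sketch: amortized encoder outputs (mu, logvar),
# a reparameterized sample feeds the decoder; loss = reconstruction + KL.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyVAE(nn.Module):
    def __init__(self, x_dim=20, z_dim=4):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)   # amortized inference network
        self.dec = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        recon = F.mse_loss(self.dec(z), x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl                                       # negative ELBO


x = torch.randn(8, 20)
print(TinyVAE()(x))
```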
no code implementations • 12 Mar 2020 • Zhigang Dai, Jinhua Fu, Qile Zhu, Hengbin Cui, Xiaolong Li, Yuan Qi
We revise the attention distribution to focus on local and contextual semantic information by incorporating relative position information between utterances.
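One plausible way to realize such a revision, sketched here as an assumption rather than the paper's formulation, is to add a learned bias indexed by the relative distance between utterances to the attention scores before the softmax.

```python
# Sketch of biasing attention scores with relative position between utterances.
# The bucketing and bias table here are assumptions, not the paper's formulation.
import torch
import torch.nn.functional as F

num_utts, d = 5, 16
q = torch.randn(num_utts, d)
k = torch.randn(num_utts, d)

pos = torch.arange(num_utts)
rel = (pos[None, :] - pos[:, None]).clamp(-3, 3) + 3   # relative distance buckets in [0, 6]
bias_table = torch.randn(7)                            # one (learnable) bias per bucket

scores = q @ k.t() / d ** 0.5 + bias_table[rel]        # content score + relative-position bias
attn = F.softmax(scores, dim=-1)
print(attn.shape)
```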
no code implementations • 2 Dec 2019 • Xiyao Ma, Qile Zhu, Yanlin Zhou, Xiaolin Li, Dapeng Wu
Taking an answer and its context as input, sequence-to-sequence models have made considerable progress on question generation.
1 code implementation • EMNLP 2018 • Qile Zhu, Zheng Feng, Xiaolin Li
In this paper, we propose a novel method called GraphBTM that represents biterms as graphs and designs a Graph Convolutional Network (GCN) with residual connections to extract transitive features from biterms.
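A minimal sketch of a graph-convolution layer with a residual connection over a toy biterm co-occurrence graph is shown below; the normalization and dimensions are illustrative, not GraphBTM's exact architecture.

```python
# Minimal graph-convolution layer with a residual connection, applied to a
# toy biterm (word co-occurrence) graph; normalization and sizes are illustrative.
import torch
import torch.nn as nn


class ResidualGCNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x, adj):
        # Symmetrically normalize the adjacency (with self-loops),
        # propagate features, then add the residual connection.
        a = adj + torch.eye(adj.size(0))
        d_inv_sqrt = a.sum(-1).pow(-0.5)
        a_norm = d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]
        return x + torch.relu(self.lin(a_norm @ x))


words, dim = 6, 32
adj = (torch.rand(words, words) > 0.5).float()   # toy biterm co-occurrence counts
adj = ((adj + adj.t()) > 0).float()              # make the graph symmetric
feats = torch.randn(words, dim)
print(ResidualGCNLayer(dim)(feats, adj).shape)
```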
1 code implementation • 19 Dec 2017 • Xiaoyong Yuan, Pan He, Qile Zhu, Xiaolin Li
In this paper, we review recent findings on adversarial examples for deep neural networks, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods.
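As one widely known example of the attack methods such surveys cover, the fast gradient sign method (FGSM) perturbs the input along the sign of the loss gradient; the model and data below are placeholders, not anything specific to the paper.

```python
# Generic FGSM sketch (one well-known adversarial example generation method);
# the model and input are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28, requires_grad=True)
y = torch.tensor([3])

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
x_adv = (x + 0.1 * x.grad.sign()).clamp(0, 1).detach()   # perturb along the loss gradient sign
print((x_adv - x).abs().max())
```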
no code implementations • 4 Dec 2017 • Ruimin Sun, Xiaoyong Yuan, Pan He, Qile Zhu, Aokun Chen, Andre Gregio, Daniela Oliveira, Xiaolin Li
Existing malware detectors on safety-critical devices have difficulty with runtime detection due to performance overhead.
1 code implementation • ICCV 2017 • Pan He, Weilin Huang, Tong He, Qile Zhu, Yu Qiao, Xiaolin Li
Our text detector achieves an F-measure of 77% on the ICDAR 2015 benchmark, advancing the state-of-the-art results in [18, 28].
Ranked #4 on Scene Text Detection on COCO-Text