1 code implementation • NAACL (ACL) 2022 • Xinya Du, Zixuan Zhang, Sha Li, Pengfei Yu, Hongwei Wang, Tuan Lai, Xudong Lin, Ziqi Wang, Iris Liu, Ben Zhou, Haoyang Wen, Manling Li, Darryl Hannan, Jie Lei, Hyounghun Kim, Rotem Dror, Haoyu Wang, Michael Regan, Qi Zeng, Qing Lyu, Charles Yu, Carl Edwards, Xiaomeng Jin, Yizhu Jiao, Ghazaleh Kazeminejad, Zhenhailong Wang, Chris Callison-Burch, Mohit Bansal, Carl Vondrick, Jiawei Han, Dan Roth, Shih-Fu Chang, Martha Palmer, Heng Ji
We introduce RESIN-11, a new schema-guided event extraction and prediction framework that can be applied to a large variety of newsworthy scenarios.
1 code implementation • Findings (NAACL) 2022 • Qi Zeng, Qiusi Zhan, Heng Ji
Events are inter-related in documents.
1 code implementation • NAACL (TextGraphs) 2021 • Qi Zeng, Manling Li, Tuan Lai, Heng Ji, Mohit Bansal, Hanghang Tong
Current methods for event representation ignore related events in a corpus-level global context.
no code implementations • EMNLP 2020 • Manling Li, Qi Zeng, Ying Lin, Kyunghyun Cho, Heng Ji, Jonathan May, Nathanael Chambers, Clare Voss
Event schemas can guide our understanding and ability to make predictions with respect to what might happen next.
1 code implementation • 30 May 2022 • Yinglun Xu, Qi Zeng, Gagandeep Singh
Our attacks leverage this insight to construct a corrupted environment that misleads the agent into learning low-performing policies with a limited attack budget.
1 code implementation • 23 Apr 2022 • Qi Zeng, Yash Kothari, Spencer H. Bryngelson, Florian Schäfer
Neural networks can be trained to solve partial differential equations (PDEs) by using the PDE residual as the loss function.
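To make the idea behind this line concrete, here is a minimal sketch of PDE-residual training (a physics-informed loss). The toy PDE (1D Poisson, u''(x) = -sin(x) with u(0) = u(1) = 0), the network size, and the optimizer settings are illustrative assumptions, not the paper's setup.

```python
# A minimal sketch of PDE-residual training (a "physics-informed" loss).
# The toy PDE (1D Poisson u''(x) = -sin(x), u(0) = u(1) = 0), network
# size, and optimizer settings are assumptions, not the paper's setup.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(1000):
    x = torch.rand(256, 1, requires_grad=True)  # collocation points in [0, 1]
    u = net(x)
    # Differentiate the network output w.r.t. its input to form the residual.
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = d2u + torch.sin(x)               # zero when u solves the PDE
    bc = net(torch.tensor([[0.0], [1.0]]))      # enforce u(0) = u(1) = 0
    loss = residual.pow(2).mean() + bc.pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```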
1 code implementation • NAACL 2022 • Yifan Chen, Qi Zeng, Dilek Hakkani-Tur, Di Jin, Heng Ji, Yun Yang
Transformer-based models are not efficient in processing long sequences due to the quadratic space and time complexity of the self-attention modules.
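To see where the quadratic cost comes from, here is a minimal sketch of dense single-head self-attention; the sequence length and head dimension are illustrative assumptions.

```python
# A minimal sketch of dense single-head self-attention, showing where the
# quadratic cost comes from: `scores` is an n x n matrix, so both time and
# memory grow as n**2. The shapes are illustrative assumptions.
import torch

n, d = 4096, 64                        # sequence length, head dimension
q, k, v = (torch.randn(n, d) for _ in range(3))

scores = q @ k.T / d ** 0.5            # (n, n): the quadratic bottleneck
attn = torch.softmax(scores, dim=-1)   # row-wise attention weights
out = attn @ v                         # (n, d) contextualized output
```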
no code implementations • 23 Nov 2021 • Shahed Mohammed, Mohammad Honarvar, Qi Zeng, Hoda Hashemi, Robert Rohling, Piotr Kozlowski, Septimiu Salcudean
We evaluate our new method in multiple in silico and phantom experiments, with comparisons against existing methods, and show improvements in contrast-to-noise and signal-to-noise ratios.
1 code implementation • NeurIPS 2021 • Yifan Chen, Qi Zeng, Heng Ji, Yun Yang
Transformers are expensive to train due to the quadratic time and space complexity in the self-attention mechanism.
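As a rough illustration of the kind of low-rank remedy such work pursues, the sketch below approximates the attention kernel through m << n sampled landmark keys, Nyström-style, so no n x n matrix is ever materialized. This is a generic sketch under assumed shapes, not Skyformer's actual algorithm, and row normalization is omitted for brevity.

```python
# A generic Nystrom-style low-rank sketch (not this paper's algorithm):
# approximate the attention kernel via m << n landmark keys, reducing
# cost to roughly O(n * m). Shapes are assumptions.
import torch

n, m, d = 4096, 64, 64                 # sequence length, landmarks, head dim
q, k, v = (torch.randn(n, d) for _ in range(3))
lm = k[torch.randperm(n)[:m]]          # m randomly sampled landmark keys

k_nm = torch.exp(q @ lm.T / d ** 0.5)   # (n, m) query-landmark kernel
k_mm = torch.exp(lm @ lm.T / d ** 0.5)  # (m, m) landmark-landmark kernel
k_mn = torch.exp(lm @ k.T / d ** 0.5)   # (m, n) landmark-key kernel

# K ~= k_nm @ pinv(k_mm) @ k_mn; multiply right-to-left to stay O(n * m).
out = k_nm @ (torch.linalg.pinv(k_mm) @ (k_mn @ v))   # (n, d) output
```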
no code implementations • Findings (EMNLP) 2021 • Xuanting Cai, Quanbin Ma, Pan Li, Jianyu Liu, Qi Zeng, Zhengkan Yang, Pushkar Tripathi
Understanding the semantic meaning of content on the web through the lens of entities and concepts has many practical advantages.
no code implementations • 5 Jan 2021 • Qi Zeng, Ying Liu, Liming Pan, Ming Tang
Our work provides insights into the importance of nodes in the multiplex network and offers a feasible framework for investigating influential spreaders in the asymmetrically coevolving dynamics.
1 code implementation • INLG (ACL) 2020 • Qingyun Wang, Qi Zeng, Lifu Huang, Kevin Knight, Heng Ji, Nazneen Fatema Rajani
To assist the human review process, we build a novel ReviewRobot to automatically assign a review score and write comments for multiple categories such as novelty and meaningful comparison.
no code implementations • ACL 2020 • Manling Li, Alireza Zareian, Qi Zeng, Spencer Whitehead, Di Lu, Heng Ji, Shih-Fu Chang
We introduce a new task, MultiMedia Event Extraction (M2E2), which aims to extract events and their arguments from multimedia documents.
no code implementations • 8 Dec 2019 • Zhifang Liao, Haihui Pan, Qi Zeng, Xiaoping Fan, Yan Zhang, Song Yu
Therefore, improving the accuracy of power load forecasting has always been a central goal for power systems.
no code implementations • IJCNLP 2019 • Jingjing Xu, Liang Zhao, Hanqi Yan, Qi Zeng, Yun Liang, Xu Sun
The generator learns to generate examples to attack the classifier, while the classifier learns to defend against these attacks.
no code implementations • IJCNLP 2019 • Jingjing Xu, Yuechen Wang, Duyu Tang, Nan Duan, Pengcheng Yang, Qi Zeng, Ming Zhou, Xu Sun
We provide representative baselines for these tasks and further introduce a coarse-to-fine model for clarification question generation.
no code implementations • 13 Nov 2018 • Qi Zeng, Liangchen Luo, Wenhao Huang, Yang Tang
Extracting valuable facts or informative summaries from multi-dimensional tables, i.e., insight mining, is an important task in data analysis and business intelligence.
no code implementations • 12 Nov 2018 • Liangchen Luo, Wenhao Huang, Qi Zeng, Zaiqing Nie, Xu Sun
Most existing works on dialog systems consider only conversation content while neglecting the personality of the user the bot is interacting with, which gives rise to several unresolved issues.
1 code implementation • EMNLP 2018 • Liangchen Luo, Jingjing Xu, Junyang Lin, Qi Zeng, Xu Sun
Unlike conventional text generation tasks, the mapping between inputs and responses in conversations is more complicated, and it demands an understanding of utterance-level semantic dependency: the relation between the overall meanings of inputs and outputs.
Ranked #2 on Text Generation on DailyDialog.
1 code implementation • NAACL 2019 • Guangxiang Zhao, Jingjing Xu, Qi Zeng, Xuancheng Ren
This task requires the system to identify multiple styles of music based on reviews posted on websites.
1 code implementation • EMNLP 2018 • Jingjing Xu, Xuancheng Ren, Yi Zhang, Qi Zeng, Xiaoyan Cai, Xu Sun
Compared to the state-of-the-art models, our skeleton-based model can generate significantly more coherent text according to human evaluation and automatic evaluation.
1 code implementation • ACL 2018 • Jingjing Xu, Xu Sun, Qi Zeng, Xuancheng Ren, Xiaodong Zhang, Houfeng Wang, Wenjie Li
We evaluate our approach on two review datasets, Yelp and Amazon.
Ranked #6 on Unsupervised Text Style Transfer on Yelp.