no code implementations • EMNLP 2020 • Liqiang Xiao, Lu Wang, Hao He, Yaohui Jin
Previous work is mostly based on statistical methods that estimate word-level salience, which do not consider semantics or the larger context when quantifying importance.
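As a point of reference, here is a minimal sketch of the kind of word-level statistical salience (TF-IDF) such prior work relies on; the tokenization and scoring details are illustrative assumptions, not the exact method of any compared system:

```python
import math
from collections import Counter

def tfidf_salience(doc_tokens, corpus):
    """Score each word by TF-IDF: term frequency in the document times inverse
    document frequency across the corpus. Each word is scored in isolation,
    ignoring semantics and the larger context."""
    df = Counter()
    for doc in corpus:
        df.update(set(doc))                          # document frequency per word
    tf = Counter(doc_tokens)
    n_docs = len(corpus)
    return {w: (count / len(doc_tokens)) * math.log(n_docs / (1 + df[w]))
            for w, count in tf.items()}

# Example: "salience" scores higher than the ubiquitous "the".
corpus = [["the", "model", "learns", "salience"], ["the", "cat", "sat"]]
print(tfidf_salience(corpus[0], corpus))
```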
no code implementations • ICML 2020 • Shuang Li, Lu Wang, Ruizhi Zhang, Xiaofu Chang, Xuqin Liu, Yao Xie, Yuan Qi, Le Song
We propose a modeling framework for event data that excels in the small-data regime and can incorporate domain knowledge.
no code implementations • 19 Sep 2023 • Esha Uboweja, David Tian, Qifei Wang, Yi-Chun Kuo, Joe Zou, Lu Wang, George Sung, Matthias Grundmann
Our framework provides a pre-trained single-hand embedding model that can be fine-tuned for custom gesture recognition.
no code implementations • 9 Sep 2023 • Yuhong He, Long Peng, Lu Wang, Jun Cheng
Since rain streaks show a variety of shapes and directions, learning the degradation representation is extremely challenging for single image deraining.
1 code implementation • 20 Aug 2023 • Bilgehan Sel, Ahmad Al-Tawaha, Vanshaj Khattar, Lu Wang, Ruoxi Jia, Ming Jin
Current literature, aiming to surpass the "Chain-of-Thought" approach, often resorts to an external modus operandi involving halting, modifying, and then resuming the generation process to boost Large Language Models' (LLMs) reasoning capacities.
1 code implementation • 17 Aug 2023 • Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang
The standard approach for ICL is to prompt the LM with concatenated demonstrations followed by the test input.
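For illustration, a minimal sketch of this standard prompt format; the Input/Output template is an assumed convention, and real templates vary by task:

```python
def build_icl_prompt(demonstrations, test_input):
    """Standard in-context learning prompt: concatenated labeled
    demonstrations followed by the unlabeled test input."""
    blocks = [f"Input: {x}\nOutput: {y}" for x, y in demonstrations]
    blocks.append(f"Input: {test_input}\nOutput:")   # the LM completes this line
    return "\n\n".join(blocks)

demos = [("great movie!", "positive"), ("dull and slow.", "negative")]
print(build_icl_prompt(demos, "a surprisingly sharp script"))
```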
no code implementations • 11 Aug 2023 • Chao Yang, Lu Wang, Kun Gao, Shuang Li
Leveraging the temporal point process modeling and learning framework, the rule content and weights are gradually optimized until the likelihood of the observed event sequences is maximized.
1 code implementation • 1 Aug 2023 • Zhangchi Zhu, Lu Wang, Pu Zhao, Chao Du, Wei Zhang, Hang Dong, Bo Qiao, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang
To mitigate the impact of label uncertainty and improve the robustness of learning with positive and unlabeled data, we propose a new robust PU learning method with a training strategy motivated by the nature of human learning: easy cases should be learned first.
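A rough sketch of the easy-first idea, assuming a binary scorer over unlabeled data; the paper's actual selection criterion and training schedule may differ:

```python
import torch

def select_easy_unlabeled(model, unlabeled_x, keep_frac=0.2):
    """Easy-first curriculum for PU learning: pick the unlabeled examples the
    current model is most confident are negative, and train on those before
    harder, more ambiguous cases."""
    with torch.no_grad():
        p_pos = torch.sigmoid(model(unlabeled_x)).squeeze(-1)   # P(label = positive)
    k = max(1, int(keep_frac * p_pos.numel()))
    easy_idx = torch.argsort(p_pos)[:k]   # lowest P(positive) = easiest negatives
    return unlabeled_x[easy_idx]
```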
no code implementations • 1 Jul 2023 • Shuzhe Chen, Li Li, Zhichao Lin, Ke Zhang, Ying Gong, Lu Wang, Xu Wu, Maokun Li, Yuanlin Song, Fan Yang, Shenheng Xu
A simple convolutional neural network is used for classification.
no code implementations • 25 Jun 2023 • Yiman Zhu, Lu Wang, Jingyi Yuan, Yu Guo
In this article, we propose a data-driven method for low-light image enhancement (LLIE) of spin targets in the space environment, based on a diffusion model.
1 code implementation • 13 Jun 2023 • Tianxiang Zhao, Wenchao Yu, Suhang Wang, Lu Wang, Xiang Zhang, Yuncong Chen, Yanchi Liu, Wei Cheng, Haifeng Chen
Imitation learning has achieved great success in many sequential decision-making tasks, in which a neural agent is learned by imitating collected human demonstrations.
1 code implementation • 24 May 2023 • Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang
In the context of multi-step reasoning, the probabilities of language models (LMs) are often miscalibrated -- solutions with high probabilities are not always correct.
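One remedy this observation points toward is scoring candidate solutions with an external correctness signal instead of raw LM probabilities; a hypothetical sketch where lm_logprob and verifier_score are assumed caller-supplied callables, not the paper's API:

```python
def rerank_solutions(solutions, lm_logprob, verifier_score, alpha=1.0):
    """Order candidate multi-step solutions by a mixture of the LM's own
    log-probability and an external verifier's correctness score."""
    scored = [(s, lm_logprob(s) + alpha * verifier_score(s)) for s in solutions]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [s for s, _ in scored]
```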
1 code implementation • 24 May 2023 • Qi Zeng, Mankeerat Sidhu, Hou Pong Chan, Lu Wang, Heng Ji
Opinions in the scientific domain can be divergent, leading to controversy or consensus among reviewers.
no code implementations • 24 May 2023 • Naihao Deng, Siyang Liu, Xinliang Frederick Zhang, Winston Wu, Lu Wang, Rada Mihalcea
Annotator disagreement is ubiquitous in natural language processing (NLP) tasks.
no code implementations • 24 May 2023 • Shuyang Cao, Lu Wang
Long document summarization systems are critical for domains with lengthy and jargon-laden text, yet they present significant challenges to researchers and developers with limited computing resources.
1 code implementation • 19 May 2023 • Zezhong Wang, Fangkai Yang, Pu Zhao, Lu Wang, Jue Zhang, Mohit Garg, Qingwei Lin, Dongmei Zhang
Large Language Models (LLMs) have gained popularity and achieved remarkable results on open-domain tasks, but their performance in real industrial domain-specific scenarios is mediocre because they lack the required domain knowledge.
no code implementations • 19 May 2023 • Liting Chen, Lu Wang, Hang Dong, Yali Du, Jie Yan, Fangkai Yang, Shuang Li, Pu Zhao, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang
The emergence of large language models (LLMs) has substantially influenced natural language processing, demonstrating exceptional results across various tasks.
1 code implementation • 19 May 2023 • Xin Liu, Muhammad Khalifa, Lu Wang
Energy-based models (EBMs) have gained popularity for controlled text generation due to their high applicability to a wide range of constraints.
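To make the EBM framing concrete, a schematic energy function; the weighted-sum form is generic to EBM-based controlled generation, and the constraint callables and weights are illustrative assumptions:

```python
def sequence_energy(neg_log_likelihood, constraint_fns, weights, text):
    """Energy of a candidate text: base LM negative log-likelihood plus
    weighted constraint penalties. Lower energy = fluent text that also
    satisfies the constraints."""
    energy = neg_log_likelihood(text)
    for penalty, w in zip(constraint_fns, weights):
        energy += w * penalty(text)   # each penalty returns >= 0 when violated
    return energy
```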
no code implementations • 11 Apr 2023 • Tianyuan Zhang, Yisong Xiao, Xiaoya Zhang, Hao Li, Lu Wang
Thus, virtual simulation experiments can provide a solution to this challenge.
no code implementations • 1 Mar 2023 • Mu-Huan Chung, Lu Wang, Sharon Li, Yuhong Yang, Calvin Giang, Khilan Jerath, Abhay Raman, David Lie, Mark Chignell
In this paper, we present research results on applying active learning to anomaly detection in redacted emails, comparing the utility of different methods for implementing active learning in this context.
1 code implementation • 14 Feb 2023 • Liting Chen, Jie Yan, Zhengdao Shao, Lu Wang, Qingwei Lin, Dongmei Zhang
In this paper, we propose Conservative State Value Estimation (CSVE), a new approach that learns a conservative V-function by directly imposing a penalty on out-of-distribution (OOD) states.
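A minimal sketch of the conservative penalty, assuming OOD states are already sampled; the paper's actual sampling and weighting schemes are not reproduced here:

```python
import torch

def csve_loss(v_net, states, td_targets, ood_states, beta=0.5):
    """Fitted value learning plus a penalty that pushes down the estimated
    values of out-of-distribution states, keeping V conservative."""
    bellman_error = ((v_net(states) - td_targets) ** 2).mean()
    ood_penalty = v_net(ood_states).mean()   # discourage optimism off-dataset
    return bellman_error + beta * ood_penalty
```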
no code implementations • 11 Dec 2022 • Lu Wang, Bofu Tang, Feifei Liu, Zhenyu Jiang, Xianmei Meng
Objective: To systematically evaluate the value of endocytoscopy (ECS) in the diagnosis of early esophageal cancer (EC).
no code implementations • 21 Nov 2022 • Junjie Sheng, Lu Wang, Fangkai Yang, Bo Qiao, Hang Dong, Xiangfeng Wang, Bo Jin, Jun Wang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang
To address these two limitations, this paper formulates oversubscription for the cloud as a chance-constrained optimization problem and proposes an effective Chance Constrained Multi-Agent Reinforcement Learning (C2MARL) method to solve it.
1 code implementation • 14 Nov 2022 • Joseph J. Peper, Lu Wang
Generative models have demonstrated impressive results on Aspect-based Sentiment Analysis (ABSA) tasks, particularly for the emerging task of extracting Aspect-Category-Opinion-Sentiment (ACOS) quadruples.
no code implementations • 4 Nov 2022 • Changyuan Qiu, Winston Wu, Xinliang Frederick Zhang, Lu Wang
In this work, we introduce the task of multimodal ideology prediction, where a model predicts binary or five-point scale ideological leanings, given a text-image pair with political content.
no code implementations • 3 Nov 2022 • Shuyang Cao, Lu Wang
Despite suffering a smaller performance drop when tested on data drawn from a later time, linear prompts focus more on non-temporal information and are less sensitive to the given timestamps, according to human evaluations and sensitivity analyses.
1 code implementation • 2 Nov 2022 • Xinliang Frederick Zhang, Nick Beauchamp, Lu Wang
We present a novel generative framework to allow the generation of canonical names for entities as well as stances among them.
no code implementations • 7 Oct 2022 • Lu Wang, Luis F. Abanto-Leon, Arash Asadi
Empowering cellular networks with augmented sensing capabilities is one of the key research areas in 6G communication systems.
no code implementations • 9 Sep 2022 • Yushu Chen, Guangwen Yang, Lu Wang, Qingzhong Gan, Haipeng Chen, Quanyong Xu
Atmospheric powered descent guidance can be solved by successive convexification; however, its onboard application is impeded by the sharp increase in computation caused by nonlinear aerodynamic forces.
no code implementations • 31 May 2022 • Marcel Robitaille, HeeBong Yang, Lu Wang, Na Young Kim
Time-fluctuating signals are ubiquitous and diverse in many physical, chemical, and biological systems, among which random telegraph signals (RTSs) refer to a series of instantaneous switching events between two discrete levels from single-particle movements.
2 code implementations • 25 May 2022 • Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang
To alleviate the need for a large number of labeled question-document pairs for retriever training, we propose PromptRank, which relies on large language model prompting for multi-hop path reranking.
1 code implementation • Findings (NAACL) 2022 • Yujian Liu, Xinliang Frederick Zhang, David Wegsman, Nick Beauchamp, Lu Wang
Ideology is at the core of political science research.
1 code implementation • NAACL 2022 • Xu Wang, Simin Fan, Jessica Houghton, Lu Wang
NLP-powered automatic question generation (QG) techniques carry great pedagogical potential to save educators' time and benefit student learning.
no code implementations • 7 Apr 2022 • Nick J. C. Wang, Lu Wang, Yandan Sun, Haimei Kang, Dejun Zhang
We revisit ideas presented by Lugosch et al. using speech pre-training and three-module modeling; however, to ease construction of the end-to-end SLU model, we use as our phoneme module an open-source acoustic-phonetic model from a DNN-HMM hybrid automatic speech recognition (ASR) system instead of training one from scratch.
no code implementations • Findings (ACL) 2022 • Xinyu Hua, Lu Wang
Combined with transfer learning, a substantial F1-score boost (5-25 points) can be further achieved during the early iterations of active learning across domains.
no code implementations • ACL 2022 • Shuyang Cao, Lu Wang
In this work, we present HIBRIDS, which injects Hierarchical Biases foR Incorporating Document Structure into the calculation of attention scores.
no code implementations • 9 Feb 2022 • Lu Wang, Jie Yang, Masoumeh Zareapoor, Zhonglong Zheng
Cross-modal hashing still has several challenges to address: (1) most existing cross-modal hashing (CMH) methods take graphs as input to model the data distribution.
no code implementations • 24 Nov 2021 • Shiqi Liu, Lu Wang, Jie Lian, Ting Chen, Cong Liu, Xuchen Zhan, Jintao Lu, Jie Liu, Ting Wang, Dong Geng, Hongwei Duan, Yuze Tian
Relative radiometric normalization (RRN) of different satellite images of the same terrain is necessary for change detection, object classification/segmentation, and map-making tasks.
no code implementations • 21 Oct 2021 • Imanol Luengo, Maria Grammatikopoulou, Rahim Mohammadi, Chris Walsh, Chinedu Innocent Nwoye, Deepak Alapatt, Nicolas Padoy, Zhen-Liang Ni, Chen-Chen Fan, Gui-Bin Bian, Zeng-Guang Hou, Heonjin Ha, Jiacheng Wang, Haojie Wang, Dong Guo, Lu Wang, Guotai Wang, Mobarakol Islam, Bharat Giddwani, Ren Hongliang, Theodoros Pissas, Claudio Ravasio, Martin Huber, Jeremy Birch, Joan M. Nunez Do Rio, Lyndon Da Cruz, Christos Bergeles, Hongyu Chen, Fucang Jia, Nikhil Kumar Tomar, Debesh Jha, Michael A. Riegler, Pål Halvorsen, Sophia Bano, Uddhav Vaghela, Jianyuan Hong, Haili Ye, Feihong Huang, Da-Han Wang, Danail Stoyanov
In 2020, we released pixel-wise semantic annotations for anatomy and instruments for 4670 images sampled from 25 videos of the CATARACTS training set.
no code implementations • 1 Oct 2021 • Yan Xia, Linhui Jiang, Lu Wang, Xue Chen, Jianjie Ye, Tangyan Hou, Liqiang Wang, Yibo Zhang, Mengying Li, Zhen Li, Zhe Song, Yaping Jiang, Weiping Liu, Pengfei Li, Daniel Rosenfeld, John H. Seinfeld, Shaocai Yu
Our results show that the ORRS measurements, assisted by the machine-learning-based ensemble model developed here, can realize day-to-day supervision of on-road vehicle-specific emissions.
no code implementations • ICLR 2022 • Shuang Li, Mingquan Feng, Lu Wang, Abdelmajid Essofi, Yufeng Cao, Junchi Yan, Le Song
We propose a principled method to learn a set of human-readable logic rules to explain temporal point processes.
2 code implementations • EMNLP 2021 • Shuyang Cao, Lu Wang
We study generating abstractive summaries that are faithful and factually consistent with the given articles.
1 code implementation • 7 Aug 2021 • Hou Pong Chan, Lu Wang, Irwin King
We study controllable text summarization, which allows users to gain control over a particular attribute (e.g., length limit) of the generated summaries.
1 code implementation • ACL 2021 • Shuyang Cao, Lu Wang
We first define a new question type ontology which differentiates the nuanced nature of questions better than widely used question words.
no code implementations • 24 Jun 2021 • Shuang Li, Lu Wang, Xinyun Chen, Yixiang Fang, Yan Song
In this paper, we model the propagation of COVID-19 as spatio-temporal point processes and propose a generative and intensity-free model to track the spread of the disease.
no code implementations • 24 Jun 2021 • Cheng Jie, Da Xu, Zigeng Wang, Lu Wang, Wei Shen
With the increasing scale of search engine marketing, designing an efficient bidding system is becoming paramount for the success of e-commerce companies.
1 code implementation • CVPR 2021 • Yuhan Shen, Lu Wang, Ehsan Elhamifar
We address the problem of unsupervised localization of key-steps and feature learning in instructional videos using both visual and language instructions.
27 code implementations • ICLR 2022 • Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen
We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks.
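A minimal PyTorch sketch of the idea; the rank r and alpha/r scaling follow the paper's formulation, while initialization and layer placement are simplified:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear map plus a trainable low-rank update:
    h = W0 x + (alpha / r) * B A x, with only A and B trained."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                    # freeze W0
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # B=0: no change at init
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)
```

Because only A and B receive gradients, the trainable parameter count drops from in_features * out_features to r * (in_features + out_features) per adapted layer.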
no code implementations • ACL 2021 • Xinyu Hua, Ashwin Sreevatsa, Lu Wang
To enrich the generation with diverse content, we further propose to use large pre-trained models to predict relevant concepts and to generate claims.
1 code implementation • 17 May 2021 • Lu Wang, Xiaofu Chang, Shuang Li, Yunfei Chu, Hui Li, Wei Zhang, Xiaofeng He, Le Song, Jingren Zhou, Hongxia Yang
Secondly, on top of the proposed graph transformer, we introduce a two-stream encoder that separately extracts representations from temporal neighborhoods associated with the two interaction nodes and then utilizes a co-attentional transformer to model inter-dependencies at a semantic level.
no code implementations • 25 Apr 2021 • Songmin Dai, Jide Li, Lu Wang, Congcong Zhu, Yifan Wu, Xiaoqiang Li
This paper first introduces a novel method to generate anomalous data by breaking up global structures while preserving local structures of normal data at multiple levels.
no code implementations • NAACL 2021 • Shuyang Cao, Lu Wang
Using attention head masking, we are able to reveal the relation between encoder-decoder attentions and content selection behaviors of summarization models.
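A generic sketch of head masking, assuming a (batch, heads, tgt, src) attention tensor and hard 0/1 masks; the paper's probing protocol is more involved:

```python
import torch

def mask_attention_heads(attn_weights, heads_to_mask):
    """Zero out selected heads in a (batch, num_heads, tgt_len, src_len)
    attention tensor to probe their contribution to content selection."""
    mask = torch.ones(attn_weights.size(1), device=attn_weights.device)
    mask[list(heads_to_mask)] = 0.0
    return attn_weights * mask.view(1, -1, 1, 1)
```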
1 code implementation • NAACL 2021 • Luyang Huang, Shuyang Cao, Nikolaus Parulian, Heng Ji, Lu Wang
The quadratic computational and memory complexities of large Transformers have limited their scalability for long document summarization.
no code implementations • NAACL 2021 • Shuyang Cao, Lu Wang
How to generate summaries of different styles without requiring corpora in the target styles, or training separate models?
no code implementations • 5 Mar 2021 • Lu Wang, Haoyan Jiang, Mark Chignell
In this paper, we develop a new ensemble machine learning Python package based on multi-task learning (MTL), the Med-Multi-Task Learning (MD-MTL) package, and apply it to predicting patients' disease scores and to carrying out risk factor analysis on multiple subgroups of patients simultaneously.
no code implementations • 29 Dec 2020 • Lu Wang, Dong Guo, Guotai Wang, Shaoting Zhang
In this paper, we propose an annotation-efficient learning framework for segmentation tasks that avoids annotating training images: we use an improved Cycle-Consistent Generative Adversarial Network (CycleGAN) to learn from a set of unpaired medical images and auxiliary masks obtained either from a shape model or from public datasets.
no code implementations • 25 Oct 2020 • Wen Sun, Shiyu Lei, Lu Wang, Zhiqiang Liu, Yan Zhang
The Industrial Internet of Things (IoT) enables distributed intelligent services that adapt to dynamic, real-time industrial devices, delivering the benefits of Industry 4.0.
no code implementations • EMNLP 2020 • Xinyu Hua, Lu Wang
In this work, we present a novel content-controlled text generation framework, PAIR, with planning and iterative refinement, which is built upon a large model, BART.
no code implementations • ACL 2020 • Prafulla Kumar Choubey, Aaron Lee, Ruihong Huang, Lu Wang
Understanding discourse structures of news articles is vital to effectively contextualize the occurrence of a news event.
Ranked #5 on Text Classification on NewsDiscourse
no code implementations • ACL 2020 • Xingshan Zeng, Jing Li, Lu Wang, Zhiming Mao, Kam-Fai Wong
Trending topics in social media content evolve over time, and it is therefore crucial to understand social media users and their interpersonal communications in a dynamic manner.
no code implementations • 24 Jun 2020 • Yong Chen, Lu Wang, Jiajia Hu, Mingbin Ye
Falls are among the greatest risks to the elderly, and fall event detection in solitary scenes has been an active research topic in recent years.
no code implementations • AKBC 2020 • Xinyu Hua, Lei Li, Lifeng Hua, Lu Wang
We therefore propose a novel model, XREF, that leverages attention mechanisms to (1) pinpoint relevant context within comments, and (2) detect supporting entities from the news article.
2 code implementations • NeurIPS 2020 • Lu Wang, Xuanqing Liu, Jin-Feng Yi, Yuan Jiang, Cho-Jui Hsieh
Metric learning is an important family of algorithms for classification and similarity search, but the robustness of learned metrics against small adversarial perturbations is less studied.
no code implementations • 1 Jun 2020 • Xiao-Lei Yin, Dong-Xue Liang, Lu Wang, Jing Qiu, Zhi-Yun Yang, Jun-Hui Xing, Jian-Zeng Dong, Zhao-Yuan Ma
With the help of this technology, doctors can significantly reduce exposure frequency and intensity of the X-ray during coronary angiography.
1 code implementation • 11 May 2020 • Lu Wang, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh, Yuan Jiang
By constraining adversarial perturbations in a low-dimensional subspace via spanning an auxiliary unlabeled dataset, the spanning attack significantly improves the query efficiency of a wide variety of existing black-box attacks.
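A sketch of the subspace-constraining step, assuming a PCA-style basis extracted from the auxiliary set; the black-box attack loop itself is omitted:

```python
import numpy as np

def spanning_subspace(aux_data, k):
    """Top-k orthonormal directions spanned by a flattened auxiliary dataset
    of shape (n_samples, dim)."""
    centered = aux_data - aux_data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]                                    # (k, dim) basis rows

def project_perturbation(delta, basis):
    """Restrict a perturbation to the low-dimensional spanned subspace,
    shrinking the search space a black-box attack must query."""
    return basis.T @ (basis @ delta)
```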
1 code implementation • ACL 2020 • Luyang Huang, Lingfei Wu, Lu Wang
Sequence-to-sequence models for abstractive summarization have been studied extensively, yet the generated summaries commonly suffer from fabricated content, and are often found to be near-extractive.
no code implementations • 26 Mar 2020 • Lu Wang, Dong-Xue Liang, Xiao-Lei Yin, Jing Qiu, Zhi-Yun Yang, Jun-Hui Xing, Jian-Zeng Dong, Zhao-Yuan Ma
This article proposes a new video segmentation framework that can extract the clearest and most comprehensive coronary angiography images from a video sequence, thereby helping physicians to better observe the condition of blood vessels.
no code implementations • 26 Mar 2020 • Lu Wang, Dong-Xue Liang, Xiao-Lei Yin, Jing Qiu, Zhi-Yun Yang, Jun-Hui Xing, Jian-Zeng Dong, Zhao-Yuan Ma
The reconstruction of three-dimensional models of coronary arteries is of great significance for the localization, evaluation and diagnosis of stenosis and plaque in the arteries, as well as for the assisted navigation of interventional surgery.
no code implementations • 23 Mar 2020 • Tobias Ross, Annika Reinke, Peter M. Full, Martin Wagner, Hannes Kenngott, Martin Apitz, Hellena Hempe, Diana Mindroc Filimon, Patrick Scholz, Thuy Nuong Tran, Pierangela Bruno, Pablo Arbeláez, Gui-Bin Bian, Sebastian Bodenstedt, Jon Lindström Bolmgren, Laura Bravo-Sánchez, Hua-Bin Chen, Cristina González, Dong Guo, Pål Halvorsen, Pheng-Ann Heng, Enes Hosgor, Zeng-Guang Hou, Fabian Isensee, Debesh Jha, Tingting Jiang, Yueming Jin, Kadir Kirtac, Sabrina Kletz, Stefan Leger, Zhixuan Li, Klaus H. Maier-Hein, Zhen-Liang Ni, Michael A. Riegler, Klaus Schoeffmann, Ruohua Shi, Stefanie Speidel, Michael Stenzel, Isabell Twick, Guotai Wang, Jiacheng Wang, Liansheng Wang, Lu Wang, Yu-Jie Zhang, Yan-Jie Zhou, Lei Zhu, Manuel Wiesenfarth, Annette Kopp-Schneider, Beat P. Müller-Stich, Lena Maier-Hein
The validation of the competing methods for the three tasks (binary segmentation, multi-instance detection and multi-instance segmentation) was performed in three different stages with an increasing domain gap between the training and the test data.
no code implementations • 14 Jan 2020 • Lu Wang, Jie Yang
Owing to their advantages in similarity computation and database storage for large-scale multimodal data, cross-modal hashing methods have attracted extensive attention in similarity retrieval across heterogeneous modalities.
no code implementations • 11 Nov 2019 • Lu Wang, Jie Yang
Large-scale cross-modal hashing similarity retrieval has attracted increasing attention in modern search applications such as search engines and autonomous driving, showing great advantages in computation and storage.
no code implementations • IJCNLP 2019 • Xingshan Zeng, Jing Li, Lu Wang, Kam-Fai Wong
The prevalent use of social media leads to a vast amount of online conversations being produced on a daily basis.
no code implementations • ICLR 2020 • Xinyun Chen, Lu Wang, Yizhe Hang, Heng Ge, Hongyuan Zha
We consider off-policy policy evaluation when the trajectory data are generated by multiple behavior policies.
no code implementations • 4 Oct 2019 • Lu Wang, Wenchao Yu, Wei Wang, Wei Cheng, Wei Zhang, Hongyuan Zha, Xiaofeng He, Haifeng Chen
Graph representation learning, aiming to learn low-dimensional representations which capture the geometric dependencies between nodes in the original graph, has gained increasing popularity in a variety of graph analysis tasks, including node classification and link prediction.
1 code implementation • IJCNLP 2019 • Lisa Fan, Marshall White, Eva Sharma, Ruisi Su, Prafulla Kumar Choubey, Ruihong Huang, Lu Wang
The increasing prevalence of political bias in news media calls for greater public awareness of it, as well as robust methods for its detection.
no code implementations • IJCNLP 2019 • Eva Sharma, Luyang Huang, Zhe Hu, Lu Wang
Human judges further rate our system summaries as more informative and coherent than those by popular summarization models.
no code implementations • IJCNLP 2019 • Xinyu Hua, Lu Wang
Building effective text generation systems requires three critical components: content selection, text planning, and surface realization; traditionally, these are tackled as separate problems.
no code implementations • 29 Jul 2019 • Lu Wang, Dongxiao Zhu
Many real-world datasets are labeled with natural orders, i.e., ordinal labels.
no code implementations • ACL 2019 • Eva Sharma, Chen Li, Lu Wang
Most existing text summarization datasets are compiled from the news domain, where summaries have a flattened discourse structure.
1 code implementation • 10 Jun 2019 • Lu Wang, Xuanqing Liu, Jin-Feng Yi, Zhi-Hua Zhou, Cho-Jui Hsieh
Furthermore, we show that dual solutions for these QP problems could give us a valid lower bound of the adversarial perturbation that can be used for formal robustness verification, giving us a nice view of attack/verification for NN models.
1 code implementation • ACL 2019 • Hou Pong Chan, Wang Chen, Lu Wang, Irwin King
To address this problem, we propose a reinforcement learning (RL) approach for keyphrase generation, with an adaptive reward function that encourages a model to generate both sufficient and accurate keyphrases.
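A sketch of one adaptive reward consistent with that description, switching from recall to F1 once enough keyphrases have been generated; treat the exact switching rule as an assumption:

```python
def adaptive_reward(predicted, gold):
    """Reward recall while the model has produced fewer keyphrases than the
    reference (pushing for sufficiency), then F1 (pushing for accuracy)."""
    pred, ref = set(predicted), set(gold)
    tp = len(pred & ref)
    recall = tp / len(ref) if ref else 0.0
    if len(pred) < len(ref):
        return recall
    precision = tp / len(pred) if pred else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```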
no code implementations • ACL 2019 • Xinyu Hua, Zhe Hu, Lu Wang
Automatic argument generation is an appealing but challenging task.
1 code implementation • ACL 2019 • Xingshan Zeng, Jing Li, Lu Wang, Kam-Fai Wong
We hypothesize that both the context of the ongoing conversations and the users' previous chatting history will affect their continued interests in future engagement.
no code implementations • ACL 2019 • Hai Ye, Wenjie Li, Lu Wang
Semantic parsing aims to transform natural language (NL) utterances into formal meaning representations (MRs), whereas an NL generator achieves the reverse: producing an NL description for some given MRs.
no code implementations • ICLR 2019 • Shen-Huan Lv, Lu Wang, Zhi-Hua Zhou
Recent research about margin theory has proved that maximizing the minimum margin like support vector machines does not necessarily lead to better performance, and instead, it is crucial to optimize the margin distribution.
no code implementations • NAACL 2019 • Xinyu Hua, Mitko Nikolov, Nikhil Badugu, Lu Wang
Peer review plays a critical role in the scientific writing and publication ecosystem.
no code implementations • 18 Mar 2019 • Shihua Huang, Lu Wang
Driven by Convolutional Neural Networks, object detection and semantic segmentation have gained significant improvements.
no code implementations • ICLR 2019 • Shen-Huan Lyu, Lu Wang, Zhi-Hua Zhou
We utilize a convex margin distribution loss function on the deep neural networks to validate our theoretical results by optimizing the margin ratio.
no code implementations • 12 Nov 2018 • Songmin Dai, Xiaoqiang Li, Lu Wang, Pin Wu, Weiqin Tong, Yimin Chen
We obtain appealing results in both tasks, which shows that the independence prior is useful for instance segmentation and that it is possible to learn instance masks without supervision from only one image.
no code implementations • 27 Oct 2018 • Dongchi Yu, Lu Wang
Designing and modifying complex hull forms for optimal vessel performances have been a major challenge for naval architects.
no code implementations • 14 Oct 2018 • Lisa Fan, Dong Yu, Lu Wang
Sequence-to-sequence (seq2seq) neural models have been actively investigated for abstractive summarization.
no code implementations • 27 Sep 2018 • Guoshuai Zhao, Jun Li, Lu Wang, Xueming Qian, Yun Fu
In this paper, we propose a Graph-Sequence-to-Sequence (GraphSeq2Seq) model to fuse the dependency graph among words into the traditional Seq2Seq framework.
no code implementations • EMNLP 2018 • Hai Ye, Lu Wang
We study the problem of generating keyphrases that summarize the key points for a given document.
no code implementations • COLING 2018 • Lu Wang, Shoushan Li, Changlong Sun, Luo Si, Xiaozhong Liu, Min Zhang, Guodong Zhou
Question-Answer (QA) matching is a fundamental task in the Natural Language Processing community.
no code implementations • 4 Jul 2018 • Lu Wang, Wei Zhang, Xiaofeng He, Hongyuan Zha
Prior relevant studies recommend treatments using either supervised learning (e.g., matching the indicator signal that denotes doctor prescriptions) or reinforcement learning (e.g., maximizing an evaluation signal that reflects cumulative reward from survival rates).
no code implementations • NAACL 2018 • Xingshan Zeng, Jing Li, Lu Wang, Nicholas Beauchamp, Sarah Shugars, Kam-Fai Wong
We propose a statistical model that jointly captures: (1) topics for representing user interests and conversation content, and (2) discourse modes for describing user replying behavior and conversation dynamics.
no code implementations • ACL 2018 • Xinyu Hua, Lu Wang
High-quality arguments are essential elements of human reasoning and decision-making processes.
no code implementations • WS 2017 • Xinyu Hua, Lu Wang
We study the problem of domain adaptation for neural abstractive summarization.
no code implementations • TACL 2017 • Lu Wang, Nick Beauchamp, Sarah Shugars, Kechen Qin
Using a dataset of 118 Oxford-style debates, our model's combination of content (as latent topics) and style (as linguistic features) allows us to predict audience-adjudicated winners with 74% accuracy, significantly outperforming linguistic features alone (66%).
no code implementations • ACL 2017 • Kechen Qin, Lu Wang, Joseph Kim
We present a joint modeling approach to identify salient discussion points in spoken meetings as well as to label the discourse relations between speaker turns.
no code implementations • ACL 2017 • Xinyu Hua, Lu Wang
We investigate the problem of sentence-level supporting argument detection from relevant documents for user-specified claims.
no code implementations • 25 Jun 2016 • Lu Wang, Claire Cardie
This paper addresses the problem of summarizing decisions in spoken meetings: our goal is to produce a concise "decision abstract" for each meeting decision.
no code implementations • 25 Jun 2016 • Lu Wang, Larry Heck, Dilek Hakkani-Tur
Our session-based models outperform the state-of-the-art method for entity extraction task in SDS.
no code implementations • WS 2012 • Lu Wang, Claire Cardie
We present a novel unsupervised framework for focused meeting summarization that views the problem as an instance of relation extraction.
no code implementations • ACL 2013 • Lu Wang, Hema Raghavan, Vittorio Castelli, Radu Florian, Claire Cardie
We consider the problem of using sentence compression techniques to facilitate query-focused multi-document summarization.
no code implementations • WS 2012 • Lu Wang, Claire Cardie
We present a token-level decision summarization framework that utilizes the latent topic structures of utterances to identify "summary-worthy" words.
no code implementations • WS 2014 • Lu Wang, Claire Cardie
For example, the isotonic CRF model achieves F1 scores of 0.74 and 0.67 for agreement and disagreement detection, while a linear-chain CRF obtains 0.58 and 0.56, on discussions from Wikipedia Talk pages.
no code implementations • ACL 2014 • Lu Wang, Claire Cardie
We investigate the novel task of online dispute detection and propose a sentiment analysis solution to the problem: we aim to identify the sequence of sentence-level sentiments expressed during a discussion and to use them as features in a classifier that predicts the DISPUTE/NON-DISPUTE label for the discussion as a whole.
no code implementations • COLING 2014 • Lu Wang, Hema Raghavan, Claire Cardie, Vittorio Castelli
We present a submodular function-based framework for query-focused opinion summarization.
no code implementations • HLT 2015 • Lu Wang, Claire Cardie, Galen Marchetti
Existing timeline generation systems for complex events consider only information from traditional media, ignoring the rich social context provided by user-generated content that reveals representative public interests or insightful opinions.
no code implementations • NAACL 2016 • Lu Wang, Wang Ling
We study the problem of generating abstractive summaries for opinionated text.