Search Results for author: Haoran Li

Found 42 papers, 12 papers with code

Learn to Copy from the Copying History: Correlational Copy Network for Abstractive Summarization

no code implementations EMNLP 2021 Haoran Li, Song Xu, Peng Yuan, Yujia Wang, Youzheng Wu, Xiaodong He, BoWen Zhou

It thereby takes advantage of prior copying distributions and, at each time step, explicitly encourages the model to copy the input word that is relevant to the previously copied one.

Abstractive Text Summarization News Summarization
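As background for this entry, the standard copy mechanism that the paper builds on mixes the decoder's generation distribution with a copy (attention) distribution over source words. A minimal sketch of that generic mixture — not the paper's correlational variant; function name and probabilities are illustrative:

```python
def copy_mix(p_vocab, p_copy, p_gen):
    """Blend a generation distribution with a copy distribution, as in
    pointer-generator summarizers: the final probability of a word is
    p_gen * P_vocab(w) + (1 - p_gen) * P_copy(w)."""
    words = set(p_vocab) | set(p_copy)
    return {w: p_gen * p_vocab.get(w, 0.0) + (1 - p_gen) * p_copy.get(w, 0.0)
            for w in words}

p_vocab = {"the": 0.5, "report": 0.3, "says": 0.2}   # decoder softmax (toy)
p_copy = {"report": 0.7, "quarterly": 0.3}           # attention over source (toy)
p_final = copy_mix(p_vocab, p_copy, p_gen=0.6)
# "report" gains probability mass from both generating and copying
```

The correlational copy network in this paper additionally conditions the copy distribution on which source word was copied at the previous step.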

Load-balanced Gather-scatter Patterns for Sparse Deep Neural Networks

no code implementations 20 Dec 2021 Fei Sun, Minghai Qin, Tianyun Zhang, Xiaolong Ma, Haoran Li, Junwen Luo, Zihao Zhao, Yen-Kuang Chen, Yuan Xie

Our experiments show that GS patterns consistently make better trade-offs between accuracy and computation efficiency compared to conventional structured sparse patterns.

Machine Translation Speech Recognition

CONFIT: Toward Faithful Dialogue Summarization with Linguistically-Informed Contrastive Fine-tuning

no code implementations 16 Dec 2021 Xiangru Tang, Arjun Nair, Borui Wang, Bingyao Wang, Jai Desai, Aaron Wade, Haoran Li, Asli Celikyilmaz, Yashar Mehdad, Dragomir Radev

Using human evaluation and automatic faithfulness metrics, we show that our model significantly reduces all kinds of factual errors on the SAMSum dialogue summarization corpus.

Abstractive Dialogue Summarization Meeting Summarization +1

The Powerful Use of AI in the Energy Sector: Intelligent Forecasting

no code implementations 3 Nov 2021 Erik Blasch, Haoran Li, Zhihao Ma, Yang Weng

To meet societal requirements, this paper proposes a methodology to develop, deploy, and evaluate AI systems in the energy sector by: (1) understanding the power system measurements with physics, (2) designing AI algorithms to forecast the need, (3) developing robust and accountable AI methods, and (4) creating reliable measures to evaluate the performance of the AI model.

Dimensionality Reduction

ABCP: Automatic Block-wise and Channel-wise Network Pruning via Joint Search

1 code implementation 8 Oct 2021 Jiaqi Li, Haoran Li, Yaran Chen, Zixiang Ding, Nannan Li, Mingjun Ma, Zicheng Duan, Dongbin Zhao

Compared with the traditional rule-based pruning method, this pipeline saves human labor and achieves a higher compression ratio with lower accuracy loss.

Network Pruning

Adversarial twin neural networks: maximizing physics recovery for physical system

no code implementations 29 Sep 2021 Haoran Li, Erik Blasch, Jingyi Yuan, Yang Weng

Thus, we propose (1) sparsity regularization for the physical model and (2) physical superiority over the virtual model.

What to Do If Sparse Representation Learning Fails Unexpectedly?

no code implementations 29 Sep 2021 Jingyi Yuan, Haoran Li, Erik Blasch, Yang Weng

RISE is based on a complete analysis of the generalizability of data properties for physical systems.

Active Learning Representation Learning

The JDDC 2.0 Corpus: A Large-Scale Multimodal Multi-Turn Chinese Dialogue Dataset for E-commerce Customer Service

no code implementations 27 Sep 2021 Nan Zhao, Haoran Li, Youzheng Wu, Xiaodong He, BoWen Zhou

We present the solutions of the top-5 teams participating in the JDDC multimodal dialogue challenge based on this dataset, which provide valuable insights for further research on the multimodal dialogue task.

Investigating Crowdsourcing Protocols for Evaluating the Factual Consistency of Summaries

no code implementations 19 Sep 2021 Xiangru Tang, Alexander R. Fabbri, Ziming Mao, Griffin Adams, Borui Wang, Haoran Li, Yashar Mehdad, Dragomir Radev

Current pre-trained models applied to summarization are prone to factual inconsistencies which either misrepresent the source text or introduce extraneous information.

Blind Image Quality Assessment for MRI with a Deep Three-Dimensional Content-Adaptive Hyper-Network

no code implementations 13 Jul 2021 Kehan Qi, Haoran Li, Chuyu Rong, Yu Gong, Cheng Li, Hairong Zheng, Shanshan Wang

However, the performance of these methods is limited by their use of simple content-non-adaptive network parameters and their failure to exploit the important 3D spatial information of the medical images.

Blind Image Quality Assessment

Mixed Cross Entropy Loss for Neural Machine Translation

1 code implementation 30 Jun 2021 Haoran Li, Wei Lu

In neural machine translation, cross entropy (CE) is the standard loss function in two training methods of auto-regressive models, i.e., teacher forcing and scheduled sampling.

Machine Translation Translation
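For context on the entry above, the teacher-forcing cross entropy that the paper's mixed loss generalizes can be sketched as follows. This is a toy illustration with made-up probabilities, not the paper's mixed CE objective:

```python
import math

def teacher_forcing_ce(step_probs, gold_tokens):
    """Token-level cross entropy under teacher forcing: at each step the
    decoder conditions on the gold prefix, and the loss averages
    -log P(gold_t | gold_<t).  step_probs[t] maps candidate tokens to
    the model's probability at step t."""
    return -sum(math.log(step_probs[t][tok])
                for t, tok in enumerate(gold_tokens)) / len(gold_tokens)

# Toy two-step decoder output distributions
probs = [{"le": 0.7, "un": 0.3},
         {"chat": 0.6, "chien": 0.4}]
loss = teacher_forcing_ce(probs, ["le", "chat"])  # mean of -log 0.7 and -log 0.6
```

Under scheduled sampling, the same per-token loss is computed, but the decoder is sometimes conditioned on its own sampled predictions instead of the gold prefix.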

Syntax-augmented Multilingual BERT for Cross-lingual Transfer

1 code implementation ACL 2021 Wasi Uddin Ahmad, Haoran Li, Kai-Wei Chang, Yashar Mehdad

In recent years, we have seen a colossal effort in pre-training multilingual text encoders using large-scale corpora in many languages to facilitate cross-lingual transfer learning.

Cross-Lingual Transfer Named Entity Recognition +4

ConvoSumm: Conversation Summarization Benchmark and Improved Abstractive Summarization with Argument Mining

1 code implementation ACL 2021 Alexander R. Fabbri, Faiaz Rahman, Imad Rizvi, Borui Wang, Haoran Li, Yashar Mehdad, Dragomir Radev

While online conversations can cover a vast amount of information in many different formats, abstractive text summarization has primarily focused on modeling solely news articles.

Abstractive Text Summarization Argument Mining +2

Differentially Private Federated Knowledge Graphs Embedding

1 code implementation 17 May 2021 Hao Peng, Haoran Li, Yangqiu Song, Vincent Zheng, JianXin Li

However, for multiple cross-domain knowledge graphs, state-of-the-art embedding models cannot make full use of the data from different knowledge domains while preserving the privacy of exchanged data.

Knowledge Graph Embedding Knowledge Graphs +2
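For background on the entry above, a differentially private update of the kind such a federated embedding model might share can be sketched with the standard clip-and-add-noise Gaussian mechanism. This is a generic illustration (all names and constants are ours), not the paper's actual scheme:

```python
import random

def dp_noisy_gradient(grad, clip_norm, sigma, seed=0):
    """One step of the Gaussian mechanism used in DP training: clip the
    gradient to a bounded L2 norm, then add Gaussian noise scaled by the
    clipping bound before the update leaves the client."""
    norm = sum(g * g for g in grad) ** 0.5
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    rng = random.Random(seed)
    return [g * scale + rng.gauss(0.0, sigma * clip_norm) for g in grad]

noisy = dp_noisy_gradient([3.0, 4.0], clip_norm=1.0, sigma=0.1)
# before noise, the gradient is clipped from norm 5.0 down to [0.6, 0.8]
```

Clipping bounds each client's contribution, which is what makes the added noise translate into a formal privacy guarantee.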

EASE: Extractive-Abstractive Summarization with Explanations

no code implementations 14 May 2021 Haoran Li, Arash Einolghozati, Srinivasan Iyer, Bhargavi Paranjape, Yashar Mehdad, Sonal Gupta, Marjan Ghazvininejad

Current abstractive summarization systems outperform their extractive counterparts, but their widespread adoption is inhibited by the inherent lack of interpretability.

Abstractive Text Summarization Document Summarization +1

Lifelong Learning with Sketched Structural Regularization

no code implementations 17 Apr 2021 Haoran Li, Aditya Krishnan, Jingfeng Wu, Soheil Kolouri, Praveen K. Pilly, Vladimir Braverman

In practice and due to computational constraints, most SR methods crudely approximate the importance matrix by its diagonal.

Continual Learning Permuted-MNIST
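The diagonal approximation that the abstract above refers to can be illustrated with an EWC-style quadratic penalty: each parameter is anchored to its old value, weighted by a per-parameter importance score, and all off-diagonal parameter interactions are dropped. A generic sketch with hypothetical names, not the paper's sketched estimator:

```python
def diagonal_sr_penalty(theta, theta_star, importance, lam=1.0):
    """Structural-regularization penalty with a diagonal importance
    approximation: lam/2 * sum_i F_ii * (theta_i - theta*_i)^2.
    Ignoring off-diagonal terms is the crude approximation the
    abstract mentions."""
    return 0.5 * lam * sum(f * (t - ts) ** 2
                           for f, t, ts in zip(importance, theta, theta_star))

penalty = diagonal_sr_penalty(theta=[1.0, 2.0],
                              theta_star=[0.5, 2.0],
                              importance=[4.0, 10.0])
# only the first parameter moved, so penalty = 0.5 * 4.0 * 0.25 = 0.5
```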

K-PLUG: Knowledge-injected Pre-trained Language Model for Natural Language Understanding and Generation in E-Commerce

1 code implementation Findings (EMNLP) 2021 Song Xu, Haoran Li, Peng Yuan, Yujia Wang, Youzheng Wu, Xiaodong He, Ying Liu, BoWen Zhou

K-PLUG achieves new state-of-the-art results on a suite of domain-specific NLP tasks, including product knowledge base completion, abstractive product summarization, and multi-turn dialogue, significantly outperforming baselines across the board. This demonstrates that the proposed method effectively learns a diverse set of domain-specific knowledge for both language understanding and generation tasks.

Knowledge Base Completion Language Modelling +2

K-PLUG: Knowledge-injected Pre-trained Language Model for Natural Language Understanding and Generation

1 code implementation 1 Jan 2021 Song Xu, Haoran Li, Peng Yuan, Yujia Wang, Youzheng Wu, Xiaodong He, Ying Liu, BoWen Zhou

K-PLUG achieves new state-of-the-art results on a suite of domain-specific NLP tasks, including product knowledge base completion, abstractive product summarization, and multi-turn dialogue, significantly outperforming baselines across the board. This demonstrates that the proposed method effectively learns a diverse set of domain-specific knowledge for both language understanding and generation tasks.

Chatbot Knowledge Base Completion +4

Multimodal Sentence Summarization via Multimodal Selective Encoding

no code implementations COLING 2020 Haoran Li, Junnan Zhu, Jiajun Zhang, Xiaodong He, Chengqing Zong

Thus, we propose a multimodal selective gate network that considers reciprocal relationships between textual and multi-level visual features, including global image descriptor, activation grids, and object proposals, to select highlights of the event when encoding the source sentence.

Sentence Summarization

Dynamic radiomics: a new methodology to extract quantitative time-related features from tomographic images

no code implementations 1 Nov 2020 Fengying Che, Ruichuan Shi, Jian Wu, Haoran Li, Shuqin Li, Weixing Chen, Hao Zhang, Zhi Li, Xiaoyu Cui

The feature extraction methods of radiomics are mainly based on static tomographic images at a certain moment, while the occurrence and development of disease is a dynamic process that cannot be fully reflected by static characteristics alone.

Conversational Semantic Parsing

no code implementations EMNLP 2020 Armen Aghajanyan, Jean Maillard, Akshat Shrivastava, Keith Diedrick, Mike Haeger, Haoran Li, Yashar Mehdad, Ves Stoyanov, Anuj Kumar, Mike Lewis, Sonal Gupta

In this paper, we propose a semantic representation for such task-oriented conversational systems that can represent concepts such as co-reference and context carryover, enabling comprehensive understanding of queries in a session.

Semantic Parsing

Multimodal Joint Attribute Prediction and Value Extraction for E-commerce Product

1 code implementation EMNLP 2020 Tiangang Zhu, Yue Wang, Haoran Li, Youzheng Wu, Xiaodong He, Bo-Wen Zhou

We annotate a multimodal product attribute value dataset containing 87,194 instances. Experimental results on this dataset demonstrate that explicitly modeling the relationship between attributes and values helps our method establish the correspondence between them, and that selectively utilizing visual product information is necessary for the task.

Attribute Value Extraction

MTOP: A Comprehensive Multilingual Task-Oriented Semantic Parsing Benchmark

no code implementations EACL 2021 Haoran Li, Abhinav Arora, Shuohui Chen, Anchit Gupta, Sonal Gupta, Yashar Mehdad

Scaling semantic parsing models for task-oriented dialog systems to new languages is often expensive and time-consuming due to the lack of available datasets.

Semantic Parsing Translation

Self-Attention Guided Copy Mechanism for Abstractive Summarization

no code implementations ACL 2020 Song Xu, Haoran Li, Peng Yuan, Youzheng Wu, Xiaodong He, Bo-Wen Zhou

The copy module has been widely adopted in recent abstractive summarization models; it helps the decoder extract words from the source into the summary.

Abstractive Text Summarization

AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks

3 code implementations ICML 2020 Yonggan Fu, Wuyang Chen, Haotao Wang, Haoran Li, Yingyan Lin, Zhangyang Wang

Inspired by the recent success of AutoML in deep compression, we introduce AutoML to GAN compression and develop an AutoGAN-Distiller (AGD) framework.

AutoML Knowledge Distillation +2
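As background on the distillation side of the entry above, a generic knowledge-distillation loss blends the KL divergence from a teacher's soft output distribution with the usual cross entropy on a hard label. This is a classification-style sketch with illustrative names, not the AGD framework's GAN-specific objective:

```python
import math

def distillation_loss(student, teacher, alpha=0.5, hard_label=None):
    """Toy knowledge-distillation objective: KL(teacher || student) over
    soft outputs, optionally mixed with cross entropy on a hard label."""
    kl = sum(p * math.log(p / student[w]) for w, p in teacher.items() if p > 0)
    if hard_label is None:
        return kl
    return alpha * (-math.log(student[hard_label])) + (1 - alpha) * kl

soft_only = distillation_loss({"a": 0.5, "b": 0.5}, {"a": 1.0, "b": 0.0})
# with a one-hot teacher this reduces to plain cross entropy, log 2
```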

General Purpose Text Embeddings from Pre-trained Language Models for Scalable Inference

no code implementations Findings of the Association for Computational Linguistics 2020 Jingfei Du, Myle Ott, Haoran Li, Xing Zhou, Veselin Stoyanov

The resulting method offers a compelling solution for using large-scale pre-trained models at a fraction of the computational cost when multiple tasks are performed on the same text.

Knowledge Distillation Quantization

BiFNet: Bidirectional Fusion Network for Road Segmentation

no code implementations 18 Apr 2020 Haoran Li, Yaran Chen, Qichao Zhang, Dongbin Zhao

Considering that the bird's eye view (BEV) of the LiDAR point cloud preserves the spatial structure of the horizontal plane, this paper proposes a bidirectional fusion network (BiFNet) to fuse the image and the BEV of the point cloud.

Multi-modal Datasets for Super-resolution

no code implementations 13 Apr 2020 Haoran Li, Weihong Quan, Meijun Yan, Jin Zhang, Xiaoli Gong, Jin Zhou

However, due to the variety of image degradation types in the real world, models trained on single-modal simulation datasets do not always have good robustness and generalization ability in different degradation scenarios.

Super-Resolution

Music-oriented Dance Video Synthesis with Pose Perceptual Loss

1 code implementation 13 Dec 2019 Xuanchi Ren, Haoran Li, Zijian Huang, Qifeng Chen

We present a learning-based approach with pose perceptual loss for automatic music video generation.

Video Generation

Predictive Multi-level Patient Representations from Electronic Health Records

no code implementations 12 Nov 2019 Zichang Wang, Haoran Li, Lu-chen Liu, Haoxian Wu, Ming Zhang

Most related studies transform EHR data of a patient into a sequence of clinical events in temporal order and then use sequential models to learn patient representations for outcome prediction.

Emerging Cross-lingual Structure in Pretrained Language Models

no code implementations ACL 2020 Shijie Wu, Alexis Conneau, Haoran Li, Luke Zettlemoyer, Veselin Stoyanov

We study the problem of multilingual masked language modeling, i.e., the training of a single model on concatenated text from multiple languages, and present a detailed study of several factors that influence why these models are so effective for cross-lingual transfer.

Cross-Lingual Transfer Language Modelling +3

Learning Hierarchical Representations of Electronic Health Records for Clinical Outcome Prediction

no code implementations 20 Mar 2019 Lu-chen Liu, Haoran Li, Zhiting Hu, Haoran Shi, Zichang Wang, Jian Tang, Ming Zhang

Our model learns hierarchical representations of event sequences to adaptively distinguish between short-range and long-range events and accurately capture core temporal dependencies.

Multilingual Seq2seq Training with Similarity Loss for Cross-Lingual Document Classification

no code implementations WS 2018 Katherine Yu, Haoran Li, Barlas Oguz

In this paper we continue experiments where neural machine translation training is used to produce joint cross-lingual fixed-dimensional sentence embeddings.

Cross-Lingual Document Classification Cross-Lingual Transfer +6

Multi-modal Summarization for Asynchronous Collection of Text, Image, Audio and Video

no code implementations EMNLP 2017 Haoran Li, Junnan Zhu, Cong Ma, Jiajun Zhang, Cheng-qing Zong

In this work, we propose an extractive Multi-modal Summarization (MMS) method which can automatically generate a textual summary given a set of documents, images, audio clips, and videos related to a specific topic.

Automatic Speech Recognition Document Summarization +1
