Search Results for author: Wen Xiao

Found 13 papers, 7 papers with code

T3-Vis: a visual analytic framework for Training and fine-Tuning Transformers in NLP

1 code implementation EMNLP (ACL) 2021 Raymond Li, Wen Xiao, Lanjun Wang, Hyeju Jang, Giuseppe Carenini

Transformers are the dominant architecture in NLP, but their training and fine-tuning are still very challenging.

SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds

no code implementations 12 Jan 2022 Qingyong Hu, Bo Yang, Sheikh Khalid, Wen Xiao, Niki Trigoni, Andrew Markham

Each point in the dataset has been labelled with fine-grained semantic annotations, resulting in a dataset that is three times the size of the previous existing largest photogrammetric point cloud dataset.

PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization

1 code implementation ACL 2022 Wen Xiao, Iz Beltagy, Giuseppe Carenini, Arman Cohan

We introduce PRIMERA, a pre-trained model for multi-document representation with a focus on summarization that reduces the need for dataset-specific architectures and large amounts of fine-tuning labeled data.

Abstractive Text Summarization, Document Summarization, +1
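Per the abstract, PRIMERA is pre-trained for multi-document summarization; the paper concatenates the source documents with special document-separator tokens before encoding. A minimal sketch of that input assembly (the `<doc-sep>` token name follows the paper; the helper function and token-level truncation here are illustrative, not the library's API):

```python
DOC_SEP = "<doc-sep>"  # document separator token described in the PRIMERA paper


def assemble_multidoc_input(documents, max_len=4096):
    """Join several source documents into one whitespace-tokenized
    sequence, inserting <doc-sep> after each document and truncating
    to max_len tokens (illustrative helper, not PRIMERA's tokenizer)."""
    tokens = []
    for doc in documents:
        tokens.extend(doc.split())
        tokens.append(DOC_SEP)
    return " ".join(tokens[:max_len])


docs = ["first article text", "second article text"]
print(assemble_multidoc_input(docs))
```

In the actual model the concatenated sequence is consumed by a long-context encoder, which is what lets PRIMERA avoid dataset-specific multi-document architectures.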

W-RST: Towards a Weighted RST-style Discourse Framework

no code implementations ACL 2021 Patrick Huber, Wen Xiao, Giuseppe Carenini

Aiming for a better integration of data-driven and linguistically inspired approaches, we explore whether RST nuclearity, which assigns a binary importance label between text segments, can be replaced by automatically generated, real-valued scores, in what we call a Weighted-RST framework.
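The core substitution the abstract describes, a binary nucleus/satellite label replaced by a real-valued weight pair, can be sketched on a single tree node (the label codes follow RST convention; the fixed sharpness value is an invented stand-in for the automatically generated scores the paper derives from data):

```python
def nuclearity_to_weights(label, sharpness=0.8):
    """Map a binary RST nuclearity label for a (left, right) span pair
    to a real-valued weight pair, as in the Weighted-RST idea.
    sharpness is a placeholder; the paper generates such scores
    automatically rather than fixing them."""
    if label == "NS":          # left nucleus, right satellite
        return (sharpness, 1 - sharpness)
    if label == "SN":          # left satellite, right nucleus
        return (1 - sharpness, sharpness)
    return (0.5, 0.5)          # NN: both spans are nuclei


print(nuclearity_to_weights("NS"))
```

The point of the weighted version is that downstream tasks can use graded importance instead of a hard binary choice.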

Demoting the Lead Bias in News Summarization via Alternating Adversarial Learning

no code implementations ACL 2021 Linzi Xing, Wen Xiao, Giuseppe Carenini

In news articles, the lead bias is a common phenomenon that usually dominates the learning signals for neural extractive summarizers, severely limiting their performance on data with a different bias or no bias at all.

News Summarization
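The lead bias the paper targets is easiest to see against the LEAD-k baseline, which simply extracts the first k sentences of an article; a minimal sketch (the naive split on ". " is for illustration only — real systems use a proper sentence tokenizer):

```python
def lead_k_summary(article: str, k: int = 3) -> str:
    """LEAD-k baseline: return the first k sentences as the summary.
    Because news articles front-load key information, this trivial
    baseline is hard for learned extractive models to beat, which is
    the bias the paper's adversarial training tries to demote."""
    sentences = [s.strip() for s in article.split(". ") if s.strip()]
    return ". ".join(sentences[:k])


article = "Lead sentence one. Lead sentence two. Lead sentence three. Body detail"
print(lead_k_summary(article, k=2))
```

A model that merely learns sentence position reproduces this baseline, so it generalizes poorly to domains where the summary-worthy content is not at the top.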

Do We Really Need That Many Parameters In Transformer For Extractive Summarization? Discourse Can Help!

no code implementations EMNLP (CODI) 2020 Wen Xiao, Patrick Huber, Giuseppe Carenini

The multi-head self-attention of popular transformer models is widely used within Natural Language Processing (NLP), including for the task of extractive summarization.

Extractive Summarization, Natural Language Processing
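The multi-head self-attention the paper examines can be sketched for a single head in pure Python (toy two-dimensional vectors, standard library only; real transformers add learned projections and multiple heads):

```python
import math


def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]


def self_attention(queries, keys, values):
    """Single-head scaled dot-product attention: each output row is a
    softmax-weighted mix of the value vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out


x = [[1.0, 0.0], [0.0, 1.0]]
print(self_attention(x, x, x))
```

The paper's question is whether all of this learned attention capacity is needed, or whether discourse structure can supply much of the same signal with fewer parameters.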

Towards Semantic Segmentation of Urban-Scale 3D Point Clouds: A Dataset, Benchmarks and Challenges

2 code implementations CVPR 2021 Qingyong Hu, Bo Yang, Sheikh Khalid, Wen Xiao, Niki Trigoni, Andrew Markham

An essential prerequisite for unleashing the potential of supervised deep learning algorithms in the area of 3D scene understanding is the availability of large-scale and richly annotated datasets.

Scene Understanding, Semantic Segmentation

Extractive Summarization of Long Documents by Combining Global and Local Context

1 code implementation IJCNLP 2019 Wen Xiao, Giuseppe Carenini

In this paper, we propose a novel neural single document extractive summarization model for long documents, incorporating both the global context of the whole document and the local context within the current topic.

Extractive Summarization, Text Summarization
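The paper's core idea, scoring each sentence with both a whole-document (global) signal and a topic-segment (local) signal, can be caricatured with simple word-overlap features (the scoring below is purely illustrative; the paper uses a neural model, not overlap counts):

```python
def sentence_scores(sentences, topic_of, alpha=0.5):
    """Score each sentence by mixing its overlap with the whole
    document (global context) and with its own topic segment (local
    context). topic_of[i] gives the topic id of sentence i; alpha
    balances the two signals. Illustrative stand-in for the paper's
    neural scorer."""
    doc_vocab = set(w for s in sentences for w in s.split())
    scores = []
    for i, sent in enumerate(sentences):
        words = set(sent.split())
        topic_vocab = set(w for j, s in enumerate(sentences)
                          if topic_of[j] == topic_of[i]
                          for w in s.split())
        global_score = len(words & doc_vocab) / len(doc_vocab)
        local_score = len(words & topic_vocab) / len(topic_vocab)
        scores.append(alpha * global_score + (1 - alpha) * local_score)
    return scores


print(sentence_scores(["a b", "b c", "d"], topic_of=[0, 0, 1]))
```

For long documents, the local term keeps a sentence's score sensitive to its topic segment even when the whole-document signal is diluted, which is the motivation the abstract states.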
