Search Results for author: Xiang Lin

Found 16 papers, 4 papers with code

Dynamic Scheduled Sampling with Imitation Loss for Neural Text Generation

no code implementations · 31 Jan 2023 · Xiang Lin, Prathyusha Jwalapuram, Shafiq Joty

Scheduled sampling is a curriculum learning strategy that gradually exposes the model to its own predictions during training to mitigate exposure bias.

Machine Translation · Text Generation
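The core idea of scheduled sampling can be illustrated with a minimal sketch: at each decoding step, feed the model either the gold token (teacher forcing) or its own prediction, with the teacher-forcing probability decayed over training. The function names and the linear decay schedule are illustrative assumptions, not the dynamic schedule proposed in the paper.

```python
import random

def scheduled_sampling_inputs(gold_tokens, model_predict, epsilon):
    """Build decoder inputs by mixing gold tokens with the model's own
    predictions. epsilon is the probability of keeping the gold token;
    decaying it over training gradually exposes the model to its own
    outputs, mitigating exposure bias."""
    inputs = [gold_tokens[0]]  # always start from the gold BOS token
    for t in range(1, len(gold_tokens)):
        if random.random() < epsilon:
            inputs.append(gold_tokens[t])          # teacher forcing
        else:
            inputs.append(model_predict(inputs))   # model's own prediction
    return inputs

def linear_decay(step, total_steps, eps_min=0.1):
    """One common static schedule: linearly decay epsilon toward eps_min.
    The paper's contribution is a *dynamic* schedule; this is only the
    classic baseline."""
    return max(eps_min, 1.0 - step / total_steps)
```

With `epsilon=1.0` this reduces to pure teacher forcing; with `epsilon=0.0` the decoder conditions entirely on its own predictions.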

Chart-to-Text: A Large-Scale Benchmark for Chart Summarization

2 code implementations · ACL 2022 · Shankar Kantharaj, Rixie Tiffany Ko Leong, Xiang Lin, Ahmed Masry, Megh Thakkar, Enamul Hoque, Shafiq Joty

We also introduce a number of state-of-the-art neural models as baselines that utilize image captioning and data-to-text generation techniques to tackle two problem variations: one assumes the underlying data table of the chart is available while the other needs to extract data from chart images.

Data-to-Text Generation · Image Captioning

Rethinking Self-Supervision Objectives for Generalizable Coherence Modeling

no code implementations · ACL 2022 · Prathyusha Jwalapuram, Shafiq Joty, Xiang Lin

Given the claims of improved text generation quality across various pre-trained neural models, we consider the coherence evaluation of machine-generated text to be one of the principal applications of coherence models that needs to be investigated.

Coherence Evaluation · Contrastive Learning · +1

Perturbation Theory-Aided Learned Digital Back-Propagation Scheme for Optical Fiber Nonlinearity Compensation

no code implementations · 11 Oct 2021 · Xiang Lin, Shenghang Luo, Sunish Kumar Orappanpara Soman, Octavia A. Dobre, Lutz Lampe, Deyuan Chang, Chuandong Li

The proposed scheme is evaluated by numerical simulations of a single-carrier optical fiber communication system operating at 32 Gbaud with 64-quadrature amplitude modulation and a 20×80 km transmission distance.

Straight to the Gradient: Learning to Use Novel Tokens for Neural Text Generation

1 code implementation · 14 Jun 2021 · Xiang Lin, Simeng Han, Shafiq Joty

Advanced large-scale neural language models have led to significant success in many language generation tasks.

Text Generation

Rethinking Coherence Modeling: Synthetic vs. Downstream Tasks

no code implementations · EACL 2021 · Tasnim Mohiuddin, Prathyusha Jwalapuram, Xiang Lin, Shafiq Joty

Although coherence modeling has come a long way in developing novel models, their evaluation on downstream applications for which they are purportedly developed has largely been neglected.

Benchmarking · Coherence Evaluation · +6

Resurrecting Submodularity for Neural Text Generation

no code implementations · 8 Nov 2019 · Simeng Han, Xiang Lin, Shafiq Joty

The resulting attention module offers an architecturally simple and empirically effective method to improve the coverage of neural text generation.

Abstractive Text Summarization · Text Generation

Fiber Nonlinearity Mitigation via the Parzen Window Classifier for Dispersion Managed and Unmanaged Links

no code implementations · 17 Sep 2019 · Abdelkerim Amari, Xiang Lin, Octavia A. Dobre, Ramachandran Venkatesan, Alex Alvarado

Machine learning techniques have recently received significant attention as promising approaches to deal with the optical channel impairments, and in particular, the nonlinear effects.

BIG-bench Machine Learning

Hierarchical Pointer Net Parsing

1 code implementation · IJCNLP 2019 · Linlin Liu, Xiang Lin, Shafiq Joty, Simeng Han, Lidong Bing

Transition-based top-down parsing with pointer networks has achieved state-of-the-art results in multiple parsing tasks, while having a linear time complexity.

Discourse Parsing · Inductive Bias
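The pointing mechanism behind such parsers can be sketched minimally: at each transition, the decoder state attends over the encoder states and "points" to the highest-scoring input position, giving linear-time decoding. Dot-product scoring and these function names are simplifying assumptions for illustration, not the paper's hierarchical architecture.

```python
import math

def pointer_attention(decoder_state, encoder_states):
    """One pointer-network step: score every input position against the
    current decoder state, normalize with softmax, and point to the
    argmax position. Scoring here is a plain dot product."""
    scores = [sum(d * e for d, e in zip(decoder_state, enc))
              for enc in encoder_states]
    m = max(scores)                            # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    return max(range(len(probs)), key=probs.__getitem__), probs
```

Because the output "vocabulary" is just the set of input positions, the same module handles variable-length inputs without a fixed label space.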

A Unified Linear-Time Framework for Sentence-Level Discourse Parsing

2 code implementations · ACL 2019 · Xiang Lin, Shafiq Joty, Prathyusha Jwalapuram, M Saiful Bari

We propose an efficient neural framework for sentence-level discourse analysis in accordance with Rhetorical Structure Theory (RST).

Discourse Parsing

N2VSCDNNR: A Local Recommender System Based on Node2vec and Rich Information Network

no code implementations · 12 Apr 2019 · Jinyin Chen, Yangyang Wu, Lu Fan, Xiang Lin, Haibin Zheng, Shanqing Yu, Qi Xuan

In particular, we use a bipartite network to construct the user-item network, and represent the interactions among users (or items) by the corresponding one-mode projection network.

Clustering · Recommendation Systems
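The one-mode projection mentioned above can be sketched in a few lines: starting from (user, item) interaction edges, connect two users with a weight equal to the number of items they share. This is only the generic bipartite-projection step, assumed here as a stand-in for the paper's richer pipeline (node2vec embedding, clustering, etc.).

```python
from collections import defaultdict
from itertools import combinations

def one_mode_projection(edges):
    """Project a bipartite user-item network onto the user side:
    two users are linked with weight = number of co-interacted items."""
    items_of = defaultdict(set)          # user -> set of items
    for user, item in edges:
        items_of[user].add(item)
    proj = {}                            # (user_a, user_b) -> shared items
    for u, v in combinations(sorted(items_of), 2):
        shared = len(items_of[u] & items_of[v])
        if shared:
            proj[(u, v)] = shared
    return proj

# Toy interaction data: (user, item) pairs
edges = [("u1", "i1"), ("u1", "i2"), ("u2", "i2"), ("u3", "i3")]
print(one_mode_projection(edges))  # {('u1', 'u2'): 1}
```

The symmetric item-side projection is obtained the same way by swapping the roles of users and items in the edge list.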

Can Adversarial Network Attack be Defended?

no code implementations · 11 Mar 2019 · Jinyin Chen, Yangyang Wu, Xiang Lin, Qi Xuan

In this paper, we are interested in the possibility of defending against adversarial attacks on networks, and propose defense strategies for GNNs against such attacks.

Social and Information Networks · Physics and Society

A Machine Learning-Based Detection Technique for Optical Fiber Nonlinearity Mitigation

no code implementations · 27 Feb 2019 · Abdelkerim Amari, Xiang Lin, Octavia A. Dobre, Ramachandran Venkatesan, Alex Alvarado

In this case, digital back propagation compensates for the deterministic nonlinearity and the Parzen window deals with the stochastic nonlinear signal-noise interactions, which are not taken into account by digital back propagation.

BIG-bench Machine Learning · General Classification
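The Parzen-window detection step can be sketched generically: estimate a kernel density per class from training samples and assign a received value to the class with the highest density. The 1-D Gaussian kernel, the bandwidth `h`, and the function names are simplifying assumptions; the paper applies this idea to received constellation points after digital back propagation.

```python
import math

def parzen_density(x, samples, h):
    """Parzen-window (kernel density) estimate at x using a Gaussian
    kernel with bandwidth h."""
    return sum(math.exp(-((x - s) / h) ** 2 / 2) for s in samples) / (
        len(samples) * h * math.sqrt(2 * math.pi))

def parzen_classify(x, class_samples, h=0.5):
    """Assign x to the class whose estimated density at x is highest.
    class_samples maps a class label to its training observations."""
    return max(class_samples,
               key=lambda c: parzen_density(x, class_samples[c], h))

# Toy 1-D "constellation": noisy observations of symbols -1 and +1
classes = {"-1": [-1.1, -0.9, -1.0], "+1": [0.9, 1.1, 1.0]}
print(parzen_classify(0.8, classes))  # "+1"
```

Unlike a fixed decision threshold, the density-based boundary adapts to asymmetric noise clouds, which is what makes it useful for residual stochastic signal-noise interactions.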
