Search Results for author: Tianyu Liu

Found 87 papers, 48 papers with code

An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models

1 code implementation 11 Mar 2024 Liang Chen, Haozhe Zhao, Tianyu Liu, Shuai Bai, Junyang Lin, Chang Zhou, Baobao Chang

To this end, we introduce FastV, a versatile plug-and-play method designed to optimize computational efficiency by learning adaptive attention patterns in early layers and pruning visual tokens in subsequent ones.
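
As a rough illustration of the pruning step described above (a minimal sketch, not the released FastV implementation): rank the visual tokens by the attention they receive in an early layer and keep only the top fraction for all later layers. The array shapes, the head-averaged attention input, and keep_ratio=0.5 are simplifying assumptions.

```python
import numpy as np

def prune_visual_tokens(hidden, attn, visual_idx, keep_ratio=0.5):
    """Keep only the most-attended visual tokens after an early layer.

    hidden:     (seq_len, dim) hidden states at that layer
    attn:       (seq_len, seq_len) attention weights averaged over heads
    visual_idx: positions of the visual (image) tokens in the sequence
    """
    # Score each visual token by the average attention it receives.
    scores = attn[:, visual_idx].mean(axis=0)
    n_keep = max(1, int(len(visual_idx) * keep_ratio))
    keep_visual = np.array(visual_idx)[np.argsort(scores)[::-1][:n_keep]]

    # Text tokens are always kept; dropped visual tokens never reach later layers.
    keep = sorted((set(range(hidden.shape[0])) - set(visual_idx)) | set(keep_visual.tolist()))
    return hidden[keep], keep

# Toy usage: 4 text tokens followed by 8 visual tokens.
rng = np.random.default_rng(0)
hidden = rng.normal(size=(12, 16))
attn = rng.random((12, 12)); attn /= attn.sum(axis=-1, keepdims=True)
pruned, kept = prune_visual_tokens(hidden, attn, visual_idx=list(range(4, 12)))
print(pruned.shape, kept)
```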

Computational Efficiency Video Understanding

PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain

1 code implementation 21 Feb 2024 Liang Chen, Yichi Zhang, Shuhuai Ren, Haozhe Zhao, Zefan Cai, Yuchi Wang, Peiyi Wang, Xiangdi Meng, Tianyu Liu, Baobao Chang

To address this, we introduce Embodied-Instruction-Evolution (EIE), an automatic framework for synthesizing instruction tuning examples in multimodal embodied environments.

Autonomous Driving Decision Making

Rectify the Regression Bias in Long-Tailed Object Detection

no code implementations 29 Jan 2024 Ke Zhu, Minghao Fu, Jie Shao, Tianyu Liu, Jianxin Wu

While existing methods fail to handle the regression bias, this paper hypothesizes that the class-specific regression head for rare classes is its main cause.

Long-tailed Object Detection Object +3

Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding

1 code implementation 15 Jan 2024 Heming Xia, Zhe Yang, Qingxiu Dong, Peiyi Wang, Yongqi Li, Tao Ge, Tianyu Liu, Wenjie Li, Zhifang Sui

To mitigate the high inference latency stemming from autoregressive decoding in Large Language Models (LLMs), Speculative Decoding has emerged as a novel decoding paradigm for LLM inference.
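
A minimal sketch of the draft-then-verify loop the survey covers, in its greedy form (the lossless rejection-sampling variant is more involved); draft_next and target_next are hypothetical stand-ins for a small drafter and the large target model.

```python
def speculative_generate(prefix, draft_next, target_next, gamma=4, max_new=16):
    """Greedy speculative decoding: the draft model proposes gamma tokens,
    the target model checks them and keeps the longest agreeing prefix,
    then contributes its own token at the first disagreement."""
    out = list(prefix)
    while len(out) - len(prefix) < max_new:
        # 1) Draft gamma tokens autoregressively with the cheap model.
        proposal, ctx = [], list(out)
        for _ in range(gamma):
            t = draft_next(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2) Verify with the target model (a single parallel forward pass in
        #    practice; emulated here position by position).
        accepted = []
        for i in range(gamma):
            t_target = target_next(out + accepted)
            if t_target == proposal[i]:
                accepted.append(proposal[i])
            else:
                accepted.append(t_target)  # correction token from the target
                break
        out.extend(accepted)
    return out[:len(prefix) + max_new]

# Toy usage with integer "models": the target always emits last + 1,
# the drafter agrees except when the context length is a multiple of 5.
target_next = lambda ctx: ctx[-1] + 1
draft_next = lambda ctx: ctx[-1] + 1 if len(ctx) % 5 else ctx[-1] + 2
print(speculative_generate([0], draft_next, target_next, max_new=10))
```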

Language Modelling Large Language Model

Multiuser Resource Allocation for Semantic-Relay-Aided Text Transmissions

no code implementations 12 Nov 2023 Zeyang Hu, Tianyu Liu, Changsheng You, Zhaohui Yang, Mingzhe Chen

Thus, it has great potential to improve the spectrum efficiency of conventional wireless systems with bit transmissions, especially in low signal-to-noise ratio (SNR) and small bandwidth regions.

Formal Aspects of Language Modeling

no code implementations 7 Nov 2023 Ryan Cotterell, Anej Svete, Clara Meister, Tianyu Liu, Li Du

Large language models have become one of the most commonly deployed NLP inventions.

Language Modelling

Interaction Screening and Pseudolikelihood Approaches for Tensor Learning in Ising Models

no code implementations 20 Oct 2023 Tianyu Liu, Somabha Mukherjee

In this paper, we study two well known methods of Ising structure learning, namely the pseudolikelihood approach and the interaction screening approach, in the context of tensor recovery in $k$-spin Ising models.
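
For orientation, a sketch of the pseudolikelihood objective in the standard pairwise case; the paper studies the $k$-spin (tensor) generalization, where the interaction matrix below becomes a symmetric tensor.

```python
import numpy as np

def neg_log_pseudolikelihood(J, X):
    """Average negative log-pseudolikelihood of a pairwise Ising model.

    J: (p, p) symmetric interaction matrix with zero diagonal
    X: (n, p) observed spin configurations with entries in {-1, +1}
    Uses P(x_i | x_{-i}) = exp(x_i * m_i) / (2 cosh(m_i)), m_i = sum_j J_ij x_j.
    """
    M = X @ J                                   # (n, p) local fields
    return np.mean(-X * M + np.log(2.0 * np.cosh(M)))

# Toy usage: three spins with a single interaction between sites 0 and 1.
J = np.zeros((3, 3)); J[0, 1] = J[1, 0] = 0.8
X = np.array([[1, 1, -1], [-1, -1, 1], [1, 1, 1]])
print(neg_log_pseudolikelihood(J, X))
```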

MuSe-GNN: Learning Unified Gene Representation From Multimodal Biological Graph Data

1 code implementation NeurIPS 2023 Tianyu Liu, Yuge Wang, Rex Ying, Hongyu Zhao

Discovering genes with similar functions across diverse biomedical contexts poses a significant challenge in gene representation learning due to data heterogeneity.

Benchmarking Contrastive Learning +1

PINN-based viscosity solution of HJB equation

no code implementations 18 Sep 2023 Tianyu Liu, Steven Ding, Jiarui Zhang, Liutao Zhou

This paper proposes a novel PINN-based viscosity solution for HJB equations.

Making Large Language Models Better Reasoners with Alignment

no code implementations 5 Sep 2023 Peiyi Wang, Lei Li, Liang Chen, Feifan Song, Binghuai Lin, Yunbo Cao, Tianyu Liu, Zhifang Sui

To address this problem, we introduce an Alignment Fine-Tuning (AFT) paradigm, which involves three steps: 1) fine-tuning LLMs with CoT training data; 2) generating multiple CoT responses for each question and categorizing them into positive and negative ones based on whether they achieve the correct answer; 3) calibrating the scores of positive and negative responses given by LLMs with a novel constraint alignment loss.
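
A loose sketch of step 2 plus a generic margin-style stand-in for step 3 (the paper's constraint alignment loss has its own specific form); sample_cot, extract_answer, and the margin value are hypothetical.

```python
def split_responses(question, gold_answer, sample_cot, extract_answer, k=8):
    """Step 2: sample k chain-of-thought responses and label each positive or
    negative according to whether its final answer matches the gold answer."""
    pos, neg = [], []
    for _ in range(k):
        cot = sample_cot(question)
        (pos if extract_answer(cot) == gold_answer else neg).append(cot)
    return pos, neg

def ranking_calibration_loss(pos_scores, neg_scores, margin=1.0):
    """Generic stand-in for step 3: push every positive response's score at
    least `margin` above every negative one's."""
    loss, pairs = 0.0, 0
    for sp in pos_scores:
        for sn in neg_scores:
            loss += max(0.0, margin - (sp - sn))
            pairs += 1
    return loss / max(pairs, 1)

# Toy usage with made-up sequence scores (e.g., mean log-probabilities).
print(ranking_calibration_loss([-0.9, -1.2], [-1.0, -2.5]))
```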

Induction Network: Audio-Visual Modality Gap-Bridging for Self-Supervised Sound Source Localization

1 code implementation 9 Aug 2023 Tianyu Liu, Peng Zhang, Wei Huang, Yufei Zha, Tao You, Yanning Zhang

By decoupling the gradients of visual and audio modalities, the discriminative visual representations of sound sources can be learned with the designed Induction Vector in a bootstrap manner, which also enables the audio modality to be aligned with the visual modality consistently.

Contrastive Learning

A Geometric Notion of Causal Probing

no code implementations 27 Jul 2023 Clément Guerner, Anej Svete, Tianyu Liu, Alexander Warstadt, Ryan Cotterell

The linear subspace hypothesis (Bolukbasi et al., 2016) states that, in a language model's representation space, all information about a concept such as verbal number is encoded in a linear subspace.
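
A small numeric illustration of what the hypothesis licenses: erasing a concept by projecting representations onto the orthogonal complement of a concept subspace (here a random stand-in for a learned one).

```python
import numpy as np

def erase_subspace(H, B):
    """Project the rows of H onto the orthogonal complement of span(B).

    H: (n, d) representations
    B: (d, k) matrix whose columns span the concept subspace
    """
    Q, _ = np.linalg.qr(B)             # orthonormal basis of the subspace
    P = np.eye(H.shape[1]) - Q @ Q.T   # projector onto the complement
    return H @ P

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 16))           # toy representations
B = rng.normal(size=(16, 2))           # stand-in for a learned concept subspace
H_erased = erase_subspace(H, B)
print(np.allclose(H_erased @ B, 0))    # True: nothing left along the subspace
```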

counterfactual Language Modelling

Car-Studio: Learning Car Radiance Fields from Single-View and Endless In-the-wild Images

1 code implementation 26 Jul 2023 Tianyu Liu, Hao Zhao, Yang Yu, Guyue Zhou, Ming Liu

However, previous studies learned from sequences in autonomous driving datasets, resulting in unsatisfactory blurring when the car is rotated in the simulator.

Autonomous Driving

Hexatagging: Projective Dependency Parsing as Tagging

1 code implementation 8 Jun 2023 Afra Amini, Tianyu Liu, Ryan Cotterell

We introduce a novel dependency parser, the hexatagger, that constructs dependency trees by tagging the words in a sentence with elements from a finite set of possible tags.

Computational Efficiency Dependency Parsing +2

Large Language Models are not Fair Evaluators

2 code implementations 29 May 2023 Peiyi Wang, Lei Li, Liang Chen, Zefan Cai, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, Zhifang Sui

In this paper, we uncover a systematic bias in the evaluation paradigm of adopting large language models (LLMs), e.g., GPT-4, as a referee to score and compare the quality of responses generated by candidate models.
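
A minimal mitigation in the spirit of the calibration strategies the paper motivates (not the paper's exact procedure): query the judge with both candidate orders and trust the verdict only when the two runs agree; judge is a hypothetical callable returning 'first' or 'second'.

```python
def balanced_verdict(judge, question, answer_a, answer_b):
    """Ask an LLM judge twice with the candidate order swapped to counter
    positional bias; a verdict that flips with position becomes a tie."""
    v1 = judge(question, answer_a, answer_b)   # 'first' or 'second'
    v2 = judge(question, answer_b, answer_a)
    if v1 == "first" and v2 == "second":
        return "A"
    if v1 == "second" and v2 == "first":
        return "B"
    return "tie"

# Toy usage with a judge that always prefers whichever answer is shown first.
biased_judge = lambda q, a, b: "first"
print(balanced_verdict(biased_judge, "q", "answer A", "answer B"))  # -> tie
```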

Language Modelling Large Language Model +1

ImageNetVC: Zero- and Few-Shot Visual Commonsense Evaluation on 1000 ImageNet Categories

1 code implementation 24 May 2023 Heming Xia, Qingxiu Dong, Lei Li, Jingjing Xu, Tianyu Liu, Ziwei Qin, Zhifang Sui

Recently, Large Language Models (LLMs) have been serving as general-purpose interfaces, posing a significant demand for comprehensive visual knowledge.

Common Sense Reasoning

Linear-Time Modeling of Linguistic Structure: An Order-Theoretic Perspective

no code implementations 24 May 2023 Tianyu Liu, Afra Amini, Mrinmaya Sachan, Ryan Cotterell

We show that these exhaustive comparisons can be avoided, and, moreover, the complexity of such tasks can be reduced to linear by casting the relation between tokens as a partial order over the string.

coreference-resolution Dependency Parsing +1

Denoising Bottleneck with Mutual Information Maximization for Video Multimodal Fusion

1 code implementation 24 May 2023 Shaoxiang Wu, Damai Dai, Ziwei Qin, Tianyu Liu, Binghuai Lin, Yunbo Cao, Zhifang Sui

However, unlike other image-text multimodal tasks, video has longer multimodal sequences with more redundancy and noise in both visual and audio modalities.

Denoising Multimodal Sentiment Analysis

DialogVCS: Robust Natural Language Understanding in Dialogue System Upgrade

no code implementations 24 May 2023 Zefan Cai, Xin Zheng, Tianyu Liu, Xu Wang, Haoran Meng, Jiaqi Han, Gang Yuan, Binghuai Lin, Baobao Chang, Yunbo Cao

With the constant updates of product dialogue systems, we need to retrain the natural language understanding (NLU) model as new data from real users is merged into the existing data accumulated in previous updates.

Intent Detection Multi-Label Classification +1

RepCL: Exploring Effective Representation for Continual Text Classification

no code implementations 12 May 2023 YiFan Song, Peiyi Wang, Dawei Zhu, Tianyu Liu, Zhifang Sui, Sujian Li

Continual learning (CL) aims to constantly learn new knowledge over time while avoiding catastrophic forgetting on old tasks.

Continual Learning Representation Learning +2

Enhancing Continual Relation Extraction via Classifier Decomposition

1 code implementation 8 May 2023 Heming Xia, Peiyi Wang, Tianyu Liu, Binghuai Lin, Yunbo Cao, Zhifang Sui

In this work, we point out that two typical biases exist after training with this vanilla strategy: classifier bias and representation bias, which cause the knowledge the model previously learned to be overshadowed.

Continual Relation Extraction Relation

DialogQAE: N-to-N Question Answer Pair Extraction from Customer Service Chatlog

no code implementations 14 Dec 2022 Xin Zheng, Tianyu Liu, Haoran Meng, Xu Wang, Yufan Jiang, Mengliang Rao, Binghuai Lin, Zhifang Sui, Yunbo Cao

Harvesting question-answer (QA) pairs from customer service chatlog in the wild is an efficient way to enrich the knowledge base for customer service chatbots in the cold start or continuous integration scenarios.

Retrieval

A Bilingual Parallel Corpus with Discourse Annotations

1 code implementation 26 Oct 2022 Yuchen Eleanor Jiang, Tianyu Liu, Shuming Ma, Dongdong Zhang, Mrinmaya Sachan, Ryan Cotterell

The BWB corpus consists of Chinese novels translated by experts into English, and the annotated test set is designed to probe the ability of machine translation systems to model various discourse phenomena.

Document Level Machine Translation Machine Translation +2

Autoregressive Structured Prediction with Language Models

1 code implementation 26 Oct 2022 Tianyu Liu, Yuchen Jiang, Nicholas Monath, Ryan Cotterell, Mrinmaya Sachan

Recent years have seen a paradigm shift in NLP towards using pretrained language models (PLMs) for a wide range of tasks.

 Ranked #1 on Relation Extraction on CoNLL04 (RE+ Micro F1 metric)

Named Entity Recognition Named Entity Recognition (NER) +2

Learning Robust Representations for Continual Relation Extraction via Adversarial Class Augmentation

1 code implementation 10 Oct 2022 Peiyi Wang, YiFan Song, Tianyu Liu, Binghuai Lin, Yunbo Cao, Sujian Li, Zhifang Sui

In this paper, through empirical studies we argue that this assumption may not hold, and an important reason for catastrophic forgetting is that the learned representations do not have good robustness against the appearance of analogous relations in the subsequent learning process.

Continual Relation Extraction Relation

A Structured Span Selector

1 code implementation NAACL 2022 Tianyu Liu, Yuchen Eleanor Jiang, Ryan Cotterell, Mrinmaya Sachan

Many natural language processing tasks, e.g., coreference resolution and semantic role labeling, require selecting text spans and making decisions about them.

coreference-resolution Inductive Bias +1

Robust Fine-tuning via Perturbation and Interpolation from In-batch Instances

1 code implementation 2 May 2022 Shoujie Tong, Qingxiu Dong, Damai Dai, YiFan Song, Tianyu Liu, Baobao Chang, Zhifang Sui

For each instance in a batch, we involve other instances in the same batch to interact with it.

A Two-Stream AMR-enhanced Model for Document-level Event Argument Extraction

1 code implementation NAACL 2022 Runxin Xu, Peiyi Wang, Tianyu Liu, Shuang Zeng, Baobao Chang, Zhifang Sui

In this paper, we focus on extracting event arguments from an entire document, which mainly faces two critical problems: a) the long-distance dependency between trigger and arguments over sentences; b) the distracting context towards an event in the document.

Document-level Event Extraction Event Argument Extraction +2

HPT: Hierarchy-aware Prompt Tuning for Hierarchical Text Classification

1 code implementation 28 Apr 2022 Zihan Wang, Peiyi Wang, Tianyu Liu, Binghuai Lin, Yunbo Cao, Zhifang Sui, Houfeng Wang

However, in this paradigm, there exists a huge gap between classification tasks with a sophisticated label hierarchy and the masked language model (MLM) pretraining tasks of PLMs, and thus the potential of PLMs cannot be fully tapped.

Language Modelling Multi-Label Classification +2

SmartSales: Sales Script Extraction and Analysis from Sales Chatlog

no code implementations 19 Apr 2022 Hua Liang, Tianyu Liu, Peiyi Wang, Mengliang Rao, Yunbo Cao

2) Customer objection response assists salespeople in identifying typical customer objections and the corresponding winning sales scripts, as well as searching for proper sales responses to a given customer objection.

Management

ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs

2 code implementations Findings (NAACL) 2022 Liang Chen, Peiyi Wang, Runxin Xu, Tianyu Liu, Zhifang Sui, Baobao Chang

As Abstract Meaning Representation (AMR) implicitly involves compound semantic annotations, we hypothesize auxiliary tasks which are semantically or formally related can better enhance AMR parsing.

Ranked #7 on AMR Parsing on LDC2020T02 (using extra training data)

AMR Parsing Dependency Parsing +1

An alternative paradigm of fault diagnosis in dynamic systems: orthogonal projection-based methods

no code implementations 16 Feb 2022 Steven X. Ding, Linlin Li, Tianyu Liu

In this paper, we propose a new paradigm of fault diagnosis in dynamic systems as an alternative to the well-established observer-based framework.

Fault Detection

Demystify Optimization and Generalization of Over-parameterized PAC-Bayesian Learning

no code implementations 4 Feb 2022 Wei Huang, Chunrui Liu, Yilan Chen, Tianyu Liu, Richard Yi Da Xu

In addition to being a pure generalization-bound analysis tool, the PAC-Bayesian bound can also be incorporated into an objective function to train a probabilistic neural network, making it a powerful and relevant framework that can numerically provide a tight generalization bound for supervised learning.
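
For reference, one commonly cited McAllester-style form of the bound mentioned above, evaluated numerically; exact constants differ across variants in the literature.

```python
import math

def pac_bayes_bound(emp_risk, kl, n, delta=0.05):
    """McAllester-style PAC-Bayes bound: with probability >= 1 - delta over an
    i.i.d. sample of size n, for every posterior rho,
        E_rho[L] <= E_rho[L_hat] + sqrt((KL(rho || pi) + ln(2 sqrt(n) / delta)) / (2n)).
    """
    return emp_risk + math.sqrt((kl + math.log(2.0 * math.sqrt(n) / delta)) / (2.0 * n))

# Toy numbers: 5% empirical risk, KL = 10 nats, 10,000 samples.
print(pac_bayes_bound(0.05, 10.0, 10_000))   # ~= 0.08
```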

Semi-supervised Implicit Scene Completion from Sparse LiDAR

1 code implementation 29 Nov 2021 Pengfei Li, Yongliang Shi, Tianyu Liu, Hao Zhao, Guyue Zhou, Ya-Qin Zhang

Recent advances show that semi-supervised implicit representation learning can be achieved through physical constraints like Eikonal equations.

Representation Learning

Cerberus Transformer: Joint Semantic, Affordance and Attribute Parsing

1 code implementation CVPR 2022 Xiaoxue Chen, Tianyu Liu, Hao Zhao, Guyue Zhou, Ya-Qin Zhang

Multi-task indoor scene understanding is widely considered as an intriguing formulation, as the affinity of different tasks may lead to improved performance.

Attribute Scene Understanding +2

Hierarchical Curriculum Learning for AMR Parsing

1 code implementation ACL 2022 Peiyi Wang, Liang Chen, Tianyu Liu, Damai Dai, Yunbo Cao, Baobao Chang, Zhifang Sui

Abstract Meaning Representation (AMR) parsing aims to translate sentences to semantic representation with a hierarchical structure, and is recently empowered by pretrained sequence-to-sequence models.

AMR Parsing Representation Learning

An Enhanced Span-based Decomposition Method for Few-Shot Sequence Labeling

1 code implementation NAACL 2022 Peiyi Wang, Runxin Xu, Tianyu Liu, Qingyu Zhou, Yunbo Cao, Baobao Chang, Zhifang Sui

Few-Shot Sequence Labeling (FSSL) is a canonical paradigm for the tagging models, e.g., named entity recognition and slot filling, to generalize on an emerging, resource-scarce domain.

Few-shot NER Meta-Learning +4

Behind the Scenes: An Exploration of Trigger Biases Problem in Few-Shot Event Classification

1 code implementation 29 Aug 2021 Peiyi Wang, Runxin Xu, Tianyu Liu, Damai Dai, Baobao Chang, Zhifang Sui

However, we find they suffer from trigger biases that signify the statistical homogeneity between some trigger words and target event types, which we summarize as trigger overlapping and trigger separability.

Explicit Interaction Network for Aspect Sentiment Triplet Extraction

no code implementations 21 Jun 2021 Peiyi Wang, Tianyu Liu, Damai Dai, Runxin Xu, Baobao Chang, Zhifang Sui

The table encoder extracts sentiment at the token-pair level, so that the compositional features between targets and opinions can be easily captured.

Aspect Sentiment Triplet Extraction Sentence +1

Decompose, Fuse and Generate: A Formation-Informed Method for Chinese Definition Generation

no code implementations NAACL 2021 Hua Zheng, Damai Dai, Lei Li, Tianyu Liu, Zhifang Sui, Baobao Chang, Yang Liu

In this paper, we tackle the task of Definition Generation (DG) in Chinese, which aims at automatically generating a definition for a word.

Document-level Event Extraction via Heterogeneous Graph-based Interaction Model with a Tracker

2 code implementations ACL 2021 Runxin Xu, Tianyu Liu, Lei Li, Baobao Chang

Existing methods are not effective due to two challenges of this task: a) the target event arguments are scattered across sentences; b) the correlation among events in a document is non-trivial to model.

Document-level Event Extraction Event Extraction

Premise-based Multimodal Reasoning: Conditional Inference on Joint Textual and Visual Clues

no code implementations ACL 2022 Qingxiu Dong, Ziwei Qin, Heming Xia, Tian Feng, Shoujie Tong, Haoran Meng, Lin Xu, Weidong Zhan, Sujian Li, Zhongyu Wei, Tianyu Liu, Zhifang Sui

It is common practice for recent works in vision-language cross-modal reasoning to adopt a binary or multi-choice classification formulation that takes as input a set of source image(s) and a textual query.

Multimodal Reasoning Natural Language Inference +1

An Intelligent Model for Solving Manpower Scheduling Problems

1 code implementation 7 May 2021 Lingyu Zhang, Tianyu Liu, Yunhai Wang

In addition to the numerical solution of the manpower scheduling problem, this paper also studies the algorithm for scheduling task list generation and the method of displaying scheduling results.

Management Scheduling

Apply Artificial Neural Network to Solving Manpower Scheduling Problem

1 code implementation 7 May 2021 Tianyu Liu, Lingyu Zhang

This paper proposes a new model combined with deep learning to solve the multi-shift manpower scheduling problem based on the existing research.

Scheduling Time Series +1

Automatic Learning to Detect Concept Drift

no code implementations 4 May 2021 Hang Yu, Tianyu Liu, Jie Lu, Guangquan Zhang

Many methods have been proposed to detect concept drift, i.e., a change in the distribution of streaming data, because concept drift causes a decrease in the prediction accuracy of algorithms.
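
Not the paper's meta-learning detector, just a generic window-based baseline to make the task concrete: flag drift when the error distribution of the current window differs significantly from a reference window.

```python
import random
from scipy.stats import ks_2samp

def drift_detected(reference_errors, current_errors, alpha=0.01):
    """Two-sample Kolmogorov-Smirnov test between an old and a new window of
    prediction errors; a small p-value signals a distribution change."""
    result = ks_2samp(reference_errors, current_errors)
    return result.pvalue < alpha

# Toy usage: errors jump from around 0.1 to around 0.4 after drift.
random.seed(0)
reference = [random.gauss(0.1, 0.02) for _ in range(200)]
current = [random.gauss(0.4, 0.02) for _ in range(200)]
print(drift_detected(reference, current))   # True
```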

Active Learning Meta-Learning

A Token-level Reference-free Hallucination Detection Benchmark for Free-form Text Generation

2 code implementations ACL 2022 Tianyu Liu, Yizhe Zhang, Chris Brockett, Yi Mao, Zhifang Sui, Weizhu Chen, Bill Dolan

Large pretrained generative models like GPT-3 often suffer from hallucinating non-existent or incorrect content, which undermines their potential merits in real applications.

Hallucination Sentence +1

Integrating Fast Regional Optimization into Sampling-based Kinodynamic Planning for Multirotor Flight

1 code implementation 9 Mar 2021 Hongkai Ye, Tianyu Liu, Chao Xu, Fei Gao

For real-time multirotor kinodynamic motion planning, the efficiency of sampling-based methods is usually hindered by difficult-to-sample homotopy classes like narrow passages.

Motion Planning Robotics

Application of the unified control and detection framework to detecting stealthy integrity cyber-attacks on feedback control systems

no code implementations 27 Feb 2021 Steven X. Ding, Linlin Li, Dong Zhao, Chris Louen, Tianyu Liu

It is demonstrated, in the unified framework of control and detection, that all kernel attacks can be structurally detected when not only the observer-based residual, but also the control signal based residual signals are generated and used for the detection purpose.

Towards Faithfulness in Open Domain Table-to-text Generation from an Entity-centric View

1 code implementation 17 Feb 2021 Tianyu Liu, Xin Zheng, Baobao Chang, Zhifang Sui

In open domain table-to-text generation, we notice that the unfaithful generation usually contains hallucinated content which can not be aligned to any input table record.

Few-Shot Learning Table-to-Text Generation

First Target and Opinion then Polarity: Enhancing Target-opinion Correlation for Aspect Sentiment Triplet Extraction

no code implementations 17 Feb 2021 Lianzhe Huang, Peiyi Wang, Sujian Li, Tianyu Liu, Xiaodong Zhang, Zhicong Cheng, Dawei Yin, Houfeng Wang

Aspect Sentiment Triplet Extraction (ASTE) aims to extract triplets from a sentence, including target entities, associated sentiment polarities, and opinion spans which rationalize the polarities.

Aspect Sentiment Triplet Extraction Sentence

PAC-Bayes Bounds for Meta-learning with Data-Dependent Prior

1 code implementation 7 Feb 2021 Tianyu Liu, Jie Lu, Zheng Yan, Guangquan Zhang

By leveraging experience from previous tasks, meta-learning algorithms can achieve effective fast adaptation ability when encountering new tasks.

Meta-Learning

An Anchor-Based Automatic Evaluation Metric for Document Summarization

no code implementations COLING 2020 Kexiang Wang, Tianyu Liu, Baobao Chang, Zhifang Sui

The widespread adoption of reference-based automatic evaluation metrics such as ROUGE has promoted the development of document summarization.

Document Summarization

Discriminatively-Tuned Generative Classifiers for Robust Natural Language Inference

1 code implementation EMNLP 2020 Xiaoan Ding, Tianyu Liu, Baobao Chang, Zhifang Sui, Kevin Gimpel

We explore training objectives for discriminative fine-tuning of our generative classifiers, showing improvements over log loss fine-tuning from prior work.

Natural Language Inference

An Empirical Study on Model-agnostic Debiasing Strategies for Robust Natural Language Inference

1 code implementation CONLL 2020 Tianyu Liu, Xin Zheng, Xiaoan Ding, Baobao Chang, Zhifang Sui

Prior work on natural language inference (NLI) debiasing mainly targets one or a few known biases while not necessarily making the models more robust.

Data Augmentation Natural Language Inference

An Exploration of Arbitrary-Order Sequence Labeling via Energy-Based Inference Networks

1 code implementation EMNLP 2020 Lifu Tu, Tianyu Liu, Kevin Gimpel

Many tasks in natural language processing involve predicting structured outputs, e.g., sequence labeling, semantic role labeling, parsing, and machine translation.

Machine Translation Representation Learning +2

HypoNLI: Exploring the Artificial Patterns of Hypothesis-only Bias in Natural Language Inference

no code implementations LREC 2020 Tianyu Liu, Xin Zheng, Baobao Chang, Zhifang Sui

Many recent studies have shown that for models trained on datasets for natural language inference (NLI), it is possible to make correct predictions by merely looking at the hypothesis while completely ignoring the premise.

Natural Language Inference

Table-to-Text Natural Language Generation with Unseen Schemas

no code implementations 9 Nov 2019 Tianyu Liu, Wei Wei, William Yang Wang

In this paper, we propose the new task of table-to-text NLG with unseen schemas, which specifically aims to test the generalization of NLG for input tables with attribute types that never appear during training.

Attribute Text Generation

Towards Improving Neural Named Entity Recognition with Gazetteers

1 code implementation ACL 2019 Tianyu Liu, Jin-Ge Yao, Chin-Yew Lin

Most of the recently proposed neural models for named entity recognition have been purely data-driven, with a strong emphasis on getting rid of the efforts for collecting external resources or designing hand-crafted features.
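
A toy illustration of the general idea of injecting gazetteer knowledge into a tagger (not the paper's specific architecture): give each token a BIO-style feature marking whether it starts or continues a gazetteer match.

```python
def gazetteer_features(tokens, gazetteer, max_len=4):
    """Return a BIO-style gazetteer-match feature for every token.

    tokens:    list of token strings
    gazetteer: set of lower-cased entity strings, e.g. {"new york"}
    """
    feats = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        matched = 0
        # Prefer the longest gazetteer entry that starts at position i.
        for span in range(min(max_len, len(tokens) - i), 0, -1):
            if " ".join(t.lower() for t in tokens[i:i + span]) in gazetteer:
                matched = span
                break
        if matched:
            feats[i] = "B-GAZ"
            for j in range(i + 1, i + matched):
                feats[j] = "I-GAZ"
            i += matched
        else:
            i += 1
    return feats

gaz = {"new york", "united nations"}
print(gazetteer_features("He visited New York last week".split(), gaz))
# ['O', 'O', 'B-GAZ', 'I-GAZ', 'O', 'O']
```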

Ranked #14 on Named Entity Recognition (NER) on Ontonotes v5 (English) (using extra training data)

named-entity-recognition Named Entity Recognition +1

Enhancing Topic-to-Essay Generation with External Commonsense Knowledge

no code implementations ACL 2019 Pengcheng Yang, Lei Li, Fuli Luo, Tianyu Liu, Xu Sun

Experiments show that with external commonsense knowledge and adversarial training, the generated essays are more novel, diverse, and topic-consistent than existing methods in terms of both automatic and human evaluation.

Concept-To-Text Generation

MAAM: A Morphology-Aware Alignment Model for Unsupervised Bilingual Lexicon Induction

no code implementations ACL 2019 Pengcheng Yang, Fuli Luo, Peng Chen, Tianyu Liu, Xu Sun

The task of unsupervised bilingual lexicon induction (UBLI) aims to induce word translations from monolingual corpora in two languages.

Bilingual Lexicon Induction Denoising +2

Towards Comprehensive Description Generation from Factual Attribute-value Tables

no code implementations ACL 2019 Tianyu Liu, Fuli Luo, Pengcheng Yang, Wei Wu, Baobao Chang, Zhifang Sui

To relieve these problems, we first propose a force attention (FA) method that encourages the generator to pay more attention to uncovered attributes, so as to avoid missing potential key attributes.

Attribute

Learning to Control the Fine-grained Sentiment for Story Ending Generation

no code implementations ACL 2019 Fuli Luo, Damai Dai, Pengcheng Yang, Tianyu Liu, Baobao Chang, Zhifang Sui, Xu Sun

Therefore, we propose a generic and novel framework which consists of a sentiment analyzer and a sentimental generator, respectively addressing the two challenges.

Text Generation

A Novel Dual-Lidar Calibration Algorithm Using Planar Surfaces

no code implementations 27 Apr 2019 Jianhao Jiao, Qinghai Liao, Yilong Zhu, Tianyu Liu, Yang Yu, Rui Fan, Lujia Wang, Ming Liu

Multiple lidars are prevalently used on mobile vehicles for rendering a broad view to enhance the performance of localization and perception systems.

Translation

Phrase-level Self-Attention Networks for Universal Sentence Encoding

no code implementations EMNLP 2018 Wei Wu, Houfeng Wang, Tianyu Liu, Shuming Ma

As a result, the memory consumption can be reduced because the self-attention is performed at the phrase level instead of the sentence level.

Multi-class Classification Natural Language Inference +4

Incorporating Glosses into Neural Word Sense Disambiguation

1 code implementation ACL 2018 Fuli Luo, Tianyu Liu, Qiaolin Xia, Baobao Chang, Zhifang Sui

GAS models the semantic relationship between the context and the gloss in an improved memory network framework, which breaks the barriers of the previous supervised methods and knowledge-based methods.

Word Sense Disambiguation

Table-to-text Generation by Structure-aware Seq2seq Learning

3 code implementations 27 Nov 2017 Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, Zhifang Sui

In the decoding phase, a dual attention mechanism, which contains word-level attention and field-level attention, is proposed to model the semantic relevance between the generated description and the table.
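
One plausible reading of the dual attention idea as a sketch (the paper's exact formulation may differ): compute word-level and field-level attention separately, then combine them by an elementwise product and renormalization.

```python
import numpy as np

def dual_attention(dec_state, word_keys, field_keys):
    """Combine word-level and field-level attention distributions.

    dec_state:  (d,) decoder hidden state
    word_keys:  (T, d) encoder representations of the table words
    field_keys: (T, d) representations of each word's field (attribute) name
    """
    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    a_word = softmax(word_keys @ dec_state)    # attention over table words
    a_field = softmax(field_keys @ dec_state)  # attention over their fields
    combined = a_word * a_field
    return combined / combined.sum()

rng = np.random.default_rng(0)
print(dual_attention(rng.normal(size=8),
                     rng.normal(size=(5, 8)),
                     rng.normal(size=(5, 8))))
```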

Table-to-Text Generation

A Soft-label Method for Noise-tolerant Distantly Supervised Relation Extraction

no code implementations EMNLP 2017 Tianyu Liu, Kexiang Wang, Baobao Chang, Zhifang Sui

Distantly supervised relation extraction inevitably suffers from the wrong-labeling problem because it heuristically labels relational facts with knowledge bases.

Relation Relation Extraction +1

Order-Planning Neural Text Generation From Structured Data

1 code implementation 1 Sep 2017 Lei Sha, Lili Mou, Tianyu Liu, Pascal Poupart, Sujian Li, Baobao Chang, Zhifang Sui

Generating texts from structured data (e.g., a table) is important for various natural language processing tasks such as question answering and dialog systems.

Question Answering Table-to-Text Generation

Towards Time-Aware Knowledge Graph Completion

no code implementations COLING 2016 Tingsong Jiang, Tianyu Liu, Tao Ge, Lei Sha, Baobao Chang, Sujian Li, Zhifang Sui

In this paper, we present a novel time-aware knowledge graph completion model that is able to predict links in a KG using both the existing facts and the temporal information of the facts.

Question Answering Relation Extraction +1