Search Results for author: Zhiyang Teng

Found 32 papers, 21 papers with code

How Well Do Text Embedding Models Understand Syntax?

1 code implementation • 14 Nov 2023 • Yan Zhang, Zhaopeng Feng, Zhiyang Teng, Zuozhu Liu, Haizhou Li

Text embedding models have significantly contributed to advancements in natural language processing by adeptly capturing semantic properties of textual data.

GLoRE: Evaluating Logical Reasoning of Large Language Models

1 code implementation • 13 Oct 2023 • Hanmeng Liu, Zhiyang Teng, Ruoxi Ning, Jian Liu, Qiji Zhou, Yue Zhang

Recently, large language models (LLMs), including notable models such as GPT-4 and burgeoning community models, have showcased significant general language understanding abilities.

Logical Reasoning Natural Language Understanding

Fast-DetectGPT: Efficient Zero-Shot Detection of Machine-Generated Text via Conditional Probability Curvature

1 code implementation • 8 Oct 2023 • Guangsheng Bao, Yanbin Zhao, Zhiyang Teng, Linyi Yang, Yue Zhang

Large language models (LLMs) have shown the ability to produce fluent and cogent content, presenting both productivity opportunities and societal risks.
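The "conditional probability curvature" in the title can be sketched roughly as: compare the passage's log-likelihood under a scoring model against the expected log-likelihood of tokens sampled from that model's own per-position distributions. The minimal numpy sketch below uses an analytic expectation; the function name and exact normalization are illustrative, not the paper's precise estimator:

```python
import numpy as np

def conditional_probability_curvature(log_probs, token_ids):
    # log_probs: [T, V] per-position log-probabilities from a scoring model
    # token_ids: [T] observed token ids of the passage
    T = len(token_ids)
    probs = np.exp(log_probs)
    # log-likelihood of the observed passage
    ll = log_probs[np.arange(T), token_ids].sum()
    # analytic mean and variance of the log-likelihood of tokens sampled
    # independently from the model's own distributions
    mu = (probs * log_probs).sum(axis=-1)
    var = (probs * log_probs ** 2).sum(axis=-1) - mu ** 2
    return (ll - mu.sum()) / np.sqrt(var.sum())

# toy distributions over a 3-word vocabulary at two positions
lp = np.log(np.array([[0.7, 0.2, 0.1],
                      [0.6, 0.3, 0.1]]))
print(conditional_probability_curvature(lp, np.array([0, 0])))  # most-likely tokens score high
```

Intuitively, machine-generated text tends to sit near the model's own expectation (high curvature), while human text scores lower.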

Multimodal Relation Extraction with Cross-Modal Retrieval and Synthesis

no code implementations • 25 May 2023 • Xuming Hu, Zhijiang Guo, Zhiyang Teng, Irwin King, Philip S. Yu

Multimodal relation extraction (MRE) is the task of identifying the semantic relationships between two entities based on the context of a sentence-image pair.

Cross-Modal Retrieval Relation Extraction +1

LogicLLM: Exploring Self-supervised Logic-enhanced Training for Large Language Models

1 code implementation • 23 May 2023 • Fangkai Jiao, Zhiyang Teng, Shafiq Joty, Bosheng Ding, Aixin Sun, Zhengyuan Liu, Nancy F. Chen

Existing efforts to improve the logical reasoning ability of language models have predominantly relied on supervised fine-tuning, hindering generalization to new domains and/or tasks.

Logical Reasoning

LogiCoT: Logical Chain-of-Thought Instruction-Tuning

1 code implementation • 20 May 2023 • Hanmeng Liu, Zhiyang Teng, Leyang Cui, Chaoli Zhang, Qiji Zhou, Yue Zhang

LogiCoT serves as an instruction set for teaching models logical reasoning and eliciting general reasoning skills.

Logical Reasoning Text Generation

Token-Level Fitting Issues of Seq2seq Models

no code implementations • 8 May 2023 • Guangsheng Bao, Zhiyang Teng, Yue Zhang

Sequence-to-sequence (seq2seq) models have been widely used for natural language processing, computer vision, and other deep learning tasks.

Language Modelling

Target-Side Augmentation for Document-Level Machine Translation

1 code implementation • 8 May 2023 • Guangsheng Bao, Zhiyang Teng, Yue Zhang

Document-level machine translation faces the challenge of data sparsity due to its long input length and a small amount of training data, increasing the risk of learning spurious patterns.

Data Augmentation Document Level Machine Translation +2

Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4

1 code implementation • 7 Apr 2023 • Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, Yue Zhang

With the release of Generative Pretrained Transformer 4 (GPT-4), highlighted as "advanced" at reasoning tasks, we are eager to learn how GPT-4 performs on various logical reasoning tasks.

Logical Reasoning Natural Language Inference +2

METS-CoV: A Dataset of Medical Entity and Targeted Sentiment on COVID-19 Related Tweets

1 code implementation • 28 Sep 2022 • Peilin Zhou, Zeqiang Wang, Dading Chong, Zhijiang Guo, Yining Hua, Zichang Su, Zhiyang Teng, Jiageng Wu, Jie Yang

To further investigate tweet users' attitudes toward specific entities, 4 types of entities (Person, Organization, Drug, and Vaccine) are selected and annotated with user sentiments, resulting in a targeted sentiment dataset with 9,101 entities (in 5,278 tweets).

Epidemiology named-entity-recognition +3

YATO: Yet Another deep learning based Text analysis Open toolkit

1 code implementation • 28 Sep 2022 • Zeqiang Wang, Yile Wang, Jiageng Wu, Zhiyang Teng, Jie Yang

Designed in a hierarchical structure, YATO supports free combinations of three types of widely used features, including 1) traditional neural networks (CNN, RNN, etc.).

Pre-Training a Graph Recurrent Network for Language Representation

1 code implementation • 8 Sep 2022 • Yile Wang, Linyi Yang, Zhiyang Teng, Ming Zhou, Yue Zhang

Transformer-based pre-trained models have made significant advances in recent years, becoming one of the most important backbones in natural language processing.

Language Modelling text-classification +1

G-Transformer for Document-level Machine Translation

1 code implementation • ACL 2021 • Guangsheng Bao, Yue Zhang, Zhiyang Teng, Boxing Chen, Weihua Luo

However, studies show that when the translation unit is further enlarged to a whole document, supervised training of Transformer can fail.

Document Level Machine Translation Inductive Bias +2

SemGloVe: Semantic Co-occurrences for GloVe from BERT

no code implementations • 30 Dec 2020 • Leilei Gan, Zhiyang Teng, Yue Zhang, Linchao Zhu, Fei Wu, Yi Yang

In this paper, we propose SemGloVe, which distills semantic co-occurrences from BERT into static GloVe word embeddings.

Language Modelling Word Embeddings +1
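The GloVe objective that such BERT-derived co-occurrence counts would plug into is well known; as context, here is a minimal numpy evaluation of GloVe's weighted least-squares loss (variable names are illustrative, and only the co-occurrence matrix X differs between window-based GloVe and a distilled variant):

```python
import numpy as np

def glove_loss(W, W_ctx, b, b_ctx, X, x_max=100.0, alpha=0.75):
    # W, W_ctx: [V, d] word and context embeddings; b, b_ctx: [V] biases
    # X: [V, V] co-occurrence counts (window-based in GloVe;
    # model-derived in a distilled variant)
    mask = X > 0
    f = np.where(X < x_max, (X / x_max) ** alpha, 1.0)  # GloVe weighting function
    diff = W @ W_ctx.T + b[:, None] + b_ctx[None, :] - np.log(np.where(mask, X, 1.0))
    return (f * diff ** 2)[mask].sum()
```

The loss is zero exactly when every dot product plus biases matches the log co-occurrence count, which is the fit GloVe training drives toward.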

End-to-End Chinese Parsing Exploiting Lexicons

no code implementations • 8 Dec 2020 • Yuan Zhang, Zhiyang Teng, Yue Zhang

Chinese parsing has traditionally been solved by a three-stage pipeline consisting of word segmentation, part-of-speech tagging, and dependency parsing modules.

Dependency Parsing Graph Attention +2

Dialogue State Induction Using Neural Latent Variable Models

1 code implementation • 13 Aug 2020 • Qingkai Min, Libo Qin, Zhiyang Teng, Xiao Liu, Yue Zhang

Dialogue state modules are a useful component in a task-oriented dialogue system.

Densely Connected Graph Convolutional Networks for Graph-to-Sequence Learning

1 code implementation • TACL 2019 • Zhijiang Guo, Yan Zhang, Zhiyang Teng, Wei Lu

We focus on graph-to-sequence learning, which can be framed as transducing graph structures to sequences for text generation.

Graph-to-Sequence Machine Translation +2
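The "densely connected" idea mirrors DenseNet: each graph-convolution layer receives the concatenation of all earlier layers' outputs, so deep stacks keep direct access to low-level node features. A minimal numpy sketch, where the normalization and dimensions are illustrative rather than the paper's exact architecture:

```python
import numpy as np

def dense_gcn(A, H0, weights):
    # A: [N, N] adjacency matrix with self-loops; H0: [N, d0] node features
    # weights[l]: [d0 + l*d, d] projection for layer l
    A_hat = A / A.sum(axis=1, keepdims=True)    # simple row normalization
    outputs = [H0]
    for W in weights:
        H_in = np.concatenate(outputs, axis=1)  # dense connections to all
        H = np.maximum(A_hat @ H_in @ W, 0.0)   # prior layers; GCN + ReLU
        outputs.append(H)
    return np.concatenate(outputs, axis=1)      # final node representations
```

A sequence decoder would then attend over these node representations to generate text.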

Two Local Models for Neural Constituent Parsing

1 code implementation • COLING 2018 • Zhiyang Teng, Yue Zhang

Non-local features have been exploited by syntactic parsers to capture dependencies between sub-structures of the output.

Constituency Parsing

Combining Discrete and Neural Features for Sequence Labeling

1 code implementation • 24 Aug 2017 • Jie Yang, Zhiyang Teng, Meishan Zhang, Yue Zhang

Our results on standard benchmarks show that state-of-the-art neural models can give accuracies comparable to the best discrete models in the literature for most tasks, and that combining discrete and neural features consistently yields better results.

named-entity-recognition Named Entity Recognition +2
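One common way to combine the two feature types, consistent with the title, is to concatenate a sparse indicator vector of discrete features with a dense neural representation before the label-scoring layer. A hedged sketch (not necessarily the paper's exact architecture):

```python
import numpy as np

def combined_features(discrete_onehot, neural_hidden):
    # discrete_onehot: [T, F] binary indicator features per token
    # (capitalization, prefixes, gazetteer hits, ...)
    # neural_hidden: [T, H] dense states, e.g. from a BiLSTM
    return np.concatenate([discrete_onehot, neural_hidden], axis=1)

def label_scores(features, W, b):
    # linear layer over the combined features: one score per label
    return features @ W + b
```

A CRF or softmax over these scores then produces the sequence labels, letting the model fall back on reliable discrete cues when neural evidence is weak.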

Head-Lexicalized Bidirectional Tree LSTMs

no code implementations • TACL 2017 • Zhiyang Teng, Yue Zhang

In this paper, we propose a method for automatic head-lexicalization for tree-structure LSTMs, propagating head words from leaf nodes to every constituent node.

Language Modelling Relation Extraction +1
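The head-propagation step described here can be sketched as a simple bottom-up recursion, assuming each constituent knows which child is its head (the tree encoding and head-choice rule below are illustrative; the paper learns this inside a tree LSTM):

```python
def propagate_heads(node):
    # node is ("word",) for a leaf, or (head_child_index, [children]) for a
    # constituent; returns a dict annotating every node with its head word
    if len(node) == 1:                 # leaf: the word is its own head
        return {"head": node[0], "children": []}
    head_idx, children = node
    annotated = [propagate_heads(c) for c in children]
    # a constituent inherits the head word of its designated head child
    return {"head": annotated[head_idx]["head"], "children": annotated}

# "(the cat) sat": the NP's head is "cat"; the sentence's head is the verb
tree = (1, [(1, [("the",), ("cat",)]), ("sat",)])
```

Running `propagate_heads(tree)` labels the root with "sat" and the NP with "cat", so every constituent node carries a concrete lexical head.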

Measuring the Information Content of Financial News

no code implementations • COLING 2016 • Ching-Yun Chang, Yue Zhang, Zhiyang Teng, Zahn Bozanic, Bin Ke

Measuring the information content of news text is useful for decision makers in their investments since news information can influence the intrinsic values of companies.

Bidirectional Tree-Structured LSTM with Head Lexicalization

1 code implementation • 21 Nov 2016 • Zhiyang Teng, Yue Zhang

In this paper, we propose a method for automatic head-lexicalization for tree-structure LSTMs, propagating head words from leaf nodes to every constituent node.

General Classification
