1 code implementation • ACL 2022 • Chenhua Chen, Zhiyang Teng, Zhongqing Wang, Yue Zhang
Dependency trees have been intensively used with graph neural networks for aspect-based sentiment classification.
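As background for this line of work, here is a minimal sketch of the common pattern of running a graph convolution over a dependency-tree adjacency matrix; it illustrates the general approach, not this paper's specific model:

```python
import torch
import torch.nn as nn

class DepGCNLayer(nn.Module):
    """One graph-convolution layer over a dependency-tree adjacency matrix."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, h, adj):
        # h: (batch, seq_len, dim) token states; adj: (batch, seq_len, seq_len)
        # 0/1 matrix with one edge per dependency arc, plus self-loops.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)  # node degrees
        return torch.relu(self.linear(adj @ h) / deg)     # average over neighbors
```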
no code implementations • EMNLP 2020 • Chenhua Chen, Zhiyang Teng, Yue Zhang
Aspect-level sentiment analysis aims to recognize the sentiment polarity of an aspect or a target in a comment.
no code implementations • 28 Apr 2024 • Hanmeng Liu, Zhiyang Teng, Chaoli Zhang, Yue Zhang
Chain-of-Thought (CoT) prompting has emerged as a pivotal technique for strengthening the reasoning capabilities of language models.
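For context, CoT prompting simply asks the model to produce intermediate reasoning steps before its final answer. A minimal zero-shot illustration (the prompt wording and example are ours, not the paper's):

```python
# Zero-shot chain-of-thought: append a reasoning trigger to the question.
question = "If a train travels 60 km in 40 minutes, what is its speed in km/h?"
cot_prompt = question + "\nLet's think step by step."
# The model is expected to emit intermediate steps before the answer, e.g.:
# "40 minutes is 2/3 of an hour; 60 / (2/3) = 90, so the speed is 90 km/h."
```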
no code implementations • 16 Jan 2024 • Chongzhi Zhang, Mingyuan Zhang, Zhiyang Teng, Jiayi Li, Xizhou Zhu, Lewei Lu, Ziwei Liu, Aixin Sun
Our method involves the direct generation of a global 2D temporal map via a conditional denoising diffusion process, based on the input video and language query.
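A rough sketch of the sampling idea, where entry (i, j) of the 2D map scores the video span from clip i to clip j; the denoiser interface and the simplified update rule below are our assumptions, not the paper's implementation, which would use a proper diffusion noise schedule:

```python
import torch

@torch.no_grad()
def sample_temporal_map(denoiser, video_feat, query_feat, steps=50, size=64):
    x = torch.randn(1, 1, size, size)                     # start from pure noise
    for t in reversed(range(steps)):
        t_idx = torch.full((1,), t)
        eps = denoiser(x, t_idx, video_feat, query_feat)  # noise prediction,
        x = x - eps / steps                               # conditioned on video
                                                          # + query; crude Euler
                                                          # step, not real DDPM
    return x.sigmoid()                                    # span scores in (0, 1)
```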
no code implementations • 27 Dec 2023 • Chenyang Qiu, Guoshun Nan, Tianyu Xiong, Wendi Deng, Di Wang, Zhiyang Teng, Lijuan Sun, Qimei Cui, Xiaofeng Tao
This finding motivates us to present a novel method that aims to harden GCNs by automatically learning Latent Homophilic Structures over heterophilic graphs.
Ranked #5 on Node Classification on Actor
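One plausible reading of the latent-structure idea in code, connecting each node to its most feature-similar peers; the paper learns this structure end-to-end, so the fixed kNN rule below is only an illustration:

```python
import torch
import torch.nn.functional as F

def latent_homophilic_adj(x, k=10):
    """x: (num_nodes, dim) node features -> symmetric 0/1 adjacency
    linking each node to its k nearest neighbors in feature space."""
    xn = F.normalize(x, dim=-1)
    sim = xn @ xn.T                                 # cosine similarity
    topk = sim.topk(k + 1, dim=-1).indices[:, 1:]   # drop the self-match
    adj = torch.zeros_like(sim)
    adj.scatter_(1, topk, 1.0)
    return ((adj + adj.T) > 0).float()              # symmetrize
```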
1 code implementation • 14 Nov 2023 • Yan Zhang, Zhaopeng Feng, Zhiyang Teng, Zuozhu Liu, Haizhou Li
Text embedding models have significantly contributed to advancements in natural language processing by adeptly capturing semantic properties of textual data.
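To illustrate the kind of semantic property such models capture, a quick check with the sentence-transformers library; the checkpoint name is an arbitrary public model, not the authors' model:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
sentences = ["a happy dog", "a joyful puppy", "quarterly earnings fell"]
emb = model.encode(sentences, convert_to_tensor=True)
print(util.cos_sim(emb, emb))  # near-synonymous sentences score highest
```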
1 code implementation • 13 Oct 2023 • Hanmeng Liu, Zhiyang Teng, Ruoxi Ning, Jian Liu, Qiji Zhou, Yue Zhang
Recently, large language models (LLMs), including notable models such as GPT-4 and burgeoning community models, have showcased significant general language understanding abilities.
2 code implementations • 8 Oct 2023 • Guangsheng Bao, Yanbin Zhao, Zhiyang Teng, Linyi Yang, Yue Zhang
Large language models (LLMs) have shown the ability to produce fluent and cogent content, presenting both productivity opportunities and societal risks.
no code implementations • 25 May 2023 • Xuming Hu, Zhijiang Guo, Zhiyang Teng, Irwin King, Philip S. Yu
Multimodal relation extraction (MRE) is the task of identifying the semantic relationship between two entities based on the context of the sentence-image pair.
2 code implementations • 23 May 2023 • Fangkai Jiao, Zhiyang Teng, Bosheng Ding, Zhengyuan Liu, Nancy F. Chen, Shafiq Joty
Existing efforts to improve the logical reasoning ability of language models have predominantly relied on supervised fine-tuning, hindering generalization to new domains and/or tasks.
1 code implementation • 22 May 2023 • Guangsheng Bao, Zhiyang Teng, Hao Zhou, Jianhao Yan, Yue Zhang
However, current NAT models still have a significant performance gap compared to their AT counterparts.
1 code implementation • 20 May 2023 • Hanmeng Liu, Zhiyang Teng, Leyang Cui, Chaoli Zhang, Qiji Zhou, Yue Zhang
LogiCoT serves as an instruction set for teaching models logical reasoning and eliciting general reasoning skills.
no code implementations • 8 May 2023 • Guangsheng Bao, Zhiyang Teng, Yue Zhang
Sequence-to-sequence (seq2seq) models have been widely used for natural language processing, computer vision, and other deep learning tasks.
1 code implementation • 8 May 2023 • Guangsheng Bao, Zhiyang Teng, Yue Zhang
Document-level machine translation faces the challenge of data sparsity due to its long input length and a small amount of training data, increasing the risk of learning spurious patterns.
1 code implementation • 7 Apr 2023 • Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, Yue Zhang
With the release of Generative Pretrained Transformer 4 (GPT-4), highlighted as "advanced" at reasoning tasks, we are eager to learn how GPT-4 performs on various logical reasoning tasks.
1 code implementation • Conference 2023 • Zhiyang Teng, Chenhua Chen, Yan Zhang, Yue Zhang
Experiments on various text generation benchmarks show the effectiveness of our proposed method.
1 code implementation • 28 Sep 2022 • Peilin Zhou, Zeqiang Wang, Dading Chong, Zhijiang Guo, Yining Hua, Zichang Su, Zhiyang Teng, Jiageng Wu, Jie Yang
To further investigate tweet users' attitudes toward specific entities, 4 types of entities (Person, Organization, Drug, and Vaccine) are selected and annotated with user sentiments, resulting in a targeted sentiment dataset with 9,101 entities (in 5,278 tweets).
1 code implementation • 28 Sep 2022 • Zeqiang Wang, Yile Wang, Jiageng Wu, Zhiyang Teng, Jie Yang
Designed in a hierarchical structure, YATO supports free combinations of three types of widely used features, including 1) traditional neural networks (CNN, RNN, etc.).
1 code implementation • 8 Sep 2022 • Yile Wang, Linyi Yang, Zhiyang Teng, Ming Zhou, Yue Zhang
Transformer-based pre-trained models have advanced substantially in recent years, becoming one of the most important backbones in natural language processing.
1 code implementation • EMNLP 2021 • Jian Liu, Zhiyang Teng, Leyang Cui, Hanmeng Liu, Yue Zhang
Aspect category sentiment analysis has attracted increasing research attention.
1 code implementation • ACL 2021 • Guangsheng Bao, Yue Zhang, Zhiyang Teng, Boxing Chen, Weihua Luo
However, studies show that when the translation unit is further enlarged to a whole document, supervised training of the Transformer can fail.
3 code implementations • 30 Dec 2020 • Leilei Gan, Zhiyang Teng, Yue Zhang, Linchao Zhu, Fei Wu, Yi Yang
In this paper, we propose SemGloVe, which distills semantic co-occurrences from BERT into static GloVe word embeddings.
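For reference, the standard GloVe objective that such a distilled co-occurrence matrix would be plugged into; SemGloVe's contribution is how X is filled from BERT statistics rather than corpus window counts, so this sketch only shows the classic objective:

```python
import torch

def glove_loss(w, w_ctx, b, b_ctx, X, x_max=100.0, alpha=0.75):
    """Weighted least-squares GloVe objective over co-occurrence matrix X.
    w, w_ctx: (vocab, dim) word/context embeddings; b, b_ctx: (vocab,) biases."""
    weight = (X / x_max).clamp(max=1.0) ** alpha    # down-weight rare pairs;
                                                    # zero-count pairs get weight 0
    log_x = torch.log(X.clamp(min=1e-8))
    pred = w @ w_ctx.T + b[:, None] + b_ctx[None, :]
    return (weight * (pred - log_x) ** 2).sum()
```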
no code implementations • 8 Dec 2020 • Yuan Zhang, Zhiyang Teng, Yue Zhang
Chinese parsing has traditionally been solved by a pipeline of three systems: word segmentation, part-of-speech tagging, and dependency parsing modules.
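The traditional pipeline the paper contrasts against, with hypothetical component interfaces for illustration:

```python
def parse_chinese(sentence, segmenter, tagger, parser):
    words = segmenter(sentence)      # 1) word segmentation
    tags = tagger(words)             # 2) part-of-speech tagging
    return parser(words, tags)       # 3) dependency parsing
```

Errors made by an early stage propagate to later ones, which is why joint or end-to-end alternatives are attractive.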
1 code implementation • EMNLP 2020 • Yan Zhang, Zhijiang Guo, Zhiyang Teng, Wei Lu, Shay B. Cohen, Zuozhu Liu, Lidong Bing
With the help of these strategies, we are able to train a model with fewer parameters while maintaining the model capacity.
1 code implementation • 13 Aug 2020 • Qingkai Min, Libo Qin, Zhiyang Teng, Xiao Liu, Yue Zhang
Dialogue state modules are a useful component in a task-oriented dialogue system.
1 code implementation • TACL 2019 • Zhijiang Guo, Yan Zhang, Zhiyang Teng, Wei Lu
We focus on graph-to-sequence learning, which can be framed as transducing graph structures to sequences for text generation.
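Schematically, a graph-to-sequence model pairs a graph encoder with a standard sequence decoder; a minimal interface sketch, where the class and argument names are ours rather than the paper's:

```python
import torch.nn as nn

class Graph2Seq(nn.Module):
    def __init__(self, graph_encoder, seq_decoder):
        super().__init__()
        self.encoder = graph_encoder   # e.g., a GNN over node features + edges
        self.decoder = seq_decoder     # e.g., an attentional LSTM/Transformer

    def forward(self, node_feats, edges, target_tokens):
        node_states = self.encoder(node_feats, edges)            # contextualize nodes
        return self.decoder(target_tokens, memory=node_states)   # attend to the graph
```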
1 code implementation • COLING 2018 • Zhiyang Teng, Yue Zhang
Non-local features have been exploited by syntactic parsers for capturing dependencies between sub-structures of the output.
1 code implementation • 24 Aug 2017 • Jie Yang, Zhiyang Teng, Meishan Zhang, Yue Zhang
Our results on standard benchmarks show that state-of-the-art neural models can give accuracies comparable to the best discrete models in the literature for most tasks, and that combining discrete and neural features consistently yields better results.
no code implementations • TACL 2017 • Zhiyang Teng, Yue Zhang
In this paper, we propose a method for automatic head-lexicalization for tree-structure LSTMs, propagating head words from leaf nodes to every constituent node.
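The propagation idea can be sketched as a bottom-up pass over the constituency tree; the paper learns head selection jointly with the tree-LSTM, so the precomputed head_child pointer below is a simplifying assumption:

```python
def propagate_heads(node):
    """Assign every constituent the head word of its head child, bottom-up."""
    if node.is_leaf:
        node.head = node.word
    else:
        for child in node.children:
            propagate_heads(child)
        node.head = node.head_child.head   # lexical head percolates upward
    return node
```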
no code implementations • COLING 2016 • Ching-Yun Chang, Yue Zhang, Zhiyang Teng, Zahn Bozanic, Bin Ke
Measuring the information content of news text is useful for decision makers in their investments, since news can influence the intrinsic values of companies.
1 code implementation • 21 Nov 2016 • Zhiyang Teng, Yue Zhang
In this paper, we propose a method for automatic head-lexicalization for tree-structure LSTMs, propagating head words from leaf nodes to every constituent node.
no code implementations • LREC 2016 • Meishan Zhang, Jie Yang, Zhiyang Teng, Yue Zhang
We present a light-weight machine learning tool for NLP research.