Search Results for author: Chenghao Yang

Found 18 papers, 13 papers with code

Textual Relationship Modeling for Cross-Modal Information Retrieval

1 code implementation 31 Oct 2018 Jing Yu, Chenghao Yang, Zengchang Qin, Zhuoqian Yang, Yue Hu, Yanbing Liu

A joint neural model is proposed to learn feature representations individually in each modality.

Multimedia

OpenHowNet: An Open Sememe-based Lexical Knowledge Base

1 code implementation 28 Jan 2019 Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Qiang Dong, Maosong Sun, Zhendong Dong

In this paper, we present OpenHowNet, an open sememe-based lexical knowledge base.

COS960: A Chinese Word Similarity Dataset of 960 Word Pairs

1 code implementation 1 Jun 2019 Junjie Huang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Maosong Sun

Word similarity computation is a widely recognized task in the field of lexical semantics.

POS · Word Similarity

Modeling Semantic Compositionality with Sememe Knowledge

1 code implementation ACL 2019 Fanchao Qi, Jun-Jie Huang, Chenghao Yang, Zhiyuan Liu, Xiao Chen, Qun Liu, Maosong Sun

In this paper, we verify the effectiveness of sememes, the minimum semantic units of human languages, in modeling semantic compositionality (SC) through a confirmatory experiment.

multi-word expression embedding · multi-word expression sememe prediction

Word-level Textual Adversarial Attacking as Combinatorial Optimization

1 code implementation ACL 2020 Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, Maosong Sun

Further experiments also show that our model has higher transferability and brings greater robustness gains to victim models through adversarial training.

Adversarial Attack · Combinatorial Optimization · +3

Enhancing Transformer with Sememe Knowledge

no code implementations WS 2020 Yuhui Zhang, Chenghao Yang, Zhengping Zhou, Zhiyuan Liu

While large-scale pretraining has achieved great success in many NLP tasks, it has not been fully studied whether external linguistic knowledge can improve data-driven models.

Language Modelling

Frustratingly Hard Evidence Retrieval for QA Over Books

no code implementations WS 2020 Xiangyang Mou, Mo Yu, Bingsheng Yao, Chenghao Yang, Xiaoxiao Guo, Saloni Potdar, Hui Su

Much progress has been made on question answering (QA) in recent years, but the particular problem of QA over narrative book stories has not been explored in depth.

Question Answering · Retrieval

Weakly-Supervised Methods for Suicide Risk Assessment: Role of Related Domains

1 code implementation ACL 2021 Chenghao Yang, Yudong Zhang, Smaranda Muresan

Social media has become a valuable resource for the study of suicidal ideation and the assessment of suicide risk.

Narrative Question Answering with Cutting-Edge Open-Domain QA Techniques: A Comprehensive Study

3 code implementations 7 Jun 2021 Xiangyang Mou, Chenghao Yang, Mo Yu, Bingsheng Yao, Xiaoxiao Guo, Saloni Potdar, Hui Su

Recent advancements in open-domain question answering (ODQA), i.e., finding answers from large open-domain corpora such as Wikipedia, have led to human-level performance on many datasets.

Open-Domain Question Answering

Transformer Embeddings of Irregularly Spaced Events and Their Participants

1 code implementation ICLR 2022 Chenghao Yang, Hongyuan Mei, Jason Eisner

The neural Hawkes process (Mei & Eisner, 2017) is a generative model of irregularly spaced sequences of discrete events.
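For orientation, the core of that model can be written as an intensity function over event types; a minimal sketch in the usual notation (recalled from Mei & Eisner (2017), not stated in this listing) is:

```latex
% Neural Hawkes intensity for event type k (sketch; notation as in Mei & Eisner, 2017).
% h(t) is the hidden state of a continuous-time LSTM; w_k and s_k are learned per-type parameters.
\lambda_k(t) = f_k\bigl(\mathbf{w}_k^{\top}\,\mathbf{h}(t)\bigr),
\qquad
f_k(x) = s_k \,\log\bigl(1 + e^{x/s_k}\bigr)
```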

The prediction of the quality of results in Logic Synthesis using Transformer and Graph Neural Networks

no code implementations 23 Jul 2022 Chenghao Yang, Zhongda Wang, Yinshui Xia, Zhufei Chu

Furthermore, the Transformer and GNNs are adopted as a joint learning policy for quality-of-results (QoR) prediction on unseen circuit-optimization sequences.

Improving Stability of Fine-Tuning Pretrained Language Models via Component-Wise Gradient Norm Clipping

1 code implementation 19 Oct 2022 Chenghao Yang, Xuezhe Ma

Despite its superior performance, such fine-tuning can be unstable, resulting in significant variance in performance and potential risks for practical applications.
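To make the component-wise clipping idea concrete, here is a short, hypothetical PyTorch sketch; the grouping of parameters by top-level module name and the max_norm value of 1.0 are illustrative assumptions, not the paper's exact recipe:

```python
import torch
from collections import defaultdict

def clip_grad_norm_per_component(model, max_norm=1.0):
    """Clip gradient norms separately for each named component (illustrative sketch only)."""
    groups = defaultdict(list)
    for name, param in model.named_parameters():
        if param.grad is not None:
            # Group parameters by their top-level module name (an assumed notion of "component").
            groups[name.split(".")[0]].append(param)
    for params in groups.values():
        # Rescale each group's gradients so that group's total norm does not exceed max_norm.
        torch.nn.utils.clip_grad_norm_(params, max_norm)

# Usage in a training step: call after loss.backward() and before optimizer.step(), e.g.
# clip_grad_norm_per_component(model, max_norm=1.0)
```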

ReCode: Robustness Evaluation of Code Generation Models

2 code implementations 20 Dec 2022 Shiqi Wang, Zheng Li, Haifeng Qian, Chenghao Yang, Zijian Wang, Mingyue Shang, Varun Kumar, Samson Tan, Baishakhi Ray, Parminder Bhatia, Ramesh Nallapati, Murali Krishna Ramanathan, Dan Roth, Bing Xiang

Most existing work on robustness in text or code tasks has focused on classification, while robustness in generation tasks is an uncharted area; to date, there is no comprehensive benchmark for robustness in code generation.

Code Generation

Beyond Reverse KL: Generalizing Direct Preference Optimization with Diverse Divergence Constraints

no code implementations 28 Sep 2023 Chaoqi Wang, Yibo Jiang, Chenghao Yang, Han Liu, Yuxin Chen

The increasing capabilities of large language models (LLMs) raise opportunities for artificial general intelligence but concurrently amplify safety concerns, such as potential misuse of AI systems, necessitating effective AI alignment.

Can You Follow Me? Testing Situational Understanding in ChatGPT

1 code implementation 24 Oct 2023 Chenghao Yang, Allyson Ettinger

Understanding sentence meanings and updating information states appropriately across time -- what we call "situational understanding" (SU) -- is a critical ability for human-like AI agents.

Chatbot · Sentence

Identifying Self-Disclosures of Use, Misuse and Addiction in Community-based Social Media Posts

1 code implementation 15 Nov 2023 Chenghao Yang, Tuhin Chakrabarty, Karli R Hochstatter, Melissa N Slavin, Nabila El-Bassel, Smaranda Muresan

In the last decade, the United States has lost more than 500,000 people to overdoses involving prescription and illicit opioids, making it a national public health emergency (USDHHS, 2017).

When Hindsight is Not 20/20: Testing Limits on Reflective Thinking in Large Language Models

no code implementations 14 Apr 2024 Yanhong Li, Chenghao Yang, Allyson Ettinger

In this paper, we set out to clarify these capabilities under a more stringent evaluation setting in which we disallow any kind of external feedback.
