no code implementations • 24 Feb 2025 • Jibang Wu, Chenghao Yang, Simon Mahns, Chaoqi Wang, Hao Zhu, Fei Fang, Haifeng Xu
This paper develops an agentic framework that employs large language models (LLMs) to automate the generation of persuasive and grounded marketing content, using real estate listing descriptions as our focal application domain.
1 code implementation • 8 Feb 2025 • Jiajun Shi, Chaoren Wei, Liqun Yang, Zekun Moore Wang, Chenghao Yang, Ge Zhang, Stephen Huang, Tao Peng, Jian Yang, Zhoufutu Wen
In this paper, we introduce CryptoX, an evaluation framework that, for the first time, combines existing benchmarks with cryptography to quantify the compositional reasoning capacity of LLMs.
1 code implementation • 9 Jun 2024 • Hao Li, Chenghao Yang, An Zhang, Yang Deng, Xiang Wang, Tat-Seng Chua
Crucial to addressing this real-world need are event summary and persona management, which enable reasoning for appropriate long-term dialogue responses.
no code implementations • 21 May 2024 • Chenghao Yang, Zi Yang, Nan Hua
Long-context modeling presents a significant challenge for transformer-based large language models (LLMs) due to the quadratic complexity of the self-attention mechanism and issues with length extrapolation caused by pretraining exclusively on short inputs.
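As a minimal illustration of the quadratic cost mentioned above (a generic sketch, not the method of this paper), a naive single-head self-attention pass materializes an n×n score matrix, so memory and compute grow quadratically with the sequence length n.

```python
import numpy as np

def naive_self_attention(x: np.ndarray) -> np.ndarray:
    """Single-head self-attention over x of shape (n, d).

    The score matrix below has shape (n, n), so compute and memory
    scale as O(n^2) in sequence length n -- the cost that makes
    long-context modeling difficult for standard transformers.
    """
    n, d = x.shape
    q, k, v = x, x, x  # identity projections, kept simple for illustration
    scores = q @ k.T / np.sqrt(d)                       # (n, n) matrix: the quadratic blow-up
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ v                                   # (n, d)

# Doubling n quadruples the size of the (n, n) score matrix.
out = naive_self_attention(np.random.randn(1024, 64))
```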
1 code implementation • 14 Apr 2024 • Yanhong Li, Chenghao Yang, Allyson Ettinger
In this paper, we set out to clarify these capabilities under a more stringent evaluation setting in which we disallow any kind of external feedback.
1 code implementation • 15 Nov 2023 • Chenghao Yang, Tuhin Chakrabarty, Karli R Hochstatter, Melissa N Slavin, Nabila El-Bassel, Smaranda Muresan
In the last decade, the United States has lost more than 500,000 people to overdoses involving prescription and illicit opioids, making it a national public health emergency (USDHHS, 2017).
1 code implementation • 24 Oct 2023 • Chenghao Yang, Allyson Ettinger
Understanding sentence meanings and updating information states appropriately across time -- what we call "situational understanding" (SU) -- is a critical ability for human-like AI agents.
1 code implementation • 28 Sep 2023 • Chaoqi Wang, Yibo Jiang, Chenghao Yang, Han Liu, Yuxin Chen
The increasing capabilities of large language models (LLMs) raise opportunities for artificial general intelligence but concurrently amplify safety concerns, such as potential misuse of AI systems, necessitating effective AI alignment.
1 code implementation • 31 May 2023 • Chenghao Yang, Fan Yin, He He, Kai-Wei Chang, Xiaofei Ma, Bing Xiang
In practice, Shapley Values are often estimated with a small number of stochastic model evaluations.
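For context on the estimation setting above, a common way to approximate Shapley values under a limited evaluation budget is Monte Carlo sampling over feature permutations. The sketch below is a generic permutation-sampling estimator, offered as an assumption-laden illustration rather than the specific estimator studied in the paper.

```python
import random
from typing import Callable, Sequence

def mc_shapley(model: Callable[[Sequence[float]], float],
               x: Sequence[float],
               baseline: Sequence[float],
               n_samples: int = 100) -> list[float]:
    """Monte Carlo estimate of per-feature Shapley values.

    Each sample draws a random feature permutation and credits feature i
    with the marginal change in model output when i is switched from its
    baseline value to x[i], given the features that precede it in the
    permutation. With few samples the estimate is noisy -- the stochastic
    model evaluations referred to above.
    """
    d = len(x)
    phi = [0.0] * d
    for _ in range(n_samples):
        perm = random.sample(range(d), d)     # random feature ordering
        current = list(baseline)
        prev_out = model(current)
        for i in perm:
            current[i] = x[i]
            new_out = model(current)
            phi[i] += (new_out - prev_out) / n_samples
            prev_out = new_out
    return phi

# Toy linear model: exact Shapley values are w_i * (x_i - baseline_i).
weights = [0.5, -1.0, 2.0]
model = lambda z: sum(w * v for w, v in zip(weights, z))
print(mc_shapley(model, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0]))
```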
2 code implementations • 20 Dec 2022 • Shiqi Wang, Zheng Li, Haifeng Qian, Chenghao Yang, Zijian Wang, Mingyue Shang, Varun Kumar, Samson Tan, Baishakhi Ray, Parminder Bhatia, Ramesh Nallapati, Murali Krishna Ramanathan, Dan Roth, Bing Xiang
Most existing works on robustness in text or code tasks have focused on classification, while robustness in generation tasks is an uncharted area, and to date there is no comprehensive benchmark for robustness in code generation.
1 code implementation • 19 Oct 2022 • Chenghao Yang, Xuezhe Ma
Despite its superior performance, such fine-tuning can be unstable, resulting in significant variance in performance and potential risks for practical applications.
no code implementations • 23 Jul 2022 • Chenghao Yang, Zhongda Wang, Yinshui Xia, Zhufei Chu
Furthermore, Transformers and GNNs are adopted as a joint learning policy for QoR prediction on unseen circuit-optimization sequences.
2 code implementations • ICLR 2022 • Chenghao Yang, Hongyuan Mei, Jason Eisner
The neural Hawkes process (Mei & Eisner, 2017) is a generative model of irregularly spaced sequences of discrete events.
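For readers unfamiliar with the model family, a classical (non-neural) Hawkes process has conditional intensity λ(t) = μ + Σ_{t_i < t} α·exp(−β(t − t_i)), where each past event temporarily raises the rate of future events. The sketch below simulates such a process via Ogata's thinning, purely as a hedged illustration of irregularly spaced, self-exciting event sequences; it is not the neural parameterization used in the paper.

```python
import math
import random

def simulate_hawkes(mu: float, alpha: float, beta: float, t_end: float) -> list[float]:
    """Simulate a univariate Hawkes process on [0, t_end] via Ogata's thinning.

    Intensity: lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    Each event excites the process, making near-future events more likely,
    which produces irregularly spaced, bursty event sequences.
    """
    events, t = [], 0.0
    while t < t_end:
        # With an exponential kernel, intensity decays between events,
        # so its value just after time t is a valid upper bound.
        lam_bar = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        t += random.expovariate(lam_bar)          # propose next candidate time
        if t >= t_end:
            break
        lam_t = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        if random.random() <= lam_t / lam_bar:    # accept with prob lambda(t) / lam_bar
            events.append(t)
    return events

print(simulate_hawkes(mu=0.5, alpha=0.8, beta=1.0, t_end=10.0))
```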
3 code implementations • 7 Jun 2021 • Xiangyang Mou, Chenghao Yang, Mo Yu, Bingsheng Yao, Xiaoxiao Guo, Saloni Potdar, Hui Su
Recent advancements in open-domain question answering (ODQA), i.e., finding answers in large open-domain corpora such as Wikipedia, have led to human-level performance on many datasets.
1 code implementation • ACL 2021 • Chenghao Yang, Yudong Zhang, Smaranda Muresan
Social media has become a valuable resource for the study of suicidal ideation and the assessment of suicide risk.
no code implementations • WS 2020 • Xiangyang Mou, Mo Yu, Bingsheng Yao, Chenghao Yang, Xiaoxiao Guo, Saloni Potdar, Hui Su
A lot of progress has been made to improve question answering (QA) in recent years, but the special problem of QA over narrative book stories has not been explored in-depth.
no code implementations • WS 2020 • Yuhui Zhang, Chenghao Yang, Zhengping Zhou, Zhiyuan Liu
While large-scale pretraining has achieved great success in many NLP tasks, it has not been fully studied whether external linguistic knowledge can improve data-driven models.
1 code implementation • ACL 2020 • Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, Maosong Sun
Further experiments also show that our model has higher transferability and brings greater robustness improvements to victim models through adversarial training.
1 code implementation • ACL 2019 • Fanchao Qi, Jun-Jie Huang, Chenghao Yang, Zhiyuan Liu, Xiao Chen, Qun Liu, Maosong Sun
In this paper, we verify the effectiveness of sememes, the minimum semantic units of human languages, in modeling semantic compositionality (SC) through a confirmatory experiment.
multi-word expression embedding
multi-word expression sememe prediction
1 code implementation • 1 Jun 2019 • Junjie Huang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Maosong Sun
Word similarity computation is a widely recognized task in the field of lexical semantics.
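As a generic illustration of the task (not the sememe-based method proposed in this paper), word similarity is commonly scored as the cosine similarity between word vectors; a minimal sketch with toy embeddings follows.

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two word vectors, a standard word-similarity score."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 3-d embeddings for illustration; real systems use pretrained vectors.
cat = np.array([0.9, 0.1, 0.0])
dog = np.array([0.8, 0.2, 0.1])
car = np.array([0.0, 0.1, 0.9])
print(cosine_similarity(cat, dog))  # high similarity: related words
print(cosine_similarity(cat, car))  # low similarity: unrelated words
```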
1 code implementation • 28 Jan 2019 • Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Qiang Dong, Maosong Sun, Zhendong Dong
In this paper, we present an open sememe-based lexical knowledge base OpenHowNet.
1 code implementation • 31 Oct 2018 • Jing Yu, Chenghao Yang, Zengchang Qin, Zhuoqian Yang, Yue Hu, Yanbing Liu
A joint neural model is proposed to learn feature representation individually in each modality.
Multimedia