no code implementations • 20 Mar 2024 • Tyler Loakman, Chen Tang, Chenghua Lin
Previous work in phonologically and phonetically grounded language generation has mainly focused on domains such as puns and poetry.
1 code implementation • 19 Nov 2023 • Chen Tang, Tyler Loakman, Chenghua Lin
These results underscore the effectiveness of our model in leveraging context and event features to improve the quality of generated narratives.
no code implementations • 9 Nov 2023 • Tyler Loakman, Aaron Maladry, Chenghua Lin
Human evaluation is often considered to be the gold standard method of evaluating a Natural Language Generation system.
1 code implementation • 28 Jun 2023 • Chen Tang, Hongbo Zhang, Tyler Loakman, Chenghua Lin, Frank Guerin
Further analysis also shows that our representation learning framework can bridge the semantic gap by fusing representations of both text and graph knowledge.
1 code implementation • 6 Jun 2023 • Tyler Loakman, Chen Tang, Chenghua Lin
Previous work in phonetically-grounded language generation has mainly focused on domains such as lyrics and poetry.
1 code implementation • 10 May 2023 • Hongbo Zhang, Chen Tang, Tyler Loakman, Chenghua Lin, Stefan Goetze
In this paper, we propose a novel context-aware graph-attention model (Context-aware GAT), which can effectively incorporate global features of relevant knowledge graphs based on a context-enhanced knowledge aggregation process.
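The paper's context-enhanced aggregation process is not reproduced here, but the underlying graph-attention mechanism it builds on can be sketched generically. The following is a minimal NumPy illustration of a standard GAT-style layer (attention coefficients over a node's neighbours), not the authors' Context-aware GAT itself; all function and variable names are illustrative.

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def gat_layer(H, adj, W, a):
    """One generic graph-attention layer (illustrative sketch).

    H   : (n, f_in)  node feature matrix
    adj : (n, n)     binary adjacency with self-loops
    W   : (f_in, f_out) shared linear projection
    a   : (2*f_out,) attention parameter vector
    """
    Z = H @ W                                   # project node features
    f = Z.shape[1]
    # e_ij = LeakyReLU(a^T [z_i || z_j]), computed via the two halves of a
    e = leaky_relu((Z @ a[:f])[:, None] + (Z @ a[f:])[None, :])
    e = np.where(adj > 0, e, -np.inf)           # mask non-neighbours
    # softmax over each node's neighbourhood
    attn = np.exp(e - e.max(axis=1, keepdims=True))
    attn = attn / attn.sum(axis=1, keepdims=True)
    return attn @ Z, attn                       # aggregated features, weights

# toy 3-node graph with self-loops
rng = np.random.default_rng(0)
adj = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]])
H = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 2))
a = rng.normal(size=(4,))
out, attn = gat_layer(H, adj, W, a)
```

Each row of `attn` is a probability distribution over that node's neighbours, so nodes with no edge receive exactly zero weight; the paper's contribution lies in how global, context-enhanced knowledge-graph features enter this aggregation, which this sketch does not model.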
1 code implementation • 27 Oct 2022 • Chen Tang, Hongbo Zhang, Tyler Loakman, Chenghua Lin, Frank Guerin
In this paper, we propose a novel framework to improve medical dialogue generation by considering features centered on domain-specific terminology.
1 code implementation • 19 Oct 2022 • Henglin Huang, Chen Tang, Tyler Loakman, Frank Guerin, Chenghua Lin
Despite the success of prior work applying pre-trained models, current neural models for Chinese stories still struggle to generate high-quality long text narratives.
1 code implementation • 19 Oct 2022 • Chen Tang, Zhihao Zhang, Tyler Loakman, Chenghua Lin, Frank Guerin
To improve the performance of long text generation, recent studies have leveraged automatically planned event structures (i.e., storylines) to guide story generation.