1 code implementation • 7 Oct 2022 • Haoqin Tu, Zhongliang Yang, Jinshuai Yang, Siyu Zhang, Yongfeng Huang
Visualization of the local latent prior confirms that the primary contribution of the proposed model lies in the hidden space.
1 code implementation • 12 May 2022 • Haoqin Tu, Zhongliang Yang, Jinshuai Yang, Yongfeng Huang
The Variational Auto-Encoder (VAE) has become the de facto learning paradigm for simultaneously performing representation learning and generation for natural language.
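As a rough illustration of why one model can serve both purposes, here is a minimal sketch of a text VAE objective, assuming PyTorch, a GRU encoder/decoder, and toy dimensions (none of these choices are taken from the paper above): a token-level reconstruction loss is combined with a KL term that pulls the approximate posterior toward a standard Gaussian prior.

```python
# Minimal text-VAE sketch (assumptions: PyTorch, GRU encoder/decoder, toy sizes).
import torch
import torch.nn as nn

class TextVAE(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128, latent_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.latent_to_hidden = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        emb = self.embed(tokens)                                  # (B, T, E)
        _, h = self.encoder(emb)                                  # h: (1, B, H)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        h0 = self.latent_to_hidden(z).unsqueeze(0)                # condition decoder on z
        dec, _ = self.decoder(emb, h0)                            # teacher forcing
        return self.out(dec), mu, logvar

def vae_loss(logits, targets, mu, logvar):
    # Reconstruction term (token-level cross-entropy) plus KL to the N(0, I) prior.
    rec = nn.functional.cross_entropy(logits.transpose(1, 2), targets)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

# Toy usage: random token ids stand in for a real tokenized corpus.
model = TextVAE()
tokens = torch.randint(0, 1000, (4, 12))
logits, mu, logvar = model(tokens)
loss = vae_loss(logits, tokens, mu, logvar)
```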
1 code implementation • Findings (ACL) 2021 • Siyu Zhang, Zhongliang Yang, Jinshuai Yang, Yongfeng Huang
Generative linguistic steganography mainly utilizes language models and applies steganographic sampling (stegosampling) to generate high-security steganographic text (stegotext).
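For context, the sketch below shows one common form of stegosampling, using hand-written next-token distributions in place of a real language model (the distributions, function names, and the 2-bit block size are illustrative assumptions, not the paper's exact scheme): at each step the top-2^k candidate tokens are indexed by k-bit strings and the next k secret bits choose which token to emit, so a receiver who can reproduce the same distributions recovers the bits.

```python
# Stegosampling sketch: secret bits select among the top-2^k candidates per step.
def embed_bits(step_distributions, secret_bits, bits_per_step=2):
    """step_distributions: list of {token: prob} dicts, one per generation step."""
    stegotext, cursor = [], 0
    for dist in step_distributions:
        # Rank candidates by probability and keep the top 2^k of them.
        ranked = sorted(dist, key=dist.get, reverse=True)[: 2 ** bits_per_step]
        chunk = secret_bits[cursor: cursor + bits_per_step].ljust(bits_per_step, "0")
        stegotext.append(ranked[int(chunk, 2)])        # the bits choose the token
        cursor += bits_per_step
        if cursor >= len(secret_bits):
            break
    return " ".join(stegotext)

def extract_bits(stegotext, step_distributions, bits_per_step=2):
    # The receiver re-ranks the same distributions and reads off each token's index.
    bits = []
    for token, dist in zip(stegotext.split(), step_distributions):
        ranked = sorted(dist, key=dist.get, reverse=True)[: 2 ** bits_per_step]
        bits.append(format(ranked.index(token), f"0{bits_per_step}b"))
    return "".join(bits)

# Toy example: two steps, each with a hand-written next-token distribution.
dists = [
    {"the": 0.4, "a": 0.3, "one": 0.2, "this": 0.1},
    {"cat": 0.5, "dog": 0.25, "bird": 0.15, "fox": 0.10},
]
text = embed_bits(dists, "0111")          # "01" -> "a", "11" -> "fox"
assert extract_bits(text, dists) == "0111"
```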
no code implementations • 2 Jun 2020 • Zhongliang Yang, Baitao Gong, Yamin Li, Jinshuai Yang, Zhiwen Hu, Yongfeng Huang
On the one hand, we hide the secret information by coding a path in the knowledge graph rather than the conditional probability of each generated word; on the other hand, we can control, to a certain extent, the semantic expression of the generated steganographic text.
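To make the path-coding idea concrete, here is a minimal sketch over a toy knowledge graph (the graph, helper names, and bit-allocation rule are illustrative assumptions, not the authors' system): at each entity the outgoing edges are sorted deterministically and the next secret bits select which edge to follow, so the message is carried by the walked path rather than by word probabilities, and each chosen triple can then be verbalized into a sentence whose semantics the sender controls.

```python
# Path-coding sketch: secret bits pick outgoing edges in a toy knowledge graph.
import math

graph = {
    "Paris":  [("capital_of", "France"), ("hosts", "Louvre"),
               ("located_on", "Seine"), ("twinned_with", "Rome")],
    "France": [("borders", "Spain"), ("member_of", "EU")],
    "Louvre": [("exhibits", "Mona_Lisa"), ("located_in", "Paris")],
    "Seine":  [("flows_into", "English_Channel"), ("crosses", "Paris")],
    "Rome":   [("capital_of", "Italy"), ("hosts", "Colosseum")],
}

def encode_path(start, secret_bits):
    """Walk the graph, consuming bits at each node to choose an outgoing edge."""
    path, node, cursor = [], start, 0
    while cursor < len(secret_bits) and graph.get(node):
        edges = sorted(graph[node])                    # deterministic ordering
        k = int(math.log2(len(edges)))                 # bits this node can carry
        chunk = secret_bits[cursor: cursor + k].ljust(k, "0")
        relation, target = edges[int(chunk, 2)]        # the bits choose the edge
        path.append((node, relation, target))
        node, cursor = target, cursor + k
    return path

def decode_path(path):
    """Recover the bits from the edge choices along the path."""
    bits = []
    for node, relation, target in path:
        edges = sorted(graph[node])
        k = int(math.log2(len(edges)))
        bits.append(format(edges.index((relation, target)), f"0{k}b"))
    return "".join(bits)

# Example: encode a short secret starting from "Paris"; each resulting triple can
# be verbalized into a sentence, which is how the semantics of the stegotext stay
# controllable while the bits remain recoverable from the path alone.
triples = encode_path("Paris", "0110")
assert decode_path(triples).startswith("0110")
```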