no code implementations • 24 Apr 2024 • Cheng Kang, Daniel Novak, Katerina Urbanova, Yuqing Cheng, Yong Hu
Large language models (LLMs) have demonstrated impressive generalization capabilities on specific tasks with human-written instruction data.
no code implementations • 15 Feb 2024 • Cheng Kang, Xinye Chen, Yong Hu, Daniel Novak
To enhance their portability for independent deployment and to improve their stability, as evaluated by language perplexity, we propose a novel approach called the Quantized Embedding Controllable Diffusion Language Model (QE-CDLM).