Search Results for author: Zhiqiang Liu

Found 11 papers, 4 papers with code

A Word Feature Encoding Method for Transformer-Based Mongolian Speech Recognition

no code implementations CCL 2022 Xiaoxu Zhang, Zhiqiang Ma, Zhiqiang Liu, Caijilahu Bao

In Mongolian speech recognition, the Transformer model cannot learn the correspondence between Mongolian words containing control characters and their speech, so the model adapts poorly to Mongolian. We propose a Mongolian word encoding method for the Transformer model that mixes Mongolian letter features with word features; by incorporating letter information, the Transformer can distinguish Mongolian words containing control characters and learn the correspondence between words and speech. On the IMUT-MC dataset, we build a Transformer model and run ablation and comparison experiments on the word feature encoding method. The ablation results show that the word feature encoding method reduces HWER, WER, and SER by 23.4%, 6.9%, and 2.6%, respectively; the comparison results show that it outperforms all other methods, reaching an HWER of 11.8% and a WER of 19.8%.
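The mixed letter/word encoding idea can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the fusion is assumed to be simple concatenation of a word embedding with the mean of its letter embeddings, the vocabulary and `_vec` embedding stand-in are invented, and the point is only that two word variants which collide on the `<unk>` word embedding remain distinguishable through their letter features.

```python
import random

WORD_DIM, LETTER_DIM = 8, 4
VOCAB = {"<unk>": 0, "sain": 1, "baina": 2}  # toy word vocabulary (invented)

def _vec(key, dim):
    # Toy deterministic stand-in for a learned embedding lookup.
    rng = random.Random(key)
    return [rng.uniform(-1.0, 1.0) for _ in range(dim)]

def encode_word(word):
    """Concatenate a word-level embedding with the mean of letter embeddings.

    Out-of-vocabulary variants (e.g. the same word with a Mongolian control
    character) share the <unk> word vector, but their letter features still
    tell them apart."""
    word_id = VOCAB.get(word, VOCAB["<unk>"])
    word_vec = _vec(f"word:{word_id}", WORD_DIM)
    letter_vecs = [_vec(f"letter:{ch}", LETTER_DIM) for ch in word]
    letter_mean = [sum(v[i] for v in letter_vecs) / len(letter_vecs)
                   for i in range(LETTER_DIM)]
    return word_vec + letter_mean

# Two out-of-vocabulary variants: same word part, different letter part.
plain = encode_word("ta")
with_ctrl = encode_word("ta\u180b")  # U+180B: Mongolian free variation selector
```

A pure word-level lookup would map both variants to the identical `<unk>` vector; the appended letter features are what let the model separate them.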

Speech Recognition

Attention-Based Mongolian Speaker Feature Extraction

no code implementations CCL 2022 Fangyuan Zhu, Zhiqiang Ma, Zhiqiang Liu, Caijilahu Bao, Hongbin Wang

Speaker features produced by existing speaker feature extraction models have low discriminability, so the Mongolian acoustic model cannot learn discriminative information and fails to adapt to different speakers. We propose an attention-based speaker adaptation method that introduces a Neural Turing Machine for adaptation: a memory module stores speaker features, an attention mechanism computes a similarity weight matrix between the speaker features in memory and the speaker feature of the current utterance, and the weights recombine the memory into a speaker feature called the s-vector, improving the discriminability between speakers. On the IMUT-MCT dataset, we conduct ablation experiments on the speaker feature extraction method, model adaptation experiments, and a case study. The results show that, compared with the i-vector and d-vector speaker features, the s-vector reduces SER and WER by 4.96% and 1.08%, respectively; across different Mongolian acoustic models, the proposed method consistently improves over the baselines.
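The attention-based recombination step can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's model: similarity is taken as a plain dot product, softmax-normalized into weights, and the s-vector is the weighted average of the memory slots; the memory contents and dimensions are invented.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def s_vector(query, memory):
    """Recombine stored speaker features into an s-vector.

    Attention weights come from the similarity (here a dot product, an
    assumption) between the current utterance's speaker feature and each
    memory slot; the s-vector is the weighted average of the memory."""
    weights = softmax([sum(q * m for q, m in zip(query, slot))
                       for slot in memory])
    dim = len(query)
    return [sum(w * slot[i] for w, slot in zip(weights, memory))
            for i in range(dim)]

memory = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]  # toy stored speaker features
sv = s_vector([0.9, 0.1], memory)  # query resembling the first speaker
```

Because the weights are a softmax, the s-vector is a convex combination of the stored features, pulled toward the slots most similar to the current speaker.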

Windformer: Bi-Directional Long-Distance Spatio-Temporal Network For Wind Speed Prediction

1 code implementation • 24 Nov 2023 XueWei Li, Zewen Shang, Zhiqiang Liu, Jian Yu, Wei Xiong, Mei Yu

Historical and future time information captures the trend of airflow changes; whether this dynamic information can be exploited also affects prediction performance.

Management Time Series

FinEval: A Chinese Financial Domain Knowledge Evaluation Benchmark for Large Language Models

1 code implementation • 19 Aug 2023 Liwen Zhang, Weige Cai, Zhaowei Liu, Zhi Yang, Wei Dai, Yujie Liao, Qianru Qin, Yifei Li, Xingyu Liu, Zhiqiang Liu, Zhoufan Zhu, Anbo Wu, Xin Guo, Yun Chen

Our work offers a more comprehensive financial knowledge evaluation benchmark, utilizing data of mock exams and covering a wide range of evaluated LLMs.

Multiple-choice

Improved Knowledge Distillation via Adversarial Collaboration

no code implementations • 29 Nov 2021 Zhiqiang Liu, Chengkai Huang, Yanxia Liu

To achieve this goal, a small student model is trained to exploit the knowledge of a large well-trained teacher model.
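The basic teacher-to-student transfer described here can be sketched as the standard Hinton-style soft-target loss; this is the generic distillation objective, not this paper's adversarial-collaboration method, and the temperature value is an illustrative choice.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-smoothed, numerically stable softmax.
    m = max(logits)
    exps = [math.exp((x - m) / T) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Soft-target distillation term: KL(teacher || student) on
    temperature-smoothed distributions, scaled by T^2 so gradient
    magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, T)   # teacher's softened prediction
    q = softmax(student_logits, T)   # student's softened prediction
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

In training, this term is typically added to the student's ordinary cross-entropy loss on the ground-truth labels; it vanishes when the student exactly matches the teacher.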

Knowledge Distillation

Semi-Online Knowledge Distillation

1 code implementation • 23 Nov 2021 Zhiqiang Liu, Yanxia Liu, Chengkai Huang

However, to the best of our knowledge, KD and DML have never been jointly explored in a unified framework to solve the knowledge distillation problem.
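For contrast with one-way distillation, the DML side can be sketched as below. This shows the generic deep mutual learning objective (two peers regularizing each other with bidirectional KL terms), not this paper's semi-online scheme; the cross-entropy values are assumed to be precomputed elsewhere.

```python
import math

def softmax(logits):
    # Numerically stable softmax.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl(p, q):
    # KL divergence between two discrete distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def dml_losses(logits_a, logits_b, ce_a, ce_b):
    """Deep mutual learning step: each peer's loss is its own supervised
    cross-entropy loss plus a KL term pulling it toward the other peer's
    current prediction (ce_a and ce_b are assumed precomputed)."""
    pa, pb = softmax(logits_a), softmax(logits_b)
    return ce_a + kl(pb, pa), ce_b + kl(pa, pb)
```

Unlike one-way KD, neither network is frozen: both KL terms are live during training, so the peers teach each other as they learn.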

Knowledge Distillation Model Compression +1

Application of Artificial Neural Networks for Catalysis

no code implementations • 3 Oct 2021 Zhiqiang Liu, Wentao Zhou

Catalysts, as important materials, play a crucial role in the development of the chemical industry.

Self-Learning

One Comment from One Perspective: An Effective Strategy for Enhancing Automatic Music Comment

1 code implementation COLING 2020 Tengfei Huo, Zhiqiang Liu, Jinchao Zhang, Jie Zhou

The automatic generation of music comments is of great significance for increasing the popularity of music and the music platform's activity.

Comment Generation

Adaptive Federated Learning and Digital Twin for Industrial Internet of Things

no code implementations • 25 Oct 2020 Wen Sun, Shiyu Lei, Lu Wang, Zhiqiang Liu, Yan Zhang

The Industrial Internet of Things (IoT) enables distributed intelligent services that adapt to dynamic, real-time industrial devices, realizing the benefits of Industry 4.0.

Clustering Federated Learning +1
