1 code implementation • 20 Mar 2025 • Tianze Luo, Xingchen Miao, Wenbo Duan
In this work, we present WaveFM, a reparameterized flow matching model for mel-spectrogram conditioned speech synthesis, designed to enhance both sample quality and generation speed for diffusion vocoders.
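For context, conditional flow matching trains a network to regress the velocity of a simple noise-to-data interpolation path. The sketch below shows the standard (rectified) flow-matching objective for a mel-conditioned vocoder; all names (VelocityNet, wav_dim) are illustrative placeholders, and WaveFM's reparameterized variant differs from this vanilla form.

```python
# Minimal sketch of a conditional flow-matching training step for a
# mel-conditioned vocoder. Illustrative only, not WaveFM's actual code.
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Toy stand-in for a vocoder backbone predicting a velocity field."""
    def __init__(self, wav_dim: int, mel_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(wav_dim + mel_dim + 1, 256), nn.SiLU(),
            nn.Linear(256, wav_dim),
        )

    def forward(self, x_t, t, mel):
        return self.net(torch.cat([x_t, mel, t], dim=-1))

def flow_matching_loss(model, audio, mel):
    """Interpolate noise -> data linearly and regress the constant velocity."""
    noise = torch.randn_like(audio)
    t = torch.rand(audio.shape[0], 1)        # t ~ U(0, 1)
    x_t = (1.0 - t) * noise + t * audio      # linear interpolation path
    target_velocity = audio - noise          # d x_t / d t along the path
    pred = model(x_t, t, mel)
    return ((pred - target_velocity) ** 2).mean()

model = VelocityNet(wav_dim=128, mel_dim=80)
loss = flow_matching_loss(model, torch.randn(4, 128), torch.randn(4, 80))
loss.backward()
```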
1 code implementation • 6 Feb 2025 • Yi Ding, Joon Hei Lee, Shuailei Zhang, Tianze Luo, Cuntai Guan
Learning the spatial topology of electroencephalogram (EEG) channels and their temporal dynamics is crucial for decoding attention states.
1 code implementation • 3 Jan 2025 • Heqing Zou, Tianze Luo, Guiyang Xie, Victor Xiao Jie Zhang, Fengmao Lv, Guangcong Wang, Junyang Chen, Zhuochen Wang, Hansheng Zhang, Huaijian Zhang
Multimodal large language models have become a popular topic in deep visual understanding owing to their many promising real-world applications.
1 code implementation • 27 Sep 2024 • Heqing Zou, Tianze Luo, Guiyang Xie, Victor Xiao Jie Zhang, Fengmao Lv, Guangcong Wang, Junyang Chen, Zhuochen Wang, Hansheng Zhang, Huaijian Zhang
Given the diverse nature of visual data, MultiModal Large Language Models (MM-LLMs) exhibit variations in model design and training for understanding images, short videos, and long videos.
no code implementations • 5 Mar 2024 • Bosheng Ding, Chengwei Qin, Ruochen Zhao, Tianze Luo, Xinze Li, Guizhen Chen, Wenhan Xia, Junjie Hu, Anh Tuan Luu, Shafiq Joty
In the rapidly evolving field of large language models (LLMs), data augmentation (DA) has emerged as a pivotal technique for enhancing model performance by diversifying training examples without the need for additional data collection.
1 code implementation • 4 May 2023 • Fangkai Jiao, Bosheng Ding, Tianze Luo, Zhanfeng Mo
This project focuses on enhancing open-source large language models through instruction-tuning and providing comprehensive evaluations of their performance.
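For illustration, instruction-tuning data is commonly packed into a fixed prompt template, with the loss computed only on the response tokens. The template and field names below are assumptions (an Alpaca-style format), not necessarily the ones used in this project.

```python
# Illustrative sketch of a common instruction-tuning data format.
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_example(instruction: str, response: str) -> dict:
    """Pack one supervised example; training typically masks the prompt
    tokens so the loss covers only the response."""
    prompt = PROMPT_TEMPLATE.format(instruction=instruction)
    return {"input": prompt, "target": response + "</s>"}

example = build_example(
    instruction="Summarize the benefits of instruction tuning.",
    response="It aligns a pretrained LLM with natural-language task descriptions.",
)
print(example["input"] + example["target"])
```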
1 code implementation • 16 Nov 2022 • Tianze Luo, Zhanfeng Mo, Sinno Jialin Pan
In this paper, we argue that running full-rank diffusion SDEs on the whole graph adjacency matrix space hinders diffusion models from learning graph topology generation, and hence significantly deteriorates the quality of generated graph data.
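One way to read the abstract's argument is that diffusing in a low-rank parameterization of the adjacency matrix is preferable to noising all n x n entries. The toy sketch below noises only the top-k spectral components of a symmetric adjacency matrix; it is an assumption-laden illustration of the low-rank idea, not the paper's algorithm.

```python
# Toy illustration: noise the top-k eigenvalues instead of the full
# adjacency matrix (O(n*k) degrees of freedom instead of O(n^2)).
import numpy as np

def lowrank_forward_noising(adj: np.ndarray, k: int, t: float, beta: float = 1.0):
    """Add Gaussian noise to the k dominant eigenvalues, then reconstruct."""
    eigvals, eigvecs = np.linalg.eigh(adj)   # symmetric eigendecomposition
    top = np.argsort(np.abs(eigvals))[-k:]   # keep the k dominant modes
    lam, U = eigvals[top], eigvecs[:, top]
    # VP-SDE-style interpolation toward noise, applied on the spectrum only.
    alpha = np.exp(-0.5 * beta * t)
    lam_t = alpha * lam + np.sqrt(1.0 - alpha**2) * np.random.randn(k)
    return U @ np.diag(lam_t) @ U.T          # low-rank noisy adjacency

adj = np.random.rand(8, 8)
adj = ((adj + adj.T) > 1.0).astype(float)    # random symmetric 0/1 graph
noisy = lowrank_forward_noising(adj, k=3, t=0.5)
```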
no code implementations • NAACL 2022 • Quanyu Long, Tianze Luo, Wenya Wang, Sinno Jialin Pan
In this work, we study Unsupervised Domain Adaptation (UDA) under a challenging self-supervised setting.
no code implementations • 21 Feb 2022 • Qiuhao Zeng, Tianze Luo, Boyu Wang
Unsupervised domain adaptation (UDA) enables knowledge transfer from the labeled source domain to the unlabeled target domain by reducing the cross-domain discrepancy.
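As one common instantiation of the cross-domain discrepancy mentioned above, the sketch below computes the maximum mean discrepancy (MMD) between source and target feature batches. MMD with an RBF kernel is an illustrative choice, not necessarily the measure this paper minimizes.

```python
# Hedged sketch: squared MMD with an RBF kernel, a standard UDA
# discrepancy measure. Illustrative only.
import torch

def rbf_kernel(x, y, sigma: float = 1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for all pairs."""
    dist2 = torch.cdist(x, y) ** 2
    return torch.exp(-dist2 / (2.0 * sigma**2))

def mmd2(source_feats, target_feats, sigma: float = 1.0):
    """Biased estimate of squared MMD between two feature batches."""
    k_ss = rbf_kernel(source_feats, source_feats, sigma).mean()
    k_tt = rbf_kernel(target_feats, target_feats, sigma).mean()
    k_st = rbf_kernel(source_feats, target_feats, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st

src = torch.randn(32, 64)         # source-domain features
tgt = torch.randn(32, 64) + 0.5   # shifted target-domain features
print(mmd2(src, tgt))             # larger when the domains differ more
```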
no code implementations • 12 Dec 2021 • Qi Hao, Tianze Luo, Guangda Huzhang
The homepage recommendation on most E-commerce applications places items in a hierarchical manner, where different channels display items in different styles.
1 code implementation • 7 Jul 2021 • Tianbo Li, Tianze Luo, Yiping Ke, Sinno Jialin Pan
Neural marked point processes combine the interpretability of probabilistic models with the representational power of neural networks.
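To make the combination concrete, a typical neural marked temporal point process uses a recurrent network to summarize the event history and a head that outputs a per-mark conditional intensity. The architecture and names below are illustrative assumptions, not the paper's model.

```python
# Minimal sketch of a neural marked temporal point process. Illustrative only.
import torch
import torch.nn as nn

class NeuralMarkedTPP(nn.Module):
    def __init__(self, num_marks: int, hidden: int = 32):
        super().__init__()
        self.embed = nn.Embedding(num_marks, hidden)
        self.rnn = nn.GRU(hidden + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_marks)

    def intensities(self, inter_times, marks):
        """lambda_k(t_i | history) for each mark k at each event;
        softplus keeps the intensities positive."""
        x = torch.cat([self.embed(marks), inter_times.unsqueeze(-1)], dim=-1)
        h, _ = self.rnn(x)
        return nn.functional.softplus(self.head(h))

model = NeuralMarkedTPP(num_marks=5)
inter_times = torch.rand(2, 10)              # time gaps between events
marks = torch.randint(0, 5, (2, 10))         # event types (marks)
lam = model.intensities(inter_times, marks)  # (2, 10, 5) intensities
```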