Search Results for author: Tianze Luo

Found 11 papers, 7 papers with code

WaveFM: A High-Fidelity and Efficient Vocoder Based on Flow Matching

1 code implementation • 20 Mar 2025 • Tianze Luo, Xingchen Miao, Wenbo Duan

In this work, we present WaveFM, a reparameterized flow matching model for mel-spectrogram conditioned speech synthesis, designed to enhance both sample quality and generation speed for diffusion vocoders.

Speech Synthesis
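
To make the flow matching idea concrete, here is a minimal, generic conditional flow matching training step in PyTorch. This is a sketch only: `velocity_net` and its signature are assumptions, and WaveFM's actual reparameterized objective differs from this plain formulation.

```python
import torch
import torch.nn as nn

# Generic conditional flow matching training step (illustrative sketch,
# NOT WaveFM's reparameterized objective). `velocity_net` is a hypothetical
# network predicting the flow's velocity given the noisy sample, the time t,
# and the conditioning mel-spectrogram.

def flow_matching_loss(velocity_net: nn.Module,
                       waveform: torch.Tensor,   # clean audio, shape (B, T)
                       mel: torch.Tensor) -> torch.Tensor:  # condition
    noise = torch.randn_like(waveform)                            # x_0 ~ N(0, I)
    t = torch.rand(waveform.size(0), 1, device=waveform.device)   # t ~ U(0, 1)
    # Linear path between noise and data: x_t = (1 - t) * x_0 + t * x_1.
    x_t = (1.0 - t) * noise + t * waveform
    target_velocity = waveform - noise            # d x_t / d t along this path
    pred = velocity_net(x_t, t.squeeze(1), mel)
    return torch.mean((pred - target_velocity) ** 2)  # regress the velocity
```

At inference, integrating the learned velocity field from noise toward data with a handful of ODE steps is what lets flow matching vocoders trade sampling steps against quality.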

Decoding Human Attentive States from Spatial-temporal EEG Patches Using Transformers

1 code implementation • 6 Feb 2025 • Yi Ding, Joon Hei Lee, Shuailei Zhang, Tianze Luo, Cuntai Guan

Learning the spatial topology of electroencephalogram (EEG) channels and their temporal dynamics is crucial for decoding attention states.

Brain Computer Interface • EEG
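
The core idea, multi-channel EEG cut into spatial-temporal patches and fed to a transformer encoder, can be sketched as below. All sizes, the patching scheme, and the two-class head are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Sketch of spatial-temporal EEG patching for a transformer (assumed sizes;
# positional embeddings omitted for brevity).

class EEGPatchTransformer(nn.Module):
    def __init__(self, n_channels=32, patch_len=50, d_model=128,
                 n_heads=4, n_layers=2):
        super().__init__()
        self.patch_len = patch_len
        # Each patch is one channel's signal over `patch_len` samples.
        self.embed = nn.Linear(patch_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 2)  # e.g. attentive vs. inattentive

    def forward(self, eeg):                       # eeg: (B, n_channels, T)
        B, C, T = eeg.shape
        # Non-overlapping temporal windows per channel: (B, C, T//L, L).
        patches = eeg.unfold(-1, self.patch_len, self.patch_len)
        patches = patches.reshape(B, -1, self.patch_len)  # flatten space x time
        tokens = self.embed(patches)              # (B, num_patches, d_model)
        encoded = self.encoder(tokens)
        return self.head(encoded.mean(dim=1))     # pool patches, classify state
```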

From Seconds to Hours: Reviewing MultiModal Large Language Models on Comprehensive Long Video Understanding

1 code implementation • 27 Sep 2024 • Heqing Zou, Tianze Luo, Guiyang Xie, Victor Zhang, Fengmao Lv, Guangcong Wang, Junyang Chen, Zhuochen Wang, Hansheng Zhang, Huaijian Zhang

Given the diverse nature of visual data, MultiModal Large Language Models (MM-LLMs) vary in model design and training for understanding images, short videos, and long videos.

Video Understanding • Visual Reasoning

Data Augmentation using Large Language Models: Data Perspectives, Learning Paradigms and Challenges

no code implementations • 5 Mar 2024 • Bosheng Ding, Chengwei Qin, Ruochen Zhao, Tianze Luo, Xinze Li, Guizhen Chen, Wenhan Xia, Junjie Hu, Anh Tuan Luu, Shafiq Joty

In the rapidly evolving field of large language models (LLMs), data augmentation (DA) has emerged as a pivotal technique for enhancing model performance by diversifying training examples without the need for additional data collection.

Data Augmentation • Survey
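
One representative augmentation pattern surveys of this area cover is label-preserving paraphrasing with an LLM. The sketch below keeps the model behind a placeholder `generate` callable rather than assuming any particular API.

```python
from typing import Callable, List, Tuple

# Illustrative LLM-based data augmentation: paraphrase each labeled example
# to diversify the training set. `generate` is a placeholder for any LLM
# text-generation call; the prompt wording is an assumption.

def paraphrase_augment(examples: List[Tuple[str, str]],
                       generate: Callable[[str], str],
                       n_variants: int = 2) -> List[Tuple[str, str]]:
    augmented = list(examples)
    for text, label in examples:
        for _ in range(n_variants):
            prompt = ("Paraphrase the following sentence, keeping its "
                      f"meaning unchanged:\n{text}")
            # The rewrite keeps the original label (label-preserving DA).
            augmented.append((generate(prompt), label))
    return augmented
```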

Panda LLM: Training Data and Evaluation for Open-Sourced Chinese Instruction-Following Large Language Models

1 code implementation • 4 May 2023 • Fangkai Jiao, Bosheng Ding, Tianze Luo, Zhanfeng Mo

This project focuses on enhancing open-source large language models through instruction-tuning and providing comprehensive evaluations of their performance.

Instruction Following
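
Instruction-tuning pipelines of this kind typically serialize (instruction, input, output) records into a prompt template and apply a standard next-token loss on the response. The Alpaca-style template below is a common convention, not necessarily Panda LLM's exact format.

```python
# A common instruction-tuning record layout and prompt template (generic
# sketch; the project's actual data format may differ).

def build_prompt(record: dict) -> str:
    # record = {"instruction": ..., "input": ..., "output": ...}
    if record.get("input"):
        return ("Below is an instruction and an input.\n\n"
                f"### Instruction:\n{record['instruction']}\n\n"
                f"### Input:\n{record['input']}\n\n### Response:\n")
    return ("Below is an instruction.\n\n"
            f"### Instruction:\n{record['instruction']}\n\n### Response:\n")

# Supervised fine-tuning then maximizes the likelihood of record["output"]
# given build_prompt(record), i.e. a standard next-token loss on the response.
```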

Fast Graph Generation via Spectral Diffusion

1 code implementation • 16 Nov 2022 • Tianze Luo, Zhanfeng Mo, Sinno Jialin Pan

In this paper, we argue that running full-rank diffusion SDEs on the whole graph adjacency matrix space hinders diffusion models from learning graph topology generation, and hence significantly deteriorates the quality of generated graph data.

Graph Generation
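
The motivation can be illustrated in a few lines: working in a truncated spectral space replaces noising the full n × n adjacency matrix with O(nk) operations on the leading eigenpairs. This sketches that idea only, not the paper's actual diffusion SDE.

```python
import numpy as np

# Low-rank spectral noising sketch: diffuse only the k dominant
# eigenvalues/eigenvectors of a symmetric adjacency matrix instead of the
# whole adjacency-matrix space (illustration, not the paper's exact SDE).

def spectral_forward_noise(adj: np.ndarray, k: int, sigma: float,
                           rng: np.random.Generator):
    eigvals, eigvecs = np.linalg.eigh(adj)        # symmetric -> real spectrum
    top = np.argsort(np.abs(eigvals))[-k:]        # keep k dominant modes
    lam, u = eigvals[top], eigvecs[:, top]
    # Gaussian noise in the k-dimensional spectral space (O(nk), not O(n^2)).
    lam_noisy = lam + sigma * rng.standard_normal(lam.shape)
    u_noisy = u + sigma * rng.standard_normal(u.shape)
    return lam_noisy, u_noisy   # reconstruct graph via u @ diag(lam) @ u.T
```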

Domain-Augmented Domain Adaptation

no code implementations • 21 Feb 2022 • Qiuhao Zeng, Tianze Luo, Boyu Wang

Unsupervised domain adaptation (UDA) enables knowledge transfer from the labeled source domain to the unlabeled target domain by reducing the cross-domain discrepancy.

Transfer Learning • Unsupervised Domain Adaptation
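
A standard instance of such a cross-domain discrepancy is the RBF-kernel maximum mean discrepancy (MMD) between source and target features, sketched below; the paper's own domain-augmentation mechanism is not shown here.

```python
import torch

# Biased RBF-kernel MMD between source and target feature batches, a
# standard cross-domain discrepancy term in UDA (generic sketch; the
# bandwidth gamma is an assumed hyperparameter).

def mmd_rbf(source: torch.Tensor, target: torch.Tensor,
            gamma: float = 1.0) -> torch.Tensor:
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2          # pairwise squared distances
        return torch.exp(-gamma * d2)        # RBF kernel matrix
    return (kernel(source, source).mean()
            + kernel(target, target).mean()
            - 2.0 * kernel(source, target).mean())
```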

Re-ranking With Constraints on Diversified Exposures for Homepage Recommender System

no code implementations • 12 Dec 2021 • Qi Hao, Tianze Luo, Guangda Huzhang

The homepage recommendation on most E-commerce applications places items in a hierarchical manner, where different channels display items in different styles.

Diversity • Recommendation Systems • +1
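
As a toy illustration of exposure-diversity constraints in re-ranking, the sketch below greedily places items by score while forbidding more than `max_run` consecutive items from the same category. The real system's hierarchical channels and constraints are considerably richer.

```python
from typing import List, Tuple

# Toy greedy re-ranker with a simple diversified-exposure constraint:
# no more than `max_run` consecutive items from one category (illustrative
# only; not the paper's formulation).

def rerank(items: List[Tuple[float, str]],
           max_run: int = 2) -> List[Tuple[float, str]]:
    # items: (relevance_score, category) pairs.
    pool, result = sorted(items, reverse=True), []
    while pool:
        for i, (score, cat) in enumerate(pool):
            recent = [c for _, c in result[-max_run:]]
            # Valid unless the last `max_run` picks all share this category.
            if len(recent) < max_run or any(c != cat for c in recent):
                result.append(pool.pop(i))
                break
        else:
            result.append(pool.pop(0))  # constraint unsatisfiable; relax it
    return result
```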

Mitigating Performance Saturation in Neural Marked Point Processes: Architectures and Loss Functions

1 code implementation • 7 Jul 2021 • Tianbo Li, Tianze Luo, Yiping Ke, Sinno Jialin Pan

Neural marked point processes combine the interpretability of probabilistic models with the representational power of neural networks.

Model Selection • Point Processes
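
Such models are typically trained by maximizing the point process log-likelihood: the sum of log-intensities at observed events minus the integrated intensity (the compensator). A minimal sketch with a Monte Carlo estimate of the compensator, and a placeholder `intensity` function, looks like this:

```python
import torch

# Temporal point process negative log-likelihood (generic sketch, not the
# paper's architecture or loss variants). `intensity` is a placeholder for
# any (neural) conditional intensity function lambda(t) > 0.

def mpp_nll(intensity, event_times: torch.Tensor, horizon: float,
            n_mc: int = 100) -> torch.Tensor:
    # log-likelihood = sum_i log lambda(t_i) - int_0^T lambda(t) dt
    event_term = torch.log(intensity(event_times)).sum()
    t_mc = torch.rand(n_mc) * horizon               # uniform samples on [0, T]
    compensator = horizon * intensity(t_mc).mean()  # MC estimate of integral
    return -(event_term - compensator)              # negative log-likelihood
```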
