Search Results for author: Ju-Chiang Wang

Found 13 papers, 0 papers with code

Music Era Recognition Using Supervised Contrastive Learning and Artist Information

no code implementations • 7 Jul 2024 • Qiqi He, Xuchen Song, Weituo Hao, Ju-Chiang Wang, Wei-Tsung Lu, Wei Li

For the case where artist information is available, we extend the audio-based model to take multimodal inputs and develop a framework, called MultiModal Contrastive (MMC) learning, to enhance training.

Contrastive Learning • Music Classification
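
As a rough illustration of the supervised contrastive objective described above, here is a minimal SupCon-style loss over era labels, plus a hypothetical `fuse` step that concatenates audio and artist embeddings. The concatenation fusion and the projection head `proj` are assumptions for this sketch, not necessarily the paper's MMC design.

```python
import torch
import torch.nn.functional as F

def supcon_loss(embeddings, labels, temperature=0.1):
    """SupCon-style loss: embeddings sharing a label (e.g. the same era)
    are pulled together; all other pairs are pushed apart."""
    z = F.normalize(embeddings, dim=1)                  # (N, D) unit vectors
    sim = z @ z.T / temperature                         # pairwise similarity
    n = z.size(0)
    diag = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(diag, float('-inf'))          # drop self-pairs
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~diag
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Mean log-probability over each anchor's positives (0 if it has none).
    loss = -log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return loss.mean()

# Hypothetical multimodal fusion: concatenate the two embeddings and
# project them into the shared contrastive space.
def fuse(audio_emb, artist_emb, proj):                  # proj: e.g. nn.Linear
    return proj(torch.cat([audio_emb, artist_emb], dim=1))
```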

Scaling Up Music Information Retrieval Training with Semi-Supervised Learning

no code implementations • 2 Oct 2023 • Yun-Ning Hung, Ju-Chiang Wang, Minz Won, Duc Le

To our knowledge, this is the first attempt to study the effects of scaling up both the model and the training data for a variety of MIR tasks.

Information Retrieval • Music Information Retrieval • +1

Multitrack Music Transcription with a Time-Frequency Perceiver

no code implementations • 19 Jun 2023 • Wei-Tsung Lu, Ju-Chiang Wang, Yun-Ning Hung

Multitrack music transcription aims to transcribe a music audio input into the musical notes of multiple instruments simultaneously.

Multi-Task Learning • Music Transcription
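
A loose sketch of what a time-frequency "perceiver" transcriber could look like, assuming a Perceiver-IO-style encode/decode: learned latents cross-attend over time-frequency tokens and are then decoded, frame by frame, into per-instrument piano-roll logits. All layer sizes, the per-frame decoding queries, and the single attention block are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PerceiverTranscriberSketch(nn.Module):
    """Latents read the whole time-frequency token grid; per-frame queries
    decode them into one piano roll per instrument. Positional encodings
    and feed-forward blocks are omitted for brevity."""
    def __init__(self, n_freq=229, d_model=256, n_latents=128,
                 n_instruments=4, n_pitches=88):
        super().__init__()
        self.n_instruments, self.n_pitches = n_instruments, n_pitches
        self.bin_embed = nn.Linear(1, d_model)         # one token per T-F bin
        self.latents = nn.Parameter(torch.randn(n_latents, d_model))
        self.encode = nn.MultiheadAttention(d_model, 4, batch_first=True)
        self.frame_query = nn.Linear(n_freq, d_model)  # one query per frame
        self.decode = nn.MultiheadAttention(d_model, 4, batch_first=True)
        self.head = nn.Linear(d_model, n_instruments * n_pitches)

    def forward(self, spec):                           # spec: (B, T, F)
        B, T, Fdim = spec.shape
        tokens = self.bin_embed(spec.reshape(B, T * Fdim, 1))
        lat = self.latents.expand(B, -1, -1)
        lat, _ = self.encode(lat, tokens, tokens)      # latents read the input
        frames, _ = self.decode(self.frame_query(spec), lat, lat)
        logits = self.head(frames)                     # (B, T, I * P)
        return logits.view(B, T, self.n_instruments, self.n_pitches)
```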

SingNet: A Real-time Singing Voice Beat and Downbeat Tracking System

no code implementations • 4 Jun 2023 • Mojtaba Heydari, Ju-Chiang Wang, Zhiyao Duan

Singing voice beat and downbeat tracking has several applications in automatic music production, analysis, and manipulation.

Downbeat Tracking

Jointist: Simultaneous Improvement of Multi-instrument Transcription and Music Source Separation via Joint Training

no code implementations • 1 Feb 2023 • Kin Wai Cheuk, Keunwoo Choi, Qiuqiang Kong, Bochen Li, Minz Won, Ju-Chiang Wang, Yun-Ning Hung, Dorien Herremans

Jointist consists of an instrument recognition module that conditions the other two modules: a transcription module that outputs instrument-specific piano rolls, and a source separation module that utilizes instrument information and transcription results.

Chord Recognition • Instrument Recognition • +1
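
The conditioning chain described in the abstract can be illustrated with placeholder modules; the three sub-networks below are hypothetical stand-ins for the paper's actual layers, and only the wiring follows the description.

```python
import torch.nn as nn

class JointistStyleSketch(nn.Module):
    """Illustrative wiring: an instrument-recognition module conditions
    both the transcription and the source-separation modules."""
    def __init__(self, recognizer, transcriber, separator):
        super().__init__()
        self.recognizer = recognizer      # audio -> instrument presence
        self.transcriber = transcriber    # (audio, instruments) -> piano rolls
        self.separator = separator        # (audio, instruments, rolls) -> stems

    def forward(self, audio):
        instruments = self.recognizer(audio)               # which instruments play
        rolls = self.transcriber(audio, instruments)       # instrument-specific rolls
        stems = self.separator(audio, instruments, rolls)  # conditioned separation
        return instruments, rolls, stems
```

Since the title names joint training, a weighted sum of the recognition, transcription, and separation losses would presumably be backpropagated through this chain end to end.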

Modeling the Rhythm from Lyrics for Melody Generation of Pop Song

no code implementations • 3 Jan 2023 • Daiyu Zhang, Ju-Chiang Wang, Katerina Kosta, Jordan B. L. Smith, Shicen Zhou

Experiments on Chinese lyric-to-melody generation show that the proposed framework models key characteristics of the rhythm and pitch distributions in the dataset; in a subjective evaluation, the melodies generated by our system were rated as similar to or better than those of a state-of-the-art alternative.

Binaural Rendering of Ambisonic Signals by Neural Networks

no code implementations • 4 Nov 2022 • Yin Zhu, Qiuqiang Kong, Junjie Shi, Shilei Liu, Xuzhou Ye, Ju-Chiang Wang, Junping Zhang

Binaural rendering of ambisonic signals is of broad interest to virtual reality and immersive media.

To catch a chorus, verse, intro, or anything else: Analyzing a song with structural functions

no code implementations • 29 May 2022 • Ju-Chiang Wang, Yun-Ning Hung, Jordan B. L. Smith

Conventional music structure analysis algorithms aim to divide a song into segments and to group them with abstract labels (e.g., 'A', 'B', and 'C').

Boundary Detection • Temporal Localization

Supervised Metric Learning for Music Structure Features

no code implementations • 18 Oct 2021 • Ju-Chiang Wang, Jordan B. L. Smith, Wei-Tsung Lu, Xuchen Song

Music structure analysis (MSA) methods traditionally search for musically meaningful patterns in audio: homogeneity, repetition, novelty, and segment-length regularity.

Metric Learning
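
The patterns the abstract lists (homogeneity, repetition, novelty, segment-length regularity) are classically read off a self-similarity matrix. Below is a minimal sketch of Foote's checkerboard-kernel novelty curve over hypothetical beat-synchronous features, as one such conventional baseline; it is not the paper's proposed metric-learning method.

```python
import numpy as np

def foote_novelty(features, kernel_size=32):
    """Checkerboard-kernel novelty (Foote, 2000): slide the kernel along
    the diagonal of a self-similarity matrix; peaks suggest segment
    boundaries. `features` is (n_frames, dim), e.g. beat-sync chroma."""
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    ssm = f @ f.T                                    # cosine self-similarity
    half = kernel_size // 2
    s = np.sign(np.arange(-half, half) + 0.5)        # -1 = past, +1 = future
    taper = np.hanning(kernel_size)
    kernel = np.outer(s * taper, s * taper)          # tapered checkerboard
    novelty = np.zeros(len(ssm))
    for i in range(half, len(ssm) - half):
        patch = ssm[i - half:i + half, i - half:i + half]
        novelty[i] = (patch * kernel).sum()
    return novelty
```

Peak-picking on the novelty curve yields boundary candidates; the title suggests the paper instead learns the underlying feature space with supervised metric learning so that such structural cues become more reliable.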

Modeling the Compatibility of Stem Tracks to Generate Music Mashups

no code implementations • 26 Mar 2021 • Jiawen Huang, Ju-Chiang Wang, Jordan B. L. Smith, Xuchen Song, Yuxuan Wang

A music mashup combines audio elements from two or more songs to create a new work.
