Search Results for author: Judith Yue Li

Found 4 papers, 1 paper with code

V2Meow: Meowing to the Visual Beat via Video-to-Music Generation

no code implementations • 11 May 2023 • Kun Su, Judith Yue Li, Qingqing Huang, Dima Kuzmin, Joonseok Lee, Chris Donahue, Fei Sha, Aren Jansen, Yu Wang, Mauro Verzetti, Timo I. Denk

Video-to-music generation demands both a temporally localized high-quality listening experience and globally aligned video-acoustic signatures.

Music Generation

Multi-Task End-to-End Training Improves Conversational Recommendation

no code implementations • 8 May 2023 • Naveen Ram, Dima Kuzmin, Ellie Ka In Chio, Moustafa Farid Alzantot, Santiago Ontanon, Ambarish Jash, Judith Yue Li

In this paper, we analyze the performance of a multitask end-to-end transformer model on the task of conversational recommendation, which aims to provide recommendations based on a user's explicit preferences expressed in dialogue.

Dialogue Management • Management +1

MAQA: A Multimodal QA Benchmark for Negation

no code implementations • 9 Jan 2023 • Judith Yue Li, Aren Jansen, Qingqing Huang, Joonseok Lee, Ravi Ganti, Dima Kuzmin

Multimodal learning can benefit from the representation power of pretrained Large Language Models (LLMs).

Negation • Question Answering

MuLan: A Joint Embedding of Music Audio and Natural Language

1 code implementation • 26 Aug 2022 • Qingqing Huang, Aren Jansen, Joonseok Lee, Ravi Ganti, Judith Yue Li, Daniel P. W. Ellis

Music tagging and content-based retrieval systems have traditionally been constructed using pre-defined ontologies covering a rigid set of music attributes or text queries.

Cross-Modal Retrieval • Music Tagging +2
