Search Results for author: Tashi Namgyal

Found 4 papers, 1 paper with code

JAMMIN-GPT: Text-based Improvisation using LLMs in Ableton Live

1 code implementation • 6 Dec 2023 • Sven Hollowell, Tashi Namgyal, Paul Marshall

We introduce a system that allows users of Ableton Live to create MIDI-clips by naming them with musical descriptions.
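The listing carries only this one-line summary. As a rough illustration of the clip-name-to-MIDI idea, here is a minimal Python sketch that asks a generic LLM callable for a note list and writes it out with the mido library; the prompt format, the `llm` stand-in and the reply parsing are assumptions, and the paper's actual model choice and Ableton Live integration are not reproduced here.

```python
# Hypothetical sketch of turning a clip-name description into MIDI.
# The real system's prompt, model and Ableton integration may differ.
import mido

def description_to_midi(description: str, llm) -> mido.MidiFile:
    """Ask an LLM (a prompt -> text callable, supplied by the caller)
    for a note list and write it to a new MIDI file."""
    reply = llm(
        f"List MIDI note numbers and durations in beats for: {description}\n"
        "Format: one 'note,beats' pair per line."
    )
    mid = mido.MidiFile()
    track = mido.MidiTrack()
    mid.tracks.append(track)
    ticks_per_beat = mid.ticks_per_beat  # mido default is 480
    for line in reply.strip().splitlines():
        note, beats = line.split(",")
        duration = int(float(beats) * ticks_per_beat)
        track.append(mido.Message("note_on", note=int(note), velocity=64, time=0))
        track.append(mido.Message("note_off", note=int(note), velocity=64, time=duration))
    return mid
```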

Data is Overrated: Perceptual Metrics Can Lead Learning in the Absence of Training Data

no code implementations • 6 Dec 2023 • Tashi Namgyal, Alexander Hepburn, Raul Santos-Rodriguez, Valero Laparra, Jesus Malo

Perceptual metrics are traditionally used to evaluate the quality of natural signals, such as images and audio.
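As a hedged sketch of the general idea of letting a perceptual metric drive learning, the PyTorch snippet below trains a toy autoencoder whose training signal is a perceptual distance rather than a plain pixel-wise objective; the `perceptual_distance` placeholder, the toy model and the random batch are illustrative stand-ins, not the metrics or experimental setup used in the paper.

```python
# Sketch: use a differentiable perceptual metric as the training loss.
import torch
import torch.nn as nn

def perceptual_distance(x, y):
    # Placeholder for any differentiable perceptual metric (e.g. an
    # MS-SSIM or NLPD implementation); MSE is used here only so the
    # sketch runs end to end.
    return torch.mean((x - y) ** 2)

autoencoder = nn.Sequential(              # toy model, not the paper's
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

x = torch.rand(16, 1, 32, 32)             # stand-in batch of signals
for _ in range(100):
    opt.zero_grad()
    loss = perceptual_distance(autoencoder(x), x)  # perceptual, not pixel, loss
    loss.backward()
    opt.step()
```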

MIDI-Draw: Sketching to Control Melody Generation

no code implementations • 19 May 2023 • Tashi Namgyal, Peter Flach, Raul Santos-Rodriguez

We describe a proof-of-principle implementation of a system for drawing melodies that abstracts away from a note-level input representation via melodic contours.
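A minimal sketch of the contour-to-melody abstraction, assuming the drawn contour arrives as a list of y-values in [0, 1]; the `contour_to_pitches` helper and the snap-to-C-major step are illustrative assumptions, not the paper's method.

```python
# Illustrative mapping from a drawn contour to a sequence of MIDI pitches.
import numpy as np

C_MAJOR = np.array([0, 2, 4, 5, 7, 9, 11])  # scale degrees in semitones

def contour_to_pitches(contour, n_notes=16, low=60, high=84):
    """Resample a drawn contour (y-values in [0, 1]) at n_notes points
    and snap each sample to the nearest C-major pitch in [low, high]."""
    ys = np.interp(np.linspace(0, 1, n_notes),
                   np.linspace(0, 1, len(contour)), contour)
    midi = low + ys * (high - low)                                  # map [0,1] -> pitch range
    scale = np.concatenate([C_MAJOR + 12 * o for o in range(11)])   # scale over all octaves
    scale = scale[(scale >= low) & (scale <= high)]
    return [int(scale[np.argmin(np.abs(scale - m))]) for m in midi]

print(contour_to_pitches([0.2, 0.5, 0.9, 0.4, 0.1]))
```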

What You Hear Is What You See: Audio Quality Metrics From Image Quality Metrics

no code implementations • 19 May 2023 • Tashi Namgyal, Alexander Hepburn, Raul Santos-Rodriguez, Valero Laparra, Jesus Malo

In this study, we investigate the feasibility of utilizing state-of-the-art image perceptual metrics for evaluating audio signals by representing them as spectrograms.
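A minimal sketch of that setup, assuming librosa for log-mel spectrograms and SSIM from scikit-image as the image quality metric; the specific metrics and spectrogram parameters investigated in the paper may differ.

```python
# Compare two audio signals by scoring their log-mel spectrograms with SSIM.
import numpy as np
import librosa
from skimage.metrics import structural_similarity as ssim

def spectrogram_ssim(reference, degraded, sr=22050):
    """Return SSIM between log-mel spectrograms of two mono signals."""
    def log_mel(y):
        S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
        return librosa.power_to_db(S, ref=np.max)
    a, b = log_mel(reference), log_mel(degraded)
    rng = max(a.max() - a.min(), b.max() - b.min())
    return ssim(a, b, data_range=rng)

# Usage on a synthetic test tone and a noisy copy of it.
sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.01 * np.random.randn(len(clean))
print(spectrogram_ssim(clean, noisy, sr=sr))
```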
