no code implementations • 21 Mar 2024 • Alice Baird, Rachel Manzelli, Panagiotis Tzirakis, Chris Gagne, Haoqi Li, Sadie Allen, Sander Dieleman, Brian Kulis, Shrikanth S. Narayanan, Alan Cowen
In this short white paper, the organizers encourage researchers with limited access to large datasets by outlining several open-source datasets that are available to the community, and by making several proprietary datasets available for the duration of the workshop.
no code implementations • ICML 2020 • Kubra Cilingir, Rachel Manzelli, Brian Kulis
Classical linear metric learning methods have recently been extended along two distinct lines: deep metric learning methods, which use neural networks to learn embeddings of the data, and Bregman divergence learning approaches, which generalize Euclidean distance learning to broader divergence measures such as divergences over distributions.
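As background for the abstract above: a Bregman divergence is defined from a strictly convex function φ as D_φ(x, y) = φ(x) − φ(y) − ⟨∇φ(y), x − y⟩; choosing φ(x) = ‖x‖² recovers the squared Euclidean distance. A minimal NumPy sketch (not the paper's implementation — the function names here are illustrative):

```python
import numpy as np

def bregman_divergence(phi, grad_phi, x, y):
    """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

# With phi(x) = ||x||^2, the Bregman divergence reduces to
# the squared Euclidean distance ||x - y||^2.
phi = lambda v: np.dot(v, v)
grad_phi = lambda v: 2.0 * v

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.0])
print(bregman_divergence(phi, grad_phi, x, y))  # 8.0, equal to ||x - y||^2
```

Other choices of φ yield divergences such as the KL divergence over distributions, which is what makes the family a natural generalization target for metric learning.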
no code implementations • NIPS Workshop CDNNRIA 2018 • Sivaramakrishnan Sankarapandian, Anil Kag, Rachel Manzelli, Brian Kulis
We describe a training strategy that grows the number of units during training, and show on several benchmark datasets that our model yields architectures that are smaller than those obtained when tuning the number of hidden units on a standard fixed architecture.
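The growing strategy described above can be sketched roughly as follows. This is a hypothetical illustration, not the paper's algorithm: it shows one common way to widen a hidden layer mid-training, appending fresh input weights and zero-initialized output weights so the network's function is initially unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Start with a deliberately small hidden layer.
W1 = rng.normal(size=(4, 2))   # input dim 4 -> 2 hidden units
W2 = rng.normal(size=(2, 1))   # 2 hidden units -> 1 output

def grow_hidden_layer(W1, W2, n_new):
    """Append n_new hidden units: small random fan-in weights,
    zero fan-out weights so the new units start inert."""
    d_in = W1.shape[0]
    W1 = np.hstack([W1, rng.normal(scale=0.01, size=(d_in, n_new))])
    W2 = np.vstack([W2, np.zeros((n_new, W2.shape[1]))])
    return W1, W2

# During training, grow when some criterion (e.g. a loss plateau) fires.
W1, W2 = grow_hidden_layer(W1, W2, n_new=3)
print(W1.shape, W2.shape)  # (4, 5) (5, 1)
```

Zero-initializing the new output weights is a standard trick: the widened network computes exactly the same function at the moment of growth, so training resumes without a loss spike.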
no code implementations • 26 Jun 2018 • Rachel Manzelli, Vijay Thakkar, Ali Siahkamari, Brian Kulis
Existing automatic music generation approaches that feature deep learning can be broadly classified into two types: raw audio models and symbolic models.
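The two model types named above differ chiefly in their input representation, which a small sketch (illustrative values, not from the paper) makes concrete: raw audio models consume waveform samples directly, while symbolic models consume discrete event sequences such as MIDI-like notes.

```python
import numpy as np

# Raw audio representation: one second of a 440 Hz sine at 16 kHz,
# i.e. the kind of sample sequence a raw audio model is trained on.
sr = 16000
t = np.arange(sr) / sr
waveform = 0.5 * np.sin(2 * np.pi * 440.0 * t)

# Symbolic representation: a list of (MIDI pitch, onset_s, duration_s)
# events, the kind of discrete sequence a symbolic model is trained on.
notes = [(69, 0.0, 0.5), (72, 0.5, 0.5)]  # A4, then C5

print(waveform.shape, len(notes))  # (16000,) 2
```

The trade-off follows from the representations: raw audio models capture timbre and expressive detail at the cost of very long sequences, while symbolic models work over compact, structured sequences but must leave sound synthesis to a separate step.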