Search Results for author: Jesse A. Livezey

Found 9 papers, 4 papers with code

Deep learning approaches for neural decoding: from CNNs to LSTMs and spikes to fMRI

no code implementations • 19 May 2020 • Jesse A. Livezey, Joshua I. Glaser

Deep learning has been shown to be a useful tool for improving the accuracy and flexibility of neural decoding across a wide range of tasks, and we point out areas for future scientific development.

EEG • Electroencephalogram (EEG) +4
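As a toy illustration of the neural decoding problem the paper surveys (this sketch is not from the paper itself), a binary stimulus can be decoded from simulated spike counts with a simple nearest-centroid linear decoder; the simulated rates and class shift here are made up for demonstration:

```python
import numpy as np

# Toy illustration (not the paper's method): decode a binary stimulus
# from simulated spike counts with a nearest-centroid linear decoder.
rng = np.random.default_rng(0)

n_trials, n_neurons = 200, 20
base_rates = np.where(rng.random(n_neurons) > 0.5, 5.0, 2.0)
labels = rng.integers(0, 2, n_trials)
# Class 1 shifts every neuron's mean firing rate up by 1 spike.
counts = rng.poisson(base_rates + labels[:, None] * 1.0)

# "Train" on the first half of trials, test on the second half.
tr, te = slice(0, 100), slice(100, 200)
mu0 = counts[tr][labels[tr] == 0].mean(axis=0)
mu1 = counts[tr][labels[tr] == 1].mean(axis=0)

# Nearest-centroid decision rule: pick the closer class mean.
d0 = ((counts[te] - mu0) ** 2).sum(axis=1)
d1 = ((counts[te] - mu1) ** 2).sum(axis=1)
pred = (d1 < d0).astype(int)
accuracy = (pred == labels[te]).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

The deep learning approaches reviewed in the paper replace this hand-built linear rule with learned nonlinear mappings (CNNs, LSTMs) over the same kind of trial-by-feature data.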

Hangul Fonts Dataset: a Hierarchical and Compositional Dataset for Investigating Learned Representations

no code implementations • 23 May 2019 • Jesse A. Livezey, Ahyeon Hwang, Jacob Yeung, Kristofer E. Bouchard

Thus, HFD enables the identification of shortcomings in existing methods, a critical first step toward developing new machine learning algorithms to extract hierarchical and compositional structure in the context of naturalistic variability.

BIG-bench Machine Learning • Representation Learning

Unsupervised Discovery of Temporal Structure in Noisy Data with Dynamical Components Analysis

1 code implementation • 23 May 2019 • David G. Clark, Jesse A. Livezey, Kristofer E. Bouchard

Linear dimensionality reduction methods are commonly used to extract low-dimensional structure from high-dimensional data.

Dimensionality Reduction • Test +2
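For context on the abstract's opening claim, the most common linear dimensionality reduction is PCA, which projects data onto the top eigenvectors of the covariance matrix; this sketch (synthetic data, not from the paper) shows that baseline, whereas the paper's Dynamical Components Analysis instead chooses the projection that maximizes temporal structure rather than variance:

```python
import numpy as np

# Baseline illustration (not DCA itself): PCA as linear dimensionality
# reduction via eigendecomposition of the sample covariance.
rng = np.random.default_rng(1)

# 500 samples of 10-D data whose variance lives in 2 latent directions.
latents = rng.normal(size=(500, 2)) * np.array([5.0, 3.0])
mixing = rng.normal(size=(2, 10))
X = latents @ mixing + 0.1 * rng.normal(size=(500, 10))

Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
components = eigvecs[:, ::-1][:, :2]     # top-2 principal directions
Z = Xc @ components                      # the 2-D projection

explained = eigvals[::-1][:2].sum() / eigvals.sum()
print(f"variance explained by 2 components: {explained:.3f}")
```

DCA's point is that the highest-variance subspace found this way need not contain the dynamically predictable structure in time series data.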

Spiking Linear Dynamical Systems on Neuromorphic Hardware for Low-Power Brain-Machine Interfaces

no code implementations • 22 May 2018 • David G. Clark, Jesse A. Livezey, Edward F. Chang, Kristofer E. Bouchard

Neuromorphic architectures achieve low-power operation by using many simple spiking neurons in lieu of traditional hardware.
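The "simple spiking neuron" building block mentioned in the abstract is typically some variant of a leaky integrate-and-fire unit; the sketch below is a generic textbook LIF model with illustrative parameters, not the paper's specific neuromorphic implementation:

```python
import numpy as np

# Generic leaky integrate-and-fire (LIF) neuron, the kind of simple
# spiking unit neuromorphic chips implement in hardware. Parameters
# are illustrative, not taken from the paper.
def lif(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Simulate one LIF neuron; returns spike times as step indices."""
    v = 0.0
    spikes = []
    for t, i_in in enumerate(input_current):
        v += dt / tau * (-v + i_in)   # leaky integration toward input
        if v >= v_thresh:             # threshold crossing -> emit spike
            spikes.append(t)
            v = v_reset               # reset after spiking
    return spikes

# A constant supra-threshold drive produces regular spiking.
spikes = lif(np.full(1000, 2.0))
print(f"{len(spikes)} spikes in 1 s of simulated time")
```

Because each unit only integrates and occasionally emits a spike, large populations of such neurons can run at very low power compared with clocked floating-point hardware.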

Deep learning as a tool for neural data analysis: speech classification and cross-frequency coupling in human sensorimotor cortex

2 code implementations • 26 Mar 2018 • Jesse A. Livezey, Kristofer E. Bouchard, Edward F. Chang

A fundamental challenge in neuroscience is to understand what structure in the world is represented in spatially distributed patterns of neural activity from multiple single-trial measurements.

General Classification

Learning overcomplete, low coherence dictionaries with linear inference

no code implementations • 10 Jun 2016 • Jesse A. Livezey, Alejandro F. Bujan, Friedrich T. Sommer

Further, by comparing ICA algorithms on synthetic data and natural images to the computationally more expensive sparse coding solution, we show that the coherence control biases the exploration of the data manifold, sometimes yielding suboptimal solutions.
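The "coherence" in the title has a standard definition in dictionary learning: the mutual coherence of a dictionary is the largest absolute inner product between distinct, unit-normalized atoms. This sketch computes it on a random overcomplete dictionary for illustration; it is not code from the paper:

```python
import numpy as np

# Mutual coherence of a dictionary: max |cosine similarity| between
# distinct atoms. Low coherence makes sparse inference better behaved.
def mutual_coherence(D):
    """D: (n_features, n_atoms) dictionary with atoms as columns."""
    Dn = D / np.linalg.norm(D, axis=0)  # unit-normalize each atom
    G = np.abs(Dn.T @ Dn)               # absolute Gram matrix
    np.fill_diagonal(G, 0.0)            # ignore self-similarity
    return G.max()

rng = np.random.default_rng(2)
D = rng.normal(size=(64, 128))          # 2x overcomplete random dictionary
mu = mutual_coherence(D)
print(f"mutual coherence: {mu:.3f}")

# Sanity check: an orthonormal dictionary has coherence 0.
assert np.isclose(mutual_coherence(np.eye(8)), 0.0)
```

An overcomplete dictionary (more atoms than dimensions) necessarily has nonzero coherence, which is why the paper treats controlling it as an explicit objective alongside learning.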

Theano: A Python framework for fast computation of mathematical expressions

1 code implementation • 9 May 2016 • The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, Alexander Belopolsky, Yoshua Bengio, Arnaud Bergeron, James Bergstra, Valentin Bisson, Josh Bleecher Snyder, Nicolas Bouchard, Nicolas Boulanger-Lewandowski, Xavier Bouthillier, Alexandre de Brébisson, Olivier Breuleux, Pierre-Luc Carrier, Kyunghyun Cho, Jan Chorowski, Paul Christiano, Tim Cooijmans, Marc-Alexandre Côté, Myriam Côté, Aaron Courville, Yann N. Dauphin, Olivier Delalleau, Julien Demouth, Guillaume Desjardins, Sander Dieleman, Laurent Dinh, Mélanie Ducoffe, Vincent Dumoulin, Samira Ebrahimi Kahou, Dumitru Erhan, Ziye Fan, Orhan Firat, Mathieu Germain, Xavier Glorot, Ian Goodfellow, Matt Graham, Caglar Gulcehre, Philippe Hamel, Iban Harlouchet, Jean-Philippe Heng, Balázs Hidasi, Sina Honari, Arjun Jain, Sébastien Jean, Kai Jia, Mikhail Korobov, Vivek Kulkarni, Alex Lamb, Pascal Lamblin, Eric Larsen, César Laurent, Sean Lee, Simon Lefrancois, Simon Lemieux, Nicholas Léonard, Zhouhan Lin, Jesse A. Livezey, Cory Lorenz, Jeremiah Lowin, Qianli Ma, Pierre-Antoine Manzagol, Olivier Mastropietro, Robert T. McGibbon, Roland Memisevic, Bart van Merriënboer, Vincent Michalski, Mehdi Mirza, Alberto Orlandi, Christopher Pal, Razvan Pascanu, Mohammad Pezeshki, Colin Raffel, Daniel Renshaw, Matthew Rocklin, Adriana Romero, Markus Roth, Peter Sadowski, John Salvatier, François Savard, Jan Schlüter, John Schulman, Gabriel Schwartz, Iulian Vlad Serban, Dmitriy Serdyuk, Samira Shabanian, Étienne Simon, Sigurd Spieckermann, S. Ramana Subramanyam, Jakub Sygnowski, Jérémie Tanguay, Gijs van Tulder, Joseph Turian, Sebastian Urban, Pascal Vincent, Francesco Visin, Harm de Vries, David Warde-Farley, Dustin J. Webb, Matthew Willson, Kelvin Xu, Lijun Xue, Li Yao, Saizheng Zhang, Ying Zhang

Since its introduction, it has been one of the most used CPU and GPU mathematical compilers - especially in the machine learning community - and has shown steady performance improvements.

BIG-bench Machine Learning • Clustering +2

Discovering Hidden Factors of Variation in Deep Networks

no code implementations • arXiv 2015 • Brian Cheung, Jesse A. Livezey, Arjun K. Bansal, Bruno A. Olshausen

Deep learning has enjoyed a great deal of success because of its ability to learn useful features for tasks such as classification.

General Classification

Discovering Hidden Factors of Variation in Deep Networks

1 code implementation • 20 Dec 2014 • Brian Cheung, Jesse A. Livezey, Arjun K. Bansal, Bruno A. Olshausen

Deep learning has enjoyed a great deal of success because of its ability to learn useful features for tasks such as classification.

General Classification
