Search Results for author: Mark Pupilli

Found 2 papers, 1 paper with code

Harnessing Manycore Processors with Distributed Memory for Accelerated Training of Sparse and Recurrent Models

no code implementations • 7 Nov 2023 • Jan Finkbeiner, Thomas Gmeinder, Mark Pupilli, Alexander Titterton, Emre Neftci

To overcome the poor fit of mainstream SIMD accelerators to sparse and irregular computation, we explore sparse and recurrent model training on a massively parallel multiple instruction multiple data (MIMD) architecture with distributed local memory (a toy sketch of this layout follows the entry below).

Efficient Neural Network
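
The appeal of distributed local memory for sparse models can be sketched in a few lines: if each worker holds only its own rows of a sparse weight matrix, compute and communication scale with the nonzeros rather than the dense size. The NumPy toy below is a minimal sketch of that idea; the worker count, shapes, sparsity level, and sharding scheme are all assumptions for illustration, not the paper's implementation.

```python
# Toy analogue of MIMD training with distributed local memory:
# each "worker" owns a row shard of a sparse weight matrix and
# computes its outputs independently. Sizes and sparsity are
# illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, n_workers = 8, 4
mask = rng.random((n, n)) < 0.2
weights = np.where(mask, rng.standard_normal((n, n)), 0.0)  # ~80% sparse
x = rng.standard_normal(n)

# Each worker keeps a CSR-like shard of its rows in "local memory".
shards = []
for w in range(n_workers):
    rows = range(w * n // n_workers, (w + 1) * n // n_workers)
    shard = []
    for r in rows:
        idx = np.nonzero(weights[r])[0]      # column indices of nonzeros
        shard.append((r, idx, weights[r, idx]))
    shards.append(shard)

# Forward pass: workers run independently (MIMD), touching only the
# nonzero entries they own instead of a dense n x n multiply.
y = np.zeros(n)
for shard in shards:            # executed in parallel on real hardware
    for r, idx, vals in shard:
        y[r] = vals @ x[idx]

assert np.allclose(y, weights @ x)
```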

Bundle Adjustment on a Graph Processor

1 code implementation • CVPR 2020 • Joseph Ortiz, Mark Pupilli, Stefan Leutenegger, Andrew J. Davison

Graph processors such as Graphcore's Intelligence Processing Unit (IPU) are part of the major new wave of novel computer architectures for AI. Their general design, combining massively parallel computation, distributed on-chip memory, and very high inter-core communication bandwidth, allows breakthrough performance for message-passing algorithms on arbitrary graphs.
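
The paper frames bundle adjustment as a message-passing problem that maps naturally onto per-core memory and fast inter-core links. The toy below sketches only the generic synchronous message-passing pattern such hardware accelerates, not the paper's solver; the graph, node values, and consensus-averaging update are invented for illustration.

```python
# Toy synchronous message passing on an arbitrary graph: every node
# holds its state locally and updates it only from neighbour messages,
# mirroring per-core memory and inter-core links. The graph and the
# consensus-averaging update rule are illustrative assumptions.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
neighbours = {}
for u, v in edges:
    neighbours.setdefault(u, []).append(v)
    neighbours.setdefault(v, []).append(u)

state = {0: 4.0, 1: 0.0, 2: 2.0, 3: 6.0}  # per-node local state

for _ in range(50):
    # 1) each node sends its current value along every incident edge
    inbox = {u: [state[v] for v in nbrs] for u, nbrs in neighbours.items()}
    # 2) each node updates from its inbox alone (no shared memory)
    state = {u: 0.5 * state[u] + 0.5 * sum(msgs) / len(msgs)
             for u, msgs in inbox.items()}

print(state)  # all nodes converge to a common consensus value
```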
