no code implementations • 21 Mar 2024 • Ethan N. Evans, Matthew Cook, Zachary P. Bradshaw, Margarite L. LaBorde
The recent explosive growth in the size of state-of-the-art machine learning models highlights a well-known issue: parameter counts, which have reached trillions as in the case of the Generative Pre-trained Transformer (GPT), lead to training time and memory requirements that limit further advancement in the near term.
no code implementations • 12 Nov 2023 • Bijan Mazaheri, Siddharth Jain, Matthew Cook, Jehoshua Bruck
We explore what we call "omitted label contexts," in which training data is limited to a subset of the possible labels.
1 code implementation • 31 Mar 2023 • Muhammad S. Battikh, Artem Lensky, Dillon Hammill, Matthew Cook
In this paper, we present a method based on a residual neural network for point set registration that preserves the topological structure of the target point set.
no code implementations • 31 May 2022 • Alexander Nedergaard, Matthew Cook
We introduce an artificial curiosity algorithm based on lower bounding an approximation to the entropy of the state visitation distribution.
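A hedged sketch of the general idea (not the paper's exact algorithm): a simple particle-based proxy for state-visitation entropy. Rewarding the log-distance from a new state to its nearest previously visited state encourages novelty, and such nearest-neighbor distances underlie standard k-NN entropy estimators. The function name and reward form below are illustrative assumptions.

```python
import numpy as np

def curiosity_reward(state, visited, eps=1e-8):
    """Illustrative intrinsic reward: log distance from `state` to the
    nearest previously visited state. Larger distances (more novel
    states) yield larger rewards, mirroring a k-NN entropy proxy."""
    if not visited:
        return 0.0
    dists = [np.linalg.norm(state - v) for v in visited]
    return float(np.log(min(dists) + eps))

# A distant (novel) state earns a higher intrinsic reward than a
# state close to one already visited.
visited = [np.zeros(2)]
r_near = curiosity_reward(np.array([0.1, 0.0]), visited)
r_far = curiosity_reward(np.array([5.0, 0.0]), visited)
assert r_far > r_near
```

In practice such a reward is added to the environment's extrinsic reward during policy optimization; the paper's contribution is a principled lower bound on the entropy rather than this naive nearest-neighbor form.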
1 code implementation • 17 Sep 2020 • Nils Eckstein, Julia Buhmann, Matthew Cook, Jan Funke
We present a method for microtubule tracking in electron microscopy volumes.
no code implementations • 2 Jul 2020 • Matthew Cook, Alina Zare, Paul Gader
Specifically, many systems lack the ability to identify when outliers (e.g., samples that are distinct from and not represented in the training data distribution) are being presented to the system.
no code implementations • 3 Feb 2020 • Thanuja D. Ambegoda, Matthew Cook
Drawing inspiration from these human strategies, we formulate the segmentation task as an edge labeling problem on a graph with local topological constraints.
1 code implementation • 1 Feb 2020 • Thanuja D. Ambegoda, Julien N. P. Martel, Jozef Adamcik, Matthew Cook, Richard H. R. Hahnloser
Serial section electron microscopy (ssEM) is a widely used technique for obtaining volumetric information of biological tissues at nanometer scale.
no code implementations • 14 Oct 2019 • Jigar Doshi, Dominic Garcia, Cliff Massey, Pablo Llueca, Nicolas Borensztein, Michael Baird, Matthew Cook, Devaki Raj
In this paper, we share our approach to real-time segmentation of fire perimeter from aerial full-motion infrared video.
no code implementations • 1 Apr 2019 • Joshua Peeples, Matthew Cook, Daniel Suen, Alina Zare, James Keller
In this paper, we compare the segmentation performance of a semi-supervised approach using PFLICM and a supervised method using Possibilistic K-NN.
no code implementations • 21 Jun 2018 • Julia Buhmann, Renate Krause, Rodrigo Ceballos Lentini, Nils Eckstein, Matthew Cook, Srinivas Turaga, Jan Funke
High-throughput electron microscopy allows recording of large stacks of neural tissue with sufficient resolution to extract the wiring diagram of the underlying neural network.
no code implementations • 30 Apr 2017 • Andre Luckow, Matthew Cook, Nathan Ashcraft, Edwin Weill, Emil Djerekarov, Bennie Vorster
In this paper, we describe different automotive use cases for deep learning, particularly in the domain of computer vision.
no code implementations • 18 Mar 2017 • Johannes Thiele, Peter Diehl, Matthew Cook
We investigate a recently proposed model for cortical computation which performs relational inference.
no code implementations • 29 Aug 2016 • Peter U. Diehl, Matthew Cook
We are at a loss to explain, simulate, or understand such a multi-functional, homogeneous, sheet-like computational structure: we do not have computational models that work in this way.
no code implementations • 19 Mar 2016 • Brendan Alvey, Alina Zare, Matthew Cook, Dominic K. Ho
The adaptive coherence estimator (ACE) estimates the squared cosine of the angle between a known target vector and a sample vector in a whitened coordinate space.
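The ACE statistic described above can be sketched directly from that definition: the squared cosine of the angle between the target vector and the sample vector after whitening by the background covariance. This is a minimal illustration, assuming a known target `s` and background covariance `sigma` (both names are illustrative).

```python
import numpy as np

def ace(x, s, sigma):
    """Adaptive coherence estimator: squared cosine of the angle
    between target s and sample x in the space whitened by the
    background covariance sigma."""
    sigma_inv = np.linalg.inv(sigma)
    num = (s @ sigma_inv @ x) ** 2
    den = (s @ sigma_inv @ s) * (x @ sigma_inv @ x)
    return num / den

# A sample that is a scalar multiple of the target is perfectly
# aligned with it, so the squared cosine is 1.
s = np.array([1.0, 0.5])
sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
assert np.isclose(ace(2.0 * s, s, sigma), 1.0)
```

Because the statistic is a squared cosine, it is invariant to the scale of both the target and the sample, which is what makes ACE attractive for detection under unknown signal amplitude.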
no code implementations • 19 Mar 2016 • Matthew Cook, Alina Zare, Dominic Ho
The new algorithm, Task Driven Extended Functions of Multiple Instances, can learn a highly discriminative dictionary even from data that lacks precise point-wise labels.
1 code implementation • 8 Mar 2015 • Jan Funke, Francesc Moreno-Noguer, Albert Cardona, Matthew Cook
This measure, which we call Tolerant Edit Distance (TED), is motivated by two observations: (1) Some errors, like small boundary shifts, are tolerable in practice.