no code implementations • 30 Jun 2023 • Yajing Liu, Christina M Cole, Chris Peterson, Michael Kirby
A ReLU neural network leads to a finite polyhedral decomposition of input space and a corresponding finite dual graph.
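The decomposition described above can be sketched in a few lines: within each polyhedron the hidden ReLU on/off pattern is constant, and patterns differing in one bit correspond to adjacent regions (edges of the dual graph). Everything below (the toy one-hidden-layer network, the random weights) is illustrative, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer ReLU network: f(x) = W2 @ relu(W1 @ x + b1) + b2.
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def activation_pattern(x):
    """Bit vector recording which hidden ReLUs fire at x.

    Points with the same pattern lie in the same polyhedron of the
    input-space decomposition; patterns differing in one bit index
    neighboring polyhedra, giving the edges of the dual graph."""
    return tuple((W1 @ x + b1 > 0).astype(int))

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

x = np.array([0.3, -0.2])
# Nearby points generically share a polyhedron, hence a pattern.
assert activation_pattern(x) == activation_pattern(x + 1e-6)
```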
no code implementations • 7 Jun 2023 • Tomojit Ghosh, Michael Kirby
SLCE works by mapping the samples of a class to its class centroid using a linear transformation.
no code implementations • 7 Jun 2023 • Tomojit Ghosh, Michael Kirby, Karim Karimov
In the first step, we solve the linear Centroid-Encoder problem, a convex optimization over a matrix $A$.
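One such convex problem can be sketched as a least-squares fit of a linear map sending each sample toward its class centroid. The data, dimensions, and exact objective below are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: two classes in R^3, 20 points each.
X0 = rng.normal(loc=0.0, size=(20, 3))
X1 = rng.normal(loc=3.0, size=(20, 3))
X = np.vstack([X0, X1])
centroids = np.array([X0.mean(axis=0), X1.mean(axis=0)])

# Target matrix C repeats each sample's class centroid.
C = np.vstack([np.tile(centroids[0], (20, 1)),
               np.tile(centroids[1], (20, 1))])

# Least-squares solve of min_A ||X A - C||_F^2 -- a convex problem
# whose solution maps samples toward their class centroids.
A, *_ = np.linalg.lstsq(X, C, rcond=None)

# The fitted map should not do worse than the identity map A = I.
err_before = np.linalg.norm(X - C)
err_after = np.linalg.norm(X @ A - C)
assert err_after <= err_before
```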
no code implementations • 7 Jun 2023 • Tomojit Ghosh, Michael Kirby
During training, we update the class centroids by taking the Hadamard product of the centroids with the weights of the sparse layer, thereby removing irrelevant features from the targets.
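The centroid update above reduces to an elementwise product: features the sparse layer drives to zero are zeroed in the target centroids as well. The centroids and weights below are made-up values for illustration.

```python
import numpy as np

# Hypothetical class centroids (2 classes, 3 features).
centroids = np.array([[1.0, 2.0, 3.0],
                      [4.0, 5.0, 6.0]])

# Hypothetical learned weights of the sparse layer; the middle
# feature has been driven to zero.
sparse_weights = np.array([1.0, 0.0, 0.5])

# Hadamard product per centroid via broadcasting: the zeroed
# feature disappears from every target centroid.
updated = centroids * sparse_weights
```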
no code implementations • 2 May 2023 • Huma Jamil, Yajing Liu, Turgay Caglar, Christina M. Cole, Nathaniel Blanchard, Christopher Peterson, Michael Kirby
Here, we investigate the potential for ReLU activation patterns (encoded as bit vectors) to aid in understanding and interpreting the behavior of neural networks.
no code implementations • 23 Nov 2022 • Huma Jamil, Yajing Liu, Christina M. Cole, Nathaniel Blanchard, Emily J. King, Michael Kirby, Christopher Peterson
This paper illustrates how one can utilize the dual graph to detect and analyze adversarial attacks in the context of digital images.
1 code implementation • CVPR 2022 • Nathan Mankovich, Emily King, Chris Peterson, Michael Kirby
We provide evidence that the flag median is robust to outliers and can be used effectively in algorithms such as Linde-Buzo-Gray (LBG) to produce improved clusterings on Grassmannians.
no code implementations • 30 Jan 2022 • Tomojit Ghosh, Michael Kirby
The resulting algorithm, Sparse Centroid-Encoder (SCE), extracts discriminatory features in groups using a sparsity inducing $\ell_1$-norm while mapping a point to its class centroid.
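The $\ell_1$ penalty is what produces exact zeros, and hence group-wise feature selection. SCE's actual optimizer is not shown here; the snippet below is only the generic proximal (soft-thresholding) step through which an $\ell_1$ term induces sparsity.

```python
import numpy as np

def soft_threshold(w, lam):
    """Proximal operator of the l1 norm: shrinks weights toward zero
    and sets small ones exactly to zero -- the mechanism by which a
    sparsity-inducing l1 penalty selects features."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

w = np.array([0.8, -0.05, 0.3, -0.9])
sparse_w = soft_threshold(w, 0.1)  # the 0.05 entry is zeroed out
```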
no code implementations • 29 Sep 2021 • Tomojit Ghosh, Michael Kirby
We develop a sparse optimization problem for the determination of the total set of features that discriminate two or more classes.
no code implementations • 30 Nov 2020 • Ben Sattelberg, Renzo Cavalieri, Michael Kirby, Chris Peterson, Ross Beveridge
The weights in the neural network determine a decomposition of the input space into convex polytopes and on each of these polytopes the network can be described by a single affine mapping.
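On any one polytope the ReLU mask is constant, so the network collapses to a single affine map that can be read off the weights. The toy network below (random weights, one hidden layer) is an illustrative sketch of that collapse, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(6, 3)), rng.normal(size=6)
W2, b2 = rng.normal(size=(2, 6)), rng.normal(size=2)

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def local_affine(x):
    """Affine map describing the network on the polytope containing x.

    With the ReLU mask s fixed, f(z) = A z + c where
    A = W2 diag(s) W1 and c = W2 diag(s) b1 + b2."""
    s = (W1 @ x + b1 > 0).astype(float)
    A = W2 @ (s[:, None] * W1)
    c = W2 @ (s * b1) + b2
    return A, c

x = rng.normal(size=3)
A, c = local_affine(x)
# At x itself the affine map reproduces the network exactly.
assert np.allclose(forward(x), A @ x + c)
```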
no code implementations • 24 Jun 2020 • Xiaofeng Ma, Michael Kirby, Chris Peterson
Subspace methods, utilizing Grassmann manifolds, have been a great aid in dealing with such variability.
no code implementations • 27 Feb 2020 • Tomojit Ghosh, Michael Kirby
The Centroid-Encoder (CE) method is similar to the autoencoder but incorporates label information to keep objects of a class close together in the reduced visualization space.
no code implementations • 27 Jun 2019 • Henry Kvinge, Elin Farnell, Julia R. Dupuis, Michael Kirby, Chris Peterson, Elizabeth C. Schundler
In this paper we explore a phenomenon in which bandwise CS sampling of a hyperspectral data cube followed by reconstruction can actually result in amplification of chemical signals contained in the cube.
no code implementations • 20 Jun 2019 • Elin Farnell, Henry Kvinge, John P. Dixon, Julia R. Dupuis, Michael Kirby, Chris Peterson, Elizabeth C. Schundler, Christian W. Smith
We propose a method for defining an order for a sampling basis that is optimal with respect to capturing variance in data, thus allowing for meaningful sensing at any desired level of compression.
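A standard way to order an orthonormal sampling basis by captured variance is via the SVD of the centered data, so truncating to the first $k$ directions yields sensing at any desired compression level. This PCA-style sketch (with made-up anisotropic data) illustrates the idea, not necessarily the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: 200 samples, 10 features, decreasing variance.
X = rng.normal(size=(200, 10)) * np.linspace(3.0, 0.1, 10)
Xc = X - X.mean(axis=0)

# The right singular vectors (rows of Vt) form an orthonormal basis
# ordered by the variance each direction captures.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
variances = S**2 / (len(X) - 1)

# Singular values are non-increasing, so the basis order is
# variance-optimal at every truncation level k.
assert np.all(np.diff(variances) <= 1e-12)
```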
no code implementations • 27 Oct 2018 • Henry Kvinge, Elin Farnell, Michael Kirby, Chris Peterson
In this paper, we propose a new statistic that we call the $\kappa$-profile for analysis of large data sets.
no code implementations • 5 Aug 2018 • Henry Kvinge, Elin Farnell, Michael Kirby, Chris Peterson
Intuitively, the SAP algorithm seeks to determine a projection which best preserves the lengths of all secants between points in a data set; by applying the algorithm to find the best projections to vector spaces of various dimensions, one may infer the dimension of the manifold of origination.
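The quantity being optimized can be sketched directly: form the unit secants between all pairs of data points and score a candidate projection by the length of its most-shrunk secant. The brute-force search over random projections below stands in for the SAP optimization itself, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(30, 5))  # hypothetical data in R^5

# Unit secants between all pairs of points.
diffs = X[:, None, :] - X[None, :, :]
iu = np.triu_indices(len(X), k=1)
secants = diffs[iu]
secants /= np.linalg.norm(secants, axis=1, keepdims=True)

def worst_shrinkage(P):
    """Length of the most-shrunk unit secant under the projection P
    (rows orthonormal); secant-preserving methods maximize this."""
    return np.linalg.norm(secants @ P.T, axis=1).min()

# Crude stand-in for the optimization: best of 50 random
# orthonormal projections onto R^2.
best = max(
    (np.linalg.qr(rng.normal(size=(5, 2)))[0].T for _ in range(50)),
    key=worst_shrinkage,
)
score = worst_shrinkage(best)  # in (0, 1]; closer to 1 is better
```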
no code implementations • 10 Jul 2018 • Henry Kvinge, Elin Farnell, Michael Kirby, Chris Peterson
Dimensionality-reduction techniques are a fundamental tool for extracting useful information from high-dimensional data sets.
no code implementations • 3 Jul 2018 • Elin Farnell, Henry Kvinge, Michael Kirby, Chris Peterson
Endmember extraction plays a prominent role in a variety of data analysis problems, as endmembers often correspond to the purest or most representative samples of some feature.
1 code implementation • 18 Dec 2017 • Johannes Albrecht, Antonio Augusto Alves Jr, Guilherme Amadio, Giuseppe Andronico, Nguyen Anh-Ky, Laurent Aphecetche, John Apostolakis, Makoto Asai, Luca Atzori, Marian Babik, Giuseppe Bagliesi, Marilena Bandieramonte, Sunanda Banerjee, Martin Barisits, Lothar A. T. Bauerdick, Stefano Belforte, Douglas Benjamin, Catrin Bernius, Wahid Bhimji, Riccardo Maria Bianchi, Ian Bird, Catherine Biscarat, Jakob Blomer, Kenneth Bloom, Tommaso Boccali, Brian Bockelman, Tomasz Bold, Daniele Bonacorsi, Antonio Boveia, Concezio Bozzi, Marko Bracko, David Britton, Andy Buckley, Predrag Buncic, Paolo Calafiura, Simone Campana, Philippe Canal, Luca Canali, Gianpaolo Carlino, Nuno Castro, Marco Cattaneo, Gianluca Cerminara, Javier Cervantes Villanueva, Philip Chang, John Chapman, Gang Chen, Taylor Childers, Peter Clarke, Marco Clemencic, Eric Cogneras, Jeremy Coles, Ian Collier, David Colling, Gloria Corti, Gabriele Cosmo, Davide Costanzo, Ben Couturier, Kyle Cranmer, Jack Cranshaw, Leonardo Cristella, David Crooks, Sabine Crépé-Renaudin, Robert Currie, Sünje Dallmeier-Tiessen, Kaushik De, Michel De Cian, Albert De Roeck, Antonio Delgado Peris, Frédéric Derue, Alessandro Di Girolamo, Salvatore Di Guida, Gancho Dimitrov, Caterina Doglioni, Andrea Dotti, Dirk Duellmann, Laurent Duflot, Dave Dykstra, Katarzyna Dziedziniewicz-Wojcik, Agnieszka Dziurda, Ulrik Egede, Peter Elmer, Johannes Elmsheuser, V. Daniel Elvira, Giulio Eulisse, Steven Farrell, Torben Ferber, Andrej Filipcic, Ian Fisk, Conor Fitzpatrick, José Flix, Andrea Formica, Alessandra Forti, Giovanni Franzoni, James Frost, Stu Fuess, Frank Gaede, Gerardo Ganis, Robert Gardner, Vincent Garonne, Andreas Gellrich, Krzysztof Genser, Simon George, Frank Geurts, Andrei Gheata, Mihaela Gheata, Francesco Giacomini, Stefano Giagu, Manuel Giffels, Douglas Gingrich, Maria Girone, Vladimir V. Gligorov, Ivan Glushkov, Wesley Gohn, Jose Benito Gonzalez Lopez, Isidro González Caballero, Juan R. González Fernández, Giacomo Govi, Claudio Grandi, Hadrien Grasland, Heather Gray, Lucia Grillo, Wen Guan, Oliver Gutsche, Vardan Gyurjyan, Andrew Hanushevsky, Farah Hariri, Thomas Hartmann, John Harvey, Thomas Hauth, Benedikt Hegner, Beate Heinemann, Lukas Heinrich, Andreas Heiss, José M. Hernández, Michael Hildreth, Mark Hodgkinson, Stefan Hoeche, Burt Holzman, Peter Hristov, Xingtao Huang, Vladimir N. Ivanchenko, Todor Ivanov, Jan Iven, Brij Jashal, Bodhitha Jayatilaka, Roger Jones, Michel Jouvin, Soon Yung Jun, Michael Kagan, Charles William Kalderon, Meghan Kane, Edward Karavakis, Daniel S. Katz, Dorian Kcira, Oliver Keeble, Borut Paul Kersevan, Michael Kirby, Alexei Klimentov, Markus Klute, Ilya Komarov, Dmitri Konstantinov, Patrick Koppenburg, Jim Kowalkowski, Luke Kreczko, Thomas Kuhr, Robert Kutschke, Valentin Kuznetsov, Walter Lampl, Eric Lancon, David Lange, Mario Lassnig, Paul Laycock, Charles Leggett, James Letts, Birgit Lewendel, Teng Li, Guilherme Lima, Jacob Linacre, Tomas Linden, Miron Livny, Giuseppe Lo Presti, Sebastian Lopienski, Peter Love, Adam Lyon, Nicolò Magini, Zachary L. Marshall, Edoardo Martelli, Stewart Martin-Haugh, Pere Mato, Kajari Mazumdar, Thomas McCauley, Josh McFayden, Shawn McKee, Andrew McNab, Rashid Mehdiyev, Helge Meinhard, Dario Menasce, Patricia Mendez Lorenzo, Alaettin Serhan Mete, Michele Michelotto, Jovan Mitrevski, Lorenzo Moneta, Ben Morgan, Richard Mount, Edward Moyse, Sean Murray, Armin Nairz, Mark S. Neubauer, Andrew Norman, Sérgio Novaes, Mihaly Novak, Arantza Oyanguren, Nurcan Ozturk, Andres Pacheco Pages, Michela Paganini, Jerome Pansanel, Vincent R. Pascuzzi, Glenn Patrick, Alex Pearce, Ben Pearson, Kevin Pedro, Gabriel Perdue, Antonio Perez-Calero Yzquierdo, Luca Perrozzi, Troels Petersen, Marko Petric, Andreas Petzold, Jónatan Piedra, Leo Piilonen, Danilo Piparo, Jim Pivarski, Witold Pokorski, Francesco Polci, Karolos Potamianos, Fernanda Psihas, Albert Puig Navarro, Günter Quast, Gerhard Raven, Jürgen Reuter, Alberto Ribon, Lorenzo Rinaldi, Martin Ritter, James Robinson, Eduardo Rodrigues, Stefan Roiser, David Rousseau, Gareth Roy, Grigori Rybkine, Andre Sailer, Tai Sakuma, Renato Santana, Andrea Sartirana, Heidi Schellman, Jaroslava Schovancová, Steven Schramm, Markus Schulz, Andrea Sciabà, Sally Seidel, Sezen Sekmen, Cedric Serfon, Horst Severini, Elizabeth Sexton-Kennedy, Michael Seymour, Davide Sgalaberna, Illya Shapoval, Jamie Shiers, Jing-Ge Shiu, Hannah Short, Gian Piero Siroli, Sam Skipsey, Tim Smith, Scott Snyder, Michael D. Sokoloff, Panagiotis Spentzouris, Hartmut Stadie, Giordon Stark, Gordon Stewart, Graeme A. Stewart, Arturo Sánchez, Alberto Sánchez-Hernández, Anyes Taffard, Umberto Tamponi, Jeff Templon, Giacomo Tenaglia, Vakhtang Tsulaia, Christopher Tunnell, Eric Vaandering, Andrea Valassi, Sofia Vallecorsa, Liviu Valsan, Peter Van Gemmeren, Renaud Vernet, Brett Viren, Jean-Roch Vlimant, Christian Voss, Margaret Votava, Carl Vuosalo, Carlos Vázquez Sierra, Romain Wartel, Gordon T. Watts, Torre Wenaus, Sandro Wenzel, Mike Williams, Frank Winklmeier, Christoph Wissing, Frank Wuerthwein, Benjamin Wynne, Zhang Xiaomei, Wei Yang, Efe Yazgan
Particle physics has an ambitious and broad experimental programme for the coming decades.
Computational Physics • High Energy Physics - Experiment
no code implementations • 7 Jul 2016 • Sofya Chepushtanova, Michael Kirby, Chris Peterson, Lori Ziegelmeier
This realization has motivated the development of new tools such as persistent homology for exploring topological invariants, or features, in large data sets.
4 code implementations • 22 Jul 2015 • Henry Adams, Sofya Chepushtanova, Tegan Emerson, Eric Hanson, Michael Kirby, Francis Motta, Rachel Neville, Chris Peterson, Patrick Shipman, Lori Ziegelmeier
We convert a PD to a finite-dimensional vector representation which we call a persistence image (PI), and prove the stability of this transformation with respect to small perturbations in the inputs.
Ranked #4 on Graph Classification on NEURON-BINARY
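The PI construction can be sketched in a few lines: map each (birth, death) pair to (birth, persistence), then sum weighted Gaussians on a fixed grid. The grid size, bandwidth, and persistence weighting below are one common choice; the published construction allows general resolutions and weight functions.

```python
import numpy as np

def persistence_image(points, res=20, sigma=0.1, bounds=(0.0, 1.0)):
    """Minimal persistence-image sketch.

    Each (birth, death) pair becomes (birth, persistence); the image
    is the sum of persistence-weighted Gaussians evaluated on a
    res x res grid over `bounds` in each axis."""
    lo, hi = bounds
    grid = np.linspace(lo, hi, res)
    gx, gy = np.meshgrid(grid, grid)
    img = np.zeros((res, res))
    for b, d in points:
        p = d - b  # persistence of the feature
        img += p * np.exp(-((gx - b) ** 2 + (gy - p) ** 2)
                          / (2 * sigma ** 2))
    return img

pd = [(0.1, 0.5), (0.2, 0.3)]  # toy persistence diagram
img = persistence_image(pd)    # fixed-size vectorization of pd
```

The point of the vectorization is that `img.ravel()` is a fixed-length feature vector usable with standard machine-learning methods, regardless of how many points the diagram contains.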
no code implementations • 3 Feb 2015 • Sofya Chepushtanova, Michael Kirby
The resulting points on the Grassmannian have representations as orthonormal matrices and as such do not reside in Euclidean space in the usual sense.
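The non-Euclidean geometry referenced above is usually handled through principal angles: the distance between two subspaces is computed from the singular values of the product of their orthonormal bases, not from entrywise differences of the matrices. A minimal sketch, with random subspaces as stand-in data:

```python
import numpy as np

def grassmann_distance(A, B):
    """Geodesic distance between the subspaces spanned by the
    orthonormal columns of A and B: the 2-norm of the principal
    angles, which are arccosines of the singular values of A^T B."""
    s = np.clip(np.linalg.svd(A.T @ B, compute_uv=False), -1.0, 1.0)
    return np.linalg.norm(np.arccos(s))

# Two random 2-dimensional subspaces of R^4, represented (as in the
# abstract above) by orthonormal matrices from QR factorizations.
rng = np.random.default_rng(5)
A = np.linalg.qr(rng.normal(size=(4, 2)))[0]
B = np.linalg.qr(rng.normal(size=(4, 2)))[0]

assert grassmann_distance(A, A) < 1e-6  # a subspace is distance 0 from itself
```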
no code implementations • CVPR 2014 • Tim Marrinan, J. Ross Beveridge, Bruce Draper, Michael Kirby, Chris Peterson
The extrinsic manifold mean, the L2-median, and the flag mean are alternative averages that can be substituted directly for the Karcher mean in many applications.
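A standard formulation of the flag mean computes it from the left singular vectors of the horizontally concatenated orthonormal bases, truncated to the desired rank. The sketch below uses random subspaces as stand-in data and is illustrative rather than a reproduction of the paper's experiments.

```python
import numpy as np

def flag_mean(subspaces, k):
    """Flag mean of a collection of subspaces, each given by an
    orthonormal basis matrix: the first k left singular vectors of
    the concatenated bases, themselves an orthonormal basis."""
    U, _, _ = np.linalg.svd(np.hstack(subspaces), full_matrices=False)
    return U[:, :k]

# Four random 2-dimensional subspaces of R^5.
rng = np.random.default_rng(6)
bases = [np.linalg.qr(rng.normal(size=(5, 2)))[0] for _ in range(4)]

mean = flag_mean(bases, k=2)  # a 2-dimensional "average" subspace
assert np.allclose(mean.T @ mean, np.eye(2))  # orthonormal, like the inputs
```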