no code implementations • 12 May 2022 • Xiao Wang, Aristeidis Tsaris, Debangshu Mukherjee, Mohamed Wahib, Peng Chen, Mark Oxley, Olga Ovchinnikova, Jacob Hinkle
In this paper, we propose a novel image gradient decomposition method that significantly reduces the memory footprint for ptychographic reconstruction by tessellating image gradients and diffraction measurements into tiles.
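The core idea of tessellating a large array into independently processable tiles can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy (the `tessellate`/`assemble` helpers and tile size are hypothetical, not the paper's implementation), showing only how a gradient image splits into tiles and reassembles losslessly:

```python
import numpy as np

def tessellate(arr, tile):
    """Split a 2-D array into non-overlapping (tile x tile) blocks.
    Assumes both dimensions are divisible by the tile size."""
    h, w = arr.shape
    return (arr.reshape(h // tile, tile, w // tile, tile)
               .swapaxes(1, 2)
               .reshape(-1, tile, tile))

def assemble(tiles, shape):
    """Inverse of tessellate: stitch tiles back into the full array."""
    h, w = shape
    tile = tiles.shape[-1]
    return (tiles.reshape(h // tile, w // tile, tile, tile)
                 .swapaxes(1, 2)
                 .reshape(h, w))

grad = np.arange(64, dtype=float).reshape(8, 8)  # stand-in for an image gradient
tiles = tessellate(grad, 4)                      # 4 tiles, each fits in less memory
restored = assemble(tiles, grad.shape)
```

In the paper's setting each tile (and its matching diffraction measurements) would be processed on its own, so peak memory scales with the tile size rather than the full image.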
no code implementations • 22 Mar 2021 • Sergei V. Kalinin, Maxim A. Ziatdinov, Jacob Hinkle, Stephen Jesse, Ayana Ghosh, Kyle P. Kelley, Andrew R. Lupini, Bobby G. Sumpter, Rama K. Vasudevan
Machine learning and artificial intelligence (ML/AI) are rapidly becoming an indispensable part of physics research, with domain applications ranging from theory and materials prediction to high-throughput data analysis.
no code implementations • 19 Dec 2020 • Abhishek K Dubey, Michael T Young, Christopher Stanley, Dalton Lunga, Jacob Hinkle
The ability of these pre-trained DL models to generalize in clinical settings is poor because of the shift in data distribution between publicly available and privately held radiographs.
no code implementations • 31 Jul 2020 • Abhishek K Dubey, Alina Peluso, Jacob Hinkle, Devanshu Agarawal, Zilong Tan
The shallow Convolutional Neural Network (CNN) is a time-tested tool for information extraction from cancer pathology reports.
no code implementations • 6 Mar 2020 • Theodore Papamarkou, Hayley Guy, Bryce Kroencke, Jordan Miller, Preston Robinette, Daniel Schultz, Jacob Hinkle, Laura Pullum, Catherine Schuman, Jeremy Renshaw, Stylianos Chatzidakis
The results demonstrate that such a deep learning approach makes it possible to detect the locus of corrosion from smaller tiles and, at the same time, to infer with high accuracy whether an image comes from a corroded canister.
no code implementations • 3 Jan 2020 • Devanshu Agrawal, Theodore Papamarkou, Jacob Hinkle
There has recently been much work on the "wide limit" of neural networks, where Bayesian neural networks (BNNs) are shown to converge to a Gaussian process (GP) as all hidden layers are sent to infinite width.
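The wide-limit correspondence can be made concrete with the standard NNGP kernel for a one-hidden-layer ReLU network (the arc-cosine kernel of Cho & Saul). This is a generic sketch of that known limit, not the construction studied in the paper; the weight/bias variances `sigma_w`, `sigma_b` are illustrative parameters:

```python
import numpy as np

def relu_nngp_kernel(X, sigma_w=1.0, sigma_b=0.0):
    """GP kernel of an infinitely wide one-hidden-layer ReLU network.
    X: (n, d) array of inputs. Returns the (n, n) covariance matrix."""
    # Covariance of the pre-activations (first layer, infinite width).
    K0 = sigma_b**2 + sigma_w**2 * (X @ X.T) / X.shape[1]
    norms = np.sqrt(np.diag(K0))
    cos_t = np.clip(K0 / np.outer(norms, norms), -1.0, 1.0)
    theta = np.arccos(cos_t)
    # Arc-cosine kernel (degree 1): covariance after the ReLU nonlinearity.
    return (sigma_w**2 / (2 * np.pi)) * np.outer(norms, norms) * (
        np.sin(theta) + (np.pi - theta) * cos_t)

K = relu_nngp_kernel(np.eye(2))  # two orthogonal unit inputs in R^2
```

As all hidden layers widen, a BNN prior over functions converges to a GP with a kernel built by composing such layer maps.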
1 code implementation • 15 Oct 2019 • Theodore Papamarkou, Jacob Hinkle, M. Todd Young, David Womble
Nevertheless, this paper shows that a non-converged Markov chain, generated via MCMC sampling from the parameter space of a neural network, can, via Bayesian marginalization, yield a valuable posterior predictive distribution over the network's output.
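The marginalization step — averaging predictions over parameter samples rather than requiring a converged chain — can be sketched on a toy one-parameter model. Everything here (the model, the prior, the step size) is a hypothetical stand-in for the paper's neural-network setting:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression y = w*x + noise; we sample w with random-walk Metropolis.
x = np.linspace(-1, 1, 20)
y_obs = 2.0 * x + rng.normal(0, 0.1, x.size)

def log_post(w):
    resid = y_obs - w * x
    # Gaussian likelihood (noise sd 0.1) plus a standard-normal prior on w.
    return -0.5 * np.sum(resid**2) / 0.1**2 - 0.5 * w**2

w, samples = 0.0, []
for _ in range(2000):
    prop = w + rng.normal(0, 0.2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(w):
        w = prop
    samples.append(w)

# Bayesian marginalization: average predictions over the (possibly
# non-converged) chain to form the posterior predictive mean.
x_new = np.array([0.5])
pred = np.mean([s * x_new for s in samples[500:]], axis=0)
```

The point mirrored from the paper: the predictive average over many imperfect parameter samples is useful even when the chain itself has not converged in parameter space.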
no code implementations • NeurIPS 2019 • Guannan Zhang, Jiaxin Zhang, Jacob Hinkle
We developed a Nonlinear Level-set Learning (NLL) method for dimensionality reduction in high-dimensional function approximation with small data.
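The linear analogue of this idea — active subspaces, recovered from the covariance of gradient samples — gives a compact illustration of gradient-informed dimensionality reduction. This sketch is the linear special case, not the paper's nonlinear (RevNet-based) method; the test function `f(x) = sin(x0 + 2*x1)` is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(1)

def f_grad(x):
    """Gradient of f(x) = sin(x0 + 2*x1): f varies along a single direction."""
    g = np.zeros_like(x)
    c = np.cos(x[0] + 2 * x[1])
    g[0], g[1] = c, 2 * c
    return g

# Sample gradients over the 5-D input domain.
X = rng.uniform(-1, 1, (200, 5))
G = np.array([f_grad(x) for x in X])

# Eigendecomposition of the gradient covariance; dominant eigenvectors
# span the low-dimensional subspace in which f actually varies.
C = G.T @ G / len(G)
eigval, eigvec = np.linalg.eigh(C)
active_dir = eigvec[:, -1]  # should align with (1, 2, 0, 0, 0)/sqrt(5)
```

The nonlinear method replaces this fixed linear projection with a learned invertible transform, so level sets of the function become (approximately) coordinate slices in the transformed space.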