As model size continues to grow and access to labeled training data remains limited, transfer learning has become a popular approach in many scientific and engineering fields.
Empirical studies suggest that machine learning models trained with empirical risk minimization (ERM) often rely on attributes that may be spuriously correlated with the class labels.
The framework employs a novel self-paced sampling strategy to ensure the accuracy of pseudo labels.
In this work, we propose two novel metrics: the unsupervised mean squared error (MSE) and the unsupervised peak signal-to-noise ratio (PSNR), which are computed using only noisy data.
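The exact estimators are defined in the paper; purely as an illustration of the underlying idea (an assumption on our part, not the authors' construction), if two independent noisy copies y1 = x + n1 and y2 = x + n2 of the same clean image are available, then E‖f(y1) − y2‖² = E‖f(y1) − x‖² + σ²·d, so subtracting the known noise power from the noisy-reference error gives an unbiased MSE estimate without ever touching the clean image:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000          # number of pixels
sigma = 0.5         # known noise standard deviation
x = rng.normal(size=d)                 # clean image (used only for the check below)
y1 = x + sigma * rng.normal(size=d)    # noisy observation fed to the denoiser
y2 = x + sigma * rng.normal(size=d)    # second, independent noisy observation

def denoise(y):
    # stand-in denoiser (hypothetical); any estimator computed from y1 alone works
    return 0.8 * y

est = denoise(y1)
true_mse = np.mean((est - x) ** 2)                # requires the clean image
unsup_mse = np.mean((est - y2) ** 2) - sigma**2   # requires only noisy data
```

An unsupervised PSNR then follows by plugging the MSE estimate into the usual PSNR formula.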
The resulting representations and clusters from self-supervision are used as features of a survival model for recurrence prediction at the patient level.
1 code implementation • 21 Dec 2021 • Avinash Parnandi, Aakash Kaku, Anita Venkatesan, Natasha Pandit, Audre Wirtanen, Haresh Rajamohan, Kannan Venkataramanan, Dawn Nilsen, Carlos Fernandez-Granda, Heidi Schambra
Here, we present PrimSeq, a pipeline to classify and count the functional motions used in stroke rehabilitation.
no code implementations • 21 Nov 2021 • Sheng Liu, Aakash Kaku, Weicheng Zhu, Matan Leibovich, Sreyas Mohan, Boyang Yu, Haoxiang Huang, Laure Zanna, Narges Razavian, Jonathan Niles-Weed, Carlos Fernandez-Granda
Reliable probability estimation is of crucial importance in many real-world applications where there is inherent (aleatoric) uncertainty.
1 code implementation • 3 Nov 2021 • Aakash Kaku, Kangning Liu, Avinash Parnandi, Haresh Rengaraj Rajamohan, Kannan Venkataramanan, Anita Venkatesan, Audre Wirtanen, Natasha Pandit, Heidi Schambra, Carlos Fernandez-Granda
To address this, we propose a novel approach for high-resolution action identification, inspired by speech-recognition techniques, which is based on a sequence-to-sequence model that directly predicts the sequence of actions.
We discover a phenomenon that has been previously reported in the context of classification: the networks tend to first fit the clean pixel-level labels during an "early-learning" phase, before eventually memorizing the false annotations.
We find, however, that in heterogeneous parameter spaces, i.e., spaces in which the variance of the estimated parameters varies considerably, good performance is hard to achieve and requires arduous tweaking of the loss function, hyperparameters, and the distribution of the training data in parameter space.
Deep convolutional neural networks (CNNs) for image denoising are usually trained on large datasets.
In cancer diagnosis, interpretability can be achieved by localizing the region of the input image responsible for the output, i.e., the location of a lesion.
Furthermore, we show that our ConvNorm can reduce the layerwise spectral norm of the weight matrices and hence improve the Lipschitzness of the network, leading to easier training and improved robustness for deep ConvNets.
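The layerwise spectral norm mentioned here is the largest singular value of each weight matrix; as a generic reference point (power iteration on a dense matrix, not the ConvNorm procedure itself), it can be estimated without a full SVD:

```python
import numpy as np

def spectral_norm(W, n_iters=500, seed=0):
    """Largest singular value of W, estimated by power iteration on W.T @ W."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=W.shape[1])
    for _ in range(n_iters):
        v = W.T @ (W @ v)          # one step of power iteration
        v /= np.linalg.norm(v)     # renormalize to avoid overflow
    return np.linalg.norm(W @ v)   # ||W v|| at the top right singular vector

W = np.random.default_rng(1).normal(size=(64, 128))  # stand-in weight matrix
sigma_max = spectral_norm(W)
```

The product of such layerwise norms upper-bounds the Lipschitz constant of the network, which is why controlling them aids robustness.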
This shows that the network exploits global and local information in the noisy measurements, for example, by adapting its filtering approach when it encounters atomic-level defects at the nanoparticle surface.
Denoising · Materials Science · Image and Video Processing
This is advantageous because motion compensation is computationally expensive, and can be unreliable when the input data are noisy.
Ranked #5 on Video Denoising on Set8 sigma50
SBD outperforms existing techniques by a wide margin on a simulated benchmark dataset, as well as on real data.
1 code implementation • 4 Aug 2020 • Farah E. Shamout, Yiqiu Shen, Nan Wu, Aakash Kaku, Jungkyu Park, Taro Makino, Stanisław Jastrzębski, Duo Wang, Ben Zhang, Siddhant Dogra, Meng Cao, Narges Razavian, David Kudlowitz, Lea Azour, William Moore, Yvonne W. Lui, Yindalon Aphinyanaphongs, Carlos Fernandez-Granda, Krzysztof J. Geras
In order to verify performance in a real clinical setting, we silently deployed a preliminary version of the deep neural network at New York University Langone Health during the first wave of the pandemic, where it produced accurate predictions in real time.
In contrast with existing approaches, which use the model output during early learning to detect the examples with clean labels, and either ignore or attempt to correct the false labels, we take a different route and instead capitalize on early learning via regularization.
Ranked #4 on Learning with noisy labels on CIFAR-10N-Random2
Thus, using a combination of IMU-based motion capture and deep learning, we were able to identify primitives automatically.
Extraneous variables are variables that are irrelevant for a certain task, but heavily affect the distribution of the available data.
Early detection is a crucial goal in the study of Alzheimer's Disease (AD).
Here, however, we show that bias terms used in most CNNs (additive constants, including those used for batch normalization) interfere with the interpretability of these networks, do not help performance, and in fact prevent generalization of performance to noise levels not included in the training data.
In contrast, a bias-free architecture -- obtained by removing the constant terms in every layer of the network, including those used for batch normalization -- generalizes robustly across noise levels, while preserving state-of-the-art performance within the training range.
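One consequence of removing every additive constant is scale invariance: convolutions without bias and ReLU are both positively homogeneous, so f(αy) = αf(y) for any α > 0. A toy fully connected sketch (an illustration of the property, not the paper's denoising architecture) makes this easy to verify:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(32, 16))   # weight matrices only -- no bias vectors anywhere
W2 = rng.normal(size=(16, 32))

def bias_free_net(y):
    # linear -> ReLU -> linear; every operation is positively homogeneous
    return W2 @ np.maximum(W1 @ y, 0.0)

y = rng.normal(size=16)
alpha = 3.7                       # any positive rescaling of the input
scaled_out = bias_free_net(alpha * y)
```

A network with bias terms would break this identity, since the constants do not rescale with the input.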
Frequency estimation is a fundamental problem in signal processing, with applications in radar imaging, underwater acoustics, seismic imaging, and spectroscopy.
In this work, we consider separable inverse problems, where the data are modeled as a linear combination of functions that depend nonlinearly on certain parameters of interest.
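In a separable model the data take the form y = A(θ)c: linear in the coefficients c but nonlinear in the parameters θ. A standard way to exploit this structure (a generic variable-projection sketch with a hypothetical exponential model, not the paper's algorithm) is to solve for c in closed form at each candidate θ and search only over the nonlinear parameters:

```python
import numpy as np

def A(theta, t):
    # hypothetical nonlinear model: columns are decaying exponentials exp(-theta_k * t)
    return np.exp(-np.outer(t, theta))

t = np.linspace(0.0, 5.0, 200)
true_theta = np.array([0.5, 2.0])
y = A(true_theta, t) @ np.array([1.0, -0.7])   # data = A(theta) @ c

def residual(theta):
    M = A(theta, t)
    c, *_ = np.linalg.lstsq(M, y, rcond=None)  # linear coefficients in closed form
    return np.linalg.norm(y - M @ c)

# search only over the nonlinear parameters; a coarse grid for simplicity
grid = np.linspace(0.1, 3.0, 30)
best_res, best_theta = min(
    (residual(np.array([a, b])), (a, b)) for a in grid for b in grid if a < b
)
```

Eliminating c this way shrinks the search space from the full joint parameter space to the nonlinear parameters alone.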
We apply our methodology to detect anomalous individuals, to cluster the cohort into groups with different sleeping tendencies, and to obtain improved predictions of future sleep behavior.
We propose a learning-based approach for estimating the spectrum of a multisinusoidal signal from a finite number of samples.
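The proposed approach is learning-based; purely as the classical point of comparison (not the method itself), frequencies of a multisinusoidal signal are often estimated by locating peaks of the periodogram, i.e., the squared FFT magnitude of the finite sample window:

```python
import numpy as np

n = 256
f_true = 48 / n                          # true frequency, in cycles per sample
t = np.arange(n)
rng = np.random.default_rng(0)
samples = np.cos(2 * np.pi * f_true * t) + 0.1 * rng.normal(size=n)

# periodogram over [0, 0.5]: squared magnitude of the real FFT
spectrum = np.abs(np.fft.rfft(samples)) ** 2
freqs = np.fft.rfftfreq(n)
f_hat = freqs[np.argmax(spectrum)]       # pick the dominant peak
```

The periodogram's resolution is limited to roughly 1/n cycles per sample, which is one motivation for super-resolution and learning-based alternatives.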
Magnetic resonance fingerprinting (MRF) is a technique for quantitative estimation of spin-relaxation parameters from magnetic-resonance data.
Medical Physics · Numerical Analysis · Optimization and Control