no code implementations • 11 Dec 2023 • Roozbeh Yousefzadeh
We define the ambiguity based on the geometric arrangement of the decision boundaries and the convex hull of the training set in the feature space learned by the trained model, and demonstrate that a single ambiguity measure may detect a considerable portion of a model's mistakes on in-distribution samples, adversarial inputs, and out-of-distribution inputs.
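The convex-hull component of this measure can be sketched as a small feasibility problem: a point x lies in the convex hull of a training set X exactly when there are coefficients λ ≥ 0 that sum to 1 with Xᵀλ = x. A minimal sketch using a linear program (the function name and interface are ours, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(point, points):
    """Check whether `point` lies in the convex hull of the rows of
    `points` by solving a feasibility LP: find lambda >= 0 with
    sum(lambda) == 1 and points.T @ lambda == point."""
    n = points.shape[0]
    c = np.zeros(n)                               # feasibility only, no objective
    A_eq = np.vstack([points.T, np.ones((1, n))]) # combination + sum-to-one rows
    b_eq = np.append(point, 1.0)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    return res.success

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
in_convex_hull(np.array([0.5, 0.5]), pts)   # inside the unit square
in_convex_hull(np.array([2.0, 2.0]), pts)   # outside the unit square
```

In practice this test would be run in the model's learned feature space, on the features of the training set, rather than on raw inputs.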
no code implementations • 12 Nov 2023 • Roozbeh Yousefzadeh, Xuenan Cao
We also see that GPT-4's ability to prove mathematical theorems is continuously expanding over time despite the claim that it is a fixed model.
no code implementations • 23 Mar 2022 • Roozbeh Yousefzadeh, Xuenan Cao
The right to AI explainability has consolidated into a consensus in the research community and in policy-making.
no code implementations • 20 Mar 2022 • Roozbeh Yousefzadeh
The partitions are defined by the decision boundaries, and so is the classification model/function.
no code implementations • 19 Mar 2022 • Roozbeh Yousefzadeh
We show that interpolation is not adequate to understand the generalization of deep networks and that we should broaden our perspective.
no code implementations • 5 Feb 2022 • Roozbeh Yousefzadeh
We study the partitioning of the domain in feature space, identify regions guaranteed to have certain classifications, and investigate its implications for the pixel space.
no code implementations • 27 Jan 2022 • Roozbeh Yousefzadeh, Xuenan Cao
Given a model trained to recommend clinical procedures for patients, can we trust the recommendation when the model considers a patient older or younger than all the samples in the training set?
no code implementations • 22 Dec 2021 • Roozbeh Yousefzadeh
Medical image datasets can have a large number of images representing patients with different health conditions and varying disease severity.
no code implementations • 13 Dec 2021 • Roozbeh Yousefzadeh
We define the homotopy path as a subspace rotation based on the orthogonal Procrustes problem, and then we discretize the homotopy path using eigenvalue decomposition of the rotation matrix.
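The two steps named here have standard closed forms: the orthogonal Procrustes problem is solved by an SVD, and fractional powers of the resulting rotation (which trace out the homotopy path) follow from its eigendecomposition. A hedged sketch of both, under our own naming (the paper's exact discretization may differ):

```python
import numpy as np

def procrustes_rotation(A, B):
    """Orthogonal Procrustes: the orthogonal R minimizing ||A @ R - B||_F
    is U @ Vt, where U, _, Vt = svd(A.T @ B)."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

def rotation_path(R, n_steps):
    """Discretize the homotopy t -> R**t for t in [0, 1] using the
    eigendecomposition of the rotation matrix R (complex eigenvalues);
    each step is real up to round-off for a proper rotation."""
    w, V = np.linalg.eig(R)
    Vinv = np.linalg.inv(V)
    ts = np.linspace(0.0, 1.0, n_steps)
    return [np.real(V @ np.diag(w ** t) @ Vinv) for t in ts]

A = np.eye(2)
B = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation by 90 degrees
R = procrustes_rotation(A, B)
path = rotation_path(R, 5)                 # I, 22.5, 45, 67.5, 90 degrees
```

Here the half-way matrix `path[2]` squared recovers the full rotation, which is the property that makes the discretized path a faithful interpolation between the two subspaces.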
no code implementations • 6 Dec 2021 • Roozbeh Yousefzadeh, Jessica A. Mollick
In our framework, we use the term extrapolation specifically to mean extrapolating outside the convex hull of the training set (in the pixel space or the feature space) while remaining within the scope defined by the training data, the same way extrapolation is defined in many studies in cognitive science.
no code implementations • 1 Mar 2021 • Roozbeh Yousefzadeh
We explain that those mixed images will be samples on the decision boundaries of the trained model; although such methods successfully hide the contents of images from the entity in charge of federated learning, they provide that entity with crucial information about the decision boundaries of the trained model.
no code implementations • 21 Feb 2021 • Roozbeh Yousefzadeh
However, solving the problem using standard optimization algorithms can be very expensive for large datasets.
no code implementations • NeurIPS Workshop DL-IG 2020 • Roozbeh Yousefzadeh
We study the generalization of deep learning models in relation to the convex hull of their training sets.
1 code implementation • 17 Jun 2020 • Roozbeh Yousefzadeh, Furong Huang
We show that each image can be written as the summation of a finite number of rank-1 patterns in the wavelet space, providing a low rank approximation that captures the structures and patterns essential for learning.
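The "summation of a finite number of rank-1 patterns" is exactly what a truncated SVD provides: A ≈ Σᵢ σᵢ uᵢvᵢᵀ, with the first k terms giving the best rank-k approximation. A minimal sketch of this decomposition applied directly to a matrix (the paper performs it in the wavelet space, i.e., on wavelet coefficients rather than raw pixels; the function name is ours):

```python
import numpy as np

def rank1_patterns(A, k):
    """Decompose matrix A into k rank-1 patterns s_i * outer(u_i, v_i)
    via the SVD, so that their sum is the best rank-k approximation
    of A in the Frobenius norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return [s[i] * np.outer(U[:, i], Vt[i]) for i in range(k)]

# A rank-2 matrix is recovered exactly from its first two patterns.
A = np.outer([1.0, 2.0, 3.0], [1.0, 1.0, 1.0]) + np.outer([0.0, 1.0, 0.0], [1.0, 2.0, 3.0])
approx = sum(rank1_patterns(A, 2))
```

For a wavelet-space version, one would first transform the image (e.g., with a 2-D discrete wavelet transform), decompose the coefficient matrix, and invert the transform.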
1 code implementation • 24 Feb 2020 • Roozbeh Yousefzadeh
We show that such analysis can provide valuable insights about the datasets and the classification task at hand, prior to training a model.
1 code implementation • 3 Jan 2020 • Roozbeh Yousefzadeh, Dianne P. O'Leary
Here, we use flip points to accomplish these goals for deep learning models with continuous output scores (e.g., computed by softmax), used in social applications.
no code implementations • 7 Aug 2019 • Roozbeh Yousefzadeh, Dianne P. O'Leary
Through numerical results, we confirm that some of the speculations about the decision boundaries are accurate, some of the computational methods can be improved, and some of the simplifying assumptions may be unreliable for models with nonlinear activation functions.
no code implementations • 6 Aug 2019 • Roozbeh Yousefzadeh, Dianne P. O'Leary
Here, we propose a practical method that employs matrix conditioning to automatically design the structure of layers of a feed-forward network, by first adjusting the proportion of neurons among the layers of a network and then scaling the size of network up or down.
no code implementations • 21 Mar 2019 • Roozbeh Yousefzadeh, Dianne P. O'Leary
We show that distance between an input and the closest flip point identifies the most influential points in the training data.
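A flip point is a point on the decision boundary where the model's classification flips. The paper computes the *closest* flip point by solving an optimization problem; as a rough illustration only, one can locate *a* flip point by bisecting along the segment between two differently classified inputs (the function name and the scalar-score interface are our assumptions):

```python
import numpy as np

def flip_point_on_segment(f, x, x_other, tol=1e-8):
    """Bisect along the segment from x to x_other, assumed to be
    classified differently by the scalar scoring function f, to locate
    a point where the sign of f flips -- i.e., a point on the decision
    boundary. This finds a flip point along one direction only, not
    the closest one."""
    lo, hi = 0.0, 1.0
    s = np.sign(f(x))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.sign(f(x + mid * (x_other - x))) == s:
            lo = mid        # still on x's side of the boundary
        else:
            hi = mid        # crossed the boundary
    t = 0.5 * (lo + hi)
    return x + t * (x_other - x)

# Toy model: boundary at x0 = 0.5; the flip point on the segment is (0.5, 0).
f = lambda p: p[0] - 0.5
fp = flip_point_on_segment(f, np.array([0.0, 0.0]), np.array([1.0, 0.0]))
```

The distance ||x - fp|| then serves as the (direction-restricted) analogue of the distance-to-closest-flip-point quantity used in the paper.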