no code implementations • 14 Mar 2024 • Sepideh Neshatfar, Salimeh Yasaei Sekeh
However, their vulnerability to adversarial attacks, particularly through susceptible nodes, poses a challenge in decision-making.
1 code implementation • 13 Mar 2024 • Soheil Gharatappeh, Sepideh Neshatfar, Salimeh Yasaei Sekeh, Vikas Dhiman
In this paper, we present a novel fog-aware object detection network called FogGuard, designed to address the challenges posed by foggy weather conditions.
no code implementations • 29 Feb 2024 • Mahsa Mozafari-Nia, Salimeh Yasaei Sekeh
Despite the impressive performance of deep neural networks (DNNs), their computational complexity and storage space consumption have led to the concept of network compression.
no code implementations • 7 Jul 2023 • Jovon Craig, Josh Andle, Theodore S. Nowak, Salimeh Yasaei Sekeh
To better understand these attacks and facilitate more efficient adversarial training, in this paper we develop a novel theoretical framework that investigates how the adversarial robustness of a subnetwork contributes to the robustness of the entire network.
no code implementations • 11 May 2023 • Sepideh Neshatfar, Abram Magner, Salimeh Yasaei Sekeh
To gain a theoretical perspective on the supervised summarization problem itself, we first formulate it in terms of maximizing the Shannon mutual information between the summarized graph and the class label.
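The formulation sketched above can be written compactly. Assuming a summarizer $g$ drawn from some constrained family $\mathcal{G}$ that maps a graph $G$ to a summary $g(G)$ (the symbols $g$, $\mathcal{G}$, and $S$ here are illustrative notation, not taken from the paper):

```latex
% Supervised graph summarization as mutual-information maximization:
% pick the summarizer whose output retains the most information about
% the class label Y.
\max_{g \in \mathcal{G}} \; I\big(g(G);\, Y\big),
\qquad
I(S; Y) \;=\; \sum_{s,\, y} p(s, y)\, \log \frac{p(s, y)}{p(s)\, p(y)}
```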
no code implementations • 28 Oct 2022 • Nicholas Soucy, Salimeh Yasaei Sekeh
Semantic segmentation models classifying hyperspectral images (HSI) are vulnerable to adversarial examples.
no code implementations • 26 Apr 2022 • Josh Andle, Salimeh Yasaei Sekeh
Despite the many previous solutions for bypassing catastrophic forgetting (CF) of previously seen tasks during the learning process, most still suffer from significant forgetting, expensive memory costs, or a lack of theoretical understanding of how neural networks behave while learning new tasks.

no code implementations • 14 Apr 2022 • Madan Ravi Ganesh, Salimeh Yasaei Sekeh, Jason J. Corso
Raw deep neural network (DNN) performance is not enough; in real-world settings, computational load, training efficiency, and adversarial security are just as important, if not more so.
no code implementations • 22 Mar 2022 • Joshua Andle, Nicholas Soucy, Simon Socolow, Salimeh Yasaei Sekeh
Our contributions include outlining key characteristics of the SDD; employing an information-theoretic measure and a custom metric to clearly visualize those characteristics; implementing the PECNet and Y-Net trajectory prediction models to demonstrate the impact of those characteristics on predictive performance; and, lastly, comparing the SDD with the Intersection Drone (inD) dataset.
no code implementations • 9 Mar 2022 • Nicholas Soucy, Salimeh Yasaei Sekeh
These approaches use patching to incorporate the rich neighborhood information in images and exploit the simplicity and segmentability of the most common HSI datasets.
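The patching step described above is the standard preprocessing that lets a per-pixel classifier see local spatial context. A minimal sketch, assuming a hyperspectral cube stored as an `(H, W, bands)` array and mirror-padded borders (the function name and patch size are illustrative, not from the paper):

```python
import numpy as np

def extract_patches(hsi, patch_size=5):
    """Extract a (patch_size x patch_size) spatial neighborhood around
    every pixel of a hyperspectral cube of shape (H, W, bands).
    Borders are handled by mirror padding."""
    pad = patch_size // 2
    padded = np.pad(hsi, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    h, w, b = hsi.shape
    patches = np.empty((h * w, patch_size, patch_size, b), dtype=hsi.dtype)
    for i in range(h):
        for j in range(w):
            # Window centered on pixel (i, j) of the original cube.
            patches[i * w + j] = padded[i:i + patch_size, j:j + patch_size, :]
    return patches

cube = np.random.rand(8, 8, 16)       # toy 8x8 scene with 16 spectral bands
patches = extract_patches(cube, patch_size=5)
print(patches.shape)                  # (64, 5, 5, 16): one patch per pixel
```

Each patch's center pixel is the pixel being classified, so the classifier's input carries the neighborhood information the entry refers to.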
no code implementations • 22 Jun 2020 • Madan Ravi Ganesh, Dawsin Blanchard, Jason J. Corso, Salimeh Yasaei Sekeh
Finally, we define a novel sensitivity criterion for filters that measures the strength of their contributions to the succeeding layer and highlights critical filters that need to be completely protected from pruning.
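A sensitivity criterion of this flavor can be sketched as follows. The weight-norm proxy and the `keep_fraction` threshold below are illustrative assumptions, not the paper's exact definition; the idea is only that each filter is scored by how strongly the succeeding layer consumes its output, and the top scorers are protected from pruning:

```python
import numpy as np

def filter_sensitivity(next_layer_weights):
    """Hypothetical proxy for a filter's contribution to the succeeding
    layer: the L2 norm of the next layer's weights that read each input
    channel. Shape of next_layer_weights: (out_ch, in_ch, kH, kW);
    returns one score per in_ch (i.e., per filter of the current layer)."""
    # Sum of squares over every axis except the input-channel axis.
    return np.sqrt((next_layer_weights ** 2).sum(axis=(0, 2, 3)))

def critical_filters(scores, keep_fraction=0.25):
    """Mark the top keep_fraction of filters as protected from pruning."""
    k = max(1, int(len(scores) * keep_fraction))
    return {int(i) for i in np.argsort(scores)[-k:]}

rng = np.random.default_rng(0)
w_next = rng.normal(size=(16, 8, 3, 3))   # 16 output filters, 8 input channels
scores = filter_sensitivity(w_next)
protected = critical_filters(scores, keep_fraction=0.25)
```

Here `protected` holds the indices of the two filters (25% of 8) whose downstream weight mass is largest.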
no code implementations • 16 Jun 2020 • Stewart W Doe, Tyler Russell Seekins, David Fitzpatrick, Dawsin Blanchard, Salimeh Yasaei Sekeh
[l2020] to national and county-level COVID-19 data.
no code implementations • 18 Mar 2020 • Madan Ravi Ganesh, Jason J. Corso, Salimeh Yasaei Sekeh
Most approaches to deep neural network compression via pruning either evaluate a filter's importance using its weights or optimize an alternative objective function with sparsity constraints.
no code implementations • 2 Oct 2019 • Salimeh Yasaei Sekeh, Madan Ravi Ganesh, Shurjo Banerjee, Jason J. Corso, Alfred O. Hero
In this work, we first argue that OSFS's main assumption, that data from all samples is available at runtime, is unrealistic, and we introduce a new setting in which features and samples are streamed concurrently, called OSFS with Streaming Samples (OSFS-SS).
no code implementations • 21 May 2019 • Salimeh Yasaei Sekeh, Alfred O. Hero
This paper proposes a geometric estimator of dependency between a pair of multivariate samples.
no code implementations • 10 Feb 2019 • Salimeh Yasaei Sekeh, Alfred O. Hero
Feature selection and reducing the dimensionality of data is an essential step in data analysis.
no code implementations • 15 Nov 2018 • Salimeh Yasaei Sekeh, Brandon Oselio, Alfred O. Hero
Providing a tight bound on the BER that is also feasible to estimate has been a challenge.
no code implementations • 1 Oct 2018 • Salimeh Yasaei Sekeh, Morteza Noshad, Kevin R. Moon, Alfred O. Hero
We derive a bound on the convergence rate for the Friedman-Rafsky (FR) estimator of the HP-divergence, which is related to a multivariate runs statistic for testing between two distributions.
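The Friedman-Rafsky construction the entry refers to can be sketched compactly: build the Euclidean minimum spanning tree of the pooled sample and count edges joining a point of one sample to a point of the other; few cross edges indicate well-separated distributions. This is a sketch of the classical FR statistic, not the authors' code, and the normalization below follows the standard form of the estimator:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def fr_hp_divergence(X, Y):
    """Friedman-Rafsky estimate of the Henze-Penrose divergence:
    1 - C * (m + n) / (2 m n), where C is the number of MST edges
    that connect an X point to a Y point in the pooled sample."""
    m, n = len(X), len(Y)
    pooled = np.vstack([X, Y])
    labels = np.r_[np.zeros(m), np.ones(n)]
    mst = minimum_spanning_tree(squareform(pdist(pooled))).tocoo()
    cross = int(np.sum(labels[mst.row] != labels[mst.col]))
    return 1.0 - cross * (m + n) / (2.0 * m * n)

rng = np.random.default_rng(1)
same = fr_hp_divergence(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
far = fr_hp_divergence(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)) + 6)
# `same` lands near 0 (identical densities), `far` near 1 (well separated)
```

The multivariate runs statistic mentioned in the entry is exactly the cross-edge count `cross`, which drives the estimator's convergence analysis.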
no code implementations • 17 Feb 2017 • Morteza Noshad, Kevin R. Moon, Salimeh Yasaei Sekeh, Alfred O. Hero III
Considering the $k$-nearest neighbor ($k$-NN) graph of $Y$ in the joint data set $(X, Y)$, we show that the average powered ratio of the number of $X$ points to the number of $Y$ points among all $k$-NN points is proportional to the Rényi divergence of the $X$ and $Y$ densities.
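The statistic described above can be sketched as follows. This is only the raw average powered ratio; the constants, bias corrections, and the exact proportionality to the Rényi divergence established in the paper are omitted, and the function name and parameter defaults are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def powered_ratio_statistic(X, Y, k=10, alpha=0.5):
    """Average powered ratio of X-neighbors to Y-neighbors among the
    k nearest neighbors of each Y point in the pooled sample (X, Y)."""
    pooled = np.vstack([X, Y])
    labels = np.r_[np.zeros(len(X)), np.ones(len(Y))]
    tree = cKDTree(pooled)
    # Query k+1 neighbors and drop the first column: each Y point is
    # its own nearest neighbor in the pooled tree.
    _, idx = tree.query(Y, k=k + 1)
    neighbor_labels = labels[idx[:, 1:]]
    n_x = (neighbor_labels == 0).sum(axis=1)
    n_y = (neighbor_labels == 1).sum(axis=1)
    ratio = n_x / np.maximum(n_y, 1)   # guard against division by zero
    return float(np.mean(ratio ** alpha))

rng = np.random.default_rng(2)
s_same = powered_ratio_statistic(rng.normal(size=(500, 2)),
                                 rng.normal(size=(500, 2)))
s_far = powered_ratio_statistic(rng.normal(size=(500, 2)),
                                rng.normal(size=(500, 2)) + 5)
# With matching densities the X/Y mix in each neighborhood is balanced,
# so s_same is of order 1; with well-separated samples the Y neighborhoods
# contain almost no X points, so s_far collapses toward 0.
```

Only nearest-neighbor queries are needed, which is what makes this family of direct estimators computationally attractive in high dimension.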
no code implementations • 13 Sep 2016 • Kevin R. Moon, Morteza Noshad, Salimeh Yasaei Sekeh, Alfred O. Hero III
Information-theoretic measures (e.g., the Kullback-Leibler divergence and Shannon mutual information) have been used to explore possibly nonlinear multivariate dependencies in high dimensions.