Search Results for author: Amir Nazemi

Found 6 papers, 1 paper with code

Challenges for Predictive Modeling with Neural Network Techniques using Error-Prone Dietary Intake Data

no code implementations • 15 Nov 2023 • Dylan Spicker, Amir Nazemi, Joy Hutchinson, Paul Fieguth, Sharon I. Kirkpatrick, Michael Wallace, Kevin W. Dodd

In this work, we demonstrate the ways in which measurement error erodes the performance of neural networks, and illustrate the care that is required for leveraging these models in the presence of error.
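To make the failure mode concrete, here is a minimal simulation sketch, not the paper's code: a network trained on an error-prone surrogate W = X + U (the classical measurement-error model) loses predictive accuracy as the error variance grows. All variable names and parameters below are illustrative.

```python
# Illustrative sketch (not the paper's code): additive measurement error
# in the inputs erodes a neural network's predictive performance.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# True covariate X (e.g., true dietary intake) and outcome y.
n = 5000
X = rng.normal(size=(n, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=n)

# Error-prone surrogate W = X + U, the classical measurement-error model.
for sigma_u in [0.0, 0.5, 1.0]:
    W = X + sigma_u * rng.normal(size=(n, 1))
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    net.fit(W[:4000], y[:4000])
    # Evaluated against error-free test inputs: R^2 drops as sigma_u grows,
    # because the network has learned the attenuated relationship E[y | W].
    print(f"sigma_u={sigma_u:.1f}  test R^2 = {r2_score(y[4000:], net.predict(X[4000:])):.3f}")
```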

Memory-Efficient Continual Learning Object Segmentation for Long Video

no code implementations • 26 Sep 2023 • Amir Nazemi, Mohammad Javad Shafiee, Zahra Gharaee, Paul Fieguth

We propose two novel techniques to reduce the memory requirement of Online VOS methods while improving modeling accuracy and generalization on long videos.

Continual Learning, Object Segmentation +4
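As a rough illustration of the memory problem in Online VOS, the sketch below shows a generic bounded feature memory: the annotated first frame is pinned and older intermediate frames are evicted first-in-first-out. This is an assumed stand-in for how such methods bound memory on long videos, not the paper's two techniques.

```python
# Illustrative sketch (not the paper's method): a fixed-capacity feature
# memory for online video object segmentation. Memory stays bounded no
# matter how long the video runs.
from collections import deque
import numpy as np

class BoundedFeatureMemory:
    def __init__(self, capacity: int = 8):
        self.anchor = None                    # features of the annotated first frame
        self.recent = deque(maxlen=capacity)  # rolling window of recent frames

    def add(self, feat: np.ndarray) -> None:
        if self.anchor is None:
            self.anchor = feat                # pin the first frame permanently
        else:
            self.recent.append(feat)          # deque evicts the oldest automatically

    def read(self) -> np.ndarray:
        # Memory used to match the current frame: anchor + recent window.
        return np.stack([self.anchor, *self.recent])

# Usage: memory holds at most capacity + 1 frames, even for 10,000 frames.
mem = BoundedFeatureMemory(capacity=8)
for t in range(10_000):
    mem.add(np.random.rand(256))              # stand-in for per-frame embeddings
print(mem.read().shape)                       # (9, 256)
```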

Deep Neural Network Perception Models and Robust Autonomous Driving Systems

no code implementations • 4 Mar 2020 • Mohammad Javad Shafiee, Ahmadreza Jeddi, Amir Nazemi, Paul Fieguth, Alexander Wong

This paper analyzes the robustness of deep learning models in autonomous driving applications and discusses practical solutions for addressing their vulnerabilities.

Autonomous Driving

Potential adversarial samples for white-box attacks

no code implementations • 13 Dec 2019 • Amir Nazemi, Paul Fieguth

Deep convolutional neural networks can be highly vulnerable to small perturbations of their inputs, a potentially serious limitation on system robustness when deep networks are used as classifiers.

Adversarial Attack
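For context on the vulnerability the abstract describes, here is a minimal sketch of the standard FGSM white-box attack (Goodfellow et al.); the paper's "potential adversarial samples" analysis is a distinct contribution, and the model and data below are toy stand-ins.

```python
# Illustrative sketch: the classic FGSM white-box attack, shown only to
# demonstrate input-perturbation vulnerability; not the paper's method.
import torch
import torch.nn as nn

def fgsm(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float = 0.03):
    """Return x perturbed by eps * sign(grad of the loss w.r.t. x)."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # A small step in the gradient-sign direction is often enough to
    # change the classifier's decision while barely changing the input.
    return (x + eps * x.grad.sign()).detach()

# Toy usage with a hypothetical classifier over flattened inputs.
model = nn.Sequential(nn.Linear(3072, 128), nn.ReLU(), nn.Linear(128, 10))
x, y = torch.randn(4, 3072), torch.tensor([0, 1, 2, 3])
x_adv = fgsm(model, x, y)
# Compare predictions before and after the perturbation.
print(model(x).argmax(1).tolist(), model(x_adv).argmax(1).tolist())
```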

Unsupervised Feature Learning Toward a Real-time Vehicle Make and Model Recognition

no code implementations • 8 Jun 2018 • Amir Nazemi, Mohammad Javad Shafiee, Zohreh Azimifar, Alexander Wong

Here, we formulate vehicle make and model recognition as a fine-grained classification problem and propose a new configurable on-road vehicle make and model recognition framework.
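A minimal sketch of such a pipeline, with PCA standing in for the paper's unsupervised feature learner and purely synthetic data throughout:

```python
# Illustrative sketch (not the paper's framework): vehicle make-and-model
# recognition cast as fine-grained classification, with unsupervised
# features feeding a lightweight linear classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-in data: flattened vehicle image crops and make/model labels.
X = rng.normal(size=(600, 32 * 32))   # e.g., 32x32 grayscale crops
y = rng.integers(0, 5, size=600)      # 5 hypothetical make/model classes

# Features are learned without labels (PCA fit), then a fast linear
# classifier handles the fine-grained decision -- cheap enough for real time.
clf = make_pipeline(PCA(n_components=64, random_state=0), LinearSVC())
clf.fit(X[:500], y[:500])
print("held-out accuracy:", clf.score(X[500:], y[500:]))
```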
