Search Results for author: Vahid Behzadan

Found 21 papers, 5 papers with code

Synthetic Reduced Nearest Neighbor Model for Regression

no code implementations29 Sep 2021 Pooya Tavallali, Vahid Behzadan, Mukesh Singhal

This algorithm comprises two steps: (1) the assignment step, where each sample is assigned to a centroid and the target response (i.e., prediction) of each centroid is determined; and (2) the update/centroid step, where each centroid is updated so that the loss function of the entire model is minimized.
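The two-step loop described in the abstract resembles a k-means-style alternating optimization with per-centroid target responses. The sketch below illustrates that structure under squared-error loss; all variable names and the simplified update rule are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def fit_srnn(X, y, k=3, n_iters=10, seed=0):
    """Illustrative sketch of the two-step fitting loop:
    (1) assignment step: assign each sample to its nearest centroid and set
        each centroid's prediction to the mean target of its samples;
    (2) update step: move each centroid to the mean of its assigned samples,
        reducing the squared-error loss of the overall model.
    """
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    preds = np.zeros(k)
    assign = np.zeros(len(X), dtype=int)
    for _ in range(n_iters):
        # (1) assignment step: nearest centroid per sample
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        for j in range(k):
            mask = assign == j
            if mask.any():
                preds[j] = y[mask].mean()          # target response of centroid j
                centroids[j] = X[mask].mean(axis=0)  # (2) update/centroid step
    return centroids, preds, assign
```

On well-separated data the loop converges in a few iterations, with each centroid's prediction settling at the mean target of its cluster.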

Stochastic Induction of Decision Trees with Application to Learning Haar Tree

no code implementations29 Sep 2021 Azar Alizadeh, Pooya Tavallali, Vahid Behzadan, Mukesh Singhal

Experimentally, the algorithm is compared with several other related state-of-the-art decision tree learning methods, including the baseline non-stochastic approach.

Mitigation of Adversarial Policy Imitation via Constrained Randomization of Policy (CRoP)

no code implementations AAAI Workshop AdvML 2022 Nancirose Piazza, Vahid Behzadan

Deep reinforcement learning (DRL) policies are vulnerable to unauthorized replication attacks, where an adversary exploits imitation learning to reproduce target policies from observed behavior.

Imitation Learning reinforcement-learning

Adversarial Poisoning Attacks and Defense for General Multi-Class Models Based On Synthetic Reduced Nearest Neighbors

no code implementations11 Feb 2021 Pooya Tavallali, Vahid Behzadan, Peyman Tavallali, Mukesh Singhal

Through extensive experimental analysis, we demonstrate that (i) the proposed attack technique can deteriorate the accuracy of several models drastically, and (ii) under the proposed attack, the proposed defense technique significantly outperforms other conventional machine learning models in recovering the accuracy of the targeted model.

Data Poisoning

Adversarial Attacks on Deep Algorithmic Trading Policies

no code implementations22 Oct 2020 Yaser Faghan, Nancirose Piazza, Vahid Behzadan, Ali Fathi

Deep Reinforcement Learning (DRL) has become an appealing solution to algorithmic trading, such as high-frequency trading of stocks and cryptocurrencies.

Algorithmic Trading reinforcement-learning

Sentimental LIAR: Extended Corpus and Deep Learning Models for Fake Claim Classification

2 code implementations1 Sep 2020 Bibek Upadhayay, Vahid Behzadan

The rampant integration of social media into our everyday lives and culture has given rise to faster and easier access to the flow of information than ever before in human history.

Emotion Recognition Fake News Detection +2

Founding The Domain of AI Forensics

no code implementations11 Dec 2019 Ibrahim Baggili, Vahid Behzadan

With the widespread integration of AI in everyday and critical technologies, it seems inevitable to witness increasing instances of failure in AI systems.

A Novel Approach for Detection and Ranking of Trendy and Emerging Cyber Threat Events in Twitter Streams

no code implementations12 Jul 2019 Avishek Bose, Vahid Behzadan, Carlos Aguirre, William H. Hsu

We present a new machine learning and text information extraction approach to the detection of cyber threat events in Twitter that are either novel (not previously observed) or developing (significant with respect to their similarity to a previously detected event).

Event Detection
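The novel-vs-developing distinction above can be sketched as a similarity check against previously detected events. This is a hypothetical illustration only: the similarity measure (cosine over term counts), the thresholds, and all names below are assumptions, not the paper's actual pipeline.

```python
import math
from collections import Counter

def cosine_sim(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify_event(event_terms, past_events, novel_below=0.2, developing_above=0.6):
    """Label an incoming event against previously detected ones:
    'novel' if it resembles nothing seen before, 'developing' if it is
    strongly similar to (i.e., extends) a prior event, else 'other'.
    Thresholds are illustrative placeholders."""
    best = max((cosine_sim(event_terms, p) for p in past_events), default=0.0)
    if best < novel_below:
        return "novel"
    if best > developing_above:
        return "developing"
    return "other"
```

In this toy framing, an event sharing no vocabulary with any prior event scores near zero similarity and is labeled novel, while one dominated by a prior event's terms is labeled developing.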

Analysis and Improvement of Adversarial Training in DQN Agents With Adversarially-Guided Exploration (AGE)

no code implementations3 Jun 2019 Vahid Behzadan, William Hsu

This paper investigates the effectiveness of adversarial training in enhancing the robustness of Deep Q-Network (DQN) policies to state-space perturbations.

RL-Based Method for Benchmarking the Adversarial Resilience and Robustness of Deep Reinforcement Learning Policies

no code implementations3 Jun 2019 Vahid Behzadan, William Hsu

This paper investigates the resilience and robustness of Deep Reinforcement Learning (DRL) policies to adversarial perturbations in the state space.

Disentanglement reinforcement-learning

Adversarial Exploitation of Policy Imitation

no code implementations3 Jun 2019 Vahid Behzadan, William Hsu

This paper investigates a class of attacks targeting the confidentiality aspect of security in Deep Reinforcement Learning (DRL) policies.

Imitation Learning Model extraction +1

Sequential Triggers for Watermarking of Deep Reinforcement Learning Policies

no code implementations3 Jun 2019 Vahid Behzadan, William Hsu

This scheme provides a mechanism for the integration of a unique identifier within the policy in the form of its response to a designated sequence of state transitions, while incurring minimal impact on the nominal performance of the policy.

reinforcement-learning
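The watermarking scheme above embeds an identifier as the policy's response to a designated sequence of state transitions, which suggests a simple ownership check: query a suspect policy on the trigger states and compare its actions to the registered sequence. The sketch below assumes a tabular toy policy; all names and the verification interface are hypothetical, not the paper's protocol.

```python
def verify_watermark(policy, trigger_states, expected_actions):
    """Return True if the suspect policy reproduces the designated
    action sequence (the embedded identifier) on the trigger states."""
    return all(policy(s) == a for s, a in zip(trigger_states, expected_actions))

# Toy example: a tabular policy over integer states. The trigger states are
# chosen to be out-of-distribution so nominal performance is unaffected.
watermarked_policy = {101: 2, 102: 0, 103: 1}.get   # responds to trigger states
clean_policy = lambda s: 0                          # un-watermarked baseline
trigger_states = [101, 102, 103]
expected_actions = [2, 0, 1]                        # the unique identifier
```

The key design point mirrored here is that verification needs only black-box query access to the suspect policy.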

Emergence of Addictive Behaviors in Reinforcement Learning Agents

no code implementations14 Nov 2018 Vahid Behzadan, Roman V. Yampolskiy, Arslan Munir

This paper presents a novel approach to the technical analysis of wireheading in intelligent agents.

Q-Learning reinforcement-learning

TrolleyMod v1.0: An Open-Source Simulation and Data-Collection Platform for Ethical Decision Making in Autonomous Vehicles

1 code implementation14 Nov 2018 Vahid Behzadan, James Minton, Arslan Munir

This paper presents TrolleyMod v1.0, an open-source platform based on the CARLA simulator for the collection of ethical decision-making data for autonomous vehicles.

Autonomous Vehicles Decision Making +1

The Faults in Our Pi Stars: Security Issues and Open Challenges in Deep Reinforcement Learning

no code implementations23 Oct 2018 Vahid Behzadan, Arslan Munir

Since the inception of Deep Reinforcement Learning (DRL) algorithms, there has been growing interest in both the research and industrial communities in the promising potential of this paradigm.

Autonomous Navigation reinforcement-learning

Mitigation of Policy Manipulation Attacks on Deep Q-Networks with Parameter-Space Noise

no code implementations4 Jun 2018 Vahid Behzadan, Arslan Munir

Recent developments have established the vulnerability of deep reinforcement learning to policy manipulation attacks via intentionally perturbed inputs, known as adversarial examples.

reinforcement-learning

Adversarial Reinforcement Learning Framework for Benchmarking Collision Avoidance Mechanisms in Autonomous Vehicles

no code implementations4 Jun 2018 Vahid Behzadan, Arslan Munir

With the rapidly growing interest in autonomous navigation, research on motion planning and collision avoidance techniques has produced novel proposals and developments at an accelerating rate.

Autonomous Navigation Motion Planning +1

A Psychopathological Approach to Safety Engineering in AI and AGI

no code implementations23 May 2018 Vahid Behzadan, Arslan Munir, Roman V. Yampolskiy

The complexity of dynamics in AI techniques is already approaching that of complex adaptive systems, thus curtailing the feasibility of formal controllability and reachability analysis in the context of AI safety.

Whatever Does Not Kill Deep Reinforcement Learning, Makes It Stronger

4 code implementations23 Dec 2017 Vahid Behzadan, Arslan Munir

Recent developments have established the vulnerability of deep Reinforcement Learning (RL) to policy manipulation attacks via adversarial perturbations.

reinforcement-learning

Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks

1 code implementation16 Jan 2017 Vahid Behzadan, Arslan Munir

Deep learning classifiers are known to be inherently vulnerable to manipulation by intentionally perturbed inputs, known as adversarial examples.

reinforcement-learning
