Search Results for author: Nasser Mozayani

Found 7 papers, 1 paper with code

Leveraging Visemes for Better Visual Speech Representation and Lip Reading

no code implementations • 19 Jul 2023 • Javad Peymanfard, Vahid Saeedi, Mohammad Reza Mohammadi, Hossein Zeinali, Nasser Mozayani

We evaluate our approach on various tasks, including word-level and sentence-level lip reading, and audiovisual speech recognition using the Arman-AV dataset, a large-scale Persian corpus.

Lip Reading, Sentence, +2

Word-level Persian Lipreading Dataset

no code implementations • 8 Apr 2023 • Javad Peymanfard, Ali Lashini, Samin Heydarian, Hossein Zeinali, Nasser Mozayani

Lip-reading has made impressive progress in recent years, driven by advances in deep learning.

Lipreading, Lip Reading

MAS2HP: A Multi-Agent System to Predict Protein Structure in the 2D HP Model

1 code implementation • 11 May 2022 • Hossein Parineh, Nasser Mozayani

In this paper, we propose a new approach to protein structure prediction using agent-based modeling (ABM) in the two-dimensional hydrophobic-hydrophilic (HP) model.

Protein Structure Prediction
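For context on the abstract above: in the standard 2D HP model, a protein is a string of H (hydrophobic) and P (polar) residues folded as a self-avoiding walk on a square lattice, and the energy is −1 for each pair of H residues that are lattice neighbors but not adjacent in the chain. The sketch below illustrates that energy function only; it is not the paper's multi-agent search, and the function name `hp_energy` is our own placeholder.

```python
def hp_energy(sequence, coords):
    """Energy of a 2D HP fold.

    sequence: string of 'H'/'P' residues, e.g. 'HHHH'
    coords:   list of (x, y) lattice positions, one per residue,
              forming a self-avoiding walk on the square lattice
    Returns -1 per topological H-H contact (lattice neighbors that
    are not consecutive along the chain).
    """
    pos = {c: i for i, c in enumerate(coords)}  # lattice cell -> residue index
    energy = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != 'H':
            continue
        # Check only right and up neighbors so each contact is counted once.
        for nb in ((x + 1, y), (x, y + 1)):
            j = pos.get(nb)
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                energy -= 1
    return energy

# A square fold of 'HHHH': residues 0 and 3 touch but are not chain-adjacent.
print(hp_energy('HHHH', [(0, 0), (1, 0), (1, 1), (0, 1)]))  # -> -1
```

The paper's contribution is the agent-based search over such folds; the energy function above is only the objective that any search in this model minimizes.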

A new Potential-Based Reward Shaping for Reinforcement Learning Agent

no code implementations • 17 Feb 2019 • Babak Badnava, Mona Esmaeili, Nasser Mozayani, Payman Zarkesh-Ha

In the literature on both transfer learning and potential-based reward shaping, one subject that has not been addressed is the knowledge gathered during the learning process itself.

Atari Games, reinforcement-learning, +2
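For readers unfamiliar with the technique named in the title above: classic potential-based reward shaping (in the form introduced by Ng, Harada, and Russell) augments the environment reward with F(s, s') = γΦ(s') − Φ(s), which provably preserves the optimal policy. The sketch below shows that generic form, not this paper's specific shaping scheme; the potential function `phi` is a hypothetical example of our own.

```python
def shaped_reward(reward, phi_s, phi_s_next, gamma=0.99):
    """Potential-based reward shaping:
    r' = r + gamma * phi(s') - phi(s).
    The shaping term F(s, s') = gamma*phi(s') - phi(s) does not change
    which policy is optimal; it only redistributes reward along paths."""
    return reward + gamma * phi_s_next - phi_s

# Hypothetical potential: negative distance to a goal position on a line,
# so moving toward the goal yields extra positive shaping reward.
def phi(state, goal=10.0):
    return -abs(goal - state)

# Agent steps from state 4.0 to 5.0 and receives environment reward 0.0.
r = shaped_reward(0.0, phi(4.0), phi(5.0))
print(r)  # positive: the step moved closer to the goal
```

With γ = 1 the shaping term telescopes over a trajectory to Φ(s_final) − Φ(s_start), which is the intuition behind the policy-invariance guarantee.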

Learning to predict where to look in interactive environments using deep recurrent q-learning

no code implementations • 17 Dec 2016 • Sajad Mousavi, Michael Schukat, Enda Howley, Ali Borji, Nasser Mozayani

Bottom-Up (BU) saliency models do not perform well in complex interactive environments where humans are actively engaged in tasks (e.g., making a sandwich or playing video games).

Atari Games, Q-Learning, +2
