Search Results for author: Adil Kaan Akan

Found 8 papers, 2 papers with code

ADAPT: Efficient Multi-Agent Trajectory Prediction with Adaptation

no code implementations • ICCV 2023 • Görkay Aydemir, Adil Kaan Akan, Fatma Güney

To address this challenge, we propose ADAPT, a novel approach for jointly predicting the trajectories of all agents in the scene with dynamic weight learning.

Attribute • Trajectory Prediction
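As a rough illustration of the dynamic weight idea in ADAPT, the PyTorch sketch below uses a small hypernetwork to generate a per-agent decoder head from that agent's encoding, so every agent in the scene is decoded with its own weights. All module names and dimensions are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of per-agent dynamic weight learning (not the authors' code).
import torch
import torch.nn as nn

class DynamicTrajectoryDecoder(nn.Module):
    """Predicts future waypoints with decoder weights generated per agent."""
    def __init__(self, feat_dim=128, horizon=30):
        super().__init__()
        self.horizon = horizon
        # Hypernetwork: maps an agent encoding to the weights and bias
        # of a per-agent linear head (feat_dim -> horizon * 2 coordinates).
        self.weight_gen = nn.Linear(feat_dim, feat_dim * horizon * 2)
        self.bias_gen = nn.Linear(feat_dim, horizon * 2)

    def forward(self, agent_feats):
        # agent_feats: (num_agents, feat_dim), one encoding per agent in the scene
        n, d = agent_feats.shape
        w = self.weight_gen(agent_feats).view(n, self.horizon * 2, d)
        b = self.bias_gen(agent_feats)
        # Apply each agent's generated head to its own features.
        traj = torch.bmm(w, agent_feats.unsqueeze(-1)).squeeze(-1) + b
        return traj.view(n, self.horizon, 2)  # (x, y) per future time step

decoder = DynamicTrajectoryDecoder()
print(decoder(torch.randn(4, 128)).shape)  # torch.Size([4, 30, 2])
```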

Trajectory Forecasting on Temporal Graphs

1 code implementation • 1 Jul 2022 • Görkay Aydemir, Adil Kaan Akan, Fatma Güney

We complement our representation with two types of memory modules: one focusing on the agent of interest and the other on the entire scene.

Motion Forecasting • Trajectory Forecasting
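In spirit, the two memory modules could look like the minimal sketch below: one recurrent memory tracks the agent of interest while another tracks a pooled summary of the whole scene across temporal-graph snapshots. This is an assumed toy construction, not the paper's code; the pooling choice and the agent-of-interest index are placeholders.

```python
# Hypothetical sketch of agent-level and scene-level memory modules.
import torch
import torch.nn as nn

class DualMemory(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.agent_mem = nn.GRUCell(dim, dim)   # memory for the agent of interest
        self.scene_mem = nn.GRUCell(dim, dim)   # memory for the entire scene

    def forward(self, agent_feats, h_agent, h_scene):
        # agent_feats: (num_agents, dim) node features of one temporal snapshot
        target = agent_feats[0]             # agent of interest (index 0 assumed)
        scene = agent_feats.mean(dim=0)     # pooled summary of the whole scene
        h_agent = self.agent_mem(target.unsqueeze(0), h_agent)
        h_scene = self.scene_mem(scene.unsqueeze(0), h_scene)
        return h_agent, h_scene

mem = DualMemory()
h_a = h_s = torch.zeros(1, 64)
for t in range(10):                         # iterate over snapshots in time
    h_a, h_s = mem(torch.randn(5, 64), h_a, h_s)
```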

StretchBEV: Stretching Future Instance Prediction Spatially and Temporally

no code implementations • 25 Mar 2022 • Adil Kaan Akan, Fatma Güney

Our model learns temporal dynamics in a latent space through stochastic residual updates at each time step.
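One plausible reading of "stochastic residual updates" is rolling a latent state forward by adding a sampled residual at each time step. The sketch below assumes a Gaussian parameterization with the reparameterization trick; it illustrates the update rule, not StretchBEV's actual model.

```python
# Hypothetical sketch of stochastic residual updates in a latent space.
import torch
import torch.nn as nn

class StochasticResidualDynamics(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        # Predict mean and log-variance of the residual from the current state.
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 2 * dim))

    def step(self, y):
        mu, logvar = self.net(y).chunk(2, dim=-1)
        eps = torch.randn_like(mu)
        residual = mu + eps * (0.5 * logvar).exp()   # reparameterized sample
        return y + residual                          # y_{t+1} = y_t + Δ_t

dyn = StochasticResidualDynamics()
y = torch.zeros(1, 32)
for t in range(5):          # roll the latent state forward in time
    y = dyn.step(y)
```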

Stochastic Video Prediction with Structure and Motion

no code implementations • 20 Mar 2022 • Adil Kaan Akan, Sadra Safadoust, Fatma Güney

Existing methods fail to fully capture the dynamics of the structured world because they focus only on changes in pixels.

Future prediction Video Prediction

SLAMP: Stochastic Latent Appearance and Motion Prediction

1 code implementation • ICCV 2021 • Adil Kaan Akan, Erkut Erdem, Aykut Erdem, Fatma Güney

Motion is an important cue for video prediction and is often exploited by separating video content into static and dynamic components.

Ranked #1 on Video Prediction on Cityscapes 128x128 (PSNR metric)

Autonomous Driving • Motion Prediction +2
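A minimal sketch of the static/dynamic split mentioned in the abstract: blend an appearance-based prediction of the next frame with a motion-based one through a learned mask. SLAMP's actual architecture is considerably more involved; this only illustrates the blending idea, and all names here are assumptions.

```python
# Hypothetical blend of a static (appearance) and a dynamic (motion) prediction.
import torch

def blend_predictions(appearance_pred, motion_pred, mask):
    """appearance_pred, motion_pred: (B, C, H, W); mask in [0, 1]: (B, 1, H, W)."""
    # The mask decides, per pixel, whether the motion or appearance path dominates.
    return mask * motion_pred + (1.0 - mask) * appearance_pred

b, c, h, w = 2, 3, 64, 64
frame = blend_predictions(torch.rand(b, c, h, w), torch.rand(b, c, h, w),
                          torch.rand(b, 1, h, w))
```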

Just Noticeable Difference for Machines to Generate Adversarial Images

no code implementations • 29 Jan 2020 • Adil Kaan Akan, Mehmet Ali Genc, Fatos T. Yarman Vural

We define the Just Noticeable Difference for a machine learning model and generate adversarial images with the least perceptible difference that can still trick the model.

BIG-bench Machine Learning • Object Detection +1
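The "least perceptible difference" idea can be approximated generically with a gradient-based perturbation clamped to a small bound, as sketched below. The paper's JND criterion is more specific than this illustration; the bound, step size, and toy model here are assumptions.

```python
# Generic sketch of a small adversarial perturbation by gradient ascent on the
# loss, clamped to a perceptibility bound eps (not the paper's JND formulation).
import torch
import torch.nn as nn

def small_perturbation(model, x, label, eps=2.0 / 255, steps=10, lr=0.5 / 255):
    delta = torch.zeros_like(x, requires_grad=True)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        loss = loss_fn(model(x + delta), label)
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()   # ascend the loss
            delta.clamp_(-eps, eps)           # keep the change barely noticeable
            delta.grad.zero_()
    return (x + delta).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
x_adv = small_perturbation(model, torch.rand(1, 3, 32, 32), torch.tensor([3]))
```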
