Search Results for author: Vasileios Lioutas

Found 13 papers, 3 papers with code

Video Killed the HD-Map: Predicting Multi-Agent Behavior Directly From Aerial Images

no code implementations 19 May 2023 Yunpeng Liu, Vasileios Lioutas, Jonathan Wilder Lavington, Matthew Niedoba, Justice Sefas, Setareh Dabiri, Dylan Green, Xiaoxuan Liang, Berend Zwartsenberg, Adam Ścibior, Frank Wood

The development of algorithms that learn multi-agent behavioral models using human demonstrations has led to increasingly realistic simulations in the field of autonomous driving.

Autonomous Driving, Trajectory Prediction

Critic Sequential Monte Carlo

no code implementations 30 May 2022 Vasileios Lioutas, Jonathan Wilder Lavington, Justice Sefas, Matthew Niedoba, Yunpeng Liu, Berend Zwartsenberg, Setareh Dabiri, Frank Wood, Adam Scibior

We introduce CriticSMC, a new algorithm for planning as inference built from a composition of sequential Monte Carlo with learned Soft-Q function heuristic factors. (A minimal sketch of this general pattern follows this entry.)

Collision Avoidance
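The entry above composes sequential Monte Carlo with learned heuristic factors. As a point of reference only, the following is a minimal Python sketch of that general pattern under toy assumptions: particles propose actions, a soft-Q score reweights them as a heuristic factor, and particles are resampled. The functions soft_q and step are hypothetical stand-ins, and nothing here reproduces the paper's CriticSMC implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def soft_q(state, action):
        # Hypothetical stand-in for a learned soft-Q heuristic; this toy score
        # simply prefers actions that drive the scalar state toward zero.
        return -abs(state + action)

    def step(state, action):
        # Toy stochastic dynamics used purely for illustration.
        return state + action + 0.1 * rng.normal()

    n_particles, horizon = 64, 10
    states = np.zeros(n_particles)

    for t in range(horizon):
        actions = rng.normal(size=n_particles)                              # propose actions
        log_w = np.array([soft_q(s, a) for s, a in zip(states, actions)])   # heuristic factors
        w = np.exp(log_w - log_w.max())
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())      # resample
        states = np.array([step(s, a) for s, a in zip(states, actions)])[idx]

    print(round(float(states.mean()), 3))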

MATE-KD: Masked Adversarial TExt, a Companion to Knowledge Distillation

1 code implementation ACL 2021 Ahmad Rashid, Vasileios Lioutas, Mehdi Rezagholizadeh

We present MATE-KD, a novel text-based adversarial training algorithm which improves the performance of knowledge distillation.

Adversarial Text, Data Augmentation +2

Towards Zero-Shot Knowledge Distillation for Natural Language Processing

no code implementations EMNLP 2021 Ahmad Rashid, Vasileios Lioutas, Abbas Ghaddar, Mehdi Rezagholizadeh

Knowledge Distillation (KD) is a common knowledge transfer algorithm used for model compression across a variety of deep learning based natural language processing (NLP) solutions. (A sketch of the standard KD loss follows this entry.)

Knowledge Distillation, Model Compression +1
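As background for the entry above, the sketch below shows the standard knowledge-distillation objective (temperature-softened teacher targets mixed with ordinary cross-entropy), not the zero-shot procedure proposed in the paper. The function name kd_loss, the temperature T and the mixing weight alpha are illustrative choices.

    import torch
    import torch.nn.functional as F

    def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        # Soft targets: KL divergence between the temperature-softened
        # teacher and student distributions, scaled by T^2 as is customary.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        # Hard targets: ordinary cross-entropy against the ground-truth labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    # Toy usage: random logits for a batch of 4 examples over 10 classes.
    student = torch.randn(4, 10)
    teacher = torch.randn(4, 10)
    labels = torch.randint(0, 10, (4,))
    print(kd_loss(student, teacher, labels))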

Mapping Low-Resolution Images To Multiple High-Resolution Images Using Non-Adversarial Mapping

no code implementations 21 Jun 2020 Vasileios Lioutas

Several methods have recently been proposed for the Single Image Super-Resolution (SISR) problem.

Image Super-Resolution

Time-aware Large Kernel Convolutions

1 code implementation ICML 2020 Vasileios Lioutas, Yuhong Guo

Some of these models use all the available sequence tokens to generate an attention distribution which results in time complexity of $O(n^2)$. (A short illustration of this quadratic cost follows this entry.)

Document Summarization, Language Modelling +2
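To make the quoted quadratic cost concrete, the generic scaled dot-product self-attention below materialises an n-by-n score matrix over all token pairs, which is the source of the $O(n^2)$ term. This is illustration code only, not the paper's time-aware large kernel convolution, which is designed to avoid that cost.

    import numpy as np

    def self_attention(x):
        # x: (n, d) token representations; queries, keys and values are all
        # set to x for simplicity.
        n, d = x.shape
        scores = x @ x.T / np.sqrt(d)           # (n, n) matrix: quadratic in sequence length
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ x                      # (n, d) output

    out = self_attention(np.random.randn(128, 64))
    print(out.shape)  # (128, 64); the intermediate score matrix was 128 x 128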

Copy this Sentence

no code implementations 23 May 2019 Vasileios Lioutas, Andriy Drozdyuk

In this paper we provide the mathematical definition of attention and examine its application to sequence-to-sequence models. (The standard scaled dot-product formulation is given after this entry.)

Sentence
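For reference, the most widely used mathematical definition of attention is the scaled dot-product form below; the paper's exact formalisation may differ.

    \[
      \operatorname{Attention}(Q, K, V) = \operatorname{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
    \]

where Q, K and V are the query, key and value matrices and d_k is the key dimension.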
