Search Results for author: Vitaly Ablavsky

Found 11 papers, 7 papers with code

The 7th AI City Challenge

no code implementations · 15 Apr 2023 · Milind Naphade, Shuo Wang, David C. Anastasiu, Zheng Tang, Ming-Ching Chang, Yue Yao, Liang Zheng, Mohammed Shaiqur Rahman, Meenakshi S. Arya, Anuj Sharma, Qi Feng, Vitaly Ablavsky, Stan Sclaroff, Pranamesh Chakraborty, Sanjita Prajapati, Alice Li, Shangru Li, Krishna Kunadharaju, Shenxin Jiang, Rama Chellappa

The AI City Challenge's seventh edition emphasizes two domains at the intersection of computer vision and artificial intelligence - retail business and Intelligent Traffic Systems (ITS) - that have considerable untapped potential.

Retrieval

ZeroWaste Dataset: Towards Deformable Object Segmentation in Cluttered Scenes

1 code implementation · CVPR 2022 · Dina Bashkirova, Mohamed Abdelfattah, Ziliang Zhu, James Akl, Fadi Alladkani, Ping Hu, Vitaly Ablavsky, Berk Calli, Sarah Adel Bargal, Kate Saenko

Recyclable waste detection poses a unique computer vision challenge as it requires detection of highly deformable and often translucent objects in cluttered scenes without the kind of context information usually present in human-centric datasets.

Object · Object Detection +5

CityFlow-NL: Tracking and Retrieval of Vehicles at City Scale by Natural Language Descriptions

1 code implementation · 12 Jan 2021 · Qi Feng, Vitaly Ablavsky, Stan Sclaroff

In this paper, we focus on two foundational tasks: the Vehicle Retrieval by NL task and the Vehicle Tracking by NL task, which take advantage of the proposed CityFlow-NL benchmark and provide a strong basis for future research on the multi-target multi-camera tracking by NL description task.

Multi-Object Tracking · Retrieval +1

DMCL: Distillation Multiple Choice Learning for Multimodal Action Recognition

1 code implementation · 23 Dec 2019 · Nuno C. Garcia, Sarah Adel Bargal, Vitaly Ablavsky, Pietro Morerio, Vittorio Murino, Stan Sclaroff

In this work, we address the problem of learning an ensemble of specialist networks using multimodal data, while considering the realistic and challenging scenario of possible missing modalities at test time.

Action Recognition · Multiple-choice +1

Siamese Natural Language Tracker: Tracking by Natural Language Descriptions with Siamese Trackers

1 code implementation · CVPR 2021 · Qi Feng, Vitaly Ablavsky, Qinxun Bai, Stan Sclaroff

We propose a novel Siamese Natural Language Tracker (SNLT), which brings the advancements in visual tracking to the tracking by natural language (NL) descriptions task.

Region Proposal · Visual Object Tracking +1

Learning to Separate: Detecting Heavily-Occluded Objects in Urban Scenes

1 code implementation · ECCV 2020 · Chenhongyi Yang, Vitaly Ablavsky, Kaihong Wang, Qi Feng, Margrit Betke

While visual object detection with deep learning has received much attention in the past decade, cases in which heavy intra-class occlusion occurs have not been studied thoroughly.

Object Detection +1

Real-time Visual Object Tracking with Natural Language Description

no code implementations · 26 Jul 2019 · Qi Feng, Vitaly Ablavsky, Qinxun Bai, Guorong Li, Stan Sclaroff

In benchmarks, our method is competitive with state-of-the-art trackers, and it outperforms all other trackers on targets with unambiguous and precise language annotations.

Object · Visual Object Tracking
