no code implementations • 22 Dec 2023 • Cristian Rodriguez-Opazo, Edison Marrese-Taylor, Ehsan Abbasnejad, Hamed Damirchi, Ignacio M. Jara, Felipe Bravo-Marquez, Anton Van Den Hengel
Contrastive Language-Image Pretraining (CLIP) stands out as a prominent method for image representation learning.
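The contrastive objective behind CLIP can be sketched in a few lines. This is a minimal NumPy illustration of the standard symmetric InfoNCE loss over paired image/text embeddings, not code from the paper; the function name and temperature value are illustrative assumptions.

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    # L2-normalize so the dot product is cosine similarity.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature   # (N, N) similarity matrix
    targets = np.arange(len(logits))     # matched pairs sit on the diagonal

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[targets, targets].mean()

    # Symmetric loss: classify the matching text for each image and vice versa.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

With perfectly aligned, distinct embeddings the loss approaches zero; mismatched pairs drive it up, which is what makes the learned image representations transferable.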
no code implementations • 29 Nov 2023 • Hamed Damirchi, Cristian Rodríguez-Opazo, Ehsan Abbasnejad, Damien Teney, Javen Qinfeng Shi, Stephen Gould, Anton Van Den Hengel
Large pre-trained models can dramatically reduce the amount of task-specific data required to solve a problem, but they often fail to capture domain-specific nuances out of the box.
no code implementations • 2 Jun 2023 • Hamed Damirchi, Forest Agostinelli, Pooyan Jamshidi
However, a lack of structure in each module's role, together with issues specific to modular networks such as module collapse, has restricted their usability.
no code implementations • 30 Jan 2023 • Ali Farajzadeh Bavil, Hamed Damirchi, Hamid D. Taghirad
Owing to the compact and rich high-level representations that skeleton data offer, skeleton-based human action recognition has recently become a highly active research topic.
Ranked #4 on Skeleton Based Action Recognition on N-UCLA
no code implementations • 1 Jul 2021 • Hamed Damirchi, Rooholla Khorrambakht, Hamid D. Taghirad, Behzad Moshiri
In such cases where multiple losses are imposed on a network, the uncertainty over each output can be derived to weigh the different loss terms in a maximum likelihood setting.
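Deriving per-output uncertainty to weigh loss terms in a maximum-likelihood setting can be sketched with the well-known homoscedastic-uncertainty formulation (each task loss scaled by exp(-s_i) with s_i = log sigma_i^2, plus s_i as a regularizer). This is an illustrative assumption about the weighting scheme, not necessarily the paper's exact formulation:

```python
import numpy as np

def uncertainty_weighted_loss(losses, log_vars):
    # losses:   per-task loss values L_i
    # log_vars: learned s_i = log(sigma_i^2), one per task
    # Total = sum_i exp(-s_i) * L_i + s_i; the exp(-s_i) factor down-weights
    # noisy tasks, while the +s_i term keeps sigma_i from growing unboundedly.
    losses = np.asarray(losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-log_vars) * losses + log_vars))
```

In practice the `log_vars` would be trainable parameters optimized jointly with the network, so each loss term's weight adapts to its estimated noise level.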
no code implementations • 18 Jan 2021 • Rooholla Khorrambakht, Chris Xiaoxuan Lu, Hamed Damirchi, Zhenghua Chen, Zhengguo Li
Inertial Measurement Units (IMUs) are interoceptive modalities that provide ego-motion measurements independent of environmental factors.
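The kind of ego-motion measurement an IMU provides can be sketched by naive dead-reckoning: integrate the yaw rate for heading, rotate body-frame acceleration into the world frame, and integrate twice for position. The planar simplification and function name are illustrative assumptions, not the paper's method:

```python
import numpy as np

def integrate_imu(accels, gyros, dt, v0=None):
    # accels: body-frame 2-D accelerations [[ax, ay], ...]
    # gyros:  yaw rates (rad/s), one per sample
    # Returns estimated world-frame position, velocity, and heading.
    v = np.zeros(2) if v0 is None else np.array(v0, dtype=float)
    p = np.zeros(2)
    yaw = 0.0
    for a_body, w in zip(accels, gyros):
        yaw += w * dt                       # integrate angular rate
        c, s = np.cos(yaw), np.sin(yaw)
        a_world = np.array([c * a_body[0] - s * a_body[1],
                            s * a_body[0] + c * a_body[1]])
        v += a_world * dt                   # integrate acceleration
        p += v * dt                         # integrate velocity
    return p, v, yaw
```

Note that bias and noise in the measurements make this drift quickly, which is why IMUs are typically fused with other sensors rather than used alone.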
no code implementations • 17 Nov 2020 • Hamed Damirchi, Rooholla Khorrambakht, Hamid D. Taghirad
Visual odometry networks commonly use pretrained optical flow networks in order to derive the ego-motion between consecutive frames.
no code implementations • 6 Jul 2020 • Hamed Damirchi, Rooholla Khorrambakht, Hamid Taghirad
We provide heatmaps of the priors learned by the network to visualize how the trained network utilizes each data source.