1 code implementation • 23 Feb 2025 • Prafful Kumar Khoba, Zijian Wang, Chetan Arora, Mahsa Baktashmotlagh
Through extensive experimentation, we demonstrate the efficacy of our feature perturbation method in providing a more precise and robust estimation of model transferability.
no code implementations • 27 Dec 2024 • Mateusz Michalkiewicz, Sheena Bai, Mahsa Baktashmotlagh, Varun Jampani, Guha Balakrishnan
In this paper, we analyze the viewpoint stability of foundational models, specifically their sensitivity to changes in viewpoint, and define instability as significant feature variation resulting from minor changes in viewing angle, which leads to generalization gaps in 3D reasoning tasks.
1 code implementation • 18 Nov 2024 • Bowen Yuan, Zijian Wang, Mahsa Baktashmotlagh, Yadan Luo, Zi Huang
At the image level, we employ a palette network, a specialized neural network, to dynamically allocate colors from a reduced color space to each pixel.
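The palette idea above can be illustrated with a minimal sketch: each pixel is softly assigned to colors from a small palette and recolored as a convex combination of palette entries. This is a hypothetical illustration of the concept, not the authors' palette network; the function name and temperature parameter are assumptions.

```python
import numpy as np

def palette_assign(image, palette, temperature=0.1):
    """Softly assign each pixel to colors from a reduced palette.

    image:   (H, W, 3) float array in [0, 1]
    palette: (K, 3) float array of K representative colors
    Returns the recolored (H, W, 3) image built from palette entries.
    """
    # Squared distance from every pixel to every palette color: (H, W, K)
    d2 = ((image[..., None, :] - palette[None, None]) ** 2).sum(-1)
    # Soft assignment weights (a differentiable stand-in for a learned network)
    w = np.exp(-d2 / temperature)
    w /= w.sum(-1, keepdims=True)
    # Recolor: convex combination of palette colors per pixel
    return w @ palette

img = np.random.default_rng(0).random((8, 8, 3))
pal = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [1.0, 0.0, 0.0]])
out = palette_assign(img, pal)
print(out.shape)
```

In the paper the assignment would be produced by a trained network rather than a fixed distance kernel; the soft weighting keeps the operation differentiable.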
no code implementations • 5 Aug 2024 • Ekaterina Khramtsova, Mahsa Baktashmotlagh, Guido Zuccon, Xi Wang, Mathieu Salzmann
In this work, we propose a source-free approach centred on uncertainty-based estimation, using a generative model for calibration in the absence of source data.
no code implementations • 9 Jul 2024 • Ekaterina Khramtsova, Teerapong Leelanupab, Shengyao Zhuang, Mahsa Baktashmotlagh, Guido Zuccon
In this demo we present a web-based application for selecting an effective pre-trained dense retriever to use on a private collection.
no code implementations • 21 Jun 2024 • Zhuoxiao Chen, Junjie Meng, Mahsa Baktashmotlagh, Yonggang Zhang, Zi Huang, Yadan Luo
Specifically, we propose a Model Synergy (MOS) strategy that dynamically selects historical checkpoints with diverse knowledge and assembles them to best accommodate the current test batch.
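A minimal sketch of the checkpoint-assembly idea, under the simplifying assumption that each checkpoint is weighted by its mean prediction confidence on the current test batch (the paper's actual selection criterion may differ):

```python
import numpy as np

def synergize(checkpoint_logits):
    """Combine predictions from several historical checkpoints.

    checkpoint_logits: list of (N, C) logit arrays, one per checkpoint.
    Weights each checkpoint by its mean confidence on the current test
    batch, then returns the weighted average of the softmax outputs.
    """
    probs, weights = [], []
    for logits in checkpoint_logits:
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        p = e / e.sum(axis=1, keepdims=True)
        probs.append(p)
        weights.append(p.max(axis=1).mean())  # confidence as a proxy score
    w = np.array(weights)
    w /= w.sum()
    return sum(wi * pi for wi, pi in zip(w, probs))

rng = np.random.default_rng(1)
batch_logits = [rng.normal(size=(4, 5)) for _ in range(3)]
fused = synergize(batch_logits)
print(fused.shape)
```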
1 code implementation • 21 Jun 2024 • Jia Syuen Lim, Zhuoxiao Chen, Mahsa Baktashmotlagh, Zhi Chen, Xin Yu, Zi Huang, Yadan Luo
We demonstrate the effectiveness of DiPEx through extensive class-agnostic OD and OOD-OD experiments on MS-COCO and LVIS, surpassing other prompting methods by up to 20.1% in AR and achieving a 21.3% AP improvement over SAM.
1 code implementation • 7 Feb 2024 • Ekaterina Khramtsova, Shengyao Zhuang, Mahsa Baktashmotlagh, Guido Zuccon
In this paper, we present Large Language Model Assisted Retrieval Model Ranking (LARMOR), an effective unsupervised approach that leverages LLMs to select which dense retriever to use on a test corpus (target).
no code implementations • ICCV 2023 • Mateusz Michalkiewicz, Masoud Faraki, Xiang Yu, Manmohan Chandraker, Mahsa Baktashmotlagh
Overfitting to the source domain is a common issue in gradient-based training of deep neural networks.
no code implementations • 18 Sep 2023 • Ekaterina Khramtsova, Shengyao Zhuang, Mahsa Baktashmotlagh, Xi Wang, Guido Zuccon
We propose the new problem of choosing which dense retrieval model to use when searching on a new collection for which no labels are available, i.e., in a zero-shot setting.
no code implementations • ICCV 2023 • Yadan Luo, Zhuoxiao Chen, Zhen Fang, Zheng Zhang, Zi Huang, Mahsa Baktashmotlagh
Achieving a reliable LiDAR-based object detector in autonomous driving is paramount, but its success hinges on obtaining large amounts of precise 3D annotations.
1 code implementation • ICCV 2023 • Zhuoxiao Chen, Yadan Luo, Zheng Wang, Mahsa Baktashmotlagh, Zi Huang
Unsupervised domain adaptation (DA) with the aid of pseudo labeling techniques has emerged as a crucial approach for domain-adaptive 3D object detection.
1 code implementation • 23 Jan 2023 • Yadan Luo, Zhuoxiao Chen, Zijian Wang, Xin Yu, Zi Huang, Mahsa Baktashmotlagh
To alleviate the high annotation cost in LiDAR-based 3D object detection, active learning is a promising solution that learns to select only a small portion of unlabeled data to annotate, without compromising model performance.
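A common instantiation of such active selection, shown here purely as an illustrative sketch (entropy-based acquisition is an assumption, not necessarily the paper's criterion), picks the most uncertain unlabeled samples for annotation:

```python
import numpy as np

def select_for_annotation(probs, budget):
    """Pick the `budget` most uncertain samples by predictive entropy.

    probs: (N, C) class probabilities from the current model.
    Returns indices of the samples to send for labeling.
    """
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[::-1][:budget]

p = np.array([[0.98, 0.01, 0.01],   # confident  -> low entropy
              [0.34, 0.33, 0.33],   # uncertain  -> high entropy
              [0.70, 0.20, 0.10]])
picked = select_for_annotation(p, budget=1)
print(picked)  # [1]
```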
no code implementations • ICCV 2023 • Zijian Wang, Yadan Luo, Liang Zheng, Zi Huang, Mahsa Baktashmotlagh
This paper focuses on model transferability estimation, i.e., assessing the performance of pre-trained models on a downstream task without performing fine-tuning.
no code implementations • WACV 2023 • Tianle Chen, Mahsa Baktashmotlagh, Zijian Wang, Mathieu Salzmann
Domain generalization (DG) aims to learn a model from multiple training (i.e., source) domains that can generalize well to unseen test (i.e., target) data coming from a different distribution.
Ranked #3 on Single-Source Domain Generalization on Digits-five
no code implementations • 15 Oct 2022 • Siamak Layeghy, Mahsa Baktashmotlagh, Marius Portmann
To enhance the generalisability of machine-learning-based network intrusion detection systems, we propose to extract domain-invariant features from multiple network domains using adversarial domain adaptation, and then apply an unsupervised technique for recognising abnormalities, i.e., intrusions.
no code implementations • 9 Jul 2022 • Ekaterina Khramtsova, Guido Zuccon, Xi Wang, Mahsa Baktashmotlagh
This paper performs a detailed analysis of the effectiveness of topological properties for image classification in various training scenarios, defined by: the number of training samples, the complexity of the training data and the complexity of the backbone network.
2 code implementations • 13 Feb 2022 • Yadan Luo, Zijian Wang, Zhuoxiao Chen, Zi Huang, Mahsa Baktashmotlagh
However, most existing OSDA approaches are limited due to three main reasons, including: (1) the lack of essential theoretical analysis of generalization bound, (2) the reliance on the coexistence of source and target data during adaptation, and (3) failing to accurately estimate the uncertainty of model predictions.
1 code implementation • 1 Sep 2021 • Zhuoxiao Chen, Yadan Luo, Mahsa Baktashmotlagh
The majority of video domain adaptation algorithms are proposed for closed-set scenarios in which all the classes are shared among the domains.
1 code implementation • ICCV 2021 • Zijian Wang, Yadan Luo, Ruihong Qiu, Zi Huang, Mahsa Baktashmotlagh
Domain generalization (DG) aims to generalize a model trained on multiple source (i.e., training) domains to a distributionally different target (i.e., test) domain.
Ranked #7 on Single-Source Domain Generalization on Digits-five
no code implementations • 24 Jul 2021 • Olga Moskvyak, Frederic Maire, Feras Dayoub, Mahsa Baktashmotlagh
To reduce the need for labeled data, we focus on a semi-supervised approach that requires only a subset of the training data to be labeled.
no code implementations • 11 Jun 2021 • Mateusz Michalkiewicz, Stavros Tsogkas, Sarah Parisot, Mahsa Baktashmotlagh, Anders Eriksson, Eugene Belilovsky
The impressive performance of deep convolutional neural networks in single-view 3D reconstruction suggests that these models perform non-trivial reasoning about the 3D structure of the output space.
no code implementations • ACL 2021 • Farhad Moghimifar, Lizhen Qu, Yue Zhuo, Gholamreza Haffari, Mahsa Baktashmotlagh
The dynamic nature of commonsense knowledge postulates models capable of performing multi-hop reasoning over new situations.
no code implementations • ICLR 2021 • Olga Moskvyak, Frederic Maire, Feras Dayoub, Mahsa Baktashmotlagh
Keypoint representations are learnt with a semantic keypoint consistency constraint that forces the keypoint detection network to learn similar features for the same keypoint across the dataset.
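One way to realize such a consistency constraint, sketched below under the assumption of a simple cosine-similarity penalty (the paper's exact loss may differ), is to penalize dissimilar features for matched keypoints across images:

```python
import numpy as np

def keypoint_consistency_loss(feats_a, feats_b):
    """Penalize dissimilar features for the same keypoint in two images.

    feats_a, feats_b: (K, D) feature vectors for K matched keypoints.
    Returns the mean (1 - cosine similarity) over keypoints.
    """
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    cos = (a * b).sum(axis=1)
    return float((1.0 - cos).mean())

f = np.random.default_rng(2).normal(size=(5, 16))
loss_same = keypoint_consistency_loss(f, f)    # identical features
loss_diff = keypoint_consistency_loss(f, -f)   # opposite features
print(loss_same, loss_diff)  # ~0.0  ~2.0
```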
no code implementations • 1 Jan 2021 • Mahsa Baktashmotlagh, Tianle Chen, Mathieu Salzmann
In this setting, existing techniques focus on the challenging task of isolating the unknown target samples, so as to avoid the negative transfer resulting from aligning the source feature distributions with the broader target one that encompasses the additional unknown classes.
1 code implementation • ALTA 2020 • Farhad Moghimifar, Gholamreza Haffari, Mahsa Baktashmotlagh
Our experiments on four different benchmark causality datasets demonstrate the superiority of our approach over existing baselines, with improvements of up to 7%, on the tasks of identifying and localising causal relations in text.
no code implementations • ALTA 2020 • Farhad Moghimifar, Afshin Rahimi, Mahsa Baktashmotlagh, Xue Li
Causal relationships form the basis for reasoning and decision-making in Artificial Intelligence systems.
1 code implementation • 25 Nov 2020 • Yadan Luo, Zi Huang, Hongxu Chen, Yang Yang, Mahsa Baktashmotlagh
Most of the prior efforts are devoted to learning node embeddings with graph neural networks (GNNs), which preserve the signed network topology by message-passing along edges to facilitate the downstream link prediction task.
1 code implementation • COLING 2020 • Farhad Moghimifar, Lizhen Qu, Yue Zhuo, Mahsa Baktashmotlagh, Gholamreza Haffari
However, current approaches in this realm lack the ability to perform commonsense reasoning upon facing an unseen situation, mostly due to incapability of identifying a diverse range of implicit social relations.
no code implementations • 26 Aug 2020 • Olga Moskvyak, Frederic Maire, Feras Dayoub, Mahsa Baktashmotlagh
Learning embeddings that are invariant to the pose of the object is crucial in visual image retrieval and re-identification.
1 code implementation • 31 Jul 2020 • Yadan Luo, Zi Huang, Zijian Wang, Zheng Zhang, Mahsa Baktashmotlagh
To further enhance the model capacity and testify the robustness of the proposed architecture on difficult transfer tasks, we extend our model to work in a semi-supervised setting using an additional video-level bipartite graph.
Ranked #3 on Domain Adaptation on HMDB --> UCF (full)
1 code implementation • ICML 2020 • Yadan Luo, Zijian Wang, Zi Huang, Mahsa Baktashmotlagh
The existing domain adaptation approaches which tackle this problem work in the closed-set setting with the assumption that the source and the target data share exactly the same classes of objects.
no code implementations • 10 May 2020 • Mateusz Michalkiewicz, Eugene Belilovsky, Mahsa Baktashmotlagh, Anders Eriksson
Deep learning applied to the reconstruction of 3D shapes has seen growing interest.
1 code implementation • ECCV 2020 • Mateusz Michalkiewicz, Sarah Parisot, Stavros Tsogkas, Mahsa Baktashmotlagh, Anders Eriksson, Eugene Belilovsky
In this work we demonstrate experimentally that naive baselines do not apply when the goal is to learn to reconstruct novel objects using very few examples, and that in a \emph{few-shot} learning setting, the network must learn concepts that can be applied to new categories, avoiding rote memorization.
no code implementations • 3 Mar 2020 • Qianggong Zhang, Yanyang Gu, Mateusz Michalkiewicz, Mahsa Baktashmotlagh, Anders Eriksson
In conventional formulations of multilayer feedforward neural networks, the individual layers are customarily defined by explicit functions.
no code implementations • 9 Jan 2020 • Olga Moskvyak, Frederic Maire, Feras Dayoub, Mahsa Baktashmotlagh
Our method outperforms the same model without body landmarks input by 26% and 18% on the synthetic and the real datasets respectively.
no code implementations • 29 Nov 2019 • Mohammad Mahfujur Rahman, Clinton Fookes, Mahsa Baktashmotlagh, Sridha Sridharan
Domain adaptation (DA) and domain generalization (DG) have emerged as a solution to the domain shift problem where the distribution of the source and target data is different.
Ranked #16 on Domain Adaptation on ImageCLEF-DA
no code implementations • 12 Nov 2019 • Yadan Luo, Zi Huang, Zheng Zhang, Ziwei Wang, Mahsa Baktashmotlagh, Yang Yang
Meta-learning for few-shot learning allows a machine to leverage previously acquired knowledge as a prior, thus improving the performance on novel tasks with only small amounts of data.
1 code implementation • 28 Feb 2019 • Olga Moskvyak, Frederic Maire, Asia O. Armstrong, Feras Dayoub, Mahsa Baktashmotlagh
We present a novel system for visual re-identification based on unique natural markings that is robust to occlusions, viewpoint and illumination changes.
no code implementations • 21 Jan 2019 • Mateusz Michalkiewicz, Jhony K. Pontes, Dominic Jack, Mahsa Baktashmotlagh, Anders Eriksson
This repository contains the code for the paper "Occupancy Networks - Learning 3D Reconstruction in Function Space"
1 code implementation • 2 Jan 2019 • Mohammad Mahfujur Rahman, Clinton Fookes, Mahsa Baktashmotlagh, Sridha Sridharan
In the presence of large sets of labeled data, Deep Learning (DL) has accomplished extraordinary triumphs in the avenue of computer vision, particularly in object classification and recognition tasks.
no code implementations • 21 Dec 2018 • Mohammad Mahfujur Rahman, Clinton Fookes, Mahsa Baktashmotlagh, Sridha Sridharan
If DA methods are applied directly to DG by a simple exclusion of the target data from training, poor performance will result for a given task.
Ranked #125 on Domain Generalization on PACS
no code implementations • ICLR 2019 • Mahsa Baktashmotlagh, Masoud Faraki, Tom Drummond, Mathieu Salzmann
To this end, we rely on the intuition that the source and target samples depicting the known classes can be generated by a shared subspace, whereas the target samples from unknown classes come from a different, private subspace.
no code implementations • 22 Sep 2017 • Fahimeh Rezazadegan, Sareh Shirazi, Mahsa Baktashmotlagh, Larry S. Davis
Anticipating future actions is a key component of intelligence, specifically when it applies to real-time systems, such as robots or autonomous cars.
no code implementations • 4 Sep 2017 • Samuel Cunningham-Nelson, Mahsa Baktashmotlagh, Wageeh Boles
In this work, we explore using statistical dependence measures for textual classification, representing text as word vectors.
no code implementations • ICCV 2015 • Mehrtash Harandi, Mathieu Salzmann, Mahsa Baktashmotlagh
State-of-the-art image-set matching techniques typically implicitly model each image-set with a Gaussian distribution.
no code implementations • CVPR 2014 • Mahsa Baktashmotlagh, Mehrtash T. Harandi, Brian C. Lovell, Mathieu Salzmann
Here, we propose to make better use of the structure of this manifold and rely on the distance on the manifold to compare the source and target distributions.