FACT: Federated Adversarial Cross Training

1 Jun 2023  ·  Stefan Schrod, Jonas Lippl, Andreas Schäfer, Michael Altenbuchinger

Federated Learning (FL) facilitates distributed model development to aggregate multiple confidential data sources. The information transfer among clients can be compromised by distributional differences, i.e., by non-i.i.d. data. A particularly challenging scenario is federated model adaptation to a target client without access to annotated data. We propose Federated Adversarial Cross Training (FACT), which uses the implicit domain differences between source clients to identify domain shifts in the target domain. In each round of FL, FACT cross-initializes a pair of source clients to generate domain-specialized representations, which are then used as a direct adversary to learn a domain-invariant data representation. We empirically show that FACT outperforms state-of-the-art federated, non-federated and source-free domain adaptation models on three popular multi-source-single-target benchmarks, and state-of-the-art Unsupervised Domain Adaptation (UDA) models on single-source-single-target experiments. We further study FACT's behavior with respect to communication restrictions and the number of participating clients.
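The round structure described above — cross-initializing a pair of source clients, running local supervised training, and measuring the disagreement of the resulting domain-specialized heads on unlabeled target data — can be sketched roughly as follows. This is a minimal illustration under simplifying assumptions, not the paper's implementation: it uses plain logistic-regression heads, synthetic shifted-Gaussian client data, and a simple FedAvg-style average, and it omits the adversarial encoder update that FACT would drive with the discrepancy signal.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_steps(w, X, y, lr=0.1, steps=5):
    """A few local supervised gradient steps (logistic-regression head)."""
    for _ in range(steps):
        w = w - lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def discrepancy(w_i, w_j, X_t):
    """Disagreement of two domain-specialized heads on unlabeled target data.
    In FACT, a signal like this acts as the direct adversary for learning a
    domain-invariant representation (the encoder update is omitted here)."""
    return np.abs(sigmoid(X_t @ w_i) - sigmoid(X_t @ w_j)).mean()

# Two source clients with shifted (non-i.i.d.) feature distributions,
# plus one unlabeled target client -- all synthetic, for illustration only.
X1 = rng.normal(0.0, 1.0, (200, 3)); y1 = (X1[:, 0] > 0.0).astype(float)
X2 = rng.normal(0.5, 1.0, (200, 3)); y2 = (X2[:, 0] > 0.5).astype(float)
Xt = rng.normal(0.25, 1.0, (100, 3))

w1 = np.zeros(3)
w2 = np.zeros(3)
for rnd in range(20):
    # Cross-initialization: each source client resumes from its partner's weights.
    w1, w2 = w2.copy(), w1.copy()
    w1 = local_steps(w1, X1, y1)
    w2 = local_steps(w2, X2, y2)
    d = discrepancy(w1, w2, Xt)  # adversarial signal for the target domain

w_global = (w1 + w2) / 2  # FedAvg-style aggregation of the pair
print(f"final inter-domain discrepancy on target: {d:.3f}")
```

Even in this toy setting, the two cross-initialized heads converge toward agreement on the target data, so the discrepancy shrinks over rounds; FACT uses exactly this kind of disagreement as the training signal for domain invariance.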

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Multi-Source Unsupervised Domain Adaptation | Digits-five | FACT | Accuracy | 95.2 | #2 |
| Domain Adaptation | MNIST-to-USPS | FACT | Accuracy | 98.8 | #1 |
| Multi-Source Unsupervised Domain Adaptation | Office-31 | FACT | Accuracy | 88.7 | #1 |
| Multi-Source Unsupervised Domain Adaptation | Office-Caltech10 | FACT | Accuracy | 97.6 | #3 |
| Domain Adaptation | SVHN-to-MNIST | FACT | Accuracy | 90.6 | #10 |
| Domain Adaptation | USPS-to-MNIST | FACT | Accuracy | 98.6 | #2 |
