A major question is therefore how to measure the performance of an algorithm in comparison to an optimal solution on instances we encounter in practice.
As a case study, we apply this methodology to analyzing gender bias in pre-trained Transformer language models.
Common methods for interpreting neural models in natural language processing typically examine either their structure or their behavior, but not both.
Despite the vast success of Deep Neural Networks in numerous application domains, it has been shown that such models are not robust, i.e., they are vulnerable to small adversarial perturbations of the input.
Recently, there has been a surge of interest in a parallel optimization technique called adaptive sampling, which produces solutions with desirable approximation guarantees for submodular maximization in exponentially faster parallel runtime.