no code implementations • 31 May 2024 • Simla Burcu Harma, Ayan Chakraborty, Elizaveta Kostenok, Danila Mishin, Dongho Ha, Babak Falsafi, Martin Jaggi, Ming Liu, Yunho Oh, Suvinay Subramanian, Amir Yazdanbakhsh
In addition, through rigorous analysis, we demonstrate that sparsity and quantization are not orthogonal; their interaction can significantly harm model accuracy, with quantization error playing a dominant role in this degradation.
no code implementations • 10 Sep 2023 • Pavel Burnyshev, Elizaveta Kostenok, Alexey Zaytsev
Through our investigation, we provide evidence that machine translation models display robustness against the best-performing known adversarial attacks, as the degree of perturbation in the output is directly proportional to the perturbation in the input.
no code implementations • 22 Aug 2023 • Elizaveta Kostenok, Daniil Cherniavskii, Alexey Zaytsev
Additionally, we introduce topological features to compare attention patterns across heads and layers.