1 code implementation • 25 Sep 2021 • Swapnil Parekh, Yaman Singla Kumar, Somesh Singh, Changyou Chen, Balaji Krishnamurthy, Rajiv Ratn Shah
It is well known that natural language models are vulnerable to adversarial attacks, most of which are input-specific.