no code implementations • 25 Mar 2023 • Denis Kuznedelev, Soroush Tabesh, Kimia Noorbakhsh, Elias Frantar, Sara Beery, Eldar Kurtic, Dan Alistarh
To address this, we ask: can we quickly compress large generalist models into accurate and efficient specialists?
2 code implementations • 23 Feb 2022 • Hanieh Naderi, Kimia Noorbakhsh, Arian Etemadi, Shohreh Kasaei
Although 3D point cloud classification has recently been widely deployed in different application scenarios, it remains highly vulnerable to adversarial attacks.
no code implementations • 25 Jan 2022 • Rahul Goel, Modar Sulaiman, Kimia Noorbakhsh, Mahdi Sharifi, Rajesh Sharma, Pooyan Jamshidi, Kallol Roy
The pretrained GPT-2 transformer, originally trained to generate text, is fine-tuned to classify facial images.
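The transfer idea described here (reuse a text-pretrained transformer body as a feature extractor, then train a small head for image classification) can be illustrated with a toy numpy sketch. Everything below is a hypothetical stand-in, not the paper's code: a fixed random projection plays the role of GPT-2's frozen pretrained blocks, and only a linear classification head is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained transformer body: a fixed nonlinear
# projection of flattened image pixels into a feature space.  In the
# paper's setting this would be GPT-2's pretrained blocks; here it is
# just a placeholder illustrating the frozen-body / trainable-head split.
def frozen_body(x, W_frozen):
    return np.tanh(x @ W_frozen)

# Toy "image" data: labels depend linearly on two pixel values.
n, d_pix, d_feat = 200, 64, 32
X = rng.normal(size=(n, d_pix))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

W_frozen = rng.normal(size=(d_pix, d_feat)) / np.sqrt(d_pix)  # never updated
feats = frozen_body(X, W_frozen)

# "Fine-tune" only the classification head: logistic regression
# on the frozen features, trained by plain gradient descent.
w, b, lr = np.zeros(d_feat), 0.0, 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    g = p - y
    w -= lr * feats.T @ g / n
    b -= lr * g.mean()

acc = ((feats @ w + b > 0) == (y == 1)).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

The frozen weights are never touched during training, which is what makes this kind of cross-modal transfer cheap: only the small head sees gradient updates.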
2 code implementations • 15 Nov 2021 • Kimia Noorbakhsh, Manuel Gomez Rodriguez
This model satisfies a desirable counterfactual monotonicity condition, which is sufficient to identify the counterfactual dynamics of the thinning process.
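The monotonicity idea can be sketched with Lewis' thinning for sampling an inhomogeneous point process: candidate events come from a homogeneous Poisson process at rate `lam_max`, and each is accepted when a uniform draw falls below `lam(t) / lam_max`. Reusing the same acceptance noise under a counterfactual intensity yields a monotone coupling. This is a minimal toy sketch under assumed constant intensities, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Lewis' thinning: keep candidate event t iff its acceptance noise u
# falls below lam(t) / lam_max.
def thinning(lam, lam_max, candidates, us):
    return [t for t, u in zip(candidates, us) if u < lam(t) / lam_max]

T, lam_max = 10.0, 2.0
n_cand = rng.poisson(lam_max * T)
candidates = np.sort(rng.uniform(0, T, n_cand))
us = rng.uniform(0, 1, n_cand)  # SAME noise reused for both worlds

# Factual intensity 1.0 vs. a pointwise-larger counterfactual 1.5.
factual = thinning(lambda t: 1.0, lam_max, candidates, us)
counterfactual = thinning(lambda t: 1.5, lam_max, candidates, us)

# Monotonicity: raising the intensity can only add events, never
# remove one, because u < 0.5 implies u < 0.75 for every candidate.
print(set(factual) <= set(counterfactual))  # prints True
```

Sharing the acceptance noise `us` across the factual and counterfactual runs is what makes the coupling monotone: each candidate's fate can only flip from rejected to accepted as the intensity grows.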
1 code implementation • 7 Oct 2021 • Kimia Noorbakhsh, Modar Sulaiman, Mahdi Sharifi, Kallol Roy, Pooyan Jamshidi
In this paper, we present a sample-efficient way of solving symbolic tasks: we first pretrain a transformer model on language translation and then fine-tune it on the downstream task of symbolic mathematics.
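The pretrain-then-fine-tune recipe can be illustrated with a toy numpy experiment. As an assumed stand-in for translation and symbolic mathematics, both tasks below are linear seq2seq maps that share most of their structure, so a model warm-started from the pretraining solution reaches low downstream loss with a much smaller step budget than training from scratch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two related toy tasks sharing structure: a "pretraining" map
# (surrogate for translation) and a nearby "downstream" map
# (surrogate for symbolic mathematics).
d = 20
M_pre = rng.normal(size=(d, d))
M_down = M_pre + 0.1 * rng.normal(size=(d, d))

def make_data(M, n=256):
    X = rng.normal(size=(n, d))
    return X, X @ M.T

def train(W, X, Y, steps, lr=0.01):
    # Plain gradient descent on mean-squared error.
    for _ in range(steps):
        W -= lr * (X @ W.T - Y).T @ X / len(X)
    return W

def loss(W, X, Y):
    return np.mean((X @ W.T - Y) ** 2)

# Pretrain on the surrogate "translation" task with a generous budget.
Xp, Yp = make_data(M_pre)
W_pre = train(np.zeros((d, d)), Xp, Yp, steps=500)

# Fine-tune on the downstream task with a SMALL budget, versus
# training from scratch with the same small budget.
Xd, Yd = make_data(M_down)
W_ft = train(W_pre.copy(), Xd, Yd, steps=20)
W_scratch = train(np.zeros((d, d)), Xd, Yd, steps=20)

print(loss(W_ft, Xd, Yd), loss(W_scratch, Xd, Yd))
```

Because the pretrained weights already encode the shared structure, fine-tuning only has to close the small gap between the two tasks, which is the sample-efficiency argument in miniature.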