1 code implementation • 18 Oct 2022 • Ali Mirzazadeh, Florian Dubost, Maxwell Pike, Krish Maniar, Max Zuo, Christopher Lee-Messer, Daniel Rubin
We propose an unsupervised fine-tuning method that optimizes the consistency of attention maps and show that it improves both classification performance and the quality of attention maps.
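A consistency objective like the one described might be sketched as follows. This is a minimal numpy illustration under assumptions: the function name `consistency_loss` and the mean-squared-error form are hypothetical, not the paper's exact loss; the idea is only that attention maps for two views of the same input should agree.

```python
import numpy as np

def consistency_loss(attn_a: np.ndarray, attn_b: np.ndarray) -> float:
    """Mean squared difference between two normalized attention maps.

    Hypothetical stand-in for an attention-consistency objective:
    maps produced for two augmented views of one input should match.
    """
    # Normalize each map so it sums to 1 before comparing.
    attn_a = attn_a / attn_a.sum()
    attn_b = attn_b / attn_b.sum()
    return float(np.mean((attn_a - attn_b) ** 2))

# Identical maps incur zero loss; differing maps incur a positive one.
a = np.array([[0.1, 0.9], [0.4, 0.6]])
b = np.array([[0.9, 0.1], [0.6, 0.4]])
print(consistency_loss(a, a))  # 0.0
print(consistency_loss(a, b) > 0)  # True
```

In a fine-tuning loop this term would be minimized alongside (or instead of) a supervised loss, pushing the model toward stable attention.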
no code implementations • 23 Mar 2022 • Max Zuo, Logan Schick, Matthew Gombolay, Nakul Gopalan
In each test, CA-RRT reached more states on average than weighted-RRT within the same number of iterations.