no code implementations • 31 Jan 2025 • Halil Ibrahim Aysel, Xiaohao Cai, Adam Prugel-Bennett
Concept-based explanation methods, such as concept bottleneck models (CBMs), aim to improve the interpretability of machine learning models by linking their decisions to human-understandable concepts, under the critical assumption that such concepts can be accurately attributed to the network's feature space.
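As a sketch of the concept-bottleneck idea the abstract refers to (a minimal PyTorch-style illustration; the layer sizes and names are ours, not the paper's architecture):

```python
# Minimal concept-bottleneck sketch (illustrative, not the paper's model):
# inputs are mapped to a vector of human-interpretable concept activations,
# and the final label is predicted from those concepts alone.
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    def __init__(self, in_dim=512, n_concepts=10, n_classes=5):
        super().__init__()
        self.concept_predictor = nn.Linear(in_dim, n_concepts)   # x -> concepts
        self.label_predictor = nn.Linear(n_concepts, n_classes)  # concepts -> y

    def forward(self, x):
        concepts = torch.sigmoid(self.concept_predictor(x))  # concept scores in [0, 1]
        logits = self.label_predictor(concepts)              # decision uses concepts only
        return concepts, logits

x = torch.randn(4, 512)  # e.g. backbone features
concepts, logits = ConceptBottleneckModel()(x)
```

The abstract's caveat concerns the first mapping: the construction is only as interpretable as the assumption that the concepts really are recoverable from the network's feature space.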
no code implementations • 22 Aug 2024 • RuiXiao Zhang, Juheon Lee, Xiaohao Cai, Adam Prugel-Bennett
Deep learning models such as convolutional neural networks and transformers have been widely applied to solve 3D object detection problems in the domain of autonomous driving.
1 code implementation • 15 Aug 2024 • Yabin Wang, Zhiwu Huang, Su Zhou, Adam Prugel-Bennett, Xiaopeng Hong
This paper critiques the overly specialized approach of fine-tuning pre-trained models solely with a penny-wise objective on a single deepfake dataset, while disregarding the pound-wise balance for generalization and knowledge retention.
1 code implementation • 4 Jul 2024 • RuiXiao Zhang, Yihong Wu, Juheon Lee, Adam Prugel-Bennett, Xiaohao Cai
This raises a fundamental question about evaluating the cross-domain performance of 3D object detection models: do we really need models to maintain excellent performance on their original 3D bounding boxes after being applied across domains?
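To make the question concrete, here is a toy bird's-eye-view IoU calculation (axis-aligned boxes for simplicity and invented numbers; real benchmarks use rotated 3D IoU): a detection centred perfectly on a target-domain object but sized to the source domain's object statistics is still penalised.

```python
# Toy illustration (assumed setup): a well-localised detection with a
# source-domain size prior scores a reduced IoU against the target-domain box.
def iou_axis_aligned(a, b):
    # boxes as (x_min, y_min, x_max, y_max)
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

gt = (0.0, 0.0, 1.8, 4.7)         # target-domain car footprint (metres)
det = (0.15, 0.35, 1.65, 4.35)    # same centre, source-domain size prior
print(iou_axis_aligned(gt, det))  # ~0.71: right object, penalised size
```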
no code implementations • 29 Sep 2021 • Mark Tuddenham, Adam Prugel-Bennett, Jonathon Hare
The optimisation of neural networks can be sped up by orthogonalising the gradients before the optimisation step, ensuring the diversification of the learned representations.
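A minimal sketch of the idea (our assumed mechanism, not necessarily the authors' exact procedure): replace each weight-matrix gradient with its nearest orthogonal matrix before the optimiser step, so the per-neuron update directions are decorrelated.

```python
# Hedged sketch of per-layer gradient orthogonalisation: for a gradient
# G = U S V^T, the nearest orthogonal matrix is U V^T, which we substitute
# as the update direction before the optimiser step.
import torch

def orthogonalise_gradients(model):
    for p in model.parameters():
        if p.grad is not None and p.grad.ndim == 2:  # weight matrices only
            U, _, Vh = torch.linalg.svd(p.grad, full_matrices=False)
            p.grad.copy_(U @ Vh)                     # orthogonal update direction

# usage: loss.backward(); orthogonalise_gradients(model); optimizer.step()
```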
no code implementations • NeurIPS Workshop DL-IG 2020 • Dominic Belcher, Adam Prugel-Bennett, Srinandan Dasmahapatra
Recent results in deep learning show that considering only the capacity of machines does not adequately explain the generalisation performance we can observe.
no code implementations • NeurIPS 2020 • Matthew Painter, Jonathon Hare, Adam Prugel-Bennett
In this work we empirically show that linear disentangled representations are not generally present in standard VAE models and that they instead require altering the loss landscape to induce them.
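For reference, a toy illustration of what "linear disentangled" means under the symmetry-based definition (the construction and dimensions here are ours, not the paper's experiments): each generative factor acts on its own latent subspace by a fixed linear map, here a 2D rotation, and leaves the other subspaces untouched.

```python
# Toy linear disentangled latent space: two factors, one 2D rotation block each.
import numpy as np

def rotation_block(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def act(z, factor, theta, block=2):
    # Apply the factor's group action: rotate only its own 2D subspace of z.
    R = np.eye(len(z))
    i = factor * block
    R[i:i+block, i:i+block] = rotation_block(theta)
    return R @ z

z = np.random.randn(4)             # two factors, two latent dims each
z2 = act(z, factor=0, theta=0.3)   # changes dims 0-1 only; dims 2-3 are fixed
```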
no code implementations • 20 Nov 2013 • Shaona Ghosh, Adam Prugel-Bennett
On-line linear optimization on combinatorial action sets (d-dimensional actions) with bandit feedback is known to have complexity on the order of the dimension of the problem.
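A simplified sketch of this setting (an EXP2-style learner with uniform exploration mixing and a pseudo-inverse loss estimator; this is a generic textbook construction, not the paper's algorithm): the learner plays a combinatorial action, observes only its scalar loss, and reconstructs an unbiased estimate of the full d-dimensional loss vector.

```python
# Online linear optimisation with bandit feedback over a finite
# combinatorial action set (illustrative EXP2-style sketch).
import numpy as np

rng = np.random.default_rng(0)
actions = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)  # K=3, d=3
theta = np.array([0.5, 0.2, 0.8])              # hidden loss vector (unknown to learner)
K, d = actions.shape
eta, gamma = 0.05, 0.1                         # learning rate, exploration rate
w = np.ones(K)

for t in range(2000):
    p = (1 - gamma) * w / w.sum() + gamma / K  # exponential weights + exploration
    i = rng.choice(K, p=p)
    loss = actions[i] @ theta                  # bandit feedback: one scalar only
    P = (actions.T * p) @ actions              # E[a a^T] under p
    theta_hat = np.linalg.pinv(P) @ actions[i] * loss  # unbiased estimate of theta
    w *= np.exp(-eta * (actions @ theta_hat))  # penalise estimated losses
    w /= w.sum()                               # renormalise for numerical stability

print(actions[np.argmax(w)])                   # converges towards [1, 1, 0] (lowest loss)
```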