1 code implementation • 18 Mar 2024 • Miltiadis Kofinas, Boris Knyazev, Yan Zhang, Yunlu Chen, Gertjan J. Burghouts, Efstratios Gavves, Cees G. M. Snoek, David W. Zhang
Neural networks that process the parameters of other neural networks find applications in domains as diverse as classifying implicit neural representations, generating neural network weights, and predicting generalization errors.
no code implementations • 6 Feb 2024 • Aviv Shamsian, Aviv Navon, David W. Zhang, Yan Zhang, Ethan Fetaya, Gal Chechik, Haggai Maron
Learning in deep weight spaces (DWS), where neural networks process the weights of other neural networks, is an emerging research direction, with applications to 2D and 3D neural fields (INRs, NeRFs), as well as making inferences about other types of neural networks.
no code implementations • 19 Dec 2023 • Leander van den Heuvel, Gertjan Burghouts, David W. Zhang, Gwenn Englebienne, Sabina B. van Rooij
For object detection, it is possible to view the prediction of bounding boxes as a reverse diffusion process.
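The idea can be pictured with a toy numeric sketch (not the paper's actual model): a clean box is progressively mixed with Gaussian noise in a forward process, and a reverse process repeatedly refines a noisy box using a denoiser. Here the denoiser is a stand-in oracle lambda, purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(box, t, T=100):
    """Toy forward process: linearly blend a clean box with Gaussian noise."""
    alpha = 1.0 - t / T                       # signal level decays with timestep
    return alpha * box + (1.0 - alpha) * rng.normal(size=box.shape)

def reverse_denoise(noisy_box, denoiser, T=100):
    """Toy reverse process: repeatedly refine the box toward a denoised prediction."""
    box = noisy_box
    for t in range(T, 0, -1):
        pred = denoiser(box, t)               # predict the clean box at step t
        alpha = 1.0 - (t - 1) / T             # signal level at the next step
        box = alpha * pred + (1.0 - alpha) * box
    return box

# Stand-in "oracle" denoiser that always predicts the true box (x1, y1, x2, y2).
target = np.array([10.0, 20.0, 50.0, 80.0])
noisy = forward_diffuse(target, t=100)        # near-pure noise at t = T
recovered = reverse_denoise(noisy, lambda b, t: target)
```

With a perfect denoiser the reverse process recovers the target box exactly; a learned denoiser would only approximate it.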
no code implementations • 15 Nov 2023 • Aviv Shamsian, David W. Zhang, Aviv Navon, Yan Zhang, Miltiadis Kofinas, Idan Achituve, Riccardo Valperga, Gertjan J. Burghouts, Efstratios Gavves, Cees G. M. Snoek, Ethan Fetaya, Gal Chechik, Haggai Maron
Learning in weight spaces, where neural networks process the weights of other deep neural networks, has emerged as a promising research direction with applications in various fields, from analyzing and editing neural fields and implicit neural representations, to network pruning and quantization.
1 code implementation • 30 Jan 2023 • Yan Zhang, David W. Zhang, Simon Lacoste-Julien, Gertjan J. Burghouts, Cees G. M. Snoek
Slot attention is a powerful method for object-centric modeling in images and videos.
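The core iteration of slot attention can be sketched in a few lines of numpy. This is a simplified single step, omitting the GRU/MLP slot update and layer normalization of the full method; the key detail is that the attention softmax is taken over the slot axis, so slots compete for input elements.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention_step(slots, inputs, Wq, Wk, Wv):
    """One simplified slot-attention iteration (no GRU/MLP update, no LayerNorm)."""
    q = slots @ Wq                            # (K, D) one query per slot
    k = inputs @ Wk                           # (N, D) one key per input element
    v = inputs @ Wv                           # (N, D) values
    logits = q @ k.T / np.sqrt(q.shape[-1])   # (K, N) slot-to-input scores
    attn = softmax(logits, axis=0)            # slots compete for each input element
    attn = attn / attn.sum(axis=1, keepdims=True)   # weighted-mean normalization
    return attn @ v                           # (K, D) updated slots

D, N, K = 8, 5, 3
inputs = rng.normal(size=(N, D))
slots = rng.normal(size=(K, D))               # randomly initialized slots
Wq, Wk, Wv = (rng.normal(size=(D, D)) for _ in range(3))
for _ in range(3):                            # a few refinement iterations
    slots = slot_attention_step(slots, inputs, Wq, Wk, Wv)
```

Normalizing over slots rather than inputs is what makes the slots partition the input into object-like groups.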
2 code implementations • 17 Jan 2023 • David W. Zhang, Corrado Rainone, Markus Peschl, Roberto Bondesan
Finding the best way to schedule operations in a computation graph is a classical NP-hard problem which is central to compiler optimization.
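A classical baseline for this problem is greedy list scheduling over the dependency DAG: at each cycle, issue as many ready operations as there are execution units. This is a generic heuristic for context, not the paper's learned approach.

```python
from collections import deque

def list_schedule(ops, deps, num_units):
    """Greedy list scheduling: each cycle runs up to `num_units` ready ops.
    ops:  iterable of op names
    deps: dict mapping an op to the list of ops it depends on
    Returns a list of cycles, each the list of ops issued that cycle."""
    indegree = {op: len(deps.get(op, [])) for op in ops}
    users = {op: [] for op in ops}
    for op, ds in deps.items():
        for d in ds:
            users[d].append(op)               # reverse edges: who waits on d
    ready = deque(op for op in ops if indegree[op] == 0)
    schedule = []
    while ready:
        cycle = [ready.popleft() for _ in range(min(num_units, len(ready)))]
        schedule.append(cycle)
        for op in cycle:                      # release ops whose deps are done
            for u in users[op]:
                indegree[u] -= 1
                if indegree[u] == 0:
                    ready.append(u)
    return schedule

# A diamond-shaped computation graph: a feeds b and c, which both feed d.
ops = ['a', 'b', 'c', 'd']
deps = {'b': ['a'], 'c': ['a'], 'd': ['b', 'c']}
schedule = list_schedule(ops, deps, num_units=2)
```

The NP-hardness shows up when resource limits and op latencies make greedy choices suboptimal; heuristics like this give feasible but not optimal schedules.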
1 code implementation • ICLR 2022 • Yan Zhang, David W. Zhang, Simon Lacoste-Julien, Gertjan J. Burghouts, Cees G. M. Snoek
Most set prediction models in deep learning use set-equivariant operations, but they actually operate on multisets.
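The set-versus-multiset distinction can be demonstrated concretely. A minimal set-equivariant layer (in the DeepSets style: each element mixed with a pooled summary) permutes its output whenever the input is permuted, and as a direct consequence it must map equal elements to equal outputs, so duplicates in a multiset can never be told apart.

```python
import numpy as np

def equivariant_layer(x, w_self=2.0, w_pool=0.5):
    """A simple set-equivariant layer: each output mixes the element itself
    with a pooled summary of the whole (multi)set."""
    return w_self * x + w_pool * x.sum(axis=0, keepdims=True)

x = np.array([[1.0], [3.0], [1.0]])   # a multiset: the value 1.0 appears twice
y = equivariant_layer(x)

# Equivariance: permuting the input permutes the output the same way.
perm = [2, 0, 1]
assert np.allclose(equivariant_layer(x[perm]), y[perm])

# The multiset issue: equal inputs always produce equal outputs, so no
# equivariant map can assign the two copies of 1.0 different values.
assert np.allclose(y[0], y[2])
```

This inability to break ties between duplicate elements is precisely why operating on multisets, rather than sets, matters for set prediction models.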
1 code implementation • 26 Jun 2021 • David W. Zhang, Gertjan J. Burghouts, Cees G. M. Snoek
We address two common scaling problems in set-to-hypergraph tasks that limit the size of the input set: the exponentially growing number of hyperedges and the run-time complexity, both of which increase memory requirements.
1 code implementation • ICLR 2021 • David W. Zhang, Gertjan J. Burghouts, Cees G. M. Snoek
In this paper, we propose an alternative to training via set losses by viewing learning as conditional density estimation.
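One way to see why a density view sidesteps set losses: if every element of the target set is scored under the same conditional density (here a Gaussian mixture standing in for a learned model; the parameters are illustrative, not from the paper), the total log-likelihood is automatically invariant to how the set is ordered, so no matching-based set loss is needed.

```python
import numpy as np

def set_log_likelihood(y_set, means, log_stds, weights):
    """Log-likelihood of an unordered set of scalars under a Gaussian mixture.
    Each element is scored against the same mixture, so the total is
    invariant to the ordering of the set."""
    y = np.asarray(y_set)[:, None]                       # (n, 1)
    mu = np.asarray(means)[None]                         # (1, k)
    s = np.exp(np.asarray(log_stds))[None]               # (1, k)
    log_comp = -0.5 * ((y - mu) / s) ** 2 - np.log(s) - 0.5 * np.log(2 * np.pi)
    log_mix = np.log(np.asarray(weights))[None] + log_comp
    # log-sum-exp over mixture components, then sum over set elements
    m = log_mix.max(axis=1, keepdims=True)
    per_elem = (m + np.log(np.exp(log_mix - m).sum(axis=1, keepdims=True))).ravel()
    return per_elem.sum()

# Any ordering of the same set receives the same likelihood.
params = ([0.0, 5.0], np.log([1.0, 1.0]), [0.5, 0.5])
ll_a = set_log_likelihood([0.1, 4.9, 5.2], *params)
ll_b = set_log_likelihood([5.2, 0.1, 4.9], *params)
```

Training then amounts to maximizing this conditional log-likelihood rather than minimizing a permutation-matched set loss.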