no code implementations • ICLR Workshop Neural_Compression 2021 • Rupam Acharyya, Boyu Zhang, Ankani Chattoraj, Shouman Das, Daniel Stefankovic
We then empirically show that DPP edge pruning for neural networks outperforms other competing methods (both edge and node) on real data.
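To illustrate the idea of determinantal-point-process (DPP) based pruning, here is a minimal sketch of diversity-driven edge selection. All names are illustrative, and the greedy log-determinant maximization shown is a common MAP approximation to DPP sampling, not the paper's exact procedure: each network edge is represented by a feature vector, and the DPP kernel favors subsets of edges that are mutually dissimilar.

```python
import numpy as np

def greedy_dpp_select(features, k):
    """Greedily pick k items maximizing the determinant of the
    similarity-kernel submatrix (greedy MAP approximation to a DPP).

    `features` holds one feature/weight vector per network edge;
    the kernel L = F F^T makes similar edges unlikely to be kept together,
    so the retained edges are diverse."""
    F = np.asarray(features, dtype=float)
    L = F @ F.T + 1e-6 * np.eye(len(F))  # PSD kernel with small jitter
    selected = []
    for _ in range(k):
        best, best_logdet = None, -np.inf
        for i in range(len(F)):
            if i in selected:
                continue
            idx = selected + [i]
            logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])[1]
            if logdet > best_logdet:
                best, best_logdet = i, logdet
        selected.append(best)
    return selected
```

With two duplicate edge features and one orthogonal one, the greedy selection keeps one of the duplicates and the orthogonal edge rather than both duplicates.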
no code implementations • ICLR Workshop Neural_Compression 2021 • Rupam Acharyya, Ankani Chattoraj, Boyu Zhang, Shouman Das, Daniel Stefankovic
Despite a multitude of empirical advances, there is a lack of theoretical understanding of the effectiveness of different pruning methods.
1 code implementation • 30 Jun 2020 • Rupam Acharyya, Ankani Chattoraj, Boyu Zhang, Shouman Das, Daniel Stefankovic
We inspect different pruning techniques under the statistical mechanics formulation of a teacher-student framework and derive their generalization error (GE) bounds.
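A toy version of the teacher-student setup can make the notion of generalization error (GE) concrete. The sketch below is illustrative only (a noiseless linear teacher with a least-squares student, not the paper's statistical-mechanics setting): a fixed "teacher" generates labels, a "student" is fit on finite data, and GE is estimated on fresh teacher-labeled inputs; magnitude pruning of the student then shows how GE degrades.

```python
import numpy as np

rng = np.random.default_rng(0)

# Teacher generates noiseless labels; student fits them from n_train samples.
d, n_train, n_test = 20, 50, 5000
teacher = rng.normal(size=d)

X = rng.normal(size=(n_train, d))
y = X @ teacher
student, *_ = np.linalg.lstsq(X, y, rcond=None)

# Generalization error: mean squared disagreement on fresh teacher data.
X_test = rng.normal(size=(n_test, d))
ge = np.mean((X_test @ student - X_test @ teacher) ** 2)

# Magnitude pruning: keep the 10 largest-magnitude student weights.
pruned = np.zeros(d)
keep = np.argsort(np.abs(student))[-10:]
pruned[keep] = student[keep]
ge_pruned = np.mean((X_test @ pruned - X_test @ teacher) ** 2)
```

In this noiseless, over-determined regime the unpruned student recovers the teacher exactly (GE near zero), while pruning half the weights produces a strictly positive GE.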
no code implementations • 6 Apr 2017 • Sejun Park, Yunhun Jang, Andreas Galanis, Jinwoo Shin, Daniel Stefankovic, Eric Vigoda
The Gibbs sampler is a particularly popular Markov chain used for learning and inference problems in Graphical Models (GMs).
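As a concrete instance, the following is a minimal Gibbs sampler for a small Ising model, one of the simplest graphical models. The setup (pairwise model over spins in {-1, +1}) is illustrative, not the paper's: each sweep resamples every variable from its exact conditional distribution given the current values of the others.

```python
import numpy as np

def gibbs_ising(J, h, n_sweeps, rng):
    """Gibbs sampler for p(x) ∝ exp(sum_ij J_ij x_i x_j / 2 + sum_i h_i x_i),
    with x_i in {-1, +1}. One sweep = resample every variable once
    from its conditional given the rest."""
    n = len(h)
    x = rng.choice([-1, 1], size=n)
    for _ in range(n_sweeps):
        for i in range(n):
            # Local field on x_i from its neighbours plus its bias term.
            field = J[i] @ x - J[i, i] * x[i] + h[i]
            # p(x_i = +1 | rest) = e^field / (e^field + e^-field)
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
            x[i] = 1 if rng.random() < p_plus else -1
    return x

# Usage: a strongly ferromagnetic 4-spin model drives all spins to align.
n = 4
J = 3.0 * (np.ones((n, n)) - np.eye(n))
x = gibbs_ising(J, np.zeros(n), 300, np.random.default_rng(1))
```

With strong positive couplings the chain mixes quickly into one of the two fully aligned configurations.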
no code implementations • ICML 2017 • Haichuan Yang, Shupeng Gui, Chuyang Ke, Daniel Stefankovic, Ryohei Fujimaki, Ji Liu
The cardinality constraint is an intrinsic way to restrict the solution structure in many domains, for example, sparse learning, feature selection, and compressed sensing.
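A standard building block for optimizing under a cardinality constraint is the Euclidean projection onto the set {x : ||x||_0 <= k}, which simply keeps the k largest-magnitude entries. This is a generic sketch of that projection (as used in iterative hard thresholding), not the specific algorithm of the paper:

```python
import numpy as np

def project_cardinality(x, k):
    """Euclidean projection onto the cardinality constraint ||x||_0 <= k:
    keep the k largest-magnitude entries of x, zero out the rest."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]  # indices of the k largest |x_i|
    out[keep] = x[keep]
    return out
```

In sparse learning or compressed sensing, this projection is typically applied after each gradient step to keep the iterate feasible.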
no code implementations • NeurIPS 2012 • Tivadar Papai, Henry Kautz, Daniel Stefankovic
In many applications, a Markov logic network (MLN) is trained in one domain, but used in a different one.