
Interpretable Set Functions

We propose learning flexible but interpretable functions that aggregate a variable-length set of permutation-invariant feature vectors to predict a label. We use a deep lattice network model so we can architect the model structure to enhance interpretability, and add monotonicity constraints between inputs and outputs...
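The sketch below is not the authors' implementation; it is a minimal Keras illustration of the two ingredients the abstract names: a permutation-invariant aggregation over a variable-length set (a shared per-element network followed by mean pooling), and a monotonic readout (here approximated with a non-negative weight constraint rather than the paper's lattice layers). The names `phi`, `FEATURE_DIM`, and `MAX_SET_SIZE` are illustrative assumptions.

```python
import tensorflow as tf

FEATURE_DIM = 8     # features per set element (assumed)
MAX_SET_SIZE = 20   # sets padded to this length (assumed)

# Shared per-element transform phi, applied identically to every set element.
phi = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),
])

elements = tf.keras.Input(shape=(MAX_SET_SIZE, FEATURE_DIM))
per_element = tf.keras.layers.TimeDistributed(phi)(elements)    # same phi per element
pooled = tf.keras.layers.GlobalAveragePooling1D()(per_element)  # mean pooling: order-invariant

# Non-negative weights make the prediction non-decreasing in each pooled feature,
# a crude stand-in for the paper's lattice-based monotonicity constraints.
output = tf.keras.layers.Dense(
    1, kernel_constraint=tf.keras.constraints.NonNeg())(pooled)

model = tf.keras.Model(inputs=elements, outputs=output)
model.summary()
```

Because every element passes through the same `phi` and the pooling is an average, reordering the set elements cannot change the prediction, which is the permutation-invariance property the abstract refers to.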
