Two Instances of Interpretable Neural Network for Universal Approximations
This paper proposes two bottom-up interpretable neural network (NN) constructions for universal approximation, namely the Triangularly-constructed NN (TNN) and the Semi-Quantized Activation NN (SQANN). Further notable properties are (1) resistance to catastrophic forgetting, (2) the existence of proofs for arbitrarily high accuracy, and (3) the ability to identify out-of-distribution samples through interpretable activation "fingerprints".
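The fingerprint-based out-of-distribution idea can be illustrated generically: record the hidden-layer activation pattern ("fingerprint") of each training sample, then flag a new input whose fingerprint is far from every stored one. The sketch below is an assumption-based illustration of this general mechanism, not the paper's actual TNN or SQANN construction; the names `fingerprint`, `is_ood`, and the distance threshold are all hypothetical.

```python
import numpy as np

def fingerprint(x, W, b):
    # Hypothetical "fingerprint": the ReLU hidden-layer activation pattern of x.
    return np.maximum(W @ x + b, 0.0)

def is_ood(x, train_fingerprints, W, b, threshold=1.0):
    # Flag a sample as out-of-distribution when its fingerprint is far
    # from every stored training fingerprint (nearest-neighbor distance).
    f = fingerprint(x, W, b)
    dists = np.linalg.norm(train_fingerprints - f, axis=1)
    return bool(dists.min() > threshold)

# Toy setup: a fixed random hidden layer and a tight in-distribution cluster.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 2))
b = rng.standard_normal(8)
train = rng.standard_normal((50, 2)) * 0.1
train_fps = np.stack([fingerprint(x, W, b) for x in train])

print(is_ood(np.array([0.0, 0.05]), train_fps, W, b))    # near the training cluster
print(is_ood(np.array([10.0, -10.0]), train_fps, W, b))  # far from any training sample
```

A sample near the training cluster reproduces a stored fingerprint closely, while a distant sample produces activations unlike any stored pattern, which is what makes the flag interpretable: one can inspect which training fingerprints (if any) the input resembles.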