Multi-hypothesis classifier

20 Aug 2019  ·  Sayantan Sengupta, Sudip Sanyal

Accuracy is one of the most important measures of the effectiveness of a machine learning algorithm, and higher accuracy is always desirable. A vast number of well-established learning algorithms already exist in the scientific literature, each with its own merits and demerits, evaluated in terms of accuracy, speed of convergence, algorithmic complexity, generalization, and robustness, among others. Learning algorithms are also data-distribution dependent: each is suited to a particular distribution of data. Unfortunately, no classifier dominates across all data distributions, and the distribution of the data at hand is usually unknown. Moreover, no single classifier can be discriminative enough when the number of classes is very large. The underlying problem is therefore that a single classifier is not enough to classify the whole sample space correctly. This thesis explores different techniques for combining classifiers so as to obtain the optimal accuracy. Three classifiers are implemented: a plain nearest-neighbor classifier on raw pixels, a nearest-neighbor classifier on structural features, and a nearest-neighbor classifier on Gabor features. Five different combination strategies are devised, tested on Tibetan character images, and analyzed.
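As a rough illustration of the kind of combination the abstract describes, below is a minimal sketch of one plausible strategy: majority voting over three 1-nearest-neighbor classifiers trained on different feature representations. The feature extractors here (`profile_features` as a stand-in for structural features, and a single-orientation `gabor_features`) are hypothetical simplifications, not the paper's actual pipelines, and plurality voting is only one of many possible combiners.

```python
import numpy as np
from scipy.signal import convolve2d
from sklearn.neighbors import KNeighborsClassifier


def raw_pixel_features(images):
    """Flatten each image into a vector of raw pixel intensities."""
    return images.reshape(len(images), -1)


def profile_features(images):
    """Hypothetical stand-in for structural features:
    row and column intensity profiles of each image."""
    rows = images.mean(axis=2)  # (n, height)
    cols = images.mean(axis=1)  # (n, width)
    return np.concatenate([rows, cols], axis=1)


def gabor_features(images, theta=0.0, sigma=2.0, lam=4.0):
    """Single-orientation Gabor response magnitude; a real system
    would use a bank of orientations and frequencies."""
    half = 4
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    kernel = (np.exp(-(x**2 + y**2) / (2 * sigma**2))
              * np.cos(2 * np.pi * xr / lam))
    responses = [convolve2d(img, kernel, mode="same") for img in images]
    return np.abs(np.stack(responses)).reshape(len(images), -1)


def majority_vote(predictions):
    """Combine per-classifier label predictions by plurality vote.
    Assumes non-negative integer labels; ties break toward the
    lowest label index."""
    preds = np.stack(predictions, axis=0)  # (n_classifiers, n_samples)
    return np.apply_along_axis(
        lambda col: np.bincount(col).argmax(), 0, preds)


def fit_and_predict(train_imgs, train_labels, test_imgs):
    """Train one 1-NN classifier per feature space and vote."""
    extractors = [raw_pixel_features, profile_features, gabor_features]
    votes = []
    for extract in extractors:
        clf = KNeighborsClassifier(n_neighbors=1)
        clf.fit(extract(train_imgs), train_labels)
        votes.append(clf.predict(extract(test_imgs)))
    return majority_vote(votes)


# Toy usage with random 32x32 "character" images and integer labels:
rng = np.random.default_rng(0)
X_train = rng.random((100, 32, 32))
y_train = rng.integers(0, 5, size=100)
X_test = rng.random((10, 32, 32))
print(fit_and_predict(X_train, y_train, X_test))
```

Voting is attractive because the three feature spaces make largely independent errors, so the ensemble can correct a mistake by any single classifier; weighted voting or confidence-based selection are natural refinements.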
