Search Results for author: Bimal Bhattarai

Found 9 papers, 6 papers with code

ConvTextTM: An Explainable Convolutional Tsetlin Machine Framework for Text Classification

no code implementations • LREC 2022 • Bimal Bhattarai, Ole-Christoffer Granmo, Lei Jiao

Recent advancements in natural language processing (NLP) have reshaped the industry, with powerful language models such as GPT-3 achieving superhuman performance on various tasks.

Decision Making • Document Classification • +2

Contracting Tsetlin Machine with Absorbing Automata

no code implementations • 17 Oct 2023 • Bimal Bhattarai, Ole-Christoffer Granmo, Lei Jiao, Per-Arne Andersen, Svein Anders Tunheim, Rishad Shafik, Alex Yakovlev

In brief, the Tsetlin Automaton (TA) of each clause literal has both an absorbing Exclude state and an absorbing Include state, making the learning scheme absorbing instead of ergodic.
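The snippet above describes the key structural change: each literal's automaton can lock into its final decision. A minimal sketch of that idea follows, assuming a simple two-action automaton with a fixed number of states per action; the state numbering and feedback signals are illustrative, not the paper's implementation.

```python
# Sketch of an absorbing two-action Tsetlin Automaton (TA) for one clause
# literal: once it reaches the deepest Exclude or deepest Include state it
# stops responding to feedback, so the literal's decision is frozen.
# Depth and state layout are illustrative assumptions.

class AbsorbingTA:
    def __init__(self, depth=3):
        self.depth = depth            # states per action
        self.state = depth            # start at the weakest Exclude state
        # states 1..depth          -> action "exclude"
        # states depth+1..2*depth  -> action "include"

    def action(self):
        return "include" if self.state > self.depth else "exclude"

    def absorbed(self):
        # Deepest Exclude (1) and deepest Include (2*depth) are absorbing.
        return self.state in (1, 2 * self.depth)

    def reward(self):
        if self.absorbed():
            return                    # frozen: no further state changes
        # A reward pushes the TA deeper into its current action.
        self.state += 1 if self.action() == "include" else -1

    def penalty(self):
        if self.absorbed():
            return                    # an ergodic TA would still move here
        # A penalty pushes the TA towards the opposite action.
        self.state += -1 if self.action() == "include" else 1
```

Once absorbed, an automaton no longer needs feedback, so the amount of per-example work shrinks as learning converges; an ergodic TA, by contrast, can always drift back out of its extreme states.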

Verifying Properties of Tsetlin Machines

1 code implementation • 25 Mar 2023 • Emilia Przybysz, Bimal Bhattarai, Cosimo Persia, Ana Ozaki, Ole-Christoffer Granmo, Jivitesh Sharma

Then, we show the correctness of our encoding and provide results for the properties: adversarial robustness, equivalence, and similarity of TsMs.

Adversarial Robustness • Interpretable Machine Learning • +2

Measuring the Novelty of Natural Language Text Using the Conjunctive Clauses of a Tsetlin Machine Text Classifier

5 code implementations • 17 Nov 2020 • Bimal Bhattarai, Ole-Christoffer Granmo, Lei Jiao

The mechanism uses the conjunctive clauses of the TM to measure the degree to which a text matches the classes covered by the training data.

Novelty Detection • Text Classification
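The clause-matching idea behind that novelty signal can be sketched roughly as follows, assuming clauses are stored as index masks over a binary bag-of-words vector. The matching rule, scoring, and threshold are illustrative assumptions, not the paper's exact formulation (which, for example, distinguishes positive and negative clause polarities).

```python
import numpy as np

# Hypothetical representation: each clause is a pair (pos_idx, neg_idx) of
# feature indices that must be 1 (positive literals) and 0 (negated literals)
# for the clause to match a binarized text x.

def clause_matches(x, pos_idx, neg_idx):
    return np.all(x[pos_idx] == 1) and np.all(x[neg_idx] == 0)

def class_score(x, clauses):
    # Number of a class's conjunctive clauses that cover the text.
    return sum(clause_matches(x, p, n) for p, n in clauses)

def is_novel(x, clauses_per_class, threshold=1):
    # If no class accumulates enough matching clauses, the text lies
    # outside the regions covered by the training data.
    return all(class_score(x, cls) < threshold for cls in clauses_per_class)

# Toy usage with hand-made clauses for a single class.
x = np.array([1, 0, 1, 1, 0, 0])
clauses_class0 = [(np.array([0, 2]), np.array([1])),
                  (np.array([3]), np.array([4, 5]))]
print(class_score(x, clauses_class0))   # 2
print(is_novel(x, [clauses_class0]))    # False
```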

Massively Parallel and Asynchronous Tsetlin Machine Architecture Supporting Almost Constant-Time Scaling

2 code implementations • 10 Sep 2020 • K. Darshana Abeyrathna, Bimal Bhattarai, Morten Goodwin, Saeed Gorji, Ole-Christoffer Granmo, Lei Jiao, Rupsa Saha, Rohan K. Yadav

We evaluated the proposed parallelization across diverse learning tasks, finding that our decentralized TM learning algorithm copes well with working on outdated data, with no significant loss in learning accuracy.
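A rough sketch of the asynchronous scheduling idea follows, using a toy Clause placeholder and thread-based parallelism rather than the paper's implementation: each clause is updated against a cached, possibly stale, snapshot of the class sum, which is only refreshed after the parallel pass.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

# Placeholder clause: the feedback rule below stands in for the TM's actual
# Type I / Type II feedback and is only meant to show where stale voting
# data enters the update.

class Clause:
    def __init__(self, n_features, rng):
        self.weights = rng.integers(0, 2, n_features)

    def vote(self, x):
        return int(np.all(x[self.weights == 1] == 1))

    def update(self, x, stale_class_sum):
        # Real TM feedback depends on the class sum, which here may lag
        # behind the other clauses' latest updates.
        if stale_class_sum < 0:
            self.weights &= x

def train_step(clauses, x, stale_class_sum, pool):
    # All clauses update concurrently against the same stale snapshot.
    list(pool.map(lambda c: c.update(x, stale_class_sum), clauses))
    # The snapshot is refreshed only after the parallel pass.
    return sum(c.vote(x) for c in clauses)

rng = np.random.default_rng(0)
clauses = [Clause(8, rng) for _ in range(16)]
x = rng.integers(0, 2, 8)
with ThreadPoolExecutor() as pool:
    class_sum = 0
    for _ in range(3):
        class_sum = train_step(clauses, x, class_sum, pool)
```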
