Search Results for author: Ashwin Srinivasan

Found 24 papers, 4 papers with code

Domain-Specific Pre-training Improves Confidence in Whole Slide Image Classification

1 code implementation20 Feb 2023 Soham Rohit Chitnis, Sidong Liu, Tirtharaj Dash, Tanmay Tulsidas Verlekar, Antonio Di Ieva, Shlomo Berkovsky, Lovekesh Vig, Ashwin Srinivasan

To investigate the effect of domain-specific pre-training, we considered the current state-of-the-art multiple-instance learning models, 1) CLAM, an attention-based model, and 2) TransMIL, a self-attention-based model, and evaluated the models' confidence and predictive performance in detecting primary brain tumors (gliomas).

Image Classification Multiple Instance Learning +1
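The snippet above describes attention-based multiple-instance learning, where a whole-slide image is treated as a "bag" of patch embeddings and attention weights decide each patch's contribution to the slide-level prediction. The following is a minimal numpy sketch of that pooling idea (in the spirit of CLAM); the parameters `V` and `w` and the random bag are illustrative, not the paper's actual model.

```python
import numpy as np

def attention_mil_pool(patch_embeddings, V, w):
    """Attention pooling over a bag of patch embeddings.

    Each patch gets a scalar attention score; a softmax over patches
    yields weights, and the slide-level representation is the
    attention-weighted sum of patch embeddings.
    """
    # One scalar score per patch: shape (N,)
    scores = np.tanh(patch_embeddings @ V) @ w
    # Softmax over patches -> non-negative weights summing to 1
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Weighted sum gives a single slide-level embedding
    return weights @ patch_embeddings, weights

rng = np.random.default_rng(0)
bag = rng.normal(size=(50, 16))   # 50 patches, 16-d features each
V = rng.normal(size=(16, 8))      # illustrative attention parameters
w = rng.normal(size=(8,))
slide_vec, attn = attention_mil_pool(bag, V, w)
```

The attention weights `attn` also serve as a built-in interpretability signal: high-weight patches are the regions the model relied on.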

Neuro-symbolic Meta Reinforcement Learning for Trading

no code implementations15 Jan 2023 S I Harini, Gautam Shroff, Ashwin Srinivasan, Prayushi Faldu, Lovekesh Vig

We model short-duration (e.g., day) trading in financial markets as a sequential decision-making problem under uncertainty, with the added complication of continual concept drift.

Decision Making Meta Reinforcement Learning +3

A Protocol for Intelligible Interaction Between Agents That Learn and Explain

no code implementations4 Jan 2023 Ashwin Srinivasan, Michael Bain, A. Baskar, Enrico Coiera

In this paper we view the interaction between humans and ML systems within the broader context of interaction between agents capable of learning and explanation.

Neural Feature-Adaptation for Symbolic Predictions Using Pre-Training and Semantic Loss

no code implementations29 Nov 2022 Vedant Shah, Aditya Agrawal, Lovekesh Vig, Ashwin Srinivasan, Gautam Shroff, Tanmay Verlekar

Real data is often imperfect, meaning that even if the symbolic theory remains unchanged, we may still need to address the problem of mapping raw data to high-level symbols each time there is a change in the data acquisition environment or equipment.

Knowledge-based Analogical Reasoning in Neuro-symbolic Latent Spaces

no code implementations19 Sep 2022 Vishwa Shah, Aditya Sharma, Gautam Shroff, Lovekesh Vig, Tirtharaj Dash, Ashwin Srinivasan

However, connectionist models struggle to include explicit domain knowledge for deductive reasoning.

Composition of Relational Features with an Application to Explaining Black-Box Predictors

1 code implementation1 Jun 2022 Ashwin Srinivasan, A Baskar, Tirtharaj Dash, Devanshu Shah

Using a notion of explanations based on the compositional structure of features in a CRM, we provide empirical evidence on synthetic data of the ability to identify appropriate explanations; and demonstrate the use of CRMs as 'explanation machines' for black-box models that do not provide explanations for their predictions.

Inductive logic programming

One-way Explainability Isn't The Message

no code implementations5 May 2022 Ashwin Srinivasan, Michael Bain, Enrico Coiera

We propose operational principles -- we call them Intelligibility Axioms -- to guide the design of a collaborative decision-support system.

Solving Visual Analogies Using Neural Algorithmic Reasoning

no code implementations19 Nov 2021 Atharv Sonwane, Gautam Shroff, Lovekesh Vig, Ashwin Srinivasan, Tirtharaj Dash

We consider a class of visual analogical reasoning problems that involve discovering the sequence of transformations by which pairs of input/output images are related, so as to analogously transform future inputs.

Program Synthesis Visual Analogies

Zero-Shot Dense Retrieval with Momentum Adversarial Domain Invariant Representation

no code implementations29 Sep 2021 Ji Xin, Chenyan Xiong, Ashwin Srinivasan, Ankita Sharma, Damien Jose, Paul N. Bennett

Dense retrieval (DR) methods conduct text retrieval by first encoding texts in the embedding space and then matching them by nearest neighbor search.

Representation Learning Retrieval +1
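The dense retrieval pipeline described above (encode texts into an embedding space, then match by nearest-neighbor search) can be sketched in a few lines. This assumes the encoder has already produced vectors; the cosine-similarity search shown is a generic illustration, not the paper's momentum-adversarial method.

```python
import numpy as np

def dense_retrieve(query_vec, doc_matrix, k=3):
    """Return indices of the k nearest documents by cosine similarity.

    doc_matrix: (num_docs, dim) array of document embeddings.
    query_vec: (dim,) query embedding from the same encoder.
    """
    # Normalize so that dot product equals cosine similarity
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_matrix / np.linalg.norm(doc_matrix, axis=1, keepdims=True)
    sims = d @ q
    # Indices of the k most similar documents, best first
    return np.argsort(-sims)[:k]
```

In practice the exhaustive search above is replaced by an approximate nearest-neighbor index (e.g. FAISS) once the corpus is large.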

A Review of Some Techniques for Inclusion of Domain-Knowledge into Deep Neural Networks

no code implementations21 Jul 2021 Tirtharaj Dash, Sharad Chitlangia, Aditya Ahuja, Ashwin Srinivasan

We present a survey of ways in which existing scientific knowledge is included when constructing models with neural networks.

Inclusion of Domain-Knowledge into GNNs using Mode-Directed Inverse Entailment

1 code implementation22 May 2021 Tirtharaj Dash, Ashwin Srinivasan, A Baskar

We also provide experimental evidence comparing BotGNNs favourably to multi-layer perceptrons (MLPs) that use features representing a "propositionalised" form of the background knowledge; and BotGNNs to a standard ILP based on the use of most-specific clauses.

Inductive logic programming

Incorporating Domain Knowledge into Deep Neural Networks

no code implementations27 Feb 2021 Tirtharaj Dash, Sharad Chitlangia, Aditya Ahuja, Ashwin Srinivasan

We present a survey of ways in which domain-knowledge has been included when constructing models with neural networks.

Constructing and Evaluating an Explainable Model for COVID-19 Diagnosis from Chest X-rays

no code implementations19 Dec 2020 Rishab Khincha, Soundarya Krishnan, Tirtharaj Dash, Lovekesh Vig, Ashwin Srinivasan

In this paper, deep neural networks are used to extract domain-specific features (morphological features like ground-glass opacity and disease indications like pneumonia) directly from the image data.

COVID-19 Diagnosis

Incorporating Symbolic Domain Knowledge into Graph Neural Networks

2 code implementations23 Oct 2020 Tirtharaj Dash, Ashwin Srinivasan, Lovekesh Vig

These kinds of problems have been addressed effectively in the past by Inductive Logic Programming (ILP), by virtue of two important characteristics: (a) the use of a representation language that easily captures the relations encoded in graph-structured data, and (b) the inclusion of prior information, encoded as domain-specific relations, that can alleviate problems of data scarcity and construct new relations.

Inductive logic programming

One-shot Information Extraction from Document Images using Neuro-Deductive Program Synthesis

no code implementations6 Jun 2019 Vishal Sunder, Ashwin Srinivasan, Lovekesh Vig, Gautam Shroff, Rohit Rahul

Our interest in this paper is in meeting a rapidly growing industrial demand for information extraction from images of documents such as invoices, bills, and receipts.

Program Synthesis

Deep Reader: Information extraction from Document images via relation extraction and Natural Language

no code implementations11 Dec 2018 Vishwanath D, Rohit Rahul, Gunjan Sehgal, Swati, Arindam Chowdhury, Monika Sharma, Lovekesh Vig, Gautam Shroff, Ashwin Srinivasan

In this paper, we propose a novel enterprise based end-to-end framework called DeepReader which facilitates information extraction from document images via identification of visual entities and populating a meta relational model across different entities in the document image.

Optical Character Recognition Optical Character Recognition (OCR) +2

Logical Explanations for Deep Relational Machines Using Relevance Information

no code implementations2 Jul 2018 Ashwin Srinivasan, Lovekesh Vig, Michael Bain

We investigate the use of a Bayes-like approach to identify logical proxies for local predictions of a DRM.

Inductive logic programming

Neuro-symbolic EDA-based Optimisation using ILP-enhanced DBNs

no code implementations20 Dec 2016 Sarmimala Saikia, Lovekesh Vig, Ashwin Srinivasan, Gautam Shroff, Puneet Agarwal, Richa Rawat

We investigate solving discrete optimisation problems using the estimation of distribution (EDA) approach via a novel combination of deep belief networks (DBNs) and inductive logic programming (ILP). While DBNs are used to learn the structure of successively better feasible solutions, ILP enables the incorporation of domain-based background knowledge related to the goodness of solutions. Recent work showed that ILP could be an effective way to use domain knowledge in an EDA scenario. However, in a purely ILP-based EDA, sampling successive populations is either inefficient or not straightforward.

In our neuro-symbolic EDA, an ILP engine is used to construct a model for good solutions using domain-based background knowledge. These rules are introduced as Boolean features in the last hidden layer of DBNs used for EDA-based optimisation. This incorporation of logical ILP features requires some changes while training and sampling from DBNs: (a) our DBNs need to be trained with data for units at the input layer as well as some units in an otherwise hidden layer, and (b) we would like the samples generated to be drawn from instances entailed by the logical model.

We demonstrate the viability of our approach on instances of two optimisation problems: predicting optimal depth-of-win for the KRK endgame, and job-shop scheduling. Our results are promising: (i) on each iteration of distribution estimation, samples obtained with an ILP-assisted DBN have a substantially greater proportion of good solutions than samples generated using a DBN without ILP features, and (ii) on termination of distribution estimation, samples obtained using an ILP-assisted DBN contain more near-optimal samples than samples from a DBN without ILP features. These results suggest that the use of ILP-constructed theories could be useful for incorporating complex domain-knowledge into deep models for estimation-of-distribution-based procedures.

Inductive logic programming
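The abstract above describes two mechanical steps: ILP-constructed rules become Boolean features clamped alongside hidden units, and sampling is restricted to candidates entailed by the logical model. A minimal, illustrative sketch of both steps follows; the rules, bit-vector encoding, and rejection sampling are assumptions for illustration, not the authors' actual DBN training procedure.

```python
import numpy as np

# Hypothetical ILP-derived rules, each a Boolean test on a candidate
# solution x (here encoded as a bit-vector). In the paper these would
# come from an ILP engine using domain-based background knowledge.
rules = [
    lambda x: x[0] == 1,      # illustrative: "first position is set"
    lambda x: x.sum() >= 3,   # illustrative: "at least 3 bits on"
]

def ilp_features(x):
    """Boolean ILP-rule indicators, one extra feature per rule."""
    return np.array([float(r(x)) for r in rules])

def augment_hidden(hidden, x):
    """Append ILP-rule features to ordinary hidden-layer activations,
    mimicking Boolean units added to the DBN's last hidden layer."""
    return np.concatenate([hidden, ilp_features(x)])

def sample_entailed(sampler, n, max_tries=1000):
    """Rejection sampling: keep only candidates entailed by all rules,
    approximating 'samples drawn from instances entailed by the model'."""
    out = []
    for _ in range(max_tries):
        x = sampler()
        if all(r(x) for r in rules):
            out.append(x)
        if len(out) == n:
            break
    return out

rng = np.random.default_rng(0)
samples = sample_entailed(lambda: rng.integers(0, 2, size=6), n=5)
```

Every returned sample satisfies all the rules, so the next EDA population is biased toward regions the ILP theory marks as good.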

Generation of Near-Optimal Solutions Using ILP-Guided Sampling

no code implementations3 Aug 2016 Ashwin Srinivasan, Gautam Shroff, Lovekesh Vig, Sarmimala Saikia, Puneet Agarwal

To answer this in the affirmative, we need: (a) a general-purpose technique for the incorporation of domain knowledge when constructing models for optimal values; and (b) a way of using these models to generate new data samples.

Inductive logic programming Job Shop Scheduling +1

Consensus-Based Modelling using Distributed Feature Construction

no code implementations11 Sep 2014 Haimonti Dutta, Ashwin Srinivasan

That is, there is a network of computational units, each of which employs an ILP engine to construct some small number of features and then builds a (local) model.

Inductive logic programming
