no code implementations • 18 Mar 2025 • Shraddha Surana, Ashwin Srinivasan
We test the tool on two different non-trivial scientific data analysis tasks.
1 code implementation • 27 Oct 2024 • Ashwin Srinivasan, Karan Bania, Shreyas V, Harshvardhan Mestha, Sidong Liu
Here, we address this shortcoming for the case in which one of the agents acts as a "generator" using a large language model (LLM) and the other acts as a "tester" using either a human expert or a proxy for a human expert (for example, a database compiled using human expertise).
no code implementations • 18 Aug 2024 • Emmanuel Aboah Boateng, Cassiano O. Becker, Nabiha Asghar, Kabir Walia, Ashwin Srinivasan, Ehi Nosakhare, Victor Dibia, Soundar Srinivasan
Hand-crafting high quality prompts to optimize the performance of language models is a complicated and labor-intensive process.
no code implementations • 4 Mar 2023 • Shreyas Bhat Brahmavar, Rohit Rajesh, Tirtharaj Dash, Lovekesh Vig, Tanmay Tulsidas Verlekar, Md Mahmudul Hasan, Tariq Khan, Erik Meijering, Ashwin Srinivasan
Deep neural network (DNN) models for retinopathy have estimated predictive accuracies in the mid-to-high 90% range.
1 code implementation • 20 Feb 2023 • Soham Rohit Chitnis, Sidong Liu, Tirtharaj Dash, Tanmay Tulsidas Verlekar, Antonio Di Ieva, Shlomo Berkovsky, Lovekesh Vig, Ashwin Srinivasan
To investigate the effect of domain-specific pre-training, we considered the current state-of-the-art multiple-instance learning models, 1) CLAM, an attention-based model, and 2) TransMIL, a self-attention-based model, and evaluated the models' confidence and predictive performance in detecting primary brain tumors (gliomas).
no code implementations • 15 Jan 2023 • S I Harini, Gautam Shroff, Ashwin Srinivasan, Prayushi Faldu, Lovekesh Vig
We model short-duration (e.g., day) trading in financial markets as a sequential decision-making problem under uncertainty, with the added complication of continual concept-drift.
1 code implementation • 4 Jan 2023 • A. Baskar, Ashwin Srinivasan, Michael Bain, Enrico Coiera
Machine Learning (ML) has emerged as a powerful form of data modelling with widespread applicability beyond its roots in the design of autonomous agents.
no code implementations • 29 Nov 2022 • Vedant Shah, Aditya Agrawal, Lovekesh Vig, Ashwin Srinivasan, Gautam Shroff, Tanmay Verlekar
Real data is often imperfect, meaning that even if the symbolic theory remains unchanged, we may still need to address the problem of mapping raw data to high-level symbols each time there is a change in the data acquisition environment or equipment.
no code implementations • 19 Sep 2022 • Vishwa Shah, Aditya Sharma, Gautam Shroff, Lovekesh Vig, Tirtharaj Dash, Ashwin Srinivasan
However, connectionist models struggle to include explicit domain knowledge for deductive reasoning.
1 code implementation • 1 Jun 2022 • Ashwin Srinivasan, A Baskar, Tirtharaj Dash, Devanshu Shah
Using a notion of explanations based on the compositional structure of features in a CRM, we provide empirical evidence on synthetic data of the ability to identify appropriate explanations; and demonstrate the use of CRMs as 'explanation machines' for black-box models that do not provide explanations for their predictions.
no code implementations • 5 May 2022 • Ashwin Srinivasan, Michael Bain, Enrico Coiera
We propose operational principles -- we call them Intelligibility Axioms -- to guide the design of a collaborative decision-support system.
no code implementations • 19 Nov 2021 • Atharv Sonwane, Gautam Shroff, Lovekesh Vig, Ashwin Srinivasan, Tirtharaj Dash
We consider a class of visual analogical reasoning problems that involve discovering the sequence of transformations by which pairs of input/output images are related, so as to analogously transform future inputs.
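The problem class above — discovering a sequence of transformations consistent with given input/output pairs, then applying it to new inputs — can be sketched as a brute-force search over compositions drawn from a small transformation library. The three grid operations and the example pairs below are illustrative assumptions; the paper's transformation vocabulary and search procedure are richer.

```python
from itertools import product

# A tiny library of grid transformations (assumed for illustration only).
def rot90(g):  return [list(r) for r in zip(*g[::-1])]   # rotate 90 degrees
def flip_h(g): return [row[::-1] for row in g]           # mirror horizontally
def invert(g): return [[1 - v for v in row] for row in g]  # swap 0s and 1s

OPS = {"rot90": rot90, "flip_h": flip_h, "invert": invert}

def apply_seq(grid, seq):
    """Apply a named sequence of transformations to a grid."""
    for name in seq:
        grid = OPS[name](grid)
    return grid

def discover(pairs, max_len=2):
    """Brute-force search for the shortest transformation sequence
    consistent with every input/output pair."""
    for n in range(1, max_len + 1):
        for seq in product(OPS, repeat=n):
            if all(apply_seq(x, seq) == y for x, y in pairs):
                return list(seq)
    return None

pairs = [
    ([[1, 0], [0, 0]], [[1, 0], [1, 1]]),
    ([[1, 1], [0, 1]], [[0, 0], [0, 1]]),
]
print(discover(pairs))  # ['flip_h', 'invert'] fits both pairs
```

Once a sequence is found, `apply_seq` transforms future inputs analogously; real instances would of course need a much larger operation library and a guided (rather than exhaustive) search.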
no code implementations • 19 Oct 2021 • Atharv Sonwane, Sharad Chitlangia, Tirtharaj Dash, Lovekesh Vig, Gautam Shroff, Ashwin Srinivasan
The ability to solve Bongard problems is an example of such a test.
no code implementations • Findings (ACL) 2022 • Ji Xin, Chenyan Xiong, Ashwin Srinivasan, Ankita Sharma, Damien Jose, Paul N. Bennett
Dense retrieval (DR) methods conduct text retrieval by first encoding texts in the embedding space and then matching them by nearest neighbor search.
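The two-stage DR pipeline described here — encode texts into an embedding space, then match by nearest-neighbour search — can be sketched with toy vectors. In a real system the embeddings would come from a trained dual encoder; the hand-picked 3-d vectors below are an assumption for illustration, and the search is exhaustive rather than approximate.

```python
import math

# Toy corpus "embeddings": in a real DR system these come from a trained
# encoder; the 3-d vectors below are hand-picked for illustration.
DOCS = {
    "doc0": [0.9, 0.1, 0.0],
    "doc1": [0.0, 1.0, 0.1],
    "doc2": [0.1, 0.0, 0.95],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query_vec, docs, k=2):
    """Exhaustive nearest-neighbour search by cosine similarity."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

query = [0.85, 0.2, 0.05]     # embedding of the (already encoded) query text
print(retrieve(query, DOCS))  # ['doc0', 'doc1']
```

Production systems replace the exhaustive scan with an approximate nearest-neighbour index so that retrieval stays fast at corpus scale.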
no code implementations • 21 Jul 2021 • Tirtharaj Dash, Sharad Chitlangia, Aditya Ahuja, Ashwin Srinivasan
We present a survey of ways in which existing scientific knowledge is included when constructing models with neural networks.
1 code implementation • 22 May 2021 • Tirtharaj Dash, Ashwin Srinivasan, A Baskar
We also provide experimental evidence comparing BotGNNs favourably to multi-layer perceptrons (MLPs) that use features representing a "propositionalised" form of the background knowledge; and BotGNNs to a standard ILP based on the use of most-specific clauses.
no code implementations • 27 Feb 2021 • Tirtharaj Dash, Sharad Chitlangia, Aditya Ahuja, Ashwin Srinivasan
We present a survey of ways in which domain-knowledge has been included when constructing models with neural networks.
no code implementations • 19 Dec 2020 • Rishab Khincha, Soundarya Krishnan, Tirtharaj Dash, Lovekesh Vig, Ashwin Srinivasan
In this paper, deep neural networks are used to extract domain-specific features (morphological features like ground-glass opacity and disease indications like pneumonia) directly from the image data.
2 code implementations • 23 Oct 2020 • Tirtharaj Dash, Ashwin Srinivasan, Lovekesh Vig
These kinds of problems have been addressed effectively in the past by Inductive Logic Programming (ILP), by virtue of two important characteristics: (a) the use of a representation language that easily captures the relations encoded in graph-structured data, and (b) the inclusion of prior information encoded as domain-specific relations, which can alleviate problems of data scarcity and construct new relations.
no code implementations • WS 2019 • Vinayshekhar Bannihatti Kumar, Ashwin Srinivasan, Aditi Chaudhary, James Route, Teruko Mitamura, Eric Nyberg
This paper presents the submissions by Team Dr. Quad to the ACL-BioNLP 2019 shared task on Textual Inference and Question Entailment in the Medical Domain.
no code implementations • 6 Jun 2019 • Vishal Sunder, Ashwin Srinivasan, Lovekesh Vig, Gautam Shroff, Rohit Rahul
Our interest in this paper is in meeting a rapidly growing industrial demand for information extraction from images of documents such as invoices, bills, receipts, etc.
no code implementations • 11 Dec 2018 • Vishwanath D, Rohit Rahul, Gunjan Sehgal, Swati, Arindam Chowdhury, Monika Sharma, Lovekesh Vig, Gautam Shroff, Ashwin Srinivasan
In this paper, we propose a novel enterprise based end-to-end framework called DeepReader which facilitates information extraction from document images via identification of visual entities and populating a meta relational model across different entities in the document image.
no code implementations • 2 Jul 2018 • Ashwin Srinivasan, Lovekesh Vig, Michael Bain
We investigate the use of a Bayes-like approach to identify logical proxies for local predictions of a deep relational machine (DRM).
no code implementations • 20 Dec 2016 • Sarmimala Saikia, Lovekesh Vig, Ashwin Srinivasan, Gautam Shroff, Puneet Agarwal, Richa Rawat
We investigate solving discrete optimisation problems using the estimation of distribution (EDA) approach via a novel combination of deep belief networks (DBNs) and inductive logic programming (ILP). While DBNs are used to learn the structure of successively better feasible solutions, ILP enables the incorporation of domain-based background knowledge related to the goodness of solutions. Recent work showed that ILP could be an effective way to use domain knowledge in an EDA scenario. However, in a purely ILP-based EDA, sampling successive populations is either inefficient or not straightforward. In our neuro-symbolic EDA, an ILP engine is used to construct a model for good solutions using domain-based background knowledge. These rules are introduced as Boolean features in the last hidden layer of the DBNs used for EDA-based optimisation. This incorporation of logical ILP features requires some changes while training and sampling from DBNs: (a) our DBNs need to be trained with data for units at the input layer as well as some units in an otherwise hidden layer, and (b) we would like the samples generated to be drawn from instances entailed by the logical model. We demonstrate the viability of our approach on instances of two optimisation problems: predicting optimal depth-of-win for the KRK endgame, and job-shop scheduling. Our results are promising: (i) on each iteration of distribution estimation, samples obtained with an ILP-assisted DBN have a substantially greater proportion of good solutions than samples generated using a DBN without ILP features, and (ii) on termination of distribution estimation, samples obtained using an ILP-assisted DBN contain more near-optimal samples than samples from a DBN without ILP features. These results suggest that the use of ILP-constructed theories could be useful for incorporating complex domain knowledge into deep models for estimation-of-distribution-based procedures.
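The EDA loop described above can be sketched in miniature. This is a minimal sketch, not the paper's method: the DBN is replaced by a univariate marginal distribution (PBIL-style), the ILP-learned model of good solutions by a hand-written Boolean rule (`entailed`), and "drawing samples entailed by the logical model" by rejection sampling; the toy objective is OneMax (maximise the number of 1-bits).

```python
import random

random.seed(0)

def fitness(x):
    """Toy objective (OneMax): count the 1-bits."""
    return sum(x)

def entailed(x):
    """Hand-written stand-in for an ILP-learned rule about good solutions."""
    return x[0] == 1 and x[-1] == 1

def sample(p):
    """Rejection sampling: keep only instances entailed by the logical model."""
    while True:
        x = [1 if random.random() < pi else 0 for pi in p]
        if entailed(x):
            return x

def eda(n_bits=10, pop=50, elite=10, iters=20, lr=0.3):
    p = [0.5] * n_bits                       # marginal distribution (DBN stand-in)
    for _ in range(iters):
        # Sample a population, keep the elite (successively better solutions).
        xs = sorted((sample(p) for _ in range(pop)),
                    key=fitness, reverse=True)[:elite]
        # Re-estimate the distribution from the elite samples.
        for i in range(n_bits):
            p[i] = (1 - lr) * p[i] + lr * sum(x[i] for x in xs) / elite
    return max((sample(p) for _ in range(pop)), key=fitness)

best = eda()
print(fitness(best))  # fitness close to the optimum of 10
```

The paper's contribution corresponds to replacing `entailed` with rules constructed by an ILP engine and the marginal distribution with a DBN whose last hidden layer carries the ILP rules as Boolean features.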
no code implementations • 3 Aug 2016 • Ashwin Srinivasan, Gautam Shroff, Lovekesh Vig, Sarmimala Saikia, Puneet Agarwal
To answer this in the affirmative, we need: (a) a general-purpose technique for the incorporation of domain knowledge when constructing models for optimal values; and (b) a way of using these models to generate new data samples.
no code implementations • 11 Sep 2014 • Haimonti Dutta, Ashwin Srinivasan
That is, there is a network of computational units, each of which employs an ILP engine to construct some small number of features and then builds a (local) model.