Search Results for author: Tohida Rehman

Found 11 papers, 3 papers with code

Comparative Analysis of Abstractive Summarization Models for Clinical Radiology Reports

no code implementations • 19 Jun 2025 • Anindita Bhattacharya, Tohida Rehman, Debarshi Kumar Sanyal, Samiran Chattopadhyay

The findings section of a radiology report is often detailed and lengthy, whereas the impression section is comparatively more compact and captures key diagnostic conclusions.

Can pre-trained language models generate titles for research papers?

1 code implementation • 22 Sep 2024 • Tohida Rehman, Debarshi Kumar Sanyal, Samiran Chattopadhyay

In this paper, we fine-tune pre-trained language models to generate titles of papers from their abstracts.

Transfer Learning and Transformer Architecture for Financial Sentiment Analysis

no code implementations • 28 Apr 2024 • Tohida Rehman, Raghubir Bose, Samiran Chattopadhyay, Debarshi Kumar Sanyal

Financial sentiment analysis allows financial institutions such as banks and insurance companies to better manage the credit scoring of their customers.


Analysis of Multidomain Abstractive Summarization Using Salience Allocation

no code implementations • 19 Feb 2024 • Tohida Rehman, Raghubir Bose, Soumik Dey, Samiran Chattopadhyay

This paper explores the realm of abstractive text summarization through the lens of the SEASON (Salience Allocation as Guidance for Abstractive SummarizatiON) technique, a model designed to enhance summarization by leveraging salience allocation techniques.


Hallucination Reduction in Long Input Text Summarization

1 code implementation • 28 Sep 2023 • Tohida Rehman, Ronit Mandal, Abhishek Agarwal, Debarshi Kumar Sanyal

We have used the following metrics to measure factual consistency at the entity level: precision-source and F1-target.
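Entity-level factuality metrics of this kind can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it assumes entity sets have already been extracted (e.g. by an NER model), and the function names and empty-set conventions are my own.

```python
def precision_source(summary_entities: set, source_entities: set) -> float:
    """Fraction of entities in the generated summary that also occur in the
    source document; low values suggest hallucinated entities."""
    if not summary_entities:
        return 1.0  # assumed convention: no entities means nothing hallucinated
    hits = sum(1 for e in summary_entities if e in source_entities)
    return hits / len(summary_entities)

def f1_target(summary_entities: set, target_entities: set) -> float:
    """F1 overlap between entities in the generated summary and those in the
    reference (target) summary."""
    if not summary_entities or not target_entities:
        return 0.0
    hits = sum(1 for e in summary_entities if e in target_entities)
    if hits == 0:
        return 0.0
    p = hits / len(summary_entities)
    r = hits / len(target_entities)
    return 2 * p * r / (p + r)
```

For example, a summary mentioning {"BERT", "Google"} against a source that only mentions {"BERT"} gets a precision-source of 0.5, flagging "Google" as a likely hallucination.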


Generation of Highlights from Research Papers Using Pointer-Generator Networks and SciBERT Embeddings

1 code implementation • 14 Feb 2023 • Tohida Rehman, Debarshi Kumar Sanyal, Samiran Chattopadhyay, Plaban Kumar Bhowmick, Partha Pratim Das

On the new MixSub dataset, where only the abstract is the input, our proposed model (when trained on the whole training corpus without distinguishing between the subject categories) achieves ROUGE-1, ROUGE-2 and ROUGE-L F1-scores of 31.78, 9.76 and 29.3, respectively, a METEOR score of 24.00, and a BERTScore F1 of 85.25.
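The ROUGE-1 F1 score reported above measures unigram overlap between a generated highlight and the reference. A minimal sketch of the idea (using simple whitespace tokenization; production evaluations typically use a library implementation with stemming):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall
    between a candidate text and a reference text."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For instance, `rouge1_f1("deep learning for summarization", "a survey of deep learning for text summarization")` gives precision 1.0 and recall 0.5, hence an F1 of about 0.667.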

