Search Results for author: Balaji Vasan Srinivasan

Found 30 papers, 3 papers with code

Let's Ask Again: Refine Network for Automatic Question Generation

1 code implementation IJCNLP 2019 Preksha Nema, Akash Kumar Mohankumar, Mitesh M. Khapra, Balaji Vasan Srinivasan, Balaraman Ravindran

It is desired that the generated question be (i) grammatically correct, (ii) answerable from the passage, and (iii) specific to the given answer.

Question Generation

Towards Transparent and Explainable Attention Models

2 code implementations ACL 2020 Akash Kumar Mohankumar, Preksha Nema, Sharan Narasimhan, Mitesh M. Khapra, Balaji Vasan Srinivasan, Balaraman Ravindran

To make attention mechanisms more faithful and plausible, we propose a modified LSTM cell with a diversity-driven training objective that ensures that the hidden representations learned at different time steps are diverse.

Attribute
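To make the diversity-driven objective in the snippet above concrete, here is a minimal PyTorch sketch of one way such a penalty could look, assuming a conicity-style term over the LSTM hidden states; the function names, the weight `lambda_div`, and the training-step wiring are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def conicity_penalty(hidden_states: torch.Tensor) -> torch.Tensor:
    """Mean cosine similarity of each hidden state to the mean hidden state.

    hidden_states: (batch, time, dim). A lower value means the representations
    learned at different time steps are more spread out, i.e. more diverse.
    """
    mean_state = hidden_states.mean(dim=1, keepdim=True)  # (batch, 1, dim)
    cos = torch.nn.functional.cosine_similarity(hidden_states, mean_state, dim=-1)
    return cos.mean()

def training_loss(task_loss: torch.Tensor,
                  hidden_states: torch.Tensor,
                  lambda_div: float = 0.1) -> torch.Tensor:
    """Hypothetical combined objective: task loss plus weighted diversity term."""
    return task_loss + lambda_div * conicity_penalty(hidden_states)
```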

IGA: An Intent-Guided Authoring Assistant

1 code implementation 14 Apr 2021 Simeng Sun, Wenlong Zhao, Varun Manjunatha, Rajiv Jain, Vlad Morariu, Franck Dernoncourt, Balaji Vasan Srinivasan, Mohit Iyyer

While large-scale pretrained language models have significantly improved writing assistance functionalities such as autocomplete, more complex and controllable writing assistants have yet to be explored.

Language Modelling Sentence

When science journalism meets artificial intelligence: An interactive demonstration

no code implementations EMNLP 2018 Raghuram Vadapalli, Bakhtiyar Syed, Nishant Prabhu, Balaji Vasan Srinivasan, Vasudeva Varma

We present an online interactive tool that generates blog titles from research paper titles and thus take the first step toward automating science journalism.

Generating Topic-Oriented Summaries Using Neural Attention

no code implementations NAACL 2018 Kundan Krishna, Balaji Vasan Srinivasan

Existing summarization algorithms generate a single summary and are not capable of generating multiple summaries tuned to the interests of the readers.

Abstractive Text Summarization

Vocabulary Tailored Summary Generation

no code implementations COLING 2018 Kundan Krishna, Aniket Murhekar, Saumitra Sharma, Balaji Vasan Srinivasan

Neural sequence-to-sequence models have been successfully extended for summary generation. However, existing frameworks generate a single summary for a given input and do not tune the summaries towards any additional constraints/preferences.

Abstractive Text Summarization

Corpus-based Content Construction

no code implementations COLING 2018 Balaji Vasan Srinivasan, Pranav Maneriker, Kundan Krishna, Natwar Modani

Enterprise content writers are engaged in writing textual content for various purposes.

Improving generation quality of pointer networks via guided attention

no code implementations20 Jan 2019 Kushal Chawla, Kundan Krishna, Balaji Vasan Srinivasan

The first shortcoming is the extractive nature of the generated summaries: the network eventually learns to copy from the input article most of the time, which undermines their abstractive quality.

Abstractive Text Summarization

A Lexical, Syntactic, and Semantic Perspective for Understanding Style in Text

no code implementations18 Sep 2019 Gaurav Verma, Balaji Vasan Srinivasan

With a growing interest in modeling inherent subjectivity in natural language, we present a linguistically-motivated process to understand and analyze the writing style of individuals from three perspectives: lexical, syntactic, and semantic.

Authorship Attribution

Adapting Language Models for Non-Parallel Author-Stylized Rewriting

no code implementations22 Sep 2019 Bakhtiyar Syed, Gaurav Verma, Balaji Vasan Srinivasan, Anandhavelu Natarajan, Vasudeva Varma

Given the recent progress in language modeling using Transformer-based neural models and an active interest in generating stylized text, we present an approach to leverage the generalization capabilities of a language model to rewrite an input text in a target author's style.

Denoising Language Modelling

Generating summaries tailored to target characteristics

no code implementations18 Dec 2019 Kushal Chawla, Hrituraj Singh, Arijit Pramanik, Mithlesh Kumar, Balaji Vasan Srinivasan

Recently, research efforts have gained pace to cater to varied user preferences while generating text summaries.

Text Summarization

Reinforced Rewards Framework for Text Style Transfer

no code implementations11 May 2020 Abhilasha Sancheti, Kundan Krishna, Balaji Vasan Srinivasan, Anandhavelu Natarajan

Style transfer deals with algorithms that transfer the stylistic properties of one piece of text to another while ensuring that the core content is preserved.

Style Transfer Text Style Transfer

Multi-Style Transfer with Discriminative Feedback on Disjoint Corpus

no code implementations NAACL 2021 Navita Goyal, Balaji Vasan Srinivasan, Anandhavelu Natarajan, Abhilasha Sancheti

Style transfer has been widely explored in natural language generation with non-parallel corpus by directly or indirectly extracting a notion of style from source and target domain corpus.

Language Modelling Style Transfer +1

Incorporating Stylistic Lexical Preferences in Generative Language Models

no code implementations Findings of the Association for Computational Linguistics 2020 Hrituraj Singh, Gaurav Verma, Balaji Vasan Srinivasan

While recent advances in language modeling have resulted in powerful generation models, their generation style remains implicitly dependent on the training data and cannot emulate a specific target style.

Language Modelling Reinforcement Learning (RL)

DRAG: Director-Generator Language Modelling Framework for Non-Parallel Author Stylized Rewriting

no code implementations EACL 2021 Hrituraj Singh, Gaurav Verma, Aparna Garimella, Balaji Vasan Srinivasan

In this paper, we propose a Director-Generator framework to rewrite content in the target author's style, specifically focusing on certain target attributes.

Denoising Language Modelling

CLAUSEREC: A Clause Recommendation Framework for AI-aided Contract Authoring

no code implementations EMNLP 2021 Vinay Aggarwal, Aparna Garimella, Balaji Vasan Srinivasan, Anandhavelu N, Rajiv Jain

We propose a two-staged pipeline to first predict if a specific clause type is relevant to be added in a contract, and then recommend the top clauses for the given type based on the contract context.
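The two-staged pipeline described above could be wired roughly as follows; this is a hedged sketch in which `relevance_clf` and `clause_scorer` stand in for the paper's trained models, and their names, signatures, and the threshold are assumptions for illustration.

```python
from typing import Callable, List, Tuple

def recommend_clauses(
    contract_text: str,
    clause_type: str,
    candidate_clauses: List[str],
    relevance_clf: Callable[[str, str], float],  # stage 1: P(type is relevant | contract)
    clause_scorer: Callable[[str, str], float],  # stage 2: score(clause | contract)
    threshold: float = 0.5,
    top_k: int = 3,
) -> List[Tuple[str, float]]:
    """Two-staged recommendation: gate on clause-type relevance, then rank clauses."""
    if relevance_clf(contract_text, clause_type) < threshold:
        return []  # clause type judged not relevant for this contract
    scored = [(c, clause_scorer(contract_text, c)) for c in candidate_clauses]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_k]
```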

IGA: An Intent-Guided Authoring Assistant

no code implementations EMNLP 2021 Simeng Sun, Wenlong Zhao, Varun Manjunatha, Rajiv Jain, Vlad Morariu, Franck Dernoncourt, Balaji Vasan Srinivasan, Mohit Iyyer

While large-scale pretrained language models have significantly improved writing assistance functionalities such as autocomplete, more complex and controllable writing assistants have yet to be explored.

Language Modelling Sentence

Entailment Relation Aware Paraphrase Generation

no code implementations20 Mar 2022 Abhilasha Sancheti, Balaji Vasan Srinivasan, Rachel Rudinger

We introduce a new task of entailment relation aware paraphrase generation which aims at generating a paraphrase conforming to a given entailment relation (e.g., equivalent, forward entailing, or reverse entailing) with respect to a given input.

Natural Language Inference Paraphrase Generation +3

Agent-Specific Deontic Modality Detection in Legal Language

no code implementations23 Nov 2022 Abhilasha Sancheti, Aparna Garimella, Balaji Vasan Srinivasan, Rachel Rudinger

Legal documents are typically long and written in legalese, which makes it particularly difficult for laypeople to understand their rights and duties.

Natural Language Understanding Transfer Learning

What to Read in a Contract? Party-Specific Summarization of Legal Obligations, Entitlements, and Prohibitions

no code implementations19 Dec 2022 Abhilasha Sancheti, Aparna Garimella, Balaji Vasan Srinivasan, Rachel Rudinger

In this work, we propose a new task of party-specific extractive summarization for legal contracts to facilitate faster reviewing and improved comprehension of rights and duties.

Extractive Summarization Sentence +1

Learning with Difference Attention for Visually Grounded Self-supervised Representations

no code implementations26 Jun 2023 Aishwarya Agarwal, Srikrishna Karanam, Balaji Vasan Srinivasan

Recent works in self-supervised learning have shown impressive results on single-object images, but they struggle to perform well on complex multi-object images as evidenced by their poor visual grounding.

Self-Supervised Learning Visual Grounding

A-STAR: Test-time Attention Segregation and Retention for Text-to-image Synthesis

no code implementations ICCV 2023 Aishwarya Agarwal, Srikrishna Karanam, K J Joseph, Apoorv Saxena, Koustava Goswami, Balaji Vasan Srinivasan

First, our attention segregation loss reduces the cross-attention overlap between attention maps of different concepts in the text prompt, thereby reducing confusion/conflict among the concepts and helping the generated output capture all of them.

Denoising Image Generation
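As a rough illustration of the idea in the snippet above, the sketch below penalizes overlap between per-concept cross-attention maps using a soft intersection-over-union; the exact form of A-STAR's attention segregation loss may differ, and the tensor layout here is an assumption.

```python
import torch

def attention_overlap_penalty(attn_maps: torch.Tensor) -> torch.Tensor:
    """Penalize overlap between per-concept cross-attention maps.

    attn_maps: (num_concepts, H, W), each map non-negative. Each pair of
    concepts contributes a soft IoU of their normalized maps; the penalty
    is the mean over all pairs.
    """
    n = attn_maps.shape[0]
    maps = attn_maps / (attn_maps.flatten(1).sum(dim=1).view(-1, 1, 1) + 1e-8)
    penalties = []
    for i in range(n):
        for j in range(i + 1, n):
            inter = torch.minimum(maps[i], maps[j]).sum()
            union = torch.maximum(maps[i], maps[j]).sum() + 1e-8
            penalties.append(inter / union)
    return torch.stack(penalties).mean() if penalties else attn_maps.new_zeros(())
```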

CoPL: Contextual Prompt Learning for Vision-Language Understanding

no code implementations3 Jul 2023 Koustava Goswami, Srikrishna Karanam, Prateksha Udhayanan, K J Joseph, Balaji Vasan Srinivasan

Our key innovations over earlier works include using local image features as part of the prompt learning process, and more crucially, learning to weight these prompts based on local features that are appropriate for the task at hand.
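Below is a minimal sketch of how prompts might be weighted by local image features, in the spirit of the description above; the similarity-then-softmax weighting and the tensor shapes are assumptions, not CoPL's exact mechanism.

```python
import torch
import torch.nn.functional as F

def weight_prompts_by_local_features(
    patch_feats: torch.Tensor,   # (num_patches, dim) local image features
    prompt_vecs: torch.Tensor,   # (num_prompts, dim) learnable prompt embeddings
) -> torch.Tensor:
    """Weight prompt embeddings by their relevance to local image patches.

    Each prompt's weight is the softmaxed mean cosine similarity between the
    prompt and all patch features; the output is the weighted combination.
    """
    sim = F.normalize(prompt_vecs, dim=-1) @ F.normalize(patch_feats, dim=-1).T  # (P, N)
    weights = F.softmax(sim.mean(dim=-1), dim=0)                                  # (P,)
    return (weights.unsqueeze(-1) * prompt_vecs).sum(dim=0)                       # (dim,)
```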

Learning with Multi-modal Gradient Attention for Explainable Composed Image Retrieval

no code implementations31 Aug 2023 Prateksha Udhayanan, Srikrishna Karanam, Balaji Vasan Srinivasan

To this end, our key novelty is a new gradient-attention-based learning objective that explicitly forces the model to focus on the local regions of interest being modified in each retrieval step.

Image Retrieval Retrieval

An Image is Worth Multiple Words: Multi-attribute Inversion for Constrained Text-to-Image Synthesis

no code implementations20 Nov 2023 Aishwarya Agarwal, Srikrishna Karanam, Tripti Shukla, Balaji Vasan Srinivasan

Another line of techniques expands the inversion space to learn multiple embeddings, but only along the layer dimension (e.g., one per layer of the DDPM model) or the timestep dimension (one for a set of timesteps in the denoising process), leading to suboptimal attribute disentanglement.

Attribute Denoising +2

Social Media Ready Caption Generation for Brands

no code implementations3 Jan 2024 Himanshu Maheshwari, Koustava Goswami, Apoorv Saxena, Balaji Vasan Srinivasan

Our architecture has two parts: (a) an image captioning model that takes in the image the brand wants to post online and produces a plain English caption; (b) a module that takes the generated caption along with the target brand personality and outputs a catchy, personality-aligned social media caption.

Caption Generation Image Captioning +1
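The two-part architecture above could be composed as in the following sketch, where `caption_model` and `stylizer` are hypothetical stand-ins for the two components; this is an assumption about the wiring, not the authors' implementation.

```python
from typing import Callable

def brand_caption_pipeline(
    image_path: str,
    brand_personality: str,
    caption_model: Callable[[str], str],   # part (a): image -> plain English caption
    stylizer: Callable[[str, str], str],   # part (b): (caption, personality) -> styled caption
) -> str:
    """Two-part pipeline: caption the image, then rewrite it in the brand's voice."""
    plain_caption = caption_model(image_path)
    return stylizer(plain_caption, brand_personality)
```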
