no code implementations • 5 Jun 2024 • Matteo Gabburo, Nicolaas Paul Jedema, Siddhant Garg, Leonardo F. R. Ribeiro, Alessandro Moschitti
Further investigation reveals that RC scores strongly correlate with both QA performance and expert judgment across five of the six studied benchmarks, indicating that RC is an effective measure of question difficulty.
no code implementations • 21 Sep 2023 • Matteo Gabburo, Siddhant Garg, Rik Koncel-Kedziorski, Alessandro Moschitti
Evaluation of QA systems is very challenging and expensive, the most reliable approach being human annotation of the correctness of answers to questions.
no code implementations • 24 May 2023 • Matteo Gabburo, Siddhant Garg, Rik Koncel-Kedziorski, Alessandro Moschitti
Recent studies show that sentence-level extractive QA, i.e., based on Answer Sentence Selection (AS2), is outperformed by Generation-based QA (GenQA) models, which generate answers using the top-k answer sentences ranked by AS2 models (à la retrieval-augmented generation).
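The GenQA recipe sketched above — rank candidate sentences with an AS2 model, then hand the top-k to a generator as context — can be illustrated with a minimal toy pipeline. The word-overlap scorer and prompt format below are illustrative stand-ins, not the paper's models.

```python
# Toy sketch of a GenQA-style pipeline: an AS2 scorer ranks candidate
# answer sentences; the top-k are concatenated as context for a
# generator (retrieval-augmented generation style). The scorer here is
# a word-overlap stand-in, not a trained AS2 model.

def as2_score(question, sentence):
    # Toy relevance score: fraction of question words in the candidate.
    q = set(question.lower().split())
    s = set(sentence.lower().split())
    return len(q & s) / (len(q) or 1)

def top_k_sentences(question, candidates, k=2):
    # Rank candidates by AS2 score, descending, and keep the top k.
    ranked = sorted(candidates, key=lambda s: as2_score(question, s),
                    reverse=True)
    return ranked[:k]

def genqa_prompt(question, candidates, k=2):
    # Build the generator input from the question plus top-k evidence.
    context = " ".join(top_k_sentences(question, candidates, k))
    return f"question: {question} context: {context}"
```

In a real system `as2_score` would be a trained transformer ranker and the prompt would feed a seq2seq generator; the point is only the top-k selection step between the two.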
no code implementations • 24 May 2023 • Luca Di Liello, Siddhant Garg, Alessandro Moschitti
Answer Sentence Selection (AS2) is a core component for building an accurate Question Answering pipeline.
Ranked #4 on Question Answering on TrecQA (using extra training data)
no code implementations • 13 Apr 2023 • Siddhant Garg, Lijun Zhang, Hui Guan
Numerous structured pruning methods have already been developed that readily achieve speedups in single-task models, but the pruning of multi-task networks has not yet been extensively studied.
no code implementations • 23 Oct 2022 • Matteo Gabburo, Rik Koncel-Kedziorski, Siddhant Garg, Luca Soldaini, Alessandro Moschitti
In this paper, we propose to train a GenQA model by transferring knowledge from a trained AS2 model, to overcome the aforementioned issue.
no code implementations • 13 Sep 2022 • Siddhant Garg, Mudit Chaudhary
We present SeRP, a framework for Self-Supervised Learning of 3D point clouds.
no code implementations • 20 May 2022 • Luca Di Liello, Siddhant Garg, Luca Soldaini, Alessandro Moschitti
An important task for designing QA systems is answer sentence selection (AS2): selecting the sentence containing (or constituting) the answer to a question from a set of retrieved relevant documents.
Ranked #1 on Answer Selection on ASNQ
1 code implementation • NAACL 2022 • Luca Di Liello, Siddhant Garg, Luca Soldaini, Alessandro Moschitti
Our evaluation on three AS2 datasets and one fact verification dataset demonstrates the superiority of our pre-training technique over traditional ones for transformers used as joint models for multi-candidate inference tasks, as well as when used as cross-encoders for sentence-pair formulations of these tasks.
Ranked #3 on Fact Verification on FEVER
no code implementations • 9 Apr 2022 • Siddhant Garg, Dhruval Jain
Using the proposed loss functions, we are able to surpass the performance of Vanilla BYOL (71.04%) by training the BYOL framework using CCSL loss (76.87%) on the STL10 dataset.
no code implementations • 31 Oct 2021 • Siddhant Garg, Debi Prasanna Mohanty, Siva Prasad Thota, Sukumar Moharana
With the combination of our novel approach and the architecture, we present state-of-the-art results on detecting the image tilt angle on mobile devices as compared to the MobileNetV3 model.
1 code implementation • 6 Oct 2021 • Mehmet F. Demirel, Shengchao Liu, Siddhant Garg, Zhenmei Shi, Yingyu Liang
Our experiments demonstrate the strong performance of AWARE in graph-level prediction tasks in the standard setting in the domains of molecular property prediction and social networks.
no code implementations • EMNLP 2021 • Siddhant Garg, Alessandro Moschitti
In this paper we propose a novel approach towards improving the efficiency of Question Answering (QA) systems by filtering out questions that will not be answered by them.
1 code implementation • 27 Jan 2021 • Siddhant Garg, Goutham Ramakrishnan, Varun Thumbe
Large datasets in NLP suffer from noisy labels, due to erroneous automatic and human annotation procedures.
1 code implementation • NeurIPS 2020 • Siddhant Garg, Yingyu Liang
Unsupervised and self-supervised learning approaches have become a crucial tool to learn representations for downstream prediction tasks.
1 code implementation • 4 Aug 2020 • Siddhant Garg, Adarsh Kumar, Vibhor Goel, Yingyu Liang
We introduce adversarial perturbations in the model weights using a composite loss on the predictions of the original model and the desired trigger through projected gradient descent.
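The excerpt above describes injecting a trigger by perturbing model weights with projected gradient descent under a composite loss. A minimal sketch of that idea, assuming a toy linear classifier and an L-infinity projection ball (both assumptions for illustration, not the paper's setup):

```python
import numpy as np

# Hedged sketch of weight-space trojan injection via projected gradient
# descent: a composite loss keeps clean predictions close to the
# original model's while forcing triggered inputs to a target class.
# The linear model and the epsilon ball are toy assumptions.

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def trojan_weights(W0, X_clean, X_trig, target, eps=0.5, lr=0.1, steps=200):
    """Perturb W0 so triggered inputs map to `target` while clean logits
    stay close to the original model's; the weight perturbation is
    projected into an L-infinity ball of radius eps after every step."""
    W = W0.copy()
    y_clean = softmax(X_clean @ W0)           # original soft predictions
    t = np.zeros((len(X_trig), W0.shape[1]))
    t[:, target] = 1.0                        # desired trigger label
    for _ in range(steps):
        p_clean = softmax(X_clean @ W)
        p_trig = softmax(X_trig @ W)
        # Gradient of the composite cross-entropy loss for a linear model:
        # stay-close-to-original term + trigger-to-target term.
        g = X_clean.T @ (p_clean - y_clean) + X_trig.T @ (p_trig - t)
        W = W - lr * g / len(X_clean)
        W = W0 + np.clip(W - W0, -eps, eps)   # projection step
    return W
```

The projection keeps the attack a small weight perturbation; the two loss terms trade off stealth on clean inputs against the trigger behavior.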
no code implementations • 8 May 2020 • Siddhant Garg, Goutham Ramakrishnan
The last few decades have seen significant breakthroughs in the fields of deep learning and quantum computing.
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Siddhant Garg, Rohit Kumar Sharma, Yingyu Liang
In this paper we show that concatenating the embeddings from the pre-trained model with those from a simple sentence embedding model trained only on the target data, can improve over the performance of FT for few-sample tasks.
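The concatenation recipe above — a frozen pre-trained embedding joined with a simple embedding trained only on the target data — reduces to a small amount of code. Both embedding functions below are toy stand-ins (a hashed bag-of-words for the pre-trained encoder, in-domain vocabulary counts for the target-trained one), assumed purely for illustration.

```python
import numpy as np

# Sketch of embedding concatenation for few-sample tasks: a frozen
# "pre-trained" view plus a simple view built only from target data,
# concatenated before the downstream classifier. Both encoders here
# are illustrative stand-ins, not real models.

def pretrained_embed(sentence, dim=4):
    # Stand-in for a frozen pre-trained encoder: hashed bag-of-words.
    v = np.zeros(dim)
    for tok in sentence.lower().split():
        v[hash(tok) % dim] += 1.0
    return v / max(len(sentence.split()), 1)

def target_embed(sentence, vocab):
    # Simple embedding derived only from target-domain data: counts
    # over a small in-domain vocabulary.
    toks = sentence.lower().split()
    return np.array([toks.count(w) for w in vocab], dtype=float)

def combined_embed(sentence, vocab):
    # Concatenate the two views into one feature vector.
    return np.concatenate([pretrained_embed(sentence),
                           target_embed(sentence, vocab)])
```

The downstream classifier then trains on the concatenated vector, letting the target-only features compensate where fine-tuning the large model would overfit the few samples.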
2 code implementations • EMNLP 2020 • Siddhant Garg, Goutham Ramakrishnan
Modern text classification models are susceptible to adversarial examples: perturbed versions of the original text that are indiscernible to humans but get misclassified by the model.
1 code implementation • 20 Mar 2020 • Marios Loizou, Siddhant Garg, Dmitry Petrov, Melinos Averkiou, Evangelos Kalogerakis
The mechanism assesses the degree of interaction between points and mediates feature propagation across shapes, improving the accuracy and consistency of the resulting point-wise feature representations for shape segmentation.
Ranked #1 on 3D Semantic Segmentation on PartNet
2 code implementations • AAAI 2020 • Siddhant Garg, Thuy Vu, Alessandro Moschitti
Additionally, we show that the transfer step of TANDA makes the adaptation step more robust to noise.
Ranked #2 on Question Answering on TrecQA (using extra training data)
no code implementations • 2 Oct 2019 • Siddhant Garg, Aditya Kumar Akash
The complexity of this problem stems from the anonymous feedback to the player and the stochastic generation of the reward.
no code implementations • 23 Sep 2019 • Siddhant Garg
Recent works show that the ordering of the training data affects model performance in Neural Machine Translation.
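A curriculum-style ordering, in the spirit of the observation above, can be sketched in a few lines: present training pairs from "easy" to "hard", here using source-sentence length as a toy difficulty proxy (an assumption for illustration; the actual ordering criteria studied may differ).

```python
# Toy curriculum ordering for NMT training pairs: shorter (assumed
# easier) source sentences come first. Length is only a stand-in
# difficulty measure.

def curriculum_order(pairs):
    # pairs: list of (source, target) strings.
    return sorted(pairs, key=lambda p: len(p[0].split()))
```

In practice the sorted stream would still be chunked into batches, with the difficulty proxy (length, model loss, rarity) chosen per experiment.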
1 code implementation • EMNLP 2018 • Shiv Shankar, Siddhant Garg, Sunita Sarawagi
In this paper we show that a simple beam approximation of the joint distribution between attention and output is an easy, accurate, and efficient attention mechanism for sequence to sequence learning.
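The beam approximation described above — scoring the output jointly with the attended position instead of averaging over attention — can be illustrated with a small numeric sketch. The shapes and toy distributions are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

# Sketch of a beam approximation to the joint attention/output
# distribution: score each output token under each attended source
# position and keep only the top-B joint (position, token) pairs,
# rather than marginalizing attention with a soft average.

def joint_beam(attn_probs, token_probs_given_pos, beam=3):
    """attn_probs: (P,) distribution over source positions.
    token_probs_given_pos: (P, V) output distribution when attending
    each position. Returns the top-`beam` (pos, token, joint_prob)."""
    joint = attn_probs[:, None] * token_probs_given_pos   # (P, V)
    flat = joint.ravel()
    top = np.argsort(flat)[::-1][:beam]                   # best joint entries
    P, V = joint.shape
    return [(int(i // V), int(i % V), float(flat[i])) for i in top]
```

Keeping the top joint entries preserves the coupling between where the model attends and what it emits, which the usual soft-attention average discards.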