In this paper, we describe our system entry for Shared Task 8 at SMM4H-2021, which is on automatic classification of self-reported breast cancer posts on Twitter.
We generate counterspeech using three datasets and observe significant improvements across the different attribute scores.
This simultaneously reduces network bandwidth, compute utilization, and memory footprint while preserving model quality.
In this work, we present DadaGP, a new symbolic music dataset comprising 26,181 song scores in the GuitarPro format covering 739 musical genres, along with an accompanying tokenized format well-suited for generative sequence models such as the Transformer.
Climate change has become one of the biggest global problems increasingly compromising the Earth's habitability.
We describe our system entry for the WASSA 2021 Shared Task (for both Track-1 and Track-2), where we leveraged information from pre-trained language models for the track-specific tasks.
We present the design of GATI, an end-to-end prediction serving system that incorporates learned caches for low-latency DNN inference.
This paper harnesses attention-based pre-trained models fine-tuned on Hindi data, using the hostile/non-hostile task as an auxiliary task and fusing its features for the further sub-task classifications.
We introduce adversarial perturbations in the model weights via projected gradient descent on a composite loss over the original model's predictions and the desired trigger behavior.
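As a hedged illustration only (not the paper's exact formulation; the model, data, loss weighting, and hyperparameters below are placeholders), a PGD-style weight perturbation balancing fidelity to the original predictions against a desired trigger label might be sketched as:

```python
import torch
import torch.nn.functional as F

def pgd_weight_attack(model, x, y_orig, x_trig, y_trig,
                      eps=0.05, alpha=0.01, steps=20, lam=1.0):
    """Perturb model weights with projected gradient descent so the model
    keeps its original predictions on x while emitting the trigger label
    on x_trig. Illustrative sketch; loss terms and projection are assumptions."""
    # snapshot the clean weights; the perturbation is projected into an
    # eps-ball (in L-infinity norm) around them
    orig = [p.detach().clone() for p in model.parameters()]
    for _ in range(steps):
        # composite loss: stay faithful on clean data + satisfy the trigger
        loss = (F.cross_entropy(model(x), y_orig)
                + lam * F.cross_entropy(model(x_trig), y_trig))
        grads = torch.autograd.grad(loss, list(model.parameters()))
        with torch.no_grad():
            for p, g, o in zip(model.parameters(), grads, orig):
                p -= alpha * g.sign()                    # signed gradient step
                p.copy_(o + (p - o).clamp(-eps, eps))    # project back into the eps-ball
    return model
```

The projection step keeps the perturbed weights within a bounded distance of the originals, which is what makes the perturbation hard to notice by inspecting the weights.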
For example, for the user-entered query "capital of USA", the most probable question intent is "What's the capital of USA?".
In this work, we observe that caching intermediate layer outputs can help us avoid running all the layers of a DNN for a sizeable fraction of inference requests.
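One minimal sketch of this idea (assuming the network is split into a prefix and a suffix; the quantization-based cache key and all names here are illustrative, not the system's actual design): if the prefix's intermediate activation has been seen before, the cached prediction is returned and the remaining layers are skipped.

```python
import torch

class EarlyExitCache:
    """Cache final predictions keyed by a quantized intermediate activation,
    so repeated/near-duplicate requests skip the suffix layers. Sketch only."""
    def __init__(self, prefix, suffix, quant=0.5):
        self.prefix, self.suffix = prefix, suffix
        self.quant = quant      # coarser quantization -> more cache hits, less fidelity
        self.cache = {}
        self.hits = 0

    def _key(self, h):
        # quantize the activation so nearby inputs map to the same key
        return tuple((h / self.quant).round().flatten().tolist())

    def __call__(self, x):
        h = self.prefix(x)              # always run the early layers
        k = self._key(h)
        if k in self.cache:             # hit: skip the remaining layers
            self.hits += 1
            return self.cache[k]
        y = self.suffix(h)              # miss: run the rest and memoize
        self.cache[k] = y
        return y
```

In a real serving system the exact-match dictionary would be replaced by a learned cache with a hit/miss predictor, but the control flow, run a prefix, probe, and conditionally skip the suffix, is the same.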
To fix the noisy state annotations, we use crowdsourced workers to re-annotate state and utterances based on the original utterances in the dataset.