Search Results for author: Peratham Wiriyathammabhum

Found 7 papers, 1 paper with code

TTCB System Description to a Shared Task on Implicit and Underspecified Language 2021

no code implementations • ACL (unimplicit) 2021 • Peratham Wiriyathammabhum

In this report, we describe our transformers for text classification baseline (TTCB) submissions to a shared task on implicit and underspecified language 2021.

Text Classification

ClassBases at CASE-2022 Multilingual Protest Event Detection Tasks: Multilingual Protest News Detection and Automatically Replicating Manually Created Event Datasets

no code implementations • 16 Jan 2023 • Peratham Wiriyathammabhum

For the multilingual protest news detection, we participated in subtask-1, subtask-2, and subtask-4, which are document classification, sentence classification, and token classification.

Classification Document Classification +5

PromptShots at the FinNLP-2022 ERAI Tasks: Pairwise Comparison and Unsupervised Ranking

no code implementations • 16 Jan 2023 • Peratham Wiriyathammabhum

Surprisingly, we observed that the OpenAI InstructGPT language model, few-shot trained on Chinese data, works best among our submissions, ranking 3rd on the maximal loss (ML) pairwise accuracy.

Language Modelling

TEDB System Description to a Shared Task on Euphemism Detection 2022

no code implementations • 16 Jan 2023 • Peratham Wiriyathammabhum

In this report, we describe our Transformers for euphemism detection baseline (TEDB) submissions to a shared task on euphemism detection 2022.

Sarcasm Detection Sentiment Analysis +2

Is Sluice Resolution really just Question Answering?

no code implementations • 29 May 2021 • Peratham Wiriyathammabhum

Ellipsis and questions are referentially dependent expressions (anaphora), and retrieving their corresponding antecedents is like answering questions to output pieces of clarifying information.

Question Answering

SpotFast Networks with Memory Augmented Lateral Transformers for Lipreading

1 code implementation • 21 May 2020 • Peratham Wiriyathammabhum

The experiments show that our proposed model outperforms various state-of-the-art models, and incorporating the memory-augmented lateral transformers yields a 3.7% improvement over the SpotFast networks.

Action Recognition Lipreading
