no code implementations • ACL (UnImplicit) 2021 • Peratham Wiriyathammabhum
In this report, we describe our Transformers for Text Classification Baseline (TTCB) submissions to the 2021 shared task on implicit and underspecified language.
no code implementations • 16 Jan 2023 • Peratham Wiriyathammabhum
For multilingual protest news detection, we participated in subtask-1, subtask-2, and subtask-4, which correspond to document classification, sentence classification, and token classification, respectively.
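The three subtasks above differ mainly in the granularity at which a classification head is attached to a shared transformer encoder. A minimal sketch of that framing (not the authors' actual code; shapes and label counts are invented placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical encoder output: batch of 2 documents, 8 tokens each, hidden size 4.
hidden = rng.standard_normal((2, 8, 4))

# Document classification (subtask-1): pool over tokens, one label per document.
doc_logits = hidden.mean(axis=1) @ rng.standard_normal((4, 3))   # 3 doc labels
assert doc_logits.shape == (2, 3)

# Sentence classification (subtask-2): same pooling, applied per sentence
# (here the whole sequence stands in for a single sentence).
sent_logits = hidden.mean(axis=1) @ rng.standard_normal((4, 2))  # binary label
assert sent_logits.shape == (2, 2)

# Token classification (subtask-4): one label per token, e.g. BIO tags.
tok_logits = hidden @ rng.standard_normal((4, 5))                # 5 BIO tags
assert tok_logits.shape == (2, 8, 5)
```

The point of the sketch is that the encoder is shared; only the pooling and the output-head shape change across the three granularities.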
no code implementations • 16 Jan 2023 • Peratham Wiriyathammabhum
Surprisingly, we observed that the OpenAI InstructGPT language model, used in a few-shot setting on Chinese data, performs best among our submissions, ranking 3rd on maximal loss (ML) pairwise accuracy.
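The few-shot setting above amounts to prepending a handful of labeled demonstrations to each unlabeled query before sending the prompt to the language model. A hedged sketch of that prompt construction (the demonstrations, labels, and `build_few_shot_prompt` helper are invented placeholders, not the shared-task data or the authors' exact prompts):

```python
def build_few_shot_prompt(examples, query):
    """Concatenate labeled demonstrations, then the unlabeled query,
    leaving the label slot empty for the model to complete."""
    parts = []
    for text_a, text_b, label in examples:
        parts.append(f"A: {text_a}\nB: {text_b}\nAnswer: {label}")
    parts.append(f"A: {query[0]}\nB: {query[1]}\nAnswer:")
    return "\n\n".join(parts)

# Toy English placeholders standing in for Chinese task examples.
demos = [("he dropped the glass", "the glass broke", "B")]
prompt = build_few_shot_prompt(demos, ("it rained", "the street got wet"))
```

The model's completion after the final `Answer:` is then parsed back into a label for the pairwise comparison.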
no code implementations • 16 Jan 2023 • Peratham Wiriyathammabhum
In this report, we describe our Transformers for euphemism detection baseline (TEDB) submissions to a shared task on euphemism detection 2022.
no code implementations • 29 May 2021 • Peratham Wiriyathammabhum
Ellipsis and questions are referentially dependent expressions (anaphora), and retrieving the corresponding antecedents is like answering questions to output pieces of clarifying information.
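Under this framing, antecedent retrieval can be cast as extractive question answering: the "answer" is the antecedent span in the context. An illustrative sketch (not the paper's model; the toy start/end scores stand in for a trained QA head):

```python
def best_span(start_scores, end_scores, max_len=10):
    """Pick the highest-scoring (start, end) span with start <= end,
    as a standard extractive-QA decoder would."""
    best, best_score = (0, 0), float("-inf")
    for s, ss in enumerate(start_scores):
        for e in range(s, min(s + max_len, len(end_scores))):
            if ss + end_scores[e] > best_score:
                best_score = ss + end_scores[e]
                best = (s, e)
    return best

tokens = ["Mary", "bought", "a", "book", "and", "John", "did", "too"]
# Toy scores peaking on "bought ... book", the antecedent of "did too".
start = [0.1, 2.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0]
end = [0.0, 0.3, 0.0, 1.5, 0.0, 0.0, 0.0, 0.0]
s, e = best_span(start, end)
antecedent = tokens[s : e + 1]   # recovers ["bought", "a", "book"]
```

Decoding the highest-scoring valid span is the usual way extractive QA models produce an answer, which is what makes the anaphora-as-QA analogy operational.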
1 code implementation • 21 May 2020 • Peratham Wiriyathammabhum
The experiments show that our proposed model outperforms various state-of-the-art models, and incorporating the memory-augmented lateral transformers yields a 3.7% improvement over the SpotFast networks.
Ranked #14 on Lipreading on Lip Reading in the Wild
no code implementations • WS 2019 • Peratham Wiriyathammabhum, Abhinav Shrivastava, Vlad I. Morariu, Larry S. Davis
This paper presents a new task, the grounding of spatio-temporal identifying descriptions in videos.