Search Results for author: Radityo Eko Prasojo

Found 15 papers, 7 papers with code

Investigating Text Shortening Strategy in BERT: Truncation vs Summarization

1 code implementation • 19 Mar 2024 • Mirza Alim Mutasodirin, Radityo Eko Prasojo

In this study, we investigate the performance of document truncation and summarization in text classification tasks.

Document Summarization • Extractive Summarization • +2 more
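
As a rough illustration of the comparison, the sketch below shortens a document either by hard truncation or by summarizing it before tokenization. The model name and the lead-k heuristic are assumptions for the example, not the authors' pipeline:

```python
# A minimal sketch of the two shortening strategies, assuming a BERT
# classifier with a 512-token budget. "bert-base-uncased" and the lead-k
# "summarizer" are illustrative stand-ins, not the paper's exact setup.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def shorten_by_truncation(text: str, max_length: int = 512):
    # Strategy 1: keep only the first max_length tokens of the document.
    return tokenizer(text, truncation=True, max_length=max_length)

def shorten_by_summarization(text: str, k: int = 5, max_length: int = 512):
    # Strategy 2: shorten the document before tokenizing. A real extractive
    # summarizer would rank sentences by salience; taking the lead-k
    # sentences is a toy placeholder for that step.
    sentences = text.split(". ")
    summary = ". ".join(sentences[:k])
    return tokenizer(summary, truncation=True, max_length=max_length)
```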

COPAL-ID: Indonesian Language Reasoning with Local Culture and Nuances

1 code implementation • 2 Nov 2023 • Haryo Akbarianto Wibowo, Erland Hilman Fuadi, Made Nindyatama Nityasya, Radityo Eko Prasojo, Alham Fikri Aji

Unlike the previous Indonesian COPA dataset (XCOPA-ID), COPAL-ID incorporates Indonesian local and cultural nuances, and therefore, provides a more natural portrayal of day-to-day causal reasoning within the Indonesian cultural sphere.

Common Sense Reasoning
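
COPA-style items present a premise and two alternatives, asking which is the more plausible cause or effect. A minimal scoring loop, under the assumption that a causal language model's likelihood tracks plausibility (the gpt2 checkpoint is only a placeholder, not a model the paper evaluates):

```python
# Score each alternative by its negative log-likelihood as a continuation of
# the premise; the lower-NLL alternative is predicted as the answer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def nll(text: str) -> float:
    # Average per-token negative log-likelihood under the language model.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

def answer_copa(premise: str, choice1: str, choice2: str) -> int:
    # Returns 0 or 1, the index of the more plausible alternative.
    scores = [nll(f"{premise} {choice}") for choice in (choice1, choice2)]
    return scores.index(min(scores))
```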

ParaCotta: Synthetic Multilingual Paraphrase Corpora from the Most Diverse Translation Sample Pair

no code implementations • PACLIC 2021 • Alham Fikri Aji, Tirana Noor Fatyanosa, Radityo Eko Prasojo, Philip Arthur, Suci Fitriany, Salma Qonitah, Nadhifa Zulfa, Tomi Santoso, Mahendra Data

We release our synthetic parallel paraphrase corpus across 17 languages: Arabic, Catalan, Czech, German, English, Spanish, Estonian, French, Hindi, Indonesian, Italian, Dutch, Romanian, Russian, Swedish, Vietnamese, and Chinese.

Machine Translation • Sentence • +1 more
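
A rough sketch of the round-trip idea behind such corpora: sample several pivot-language translations, translate them back, and keep the candidate least similar to the source. The MarianMT checkpoints below are real, but the Jaccard-overlap selection is a simplification of the paper's diversity criterion, not the ParaCotta pipeline itself:

```python
# Minimal round-trip paraphrasing sketch in the spirit of ParaCotta.
from transformers import MarianMTModel, MarianTokenizer

def load(name):
    return MarianTokenizer.from_pretrained(name), MarianMTModel.from_pretrained(name)

fwd_tok, fwd = load("Helsinki-NLP/opus-mt-en-fr")  # English -> pivot
bwd_tok, bwd = load("Helsinki-NLP/opus-mt-fr-en")  # pivot -> English

def translate(tok, model, texts, n=1):
    batch = tok(texts, return_tensors="pt", padding=True)
    out = model.generate(**batch, do_sample=True, num_return_sequences=n, top_k=50)
    return tok.batch_decode(out, skip_special_tokens=True)

def paraphrase(sentence: str, n_samples: int = 5) -> str:
    pivots = translate(fwd_tok, fwd, [sentence], n=n_samples)
    candidates = translate(bwd_tok, bwd, pivots)
    src = set(sentence.lower().split())
    # Most "diverse" candidate = smallest word overlap with the source.
    return min(candidates, key=lambda c: len(src & set(c.lower().split())))
```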

Nix-TTS: Lightweight and End-to-End Text-to-Speech via Module-wise Distillation

1 code implementation • 29 Mar 2022 • Rendi Chevi, Radityo Eko Prasojo, Alham Fikri Aji, Andros Tjandra, Sakriani Sakti

We present Nix-TTS, a lightweight TTS model achieved by applying knowledge distillation to a high-quality yet large-sized, non-autoregressive, and end-to-end (vocoder-free) TTS teacher model.

Knowledge Distillation • Neural Architecture Search
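
A generic sketch of module-wise distillation, under the assumption that teacher and student expose matching encoder and decoder modules; it illustrates matching each student module to its frozen teacher counterpart, not Nix-TTS's actual architecture or losses:

```python
# Distill each student module against the corresponding frozen teacher module.
import torch
import torch.nn.functional as F

def modulewise_kd_loss(teacher, student, inputs):
    with torch.no_grad():
        t_enc = teacher.encoder(inputs)   # assumed module interface
        t_out = teacher.decoder(t_enc)
    s_enc = student.encoder(inputs)
    s_out = student.decoder(s_enc)
    # In practice the student is smaller, so a learned projection is usually
    # needed to match teacher feature dimensions; omitted here for brevity.
    return F.mse_loss(s_enc, t_enc) + F.mse_loss(s_out, t_out)
```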

Which Student is Best? A Comprehensive Knowledge Distillation Exam for Task-Specific BERT Models

no code implementations • 3 Jan 2022 • Made Nindyatama Nityasya, Haryo Akbarianto Wibowo, Rendi Chevi, Radityo Eko Prasojo, Alham Fikri Aji

We perform a knowledge distillation (KD) benchmark from task-specific BERT-base teacher models to various student models: BiLSTM, CNN, BERT-Tiny, BERT-Mini, and BERT-Small.

Data Augmentation • Knowledge Distillation • +3 more
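
The objective typically used when training such students is standard soft-label distillation; a minimal sketch, with illustrative temperature and weighting defaults rather than the paper's exact hyperparameters:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # KL divergence between temperature-softened teacher and student
    # distributions, scaled by T^2 to keep gradient magnitudes stable...
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # ...combined with ordinary cross-entropy on the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```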

Synthetic Source Language Augmentation for Colloquial Neural Machine Translation

no code implementations • 30 Dec 2020 • Asrul Sani Ariesandy, Mukhlis Amien, Alham Fikri Aji, Radityo Eko Prasojo

Neural machine translation (NMT) is typically domain-dependent and style-dependent, and it requires large amounts of training data.

Machine Translation • NMT • +1 more

Costs to Consider in Adopting NLP for Your Business

no code implementations • 16 Dec 2020 • Made Nindyatama Nityasya, Haryo Akbarianto Wibowo, Radityo Eko Prasojo, Alham Fikri Aji

Recent advances in Natural Language Processing (NLP) have largely pushed deep transformer-based models as the go-to state-of-the-art technique without much regard to the production and utilization cost.

Benchmarking Multidomain English-Indonesian Machine Translation

1 code implementation • LREC 2020 • Tri Wahyu Guntara, Alham Fikri Aji, Radityo Eko Prasojo

In the context of Machine Translation (MT) from and to English, Bahasa Indonesia has been considered a low-resource language, and therefore applying Neural Machine Translation (NMT), which typically requires a large training dataset, proves to be problematic.

Benchmarking • Machine Translation • +2 more
