Search Results for author: Tadashi Nomoto

Found 10 papers, 2 papers with code

Grounding NBA Matchup Summaries

1 code implementation INLG (ACL) 2021 Tadashi Nomoto

The present paper summarizes an attempt we made to meet a shared task challenge on grounding machine-generated summaries of NBA matchups (https://github.com/ehudreiter/accuracySharedTask.git).

The Fewer Splits are Better: Deconstructing Readability in Sentence Splitting

no code implementations 2 Feb 2023 Tadashi Nomoto

In this work, we focus on sentence splitting, a subfield of text simplification, motivated largely by the unproven idea that dividing a sentence into pieces should make it easier to understand.

Sentence · Text Simplification

Extended Multilingual Protest News Detection -- Shared Task 1, CASE 2021 and 2022

no code implementations 21 Nov 2022 Ali Hürriyetoğlu, Osman Mutlu, Fırat Duruşan, Onur Uca, Alaeddin Selçuk Gürel, Benjamin Radford, Yaoyao Dai, Hansi Hettiarachchi, Niklas Stoehr, Tadashi Nomoto, Milena Slavcheva, Francielle Vargas, Aaqib Javid, Fatih Beyhan, Erdem Yörük

The CASE 2022 extension consists of expanding the test data with more data in previously available languages, namely, English, Hindi, Portuguese, and Spanish, and adding new test data in Mandarin, Turkish, and Urdu for Sub-task 1, document classification.

Document Classification · Event Detection +3

Learning to Simplify with Data Hopelessly Out of Alignment

no code implementations 2 Apr 2022 Tadashi Nomoto

We consider whether it is possible to do text simplification without relying on a "parallel" corpus, one made up of sentence-by-sentence alignments of complex sentences with their ground-truth simple counterparts.

Sentence · Text Simplification

Meeting the 2020 Duolingo Challenge on a Shoestring

no code implementations WS 2020 Tadashi Nomoto

Both are neural models that aim to disrupt the sentence representation the encoder generates, with an eye toward increasing the diversity of sentences that emerge from the process.

Sentence

Generating Paraphrases with Lean Vocabulary

no code implementations WS 2019 Tadashi Nomoto

In this work, we examine whether it is possible to achieve state-of-the-art performance in paraphrase generation with a reduced vocabulary.

Paraphrase Generation · reinforcement-learning +1
