AI-generated text boundary detection with RoFT

Due to the rapid development of large language models, people increasingly encounter texts that may begin as human-written but continue as machine-generated. Detecting the boundary between the human-written and machine-generated parts of such texts is a challenging problem that has received little attention in the literature. We attempt to bridge this gap and examine several ways to adapt state-of-the-art artificial text detection classifiers to the boundary detection setting. We push all detectors to their limits using the Real or Fake Text (RoFT) benchmark, which contains short texts on several topics and includes generations from various language models. We use this diversity to examine in depth the robustness of all detectors in cross-domain and cross-model settings, providing baselines and insights for future research. In particular, we find that perplexity-based approaches to boundary detection tend to be more robust to the peculiarities of domain-specific data than supervised fine-tuning of the RoBERTa model; we also identify which features of the text confuse boundary detection algorithms and degrade their performance in cross-domain settings.
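The results below report two metrics per detector: mean squared error between the predicted and true boundary position, and exact-match accuracy. Here is a minimal sketch of that evaluation, assuming the RoFT convention of texts split into sentences with a single labeled boundary index; the function and variable names are illustrative, not from the paper's code:

```python
# Sketch of boundary-detection evaluation, assuming each example is a
# sequence of sentences labeled with the index of the first
# machine-generated sentence (names here are illustrative).

def boundary_mse(true_boundaries, predicted_boundaries):
    """Mean squared error between true and predicted boundary indices."""
    errors = [(t - p) ** 2 for t, p in zip(true_boundaries, predicted_boundaries)]
    return sum(errors) / len(errors)

def boundary_accuracy(true_boundaries, predicted_boundaries):
    """Percentage of texts whose boundary is predicted exactly."""
    hits = sum(t == p for t, p in zip(true_boundaries, predicted_boundaries))
    return 100.0 * hits / len(true_boundaries)

# Example: three texts with true boundaries at sentences 4, 7, and 2.
true_b = [4, 7, 2]
pred_b = [4, 5, 2]
print(boundary_mse(true_b, pred_b))       # 1.33...
print(boundary_accuracy(true_b, pred_b))  # 66.67
```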

Datasets


Introduced in the Paper:

RoFT-chatgpt

Used in the Paper:

RoFT

Results from the Paper


Task                 Dataset        Model             Metric Name    Metric Value   Global Rank
Boundary Detection   RoFT           TLE + TS Binary   MSE            22.23          # 4
Boundary Detection   RoFT           TLE + TS Binary   Accuracy (%)   12.58          # 4
Boundary Detection   RoFT           PHD + TS ML       MSE            14.14          # 3
Boundary Detection   RoFT           PHD + TS ML       Accuracy (%)   23.50          # 3
Boundary Detection   RoFT           RoBERTa + SEP     MSE             2.63          # 2
Boundary Detection   RoFT           RoBERTa + SEP     Accuracy (%)   49.64          # 2
Boundary Detection   RoFT-chatgpt   RoBERTa + SEP     MSE             3.06          # 2
Boundary Detection   RoFT-chatgpt   RoBERTa + SEP     Accuracy (%)   54.61          # 2
Boundary Detection   RoFT-chatgpt   PHD + TS ML       MSE            14.45          # 3
Boundary Detection   RoFT-chatgpt   PHD + TS ML       Accuracy (%)   17.29          # 4
Boundary Detection   RoFT-chatgpt   TLE + TS Binary   MSE            18.52          # 4
Boundary Detection   RoFT-chatgpt   TLE + TS Binary   Accuracy (%)   20.02          # 3
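The abstract contrasts perplexity-based detectors with supervised RoBERTa fine-tuning. As a rough illustration of the former family (a sketch of one plausible heuristic, not the paper's exact method), the code below scores each sentence with GPT-2 via the Hugging Face transformers API and places the boundary at the largest drop in per-sentence perplexity, on the assumption that machine-generated continuations tend to be less surprising to a language model:

```python
# Illustrative perplexity-based boundary detector: score each sentence
# with GPT-2 and predict the boundary at the largest drop in
# per-sentence perplexity. This is a simplified sketch, not the
# method evaluated in the paper.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_perplexity(sentence: str) -> float:
    """Perplexity of a single sentence under GPT-2."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return torch.exp(loss).item()

def predict_boundary(sentences: list[str]) -> int:
    """Index of the sentence where perplexity drops the most."""
    ppls = [sentence_perplexity(s) for s in sentences]
    # drops[i] is the perplexity drop from sentence i to sentence i + 1.
    drops = [ppls[i - 1] - ppls[i] for i in range(1, len(ppls))]
    return max(range(len(drops)), key=drops.__getitem__) + 1
```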