Search results
This directory contains the code for working with the GPT-2 output detector model, obtained by fine-tuning a RoBERTa model with the outputs of the 1.5B-parameter GPT-2 model. For motivations and discussions regarding the release of this detector model, please check out our blog post and report.
19 Nov 2024 · The GPT-2 Output Detector identifies AI-generated text by analyzing linguistic and stylistic features. It provides probability scores to assess the likelihood of text being produced by the GPT-2 model. Designed with a user-friendly interface, the tool offers real-time detection and immediate results.
5 Nov 2019 · As the final model release of GPT-2's staged release, we're releasing the largest version (1.5B parameters) of GPT-2 along with code and model weights to facilitate detection of outputs of GPT-2 models.
This is an online demo of the GPT-2 output detector model, based on the 🤗/Transformers implementation of RoBERTa. Enter some text in the text box; the predicted probabilities will be displayed below. The results start to get reliable after around 50 tokens.
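The probabilities the demo displays come from a two-class sequence-classification head: the model emits a pair of logits that are converted to "real" vs. "fake" probabilities. A minimal sketch of that conversion step (the softmax itself; the logit values below are made up for illustration):

```python
import math

def logits_to_probs(logits):
    """Convert a pair of classification logits to probabilities via softmax."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A hypothetical logit pair leaning strongly toward one class:
probs = logits_to_probs([2.0, -2.0])
```

As the demo notes, these probabilities only become reliable once the input is long enough (roughly 50 tokens), regardless of how confident the softmax output looks on short text.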
Dataset of GPT-2 outputs for research in detection, biases, and more - openai/gpt-2-output-dataset
We've provided a script to download all of them, in download_dataset.py. Additionally, we encourage research on detection of finetuned models. We have released data under gs://gpt-2/output-dataset/v1-amazonfinetune/ with samples from a GPT-2 full model finetuned to output Amazon reviews.
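The dataset files are distributed as JSONL (one JSON object per line). A small sketch of loading samples from a downloaded file — note that the exact field names (e.g. `"text"`) should be verified against the files themselves, as this reader only assumes the one-object-per-line layout:

```python
import json

def load_samples(path, limit=None):
    """Read GPT-2 output samples from a JSONL file (one JSON object per line)."""
    samples = []
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f):
            if limit is not None and i >= limit:
                break
            samples.append(json.loads(line))
    return samples
```

The `limit` parameter is a convenience for inspecting a few records without loading an entire multi-gigabyte split into memory.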
Model Description: RoBERTa large OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa large model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model.
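A hedged sketch of scoring text with this detector via the Hugging Face Transformers library. The model id below is an assumption — verify the exact identifier on the Hugging Face Hub — and the heavy imports are kept inside the function so the definition itself needs no dependencies:

```python
def detect_gpt2_output(text, model_name="roberta-large-openai-detector"):
    """Score `text` with the RoBERTa-based GPT-2 output detector.

    Returns a dict mapping class label -> probability. The model id is an
    assumption; check the model card before relying on it.
    """
    # Local imports: transformers/torch are only required when this is called.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name)

    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]

    # Read label names from the config rather than hard-coding their order.
    labels = [model.config.id2label[i] for i in range(probs.shape[-1])]
    return dict(zip(labels, probs.tolist()))
```

Reading the label names from `model.config.id2label` avoids hard-coding which index corresponds to machine-generated text, since that mapping is a property of the checkpoint.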