# An implementation of DetectGPT in PyTorch can be found here: https://github.com/BurhanUlTayyab/DetectGPT
[![Follow on Twitter](https://img.shields.io/twitter/follow/BurhanUlT?style=social)](https://x.com/BurhanUlT)

# GPTZero

GPTZero is an AI model, paired with a mathematical formulation, that determines whether a given piece of text was written by an AI or by a human.
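The Acknowledgement below credits the Hugging Face perplexity guide, which suggests the core signal is the perplexity a GPT-2 language model assigns to the text: text the model finds highly predictable (low perplexity) is more likely machine-generated. Here is a minimal sketch of that scoring, assuming a stock `gpt2` checkpoint; it illustrates the idea rather than reproducing this repository's exact code:

```python
# Sketch: GPT-2 perplexity scoring, per the Hugging Face guide credited below.
# The "gpt2" checkpoint and the example text are assumptions for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Tokenize and have the model predict each token from its prefix.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # labels=ids makes the model return the mean cross-entropy
        # over all predicted tokens (the shift is handled internally).
        loss = model(ids, labels=ids).loss
    # Perplexity = exp(mean negative log-likelihood); lower = more predictable.
    return torch.exp(loss).item()

print(perplexity("The quick brown fox jumps over the lazy dog."))
```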

## Motivation

Recently, GPTZero has gotten a lot of hype and media traction for its ability to determine whether a set of sentences was generated by ChatGPT. It is a great initiative for educational institutions. However, the implementation is closed-source, so we tried to construct an identical solution, and voila! :tada: our implementation mostly gets the exact same results. :joy:

## Implementation video
[![GPTZero Implementation](https://i.imgur.com/6Xtakve.png)](https://www.youtube.com/watch?v=x9H-aY5sCDA)

## Installation
```pip install -r requirements.txt```

## Usage
### Using Python function
```python3 infer.py```
#### example
```python
from model import GPT2PPL

# Load the GPT-2 based detector
model = GPT2PPL()

# Score a piece of text for the likelihood of being AI-generated
sentence = "your text here"
model(sentence)
```
### Using interactive input
```python3 local_infer.py```
#### example
```
Please enter your sentence: (Press Enter twice to start processing)
Hello World.
My name is Mike.
(empty line)
```
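Per the prompt above, `local_infer.py` presumably collects lines until a blank line and then scores the whole passage. A rough sketch of that loop, reusing the `GPT2PPL` class from the function example (the exact prompt handling is an assumption):

```python
# Sketch of the interactive loop described above: read lines until an
# empty line, then score the collected text with GPT2PPL from model.py.
from model import GPT2PPL

model = GPT2PPL()
print("Please enter your sentence: (Press Enter twice to start processing)")
lines = []
while True:
    line = input()
    if not line.strip():  # a blank line ends input
        break
    lines.append(line)
model(" ".join(lines))  # score the full passage
```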

## Note
Our implementation produces the same results as gptzero.me. We compared our values against theirs extensively on a large corpus of text and, to our surprise, got identical results.

## Acknowledgement
1. This repository builds on the Hugging Face perplexity guide:
https://huggingface.co/docs/transformers/perplexity

2. Liu, Yinhan, et al. "RoBERTa: A robustly optimized BERT pretraining approach." arXiv preprint arXiv:1907.11692 (2019).