Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/andfanilo/writing-with-gpt2
Reproducing "Writing with Transformer" demo, using aitextgen/FastAPI in backend, Quill/React in frontend
- Host: GitHub
- URL: https://github.com/andfanilo/writing-with-gpt2
- Owner: andfanilo
- License: MIT
- Created: 2020-09-24T16:30:54.000Z (about 4 years ago)
- Default Branch: master
- Last Pushed: 2021-01-23T15:39:14.000Z (almost 4 years ago)
- Last Synced: 2024-10-11T09:29:59.445Z (about 1 month ago)
- Topics: aitextgen, fastapi, gpt2, python, quill, react
- Language: JavaScript
- Homepage:
- Size: 128 KB
- Stars: 28
- Watchers: 4
- Forks: 13
- Open Issues: 0
- Metadata Files:
- Readme: README.md
- License: LICENSE
# Writing with GPT-2
![](./img/demo.png)
![](./img/diagram.png)
## Development
- Ensure you have [Python 3.6/3.7](https://www.python.org/downloads/), [Node.js](https://nodejs.org), and [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) installed.
### Python Backend
Make sure you are in the `backend` folder:
```sh
cd backend/
```

Create and activate a virtual environment:
```sh
# If using venv
python3 -m venv venv
. venv/bin/activate

# If using conda
conda create -n write-with-gpt2 python=3.7
conda activate write-with-gpt2

# On Windows, use conda to install pytorch separately
conda install pytorch cpuonly -c pytorch

# With the environment activated, install dependencies and run the app
pip install -r requirements.txt
python aitextgen_app.py
```

To run in hot-reloading mode:
```sh
uvicorn aitextgen_app:app --host 0.0.0.0 --reload
```

To run with multiple workers:
```sh
uvicorn aitextgen_app:app --host 0.0.0.0 --workers 4
```

The app runs on http://localhost:8000. You can browse the interactive API docs at http://localhost:8000/docs.
Configuration is done through environment variables or a `.env` file. Available settings:
- **MODEL_NAME**:
  - to use a custom model, point it to the location of the `pytorch_model.bin`. You will also need to pass the matching `config.json` through `CONFIG_FILE`.
  - otherwise, the name of a model from Huggingface's [repository of models](https://huggingface.co/); defaults to `distilgpt2`.
- **CONFIG_FILE**: path to the JSON file describing the model architecture.
- **USE_GPU**: `True` to generate text on the GPU.

#### From gpt-2-simple to Pytorch
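A minimal sketch of how these settings could be read, using only the standard library (the app itself may load them differently, e.g. through a settings library; the defaults shown here are assumptions):

```python
import os

# Read the settings from environment variables; the defaults are assumptions
MODEL_NAME = os.environ.get("MODEL_NAME", "distilgpt2")
CONFIG_FILE = os.environ.get("CONFIG_FILE", "")  # path to config.json for a custom model
USE_GPU = os.environ.get("USE_GPU", "False").lower() == "true"

print(MODEL_NAME, CONFIG_FILE, USE_GPU)
```

With no variables set, this falls back to `distilgpt2` on CPU, matching the documented defaults.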
To convert gpt-2-simple model to Pytorch, see [Importing from gpt-2-simple](https://docs.aitextgen.io/gpt-2-simple/):
```sh
transformers-cli convert --model_type gpt2 --tf_checkpoint checkpoint/run1 --pytorch_dump_output pytorch --config checkpoint/run1/hparams.json
```

This puts a `pytorch_model.bin` and a `config.json` in the `pytorch` folder; those are the paths you need to set in the `.env` file to load the model.
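As a quick sanity check after conversion, a small helper like this (purely illustrative, not part of the repo) can confirm both files were produced:

```python
from pathlib import Path

def missing_model_files(folder):
    """Return which of the converted-model files are missing from `folder`."""
    required = ("pytorch_model.bin", "config.json")
    return [name for name in required if not (Path(folder) / name).exists()]

# An empty list means the conversion produced both files
print(missing_model_files("pytorch"))
```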
#### Run gpt-2-simple version
The older `gpt-2-simple` version is kept in `backend/gpt2_app`.
To download a model:
```python
import gpt_2_simple as gpt2
gpt2.download_gpt2(model_name='124M')
```

To run the app:
```sh
set MODEL_NAME=124M
uvicorn aitextgen_app:app --host 0.0.0.0
```

Set `MODEL_NAME` to any model folder inside `models`, or edit the `.env` file.
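The `set` command above is Windows `cmd` syntax; on Linux/macOS the equivalent (a minimal sketch) would be:

```sh
# POSIX shells use export instead of set
export MODEL_NAME=124M
# then launch uvicorn as shown above
```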
#### Streamlit Debug
You can run the Streamlit app to debug the model.
```sh
streamlit run st_app.py
```

### React Frontend
Make sure you are in the `frontend` folder, and that the backend API is running.
```sh
cd frontend/
```

```sh
npm install # Install npm dependencies
npm run start # Start Webpack dev server
```

The web app is now available on http://localhost:3000.
#### Building the frontend
To create a production build:
```sh
npm run build
```

Your built React app is now served statically by FastAPI on `http://localhost:8000/app`, alongside the API routes. You no longer need to run the Webpack dev server.
### Using GPU
Miniconda/Anaconda is recommended on Windows. Install PyTorch with CUDA support via conda: `conda install pytorch cudatoolkit=10.2 -c pytorch`.
If you [install the CUDA toolkit manually](https://developer.nvidia.com/cuda-toolkit), you can check the installed version with `nvcc --version` and verify that the driver sees your GPU with `nvidia-smi`.
**Beware**: after installing CUDA, avoid updating your GPU driver through GeForce Experience, or you may have to reinstall the CUDA toolkit.
## References
- [Write With Transformer](https://transformer.huggingface.co/doc/distil-gpt2)
- [React-Quill-Demo](https://codesandbox.io/s/tn2x3)
- [How To Create a React + Flask Project](https://blog.miguelgrinberg.com/post/how-to-create-a-react--flask-project)
- [How to Deploy a React + Flask Project](https://blog.miguelgrinberg.com/post/how-to-deploy-a-react--flask-project)
- [Interactive Playground - Autosave](https://quilljs.com/playground/#autosave)
- [Mentions implementation](https://github.com/zenoamaro/react-quill/issues/324)
- [Cloning Medium with Parchment](https://quilljs.com/guides/cloning-medium-with-parchment/)
- [gpt-2-cloud-run](https://github.com/minimaxir/gpt-2-cloud-run)
- [How To Make Custom AI-Generated Text With GPT-2](https://minimaxir.com/2019/09/howto-gpt2/)
- [How to generate text without finetune?](https://github.com/minimaxir/gpt-2-simple/issues/10)
- [aitextgen](https://docs.aitextgen.io/)
- [Setting up your PC/Workstation for Deep Learning: Tensorflow and PyTorch — Windows](https://towardsdatascience.com/setting-up-your-pc-workstation-for-deep-learning-tensorflow-and-pytorch-windows-9099b96035cb)
- [CUDA Installation Guide for Microsoft Windows](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/)