Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/msr8/markify
Markify is an open source command line application written in python which scrapes data from your social media accounts and utilises markov chains to generate new sentences based on the scraped data
cli discord markov-chain markov-chains markov-model markovify nltk-python python reddit scraper twitter
- Host: GitHub
- URL: https://github.com/msr8/markify
- Owner: msr8
- License: gpl-3.0
- Created: 2022-07-20T11:36:16.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2024-08-08T03:15:44.000Z (5 months ago)
- Last Synced: 2024-10-08T12:07:02.104Z (3 months ago)
- Topics: cli, discord, markov-chain, markov-chains, markov-model, markovify, nltk-python, python, reddit, scraper, twitter
- Language: Python
- Homepage:
- Size: 26.7 MB
- Stars: 12
- Watchers: 2
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- License: LICENSE.txt
README
> [!NOTE]
> Reddit scraping no longer works because of Reddit's June 2023 API policy changes, which forced Pushshift to shut down
https://user-images.githubusercontent.com/79649185/182558272-255becc8-1dcc-45b5-99ef-22e0596cf490.mp4
# Index
* [Introduction](#introduction)
* [Installation](#installation)
* [Usage](#usage)
* [Flags](#flags)
* [How does this work?](#how-does-this-work)
* [FAQs](#faqs)
# Introduction
Markify is an open source command-line application written in Python that scrapes data from your social media accounts and uses Markov chains to generate new sentences based on the scraped data
- Engineered a ***command-line application***, Markify, leveraging Python to extract and analyze data from social media accounts
- Employed ***NLTK*** for meticulous data sanitization
- Demonstrated proficiency in ***interfacing with a variety of APIs*** (official and unofficial) to aggregate data
- Employed ***Markov chains*** to generate new sentences
- Packaged the application for widespread use by uploading it to ***PyPI***
# Installation
There are many methods to install markify on your device, such as:
## 1) Install the pip package
***(Recommended)***

```bash
python -m pip install markify
```

## 2) Install it via pip and git
```bash
python -m pip install git+https://github.com/msr8/markify.git
```

## 3) Clone the repo and install the package
```bash
git clone https://github.com/msr8/markify
cd markify
python setup.py install
```

## 4) Clone the repo and run markify without installing to PATH
```bash
git clone https://github.com/msr8/markify
cd markify
python -m pip install -r requirements.txt
cd src
python markify.py
```
# Usage
To use it, you can simply run `markify` on the command line, but you need to set up a config file first. If you're on Windows, the default location for the config file is `%LOCALAPPDATA%\markify\config.json`; on Linux/macOS it is `~/.config/markify/config.json`. Alternatively, you can provide the path to the config file using the `-c --config` flag. If you run the program and the config file doesn't exist, it creates an empty template. An ideal config file should look like:
```json
{
    "reddit": {
        "username": "..."
    },
    "discord": {
        "token": "..."
    },
    "twitter": {
        "username": "..."
    }
}
```
where the username under the reddit section is your Reddit username, the token under discord is your Discord token, and the username under twitter is your Twitter username. If any of them are not given, the program will skip the collection process for that platform
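The template-creation and skip-if-missing behaviour described above can be sketched roughly like this (a hypothetical helper for illustration, not markify's actual code):

```python
import json
from pathlib import Path

TEMPLATE = {"reddit": {"username": ""}, "discord": {"token": ""}, "twitter": {"username": ""}}

def load_config(path):
    """Write an empty template if the file is missing, then report which
    platforms have no credentials and will therefore be skipped."""
    p = Path(path)
    if not p.exists():
        p.parent.mkdir(parents=True, exist_ok=True)
        p.write_text(json.dumps(TEMPLATE, indent=4))
    cfg = json.loads(p.read_text())
    skipped = [platform for platform, creds in cfg.items() if not any(creds.values())]
    return cfg, skipped
```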
# Flags
You can view the available flags by running `markify --help`. It should show the following text:
```
  -h, --help            show this help message and exit
  -c CONFIG, --config CONFIG
                        The path to config file. By default, its {LOCALAPPDATA}/markify/config.json on
                        windows, and ~/.config/markify/config.json on other operating systems
  -d DATA, --data DATA  The path to the json data file. If given, the program will not scrape any data and
                        will just compile the model and generate sentences
  -n NUMBER, --number NUMBER
                        Number of sentences to generate. Default is 50
  -v, --version         Print out the version number
```
More explanation is given below:
## -c --config
This is the path to the config file (`config.json`). By default, it's `{LOCALAPPDATA}/markify/config.json` on Windows and `~/.config/markify/config.json` on other operating systems. For example:
```bash
markify -c /Users/tyrell/Documents/config.json
```

## -d --data
This is the path to the data file containing all the scraped content. If it is given, the program doesn't scrape any data and just compiles a model based on the data present in the file. By default, a new data file is generated in the `DATA` folder inside the config folder, named `x.json` where `x` is the current epoch time in seconds. For example:
```bash
markify -d /Users/tyrell/.config/markify/DATA/1658433988.json
```

## -n --number
This is the number of sentences to generate after compiling the model. Default is 50. For example:
```bash
markify -n 20
```

## -v --version
This flag prints out the version of markify you're using. For example:
```bash
markify -v
```
# How does this work?
This program has four main parts: scraping Reddit comments, scraping Discord messages, scraping tweets, and generating sentences using Markov chains. More explanation is given below
## Scraping reddit comments
The program uses [Pushshift's API](https://github.com/pushshift/api) to scrape your comments. Since Pushshift can only return 1000 comments at a time, the program gets the timestamp of the oldest comment and then sends a request to the API for comments before that timestamp. This loop continues until either all your comments are scraped or 10000 comments have been scraped. I chose Pushshift's API since it's faster, yields more results, and doesn't need a client ID or secret
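The pagination loop described above can be sketched like this. Note that Pushshift itself is now defunct, so `fetch_page` is a hypothetical stand-in for an HTTP request that takes a `before` timestamp:

```python
def scrape_comments(fetch_page, limit=10000):
    """fetch_page(before) returns a batch of comment dicts, newest first
    (a hypothetical stand-in for a Pushshift request with a `before` param)."""
    comments, before = [], None
    while len(comments) < limit:
        batch = fetch_page(before)
        if not batch:
            break  # every comment has been scraped
        comments.extend(batch)
        before = batch[-1]["created_utc"]  # oldest comment's timestamp
    return comments[:limit]
```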
## Scraping discord messages
To scrape Discord messages, the program first checks whether the token is valid by fetching basic account information (username, discriminator, and account ID) from the `/users/@me` endpoint. It then gets all the DM channels you have participated in through the `/users/@me/channels` endpoint, extracts the channel IDs from the response, and fetches the 100 most recent messages in each channel using the `/channels/channelid/messages` endpoint, where `channelid` is the channel ID. Finally, it goes through the response and adds to the data file only those messages that are plain text, sent by you, and aren't empty
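The message filter at the end of that flow can be sketched as a small predicate (a hypothetical helper name; in Discord's API, `type` 0 is the `DEFAULT` plain-text message type):

```python
def keep_message(msg, my_id):
    """Keep only plain-text, non-empty messages sent by the given user."""
    return (msg.get("type") == 0                          # DEFAULT, i.e. plain text
            and msg.get("author", {}).get("id") == my_id  # sent by you
            and bool(msg.get("content")))                 # not empty
```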
## Scraping tweets
The program uses the [snscrape](https://github.com/JustAnotherArchivist/snscrape) module to scrape your tweets. It keeps scraping until it has either scraped all your tweets or reached 10000 tweets
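Since snscrape yields tweets lazily as a generator, capping the count is just a matter of slicing the iterator. A rough sketch (the scraper call assumes an installed snscrape and network access, so only the capping helper is exercised here):

```python
from itertools import islice

def take_tweets(tweet_iter, limit=10000):
    """Consume tweets until the iterator is exhausted or the limit is hit."""
    return list(islice(tweet_iter, limit))

def scrape_tweets(username, limit=10000):
    # snscrape exposes a user's tweets as a lazy generator
    import snscrape.modules.twitter as sntwitter
    return take_tweets(sntwitter.TwitterUserScraper(username).get_items(), limit)
```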
## Generating sentences using markov chains
The program extracts all the useful text from the data file and builds a Markov chain model from it using the [markovify](https://github.com/jsvine/markovify) module. It then generates new sentences (50 by default) and prints them out
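markovify does the heavy lifting in practice, but the core idea can be illustrated with a toy word-level chain (a sketch of the technique, not markovify's implementation):

```python
import random
from collections import defaultdict

def build_chain(sentences):
    """Map each word to the words that follow it anywhere in the corpus."""
    chain = defaultdict(list)
    for sentence in sentences:
        words = sentence.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
    return chain

def generate(chain, start, max_words=15, seed=None):
    """Random-walk the chain from `start` until a dead end or max_words."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_words and chain[out[-1]]:
        out.append(rng.choice(chain[out[-1]]))
    return " ".join(out)
```

Because transitions are sampled in proportion to how often they occur in the corpus, frequent phrasings from your scraped messages dominate the generated sentences.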
# FAQs
### Q) How do I get my discord token?
Recently (as of July 2022), Discord reworked its token system, and the format of the new tokens is a bit different. You can obtain your Discord token using this [guide](https://www.androidauthority.com/get-discord-token-3149920/)
### Q) The program is throwing an error and is telling me to install "averaged_perceptron_tagger" or something. What to do?
Running the command given below should work
```bash
python3 -c "import nltk; nltk.download('averaged_perceptron_tagger')"
```
You can visit [this page](https://www.nltk.org/data.html) for more information
### Q) The installation is stuck at building lxml. What to do?
Sadly, all you can do is wait. It is a [known issue with lxml](https://stackoverflow.com/questions/33064433/lxml-will-never-finish-building-on-ubuntu)