Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/rsharifnasab/create_word_cloud
create word clouds with wordcloud-fa for twitter and telegram chats
- Host: GitHub
- URL: https://github.com/rsharifnasab/create_word_cloud
- Owner: rsharifnasab
- License: gpl-3.0
- Created: 2020-02-25T17:28:18.000Z (almost 5 years ago)
- Default Branch: master
- Last Pushed: 2024-07-26T18:29:04.000Z (5 months ago)
- Last Synced: 2024-08-05T09:14:14.795Z (5 months ago)
- Topics: beautifulsoup, python, python3, telegram, twint, twitter, worldcould
- Language: Python
- Homepage:
- Size: 5.45 MB
- Stars: 26
- Watchers: 2
- Forks: 5
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# create word clouds for telegram chats and twitter
## how to use (general)
1. read this file carefully
2. open `config.py` and set your settings, such as the input file path
3. run `program.py` with python 3.4 or higher
4. select your context (twitter, telegram, normal text, or twitter exported data)
5. wait until the program finishes
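The context selection in step 4 boils down to a menu lookup. A minimal sketch (the dictionary and function name are hypothetical, not the actual `program.py` code):

```python
# sketch of step 4's context selection (names are hypothetical,
# not the actual program.py implementation)
CONTEXTS = {
    "1": "twitter",
    "2": "telegram",
    "3": "normal text",
    "4": "twitter exported data",
}

def choose_context(choice):
    """Map a menu choice to a context name; reject unknown options."""
    if choice not in CONTEXTS:
        raise ValueError(f"unknown option: {choice!r}")
    return CONTEXTS[choice]
```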
## how to use (tweets)
1. get your tweets from [this site](https://www.allmytweets.net/) and paste them into a file, for example `input.txt`
2. enter the file path in `config.py`, in the `twitter_config` dictionary
3. if you set `'SOURCE': "clipboard"`, the program will use the system clipboard instead of the input file
4. you can set other options in that config
+ no_retweet and no_replies work as expected
+ no link removes all tweets containing links, for example quotes, images, and unfollow-checker auto-tweets

## how to use (telegram)
1. export chat history to a folder
2. enter the folder path in `config.py`, in the `telegram_config` dictionary, something like `'./input/*.html'`

## how to use (twitter data)
1. this option is for people who want a cloud of all their tweets, not just the last 3000
2. first of all, follow this [link](https://help.twitter.com/en/managing-your-account/how-to-download-your-twitter-archive) to export your twitter data. it takes some time, so be patient!
3. after your data is ready, download and unzip it; we need the `tweet.js` file in the next step
4. enter the path of `tweet.js` in the data folder into the config file (in `config.py/data_config`)
5. open the program and select option `4`
6. you can set NO_REPLIES if you want only your own tweets, not replies
7. there is no NO_RETWEET option because retweets are not included in the exported data
8. press enter and enjoy :D
### notes
+ you can set font and mask in general_config file
+ enter a TTF font
+ the mask image should be png or jpg
+ you don't need to specify the file format! the program will guess it
+ if you use a third-party mask and it's not black and white (greyscale, for example), you can set NORMALIZE_MASK to True
+ normalizing the mask is a slow operation that handles greyscale masks; if you use the built-in masks you don't need it
+ if stop words are not removed automatically, set STOP_WORD_CLEAN to "manual" in config.py
+ only if Persian words appear reversed, set ARABIC_RESHAPER to True
+ line color and line width control the splitter line between text and whitespace; it may look good in some cases
+ max words, min font, and max font are 3 numbers; change them with caution: if the min font is too low or max words is too high, the whole operation takes much longer, and if the max font is not big enough, the cloud looks ugly
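The NORMALIZE_MASK step above amounts to thresholding a greyscale image into strict black and white. A rough sketch with numpy (already a requirement); the function name and default threshold are assumptions, not the project's actual code. wordcloud treats white (255) mask pixels as "masked out":

```python
import numpy as np

def normalize_mask(grey, threshold=128):
    """Threshold a greyscale mask array into the strict black-and-white
    form wordcloud expects: 255 (white) where words are masked out,
    0 where words may be drawn. In the real script the array would come
    from PIL, e.g. np.array(Image.open(path).convert("L")).
    """
    grey = np.asarray(grey)
    return np.where(grey >= threshold, 255, 0).astype(np.uint8)
```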
## Requirements
+ wordcloud
+ numpy and PIL
+ beautifulsoup4 (for parsing telegram html exports)
+ arabic reshaper
+ bidi
+ clipboard

You can also install the requirements using `requirements.txt` and `pip`:

```
pip3 install -r requirements.txt
```
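The repository's own `requirements.txt` is not reproduced here; purely as an illustration, the pip package names for the list above would look roughly like this (assuming PIL maps to `Pillow` and bidi to `python-bidi`):

```
wordcloud-fa
numpy
Pillow
beautifulsoup4
arabic-reshaper
python-bidi
clipboard
```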
## to-do
+ [ ] write better documentation
+ [ ] add command-line argument parse
+ [ ] add proper twint support for getting tweets automatically

## special thanks to:
+ [Mohammadreza Alihoseiny](https://github.com/alihoseiny/) for developing wordcloud-fa and writing [this blog post](https://blog.alihoseiny.ir/%DA%86%DA%AF%D9%88%D9%86%D9%87-%D8%A8%D8%A7-%D9%BE%D8%A7%DB%8C%D8%AA%D9%88%D9%86-%D8%A7%D8%A8%D8%B1-%DA%A9%D9%84%D9%85%D8%A7%D8%AA-%D9%81%D8%A7%D8%B1%D8%B3%DB%8C-%D8%A8%D8%B3%D8%A7%D8%B2%DB%8C%D9%85%D8%9F/)
+ [amueller](https://github.com/amueller) for developing wordcloud package
+ [Ali Yoonesi](https://github.com/AYoonesi) for good advice and troubleshooting
+ [Radin Shayanfar](https://github.com/radinshayanfar) for pull requests and bug fixes