Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/marcanuy/publishfeed
Gets RSS feeds and tweets when a website publishes new articles in its feed.
- Host: GitHub
- URL: https://github.com/marcanuy/publishfeed
- Owner: marcanuy
- License: mit
- Created: 2017-06-08T13:30:50.000Z (over 7 years ago)
- Default Branch: master
- Last Pushed: 2022-12-07T23:58:39.000Z (almost 2 years ago)
- Last Synced: 2023-03-14T07:45:34.318Z (over 1 year ago)
- Topics: atom, feed, python, python3, rss, rss-feed-scraper, tweet, twitter
- Language: Python
- Homepage: https://simpleit.rocks/automatically-tweet-new-blog-posts-based-in-rss/
- Size: 20.5 KB
- Stars: 11
- Watchers: 4
- Forks: 4
- Open Issues: 2
Metadata Files:
- Readme: README.md
- License: LICENSE
README
Publish Feed
============

A publisher of articles from websites' RSS feeds to Twitter.
**Table of Contents**

- [Publish Feed](#publish-feed)
- [Overview](#overview)

# Overview
This app currently performs two tasks:

- download RSS content from the sources listed in `feeds.yml`
- publish the new articles' titles and links to Twitter

This is an easy way to get your [blog posts automatically tweeted](https://simpleit.rocks/automatically-tweet-new-blog-posts-based-in-rss/).
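The first task boils down to fetching a feed and pulling out title/link pairs. A minimal standard-library sketch of that step (the inline RSS sample and function name are illustrative, not the project's actual code):

```python
# Sketch: parse an RSS 2.0 document and extract (title, link) pairs.
# The real project downloads feeds from the URLs in feeds.yml; here an
# inline sample stands in for a downloaded feed.
import xml.etree.ElementTree as ET

SAMPLE_RSS = """\
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item>
      <title>First post</title>
      <link>https://example.com/first-post</link>
    </item>
    <item>
      <title>Second post</title>
      <link>https://example.com/second-post</link>
    </item>
  </channel>
</rss>
"""

def extract_entries(rss_text):
    """Return (title, link) tuples for every <item> in an RSS 2.0 feed."""
    root = ET.fromstring(rss_text)
    return [
        (item.findtext("title"), item.findtext("link"))
        for item in root.iter("item")
    ]

print(extract_entries(SAMPLE_RSS)[0])  # ('First post', 'https://example.com/first-post')
```

Each extracted pair is what ultimately gets published as a tweet.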
# Installation
## Install dependencies
Install the pip dependencies:

~~~ bash
make install
~~~
## Get Twitter credentials
Create a Twitter app and generate keys
with **read and write permissions**.

## Set up credentials and feeds
Customize `feeds.yml.skel` and save it as `feeds.yml`:

~~~ bash
cd publishfeed
cp feeds.yml.skel feeds.yml
~~~
For example, defining feeds for two different Twitter accounts,
`simpleitrocks` and `reddit`, would look like:

~~~ yaml
simpleitrocks: # twitter handle
  twitter:
    consumer_key: 'XXXXXX'
    consumer_secret: 'XXXXXXX'
    access_key: 'XXXXXX'
    access_secret: 'XXXXXX'
  urls:
    - https://simpleit.rocks/feed
  hashtags: '#TechTutorials'
reddit:
  twitter:
    consumer_key: 'XXXXXX'
    consumer_secret: 'XXXXXXX'
    access_key: 'XXXXXX'
    access_secret: 'XXXXXX'
  urls:
    - http://www.reddit.com/.rss
    - http://www.reddit.com/r/news/.rss
    - http://www.reddit.com/user/marcanuy/.rss
  hashtags: '#RedditFrontPage'
~~~

# Running
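Every command takes a feed key from `feeds.yml`. As a hedged sketch of how one entry of the configuration above might be resolved (PyYAML assumed; the project's actual loader may differ):

```python
# Sketch: load feeds.yml and pick out one account's configuration.
# PyYAML is an assumed dependency; key names come from the example above.
import yaml

FEEDS_YML = """
simpleitrocks:
  twitter:
    consumer_key: 'XXXXXX'
    consumer_secret: 'XXXXXXX'
    access_key: 'XXXXXX'
    access_secret: 'XXXXXX'
  urls:
    - https://simpleit.rocks/feed
  hashtags: '#TechTutorials'
"""

config = yaml.safe_load(FEEDS_YML)
feed = config["simpleitrocks"]          # the "feed" positional argument
print(feed["urls"])                     # ['https://simpleit.rocks/feed']
print(feed["hashtags"])                 # #TechTutorials
```

The feed key selects which credentials, URLs, and hashtags a run uses.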
There are two commands available:

~~~ bash
$ python main.py
usage: main.py [-h] [-g | -t] feed

Process and publish RSS Feeds articles

positional arguments:
  feed            Index from feeds.yml

optional arguments:
  -h, --help      show this help message and exit
  -g, --getfeeds  Download and save new articles from the list of feeds
  -t, --tweet     Tweet unpublished articles from this list of feeds
~~~

- `python main.py <feed> --getfeeds`

  Download all the pages from the URLs defined in `urls` and save the
  new ones. E.g.: `python main.py reddit --getfeeds`

- `python main.py <feed> --tweet`

  Tweet the oldest unpublished page (previously downloaded with
  `--getfeeds`). E.g.: `python main.py reddit --tweet`

## Cronjobs
Set up two cronjobs to publish new feed content automatically:

~~~ bash
crontab -e
~~~

~~~ cronjob
# hourly download new pages
0 * * * * workon publishfeed; cd publishfeed; python main.py reddit --getfeeds

# tweet every 15 minutes if there are unpublished pages
*/15 * * * * workon publishfeed; cd publishfeed; python main.py reddit --tweet
~~~

**Make sure you configure the tweet cronjob with at least 2 minutes
between runs so your credentials won't be suspended.**

# Design
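The storage flow this section describes can be sketched with SQLAlchemy. Only the model names come from the README; column names, the status flag, and the in-memory database are assumptions for illustration:

```python
# Sketch: one RSSContent row per downloaded page, stored in SQLite via
# SQLAlchemy. "Tweeting" = take the oldest unpublished row, publish it,
# and flip its status. The real app keeps SQLite files under /databases/.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class RSSContent(Base):
    __tablename__ = "rsscontent"
    id = Column(Integer, primary_key=True)
    title = Column(String)
    url = Column(String)
    published = Column(Integer, default=0)  # 0 = not yet tweeted (assumed flag)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([
        RSSContent(title="Old post", url="https://example.com/old"),
        RSSContent(title="New post", url="https://example.com/new"),
    ])
    session.commit()

    # Get the oldest unpublished page, publish it, and change its status.
    oldest = (
        session.query(RSSContent)
        .filter_by(published=0)
        .order_by(RSSContent.id)
        .first()
    )
    oldest.published = 1  # the real app would tweet title + url here
    session.commit()
    print(oldest.title)  # Old post
```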
`feeds.yml` populates the **FeedSet** model; then, for each URL, new
content is created as **RSSContent** instances (using SQLAlchemy) and saved in
`/databases/rss_.db` *SQLite* databases.

To tweet a new post, we get the oldest unpublished page from
**RSSContent**, publish it, and change its status.

# Questions
Do you think there is something missing or something that can be improved? Feel
free to open issues and/or contribute!

# License
MIT Licensed.