# The app:
A Rails API that live-broadcasts changes in a PostgreSQL database and sends background emails.
- back-end code is written with `Ruby-On-Rails`
![Ruby-On_Rails](https://github.com/ndrean/godwd/blob/master/public/Rails.png)
- and uses `Puma` as the concurrent web server, reverse-proxied by `nginx`
![nginx](https://github.com/ndrean/godwd/blob/master/public/nginx.png)
so that we have the following schema:
![Nginx Puma Rack Rails](https://github.com/ndrean/godwd/blob/master/public/Nginx-puma-rack-rails.png)
The app is served from Heroku (free dyno...)
![Heroku](https://github.com/ndrean/godwd/blob/master/public/Heroku.png)
- uses a `PostgreSQL` database
![Postgres](https://github.com/ndrean/godwd/blob/master/public/Postgres.png)
- changes in the database are live streamed with **Server-Sent-Events**
- uses `Sidekiq` with `Redis` as the ActiveJob adapter for mailing
![Sidekiq](https://github.com/ndrean/godwd/blob/master/public/sidekiq.png)
![Redis](https://github.com/ndrean/godwd/blob/master/public/Redis.png)
- `Mailgun` for the mailing service
![Mailgun](https://github.com/ndrean/godwd/blob/master/public/mailgun.png)
- `Cloudinary` (without ActiveStorage) for storing images. The upload is done directly to Cloudinary by the front end, which then sends the URL of the image for the back end to save. The back end only deletes images, asynchronously, with a Sidekiq worker.
![Cloudinary](https://github.com/ndrean/godwd/blob/master/public/Cloudinary.png)
- the data sent from the server can be gzip- or brotli-compressed. Here we chose to let nginx take care of this.
- The authentication uses the **Knock** gem (with BCrypt and JWT).
The front end is:
- a `React` front end (using Create React App)
![React](https://github.com/ndrean/godwd/blob/master/public/React.png)
- uses a `Facebook Login` component
![FBLogin](https://github.com/ndrean/godwd/blob/master/public/FB-Login.png)
- uploads images directly to `Cloudinary`
![ReactCloudinary](https://github.com/ndrean/godwd/blob/master/public/reactcloudinary.png)
- displays maps with `Leaflet.js` and the `arcGis` service for reverse geocoding
![Leaflet.js](https://github.com/ndrean/godwd/blob/master/public/leafletjs.png)
![arcGis](https://github.com/ndrean/godwd/blob/master/public/arcGis.png)
The domain registrar is AWS Route 53, and the static front-end files are hosted in an AWS S3 bucket: create a bucket, upload the code, make it public, set the public access policy, configure it for static web hosting, and set a DNS CNAME.
![AWS-S3](https://github.com/ndrean/godwd/blob/master/public/AWS-S3.png)
We use Cloudflare as a CDN, which provides the SSL certificates. To make the Cloudflare subdomain work with S3, add the domain to Cloudflare, point the domain registrar's DNS servers to Cloudflare, and set the DNS records accordingly. Then we have:
![Cloudfare](https://github.com/ndrean/godwd/blob/master/public/Cloudfare.png)
# Database structure
3 tables, where 'events' is a join table.
- The field `events.participants` has the Postgres type `jsonb`: an array of objects of the form `{email: '[email protected]', notif: "false", ptoken: "wmkm234kxkl"}`
- `end_gps` and `start_gps` are arrays of 2 decimals, `[45.23424,1.234234]`
- a user has the field `password_digest` even though we use the field `password`: the `bcrypt` gem saves a salted hash of the password in `password_digest`.
- the fields `uid` and `access_token` are copies of a user's Facebook credentials.
- the `confirm_token` is used on sign-up: the `Knock` gem generates a token that is saved in the db and sent to the user in a link by email. When the user confirms, the db looks up this token and the user is confirmed.

![Database schema](https://github.com/ndrean/godwd/blob/master/public/goDownWind.png)
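As a minimal sketch of how these fields are typically wired up in the model (the association is an assumption based on `events.user_id`, not taken from the actual code):

```ruby
# /app/models/user.rb (illustrative sketch, not the actual file)
class User < ApplicationRecord
  has_secure_password   # bcrypt writes a salted hash to the password_digest column
  has_many :events      # assumption, based on events.user_id
end
```

The full table definitions: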
```
CREATE TABLE "events" (
"id" varchar,
"directCLurl" string,
"publicID" string,
"url" string,
"participants" jsonb,
"user_id" bigint,
"itinary_id" bigint,
"created_at" datetime,
"updated_at" datetime,
"comment" text
);CREATE TABLE "itinaries" (
"id" varchar,
"date" date,
"start" string,
"end" string,
"distance" decimal,
"created_at" datetime,
"updated_at" datetime,
"end_gps" decimal,
"start_gps" decimal
);CREATE TABLE "users" (
"id" varchar,
"email" string,
"password_digest" string,
"confirm_token" string,
"confirm_email" boolean,
"access_token" string,
"uid" string,
"created_at" datetime,
"updated_at" datetime
);ALTER TABLE "itinaries" ADD CONSTRAINT "fk_rails_events_itinaries" FOREIGN KEY ("id") REFERENCES "events" ("itinary_id");
ALTER TABLE "users" ADD CONSTRAINT "fk_rails_events_users" FOREIGN KEY ("id") REFERENCES "events" ("user_id");
```# schema.rb
```ruby
# /config/application.rb
config.active_record.schema_format = :ruby # or :sql
```

so we can run `rails db:schema:load` instead of running all the migrations with `rails db:migrate`.
Once `docker-compose up`, we can do:
```bash
docker-compose exec web rails db:create
docker-compose exec web rails db:schema:load
docker-compose exec web rails db:seed
```

# TODO:
- Test/implement a request rate limiter? (throttling on login? on 'new event'; a sketch follows below)
> gem `rack-attack`
- try SSE with Redis publish/subscribe... (can't make it work...)
- try SSE with Postgres LISTEN/NOTIFY?? => capture the delete action?
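A minimal sketch of what such a throttle could look like with `rack-attack` (the endpoint path assumes Knock's default `/user_token` login route; the limits are arbitrary):

```ruby
# /config/initializers/rack_attack.rb (illustrative sketch)
Rack::Attack.throttle('logins/ip', limit: 5, period: 60) do |req|
  # returns a discriminator (the client IP) only for POSTs to the login endpoint
  req.ip if req.post? && req.path == '/user_token'
end
```

Recent versions of `rack-attack` insert the middleware automatically; otherwise add `config.middleware.use Rack::Attack` in '/config/application.rb'.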
# HTTP Caching w/Rails
> Rails API: `ConditionalGet`
This is a Rails API, so only `stale?` is possible: `stale?` either renders 'Completed 304 Not Modified' or runs the query again when necessary (a sketch follows below).
Other HTTP caching options with Rails (non-API):
- `fresh_when(@variable)` sets an ETag and renders a 304 Not Modified response when the request is fresh
- set the HTTP Cache-Control header: `expires_in 2.hours, public: true`
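A minimal sketch of such a conditional GET in an API controller (the controller and the query are illustrative, not the app's exact code):

```ruby
# /app/controllers/api/v1/events_controller.rb (illustrative)
class Api::V1::EventsController < ApplicationController
  def index
    events = Event.order(updated_at: :desc)
    # renders 304 Not Modified when the client's ETag / Last-Modified still match,
    # otherwise serializes and sends the collection again
    if stale?(etag: events, last_modified: events.maximum(:updated_at))
      render json: events
    end
  end
end
```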
# Note: VPS for Rails
To be tested.
# Async jobs:
- `ActiveJob`. Set `config.active_job.queue_adapter = :sidekiq` in `/config/environments/development.rb` and `production.rb`, and use `perform_later` or `deliver_later`. We also need to declare a class inheriting from `ApplicationJob` and define `queue_as :mailers`, for example.
- or directly a `Sidekiq` worker: example with `RemoveDirectLink`. Create a worker under `/app/workers/my_worker.rb` with `include Sidekiq::Worker` and use `perform_async` in the controller.
## Sidekiq setup
- added to '/config/application.rb' the declaration `config.active_job.queue_adapter = :sidekiq`, which tells ActiveJob to use Sidekiq.
- added '/config/initializers/sidekiq.rb' with the `Redis` configuration.
When we define the route:
```ruby
require 'sidekiq/web'
mount Sidekiq::Web => '/sidekiq'
```

then the Sidekiq console is available at http://localhost:3001/sidekiq.
To run Sidekiq, we do:
```bash
bundle exec sidekiq --environment development -C config/sidekiq.yml
```

This will be a separate process for the process launcher `Foreman`:
```bash
worker: bundle exec sidekiq -C ./config/sidekiq.yml
```

## Mail background jobs
- Note: the `mailgun-ruby` gem is useful to check that a mail has actually been sent.
We declare in '/config/application.rb' (for all environments):
`config.action_mailer.delivery_method = :smtp`

We don't use ActiveJob directly here to send a mail asynchronously; we use ActionMailer with Sidekiq and the method `deliver_later`. We define classes (`EventMailer` and `UserMailer`, both inheriting from `ApplicationMailer`) with actions that are used by the controllers. Each action uses an `html.erb` view delivered via the mail protocol `smtp`. The views use the instance variables defined in the actions.
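As an illustration, a minimal sketch of such a mailer (the subject line is an assumption; the `register` action and its arguments mirror the `RegisterJob` shown in the 'Old files' section below):

```ruby
# /app/mailers/user_mailer.rb (illustrative sketch)
class UserMailer < ApplicationMailer
  # renders /app/views/user_mailer/register.html.erb, which can use @confirm_token
  def register(email, confirm_token)
    @confirm_token = confirm_token
    mail(to: email, subject: 'Please confirm your account') # subject is an assumption
  end
end
```

In a controller, `UserMailer.register(user.email, user.confirm_token).deliver_later` queues the mail on the `mailers` queue, where Sidekiq picks it up.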
The mails are queued in a queue named `mailers` and Sidekiq uses a Redis db.
The usage of Redis is declared in '/config/initializers/sidekiq.rb' and requires the `redis` gem.
For Heroku, we need to set the config vars `REDIS_PROVIDER` and `REDIS_URL`.
For 'localhost', we set `REDIS_URL='redis://localhost:6379'`.
We use `Mailgun`. Once we have registered our domain, we set the DNS TXT & CNAME records provided by Mailgun at the registrar (OVH or AWS), and the SMTP data in `/config/initializers/smtp.rb`:
```ruby
ActionMailer::Base.smtp_settings = {
address: 'smtp.mailgun.org',
port: 587,
domain: ENV['DOMAIN_NAME'], # e.g. "thedownwinder.com"
user_name: ENV['SMTP_USER_NAME'], # e.g. "[email protected]"
password: ENV['MAIL_APP_PASSWORD'], # e.g. "eac87f019exxxx"
authentication: :plain,
enable_starttls_auto: true
}
```

# Cloudinary remove with Sidekiq
- gem `Cloudinary`
> credentials: they are passed manually to each call in the method, and added as `config vars` to Heroku. The `/config/cloudinary.yml` is not used since it doesn't accept `.env` variables.
We use the worker `RemoveDirectLink` so that the Rails back end removes a picture from Cloudinary asynchronously. We can use ActiveJob or a plain Sidekiq worker; the '/app/workers' folder is read by Sidekiq, not by Rails.
Here, we used a plain worker (without ActiveJob and the `default` queue, just including `Sidekiq::Worker`) and call `perform_async`.
```ruby
# /app/workers/remove_direct_link.rb
class RemoveDirectLink
  include Sidekiq::Worker

  # called from the controller, e.g. RemoveDirectLink.perform_async(event.publicID)
  def perform(event_publicID)
    auth = {
      cloud_name: Rails.application.credentials.CL[:CLOUD_NAME],
      api_key: Rails.application.credentials.CL[:API_KEY],
      api_secret: Rails.application.credentials.CL[:API_SECRET]
    }
    return if !event_publicID

    Cloudinary::Uploader.destroy(event_publicID, auth)
  end
end
```

We could also use ActiveJob (cf. mails) by defining a class inheriting from `ApplicationJob`, specifying the queue, and calling `perform_later`. Here, we use the Cloudinary method `destroy`:
```ruby
# /app/jobs/remove_direct_link.rb
class RemoveDirectLink < ApplicationJob
  queue_as :default

  def perform(event_publicID)
    auth = {
      cloud_name: ENV['CL_CLOUD_NAME'],
      api_key: ENV['CL_API_KEY'],
      api_secret: ENV['CL_API_SECRET']
    }
    return if !event_publicID

    Cloudinary::Uploader.destroy(event_publicID, auth)
  end
end
```

# Puma port setup
React will run on port 3000 and Rails on port 3001:
```ruby
# /config/puma.rb
port ENV.fetch("PORT") { 3001 }
```

# Bootsnap issue
Removed line 60, `# config.file_watcher = ActiveSupport::EventedFileUpdateChecker`, in '/config/environments/development.rb', which uses `listen`.
# CORS
CORS stands for Cross-Origin Resource Sharing, a standard that lets developers specify who can access the assets on a server and which HTTP requests are accepted. For example, a restrictive 'same-origin' policy would prevent your Rails API at localhost:3001 from sending data to and receiving data from your front end at localhost:3000.
```ruby
# /config/application.rb
config.middleware.insert_before 0, Rack::Cors do
allow do
origins ["https://thedownwinder.com", "http://localhost:3001", "http://localhost:8080"]
resource '*',
headers: :any,
methods: [:get, :post, :options],
credentials: true
end
end
```

# SSE
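A minimal sketch of streaming Server-Sent Events from Rails with `ActionController::Live` (the controller name, route, and payload are illustrative assumptions, not the app's actual implementation):

```ruby
# /app/controllers/api/v1/stream_controller.rb (illustrative)
class Api::V1::StreamController < ApplicationController
  include ActionController::Live

  def index
    response.headers['Content-Type'] = 'text/event-stream'
    sse = SSE.new(response.stream, event: 'events_update')
    # naive polling loop; Redis pub/sub or Postgres LISTEN/NOTIFY (see TODO) would replace it
    10.times do
      sse.write({ count: Event.count })
      sleep 2
    end
  ensure
    sse.close
  end
end
```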
# Sidekiq, Redis setup
1 worker, 1 dyno, 5 web threads.
```ruby
# /config/initializers/sidekiq.rb
if Rails.env.production?
  Sidekiq.configure_client do |config|
    config.redis = { url: ENV['REDIS_URL'], size: 3, network_timeout: 5 }
  end

  Sidekiq.configure_server do |config|
    config.redis = { url: ENV['REDIS_URL'], size: 5, network_timeout: 5 }
  end
end
```

```
# .env
REDIS_URL='redis://localhost:6379'
```

```ruby
# /config/initializers/sidekiq.rb
...
config.redis = { url: ENV['REDIS_URL'], size: 2 }
```

To run Redis, we do:
```bash
brew services start redis
```

We declare another process for Foreman (Procfile):
```bash
redis: redis-server --port 6379
```

# Procfile & Foreman
Run `foreman start -f <Procfile>`, picking the Procfile we want:
> Dev localhost mode:
```
api: bundle exec bin/rails server -p 3001
worker: bundle exec sidekiq -C ./config/sidekiq.yml
redis: redis-server --port 6379
```
> Heroku mode:
```
api: bundle exec bin/rails server -p 3001
worker: bundle exec sidekiq -C ./config/sidekiq.yml
```

- Heroku settings / config vars:
`REDIS_URL` will be set in 'Settings > Config Vars' after setting `REDIS_PROVIDER=REDISTOGO_URL` (free plan).
Set the keys `RAILS_MASTER_KEY` and `SECRET_KEY_BASE` (run `EDITOR="code --wait" rails credentials:edit` to set them).
The `DATABASE_URL` will be set by Heroku.
# Compression
We can use gzip or Brotli compression directly with Rails. For Brotli, use the gem `rack-brotli` and set:
```ruby
#/config.application.rb
config.middleware.use Rack::Deflater
config.middleware.use Rack::Brotli
```

Since we use `Nginx`, we rely on its built-in gzip module and delegate the data compression to nginx.
# Arrays in PostgreSQL
To accept an array, we need to split the comma-separated value when we read the params in the controller.
```ruby
if params[:event][:itinary_attributes][:start_gps]
params[:event][:itinary_attributes][:start_gps] = params[:event][:itinary_attributes][:start_gps][0].split(',')
params[:event][:itinary_attributes][:end_gps] = params[:event][:itinary_attributes][:end_gps][0].split(',')
end
```

We can also do the job directly in React: if we read an array `start_gps=[45,1]`, then to pass it into `event: {itinary_attributes: {start_gps: [], end_gps: [] } }`, we do:
```js
fd.append("event[itinary_attributes][start_gps][]", itinary.start_gps[0] || "");
fd.append("event[itinary_attributes][start_gps][]", itinary.start_gps[1] || "");
fd.append("event[itinary_attributes][end_gps][]", itinary.end_gps[0] || "");
fd.append("event[itinary_attributes][end_gps][]", itinary.end_gps[1] || "");
```

# Running the app
The Rails API can be run with `rails server`; then navigate to `localhost:3001/api/v1/events`.
You can run `foreman start -f Procfile_nginx_port` and navigate to `localhost:8000/api/v1/events`: it is reverse-proxied with Nginx.
You can run `docker-compose up` and navigate to `localhost:8080/api/v1/events`: the Docker setup maps port 8080 to nginx's port 80 and exposes Rails via 3001:3001.

# Running multiple processes
Use `foreman`.
The `database.yml` mustn't use `host: db` (set `localhost` instead).
# Docker
```
- app
- config
database.yml
puma.rb
- docker
- app
Dockerfile
- web
Dockerfile
nginx.conf
docker-compose.yml
.dockerignore
```

- need to set `host: db` in `database.yml`, instead of the `host: localhost` used when working locally with Foreman
- sequence: `docker build .`, then `docker-compose up` the services one by one, `db`, then `sidekiq`, then `web` (otherwise you get an error due to `Bootsnap`).
- the db is created, then `docker-compose exec web rails db:schema:load` and `db:seed`.
- Note: we need to set `POSTGRES_PASSWORD: xxx` in the `environment` section of the `web` service.
- get the IP address with `docker inspect <container-id> | grep IPAddress` (the container id is given by `docker ps -a`).
```bash
rm -rf tmp/*
docker rm $(docker ps -q -a) -f
docker rmi $(docker images -q) -f
docker build .
docker-compose up --build
docker-compose up -d web
docker-compose up -d sidekiq
docker-compose exec web rake db:create
docker-compose exec web rake db:schema:load
docker-compose exec web rake db:seed
```

Set the key `host: db` in `database.yml`, where `db` is the name of the PostgreSQL service in `docker-compose.yml`.
Needed in `.env`:
- Postgres:
```
# .env (Postgres Docker)
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
```

- Redis:
Set up with:
```
# .env
REDIS_URL='redis://localhost:6379'
```

Set for Postgres:
```
# .env
# Postgres Docker
POSTGRES_DB=godwd_development
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
```

- create the database:
```bash
docker-compose exec app rails db:create
docker-compose exec app rails db:schema:load # instead of db:migrate
docker-compose exec app rails db:seed
```

- connect from the local machine to a PSQL db running in Docker: `docker run --link db -it postgres:9.4 psql -h db -U postgres`
## Docker commands
- list all containers: `docker container ls -a`
- list all containers's ids: `docker container ls -aq`
- stop all containers by passing a list of ids: `docker container stop $(docker container ls -aq)`
- remove all containers by passing a list of ids: `docker container rm $(docker container ls -aq)`
- To wipe Docker clean and start from scratch, enter the command:
`docker container stop $(docker container ls -aq) && docker system prune -af --volumes`
# JWT, Knock
`Knock` uses the `jwt` gem, and we add the `bcrypt` gem for `has_secure_password` in the `User` model.
- Install: `rails g ...`
```ruby
payload = { id: 1, email: '[email protected]' }
secret = Rails.application.credentials.secret_key_base
token = JWT.encode(payload, secret, 'HS256')
```

but we use the `Knock` gem instead.
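A minimal sketch of how Knock then protects an endpoint (the controller name and the exempted actions are assumptions):

```ruby
# /app/controllers/api/v1/events_controller.rb (illustrative)
class Api::V1::EventsController < ApplicationController
  include Knock::Authenticable
  # rejects requests that lack a valid "Authorization: Bearer <JWT>" header
  before_action :authenticate_user, except: [:index, :show]
end
```

Tokens themselves are issued by Knock's token controller, typically via `POST /user_token` with the user's email and password.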
# Heroku Nginx buildpack
The buildpack will not start NGINX until a file has been written to /tmp/app-initialized. Since NGINX binds to the dyno's $PORT and since the $PORT determines if the app can receive traffic, you can delay NGINX accepting traffic until your application is ready to handle it.
First:
- run `heroku buildpacks:add heroku-community/nginx`
- copy the `nginx.config.erb` in the '/config' folder.
- update the `puma.rb` code
- Procfile: `bin/start-nginx bundle exec puma -C ./config/puma.rb`
## Procfile
```bash
foreman start -f Procfile.dev
```

```
#/Procfile (for Heroku)
web: bin/start-nginx bundle exec puma -C ./config/puma.rb
worker: bundle exec sidekiq -C ./config/sidekiq.yml
```
# Certbot - Nginx - Docker
# Old files
```ruby
class RegisterJob < ApplicationJob
  queue_as :mailers

  def perform(fb_user_email, fb_user_confirm_token)
    UserMailer.register(fb_user_email, fb_user_confirm_token).deliver
  end
end
```

- version with ActiveJob: use `RemoveDirectLink.perform_later` in the controller
```ruby
class RemoveDirectLink < ApplicationJob
  queue_as :default

  def perform(event_publicID)
    auth = {
      cloud_name: Rails.application.credentials.CL[:CLOUD_NAME],
      api_key: Rails.application.credentials.CL[:API_KEY],
      api_secret: Rails.application.credentials.CL[:API_SECRET]
    }
    return if !event_publicID

    Cloudinary::Uploader.destroy(event_publicID, auth)
  end
end
```

# NGINX: reverse proxy
The main reason to set up Nginx as a reverse proxy (client > Nginx > Puma/Rails) is to run your API server on a different network or IP than the one your front-end application is on. You can then secure this network and only allow traffic from the reverse-proxy server.
## localhost settings:
There are two ways to let Puma and Nginx communicate: unix sockets or TCP/IP.
- unix socket:
```
#/config/puma.rb
# !! remove port 3001 (the port Rails listens on)
bind "unix:///Users/utilisateur/code/rails/godwd/tmp/sockets/nginx.socket"
preload_app!
rackup DefaultRackup
on_worker_boot { ActiveRecord::Base.establish_connection }
```

```
#/usr/local/etc/nginx/nginx.conf
http {
  upstream app_server {
    server unix:///Users/utilisateur/code/rails/godwd/tmp/sockets/nginx.socket fail_timeout=0;
  }
  server {
    listen 8080;
    location / {
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
      proxy_redirect off;
      proxy_pass http://app_server;
    }
  }
}
```

- TCP (`127.0.0.1:3001`, not `0.0.0.0:3001`):
```
#/app/config/puma.rb
port 3001
# !!! remove bind "unix://..."
```

```
#/usr/local/etc/nginx/nginx.conf
http {
  upstream app_server {
    server 127.0.0.1:3001 fail_timeout=0;
  }
  server {
    listen 8000;
    location / {
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $host;
      proxy_redirect off;
      proxy_pass http://app_server;
    }
  }
}
```

- whitelisting `app_server` with `Rails.application.config.hosts << "app_server"` in '/config/environments/development.rb'.
- Procfile: `web: bundle exec puma -p 3001 -C config/puma.rb`

## Heroku production
My app is located at `godwd-api.herokuapp.com` and I name-spaced my endpoints with '/api/v1'.
- buildpack : `$ heroku buildpacks:add heroku-community/nginx`,
- add to `Procfile`: `web: bin/start-nginx bundle exec puma --config config/puma.rb`
- add the file `/app/config/nginx.config.erb` using the boilerplate given by Heroku

> TCP/IP mode
```ruby
# puma in single mode => set workers to '0'
workers ENV.fetch('WEB_CONCURRENCY') { 2 }
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 5)
threads threads_count, threads_count
port 3001
preload_app!
rackup DefaultRackup
# Heroku buildpack needs this file to initialize
on_worker_fork { FileUtils.touch('/tmp/app-initialized') }
on_worker_boot { ActiveRecord::Base.establish_connection }
plugin :tmp_restart
on_restart { Sidekiq.redis.shutdown(&:close) }
```

```
#/app/config/nginx.config.erb
daemon off;
[...]
http {
  [...]
  upstream app_server {
    server 127.0.0.1:3001 fail_timeout=0;
  }
  server {
    listen <%= ENV['PORT'] %>;
    [...]
    location / {
      try_files $uri @rails;
    }
    location @rails {
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
      proxy_redirect off;
      proxy_pass http://app_server;
    }
  }
}
```

## Nginx local mode
```ruby
#/config/puma.rb
[...]
port 3001 # mode tcp
bind "unix:///tmp/nginx.socket" # mode unix socket
[...]
```

```
#/usr/local/etc/nginx/nginx.conf
[...]
http {
[...]
upstream app_server {
# mode tcp
server localhost:3001 fail_timeout=0;
# mode unix
# server unix:///Users/utilisateur/code/rails/godwd/tmp/sockets/nginx.socket fail_timeout=0;
}
[...]
server {
listen 8080;
[...]
location / {
[...]
proxy_pass http://app_server; # same port as Puma
}
}
```

To use a TCP connection between Nginx and Puma, use `foreman start -f Procfile_nginx_port`; it calls 'config/puma_port.rb'.
For a unix-socket connection, use `foreman start -f Procfile_nginx_socket` (it calls 'config/puma_socket.rb').
Navigate to http://localhost:8080/... and you should see 'server: nginx'.
> check nginx with `ps aux | grep nginx`
# Cloudflare / S3
# Kill Rails
- `lsof -i :5432` to see what is running on port 5432, then `kill -9 <pid>`, or `kill -9 $(lsof -t -i :5432)`.
- `which psql` and `pg_isready`