#+TITLE: veracioux website
#+LINK: tem https://github.com/tem-cli/tem
#+LINK: shdocker https://github.com/veracioux/shdocker
#+LINK: gh-dotfiles https://github.com/veracioux/dotfiles
#+LINK: gl-dotfiles https://gitlab.com/veracioux/dotfiles
#+LINK: upload ./.tem/path/upload
#+LINK: upload-secrets ./.tem/path/upload-secrets

* Deployment
Deployment is controlled using =tem=.

** TODO Production
Run the following commands on your local machine (a sketch follows the list):
- [[upload-secrets][=upload-secrets=]] :: Upload private =.env= files and passwords stored in =.htpasswd= files.
- [[upload][=upload=]] :: Build all images and push them to the registry.
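
A minimal sketch of a production deployment from the local machine, assuming =tem= has put the =.tem/path= scripts on your =PATH= (the =prod= argument follows the convention described under /Available commands/):

#+begin_src shell
# run from the project root, with the tem-provided scripts on PATH
upload-secrets   # push private .env files and .htpasswd passwords to the server
upload prod      # build all images and push the production images to the registry
#+end_src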

* Infrastructure
The website comprises the following components:
- Frontend :: =frontend/=, Vue 3, Nuxt 3
- API backend :: =api/=, Django 3
- Worker server :: =worker/=, Node.js HTTP server, RabbitMQ
- Staging proxy :: =proxy/=, PHP, provides password auth for staging
- Docker registry :: Facilitates efficient image uploads (runs only on production)
- Mail server :: =mail/=, planned, will probably use iRedMail

The plan is to have them organized in the following docker containers
(services), managed by =docker compose=:
- =web= :: Runs an =nginx= server configured from =nginx.conf.in= that serves the
static frontend and acts as a reverse proxy for all the other services. In a
staging environment, it also proxies all requests through a password auth PHP
script located in =proxy/=.
- =frontend= :: Runs a development frontend server with HMR, using a
volume to share data with the host and watch for file changes. Exists only
in the =dev= environment.
- =api= :: Runs the Django API backend.
- =mail= :: My private mail server, placeholder for now.
- =docker-registry= :: Runs a docker registry using the =registry= image.
- =worker= :: The main worker server. It schedules tasks for individual workers via a
message queue. It also serves a simple API that can be used to request that a
task be performed.
- =worker_= :: Each individual worker runs in its own docker container. It
consumes a message from the MQ and performs its task.

/NOTE: All config files ending with =.in= are preprocessed by jinja2. Scripts ending with =.in= are meant to be sourced instead of executed./
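
For illustration, a =.in= template could be rendered roughly like this (a sketch using the Python =jinja2= package with the process environment as the template context; the project's actual preprocessing step may differ):

#+begin_src shell
# render nginx.conf.in into nginx.conf, using environment variables as template values
python -c 'import os, sys, jinja2; sys.stdout.write(jinja2.Template(sys.stdin.read()).render(**os.environ))' \
    < nginx.conf.in > nginx.conf
#+end_src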

I use this project frequently as a playground to test some *unorthodox
practices and novel workflows* that I am working on. Some of them have proven
very useful, and I have decided to make them a central part of the development
workflow of my website:
- [[tem][=tem=]] :: A new way to manage project environments. It is still in its early
stages, but it has already made my workflow much more efficient.
Using this program, I am able to create shell scripts for common tasks and
automatically have them available in my =PATH= whenever I enter the project
directory using the CLI. I have also set the project up so that some shell
variables are automatically available to me when I use the =fish= shell.
- [[shdocker][=shdocker=]] :: A preprocessor for Dockerfiles that allows me to use the shell
to inject dynamic data into them. For example, each configuration (production,
staging and dev) is defined in the same shDockerfile, and I control the
different output Dockerfiles using =if= statements in the shDockerfile. See for
yourself in [[./shDockerfile]].

* Environment variables
** =ENVIRONMENT=
The environment that the server is running in. Can be one of: =dev=, =staging=,
=prod=.

The meaning of these should be obvious. The =dev= environment exposes the ports of all
containers to the host and enables hot reloading. It uses a local (ephemeral)
database and the local filesystem where applicable. The =staging= environment is
meant to be as similar to =prod= as possible, using a persistent database, a
Dropbox account for storage of some files, etc. Obviously, it doesn't use the
same accounts and API keys as =prod=. =dev= and =staging= can be deployed locally. =staging= can
also be served on a dedicated server. It is password protected regardless of
whether it is served locally or on a server. The password is saved in the
browser via a cookie, so you don't have to enter it every time.
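
For example, a local staging deployment could be started along these lines (assuming the variable is exported in the shell; it may just as well come from an =.env= file):

#+begin_src shell
export ENVIRONMENT=staging
up   # the tem-provided deployment script described under "Available commands"
#+end_src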

** =MACHINE=
** TODO Hostnames and ports
| *Env variable*         | *Value*   | *Description*                      |
|------------------------+-----------+------------------------------------|
| =WEB_HOST=             | =0.0.0.0= | Main web host                      |
| =WEB_PORT=             | =3000=    | Main web port                      |
| =WEB_PORT_STAGING=     | =13000=   | Staging web port[fn:port_staging]  |
| =FRONTEND_HOST=        | =0.0.0.0= | Development frontend server host   |
| =FRONTEND_PORT=        | =8080=    | Development frontend server port   |
| =API_HOST=             | =0.0.0.0= | API backend host                   |
| =API_PORT=             | =8000=    | API backend port                   |
| =DB_HOST=              | =0.0.0.0= | Database host                      |
| =DB_PORT=              | =5432=    | Database port                      |
| =DOCKER_REGISTRY_PORT= | =5000=    | Docker registry port               |
| =WORKER_SERVER_HOST=   | =0.0.0.0= | Worker orchestration server host   |
| =WORKER_SERVER_PORT=   | =8001=    | Worker orchestration server port   |

/NOTE: Within =docker-compose.yaml=, some of these hosts are changed to their corresponding service names, so that different services can communicate with each other./
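
As an illustration, an =.env= file using the values above might look like this (hypothetical excerpt; inside =docker-compose.yaml= a host such as =API_HOST= would instead resolve to the service name, e.g. =api=):

#+begin_src shell
WEB_HOST=0.0.0.0
WEB_PORT=3000
API_HOST=0.0.0.0
API_PORT=8000
DB_HOST=0.0.0.0
DB_PORT=5432
#+end_src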

[fn:port_staging] This port is used by the staging container when both the production
and staging containers are expected to run on the same host.

* Conventions
- For each service, a =shDockerfile= exists either in the service's dedicated
subdirectory or in a =docker= subdirectory of that directory.
- Generated Dockerfiles for each service reside in a =_docker= subdirectory of
that service's directory. Each Dockerfile has an extension matching the environment
for which it was generated (=dev=, =staging= or =prod=); see the example layout below.
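
For a hypothetical service named =api=, the layout described above would look roughly like this (file names are illustrative):

#+begin_example
api/
├── shDockerfile            # or api/docker/shDockerfile
└── _docker/
    ├── Dockerfile.dev
    ├── Dockerfile.staging
    └── Dockerfile.prod
#+end_example
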
* Setting up the production server
- Bind =docker.veracioux.me= to =localhost= in =/etc/hosts=.
- Load the contents of [[file:host/crontab][=host/crontab=]] into the actual cron process on the server (both steps are sketched below)
- TODO more
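
A sketch of the first two steps (exact handling may differ on your server):

#+begin_src shell
# point docker.veracioux.me at the local machine
echo "127.0.0.1 docker.veracioux.me" | sudo tee -a /etc/hosts
# install host/crontab for the current user (this replaces the existing crontab)
crontab host/crontab
#+end_src
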
* Available commands
** Running and deployment
- Deploy locally: =up=
- Force remove all service containers locally: =down=
- Serve the website on the dedicated hosting server: =serve=. If no arguments are
given, it serves the staging build.
- Manage local secrets: =lpass=, a wrapper around GNU =pass=
- Upload all locally built images to the server: =upload=. If no arguments
are given, it uploads the staging images. To upload production images, pass the
=prod= argument (see the sketch after this list).
- Upload local secrets to the server: =upload-secrets=
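
Illustrative invocations, assuming the scripts are on your =PATH= via =tem= (the =prod= argument to =serve= is an assumption, mirroring the =upload= convention):

#+begin_src shell
up                           # deploy locally
upload && upload-secrets     # push the staging images and secrets to the server
serve                        # on the hosting server: serve the staging build
serve prod                   # hypothetical: serve the production build
down                         # force remove all service containers locally
#+end_src
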
** Maintenance
- Automatically reissue certificate: =reissue-cert=. Uses =acme.sh=. You shouldn't
have to run this manually, because there is a cronjob that does this
automatically every few months (see [[file:host/crontab][=host/crontab=]])

* Some things that should be kept in mind
- For greater maintainability and easier migration, ports are not hardcoded in
files (where possible). Instead, they are taken from the environment. The
most notable example is =nginx.conf=, where variables are baked into the
target file using =envsubst= (as sketched after this list). To avoid conflicts
with nginx's own variables, the =envsubst= command is called with the relevant
environment variables explicitly specified as arguments. So, if you want to add
a new variable to =nginx.conf=, you will have to specify it inside
=scripts/entrypoint-prod.sh=, where the =envsubst= command is invoked.
- If the dev frontend HMR gives you any headache, try removing
=frontend/.pnpm-store= and restarting the frontend service.
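
A sketch of how that =envsubst= invocation might look (paths and the variable list are assumptions; see =scripts/entrypoint-prod.sh= for the real call):

#+begin_src shell
# substitute only the listed variables so nginx's own $variables are left intact
envsubst '$WEB_HOST $WEB_PORT $API_HOST $API_PORT' \
    < nginx.conf.in > /etc/nginx/nginx.conf
#+end_src
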
* Static assets

Static assets are collected by the Django =manage.py= script into
=/var/static_root=, because that is the easiest approach (Django provides the
=collectstatic= command).
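
This is the standard Django mechanism; =STATIC_ROOT= is assumed to point at =/var/static_root= in the project settings, and the command would typically run inside the =api= container:

#+begin_src shell
python manage.py collectstatic --noinput
#+end_src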

* Serving files temporarily

To create a temporary endpoint that serves a file, just put a file (for example
one named =myfile=) inside the =~/tmp_root= directory. The file will be available at
a URL subpath that corresponds to the path of the file relative to =~/tmp_root=
(in our example that is =/myfile=). Note that =index.html= files behave as usual,
i.e. =~/tmp_root/myfile/index.html= will be available at =/myfile= and =/myfile/=.
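
For example (file name hypothetical):

#+begin_src shell
mkdir -p ~/tmp_root
cp ~/Documents/slides.pdf ~/tmp_root/slides.pdf
# the file is now served at the /slides.pdf subpath of the site
#+end_src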

* Staging
The staging version of the website can be accessed at =/stg=. A form will be
displayed for you to log in. Your auth will be stored in the session, so you don't
have to log in again until the staging website is restarted.

When testing the staging build locally, you can use the following credentials:
- Username :: =test=
- Password :: =test=

** Notes
The =web= container (if run with a staging configuration) also allows you to
access the website at =/=, without password protection for convenience, just
like in =dev= or =prod=.

* Git server
I host my dotfiles on [[gh-dotfiles][GitHub]] and mirror them on [[gl-dotfiles][GitLab]] as well. I have a =master=
branch which contains the configs that I want to always use, and there are other
branches for different (specialized) users. The different users can be either
users on other machines (e.g. my separate work laptop) or users created for
testing.

My main user, =haris=, is the only one I trust completely, so it is the only one
that should be able to modify all branches. I also want other users to
be able to push changes to their own branches, both as a backup option and so
that the =haris= user can merge those changes into =master=. One option would be to
create another GitHub user, restrict its permissions and reconcile changes using
PRs. But I don't want to create another user and I don't need PRs (a simple git
merge is fine), so I do this:

- I create a user named =git= on my personal server.
- I add the dotfiles repo at =/home/git/dotfiles=.
- I allow all my alternative users to SSH to the =git= user using their SSH keys.

That way each user can push to any branch of the repo, and when I want to merge
any branch into the =master= branch of the repo hosted on GitHub and GitLab, I can
do that with the =haris= user. I always keep the =master= branch checked out on
=git@veracioux.me= so no one can push to that branch.

** Setup
On the server run:
#+begin_src shell
sudo useradd --create-home git
sudo passwd git
#+end_src
On the client run:
#+begin_src shell
# replace <your-key> with the public key you want to authorize; host assumed to be veracioux.me
ssh-copy-id -i ~/.ssh/<your-key>.pub git@veracioux.me
#+end_src
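
On a specialized user's machine, the server-side repo can then be added as a remote along these lines (host and branch name are assumptions based on the description above):

#+begin_src shell
git remote add homeserver git@veracioux.me:dotfiles
git push homeserver work-laptop   # push that user's branch for backup/merging
#+end_src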