https://github.com/rs-python/rs-demo
Collection of Jupyter notebooks showcasing the latest features of the system
# Running Modes
In this page, we will see how to run the Jupyter notebooks in cluster, local and hybrid modes.
### Quick links
* On cluster mode:
* JupyterHub:
* RS-Server website (Swagger/OpenAPI):
* Create an API key:
* Prefect dashboard (orchestrator):
* Grafana (logs, traces, metrics):
* On hybrid mode:
* RS-Server website (Swagger/OpenAPI):
* Create an API key:
* Prefect dashboard (orchestrator):
* On local mode:
* RS-Server website (Swagger/OpenAPI):
* (frontend, only for visualization, not functional)
* (auxip)
* (cadip)
* (catalog)
* (prip)
* (edrs)
* Prefect dashboard (orchestrator):
* SeaweedFS S3 bucket, with:
* Username: `seaweedfs`
* Password: `Strong#Pass#1234`
## Prefect and Dask
### EOPF/DPR

When calling EOPF (DPR) with Prefect and Dask:
1. The **client** (Jupyter notebook, Prefect dashboard or terminal) runs a **Prefect flow**
(implemented as a Python function) on the **Prefect workers** on the Kubernetes cluster.
1. The flow calls the **tasks** (implemented as Python functions) on the **Dask workers**
on the Kubernetes cluster.
1. The tasks call the **EOPF Python functions** that are installed as a Python package (wheel) on the Dask pods.
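This chain can be sketched structurally. The snippet below is a stdlib-only mock, not the real Prefect/Dask code: plain functions stand in for the `@flow`- and `@task`-decorated functions, and a thread pool stands in for the Dask cluster; all names are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def eopf_process(chunk: str) -> str:
    """Stand-in for an EOPF function installed as a wheel on the Dask pods."""
    return f"processed:{chunk}"

def dpr_task(chunk: str) -> str:
    """Stand-in for a Prefect task; on the cluster this runs on a Dask worker."""
    return eopf_process(chunk)

def dpr_flow(chunks: list) -> list:
    """Stand-in for the Prefect flow the client submits to the Prefect workers."""
    with ThreadPoolExecutor() as pool:  # stand-in for the Dask cluster
        return list(pool.map(dpr_task, chunks))

print(dpr_flow(["c1", "c2"]))  # ['processed:c1', 'processed:c2']
```

The point of the real architecture is the same as in this mock: the flow only orchestrates, and the per-chunk work is fanned out to workers.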
### Staging

When calling the staging with Prefect and Dask:
1. The **client** (Jupyter notebook, Prefect dashboard or terminal) runs a **Prefect flow**
(implemented as a Python function) on the **Prefect workers** on the Kubernetes cluster.
1. The flow makes **HTTP requests** to the **rs-server-staging** web service on the Kubernetes cluster.
1. The service calls the **tasks** (implemented as Python functions) on the **Dask workers**
on the Kubernetes cluster.
1. The tasks call the **rs-server-staging Python functions** that are installed as a Python package (wheel)
on the Dask pods.
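Step 2 above can be illustrated with a small sketch of how a flow might build its HTTP request to rs-server-staging. The endpoint path and payload shape below are illustrative assumptions, not the actual rs-server-staging API:

```python
import json

def build_staging_request(collection: str, item_ids: list):
    """Return a hypothetical (URL path, JSON body) pair for a staging job.
    Both the path and the payload structure are assumptions for illustration."""
    path = "/processes/staging/execution"  # assumed path, not the real API
    body = json.dumps({"inputs": {"collection": collection, "items": item_ids}})
    return path, body.encode()

path, body = build_staging_request("cadip", ["chunk-001", "chunk-002"])
print(path)                                  # /processes/staging/execution
print(json.loads(body)["inputs"]["items"])   # ['chunk-001', 'chunk-002']
```

In the real deployment the flow would POST this body to the rs-server-staging service, which then dispatches the tasks to the Dask workers.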
## Run on cluster mode
On cluster mode, we run the Jupyter notebooks from our JupyterHub session deployed on the cluster. They connect to the services deployed on the RS-Server website (=cluster). Authentication is required for this mode.
### Prerequisites
* You have access to JupyterHub:
* You have access to the RS-Server website:
* You have generated an API key from the RS-Server website.
### Initialize the Prefect blocks
Before the first use, you need to initialize the Prefect block that contains the
environment variables shared by all users across Jupyter, Prefect and Dask.
Run this command line to get the existing values, if any:
```shell
# From a terminal
prefect block inspect "secret/env-vars"
```
Or, from Python:
```python
import json
from prefect.blocks.system import Secret

print(json.dumps(Secret.load("env-vars", _sync=True).get(), indent=2))
```
Run this Python code from any Jupyter notebook to write the new values:
```python
import os
from prefect.blocks.system import Secret

value = {
    # S3 bucket name and subfolder.
    # NOTE: the "share-bucket" block will be created automatically from these
    # variables. So if you change these variables, please also remove the
    # "share-bucket" block and it will be recreated.
    "PREFECT_BUCKET_NAME": "rs-dev-cluster-temp",
    "PREFECT_BUCKET_FOLDER": "prefect-share",

    # Token that was used to set up the Dask clusters.
    # See: https://gateway.dask.org/authentication.html#using-jupyterhub-s-authentication
    "JUPYTERHUB_API_TOKEN": "",

    # Needed to run the performance indicator Prefect flow.
    # The values for the following fields should be taken from the rs-infra-core
    # inventory, file rs-infra-core/inventory/sample/host_vars/setup/apps.yml,
    # in the section named rs_performance_indicator. They are set at cluster
    # deployment and should also be used here:
    #
    # rs_performance_indicator:
    #   database:
    #     host: postgresql-cluster-rw.database.svc.cluster.local
    #     name: performance
    #     password: test
    #     username: test
    #   secret: pi-database-password
    "POSTGRES_HOST": "",  # default: "postgresql-cluster-rw.database.svc.cluster.local"
    "POSTGRES_USER": "",
    "POSTGRES_PASSWORD": "",
    "POSTGRES_PORT": "",  # normally 5432
    "POSTGRES_PI_DB": "performance",

    # OSAM URL, internal to the cluster
    "RSPY_HOST_OSAM": "http://rs-osam.processing.svc.cluster.local:8080",
}

# Jupyter env vars to pass to Prefect and Dask
for env in [
    "RSPY_UAC_CHECK_URL",
    "RSPY_WEBSITE",
    "TEMPO_ENDPOINT",
    "DASK_GATEWAY_PUBLIC",
    "DASK_GATEWAY_ADDRESS",
]:
    value[env] = os.environ[env]

# Save the Prefect block (top-level await works in Jupyter notebooks)
await Secret(value=value).save("env-vars", overwrite=True)
```
From a bash Terminal in Jupyter, check your values with:
```bash
# View all configured blocks
prefect block ls
# Display details about the env-vars block configured above
prefect block inspect secret/env-vars
```
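Inside a flow or task, these shared values are typically copied into the process environment before use. Below is a minimal stdlib sketch of that merge step; the Prefect `Secret.load` call itself is omitted, and `export_env_vars` is a hypothetical helper name, not part of rs-client-libraries:

```python
import os

def export_env_vars(values: dict) -> None:
    """Copy shared key/value pairs into the process environment,
    without overwriting variables that are already set locally."""
    for key, val in values.items():
        os.environ.setdefault(key, str(val))

# Example with two of the variables from the block above
export_env_vars({"PREFECT_BUCKET_NAME": "rs-dev-cluster-temp",
                 "PREFECT_BUCKET_FOLDER": "prefect-share"})
print(os.environ["PREFECT_BUCKET_NAME"])
```

Using `setdefault` rather than plain assignment lets a locally exported variable take precedence over the shared block value.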
### Run the demos on cluster mode
* Open a JupyterHub session.
* Open a terminal, check that `rs-client-libraries` is installed by running:
```shell
pip show rs-client-libraries # should show the name, version, ...
```
* On the left, in the file explorer, go to the demos or tutorial folder and double-click a notebook to open it.

### Install a new `rs-client-libraries` version
#### Option 1: update the JupyterHub image (affects everyone)
1. Verify in the CI/CD that the latest `rs-client-libraries` changes were merged into the `develop` branch:
1. Ask the `rs-infrastructure` administrator to run a new CI/CD workflow to publish this `rs-client-libraries` version into a new JupyterHub image.
#### Option 2: from a wheel package (affects only you)
1. In the CI/CD, open the last `rs-client-libraries` branch workflow that you want to use, go to the `Artifacts` section and download the `.whl` package file.
1. Or alternatively, build the wheel yourself from your local `rs-client-libraries` project by running: `poetry build --format wheel`
1. Upload the `.whl` package file to your JupyterHub session, open a Terminal and run:
```shell
# Uninstall the old version. Note: this fails the first time because
# we try to uninstall the root installation of the library, but this is OK.
pip install pip-autoremove
pip-autoremove -y rs-client-libraries 2>/dev/null

# You may have conflicts between dependencies installed for the root user
# and the current user. You can uninstall all current user dependencies with:
# for dep in $(pip freeze | cut -d "@" -f1); do pip uninstall -y $dep 2>/dev/null; done

# The old rs-client-libraries version is still installed for the root user.
# This is the version you (=current user) use by default.
pip show rs-client-libraries | grep Location # should display: /opt/conda/lib/python3.x/site-packages

# Install the new version for the current user.
pip install rs_client_libraries-*-py3-none-any.whl
opentelemetry-bootstrap -a install
```
## Run on local mode
On local mode, Docker Compose and Docker images are used to run services and libraries locally (not on a cluster). There is no authentication in this mode.
### Prerequisites
* You have Docker installed on your system, see:
* You have access to the RSPY project on GitHub:
* You have created a personal access token (PAT) on GitHub:
* This access token is used to retrieve the rs-server packages from the GitHub registry.
* You may want to create a classic PAT with the ```read:packages``` permission.
* You have checked out this git project:
```shell
git clone https://github.com/RS-PYTHON/rs-demo.git
# Get the latest version
cd rs-demo
git checkout develop
```
### Run the demos on local mode
To pull the latest Docker images, run:
```shell
# Login into the project ghcr.io (GitHub Container Registry)
# Username: your GitHub login
# Password: your personal access token (PAT) created above
docker login ghcr.io
# From the local-mode directory, pull the images
cd ./local-mode
docker compose pull
```
Then to run the demos:
```shell
# Still from the local-mode directory, if you're not there yet
cd ./local-mode
# Run all services.
# Note: in case of port conflicts, you can kill all your running docker containers with:
# docker rm -f $(docker ps -aq)
docker compose down -v; docker compose up # -d for detached
# Note: we always need to call 'down' before 'up', otherwise we get errors
# when the STAC database initializes a second time.
```
Near the end of the logs, you will see some Jupyter information, e.g.:
```
jupyter | To access the server, open this file in a browser:
jupyter | ...
jupyter | Or copy and paste one of these URLs:
jupyter | ...
jupyter | http://127.0.0.1:8888/lab?token=612cb124335d9ab80a5a6414631a7df186b2401234050001
```
Open (ctrl-click) the ```http://127.0.0.1:8888/lab?token=...``` link to open the Jupyter web client (=Jupyter Notebook) in your browser.
__Note__: the token is auto-generated by Jupyter and changes every time you relaunch the containers, so after relaunching, your old Jupyter web session won't be available anymore.
To show the Jupyter logs from another terminal, run:
```shell
docker compose logs jupyter
```
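The token can also be recovered from the logs with a small grep pipeline. This is a sketch that assumes the default hex `token=` format shown in the URL above:

```shell
# Demo on a sample log line; in practice, pipe `docker compose logs jupyter`
# into the same grep/tail to get the most recent token.
sample='jupyter  | http://127.0.0.1:8888/lab?token=612cb124335d9ab8'
echo "$sample" | grep -oE 'token=[0-9a-f]+' | tail -n 1
```

`tail -n 1` keeps only the last match, which matters after several relaunches since each one logs a fresh token.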
On the left, in the file explorer, go to the demos or tutorial folder and double-click a notebook to open it.

```shell
# When you're done, shut down all services and volumes (-v)
# with Ctrl-C (if not in detached mode i.e. -d) then:
docker compose down -v
# You can use this to remove all docker volumes
# (use with care if you have other docker containers)
docker volume prune
```
### How does it work
The [docker-compose.yml](local-mode/docker-compose.yml) file uses Docker images to run all the container services necessary for the demos:
* The latest rs-server images available:
* Built from the CI/CD:
* Available in the ghcr.io:
* The AUXIP, CADIP ... station mockups:
* Built from the CI/CD:
* Also available in the ghcr.io
* STAC PostgreSQL database
* SeaweedFS S3 bucket server
* Jupyter server
These containers run locally (not on a cluster). The Jupyter notebooks are run from the containerized Jupyter server, not from your local environment. This Jupyter environment contains all the Python modules required to call the rs-server HTTP endpoints.
### How to run your local rs-server code in this environment
It can be helpful to use your latest rs-server code version to debug it or to test modifications without pushing them and rebuilding the Docker image. Follow these steps:
1. Go to the ```local-mode``` directory and run:
```shell
cp 'docker-compose.yml' 'docker-compose-debug.yml'
```
1. If your local `rs-server` GitHub repository is under `/my/local/rs-server`, modify the `docker-compose-debug.yml` file to mount your local `rs-server` services:
```yaml
# e.g.
rs-server-adgs:
  # ...
  volumes:
    - /my/local/rs-server/services/common/rs_server_common:/usr/local/lib/python3.x/site-packages/rs_server_common
    - /my/local/rs-server/services/adgs/rs_server_adgs:/usr/local/lib/python3.x/site-packages/rs_server_adgs
    - /my/local/rs-server/services/adgs/config:/usr/local/lib/python3.x/site-packages/config
    # - and any other useful files ...
```
1. Run the demo with:
```shell
# Still from the local-mode directory, if you're not there yet
cd ./local-mode
# Run all services
docker compose down -v; docker compose -f docker-compose-debug.yml up # -d for detached
```
## Run on hybrid mode
On hybrid mode, we run the Jupyter notebooks locally, but they connect to the services deployed on the RS-Server website (=cluster). Authentication is required for this mode.
### Prerequisites
* You have access to the RS-Server website:
* You have generated an API key from the RS-Server website.
* You have saved the S3 bucket configuration in your local file: `~/.s3cfg`
* Python is installed on your system.
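If you still need to create `~/.s3cfg`, a minimal s3cmd-style sketch looks like this. Every value below is a placeholder, not the real cluster configuration; take the actual endpoint and keys from your administrator:

```ini
# ~/.s3cfg sketch -- all values are placeholders
[default]
access_key = <your-access-key>
secret_key = <your-secret-key>
host_base = <s3-endpoint-host>
host_bucket = <s3-endpoint-host>
use_https = True
```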
You also need the rs-client-libraries project:
* If you have its source code, install it with:
```shell
cd /path/to/rs-client-libraries
pip install poetry
poetry install --with dev,demo
poetry run opentelemetry-bootstrap -a install
```
* Or if you only have its ```.whl``` package, install it with:
```shell
pip install rs_client_libraries-*.whl
# then install jupyter lab
pip install jupyterlab
```
### Run the demos on hybrid mode
From your terminal in the rs-demo project, run:
```shell
# NOTE: at CS France premises, use this to deactivate the proxy which causes random errors
unset no_proxy ftp_proxy https_proxy http_proxy
# To use your local rs-client-libraries source code
cd /path/to/rs-client-libraries
# git checkout develop && git pull # maybe take the latest default branch
poetry run /path/to/rs-demo/hybrid-mode/start-jupyterlab.sh
# Or if you have installed it from rs_client_libraries-*.whl,
# just run
/path/to/rs-demo/hybrid-mode/start-jupyterlab.sh
```
The Jupyter web client (=Jupyter Notebook) opens in a new tab of your browser.
*WARNING*: the cluster is shut down from 18:30 to 08:00 every night and on weekends.
### How to check your Python interpreter used in notebooks
In a notebook cell, run:
```python
import sys
print(sys.executable)
```
If you use the rs-client-libraries poetry environment, it should show something like:
```shell
${HOME}/.cache/pypoetry/virtualenvs/rs-client-libraries-xxxxxxxx-py3.x/bin/python
```
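To also confirm that a package is importable from that interpreter, `importlib.metadata` can report its installed version. In the notebook you would query `rs-client-libraries`; the demo below deliberately uses a nonexistent name to show the fallback:

```python
from importlib import metadata
from typing import Optional

def installed_version(dist_name: str) -> Optional[str]:
    """Return the installed version of a distribution, or None if absent."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

# In the notebook: installed_version("rs-client-libraries")
print(installed_version("not-a-real-distribution"))  # None
```

A `None` result for `rs-client-libraries` usually means the notebook kernel is not using the Poetry environment shown above.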
## Licensing
The code in this project is licensed under Apache License 2.0.
---

This project is funded by the EU and ESA.