# Deploy a Learning Analytics stack to Kubernetes

[![CC BY-SA 4.0][cc-by-sa-shield]][cc-by-sa]

In this tutorial, you will learn to deploy a complete learning analytics stack
to Kubernetes in minutes! 🎉

**Disclaimer**

> To proceed with this tutorial, you need to be familiar with using the command
> line on your operating system.
> Being familiar with basic Kubernetes concepts would be a plus to fully
> understand every step of this tutorial.

## Prerequisites

-   A running Kubernetes cluster
-   `kubectl`, the official Kubernetes CLI: https://kubernetes.io/docs/reference/kubectl/
-   `helm`, the package manager for Kubernetes: https://helm.sh/fr/docs/intro/install/
-   `curl`, the command line tool for transferring data with URLs: https://curl.se/

## Kubernetes 101

In this very first step of the tutorial, we will clone the workshop's Git
project into a working directory:

```sh
# Go to your usual working directory
cd ${HOME}/work

# Clone this project, either with SSH:
git clone git@github.com:openfun/k8s-la-stack-tutorial.git
# or with HTTPS:
git clone https://github.com/openfun/k8s-la-stack-tutorial.git

# Go to our tutorial working directory
cd k8s-la-stack-tutorial
```

We will now check that we are able to connect to the target Kubernetes cluster.
To do so, we need to download the Kubernetes cluster configuration file (_aka_
the kubeconfig) and save it to our working directory:

```sh
# Move the configuration file to the current directory
#
# Nota bene: you need to adapt this example path to where your system stores
# downloaded files
mv ~/Download/kubeconfig.yml .
```

Now we need to define an environment variable pointing to the kubeconfig file
path before using the `kubectl` command:

```sh
# Define an environment variable pointing to the target cluster configuration file
export KUBECONFIG=${PWD}/kubeconfig.yml
```

> ⚠️ This `kubeconfig.yml` file should not be versioned or shared, as it
> contains your credentials to connect to your Kubernetes cluster.

At this point, we should be able to send commands to the Kubernetes cluster
using the `kubectl` tool:

```sh
# Check cluster status
kubectl cluster-info

# List nodes of our cluster
kubectl get nodes
```

The response of this last command
should look like the following:

```
NAME                                         STATUS   ROLES    AGE   VERSION
nodepool-08a7421c-46bf-4fe4-b6-node-1eb1ad   Ready    <none>   12m   v1.26.4
nodepool-08a7421c-46bf-4fe4-b6-node-8f3b8a   Ready    <none>   10m   v1.26.4
nodepool-08a7421c-46bf-4fe4-b6-node-d8d327   Ready    <none>   12m   v1.26.4
```

This tells us that our cluster has three active nodes that have been running
Kubernetes `1.26.4` for a few minutes.

We will now create our own Kubernetes namespace to work in:

```sh
# Generate a unique identifier for our namespace
export K8S_NAMESPACE="${USER:-user}-${RANDOM}"

# Check your namespace value
echo ${K8S_NAMESPACE}

# Create the namespace
kubectl create namespace ${K8S_NAMESPACE}

# Activate the namespace
kubectl config set-context --current --namespace=${K8S_NAMESPACE}
```

At this stage, we don't expect any pod to be running:

```sh
kubectl get pods
# Expected response is: No resources found in xxx-yyy namespace.
```

## Deploy applications

In this tutorial, we will deploy a full learning analytics stack composed of
the following components:

- **Learning Record Store (LRS)**, here [Ralph](https://github.com/openfun/ralph);
- **Database/data lake**, here [Elasticsearch](https://github.com/elastic/elasticsearch);
- **Dashboard system**, here [Superset](https://github.com/apache/superset).

Additionally, we will deploy a Learning Management System (LMS), here
[Moodle](https://github.com/moodle/moodle).

The Moodle LMS generates learning traces and sends them to the Ralph LRS in
xAPI format.
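Before wiring things together, it helps to know what such a trace looks like.
Here is a minimal, hypothetical xAPI statement (the actor, verb, and object
identifiers below are made up for illustration; real statements produced by the
Logstore xAPI plugin carry many more fields):

```sh
# A minimal, hypothetical xAPI statement: who (actor) did what (verb) on what (object)
statement='{
  "actor": {"mbox": "mailto:learner@example.com"},
  "verb": {"id": "http://adlnet.gov/expapi/verbs/completed"},
  "object": {"id": "http://example.com/xapi/activities/course-101"}
}'
echo "${statement}"
```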
The LRS stores learning traces in an Elasticsearch cluster that will be set as
the primary data source of a general-purpose dashboarding system: Apache
Superset.

```mermaid
flowchart LR
    lms[Moodle - LMS] -- xAPI --> lrs[Ralph - LRS] --> data[(Elasticsearch - DB)] --> dashboard[Superset - dashboards]
```

### LMS: Moodle:tm:

Moodle:tm: can be installed using Helm in a single line of code:

```sh
helm install lms oci://registry-1.docker.io/bitnamicharts/moodle
```

> 💡 Note that it can take a few minutes before the service is up and running.

> 💡 Also note that, sometimes, on clusters with fewer resources, the Moodle
> pod happens to get stuck while the MariaDB pod is ready: either wait for the
> Moodle pod to restart, or restart it with the command:
> `kubectl delete pods -l app.kubernetes.io/name=moodle`

You can check the deployment status using the `kubectl get pods -w` command.
Similarly, the load balancer may take some time to become available; you can
check its status using the `kubectl get svc -w lms-moodle` command.

Once the service has an assigned external IP, you can get and store this IP
address (we will use it later):

```sh
# Define the MOODLE_IP variable
export MOODLE_IP=$(kubectl get svc lms-moodle --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")

# Display its value
echo "Moodle URL: http://${MOODLE_IP}/"
```

You can click on the displayed link from your terminal to discover your brand
new Moodle instance 🎉.

To log in to your Moodle instance, we need to fetch the randomly generated
password:

```sh
# Get user password
export MOODLE_PASSWORD=$(kubectl get secret lms-moodle -o jsonpath="{.data.moodle-password}" | base64 -d)

# Display credentials
echo "Moodle user: user"
echo "Moodle password: ${MOODLE_PASSWORD}"
```

Before configuring our Moodle instance, we need to deploy all the other
applications from our learning analytics stack. Keep it up! 💪

### Data lake: Elasticsearch

In its recent releases, Elastic recommends deploying its services using Custom
Resource Definitions (CRDs) installed via its official Helm chart. For the sake
of simplicity, we've installed those definitions cluster-wide so that all
namespaces can benefit from them.

You don't need to execute the commands below, but we share them for
informational purposes:

```sh
#
# ⚠️  Don't execute the following commands ⚠️
#

# Add elastic official helm charts repository
helm repo add elastic https://helm.elastic.co

# Update available charts list
helm repo update

# Install the eck operator
helm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace
```

Since the CRDs are already deployed cluster-wide, we can now deploy a two-node
Elasticsearch cluster:

```sh
kubectl apply -f manifests/data-lake.yml
```

If you take a look at the data lake manifest, you will notice that the official
Elastic CRDs ease the definition of a cluster:

```yaml
# manifests/data-lake.yml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: data-lake
spec:
  version: 8.8.1
  nodeSets:
    - name: default
      count: 2
      config:
        node.store.allow_mmap: false
      podTemplate:
        spec:
          containers:
          - name: elasticsearch
            env:
              - name: ES_JAVA_OPTS
                value: -Xms512m -Xmx512m
            resources:
              requests:
                memory: 512Mi
                cpu: 0.5
              limits:
                memory: 2Gi
                cpu: 2
```

Once applied, your Elasticsearch pods should be running. You can check this
using the following command:

```sh
kubectl get pods -w
```

We expect to see two pods called `data-lake-es-default-0` and
`data-lake-es-default-1`.

When our Elasticsearch cluster is up (this can take a few minutes), you may
create the Elasticsearch index that will be used to store learning traces (xAPI
statements):

```sh
# Store elastic user password
export ELASTIC_PASSWORD="$(kubectl get secret data-lake-es-elastic-user -o jsonpath="{.data.elastic}" | base64 -d)"

# Execute an index creation request in the elasticsearch container
kubectl exec data-lake-es-default-0 --container elasticsearch -- \
    curl -ks -X PUT "https://elastic:${ELASTIC_PASSWORD}@localhost:9200/statements?pretty"
```

Our Elasticsearch cluster is all set.
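As a side note, Kubernetes stores secret values base64-encoded, which is why
the commands above pipe the secret through `base64 -d` before use. A quick
local illustration, using a made-up value rather than a real secret:

```sh
# Kubernetes secret values are base64-encoded; decode them before use
# (made-up example value, not a real secret)
encoded="czNjcjN0"
decoded="$(echo "${encoded}" | base64 -d)"
echo "${decoded}"
```

Running `echo -n "s3cr3t" | base64` gives back the encoded form, which is what
you would see in a raw secret manifest.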
In the next section, we will deploy the
[LRS](https://github.com/openfun/ralph).

### LRS: Ralph

Ralph is also distributed as a Helm chart that can be deployed with a single
line of code:

```sh
helm install \
    --values charts/ralph/values.yaml \
    --set envSecrets.RALPH_BACKENDS__DATABASE__ES__HOSTS=https://elastic:"${ELASTIC_PASSWORD}"@data-lake-es-http:9200 \
    lrs oci://registry-1.docker.io/openfuncharts/ralph
```

> 💡 You are now familiar with the procedure: you can check the Ralph
> deployment using the `kubectl get pods -w` and `kubectl get svc -w lrs-ralph`
> commands.

Once deployed, you can get and store the IP address of the LRS:

```sh
# Define the LRS_IP variable
export LRS_IP=$(kubectl get svc lrs-ralph -o jsonpath="{.status.loadBalancer.ingress[0].ip}")

# Display its value
echo "LRS URL: http://${LRS_IP}/"
```

We will now check that we are able to query the LRS using the `curl` tool:

```sh
# Check configured user credentials
curl --user admin:password "http://${LRS_IP}/whoami"
```

> 💡 Alternatively, you may choose to click on the LRS URL link from your
> terminal to open it with your default web browser. HTTP Basic Auth
> credentials are `admin` for the login and `password` for the password.

To test our deployment, we will send 22 batches of 1k statements to the LRS:

```sh
# Send 22k xAPI statements in parallel 😎
\ls data/statements* | xargs -t -n 1 -P 10 -I {} bash -c " \
  gunzip -c {} | \
  curl -L \
    --user \"admin:password\" \
    -X POST \
    -H \"Content-Type: application/json\" \
    http://${LRS_IP}/xAPI/statements/ -d @- \
"
```

If everything went well, the LRS should respond with 22k UUIDs, filling your
terminal with apparently random characters.
😅

Let's check what our statements look like by querying the LRS:

```sh
# Fetch stored statements
curl -L --user "admin:password" http://"${LRS_IP}"/xAPI/statements/
```

> 💡 As per our previous remark, you may choose to click on the LRS URL link
> from your terminal to open it with your default web browser. HTTP Basic Auth
> credentials are `admin` for the login and `password` for the password.

### Data visualization: Apache Superset

Last but not least, we are now ready to deploy Superset, the shiny data
visualization tool from the Apache Foundation.

This time, we need to add the official Superset Helm repository locally and
then install its latest release:

```sh
# Add Superset official helm charts repository
helm repo add superset https://apache.github.io/superset

# Update available charts list
helm repo update

# Install the official superset chart
helm install --values charts/superset/values.yaml dataviz superset/superset
```

After one minute (or two), the `dataviz-superset` service should be up and
running, and the load balancer should have an assigned IP.

**Questions**

1. What command should you run to check the service deployment status?
2. How will we get the load balancer IP?

⏰

![Mr Bean Waiting](https://media.giphy.com/media/QBd2kLB5qDmysEXre9/giphy-downsized.gif)

⏰

**Responses**

1. `kubectl get svc -w dataviz-superset`
2. `kubectl get svc dataviz-superset -o jsonpath="{.status.loadBalancer.ingress[0].ip}"`

You got it, congratz! 🎉

```sh
# Define the SUPERSET_IP variable
export SUPERSET_IP=$(kubectl get svc dataviz-superset -o jsonpath="{.status.loadBalancer.ingress[0].ip}")

# Display its value
echo "SUPERSET URL: http://${SUPERSET_IP}/"
```

All desired services for our complete learning analytics stack are now
deployed, w00t!
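To wrap up this section, assuming the `MOODLE_IP`, `LRS_IP`, and `SUPERSET_IP`
variables exported in the previous steps are still set in your shell, you can
print a quick recap of the three public endpoints (a placeholder is shown for
any variable that is unset):

```sh
# Recap the public endpoints gathered in the previous steps; each entry is a
# "name:url" pair that we split on the first colon
for endpoint in "Moodle:http://${MOODLE_IP:-unset}/" \
                "LRS:http://${LRS_IP:-unset}/" \
                "Superset:http://${SUPERSET_IP:-unset}/"; do
  echo "${endpoint%%:*} -> ${endpoint#*:}"
done
```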
It's time to configure our tools.

## Configure applications

### Moodle

#### Plugin: Logstore xAPI

To send xAPI statements from the LMS to the LRS, we need to install the
Logstore xAPI plugin, which can be downloaded from the Moodle plugins
directory: https://moodle.org/plugins/logstore_xapi

Once downloaded, go to "Site administration > Plugins > Install plugins". Once
the Zip package has been uploaded, submit the form and follow the installation
workflow.

The configuration for the Logstore xAPI plugin should look like:

```
LRS endpoint: http://xx.xx.xx.xx/xAPI/statements
Username: admin
LRS basic auth secret: password
Send statements by scheduled task? No
```

> Substitute the `xx.xx.xx.xx` pattern with the ${LRS_IP} value.

Once installed and configured, don't forget to enable the plugin from the
"Logging" section of the "Site administration > Plugins" page.

#### Generate a test course

We are now ready to generate events from the LMS and send them to the LRS,
but... we need a course for this!

From the "Site administration > Development" page, click on the "Debugging"
link in the development section. Select "DEVELOPER" for the "Debug messages"
setting and save changes.

Now that developer mode is active, go back to the "Site administration >
Development" page and click on the "Make test course" link. Select an `S`-size
course (it takes about 30s to generate) and give it a short/full name. After a
few seconds, you can navigate the newly generated course. Feel free to test
various activities so that you generate learning traces we can explore in
Superset.

### Superset

Now it's time to have fun with learning traces! Log in to your Superset
instance using `admin` as both the login and the password.
Go to \"Settings \u003e Data - Database\nconnections\", add a new database of type \"ElasticSearch (OpenDistro SQL)\" with\nthe following SQLAlchemy connection URI:\n\n```\nelasticsearch+https://elastic:XXXXXXXXXX@data-lake-es-http:9200/?verify_certs=False\n```\n\n\u003e Substiture the `XXXXXXXXXX` pattern with the `elastic` user password that is\n\u003e stored in the `${ELASTIC_PASSWORD}` variable.\n\nYou can now add a new dataset using the `statements` index from the\nElasticsearch database. Use this dataset to create new charts and dashboards.\n\n---\n\nThis is it.\n\nWe are all set.\n\n**You did it!** 🎉\n\n![The Office Wow](https://media.giphy.com/media/tkApIfibjeWt1ufWwj/giphy.gif)\n\n## License\n\nThis work is licensed under a\n[Creative Commons Attribution-ShareAlike 4.0 International License][cc-by-sa].\n\n[![CC BY-SA 4.0][cc-by-sa-image]][cc-by-sa]\n\n[cc-by-sa]: http://creativecommons.org/licenses/by-sa/4.0/\n[cc-by-sa-image]: https://licensebuttons.net/l/by-sa/4.0/88x31.png\n[cc-by-sa-shield]: https://img.shields.io/badge/License-CC%20BY--SA%204.0-lightgrey.svg\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopenfun%2Fk8s-la-stack-tutorial","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fopenfun%2Fk8s-la-stack-tutorial","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopenfun%2Fk8s-la-stack-tutorial/lists"}