Archive notice: This project is no longer under active development and was archived on 2024-11-12.

# SAS Event Stream Processing Lightweight Kubernetes

__CAUTION: The scripts provided in this repository to deploy SAS Event Stream Processing are no longer supported.__
__It is highly recommended that you do not use them.__
__Work is underway to replace them with a new easy-to-use deployment tool that enables you to manage standalone SAS Event Stream Processing deployments in various cloud environments.__
__The deployment tool is designed to provide end-to-end automation of the deployment process while maintaining flexibility to enable customized deployment.__
__More specific information about this new deployment tool will be made available in 2024.__

Table of Contents

* [Overview](#overview)
* [Introduction](#introduction)
* [Before You Start](#before-you-start)
* [Azure Notes](#azure-notes)
* [AWS Notes](#aws-notes)
* [GCP Notes](#gcp-notes)
* [Components of the SAS Event Stream Processing Cloud Ecosystem](#components-of-the-sas-event-stream-processing-cloud-ecosystem)
    * [YAML Templates](#yaml-templates)
    * [Scripts](#scripts)
* [Prerequisites](#prerequisites)
    * [Persistent Volume](#persistent-volume)
    * [Additional Prerequisites for a Multi-user Deployment](#additional-prerequisites-for-a-multi-user-deployment)
    * [Kubernetes Metrics Server](#kubernetes-metrics-server)
* [Getting Started](#getting-started)
    * [Retrieve Required Files](#retrieve-required-files)
    * [Set the Environment Variables](#set-the-environment-variables)
    * [Location of the Public Domain Images](#location-of-the-public-domain-images)
    * [Define PostgreSQL/UAA Secrets](#define-postgresqluaa-secrets)
    * [Generate a Deployment with mkdeploy](#generate-a-deployment-with-mkdeploy)
    * [Deploy Images in Kubernetes with dodeploy](#deploy-images-in-kubernetes-with-dodeploy)
* [Accessing Projects and Servers](#accessing-projects-and-servers)
    * [Query a Project](#query-a-project)
        * [Query the Metering Server](#query-the-metering-server)
        * [Access Web-Based Clients](#access-web-based-clients)
* [Configuring for Multiple Users](#configuring-for-multiple-users)
* [Using filebrowser](#using-filebrowser)
* [Contributing](#contributing)
* [License](#license)
* [Additional Resources](#additional-resources)
## Overview

[SAS Event Stream Processing](https://go.documentation.sas.com/doc/en/espcdc/v_017/espov/home.htm) (ESP) enables you to quickly process and analyze a large number of continuously flowing events.
ESP may be deployed in a Kubernetes (K8s) environment as part of a [SAS Viya](https://support.sas.com/en/software/sas-viya.html) offering (for more information, see the [SAS Viya: Deployment Guide](https://go.documentation.sas.com/doc/en/dplyml0phy0dkr/v_019/titlepage.htm)) or as a lightweight offering that does not require SAS Viya.
This project contains files to assist you with a SAS ESP K8s lightweight deployment and has been tested against K8s version 1.23.

**Note:** These instructions apply to SAS ESP version 2020.1 and later.
For older SAS ESP versions, please see the available tags in this Git repository.

## Introduction

This project supports the deployment of a lightweight version of SAS ESP.
It is a repository of scripts, YAML template files, and sample projects (XML files) that enable you to develop, deploy, and test an ESP server and SAS ESP web-based clients in a K8s cluster.
The resulting SAS ESP cloud ecosystem runs *independently* of SAS Viya.

Use the tools in this repository for either of the following deployment approaches:

* lightweight open, multi-user, multi-tenant deployment
* lightweight open, single-user deployment

**Important:** If you want to deploy SAS ESP with other SAS products, do *not* use the tools in this repository.

Before you proceed, decide which of these deployment approaches you intend to take.
Carefully read the associated prerequisites for your chosen approach before editing any file or running any script.
## Before You Start

Before you deploy SAS Event Stream Processing with the scripts and templates provided by this repository, you must fulfill the following prerequisites:

* You must have a properly configured DNS system. The fully qualified domain name `<namespace>.<domain>` must resolve both externally and within the Kubernetes cluster.
* You must set up an `Nginx` Ingress controller. This controller is the gateway into the Kubernetes cluster. For example, if your Kubernetes namespace is named `<namespace>`, and your DNS domain is named `<domain>`, `<namespace>.<domain>` must resolve to the public IP address of the `Nginx` controller.
* You must have a `ReadWriteOnce` (RWO) persistent volume to use as the backing store for the PostgreSQL database. The YAML template file that binds this PV is `pvc-pg.yaml`.
* You must have a `ReadWriteMany` (RWX) persistent volume to use as a read/write scratch area for input files (CSV, JSON), output files (CSV, JSON), and deep learning and machine learning models (ASTORE/ONNX/Python). The YAML template file that binds this PV is `pvc.yaml`.
* A Docker repository must be available on the Kubernetes cluster and must be populated with the SAS Event Stream Processing Docker images.

For more information about YAML template files, see [YAML Templates](#yaml-templates). For more information about the required PVs, see [Persistent Volume](#persistent-volume). For more information about the required Docker images, see [Getting Started](#getting-started).
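The DNS and Ingress checks above can be scripted as a quick preflight. A minimal sketch, assuming `nslookup` and `kubectl` are available; the namespace and domain values are placeholders, and the pod label shown is the one used by the community ingress-nginx charts, which may differ in your installation:

```shell
#!/bin/sh
# Preflight check for the DNS and Ingress prerequisites.
# NAMESPACE and DOMAIN are placeholders -- substitute your own values.
NAMESPACE=esp
DOMAIN=example.com
FQDN="${NAMESPACE}.${DOMAIN}"
echo "Checking prerequisites for ${FQDN}"

# 1. The fully qualified domain name must resolve.
if command -v nslookup >/dev/null 2>&1; then
    nslookup "${FQDN}" || echo "WARNING: ${FQDN} does not resolve" >&2
fi

# 2. An Nginx Ingress controller must be running in the cluster.
if command -v kubectl >/dev/null 2>&1; then
    kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx
fi
```

Run the same resolution check from inside a pod as well, because the name must resolve both externally and within the cluster.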
## Azure Notes

If you are installing this package on Azure Kubernetes Service (AKS), then you must read the [Azure Notes](AZURE-notes.md) before you deploy SAS Event Stream Processing.

## AWS Notes

If you are installing this package on AWS Elastic Kubernetes Service (EKS), then you must read the [AWS Notes](AWS-notes.md) before you deploy SAS Event Stream Processing.

## GCP Notes

If you are installing this package on Google Kubernetes Engine (GKE), then you must read the [GCP Notes](GCP-notes.md) before you deploy SAS Event Stream Processing.

**Note:** Deployment of this package on the OpenShift platform is not supported.

## Components of the SAS Event Stream Processing Cloud Ecosystem

### YAML Templates

The following subdirectories of /esp-cloud provide essential components of the SAS Event Stream Processing cloud ecosystem.

[Operator](/esp-cloud/operator) - contains YAML template files and projects to deploy the ESP operator and the SAS Event Stream Processing metering server.

From this location, deploy the following Docker images that you obtained through your SOE:

* SAS Event Stream Processing metering server
* ESP operator
* open-source PostgreSQL database (you can replace this with an alternative PostgreSQL database)
* open-source filebrowser to manage the persistent volume (PV)

[Clients](/esp-cloud/clients) - contains YAML template files and projects to deploy SAS Event Stream Processing web-based clients.

From this location, deploy the following Docker images that you obtained through your SOE:

* SAS Event Stream Processing Studio
* SAS Event Stream Processing Streamviewer
* SAS Event Stream Manager

[OAuth2](/esp-cloud/oauth2) - contains YAML template files for supporting multi-user installations.

From this location, deploy the following Docker images:

* OAuth2 Proxy
* Pivotal User Account and Authentication (UAA) server (configured to store user credentials in PostgreSQL, but could also be reconfigured to read user credentials from alternative identity management (IM) systems)

Each of these subdirectories contains README files with more specific, detailed instructions.
### Scripts

The /bin subdirectory of /esp-cloud provides the following scripts to facilitate deployment:

* **mkdeploy** - creates a set of deployment YAML files. You must set appropriate environment variables before running this script.
* **dodeploy** - deploys images on the Kubernetes cluster.
* **mkproject** - converts XML project code into a Kubernetes custom resource file that works in the SAS Event Stream Processing Cloud Ecosystem.
* **uaatool** - allows easy modification (add/delete/list) of users in the UAA database used in a multi-user deployment.

For more information about using these scripts, see [Getting Started](#getting-started).

## Prerequisites

### Persistent Volume

**Important**: To deploy the Docker images that you download, you must have a running Kubernetes cluster and two persistent volumes (PVs) available for use.
Work with your Kubernetes administrator to obtain access to a cluster with the required PVs.
By default, the persistent volume claims (PVCs) use the Kubernetes storage class "nfs-client" and are dynamically provisioned.
You can change these settings appropriately for your specific installation.
You have the option to set up a PV storage class that supports `ReadWriteMany` [access mode][k8s-pv-accessmodes] with a storage class name other than "nfs-client".
If you use this option, use the `ESP_STORAGECLASS_RWX` environment variable to set the `ReadWriteMany` storage class name; for example:

```shell
export ESP_STORAGECLASS_RWX='my-rwx-storage-class-name'
```

The first PV is a backing store for the PostgreSQL database, which requires Write access to the persistent volume. Because the PostgreSQL pod is the only pod that writes to this PV, assign it the access mode **ReadWriteOnce**. A typical deployment with no stored projects or metadata uses about 68MB of storage. For a smaller deployment, 20GB of storage for the PV should be adequate.

The second PV is used as a read/write location for running ESP projects.
Because SAS Event Stream Processing projects read and write to the PV simultaneously, the PV must support `ReadWriteMany` [access mode][k8s-pv-accessmodes].
The size of this PV depends on the amount of input and output data that you intend to store there.
Determine the amount of data to be consumed (input data), estimate the amount of processed data to be written (output data), and specify the size accordingly.
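Before setting `ESP_STORAGECLASS_RWX`, it can help to confirm which storage classes the cluster actually offers. A sketch, assuming `kubectl` access; the class name below is a placeholder:

```shell
# List the storage classes and their provisioners; pick the one that
# supports ReadWriteMany in your cluster.
if command -v kubectl >/dev/null 2>&1; then
    kubectl get storageclass
fi

# Point the deployment scripts at your RWX-capable class if it is not
# named "nfs-client" (placeholder name shown).
export ESP_STORAGECLASS_RWX='my-rwx-storage-class-name'
echo "RWX storage class: ${ESP_STORAGECLASS_RWX}"
```

Note that a storage class object does not declare access modes itself; whether RWX is supported depends on the underlying provisioner.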
### Additional Prerequisites for a Multi-user Deployment

For a multi-user deployment, here are the additional prerequisites:

* access to a Pivotal UAA server in a container
* access to the **cf-uaac** Pivotal UAA command-line client to configure the UAA server

Both of these containers are supplied in the publicly available repository:

```text
ghcr.io/skolodzieski/uaac            3.2.0      265MB
ghcr.io/skolodzieski/uaa             75.18.0    1.09GB
```

### Kubernetes Metrics Server

The Kubernetes Metrics Server is required for SAS Event Stream Processing clients to access Kubernetes metrics.
To install this component, see <https://github.com/kubernetes-sigs/metrics-server>.

If the Kubernetes Metrics Server is not installed, when you use SAS Event Stream Manager to deploy a project in a deployment whose type is "Cluster", the **Kubernetes Metrics** tab does not display information about current CPU utilization.

## Getting Started

The instructions in this section should be executed on a workstation that has Docker installed.

### Retrieve Required Files

To prepare for deployment, follow these steps:

1. Create a directory to store the working files.
    For example:

    ```shell
    mkdir myesp
    ```

1. Open the software order email (SOE) that you received from SAS, and click the **Get Started** button.

1. Log in to the [My SAS](https://my.sas.com) web portal.

1. On the My SAS web page that opens, expand the information for the order by clicking the down arrow.

1. In the pane that opens, examine the order information. The version indicates
    the release cadence and the version of SAS Viya software to be deployed (for
    example, LTS 2020.1). If you want to deploy a different version, select the
    cadence and release from the **SAS Viya Version** list.

1. Under Order Assets, click **Download Certificates**.

1. Save the `SASViyaV4-order-ID_certs.zip` file in the `myesp` directory that you created in step 1.

1. In the same Order Assets section of the page, click **Download License**.

1. Save the license file (`license.jwt`) from `my.sas.com` in the `myesp` directory.

1. Scroll down to the section of the page that is labeled **SAS Mirror Manager**.
    Click [Download Now](https://support.sas.com/en/documentation/install-center/viya/deployment-tools/4/mirror-manager.html)
    to download the SAS Mirror Manager package to the machine where you want to
    create your mirror registry.

    Or use wget to download SAS Mirror Manager:

    ```shell
    wget https://support.sas.com/installation/viya/4/sas-mirror-manager/lax/mirrormgr-linux.tgz
    ```

    SAS recommends saving the file in the `myesp` directory.

1. Uncompress the downloaded file in the `myesp` directory:

    ```shell
    tar -xvf mirrormgr-linux.tgz
    ```

1. (Optional) Save the SOE in the same directory. The information in the SOE can be useful should you need to contact SAS Technical Support with questions or issues.

1. Use SAS Mirror Manager to download asset tags:

    ```shell
    ./mirrormgr list remote docker tags --deployment-data SASViyaV4-order-ID_certs.zip --cadence cadence --latest
    ```

1. Record this list of available tags for later use.

1. Populate your mirror registry with the software that you ordered. For example:

    ```shell
    ./mirrormgr --deployment-data SASViyaV4_${ORDER}_certs.zip mirror registry \
    --destination myregistry.mydomain.com \
    --username myregistryuser \
    --password myregistrypassword \
    --cadence SAS-release-cadence \
    --latest \
    --log-file /tmp/mirrormgr.log \
    --workers 10
    ```

1. Clone the following project:

    ```shell
    git clone https://github.com/sassoftware/esp-kubernetes.git
    ```

1. Note the list of assets that you need to complete the deployment, which
    looks something like this:

    ```text
    myregistry.mydomain.com/viya-4-x64_oci_linux_2-docker/sas-event-stream-manager-app:7.9.17-20210114.1610585769219
    myregistry.mydomain.com/viya-4-x64_oci_linux_2-docker/sas-event-stream-processing-metering-app:10.78.0-20201214.1607929237374
    myregistry.mydomain.com/viya-4-x64_oci_linux_2-docker/sas-esp-operator:10.77.2-20201210.1607615447889
    myregistry.mydomain.com/viya-4-x64_oci_linux_2-docker/sas-event-stream-processing-studio-app:7.9.15-20210114.1610585675385
    myregistry.mydomain.com/viya-4-x64_oci_linux_2-docker/sas-event-stream-processing-streamviewer-app:7.9.17-20210114.1610584965330
    myregistry.mydomain.com/viya-4-x64_oci_linux_2-docker/sas-event-stream-processing-server-app:10.79.26-20210115.1610737535603
    ```
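As a quick sanity check that the mirror registry was populated, you can try pulling one of the listed assets. A sketch that reuses the example operator image from the asset list above; substitute your own registry, path, and tag:

```shell
# Compose the full image reference from the example asset list above
# (placeholders -- replace with your own values).
REGISTRY=myregistry.mydomain.com
IMAGE=viya-4-x64_oci_linux_2-docker/sas-esp-operator
TAG=10.77.2-20201210.1607615447889
REF="${REGISTRY}/${IMAGE}:${TAG}"
echo "Verifying ${REF}"

# Pull the image to confirm the registry is reachable and populated.
if command -v docker >/dev/null 2>&1; then
    docker pull "${REF}"
fi
```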
### Set the Environment Variables

Set the following environment variables before you use the deployment scripts. Refer to the names of the Docker images in your mirror registry and to the names of the Docker images that you pulled for the OAuth2 Proxy and the Pivotal UAA server.

```shell
IMAGE_ESPSRV="name of image for SAS Event Stream Processing Server"
IMAGE_METERBILL="name of image for SAS Event Stream Processing Metering Server"
IMAGE_OPERATOR="name of image for SAS Event Stream Processing Operator"

IMAGE_ESPESM="name of image for SAS Event Stream Manager"
IMAGE_ESPSTRMVWR="name of image for SAS Event Stream Processing Streamviewer"
IMAGE_ESPSTUDIO="name of image for SAS Event Stream Processing Studio"

IMAGE_FILEBROWSER="name of image for FileBrowser"
IMAGE_OAUTH2P="name of image for OAuth2 Proxy"
IMAGE_UAA="name of image for Pivotal UAA Server"
```

For example:

```shell
IMAGE_ESPSRV=myregistry.mydomain.com/viya-4-x64_oci_linux_2-docker/sas-event-stream-processing-server-app:10.79.26-20210115.1610737535603
```

Perform the SAS Event Stream Processing cloud deployment from a single directory, /esp-cloud. A single script enables the deployment of the ESP operator and the web-based clients.

The deployment can be performed in Open mode (no TLS or user authentication), or in multi-user mode, which provides full authentication through a UAA server.
To enable TLS in multi-user mode, follow the [TLS instructions](TLS.md).

For more information, see [/esp-cloud](/esp-cloud).
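Because all nine variables must be set consistently in the shell that runs the deployment scripts, one convenient approach is to collect them in a small file and source it. A sketch that reuses the example asset names from the list above; the OAuth2 Proxy reference is a placeholder, and every value should be replaced with the names from your own mirror registry:

```shell
# esp-env.sh -- one place for the image variables used by mkdeploy.
# All values below are examples/placeholders from this README.
REG=myregistry.mydomain.com/viya-4-x64_oci_linux_2-docker

export IMAGE_ESPSRV=${REG}/sas-event-stream-processing-server-app:10.79.26-20210115.1610737535603
export IMAGE_METERBILL=${REG}/sas-event-stream-processing-metering-app:10.78.0-20201214.1607929237374
export IMAGE_OPERATOR=${REG}/sas-esp-operator:10.77.2-20201210.1607615447889

export IMAGE_ESPESM=${REG}/sas-event-stream-manager-app:7.9.17-20210114.1610585769219
export IMAGE_ESPSTRMVWR=${REG}/sas-event-stream-processing-streamviewer-app:7.9.17-20210114.1610584965330
export IMAGE_ESPSTUDIO=${REG}/sas-event-stream-processing-studio-app:7.9.15-20210114.1610585675385

export IMAGE_FILEBROWSER=ghcr.io/skolodzieski/filebrowser:latest
export IMAGE_OAUTH2P=my-oauth2-proxy-image:latest   # placeholder
export IMAGE_UAA=ghcr.io/skolodzieski/uaa:75.18.0
```

Source the file (`. ./esp-env.sh`) in the shell where you run **mkdeploy**, so the exports are visible to the script.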
### Location of the Public Domain Images

The deployment makes use of the following third-party Docker images:

```text
ghcr.io/skolodzieski/filebrowser   latest
ghcr.io/skolodzieski/postgres      12.5
```

The files `esp-cloud/operator/templates/fileb.yaml` and `esp-cloud/operator/templates/postgres.yaml` reference these Docker images; they do not need to be modified unless you want to replace these third-party images.

### Define PostgreSQL/UAA Secrets

The following four environment variables control the secrets (created at deployment time) for the PostgreSQL database and the Pivotal UAA server.

```text
uaaUsername             --   Username for the UAA server, defaults to uaaUSER (only used in a multi-user deployment)
uaaCredentials          --   Password for the UAA server, defaults to uaaPASS (only used in a multi-user deployment)
postgresSQLUsername     --   Username for the PostgreSQL database
postgresSQLCredentials  --   Password for the PostgreSQL database
```
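These variables can be exported like any other shell variables before running **mkdeploy**. A sketch with placeholder values; keep real credentials out of version control:

```shell
# Placeholder secret values -- replace before deploying.
export postgresSQLUsername='espdbuser'
export postgresSQLCredentials='change-me'

# Only needed for a multi-user deployment (defaults from this README):
export uaaUsername='uaaUSER'
export uaaCredentials='uaaPASS'
```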
### Generate a Deployment with mkdeploy

Use the **mkdeploy** script to create a set of deployment YAML files. The script uses the environment variables that you set to locate the Docker images and to pass parameters for specifying a namespace, Ingress root, license, and type of deployment.

```shell
./bin/mkdeploy
Usage: ./bin/mkdeploy

  GENERAL options

       -r                          -- remove existing deploy/
                                       before creating
       -y                          -- no prompt, just execute
       -n <namespace>              -- specify K8s namespace
       -d <ingress domain root>    -- project domain root,
                                       ns.<domain root>/<path> is Ingress
       -l <esp license file>       -- SAS ESP license

       -C                          -- deploy clients
       -M                          -- enable multiuser mode
       -A                          -- decorate deployment for Azure
       -W                          -- decorate deployment for AWS
       -G                          -- decorate deployment for Google Cloud
```

**Note:** Use the *-d* (Ingress domain root) parameter specified in the **mkdeploy** script to create Ingress routes for the deployed pods. All SAS Event Stream Processing applications within the Kubernetes cluster are now accessed through specific context roots and a single Ingress host.
The Ingress host is specified in the form `<namespace>.<ingress domain root>`.

The options `-C` and `-M` are optional and generate the following deployments:

* **Open deployment (no authentication or TLS)** with no web-based clients.

```shell
./bin/mkdeploy -r -l ../../LICENSE/SASViyaV0400_09QTFR_70180938_Linux_x86-64.jwt -n <namespace> -d sas.com
```

* **Open deployment (no authentication or TLS)** with all web-based clients.

```shell
./bin/mkdeploy -r -l ../../LICENSE/SASViyaV0400_09QTFR_70180938_Linux_x86-64.jwt -n <namespace> -d sas.com -C
```

* **Multi-user deployment (UAA authentication and TLS)** with no web-based clients.

```shell
./bin/mkdeploy -r -l ../../LICENSE/SASViyaV0400_09QTFR_70180938_Linux_x86-64.jwt -n <namespace> -d sas.com -M
```

* **Multi-user deployment (UAA authentication and TLS)** with all web-based clients.

```shell
./bin/mkdeploy -r -l ../../LICENSE/SASViyaV0400_09QTFR_70180938_Linux_x86-64.jwt -n <namespace> -d sas.com -C -M
```

After you run the **mkdeploy** script, which generates usable deployment manifests, the YAML template file named deploy/pvc-pg.yaml specifies a *PersistentVolumeClaim*. For example:

```yaml
#
# This is the esp-pv claim that esp component pods use.
#
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: esp-pv-pg
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-client"
spec:
  accessModes:
    - ReadWriteOnce # This volume is used for PostgreSQL storage
  resources:
    requests:
      storage: 20Gi  # volume size requested
```

This *PersistentVolumeClaim* is made by the PostgreSQL database. Ensure that the PV that you have set up can satisfy this claim.

A second YAML template file named deploy/pvc.yaml specifies a *PersistentVolumeClaim* as the read/write location for running ESP projects. For example:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: esp-pv
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```
In general, the processes associated with the ESP server run as user **sas**, group **sas**. Typically, this choice of user and group means uid **1001**, gid **1001**. For example, when you deploy the open-source filebrowser application, the associated processes have these assignments.

In the `deploy/fileb.yaml` YAML template file, the relevant section is similar to the following:

```yaml
initContainers:
    -
        name: config-data
        image: "TEMPLATE_ESP_SERVER_IMAGE"
        command:
            - 'bash'
            - '-c'
            - 'mkdir -p /mnt/data/input ; mkdir -p /mnt/data/output ; mkdir -p /mnt/data/mas-store ; touch /db/filebrowser.db'
        securityContext:
            runAsGroup: 1001
            runAsUser: 1001
```

This section specifies an initialization container that runs prior to starting the filebrowser application. It creates three directories on the PV:

```text
    input/
    output/
    mas-store/
```

These directories are used by the deployment. The input/ and output/ directories are created for use by running event stream processing projects that need to access files (CSV, XML, and so on).

### Deploy Images in Kubernetes with dodeploy

After you have revised the manifest files that reside in the /deploy directory, deploy images on the Kubernetes cluster with the **dodeploy** script.
For example:

```shell
./bin/dodeploy -n cmdline
```

This invocation checks that the given namespace exists before it executes the deployment. If the namespace does not exist, the script asks whether the namespace should be created.
After the deployment is completed, you should see active pods within your namespace. For example, consider the output below. The pods (Ingress) marked with an **M** appear only in a multi-user deployment. The pods (Ingress) marked with a **C** appear only when web-based clients are included in the deployment.

```shell
   $ kubectl -n mudeploy get pods
   NAME                                                              READY   STATUS    RESTARTS   AGE
   espfb-deployment-5cc85c6bfd-4fb92                                 1/1     Running   0          25h
M  oauth2-proxy-7d94759449-2thmd                                     1/1     Running   0          25h
   postgres-deployment-56c9d65d6c-9k9kb                              1/1     Running   1          25h
   sas-esp-operator-5b596f8b6f-wmggk                                 1/1     Running   0          25h
C  sas-event-stream-manager-app-5b5946b544-9bkxs                     1/1     Running   0          25h
C  sas-event-stream-processing-client-config-server-75b656f6b7xhpj   1/1     Running   0          25h
C  sas-event-stream-processing-metering-app-86c5b7c6-c7qv4           1/1     Running   0          24h
C  sas-event-stream-processing-streamviewer-app-55d79d6996-24vq5     1/1     Running   0          25h
C  sas-event-stream-processing-studio-app-bf4f675f4-sfpjk            1/1     Running   0          25h
M  uaa-deployment-85d9fbf6bd-s8fwl                                   1/1     Running   0          25h
```
The ESP operator, SAS Event Stream Processing Studio, SAS Event Stream Processing Streamviewer, PostgreSQL, OAuth2 Proxy, and Pivotal UAA are started by the YAML files that are supplied. After SAS Event Stream Processing Studio initializes, it creates a custom resource that causes the ESP operator to start a client-configuration server, named `sas-event-stream-processing-client-config-server`. The client-configuration server is a small ESP server that runs a dummy project. For more information, see [Understanding and Managing the Client-Configuration Server](https://documentation.sas.com/?cdcId=espcdc&cdcVersion=default&docsetId=espex&docsetTarget=n154fz2uzumjwrn111xqzu36wy93.htm) in SAS Event Stream Processing Help Center.

An Ingress for each component should also appear in the namespace. For example:

```shell
   $ kubectl -n mudeploy get ingress
   NAME                                               HOSTS            ADDRESS   PORTS     AGE
   espfb                                              xxxxxx.sas.com             80, 443   25h
M  oauth2-proxy                                       xxxxxx.sas.com             80, 443   25h
   sas-event-stream-manager-app                       xxxxxx.sas.com             80, 443   25h
C  sas-event-stream-processing-client-config-server   xxxxxx.sas.com             80        25h
C  sas-event-stream-processing-metering-app           xxxxxx.sas.com             80, 443   24h
C  sas-event-stream-processing-streamviewer-app       xxxxxx.sas.com             80, 443   25h
C  sas-event-stream-processing-studio-app             xxxxxx.sas.com             80, 443   25h
M  uaa                                                xxxxxx.sas.com             80, 443   25h
```

**Note:** The client-configuration server shows up last. The remaining pods show up as "running" almost instantaneously.
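Rather than polling `kubectl get pods` by hand, you can wait for every deployment to become ready. A sketch, assuming `kubectl` access; `mudeploy` is the example namespace from the listings above:

```shell
# Wait (up to 5 minutes each) for all deployments in the namespace
# to finish rolling out.
NS=mudeploy
if command -v kubectl >/dev/null 2>&1; then
    for d in $(kubectl -n "${NS}" get deployments -o name); do
        kubectl -n "${NS}" rollout status "${d}" --timeout=300s
    done
fi
```

Keep in mind that the client-configuration server is created only after SAS Event Stream Processing Studio initializes, so it may appear some time after the other deployments report ready.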
## Accessing Projects and Servers

You can use the following URLs and context roots to access projects and servers:

```text
Project X    --   https://<namespace>.sas.com/SASEventStreamProcessingServer/X/eventStreamProcessing/v1/
Metering     --   https://<namespace>.sas.com/SASEventStreamProcessingMetering/eventStreamProcessing/v1/meterData
Studio       --   https://<namespace>.sas.com/SASEventStreamProcessingStudio
Streamviewer --   https://<namespace>.sas.com/SASEventStreamProcessingStreamviewer
ESM          --   https://<namespace>.sas.com/SASEventStreamManager
filebrowser  --   https://<namespace>.sas.com/files
```

### Query a Project

Suppose that the Ingress domain root is `sas.com`, the namespace is `esp`, and the project's service name is **array**.

After deployment, you can query a project deployed in an open environment as follows:

```shell
curl https://esp.sas.com/SASEventStreamProcessingServer/array/eventStreamProcessing/v1/
```

You can query a project deployed in a multi-user environment as follows:

```shell
curl https://esp.sas.com/SASEventStreamProcessingServer/array/eventStreamProcessing/v1/ -H 'Authorization: Bearer <put a valid access token here>'
```

#### Query the Metering Server

**Note:** You cannot access the metering server through a web browser.

Suppose that the Ingress domain root is `sas.com`, and the namespace is `esp`.

After deployment, you can perform a simple query of the metering server deployed in an open environment on the cluster as follows:

```shell
curl https://esp.sas.com/SASEventStreamProcessingMetering/eventStreamProcessing/v1/meterData
```

You can perform a simple query of the metering server deployed in a multi-user environment on the cluster as follows:

```shell
curl https://esp.sas.com/SASEventStreamProcessingMetering/eventStreamProcessing/v1/meterData -H 'Authorization: Bearer <put a valid access token here>'
```
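The multi-user examples above require a valid access token. One hypothetical way to obtain one is an OAuth2 password grant against the deployed UAA server. The UAA URL path, client registration, and credentials below are all placeholders; the real configuration is described in [/esp-cloud/oauth2](/esp-cloud/oauth2):

```shell
# Hypothetical sketch: fetch a bearer token from UAA, then query the
# metering server. All credentials and the UAA path are placeholders.
UAA_URL='https://esp.sas.com/uaa/oauth/token'
RESPONSE=$(curl -s -u 'client-id:client-secret' "${UAA_URL}" \
    -d 'grant_type=password' -d 'username=someuser' -d 'password=somepass')

# Extract "access_token" from the JSON response (no jq dependency).
TOKEN=$(printf '%s' "${RESPONSE}" | sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p')

curl https://esp.sas.com/SASEventStreamProcessingMetering/eventStreamProcessing/v1/meterData \
    -H "Authorization: Bearer ${TOKEN}"
```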
#### Access Web-Based Clients

After deployment, you can access web-based clients in an open or multi-user deployment through the following URLs:

```text
SAS Event Stream Processing Studio          -- https://esp.sas.com/SASEventStreamProcessingStudio
SAS Event Stream Processing Streamviewer    -- https://esp.sas.com/SASEventStreamProcessingStreamviewer
SAS Event Stream Manager                    -- https://esp.sas.com/SASEventStreamManager
```

## Configuring for Multiple Users

For information about adding service and user accounts and adding credentials, see [OAuth2](/esp-cloud/oauth2).

## Using filebrowser

The filebrowser application on GitHub enables you to access persistent stores used by the Kubernetes pods.

The filebrowser application is installed in your Kubernetes cluster automatically. You can access it through a browser at the following URL:

`https://<namespace>.<ingress domain root>/files`

With filebrowser, you can perform the following tasks:

* copy input files (CSV, JSON, XML) into the persistent store
* view output files written to the persistent store by running projects
* copy large binary model files for analytics (SAS Analytic Store files) to the persistent store

## Contributing

We welcome your contributions!
Please read [CONTRIBUTING.md](CONTRIBUTING.md) for details about how to submit contributions to this project.

## License

This project is licensed under the [Apache 2.0 License](LICENSE).

## Additional Resources

The [SAS Event Stream Processing product support page](https://support.sas.com/en/software/event-stream-processing-support.html)
contains:

* current and past product documentation
* instructional videos
* examples
* training courses
* featured blogs
* featured community topics

[k8s-pv-accessmodes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes