{"id":18637100,"url":"https://github.com/openshift/console","last_synced_at":"2025-05-13T22:02:47.233Z","repository":{"id":37247202,"uuid":"129436456","full_name":"openshift/console","owner":"openshift","description":"OpenShift Cluster Console UI","archived":false,"fork":false,"pushed_at":"2025-04-28T13:52:50.000Z","size":238857,"stargazers_count":431,"open_issues_count":71,"forks_count":632,"subscribers_count":109,"default_branch":"main","last_synced_at":"2025-04-28T14:14:27.488Z","etag":null,"topics":["openshift","openshift-origin"],"latest_commit_sha":null,"homepage":"https://www.openshift.org","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/openshift.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2018-04-13T17:54:59.000Z","updated_at":"2025-04-28T13:52:55.000Z","dependencies_parsed_at":"2023-10-16T08:44:18.950Z","dependency_job_id":"f4482e73-6e6d-4370-be01-4b6c873afdb1","html_url":"https://github.com/openshift/console","commit_stats":{"total_commits":14670,"total_committers":339,"mean_commits":43.27433628318584,"dds":0.9374914792092706,"last_synced_commit":"237973c70e0452cc6f66f9e123aa493802ae6fd5"},"previous_names":[],"tags_count":142,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openshift%2Fconsole","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openshift%2Fconsole/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openshift%2Fconsole/release
s","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openshift%2Fconsole/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/openshift","download_url":"https://codeload.github.com/openshift/console/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":251326851,"owners_count":21571636,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["openshift","openshift-origin"],"created_at":"2024-11-07T05:33:22.345Z","updated_at":"2025-04-28T14:14:44.465Z","avatar_url":"https://github.com/openshift.png","language":"TypeScript","readme":"# OpenShift Console\n\nCodename: \"Bridge\"\n\n[quay.io/openshift/origin-console](https://quay.io/repository/openshift/origin-console?tab=tags)\n\nThe console is a more friendly `kubectl` in the form of a single page webapp. It also integrates with other services like monitoring, chargeback, and OLM. Some things that go on behind the scenes include:\n\n- Proxying the Kubernetes API under `/api/kubernetes`\n- Providing additional non-Kubernetes APIs for interacting with the cluster\n- Serving all frontend static assets\n- User Authentication\n\n## Quickstart\n\n### Dependencies:\n\n1. [node.js](https://nodejs.org/) \u003e= 18 \u0026 [yarn](https://yarnpkg.com/en/docs/install) \u003e= 1.20\n2. [go](https://golang.org/) \u003e= 1.22+\n3. [oc](https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/) or [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) and an OpenShift or Kubernetes cluster\n4. 
[jq](https://stedolan.github.io/jq/download/) (for `contrib/environment.sh`)\n\n### Build everything:\n\nThis project uses [Go modules](https://github.com/golang/go/wiki/Modules),\nso you should clone the project outside of your `GOPATH`. To build both the\nfrontend and backend, run:\n\n```\n./build.sh\n```\n\nBackend binaries are output to `./bin`.\n\n### Configure the application\n\nThe following instructions assume you have an existing cluster you can connect\nto. OpenShift 4.x clusters can be installed using the\n[OpenShift Installer](https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/). More information about installing OpenShift can be found at\n\u003chttps://try.openshift.com/\u003e.\nYou can also use [CodeReady Containers](https://github.com/code-ready/crc)\nfor local installs, or native Kubernetes clusters.\n\n#### OpenShift (no authentication)\n\nFor local development, you can disable OAuth and run bridge with an OpenShift\nuser's access token. If you've installed OpenShift 4.0, run the following\ncommands to log in as the kubeadmin user and start a local console for\ndevelopment. Make sure to replace `/path/to/install-dir` with the directory you\nused to install OpenShift.\n\n```\noc login -u kubeadmin -p $(cat /path/to/install-dir/auth/kubeadmin-password)\nsource ./contrib/oc-environment.sh\n./bin/bridge\n```\n\nThe console will be running at [localhost:9000](http://localhost:9000).\n\nIf you don't have `kubeadmin` access, you can use any user's API token,\nalthough you will be limited to that user's access and might not be able to run\nthe full integration test suite.\n\n#### OpenShift (with authentication)\n\nIf you need to work on the backend code for authentication or you need to test\ndifferent users, you can set up authentication in your development environment.\nRegistering an OpenShift OAuth client requires administrative privileges for\nthe entire cluster, not just a local project. 
You must be logged in as a\ncluster admin such as `system:admin` or `kubeadmin`.\n\nTo run bridge locally connected to an OpenShift cluster, create an\n`OAuthClient` resource with a generated secret and read that secret:\n\n```\noc process -f examples/console-oauth-client.yaml | oc apply -f -\noc get oauthclient console-oauth-client -o jsonpath='{.secret}' \u003e examples/console-client-secret\n```\n\nIf the CA bundle of the OpenShift API server is unavailable, fetch the CA\ncertificates from a service account secret. Due to [upstream changes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount),\nthese service account secrets need to be created manually.\nOtherwise copy the CA bundle to\n`examples/ca.crt`:\n\n```\noc apply -f examples/sa-secrets.yaml\noc get secrets -n default --field-selector type=kubernetes.io/service-account-token -o json | \\\n    jq '.items[0].data.\"ca.crt\"' -r | python -m base64 -d \u003e examples/ca.crt\n# Note: use \"openssl base64\" because the \"base64\" tool is different between mac and linux\n```\n\nFinally run the console and visit [localhost:9000](http://localhost:9000):\n\n```\n./examples/run-bridge.sh\n```\n\n#### Enabling Monitoring Locally\nIn order to enable the monitoring UI and see the \"Observe\" navigation item while running locally, you'll need to run the OpenShift Monitoring dynamic plugin alongside Bridge. To do so, follow these steps:\n\n1. Clone the monitoring-plugin repo: https://github.com/openshift/monitoring-plugin\n2. `cd` to the monitoring-plugin root dir\n3. Run\n  ```\n  make install \u0026\u0026 make start-frontend\n  ```\n4. 
Run Bridge in another terminal following the steps above, but set the following environment variable before starting Bridge:\n  ```\n  export BRIDGE_PLUGINS=\"monitoring-plugin=http://localhost:9001\"\n  ```\n\n#### Updating `tectonic-console-builder` image\nUpdating the `tectonic-console-builder` image is needed whenever there is a change in the build-time dependencies and/or Go versions.\n\nTo update `tectonic-console-builder` to a new version, e.g. v27, follow these steps:\n\n1. Update the `tectonic-console-builder` image tag in the files listed below:\n   - .ci-operator.yaml\n   - Dockerfile.dev\n   - Dockerfile.plugins.demo\n   For example, `tectonic-console-builder:27`\n2. Update the dependencies in the Dockerfile.builder file, e.g. v18.0.0.\n3. Run the `./push-builder.sh` script to build and push the updated builder image to quay.io.\n   Note: You can test the image using `./builder-run.sh ./build-backend.sh`.\n   To update the image on quay.io, you need edit permission to the quay.io/coreos/tectonic-console-builder repo.\n4. Lastly, update the mapping of the `tectonic-console-builder` image tag in the\n   [openshift/release](https://github.com/openshift/release/blob/master/core-services/image-mirroring/supplemental-ci-images/mapping_supplemental_ci_images_ci) repository.\n   Note: There could be a scenario where you would have to add the new image reference to the \"mapping_supplemental_ci_images_ci\" file, e.g. 
to avoid CI downtime for the upcoming release cycle.\n   Optional: Request an update to the [rhel-8-base-nodejs-openshift-4.15](https://github.com/openshift-eng/ocp-build-data/pull/3775/files) node builder if it doesn't match the node version in `tectonic-console-builder`.\n\n#### CodeReady Containers\n\nIf you want to use CodeReady for local development, first make sure [it is set up](https://crc.dev/crc/#setting-up-codeready-containers_gsg), and the [OpenShift cluster is started](https://crc.dev/crc/#starting-the-virtual-machine_gsg).\n\nTo log in to the cluster's API server, you can use the following command:\n\n```shell\noc login -u kubeadmin -p $(cat ~/.crc/machines/crc/kubeadmin-password) https://api.crc.testing:6443\n```\n\n\u0026hellip; or, alternatively, use the CRC daemon UI (*Copy OC Login Command --\u003e kubeadmin*) to get the cluster-specific command.\n\nFinally, prepare the environment, and run the console:\n\n```shell\nsource ./contrib/environment.sh\n./bin/bridge\n```\n\n#### Native Kubernetes\n\nIf you have a working `kubectl` on your path, you can run the application with:\n\n```\nexport KUBECONFIG=/path/to/kubeconfig\nsource ./contrib/environment.sh\n./bin/bridge\n```\n\nThe script in `contrib/environment.sh` sets sensible defaults in the environment, and uses `kubectl` to query your cluster for endpoint and authentication information.\n\nTo configure the application by hand (or if `environment.sh` doesn't work for some reason), you can manually provide a Kubernetes bearer token with the following steps.\n\nFirst get the secret ID that has a type of `kubernetes.io/service-account-token` by running:\n\n```\nkubectl get secrets\n```\n\nthen get the secret contents:\n\n```\nkubectl describe secrets/\u003csecret-id-obtained-previously\u003e\n```\n\nUse this token value to set the `BRIDGE_K8S_AUTH_BEARER_TOKEN` environment variable when running Bridge.\n\n## Operator\n\nIn OpenShift 4.x, the console is installed and managed by the\n[console 
operator](https://github.com/openshift/console-operator/).\n\n## Hacking\n\nSee [CONTRIBUTING](CONTRIBUTING.md) for workflow \u0026 convention details.\n\nSee [STYLEGUIDE](STYLEGUIDE.md) for the file format and coding style guide.\n\n### Dev Dependencies\n\ngo 1.18+, nodejs/yarn, kubectl\n\n### Frontend Development\n\nAll frontend code lives in the `frontend/` directory. The frontend uses node, yarn, and webpack to compile dependencies into self-contained bundles that are loaded dynamically at runtime in the browser. These bundles are not committed to git. Tasks are defined in `package.json` in the `scripts` section and are aliased to `yarn run \u003ccmd\u003e` (in the frontend directory).\n\n#### Install Dependencies\n\nTo install the build tools and dependencies:\n\n```\ncd frontend\nyarn install\n```\n\nYou must run this command once, and every time the dependencies change. `node_modules` are not committed to git.\n\n#### Interactive Development\n\nThe following build task will watch the source code for changes and compile automatically.\nIf you would like to disable hot reloading, set the environment variable `HOT_RELOAD` to `false`.\n\n```\nyarn run dev\n```\n\nIf changes aren't detected, you might need to increase `fs.inotify.max_user_watches`. See \u003chttps://webpack.js.org/configuration/watch/#not-enough-watchers\u003e. If you need to increase your watchers, it's common to see multiple errors beginning with `Error from chokidar`.\n\nNote: Ensure `yarn run dev` has finished its initial build before visiting http://localhost:9000, otherwise `./bin/bridge` will stop running.\n\n### Unit Tests\n\nRun all unit tests:\n\n```\n./test.sh\n```\n\nRun backend tests:\n\n```\n./test-backend.sh\n```\n\nRun frontend tests:\n\n```\n./test-frontend.sh\n```\n\n#### Debugging Unit Tests\n\n1. `cd frontend; yarn run build`\n2. Add `debugger;` statements to any unit test\n3. `yarn debug-test route-pages`\n4. 
Open 'chrome://inspect/#devices' in the Chrome browser, then click the 'inspect' link in the **Target (v10...)** section.\n5. Chrome DevTools launches; click the Resume button to continue\n6. Execution will break on any `debugger;` statements\n\n### Integration Tests\n\nIntegration tests are implemented with [Cypress.io](https://www.cypress.io/).\n\nTo install Cypress:\n\n```\ncd frontend\nyarn run cypress install\n```\n\nLaunch the Cypress test runner:\n\n```\ncd frontend\noc login ...\nyarn run test-cypress-console\n```\n\nThis will launch the Cypress Test Runner UI in the `console` package, where you can run one or all Cypress tests.\n\n**Important:** When testing with authentication, set the `BRIDGE_KUBEADMIN_PASSWORD` environment variable in your shell.\n\n#### Execute Cypress in different packages\n\nAn alternate way to execute Cypress tests is via [frontend/integration-tests/test-cypress.sh](frontend/integration-tests/test-cypress.sh), which takes a `-p \u003cpackage\u003e` parameter to allow execution in different packages. It can also run Cypress tests in the Test Runner UI or in headless mode:\n\n```\nconsole/frontend \u003e ./integration-tests/test-cypress.sh\n\nRuns Cypress tests in Test Runner or headless mode\nUsage: test-cypress [-p] \u003cpackage\u003e [-s] \u003cfilemask\u003e [-h true]\n  '-p \u003cpackage\u003e' may be 'console', 'olm' or 'devconsole'\n  '-s \u003cspecmask\u003e' is a file mask for spec test files, such as 'tests/monitoring/*'. Used only in headless mode when '-p' is specified.\n  '-h true' runs Cypress in headless mode. 
When omitted, launches Cypress Test Runner\nExamples:\n  ./integration-tests/test-cypress.sh                                       // displays this help text\n  ./integration-tests/test-cypress.sh -p console                            // opens Cypress Test Runner for console tests\n  ./integration-tests/test-cypress.sh -p olm                                // opens Cypress Test Runner for OLM tests\n  ./integration-tests/test-cypress.sh -h true                               // runs all packages in headless mode\n  ./integration-tests/test-cypress.sh -p olm -h true                        // runs OLM tests in headless mode\n  ./integration-tests/test-cypress.sh -p console -s 'tests/crud/*' -h true  // runs console CRUD tests in headless mode\n```\n\nWhen running in headless mode, Cypress will test using its integrated Electron browser, but if you want to use Chrome or Firefox instead, set the `BRIDGE_E2E_BROWSER_NAME` environment variable in your shell to `chrome` or `firefox`.\n\n[**_More information on Console's Cypress usage_**](frontend/packages/integration-tests-cypress/README.md)\n\n[**_More information on DevConsole's Cypress usage_**](frontend/packages/dev-console/integration-tests/README.md)\n\n#### How the Integration Tests Run in CI\n\nThe end-to-end tests run against pull requests using [ci-operator](https://github.com/openshift/ci-operator/).\nThe tests are defined in [this manifest](https://github.com/openshift/release/blob/master/ci-operator/jobs/openshift/console/openshift-console-master-presubmits.yaml)\nin the [openshift/release](https://github.com/openshift/release) repo and were generated with [ci-operator-prowgen](https://github.com/openshift/ci-operator-prowgen).\n\nCI runs the [test-prow-e2e.sh](test-prow-e2e.sh) script, which runs [frontend/integration-tests/test-cypress.sh](frontend/integration-tests/test-cypress.sh).\n\n`test-cypress.sh` runs all Cypress tests, in all 'packages' (console, olm, and devconsole), in headless mode 
via:\n\n`test-cypress.sh -h true`\n\nFor more information on `test-cypress.sh` usage, please see [Execute Cypress in different packages](#execute-cypress-in-different-packages).\n\n### Internationalization\n\nSee [INTERNATIONALIZATION](INTERNATIONALIZATION.md) for information on our internationalization tools and guidelines.\n\n### Deploying a Custom Image to an OpenShift Cluster\n\nOnce you have made changes locally, these instructions will allow you to push\nchanges to an OpenShift cluster for others to review. This involves building a\nlocal image, pushing the image to an image registry, then updating the\nOpenShift cluster to pull the new image.\n\n#### Prerequisites\n\n1. Docker v17.05 or higher for multi-stage builds\n2. An image registry like [quay.io](https://quay.io/signin/) or [Docker Hub](https://hub.docker.com/)\n\n#### Steps\n\n1. Create a repository in the image registry of your choice to hold the image.\n2. Build the image: `docker build -t \u003cyour-image-name\u003e \u003cpath-to-repository | url\u003e`. For example:\n\n```\ndocker build -t quay.io/myaccount/console:latest .\n```\n\n3. Push the image to the image registry: `docker push \u003cyour-image-name\u003e`. Make sure\n   Docker is logged in to your image registry! For example:\n\n```\ndocker push quay.io/myaccount/console:latest\n```\n\n4. Put the console operator in an unmanaged state:\n\n```\noc patch consoles.operator.openshift.io cluster --patch '{ \"spec\": { \"managementState\": \"Unmanaged\" } }' --type=merge\n```\n\n5. Update the console Deployment with the new image:\n\n```\noc set image deploy console console=quay.io/myaccount/console:latest -n openshift-console\n```\n\n6. 
Wait for the changes to roll out:\n\n```\noc rollout status -w deploy/console -n openshift-console\n```\n\nYou should now be able to see your development changes on the remote OpenShift cluster!\n\nWhen done, you can put the console operator back in a managed state to remove the custom image:\n\n```\noc patch consoles.operator.openshift.io cluster --patch '{ \"spec\": { \"managementState\": \"Managed\" } }' --type=merge\n```\n\n### Dependency Management\n\nDependencies should be pinned to an exact semver, sha, or git tag (eg, no ^).\n\n#### Backend\n\nWhenever making vendor changes:\n\n1. Finish updating dependencies \u0026 writing changes\n2. Commit everything _except_ `vendor/` (eg, `server: add x feature`)\n3. Make a second commit with only `vendor/` (eg, `vendor: revendor`)\n\nAdding new or updating existing backend dependencies:\n\n1. Edit the `go.mod` file to the desired version (most likely a git hash)\n2. Run `go mod tidy \u0026\u0026 go mod vendor`\n3. Verify the update was successful. `go.sum` will have been updated to reflect the changes to `go.mod`, and the package will have been updated in `vendor`.\n\n#### Frontend\n\nAdd new frontend dependencies:\n\n```\nyarn add \u003cpackage@version\u003e\n```\n\nUpdate existing frontend dependencies:\n\n```\nyarn upgrade \u003cpackage@version\u003e\n```\n\nTo upgrade yarn itself, download a new yarn release from\n\u003chttps://github.com/yarnpkg/yarn/releases\u003e, replace the release in\n`frontend/.yarn/releases` with the new version, and update `yarn-path` in\n`frontend/.yarnrc`.\n\n##### @patternfly\n\nNote that when upgrading @patternfly packages, we've seen in the past that it can cause the JavaScript heap to run out of memory, or the bundle to become too large if multiple versions of the same @patternfly package are pulled in. 
To increase efficiency, run the following after updating packages:\n\n```\nnpx yarn-deduplicate --scopes @patternfly\n```\n\n#### Supported Browsers\n\nWe support the latest versions of the following browsers:\n\n- Edge\n- Chrome\n- Safari\n- Firefox\n\nIE 11 and earlier are not supported.\n\n### CLI Artifacts Downloads Server\n\nThe server provides `oc` binaries from the [quay.io/repository/openshift/origin-cli-artifacts](https://quay.io/repository/openshift/origin-cli-artifacts) image.\n\n#### To build the server:\n\n```\n./build-downloads.sh\n```\n\n#### Running the server:\n\nAfter building, the server can be run directly with:\n\n```\n./bin/downloads --config-path=cmd/downloads/config/defaultArtifactsConfig.yaml\n```\n\nAlternatively, you can use the provided Dockerfile.downloads to build an image containing the server. Use the following command to build the Docker image:\n\n```\ndocker build -f Dockerfile.downloads -t downloadsserver:latest .\n```\n\nNote: If you are running on macOS, you might need to pass the `--platform linux/amd64` flag to the Docker build command. The origin-cli-artifacts image is not supported on macOS.\n\nTo launch the server using the built image, you can run:\n\n```\ndocker run -p 8081:8081 downloadsserver:latest\n```\n\n## ContentSecurityViolation Detection\n\nThe console application automatically reports CSP violations to telemetry. This detection and\nreporting logic attempts to parse a dynamic plugin name from the `securitypolicyviolation` event to\ninclude in the data reported to telemetry. If a plugin name is not determined in\nthis way, then 'none' will be used. Additionally, violation reporting is throttled to prevent\nspamming the telemetry service with repetitive data. 
Identical violations will not be\nreported more than once a day.\n\n## Frontend Packages\n- [console-dynamic-plugin-sdk](./frontend/packages/console-dynamic-plugin-sdk/README.md)\n[[API]](./frontend/packages/console-dynamic-plugin-sdk/docs/api.md)\n[[Console Extensions]](./frontend/packages/console-dynamic-plugin-sdk/docs/console-extensions.md)\n\n- [console-plugin-shared](./frontend/packages/console-plugin-shared/README.md)\n\n- [dev-console](./frontend/packages/dev-console/README.md)\n\n- [eslint-plugin-console](./frontend/packages/eslint-plugin-console/README.md)\n\n- [integration-tests-cypress](./frontend/packages/integration-tests-cypress/README.md)\n\n- [knative-plugin](./frontend/packages/knative-plugin/README.md)\n\n- operator-lifecycle-manager\n[[Descriptors README]](./frontend/packages/operator-lifecycle-manager/src/components/descriptors/README.md)\n[[Descriptors API Reference]](./frontend/packages/operator-lifecycle-manager/src/components/descriptors/reference/reference.md)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopenshift%2Fconsole","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fopenshift%2Fconsole","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopenshift%2Fconsole/lists"}