{"id":20126449,"url":"https://github.com/sovereigncloudstack/central-api","last_synced_at":"2026-01-23T09:36:21.562Z","repository":{"id":194066077,"uuid":"690022143","full_name":"SovereignCloudStack/central-api","owner":"SovereignCloudStack","description":"MVP for SCS Central API","archived":false,"fork":false,"pushed_at":"2024-08-12T12:41:43.000Z","size":75,"stargazers_count":2,"open_issues_count":3,"forks_count":0,"subscribers_count":5,"default_branch":"main","last_synced_at":"2024-08-13T13:20:59.025Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://scs.community/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"agpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/SovereignCloudStack.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-09-11T11:39:35.000Z","updated_at":"2024-08-13T13:20:59.026Z","dependencies_parsed_at":"2023-09-11T14:10:08.083Z","dependency_job_id":"5b1d1f16-86e8-4cd5-bba0-4e4d31cce84a","html_url":"https://github.com/SovereignCloudStack/central-api","commit_stats":null,"previous_names":["sovereigncloudstack/central-api"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SovereignCloudStack%2Fcentral-api","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SovereignCloudStack%2Fcentral-api/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SovereignCloudStack%2Fcentral-api/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SovereignCloudStack%2Fcentral-a
pi/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/SovereignCloudStack","download_url":"https://codeload.github.com/SovereignCloudStack/central-api/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":224517293,"owners_count":17324408,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-13T20:16:09.991Z","updated_at":"2026-01-23T09:36:16.539Z","avatar_url":"https://github.com/SovereignCloudStack.png","language":"Python","readme":"# SCS Central API\n\n## Premise\n\nBy embracing existing open source solutions and bundling them, SCS provides a viable\nalternative to widely adopted proprietary cloud offerings, including\nInfrastructure-as-a-Service offerings, Kubernetes-as-a-Service offerings and other\nX-as-a-Service offerings.\n\nThe choice to embrace existing technology has huge advantages over starting from\nscratch.\nBy not reinventing wheels, a lot of effort is saved and existing communities are\nstrengthened. The adoption of existing open standards is supported, reducing\nmarket fragmentation and increasing interoperability.\n\n## Challenge\n\nThe challenge: Using popular open source components at cloud service providers\ndoes not result in a consistent experience for their users, yet.\n\nEach part of the stack is consistent within its own scope: E.g. 
the\n[OpenStack Networking API](https://docs.openstack.org/api-ref/network/v2/) is sort of\nconsistent with the\n[OpenStack Load Balancer API](https://docs.openstack.org/api-ref/load-balancer/v2/).\n\nThe OpenStack API's share common idioms, such as the AuthN/AuthZ\n(Authentication/Authorization) mechanisms used. But these are not applicable beyond\nOpenStack services.\n\nEntering general IAM (Identity and Access Management), Keycloak has its own set of\nAPI endpoints and authentication flows.  \nEntering Kubernetes, CAPI ([Kubernetes Cluster API](https://cluster-api.sigs.k8s.io/))\nuses the Kubernetes API with its own authentication configuration, RBAC (Role Based\nAccess Control) and opinionated resource management idioms.\n\nSo, without a central API harmonizing at least the semantics of AuthN/AuthZ and\nresource management, users are left with a bunch of semantically incompatible API's.\nIf resources in different API's are somehow interconnected, users have to take\ncare of bridging these differences themselves.\n\nProviding a consistent API across many different offerings with somewhat consistent\nAPI idioms is something that primarily the big proprietary cloud providers manage to\ndo. 
And while that serves users well in that regard, it also serves as an effective\nvendor lock-in feature.\n\n## Alternatives and other potential solutions\n\n\u003cdetails\u003e\u003csummary\u003eSelf-Service solution\u003c/summary\u003e\n\n### Self-Service solution\n\nFor users that want to avoid such vendor lock-in without spending much\ntime bridging technologies manually, the best bet would probably be to set up\ninfrastructure-as-code (IaC) tooling (such as OpenTofu, Terraform and the like)\nwith a number of specialized providers, bringing all their interdependent resources into\na single place that keeps track of relationships between resources across multiple API's.\nCaveat: Infrastructure-as-code tooling gets admin access, while RBAC for human access is still\ninconsistent.\nOrganizations with a lot of time/money to spend are probably able/willing to build/buy\nthemselves out of this situation, but that is not a solution for everyone.\n\nAlso an option, especially for smaller setups: Just accept the differences between\nAPI's and use the automation tooling that seems most native to each API. For example,\nTerraform or Ansible for OpenStack VM's, ArgoCD/Flux/... for Kubernetes CAPI resources\nand workload resources. 
The trade-off would be choosing between the full power of\nall offered cloud resources (and integrating these as a user) or just using a few of them,\nlike only Kubernetes-as-a-Service (and building the rest as a user).\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\u003csummary\u003ePromote OpenStack API to primary SCS standard\u003c/summary\u003e\n\n### Promote OpenStack API to primary SCS standard\n\nAs established [above](#Challenge), OpenStack already provides consistent\nAPI's with shared idioms and AuthN/AuthZ mechanisms - not only for pure\n[\"compute/VM\"](https://docs.openstack.org/api-ref/compute) solutions,\nbut for\n[managing Kubernetes clusters via \"Magnum\"](https://docs.openstack.org/api-ref/container-infrastructure-management),\nas well. While the default Magnum implementation did not seem to live\nup to expectations yet, new implementations using Cluster API seem to\ngrow in [popularity](https://www.stackhpc.com/magnum-clusterapi.html).\n\nThis is not the pursued solution, as:\n\n- Kubernetes API's are generally more widespread and popular.\n- The Kubernetes ecosystem provides more extensibility tooling for adding more services\n  of any kind.\n- This would not be well aligned with SCS's\n  [technological vision](https://scs.community/about/#technological-vision),\n  which does not envision OpenStack as the primary SCS service.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\u003csummary\u003eThe intuitive choice: Abstract away all the things!\u003c/summary\u003e\n\n### The intuitive choice: Abstract away all the things!\n\nThe ideal form of API: An API that is extremely consistent in itself, each resource\ndefined using consistent patterns and terminology, never leaking implementation details.\n\n- `OpenStack Compute Instance`? Very OpenStack specific, creating lock-in to OpenStack API's.\n- `SCS Instance`? 
Perfection, right?\n\nImagine CLI access like:\n\n```bash\nscs create project myproject\nscs create subnet --project=myproject mysubnet\nscs create k8s --project=myproject --subnet=mysubnet mykubernetes\n```\n\nImagine using a Terraform provider like:\n\n```hcl\nresource \"scs_project\" \"myproject\" {\n  name = \"myproject\"\n}\nresource \"scs_subnet\" \"mysubnet\" {\n  project = scs_project.myproject.name\n  name    = \"mysubnet\"\n  # ...\n}\nresource \"scs_kubernetes\" \"mykubernetes\" {\n  project = scs_project.myproject.name\n  subnet  = scs_subnet.mysubnet.name\n  name    = \"mykubernetes\"\n  # ...\n}\n```\n\nImagine a Crossplane provider (or DIY similar Kubernetes controller framework) like:\n\n```yaml\napiVersion: management.scs.community/v1\nkind: Project\nmetadata:\n  name: myproject\nspec:\n  # ...\n---\napiVersion: networking.scs.community/v1\nkind: Subnet\nmetadata:\n  name: mysubnet\nspec:\n  forProvider:\n    projectRef:\n      name: myproject\n  # ...\n---\napiVersion: kubernetes.scs.community/v1\nkind: Kubernetes\nmetadata:\n  name: mykubernetes\nspec:\n  forProvider:\n    projectRef:\n      name: myproject\n    subnetRef:\n      name: mysubnet\n  # ...\n```\n\nThis is obviously desirable from a user's perspective.\nHowever, unfortunately, it is also much more work than the SCS project can\nrealistically build and maintain in the short or medium term.\n\nIt also comes with the requirement to make many tough trade-off decisions.\nFor example:\n\nProvider \"A\" offers to hide Kubernetes API server endpoints from the public\ninternet, utilizing some sort of bastion host. Provider \"B\" instead implements\nIP based firewall blocking on the public endpoints. Provider \"C\" does neither.  \nShould the API follow either provider \"A\" or \"B\"? Should both approaches be\nimplemented, but as optional features? 
If any of these approaches is defined\nto be a mandatory feature to support, provider \"C\" cannot be compliant.\n\nAny choice brings significant disadvantages:\n\n- If such features are included as features that are mandatory to support,\n  some providers may have difficulties adopting the API.\n- If such features are included as optional features, the ability to migrate\n  from one provider to another suffers significantly. Without this ability,\n  but with the added complexity of an abstraction layer rewriting patterns and\n  names of resources/attributes, users also may opt to use provider-specific\n  API's, instead.\n- If such features are excluded, the API becomes overall less useful for the\n  users, who may opt to use more powerful provider-specific API's, instead.\n\nAs such, making decisions with these trade-offs in mind is not about finding\nthe perfect solution for everyone, but \"just the right\" balance that is\npracticable for providers and valuable for users. Finding these optimal\nbalances may cost even more time than actually implementing\nthem in code.\n\nIn sum: Going this route would be technically the best thing to do, yet does\nnot seem feasible given tough trade-offs and limited resources.  
\nIf the opportunity arises to partner with some other organization with a lot\nof staff and resources, this option may be reevaluated, though.\n\n\u003c/details\u003e\n\n## The chosen approach to pursue\n\n```mermaid\nflowchart TB\n    subgraph \"With central API (simplified)\"\n        User2{\"User\"}\n        subgraph \"provider responsibility\"\n            CentralAPI[\"Central API\"]\n            OpenStack2[\"OpenStack API\"]\n            Keycloak2[\"Keycloak API\"]\n            CAPI2[\"Cluster API\"]\n        end\n\n        User2\n            -- uses --\u003e K8sTooling2[\"kubectl/\\nargocd/flux/...\"]\n        K8sTooling2 -- calls --\u003e CentralAPI\n        CentralAPI -- calls --\u003e OpenStack2\n        CentralAPI -- calls --\u003e Keycloak2\n        CentralAPI -- calls --\u003e CAPI2\n    end\n    subgraph \"Without central API (simplified)\"\n        User1{\"User\"}\n        subgraph \"provider responsibility\"\n            OpenStack1[\"OpenStack API\"]\n            Keycloak1[\"Keycloak API\"]\n        end\n        CAPI1[\"Cluster API\"]\n\n        User1\n            -- uses --\u003e OpenStackCLI1[\"OpenStackCLI/OpenStackUI/\\nTerraform/Ansible/...\"]\n            -- calls --\u003e OpenStack1\n        User1\n            -- uses --\u003e KeycloakCLI1[\"KeycloakCLI/KeycloakUI/\\nTerraform/Ansible/...\"]\n            -- calls --\u003e Keycloak1\n        User1\n            -- uses --\u003e K8sTooling1[\"kubectl/\\nargocd/flux/...\"]\n            -- calls --\u003e CAPI1\n    end\n```\n\nGoal: **Provide a \"semantically\" consistent API modelling most cloud resources\nthat are in scope for SCS**.\n\nIn other words: Bring each cloud resource type - as it is - into the central API.\n\nAn `OpenStack Compute Instance` continues to be as-is with all of its usual\nproperties and implementation details.  
\nA `Keycloak Realm` continues to be as-is with all of its usual properties\nand implementation details.\n\nThat is not to say that abstractions are ruled out as further steps.\nThere have already been discussions about that: regarding IAM management [^1]\nand Kubernetes management [^2].\n\nHowever, the **main** benefit is that all offered API objects can be managed\nusing the same API idioms (AuthN/AuthZ/REST) with the same client tooling [^3].\n\n[^1]: There were discussions to build a generic SCS API to support\nSCS installations powered by Zitadel. This approaches the issue a little\nbit like the \"Abstract away all the things!\" consideration above, but focuses\non two basic use cases (firstly, setting up an identity federation to some\nexisting identity provider; secondly, managing users without a remote identity\nprovider). While not in scope for the first steps, this probably could be\nelegantly implemented as one generic Crossplane \"Composite Resource Definition\"\nbacked by a Crossplane \"Composition\" defining either Keycloak objects OR\nZitadel objects (given that Zitadel gets a Crossplane provider or a similar\nKubernetes controller first).\n\n[^2]: In order to cover providers that use Gardener, a generic Crossplane\n\"Composite Resource Definition\" like in [^1] may be created. Alternatively,\nGardener CRD's could maybe just be mirrored in their Central API instance,\nstill creating an interoperability benefit through \"semantic\" compatibility.\n\n[^3]: Which is also not to say that providers will be advised to disable\ntheir public OpenStack/Keycloak/... API's, preventing use of native\nOpenStack/Keycloak/... tooling and breaking existing solutions.\nExtensively using these API's together with the central API may compromise\nthe benefits of its uniform AuthZ, though.\n\n### Kubernetes API\n\nInstead of creating SCS-specific API idioms and building the implementation\nfrom scratch, the Kubernetes API will be \"reused\". 
Essentially, the Kubernetes\nAPI is just an opinionated REST API: it has opinions on how a resource\nis defined, what it looks like, how it is reconciled/handled and how AuthN/AuthZ\ncan be implemented. The Kubernetes ecosystem provides a lot of tooling for working\nwith such (custom) resource definitions: for creating the definitions\nthemselves, building controllers, and making them discoverable and deployable.\n\nAs such, Kubernetes is a great choice for building any sort of resource\nmanagement API - with some caveats regarding its deployment and the legacy\nof starting off as container orchestration tooling.\n\n### Crossplane tooling\n\nCrossplane extends the Kubernetes API with\n\"[Compositions](https://docs.crossplane.io/v1.14/concepts/compositions/)\" and\n\"[Composite Resource Definitions](https://docs.crossplane.io/v1.14/concepts/composite-resource-definitions/)\"\n(XRD) to make Kubernetes the base for platform engineering within organizations.\n\nAdditionally, it provides API machinery to bring any cloud resource into Kubernetes\nusing backend-specific \"providers\" (roughly comparable to Terraform providers).\nAs such, Crossplane with its provider ecosystem has already done most of\nthe heavy lifting for providing e.g. OpenStack or Keycloak resources inside of Kubernetes.\n\nOn top of that, the platform engineering concepts in Crossplane make building multi-tenancy\nsystems pretty straightforward, even for\n[single clusters](https://docs.crossplane.io/knowledge-base/guides/multi-tenant/#single-cluster-multi-tenancy).\n\nAlright. Crossplane takes care of exposing OpenStack resources and does some\nfancy stuff regarding multi-tenancy. 
What about providing actual Kubernetes\n**workload** clusters?\n\n### Cluster stacks / Cluster API\n\n[Cluster stacks](https://github.com/SovereignCloudStack/cluster-stacks) do\n[not replace the use of Cluster API](https://github.com/SovereignCloudStack/cluster-stack-operator/blob/adb648ceaebddca04a015fbea0319110ca99a5cc/docs/architecture/user-flow.md#recap---how-do-cluster-api-and-cluster-stacks-work-together).\nInstead, they complement Cluster API by providing `ClusterClasses`, node\nimages (if required) and workload cluster addons.\n\nIt is still to be determined how to bring multi-tenancy concepts from Crossplane\ninto ClusterStacks/CAPI, if that is required at all.\n\nShould the provider be responsible for creating `ClusterClasses`?\nIf yes, enforcing some parameters via a `ClusterClass` may already be enough\nto provide multi-tenancy. That is to be determined, though.\n\n## Implementation\n\nDisregarding any potential further abstractions, most automation work for\nthe providers will be about installing the central API and securely distributing\ncredentials for backing services like OpenStack or Keycloak.  \nThere is no production implementation for that yet. See\n[the POC](./docs/poc-setup.md) for inspiration for now. It includes access to an OpenStack API\nthrough Kubernetes/Crossplane.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsovereigncloudstack%2Fcentral-api","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fsovereigncloudstack%2Fcentral-api","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsovereigncloudstack%2Fcentral-api/lists"}