# 🐒 Plato

_This is a draft of the plan for Plato. Several technologies are proposed, and more are coming
soon._

Plato is a collection of _integrated_ applications, services, tools and guidelines written and managed
as a _single unit_. Plato provides both a foundation for your project and a reference implementation
of several applications used by Paralect.

Plato is committed to staying modern and up to date. Bleeding-edge technologies are also welcome,
as long as reliability and stability are not sacrificed. Because the landscape of tools, frameworks and
technologies constantly changes and evolves, someone should track and prepare the way for the changes.
This is the role that Plato will try to play. When it is time to move forward, you will receive
upgrade guidelines and all required documentation.

📢 Plato serves as a communication channel and experimentation playground for all Paralect teammates.
Any proposal, suggestion, fix or implementation is welcome!

## Overview

1. Monorepo. All platforms and tech stacks inside one repo.
2. Trunk-based development. Branches only for releases (almost).
3. Git LFS for large files (like PSD, MP4, etc.)
4. Third-party software can be either open source or proprietary
5. Third-party software can be either self-hosted or available in the cloud
6. Paralect-specific apps and tools are also part of Plato

# Documentation

|📖 [Documentation](docs/README.md)|
|---------------------------------|

# Technology Proposal

_This is a preliminary proposal that changes every day. Share your
opinion or vote for a particular technology._

In two sentences: _"Google architecture for infrastructure. Facebook architecture for applications."_

🔹**Core Technologies**

1. **[DigitalOcean](https://www.digitalocean.com/)** as IaaS Provider with deep integration via its [API](https://developers.digitalocean.com/documentation/). In the future, consider [AWS](https://aws.amazon.com/), [Azure](https://azure.microsoft.com/en-us), [Google Cloud Platform](https://cloud.google.com) and [OpenStack](https://www.openstack.org/), in a yet unknown order.
2. **[Kubernetes](http://kubernetes.io/)** for Cluster and Container Management
3. **[GlusterFS](https://www.gluster.org/) or [Torus](https://github.com/coreos/torus)** as Network Filesystem ([SDS](https://en.wikipedia.org/wiki/Software-defined_storage))
4. **[CoreOS](https://coreos.com/)** as Cluster Operating System
5. **[systemd](https://freedesktop.org/wiki/Software/systemd)** as Linux Init System
6. **[Docker](https://www.docker.com/)** as Container Runtime
7. **[Prometheus](https://prometheus.io)** as Container Cluster Monitoring: instrumentation, collection, querying, and alerting.
8. **[Alpine](http://www.alpinelinux.org/)** as Linux Distribution for Docker Images (if possible)

🔹**Infrastructure Development**

1. **[Go](https://golang.org/)** as the preferred language, where it makes sense. Any choice is permitted.

🔹**Application Development**

1. **Single JavaScript Specification** across all engines (V8, Nashorn, SpiderMonkey, Chakra, Nitro) and use cases (Browser, Desktop, Mobile, Server)
2. **ES6/ES7** as Language Dialect (TODO: specify more strictly in a form of Babel config flags)
3. **[Babel](https://babeljs.io/)** as Transpiler
4. **[Flow](https://flowtype.org/)** as Typechecker
5. **[React](https://facebook.github.io/react/)** and **[React Native](https://facebook.github.io/react-native/)** for UI
6. **Linux** and **OS X** as supported development platforms from day one. Windows in a few months.

🐥 Nothing has been finally selected! This is a proposal from which to start investigations. Share
your ideas!

# Principles and goals

1. Container-centric infrastructure. The user creates and manages containers, never physical or virtual machines. In terms of DigitalOcean, for example, this means that Plato creates and deletes droplets via the [DigitalOcean API](https://developers.digitalocean.com/documentation).
2. Container-centric development. The developer consumes databases and tools wrapped in containers. Instead of `apt-get install mongodb`, the developer should run `docker pull mongodb`.
3. Managing storage is a distinct problem from managing compute. There should be a clear separation of how storage is provided from how it is consumed. On commodity hardware this implies the use of network filesystems like GlusterFS, Ceph, Torus, etc.
4. The Plato cluster is controlled via a REST API by the `plato` command and a web UI.
5. A single command deploys and provisions a fully functional Plato cluster.
6. One process per container.

# Things to watch out for

Here we keep an eye on different technologies that are potential candidates for adoption.
Things evolve rapidly, and the following list should also be kept up to date.

Regardless of today's choice, we should constantly track the most notable technologies and services.

#### 🌏 **IaaS Providers**

Ideally, we should support all major providers :) Today we need at least one.

1. **[DigitalOcean](https://www.digitalocean.com/)** is one of the simplest and most affordable providers, with clear pricing and an [API](https://developers.digitalocean.com/documentation/). The only downside is limited functionality compared with more mature IaaS platforms.
Slowly, but they are progressing. On July 13, 2016 they released [Block Storage](https://www.digitalocean.com/company/blog/block-storage-more-space-to-scale/), which will try to compete with [EBS](https://aws.amazon.com/ebs/), [GPS](https://cloud.google.com/compute/docs/disks/) and similar offerings. Not sure that DigitalOcean has everything we need, but it is worth at least trying.
2. [AWS](https://aws.amazon.com/), [Azure](https://azure.microsoft.com/en-us) and [Google Cloud Platform](https://cloud.google.com) are the most mature platforms. Each is considered a safe bet.
3. **[OpenStack](https://www.openstack.org/)** is a platform for deploying and managing private or public IaaS clusters. Usually installed on bare-metal machines.

#### 🚉 **Network Filesystem**

Major platforms provide some form of [Software-defined Storage](https://en.wikipedia.org/wiki/Software-defined_storage): Amazon Elastic Block Storage, Google Persistent Disk and DigitalOcean Block Storage. But some still run a network filesystem on top of these storages: [NASA Jet Propulsion Laboratory Case Study](https://aws.amazon.com/solutions/case-studies/nasa-jpl-curiosity/).

We need a network filesystem to abstract storage from compute nodes. More specifically, we need to provide persistent storage for containers.

1. **GlusterFS**. Production ready. Used by the OpenShift platform.
2. **CephFS**. Production ready as of [April 21, 2016](http://thenewstack.io/converging-storage-cephfs-now-production-ready).
3. **Torus**. Created and supported by the CoreOS team. Claims to be a "[cloud-native and modern distributed storage](https://coreos.com/blog/torus-distributed-storage-by-coreos.html)". Not ready for production; early days.
Still, it makes sense to give Torus a try.
4. **Flocker.**

#### 🚀 **Cluster and Container Management**
1. **Kubernetes.** Originates from Google,
based on the in-house cluster manager [Borg](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43438.pdf). Kubernetes is considered production ready and is available as a cloud service: Google Container
Engine. Red Hat's OpenShift is also based on this technology. Kubernetes was designed from a "container"
point of view from the beginning.
2. **Mesos.** Known and battle-proven on large-scale clusters at Twitter, Facebook and LinkedIn.
3. **DC/OS.** On April 19, 2016 the [DC/OS][2] project [was announced][1] as a compilation of Mesos, Marathon and the
Mesosphere Datacenter Operating System. Partners include such companies as Microsoft, Cisco,
Confluent, HP, Citrix and Autodesk.
4. **Docker Swarm**. Released on June 20, 2016 with [Docker 1.12](https://blog.docker.com/2016/06/docker-1-12-built-in-orchestration). "Native" clustering for Docker.
5. **Fleet**.

#### 💻 **Container Operating System**

There's been an explosion of new container-centric operating systems. We need at least one in the
beginning.

1. **CoreOS**. The first container-optimized OS; the first alpha release was in July 2013. Based on principles from Google Chrome OS. No package management is provided by the OS: libraries and packages are part of the application, developed using containers. The container runtime, SSH and the kernel are the primary components. Every process is managed by systemd. CoreOS machines are managed at the cluster level rather than at the individual machine level.
2. **Ubuntu Snappy**.
3. **Red Hat Project Atomic**.
4. **RancherOS**.
5. **VMware Photon**.
6. **Microsoft Nano Server**.

#### 🔲 **Container Runtime**

1. **Docker.**
2. **Rocket (rkt).** Rkt is the container runtime developed by CoreOS. Rkt does not have a daemon and is
managed by `systemd`.
Rkt uses the Application Container Image (ACI) format,
which follows the [Appc specification](https://github.com/appc/spec). Rkt's
execution is split into three stages. This approach was taken so that some of the stages can
be replaced by a different implementation if needed. Rkt can also
[run Docker images](https://github.com/coreos/rkt/blob/master/Documentation/running-docker-images.md).
3. **Appc.** Not a runtime but a "well-specified and community developed specification for application containers". Started
by CoreOS, but it initially [gained support](https://www.infoq.com/news/2015/05/appc-spec-gains-support) from Google, Apcera, Red Hat and VMware.

#### 📡 **Networking**

Virtual networks that are portable across data centers and public clouds.

1. **[Flannel](https://coreos.com/flannel/docs/latest/).** Virtual network maintained by the CoreOS project.
2. **[Calico](https://www.projectcalico.org).**
3. **[Weave](https://www.weave.works/).** Simple, resilient multi-host Docker networking.

#### 🚥 **Service Discovery and Configuration**

Distributed, consistent key-value stores for shared configuration and service discovery.

1. **ZooKeeper.**
2. **etcd.**
3. **Consul.**

#### 🔀 **Load Balancers and Proxies**

1. **Nginx.**
2. **HAProxy.**

#### 📈 **Monitoring**

1. **[Prometheus](https://prometheus.io).** Monitoring system and time series database. Together with
Kubernetes, Prometheus is a member of the [Cloud Native Computing Foundation](https://cncf.io/projects).
Actively [promoted](https://coreos.com/blog/coreos-and-prometheus-improve-cluster-monitoring.html) by CoreOS.
2. **[cAdvisor](https://github.com/google/cadvisor).** Analyzes resource usage and performance characteristics of running containers. A Google project with native support for Docker containers.
3. **[Sysdig](http://www.sysdig.org/).** Sysdig is open-source, system-level exploration: capture system state and activity from a running Linux instance, then save, filter and analyze it.
4. **[NewRelic](https://newrelic.com/).** Commercial, but good :)

#### 📦 **Hardware Virtualization and Hypervisors**

For now, we are mostly interested in Type-2 hypervisors. They allow running a local cluster of unmodified operating systems on top of a conventional operating system. This is required mostly for low-level work at the infrastructure level.

1. **VirtualBox.** Open-source Type-2 hypervisor.
2. **Vagrant.** Not directly related to virtualization, but provides convenient tools to manage local virtual machines.
3. **Hyper-V** and **VMware ESXi**. Commercial Type-1 hypervisors.
4. **VMware Workstation/Fusion.** Commercial Type-2 hypervisors.

#### 🌀 **UNIX init systems**

The init system is important because it is built-in, "OS-native" functionality for managing the lifetime of processes
and, with modern init systems, even containers. Beyond that, the init system defines the behaviour of the operating system
throughout its lifetime.

1. **[systemd](https://freedesktop.org/wiki/Software/systemd).** Released in 2010 by Red Hat developer [Lennart Poettering](https://en.wikipedia.org/wiki/Lennart_Poettering); today it is adopted by most major Linux distributions,
including Ubuntu, Debian, openSUSE, Oracle Linux, CoreOS, etc. The scope of this project is fascinating. The motivation
for a new init system is explained by Poettering in his [six-year-old article](http://0pointer.de/blog/projects/systemd.html).
2. **[Upstart](http://upstart.ubuntu.com).** Released in 2006 by Canonical. At some point it was adopted even by Red Hat,
but today it seems to be losing ground to systemd.
3. **[System V init](https://en.wikipedia.org/wiki/Init#SYSV).** The traditional UNIX System V init system that everybody
tries to replace :)

## Plan ideas

Three ways:
1. **Simple.** Start from a Kubernetes cluster managed by [Google Container Engine](https://cloud.google.com/container-engine/). Use services that build and distribute Docker images (like [Google Container Builder](https://cloud.google.com/container-builder), [Google Container Registry](https://cloud.google.com/container-registry), [Codeship](codeship.com), [Quay](https://quay.io/), etc.).
2. **Complex.** Integrate with an IaaS (like DigitalOcean) and deploy the Kubernetes cluster manually. Deploy our own Docker image builder and registry. Right now Kubernetes does not support the just-released DigitalOcean [Block Storage](https://www.digitalocean.com/products/storage/) as a Persistent Volume, which is required for stateful containers. This should be implemented, or some workaround like [Flocker](https://clusterhq.com/flocker) should be used. Alternatively, we can use [Google Compute Engine](https://cloud.google.com/compute) (this is not Container Engine), [AWS EC2](https://aws.amazon.com/ec2/) or any other IaaS supported by Kubernetes.
3. **Normal.** A mix of the previous two ways.

The complex way:

```
   ---------------
   |  Plato CLI  |
   ---------------
          |
          |
---------------------------------------
|  Plato REST API  |   Plato Web UI   |
---------------------------------------
|           Plato Cluster             |
---------------------------------------
```

1. Deploy a Kubernetes cluster with the help of the DigitalOcean API. Simplify it to the point where Kubernetes can be deployed with a single shell command. Deployment is complete when the Plato API service is running and publicly accessible via HTTP. Further communication with the cluster should use the Plato API, _not_ the Kubernetes API.
2. Create a service that automatically builds Docker images, stores them and plays the role of a Docker Registry. This service can be controlled via the Plato API: enable/disable/configure.
3. Create a simple web dashboard that allows controlling the cluster and viewing cluster information.
All needed functionality should be provided by the Plato API. This dashboard should be enabled by default with every Plato installation.

## Common infrastructure services and tools

Here are some of the most vital services and tools required
by any application. This list shows how tremendously broad the
functionality of Plato is. But no need to worry :) Today we have
access to well-established open-source cluster and container management
solutions that provide implementations of many of the listed features.

1. Configuration management
1. Cluster management
1. Service discovery
1. Metrics collection, aggregation and analysis
1. Logs collection, aggregation and analysis
1. Health checking
1. Load balancing
1. Rolling updates
1. Replication of application services
1. Resource usage monitoring
1. High availability and high scalability for all movable parts
1. Application deployment pipeline
1. Different types of queues (in-memory, Kafka, RabbitMQ)
1. Task and event scheduler
1. Scalable execution of unit tests
1. Scalable execution of WebDriver tests
1. Load and performance testing
1. Scalable processing of cold data (Hadoop)
1. Scalable processing of stream data (Heron, Samza, Spark)
1. High-speed distributed interactive analytics (Drill, Impala, Presto, Druid)
1. Deployment to top IaaS (maybe PaaS) services: AWS, Azure, DigitalOcean, etc.
1. Private container registry
1. _Quite a bit more..._

The root system that needs to be selected is a cluster and/or container management platform.

A couple of months ago, the [DC/OS][2] project [was announced][1] as a compilation of Mesos, Marathon and the
Mesosphere Datacenter Operating System. Partners include such companies as Microsoft, Cisco,
Confluent, HP, Citrix and Autodesk. Today Mesos can run Kubernetes and YARN, but the reverse is
not typical.
Mesos also integrates well with IaaS stacks such as OpenStack, CloudStack, etc.
We also already have experience with Mesos+Marathon at Paralect. This naturally leads to the selection
of DC/OS or Mesos as the foundation of Plato.

Another contender already mentioned is [Kubernetes](http://kubernetes.io). The project originates from Google,
based on the in-house cluster manager [Borg](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43438.pdf). Kubernetes is considered production ready and is available as a cloud service: Google Container
Engine. Red Hat's OpenShift is also based on this technology. Kubernetes was designed from a "container"
point of view from the beginning, while Mesos went through some evolution in order to support containers.

The next option is to construct a container management solution from lower-level components, like etcd,
systemd and fleet. The most complex way.

Either way, things can change, and when they do we should not cry, but instead gradually
adopt the _next generation solution™_.

## Consumer-oriented products

Besides common infrastructure services and frameworks, Plato will
consist of at least the following consumer-oriented products:

1. Paralect internal and external sites, apps, mobile apps, services and tools
2. Robomongo site and services

## Shared cluster for PaaS services

Some services are readily consumable by any application, even when the application
is not part of Plato. For example, a scheduler that communicates over HTTP,
image processing services, grid computation, WebDriver tests, etc.

This means that Plato should be always running, always available and always accessible
to Paralect teammates and automated processes.

## Userland

Application-specific logic is written in JavaScript. All Plato services, frameworks and
tools are consumable via a JavaScript API.
1. Single JavaScript specification across all engines (V8, Nashorn, SpiderMonkey, Chakra, Nitro) and use cases (Browser, Desktop, Mobile, Server)
2. ES6 and ES7 (gradually upgraded)
3. Babel as transpiler
4. Flow as typechecker
5. React and React Native for UI
6. Support development on OS X and Linux hosts; Windows if time and demand allow.

[1]: https://mesosphere.com/blog/2016/04/19/open-source-dcos/
[2]: https://dcos.io/
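To make the proposed userland style concrete, here is a minimal sketch: ES6 syntax with Flow's comment-based type annotations, which plain JavaScript engines ignore, so the file also runs unchanged without Babel. The `ServiceSpec` shape is an illustrative assumption, not a real Plato API.

```javascript
// Hypothetical userland module in the proposed style: ES6 features
// (classes, default parameters, arrow functions, template literals)
// plus Flow annotations in comment form, checkable by `flow` but
// invisible to the engine.

/*:: type Service = { name: string, replicas: number }; */

class ServiceSpec {
  constructor(name /*: string */, replicas /*: number */ = 1) {
    this.name = name;
    this.replicas = replicas;
  }

  // Template literal rendering a human-readable summary.
  describe() /*: string */ {
    return `${this.name} x${this.replicas}`;
  }
}

// Arrow functions and Array methods over a list of specs.
const specs = [new ServiceSpec('api', 3), new ServiceSpec('worker')];
const summary = specs.map((s) => s.describe()).join(', ');

console.log(summary); // → "api x3, worker x1"

module.exports = { ServiceSpec, specs, summary };
```

The same file would pass through Babel for older engines and through Flow for type checking, matching items 2–4 of the list above.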