# TensorFlow Ecosystem

This repository contains examples for integrating TensorFlow with other
open-source frameworks. The examples are minimal and intended for use as
templates.
Users can tailor the templates for their own use-cases.

If you have any additions or improvements, please create an issue or pull
request.

## Contents

- [docker](docker) - Docker configuration for running TensorFlow on
  cluster managers.
- [kubeflow](https://github.com/kubeflow/kubeflow) - A Kubernetes-native platform for ML
  * A K8s custom resource for running distributed [TensorFlow jobs](https://github.com/kubeflow/kubeflow/blob/master/user_guide.md#submitting-a-tensorflow-training-job)
  * Jupyter images for different versions of TensorFlow
  * [TFServing](https://github.com/kubeflow/kubeflow/blob/master/user_guide.md#serve-a-model-using-tensorflow-serving) Docker images and K8s templates
- [kubernetes](kubernetes) - Templates for running distributed TensorFlow on
  Kubernetes.
- [marathon](marathon) - Templates for running distributed TensorFlow using
  Marathon, deployed on top of Mesos.
- [hadoop](hadoop) - TFRecord file InputFormat/OutputFormat for Hadoop MapReduce
  and Spark.
- [spark-tensorflow-connector](spark/spark-tensorflow-connector) - Library for reading and
  writing TensorFlow records (TFRecords) as Spark DataFrames.
- [spark-tensorflow-distributor](spark/spark-tensorflow-distributor) - Python package that helps
  users do distributed training with TensorFlow on their Spark clusters.

## Distributed TensorFlow

See the [Distributed TensorFlow](https://www.tensorflow.org/deploy/distributed)
documentation for a description of how it works. The examples in this
repository focus on the most common form of distributed training: between-graph
replication with asynchronous updates.

### Common Setup for distributed training

Every distributed training program has some common setup. First, define flags so
that the worker knows about other workers and knows what role it plays in
distributed training:

```python
# Flags for configuring the task
flags.DEFINE_integer("task_index", None,
                     "Worker task index, should be >= 0. task_index=0 is "
                     "the master worker task that performs the variable "
                     "initialization.")
flags.DEFINE_string("ps_hosts", None,
                    "Comma-separated list of hostname:port pairs")
flags.DEFINE_string("worker_hosts", None,
                    "Comma-separated list of hostname:port pairs")
flags.DEFINE_string("job_name", None, "job name: worker or ps")
```

Then, start your server. Since workers and parameter servers (ps jobs) usually
share a common program, a parameter server should stop at this point, so it
joins the server and blocks.

```python
# Construct the cluster and start the server
ps_spec = FLAGS.ps_hosts.split(",")
worker_spec = FLAGS.worker_hosts.split(",")

cluster = tf.train.ClusterSpec({
    "ps": ps_spec,
    "worker": worker_spec})

server = tf.train.Server(
    cluster, job_name=FLAGS.job_name, task_index=FLAGS.task_index)

if FLAGS.job_name == "ps":
  server.join()
```

Afterwards, your code varies depending on the form of distributed training you
intend to do. The most common form is between-graph replication.

### Between-graph Replication

In this mode, each worker separately constructs the exact same graph. Each
worker then runs the graph in isolation, sharing only gradients with the
parameter servers. This setup is illustrated by the following diagram. Please
note that each dashed box indicates a task.

![Diagram for Between-graph replication](images/between-graph_replication.png "Between-graph Replication")

You must explicitly set the device before graph construction for this mode of
training.
The following code snippet from the
[Distributed TensorFlow tutorial](https://www.tensorflow.org/deploy/distributed)
demonstrates the setup:

```python
with tf.device(tf.train.replica_device_setter(
    worker_device="/job:worker/task:%d" % FLAGS.task_index,
    cluster=cluster)):
  # Construct the TensorFlow graph.

# Run the TensorFlow graph.
```

### Requirements To Run the Examples

To run our examples, [Jinja templates](http://jinja.pocoo.org/) must be installed:

```sh
# On Ubuntu
sudo apt-get install python-jinja2

# On most other platforms
sudo pip install Jinja2
```

Jinja is used for template expansion. There are also framework-specific
requirements; please refer to the README of each framework.
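Taken together, the common-setup steps above (flag definition, cluster construction, and worker-device selection) can be sketched without TensorFlow installed. The snippet below is a minimal sketch that uses the standard library's `argparse` in place of TensorFlow's flags module; it builds the same dict shape that `tf.train.ClusterSpec` consumes and the same device string that `tf.train.replica_device_setter` is given. The helper name `build_cluster_config` is hypothetical, not part of this repository.

```python
import argparse

def build_cluster_config(argv):
    """Parse the distributed-training flags and return the cluster layout.

    Hypothetical helper: argparse stands in for TensorFlow's flags module.
    """
    parser = argparse.ArgumentParser()
    parser.add_argument("--task_index", type=int, required=True,
                        help="Worker task index, should be >= 0. task_index=0 "
                             "is the master worker task that performs the "
                             "variable initialization.")
    parser.add_argument("--ps_hosts", required=True,
                        help="Comma-separated list of hostname:port pairs")
    parser.add_argument("--worker_hosts", required=True,
                        help="Comma-separated list of hostname:port pairs")
    parser.add_argument("--job_name", choices=["worker", "ps"], required=True,
                        help="job name: worker or ps")
    args = parser.parse_args(argv)

    # Same dict shape that tf.train.ClusterSpec({...}) consumes.
    cluster = {
        "ps": args.ps_hosts.split(","),
        "worker": args.worker_hosts.split(","),
    }
    # Device string passed to tf.train.replica_device_setter for this worker.
    worker_device = "/job:worker/task:%d" % args.task_index
    return cluster, worker_device, args.job_name

# Example: the same flags a worker would receive on the command line, e.g.
#   python trainer.py --job_name=worker --task_index=0 \
#       --ps_hosts=ps0:2222 --worker_hosts=w0:2222,w1:2222
cluster, device, job = build_cluster_config(
    ["--job_name", "worker", "--task_index", "0",
     "--ps_hosts", "ps0:2222", "--worker_hosts", "w0:2222,w1:2222"])
```

In a real trainer, `cluster` would be handed to `tf.train.ClusterSpec` and `device` to `tf.train.replica_device_setter`, as in the snippets above.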