{"id":13906980,"url":"https://github.com/tensorflow/cloud","last_synced_at":"2025-05-14T11:09:37.377Z","repository":{"id":42985631,"uuid":"239587597","full_name":"tensorflow/cloud","owner":"tensorflow","description":"The TensorFlow Cloud repository provides APIs that will allow to easily go from debugging and training your Keras and TensorFlow code in a local environment to distributed training in the cloud.","archived":false,"fork":false,"pushed_at":"2025-01-29T14:59:50.000Z","size":1872,"stargazers_count":378,"open_issues_count":75,"forks_count":91,"subscribers_count":27,"default_branch":"master","last_synced_at":"2025-04-13T03:59:24.036Z","etag":null,"topics":["cloud","gcp","keras","tensorflow"],"latest_commit_sha":null,"homepage":"https://github.com/tensorflow/cloud","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tensorflow.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":"CODEOWNERS","security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2020-02-10T18:51:59.000Z","updated_at":"2025-02-27T09:32:26.000Z","dependencies_parsed_at":"2023-01-30T17:01:31.511Z","dependency_job_id":"6c2a01aa-e4a7-4881-bce3-1324457b195a","html_url":"https://github.com/tensorflow/cloud","commit_stats":{"total_commits":440,"total_committers":26,"mean_commits":"16.923076923076923","dds":0.6840909090909091,"last_synced_commit":"dfd9ca1fe2200d10584c09ab8c3b392c82871d99"},"previous_names":[],"tags_count":18,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Fcloud","tags_url":"https:
//repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Fcloud/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Fcloud/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Fcloud/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tensorflow","download_url":"https://codeload.github.com/tensorflow/cloud/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248661706,"owners_count":21141450,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cloud","gcp","keras","tensorflow"],"created_at":"2024-08-06T23:01:45.797Z","updated_at":"2025-04-13T03:59:30.107Z","avatar_url":"https://github.com/tensorflow.png","language":"Python","readme":"# TensorFlow Cloud\n\nThe TensorFlow Cloud repository provides APIs that will allow you to easily go\nfrom debugging, training, and tuning your Keras and TensorFlow code in a local\nenvironment to distributed training/tuning on Cloud.\n\n## Introduction\n\n-   [TensorFlow Cloud `run` API](https://github.com/tensorflow/cloud/blob/master/src/python/tensorflow_cloud/core/README.md)\n\n-   [TensorFlow Cloud Tuner](https://github.com/tensorflow/cloud/blob/master/src/python/tensorflow_cloud/tuner/README.md)\n\n## TensorFlow Cloud `run` API for GCP training/tuning\n\n### Installation\n\n#### Requirements\n\n-   Python \u003e= 3.6\n-   [A Google Cloud project](https://cloud.google.com/ai-platform/docs/getting-started-keras#set_up_your_project)\n-   An\n    [authenticated GCP 
account](https://cloud.google.com/ai-platform/docs/getting-started-keras#authenticate_your_gcp_account)\n-   [Google AI platform](https://cloud.google.com/ai-platform/) APIs enabled for\n    your GCP account. We use the AI platform for deploying docker images on GCP.\n-   Either a working\n    [docker](https://docs.docker.com/engine/install/) installation, if you want\n    to use a local docker process for your build, or a\n    [cloud storage bucket](https://cloud.google.com/ai-platform/docs/getting-started-keras#create_a_bucket)\n    to use with [Google Cloud build](https://cloud.google.com/cloud-build) for\n    docker image build and publishing.\n\n-   [Authenticate to your Docker Container Registry](https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud-helper)\n\n-   (optional) [nbconvert](https://nbconvert.readthedocs.io/en/latest/) if you\n    are using a notebook file as `entry_point` as shown in\n    [usage guide #4](#usage-guide).\n\nFor detailed end-to-end setup instructions, please see\n[Setup instructions](#setup-instructions).\n\n#### Install latest release\n\n```shell\npip install -U tensorflow-cloud\n```\n\n#### Install from source\n\n```shell\ngit clone https://github.com/tensorflow/cloud.git\ncd cloud\npip install src/python/.\n```\n\n### High level overview\n\nThe TensorFlow Cloud package provides the `run` API for training your models on\nGCP. To start, let's walk through a simple workflow using this API.\n\n1.  
Let's begin with Keras model training code such as the following, saved as\n    `mnist_example.py`.\n\n    ```python\n    import tensorflow as tf\n\n    (x_train, y_train), (_, _) = tf.keras.datasets.mnist.load_data()\n\n    x_train = x_train.reshape((60000, 28 * 28))\n    x_train = x_train.astype('float32') / 255\n\n    model = tf.keras.Sequential([\n      tf.keras.layers.Dense(512, activation='relu', input_shape=(28 * 28,)),\n      tf.keras.layers.Dropout(0.2),\n      tf.keras.layers.Dense(10, activation='softmax')\n    ])\n\n    model.compile(loss='sparse_categorical_crossentropy',\n                  optimizer=tf.keras.optimizers.Adam(),\n                  metrics=['accuracy'])\n\n    model.fit(x_train, y_train, epochs=10, batch_size=128)\n    ```\n\n1.  After you have tested this model in your local environment for a few epochs,\n    probably with a small dataset, you can train the model on Google Cloud by\n    writing the following simple script `scale_mnist.py`.\n\n    ```python\n    import tensorflow_cloud as tfc\n    tfc.run(entry_point='mnist_example.py')\n    ```\n\n    Running `scale_mnist.py` will automatically apply the TensorFlow\n    [one device strategy](https://www.tensorflow.org/api_docs/python/tf/distribute/OneDeviceStrategy)\n    and train your model at scale on Google Cloud Platform. Please see the\n    [usage guide](#usage-guide) section for detailed instructions and additional\n    API parameters.\n\n1.  You will see an output similar to the following on your console. 
This\n    information can be used to track the training job status.\n\n    ```shell\n    user@desktop$ python scale_mnist.py\n    Job submitted successfully.\n    Your job ID is:  tf_cloud_train_519ec89c_a876_49a9_b578_4fe300f8865e\n    Please access your job logs at the following URL:\n    https://console.cloud.google.com/mlengine/jobs/tf_cloud_train_519ec89c_a876_49a9_b578_4fe300f8865e?project=prod-123\n    ```\n\n### Setup instructions\n\nEnd-to-end instructions to help set up your environment for TensorFlow Cloud.\nYou can use one of the following notebooks to set up your project, or follow\nthe instructions below.\n\n\u003ctable align=\"left\"\u003e\n    \u003ctd\u003e\n        \u003ca href=\"https://colab.research.google.com/github/tensorflow/cloud/blob/master/examples/google_cloud_project_setup_instructions.ipynb\"\u003e\n            \u003cimg width=\"50\" src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"\u003eRun in Colab\n        \u003c/a\u003e\n    \u003c/td\u003e\n    \u003ctd\u003e\n        \u003ca href=\"https://github.com/tensorflow/cloud/blob/master/examples/google_cloud_project_setup_instructions.ipynb\"\u003e\n            \u003cimg src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\"\u003eView on GitHub\n        \u003c/a\u003e\n     \u003c/td\u003e\n    \u003ctd\u003e\n        \u003ca href=\"https://www.kaggle.com/nitric/google-cloud-project-setup-instructions\"\u003e\n            \u003cimg width=\"90\" src=\"https://www.kaggle.com/static/images/site-logo.png\" alt=\"Kaggle logo\"\u003eRun in Kaggle\n        \u003c/a\u003e\n     \u003c/td\u003e\n\u003c/table\u003e\n\n1.  Create a new local directory\n\n    ```shell\n    mkdir tensorflow_cloud\n    cd tensorflow_cloud\n    ```\n\n1.  Make sure you have `python \u003e= 3.6`\n\n    ```shell\n    python -V\n    ```\n\n1.  
Set up a virtual environment\n\n    ```shell\n    virtualenv tfcloud --python=python3\n    source tfcloud/bin/activate\n    ```\n\n1.  [Set up your Google Cloud project](https://cloud.google.com/ai-platform/docs/getting-started-keras#set_up_your_project)\n\n    Verify that the gcloud SDK is installed.\n\n    ```shell\n    which gcloud\n    ```\n\n    Set the default gcloud project.\n\n    ```shell\n    export PROJECT_ID=\u003cyour-project-id\u003e\n    gcloud config set project $PROJECT_ID\n    ```\n\n1.  [Authenticate your GCP account](https://cloud.google.com/ai-platform/docs/getting-started-keras#authenticate_your_gcp_account)\n\n    Create a service account.\n\n    ```shell\n    export SA_NAME=\u003cyour-sa-name\u003e\n    gcloud iam service-accounts create $SA_NAME\n    gcloud projects add-iam-policy-binding $PROJECT_ID \\\n        --member serviceAccount:$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com \\\n        --role 'roles/editor'\n    ```\n\n    Create a key for your service account.\n\n    ```shell\n    gcloud iam service-accounts keys create ~/key.json --iam-account $SA_NAME@$PROJECT_ID.iam.gserviceaccount.com\n    ```\n\n    Set the GOOGLE_APPLICATION_CREDENTIALS environment variable.\n\n    ```shell\n    export GOOGLE_APPLICATION_CREDENTIALS=~/key.json\n    ```\n\n1.  [Create a Cloud Storage bucket](https://cloud.google.com/ai-platform/docs/getting-started-keras#create_a_bucket).\n    Using [Google Cloud build](https://cloud.google.com/cloud-build) is the\n    recommended method for building and publishing docker images, although we\n    optionally allow using a local\n    [docker daemon process](https://docs.docker.com/config/daemon/#start-the-daemon-manually)\n    depending on your specific needs.\n\n    ```shell\n    BUCKET_NAME=\"your-bucket-name\"\n    REGION=\"us-central1\"\n    gcloud auth login\n    gsutil mb -l $REGION gs://$BUCKET_NAME\n    ```\n\n    (optional, for a local docker setup)\n\n    ```shell\n    sudo dockerd\n    ```\n\n1.  
Authenticate access to Google Cloud registry.\n\n    ```shell\n    gcloud auth configure-docker\n    ```\n\n1.  Install [nbconvert](https://nbconvert.readthedocs.io/en/latest/) if you plan\n    to use a notebook file `entry_point` as shown in\n    [usage guide #4](#usage-guide).\n\n    ```shell\n    pip install nbconvert\n    ```\n\n1.  Install latest release of tensorflow-cloud\n\n    ```shell\n    pip install tensorflow-cloud\n    ```\n\n### Usage guide\n\nAs described in the [high level overview](#high-level-overview), the `run` API\nallows you to train your models at scale on GCP. The\n[`run`](https://github.com/tensorflow/cloud/blob/master/src/python/core/run.py#L31)\nAPI can be used in four different ways. This is defined by where you are running\nthe API (Terminal vs IPython notebook), and your `entry_point` parameter.\n`entry_point` is an optional Python script or notebook file path to the file\nthat contains your TensorFlow Keras training code. This is the most important\nparameter in the API.\n\n```python\nrun(entry_point=None,\n    requirements_txt=None,\n    distribution_strategy='auto',\n    docker_config='auto',\n    chief_config='auto',\n    worker_config='auto',\n    worker_count=0,\n    entry_point_args=None,\n    stream_logs=False,\n    job_labels=None,\n    **kwargs)\n```\n\n1.  **Using a python file as `entry_point`.**\n\n    If you have your `tf.keras` model in a python file (`mnist_example.py`),\n    then you can write the following simple script (`scale_mnist.py`) to scale\n    your model on GCP.\n\n    ```python\n    import tensorflow_cloud as tfc\n    tfc.run(entry_point='mnist_example.py')\n    ```\n\n    Please note that all the files in the same directory tree as `entry_point`\n    will be packaged in the docker image created, along with the `entry_point`\n    file. It's recommended to create a new directory to house each cloud project\n    which includes necessary files and nothing else, to optimize image build\n    times.\n\n1.  
**Using a notebook file as `entry_point`.**\n\n    If you have your `tf.keras` model in a notebook file\n    (`mnist_example.ipynb`), then you can write the following simple script\n    (`scale_mnist.py`) to scale your model on GCP.\n\n    ```python\n    import tensorflow_cloud as tfc\n    tfc.run(entry_point='mnist_example.ipynb')\n    ```\n\n    Please note that all the files in the same directory tree as `entry_point`\n    will be packaged in the docker image created, along with the `entry_point`\n    file. Like the python script `entry_point` above, we recommend creating a\n    new directory to house each cloud project which includes necessary files and\n    nothing else, to optimize image build times.\n\n1.  **Using `run` within a python script that contains the `tf.keras` model.**\n\n    You can use the `run` API from within your python file that contains the\n    `tf.keras` model (`mnist_scale.py`). In this use case, `entry_point` should\n    be `None`. The `run` API can be called anywhere and the entire file will be\n    executed remotely. 
The API can be called at the end to run the script\n    locally for debugging purposes (possibly with fewer epochs and other flags).\n\n    ```python\n    import tensorflow_datasets as tfds\n    import tensorflow as tf\n    import tensorflow_cloud as tfc\n\n    tfc.run(\n        entry_point=None,\n        distribution_strategy='auto',\n        requirements_txt='requirements.txt',\n        chief_config=tfc.MachineConfig(\n                cpu_cores=8,\n                memory=30,\n                accelerator_type=tfc.AcceleratorType.NVIDIA_TESLA_T4,\n                accelerator_count=2),\n        worker_count=0)\n\n    datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)\n    mnist_train, mnist_test = datasets['train'], datasets['test']\n\n    num_train_examples = info.splits['train'].num_examples\n    num_test_examples = info.splits['test'].num_examples\n\n    BUFFER_SIZE = 10000\n    BATCH_SIZE = 64\n\n    def scale(image, label):\n        image = tf.cast(image, tf.float32)\n        image /= 255\n        return image, label\n\n    train_dataset = mnist_train.map(scale).cache()\n    train_dataset = train_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE)\n\n    model = tf.keras.Sequential([\n        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(\n            28, 28, 1)),\n        tf.keras.layers.MaxPooling2D(),\n        tf.keras.layers.Flatten(),\n        tf.keras.layers.Dense(64, activation='relu'),\n        tf.keras.layers.Dense(10, activation='softmax')\n    ])\n\n    model.compile(loss='sparse_categorical_crossentropy',\n                  optimizer=tf.keras.optimizers.Adam(),\n                  metrics=['accuracy'])\n    model.fit(train_dataset, epochs=12)\n    ```\n\n    Please note that all the files in the same directory tree as the python\n    script will be packaged in the docker image created, along with the python\n    file. 
It's recommended to create a new directory to house each cloud project\n    which includes necessary files and nothing else, to optimize image build\n    times.\n\n1.  **Using `run` within a notebook script that contains the `tf.keras` model.**\n\n    ![Image of colab](https://github.com/tensorflow/cloud/blob/master/images/colab.png)\n\n    In this use case, `entry_point` should be `None` and\n    `docker_config.image_build_bucket` must be specified to ensure the build\n    can be stored and published.\n\n    ### Cluster and distribution strategy configuration\n\n    By default, the `run` API takes care of wrapping your model code in a\n    TensorFlow distribution strategy based on the cluster configuration you have\n    provided.\n\n    ***No distribution***\n\n    CPU chief config and no additional workers\n\n    ```python\n    tfc.run(entry_point='mnist_example.py',\n            chief_config=tfc.COMMON_MACHINE_CONFIGS['CPU'])\n    ```\n\n    ***OneDeviceStrategy***\n\n    1 GPU on chief (defaults to `AcceleratorType.NVIDIA_TESLA_T4`) and no\n    additional workers.\n\n    ```python\n    tfc.run(entry_point='mnist_example.py')\n    ```\n\n    ***MirroredStrategy***\n\n    Chief config with multiple GPUs (`AcceleratorType.NVIDIA_TESLA_V100`).\n\n    ```python\n    tfc.run(entry_point='mnist_example.py',\n            chief_config=tfc.COMMON_MACHINE_CONFIGS['V100_4X'])\n    ```\n\n    ***MultiWorkerMirroredStrategy***\n\n    Chief config with 1 GPU and 2 workers each with 8 GPUs\n    (`AcceleratorType.NVIDIA_TESLA_V100`).\n\n    ```python\n    tfc.run(entry_point='mnist_example.py',\n            chief_config=tfc.COMMON_MACHINE_CONFIGS['V100_1X'],\n            worker_count=2,\n            worker_config=tfc.COMMON_MACHINE_CONFIGS['V100_8X'])\n    ```\n\n    ***TPUStrategy***\n\n    Chief config with 1 CPU and 1 worker with TPU.\n\n    ```python\n    tfc.run(entry_point=\"mnist_example.py\",\n            chief_config=tfc.COMMON_MACHINE_CONFIGS[\"CPU\"],\n            worker_count=1,\n            worker_config=tfc.COMMON_MACHINE_CONFIGS[\"TPU\"])\n    ```\n\n    Please note that TPUStrategy with TensorFlow Cloud works only with TF\n    version 2.1, as this is the latest version supported by\n    [AI Platform cloud TPU](https://cloud.google.com/ai-platform/training/docs/runtime-version-list#tpu-support).\n\n    ***Custom distribution strategy***\n\n    If you would like to take care of specifying distribution strategy in your\n    model code and do not want the `run` API to create a strategy, then set\n    `distribution_strategy` as `None`. This will be required for example when\n    you are using `strategy.experimental_distribute_dataset`.\n\n    ```python\n    tfc.run(entry_point='mnist_example.py',\n            distribution_strategy=None,\n            worker_count=2)\n    ```\n\n#### What happens when you call run?\n\nThe API call will encompass the following:\n\n1.  Making code entities such as a Keras script/notebook, **cloud and\n    distribution ready**.\n1.  Converting this distribution entity into a **docker container** with the\n    required dependencies.\n1.  **Deploy** this container at scale and train using TensorFlow distribution\n    strategies.\n1.  **Stream logs** and monitor them on hosted TensorBoard, manage checkpoint\n    storage.\n\nBy default, we will use the local docker daemon for building and publishing\ndocker images to Google container registry. Images are published to\n`gcr.io/your-gcp-project-id`. 
If you specify `docker_config.image_build_bucket`,\nthen we will use [Google Cloud build](https://cloud.google.com/cloud-build) to\nbuild and publish docker images.\n\nWe use [Google AI platform](https://cloud.google.com/ai-platform/) for deploying\ndocker images on GCP.\n\nPlease note that, when `entry_point` argument is specified, all the files in the\nsame directory tree as `entry_point` will be packaged in the docker image\ncreated, along with the `entry_point` file.\n\nPlease see `run` API documentation for detailed information on the parameters\nand how you can modify the above processes to suit your needs.\n\n### End to end examples\n\n```shell\ncd src/python/tensorflow_cloud/core\npython tests/examples/call_run_on_script_with_keras_fit.py\n```\n\n-   [Using a python file as `entry_point` (Keras fit API)](https://github.com/tensorflow/cloud/blob/master/src/python/tensorflow_cloud/core/tests/examples/call_run_on_script_with_keras_fit.py).\n-   [Using a python file as `entry_point` (Keras custom training loop)](https://github.com/tensorflow/cloud/blob/master/src/python/tensorflow_cloud/core/tests/examples/call_run_on_script_with_keras_ctl.py).\n-   [Using a python file as `entry_point` (Keras save and load)](https://github.com/tensorflow/cloud/blob/master/src/python/tensorflow_cloud/core/tests/examples/call_run_on_script_with_keras_save_and_load.py).\n-   [Using a notebook file as `entry_point`](https://github.com/tensorflow/cloud/blob/master/src/python/tensorflow_cloud/core/tests/examples/call_run_on_notebook_with_keras_fit.py).\n-   [Using `run` within a python script that contains the `tf.keras` model](https://github.com/tensorflow/cloud/blob/master/src/python/tensorflow_cloud/core/tests/examples/call_run_within_script_with_keras_fit.py).\n-   [Using cloud build instead of local docker](https://github.com/tensorflow/cloud/blob/master/src/python/tensorflow_cloud/core/tests/examples/call_run_on_script_with_keras_fit_cloud_build.py).\n-   [Run AutoKeras with 
TensorFlow Cloud](https://github.com/tensorflow/cloud/blob/master/src/python/tensorflow_cloud/core/tests/examples/call_run_within_script_with_autokeras.py).\n\n### Running unit tests\n\n```shell\npytest src/python/tensorflow_cloud/core/tests/unit/\n```\n\n### Local vs remote training\n\nThings to keep in mind when running your jobs remotely:\n\n[Coming soon]\n\n### Debugging workflow\n\nHere are some tips for fixing unexpected issues.\n\n#### Operation disallowed within distribution strategy scope\n\n**Error like**: Creating a generator within a strategy scope is disallowed,\nbecause there is ambiguity on how to replicate a generator (e.g. should it be\ncopied so that each replica gets the same random numbers, or 'split' so that\neach replica gets different random numbers).\n\n**Solution**: Passing `distribution_strategy='auto'` to `run` API wraps all of\nyour script in a TF distribution strategy based on the cluster configuration\nprovided. You will see the above error or something similar to it, if for some\nreason an operation is not allowed inside distribution strategy scope. To fix\nthe error, please pass `None` to the `distribution_strategy` param and create a\nstrategy instance as part of your training code as shown in\n[this](https://github.com/tensorflow/cloud/blob/master/src/python/tensorflow_cloud/core/tests/testdata/save_and_load.py)\nexample.\n\n#### Docker image build timeout\n\n**Error like**: requests.exceptions.ConnectionError: ('Connection aborted.',\ntimeout('The write operation timed out'))\n\n**Solution**: The directory being used as an entry point likely has too much\ndata for the image to successfully build, and there may be extraneous data\nincluded in the build. 
Reformat your directory structure such that the folder\n    which contains the entry point only includes files necessary for the current\n    project.\n\n#### Version not supported for TPU training\n\n**Error like**: There was an error submitting the job.Field: tpu_tf_version\nError: The specified runtime version '2.3' is not supported for TPU training.\nPlease specify a different runtime version.\n\n**Solution**: Please use TF version 2.1. See TPU Strategy in\n[Cluster and distribution strategy configuration section](#cluster-and-distribution-strategy-configuration).\n\n#### TF nightly build.\n\n**Warning like**: Docker parent image '2.4.0.dev20200720' does not exist. Using\nthe latest TF nightly build.\n\n**Solution**: If you do not provide the `docker_config.parent_image` param, then\nby default we use pre-built TF docker images as the parent image. If you do not\nhave TF installed on the environment where `run` is called, then the TF docker\nimage for the `latest` stable release will be used. Otherwise, the version of\nthe docker image will match the locally installed TF version. However, pre-built\nTF docker images aren't available for TF nightlies except for the latest. So, if\nyour local TF is an older nightly version, we upgrade to the latest nightly\nautomatically and raise this warning.\n\n#### Mixing distribution strategy objects.\n\n**Error like**: RuntimeError: Mixing different tf.distribute.Strategy objects.\n\n**Solution**: Please provide `distribution_strategy=None` when you already have\na distribution strategy defined in your model code. Specifying\n`distribution_strategy='auto'` will wrap your code in a TensorFlow\ndistribution strategy. 
This will cause the above error if a strategy\n    object is already used in your code.\n\n### Coming up\n\n-   Distributed Keras tuner support.\n\n## Contributing\n\nWe welcome community contributions; see [CONTRIBUTING.md](CONTRIBUTING.md) and,\nfor style help, the\n[Writing TensorFlow documentation](https://www.tensorflow.org/community/contribute/docs)\nguide.\n\n## License\n\n[Apache License 2.0](LICENSE)\n\n## Privacy Notice\n\nThis application reports technical and operational details of your usage of\nCloud Services in accordance with the Google privacy policy. For more\ninformation, please refer to https://policies.google.com/privacy. If you wish to\nopt out, you may do so by running\ntensorflow_cloud.utils.google_api_client.optout_metrics_reporting().\n","funding_links":[],"categories":["Distributed Machine Learning","TensorFlow Utilities"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftensorflow%2Fcloud","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ftensorflow%2Fcloud","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftensorflow%2Fcloud/lists"}