{"id":13412547,"url":"https://github.com/tensorflow/tensorboard","last_synced_at":"2025-05-12T16:18:48.345Z","repository":{"id":37847299,"uuid":"91379993","full_name":"tensorflow/tensorboard","owner":"tensorflow","description":"TensorFlow's Visualization Toolkit","archived":false,"fork":false,"pushed_at":"2025-05-09T19:01:02.000Z","size":123198,"stargazers_count":6871,"open_issues_count":696,"forks_count":1668,"subscribers_count":187,"default_branch":"master","last_synced_at":"2025-05-12T16:18:20.256Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tensorflow.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":"AUTHORS","dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2017-05-15T20:08:07.000Z","updated_at":"2025-05-12T07:11:42.000Z","dependencies_parsed_at":"2023-12-15T19:59:13.087Z","dependency_job_id":"5d15aab4-2916-440a-b0b5-a1987e454a66","html_url":"https://github.com/tensorflow/tensorboard","commit_stats":{"total_commits":5610,"total_committers":336,"mean_commits":"16.696428571428573","dds":0.8397504456327985,"last_synced_commit":"16501e7cebf4743bbfeb4f0f51216e27049ee60e"},"previous_names":[],"tags_count":74,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Ftensorboard","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Ftensorboard/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Ftensorboard/rel
eases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Ftensorboard/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tensorflow","download_url":"https://codeload.github.com/tensorflow/tensorboard/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253774593,"owners_count":21962199,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-07-30T20:01:25.954Z","updated_at":"2025-05-12T16:18:48.309Z","avatar_url":"https://github.com/tensorflow.png","language":"TypeScript","readme":"# TensorBoard [![GitHub Actions CI](https://github.com/tensorflow/tensorboard/workflows/CI/badge.svg)](https://github.com/tensorflow/tensorboard/actions?query=workflow%3ACI+branch%3Amaster+event%3Apush) [![GitHub Actions Nightly CI](https://github.com/tensorflow/tensorboard/workflows/nightly-release/badge.svg)](https://github.com/tensorflow/tensorboard/actions?query=workflow%3Anightly-release+branch%3Amaster) [![PyPI](https://badge.fury.io/py/tensorboard.svg)](https://badge.fury.io/py/tensorboard)\n\nTensorBoard is a suite of web applications for inspecting and understanding your\nTensorFlow runs and graphs.\n\nThis README gives an overview of key concepts in TensorBoard, as well as how to\ninterpret the visualizations TensorBoard provides. 
For an in-depth example of\nusing TensorBoard, see the tutorial: [TensorBoard: Getting Started][].\nDocumentation on how to use TensorBoard to work with images, graphs,\nhyperparameters, and more is linked from there, along with tutorial\nwalk-throughs in Colab.\n\nTensorBoard is designed to run entirely offline, without requiring any access\nto the Internet. For instance, it can run on your local machine, behind a\ncorporate firewall, or in a datacenter.\n\n[TensorBoard: Getting Started]: https://www.tensorflow.org/tensorboard/get_started\n\n# Usage\n\nBefore running TensorBoard, make sure you have generated summary data in a log\ndirectory by creating a summary writer:\n\n``` python\n# sess.graph contains the graph definition; that enables the Graph Visualizer.\n\nfile_writer = tf.summary.FileWriter('/path/to/logs', sess.graph)\n```\n\nFor more details, see\n[the TensorBoard tutorial](https://www.tensorflow.org/get_started/summaries_and_tensorboard).\nOnce you have event files, run TensorBoard and provide the log directory. If\nyou're using a precompiled TensorFlow package (e.g. you installed via pip), run:\n\n```\ntensorboard --logdir path/to/logs\n```\n\nOr, if you are building from source:\n\n```bash\nbazel build tensorboard:tensorboard\n./bazel-bin/tensorboard/tensorboard --logdir path/to/logs\n\n# or even more succinctly\nbazel run tensorboard -- --logdir path/to/logs\n```\n\nThis should print that TensorBoard has started. Next, connect to\nhttp://localhost:6006.\n\nTensorBoard requires a `logdir` to read logs from. For info on configuring\nTensorBoard, run `tensorboard --help`.\n\nTensorBoard can be used in Google Chrome or Firefox. 
Other browsers might\nwork, but there may be bugs or performance issues.\n\n# Key Concepts\n\n### Summary Ops: How TensorBoard gets data from TensorFlow\n\nThe first step in using TensorBoard is acquiring data from your TensorFlow run.\nFor this, you need\n[summary ops](https://www.tensorflow.org/api_docs/python/tf/summary).\nSummary ops are ops, just like\n[`tf.matmul`](https://www.tensorflow.org/api_docs/python/tf/linalg/matmul)\nand\n[`tf.nn.relu`](https://www.tensorflow.org/api_docs/python/tf/nn/relu),\nwhich means they take in tensors, produce tensors, and are evaluated from within\na TensorFlow graph. However, summary ops have a twist: the Tensors they produce\ncontain serialized protobufs, which are written to disk and sent to TensorBoard.\nTo visualize the summary data in TensorBoard, you should evaluate the summary\nop, retrieve the result, and then write that result to disk using a\nsummary.FileWriter. A full explanation, with examples, is in [the\ntutorial](https://www.tensorflow.org/get_started/summaries_and_tensorboard).\n\nThe supported summary ops include:\n* [`tf.summary.scalar`](https://www.tensorflow.org/api_docs/python/tf/summary/scalar)\n* [`tf.summary.image`](https://www.tensorflow.org/api_docs/python/tf/summary/image)\n* [`tf.summary.audio`](https://www.tensorflow.org/api_docs/python/tf/summary/audio)\n* [`tf.summary.text`](https://www.tensorflow.org/api_docs/python/tf/summary/text)\n* [`tf.summary.histogram`](https://www.tensorflow.org/api_docs/python/tf/summary/histogram)\n\n### Tags: Giving names to data\n\nWhen you make a summary op, you will also give it a `tag`. The tag is basically\na name for the data recorded by that op, and will be used to organize the data\nin the frontend. The scalar and histogram dashboards organize data by tag, and\ngroup the tags into folders according to a directory/like/hierarchy. 
If you have\na lot of tags, we recommend grouping them with slashes.\n\n### Event Files \u0026 LogDirs: How TensorBoard loads the data\n\n`summary.FileWriters` take summary data from TensorFlow, and then write them to a\nspecified directory, known as the `logdir`. Specifically, the data is written to\nan append-only record dump that will have \"tfevents\" in the filename.\nTensorBoard reads data from a full directory, and organizes it into the history\nof a single TensorFlow execution.\n\nWhy does it read the whole directory, rather than an individual file? You might\nhave been using\n[supervisor.py](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/supervisor.py)\nto run your model, in which case if TensorFlow crashes, the supervisor will\nrestart it from a checkpoint. When it restarts, it will start writing to a new\nevents file, and TensorBoard will stitch the various event files together to\nproduce a consistent history of what happened.\n\n### Runs: Comparing different executions of your model\n\nYou may want to visually compare multiple executions of your model; for example,\nsuppose you've changed the hyperparameters and want to see if it's converging\nfaster. TensorBoard enables this through different \"runs\". When TensorBoard is\npassed a `logdir` at startup, it recursively walks the directory tree rooted at\n`logdir` looking for subdirectories that contain tfevents data. 
Every time it\nencounters such a subdirectory, it loads it as a new `run`, and the frontend\nwill organize the data accordingly.\n\nFor example, here is a well-organized TensorBoard log directory, with two runs,\n\"run1\" and \"run2\".\n\n```\n/some/path/mnist_experiments/\n/some/path/mnist_experiments/run1/\n/some/path/mnist_experiments/run1/events.out.tfevents.1456525581.name\n/some/path/mnist_experiments/run1/events.out.tfevents.1456525585.name\n/some/path/mnist_experiments/run2/\n/some/path/mnist_experiments/run2/events.out.tfevents.1456525385.name\n```\n\nTo load this log directory, run:\n\n```\ntensorboard --logdir /some/path/mnist_experiments\n```\n\n#### Logdir \u0026 Logdir_spec (Legacy Mode)\n\nYou may also pass a comma-separated list of log directories, and TensorBoard\nwill watch each directory. You can also assign names to individual log\ndirectories by putting a colon between the name and the path, as in\n\n```\ntensorboard --logdir_spec name1:/path/to/logs/1,name2:/path/to/logs/2\n```\n\n_This flag (`--logdir_spec`) is discouraged and can usually be avoided_. TensorBoard walks log directories recursively; for finer-grained control, prefer using a symlink tree. _Some features may not work when using `--logdir_spec` instead of `--logdir`._\n\n# The Visualizations\n\n### Scalar Dashboard\n\nTensorBoard's Scalar Dashboard visualizes scalar statistics that vary over time;\nfor example, you might want to track the model's loss or learning rate. As\ndescribed in *Key Concepts*, you can compare multiple runs, and the data is\norganized by tag. 
The line charts have the following interactions:\n\n* Clicking on the small blue icon in the lower-left corner of each chart will\nexpand the chart\n\n* Dragging a rectangular region on the chart will zoom in\n\n* Double clicking on the chart will zoom out\n\n* Mousing over the chart will produce crosshairs, with data values recorded in\nthe run-selector on the left.\n\nAdditionally, you can create new folders to organize tags by writing regular\nexpressions in the box in the top-left of the dashboard.\n\n### Histogram Dashboard\n\nThe Histogram Dashboard displays how the statistical distribution of a Tensor\nhas varied over time. It visualizes data recorded via `tf.summary.histogram`.\nEach chart shows temporal \"slices\" of data, where each slice is a histogram of\nthe tensor at a given step. It's organized with the oldest timestep in the back,\nand the most recent timestep in front. By changing the Histogram Mode from\n\"offset\" to \"overlay\", the perspective will rotate so that every histogram slice\nis rendered as a line and overlaid with one another.\n\n### Distribution Dashboard\n\nThe Distribution Dashboard is another way of visualizing histogram data from\n`tf.summary.histogram`. It shows some high-level statistics on a distribution.\nEach line on the chart represents a percentile in the distribution over the\ndata: for example, the bottom line shows how the minimum value has changed over\ntime, and the line in the middle shows how the median has changed. 
Reading from\ntop to bottom, the lines have the following meaning: `[maximum, 93%, 84%, 69%,\n50%, 31%, 16%, 7%, minimum]`\n\nThese percentiles can also be viewed as standard deviation boundaries on a\nnormal distribution: `[maximum, μ+1.5σ, μ+σ, μ+0.5σ, μ, μ-0.5σ, μ-σ, μ-1.5σ,\nminimum]` so that the colored regions, read from inside to outside, have widths\n`[σ, 2σ, 3σ]` respectively.\n\n### Image Dashboard\n\nThe Image Dashboard can display pngs that were saved via a `tf.summary.image`.\nThe dashboard is set up so that each row corresponds to a different tag, and\neach column corresponds to a run. Since the image dashboard supports arbitrary\npngs, you can use this to embed custom visualizations (e.g. matplotlib\nscatterplots) into TensorBoard. This dashboard always shows you the latest image\nfor each tag.\n\n### Audio Dashboard\n\nThe Audio Dashboard can embed playable audio widgets for audio saved via a\n`tf.summary.audio`. The dashboard is set up so that each row corresponds to a\ndifferent tag, and each column corresponds to a run. This dashboard always\nembeds the latest audio for each tag.\n\n### Graph Explorer\n\nThe Graph Explorer can visualize a TensorBoard graph, enabling inspection of the\nTensorFlow model. To get best use of the graph visualizer, you should use name\nscopes to hierarchically group the ops in your graph - otherwise, the graph may\nbe difficult to decipher. For more information, including examples, see the\n[examining the TensorFlow graph](https://www.tensorflow.org/tensorboard/graphs)\ntutorial.\n\n### Embedding Projector\n\nThe Embedding Projector allows you to visualize high-dimensional data; for\nexample, you may view your input data after it has been embedded in a high-\ndimensional space by your model. The embedding projector reads data from your\nmodel checkpoint file, and may be configured with additional metadata, like\na vocabulary file or sprite images. 
For more details, see [the embedding\nprojector tutorial](https://www.tensorflow.org/tutorials/text/word_embeddings).\n\n### Text Dashboard\n\nThe Text Dashboard displays text snippets saved via `tf.summary.text`. Markdown\nfeatures, including hyperlinks, lists, and tables, are all supported.\n\n### Time Series Dashboard\n\nThe Time Series Dashboard shows a unified interface containing all your Scalars,\nHistograms, and Images saved via `tf.summary.scalar`, `tf.summary.image`, or\n`tf.summary.histogram`. It enables viewing your 'accuracy' line chart side by\nside with activation histograms and training example images, for example.\n\nFeatures include:\n\n* Custom run colors: click on the colored circles in the run selector to change\na run's color.\n\n* Pinned cards: click the 'pin' icon on any card to add it to the pinned section\nat the top for quick comparison.\n\n* Settings: the right pane offers settings for charts and other visualizations.\nImportant settings will persist across TensorBoard sessions when hosted at the\nsame URL origin.\n\n* Autocomplete in tag filter: search for specific charts more easily.\n\n# Frequently Asked Questions\n\n### My TensorBoard isn't showing any data! What's wrong?\n\nFirst, check that the directory passed to `--logdir` is correct. You can also\nverify this by navigating to the Scalars dashboard (under the \"Inactive\" menu)\nand looking for the log directory path at the bottom of the left sidebar.\n\nIf you're loading from the proper path, make sure that event files are present.\nTensorBoard will recursively walk its logdir; it's fine if the data is nested\nunder a subdirectory. 
Ensure the following shows at least one result:\n\n`find DIRECTORY_PATH | grep tfevents`\n\nYou can also check that the event files actually have data by running\ntensorboard in inspect mode to inspect the contents of your event files.\n\n`tensorboard --inspect --logdir DIRECTORY_PATH`\n\nThe output for an event file corresponding to a blank TensorBoard may\nstill sometimes show a few steps, representing a few initial events that\naren't shown by TensorBoard (for example, when using the Keras TensorBoard callback):\n\n```\ntensor\n   first_step           0\n   last_step            2\n   max_step             2\n   min_step             0\n   num_steps            2\n   outoforder_steps     [(2, 0), (2, 0), (2, 0)]\n```\n\nIn contrast, the output for an event file with more data might look like this:\n\n```\ntensor\n   first_step           0\n   last_step            55\n   max_step             250\n   min_step             0\n   num_steps            60\n   outoforder_steps     [(2, 0), (2, 0), (2, 0), (2, 0), (50, 9), (100, 19), (150, 29), (200, 39), (250, 49)]\n```\n\n### TensorBoard is showing only some of my data, or isn't properly updating!\n\n\u003e **Update:** After [2.3.0 release][2-3-0], TensorBoard no longer auto reloads\n\u003e every 30 seconds. To re-enable the behavior, please open the settings by\n\u003e clicking the gear icon in the top-right of the TensorBoard web interface, and\n\u003e enable \"Reload data\".\n\n\u003e **Update:** the [experimental `--reload_multifile=true` option][pr-1867] can\n\u003e now be used to poll all \"active\" files in a directory for new data, rather\n\u003e than the most recent one as described below. 
A file is \"active\" as long as it\n\u003e received new data within the last `--reload_multifile_inactive_secs`\n\u003e seconds (default: 86400).\n\nThis issue usually comes about because of how TensorBoard iterates through the\n`tfevents` files: it progresses through the events files in timestamp order, and\nonly reads one file at a time. Let's suppose we have files with timestamps `a`\nand `b`, where `a\u003cb`. Once TensorBoard has read all the events in `a`, it will\nnever return to it, because it assumes any new events are being written in the\nmore recent file. This could cause an issue if, for example, you have two\n`FileWriters` simultaneously writing to the same directory. If you have\nmultiple summary writers, each one should be writing to a separate directory.\n\n### Does TensorBoard support multiple or distributed summary writers?\n\n\u003e **Update:** the [experimental `--reload_multifile=true` option][pr-1867] can\n\u003e now be used to poll all \"active\" files in a directory for new data, defined as\n\u003e any file that received new data within the last\n\u003e `--reload_multifile_inactive_secs` seconds (default: 86400).\n\nNo. TensorBoard expects that only one events file will be written to at a time,\nand multiple summary writers mean multiple events files. If you are running a\ndistributed TensorFlow instance, we encourage you to designate a single worker\nas the \"chief\" that is responsible for all summary processing. See\n[supervisor.py](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/supervisor.py)\nfor an example.\n\n### I'm seeing data overlapped on itself! What gives?\n\nIf you are seeing data that seems to travel backwards through time and overlap\nwith itself, there are a few possible explanations.\n\n* You may have multiple executions of TensorFlow that all wrote to the same log\ndirectory. 
Please have each TensorFlow run write to its own logdir.\n\n  \u003e **Update:** the [experimental `--reload_multifile=true` option][pr-1867] can\n  \u003e now be used to poll all \"active\" files in a directory for new data, defined\n  \u003e as any file that received new data within the last\n  \u003e `--reload_multifile_inactive_secs` seconds (default: 86400).\n\n* You may have a bug in your code where the `global_step` variable (passed\nto `FileWriter.add_summary`) is being maintained incorrectly.\n\n* It may be that your TensorFlow job crashed, and was restarted from an earlier\ncheckpoint. See *How should I handle TensorFlow restarts?* below.\n\nAs a workaround, try changing the x-axis display in TensorBoard from `steps` to\n`wall_time`. This will frequently clear up the issue.\n\n### How should I handle TensorFlow restarts?\n\nTensorFlow is designed with a mechanism for graceful recovery if a job crashes\nor is killed: TensorFlow can periodically write model checkpoint files, which\nenable you to restart TensorFlow without losing all your training progress.\n\nHowever, this can complicate things for TensorBoard; imagine that TensorFlow\nwrote a checkpoint at step `a`, and then continued running until step `b`, and\nthen crashed and restarted at step `a`. All of the events written between\n`a` and `b` were \"orphaned\" by the restart event and should be removed.\n\nTo facilitate this, we have a `SessionLog` message in\n`tensorflow/core/util/event.proto` which can record `SessionStatus.START` as an\nevent; like all events, it may have a `step` associated with it. If TensorBoard\ndetects a `SessionStatus.START` event with step `a`, it will assume that every\nevent with a step greater than `a` was orphaned, and it will discard those\nevents. 
This behavior may be disabled with the flag\n`--purge_orphaned_data false` (in versions after 0.7).\n\n### How can I export data from TensorBoard?\n\nThe Scalar Dashboard supports exporting data; you can click the \"enable\ndownload links\" option in the left-hand bar. Then, each plot will provide\ndownload links for the data it contains.\n\nIf you need access to the full dataset, you can read the event files that\nTensorBoard consumes by using the [`summary_iterator`](\nhttps://www.tensorflow.org/api_docs/python/tf/compat/v1/train/summary_iterator)\nmethod.\n\n### Can I make my own plugin?\n\nYes! You can clone and tinker with one of the [examples][plugin-examples] and\nmake your own, amazing visualizations. More documentation on the plugin system\nis described in the [ADDING_A_PLUGIN](./ADDING_A_PLUGIN.md) guide. Feel free to\nfile feature requests or questions about plugin functionality.\n\nOnce satisfied with your own groundbreaking new plugin, see the\n[distribution section][plugin-distribution] on how to publish to PyPI and share\nit with the community.\n\n[plugin-examples]: ./tensorboard/examples/plugins\n[plugin-distribution]: ./ADDING_A_PLUGIN.md#distribution\n\n### Can I customize which lines appear in a plot?\n\nUsing the [custom scalars plugin](tensorboard/plugins/custom_scalar), you can\ncreate scalar plots with lines for custom run-tag pairs. However, within the\noriginal scalars dashboard, each scalar plot corresponds to data for a specific\ntag and contains lines for each run that includes that tag.\n\n### Can I visualize margins above and below lines?\n\nMargin plots (that visualize lower and upper bounds) may be created with the\n[custom scalars plugin](tensorboard/plugins/custom_scalar). The original\nscalars plugin does not support visualizing margins.\n\n### Can I create scatterplots (or other custom plots)?\n\nThis isn't yet possible. As a workaround, you could create your custom plot in\nyour own code (e.g. 
matplotlib) and then write it into a `SummaryProto`\n(`core/framework/summary.proto`) and add it to your `FileWriter`. Then, your\ncustom plot will appear in the TensorBoard image tab.\n\n### Is my data being downsampled? Am I really seeing all the data?\n\nTensorBoard uses [reservoir\nsampling](https://en.wikipedia.org/wiki/Reservoir_sampling) to downsample your\ndata so that it can be loaded into RAM. You can modify the number of elements it\nwill keep per tag by using the `--samples_per_plugin` command-line argument (e.g.\n`--samples_per_plugin=scalars=500,images=20`).\nSee this [Stack Overflow question](http://stackoverflow.com/questions/43702546/tensorboard-doesnt-show-all-data-points/)\nfor some more information.\n\n### I get a network security popup every time I run TensorBoard on a Mac!\n\nVersions of TensorBoard prior to TensorBoard 2.0 would by default serve on host\n`0.0.0.0`, which is publicly accessible. For those versions of TensorBoard, you\ncan stop the popups by specifying `--host localhost` at startup.\n\nIn TensorBoard 2.0 and up, `--host localhost` is the default. Use `--bind_all`\nto restore the old behavior of serving to the public network on both IPv4 and\nIPv6.\n\n### Can I run `tensorboard` without a TensorFlow installation?\n\nTensorBoard 1.14+ can be run with a reduced feature set if you do not have\nTensorFlow installed. The primary limitation is that as of 1.14, only the\nfollowing plugins are supported: scalars, custom scalars, image, audio,\ngraph, projector (partial), distributions, histograms, text, PR curves, mesh.\nIn addition, there is no support for log directories on Google Cloud Storage.\n\n### How can I contribute to TensorBoard development?\n\nSee [DEVELOPMENT.md](DEVELOPMENT.md).\n\n### I have a different issue that wasn't addressed here!\n\nFirst, try searching our [GitHub\nissues](https://github.com/tensorflow/tensorboard/issues) and\n[Stack Overflow][stack-overflow]. 
It may be\nthat someone else has already had the same issue or question.\n\nGeneral usage questions (or problems that may be specific to your local setup)\nshould go to [Stack Overflow][stack-overflow].\n\nIf you have found a bug in TensorBoard, please [file a GitHub issue](\nhttps://github.com/tensorflow/tensorboard/issues/new) with as much supporting\ninformation as you can provide (e.g. attaching events files, including the output\nof `tensorboard --inspect`, etc.).\n\n[stack-overflow]: https://stackoverflow.com/questions/tagged/tensorboard\n[pr-1867]: https://github.com/tensorflow/tensorboard/pull/1867\n[2-3-0]: https://github.com/tensorflow/tensorboard/releases/tag/2.3.0\n","funding_links":[],"categories":["Researchers","TypeScript","Python","Visualization","Deep Learning Framework","Visualization \u0026 Interpretability","Table of Contents","Project/Product","工作流程和实验跟踪","Industry Strength Visualisation","其他_机器学习与深度学习","Training","Tools","Machine Learning Management and Experimentation","Training \u0026 Fine-Tuning"],"sub_categories":["Tools","High-Level DL APIs","Visualization","Misc","Experiment Tracking"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftensorflow%2Ftensorboard","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ftensorflow%2Ftensorboard","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftensorflow%2Ftensorboard/lists"}