{"id":13717549,"url":"https://github.com/asappresearch/flambe","last_synced_at":"2025-09-08T15:45:05.243Z","repository":{"id":42352440,"uuid":"199525869","full_name":"asappresearch/flambe","owner":"asappresearch","description":"An ML framework to accelerate research and its path to production.","archived":false,"fork":false,"pushed_at":"2024-09-03T20:58:51.000Z","size":13888,"stargazers_count":265,"open_issues_count":22,"forks_count":28,"subscribers_count":8,"default_branch":"master","last_synced_at":"2025-04-02T04:01:52.333Z","etag":null,"topics":["deep-learning","distributed","machine-learning","ml","python","pytorch","research"],"latest_commit_sha":null,"homepage":"https://flambe.ai","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/asappresearch.png","metadata":{"files":{"readme":"README-pypi.rst","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2019-07-29T20:57:45.000Z","updated_at":"2025-01-05T23:47:51.000Z","dependencies_parsed_at":"2024-12-07T15:01:07.059Z","dependency_job_id":"5448ed3b-6b8d-41c0-b526-e2e4c16fb1ab","html_url":"https://github.com/asappresearch/flambe","commit_stats":null,"previous_names":[],"tags_count":19,"template":true,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/asappresearch%2Fflambe","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/asappresearch%2Fflambe/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/asappresearch%2Fflambe/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/asappresearch%2Fflambe/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/asappresearch","download_url":"https://codeload.github.com/asappresearch/flambe/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247980837,"owners_count":21027808,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["deep-learning","distributed","machine-learning","ml","python","pytorch","research"],"created_at":"2024-08-03T00:01:23.868Z","updated_at":"2025-04-09T05:08:45.488Z","avatar_url":"https://github.com/asappresearch.png","language":"Python","readme":"Flambé\n------\n\n|\n\n.. image:: https://github.com/asappresearch/flambe/workflows/Run%20fast%20tests/badge.svg\n    :target: https://github.com/asappresearch/flambe/actions\n    :alt: Fast tests\n\n.. image:: https://github.com/asappresearch/flambe/workflows/Run%20slow%20tests/badge.svg\n    :target: https://github.com/asappresearch/flambe/actions\n    :alt: Slow tests\n\n.. 
Getting started
---------------

Define an ``Experiment``:

.. code-block:: yaml

    !Experiment

    name: sst-text-classification

    pipeline:

      # Stage 0 - Load the Stanford Sentiment Treebank dataset and run preprocessing
      dataset: !SSTDataset
        transform:
          text: !TextField
          label: !LabelField

      # Stage 1 - Define a model
      model: !TextClassifier
          embedder: !Embedder
            embedding: !torch.Embedding  # automatically use pytorch classes
              num_embeddings: !@ dataset.text.vocab_size
              embedding_dim: 300
            embedding_dropout: 0.3
            encoder: !PooledRNNEncoder
              input_size: 300
              n_layers: !g [2, 3, 4]
              hidden_size: 128
              rnn_type: sru
              dropout: 0.3
          output_layer: !SoftmaxLayer
              input_size: !@ model[embedder][encoder].rnn.hidden_size
              output_size: !@ dataset.label.vocab_size

      # Stage 2 - Train the model on the dataset
      train: !Trainer
        dataset: !@ dataset
        model: !@ model
        train_sampler: !BaseSampler
        val_sampler: !BaseSampler
        loss_fn: !torch.NLLLoss
        metric_fn: !Accuracy
        optimizer: !torch.Adam
          params: !@ train[model].trainable_params
        max_steps: 10
        iter_per_step: 100

      # Stage 3 - Eval on the test set
      eval: !Evaluator
        dataset: !@ dataset
        model: !@ train.model
        metric_fn: !Accuracy
        eval_sampler: !BaseSampler

    # Define how to schedule variants
    schedulers:
      train: !ray.HyperBandScheduler

All objects in the ``pipeline`` are subclasses of ``Component``, which are automatically registered for use in YAML. Custom ``Component`` implementations must implement ``run`` to define their behavior when executed, as in the sketch below.
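For illustration, a minimal sketch of a custom ``Component``. Only the contract stated above is taken from this README (subclass ``Component``, implement ``run``); the class name, constructor, attribute access, and the meaning of the return value are assumptions to check against the flambé documentation:

.. code-block:: python

    import flambe

    class DatasetStats(flambe.Component):
        """Hypothetical pipeline stage that reports dataset sizes."""

        def __init__(self, dataset=None) -> None:
            super().__init__()
            # Typically linked from an earlier stage via !@ dataset
            self.dataset = dataset

        def run(self) -> bool:
            # Custom behavior executed when this stage runs
            print(f"train examples: {len(self.dataset.train)}")
            # Assumption: returning False signals the stage has no more
            # steps, mirroring the step-wise execution of stages like !Trainer
            return False

Such a component could then appear as its own stage in the ``pipeline``, e.g. ``stats: !DatasetStats`` with ``dataset: !@ dataset``.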
Now just execute:

.. code-block:: bash

    flambe example.yaml

Note that defining objects like the model and dataset ahead of time is optional; it is useful if you want to reference the same object (e.g. the model architecture) multiple times later in the pipeline.

Progress can be monitored via the Report Site, which is fully integrated with Tensorboard.

Features
--------

* **Native support for hyperparameter search**: using search tags (see ``!g`` in the example above), users can define multi-variant pipelines. More advanced search algorithms will be available in an upcoming release!
* **Remote and distributed experiments**: users can submit ``Experiments`` to ``Clusters``, which execute them in a distributed way. Full ``AWS`` integration is supported.
* **Visualize all your metrics and meaningful data using Tensorboard**: log scalars, histograms, images, hparams, and much more.
* **Add custom code and objects to your pipelines**: extend flambé's functionality using the easy-to-use *extensions* mechanism.
* **Modularity with hierarchical serialization**: save individual components from a pipeline and safely load them anywhere else.

Next Steps
----------

Full documentation, tutorials, and much more at https://flambe.ai

Contact
-------

You can reach us at flambe@asapp.com