{"id":13472454,"url":"https://github.com/amazon-science/chronos-forecasting","last_synced_at":"2025-05-12T15:37:58.252Z","repository":{"id":227442316,"uuid":"762224673","full_name":"amazon-science/chronos-forecasting","owner":"amazon-science","description":"Chronos: Pretrained Models for Probabilistic Time Series Forecasting","archived":false,"fork":false,"pushed_at":"2025-04-10T15:26:11.000Z","size":1028,"stargazers_count":3149,"open_issues_count":20,"forks_count":349,"subscribers_count":32,"default_branch":"main","last_synced_at":"2025-04-13T01:45:31.761Z","etag":null,"topics":["artificial-intelligence","forecasting","foundation-models","huggingface","huggingface-transformers","large-language-models","llm","machine-learning","pretrained-models","time-series","time-series-forecasting","timeseries","transformers"],"latest_commit_sha":null,"homepage":"https://arxiv.org/abs/2403.07815","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/amazon-science.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2024-02-23T10:35:42.000Z","updated_at":"2025-04-12T15:34:55.000Z","dependencies_parsed_at":"2024-03-13T11:28:40.578Z","dependency_job_id":"f39b60bb-093c-4f98-b806-26443b17faad","html_url":"https://github.com/amazon-science/chronos-forecasting","commit_stats":{"total_commits":67,"total_committers":10,"mean_commits":6.7,"dds":0.4328358208955224,"last_synced_commit":"ad410c9c0ae0d499aeec9a7af09b0636844b6274"},"previous_names":["amazon-science/chronos-forecasting"],"tags_count":9,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/amazon-science%2Fchronos-forecasting","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/amazon-science%2Fchronos-forecasting/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/amazon-science%2Fchronos-forecasting/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/amazon-science%2Fchronos-forecasting/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/amazon-science","download_url":"https://codeload.github.com/amazon-science/chronos-forecasting/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":250478325,"owners_count":21437142,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["artificial-intelligence","forecasting","foundation-models","huggingface","huggingface-transformers","large-language-models","llm","machine-learning","pretrained-models","time-series","time-series-forecasting","timeseries","transformers"],"created_at":"2024-07-31T16:00:5
4.780Z","updated_at":"2025-04-23T17:23:20.992Z","avatar_url":"https://github.com/amazon-science.png","language":"Python","readme":"\u003cdiv align=\"center\"\u003e\n\u003cimg src=\"https://raw.githubusercontent.com/amazon-science/chronos-forecasting/main/figures/chronos-logo.png\" width=\"60%\"\u003e\n\u003c/div\u003e\n\n\u003cdiv align=\"center\"\u003e\n\n# Chronos: Learning the Language of Time Series\n\n[![preprint](https://img.shields.io/static/v1?label=arXiv\u0026message=2403.07815\u0026color=B31B1B\u0026logo=arXiv)](https://arxiv.org/abs/2403.07815)\n[![huggingface](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Datasets-FFD21E)](https://huggingface.co/datasets/autogluon/chronos_datasets)\n[![huggingface](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-FFD21E)](https://huggingface.co/collections/amazon/chronos-models-65f1791d630a8d57cb718444)\n[![fev](https://img.shields.io/static/v1?label=fev\u0026message=Benchmark\u0026color=B31B1B\u0026logo=github)](https://github.com/autogluon/fev)\n[![aws](https://img.shields.io/static/v1?label=SageMaker\u0026message=Deploy\u0026color=FF9900\u0026logo=amazon-web-services)](notebooks/deploy-chronos-bolt-to-amazon-sagemaker.ipynb)\n[![faq](https://img.shields.io/badge/FAQ-Questions%3F-blue)](https://github.com/amazon-science/chronos-forecasting/issues?q=is%3Aissue+label%3AFAQ)\n[![License: MIT](https://img.shields.io/badge/License-Apache--2.0-green.svg)](https://opensource.org/licenses/Apache-2.0)\n\n\u003c/div\u003e\n\n\n## 🚀 News\n- **14 Feb 2025**: 🚀 Chronos-Bolt is now available on Amazon SageMaker JumpStart! Check out the [tutorial notebook](notebooks/deploy-chronos-bolt-to-amazon-sagemaker.ipynb) to learn how to deploy Chronos endpoints for production use in 3 lines of code.\n- **12 Dec 2024**: 📊 We released [`fev`](https://github.com/autogluon/fev), a lightweight package for benchmarking time series forecasting models based on the [Hugging Face `datasets`](https://huggingface.co/docs/datasets/en/index) library.\n- **26 Nov 2024**: ⚡️ Chronos-Bolt models released [on HuggingFace](https://huggingface.co/collections/amazon/chronos-models-65f1791d630a8d57cb718444). Chronos-Bolt models are more accurate (5% lower error), up to 250x faster and 20x more memory efficient than the original Chronos models of the same size!\n- **27 Jun 2024**: 🚀 [Released datasets](https://huggingface.co/datasets/autogluon/chronos_datasets) used in the paper and an [evaluation script](./scripts/README.md#evaluating-chronos-models) to compute the WQL and MASE scores reported in the paper.\n- **17 May 2024**: 🐛 Fixed an off-by-one error in bin indices in the `output_transform`. This simple fix significantly improves the overall performance of Chronos. We will update the results in the next revision on ArXiv.\n- **10 May 2024**: 🚀 We added the code for pretraining and fine-tuning Chronos models. You can find it in [this folder](./scripts/training). We also added [a script](./scripts/kernel-synth.py) for generating synthetic time series data from Gaussian processes (KernelSynth; see Section 4.2 in the paper for details). Check out the [usage examples](./scripts/).\n- **19 Apr 2024**: 🚀 Chronos is now supported on [AutoGluon-TimeSeries](https://auto.gluon.ai/stable/tutorials/timeseries/index.html), the powerful AutoML package for time series forecasting which enables model ensembles, cloud deployments, and much more. 
Get started with the [tutorial](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-chronos.html).\n- **08 Apr 2024**: 🧪 Experimental [MLX inference support](https://github.com/amazon-science/chronos-forecasting/tree/mlx) added. If you have an Apple Silicon Mac, you can now obtain significantly faster forecasts from Chronos compared to CPU inference. This provides an alternative way to exploit the GPU on your Apple Silicon Macs together with the \"mps\" support in PyTorch.\n- **25 Mar 2024**: 🚀 [v1.1.0 released](https://github.com/amazon-science/chronos-forecasting/releases/tag/v1.1.0) with inference optimizations and `pipeline.embed` to extract encoder embeddings from Chronos.\n- **13 Mar 2024**: 🚀 Chronos [paper](https://arxiv.org/abs/2403.07815) and inference code released.\n\n## ✨ Introduction\n\nChronos is a family of **pretrained time series forecasting models** based on language model architectures. A time series is transformed into a sequence of tokens via scaling and quantization, and a language model is trained on these tokens using the cross-entropy loss. Once trained, probabilistic forecasts are obtained by sampling multiple future trajectories given the historical context. Chronos models have been trained on a large corpus of publicly available time series data, as well as synthetic data generated using Gaussian processes.\n\nFor details on Chronos models, training data and procedures, and experimental results, please refer to the paper [Chronos: Learning the Language of Time Series](https://arxiv.org/abs/2403.07815).\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://raw.githubusercontent.com/amazon-science/chronos-forecasting/main/figures/main-figure.png\" width=\"100%\"\u003e\n  \u003cbr /\u003e\n  \u003cspan\u003e\n    Fig. 1: High-level depiction of Chronos. (\u003cb\u003eLeft\u003c/b\u003e) The input time series is scaled and quantized to obtain a sequence of tokens. (\u003cb\u003eCenter\u003c/b\u003e) The tokens are fed into a language model which may either be an encoder-decoder or a decoder-only model. The model is trained using the cross-entropy loss. (\u003cb\u003eRight\u003c/b\u003e) During inference, we autoregressively sample tokens from the model and map them back to numerical values. Multiple trajectories are sampled to obtain a predictive distribution.\n  \u003c/span\u003e\n\u003c/p\u003e\n\n### Architecture\n\nThe models in this repository are based on the [T5 architecture](https://arxiv.org/abs/1910.10683). 
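To make the scaling-and-quantization step in Fig. 1 concrete, here is a minimal sketch assuming mean scaling and uniform binning, as described in the paper. The helper names `mean_scale_and_quantize` and `dequantize` are hypothetical; the library's actual tokenizer handles additional details such as special tokens and padding:

```python
import torch

def mean_scale_and_quantize(context: torch.Tensor, n_tokens: int = 4096, limit: float = 15.0):
    # Scale by the mean absolute value of the context so that series of very
    # different magnitudes map onto the same token vocabulary.
    scale = context.abs().mean().clamp_min(1e-10)
    scaled = context / scale
    # Uniform bin centers on [-limit, limit]; each scaled value becomes the
    # index of its bin, i.e., a token id.
    centers = torch.linspace(-limit, limit, n_tokens)
    tokens = torch.bucketize(scaled, centers).clamp(max=n_tokens - 1)
    return tokens, scale

def dequantize(tokens: torch.Tensor, scale: torch.Tensor, n_tokens: int = 4096, limit: float = 15.0):
    # Map token ids back to bin centers and undo the scaling.
    centers = torch.linspace(-limit, limit, n_tokens)
    return centers[tokens] * scale

series = torch.tensor([112.0, 118.0, 132.0, 129.0, 121.0])
tokens, scale = mean_scale_and_quantize(series)
print(tokens)                     # token ids, roughly centered in the vocabulary
print(dequantize(tokens, scale))  # approximate reconstruction of the series
```

A forecast token sampled from the model is mapped back to a numerical value in the same way, via the bin centers.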
### Architecture

The models in this repository are based on the [T5 architecture](https://arxiv.org/abs/1910.10683). The only difference is in the vocabulary size: Chronos-T5 models use 4096 tokens, compared to the 32128 of the original T5 models, resulting in fewer parameters.

<div align="center">

| Model                                                                       | Parameters | Based on                                                                |
| --------------------------------------------------------------------------- | ---------- | ------------------------------------------------------------------------ |
| [**chronos-t5-tiny**](https://huggingface.co/amazon/chronos-t5-tiny)       | 8M         | [t5-efficient-tiny](https://huggingface.co/google/t5-efficient-tiny)    |
| [**chronos-t5-mini**](https://huggingface.co/amazon/chronos-t5-mini)       | 20M        | [t5-efficient-mini](https://huggingface.co/google/t5-efficient-mini)    |
| [**chronos-t5-small**](https://huggingface.co/amazon/chronos-t5-small)     | 46M        | [t5-efficient-small](https://huggingface.co/google/t5-efficient-small)  |
| [**chronos-t5-base**](https://huggingface.co/amazon/chronos-t5-base)       | 200M       | [t5-efficient-base](https://huggingface.co/google/t5-efficient-base)    |
| [**chronos-t5-large**](https://huggingface.co/amazon/chronos-t5-large)     | 710M       | [t5-efficient-large](https://huggingface.co/google/t5-efficient-large)  |
| [**chronos-bolt-tiny**](https://huggingface.co/amazon/chronos-bolt-tiny)   | 9M         | [t5-efficient-tiny](https://huggingface.co/google/t5-efficient-tiny)    |
| [**chronos-bolt-mini**](https://huggingface.co/amazon/chronos-bolt-mini)   | 21M        | [t5-efficient-mini](https://huggingface.co/google/t5-efficient-mini)    |
| [**chronos-bolt-small**](https://huggingface.co/amazon/chronos-bolt-small) | 48M        | [t5-efficient-small](https://huggingface.co/google/t5-efficient-small)  |
| [**chronos-bolt-base**](https://huggingface.co/amazon/chronos-bolt-base)   | 205M       | [t5-efficient-base](https://huggingface.co/google/t5-efficient-base)    |

</div>
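The smaller vocabulary is visible directly in each model's Hugging Face config. A quick check using `transformers` (a sketch assuming `transformers` is installed and the configs can be downloaded):

```python
from transformers import AutoConfig  # requires: pip install transformers

for name in ["amazon/chronos-t5-small", "google/t5-efficient-small"]:
    config = AutoConfig.from_pretrained(name)
    print(f"{name}: vocab_size={config.vocab_size}")
# Per the table above, this should report 4096 for Chronos-T5
# and 32128 for the original T5 checkpoint.
```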
### Zero-Shot Results

The following figure showcases the remarkable **zero-shot** performance of Chronos and Chronos-Bolt models on 27 datasets against local models, task-specific models, and other pretrained models. For details on the evaluation setup and other results, please refer to [the paper](https://arxiv.org/abs/2403.07815).

<p align="center">
  <img src="https://raw.githubusercontent.com/amazon-science/chronos-forecasting/main/figures/zero_shot-agg_scaled_score.svg" width="100%">
  <br />
  <span>
    Fig. 2: Performance of different models on Benchmark II, comprising 27 datasets <b>not seen</b> by Chronos and Chronos-Bolt models during training. This benchmark provides insights into the zero-shot performance of Chronos and Chronos-Bolt models against local statistical models, which fit parameters individually for each time series; task-specific models, <i>trained on each task</i>; and pretrained models, trained on a large corpus of time series. "Pretrained Models (Other)" indicates that some (or all) of the datasets in Benchmark II may have been in the training corpus of these models. The probabilistic (WQL) and point (MASE) forecasting metrics were normalized using the scores of the Seasonal Naive baseline and aggregated through a geometric mean to obtain the Agg. Relative WQL and MASE, respectively.
  </span>
</p>
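As a sketch of the normalization and aggregation just described (the per-dataset scores below are made up purely for illustration):

```python
import numpy as np

# Illustrative per-dataset WQL scores for one model and for the Seasonal Naive baseline.
wql = np.array([0.05, 0.12, 0.30])
seasonal_naive_wql = np.array([0.08, 0.15, 0.25])

# Normalize each score by the baseline, then aggregate with a geometric mean.
relative_wql = wql / seasonal_naive_wql
agg_relative_wql = np.exp(np.log(relative_wql).mean())
print(agg_relative_wql)  # values below 1 mean better than Seasonal Naive on average
```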
## 📈 Usage

The easiest way to perform inference with Chronos or Chronos-Bolt models is to install this package through `pip`:

```sh
pip install chronos-forecasting
```

If you're interested in pretraining, fine-tuning, and other research & development, clone and install the package from source:

```sh
# Clone the repository
git clone https://github.com/amazon-science/chronos-forecasting.git

# Install in editable mode with extra training-related dependencies
cd chronos-forecasting && pip install --editable ".[training]"
```

> [!TIP]
> This repository is intended for research purposes and provides a minimal interface to Chronos models. For reliable production use, we recommend the following options:
> - [AutoGluon](https://auto.gluon.ai) provides effortless fine-tuning, augmentation of Chronos models with exogenous information through covariate regressors, and ensembling with other statistical and machine learning models. Check out the AutoGluon Chronos [tutorial](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-chronos.html).
> - SageMaker JumpStart makes it easy to deploy Chronos inference endpoints to AWS with just a few lines of code. Check out [this tutorial](notebooks/deploy-chronos-bolt-to-amazon-sagemaker.ipynb) for more details.

### Forecasting

A minimal example showing how to perform forecasting using Chronos and Chronos-Bolt models:

```python
import pandas as pd  # requires: pip install pandas
import torch
from chronos import BaseChronosPipeline

pipeline = BaseChronosPipeline.from_pretrained(
    "amazon/chronos-t5-small",  # use "amazon/chronos-bolt-small" for the corresponding Chronos-Bolt model
    device_map="cuda",  # use "cpu" for CPU inference
    torch_dtype=torch.bfloat16,
)

df = pd.read_csv(
    "https://raw.githubusercontent.com/AileenNielsen/TimeSeriesAnalysisWithPython/master/data/AirPassengers.csv"
)

# context must be either a 1D tensor, a list of 1D tensors,
# or a left-padded 2D tensor with batch as the first dimension;
# quantiles is an fp32 tensor with shape [batch_size, prediction_length, num_quantile_levels];
# mean is an fp32 tensor with shape [batch_size, prediction_length]
quantiles, mean = pipeline.predict_quantiles(
    context=torch.tensor(df["#Passengers"]),
    prediction_length=12,
    quantile_levels=[0.1, 0.5, 0.9],
)
```

For the original Chronos models, `pipeline.predict` can be used to draw forecast samples. More options for `predict_kwargs` in `pipeline.predict_quantiles` can be found with:

```python
from chronos import ChronosPipeline, ChronosBoltPipeline

print(ChronosPipeline.predict.__doc__)  # for Chronos models
print(ChronosBoltPipeline.predict.__doc__)  # for Chronos-Bolt models
```
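For example, here is a short sketch of sample-based forecasting, continuing the snippet above with the Chronos pipeline already loaded (the `num_samples` value is an illustrative choice; consult the docstrings printed above for the full set of options):

```python
# Draw sample trajectories with the original Chronos models (not Chronos-Bolt);
# the result has shape [batch_size, num_samples, prediction_length].
samples = pipeline.predict(
    context=torch.tensor(df["#Passengers"]),
    prediction_length=12,
    num_samples=20,
)

# torch.quantile needs a float input, so cast before reducing
# the sample dimension to empirical quantiles.
low, median, high = torch.quantile(
    samples[0].to(torch.float32), torch.tensor([0.1, 0.5, 0.9]), dim=0
)
```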
We can now visualize the forecast:

```python
import matplotlib.pyplot as plt  # requires: pip install matplotlib

forecast_index = range(len(df), len(df) + 12)
low, median, high = quantiles[0, :, 0], quantiles[0, :, 1], quantiles[0, :, 2]

plt.figure(figsize=(8, 4))
plt.plot(df["#Passengers"], color="royalblue", label="historical data")
plt.plot(forecast_index, median, color="tomato", label="median forecast")
plt.fill_between(forecast_index, low, high, color="tomato", alpha=0.3, label="80% prediction interval")
plt.legend()
plt.grid()
plt.show()
```

### Extracting Encoder Embeddings

A minimal example showing how to extract encoder embeddings from Chronos models:

```python
import pandas as pd
import torch
from chronos import ChronosPipeline

pipeline = ChronosPipeline.from_pretrained(
    "amazon/chronos-t5-small",
    device_map="cuda",
    torch_dtype=torch.bfloat16,
)

df = pd.read_csv("https://raw.githubusercontent.com/AileenNielsen/TimeSeriesAnalysisWithPython/master/data/AirPassengers.csv")

# context must be either a 1D tensor, a list of 1D tensors,
# or a left-padded 2D tensor with batch as the first dimension
context = torch.tensor(df["#Passengers"])
embeddings, tokenizer_state = pipeline.embed(context)
```

### Pretraining, fine-tuning and evaluation

Scripts for pretraining, fine-tuning, and evaluating Chronos models can be found in [this folder](./scripts/).

## :floppy_disk: Datasets

Datasets used in the Chronos paper for pretraining and evaluation (both in-domain and zero-shot) are available through the HuggingFace repos [`autogluon/chronos_datasets`](https://huggingface.co/datasets/autogluon/chronos_datasets) and [`autogluon/chronos_datasets_extra`](https://huggingface.co/datasets/autogluon/chronos_datasets_extra). Check out these repos for instructions on how to download and use the datasets.
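For instance, a minimal sketch of loading one subset with the Hugging Face `datasets` library; the config name `"m4_hourly"` and the column names are assumptions based on the dataset cards, so check the cards for the authoritative list of subsets and their schemas:

```python
import datasets  # requires: pip install datasets

# Config and column names are assumptions; see the dataset cards for details.
ds = datasets.load_dataset("autogluon/chronos_datasets", "m4_hourly", split="train")
ds.set_format("numpy")  # sequences come back as numpy arrays

entry = ds[0]
print(entry["id"], entry["target"][:10])  # series id and its first ten values
```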
## 🔥 Coverage

- [Adapting language model architectures for time series forecasting](https://www.amazon.science/blog/adapting-language-model-architectures-for-time-series-forecasting) (Amazon Science blog post)
- [Amazon AI Researchers Introduce Chronos: A New Machine Learning Framework for Pretrained Probabilistic Time Series Models](https://www.marktechpost.com/2024/03/15/amazon-ai-researchers-introduce-chronos-a-new-machine-learning-framework-for-pretrained-probabilistic-time-series-models/) (Marktechpost blog post)
- [Chronos: The Rise of Foundation Models for Time Series Forecasting](https://towardsdatascience.com/chronos-the-rise-of-foundation-models-for-time-series-forecasting-aaeba62d9da3) (Towards Data Science blog post by Luís Roque and Rafael Guedes)
- [Moirai: Time Series Foundation Models for Universal Forecasting](https://towardsdatascience.com/moirai-time-series-foundation-models-for-universal-forecasting-dc93f74b330f) (Towards Data Science blog post by Luís Roque and Rafael Guedes, includes a comparison of Chronos with Moirai)
- [Chronos: The Latest Time Series Forecasting Foundation Model by Amazon](https://towardsdatascience.com/chronos-the-latest-time-series-forecasting-foundation-model-by-amazon-2687d641705a) (Towards Data Science blog post by Marco Peixeiro)
  - The original article had a critical bug affecting the metric computation for Chronos. We opened a [pull request](https://github.com/marcopeix/time-series-analysis/pull/10) to fix it.
- [How to Effectively Forecast Time Series with Amazon's New Time Series Forecasting Model](https://towardsdatascience.com/how-to-effectively-forecast-time-series-with-amazons-new-time-series-forecasting-model-9e04d4ccf67e) (Towards Data Science blog post by Eivind Kjosbakken)
- [Chronos: Learning the Language of Time Series](https://minimizeregret.com/linked/2024/03/27/chronos-forecasting/) (Minimize Regret blog post by Tim Radtke)
- [Chronos: Another Zero-Shot Time Series Forecaster LLM](https://levelup.gitconnected.com/chronos-another-zero-shot-time-series-forecaster-llm-0e80753a7ad0) (Level Up Coding blog post by Level Up Coding AI TutorMaster)
- [Paper Review: Chronos: Learning the Language of Time Series](https://andlukyane.com/blog/paper-review-chronos) (Review by Andrey Lukyanenko)
- [Foundation Models for Forecasting: the Future or Folly?](https://insights.radix.ai/blog/foundation-models-for-forecasting-the-future-or-folly) (Blog post by Radix)
- [Learning the Language of Time Series with Chronos](https://medium.com/@ManueleCaddeo/learning-the-language-of-time-series-with-chronos-fea7d0fedde4) (Medium post by Manuele Caddeo)
- [The latest advancement in Time Series Forecasting from AWS: Chronos](https://medium.com/chat-gpt-now-writes-all-my-articles/the-latest-advancement-in-time-series-forecasting-from-aws-chronos-python-code-included-0205d01248f3) (Medium post by Abish Pius)
- [Decoding the Future: How Chronos Redefines Time Series Forecasting with the Art of Language](https://medium.com/@zamalbabar/decoding-the-future-how-chronos-redefines-time-series-forecasting-with-the-art-of-language-cecc2174e400) (Medium post by Zamal)
- [Comparison of Chronos against the SCUM ensemble of statistical models](https://github.com/Nixtla/nixtla/tree/main/experiments/amazon-chronos) (Benchmark by Nixtla)
  - We opened a [pull request](https://github.com/Nixtla/nixtla/pull/281) extending the analysis to 28 datasets (200K+ time series) and showing that **zero-shot** Chronos models perform comparably to this strong ensemble of 4 statistical models, while being significantly faster on average. Our complete response can be [found here](https://www.linkedin.com/pulse/extended-comparison-chronos-against-statistical-ensemble-ansari-4aste/).
- [Comparison of Chronos against a variety of forecasting models](https://www.linkedin.com/feed/update/urn:li:activity:7178398371815051267/) (Benchmark by ReadyTensor)

## 📝 Citation

If you find Chronos models useful for your research, please consider citing the associated [paper](https://arxiv.org/abs/2403.07815):

```
@article{ansari2024chronos,
  title={Chronos: Learning the Language of Time Series},
  author={Ansari, Abdul Fatir and Stella, Lorenzo and Turkmen, Caner and Zhang, Xiyuan and Mercado, Pedro and Shen, Huibin and Shchur, Oleksandr and Rangapuram, Syama Sundar and Pineda Arango, Sebastian and Kapoor, Shubham and Zschiegner, Jasper and Maddix, Danielle C. and Mahoney, Michael W. and Torkkola, Kari and Gordon Wilson, Andrew and Bohlke-Schneider, Michael and Wang, Yuyang},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2024},
  url={https://openreview.net/forum?id=gerNCVqqtR}
}
```

## 🛡️ Security

See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.

## 📃 License

This project is licensed under the Apache-2.0 License.