{"id":13574327,"url":"https://github.com/oneapi-src/demand-forecasting","last_synced_at":"2025-04-04T14:32:19.468Z","repository":{"id":62860474,"uuid":"560595350","full_name":"oneapi-src/demand-forecasting","owner":"oneapi-src","description":"AI Starter Kit for demand forecasting using Intel® Optimized Tensorflow*","archived":true,"fork":false,"pushed_at":"2024-05-08T23:57:39.000Z","size":260,"stargazers_count":9,"open_issues_count":0,"forks_count":6,"subscribers_count":5,"default_branch":"main","last_synced_at":"2024-11-05T09:44:13.389Z","etag":null,"topics":["ai-starter-kit","deep-learning","tensorflow"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"bsd-3-clause","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/oneapi-src.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2022-11-01T20:55:32.000Z","updated_at":"2024-10-28T16:16:39.000Z","dependencies_parsed_at":"2024-05-08T20:42:32.606Z","dependency_job_id":"630a7a2e-4af2-4848-972b-a396d4b101b6","html_url":"https://github.com/oneapi-src/demand-forecasting","commit_stats":null,"previous_names":[],"tags_count":3,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oneapi-src%2Fdemand-forecasting","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oneapi-src%2Fdemand-forecasting/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oneapi-src%2Fdemand-forecasting/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/G
itHub/repositories/oneapi-src%2Fdemand-forecasting/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/oneapi-src","download_url":"https://codeload.github.com/oneapi-src/demand-forecasting/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247194296,"owners_count":20899462,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai-starter-kit","deep-learning","tensorflow"],"created_at":"2024-08-01T15:00:50.491Z","updated_at":"2025-04-04T14:32:18.758Z","avatar_url":"https://github.com/oneapi-src.png","language":"Python","readme":"PROJECT NOT UNDER ACTIVE MANAGEMENT\n\nThis project will no longer be maintained by Intel.\n\nIntel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project.  \n\nIntel no longer accepts patches to this project.\n\nIf you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the open source software community, please create your own fork of this project.  
\n\nContact: webadmin@linux.intel.com\n# Demand Forecasting\r\n\r\n## Introduction\r\nThis reference kit implements an end-to-end (E2E) workflow that, with the help of [Intel® Optimization for TensorFlow*](https://www.intel.com/content/www/us/en/developer/tools/oneapi/optimization-for-tensorflow.html) and [Intel® Neural Compressor](https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html), produces a complex time series model for demand forecasting capable of spanning multiple products and stores all at once.\r\n\r\nCheck out more workflow examples in the [Developer Catalog](https://developer.intel.com/aireferenceimplementations).\r\n\r\n## Solution Technical Overview\r\nThe ability to forecast the demand and thus overcome the problems arising from its variability is one of the top challenges for Supply Chain Managers. When planning the production or the supply of shops, organizations are always managing a short blanket: producing/supplying more to avoid out of stocks implies spending more for production, transportation, and immobilized capital.\r\n\r\nTo model the complex patterns that may be present when understanding product demand, it is often necessary to capture complicated mechanisms such as extended seasonality and long-term correlations, resulting in significant feature engineering to come to a good solution. Modern artificial Intelligence (AI) solutions, such as Deep Neural Networks can drastically aid in this process by becoming automatic feature extractors where the explicit mechanisms do not need to be written down, but rather the AI is allowed to learn them from historical data.  \r\n\r\nIn this use case, we follow this Deep Learning approach and demonstrate how to train and utilize a Convolutional Neural Network and Long Short-Term Memory Network (CNN-LSTM) time series model, which takes in the last 130 days worth of sales data for a specific item at a specific store, to predict the demand one day ahead. 
A trained model can then be used to predict the next day's demand for every item and every store in a given catalog, on a daily basis, using only previous purchase data.

Broadly, we will tackle this problem using the following pipeline:

> Historical Purchase Data => CNN-LSTM Training => CNN-LSTM Inference

The solution contained in this repo uses the following Intel® packages:

* ***Intel® Optimization for TensorFlow\****

  The latest version of [Intel® Optimization for TensorFlow*](https://www.intel.com/content/www/us/en/developer/tools/oneapi/optimization-for-tensorflow.html) is included as part of the Intel® oneAPI AI Analytics Toolkit (AI Kit). This kit provides a comprehensive and interoperable set of AI software libraries to accelerate end-to-end data science and machine-learning workflows.

* ***Intel® Neural Compressor***

  [Intel® Neural Compressor](https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html) performs model compression to reduce the model size and increase the speed of deep learning inference for deployment on CPUs or GPUs.
This open-source Python* library automates popular model compression technologies, such as quantization, pruning, and knowledge distillation, across multiple deep learning frameworks.

  Using this library, you can:

  * Converge quickly on quantized models through automatic accuracy-driven tuning strategies.
  * Prune the least important parameters of large models.
  * Distill knowledge from a larger model to improve the accuracy of a smaller model for deployment.
  * Get started with model compression with one-click analysis and code insertion.

For more details, visit [Intel® Optimization for TensorFlow*](https://www.intel.com/content/www/us/en/developer/tools/oneapi/optimization-for-tensorflow.html), [Intel® Neural Compressor](https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html), and the [Demand Forecasting](https://github.com/oneapi-src/demand-forecasting) GitHub repository.

## Solution Technical Details
The reference kit implementation is a reference solution to the described use case that includes:

  1. An optimized E2E architecture to arrive at an AI solution with a CNN-LSTM model implemented in TensorFlow*.
  2. The Intel® optimizations enabled for TensorFlow* and Intel® Neural Compressor for model quantization.

### Optimized E2E Architecture with Intel® oneAPI Components

![Use_case_flow](assets/e2e-flow-optimized.png)

### Dataset

The dataset used for this demo is a synthetic set of daily purchase counts over a period of five years, characterized by `date`, `item`, `store`, and `sales`, where each feature corresponds to:

- `date`: date of purchase
- `item`: item id
- `store`: store id
- `sales`: amount of `item` purchased at `store`

There is exactly one row per (`date`, `item`, `store`) tuple.

Treated as a time series, this provides a view of how purchases change over time for each item at each store.

### Expected Data Input-Output

| **Input**      | **Output**                               |
| :---: | :---: |
| Past Purchases | Predicted Demand at the selected horizon |

| **Example Input** | **Example Output** |
| :---: | :---: |
| ***Date***, ***Store***, ***Item***, ***Sales*** <br> 1-1-2010, 1, 1, 5 <br> 1-2-2010, 1, 1, 7 <br> 1-3-2010, 1, 1, 9 <br> 1-4-2010, 1, 1, 11 <br> 1-1-2010, 2, 1, 10 <br> 1-2-2010, 2, 1, 15 <br> 1-3-2010, 2, 1, 20 <br> 1-4-2010, 2, 1, 25 | ***Date***, ***Store***, ***Item***, ***Predicted Demand*** <br> 1-5-2010, 1, 1, 13 <br> 1-5-2010, 2, 1, 30 |

## Validated Hardware Details
There are workflow-specific hardware and software setup requirements to run this use case.

| Recommended Hardware                                            | Precision  |
| ----------------------------------------------------------------| ---------- |
| CPU: 2nd Gen Intel® Xeon® Platinum 8280 CPU @ 2.70GHz or higher | FP32, INT8 |
| RAM: 187 GB                                                     |            |
| Recommended Free Disk Space: 20 GB or more                      |            |
Operating System: Ubuntu* 22.04 LTS.

## How it Works

### Model Training

Using the synthetic daily dataset described [here](#dataset), we train a Deep Learning model to forecast the next day's demand using the previous $n$ days. More specifically, to build a forecasting model for $\text{count}[t]$, representing the sales of a given (`item`, `store`) at time $t$, we first transform the time series data to rows of the form:

(`item`, `store`, `count[t-n]`, `count[t - (n - 1)]`, ..., `count[t-1]`, `count[t]`)

Using data of this form, the forecasting problem becomes a regression problem that takes in the demand/sales of the previous $n$ time points and predicts the demand at the current time $t$, representing the forecasting assumption as:

$f(\text{count}[t-n], \text{count}[t - (n - 1)], ..., \text{count}[t-1]) = \text{count}[t]$.

Here, both $n$ and the forecast horizon are configurable beyond the default of $n$ lags and a 1-step-ahead forecast. A CNN-LSTM model is used for this application, as opposed to just an LSTM, to allow the model to learn long-term dependencies better by folding the time series and applying a Convolutional Neural Network (CNN) encoder before modeling the modified sequential dependencies via the Long Short-Term Memory Network (LSTM).

### Model Inference

The saved model from the training process can be used to predict demand on new data of the same format. This expects the previous 130 days' sales counts. A multi-step-ahead forecast can be generated by iterating this process, using predicted values as input and propagating them forward through the model.

## Get Started
Start by **defining an environment variable** that will store the workspace path; this can be an existing directory or one to be created in further steps.
This environment variable will be used for all the commands executed using absolute paths.

[//]: # (capture: baremetal)
```bash
export WORKSPACE=$PWD/demand-forecasting
```

Define `DATA_DIR`, `OUTPUT_DIR`, and `CONFIG_DIR`.

[//]: # (capture: baremetal)
```bash
export DATA_DIR=$WORKSPACE/data
export OUTPUT_DIR=$WORKSPACE/output
export CONFIG_DIR=$WORKSPACE/config
```

### Download the Workflow Repository
Create a working directory for the workflow and clone the [Main Repository](https://github.com/oneapi-src/demand-forecasting) into your working directory.

[//]: # (capture: baremetal)
```bash
mkdir -p $WORKSPACE && cd $WORKSPACE
```

```bash
git clone https://github.com/oneapi-src/demand-forecasting $WORKSPACE
```

### Set Up Conda
To learn more, please visit [install anaconda on Linux](https://docs.anaconda.com/free/anaconda/install/linux/).

```bash
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
```

### Set Up Environment
Install the libmamba solver and set it as the default solver. Run the following commands:

```bash
conda install -n base conda-libmamba-solver -y
conda config --set solver libmamba
```

The [env/intel_env.yml](./env/intel_env.yml) file contains all dependencies needed to create the Intel® environment.

| **Packages required in YAML file** | **Version** |
| :---                               | :--         |
| python                             | 3.9         |
| intel-aikit-tensorflow             | 2024.1      |

Execute the next command to create the conda environment.

```bash
conda env create -f $WORKSPACE/env/intel_env.yml
```

During this setup, the `demand_forecasting_intel` conda environment will be created with the dependencies listed in the YAML configuration.
Use the following command to activate the environment created above:

```bash
conda activate demand_forecasting_intel
```

## Supported Runtime Environment
You can execute this reference pipeline using the following environments:
* Bare Metal

### Run Using Bare Metal
Follow these instructions to set up and run this workflow on your own development system.

#### Set Up System Software
Our examples use the `conda` package and environment manager on your local computer. If you don't already have `conda` installed, go to [Set Up Conda](#set-up-conda) or see the [Conda* Linux installation instructions](https://docs.conda.io/projects/conda/en/stable/user-guide/install/linux.html).

#### Run Workflow
Once we create and activate the `demand_forecasting_intel` environment, we can run the next steps.

***Setting up the data***

The benchmarking scripts expect two files to be present in `data/demand`:

* `data/demand/train.csv`: training data

* `data/demand/test_full.csv`: testing data

To set up the data for benchmarking:

1. Use the `src/generate_data.py` script to generate synthetic data for `demand/train.csv` and `demand/test_full.csv`:

    [//]: # (capture: baremetal)
    ```shell
    python $WORKSPACE/src/generate_data.py --output_dir $DATA_DIR/demand
    ```

#### Model Building Process

As described in the [Model Training](#model-training) section, we first transform the data to the expected regression format and feed this data into our CNN-LSTM model.
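
The lag-window transformation can be sketched as follows. This is a minimal pandas illustration; `make_windows` and the tiny example frame are hypothetical, not part of the kit's scripts:

```python
import pandas as pd

def make_windows(df: pd.DataFrame, n_lags: int) -> pd.DataFrame:
    """Turn (date, item, store, sales) rows into supervised rows of the form
    (store, item, count[t-n], ..., count[t-1]) -> count[t]."""
    rows = []
    for (store, item), grp in df.sort_values("date").groupby(["store", "item"]):
        sales = grp["sales"].to_list()
        for t in range(n_lags, len(sales)):
            rows.append([store, item, *sales[t - n_lags:t], sales[t]])
    lag_cols = [f"lag_{n_lags - i}" for i in range(n_lags)]  # lag_n ... lag_1
    return pd.DataFrame(rows, columns=["store", "item", *lag_cols, "target"])

# Tiny example: one store/item with six days of sales, three lags
df = pd.DataFrame({
    "date": pd.date_range("2010-01-01", periods=6, freq="D"),
    "store": 1, "item": 1, "sales": [5, 7, 9, 11, 13, 15],
})
windows = make_windows(df, n_lags=3)
# First supervised row: lags (5, 7, 9) -> target 11
```

Grouping by (`store`, `item`) before windowing keeps each series separate, so one model can be trained across the whole catalog at once.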
The `run_training.py` script reads and preprocesses the data, trains the model, and saves the model for future inference.

The script takes the following arguments:

```shell
usage: run_training.py [-l LOGFILE] [-s SAVE_MODEL_DIR] [-b BATCH_SIZE] -i INPUT_CSV

options:
  -l LOGFILE, --logfile LOGFILE
                        log file to output benchmarking results to
  -s SAVE_MODEL_DIR, --save_model_dir SAVE_MODEL_DIR
                        directory to save model to
  -b BATCH_SIZE, --batch_size BATCH_SIZE
                        training batch size
  -i INPUT_CSV, --input_csv INPUT_CSV
                        input csv file path
```

As an example of using this to train a model, run the following command:

[//]: # (capture: baremetal)
```shell
python $WORKSPACE/src/run_training.py --save_model_dir $OUTPUT_DIR/saved_models/intel \
  --batch_size 512 --input_csv $DATA_DIR/demand/train.csv -l $OUTPUT_DIR/logs/training_log.txt
```

### Running Inference

The above script trains and saves a model to `save_model_dir`. To use this model to make predictions on new data, a 2-step process is necessary to optimize performance:

1. Convert the saved model from a Keras* saved model to a TensorFlow* frozen graph.
To do this, execute the script `convert_keras_to_frozen_graph.py`, which takes the following arguments:

```shell
usage: convert_keras_to_frozen_graph.py -s KERAS_SAVED_MODEL_DIR -o OUTPUT_SAVED_DIR

options:
  -s KERAS_SAVED_MODEL_DIR, --keras_saved_model_dir KERAS_SAVED_MODEL_DIR
                        directory with saved keras model
  -o OUTPUT_SAVED_DIR, --output_saved_dir OUTPUT_SAVED_DIR
                        directory to save frozen graph to
```

For the above saved model, run the command:

[//]: # (capture: baremetal)
```shell
python $WORKSPACE/src/convert_keras_to_frozen_graph.py -s $OUTPUT_DIR/saved_models/intel \
  -o $OUTPUT_DIR/saved_models/intel
```

The `convert_keras_to_frozen_graph.py` script takes in the saved Keras\* model and outputs a frozen graph in the same directory, called `saved_models/intel/saved_frozen_model.pb`.

2. Once the frozen graph is saved, the model can be used to perform inference using the `run_inference.py` script, which has the following arguments:

```shell
usage: run_inference.py [-l LOGFILE] -s SAVED_FROZEN_MODEL [-b BATCH_SIZE]
    --input_file INPUT_FILE [--benchmark_mode] [-n NUM_ITERS]

options:
  -l LOGFILE, --logfile LOGFILE
                        log file to output benchmarking results to
  -s SAVED_FROZEN_MODEL, --saved_frozen_model SAVED_FROZEN_MODEL
                        saved frozen graph
  -b BATCH_SIZE, --batch_size BATCH_SIZE
                        batch size to use
  -i INPUT_FILE, --input_file INPUT_FILE
                        input csv data file
  --benchmark_mode      benchmark inference time
  -n NUM_ITERS, --num_iters NUM_ITERS
                        number of iterations to use when benchmarking
```

Now, run the inference on the data file `data/demand/test_full.csv`:

[//]: # (capture: baremetal)
```shell
python $WORKSPACE/src/run_inference.py \
  --input_file $DATA_DIR/demand/test_full.csv \
  --saved_frozen_model $OUTPUT_DIR/saved_models/intel/saved_frozen_model.pb \
  --batch_size 512 --benchmark_mode -l $OUTPUT_DIR/logs/inference_log.txt
```

On larger datasets and more complex models, the gains become more apparent.

#### Post Training Optimization with Intel® Neural Compressor

In scenarios where the model or data become very large, such as when there are a huge number of stores and items and the model is expanded to capture more complex phenomena, it may be desirable to further optimize the latency and throughput of the model. For these scenarios, one method is to utilize model quantization techniques via Intel® Neural Compressor.

Model quantization is the practice of converting the FP32 weights in Deep Neural Networks to a lower precision, such as INT8, in order **to accelerate computation time and reduce the storage space of trained models**. This may be useful if latency and throughput are critical. Intel® offers multiple algorithms and packages for quantizing trained models.
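
Intel® Neural Compressor automates this process; conceptually, the core idea can be sketched in a few lines of NumPy. This is symmetric per-tensor quantization, an illustration only and not the library's actual algorithm:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map FP32 weights onto the INT8 range [-127, 127] with a single scale."""
    scale = float(np.max(np.abs(w))) / 127.0
    q = np.round(w / scale).clip(-127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an FP32-comparable approximation of the original weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# INT8 weights use 4x less storage; the rounding error per weight is at most scale/2
```

Accuracy-aware tuning, as performed by the library, additionally measures the quantized model against an accuracy target and falls back to FP32 for layers that degrade it.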
This reference implementation includes the `run_quantize_inc.py` script, which can be executed after saving the frozen graph to attempt accuracy-aware quantization on the trained model.

The `run_quantize_inc.py` script takes the following arguments:

```shell
usage: run_quantize_inc.py --saved_frozen_graph SAVED_FROZEN_GRAPH --output_dir OUTPUT_DIR
    --inc_config_file INC_CONFIG_FILE --input_csv INPUT_CSV

options:
  --saved_frozen_graph SAVED_FROZEN_GRAPH
                        saved pretrained frozen graph to quantize
  --output_dir OUTPUT_DIR
                        directory to save quantized model
  --inc_config_file INC_CONFIG_FILE
                        INC conf yaml
  --input_csv INPUT_CSV
                        input csv dataset file path
```

Execute model quantization on the `saved_frozen_model.pb` file as follows:

[//]: # (capture: baremetal)
```shell
python $WORKSPACE/src/run_quantize_inc.py \
  --saved_frozen_graph $OUTPUT_DIR/saved_models/intel/saved_frozen_model.pb \
  --output_dir $OUTPUT_DIR/saved_models/intel --inc_config_file $CONFIG_DIR/conf.yaml \
  --input_csv $DATA_DIR/demand/train.csv
```

The output is a quantized model saved in the `saved_models/intel/saved_frozen_int8_model.pb` file. This model is typically smaller, at a minor cost to accuracy.

Inference on this newly quantized model can be performed identically to before, pointing the script to the saved quantized graph.

[//]: # (capture: baremetal)
```shell
python $WORKSPACE/src/run_inference.py --input_file $DATA_DIR/demand/test_full.csv \
  --saved_frozen_model $OUTPUT_DIR/saved_models/intel/saved_frozen_int8_model.pb \
  --batch_size 512 --benchmark_mode -l $OUTPUT_DIR/logs/quantize_inc_inference_log.txt
```

#### Clean Up Bare Metal
Follow these steps to restore your `$WORKSPACE` directory to its initial state.
Please note that all downloaded or created dataset files, the conda environment, and logs created by the workflow will be deleted. Back up your important files before executing the next steps.

```shell
# activate base environment
conda activate base
# delete the conda environment created by this workflow
conda env remove -n demand_forecasting_intel
```

```shell
# remove all data generated
rm -rf $DATA_DIR/demand
# remove all outputs generated
rm -rf $OUTPUT_DIR
```

### Expected Output
The `$OUTPUT_DIR/logs/training_log.txt` file shows values for `Train RMSE`, `Validation RMSE`, and `Train time`:

```shell
INFO:root:Starting training on 639009 samples with batch size 512...
INFO:root:======> Train RMSE: 7.5889
INFO:root:======> Validation RMSE: 7.5917
INFO:root:=======> Train time : 467 seconds
INFO:root:Saving model...
INFO:tensorflow:Assets written to: demand-forecasting/output/saved_models/intel/
```

The `$OUTPUT_DIR/logs/inference_log.txt` file contains the value for `Average Inference Time`:

```shell
INFO:root:Starting inference on batch size 512 for 100 iterations
INFO:root:=======> Average Inference Time : 0.007982 seconds
```

Inference on the newly quantized model creates the `$OUTPUT_DIR/logs/quantize_inc_inference_log.txt` file, which contains the `Average Inference Time` value:

```shell
INFO:root:Starting inference on batch size 512 for 100 iterations
INFO:root:=======> Average Inference Time : 0.005064 seconds
```

Generated models will be saved in `$OUTPUT_DIR/saved_models/intel/`:

```shell
assets/
fingerprint.pb
keras_metadata.pb
saved_frozen_int8_model.pb
saved_frozen_model.pb
saved_model.pb
variables/
```

## Summary and Next Steps
Demand forecasting is a pivotal and ever-present component of many business decisions.
The ability of an analyst to quickly train a complex model and forecast into the future, whether to analyze a hypothesis or to make key decisions, can heavily impact day-to-day operations and the time to market of products. This reference kit provides a performance-optimized guide for Deep Learning demand forecasting use cases that can easily be scaled across similar use cases.

Training is conducted using Intel® Optimization for TensorFlow* to accelerate performance using oneDNN optimizations.

When it is necessary to forecast demand across a large catalog of items and stores, it is often desirable to predict in large batches to process the workload as quickly as possible. For more complex models, it may also be desirable to reduce the size of the model, with minimal accuracy impact, to scale the solution. This use case utilizes model quantization techniques from Intel® Neural Compressor.

## Learn More
For more information about this workflow, or to read about other relevant workflow examples, see these guides and software resources:

- [Intel® AI Analytics Toolkit (AI Kit)](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html)
- [Intel® Distribution for Python*](https://www.intel.com/content/www/us/en/developer/tools/oneapi/distribution-for-Python*.html)
- [Intel® Optimization for TensorFlow*](https://www.intel.com/content/www/us/en/developer/tools/oneapi/optimization-for-tensorflow.html)
- [Intel® Neural Compressor](https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html)

## Support
If you have questions or issues about this use case, want help with troubleshooting, or want to report a bug or submit an enhancement request, please submit a GitHub issue.

## Appendix
\*Names and brands that may be claimed as the property of others.
[Trademarks](https://www.intel.com/content/www/us/en/legal/trademarks.html).

### Disclaimers
To the extent that any public or non-Intel datasets or models are referenced by or accessed using tools or code on this site, those datasets or models are provided by the third party indicated as the content source. Intel does not create the content and does not warrant its accuracy or quality. By accessing the public content, or using materials trained on or with such content, you agree to the terms associated with that content and that your use complies with the applicable license.

Intel expressly disclaims the accuracy, adequacy, or completeness of any such public content, and is not liable for any errors, omissions, or defects in the content, or for any reliance on the content. Intel is not liable for any liability or damages relating to your use of public content.