Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/aryan-jadon/Regression-Loss-Functions-in-Time-Series-Forecasting-Tensorflow
This repository contains the implementation of the paper "Temporal Fusion Transformers for Interpretable Multi-horizon Time Series Forecasting" with different loss functions in Tensorflow. We have compared the performance of 14 regression loss functions on 4 different datasets.
- Host: GitHub
- URL: https://github.com/aryan-jadon/Regression-Loss-Functions-in-Time-Series-Forecasting-Tensorflow
- Owner: aryan-jadon
- Created: 2022-09-21T00:59:13.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2023-01-17T01:10:24.000Z (about 2 years ago)
- Last Synced: 2024-08-02T06:19:47.129Z (6 months ago)
- Topics: deep-learning, forecasting, keras, loss-functions, machine-learning, python, tensorflow, time-series, time-series-forecasting, transformers
- Language: Python
- Homepage:
- Size: 2.52 MB
- Stars: 73
- Watchers: 4
- Forks: 18
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-time-series
README
# Regression Loss Functions Performance Evaluation in Time Series Forecasting using Temporal Fusion Transformers
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7542550.svg)](https://doi.org/10.5281/zenodo.7542550)
```
This repository contains the implementation of the paper "Temporal Fusion Transformers for Interpretable
Multi-horizon Time Series Forecasting" with different loss functions in Tensorflow.
We have compared the performance of 14 regression loss functions on 4 different datasets.
A summary of the experiments, with instructions on how to replicate them, can be found below.
```

## About Temporal Fusion Transformers
Paper Link: https://arxiv.org/pdf/1912.09363.pdf
Authors: Bryan Lim, Sercan Arik, Nicolas Loeff and Tomas Pfister

> Abstract - Multi-horizon forecasting problems often contain a complex mix of inputs -- including static (i.e. time-invariant)
> covariates, known future inputs, and other exogenous time series that are only observed historically -- without any
> prior information on how they interact with the target. While several deep learning models have been proposed for
> multi-step prediction, they typically comprise black-box models which do not account for the full range of inputs
> present in common scenarios. In this paper, we introduce the Temporal Fusion Transformer (TFT) -- a novel
> attention-based architecture which combines high-performance multi-horizon forecasting with interpretable insights
> into temporal dynamics. To learn temporal relationships at different scales, the TFT utilizes recurrent layers for
> local processing and interpretable self-attention layers for learning long-term dependencies.
> The TFT also uses specialized components for the judicious selection of relevant features and a series of gating layers
> to suppress unnecessary components, enabling high performance in a wide range of regimes. On a variety of real-world datasets,
> we demonstrate significant performance improvements over existing benchmarks, and showcase three practical
> interpretability use-cases of TFT.

The majority of this repository's code is taken from https://github.com/google-research/google-research/tree/master/tft.
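The "gating layers" mentioned in the abstract are built from gated linear units. As a rough NumPy illustration of the idea (this is the generic GLU mechanism, not the paper's full Gated Residual Network, and all weights here are random placeholders):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def glu(x, w_lin, b_lin, w_gate, b_gate):
    """Gated linear unit: a linear branch scaled elementwise by a sigmoid
    gate, so the network can learn to suppress unneeded components."""
    return (x @ w_lin + b_lin) * sigmoid(x @ w_gate + b_gate)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                 # batch of 4, feature size 8
w_lin, w_gate = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
b_lin = np.zeros(8)
b_gate = np.full(8, -10.0)                  # strongly negative gate bias
out = glu(x, w_lin, b_lin, w_gate, b_gate)  # gate near 0: outputs suppressed
```

With a strongly negative gate bias the sigmoid saturates near zero and the layer effectively switches the branch off; during training the gate weights and biases learn, per component, what is allowed to pass through.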
## Experiments Summary and Our Paper
### Cite Our Paper
```
@misc{jadon2022comprehensive,
title={A Comprehensive Survey of Regression Based Loss Functions for Time Series Forecasting},
author={Aryan Jadon and Avinash Patil and Shruti Jadon},
year={2022},
eprint={2211.02989},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```

### Paper Link - https://arxiv.org/abs/2211.02989
![Summary of Loss Functions](https://github.com/aryan-jadon/Regression-Loss-Functions-in-Time-Series-Forecasting-Tensorflow/blob/main/loss_functions_plots/Loss-Functions-Summary.png)
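For orientation, a few of the surveyed regression losses can be written out in plain NumPy. These are the textbook forms of MAE, MSE, Huber, and the pinball (quantile) loss, not necessarily the exact implementations used in this repository:

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean absolute error: linear penalty, robust to moderate outliers.
    return np.mean(np.abs(y_true - y_pred))

def mse(y_true, y_pred):
    # Mean squared error: quadratic penalty, dominated by large errors.
    return np.mean((y_true - y_pred) ** 2)

def huber(y_true, y_pred, delta=1.0):
    # Quadratic within +/- delta of zero, linear beyond: a compromise
    # between MSE near the optimum and MAE in the tails.
    err = np.abs(y_true - y_pred)
    return np.mean(np.where(err <= delta,
                            0.5 * err ** 2,
                            delta * (err - 0.5 * delta)))

def quantile_loss(y_true, y_pred, q=0.5):
    # Pinball loss: asymmetric penalty whose minimiser is the q-th quantile.
    e = y_true - y_pred
    return np.mean(np.maximum(q * e, (q - 1) * e))

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.0, 2.0, 5.0])
print(mae(y_true, y_pred))  # absolute errors [0, 0, 2], so 2/3
```

At ``q = 0.5`` the pinball loss reduces to exactly half the MAE, which is a quick sanity check on any implementation.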
## Replicating this Repository and Experiments
### Downloading Data and Running Default Experiments
The key modules for experiments are organised as:
* **data\_formatters**: Stores the main dataset-specific column definitions, along with functions for data transformation and normalization. For compatibility with the TFT, new experiments should implement a unique ``GenericDataFormatter`` (see **base.py**), with examples for the default experiments shown in the other python files.
* **expt\_settings**: Holds the folder paths and configurations for the default experiments.
* **libs**: Contains the main libraries, including classes to manage hyperparameter optimisation (**hyperparam\_opt.py**), the main TFT network class (**tft\_model.py**), and general helper functions (**utils.py**).

Scripts are all saved in the main folder, with descriptions below:
* **run.sh**: Simple shell script to ensure correct environmental setup.
* **script\_download\_data.py**: Downloads data for the main experiment and processes them into csv files ready for training/evaluation.
* **script\_train\_fixed\_params.py**: Calibrates the TFT using a predefined set of hyperparameters, and evaluates for a given experiment.
* **script\_hyperparam\_opt.py**: Runs full hyperparameter optimization using the default random search ranges defined for the TFT.

Our four default experiments are divided into ``volatility``, ``electricity``, ``traffic``, and ``favorita``.
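New datasets plug in through the ``GenericDataFormatter`` abstract class mentioned above. The sketch below is a hypothetical stand-in for that interface (the method names here are illustrative; consult **base.py** for the real abstract methods and column definitions):

```python
import abc
import pandas as pd

class GenericDataFormatter(abc.ABC):
    """Illustrative stand-in for the abstract class in data_formatters/base.py;
    see that file for the actual method names and column definitions."""

    @abc.abstractmethod
    def split_data(self, df):
        """Split raw data into training and test frames."""

    @abc.abstractmethod
    def transform_inputs(self, df):
        """Normalise inputs, e.g. with scalers fitted on the training split."""

class MyDatasetFormatter(GenericDataFormatter):
    """Hypothetical formatter for a new experiment."""

    def split_data(self, df, boundary=300):
        train = df[df["days_from_start"] < boundary]
        test = df[df["days_from_start"] >= boundary]
        return train, test

    def transform_inputs(self, df):
        out = df.copy()
        out["target"] = (out["target"] - out["target"].mean()) / out["target"].std()
        return out
```

The point of the pattern is that all dataset-specific logic (splits, scaling, column roles) lives in one class, so the training scripts stay dataset-agnostic.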
To run these experiments, first download the data, and then run the relevant training routine.

#### Step 1: Download data for default experiments
To download the experiment data, run the following script:

```bash
python3 -m script_download_data $EXPT $OUTPUT_FOLDER
```

where ``$EXPT`` can be any of {``volatility``, ``electricity``, ``traffic``, ``favorita``}, and ``$OUTPUT_FOLDER`` denotes the root folder in which experiment outputs are saved.
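To fetch all four datasets in one pass, the command above can be wrapped in a small driver. A hypothetical dry-run sketch (swap ``print`` for ``subprocess.run(cmd)`` to actually execute, which requires this repository checked out as the working directory; the output path is illustrative):

```python
OUTPUT_FOLDER = "~/tft_outputs"  # illustrative path, not mandated by the repo

def download_commands(output_folder):
    # One download invocation per default experiment.
    experiments = ["volatility", "electricity", "traffic", "favorita"]
    return [
        ["python3", "-m", "script_download_data", expt, output_folder]
        for expt in experiments
    ]

for cmd in download_commands(OUTPUT_FOLDER):
    print(" ".join(cmd))
```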
#### Step 2: Train and evaluate network
To train the network with the optimal default parameters, run:

```bash
python3 -m script_train_fixed_params $EXPT $OUTPUT_FOLDER $USE_GPU
```

where ``$EXPT`` and ``$OUTPUT_FOLDER`` are as above, and ``$USE_GPU`` denotes whether to run with GPU support (options are ``'yes'`` or ``'no'``).
For full hyperparameter optimization, run:
```bash
python3 -m script_hyperparam_opt $EXPT $OUTPUT_FOLDER $USE_GPU yes
```

where options are as above.
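The random search above amounts to repeatedly sampling a configuration from predefined ranges, training, and keeping the best validation loss. A generic, self-contained sketch of that loop (the search space and toy objective below are made up for illustration, not the repository's actual ranges):

```python
import random

# Illustrative search space; the repository defines its own ranges for the TFT.
search_space = {
    "dropout_rate": [0.1, 0.2, 0.3],
    "hidden_layer_size": [80, 160, 240],
    "learning_rate": [1e-4, 1e-3, 1e-2],
}

def random_search(objective, space, n_trials=20, seed=0):
    # Sample one configuration per trial; keep the best objective value.
    rng = random.Random(seed)
    best_params, best_loss = None, float("inf")
    for _ in range(n_trials):
        params = {name: rng.choice(values) for name, values in space.items()}
        loss = objective(params)  # stands in for "train TFT, return val loss"
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss

# Toy objective: smaller dropout and learning rate are "better" here.
best, best_loss = random_search(lambda p: p["dropout_rate"] + p["learning_rate"],
                                search_space)
```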
### Running Experiments with Loss Functions
#### Move the Downloaded Datasets to their Respective Experiment Folders

Then run the experiment script:
```bash
python3 running_experiments.py
```
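Conceptually, a driver like ``running_experiments.py`` just iterates over every (dataset, loss function) pair. The sketch below is purely illustrative of that pattern; the real script's structure, names, and loss list may differ:

```python
from itertools import product

# Illustrative names; the repository's own script defines the real lists.
datasets = ["volatility", "electricity", "traffic", "favorita"]
loss_functions = ["mae", "mse", "huber", "quantile"]  # 4 of the 14 surveyed

def build_runs(datasets, losses):
    # One training/evaluation run per (dataset, loss) combination.
    return [{"dataset": d, "loss": l} for d, l in product(datasets, losses)]

for run in build_runs(datasets, loss_functions):
    print(f"train TFT on {run['dataset']} with {run['loss']} loss")
```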