https://github.com/lucacappelletti94/plot_keras_history
A simple Python package to plot a Keras neural network training history.
- Host: GitHub
- URL: https://github.com/lucacappelletti94/plot_keras_history
- Owner: LucaCappelletti94
- License: MIT
- Created: 2019-05-05T16:50:09.000Z (about 6 years ago)
- Default Branch: master
- Last Pushed: 2024-12-09T10:44:44.000Z (6 months ago)
- Last Synced: 2025-05-05T23:12:07.186Z (about 1 month ago)
- Language: Python
- Size: 12 MB
- Stars: 18
- Watchers: 1
- Forks: 4
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Funding: .github/FUNDING.yml
- License: LICENSE
- Citation: CITATION.cff
# Plot Keras History
[PyPI](https://pypi.org/project/plot-keras-history/)
[License](https://github.com/LucaCappelletti94/plot_keras_history/blob/master/LICENSE)
[Downloads](https://pepy.tech/project/plot-keras-history)
[CI](https://github.com/LucaCappelletti94/plot_keras_history/actions/)

A Python package to plot a [Keras model training history](https://keras.io/callbacks/#history).
## How do I install this package?
As usual, just install it using pip:
```bash
pip install plot_keras_history
```

## Usage
The examples below build a minimal Keras model inline for illustration.
### Plotting a training history
In the following example, we will see how to plot and either show or save the training history:

```python
from keras.models import Sequential
from keras.layers import Dense
import matplotlib.pyplot as plt
import numpy as np
from plot_keras_history import show_history, plot_history

model = Sequential([
    Dense(1, activation="sigmoid")
])
model.compile(
    optimizer="nadam",
    loss="binary_crossentropy"
)
X = np.random.uniform(size=(100, 100))
y = np.random.randint(2, size=(100))
history = model.fit(
    X[:50], y[:50],
    validation_data=(X[50:], y[50:]),
    epochs=10,
    verbose=False
)

# Display the plots interactively.
show_history(history)
# Save the plots to an image file.
plot_history(history, path="standard.png")
plt.close()
```

### Plotting into separate graphs
By default, all the graphs are combined into one large image, but you may need them as separate files:
```python
from keras.models import Sequential
from keras.layers import Dense
import matplotlib.pyplot as plt
import numpy as np
from plot_keras_history import plot_history

model = Sequential([
    Dense(1, activation="sigmoid")
])
model.compile(
    optimizer="nadam",
    loss="binary_crossentropy"
)
X = np.random.uniform(size=(100, 100))
y = np.random.randint(2, size=(100))
history = model.fit(
    X[:50], y[:50],
    validation_data=(X[50:], y[50:]),
    epochs=10,
    verbose=False
)

# Save one image per metric under the "singleton" path.
plot_history(history, path="singleton", single_graphs=True)
plt.close()
```

### Plotting multiple histories
Suppose you are training your model on multiple holdouts and want to plot all of them, plus an average. Fortunately, we've got you covered!

```python
from keras.models import Sequential
from keras.layers import Dense
import matplotlib.pyplot as plt
import numpy as np
from plot_keras_history import plot_history

histories = []

for holdout in range(10):
    model = Sequential([
        Dense(1, activation="sigmoid")
    ])
    model.compile(
        optimizer="nadam",
        loss="binary_crossentropy"
    )
    X = np.random.uniform(size=(100, 100))
    y = np.random.randint(2, size=(100))
    history = model.fit(
        X[:50], y[:50],
        validation_data=(X[50:], y[50:]),
        epochs=10,
        verbose=False
    )
    histories.append(history)

# Plot every holdout plus their average.
plot_history(
    histories,
    show_standard_deviation=False,
    show_average=True
)
plt.close()
```

### Reducing the history noise with Savgol filters
Noisy histories can be easier to read after smoothing. The `interpolate` parameter automatically applies a Savitzky-Golay (Savgol) filter:

```python
from keras.models import Sequential
from keras.layers import Dense
import matplotlib.pyplot as plt
import numpy as np
from plot_keras_history import plot_history

model = Sequential([
    Dense(1, activation="sigmoid")
])
model.compile(
    optimizer="nadam",
    loss="binary_crossentropy"
)
X = np.random.uniform(size=(100, 100))
y = np.random.randint(2, size=(100))
history = model.fit(
    X[:50], y[:50],
    validation_data=(X[50:], y[50:]),
    epochs=10,
    verbose=False
)

# Smooth the curves with a Savgol filter before plotting.
plot_history(history, path="interpolated.png", interpolate=True)
plt.close()
```

### Automatic aliases
Metrics such as `"lr"` (Learning Rate) or `"acc"` (Accuracy) are automatically renamed to more descriptive labels.
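The actual alias table lives inside the library; the sketch below is only an illustration of the idea, with a hypothetical mapping that is not the package's real one:

```python
# Hypothetical alias table: terse Keras metric names are mapped to
# human-readable labels before being used as plot titles/axis labels.
METRIC_ALIASES = {
    "lr": "Learning Rate",
    "acc": "Accuracy",
    "val_acc": "Validation Accuracy",
}

def label_for(metric_name: str) -> str:
    """Return a descriptive label, falling back to the raw metric name."""
    return METRIC_ALIASES.get(metric_name, metric_name)

print(label_for("acc"))   # Accuracy
print(label_for("loss"))  # loss
```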
### Automatic normalization
The library normalizes the ranges of metrics known to be in `[-1, 1]` or `[0, 1]` to avoid visual biases.
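A minimal sketch of that idea, assuming a fixed table of known metric ranges (the table and function below are illustrative, not the package's actual implementation):

```python
# Hypothetical table of known metric ranges; metrics listed here get a
# fixed y-axis so that tiny fluctuations are not visually exaggerated.
KNOWN_RANGES = {
    "accuracy": (0.0, 1.0),
    "auc": (0.0, 1.0),
    "matthews_correlation_coefficient": (-1.0, 1.0),
}

def y_limits(metric_name, values):
    """Use the known range when available, else fit the plotted data."""
    if metric_name in KNOWN_RANGES:
        return KNOWN_RANGES[metric_name]
    return min(values), max(values)

print(y_limits("accuracy", [0.48, 0.52]))  # (0.0, 1.0)
print(y_limits("loss", [0.7, 0.3]))        # (0.3, 0.7)
```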
### All the available options
```python
from keras.models import Sequential
from keras.layers import Dense
import matplotlib.pyplot as plt
import numpy as np
from plot_keras_history import plot_history

model = Sequential([
    Dense(1, activation="sigmoid")
])
model.compile(
    optimizer="nadam",
    loss="binary_crossentropy"
)
X = np.random.uniform(size=(100, 100))
y = np.random.randint(2, size=(100))
history = model.fit(
    X[:50], y[:50],
    validation_data=(X[50:], y[50:]),
    epochs=10,
    verbose=False
)

plot_history(
    history,
    style="-",                    # Line style.
    interpolate=True,             # Whether to interpolate graph datapoints.
    side=5,                       # Size of each graph.
    graphs_per_row=4,             # Number of graphs per row.
    customization_callback=None,  # Callback for customizing the graphs.
    path="interpolated.png",      # Save path for the resulting image or images (for single_graphs).
    single_graphs=False           # Whether to save as single or multiple graphs.
)
plt.close()
```

### Chaining histories
If you stop and restart a model's training, Keras returns a separate `History` object for each run. Use [`chain_histories`](https://github.com/LucaCappelletti94/plot_keras_history/blob/dd590ce7f89b2a52236f231a9a6377b3e1d76489/plot_keras_history/utils.py#L3-L8) to merge them into a single history; conceptually, it concatenates the per-epoch metric lists of the two runs:
```python
from keras.models import Sequential
from keras.layers import Dense
import numpy as np
from plot_keras_history import chain_histories

model = Sequential([
    Dense(1, activation="sigmoid")
])
model.compile(
    optimizer="nadam",
    loss="binary_crossentropy"
)
X = np.random.uniform(size=(100, 100))
y = np.random.randint(2, size=(100))

# First training run.
history1 = model.fit(
    X[:50], y[:50],
    validation_data=(X[50:], y[50:]),
    epochs=10,
    verbose=False
)
# Training resumed later: Keras returns a fresh History object.
history2 = model.fit(
    X[:50], y[:50],
    validation_data=(X[50:], y[50:]),
    epochs=10,
    verbose=False
)
history = chain_histories(history1, history2)
```

## Extras
Numerous additional metrics are available in [`extra_keras_metrics`](https://github.com/LucaCappelletti94/extra_keras_metrics).
## Cite this software
If you need a bib file to cite this work:
```bibtex
@software{Cappelletti_Plot_Keras_History_2022,
  author = {Cappelletti, Luca},
  doi = {10.5072/zenodo.1054923},
  month = {4},
  title = {{Plot Keras History}},
  version = {1.1.36},
  year = {2022}
}
```