# Learn machine learning by building all the fundamentals from scratch - with Python

## Welcome!

Learn machine learning from the ground up - using Python and a handful of fundamental tools.

This repository contains a range of resources associated with the 2nd edition of the university textbook Machine Learning Refined. [Our pedagogical approach](#our-pedagogy) stresses intuition, visualization, and "getting your hands dirty" by building real machine learning models from scratch. The only [technical prerequisites](#technical-prerequisites) are a basic understanding of Python and of matrix math.

## Independent learners

Are you an independent learner aiming to build a solid foundation in machine learning?
Get started fast by

- [Downloading a free PDF of the book](https://github.com/neonwatty/machine-learning-refined/tree/main/chapter_pdfs)
- [Examining the online notes](#online-notes), and
- Taking a look at our [getting started guide](#how-to-use-the-book) to using the text to help you best achieve your goals

When you're ready - follow our [installation instructions](#software-installation-and-dependencies) to run the notes locally or [get started on a range of exercises](https://github.com/neonwatty/machine-learning-refined/tree/main/exercises). Always feel free to reach out to us with questions by [filing an issue in this repository](https://github.com/neonwatty/machine-learning-refined/issues). A physical copy of the text may be procured via [online retailers like these](#get-a-physical-copy-of-the-book).

## Instructors

Are you an instructor looking to use the text in an upcoming course? [You're in good company!](#reviews-and-endorsements)

Get started fast by

- [Downloading a free PDF of the book](https://github.com/neonwatty/machine-learning-refined/tree/main/chapter_pdfs)
- Examining our [getting started guide](#how-to-use-the-book) to help you best employ the text in your course
- [Perusing a sample of widgets from the notes](#a-sample-of-widgets-from-the-notes)
- [Examining the online notes](#online-notes)
- [Downloading PPTX slides associated with each section of the text](https://github.com/neonwatty/machine-learning-refined/tree/main/presentations)
- [Downloading exercise wrappers and datasets](https://github.com/neonwatty/machine-learning-refined/tree/main/exercises)

Instructor requests are most easily answered by [filing an issue in this repository](https://github.com/neonwatty/machine-learning-refined/issues).

Verified college and university instructors may request a free physical copy of the text for examination via the [publisher's website](http://cambridge.force.com/Samples?isbn=9781108480727&Title=Machine%20Learning%20Refined&Author=Watt/Borhani/Katsaggelos).

## Our pedagogy

[(Back to top)](#welcome)

We believe mastery of a machine learning concept/topic is achieved only when the answer to each of the following three questions is affirmative.

1. **`Intuition`** Can you describe the idea with a simple picture?
2. **`Mathematical derivation`** Can you express your intuition in mathematical notation and derive underlying models/cost functions?
3. **`Implementation`** Can you code up your derivations in a programming language, say Python, without using high-level libraries?

We wrote our text with the aim of empowering readers to grasp the concepts of classic machine learning to this standard.
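As a concrete taste of the third question, here is a minimal sketch of gradient descent minimizing the simple cost g(w) = w², written in plain Python with no libraries at all. This is an illustrative fragment of our own for this README, not code lifted from the text:

```python
# Minimal gradient descent on g(w) = w**2, whose minimum sits at w = 0.
# Intuition: roll downhill. Math: d/dw w^2 = 2w. Code: the loop below.

def g(w):
    """Cost function to minimize."""
    return w ** 2

def grad_g(w):
    """Hand-derived gradient of g."""
    return 2 * w

def gradient_descent(w0, alpha=0.1, steps=100):
    """Repeatedly step against the gradient from starting point w0."""
    w = w0
    for _ in range(steps):
        w = w - alpha * grad_g(w)
    return w

w_final = gradient_descent(5.0)  # ends up very close to 0
```

The same three-part structure (picture, derivative, loop) scales to every model in the text.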

## Online notes

[(Back to top)](#welcome)

Early drafts of the 2nd edition were released as Jupyter / Colab notebooks and are shared below.

While these allow for interesting and unique interactivity, the final draft significantly expands on this content and [is available for download as a PDF here](https://github.com/neonwatty/machine-learning-refined/tree/main/chapter_pdfs).





### Chapter 1. Introduction to Machine Learning ([Download Chapter PDF πŸ“„](https://github.com/neonwatty/machine-learning-refined/tree/main/chapter_pdfs/2nd_ed))

1.1 Introduction

1.2 Distinguishing Cats from Dogs: a Machine Learning Approach

1.3 The Basic Taxonomy of Machine Learning Problems

1.4 Mathematical Optimization

1.5 Conclusion

### Chapter 2. Zero-Order Optimization Techniques ([Download Chapter PDF πŸ“„](https://github.com/neonwatty/machine-learning-refined/tree/main/chapter_pdfs/2nd_ed))

[2.1 Introduction](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/2_Zero_order_methods/2_1_Introduction.ipynb) [![2.1](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/2_Zero_order_methods/2_1_Introduction.ipynb) \
[2.2 The Zero-Order Optimality Condition](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/2_Zero_order_methods/2_2_Zero.ipynb) [![2.2](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/2_Zero_order_methods/2_2_Zero.ipynb) \
[2.3 Global Optimization Methods](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/2_Zero_order_methods/2_3_Global.ipynb) [![2.3](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/2_Zero_order_methods/2_3_Global.ipynb) \
[2.4 Local Optimization Methods](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/2_Zero_order_methods/2_4_Local.ipynb) [![2.4](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/2_Zero_order_methods/2_4_Local.ipynb) \
[2.5 Random Search](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/2_Zero_order_methods/2_5_Random.ipynb) [![2.5](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/2_Zero_order_methods/2_5_Random.ipynb) \
[2.6 Coordinate Search and Descent](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/2_Zero_order_methods/2_6_Coordinate.ipynb) [![2.6](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/2_Zero_order_methods/2_6_Coordinate.ipynb) \
2.7 Conclusion

2.8 Exercises
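To give a flavor of the chapter's theme, here is a small from-scratch sketch (ours, not the book's reference implementation) of random search, a zero-order method: probe a handful of random directions each step and move only when the cost decreases, using no derivatives at all:

```python
import math
import random

def g(w):
    """Simple convex cost with its minimum at the origin."""
    return w[0] ** 2 + w[1] ** 2

def random_search(g, w, alpha=0.1, num_samples=25, steps=300, seed=0):
    """Zero-order random search with a fixed step length alpha."""
    rng = random.Random(seed)
    for _ in range(steps):
        best_w, best_val = w, g(w)
        for _ in range(num_samples):
            theta = rng.uniform(0, 2 * math.pi)  # random unit direction
            cand = [w[0] + alpha * math.cos(theta),
                    w[1] + alpha * math.sin(theta)]
            if g(cand) < best_val:
                best_w, best_val = cand, g(cand)
        w = best_w  # move only if some probe lowered the cost
    return w

w_final = random_search(g, [3.0, 4.0])  # drifts toward the origin
```

Because the step length is fixed, the method stalls within roughly `alpha` of the minimum, which motivates the diminishing step sizes discussed in the chapter.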

### Chapter 3. First-Order Optimization Techniques ([Download Chapter PDF πŸ“„](https://github.com/neonwatty/machine-learning-refined/tree/main/chapter_pdfs/2nd_ed))

[3.1 Introduction](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/3_1_Introduction.ipynb) [![3.1](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/3_1_Introduction.ipynb) \
[3.2 The First-Order Optimality Condition](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/3_2_First.ipynb) [![3.2](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/3_2_First.ipynb) \
[3.3 The Geometry of First-Order Taylor Series](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/3_4_Geometry.ipynb) [![3.3](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/3_4_Geometry.ipynb) \
3.4 Computing Gradients Efficiently \
[3.5 Gradient Descent](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/3_5_Descent.ipynb) [![3.5](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/3_5_Descent.ipynb) \
[3.6 Two Natural Weaknesses of Gradient Descent](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/3_6_Problems.ipynb) [![3.6](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/3_6_Problems.ipynb) \
3.7 Conclusion

3.8 Exercises

### Chapter 4. Second-Order Optimization Techniques ([Download Chapter PDF πŸ“„](https://github.com/neonwatty/machine-learning-refined/tree/main/chapter_pdfs/2nd_ed))

[4.1 The Second-Order Optimality Condition](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/4_Second_order_methods/4_1_Second.ipynb) [![4.1](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/4_Second_order_methods/4_1_Second.ipynb) \
[4.2 The Geometry of Second-Order Taylor Series](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/4_Second_order_methods/4_2_Quadratic.ipynb) [![4.2](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/4_Second_order_methods/4_2_Quadratic.ipynb) \
[4.3 Newton’s Method](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/4_Second_order_methods/4_3_Newtons.ipynb) [![4.3](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/4_Second_order_methods/4_3_Newtons.ipynb) \
[4.4 Two Natural Weaknesses of Newton’s Method](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/4_Second_order_methods/4_4_Problems.ipynb) [![4.4](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/4_Second_order_methods/4_4_Problems.ipynb) \
4.5 Conclusion

4.6 Exercises
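The heart of this chapter can be sketched in a few lines. Newton's method repeatedly jumps to the minimum of the local second-order Taylor approximation, i.e., w ← w − g′(w)/g″(w) in one dimension. The snippet below is an illustrative sketch of ours with a hand-derived derivative pair, not the book's code:

```python
def newtons_method(grad, hess, w, steps=40):
    """One-dimensional Newton's method: w <- w - g'(w)/g''(w)."""
    for _ in range(steps):
        w = w - grad(w) / hess(w)
    return w

# Example: g(w) = w**4, so g'(w) = 4w^3 and g''(w) = 12w^2; minimum at w = 0.
# Each update becomes w <- w - w/3, shrinking w geometrically toward 0.
w_final = newtons_method(lambda w: 4 * w ** 3,
                         lambda w: 12 * w ** 2,
                         w=1.0)
```

In higher dimensions the division becomes a linear solve against the Hessian matrix, which is exactly where the method's cost, and the fixes discussed in the chapter, come from.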

### Chapter 5. Linear Regression ([Download Chapter PDF πŸ“„](https://github.com/neonwatty/machine-learning-refined/tree/main/chapter_pdfs/2nd_ed))

5.1 Introduction

[5.2 Least Squares Linear Regression](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/5_Linear_regression/5_2_Least.ipynb) [![5.2](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/5_Linear_regression/5_2_Least.ipynb) \
[5.3 Least Absolute Deviations](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/5_Linear_regression/5_3_Absolute.ipynb) [![5.3](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/5_Linear_regression/5_3_Absolute.ipynb) \
[5.4 Regression Quality Metrics](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/5_Linear_regression/5_4_Metrics.ipynb) [![5.4](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/5_Linear_regression/5_4_Metrics.ipynb) \
[5.5 Weighted Regression](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/5_Linear_regression/5_5_Weighted.ipynb) [![5.5](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/5_Linear_regression/5_5_Weighted.ipynb) \
[5.6 Multi-Output Regression](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/5_Linear_regression/5_6_Multi.ipynb) [![5.6](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/5_Linear_regression/5_6_Multi.ipynb) \
5.7 Conclusion

5.8 Exercises

[5.9 Endnotes: Probabilistic interpretation of linear regression](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/5_Linear_regression/5_7_Probabilistic.ipynb) [![5.9](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/5_Linear_regression/5_7_Probabilistic.ipynb)
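A tiny worked example of the chapter's core idea: the least squares cost is minimized in closed form by solving the normal equations (XᵀX)w = Xᵀy. The data below is made up for illustration; this is our sketch, not the book's implementation:

```python
import numpy as np

# Design matrix with a bias column and one input feature,
# and outputs that lie exactly on the line y = 1 + 2x.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])

# Solve the normal equations (X^T X) w = X^T y for w = [intercept, slope].
w = np.linalg.solve(X.T @ X, X.T @ y)  # recovers [1.0, 2.0]
```

For large or poorly conditioned problems the text instead minimizes the same cost with the iterative methods of Chapters 2 through 4.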

### Chapter 6. Linear Two-Class Classification ([Download Chapter PDF πŸ“„](https://github.com/neonwatty/machine-learning-refined/tree/main/chapter_pdfs/2nd_ed))

6.1 Introduction

[6.2 Logistic Regression and the Cross Entropy Cost](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/6_Linear_twoclass_classification/6_2_Cross_entropy.ipynb) [![6.2](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/6_Linear_twoclass_classification/6_2_Cross_entropy.ipynb) \
[6.3 Logistic Regression and the Softmax Cost](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/6_Linear_twoclass_classification/6_3_Softmax.ipynb) [![6.3](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/6_Linear_twoclass_classification/6_3_Softmax.ipynb) \
[6.4 The Perceptron](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/6_Linear_twoclass_classification/6_4_Perceptron.ipynb) [![6.4](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/6_Linear_twoclass_classification/6_4_Perceptron.ipynb) \
[6.5 Support Vector Machines](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/6_Linear_twoclass_classification/6_5_SVMs.ipynb) [![6.5](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/6_Linear_twoclass_classification/6_5_SVMs.ipynb) \
[6.6 Which Approach Produces the Best Results?](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/6_Linear_twoclass_classification/6_6_Categorical.ipynb) [![6.6](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/6_Linear_twoclass_classification/6_6_Categorical.ipynb) \
[6.7 The Categorical Cross Entropy Cost](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/6_Linear_twoclass_classification/6_7_Comparison.ipynb) [![6.7](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/6_Linear_twoclass_classification/6_7_Comparison.ipynb) \
[6.8 Classification Quality Metrics](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/6_Linear_twoclass_classification/6_8_Metrics.ipynb) [![6.8](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/6_Linear_twoclass_classification/6_8_Metrics.ipynb) \
[6.9 Weighted Two-Class Classification](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/5_Linear_regression/5_7_Probabilistic.ipynb) [![6.9](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/5_Linear_regression/5_7_Probabilistic.ipynb) \
6.10 Conclusion

6.11 Exercises
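To preview the chapter's approach, here is a from-scratch sketch (ours, with a tiny hand-made dataset) of two-class linear classification via the softmax cost g(w) = Σₚ log(1 + exp(−yₚ xₚᵀw)) with labels ±1, minimized by plain gradient descent:

```python
import numpy as np

# Four 1-D points with a bias feature prepended; labels in {-1, +1}.
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])

def softmax_cost_grad(w):
    """Gradient of sum_p log(1 + exp(-y_p * x_p^T w))."""
    s = -y * (X @ w)                # one margin term per point
    sig = 1.0 / (1.0 + np.exp(-s))  # sigmoid of each margin
    return X.T @ (-y * sig)

w = np.zeros(2)
for _ in range(500):
    w -= 0.1 * softmax_cost_grad(w)

preds = np.sign(X @ w)  # perfectly separates this toy dataset
```

The same loop, with the cost function swapped out, covers the perceptron and margin-based costs treated later in the chapter.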

### Chapter 7. Linear Multi-Class Classification ([Download Chapter PDF πŸ“„](https://github.com/neonwatty/machine-learning-refined/tree/main/chapter_pdfs/2nd_ed))

7.1 Introduction

[7.2 One-versus-All Multi-Class Classification](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/7_Linear_multiclass_classification/7_2_OvA.ipynb) [![7.2](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/7_Linear_multiclass_classification/7_2_OvA.ipynb)\
[7.3 Multi-Class Classification and the Perceptron](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/7_Linear_multiclass_classification/7_3_Perceptron.ipynb) [![7.3](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/7_Linear_multiclass_classification/7_3_Perceptron.ipynb) \
[7.4 Which Approach Produces the Best Results?](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/7_Linear_multiclass_classification/7_4_Comparison.ipynb) [![7.4](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/7_Linear_multiclass_classification/7_4_Comparison.ipynb) \
[7.5 The Categorical Cross Entropy Cost Function](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/7_Linear_multiclass_classification/7_5_Categorical.ipynb) [![7.5](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/7_Linear_multiclass_classification/7_5_Categorical.ipynb) \
[7.6 Classification Quality Metrics](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/7_Linear_multiclass_classification/7_6_Metrics.ipynb) [![7.6](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/7_Linear_multiclass_classification/7_6_Metrics.ipynb) \
7.7 Weighted Multi-Class Classification

[7.8 Stochastic and Mini-Batch Learning](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/7_Linear_multiclass_classification/7_8_Minibatch.ipynb) [![7.8](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/7_Linear_multiclass_classification/7_8_Minibatch.ipynb) \
7.9 Conclusion

7.10 Exercises

### Chapter 8. Linear Unsupervised Learning ([Download Chapter PDF πŸ“„](https://github.com/neonwatty/machine-learning-refined/tree/main/chapter_pdfs/2nd_ed))

8.1 Introduction

[8.2 Fixed Spanning Sets, Orthonormality, and Projections](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/8_Linear_unsupervised_learning/8_2_Spanning.ipynb) [![8.2](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/8_Linear_unsupervised_learning/8_2_Spanning.ipynb) \
[8.3 The Linear Autoencoder and Principal Component Analysis](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/8_Linear_unsupervised_learning/8_3_PCA.ipynb) [![8.3](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/8_Linear_unsupervised_learning/8_3_PCA.ipynb) \
[8.4 Recommender Systems](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/8_Linear_unsupervised_learning/8_4_Recommender.ipynb) [![8.4](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/8_Linear_unsupervised_learning/8_4_Recommender.ipynb) \
[8.5 K-Means Clustering](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/8_Linear_unsupervised_learning/8_5_Kmeans.ipynb) [![8.5](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/8_Linear_unsupervised_learning/8_5_Kmeans.ipynb) \
[8.6 General Matrix Factorization Techniques](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/8_Linear_unsupervised_learning/8_6_Factorization.ipynb) [![8.6](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/8_Linear_unsupervised_learning/8_6_Factorization.ipynb) \
8.7 Conclusion

8.8 Exercises

8.9 Endnotes
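The K-means algorithm from Section 8.5 fits in a few lines: alternate assigning each point to its nearest centroid and recomputing each centroid as the mean of its cluster. The sketch below is our own illustration on two obvious blobs, not the book's implementation:

```python
import numpy as np

def kmeans(P, K, iters=20, seed=0):
    """Plain K-means: alternate nearest-centroid assignment and mean updates."""
    rng = np.random.default_rng(seed)
    centroids = P[rng.choice(len(P), size=K, replace=False)]
    for _ in range(iters):
        # Distance from every point to every centroid, then nearest index.
        d = np.linalg.norm(P[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(K):
            if (labels == k).any():
                centroids[k] = P[labels == k].mean(axis=0)
    return centroids, labels

# Two well-separated pairs of points; K-means recovers the two blobs.
P = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centroids, labels = kmeans(P, K=2)
```

Each iteration can only lower the average within-cluster distance, which is why the alternation converges, though only to a local minimum dependent on the initialization.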

### Chapter 9. Feature Engineering and Selection ([Download Chapter PDF πŸ“„](https://github.com/neonwatty/machine-learning-refined/tree/main/chapter_pdfs/2nd_ed))

9.1 Introduction

[9.2 Histogram Features](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/9_Feature_engineer_select/9_2_Histogram.ipynb) [![9.2](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/9_Feature_engineer_select/9_2_Histogram.ipynb) \
[9.3 Feature Scaling via Standard Normalization](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/9_Feature_engineer_select/9_3_Scaling.ipynb) [![9.3](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/9_Feature_engineer_select/9_3_Scaling.ipynb) \
[9.4 Imputing Missing Values in a Dataset](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/9_Feature_engineer_select/9_4_Cleaning.ipynb) [![9.4](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/9_Feature_engineer_select/9_4_Cleaning.ipynb) \
[9.5 Feature Scaling via PCA-Sphering](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/9_Feature_engineer_select/9_5_PCA_sphereing.ipynb) [![9.5](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/9_Feature_engineer_select/9_5_PCA_sphereing.ipynb) \
[9.6 Feature Selection via Boosting](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/9_Feature_engineer_select/9_6_Boosting.ipynb) [![9.6](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/9_Feature_engineer_select/9_6_Boosting.ipynb) \
[9.7 Feature Selection via Regularization](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/9_Feature_engineer_select/9_7_Regularization.ipynb) [![9.7](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/9_Feature_engineer_select/9_7_Regularization.ipynb) \
9.8 Conclusion

9.9 Exercises

### Chapter 10. Principles of Nonlinear Feature Engineering ([Download Chapter PDF πŸ“„](https://github.com/neonwatty/machine-learning-refined/tree/main/chapter_pdfs/2nd_ed))

[10.1 Introduction](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/10_Nonlinear_intro/10_1_Intro.ipynb) [![10.1](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/10_Nonlinear_intro/10_1_Intro.ipynb) \
[10.2 Nonlinear Regression](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/10_Nonlinear_intro/10_2_Regression.ipynb) [![10.2](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/10_Nonlinear_intro/10_2_Regression.ipynb) \
[10.3 Nonlinear Multi-Output Regression](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/10_Nonlinear_intro/10_3_MultReg.ipynb) [![10.3](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/10_Nonlinear_intro/10_3_MultReg.ipynb) \
[10.4 Nonlinear Two-Class Classification](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/10_Nonlinear_intro/10_4_Twoclass.ipynb) [![10.4](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/10_Nonlinear_intro/10_4_Twoclass.ipynb) \
[10.5 Nonlinear Multi-Class Classification](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/10_Nonlinear_intro/10_5_Multiclass.ipynb) [![10.5](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/10_Nonlinear_intro/10_5_Multiclass.ipynb) \
[10.6 Nonlinear Unsupervised Learning](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/10_Nonlinear_intro/10_6_Unsupervised.ipynb) [![10.6](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/10_Nonlinear_intro/10_6_Unsupervised.ipynb) \
10.7 Conclusion

10.8 Exercises

### Chapter 11. Principles of Feature Learning ([Download Chapter PDF πŸ“„](https://github.com/neonwatty/machine-learning-refined/tree/main/chapter_pdfs/2nd_ed))

[11.1 Introduction](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/11_Feature_learning/11_1_Intro.ipynb) [![11.1](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/11_Feature_learning/11_1_Intro.ipynb) \
[11.2 Universal Approximators](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/11_Feature_learning/11_2_Universal.ipynb) [![11.2](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/11_Feature_learning/11_2_Universal.ipynb) \
[11.3 Universal Approximation of Real Data](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/11_Feature_learning/11_3_Real_approximation.ipynb) [![11.3](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/11_Feature_learning/11_3_Real_approximation.ipynb) \
[11.4 Naive Cross-Validation](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/11_Feature_learning/11_4_Cross_validation.ipynb) [![11.4](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/11_Feature_learning/11_4_Cross_validation.ipynb) \
[11.5 Efficient Cross-Validation via Boosting](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/11_Feature_learning/11_5_Boosting.ipynb) [![11.5](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/11_Feature_learning/11_5_Boosting.ipynb) \
[11.6 Efficient Cross-Validation via Regularization](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/11_Feature_learning/11_6_Regularization.ipynb) [![11.6](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/11_Feature_learning/11_6_Regularization.ipynb) \
11.7 Testing Data

11.8 Which Universal Approximator Works Best in Practice?

[11.9 Bagging Cross-Validated Models](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/11_Feature_learning/11_9_Bagging.ipynb) [![11.9](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/11_Feature_learning/11_9_Bagging.ipynb) \
[11.10 K-Fold Cross-Validation](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/11_Feature_learning/11_10_Kfolds.ipynb) [![11.10](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/11_Feature_learning/11_10_Kfolds.ipynb) \
11.11 When Feature Learning Fails

11.12 Conclusion

11.13 Exercises

### Chapter 12. Kernel Methods ([Download Chapter PDF πŸ“„](https://github.com/neonwatty/machine-learning-refined/tree/main/chapter_pdfs/2nd_ed))

12.1 Introduction

12.2 Fixed-Shape Universal Approximators

12.3 The Kernel Trick

12.4 Kernels as Measures of Similarity

12.5 Optimization of Kernelized Models

12.6 Cross-Validating Kernelized Learners

12.7 Conclusion

12.8 Exercises

### Chapter 13. Fully Connected Neural Networks ([Download Chapter PDF πŸ“„](https://github.com/neonwatty/machine-learning-refined/tree/main/chapter_pdfs/2nd_ed))

13.1 Introduction

[13.2 Fully Connected Neural Networks](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/13_Multilayer_perceptrons/13_2_Multi_layer_perceptrons.ipynb) [![13.2](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/13_Multilayer_perceptrons/13_2_Multi_layer_perceptrons.ipynb) \
[13.3 Activation Functions](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/13_Multilayer_perceptrons/13_3_Optimization.ipynb) [![13.3](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/13_Multilayer_perceptrons/13_3_Optimization.ipynb) \
13.4 The Backpropagation Algorithm

[13.5 Optimization of Neural Network Models](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/13_Multilayer_perceptrons/13_5_Activation.ipynb) [![13.5](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/13_Multilayer_perceptrons/13_5_Activation.ipynb) \
[13.6 Batch Normalization](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/13_Multilayer_perceptrons/13_6_Batch_normalization.ipynb) [![13.6](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/13_Multilayer_perceptrons/13_6_Batch_normalization.ipynb) \
[13.7 Cross-Validation via Early Stopping](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/13_Multilayer_perceptrons/13_7_early_stopping.ipynb) [![13.7](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/13_Multilayer_perceptrons/13_7_early_stopping.ipynb) \
13.8 Conclusion

13.9 Exercises

### Chapter 14. Tree-Based Learners ([Download Chapter PDF πŸ“„](https://github.com/neonwatty/machine-learning-refined/tree/main/chapter_pdfs/2nd_ed))

14.1 Introduction

14.2 From Stumps to Deep Trees

14.3 Regression Trees

14.4 Classification Trees

14.5 Gradient Boosting

14.6 Random Forests

14.7 Cross-Validation Techniques for Recursively Defined Trees

14.8 Conclusion

14.9 Exercises

### Appendix A. Advanced First- and Second-Order Optimization Methods ([Download Chapter PDF πŸ“„](https://github.com/neonwatty/machine-learning-refined/tree/main/chapter_pdfs/2nd_ed))

A.1 Introduction

[A.2 Momentum-Accelerated Gradient Descent](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/A_2_Momentum.ipynb) [![A.2](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/A_2_Momentum.ipynb) \
[A.3 Normalized Gradient Descent](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/A_3_Normalized.ipynb) [![A.3](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/A_3_Normalized.ipynb) \
[A.4 Advanced Gradient-Based Methods](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/A_4_Advanced.ipynb) [![A.4](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/A_4_Advanced.ipynb) \
[A.5 Mini-Batch Optimization](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/A_5_Minibatch.ipynb) [![A.5](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/A_5_Minibatch.ipynb) \
[A.6 Conservative Steplength Rules](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/A_6_Conservative.ipynb) [![A.6](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/A_6_Conservative.ipynb) \
[A.7 General Steepest Descent](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/A_7_General_steepest_descent.ipynb) [![A.7](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/A_7_General_steepest_descent.ipynb) \
[A.8 Newton’s Method, Regularization, and Nonconvex Functions](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/4_Second_order_methods/A_8_Nonconvex.ipynb) [![A.8](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/4_Second_order_methods/A_8_Nonconvex.ipynb) \
[A.9 Hessian-Free Methods](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/4_Second_order_methods/A_9_Quasi.ipynb) [![A.9](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/4_Second_order_methods/A_9_Quasi.ipynb)

### Appendix B. Derivatives and Automatic Differentiation ([Download Chapter PDF πŸ“„](https://github.com/neonwatty/machine-learning-refined/tree/main/chapter_pdfs/2nd_ed))

B.1 Introduction

B.2 The Derivative

B.3 Derivative Rules for Elementary Functions and Operations

B.4 The Gradient

B.5 The Computation Graph

B.6 The Forward Mode of Automatic Differentiation

B.7 The Reverse Mode of Automatic Differentiation

B.8 Higher-Order Derivatives

B.9 Taylor Series

[B.10 Using the autograd Library](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/B_10_Automatic.ipynb) [![B.10](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/3_First_order_methods/B_10_Automatic.ipynb)

### Appendix C. Linear Algebra ([Download Chapter PDF πŸ“„](https://github.com/neonwatty/machine-learning-refined/tree/main/chapter_pdfs/2nd_ed))

C.1 Introduction

[C.2 Vectors and Vector Operations](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/16_Linear_algebra/16_2_Vectors.ipynb) [![16.2](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/16_Linear_algebra/16_2_Vectors.ipynb) \
[C.3 Matrices and Matrix Operations](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/16_Linear_algebra/16_3_Matrices.ipynb) [![16.3](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/16_Linear_algebra/16_3_Matrices.ipynb) \
[C.4 Eigenvalues and Eigenvectors](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/16_Linear_algebra/16_4_Eigen.ipynb) [![16.4](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/16_Linear_algebra/16_4_Eigen.ipynb) \
[C.5 Vector and Matrix Norms](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/16_Linear_algebra/16_5_Norms.ipynb) [![16.5](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jermwatt/machine_learning_refined/blob/main/notes/16_Linear_algebra/16_5_Norms.ipynb)

## How to use the book?

[(Back to top)](#welcome)

The example "roadmaps" below suggest paths for navigating the text, tailored to a variety of learning outcomes and to university courses taught using the book.

##### Recommended study roadmap for a course on the essentials of machine learning, including requisite chapters (left column), sections (middle column), and corresponding topics (right column). This essentials plan is suitable for time-constrained courses (in quarter-based programs and universities) or self-study, or where machine learning is not the sole focus but a key component of some broader course of study.





##### Recommended study roadmap for a full treatment of standard machine learning subjects, including chapters, sections, as well as corresponding topics to cover. This plan entails a more in-depth coverage of machine learning topics compared to the essentials roadmap given above, and is best suited for senior undergraduate/early graduate students in semester-based programs and passionate independent readers.





##### Recommended study roadmap for a course on mathematical optimization for machine learning and deep learning, including chapters, sections, as well as topics to cover.





##### Recommended study roadmap for an introductory portion of a course on deep learning, including chapters, sections, as well as topics to cover.





## Technical prerequisites

[(Back to top)](#welcome)

To make full use of the text one needs only a basic understanding of vector algebra (mathematical
functions, vector arithmetic, etc.) and computer programming (for example,
basic proficiency with a dynamically typed language like Python). We provide
complete introductory treatments of other prerequisite topics including linear
algebra, vector calculus, and automatic differentiation in the appendices of the
text.

## Software installation and dependencies

[(Back to top)](#welcome)

### Google Colab

The majority of the notes and exercise wrappers in this repository can be run for free on Google Colab - no local installation required. Click the Colab badge ![colab badge](https://colab.research.google.com/assets/colab-badge.svg) at the top of a notebook to open it in Colab.

### Running locally

After cloning this repository and entering its directory, we recommend one of the following three methods for running the Jupyter notebooks it contains.

#### Docker method

After installing [Docker and Docker Compose on your machine](https://docs.docker.com/compose/install/),
navigate to this repo's directory in your terminal and run

`docker-compose up -d`

The first time you run this command, the associated Docker image is pulled from Docker Hub.

Then, in any web browser, go to

`localhost:8888`

to view the repository contents, including the Jupyter notebooks.

#### Anaconda method

After installing the [Anaconda Python 3 distribution](https://www.anaconda.com/download) on your machine, cd into this repo's directory and follow these steps to create a conda virtual environment for viewing its contents and notebooks. Python 3.10 or above is required.

First, create the environment

`conda create python=3.10 --name mlr2 --file requirements.txt`

Then activate it

`conda activate mlr2`

Run Jupyter via the command below

`jupyter notebook --port=8888 --ip=0.0.0.0 --allow-root --NotebookApp.token=''`

And finally, open any web browser and navigate to

`localhost:8888`

to view the repository contents, including the Jupyter notebooks.

#### pip / uv method

Using Python 3.10 or above and pip or [uv](https://github.com/astral-sh/uv) on your machine, cd into this repo's directory and follow these steps to install the required packages.

First, create a virtual environment and activate it, for example with uv

```bash
uv venv --python 3.10.0 && source .venv/bin/activate
```

Then install Python requirements

`uv pip install -r requirements.txt`

Run Jupyter via the command below

`jupyter notebook --port=8888 --ip=0.0.0.0 --allow-root --NotebookApp.token=''`

And finally, open any web browser and navigate to

`localhost:8888`

to view the repository contents, including the Jupyter notebooks.
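Once the environment is active, a quick sanity check like the following can confirm that the core scientific packages are importable before you launch Jupyter. This is an illustrative sketch: the package list below is an assumption, and `requirements.txt` remains the authoritative source for what the notebooks actually need.

```python
import importlib.util

def missing_packages(names):
    """Return the subset of package names that cannot be imported."""
    return [name for name in names if importlib.util.find_spec(name) is None]

# Illustrative package list -- consult requirements.txt for the full set.
missing = missing_packages(["numpy", "matplotlib", "jupyter"])
if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("Environment looks ready.")
```

Running this inside the activated virtual environment flags any packages that failed to install, which is easier to debug than an import error deep inside a notebook.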

## Reviews and Endorsements

[(Back to top)](#welcome)

> An excellent book that treats the fundamentals of machine learning from basic principles to practical implementation. The book is suitable as a text for senior-level and first-year graduate courses in engineering and computer science. It is well organized and covers basic concepts and algorithms in mathematical optimization methods, linear learning, and nonlinear learning techniques. The book is nicely illustrated in multiple colors and contains numerous examples and coding exercises using Python.

**John G. Proakis**, University of California, San Diego


> Some machine learning books cover only programming aspects, often relying on outdated software tools; some focus exclusively on neural networks; others, solely on theoretical foundations; and yet more books detail advanced topics for the specialist. This fully revised and expanded text provides a broad and accessible introduction to machine learning for engineering and computer science students. The presentation builds on first principles and geometric intuition, while offering real-world examples, commented implementations in Python, and computational exercises. I expect this book to become a key resource for students and researchers.

**Osvaldo Simeone**, King's College, London


> This book is great for getting started in machine learning. It builds up the tools of the trade from first principles, provides lots of examples, and explains one thing at a time at a steady pace. The level of detail and runnable code show what's really going on when we run a learning algorithm.

**David Duvenaud**, University of Toronto


> This book covers various essential machine learning methods (e.g., regression, classification, clustering, dimensionality reduction, and deep learning) from a unified mathematical perspective of seeking the optimal model parameters that minimize a cost function. Every method is explained in a comprehensive, intuitive way, and mathematical understanding is aided and enhanced with many geometric illustrations and elegant Python implementations.

**Kimiaki Shirahama**, Kindai University, Japan


> Books featuring machine learning are many, but those which are simple, intuitive, and yet theoretical are extraordinary 'outliers'. This book is a fantastic and easy way to launch yourself into the exciting world of machine learning, grasp its core concepts, and code them up in Python or Matlab. It was my inspiring guide in preparing my 'Machine Learning Blinks' on my BASIRA YouTube channel for both undergraduate and graduate levels.

**Islem Rekik**, Director of the Brain And SIgnal Research and Analysis (BASIRA) Laboratory

## A sample of widgets from the notes

[(Back to top)](#welcome)

You'll find a large number of [intuitive animations](#our-pedagogy) in the [notes](#online-notes) of this repository.



Topics illustrated include:

- Cross-validation (regression)
- Cross-validation (two-class classification)
- Cross-validation (multi-class classification)
- K-means clustering
- Feature normalization
- Normalized gradient descent
- Rotation
- Convexification
- Dogification!
- A nonlinear transformation
- Weighted classification
- The moving average
- Batch normalization
- Logistic regression
- Polynomials vs. NNs vs. Trees (regression)
- Polynomials vs. NNs vs. Trees (classification)
- Changing gradient descent's steplength (1d)
- Changing gradient descent's steplength (2d)
- Convex combination of two functions
- Taylor series approximation
- Feature selection via regularization
- Secant planes
- Function approximation with a neural network
- A regression tree

## Get a physical copy of the book

[(Back to top)](#welcome)

- From [Cambridge University Press](https://www.cambridge.org/us/academic/subjects/engineering/communications-and-signal-processing/machine-learning-refined-foundations-algorithms-and-applications-2nd-edition?format=HB)
- From [Amazon](https://www.amazon.com/Machine-Learning-Refined-Foundations-Applications/dp/1108480721)
- From [Barnes & Noble](https://www.barnesandnoble.com/w/machine-learning-refined-jeremy-watt/1136155294?ean=9781108480727)