{"id":13681588,"url":"https://github.com/thma/WhyHaskellMatters","last_synced_at":"2025-04-30T06:31:13.726Z","repository":{"id":42455662,"uuid":"238533527","full_name":"thma/WhyHaskellMatters","owner":"thma","description":"In this article I try to explain why Haskell keeps being such an important language by presenting some of its most important and distinguishing features and detailing them with working code examples.  The presentation aims to be self-contained and does not require any previous knowledge of the language. ","archived":false,"fork":false,"pushed_at":"2023-12-15T08:30:02.000Z","size":492,"stargazers_count":469,"open_issues_count":1,"forks_count":14,"subscribers_count":17,"default_branch":"master","last_synced_at":"2025-04-05T20:06:26.169Z","etag":null,"topics":["algebraic-data-types","anonymous-functions","declarative-programming","explicit-side-effects","first-class-functions","foldable","folding","function-composition","functional-programming","functor","haskell","higher-order-functions","lazy-evaluation","list-comprehension","mapping","monads","partial-application","pattern-matching","polymorphic-types","type-classes"],"latest_commit_sha":null,"homepage":"","language":"Haskell","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/thma.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2020-02-05T19:41:16.000Z","updated_at":"2025-01-30T13:38:36.000Z","dependencies_parsed_at":"2024-01-14T15:23:54.290Z","dependency_job_id":"ed10bc0e-cc5c-4d1a-a3c7-d2437d0fa53b","html_url":"https://github.com/thma/WhyHaskellMatters","commit_stats":null,"previous_names":[],"tags_c
ount":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/thma%2FWhyHaskellMatters","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/thma%2FWhyHaskellMatters/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/thma%2FWhyHaskellMatters/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/thma%2FWhyHaskellMatters/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/thma","download_url":"https://codeload.github.com/thma/WhyHaskellMatters/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":251654036,"owners_count":21622249,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["algebraic-data-types","anonymous-functions","declarative-programming","explicit-side-effects","first-class-functions","foldable","folding","function-composition","functional-programming","functor","haskell","higher-order-functions","lazy-evaluation","list-comprehension","mapping","monads","partial-application","pattern-matching","polymorphic-types","type-classes"],"created_at":"2024-08-02T13:01:32.709Z","updated_at":"2025-04-30T06:31:13.404Z","avatar_url":"https://github.com/thma.png","language":"Haskell","readme":"# Why Haskell Matters\n\n[![Actions Status](https://github.com/thma/WhyHaskellMatters/workflows/Haskell%20CI/badge.svg)](https://github.com/thma/WhyHaskellMatters/actions)\n\n\u003e Haskell doesn't solve different problems than other languages.\n\u003e But it 
solves them differently.
> 
> -- unknown

## Abstract

In this article I try to explain why Haskell keeps being such an important language by presenting some
of its most important and distinguishing features and detailing them with working code examples.

The presentation aims to be self-contained and does not require any previous knowledge of the language.

The target audience is Haskell beginners and developers with a background in non-functional languages who are eager
to learn about the concepts of functional programming and Haskell in particular.

## Table of contents

- [Introduction](#introduction)
- [Functions are first class](#functions-are-first-class)
  - [Functions can be assigned to variables exactly as any other values](#functions-can-be-assigned-to-variables-exactly-as-any-other-values)
  - [Support for anonymous functions](#support-for-anonymous-functions)
  - [Functions can be returned as values from other functions](#functions-can-be-returned-as-values-from-other-functions)
    - [Function composition](#function-composition)
    - [Currying and Partial Application](#currying-and-partial-application)
  - [Functions can be passed as arguments to other functions](#functions-can-be-passed-as-arguments-to-other-functions)
- [Pattern matching](#pattern-matching)
- [Algebraic Data Types](#algebraic-data-types)
- [Polymorphic Data Types](#polymorphic-data-types)
  - [Lists](#lists)
    - [Arithmetic sequences](#arithmetic-sequences)
- [Immutability](#immutability)
- [Declarative programming](#declarative-programming)
  - [Mapping](#mapping)
  - [Folding](#folding)
- [Non-strict Evaluation](#non-strict-evaluation)
  - [Avoid endless loops](#avoid-endless-loops)
  - [Define potentially infinite data structures](#define-potentially-infinite-data-structures)
  - [List comprehension](#list-comprehension)
  - [Define control flow structures as abstractions](#define-control-flow-structures-as-abstractions)
- [Type Classes](#type-classes)
  - [Read and Show](#read-and-show)
  - [Functor and Foldable](#functor-and-foldable)
    - [Functor](#functor)
    - [Foldable](#foldable)
  - [The Maybe Monad](#the-maybe-monad)
    - [Total Functions](#total-functions)
    - [Composition of Maybe operations](#composition-of-maybe-operations)
  - [Purity](#purity)
  - [Explicit side effects with the IO Monad](#explicit-side-effects-with-the-io-monad)
- [Conclusion](#conclusion)

## Introduction

Exactly thirty years ago, on April 1st 1990, a small group of researchers in the field of non-strict functional
programming published the original Haskell language report.

Haskell never became one of the most popular languages in the software industry or part of the mainstream,
but it has been and still is quite influential in the software development community.

In this article I try to explain why Haskell keeps being such an important language by presenting some
of its most distinguishing features and detailing them with working code examples.

The presentation aims to be self-contained and does not require any previous knowledge of the language.
I will also try to keep the learning curve moderate and to limit the scope of the presentation;
nevertheless this article is by no means a complete introduction to the language.

(If you are looking for thorough tutorials, have a look at the [Haskell Wikibook](https://en.wikibooks.org/wiki/Haskell) or
[Learn You a Haskell](http://www.learnyouahaskell.com/).)

Before diving directly into the technical details I'd like to first have a closer look at the reception of
Haskell in the software developer community:

### A strange development over time

In a talk in 2017 on [the Haskell journey](https://www.youtube.com/watch?v=re96UgMk6GQ)
since its beginnings in the 1980s, Simon Peyton Jones speaks about the
rather unusual life story of Haskell.

First he talks about the typical life cycle of research languages.
They are often created by
a single researcher (who is also the single user), and most of them will be abandoned
after just a few years.

A more successful research language might gain some interest in a larger community
but will still not escape the ivory tower and typically will be given up within ten years.

On the other hand, we have all those popular programming languages that are quickly adopted by
large numbers of developers and thus reach "the threshold of immortality":
the base of existing code grows so large that the language will
be in use for decades.

A little jokingly, he then depicts the sad fate of languages designed by
committees as a flat line through zero: they simply never take off.

Finally, he presents a chart showing the Haskell timeline:

![the haskell timeline](img/language-5.png)

The development shown in this chart seems rather unexpected:
Haskell started as a research language and was even designed by a committee;
so in all probability it should have been abandoned long before the millennium!

Instead, it gained some momentum in its early years, followed by a rather quiet phase during
the decade of OO hype (Java being released in 1995).
And then again we see a continuous growth of interest since about 2005.
I'm writing this in early 2020, and we still see this trend!

### Being used versus being discussed

Simon Peyton Jones then points out another interesting characteristic of the reception of Haskell
in recent years:
in statistics that rank programming languages by actual usage, Haskell is typically not among the 30 most used languages,
but in statistics that instead rank languages by the volume of discussions on the internet,
Haskell typically scores much better (often in the top ten).

### So why does Haskell keep being a hot topic in the software development community?

A very *short answer* might be:
Haskell has a number of features that are clearly different from those of most other programming languages.
Many of these features have proven to be powerful tools for solving basic problems of software development elegantly.

Therefore, over time other programming languages have adopted parts of these concepts (e.g. pattern matching or type classes).
In discussions about such concepts the Haskell heritage is mentioned,
and differences between the original Haskell concepts and those of other languages are discussed.
Sometimes people feel encouraged to have a closer look at the source of these concepts to get a deeper understanding of
their original intentions.
That's why we see a growing number of developers working in
Python, TypeScript, Scala, Rust, C++, C# or Java starting to dive into Haskell.

A further essential point is that Haskell is still an experimental laboratory for research in areas such as
compiler construction, programming language design, theorem provers, type systems etc.
So inevitably Haskell will be a topic in the discussion about these approaches.

In the following sections we will try to find the *longer answer* by
studying some of the most distinguishing features of Haskell.

## Functions are First-class

> In computer science, a programming language is said to have first-class functions if it treats functions as
> first-class citizens. This means the language supports **passing functions as arguments to other functions**,
> **returning them as the values from other functions**, and **assigning them to variables or storing them in data
> structures.**[1] Some programming language theorists require **support for anonymous functions** (function literals)
> as well.[2] In languages with first-class functions, the names of functions do not have any special status;
> they are treated like ordinary variables with a function type.
>
> quoted from [Wikipedia](https://en.wikipedia.org/wiki/First-class_function)

We'll go through this one by one:

### Functions can be assigned to variables exactly as any other values

Let's have a look at how this works in Haskell. First we define some simple values:

```haskell
-- define constant `aNumber` with a value of 42.
aNumber :: Integer
aNumber = 42

-- define constant `aString` with a value of "hello world"
aString :: String
aString = "Hello World"
```

In the first line we see a type signature that declares the constant `aNumber` to be of type `Integer`.
In the second line we define the value of `aNumber` to be `42`.
In the same way we define the constant `aString` to be of type `String`.

Haskell is a statically typed language: all type checks happen at compile time.
Static typing has the advantage that type errors don't happen at runtime.
This is especially useful if a function signature is changed and this change
affects many dependent parts of a project: the compiler will detect the breaking changes
in all affected places.

The Haskell compiler also provides *type inference*, which allows it to deduce the concrete data type
of an expression from the context.
Thus, it is usually not required to provide type declarations.
Nevertheless, using explicit type signatures is considered good style, as they are an important element of
comprehensive documentation.

Next we define a function `square` that takes an integer argument and returns the square of that argument:

```haskell
square :: Integer -> Integer
square x = x * x
```

Defining a function works exactly the same way as defining any other value.
The only special thing is that we declare the type to be a **function type** by using the `->` notation.
So `:: Integer -> Integer` represents a function from `Integer` to `Integer`.
In the second line we define the function `square` to compute `x * x` for any `Integer` argument `x`.

OK, that doesn't seem too difficult, so let's define another function `double` that doubles its input value:

```haskell
double :: Integer -> Integer
double n = 2 * n
```

### Support for anonymous functions

Anonymous functions, also known as lambda expressions, can be defined in Haskell like
this:

```haskell
\x -> x * x
```

This expression denotes an anonymous function that takes a single argument `x` and returns the square of that argument.
The backslash is read as λ (the Greek letter lambda).

You can use such expressions anywhere you would use any other function. For example, you could apply the
anonymous function `\x -> x * x` to a number just like the named function `square`:

```haskell
-- use named function:
result = square 5

-- use anonymous function:
result' = (\x -> x * x) 5
```

We will see more useful applications of anonymous functions in the following section.

### Functions can be returned as values from other functions

#### Function composition

Do you remember *function composition* from your high-school math classes?
Function composition is an operation that takes two functions `f` and `g` and produces a function `h` such that
`h(x) = g(f(x))`.
The resulting composite function is denoted `h = g ∘ f`, where `(g ∘ f)(x) = g(f(x))`.
Intuitively, composing functions is a chaining process in which the output of function `f` is used as input of function `g`.

So from a programmer's perspective the `∘` operator is a function that
takes two functions as arguments and returns a new composite function.

In Haskell this operator is represented as the dot operator `.`:

```haskell
(.) :: (b -> c) -> (a -> b) -> a -> c
(.) f g x = f (g x)
```

The parentheses around the dot are required because we want to use a non-alphabetic symbol as an identifier.
In Haskell such identifiers can be used as infix operators (as we will see below).
Otherwise `(.)` is defined like any other function.
Please also note how close the syntax is to the original mathematical definition.

Using this operator we can easily create a composite function that first doubles
a number and then computes the square of that doubled number:

```haskell
squareAfterDouble :: Integer -> Integer
squareAfterDouble = square . double
```

#### Currying and Partial Application

In this section we look at another interesting example of functions producing
other functions as return values.
We start by defining a function `add` that takes two `Integer` arguments and computes their sum:

```haskell
-- function adding two numbers
add :: Integer -> Integer -> Integer
add x y = x + y
```

This looks quite straightforward. But there is still one interesting detail to note:
the type signature of `add` is not something like

```haskell
add :: (Integer, Integer) -> Integer
```

Instead it is:

```haskell
add :: Integer -> Integer -> Integer
```

What does this signature actually mean?
It can be read as "a function taking an `Integer` argument and returning a function of type `Integer -> Integer`".
Sounds weird? But that's exactly what Haskell does internally.
So if we call `add 2 3`, first `add` is applied to `2`, which returns a new function of type `Integer -> Integer`, which is then applied to `3`.

This technique is called [**Currying**](https://wiki.haskell.org/Currying).

Currying is widely used in Haskell as it allows another cool thing: **partial application**.

In the next code snippet we define a function `add5` by partially applying the function `add` to only one argument:

```haskell
-- partial application: applying add to 5 returns a function of type Integer -> Integer
add5 :: Integer -> Integer
add5 = add 5
```

The trick is as follows: `add 5` returns a function of type `Integer -> Integer` which will add `5` to any `Integer` argument.

Partial application thus allows us to write functions that return functions as result values.
This technique is frequently used to
[provide functions with configuration data](https://github.com/thma/LtuPatternFactory#dependency-injection--parameter-binding-partial-application).

### Functions can be passed as arguments to other functions

I could keep this section short by telling you that we have already seen an example of this:
the function composition operator `(.)`.
It **accepts two functions as arguments** and returns a new one, as in:

```haskell
squareAfterDouble :: Integer -> Integer
squareAfterDouble = square . double
```

But I have another instructive example at hand.

Let's imagine we have to implement a function that doubles any odd Integer:

```haskell
ifOddDouble :: Integer -> Integer
ifOddDouble n =
  if odd n
    then double n
    else n
```

The Haskell code is straightforward: the new ingredients are the `if ... then ... else ...` expression and the
predicate `odd` from the Haskell standard library,
which returns `True` if an integral number is odd.

Now let's assume that we also need another function that computes the square of any odd number:

```haskell
ifOddSquare :: Integer -> Integer
ifOddSquare n =
  if odd n
    then square n
    else n
```

As vigilant developers we immediately detect a violation of the
[Don't repeat yourself principle](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself), as
both functions differ only in the use of a different growth function: `double` versus `square`.

So we are looking for a way to refactor this code into a solution that keeps the original
structure but allows us to vary the growth function used.

What we need is a function that takes a growth function (of type `(Integer -> Integer)`)
as first argument, an `Integer` as second argument,
and returns an `Integer`. The specified growth function will be applied in the `then` clause:

```haskell
ifOdd :: (Integer -> Integer) -> Integer -> Integer
ifOdd growthFunction n =
  if odd n
    then growthFunction n
    else n
```

With this approach we can refactor `ifOddDouble` and `ifOddSquare` as follows:

```haskell
ifOddDouble :: Integer -> Integer
ifOddDouble n = ifOdd double n

ifOddSquare :: Integer -> Integer
ifOddSquare n = ifOdd square n
```

Now imagine that we have to implement new functions `ifEvenDouble` and `ifEvenSquare` that
will work only on even numbers.
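Written out naively, these two functions would once more repeat the very same structure; here is a sketch of those repetitive versions, just to make the duplication explicit (`double` and `square` are the functions defined earlier):

```haskell
-- double and square as defined earlier in the article
double :: Integer -> Integer
double n = 2 * n

square :: Integer -> Integer
square x = x * x

-- the naive versions would duplicate the structure of ifOddDouble / ifOddSquare
ifEvenDouble :: Integer -> Integer
ifEvenDouble n =
  if even n
    then double n
    else n

ifEvenSquare :: Integer -> Integer
ifEvenSquare n =
  if even n
    then square n
    else n
```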
Instead of repeating ourselves, we come up with a function
`ifPredGrow` that takes a predicate function of type `(Integer -> Bool)` as first argument,
a growth function of type `(Integer -> Integer)` as second argument and an `Integer` as third argument,
returning an `Integer`.

The predicate function will be used to determine whether the growth function has to be applied:

```haskell
ifPredGrow :: (Integer -> Bool) -> (Integer -> Integer) -> Integer -> Integer
ifPredGrow predicate growthFunction n =
  if predicate n
    then growthFunction n
    else n
```

Using this [higher order function](https://en.wikipedia.org/wiki/Higher-order_function),
which even takes two functions as arguments, we can write the two new functions and
further refactor the existing ones without breaking the DRY principle:

```haskell
ifEvenDouble :: Integer -> Integer
ifEvenDouble n = ifPredGrow even double n

ifEvenSquare :: Integer -> Integer
ifEvenSquare n = ifPredGrow even square n

ifOddDouble'' :: Integer -> Integer
ifOddDouble'' n = ifPredGrow odd double n

ifOddSquare'' :: Integer -> Integer
ifOddSquare'' n = ifPredGrow odd square n
```

## Pattern matching

With the things that we have learnt so far, we can now start to implement some more interesting functions.
So what about implementing the recursive [factorial function](https://en.wikipedia.org/wiki/Factorial)?

The factorial function can be defined as follows:

> For all n ∈ ℕ<sub>0</sub>:
> ```
> 0! = 1
> n! = n * (n-1)!
> ```

With our current knowledge of Haskell we can implement this as follows:

```haskell
import Numeric.Natural (Natural)

factorial :: Natural -> Natural
factorial n =
  if n == 0
    then 1
    else n * factorial (n - 1)
```

We are using the Haskell data type `Natural` to denote the set of non-negative integers ℕ<sub>0</sub>.
Using the name `factorial` within the definition of the function `factorial` works as expected and denotes a
recursive function call.

As this kind of recursive function definition is typical for functional programming, the language designers have
added a useful feature called *pattern matching* that allows us to define functions by a set of equations:

```haskell
fac :: Natural -> Natural
fac 0 = 1
fac n = n * fac (n - 1)
```

This style comes much closer to the mathematical definition and is typically more readable, as it helps to avoid
nested `if ... then ... else ...` constructs.

Pattern matching can be used not only for numeric values but for any other data type as well.
We'll see some more examples shortly.

## Algebraic Data Types

Haskell supports user-defined data types by making use of a well-thought-out concept.
Let's start with a simple example:

```haskell
data Status = Green | Yellow | Red
```

This declares a data type `Status` which has exactly three different instances. For each instance a
*data constructor* is defined that allows us to create a new instance of the data type.

Each of those data constructors is a function (in this simple case a constant) that returns a `Status` instance.

The type `Status` is a so-called *sum type*, as it represents the set defined by the sum of all three
instances `Green`, `Yellow`, `Red`.
In Java this corresponds to enumerations.

Let's assume we have to create a converter that maps our `Status` values to `Severity` values
representing severity levels in some other system.
This converter can be written using the pattern matching syntax that we have already seen above:

```haskell
-- another sum type representing severity:
data Severity = Low | Middle | High deriving (Eq, Show)

severity :: Status -> Severity
severity Green  = Low
severity Yellow = Middle
severity Red    = High
```

The compiler can tell us when we have not covered all instances of the `Status` type
(by making use of the `-fwarn-incomplete-patterns` pragma).

Now we look at data types that combine multiple different elements, like pairs, n-tuples, etc.
Let's start with a `PairStatusSeverity` type that combines two different elements:

```haskell
data PairStatusSeverity = P Status Severity
```

This can be understood as: the data type `PairStatusSeverity` can be constructed by a
data constructor `P` that takes a value of type `Status` and a value of type `Severity` and returns a `PairStatusSeverity` instance.

So, for example, `P Green High` returns a `PairStatusSeverity` instance
(the data constructor `P` has the signature `P :: Status -> Severity -> PairStatusSeverity`).

The type `PairStatusSeverity` can be interpreted as the set of all possible ordered pairs of `Status` and `Severity` values,
that is, the *cartesian product* of `Status` and `Severity`.

That's why such a data type is called a *product type*.

Haskell allows you to create arbitrary data types by combining *sum types* and *product types*.
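For a quick taste of such a combination, consider a small hypothetical `Shape` type (not used elsewhere in this article): the type is a sum of two alternatives, and each data constructor is a product of its fields:

```haskell
-- a sum of two product types: each constructor carries its own fields
data Shape
  = Circle Double             -- radius
  | Rectangle Double Double   -- width and height

-- pattern matching works uniformly across all constructors
area :: Shape -> Double
area (Circle r)      = pi * r * r
area (Rectangle w h) = w * h
```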
The complete
range of data types that can be constructed in this way is called
[*algebraic data types*](https://en.wikipedia.org/wiki/Algebraic_data_type), or ADTs for short.

Using algebraic data types has several advantages:

- Pattern matching can be used to analyze any concrete instance and select behaviour based on the input data;
  as in the example that maps `Status` to `Severity`, there is no need to use `if..then..else..` constructs.
- The compiler can detect incomplete pattern matches and other flaws.
- The compiler can derive a lot of complex functionality automatically for ADTs, as they are constructed in
  such a regular way.

We will cover the interesting combination of ADTs and pattern matching in the following sections.

## Polymorphic Data Types

Forming pairs or, more generally, n-tuples is a very common task in programming.
Therefore it would be inconvenient and repetitive if we were forced to create new Pair or Tuple types
for each concrete usage. Consider the following example:

```haskell
data PairStatusSeverity = P Status Severity

data PairStatusString   = P' Status String

data PairSeverityStatus = P'' Severity Status
```

Luckily, data type declarations allow the use of type variables to avoid this kind of cluttered code.
So we can define a generic data type `Pair` that allows us to freely combine different kinds of arguments:

```haskell
-- a simple polymorphic type
data Pair a b = P a b
```

This can be understood as: the data type `Pair` uses two elements of (potentially) different types `a` and `b`; the
data constructor `P` takes a value of type `a` and a value of type `b` and returns a `Pair a b` instance
(the data constructor `P` has the signature `P :: a -> b -> Pair a b`).
The type `Pair` can now be used to create many different concrete data types; it is thus
called a *polymorphic* data type.
As the polymorphism is defined by type variables, i.e. parameters of the type declaration, this mechanism is
called *parametric polymorphism*.

As pairs and n-tuples are used so frequently, the Haskell language designers have added some syntactic sugar to
work effortlessly with them.

So you can simply write tuples like this:

```haskell
tuple :: (Status, Severity, String)
tuple = (Green, Low, "All green")
```

### Lists

Another very useful polymorphic type is the list.

A list can either be the empty list (denoted by the data constructor `[]`)
or some element of a data type `a` followed by a list with elements of type `a`, denoted by `[a]`.

This intuition is reflected in the following data type definition:

```haskell
data [a] = [] | a : [a]
```

The cons operator `(:)` (which is an infix operator like `(.)` from the previous section) is declared as a
*data constructor* that constructs a list from a single element of type `a` and a list of type `[a]`.

So a list containing only the single element `1` is constructed by:

```haskell
1 : []
```

A list containing the three numbers 1, 2, 3 is constructed like this:

```haskell
1 : 2 : 3 : []
```

Luckily, the Haskell language designers have been so kind as to offer some syntactic sugar for this.
So the first list can simply be written as `[1]` and the second as `[1,2,3]`.

Polymorphic type expressions describe *families of types*.
For example, `(forall a)[a]` is the family of types consisting of,
for every type `a`, the type of lists of `a`.
Lists of integers (e.g. `[1,2,3]`), lists of characters (`['a','b','c']`),
even lists of lists of integers, etc., are all members of this family.
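To illustrate, here are a few members of this family spelled out with explicit type signatures (the example values are arbitrary):

```haskell
-- one polymorphic list type, many concrete instantiations
ints :: [Integer]
ints = [1, 2, 3]

chars :: [Char]            -- [Char] is what the String type abbreviates
chars = ['a', 'b', 'c']

nested :: [[Integer]]      -- lists of lists are members of the same family
nested = [[1, 2], [3]]
```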
Functions that work on lists can use pattern matching to select behaviour for the `[]` and the `(x:xs)` case.

Take for instance the definition of the function `length` that computes the length of a list:

```haskell
length :: [a] -> Integer
length []     = 0
length (x:xs) = 1 + length xs
```

We can read these equations as: the length of the empty list is 0,
and the length of a list whose first element is `x` and remainder is `xs`
is 1 plus the length of `xs`.

In our next example we want to work with a list of some random integers:

```haskell
someNumbers :: [Integer]
someNumbers = [49,64,97,54,19,90,934,22,215,6,68,325,720,8082,1,33,31]
```

Now we want to select all even or all odd numbers from this list.
We are looking for a function `filter` that takes two
arguments: first a predicate function that will be used to check each element,
and second the actual list of elements. The function will return a list of all matching elements.
And of course our solution should work not only for Integers but for any other type as well.
Here is the type signature of such a filter function:

```haskell
filter :: (a -> Bool) -> [a] -> [a]
```

In the implementation we will use pattern matching to provide different behaviour for the `[]` and the `(x:xs)` case:

```haskell
filter :: (a -> Bool) -> [a] -> [a]
filter pred []     = []
filter pred (x:xs)
  | pred x         = x : filter pred xs
  | otherwise      = filter pred xs
```

The `[]` case is obvious.
To understand the `(x:xs)` case we have to know that in addition to simple matching of the type constructors\nwe can also use *pattern guards* to perform additional testing on the input data.\nIn this case we compute `pred x` if it evaluates to `True`, `x` is a match and will be cons'ed with the result of \n`filter pred xs`.\nIf it does not evaluate to `True`, \nwe will not add `x` to the result list and thus simply call filter recursively on the remainder of the list.\n\nNow we can use `filter` to select elements from our sample list:\n\n```haskell\nsomeEvenNumbers :: [Integer]\nsomeEvenNumbers = filter even someNumbers\n\n-- predicates may also be lambda-expresssions\nsomeOddNumbers :: [Integer]\nsomeOddNumbers = filter (\\n -\u003e n `rem` 2 /= 0) someNumbers  \n```\n\nOf course we don't have to invent functions like `filter` on our own but can rely on the [extensive set of \npredefined functions working on lists](https://hackage.haskell.org/package/base-4.12.0.0/docs/Data-List.html) \nin the Haskell base library.\n\n#### Arithmetic sequences\n\nThere is a nice feature that often comes in handy when dealing with lists of numbers. It's called *arithmetic sequences* and\nallows you to define lists of numbers with a concise syntax:\n\n```haskell\nupToHundred :: [Integer]\nupToHundred = [1..100]\n```\n\nAs expected this assigns `upToHundred` with a list of integers from 1 to 100.\n\nIt's also possible to define a step width that determines the increment between the subsequent numbers.\nIf we want only the odd numbers we can construct them like this:\n```haskell\noddsUpToHundred :: [Integer]\noddsUpToHundred = [1,3..100]\n```\n\nArithmetic sequences can also be used in more dynamic cases. For example we can define the `factorial` function like this:\n```math\nn! = 1 * 2 * 3 ... 
(n-2) * (n-1) * n, for integers > 0
```

In Haskell we can use an arithmetic sequence to define this function:

```haskell
fac' n   = prod [1..n]
```

## Immutability

> In object-oriented and functional programming, an immutable object is an object
> whose state cannot be modified after it is created. This is in contrast to a mutable object
> (changeable object), which can be modified after it is created.
>
> Quoted from [Wikipedia](https://en.wikipedia.org/wiki/Immutable_object)

This is going to be a very short section. In Haskell all data is immutable. Period.

Let's look at some interactions with the Haskell GHCi REPL (whenever you see the `λ>` prompt in this article
it is from a GHCi session):

```haskell
λ> a = [1,2,3]
λ> a
[1,2,3]
λ> reverse a
[3,2,1]
λ> a
[1,2,3]
```

In Haskell there is no way to change the value of `a` after its initial creation. There are no *destructive*
operations available, unlike in some other functional languages such as Lisp, Scheme or ML.

The huge benefit of this is that refactoring becomes much simpler than in languages where every function or method
might mutate data. Thus it will also be easier to reason about a given piece of code.

Of course this also makes programming of concurrent operations much easier. 
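Since no function can ever mutate a value in place, every "update" simply builds a new value while the old one stays intact. A minimal sketch (the helper `bumpAll` is made up for illustration):

```haskell
-- "updating" a list really means constructing a new one;
-- the original list remains available, unchanged
bumpAll :: Integer -> [Integer] -> [Integer]
bumpAll n = map (+ n)

-- λ> let prices = [10, 20, 30]
-- λ> bumpAll 5 prices
-- [15,25,35]
-- λ> prices
-- [10,20,30]
```
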
With a *shared nothing* approach,
Haskell programs are automatically thread-safe.

## Declarative programming

In this section I want to explain how programming with *higher order* functions can be used to
factor out many basic control structures and algorithms from the user code.

This will result in a more *declarative programming* style where the developer can simply
declare *what* she wants to achieve but is not required to write down *how* it is to be achieved.

Code that applies this style will be much denser, and it will be more concerned with the actual elements
of the problem domain than with the technical implementation details.

### Mapping

We'll demonstrate this with some examples working on lists.
First we get the task to write a function that doubles all elements of a `[Integer]` list.
We want to reuse the `double` function we have already defined above.

With all that we have learnt so far, writing a function `doubleAll` isn't that hard:

```haskell
-- compute the double value for all list elements
doubleAll :: [Integer] -> [Integer]
doubleAll [] = []
doubleAll (n:rest) = double n : doubleAll rest
```

Next we are asked to implement a similar function `squareAll` that will use `square` to compute the square of all elements in a list.
The naive way would be to implement it in the *WET* (We Enjoy Typing) approach:

```haskell
-- compute squares for all list elements
squareAll :: [Integer] -> [Integer]
squareAll [] = []
squareAll (n:rest) = square n : squareAll rest
```

Of course this is very ugly:
both functions use the same pattern matching and apply the same recursive iteration strategy.
They only differ in the function applied to each element.

As role model developers we don't want to repeat ourselves. 
We are thus looking for something that
captures the essence of mapping a given function over a list of elements:

```haskell
map :: (a -> b) -> [a] -> [b]
map f []     = []
map f (x:xs) = f x : map f xs
```

This function abstracts away the implementation details of iterating over a list and allows us to provide a user-defined
mapping function as well.

Now we can use `map` to simply *declare our intention* (the 'what') and don't have to detail the 'how':

```haskell
doubleAll' :: [Integer] -> [Integer]
doubleAll' = map double

squareAll' :: [Integer] -> [Integer]
squareAll' = map square
```

### Folding

Now let's have a look at a related problem.
Our first task is to add up all elements of a `[Integer]` list.
First the naive approach which uses the already familiar mix of pattern matching plus recursion:

```haskell
sumUp :: [Integer] -> Integer
sumUp [] = 0
sumUp (n:rest) = n + sumUp rest
```

By looking at the code for a function that computes the product of all elements of a `[Integer]` list we can again see that
we are repeating ourselves:

```haskell
prod :: [Integer] -> Integer
prod [] = 1
prod (n:rest) = n * prod rest
```

So what is the essence of both algorithms?
At the core of both algorithms we have a recursive function which

- takes a binary operator (`(+)` or `(*)` in our case),
- takes an initial value that is used as a starting point for the accumulation
  (typically the identity element (or neutral element) of the binary operator),
- takes the list of elements that should be reduced to a single return value, and
- performs the accumulation by recursively applying the binary operator to all elements of the list until the `[]` is reached,
  where the neutral element is returned.

This essence is contained in the higher order function `foldr` which again is part of the Haskell standard library:

```haskell
foldr :: (a -> b -> b) -> b -> [a] -> 
b
foldr f acc []     =  acc
foldr f acc (x:xs) =  f x (foldr f acc xs)
```

Now we can use `foldr` to simply *declare our intention* (the 'what') and don't have to detail the 'how':

```haskell
sumUp' :: [Integer] -> Integer
sumUp' = foldr (+) 0

prod' :: [Integer] -> Integer
prod' = foldr (*) 1
```

With the functions `map` and `foldr` (or `reduce`) we now have two very powerful tools at hand that can be used in
many situations where list data has to be processed.

Both functions can even be composed to form yet another very important programming concept: *Map/Reduce*.
In Haskell this operation is provided by the function `foldMap`.

I won't go into details here as it would go beyond the scope of this article, but I'll invite you to read my
[introduction to Map/Reduce in Haskell](https://github.com/thma/LtuPatternFactory#map-reduce).

## Non-strict Evaluation

Now we come to a topic that was one of the main drivers for the Haskell designers: they wanted to get
away from the then standard model of strict evaluation.

Non-strict evaluation (aka normal order reduction) has one very important property.

> If a lambda expression has a normal form, then normal order reduction will terminate and find that normal form.
>
> Church-Rosser Theorem II

This property does not hold true for other reduction strategies (like applicative order or call-by-value reduction).

This result from mathematical research on the [lambda calculus](https://en.wikipedia.org/wiki/Lambda_calculus)
is important as Haskell maintains the semantics of normal order reduction.

The real-world benefits of lazy evaluation include:

- Avoiding endless loops in certain edge cases.
- The ability to define control flow (structures) as abstractions instead of primitives.
- The ability to define potentially infinite data structures. 
This allows for more straightforward implementation of some algorithms.\n\nSo let's have a closer look at those benefits:\n\n### Avoid endless loops\n\nConsider the following example function:\n\n```haskell\nignoreY :: Integer -\u003e Integer -\u003e Integer\nignoreY x y = x\n```\n\nIt takes two integer arguments and returns the first one unmodified. The second argument is \nsimply ignored.\n\nIn most programming languages both arguments will be\nevaluated before the function body is executed: \nthey use applicative order reduction aka. eager evaluation or call-by-value semantics.\n\nIn Haskell on the other hand it is safe to call the function with a non-terminating expression in the second argument.\nFirst we create a non-terminating expression `viciousCircle`. Any attempt to evaluate it will result in an endless loop:\n\n```haskell\n-- it's possible to define non-terminating expressions like\nviciousCircle :: a\nviciousCircle = viciousCircle\n```\n\nBut if we use `viciousCircle` as second argument to the function `ignoreY` it will simply be ignored and the first argument\nis returned:\n\n```haskell\n-- trying it in GHCi:\nλ\u003e ignoreY 42 viciousCircle\n42\n```\n\n### Define potentially infinite data structures\n\nIn the [section on lists](#lists) we have already met *arithmetic sequences* like `[1..10]`.\n\nArithmetic sequences can also be used to define infinite lists of numbers.\nHere are a few examples:\n\n```haskell\n-- all natural numbers\nnaturalNumbers = [1..]\n\n-- all even numbers\nevens = [2,4..]\n\n-- all odd numbers\nodds  = [1,3..]\n```\n\nDefining those infinite lists is rather easy. But what can we do with them? Are they useful for any purpose? 
In the `viciousCircle` example above we have learnt that
defining that expression is fine but any attempt to evaluate it will result in an infinite loop.

If we try to print `naturalNumbers` we will also end up in an infinite loop of integers printed to the screen.

But if we are a bit less greedy than asking for all natural numbers everything will be OK.

```haskell
λ> take 10 naturalNumbers
[1,2,3,4,5,6,7,8,9,10]

λ> take 10 evens
[2,4,6,8,10,12,14,16,18,20]

λ> take 10 odds
[1,3,5,7,9,11,13,15,17,19]
```

We can also peek at a specific position in such an infinite list, using the `(!!)` operator:

```haskell
λ> odds !! 5000
10001

λ> evens !! 10000
20002
```

### List comprehension

Do you remember *set comprehension* notation from your math classes?

A simple example would be the definition of the set of even numbers:

> Evens = {i | i = 2n ∧ n ∊ ℕ}

This can be read as: Evens is defined as the set of all `i` where `i = 2*n` and `n` is an element of the set of natural numbers.

The Haskell *list comprehension* allows us to define - potentially infinite - lists with a similar syntax:

```haskell
evens' = [2*n | n <- [1..]]
```

Again we can avoid infinite loops by evaluating only a finite subset of `evens'`:

```haskell
λ> take 10 evens'
[2,4,6,8,10,12,14,16,18,20]
```

List comprehensions can be very useful for defining numerical sets and series in a (mostly) declarative way that comes
close to the original mathematical definitions.

Take for example the set `PT` of all Pythagorean triples:

>  PT = { (a,b,c) | a,b,c ∊ ℕ ∧ a² + b² = c² }

The Haskell definition looks like this:

```haskell
pt :: [(Natural,Natural,Natural)]
pt = [(a,b,c) | c <- [1..],
                b <- [1..c],
                a <- [1..b],
                a^2 + b^2 == c^2]
```

### Define control flow structures as abstractions

In most languages it is not possible 
to define new conditional operations, e.g. your own `myIf` statement.
A conditional operation will evaluate some of its arguments only if certain conditions are met.
This is very hard - if not impossible - to implement in a language with call-by-value semantics, which evaluates all function arguments before
actually evaluating the function body.

As Haskell implements call-by-need semantics, it is possible to define new conditional operations.
In fact this is quite helpful when writing *domain specific languages*.

Here comes a very simple version of `myIf`:

```haskell
myIf :: Bool -> b -> b -> b
myIf p x y = if p then x else y

λ> myIf (4 > 2) "true" viciousCircle
"true"
```

A somewhat more useful control structure is the `cond` (for conditional) function that stems from the LISP and Scheme languages.
It allows you to define a more table-like decision structure, somewhat resembling a `switch` statement from C-style languages:

```haskell
cond :: [(Bool, a)] -> a
cond []                 = error "make sure that at least one condition is true"
cond ((True,  v):rest)  = v
cond ((False, _):rest)  = cond rest
```

With this function we can implement a signum function `sign` as follows:

```haskell
sign :: (Ord a, Num a) => a -> a
sign x = cond [(x > 0     , 1 )
              ,(x < 0     , -1)
              ,(otherwise , 0 )]

λ> sign 5
1
λ> sign 0
0
λ> sign (-4)
-1
```

## Type Classes

Now we come to one of the most distinguishing features of Haskell: *type classes*.

In the section [Polymorphic Data Types](#polymorphic-data-types) we have seen that type variables (or parameters) allow
type declarations to be polymorphic, as in:

```haskell
data [a] = [] | a : [a]
```

This approach is called *parametric polymorphism* and is used in several programming languages.

Type classes on the other hand address *ad hoc polymorphism* of data types. 
This approach is also known as
*overloading*.

To get a first intuition let's start with a simple example.

We would like to be able to use characters (represented by the data type `Char`) as if they were numbers.
E.g. we would like to be able to do things like:

```haskell
λ> 'A' + 25
'Z'

-- please note that in Haskell a string is a list of characters: type String = [Char]
λ> map (+ 5) "hello world"
"mjqqt%|twqi"

λ> map (\c -> c - 5) "mjqqt%|twqi"
"hello world"
```

To enable this we will have to *overload* the infix operators `(+)` and `(-)` to work not only on numbers but also on characters.
Now, let's have a look at the type signature of the `(+)` operator:

```haskell
λ> :type (+)
(+) :: Num a => a -> a -> a
```

So `(+)` is not just declared to be of type `(+) :: a -> a -> a` but it contains a **constraint** on the type variable `a`,
namely `Num a =>`.
The whole type signature of `(+)` can be read as: for all types `a` that are members of the type class `Num` the operator `(+)` has the type
`a -> a -> a`.

Next we obtain more information on the type class `Num`:

```haskell
λ> :info Num
class Num a where
  (+) :: a -> a -> a
  (-) :: a -> a -> a
  (*) :: a -> a -> a
  negate :: a -> a
  abs :: a -> a
  signum :: a -> a
  fromInteger :: Integer -> a
  {-# MINIMAL (+), (*), abs, signum, fromInteger, (negate | (-)) #-}
  	-- Defined in `GHC.Num'
instance Num Word -- Defined in `GHC.Num'
instance Num Integer -- Defined in `GHC.Num'
instance Num Int -- Defined in `GHC.Num'
instance Num Float -- Defined in `GHC.Float'
instance Num Double -- Defined in `GHC.Float'
```

This information details what functions a type `a` has to implement to be used as an instance of the `Num` type class.
The line `{-# MINIMAL (+), (*), abs, signum, fromInteger, (negate | (-)) #-}` tells us what a 
minimal complete implementation
has to provide.
It also tells us that the types `Word`, `Integer`, `Int`, `Float` and `Double` are instances of the `Num` type class.

This is all we need to know to make the type `Char` an instance of the `Num` type class, so without further ado we
dive into the implementation (please note that `fromEnum` converts a `Char` into an `Int` and `toEnum` converts
an `Int` into a `Char`):

```haskell
instance Num Char where
  a + b       = toEnum (fromEnum a + fromEnum b)
  a - b       = toEnum (fromEnum a - fromEnum b)
  a * b       = toEnum (fromEnum a * fromEnum b)
  abs c       = c
  signum      = toEnum . signum . fromEnum
  fromInteger = toEnum . fromInteger
  negate c    = c
```

This piece of code makes the type `Char` an instance of the `Num` type class. We can then use `(+)` and `(-)` as demonstrated
above.

Originally the idea for type classes came up to provide overloading of arithmetic operators
in order to use the same operators across all numeric types.

But the type classes concept proved to be useful in a variety of other cases as well. 
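As a small payoff of the instance we just defined, the overloaded operators give us a toy Caesar-style shift in one line (the helper `shift` is made up for illustration; the instance is repeated here only to keep the snippet self-contained):

```haskell
-- the orphan instance from above, repeated so the snippet compiles on its own
instance Num Char where
  a + b       = toEnum (fromEnum a + fromEnum b)
  a - b       = toEnum (fromEnum a - fromEnum b)
  a * b       = toEnum (fromEnum a * fromEnum b)
  abs c       = c
  signum      = toEnum . signum . fromEnum
  fromInteger = toEnum . fromInteger
  negate c    = c

-- shift every character of a string by n positions;
-- fromIntegral uses our fromInteger to turn n into a Char offset
shift :: Int -> String -> String
shift n = map (+ fromIntegral n)

-- λ> shift 1 "HAL"
-- "IBM"
```
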
This has led to a rich set of type classes provided by the Haskell base library and
a wealth of programming techniques that make use of this powerful concept.

Here comes a graphic overview of some of the most important type classes in the Haskell base library:

![The hierarchy of basic type classes](https://upload.wikimedia.org/wikipedia/commons/thumb/0/04/Base-classes.svg/510px-Base-classes.svg.png)

I won't go over all of these but I'll cover some of the most important ones.

Let's start with `Eq`:

```haskell
class Eq a where
   (==), (/=) :: a -> a -> Bool

       -- Minimal complete definition:
       --      (==) or (/=)
   x /= y     =  not (x == y)
   x == y     =  not (x /= y)
```

This definition states two things:

- if a type `a` is to be made an instance of the class `Eq` it must support the
  functions `(==)` and `(/=)`, both of them having type `a -> a -> Bool`.
- `Eq` provides *default definitions* for `(==)` and `(/=)` in terms of each other. 
  As a consequence, there is no need for a type in `Eq` to provide both definitions -
  given one of them, the other will work automatically.

Now we can turn some of the data types that we defined in the section on
[Algebraic Data Types](#algebraic-data-types) into instances of the `Eq` type class.

Here are the type declarations again as a recap:

```haskell
data Status   = Green | Yellow | Red
data Severity = Low | Middle | High
data PairStatusSeverity = PSS Status Severity
```

First, we create `Eq` instances for the simple types `Status` and `Severity` by defining the `(==)`
operator for each of them:

```haskell
instance Eq Status where
  Green  == Green  = True
  Yellow == Yellow = True
  Red    == Red    = True
  _      == _      = False

instance Eq Severity where
  Low    == Low    = True
  Middle == Middle = True
  High   == High   = True
  _      == _      = False
```

Next, we create an `Eq` instance for `PairStatusSeverity` by defining the `(==)` operator:

```haskell
instance Eq PairStatusSeverity where
   (PSS sta1 sev1) == (PSS sta2 sev2) = (sta1 == sta2) && (sev1 == sev2)
```

With these definitions it is now possible to use `(==)` and `(/=)` on all three types.

As you will have noticed, the code for implementing `Eq` is quite boring. 
Even a machine could do it!

That's why the language designers have provided a `deriving` mechanism that lets the compiler automatically implement
type class instances whenever they are automatically derivable, as in the `Eq` case.

With this syntax it is much easier to let a type implement the `Eq` type class:

```haskell
data Status   = Green | Yellow | Red          deriving (Eq)
data Severity = Low | Middle | High           deriving (Eq)
data PairStatusSeverity = PSS Status Severity deriving (Eq)
```

This automatic deriving of type class instances works in many cases and avoids a lot of repetitive code.

For example, it's possible to automatically derive instances of the `Ord` type class, which provides
ordering functionality:

```haskell
class (Eq a) => Ord a where
    compare              :: a -> a -> Ordering
    (<), (<=), (>), (>=) :: a -> a -> Bool
    max, min             :: a -> a -> a
    ...
```

If you are using `deriving` for the `Status` and `Severity` types, the compiler will implement the
ordering according to the order of the constructors in the type declaration.
That is `Green < Yellow < Red` and `Low < Middle < High`:

```haskell
data Status   = Green | Yellow | Red          deriving (Eq, Ord)
data Severity = Low | Middle | High           deriving (Eq, Ord)
```

### Read and Show

Two other quite useful type classes are `Read` and `Show` that also support automatic deriving. 
\n\n`Show` provides a function `show` with the following type signature:\n\n```haskell\nshow :: Show a =\u003e a -\u003e String\n```\n\nThis means that any type implementing `Show` can be converted (or *marshalled*) into a `String` representation.\nCreation of a `Show` instance can be achieved by adding a `deriving (Show)` clause to the type declaration.\n\n```haskell\ndata PairStatusSeverity = PSS Status Severity deriving (Show)\n\nλ\u003e show (PSS Green Low)\n\"PSS Green Low\"\n```\n\nThe `Read` type class is used to do the opposite: *unmarshalling* data from a String with the function `read`:\n\n```haskell\nread :: Read a =\u003e String -\u003e a\n```\n\nThis signature says that for any type `a` implementing the `Read` type class the function `read` can\nreconstruct an instance of `a` from its String representation:\n\n```haskell\ndata PairStatusSeverity = PSS Status Severity deriving (Show, Read)\ndata Status = Green | Yellow | Red            deriving (Show, Read)\ndata Severity = Low | Middle | High           deriving (Show, Read)\n\nλ\u003e marshalled = show (PSS Green Low)\n\nλ\u003e read marshalled :: PairStatusSeverity\nPSS Green Low\n```\n\nPlease note that it is required to specify the expected target type with the `:: PairStatusSeverity` clause.\nHaskell uses static compile time typing. At compile time there is no way to determine which type\nan expression `read \"some string content\"` will return. 
Thus the expected type must be specified at compile time,
either by an implicit declaration given by some function type signature, or, as in the example above,
by an explicit declaration.

Together `show` and `read` provide a convenient way to serialize (marshal) and deserialize (unmarshal) Haskell
data structures.
This mechanism does not provide any optimized binary representation, but it is still good enough for
many practical purposes: the format is more compact than JSON and it does not require a parser library.

### Functor and Foldable

The most interesting type classes are those derived from abstract algebra or category theory.
Studying them is a very rewarding process that I highly recommend. However, it is definitely
beyond the scope of this article. Thus, I'm only pointing to two resources covering this part of the Haskell
type class hierarchy.
The first one is the legendary [Typeclassopedia](https://wiki.haskell.org/Typeclassopedia) by Brent Yorgey.
The second one is [Lambda the ultimate Pattern Factory](https://github.com/thma/LtuPatternFactory) by myself,
which relates the algebraic type classes to software design patterns. Here we will only cover some of these type classes.

In the section on [declarative programming](#declarative-programming) we came across two very useful concepts:

- mapping a function over all elements in a list (`map :: (a -> b) -> [a] -> [b]`)
- reducing a list with a binary operation and the neutral (identity) element of that operation
  (`foldr :: (a -> b -> b) -> b -> [a] -> b`)

These concepts are not only useful for lists, but also for many other data structures. So it doesn't come as a
surprise that there are type classes that abstract these concepts.

#### Functor

The `Functor` type class generalizes the functionality of applying a function to a value in a context without altering the context
(e.g. 
mapping a function over a list `[a]`, which returns a new list `[b]` of the same length):

```haskell
class Functor f where
  fmap :: (a -> b) -> f a -> f b
```

Let's take a closer look at this idea by playing with a simple binary tree:

```haskell
data Tree a = Leaf a | Node (Tree a) (Tree a) deriving (Show)

-- a simple binary tree instance:
statusTree :: Tree Status
statusTree = Node (Leaf Green) (Node (Leaf Red) (Leaf Yellow))

-- a function mapping Status to Severity
toSeverity :: Status -> Severity
toSeverity Green  = Low
toSeverity Yellow = Middle
toSeverity Red    = High
```

We want to use the function `toSeverity :: Status -> Severity` to convert all `Status` elements of the `statusTree`
into `Severity` values.

Therefore, we make `Tree` an instance of the `Functor` class:

```haskell
instance Functor Tree where
  fmap f (Leaf a)   = Leaf (f a)
  fmap f (Node a b) = Node (fmap f a) (fmap f b)
```

We can now use `fmap` on `Tree` data structures:

```haskell
λ> fmap toSeverity statusTree
Node (Leaf Low) (Node (Leaf High) (Leaf Middle))
λ> :type it
it :: Tree Severity
```

As already described above, `fmap` keeps the tree structure unchanged but converts the type of each `Leaf` element,
which effectively changes the type of the tree to `Tree Severity`.

As writing `Functor` instances is a boring task, it is again possible to use the `deriving` clause to
let data types instantiate `Functor`:

```haskell
{-# LANGUAGE DeriveFunctor #-} -- this pragma allows automatic deriving of Functor instances
data Tree a = Leaf a | Node (Tree a) (Tree a) deriving (Show, Functor)
```

#### Foldable

As already mentioned, `Foldable` provides the ability to perform *folding* operations on any data type instantiating the
`Foldable` type class:

```haskell
class Foldable t where
  fold    :: Monoid m => t m -> m
  foldMap :: Monoid m => (a -> m) -> 
t a -> m
  foldr   :: (a -> b -> b) -> b -> t a -> b
  foldr'  :: (a -> b -> b) -> b -> t a -> b
  foldl   :: (b -> a -> b) -> b -> t a -> b
  foldl'  :: (b -> a -> b) -> b -> t a -> b
  foldr1  :: (a -> a -> a) -> t a -> a
  foldl1  :: (a -> a -> a) -> t a -> a
  toList  :: t a -> [a]
  null    :: t a -> Bool
  length  :: t a -> Int
  elem    :: Eq a => a -> t a -> Bool
  maximum :: Ord a => t a -> a
  minimum :: Ord a => t a -> a
  sum     :: Num a => t a -> a
  product :: Num a => t a -> a
```

Besides the abstraction of the `foldr` function, `Foldable` provides several other useful operations for dealing with
*container*-like structures.

Because of the regular structure of algebraic data types it is again possible to automatically derive `Foldable` instances
by using the `deriving` clause:

```haskell
{-# LANGUAGE DeriveFunctor, DeriveFoldable #-} -- allows automatic deriving of Functor and Foldable
data Tree a = Leaf a | Node (Tree a) (Tree a) deriving (Eq, Show, Read, Functor, Foldable)
```

Of course, we can also implement the `foldr` function on our own:

```haskell
instance Foldable Tree where
  foldr f acc (Leaf a)   = f a acc
  foldr f acc (Node a b) = foldr f (foldr f acc b) a
```

We can now use `foldr` and other class methods of `Foldable`:

```haskell
statusTree :: Tree Status
statusTree = Node (Leaf Green) (Node (Leaf Red) (Leaf Yellow))

maxStatus = foldr max Green statusTree
maxStatus' = maximum statusTree

-- using length from Foldable type class
treeSize = length statusTree

-- in GHCi:
λ> :t max
max :: Ord a => a -> a -> a

λ> foldr max Green statusTree
Red
-- using maximum from Foldable type class:
λ> maximum statusTree
Red
λ> treeSize
3
-- 
using toList from Foldable type class:
λ> toList statusTree
[Green,Red,Yellow]
```

### The Maybe Monad

Now we will take the data type `Maybe` as an example to dive deeper into the more complex parts of the
Haskell type class system.

The `Maybe` type is quite simple: a value is either a null value, called `Nothing`, or a value of type `a`
constructed by `Just a`:

```haskell
data  Maybe a  =  Nothing | Just a deriving (Eq, Ord)
```

The `Maybe` type is helpful in situations where certain operations *may* return a valid result.
Take for instance the function `lookup` from the Haskell base library. It looks up a key in a list of
key-value pairs. If it finds the key, the associated value `val` is returned - but wrapped in a `Maybe`: `Just val`.
If it doesn't find the key, `Nothing` is returned:

```haskell
lookup :: (Eq a) => a -> [(a,b)] -> Maybe b
lookup _key []  =  Nothing
lookup  key ((k,val):rest)
    | key == k  =  Just val
    | otherwise =  lookup key rest
```

The `Maybe` type is a simple way to avoid NullPointer errors or similar issues with undefined results.
Thus, many languages have adopted it under different names. In Java for instance, it is called `Optional`.

#### Total functions

In Haskell, it is considered good practice to use *total functions* - that is, functions that have defined
return values for all possible input values - wherever possible to avoid runtime errors.

Typical examples for *partial* (i.e. 
non-total) functions are division and square roots.
We can use `Maybe` to make them total:

```haskell
safeDiv :: (Eq a, Fractional a) => a -> a -> Maybe a
safeDiv _ 0 = Nothing
safeDiv x y = Just (x / y)

safeRoot :: (Ord a, Floating a) => a -> Maybe a
safeRoot x
  | x < 0     = Nothing
  | otherwise = Just (sqrt x)
```

In fact, there are alternative base libraries that don't provide any partial functions.

#### Composition of Maybe operations

Now let's consider a situation where we want to combine several of those functions.
Say for example we first want to look up the divisor in a key-value table, then perform a
division with it and finally compute the square root of the quotient:

```haskell
findDivRoot :: Double -> String -> [(String, Double)] -> Maybe Double
findDivRoot x key map =
  case lookup key map of
      Nothing -> Nothing
      Just y  -> case safeDiv x y of
          Nothing -> Nothing
          Just d  -> case safeRoot d of
              Nothing -> Nothing
              Just r  -> Just r

-- and then in GHCi:
λ> findDivRoot 27 "val" [("val", 3)]
Just 3.0
λ> findDivRoot 27 "val" [("val", 0)]
Nothing
λ> findDivRoot 27 "val" [("val", -3)]
Nothing
```

The resulting control flow is depicted in the following diagram, which was inspired by the [Railroad Oriented Programming](https://fsharpforfunandprofit.com/rop/) presentation:

![The Maybe railroad](img/maybe.png)

In each single step we have to check for `Nothing`; in that case we directly short-circuit to an overall `Nothing` result.
In the `Just` case we proceed to the next processing step.

This kind of handling is repetitive and buries the actual intention under a lot of boilerplate.
As Haskell uses layout (i.e. 
indentation) instead of curly brackets to separate blocks, the code will
end up in what is called the *dreaded staircase*: it marches ever further to the right of the screen.

So we are looking for a way to improve the code by abstracting away the chaining of functions that return
`Maybe` values and providing a way to *short-circuit* the `Nothing` cases.

We need an operator `andThen` that takes the `Maybe` result of a first function
application as its first argument, and as its second argument a function that will be used in the `Just x` case and again
returns a `Maybe` result.
If the input is `Nothing`, the operator directly returns `Nothing` without any further processing.
If the input is `Just x`, the operator applies the argument function `fun` to `x` and returns its result:

```haskell
andThen :: Maybe a -> (a -> Maybe b) -> Maybe b
andThen Nothing _fun = Nothing
andThen (Just x) fun = fun x
```

We can then rewrite `findDivRoot` as follows:

```haskell
findDivRoot'''' x key map =
  lookup key map `andThen` \y ->
  safeDiv x y    `andThen` \d ->
  safeRoot d
```

(Side note: in Java the `Optional` type has a corresponding method: [Optional.flatMap](https://docs.oracle.com/javase/8/docs/api/java/util/Optional.html#flatMap-java.util.function.Function-))

This kind of chaining of functions in the context of a specific data type is quite common. 
So, it doesn't surprise us that\nthere exists an even more abstract `andThen` operator that works for arbitrary parameterized data types:\n\n```haskell\n(\u003e\u003e=) :: Monad m =\u003e m a -\u003e (a -\u003e m b) -\u003e m b\n```\n\nWhen we compare this *bind* operator with the type signature of the `andThen` operator:\n\n```haskell\nandThen :: Maybe a -\u003e (a -\u003e Maybe b) -\u003e Maybe b\n\n```\n \nWe can see that both operators bear the same structure.\nThe only difference is that instead of the concrete type `Maybe` the signature of `(\u003e\u003e=)`\nuses a type variable `m` with a `Monad` type class constraint. We can read this type signature as:\n\nFor any type `m` of the type class `Monad` the operator `(\u003e\u003e=)` is defined as `m a -\u003e (a -\u003e m b) -\u003e m b`\nBased on `(\u003e\u003e=)` we can rewrite the `findDivRoot` function as follows:\n\n```haskell\nfindDivRoot' x key map =\n  lookup key map \u003e\u003e= \\y -\u003e\n  safeDiv x y    \u003e\u003e= \\d -\u003e\n  safeRoot d\n```\n\nMonads are a central element of the Haskell type class ecosystem. In fact the monadic composition based on `(\u003e\u003e=)` is so\nfrequently used that there exists some specific syntactic sugar for it. It's called the do-Notation.\nUsing do-Notation `findDivRoot` looks like this:\n\n```haskell\nfindDivRoot''' x key map = do\n  y \u003c- lookup key map\n  d \u003c- safeDiv x y\n  safeRoot d\n```\n\nThis looks quite like a sequence of statements (including variable assignments) in an imperative language.\nDue to this similarity Monads have been aptly called [programmable semicolons](http://book.realworldhaskell.org/read/monads.html#id642960).\nBut as we have seen: below the syntactic sugar it's a purely functional composition!\n\n### Purity\n\nA function is called pure if it corresponds to a function in the mathematical sense: it associates each possible input \nvalue with an output value, and does nothing else. 
In particular,

- it has no side effects, that is to say, invoking it produces no observable effect other than the result it returns;
  it cannot also e.g. write to disk, or print to a screen.
- it does not depend on anything other than its parameters, so when invoked in a different context or at a different
  time with the same arguments, it will produce the same result.

Purity makes it easy to reason about code, as it stays so close to mathematical calculus.
The properties of a Haskell program can thus often be determined with equational reasoning.
(I have provided a [worked example of equational reasoning in Haskell](functor-proof.md).)

Purity also improves testability: it is much easier to set up tests without worrying about mocks or stubs to factor out
access to backend layers.

All the functions that we have seen so far are *pure* code that is free from side effects.

So how can we achieve side effects like writing to a database or serving HTTP requests in Haskell?

The Haskell language designers came up with a solution that distinguishes Haskell from most other languages:
side effects are always explicitly declared in the function type signature.
In the next section we will learn how exactly this works.

### Explicit side effects with the IO Monad

> Monadic I/O is a clever trick for encapsulating sequential, imperative computation, so that it can “do no evil”
> to the part that really does have precise semantics and good compositional properties.
>
> [Conal Elliott](http://conal.net/blog/posts/is-haskell-a-purely-functional-language)

The most prominent Haskell Monad is the `IO` monad. It is used to compose operations that perform I/O.
We'll study this with a simple example.

In an imperative language, reading a String from the console simply returns a String value (e.g.
`BufferedReader.readLine()` in Java:
`public String readLine() throws IOException`).

In Haskell the function `getLine` does not return a `String` value but an `IO String`:

```haskell
getLine :: IO String
```

This can be interpreted as: `getLine` returns a String in an IO context.
In Haskell it is not possible to extract the String value from its IO context (in Java, on the other hand, you could
always just catch the `IOException`).

So how can we use the result of `getLine` in a function that takes a `String` value as input parameter?

We need the monadic bind operation `(>>=)`, in the same way as we already saw in the `Maybe` monad:

```haskell
import Data.Char (toUpper)

-- convert a string to upper case
strToUpper :: String -> String
strToUpper = map toUpper

up :: IO ()
up =
  getLine >>= \str ->
  print (strToUpper str)

-- and then in GHCi:
λ> :t print
print :: Show a => a -> IO ()
λ> up
hello world
"HELLO WORLD"
```

or with do-Notation:

```haskell
up' :: IO ()
up' = do
  str <- getLine
  print (strToUpper str)
```

Making side effects explicit in function type signatures is one of the most outstanding achievements of Haskell.
This feature leads to a rigorous distinction between code that is free of side effects (aka *pure* code) and code
that has side effects (aka *impure* code).

Keeping domain logic *pure* - particularly when working only with *total* functions - will dramatically improve
reliability and testability, as tests can be run without setting up mocks or stubbed backends.

It's not possible to introduce side effects without making them explicit in type signatures.
There is nothing like Java's *invisible* `RuntimeExceptions`.
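The following minimal sketch (with hypothetical function names) illustrates how the type signature enforces this distinction: a pure function cannot perform I/O, and as soon as we want to log something, we are forced to change its type to `IO`:

```haskell
-- a pure function: its type guarantees it performs no I/O
double :: Int -> Int
double x = 2 * x

-- the impure variant must declare the side effect in its type;
-- putting `print x` inside a function of type Int -> Int would be a type error
doubleAndLog :: Int -> IO Int
doubleAndLog x = do
  print x
  pure (2 * x)

main :: IO ()
main = do
  print (double 21)          -- prints 42
  result <- doubleAndLog 21  -- prints 21 as a side effect
  print result               -- prints 42
```

Any attempt to call `print` inside `double` is rejected by the type checker; the effect can only happen behind an `IO` signature.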
So you can rely on the compiler to detect any violations of a rule like "No impure code in domain logic".

I've written a simple Restaurant Booking REST Service API that explains how Haskell helps you to keep domain logic pure by
organizing your code according to the [ports and adapters pattern](https://github.com/thma/RestaurantReservation).

The sections on type classes (and on Monads in particular) have been quite lengthy. Yet they have hardly shown more than
the tip of the iceberg. If you want to dive deeper into type classes, I recommend
[The Typeclassopedia](https://wiki.haskell.org/Typeclassopedia).

## Conclusion

We have covered quite a bit of terrain in the course of this article.

It may seem that Haskell has invented an intimidating mass of programming concepts.
But in fact, Haskell inherits much from earlier functional programming languages.

Features like first-class functions, comprehensive list APIs and declarative programming
had already been introduced with Lisp and Scheme.

Several others, like pattern matching, non-strict evaluation, immutability, purity, static and strong typing,
type inference, algebraic data types and polymorphic data types,
were invented in languages like Hope, Miranda and ML.

Only a few features, like type classes and explicit side effects / monadic I/O, were first introduced in Haskell.

So if you already know some functional language concepts, Haskell shouldn't seem too alien to you.
For developers with a background in OO languages, the conceptual gap will be much larger.

I hope that this article helped to bridge that gap a bit and to better explain [why
functional programming](https://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.pdf) - and Haskell in particular - matters.

Using functional programming languages - or applying some of their techniques - will help
to create designs that are closer to the problem domain (as intended by domain-driven design),
more readable (due
to their declarative character), allow equational reasoning, provide a more rigorous
separation of business logic and side effects,
are more flexible for future changes or extensions, provide better testability (supporting BDD, TDD and property-based testing),
need much less debugging, are easier to maintain and, last but not least, are more fun to write.