# Lambda the Ultimate Pattern Factory

[![Actions Status](https://github.com/thma/LtuPatternFactory/workflows/Haskell%20CI/badge.svg)](https://github.com/thma/LtuPatternFactory/actions)

My first programming languages were Lisp, Scheme, and ML. When I later started to work in OO languages like C++ and Java, I noticed that idioms that are standard vocabulary in functional programming (FP) were not so easy to achieve and required sophisticated structures.
Books like [Design Patterns: Elements of Reusable Object-Oriented Software](https://en.wikipedia.org/wiki/Design_Patterns) were a great starting point to reason about those structures. One of my earliest findings was that several of the GoF patterns bear a stark resemblance to structures that are built into functional languages: for instance, the strategy pattern corresponds to higher-order functions in FP (for details see [below](#strategy)).

Recently, while re-reading the [Typeclassopedia](https://wiki.haskell.org/Typeclassopedia), I thought it would be a good exercise to map the structure of software [design patterns](https://en.wikipedia.org/wiki/Software_design_pattern#Classification_and_list) to the concepts found in the Haskell type class library and in functional programming in general.

By searching the web I found some blog entries studying specific patterns, but I did not come across any comprehensive study. As it seemed that nobody had done this kind of work yet, I found it worthwhile to spend some time on it and write down all my findings on the subject.

I think this kind of exposition could be helpful if you are:

* a programmer with an OO background who wants to get a better grip on how to implement more complex designs in functional programming
* a functional programmer who wants to get a deeper intuition for type classes
* studying the [Typeclassopedia](https://wiki.haskell.org/Typeclassopedia) and looking for an accompanying text providing example use cases and working code

> This project is work in progress, so please feel free to contact me with any corrections, adjustments, comments, suggestions and additional ideas you might have.
> Please use the [Issue Tracker](https://github.com/thma/LtuPatternFactory/issues) to enter your requests.

## Table of contents

* [Lambda the ultimate pattern factory](#lambda-the-ultimate-pattern-factory)
* [The Patternopedia](#the-patternopedia)
  * [Data Transfer Object → Functor](#data-transfer-object--functor)
  * [Singleton → Applicative](#singleton--applicative)
  * [Pipeline → Monad](#pipeline--monad)
  * [NullObject → Maybe Monad](#nullobject--maybe-monad)
  * [Interpreter → Reader Monad](#interpreter--reader-monad)
  <!--  * [? → MonadFail](#--monadfail)-->
  * [Aspect Weaving → Monad Transformers](#aspect-weaving--monad-transformers)
  <!--* [? → MonadFix](#--monadfix) -->
  * [Composite → SemiGroup → Monoid](#composite--semigroup--monoid)
  <!--* [? → Alternative, MonadPlus, ArrowPlus](--alternative-monadplus-arrowplus) -->
  * [Visitor → Foldable](#visitor--foldable)
  * [Iterator → Traversable](#iterator--traversable)
  <!-- * [? → Bifunctor](#--bifunctor) -->
  * [The Pattern behind the Patterns → Category](#the-pattern-behind-the-patterns--category)
  <!--* [? → Arrow](#--arrow) -->
  * [Fluent Api → Comonad](#fluent-api--comonad)
* [Beyond type class patterns](#beyond-type-class-patterns)
  * [Dependency Injection → Parameter Binding, Partial Application](#dependency-injection--parameter-binding-partial-application)
  * [Command → Functions as First Class Citizens](#command--functions-as-first-class-citizens)
  * [Adapter → Function Composition](#adapter--function-composition)
  * [Template Method → type class default functions](#template-method--type-class-default-functions)
  * [Creational Patterns](#creational-patterns)
    * [Abstract Factory → functions as data type values](#abstract-factory--functions-as-data-type-values)
    * [Builder → record syntax, smart constructor](#builder--record-syntax-smart-constructor)
* [Functional Programming Patterns](#functional-programming-patterns)
  * [Higher Order Functions](#higher-order-functions)
  * [Map Reduce](#map-reduce)
  <!-- * [Continuation Passing](#continuation-passing) -->
  * [Lazy Evaluation](#lazy-evaluation)
  <!-- * [Functional Reactive Programming](#functional-reactive-programming) -->
  * [Reflection](#reflection)
* [Conclusions](#conclusions)
* [Some related links](#some-interesting-links)

## The Patternopedia

The [Typeclassopedia](https://wiki.haskell.org/wikiupload/8/85/TMR-Issue13.pdf) is a now classic paper that introduces the Haskell type classes by clarifying their algebraic and category-theoretic background. In particular it explains the relationships among those type classes.

In this chapter I'm taking a tour through the Typeclassopedia from a design pattern perspective.
For each of the Typeclassopedia type classes I try to explain how it corresponds to structures applied in software design patterns.

As a reference map I have included the following chart, which depicts the relationships between the type classes covered in the Typeclassopedia:

![The Haskell type classes covered by the Typeclassopedia](https://wiki.haskell.org/wikiupload/c/c7/Typeclassopedia-diagram.svg)

* Solid arrows point from the general to the specific; that is, if there is an arrow from Foo to Bar it means that every Bar is (or should be, or can be made into) a Foo.
* Dotted lines indicate some other sort of relationship.
* Monad and ArrowApply are equivalent.
* Apply and Comonad are greyed out since they are not actually (yet?) in the standard Haskell libraries.

### Data Transfer Object → Functor

> In the field of programming a data transfer object (DTO) is an object that carries data between processes.
> The motivation for its use is that communication between processes is usually done resorting to remote interfaces
> (e.g., web services), where each call is an expensive operation.
> Because the majority of the cost of each call is related to the round-trip time between the client and the server,
> one way of reducing the number of calls is to use an object (the DTO) that aggregates the data that would have been
> transferred by the several calls, but that is served by one call only.
> (quoted from [Wikipedia](https://en.wikipedia.org/wiki/Data_transfer_object))

Data Transfer Object is a pattern from Martin Fowler's [Patterns of Enterprise Application Architecture](https://martinfowler.com/eaaCatalog/dataTransferObject.html).
It is typically used in multi-layered applications where data is transferred between backends and frontends.

The aggregation of data usually also involves a denormalization of data structures. As an example, please refer to the following
diagram, where two entities from the backend (`Album` and `Artist`) are assembled into a compound denormalized DTO `AlbumDTO`:

![DTO](https://martinfowler.com/eaaCatalog/dtoSketch.gif)

Of course, there is also an inverse mapping from `AlbumDTO` to `Album`, which is not shown in this diagram.

In Haskell `Album`, `Artist` and `AlbumDTO` can be represented as data types with record notation:

```haskell
data Album = Album {
    title       :: String
  , publishDate :: Int
  , labelName   :: String
  , artist      :: Artist
} deriving (Show)

data Artist = Artist {
    publicName :: String
  , realName   :: Maybe String
} deriving (Show)

data AlbumDTO = AlbumDTO {
    albumTitle  :: String
  , published   :: Int
  , label       :: String
  , artistName  :: String
} deriving (Show, Read)
```

The transfer from an `Album` to an `AlbumDTO` and vice versa can be achieved by two simple functions that perform the
intended field-wise
mappings:

```haskell
toAlbumDTO :: Album -> AlbumDTO
toAlbumDTO Album {title = t, publishDate = d, labelName = l, artist = a} =
  AlbumDTO {albumTitle = t, published = d, label = l, artistName = publicName a}

toAlbum :: AlbumDTO -> Album
toAlbum AlbumDTO {albumTitle = t, published = d, label = l, artistName = n} =
  Album {title = t, publishDate = d, labelName = l, artist = Artist {publicName = n, realName = Nothing}}
```

In these few lines we have covered the basic idea of the DTO pattern.

Now let's consider the typical situation where you have to transfer not only a *single* `Album` instance but a whole
list of `Album` instances, e.g.:

```haskell
albums :: [Album]
albums =
    [
      Album {title = "Microgravity",
             publishDate = 1991,
             labelName = "Origo Sound",
             artist = Artist {publicName = "Biosphere", realName = Just "Geir Jenssen"}}
    , Album {title = "Apollo - Atmospheres & Soundtracks",
             publishDate = 1983,
             labelName = "Editions EG",
             artist = Artist {publicName = "Brian Eno", realName = Just "Brian Peter George St. John le Baptiste de la Salle Eno"}}
    ]
```

In this case we have to apply the `toAlbumDTO` function to all elements of the list.
In Haskell this *higher-order* operation is called `map`:

```haskell
map :: (a -> b) -> [a] -> [b]
map _f []    = []
map f (x:xs) = f x : map f xs
```

`map` takes a function `f :: (a -> b)` (a function from type `a` to type `b`) and an `[a]` list and returns a `[b]` list.
The `b` elements are produced by applying the function `f` to each element of the input list.
Applying `toAlbumDTO` to a list of albums can thus be done in the Haskell REPL GHCi as follows:

```haskell
λ> map toAlbumDTO albums
[AlbumDTO {albumTitle = "Microgravity", published = 1991, label = "Origo Sound", artistName = "Biosphere"},
 AlbumDTO {albumTitle = "Apollo - Atmospheres & Soundtracks", published = 1983, label = "Editions EG", artistName = "Brian Eno"}]
```

This mapping of functions over lists is a basic technique known in many functional languages.
Haskell further generalises this technique with the concept of the `Functor` type class.

> The `Functor` class is the most basic and ubiquitous type class in the Haskell libraries.
> A simple intuition is that a `Functor` represents a “container” of some sort, along with the ability to apply a
> function uniformly to every element in the container. For example, a list is a container of elements,
> and we can apply a function to every element of a list, using `map`.
> As another example, a binary tree is also a container of elements, and it’s not hard to come up with a way to
> recursively apply a function to every element in a tree.
>
> Another intuition is that a Functor represents some sort of “computational context”.
> This intuition is generally more useful, but is more difficult to explain, precisely because it is so general.
>
> Quoted from the [Typeclassopedia](https://wiki.haskell.org/Typeclassopedia#Functor)

Basically, all instances of the `Functor` type class must provide a function `fmap`:

```haskell
class Functor f where
    fmap :: (a -> b) -> f a -> f b
```

For lists the implementation is simply the `map` function that we have already seen above:

```haskell
instance Functor [] where
    fmap = map
```

Functors have interesting properties: they fulfil the two so-called *functor laws*,
which are part of the definition of a mathematical functor:

```haskell
fmap id = id                        -- (1)
fmap (g . h) = (fmap g) . (fmap h)  -- (2)
```

The first law `(1)` states that mapping the identity function over every item in a container has no effect.

The second law `(2)` says that mapping a composition of two functions over every item in a container is the same as first
mapping one function and then mapping the other.

These laws are very useful when we compose complex mappings from simpler operations.

Say we want to extend our DTO mapping functionality by also providing some kind of marshalling. For a single album instance,
we can use function composition `(f . g) x == f (g x)`, which is defined in Haskell as:

```haskell
(.) :: (b -> c) -> (a -> b) -> a -> c
(.) f g x = f (g x)
```

In the following GHCi session we use `(.)` to first convert an `Album` to its `AlbumDTO` representation and then
turn that into a `String` by using the `show` function:

```haskell
λ> album1 = albums !! 0
λ> print album1
Album {title = "Microgravity", publishDate = 1991, labelName = "Origo Sound", artist = Artist {publicName = "Biosphere", realName = Just "Geir Jenssen"}}
λ> marshalled = (show . toAlbumDTO) album1
λ> :t marshalled
marshalled :: String
λ> print marshalled
"AlbumDTO {albumTitle = \"Microgravity\", published = 1991, label = \"Origo Sound\", artistName = \"Biosphere\"}"
```

As we can rely on the functor law `fmap (g . h) = (fmap g) . (fmap h)`, we can use `fmap` to apply the same composed
function to any functor, for example our list of albums:

```haskell
λ> fmap (show . toAlbumDTO) albums
["AlbumDTO {albumTitle = \"Microgravity\", published = 1991, label = \"Origo Sound\", artistName = \"Biosphere\"}",
 "AlbumDTO {albumTitle = \"Apollo - Atmospheres & Soundtracks\", published = 1983, label = \"Editions EG\", artistName = \"Brian Eno\"}"]
```

We can build more complex mappings by chaining multiple functions, for example to produce a gzipped byte string output:

```haskell
λ> gzipped = (compress . pack . show . toAlbumDTO) album1
```

As the sequence of operations must be read from right to left with the `(.)` operator, this becomes quite unintuitive for longer sequences.
Thus, Haskellers often use the flipped version of `(.)`, `(>>>)`, which is defined as:

```haskell
f >>> g = g . f
```

Using `(>>>)`, the intent of our composition chain becomes much clearer (at least when you are trained to read from left to right):

```haskell
λ> gzipped = (toAlbumDTO >>> show >>> pack >>> compress) album1
```

Unmarshalling can be defined using the inverse operations:

```haskell
λ> unzipped = (decompress >>> unpack >>> read >>> toAlbum) gzipped
λ> :t unzipped
unzipped :: Album
λ> print unzipped
Album {title = "Microgravity", publishDate = 1991, labelName = "Origo Sound", artist = Artist {publicName = "Biosphere", realName = Nothing}}
```

Of course, we can use `fmap` to apply such a composed mapping function to any container type instantiating the `Functor`
type class:

```haskell
λ> marshalled   = fmap (toAlbumDTO >>> show >>> pack >>> compress) albums
λ> unmarshalled = fmap (decompress >>> unpack >>> read >>> toAlbum) marshalled
λ> print unmarshalled
[Album {title = "Microgravity", publishDate = 1991, labelName = "Origo Sound", artist = Artist {publicName = "Biosphere", realName = Nothing}},
 Album {title = "Apollo - Atmospheres & Soundtracks", publishDate = 1983, labelName = "Editions EG", artist = Artist {publicName = "Brian Eno", realName = Nothing}}]
```

[Sourcecode for this section](https://github.com/thma/LtuPatternFactory/blob/master/src/DataTransferObject.hs)

### Singleton → Applicative

> "The singleton pattern is a software design pattern that restricts the instantiation of a class to one object.
This is useful when exactly one object is needed to coordinate actions across the system."
> (quoted from [Wikipedia](https://en.wikipedia.org/wiki/Singleton_pattern))

The singleton pattern ensures that multiple requests to a given object always return one and the same singleton instance.
In functional programming these semantics can be achieved with `let`:

```haskell
let singleton = someExpensiveComputation
in  mainComputation

-- or in lambda notation:
(\singleton -> mainComputation) someExpensiveComputation
```

Via the `let` binding we can thread the singleton through arbitrary code in the `in` block. All occurrences of `singleton` in the `mainComputation` will point to the same instance.

Type classes provide several tools to make this kind of threading more convenient or even to avoid explicit threading of instances.

#### Using Applicative Functor for threading of singletons

The following code defines a simple expression evaluator:

```haskell
data Exp e = Var String
           | Val e
           | Add (Exp e) (Exp e)
           | Mul (Exp e) (Exp e)

-- the environment is a list of tuples mapping variable names to values of type e
type Env e = [(String, e)]

-- a simple evaluator reducing expressions to numbers
eval :: Num e => Exp e -> Env e -> e
eval (Var x)   env = fetch x env
eval (Val i)   env = i
eval (Add p q) env = eval p env + eval q env
eval (Mul p q) env = eval p env * eval q env
```

`eval` is a classic evaluator function that recursively evaluates sub-expressions before applying `+` or `*`.
Note how the explicit `env` parameter is threaded through the recursive `eval` calls.
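The evaluator relies on a `fetch` function for variable lookup that is not shown in this section (the full definition lives in the source file linked at the end of the section). A minimal sketch of what such a helper might look like, assuming an unbound variable is treated as a fatal error:

```haskell
-- Env as defined in the section above: variable names mapped to values
type Env e = [(String, e)]

-- hypothetical sketch of the variable lookup used by eval:
-- Prelude.lookup yields a Maybe, which we turn into a value or a hard error
fetch :: String -> Env e -> e
fetch x env = case lookup x env of
  Just v  -> v
  Nothing -> error ("variable " ++ x ++ " is not bound in the environment")
```
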
This is needed to have the environment available for variable lookup at any recursive call depth.

If we now bind `env` to a value, as in the following snippet, it is used as an immutable singleton within the recursive evaluation of `eval exp env`:

```haskell
main = do
  let exp = Mul (Add (Val 3) (Val 1))
                (Mul (Val 2) (Var "pi"))
      env = [("pi", pi)]
  print $ eval exp env
```

Experienced Haskellers will notice the ["eta-reduction smell"](https://wiki.haskell.org/Eta_conversion) in `eval (Var x) env = fetch x env`, which hints at the possibility of removing `env` as an explicit parameter. We cannot do this right away, as the other equations for `eval` do not allow eta-reduction. In order to do so we have to apply the combinators of the `Applicative` functor:

```haskell
class Functor f => Applicative f where
    pure  :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b

instance Applicative ((->) a) where
    pure        = const
    (<*>) f g x = f x (g x)
```

This `Applicative` instance allows us to rewrite `eval` as follows:

```haskell
eval :: Num e => Exp e -> Env e -> e
eval (Var x)   = fetch x
eval (Val i)   = pure i
eval (Add p q) = pure (+) <*> eval p <*> eval q
eval (Mul p q) = pure (*) <*> eval p <*> eval q
```

Any explicit handling of the variable `env` is now removed.
(I took this example from the classic paper [Applicative programming with effects](http://www.soi.city.ac.uk/~ross/papers/Applicative.pdf), which details how `pure` and `<*>` correspond to the combinatory logic combinators `K` and `S`.)

[Sourcecode for this section](https://github.com/thma/LtuPatternFactory/blob/master/src/Singleton.hs)

### Pipeline → Monad

> In software engineering, a pipeline consists of a chain of processing elements (processes, threads, coroutines, functions, etc.), arranged so that the
output of each element is the input of the next; the name is by analogy to a physical pipeline.
> (quoted from [Wikipedia](https://en.wikipedia.org/wiki/Pipeline_(software)))

The concept of pipes and filters in Unix shell scripts is a typical example of the pipeline architecture pattern.

```bash
$ echo "hello world" | wc -w | xargs printf "%d*3\n" | bc -l
6
```

This works exactly as stated in the Wikipedia definition of the pattern: the output of `echo "hello world"` is used as input for the next command `wc -w`. The output of this command is then piped as input into `xargs printf "%d*3\n"`, and so on.
At first glance this might look like ordinary function composition. We could for instance come up with the following approximation in Haskell:

```haskell
((3 *) . length . words) "hello world"
6
```

But with this design we miss an important feature of the chain of shell commands: the commands do not work on elementary types like strings or numbers but on input and output streams that are used to propagate the actual elementary data around. So we can't just send a String into the `wc` command as in `"hello world" | wc -w`.
Instead we have to use `echo` to place the string into a stream that we can then use as input to the `wc` command:

```bash
> echo "hello world" | wc -w
```

So we might say that `echo` *injects* the String `"hello world"` into the stream context.
We can capture this behaviour in a functional program like this:

```haskell
-- The Stream type is a wrapper around an arbitrary payload type 'a'
newtype Stream a = Stream a deriving (Show)

-- echo injects an item of type 'a' into the Stream context
echo :: a -> Stream a
echo = Stream

-- the 'andThen' operator used for chaining commands
infixl 7 |>
(|>) :: Stream a -> (a -> Stream b) -> Stream b
Stream x |> f = f x

-- echo and |> are used to create the actual pipeline
pipeline :: String -> Stream Int
pipeline str =
  echo str |> echo . length . words |> echo . (3 *)

-- now executing the program in the GHCi REPL:
ghci> pipeline "hello world"
Stream 6
```

The `echo` function injects any input into the `Stream` context:

```haskell
ghci> echo "hello world"
Stream "hello world"
```

The `|>` operator (pronounced "andThen") does the function chaining:

```haskell
ghci> echo "hello world" |> echo . words
Stream ["hello","world"]
```

The result of `|>` is of type `Stream b`; that's why we cannot just write `echo "hello world" |> words`. We have to use `echo` to create a `Stream` output that can be digested by a subsequent `|>`.

The interplay of a context type `Stream a` and the functions `echo` and `|>` is a well-known pattern from functional languages: it's the legendary *Monad*.
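This claim can be made concrete: `Stream` really can be made an instance of Haskell's `Monad` type class. The following sketch restates the `Stream` type so the snippet stands alone, together with the `Functor` and `Applicative` instances that a `Monad` instance requires; `pure` plays the role of `echo`, and `>>=` is exactly our `|>`:

```haskell
-- Stream restated as in the listing above
newtype Stream a = Stream a deriving (Show)

instance Functor Stream where
  fmap f (Stream x) = Stream (f x)

instance Applicative Stream where
  pure = Stream                        -- pure corresponds to echo
  Stream f <*> Stream x = Stream (f x)

instance Monad Stream where
  Stream x >>= f = f x                 -- (>>=) corresponds to (|>)

-- the pipeline rewritten with the standard monadic operators
pipeline :: String -> Stream Int
pipeline str = pure str >>= pure . length . words >>= pure . (3 *)
```

`pipeline "hello world"` again yields `Stream 6`, now expressed entirely in the standard `Monad` vocabulary.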
As the [Wikipedia article on the pipeline pattern](https://en.wikipedia.org/wiki/Pipeline_(software)) states:

> Pipes and filters can be viewed as a form of functional programming, using byte streams as data objects; more specifically, they can be seen as a particular form of monad for I/O.

There is an interesting paper available elaborating on the monadic nature of Unix pipes: [Monadic Shell](http://okmij.org/ftp/Computation/monadic-shell.html).

Here is the definition of the Monad type class in Haskell:

```haskell
class Applicative m => Monad m where
    -- | Sequentially compose two actions, passing any value produced
    -- by the first as an argument to the second.
    (>>=)  :: m a -> (a -> m b) -> m b

    -- | Inject a value into the monadic type.
    return :: a -> m a
    return = pure
```

By looking at the types of `>>=` and `return` it's easy to see the direct correspondence to `|>` and `echo` in the pipeline example above:

```haskell
    (|>)   :: Stream a -> (a -> Stream b) -> Stream b
    echo   :: a -> Stream a
```

Hmm, this is nice, but it still looks a lot like ordinary composition of functions, just with the addition of a wrapper.
In this simplified example that's true, because we have designed the `|>` operator to simply unwrap a value from the Stream and bind it to the formal parameter of the subsequent function:

```haskell
Stream x |> f = f x
```

But we are free to implement the `andThen` operator in any way we see fit, as long as we maintain the type signature and the [monad laws](https://en.wikipedia.org/wiki/Monad_%28functional_programming%29#Monad_laws).
So we could for instance change the semantics of `>>=` to keep a log along the execution pipeline:

```haskell
-- The DeriveFunctor language pragma provides automatic derivation of Functor instances
{-# LANGUAGE DeriveFunctor #-}

-- a Log is just a list of Strings
type Log = [String]

-- the Stream type is extended by a Log that keeps track of any logged messages
newtype LoggerStream a = LoggerStream (a, Log) deriving (Show, Functor)

instance Applicative LoggerStream where
  -- pure wraps a tuple of the actual payload and an empty Log
  pure a = LoggerStream (a, [])
  LoggerStream (f, _) <*> r = fmap f r

-- our definition of the Logging Stream Monad:
instance Monad LoggerStream where
  -- we define (>>=) to pass on the payload and concatenate the logs
  m1 >>= m2 = let LoggerStream (x1, l1) = m1
                  LoggerStream (x2, l2) = m2 x1
              in  LoggerStream (x2, l1 ++ l2)

-- compute length of a String and provide a log message
logLength :: String -> LoggerStream Int
logLength str = let l = length (words str)
                in  LoggerStream (l, ["length(" ++ str ++ ") = " ++ show l])

-- multiply x by 3 and provide a log message
logMultiply :: Int -> LoggerStream Int
logMultiply x = let z = x * 3
                in  LoggerStream (z, ["multiply(" ++ show x ++ ", 3" ++ ") = " ++ show z])

-- the logging version of the pipeline
logPipeline :: String -> LoggerStream Int
logPipeline str =
  return str >>= logLength >>= logMultiply

-- and then in GHCi:
> logPipeline "hello logging world"
LoggerStream (9,["length(hello logging world) = 3","multiply(3, 3) = 9"])
```

What's noteworthy here is that Monads allow us to make the mechanism of chaining functions *explicit*.
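Since `LoggerStream` is a `Monad`, the same logging pipeline can also be written in do-notation, which GHC desugars into the very `>>=` chain shown above. The sketch below restates the definitions in a compact, self-contained form (the log messages are simplified, and this `Applicative` also concatenates logs):

```haskell
{-# LANGUAGE DeriveFunctor #-}

type Log = [String]

newtype LoggerStream a = LoggerStream (a, Log) deriving (Show, Functor)

instance Applicative LoggerStream where
  pure a = LoggerStream (a, [])
  LoggerStream (f, l1) <*> LoggerStream (x, l2) = LoggerStream (f x, l1 ++ l2)

instance Monad LoggerStream where
  LoggerStream (x, l1) >>= f = let LoggerStream (y, l2) = f x
                               in  LoggerStream (y, l1 ++ l2)

logLength :: String -> LoggerStream Int
logLength str = let l = length (words str)
                in  LoggerStream (l, ["length = " ++ show l])

logMultiply :: Int -> LoggerStream Int
logMultiply x = LoggerStream (x * 3, ["multiply = " ++ show (x * 3)])

-- do-notation version: each '<-' desugars into a (>>=),
-- so our Monad instance decides what happens "between the lines"
logPipeline' :: String -> LoggerStream Int
logPipeline' str = do
  n <- logLength str
  logMultiply n
```

By the monad left-identity law, `return str >>= logLength` is the same as `logLength str`, which is why the leading `return` from the earlier version can simply be dropped here.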
We can define what `andThen` should mean in our pipeline by choosing a different Monad implementation.
So in a sense Monads could be called [programmable semicolons](http://book.realworldhaskell.org/read/monads.html#id642960).

To make this statement a bit clearer we will have a closer look at the internal workings of the `Maybe` monad in the next section.

[Sourcecode for this section](https://github.com/thma/LtuPatternFactory/blob/master/src/Pipeline.hs)

### NullObject → Maybe Monad

> [...] a null object is an object with no referenced value or with defined neutral ("null") behavior. The null object design pattern describes the uses of such objects and their behavior (or lack thereof).
> [Quoted from Wikipedia](https://en.wikipedia.org/wiki/Null_object_pattern)

In functional programming the null object pattern is typically formalized with option types:

> [...] an option type or maybe type is a polymorphic type that represents encapsulation of an optional value; e.g., it is used as the return type of functions which may or may not return a meaningful value when they are applied. It consists of a constructor which either is empty (named `None` or `Nothing`), or which encapsulates the original data type `A` (written `Just A` or `Some A`).
> [Quoted from Wikipedia](https://en.wikipedia.org/wiki/Option_type)

(See also: [Null Object as Identity](http://blog.ploeh.dk/2018/04/23/null-object-as-identity/))

In Haskell the simplest option type is `Maybe`. Let's dive directly into an example.
We define a reverse index mapping songs to album titles.
If we now look up a song title, we may either be lucky and find the respective album, or not so lucky when there is no album matching our song:

```haskell
import           Data.Map (Map, fromList)
import qualified Data.Map as Map (lookup) -- avoid clash with Prelude.lookup

-- type aliases for Songs and Albums
type Song   = String
type Album  = String

-- the simplified reverse song index
songMap :: Map Song Album
songMap = fromList
    [("Baby Satellite","Microgravity")
    ,("An Ending", "Apollo: Atmospheres and Soundtracks")]
```

We can query this map by using the function `Map.lookup :: Ord k => k -> Map k a -> Maybe a`.

If no match is found it will return `Nothing`; if a match is found it will return `Just match`:

```haskell
ghci> Map.lookup "Baby Satellite" songMap
Just "Microgravity"
ghci> Map.lookup "The Fairy Tale" songMap
Nothing
```

Actually, the `Maybe` type is defined as:

```haskell
data Maybe a = Nothing | Just a
    deriving (Eq, Ord)
```

Code using the `Map.lookup` function will never be confronted with any kind of exceptions, null pointers or other nasty things. Even in case of errors a lookup will always return a properly typed `Maybe` instance. By pattern matching on `Nothing` or `Just a`, client code can react to failing matches or positive results:

```haskell
    case Map.lookup "Ancient Campfire" songMap of
        Nothing -> print "sorry, could not find your song"
        Just a  -> print a
```

Let's try to apply this to an extension of our simple song lookup.
Let's assume that our music database has much more information available.
Apart from a reverse index from songs to albums, there might also be an index mapping album titles to artists.
And we might also have an index mapping artist names to their websites:

```haskell
type Song   = String
type Album  = String
type Artist = String
type URL    = String

songMap :: Map Song Album
songMap = fromList
    [("Baby Satellite","Microgravity")
    ,("An Ending", "Apollo: Atmospheres and Soundtracks")]

albumMap :: Map Album Artist
albumMap = fromList
    [("Microgravity","Biosphere")
    ,("Apollo: Atmospheres and Soundtracks", "Brian Eno")]

artistMap :: Map Artist URL
artistMap = fromList
    [("Biosphere","http://www.biosphere.no//")
    ,("Brian Eno", "http://www.brian-eno.net")]

lookup' :: Ord a => Map a b -> a -> Maybe b
lookup' = flip Map.lookup

findAlbum :: Song -> Maybe Album
findAlbum = lookup' songMap

findArtist :: Album -> Maybe Artist
findArtist = lookup' albumMap

findWebSite :: Artist -> Maybe URL
findWebSite = lookup' artistMap
```

With all this information at hand we want to write a function that takes a `Song` and returns a `Maybe URL` by going from song to album to artist to website URL:

```haskell
findUrlFromSong :: Song -> Maybe URL
findUrlFromSong song =
    case findAlbum song of
        Nothing    -> Nothing
        Just album ->
            case findArtist album of
                Nothing     -> Nothing
                Just artist ->
                    case findWebSite artist of
                        Nothing  -> Nothing
                        Just url -> Just url
```

This code makes use of the pattern matching logic described before. It's worth noting that there is some nice circuit breaking happening in case of a `Nothing`. 
In this case `Nothing` is directly returned as the result of the function and the rest of the case-ladder is not executed.
What's not so nice is *"the dreaded ladder of code marching off the right of the screen"* [(quoted from Real World Haskell)](http://book.realworldhaskell.org/).

For each find function we have to repeat the same ceremony of pattern matching on the result and either returning `Nothing` or proceeding with the next nested level.

The good news is that it is possible to avoid this ladder.
We can rewrite our search by applying the `andThen` operator `>>=`, as `Maybe` is an instance of `Monad`:

```haskell
findUrlFromSong' :: Song -> Maybe URL
findUrlFromSong' song =
    findAlbum song   >>= \album ->
    findArtist album >>= \artist ->
    findWebSite artist
```

or even shorter, as we can eliminate the lambda expressions by applying [eta-conversion](https://wiki.haskell.org/Eta_conversion):

```haskell
findUrlFromSong'' :: Song -> Maybe URL
findUrlFromSong'' song =
    findAlbum song >>= findArtist >>= findWebSite
```

Using it in GHCi:

```haskell
ghci> findUrlFromSong'' "All you need is love"
Nothing
ghci> findUrlFromSong'' "An Ending"
Just "http://www.brian-eno.net"
```

The expression `findAlbum song >>= findArtist >>= findWebSite` and the sequencing of actions in the [pipeline](#pipeline---monad) example `return str >>= return . length . words >>= return . 
(3 *)` have a similar structure.

But the behaviour of both chains is quite different: in the Maybe Monad `a >>= b` does not evaluate `b` if `a == Nothing` but stops the whole chain of actions by simply returning `Nothing`.

The pattern matching and 'short-circuiting' is directly coded into the definition of `(>>=)` in the Monad implementation of `Maybe`:

```haskell
instance  Monad Maybe  where
    (Just x) >>= k      = k x
    Nothing  >>= _      = Nothing
```

This elegant feature of `(>>=)` in the `Maybe` Monad allows us to avoid ugly and repetitive coding.

#### Avoiding partial functions by using Maybe

Maybe is often used to avoid exposing partial functions to client code. Take for example division by zero or computing the square root of a negative number, which are undefined (at least for real numbers).
Here are safe &ndash; that is, total &ndash; definitions of these functions that return `Nothing` for the undefined cases:

```haskell
safeRoot :: Double -> Maybe Double
safeRoot x
    | x >= 0    = Just (sqrt x)
    | otherwise = Nothing

safeReciprocal :: Double -> Maybe Double
safeReciprocal x
    | x /= 0    = Just (1/x)
    | otherwise = Nothing
```

As we have already learned, the monadic `>>=` operator allows us to chain such functions, as in the following example:

```haskell
safeRootReciprocal :: Double -> Maybe Double
safeRootReciprocal x = return x >>= safeReciprocal >>= safeRoot
```

This can be written even more tersely as:

```haskell
safeRootReciprocal :: Double -> Maybe Double
safeRootReciprocal = safeReciprocal >=> safeRoot
```

The use of the [Kleisli 'fish' operator `>=>`](https://www.stackage.org/haddock/lts-13.0/base-4.12.0.0/Control-Monad.html#v:-62--61--62-) (defined as `f >=> g = \x -> f x >>= g`) makes it more evident that we are actually aiming at a composition of the monadic functions `safeReciprocal` 
and `safeRoot`.

There are many predefined Monads available in the curated Haskell libraries, and it's also possible to combine their effects by making use of `MonadTransformers`. But that's a [different story...](#aspect-weaving--monad-transformers)

[Sourcecode for this section](https://github.com/thma/LtuPatternFactory/blob/master/src/NullObject.hs)

### Interpreter → Reader Monad

> In computer programming, the interpreter pattern is a design pattern that specifies how to evaluate sentences in a language. The basic idea is to have a class for each symbol (terminal or nonterminal) in a specialized computer language. The syntax tree of a sentence in the language is an instance of the composite pattern and is used to evaluate (interpret) the sentence for a client.
>
> [Quoted from Wikipedia](https://en.wikipedia.org/wiki/Interpreter_pattern)

In the section [Singleton → Applicative](#singleton--applicative) we have already written a simple expression evaluator. 
From that section it should be obvious how easy the definition of evaluators and interpreters is in functional programming languages.

The main ingredients are:

* Algebraic Data Types (ADTs) used to define the expression data type which is to be evaluated
* An evaluator function that uses pattern matching on the expression ADT
* 'implicit' threading of an environment

In the section on Singleton we have seen that some kind of 'implicit' threading of the environment can already be achieved with `Applicative` Functors.
We still had the environment as an explicit parameter of the `eval` function:

```haskell
eval :: Num e => Exp e -> Env e -> e
```

but we could omit it in the pattern matching equations:

```haskell
eval (Var x)   = fetch x
eval (Val i)   = pure i
eval (Add p q) = pure (+) <*> eval p  <*> eval q
eval (Mul p q) = pure (*) <*> eval p  <*> eval q
```

By using Monads the handling of the environment can be made even more implicit.

I'll demonstrate this with a slightly extended version of the evaluator. 
In the first step we extend the expression syntax to also provide let expressions and generic support for binary operators:

```haskell
-- | a simple expression ADT
data Exp a =
      Var String                            -- a variable to be looked up
    | BinOp (BinOperator a) (Exp a) (Exp a) -- a binary operator applied to two expressions
    | Let String (Exp a) (Exp a)            -- a let expression
    | Val a                                 -- an atomic value

-- | a binary operator type
type BinOperator a =  a -> a -> a

-- | the environment is just a list of mappings from variable names to values
type Env a = [(String, a)]
```

With this data type we can encode expressions like:

```haskell
let x = 4+5
in 2*x
```

as:

```haskell
Let "x" (BinOp (+) (Val 4) (Val 5))
        (BinOp (*) (Val 2) (Var "x"))
```

In order to evaluate such an expression we must be able to modify the environment at runtime to create a binding for the variable `x`, which will be referred to in the `in` part of the expression.

Next we define an evaluator function that pattern matches the above expression ADT:

```haskell
eval :: MonadReader (Env a) m => Exp a -> m a
eval (Val i)          = return i
eval (Var x)          = asks (fetch x)
eval (BinOp op e1 e2) = liftM2 op (eval e1) (eval e2)
eval (Let x e1 e2)    = eval e1 >>= \v -> local ((x,v):) (eval e2)
```

Let's explore this dense code line by line.

```haskell
eval :: MonadReader (Env a) m => Exp a -> m a
```

The simplest instance of `MonadReader` is the partially applied function type `((->) env)`.
Let's assume the compiler will choose this type as the `MonadReader` instance. 
We can then rewrite the function signature as follows:

```haskell
eval :: Exp a -> ((->) (Env a)) a  -- expanding m to ((->) (Env a))
eval :: Exp a -> Env a -> a        -- applying infix notation for (->)
```

This is exactly the signature we were using for the `Applicative` eval function, which matches our original intent to evaluate an expression of type `Exp a` in an environment of type `Env a` to a result of type `a`.

```haskell
eval (Val i)          = return i
```

In this line we are pattern matching for a `(Val i)`. The atomic value `i` is `return`ed, that is, lifted to a value of the type `Env a -> a`.

```haskell
eval (Var x)          = asks (fetch x)
```

`asks` is a helper function that applies its argument `f :: env -> a` (in our case `(fetch x)`, which looks up variable `x`) to the environment. `asks` is thus typically used to handle environment lookups:

```haskell
asks :: (MonadReader env m) => (env -> a) -> m a
asks f = ask >>= return . f
```

Now to the next line, handling the application of a binary operator:

```haskell
eval (BinOp op e1 e2) = liftM2 op (eval e1) (eval e2)
```

`op` is a binary function of type `a -> a -> a` (typical examples are binary arithmetic functions like `+`, `-`, `*`, `/`).

We want to apply this operation to the two expressions `(eval e1)` and `(eval e2)`.
As both expressions are to be evaluated within the same monadic context, we have to use `liftM2` to lift `op` into this context:

```haskell
-- | Promote a function to a monad, scanning the monadic arguments from
-- left to right.  
For example,
--
-- > liftM2 (+) [0,1] [0,2] = [0,2,1,3]
-- > liftM2 (+) (Just 1) Nothing = Nothing
--
liftM2  :: (Monad m) => (a1 -> a2 -> r) -> m a1 -> m a2 -> m r
liftM2 f m1 m2 = do { x1 <- m1; x2 <- m2; return (f x1 x2) }
```

The last step is the evaluation of `Let x e1 e2` expressions like `Let "x" (Val 7) (BinOp (+) (Var "x") (Val 5))`. To make this work we have to evaluate `e1` and extend the environment by a binding of the variable `x` to the result of that evaluation.
Then we have to evaluate `e2` in the context of the extended environment:

```haskell
eval (Let x e1 e2)    = eval e1 >>= \v ->           -- bind the result of (eval e1) to v
                        local ((x,v):) (eval e2)    -- add (x,v) to the env, eval e2 in the extended env
```

The interesting part here is the helper function `local f m`, which applies `f` to the environment and then executes `m` against the (locally) changed environment.
Providing a locally modified environment as the scope of the evaluation of `e2` is exactly what the `let` binding intends:

```haskell
-- | Executes a computation in a modified environment.
local :: (r -> r) -- ^ The function to modify the environment.
        -> m a    -- ^ @Reader@ to run in the modified environment.
        -> m a

instance MonadReader r ((->) r) where
    local f m = m . f
```

Now we can use `eval` to evaluate our example expression:

```haskell
interpreterDemo = do
    putStrLn "Interpreter -> Reader Monad + ADTs + pattern matching"
    let exp1 = Let "x"
                (BinOp (+) (Val 4) (Val 5))
                (BinOp (*) (Val 2) (Var "x"))
    print $ runReader (eval exp1) env

-- and then in GHCi:

> interpreterDemo
18
```

By virtue of the `local` function we used `MonadReader` as if it provided modifiable state. 
So for many use cases that require only *local* state modifications it's not required to use the somewhat more tricky `MonadState`.

Writing the interpreter function with `MonadState` looks as follows:

```haskell
eval1 :: (MonadState (Env a) m) => Exp a -> m a
eval1 (Val i)          = return i
eval1 (Var x)          = gets (fetch x)
eval1 (BinOp op e1 e2) = liftM2 op (eval1 e1) (eval1 e2)
eval1 (Let x e1 e2)    = eval1 e1        >>= \v ->
                         modify ((x,v):) >>
                         eval1 e2
```

This section was inspired by ideas presented in [Quick Interpreters with the Reader Monad](https://donsbot.wordpress.com/2006/12/11/quick-interpreters-with-the-reader-monad/).

[Sourcecode for this section](https://github.com/thma/LtuPatternFactory/blob/master/src/Interpreter.hs)

<!-- 
### ? → MonadFail

tbd.
-->

### Aspect Weaving → Monad Transformers

> In computing, aspect-oriented programming (AOP) is a programming paradigm that aims to increase modularity by allowing the separation of cross-cutting concerns. It does so by adding additional behavior to existing code (an advice) without modifying the code itself, instead separately specifying which code is modified via a "pointcut" specification, such as "log all function calls when the function's name begins with 'set'". 
This allows behaviors that are not central to the business logic (such as logging) to be added to a program without cluttering the code core to the functionality.
>
> [Quoted from Wikipedia](https://en.wikipedia.org/wiki/Aspect-oriented_programming)

### Stacking Monads

In section
[Interpreter -> Reader Monad](#interpreter--reader-monad)
we specified an interpreter for a simple expression language by defining a monadic `eval` function:

```haskell
eval :: Exp a -> Reader (Env a) a
eval (Var x)          = asks (fetch x)
eval (Val i)          = return i
eval (BinOp op e1 e2) = liftM2 op (eval e1) (eval e2)
eval (Let x e1 e2)    = eval e1 >>= \v -> local ((x,v):) (eval e2)
```

Using the `Reader` Monad allows us to thread an environment through all recursive calls of `eval`.

A typical extension to such an interpreter would be to provide a log mechanism that allows tracing of the actual sequence of all performed evaluation steps.

In Haskell the typical way to provide such a log is by means of the `Writer` Monad.

But how can we combine the capabilities of the `Reader` monad code with those of the `Writer` monad?

The answer is `MonadTransformer`s: specialized types that allow us to stack two monads into a single one that shares the behavior of both.

In order to stack the `Writer` monad on top of the `Reader` we use the transformer type `WriterT`:

```haskell
-- adding a logging capability to the expression evaluator
eval :: Show a => Exp a -> WriterT [String] (Reader (Env a)) a
eval (Var x)          = tell ["lookup " ++ x] >> asks (fetch x)
eval (Val i)          = tell [show i] >> return i
eval (BinOp op e1 e2) = tell ["Op"] >> liftM2 op (eval e1) (eval e2)
eval (Let x e1 e2)    = do
    tell ["let " ++ x]
    v <- eval e1
    tell ["in"]
    local ((x,v):) (eval e2)
```

The signature of `eval` has been extended by wrapping `WriterT [String]` 
around `(Reader (Env a))`. This denotes a Monad that combines a `Reader (Env a)` with a `Writer [String]`. `Writer [String]` is a `Writer` that maintains a list of strings as a log.

The resulting Monad supports functions of both the `MonadReader` and `MonadWriter` type classes. As you can see in the equation for `eval (Var x)`, we are using `MonadWriter.tell` for logging and `MonadReader.asks` for obtaining the environment, and compose both monadic actions with `>>`:

```haskell
eval (Var x)          = tell ["lookup " ++ x] >> asks (fetch x)
```

In order to execute this stack of monads we have to apply the `run` functions of `WriterT` and `Reader`:

```haskell
ghci> runReader (runWriterT (eval letExp)) [("pi",pi)]
(6.283185307179586,["let x","let y","Op","5.0","7.0","in","Op","lookup y","6.0","in","Op","lookup pi","lookup x"])
```

For more details on MonadTransformers please have a look at the following tutorials:

[MonadTransformers Wikibook](https://en.wikibooks.org/wiki/Haskell/Monad_transformers)

[Monad Transformers step by step](https://page.mi.fu-berlin.de/scravy/realworldhaskell/materialien/monad-transformers-step-by-step.pdf)

### Specifying AOP semantics with MonadTransformers

What we have seen so far is that it is possible to form Monad stacks that combine the functionality of the Monads involved: in a way a MonadTransformer adds capabilities that are cross-cutting to those of the underlying Monad.

In the following lines I want to show how MonadTransformers can be used to specify the formal semantics of Aspect Oriented Programming. I have taken the example from Mark P. 
Jones' paper
[The Essence of AspectJ](https://pdfs.semanticscholar.org/c4ce/14364d88d533fac6aa53481b719aa661ce73.pdf).

#### An interpreter for MiniPascal

We start by defining a simple imperative language &ndash; MiniPascal:

```haskell
-- | an identifier type
type Id = String

-- | Integer expressions
data IExp = Lit Int
    | IExp :+: IExp
    | IExp :*: IExp
    | IExp :-: IExp
    | IExp :/: IExp
    | IVar Id deriving (Show)

-- | Boolean expressions
data BExp = T
    | F
    | Not BExp
    | BExp :&: BExp
    | BExp :|: BExp
    | IExp :=: IExp
    | IExp :<: IExp deriving (Show)

-- | Statements
data Stmt = Skip        -- no op
    | Id := IExp        -- variable assignment
    | Begin [Stmt]      -- a sequence of statements
    | If BExp Stmt Stmt -- an if statement
    | While BExp Stmt   -- a while loop
    deriving (Show)
```

With these ingredients it's possible to write imperative programs like the following `while` loop that sums up the natural numbers from 1 to 10:

```haskell
-- an example program: the MiniPascal equivalent of `sum [1..10]`
program :: Stmt
program =
    Begin [
        "total" := Lit 0,
        "count" := Lit 0,
        While (IVar "count" :<: Lit 10)
            (Begin [
                "count" := (IVar "count" :+: Lit 1),
                "total" := (IVar "total" :+: IVar "count")
            ])
    ]
```

We define the semantics of this language with an interpreter:

```haskell
-- | the store used for variable assignments
type Store = Map Id Int

-- | evaluate numeric expressions
iexp :: MonadState Store m => IExp -> m Int
iexp (Lit n) = return n
iexp (e1 :+: e2) = liftM2 (+) (iexp e1) (iexp e2)
iexp (e1 :*: e2) = liftM2 (*) (iexp e1) (iexp e2)
iexp (e1 :-: e2) = liftM2 (-) (iexp e1) (iexp e2)
iexp (e1 :/: e2) = liftM2 div (iexp e1) (iexp e2)
iexp (IVar i)    = getVar i

-- | evaluate logic expressions
bexp :: MonadState 
Store m => BExp -> m Bool
bexp T           = return True
bexp F           = return False
bexp (Not b)     = fmap not (bexp b)
bexp (b1 :&: b2) = liftM2 (&&) (bexp b1) (bexp b2)
bexp (b1 :|: b2) = liftM2 (||) (bexp b1) (bexp b2)
bexp (e1 :=: e2) = liftM2 (==) (iexp e1) (iexp e2)
bexp (e1 :<: e2) = liftM2 (<)  (iexp e1) (iexp e2)

-- | evaluate statements
stmt :: MonadState Store m => Stmt -> m ()
stmt Skip       = return ()
stmt (i := e)   = do x <- iexp e; setVar i x
stmt (Begin ss) = mapM_ stmt ss
stmt (If b t e) = do
    x <- bexp b
    if x then stmt t
         else stmt e
stmt (While b t) = loop
    where loop = do
            x <- bexp b
            when x $ stmt t >> loop

-- | a variable assignment updates the store (which is maintained in the state)
setVar :: (MonadState (Map k a) m, Ord k) => k -> a -> m ()
setVar i x = do
    store <- get
    put (Map.insert i x store)

-- | look up a variable in the store. return 0 if no value is found
getVar :: MonadState Store m => Id -> m Int
getVar i = do
    s <- get
    case Map.lookup i s of
        Nothing  -> return 0
        (Just v) -> return v

-- | run a statement against an empty store
run :: Stmt -> Store
run s = execState (stmt s) (Map.fromList [])

-- and then in GHCi:
ghci> run program
fromList [("count",10),("total",55)]
```

So far this is nothing special, just a minimal interpreter for an imperative language. 
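The `run` function above relies on `execState` from `Control.Monad.State` to thread the `Store` through the computation and to return the final state. As a minimal standalone sketch (not part of the interpreter) of the difference between `evalState` and `execState`:

```haskell
import Control.Monad.State

-- a counter action: return the current state, then increment it
tick :: State Int Int
tick = do
    n <- get
    put (n + 1)
    return n

main :: IO ()
main = do
    print $ evalState (tick >> tick >> tick) 0 -- result of the last tick: 2
    print $ execState (tick >> tick >> tick) 0 -- final state after three ticks: 3
```

`run` uses `execState` because we are interested in the final `Store`, not in the `()` result of `stmt`.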
Side effects in the form of variable assignments are modelled with an environment that is maintained in a state monad.

In the next step we want to extend this language with features of aspect oriented programming in the style of *AspectJ*: join points, pointcuts, and advices.

#### An Interpreter for AspectPascal

To keep things simple we will specify only two types of join points: variable assignment and variable reading:

```haskell
data JoinPointDesc = Get Id | Set Id
```

`Get i` describes a join point at which the variable `i` is read, while `Set i` describes a join point at which
a value is assigned to the variable `i`.

Following the concepts of AspectJ, pointcut expressions are used to describe sets of join points.
The abstract syntax for pointcuts is as follows:

```haskell
data PointCut = Setter                  -- the pointcut of all join points at which a variable is being set
              | Getter                  -- the pointcut of all join points at which a variable is being read
              | AtVar Id                -- the pointcut of all join points at which the variable is being set or read
              | NotAt PointCut          -- not a
              | PointCut :||: PointCut  -- a or b
              | PointCut :&&: PointCut  -- a and b
```

For example this syntax can be used to specify the pointcut of
all join points at which the variable `x` is set:

```haskell
(Setter :&&: AtVar "x")
```

The following function computes whether a `PointCut` includes a given `JoinPointDesc`:

```haskell
includes :: PointCut -> (JoinPointDesc -> Bool)
includes Setter     (Set i) = True
includes Getter     (Get i) = True
includes (AtVar i)  (Get j) = i == j
includes (AtVar i)  (Set j) = i == j
includes (NotAt p)  d       = not (includes p d)
includes (p :||: q) d       = includes p d || includes q d
includes (p :&&: q) d       = includes p d && includes q d
includes _ 
_                = False
```

In AspectJ aspect oriented extensions to a program are described using the notion of advices.
We follow the same design here: each advice includes a pointcut to specify the join points at which the
advice should be used, and a statement (in MiniPascal syntax) to specify the action that should be performed at each matching join point.

In AspectPascal we only support two kinds of advice: `Before`, which will be executed on entry to a join point, and
`After`, which will be executed on exit from a join point:

```haskell
data Advice = Before PointCut Stmt
            | After  PointCut Stmt
```

This allows us to define `Advice`s like the following:

```haskell
-- the countSets Advice traces each setting of a variable and increments the counter "countSet"
countSets = After (Setter :&&: NotAt (AtVar "countSet") :&&: NotAt (AtVar "countGet"))
                  ("countSet" := (IVar "countSet" :+: Lit 1))

-- the countGets Advice traces each lookup of a variable and increments the counter "countGet"
countGets = After (Getter :&&: NotAt (AtVar "countSet") :&&: NotAt (AtVar "countGet"))
                  ("countGet" := (IVar "countGet" :+: Lit 1))
```

The rather laborious PointCut definition is used to select access to all variables apart from `countGet` and `countSet`.
This is required as the action parts of the `Advice`s are normal MiniPascal statements that are executed by the same interpreter as the main program which is to be extended by advices. 
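To see what these filters do, `includes` can be checked against concrete join points. A small self-contained sketch (repeating the relevant definitions from above, with the `NotAt (AtVar "countGet")` conjunct dropped for brevity):

```haskell
type Id = String

data JoinPointDesc = Get Id | Set Id

data PointCut = Setter | Getter | AtVar Id | NotAt PointCut
              | PointCut :||: PointCut | PointCut :&&: PointCut

-- does a pointcut include a given join point?
includes :: PointCut -> JoinPointDesc -> Bool
includes Setter     (Set _) = True
includes Getter     (Get _) = True
includes (AtVar i)  (Get j) = i == j
includes (AtVar i)  (Set j) = i == j
includes (NotAt p)  d       = not (includes p d)
includes (p :||: q) d       = includes p d || includes q d
includes (p :&&: q) d       = includes p d && includes q d
includes _          _       = False

main :: IO ()
main = do
    let pc = Setter :&&: NotAt (AtVar "countSet")
    print $ includes pc (Set "total")    -- True:  a write to an ordinary variable matches
    print $ includes pc (Set "countSet") -- False: writes to the counter itself are filtered out
    print $ includes pc (Get "total")    -- False: a read never matches Setter
```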
If those filters were not present, execution of those advices would result in non-terminating loops, as the action statements also access variables.

A complete AspectPascal program will now consist of a `stmt` (the original program) plus a list of `advices` that should be executed to implement the cross-cutting aspects:

```haskell
-- | Aspects are just a list of Advices
type Aspects = [Advice]
```

In order to extend our interpreter to execute the additional behaviour described in `advices` we will have to provide all evaluating functions with access to the `Aspects`.
As the `Aspects` will not be modified at runtime, the typical solution would be to provide them via a `Reader Aspects` monad.
We have already learnt that we can use a MonadTransformer to stack our existing `State` monad with a `Reader` monad. The respective transformer is `ReaderT`.
We thus extend the signature of the evaluation functions accordingly, e.g.:

```haskell
-- from:
iexp :: MonadState Store m => IExp -> m Int

-- to:
iexp :: MonadState Store m => IExp -> ReaderT Aspects m Int
```

Apart from extending the signatures we have to modify all places where variables are accessed to apply the matching advices.
So for instance in the equation for `iexp (IVar i)` we specify that `(getVar i)` should be executed while applying all advices that match the read access to variable `i` &ndash; that is `(Get i)` &ndash; by writing:

```haskell
iexp (IVar i)    = withAdvice (Get i) (getVar i)
```

So the complete definition of `iexp` is:

```haskell
iexp :: MonadState Store m => IExp -> ReaderT Aspects m Int
iexp (Lit n) = return n
iexp (e1 :+: e2) = liftM2 (+) (iexp e1) (iexp e2)
iexp (e1 :*: e2) = liftM2 (*) (iexp e1) (iexp e2)
iexp (e1 :-: e2) = liftM2 (-) (iexp e1) (iexp e2)
iexp (e1 :/: e2) = liftM2 div (iexp e1) (iexp e2)
iexp (IVar i)    = withAdvice (Get i) (getVar i)
```

> [...] 
if `c` is a computation corresponding to some join point with description `d`, then `withAdvice d c` wraps the
execution of `c` with the execution of the appropriate Before and After advice, if any:

```haskell
withAdvice :: MonadState Store m => JoinPointDesc -> ReaderT Aspects m a -> ReaderT Aspects m a
withAdvice d c = do
    aspects <- ask                -- obtaining the Aspects from the Reader monad
    mapM_ stmt (before d aspects) -- execute the statements of all Before advices
    x <- c                        -- execute the actual business logic
    mapM_ stmt (after d aspects)  -- execute the statements of all After advices
    return x

-- collect the statements of Before and After advices matching the join point
before, after :: JoinPointDesc -> Aspects -> [Stmt]
before d as = [s | Before c s <- as, includes c d]
after  d as = [s | After  c s <- as, includes c d]
```

In the same way, in the equation for variable assignment `stmt (i := e)` we specify that `(setVar i x)` should be executed while applying all advices that match the write access to variable `i` &ndash; that is `(Set i)` &ndash; by writing:

```haskell
stmt (i := e)   = do x <- iexp e; withAdvice (Set i) (setVar i x)
```

The complete implementation of `stmt` then looks as follows:

```haskell
stmt :: MonadState Store m => Stmt -> ReaderT Aspects m ()
stmt Skip       = return ()
stmt (i := e)   = do x <- iexp e; withAdvice (Set i) (setVar i x)
stmt (Begin ss) = mapM_ stmt ss
stmt (If b t e) = do
    x <- bexp b
    if x then stmt t
         else stmt e
stmt (While b t) = loop
    where loop = do
            x <- bexp b
            when x $ stmt t >> loop
```

Finally we have to extend the `run` function to properly handle the monad stack:

```haskell
run :: Aspects -> Stmt -> Store
run a s = execState (runReaderT (stmt s) a) (Map.fromList [])

-- and then in 
GHCi:
ghci> run [] program
fromList [("count",10),("total",55)]

ghci> run [countSets] program
fromList [("count",10),("countSet",22),("total",55)]

ghci> run [countSets, countGets] program
fromList [("count",10),("countGet",41),("countSet",22),("total",55)]
```

So executing the program with an empty list of advices yields the same result as executing the program with the initial interpreter. Once we execute the program with the advices `countGets` and `countSets`, the resulting map contains values for the variables `countGet` and `countSet` which have been incremented by the statements of both advices.

We have utilized Monad Transformers to extend our original interpreter in a minimally invasive way, to provide a formal and executable semantics for a simple aspect-oriented language in the style of AspectJ.

<!-- 
### ? → MonadFix

tbd.
-->

### Composite → SemiGroup → Monoid

>In software engineering, the composite pattern is a partitioning design pattern. The composite pattern describes a group of objects that is treated the same way as a single instance of the same type of object. The intent of a composite is to "compose" objects into tree structures to represent part-whole hierarchies. Implementing the composite pattern lets clients treat individual objects and compositions uniformly.
> (Quoted from [Wikipedia](https://en.wikipedia.org/wiki/Composite_pattern))

A typical example for the composite pattern is the hierarchical grouping of test cases into TestSuites in a testing framework. 
Take for instance the following class diagram from the [JUnit cooks tour](http://junit.sourceforge.net/doc/cookstour/cookstour.htm) which shows how JUnit applies the Composite pattern to group `TestCases` into `TestSuites` while both of them implement the `Test` interface:

![Composite Pattern used in Junit](http://junit.sourceforge.net/doc/cookstour/Image5.gif)

In Haskell we could model this kind of hierarchy with an algebraic data type (ADT):

```haskell
-- the composite data structure: a Test can be either a single TestCase
-- or a TestSuite holding a list of Tests
data Test = TestCase TestCase
          | TestSuite [Test]

-- a test case produces a boolean when executed
type TestCase = () -> Bool
```

The function `run` as defined below can either execute a single TestCase or a composite TestSuite:

```haskell
-- execution of a Test.
run :: Test -> Bool
run (TestCase t)  = t () -- evaluating the TestCase by applying t to ()
run (TestSuite l) = all (True ==) (map run l) -- running all tests in l, returning True if all tests pass

-- a few most simple test cases
t1 :: Test
t1 = TestCase (\() -> True)
t2 :: Test
t2 = TestCase (\() -> True)
t3 :: Test
t3 = TestCase (\() -> False)
-- collecting all test cases in a TestSuite
ts = TestSuite [t1,t2,t3]
```

As `run` is of type `run :: Test -> Bool` we can use it to execute single `TestCases` or complete `TestSuites`.
Let's try it in GHCi:

```haskell
ghci> run t1
True
ghci> run ts
False
```

In order to aggregate TestComponents we follow the design of JUnit and define a function `addTest`. Adding two atomic Tests will result in a TestSuite holding a list with the two Tests. If a Test is added to a TestSuite, the test is added to the list of tests of the suite. 
Adding two TestSuites will merge them.

```haskell
-- adding Tests
addTest :: Test -> Test -> Test
addTest t1@(TestCase _) t2@(TestCase _)   = TestSuite [t1,t2]
addTest t1@(TestCase _) (TestSuite list)  = TestSuite (t1 : list)
addTest (TestSuite list) t2@(TestCase _)  = TestSuite (list ++ [t2])
addTest (TestSuite l1) (TestSuite l2)     = TestSuite (l1 ++ l2)
```

If we take a closer look at `addTest` we see that it is an associative binary operation on the set of `Test`s.

In mathematics a set with an associative binary operation is a [Semigroup](https://en.wikipedia.org/wiki/Semigroup).

We can thus make our type `Test` an instance of the type class `Semigroup` with the following declaration:

```haskell
instance Semigroup Test where
    (<>) = addTest
```

What's not visible from the JUnit class diagram is how typical object-oriented implementations have to deal with null references. That is, the implementations would have to make sure that the methods `run` and `addTest` handle empty references correctly.
With Haskell's algebraic data types we would rather make this explicit with a dedicated `Empty` element.
Here are the changes we have to make to our code:

```haskell
-- the composite data structure: a Test can be Empty, a single TestCase
-- or a TestSuite holding a list of Tests
data Test = Empty
          | TestCase TestCase
          | TestSuite [Test]

-- execution of a Test.
run :: Test -> Bool
run Empty         = True -- empty tests will pass
run (TestCase t)  = t () -- evaluate the TestCase by applying t to ()
--run (TestSuite l) = foldr ((&&) . run) True l
run (TestSuite l) = all run l -- run all tests in l; True only if all tests pass

-- adding Tests
addTest :: Test -> Test -> Test
addTest Empty t                           = t
addTest t Empty                           = t
addTest t1@(TestCase _) t2@(TestCase _)   = TestSuite [t1,t2]
addTest t1@(TestCase _) (TestSuite list)  = TestSuite (t1 : list)
addTest (TestSuite list) t2@(TestCase _)  = TestSuite (list ++ [t2])
addTest (TestSuite l1) (TestSuite l2)     = TestSuite (l1 ++ l2)
```

From our additions it's obvious that `Empty` is the identity element of the `addTest` function. In algebra a Semigroup with an identity element is called a *Monoid*:

> In abstract algebra, [...] a monoid is an algebraic structure with a single associative binary operation and an identity element.
> [Quoted from Wikipedia](https://en.wikipedia.org/wiki/Monoid)

In Haskell we can declare `Test` as an instance of the `Monoid` type class by defining:

```haskell
instance Monoid Test where
    mempty = Empty
```

We can now use all functions provided by the `Monoid` type class to work with our `Test`:

```haskell
compositeDemo = do
    print $ run $ t1 <> t2
    print $ run $ t1 <> t2 <> t3
```

We can also use the function `mconcat :: Monoid a => [a] -> a` on a list of `Tests`: mconcat composes a list of Tests into a single Test.
That's exactly the mechanism of forming a TestSuite from atomic TestCases.

```haskell
compositeDemo = do
    print $ run $ mconcat [t1,t2]
    print $ run $ mconcat [t1,t2,t3]
```

This ability of `mconcat :: Monoid a => [a] -> a` to condense a list of Monoids into a single Monoid can be used to drastically simplify the design of our test framework.

We need just one more hint from our mathematician friends:

> Functions are monoids if they return monoids
> [Quoted from blog.ploeh.dk](http://blog.ploeh.dk/2018/05/17/composite-as-a-monoid-a-business-rules-example/)

Currently our `TestCases` are defined as functions yielding boolean values:

```haskell
type TestCase = () -> Bool
```

If `Bool` were a `Monoid` we could use `mconcat` to form test suite aggregates. `Bool` by itself is not a Monoid, but together with a binary associative operation like `(&&)` or `(||)` it forms a Monoid.

The intuitive semantics of a TestSuite is that a whole suite is "green" only when all enclosed TestCases succeed. That is, the conjunction of all TestCases must return `True`.

So we are looking for the Monoid of boolean values under conjunction `(&&)`.
In Haskell this Monoid is called `All`:

```haskell
-- | Boolean monoid under conjunction ('&&').
-- >>> getAll (All True <> mempty <> All False)
-- False
-- >>> getAll (mconcat (map (\x -> All (even x)) [2,4,6,7,8]))
-- False
newtype All = All { getAll :: Bool }

instance Semigroup All where
        (<>) = coerce (&&)

instance Monoid All where
        mempty = All True
```

Making use of `All`, our improved definition of TestCases is as follows:

```haskell
type SmartTestCase = () -> All
```

Now our test cases do not directly return a boolean value but an `All` wrapper, which allows automatic conjunction of test results into a single value.
Here are our redefined TestCases:

```haskell
tc1 :: SmartTestCase
tc1 () = All True
tc2 :: SmartTestCase
tc2 () = All True
tc3 :: SmartTestCase
tc3 () = All False
```

We now implement a new evaluation function `run'` which evaluates a `SmartTestCase` (which may be either an atomic TestCase or a TestSuite assembled by `mconcat`) to a single boolean result:

```haskell
run' :: SmartTestCase -> Bool
run' tc = getAll $ tc ()
```

This version of `run` is much simpler than the original and we can completely avoid the rather laborious `addTest` function.
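In fact the hint quoted above ("Functions are monoids if they return monoids") already pays off here: since `All` is a Monoid, the function type `() -> All` is itself a Monoid, so `SmartTestCase`s compose directly with `(<>)` and `mconcat`. A minimal self-contained sketch (the name `demo` is just for illustration, not from the repository):

```haskell
import Data.Monoid (All (..))

type SmartTestCase = () -> All

tc1, tc2, tc3 :: SmartTestCase
tc1 () = All True
tc2 () = All True
tc3 () = All False

run' :: SmartTestCase -> Bool
run' tc = getAll $ tc ()

-- functions returning a Monoid form a Monoid themselves,
-- so test cases compose directly with (<>), no addTest needed
demo :: (Bool, Bool)
demo = (run' (tc1 <> tc2), run' (tc1 <> tc2 <> tc3))
```

Note that even `mempty :: SmartTestCase` comes for free and behaves like the `Empty` test above: it always passes.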
We also don't need any composite type `Test`.
By just sticking to the Haskell built-in type classes we achieve cleanly designed functionality with just a few lines of code.

```haskell
compositeDemo = do
    -- execute a single test case
    print $ run' tc1

    -- execute a complex test suite
    print $ run' $ mconcat [tc1,tc2]
    print $ run' $ mconcat [tc1,tc2,tc3]
```

For more details on Composite as a Monoid please refer to the following blog post:
[Composite as a Monoid](http://blog.ploeh.dk/2018/03/12/composite-as-a-monoid/)

[Sourcecode for this section](https://github.com/thma/LtuPatternFactory/blob/master/src/Composite.hs)

<!-- 
### ? → Alternative, MonadPlus, ArrowPlus
-->

### Visitor → Foldable

> [...] the visitor design pattern is a way of separating an algorithm from an object structure on which it operates. A practical result of this separation is the ability to add new operations to existent object structures without modifying the structures. It is one way to follow the open/closed principle.
> (Quoted from [Wikipedia](https://en.wikipedia.org/wiki/Visitor_pattern))

In functional languages - and Haskell in particular - we have a whole armada of tools serving this purpose:

* higher order functions like map, fold, filter and all their variants allow us to "visit" lists
* the Haskell type classes `Functor`, `Foldable`, `Traversable`, etc. provide a generic framework that allows visiting any algebraic datatype by just deriving one of these type classes

#### Using Foldable

```haskell
-- we are re-using the Exp data type from the Singleton example
-- and transform it into a Foldable type:
instance Foldable Exp where
    foldMap f (Val x)   = f x
    foldMap f (Add x y) = foldMap f x `mappend` foldMap f y
    foldMap f (Mul x y) = foldMap f x `mappend` foldMap f y

filterF :: Foldable f => (a -> Bool) -> f a -> [a]
filterF p = foldMap (\a -> if p a then [a] else [])

visitorDemo = do
    let exp = Mul (Add (Val 3) (Val 2))
                  (Mul (Val 4) (Val 6))
    putStr "size of exp: "
    print $ length exp
    putStrLn "filter even numbers from tree"
    print $ filterF even exp
```

By virtue of this instance declaration `Exp` becomes a Foldable instance and can be used with arbitrary functions defined on Foldable, like `length` in the example.

`foldMap` can for example be used to write a filtering function `filterF` that collects all elements matching a predicate into a list.

##### Alternative approaches

[Visitor as a Sum Type](http://blog.ploeh.dk/2018/06/25/visitor-as-a-sum-type/)

[Sourcecode for this section](https://github.com/thma/LtuPatternFactory/blob/master/src/Visitor.hs)

### Iterator → Traversable

> [...] the iterator pattern is a design pattern in which an iterator is used to traverse a container and access the container's elements.
The iterator pattern decouples algorithms from containers; in some cases, algorithms are necessarily container-specific and thus cannot be decoupled.
> [Quoted from Wikipedia](https://en.wikipedia.org/wiki/Iterator_pattern)

#### Iterating over a Tree

The most generic type class enabling iteration over algebraic data types is `Traversable`, as it allows combinations of `map` and `fold` operations.
We are re-using the `Exp` type from earlier examples to show what's needed to enable iteration in functional languages.

```haskell
instance Functor Exp where
    fmap f (Var x)       = Var x
    fmap f (Val a)       = Val $ f a
    fmap f (Add x y)     = Add (fmap f x) (fmap f y)
    fmap f (Mul x y)     = Mul (fmap f x) (fmap f y)

instance Traversable Exp where
    traverse g (Var x)   = pure $ Var x
    traverse g (Val x)   = Val <$> g x
    traverse g (Add x y) = Add <$> traverse g x <*> traverse g y
    traverse g (Mul x y) = Mul <$> traverse g x <*> traverse g y
```

With this declaration we can traverse an `Exp` tree:

```haskell
iteratorDemo = do
    putStrLn "Iterator -> Traversable"
    let exp = Mul (Add (Val 3) (Val 1))
                (Mul (Val 2) (Var "pi"))
    print $ traverse (\x -> if even x then [x] else [2*x]) exp
```

In this example we touch all (nested) `Val` elements and multiply all odd values by 2.

#### Combining traversal operations

Compared with `Foldable` or `Functor` the declaration of a `Traversable` instance looks a bit intimidating.
In particular the type signature of `traverse`:

```haskell
traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
```

looks like quite a bit of over-engineering for simple traversals as in the above example.

In order to explain the real power of the `Traversable` type class we will look at a more sophisticated example in this section. This example was taken from the paper
[The Essence of the Iterator Pattern](https://www.cs.ox.ac.uk/jeremy.gibbons/publications/iterator.pdf).

The Unix utility `wc` is a good example of a traversal operation that performs several different tasks while traversing its input:

```bash
echo "counting lines, words and characters in one traversal" | wc
      1       8      54
```

The output simply means that our input has 1 line, 8 words and a total of 54 characters.
Obviously an efficient implementation of `wc` will accumulate the three counters for lines, words and characters in a single pass of the input and will not run three iterations to compute the three counters separately.

Here is a Java implementation:

```java
private static int[] wordCount(String str) {
    int nl=0, nw=0, nc=0;         // number of lines, number of words, number of characters
    boolean readingWord = false;  // state information for "parsing" words
    for (Character c : asList(str)) {
        nc++;                     // count just any character
        if (c == '\n') {
            nl++;                 // count only newlines
        }
        if (c == ' ' || c == '\n' || c == '\t') {
            readingWord = false;  // when detecting white space, signal end of word
        } else if (readingWord == false) {
            readingWord = true;   // when switching from white space to characters, signal new word
            nw++;                 // increase the word counter only once while in a word
        }
    }
    return new int[]{nl,nw,nc};
}

private static List<Character> asList(String str) {
    return str.chars().mapToObj(c -> (char) c).collect(Collectors.toList());
}
```

Please note that the `for (Character c : asList(str)) {...}` notation is just syntactic sugar for

```java
for (Iterator<Character> iter = asList(str).iterator(); iter.hasNext();) {
    Character c = iter.next();
    ...
}
```

For efficiency reasons this solution may be okay, but from a design perspective it lacks clarity, as the logic for accumulating the three counters is heavily entangled within one code block. Just imagine how the complexity of the for-loop will increase once we have to add new features like counting bytes, counting white space or counting maximum line width.

So we would like to be able to isolate the different counting algorithms (*separation of concerns*) and combine them in a way that still provides an efficient one-pass traversal.

We start with the simple task of character counting:

```haskell
type Count = Const (Sum Integer)

count :: a -> Count b
count _ = Const 1

cciBody :: Char -> Count a
cciBody = count

cci :: String -> Count [a]
cci = traverse cciBody

-- and then in ghci:
> cci "hello world"
Const (Sum {getSum = 11})
```

For each character we just emit a `Const 1`, an element of type `Const (Sum Integer)`.
As `(Sum Integer)` is the monoid of Integers under addition, this design allows automatic summation over all collected `Const` values.

The next step of counting newlines looks similar:

```haskell
-- return (Sum 1) if true, else (Sum 0)
test :: Bool -> Sum Integer
test b = Sum $ if b then 1 else 0

-- use the test function to emit (Sum 1) only when a newline char is detected
lciBody :: Char -> Count a
lciBody c = Const $ test (c == '\n')

-- define the line count using traverse
lci :: String -> Count [a]
lci = traverse lciBody

-- and then in ghci:
> lci "hello \n world"
Const (Sum {getSum = 1})
```

Now let's try to combine character counting and line counting.
In order to match the type declaration for `traverse`:

```haskell
traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
```

we had to define `cciBody` and `lciBody` so that their return types are Applicative Functors.
The good news is that the product of two `Applicatives` is again an `Applicative` (the same holds for the composition of `Applicatives`).
With this knowledge we can now use `traverse` to apply the product of `cciBody` and `lciBody`:

```haskell
import Data.Functor.Product             -- Product of Functors

-- define an infix operator for building a Functor Product
(<#>) :: (Functor m, Functor n) => (a -> m b) -> (a -> n b) -> (a -> Product m n b)
(f <#> g) y = Pair (f y) (g y)

-- use a single traverse to apply the Product of cciBody and lciBody
clci :: String -> Product Count Count [a]
clci = traverse (cciBody <#> lciBody)

-- and then in ghci:
> clci "hello \n world"
Pair (Const (Sum {getSum = 13})) (Const (Sum {getSum = 1}))
```

So we have achieved our aim of separating line counting and character counting into separate functions while still being able to apply them in only one traversal.

The only piece missing is the word counting.
This is a bit tricky, as we cannot just increase a counter by looking at each single character; we also have to take into account the status of the previously read character:

- If the previous character was non-whitespace and the current one is also non-whitespace, we are still reading the same word and don't increment the word count.
- If the previous character was non-whitespace and the current one is a whitespace character, the last word has ended, but we don't increment the word count.
- If the previous character was whitespace and the current one is also whitespace, we are still reading whitespace between words and don't increment the word count.
- If the previous character was whitespace and the current one is a non-whitespace character, the next word has started and we increment the word count.

Keeping track of the state of the last character could be achieved by using a state monad (wrapped as an Applicative Functor to make it compatible with `traverse`). The actual code for this solution is kept in the source code for this section (see the functions `wciBody'` and `wci'` in particular).
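To give a flavour of that approach, here is a rough, self-contained sketch of such a State-based word counter, in the style of the Gibbons/Oliveira paper; it assumes the `mtl` package, and the names `wciState` and `runWciState` are made up for this sketch, not taken from the repository:

```haskell
import Control.Applicative (WrappedMonad (..))
import Control.Monad.State (State, evalState, get, put)
import Data.Functor.Compose (Compose (..))
import Data.Functor.Const (Const (..))
import Data.Monoid (Sum (..))

-- the Bool state remembers whether the previous character was part of a word;
-- wrapping State in WrappedMonad and composing it with Const yields
-- the Applicative that traverse needs
wciState :: Char -> Compose (WrappedMonad (State Bool)) (Const (Sum Integer)) a
wciState c = Compose $ WrapMonad $ do
  inWord <- get
  let nowInWord = c /= ' ' && c /= '\n' && c /= '\t'
  put nowInWord
  -- emit 1 only on a whitespace -> non-whitespace transition
  return $ Const $ Sum $ if nowInWord && not inWord then 1 else 0

-- run the stateful traversal, starting "outside" of a word
runWciState :: String -> Integer
runWciState str =
  getSum . getConst $
    evalState (unwrapMonad (getCompose (traverse wciState str))) False
```

The `Compose`/`WrapMonad`/`unwrapMonad` plumbing is exactly the noise alluded to in the text.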
But as this approach is a bit noisy I'm presenting a simpler solution suggested by [Noughtmare](https://www.reddit.com/r/haskell/comments/cfjnyu/type_classes_and_software_design_patterns/eub06p5?utm_source=share&utm_medium=web2x).

In this approach we define a data structure that keeps track of the changes between whitespace and non-whitespace:

```haskell
data SepCount = SC Bool Bool Integer
  deriving Show

mkSepCount :: (a -> Bool) -> a -> SepCount
mkSepCount pred x = SC p p (if p then 0 else 1)
  where
    p = pred x

getSepCount :: SepCount -> Integer
getSepCount (SC _ _ n) = n
```

We then define the semantics of `(<>)`, which implements the actual bookkeeping needed when `mappend`ing two `SepCount` items:

```haskell
instance Semigroup SepCount where
  (SC l0 r0 n) <> (SC l1 r1 m) = SC l0 r1 x where
    x | not r0 && not l1 = n + m - 1
      | otherwise = n + m
```

Based on these definitions we can then implement the word counting as follows:

```haskell
wciBody :: Char -> Const (Maybe SepCount) Integer
wciBody = Const . Just . mkSepCount isSpace where
    isSpace :: Char -> Bool
    isSpace c = c == ' ' || c == '\n' || c == '\t'

-- using traverse to count words in a String
wci :: String -> Const (Maybe SepCount) [Integer]
wci = traverse wciBody

-- forming the Product of character counting, line counting and word counting
-- and performing a one-go traversal using this Functor product
clwci :: String -> (Product (Product Count Count) (Const (Maybe SepCount))) [Integer]
clwci = traverse (cciBody <#> lciBody <#> wciBody)

-- extracting the actual Integer value from a `Const (Maybe SepCount) a` expression
extractCount :: Const (Maybe SepCount) a -> Integer
extractCount (Const (Just sepCount)) = getSepCount sepCount
extractCount (Const Nothing)         = 0 -- empty input contains no words

-- the actual word count implementation.
-- for any String a triple of line count, word count, character count is returned
wc :: String -> (Integer, Integer, Integer)
wc str =
    let raw = clwci str
        cc  = coerce $ pfst (pfst raw)
        lc  = coerce $ psnd (pfst raw)
        wc  = extractCount (psnd raw)
    in (lc,wc,cc)
```

This section was meant to motivate the usage of the `Traversable` type class. Of course the word count example could be solved in much simpler ways. Here is one solution suggested by [NoughtMare](https://www.reddit.com/r/haskell/comments/cfjnyu/type_classes_and_software_design_patterns/ev4m6u6?utm_source=share&utm_medium=web2x).

We simply use `foldMap` to perform a map / reduce based on our already defined `cciBody`, `lciBody` and `wciBody` functions.
As `clwci''` now returns a simple tuple instead of the more clumsy `Product` type, the final word count function `wc''` also looks much simpler:

```haskell
clwci'' :: Foldable t => t Char -> (Count [a], Count [a], Const (Maybe SepCount) Integer)
clwci'' = foldMap (\x -> (cciBody x, lciBody x, wciBody x))

wc'' :: String -> (Integer, Integer, Integer)
wc'' str =
    let (rawCC, rawLC, rawWC) = clwci'' str
        cc  = coerce rawCC
        lc  = coerce rawLC
        wc  = extractCount rawWC
    in (lc,wc,cc)
```

As map / reduce with `foldMap` is such a powerful tool, I've written a [dedicated section on this topic](#map-reduce) further down in this study.

[Sourcecode for this section](https://github.com/thma/LtuPatternFactory/blob/master/src/Iterator.hs)

<!-- 
### ? → Bifunctor

tbd.
-->

### The Pattern behind the Patterns → Category

> If you've ever used Unix pipes, you'll understand the importance and flexibility of composing small reusable programs to get powerful and emergent behaviors. Similarly, if you program functionally, you'll know how cool it is to compose a bunch of small reusable functions into a fully featured program.
>
> Category theory codifies this compositional style into a design pattern, the category.
> [Quoted from HaskellForAll](http://www.haskellforall.com/2012/08/the-category-design-pattern.html)

In most of the patterns and type classes discussed so far we have seen a common theme: providing means to compose behaviour and structure is one of the most important tools for designing complex software by combining simpler components.

#### Function Composition

Function composition is a powerful and elegant tool for composing complex functionality out of simpler building blocks. We have already seen several examples of it in the course of this study.
Functions can be composed with the binary `(.)` operator:

```haskell
ghci> :type (.)
(.) :: (b -> c) -> (a -> b) -> a -> c
```

It is defined as:

```haskell
(f . g) x = f (g x)
```

This operator can be used to combine simple functions into awesome one-liners (and of course much more useful stuff):

```haskell
ghci> product . filter odd . map length . words . reverse $ "function composition is awesome"
77
```

Function composition is associative, `(f . g) . h = f . (g . h)`:

```haskell
ghci> (((^2) . length) . words) "hello world"
4
ghci> ((^2) . (length . words)) "hello world"
4
```

And composition has a neutral (or identity) element `id`, so that `f . id = id . f = f`:

```haskell
ghci> (length . id) [1,2,3]
3
ghci> (id . length) [1,2,3]
3
```

The definitions of `(.)` and `id` plus the laws of associativity and identity match exactly the definition of a category:

> In mathematics, a category [...] is a collection of "objects" that are linked by "arrows". A category has two basic properties: the ability to compose the arrows associatively and the existence of an identity arrow for each object.
>
> [Quoted from Wikipedia](https://en.wikipedia.org/wiki/Category_(mathematics))

In Haskell a category is defined as a type class:

```haskell
class Category cat where
    -- | the identity morphism
    id :: cat a a

    -- | morphism composition
    (.) :: cat b c -> cat a b -> cat a c
```

> Please note: The name `Category` may be a bit misleading, since this type class cannot represent arbitrary categories, but only categories whose objects are objects of [`Hask`, the category of Haskell types](https://wiki.haskell.org/Hask).

Instances of `Category` should satisfy that `(.)` and `id` form a Monoid &ndash; that is, `id` should be the identity of `(.)` and `(.)` should be associative:

```haskell
f  . id      =  f            -- (right identity)
id . f       =  f            -- (left identity)
f . (g . h)  =  (f . g) . h  -- (associativity)
```

As function composition fulfills these category laws, the function type constructor `(->)` can be defined as an instance of the Category type class:

```haskell
instance Category (->) where
    id  = GHC.Base.id
    (.) = (GHC.Base..)
```

#### Monadic Composition

In the section on the [Maybe Monad](#avoiding-partial-functions-by-using-maybe) we have seen that monadic operations can be chained with the Kleisli operator `>=>`:

```haskell
safeRoot           :: Double -> Maybe Double
safeRoot x
    | x >= 0    = Just (sqrt x)
    | otherwise = Nothing

safeReciprocal     :: Double -> Maybe Double
safeReciprocal x
    | x /= 0    = Just (1/x)
    | otherwise = Nothing

safeRootReciprocal :: Double -> Maybe Double
safeRootReciprocal = safeReciprocal >=> safeRoot
```

The operator `<=<` just flips the arguments of `>=>` and thus provides right-to-left composition.
When we compare the signature of `<=<` with the signature of `.` we notice the similarity of both concepts:

```haskell
(.)   ::            (b ->   c) -> (a ->   b) -> a ->   c
(<=<) :: Monad m => (b -> m c) -> (a -> m b) -> a -> m c
```

Even the implementation of `<=<` is quite similar to the definition of `.`:

```haskell
(f  .  g) x = f     (g x)
(f <=< g) x = f =<< (g x)
```

The essential difference is that `<=<` maintains a monadic structure when producing its result.

Next we compare the signatures of `id` and its monadic counterpart `return`:

```haskell
id     ::              (a ->   a)
return :: (Monad m) => (a -> m a)
```

Here again `return` always produces a monadic structure.

So the category for Monads can simply be defined as:

```haskell
-- | Kleisli arrows of a monad.
newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }

instance Monad m => Category (Kleisli m) where
    id = Kleisli return
    (Kleisli f) . (Kleisli g) = Kleisli (f <=< g)
```

So if monadic actions form a category we expect the laws of identity and associativity to hold:

```haskell
return <=< f    = f                -- left identity

f <=< return    = f                -- right identity

(f <=< g) <=< h = f <=< (g <=< h)  -- associativity
```

Let's try to prove it by applying some equational reasoning.
We use the definition of `<=<`, namely `(f <=< g) x = f =<< (g x)`, to expand the above equations:

```haskell
-- 1. left identity
return <=< f     = f    -- left identity (to be proven)
(return <=< f) x = f x  -- eta expand
return =<< (f x) = f x  -- expand <=< by the above definition
f x >>= return   = f x  -- replace =<< with >>= and flip arguments
f >>= return     = f    -- eta reduce


-- 2. right identity
f <=< return     = f    -- right identity (to be proven)
(f <=< return) x = f x  -- eta expand
f =<< (return x) = f x  -- expand <=< by the above definition
return x >>= f   = f x  -- replace =<< with >>= and flip arguments

-- 3. associativity
(f <=< g) <=< h             = f <=< (g <=< h)     -- associativity (to be proven)
((f <=< g) <=< h) x         = (f <=< (g <=< h)) x -- eta expand
(f <=< g) =<< (h x)         = f =<< ((g <=< h) x) -- expand outer <=< on both sides
(\y -> (f <=< g) y) =<< h x = f =<< ((g <=< h) x) -- eta expand on left-hand side
(\y -> f =<< (g y)) =<< h x = f =<< ((g <=< h) x) -- expand inner <=< on the lhs
(\y -> f =<< (g y)) =<< h x = f =<< (g =<< (h x)) -- expand inner <=< on the rhs
h x >>= (\y -> f =<< (g y)) = f =<< (g =<< (h x)) -- replace outer =<< with >>= and flip arguments on lhs
h x >>= (\y -> g y >>= f)   = f =<< (g =<< (h x)) -- replace inner =<< with >>= and flip arguments on lhs
h x >>= (\y -> g y >>= f)   = (g =<< (h x)) >>= f -- replace outer =<< with >>= and flip arguments on rhs
h x >>= (\y -> g y >>= f)   = ((h x) >>= g) >>= f -- replace inner =<< with >>= and flip arguments on rhs
h >>= (\y -> g y >>= f)     = (h >>= g) >>= f     -- eta reduce
```

So we have transformed our three formulas into the following form:

```haskell
f >>= return   = f

return x >>= f = f x

h >>= (\y -> g y >>= f)  =  (h >>= g) >>= f
```

These three equations are equivalent to the [Monad Laws](https://wiki.haskell.org/Monad_laws), which all Monad instances are required to satisfy:

```haskell
m >>= return    =  m

return a >>= k  =  k a

m >>= (\x -> k x >>= h)  =  (m >>= k) >>= h
```

So by virtue of this equivalence, any Monad that satisfies the Monad laws automatically satisfies the Category laws.

> If you have ever wondered where those monad laws came from, now you know! They are just the category laws in disguise.
> Consequently, every new Monad we define gives us a category for free!
>  
> Quoted from [The Category Design Pattern](http://www.haskellforall.com/2012/08/the-category-design-pattern.html)

#### Conclusion

> Category theory codifies [the] compositional style into a design pattern, the category. Moreover, category theory gives us a precise
> prescription for how to create our own abstractions that follow this design pattern: the category laws. These laws differentiate category
> theory from other design patterns by providing rigorous criteria for what does and does not qualify as compositional.
>
> One could easily dismiss this compositional ideal as just that: an ideal, something unsuitable for "real-world" scenarios. However, the
> theory behind category theory provides the meat that shows that this compositional ideal appears everywhere and can rise to the challenge of
> messy problems and complex business logic.
>
> Quoted from [The Category Design Pattern](http://www.haskellforall.com/2012/08/the-category-design-pattern.html)

<!-- 
### ? → Arrow

tbd.
-->

### Fluent Api → Comonad

> In software engineering, a fluent interface [...]
is a method for designing object oriented APIs based extensively on method chaining with the goal of making the readability of the source code close to that of ordinary written prose, essentially creating a domain-specific language within the interface.
>
> [Quoted from Wikipedia](https://en.wikipedia.org/wiki/Fluent_interface)

The [Builder Pattern](#builder--record-syntax-smart-constructor) is a typical example of a fluent API. The following short Java snippet shows the essential elements:

* creating a builder instance
* invoking a sequence of mutators `with...` on the builder instance
* finally calling `build()` to let the Builder create an object

```java
ConfigBuilder builder = new ConfigBuilder();
Config config = builder
        .withProfiling()        // Add profiling
        .withOptimization()     // Add optimization
        .build();
```

The interesting point is that the `with...` methods are not implemented as `void` methods but instead all return the Builder instance, which allows fluent chaining of the next `with...` call.

Let's try to recreate this fluent chaining of calls in Haskell.
We start with a configuration type `Config` that represents a set of option strings (`Options`):

```haskell
type Options = [String]

newtype Config = Conf Options deriving (Show)
```

Next we define a function `configBuilder` which takes `Options` as input and returns a `Config` instance:

```haskell
configBuilder :: Options -> Config
configBuilder options = Conf options

-- we can use this to construct a Config instance from a list of Option strings:
ghci> configBuilder ["-O2", "-prof"]
Conf ["-O2","-prof"]
```

In order to allow chaining of the `with...` functions, they must always return a new `Options -> Config` function.
So for example `withProfiling` would have the following signature:\n\n```haskell\nwithProfiling :: (Options -\u003e Config) -\u003e (Options -\u003e Config)\n```\n\nThis signature is straightforward but the implementation needs some thinking: we take a function `builder` of type `Options -\u003e Config` as input and must return a new function of the same type that will use the same builder but will add profiling options to the `Options` parameter `opts`:\n\n```haskell\nwithProfiling builder = \\opts -\u003e builder (opts ++ [\"-prof\", \"-auto-all\"])\n```\n\nHLint tells us that this can be written more tersely as:\n\n```haskell\nwithProfiling builder opts = builder (opts ++ [\"-prof\", \"-auto-all\"])\n```\n\nIn order to keep the notation dense we introduce a type alias for the function type `Options -\u003e Config`:\n\n```haskell\ntype ConfigBuilder = Options -\u003e Config\n```\n\nWith this shortcut we can implement the other `with...` functions as:\n\n```haskell\nwithWarnings :: ConfigBuilder -\u003e ConfigBuilder\nwithWarnings builder opts = builder (opts ++ [\"-Wall\"])\n\nwithOptimization :: ConfigBuilder -\u003e ConfigBuilder\nwithOptimization builder opts = builder (opts ++ [\"-O2\"])\n\nwithLogging :: ConfigBuilder -\u003e ConfigBuilder\nwithLogging builder opts = builder (opts ++ [\"-logall\"])\n```\n\nThe `build` function is also quite straightforward. It constructs the actual `Config` instance by invoking a given `ConfigBuilder` on an empty list:\n\n```haskell\nbuild :: ConfigBuilder -\u003e Config\nbuild builder = builder mempty\n\n-- now we can use it in ghci:\nghci\u003e print (build (withOptimization (withProfiling configBuilder)))\nConf [\"-O2\",\"-prof\",\"-auto-all\"]\n```\n\nThis does not yet look quite object-oriented, but with a tiny tweak we'll get quite close. 
We introduce a special operator `#` that allows us to write functional expressions in an object-oriented style:\n\n```haskell\n(#) :: a -\u003e (a -\u003e b) -\u003e b\nx # f = f x\ninfixl 0 #\n```\n\nWith this operator we can write the above example as:\n\n```haskell\nconfig = configBuilder\n    # withProfiling    -- add profiling\n    # withOptimization -- add optimizations\n    # build\n```\n\nSo far so good. But what does this have to do with Comonads?\nIn the following I'll demonstrate how the chaining of functions as shown in our `ConfigBuilder` example follows a pattern that is covered by the `Comonad` type class.\n\nLet's have a second look at the `with*` functions:\n\n```haskell\nwithWarnings :: ConfigBuilder -\u003e ConfigBuilder\nwithWarnings builder opts = builder (opts ++ [\"-Wall\"])\n\nwithProfiling :: ConfigBuilder -\u003e ConfigBuilder\nwithProfiling builder opts = builder (opts ++ [\"-prof\", \"-auto-all\"])\n```\n\nThese functions all contain code that explicitly concatenates the `opts` argument with additional `Options`.\nIn order to reduce repetitive coding we are looking for a way to factor out the concrete concatenation of `Options`.\nGoing this route, the `with*` functions could be rewritten as follows:\n\n```haskell\nwithWarnings'' :: ConfigBuilder -\u003e ConfigBuilder\nwithWarnings'' builder = extend' builder [\"-Wall\"]\n\nwithProfiling'' :: ConfigBuilder -\u003e ConfigBuilder\nwithProfiling'' builder = extend' builder [\"-prof\", \"-auto-all\"]\n```\n\nHere `extend'` is a higher-order function that takes a `ConfigBuilder` and an `Options` argument (`opts2`) and returns a new `ConfigBuilder` that concatenates its input `opts1` with the original `opts2` argument:\n\n```haskell\nextend' :: ConfigBuilder -\u003e Options -\u003e ConfigBuilder\nextend' builder opts2 = \\opts1 -\u003e builder (opts1 ++ opts2)\n-- or even denser without explicit lambda:\nextend' builder opts2 opts1 = builder (opts1 ++ 
opts2)\n```\n\nWe could carry this idea of refactoring repetitive code even further by eliminating `extend'` from the `with*` functions. Of course this will change the signature of the functions:\n\n```haskell\nwithWarnings' :: ConfigBuilder -\u003e Config\nwithWarnings' builder = builder [\"-Wall\"]\n\nwithProfiling' :: ConfigBuilder -\u003e Config\nwithProfiling' builder = builder [\"-prof\", \"-auto-all\"]\n```\n\nIn order to form fluent sequences of such function calls we need an improved version of the `extend'` function which transparently handles the concatenation of `Options` arguments and also keeps the chain of `with*` functions open for the next `with*` function being applied:\n\n```haskell\nextend'' :: (ConfigBuilder -\u003e Config) -\u003e ConfigBuilder -\u003e ConfigBuilder\nextend'' withFun builder opt2 = withFun (\\opt1 -\u003e builder (opt1 ++ opt2))\n```\n\nIn order to use `extend''` efficiently in user code we have to modify our `#` operator slightly to transparently handle the extending of `ConfigBuilder` instances when chaining functions of type `ConfigBuilder -\u003e Config`:\n\n```haskell\n(#\u003e\u003e) :: ConfigBuilder -\u003e (ConfigBuilder -\u003e Config) -\u003e ConfigBuilder\nx #\u003e\u003e f = extend'' f x\ninfixl 0 #\u003e\u003e\n```\n\nUser code would then look as follows:\n\n```haskell\nconfigBuilder\n    #\u003e\u003e withProfiling'\n    #\u003e\u003e withOptimization'\n    #\u003e\u003e withLogging'\n    # build\n    # print\n```\n\nNow let's have a look at the definition of the `Comonad` type class. 
Being the dual of `Monad` it defines two functions `extract` and `extend` which are the duals of `return` and `(\u003e\u003e=)`:\n\n```haskell\nclass Functor w =\u003e Comonad w where\n    extract :: w a -\u003e a\n    extend  :: (w a -\u003e b) -\u003e w a -\u003e w b\n```\n\nWith the knowledge that `((-\u003e) a)` is an instance of `Functor` we can define a `Comonad` instance for `((-\u003e) Options)`:\n\n```haskell\ninstance {-# OVERLAPPING #-} Comonad ((-\u003e) Options) where\n    extract :: (Options -\u003e config) -\u003e config\n    extract builder = builder mempty\n    extend :: ((Options -\u003e config) -\u003e config') -\u003e  (Options -\u003e config) -\u003e (Options -\u003e config')\n    extend withFun builder opt2 = withFun (\\opt1 -\u003e builder (opt1 ++ opt2))\n```\n\nNow let's again look at the functions `build` and `extend''`:\n\n```haskell\nbuild :: (Options -\u003e Config) -\u003e Config\nbuild builder = builder mempty\n\nextend'' :: ((Options -\u003e Config) -\u003e Config) -\u003e (Options -\u003e Config) -\u003e (Options -\u003e Config)\nextend'' withFun builder opt2 = withFun (\\opt1 -\u003e builder (opt1 ++ opt2))\n```\n\nIt's obvious that `build` and `extract` are equivalent as well as `extend''` and `extend`. So we have been inventing a `Comonad` without knowing about it.\n\nBut we are even more lucky! 
Our `Options` type (being just a synonym for `[String]`) together with the concatenation operator `(++)` forms a `Monoid`.\nAnd for any `Monoid m`, `((-\u003e) m)` is a `Comonad`:\n\n```haskell\ninstance Monoid m =\u003e Comonad ((-\u003e) m)  -- as defined in Control.Comonad\n```\n\nSo we don't have to define our own `Comonad` instance but can rely on the predefined, more generic instance for `((-\u003e) m)`.\n\nEquipped with this knowledge we define a more generic version of our `#\u003e\u003e` chaining operator:\n\n```haskell\n(#\u003e) :: Comonad w =\u003e w a -\u003e (w a -\u003e b) -\u003e w b\nx #\u003e f = extend f x\ninfixl 0 #\u003e\n```\n\nBased on this definition we can finally rewrite the user code as follows:\n\n```haskell\n    configBuilder\n        #\u003e withProfiling'\n        #\u003e withOptimization'\n        #\u003e withLogging'\n        # extract  -- # build would be fine as well\n        # print\n```\n\nThis section is based on examples from [You could have invented Comonads](http://www.haskellforall.com/2013/02/you-could-have-invented-comonads.html). Please also check this [blog post](http://gelisam.blogspot.com/2013/07/comonads-are-neighbourhoods-not-objects.html) which comments on the notion of *comonads as objects* in Gabriel Gonzalez' original post.\n\n[Sourcecode for this section](https://github.com/thma/LtuPatternFactory/blob/master/src/FluentApi.hs).\n\n## Beyond type class patterns\n\nThe patterns presented in this chapter don't have a direct correspondence to specific type classes. Rather, they map to more general concepts of functional programming.\n\n### Dependency Injection → Parameter Binding, Partial Application\n\n\u003e [...] Dependency injection is a technique whereby one object (or static method) supplies the dependencies of another object. A dependency is an object that can be used (a service). An injection is the passing of a dependency to a dependent object (a client) that would use it. 
The service is made part of the client's state. Passing the service to the client, rather than allowing a client to build or find the service, is the fundamental requirement of the pattern.\n\u003e\n\u003e This fundamental requirement means that using values (services) produced within the class from new or static methods is prohibited. The client should accept values passed in from outside. This allows the client to make acquiring dependencies someone else's problem.\n\u003e (Quoted from [Wikipedia](https://en.wikipedia.org/wiki/Dependency_injection))\n\nIn functional languages this is achieved by binding the formal parameters of a function to values.\n\nLet's see how this works in a real world example. Say we have been building a renderer that allows to produce a markdown representation of a data type that represents the table of contents of a document:\n\n```haskell\n-- | a table of contents consists of a heading and a list of entries\ndata TableOfContents = Section Heading [TocEntry]\n\n-- | a ToC entry can be a heading or a sub-table of contents\ndata TocEntry = Head Heading | Sub TableOfContents\n\n-- | a heading can be just a title string or an url with a title and the actual link\ndata Heading = Title String | Url String String\n\n-- | render a ToC entry as a Markdown String with the proper indentation\nteToMd :: Int -\u003e TocEntry -\u003e String\nteToMd depth (Head head) = headToMd depth head\nteToMd depth (Sub toc)   = tocToMd  depth toc\n\n-- | render a heading as a Markdown String with the proper indentation\nheadToMd :: Int -\u003e Heading -\u003e String\nheadToMd depth (Title str)     = indent depth ++ \"* \" ++ str ++ \"\\n\"\nheadToMd depth (Url title url) = indent depth ++ \"* [\" ++ title ++ \"](\" ++ url ++ \")\\n\"\n\n-- | convert a ToC to Markdown String. 
The parameter depth is used for proper indentation.\ntocToMd :: Int -\u003e TableOfContents -\u003e String\ntocToMd depth (Section heading entries) = headToMd depth heading ++ concatMap (teToMd (depth+2)) entries\n\n-- | produce a String of length n, consisting only of blanks\nindent :: Int -\u003e String\nindent n = replicate n ' '\n\n-- | render a ToC as a Text (consisting of properly indented Markdown)\ntocToMDText :: TableOfContents -\u003e T.Text\ntocToMDText = T.pack . tocToMd 0\n```\n\nWe can use these definitions to create a table of contents data structure and to render it to markdown syntax:\n\n```haskell\ndemoDI = do\n    let toc = Section (Title \"Chapter 1\")\n                [ Sub $ Section (Title \"Section a\")\n                    [Head $ Title \"First Heading\",\n                     Head $ Url \"Second Heading\" \"http://the.url\"]\n                , Sub $ Section (Url \"Section b\" \"http://the.section.b.url\")\n                    [ Sub $ Section (Title \"UnderSection b1\")\n                        [Head $ Title \"First\", Head $ Title \"Second\"]]]\n    putStrLn $ T.unpack $ tocToMDText toc\n\n-- and the in ghci:\nghci \u003e demoDI\n* Chapter 1\n  * Section a\n    * First Heading\n    * [Second Heading](http://the.url)\n  * [Section b](http://the.section.b.url)\n    * UnderSection b1\n      * First\n      * Second\n```\n\nSo far so good. But of course we also want to be able to render our `TableOfContent` to HTML.\nAs we don't want to repeat all the coding work for HTML we think about using an existing Markdown library.\n\nBut we don't want any hard coded dependencies to a specific library in our code.\n\nWith these design ideas in mind we specify a rendering processor:\n\n```haskell\n-- | render a ToC as a Text with html markup.\n--   we specify this function as a chain of parse and rendering functions\n--   which must be provided externally\ntocToHtmlText :: (TableOfContents -\u003e T.Text) -- 1. 
a renderer function from ToC to Text with markdown markup\n              -\u003e (T.Text -\u003e MarkDown)        -- 2. a parser function from Text to a MarkDown document\n              -\u003e (MarkDown -\u003e HTML)          -- 3. a renderer function from MarkDown to an HTML document\n              -\u003e (HTML -\u003e T.Text)            -- 4. a renderer function from HTML to Text\n              -\u003e TableOfContents             -- the actual ToC to be rendered\n              -\u003e T.Text                      -- the Text output (containing html markup)\ntocToHtmlText tocToMdText textToMd mdToHtml htmlToText =\n    tocToMdText \u003e\u003e\u003e    -- 1. render a ToC as a Text (consisting of properly indented Markdown)\n    textToMd    \u003e\u003e\u003e    -- 2. parse text with Markdown to a MarkDown data structure\n    mdToHtml    \u003e\u003e\u003e    -- 3. convert the MarkDown data to an HTML data structure\n    htmlToText         -- 4. render the HTML data to a Text with html markup\n```\n\nThe idea is simple:\n\n1. We render our `TableOfContents` to a Markdown `Text` (e.g. using our already defined `tocToMDText` function).\n2. This text is then parsed into a `MarkDown` data structure.\n3. The `MarkDown` document is rendered into an `HTML` data structure,\n4. which is then rendered to a `Text` containing html markup.\n\nTo notate the chaining of functions in their natural order I have used the `\u003e\u003e\u003e` operator from `Control.Arrow` which is defined as follows:\n\n```haskell\nf \u003e\u003e\u003e g = g . f\n```\n\nSo `\u003e\u003e\u003e` is just left-to-right composition of functions, which makes longer composition chains much easier to read (at least for people trained to read from left to right).\n\nPlease note that at this point we have not defined the types `HTML` and `MarkDown`. 
They are just abstract placeholders which we expect to be provided externally.\nIn the same way we just specified that there must be functions available that can be bound to the formal parameters\n`tocToMdText`, `textToMd`, `mdToHtml` and `htmlToText`.\n\nIf such functions are available we can *inject* them (or rather bind them to the formal parameters) as in the following definition:\n\n```haskell\n-- | a default implementation of a ToC to html Text renderer.\n--   this function is constructed by partially applying `tocToHtmlText` to four functions\n--   matching the signature of `tocToHtmlText`.\ndefaultTocToHtmlText :: TableOfContents -\u003e T.Text\ndefaultTocToHtmlText =\n    tocToHtmlText\n        tocToMDText         -- the ToC to markdown Text renderer as defined above\n        textToMarkDown      -- a MarkDown parser, externally provided via import\n        markDownToHtml      -- a MarkDown to HTML renderer, externally provided via import\n        htmlToText          -- an HTML to Text renderer, externally provided via import\n```\n\nThis definition assumes that apart from `tocToMDText`, which has already been defined, the functions `textToMarkDown`, `markDownToHtml` and `htmlToText` are also present in the current scope.\nThis is achieved by the following import statement:\n\n```haskell\nimport CheapskateRenderer (HTML, MarkDown, textToMarkDown, markDownToHtml, htmlToText)\n```\n\nThe implementation in file CheapskateRenderer.hs then looks as follows:\n\n```haskell\nmodule CheapskateRenderer where\nimport qualified Cheapskate                      as C\nimport qualified Data.Text                       as T\nimport qualified Text.Blaze.Html                 as H\nimport qualified Text.Blaze.Html.Renderer.Pretty as R\n\n-- | a type synonym that hides the Cheapskate internal Doc type\ntype MarkDown = C.Doc\n\n-- | a type synonym that hides the Blaze.Html internal Html type\ntype HTML = H.Html\n\n-- | parse Markdown from a Text (with markdown 
markup). Using the Cheapskate library.\ntextToMarkDown :: T.Text -\u003e MarkDown\ntextToMarkDown = C.markdown C.def\n\n-- | convert MarkDown to HTML by using the Blaze.Html library\nmarkDownToHtml :: MarkDown -\u003e HTML\nmarkDownToHtml = H.toHtml\n\n-- | rendering a Text with html markup from HTML. Using Blaze again.\nhtmlToText :: HTML -\u003e T.Text\nhtmlToText = T.pack . R.renderHtml\n```\n\nNow let's try it out:\n\n```haskell\ndemoDI = do\n    let toc = Section (Title \"Chapter 1\")\n                [ Sub $ Section (Title \"Section a\")\n                    [Head $ Title \"First Heading\",\n                     Head $ Url \"Second Heading\" \"http://the.url\"]\n                , Sub $ Section (Url \"Section b\" \"http://the.section.b.url\")\n                    [ Sub $ Section (Title \"UnderSection b1\")\n                        [Head $ Title \"First\", Head $ Title \"Second\"]]]\n\n    putStrLn $ T.unpack $ tocToMDText toc\n\n    putStrLn $ T.unpack $ defaultTocToHtmlText toc  \n\n-- using this in ghci:\nghci \u003e demoDI\n* Chapter 1\n  * Section a\n    * First Heading\n    * [Second Heading](http://the.url)\n  * [Section b](http://the.section.b.url)\n    * UnderSection b1\n      * First\n      * Second\n\n\u003cul\u003e\n\u003cli\u003eChapter 1\n\u003cul\u003e\n\u003cli\u003eSection a\n\u003cul\u003e\n\u003cli\u003eFirst Heading\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"http://the.url\"\u003eSecond Heading\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"http://the.section.b.url\"\u003eSection b\u003c/a\u003e\n\u003cul\u003e\n\u003cli\u003eUnderSection b1\n\u003cul\u003e\n\u003cli\u003eFirst\u003c/li\u003e\n\u003cli\u003eSecond\u003c/li\u003e\n\u003c/ul\u003e\u003c/li\u003e\n\u003c/ul\u003e\u003c/li\u003e\n\u003c/ul\u003e\u003c/li\u003e\n\u003c/ul\u003e\n```\n\nBy inlining this output into the present Markdown document we can see that Markdown and HTML rendering produce the same structure:\n\n\u003e * Chapter 
1\n\u003e   * Section a\n\u003e     * First Heading\n\u003e     * [Second Heading](http://the.url)\n\u003e   * [Section b](http://the.section.b.url)\n\u003e     * UnderSection b1\n\u003e       * First\n\u003e       * Second\n\u003e\n\u003e \u003cul\u003e\n\u003e \u003cli\u003eChapter 1\n\u003e \u003cul\u003e\n\u003e \u003cli\u003eSection a\n\u003e \u003cul\u003e\n\u003e \u003cli\u003eFirst Heading\u003c/li\u003e\n\u003e \u003cli\u003e\u003ca href=\"http://the.url\"\u003eSecond Heading\u003c/a\u003e\u003c/li\u003e\n\u003e \u003c/ul\u003e\u003c/li\u003e\n\u003e \u003cli\u003e\u003ca href=\"http://the.section.b.url\"\u003eSection b\u003c/a\u003e\n\u003e \u003cul\u003e\n\u003e \u003cli\u003eUnderSection b1\n\u003e \u003cul\u003e\n\u003e \u003cli\u003eFirst\u003c/li\u003e\n\u003e \u003cli\u003eSecond\u003c/li\u003e\n\u003e \u003c/ul\u003e\u003c/li\u003e\n\u003e \u003c/ul\u003e\u003c/li\u003e\n\u003e \u003c/ul\u003e\u003c/li\u003e\n\u003e \u003c/ul\u003e\n\n[Sourcecode for this section](https://github.com/thma/LtuPatternFactory/blob/master/src/DependencyInjection.hs)\n\n#### Alternative approaches to dependency injection\n\nSince the carefree handling of dependencies is an important issue in almost every real-world application, it is not surprising that many different solution patterns have been developed for this over time.\n\nSpecifically in the Haskell environment, interesting approaches have been developed, such as\n\n* the use of the Reader Monad\n* the use of implicit parameters\n\nI will not go into these approaches further here, as there is already a very detailed description available: [Who still uses ReaderT](https://hugopeters.me/posts/10/).\n\nThere is a controversial discussion about implicit parameters, so I would like to refer to [this blog post](https://chrisdone.com/posts/whats-wrong-with-implicitparams/), which discusses some of those issues.\n\n### Command → Functions as First Class Citizens\n\n\u003e In object-oriented programming, the command 
pattern is a behavioral design pattern in which an object is used to encapsulate all information needed to perform an action or trigger an event at a later time. This information includes the method name, the object that owns the method and values for the method parameters.\n\u003e\n\u003e [Quoted from Wikipedia](https://en.wikipedia.org/wiki/Command_pattern)\n\nThe Wikipedia article features implementation of a simple example in several languages. I'm quoting the Java version here:\n\n```java\nimport java.util.ArrayList;\n\n/** The Command interface */\npublic interface Command {\n   void execute();\n}\n\n/** The Invoker class */\npublic class Switch {\n   private final ArrayList\u003cCommand\u003e history = new ArrayList\u003c\u003e();\n\n   public void storeAndExecute(Command cmd) {\n      this.history.add(cmd);\n      cmd.execute();\n   }\n}\n\n/** The Receiver class */\npublic class Light {\n   public void turnOn() {\n      System.out.println(\"The light is on\");\n   }\n\n   public void turnOff() {\n      System.out.println(\"The light is off\");\n   }\n}\n\n/** The Command for turning on the light - ConcreteCommand #1 */\npublic class FlipUpCommand implements Command {\n   private final Light light;\n\n   public FlipUpCommand(Light light) {\n      this.light = light;\n   }\n\n   @Override    // Command\n   public void execute() {\n      light.turnOn();\n   }\n}\n\n/** The Command for turning off the light - ConcreteCommand #2 */\npublic class FlipDownCommand implements Command {\n   private final Light light;\n\n   public FlipDownCommand(Light light) {\n      this.light = light;\n   }\n\n   @Override    // Command\n   public void execute() {\n      light.turnOff();\n   }\n}\n\n/* The test class or client */\npublic class PressSwitch {\n   public static void main(final String[] arguments){\n      // Check number of arguments\n      if (arguments.length != 1) {\n         System.err.println(\"Argument \\\"ON\\\" or \\\"OFF\\\" is required!\");\n         
System.exit(-1);\n      }\n\n      Light lamp = new Light();\n\n      Command switchUp = new FlipUpCommand(lamp);\n      Command switchDown = new FlipDownCommand(lamp);\n\n      Switch mySwitch = new Switch();\n\n      switch(arguments[0]) {\n         case \"ON\":\n            mySwitch.storeAndExecute(switchUp);\n            break;\n         case \"OFF\":\n            mySwitch.storeAndExecute(switchDown);\n            break;\n         default:\n            System.err.println(\"Argument \\\"ON\\\" or \\\"OFF\\\" is required.\");\n            System.exit(-1);\n      }\n   }\n}\n```\n\nRewriting this in Haskell is much denser:\n\n```haskell\nimport           Control.Monad.Writer  -- the writer monad is used to implement the history\n\n-- The Light data type with two nullary operations to turn the light on or off \ndata Light = Light {\n      turnOn  :: IO String\n    , turnOff :: IO String\n}\n\n-- our default instance of a Light\nsimpleLamp = Light { \n      turnOn  = putStrLn \"The Light is on\"  \u003e\u003e return \"on\"\n    , turnOff = putStrLn \"The Light is off\" \u003e\u003e return \"off\"\n}\n\n-- a command to flip on a Light\nflipUpCommand :: Light -\u003e IO String\nflipUpCommand = turnOn\n\n-- a command to flipDown a Light\nflipDownCommand :: Light -\u003e IO String\nflipDownCommand = turnOff\n\n-- execute a command and log it\nstoreAndExecute :: IO String -\u003e WriterT[String] IO ()\nstoreAndExecute command = do\n    logEntry \u003c- liftIO command\n    tell [logEntry]\n  \ncommandDemo :: IO ()\ncommandDemo = do\n    let lamp = simpleLamp\n    result \u003c- execWriterT $\n        storeAndExecute (flipUpCommand lamp)   \u003e\u003e\n        storeAndExecute (flipDownCommand lamp) \u003e\u003e\n        storeAndExecute (flipUpCommand lamp)\n\n    putStrLn $ \"switch history: \" ++ show result\n```\n\n[Sourcecode for this section](https://github.com/thma/LtuPatternFactory/blob/master/src/Command.hs)\n\n### Adapter → Function Composition\n\n\u003e \"The 
adapter pattern is a software design pattern (also known as wrapper, an alternative naming shared with the decorator pattern) that allows the interface of an existing class to be used as another interface. It is often used to make existing classes work with others without modifying their source code.\"\n\u003e (Quoted from [Wikipedia](https://en.wikipedia.org/wiki/Adapter_pattern))\n\nAn example is an adapter that converts the interface of a Document Object Model of an XML document into a tree structure that can be displayed.\n\nWhat does an adapter do? It translates a call to the adapter into a call to the adapted backend code, which may also involve translating the argument data.\n\nSay we have some `backend` function that we want to provide with an adapter. We assume that `backend` has type `c -\u003e d`:\n\n```haskell\nbackend :: c -\u003e d\n```\n\nOur adapter should be of type `a -\u003e b`:\n\n```haskell\nadapter :: a -\u003e b\n```\n\nIn order to write this adapter we have to write two functions. The first is:\n\n```haskell\nmarshal :: a -\u003e c\n```\n\nwhich translates the input argument of `adapter` into the correct type `c` that can be digested by the backend.\nAnd the second function is:\n\n```haskell\nunmarshal :: d -\u003e b\n```\n\nwhich translates the result of the `backend` function into the correct return type of `adapter`.\n`adapter` will then look as follows:\n\n```haskell\nadapter :: a -\u003e b\nadapter = unmarshal . backend . marshal\n```\n\nSo in essence the Adapter Pattern is just function composition.\n\nHere is a simple example. Say we have a backend that understands only 24-hour arithmetic (e.g. 23:50 + 0:20 = 0:10).\n\nBut in our frontend we don't want to see this ugly arithmetic and want to be able to add minutes to a time representation in minutes (e.g. 100 + 200 = 300).\n\nWe solve this by using the above-mentioned function composition of `unmarshal . backend . 
marshal`:\n\n```haskell\n-- a 24:00 hour clock representation of time\nnewtype WallTime = WallTime (Int, Int) deriving (Show)\n\n-- this is our backend. It can add minutes to a WallTime representation\naddMinutesToWallTime :: Int -\u003e WallTime -\u003e WallTime\naddMinutesToWallTime x (WallTime (h, m)) =\n    let (hAdd, mAdd) = x `quotRem` 60\n        hNew = h + hAdd\n        mNew = m + mAdd\n    in if mNew \u003e= 60\n        then\n            let (dnew, hnew') = (hNew + 1) `quotRem` 24\n            in  WallTime (24*dnew + hnew', mNew-60)\n        else WallTime (hNew, mNew)\n\n-- this is our time representation in Minutes that we want to use in the frontend\nnewtype Minute = Minute Int deriving (Show)\n\n-- convert a Minute value into a WallTime representation\nmarshalMW :: Minute -\u003e WallTime\nmarshalMW (Minute x) =\n    let (h,m) = x `quotRem` 60\n    in WallTime (h `rem` 24, m)\n\n-- convert a WallTime value back to Minutes\nunmarshalWM :: WallTime -\u003e Minute\nunmarshalWM (WallTime (h,m)) = Minute $ 60 * h + m\n\n-- this is our frontend that add Minutes to a time of a day\n-- measured in minutes\naddMinutesAdapter :: Int -\u003e Minute -\u003e Minute\naddMinutesAdapter x = unmarshalWM . addMinutesToWallTime x . marshalMW\n\nadapterDemo = do\n    putStrLn \"Adapter vs. 
function composition\"\n    print $ addMinutesAdapter 100 $ Minute 400\n    putStrLn \"\"\n```\n\n[Sourcecode for this section](https://github.com/thma/LtuPatternFactory/blob/master/src/Adapter.hs)\n\n### Template Method → type class default functions\n\n\u003e In software engineering, the template method pattern is a behavioral design pattern that defines the program skeleton of an algorithm in an operation, deferring some steps to subclasses.\n\u003e It lets one redefine certain steps of an algorithm without changing the algorithm's structure.\n\u003e [Quoted from Wikipedia](https://en.wikipedia.org/wiki/Template_method_pattern)\n\nThe TemplateMethod pattern is quite similar to the [StrategyPattern](#strategy---functor). The main difference is the level of granularity.\nIn Strategy a complete block of functionality - the Strategy - can be replaced.\nIn TemplateMethod the overall layout of an algorithm is predefined and only specific parts of it may be replaced.\n\nIn functional programming the answer to this kind of problem is again the usage of higher order functions.\n\nIn the following example we come back to the example for the [Adapter](#adapter---function-composition).\nThe function `addMinutesAdapter` lays out a structure for interfacing to some kind of backend:\n\n1. marshalling the arguments into the backend format\n2. apply the backend logic to the marshalled arguments\n3. unmarshal the backend result data into the frontend format\n\n```haskell\naddMinutesAdapter :: Int -\u003e Minute -\u003e Minute\naddMinutesAdapter x = unmarshalWM . addMinutesToWallTime x . 
marshalMW\n```\n\nIn this code the backend functionality - `addMinutesToWallTime` - is a hardcoded part of the overall structure.\n\nLet's assume we want to use different kind of backend implementations - for instance a mock replacement.\nIn this case we would like to keep the overall structure - the template - and would just make a specific part of it flexible.\nThis sounds like an ideal candidate for the TemplateMethod pattern:\n\n```haskell\naddMinutesTemplate :: (Int -\u003e WallTime -\u003e WallTime) -\u003e Int -\u003e Minute -\u003e Minute\naddMinutesTemplate f x =\n    unmarshalWM .\n    f x .\n    marshalMW\n```\n\n`addMinutesTemplate` has an additional parameter f of type `(Int -\u003e WallTime -\u003e WallTime)`. This parameter may be bound to `addMinutesToWallTime` or alternative implementations:\n\n```haskell\n-- implements linear addition (the normal case) even for values \u003e 1440\nlinearTimeAdd :: Int -\u003e Minute -\u003e Minute\nlinearTimeAdd = addMinutesTemplate addMinutesToWallTime\n\n-- implements cyclic addition, respecting a 24 hour (1440 Min) cycle\ncyclicTimeAdd :: Int -\u003e Minute -\u003e Minute\ncyclicTimeAdd = addMinutesTemplate addMinutesToWallTime'\n```\n\nwhere `addMinutesToWallTime'` implements a silly 24 hour cyclic addition:\n\n```haskell\n-- a 24 hour (1440 min) cyclic version of addition: 1400 + 100 = 60\naddMinutesToWallTime' :: Int -\u003e WallTime -\u003e WallTime\naddMinutesToWallTime' x (WallTime (h, m)) =\n    let (hAdd, mAdd) = x `quotRem` 60\n        hNew = h + hAdd\n        mNew = m + mAdd\n    in if mNew \u003e= 60\n        then WallTime ((hNew + 1) `rem` 24, mNew-60)\n        else WallTime (hNew, mNew)\n```\n\nAnd here is how we use it to do actual computations:\n\n```haskell\ntemplateMethodDemo = do\n    putStrLn $ \"linear time: \" ++ (show $ linearTimeAdd 100 (Minute 1400))\n    putStrLn $ \"cyclic time: \" ++ (show $ cyclicTimeAdd 100 (Minute 1400))\n```\n\n#### type class minimal implementations as template 
method\n\n\u003e The template method is used in frameworks, where each implements the invariant parts of a domain's architecture,\n\u003e leaving \"placeholders\" for customization options. This is an example of inversion of control.\n\u003e [Quoted from Wikipedia](https://en.wikipedia.org/wiki/Template_method_pattern)\n\nThe type classes in Haskell's base library apply this template approach frequently to reduce the effort for implementing type class instances and to provide a predefined structure with specific 'customization options'.\n\nAs an example let's extend the type `WallTime` by an associative binary operation `addWallTimes` to form an instance of the `Monoid` type class:\n\n```haskell\naddWallTimes :: WallTime -\u003e WallTime -\u003e WallTime\naddWallTimes a@(WallTime (h,m)) b =\n    let aMin = h*60 + m\n    in  addMinutesToWallTime aMin b\n\ninstance Semigroup WallTime where\n    (\u003c\u003e)   = addWallTimes\ninstance Monoid WallTime where\n    mempty = WallTime (0,0)\n```\n\nEven though we specified only `mempty` and `(\u003c\u003e)` we can now use the functions `mappend :: Monoid a =\u003e a -\u003e a -\u003e a` and `mconcat :: Monoid a =\u003e [a] -\u003e a` on `WallTime` instances:\n\n```haskell\ntemplateMethodDemo = do\n    let a = WallTime (3,20)\n    print $ mappend a a\n    print $ mconcat [a,a,a,a,a,a,a,a,a]\n```\n\nBy looking at the definition of the `Monoid` type class we can see how this 'magic' is made possible:\n\n```haskell\nclass Semigroup a =\u003e Monoid a where\n    -- | Identity of 'mappend'\n    mempty  :: a\n\n    -- | An associative operation\n    mappend :: a -\u003e a -\u003e a\n    mappend = (\u003c\u003e)\n\n    -- | Fold a list using the monoid.\n    mconcat :: [a] -\u003e a\n    mconcat = foldr mappend mempty\n```\n\nFor `mempty` only a type signature but no definition is given.\nBut for `mappend` and `mconcat` default implementations are provided.\nSo the `Monoid` type class definition forms a *template* where the default 
implementations define the 'invariant parts' of the type class and the parts specified by us form the 'customization options'.\n\n(Please note that it's generally possible to override the default implementations.)\n\n[Sourcecode for this section](https://github.com/thma/LtuPatternFactory/blob/master/src/TemplateMethod.hs)\n\n### Creational Patterns\n\n#### Abstract Factory → functions as data type values\n\n\u003e The abstract factory pattern provides a way to encapsulate a group of individual factories that have a common theme without specifying their concrete classes.\n\u003e In normal usage, the client software creates a concrete implementation of the abstract factory and then uses the generic interface of the factory to create the concrete objects that are part of the theme.\n\u003e The client doesn't know (or care) which concrete objects it gets from each of these internal factories, since it uses only the generic interfaces of their products.\n\u003e This pattern separates the details of implementation of a set of objects from their general usage and relies on object composition, as object creation is implemented in methods exposed in the factory interface.\n\u003e [Quoted from Wikipedia](https://en.wikipedia.org/wiki/Abstract_factory_pattern)\n\nThere is a classic example that demonstrates the application of this pattern in the context of a typical problem in object-oriented software design:\n\nThe example revolves around a small GUI framework that needs different implementations to render Buttons for different OS platforms (called WIN and OSX in this example).\nA client of the GUI API should work with a uniform API that hides the specifics of the different platforms. 
The problem then is: how can the client be provided with a platform-specific implementation without explicitly asking for a given implementation, and how can we maintain a uniform API that hides the implementation specifics?\n\nIn OO languages like Java the abstract factory pattern would be the canonical answer to this problem:\n\n* The client calls an abstract factory interface `GUIFactory` to create a `Button` by calling `createButton() : Button`, which somehow chooses (typically by some kind of configuration) which concrete factory has to be used to create concrete `Button` instances.\n* The concrete classes `WinButton` and `OSXButton` implement the interface `Button` and provide platform-specific implementations of `paint() : void`.\n* As the client uses only the interface methods `createButton()` and `paint()` it does not have to deal with any platform-specific code.\n\nThe following diagram depicts the structure of interfaces and classes in this scenario:\n\n![The abstract Button Factory](https://upload.wikimedia.org/wikipedia/commons/thumb/a/a7/Abstract_factory.svg/517px-Abstract_factory.svg.png)\n\nIn a functional language this kind of problem would be solved quite differently. 
In FP, functions are first-class citizens, and thus it is much easier to treat functions that represent platform-specific actions as \"normal\" values that can be passed around.\n\nSo we could represent a Button type as a data type with a label (holding the text to display on the button) and an `IO ()` action that represents the platform specific rendering:\n\n```haskell\n-- | representation of a Button UI widget\ndata Button = Button\n    { label  :: String           -- the text label of the button\n    , render :: Button -\u003e IO ()  -- a platform specific rendering action\n    }\n```\n\nPlatform-specific actions to render a `Button` look as follows:\n\n```haskell\n-- | rendering a Button for the WIN platform (we just simulate it by printing the label)\nwinPaint :: Button -\u003e IO ()\nwinPaint btn = putStrLn $ \"winButton: \" ++ label btn\n\n-- | rendering a Button for the OSX platform\nosxPaint :: Button -\u003e IO ()\nosxPaint btn = putStrLn $ \"osxButton: \" ++ label btn\n\n-- | paint a button by using the Button's render function\npaint :: Button -\u003e IO ()\npaint btn@(Button _ render) = render btn\n```\n\n(Of course a real implementation would be quite a bit more complex, but we don't care about the nitty-gritty details here.)\n\nWith this code we can now create and use concrete Buttons like so:\n\n```haskell\nghci\u003e button = Button \"Okay\" winPaint\nghci\u003e :type button\nbutton :: Button\nghci\u003e paint button\nwinButton: Okay\n```\n\nWe created a button with `Button \"Okay\" winPaint`. The field `render` of that button instance now holds the function `winPaint`.\nThe function `paint` now applies this `render` function -- i.e. 
winPaint -- to draw the Button.\n\nApplying this scheme, it is now very simple to create buttons with different `render` implementations:\n\n```haskell\n-- | a representation of the operating system platform\ndata Platform = OSX | WIN | NIX | Other\n\n-- | determine Platform by inspecting System.Info.os string\nplatform :: Platform\nplatform =\n  case os of\n    \"darwin\"  -\u003e OSX\n    \"mingw32\" -\u003e WIN\n    \"linux\"   -\u003e NIX\n    _         -\u003e Other\n\n-- | create a button for os platform with label lbl\ncreateButton :: String -\u003e Button\ncreateButton lbl =\n  case platform of\n    OSX    -\u003e Button lbl osxPaint\n    WIN    -\u003e Button lbl winPaint\n    NIX    -\u003e Button lbl (\\btn -\u003e putStrLn $ \"nixButton: \"   ++ label btn)\n    Other  -\u003e Button lbl (\\btn -\u003e putStrLn $ \"otherButton: \" ++ label btn)\n```\n\nThe function `createButton` determines the actual execution environment and accordingly creates platform-specific buttons.\n\nNow we have an API that hides all implementation specifics from the client and allows them to use only `createButton` and `paint` to work with Buttons for different OS platforms:\n\n```haskell\nabstractFactoryDemo = do\n    putStrLn \"AbstractFactory -\u003e functions as data type values\"\n    let exit = createButton \"Exit\"            -- using the \"abstract\" API to create buttons\n    let ok   = createButton \"OK\"\n    paint ok                                  -- using the \"abstract\" API to paint buttons\n    paint exit\n\n    paint $ Button \"Apple\" osxPaint           -- paint a platform specific button\n    paint $ Button \"Pi\"                       -- paint a user-defined button\n        (\\btn -\u003e putStrLn $ \"raspberryButton: \" ++ label btn)\n```\n\n[Sourcecode for this section](https://github.com/thma/LtuPatternFactory/blob/master/src/AbstractFactory.hs)\n\n#### Builder → record syntax, smart constructor\n\n\u003e The Builder is a design pattern designed to 
provide a flexible solution to various object creation problems in object-oriented programming. The intent of the Builder design pattern is to separate the construction of a complex object from its representation.\n\u003e\n\u003e Quoted from [Wikipedia](https://en.wikipedia.org/wiki/Builder_pattern)\n\nThe Builder pattern is frequently used to ease the construction of complex objects by providing a safe and convenient API to client code.\nIn the following Java example we define a POJO class `BankAccount`:\n\n```java\npublic class BankAccount {\n\n    private int accountNo;\n    private String name;\n    private String branch;\n    private double balance;\n    private double interestRate;\n\n    BankAccount(int accountNo, String name, String branch, double balance, double interestRate) {\n        this.accountNo = accountNo;\n        this.name = name;\n        this.branch = branch;\n        this.balance = balance;\n        this.interestRate = interestRate;\n    }\n\n    @Override\n    public String toString() {\n        return \"BankAccount {accountNo = \" + accountNo + \", name = \\\"\" + name\n                + \"\\\", branch = \\\"\" + branch + \"\\\", balance = \" + balance + \", interestRate = \" + interestRate + \"}\";\n    }\n}\n```\n\nThe class provides a package-private constructor that takes 5 arguments that are used to fill the instance attributes.\nUsing constructors with so many arguments is often considered inconvenient and potentially unsafe, as certain constraints on the arguments might not be maintained by client code invoking this constructor.\n\nThe typical solution is to provide a Builder class that is responsible for maintaining internal data constraints and providing a robust and convenient API.\nIn the following example the Builder ensures that a BankAccount must have an accountNo and that non-null values are provided for the String attributes:\n\n```java\npublic class BankAccountBuilder {\n\n    private int accountNo;\n    private String 
name;\n    private String branch;\n    private double balance;\n    private double interestRate;\n\n    public BankAccountBuilder(int accountNo) {\n        this.accountNo = accountNo;\n        this.name = \"Dummy Customer\";\n        this.branch = \"London\";\n        this.balance = 0;\n        this.interestRate = 0;\n    }\n\n    public BankAccountBuilder withAccountNo(int accountNo) {\n        this.accountNo = accountNo;\n        return this;\n    }\n\n    public BankAccountBuilder withName(String name) {\n        this.name = name;\n        return this;\n    }\n\n    public BankAccountBuilder withBranch(String branch) {\n        this.branch = branch;\n        return this;\n    }\n\n    public BankAccountBuilder withBalance(double balance) {\n        this.balance = balance;\n        return this;\n    }\n\n    public BankAccountBuilder withInterestRate(double interestRate) {\n        this.interestRate = interestRate;\n        return this;\n    }\n\n    public BankAccount build() {\n        return new BankAccount(this.accountNo, this.name, this.branch, this.balance, this.interestRate);\n    }\n}\n```\n\nNext comes an example of how the builder is used in client code:\n\n```java\npublic class BankAccountTest {\n\n    public static void main(String[] args) {\n        new BankAccountTest().testAccount();\n    }\n\n    public void testAccount() {\n        BankAccountBuilder builder = new BankAccountBuilder(1234);\n        // the builder can provide a dummy instance, that might be used for testing\n        BankAccount account = builder.build();\n        System.out.println(account);\n        // the builder provides a fluent API to construct regular instances\n        BankAccount account1 =\n                 builder.withName(\"Marjin Mejer\")\n                        .withBranch(\"Paris\")\n                        .withBalance(10000)\n                        .withInterestRate(2)\n                        .build();\n\n        System.out.println(account1);\n    
}\n}\n```\n\nAs we can see, the Builder can either be used to create dummy instances that are still safe to use (e.g. for test cases) or, via the `withXxx` methods, to populate all attributes:\n\n```\nBankAccount {accountNo = 1234, name = \"Dummy Customer\", branch = \"London\", balance = 0.0, interestRate = 0.0}\nBankAccount {accountNo = 1234, name = \"Marjin Mejer\", branch = \"Paris\", balance = 10000.0, interestRate = 2.0}\n```\n\nFrom an API client's perspective, the Builder pattern can help to provide safe and convenient object construction which is not provided by the Java core language.\nAs the Builder code is quite redundant (e.g. it duplicates all attributes of the actual instance class), Builders are typically generated (e.g. with [Lombok](https://projectlombok.org/features/Builder)).\n\nIn functional languages there is usually no need for the Builder pattern as the languages already provide the necessary infrastructure.\n\nThe following example shows how the above example would be solved in Haskell:\n\n```haskell\ndata BankAccount = BankAccount {\n    accountNo    :: Int\n  , name         :: String\n  , branch       :: String\n  , balance      :: Double\n  , interestRate :: Double\n} deriving (Show)\n\n-- a \"smart constructor\" that just needs a unique int to construct a BankAccount\nbuildAccount :: Int -\u003e BankAccount\nbuildAccount i = BankAccount i \"Dummy Customer\" \"London\" 0 0\n\nbuilderDemo = do\n    -- construct a dummy instance\n    let account = buildAccount 1234\n    print account\n    -- use record syntax to create a modified clone of the dummy instance\n    let account1 = account {name=\"Marjin Mejer\", branch=\"Paris\", balance=10000, interestRate=2}\n    print account1\n\n    -- directly using record syntax to create an instance\n    let account2 = BankAccount {\n          accountNo    = 5678\n        , name         = \"Marjin Mejer\"\n        , branch       = \"Reikjavik\"\n        , balance      = 1000\n        , interestRate = 2.5\n        
}\n    print account2\n\n-- and then in Ghci:\nghci\u003e builderDemo\nBankAccount {accountNo = 1234, name = \"Dummy Customer\", branch = \"London\", balance = 0.0, interestRate = 0.0}\nBankAccount {accountNo = 1234, name = \"Marjin Mejer\", branch = \"Paris\", balance = 10000.0, interestRate = 2.0}\nBankAccount {accountNo = 5678, name = \"Marjin Mejer\", branch = \"Reikjavik\", balance = 1000.0, interestRate = 2.5}\n```\n\n[Sourcecode for this section](https://github.com/thma/LtuPatternFactory/blob/master/src/Builder.hs)\n\n## Functional Programming Patterns\n\nThe patterns presented in this chapter all stem from functional languages.\nThat is, they have been first developed in functional languages like Lisp, Scheme or Haskell and have later been adopted in other languages.\n\n### Higher Order Functions\n\n\u003e In mathematics and computer science, a higher-order function is a function that does at least one of the following:\n\u003e\n\u003e * takes one or more functions as arguments (i.e. procedural parameters),\n\u003e * returns a function as its result.\n\u003e\n\u003eAll other functions are first-order functions. In mathematics higher-order functions are also termed operators or functionals. The differential operator in calculus is a common example since it maps a function to its derivative, also a function.\n\u003e [Quoted from Wikipedia](https://en.wikipedia.org/wiki/Higher-order_function)\n\nWe have already talked about higher order functions throughout this study \u0026ndash; in particular in the section on the [Strategy Pattern](#strategy--functor). 
But as higher order functions are such a central pillar of the strength of functional languages, I'd like to cover them in more depth.\n\n#### Higher Order Functions taking functions as arguments\n\nLet's have a look at two typical functions that work on lists: `sum` calculates the sum of all values in a list, and `product` likewise computes the product of all values in the list:\n\n```haskell\nsum :: Num a =\u003e [a] -\u003e a\nsum []     = 0\nsum (x:xs) = x + sum xs\n\nproduct :: Num a =\u003e [a] -\u003e a\nproduct []     = 1\nproduct (x:xs) = x * product xs\n\n-- and then in GHCi:\nghci\u003e sum [1..10]\n55\nghci\u003e product [1..10]\n3628800\n```\n\nThese two functions `sum` and `product` have exactly the same structure. They both apply a mathematical operation `(+)` or `(*)` on a list by handling two cases:\n\n* providing a neutral (or unit) value in the empty list `[]` case and\n* applying the mathematical operation and recursing into the tail of the list in the `(x:xs)` case.\n\nThe two functions differ only in the concrete value for the empty list `[]` and the concrete mathematical operation to be applied in the `(x:xs)` case.\n\nIn order to avoid repetitive code when writing functions that work on lists, wise functional programmers have invented `fold` functions:\n\n```haskell\nfoldr :: (a -\u003e b -\u003e b) -\u003e b -\u003e [a] -\u003e b\nfoldr fn z []     = z\nfoldr fn z (x:xs) = fn x y\n    where y = foldr fn z xs\n```\n\nThis *higher order function* takes a function `fn` of type `(a -\u003e b -\u003e b)`, a value `z` for the `[]` case and the actual list as parameters.\n\n* in the `[]` case the value `z` is returned\n* in the `(x:xs)` case the function `fn` is applied to `x` and `y`, where `y` is computed by recursively applying `foldr fn z` on the tail of the list `xs`.\n\nWe can use `foldr` to define functions like `sum` and `product` much more tersely:\n\n```haskell\nsum' :: Num a =\u003e [a] -\u003e a\nsum' = foldr (+) 0\n\nproduct' 
:: Num a =\u003e [a] -\u003e a\nproduct' = foldr (*) 1\n```\n\n`foldr` can also be used to define *higher order functions* on lists like `map` and `filter` much more densely than with the naive approach of writing pattern-matching equations for `[]` and `(x:xs)`:\n\n```haskell\n-- naive approach:\nmap :: (a -\u003e b) -\u003e [a] -\u003e [b]\nmap _ []     = []\nmap f (x:xs) = f x : map f xs\n\nfilter :: (a -\u003e Bool) -\u003e [a] -\u003e [a]\nfilter _ []     = []\nfilter p (x:xs) = if p x then x : filter p xs else filter p xs\n\n-- wise functional programmers approach:\nmap' :: (a -\u003e b) -\u003e [a] -\u003e [b]\nmap' f = foldr ((:) . f) []\n\nfilter' :: (a -\u003e Bool) -\u003e [a] -\u003e [a]\nfilter' p = foldr (\\x xs -\u003e if p x then x : xs else xs) []\n\n-- and then in GHCi:\nghci\u003e map (*2) [1..10]\n[2,4,6,8,10,12,14,16,18,20]\nghci\u003e filter even [1..10]\n[2,4,6,8,10]\n```\n\nThe idea to use `fold` operations to provide a generic mechanism to fold lists can be extended to cover other algebraic data types as well. 
Let's take a binary tree as an example:\n\n```haskell\ndata Tree a = Leaf\n            | Node a (Tree a) (Tree a)\n\n-- a sample tree, e.g.:\ntree :: Tree Integer\ntree = Node 2 (Node 3 Leaf Leaf) (Node 4 Leaf Leaf)\n\nsumTree :: Num a =\u003e Tree a -\u003e a\nsumTree Leaf = 0\nsumTree (Node x l r) = x + sumTree l + sumTree r\n\nproductTree :: Num a =\u003e Tree a -\u003e a\nproductTree Leaf = 1\nproductTree (Node x l r) = x * productTree l * productTree r\n\n-- and then in GHCi:\nghci\u003e sumTree tree\n9\nghci\u003e productTree tree\n24\n```\n\nThe higher order `foldTree` operation takes a function `fn` of type `(a -\u003e b -\u003e b)`, a value `z` for the `Leaf` case and the actual `Tree a` as parameters:\n\n```haskell\nfoldTree :: (a -\u003e b -\u003e b) -\u003e b -\u003e Tree a -\u003e b\nfoldTree fn z Leaf = z\nfoldTree fn z (Node a left right) = foldTree fn z' left where\n   z'  = fn a z''\n   z'' = foldTree fn z right\n```\n\nThe sum and product functions can now elegantly be defined by making use of `foldTree`:\n\n```haskell\nsumTree' = foldTree (+) 0\n\nproductTree' = foldTree (*) 1\n```\n\nAs the family of `fold` operations is useful for many data types, the GHC compiler even provides a special pragma that allows automatic provisioning of this functionality by declaring the data type as an instance of the type class `Foldable`:\n\n```haskell\n{-# LANGUAGE DeriveFoldable #-}\n\ndata Tree a = Leaf\n            | Node a (Tree a) (Tree a) deriving (Foldable)\n\n-- and then in GHCi:\n\u003e foldr (+) 0 tree\n9\n```\n\nApart from several `fold` operations the `Foldable` type class also provides useful functions like `maximum` and `minimum`: [Foldable documentation on hackage](https://hackage.haskell.org/package/base-4.12.0.0/docs/Prelude.html#t:Foldable)\n\nIn this section we have seen how higher order functions that take functions as parameters can be very useful tools to provide generic algorithmic templates that can be applied in a wide range of situations.\n\n##### Origami programming style\n\nMathematicians love symmetry. 
So it comes as little surprise that the Haskell standard library `Data.List` provides a dual to `foldr`: the higher order function `unfoldr`.\nWhile `foldr` allows us to reduce a list of values to a single value, `unfoldr` allows us to create a list of values starting from an initial value:\n\n```haskell\nunfoldr :: (b -\u003e Maybe (a, b)) -\u003e b -\u003e [a]\nunfoldr f u = case f u of\n    Nothing     -\u003e []\n    Just (x, v) -\u003e x:(unfoldr f v)\n```\n\nThis mechanism can be used to generate finite and infinite lists:\n\n```haskell\n-- a descending list [10,9..1]\nghci\u003e print $ unfoldr (\\n -\u003e if n==0 then Nothing else Just (n, n-1)) 10\n[10,9,8,7,6,5,4,3,2,1]\n\n-- the list of all fibonacci numbers\nghci\u003e fibs = unfoldr (\\(a, b) -\u003e Just (a, (b, a+b))) (0, 1)\nghci\u003e print $ take 20 fibs\n[0,1,1,2,3,5,8,13,21,34,55,89,144,233,377,610,987,1597,2584,4181]\n```\n\n`unfoldr` can also be used to formulate algorithms like bubble sort in quite a dense form:\n\n```haskell\n-- bubble out the minimum element of a list:\nbubble :: Ord a =\u003e [a] -\u003e Maybe (a, [a])\nbubble = foldr step Nothing where\n    step x Nothing = Just (x, [])\n    step x (Just (y, ys))\n        | x \u003c y     = Just (x, y:ys)\n        | otherwise = Just (y, x:ys)\n\n-- compute minimum, cons it with the minimum of the remaining list and so forth\nbubbleSort :: Ord a =\u003e [a] -\u003e [a]\nbubbleSort = unfoldr bubble\n```\n\nUnfolds produce data structures, and folds consume them. It is thus quite natural to compose these two operations. The pattern of an unfold followed by a fold (called a [*hylomorphism*](https://en.wikipedia.org/wiki/Hylomorphism_(computer_science))) is fairly common. As a simple example we define the factorial function with our new tools:\n\n```haskell\nfactorial = foldr (*) 1 . 
unfoldr (\\n -\u003e if n == 0 then Nothing else Just (n, n-1))\n```\n\nThe `unfoldr` part generates a list of integers `[1..n]` and the `foldr` part reduces this list by computing the product of `[1..n]`.\n\nBut hylomorphisms are not limited to ivory tower examples: a typical compiler that takes some source code as input to generate an abstract syntax tree (unfolding) from which it then generates the object code of the target platform (folding) is quite a practical example of the same concept.\n\nOne interesting property of hylomorphisms is that they may be fused \u0026ndash; the intermediate data structure need not actually be constructed. This technique is called *deforestation* and can be done automatically by a compiler.\n\nCompressing data and uncompressing it later may be understood as a sequence of first folding and then unfolding. Algorithms that apply this pattern have been coined [*metamorphisms*](https://patternsinfp.wordpress.com/2017/10/04/metamorphisms/).\n\nThe programming style that uses combinations of higher order functions like fold and unfold operations on algebraic data structures has been dubbed [*Origami Programming*](https://www.cs.ox.ac.uk/jeremy.gibbons/publications/origami.pdf) after the Japanese art form based on paper folds.\n\n#### Higher Order Functions returning functions\n\nFunctions returning new functions are ubiquitous in functional programming as well.\nIf we look at simple binary arithmetic functions like `(+)` or `(*)` it would be quite natural to think that they have a type signature like the following:\n\n```haskell\n(+) :: Num a =\u003e (a, a) -\u003e a\n```\n\nBut by inspecting the signature in GHCi (with `:t (+)`) we see that the actual signature is\n\n```haskell\n(+) :: Num a =\u003e a -\u003e a -\u003e a\n```\n\nThis is because in Haskell all functions are considered curried: that is, all functions in Haskell take just one argument. 
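A two-argument function is thus just a function that returns another function of one argument. A minimal sketch (the names `add` and `add'` are made up for illustration):

```haskell
-- | 'add' uses the usual curried syntax; 'add'' spells out the
--   nested one-argument lambdas that the curried form abbreviates.
add :: Int -> Int -> Int
add x y = x + y

add' :: Int -> Int -> Int
add' = \x -> (\y -> x + y)

main :: IO ()
main = do
    print (add 1 2)    -- 3
    print (add' 1 2)   -- 3
    print ((add 1) 2)  -- same thing, applying one argument at a time: 3
```

Both definitions denote exactly the same function; GHC treats the curried syntax as sugar for the nested lambdas.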
The curried form is usually more convenient because it allows [partial application](https://github.com/thma/LtuPatternFactory#dependency-injection--parameter-binding-partial-application). It allows us to create new functions by applying the original function to a subset of the formal parameters:\n\n```haskell\nghci\u003e double = (*) 2\nghci\u003e :t double\ndouble :: Num a =\u003e a -\u003e a\nghci\u003e double 7\n14\n```\n\nSo even if we read a signature like `Int -\u003e Int -\u003e Int` informally as \"takes two `Int`s and returns an `Int`\", it should actually be understood as `Int -\u003e (Int -\u003e Int)`, which really says \"takes an `Int` and returns a function of type `Int -\u003e Int`\".\n\nApart from this implicit occurrence of \"functions returning functions\" there are also more explicit use cases of this pattern. I'll illustrate this with a simple generator for key/value mapping functions.\n\nWe start by defining a function type `Lookup` that can be used to define functions mapping keys to values:\n\n```haskell\n-- | Lookup is a function type from a key to a Maybe value:\ntype Lookup key value = key -\u003e Maybe value\n\n-- | a lookup function that always returns Nothing\nnada :: Lookup k v\nnada _ = Nothing\n\n-- | a function that knows its abc...\nabc :: Num v =\u003e Lookup String v\nabc \"a\" = Just 1\nabc \"b\" = Just 2\nabc \"c\" = Just 3\nabc _   = Nothing\n```\n\nNow we write a `Lookup` function generator `put` that adds a new key-to-value mapping to an existing lookup function:\n\n```haskell\n-- | put returns a new Lookup function based on a key, a value and an existing lookup function:\nput :: Eq k =\u003e k -\u003e v -\u003e Lookup k v -\u003e Lookup k v\nput k v lookup =\n    \\key -\u003e if key == k\n            then Just v\n            else lookup key\n\n-- and then in GHCi:\nghci\u003e get = put \"a\" 1 nada\n\nghci\u003e :t get\nget :: Num v =\u003e Lookup String v\n\nghci\u003e get \"a\"\nJust 1\n\nghci\u003e get 
\"b\"\nNothing\n```\n\nWe can now use `put` to stack more key value mappings onto the `get` function:\n\n```haskell\nghci\u003e get' = put \"b\" 2 get\nghci\u003e get' \"a\"\nJust 1\nghci\u003e get' \"b\"\nJust 2\nghci\u003e get' \"c\"\nNothing\n```\n\nA framework for symbolic derivation of functions in calculus would be another possible application of this approach, but as it involves several more advanced features (like Template Haskell and tagged types) I won't cover it here but just point the fearless reader directly to the sourcecode: [A symbolic differentiator for a subset of Haskell functions](http://hackage.haskell.org/package/liboleg-2010.1.10.0/docs/src/Data-Symbolic-Diff.html)\n\n[Sourcecode for this section](https://github.com/thma/LtuPatternFactory/blob/master/src/HigherOrder.hs)\n\n### Map Reduce\n\n\u003e MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key.\n\u003e\n\u003e Our abstraction is inspired by the map and reduce primitives present in Lisp and many other functional languages.\n\u003e [Quoted from Google Research](https://storage.googleapis.com/pub-tools-public-publication-data/pdf/16cb30b4b92fd4989b8619a61752a2387c6dd474.pdf)\n\nIn this section I'm featuring one of the canonical examples for MapReduce: counting word frequencies in a large text.\n\nLet's start with a function `stringToWordCountMap` that takes a string as input and creates the respective word frequency map:\n\n```haskell\n-- | a key value map, mapping a word to a frequency\nnewtype WordCountMap = WordCountMap (Map String Int) deriving (Show)\n\n-- | creating a word frequency map from a String.\n--   To ease readability I'm using the (\u003e\u003e\u003e) operator, which is just an inverted (.): f 
\u003e\u003e\u003e g == g . f\nstringToWordCountMap :: String -\u003e WordCountMap\nstringToWordCountMap =\n  map toLower \u003e\u003e\u003e words \u003e\u003e\u003e  -- convert to lowercase and split into a list of words\n  sort \u003e\u003e\u003e group \u003e\u003e\u003e         -- sort the words alphabetically and group all equal words to sub-lists\n  map (head \u0026\u0026\u0026 length) \u003e\u003e\u003e  -- for each of those lists of grouped words: form a pair (word, frequency)\n  Map.fromList \u003e\u003e\u003e           -- create a Map from the list of (word, frequency) pairs\n  WordCountMap               -- wrap as WordCountMap\n\n-- and then in GHCi:\nghci\u003e stringToWordCountMap \"hello world World\"\nWordCountMap (fromList [(\"hello\",1),(\"world\",2)])\n```\n\nIn a MapReduce scenario we would have a huge text as input that would take ages to process on a single core.\nSo the idea is to split up the huge text into smaller chunks that can then be processed in parallel on multiple cores or even large machine clusters.\n\nLet's assume we have split a text into two chunks. We could then use `map` to create a `WordCountMap` for both chunks:\n\n```haskell\nghci\u003e map stringToWordCountMap [\"hello world World\", \"out of this world\"]\n[WordCountMap (fromList [(\"hello\",1),(\"world\",2)])\n,WordCountMap (fromList [(\"of\",1),(\"out\",1),(\"this\",1),(\"world\",1)])]\n```\n\nThis was the *Map* part. Now to *Reduce*.\nIn order to get a comprehensive word frequency map we have to merge those individual `WordCountMap`s into one.\nThe merging must form a union of all entries from all individual maps. This union must also ensure that the frequencies from the individual maps are added up properly in the resulting map. We will use the `Map.unionWith` function to achieve this:\n\n```haskell\n-- | merges a list of individual WordCountMaps into a single one.\nreduceWordCountMaps :: [WordCountMap] -\u003e WordCountMap\nreduceWordCountMaps = WordCountMap . 
foldr (Map.unionWith (+) . coerce) empty\n\n-- and then in GHCi:\nghci\u003e reduceWordCountMaps it\nWordCountMap (fromList [(\"hello\",1),(\"of\",1),(\"out\",1),(\"this\",1),(\"world\",3)])\n```\n\nWe have just performed a manual map reduce operation! We can now take these ingredients to write a generic MapReduce function:\n\n```haskell\nsimpleMapReduce ::\n     (a -\u003e b)   -- map function\n  -\u003e ([b] -\u003e c) -- reduce function\n  -\u003e [a]        -- list to map over\n  -\u003e c          -- result\nsimpleMapReduce mapFunc reduceFunc = reduceFunc . map mapFunc\n\n-- and then in GHCi\nghci\u003e simpleMapReduce stringToWordCountMap reduceWordCountMaps [\"hello world World\", \"out of this world\"]\nWordCountMap (fromList [(\"hello\",1),(\"of\",1),(\"out\",1),(\"this\",1),(\"world\",3)])\n```\n\nWhat I have shown so far just demonstrates the general mechanism of chaining `map` and `reduce` functions without implying any parallel execution.\nEssentially we are chaining a `map` with a `fold` (i.e. reduction) function. In the Haskell base library there is a higher order function `foldMap` that covers exactly this pattern of chaining. Please note that `foldMap` performs only a single traversal of the foldable data structure: it fuses the `map` and `reduce` phases into a single one by function composition of `mappend` and the mapping function `f`:\n\n```haskell\n-- | Map each element of the structure to a monoid,\n-- and combine the results.\nfoldMap :: (Foldable t, Monoid m) =\u003e (a -\u003e m) -\u003e t a -\u003e m\nfoldMap f = foldr (mappend . 
f) mempty\n```\n\nThis signature requires that our type `WordCountMap` must be a `Monoid` in order to allow merging of multiple `WordCountMaps` by using `mappend`.\n\n```haskell\ninstance Semigroup WordCountMap where\n    WordCountMap a \u003c\u003e WordCountMap b = WordCountMap $ Map.unionWith (+) a b\ninstance Monoid WordCountMap where\n    mempty = WordCountMap Map.empty\n```\n\nThat's all we need to use `foldMap` to achieve a MapReduce:\n\n```haskell\nghci\u003e foldMap stringToWordCountMap [\"hello world World\", \"out of this world\"]\nWordCountMap (fromList [(\"hello\",1),(\"of\",1),(\"out\",1),(\"this\",1),(\"world\",3)])\n```\n\nFrom what I have shown so far it's easy to see that the `map` and `reduce` phases of the word frequency computation are candidates for heavily parallelized processing:\n\n* The generation of word frequency maps for the text chunks can be done in parallel. There are no shared data or other dependencies between those executions.\n* The reduction of the maps can start in parallel (that is, we don't have to wait until all individual maps are computed before starting the reduction) and the reduction itself can also be parallelized.\n\nThe calculation of word frequencies is a candidate for a parallel MapReduce because the addition operation used to accumulate the word frequencies is *associative*:\n*The order of execution doesn't affect the final result*.\n\n(Actually our data type `WordCountMap` is not only a `Monoid` (which requires an *associative* binary operation) \nbut even a [*commutative Monoid*](https://en.wikipedia.org/wiki/Monoid#Commutative_monoid).)\n\nSo our conclusion: if the intermediary key/value map for the data analytics task at hand forms a *monoid* under the reduce operation\nthen it is a candidate for parallel MapReduce. 
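The commutativity of the underlying merge is easy to check on a toy example; here is a minimal sketch using `Data.Map` (from the `containers` package) directly, with made-up word counts:

```haskell
import qualified Data.Map as Map

main :: IO ()
main = do
    let m1 = Map.fromList [("hello", 1 :: Int), ("world", 2)]
        m2 = Map.fromList [("out", 1), ("world", 1)]
    -- unionWith (+) adds up the frequencies of keys present in both maps;
    -- because (+) is commutative, the order of the merge is irrelevant:
    print (Map.unionWith (+) m1 m2 == Map.unionWith (+) m2 m1)  -- True
```

This order-independence is exactly what lets a parallel reducer combine partial results as they arrive.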
See also [An Algebra for Distributed Big Data Analytics](https://pdfs.semanticscholar.org/0498/3a1c0d6343e21129aaffca2a1b3eec419523.pdf).

Haskell provides the package `parallel` for defining parallel executions in a rather declarative way.
Here is what a parallelized MapReduce looks like when using this package:

```haskell
import Control.Parallel            (pseq)
import Control.Parallel.Strategies (parMap, rseq, using)

-- | a MapReduce using the Control.Parallel package to denote parallel execution
parMapReduce :: (a -> b) -> ([b] -> c) -> [a] -> c
parMapReduce mapFunc reduceFunc input =
    mapResult `pseq` reduceResult
    where mapResult    = parMap rseq mapFunc input
          reduceResult = reduceFunc mapResult `using` rseq

-- and then in GHCi:
ghci> parMapReduce stringToWordCountMap reduceWordCountMaps ["hello world World", "out of this world"]
WordCountMap (fromList [("hello",1),("of",1),("out",1),("this",1),("world",3)])
```

For more details see [Real World Haskell](http://book.realworldhaskell.org/read/concurrent-and-multicore-programming.html).

[Sourcecode for this section](https://github.com/thma/LtuPatternFactory/blob/master/src/MapReduce.hs)

<!--
### Continuation Passing

tbd.
-->

### Lazy Evaluation

> In programming language theory, lazy evaluation, or call-by-need is an evaluation strategy which delays the evaluation of an expression until its value is needed (non-strict evaluation) and which also avoids repeated evaluations (sharing). The sharing can reduce the running time of certain functions by an exponential factor over other non-strict evaluation strategies, such as call-by-name.
>
> The benefits of lazy evaluation include:
>
> * The ability to define control flow (structures) as abstractions instead of primitives.
> * The ability to define potentially infinite data structures. 
This allows for more straightforward implementation of some algorithms.
> * Performance increases by avoiding needless calculations, and avoiding error conditions when evaluating compound expressions.
>
> [Quoted from Wikipedia](https://en.wikipedia.org/wiki/Lazy_evaluation)

Let's start with a short snippet from a Java program:

```java
    // a non-terminating computation aka _|_ or bottom
    private static Void bottom() {
        return bottom();
    }

    // the K combinator, K x y returns x
    private static <A, B> A k(A x, B y) {
        return x;
    }

    public static void main(String[] args) {
        // part 1
        if (true) {
            System.out.println("21 is only half the truth");
        } else {
            bottom();
        }

        // part 2
        System.out.println(k(42, bottom()));
    }
```

What is the expected output of running `main`?
In part 1 we expect to see the text "21 is only half the truth" on the console. The else branch of the `if` statement will never be executed (thus avoiding the endless loop of calling `bottom()`) as `true` is always true.

But what will happen in part 2?
If the Java compiler were clever, it could determine that `k(x, y)` never needs to evaluate `y` as it always returns just `x`. 
In this case we should see a 42 printed to the console.

But Java method calls have eager evaluation semantics,
so we will just see a `StackOverflowError`...

In a non-strict (or lazy) language like Haskell this works out much more smoothly:

```haskell
-- | bottom, a computation which never completes successfully, aka _|_
bottom :: a
bottom = bottom

-- | the K combinator, which drops its second argument (k x y = x)
k :: a -> b -> a
k x _ = x

infinityDemo :: IO ()
infinityDemo = do
  print $ k 21 undefined -- evaluating undefined would result in a runtime error
  print $ k 42 bottom    -- evaluating bottom would result in an endless loop
  putStrLn ""
```

As Haskell is a non-strict language, the arguments of `k` are not evaluated when calling the function.
Thus in `k 21 undefined` and `k 42 bottom` the second arguments `undefined` and `bottom` are simply dropped and never evaluated.

Haskell's laziness can sometimes be tricky to deal with, but it also has some huge benefits when dealing with infinite data structures.

```haskell
-- | a list of *all* natural numbers
ints :: Num a => [a]
ints = from 1
  where
    from n = n : from (n + 1)
```

This is a recursive definition of a list holding all natural numbers.
As this recursion has no termination criterion it will never terminate!

What will happen when we start to use `ints` in our code?

```haskell
ghci> take 10 ints
[1,2,3,4,5,6,7,8,9,10]
```

In this case we have not been greedy and just asked for a finite subset of `ints`. The Haskell runtime thus does not fully evaluate `ints` but only as many elements as we asked for.

These kinds of generator functions (also known as [CAFs](https://wiki.haskell.org/Constant_applicative_form), for Constant Applicative Forms) can be very useful to define lazy streams of infinite data.

Haskell even provides some more syntactic sugar to ease the definitions of such CAFs. 
So for instance our `ints` function could be written as:

```haskell
ghci> ints = [1..]
ghci> take 10 ints
[1,2,3,4,5,6,7,8,9,10]
```

This feature is called *arithmetic sequences* and also allows defining ranges with a step width:

```haskell
ghci> [2,4..20]
[2,4,6,8,10,12,14,16,18,20]
```

Another useful feature in this area is *list comprehensions*. With list comprehensions it's quite convenient to define infinite lists with specific properties:

```haskell
-- | infinite list of all odd numbers
odds :: [Int]
odds = [n | n <- [1 ..], n `mod` 2 /= 0] -- read as set builder notation: {n | n ∈ ℕ, n%2 ≠ 0}

-- | infinite list of all integer Pythagorean triples with a² + b² = c²
pythagoreanTriples :: [(Int, Int, Int)]
pythagoreanTriples =  [ (a, b, c)
  | c <- [1 ..]
  , b <- [1 .. c - 1]
  , a <- [1 .. b - 1]
  , a ^ 2 + b ^ 2 == c ^ 2
  ]

-- | infinite list of all prime numbers
primes :: [Integer]
primes = 2 : [i | i <- [3,5..],
              and [rem i p > 0 | p <- takeWhile (\p -> p^2 <= i) primes]]

-- and then in GHCi:
ghci> take 10 odds
[1,3,5,7,9,11,13,15,17,19]
ghci> take 10 pythagoreanTriples
[(3,4,5),(6,8,10),(5,12,13),(9,12,15),(8,15,17),(12,16,20),(15,20,25),(7,24,25),(10,24,26),(20,21,29)]
ghci> take 20 primes
[2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71]
```

Another classic example in this area is the Newton-Raphson algorithm that approximates the square root of a number *n* by starting from an initial value *a<sub>0</sub>* and computing the approximation *a<sub>i+1</sub>* as:

*a<sub>i+1</sub> = (a<sub>i</sub> + n/a<sub>i</sub>)/2*

For *n >= 0* and *a<sub>0</sub> > 0* this series converges quickly towards the square root of *n*
(See [Newton's method on 
Wikipedia](https://en.wikipedia.org/wiki/Newton%27s_method) for details).

The Haskell implementation makes full use of lazy evaluation. The first step is to define a function `next` that computes *a<sub>i+1</sub>* based on *n* and *a<sub>i</sub>*:

```haskell
next :: Fractional a => a -> a -> a
next n a_i = (a_i + n/a_i)/2
```

Now we use `next` to define an infinite list of approximations:

```haskell
ghci> root_of_16 = iterate (next 16) 1
ghci> take 10 root_of_16
[1.0,8.5,5.1911764705882355,4.136664722546242,4.002257524798522,4.000000636692939,4.000000000000051,4.0,4.0,4.0]
```

The function `iterate` is a standard library function in Haskell. `iterate f x` returns an infinite list of repeated applications of `f` to `x`:

```haskell
iterate f x == [x, f x, f (f x), ...]
```

It is defined as:

```haskell
iterate :: (a -> a) -> a -> [a]
iterate f x =  x : iterate f (f x)
```

As lazy evaluation is the default in Haskell, it's totally safe to define infinite structures like `root_of_16` as long as we make sure that not all elements of the list are required by subsequent computations.

As `root_of_16` represents a converging series of approximations, we'll have to search this list for the first element that matches our desired precision, specified by a maximum tolerance `eps`.

We define a function `within` which takes the tolerance `eps` and a list of approximations and looks down the list for two successive approximations `a` and `b` whose relative difference is no more than the given tolerance `eps`:

```haskell
within :: (Ord a, Fractional a) => a -> [a] -> a
within eps (a:b:rest) =
  if abs(a/b - 1) <= eps
    then b
    else within eps (b:rest)
```

The actual function `root n eps` can then be defined as:

```haskell
root :: (Ord a, Fractional a) => a -> a -> a
root n eps = within eps (iterate (next n) 1)

-- and then in GHCi:
ghci> root 2 0.000001
1.414213562373095
```

This example has been taken from the classic paper [Why Functional Programming Matters](https://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.pdf). In this paper John Hughes highlights higher-order functions and lazy evaluation as two outstanding contributions of functional programming. The paper features several very instructive examples for both concepts.

[Sourcecode for this section](https://github.com/thma/LtuPatternFactory/blob/master/src/Infinity.hs)

<!--
### Functional Reactive Programming

tbd.
-->

### Reflection

> In computer science, reflection is the ability of a computer program to examine, introspect, and modify its own structure and behavior at runtime.
>
> [Quoted from Wikipedia](https://en.wikipedia.org/wiki/Reflection_(computer_programming))

Reflection is one of those programming language features that were first introduced in Lisp-based environments but became popular in many mainstream programming languages, as it proved to be very useful in writing generic frameworks for persistence, serialization etc.

I'll demonstrate this with a simple persistence library. This library is kept as simple as possible. We just define a new type class `Entity a` with two actions `persist` and `retrieve`, which both have a generic default implementation used for writing an entity to a file or reading it back from a file.
The type class also features a function `getId` which returns a unique identifier for a given entity and must be implemented by all concrete types instantiating `Entity`.

```haskell
module SimplePersistence
    ( Id
    , Entity
    , getId
    , persist
    , retrieve
    ) where

-- | Identifier for an Entity
type Id = String

-- | The Entity type class provides generic persistence to text files
class (Show a, Read a) => Entity a where

    -- | return the unique Id of the entity. 
This function must be implemented by type class instances.
    getId :: a -> Id

    -- | persist an entity of type a and identified by an Id to a text file
    persist :: a -> IO ()
    persist entity = do
        -- compute file path based on entity id
        let fileName = getPath (getId entity)
        -- serialize entity with show and write it to the file
        writeFile fileName (show entity)

    -- | load persistent entity of type a and identified by an Id
    retrieve :: Id -> IO a
    retrieve id = do
        -- compute file path based on entity id
        let fileName = getPath id
        -- read file content into string
        contentString <- readFile fileName
        -- parse entity from string
        return (read contentString)

-- | compute path of data file
getPath :: String -> FilePath
getPath id = ".stack-work/" ++ id ++ ".txt"
```

A typical usage of this library looks as follows:

```haskell
import SimplePersistence (Id, Entity, getId, persist, retrieve)

data User = User {
      userId :: Id
    , name   :: String
    , email  :: String
} deriving (Show, Read)

instance Entity User where
    getId = userId

reflectionDemo = do
    let user = User "1" "Heinz Meier" "hm@meier.com"
    persist user
    user' <- retrieve "1" :: IO User
    print user'
```

So all a user has to do in order to use our library is:

1. let the data type derive the `Show` and `Read` type classes, which provides a poor man's serialization.
2. make the data type an instance of `Entity` by providing an implementation for `getId`.
3. 
use `persist` and `retrieve` to write and read entities to/from file.

As we can see from the function signatures of `persist` and `retrieve`, both functions have no information about the concrete type they are being used on:

```haskell
persist  :: Entity a => a  -> IO ()
retrieve :: Entity a => Id -> IO a
```

As a consequence the generic implementations of both functions in the `Entity` type class also have no direct access to the concrete type of the processed entities. (They simply delegate to other generic functions like `read` and `show`.)

So how can we access the concrete type of a processed entity? Imagine we'd like to store our entities in files that bear the type name as part of the file name, e.g. `User.7411.txt`.

The answer is of course: reflection. Here is what we have to add to our library to extend `persist` according to our new requirements:

```haskell
{-# LANGUAGE ScopedTypeVariables   #-}
import           Data.Typeable

class (Show a, Read a, Typeable a) => Entity a where

    -- | persist an entity of type a and identified by an Id to a file
    persist :: a -> IO ()
    persist entity = do
        -- compute file path based on entity type and id
        let fileName = getPath (typeOf entity) (getId entity)
        -- serialize entity with show and write it to the file
        writeFile fileName (show entity)

-- | compute path of data file, this time with the type name as part of the file name
getPath :: TypeRep -> String -> FilePath
getPath tr id = ".stack-work/" ++ show tr ++ "." ++ id ++ ".txt"
```

We have to add a new constraint `Typeable a` to our definition of `Entity`. This allows us to use reflective code on our entity types. In our case we simply compute a type representation `TypeRep` by calling `typeOf entity`, which we then use in `getPath` to add the type name to the file path.
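
To see what `typeOf` produces at runtime, here is a minimal standalone sketch. A trimmed-down `User` type and the `getPath` function are restated so the snippet compiles on its own (the fields differ from the full example above):

```haskell
import Data.Typeable (TypeRep, typeOf)

type Id = String

-- a trimmed-down version of the User entity
data User = User { userId :: Id, name :: String }
  deriving (Show, Read)

-- | compute path of data file with the type name as part of the file name
getPath :: TypeRep -> String -> FilePath
getPath tr id = ".stack-work/" ++ show tr ++ "." ++ id ++ ".txt"

main :: IO ()
main = do
  let user = User "7411" "Heinz Meier"
  -- showing a TypeRep renders the plain type name
  print (typeOf user)                            -- prints: User
  putStrLn (getPath (typeOf user) (userId user)) -- prints: .stack-work/User.7411.txt
```

Note that since GHC 7.10 every data type automatically gets a `Typeable` instance, so no `deriving` clause is needed for it.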

The definition of `retrieve` is a bit more tricky, as we don't yet have an entity available when computing the file path. So we have to apply a small trick to compute the correct type representation:

```haskell
    retrieve :: Id -> IO a
    retrieve id = do
        -- compute file path based on entity type and id
        let fileName = getPath (typeOf (undefined :: a)) id
        -- read file content into string
        contentString <- readFile fileName
        -- parse entity from string
        return (read contentString)
```

The compiler will be able to deduce the correct type of `a` in the expression `(undefined :: a)`, as the concrete return type of `retrieve` must be specified at the call site, as in the example `user' <- retrieve "1" :: IO User`.

Of course this was only a teaser of what is possible with generic reflective programming. The fearless reader is invited to study the [source code of the aeson library](https://github.com/bos/aeson) for a deep dive.

[Sourcecode for this section](https://github.com/thma/LtuPatternFactory/blob/master/src/Reflection.hs)

## Conclusions

### Design Patterns are not limited to object oriented programming

> Christopher Alexander says, "Each pattern describes a problem which occurs over and
> over again in our environment, and then describes the core of the solution to that
> problem, in such a way that you can use this solution a million times over, without ever
> doing it the same way twice" [AIS+77, page x]. Even though Alexander was talking
> about patterns in buildings and towns, what he says is true about object-oriented design
> patterns. 
Our solutions are expressed in terms of objects and interfaces instead of walls
> and doors, but at the core of both kinds of patterns is a solution to a problem in a
> context.
>
> [Quoted from "Design Patterns: Elements of Reusable Object-Oriented Software"](https://en.wikipedia.org/wiki/Design_Patterns)

The GoF book *Design Patterns: Elements of Reusable Object-Oriented Software* was written to help software developers to think about software design problems in a different way:
from just writing a minimal ad hoc solution for the problem at hand to stepping back and thinking about how to solve the problem in a way that improves long-term qualities like extensibility, flexibility, maintainability, testability and comprehensibility of a software design.

The GoF and other researchers in the pattern area did "pattern mining": they examined the code of experienced software developers and looked for recurring structures and solutions. The patterns they distilled by this process are thus *reusable abstractions* for structuring object-oriented software to achieve the above-mentioned goals.

So while the original design patterns are formulated with object-oriented languages in mind, they still address universal problems in software engineering: decoupling of layers, configuration, dependency management, data composition, data traversal, handling state, variation of behaviour, etc.

So it comes as little surprise that we can map many of those patterns to commonly used structures in functional programming: the domain problems remain the same, yet the concrete solutions differ:

* Some patterns are absorbed by language features:
  * Template method and strategy pattern are no-brainers in any functional language with functions as first-class citizens and higher-order functions.
  * Dependency injection and configuration are solved by partial application of curried functions.
  * Adapter layers are replaced by function composition.
  * Visitor pattern and 
Interpreters are self-evident with algebraic data types.
* Other patterns are covered by library abstractions like the Haskell type classes:
  * Composite is reduced to a Monoid.
  * Singleton, Pipeline and NullObject can be rooted in Functor, Applicative Functor and Monad.
  * Visitor and Iterator are covered by Foldable and Traversable.
* Yet another category of patterns is covered by specific language features like lazy evaluation and parallelism. These features may be specific to certain languages.
  * Laziness allows working with non-terminating computations and data structures of infinite size.
  * Parallelism allows scaling the execution of a program transparently across CPU cores.

### Design patterns reflect mathematical structures

What really struck me in the course of writing this study was that so many of the Typeclassopedia type classes could be related to design patterns.

Most of these type classes stem from abstract algebra and category theory in particular.
Take for instance the `Monoid` type class, which is a 1:1 representation of the [monoid](https://en.wikipedia.org/wiki/Monoid) of abstract algebra.
Identifying the [composite pattern](#composite--semigroup--monoid) as an application of a monoidal data structure was an eye-opener for me:

*Design patterns reflect abstract algebraic structures.*

As another example take the [Map-Reduce](#map-reduce) pattern: we demonstrated that the question whether a problem can be
solved by a map-reduce approach boils down to the algebraic question whether the data structure used to hold the intermediary
results of the `map` operation forms a *monoid* under the `reduce` operation.

Rooting design patterns in abstract algebra brings a higher level of confidence to software design, as we can move from 'hand waving' &ndash; painting UML diagrams, writing prose, building prototypes, etc. 
&ndash; to mathematical reasoning.

Mark Seemann has written an instructive series of articles on the coincidence of design patterns with abstract algebra: [From Design Patterns to Category Theory](http://blog.ploeh.dk/2017/10/04/from-design-patterns-to-category-theory/).

Jeremy Gibbons has also written several excellent papers on this subject:

> Design patterns are reusable abstractions in object-oriented software.
> However, using current mainstream programming languages, these elements can only be expressed extra-linguistically: as prose, pictures, and prototypes.
> We believe that this is not inherent in the patterns themselves, but evidence of a lack of expressivity in the languages of today.
> We expect that, in the languages of the future, the code parts of design patterns will be expressible as reusable library components.
> Indeed, we claim that the languages of tomorrow will suffice; the future is not far away. All that is needed, in addition to commonly-available features,
> are higher-order and datatype-generic constructs;
> these features are already or nearly available now.
>
> Quoted from [Design Patterns as Higher-Order Datatype-Generic Programs](http://www.cs.ox.ac.uk/jeremy.gibbons/publications/hodgp.pdf)

He also maintains a blog dedicated to [patterns in functional programming](https://patternsinfp.wordpress.com/welcome/).

I'd like to conclude this section with a quote from Martin Menestret's FP blog:

> [...] 
there is this curious thing called [Curry–Howard correspondence](https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence) which is a direct analogy between mathematical concepts and computational calculus [...].
>
> This correspondence means that a lot of useful stuff discovered and proven for decades in Math can then be transposed to programming, opening a way for a lot of extremely robust constructs for free.
>
> In OOP, Design patterns are used a lot and could be defined as idiomatic ways to solve a given problems, in specific contexts but their existences won’t save you from having to apply and write them again and again each time you encounter the problems they solve.
>
> Functional programming constructs, some directly coming from category theory (mathematics), solve directly what you would have tried to solve with design patterns.
>
> Quoted from [Geekocephale](http://geekocephale.com/blog/2018/10/08/fp)

## Some interesting links

[IBM Developerworks](https://www.ibm.com/developerworks/library/j-ft10/index.html)

[Design patterns in Haskell](http://blog.ezyang.com/2010/05/design-patterns-in-haskel/)

[GOF patterns in Scala](https://staticallytyped.wordpress.com/2013/03/09/gang-of-four-patterns-with-type-classes-and-implicits-in-scala/)

[Patterns in dynamic functional languages](http://norvig.com/design-patterns/design-patterns.pdf)

[Scala Typeclassopedia](https://github.com/tel/scala-typeclassopedia)

[FP resources](https://github.com/mmenestret/fp-resources/blob/master/README.md)