{"id":13516433,"url":"https://github.com/devonestes/fast-elixir","last_synced_at":"2025-05-16T04:06:55.977Z","repository":{"id":52200846,"uuid":"86914400","full_name":"devonestes/fast-elixir","owner":"devonestes","description":":dash: Writing Fast Elixir :heart_eyes: -- Collect Common Elixir idioms.","archived":false,"fork":false,"pushed_at":"2023-11-08T17:04:23.000Z","size":1123,"stargazers_count":1283,"open_issues_count":8,"forks_count":41,"subscribers_count":48,"default_branch":"master","last_synced_at":"2025-04-08T14:11:12.841Z","etag":null,"topics":["elixir","elixir-lang","fast","performance"],"latest_commit_sha":null,"homepage":"https://github.com/devonestes/fast-elixir","language":"Elixir","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/devonestes.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2017-04-01T13:13:46.000Z","updated_at":"2025-03-30T00:43:34.000Z","dependencies_parsed_at":"2024-10-29T17:22:45.452Z","dependency_job_id":null,"html_url":"https://github.com/devonestes/fast-elixir","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/devonestes%2Ffast-elixir","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/devonestes%2Ffast-elixir/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/devonestes%2Ffast-elixir/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/devonestes%2Ffast-elixir/manifests","owner_url":"h
ttps://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/devonestes","download_url":"https://codeload.github.com/devonestes/fast-elixir/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254464896,"owners_count":22075570,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["elixir","elixir-lang","fast","performance"],"created_at":"2024-08-01T05:01:22.237Z","updated_at":"2025-05-16T04:06:50.961Z","avatar_url":"https://github.com/devonestes.png","language":"Elixir","readme":"# Fast Elixir\n\nThere is a wonderful project in Ruby called [fast-ruby](https://github.com/JuanitoFatas/fast-ruby), from which I got the inspiration for this repo. The idea is to collect various idioms for writing performant code when there is more than one _essentially_ semantically identical way of computing something. There may be slight differences, so please be sure that any change you make doesn't alter the correctness of your program.\n\nEach idiom has a corresponding code example that resides in [code](code).\n\n**Let's write faster code, together! \u003c3**\n\n## Measurement Tool\n\nWe use [benchee](https://github.com/PragTob/benchee).\n\n## Contributing\n\nHelp us collect benchmarks! Please [read the contributing guide](CONTRIBUTING.md).\n\n## Idioms\n\n- [Map Lookup vs. Pattern Matching Lookup](#map-lookup-vs-pattern-matching-lookup-code)\n- [IO Lists vs. String Concatenation](#io-lists-vs-string-concatenation-code)\n- [Combining lists with `|` vs. 
`++`](#combining-lists-with--vs--code)\n- [Putting into maps with `Map.put` and `put_in`](#putting-into-maps-with-mapput-and-put_in-code)\n- [Splitting Strings](#splitting-large-strings-code)\n- [`sort` vs. `sort_by`](#sort-vs-sort_by-code)\n- [Retrieving state from ets tables vs. Gen Servers](#retrieving-state-from-ets-tables-vs-gen-servers-code)\n- [Writing state in ets tables, persistent_term and Gen Servers](#writing-state-from-ets-tables-persistent-term-gen-servers-code)\n- [Comparing strings vs. atoms](#comparing-strings-vs-atoms-code)\n- [spawn vs. spawn_link](#spawn-vs-spawn_link-code)\n- [Replacements for Enum.filter_map/3](#replacements-for-enumfilter_map3-code)\n- [Filtering maps](#filtering-maps-code)\n\n#### Map Lookup vs. Pattern Matching Lookup [code](code/general/map_lookup_vs_pattern_matching.exs)\n\nIf you need to look up static values in a key-value based structure, you might at\nfirst consider assigning a map as a module attribute and looking that up.\nHowever, it's significantly faster to use pattern matching to define functions\nthat behave like a key-value based data structure.\n\n```\n$ mix run code/general/map_lookup_vs_pattern_matching.exs\nOperating System: macOS\nCPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz\nNumber of Available Cores: 16\nAvailable memory: 16 GB\nElixir 1.11.0-rc.0\nErlang 23.0.2\n\nBenchmark suite executing with the following configuration:\nwarmup: 2 s\ntime: 10 s\nmemory time: 0 ns\nparallel: 1\ninputs: none specified\nEstimated total run time: 24 s\n\nBenchmarking Map Lookup...\nBenchmarking Pattern Matching...\n\nName                       ips        average  deviation         median         99th %\nPattern Matching      909.12 K        1.10 μs  ±3606.70%           1 μs           2 μs\nMap Lookup            792.96 K        1.26 μs   ±532.10%           1 μs           2 μs\n\nComparison:\nPattern Matching      909.12 K\nMap Lookup            792.96 K - 1.15x slower +0.161 μs\n```\n\n#### IO Lists vs. 
String Concatenation [code](code/general/io_lists_vs_concatenation.exs)\n\nChances are, eventually you'll need to concatenate strings for some sort of\noutput. This could be in a web response, a CLI output, or writing to a file. The\nfaster way to do this is to use IO Lists rather than string concatenation or\ninterpolation.\n\n```\n$ mix run code/general/io_lists_vs_concatenation.exs\nOperating System: macOS\nCPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz\nNumber of Available Cores: 16\nAvailable memory: 16 GB\nElixir 1.11.0-rc.0\nErlang 23.0.2\n\nBenchmark suite executing with the following configuration:\nwarmup: 2 s\ntime: 10 s\nmemory time: 0 ns\nparallel: 1\ninputs: 100 3-character strings, 100 300-character strings, 5 3-character_strings, 5 300-character_strings, 50 3-character strings, 50 300-character strings\nEstimated total run time: 2.40 min\n\nBenchmarking IO List with input 100 3-character strings...\nBenchmarking IO List with input 100 300-character strings...\nBenchmarking IO List with input 5 3-character_strings...\nBenchmarking IO List with input 5 300-character_strings...\nBenchmarking IO List with input 50 3-character strings...\nBenchmarking IO List with input 50 300-character strings...\nBenchmarking Interpolation with input 100 3-character strings...\nBenchmarking Interpolation with input 100 300-character strings...\nBenchmarking Interpolation with input 5 3-character_strings...\nBenchmarking Interpolation with input 5 300-character_strings...\nBenchmarking Interpolation with input 50 3-character strings...\nBenchmarking Interpolation with input 50 300-character strings...\n\n##### With input 100 3-character strings #####\nName                    ips        average  deviation         median         99th %\nIO List              1.41 M        0.71 μs  ±4475.40%           1 μs           2 μs\nInterpolation        0.31 M        3.27 μs    ±76.91%           3 μs          11 μs\n\nComparison:\nIO List              1.41 M\nInterpolation  
      0.31 M - 4.61x slower +2.56 μs\n\n##### With input 100 300-character strings #####\nName                    ips        average  deviation         median         99th %\nIO List              1.40 M        0.71 μs  ±4411.36%           1 μs           1 μs\nInterpolation        0.20 M        4.90 μs   ±248.22%           4 μs          22 μs\n\nComparison:\nIO List              1.40 M\nInterpolation        0.20 M - 6.86x slower +4.18 μs\n\n##### With input 5 3-character_strings #####\nName                    ips        average  deviation         median         99th %\nIO List              5.15 M      194.15 ns  ±2555.27%           0 ns        1000 ns\nInterpolation        1.84 M      544.12 ns  ±4764.73%           0 ns        2000 ns\n\nComparison:\nIO List              5.15 M\nInterpolation        1.84 M - 2.80x slower +349.96 ns\n\n##### With input 5 300-character_strings #####\nName                    ips        average  deviation         median         99th %\nIO List              5.03 M      198.76 ns  ±4663.45%           0 ns        1000 ns\nInterpolation        1.92 M      521.81 ns   ±193.09%           0 ns        1000 ns\n\nComparison:\nIO List              5.03 M\nInterpolation        1.92 M - 2.63x slower +323.05 ns\n\n##### With input 50 3-character strings #####\nName                    ips        average  deviation         median         99th %\nIO List              1.94 M        0.52 μs  ±6397.19%           0 μs           2 μs\nInterpolation        0.57 M        1.75 μs   ±130.98%           2 μs           2 μs\n\nComparison:\nIO List              1.94 M\nInterpolation        0.57 M - 3.40x slower +1.24 μs\n\n##### With input 50 300-character strings #####\nName                    ips        average  deviation         median         99th %\nIO List              2.06 M        0.49 μs  ±8825.39%           0 μs           2 μs\nInterpolation        0.37 M        2.71 μs   ±657.41%           2 μs          14 μs\n\nComparison:\nIO List              2.06 
M\nInterpolation        0.37 M - 5.58x slower +2.22 μs\n```\n\n#### Combining lists with `|` vs. `++` [code](code/general/concat_vs_cons.exs)\n\nAdding two lists together might seem like a simple problem to solve, but in\nElixir there are a couple of ways to do it. We can use `++` to\nconcatenate two lists easily: `[1, 2] ++ [3, 4] #=\u003e [1, 2, 3, 4]`, but the\nproblem with that approach is that once you start dealing with larger lists it\nbecomes **VERY** slow! Because of this, when combining two lists, you should try\nto use the cons operator (`|`) whenever possible. This will require you to\nremember to flatten the resulting nested list, but it's a huge performance\noptimization on larger lists.\n\n```\n$ mix run ./code/general/concat_vs_cons.exs\nOperating System: macOS\nCPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz\nNumber of Available Cores: 16\nAvailable memory: 16 GB\nElixir 1.11.0-rc.0\nErlang 23.0.2\n\nBenchmark suite executing with the following configuration:\nwarmup: 2 s\ntime: 10 s\nmemory time: 0 ns\nparallel: 1\ninputs: 1,000 large items, 1,000 small items, 10 large items, 10 small items, 100 large items, 100 small items\nEstimated total run time: 3.60 min\n\nBenchmarking Concatenation with input 1,000 large items...\nBenchmarking Concatenation with input 1,000 small items...\nBenchmarking Concatenation with input 10 large items...\nBenchmarking Concatenation with input 10 small items...\nBenchmarking Concatenation with input 100 large items...\nBenchmarking Concatenation with input 100 small items...\nBenchmarking Cons + Flatten with input 1,000 large items...\nBenchmarking Cons + Flatten with input 1,000 small items...\nBenchmarking Cons + Flatten with input 10 large items...\nBenchmarking Cons + Flatten with input 10 small items...\nBenchmarking Cons + Flatten with input 100 large items...\nBenchmarking Cons + Flatten with input 100 small items...\nBenchmarking Cons + Reverse + Flatten with input 1,000 large 
items...\nBenchmarking Cons + Reverse + Flatten with input 1,000 small items...\nBenchmarking Cons + Reverse + Flatten with input 10 large items...\nBenchmarking Cons + Reverse + Flatten with input 10 small items...\nBenchmarking Cons + Reverse + Flatten with input 100 large items...\nBenchmarking Cons + Reverse + Flatten with input 100 small items...\n\n##### With input 1,000 large items #####\nName                               ips        average  deviation         median         99th %\nCons + Reverse + Flatten         38.45       26.01 ms     ±6.11%       25.91 ms       30.56 ms\nCons + Flatten                   38.38       26.06 ms     ±6.39%       26.06 ms       29.32 ms\nConcatenation                    0.179     5573.57 ms     ±0.26%     5573.57 ms     5583.94 ms\n\nComparison:\nCons + Reverse + Flatten         38.45\nCons + Flatten                   38.38 - 1.00x slower +0.0501 ms\nConcatenation                    0.179 - 214.32x slower +5547.56 ms\n\n##### With input 1,000 small items #####\nName                               ips        average  deviation         median         99th %\nCons + Reverse + Flatten        3.78 K      264.27 μs    ±19.49%         243 μs         496 μs\nCons + Flatten                  3.76 K      266.16 μs    ±18.53%         246 μs      491.83 μs\nConcatenation                 0.0626 K    15984.51 μs     ±8.58%       15927 μs    20412.82 μs\n\nComparison:\nCons + Reverse + Flatten        3.78 K\nCons + Flatten                  3.76 K - 1.01x slower +1.90 μs\nConcatenation                 0.0626 K - 60.49x slower +15720.24 μs\n\n##### With input 10 large items #####\nName                               ips        average  deviation         median         99th %\nConcatenation                   8.33 K      120.04 μs    ±31.79%         111 μs         268 μs\nCons + Flatten                  5.12 K      195.17 μs    ±20.09%         181 μs         378 μs\nCons + Reverse + Flatten        5.11 K      195.88 μs    ±20.32%         181 μs   
      378 μs\n\nComparison:\nConcatenation                   8.33 K\nCons + Flatten                  5.12 K - 1.63x slower +75.13 μs\nCons + Reverse + Flatten        5.11 K - 1.63x slower +75.85 μs\n\n##### With input 10 small items #####\nName                               ips        average  deviation         median         99th %\nConcatenation                 575.41 K        1.74 μs  ±1951.31%           1 μs           4 μs\nCons + Flatten                331.62 K        3.02 μs   ±972.07%           3 μs           7 μs\nCons + Reverse + Flatten      330.05 K        3.03 μs   ±853.79%           3 μs           8 μs\n\nComparison:\nConcatenation                 575.41 K\nCons + Flatten                331.62 K - 1.74x slower +1.28 μs\nCons + Reverse + Flatten      330.05 K - 1.74x slower +1.29 μs\n\n##### With input 100 large items #####\nName                               ips        average  deviation         median         99th %\nCons + Reverse + Flatten         38.56       25.93 ms     ±6.25%       25.85 ms       32.02 ms\nCons + Flatten                   38.35       26.08 ms     ±6.30%       26.04 ms       30.68 ms\nConcatenation                    0.180     5561.40 ms     ±0.41%     5561.40 ms     5577.71 ms\n\nComparison:\nCons + Reverse + Flatten         38.56\nCons + Flatten                   38.35 - 1.01x slower +0.145 ms\nConcatenation                    0.180 - 214.47x slower +5535.47 ms\n\n##### With input 100 small items #####\nName                               ips        average  deviation         median         99th %\nCons + Flatten                 38.68 K       25.85 μs    ±32.87%          24 μs          69 μs\nCons + Reverse + Flatten       38.23 K       26.16 μs    ±39.65%          24 μs          70 μs\nConcatenation                   4.33 K      230.99 μs    ±50.47%         213 μs      590.06 μs\n\nComparison:\nCons + Flatten                 38.68 K\nCons + Reverse + Flatten       38.23 K - 1.01x slower +0.31 μs\nConcatenation                   
4.33 K - 8.94x slower +205.13 μs\n```\n\n#### Putting into maps with `Map.put` and `put_in` [code](code/general/map_put_vs_put_in.exs)\n\nDo not put data into the root of a map with `put_in`; it is roughly 2x slower than `Map.put`. Note also\nthat `put_in/2` is more efficient than `put_in/3`.\n\n```\n$ mix run ./code/general/map_put_vs_put_in.exs\nOperating System: macOS\nCPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz\nNumber of Available Cores: 16\nAvailable memory: 16 GB\nElixir 1.11.0-rc.0\nErlang 23.0.2\n\nBenchmark suite executing with the following configuration:\nwarmup: 2 s\ntime: 10 s\nmemory time: 0 ns\nparallel: 1\ninputs: Large (30,000 items), Medium (3,000 items), Small (30 items)\nEstimated total run time: 1.80 min\n\nBenchmarking Map.put/3 with input Large (30,000 items)...\nBenchmarking Map.put/3 with input Medium (3,000 items)...\nBenchmarking Map.put/3 with input Small (30 items)...\nBenchmarking put_in/2 with input Large (30,000 items)...\nBenchmarking put_in/2 with input Medium (3,000 items)...\nBenchmarking put_in/2 with input Small (30 items)...\nBenchmarking put_in/3 with input Large (30,000 items)...\nBenchmarking put_in/3 with input Medium (3,000 items)...\nBenchmarking put_in/3 with input Small (30 items)...\n\n##### With input Large (30,000 items) #####\nName                ips        average  deviation         median         99th %\nMap.put/3        247.43        4.04 ms    ±10.45%        3.97 ms        5.41 ms\nput_in/2         242.10        4.13 ms    ±12.48%        4.01 ms        5.74 ms\nput_in/3         221.53        4.51 ms    ±11.11%        4.41 ms        6.13 ms\n\nComparison:\nMap.put/3        247.43\nput_in/2         242.10 - 1.02x slower +0.0888 ms\nput_in/3         221.53 - 1.12x slower +0.47 ms\n\n##### With input Medium (3,000 items) #####\nName                ips        average  deviation         median         99th %\nMap.put/3        5.68 K      175.98 μs    ±34.49%      150.98 μs      400.98 μs\nput_in/2         3.62 K      276.42 
μs    ±23.76%      252.98 μs      546.98 μs\nput_in/3         3.09 K      323.22 μs    ±22.44%      296.98 μs      630.98 μs\n\nComparison:\nMap.put/3        5.68 K\nput_in/2         3.62 K - 1.57x slower +100.44 μs\nput_in/3         3.09 K - 1.84x slower +147.23 μs\n\n##### With input Small (30 items) #####\nName                ips        average  deviation         median         99th %\nMap.put/3     1040.86 K        0.96 μs  ±3795.74%        0.98 μs        1.98 μs\nput_in/2       400.53 K        2.50 μs  ±1295.21%        1.98 μs        2.98 μs\nput_in/3       338.63 K        2.95 μs  ±1124.35%        1.98 μs        3.98 μs\n\nComparison:\nMap.put/3     1040.86 K\nput_in/2       400.53 K - 2.60x slower +1.54 μs\nput_in/3       338.63 K - 3.07x slower +1.99 μs\n```\n\n#### Splitting Large Strings [code](code/general/string_split_large_strings.exs)\n\nElixir's `String.split/2` is by far the fastest option for splitting strings, and\nusing a string literal as the splitter instead of a regex yields significant\nperformance benefits.\n\n```\n$ mix run code/general/string_split_large_strings.exs\nOperating System: macOS\nCPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz\nNumber of Available Cores: 16\nAvailable memory: 16 GB\nElixir 1.11.0-rc.0\nErlang 23.0.2\n\nBenchmark suite executing with the following configuration:\nwarmup: 2 s\ntime: 10 s\nmemory time: 0 ns\nparallel: 1\ninputs: Large string (1 Million Numbers), Medium string (10 Thousand Numbers), Small string (1 Hundred Numbers)\nEstimated total run time: 2.40 min\n\nBenchmarking split with input Large string (1 Million Numbers)...\nBenchmarking split with input Medium string (10 Thousand Numbers)...\nBenchmarking split with input Small string (1 Hundred Numbers)...\nBenchmarking split erlang with input Large string (1 Million Numbers)...\nBenchmarking split erlang with input Medium string (10 Thousand Numbers)...\nBenchmarking split erlang with input Small string (1 Hundred 
Numbers)...\nBenchmarking split regex with input Large string (1 Million Numbers)...\nBenchmarking split regex with input Medium string (10 Thousand Numbers)...\nBenchmarking split regex with input Small string (1 Hundred Numbers)...\nBenchmarking splitter |\u003e to_list with input Large string (1 Million Numbers)...\nBenchmarking splitter |\u003e to_list with input Medium string (10 Thousand Numbers)...\nBenchmarking splitter |\u003e to_list with input Small string (1 Hundred Numbers)...\n\n##### With input Large string (1 Million Numbers) #####\nName                          ips        average  deviation         median         99th %\nsplit                       13.96       71.63 ms    ±29.57%       59.81 ms      121.28 ms\nsplitter |\u003e to_list          3.24      308.26 ms    ±14.54%      290.97 ms      442.09 ms\nsplit erlang                 1.09      919.28 ms     ±4.86%      939.75 ms      998.24 ms\nsplit regex                  0.78     1286.40 ms     ±9.80%     1253.48 ms     1489.63 ms\n\nComparison:\nsplit                       13.96\nsplitter |\u003e to_list          3.24 - 4.30x slower +236.62 ms\nsplit erlang                 1.09 - 12.83x slower +847.65 ms\nsplit regex                  0.78 - 17.96x slower +1214.77 ms\n\n##### With input Medium string (10 Thousand Numbers) #####\nName                          ips        average  deviation         median         99th %\nsplit                     3813.15        0.26 ms    ±45.13%        0.21 ms        0.57 ms\nsplitter |\u003e to_list        397.04        2.52 ms    ±14.65%        2.48 ms        3.73 ms\nsplit erlang               137.55        7.27 ms     ±8.52%        7.17 ms        9.35 ms\nsplit regex                 93.73       10.67 ms     ±7.46%       10.56 ms       13.07 ms\n\nComparison:\nsplit                     3813.15\nsplitter |\u003e to_list        397.04 - 9.60x slower +2.26 ms\nsplit erlang               137.55 - 27.72x slower +7.01 ms\nsplit regex                 93.73 - 40.68x 
slower +10.41 ms\n\n##### With input Small string (1 Hundred Numbers) #####\nName                          ips        average  deviation         median         99th %\nsplit                    365.94 K        2.73 μs   ±634.81%           2 μs          14 μs\nsplitter |\u003e to_list       45.63 K       21.92 μs    ±45.25%          20 μs          63 μs\nsplit erlang              14.19 K       70.48 μs    ±48.03%          53 μs      186.91 μs\nsplit regex                9.87 K      101.28 μs    ±24.68%          93 μs         222 μs\n\nComparison:\nsplit                    365.94 K\nsplitter |\u003e to_list       45.63 K - 8.02x slower +19.18 μs\nsplit erlang              14.19 K - 25.79x slower +67.74 μs\nsplit regex                9.87 K - 37.06x slower +98.55 μs\n```\n\n#### `sort` vs. `sort_by` [code](code/general/sort_vs_sort_by.exs)\n\nSorting a list of maps or keyword lists can be done in various ways. However, since the sort\nbehavior is fairly implicit if you're sorting without a defined sort function, and since the\nspeed difference is quite small, it's probably best to use `sort/2` or `sort_by/2` in all\ncases when sorting lists and maps (including keyword lists and structs).\n\n```\n$ mix run code/general/sort_vs_sort_by.exs\nOperating System: macOS\nCPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz\nNumber of Available Cores: 16\nAvailable memory: 16 GB\nElixir 1.11.0-rc.0\nErlang 23.0.2\n\nBenchmark suite executing with the following configuration:\nwarmup: 2 s\ntime: 10 s\nmemory time: 0 ns\nparallel: 1\ninputs: none specified\nEstimated total run time: 36 s\n\nBenchmarking sort/1...\nBenchmarking sort/2...\nBenchmarking sort_by/2...\n\nName                ips        average  deviation         median         99th %\nsort/1           7.82 K      127.86 μs    ±23.45%         118 μs         269 μs\nsort/2           7.01 K      142.57 μs    ±22.48%         132 μs         294 μs\nsort_by/2        6.68 K      149.62 μs    ±22.70%         138 μs        
 308 μs\n\nComparison:\nsort/1           7.82 K\nsort/2           7.01 K - 1.12x slower +14.71 μs\nsort_by/2        6.68 K - 1.17x slower +21.76 μs\n```\n\n#### Retrieving state from ets tables vs. Gen Servers [code](code/general/ets_vs_gen_server.exs)\n\nThere are many differences between Gen Servers and ets tables, but many people\nhave often praised ets tables for being extremely fast. For the simple case of\nretrieving information from a key-value store, the ets table is indeed much\nfaster for reads. For more complicated use cases, and for comparisons of writes\ninstead of reads, further benchmarks are needed, but so far ets lives up to its\nreputation for speed.\n\n```\n$ mix run code/general/ets_vs_gen_server.exs\nOperating System: macOS\nCPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz\nNumber of Available Cores: 16\nAvailable memory: 16 GB\nElixir 1.11.0-rc.0\nErlang 23.0.2\n\nBenchmark suite executing with the following configuration:\nwarmup: 2 s\ntime: 10 s\nmemory time: 0 ns\nparallel: 1\ninputs: none specified\nEstimated total run time: 24 s\n\nBenchmarking ets table...\nBenchmarking gen server...\n\nName                 ips        average  deviation         median         99th %\nets table         5.11 M       0.196 μs  ±8972.86%           0 μs        0.98 μs\ngen server        0.55 M        1.82 μs   ±997.04%        1.98 μs        2.98 μs\n\nComparison:\nets table         5.11 M\ngen server        0.55 M - 9.31x slower +1.63 μs\n```\n\n#### Writing state in ets tables, persistent_term and Gen Servers [code](code/general/ets_vs_gen_server_write.exs)\n\nNot only is it faster to read from `ets` or `persistent_term` versus a `GenServer`, but it's also\nmuch faster to write state in these two options. If you have need for state that needs to be\nstored but without a lot of behavior around that state, `ets` or `persistent_term` is always going\nto be the better choice over a `GenServer`. 
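
As a rough sketch of what the three write paths look like in code (the table name `:my_state`, the key, and the tiny `State` server below are illustrative, not the benchmark's own code):

```elixir
# ets: create a named table once, then write with insert/2.
:ets.new(:my_state, [:named_table, :public, :set])
:ets.insert(:my_state, {:counter, 42})
[{:counter, 42}] = :ets.lookup(:my_state, :counter)

# persistent_term: global across the whole VM; a write can trigger a scan
# of all process heaps, so it suits rarely-written data.
:persistent_term.put(:counter, 42)
42 = :persistent_term.get(:counter)

# GenServer: every write is a message round-trip through a single process.
defmodule State do
  use GenServer
  def init(state), do: {:ok, state}
  def handle_call({:put, k, v}, _from, state), do: {:reply, :ok, Map.put(state, k, v)}
  def handle_call({:get, k}, _from, state), do: {:reply, Map.get(state, k), state}
end

{:ok, pid} = GenServer.start_link(State, %{})
:ok = GenServer.call(pid, {:put, :counter, 42})
42 = GenServer.call(pid, {:get, :counter})
```
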
`persistent_term` is the fastest to read from by far,\nbut is global across the VM and also slower to write to, so in most cases `ets` will be the best\nchoice for storing state and should be the default option to start with.\n\n```\n$ mix run code/general/ets_vs_gen_server_write.exs\nOperating System: macOS\nCPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz\nNumber of Available Cores: 16\nAvailable memory: 16 GB\nElixir 1.11.0-rc.0\nErlang 23.0.2\n\nBenchmark suite executing with the following configuration:\nwarmup: 2 s\ntime: 10 s\nmemory time: 0 ns\nparallel: 1\ninputs: none specified\nEstimated total run time: 36 s\n\nBenchmarking ets table...\nBenchmarking gen server...\nBenchmarking persistent term...\n\nName                      ips        average  deviation         median         99th %\nets table              5.22 M      191.61 ns   ±798.69%           0 ns        1000 ns\npersistent term        2.43 M      410.87 ns ±11324.51%           0 ns        1000 ns\ngen server             0.58 M     1715.61 ns   ±367.31%        2000 ns        2000 ns\n\nComparison:\nets table              5.22 M\npersistent term        2.43 M - 2.14x slower +219.26 ns\ngen server             0.58 M - 8.95x slower +1524.00 ns\n```\n\n#### Comparing strings vs. atoms [code](code/general/comparing_strings_vs_atoms.exs)\n\nBecause atoms are stored in a special table in the BEAM, comparing atoms is\nrather fast compared to comparing strings, where you need to compare the\nunderlying binaries byte by byte. When you have a choice of what type to\nuse, atoms are the faster choice. 
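
In code, the three cases look roughly like this (a hedged illustration; the `:status`/`"status"` values are made up):

```elixir
# Atom comparison: effectively an identity check against the atom table.
true = (:status == :status)

# String comparison: Elixir strings are binaries, so this walks the
# underlying bytes (still cheap for short strings).
true = ("status" == "status")

# Anti-pattern: converting first. The conversion costs far more than the
# comparison it replaces, and String.to_atom/1 on untrusted input can
# exhaust the atom table, which is never garbage collected.
true = (String.to_atom("status") == :status)
```
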
However, what you probably should not do is\nconvert strings to atoms solely for the perceived speed benefit, since the\nconversion ends up being much slower than simply comparing the strings, even if\nyou compare them dozens of times.\n\n```\n$ mix run code/general/comparing_strings_vs_atoms.exs\nOperating System: macOS\nCPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz\nNumber of Available Cores: 16\nAvailable memory: 16 GB\nElixir 1.11.0-rc.0\nErlang 23.0.2\n\nBenchmark suite executing with the following configuration:\nwarmup: 2 s\ntime: 10 s\nmemory time: 0 ns\nparallel: 1\ninputs: Large (1-100), Medium (1-50), Small (1-5)\nEstimated total run time: 1.80 min\n\nBenchmarking Comparing atoms with input Large (1-100)...\nBenchmarking Comparing atoms with input Medium (1-50)...\nBenchmarking Comparing atoms with input Small (1-5)...\nBenchmarking Comparing strings with input Large (1-100)...\nBenchmarking Comparing strings with input Medium (1-50)...\nBenchmarking Comparing strings with input Small (1-5)...\nBenchmarking Converting to atoms and then comparing with input Large (1-100)...\nBenchmarking Converting to atoms and then comparing with input Medium (1-50)...\nBenchmarking Converting to atoms and then comparing with input Small (1-5)...\n\n##### With input Large (1-100) #####\nName                                             ips        average  deviation         median         99th %\nComparing atoms                               3.74 M      267.46 ns ±12198.11%           0 ns        1000 ns\nComparing strings                             3.71 M      269.25 ns ±11719.28%           0 ns        1000 ns\nConverting to atoms and then comparing        0.94 M     1065.67 ns   ±290.55%        1000 ns        2000 ns\n\nComparison:\nComparing atoms                               3.74 M\nComparing strings                             3.71 M - 1.01x slower +1.79 ns\nConverting to atoms and then comparing        0.94 M - 3.98x slower +798.21 ns\n\n##### With input Medium (1-50) #####\nName       
                                       ips        average  deviation         median         99th %\nComparing atoms                               3.70 M      270.08 ns ±11419.92%           0 ns        1000 ns\nComparing strings                             3.68 M      271.52 ns ±11603.67%           0 ns        1000 ns\nConverting to atoms and then comparing        1.34 M      743.76 ns  ±2924.56%        1000 ns        1000 ns\n\nComparison:\nComparing atoms                               3.70 M\nComparing strings                             3.68 M - 1.01x slower +1.44 ns\nConverting to atoms and then comparing        1.34 M - 2.75x slower +473.68 ns\n\n##### With input Small (1-5) #####\nName                                             ips        average  deviation         median         99th %\nComparing atoms                               3.81 M      262.27 ns ±11438.39%           0 ns        1000 ns\nComparing strings                             3.69 M      270.86 ns ±11945.32%           0 ns        1000 ns\nConverting to atoms and then comparing        2.45 M      407.62 ns  ±8371.44%           0 ns        1000 ns\n\nComparison:\nComparing atoms                               3.81 M\nComparing strings                             3.69 M - 1.03x slower +8.59 ns\nConverting to atoms and then comparing        2.45 M - 1.55x slower +145.34 ns\n```\n\n#### spawn vs. spawn_link [code](code/general/spawn_vs_spawn_link.exs)\n\nThere are two ways to spawn a process on the BEAM, `spawn` and `spawn_link`.\nBecause `spawn_link` links the child process to the process which spawned it, it\ntakes slightly longer. 
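
A minimal sketch of the difference (a hedged example, not the benchmark code itself):

```elixir
# spawn/1: a detached process; if it crashes, the parent is not notified.
pid = spawn(fn -> :done end)

# spawn_link/1: additionally creates a bidirectional link, so a crash in
# either process propagates to the other. Setting up that link is the
# small extra cost being measured.
parent = self()
linked = spawn_link(fn -> send(parent, :finished) end)

receive do
  :finished -> :ok
after
  1_000 -> raise "timed out waiting for linked process"
end

true = is_pid(pid) and is_pid(linked)
```
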
The way in which processes are spawned is unlikely to be\na bottleneck in most applications, though, and the resiliency benefits of OTP\nsupervision trees vastly outweigh the slightly slower run time of `spawn_link`,\nso that should still be favored in nearly every case in which processes need to\nbe spawned.\n\n```\n$ mix run code/general/spawn_vs_spawn_link.exs\nOperating System: macOS\nCPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz\nNumber of Available Cores: 16\nAvailable memory: 16 GB\nElixir 1.11.0-rc.0\nErlang 23.0.2\n\nBenchmark suite executing with the following configuration:\nwarmup: 2 s\ntime: 10 s\nmemory time: 2 s\nparallel: 1\ninputs: none specified\nEstimated total run time: 28 s\n\nBenchmarking spawn/1...\nBenchmarking spawn_link/1...\n\nName                   ips        average  deviation         median         99th %\nspawn/1           636.00 K        1.57 μs  ±1512.39%           1 μs           2 μs\nspawn_link/1      576.18 K        1.74 μs  ±1402.58%           2 μs           2 μs\n\nComparison:\nspawn/1           636.00 K\nspawn_link/1      576.18 K - 1.10x slower +0.163 μs\n\nMemory usage statistics:\n\nName            Memory usage\nspawn/1                 72 B\nspawn_link/1            72 B - 1.00x memory usage +0 B\n\n**All measurements for memory usage were the same**\n```\n\n#### Replacements for Enum.filter_map/3 [code](code/general/filter_map.exs)\n\nElixir used to have an `Enum.filter_map/3` function that would filter a list and\nalso apply a function to each element in the list that was not removed, but it\nwas deprecated in version 1.5. Luckily there are still four other ways to do\nthat same thing! They're all mostly the same, but if you're looking for the\noptions with the best performance your best bet is to use either a `for`\ncomprehension or `Enum.reduce/3` and then `Enum.reverse/1`. 
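
The four replacements look roughly like this (filtering even numbers and doubling them, as an illustrative stand-in for the benchmark's actual functions):

```elixir
list = [1, 2, 3, 4, 5]
evens_doubled = [4, 8]

# for comprehension: filter clause and map body in a single pass
^evens_doubled = for x <- list, rem(x, 2) == 0, do: x * 2

# reduce |> reverse: accumulate matches (reversed), then restore order
^evens_doubled =
  list
  |> Enum.reduce([], fn x, acc -> if rem(x, 2) == 0, do: [x * 2 | acc], else: acc end)
  |> Enum.reverse()

# filter |> map: two passes and an intermediate list
^evens_doubled = list |> Enum.filter(&(rem(&1, 2) == 0)) |> Enum.map(&(&1 * 2))

# flat_map: returning [] drops an element; slowest of the four
^evens_doubled = Enum.flat_map(list, fn x -> if rem(x, 2) == 0, do: [x * 2], else: [] end)
```
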
Using\n`Enum.filter/2` and then `Enum.map/2` is also a fine choice, but it has higher\nmemory usage than the other two options.\n\nThe one option you should avoid is using `Enum.flat_map/2` as it is both slower\nand has higher memory usage.\n\n```\n$ mix run code/general/filter_map.exs\nOperating System: macOS\nCPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz\nNumber of Available Cores: 16\nAvailable memory: 16 GB\nElixir 1.11.0-rc.0\nErlang 23.0.2\n\nBenchmark suite executing with the following configuration:\nwarmup: 2 s\ntime: 10 s\nmemory time: 10 ms\nparallel: 1\ninputs: Large, Medium, Small\nEstimated total run time: 2.40 min\n\nBenchmarking filter |\u003e map with input Large...\nBenchmarking filter |\u003e map with input Medium...\nBenchmarking filter |\u003e map with input Small...\nBenchmarking flat_map with input Large...\nBenchmarking flat_map with input Medium...\nBenchmarking flat_map with input Small...\nBenchmarking for comprehension with input Large...\nBenchmarking for comprehension with input Medium...\nBenchmarking for comprehension with input Small...\nBenchmarking reduce |\u003e reverse with input Large...\nBenchmarking reduce |\u003e reverse with input Medium...\nBenchmarking reduce |\u003e reverse with input Small...\n\n##### With input Large #####\nName                        ips        average  deviation         median         99th %\nreduce |\u003e reverse         12.12       82.51 ms     ±4.60%       81.46 ms       97.24 ms\nfor comprehension         12.12       82.51 ms     ±4.53%       81.87 ms       94.38 ms\nfilter |\u003e map             10.78       92.75 ms     ±4.91%       92.15 ms      103.58 ms\nflat_map                   8.41      118.89 ms     ±3.22%      118.22 ms      134.28 ms\n\nComparison:\nreduce |\u003e reverse         12.12\nfor comprehension         12.12 - 1.00x slower +0.00348 ms\nfilter |\u003e map             10.78 - 1.12x slower +10.24 ms\nflat_map                   8.41 - 1.44x slower +36.38 
ms\n\nMemory usage statistics:\n\nName                 Memory usage\nreduce |\u003e reverse         7.57 MB\nfor comprehension         7.57 MB - 1.00x memory usage +0 MB\nfilter |\u003e map            13.28 MB - 1.75x memory usage +5.71 MB\nflat_map                 14.32 MB - 1.89x memory usage +6.75 MB\n\n**All measurements for memory usage were the same**\n\n##### With input Medium #####\nName                        ips        average  deviation         median         99th %\nfor comprehension        1.27 K      788.69 μs    ±14.54%         732 μs     1287.38 μs\nreduce |\u003e reverse        1.26 K      792.37 μs    ±14.73%         732 μs     1283.97 μs\nfilter |\u003e map            1.16 K      859.07 μs    ±14.68%         802 μs     1377.75 μs\nflat_map                 0.86 K     1157.55 μs    ±15.68%        1093 μs     1838.80 μs\n\nComparison:\nfor comprehension        1.27 K\nreduce |\u003e reverse        1.26 K - 1.00x slower +3.68 μs\nfilter |\u003e map            1.16 K - 1.09x slower +70.38 μs\nflat_map                 0.86 K - 1.47x slower +368.87 μs\n\nMemory usage statistics:\n\nName                 Memory usage\nfor comprehension        57.13 KB\nreduce |\u003e reverse        57.13 KB - 1.00x memory usage +0 KB\nfilter |\u003e map           109.12 KB - 1.91x memory usage +51.99 KB\nflat_map                130.66 KB - 2.29x memory usage +73.54 KB\n\n**All measurements for memory usage were the same**\n\n##### With input Small #####\nName                        ips        average  deviation         median         99th %\nreduce |\u003e reverse      121.39 K        8.24 μs   ±179.26%           8 μs          30 μs\nfor comprehension      121.20 K        8.25 μs   ±180.01%           8 μs          30 μs\nfilter |\u003e map          111.29 K        8.99 μs   ±144.77%           8 μs          31 μs\nflat_map                85.08 K       11.75 μs   ±119.95%          11 μs          37 μs\n\nComparison:\nreduce |\u003e reverse      121.39 K\nfor comprehension   
   121.20 K - 1.00x slower +0.0133 μs\nfilter |\u003e map          111.29 K - 1.09x slower +0.75 μs\nflat_map                85.08 K - 1.43x slower +3.52 μs\n\nMemory usage statistics:\n\nName                 Memory usage\nreduce |\u003e reverse         1.09 KB\nfor comprehension         1.09 KB - 1.00x memory usage +0 KB\nfilter |\u003e map             1.60 KB - 1.46x memory usage +0.51 KB\nflat_map                  1.62 KB - 1.48x memory usage +0.52 KB\n\n**All measurements for memory usage were the same**\n```\n\n#### String.slice/3 vs :binary.part/3 [code](code/general/string_slice.exs)\n\nFrom `String.slice/3` [documentation](https://hexdocs.pm/elixir/String.html#slice/3):\nRemember this function works with Unicode graphemes and considers the slices to represent grapheme offsets. If you want to split on raw bytes, check `Kernel.binary_part/3` instead.\n\n```\n$ mix run code/general/string_slice.exs\nOperating System: macOS\nCPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz\nNumber of Available Cores: 16\nAvailable memory: 16 GB\nElixir 1.11.0-rc.0\nErlang 23.0.2\n\nBenchmark suite executing with the following configuration:\nwarmup: 100 ms\ntime: 2 s\nmemory time: 10 ms\nparallel: 1\ninputs: Large string (10 Thousand Numbers), Small string (10 Numbers)\nEstimated total run time: 12.66 s\n\nBenchmarking :binary.part/3 with input Large string (10 Thousand Numbers)...\nBenchmarking :binary.part/3 with input Small string (10 Numbers)...\nBenchmarking String.slice/3 with input Large string (10 Thousand Numbers)...\nBenchmarking String.slice/3 with input Small string (10 Numbers)...\nBenchmarking binary_part/3 with input Large string (10 Thousand Numbers)...\nBenchmarking binary_part/3 with input Small string (10 Numbers)...\n\n##### With input Large string (10 Thousand Numbers) #####\nName                     ips        average  deviation         median         99th %\nbinary_part/3        11.14 M       89.78 ns  ±2513.45%         100 ns         200 
ns\n:binary.part/3        3.59 M      278.65 ns  ±9466.55%           0 ns        1000 ns\nString.slice/3        0.90 M     1112.12 ns   ±440.40%        1000 ns        2000 ns\n\nComparison:\nbinary_part/3        11.14 M\n:binary.part/3        3.59 M - 3.10x slower +188.87 ns\nString.slice/3        0.90 M - 12.39x slower +1022.34 ns\n\nMemory usage statistics:\n\nName              Memory usage\nbinary_part/3              0 B\n:binary.part/3             0 B - 1.00x memory usage +0 B\nString.slice/3           880 B - ∞ x memory usage +880 B\n\n**All measurements for memory usage were the same**\n\n##### With input Small string (10 Numbers) #####\nName                     ips        average  deviation         median         99th %\nbinary_part/3         3.64 M      274.57 ns  ±7776.31%           0 ns        1000 ns\n:binary.part/3        3.56 M      281.06 ns  ±9071.16%           0 ns        1000 ns\nString.slice/3        0.91 M     1103.31 ns   ±246.39%        1000 ns        2000 ns\n\nComparison:\nbinary_part/3         3.64 M\n:binary.part/3        3.56 M - 1.02x slower +6.48 ns\nString.slice/3        0.91 M - 4.02x slower +828.73 ns\n\nMemory usage statistics:\n\nName              Memory usage\nbinary_part/3              0 B\n:binary.part/3             0 B - 1.00x memory usage +0 B\nString.slice/3           880 B - ∞ x memory usage +880 B\n\n**All measurements for memory usage were the same**\n```\n\n#### Filtering maps [code](code/general/filtering_maps.exs)\n\nIf we have a map and want to filter out key-value pairs from that map, there are\nseveral ways to do it. 
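Concretely, the compared approaches look something like this (a sketch that keeps only entries with even values; the data here is made up, not the benchmarked input):

```elixir
map = Map.new(1..5, fn i -> {i, i * i} end)

# :maps.filter/2 -- the Erlang fun receives the key and value as two
# separate arguments
:maps.filter(fn _k, v -> rem(v, 2) == 0 end, map)

# Enum.filter/2 |> Map.new/1 -- the Elixir funs receive a {key, value} tuple
map |> Enum.filter(fn {_k, v} -> rem(v, 2) == 0 end) |> Map.new()

# Enum.filter/2 |> Enum.into/2
map |> Enum.filter(fn {_k, v} -> rem(v, 2) == 0 end) |> Enum.into(%{})

# for comprehension with :into
for {k, v} <- map, rem(v, 2) == 0, into: %{}, do: {k, v}
```

All four return `%{2 => 4, 4 => 16}`.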
However, because of some optimizations in Erlang,\n`:maps.filter/2` is faster than any of the versions implemented in Elixir.\nIf you look at the benchmark code, you'll notice that the function used for\nfiltering takes two arguments (the key and value) instead of one (a tuple with\nthe key and value), and it's this difference that is responsible for the\ndecreased execution time and memory usage.\n\n```\n$ mix run code/general/filtering_maps.exs\nOperating System: macOS\nCPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz\nNumber of Available Cores: 16\nAvailable memory: 16 GB\nElixir 1.11.0-rc.0\nErlang 23.0.2\n\nBenchmark suite executing with the following configuration:\nwarmup: 2 s\ntime: 10 s\nmemory time: 1 s\nparallel: 1\ninputs: Large (10_000), Medium (100), Small (1)\nEstimated total run time: 2.60 min\n\nBenchmarking :maps.filter with input Large (10_000)...\nBenchmarking :maps.filter with input Medium (100)...\nBenchmarking :maps.filter with input Small (1)...\nBenchmarking Enum.filter/2 |\u003e Enum.into/2 with input Large (10_000)...\nBenchmarking Enum.filter/2 |\u003e Enum.into/2 with input Medium (100)...\nBenchmarking Enum.filter/2 |\u003e Enum.into/2 with input Small (1)...\nBenchmarking Enum.filter/2 |\u003e Map.new/1 with input Large (10_000)...\nBenchmarking Enum.filter/2 |\u003e Map.new/1 with input Medium (100)...\nBenchmarking Enum.filter/2 |\u003e Map.new/1 with input Small (1)...\nBenchmarking for with input Large (10_000)...\nBenchmarking for with input Medium (100)...\nBenchmarking for with input Small (1)...\n\n##### With input Large (10_000) #####\nName                                   ips        average  deviation         median         99th %\n:maps.filter                        669.86        1.49 ms    ±14.38%        1.44 ms        2.31 ms\nEnum.filter/2 |\u003e Enum.into/2        532.59        1.88 ms    ±19.86%        1.78 ms        2.87 ms\nEnum.filter/2 |\u003e Map.new/1          527.37        1.90 ms    ±25.17%        
1.79 ms        2.85 ms\nfor                                 524.51        1.91 ms    ±31.33%        1.80 ms        2.83 ms\n\nComparison:\n:maps.filter                        669.86\nEnum.filter/2 |\u003e Enum.into/2        532.59 - 1.26x slower +0.38 ms\nEnum.filter/2 |\u003e Map.new/1          527.37 - 1.27x slower +0.40 ms\nfor                                 524.51 - 1.28x slower +0.41 ms\n\nMemory usage statistics:\n\nName                            Memory usage\n:maps.filter                       780.45 KB\nEnum.filter/2 |\u003e Enum.into/2       782.85 KB - 1.00x memory usage +2.41 KB\nEnum.filter/2 |\u003e Map.new/1         782.87 KB - 1.00x memory usage +2.42 KB\nfor                                782.86 KB - 1.00x memory usage +2.41 KB\n\n**All measurements for memory usage were the same**\n\n##### With input Medium (100) #####\nName                                   ips        average  deviation         median         99th %\n:maps.filter                       76.01 K       13.16 μs    ±90.13%          12 μs          42 μs\nEnum.filter/2 |\u003e Map.new/1         61.19 K       16.34 μs    ±61.27%          15 μs          50 μs\nfor                                60.89 K       16.42 μs    ±65.36%          15 μs          51 μs\nEnum.filter/2 |\u003e Enum.into/2       60.60 K       16.50 μs    ±60.52%          15 μs          51 μs\n\nComparison:\n:maps.filter                       76.01 K\nEnum.filter/2 |\u003e Map.new/1         61.19 K - 1.24x slower +3.19 μs\nfor                                60.89 K - 1.25x slower +3.27 μs\nEnum.filter/2 |\u003e Enum.into/2       60.60 K - 1.25x slower +3.35 μs\n\nMemory usage statistics:\n\nName                            Memory usage\n:maps.filter                         5.67 KB\nEnum.filter/2 |\u003e Map.new/1           7.84 KB - 1.38x memory usage +2.17 KB\nfor                                  7.84 KB - 1.38x memory usage +2.17 KB\nEnum.filter/2 |\u003e Enum.into/2         7.84 KB - 1.38x memory usage +2.17 
KB\n\n**All measurements for memory usage were the same**\n\n##### With input Small (1) #####\nName                                   ips        average  deviation         median         99th %\n:maps.filter                        2.46 M      406.55 ns  ±6862.02%           0 ns        1000 ns\nfor                                 1.81 M      551.70 ns  ±4974.10%           0 ns        1000 ns\nEnum.filter/2 |\u003e Map.new/1          1.78 M      562.13 ns  ±5004.53%           0 ns        1000 ns\nEnum.filter/2 |\u003e Enum.into/2        1.64 M      608.18 ns  ±4796.51%        1000 ns        1000 ns\n\nComparison:\n:maps.filter                        2.46 M\nfor                                 1.81 M - 1.36x slower +145.15 ns\nEnum.filter/2 |\u003e Map.new/1          1.78 M - 1.38x slower +155.58 ns\nEnum.filter/2 |\u003e Enum.into/2        1.64 M - 1.50x slower +201.63 ns\n\nMemory usage statistics:\n\nName                            Memory usage\n:maps.filter                           136 B\nfor                                    248 B - 1.82x memory usage +112 B\nEnum.filter/2 |\u003e Map.new/1             248 B - 1.82x memory usage +112 B\nEnum.filter/2 |\u003e Enum.into/2           248 B - 1.82x memory usage +112 B\n\n**All measurements for memory usage were the same**\n```\n\n## Something went wrong\n\nSomething look wrong to you? :cry: Have a better example? :heart_eyes: Excellent!\n\n[Please open an Issue](https://github.com/devonestes/fast-elixir/issues/new) or [open a Pull Request](https://github.com/devonestes/fast-elixir/pulls) to fix it.\n\nThank you in advance! :wink: :beer:\n\n## Also Checkout\n\n- [Benchmarking in Practice](https://www.youtube.com/watch?v=7-mE5CKXjkw)\n\n  Talk by [@PragTob](https://github.com/PragTob) from ElixirLive 2016 about benchmarking in Elixir.\n\n- [Credo](https://github.com/rrrene/credo)\n\n  Wonderful static analysis tool by [@rrrene](https://github.com/rrrene). 
It's not _just_ about speed, but it will flag some performance issues.\n\nBrought to you by [@devoncestes](https://twitter.com/devoncestes)\n\n## License\n\nThis work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/).\n\n## Code License\n\n### CC0 1.0 Universal\n\nTo the extent possible under law, @devonestes has waived all copyright and related or neighboring rights to \"fast-elixir\".\n\nThis work belongs to the community.\n","funding_links":[],"categories":["Uncategorized","Elixir"],"sub_categories":["Uncategorized"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdevonestes%2Ffast-elixir","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdevonestes%2Ffast-elixir","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdevonestes%2Ffast-elixir/lists"}