# Python_Benchmarks

[![Known Vulnerabilities](http://snyk.io/test/github/mramshaw/Python_Benchmarks/badge.svg?style=plastic&targetFile=requirements.txt)](http://snyk.io/test/github/mramshaw/Python_Benchmarks?style=plastic&targetFile=requirements.txt)

Test out some general Python memes.

The contents are as follows:

* [Rationale](#rationale)
* [Premature Optimization](#premature-optimization)
* [Memory Profiling](#memory-profiling)
* [Garbage Collection](#garbage-collection)
    * [With the default behaviour (GC under runtime control)](#with-the-default-behaviour-gc-under-runtime-control)
    * [With GC disabled](#with-gc-disabled)
    * [Difference](#difference)
* [To Run](#to-run)
    * [List versus Tuple](#list-versus-tuple)
    * [Range versus Xrange](#range-versus-xrange)
    * [Explicit function return versus Default function return](#explicit-function-return-versus-default-function-return)
    * [For loop summation versus Sum function](#for-loop-summation-versus-sum-function)
* [Conclusion](#conclusion)
* [Versions](#versions)
* [To Do](#to-do)

## Rationale

Many performance claims are made in Python writings.

Let's test some of these assumptions.

We will use the Python [benchmark](http://pypi.org/project/benchmark/) package.

[It looks a little old, but as long as it does what's expected, that should be fine.]

There are performance-related alternatives to Python, such as Cython.

But perhaps the ___best___ strategy is to not run performance-critical code in Python at all.

## Premature Optimization

> We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

[Sir Charles Antony Richard Hoare FRS FREng](http://en.wikipedia.org/wiki/Tony_Hoare)

My personal feeling is that the time to start optimizing is when you have identified a performance problem.

[Wikipedia has it that Tony Hoare invented __quicksort__ in 1959/1960, so the next time you are asked at an interview
 to whiteboard a sort algorithm you might ask them why they are asking you to reinvent the wheel. And a stone-age wheel
 at that. Reference the [DRY principle](http://en.wikipedia.org/wiki/Don't_repeat_yourself) for bonus points.]

## Memory Profiling

Profiling run time alone (without factoring in memory usage) gives an incomplete picture.

However, memory profiling in Python is not always easy or very precise, as the Python interpreter handles memory management.

There are effective performance optimizations available, such as enabling a JIT compiler with [numba](http://numba.pydata.org/).

In general, my experience has been that benchmarking with JIT compilers is unreliable.

Even a good optimizing compiler can make comparative benchmarking troublesome.

## Garbage Collection

Garbage Collection is controlled by the Python runtime - however, the unpredictable timing of
GC can give rise to slowdowns at inopportune moments. 
So we will take a leaf
from the [timeit code](http://github.com/python/cpython/blob/3.7/Lib/timeit.py) and
disable GC while our benchmarks are running.

This should also give a more useful picture of memory usage, as no memory should
be reclaimed while our benchmarks are being run.

#### With the default behaviour (GC under runtime control)

```bash
$ python list_versus_tuple.py

Benchmark Report
================

Benchmark List
--------------

name | rank | runs |    mean |     sd | timesBaseline
-----|------|------|---------|--------|--------------
list |    1 |  100 | 0.06922 | 0.0126 |           1.0

Benchmark Tuple
---------------

 name | rank | runs |     mean |        sd | timesBaseline
------|------|------|----------|-----------|--------------
tuple |    1 |  100 | 0.009339 | 1.175e-05 |           1.0

Each of the above 200 runs were run in random, non-consecutive order by
`benchmark` v0.1.5 (http://jspi.es/benchmark) with Python 2.7.12
Linux-4.4.0-141-generic-x86_64 on 2019-01-29 16:05:14.

$
```

Mean __0.06922__ and Std. Dev. __0.0126__ for `list`; Mean __0.009339__ and Std. Dev. __1.175e-05__ for `tuple`.

#### With GC disabled

```bash
$ python list_versus_tuple.py

Benchmark Report
================

Benchmark List
--------------

name | rank | runs |    mean |       sd | timesBaseline
-----|------|------|---------|----------|--------------
list |    1 |  100 | 0.06467 | 0.003457 |           1.0

Benchmark Tuple
---------------

 name | rank | runs |    mean |        sd | timesBaseline
------|------|------|---------|-----------|--------------
tuple |    1 |  100 | 0.00926 | 0.0007694 |           1.0

Each of the above 200 runs were run in random, non-consecutive order by
`benchmark` v0.1.5 (http://jspi.es/benchmark) with Python 2.7.12
Linux-4.4.0-141-generic-x86_64 on 2019-01-29 16:06:05.

$
```

Mean __0.06467__ and Std. Dev. __0.003457__ for `list`; Mean __0.00926__ and Std. Dev. __0.0007694__ for `tuple`.
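The disable-GC-while-timing pattern borrowed from `timeit` can be sketched with nothing but the standard library (a minimal illustration; the `work` function and its size are arbitrary choices, not taken from this repository):

```python
import gc
import time


def work():
    """Arbitrary allocation-heavy workload (illustrative only)."""
    return [str(i) for i in range(100000)]


# Disable the cyclic garbage collector for the duration of the
# measurement, as timeit does internally, then restore it afterwards.
gc.disable()
try:
    start = time.time()
    work()
    elapsed = time.time() - start
finally:
    gc.enable()

print("elapsed: %f seconds" % elapsed)
```

The `try`/`finally` guarantees GC is re-enabled even if the workload raises.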
#### Difference

Test | GC mode | Mean (first) | Std. Dev. | Mean (second) | Std. Dev. | Results
---- | ------- | ------------ | --------- | ------------- | --------- | -------
List vs. Tuple | Usual GC | 0.06922 | 0.0126 | 0.009339 | 1.175e-05 |
List vs. Tuple | GC Off | 0.06467 | 0.003457 | 0.00926 | 0.0007694 | Slight decrease in both execution times
Range vs. Xrange | Usual GC | 0.01832 | 0.0001215 | 0.009139 | 6.165e-05 |
Range vs. Xrange | GC Off | 0.01963 | 0.0002059 | 0.009128 | 6.597e-05 | Slight increase in `range` execution time and entropy; no change for `xrange`
Explicit return vs. Default return | Usual GC | 0.003286 | 2.28e-05 | 0.003344 | 0.0002187 |
Explicit return vs. Default return | GC Off | 0.00336 | 6.077e-05 | 0.003483 | 0.0001302 | Slight increase in both execution times
For loop summation vs. Sum function | Usual GC | 0.0006326 | 8.786e-05 | 0.0001356 | 8.84e-07 |
For loop summation vs. Sum function | GC Off | 0.0006081 | 1.367e-05 | 0.0001352 | 2.186e-06 | Slight decrease in `for loop` execution time; slight increase in `sum` entropy

I am reading the standard deviation of the results as a rough measure of run-to-run variability ("entropy").

## To Run

Run the various tests as described below.

#### List versus Tuple

Type <kbd>python list_versus_tuple.py</kbd> as follows:

```bash
$ python list_versus_tuple.py

Benchmark Report
================

Benchmark List
--------------

name | rank | runs |    mean |      sd | timesBaseline
-----|------|------|---------|---------|--------------
list |    1 |  100 | 0.06516 | 0.00227 |           1.0

Benchmark Tuple
---------------

 name | rank | runs |     mean |        sd | timesBaseline
------|------|------|----------|-----------|--------------
tuple |    1 |  100 | 0.009221 | 0.0004186 |           1.0

Each of the above 200 runs were run in random, non-consecutive order by
`benchmark` v0.1.5 (http://jspi.es/benchmark) with Python 2.7.12
Linux-4.4.0-141-generic-x86_64 on 2019-01-06 19:06:14.

$
```
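For a quick sanity check without the `benchmark` package, the stdlib `timeit` module shows the same effect (a sketch; the literal values and repetition count are arbitrary). A tuple literal of constants can be folded into a single constant by the CPython compiler, whereas a list literal must be rebuilt on every pass:

```python
import timeit

# Time constructing a small literal repeatedly.  The tuple of constants
# is constant-folded, so it is essentially a single load; the list must
# be freshly built on every iteration.
list_time = timeit.timeit("[1, 2, 3, 4, 5]", number=500000)
tuple_time = timeit.timeit("(1, 2, 3, 4, 5)", number=500000)

print("list:  %f" % list_time)
print("tuple: %f" % tuple_time)
```

The tuple timing should come out well below the list timing, matching the benchmark above.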
#### Range versus Xrange

Type <kbd>python range_versus_xrange.py</kbd> as follows:

```bash
$ python range_versus_xrange.py

Benchmark Report
================

Benchmark Range
---------------

 name | rank | runs |    mean |       sd | timesBaseline
------|------|------|---------|----------|--------------
range |    1 |  100 | 0.01897 | 0.001786 |           1.0

Benchmark Xrange
----------------

  name | rank | runs |    mean |       sd | timesBaseline
-------|------|------|---------|----------|--------------
xrange |    1 |  100 | 0.00911 | 3.19e-05 |           1.0

Each of the above 200 runs were run in random, non-consecutive order by
`benchmark` v0.1.5 (http://jspi.es/benchmark) with Python 2.7.12
Linux-4.4.0-141-generic-x86_64 on 2019-01-06 19:11:46.

$
```

#### Explicit function return versus Default function return

Type <kbd>python explicit_versus_default_return.py</kbd> as follows:

```bash
$ python explicit_versus_default_return.py

Benchmark Report
================

Benchmark Default Return
------------------------

          name | rank | runs |     mean |        sd | timesBaseline
---------------|------|------|----------|-----------|--------------
default return |    1 | 1000 | 0.002304 | 1.142e-05 |           1.0

Benchmark Explicit Return
-------------------------

           name | rank | runs |    mean |        sd | timesBaseline
----------------|------|------|---------|-----------|--------------
explicit return |    1 | 1000 | 0.00233 | 0.0002527 |           1.0

Each of the above 2000 runs were run in random, non-consecutive order by
`benchmark` v0.1.5 (http://jspi.es/benchmark) with Python 2.7.12
Linux-4.4.0-141-generic-x86_64 on 2019-01-07 18:09:31.

$
```
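The near-identical timings are unsurprising: in CPython, a function that simply falls off its end and one that ends with `return None` compile to the same bytecode, which the `dis` module can confirm (a sketch with hypothetical function names, not the repository's test functions):

```python
import dis


def default_return():
    x = 1  # falls off the end; Python implicitly returns None


def explicit_return():
    x = 1
    return None


# Compare opcode names and arguments, ignoring source line numbers.
# Both functions should produce identical instruction sequences.
ops_default = [(i.opname, i.argval) for i in dis.get_instructions(default_return)]
ops_explicit = [(i.opname, i.argval) for i in dis.get_instructions(explicit_return)]
print(ops_default == ops_explicit)
```

Any measured difference between the two is therefore noise, which matches the "too close to call" reading below.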
#### For loop summation versus Sum function

Type <kbd>python for_loop_versus_sum.py</kbd> as follows:

```bash
$ python for_loop_versus_sum.py

Benchmark Report
================

Benchmark For loop
------------------

    name | rank | runs |      mean |        sd | timesBaseline
---------|------|------|-----------|-----------|--------------
for loop |    1 | 1000 | 0.0005747 | 3.312e-06 |           1.0

Benchmark Sum with xrange
-------------------------

           name | rank | runs |      mean |        sd | timesBaseline
----------------|------|------|-----------|-----------|--------------
sum with xrange |    1 | 1000 | 0.0001355 | 1.347e-06 |           1.0

Each of the above 2000 runs were run in random, non-consecutive order by
`benchmark` v0.1.5 (http://jspi.es/benchmark) with Python 2.7.12
Linux-4.4.0-141-generic-x86_64 on 2019-01-25 21:34:59.

$
```
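The same comparison can be reproduced on Python 3 with the stdlib `timeit` module (a sketch; `N` and the repeat count are arbitrary, and Python 3's `range` is already lazy like Python 2's `xrange`):

```python
import timeit

N = 100000


def for_loop_sum():
    """Sum 0..N-1 with an explicit Python-level loop."""
    total = 0
    for i in range(N):
        total += i
    return total


def builtin_sum():
    """Sum 0..N-1 with the sum() built-in, which loops in C."""
    return sum(range(N))


# Both must agree with the closed form N*(N-1)/2.
assert for_loop_sum() == builtin_sum() == N * (N - 1) // 2

loop_time = timeit.timeit(for_loop_sum, number=50)
sum_time = timeit.timeit(builtin_sum, number=50)
print("for loop: %f" % loop_time)
print("sum:      %f" % sum_time)
```

The built-in should win comfortably, since its loop runs in C rather than in the bytecode interpreter.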
## Conclusion

As is generally stated, `tuples` are in fact significantly faster than `lists`.

[Note that tuples and lists are not interchangeable - lists are mutable whereas
 tuples are immutable (they cannot be changed after their initial allocation). But
 for many uses - such as ___value objects___ - tuples are fine.]

In addition to other benefits (such as avoiding memory errors), `xrange`
significantly outperforms `range` (at least in Python 2).

In terms of performance, an explicit function return is pretty much the same
as a defaulted function return (the timing differences are really too
close to call).

Using the `sum` built-in is more efficient, and less error-prone (being fewer
lines of code), than a `for` loop.

## Versions

* Python __2.7.12__

* benchmark __0.1.5__

## To Do

- [x] Ensure code conforms to `pylint`, `pycodestyle` and `pydocstyle`
- [x] Add `pylint` exemptions for `benchmark` coding standards
- [x] Disable Garbage Collection for duration of benchmarks
- [x] Add test for `range` versus `xrange`
- [x] Add test for Explicit function return versus Default function return
- [x] Add test for For loop versus Sum
- [x] Add a Snyk.io vulnerability scan badge
- [ ] Find an alternative for Python 3; also memory profiling
- [ ] Add some more benchmarks