{"id":13502164,"url":"https://github.com/lobocv/pyperform","last_synced_at":"2025-04-06T17:13:24.377Z","repository":{"id":22715681,"uuid":"26059969","full_name":"lobocv/pyperform","owner":"lobocv","description":"An easy and convienent way to performance test python code.","archived":false,"fork":false,"pushed_at":"2021-07-17T22:26:10.000Z","size":694,"stargazers_count":223,"open_issues_count":2,"forks_count":14,"subscribers_count":8,"default_branch":"master","last_synced_at":"2025-03-30T15:11:14.373Z","etag":null,"topics":["benchmark-functions","benchmarking","performance-test","python","speed-test","timeit"],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/lobocv.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2014-11-01T18:55:07.000Z","updated_at":"2024-11-28T16:30:15.000Z","dependencies_parsed_at":"2022-08-21T00:00:50.701Z","dependency_job_id":null,"html_url":"https://github.com/lobocv/pyperform","commit_stats":null,"previous_names":[],"tags_count":12,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/lobocv%2Fpyperform","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/lobocv%2Fpyperform/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/lobocv%2Fpyperform/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/lobocv%2Fpyperform/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/lobocv","download_url":"https://codeload.github.com/lobocv/pyperform/tar.gz/refs/heads/master","host":{"n
ame":"GitHub","url":"https://github.com","kind":"github","repositories_count":247517921,"owners_count":20951719,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["benchmark-functions","benchmarking","performance-test","python","speed-test","timeit"],"created_at":"2024-07-31T22:02:04.304Z","updated_at":"2025-04-06T17:13:24.357Z","avatar_url":"https://github.com/lobocv.png","language":"Python","readme":"PyPerform\n=========\n\nAn easy and convenient way to performance test blocks of Python code.\nTired of writing separate scripts for your performance tests? Don't like coding in strings?\nUsing the pyperform decorators, you can easily add timeit tests to your functions with just one line!\n\n\nFeatures\n--------\nFeatures of pyperform include:\n\n    - Quick, easy-to-implement in-code performance tests that run once when the function is defined.\n    - Speed comparison of several functions.\n    - Validation of results between ComparisonBenchmarks.\n    - Summary reports.\n    - Supports class methods as well as global functions.\n    - Performance tests can easily be disabled/enabled globally.\n    - Community-driven library of performance tests to learn from.\n\nInstallation\n------------\nTo install:\n    \n    pip install pyperform\n    \n\nCompatibility\n-------------\nPyPerform was developed in Python 2.7 but has been tested with Python 3.4. 
Please report any compatibility issues or\nsend pull requests with your changes!\n\nUsage\n-----\n\nTo use pyperform to benchmark functions, you need to add one of the following decorators:\n\n```python\n\n@BenchmarkedFunction(setup=None,\n                     classname=None,\n                     largs=None,\n                     kwargs=None,\n                     timeit_repeat=3,\n                     timeit_number=1000)\n\n@BenchmarkedClass(setup=None,\n                  largs=None,\n                  kwargs=None,\n                  timeit_repeat=3,\n                  timeit_number=1000)\n\n@ComparisonBenchmark(group,\n                     classname=None,\n                     setup=None,\n                     largs=None,\n                     kwargs=None,\n                     validation=False,\n                     timeit_repeat=3,\n                     timeit_number=1000)\n\n```\n\nwhere largs is a list of arguments to pass to the function and kwargs is a dictionary of keyword arguments to pass to the \nfunction. The setup argument is described in the following section. All decorators have timeit_repeat and timeit_number\narguments, which can be used to set the number of trials and the number of executions per trial to use with timeit. The ComparisonBenchmark\nhas a validation flag which, when set to True, will attempt to compare the results of the functions in the group.\n\nImports and Setup Code\n----------------------\nSometimes your decorated function will require some setup code or imported modules. You can easily include any lines of \ncode by appending the tag `#!` to the end of the line. 
For functions and classes, you only need to tag the `def` or\n`class` line and PyPerform will include the entire function/class definition as setup code.\n\n\nFor example:\n\n```python\n\n    from pyperform import BenchmarkedFunction\n    \n    import math #!\n    a = 10  #!\n    \n    \n    def do_calculation(a, b): #!\n        return a * b\n    \n    \n    @BenchmarkedFunction(largs=(5,))\n    def call_function(b):\n        # We can reference the `a` variable because it is tagged\n        result = a * b\n        assert result == 50\n        # We can use the math module because its import is tagged.\n        math.log10(100)\n        # We can call this function because it is tagged.\n        calc_result = do_calculation(a, b)\n        return calc_result\n\n```\n\nResults in:\n\n    call_function \t 6.214 us\n\n    \nThe setup argument (Optional)\n-----------------------------\nAll decorators have a setup argument which can be either a function with no arguments, or a string of code. If given a\nfunction, the body of the function is executed in the global scope. This means that objects and variables instantiated \nin the body of the function are accessible from within the benchmarked function.\n  \nExample:\n\n```python\n\nfrom pyperform import BenchmarkedFunction\n\ndef _setup():\n    a = 10\n\n@BenchmarkedFunction(setup=_setup, largs=(5,))\ndef multiply_by_a(b):\n    result = a * b\n    assert result == 50\n    return result\n\n```\n\nResults in:\n    \n    multiply_by_a \t 3.445 us\n\n\nClass-method Benchmarking\n-------------------------\nPyperform will also work on class methods, but in order to do so, we must first create an instance of the class.\nThis is done in `BenchmarkedClass`. 
Then once we have decorated the class with `BenchmarkedClass`, we can use\n`ComparisonBenchmark` or `BenchmarkedFunction` to performance test the class's methods.\n\n\u003cb\u003eNote that when benchmarking class methods, the `classname` argument to ComparisonBenchmark must be provided.\nThis argument will hopefully be removed in the future.\u003c/b\u003e\n\nIn the BenchmarkedClass we instantiate a Person object and then run three benchmarked class-methods.\nTwo of the class-methods are `ComparisonBenchmarks` and will be compared with one another. To see the result, you must\ncall the `ComparisonBenchmark.summarize()` function. The third function is a duplicate of calculate_savings_method2 but\nit is a BenchmarkedFunction instead. The result of BenchmarkedFunctions is printed when the script is run.\n\n```python\n\nfrom pyperform import BenchmarkedClass, ComparisonBenchmark, BenchmarkedFunction\n\n@BenchmarkedClass(largs=('Calvin', 24, 1000.,), kwargs={'height': '165 cm'})\nclass Person(object):\n\n    def __init__(self, name, age, monthly_income, height=None, *args, **kwargs):\n        self.name = name\n        self.age = age\n        self.height = height\n        self.monthly_income = monthly_income\n\n\n    @ComparisonBenchmark('Calculate Savings', classname=\"Person\", timeit_number=100,\n                         validation=True, largs=(55,), kwargs={'monthly_spending': 500})\n    def calculate_savings_method1(self, retirement_age, monthly_spending=0):\n        savings = 0\n        for y in range(self.age, retirement_age):\n            for m in range(12):\n                savings += self.monthly_income - monthly_spending\n        return savings\n\n    @ComparisonBenchmark('Calculate Savings', classname=\"Person\", timeit_number=100,\n                         validation=True, largs=(55,), kwargs={'monthly_spending': 500})\n    def calculate_savings_method2(self, retirement_age, monthly_spending=0):\n        yearly_income = 12 * (self.monthly_income - 
monthly_spending)\n        n_years = retirement_age - self.age\n        if n_years \u003e 0:\n            return yearly_income * n_years\n\n    @BenchmarkedFunction(classname=\"Person\", timeit_number=100,\n                         largs=(55,), kwargs={'monthly_spending': 500})\n    def same_as_method_2(self, retirement_age, monthly_spending=0):\n        yearly_income = 12 * (self.monthly_income - monthly_spending)\n        n_years = retirement_age - self.age\n        if n_years \u003e 0:\n            return yearly_income * n_years\n\n```\n\nYou can print the summary to a file; if ComparisonBenchmark.summarize() is not given an `fs` parameter, it will print to the\nconsole.\n\n```python\n\nreport_file = open('report.txt', 'w')\nComparisonBenchmark.summarize(group='Calculate Savings', fs=report_file)\n\n```\n\nThis results in a file `report.txt` that contains the ComparisonBenchmark's results:\n    \n    Call statement:\n    \n        instance.calculate_savings_method2(55, monthly_spending=500)\n    \n    \n    Rank     Function Name                       Time         % of Fastest    timeit_repeat   timeit_number \n    ------------------------------------------------------------------------------------------------------------------------\n    \n    1        Person.calculate_savings_method2    267.093 ns   100.0           3               100           \n    2        Person.calculate_savings_method1    35.623 us    0.7             3               100           \n    ------------------------------------------------------------------------------------------------------------------------\n    \n    \n    \n    Source Code:\n    ------------------------------------------------------------------------------------------------------------------------\n    \n    \n    def calculate_savings_method2(self, retirement_age, monthly_spending=0):\n        yearly_income = 12 * (self.monthly_income - monthly_spending)\n        n_years = retirement_age - self.age\n        if n_years \u003e 
0:\n            return yearly_income * n_years\n    ------------------------------------------------------------------------------------------------------------------------\n    \n    \n    def calculate_savings_method1(self, retirement_age, monthly_spending=0):\n        savings = 0\n        for y in range(self.age, retirement_age):\n            for m in range(12):\n                savings += self.monthly_income - monthly_spending\n        return savings\n    ------------------------------------------------------------------------------------------------------------------------\n\nand the results of the BenchmarkedFunction are printed to the screen:\n    \n    same_as_method_2 \t 262.827 ns\n    \nValidation\n----------\nComparisonBenchmark has an optional argument `validation`. When `validation=True`, the return value of each \nComparisonBenchmark in a group is compared. If the results of the functions are not the same, a ValidationError is raised.\n \nExample:\n\n```python\n\nfrom pyperform import ComparisonBenchmark\nfrom math import sin  #!\n\n\n@ComparisonBenchmark('Group1', validation=True, largs=(100,))\ndef list_append(n, *args, **kwargs):\n    l = []\n    for i in xrange(1, n):\n        l.append(sin(i))\n    return l\n\n\n@ComparisonBenchmark('Group1', validation=True, largs=(100,))\ndef list_comprehension(n, *args, **kwargs):\n    return 1\n\n```\n\nOutput:\n\n    pyperform.ValidationError: Results of functions list_append and list_comprehension are not equivalent.\n    list_append:\t [0.8414709848078965, 0.9092974268256817, 0.1411200080598672, -0.7568024953079282, -0.9589242746631385, -0.27941549819892586, 0.6569865987187891, 0.9893582466233818, 0.4121184852417566, -0.5440211108893698, -0.9999902065507035, -0.5365729180004349, 0.4201670368266409, 0.9906073556948704, 0.6502878401571168, -0.2879033166650653, -0.9613974918795568, -0.750987246771676, 0.14987720966295234, 0.9129452507276277, 0.8366556385360561, -0.008851309290403876, -0.8462204041751706, 
-0.9055783620066239, -0.13235175009777303, 0.7625584504796027, 0.956375928404503, 0.27090578830786904, -0.6636338842129675, -0.9880316240928618, -0.404037645323065, 0.5514266812416906, 0.9999118601072672, 0.5290826861200238, -0.428182669496151, -0.9917788534431158, -0.6435381333569995, 0.2963685787093853, 0.9637953862840878, 0.7451131604793488, -0.158622668804709, -0.9165215479156338, -0.8317747426285983, 0.017701925105413577, 0.8509035245341184, 0.9017883476488092, 0.123573122745224, -0.7682546613236668, -0.9537526527594719, -0.26237485370392877, 0.6702291758433747, 0.9866275920404853, 0.39592515018183416, -0.5587890488516163, -0.9997551733586199, -0.5215510020869119, 0.43616475524782494, 0.9928726480845371, 0.6367380071391379, -0.3048106211022167, -0.9661177700083929, -0.7391806966492228, 0.16735570030280691, 0.9200260381967906, 0.8268286794901034, -0.026551154023966794, -0.8555199789753223, -0.8979276806892913, -0.11478481378318722, 0.7738906815578891, 0.9510546532543747, 0.25382336276203626, -0.6767719568873076, -0.9851462604682474, -0.38778163540943045, 0.5661076368981803, 0.9995201585807313, 0.5139784559875352, -0.4441126687075084, -0.9938886539233752, -0.6298879942744539, 0.31322878243308516, 0.9683644611001854, 0.7331903200732922, -0.1760756199485871, -0.9234584470040598, -0.8218178366308225, 0.03539830273366068, 0.8600694058124533, 0.8939966636005579, 0.10598751175115685, -0.7794660696158047, -0.9482821412699473, -0.24525198546765434, 0.683261714736121, 0.9835877454343449, 0.3796077390275217, -0.5733818719904229, -0.9992068341863537]\n    list_comprehension:\t1\n","funding_links":[],"categories":["Python"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Flobocv%2Fpyperform","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Flobocv%2Fpyperform","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Flobocv%2Fpyperform/lists"}