{"id":13461696,"url":"https://github.com/pythonprofilers/memory_profiler","last_synced_at":"2026-02-20T00:30:52.772Z","repository":{"id":1749541,"uuid":"2575826","full_name":"pythonprofilers/memory_profiler","owner":"pythonprofilers","description":"Monitor Memory usage of Python code","archived":false,"fork":false,"pushed_at":"2024-04-29T15:39:56.000Z","size":737,"stargazers_count":4540,"open_issues_count":139,"forks_count":386,"subscribers_count":81,"default_branch":"master","last_synced_at":"2026-01-04T23:56:06.905Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"http://pypi.python.org/pypi/memory_profiler","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/pythonprofilers.png","metadata":{"files":{"readme":"README.rst","changelog":null,"contributing":null,"funding":null,"license":"COPYING","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2011-10-14T11:56:14.000Z","updated_at":"2025-12-25T17:45:09.000Z","dependencies_parsed_at":"2023-01-13T11:21:44.923Z","dependency_job_id":"e1f89f55-3bc1-4695-8ef3-e632aabe9892","html_url":"https://github.com/pythonprofilers/memory_profiler","commit_stats":{"total_commits":561,"total_committers":107,"mean_commits":5.242990654205608,"dds":0.7754010695187166,"last_synced_commit":"e079d3fa351889087f55edb68769b67682ceacf3"},"previous_names":[],"tags_count":46,"template":false,"template_full_name":null,"purl":"pkg:github/pythonprofilers/memory_profiler","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pythonprofilers%2Fmemory_profiler","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pythonprofilers%2Fmemory_profiler/tags","releases_url"
:"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pythonprofilers%2Fmemory_profiler/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pythonprofilers%2Fmemory_profiler/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/pythonprofilers","download_url":"https://codeload.github.com/pythonprofilers/memory_profiler/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pythonprofilers%2Fmemory_profiler/sbom","scorecard":{"id":752378,"data":{"date":"2025-08-11","repo":{"name":"github.com/pythonprofilers/memory_profiler","commit":"025929f8e4f4ea8c27ddb5ef72fc91f6bd703ea5"},"scorecard":{"version":"v5.2.1-40-gf6ed084d","commit":"f6ed084d17c9236477efd66e5b258b9d4cc7b389"},"score":3.9,"checks":[{"name":"Code-Review","score":5,"reason":"Found 6/11 approved changesets -- score normalized to 5","details":null,"documentation":{"short":"Determines if the project requires human code review before pull requests (aka merge requests) are merged.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#code-review"}},{"name":"Binary-Artifacts","score":10,"reason":"no binaries found in the repo","details":null,"documentation":{"short":"Determines if the project has generated executable (binary) artifacts in the source repository.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#binary-artifacts"}},{"name":"Token-Permissions","score":0,"reason":"detected GitHub workflow tokens with excessive permissions","details":["Warn: no topLevel permission defined: .github/workflows/lint_python.yml:1","Info: no jobLevel write permissions found"],"documentation":{"short":"Determines if the project's workflows follow the principle of least 
privilege.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#token-permissions"}},{"name":"Packaging","score":-1,"reason":"packaging workflow not detected","details":["Warn: no GitHub/GitLab publishing workflow detected."],"documentation":{"short":"Determines if the project is published as a package that others can easily download, install, easily update, and uninstall.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#packaging"}},{"name":"Dangerous-Workflow","score":10,"reason":"no dangerous workflow patterns detected","details":null,"documentation":{"short":"Determines if the project's GitHub Action workflows avoid dangerous patterns.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#dangerous-workflow"}},{"name":"Pinned-Dependencies","score":0,"reason":"dependency not pinned by hash detected -- score normalized to 0","details":["Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/lint_python.yml:23: update your workflow using https://app.stepsecurity.io/secureworkflow/pythonprofilers/memory_profiler/lint_python.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/lint_python.yml:24: update your workflow using https://app.stepsecurity.io/secureworkflow/pythonprofilers/memory_profiler/lint_python.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/lint_python.yml:69: update your workflow using https://app.stepsecurity.io/secureworkflow/pythonprofilers/memory_profiler/lint_python.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/lint_python.yml:70: update your workflow using https://app.stepsecurity.io/secureworkflow/pythonprofilers/memory_profiler/lint_python.yml/master?enable=pin","Warn: pipCommand not pinned by hash: .github/workflows/lint_python.yml:37","Warn: 
pipCommand not pinned by hash: .github/workflows/lint_python.yml:38","Warn: pipCommand not pinned by hash: .github/workflows/lint_python.yml:47","Warn: pipCommand not pinned by hash: .github/workflows/lint_python.yml:48","Warn: pipCommand not pinned by hash: .github/workflows/lint_python.yml:83","Warn: pipCommand not pinned by hash: .github/workflows/lint_python.yml:84","Warn: pipCommand not pinned by hash: .github/workflows/lint_python.yml:85","Warn: pipCommand not pinned by hash: .github/workflows/lint_python.yml:86","Info:   0 out of   4 GitHub-owned GitHubAction dependencies pinned","Info:   0 out of   8 pipCommand dependencies pinned"],"documentation":{"short":"Determines if the project has declared and pinned the dependencies of its build process.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#pinned-dependencies"}},{"name":"Maintained","score":0,"reason":"0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0","details":null,"documentation":{"short":"Determines if the project is \"actively maintained\".","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#maintained"}},{"name":"CII-Best-Practices","score":0,"reason":"no effort to earn an OpenSSF best practices badge detected","details":null,"documentation":{"short":"Determines if the project has an OpenSSF (formerly CII) Best Practices Badge.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#cii-best-practices"}},{"name":"Security-Policy","score":0,"reason":"security policy file not detected","details":["Warn: no security policy file detected","Warn: no security file to analyze","Warn: no security file to analyze","Warn: no security file to analyze"],"documentation":{"short":"Determines if the project has published a security 
policy.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#security-policy"}},{"name":"Vulnerabilities","score":10,"reason":"0 existing vulnerabilities detected","details":null,"documentation":{"short":"Determines if the project has open, known unfixed vulnerabilities.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#vulnerabilities"}},{"name":"Fuzzing","score":0,"reason":"project is not fuzzed","details":["Warn: no fuzzer integrations found"],"documentation":{"short":"Determines if the project uses fuzzing.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#fuzzing"}},{"name":"License","score":9,"reason":"license file detected","details":["Info: project has a license file: COPYING:0","Warn: project license file does not contain an FSF or OSI license."],"documentation":{"short":"Determines if the project has defined a license.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#license"}},{"name":"Signed-Releases","score":-1,"reason":"no releases found","details":null,"documentation":{"short":"Determines if the project cryptographically signs release artifacts.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#signed-releases"}},{"name":"Branch-Protection","score":0,"reason":"branch protection not enabled on development/release branches","details":["Warn: branch protection not enabled for branch 'master'"],"documentation":{"short":"Determines if the default and release branches are protected with GitHub's branch protection settings.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#branch-protection"}},{"name":"SAST","score":0,"reason":"SAST tool is not run on all commits -- score normalized to 0","details":["Warn: 0 commits out of 29 are checked with a 
SAST tool"],"documentation":{"short":"Determines if the project uses static code analysis.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#sast"}}]},"last_synced_at":"2025-08-22T20:42:41.214Z","repository_id":1749541,"created_at":"2025-08-22T20:42:41.214Z","updated_at":"2025-08-22T20:42:41.214Z"},"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29637408,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-19T22:32:43.237Z","status":"ssl_error","status_checked_at":"2026-02-19T22:32:38.330Z","response_time":117,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-07-31T11:00:52.941Z","updated_at":"2026-02-20T00:30:52.756Z","avatar_url":"https://github.com/pythonprofilers.png","language":"Python","readme":".. image:: https://travis-ci.org/pythonprofilers/memory_profiler.svg?branch=master\n    :target: https://travis-ci.org/pythonprofilers/memory_profiler\n\n=================\n Memory Profiler\n=================\n\n\n**Note:** This package is no longer actively maintained. I won't be actively responding to issues.\n\nThis is a python module for monitoring memory consumption of a process\nas well as line-by-line analysis of memory consumption for python\nprograms. 
It is a pure Python module which depends on the `psutil\n\u003chttp://pypi.python.org/pypi/psutil\u003e`_ module.\n\n\n==============\n Installation\n==============\nInstall via pip::\n\n    $ pip install -U memory_profiler\n\nThe package is also available on `conda-forge\n\u003chttps://github.com/conda-forge/memory_profiler-feedstock\u003e`_.\n\nTo install from source, download the package, extract it and type::\n\n    $ pip install .\n\n===========\nQuick Start\n===========\n\nUse `mprof` to generate a full memory usage report of your executable and to plot it.\n\n.. code-block:: bash\n\n    mprof run executable\n    mprof plot\n\nThe plot will look something like this:\n\n.. image:: https://i.stack.imgur.com/ixCH4.png\n\n=======\n Usage\n=======\n\n\nline-by-line memory usage\n=========================\n\nThe line-by-line memory usage mode is used much in the same way as\n`line_profiler \u003chttps://pypi.python.org/pypi/line_profiler/\u003e`_: first\ndecorate the function you would like to profile with ``@profile`` and\nthen run the script with specific arguments to the Python interpreter.\n\nIn the following example, we create a simple function ``my_func`` that\nallocates lists ``a``, ``b`` and then deletes ``b``:\n\n.. code-block:: python\n\n    @profile\n    def my_func():\n        a = [1] * (10 ** 6)\n        b = [2] * (2 * 10 ** 7)\n        del b\n        return a\n\n    if __name__ == '__main__':\n        my_func()\n\n\nExecute the code passing the option ``-m memory_profiler`` to the\nPython interpreter to load the memory_profiler module and print to\nstdout the line-by-line analysis. 
If the file name was example.py,\nthis would result in::\n\n    $ python -m memory_profiler example.py\n\nOutput will follow::\n\n    Line #    Mem usage    Increment  Occurrences   Line Contents\n    ============================================================\n         3   38.816 MiB   38.816 MiB           1   @profile\n         4                                         def my_func():\n         5   46.492 MiB    7.676 MiB           1       a = [1] * (10 ** 6)\n         6  199.117 MiB  152.625 MiB           1       b = [2] * (2 * 10 ** 7)\n         7   46.629 MiB -152.488 MiB           1       del b\n         8   46.629 MiB    0.000 MiB           1       return a\n\n\nThe first column represents the line number of the code that has been\nprofiled, the second column (*Mem usage*) the memory usage of the\nPython interpreter after that line has been executed. The third column\n(*Increment*) represents the difference in memory usage of the current line\nwith respect to the previous one. The fourth column (*Occurrences*) shows\nthe number of times the profiler has executed each line. The last column\n(*Line Contents*) prints the code that has been profiled.\n\nDecorator\n=========\nA function decorator is also available.  Use as follows:\n\n.. code-block:: python\n\n    from memory_profiler import profile\n\n    @profile\n    def my_func():\n        a = [1] * (10 ** 6)\n        b = [2] * (2 * 10 ** 7)\n        del b\n        return a\n\nIn this case the script can be run without specifying ``-m\nmemory_profiler`` in the command line.\n\nWhen using the decorator, you can specify the precision as an argument to the\ndecorator function.  Use as follows:\n\n.. 
code-block:: python\n\n    from memory_profiler import profile\n\n    @profile(precision=4)\n    def my_func():\n        a = [1] * (10 ** 6)\n        b = [2] * (2 * 10 ** 7)\n        del b\n        return a\n\nIf a Python script with the decorator ``@profile`` is called using ``-m\nmemory_profiler`` in the command line, the ``precision`` parameter is ignored.\n\nTime-based memory usage\n==========================\nSometimes it is useful to have full memory usage reports as a function of\ntime (not line-by-line) of external processes (be they Python scripts or not).\nIn this case the executable ``mprof`` might be useful. Use it like::\n\n    mprof run \u003cexecutable\u003e\n    mprof plot\n\nThe first line runs the executable and records memory usage over time,\nin a file written in the current directory.\nOnce it's done, a graph plot can be obtained using the second line.\nThe recorded file contains timestamps, which allows several\nprofiles to be kept at the same time.\n\nHelp on each `mprof` subcommand can be obtained with the `-h` flag,\ne.g. `mprof run -h`.\n\nIn the case of a Python script, using the previous command does not\ngive you any information on which function is executed at a given\ntime. Depending on the case, it can be difficult to identify the part\nof the code that is causing the highest memory usage.\n\nAdding the `profile` decorator to a function (while ensuring there is no\n`from memory_profiler import profile` statement) and running the Python\nscript with::\n\n    mprof run --python python \u003cscript\u003e\n\nwill record timestamps when entering/leaving the profiled function. Running::\n\n    mprof plot\n\nafterward will plot the result, making plots (using matplotlib) similar to these:\n\n.. 
image:: https://camo.githubusercontent.com/3a584c7cfbae38c9220a755aa21b5ef926c1031d/68747470733a2f2f662e636c6f75642e6769746875622e636f6d2f6173736574732f313930383631382f3836313332302f63623865376337382d663563632d313165322d386531652d3539373237623636663462322e706e67\n   :target: https://github.com/scikit-learn/scikit-learn/pull/2248\n   :height: 350px\n\nor, with ``mprof plot --flame`` (the function and timestamp names will appear on hover):\n\n.. image:: ./images/flamegraph.png\n   :height: 350px\n\nA discussion of these capabilities can be found `here \u003chttp://fa.bianp.net/blog/2014/plot-memory-usage-as-a-function-of-time/\u003e`_.\n\n.. warning:: If your Python file imports the memory profiler with `from memory_profiler import profile`, these timestamps will not be recorded. Comment out the import, leave your functions decorated, and re-run.\n\nThe available commands for `mprof` are:\n\n  - ``mprof run``: running an executable, recording memory usage.\n  - ``mprof plot``: plotting one of the recorded memory usage files (by default,\n    the last one).\n  - ``mprof list``: listing all recorded memory usage files in a\n    user-friendly way.\n  - ``mprof clean``: removing all recorded memory usage files.\n  - ``mprof rm``: removing specific recorded memory usage files.\n\nTracking forked child processes\n===============================\nIn a multiprocessing context, the main process will spawn child processes whose\nsystem resources are allocated separately from the parent process. This can\nlead to an inaccurate report of memory usage since by default only the parent\nprocess is being tracked. 
The ``mprof`` utility provides two mechanisms to\ntrack the usage of child processes: summing the memory of all children into the\nparent's usage, and tracking each child individually.\n\nTo create a report that combines memory usage of all the children and the\nparent, use the ``include-children`` flag in either the ``profile`` decorator or\nas a command line argument to ``mprof``::\n\n    mprof run --include-children \u003cscript\u003e\n\nThe second method tracks each child independently of the main process,\nserializing child rows by index to the output stream. Use the ``multiprocess``\nflag and plot as follows::\n\n    mprof run --multiprocess \u003cscript\u003e\n    mprof plot\n\nThis will create a plot using matplotlib similar to this:\n\n.. image:: https://cloud.githubusercontent.com/assets/745966/24075879/2e85b43a-0bfa-11e7-8dfe-654320dbd2ce.png\n    :target: https://github.com/pythonprofilers/memory_profiler/pull/134\n    :height: 350px\n\nYou can combine both the ``include-children`` and ``multiprocess`` flags to show\nthe total memory of the program as well as each child individually. If using\nthe API directly, note that the return from ``memory_usage`` will include the\nchild memory in a nested list along with the main process memory.\n\nPlot settings\n===============================\n\nBy default, the command line call is set as the graph title. If you wish to customize it, you can use the ``-t`` option to manually set the figure title::\n\n    mprof plot -t 'Recorded memory usage'\n\nYou can also hide the function timestamps using the ``-n`` flag, such as::\n\n    mprof plot -n\n\nTrend lines and their numeric slope can be plotted using the ``-s`` flag, such as::\n\n    mprof plot -s\n\n.. 
image:: ./images/trend_slope.png\n   :height: 350px\n\nThe intended usage of the ``-s`` switch is to check the trend line's numerical slope over a significant time period:\n\n  - ``\u003e0``: it might mean a memory leak.\n  - ``~0``: if 0 or near 0, the memory usage may be considered stable.\n  - ``\u003c0``: to be interpreted depending on the expected process memory usage patterns; it might also mean that the sampling period is too short.\n\nThe trend lines are for illustrative purposes and are plotted as (very) small dashed lines.\n\n\nSetting debugger breakpoints\n=============================\nIt is possible to set breakpoints depending on the amount of memory used.\nThat is, you can specify a threshold and, as soon as the program uses more\nmemory than the specified threshold, it will stop execution\nand drop into the pdb debugger. To use it, you will have to decorate\nthe function as done in the previous section with ``@profile`` and then\nrun your script with the option ``-m memory_profiler --pdb-mmem=X``,\nwhere X is a number representing the memory threshold in MB. For example::\n\n    $ python -m memory_profiler --pdb-mmem=100 my_script.py\n\nwill run ``my_script.py`` and step into the pdb debugger as soon as the code\nuses more than 100 MB in the decorated function.\n\n.. TODO: alternatives to decoration (for example when you don't want to modify\n    the file where your function lives).\n\n=====\n API\n=====\nmemory_profiler exposes a number of functions to be used in third-party\ncode.\n\n\n\n``memory_usage(proc=-1, interval=.1, timeout=None)`` returns the memory usage\nover a time interval. The first argument, ``proc``, represents what\nshould be monitored.  This can either be the PID of a process (not\nnecessarily a Python program), a string containing some Python code to\nbe evaluated, or a tuple ``(f, args, kw)`` containing a function and its\narguments to be evaluated as ``f(*args, **kw)``. For example,\n\n.. 
code-block:: python\n\n    \u003e\u003e\u003e from memory_profiler import memory_usage\n    \u003e\u003e\u003e mem_usage = memory_usage(-1, interval=.2, timeout=1)\n    \u003e\u003e\u003e print(mem_usage)\n    [7.296875, 7.296875, 7.296875, 7.296875, 7.296875]\n\n\nHere I've told memory_profiler to get the memory consumption of the\ncurrent process over a period of 1 second with a time interval of 0.2\nseconds. As PID I've given it -1, which is a special number (PIDs are\nusually positive) that means the current process; that is, I'm getting the\nmemory usage of the current Python interpreter. Thus I'm getting\naround 7MB of memory usage from a plain Python interpreter. If I try\nthe same thing on IPython (console) I get 29MB, and if I try the same\nthing on the IPython notebook it scales up to 44MB.\n\n\nIf you'd like to get the memory consumption of a Python function, then\nyou should specify the function and its arguments in the tuple ``(f,\nargs, kw)``. For example:\n\n.. code-block:: python\n\n    \u003e\u003e\u003e # define a simple function\n    \u003e\u003e\u003e def f(a, n=100):\n        ...     import time\n        ...     time.sleep(2)\n        ...     b = [a] * n\n        ...     time.sleep(1)\n        ...     return b\n        ...\n    \u003e\u003e\u003e from memory_profiler import memory_usage\n    \u003e\u003e\u003e memory_usage((f, (1,), {'n': int(1e6)}))\n\nThis will execute the code `f(1, n=int(1e6))` and return the memory\nconsumption during this execution.\n\n=========\nREPORTING\n=========\n\nThe output can be redirected to a log file by passing an IO stream as a\nparameter to the decorator, like ``@profile(stream=fp)``:\n\n.. code-block:: python\n\n    \u003e\u003e\u003e fp = open('memory_profiler.log', 'w+')\n    \u003e\u003e\u003e @profile(stream=fp)\n    \u003e\u003e\u003e def my_func():\n        ...     a = [1] * (10 ** 6)\n        ...     b = [2] * (2 * 10 ** 7)\n        ...     del b\n        ...     
return a\n\nFor details, refer to examples/reporting_file.py.\n\n``Reporting via the logging module:``\n\nSometimes it is very convenient to use the logging module, especially\nwhen we need to use a RotatingFileHandler.\n\nThe output can be redirected to the logging module by simply making use of\nthe LogFile class of the memory_profiler module.\n\n.. code-block:: python\n\n    \u003e\u003e\u003e from memory_profiler import LogFile\n    \u003e\u003e\u003e import sys\n    \u003e\u003e\u003e sys.stdout = LogFile('memory_profile_log')\n\n``Customized reporting:``\n\nSending everything to the log file while running the memory_profiler\ncan be cumbersome; one can choose to keep only entries with increments\nby setting the reportIncrementFlag parameter of the\nLogFile class of the memory_profiler module.\n\n.. code-block:: python\n\n    \u003e\u003e\u003e from memory_profiler import LogFile\n    \u003e\u003e\u003e import sys\n    \u003e\u003e\u003e sys.stdout = LogFile('memory_profile_log', reportIncrementFlag=False)\n\nFor details, refer to examples/reporting_logger.py.\n\n=====================\n IPython integration\n=====================\nAfter installing the module, if you use IPython, you can use the `%mprun`, `%%mprun`,\n`%memit` and `%%memit` magics.\n\nFor IPython 0.11+, you can use the module directly as an extension, with\n``%load_ext memory_profiler``.\n\nTo activate it whenever you start IPython, edit the configuration file for your\nIPython profile, ~/.ipython/profile_default/ipython_config.py, to register the\nextension like this (if you already have other extensions, just add this one to\nthe list):\n\n.. code-block:: python\n\n    c.InteractiveShellApp.extensions = [\n        'memory_profiler',\n    ]\n\n(If the config file doesn't already exist, run ``ipython profile create`` in\na terminal.)\n\nIt can then be used directly from IPython to obtain a line-by-line\nreport using the `%mprun` or `%%mprun` magic command. 
In this case, you can skip\nthe `@profile` decorator and instead use the `-f` parameter, like\nthis. Note however that function my_func must be defined in a file\n(cannot have been defined interactively in the Python interpreter):\n\n.. code-block:: python\n\n    In [1]: from example import my_func, my_func_2\n\n    In [2]: %mprun -f my_func my_func()\n\nor in cell mode:\n\n.. code-block:: python\n\n    In [3]: %%mprun -f my_func -f my_func_2\n       ...: my_func()\n       ...: my_func_2()\n\nAnother useful magic that we define is `%memit`, which is analogous to\n`%timeit`. It can be used as follows:\n\n.. code-block:: python\n\n    In [1]: %memit range(10000)\n    peak memory: 21.42 MiB, increment: 0.41 MiB\n\n    In [2]: %memit range(1000000)\n    peak memory: 52.10 MiB, increment: 31.08 MiB\n\nor in cell mode (with setup code):\n\n.. code-block:: python\n\n    In [3]: %%memit l=range(1000000)\n       ...: len(l)\n       ...:\n    peak memory: 52.14 MiB, increment: 0.08 MiB\n\nFor more details, see the docstrings of the magics.\n\nFor IPython 0.10, you can install it by editing the IPython configuration\nfile ~/.ipython/ipy_user_conf.py to add the following lines:\n\n.. code-block:: python\n\n    # These two lines are standard and probably already there.\n    import IPython.ipapi\n    ip = IPython.ipapi.get()\n\n    # These two are the important ones.\n    import memory_profiler\n    memory_profiler.load_ipython_extension(ip)\n\n===============================\nMemory tracking backends\n===============================\n`memory_profiler` supports different memory tracking backends including: 'psutil', 'psutil_pss', 'psutil_uss', 'posix', 'tracemalloc'.\nIf no specific backend is specified the default is to use \"psutil\" which measures RSS aka \"Resident Set Size\". 
\nIn some cases (particularly when tracking child processes) RSS may overestimate memory usage (see `example/example_psutil_memory_full_info.py` for an example).\nFor more information on \"psutil_pss\" (measuring PSS) and \"psutil_uss\", please refer to:\nhttps://psutil.readthedocs.io/en/latest/index.html?highlight=memory_info#psutil.Process.memory_full_info\n\nCurrently, the backend can be set via the CLI::\n\n    $ python -m memory_profiler --backend psutil my_script.py\n\nand is exposed by the API:\n\n.. code-block:: python\n\n    \u003e\u003e\u003e from memory_profiler import memory_usage\n    \u003e\u003e\u003e mem_usage = memory_usage(-1, interval=.2, timeout=1, backend=\"psutil\")\n\n============================\n Frequently Asked Questions\n============================\n    * Q: How accurate are the results?\n    * A: This module gets the memory consumption by querying the\n      operating system kernel about the amount of memory the current\n      process has allocated, which might be slightly different from\n      the amount of memory that is actually used by the Python\n      interpreter. Also, because of how the garbage collector works in\n      Python, the result might be different between platforms and even\n      between runs.\n\n    * Q: Does it work under Windows?\n    * A: Yes, thanks to the\n      `psutil \u003chttp://pypi.python.org/pypi/psutil\u003e`_ module.\n\n\n===========================\n Support, bugs \u0026 wish list\n===========================\nFor support, please ask your question on `Stack Overflow\n\u003chttp://stackoverflow.com/\u003e`_ and add the `*memory-profiling* tag \u003chttp://stackoverflow.com/questions/tagged/memory-profiling\u003e`_.\nSend issues, proposals, etc. to `GitHub's issue tracker\n\u003chttps://github.com/pythonprofilers/memory_profiler/issues\u003e`_.\n\nIf you've got questions regarding development, you can email me\ndirectly at f@bianp.net.\n\n.. 
image:: http://fa.bianp.net/static/tux_memory_small.png\n\n\n=============\n Development\n=============\nLatest sources are available from github:\n\n    https://github.com/pythonprofilers/memory_profiler\n\n===============================\nProjects using memory_profiler\n===============================\n\n`Benchy \u003chttps://github.com/python-recsys/benchy\u003e`_\n\n`IPython memory usage \u003chttps://github.com/ianozsvald/ipython_memory_usage\u003e`_\n\n`PySpeedIT \u003chttps://github.com/peter1000/PySpeedIT\u003e`_ (uses a reduced version of memory_profiler)\n\n`pydio-sync \u003chttps://github.com/pydio/pydio-sync\u003e`_ (uses custom wrapper on top of memory_profiler)\n\n=========\n Authors\n=========\nThis module was written by `Fabian Pedregosa \u003chttp://fseoane.net\u003e`_\nand `Philippe Gervais \u003chttps://github.com/pgervais\u003e`_\ninspired by Robert Kern's `line profiler\n\u003chttp://packages.python.org/line_profiler/\u003e`_.\n\n`Tom \u003chttp://tomforb.es/\u003e`_ added windows support and speed improvements via the\n`psutil \u003chttp://pypi.python.org/pypi/psutil\u003e`_ module.\n\n`Victor \u003chttps://github.com/octavo\u003e`_ added python3 support, bugfixes and general\ncleanup.\n\n`Vlad Niculae \u003chttp://vene.ro/\u003e`_ added the `%mprun` and `%memit` IPython magics.\n\n`Thomas Kluyver \u003chttps://github.com/takluyver\u003e`_ added the IPython extension.\n\n`Sagar UDAY KUMAR \u003chttps://github.com/sagaru\u003e`_ added Report generation feature and examples.\n\n`Dmitriy Novozhilov \u003chttps://github.com/demiurg906\u003e`_ and `Sergei Lebedev \u003chttps://github.com/superbobry\u003e`_ added support for `tracemalloc \u003chttps://docs.python.org/3/library/tracemalloc.html\u003e`_.\n\n`Benjamin Bengfort \u003chttps://github.com/bbengfort\u003e`_ added support for tracking the usage of individual child processes and plotting them.\n\n`Muhammad Haseeb Tariq \u003chttps://github.com/mhaseebtariq\u003e`_ fixed issue #152, which made 
the whole interpreter hang on functions that launched an exception.\n\n`Juan Luis Cano \u003chttps://github.com/Juanlu001\u003e`_ modernized the infrastructure and helped with various things.\n\n`Martin Becker \u003chttps://github.com/mgbckr\u003e`_ added PSS and USS tracking via the psutil backend.\n\n=========\n License\n=========\nBSD License, see file COPYING for full text.\n","funding_links":[],"categories":["Python","Development Environment","HarmonyOS","System Monitoring \u0026 Profiling","Python生态圈Dev\u0026Ops工具与服务","Uncategorized"],"sub_categories":["Debugging and Tracing","Windows Manager","Uncategorized"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fpythonprofilers%2Fmemory_profiler","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fpythonprofilers%2Fmemory_profiler","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fpythonprofilers%2Fmemory_profiler/lists"}