{"id":18006918,"url":"https://github.com/wbenny/python-graceful-shutdown","last_synced_at":"2026-03-16T04:33:18.746Z","repository":{"id":65681348,"uuid":"245884965","full_name":"wbenny/python-graceful-shutdown","owner":"wbenny","description":"Example of a Python code that implements graceful shutdown while using asyncio, threading and multiprocessing","archived":false,"fork":false,"pushed_at":"2020-03-11T12:46:18.000Z","size":100,"stargazers_count":135,"open_issues_count":2,"forks_count":12,"subscribers_count":8,"default_branch":"master","last_synced_at":"2023-11-07T19:42:17.971Z","etag":null,"topics":["async","asyncio","graceful-shutdown","multiprocessing","python","threading"],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/wbenny.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2020-03-08T20:41:19.000Z","updated_at":"2023-10-16T06:56:59.000Z","dependencies_parsed_at":"2023-02-04T01:25:11.333Z","dependency_job_id":null,"html_url":"https://github.com/wbenny/python-graceful-shutdown","commit_stats":null,"previous_names":[],"tags_count":0,"template":null,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wbenny%2Fpython-graceful-shutdown","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wbenny%2Fpython-graceful-shutdown/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wbenny%2Fpython-graceful-shutdown/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wbenny%2Fpython-graceful-shutdown/manifests","owner_url":"https://repos.ecosyste.
ms/api/v1/hosts/GitHub/owners/wbenny","download_url":"https://codeload.github.com/wbenny/python-graceful-shutdown/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":222145444,"owners_count":16938477,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["async","asyncio","graceful-shutdown","multiprocessing","python","threading"],"created_at":"2024-10-30T01:11:09.950Z","updated_at":"2026-03-16T04:33:13.710Z","avatar_url":"https://github.com/wbenny.png","language":"Python","readme":"# Holy grail of graceful shutdown in Python\n\nGraceful shutdown should be an inseparable part of every serious application.\nBut graceful shutdowns are often hard, especially in the case of Python.\nThere are numerous questions on [StackOverflow](http://stackoverflow.com)\nasking how to correctly catch `KeyboardInterrupt` and how to perform application\ncleanup properly.\n\nThe headaches only multiply once you throw the `asyncio` and `multiprocessing`\nmodules into the ring.\n\nLet's try to figure it out!\n\n### Intro\n\nAlmost all applications have three stages of execution:\n* Initialization (usually in the form of `start()`/`initialize()`/...)\n* Actual execution\n* Finalization (usually in the form of `stop()`/`destroy()`/...)\n\nWhat happens if the application is instructed to stop during initialization?\nWhat happens if the application is instructed to stop during finalization?\nIs it even safe to kill it during those stages?\n\nIn the provided solution, we shield the initialization and finalization\nfrom termination - they always have to 
finish.  It should be a good practice\nto make these two critical stages run as quickly as possible (nobody likes\nservices that take minutes to start).  However, this is not always the case.\nIf - for example - initialization code tries to reach some remote file on a\nvery slow network, it might take a while.  Such operations shouldn't be part\nof initialization code, but rather part of the \"actual execution\".\n\nThis repository contains 2 scripts. Both scripts are similar at their core.\nBoth scripts have no external dependencies.\nBoth scripts have no side effects (they don't create any files, they don't\nmake any network connections, ...), therefore you should not worry about\nexecuting them.\nThey have been tested with **Python 3.8** on Windows \u0026 Linux.\nThey have very extensive logging.\n\nThe intention of these scripts is to **demonstrate** how it can be done,\nand to provide a reference from which you can take inspiration (i.e.\ncopy-paste).\nThey are not intended to be used as packages.\n\nTry to experiment with them!\n\n**And most importantly, try to `Ctrl+C` them at any time during the execution!**\n\n##### `simple.py`\n\nA very simple `asyncio` application that implements graceful shutdown.\n\n##### `complex.py`\n\nA more complex application that combines `asyncio`, `multiprocessing`\nand `ThreadPoolExecutor`.  It is a mere extension of the `simple.py`\nscript, but the core of the graceful shutdown remains the same.\n\nThe script demonstrates `DummyManager`, which - in a real-world scenario -\nrepresents a class that does some pythonic \"heavy lifting\", i.e. 
does\nsome CPU-intensive work.\n\nIt has 2 arbitrary \"process\" methods that simulate the heavy work.\nIt also has an `update()` method, which manipulates an internal state.\n\nOne real-world example of this class might be a\n[\"Yara rules\"](https://github.com/VirusTotal/yara-python) manager:\nInstead of `process_string()` there would be something like `match()`,\nand `update()` would update the internal `yara.Rules` object.\n\nWith the help of `multiprocessing.Process`, this `DummyManager` is then\nexecuted in several separate processes.  These process instances are then\nmanaged by the `MultiProcessManager`.\n\nThe `MultiProcessManager` is then wrapped by an asynchronous service\n`AsyncService1`, which executes its methods with the help of a\n`ThreadPoolExecutor`.\n\n### Delay KeyboardInterrupt on initialization/finalization\n\nWhen someone tries to SIGINT a Python process - directly or by pressing\n`Ctrl+C` - the Python process injects a `KeyboardInterrupt` into the running\ncode.\n\nIf the `KeyboardInterrupt` is raised during initialization of your application,\nit might have unwanted consequences, especially in complex applications\n(connections are not properly closed, not all content is written to a file,\n...).  The same applies to finalization.  Apart from that, properly handling\n(and potentially rolling back) the effects of initialization/finalization is hard.\nSimply put - initialization \u0026 finalization are something you **don't** want to\ninterrupt.\n\nTherefore, our application implements exactly this: initialization and\nfinalization are shielded from interruption.  If SIGINT/SIGTERM is signaled\nduring initialization, execution of the signal handler is delayed until the\ninitialization is done.  
If the application happens to be interrupted during\nthe initialization, then finalization is executed immediately after the\ninitialization is done.\n\n```python\ntry:\n    #\n    # Shield _start() from termination.\n    #\n\n    try:\n        with DelayedKeyboardInterrupt():\n            self._start()\n\n    #\n    # If there was an attempt to terminate the application,\n    # the KeyboardInterrupt is raised AFTER the _start() finishes\n    # its job.\n    #\n    # In that case, the KeyboardInterrupt is re-raised and caught in the\n    # exception handler below and _stop() is called to clean up all resources.\n    #\n    # Note that it might be generally unsafe to call stop() methods\n    # on objects that are not started properly.\n    # This is the main reason why the whole execution of _start()\n    # is shielded.\n    #\n\n    except KeyboardInterrupt:\n        print('!!! got KeyboardInterrupt during start')\n        raise\n\n    #\n    # Application is started now and is running.\n    # Wait for a termination event infinitely.\n    #\n\n    self._wait()\n\nexcept KeyboardInterrupt:\n    #\n    # The _stop() is also shielded from termination.\n    #\n    try:\n        with DelayedKeyboardInterrupt():\n            self._stop()\n    except KeyboardInterrupt:\n        print('!!! got KeyboardInterrupt during stop')\n```\n\n`DelayedKeyboardInterrupt` is a context manager that suppresses the\nSIGINT \u0026 SIGTERM signal handlers for a block of code.  
The signal handlers\nare called on exit from the block.\n\nIt is inspired by [this StackOverflow comment](https://stackoverflow.com/a/21919644).\n\n```python\nimport signal\n\nSIGNAL_TRANSLATION_MAP = {\n    signal.SIGINT: 'SIGINT',\n    signal.SIGTERM: 'SIGTERM',\n}\n\n\nclass DelayedKeyboardInterrupt:\n    def __init__(self):\n        self._sig = None\n        self._frame = None\n        self._old_signal_handler_map = None\n\n    def __enter__(self):\n        #\n        # Replace the SIGINT/SIGTERM handlers with our own and remember\n        # the old ones, so that they can be restored on exit.\n        #\n        self._old_signal_handler_map = {\n            sig: signal.signal(sig, self._handler)\n            for sig in SIGNAL_TRANSLATION_MAP\n        }\n\n    def __exit__(self, exc_type, exc_val, exc_tb):\n        for sig, handler in self._old_signal_handler_map.items():\n            signal.signal(sig, handler)\n\n        if self._sig is None:\n            return\n\n        #\n        # A signal arrived inside the block - invoke the original\n        # handler now, after the block has finished.\n        #\n        self._old_signal_handler_map[self._sig](self._sig, self._frame)\n\n    def _handler(self, sig, frame):\n        self._sig = sig\n        self._frame = frame\n        print(f'!!! {SIGNAL_TRANSLATION_MAP[sig]} received; delaying KeyboardInterrupt')\n```\n\n### Asynchronous code\n\nFor graceful shutdown of asynchronous applications, you have to forget about\n`asyncio.run()`. The behavior of `asyncio.run()` when `KeyboardInterrupt` is\nraised is to cancel all tasks, wait for their cancellation (i.e. run their\n`except asyncio.CancelledError` handlers) and then close the loop.\n\nThis is not always desired, and most importantly, you don't have any control\nover the order in which the tasks are cancelled.\n\nThe solution is to call an asynchronous finalizer function (e.g. you need\nsome kind of `async def astop()` function somewhere) when `KeyboardInterrupt`\nis raised.  
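A minimal sketch of this idea, using hypothetical `astart()`/`astop()` names (the repository's scripts use a richer variant of the same structure); the `raise KeyboardInterrupt` is only a stand-in for a real `Ctrl+C`, so the example terminates on its own:

```python
import asyncio

async def astart():
    # asynchronous initializer - create tasks, open connections, ...
    print('astart')

async def astop():
    # asynchronous finalizer - here *we* decide in which order
    # each task/service gets cancelled and awaited
    print('astop')

# Run the loop manually instead of using asyncio.run().
loop = asyncio.new_event_loop()
try:
    loop.run_until_complete(astart())
    # normally loop.run_until_complete(amain()) would run here ...
    raise KeyboardInterrupt  # ... until interrupted by Ctrl+C
except KeyboardInterrupt:
    # call our own finalizer instead of letting asyncio.run()
    # cancel the tasks in an arbitrary order
    loop.run_until_complete(astop())
finally:
    loop.close()
```

The key point is that the loop is still open inside the `except KeyboardInterrupt` handler, so `astop()` can still await whatever it needs before `loop.close()` runs.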
This way you have control over how each task gets cancelled.\n\n### Beware of `ThreadPoolExecutor`\n\nKeep in mind that when you schedule a function to be executed in the\n`ThreadPoolExecutor`, the function will be executed until completion,\nregardless of whether the `asyncio.get_running_loop().run_in_executor(...)`\ntask was cancelled.\n\nIt's probably obvious, but it is important to know this. If you schedule\ntoo many functions into the `ThreadPoolExecutor`, they won't get executed until\nthere's a thread ready to process them. If you fill all worker threads\nin the `ThreadPoolExecutor` with functions that never return, no other\nscheduled function will be executed.\n\nThis might be dangerous in situations where finalization is done in some\nsynchronous code (that must be scheduled by `run_in_executor()`), but the\nexecutor is busy processing other tasks - the finalization code\nwon't get a chance to be executed.\n\n```python\nimport asyncio\nimport time\nfrom concurrent.futures import ThreadPoolExecutor\n\nexecutor = ThreadPoolExecutor(max_workers=4)\n\ndef process():\n    print('process')\n    while True:\n        time.sleep(1)\n\ndef stop():\n    print('stop')\n\nasync def aprocess():\n    print('aprocess')\n    await asyncio.get_running_loop().run_in_executor(executor, process)\n\nasync def astop():\n    print('astop')\n    await asyncio.get_running_loop().run_in_executor(executor, stop)\n\nasync def amain():\n    task_list = [asyncio.create_task(aprocess()) for _ in range(4)]\n\n    #\n    # asyncio.sleep(0) yields the execution and lets the loop process\n    # other tasks (like the ones we've just created).\n    #\n    await asyncio.sleep(0)\n\n    #\n    # Cancel the asyncio tasks.\n    #\n    for task in task_list:\n        task.cancel()\n    await asyncio.gather(*task_list, return_exceptions=True)\n\n    #\n    # Even though we've cancelled the asyncio tasks, the process()\n    # functions are still being executed in the ThreadPoolExecutor.\n    #\n    # Because 4 tasks are now occupying the ThreadPoolExecutor 
infinitely,\n    # the next queued function in the executor won't get the chance to run.\n    #\n\n    await astop()\n\n    #\n    # We never get here!\n    # (Actually, we can get here - by cancelling the current task -\n    # however it doesn't change the fact that the stop() function\n    # will never be called.)\n    #\n```\n\n### Beware of Windows' buggy `ProactorEventLoop`\n\n```python\n#\n# Before the loop is finalized, we set up an exception handler that\n# suppresses several nasty exceptions.\n#\n# ConnectionResetError\n# --------------------\n# This exception is sometimes raised on Windows, possibly because of a bug in Python.\n#\n# ref: https://bugs.python.org/issue39010\n#\n# When this exception is raised, the context looks like this:\n# context = {\n#     'message': 'Error on reading from the event loop self pipe',\n#     'exception': ConnectionResetError(\n#         22, 'The I/O operation has been aborted because of either a thread exit or an application request',\n#         None, 995, None\n#       ),\n#     'loop': \u003cProactorEventLoop running=True closed=False debug=False\u003e\n# }\n#\n# OSError\n# -------\n# This exception is sometimes raised on Windows - usually when the application is\n# interrupted early after start.\n#\n# When this exception is raised, the context looks like this:\n# context = {\n#     'message': 'Cancelling an overlapped future failed',\n#     'exception': OSError(9, 'The handle is invalid', None, 6, None),\n#     'future': \u003c_OverlappedFuture pending overlapped=\u003cpending, 0x1d8937601f0\u003e\n#                 cb=[BaseProactorEventLoop._loop_self_reading()]\u003e,\n# }\n#\n\ndef __loop_exception_handler(loop, context: Dict[str, Any]):\n    if isinstance(context['exception'], ConnectionResetError):\n        print('__loop_exception_handler: suppressing ConnectionResetError')\n    elif isinstance(context['exception'], OSError):\n        print('__loop_exception_handler: suppressing OSError')\n    else:\n        
print(f'__loop_exception_handler: unhandled exception: {context}')\n\nloop.set_exception_handler(__loop_exception_handler)\n```\n\n### Don't forget to catch `KeyboardInterrupt` in `multiprocessing.Process` workers\n\nWhen an application uses `multiprocessing.Process` and the application gets\ninterrupted, the signal handler is called in all child processes.\nThis effectively means that `KeyboardInterrupt` is injected into all processes.\nIf this exception is unhandled, the process is usually terminated, but it spits\na nasty exception log with a traceback into the terminal (`stderr`).\n\nIf we want to get rid of this exception log, we should establish an exception\nhandler to catch the `KeyboardInterrupt` in the `multiprocessing.Process`\nworker method (either the `Process.run()` method, or the callback provided as the\n`target` parameter) and then terminate the application.\n\n```python\ndef _process_worker():\n    try:\n        __process_worker()\n    except KeyboardInterrupt:\n        print(f'[{multiprocessing.current_process().name}] ... Ctrl+C pressed, terminating ...')\n\ndef __process_worker():\n    while True:\n        time.sleep(1)\n\n#\n# ...\n#\n\nwith DelayedKeyboardInterrupt():\n    p = multiprocessing.Process(target=_process_worker)\n    p.start()\n```\n\n### ... 
or ignore the `KeyboardInterrupt` in `multiprocessing.Process` workers completely\n\nIf you're certain that you're going to cleanly shut down all the\n`multiprocessing.Process` instances, you can choose to suppress the\n`KeyboardInterrupt` in the process worker function.\n\n```python\ndef _process_worker(stop_event: multiprocessing.Event):\n    try:\n        #\n        # Because we have our own stop_event, we're going to suppress the\n        # KeyboardInterrupt during the execution of the __process_worker().\n        #\n        # Note that if the parent process dies without setting the stop_event,\n        # this process will be unresponsive to SIGINT/SIGTERM.\n        # The only way to stop this process would be to ruthlessly kill it.\n        #\n        with DelayedKeyboardInterrupt():\n            __process_worker(stop_event)\n\n    #\n    # Keep in mind that the KeyboardInterrupt will get delivered\n    # after leaving the DelayedKeyboardInterrupt() block.\n    #\n    except KeyboardInterrupt:\n        print(f'[{multiprocessing.current_process().name}] ... Ctrl+C pressed, terminating ...')\n\ndef __process_worker(stop_event: multiprocessing.Event):\n    stop_event.wait()\n\n#\n# ...\n#\n\nwith DelayedKeyboardInterrupt():\n    stop_event = multiprocessing.Event()\n    p = multiprocessing.Process(target=_process_worker, args=(stop_event,))\n    p.start()\n```\n\n### Synchronize start of the `multiprocessing.Process` workers\n\nIf the `KeyboardInterrupt` happens to be raised before the `target` worker\nfunction is reached, we'd still get that nasty exception log.  
If we want to\nbe sure we don't miss this exception, we need to synchronize the process\ncreation.\n\n```python\ndef _process_worker(\n        process_bootstrapped_event: multiprocessing.Event,\n        stop_event: multiprocessing.Event\n    ):\n    try:\n        #\n        # Because we have our own stop_event, we're going to suppress the\n        # KeyboardInterrupt during the execution of the __process_worker().\n        #\n        # Note that if the parent process dies without setting the stop_event,\n        # this process will be unresponsive to SIGINT/SIGTERM.\n        # The only way to stop this process would be to ruthlessly kill it.\n        #\n        with DelayedKeyboardInterrupt():\n            process_bootstrapped_event.set()\n            __process_worker(\n                process_bootstrapped_event,\n                stop_event\n            )\n\n    #\n    # Keep in mind that the KeyboardInterrupt will get delivered\n    # after leaving the DelayedKeyboardInterrupt() block.\n    #\n    except KeyboardInterrupt:\n        print(f'[{multiprocessing.current_process().name}] ... Ctrl+C pressed, terminating ...')\n\ndef __process_worker(\n        process_bootstrapped_event: multiprocessing.Event,\n        stop_event: multiprocessing.Event\n    ):\n    stop_event.wait()\n\n#\n# ...\n#\n\nwith DelayedKeyboardInterrupt():\n    process_bootstrapped_event = multiprocessing.Event()\n    stop_event = multiprocessing.Event()\n    p = multiprocessing.Process(target=_process_worker, args=(process_bootstrapped_event, stop_event))\n    p.start()\n\n    #\n    # Set some meaningful timeout - we don't want to wait here\n    # infinitely if the process creation somehow failed.\n    #\n    process_bootstrapped_event.wait(5)\n\ntry:\n    #\n    # Let the process run for some time.\n    #\n    time.sleep(5)\nexcept KeyboardInterrupt:\n    print(f'... 
Ctrl+C pressed, terminating ...')\nfinally:\n    #\n    # And then stop it and wait for graceful termination.\n    #\n    with DelayedKeyboardInterrupt():\n        stop_event.set()\n        p.join()\n```\n\n### License\n\nThis software is open-source under the MIT license. See the LICENSE.txt file in this repository.\n\nIf you find this project interesting, you can buy me a coffee\n\n```\n  BTC 3GwZMNGvLCZMi7mjL8K6iyj6qGbhkVMNMF\n  LTC MQn5YC7bZd4KSsaj8snSg4TetmdKDkeCYk\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fwbenny%2Fpython-graceful-shutdown","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fwbenny%2Fpython-graceful-shutdown","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fwbenny%2Fpython-graceful-shutdown/lists"}