{"id":13689220,"url":"https://github.com/SigmaQuan/Better-Python-59-Ways","last_synced_at":"2025-05-01T23:32:56.470Z","repository":{"id":45770526,"uuid":"81207058","full_name":"SigmaQuan/Better-Python-59-Ways","owner":"SigmaQuan","description":"Code Sample of Book \"Effective Python: 59 Specific Ways to Write Better Pyton\" by Brett Slatkin","archived":false,"fork":false,"pushed_at":"2023-06-13T19:03:29.000Z","size":297,"stargazers_count":1389,"open_issues_count":26,"forks_count":221,"subscribers_count":43,"default_branch":"master","last_synced_at":"2024-10-29T17:49:50.712Z","etag":null,"topics":["effective-python","python","python-3","tricks"],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/SigmaQuan.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2017-02-07T12:45:20.000Z","updated_at":"2024-10-28T03:26:17.000Z","dependencies_parsed_at":"2022-08-12T12:20:30.053Z","dependency_job_id":"86c03da7-7a92-44d3-98f0-fcb07bc45ddb","html_url":"https://github.com/SigmaQuan/Better-Python-59-Ways","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SigmaQuan%2FBetter-Python-59-Ways","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SigmaQuan%2FBetter-Python-59-Ways/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SigmaQuan%2FBetter-Python-59-Ways/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SigmaQuan%2FBetter-Python-59-Ways/manifests","owner_url":"htt
ps://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/SigmaQuan","download_url":"https://codeload.github.com/SigmaQuan/Better-Python-59-Ways/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":224187969,"owners_count":17270367,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["effective-python","python","python-3","tricks"],"created_at":"2024-08-02T15:01:38.648Z","updated_at":"2024-11-12T13:31:04.519Z","avatar_url":"https://github.com/SigmaQuan.png","language":"Python","readme":"# Effective Python: 59 Specific Ways to Write Better Python\nCode Sample of Book \"Effective Python: 59 Specific Ways to Write Better Python\" by Brett Slatkin.\n\n## Chapter 1: Pythonic thinking\n\n\n### [Item 1: Know which version of Python you're using](item_01_version_of_python.py)\n- 1. There are two major versions of Python still in active use: Python 2 and\nPython 3.\n- 2. There are multiple popular runtimes for Python: CPython, Jython,\n     IronPython, PyPy, etc.\n- 3. Be sure that the command-line for running Python on your system is the\n     version you expect it to be.\n- 4. Prefer Python 3 for your next project because that is the primary focus\n     of the Python community.\n\n\n### [Item 2: Follow the PEP 8 style guide](item_02_PEP8Style.py)\n- 1. Always follow the PEP 8 style guide when writing Python code.\n- 2. Sharing a common style with the larger Python community facilitates\n     collaboration with others.\n- 3. 
Using a consistent style makes it easier to modify your own code later.\n\n\n### [Item 3: Know the difference between bytes, str, and unicode](item_03_Difference_bytes_str_unicode.py)\n- 1. In Python 3, bytes contains sequences of 8-bit values, str contains\n     sequences of Unicode characters. bytes and str instances can't be\n     used together with operators (like \u003e or +).\n- 2. In Python 2, str contains sequences of 8-bit values, unicode contains\n     sequences of Unicode characters. str and unicode can be used together\n     with operators if the str only contains 7-bit ASCII characters.\n- 3. Use helper functions to ensure that the inputs you operate on are the\n     type of character sequence you expect (8-bit values, UTF-8 encoded\n     characters, Unicode characters, etc.)\n- 4. If you want to read or write binary data to/from a file, always open the\n     file using a binary mode (like 'rb' or 'wb').\n\n\n### [Item 4: Write helper functions instead of complex expressions](item_04_helper_function.py)\n- 1. Python's syntax makes it all too easy to write single-line expressions\n     that are overly complicated and difficult to read.\n- 2. Move complex expressions into helper functions, especially if you need to\n     use the same logic repeatedly.\n- 3. The if/else expression provides a more readable alternative to using\n     Boolean operators like or and and in expressions.\n\n\n### [Item 5: Know how to slice sequences](item_05_slice_sequence.py)\n- 1. Avoid being verbose: Don't supply 0 for the start index or the length of\n     the sequence for the end index.\n- 2. Slicing is forgiving of start or end indexes that are out of bounds,\n     making it easy to express slices on the front or back boundaries of a\n     sequence (like a[:20] or a[-20:]).\n- 3. 
Assigning to a list slice will replace that range in the original\n     sequence with what's referenced even if their lengths are different.\n\n\n### [Item 6: Avoid using start, end and stride in a single slice](item_06_avoid_using.py)\n- 1. Specifying start, end, and stride in a slice can be extremely confusing.\n- 2. Prefer using positive stride values in slices without start or end\n     indexes. Avoid negative stride values if possible.\n- 3. Avoid using start, end and stride together in a single slice. If you need\n     all three parameters, consider doing two assignments (one to slice,\n     another to stride) or using islice from the itertools built-in module.\n\n\n### [Item 7: Use list comprehensions instead of map and filter](item_07_list_not_map_filter.py)\n- 1. List comprehensions are clearer than the map and filter built-in\n     functions because they don't require extra lambda expressions.\n- 2. List comprehensions allow you to easily skip items from the input list, a\n     behavior map doesn't support without help from filter.\n- 3. Dictionaries and sets also support comprehension expressions.\n\n\n### [Item 8: Avoid more than two expressions in list comprehensions](item_08_no_more_than_2_expressions.py)\n- 1. List comprehensions support multiple levels of loops and multiple\n     conditions per loop level.\n- 2. List comprehensions with more than two expressions are very difficult to\n     read and should be avoided.\n\n\n### [Item 9: Consider generator expressions for large comprehensions](item_09_generator_expressions.py)\n- 1. List comprehensions can cause problems for large inputs by using too much\n     memory.\n- 2. Generator expressions avoid memory issues by producing outputs one at a\n     time as an iterator.\n- 3. Generator expressions can be composed by passing the iterator from one\n     generator expression into the for subexpression of another.\n- 4. 
Generator expressions execute very quickly when chained together.\n\n\n### [Item 10: Prefer enumerate over range](item_10_prefer_enumerate.py)\n- 1. enumerate provides concise syntax for looping over an iterator and\n     getting the index of each item from the iterator as you go.\n- 2. Prefer enumerate instead of looping over a range and indexing into a\n     sequence.\n- 3. You can supply a second parameter to enumerate to specify the number from\n     which to begin counting (zero is default).\n\n\n### [Item 11: Use zip to process iterators in parallel](item_11_use_zip.py)\n- 1. The zip built-in function can be used to iterate over multiple iterators\n     in parallel.\n- 2. In Python 3, zip is a lazy generator that produces tuples. In Python 2,\n     zip returns the full result as a list of tuples.\n- 3. zip truncates its outputs silently if you supply it with iterators of\n     different lengths.\n- 4. The zip_longest function from the itertools built-in module lets you\n     iterate over multiple iterators in parallel regardless of their\n     lengths (see Item 46: Use built-in algorithms and data structures).\n\n\n### [Item 12: Avoid else blocks after for and while loops](item_12_avoid_else.py)\n- 1. Python has special syntax that allows else blocks to immediately follow\n     for and while loop interior blocks.\n- 2. The else block after a loop only runs if the loop body did not encounter\n     a break statement.\n- 3. Avoid using else blocks after loops because their behavior isn't\n     intuitive and can be confusing.\n\n\n### [Item 13: Take advantage of each block in try/except/else/finally](item_13_try_except_else_finally.py)\n- 1. The try/finally compound statement lets you run cleanup code regardless\n     of whether exceptions were raised in the try block.\n- 2. The else block helps you minimize the amount of code in try blocks and\n     visually distinguish the success case from the try/except blocks.\n- 3. 
An else block can be used to perform additional actions after a\n     successful try block but before common cleanup in a finally block.\n\n\n## Chapter 2: Functions\n\n\n### [Item 14: Prefer exceptions to returning None](item_14_prefer_exceptions.py)\n- 1. Functions that return None to indicate special meaning are error prone\n     because None and other values (e.g., zero, the empty string) all\n     evaluate to False in conditional expressions.\n- 2. Raise exceptions to indicate special situations instead of returning\n     None. Expect the calling code to handle exceptions properly when they\n     are documented.\n\n\n### [Item 15: Know how closures interact with variable scope](item_15_closure_variable_scope.py)\n- 1. Closure functions can refer to variables from any of the scopes in which\n     they were defined.\n- 2. By default, closures can't affect enclosing scopes by assigning variables.\n- 3. In Python 3, use the nonlocal statement to indicate when a closure can\n     modify a variable in its enclosing scopes.\n- 4. In Python 2, use a mutable value (like a single-item list) to work around\n     the lack of the nonlocal statement.\n- 5. Avoid using nonlocal statements for anything beyond simple functions.\n\n\n### [Item 16: Consider generators instead of returning lists](item_16_generators_instead_of_lists.py)\n- 1. Using generators can be clearer than the alternative of returning lists\n    of accumulated results.\n- 2. The iterator returned by a generator produces the set of values passed to\n    yield expressions within the generator function's body.\n- 3. Generators can produce a sequence of outputs for arbitrarily large inputs\n    because their working memory doesn't include all inputs and outputs.\n\n\n### [Item 17: Be defensive when iterating over arguments](item_17_be_defensive.py)\n- 1. Beware of functions that iterate over input arguments multiple times. 
If\n     these arguments are iterators, you may see strange behavior and missing\n     values.\n- 2. Python's iterator protocol defines how containers and iterators interact\n     with the iter and next built-in functions, for loops, and related\n     expressions.\n- 3. You can easily define your own iterable container type by implementing\n     the __iter__ method as a generator.\n- 4. You can detect that a value is an iterator (instead of a container) if\n     calling iter on it twice produces the same result, which can then be\n     progressed with the next built-in function.\n\n\n### [Item 18: Reduce visual noise with variable positional arguments](item_18_reduce_visual_noise.py)\n- 1. Functions can accept a variable number of positional arguments by using\n    *args in the def statement.\n- 2. You can use the items from a sequence as the positional arguments for a\n    function with the * operator.\n- 3. Using the * operator with a generator may cause your program to run out\n    of memory and crash.\n- 4. Adding new positional parameters to functions that accept *args can\n    introduce hard-to-find bugs.\n\n\n### [Item 19: Provide optional behavior with keyword arguments](item_19_provide_optimal_behavior.py)\n- 1. Function arguments can be specified by position or by keyword.\n- 2. Keywords make it clear what the purpose of each argument is when it\n    would be confusing with only positional arguments.\n- 3. Keyword arguments with default values make it easy to add new behaviors\n    to a function, especially when the function has existing callers.\n- 4. Optional keyword arguments should always be passed by keyword instead of\n    by position.\n\n\n### [Item 20: Use None and Docstrings to specify dynamic default arguments](item_20_use_none_and_docstrings.py)\n- 1. Default arguments are only evaluated once: during function definition at\n     module load time. This can cause odd behaviors for dynamic values (like\n     {} or []).\n- 2. Use None as the default value for keyword arguments that have a dynamic\n     value. Document the actual default behavior in the function's docstring.\n\n\n
### [Item 21: Enforce clarity with keyword-only arguments](item_21_enforce_clarity.py)\n- 1. Keyword arguments make the intention of a function call more clear.\n- 2. Use keyword-only arguments to force callers to supply keyword arguments\n    for potentially confusing functions, especially those that accept\n    multiple Boolean flags.\n- 3. Python 3 supports explicit syntax for keyword-only arguments in\n    functions.\n- 4. Python 2 can emulate keyword-only arguments for functions by using\n    **kwargs and manually raising TypeError exceptions.\n\n\n## Chapter 3: Classes and Inheritance\n\n\n### [Item 22: Prefer helper classes over bookkeeping with dictionaries and tuples](item_22_prefer_helper_classes.py)\n- 1. Avoid making dictionaries with values that are other dictionaries or\n    long tuples.\n- 2. Use namedtuple for lightweight, immutable data containers before you need\n    the flexibility of a full class.\n- 3. Move your bookkeeping code to use multiple helper classes when your\n    internal state dictionaries get complicated.\n\n\n### [Item 23: Accept functions for simple interfaces instead of classes](item_23_accepts_functions_4_interfaces.py)\n- 1. Instead of defining and instantiating classes, functions are often all\n    you need for simple interfaces between components in Python.\n- 2. References to functions and methods in Python are first class, meaning\n    they can be used in expressions like any other type.\n- 3. The __call__ special method enables instances of a class to be called\n    like plain Python functions.\n- 4. 
When you need a function to maintain state, consider defining a class\n    that provides the __call__ method instead of defining a stateful closure\n    (see Item 15: \"Know how closures interact with variable scope\").\n\n\n### [Item 24: Use @classmethod polymorphism to construct objects generically](item_24_use_classmethod.py)\n- 1. Python only supports a single constructor per class, the __init__ method.\n- 2. Use @classmethod to define alternative constructors for your classes.\n- 3. Use class method polymorphism to provide generic ways to build and\n     connect concrete subclasses.\n\n\n### [Item 25: Initialize parent classes with super](item_25_init_parent_classes_with_super.py)\n- 1. Python's standard method resolution order (MRO) solves the problems of\n    superclass initialization order and diamond inheritance.\n- 2. Always use the super built-in function to initialize parent classes.\n\n\n### [Item 26: Use multiple inheritance only for mix-in utility classes](item_26_when_use_multiple_inheritance.py)\n- 1. Avoid using multiple inheritance if mix-in classes can achieve the same\n    outcome.\n- 2. Use pluggable behaviors at the instance level to provide per-class\n    customization when mix-in classes may require it.\n- 3. Compose mix-ins to create complex functionality from simple behaviors.\n\n\n### [Item 27: Prefer public attributes over private ones](item_27_prefer_public_attributes.py)\n- 1. Private attributes aren't rigorously enforced by the Python compiler.\n- 2. Plan from the beginning to allow subclasses to do more with your internal\n    APIs and attributes instead of locking them out by default.\n- 3. Use documentation of protected fields to guide subclasses instead of\n    trying to force access control with private attributes.\n- 4. 
Only consider using private attributes to avoid naming conflicts with\n    subclasses that are out of your control.\n\n\n### [Item 28: Inherit from collections.abc for custom container types](item_28_inherit_from_collections_abc.py)\n- 1. Inherit directly from Python's container types (like list or dict) for\n    simple use cases.\n- 2. Beware of the large number of methods required to implement custom\n    container types correctly.\n- 3. Have your custom container types inherit from the interface defined in\n    collections.abc to ensure that your classes match required interfaces\n    and behaviors.\n\n\n## Chapter 4: Metaclasses and Attributes\n\n\n### [Item 29: Use plain attributes instead of get and set methods](item_29_use_plain_attributes.py)\n- 1. Define new class interfaces using simple public attributes, and avoid set\n     and get methods.\n- 2. Use @property to define special behavior when attributes are accessed on\n     your objects, if necessary.\n- 3. Follow the rule of least surprise and avoid weird side effects in your\n    @property methods.\n- 4. Ensure that @property methods are fast; do slow or complex work using\n    normal methods.\n\n\n### [Item 30: Consider @property instead of refactoring attributes](item_30_consider_property.py)\n- 1. Use @property to give existing instance attributes new functionality.\n- 2. Make incremental progress toward better data models by using @property.\n- 3. Consider refactoring a class and all call sites when you find yourself\n     using @property too heavily.\n\n\n### [Item 31: Use descriptors for reusable @property methods](item_31_use_descriptors.py)\n- 1. Reuse the behavior and validation of @property methods by defining your\n     own descriptor classes.\n- 2. Use WeakKeyDictionary to ensure that your descriptor classes don't cause\n     memory leaks.\n- 3. 
Don't get bogged down trying to understand exactly how __getattribute__\n     uses the descriptor protocol for getting and setting attributes.\n\n\n### [Item 32: Use __getattr__, __getattribute__, and __setattr__ for lazy attributes](item_32_use_getattr.py)\n- 1. Use __getattr__ and __setattr__ to lazily load and save attributes for an\n     object.\n- 2. Understand that __getattr__ only gets called once when accessing a\n     missing attribute, whereas __getattribute__ gets called every time an\n     attribute is accessed.\n- 3. Avoid infinite recursion in __getattribute__ and __setattr__ by using\n     methods from super() (i.e., the object class) to access instance\n     attributes directly.\n\n\n### [Item 33: Validate subclasses with metaclasses](item_33_validate_subclass.py)\n- 1. Use metaclasses to ensure that subclasses are well formed at the time they\n     are defined, before objects of their type are constructed.\n- 2. Metaclasses have slightly different syntax in Python 2 vs. Python 3.\n- 3. The __new__ method of metaclasses is run after the class statement's\n     entire body has been processed.\n\n\n### [Item 34: Register class existence with metaclasses](item_34_register_class_existence.py)\n- 1. Class registration is a helpful pattern for building modular Python\n     programs.\n- 2. Metaclasses let you run registration code automatically each time your\n     base class is subclassed in a program.\n- 3. Using metaclasses for class registration avoids errors by ensuring that\n     you never miss a registration call.\n\n\n### [Item 35: Annotate class attributes with metaclasses](item_35_annotate_class_attributes.py)\n- 1. Metaclasses enable you to modify a class's attributes before the class is\n     fully defined.\n- 2. Descriptors and metaclasses make a powerful combination for declarative\n     behavior and runtime introspection.\n- 3. 
You can avoid both memory leaks and the weakref module by using\n     metaclasses along with descriptors.\n\n\n## Chapter 5: Concurrency and parallelism\n\n\n### [Item 36: Use subprocess to manage child processes](item_36_use_subprocess.py)\n- 1. Use the subprocess module to run child processes and manage their input\n    and output streams.\n- 2. Child processes run in parallel with the Python interpreter, enabling you\n    to maximize your CPU usage.\n- 3. Use the timeout parameter with communicate to avoid deadlocks and hanging\n    child processes.\n\n\n### [Item 37: Use threads for blocking I/O, avoid for parallelism](item_37_use_threads.py)\n- 1. Python threads can't run bytecode in parallel on multiple CPU cores\n     because of the global interpreter lock (GIL).\n- 2. Python threads are still useful despite the GIL because they provide an\n     easy way to do multiple things at seemingly the same time.\n- 3. Use Python threads to make multiple system calls in parallel. This allows\n     you to do blocking I/O at the same time as computation.\n\n\n### [Item 38: Use lock to prevent data races in threads](item_38_use_lock.py)\n- 1. Even though Python has a global interpreter lock, you're still\n     responsible for protecting against data races between the threads in\n     your programs.\n- 2. Your programs will corrupt their data structures if you allow multiple\n     threads to modify the same objects without locks.\n- 3. The Lock class in the threading built-in module is Python's standard\n     mutual exclusion lock implementation.\n\n\n### [Item 39: Use queue to coordinate work between threads](item_39_use_queue.py)\n- 1. Pipelines are a great way to organize sequences of work that run\n     concurrently using multiple Python threads.\n- 2. Be aware of the many problems in building concurrent pipelines: busy\n     waiting, stopping workers, and memory explosion.\n- 3. 
The Queue class has all of the facilities you need to build robust\n     pipelines: blocking operations, buffer sizes, and joining.\n\n\n### [Item 40: Consider coroutines to run many functions concurrently](item_40_consider_coroutines.py)\n- 1. Coroutines provide an efficient way to run tens of thousands of functions\n    seemingly at the same time.\n- 2. Within a generator, the value of the yield expression will be whatever\n    value was passed to the generator's send method from the exterior code.\n- 3. Coroutines give you a powerful tool for separating the core logic of your\n    program from its interaction with the surrounding environment.\n- 4. Python 2 doesn't support yield from or returning values from generators.\n\n\n### [Item 41: Consider concurrent.futures for true parallelism](item_41_consider+concurrent_futures.py)\n- 1. Moving CPU bottlenecks to C-extension modules can be an effective way to\n    improve performance while maximizing your investment in Python code.\n    However, the cost of doing so is high and may introduce bugs.\n- 2. The multiprocessing module provides powerful tools that can parallelize\n    certain types of Python computation with minimal effort.\n- 3. The power of multiprocessing is best accessed through the\n    concurrent.futures built-in module and its simple ProcessPoolExecutor\n    class.\n- 4. The advanced parts of the multiprocessing module should be avoided\n    because they are so complex.\n\n\n## Chapter 6: Built-in Modules\n\n\n### [Item 42: Define function decorators with functools.wraps](item_42_define_function_decorators.py)\n- 1. Decorators are Python syntax for allowing one function to modify another\n    function at runtime.\n- 2. Using decorators can cause strange behaviors in tools that do\n    introspection, such as debuggers.\n- 3. 
Use the wraps decorator from the functools built-in module when you\n    define your own decorators to avoid any issues.\n\n\n### [Item 43: Consider contextlib and with statements for reusable try/finally behavior](item_43_consier_contextlib.py)\n- 1. The with statement allows you to reuse logic from try/finally blocks and\n    reduce visual noise.\n- 2. The contextlib built-in module provides a contextmanager decorator that\n    makes it easy to use your own functions in with statements.\n- 3. The value yielded by context managers is supplied to the as part of the\n    with statement. It's useful for letting your code directly access the\n    cause of the special context.\n\n\n### [Item 44: Make pickle reliable with copyreg](item_44_make_pickle_reliable.py)\n- 1. The pickle built-in module is only useful for serializing and\n    de-serializing objects between trusted programs.\n- 2. The pickle module may break down when used for more than trivial use\n    cases.\n- 3. Use the copyreg built-in module with pickle to add missing attribute\n    values, allow versioning of classes, and provide stable import paths.\n\n\n### [Item 45: Use datetime instead of time for local clocks](item_45_use_date_time.py)\n- 1. Avoid using the time module for translating between different time zones.\n- 2. Use the datetime built-in module along with the pytz module to reliably\n    convert between times in different time zones.\n- 3. Always represent time in UTC and do conversions to local time as the\n    final step before presentation.\n\n\n### [Item 46: Use built-in algorithms and data structures](item_46_use_built_in_algorithm.py)\n- 1. Use Python's built-in modules for algorithms and data structures.\n- 2. Don't re-implement this functionality yourself. It's hard to get right.\n\n\n### [Item 47: Use decimal when precision is paramount](item_47_use_decimal.py)\n- 1. 
Python has built-in types and classes in modules that can represent\n    practically every type of numerical value.\n- 2. The Decimal class is ideal for situations that require high precision and\n    exact rounding behavior, such as computations of monetary values.\n\n\n### [Item 48: Know where to find community built modules](item_48_communit_built_modules.py)\n- 1. The Python Package Index (PyPI) contains a wealth of common packages\n    that are built and maintained by the Python community.\n- 2. pip is the command-line tool to use for installing packages from PyPI.\n- 3. pip is installed by default in Python 3.4 and above; you must install it\n    yourself for older versions.\n- 4. The majority of PyPI modules are free and open source software.\n\n\n## Chapter 7: Collaboration\n\n\n### [Item 49: Write docstrings for every function, class and module](item_49_write_docstrings_4_everything.py)\n- 1. Write documentation for every module, class and function using\n    docstrings. Keep them up to date as your code changes.\n- 2. For modules: introduce the contents of the module and any important\n    classes or functions all users should know about.\n- 3. For classes: document behavior, important attributes, and subclass\n    behavior in the docstring following the class statement.\n- 4. For functions and methods: document every argument, returned value,\n    raised exception, and other behaviors in the docstring following the\n    def statement.\n\n\n### [Item 50: Use packages to organize modules and provide stable APIs](item_50_use_packages.py)\n- 1. Packages in Python are modules that contain other modules. Packages allow\n    you to organize your code into separate, non-conflicting namespaces with\n    unique absolute module names.\n- 2. Simple packages are defined by adding an __init__.py file to a directory\n    that contains other source files. These files become the child modules\n    of the directory's package. 
Package directories may also contain other\n    packages.\n- 3. You can provide an explicit API for a module by listing its publicly\n    visible names in its __all__ special attribute.\n- 4. You can hide a package's internal implementation by only importing public\n    names in the package's __init__.py file or by naming internal-only\n    members with a leading underscore.\n- 5. When collaborating within a single team or on a single codebase, using\n    __all__ for explicit APIs is probably unnecessary.\n\n\n### [Item 51: Define a root exception to insulate callers from APIs](item_51_define_a_root_exception.py)\n- 1. Defining root exceptions for your modules allows API consumers to\n    insulate themselves from your API.\n- 2. Catching root exceptions can help you find bugs in code that consumes an\n    API.\n- 3. Catching the Python Exception base class can help you find bugs in API\n    implementations.\n- 4. Intermediate root exceptions let you add more specific types of\n    exceptions in the future without breaking your API consumers.\n\n\n### [Item 52: Know how to break circular dependencies](item_52_break_circular_dependencies.py)\n- 1. Circular dependencies happen when two modules must call into each other\n    at import time. They can cause your program to crash at startup.\n- 2. The best way to break a circular dependency is refactoring mutual\n    dependencies into a separate module at the bottom of the dependency tree.\n- 3. Dynamic imports are the simplest solution for breaking a circular\n    dependency between modules while minimizing refactoring and complexity.\n\n\n### [Item 53: Use virtual environments for isolated and reproducible dependencies](item_53_use_virtual_environments.py)\n- 1. Virtual environments allow you to use pip to install many different\n    versions of the same package on the same machine without conflicts.\n- 2. 
Virtual environments are created with pyvenv, enabled with source\n    bin/activate, and disabled with deactivate.\n- 3. You can dump all of the requirements of an environment with pip freeze.\n    You can reproduce the environment by supplying the requirements.txt file\n    to pip install -r.\n- 4. In versions of Python before 3.4, the pyvenv tool must be downloaded and\n    installed separately. The command-line tool is called virtualenv instead\n    of pyvenv.\n\n\n## Chapter 8: Production\n\n\n### [Item 54: Consider module-scoped code to configure deployment environments](item_54_consier_module_scoped_code.py)\n- 1. Programs often need to run in multiple deployment environments that each\n    have unique assumptions and configurations.\n- 2. You can tailor a module's contents to different deployment environments\n    by using normal Python statements in module scope.\n- 3. Module contents can be the product of any external condition, including\n    host introspection through the sys and os modules.\n\n\n### [Item 55: Use repr strings for debugging output](item_55_use_repr_strings.py)\n- 1. Calling print on built-in Python types will produce the human-readable\n    string version of a value, which hides type information.\n- 2. Calling repr on built-in Python types will produce the printable string\n    version of a value. These repr strings could be passed to the eval\n    built-in function to get back the original value.\n- 3. %s in format strings will produce human-readable strings like str. %r\n    will produce printable strings like repr.\n- 4. You can define the __repr__ method to customize the printable\n    representation of a class and provide more detailed debugging\n    information.\n- 5. You can reach into any object's __dict__ attribute to view its internals.\n\n\n### [Item 56: Test everything with unittest](item_56_unittest.py)\n- 1. The only way to have confidence in a Python program is to write tests.\n- 2. 
The unittest built-in module provides most of the facilities you'll need\n    to write good tests.\n- 3. You can define tests by subclassing TestCase and defining one method per\n    behavior you'd like to test. Test methods on TestCase classes must start\n    with the word test.\n- 4. It's important to write both unit tests (for isolated functionality) and\n    integration tests (for modules that interact).\n\n\n### [Item 57: Consider interactive debugging with pdb](item_57_pdb.py)\n- 1. You can initiate the Python interactive debugger at a point of interest\n    directly in your program with the import pdb; pdb.set_trace() statements.\n- 2. The Python debugger prompt is a full Python shell that lets you inspect\n    and modify the state of a running program.\n- 3. pdb shell commands let you precisely control program execution, allowing\n    you to alternate between inspecting program state and progressing program\n    execution.\n\n\n### [Item 58: Profile before optimizing](item_58_profile_before_optimizing.py)\n- 1. It's important to profile Python programs before optimizing because the\n    source of slowdowns is often obscure.\n- 2. Use the cProfile module instead of the profile module because it provides\n    more accurate profiling information.\n- 3. The Profile object's runcall method provides everything you need to\n    profile a tree of function calls in isolation.\n- 4. The Stats object lets you select and print the subset of profiling\n    information you need to see to understand your program's performance.\n\n\n### [Item 59: Use tracemalloc to understand memory usage and leaks](item_59_use_tracemalloc.py)\n- 1. It can be difficult to understand how Python programs use and leak\n    memory.\n- 2. The gc module can help you understand which objects exist, but it has no\n    information about how they were allocated.\n- 3. The tracemalloc built-in module provides powerful tools for understanding\n    the source of memory usage.\n- 4. 
tracemalloc is only available in Python 3.4 and above.\n","funding_links":[],"categories":["Python"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FSigmaQuan%2FBetter-Python-59-Ways","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FSigmaQuan%2FBetter-Python-59-Ways","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FSigmaQuan%2FBetter-Python-59-Ways/lists"}
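The README's Item 17 advice (define iterable containers via `__iter__`, and detect exhaustible iterators with `iter(x) is x`) can be sketched as follows. This is a minimal illustration, not code from the repository; the `ReadVisits` class and `normalize` function are hypothetical examples.

```python
# Sketch of Item 17: an iterable container whose __iter__ is a generator,
# so it can be iterated multiple times (unlike a plain iterator, which is
# exhausted after one pass).
class ReadVisits:
    def __init__(self, visits):
        self._visits = list(visits)

    def __iter__(self):
        # Each call to iter() returns a fresh generator.
        for visit in self._visits:
            yield visit


def normalize(numbers):
    # Defensive check: a true iterator returns itself from iter(),
    # so `iter(x) is x` detects an input that a second pass would miss.
    if iter(numbers) is numbers:
        raise TypeError('Must supply a container, not an iterator')
    total = sum(numbers)                        # first pass
    return [100 * v / total for v in numbers]   # second pass


visits = ReadVisits([15, 35, 50])
print(normalize(visits))  # [15.0, 35.0, 50.0]
```

Passing a generator or other one-shot iterator to `normalize` raises `TypeError` instead of silently returning wrong results.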
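Item 21's keyword-only argument syntax (everything after a bare `*` must be passed by keyword) can be sketched like this; the division helper and its flags are hypothetical examples, not from the repository.

```python
# Sketch of Item 21: Python 3 keyword-only arguments.
# The bare * forces callers to name the Boolean flags explicitly.
def safe_division(number, divisor, *, ignore_overflow=False,
                  ignore_zero_division=False):
    try:
        return number / divisor
    except OverflowError:
        if ignore_overflow:
            return 0
        raise
    except ZeroDivisionError:
        if ignore_zero_division:
            return float('inf')
        raise


print(safe_division(1.0, 0, ignore_zero_division=True))  # inf
# safe_division(1.0, 0, False, True) raises TypeError: the flags
# can only be supplied by keyword, never by position.
```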
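Item 42's point about `functools.wraps` can be shown with a small tracing decorator; the `trace` and `fibonacci` functions are illustrative, assuming nothing beyond the standard library.

```python
import functools

# Sketch of Item 42: without functools.wraps, the decorated function
# would report __name__ == 'wrapper' and lose its docstring, confusing
# debuggers and other introspection tools.
def trace(func):
    @functools.wraps(func)  # copies __name__, __doc__, etc. onto wrapper
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        print(f'{func.__name__}({args!r}, {kwargs!r}) -> {result!r}')
        return result
    return wrapper


@trace
def fibonacci(n):
    """Return the n-th Fibonacci number."""
    if n in (0, 1):
        return n
    return fibonacci(n - 2) + fibonacci(n - 1)


print(fibonacci.__name__)  # 'fibonacci', not 'wrapper'
```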
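Item 47's preference for `Decimal` in monetary computations can be sketched as follows; the per-minute phone rate is a hypothetical example.

```python
from decimal import Decimal, ROUND_UP

# Sketch of Item 47: exact decimal arithmetic plus explicit rounding,
# where binary floats would accumulate representation error.
rate = Decimal('1.45')     # dollars per minute (hypothetical)
seconds = Decimal('5')
cost = rate * seconds / Decimal('60')

# quantize rounds to a fixed exponent with a named rounding mode,
# here always rounding charges up to the next cent.
rounded = cost.quantize(Decimal('0.01'), rounding=ROUND_UP)
print(rounded)  # 0.13
```

Constructing `Decimal` from strings (not floats) matters: `Decimal('1.45')` is exact, while `Decimal(1.45)` inherits the float's binary approximation.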