{"id":28145083,"url":"https://github.com/goraved/playwright_python_practice","last_synced_at":"2025-05-14T22:16:06.904Z","repository":{"id":56321830,"uuid":"282146749","full_name":"Goraved/playwright_python_practice","owner":"Goraved","description":"Just a Playwright (python) tool practice","archived":false,"fork":false,"pushed_at":"2025-04-03T10:41:57.000Z","size":1322,"stargazers_count":11,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"master","last_synced_at":"2025-04-03T11:26:50.139Z","etag":null,"topics":["github-actions","playwright","playwright-python","python"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Goraved.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":".github/CODEOWNERS","security":null,"support":null}},"created_at":"2020-07-24T06:58:17.000Z","updated_at":"2025-04-03T10:43:37.000Z","dependencies_parsed_at":"2022-08-15T16:40:24.278Z","dependency_job_id":null,"html_url":"https://github.com/Goraved/playwright_python_practice","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Goraved%2Fplaywright_python_practice","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Goraved%2Fplaywright_python_practice/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Goraved%2Fplaywright_python_practice/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Goraved%2Fplaywright_python_practice/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Goraved","download_ur
l":"https://codeload.github.com/Goraved/playwright_python_practice/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254235692,"owners_count":22036966,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["github-actions","playwright","playwright-python","python"],"created_at":"2025-05-14T22:13:59.357Z","updated_at":"2025-05-14T22:16:06.881Z","avatar_url":"https://github.com/Goraved.png","language":"Python","funding_links":[],"categories":[],"sub_categories":[],"readme":"# Test Automation with Playwright Python\n\nThis project uses a **[Page Object Model (POM)](https://martinfowler.com/bliki/PageObject.html)** architecture and \n**Object-Oriented Programming (OOP)** approach to automate web interface testing using **[Playwright](https://playwright.dev/python/docs/intro)**.\n\n---\n\n## Installation and Setup\n\n### 1. Minimum Requirements\n\n- **Python**: Version 3.12 or newer.\n- **PyCharm**: Recommended for convenient development environment setup.\n\n### 2. Creating a Virtual Environment (venv) in PyCharm\n\n1. **Open your project in PyCharm** or create a new one.\n2. Go to menu: **File** → **Settings** (or **Preferences** on Mac).\n3. Select: **Project: \u003cyour_project\u003e** → **Python Interpreter**.\n4. Click on **Add Interpreter** → **Add Local Interpreter**.\n5. Choose **Virtual Environment** and ensure the path corresponds to your project folder.\n6. In the **Base interpreter** field, select Python 3.12 or a newer version.\n7. Click **OK** to create and activate the virtual environment.\n\n### 3. 
Installing Dependencies\n\nAll project dependencies are stored in the **`requirements.txt`** file.\n\n1. Open the terminal in PyCharm or any other terminal.\n2. Make sure `uv` is installed (if not, install it):\n   ```bash\n   curl -LsSf https://astral.sh/uv/install.sh | sh\n   ```\n3. Create and activate a virtual environment using uv:\n\n**For Windows:**\n\n```bash\nuv venv venv\nvenv\\Scripts\\activate\n```\n\n**For Mac/Linux:**\n\n```bash\nuv venv venv\nsource venv/bin/activate\n```\n\n4. Install dependencies from requirements.txt:\n\n```bash\nuv pip install -r requirements.txt\n```\n\n5. Initialize Playwright to download the necessary browsers:\n\n```bash\nplaywright install\n```\n\n---\n\n## How to Run Tests\n\n### 1. Running Tests in Parallel\n\n**[pytest-xdist](https://pypi.org/project/pytest-xdist/)** is used to run tests in multiple parallel processes, allowing tests to\nbe executed simultaneously on multiple CPU cores. This significantly reduces test execution time.\n\n- **Run on all available CPU cores**:\n  ```bash\n  pytest -n auto\n  ```\n- **Run tests on a specific number of processes**:\n  ```bash\n  pytest -n 4\n  ```\n\nNote: For maximum efficiency, match the number of processes to the number of CPU cores in your machine.\n\n### 2. Running Tests with a Report and Viewing Results\n\nA custom reporting mechanism is used to generate a detailed and interactive HTML report on test execution. 
The reporting\nimplementation is in the `templates/report_handler.py` and `report_template.html` modules.\n\n![html_report.jpg](html_reporter/static/html_report.jpg)\n\n![details.jpg](html_reporter/static/details.jpg)\n\n![error_details.jpg](html_reporter/static/error_details.jpg)\n\nTo run tests with report generation, use the `pytest` command with the `--html-report=reports/test_report.html`\nparameter.\n\n```bash\npytest --html-report=reports/test_report.html\n```\n\nAdditional report options:\n\n- `--report-title=\"Report Title\"` - sets the title for the HTML report\n- `--headless=true` - runs tests in headless browser mode\n\nViewing the report:\n\n- After tests are completed, open the `reports/test_report.html` file in your browser\n- The report contains:\n    - Overall test execution statistics (passed, failed, skipped, etc.)\n    - Interactive filters for analyzing results\n    - Timeline of test execution\n    - Detailed information about each test, including screenshots and error messages\n    - Information about the test environment\n\nThe report is automatically generated after all tests are completed and saved at the specified path.\n\n### 3. Running Tests with Retries\n\n**[pytest-rerunfailures](https://pypi.org/project/pytest-rerunfailures/)** is used to automatically rerun unstable\ntests. 
This option allows tests to be repeated in case of temporary errors, reducing the number of false failures (flaky\ntests).\n\n```bash\npytest --reruns 2 --reruns-delay 5\n```\n\n- `--reruns 2`: Retry test execution up to 2 times in case of failure.\n- `--reruns-delay 5`: 5-second delay between retries.\n\n### Combined Run Example\n\nTo run tests in parallel, with HTML reporting and retries:\n\n```bash\npytest -n auto --html-report=reports/test_report.html --reruns 2 --reruns-delay 5\n```\n\nThis command runs tests in parallel on all available cores, generates an HTML report, and retries unstable tests twice\nwith a 5-second delay.\n\n---\n\n## Project Architecture Overview\n\nThe project architecture is built on the **Page Object Model (POM)** pattern, which separates the logic of pages,\ncomponents, and elements of web applications. The main components of this architecture:\n\n- **BasePage**: Base class for all pages, containing common methods for working with web pages (e.g., navigation).\n- **BaseComponent**: Base class for components (e.g., header, modal windows) consisting of multiple elements.\n- **BaseElement**: Class for working with individual web elements (buttons, input fields, etc.), containing basic\n  interaction methods (clicking, entering text, uploading files, etc.).\n\n---\n\n## Project Architecture Diagram\n\n```mermaid\ngraph TD\n    A[Tests] --\u003e B[Pages]\n    B --\u003e C[Components]\n    C --\u003e D[Elements]\n    \n    H[utils] --\u003e A\n    I[html_reporter] --\u003e A\n    \n    subgraph \"Key Abstractions\"\n        J[BasePage] --\u003e B\n        K[BaseComponent] --\u003e C\n        L[BaseElement] --\u003e D\n    end\n    \n    subgraph \"Test Execution Flow\"\n        O[conftest.py] --\u003e P[Fixtures]\n        P --\u003e Q[Test]\n        Q --\u003e R[Assertions]\n        R --\u003e S[Reporting]\n    end\n```\n\nThis diagram shows the main components of the architecture and their relationships:\n\n1. 
**Tests** use **Pages**, which consist of **Components**, which in turn contain **Elements**\n2. **BasePage**, **BaseComponent**, and **BaseElement** are abstract classes that define basic functionality for the\n   corresponding levels\n3. **utils** contain helper functions and tools for all levels of architecture\n4. **html_reporter** is responsible for generating reports with test results\n\n## Advantages of Using POM and OOP\n\n1. **Code Readability and Maintenance**: Tests become easier to read as page logic is moved into separate classes.\n2. **Code Reuse**: Components and elements can be reused on different pages.\n3. **Scalability**: Easy to add new pages and components without changing existing code.\n4. **OOP Approach**: Classes encapsulate logic, allowing code to be structured and making it understandable and flexible\n   for extension.\n\n---\n\n## `@track_execution_time` Decorator\n\n### Description\n\nThe `@track_execution_time` decorator is used to track the execution time of functions and fixtures in tests. 
It adds\ninformation about method name, execution time, and call order to the pytest HTML report.\n\n### Features\n\n- Automatically adds execution time to the `execution_log` of each test.\n- If a function is named `factory`, the decorator analyzes the call stack and uses a regular expression to get the\n  name of the method or function that called `factory`.\n- Supports both calls with result assignment (`variable_name = function_name(...)`) and without it (\n  `function_name(...)`).\n\n### Usage Example\n\n#### Code with Decorator\n\n```python\nimport time\nfrom typing import Callable\n\nimport pytest\n\n@track_execution_time\ndef example_function():\n    time.sleep(0.5)\n    return \"Result\"\n\n@pytest.fixture()\ndef create_owner(page) -\u003e Callable:\n    @track_execution_time\n    def factory(**kwargs) -\u003e dict:\n        time.sleep(0.2)  # Execution simulation\n        return {\"id\": 1, \"name\": \"Owner\"}\n\n    return factory\n\ndef test_example(create_owner):\n    created_info_message = create_owner(name=\"John Doe\")\n    # delete_info_message is another decorated helper defined elsewhere in the project\n    delete_info_message(created_info_message[\"id\"])\n```\n\n#### HTML Report\n\nWhen the test is executed, a log with the execution time of methods will appear in the report:\n\n```text\nExecution Log\n-------------\ncreate_owner: 0.2001 seconds\ndelete_info_message: 0.0001 seconds\n```\n\n---\n\n## Using Soft Assert in Python with pytest_check\n\n### What is Soft Assert?\n\nSoft assert allows you to check multiple conditions in one test without immediately stopping its execution if one of the\nchecks fails. Unlike a regular assert, which stops the test at the first error, soft assert allows the test to continue\nrunning, saving all check results. 
This is especially useful for tests where you need to check multiple conditions, for\nexample, the presence of numerous elements on a page.\n\n### What is Soft Assert Used For?\n\n- Checking multiple conditions within one test: if you want to check multiple parts of a response or interface, soft\n  assert will allow you to collect all check results before completing the test.\n- Increasing report informativeness: you'll get a complete picture of the test, showing all failed checks instead of\n  stopping at the first one.\n- Convenience for automation: it makes it easy to track and analyze all errors that occur within a single test case.\n\n### How to Use Soft Assert?\n\nThe pytest_check library can be used for working with soft assert. Below is an example of using its `check` context\nmanager to verify several details of a product page without stopping at the first failed check.\n\n```python\n# Using soft assert to check multiple conditions without stopping test execution\nfrom pytest_check import check\n\n\ndef test_product_details(page):\n    # Navigate to product page\n    page.goto(\"https://example.com/products/1\")\n\n    # Get product information\n    product_name = page.locator(\".product-name\").text_content()\n    product_price = page.locator(\".product-price\").text_content()\n    product_stock = page.locator(\".product-stock\").text_content()\n\n    # Perform multiple soft assertions\n    with check:\n        assert product_name == \"Test Product\", f\"Expected 'Test Product', got '{product_name}'\"\n\n    with check:\n        assert \"$19.99\" in product_price, f\"Expected price to contain '$19.99', got '{product_price}'\"\n\n    with check:\n        assert \"In Stock\" in product_stock, f\"Expected 'In Stock', got '{product_stock}'\"\n\n    with check:\n        assert page.locator(\".product-rating\").count() \u003e 0, \"Product rating element not found\"\n\n    # Test continues even if some assertions fail\n    
page.locator(\".add-to-cart\").click()\n```\n\n### Advantages of Using Soft Assert in Tests\n\n- Efficiency: allows all checks in a test to be performed even when errors occur.\n- Detailed reports: the test report shows a complete list of errors, making debugging much easier.\n- Reduced fix time: testers can fix multiple errors at once without stopping the test after the first failure.\n\n### When Not to Use Soft Assert\n\nIn cases where an error at one stage makes other checks unnecessary or impossible, using soft assert can lead to false\nresults. In such cases, it's better to use a regular assert, which will immediately stop the test execution.\n\n---\n\n## Code Smells Analyzer for Playwright Tests\n\nThis tool is designed for static analysis of UI tests (written using Python, pytest, and Playwright) for common\nproblems (\"code smells\") that worsen readability, stability, and maintainability of tests.\n\n### What are Code Smells?\n\n\"Code smells\" are patterns or signs in code that may indicate deeper problems with architecture, design, or development\npractices. They are not direct errors but make understanding and developing code more difficult. Using a tool to detect\nthem allows quicker identification of problem areas in tests.\n\n### Why These Checks?\n\nThe set of identified problems is inspired by the book **xUnit Test Patterns: Refactoring Test Code** (by Gerard\nMeszaros). This book describes common anti-patterns and testing practices to avoid, as well as suggesting better ways to\norganize and write tests.\n\nThis tool checks for the following patterns (code smells):\n\n1. **Assertion Roulette**: Too many checks in one test without clear messages.\n2. **Too Many Conditions**: Excessive number of `if` statements that complicate understanding of test logic.\n3. **Too Many Loops**: Large number of `for` or `while` loops indicates the need for parameterization or simplification.\n4. **Mystery Guest**: Test relies on external files or resources.\n5. 
**Hard-coded Selector**: Direct use of selectors in the test instead of constants or page objects.\n6. **Fixed Timeout**: Using `wait_for_timeout` instead of dynamic waits.\n7. **Direct Sleep**: Using `time.sleep` instead of Playwright synchronization methods.\n8. **Test Too Long**: An excessive number of statements in a test (excluding docstring) indicates an overly complex\n   scenario.\n\n### Note on Acceptable Parameter Values\n\nThe following default values are used:\n\n- `max_asserts: int = 30`\n- `max_conditions: int = 3`\n- `max_loops: int = 3`\n- `max_test_length: int = 200`\n\nUsually, such high values do not correspond to best practices, as a large number of checks, conditions, or loops in one\ntest complicates its structure and understandability. However, in our specific project, we are automating tests at a\nvery late stage of development (at the UAT level), when everything has already been developed and manually tested. Our\ngoal is to cover long and complex scenarios that are already implemented manually, instead of creating a large number of\nshort and simple tests. This is due to the high cost of generating data and maintaining a large number of small tests.\nTherefore, we deliberately set more liberal thresholds.\n\n### How to Use?\n\n1. Run the command:\n   ```bash\n   python code_smell_analyzer.py --dir=../tests --max-asserts=30 --max-conditions=3 --max-loops=3 --max-test-length=200\n   ```\n   Flags:\n    - `--dir:` path to the directory with tests (default ../tests).\n    - `--max-asserts:` maximum number of checks in one test (default: 30).\n    - `--max-conditions:` maximum number of conditions in one test (default: 3).\n    - `--max-loops:` maximum number of loops in one test (default: 3).\n    - `--max-test-length:` maximum number of statements in a test (excluding docstring), default 200.\n2. The result will be output to the console. 
You will see tests with detected \"code smells\", as well as summary statistics (the\n   number of tests with and without problems, the percentage of \"smelly\" tests, and a breakdown by code smell type).\n\n### Usage Example\n\nSuppose you have tests in the ./tests folder. Run:\n\n```bash\npython code_smell_analyzer.py --dir=./tests\n```\n\nThe console will list the files and tests in which problems were detected, along with messages about\nwhat should be improved.\n\n```text\nAnalyzing pytest files in './tests' directory for 'code smells'...\n\n[File]: ./tests/test_login.py\n\n  [Test]: test_user_authentication\n    - Assertion Roulette (assert): 15 checks. Consider splitting the test or adding messages.\n    - Fixed Timeout: Using wait_for_timeout can lead to flaky tests. Consider using dynamic waits.\n\n=== Analysis Summary ===\nTotal tests analyzed: 35\nTests with 'code smells': 12\nTests without 'code smells': 23\nPercentage of 'smelly' tests: 34.29%\n\nStatistics by 'code smells' categories:\n - Assertion Roulette (assert): 8\n - Too many conditions (if): 3\n - Fixed Timeout: 4\n - Direct Sleep: 2\n```\n\n### Why is this Necessary?\n\nBy improving tests, you make them more stable, readable, and easier to maintain. Timely detection and elimination of\ncode smells will help the team avoid accumulating \"technical debt\" and ensure higher quality of test code.\n\n---\n\n@Goraved 2025\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgoraved%2Fplaywright_python_practice","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fgoraved%2Fplaywright_python_practice","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgoraved%2Fplaywright_python_practice/lists"}