{"id":18537292,"url":"https://github.com/nareshnavinash/pyrest-python","last_synced_at":"2025-04-09T17:36:55.688Z","repository":{"id":43383608,"uuid":"241552215","full_name":"nareshnavinash/PyRest-Python","owner":"nareshnavinash","description":"REST API automation framework with snap mode, includes comparing image files","archived":false,"fork":false,"pushed_at":"2022-07-06T20:29:39.000Z","size":1455,"stargazers_count":18,"open_issues_count":2,"forks_count":6,"subscribers_count":2,"default_branch":"master","last_synced_at":"2025-03-24T09:04:00.748Z","etag":null,"topics":["api-automation","automation-framework","image-comparison","python","rest-api","snap-mode"],"latest_commit_sha":null,"homepage":"https://nareshnavinash.github.io/PyRest-Python/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/nareshnavinash.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2020-02-19T06:45:42.000Z","updated_at":"2024-07-17T23:15:58.000Z","dependencies_parsed_at":"2022-07-08T20:30:57.549Z","dependency_job_id":null,"html_url":"https://github.com/nareshnavinash/PyRest-Python","commit_stats":null,"previous_names":[],"tags_count":2,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nareshnavinash%2FPyRest-Python","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nareshnavinash%2FPyRest-Python/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nareshnavinash%2FPyRest-Python/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nareshnavinash%2FPyRest-Python/mani
fests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/nareshnavinash","download_url":"https://codeload.github.com/nareshnavinash/PyRest-Python/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248078261,"owners_count":21044079,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["api-automation","automation-framework","image-comparison","python","rest-api","snap-mode"],"created_at":"2024-11-06T19:37:43.953Z","updated_at":"2025-04-09T17:36:55.311Z","avatar_url":"https://github.com/nareshnavinash.png","language":"Python","readme":"# PyRest-Python\nPyRest is an automation framework to test REST API endpoints. This framework includes methods to download the image files from the rest API and then compare with the stored image files. 
This framework is built in Python and inspired by the simplicity of the [Karate framework by Intuit](https://github.com/intuit/karate) and the snapshot mode of the [Jest framework by Facebook](https://jestjs.io/).\n\nSnapshot mode is supported even for image file comparison.\n\nThis project uses an in-house package published on PyPI; for more details, see the [SnaPyRest Package](https://pypi.org/project/snapyrest/).\n\n[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](LICENSE)\n[![Made with Python](https://img.shields.io/badge/Made%20with-Python-yellow.svg)](https://www.python.org/)\n[![StackOverflow](http://img.shields.io/badge/Stack%20Overflow-Ask-blue.svg)](https://stackoverflow.com/users/10505289/naresh-sekar)\n[![Contributions Welcome](https://img.shields.io/badge/Contributions-Welcome-brightgreen.svg)](CONTRIBUTING.md)\n[![email me](https://img.shields.io/badge/Contact-Email-green.svg)](mailto:nareshnavinash@gmail.com)\n\n\n![alt text](Library/pyrest.png)\n\n\n## Supports\n* REST API automation\n* Allure reports\n* Jenkins integration\n* Modes of run via CLI command\n* Docker execution\n* Test-data-driven tests\n* Multi-threaded runs\n* Snap mode to save the response data as the test data\n* Static code analyser\n* Reports e-mailer using Shovel\n\n## Setup\n* Clone this repository\n* Navigate to the cloned folder\n* On macOS, install dependencies using the [Homebrew](https://brew.sh/) package manager\n* Once brew is installed, install Python with `brew install python3`\n* To get additional Python dependencies like pip3, run `brew postinstall python3`\n* Install the packages required by this framework using `pip3 install -r requirements.txt`\n\n## To Run the tests\nFor a simple run of all the test files in normal mode, try\n\n```\npytest\n```\n\nTo run the tests in snap mode (to save the response JSON to the test data file, or the response image over the stored image):\n```\nsnap=1 pytest\n```\nOnce the changes 
are saved to the file, run the tests with `pytest` to execute them against the saved data.\n\nTo run the tests in parallel (multi-threaded) mode across the available test files, try the following (a parallel run requires at least 2 tests inside your folder structure):\n\n```\npytest -s -v -n=2\n```\n\n## To open allure results\nAllure is an open-source framework for reporting test runs. To install Allure on macOS, use the following steps:\n\n```\nbrew cask install adoptopenjdk\nbrew install allure\n```\n\nTo view the results for the test run, use\n\n```\nallure serve reports/allure\n```\n\n\n## Reports\nFor better illustration of the test cases, Allure reports have been integrated. Allure reports can also be integrated with Jenkins to get a dashboard view. Apart from Allure, pytest's default HTML report is written to the `reports/` folder.\n\nIf there is a failure while comparing images, the Allure report will have all the relevant files attached to it. The difference between the two images is generated at run time and attached to the Allure report for reference.\n\n![alt text](Library/diff_image.png)\n \n\n## Jenkins Integration with Docker images\nUse any Linux-with-Python Docker image as the Jenkins slave and execute tests with this framework on it (sample Docker image: `https://hub.docker.com/_/python`). 
From the Jenkins shell, execute the following to run the test cases:\n\n```\n#!/bin/bash\npython --version\ncd \u003cpath_to_the_project\u003e\npip3 install -r requirements.txt\nsnap=1 pytest -s -v -n 4\n```\n\nIn a Jenkins pipeline, add the following snippet to execute the tests:\n\n```\npipeline {\n    agent { docker { image 'python:3.7.6' } }\n    stages {\n        stage('test') {\n            steps {\n                sh 'python --version'\n                sh 'cd project/'\n                sh 'pip3 install -r requirements.txt'\n                sh 'pytest -s -v -n 4'\n            }\n        }\n    }\n}\n```\n\n# Break down into end to end tests\n\n## Creating a test file\n\n* Tests can be created directly within the `Tests/` folder with the file name prefix `test_`, so that only those files are collected during the test run. This is configured in the `pytest.ini` file.\n\n```\n[pytest]\nmarkers =\n    sanity: sanity tests marker\n    regression: regression tests marker\n    snap: Snap feature enabled for this case, should have separate file for validating the response\n    plain: Snap feature is not recommended since the expected JSON has some custom values\npython_files=*.py\npython_functions=test_*\naddopts = -rsxX\n          -q\n          -v\n          --self-contained-html\n          --html=reports/html_report.html\n          --cov=Tests\n          --alluredir reports/allure\n          --clean-alluredir\n```\n\n* Import the needed modules inside the test file. Since we imitate the Karate framework's approach to testing, we only need to use these commands to test the REST API endpoints. The commands and their features are discussed below.\n\n```\nimport allure\nimport pytest\nfrom snapyrest.api import Api\nfrom snapyrest.images import Img\n```\n\n* Set the URL on which you want your automation suite to run in the `/Data/GlobalData/global_data.yml` file. 
You can also add other project-level data in this file and then access it using the `Var` method.\n\n```\nURL: https://naresh.free.beeceptor.com\ntimeout: 10\ntolerance: 0.01\n```\nIn this project I have set the URL against which automation will run, the maximum timeout allowed, and the tolerance allowed while comparing images.\n\nTo change the URL against which the suite runs, set an environment variable while executing the suite. The environment variable always takes precedence, so even if a URL is set in the global data file, the URL given on the command line is used for execution.\n```\nURL=https://customurl.inruntime.com pytest\n```\n\n* While drafting a test case, add the following tags to it; they are helpful for reporting.\n\n```\n@allure.feature(\"Sample get request\") # Title for the test case\n@allure.severity('Critical') # Set the severity for the case\n@pytest.mark.regression  # Custom pytest marker to run the test cases with ease on demand\n@pytest.mark.snap  # Custom pytest marker to run the test cases with ease on demand\n```\n\n### Simple test case with an endpoint\n\nFor a very simple GET request, validating the response code can be done with:\n\n```\nApi.get(\"/name\")\nApi.verify_response_code(200)\n```\nBy calling just these two methods from the `Api` library, all the Allure report actions, attaching the request and response files to the report, and asserting the response code are taken care of.\n\n### Simple test case with validating the response with test data\n\nTo validate the response JSON against test data, one can do the following:\n\n```\nApi.get(\"/name\")\nApi.verify_response_code(200)\nApi.verify_response_json(\"sample.yml\", \"test_sample_get_request_001\")\n```\nHere, we take the sample.yml file under the `/Data/DynamicData/` folder and then fetch the data for the key 
`test_sample_get_request_001`. \nAfter getting the data from the stored file, we compare it with the response data and generate the Allure reports along with the necessary attachments.\n\nThe YAML file will look like:\n```\ntest_sample_get_request_001:\n  age: 20\n  name: Naresh\n```\n\nWhen fetching the key from a YAML file, the above file structure returns the data in JSON format. This in turn gives us an edge while creating the test data. One can also save the key's value directly in JSON format:\n\n```\ntest_sample_get_request_001:\n  age: 20\n  name: Naresh\n\nOR\n\ntest_sample_get_request_001: { \"age\": 20, \"name\": \"Naresh\" }\n```\nEither way, the JSON parser receives the values in JSON format. However, when `snap` mode is used, the file is saved in the first format, which is covered in detail below.\n\n### Simple test case with validating the response with test data and ignoring few keys\n\nWhile validating an API response, we may encounter a scenario where we don't want to validate a few keys. In such a scenario, one can do the following:\n```\nApi.get(\"/name\")\nApi.verify_response_code(200)\nApi.ignore_keys(\"age\")\nApi.verify_response_json(\"sample.yml\", \"test_sample_get_request_001\")\n```\nThe above code validates the response status code and the response JSON values except for the `age` key. If more keys should be ignored, pass them in comma-separated format:\n```\nApi.get(\"/name\")\nApi.verify_response_code(200)\nApi.ignore_keys(\"age,name\")\nApi.verify_response_json(\"sample.yml\", \"test_sample_get_request_001\")\n```\nThis ignores the keys `age` and `name` while validating the response against the stored data.\n\n### Simple test case with validating the response with test data and custom markers\n\nWhile validating an API response, we may encounter a scenario where we need to validate whether a key is present, but not its value. 
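The framework handles this with `$` markers (described next). Conceptually, such a marker check can be sketched as a small standalone helper (hypothetical names, pure Python; this is not the snapyrest implementation):

```python
import uuid

# Hypothetical marker table (illustrative only, not the snapyrest API).
MARKERS = {
    '$notnull': lambda v: v is not None,
    '$array': lambda v: isinstance(v, list),
    '$object': lambda v: isinstance(v, dict),
    '$boolean': lambda v: isinstance(v, bool),
    '$number': lambda v: isinstance(v, (int, float)) and not isinstance(v, bool),
    '$string': lambda v: isinstance(v, str),
}

def matches(expected, actual):
    # A $-marker checks only the presence/type of the value;
    # anything else is compared literally.
    if expected == '$uuid':
        try:
            uuid.UUID(str(actual))
            return True
        except ValueError:
            return False
    if isinstance(expected, str) and expected in MARKERS:
        return MARKERS[expected](actual)
    return expected == actual
```

With test data like the examples below, `matches('$notnull', response['age'])` passes for any non-null age while `matches('Naresh', response['name'])` still demands an exact value.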
In that case, one can mark it in the test data with one of the unique markers prefixed with the `$` symbol.\n\nTest File:\n```\nApi.get(\"/name\")\nApi.verify_response_code(200)\nApi.verify_response_json(\"sample.yml\", \"test_sample_get_request_001\")\n```\nData File:\n```\ntest_sample_get_request_001:\n  age: $notnull\n  name: Naresh\n```\nThe above combination validates the response as follows:\n1. Whether the `age` key is present with a non-null value.\n2. Whether `name` is present with the exact value `Naresh`.\n\nWe can also make the validation of the `age` field in the above example more specific by stating that the value of `age` should be a `number`. To achieve this we need the following combination.\n\nTest File:\n```\nApi.get(\"/name\")\nApi.verify_response_code(200)\nApi.verify_response_json(\"sample.yml\", \"test_sample_get_request_001\")\n```\nData File:\n```\ntest_sample_get_request_001:\n  age: $number\n  name: Naresh\n```\n\nApart from the above two, several other markers are available:\n\nMarker | Description\n------ | -----------\n`$notnull` | Expects actual value to be not-`null`\n`$array` | Expects actual value to be a JSON array\n`$object` | Expects actual value to be a JSON object\n`$boolean` | Expects actual value to be a boolean `true` or `false`\n`$number` | Expects actual value to be a number\n`$string` | Expects actual value to be a string\n`$uuid` | Expects actual (string) value to conform to the UUID format\n\n### Test cases with validation of images\n\nIn some scenarios we need to validate an image file from the response: first we hit the endpoint and get the image URL, then we download the image from that URL into a temporary folder, and finally we compare it with the stored image. 
To do this:\n```\nApi.get(\"/image\")\nApi.verify_response_code(200)\nApi.verify_response_json(\"sample.yml\", \"test_sample_get_request_003\")\nimage_url = Api.get_params_from_response(\"image\")\nImg.download_image(image_url, \"downloaded_file\")\nImg.is_equal(\"Naresh\", \"downloaded_file\")\n```\nThe above code captures a value from the response JSON through `Api.get_params_from_response`. If the URL is nested inside the JSON, one can give the path to the image URL as a comma-separated value:\n```\nApi.get_params_from_response(\"image,0,user,profile,image\")\n```\nAfter getting the image URL, we download it and save it in the temporary folder under `reports/images`. We must also supply a name for the downloaded image file. All image downloads and comparisons happen in PNG format; the framework would need changes to compare images in other formats.\n\nAfter downloading, directly give the name of the stored image against which the downloaded image should be compared. The stored image must be under the `/Data/Images/` folder.\n\nThe method `Img.is_equal` takes care of all the Allure reporting: it attaches the images to the report, and if there is a mismatch between the images, the difference between the two images is also attached to the Allure report, as mentioned in the Reports section above.\n\n### Test cases with validation of images along with tolerance\n\nIn some scenarios we need to validate the image file from the response with an allowed tolerance, since the above method fails even on a minute change in the image file. To validate images with a tolerance, change:\n\nFrom: \n```\nImg.is_equal(\"Naresh\", \"downloaded_file\")\n```\nTo:\n```\nImg.is_equal_with_tolerance(\"Naresh\", \"downloaded_file\")\n```\nThis takes the tolerance level from the global data file and validates against it. 
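As a rough mental model, percentage tolerance is the average per-pixel difference expressed as a percentage of the maximum possible difference. A minimal pure-Python sketch over flat 8-bit grayscale pixel lists (illustrative only; the framework itself delegates this to the imgcompare package):

```python
def percent_diff(pixels_a, pixels_b):
    # Average absolute per-pixel difference, as a percentage of the
    # maximum possible difference (255 per pixel for 8-bit grayscale).
    total = sum(abs(a - b) for a, b in zip(pixels_a, pixels_b))
    return 100.0 * total / (255 * len(pixels_a))

def is_equal_with_tolerance(pixels_a, pixels_b, tolerance=0.01):
    # Images count as equal when the difference percentage
    # stays within the allowed tolerance (in percent).
    return percent_diff(pixels_a, pixels_b) <= tolerance
```

Under this model, a tolerance of 0.5 still accepts two images whose pixels differ on average by about one intensity level out of 255.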
It is always recommended to use the same tolerance level across the project, but in the few cases where a custom tolerance level is needed for a particular image comparison, do:\n```\nImg.is_equal_with_tolerance(\"Naresh\", \"downloaded_file\", 0.5)\n```\nThe above code validates the images with a 0.5 percent tolerance level.\n\n### Detailing the snap mode\n\nBy following the above-mentioned folder structure and code format, the snap mode implementation becomes easier. Consider a test data YAML file with multiple keys in it. If we run the suite in snap mode, values are changed only for the keys belonging to failing test cases; the remaining values are retained in the YAML file.\n\nIn the case of image comparison, if snap mode is invoked and the image files are not the same, the source image file is replaced with the downloaded file.\n\n**In case custom markers ($notnull, $string, etc.) are used in the test data file, running in snap mode overrides all the markers with the response values. Hence it is recommended to distinguish between the test cases with custom pytest markers.**\n\nUse two markers, `snap` and `plain`: snap-marked test cases should not carry any custom markers in their test data files, and plain-marked test cases should have custom-marked test data files. When invoking snap mode, run using the snap marker:\n\n```\nsnap=1 pytest -v -m snap\n```\nThis ensures all the custom markers are retained in the test data files.\n\n**It seems possible to handle the above shortcomings of snap mode in the JSON validation, which is in the to-do pipeline.**\n\n## Data sets:\n\nIn order to have distinguished sets of data, four types of data are used.\n\n* **Global** - Global configuration for the whole project. Mode of run, browsers to use, browser configurations, etc., are specified here.\n* **Static Data** - This is to store module-level data. 
Ideally each test file has its own test data file, but that depends on the requirement.\n* **Dynamic Data** - This is to store the dynamic data. Files in this folder are expected to change when we run with `snap=1 pytest`. This is separated from the other data files so that the static files are not disturbed during the run.\n* **Images** - This folder stores all the image files needed for comparison with the response image files.\n\n\n## Static code analyser:\n\nFor static code analysis, flake8 is used. To check the configuration, view the [.flake8](.flake8) file. To check the code status, execute:\n\n```\nflake8\n```\n\nCurrently there are `0` violations in this project.\n\n\n## Shovel\n\nShovel has been added to mail the report status once the run is completed. This is useful once we integrate with CI, and helps in debugging which commit caused a problem. For more details, see the [Shovel file](Shovel/reports.py). (Code refactoring is yet to be done in that file.) 
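A minimal sketch of how such a report mail could be composed (hypothetical names and signature; the actual task in Shovel/reports.py differs):

```python
from email.message import EmailMessage

def build_report_email(receivers, build_url, branch, commit, committer):
    # Compose the run-summary mail; receivers is a semicolon-separated
    # address list. Actually sending it would go through smtplib.SMTP
    # with project-specific credentials.
    msg = EmailMessage()
    msg['Subject'] = 'Test report for ' + branch + ' @ ' + commit[:7]
    msg['To'] = ', '.join(receivers.split(';'))
    msg.set_content('Build: ' + build_url + ' committed by ' + committer)
    return msg
```

The arguments map onto the Jenkins environment variables shown below.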
In Jenkins you could get the following values as environment variables.\n\n```\n$BUILD_URL #Directly fetched from ENV variable\n$GIT_URL #Directly fetched from ENV variable\n$GIT_BRANCH #Directly fetched from ENV variable\n$GIT_COMMIT #Directly fetched from ENV variable\nGIT_COMMITER_EMAIL=$(git --no-pager show -s --format='%ae' $GIT_COMMIT)\nREPORTS_EMAIL_RECEIVERS=\"nareshnavinash@gmail.com;nareshsekar@zoho.com\" #Or can be taken from a parameterized build\n\n#Run the shovel task with:\nshovel reports.email $REPORTS_EMAIL_RECEIVERS $BUILD_URL $GIT_BRANCH $GIT_COMMIT $GIT_COMMITER_EMAIL\n```\n\n## Built With\n\n* [pytest](https://docs.pytest.org/en/latest/) - Core test framework\n* [flake8](https://pypi.org/project/flake8/) - Static code analyser\n* [pytest-xdist](https://pypi.org/project/pytest-xdist/) - To run pytest in parallel mode\n* [Allure pytest](https://pypi.org/project/allure-pytest/) - For detailed reporting\n* [Image_Compare](https://pypi.org/project/imgcompare/) - To compare two image files\n* [Diff_Image](https://pypi.org/project/diffimg/) - To generate an image file with the difference between two images\n\n## Contributing\n\n1. Clone the repo!\n2. Create your feature branch: `git checkout -b my-new-feature`\n3. Commit your changes: `git commit -am 'Add some feature'`\n4. Push to the branch: `git push origin my-new-feature`\n5. 
Create a pull request.\n\nPlease read [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduct and the process for submitting pull requests.\n\n## Authors\n\n* **[Naresh Sekar](https://github.com/nareshnavinash)**\n\n## License\n\nThis project is licensed under the GNU GPL-3.0 License - see the [LICENSE](LICENSE) file for details.\n\n## Acknowledgments\n\n* To all the open source contributors whose code has been referenced in this project.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnareshnavinash%2Fpyrest-python","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fnareshnavinash%2Fpyrest-python","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnareshnavinash%2Fpyrest-python/lists"}