{"id":13793032,"url":"https://github.com/spinkham/skipfish","last_synced_at":"2025-05-12T17:31:47.605Z","repository":{"id":38485212,"uuid":"571448","full_name":"spinkham/skipfish","owner":"spinkham","description":"Web application security scanner created by lcamtuf for google - Unofficial Mirror","archived":false,"fork":false,"pushed_at":"2023-02-18T16:20:47.000Z","size":547,"stargazers_count":678,"open_issues_count":5,"forks_count":147,"subscribers_count":33,"default_branch":"master","last_synced_at":"2024-08-04T22:19:18.437Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"http://code.google.com/p/skipfish","language":"C","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/spinkham.png","metadata":{"files":{"readme":"README","changelog":"ChangeLog","contributing":null,"funding":null,"license":"COPYING","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2010-03-20T15:42:49.000Z","updated_at":"2024-07-31T16:20:12.000Z","dependencies_parsed_at":"2024-01-21T04:43:19.531Z","dependency_job_id":null,"html_url":"https://github.com/spinkham/skipfish","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/spinkham%2Fskipfish","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/spinkham%2Fskipfish/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/spinkham%2Fskipfish/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/spinkham%2Fskipfish/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/spinkham","download_url":"https://codeload.github.com/spinkham/skipfish/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":225144934,"owners_count":17427894,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-03T22:01:20.763Z","updated_at":"2024-11-18T07:30:18.017Z","avatar_url":"https://github.com/spinkham.png","language":"C","readme":"===========================================\nskipfish - web application security scanner\n===========================================\n\n  http://code.google.com/p/skipfish/\n\n  * Written and maintained by:\n\n      Michal Zalewski \u003clcamtuf@google.com\u003e\n      Niels Heinen \u003cheinenn@google.com\u003e\n      Sebastian Roschke \u003cs.roschke@googlemail.com\u003e\n\n  * Copyright 2009 - 2012 Google Inc, rights reserved.\n\n  * Released under terms and conditions of the Apache License, version 2.0.\n\n--------------------\n1. What is skipfish?\n--------------------\n\nSkipfish is an active web application security reconnaissance tool. It\nprepares an interactive sitemap for the targeted site by carrying out a\nrecursive crawl and dictionary-based probes. 
The resulting map is then annotated with the output from a number of
active (but hopefully non-disruptive) security checks. The final report
generated by the tool is meant to serve as a foundation for professional
web application security assessments.

-------------------------------------------------
2. Why should I bother with this particular tool?
-------------------------------------------------

A number of commercial and open source tools with analogous functionality
are readily available (e.g., Nikto, Nessus); stick to the one that suits
you best. That said, skipfish tries to address some of the common problems
associated with web security scanners. Specific advantages include:

  * High performance: 500+ requests per second against responsive Internet
    targets, 2000+ requests per second on LAN / MAN networks, and 7000+
    requests per second against local instances have been observed, with a
    very modest CPU, network, and memory footprint. This can be attributed
    to:

    * A multiplexing, single-threaded, fully asynchronous network I/O and
      data processing model that eliminates memory management, scheduling,
      and IPC inefficiencies present in some multi-threaded clients.

    * Advanced HTTP/1.1 features such as range requests, content
      compression, and keep-alive connections, as well as forced response
      size limiting, to keep network-level overhead in check.

    * Smart response caching and advanced server behavior heuristics that
      minimize unnecessary traffic.

    * A performance-oriented, pure C implementation, including a custom
      HTTP stack.

  * Ease of use: skipfish is highly adaptive and reliable. The scanner
    features:

    * Heuristic recognition of obscure path- and query-based parameter
      handling schemes.

    * Graceful handling of multi-framework sites where certain paths obey
      completely different semantics, or are subject to different
      filtering rules.

    * Automatic wordlist construction based on site content analysis.

    * Probabilistic scanning features to allow periodic, time-bound
      assessments of arbitrarily complex sites.

  * Well-designed security checks: the tool is meant to provide accurate
    and meaningful results:

    * Handcrafted dictionaries offer excellent coverage and permit
      thorough $keyword.$extension testing in a reasonable timeframe.

    * Three-step differential probes are preferred to signature checks for
      detecting vulnerabilities.

    * Ratproxy-style logic is used to spot subtle security problems:
      cross-site request forgery, cross-site script inclusion, mixed
      content, MIME and charset mismatch issues, incorrect caching
      directives, etc.

    * Bundled security checks are designed to handle tricky scenarios:
      stored XSS (path, parameters, headers), blind SQL or XML injection,
      or blind shell injection.

    * Snort-style content signatures that highlight server errors,
      information leaks, or potentially dangerous web applications.

    * Report post-processing drastically reduces the noise caused by any
      remaining false positives or server gimmicks by identifying
      repetitive patterns.

That said, skipfish is not a silver bullet, and may be unsuitable for
certain purposes.
For example, it does not satisfy most of the requirements outlined in
WASC Web Application Security Scanner Evaluation Criteria (some of them on
purpose, some out of necessity); and unlike most other projects of this
type, it does not come with an extensive database of known vulnerabilities
for banner-type checks.

-----------------------------------------------------
3. Most curious! What specific tests are implemented?
-----------------------------------------------------

A rough list of the security checks offered by the tool is outlined below.

  * High risk flaws (potentially leading to system compromise):

    * Server-side query injection (including blind vectors, numerical
      parameters).
    * Explicit SQL-like syntax in GET or POST parameters.
    * Server-side shell command injection (including blind vectors).
    * Server-side XML / XPath injection (including blind vectors).
    * Format string vulnerabilities.
    * Integer overflow vulnerabilities.
    * Locations accepting HTTP PUT.

  * Medium risk flaws (potentially leading to data compromise):

    * Stored and reflected XSS vectors in document body (minimal JS XSS
      support).
    * Stored and reflected XSS vectors via HTTP redirects.
    * Stored and reflected XSS vectors via HTTP header splitting.
    * Directory traversal / LFI / RFI (including constrained vectors).
    * Assorted file POIs (server-side sources, configs, etc).
    * Attacker-supplied script and CSS inclusion vectors (stored and
      reflected).
    * External untrusted script and CSS inclusion vectors.
    * Mixed content problems on script and CSS resources (optional).
    * Password forms submitting from or to non-SSL pages (optional).
    * Incorrect or missing MIME types on renderables.
    * Generic MIME types on renderables.
    * Incorrect or missing charsets on renderables.
    * Conflicting MIME / charset info on renderables.
    * Bad caching directives on cookie setting responses.

  * Low risk issues (limited impact or low specificity):

    * Directory listing bypass vectors.
    * Redirection to attacker-supplied URLs (stored and reflected).
    * Attacker-supplied embedded content (stored and reflected).
    * External untrusted embedded content.
    * Mixed content on non-scriptable subresources (optional).
    * HTTPS -> HTTP submission of HTML forms (optional).
    * HTTP credentials in URLs.
    * Expired or not-yet-valid SSL certificates.
    * HTML forms with no XSRF protection.
    * Self-signed SSL certificates.
    * SSL certificate host name mismatches.
    * Bad caching directives on less sensitive content.

  * Internal warnings:

    * Failed resource fetch attempts.
    * Exceeded crawl limits.
    * Failed 404 behavior checks.
    * IPS filtering detected.
    * Unexpected response variations.
    * Seemingly misclassified crawl nodes.

  * Non-specific informational entries:

    * General SSL certificate information.
    * Significantly changing HTTP cookies.
    * Changing Server, Via, or X-... headers.
    * New 404 signatures.
    * Resources that cannot be accessed.
    * Resources requiring HTTP authentication.
    * Broken links.
    * Server errors.
    * All external links not classified otherwise (optional).
    * All external e-mails (optional).
    * All external URL redirectors (optional).
    * Links to unknown protocols.
    * Form fields that could not be autocompleted.
    * Password entry forms (for external brute-force).
    * File upload forms.
    * Other HTML forms (not classified otherwise).
    * Numerical file names (for external brute-force).
    * User-supplied links otherwise rendered on a page.
    * Incorrect or missing MIME type on less significant content.
    * Generic MIME type on less significant content.
    * Incorrect or missing charset on less significant content.
    * Conflicting MIME / charset information on less significant content.
    * OGNL-like parameter passing conventions.

Along with a list of identified issues, skipfish also provides summary
overviews of document types and issue types found, and an interactive
sitemap, with nodes discovered through brute-force denoted in a
distinctive way.

NOTE: As a conscious design decision, skipfish will not redundantly
complain about highly non-specific issues, including but not limited to:

  * Non-httponly or non-secure cookies,
  * Non-HTTPS or autocomplete-enabled forms,
  * HTML comments detected on a page,
  * Filesystem path disclosure in error messages,
  * Server or framework version disclosure,
  * Servers supporting TRACE or OPTIONS requests,
  * Mere presence of certain technologies, such as WebDAV.

Most of these aspects are easy to inspect in a report if so desired - for
example, all the HTML forms are listed separately, as are new cookies and
interesting HTTP headers - and the expectation is that the auditor may
opt to make certain design recommendations based on this data where
appropriate. That said, these occurrences are not highlighted as specific
security flaws.

-----------------------------------------------------------
4. All right, I want to try it out. What do I need to know?
-----------------------------------------------------------

First and foremost, please do not be evil. Use skipfish only against
services you own, or have permission to test.

Keep in mind that all types of security testing can be disruptive.
Although the scanner is designed not to carry out malicious attacks, it
may accidentally interfere with the operations of the site. You must
accept the risk, and plan accordingly. Run the scanner against test
instances where feasible, and be prepared to deal with the consequences
if things go wrong.

Also note that the tool is meant to be used by security professionals,
and is experimental in nature. It may return false positives or miss
obvious security problems - and even when it operates perfectly, it is
simply not meant to be a point-and-click application. Do not take its
output at face value.

Running the tool against vendor-supplied demo sites is not a good way to
evaluate it, as they usually approximate vulnerabilities very
imperfectly; we made no effort to accommodate these cases.

Lastly, the scanner is simply not designed for dealing with rogue and
misbehaving HTTP servers - and offers no guarantees of safe (or sane)
behavior there.

--------------------------
5. How to run the scanner?
--------------------------

To compile it, simply unpack the archive and try make. Chances are, you
will need to install libidn first.
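For instance, on a Debian-derived system, something along these lines
should be enough to produce a working binary (the libidn package name
below is an assumption and may differ on your distribution):

$ sudo apt-get install libidn11-dev
$ make

If make completes without errors, the resulting ./skipfish binary will be
created in the current directory.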
Next, you need to read the instructions provided in doc/dictionaries.txt
to select the right dictionary file and configure it correctly. This step
has a profound impact on the quality of scan results later on, so don't
skip it.

Once you have the dictionary selected, you can use -S to load that
dictionary, and -W to specify an initially empty file for any newly
learned site-specific keywords (which will come in handy in future
assessments):

$ touch new_dict.wl
$ ./skipfish -o output_dir -S existing_dictionary.wl -W new_dict.wl \
  http://www.example.com/some/starting/path.txt

You can use -W- if you don't want to store auto-learned keywords anywhere.

Note that you can provide more than one starting URL if so desired; all
of them will be crawled. It is also possible to read URLs from a file,
using the following syntax:

$ ./skipfish [...other options...] @../path/to/url_list.txt

The tool will display some helpful stats while the scan is in progress.
You can also switch to a list of in-flight HTTP requests by pressing
return.

In the example above, skipfish will scan the entire www.example.com site
(including services on other ports, if linked to from the main page), and
write a report to output_dir/index.html. You can then view this report
with your favorite browser (JavaScript must be enabled; and because of
recent file:/// security improvements in certain browsers, you might need
to access results over HTTP). The index.html file is static; actual
results are stored as a hierarchy of JSON files, suitable for machine
processing or different presentation frontends if need be. In addition, a
list of all the discovered URLs will be saved to a single file,
pivots.txt, for easy postprocessing.

A simple companion script, sfscandiff, can be used to compute a delta for
two scans executed against the same target with the same flags. The newer
report will be non-destructively annotated by adding a red background to
all new or changed nodes, and a blue background to all new or changed
issues found.

Some sites may require authentication; our support for this is described
in doc/authentication.txt. In most cases, you'll want to use the form
authentication method, which is capable of detecting broken sessions in
order to re-authenticate.

Once authenticated, certain URLs on the site may log out your session;
you can combat this in two ways: by using the -N option, which causes the
scanner to reject attempts to set or delete cookies; or with the -X
parameter, which prevents matching URLs from being fetched:

$ ./skipfish -X /logout/logout.aspx ...other parameters...

The -X option is also useful for speeding up your scans by excluding
/icons/, /doc/, /manuals/, and other standard, mundane locations along
these lines. In general, you can use -X and -I (only spider URLs matching
a substring) to limit the scope of a scan any way you like - including
restricting it to a specific protocol and port:

$ ./skipfish -I http://example.com:1234/ ...other parameters...

A related function, -K, allows you to specify parameter names not to fuzz
(useful for applications that put session IDs in the URL, to minimize
noise).
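For example, if the application tracks sessions with a URL parameter, a
hypothetical invocation might look like this (the parameter name sid is
purely illustrative):

$ ./skipfish -K sid ...other parameters...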
Another useful scoping option is -D, which allows you to specify
additional hosts or domains to consider in-scope for the test. By
default, all hosts appearing in the command-line URLs are added to the
list - but you can use -D to broaden these rules, for example:

$ ./skipfish -D test2.example.com -o output-dir http://test1.example.com/

...or, for a domain wildcard match, use:

$ ./skipfish -D .example.com -o output-dir http://test1.example.com/

In some cases, you do not want to actually crawl a third-party domain,
but you trust the owner of that domain enough not to worry about
cross-domain content inclusion from that location. To suppress warnings,
you can use the -B option, for example:

$ ./skipfish -B .google-analytics.com -B .googleapis.com \
  ...other parameters...

By default, skipfish sends minimalistic HTTP headers to reduce the amount
of data exchanged over the wire; some sites examine User-Agent strings or
header ordering to reject unsupported clients, however. In such a case,
you can use -b ie, -b ffox, or -b phone to mimic one of the two popular
desktop browsers (or an iPhone).

When it comes to customizing your HTTP requests, you can also use the -H
option to insert any additional, non-standard headers; or -F to define a
custom mapping between a host and an IP address (bypassing the resolver).
The latter feature is particularly useful for not-yet-launched or legacy
services.

Some sites may be too big to scan in a reasonable timeframe. If the site
features well-defined tarpits - for example, 100,000 nearly identical
user profiles as a part of a social network - these specific locations
can be excluded with -X or -S. In other cases, you may need to resort to
other settings: -d limits crawl depth to a specified number of
subdirectories; -c limits the number of children per directory; -x limits
the total number of descendants per crawl tree branch; and -r limits the
total number of requests to send in a scan.
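As an illustration, a scan constrained on all four axes might be sketched
as follows; the numbers are arbitrary placeholders rather than
recommendations:

$ ./skipfish -d 5 -c 256 -x 4096 -r 200000 ...other parameters...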
An interesting option is available for repeated assessments: -p. By
specifying a percentage between 1 and 100%, it is possible to tell the
crawler to follow fewer than 100% of all links, and try fewer than 100%
of all dictionary entries. This - naturally - limits the completeness of
a scan, but unlike most other settings, it does so in a balanced,
non-deterministic manner. It is extremely useful when you are setting up
time-bound, but periodic assessments of your infrastructure. Another
related option is -q, which sets the initial random seed for the crawler
to a specified value. This can be used to exactly reproduce a previous
scan to compare results. Randomness is relied upon most heavily in the -p
mode, but also for making a couple of other scan management decisions
elsewhere.

Some particularly complex (or broken) services may involve a very high
number of identical or nearly identical pages. Although these occurrences
are by default grayed out in the report, they still use up some screen
real estate and take a while to process at the JavaScript level. In such
extreme cases, you may use the -Q option to suppress reporting of
duplicate nodes altogether, before the report is written. This may give
you a less comprehensive understanding of how the site is organized, but
has no impact on test coverage.

In certain quick assessments, you might also have no interest in paying
any particular attention to the intended functionality of the site -
hoping to explore non-linked secrets only. In such a case, you may
specify -P to inhibit all HTML parsing. This limits the coverage and
takes away the scanner's ability to learn new keywords by looking at the
HTML, but speeds up the test dramatically. Another similarly crippling
option that reduces the risk of persistent effects of a scan is -O, which
inhibits all form parsing and submission steps.

Some sites that handle sensitive user data care about SSL - and about
getting it right. Skipfish may optionally assist you in figuring out
problematic mixed content or password submission scenarios - use the -M
option to enable this. The scanner will complain about situations such as
http:// scripts being loaded on https:// pages - but will disregard
non-risk scenarios such as images.

Likewise, certain pedantic sites may care about cases where caching is
restricted on the HTTP/1.1 level, but no explicit HTTP/1.0 caching
directive is given. Specifying -E on the command line causes skipfish to
log all such cases carefully.

On some occasions, you may want to limit the number of requests per
second to reduce the load on the target server (or possibly to avoid
tripping DoS protection). The -l flag can be used to set this limit; the
value given is the maximum number of requests per second you want
skipfish to send.

Scans typically should not take weeks. In many cases, you probably want
to limit the scan duration so that it fits within a certain time window.
This can be done with the -k flag, which accepts a maximum duration in
H:M:S (hours:minutes:seconds) format. Use of this flag can affect scan
coverage if the timeout occurs before all pages have been tested.
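For instance, to cap the scan at 50 requests per second and terminate it
after eight hours (both values are illustrative), you could combine the
two flags like so:

$ ./skipfish -l 50 -k 8:00:00 ...other parameters...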
Lastly, in some assessments that involve self-contained sites without
extensive user content, the auditor may care about any external e-mails
or HTTP links seen, even if they have no immediate security impact. Use
the -U option to have these logged.

Dictionary management is a special topic, and - as mentioned - is covered
in more detail in doc/dictionaries.txt. Please read that file before
proceeding. Some of the relevant options include -S and -W (covered
earlier), -L to suppress auto-learning, -G to limit the keyword guess jar
size, -R to drop old dictionary entries, and -Y to inhibit expensive
$keyword.$extension fuzzing.

Skipfish also features a form auto-completion mechanism in order to
maximize scan coverage. The values should be non-malicious, as they are
not meant to implement security checks - but rather, to get past input
validation logic. You can define additional rules, or override existing
ones, with the -T option (-T form_field_name=field_value, e.g. -T
login=test123 -T password=test321 - although note that -C and -A are a
much better method of logging in).

There is also a handful of performance-related options. Use -g to set the
maximum number of connections to maintain, globally, to all targets (it
is sensible to keep this under 50 or so to avoid overwhelming the TCP/IP
stack on your system or on nearby NAT / firewall devices); and -m to set
the per-IP limit (experiment a bit: 2-4 is usually good for localhost,
4-8 for local networks, 10-20 for external targets, 30+ for really lagged
or non-keep-alive hosts). You can also use -w to set the I/O timeout
(i.e., skipfish will wait only so long for an individual read or write),
and -t to set the total request timeout, to account for really slow or
really fast sites.

Finally, -f controls the maximum number of consecutive HTTP errors you
are willing to see before aborting the scan; and -s sets the maximum
length of a response to fetch and parse (longer responses will be
truncated).

When scanning large, multimedia-heavy sites, you may also want to specify
-e. This prevents binary documents from being kept in memory for
reporting purposes, and frees up a lot of RAM.

Further rate-limiting is available through third-party user-mode tools
such as trickle, or through kernel-level traffic shaping.

Oh, and real-time scan statistics can be suppressed with -u.
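To tie these performance knobs together, a cautiously tuned scan of a
somewhat distant external target might be sketched as follows (all values
are illustrative starting points, not prescriptions):

$ ./skipfish -g 40 -m 10 -w 20 -t 60 -f 50 -s 200000 -e \
  ...other parameters...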
--------------------------------
6. But seriously, how to run it?
--------------------------------

A standard, authenticated scan of a well-designed and self-contained site
(warning about all external links, e-mails, mixed content, and caching
header issues), including gentle brute-force:

$ touch new_dict.wl
$ ./skipfish -MEU -S dictionaries/minimal.wl -W new_dict.wl \
  -C "AuthCookie=value" -X /logout.aspx -o output_dir \
  http://www.example.com/

Five-connection crawl, but no brute-force; pretending to be MSIE and
trusting example.com content:

$ ./skipfish -m 5 -L -W- -o output_dir -b ie -B example.com \
  http://www.example.com/

Heavy brute force only (no HTML link extraction), limited to a single
directory and timing out after 5 seconds:

$ touch new_dict.wl
$ ./skipfish -S dictionaries/complete.wl -W new_dict.wl \
  -P -I http://www.example.com/dir1/ -o output_dir -t 5 \
  http://www.example.com/dir1/

For a short list of all command-line options, try ./skipfish -h.

----------------------------------------------------
7. How to interpret and address the issues reported?
----------------------------------------------------

Most of the problems reported by skipfish should be self-explanatory,
assuming you have a good grasp of the fundamentals of web security. If
you need a quick refresher on some of the more complicated topics, such
as MIME sniffing, you may enjoy our comprehensive Browser Security
Handbook as a starting point:

  http://code.google.com/p/browsersec/

If you still need assistance, there are several organizations that put a
considerable effort into documenting and explaining many of the common
web security threats, and advising the public on how to address them. I
encourage you to refer to the materials published by OWASP and the Web
Application Security Consortium, amongst others:

  * http://www.owasp.org/index.php/Category:Principle
  * http://www.owasp.org/index.php/Category:OWASP_Guide_Project
  * http://www.webappsec.org/projects/articles/

Although I am happy to diagnose problems with the scanner itself, I
regrettably cannot offer any assistance with the inner workings of
third-party web applications.

---------------------------------------
8. Known limitations / feature wishlist
---------------------------------------

Below is a list of features currently missing in skipfish. If you wish to
improve the tool by contributing code in one of these areas, please let
me know:

  * Buffer overflow checks: after careful consideration, I suspect there
    is no reliable way to test for buffer overflows remotely. Much like
    the actual fault condition we are looking for, proper buffer size
    checks may also result in uncaught exceptions, 500 messages, etc. I
    would love to be proved wrong, though.

  * Fully-fledged JavaScript XSS detection: several rudimentary checks
    are present in the code, but there is no proper script engine built
    in to evaluate expressions and DOM access.

  * Variable length encoding character consumption / injection bugs:
    these problems seem to be largely addressed on the browser level at
    this point, so they were a much lower priority at the time of this
    writing.

  * Security checks and link extraction for third-party, plugin-based
    content (Flash, Java, PDF, etc).

  * Password brute-force and numerical filename brute-force probes.

  * Search engine integration (vhosts, starting paths).

  * VIEWSTATE decoding.

  * NTLM and digest authentication.

  * More specific PHP tests (eval injection, RFI).

  * Proxy support: experimental HTTP proxy support is available through a
    #define directive in config.h. Adding support for HTTPS proxying is
    more complicated, and still in the works.

  * Scan resume option, better runtime info.

  * Standalone installation (make install) support.

  * Scheduling and management web UI.

-------------------------------------
9. Oy! Something went horribly wrong!
-------------------------------------

There is no web crawler so good that there wouldn't be a web framework to
one day set it on fire. If you encounter what appears to be bad behavior
(e.g., a scan that takes forever and generates too many requests,
completely bogus nodes in scan output, or outright crashes), please first
check our known issues page:

  http://code.google.com/p/skipfish/wiki/KnownIssues

If you can't find a satisfactory answer there, recompile the scanner
with:

$ make clean debug

...and re-run it this way:

$ ./skipfish [...previous options...] 2>logfile.txt

You can then inspect logfile.txt to get an idea of what went wrong; if it
looks like a scanner problem, please scrub any sensitive information from
the log file and send it to the author.

If the scanner crashed, please recompile it as indicated above, and then
type:

$ ulimit -c unlimited
$ ./skipfish [...previous options...] 2>logfile.txt
$ gdb --batch -ex back ./skipfish core

...and be sure to send the author the output of that last command as
well.

------------------------
10. Credits and feedback
------------------------

Skipfish is made possible thanks to the contributions of, and valuable
feedback from, Google's information security engineering team.

If you have any bug reports, questions, suggestions, or concerns
regarding the application, the primary author can be reached at
lcamtuf@google.com.