{"id":13414171,"url":"https://github.com/boyter/scc","last_synced_at":"2025-05-12T05:18:23.993Z","repository":{"id":37258998,"uuid":"123394155","full_name":"boyter/scc","owner":"boyter","description":"Sloc, Cloc and Code: scc is a very fast accurate code counter with complexity calculations and COCOMO estimates written in pure Go","archived":false,"fork":false,"pushed_at":"2025-05-09T05:45:22.000Z","size":12698,"stargazers_count":7292,"open_issues_count":53,"forks_count":279,"subscribers_count":38,"default_branch":"master","last_synced_at":"2025-05-12T02:43:20.215Z","etag":null,"topics":["cli","cloc","code","complexity","golang","linux","macos","scc","sloc","sloccount","statistics","tokei","windows"],"latest_commit_sha":null,"homepage":"","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/boyter.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null},"funding":{"github":"boyter"}},"created_at":"2018-03-01T06:44:25.000Z","updated_at":"2025-05-11T19:08:21.000Z","dependencies_parsed_at":"2023-12-04T06:23:31.948Z","dependency_job_id":"9a993bc9-8a05-4d19-acc7-8b3cca0a2e57","html_url":"https://github.com/boyter/scc","commit_stats":{"total_commits":1183,"total_committers":100,"mean_commits":11.83,"dds":0.1952662721893491,"last_synced_commit":"9251957d1b1350affc61bfd40126e99784c9c3a3"},"previous_names":[],"tags_count":43,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/boyter%2Fscc","tags_url":"https://repos.ecosyste.ms/
api/v1/hosts/GitHub/repositories/boyter%2Fscc/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/boyter%2Fscc/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/boyter%2Fscc/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/boyter","download_url":"https://codeload.github.com/boyter/scc/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253672696,"owners_count":21945480,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cli","cloc","code","complexity","golang","linux","macos","scc","sloc","sloccount","statistics","tokei","windows"],"created_at":"2024-07-30T21:00:15.930Z","updated_at":"2025-05-12T05:18:23.964Z","avatar_url":"https://github.com/boyter.png","language":"Go","readme":"Sloc Cloc and Code (scc)\n------------------------\n\n\u003cimg alt=\"scc\" src=https://github.com/boyter/scc/raw/master/scc.jpg\u003e\n\nA tool similar to cloc, sloccount and tokei. For counting the lines of code, blank lines, comment lines, and physical lines of source code in many programming languages.\n\nGoal is to be the fastest code counter possible, but also perform COCOMO calculation like sloccount, estimate code complexity similar to cyclomatic complexity calculators and produce unique lines of code or DRYness metrics. In short one tool to rule them all.\n\nAlso it has a very short name which is easy to type `scc`. 
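As a quick taste, counting a project is a single command (this assumes `scc` is already installed; installation options are covered below, and `redis` here is just an example checkout):

```shell
# count the current working directory (scc defaults to "." when no path is given)
scc

# or point it at one or more specific directories or files
scc redis
```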

If you don't like sloc cloc and code, feel free to use the name `Succinct Code Counter`.

[![Go](https://github.com/boyter/scc/actions/workflows/go.yml/badge.svg)](https://github.com/boyter/scc/actions/workflows/go.yml)
[![Go Report Card](https://goreportcard.com/badge/github.com/boyter/scc)](https://goreportcard.com/report/github.com/boyter/scc)
[![Coverage Status](https://coveralls.io/repos/github/boyter/scc/badge.svg?branch=master)](https://coveralls.io/github/boyter/scc?branch=master)
[![Scc Count Badge](https://sloc.xyz/github/boyter/scc/)](https://github.com/boyter/scc/)
![](https://img.shields.io/github/downloads/boyter/scc/total?label=downloads%20%28GH%29)
[![Mentioned in Awesome Go](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go)

Licensed under the MIT licence.

## Table of Contents

- [Support](#support)
- [Install](#install)
- [Background](#background)
- [Pitch](#pitch)
- [Usage](#usage)
- [Complexity Estimates](#complexity-estimates)
- [Unique Lines of Code (ULOC)](#unique-lines-of-code-uloc)
- [COCOMO](#cocomo)
- [Output Formats](#output-formats)
- [Performance](#performance)
- [Development](#development)
- [Adding/Modifying Languages](#addingmodifying-languages)
- [Issues](#issues)
- [Badges (beta)](#badges-beta)
- [Language Support](LANGUAGES.md)

### Support

Using `scc` commercially? If you want priority support for `scc`, you can purchase a year's worth at https://boyter.gumroad.com/l/kgenuv, which entitles you to priority direct email support from the developer.

### Install

#### Go Install

You can install `scc` using the standard Go toolchain.

To install the latest stable version of scc:

`go install github.com/boyter/scc/v3@latest`

To install a development version:

`go install github.com/boyter/scc/v3@master`

Note that `scc` needs Go version >= 1.24.

#### Snap

A [snap install](https://snapcraft.io/scc) exists thanks to [Ricardo](https://feliciano.tech/).

`$ sudo snap install scc`

*NB* Snap-installed applications cannot run outside of `/home` (https://askubuntu.com/questions/930437/permission-denied-error-when-running-apps-installed-as-snap-packages-ubuntu-17), so you may encounter issues if you install via snap and attempt to run `scc` outside that directory.

#### Homebrew

Or if you have [Homebrew](https://brew.sh/) installed

`$ brew install scc`

#### Fedora

Fedora Linux users can use a [COPR repository](https://copr.fedorainfracloud.org/coprs/lihaohong/scc/):

`$ sudo dnf copr enable lihaohong/scc && sudo dnf install scc`

#### MacPorts

On macOS, you can also install via [MacPorts](https://www.macports.org)

`$ sudo port install scc`

#### Scoop

Or if you are using [Scoop](https://scoop.sh/) on Windows

`$ scoop install scc`

#### Chocolatey

Or if you are using [Chocolatey](https://chocolatey.org/) on Windows

`$ choco install scc`

#### WinGet

Or if you are using [WinGet](https://github.com/microsoft/winget-cli) on Windows

`winget install --id benboyter.scc --source winget`

#### FreeBSD

On FreeBSD, scc is available as a package

`$ pkg install scc`

Or, if you prefer to build from source, you can use the ports tree

`$ cd /usr/ports/devel/scc && make install clean`

### Run in Docker

Go to the directory you want to run scc from.

Run
the command below to run the latest release of scc on your current working directory:

```
docker run --rm -it -v "$PWD:/pwd" ghcr.io/boyter/scc:master scc /pwd
```

#### Manual

Binaries for Windows, GNU/Linux and macOS for both i386 and x86_64 machines are available from the [releases](https://github.com/boyter/scc/releases) page.

#### GitLab

https://about.gitlab.com/blog/2023/02/15/code-counting-in-gitlab/

#### Other

If you would like to assist with getting `scc` added into apt/chocolatey/etc... please submit a PR, or at least raise an issue with instructions.

### Background

Read all about how it came to be, along with performance benchmarks,

 - https://boyter.org/posts/sloc-cloc-code/
 - https://boyter.org/posts/why-count-lines-of-code/
 - https://boyter.org/posts/sloc-cloc-code-revisited/
 - https://boyter.org/posts/sloc-cloc-code-performance/
 - https://boyter.org/posts/sloc-cloc-code-performance-update/

Some reviews of `scc`

 - https://nickmchardy.com/2018/10/counting-lines-of-code-in-koi-cms.html
 - https://www.feliciano.tech/blog/determine-source-code-size-and-complexity-with-scc/
 - https://metaredux.com/posts/2019/12/13/counting-lines.html

A talk given at the first GopherCon AU about `scc` (press S to see speaker notes)

 - https://boyter.org/static/gophercon-syd-presentation/
 - https://www.youtube.com/watch?v=jd-sjoy3GZo

For performance, see the [Performance](https://github.com/boyter/scc#performance) section.

Other similar projects,

 - [SLOCCount](https://www.dwheeler.com/sloccount/) the original sloc counter
 - [cloc](https://github.com/AlDanial/cloc), inspired by SLOCCount; implemented in Perl for portability
 - [gocloc](https://github.com/hhatto/gocloc) a sloc counter in Go inspired by tokei
 - [loc](https://github.com/cgag/loc) Rust implementation similar to tokei but often faster
 - [loccount](https://gitlab.com/esr/loccount) Go implementation written and maintained by ESR
 - [polyglot](https://github.com/vmchale/polyglot) ATS sloc counter
 - [tokei](https://github.com/XAMPPRocky/tokei) fast, accurate and written in Rust
 - [sloc](https://github.com/flosse/sloc) CoffeeScript code counter
 - [stto](https://github.com/mainak55512/stto) new Go code counter with a focus on performance

Interesting reading about other code counting projects tokei, loc, polyglot and loccount

 - https://www.reddit.com/r/rust/comments/59bm3t/a_fast_cloc_replacement_in_rust/
 - https://www.reddit.com/r/rust/comments/82k9iy/loc_count_lines_of_code_quickly/
 - http://blog.vmchale.com/article/polyglot-comparisons
 - http://esr.ibiblio.org/?p=8270

Further reading about the performance of processing files on disk

 - https://blog.burntsushi.net/ripgrep/

Using `scc` to process 40 TB of files from GitHub/Bitbucket/GitLab

 - https://boyter.org/posts/an-informal-survey-of-10-million-github-bitbucket-gitlab-projects/

### Pitch

Why use `scc`?

 - It is very fast and gets faster the more CPU you throw at it
 - Accurate
 - Works very well across multiple platforms without slowdown (Windows, Linux, macOS)
 - Large language support
 - Can ignore duplicate files
 - Has complexity estimations
 - You need to tell the difference between Coq and Verilog in the same directory
 - cloc YAML output support, so it is potentially a drop-in replacement for some users
 - Can identify or ignore minified files
 - Able to identify many `#!` files, ADVANCED! https://github.com/boyter/scc/issues/115
 - Can ignore large files by lines or bytes
 - Can calculate the ULOC or unique lines of code by file, language or project
 - Supports multiple output formats for integration: CSV, SQL, JSON, HTML and more

Why not use `scc`?

 - You don't like Go for some reason
 - It cannot count D source with different nested multi-line comments correctly https://github.com/boyter/scc/issues/27

### Differences

There are some important differences between `scc` and other tools that are out there. Here are a few important ones for you to consider.

Blank lines inside comments are counted as comments. While the line is technically blank, the decision was made that
once in a comment, everything should be considered a comment until that comment is ended. As such the following,

```
/* blank lines follow


*/
```

would be counted as 4 lines of comments. This is noticeable when comparing scc's output to other tools on large
repositories.

`scc` is able to count verbatim strings correctly. For example, in C# the following,

```
private const string BasePath = @"a:\";
// The below is returned to the user as a version
private const string Version = "1.0.0";
```

Because of the prefixed `@`, the first string ends at the trailing `"` (the escape character `\` is ignored), and as such the snippet should be
counted as 2 code lines and 1 comment. Some tools are unable to
deal with this and instead count up to the `"1.0.0"` as a string, which can cause the middle comment to be counted as
code rather than a comment.

`scc` will also tell you the number of bytes it has processed (for most output formats), allowing you to estimate the
cost of running some static analysis tools.

### Usage

Command line usage of `scc` is designed to be as simple as possible.
Full details can be found in `scc --help` or `scc -h`.
Note that the below reflects the state of master, not a release; as such, features listed below may be missing from your installation.

```
Sloc, Cloc and Code. Count lines of code in a directory with complexity estimation.
Version 3.5.0 (beta)
Ben Boyter <ben@boyter.org> + Contributors

Usage:
  scc [flags] [files or directories]

Flags:
      --avg-wage int                       average wage value used for basic COCOMO calculation (default 56286)
      --binary                             disable binary file detection
      --by-file                            display output for every file
  -m, --character                          calculate max and mean characters per line
      --ci                                 enable CI output settings where stdout is ASCII
      --cocomo-project-type string         change COCOMO model type [organic, semi-detached, embedded, "custom,1,1,1,1"] (default "organic")
      --count-as string                    count extension as language [e.g. jsp:htm,chead:"C Header" maps extension jsp to html and chead to C Header]
      --count-ignore                       set to allow .gitignore and .ignore files to be counted
      --currency-symbol string             set currency symbol (default "$")
      --debug                              enable debug output
      --directory-walker-job-workers int   controls the maximum number of workers which will walk the directory tree (default 8)
  -a, --dryness                            calculate the DRYness of the project (implies --uloc)
      --eaf float                          the effort adjustment factor derived from the cost drivers (1.0 if rated nominal) (default 1)
      --exclude-dir strings                directories to exclude (default [.git,.hg,.svn])
  -x, --exclude-ext strings                ignore file extensions (overrides include-ext) [comma separated list: e.g. go,java,js]
  -n, --exclude-file strings               ignore files with matching names (default [package-lock.json,Cargo.lock,yarn.lock,pubspec.lock,Podfile.lock,pnpm-lock.yaml])
      --file-gc-count int                  number of files to parse before turning the GC on (default 10000)
      --file-list-queue-size int           the size of the queue of files found and ready to be read into memory (default 8)
      --file-process-job-workers int       number of goroutine workers that process files collecting stats (default 8)
      --file-summary-job-queue-size int    the size of the queue used to hold processed file statistics before formatting (default 8)
  -f, --format string                      set output format [tabular, wide, json, json2, csv, csv-stream, cloc-yaml, html, html-table, sql, sql-insert, openmetrics] (default "tabular")
      --format-multi string                have multiple format output overriding --format [e.g. tabular:stdout,csv:file.csv,json:file.json]
      --gen                                identify generated files
      --generated-markers strings          string markers in head of generated files (default [do not edit,<auto-generated />])
  -h, --help                               help for scc
  -i, --include-ext strings                limit to file extensions [comma separated list: e.g. go,java,js]
      --include-symlinks                   if set will count symlink files
  -l, --languages                          print supported languages and extensions
      --large-byte-count int               number of bytes a file can contain before being removed from output (default 1000000)
      --large-line-count int               number of lines a file can contain before being removed from output (default 40000)
      --min                                identify minified files
  -z, --min-gen                            identify minified or generated files
      --min-gen-line-length int            number of bytes per average line for file to be considered minified or generated (default 255)
      --no-cocomo                          remove COCOMO calculation output
  -c, --no-complexity                      skip calculation of code complexity
  -d, --no-duplicates                      remove duplicate files from stats and output
      --no-gen                             ignore generated files in output (implies --gen)
      --no-gitignore                       disables .gitignore file logic
      --no-gitmodule                       disables .gitmodules file logic
      --no-hborder                         remove horizontal borders between sections
      --no-ignore                          disables .ignore file logic
      --no-large                           ignore files over certain byte and line size set by large-line-count and large-byte-count
      --no-min                             ignore minified files in output (implies --min)
      --no-min-gen                         ignore minified or generated files in output (implies --min-gen)
      --no-scc-ignore                      disables .sccignore file logic
      --no-size                            remove size calculation output
  -M, --not-match stringArray              ignore files and directories matching regular expression
  -o, --output string                      output filename (default stdout)
      --overhead float                     set the overhead multiplier for corporate overhead (facilities, equipment, accounting, etc.) (default 2.4)
  -p, --percent                            include percentage values in output
      --remap-all string                   inspect every file and remap by checking for a string and remapping the language [e.g. "-*- C++ -*-":"C Header"]
      --remap-unknown string               inspect files of unknown type and remap by checking for a string and remapping the language [e.g. "-*- C++ -*-":"C Header"]
      --size-unit string                   set size unit [si, binary, mixed, xkcd-kb, xkcd-kelly, xkcd-imaginary, xkcd-intel, xkcd-drive, xkcd-bakers] (default "si")
      --sloccount-format                   print a more SLOCCount like COCOMO calculation
  -s, --sort string                        column to sort by [files, name, lines, blanks, code, comments, complexity] (default "files")
      --sql-project string                 use supplied name as the project identifier for the current run. Only valid with the --format sql or sql-insert option
  -t, --trace                              enable trace output (not recommended when processing multiple files)
  -u, --uloc                               calculate the number of unique lines of code (ULOC) for the project
  -v, --verbose                            verbose output
      --version                            version for scc
  -w, --wide                               wider output with additional statistics (implies --complexity)
```

Output should look something like the below for the redis project

```
$ scc redis
───────────────────────────────────────────────────────────────────────────────
Language                 Files     Lines   Blanks  Comments     Code Complexity
───────────────────────────────────────────────────────────────────────────────
C                          296    180267    20367     31679   128221      32548
C Header                   215     32362     3624      6968    21770       1636
TCL                        143     28959     3130      1784    24045       2340
Shell                       44      1658      222       326     1110        187
Autoconf                    22     10871     1038      1326     8507        953
Lua                         20       525       68        70      387         65
Markdown                    16      2595      683         0     1912          0
Makefile                    11      1363      262       125      976         59
Ruby                        10       795       78        78      639        116
gitignore                   10       162       16         0      146          0
YAML                         6       711       46         8      657          0
HTML                         5      9658     2928        12     6718          0
C++                          4       286       48        14      224         31
License                      4       100       20         0       80          0
Plain Text                   3       185       26         0      159          0
CMake                        2       214       43         3      168          4
CSS                          2       107       16         0       91          0
Python                       2       219       12         6      201         34
Systemd                      2        80        6         0       74          0
BASH                         1       118       14         5       99         31
Batch                        1        28        2         0       26          3
C++ Header                   1         9        1         3        5          0
Extensible Styleshe…         1        10        0         0       10          0
Smarty Template              1        44        1         0       43          5
m4                           1       562      116        53      393          0
───────────────────────────────────────────────────────────────────────────────
Total                      823    271888    32767     42460   196661      38012
───────────────────────────────────────────────────────────────────────────────
Estimated Cost to Develop (organic) $6,918,301
Estimated Schedule Effort (organic) 28.682292 months
Estimated People Required (organic) 21.428982
───────────────────────────────────────────────────────────────────────────────
Processed 9425137 bytes, 9.425 megabytes (SI)
───────────────────────────────────────────────────────────────────────────────
```

Note that you don't have to specify the directory you want to run against. Running `scc` with no arguments will run against the current directory.

You can also run against multiple files or directories, `scc directory1 directory2 file1 file2`, with the results aggregated in the output.

Since `scc` writes to standard output, there are many ways to easily share the results.
For example, using [netcat](https://manpages.org/nc)
and [one of many pastebins](https://paste.c-net.org/) gives a public URL:

```
$ scc | nc paste.c-net.org 9999
https://paste.c-net.org/Example
```

### Ignore Files

`scc` mostly supports .ignore files inside directories that it scans. This is similar to how ripgrep, ag and tokei work. .ignore files use exactly the same syntax as .gitignore files, and as such `scc` will ignore files and directories listed in them. You can add .ignore files to ignore things such as vendored dependency files that are checked in. The idea is to allow you to add a file or folder to git but have it ignored in the count.

It also supports its own ignore file, `.sccignore`, if you want `scc` to ignore things while having ripgrep, ag, tokei and others still process them.

### Interesting Use Cases

Used inside Intel Nemu Hypervisor to track code changes between revisions https://github.com/intel/nemu/blob/topic/virt-x86/tools/cloc-change.sh#L9
It also appears to be used inside http://codescoop.com/, https://pinpoint.com/ and https://github.com/chaoss/grimoirelab-graal

It is also used to count code and guess language types in https://searchcode.com/, which makes it one of the most frequently run code counters in the world.

You can also hook scc into your GitLab pipeline https://gitlab.com/guided-explorations/ci-cd-plugin-extensions/ci-cd-plugin-extension-scc

Also used by CodeQL https://github.com/boyter/scc/pull/317 and Scaleway https://twitter.com/Scaleway/status/1488087029476995074?s=20&t=N2-z6O-ISDdDzULg4o4uVQ

- https://docs.linuxfoundation.org/lfx/insights/v3-beta-version-current/getting-started/landing-page/cocomo-cost-estimation-simplified
- https://openems.io/

### Features

`scc` uses a small state machine in order to determine what state the code is in when it reaches a newline `\n`.
As such it is aware of, and able to count,

 - Single Line Comments
 - Multi Line Comments
 - Strings
 - Multi Line Strings
 - Blank lines

Because of this it is able to accurately determine if a comment is in a string or is actually a comment.

It also attempts to count the complexity of code. This is done by checking for branching operations in the code. For example, each of the following `for if switch while else || && != ==`, if encountered in Java, would increment that file's complexity by one.

### Complexity Estimates

Let's take a minute to discuss the complexity estimate itself.

The complexity estimate is really just a number that is only comparable to files in the same language. It should not be used to compare languages directly without weighting them. The reason for this is that it is calculated by looking for branch and loop statements in the code and incrementing a counter for that file.

Because some languages don't have loops and instead use recursion, they can have a lower complexity count. Does this mean they are less complex? Probably not, but the tool cannot see this because it does not build an AST of the code; it only scans through it.

Generally though, the complexity estimate is there to help compare projects written in the same language, or to find the most complex file in a project, `scc --by-file -s complexity`, which can be useful when you are estimating how hard something is to maintain, or when looking for those files that should probably be refactored.

As for how it works:

It's my own definition, but it tries to be an approximation of cyclomatic complexity https://en.wikipedia.org/wiki/Cyclomatic_complexity although done only on a file level.

The reason it's an approximation is that it's calculated almost for free from a CPU point of view (since it's a cheap lookup while counting), whereas a real cyclomatic complexity count would need to parse the code. It gives a reasonable guess in practice, even though it fails to identify recursive methods. The goal was never for it to be exact.

In short, when scc is looking through what it has identified as code and notices what are usually branch conditions, it increments a counter.

The conditions it looks for are compiled into the code, and you can get an idea of them by looking at the JSON inside the repository. See https://github.com/boyter/scc/blob/master/languages.json#L3869 for an example of what it looks for in a Java file.

The increment happens for each of the matching conditions and produces the number you see.

### Unique Lines of Code (ULOC)

ULOC stands for Unique Lines of Code and represents the unique lines across languages, files and the project itself. This idea was taken from
https://cmcenroe.me/2018/12/14/uloc.html where the calculation is presented using standard Unix tools: `sort -u *.h *.c | wc -l`. This metric is
there to assist with the estimation of complexity within the project. Quoting the source:

> In my opinion, the number this produces should be a better estimate of the complexity of a project. Compared to SLOC, not only are blank lines discounted, but so are close-brace lines and other repetitive code such as common includes. On the other hand, ULOC counts comments, which require just as much maintenance as the code around them does, while avoiding inflating the result with license headers which appear in every file, for example.

You can obtain the ULOC by supplying the `-u` or `--uloc` argument to `scc`.

It has a corresponding metric, `DRYness %`, which is the ratio of ULOC to SLOC, or `DRYness = ULOC / SLOC`. The
higher the number, the more DRY (don't repeat yourself) the project can be considered. In general a higher value
here is better, as it indicates less duplicated code.
The DRYness metric was taken from a comment by minimax https://lobste.rs/s/has9r7/uloc_unique_lines_code

To obtain the DRYness metric you can use the `-a` or `--dryness` argument to `scc`, which will implicitly set `--uloc`.

Note that there is a performance penalty when calculating the ULOC metrics, which can double the runtime.

Running the ULOC and DRYness calculations against the C code of a clone of redis produces the following output.

```
$ scc -a -i c redis
───────────────────────────────────────────────────────────────────────────────
Language                 Files     Lines   Blanks  Comments     Code Complexity
───────────────────────────────────────────────────────────────────────────────
C                          419    241293    27309     41292   172692      40849
(ULOC)                            133535
───────────────────────────────────────────────────────────────────────────────
Total                      419    241293    27309     41292   172692      40849
───────────────────────────────────────────────────────────────────────────────
Unique Lines of Code (ULOC)       133535
DRYness %                           0.55
───────────────────────────────────────────────────────────────────────────────
Estimated Cost to Develop (organic) $6,035,748
Estimated Schedule Effort (organic) 27.23 months
Estimated People Required (organic) 19.69
───────────────────────────────────────────────────────────────────────────────
Processed 8407821 bytes, 8.408 megabytes (SI)
───────────────────────────────────────────────────────────────────────────────
```

Further reading about the ULOC calculation can be found at https://boyter.org/posts/sloc-cloc-code-new-metic-uloc/

### COCOMO

The COCOMO statistics displayed at the bottom of any command line run can be configured as needed.

```
Estimated Cost to Develop (organic) $664,081
Estimated Schedule Effort (organic) 11.772217 months
Estimated People Required (organic) 5.011633
```

To change the COCOMO parameters, you can use one of the default COCOMO models.

```
scc --cocomo-project-type organic
scc --cocomo-project-type semi-detached
scc --cocomo-project-type embedded
```

You can also supply your own parameters if you are familiar with COCOMO, as follows,

```
scc --cocomo-project-type "custom,1,1,1,1"
```

See below for details about the model choices and the parameters they use.

Organic – A software project is said to be of the organic type if the required team size is adequately small, the
problem is well understood and has been solved in the past, and the team members have nominal experience
with the problem.

`scc --cocomo-project-type "organic,2.4,1.05,2.5,0.38"`

Semi-detached – A software project is said to be of the semi-detached type if vital characteristics such as team size,
experience, and knowledge of the various programming environments lie between those of organic and embedded.
Projects classified as semi-detached are comparatively less familiar and more difficult to develop than
organic ones, and require more experience, better guidance and creativity. E.g. compilers or
various embedded systems can be considered semi-detached.

`scc --cocomo-project-type "semi-detached,3.0,1.12,2.5,0.35"`

Embedded – A software project requiring the highest level of complexity, creativity and experience
falls under this category. Such software requires a larger team size than the other two models,
and the developers need to be sufficiently experienced and creative to develop such complex models.

`scc --cocomo-project-type "embedded,3.6,1.20,2.5,0.32"`

### Large File Detection

You can have `scc` exclude large files from the output.
\n\nThe option to do so is `--no-large` which by default will exclude files over 1,000,000 bytes or 40,000 lines.\n\nYou can control the size of either value using `--large-byte-count` or `--large-line-count`.\n\nFor example to exclude files over 1,000 lines and 50kb you could use the following,\n\n`scc --no-large --large-byte-count 50000 --large-line-count 1000`\n\n### Minified/Generated File Detection\n\nYou can have `scc` identify and optionally remove files identified as being minified or generated from the output.\n\nYou can do so by enabling the `-z` flag like so `scc -z` which will identify any file with an average line byte size \u003e= 255 (by default) as being minified.\n\nMinified files appear like so in the output.\n\n```\n$ scc --no-cocomo -z ./examples/minified/jquery-3.1.1.min.js\n───────────────────────────────────────────────────────────────────────────────\nLanguage                 Files     Lines   Blanks  Comments     Code Complexity\n───────────────────────────────────────────────────────────────────────────────\nJavaScript (min)             1         4        0         1        3         17\n───────────────────────────────────────────────────────────────────────────────\nTotal                        1         4        0         1        3         17\n───────────────────────────────────────────────────────────────────────────────\nProcessed 86709 bytes, 0.087 megabytes (SI)\n───────────────────────────────────────────────────────────────────────────────\n```\n\nMinified files are indicated with the text `(min)` after the language name.\n\nGenerated files are indicated with the text `(gen)` after the language name.\n\nYou can control the average line byte size using `--min-gen-line-length` such as `scc -z --min-gen-line-length 1`. Please note you need `-z` as modifying this value does not imply minified detection.\n\nYou can exclude minified files from the count totally using the flag `--no-min-gen`. 
Files which match the minified check will be excluded from the output.\n\n### Remapping\n\nSome files may not have an extension. These are checked to see if they are a `#!` file; if so, they will be remapped to the\ncorrect language. Otherwise, the file will not be processed.\n\nHowever, you may have the situation where you want to remap such files based on a string inside them. To do so you can use `--remap-unknown`\n\n```\n scc --remap-unknown \"-*- C++ -*-\":\"C Header\"\n```\n\nThe above will inspect any file with no extension, looking for the string `-*- C++ -*-`, and if found remap the file to be counted using the C Header rules. \nYou can have multiple remap rules if required,\n\n```\n scc --remap-unknown \"-*- C++ -*-\":\"C Header\",\"other\":\"Java\"\n```\n\nThere is also the `--remap-all` parameter, which will remap all files.\n\nNote that in all cases, if the remap rule does not apply, the normal `#!` rules will apply.\n\n### Output Formats\n\nBy default `scc` will output to the console. However, you can produce output in other formats if you require.\n\nThe different options are `tabular, wide, json, csv, csv-stream, cloc-yaml, html, html-table, sql, sql-insert, openmetrics`. \n\nNote that you can write `scc` output to disk using the `-o, --output` option. This allows you to specify a file to\nwrite your output to. For example, `scc -f html -o output.html` will run `scc` against the current directory and output\nthe results in HTML to the file `output.html`.\n\nYou can also write to multiple output files, or multiple formats to stdout, using the `--format-multi` option. This is \nmost useful when working in CI/CD systems where you want HTML reports as an artifact while also displaying the counts in stdout. 
\n\n```\nscc --format-multi \"tabular:stdout,html:output.html,csv:output.csv\"\n```\n\nThe above will run against the current directory, writing the default output to standard output, as well as writing\nto output.html and output.csv in the appropriate formats.\n\n#### Tabular \n\nThis is the default output format when scc is run.\n\n#### Wide \n\nWide produces additional information, namely the complexity/lines metric. This can be useful when trying to\nidentify the most complex file inside a project based on the complexity estimate.\n\n#### JSON\n\nJSON produces JSON output, mostly designed to allow `scc` to feed into other programs.\n\nNote that this format will give you the byte size of every file `scc` reads, allowing you to get a breakdown of the\nnumber of bytes processed.\n\n#### CSV\n\nCSV as an option is good for importing into a spreadsheet for analysis. \n\nNote that this format will give you the byte size of every file `scc` reads, allowing you to get a breakdown of the\nnumber of bytes processed. Also note that CSV respects `--by-file` and as such will return a summary by default.\n\n#### CSV-Stream\n\ncsv-stream is an option useful for processing very large repositories where you are likely to run into memory issues. Its output format is identical to CSV. \n\nNote that you should not use this with the `--format-multi` option, as it will always print to standard output, and because of how it works doing so will negate the memory\nsavings that this option provides. Note that there is no sort applied with this option. \n\n#### cloc-yaml \n\nA drop-in replacement for cloc using its YAML output option. 
This is quite often used for passing into other \nbuild systems and can help with replacing cloc if required.\n\n```\n$ scc -f cloc-yml processor\n# https://github.com/boyter/scc/\nheader:\n  url: https://github.com/boyter/scc/\n  version: 2.11.0\n  elapsed_seconds: 0.008\n  n_files: 21\n  n_lines: 6562\n  files_per_second: 2625\n  lines_per_second: 820250\nGo:\n  name: Go\n  code: 5186\n  comment: 273\n  blank: 1103\n  nFiles: 21\nSUM:\n  code: 5186\n  comment: 273\n  blank: 1103\n  nFiles: 21\n\n$ cloc --yaml processor\n      21 text files.\n      21 unique files.\n       0 files ignored.\n\n---\n# http://cloc.sourceforge.net\nheader :\n  cloc_url           : http://cloc.sourceforge.net\n  cloc_version       : 1.60\n  elapsed_seconds    : 0.196972846984863\n  n_files            : 21\n  n_lines            : 6562\n  files_per_second   : 106.613679608407\n  lines_per_second   : 33314.2364566841\nGo:\n  nFiles: 21\n  blank: 1137\n  comment: 606\n  code: 4819\nSUM:\n  blank: 1137\n  code: 4819\n  comment: 606\n  nFiles: 21\n```\n\n#### HTML and HTML-TABLE\n\nThe HTML output options produce a minimal HTML report using a table, either as a standalone page (`html`) or as just a table (`html-table`)\nwhich can be injected into your own HTML pages. The only difference between the two is that the `html` option includes\nHTML head and body tags with minimal styling.\n\nThe markup is designed to allow your own custom styles to be applied. 
An example report\n[is here to view](SCC-OUTPUT-REPORT.html).\n\nNote that the HTML options follow the command line options, so you can use `scc --by-file -f html` to produce a report with every\nfile and not just the summary.\n\nNote that with the `--by-file` option this format will give you the byte size of every file `scc` reads, allowing you to get a breakdown of the\nnumber of bytes processed.\n\n#### SQL and SQL-Insert\n\nThe SQL output format is \"mostly\" compatible with cloc's SQL output format https://github.com/AlDanial/cloc#sql-\n\nWhile all queries on the cloc documentation should work as expected, you will not be able to append output from `scc` and `cloc` into the same database. This is because the table format is slightly different,\nto account for scc including complexity counts and bytes.\n\nThe difference between `sql` and `sql-insert` is that `sql` will include table creation, while the latter will only have the insert commands.\n\nUsage is the same as any other `scc` command, but SQL output will always contain per-file details. You can compute totals yourself using SQL; COCOMO calculations will appear in the metadata table as the columns `estimated_cost`, `estimated_schedule_months` and `estimated_people`.\n\nThe below will run scc against the current directory, set the project name to scc, and then pipe the output to sqlite3 to load it into the database code.db\n\n```\nscc --format sql --sql-project scc . | sqlite3 code.db\n```\n\nAssuming you then wanted to append another project,\n\n```\nscc --format sql-insert --sql-project redis . 
| sqlite3 code.db\n```\n\nYou could then run SQL against the database,\n\n```\nsqlite3 code.db 'select project,file,max(nCode) as nL from t\n                         group by project order by nL desc;'\n```\n\nSee the cloc documentation for more examples.\n\n\n#### OpenMetrics\n\n[OpenMetrics](https://openmetrics.io/) is a metric reporting format specification extending the Prometheus exposition text format.\n\nThe produced output is natively supported by [Prometheus](https://prometheus.io/) and [GitLab CI](https://docs.gitlab.com/ee/ci/testing/metrics_reports.html)\n\nNote that OpenMetrics respects `--by-file` and as such will return a summary by default.\n\nThe output includes a metadata header containing definitions of the returned metrics: \n```text\n# TYPE scc_files count\n# HELP scc_files Number of sourcecode files.\n# TYPE scc_lines count\n# UNIT scc_lines lines\n# HELP scc_lines Number of lines.\n# TYPE scc_code count\n# HELP scc_code Number of lines of actual code.\n# TYPE scc_comments count\n# HELP scc_comments Number of comments.\n# TYPE scc_blanks count\n# HELP scc_blanks Number of blank lines.\n# TYPE scc_complexity count\n# HELP scc_complexity Code complexity.\n# TYPE scc_bytes count\n# UNIT scc_bytes bytes\n# HELP scc_bytes Size in bytes.\n```\n\nThe header is followed by the metric data in either language summary form:\n```text\nscc_files{language=\"Go\"} 1\nscc_lines{language=\"Go\"} 1000\nscc_code{language=\"Go\"} 1000\nscc_comments{language=\"Go\"} 1000\nscc_blanks{language=\"Go\"} 1000\nscc_complexity{language=\"Go\"} 1000\nscc_bytes{language=\"Go\"} 1000\n```\n\nor, if `--by-file` is present, in per file form:\n```text\nscc_lines{language=\"Go\",file=\"./bbbb.go\"} 1000\nscc_code{language=\"Go\",file=\"./bbbb.go\"} 1000\nscc_comments{language=\"Go\",file=\"./bbbb.go\"} 1000\nscc_blanks{language=\"Go\",file=\"./bbbb.go\"} 1000\nscc_complexity{language=\"Go\",file=\"./bbbb.go\"} 1000\nscc_bytes{language=\"Go\",file=\"./bbbb.go\"} 1000\n```\n\n### 
Performance\n\nGenerally `scc` is the fastest code counter of any I am aware of and have compared against. The below comparisons are against the fastest alternative counters; see `Other similar projects` above for all of the other code counters compared. It is designed to scale to as many CPU cores as you can provide.\n\nHowever, if you want greater performance and you have RAM to spare, you can disable the garbage collector, like the following on Linux: `GOGC=-1 scc .`, which should speed things up considerably. For some repositories, turning off the code complexity calculation via `-c` can reduce runtime as well.\n\nBenchmarks were run on a fresh 48-core CPU-optimised Digital Ocean virtual machine on 2024/09/30, all done using [hyperfine](https://github.com/sharkdp/hyperfine).\n\nSee https://github.com/boyter/scc/blob/master/benchmark.sh to see how the benchmarks are run.\n\n\n#### Valkey https://github.com/valkey-io/valkey\n\n```shell\nBenchmark 1: scc valkey\n  Time (mean ± σ):      28.0 ms ±   1.6 ms    [User: 166.1 ms, System: 55.0 ms]\n  Range (min … max):    24.7 ms …  31.5 ms    114 runs\n \nBenchmark 2: scc -c valkey\n  Time (mean ± σ):      25.8 ms ±   1.7 ms    [User: 123.7 ms, System: 53.2 ms]\n  Range (min … max):    23.3 ms …  29.3 ms    114 runs\n \nBenchmark 3: tokei valkey\n  Time (mean ± σ):      63.0 ms ±   3.8 ms    [User: 433.8 ms, System: 244.3 ms]\n  Range (min … max):    46.7 ms …  67.6 ms    44 runs\n \nBenchmark 4: polyglot valkey\n  Time (mean ± σ):      27.4 ms ±   0.8 ms    [User: 46.5 ms, System: 79.0 ms]\n  Range (min … max):    25.7 ms …  29.5 ms    108 runs\n \nSummary\n  scc -c valkey ran\n    1.06 ± 0.08 times faster than polyglot valkey\n    1.08 ± 0.09 times faster than scc valkey\n    2.44 ± 0.22 times faster than tokei valkey\n```\n\n#### CPython https://github.com/python/cpython\n\n```shell\nBenchmark 1: scc cpython\n  Time (mean ± σ):      81.9 ms ±   4.2 ms    [User: 789.6 ms, System: 164.6 ms]\n  Range (min 
… max):    74.0 ms …  89.6 ms    36 runs\n \nBenchmark 2: scc -c cpython\n  Time (mean ± σ):      75.4 ms ±   4.6 ms    [User: 621.9 ms, System: 152.6 ms]\n  Range (min … max):    68.4 ms …  84.5 ms    37 runs\n \nBenchmark 3: tokei cpython\n  Time (mean ± σ):     162.1 ms ±   3.4 ms    [User: 1824.0 ms, System: 420.4 ms]\n  Range (min … max):   156.7 ms … 168.9 ms    18 runs\n \nBenchmark 4: polyglot cpython\n  Time (mean ± σ):      94.2 ms ±   3.0 ms    [User: 210.3 ms, System: 260.3 ms]\n  Range (min … max):    88.3 ms …  99.4 ms    30 runs\n \nSummary\n  scc -c cpython ran\n    1.09 ± 0.09 times faster than scc cpython\n    1.25 ± 0.09 times faster than polyglot cpython\n    2.15 ± 0.14 times faster than tokei cpython\n```\n\n#### Linux Kernel https://github.com/torvalds/linux\n\n```shell\nBenchmark 1: scc linux\n  Time (mean ± σ):      1.070 s ±  0.036 s    [User: 15.253 s, System: 1.962 s]\n  Range (min … max):    1.011 s …  1.133 s    10 runs\n \nBenchmark 2: scc -c linux\n  Time (mean ± σ):      1.007 s ±  0.039 s    [User: 9.822 s, System: 1.937 s]\n  Range (min … max):    0.915 s …  1.043 s    10 runs\n \nBenchmark 3: tokei linux\n  Time (mean ± σ):      1.094 s ±  0.019 s    [User: 19.416 s, System: 11.085 s]\n  Range (min … max):    1.067 s …  1.135 s    10 runs\n \nBenchmark 4: polyglot linux\n  Time (mean ± σ):      1.387 s ±  0.028 s    [User: 3.775 s, System: 3.212 s]\n  Range (min … max):    1.359 s …  1.433 s    10 runs\n \nSummary\n  scc -c linux ran\n    1.06 ± 0.05 times faster than scc linux\n    1.09 ± 0.05 times faster than tokei linux\n    1.38 ± 0.06 times faster than polyglot linux\n```\n\n#### Sourcegraph https://github.com/SINTEF/sourcegraph.git\n\nSourcegraph has gone dark since I last ran these benchmarks, hence the use of a clone taken before this occurred.\nThe reason for including it is to track what appears to be a performance regression in tokei.\n\n\n```shell\nBenchmark 1: scc sourcegraph\n  Time (mean ± σ):     125.1 ms ±   8.0 ms    [User: 
638.1 ms, System: 218.0 ms]\n  Range (min … max):   116.7 ms … 141.3 ms    24 runs\n \nBenchmark 2: scc -c sourcegraph\n  Time (mean ± σ):     119.8 ms ±   8.3 ms    [User: 554.8 ms, System: 208.6 ms]\n  Range (min … max):   111.9 ms … 138.4 ms    22 runs\n \nBenchmark 3: tokei sourcegraph\n  Time (mean ± σ):     23.888 s ±  1.416 s    [User: 73.858 s, System: 630.906 s]\n  Range (min … max):   22.292 s … 27.010 s    10 runs\n \nBenchmark 4: polyglot sourcegraph\n  Time (mean ± σ):     113.3 ms ±   4.1 ms    [User: 237.7 ms, System: 791.8 ms]\n  Range (min … max):   107.9 ms … 124.3 ms    26 runs\n \nSummary\n  polyglot sourcegraph ran\n    1.06 ± 0.08 times faster than scc -c sourcegraph\n    1.10 ± 0.08 times faster than scc sourcegraph\n  210.86 ± 14.66 times faster than tokei sourcegraph\n\n```\n\nIf you enable duplicate detection, expect `scc`'s performance to fall by about 20%.\n\nPerformance is tracked for some releases and presented below.\n\n\u003cimg alt=\"scc\" src=https://github.com/boyter/scc/raw/master/performance-over-time.png\u003e\n\nThe decrease in performance from the 3.3.0 release was due to accurate .gitignore, .ignore and .gitmodule support.\nCurrent work is focussed on resolving this.\n\nhttps://jsfiddle.net/mw21h9va/\n\n### CI/CD Support\n\nSome CI/CD systems, which will remain nameless, do not work very well with the box-drawing lines used by `scc`. 
To support those systems better there is an option, `--ci`, which will change the default output to ASCII only.\n\n```\n$ scc --ci main.go\n-------------------------------------------------------------------------------\nLanguage                 Files     Lines   Blanks  Comments     Code Complexity\n-------------------------------------------------------------------------------\nGo                           1       272        7         6      259          4\n-------------------------------------------------------------------------------\nTotal                        1       272        7         6      259          4\n-------------------------------------------------------------------------------\nEstimated Cost to Develop $6,539\nEstimated Schedule Effort 2.268839 months\nEstimated People Required 0.341437\n-------------------------------------------------------------------------------\nProcessed 5674 bytes, 0.006 megabytes (SI)\n-------------------------------------------------------------------------------\n```\n\nThe `--format-multi` option is especially useful in CI/CD where you want multiple output formats for storage or reporting.\n\n### Development\n\nIf you want to hack away, feel free! PRs are accepted. Some things to keep in mind: if you want to change a language definition you need to update `languages.json` and then run `go generate`, which will convert it into the `processor/constants.go` file.\n\nFor all other changes, ensure you run all tests before submitting. You can do so using `go test ./...`. However, for maximum coverage please run `test-all.sh`, which will run `gofmt`, the unit tests, the race detector and then all of the integration tests. All of those must pass to ensure a stable release.\n\n### API Support\n\nThe core part of `scc`, the counting engine, is exposed publicly so it can be integrated into other Go applications. See https://github.com/pinpt/ripsrc for an example of how to do this. 
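If you would rather not link against the Go packages, a lightweight alternative is to run `scc` as a subprocess and parse its JSON output. The sketch below is a minimal, hypothetical example of that approach; the field names (`Name`, `Code`, `Count`, and so on) and the sample data are assumptions based on scc's JSON summary format, so verify them against the output of your own version:

```python
import json

# A hypothetical sample shaped like the per-language summary objects that
# `scc -f json .` emits (field names are assumptions; check your version).
sample = """[
  {"Name": "Go", "Lines": 1000, "Code": 800, "Comment": 120, "Blank": 80, "Complexity": 45, "Count": 3},
  {"Name": "Python", "Lines": 200, "Code": 150, "Comment": 30, "Blank": 20, "Complexity": 5, "Count": 1}
]"""

# In practice you would capture the real output, e.g. with
#   subprocess.run(["scc", "-f", "json", "."], capture_output=True)
languages = json.loads(sample)

total_code = sum(lang["Code"] for lang in languages)
total_files = sum(lang["Count"] for lang in languages)

for lang in sorted(languages, key=lambda l: l["Code"], reverse=True):
    print(f'{lang["Name"]:<10} files={lang["Count"]:<3} code={lang["Code"]}')
print(f"Total: {total_files} files, {total_code} lines of code")
```

This keeps the integration language-agnostic at the cost of a subprocess call; for tighter integration, the Go quick start below remains the better option.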
\n\nIt also powers all of the code calculations displayed in https://searchcode.com/ such as https://searchcode.com/file/169350674/main.go/ making it one of the more widely used code counters in the world.\n\nHowever, as a quick start, consider the following.\n\nNote that you must pass in the number of bytes in the content in order to ensure it is counted!\n\n```\npackage main\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\n\t\"github.com/boyter/scc/v3/processor\"\n)\n\ntype statsProcessor struct{}\n\nfunc (p *statsProcessor) ProcessLine(job *processor.FileJob, currentLine int64, lineType processor.LineType) bool {\n\tswitch lineType {\n\tcase processor.LINE_BLANK:\n\t\tfmt.Println(currentLine, \"lineType\", \"BLANK\")\n\tcase processor.LINE_CODE:\n\t\tfmt.Println(currentLine, \"lineType\", \"CODE\")\n\tcase processor.LINE_COMMENT:\n\t\tfmt.Println(currentLine, \"lineType\", \"COMMENT\")\n\t}\n\treturn true\n}\n\nfunc main() {\n\t// os.ReadFile replaces the deprecated ioutil.ReadFile\n\tbts, err := os.ReadFile(\"somefile.go\")\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\tt := \u0026statsProcessor{}\n\tfilejob := \u0026processor.FileJob{\n\t\tFilename: \"test.go\",\n\t\tLanguage: \"Go\",\n\t\tContent:  bts,\n\t\tCallback: t,\n\t\tBytes:    int64(len(bts)),\n\t}\n\n\tprocessor.ProcessConstants() // Required to load the language information and need only be done once\n\tprocessor.CountStats(filejob)\n}\n```\n\n\n### Adding/Modifying Languages\n\nTo add or modify a language you will need to edit the `languages.json` file in the root of the project, and then run `go generate` to build it into the application. You can then `go install` or `go build` as normal to produce the binary with your modifications.\n\n### Issues\n\nIt's possible that you may see the counts vary between runs. This usually means one of two things: either something is changing or locking the files under scc, or you are hitting ulimit restrictions. 
To change the ulimit, see the following links.\n\n - https://superuser.com/questions/261023/how-to-change-default-ulimit-values-in-mac-os-x-10-6#306555\n - https://unix.stackexchange.com/questions/108174/how-to-persistently-control-maximum-system-resource-consumption-on-mac/221988#221988\n - https://access.redhat.com/solutions/61334\n - https://serverfault.com/questions/356962/where-are-the-default-ulimit-values-set-linux-centos\n - https://www.tecmint.com/increase-set-open-file-limits-in-linux/\n\nTo help identify this issue, run scc like so: `scc -v .` and look for the message `too many open files` in the output. If it is there, you can rectify it by setting your ulimit to a higher value.\n\n### Low Memory\n\nIf you are running `scc` in a low-memory environment (\u003c 512 MB of RAM) you may need to set `--file-gc-count` to a lower value such as `0` to force the garbage collector to be on at all times.\n\nA sign that this is required will be `scc` crashing with panic errors.\n\n### Tests\n\nscc is pretty well tested, with many unit tests, integration tests and benchmarks to ensure that it is fast and complete.\n\n### Package\n\nPackaging as of version v3.1.0 is done through https://goreleaser.com/ \n\n### Containers\n\nNote that if you plan to run `scc` in Alpine containers you will need to build with `CGO_ENABLED=0`.\n\nSee the below Dockerfile as an example of how to achieve this, based on this issue: https://github.com/boyter/scc/issues/208\n\n```\nFROM golang as scc-get\n\nENV GOOS=linux \\\nGOARCH=amd64 \\\nCGO_ENABLED=0\n\nARG VERSION\nRUN git clone --branch $VERSION --depth 1 https://github.com/boyter/scc\nWORKDIR /go/scc\nRUN go build -ldflags=\"-s -w\"\n\nFROM alpine\nCOPY --from=scc-get /go/scc/scc /bin/\nENTRYPOINT [\"scc\"]\n```\n\n### Badges (beta)\n\nYou can use `scc` to provide badges on your GitHub/Bitbucket/GitLab/sr.ht open repositories. 
For example, [![Scc Count Badge](https://sloc.xyz/github/boyter/scc/)](https://github.com/boyter/scc/)\nThe format to do so is,\n\nhttps://sloc.xyz/PROVIDER/USER/REPO\n\nAn example of the badge for `scc` is included below, and is used on this page.\n\n```\n[![Scc Count Badge](https://sloc.xyz/github/boyter/scc/)](https://github.com/boyter/scc/)\n```\n\nBy default the badge shows the repo's line count. You can also have it show a different category by using the `?category=` query string. \n\nValid values include `code, blanks, lines, comments, cocomo`, and examples of their appearance are included below.\n\n[![Scc Count Badge](https://sloc.xyz/github/boyter/scc/?category=code)](https://github.com/boyter/scc/)\n[![Scc Count Badge](https://sloc.xyz/github/boyter/scc/?category=blanks)](https://github.com/boyter/scc/)\n[![Scc Count Badge](https://sloc.xyz/github/boyter/scc/?category=lines)](https://github.com/boyter/scc/)\n[![Scc Count Badge](https://sloc.xyz/github/boyter/scc/?category=comments)](https://github.com/boyter/scc/)\n[![Scc Count Badge](https://sloc.xyz/github/boyter/scc/?category=cocomo)](https://github.com/boyter/scc/)\n\nFor `cocomo` you can also set the `avg-wage` value, similar to `scc` itself. 
For example,\n\nhttps://sloc.xyz/github/boyter/scc/?category=cocomo\u0026avg-wage=1\nhttps://sloc.xyz/github/boyter/scc/?category=cocomo\u0026avg-wage=100000 \n\nNote that the avg-wage value must be a positive integer, otherwise it will revert to the default value of 56286.\n\nYou can also configure the look and feel of the badge using the following parameters,\n\n - ?lower=true will lowercase the title text, so \"Total lines\" would be \"total lines\"\n\nThe below can control the colours of the shadows, fonts and badges,\n\n - ?font-color=fff\n - ?font-shadow-color=010101\n - ?top-shadow-accent-color=bbb\n - ?title-bg-color=555\n - ?badge-bg-color=4c1\n\nAn example of using some of these parameters to produce an admittedly ugly result,\n\n[![Scc Count Badge](https://sloc.xyz/github/boyter/scc?font-color=ff0000\u0026badge-bg-color=0000ff\u0026lower=true)](https://github.com/boyter/scc/)\n\n*NB* it may not work for VERY large repositories (it has been tested on Apache hadoop/spark without issue).\n\nYou can find the source code for the badges in the repository at https://github.com/boyter/scc/blob/master/cmd/badges/main.go \n\n#### An example for each supported provider\n\n- Github - https://sloc.xyz/github/boyter/scc/\n- sr.ht - https://sloc.xyz/sr.ht/~nektro/magnolia-desktop/\n- Bitbucket - https://sloc.xyz/bitbucket/boyter/decodingcaptchas\n- Gitlab - https://sloc.xyz/gitlab/esr/loccount\n\n### Languages\n\nA list of supported languages. The master version of `scc` supports 322 languages at last count. Note that this assumes you built from master, and the list might trail behind what is actually supported. 
To see what your version of `scc` supports, run `scc --languages`.\n\n[Click here to view all languages supported by master](LANGUAGES.md)\n\n\n### Release Checklist\n\n- Update version\n- Push code with release number\n- Tag off\n- Release via goreleaser\n- Update dockerfile\n