[![Build Status](https://travis-ci.org/gsauthof/utility.svg?branch=master)](https://travis-ci.org/gsauthof/utility)

This repository contains a collection of command line utilities.

- [adjtimex](#adjtimex)
    -- list some clock related system settings
- [addrof](#addrofdevof)
    -- list IP address(es) of network devices
- arsort
    -- topologically sort static libraries
- [ascii](#ascii)
    -- pretty print the ASCII table
- benchmark.sh
    -- run a command multiple times and report stats
- benchmark.py
    -- run a command multiple times and report stats (more features)
- [check2junit](#check2junit)
    -- convert libcheck XML to Jenkins/JUnit compatible XML
- check-bat.py
    -- alarm user (or remote-control power socket) when Android
    battery capacity reaches an upper threshold during charging
- [check-cert](#check-cert)
    -- check approaching expiration/validate certs of remote servers
- [check-dnsbl](#check-dnsbl)
    -- check if mailservers are DNS blacklisted/have rDNS records
- chromium-extensions
    -- list installed Chromium extensions
- [cpufreq](#cpufreq)
    -- print current CPU frequency using CPU counters
- [dcat](#dcat)
    -- decompressing cat (autodetects gzip/zstd/bz2/...)
- [dcheck](#dcheck)
    -- run a program under DBX's memory check mode
- detect-size
    -- detect and set real height/width of a terminal
- devof
    -- list network device names given an address (prefix)
- disas
    -- disassemble a certain function
- [dtmemtime](#dtmemtime)
    -- measure high-water memory usage of a process and its descendants under Solaris
- exec
    -- change argv[0] of a command
- [firefox-addons](#firefox-addons)
    -- list installed Firefox addons
- gs-ext
    -- list and manage installed Gnome Shell Extensions
- [hcheck](#hcheck)
    -- health-check command execution using a [healthchecks.io][hcio] instance
- hc_check
    -- health check mail delivery mail filter
- inhibit
    -- temporarily disable Gnome-Shell screen blanking from the terminal
- isempty
    -- detect empty images (e.g. in batch scan results)
- [latest-kernel-running](#latest-kernel)
    -- is the latest installed kernel actually running?
- [link_stats64](#link_stats64)
    -- dump the kernel's rtnl_link_stats64 structs
- [lockf](#lockf)
    -- protect command execution with a lock
- lsata.sh
    -- map ataX kernel log ids to /dev/sdY devices
- [macgen](#macgen)
    -- randomly generate a private/internal MAC address
- macgen.py
    -- Python implementation of macgen
- matrixto
    -- send messages to a Matrix room from the command line
- netio2csv.py
    -- collect metering data from Netio power sockets into CSV files.
    See also check-bat.py.
- [oldprocs](#oldprocs)
    -- list running (and possibly restart) old processes/services whose object
      files were updated
- [pargs](#pargs)
    -- display argv and other vectors of PIDs/core files
- [pdfmerge](#pdfmerge)
    -- vertically merge two PDF files (i.e. as two layers)
- [pldd](#pldd)
    -- list shared libraries linked into a running process
- [pq](#pq)
    -- query process and thread attributes
- pwhatch
    -- generate secure and easy to communicate passwords
- [remove](#remove)
    -- sync USB drive cache, power down and remove device
- reset-tmux
    -- reset a tmux session after binary data escaped to the console
- rtcdelta
    -- compute difference between RTC and system clock
- ripdvd
    -- copy each DVD track into a nicely named .vob file
- [searchb](#searchb)
    -- search a binary file in another
- [silence](#silence)
    -- silence stdout/stderr unless command fails
- [silencce](#silencce)
    -- C++ implementation of silence
- [swap](#swap)
    -- atomically exchange names of two files on Linux
- tailuart.py
    -- read lines from a UART tty device, timestamp and dump them to stdout
- train-spam
    -- feed spam maildir messages into gonzofilter and remove them
- unrpm
    -- extract an RPM file
- [user-installed](#user-installed)
    -- list manually installed packages on major distributions
- wipedev
    -- wipe storage device fast and thoroughly

For example:

    $ crontab -l
    15 10 * * * silence backup_stuff.sh /home


2016, Georg Sauthoff <mail@gms.tf>


## TOC Tail

To skip over the utility sections:

- [Build Instructions](#build-instructions)
- [Unittests](#unittests)
- [License](#license)

## Adjtimex

The adjtimex syscall allows getting and setting many clock related
system settings. This utility displays some settings that are
mainly of interest when dealing with time synchronisation such
as NTP and PTP. Example output:

```
$ ./adjtimex
Clock is synchronized (STA_UNSYNC unset)
Maxerror: 500 us
TAI offset: 37 s
PPS frequency discipline (STA_PPSFREQ): disabled
PPS time discipline (STA_PPSTIME): disabled
```


## Addrof/Devof

In the spirit of `pidof`, these utilities list the IP address(es)
of a network device, or the name(s) of the network device(s) that
use an IP address (prefix).

Example:

```
$ addrof enp0s31f6
203.0.113.23
2001:DB8:1337:2323::cafe
$ addrof -4 enp0s31f6
203.0.113.23
$ devof 203.0.113.0/24
enp0s31f6
```

## ASCII

This utility pretty-prints the [ASCII][ascii] table. By default, the table
has 4 columns. With 4 columns the table is still compact and some
properties are easy to spot. For example, how upper and lower case
conversion is just a matter of toggling one bit, or the
relationship between control characters and pressing Ctrl and a
printable character on the same row (think: ESC vs. `Ctrl+[`,
`TAB` vs. `Ctrl+I`, `CR` vs. `Ctrl+M`, etc.).

The number of columns can be changed with the `-c` option. For
example, `-c8` yields an 8 column table, which is how the ASCII
table is usually printed in (old) manuals. Such a table
highlights other properties.
For example, how ASCII somewhat
simplifies the conversion of [Binary Coded Decimals (BCD)][bcd]
(think: bitwise-or the BCD nibble with `0b0110000` to get the ASCII
decimal character and bitwise-and with `0b1111` for the other
direction).

The `-x` option is useful for looking up lesser used control
character abbreviations, e.g.:

    $ ./ascii.py -x EOT
    EOT = End of Transmission (4)

Example 32x4 table:

       00   01   10   11
      NUL  SPC    @    `  00000
      SOH    !    A    a  00001
      STX    "    B    b  00010
      ETX    #    C    c  00011
      EOT    $    D    d  00100
      ENQ    %    E    e  00101
      ACK    &    F    f  00110
      BEL    '    G    g  00111
       BS    (    H    h  01000
       HT    )    I    i  01001
       LF    *    J    j  01010
       VT    +    K    k  01011
       FF    ,    L    l  01100
       CR    -    M    m  01101
       SO    .    N    n  01110
       SI    /    O    o  01111
      DLE    0    P    p  10000
      DC1    1    Q    q  10001
      DC2    2    R    r  10010
      DC3    3    S    s  10011
      DC4    4    T    t  10100
      NAK    5    U    u  10101
      SYN    6    V    v  10110
      ETB    7    W    w  10111
      CAN    8    X    x  11000
       EM    9    Y    y  11001
      SUB    :    Z    z  11010
      ESC    ;    [    {  11011
       FS    <    \    |  11100
       GS    =    ]    }  11101
       RS    >    ^    ~  11110
       US    ?    _  DEL  11111

Placing the column headers on top and the row headers at the
right makes it clear how the resulting code is constructed for a
character, i.e. the column header is the prefix and the row
header is the suffix.
Example: the R character has the binary
value `0b1010010`.
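
The bit relationships described above can be checked with a few lines of
Python (a sketch; `toggle_case` and `ctrl` are hypothetical helper names,
not part of this repository):

```python
def toggle_case(c: str) -> str:
    """Flip the case bit (0x20) of an ASCII letter."""
    return chr(ord(c) ^ 0x20)

def ctrl(c: str) -> str:
    """Return the control character Ctrl+<c> produces:
    keep only the low five bits, i.e. stay on the same table row."""
    return chr(ord(c.upper()) & 0b11111)

print(toggle_case('a'))         # 'A'
print(ctrl('[') == '\x1b')      # True: Ctrl+[ is ESC
print(ctrl('i') == '\t')        # True: Ctrl+I is TAB
print(chr(0b0101 | 0b0110000))  # '5': BCD nibble or-ed with 0x30
```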

## Check-Cert

This script calls [`gnutls-cli`][gnutls] for the specified remote
services.

Example:

    $ check-cert.py imap.example.org_993 example.org_443

Any validation errors (including OCSP ones) reported by
[GnuTLS][gnutls] are printed by the script, which then exits
with a non-zero status. The script also warns and exits
unsuccessfully if a cert expires in less than 20 days.

If everything is fine, the script is silent and exits
successfully; thus, check-cert is suitable for Cron-scheduled
execution.

Checking a service that does TLS after a STARTTLS command, as in

    $ check-cert.py mail.example.org_25_smtp

requires GnuTLS version 3.4.x or later (e.g. 3.4.15). For example,
RHEL/CentOS 7 comes with GnuTLS 3.3.8, while Fedora 23 provides
GnuTLS 3.4.15.

It may make sense to create a `gnutls-cli` wrapper script and put
it into `$PATH` such that the right version is called with the
right CA bundle, e.g.:

    #!/bin/bash
    exec /nix/var/nix/profiles/default/bin/gnutls-cli \
      --x509cafile=/etc/pki/tls/cert.pem "$@"

The script doesn't use the comparable OpenSSL command (i.e. `openssl
s_client`) because it doesn't conveniently present the
expiration dates and doesn't even exit with a non-zero status
in case of verification errors.
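
The expiry check that check-cert.py delegates to gnutls-cli can be
approximated with Python's standard `ssl` module; this is a hedged sketch
(`days_until_expiry` and `parse_not_after` are hypothetical names - the
script itself shells out to gnutls-cli, not to this code):

```python
import socket
import ssl
from datetime import datetime

def parse_not_after(s: str) -> datetime:
    # getpeercert() reports e.g. 'Jun 01 12:00:00 2030 GMT'
    return datetime.strptime(s, "%b %d %H:%M:%S %Y %Z")

def days_until_expiry(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()  # also verifies the chain
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return (parse_not_after(cert["notAfter"]) - datetime.utcnow()).days
```

Warning if `days_until_expiry("example.org") < 20` would mirror the
20-day threshold mentioned above.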

## Check-DNSBL

This utility checks a list of mail servers against some [well
known blacklists
(DNSBL)](https://en.wikipedia.org/wiki/Comparison_of_DNS_blacklists).
By default, 30 or so lists are queried, but other/additional ones
can be specified via command line arguments or a CSV file.

It also checks, by default, if there are [reverse
DNS](https://en.wikipedia.org/wiki/Reverse_DNS_lookup) records
and if they match the forward ones.

The mail servers can be specified as a list of IPv4 or IPv6
addresses and/or domain names. MX records are followed, by
default. Of course, an outgoing mail server doesn't necessarily
have to double as MX - in those cases its domain/address has to
be specified additionally. In any case, any specified domains
are finally resolved via their A or AAAA records to IPv4 or IPv6
addresses.

If everything is ok, `check-dnsbl.py` doesn't generate any output
(unless `--debug` is specified). Otherwise, it prints errors to
stderr and exits with a non-zero status. Thus, it can be
used as a Cron job for monitoring purposes.

Examples:

Something is listed:

    $ ./check-dnsbl.py 117.246.201.146
    2016-11-05 19:01:13 - ERROR    - There is no reverse DNS record for 117.246.201.146
    2016-11-05 19:01:13 - ERROR    - OMG, 117.246.201.146 is listed in DNSBL zen.spamhaus.org: 127.0.0.11 ("https://www.spamhaus.org/query/ip/117.246.201.146")
    2016-11-05 19:01:19 - ERROR    - OMG, 117.246.201.146 is listed in DNSBL virbl.dnsbl.bit.nl: 127.0.0.2 ("See: http://virbl.bit.nl/lookup/index.php?ip=117.246.201.146")
    2016-11-05 19:01:19 - ERROR    - 117.246.201.146 is listed in 2 blacklists

Everything is ok:

    $ ./check-dnsbl.py mail1.example.org mail2.example.org example.net
    $ echo $?
    0

Note that your default resolving nameserver might reply
incorrectly to some blacklist queries. It is thus advisable to
test some well known listed/non-listed addresses. See also the
`--ns` option. There are also some options for selecting some
predefined public DNS resolvers (e.g. Google Public DNS,
Cloudflare, OpenDNS, Quad9). But again, some of those servers may
filter out some blacklists. For example, as of 2019-12-29, only
Cloudflare and OpenDNS return zen.spamhaus.org blacklisting
records for `117.246.201.146` and `116.103.227.39` while Google
and Quad9 don't. See also the [Spamhaus.org FAQ][spamhausfaq].

[spamhausfaq]: https://www.spamhaus.org/faq/section/DNSBL%20Usage#261
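
A DNSBL query is an ordinary DNS lookup: the address's octets are reversed
and prepended to the blacklist zone, and an answer in 127.0.0.0/8 means
"listed". A minimal sketch (the function names are illustrative, not taken
from check-dnsbl.py):

```python
import socket

def dnsbl_name(addr: str, zone: str) -> str:
    """Build the DNSBL query name for an IPv4 address."""
    return ".".join(reversed(addr.split("."))) + "." + zone

def is_listed(addr: str, zone: str) -> bool:
    try:
        # the name resolves (to a 127.x.y.z record) only if listed
        socket.gethostbyname(dnsbl_name(addr, zone))
        return True
    except socket.gaierror:
        return False
```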

## Check2junit

[Jenkins][jenkins] comes with the [JUnit plugin][junit] that
draws some nice graphs and creates reports from XML generated by
JUnit testsuite runs. For example, there is a graph with the
number of successful/failed testcases over the builds, and there
are graphs that display the duration of single test cases over
the builds.

The [libcheck][check] unittest C library also supports the
generation of an XML report.

This script converts the libcheck XML format into the XML format
supported by the Jenkins JUnit plugin.

Example (e.g. part of a build script):

    CK_XML_LOG_FILE_NAME=test1.xml ./check_network
    CK_XML_LOG_FILE_NAME=test2.xml ./check_backend
    check2junit.py test1.xml test2.xml > junit.xml

The JUnit plugin can be configured in the Jenkins job
configuration; basically, it has to be added as another post-build
action (where the input filename can be set, e.g.
to `junit.xml`).

### Related

- The C++ unittest library [Catch](https://github.com/philsquared/Catch/blob/master/docs/build-systems.md#junit-reporter) has JUnit XML output support built in.
- Related to the integration of unittest results is also the
  integration of coverage reports into Jenkins. A good solution
  for this is to use the Jenkins [Cobertura plugin](https://wiki.jenkins-ci.org/display/JENKINS/Cobertura+Plugin), generate the coverage report with lcov and convert it with the [lcov-to-cobertura-xml](https://github.com/eriwen/lcov-to-cobertura-xml.git) script. The lcov HTML reports can also be included with the Jenkins [HTML publisher plugin](https://wiki.jenkins-ci.org/display/JENKINS/HTML+Publisher+Plugin).
- Easiest to integrate with C/C++ builds is the Jenkins [Warnings plugin](https://wiki.jenkins-ci.org/display/JENKINS/Warnings+Plugin) as it natively supports GCC warnings.

## disas

Disas is a small wrapper around objdump/gdb for disassembling a
given function.

Examples:

    $ disas a.out main
    $ disas a.out 10144 -a    # dump function that includes this addr
    $ disas a.out foobar -f   # also dump functions that call foobar
    $ disas a.out '.*xyz'     # dump all functions the regex matches
    $ disas a.out cmpte --gdb # disassemble using gdb instead of objdump

Note that recent versions of `objdump` support the
`--disassemble=fn` option (e.g. on Fedora 31), but e.g. the
objdump on CentOS 7 doesn't. The wrapper doesn't require this
option and thus also runs on older systems.

The wrapper doesn't always use gdb, because objdump is more
widely available and is more flexible when it comes to dumping
multiple functions.

## CPUfreq

The `cpufreq` utility prints the current CPU frequency of each
CPU core. It computes the frequency from a set of CPU counters
(see the help text in `cpufreq.py` and further source code
comments there for details).

Thus, it doesn't rely on frequency-scaling support being enabled
in the Linux kernel. When frequency-scaling support is disabled
in the kernel and/or the BIOS, the files
`/sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq` don't
exist, commands like `cpupower frequency-info` don't work, and
(as of 2020) the frequency obtained via `/proc/cpuinfo` isn't
necessarily correct.

In any case, this tool can be used to cross-check CPU frequency
assumptions and frequency scaling results.

## dcat

The `dcat` utility automatically decompresses files with the
right algorithm, on-the-fly. Thus, it's a decompressing cat.
It detects the compression file format by looking at the first
[magic bytes][magic] and thus ignores any file extension.
Uncompressed files, or files it doesn't recognize, `dcat`
concatenates as-is.

Examples:

    $ echo 'Hello World' | zstd -c | ./dcat
    Hello World
    $ ./dcat foo.txt.gz bar.txt.zst baz.txt

For the actual decompressing, `dcat` execs a helper like `zcat`
or `bzcat`. Currently, it autodetects gzip, Zstandard, LZ4, bzip2
and XZ.

[magic]: https://en.wikipedia.org/wiki/Magic_number_(programming)#Magic_numbers_in_files
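
The magic-byte dispatch can be sketched in a few lines of Python; the
prefixes below are the well-known file signatures of these formats, while
the table and function names are illustrative, not dcat's actual
implementation:

```python
# file signature -> decompressing helper to exec
MAGIC = {
    b"\x1f\x8b": "zcat",             # gzip
    b"BZh": "bzcat",                 # bzip2
    b"\x28\xb5\x2f\xfd": "zstdcat",  # Zstandard frame
    b"\x04\x22\x4d\x18": "lz4cat",   # LZ4 frame
    b"\xfd7zXZ\x00": "xzcat",        # XZ
}

def helper_for(prefix: bytes) -> str:
    """Pick a helper based on the first bytes of a file."""
    for magic, cmd in MAGIC.items():
        if prefix.startswith(magic):
            return cmd
    return "cat"  # unknown/uncompressed: concatenate as-is
```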

## DCheck

The dcheck.sh utility runs a program with the supplied arguments
inside the [DBX debugger][dbx] with [memory checking mode][dbxcheck] enabled.

Example:

    $ dcheck someprog arg1 arg2

For each memory access issue, the problem details and a stacktrace
are printed and the execution is resumed. After the leak report
is printed, the DBX is automatically exited. The main work is done
by a [Ksh][ksh] function that is called by the wrapper (cf.
check.dbx). Yes, DBX embeds a Ksh 88 compatible interpreter (in
contrast to GDB, which supports Guile and Python scripting).

The memory check feature of [Solaris Studio's DBX][ssdbx] detects reads of
uninitialized memory and heap based memory issues like buffer
overflowing reads and writes (i.e. out-of-bounds writes). It also
tracks allocations for detecting memory leaks. DBX
dynamically installs hardware watchpoints for the access
checking.

On [Solaris][solaris]/[SPARC][sparc] it is a good and relatively
widely available alternative to a subset of the functionality of
excellent open source tools like [Valgrind (memcheck)][valgrind]
or the [Address/Leak Sanitizers][asan] (which aren't available on
Solaris/SPARC).

## dtmemtime

This utility measures the high-water memory usage of a process
and all its descendants under [Solaris][solaris]. The main work
is done by a [DTrace][dtrace] script that hooks into some syscalls
(hence the name). It's designed to work under Solaris 10 (or
later).

Example:

    $ cat foo.sh
    #!/bin/bash
    find "$1" | sed 's@/[^/]\+$@@' | sort | uniq -c | sort -n -k 1,1 -r | head
    $ dtmemtime foo.sh /usr/share
    pid 23066 (#1) (gsed) runtime: 691 ms, highwater memory: 116208 bytes
    pid 23065 (#1) (gfind) runtime: 691 ms, highwater memory: 968176 bytes
    pid 23069 (#1) (gsort) runtime: 1584 ms, highwater memory: 8619504 bytes
    pid 23067 (#1) (gsort) runtime: 1560 ms, highwater memory: 8619504 bytes
    pid 23063 (#1) (foo.sh) runtime: 1601 ms, highwater memory: 97208 bytes
    pid 23070 (#1) (ghead) runtime: 1583 ms, highwater memory: 91632 bytes
    pid 23068 (#1) (guniq) runtime: 1559 ms, highwater memory: 91632 bytes
    total runtime: 1601 ms, highwater memory: 18571096 bytes

Say `mycmd.sh` executes some processes in parallel, e.g. with a
[shell-pipe command][pipeline]; then the high-water memory
measurement takes the memory usage of all those processes into
account.

The [DTrace][dtrace] script installs some syscall and process
probes, e.g. a probe into the [`brk` syscall][brk] and probes
into syscalls used for managing anonymous memory mappings. Note
that the Solaris 10 libc memory management exclusively uses
`brk`. The easiest way to get a more modern memory allocator that
also creates anonymous memory mappings (like under Linux) is to
link against [`libumem`][umem] and set an environment variable.

[dtrace]: https://en.wikipedia.org/wiki/DTrace
[brk]: https://en.wikipedia.org/wiki/Sbrk
[umem]: https://en.wikipedia.org/wiki/Libumem
[pipeline]: https://en.wikipedia.org/wiki/Pipeline_(Unix)


## Firefox Addons

This utility lists the installed Firefox addons. Useful for disaster
recovery purposes.
It also solves this problem: you have a bunch of
Firefox addons installed that you want to replicate to another
user account.

Example:

As user X on computer A:

    $ ./firefox-addons.py -o addons.csv

Transfer the file to user Y on computer B and execute:

    $ cut -f1 -d, addons.csv | tail -n +2 | xargs firefox

After that, you 'just' have to click a bunch of 'Add to Firefox'
buttons and close some tabs.


## hcheck

The `hcheck` tool health-checks a command execution, i.e. it
reports command forking and exit status to a
[healthchecks.io][hcio] instance.

Healthchecks.io is a notification service for monitoring periodic
jobs. It's open-source and can be self-hosted; however, there is
also a public instance. A healthchecks instance basically pages
you when a cron-job doesn't keep its schedule or fails.
In other words, it implements an external dead-man-switch style
of monitoring.

Example usage:

    export hcheck_uuid=20fc5ce7-53f2-401f-a5ac-a0b5a718c5fc
    hcheck /usr/local/bin/daily-backup.sh /home/juser

As a result, the healthchecks events log contains the
(externally) measured script runtime and exit status for each
command execution. Depending on the check's configuration, you
get notified when the script runs too long or completely misses a
schedule.

For jobs that aren't 100 % silent even in the good case, you can
combine `hcheck` with [`silence`](#silence) to avoid superfluous mails from
the cron daemon.

See also healthchecks.io's list of [similar tools](https://healthchecks.io/docs/resources/) (Section Command Runners, Shell Wrappers).
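
The core of such a wrapper fits in a few lines. A minimal sketch, assuming
the healthchecks.io ping API (a `/start` ping before the command, then a
ping with the exit status appended); the injectable `urlopen` parameter is
for illustration only and this is not hcheck's actual code:

```python
import subprocess
import urllib.request

def hcheck(ping_url: str, argv, urlopen=urllib.request.urlopen):
    urlopen(ping_url + "/start", timeout=10)  # report the fork
    rc = subprocess.run(argv).returncode
    urlopen(f"{ping_url}/{rc}", timeout=10)   # report the exit status
    return rc
```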

## Latest Kernel

This script checks if the system actually runs the latest
installed kernel. This might not be the case if something like
yum-cron automatically installs updates or if somebody forgot to
restart the system after a kernel update.

A mismatch in kernel versions is reported via the exit status.
With option `-v`, a diagnostic message is printed as well.

Thus, the script can be used to send out notification mails (e.g.
when running it as a cron job).

Alternatively, the result can be used to initiate a restart of a
machine that already runs yum-cron. If the machine provides a
clustered service, the restart can be coordinated with something
like etcd.

This check complements what [tracer][tracer] does.
Tracer checks if outdated applications or libraries are loaded,
but (currently) doesn't check the kernel (cf. [Issue 45][tracer45]).


## Link_stats64

The `link_stats64` utility dumps, for each network interface, its
[link_stats64 struct](https://docs.kernel.org/networking/statistics.html#c.rtnl_link_stats64).

Inspecting the raw fields of this struct may be useful since
other views (such as procfs or `ip -s l`) aggregate some fields
together (e.g. drops) or might miss recent additions, such as
`rx_otherhost_dropped`.

In particular, the `rx_otherhost_dropped` field is interesting for
detecting misconfigured network devices that yield unicast
flooding on switches.
See also the `ping.py` utility for sending
ICMP echo requests to non-existent destinations.

However, as of the 6.2 Linux kernel timeframe, at least some network
drivers don't maintain `rx_otherhost_dropped` (since they install
MAC address filters and their hardware might not have a hardware
counter for this) and thus the field is only ever incremented
when the network device is in promiscuous mode.

This repository also contains `otherhost.py`, which just dumps the
`rx_otherhost_dropped` field using the [drgn](https://github.com/osandov/drgn)
programmable debugger.

Link_stats64 is perhaps also a simple example of how to
communicate with the Linux kernel over netlink in general,
and how to send `RTM_GETSTATS`/`IFLA_STATS_LINK_64` requests in
particular, which doesn't seem to be documented much elsewhere.

It doesn't use libnl since libnl doesn't seem to be beneficial for
this use case. Also, digging through the netlink interface and
structs felt like less work than dealing with the arguably
lightly documented libnl.


## Lockf

`lockf` only executes a provided command if a lock can be
acquired. Thus, it is able to serialize command execution and
guard access to exclusive actions. It provides several
locking methods:

- [`lockf()`][lockf] - hence the name
- [`fcntl()`][fcntl]
- [`flock()`][flock] - BSD API, also supported by e.g. Linux
- [`open(..., ... O_CREAT | O_EXCL)`][open] - exclusive file creation
- [`link()`][link] - hardlinking
- [`mkdir()`][mkdir] - not necessarily atomic everywhere
- [`rename()`][rename] - rename is also atomic

All of the methods are available on Linux. They are also specified by
POSIX, except `flock()`, which comes from BSD.

The methods `lockf()`, `fcntl()` and `flock()` support waiting on
a lock without polling (`-b` option).

Whether different lock methods interact is system
specific. For example, on recent Linux, `fcntl()` and `flock()`
don't interact unless they are on NFS. And POSIX allows
interaction between `lockf()` and `fcntl()` but doesn't require
it.
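
What lockf does with, e.g., the `flock()` method can be sketched in Python
using the stdlib `fcntl` module (`run_locked` is a hypothetical name and
this is a simplified illustration, not how the tool is implemented):

```python
import fcntl
import os
import subprocess

def run_locked(lockfile: str, argv, block: bool = False) -> int:
    """Run argv only while holding an exclusive flock() lock."""
    fd = os.open(lockfile, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        flags = fcntl.LOCK_EX | (0 if block else fcntl.LOCK_NB)
        try:
            fcntl.flock(fd, flags)
        except BlockingIOError:
            return 1  # somebody else holds the lock
        return subprocess.run(argv).returncode
    finally:
        os.close(fd)  # closing the fd releases the flock() lock
```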

Not all methods are necessarily supported and work reliably over
NFS, especially in a heterogeneous environment. Existing
implementations may choose to return success even if they implement
a locking API as a null operation. Also, with some NFS
implementations some methods may become unreliable in case of
packet loss or a rebooting NFS server. For NFS, the methods worth
looking into are `lockf()`, `fcntl()`, `open()` and `link()`.
`mkdir()` is not specified by NFS to be atomic. Linux supports
`flock()` over NFS since kernel 2.6.37, but only because it is
emulated via `fcntl()` then.

Similar utilities:

- [Linux-Util flock][lu-flock], uses `flock()`
- [BSD lockf][bsd-lockf], uses `flock()` despite the name
- [lockrun][lockrun], uses `lockf()` where available, `flock()`
  otherwise
- [Procmail lockfile][lockfile], uses `link()` and supports polling

See also:

- [Correct locking in shell scripts?][1]

## Macgen

When creating network interfaces (dummy, bridge, tap, macvlan,
macvtap, veth, ...) one usually can omit a MAC address because the
Linux kernel automatically generates and assigns one. Those MAC
addresses come from the private/internal unicast namespace which
is described by the following regular expression:

    .[26ae]:..:..:..:..:..

(cf. [locally administered
addresses](https://en.wikipedia.org/wiki/MAC_address#Universal_vs._local))

The macgen scripts also randomly generate MAC addresses from
this namespace. Those can be used for explicitly specifying
MAC addresses in network setup scripts, for example to get
reproducible results or to avoid having to query the MAC addresses
of just newly created virtual interfaces.
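
Generating such an address boils down to setting the locally-administered
bit and clearing the multicast bit in the first octet; a minimal Python
sketch in the spirit of macgen.py (the function name is illustrative):

```python
import random

def macgen() -> str:
    # bit 1 set: locally administered; bit 0 clear: unicast
    first = random.randrange(256) & 0b11111100 | 0b10
    octets = [first] + [random.randrange(256) for _ in range(5)]
    return ":".join(f"{o:02x}" for o in octets)
```

The second hex digit then always lands in {2, 6, a, e}, matching the
regular expression above.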

## Oldprocs

The oldprocs utility lists processes and services that need to be
restarted because their executable or one of its libraries has
changed, e.g. after a `dnf update` and before the next system
reboot.

It detects if a process belongs to a systemd service and prints
the matching `systemctl` command line to restart it.
Optionally, with `--restart`, the utility automatically restarts
the detected services.

Thus, it's well suited for automatic system updates. Think:
something like `dnf -y update` run from a cron-job, followed by
`oldprocs --restart`. This makes sense for systems where it's more
important to get a required security update as fast as possible
than to avoid a potential service breakage due to the new update.
Arguably, depending on your distribution and package selection,
this risk is rather small, anyways.

Another use case is to verify that all processes run current
binaries and thus nobody forgot to restart some services or
reboot the system, when necessary.

Cases where a service can't be restarted (e.g. dbus), a restart
wasn't sufficient (e.g. old processes are still around), a
restart would eject a user from a graphical session, or there are
other non-service processes are signaled via a non-zero exit
status and diagnostic messages.

Example output when running with root privileges:

    # ./oldprocs
    You have to restart the following system services:

    systemctl restart libvirtd.service

    You have to restart the following user services:

    sudo -u '#1000' systemctl --user restart evolution-addressbook-factory.service
    sudo -u '#1000' systemctl --user restart evolution-calendar-factory.service
    sudo -u '#1000' systemctl --user restart evolution-source-registry.service
    sudo -u '#1000' systemctl --user restart gnome-terminal-server.service

    The following user processes must be restarted manually
    (or a session logoff/login might take care of them):

    /usr/bin/clementine (deleted) (uid 1000) - pids: 20826
    /usr/bin/clementine-tagreader (deleted) (uid 1000) - pids: 20831 20832 20833 20834
    /usr/lib64/firefox/firefox (deleted) (uid 1000) - pids: 2601 2696 2981 7154 21053
    /usr/libexec/gnome-shell-calendar-server (uid 1000) - pids: 2142
    /usr/libexec/goa-daemon (uid 1000) - pids: 2163
    # echo $?
    11

To automatically restart all old system processes:

    # ./oldprocs --restart

The auto-restart just includes processes belonging to systemd
services that belong to the systemd instance the user has direct
access to.

Example output when the user with id 1000 executes it:

    $ ./oldprocs
    You have to restart the following user services:

    systemctl --user restart evolution-addressbook-factory.service
    systemctl --user restart evolution-calendar-factory.service
    systemctl --user restart evolution-source-registry.service
    systemctl --user restart gnome-terminal-server.service

    The following user processes must be restarted manually
    (or a session logoff/login might take care of them):

    /usr/bin/clementine (deleted) (uid 1000) - pids: 20826
    /usr/bin/clementine-tagreader (deleted) (uid 1000) - pids: 20831 20832 20833 20834
    /usr/lib64/firefox/firefox (deleted) (uid 1000) - pids: 2601 2696 2981 7154 21053
    /usr/libexec/gnome-shell-calendar-server (uid 1000) - pids: 2142
    /usr/libexec/goa-daemon (uid 1000) - pids: 2163

The difference is that the restart commands for the systemd
user services are generated from the user's perspective - and the
old system processes are not included, as permissions are lacking
to access the relevant directories and files under `/proc`.

How does it work: oldprocs scans through `/proc` to find out
which processes are running, which executables they were started
with, whether those were replaced, changed dates, which shared
objects are linked into the process, whether those were changed,
which systemd service the process is part of, if any, etc.
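
The detection principle can be illustrated with a short Python sketch: on
Linux, `/proc/<pid>/exe` is a symlink to the executable, and the kernel
appends ` (deleted)` to the link target once the file has been replaced or
removed. The helper below is hypothetical and far simpler than oldprocs
itself, which also inspects shared objects and systemd units:

```python
import os

def deleted_executables() -> dict:
    """Map replaced/deleted executables to the pids still running them."""
    out = {}
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            exe = os.readlink(f"/proc/{pid}/exe")
        except OSError:
            continue  # kernel thread, permission denied, or process exited
        if exe.endswith(" (deleted)"):
            out.setdefault(exe, []).append(int(pid))
    return out
```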
processes and provides restart commands for some
services. It is written in Python and, in my experience, it
sometimes runs very slowly and the output sometimes contains
inaccuracies (e.g. wrong restart commands, false negatives, or wrong
diagnostics such as a required system reboot that isn't really
necessary). As of 2018, tracer doesn't support the automatic
restarting of outdated services.

In contrast to that, oldprocs is written in C++, runs very fast,
supports the automatic restarting of outdated services and
provides reliable diagnostics.

The utility `latest-kernel-running.sh` complements `oldprocs` as
it checks whether a system reboot is necessary due to a previous
kernel update.

## Pargs

Under Linux, pargs displays the argument vector (argv) of a
running process or the one included in a process's core file.
In addition, it supports printing the environment vector
(envp) and the auxiliary vector (auxv), including some pretty
printing and dereferencing of some interesting addresses (e.g. the
executable filename or the platform string).

Examples:

    $ pargs $pid
    $ pargs -l $pid
    $ pargs -aex $pid
    $ pargs -aexv some_core

It is inspired by the [Solaris `pargs`][solpargs] command. Similar to Linux,
Solaris also has a `/proc` filesystem that provides much
information about each process. In contrast to Linux, its pseudo
files all contain binary data, i.e. they follow some struct
definitions. Thus, it's natural that Solaris has a whole p-family
of commands for dealing with processes.
Some, like `pgrep` and `pkill`,
have also been available on Linux for a long time, but to the author's
knowledge, the `pargs` in this repository is the first Linux pargs.

The default mode, displaying the argument vector of a running
process (`pargs $pid`), can be approximated under Linux like this:

    $ tr '\0' '\n' < /proc/$pid/cmdline

Displaying the environment vector works analogously, but the
auxiliary vector (`/proc/$pid/auxv`) is more complicated because
it's just an array of integers (64 or 32 bit, depending on the
architecture/process) that also references addresses in the
process's address space.

The complexity grows when we want to obtain the same information
from a core file. Some of it can be displayed with gdb, but this
requires the availability of the executable file, as well.
`pargs` just requires the core file.

In comparison to Solaris, some parts can arguably be obtained
more easily on Linux (e.g. `/proc/$pid/{cmdline,environ}`) while
others require more effort. On Solaris, the core file contains
some structs that include copies of the argument and environment
vectors, thus it's straightforward to access that information.
This is not the case on Linux, where one has to search for the
vectors in the right memory section.

Tested on:

- Fedora 26 x86-64, both with 32/64 bit executables/core files,
  and with core files from different byte-order architectures
- RHEL 6 (needs `-s` there)
- Debian 8 ppc64 (PowerPC), both with 32/64 bit executables/core files,
  and with core files from different byte-order architectures

In general, the code is portable, e.g. when reading core files,
pargs supports word sizes and [byte orders][endian] (i.e. little
vs. big endian) different from the native one. For example, pargs
running on x86-64 Linux is able to print the argument vector,
auxiliary vector, etc.
of a core file that was generated on a
big-endian PowerPC Linux system.

## PDFmerge

The `pdfmerge` utility merges two PDF files such that the pages
of the second PDF overlay the ones of the first. In that
sense it's a vertical merge. Example:

    $ ./pdfmerge text-only.pdf image-only.pdf doc.pdf

The main use case for this is to merge a text-only (transparent) PDF
file (OCR result) with an image-only PDF file (scan result). See
also [`adf2pdf.py`][adf2pdf] for a complete workflow.

It supports both the [PyPDF2][pypdf2] and [pdfrw][pdfrw] packages
(cf. the `--pdfrw` option), thus, it's also a small case study of
the different PDF manipulation APIs. Fedora packages these
dependencies as `python3-PyPDF2` and `python3-pdfrw`.

Related PDF tools:

- PDFtk - supports vertical merging, as well (`pdftk
  text-only.pdf multibackground image-only.pdf output doc.pdf`),
  but it isn't widely available anymore. At least [Fedora has
  removed][pdftkfed] it from its main repository. Since parts of it are
  written in Java, it's arguably harder to install than this tiny
  Python utility.
- pdf-stapler - supports a small subset of pdftk's functionality,
  but, currently, [doesn't support][staplernot] such a merge
  operation
- mutool - from the makers of mupdf, doesn't support such a merge
- poppler-utils - derived from the xpdf utils, doesn't support
  vertical merging (just horizontal merging with `pdfunite`)

[pypdf2]: https://pythonhosted.org/PyPDF2/
[pdfrw]: https://github.com/pmaupin/pdfrw
[adf2pdf]: https://github.com/gsauthof/adf2pdf
[pdftkfed]: https://ask.fedoraproject.org/en/question/65261/pdftk-not-in-f21/
[staplernot]: https://github.com/hellerbarde/stapler/issues/35

## PLDD

The `pldd` command lists all shared libraries loaded into a
running process. Often, the result is similar to what `ldd`
prints for the executable. But the running process may actually
end up with a different set of loaded shared libraries, e.g.
due
to a modified environment (e.g. `LD_LIBRARY_PATH`) or some
dynamic logic (e.g. when the process calls `dlopen()`).

Example:

    $ ./pldd.py $$
    /usr/lib64/ld-2.26.so
    /usr/lib64/libc-2.26.so
    /usr/lib64/libdl-2.26.so
    /usr/lib64/libgdbm.so.4.0.0
    [..]

The two implementations, `pldd.sh` and `pldd.py`, basically yield the
same results. The difference is just that `pldd.sh` prints the
current in-process-memory shared object table as created and
maintained by the `ld.so` dynamic linker, whereas `pldd.py`
prints the linked shared libraries from the kernel's point of
view. The visible difference is that the kernel resolves all
symbolic links, e.g. `pldd.sh` may report `/lib64/libc.so.6`
while `pldd.py` reports `/usr/lib64/libc-2.26.so` on systems
where `/lib64` symlinks to `/usr/lib64`.

Related tools: [Solaris 10 has `pldd`][pldd-sol], which works like
`pldd.sh` - but it also supports reading the in-memory linker table
from just a core file. Glibc comes with a `pldd` command
(which doesn't support core files), but [it has been seriously broken
for years][pldd-glibc] (since glibc 2.19) - i.e.
it goes into an
endless loop instead of printing any results.

[pldd-sol]: https://www.freebsd.org/cgi/man.cgi?query=pldd&apropos=0&sektion=0&manpath=SunOS+5.10&arch=default&format=html
[pldd-glibc]: https://manpages.debian.org/stretch/manpages/pldd.1.en.html#BUGS

## PQ

The `pq` (process query) utility queries task attributes such as
process flags, the number of threads or environment variables.

Examples:

List all tasks that match the 'rcu' regular expression:

```
$ pq -e rcu -o pid aff cpu cls pri nice comm
    pid aff cpu cls pri nice            comm
      3   3   3 OTH   0  -20          rcu_gp
      4 0-3   0 OTH   0  -20      rcu_par_gp
     11 0-3   1 OTH   0    0       rcu_sched
     31 0-3   1 OTH   0    0 rcu_tasks_kthre
     32 0-3   1 OTH   0    0 rcu_tasks_rude_
     33 0-3   1 OTH   0    0 rcu_tasks_trace
```

List all processes and threads:

```
$ pq -a -t | { head -3 ; tail -5; }
    pid     tid    ppid aff cpu cls pri nice    syscall      rss            comm
      1       1       0 0-3   3 OTH   0    0          #    10872         systemd
      2       2       0 0-3   2 OTH   0    0          #        #        kthreadd
 118106  118106    1326 0-3   3 OTH   0    0          #     8828 systemd-userwor
 118107  118107    1326 0-3   1 OTH   0    0          #     8624 systemd-userwor
 118115  118115       2 0-3   0 OTH   0    0          #        # kworker/u8:6-btrfs-endio-write
 118117  118117  100290 0-3   0 OTH   0    0       read     3800              pq
 118118  118118       #   #   #   ?   #    #          #        #               #
```

List all processes owned by a user:

```
$ pq -u juser | { head -3 ; tail -5; }
    pid     tid    ppid aff cpu cls pri nice    syscall      rss            comm
   1855    1855       1 0-3   0 OTH   0    0 epoll_wait     8976         systemd
   1857    1857    1855 0-3   3 OTH   0    0          #     5856        (sd-pam)
  87182   87182    3546 0-3   2 OTH   0    0      futex    65096 chromium-browse
  95873   95873    5034 0-3   0 OTH   0    0       kill    11792             vim
 100284  100284   57938 0-3   3 OTH   0    0     select    26044             vim
 100290  100290  100284 0-3   0 OTH   0    0 rt_sigsuspend     8196             zsh
 118138  118138  100290 0-3   1 OTH   0    0       read     3364              pq
```

List all available attributes:

```
$ pq -o help
```

List all kernel threads whose CPU affinity can't be changed:

```
$ pq -a -k -o pid comm flags | grep 'pid\|NO_SETAFF' | head -4
    pid            comm flags
      3          rcu_gp PF_WQ_WORKER|PF_FORKNOEXEC|PF_NOFREEZE|PF_KTHREAD|PF_NO_SETAFFINITY
      4      rcu_par_gp PF_WQ_WORKER|PF_FORKNOEXEC|PF_NOFREEZE|PF_KTHREAD|PF_NO_SETAFFINITY
      6 kworker/0:0H-kblockd PF_WQ_WORKER|PF_FORKNOEXEC|PF_NOFREEZE|PF_KTHREAD|PF_NO_SETAFFINITY
```

List attributes of two processes identified by their PIDs:

```
$ pq -p 48178 22548 -o pid tid uid comm loginuid threads env:OLDPWD cwd exe
    pid     tid  uid            comm   loginuid threads      env             cwd        exe
  48178   48178 1000       kwalletd5       1000       9 /home/juser       /home/juser /usr/bin/kwalletd5
  22548   22548 1000     Web Content       1000      30 /home/juser /proc/22549/fdinfo (deleted) /usr/lib64/firefox/firefox
```

Some attributes `pq` supports can also be queried with `ps`,
`tuna -P`, `pgrep` or `taskset -pc` - but not all, and none of the other
tools supports all of the interesting attributes on its
own, such
that one often needs to chain them together and/or add `cat
/proc/$pid/...` commands, possibly wrapped in a long shell
one-liner.

Also, using `pq` can be more efficient. For example, when
querying just a single process (with `-p $PID`), `pq` really just
reads a few files under `/proc/$PID/`, whereas `ps` even then
traverses all process-specific directories under `/proc`. For
example, on a Fedora 33 system:

```
$ strace ps -p 48178 2>&1 | grep '^open.*/proc' -c
624
$ strace ./pq -p 48178 2>&1 | grep '^open.*/proc' -c
5
```

The same `ps` behaviour can be observed on RHEL/CentOS 7, as well.
Obviously, this quickly gets very annoying on systems that host
thousands of processes.

## Remove

Synchronize the write cache of an external USB disk, power it
down and remove its device. Example:

    $ ./remove.py /dev/sdb

The main use case for this is to power down an external disk
gracefully instead of suddenly removing the power (i.e. while it's
still running as it's unplugged), which should reduce mechanical
stress. Also, the explicit flushing of the drive's cache
shouldn't hurt. It should help after writing data directly to
the disk (e.g. with `dd`) or with low-quality USB enclosures that
don't flush the write cache on other synchronisation commands.

Related commands:

- `udisksctl power-off --block-device /dev/sdb` - similar
  effect, only available on systems where the `udisks2` service
  is available and running
- `eject /dev/sdb` - may work for some hardware, but it's unclear which
  features it supports for USB disks and it doesn't really support
  error reporting. Doesn't work for the author under Fedora 27,
  i.e.
it doesn't flush and it doesn't power down.

See also [Gracefully shutting down USB disk drives before
disconnect][remove-se] on Unix-SE.

[remove-se]: https://unix.stackexchange.com/q/444611/1131

## Searchb

The purpose of `searchb` is quite simple: check whether one file is
contained in another file and, if it is, report the offset. See also
[this Unix SE question][searchse] about this use case.

Example:

    $ searchb queryfile targetfile
    1337
    $ searchb queryfile0 targetfile
    $ echo $?
    1

The obvious implementation choice is to map both files into
memory and use a textbook string search algorithm such as
[Two-Way][twoway], [BMH][bmh] or [KMP][kmp] on them. Simple and at
the same time efficient. A tiny complication is that POSIX
`mmap()` doesn't allow mapping zero-length files, thus one has
to add a special case for this (as - say - searching for a
pattern in an empty file shouldn't be considered an error).

As a small case study, this repository contains several
equivalent implementations of this small utility written in
different languages: C, C++, Python, Go and Rust.

Even with such a small example one can see the advantages,
disadvantages and trade-offs associated with the different
languages when it comes to system programming.

Observations:

- C: as always, some boilerplate error checking code is necessary,
  otherwise it's straightforward. Unfortunately, POSIX doesn't
  specify a range-based equivalent of `strstr()`, but modern
  Unix-like operating systems like Linux provide `memmem()`. The
  Linux version of `memmem()` is highly optimized.
- C++: the C++ STL doesn't include a convenient API for memory
  mapping a file, thus one either has to use the low-level C API
  or another library. Boost has two mmap APIs (in Iostreams and
  Interprocess), but neither allows empty mappings. Libixxxutil
  does allow them, thus it's used.
The STL includes a generic
  search algorithm, although it doesn't have to perform better than
  a naive implementation. Boost also includes BMH and KMP
  implementations.
- Python: just a few lines are necessary to get the job done. Very
  elegant, and the standard Python library contains all the needed
  pieces. Perhaps a tiny downer is that the search algorithm isn't
  available as an orthogonal function; instead it's `mmap.find()`,
  `bytes.find()` etc. Likely, one implementation is shared
  internally, and the standard library is usually mature enough to
  expose all the obvious helper functions (like in this case).
- Go: similar to C and C++, Go also allows for an orthogonal
  implementation of memory mapping and searching. The standard
  library comes with `bytes.Index()`, which implements some
  special cases for different pattern lengths (including
  [Rabin-Karp][rabink]) and
  works on any byte slice, while the `Mmap()` syscall also
  returns a zero-copy byte slice. Still, one cannot call this
  implementation extremely elegant: since Go doesn't have exceptions,
  one has to invest in some repetitive error checking, there is
  nothing similar to [RAII][raii], and the standard library
  doesn't include a high-level mmap API (heck, even the syscalls
  aren't part of the standard library). As a consequence, the Go
  version isn't much shorter than the C version.
- Rust: the view-like slice syntax, move semantics etc. are
  well suited for this job, e.g. they allow for an orthogonal
  combination of memory mapping and searching. Unfortunately, Rust's
  standard library contains neither a search algorithm for `u8`
  byte slices (just for UTF-8 strings) nor an mmap API. However,
  external [crates][crate] are available for both tasks. Using
  those, the implementation is very short and elegant, as well.
  Although Rust also doesn't have exceptions, at least it has
  some syntactic sugar to avoid some error checking boilerplate
  code (e.g.
the `?` operator).
- In general, most of the mid- to high-level memory mapping APIs in
  the different languages don't improve upon the POSIX limitation
  of failing on zero-length mappings. Just returning an empty range
  simplifies their use (cf. the `mmap()` helper in the C and Go
  versions and libixxxutil), as some special error handling can be
  omitted.
- Performance: the C/C++/Python/Go/Rust versions are basically equally
  fast. `memmem()` likely contains some SIMD code, and the
  Rust search library has optional SIMD support, although that
  requires support for inline assembly, which isn't available in
  the current stable Rust (e.g. version 1.25).


[searchse]: https://unix.stackexchange.com/q/39728/1131
[twoway]: http://www-igm.univ-mlv.fr/~lecroq/string/node26.html
[bmh]: https://en.wikipedia.org/wiki/Boyer%E2%80%93Moore%E2%80%93Horspool_algorithm
[kmp]: https://en.wikipedia.org/wiki/Knuth%E2%80%93Morris%E2%80%93Pratt_algorithm
[raii]: https://en.wikipedia.org/wiki/Resource_acquisition_is_initialization
[crate]: https://doc.rust-lang.org/book/first-edition/crates-and-modules.html
[rabink]: https://en.wikipedia.org/wiki/Rabin%E2%80%93Karp_algorithm

## Silence

`silence` is a command wrapper that executes a command with its
arguments such that its stdout/stderr are written to unlinked
temporary files. If the command exits with a non-zero return
code, the temporary files are streamed to the stdout and
stderr of `silence`. Otherwise, the temporary files (under `TMPDIR`
or `/tmp`) vanish when both `silence` and the called command exit.

This is useful e.g. for job schedulers like cron, where
the output is only of interest in the event of failure. With
cron, the output of a program also triggers a notification mail
(another trigger is the return code).

`silence` provides the `-k` option for terminating the child in case
`silence` itself is terminated before the child has exited.
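The capture-and-replay scheme described above can be sketched in a few lines of Python (the actual `silence` is implemented in C; this is merely an illustrative sketch, and the `silence()` function name is invented here):

```python
import shutil
import subprocess
import sys
import tempfile

def silence(argv):
    # TemporaryFile() creates already-unlinked files (under TMPDIR or
    # /tmp), so the captured output vanishes automatically on close.
    with tempfile.TemporaryFile() as out, tempfile.TemporaryFile() as err:
        ret = subprocess.run(argv, stdout=out, stderr=err).returncode
        if ret != 0:
            # replay the captured output only in the error case
            for f, dst in ((out, sys.stdout), (err, sys.stderr)):
                f.seek(0)
                shutil.copyfileobj(f, dst.buffer)
        return ret

if __name__ == '__main__':
    sys.exit(silence(sys.argv[1:]))
```

Since the files are unlinked from the start, no cleanup is needed even if the wrapper itself is killed.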
On Linux, the `-k` option is
implemented by installing SIGTERM as the parent death signal in the
child before it executes the supplied command. On other systems,
the parent death signal mechanism is approximated by installing
a signal handler for SIGTERM that kills the child.

The utility is a C reimplementation of [moreutils
chronic][moreutils], which is written in Perl. Thus, it has less
runtime overhead, especially less startup overhead. The
unit tests actually contain two test cases that fail for moreutils
chronic because of its startup overhead. Another difference is
that moreutils chronic buffers stdout and stderr lines in memory,
whereas `silence` writes them to temporary files, thus avoiding
memory issues with noisy long-running commands. Also, moreutils
chronic doesn't provide a means to get the child killed when chronic
itself is terminated. Other differences are documented in the unit
test cases (cf. `test/chronic.py`).

## Silencce

`silencce` is a C++ implementation of `silence`. The main difference
is the usage of exceptions, which simplifies the error reporting.

## Swap

[Since](https://github.com/torvalds/linux/commit/bd42998a6bcb9b1708dac9ca9876e3d304c16f3d)
[2014
(3.15)](https://github.com/torvalds/linux/commit/da1ce0670c14d8380e423a3239e562a1dc15fa9e)
([cf.
the development](https://lwn.net/Articles/569134/)), Linux
implements the `RENAME_EXCHANGE` flag of the `renameat2(2)`
system call for atomically exchanging the filenames of two files.
The `swap` utility exposes this functionality on the command
line.

Example:

    $ echo bar > bar; echo foo > foo
    $ ls -i bar foo
    1193977 bar  1193978 foo
    $ cat bar foo
    bar
    foo
    $ ./swap bar foo
    $ ls -i bar foo
    1193978 bar  1193977 foo
    $ cat bar foo
    foo
    bar
    $ ./swap bar foo
    $ ls -i bar foo
    1193977 bar  1193978 foo
    $ cat bar foo
    bar
    foo

Besides the use cases mentioned in the [`renameat(2)` man
page](http://man7.org/linux/man-pages/man2/rename.2.html),
atomic filename swapping can be handy e.g. for log file
rotation, where it eliminates any time window in which the log
filename is missing.

The `swap.c` source code also serves as an example of how a
system call can be invoked when glibc doesn't provide a wrapper
for it.

Not every filesystem necessarily supports `RENAME_EXCHANGE`.
Ext4 supported it first, in 2014; Btrfs supports it [since 2016
(Linux 4.7)](https://kernelnewbies.org/Linux_4.7#head-0b57342c7fb5702b7741afbd6cd55410f84c4b34).
AUFS (a union FS) doesn't support `RENAME_EXCHANGE`, but Overlay
FS does. AUFS isn't part of the Linux kernel (in contrast to
Overlay FS) but is [used by some Docker
versions](https://docs.docker.com/engine/userguide/storagedriver/aufs-driver/#configure-docker-with-aufs)
by default. Docker supports several storage backends and there is
also a [backend that uses Overlay
FS](https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/#overlayfs-and-docker-performance).
Some versions use that by default.
On Linux, one can verify the
type of the filesystem a file or directory resides on via:

    $ stat -f -c %T somefile


## User-Installed

`user-installed.py` lists all the packages that were manually
selected, i.e. that are marked as user-installed in the local
package database because a user explicitly installed them. That
means packages that were installed by the system installer or as
automatic dependencies aren't listed.

It supports several distributions:
Fedora, CentOS, RHEL, Termux, Debian and Ubuntu.

Such a package list can be used for:

- preparing a kickstart file
- 'cloning' a good package selection of one system
- restoring the package selection after a vanilla install (e.g.
  because of a major distribution version upgrade or a system
  recovery)

Excluding the automatically installed packages from the list
protects against:

- installing old dependency packages that are now obsolete in the
  new version of the distribution
- wrongly marking the old dependency packages as user-installed
  on the new system
- and thus making auto-cleaning after a future package removal of
  then unneeded dependency packages ineffective
- failed installs due to dependency packages that were removed in
  the new distribution version

Example for restoring a package list on a Fedora system:

    # dnf install $(cat example-org.pkg.lst)

Ignoring any unavailable packages:

    # dnf install --setopt=strict=0 $(cat example-org.pkg.lst)

## Build Instructions

Get the source:

    $ git clone https://github.com/gsauthof/utility.git
    $ cd utility
    $ git submodule update --init

Out-of-source builds are recommended, e.g.:

    $ mkdir utility-bin && cd utility-bin
    $ cmake ../utility
    $ make

Or to use ninja instead of make and create a release build:

    $ mkdir utility-bin-o && cd utility-bin-o
    $ cmake -G Ninja -D CMAKE_BUILD_TYPE=Release ../utility
    $ ninja-build

Install it (for packaging):

    $ mkdir build
    $ cd build
    $ cmake -G Ninja .. -DCMAKE_BUILD_TYPE=Release \
                 -DCMAKE_INSTALL_PREFIX=/usr/local
    $ DESTDIR=$PWD/out ninja install

If you want to install it directly into the final destination, you
can drop the `DESTDIR=...` part.


## Unittests

    $ make check

or

    $ ninja-build check

## License

[GPLv3+][gpl]

[1]: http://unix.stackexchange.com/questions/22044/correct-locking-in-shell-scripts
[bsd-lockf]: https://www.freebsd.org/cgi/man.cgi?query=lockf
[gpl]: https://www.gnu.org/licenses/gpl.html
[fcntl]: http://man7.org/linux/man-pages/man2/fcntl.2.html
[flock]: http://man7.org/linux/man-pages/man2/flock.2.html
[gnutls]: https://gnutls.org/
[link]: http://man7.org/linux/man-pages/man2/link.2.html
[lockf]: http://man7.org/linux/man-pages/man3/lockf.3.html
[lockfile]: http://linux.die.net/man/1/lockfile
[lockrun]: http://www.unixwiz.net/tools/lockrun.html
[lu-flock]: http://linux.die.net/man/1/flock
[mkdir]: http://man7.org/linux/man-pages/man2/mkdir.2.html
[moreutils]: https://joeyh.name/code/moreutils/
[open]: http://man7.org/linux/man-pages/man2/open.2.html
[rename]: http://man7.org/linux/man-pages/man2/rename.2.html
[tracer]: http://tracer-package.com/
[tracer45]: https://github.com/FrostyX/tracer/issues/45
[jenkins]: https://jenkins.io/
[junit]: https://wiki.jenkins-ci.org/display/JENKINS/JUnit+Plugin
[libcheck]: https://libcheck.github.io/check/
[ascii]: https://en.wikipedia.org/wiki/ASCII
[bcd]: https://en.wikipedia.org/wiki/Binary-coded_decimal
[dbx]: https://en.wikipedia.org/wiki/Dbx_(debugger)
[dbxcheck]: https://docs.oracle.com/cd/E19205-01/819-5257/blahg/index.html
[ksh]: https://en.wikipedia.org/wiki/KornShell
[ssdbx]: https://docs.oracle.com/cd/E24457_01/html/E21993/index.html
[valgrind]: http://valgrind.org/
[asan]: https://github.com/google/sanitizers/wiki/AddressSanitizer
[solaris]:
https://en.wikipedia.org/wiki/Solaris_(operating_system)
[sparc]: https://en.wikipedia.org/wiki/SPARC
[endian]: https://en.wikipedia.org/wiki/Endianness
[solpargs]: https://www.freebsd.org/cgi/man.cgi?query=pargs&apropos=0&sektion=0&manpath=SunOS+5.10&arch=default&format=html
[hcio]: https://healthchecks.io