{"id":13588408,"url":"https://github.com/dbohdan/sqawk","last_synced_at":"2025-04-06T18:16:05.481Z","repository":{"id":25927792,"uuid":"29368999","full_name":"dbohdan/sqawk","owner":"dbohdan","description":"Like awk but with SQL and table joins","archived":false,"fork":false,"pushed_at":"2024-05-10T20:37:08.000Z","size":583,"stargazers_count":308,"open_issues_count":2,"forks_count":14,"subscribers_count":19,"default_branch":"master","last_synced_at":"2024-05-15T14:03:24.181Z","etag":null,"topics":["awk","cli","converter","csv","data-transformation","data-wrangling","delimited-files","json","sql","tsv"],"latest_commit_sha":null,"homepage":"","language":"Tcl","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/dbohdan.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2015-01-16T21:39:25.000Z","updated_at":"2024-06-19T11:31:53.102Z","dependencies_parsed_at":"2024-06-19T11:31:51.653Z","dependency_job_id":"b9c958d5-3f12-4c4f-b087-5d161a12369f","html_url":"https://github.com/dbohdan/sqawk","commit_stats":null,"previous_names":[],"tags_count":23,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dbohdan%2Fsqawk","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dbohdan%2Fsqawk/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dbohdan%2Fsqawk/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dbohdan%2Fsqawk/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/dbohdan"
,"download_url":"https://codeload.github.com/dbohdan/sqawk/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247526764,"owners_count":20953143,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["awk","cli","converter","csv","data-transformation","data-wrangling","delimited-files","json","sql","tsv"],"created_at":"2024-08-01T15:06:41.770Z","updated_at":"2025-04-06T18:16:05.465Z","avatar_url":"https://github.com/dbohdan.png","language":"Tcl","readme":"# Sqawk\n\n![A squawk](squawk.jpg)\n\n**Sqawk** is an [awk](https://en.wikipedia.org/wiki/AWK)-like program that uses SQL and can combine data from multiple files.  
It is powered by SQLite.\n\n\n## An example\n\nSqawk is invoked like this:\n\n    sqawk -foo bar script baz=qux filename\n    \nwhere `script` is your SQL.\n\nHere is an example of what it can do:\n\n```sh\n## List all login shells used on the system.\nsqawk -ORS '\\n' 'select distinct shell from passwd order by shell' FS=: columns=username,password,uid,gid,info,home,shell table=passwd /etc/passwd\n```\n\nor, equivalently,\n\n```sh\n## Do the same thing.\nsqawk 'select distinct a7 from a order by a7' FS=: /etc/passwd\n```\n\nSqawk lets you be verbose to better document your script but aims to provide good defaults that save you keystrokes in interactive use.\n\n[Skip down](#more-examples) for more examples.\n\n\n## Table of contents\n\n* [Installation](#installation)\n* [Usage](#usage)\n* [SQL](#sql)\n* [Options](#options)\n  * [Global options](#global-options)\n  * [Output formats](#output-formats)\n  * [Per-file options](#per-file-options)\n  * [Input formats](#input-formats)\n* [More examples](#more-examples)\n* [License](#license)\n\n\n## Installation\n\nSqawk requires Tcl 8.6 or newer, Tcllib, and the SQLite version 3 bindings for Tcl.\n\nTo install these dependencies on **Debian** and **Ubuntu**, run the following command:\n\n    sudo apt install tcl tcllib libsqlite3-tcl\n\nOn **Fedora**, **RHEL**, and **CentOS**:\n\n    sudo dnf install tcl tcllib sqlite-tcl\n\nOn **FreeBSD** with [pkgng](https://wiki.freebsd.org/pkgng):\n\n    sudo pkg install tcl86 tcllib tcl-sqlite3\n    sudo ln -s /usr/local/bin/tclsh8.6 /usr/local/bin/tclsh\n\nOn **Windows 7** or later, install [Magicsplat Tcl/Tk for Windows](http://www.magicsplat.com/tcl-installer/).\n\nOn **macOS** use [MacPorts](https://www.macports.org/):\n\n    sudo port install tcllib tcl-sqlite3\n\nOnce you have the dependencies installed on \*nix, run\n\n    git clone https://github.com/dbohdan/sqawk\n    cd sqawk\n    make\n    make test\n    sudo make install\n\nor on Windows,\n\n    git clone 
https://github.com/dbohdan/sqawk\n    cd sqawk\n    assemble.cmd\n    tclsh tests.tcl\n\n\n## Usage\n\n`sqawk [globaloptions] script [option=value ...] \u003c filename`\n\nor\n\n`sqawk [globaloptions] script [option=value ...] filename1 [[option=value ...] filename2 ...]`\n\nOne of the filenames can be `-` for standard input.\n\n## SQL\n\nA Sqawk `script` consists of one or more statements in the SQLite version 3 [dialect](https://www.sqlite.org/lang.html) of SQL.\n\nThe default table names are `a` for the first input file, `b` for the second, `c` for the third, and so on.  You can change the table name for any file with a [file option](#per-file-options).  The table name is used as the prefix in the column names of the table.  By default, the columns are named `a1`, `a2`, etc. in table `a`; `b1`, `b2`, etc. in `b`; and so on.  For each record, `a0` is the text of the whole record (one line of input with the default `awk` parser and the default record separator of `\\n`).  `anr` in `a`, `bnr` in `b`, and so on contains the record number and is the primary key of its respective table.  `anf`, `bnf`, and so on contain the field count for a given record.\n\n## Options\n\n### Global options\n\nThese options affect all files.\n\n#### -FS value\n\nExample: `-FS '[ \\t]+'`\n\nThe input field separator regular expression for the default `awk` parser (for all files).\n\n#### -RS value\n\nExample: `-RS '\\n'`\n\nThe input record separator regular expression for the default `awk` parser (for all files).\n\n#### -OFS value\n\nExample: `-OFS ' '`\n\nThe output field separator string for the default `awk` serializer.\n\n#### -ORS value\n\nExample: `-ORS '\\n'`\n\nThe output record separator string for the default `awk` serializer.\n\n#### -NF value\n\nExample: `-NF 10`\n\nThe maximum number of fields per record.  The corresponding number of columns is added to the target table at the start (e.g., `a0`, `a1`, `a2`,\u0026nbsp;...\u0026nbsp;, `a10` for ten fields).  
Increase this if you run Sqawk with `-MNF error` and get errors like `table x has no column named x51`.\n\n#### -MNF value\n\nExamples: `-MNF expand`, `-MNF crop`, `-MNF error`\n\nThe NF mode.  This option tells Sqawk what to do if a record exceeds the maximum number of fields: `expand`, the default, increases `NF` automatically and adds columns to the table during import; `crop` truncates the record to `NF` fields (that is, the fields for which there aren't enough table columns are omitted); `error` makes Sqawk quit with an error message like `table x has no column named x11`.\n\n#### -dbfile value\n\nExample: `-dbfile test.db`\n\nThe SQLite database file in which Sqawk will store the parsed data.  Defaults to the special filename `:memory:`, which instructs SQLite to hold the data in RAM only.  Using an actual file instead of `:memory:` is slower but makes it possible to process larger datasets.  The database file is opened if it exists and created if it doesn't.  Once Sqawk creates the file, you can open it in other applications, including the [sqlite3 CLI](https://sqlite.org/cli.html).  If you run Sqawk more than once with the same database file, it reuses the tables each time.  By default it uses `a` for the first file, `b` for the second, etc.  For example, `sqawk -dbfile test.db 'select 0' foo; sqawk -dbfile test.db 'select 1' bar` inserts the data from both `foo` and `bar` into the table `a` in `test.db`; you can avoid this with `sqawk -dbfile test.db 'select 0' table=foo foo; sqawk -dbfile test.db 'select 1' table=bar bar`.  If you want to, you can also insert the data from both files into the same table in one invocation: `sqawk 'select * from a' foo table=a bar`.\n\n#### -noinput\n\nDo not read from standard input if Sqawk is given no filename arguments.\n\n#### -output value\n\nExample: `-output awk`\n\nThe output format.  See [Output formats](#output-formats).\n\n#### -v\n\nPrint the Sqawk version and exit.\n\n#### -1\n\nDo not split records into fields.  
The same as `-FS 'x^'`.  (`x^` is a regular expression that matches nothing.)  This improves performance somewhat when you only want to operate on whole records (lines).\n\n### Output formats\n\nThe following are the possible values for the command line option `-output`.  Some formats have format options to further customize the output.  The options are appended to the format name and separated from the format name and each other with commas, e.g., `-output json,kv=1,pretty=1`.\n\n#### awk\n\nOptions: none\n\nExample: `-output awk`\n\nThe default serializer, `awk`, mimics its namesake awk.  When it is selected, the output consists of the rows returned by your query separated with the output record separator (-ORS).  Each row in turn consists of columns separated with the output field separator (-OFS).\n\n#### csv\n\nOptions: none\n\nExample: `-output csv`\n\nOutput CSV.\n\n#### json\n\nOptions: `kv` (default true), `pretty` (default false)\n\nExample: `-output json,pretty=0,kv=0`\n\nOutput the result of the query as JSON.  If `kv` (short for \"key-value\") is true, the result is an array of JSON objects with the column names as keys; if `kv` is false, the result is an array of arrays.  The values are all represented as strings in either case.  If `pretty` is true, each object (but not array) is indented for readability.\n\n#### table\n\nOptions: `alignments` or `align`, `margins`, `style`\n\nExamples: `-output table,align=center left right`, `-output table,alignments=c l r`\n\nOutput plain text tables.  The `table` serializer uses [Tabulate](https://wiki.tcl-lang.org/41682) to format the output as a table using [box-drawing characters](https://en.wikipedia.org/wiki/Box-drawing_character).  Note that the default Unicode table output does not display correctly in `cmd.exe` on Windows even after `chcp 65001`.  
Use `style=loFi` to draw tables with plain ASCII characters instead.\n\n#### tcl\n\nOptions: `kv` (default false), `pretty` (default false)\n\nExample: `-output tcl,kv=1`\n\nOutput raw Tcl data structures.  With the `tcl` serializer Sqawk outputs a list of lists if `kv` is false and a list of dictionaries with the column names as keys if `kv` is true.  If `pretty` is true, print every list or dictionary on a separate line.\n\n### Per-file options\n\nThese options are set before a filename and only affect one file.\n\n#### columns\n\nExamples: `columns=id,name,sum`, `columns=id,a long name with spaces`\n\nGive custom names to the table columns for the next file.  If there are more columns than custom names, the columns after the last with a custom name are named automatically in the same way as with the option `header=1` (see below).  Custom column names override names taken from the header.  If you give a column an empty name, it is named automatically or retains its name from the header.\n\n#### datatypes\n\nExample: `datatypes=integer,real,text`\n\nSet the [datatypes](https://www.sqlite.org/datatype3.html) for the columns, starting with the first (`a1` if your table is `a`).  The datatype for each column for which the datatype is not explicitly given is `INTEGER`.  The datatype of `a0` is always `TEXT`.\n\n#### format\n\nExample: `format=csv csvsep=;`\n\nSet the input format for the next file.  See [Input formats](#input-formats).\n\n#### header\n\nExample: `header=1`\n\nCan be `0`/`false`/`no`/`off` or `1`/`true`/`yes`/`on`.  Use the first row of the file as a source of column names.  If the first row has five fields, then the first five columns will have custom names and all the following columns will have automatically generated names (e.g., `name`, `surname`, `title`, `office`, `phone`, `a6`, `a7`, ...).\n\n#### prefix\n\nExample: `prefix=x`\n\nThe column name prefix in the table.  Defaults to the table name.  
For example, with `table=foo` and `prefix=bar` you have columns named `bar1`, `bar2`, `bar3`, etc. in table `foo`.\n\n#### table\n\nExample: `table=foo`\n\nThe table name.  By default, tables are named `a`, `b`, `c`, etc.  Specifying, for example, `table=foo` for the second file only results in the tables having the names `a`, `foo`, `c`, ...\n\n#### F0\n\nExamples: `F0=no`, `F0=1`\n\nCan be `0`/`false`/`no`/`off` or `1`/`true`/`yes`/`on`.  Enable the zeroth column of the table that stores the whole record.  Disabling this column lowers memory/disk usage.\n\n#### NF\n\nExample: `NF=20`\n\nThe same as the [global option](#global-options) -NF but for one file (table).\n\n#### MNF\n\nExample: `MNF=crop`\n\nThe same as the [global option](#global-options) -MNF but for one file (table).\n\n### Input formats\n\nA format option (`format=x`) selects the input parser with which Sqawk parses the next file.  Formats can have multiple synonymous names or multiple names that configure the parser in different ways.  Selecting an input format can enable additional per-file options that only work for that format.\n\n#### awk\n\nFormat options: `FS`, `RS`, `trim`, `fields`\n\nOption examples: `RS=\\n`, `FS=:`, `trim=left`, `fields=1,2,3-5,auto`\n\nThe default input parser.  Splits the input first into records, then into fields using regular expressions.  The options `FS` and `RS` work the same as -FS and -RS respectively but only apply to one file.  The option `trim` removes whitespace at the beginning of each line of input (`trim=left`), at its end (`trim=right`), both (`trim=both`), or neither (`trim=none`, default).  The option `fields` configures how the fields of the input are mapped to the columns of the corresponding database table.  This option lets you discard some of the fields, which can save memory, and merge the contents of others.  
For example, `fields=1,2,3-5,auto` tells Sqawk to insert the contents of the first field into the column `a1` (assuming table `a`), the second field into `a2`, the third through the fifth field into `a3`, and the rest of the fields starting with the sixth into the columns `a4`, `a5`, and so on, one field per column.  If you merge several fields, the whitespace between them is preserved.\n\n#### csv, csv2, csvalt\n\nFormat options: `csvsep`, `csvquote`\n\nOption example: `format=csv csvsep=, 'csvquote=\"'`\n\nParse the input as CSV.  Using `format=csv2` or `format=csvalt` enables the [alternate mode](https://core.tcl.tk/tcllib/doc/trunk/embedded/md/tcllib/files/modules/csv/csv.md#section3) meant for parsing CSV files exported by Microsoft Excel.  `csvsep` sets the field separator; it defaults to `,`.  `csvquote` selects the character with which the fields that contain the field separator are quoted; it defaults to `\"`.  Note that some characters (like numbers and most letters) can't be used as `csvquote`.\n\n#### json\n\nFormat options: `kv` (default true), `lines` (default false)\n\nOption example: `format=json kv=false`\n\nParse the input as JSON or [JSON Lines](https://jsonlines.org/).  The value for `kv` and `lines` can be `0`/`false`/`no`/`off` or `1`/`true`/`yes`/`on`.  If `lines` is false, the input is treated as a JSON array of either objects (`kv=1`) or arrays (`kv=0`).  If `lines` is true, the input is treated as a text file with a JSON array or object (depending on `kv`) on every line.\n\nWhen `kv` is false, each array becomes a record and each of its elements a field.  If the table for the input file is `a`, its column `a0` contains the concatenation of every element of the array, `a1` contains the first element, `a2` the second element, and so on.  When `kv` is true, the first record contains every unique key found in all of the objects.  This is intended for use with the [file option](#per-file-options) `header=1`.  
The keys are in the same order they are in the first object of the input.  (We treat JSON objects as ordered.)  If some keys aren't in the first object but are in subsequent objects, they follow those that are in the first object in alphabetical order.  Records from the second on contain the values of the input objects.  These values are mapped to fields according to the order of the keys in the first record.\n\nEvery value in an object or an array is converted to text when parsed.  JSON given to Sqawk should only have one level of nesting (`[[],[],[]]` or `[{},{},{}]`).  What happens with more deeply nested JSON is undefined.  Currently it is converted to text as Tcl dictionaries and lists.\n\n#### tcl\n\nFormat options: `kv` (default false), `lines` (default false)\n\nOption example: `format=tcl kv=true`\n\nThe value for `kv` can be `0`/`false`/`no`/`off` or `1`/`true`/`yes`/`on`.  If `lines` is false, the input is treated as a Tcl list of either lists (`kv=0`) or dictionaries (`kv=1`).  If `lines` is true, it is treated as a text file with a Tcl list or dictionary (depending on `kv`) on every line.\n\nWhen `kv` is false, each list becomes a record and each of its elements a field.  If the table for the file is `a`, its column `a0` contains the full list, `a1` contains the first element, `a2` the second element, and so on.  When `kv` is true, the first record contains every unique key found in all of the dictionaries.  This is intended for use with the [file option](#per-file-options) `header=1`.  The keys are in the same order they are in the first dictionary of the input.  (Tcl dictionaries are ordered.)  If some keys aren't in the first dictionary but are in the subsequent ones, they follow those that are in the first dictionary in alphabetical order.  Records from the second on contain the values of the input dictionaries.  
They are mapped to fields according to the order of the keys in the first record.\n\n\n## More examples\n\n### Sum up numbers\n\n    find . -iname '*.jpg' -type f -printf '%s\\n' | sqawk 'select sum(a1)/1024.0/1024 from a'\n\n### Line count\n\n    sqawk -1 'select count(*) from a' \u003c file.txt\n\n### Find lines that match a pattern\n\n    ls | sqawk -1 'select a0 from a where a0 like \"%win%\"'\n\n### Shuffle lines\n\n    sqawk -1 'select a1 from a order by random()' \u003c file\n\n### Pretty-print data as a table\n\n    ps | sqawk -output table \\\n         'select a1,a2,a3,a4 from a' \\\n         trim=left \\\n         fields=1,2,3,4-end\n\n#### Sample output\n\n```\n┌─────┬─────┬────────┬───────────────┐\n│ PID │ TTY │  TIME  │      CMD      │\n├─────┼─────┼────────┼───────────────┤\n│11476│pts/3│00:00:00│       ps      │\n├─────┼─────┼────────┼───────────────┤\n│11477│pts/3│00:00:00│tclkit-8.6.3-mk│\n├─────┼─────┼────────┼───────────────┤\n│20583│pts/3│00:00:02│      zsh      │\n└─────┴─────┴────────┴───────────────┘\n```\n\n### Convert input to JSON objects\n\n\n    ps a | sqawk -output json,pretty=1 \\\n                 'select PID, TTY, STAT, TIME, COMMAND from a' \\\n                 trim=left \\\n                 fields=1,2,3,4,5-end \\\n                 header=1\n\n#### Sample output\n\n```\n[{\n    \"PID\"     : \"1171\",\n    \"TTY\"     : \"tty7\",\n    \"STAT\"    : \"Rsl+\",\n    \"TIME\"    : \"191:10\",\n    \"COMMAND\" : \"/usr/lib/xorg/Xorg -core :0 -seat seat0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch\"\n},{\n    \"PID\"     : \"1631\",\n    \"TTY\"     : \"tty1\",\n    \"STAT\"    : \"Ss+\",\n    \"TIME\"    : \"0:00\",\n    \"COMMAND\" : \"/sbin/agetty --noclear tty1 linux\"\n}, \u003c...\u003e, {\n    \"PID\"     : \"26583\",\n    \"TTY\"     : \"pts/1\",\n    \"STAT\"    : \"R+\",\n    \"TIME\"    : \"0:00\",\n    \"COMMAND\" : \"ps a\"\n},{\n    \"PID\"     : \"26584\",\n    \"TTY\"     : \"pts/1\",\n    \"STAT\"    : 
\"R+\",\n    \"TIME\"    : \"0:00\",\n    \"COMMAND\" : \"tclsh /usr/local/bin/sqawk -output json,pretty=1 select PID, TTY, STAT, TIME, COMMAND from a trim=left fields=1,2,3,4,5-end header=1\"\n}]\n```\n\n### Find duplicate lines\n\nPrint duplicate lines and how many times they are repeated.\n\n    sqawk -1 -OFS ' -- ' 'select a0, count(*) from a group by a0 having count(*) \u003e 1' \u003c file\n\n#### Sample output\n\n    13 -- 2\n    16 -- 3\n    83 -- 2\n    100 -- 2\n\n### Remove blank lines\n\n    sqawk -1 -RS '[\\n]+' 'select a0 from a' \u003c file\n\n### Sum up numbers with the same key\n\n    sqawk -FS , -OFS , 'select a1, sum(a2) from a group by a1' data\n\nThis is the equivalent of the AWK code\n\n    awk 'BEGIN {FS = OFS = \",\"} {s[$1] += $2} END {for (key in s) {print key, s[key]}}' data\n\n#### Input\n\n```\n1015,5\n1015,4\n1035,17\n1035,11\n1009,1\n1009,4\n1026,9\n1004,5\n1004,5\n1009,1\n```\n\n#### Output\n\n```\n1004,10\n1009,6\n1015,9\n1026,9\n1035,28\n```\n\n### Combine data from two files\n\n#### Commands\n\nThis example joins the data from two metadata files generated from the [happypenguin.com 2013 data dump](https://archive.org/details/happypenguin_xml_dump_2013).  
You do not need to download the data dump to try the query; `MD5SUMS` and `du-bytes` are included in the directory [`examples/hp/`](./examples/hp/).\n\n    # Generate input files -- see below\n    cd happypenguin_dump/screenshots\n    md5sum * \u003e MD5SUMS\n    du -b * \u003e du-bytes\n    # Perform query\n    sqawk 'select a1, b1, a2 from a inner join b on a2 = b2 where b1 \u003c 10000 order by b1' MD5SUMS du-bytes\n\n#### Input files\n\n##### MD5SUMS\n\n```\nd2e7d4d1c7587b40ef7e6637d8d777bc  0005.jpg\n4e7cde72529efc40f58124f13b43e1d9  001.jpg\ne2ab70817194584ab6fe2efc3d8987f6  0.0.6-settings.png\n9d2cfea6e72d00553fb3d10cbd04f087  010_2.jpg\n3df1ff762f1b38273ff2a158e3c1a6cf  0.10-planets.jpg\n0be1582d861f9d047f4842624e7d01bb  012771602077.png\n60638f91b399c78a8b2d969adeee16cc  014tiles.png\n7e7a0b502cd4d63a7e1cda187b122b0b  017.jpg\n[...]\n```\n\n##### du-bytes\n\n```\n136229  0005.jpg\n112600  001.jpg\n26651   0.0.6-settings.png\n155579  010_2.jpg\n41485   0.10-planets.jpg\n2758972 012771602077.png\n426774  014tiles.png\n165354  017.jpg\n[...]\n```\n\n#### Output\n\n```\nd50700db41035eb74580decf83f83184 615 z81.png\ne1b64d03caf4615d54e9022d5b13a22d 677 init.png\na0fb29411c169603748edcc02c0e86e6 823 agendaroids.gif\n3b0c65213e121793d4458e09bb7b1f58 970 screen01.gif\n05f89f23756e8ea4bc5379c841674a6e 999 retropong.png\na49a7b5ac5833ec365ed3cb7031d1d84 1458 fncpong.png\n80616256c790c2a831583997a6214280 1516 el2_small.jpg\n[...]\n1c8a3cb2811e9c20572e8629c513326d 9852 7.png\nc53a88c68b73f3c1632e3cdc7a0b4e49 9915 choosing_building.PNG\nbf60508db16a92a46bbd4107f15730cd 9946 glad_shot01.jpg\n```\n\n\n## License\n\nMIT.\n\n`squawk.jpg` photograph by [Terry Foote](https://en.wikipedia.org/wiki/User:Terry_Foote) at [English Wikipedia](https://en.wikipedia.org/wiki/).  
It is licensed under [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).\n","funding_links":[],"categories":["Tcl"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdbohdan%2Fsqawk","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdbohdan%2Fsqawk","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdbohdan%2Fsqawk/lists"}