{"id":22230937,"url":"https://github.com/bigeasy/packet","last_synced_at":"2025-04-05T18:06:20.634Z","repository":{"id":57317827,"uuid":"866190","full_name":"bigeasy/packet","owner":"bigeasy","description":"Incremental binary parsers and serializers for Node.js.","archived":false,"fork":false,"pushed_at":"2024-02-21T09:45:49.000Z","size":5184,"stargazers_count":186,"open_issues_count":32,"forks_count":25,"subscribers_count":12,"default_branch":"master","last_synced_at":"2024-04-14T09:38:39.748Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"http://bigeasy.github.io/packet","language":"JavaScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/bigeasy.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2010-08-27T12:44:53.000Z","updated_at":"2024-06-18T15:24:01.325Z","dependencies_parsed_at":"2024-06-18T15:23:55.660Z","dependency_job_id":"0e5adda6-83de-4ce2-a041-c557266eded2","html_url":"https://github.com/bigeasy/packet","commit_stats":{"total_commits":1440,"total_committers":7,"mean_commits":"205.71428571428572","dds":0.05902777777777779,"last_synced_commit":"03abea05da86ae5d9a0846345014a59ccfdf18cf"},"previous_names":[],"tags_count":11,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bigeasy%2Fpacket","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bigeasy%2Fpacket/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bigeasy%2Fpacket/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts
/GitHub/repositories/bigeasy%2Fpacket/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/bigeasy","download_url":"https://codeload.github.com/bigeasy/packet/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247378140,"owners_count":20929296,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-12-03T01:16:51.124Z","updated_at":"2025-04-05T18:06:20.615Z","avatar_url":"https://github.com/bigeasy.png","language":"JavaScript","readme":"[![Actions Status](https://github.com/bigeasy/packet/workflows/Node%20CI/badge.svg)](https://github.com/bigeasy/packet/actions)\n[![codecov](https://codecov.io/gh/bigeasy/packet/branch/master/graph/badge.svg)](https://codecov.io/gh/bigeasy/packet)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n\nIncremental binary parsers and serializers for Node.js.\n\n| What          | Where                                         |\n| --- | --- |\n| Discussion    | https://github.com/bigeasy/packet/issues/1    |\n| Documentation | https://bigeasy.github.io/packet              |\n| Source        | https://github.com/bigeasy/packet             |\n| Issues        | https://github.com/bigeasy/packet/issues      |\n| CI            | https://travis-ci.org/bigeasy/packet          |\n| Coverage:     | https://codecov.io/gh/bigeasy/packet          |\n| License:      | MIT                                           |\n\nPacket creates **pre-compiled**, **pure-JavaScript**, **binary parsers** 
and\n**serializers** that are **incremental** through a packet definition pattern\nlanguage that is **declarative** and very **expressive**.\n\nPacket simplifies the construction and maintenance of libraries that convert\nbinary to JavaScript and back. The name Packet may make you think that it is\ndesigned solely for binary network protocols, but it is also great for reading\nand writing binary file formats.\n\n**Incremental** \u0026mdash; Packet creates incremental parsers and serializers\nthat are almost as fast as the parser you'd write by hand, but are easier to\ndefine and maintain.\n\n**Declarative** \u0026mdash; Packet defines a binary structure using a syntax-based\nJavaScript definition language. A single definition is used to define both\nparsers and serializers. If you have a protocol specification, or even just a C\nheader file with structures that define your binary data, you can probably\ntranslate that directly into a Packet definition.\n\n**Expressive** \u0026mdash; The pattern language can express\n\n  * signed and unsigned integers,\n  * endianness of signed and unsigned integers,\n  * floats and doubles,\n  * fixed length arrays of characters or numbers,\n  * length encoded strings of characters or numbers,\n  * zero terminated strings of characters or numbers,\n  * said strings terminated by any fixed length terminator you specify,\n  * padding of said strings with any padding value you specify,\n  * signed and unsigned integers extracted from bit packed integers,\n  * string to integer value mappings,\n  * if/then/else conditionals,\n  * switch conditions,\n  * character encodings,\n  * custom transformations,\n  * assertions,\n  * and pipelines of custom transformations and assertions.\n\nPacket installs from NPM.\n\n```text\nnpm install packet\n```\n\n## Living `README.md`\n\nThis `README.md` is also a unit test using the\n[Proof](https://github.com/bigeasy/proof) unit test framework. 
We'll use the\nProof `okay` function to assert our statements in the readme. A Proof unit test\ngenerally looks like this.\n\n```javascript\nrequire('proof')(4, okay =\u003e {\n    okay('always okay')\n    okay(true, 'okay if true')\n    okay(1, 1, 'okay if equal')\n    okay({ value: 1 }, { value: 1 }, 'okay if deep strict equal')\n})\n```\n\nYou can run this unit test yourself to see the output from the various\ncode sections of the readme.\n\n```text\ngit clone git@github.com:bigeasy/packet.git\ncd packet\nnpm install --no-package-lock --no-save\nnode --allow-natives-syntax test/readme/readme.t.js\n```\n\nBe sure to run the unit test with the `--allow-natives-syntax` switch. The\n`--allow-natives-syntax` switch allows us to test that when we parse we are\ncreating objects that have JavaScript \"fast properties.\"\n\n## Parsers and Serializers\n\n**TODO** Here you need your incremental and whole parser interface with a simple\nexample. Would be an overview. In the next section we get into the weeds.\n\n## Packet Definition Language\n\nTo test our examples below we are going to use the following function.\n\n```javascript\nconst fs = require('fs')\nconst path = require('path')\n\nconst packetize = require('packet/packetize')\nconst SyncParser = require('packet/sync/parser')\nconst SyncSerializer = require('packet/sync/serializer')\n\n// Generate a packet parser and serializer mechanics module.\n\n// Please ignore all the synchronous file operations. They are for testing\n// only. You will not generate packet parsers at runtime. 
You will use the\n// `packetizer` executable to generate your packet parser and serializer\n// mechanics modules and ship them.\n\n//\nfunction compile (name, definition, options) {\n    const source = packetize(definition, options)\n    const file = path.resolve(__dirname, '..', 'readme', name + '.js')\n    fs.writeFileSync(file, source)\n    return file\n}\n\n// Load mechanics and run a synchronous serialize and parse.\n\n// This looks more like production code, except for the part where you call\n// our for-the-sake-of-testing runtime compile.\n\n//\nfunction test (name, definition, object, expected, options = {}) {\n    const moduleName = compile(name, definition, options)\n\n    const mechanics = require(moduleName)\n\n    const serializer = new SyncSerializer(mechanics)\n    const buffer = serializer.serialize('object', object)\n\n    okay(buffer.toJSON().data, expected, `${name} correctly serialized`)\n\n    const parser = new SyncParser(mechanics)\n    const actual = parser.parse('object', buffer)\n\n    okay(actual, object, `${name} correctly parsed`)\n}\n```\n\n### Whole Integers\n\nIntegers are specified as multiples of 8 bits. For integers less than 48 bits\nyou can define the integer field as a JavaScript `typeof === 'number'`. If the\ninteger is larger than 48 bits you should define the field as JavaScript\n`BigInt`.\n\n**Mnemonic**: A count of bits to serialize or parse defined as a JavaScript\n`'number'` or `BigInt` since that's the type it will produce. We use a count of\nbits as opposed to a count of bytes so that our code looks consistent when\nwe define bit packed integers which need to be defined in bits and not bytes.\n\nIn the following definition `value` is a 16-bit `'number'` with valid integer\nvalues from 0 to 65,535. Serialized objects with `'number'` fields must provide\na `'number'` type value and the number must be in range. 
No type or range\nchecking is performed.\n\n```javascript\nconst definition = {\n    object: {\n        value: 16\n    }\n}\n\nconst object = {\n    value: 0xabcd\n}\n\ntest('whole-integer', definition, object, [ 0xab, 0xcd ])\n```\n\nIntegers smaller than 48 bits _should_ be defined using a `'number'` to specify\nthe count of bits. Integers larger than 48 bits _must_ be defined as `BigInt`.\n\nIn the following definition `value` is a 64-bit `BigInt` with a valid integer\nvalues from 0 to 18,446,744,073,709,551,616. Serializes objects with `BigInt`\nfields must provide a `BigInt` type value and the number must be in range. No\ntype or range checking is performed.\n\n**Mnemonic**: The `n` suffix is the same suffix used to indicate a `BigInt`\nliteral in JavaScript.\n\n```javascript\nconst definition = {\n    object: {\n        value: 64n\n    }\n}\n\nconst object = {\n    value: 0xfedcba9876543210n\n}\n\ntest('whole-integer-64', definition, object, [\n    0xfe, 0xdc, 0xba, 0x98, 0x76, 0x54, 0x32, 0x10\n])\n```\n\n### Negative Integers\n\nIntegers with negative values are generally represented as two's compliment\nintegers on most machines. To parse and serialize as two's compliment you\nprecede the bit length of an integer field with a `-` negative symbol.\n\nIn the following definition `value` is a two's compliment 16-bit integer with\nvalid values from -32768 to 32767. 
Two's complement is a binary representation\nof negative numbers.\n\n**Mnemonic**: Negative symbol to indicate a potentially negative value.\n\n```javascript\nconst definition = {\n    object: {\n        value: -16\n    }\n}\n\nconst object = {\n    value: -1\n}\n\ntest('negative-integer', definition, object, [ 0xff, 0xff ])\n```\n\nAs with whole integers, you _must_ define an integer larger than 48 bits as a\n`BigInt`.\n\nIn the following definition `value` is a two's complement 64-bit integer with\nvalid values from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.\n\n```javascript\nconst definition = {\n    object: {\n        value: -64n\n    }\n}\n\nconst object = {\n    value: -1n\n}\n\ntest('negative-integer-64', definition, object, [\n    0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff\n])\n```\n\n### Endianness\n\nBy default, all numbers are written out big-endian, where the bytes are written\nfrom the most significant to the least significant. This is the same order in which\nyou'd specify the value as a hexadecimal literal in JavaScript.\n\nLittle-endian means that the bytes are serialized from the least significant\nbyte to the most significant byte. Note that this is the order of _bytes_ and\nnot _bits_. This would be the case if you wrote an integer out directly to a\nfile from a C program on an Intel machine.\n\nTo parse and serialize an integer as little-endian you precede the bit length of\nan integer field with a `~` tilde.\n\nIn the following definition `value` is a 16-bit `number` field with valid\ninteger values from 0 to 65,535. 
A value of `0xabcd` would be serialized in\nlittle-endian order as `[ 0xcd, 0xab ]`.\n\n**Mnemonic**: The tilde is curvy and we're mixing things up, turning them\naround, vice-versa like.\n\n```javascript\nconst definition = {\n    object: {\n        value: ~16\n    }\n}\n\nconst object = {\n    value: 0xabcd\n}\n\ntest('little-endian', definition, object, [ 0xcd, 0xab ])\n```\n\nIf you want a little-endian negative number combine both `-` and `~`. You can\ncombine the `-` and `~` as either `-~` or `~-`.\n\nIn the following definition both `first` and `second` are 16-bit `number` fields\nwith valid integer values from -32768 to 32767. A value of `-0x2` would be\nconverted to the two's complement representation `0xfffe` and serialized in\nlittle-endian order as `[ 0xfe, 0xff ]`.\n\n```javascript\nconst definition = {\n    object: {\n        first: ~-16,\n        second: -~16\n    }\n}\n\nconst object = {\n    first: -2,\n    second: -2\n}\n\ntest('little-endian-twos-complement', definition, object, [\n    0xfe, 0xff, 0xfe, 0xff\n])\n```\n\nJust as with the default big-endian integer fields, little-endian integer fields\ngreater than 48 bits must be specified as `BigInt` fields using a `'bigint'`\nliteral.\n\n```javascript\nconst definition = {\n    object: {\n        value: ~64n,\n    }\n}\n\nconst object = {\n    value: 0xfedcba9876543210n\n}\n\ntest('little-endian-64', definition, object, [\n    0x10, 0x32, 0x54, 0x76, 0x98, 0xba, 0xdc, 0xfe\n])\n```\n\nSimilarly, for little-endian signed number fields greater than 48 bits you\ncombine the `-` and `~` with a `BigInt` literal. 
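The byte order in the `little-endian-64` example above can be double-checked with Node's built-in `DataView` (a hedged sketch, not part of Packet):

```javascript
// Sketch: reproduce the byte order of the ~64n example above with DataView.
const view = new DataView(new ArrayBuffer(8))
view.setBigUint64(0, 0xfedcba9876543210n, true) // true selects little-endian
const bytes = [ ...new Uint8Array(view.buffer) ].map(byte => byte.toString(16))
console.log(bytes) // [ '10', '32', '54', '76', '98', 'ba', 'dc', 'fe' ]
```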
You can combine the `-` and `~`\nas either `-~` or `~-`.\n\n```javascript\nconst definition = {\n    object: {\n        first: ~-64n,\n        second: -~64n\n    }\n}\n\nconst object = {\n    first: -2n,\n    second: -2n\n}\n\ntest('little-endian-twos-complement-64', definition, object, [\n    0xfe, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, // first\n    0xfe, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff  // second\n])\n```\n\n### Nested Structures\n\nYou can nest structures arbitrarily. The nesting of structures does not affect\nthe serialization or parsing, it will still be a series of bytes in the stream, but\nit may help your programs group values in a meaningful way.\n\n```javascript\nconst definition = {\n    object: {\n        header: {\n            type: 8,\n            length: 16\n        },\n        options: {\n            encrypted: 8,\n            checksum: 32\n        }\n    }\n}\n\nconst object = {\n    header: {\n        type: 1,\n        length: 64\n    },\n    options: {\n        encrypted: 0,\n        checksum: 0xaaaaaaaa\n    }\n}\n\ntest('nested-structures', definition, object, [\n    0x01,                   // header.type\n    0x00, 0x40,             // header.length\n    0x00,                   // options.encrypted\n    0xaa, 0xaa, 0xaa, 0xaa  // options.checksum\n])\n```\n\nYou can define a nested structure and then elide it by defining the structure\nwith a name that begins with a `_`. This is silly so don't do it. It is\navailable merely to be consistent with packed integers, accumulators and limits.\n\n**TODO** Short example.\n\n### Packed Integers\n\nPacked integers are expressed as nested structures grouped in an `Array`\nfollowed by an integer definition of the packed integer size. 
The bit lengths\nin the packed integer must sum to the size of the packed integer.\n\nThe fields within a packed integer are always laid out most significant first\nand cannot individually be made little-endian. Packed integer fields can be\nmade two's complement by preceding the field bit length with a `-` negative\nsymbol just like whole integers.\n\nBelow is a packed 32-bit integer with a single two's complement (potentially\nnegative) value named `volume`.\n\nThe bit length values of the packed values sum to 32. Note that we consider\n`volume` to be 10 bits and not -10 bits in this summation of packed field\nvalues. The `-` is used to indicate a two's complement integer field.\n\n```javascript\nconst definition = {\n    object: {\n        header: [{\n            type: 7,\n            encrypted: 1,\n            volume: -10,\n            length: 14\n        }, 32 ]\n    }\n}\n\nconst object = {\n    header: {\n        type: 3,\n        encrypted: 1,\n        volume: -1,\n        length: 1024\n    }\n}\n\ntest('packed-integer', definition, object, [\n    0x7,    // type and encrypted packed into one byte\n    0xff,   // eight bits of volume\n    0xc4,   // two bits of volume and top of length\n    0x0     // rest of length\n])\n```\n\nThe packed integer will be serialized as big-endian by default. 
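For intuition, the expected bytes in the `packed-integer` example above can be reproduced with plain bit arithmetic. A hedged sketch, with the field widths taken from the definition above:

```javascript
// Sketch: pack { type: 3, encrypted: 1, volume: -1, length: 1024 } by hand.
// Field widths: type 7, encrypted 1, volume 10 (two's complement), length 14.
const fields = [
    { value: 3, bits: 7 },
    { value: 1, bits: 1 },
    { value: -1 & 0x3ff, bits: 10 },    // mask to 10 bits for two's complement
    { value: 1024, bits: 14 }
]
// Shift each field into a 32-bit accumulator, most significant field first.
let word = 0
for (const { value, bits } of fields) {
    word = word * 2 ** bits + value     // avoid `<<`, which is signed 32-bit
}
// Split the accumulator into big-endian bytes.
const bytes = [ 24, 16, 8, 0 ].map(shift => Math.floor(word / 2 ** shift) & 0xff)
console.log(bytes) // [ 7, 255, 196, 0 ], i.e. [ 0x7, 0xff, 0xc4, 0x0 ]
```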
You can specify\nthat the packed integer is serialized as little-endian by preceding the bit\nlength with a `~` tilde.\n\n```javascript\nconst definition = {\n    object: {\n        header: [{\n            type: 7,\n            encrypted: 1,\n            volume: -10,\n            length: 14\n        }, ~32 ]\n    }\n}\n\nconst object = {\n    header: {\n        type: 3,\n        encrypted: 1,\n        volume: -1,\n        length: 1024\n    }\n}\n\ntest('packed-integer-little-endian', definition, object, [\n    0x0,    // rest of length\n    0xc4,   // two bits of volume and top of length\n    0xff,   // eight bits of volume\n    0x7     // type and encrypted packed into one byte\n])\n```\n\nYou may not want the nested structure of the packed header to appear in your\nparsed object. You can elide the nested structure by giving it a name that\nbegins with an `_`.\n\n**TODO** Example.\n\nElide only works for packed integers, accumulators, limits and structures. If\nyou give a field of any other type an underscore-prefixed name, the property\nwill simply be defined with the underscore in its name.\n\n**TODO** Example.\n\n### Literals\n\nLiterals are bytes written on serialization that are constant and not based on a\nvalue in the serialized structure.\n\nYou define a constant with an array that contains a `String` that describes the\nconstant value in hexadecimal digits. There should be two hexadecimal digits for\nevery byte. The length of the field is determined by the number of bytes\nnecessary to hold the value.\n\n**Mnemonic**: A string literal reminds us this is literal and stands out because\nit is not numeric. 
Hexadecimal helps distinguish these constant values from\nfield sizes and other aspects of the definition language expressed with numbers.\n\n```javascript\nconst definition = {\n    object: {\n        constant: [ 'fc' ],\n        value: 16\n    }\n}\n\nconst object = {\n    value: 0xabcd\n}\n\ntest('constant', definition, object, [\n    0xfc, 0xab, 0xcd\n])\n```\n\nGenerated parsers skip the constant bytes and do not validate the parsed value.\nIf you want to perform validation you can define the field as an integer field\nand inspect the parsed field value. This means you will also have to\nconsistently set the serialized field value on your own.\n\nA literal property is ignored on serialization if it exists on the object. It is\nnot set in the generated structure on parsing. In our example the `constant`\nproperty of the object is not generated on parse.\n\n**TODO** How about an explicit example that doesn't require as much exposition\nas our `test` definition.\n\nNot much point in naming a literal, is there? The literal will not be read\nfrom the serialized object nor will the named literal property be set in a\nparsed object. What if you have multiple literals? Now you have to have\n`constant1` and `constant2`. It starts to look ugly as follows.\n\n```javascript\nconst definition = {\n    object: {\n        constant1: [ 'fc' ],\n        key: 16,\n        constant2: [ 'ab' ],\n        value: 16\n    }\n}\n\nconst object = {\n    key: 1,\n    value: 0xabcd\n}\n\ntest('constants', definition, object, [\n    0xfc, 0x0, 0x1,\n    0xab, 0xab, 0xcd\n])\n```\n\nYou can forgo naming a literal by defining it as padding before or after a\nfield.\n\nTo prepend a literal, enclose the field definition in an array where the literal\ndefinition is the first element and the field definition is the second. 
The literal\nwill be written before writing the field value and skipped when parsing the\nfield value.\n\n```javascript\nconst definition = {\n    object: {\n        key: [[ 'fc' ], 16 ],\n        value: [[ 'ab' ], 16 ]\n    }\n}\n\nconst object = {\n    key: 1,\n    value: 0xabcd\n}\n\ntest('unnamed-literals', definition, object, [\n    0xfc, 0x0, 0x1,\n    0xab, 0xab, 0xcd\n])\n```\n\nYou can specify an unnamed literal that follows a field. Enclose the field\ndefinition in an array with the field definition as the first element and the\nliteral definition as the second element.\n\n```javascript\nconst definition = {\n    object: {\n        value: [ 16, [ 'ea' ] ],\n    }\n}\n\nconst object = {\n    value: 0xabcd\n}\n\ntest('unnamed-literal-after', definition, object, [\n    0xab, 0xcd, 0xea\n])\n```\n\nYou can specify an unnamed literal both before and after a field. Enclose the\nfield definition in an array and define the preceding literal as the first\nelement and the following literal as the last element.\n\nThe example above can be defined using literals around the `key` property alone.\n\n```javascript\nconst definition = {\n    object: {\n        key: [[ 'fc' ], 16, [ 'ab' ] ],\n        value: 16\n    }\n}\n\nconst object = {\n    key: 1,\n    value: 0xabcd\n}\n\ntest('unnamed-literals-before-and-after', definition, object, [\n    0xfc, 0x0, 0x1,\n    0xab, 0xab, 0xcd\n])\n```\n\nYou can define a literal that repeats its value. 
The constant value is defined\nusing an array that contains a `String` with the literal value as the first\nelement and the number of times to repeat the value as the second element.\n\n**Mnemonic**: The repeat count follows the hexadecimal definition, its relation\nto the definition is expressed by its containment in an array.\n\n```javascript\nconst definition = {\n    object: {\n        constant: [ 'beaf', 3 ],\n        value: 16\n    }\n}\n\nconst object = { value: 0xabcd }\n\ntest('literal-repeat', definition, object, [\n    0xbe, 0xaf, 0xbe, 0xaf, 0xbe, 0xaf,\n    0xab, 0xcd\n])\n```\n\nYou can express repeated literals as unnamed literals by prepending or appending\nthem to a field definition.\n\n```javascript\nconst definition = {\n    object: {\n        value: [[ 'beaf', 3 ], 16 ]\n    }\n}\n\nconst object = { value: 0xabcd }\n\ntest('unnamed-literal-repeat', definition, object, [\n    0xbe, 0xaf, 0xbe, 0xaf, 0xbe, 0xaf,\n    0xab, 0xcd\n])\n```\n\nNote that a literal definition without a repeat count is the same as a literal\ndefinition with a repeat count of `1`.\n\n```javascript\nconst definition = {\n    object: {\n        explicit: [[ 'beaf', 1 ], 16 ],\n        implicit: [[ 'beaf' ], 16 ]\n    }\n}\n\nconst object = { explicit: 0xabcd, implicit: 0xabcd }\n\ntest('unnamed-literal-repeat-once', definition, object, [\n    0xbe, 0xaf, 0xab, 0xcd,\n    0xbe, 0xaf, 0xab, 0xcd\n])\n```\n\nLittle-endian serialization of literals seems like an unlikely use case. 
One\nwould imagine that a specification would specify the bytes in network byte\norder. Oftentimes filler bytes are a repeat of a single byte so endianness\ndoesn't matter.\n\nIf you want little-endian serialization of a literal value you could simply\nreverse the bytes yourself.\n\nHere we write `0xbeaf` little-endian by explicitly flipping `0xbe` and `0xaf`.\n\n```javascript\nconst definition = {\n    object: {\n        value: [[ 'afbe' ], 16 ]\n    }\n}\n\nconst object = { value: 0xabcd }\n\ntest('unnamed-literal-little-endian-explicit', definition, object, [\n    0xaf, 0xbe,\n    0xab, 0xcd\n])\n```\n\nSimple enough, however...\n\nIf you specify a repeat count preceded by a `~` the pattern will be written\nlittle-endian.\n\n**Mnemonic**: We use a tilde `~` because it's squiggly and we're swirling the\nbytes around vice-versa. Same mnemonic as for little-endian integer fields.\n\n```javascript\nconst definition = {\n    object: {\n        value: [[ 'beaf', ~1 ], 16 ]\n    }\n}\n\nconst object = { value: 0xabcd }\n\ntest('unnamed-literal-little-endian', definition, object, [\n    0xaf, 0xbe,\n    0xab, 0xcd\n])\n```\n\nYou can repeat the little-endian serialization more than once.\n\n```javascript\nconst definition = {\n    object: {\n        value: [[ 'beaf', ~3 ], 16 ]\n    }\n}\n\nconst object = { value: 0xabcd }\n\ntest('unnamed-literal-little-endian-repeat', definition, object, [\n    0xaf, 0xbe, 0xaf, 0xbe, 0xaf, 0xbe,\n    0xab, 0xcd\n])\n```\n\nUnnamed little-endian literals can be appended or prepended. Any unnamed literal\ndefinition can be appended, prepended or both.\n\n### Length-Encoded Arrays\n\nA common pattern in serialization formats is a series of repeated values\npreceded by a count of those values.\n\n**Mnemonic**: We enclose the definition in an array. The first element is an\ninteger field definition for the length. Its scalar appearance indicates that\nit does not repeat. 
The repeated value is enclosed in an array indicating that\nit will be the value that repeats. The ordering of the scalar followed by the\narray mirrors the binary representation of a length/count followed by repeated\nvalues.\n\n```javascript\nconst definition = {\n    object: {\n        array: [ 16, [ 8 ] ]\n    }\n}\n\nconst object = {\n    array: [ 0xaa, 0xbb, 0xcc, 0xdd ]\n}\n\ntest('length-encoded', definition, object, [\n    0x0, 0x4, 0xaa, 0xbb, 0xcc, 0xdd\n])\n```\n\nThe repeated value can be of any type including structures.\n\n```javascript\nconst definition = {\n    object: {\n        array: [ 16, [{ key: 16, value: 16 }] ]\n    }\n}\n\nconst object = {\n    array: [{ key: 0xaa, value: 0xbb }, { key: 0xcc, value: 0xdd }]\n}\n\ntest('length-encoded-structures', definition, object, [\n    0x0, 0x2,               // length encoding\n    0x0, 0xaa, 0x0, 0xbb,   // first structure\n    0x0, 0xcc, 0x0, 0xdd    // second structure\n])\n```\n\nYou can even nest length-encoded arrays inside length-encoded arrays.\n\n```javascript\nconst definition = {\n    object: {\n        array: [ 16, [[ 16, [ 8 ]]] ]\n    }\n}\n\nconst object = {\n    array: [[ 0xaa, 0xbb ], [ 0xcc, 0xdd ]]\n}\n\ntest('length-encoded-nested', definition, object, [\n    0x0, 0x2,               // length encoding\n    0x0, 0x2, 0xaa, 0xbb,   // first array length encoding and values\n    0x0, 0x2, 0xcc, 0xdd    // second array length encoding and values\n])\n```\n\nBecause pure binary data is a special case, instead of an array of 8-bit\nbytes, you can specify length-encoded binary data as a `Buffer`.\n\n```javascript\nconst definition = {\n    object: {\n        array: [ 16, [ Buffer ] ]\n    }\n}\n\nconst object = {\n    array: Buffer.from([ 0xaa, 0xbb, 0xcc, 0xdd ])\n}\n\ntest('length-encoded-buffer', definition, object, [\n    0x0, 0x4, 0xaa, 0xbb, 0xcc, 0xdd\n])\n```\n\n### Inline Transforms and Assertions\n\nInline transforms are specified by wrapping a field definition in an array 
with\na pre-serialization function before or a post-parsing function after it or both.\nThe pre-serialization function and post-parsing function must each be enclosed\nin an array.\n\nA pre-serialization transformation function takes the value from the JavaScript\nobject and returns the transformed value that is then written to the stream. The\npost-parsing transformation function takes a value extracted from the stream and\nreturns the transformed value that is assigned to the JavaScript object.\n\nThe following transform will convert a hexadecimal string to an integer on\nserialization and back to a hexadecimal string on parse.\n\n**Mnemonic**: A function is obviously a function, it does something in the\nmidst of parsing. We use functions elsewhere in the language, so we enclose\nthem in arrays. The array brackets act as parentheses, these are parenthetical\nuser actions on the stream.\n\n```javascript\nconst definition = {\n    object: {\n        value: [[ $_ =\u003e parseInt($_, 16) ], 32, [ $_ =\u003e $_.toString(16) ]]\n    }\n}\n\nconst object = {\n    value: '89abcdef'\n}\n\ntest('transform-basic', definition, object, [\n    0x89, 0xab, 0xcd, 0xef\n])\n```\n\nWhoa, what's with the parameter names, pal? `$_` violates everything I was ever\ntaught about naming variables. How would you even pronounce that?\n\nWell, once upon a time I wrote me a lot of Perl. In Perl this variable is called\n\"dollar under.\" It is the default variable for an array value when you loop\nthrough an array with `foreach`. I miss those days, so I thought I'd revive\nthem. You can name positional arguments anything you like, but I'll be using\nthese names to get you used to them, because they're available as named\narguments as well.\n\nYou can also use named arguments via object destructuring. When you do, you\nmust specify names that are in the current namespace. 
The namespace will contain\nthe object properties in the current path.\n\n```javascript\nconst definition = {\n    object: {\n        value: [[ ({ value }) =\u003e parseInt(value, 16) ], 32, [ ({ value }) =\u003e value.toString(16) ]]\n    }\n}\n\nconst object = {\n    value: '89abcdef'\n}\n\ntest('transform-by-name', definition, object, [\n    0x89, 0xab, 0xcd, 0xef\n])\n```\n\nYou can also refer to the current variable using the Perl-esque \"dollar under\"\nvariable. Perl-esque variables can make your code more concise. If used\nconsistently your code will still be human readable.\n\n```javascript\nconst definition = {\n    object: {\n        value: [[ ({ $_ }) =\u003e parseInt($_, 16) ], 32, [ ({ $_ }) =\u003e $_.toString(16) ]]\n    }\n}\n\nconst object = {\n    value: '89abcdef'\n}\n\ntest('transform-dollar-under', definition, object, [\n    0x89, 0xab, 0xcd, 0xef\n])\n```\n\nThere are two Perl-esque variable names: `$_` for the immediate property value,\nand `$` for the root object. Any other system provided names such as `$i`,\n`$buffer`, `$start` and `$end` will begin with a `$` to distinguish them from\nuser specified names and to avoid namespace collisions.\n\n**Mnemonic**: Borrowed from the Perl `foreach` loop, `$_` is the immediate\nproperty value, useful for its brevity. `$` is the root variable, the shortest\nspecial variable because if you're starting from the root, you have a path ahead\nof you.\n\nA transform or assertion is always defined with an array of three elements. If\nyou only want to define a pre-serialization action, the last element will be an\nempty array. If you only want to define a post-parsing action, the first element\nwill be an empty array.\n\nIn the following example we do not want to perform a post-parsing action, so we\nleave the post-parsing array empty, but we do not neglect to add it.\n\n```javascript\nconst definition = {\n    object: {\n        value: [[ $_ =\u003e typeof $_ == 'string' ? 
parseInt($_, 16) : $_ ], 32, []]\n    }\n}\n\nconst moduleName = compile('transform-pre-only', definition)\nconst mechanics = require(moduleName)\n\n{\n    const buffer = new SyncSerializer(mechanics).serialize('object', { value: '89abcdef' })\n    const object = new SyncParser(mechanics).parse('object', buffer)\n    okay(object, { value: 0x89abcdef }, 'transform-pre-only-convert')\n}\n\n{\n    const buffer = new SyncSerializer(mechanics).serialize('object', { value: 0x89abcdef })\n    const object = new SyncParser(mechanics).parse('object', buffer)\n    okay(object, { value: 0x89abcdef }, 'transform-pre-only-no-convert')\n}\n```\n\nNamed arguments have limitations. We're using a simple regex-based parser to\nextract the arguments from the function source, not a complete JavaScript\nparser. We are able to parse object destructuring, array destructuring, and\ndefault argument values of numbers, single quoted strings and double quoted\nstrings.\n\nDo not use regular expressions, interpolated strings or function calls in your\ndefault argument assignments. You can use any valid JavaScript in your function\nbodies.\n\nIn the following definition we've added an unused named variable that is\ndefault-assigned a value extracted from a literal string by a regular\nexpression. The right curly brace in the literal string won't confuse our simple\nargument parser, but the right curly brace in the regular expression will.\n\n```javascript\nconst definition = {\n    object: {\n        value: [[\n            ({ $_, packet: { extra = /^([}])/.exec(\"}\")[1] } }) =\u003e parseInt($_, 16)\n        ], 32, [\n            ({ $_ }) =\u003e $_.toString(16)\n        ]]\n    }\n}\n\nconst _definition = {\n    object: {\n        value: [[ $_ =\u003e typeof $_ == 'string' ? parseInt($_, 16) : $_ ], 32, []]\n    }\n}\n\ntry {\n    packetize(definition)\n} catch (error) {\n    okay(error instanceof SyntaxError, 'unable to parse regex')\n}\n```\n\nAs you can see, it's an unlikely use case. 
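For intuition, pulling argument names out of a function's source text with a naive regex-free string scan looks something like this. This is a hedged illustration of the general technique, not Packet's actual argument parser:

```javascript
// Sketch: naively extract destructured argument names from a function's
// source text via Function.prototype.toString. An unusual default value
// containing a stray `}` would throw an extractor like this off, which is
// why Packet restricts what may appear in default argument assignments.
const fn = ({ $_, $ }) => $_ ^ $.mask
const source = fn.toString()
const args = source.slice(source.indexOf('{') + 1, source.indexOf('}'))
const names = args.split(',').map(name => name.trim())
console.log(names) // [ '$_', '$' ]
```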
Basically, if you find yourself\nwriting logic in your named arguments, stop and place it in a function in a\nmodule and invoke that module function from the inline function.\n\nWe'll continue to use `$_` and `$` in positional argument examples so we can all\nget used to them.\n\nThe first argument to a transformation function with positional arguments is the\nvalue being transformed; the second argument is the root object being transformed.\n\nThe following WebSockets-inspired example XORs a value with a `mask` property in\nthe packet.\n\n```javascript\nconst definition = {\n    object: {\n        mask: 16,\n        value: [[ ($_, $) =\u003e $_ ^ $.mask ], 16, [ ($_, $) =\u003e $_ ^ $.mask ]]\n    }\n}\n\nconst object = {\n    mask: 0xaaaa,\n    value: 0xabcd\n}\n\ntest('transform-mask-positional', definition, object, [\n    0xaa, 0xaa, 0x1, 0x67\n])\n```\n\nThis can also be expressed using named arguments. Note how we can order the arguments\nany way we like.\n\n```javascript\nconst definition = {\n    object: {\n        mask: 16,\n        value: [[ ({ $, $_ }) =\u003e $_ ^ $.mask ], 16, [ ({ $_, $ }) =\u003e $_ ^ $.mask ]]\n    }\n}\n\nconst object = {\n    mask: 0xaaaa,\n    value: 0xabcd\n}\n\ntest('transform-mask-named', definition, object, [\n    0xaa, 0xaa, 0x1, 0x67\n])\n```\n\nYou can also use the names of the object properties in the current path. 
Again,\nnote that the order of names does not matter with named arguments.\n\n```javascript\nconst definition = {\n    object: {\n        mask: 16,\n        value: [[\n            ({ object, value }) =\u003e value ^ object.mask\n        ], 16, [\n            ({ value, object }) =\u003e value ^ object.mask\n        ]]\n    }\n}\n\nconst object = {\n    mask: 0xaaaa,\n    value: 0xabcd\n}\n\ntest('transform-mask-long-named', definition, object, [\n    0xaa, 0xaa, 0x1, 0x67\n])\n```\n\n(Note to self: Seems like it might also be useful to be able to reference the\ncurrent object in a loop, which could be `$0` for the current object, `$1` for a\nparent. This would be simpler than passing in the indices, but that would be\nsimple enough, just give them the already existing `$i`. Heh, no, make them\nsuffer.)\n\nThe third argument passed to a transformation function is an array of indices\nindicating the index of each array in the path to the object. **TODO** Move\nfixed arrays above.\n\n```javascript\nconst definition = {\n    object: {\n        array: [ 16, [{\n            mask: 16,\n            value: [[\n                ($_, $, $i) =\u003e $_ ^ $.array[$i[0]].mask\n            ], 16, [\n                ($_, $, $i) =\u003e $_ ^ $.array[$i[0]].mask\n            ]]\n        }]]\n    }\n}\n\nconst object = {\n    array: [{\n        mask: 0xaaaa, value: 0xabcd\n    }, {\n        mask: 0xffff, value: 0x1234\n    }]\n}\n\ntest('transform-mask-array-positional', definition, object, [\n    0x0, 0x2,                   // length encoded count of elements\n    0xaa, 0xaa, 0x1, 0x67,      // first element\n    0xff, 0xff, 0xed, 0xcb      // second element\n])\n```\n\nWe can use named arguments as well.\n\n```javascript\nconst definition = {\n    object: {\n        array: [ 16, [{\n            mask: 16,\n            value: [[\n                ({ $_, $, $i }) =\u003e $_ ^ $.array[$i[0]].mask\n            ], 16, [\n                ({ $_, $, $i }) =\u003e $_ ^ $.array[$i[0]].mask\n       
     ]]\n        }]]\n    }\n}\n\nconst object = {\n    array: [{\n        mask: 0xaaaa, value: 0xabcd\n    }, {\n        mask: 0xffff, value: 0x1234\n    }]\n}\n\ntest('transform-mask-array-named', definition, object, [\n    0x0, 0x2,                   // length encoded count of elements\n    0xaa, 0xaa, 0x1, 0x67,      // first element\n    0xff, 0xff, 0xed, 0xcb      // second element\n])\n```\n\nWe can also use the names of the object properties in the current path. The `$i`\nindex array is a special system variable and it therefore retains its\ndollar-sign-prefixed name.\n\n```javascript\nconst definition = {\n    object: {\n        array: [ 16, [{\n            mask: 16,\n            value: [[\n                ({ value, object, $i }) =\u003e value ^ object.array[$i[0]].mask\n            ], 16, [\n                ({ value, object, $i }) =\u003e value ^ object.array[$i[0]].mask\n            ]]\n        }]]\n    }\n}\n\nconst object = {\n    array: [{\n        mask: 0xaaaa, value: 0xabcd\n    }, {\n        mask: 0xffff, value: 0x1234\n    }]\n}\n\ntest('transform-mask-array-full-named', definition, object, [\n    0x0, 0x2,                   // length encoded count of elements\n    0xaa, 0xaa, 0x1, 0x67,      // first element\n    0xff, 0xff, 0xed, 0xcb      // second element\n])\n```\n\nIf your pre-serialization function and post-parsing function are the same, you\ncan specify the function once and use it for both serialization and parsing by\nsurrounding it with an additional array.\n\n```javascript\nconst definition = {\n    object: {\n        mask: 16,\n        value: [[[ ($_, $) =\u003e $_ ^ $.mask ]], 16 ]\n    }\n}\n\nconst object = {\n    mask: 0xaaaa, value: 0xabcd\n}\n\ntest('transform-mask-same', definition, object, [\n    0xaa, 0xaa, 0x1, 0x67\n])\n```\n\nNote that the above functions can also be defined using `function` syntax. 
Arrow\nfunctions are generally more concise, however.\n\n```javascript\nconst definition = {\n    object: {\n        mask: 16,\n        value: [[[ function ({ value, object }) {\n            return value ^ object.mask\n        } ]], 16 ]\n    }\n}\n\nconst object = {\n    mask: 0xaaaa, value: 0xabcd\n}\n\ntest('transform-mask-function-syntax', definition, object, [\n    0xaa, 0xaa, 0x1, 0x67\n])\n```\n\n### Fixed Length Arrays\n\nFixed length arrays are arrays whose length is fixed in the definition. They are\nspecified by an array containing the numeric length of the array.\n\n**Mnemonic**: Like a length encoded definition, the element definition is placed\ninside an array because it is the array element. Like a length encoded\ndefinition, the length of the array precedes the element definition. It is the\nlength of the array enclosed in an array, like a C array declaration.\n\n```javascript\nconst definition = {\n    object: {\n        fixed: [[ 2 ], [ 16 ]]\n    }\n}\n\nconst object = {\n    fixed: [ 0xabcd, 0xdcba ]\n}\n\ntest('fixed', definition, object, [\n    0xab, 0xcd, 0xdc, 0xba\n])\n```\n\nCalculated length arrays are fixed length arrays where the fixed length is\nspecified by a function. The length is, therefore, not fixed at all. It is\ncalculated.\n\nIn the following example the length encoding is in a header and there is a field\nbetween the length encoding and the array, so we can't use a\nlength-encoded array definition. 
We use a calculated length that references the\nheader's length field.\n\n**Mnemonic**: Same as fixed length arrays, replacing the fixed length with a\nfunction that will calculate the length.\n\n```javascript\nconst definition = {\n    object: {\n        header: {\n            length: 16,\n            type: 8\n        },\n        array: [[ $ =\u003e $.header.length ], [ 16 ]]\n    }\n}\n\nconst object = {\n    header: {\n        length: 2,\n        type: 1\n    },\n    array: [ 0xabcd, 0xdcba ]\n}\n\ntest('fixed-calculated', definition, object, [\n    0x0, 0x2,               // header.length\n    0x1,                    // header.type\n    0xab, 0xcd, 0xdc, 0xba\n])\n```\n\n### Requiring Modules\n\nThe functions in our packet definitions may depend on external libraries. We can\ndeclare these dependencies with the `require` option, which maps a variable name\nto the name of the module to require.\n\n```javascript\nconst definition = {\n    object: {\n        value: [[ value =\u003e ip.toLong(value) ], 32, [ value =\u003e ip.fromLong(value) ]]\n    }\n}\n\nconst source = packetize(definition, { require: { ip: 'ip' } })\n\nconst moduleName = path.resolve(__dirname, 'require.js')\nfs.writeFileSync(moduleName, source)\n\nconst mechanics = require(moduleName)\n\nconst object = { value: '127.0.0.1' }\n\nconst buffer = new SyncSerializer(mechanics).serialize('object', object)\n\nokay(buffer.toJSON().data, [\n    127, 0, 0, 1\n], 'require serialized')\n\nconst parsed = new SyncParser(mechanics).parse('object', buffer)\nokay(parsed, object, 'require parsed')\n```\n\nWe can also use modules local to the current project using relative paths, but\nwe face a problem: we're not going to ship the language definition with our\ncompleted project, we're going to ship the generated software. Therefore,\nrelative paths must be relative to the generated file. Your relative paths must be\nrelative to the output directory... (eh, whatever. 
Maybe I can fix that up for\nyou.)\n\n```javascript\n{\n    ; ({\n        packet: {\n            value: [[ $value =\u003e ip.toLong($value) ], 32, [ $value =\u003e ip.fromLong($value) ]]\n        }\n    }, {\n        require: { ip: '../ip' }\n    })\n}\n```\n\n### Assertions\n\n**TODO** Needs examples of failed assertions.\n\nWe can also perform inline assertions. You specify an assertion the same way you\nspecify a transformation. You wrap your definition in an array.\nA pre-serialization assertion is a function within an array in the element\nbefore the definition. A post-parsing assertion is a function within an array\nin the element after the definition.\n\nWhen performing inline assertions, we are not transforming a value, we're simply\nchecking its validity and raising an exception if a value is invalid. You could\nuse a transformation to do this, but you would end up returning the value as is.\n\nWith an assertion function the return value is ignored. It is not used as the\nserialization or assignment value.\n\nTo declare an assertion function you assign a default value of `0` or `null` to\nthe immediate property argument.\n\nIn the following definition we use a `0` default value for the immediate\nproperty argument which indicates that the value should not be used for\nserialization by the pre-serialization function nor for assignment by the\npost-parsing function.\n\n```javascript\nconst definition = {\n    object: {\n        value: [[\n            ($_ = 0) =\u003e assert($_ \u003c 1000, 'exceeds max value')\n        ], 16, [\n            ($_ = 0) =\u003e assert($_ \u003c 1000, 'exceeds max value')\n        ]]\n    }\n}\nconst object = {\n    value: 1\n}\ntest('assertion', definition, object, [\n    0x0, 0x1\n], { require: { assert: 'assert' } })\n```\n\n(I assume I'll implement this in this way:) The exception will propagate to the\nAPI caller so that you can catch it in your code and cancel the serialization or\nparse. 
(However, if I do wrap the assertion in a try/catch and rethrow it\nsomehow, then the following example is moot.)\n\nIf you were to use a transform, you would have to return the value and your\ndefinition would be more verbose.\n\n```javascript\nconst definition = {\n    object: {\n        value: [[\n            $_ =\u003e {\n                assert($_ \u003c 1000, 'exceeds max value')\n                return $_\n            }\n        ], 16, [\n            $_ =\u003e {\n                assert($_ \u003c 1000, 'exceeds max value')\n                return $_\n            }\n        ]]\n    }\n}\nconst object = {\n    value: 1\n}\ntest('assertion-not-assertion', definition, object, [\n    0x0, 0x1\n], { require: { assert: 'assert' } })\n```\n\nYou can use the same function for both pre-serialization and post-parsing by\nsurrounding the function in an additional array.\n\n```javascript\nconst definition = {\n    object: {\n        value: [[[ ($_ = 0) =\u003e assert($_ \u003c 1000, 'exceeds max value') ]], 16 ]\n    }\n}\nconst object = {\n    value: 1\n}\ntest('assertion-mirrored', definition, object, [\n    0x0, 0x1\n], { require: { assert: 'assert' } })\n```\n\nYou can use named arguments to declare an assertion function.\n\n```javascript\nconst definition = {\n    object: {\n        value: [[[ ({ $_ = 0 }) =\u003e assert($_ \u003c 1000, 'exceeds max value') ]], 16 ]\n    }\n}\nconst object = {\n    value: 1\n}\ntest('assertion-named', definition, object, [\n    0x0, 0x1\n], { require: { assert: 'assert' } })\n```\n\n### Assertion and Transformation Arguments\n\nYou can pass arguments to assertions and transforms. Any value that follows the\nfunction in the array and is not itself a `function` is considered an argument\nto the function. 
The arguments are passed in the order in which they are\nspecified, preceding the immediate property value.\n\nIn the following definition the function is followed by a `number` argument\nwhich is passed as the first parameter to the function when serializing or parsing.\n\n```javascript\nconst definition = {\n    object: {\n        value: [[[ (max, $_ = 0) =\u003e assert($_ \u003c max, `value exceeds ${max}`), 1024 ]], 16 ]\n    }\n}\nconst object = {\n    value: 1\n}\ntest('assertion-parameter', definition, object, [\n    0x0, 0x1\n], { require: { assert: 'assert' } })\n```\n\nThis is useful when defining a function that you use more than once in your\ndefinition.\n\n```javascript\nconst max = (max, $_ = 0) =\u003e assert($_ \u003c max, `value exceeds ${max}`)\n\nconst definition = {\n    object: {\n        length: [[[ max, 1024 ]], 16 ],\n        type: [[[ max, 12 ]], 8 ]\n    }\n}\nconst object = {\n    length: 256,\n    type: 3\n}\ntest('assertion-parameter-reuse', definition, object, [\n    0x1, 0x0, 0x3\n], { require: { assert: 'assert' } })\n```\n\nWhen using named arguments, the argument values are assigned to the named\nparameters preceding the first variable that is defined in the current scope.\nThat is, the first occurrence of a variable name that is either the name of a\nproperty in the current path or a system name beginning with a `$` dollar sign.\n\nIn the following definition the first argument to the `max` function will be\nassigned to the `max` named argument. The positional argument mapping stops at\nthe `$path` parameter since it is a system parameter beginning with a `$` dollar\nsign. 
The `'oops'` parameter of the `max` function call for the `type` property\nwill be ignored.\n\n```javascript\nconst max = ({ max, $path, $_ = 0 }) =\u003e assert($_ \u003c max, `${$path.pop()} exceeds ${max}`)\n\nconst definition = {\n    object: {\n        length: [[[ max, 1024 ]], 16 ],\n        type: [[[ max, 12, 'oops' ]], 8 ]\n    }\n}\nconst object = {\n    length: 256,\n    type: 3\n}\ntest('assertion-parameter-named-reuse', definition, object, [\n    0x1, 0x0, 0x3\n], { require: { assert: 'assert' } })\n```\n\n### Terminated Arrays\n\nIn the following example, we terminate the array when we encounter a `0` value.\nThe `0` is not included in the array result.\n\n```javascript\n{\n    const definition = {\n        object: {\n            array: [[ 8 ], 0x0 ]\n        }\n    }\n    const object = {\n        array: [ 0xab, 0xcd ]\n    }\n    test('terminated', definition, object, [\n        0xab, 0xcd, 0x0\n    ])\n}\n```\n\n#### Multi-byte Terminators\n\nYou can specify a multi-byte terminator by listing it byte by byte at the end of\nthe definition array.\n\nIn the following example, we terminate the array when we encounter a `0xd` value\nfollowed by a `0xa` value, carriage return followed by line feed.\n\nThe terminator is not included in the array result.\n\n```javascript\nconst definition = {\n    object: {\n        array: [[ 8 ], 0xd, 0xa ]\n    }\n}\nconst object = {\n    array: [ 0xab, 0xcd ]\n}\ntest('terminated-multibyte', definition, object, [\n    0xab, 0xcd, 0xd, 0xa\n])\n```\n\n### String Value Maps\n\n**TODO**: Need first draft.\n\n```javascript\n{\n    const definition = {\n        object: {\n            header: [{ type: [ 8, [ 'off', 'on' ] ] }, 8 ]\n        }\n    }\n    const object = {\n        header: {\n            type: 'on'\n        }\n    }\n    test('string-value-map', definition, object, [\n        0x1\n    ])\n}\n```\n\n```javascript\n{\n    const description = {\n        packet: {\n            type: [ 8, { 0: 'off', 
1: 'on', null: 'unknown' } ]\n        }\n    }\n}\n```\n\n### Floating Point Values\n\nPacket supports serializing and parsing IEEE 754 floating point numbers. This is\nthe representation common to C.\n\nA floating point number is specified by a `number` whose bit size is repeated in\nits decimal digits, `64.64` for a 64-bit double or `32.32` for a 32-bit float.\n\n**TODO** Values of `1.1` and `-1.5` are not serializing and restoring correctly.\nI can't remember if this is expected.\n\n```javascript\nconst definition = {\n    object: {\n        doubled: 64.64,\n        float: 32.32\n    }\n}\nconst object = {\n    doubled: 1.2,\n    float: -1.5\n}\ntest('float', definition, object, [\n    0x3f, 0xf3, 0x33, 0x33,\n    0x33, 0x33, 0x33, 0x33,\n    0xbf, 0xc0, 0x0, 0x0\n])\n```\n\nThere are only two sizes of floating point number available, 64-bit and 32-bit.\nThese are based on the IEEE 754 standard. As of 2008, the standard defines a\n[128-bit quad precision floating\npoint](https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format)\nbut the JavaScript `number` is itself a 64-bit IEEE 754 double-precision float,\nso we'd have to introduce one of the big decimal libraries from NPM to support\nit. It's probably best you sort out a solution for your application using\ninline functions, maybe serializing to a byte array or `BigInt`. If you\nencounter a 128-bit number in the wild, I'd be curious. Please let me know.\n\n### Conditionals\n\n**TODO**: Need first draft.\n\nBasic conditionals are expressed as an array of boolean functions paired with\nfield definitions. The functions and definitions repeat, creating an if/else if\nconditional. 
The array can end with a field definition that acts as the `else`\ncondition.\n\nIf the function has positional arguments, the function is called with the root\nobject, followed by an array of indices into any arrays in the current path,\nfollowed by an array of names of the properties in the current path.\n\nIn the following definition the bit size of `value` is 8 bits if the `type`\nproperty is `1`, 16 bits if the `type` property is `2`, 24 bits if the `type`\nproperty is `3`, and 32 bits for any other value of `type`.\n\n```javascript\n{\n    const definition = {\n        object: {\n            type: 8,\n            value: [\n                $ =\u003e $.type == 1, 8,\n                $ =\u003e $.type == 2, 16,\n                $ =\u003e $.type == 3, 24,\n                true, 32\n            ]\n        }\n    }\n    const object = {\n        type: 2,\n        value: 1\n    }\n    test('conditional', definition, object, [\n        0x2,\n        0x0, 0x1\n    ])\n}\n```\n\nYou can use conditionals in bit-packed integers as well.\n\n```javascript\n{\n    const definition = {\n        object: {\n            header: [{\n                type: 4,\n                value: [\n                    $ =\u003e $.header.type == 1, 28,\n                    $ =\u003e $.header.type == 2, [{ first: 4, second: 24 }, 28 ],\n                    $ =\u003e $.header.type == 3, [{ first: 14, second: 14 }, 28 ],\n                    true, [[ 24, 'ffffff' ], 4 ]\n                ]\n            }, 32 ]\n        }\n    }\n    const object = {\n        header: {\n            type: 2,\n            value: { first: 0xf, second: 0x1 }\n        }\n    }\n    test('conditional-packed', definition, object, [\n        0x2f, 0x0, 0x0, 0x1\n    ])\n}\n```\n\n### Switch Conditionals\n\n**TODO**: Need first draft. 
Also, example is wrong.\n\n```javascript\nconst definition = {\n    object: {\n        type: 8,\n        value: [\n            ($) =\u003e $.type, [\n                { $_: 1 },          8,\n                { $_: [ 2, 3 ] },   16,\n                { $_: [] },         32\n            ]\n        ]\n    }\n}\nconst object = {\n    type: 2,\n    value: 1\n}\ntest('switch', definition, object, [\n    0x2, 0x0, 0x1\n])\n```\n\n### References to Partials\n\n**TODO**: First draft done.\n\nIf you have a complicated type whose definition is tedious to repeat, you can\nreference that definition by name.\n\nReferences can be used as types and can also be used as length encoding lengths\nif they resolve to an integer type. If you create a type that is only used by\nreference and that you do not want available as a packet, prepend an underscore and\nit will not be returned as a packet type.\n\n**Mnemonic**: A string name to name the referenced type.\n\nIn the following definition an encoded integer is defined as a partial that\nwill not be presented as a packet due to the `_` prefix to the name. 
It is\nreferenced by the `value` property as a type. Using it for the length encoding of\nan array is not yet implemented, as the comment in the example notes.\n\n```javascript\nconst definition = {\n    $encodedInteger: [\n        [\n            value =\u003e value \u003c= 0x7f, 8,\n            value =\u003e value \u003c= 0x3fff, [ 16, [ 0x80, 7 ], [ 0x0, 7 ] ],\n            value =\u003e value \u003c= 0x1fffff, [ 24, [ 0x80, 7 ], [ 0x80, 7 ], [ 0x0, 7 ] ],\n            true, [ 32, [ 0x80, 7 ], [ 0x80, 7 ], [ 0x80, 7 ], [ 0x0, 7 ] ]\n        ],\n        [ 8,\n            sip =\u003e (sip \u0026 0x80) == 0, 8,\n            true, [ 8,\n                sip =\u003e (sip \u0026 0x80) == 0, [ 16, [ 0x80, 7 ], [ 0x0, 7 ] ],\n                true, [ 8,\n                    sip =\u003e (sip \u0026 0x80) == 0, [ 24, [ 0x80, 7 ], [ 0x80, 7 ], [ 0x0, 7 ] ],\n                    true, [ 32, [ 0x80, 7 ], [ 0x80, 7 ], [ 0x80, 7 ], [ 0x0, 7 ] ]\n                ]\n            ]\n        ]\n    ],\n    object: {\n        value: '$encodedInteger',\n        // array: [ '$encodedInteger', [ 8 ] ] ~ I haven't done this, requires using conditionals in length encoded arrays or calculated arrays.\n    }\n}\nconst object = {\n    value: 1,\n    array: [ 1 ]\n}\ntest('partial', definition, object, [\n    0x1 // , 0x1, 0x1\n])\n```\n\n### Checksums and Running Calculations\n\nSome protocols perform checksums on the body of a message. 
Others require tracking\nthe remaining bytes in a message based on a length property in a header and\nmaking decisions about the contents of the message based on the bytes remaining.\n\nTo perform running calculations like checksums and remaining bytes, we can use\naccumulators, lexically scoped object variables that can be used to store the\nstate of a running calculation.\n\nThe following definition creates an MD5 checksum of the body of a packet and\nstores the result in a checksum property that follows the body of the message.\n\n```javascript\n{\n    const definition = {\n        object: [{ hash: () =\u003e crypto.createHash('md5') }, {\n            body: [[[\n                ({ $buffer, $start, $end, hash }) =\u003e hash.update($buffer.slice($start, $end))\n            ]], {\n                number: 32,\n                data: [[ 8 ], 0x0 ]\n            }],\n            checksum: [[\n                ({ $_, hash }) =\u003e $_ = hash.digest()\n            ], [[ 16 ], [ Buffer ]], [\n                ({ checksum = 0, hash }) =\u003e {\n                    assert.deepEqual(hash.digest().toJSON(), checksum.toJSON())\n                }\n            ]]\n        }]\n    }\n    const object = {\n        body: {\n            number: 1,\n            data: [ 0x41, 0x42, 0x43 ]\n        },\n        checksum: Buffer.from([ 0xc9, 0xd0, 0x87, 0xbd, 0x2f, 0x8f, 0x4a, 0x33, 0xd4, 0xeb, 0x2d, 0xe4, 0x47, 0xc0, 0x40, 0x28 ])\n    }\n    test('checksum', definition, object, [\n        0x0, 0x0, 0x0, 0x1,\n        0x41, 0x42, 0x43, 0x0,\n        0xc9, 0xd0, 0x87, 0xbd,\n        0x2f, 0x8f, 0x4a, 0x33,\n        0xd4, 0xeb, 0x2d, 0xe4,\n        0x47, 0xc0, 0x40, 0x28\n    ], { require: { assert: 'assert', crypto: 'crypto' } })\n}\n```\n\nHere we also introduce the concept of buffer inlines. These are inlines that\noperate not on the serialized or parsed value, but instead on the underlying\nbuffer. 
In the above example the `hash.update()` inline is not called once for\neach property in the `body`; it is called for each buffer chunk that contains\nthe binary data for the `body`.\n\nUnlike ordinary inline functions, a buffer inline is not called prior to\nserialization. Buffer inlines are called as late as possible to process as much\nof the buffer contiguously as possible. In the previous example, the\n`hash.update()` inline is applied to the binary data that defines the entire\n`body` which it encapsulates.\n\nWe use nested structures to group the fields that a buffer inline covers.\n\n**TODO**: Simpler calculation example to start. Calculation is important because\nit will allow us to talk about the difference between `sizeof`,\n`offsetof`.\n\n**TODO**: Come back and implement this by finding a way to extract sizeof and\noffset of. Punting because I don't really know what inspired this example or\nwhat it is supposed to illustrate.\n\n```javascript\nconst definition = {\n    packet: [{ counter: () =\u003e [] }, {\n        header: {\n            type: 8,\n            length: [[\n                ({ $, counter }) =\u003e {\n                    return counter[0] = $sizeof.packet($) - $offsetof.packet($, 'body')\n                }\n            ], 16, [\n                ({ $_, counter }) =\u003e {\n                    return counter[0] = $_\n                }\n            ]]\n        },\n        body: [[[\n            ({ $_ = 0, $start, $end, counter }) =\u003e counter[0] -= $end - $start\n        ]], {\n            value: 32,\n            string: [[ 8 ], 0x0 ],\n            variable: [\n                ({ counter }) =\u003e counter[0] == 4, 32,\n                ({ counter }) =\u003e counter[0] == 2, 16,\n                8\n            ]\n        }],\n    }]\n}\n```\n\n### Parameters\n\n**TODO**: Need first draft, or reread this and see if it is a real first draft.\n\nAccumulators described in the preceding section also define parameters. 
Any\naccumulator declared on the topmost field will create parameters of the\ngenerated serializers and parsers.\n\n```javascript\nconst definition = {\n    object: [{ counter: [ 0 ] }, [[[\n        ({ $start, $end, counter }) =\u003e counter[0] += $end - $start\n    ]], {\n        number: 8,\n        string: [ [ 8 ], 0x0 ]\n    }]]\n}\n// **TODO**: API call to get counter.\n```\n\nThe parameters are available both as arguments that can be passed to inline\nfunctions and as variables in the program scope. Be careful not to\nhide any module declarations you've made.\n\n```javascript\nconst definition = {\n    object: [{ encoding: 'utf8' }, {\n        string: [[\n            value =\u003e Buffer.from(value, encoding)\n        ], [ [ Buffer ], 0x0 ], [\n            value =\u003e value.toString(encoding)\n        ]]\n    }]\n}\nconst moduleName = compile('parameters', definition, {})\n// *TODO*: API call to encode string ascii or something.\n```\n\n## What's Missing\n\nA section of things that need to be written.\n\n * **TODO** Did we support packed `BigInt` integers?\n * **TODO** Absent values.\n * **TODO** Switch statements.\n * **TODO** Composition.\n * **TODO** Parse or serialize conditionals, i.e. 
unconditionals.\n\n## Outline\n\n * Living `README.md`.\n * Parsers and Serializers\n * Packet Definition Language\n    * Integers\n        * Negative Integers\n        * Endianness\n    * Nested Structures\n    * Packed Integers\n    * Literals\n    * Length-Encoded Arrays\n    * Inline Transforms and Assertions\n    * Fixed Length Arrays\n    * Requiring Modules\n    * Assertions\n    * Assertion and Transformation Arguments\n    * Terminated Arrays\n        * Multi-byte Terminators\n    * String Value Maps\n    * Floating Point Values\n    * Conditionals\n    * Switch Conditionals\n    * References to Partials\n    * Accumulators\n    * Parameters\n * What's Missing\n * Outline\n