{"id":20996871,"url":"https://github.com/leandromoh/recordparser","last_synced_at":"2025-04-12T23:40:03.693Z","repository":{"id":39486675,"uuid":"266390144","full_name":"leandromoh/RecordParser","owner":"leandromoh","description":"Zero Allocation Writer/Reader Parser for .NET Core","archived":false,"fork":false,"pushed_at":"2024-08-10T07:07:25.000Z","size":37294,"stargazers_count":301,"open_issues_count":6,"forks_count":10,"subscribers_count":7,"default_branch":"master","last_synced_at":"2025-04-12T23:39:24.582Z","etag":null,"topics":["csv","delimited","dotnet-core","expression-tree","file","fixedlength","flat","flatfile","mapper","parser","performance","reader","span","tsv"],"latest_commit_sha":null,"homepage":"","language":"C#","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/leandromoh.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":".github/FUNDING.yml","license":"LICENSE.md","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null},"funding":{"github":["leandromoh"],"patreon":null,"open_collective":null,"ko_fi":null,"tidelift":null,"community_bridge":null,"liberapay":null,"issuehunt":null,"otechie":null,"lfx_crowdfunding":null,"custom":null}},"created_at":"2020-05-23T17:52:47.000Z","updated_at":"2025-04-12T15:05:29.000Z","dependencies_parsed_at":"2024-08-02T03:12:48.286Z","dependency_job_id":null,"html_url":"https://github.com/leandromoh/RecordParser","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leandromoh%2FRecordParser","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/lean
dromoh%2FRecordParser/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leandromoh%2FRecordParser/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leandromoh%2FRecordParser/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/leandromoh","download_url":"https://codeload.github.com/leandromoh/RecordParser/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248647254,"owners_count":21139081,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["csv","delimited","dotnet-core","expression-tree","file","fixedlength","flat","flatfile","mapper","parser","performance","reader","span","tsv"],"created_at":"2024-11-19T07:37:41.480Z","updated_at":"2025-04-12T23:40:03.672Z","avatar_url":"https://github.com/leandromoh.png","language":"C#","readme":"[![Nuget](https://img.shields.io/nuget/v/recordparser)](https://www.nuget.org/packages/recordparser)\n![GitHub Workflow Status (branch)](https://img.shields.io/github/actions/workflow/status/leandromoh/recordparser/test-dotnet.yml?branch=master)\n![GitHub](https://img.shields.io/github/license/leandromoh/recordparser)\n![GitHub code size in bytes](https://img.shields.io/github/languages/code-size/leandromoh/RecordParser)\n\n# RecordParser - Simple, Fast, GC friendly \u0026 Extensible\n\nRecordParser is an expression-tree-based parser that helps you write maintainable parsers with high performance and zero allocations, thanks to the Span type.\nIt makes parsing easier for developers by 
automating the non-essential code, which allows you to focus on the essentials of mapping.\n\n## 🏆 2nd place in [The fastest CSV parser in .NET](https://www.joelverhagen.com/blog/2020/12/fastest-net-csv-parsers) blog post\n\nEven though the focus of this library is mapping data to objects (classes, structs, etc.), it achieved an excellent result in the blog's benchmark, which tested how fast libraries can transform a CSV row into an array of strings. We got 1st place in that test, parsing a 1-million-line file in 826 ms.\n\n## RecordParser is a Zero Allocation Writer/Reader Parser for .NET Core\n\n1. It supports .NET 6, 7, 8 and .NET Standard 2.1\n2. It supports parsing individual records as well as [whole files](#file-processing---read)\n3. It has minimal heap allocations because it makes intensive use of the [Span](https://docs.microsoft.com/en-us/archive/msdn-magazine/2018/january/csharp-all-about-span-exploring-a-new-net-mainstay) type, a .NET type designed for high performance and reduced memory allocations [(see benchmark)](/Benchmark.md)\n4. It is even more performant because the relevant code is generated using [expression trees](https://docs.microsoft.com/dotnet/csharp/expression-trees), which, once compiled, are as fast as handwritten code\n5. It supports parsing ANY type: classes, structs, records, arrays, tuples, etc.\n6. It supports mapping values to properties, fields, indexers, etc.\n7. It does not do [boxing](https://docs.microsoft.com/dotnet/csharp/programming-guide/types/boxing-and-unboxing) for structs.\n8. It is flexible: you can choose the most convenient way to configure each of your parsers: indexed or sequential configuration\n9. It is extensible: you can totally customize your parsing with lambdas/delegates\n10. It is even more extensible because you can easily create extension methods that wrap custom mappings\n11. It is efficient: you can take advantage of multiple cores with parallel processing to speed up parsing\n12. 
It is not intrusive: all mapping configuration is done outside of the mapped type, keeping your classes with minimal dependencies and low coupling  \n13. It provides a clean API with familiar methods: Parse, TryParse and TryFormat\n14. It is easily configured with a builder object, even programmatically, because it does not require defining a class each time you want to define a parser\n15. Compliant with the [RFC 4180](https://www.ietf.org/rfc/rfc4180.txt) standard\n\n## Benchmark\n\nLibraries always claim to have great performance, but how often do they show you a benchmark comparing themselves with other libraries? \nCheck the [benchmark page](/Benchmark.md) to see RecordParser comparisons. If one is missing, a PR is welcome.\n\nThird Party Benchmarks\n- [The fastest CSV parser in .NET](https://www.joelverhagen.com/blog/2020/12/fastest-net-csv-parsers)\n- [Sylvan Benchmarks](https://github.com/MarkPflug/Benchmarks)\n- [Fastest CSV parser in C sharp](https://github.com/mohammadeunus/Fastest-CSV-parser-in-C-sharp)\n\n## Currently there are parsers for 2 record formats: \n1. Fixed length, common in positional/flat files, e.g. financial services, mainframe use, etc.\n    * [Reader](#fixed-length-reader)\n    * [Writer](#fixed-length-writer)\n2. Variable length, common in delimited files, e.g. CSV, TSV files, etc.\n    * [Reader](#variable-length-reader)\n    * [Writer](#variable-length-writer)\n  \n### Custom Converters\n1. Readers\n    * [Default Type Convert](#default-type-convert---reader) *\n    * [Custom Property Convert](#custom-property-convert---reader) * \n2. Writers\n    * [Default Type Convert](#default-type-convert---writer)\n    * [Custom Property Convert](#custom-property-convert---writer)\n\n*ㅤyou can use a \"string pool\" (a function that converts a `ReadOnlySpan\u003cchar\u003e` to `string`) to avoid creating multiple instances of strings with the same content. This optimization is useful when there are a lot of repeated string values present. 
In this scenario, it may reduce allocated memory and speed up processing time.   \n\n### Parsing Files\n1. [Readers](#file-processing---read)\n2. [Writers](#file-processing---write)\n\nNOTE: MOST EXAMPLES USE TUPLES FOR SIMPLICITY. THE PARSER ACTUALLY WORKS FOR ANY TYPE (CLASSES, STRUCTS, RECORDS, ARRAYS, TUPLES, ETC)\n\n## Fixed Length Reader\nThere are 2 flavors for mapping: indexed or sequential.  \n\nIndexed is useful when you want to map columns by their position: start/length. \n\n```csharp\n[Fact]\npublic void Given_value_using_standard_format_should_parse_without_extra_configuration()\n{\n    var reader = new FixedLengthReaderBuilder\u003c(string Name, DateTime Birthday, decimal Money)\u003e()\n        .Map(x =\u003e x.Name, startIndex: 0, length: 11)\n        .Map(x =\u003e x.Birthday, 12, 10)\n        .Map(x =\u003e x.Money, 23, 7)\n        .Build();\n\n    var result = reader.Parse(\"foo bar baz 2020.05.23 0123.45\");\n\n    result.Should().BeEquivalentTo((Name: \"foo bar baz\",\n                                    Birthday: new DateTime(2020, 05, 23),\n                                    Money: 123.45M));\n}\n```\nSequential is useful when you want to map columns by their order, so you just need to specify the lengths.\n\n```csharp\n[Fact]\npublic void Given_value_using_standard_format_should_parse_without_extra_configuration()\n{\n    var reader = new FixedLengthReaderSequentialBuilder\u003c(string Name, DateTime Birthday, decimal Money)\u003e()\n        .Map(x =\u003e x.Name, length: 11)\n        .Skip(1)\n        .Map(x =\u003e x.Birthday, 10)\n        .Skip(1)\n        .Map(x =\u003e x.Money, 7)\n        .Build();\n\n    var result = reader.Parse(\"foo bar baz 2020.05.23 0123.45\");\n\n    result.Should().BeEquivalentTo((Name: \"foo bar baz\",\n                                    Birthday: new DateTime(2020, 05, 23),\n                                    Money: 123.45M));\n}\n```\n\n## Variable Length Reader\nThere are 2 flavors for mapping: indexed or sequential.  \n\n
Indexed is useful when you want to map columns by their indexes. \n\n```csharp\n[Fact]\npublic void Given_value_using_standard_format_should_parse_without_extra_configuration()\n{\n    var reader = new VariableLengthReaderBuilder\u003c(string Name, DateTime Birthday, decimal Money, Color Color)\u003e()\n        .Map(x =\u003e x.Name, indexColumn: 0)\n        .Map(x =\u003e x.Birthday, 1)\n        .Map(x =\u003e x.Money, 2)\n        .Map(x =\u003e x.Color, 3)\n        .Build(\";\");\n  \n    var result = reader.Parse(\"foo bar baz ; 2020.05.23 ; 0123.45; LightBlue\");\n  \n    result.Should().BeEquivalentTo((Name: \"foo bar baz\",\n                                    Birthday: new DateTime(2020, 05, 23),\n                                    Money: 123.45M,\n                                    Color: Color.LightBlue));\n}\n```\n\nSequential is useful when you want to map columns by their order. \n\n```csharp\n[Fact]\npublic void Given_ignored_columns_and_value_using_standard_format_should_parse_without_extra_configuration()\n{\n    var reader = new VariableLengthReaderSequentialBuilder\u003c(string Name, DateTime Birthday, decimal Money)\u003e()\n        .Map(x =\u003e x.Name)\n        .Skip(1)\n        .Map(x =\u003e x.Birthday)\n        .Skip(2)\n        .Map(x =\u003e x.Money)\n        .Build(\";\");\n  \n    var result = reader.Parse(\"foo bar baz ; IGNORE; 2020.05.23 ; IGNORE ; IGNORE ; 0123.45\");\n  \n    result.Should().BeEquivalentTo((Name: \"foo bar baz\",\n                                    Birthday: new DateTime(2020, 05, 23),\n                                    Money: 123.45M));\n}\n```\n### Default Type Convert - Reader\n\nYou can define default converters for a type if you have a custom format.  \nThe following example specifies that all decimal values will be divided by 100 before assigning,  \nand that all dates will be parsed in `ddMMyyyy` format.  \nThis feature is available for both fixed and variable length.  
\n\n```csharp\n[Fact]\npublic void Given_types_with_custom_format_should_allow_define_default_parser_for_type()\n{\n    var reader = new FixedLengthReaderBuilder\u003c(decimal Balance, DateTime Date, decimal Debit)\u003e()\n        .Map(x =\u003e x.Balance, 0, 12)\n        .Map(x =\u003e x.Date, 13, 8)\n        .Map(x =\u003e x.Debit, 22, 6)\n        .DefaultTypeConvert(value =\u003e decimal.Parse(value) / 100)\n        .DefaultTypeConvert(value =\u003e DateTime.ParseExact(value, \"ddMMyyyy\", null))\n        .Build();\n\n    var result = reader.Parse(\"012345678901 23052020 012345\");\n\n    result.Should().BeEquivalentTo((Balance: 0123456789.01M,\n                                    Date: new DateTime(2020, 05, 23),\n                                    Debit: 123.45M));\n}\n```\n### Custom Property Convert - Reader\n\nYou can define a custom converter for a field/property.  \nCustom converters take priority when a default type convert is also defined.  \nThis feature is available for both fixed and variable length.  \n\n```csharp\n[Fact]\npublic void Given_members_with_custom_format_should_use_custom_parser()\n{\n    var reader = new VariableLengthReaderBuilder\u003c(int Age, int MotherAge, int FatherAge)\u003e()\n        .Map(x =\u003e x.Age, 0)\n        .Map(x =\u003e x.MotherAge, 1, value =\u003e int.Parse(value) + 3)\n        .Map(x =\u003e x.FatherAge, 2)\n        .Build(\";\");\n\n    var result = reader.Parse(\" 15 ; 40 ; 50 \");\n\n    result.Should().BeEquivalentTo((Age: 15,\n                                    MotherAge: 43,\n                                    FatherAge: 50));\n}\n```\n### Nested Properties Mapping - Reader\n\nJust like regular properties, you can also configure mapping for nested properties.  \nNested objects are created only if they are mapped, which avoids stack overflow problems.  \nThis feature is available for both fixed and variable length.  
\n \n```csharp\n[Fact]\npublic void Given_nested_mapped_property_should_create_nested_instance_to_parse()\n{\n    var reader = new VariableLengthReaderBuilder\u003cPerson\u003e()\n        .Map(x =\u003e x.BirthDay, 0)\n        .Map(x =\u003e x.Name, 1)\n        .Map(x =\u003e x.Mother.BirthDay, 2)\n        .Map(x =\u003e x.Mother.Name, 3)\n        .Build(\";\");\n\n    var result = reader.Parse(\"2020.05.23 ; son name ; 1980.01.15 ; mother name\");\n\n    result.Should().BeEquivalentTo(new Person\n    {\n        BirthDay = new DateTime(2020, 05, 23),\n        Name = \"son name\",\n        Mother = new Person\n        {\n            BirthDay = new DateTime(1980, 01, 15),\n            Name = \"mother name\",\n        }\n    });\n}\n```\n\n## Fixed Length Writer\nThere are 2 flavors for mapping: indexed or sequential.  \n\nBoth indexed and sequential builders accept the following optional parameters in `Map` methods: \n- format\n- padding direction \n- padding character\n\nIndexed is useful when you want to map columns by their position: start/length. 
\n\n```csharp\n[Fact]\npublic void Given_value_using_standard_format_should_parse_without_extra_configuration()\n{\n    // Arrange\n\n    var writer = new FixedLengthWriterBuilder\u003c(string Name, DateTime Birthday, decimal Money)\u003e()\n        .Map(x =\u003e x.Name, startIndex: 0, length: 12)\n        .Map(x =\u003e x.Birthday, 12, 11, \"yyyy.MM.dd\", paddingChar: ' ')\n        .Map(x =\u003e x.Money, 23, 7, precision: 2)\n        .Build();\n\n    var instance = (Name: \"foo bar baz\",\n                    Birthday: new DateTime(2020, 05, 23),\n                    Money: 01234.567M);\n\n    // create buffer with 50 positions, all set to white space by default\n    Span\u003cchar\u003e destination = Enumerable.Repeat(element: ' ', count: 50).ToArray();\n\n    // Act\n\n    var success = writer.TryFormat(instance, destination, out var charsWritten);\n\n    // Assert\n\n    success.Should().BeTrue();\n\n    var result = destination.Slice(0, charsWritten);\n\n    result.Should().Be(\"foo bar baz 2020.05.23 0123456\");\n}\n```\n
Sequential is useful when you want to map columns by their order, so you just need to specify the lengths.\n\n```csharp\n[Fact]\npublic void Given_value_using_standard_format_should_parse_without_extra_configuration()\n{\n    // Arrange\n\n    var writer = new FixedLengthWriterSequentialBuilder\u003c(string Name, DateTime Birthday, decimal Money)\u003e()\n        .Map(x =\u003e x.Name, length: 11)\n        .Skip(1)\n        .Map(x =\u003e x.Birthday, 10, \"yyyy.MM.dd\")\n        .Skip(1)\n        .Map(x =\u003e x.Money, 7, precision: 2)\n        .Build();\n\n    var instance = (Name: \"foo bar baz\",\n                    Birthday: new DateTime(2020, 05, 23),\n                    Money: 01234.567M);\n\n    // create buffer with 50 positions, all set to white space by default\n    Span\u003cchar\u003e destination = Enumerable.Repeat(element: ' ', count: 50).ToArray();\n\n    // Act\n\n    var success = writer.TryFormat(instance, destination, out var charsWritten);\n\n    // Assert\n\n    success.Should().BeTrue();\n\n    var result = destination.Slice(0, charsWritten);\n\n    result.Should().Be(\"foo bar baz 2020.05.23 0123456\");\n}\n```\n\n## Variable Length Writer\nThere are 2 flavors for mapping: indexed or sequential.  \n\nBoth indexed and sequential builders accept an optional format parameter in the `Map` method.\n\nIndexed is useful when you want to map columns by their indexes. \n\n```csharp\n[Fact]\npublic void Given_value_using_standard_format_should_parse_without_extra_configuration()\n{\n    // Arrange \n\n    var writer = new VariableLengthWriterBuilder\u003c(string Name, DateTime Birthday, decimal Money, Color Color)\u003e()\n        .Map(x =\u003e x.Name, indexColumn: 0)\n        .Map(x =\u003e x.Birthday, 1, \"yyyy.MM.dd\")\n        .Map(x =\u003e x.Money, 2)\n        .Map(x =\u003e x.Color, 3)\n        .Build(\" ; \");\n\n    var instance = (\"foo bar baz\", new DateTime(2020, 05, 23), 0123.45M, Color.LightBlue);\n\n    Span\u003cchar\u003e destination = new char[100];\n\n    // Act\n\n    var success = writer.TryFormat(instance, destination, out var charsWritten);\n\n    // Assert\n\n    success.Should().BeTrue();\n\n    var result = destination.Slice(0, charsWritten);\n\n    result.Should().Be(\"foo bar baz ; 2020.05.23 ; 123.45 ; LightBlue\");\n}\n```\n\nSequential is useful when you want to map columns by their order. 
\n\n```csharp\n[Fact]\npublic void Given_value_using_standard_format_should_parse_without_extra_configuration()\n{\n    // Arrange \n\n    var writer = new VariableLengthWriterSequentialBuilder\u003c(string Name, DateTime Birthday, decimal Money)\u003e()\n        .Map(x =\u003e x.Name)\n        .Skip(1)\n        .Map(x =\u003e x.Birthday, \"yyyy.MM.dd\")\n        .Map(x =\u003e x.Money)\n        .Build(\" ; \");\n\n    var instance = (\"foo bar baz\", new DateTime(2020, 05, 23), 0123.45M);\n\n    Span\u003cchar\u003e destination = new char[100];\n\n    // Act\n\n    var success = writer.TryFormat(instance, destination, out var charsWritten);\n\n    // Assert\n\n    success.Should().BeTrue();\n\n    var result = destination.Slice(0, charsWritten);\n\n    result.Should().Be(\"foo bar baz ;  ; 2020.05.23 ; 123.45\");\n}\n```\n### Default Type Convert - Writer\n\nYou can define default converters for a type if you have a custom format.  \nThe following example specifies that all decimal values will be multiplied by 100 before writing (precision 2),  \nand that all dates will be written in `ddMMyyyy` format.  
\nThis feature is available for both fixed and variable length.\n\n```csharp\n[Fact]\npublic void Given_types_with_custom_format_should_allow_define_default_parser_for_type()\n{\n    // Arrange\n\n    var writer = new FixedLengthWriterBuilder\u003c(decimal Balance, DateTime Date, decimal Debit)\u003e()\n        .Map(x =\u003e x.Balance, 0, 12, padding: Padding.Left, paddingChar: '0')\n        .Map(x =\u003e x.Date, 13, 8)\n        .Map(x =\u003e x.Debit, 22, 6, padding: Padding.Left, paddingChar: '0')\n        .DefaultTypeConvert\u003cdecimal\u003e((span, value) =\u003e (((long)(value * 100)).TryFormat(span, out var written), written))\n        .DefaultTypeConvert\u003cDateTime\u003e((span, value) =\u003e (value.TryFormat(span, out var written, \"ddMMyyyy\"), written))\n        .Build();\n\n    var instance = (Balance: 123456789.01M,\n                    Date: new DateTime(2020, 05, 23),\n                    Debit: 123.45M);\n\n    // create buffer with 50 positions, all set to white space by default\n    Span\u003cchar\u003e destination = Enumerable.Repeat(element: ' ', count: 50).ToArray();\n\n    // Act\n\n    var success = writer.TryFormat(instance, destination, out var charsWritten);\n\n    // Assert\n\n    success.Should().BeTrue();\n\n    var result = destination.Slice(0, charsWritten);\n\n    result.Should().Be(\"012345678901 23052020 012345\");\n}\n```\n### Custom Property Convert - Writer\n\nYou can define a custom converter for a field/property.  \nCustom converters take priority when a default type convert is also defined.  \nThis feature is available for both fixed and variable length.  
\n\n```csharp\n[Fact]\npublic void Given_specified_custom_parser_for_member_should_have_priority_over_custom_parser_for_type()\n{\n    // Arrange\n\n    var writer = new VariableLengthWriterBuilder\u003c(int Age, int MotherAge, int FatherAge)\u003e()\n        .Map(x =\u003e x.Age, 0)\n        .Map(x =\u003e x.MotherAge, 1, (span, value) =\u003e ((value + 2).TryFormat(span, out var written), written))\n        .Map(x =\u003e x.FatherAge, 2)\n        .Build(\" ; \");\n\n    var instance = (Age: 15,\n                    MotherAge: 40,\n                    FatherAge: 50);\n\n    Span\u003cchar\u003e destination = new char[50];\n\n    // Act\n\n    var success = writer.TryFormat(instance, destination, out var charsWritten);\n\n    // Assert\n\n    success.Should().BeTrue();\n\n    var result = destination.Slice(0, charsWritten);\n\n    result.Should().Be(\"15 ; 42 ; 50\");\n}\n```\n\n## File Processing - Read\n\nWhile version 1 of the library only allowed parsing individual records, version 2 introduced features to read/write records directly from/to files.\nThis functionality bridges the gap between the existing readers/writers and `TextReader`/`TextWriter`.\n\nOne awesome feature that makes RecordParser innovative and special is the ability to process files using the power of parallel programming!\nYou can take advantage of multiple cores with parallel processing to speed up reading and writing! \nYou can disable parallelism just by setting the feature flag to `false`, thus using sequential processing. 
\n\nJust import the namespace `RecordParser.Extensions` to get the extension methods for working with `TextReader` and `TextWriter`.\n\n### File reading for FixedLengthReader\n\n```csharp\nusing System;\nusing RecordParser.Builders.Reader;\nusing RecordParser.Extensions;\nusing System.IO;\n\nvar fileContent = \n    \"\"\"\n    01 123-456-789 00033.5\n    99 abc-def-ghk 00050.7\n    \"\"\";\n\n// I am using StringReader because in the example the content is a string\n// but could be StreamReader or any TextReader\nusing TextReader textReader = new StringReader(fileContent);\n\nvar reader = new FixedLengthReaderSequentialBuilder\u003cRecord\u003e()\n    .Map(x =\u003e x.Foo, 2)\n    .Skip(1)\n    .Map(x =\u003e x.Bar, 11)\n    .Skip(1)\n    .Map(x =\u003e x.Qux, 7)\n    .Build();\n\nvar readOptions = new FixedLengthReaderOptions\u003cRecord\u003e\n{\n    Parser = reader.Parse,\n    ParallelismOptions = new()\n    {\n        Enabled = true,\n        EnsureOriginalOrdering = true,\n        MaxDegreeOfParallelism = 4\n    }\n};\n\nvar records = textReader.ReadRecords(readOptions);\n\nforeach (var r in records)\n    Console.WriteLine(r);\n\npublic record class Record(int Foo, string Bar, decimal Qux);\n```\n\n
### File reading for VariableLengthReader\n\n```csharp\nusing System;\nusing RecordParser.Builders.Reader;\nusing RecordParser.Extensions;\nusing System.IO;\n\nvar fileContent = \n    \"\"\"\n    Foo,Bar,Qux\n    1,123456789,33\n    12,abc-def-ghk,50.7\n    \"\"\";\n\n// I am using StringReader because in the example the content is a string\n// but could be StreamReader or any TextReader\nusing TextReader textReader = new StringReader(fileContent);\n\nvar reader = new VariableLengthReaderBuilder\u003cRecord\u003e()\n    .Map(x =\u003e x.Foo, 0)\n    .Map(x =\u003e x.Bar, 1)\n    .Map(x =\u003e x.Qux, 2)\n    .Build(\",\");\n\nvar readOptions = new VariableLengthReaderOptions\n{\n    HasHeader = true,\n    ContainsQuotedFields = false,\n    ParallelismOptions = new()\n    {\n        Enabled = true,\n        EnsureOriginalOrdering = true,\n        MaxDegreeOfParallelism = 4\n    }\n};\n\nvar records = textReader.ReadRecords(reader, readOptions);\n\nforeach (var r in records)\n    Console.WriteLine(r);\n\npublic record class Record(int Foo, string Bar, decimal Qux);\n```\n\n### File reading for VariableLength Raw\n\nNote: only recommended when the user needs to receive each field as a `string`. \nOther methods that do not force `string` use `ReadOnlySpan\u003cchar\u003e`, which speeds up processing and reduces unnecessary allocations.\n\n```csharp\nusing System;\nusing RecordParser.Extensions;\nusing System.IO;\n\nvar fileContent = \"\"\"\n    A,B,C,D\n    1,2,3,X\n    5,6,7,Y\n    9,10,11,Z\n    \"\"\";\n\n// I am using StringReader because in the example the content is a string\n// but could be StreamReader or any TextReader\nusing TextReader textReader = new StringReader(fileContent);\n\nvar readOptions = new VariableLengthReaderRawOptions\n{\n    HasHeader = true,\n    ContainsQuotedFields = false,\n    ColumnCount = 4,\n    Separator = \",\",\n    ParallelismOptions = new()\n    {\n        Enabled = true,\n        EnsureOriginalOrdering = true,\n        MaxDegreeOfParallelism = 4\n    }\n};\n\n// getField is a callback of type Func\u003cint, string\u003e:\n// it receives the column index and returns its content as a string\nvar records = textReader.ReadRecordsRaw(readOptions, getField =\u003e\n{\n    var record = new\n    {\n        A = getField(0),\n        B = getField(1),\n        C = getField(2),\n        D = getField(3)\n    };\n    return record;\n});\n\nforeach (var r in records)\n    Console.WriteLine(r);\n```\n\n## File Processing - Write\n\nThis feature is agnostic to the builder type since it just receives the `TryFormat` method delegate.  
\nSo you can use it with instances of both `FixedLengthWriter` and `VariableLengthWriter`.\n\n```csharp\nusing RecordParser.Builders.Writer;\nusing RecordParser.Extensions;\nusing System.IO;\n\nvar writer = new VariableLengthWriterBuilder\u003cRecord\u003e()\n    .Map(x =\u003e x.Foo, 0)\n    .Map(x =\u003e x.Bar, 1)\n    .Map(x =\u003e x.Qux, 2)\n    .Build(\",\");\n\n// I am using StreamWriter + MemoryStream in the example but could be any TextWriter\nusing var memory = new MemoryStream();\nusing TextWriter textWriter = new StreamWriter(memory);\n\nvar parallelOptions = new ParallelismOptions()\n{\n    Enabled = true,\n    EnsureOriginalOrdering = true,\n    MaxDegreeOfParallelism = 4,\n};\n\nvar records = new []\n{\n    new Record(12, \"123456789\", 33),\n    new Record(34, \"abc-def-ghk\", 50.7M)\n};\n\ntextWriter.WriteRecords(records, writer.TryFormat, parallelOptions);\n    \npublic record class Record(int Foo, string Bar, decimal Qux);\n```\n","funding_links":["https://github.com/sponsors/leandromoh"],"categories":["Misc","Audio"],"sub_categories":["GUI - other"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fleandromoh%2Frecordparser","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fleandromoh%2Frecordparser","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fleandromoh%2Frecordparser/lists"}