{"id":24838763,"url":"https://github.com/milover/post","last_synced_at":"2025-03-26T04:25:42.746Z","repository":{"id":188702212,"uuid":"638743473","full_name":"Milover/post","owner":"Milover","description":"A program for processing structured data files in bulk","archived":false,"fork":false,"pushed_at":"2023-10-17T12:27:14.000Z","size":363,"stargazers_count":1,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"master","last_synced_at":"2025-01-31T06:24:58.851Z","etag":null,"topics":["cli","csv","latex","openfoam","postprocessing"],"latest_commit_sha":null,"homepage":"","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Milover.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-05-10T02:31:40.000Z","updated_at":"2024-01-08T10:33:17.000Z","dependencies_parsed_at":null,"dependency_job_id":"4499ac27-7479-4570-b7a4-e7b55377d6b5","html_url":"https://github.com/Milover/post","commit_stats":null,"previous_names":["milover/post"],"tags_count":2,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Milover%2Fpost","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Milover%2Fpost/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Milover%2Fpost/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Milover%2Fpost/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Milover","download_url":"https://codeload.github.com/Milover/post/ta
r.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":245587312,"owners_count":20639920,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cli","csv","latex","openfoam","postprocessing"],"created_at":"2025-01-31T06:21:45.158Z","updated_at":"2025-03-26T04:25:42.716Z","avatar_url":"https://github.com/Milover.png","language":"Go","readme":"# post\n\n`post` is a program for processing structured data files in bulk.\n\nIt was originally intended as an automation tool for generating [LaTeX][latex]\ngraphs from `functionObject` data generated by [OpenFOAM®][openfoam] simulations,\nbut has since evolved such that it can be used as a general structured data\nprocessor with optional graph generation support.\n\nIts primary use is processing and formatting data spread over multiple files\nand/or archives. 
The main benefit is that the entire process is defined\nthrough one or more YAML-formatted run files, so automating data processing\npipelines is fairly simple and no programming is necessary.\n\n## Contents\n\n- [Installation](#installation)\n- [CLI usage](#cli-usage)\n- [Run file structure](#run-file-structure)\n- [Input](#input)\n- [Processing](#processing)\n- [Output](#output)\n- [Graphing](#graphing)\n- [Templates](#templates)\n\n\n## Installation\n\nIf [Go][golang] is installed locally, the following command will compile and\ninstall the latest version of `post`:\n\n```shell\n$ go install github.com/Milover/post@latest\n```\n\nPrecompiled binaries for Linux, Windows and macOS (Apple silicon) are also\navailable under [releases][post-release].\n\nFinally, `post` can also be built from source, assuming [Go][golang] is\navailable locally, by running the following commands:\n\n```shell\n$ git clone https://github.com/Milover/post\n$ cd post\n$ go install\n```\n\n## CLI usage\n\nUsage:\n\n```\npost [run file] [flags]\npost [command]\n```\n\nAvailable Commands:\n\n```\ncompletion  Generate the autocompletion script for the specified shell\ngraphfile   Generate graph file stub(s)\nhelp        Help about any command\nrunfile     Generate a run file stub\n```\n\nFlags:\n\n```\n    --dry-run             check runfile syntax and exit\n-h, --help                help for post\n    --log-mem             log memory usage at the end of each pipeline\n    --no-graph            don't write or generate graphs\n    --no-graph-generate   don't generate graphs\n    --no-graph-write      don't write graph files\n    --no-output           don't output data\n    --no-process          don't process data\n    --only-graphs         only write and generate graphs, skip input, processing and output\n    --skip strings        a list of pipeline IDs to be skipped during processing\n-v, --verbose             verbose log output\n```\n\n## Run file structure\n\n`post` is controlled 
by a run file in YAML format, supplied as a CLI parameter.\nThe run file usually consists of a list of pipelines, each defining 4 sections:\n`input`, `process`, `output` and `graph`. The `input` section defines input\nfiles and formats from which data is read; the `process` section defines\noperations which are applied to the data; the `output` section defines how\nthe processed data will be output/stored; and the `graph` section defines\nhow the data will be graphed.\n\n\u003e Note: All file paths within the run file are evaluated using\n\u003e the run file's parent directory as the current working directory.\n\nAll sections are optional and can be omitted, defined by themselves, or\nas part of a pipeline. A special case is the `template` section,\nwhich *cannot* be defined as part of a pipeline.\nSee [Templates](#templates) for a breakdown of their use.\n\nA single pipeline has the following fields:\n\n```yaml\n- id:\n  input:\n    type:\n    fields:\n    type_spec:\n  process:\n    - type:\n      type_spec:\n  output:\n    - type:\n      type_spec:\n  graph:\n    type:\n    graphs:\n```\n\n- `id`: the pipeline tag, used to reference the pipeline on the CLI; optional\n- `input`: the input section\n    - `type`: input type; see [Input](#input) for type descriptions\n    - `fields`: field (column) names of the input data; optional\n    - `type_spec`: input type specific configuration\n- `process`: the process section\n    - `type`: process type; see [Processing](#processing) for type descriptions\n    - `type_spec`: process type specific configuration\n- `output`: the output section\n    - `type`: output type; see [Output](#output) for type descriptions\n    - `type_spec`: output type specific configuration\n- `graph`: the graph section\n    - `type`: graph type; see [Graphing](#graphing) for type descriptions\n    - `graphs`: a list of graph type specific graph configurations\n\nA simple run file example is shown below.\n\n```yaml\n- input:\n    type: dat\n 
   fields: [x, y]\n    type_spec:\n      file: 'xy.dat'\n  process:\n    - type: expression\n      type_spec:\n        expression: '100*y'\n        result: 'result'\n  output:\n    - type: csv\n      type_spec:\n        file: 'output/data.csv'\n  graph:\n    type: tex\n    graphs:\n      - name: xy.tex\n        directory: output\n        table_file: 'output/data.csv'\n        axes:\n          - x:\n              min: 0\n              max: 1\n              label: '$x$'\n            y:\n              min: 0\n              max: 100\n              label: '$100 y$'\n            tables:\n              - x_field: x\n                y_field: result\n                legend_entry: 'result'\n```\n\nThe example run file instructs `post` to do the following:\n\n1. read data from a DAT formatted file `xy.dat` and rename the fields (columns)\n   to `x` and `y`\n2. evaluate the expression `100*y` and store the result to a field named `result`\n3. output the data, now containing the fields `x`, `y` and `result`, to a\n   CSV formatted file `output/data.csv`; if the directory `output` does not\n   exist, it will be created\n4. generate a graph using TeX in the `output` directory, using `output/data.csv`\n   as the table (data) file, with `x` as the abscissa and `result` as the ordinate\n\nFor more examples see the [examples/](examples) directory.\n\nA generic run file stub, which can be a useful starting point, can be created\nby running:\n\n```shell\n$ post runfile\n```\n\n## Input\n\nThe following is a list of available input types and their descriptions\nalong with their run file configuration stubs:\n\n- [`archive`](#archive)\n- [`csv`](#csv)\n- [`dat`](#dat)\n- [`multiple`](#multiple)\n- [`ram`](#ram)\n- [`time-series`](#time-series)\n\n---\n\n#### `archive`\n\n`archive` reads input from an archive. The archive format is inferred from\nthe file name extension. The following archive formats are supported:\n`TAR`, `TAR-GZ`, `TAR-BZIP2`, `TAR-XZ`, `ZIP`. 
Note that `archive` input wraps\none or more input types, i.e., the `archive` configuration only specifies\nhow to read _some data_ from an archive; the wrapped input type reads the\nactual data.\n\nAnother important note is that the contents of the archive are stored in\nmemory the first time it is read, so if the same archive is used multiple\ntimes as an input source, it will be read from disk only once; each subsequent\nread will read directly from RAM. Hence it is beneficial to use the `archive`\ninput type when the data consists of a large number of input files,\ne.g., a large `time-series`.\n\n\u003e Warning: it is currently faster to read directly from the filesystem than\n\u003e through `archive` on most machines, due to a poorly optimized implementation\n\u003e of `archive`, so use with caution.\n\nThe `clear_after_read` flag can be used to clear *all* `archive` memory\nafter reading the data.\n\n```yaml\n  type: archive\n  type_spec:\n    file:                 # file path of the archive\n    clear_after_read:     # clear memory after reading; 'false' by default\n    format_spec:          # input type configuration, e.g., a CSV input type\n```\n\n#### `csv`\n\n`csv` reads from a CSV formatted file. If the file contains a header line\nthe `header` field should be set to `true` and the header column names will\nbe used as the field names for the data. If no header line is present the\n`header` field must be set to `false`.\n\n```yaml\n  type: csv\n  type_spec:\n    file:                 # file path of the CSV file\n    header:               # determines if the CSV file has a header; default 'true'\n    comment:              # character to denote comments; default '#'\n    delimiter:            # character to use as the field delimiter; default ','\n```\n\n#### `dat`\n\n`dat` reads from a white-space-separated-value file. 
The type and amount of\nwhite space between columns is irrelevant, as are leading and trailing white\nspaces, as long as the number of columns (non-white space fields) is\nconsistent in each row.\n\n```yaml\n  type: dat\n  type_spec:\n    file:                 # file path of the DAT file\n```\n\n#### `multiple`\n\n`multiple` is a wrapper for multiple input types. Data is read from\neach input type specified, and once all inputs have been read, the data from\neach input is merged into a single data instance containing all fields\n(columns) from all inputs. The number and type of input types specified are\narbitrary, but each input must yield data with the same number of rows.\n\n```yaml\n  type: multiple\n  type_spec:\n    format_specs:         # a list of input type configurations\n```\n\n#### `ram`\n\n`ram` reads data from an in-memory store. For the data to be read, it must\nhave been stored previously, e.g., a previous `output` section defines a `ram`\noutput.\n\nThe `clear_after_read` flag can be used to clear *all* `ram` memory\nafter reading the data.\n\n```yaml\n  type: ram\n  type_spec:\n    name:                 # key under which the data is stored\n    clear_after_read:     # clear memory after reading; 'false' by default\n```\n\n#### `time-series`\n\n`time-series` reads data from a time-series of structured data files in\nthe following format:\n\n ```\n .\n ├── 0.0\n │   ├── data_0.csv\n │   ├── data_1.dat\n │   └── ...\n ├── 0.1\n │   ├── data_0.csv\n │   ├── data_1.dat\n │   └── ...\n └── ...\n ```\n\nwhere each `data_*.*` file contains the data in some format at the moment in\ntime specified by the directory name.\nEach series dataset must be output into a different file, i.e., the\n`data_0.csv` files contain one dataset, `data_1.dat` another one, and so on.\n\n```yaml\n  type: time-series\n  type_spec:\n    file:                 # file name (base only) of the time-series data files\n    directory:            # path to the root directory of the 
time-series\n    time_name:            # the time field name; default is 'time'\n    format_spec:          # input type configuration, e.g., a CSV input type\n```\n\n## Processing\n\nThe following is a list of available processor types and their descriptions\nalong with their run file configuration stubs:\n\n- [`assert-equal`](#assert-equal)\n- [`average-cycle`](#average-cycle)\n- [`bin`](#bin)\n- [`expression`](#expression)\n- [`filter`](#filter)\n- [`regexp-rename`](#regexp-rename)\n- [`rename`](#rename)\n- [`resample`](#resample)\n- [`select`](#select)\n- [`sort`](#sort)\n\n---\n\n#### `assert-equal`\n\n`assert-equal` asserts that all `fields` are equal element-wise,\nup to `precision`. All field types must be the same.\nIf all fields are equal then no error is returned, otherwise\na non-nil error is returned, i.e., the program will stop execution.\nThe data remains unchanged in either case.\n\n```yaml\n  type: assert-equal\n  type_spec:\n    fields:               # field names for which to assert equality\n    precision:            # optional; machine precision by default\n```\n\n#### `average-cycle`\n\n`average-cycle` mutates the data by computing the ensemble average of a cycle\nfor all numeric fields. The ensemble average is computed as:\n\n```\nΦ(ωt) = 1/N Σ ϕ[ω(t + jT)], j = 0...N-1\n```\n\nwhere `ϕ` is the slice of values to be averaged, `ω` the angular velocity,\n`t` the time and `T` the period.\n\nThe resulting data will contain the cycle average of all numeric fields and a\ntime field (named `time`), containing times for each row of cycle average\ndata, in the range (0, T]. 
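The per-sample form of this average can be sketched in plain Go as follows. This is a minimal illustration of the formula above, not the actual implementation; the function name and signature are hypothetical, and it assumes the samples divide evenly into cycles:

```go
package main

import "fmt"

// averageCycle is a hypothetical sketch of the ensemble average:
// avg[i] = 1/N * Σ phi[i + j*period], j = 0...N-1,
// where period = len(phi) / nCycles.
// It assumes len(phi) is divisible by nCycles.
func averageCycle(phi []float64, nCycles int) []float64 {
	period := len(phi) / nCycles
	avg := make([]float64, period)
	for j := 0; j < nCycles; j++ {
		for i := 0; i < period; i++ {
			avg[i] += phi[j*period+i]
		}
	}
	for i := range avg {
		avg[i] /= float64(nCycles)
	}
	return avg
}

func main() {
	// two cycles of a 2-sample signal: [1, 3] and [2, 4]
	fmt.Println(averageCycle([]float64{1, 3, 2, 4}, 2)) // [1.5 3.5]
}
```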
The time field will be the last field (column),\nwhile the order of the other fields is preserved.\n\nTime matching can be optionally specified, as well as the match precision,\nby setting `time_field` and `time_precision` respectively in the configuration.\nThis checks whether the time (step) is uniform and whether there is a\nmismatch between the expected time of the averaged value, as per the number\nof cycles defined in the configuration and the supplied data, and the read time.\nThe read time is the one read from the field named `time_field`.\nNote that in this case the output time field will be named after `time_field`,\ni.e., the time field name will remain unchanged.\n\n\u003e Warning: It is assumed that data is sorted chronologically, i.e.,\n\u003e by ascending time, even if `time_field` is not specified or does not exist.\n\n```yaml\n  type: average-cycle\n  type_spec:\n    n_cycles:             # number of cycles to average over\n    time_field:           # time field name; optional\n    time_precision:       # time-matching precision; optional\n```\n\n#### `bin`\n\n`bin` mutates the data by dividing all numeric fields into `n_bins`\nand setting the field values to bin-mean-values.\n\n\u003e Warning: Each bin _must_ contain the same number of field values,\n\u003e i.e., `len(field) % n_bins == 0`.\n\u003e This might change in the future.\n\n```yaml\n  type: bin\n  type_spec:\n    n_bins:               # number of bins into which the data is divided\n```\n\n#### `expression`\n\n`expression` evaluates an arithmetic expression and appends the resulting\nfield (column) to the data. The expression operands can be scalar values or\nfields (columns) present in the data, which are referenced by their names.\nNote that at least one of the operands must be a field present in the data.\n\nEach operation involving a field is applied element-wise. 
The following\narithmetic operations are supported: `+` `-` `*` `/` `**`\n\n```yaml\n  type: expression\n  type_spec:\n    expression:           # an arithmetic expression\n    result:               # field name of the resulting field\n```\n\n#### `filter`\n\n`filter` mutates the data by applying a set of row filters as defined\nin the configuration. The filter behaviour is described by providing\nthe field name `field` to which the filter is applied, the comparison\noperator `op` and a comparison value `value`. Rows satisfying the comparison\nare kept, while others are discarded. The following comparison operators\nare supported: `==` `!=` `\u003e` `\u003e=` `\u003c` `\u003c=`\n\nAll defined filters are applied at the same time. The way in which they\nare aggregated is controlled by setting the `aggregation` field in\nthe configuration; `and` and `or` aggregation modes are available.\nThe `or` mode is the default if the `aggregation` field is unset.\n\n```yaml\n  type: filter\n  type_spec:\n    aggregation:          # aggregation mode; defaults to 'or'\n    filters:\n      - field:            # field name to which the filter is applied\n        op:               # filtering operation\n        value:            # comparison value\n```\n\n#### `regexp-rename`\n\n`regexp-rename` mutates the data by replacing field names which\nmatch the regular expression `src` with `repl`.\nSee [https://golang.org/s/re2syntax](https://golang.org/s/re2syntax) for the\nregexp syntax description.\n\n```yaml\n  type: regexp-rename\n  type_spec:\n    src:                  # regular expression to use in matching\n    repl:                 # replacement string\n```\n\n#### `rename`\n\n`rename` mutates the data by renaming fields (columns).\n\n```yaml\n  type: rename\n  type_spec:\n    fields:               # map of old-to-new name key-value pairs\n```\n\n#### `resample`\n\n`resample` mutates the data by linearly interpolating all numeric fields,\nsuch that the resulting fields have `n_points` 
values, at uniformly\ndistributed values of the field `x_field`.\nIf `x_field` is not set, a uniform resampling is performed, i.e., as if\nthe values of each field were given at a uniformly distributed x,\nwhere x ∈ [0,1].\nThe first and last values of a field are preserved in the resampled field.\n\n```yaml\n  type: resample\n  type_spec:\n    n_points:             # number of resampling points\n    x_field:              # field name of the independent variable; optional\n```\n\n#### `select`\n\n`select` mutates the data by keeping or removing `fields` (columns).\nIf `remove` is true, the fields are removed, otherwise only the selected\nfields are kept in the order specified.\n\n```yaml\n  type: select\n  type_spec:\n    fields:               # a list of field names\n    remove:               # remove/keep selected fields; 'false' by default\n```\n\n#### `sort`\n\n`sort` sorts the data by `field` in ascending order, or in descending order\nif `descending == true`. The processor takes a list of fields and\norderings and applies them in sequence. The order in which the fields\nare listed defines the sorting precedence, hence it is possible that some\nconstraints are not satisfied.\n\n```yaml\n  type: sort\n  type_spec:\n    - field:              # field by which to sort\n      descending:         # sort in descending order; 'false' by default\n    - field:\n      descending:\n```\n\n## Output\n\nThe following is a list of available output types and their descriptions\nalong with their run file configuration stubs.\n\n#### `csv`\n\n`csv` writes CSV formatted data to a file. 
If `header` is set to `true`\nthe file will contain a header line with the field names as the column names.\nNote that, if necessary, directories will be created so as to ensure that\n`file` specifies a valid path.\n\n```yaml\n  type: csv\n  type_spec:\n    file:                 # file path of the CSV file\n    header:               # determines if the CSV file has a header; default 'true'\n    comment:              # character to denote comments; default '#'\n    delimiter:            # character to use as the field delimiter; default ','\n```\n\n#### `ram`\n\n`ram` stores data in an in-memory store. Once data is stored, any subsequent\n`ram` input type can access the data.\n\n```yaml\n  type: ram\n  type_spec:\n    name:                 # key under which the data is stored\n```\n\n## Graphing\n\nOnly TeX graphing, via `tikz` and `pgfplots`, is supported currently. Hence\nfor the graph generation to work, TeX needs to be installed along with any\ndependent packages.\n\nGraphing consists of two steps: generating TeX graph files from templates, and\ngenerating the graphs from TeX files. To see the default template files run:\n\n```shell\n$ post graphfile --outdir=templates\n```\n\nThe templates can be user supplied by setting `template_directory` and\n`template_main` (if necessary) in the run file configuration. 
The templates\nuse [Go][golang] template syntax, see the [package documentation][godoc-text-template]\nfor more information.\n\nA `tex` graph configuration stub is given below, note that several fields expect\nraw TeX as input.\n\n```yaml\ntype: tex\ngraphs:\n  - name:                   # used as a basename for all graph related files\n    directory:              # optional; output directory name, created if not present\n    table_file:             # optional; needed if 'tables.table_file' is undefined\n    template_directory:     # optional; template directory\n    template_main:          # optional; root template file name\n    template_delims:        # optional; go template delimiters; ['__{','}__'] by default\n    tex_command:            # optional; 'pdflatex' by default\n    axes:\n      - x:\n          min:\n          max:\n          label:            # raw TeX\n        y:\n          min:\n          max:\n          label:            # raw TeX\n        width:              # optional; raw TeX, axis width option\n        height:             # optional; raw TeX, axis height option\n        legend_style:       # optional; raw TeX, axis legend style option\n        raw_options:        # optional; raw TeX, if defined all other options are ignored\n        tables:\n          - x_field:\n            y_field:\n            legend_entry:   # raw TeX\n            col_sep:        # optional; 'comma' by default\n            table_file:     # optional; needed if 'graphs.table_file' is undefined\n```\n\n## Templates\n\nTemplates reduce boilerplate when it is necessary to process different sources\nof data but use the same processing pipeline.\n\nFor example, consider the case when we would like to extract data at specific\ntimes from some time series. 
The run file would look something like this:\n\n```yaml\n- input: # extract data at t = 0.1\n    type: dat\n    fields: [time, value]\n    type_spec:\n      file: 'data.dat'\n  process:\n    - type: filter\n      type_spec:\n        filters:\n          - field: 'time'\n            op: '=='\n            value: 0.1\n  output:\n    - type: csv\n      type_spec:\n        file: 'output/data_0.1.csv'\n\n- input: # extract data at t = 0.2\n    ...\n\n- input: # extract data at t = 0.3\n    ...\n```\n\nA new pipeline has to be defined for each time we would like to extract since\nthe `filter` uses a different time value and the extracted data is written\nto a different file each time. This is both cumbersome and error prone.\nSo we use a `template` to simplify this:\n\n```yaml\n- template:\n    params:\n      t: [0.1, 0.2, 0.3]\n    src: |\n      - input:\n          type: dat\n          fields: [time, value]\n          type_spec:\n            file: 'data.dat'\n        process:\n          - type: filter\n            type_spec:\n              filters:\n                - field: 'time'\n                  op: '=='\n                  value: {{ .t }}\n        output:\n          - type: csv\n            type_spec:\n              file: 'output/data_{{ .t }}.csv'\n```\n\nNow we only have to define the pipeline once and, in this case,\nparametrize it by time.\n\nA `template` consists of the following fields:\n\n```yaml\n- template:\n    params:               # a map of parameters used in the template\n    src:                  # YAML formatted string for the pipeline to template\n```\n\nFor a `template` definition to be valid, the following must be true:\n\n- the `template` must be defined as part of a sequence (`!!seq`)\n- the definition can contain only one mapping,\n  which must have the tag `template`, i.e., it cannot be defined as part\n  of a pipeline\n\nThe `params` field is a map of parameters and their values. 
The values can\nbe of any type, including a mapping, but must be given as a list, even if\nonly one value is given. Here are some examples:\n\n```yaml\n- template:\n    params:\n      o: [0]              # a single integer, but must be a list\n      p: [0, 1, 2]        # a list of integers\n      q: ['ab', 'cd']     # a list of strings\n      r:                  # a list of maps\n        - tag: a\n          val: 0\n        - tag: b\n          val: 1\n```\n\nThe `src` field is a string containing the pipeline template, using\n[Go template syntax][godoc-text-template], i.e., the string within `src` is\nexpanded directly into the run file using parameter values defined in `params`.\n\nIf multiple parameters are defined, the `template` is executed for all\ncombinations of parameters. For example, the following `template`:\n\n```yaml\n- template:\n    params:\n      typ: [dat, csv]\n      ind: [0, 1]\n    src: |\n      input:\n        type: ram\n        type_spec:\n          name: 'data_{{ .ind }}_{{ .typ }}'\n      output:\n        - type: {{ .typ }}\n          type_spec:\n            file: 'data_{{ .ind }}.{{ .typ }}'\n```\n\nwill generate 4 files: `data_0.dat`, `data_1.dat`, `data_0.csv` and `data_1.csv`,\nalthough not necessarily in that order since the execution order of\nmulti-parameter templates is undefined, and so shouldn't be relied upon.\n\n\u003e Warning: YAML aliases currently *cannot* be used within the `src` field.\n\u003e This might change in the future.\n\nSee the [examples/](examples) directory for more usage examples.\n\n[godoc-text-template]: https://pkg.go.dev/text/template\n[golang]: https://go.dev\n[latex]: https://www.latex-project.org/\n[openfoam]: https://www.openfoam.com\n[post-release]: 
https://github.com/Milover/post/releases\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmilover%2Fpost","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmilover%2Fpost","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmilover%2Fpost/lists"}