Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/bbrtj/perl-validator-benchmark
Benchmark of Perl validation frameworks. Example results -> https://bbrtj.eu/blog/article/validation-frameworks-benchmark
- Host: GitHub
- URL: https://github.com/bbrtj/perl-validator-benchmark
- Owner: bbrtj
- Created: 2022-01-28T19:23:09.000Z (almost 3 years ago)
- Default Branch: master
- Last Pushed: 2023-08-24T16:06:29.000Z (over 1 year ago)
- Last Synced: 2025-01-07T01:58:21.021Z (20 days ago)
- Topics: benchmark, perl, perl5, validation
- Language: Perl
- Homepage:
- Size: 222 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Benchmarks for form validators
These benchmarks measure:
- how fast input hashes matching the specification can be validated at runtime

These benchmarks don't (currently?) measure:
- validation failure speed
- compilation time overhead

Object construction is part of the benchmark, but can be skipped if a given validator implementation does not require it.
## Example results
https://bbrtj.eu/blog/article/validation-frameworks-benchmark
## Running the benchmark
First, you have to install Carton for your perl and use it to install dependencies locally:
```
cpan Carton
carton install
```

The benchmark is run one case at a time:
```
carton exec ./benchmark.pl
```

Results can optionally be filtered through the `format.pl` script, which reduces the horizontal size of the output:
```
carton exec ./benchmark.pl | ./format.pl
```

## Cases
Here is the list of current benchmark cases with rationales behind them:
### single_field
We pass a single field `a`, which is required and can be any scalar value.
*Rationale*: we measure the runtime overhead each system has. The validation rule here is as easy as it can be on purpose. Results of this benchmark can be used to decide whether a validator should be used for simple data validation cases that will run very frequently.
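For illustration only (this snippet is not part of the repository), the input for this case and the simplest hand-rolled check it implies might look like:
```
use strict;
use warnings;

# Hypothetical single_field input: one required field `a`, any scalar.
my $input = { a => 'any scalar value' };

# Hand-rolled baseline: `a` must exist and be a plain (non-reference) scalar.
my $valid = exists $input->{a} && !ref $input->{a};

print $valid ? "valid\n" : "invalid\n";
```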
### multiple_fields
We pass five fields `a`, `b`, `c`, `d`, `e`, all of which are required and strings.
*Rationale*: we can compare against the results of the previous single_field case and see how having multiple rules affects a validator's performance. A big drop in validation speed can indicate a poorly optimized system.
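As a hypothetical illustration (again, not code from the repository), the input and an equivalent hand-rolled check might look like:
```
use strict;
use warnings;

# Hypothetical multiple_fields input: five required string fields.
my $input = { a => 'foo', b => 'bar', c => 'baz', d => 'qux', e => 'quux' };

# Each field must be a defined, non-reference (string) value.
my $valid = !grep { !defined $input->{$_} || ref $input->{$_} } qw(a b c d e);

print $valid ? "valid\n" : "invalid\n";
```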
### array_of_objects
We pass a single field `a`, which is an array reference of hash references. There are 100 nested hashes in total. Each hash contains the keys `b` (a number) and `c` (a string).
*Rationale*: we measure how efficiently validators can crawl a nested structure and whether they can deal with a large amount of data. The amount of data can be configured by hand in `BenchmarkSetup` to see whether performance decreases linearly or exponentially. This data can be used to decide whether a validator is fit for larger volumes of data.
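For a sense of the data shape (a hypothetical sketch, not taken from `BenchmarkSetup`), the input could be generated like this:
```
use strict;
use warnings;

# 100 nested hashes under field `a`, each with a numeric `b` and a string `c`.
# The count mirrors the default described above; the repo configures it in BenchmarkSetup.
my $input = {
    a => [ map +{ b => $_, c => "item-$_" }, 1 .. 100 ],
};
```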
## Contributing
Contributions are welcome! If you wish to contribute a new validator, follow these steps (a rough sketch of a case file is shown after the list):
1. Add your validator module into `cpanfile`. Run `carton install` to update `cpanfile.snapshot`
2. Add a new benchmark runner in `lib/Utils.pm`. Name your benchmark `<Name>Bench`, `<Name>` being the module name without the `::` separators
3. Choose which benchmark case you would like to implement. Add your benchmark name to the `BenchmarkSetup.pm` file in that directory (in `sub participants`)
4. Create a file in that directory named `<Name>Bench.pm`. Implement the validation code in that file
5. Ensure your validator works, then create a pull request
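For orientation, a case file might look like the sketch below. This is an assumption about the interface, not code from the repository: the actual subroutine names and how the runner calls them are defined in `lib/Utils.pm`, and `My::Validator` stands in for whatever framework you are adding.
```
package MyValidatorBench;

# Hypothetical sketch only; the real interface is dictated by the
# runner in lib/Utils.pm and may differ.
use strict;
use warnings;
use My::Validator;   # placeholder for the framework under test

# Construction is part of the benchmark, but may be skipped for
# frameworks that validate without building an object first.
sub create {
    return My::Validator->new(
        # rules for the chosen case, e.g. single_field:
        # `a` is required and may be any scalar
    );
}

# Validate the prepared input hash; return true on success.
sub run {
    my ($validator, $input) = @_;
    return $validator->check($input);   # `check` is a placeholder method
}

1;
```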