Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Benchmarking tools & results for Arturo
https://github.com/arturo-lang/benchmarks
arturo benchmark interpreter language programming programming-language
- Host: GitHub
- URL: https://github.com/arturo-lang/benchmarks
- Owner: arturo-lang
- License: mit
- Created: 2022-08-05T13:59:21.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2023-02-08T21:46:41.000Z (almost 2 years ago)
- Last Synced: 2024-05-27T12:33:20.489Z (6 months ago)
- Topics: arturo, benchmark, interpreter, language, programming, programming-language
- Language: Shell
- Homepage:
- Size: 77.2 MB
- Stars: 3
- Watchers: 3
- Forks: 0
- Open Issues: 1
Metadata Files:
- Readme: README.md
README
# Benchmarks
This repository hosts the main benchmarking tools & data for [**Arturo**](https://github.com/arturo-lang/arturo) itself.
The main scripts are meant to run automatically @ 21:00 UTC, on a daily basis (only if there have been new commits to the main repo since the latest benchmarks), after re-building Arturo's master branch from scratch in *release* mode on a *freshly-spawned/vanilla* DigitalOcean droplet (c-4) with the following specifications:
- CPU-optimized
- 4 vCPUs
- 8 GB memory
- 50 GB SSD
- Ubuntu 20.04

The main benchmarking tool orchestrating the whole process is [Hyperfine](https://github.com/sharkdp/hyperfine) - which is admittedly a... hyper-fine fit for this type of job.
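As a rough illustration of what a single Hyperfine-timed run could look like (the benchmark file name and the options below are assumptions for the sake of example, not the repository's actual invocation):

```bash
# Illustrative only: time one hypothetical micro-benchmark with hyperfine,
# warming up for 3 runs before the measured ones
hyperfine --warmup 3 "arturo micro/fibonacci.art"
```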
All the results will be stored here (in the `/results` folder):
- the **macro**-benchmarks are all the tests (unit tests & RC examples) that normally run as part of our CI workflows
- the **micro**-benchmarks are minimal tests, designed solely for benchmarking purposes, in order to isolate and measure specific features of Arturo

The collected data will - soon - be available from within Arturo's main website (pretty much [in the fashion of V lang](https://fast.vlang.io/) - only looking a bit better, I hope... :))
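Hyperfine can export its measurements as JSON (via `--export-json`), which is one way such results could end up under `/results` and later be consumed programmatically. A minimal sketch, assuming hypothetical file names:

```bash
# Illustrative only: export the timings to JSON, then pull out the mean run time
# (the file path under results/ is hypothetical)
hyperfine --warmup 3 --export-json results/micro/fibonacci.json "arturo micro/fibonacci.art"
jq '.results[0].mean' results/micro/fibonacci.json
```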
## To run manually
Although the main idea is to run the relevant scripts automatically, via a Cron job on our main server, the benchmarks can also be triggered manually.
With **hyperfine** and Arturo installed (and globally available in the `$PATH`), and the two repos (this one and the main Arturo repo) side-by-side (that is: under the exact same parent folder), all we have to do is enter this folder (`/benchmarks`) and run:
```bash
./run.sh (*optional)
```
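For reference, a minimal sketch of the assumed side-by-side layout and a manual invocation (directory names are illustrative, based on the default clone names):

```bash
# Assumed layout (names are illustrative):
#   some-parent/
#   ├── arturo/        # the main Arturo repository
#   └── benchmarks/    # this repository
cd benchmarks
./run.sh
```

------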
[![DigitalOcean Referral Badge](https://web-platforms.sfo2.digitaloceanspaces.com/WWW/Badge%203.svg)](https://www.digitalocean.com/?refcode=d9efb97aa0f2&utm_campaign=Referral_Invite&utm_medium=Referral_Program&utm_source=badge)