https://github.com/parcellab/sift
mongodb queries in JS
- Host: GitHub
- URL: https://github.com/parcellab/sift
- Owner: parcelLab
- Created: 2024-05-28T09:30:07.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-09-24T01:43:01.000Z (over 1 year ago)
- Last Synced: 2025-01-23T06:16:00.475Z (11 months ago)
- Topics: team-backend
- Language: TypeScript
- Homepage:
- Size: 2.85 MB
- Stars: 0
- Watchers: 9
- Forks: 0
- Open Issues: 4
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- Codeowners: CODEOWNERS
[codecov](https://codecov.parcellab.dev/gh/parcelLab/sift)
# Sift
Run MongoDB queries in plain JavaScript.
(A fork of [sift.js](https://github.com/crcn/sift.js).)
## About The Project
### Why fork https://github.com/crcn/sift.js?
The original project uses recursion and closures to construct a filter function from the given MongoDB query. This has drawbacks:
- debugging is difficult, because you have to trace through deeply nested function calls with little context about the original filter
- it is slow until V8 has optimized it, since recursive, closure-heavy function calls consume many stack frames
- it is hard to change one behavior without affecting others
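To illustrate the closure-based style (a simplified sketch, not the actual sift.js source), each operator in the query becomes a closure wrapping its children, so matching a document walks a tree of anonymous functions:

```typescript
// Simplified sketch of the closure-based style used upstream (hypothetical,
// not the actual sift.js source): every operator becomes a closure.
type Predicate = (doc: Record<string, unknown>) => boolean;

function compileGt(key: string, bound: number): Predicate {
  // The query value is captured in a closure; a debugger stepping through
  // here only sees `bound`, not the original query object.
  return (doc) => typeof doc[key] === "number" && (doc[key] as number) > bound;
}

function compileAnd(preds: Predicate[]): Predicate {
  // Each level of query nesting adds another stack frame at match time.
  return (doc) => preds.every((p) => p(doc));
}

const matches = compileAnd([compileGt("qty", 10), compileGt("price", 5)]);
```

Stepping through `matches` in a debugger lands you several anonymous frames deep before any actual comparison runs, which is the debugging pain the bullets above describe.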
This fork intends to rewrite sift as a compiler, similar to [ajv](https://github.com/ajv-validator/ajv). It aims to:
- provide a verbose mode in which each leaf comparison can report a pass/fail status and/or show what the compiled comparison function looks like
- be tiny and fast
- be safe, constraining `$where` function calls to the input object
- match the behavior of MongoDB query operators as closely as possible
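For readers unfamiliar with the ajv approach, here is a hedged sketch of what "compiling" a query can mean (an illustration of the technique, not this repository's actual code): the query is translated into one flat function body as a string and instantiated once, so the matcher is a single readable function rather than nested closures.

```typescript
// Hypothetical sketch of ajv-style compilation (not this repo's source):
// translate the query into one flat source string, compile it once.
type Query = Record<string, { $eq?: unknown; $gt?: number }>;

function compile(query: Query): (doc: Record<string, unknown>) => boolean {
  const checks: string[] = [];
  for (const [key, ops] of Object.entries(query)) {
    const access = `doc[${JSON.stringify(key)}]`;
    if ("$eq" in ops) checks.push(`${access} === ${JSON.stringify(ops.$eq)}`);
    if ("$gt" in ops)
      checks.push(`typeof ${access} === "number" && ${access} > ${ops.$gt}`);
  }
  const body = `return ${checks.join(" && ") || "true"};`;
  // One flat function: easy to print for a verbose mode, easy to step
  // through, and friendly to V8 optimization.
  return new Function("doc", body) as (doc: Record<string, unknown>) => boolean;
}

const match = compile({ status: { $eq: "ok" }, qty: { $gt: 10 } });
```

Because the generated `body` string exists as data, a verbose mode can simply print it, which is how a compiler makes "what does this comparison look like?" answerable.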
## Installation
Supports Node.js >= 18
`npm i @parcellab/sift`
## Usage
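This section is empty in the source. Below is a hedged sketch assuming the package mirrors the upstream sift.js API, where the default export takes a MongoDB query and returns a predicate usable with `Array.prototype.filter`. The real import would be `import sift from "@parcellab/sift"`; a minimal stand-in is defined here so the example is self-contained, and it supports only equality and `$in` for illustration.

```typescript
// Stand-in for `import sift from "@parcellab/sift"` (assumed API, mirroring
// upstream sift.js): a query compiles to a reusable predicate.
type Doc = Record<string, unknown>;

function sift(query: Doc): (doc: Doc) => boolean {
  // Illustrative stand-in: handles direct equality and the $in operator only.
  return (doc) =>
    Object.entries(query).every(([key, cond]) => {
      if (cond !== null && typeof cond === "object" && "$in" in (cond as Doc)) {
        return (cond as { $in: unknown[] }).$in.includes(doc[key]);
      }
      return doc[key] === cond;
    });
}

const docs = [
  { name: "craig", state: "CA" },
  { name: "tim", state: "NY" },
  { name: "ana", state: "CA" },
];

// Filter an array with a MongoDB-style query:
const inCalifornia = docs.filter(sift({ state: { $in: ["CA"] } }));
```

Consult the package itself for the supported operator set; the names above (`docs`, `inCalifornia`) are invented for this example.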
## Contributing
- You need a local copy of MongoDB (e.g. via Docker; a compose file is provided), configured via `TEST_MONGODB_URL`, which defaults to `mongodb://localhost:27017/test`
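A compose file for this is provided in the repository; its actual contents may differ, but a minimal equivalent (file name and image tag assumed here) looks like:

```yaml
# Hypothetical minimal compose file for the test database
# (illustrative only; see the compose file shipped in the repo).
services:
  mongodb:
    image: mongo:7
    ports:
      - "27017:27017"
```

Start it with `docker compose up -d`, after which the default `TEST_MONGODB_URL` resolves against the exposed port.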
- To update the benchmarks:
1. run `npm run test:bench`, which updates the human-readable `output.txt`
2. run `npm run test:bench:csv`
3. open `output.xlsx` in Microsoft Excel
4. click on Refresh Data Sources (this imports `output.csv` automatically)
5. export the updated chart as `output.png`
[Contribution guidelines](CONTRIBUTING.md)