https://github.com/daemon-node-byte/oneorigin
Interview technical assessment
- Host: GitHub
- URL: https://github.com/daemon-node-byte/oneorigin
- Owner: daemon-node-byte
- License: MIT
- Created: 2023-04-01T04:13:14.000Z (about 3 years ago)
- Default Branch: main
- Last Pushed: 2023-04-01T19:15:58.000Z (about 3 years ago)
- Last Synced: 2024-07-22T01:34:55.802Z (almost 2 years ago)
- Language: TypeScript
- Homepage:
- Size: 7.81 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# OneOrigin
Interview technical assessment
## Round One
Run the Node script with:
```bash
npm run roundOne
```
[source code](./refactorRoundOne.ts)
### Issue
Make 1,000 API calls to an endpoint that has a rate limit of 100 requests per minute.
### Solution
Create a function that dispatches the total required requests within the rate-limit restrictions.
### Code Explanation
My approach is to break the process into steps and build each function as a shared utility, eliminating repeated code blocks.
- First, create a function (`getRequest`) that makes a single request and returns a response.
- Next, create a function (`batchRequests`) that makes multiple requests.
- Last, create a function (`sendBatch`) that dispatches batches of requests at intervals according to the arguments passed.
The `getRequest` function uses a predefined domain for our API service (the global `API_DOMAIN`) and takes an endpoint parameter to build the request URL. It also increments a global `REQUESTS_SENT` counter so the total number of dispatched requests can be verified later.
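A minimal sketch of what `getRequest` could look like; the value of `API_DOMAIN` and the use of `fetch` are assumptions, since the repo's source is not reproduced here:

```typescript
// Hypothetical base URL for the API service; the real value is not shown in the repo.
const API_DOMAIN = "https://api.example.com";
// Global counter, checked against the expected total at the end of the run.
let REQUESTS_SENT = 0;

async function getRequest(endpoint: string): Promise<unknown> {
  REQUESTS_SENT += 1; // record the dispatch for later verification
  return fetch(`${API_DOMAIN}/${endpoint}`);
}
```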
The `batchRequests` function makes multiple GET requests in a single call, taking the desired endpoint and the number of requests to make. I chose recursion for readability and performance; the difference may be negligible at 100 requests per call, but at 1,000+ it could be measurable.
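A recursive `batchRequests` sketch matching the description above; `getRequest` is stubbed so the snippet stands alone, and all names beyond those quoted in the text are assumptions:

```typescript
let REQUESTS_SENT = 0;

// Stand-in for the real GET request so this sketch runs without a network.
async function getRequest(endpoint: string): Promise<void> {
  REQUESTS_SENT += 1;
}

async function batchRequests(endpoint: string, count: number): Promise<void> {
  if (count <= 0) return; // base case: batch fully dispatched
  await getRequest(endpoint);
  return batchRequests(endpoint, count - 1); // recurse until the count is exhausted
}
```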
The `sendBatch` function handles batches of requests within various rate-limit restrictions. It takes four arguments: the endpoint the batch of requests will be sent to, the maximum number of requests per batch, the number of seconds between batches (seconds seemed best for use in testing suites), and the total number of requests desired. I tried to catch edge cases that might otherwise go unaccounted for, such as running the first batch with no delay (handled by `initSend ? 0 : delaySeconds * 1000`) and a total that is not a whole multiple of the per-batch maximum (handled by `maxRatePer > remainingTotal ? remainingTotal : maxRatePer`).
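A sketch of the `sendBatch` scheduler as described; apart from `initSend` and `maxRatePer`, which the text quotes, every name here is an assumption, and `batchRequests` is stubbed so the snippet runs standalone:

```typescript
let REQUESTS_SENT = 0;

// Stand-in batch dispatcher: increments the counter instead of making real GETs.
async function batchRequests(endpoint: string, count: number): Promise<void> {
  for (let i = 0; i < count; i++) REQUESTS_SENT += 1;
}

function wait(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function sendBatch(
  endpoint: string,
  maxRatePer: number,    // max requests per batch (the rate limit)
  delaySeconds: number,  // pause between batches
  remainingTotal: number, // requests still to send
  initSend = true,       // true only for the very first batch
): Promise<void> {
  if (remainingTotal <= 0) return; // nothing left to send
  await wait(initSend ? 0 : delaySeconds * 1000); // first batch runs immediately
  // Final batch may be smaller when the total isn't a multiple of the max rate.
  const size = maxRatePer > remainingTotal ? remainingTotal : maxRatePer;
  await batchRequests(endpoint, size);
  return sendBatch(endpoint, maxRatePer, delaySeconds, remainingTotal - size, false);
}
```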
Since this runs in a Node environment rather than the browser, I log running timestamps and assert that the global `REQUESTS_SENT` matches the total number of requests passed as a parameter. As areas for improvement, I would make the functions asynchronous, add custom resolve/reject handling, and return a `Promise` that resolves with the collected response data of the batched requests; that would be more reliable and easier to debug.
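One speculative shape for that improvement: a scheduler that resolves with all collected responses, so callers can assert on the data directly. The request function is injected (a hypothetical seam, not in the original) to keep the sketch testable:

```typescript
function wait(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function sendBatchCollect<T>(
  makeRequest: () => Promise<T>, // injected request fn; hypothetical testing seam
  maxRatePer: number,
  delaySeconds: number,
  total: number,
): Promise<T[]> {
  const results: T[] = [];
  let first = true;
  let remaining = total;
  while (remaining > 0) {
    if (!first) await wait(delaySeconds * 1000); // no delay before the first batch
    first = false;
    const size = Math.min(maxRatePer, remaining); // final batch may be partial
    // Dispatch the whole batch concurrently and keep every response.
    const batch = await Promise.all(
      Array.from({ length: size }, () => makeRequest()),
    );
    results.push(...batch);
    remaining -= size;
  }
  return results;
}
```

Returning the collected responses means a test suite can check both the count and the payloads in one assertion, instead of relying on a global counter.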