Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/lyokha/http-client-brread-timeout
Http client with timeouts applied in between body read events
- Host: GitHub
- URL: https://github.com/lyokha/http-client-brread-timeout
- Owner: lyokha
- License: MIT
- Created: 2022-06-22T04:07:20.000Z (over 2 years ago)
- Default Branch: master
- Last Pushed: 2023-11-29T20:21:30.000Z (12 months ago)
- Last Synced: 2024-11-07T05:07:26.507Z (12 days ago)
- Topics: haskell, http-client, network, read, timeout
- Language: Haskell
- Homepage:
- Size: 18.6 KB
- Stars: 1
- Watchers: 2
- Forks: 0
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
  - Changelog: Changelog.md
  - License: LICENSE
README
Http client with time-limited brRead
====================================

[![Build Status](https://github.com/lyokha/http-client-brread-timeout/workflows/CI/badge.svg)](https://github.com/lyokha/http-client-brread-timeout/actions?query=workflow%3ACI)
[![Hackage](https://img.shields.io/hackage/v/http-client-brread-timeout.svg?label=hackage%20%7C%20http-client-brread-timeout&logo=haskell&logoColor=%239580D1)](https://hackage.haskell.org/package/http-client-brread-timeout)

Http client with timeouts applied in between body read events.
Note that the response timeout in
[*http-client*](https://github.com/snoyberg/http-client) is applied only while
receiving the response headers, which is not always satisfactory: a slow server
may send the rest of the response arbitrarily slowly.

### How do I test this?
A slow server can be emulated in *Nginx* (using the third-party *echo* module)
with the following configuration.
```nginx
user                    nobody;
worker_processes        2;

events {
    worker_connections  1024;
}

http {
    default_type        application/octet-stream;
    sendfile            on;

    server {
        listen          8010;
        server_name     main;

        location /slow {
            echo 1; echo_flush;
            # send extra chunks of the response body once in 20 sec
            echo_sleep 20; echo 2; echo_flush;
            echo_sleep 20; echo 3; echo_flush;
            echo_sleep 20; echo 4;
        }

        location /very/slow {
            echo 1; echo_flush;
            echo_sleep 20; echo 2; echo_flush;
            # chunk 3 is extremely slow (40 sec)
            echo_sleep 40; echo 3; echo_flush;
            echo_sleep 20; echo 4;
        }
    }
}
```

*GHCI* session:

```
Prelude> import Network.HTTP.Client as HTTP.Client
Prelude HTTP.Client> import Network.HTTP.Client.BrReadWithTimeout as BrReadWithTimeout
Prelude HTTP.Client BrReadWithTimeout> httpManager = newManager defaultManagerSettings
Prelude HTTP.Client BrReadWithTimeout> man <- httpManager
Prelude HTTP.Client BrReadWithTimeout> reqVerySlow <- parseRequest "GET http://127.0.0.1:8010/very/slow"
Prelude HTTP.Client BrReadWithTimeout> reqSlow <- parseRequest "GET http://127.0.0.1:8010/slow"
Prelude HTTP.Client BrReadWithTimeout> :set +s
Prelude HTTP.Client BrReadWithTimeout> httpLbs reqVerySlow man
Response {responseStatus = Status {statusCode = 200, statusMessage = "OK"}, responseVersion = HTTP/1.1, responseHeaders = [("Server","nginx/1.22.0"),("Date","Thu, 23 Jun 2022 22:04:02 GMT"),("Content-Type","application/octet-stream"),("Transfer-Encoding","chunked"),("Connection","keep-alive")], responseBody = "1\n2\n3\n4\n", responseCookieJar = CJ {expose = []}, responseClose' = ResponseClose, responseOriginalRequest = Request {
  host                 = "127.0.0.1"
  port                 = 8010
  secure               = False
  requestHeaders       = []
  path                 = "/very/slow"
  queryString          = ""
  method               = "GET"
  proxy                = Nothing
  rawBody              = False
  redirectCount        = 10
  responseTimeout      = ResponseTimeoutDefault
  requestVersion       = HTTP/1.1
  proxySecureMode      = ProxySecureWithConnect
}
}
(80.09 secs, 1,084,840 bytes)
Prelude HTTP.Client BrReadWithTimeout> httpLbsBrReadWithTimeout reqVerySlow man
*** Exception: HttpExceptionRequest Request {
  host                 = "127.0.0.1"
  port                 = 8010
  secure               = False
  requestHeaders       = []
  path                 = "/very/slow"
  queryString          = ""
  method               = "GET"
  proxy                = Nothing
  rawBody              = False
  redirectCount        = 10
  responseTimeout      = ResponseTimeoutMicro 30000000
  requestVersion       = HTTP/1.1
  proxySecureMode      = ProxySecureWithConnect
}
ResponseTimeout
Prelude HTTP.Client BrReadWithTimeout> httpLbsBrReadWithTimeout reqSlow man
Response {responseStatus = Status {statusCode = 200, statusMessage = "OK"}, responseVersion = HTTP/1.1, responseHeaders = [("Server","nginx/1.22.0"),("Date","Thu, 23 Jun 2022 22:08:46 GMT"),("Content-Type","application/octet-stream"),("Transfer-Encoding","chunked"),("Connection","keep-alive")], responseBody = "1\n2\n3\n4\n", responseCookieJar = CJ {expose = []}, responseClose' = ResponseClose, responseOriginalRequest = Request {
  host                 = "127.0.0.1"
  port                 = 8010
  secure               = False
  requestHeaders       = []
  path                 = "/slow"
  queryString          = ""
  method               = "GET"
  proxy                = Nothing
  rawBody              = False
  redirectCount        = 10
  responseTimeout      = ResponseTimeoutDefault
  requestVersion       = HTTP/1.1
  proxySecureMode      = ProxySecureWithConnect
}
}
(60.07 secs, 1,082,880 bytes)
```

Here, the first request comes from the standard `httpLbs` which, after timely
receiving the first chunk of the response (the headers and the first chunk of
the body), no longer applies any timeout and may last as long as the response
endures: in this case, it lasts 80 seconds and returns successfully.
In the second request, `httpLbsBrReadWithTimeout` also receives the first chunk
of the response in time, and the second chunk arrives 20 seconds later; but
since the third chunk would take another 40 seconds, which exceeds the default
response timeout of 30 seconds, the function throws a `ResponseTimeout`
exception 50 seconds after the start of the request. In the third request,
`httpLbsBrReadWithTimeout` returns successfully after 60 seconds because each
subsequent chunk of the response arrived within 20 seconds, never triggering
the timeout.
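
For a standalone program, the session above can be condensed into a short
sketch. This is only an illustration: the URL and the 25-second value are
arbitrary, and it assumes, as the exception above suggests, that
`httpLbsBrReadWithTimeout` takes the per-read timeout from the request's
`responseTimeout` field; `responseTimeoutMicro` is the standard *http-client*
constructor for that field. It needs the *Nginx* server from the previous
section running to produce any output.

```haskell
import qualified Data.ByteString.Lazy.Char8 as L8
import           Network.HTTP.Client
import           Network.HTTP.Client.BrReadWithTimeout (httpLbsBrReadWithTimeout)

main :: IO ()
main = do
    man <- newManager defaultManagerSettings
    req <- parseRequest "GET http://127.0.0.1:8010/slow"
    -- Tighten the timeout from the default 30 sec: every body read must
    -- now complete within 25 sec, otherwise ResponseTimeout is thrown
    let req' = req { responseTimeout = responseTimeoutMicro 25000000 }
    resp <- httpLbsBrReadWithTimeout req' man
    L8.putStrLn $ responseBody resp
```

With the `/slow` endpoint above, each chunk arrives within 25 seconds, so this
should print the whole body; pointing it at `/very/slow` should instead throw
`ResponseTimeout` on the 40-second chunk.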