---
output: rmarkdown::github_document
editor_options:
  chunk_output_type: console
---
```{r pkg-knitr-opts, include=FALSE}
hrbrpkghelpr::global_opts()
```

```{r badges, results='asis', echo=FALSE, cache=FALSE}
hrbrpkghelpr::stinking_badges()
```
# curlparse

Parse 'URLs' with 'libcurl'

## Description

Tools are provided to parse URLs using the modern 'libcurl' built-in parser.

## NOTE

You _need_ to have `libcurl` >= 7.62.0 for this to work since that's when it began to expose the URL parsing API.
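
A quick way to check which libcurl version is available to R (a sketch, assuming the [{curl}](https://github.com/jeroen/curl) package is installed; it reports the libcurl that {curl} was built against, which is typically the same system library this package will compile against):

```{r libcurl-check, eval=FALSE}
# should print something >= "7.62.0"
curl::curl_version()$version
```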

macOS users can do:

```
$ brew install curl
```

(provided you're using [Homebrew](https://brew.sh/)).

Windows users are able to just install this package since it uses the same clever "anticonf" approach that Jeroen uses in the [{curl} package](https://github.com/jeroen/curl).

The state of availability of `libcurl` >= 7.62.0 across Linux distributions is sketchy at best (as an example, Ubuntu bionic and cosmic are not even remotely at the current version). If your distribution does not have >= 7.62.0 available, you will need to [compile and install it manually](https://curl.haxx.se/download.html), ensuring the library and headers are available to R so the package can be built.

## What's Inside The Tin

The following functions are implemented:

```{r ingredients, results='asis', echo=FALSE, cache=FALSE}
hrbrpkghelpr::describe_ingredients()
```

## Installation

```{r install-ex, results='asis', echo = FALSE}
hrbrpkghelpr::install_block()
```
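
The chunk above renders the package's own installation instructions. As a general fallback (a sketch, assuming the {remotes} package is installed), a GitHub-hosted package like this one can typically be installed with:

```{r install-sketch, eval=FALSE}
# requires a system libcurl >= 7.62.0 plus headers (see NOTE above)
remotes::install_github("hrbrmstr/curlparse")
```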

## Usage

```{r message=FALSE, warning=FALSE, error=FALSE}
library(curlparse)

# current version
packageVersion("curlparse")

```

### Process Some URLs

```{r libs}
library(urltools)
library(rvest)
library(curlparse)
library(tidyverse)

```
```{r cache=TRUE}
read_html("https://www.r-bloggers.com/blogs-list/") %>%
  html_nodes(xpath = ".//li[contains(., 'Contributing Blogs')]/ul/li/a[contains(@href, 'http')]") %>%
  html_attr("href") -> blog_urls

```
```{r}
(parsed <- parse_curl(blog_urls))

count(parsed, scheme, sort=TRUE)

filter(parsed, !is.na(query))
```

### Benchmark

`curlparse` includes a `url_parse()` function that is a drop-in replacement for `urltools::url_parse()`: it provides the same API and returns the same results (including a regular data frame rather than a `tbl`), so existing `urltools` users can switch with minimal changes.
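
For example (an illustrative sketch, assuming both packages are installed), the two calls below are interchangeable in downstream code:

```{r drop-in-ex, eval=FALSE}
u <- "https://rud.is/b/?x=1#top"  # any URL will do

curlparse::url_parse(u)  # same columns and layout as the urltools version
urltools::url_parse(u)
```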

Spoiler alert: `urltools::url_parse()` is faster by ~100µs per 100 URLs for "good" URLs (with a mix of gnarly/bad and valid URLs, the two are closer to on par). The aim was not to beat it, though.

>Per the [blog post introducing this new set of API calls](https://daniel.haxx.se/blog/2018/09/09/libcurl-gets-a-url-api/):
>
>Applications that pass in URLs to libcurl would of course still very often need to parse URLs, create URLs or otherwise handle them, but libcurl has not been helping with that.
>
>At the same time, the under-specification of URLs has led to a situation where there's really no stable document anywhere describing how URLs are supposed to work and basically every implementer is left to handle the WHATWG URL spec, RFC 3986 and the world in between all by themselves. Understanding how their URL parsing libraries, libcurl, other tools and their favorite browsers differ is complicated.
>
>By offering applications access to libcurl's own URL parser, we hope to tighten a problematic vulnerable area for applications where the URL parser library would believe one thing and libcurl another. This could and has sometimes lead to security problems. (See for example Exploiting URL Parser in Trending Programming Languages! by Orange Tsai)

So, using this library adds consistency with how `libcurl` sees and handles URLs.

```{r}
library(microbenchmark)

set.seed(0)
test_urls <- sample(blog_urls, 100) # pick 100 URLs at random

# urltools was loaded before curlparse at the top, so namespace loading
# isn't a factor in the benchmarks
microbenchmark(
  curlparse = curlparse::url_parse(test_urls),
  urltools = urltools::url_parse(test_urls),
  times = 500
) -> mb

mb

autoplot(mb)
```

The individual extractors are a bit more on par but mostly still slower (except for `fragment()`). Note that `urltools` has no equivalent function for extracting just the query string, so that is not in the test.

```{r fig.width=6, fig.height=6}
bind_rows(
  microbenchmark(curlparse = curlparse::scheme(blog_urls), urltools = urltools::scheme(blog_urls)) %>%
    mutate(test = "scheme"),
  microbenchmark(curlparse = curlparse::domain(blog_urls), urltools = urltools::domain(blog_urls)) %>%
    mutate(test = "domain"),
  microbenchmark(curlparse = curlparse::port(blog_urls), urltools = urltools::port(blog_urls)) %>%
    mutate(test = "port"),
  microbenchmark(curlparse = curlparse::path(blog_urls), urltools = urltools::path(blog_urls)) %>%
    mutate(test = "path"),
  microbenchmark(curlparse = curlparse::fragment(blog_urls), urltools = urltools::fragment(blog_urls)) %>%
    mutate(test = "fragment")
) %>%
  mutate(test = factor(test, levels = c("scheme", "domain", "port", "path", "fragment"))) %>%
  mutate(time = time / 1000000) %>%
  ggplot(aes(expr, time)) +
  geom_violin(aes(fill = expr), show.legend = FALSE) +
  scale_y_continuous(name = "milliseconds", expand = c(0, 0), limits = c(0, NA)) +
  hrbrthemes::scale_fill_ft() +
  facet_wrap(~test, ncol = 1) +
  coord_flip() +
  labs(x = NULL) +
  hrbrthemes::theme_ft_rc(grid = "XY", strip_text_face = "bold") +
  theme(panel.spacing.y = unit(0, "lines"))
```

```{r echo=FALSE}
unloadNamespace("urltools")
```

### Stress Test

```{r}
c(
  "", "foo", "foo;params?query#fragment", "http://foo.com/path", "http://foo.com",
  "//foo.com/path", "//user:pass@foo.com/", "http://user:pass@foo.com/",
  "file:///tmp/junk.txt", "imap://mail.python.org/mbox1",
  "mms://wms.sys.hinet.net/cts/Drama/09006251100.asf", "nfs://server/path/to/file.txt",
  "svn+ssh://svn.zope.org/repos/main/ZConfig/trunk/",
  "git+ssh://git@github.com/user/project.git", "HTTP://WWW.PYTHON.ORG/doc/#frag",
  "http://www.python.org:080/", "http://www.python.org:/", "javascript:console.log('hello')",
  "javascript:console.log('hello');console.log('world')", "http://example.com/?",
  "http://example.com/;", "tel:0108202201", "unknown:0108202201",
  "http://user@example.com:8080/path;param?query#fragment",
  "http://www.python.org:65536/", "http://www.python.org:-20/",
  "http://www.python.org:8589934592/", "http://www.python.org:80hello/",
  "http://:::cnn.com/", "http://./", "http://foo..com/", "http://foo../"
) -> ugly_urls

(u_parsed <- parse_curl(ugly_urls))

filter(u_parsed, !is.na(scheme))

filter(u_parsed, !is.na(user))

filter(u_parsed, !is.na(password))

filter(u_parsed, !is.na(host))

filter(u_parsed, !is.na(path))

filter(u_parsed, !is.na(query))

filter(u_parsed, !is.na(fragment))
```

Make sure the vector extractors work the same as the data frame converter:

```{r}
all(
  c(
    identical(u_parsed$scheme, scheme(ugly_urls)),
    identical(u_parsed$user, user(ugly_urls)),
    identical(u_parsed$password, password(ugly_urls)),
    identical(u_parsed$host, host(ugly_urls)),
    identical(u_parsed$path, path(ugly_urls)),
    identical(u_parsed$query, query(ugly_urls)),
    identical(u_parsed$fragment, fragment(ugly_urls))
  )
)
```

## curlparse Metrics

```{r cloc, echo=FALSE}
cloc::cloc_pkg_md()
```

## Code of Conduct

Please note that this project is released with a Contributor Code of Conduct.
By participating in this project you agree to abide by its terms.