Duplicut :scissors:


Quickly dedupe massive wordlists, without changing the order

Mentioned in awesome-pentest
Created by nil0x42 and contributors


* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

### :book: Overview

Nowadays, password wordlist creation usually involves concatenating
multiple data sources.

Ideally, the most probable passwords should appear at the start of the
wordlist, so that the most common passwords are cracked instantly.

With existing *dedupe tools*, you are forced to choose whether to
*preserve the order **OR** handle massive wordlists*.

Unfortunately, **wordlist creation requires both**:

![][img-1-comparison]

> **So I wrote duplicut in [highly optimized C][get-next-line] to address this very specific need :nerd\_face: :computer:**

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

### :bulb: Quick start

```sh
git clone https://github.com/nil0x42/duplicut
cd duplicut/ && make
./duplicut wordlist.txt -o clean-wordlist.txt
```

### :wrench: Options

![][img-4-help]

* **Features**:
    * Handle massive wordlists, even those whose size exceeds available RAM
    * Filter lines by max length (`-l` option)
    * Can remove lines containing non-printable ASCII chars (`-p` option)
    * Press any key to show program status at runtime

* **Implementation**:
    * Written in pure C code, designed to be fast
    * Compressed hashmap items on 64-bit platforms
    * Multithreading support

* **Limitations**:
    * Any line longer than 255 chars is ignored

### :book: Technical Details

#### :small_orange_diamond: 1- Memory optimized:

A single `uint64` is enough to index lines in the hashmap, by packing
the `size` info into the pointer's unused [extra bits][tagged-pointer]:

![][img-2-line-struct]

#### :small_orange_diamond: 2- Massive file handling:

If the whole file can't fit in memory, it is split into ![][latex-n]
virtual chunks, in such a way that each chunk uses as much RAM as possible.

Each chunk is then loaded into the hashmap, deduped, and tested against
subsequent chunks.

That way, the number of chunk passes is at most the ![][latex-n]th *triangle number*:

![][img-3-chunked-processing]

## :bulb: Troubleshooting

If you find a bug, or something doesn't work as expected,
please compile duplicut in debug mode and post an [issue] with
attached output:

```sh
# debug level can be from 1 to 4
make debug level=1
./duplicut [OPTIONS] 2>&1 | tee /tmp/duplicut-debug.log
```

[get-next-line]: https://github.com/nil0x42/duplicut/blob/master/src/line.c#L39

[img-1-comparison]: data/img/1-comparison.png
[img-2-line-struct]: data/img/2-line-struct.png
[img-3-chunked-processing]: data/img/3-chunked-processing.png
[img-4-help]: data/img/4-help.png

[issue]: https://github.com/nil0x42/duplicut/issues
[tagged-pointer]: https://en.wikipedia.org/wiki/Tagged_pointer

[latex-n]: http://www.sciweavers.org/tex2img.php?fs=15&eq=n
[latex-nth-triangle]: http://www.sciweavers.org/tex2img.php?fs=32&eq=%5Csum_%7Bk%3D1%7D%5Enk