{"id":17632758,"url":"https://github.com/src-d/minhashcuda","last_synced_at":"2025-04-09T14:06:38.021Z","repository":{"id":50372784,"uuid":"71914464","full_name":"src-d/minhashcuda","owner":"src-d","description":"Weighted MinHash implementation on CUDA (multi-gpu).","archived":false,"fork":false,"pushed_at":"2023-11-29T18:46:03.000Z","size":92,"stargazers_count":117,"open_issues_count":2,"forks_count":24,"subscribers_count":10,"default_branch":"master","last_synced_at":"2025-04-02T07:51:33.553Z","etag":null,"topics":["cuda","lsh","machine-learning","minhash"],"latest_commit_sha":null,"homepage":"","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/src-d.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE.md","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2016-10-25T16:01:49.000Z","updated_at":"2025-03-01T06:11:03.000Z","dependencies_parsed_at":"2023-12-17T22:42:21.305Z","dependency_job_id":"27093905-6458-4937-a1c4-ce4adaff0cee","html_url":"https://github.com/src-d/minhashcuda","commit_stats":{"total_commits":56,"total_committers":10,"mean_commits":5.6,"dds":0.3392857142857143,"last_synced_commit":"85d419a3b3fedafd8e25d4fb0c641f69198d2ba6"},"previous_names":[],"tags_count":16,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/src-d%2Fminhashcuda","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/src-d%2Fminhashcuda/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/src-d%2Fminhashcuda/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositor
ies/src-d%2Fminhashcuda/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/src-d","download_url":"https://codeload.github.com/src-d/minhashcuda/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248054227,"owners_count":21039952,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cuda","lsh","machine-learning","minhash"],"created_at":"2024-10-23T01:45:37.864Z","updated_at":"2025-04-09T14:06:37.997Z","avatar_url":"https://github.com/src-d.png","language":"C++","readme":"MinHashCuda [![Build Status](https://travis-ci.org/src-d/minhashcuda.svg?branch=master)](https://travis-ci.org/src-d/minhashcuda) [![PyPI](https://img.shields.io/pypi/v/libMHCUDA.svg)](https://pypi.python.org/pypi/libMHCUDA) [![10.5281/zenodo.286955](https://zenodo.org/badge/DOI/10.5281/zenodo.286955.svg)](https://doi.org/10.5281/zenodo.286955)\n===========\n\nThis project is the reimplementation of Weighted MinHash calculation from\n[ekzhu/datasketch](https://github.com/ekzhu/datasketch) in NVIDIA CUDA and thus\nbrings 600-1000x speedup over numpy with [MKL](https://en.wikipedia.org/wiki/Math_Kernel_Library)\n(Titan X 2016 vs 12-core Xeon E5-1650).\nIt supports running on multiple GPUs to be even faster, e.g., processing 10Mx12M\nmatrix with sparsity 0.0014 takes 40 minutes using two Titan Xs.\nThe produced results are bit-to-bit identical to the reference implementation.\nRead the [article](http://blog.sourced.tech/post/minhashcuda/).\n\nThe input format is 32-bit float 
[CSR](https://en.wikipedia.org/wiki/Sparse_matrix#Compressed_sparse_row_.28CSR.2C_CRS_or_Yale_format.29) matrix.\nThe code is optimized for low memory consumption and speed.\n\nWhat is Weighted MinHash\n--------------------------\nMinHash can be used to compress an unweighted set or binary vector, and to estimate\nunweighted Jaccard similarity.\nIt is possible to modify MinHash for\n[weighted Jaccard](https://en.wikipedia.org/wiki/Jaccard_index#Generalized_Jaccard_similarity_and_distance)\nby expanding each item (or dimension) by its weight.\nHowever, this approach does not support real-number weights, and\nthe expansion can be very expensive if the weights are large.\n[Weighted MinHash](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36928.pdf)\nwas created by Sergey Ioffe, and its performance does not depend on the weights - as\nlong as the universe of all possible items (or dimensions for vectors) is known.\nThis makes it unsuitable for stream processing, where knowledge of unseen\nitems cannot be assumed.\n\nBuilding\n--------\n```\ncmake -DCMAKE_BUILD_TYPE=Release . \u0026\u0026 make\n```\n\nIt requires cudart, curand \u003e=8.0, an OpenMP 4.0-compatible compiler (**that is, not gcc \u003c=4.8**) and \ncmake \u003e= 3.2.\nIf [numpy](http://www.numpy.org/) headers are not found,\nspecify the include path by defining `NUMPY_INCLUDES`.\nIf you do not want to build the Python native module, add `-D DISABLE_PYTHON=y`.\nIf CUDA is not automatically found, add `-D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-8.0`\n(change the path to the actual one). \nIf you are building in a Docker container, you may encounter the following error: \n`Could NOT find CUDA (missing: CUDA_TOOLKIT_ROOT_DIR CUDA_INCLUDE_DIRS CUDA_CUDART_LIBRARY)`\nThis means you need to install the rest of the CUDA toolkit, as is done in the\n`nvidia/cuda:8.0-devrel` [Dockerfile](https://gitlab.com/nvidia/cuda/blob/ubuntu16.04/8.0/devel/Dockerfile). 
\nIf you still run into `Could NOT find CUDA (missing: CUDA_INCLUDE_DIRS)`, then run:\n ```ln -s /usr/local/cuda/targets/x86_64-linux/include/* /usr/local/cuda/include/```\n\nPython users: if you are using Linux x86-64 and CUDA 8.0, then you can\ninstall this easily:\n```\npip install libMHCUDA\n```\nOtherwise, you'll have to install it from source:\n```\npip install git+https://github.com/src-d/minhashcuda.git\n```\n**Building in Python virtual environments, e.g. pyenv or conda, is officially not supported.** You can still submit patches to fix the related problems.\n\nTesting\n-------\n`test.py` contains the unit tests based on [unittest](https://docs.python.org/3/library/unittest.html).\nThey require [datasketch](https://github.com/ekzhu/datasketch) and [scipy](https://github.com/scipy/scipy).\n\nContributions\n-------------\n\n...are welcome! See [CONTRIBUTING](CONTRIBUTING.md) and [code of conduct](CODE_OF_CONDUCT.md).\n\nLicense\n-------\n\n[Apache 2.0](LICENSE.md)\n\nPython example\n--------------\n```python\nimport libMHCUDA\nimport numpy\nfrom scipy.sparse import csr_matrix\n\n# Prepare the rows\nnumpy.random.seed(1)\ndata = numpy.random.randint(0, 100, (6400, 130))\nmask = numpy.random.randint(0, 5, data.shape)\ndata *= (mask \u003e= 4)\ndel mask\nm = csr_matrix(data, dtype=numpy.float32)\ndel data\n\n# We've got an 80% sparse 6400 x 130 matrix\n# Initialize the hasher aka \"generator\" with 128 hash samples for every row\ngen = libMHCUDA.minhash_cuda_init(m.shape[-1], 128, seed=1, verbosity=1)\n\n# Calculate the hashes. 
Can be executed several times with different numbers of rows\nhashes = libMHCUDA.minhash_cuda_calc(gen, m)\n\n# Free the resources\nlibMHCUDA.minhash_cuda_fini(gen)\n```\nThe functions can be easily wrapped into a class (not included).\n\nPython API\n----------\nImport \"libMHCUDA\".\n\n```python\ndef minhash_cuda_init(dim, samples, seed=time(), deferred=False, devices=0, verbosity=0)\n```\nCreates the hasher.\n\n**dim** integer, the number of dimensions in the input. In other words, the length of each weight vector.\n        Must be less than 2³².\n\n**samples** integer, the number of hash samples. The higher the value, the more precise the estimates,\n            but the larger the hash size and the longer the calculation (linear). Must not be prime,\n            for performance reasons, and must be less than 2¹⁶.\n\n**seed** integer, the random generator seed for reproducible results.\n\n**deferred** boolean, if True, disables the initialization of WMH parameters with\n             random numbers. In that case, the user is expected to call\n             minhash_cuda_assign_random_vars() afterwards.\n\n**devices** integer, bitwise OR-ed CUDA device indices, e.g. 1 means first device, 2 means second device,\n            3 means using first and second device. Special value 0 enables all available devices.\n            Default value is 0.\n\n**verbosity** integer, 0 means complete silence, 1 means mere progress logging,\n              2 means lots of output.\n              \n**return** integer, pointer to generator struct (opaque).\n\n```python\ndef minhash_cuda_calc(gen, matrix, row_start=0, row_finish=0xffffffff)\n```\nCalculates Weighted MinHash-es. 
May reallocate memory on GPU but does its best to reuse the buffers.\n\n**gen** integer, pointer to generator struct obtained from init().\n\n**matrix** `scipy.sparse.csr_matrix` instance, the number of columns must match **dim**.\n           The number of rows must be less than 2³¹.\n           \n**row_start** integer, slice start offset (the index of the first row to process).\n              Enables efficient zero-copy sparse matrix slicing.\n              \n**row_finish** integer, slice finish offset (the index of the row after the last\n               one to process). The resulting matrix row slice is [row_start:row_finish].\n\n**return** `numpy.ndarray` of shape (number of matrix rows, **samples**, 2) and dtype uint32.\n\n```python\ndef minhash_cuda_fini(gen)\n```\nDisposes of any resources allocated by init() and subsequent calc()-s. The generator pointer is invalidated.\n\n**gen** integer, pointer to generator struct obtained from init().\n\nC API\n-----\nInclude \"minhashcuda.h\".\n\n```C\nMinhashCudaGenerator* mhcuda_init(\n    uint32_t dim, uint16_t samples, uint32_t seed, int deferred,\n    uint32_t devices, int verbosity, MHCUDAResult *status)\n```\nInitializes the Weighted MinHash generator.\n\n**dim** the number of dimensions in the input. In other words, the length of each weight vector.\n\n**samples** the number of hash samples. The higher the value, the more precise the estimates,\n            but the larger the hash size and the longer the calculation (linear). Must not be prime,\n            for performance reasons.\n\n**seed** the random generator seed for reproducible results.\n\n**deferred** if set to anything except 0, disables the initialization of WMH parameters with\n             random numbers. In that case, the user is expected to call\n             mhcuda_assign_random_vars() afterwards.\n\n**devices** bitwise OR-ed CUDA device indices, e.g. 1 means first device, 2 means second device,\n            3 means using first and second device. 
Special value 0 enables all available devices.\n\n**verbosity** 0 means complete silence, 1 means mere progress logging, 2 means lots of output.\n\n**status** pointer to the reported return code. May be nullptr. In case of any error, the\n           returned result is nullptr and the code is stored into *status (with a nullptr check).\n\n**return** pointer to the allocated generator opaque struct.\n\n```C\nMHCUDAResult mhcuda_calc(\n    const MinhashCudaGenerator *gen, const float *weights,\n    const uint32_t *cols, const uint32_t *rows, uint32_t length,\n    uint32_t *output)\n```\nCalculates the Weighted MinHash-es for the specified CSR matrix.\n\n**gen** pointer to the generator opaque struct obtained from mhcuda_init().\n**weights** sparse matrix's values.\n**cols** sparse matrix's column indices, must be the same size as weights.\n**rows** sparse matrix's row offsets. The first element is always 0, the last is\n         effectively the size of weights and cols.\n**length** the number of rows. The \"rows\" argument must have size (length + 1) because of\n           the leading 0.\n**output** resulting hashes array of size length x samples x 2.\n\n**return** the status code.\n\n```C\nMHCUDAResult mhcuda_fini(MinhashCudaGenerator *gen);\n```\nFrees any resources allocated by mhcuda_init() and mhcuda_calc(), including device buffers.\nThe generator pointer is invalidated.\n\n**gen** pointer to the generator opaque struct obtained from mhcuda_init().\n\n**return** the status code.\n","funding_links":[],"categories":["Software"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsrc-d%2Fminhashcuda","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fsrc-d%2Fminhashcuda","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsrc-d%2Fminhashcuda/lists"}