https://github.com/navdeep-g/robust-random-cut-forest
Implementation of the Robust Random Cut Forest algorithm for anomaly detection
- Host: GitHub
- URL: https://github.com/navdeep-g/robust-random-cut-forest
- Owner: navdeep-G
- License: apache-2.0
- Created: 2021-01-17T16:47:59.000Z (over 4 years ago)
- Default Branch: main
- Last Pushed: 2024-12-27T02:01:24.000Z (5 months ago)
- Last Synced: 2025-03-22T17:44:17.353Z (about 2 months ago)
- Topics: machine-learning, outlier-detector, outliers, python, random-forest, robust-random-cut-forest
- Language: Python
- Homepage:
- Size: 28.3 KB
- Stars: 7
- Watchers: 2
- Forks: 1
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Robust Random Cut Forests
This repository contains an implementation of the `Robust Random Cut Forest` anomaly detection model. This model attempts to find anomalies by seeking out points whose structure is not consistent with the rest of the data set. The `random_cut_forest` folder contains the `RandomCutForest` algorithm while the `notebooks` folder contains Jupyter notebooks showing examples leveraging the module.
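To make that intuition concrete, the sketch below shows a toy, one-dimensional version of the isolation idea that both Isolation Forests and Robust Random Cut Forests build on: points far from the bulk of the data tend to be separated by only a few random cuts. This is illustrative Python only, not the algorithm as implemented in this repository.

```python
import random

def isolation_depth(points, target, depth=0):
    """Number of recursive random cuts needed before `target` is isolated.
    Outlying points are typically isolated after only a few cuts."""
    if len(points) <= 1:
        return depth
    cut = random.uniform(min(points), max(points))
    # Keep only the points that fall on the same side of the cut as `target`.
    same_side = [p for p in points if (p <= cut) == (target <= cut)]
    return isolation_depth(same_side, target, depth + 1)

data = [0.9, 0.95, 1.0, 1.1, 1.2, 10.0]  # 10.0 is an obvious outlier
print(isolation_depth(data, 10.0))  # usually small: the outlier splits off quickly
print(isolation_depth(data, 1.0))   # usually larger: this point is buried in the cluster
```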
## Contributing
If you want to contribute to this repo, simply submit a pull request.

## Getting Started
### Installation
To install the package you can do any of the following:

- Run the command `pip install ...`
### Using RobustRandomCutForests
Using a `RobustRandomCutForest` to classify potential anomalies in your data is simple. Assuming you already have a vector of data stored in `X`, you would run the following:

```python
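# Illustration only (an assumption, not from the original README): a toy
# vector of mostly normal values plus one injected outlier to stand in for `X`.
import numpy as np
X = np.concatenate([np.random.normal(size=100), [10.0]])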
from robust_random_cut_forest import robust_random_cut_forest
forest = robust_random_cut_forest.RobustRandomCutForest()
forest = forest.fit(X)
```

From there you can get the normalized depths (scores) of each point within the forest by calling `decision_function`, or have the forest label potential anomalies by calling `predict`:
```python
depths = forest.decision_function(X)
labels = forest.predict(X)
```

The function `decision_function` will return an array of numbers ranging from zero to one. The lower the number, the more anomalous the point appears (this follows how sklearn implements scoring). By default, any point given a score of `0.3` or below is labelled as an anomaly. To stream new points into your forest, simply call the `add_point` method:
```python
# Given an array of points....
for point in points:
    forest.add_point(point)
depths = forest.decision_function(points)
labels = forest.predict(points)
```
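As a rough sanity check (not part of the original README), the default labelling described above can be approximated by thresholding the scores yourself, assuming `decision_function` returns a NumPy-compatible array and that `0.3` is indeed the default cutoff:

```python
import numpy as np

# Assumes `forest` and `points` from the snippets above.
scores = np.asarray(forest.decision_function(points))

# Lower scores are more anomalous, with 0.3 as the stated default cutoff,
# so this mask should roughly agree with the labels returned by predict().
manual_anomalies = scores <= 0.3
print(f"{manual_anomalies.sum()} of {len(points)} points look anomalous")
```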
## Testing

All tests are written using `pytest`. Simply `pip install pytest` to be able to run the tests. All tests are located under the `tests` folder. Any new tests are always welcome!
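As an illustration of what such a test might look like (a hypothetical sketch, assuming the import path and API shown above rather than anything taken from the repository's `tests` folder):

```python
# tests/test_smoke.py -- hypothetical example, not an actual test from this repo
import numpy as np
from robust_random_cut_forest import robust_random_cut_forest

def test_fit_and_predict_return_one_result_per_point():
    X = np.random.normal(size=100)
    forest = robust_random_cut_forest.RobustRandomCutForest().fit(X)
    assert len(forest.predict(X)) == len(X)
    assert len(forest.decision_function(X)) == len(X)
```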
## Articles

* For more information on Robust Random Cut Forests, see Guha et al.'s 2016 paper, which can be found [here](http://jmlr.org/proceedings/papers/v48/guha16.pdf).
* The original isolation forest paper can be found [here](http://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/icdm08b.pdf).
* Isolation Forests have been implemented in [sklearn](http://scikit-learn.org/dev/modules/generated/sklearn.ensemble.IsolationForest.html).
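For comparison, sklearn's `IsolationForest` exposes a similar `fit` / `decision_function` / `predict` workflow, although its conventions differ: `decision_function` returns scores where negative values indicate likely anomalies, and `predict` returns `-1` for anomalies and `1` for normal points. A minimal example using sklearn's documented API (not this repository's code):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy data: a Gaussian blob plus one injected outlier.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(100, 2)), [[10.0, 10.0]]])

iso = IsolationForest(n_estimators=100, random_state=0).fit(X)
scores = iso.decision_function(X)  # negative scores suggest anomalies
labels = iso.predict(X)            # -1 for anomalies, 1 for normal points
print("flagged indices:", np.where(labels == -1)[0])
```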