https://github.com/xtra-computing/vertibench
Feature partitioner by imbalance or correlation (ICLR 2024)
- Host: GitHub
- URL: https://github.com/xtra-computing/vertibench
- Owner: Xtra-Computing
- License: apache-2.0
- Created: 2023-12-06T11:30:40.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2025-01-15T12:05:48.000Z (about 1 year ago)
- Last Synced: 2025-04-10T15:23:13.765Z (12 months ago)
- Language: Jupyter Notebook
- Homepage: https://vertibench.xtra.science
- Size: 94 MB
- Stars: 16
- Watchers: 8
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# VertiBench: Vertical Federated Learning Benchmark
## Introduction
VertiBench is a benchmark for [federated learning](https://ieeexplore.ieee.org/abstract/document/9599369/), [split learning](https://arxiv.org/abs/1912.12115), and [assisted learning](https://proceedings.neurips.cc/paper_files/paper/2022/hash/4d6938f94ab47d32128c239a4bfedae0-Abstract-Conference.html) on vertically partitioned data. It provides tools to synthesize vertically partitioned data from a given global dataset, and supports partitioning under various **imbalance** and **correlation** levels, effectively simulating a wide range of real-world vertical federated learning scenarios.

## Installation
VertiBench is published on PyPI and requires Python 3.9 or later. To install it, run:
```bash
pip install vertibench
```
## Getting Started
This example walks through the split-and-evaluate pipeline. First,
load your own dataset or generate a synthetic one.
```python
from sklearn.datasets import make_classification
# Generate a large dataset
X, y = make_classification(n_samples=10000, n_features=10)
```
To split the dataset by importance,
```python
from vertibench.Splitter import ImportanceSplitter
imp_splitter = ImportanceSplitter(num_parties=4, weights=[1, 1, 1, 3])
Xs = imp_splitter.split(X)
```
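The `weights` parameter controls how unevenly informative features are distributed: a party with a larger weight is expected to receive a larger share of the features. As a rough illustration of this idea (a minimal numpy sketch of Dirichlet-based allocation, not the library's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))   # stand-in for the global dataset
weights = np.array([1.0, 1.0, 1.0, 3.0])

# Sample per-party feature shares from a Dirichlet distribution;
# a larger weight gives that party a larger expected share of features.
shares = rng.dirichlet(weights)

# Randomly assign each feature column to a party according to the shares.
party_of_feature = rng.choice(len(weights), size=X.shape[1], p=shares)
Xs = [X[:, party_of_feature == p] for p in range(len(weights))]
```

Every sample appears in every party's block; only the feature columns are divided, which is what makes the partition "vertical".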
To split the dataset by correlation,
```python
from vertibench.Splitter import CorrelationSplitter
corr_splitter = CorrelationSplitter(num_parties=4)
Xs = corr_splitter.fit_split(X)
```
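The correlation level being controlled here captures how strongly features held by different parties co-vary. For intuition, a simple proxy (not VertiBench's actual correlation metric) is the mean absolute Pearson correlation over all cross-party feature pairs:

```python
import numpy as np

def inter_party_corr(Xa, Xb):
    """Mean absolute Pearson correlation over all feature pairs
    drawn from two different parties (illustrative proxy only)."""
    da = Xa.shape[1]
    # Column-wise correlation of the stacked features; keep the
    # cross-party block only.
    cross = np.corrcoef(Xa, Xb, rowvar=False)[:da, da:]
    return np.abs(cross).mean()

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
high = inter_party_corr(x, 2 * x)                      # perfectly correlated
low = inter_party_corr(x, rng.normal(size=(200, 1)))   # ~independent
```

A split with high inter-party correlation means the parties hold redundant information; a low value means each party contributes largely independent features.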
To evaluate a feature split `Xs` in terms of party importance,
```python
from vertibench.Evaluator import ImportanceEvaluator
from sklearn.linear_model import LogisticRegression
import numpy as np

# Train a model on the concatenated (global) feature matrix
model = LogisticRegression()
X = np.concatenate(Xs, axis=1)
model.fit(X, y)

# Score each party's contribution to the model's predictions
imp_evaluator = ImportanceEvaluator()
imp_scores = imp_evaluator.evaluate(Xs, model.predict)
alpha = imp_evaluator.evaluate_alpha(scores=imp_scores)
print(f"Importance scores: {imp_scores}, alpha: {alpha}")
```
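For intuition about what party importance means, a much lighter-weight proxy than VertiBench's evaluator is party-level permutation importance: shuffle one party's feature block and measure the accuracy drop. A minimal sketch (illustrative only, not the library's actual method):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def party_permutation_importance(model, Xs, y, n_repeats=5, seed=0):
    """Accuracy drop when one party's feature block is shuffled:
    a rough proxy for that party's importance."""
    rng = np.random.default_rng(seed)
    X = np.concatenate(Xs, axis=1)
    base = model.score(X, y)
    offsets = np.cumsum([0] + [x.shape[1] for x in Xs])
    scores = []
    for p in range(len(Xs)):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            cols = slice(offsets[p], offsets[p + 1])
            # Break the link between this party's features and the labels.
            Xp[:, cols] = Xp[rng.permutation(len(Xp)), cols]
            drops.append(base - model.score(Xp, y))
        scores.append(float(np.mean(drops)))
    return scores

# Two-party toy example: party 0 holds informative features, party 1 noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
X0 = y[:, None] + 0.1 * rng.normal(size=(500, 2))  # informative party
X1 = rng.normal(size=(500, 3))                     # pure-noise party
model = LogisticRegression().fit(np.concatenate([X0, X1], axis=1), y)
scores = party_permutation_importance(model, [X0, X1], y)
```

In this toy setup, shuffling party 0's block should hurt accuracy far more than shuffling party 1's, so party 0 receives the larger importance score.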
To evaluate a feature split in terms of correlation,
```python
from vertibench.Evaluator import CorrelationEvaluator
corr_evaluator = CorrelationEvaluator()
corr_scores = corr_evaluator.fit_evaluate(Xs)
beta = corr_evaluator.evaluate_beta()
print(f"Correlation scores: {corr_scores}, beta: {beta}")
```