https://github.com/shaoxiongji/federated-learning
A PyTorch Implementation of Federated Learning http://doi.org/10.5281/zenodo.4321561
- Host: GitHub
- URL: https://github.com/shaoxiongji/federated-learning
- Owner: shaoxiongji
- License: mit
- Created: 2018-03-30T10:44:46.000Z (over 6 years ago)
- Default Branch: master
- Last Pushed: 2024-07-25T10:13:30.000Z (5 months ago)
- Last Synced: 2024-12-12T15:02:28.191Z (13 days ago)
- Topics: deep-learning, federated-learning, pytorch
- Language: Python
- Homepage: http://doi.org/10.5281/zenodo.4321561
- Size: 30.3 KB
- Stars: 1,304
- Watchers: 15
- Forks: 373
- Open Issues: 17
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-federated-computing - PyTorch Federated Learning - Github
README
# Federated Learning [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4321561.svg)](https://doi.org/10.5281/zenodo.4321561)
This is a partial reproduction of the paper [Communication-Efficient Learning of Deep Networks from Decentralized Data](https://arxiv.org/abs/1602.05629).
So far, only experiments on MNIST and CIFAR-10 (both IID and non-IID) are included. Note: the scripts will be slow without a parallel-computing implementation.
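For background, the non-IID setting in the FedAvg paper is built by sorting the training data by label, cutting it into shards, and handing each client a few shards, so each client only sees a couple of classes. Below is a minimal illustrative sketch of such a partition; the function name and defaults are assumptions for this example, not the repo's exact code.

```python
import numpy as np

def noniid_partition(labels, num_clients=100, shards_per_client=2):
    """Label-sorted shard partition in the style of the FedAvg paper.

    Sort example indices by label, cut them into equally sized shards,
    and give each client a few shards, so most clients only see a
    couple of classes. (Illustrative sketch, not the repo's code.)
    """
    num_shards = num_clients * shards_per_client
    idx_sorted = np.argsort(labels)                 # group indices by label
    shards = np.array_split(idx_sorted, num_shards)
    shard_ids = np.random.permutation(num_shards)   # shuffle shard assignment
    return {
        c: np.concatenate([shards[s] for s in
                           shard_ids[c * shards_per_client:(c + 1) * shards_per_client]])
        for c in range(num_clients)
    }
```

With 100 clients and 2 shards per client on MNIST (60,000 examples), each client ends up with two shards of 300 examples, each dominated by at most two digits.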
## Requirements
python>=3.6
pytorch>=0.4

## Run
The MLP and CNN models are trained with:

> python [main_nn.py](main_nn.py)

Federated learning with the MLP and CNN models is run with:

> python [main_fed.py](main_fed.py)

See the arguments in [options.py](utils/options.py). For example:

> python main_fed.py --dataset mnist --iid --num_channels 1 --model cnn --epochs 50 --gpu 0

Pass `--all_clients` to average over all client models.
NB: for CIFAR-10, `num_channels` must be 3.
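The server-side update in FedAvg is an average of the client model weights. Here is a minimal sketch of that averaging step over PyTorch `state_dict`s; it assumes every client shares the same architecture, and the function name is illustrative rather than necessarily identical to the repo's implementation.

```python
import copy
import torch

def average_weights(client_states):
    """Uniformly average a list of model state_dicts (FedAvg step).

    Assumes every client returns a state_dict with identical keys and
    tensor shapes. Illustrative sketch only.
    """
    avg = copy.deepcopy(client_states[0])
    for key in avg.keys():
        for state in client_states[1:]:
            avg[key] += state[key]
        avg[key] = torch.div(avg[key], len(client_states))
    return avg
```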
## Results
### MNIST
Results are shown in Table 1 and Table 2 with the parameters C=0.1, B=10, and E=5 (a sketch after Table 2 illustrates how these parameters enter the training loop).

Table 1. Results of 10 epochs of training with a learning rate of 0.01
| Model | Acc. of IID | Acc. of Non-IID|
| ----- | ----- | ---- |
| FedAVG-MLP| 94.57% | 70.44% |
| FedAVG-CNN| 96.59% | 77.72% |

Table 2. Results of 50 epochs of training with a learning rate of 0.01
| Model | Acc. of IID | Acc. of Non-IID|
| ----- | ----- | ---- |
| FedAVG-MLP| 97.21% | 93.03% |
| FedAVG-CNN| 98.60% | 93.81% |
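For context, C is the fraction of clients sampled per communication round, B the local mini-batch size, and E the number of local epochs. The sketch below shows roughly how they drive one FedAvg round, reusing `average_weights` from the sketch in the Run section; the function names, arguments, and structure are illustrative assumptions, not the repo's exact code.

```python
import numpy as np
from torch.utils.data import DataLoader, Subset

def one_round(global_model, dataset, client_indices, local_update,
              C=0.1, B=10, E=5):
    """Run one FedAvg communication round (illustrative sketch).

    Sample a fraction C of clients, train each for E local epochs on
    mini-batches of size B, then average the returned weights.
    """
    num_clients = len(client_indices)
    m = max(int(C * num_clients), 1)               # clients sampled this round
    chosen = np.random.choice(num_clients, m, replace=False)

    client_states = []
    for c in chosen:
        loader = DataLoader(Subset(dataset, list(client_indices[c])),
                            batch_size=B, shuffle=True)
        # `local_update` trains a copy of the global model for E epochs
        # and returns its state_dict (left abstract here).
        client_states.append(local_update(global_model, loader, epochs=E))

    # `average_weights` is the FedAvg averaging sketch shown above.
    global_model.load_state_dict(average_weights(client_states))
    return global_model
```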
## Acknowledgements

Acknowledgements to [youkaichao](https://github.com/youkaichao).

## References
McMahan, Brendan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Artificial Intelligence and Statistics (AISTATS), 2017.

## Cite As
Shaoxiong Ji. (2018, March 30). A PyTorch Implementation of Federated Learning. Zenodo. http://doi.org/10.5281/zenodo.4321561