Mini-batch and distributed SAGA
- Host: GitHub
- URL: https://github.com/mdeff/saga
- Owner: mdeff
- License: MIT
- Created: 2016-06-06T13:19:09.000Z (almost 9 years ago)
- Default Branch: master
- Last Pushed: 2020-04-18T15:59:34.000Z (about 5 years ago)
- Last Synced: 2025-01-22T14:08:38.510Z (3 months ago)
- Topics: convex-optimization, gradient-descent
- Language: TeX
- Size: 1.75 MB
- Stars: 6
- Watchers: 5
- Forks: 4
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE.txt
README
# Mini-batch and distributed SAGA
[Michaël Defferrard](https://deff.ch), [Soroosh Shafiee](https://sorooshafiee.github.io)

This small project explored two approaches to improve the SAGA incremental gradient algorithm:

1. Take gradients over mini-batches to reduce the memory requirement.
2. Compute gradients in parallel on multiple CPU cores to speed it up.
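The first idea can be illustrated with a minimal NumPy sketch on a least-squares objective. The function name, parameters, and objective below are purely illustrative and do not reflect the repo's actual [Python implementation](./mini_batch); the point is only that the gradient table holds one entry per mini-batch (`n / batch_size` entries) instead of one per sample (`n` entries), as in plain SAGA.

```python
import numpy as np

def minibatch_saga(A, b, batch_size=10, step=1e-3, n_iter=5000, seed=0):
    """Mini-batch SAGA sketch for f(x) = ||A x - b||^2 / (2 n).

    Plain SAGA stores one past gradient per sample; storing one per
    mini-batch shrinks the table by a factor of batch_size.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    assert n % batch_size == 0, "equal-size batches keep the sketch simple"
    batches = np.arange(n).reshape(-1, batch_size)
    m = len(batches)
    x = np.zeros(d)
    table = np.zeros((m, d))  # one stored gradient per mini-batch
    avg = np.zeros(d)         # running mean of the stored gradients
    for _ in range(n_iter):
        j = rng.integers(m)
        idx = batches[j]
        g = A[idx].T @ (A[idx] @ x - b[idx]) / batch_size
        # SAGA step: unbiased, variance-reduced gradient estimate.
        x -= step * (g - table[j] + avg)
        avg += (g - table[j]) / m  # keep the running mean in sync
        table[j] = g
    return x
```

With `batch_size=10`, the table needs a tenth of the memory of per-sample SAGA; see the report below for the actual method and its experimental behavior.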
## Content

See our [proposal](proposal.pdf), [report](report.pdf), and [presentation](presentation.pdf) for an exposition of the methods and some experimental results.

You'll also find a [Python implementation](./mini_batch) of our mini-batch approach, as well as a [MATLAB implementation](./distributed) of our distributed approach.
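The parallel idea can likewise be sketched in a few lines, though only loosely: the snippet below evaluates one full gradient synchronously across worker processes, whereas the actual MATLAB implementation may organize the computation quite differently. `_chunk_gradient` and `parallel_gradient` are hypothetical names used for illustration.

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np

def _chunk_gradient(args):
    """Least-squares gradient sum over one chunk (runs in a worker)."""
    A_c, b_c, x = args
    return A_c.T @ (A_c @ x - b_c)

def parallel_gradient(A, b, x, n_workers=4):
    """Full gradient of ||A x - b||^2 / (2 n), split across CPU cores."""
    chunks = zip(np.array_split(A, n_workers), np.array_split(b, n_workers))
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        partial = pool.map(_chunk_gradient, [(A_c, b_c, x) for A_c, b_c in chunks])
        # The per-chunk sums add up exactly to the full (unnormalized) gradient.
        return sum(partial) / len(b)
```

Note that on platforms that spawn worker processes (Windows, recent macOS), the call must sit under an `if __name__ == "__main__":` guard.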