Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/samuela/git-re-basin
Code release for "Git Re-Basin: Merging Models modulo Permutation Symmetries"
- Host: GitHub
- URL: https://github.com/samuela/git-re-basin
- Owner: samuela
- License: MIT
- Created: 2022-09-13T08:19:54.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2023-03-07T23:50:34.000Z (over 1 year ago)
- Last Synced: 2024-10-12T00:30:01.303Z (28 days ago)
- Topics: deep-learning, deeplearning, jax, machine-learning, neural-networks
- Language: Python
- Homepage: https://arxiv.org/abs/2209.04836
- Size: 1.7 MB
- Stars: 464
- Watchers: 8
- Forks: 40
- Open Issues: 7
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-llm-interpretability - Git Re-Basin - Code release for "Git Re-Basin: Merging Models modulo Permutation Symmetries" (Table of Contents / LLM Interpretability Tools)
README
# Git Re-Basin: Merging Models modulo Permutation Symmetries
![Video demonstrating the effect of our permutation matching algorithm on the loss landscape throughout training.](mnist_video.gif)
Code for the paper [Git Re-Basin: Merging Models modulo Permutation Symmetries](https://arxiv.org/abs/2209.04836).
Abstract:
> The success of deep learning is thanks to our ability to solve certain massive non-convex optimization problems with relative ease. Despite non-convex optimization being NP-hard, simple algorithms -- often variants of stochastic gradient descent -- exhibit surprising effectiveness in fitting large neural networks in practice. We argue that neural network loss landscapes contain (nearly) a single basin, after accounting for all possible permutation symmetries of hidden units. We introduce three algorithms to permute the units of one model to bring them into alignment with units of a reference model. This transformation produces a functionally equivalent set of weights that lie in an approximately convex basin near the reference model. Experimentally, we demonstrate the single basin phenomenon across a variety of model architectures and datasets, including the first (to our knowledge) demonstration of zero-barrier linear mode connectivity between independently trained ResNet models on CIFAR-10 and CIFAR-100. Additionally, we identify intriguing phenomena relating model width and training time to mode connectivity across a variety of models and datasets. Finally, we discuss shortcomings of a single basin theory, including a counterexample to the linear mode connectivity hypothesis.
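The core idea is to permute the hidden units of one trained model so that they line up with the units of a reference model, and then interpolate the aligned weights. Below is a minimal sketch of that idea for a one-hidden-layer MLP, using a linear assignment solver to pick the permutation. This is an illustrative example in NumPy/SciPy, not the repository's actual implementation (which is written in JAX and handles deeper architectures); the function names `match_hidden_units`, `apply_permutation`, and `interpolate` are made up for this sketch.

```python
# Hedged sketch of permutation-based weight matching for a tiny MLP
#   y = W2 @ relu(W1 @ x + b1)
# Assumption: aligning hidden units by maximizing weight similarity, then
# linearly interpolating, roughly mirrors the "weight matching" idea in the paper.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_hidden_units(params_a, params_b):
    """Return a permutation of B's hidden units that best matches A's units."""
    W1_a, b1_a, W2_a = params_a
    W1_b, b1_b, W2_b = params_b
    # Similarity between A's unit i and B's unit j, summed over the incoming
    # (W1) and outgoing (W2) weights that touch each hidden unit.
    similarity = W1_a @ W1_b.T + W2_a.T @ W2_b
    # Hungarian algorithm: choose the one-to-one matching with maximal total similarity.
    _, col = linear_sum_assignment(similarity, maximize=True)
    return col  # A's unit i is matched with B's unit col[i]

def apply_permutation(params, perm):
    """Re-index the hidden units of (W1, b1, W2) according to `perm`."""
    W1, b1, W2 = params
    return W1[perm], b1[perm], W2[:, perm]

def interpolate(params_a, params_b, lam=0.5):
    """Element-wise linear interpolation between two parameter sets."""
    return tuple((1 - lam) * a + lam * b for a, b in zip(params_a, params_b))

# Usage with random weights standing in for two independently trained models:
rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 10, 32, 3
params_a = (rng.normal(size=(d_hidden, d_in)), rng.normal(size=d_hidden),
            rng.normal(size=(d_out, d_hidden)))
params_b = (rng.normal(size=(d_hidden, d_in)), rng.normal(size=d_hidden),
            rng.normal(size=(d_out, d_hidden)))
perm = match_hidden_units(params_a, params_b)
params_b_aligned = apply_permutation(params_b, perm)
merged = interpolate(params_a, params_b_aligned, lam=0.5)
```

Because permuting hidden units (and re-indexing the weights that feed into and out of them) leaves the network's function unchanged, the aligned model `params_b_aligned` computes the same outputs as `params_b`; the claim tested in the paper is that, after this alignment, points on the line between the two models stay in a low-loss region.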