https://github.com/uogbuji/mlx-notes
Shared personal notes created while working with the Apple MLX machine learning framework
- Host: GitHub
- URL: https://github.com/uogbuji/mlx-notes
- Owner: uogbuji
- License: cc-by-4.0
- Created: 2024-03-20T05:49:30.000Z (8 months ago)
- Default Branch: main
- Last Pushed: 2024-06-21T14:56:25.000Z (5 months ago)
- Last Synced: 2024-06-22T09:33:30.012Z (5 months ago)
- Topics: apple, articles, documentation, jupyter-notebooks, large-language-models, llm, machine-learning, mlx, python
- Language: Jupyter Notebook
- Homepage: https://ucheog.carrd.co/
- Size: 13.8 MB
- Stars: 13
- Watchers: 3
- Forks: 0
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
# Notes on the Apple MLX machine learning framework
## Apple MLX for AI/Large Language Models—Day One
- Rendered markdown
- LinkedIn version
- Jupyter notebook
## Converting models from Hugging Face to MLX format, and sharing
- Rendered markdown
- LinkedIn version
- Jupyter notebook
## Retrieval augmentation with MLX: A bag full of RAG, part 1
- Rendered markdown
- LinkedIn version
- Jupyter notebook
## Retrieval augmentation with MLX: A bag full of RAG, part 2
- Rendered markdown
- LinkedIn version
- Jupyter notebook
# More (up-to-date) MLX resources
* [MLX home page](https://github.com/ml-explore/mlx)
* [Hugging Face MLX community](https://huggingface.co/mlx-community)
* [Using MLX at Hugging Face](https://huggingface.co/docs/hub/en/mlx)
* [MLX Text-completion Finetuning Notebook](https://github.com/mark-lord/MLX-text-completion-notebook)
* [MLX Tuning Fork—framework for parameterized (Q)LoRA fine-tuning of large language models using mlx, mlx_lm, and OgbujiPT, with an architecture for systematically running easily parameterized fine-tunes](https://github.com/chimezie/mlx-tuning-fork)

# A few general notes
* For the many chat formats already charted out in llama.cpp, see the `@register_chat_format` decorated functions in [llama_chat_format.py](https://github.com/abetlen/llama-cpp-python/blob/main/llama_cpp/llama_chat_format.py)
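To make concrete what those chat formats encode, here is a minimal hand-rolled ChatML-style formatter. This is a sketch only: the function name is illustrative, and llama.cpp's registered formats handle many more template variants and stop-token conventions than this.

```python
def format_chatml(messages):
    """Render a list of {"role": ..., "content": ...} dicts as a
    ChatML-style prompt, ending with an open assistant turn for
    the model to complete."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    parts.append("<|im_start|>assistant")
    return "\n".join(parts)

prompt = format_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is MLX?"},
])
```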
## To do, or figure out
* [Any grammar/EBNF support à la llama.cpp](https://christophergs.com/blog/running-open-source-llms-in-python#grammar)?
* Alternate LLM sampling methods
* Steering vectors

# Syncing articles to notebooks
Use [Jupytext](https://jupytext.readthedocs.io/en/latest/) to convert the `.md` articles to `.ipynb` notebooks:
```sh
jupytext --to ipynb 2024/MLX-day-one.md
```
You may have to convert cells that use plain `pip` to use the `%pip` magic instead. Jupytext also doesn't seem to check the format metadata, so you might need to convert non-Python cells back to Markdown by hand.
# License
Shield: [![CC BY 4.0][cc-by-shield]][cc-by]
This work is licensed under a
[Creative Commons Attribution 4.0 International License][cc-by].

[![CC BY 4.0][cc-by-image]][cc-by]
[cc-by]: http://creativecommons.org/licenses/by/4.0/
[cc-by-image]: https://i.creativecommons.org/l/by/4.0/88x31.png
[cc-by-shield]: https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg

See also: https://github.com/santisoler/cc-licenses?tab=readme-ov-file#cc-attribution-40-international