Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/dahlitzflorian/ai-on-ibm-power-playbooks
Collection of Ansible playbooks helping to set up IBM Power environments for AI workloads
- Host: GitHub
- URL: https://github.com/dahlitzflorian/ai-on-ibm-power-playbooks
- Owner: DahlitzFlorian
- License: apache-2.0
- Created: 2024-09-16T08:23:29.000Z (4 months ago)
- Default Branch: main
- Last Pushed: 2024-12-20T10:17:11.000Z (about 1 month ago)
- Last Synced: 2024-12-20T11:22:06.309Z (about 1 month ago)
- Topics: ai, ansible, ansible-playbooks, ibm-power, playbooks, ppc64le
- Language: Python
- Homepage:
- Size: 95.7 KB
- Stars: 0
- Watchers: 1
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# AI on IBM Power Playbooks
## Description
This repository contains Ansible playbooks that help set up IBM Power environments for running AI workloads.
## Usage
If Ansible is not already installed on your local machine, you can install it via pip ([more information](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html)):
```shell
pip install ansible
```

Make sure to adjust the example inventory file before using it.
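For orientation, a minimal inventory might look like the following sketch; the host address, remote user, and paths are placeholder assumptions, and the full set of supported parameters is described under Configuration below:

```yaml
# example-inventory.yml -- illustrative values only, adapt to your environment
all:
  hosts:
    power-node:
      ansible_host: 192.0.2.10          # assumed address of the IBM Power machine
      ansible_user: root                # assumed remote user
      working_directory: /root/ai       # created if it does not exist
      conda_dir: /root/micromamba       # used as Micromamba root_prefix/prefix
      python_version: "3.11"
      model_repository: QuantFactory/Meta-Llama-3-8B-GGUF
      model_file: Meta-Llama-3-8B.Q8_0.gguf
      auto_start: true
```

Then run a playbook against the inventory: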
```shell
ansible-playbook -i example-inventory.yml playbooks/basic-llama.cpp.yml
```

## Configuration
The following parameters can be specified in the inventory:
- **auto_start** (*boolean*): If set to `true`, the llama.cpp server is started by Ansible.
- **conda_dir** (*string*): Path used as Micromamba's `root_prefix` and `prefix`. Will be created if it does not exist.
- **detached** (*boolean*): If set to `true`, the llama.cpp server is started in the background so that the playbook can finish without terminating the process. Ignored if `auto_start` is set to `false`.
- **micromamba_location** (*string*): Path where the Micromamba binary is stored.
- **model_repository** (*string*): Hugging Face repository name, e.g. `QuantFactory/Meta-Llama-3-8B-GGUF`.
- **model_file** (*string*): Specific file in a llama.cpp-supported format to download from the given `model_repository`, e.g. `Meta-Llama-3-8B.Q8_0.gguf`.
- **python_version** (*string*): Python version number, e.g. `3.11`.
- **working_directory** (*string*): Path to the working directory. Will be created if it does not exist.
- **llama_cpp_args** (*dictionary*): Key-value pairs passed to `llama-server` in the format `-KEY VALUE`. For parameters without an additional value, like `-v`, leave the value blank.
- **llama_cpp_argv** (*dictionary*): Key-value pairs passed to `llama-server` in the format `--KEY VALUE`. For parameters without an additional value, like `--verbose`, leave the value blank.
- **uvicorn_cpp_args** (*dictionary*): Key-value pairs passed to `uvicorn` in the format `-KEY VALUE`. For parameters without an additional value, like `-v`, leave the value blank.
- **uvicorn_cpp_argv** (*dictionary*): Key-value pairs passed to `uvicorn` in the format `--KEY VALUE`. For parameters without an additional value, like `--verbose`, leave the value blank.
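As an illustration of how the two dictionary styles are rendered (a sketch assuming the pairs are concatenated as described above; the flags and values are examples, not defaults):

```yaml
llama_cpp_args:
  t: 8              # rendered as: -t 8
  v: ""             # rendered as: -v (flag without a value)
llama_cpp_argv:
  ctx-size: 4096    # rendered as: --ctx-size 4096
  verbose: ""       # rendered as: --verbose (flag without a value)
# resulting invocation, roughly: llama-server -t 8 -v --ctx-size 4096 --verbose
```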
## Additional information

Are you looking for Python scripts that help you interact with an OpenAI-compatible LLM instance? Check out [DahlitzFlorian/python-openai-client-example](https://github.com/DahlitzFlorian/python-openai-client-example).
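For a quick manual check without those scripts, a minimal sketch using the `openai` Python package could look like this (the port assumes `llama-server`'s default of 8080, and the model name is an example; adapt both to your deployment):

```python
# Minimal sketch: query a local llama-server via its OpenAI-compatible API.
from openai import OpenAI

# llama-server listens on port 8080 by default; the API key is unused locally.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

response = client.chat.completions.create(
    model="Meta-Llama-3-8B.Q8_0.gguf",  # example name, match your model_file
    messages=[{"role": "user", "content": "Hello from IBM Power!"}],
)
print(response.choices[0].message.content)
```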