https://github.com/build-on-aws/compilation-optimization-of-sagemaker
How to Speed Up Open Source LLMs Training and Cut Down Billing Time with Amazon SageMaker
- Host: GitHub
- URL: https://github.com/build-on-aws/compilation-optimization-of-sagemaker
- Owner: build-on-aws
- License: mit-0
- Created: 2023-07-28T20:15:16.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2023-09-01T13:57:08.000Z (almost 2 years ago)
- Last Synced: 2025-02-18T01:39:28.310Z (4 months ago)
- Topics: aws, huggingface, llm, sagemaker
- Language: Python
- Homepage: https://community.aws/posts/llm-compiler-optimization-gpt2
- Size: 19.5 KB
- Stars: 2
- Watchers: 5
- Forks: 0
- Open Issues: 2
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
README
## Open Source LLMs Optimization with Amazon SageMaker
In this project, we'll explain how to use Amazon SageMaker to optimize the training and inference of open source large language models (LLMs).
1/ How to Speed Up Open Source LLMs Training and Cut Down Billing Time with Amazon SageMaker
Model: GPT-2
Corresponding subdirectory: /gpt2 (see the first sketch after this list)

2/ Easily Deploy Falcon-40B Open Source LLM with Amazon SageMaker JumpStart
Model: Falcon-40B
Corresponding subdirectory: TBC (see the second sketch after this list)

3/ Fine-tune Falcon-40B Open Source LLM Using 4bit QLoRA and PEFT
Model: Falcon-40B
Corresponding subdirectory: TBC (see the third sketch after this list)
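For the first tutorial, the sketch below shows roughly how a GPT-2 training job can be launched with SageMaker Training Compiler enabled through the Hugging Face estimator. The entry point, instance type, framework versions, and hyperparameters are assumptions for illustration; the /gpt2 subdirectory and the companion post contain the actual setup.

```python
# Minimal sketch, not the project's exact script: enable SageMaker Training Compiler
# on a Hugging Face GPT-2 training job. Names like train.py and the version pins
# below are assumptions; check the /gpt2 subdirectory for the real values.
import sagemaker
from sagemaker.huggingface import HuggingFace, TrainingCompilerConfig

role = sagemaker.get_execution_role()

estimator = HuggingFace(
    entry_point="train.py",            # hypothetical training script name
    source_dir="./gpt2",               # subdirectory listed above
    role=role,
    instance_type="ml.p3.2xlarge",     # a GPU type supported by Training Compiler
    instance_count=1,
    transformers_version="4.21",       # must be a version pair supported by the compiler
    pytorch_version="1.11",
    py_version="py38",
    hyperparameters={
        "model_name_or_path": "gpt2",
        "epochs": 3,
        # The compiler frees GPU memory, which usually allows a larger batch size.
        "per_device_train_batch_size": 24,
    },
    # The key change: compile the training graph so each GPU does more work per second.
    compiler_config=TrainingCompilerConfig(),
)

# Pass your S3 data channels to fit() as usual; omitted here.
estimator.fit()
```

Since SageMaker bills per instance-second, the shorter wall-clock time from compilation translates directly into less billable training time.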
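For the second tutorial, deploying Falcon-40B through SageMaker JumpStart can be sketched as below; the model_id, instance type, and payload format are assumptions to verify against the JumpStart catalog, since the corresponding subdirectory is still TBC.

```python
# Minimal sketch, assuming the JumpStart model ID shown here; look up the current
# Falcon-40B ID and the recommended instance type in the JumpStart catalog.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="huggingface-llm-falcon-40b-instruct-bf16")

# Falcon-40B needs a large multi-GPU instance; ml.g5.12xlarge is an assumption.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.12xlarge",
)

# Text-generation style payload; the exact schema depends on the serving container.
response = predictor.predict({
    "inputs": "What is Amazon SageMaker?",
    "parameters": {"max_new_tokens": 128},
})
print(response)
```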
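For the third tutorial, the sketch below prepares Falcon-40B for 4-bit QLoRA fine-tuning with Hugging Face PEFT; the LoRA hyperparameters and target modules are illustrative assumptions, not the project's final configuration.

```python
# Minimal sketch, assuming the Hugging Face transformers/peft/bitsandbytes stack:
# load Falcon-40B in 4-bit NF4 and attach LoRA adapters so only a small set of
# parameters is trained (QLoRA). The hyperparameters below are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "tiiuae/falcon-40b"

# 4-bit NF4 quantization with double quantization (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

# Attach small trainable LoRA adapters; the frozen 4-bit base model stays untouched.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["query_key_value"],  # Falcon's fused attention projection
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a tiny fraction of weights is trainable
```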
## Security

See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
## License
This library is licensed under the MIT-0 License. See the LICENSE file.