{"id":23816241,"url":"https://github.com/andrewn6/tinyllm","last_synced_at":"2025-09-06T23:32:52.525Z","repository":{"id":263662089,"uuid":"891074938","full_name":"andrewn6/tinyllm","owner":"andrewn6","description":"Minimal, fast inference engine for LLM's","archived":false,"fork":false,"pushed_at":"2024-12-27T07:11:04.000Z","size":51317,"stargazers_count":5,"open_issues_count":0,"forks_count":0,"subscribers_count":2,"default_branch":"main","last_synced_at":"2024-12-27T08:22:02.744Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/andrewn6.png","metadata":{"files":{"readme":"README.MD","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-11-19T17:21:41.000Z","updated_at":"2024-12-27T07:11:07.000Z","dependencies_parsed_at":null,"dependency_job_id":"2a2ae6d0-1e39-4e5d-a482-e39378472c5b","html_url":"https://github.com/andrewn6/tinyllm","commit_stats":null,"previous_names":["andrewn6/swiftllm","andrewn6/tinyllm"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/andrewn6%2Ftinyllm","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/andrewn6%2Ftinyllm/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/andrewn6%2Ftinyllm/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/andrewn6%2Ftinyllm/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/andrewn6","download_url":"https://codeload.github.com/andrewn6/tinyllm/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":232153449,"owners_count":18480125,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-01-02T04:29:19.365Z","updated_at":"2025-01-02T04:29:19.960Z","avatar_url":"https://github.com/andrewn6.png","language":"Python","readme":"# TinyLLM\n\nMinimal, high-performance inference engine for LLM's -- used in development environments\n\n## Overview\nTinyLLM streamlines the inference pipeline with minimal overhead, focusing on memory efficiency and throughput optimization. 
## Features
- Memory management pruning
- Efficient batch processing and response streaming
- Optimized scheduling for multi-model deployments
- Custom tokenizer implementation for self-developed models
- Inference API
- KV cache implementation
- Training CLI for development models
- Byte-level tokenization

*This is very much still an experiment, especially the tokenizer; the scheduler is somewhat well-written, and memory management is decent.*

I'll continue to slowly improve these components over my weekends.

## Scope
This is solely an inference engine. It does not:
- Implement large model architectures
- Include pre-trained models
- Support distributed training

## How to use?

Clone the repository
```
git clone https://github.com/andrewn6/tinyllm
```
```
pip install -e .
```

Register your trained model
```
tinyllm model register transformer-19m v1 \
    --checkpoint models/tiny-19m.pt \
    --model-type native \
    --description "19M parameter transformer"
```

Serve and expose to localhost (see the example client at the end of this README for calling the running server)
```
tinyllm serve \
    --model-name mymodel \
    --port 8000 \
    --model-type native
```

List models
```
tinyllm model list
```
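## Calling the inference API

Once `tinyllm serve` is running, the inference API on port 8000 can be called over HTTP. The route (`/generate`) and the JSON fields (`prompt`, `max_tokens`, `text`) in this sketch are assumptions made for illustration, not confirmed API details -- check the server code for the actual endpoints and payload shapes.

```
# Hypothetical client for a local TinyLLM server started with `tinyllm serve --port 8000`.
# The /generate route and the request/response fields are assumptions, not the documented API.
import requests

def generate(prompt: str, max_tokens: int = 64) -> str:
    resp = requests.post(
        "http://localhost:8000/generate",
        json={"prompt": prompt, "max_tokens": max_tokens},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("text", "")

if __name__ == "__main__":
    print(generate("Once upon a time"))
```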