https://github.com/general-developer/llama_library

# Llama Library

**Llama Library** is a library for running inference with any LLaMA / LLM model on the edge, without an API or internet quota. Resource requirements depend on the model you want to run.

[![](https://raw.githubusercontent.com/General-Developer/llama_library/refs/heads/main/assets/demo_background.png)](https://youtu.be/drlqUwJEOg4)

[![](https://raw.githubusercontent.com/globalcorporation/.github/main/.github/logo/powered.png)](https://www.youtube.com/@Global_Corporation)

**Copyright (c) 2024 GLOBAL CORPORATION - GENERAL DEVELOPER**

## đŸ“šī¸ Docs

1. [Documentation](https://youtube.com/@GENERAL_DEV)
2. [Youtube](https://youtube.com/@GENERAL_DEV)
3. [Telegram Support Group](https://t.me/DEVELOPER_GLOBAL_PUBLIC)
4. [Contact Developer](https://github.com/General-Developer) (see the social media links in the GitHub profile README)

## đŸ”–ī¸ Features

1. [x] đŸ“ąī¸ **Cross Platform** support (devices and edge serverless functions)
2. [x] đŸ“œī¸ **Standardized** code style
3. [x] âŒ¨ī¸ **CLI** (a terminal tool to help you use this library or create a project)
4. [x] đŸ”Ĩī¸ **API** (if you develop a bot / userbot, you can use this library without the CLI; just add the library and use it đŸš€ī¸)
5. [x] đŸ§Šī¸ **Customizable Extensions** (add extensions to speed up your development)
6. [x] âœ¨ī¸ **Pretty Information** (user-friendly output for newcomers)

## â”ī¸ Fun Fact

- **This library is used in 100%** of the projects I create (**App, Server, Bot, Userbot**)

- **This library supports 100%** of the models from [llama.cpp](https://github.com/ggerganov/llama.cpp) (depending on your device specs: high-end devices can run larger models; on low-end devices choose a tiny/small one)

## đŸ“ˆī¸ Proggres

- **10-02-2025**
Starting **Release Stable** With core Features

## Resources

1. [MODEL](https://huggingface.co/ggml-org/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF)
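
After downloading a GGUF model such as the one above, you can sanity-check the file before loading it: GGUF files begin with the 4-byte ASCII magic `GGUF`. A minimal shell sketch (using a stand-in file instead of a real multi-gigabyte download):

```shell
# Stand-in for a real downloaded model file; a real .gguf
# starts with the same 4-byte magic followed by binary data.
printf 'GGUFxxxx-rest-of-file' > demo.gguf

# Print the first 4 bytes; a valid GGUF file prints "GGUF".
head -c 4 demo.gguf
```

If the first 4 bytes are not `GGUF`, the download is likely truncated or corrupted and `loadModel` will fail.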

### đŸ“Ĩī¸ Install Library

1. **Dart**

```bash
dart pub add llama_library_dart
```

2. **Flutter**

```bash
flutter pub add llama_library_flutter ggml_library_flutter
```

## đŸš€ī¸ Quick Start

A minimal quick-start script to give you a feel for the library and show how simple it is to use:

```dart
import 'dart:convert';
import 'dart:io';

import 'package:llama_library/llama_library.dart';
import 'package:llama_library/scheme/scheme/api/send_llama_library_message.dart';
import 'package:llama_library/scheme/scheme/respond/update_llama_library_message.dart';

void main(List<String> args) async {
  print("start");
  final File modelFile = File(
    "../../../../../big-data/deepseek-r1/deepseek-r1-distill-qwen-1.5b-q4_0.gguf",
  );
  final LlamaLibrary llamaLibrary = LlamaLibrary(
    sharedLibraryPath: "libllama.so",
    invokeParametersLlamaLibraryDataOptions:
        InvokeParametersLlamaLibraryDataOptions(
      invokeTimeOut: Duration(minutes: 10),
      isThrowOnError: false,
    ),
  );
  await llamaLibrary.ensureInitialized();
  llamaLibrary.loadModel(
    modelPath: modelFile.path,
  );
  llamaLibrary.on(
    eventType: llamaLibrary.eventUpdate,
    onUpdate: (data) {
      final update = data.update;
      if (update is UpdateLlamaLibraryMessage) {
        // Streaming update: print tokens as they arrive.
        if (update.is_done == false) {
          stdout.write(update.text);
        } else if (update.is_done == true) {
          print("\n\n");
          print("-- done --");
        }
      }
    },
  );
  await llamaLibrary.initialized();

  // Read prompts from stdin; type "exit" to quit.
  stdin.listen((e) async {
    print("\n\n");
    final String text = utf8.decode(e).trim();
    if (text == "exit") {
      llamaLibrary.dispose();
      exit(0);
    } else {
      await llamaLibrary.invoke(
        invokeParametersLlamaLibraryData: InvokeParametersLlamaLibraryData(
          parameters: SendLlamaLibraryMessage.create(text: text),
          isVoid: true,
          extra: null,
          invokeParametersLlamaLibraryDataOptions: null,
        ),
      );
    }
  });
}
```
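
For the bot / API use case mentioned in the features, the same classes can be used without the interactive stdin loop. The sketch below sends a single prompt and exits once the streamed response finishes; it is an illustration only, assuming the same API as the quick-start above (the class and method names are taken from that example, and `model.gguf` is a hypothetical path):

```dart
import 'dart:io';

import 'package:llama_library/llama_library.dart';
import 'package:llama_library/scheme/scheme/api/send_llama_library_message.dart';
import 'package:llama_library/scheme/scheme/respond/update_llama_library_message.dart';

/// Sends one prompt and exits when the streamed response is done.
void main() async {
  final LlamaLibrary llamaLibrary = LlamaLibrary(
    sharedLibraryPath: "libllama.so",
    invokeParametersLlamaLibraryDataOptions:
        InvokeParametersLlamaLibraryDataOptions(
      invokeTimeOut: Duration(minutes: 10),
      isThrowOnError: false,
    ),
  );
  await llamaLibrary.ensureInitialized();
  llamaLibrary.loadModel(
    modelPath: "model.gguf", // hypothetical path to your GGUF model
  );
  llamaLibrary.on(
    eventType: llamaLibrary.eventUpdate,
    onUpdate: (data) {
      final update = data.update;
      if (update is UpdateLlamaLibraryMessage) {
        if (update.is_done == false) {
          stdout.write(update.text);
        } else {
          // Response complete: clean up and exit.
          llamaLibrary.dispose();
          exit(0);
        }
      }
    },
  );
  await llamaLibrary.initialized();
  await llamaLibrary.invoke(
    invokeParametersLlamaLibraryData: InvokeParametersLlamaLibraryData(
      parameters: SendLlamaLibraryMessage.create(text: "Hello!"),
      isVoid: true,
      extra: null,
      invokeParametersLlamaLibraryDataOptions: null,
    ),
  );
}
```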

## Reference

1. [Ggerganov-llama.cpp](https://github.com/ggerganov/llama.cpp)
   The main inference engine, bridged via FFI, that allows this program to run


## Example Project Use This Library

1. [AZKA GRAM](https://github.com/azkadev/azkagram) / [Global GRAM](https://github.com/globalcorporation/global_gram_app)

A **redesigned Telegram application** with new features, including userbot support and other **features not officially provided by Telegram**. This project was initially open source, but we made it **closed source** because the code was easy to read and modify, which allowed others to edit the source and use it for criminal acts.

| CHAT PAGE | SIGN UP PAGE | HOME PAGE | GUIDE PAGE |
|:---:|:---:|:---:|:---:|
| ![](https://user-images.githubusercontent.com/82513502/205481759-b6815e2f-bd5d-4d72-9570-becd3829dd36.png) | ![](https://user-images.githubusercontent.com/82513502/173319331-9e96fbe7-3e66-44b2-8577-f6685d86a368.png) | ![](https://user-images.githubusercontent.com/82513502/173319541-19a60407-f410-4e95-8ac0-d0da2eaf2457.png) | ![](https://raw.githubusercontent.com/GLXCORP/glx_bot_app/main/screenshots/home_telegram.png) |