Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/amaiya/onprem
A tool for running on-premises large language models with non-public data
- Host: GitHub
- URL: https://github.com/amaiya/onprem
- Owner: amaiya
- License: apache-2.0
- Created: 2023-08-29T14:46:45.000Z (over 1 year ago)
- Default Branch: master
- Last Pushed: 2024-10-28T18:18:13.000Z (about 1 month ago)
- Last Synced: 2024-10-29T15:49:13.640Z (about 1 month ago)
- Language: Jupyter Notebook
- Homepage: https://amaiya.github.io/onprem
- Size: 3.17 MB
- Stars: 692
- Watchers: 7
- Forks: 35
- Open Issues: 2
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
Awesome Lists containing this project
- project-awesome - amaiya/onprem - A tool for running on-premises large language models with non-public data (Jupyter Notebook)
- jimsghstars - amaiya/onprem - A tool for running on-premises large language models with non-public data (Jupyter Notebook)
README
# OnPrem.LLM
> A tool for running large language models on-premises using non-public data

**[OnPrem.LLM](https://github.com/amaiya/onprem)** is a simple Python package that makes it easier to run large language models (LLMs) on your own machines using non-public data (possibly behind corporate firewalls). Inspired largely by the [privateGPT](https://github.com/imartinez/privateGPT) GitHub repo, **OnPrem.LLM** is intended to help integrate local LLMs into practical applications.

The full documentation is [here](https://amaiya.github.io/onprem/).

A Google Colab demo of installing and using **OnPrem.LLM** is [here](https://colab.research.google.com/drive/1LVeacsQ9dmE1BVzwR3eTLukpeRIMmUqi?usp=sharing).

------------------------------------------------------------------------
*Latest News* 🔥

- [2024/12] v0.6.0 released and now includes support for PDF-to-Markdown conversion (which includes Markdown representations of tables), as shown [here](https://amaiya.github.io/onprem/#extract-text-from-documents).
- [2024/11] v0.5.0 released and now includes support for running LLMs with Hugging Face [transformers](https://github.com/huggingface/transformers) as the backend instead of [llama.cpp](https://github.com/abetlen/llama-cpp-python). See [this example](https://amaiya.github.io/onprem/#using-hugging-face-transformers-instead-of-llama.cpp).
- [2024/11] v0.4.0 released and now includes a `default_model` parameter to more easily use models like **Llama-3.1** and **Zephyr-7B-beta**.
- [2024/10] v0.3.0 released and now includes support for [concept-focused summarization](https://amaiya.github.io/onprem/examples_summarization.html#concept-focused-summarization).
- [2024/09] v0.2.0 released and now includes PDF OCR support and better PDF table handling.
- [2024/06] v0.1.0 of **OnPrem.LLM** has been released. Lots of new updates!
  - [Ability to use with any OpenAI-compatible API](https://amaiya.github.io/onprem/#connecting-to-llms-served-through-rest-apis) (e.g., vLLM, Ollama, OpenLLM, etc.)
  - Pipeline for [information extraction](https://amaiya.github.io/onprem/examples_information_extraction.html) from raw documents
  - Pipeline for [few-shot text classification](https://amaiya.github.io/onprem/examples_classification.html) (i.e., training a classifier on a tiny number of labeled examples) along with the ability to explain few-shot predictions
  - Default model changed to [Mistral-7B-Instruct-v0.2](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF)
  - [API augmentations and bug fixes](https://github.com/amaiya/onprem/blob/master/CHANGELOG.md)

------------------------------------------------------------------------
## Install
Once you have [installed PyTorch](https://pytorch.org/get-started/locally/), you can install **OnPrem.LLM** with the following steps:

1. Install **llama-cpp-python**:
   - **CPU:** `pip install llama-cpp-python` ([extra steps](https://github.com/amaiya/onprem/blob/master/MSWindows.md) required for Microsoft Windows)
   - **GPU:** Follow the [instructions below](https://amaiya.github.io/onprem/#on-gpu-accelerated-inference).
2. Install **OnPrem.LLM**: `pip install onprem`

### On GPU-Accelerated Inference
When installing **llama-cpp-python** with `pip install llama-cpp-python`, the LLM will run on your **CPU**. To generate answers much faster, you can run the LLM on your **GPU** by building **llama-cpp-python** for your operating system:

- **Linux**: `CMAKE_ARGS="-DGGML_CUDA=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir`
- **Mac**: `CMAKE_ARGS="-DGGML_METAL=on" pip install llama-cpp-python`
- **Windows 11**: Follow the instructions [here](https://github.com/amaiya/onprem/blob/master/MSWindows.md#using-the-system-python-in-windows-11s).
- **Windows Subsystem for Linux (WSL2)**: Follow the instructions [here](https://github.com/amaiya/onprem/blob/master/MSWindows.md#using-wsl2-with-gpu-acceleration).

For Linux and Windows, you will need [an up-to-date NVIDIA driver](https://www.nvidia.com/en-us/drivers/) along with the [CUDA toolkit](https://developer.nvidia.com/cuda-downloads) installed before running the installation commands above.

After following the instructions above, supply the `n_gpu_layers=-1` parameter when instantiating an LLM to use your GPU for fast inference:

``` python
llm = LLM(n_gpu_layers=-1, ...)
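# A minimal sketch (not from the docs): if the model does not fully fit in VRAM,
# offload only a subset of layers by experimenting with smaller values, e.g.:
# llm = LLM(n_gpu_layers=20, ...)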
```

Quantized models with 8B parameters and below can typically run on GPUs with as little as 6GB of VRAM. If a model does not fit on your GPU (e.g., you get a "CUDA Error: Out-of-Memory" error), you can offload a subset of layers to the GPU by experimenting with different values for the `n_gpu_layers` parameter (e.g., `n_gpu_layers=20`). Setting `n_gpu_layers=-1`, as shown above, offloads all layers to the GPU.

See [the FAQ](https://amaiya.github.io/onprem/#faq) for extra tips if you experience issues with the [llama-cpp-python](https://pypi.org/project/llama-cpp-python/) installation.

**Note:** Installing **llama-cpp-python** is optional if either of the following is true:

- You use Hugging Face Transformers (instead of llama-cpp-python) as the LLM backend by supplying the `model_id` parameter when instantiating an LLM, as [shown here](https://amaiya.github.io/onprem/#using-hugging-face-transformers-instead-of-llama.cpp).
- You are using **OnPrem.LLM** with an LLM being served through an [external REST API](#connecting-to-llms-served-through-rest-apis) (e.g., vLLM, OpenLLM, Ollama).

## How to Use
### Setup
``` python
from onprem import LLM

llm = LLM()
```

By default, a 7B-parameter model (**Mistral-7B-Instruct-v0.2**) is downloaded and used. If `default_model='llama'` is supplied, then a **Llama-3.1-8B-Instruct** model is automatically downloaded and used instead (which is useful if the default Mistral model struggles with a particular task):

``` python
# Llama 3.1 is downloaded here and the correct prompt template for Llama-3.1 is automatically configured and used
llm = LLM(default_model='llama')
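# A sketch based on the paragraph below: default_model='zephyr' selects Zephyr-7B-beta instead
# llm = LLM(default_model='zephyr')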
```

Similarly, supplying `default_model='zephyr'` will use **Zephyr-7B-beta**. Of course, you can also easily supply the URL to an LLM of your choosing to [`LLM`](https://amaiya.github.io/onprem/core.html#llm) (see the [code generation example](https://amaiya.github.io/onprem/examples_code.html) or the [FAQ](https://amaiya.github.io/onprem/#faq) for examples). Any extra parameters supplied to [`LLM`](https://amaiya.github.io/onprem/core.html#llm) are forwarded directly to `llama-cpp-python`.

### Send Prompts to the LLM to Solve Problems
This is an example of few-shot prompting, where we provide an example of what we want the LLM to do.

``` python
prompt = """Extract the names of people in the supplied sentences. Here is an example:
Sentence: James Gandolfini and Paul Newman were great actors.
People:
James Gandolfini, Paul Newman
Sentence:
I like Cillian Murphy's acting. Florence Pugh is great, too.
People:"""saved_output = llm.prompt(prompt)
```Cillian Murphy, Florence Pugh.
Additional prompt examples are [shown
here](https://amaiya.github.io/onprem/examples.html).### Talk to Your Documents
Answers are generated from the content of your documents (i.e., [retrieval augmented generation](https://arxiv.org/abs/2005.11401) or RAG). Here, we will use [GPU offloading](https://amaiya.github.io/onprem/#speeding-up-inference-using-a-gpu) to speed up answer generation using the default model. However, the Zephyr-7B model may perform even better, responds faster, and is used in our [example notebook](https://amaiya.github.io/onprem/examples_rag.html).

``` python
from onprem import LLM

llm = LLM(n_gpu_layers=-1)
```

#### Step 1: Ingest the Documents into a Vector Database
``` python
llm.ingest("./sample_data")
```

Creating new vectorstore at /home/amaiya/onprem_data/vectordb
Loading documents from ./sample_data
Loaded 12 new documents from ./sample_data
Split into 153 chunks of text (max. 500 chars each)
Creating embeddings. May take some minutes...
Ingestion complete! You can now query your documents using the LLM.ask or LLM.chat methods
Loading new documents: 100%|██████████| 3/3 [00:00<00:00, 13.71it/s]
100%|██████████| 1/1 [00:02<00:00,  2.49s/it]

#### Step 2: Answer Questions About the Documents
``` python
question = """What is ktrain?"""
result = llm.ask(question)
```

Ktrain is a low-code machine learning library designed to facilitate the full machine learning workflow from curating and preprocessing inputs to training, tuning, troubleshooting, and applying models. Ktrain is well-suited for domain experts who may have less experience with machine learning and software coding.

The sources used by the model to generate the answer are stored in `result['source_documents']`:

``` python
print("\nSources:\n")
for i, document in enumerate(result["source_documents"]):
print(f"\n{i+1}.> " + document.metadata["source"] + ":")
print(document.page_content)
```

Sources:

1.> /home/amaiya/projects/ghub/onprem/nbs/sample_data/1/ktrain_paper.pdf:
lection (He et al., 2019). By contrast, ktrain places less emphasis on this aspect of au-
tomation and instead focuses on either partially or fully automating other aspects of the
machine learning (ML) workflow. For these reasons, ktrain is less of a traditional Au-

2.> /home/amaiya/projects/ghub/onprem/nbs/sample_data/1/ktrain_paper.pdf:
possible, ktrain automates (either algorithmically or through setting well-performing de-
faults), but also allows users to make choices that best fit their unique application require-
ments. In this way, ktrain uses automation to augment and complement human engineers
rather than attempting to entirely replace them. In doing so, the strengths of both are
better exploited. Following inspiration from a blog post1 by Rachel Thomas of fast.ai

3.> /home/amaiya/projects/ghub/onprem/nbs/sample_data/1/ktrain_paper.pdf:
with custom models and data formats, as well.
Inspired by other low-code (and no-
code) open-source ML libraries such as fastai (Howard and Gugger, 2020) and ludwig
(Molino et al., 2019), ktrain is intended to help further democratize machine learning by
enabling beginners and domain experts with minimal programming or data science experi-
4. http://archive.ics.uci.edu/ml/datasets/Twenty+Newsgroups

4.> /home/amaiya/projects/ghub/onprem/nbs/sample_data/1/ktrain_paper.pdf:
ktrain: A Low-Code Library for Augmented Machine Learning
toML platform and more of what might be called a "low-code" ML platform. Through
automation or semi-automation, ktrain facilitates the full machine learning workflow from
curating and preprocessing inputs (i.e., ground-truth-labeled training data) to training,
tuning, troubleshooting, and applying models. In this way, ktrain is well-suited for domain
experts who may have less experience with machine learning and software coding. Where

### Extract Text from Documents
The [`load_single_document`](https://amaiya.github.io/onprem/ingest.html#load_single_document) function can extract text from a range of different document formats (e.g., PDFs, Microsoft PowerPoint, Microsoft Word, etc.). It is automatically invoked when calling [`LLM.ingest`](https://amaiya.github.io/onprem/core.html#llm.ingest). Extracted text is represented as LangChain `Document` objects, where `Document.page_content` stores the extracted text and `Document.metadata` stores any extracted document metadata.

For PDFs, in particular, a number of different options are available depending on your use case.

**Fast PDF Extraction**
- **Pro:** Fast
- **Con:** Does not infer/retain structure of tables in PDF documents

``` python
from onprem.ingest import load_single_document

docs = load_single_document('sample_data/1/ktrain_paper.pdf')
docs[0].metadata
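# the extracted text itself is available via docs[0].page_content (see the Document description above)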
```

{'source': '/home/amaiya/projects/ghub/onprem/nbs/sample_data/1/ktrain_paper.pdf',
 'file_path': '/home/amaiya/projects/ghub/onprem/nbs/sample_data/1/ktrain_paper.pdf',
 'page': 0,
 'total_pages': 9,
 'format': 'PDF 1.4',
 'title': '',
 'author': '',
 'subject': '',
 'keywords': '',
 'creator': 'LaTeX with hyperref',
 'producer': 'dvips + GPL Ghostscript GIT PRERELEASE 9.22',
 'creationDate': "D:20220406214054-04'00'",
 'modDate': "D:20220406214054-04'00'",
 'trapped': ''}

**Automatic OCR of PDFs**
- **Pro:** Automatically extracts text from scanned PDFs
- **Con:** Slow

The [`load_single_document`](https://amaiya.github.io/onprem/ingest.html#load_single_document) function will automatically OCR PDFs that require it (i.e., PDFs that are scanned hard copies of documents). If a document is OCR'ed during extraction, the `metadata['ocr']` field will be populated with `True`.

``` python
docs = load_single_document('sample_data/4/lynn1975.pdf')
docs[0].metadata
```

{'source': '/home/amaiya/projects/ghub/onprem/nbs/sample_data/4/lynn1975.pdf',
 'ocr': True}

**Markdown Conversion and Retaining Table Structure in PDFs**
- **Pro**: Retains structure of tables within PDFs as either Markdown or
HTML; Better chunking for QA; Support for OCR
- **Con**: Slower than default PDF extraction

The [`load_single_document`](https://amaiya.github.io/onprem/ingest.html#load_single_document) function can retain the structure of tables within documents, which can help LLMs answer questions about information contained within these tables. There are, in fact, two ways to do this in **OnPrem.LLM**.

The first is to supply `pdf_markdown=True` to convert the PDF to Markdown text (via PyMuPDF4LLM), in which case the tables are represented within your document as **Markdown tables**:

``` python
docs = load_single_document('your_pdf_document.pdf',
pdf_markdown=True)
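# A sketch based on the discussion below: the same parameter can be passed to LLM.ingest,
# which forwards it to load_single_document, e.g.:
# llm.ingest('./sample_data', pdf_markdown=True)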
```

In addition to facilitating table understanding, converting to Markdown can also facilitate question-answering in general. For instance, when supplying `pdf_markdown=True` to [`LLM.ingest`](https://amaiya.github.io/onprem/core.html#llm.ingest), documents are chunked in a Markdown-aware fashion (e.g., the abstract of a research paper tends to be kept together in a single chunk instead of being split up). Note that Markdown will not be extracted if the document requires OCR.

The second way to retain table structure is to supply `pdf_unstructured=True` and `infer_table_structure=True`, which uses a TableTransformer model to infer tables and represents them as **HTML** within the extracted text (via Unstructured):

``` python
docs = load_single_document('your_pdf_document.pdf',
pdf_unstructured=True, infer_table_structure=True)
```

Unlike the `pdf_markdown=True` argument, table structure is retained even if the PDF is OCR'ed when using `pdf_unstructured=True`. (Note that `pdf_markdown` and `pdf_unstructured` cannot both be set to `True`.)

Any of the parameters described above can be supplied directly to [`LLM.ingest`](https://amaiya.github.io/onprem/core.html#llm.ingest), which will automatically pass them along to [`load_single_document`](https://amaiya.github.io/onprem/ingest.html#load_single_document).

**Parsing Extracted Text Into Sentences or Paragraphs**
For some analyses (e.g., using prompts for information extraction), it
may be useful to parse the text extracted from documents into individual
sentences or paragraphs. This can be accomplished using the
[`segment`](https://amaiya.github.io/onprem/utils.html#segment)
function:

``` python
from onprem.ingest import load_single_document
from onprem.utils import segment
text = load_single_document('sample_data/3/state_of_the_union.txt')[0].page_content
```

``` python
segment(text, unit='paragraph')[0]
```

'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.'
``` python
segment(text, unit='sentence')[0]
```

'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.'
### Summarization Pipeline
Summarize your raw documents (e.g., PDFs, MS Word) with an LLM.
#### Map-Reduce Summarization
Summarize each chunk in a document and then generate a single summary
from the individual summaries.

``` python
from onprem import LLM
llm = LLM(n_gpu_layers=-1, verbose=False, mute_stream=True) # disabling viewing of intermediate summarization prompts/inferences
```

``` python
from onprem.pipelines import Summarizer
summ = Summarizer(llm)

resp = summ.summarize('sample_data/1/ktrain_paper.pdf', max_chunks_to_use=5) # omit max_chunks_to_use parameter to consider entire document
print(resp['output_text'])
```

Ktrain is an open-source machine learning library that offers a unified interface for various machine learning tasks. The library supports both supervised and non-supervised machine learning, and includes methods for training models, evaluating models, making predictions on new data, and providing explanations for model decisions. Additionally, the library integrates with various explainable AI libraries such as shap, eli5 with lime, and others to provide more interpretable models.
#### Concept-Focused Summarization
Summarize a large document with respect to a particular concept of
interest.

``` python
from onprem import LLM
from onprem.pipelines import Summarizer
```

``` python
llm = LLM(default_model='zephyr', n_gpu_layers=-1, verbose=False, temperature=0)
summ = Summarizer(llm)
summary, sources = summ.summarize_by_concept('sample_data/1/ktrain_paper.pdf', concept_description="question answering")
```

The context provided describes the implementation of an open-domain question-answering system using ktrain, a low-code library for augmented machine learning. The system follows three main steps: indexing documents into a search engine, locating documents containing words in the question, and extracting candidate answers from those documents using a BERT model pretrained on the SQuAD dataset. Confidence scores are used to sort and prune candidate answers before returning results. The entire workflow can be implemented with only three lines of code using ktrain's SimpleQA module. This system allows for the submission of natural language questions and receives exact answers, as demonstrated in the provided example. Overall, the context highlights the ease and accessibility of building sophisticated machine learning models, including open-domain question-answering systems, through ktrain's low-code interface.
### Information Extraction Pipeline
Extract information from raw documents (e.g., PDFs, MS Word documents)
with an LLM.

``` python
from onprem import LLM
from onprem.pipelines import Extractor
# Notice that we're using a cloud-based, off-premises model here! See "OpenAI" section below.
llm = LLM(model_url='openai://gpt-3.5-turbo', verbose=False, mute_stream=True, temperature=0)
extractor = Extractor(llm)
prompt = """Extract the names of research institutions (e.g., universities, research labs, corporations, etc.)
from the following sentence delimited by three backticks. If there are no organizations, return NA.
If there are multiple organizations, separate them with commas.
```{text}```
"""
df = extractor.apply(prompt, fpath='sample_data/1/ktrain_paper.pdf', pdf_pages=[1], stop=['\n'])
df.loc[df['Extractions'] != 'NA'].Extractions[0]
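# descriptive note (added): extracted values appear in the 'Extractions' column of the returned DataFrame `df`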
```

/home/amaiya/projects/ghub/onprem/onprem/core.py:159: UserWarning: The model you supplied is gpt-3.5-turbo, an external service (i.e., not on-premises). Use with caution, as your data and prompts will be sent externally.
  warnings.warn(f'The model you supplied is {self.model_name}, an external service (i.e., not on-premises). '+\

'Institute for Defense Analyses'
### Few-Shot Classification
Make accurate text classification predictions using only a tiny number
of labeled examples.

``` python
# create classifier
from onprem.pipelines import FewShotClassifier
clf = FewShotClassifier(use_smaller=True)

# Fetching data
from sklearn.datasets import fetch_20newsgroups
import pandas as pd
import numpy as np
classes = ["soc.religion.christian", "sci.space"]
newsgroups = fetch_20newsgroups(subset="all", categories=classes)
corpus, group_labels = np.array(newsgroups.data), np.array(newsgroups.target_names)[newsgroups.target]

# Wrangling data into a dataframe and selecting training examples
data = pd.DataFrame({"text": corpus, "label": group_labels})
train_df = data.groupby("label").sample(5)
test_df = data.drop(index=train_df.index)

# X_sample only contains 5 examples of each class!
X_sample, y_sample = train_df['text'].values, train_df['label'].values

# test set
X_test, y_test = test_df['text'].values, test_df['label'].values

# train
clf.train(X_sample, y_sample, max_steps=20)

# evaluate
print(clf.evaluate(X_test, y_test)['accuracy'])
#output: 0.98

# make predictions
clf.predict(['Elon Musk likes launching satellites.']).tolist()[0]
#output: sci.space
```

### Using Hugging Face Transformers Instead of Llama.cpp
By default, the LLM backend employed by **OnPrem.LLM** is
[llama-cpp-python](https://github.com/abetlen/llama-cpp-python), which
requires models in [GGUF format](https://huggingface.co/docs/hub/gguf).
As of v0.5.0, it is now possible to use [Hugging Face
transformers](https://github.com/huggingface/transformers) as the LLM
backend instead. This is accomplished by using the `model_id` parameter
(instead of supplying a `model_url` argument). In the example below, we
run the
[Llama-3.1-8B](https://huggingface.co/hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4)
model.

``` python
# llama-cpp-python does NOT need to be installed when using model_id parameter
llm = LLM(model_id="hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4", device_map='cuda')
```

This allows you to more easily use any model on the Hugging Face hub in [SafeTensors format](https://huggingface.co/docs/safetensors/index) provided it can be loaded with the Hugging Face `transformers.pipeline`. Note that, when using the `model_id` parameter, the `prompt_template` is set automatically by `transformers`.

The Llama-3.1 model loaded above was quantized using
[AWQ](https://huggingface.co/docs/transformers/main/en/quantization/awq),
which allows the model to fit onto smaller GPUs (e.g., laptop GPUs with
6GB of VRAM) similar to the default GGUF format. AWQ models will require
the [autoawq](https://pypi.org/project/autoawq/) package to be
installed: `pip install autoawq` (AWQ only supports Linux systems, including Windows Subsystem for Linux). If you do need to load a model that is not quantized, you can supply a quantization configuration at load time (known as "inflight quantization"). In the following example, we load an unquantized [Zephyr-7B-beta model](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) that will be quantized during loading to fit on GPUs with as little as 6GB of VRAM:

``` python
from transformers import BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype="float16",
bnb_4bit_use_double_quant=True,
)
llm = LLM(model_id="HuggingFaceH4/zephyr-7b-beta", device_map='cuda',
model_kwargs={"quantization_config":quantization_config})
```

When supplying a `quantization_config`, the [bitsandbytes](https://huggingface.co/docs/bitsandbytes/main/en/installation) library is used; it is a lightweight Python wrapper around custom CUDA functions, in particular 8-bit optimizers, matrix multiplication (LLM.int8()), and 8-bit and 4-bit quantization functions. There are ongoing efforts by the bitsandbytes team to support multiple backends in addition to CUDA. If you receive errors related to bitsandbytes, please refer to the [bitsandbytes documentation](https://huggingface.co/docs/bitsandbytes/main/en/installation).

### Connecting to LLMs Served Through REST APIs
**OnPrem.LLM** can be used with LLMs being served through any
OpenAI-compatible REST API. This means you can easily use **OnPrem.LLM**
with tools like [vLLM](https://github.com/vllm-project/vllm),
[OpenLLM](https://github.com/bentoml/OpenLLM),
[Ollama](https://ollama.com/blog/openai-compatibility), and the
[llama.cpp
server](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).

For instance, using [vLLM](https://github.com/vllm-project/vllm), you can serve a LLaMA 3 model as follows:

``` sh
python -m vllm.entrypoints.openai.api_server --model NousResearch/Meta-Llama-3-8B-Instruct --dtype auto --api-key token-abc123
```

You can then connect OnPrem.LLM to the LLM by supplying the URL of the server you just started:

``` python
from onprem import LLM
llm = LLM(model_url='http://localhost:8000/v1', api_key='token-abc123')
# Note: The API key can either be supplied directly or stored in the OPENAI_API_KEY environment variable.
# If the server does not require an API key, `api_key` should still be supplied with a dummy value like 'na'.
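# Once connected, use the LLM as in the earlier examples (a sketch; the prompt text is illustrative):
# llm.prompt("List three cute names for a cat.")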
```

That's it! Solve problems with **OnPrem.LLM** as you normally would (e.g., RAG question-answering, summarization, few-shot prompting, code generation, etc.).

### Using OpenAI Models with OnPrem.LLM
Even when using on-premises language models, it can sometimes be useful
to have easy access to non-local, cloud-based models (e.g., OpenAI) for
testing, producing baselines for comparison, and generating synthetic
examples for fine-tuning. For these reasons, in spite of the name,
**OnPrem.LLM** now includes support for OpenAI chat models:

``` python
from onprem import LLM
llm = LLM(model_url='openai://gpt-4o', temperature=0)
```

/home/amaiya/projects/ghub/onprem/onprem/core.py:196: UserWarning: The model you supplied is gpt-4o, an external service (i.e., not on-premises). Use with caution, as your data and prompts will be sent externally.
  warnings.warn(f'The model you supplied is {self.model_name}, an external service (i.e., not on-premises). '+\

This OpenAI [`LLM`](https://amaiya.github.io/onprem/core.html#llm) instance can now be used as the engine for most features in OnPrem.LLM (e.g., RAG, information extraction, summarization, etc.). Here we simply use it for general prompting:

``` python
saved_result = llm.prompt('List three cute names for a cat and explain why each is cute.')
```

Certainly! Here are three cute names for a cat, along with explanations for why each is adorable:

1. **Whiskers**: This name is cute because it highlights one of the most distinctive and charming features of a cat - their whiskers. It's playful and endearing, evoking the image of a curious cat twitching its whiskers as it explores its surroundings.
2. **Mittens**: This name is cute because it conjures up the image of a cat with little white paws that look like they are wearing mittens. It's a cozy and affectionate name that suggests warmth and cuddliness, much like a pair of soft mittens.
3. **Pumpkin**: This name is cute because it brings to mind the warm, orange hues of a pumpkin, which can be reminiscent of certain cat fur colors. It's also associated with the fall season, which is often linked to comfort and coziness. Plus, the name "Pumpkin" has a sweet and affectionate ring to it, making it perfect for a beloved pet.
**Using Vision Capabilities in GPT-4o**
``` python
image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
saved_result = llm.prompt('Describe the weather in this image.', image_path_or_url=image_url)
```The weather in the image appears to be clear and sunny. The sky is mostly blue with some scattered clouds, suggesting a pleasant day with good visibility. The sunlight is bright, illuminating the green grass and landscape.
**Using OpenAI-Style Message Dictionaries**
``` python
messages = [
{'content': [{'text': 'describe the weather in this image',
'type': 'text'},
{'image_url': {'url': image_url},
'type': 'image_url'}],
'role': 'user'}]
saved_result = llm.prompt(messages)
```

The weather in the image appears to be clear and sunny. The sky is mostly blue with some scattered clouds, suggesting a pleasant day with good visibility. The sunlight is bright, casting clear shadows and illuminating the green landscape.
**Azure OpenAI**
For Azure OpenAI models, use the following URL format:
``` python
llm = LLM(model_url='azure://<deployment_name>', ...)
# <deployment_name> is the Azure deployment name and additional Azure-specific parameters
# can be supplied as extra arguments to LLM (or set as environment variables)
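# A hypothetical example (the deployment name 'my-gpt-4o' is an assumption, not from the docs):
# llm = LLM(model_url='azure://my-gpt-4o', temperature=0)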
```

### Guided Prompts
You can use **OnPrem.LLM** with the
[Guidance](https://github.com/guidance-ai/guidance) package to guide the
LLM to generate outputs based on your conditions and constraints. We'll
show a couple of examples here, but see [our documentation on guided
prompts](https://amaiya.github.io/onprem/examples_guided_prompts.html)
for more information.

``` python
from onprem import LLM

llm = LLM(n_gpu_layers=-1, verbose=False)
from onprem.guider import Guider
guider = Guider(llm)
```

With the Guider, you can use Regular Expressions to control LLM generation:

``` python
from guidance import gen  # the gen function is provided by the guidance package

prompt = f"""Question: Luke has ten balls. He gives three to his brother. How many balls does he have left?
Answer: """ + gen(name='answer', regex='\d+')

guider.prompt(prompt, echo=False)
```

{'answer': '7'}
``` python
prompt = '19, 18,' + gen(name='output', max_tokens=50, stop_regex='[^\d]7[^\d]')
guider.prompt(prompt)
```

19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8,

{'output': ' 17, 16, 15, 14, 13, 12, 11, 10, 9, 8,'}
See [the
documentation](https://amaiya.github.io/onprem/examples_guided_prompts.html)
for more examples of how to use
[Guidance](https://github.com/guidance-ai/guidance) with **OnPrem.LLM**.

## Built-In Web App
**OnPrem.LLM** includes a built-in Web app to access the LLM. To start
it, run the following command after installation:

``` shell
onprem --port 8000
```

Then, enter `localhost:8000` (or `<address_of_remote_server>:8000` if running on a remote server) in a Web browser to access the application.

For more information, [see the corresponding documentation](https://amaiya.github.io/onprem/webapp.html).

## FAQ
1. **How do I use other models with OnPrem.LLM?**
> You can supply the URL to other models to the `LLM` constructor,
> as we did above in the code generation example.
>
> As of v0.0.20, we support models in GGUF format, which supersedes
> the older GGML format. You can find llama.cpp-supported models
> with `GGUF` in the file name on
> [huggingface.co](https://huggingface.co/models?sort=trending&search=gguf).
>
> Make sure you are pointing to the URL of the actual GGUF model
> file, which is the "download" link on the model's page. An example
> for **Mistral-7B** is shown below:
>
> Note that some models have specific prompt formats. For instance,
> the prompt template required for **Zephyr-7B**, as described on
> the [model's
> page](https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF), is:
>
> `<|system|>\n\n<|user|>\n{prompt}\n<|assistant|>`
>
> So, to use the **Zephyr-7B** model, you must supply the
> `prompt_template` argument to the `LLM` constructor (or specify it
> in the `webapp.yml` configuration for the Web app).
>
> ``` python
> # how to use Zephyr-7B with OnPrem.LLM
> llm = LLM(model_url='https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF/resolve/main/zephyr-7b-beta.Q4_K_M.gguf',
> prompt_template = "<|system|>\n\n<|user|>\n{prompt}\n<|assistant|>",
> n_gpu_layers=33)
> llm.prompt("List three cute names for a cat.")
> ```

2. **When installing `onprem`, I'm getting "build" errors related to
`llama-cpp-python` (or `chroma-hnswlib`) on Windows/Mac/Linux?**

> See [this LangChain documentation on
> LLama.cpp](https://python.langchain.com/docs/integrations/llms/llamacpp)
> for help on installing the `llama-cpp-python` package for your
> system. Additional tips for different operating systems are shown
> below:
>
> For **Linux** systems like Ubuntu, try this:
> `sudo apt-get install build-essential g++ clang`. Other tips are
> [here](https://github.com/oobabooga/text-generation-webui/issues/1534).
>
> For **Windows** systems, please try following [these
> instructions](https://github.com/amaiya/onprem/blob/master/MSWindows.md).
> We recommend you use [Windows Subsystem for Linux
> (WSL)](https://learn.microsoft.com/en-us/windows/wsl/install)
> instead of using Microsoft Windows directly. If you do need to use
> Microsoft Windows directly, be sure to install the [Microsoft C++
> Build
> Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/)
> and make sure **Desktop development with C++** is selected.
>
> For **Macs**, try following [these
> tips](https://github.com/imartinez/privateGPT/issues/445#issuecomment-1563333950).
>
> There are also various other tips for each of the above OSes in
> [this privateGPT repo
> thread](https://github.com/imartinez/privateGPT/issues/445). Of
> course, you can also [easily
> use](https://colab.research.google.com/drive/1LVeacsQ9dmE1BVzwR3eTLukpeRIMmUqi?usp=sharing)
> **OnPrem.LLM** on Google Colab.
>
> Finally, if you still can't overcome issues with building
> `llama-cpp-python`, you can try [installing the pre-built wheel
> file](https://abetlen.github.io/llama-cpp-python/whl/cpu/llama-cpp-python/)
> for your system:
>
> **Example:**
> `pip install llama-cpp-python==0.2.90 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu`
>
> **Tip:** There are [pre-built wheel files for
> `chroma-hnswlib`](https://pypi.org/project/chroma-hnswlib/#files),
> as well. If running `pip install onprem` fails on building
> `chroma-hnswlib`, it may be because a pre-built wheel doesn't yet
> exist for the version of Python you're using (in which case you
> can try downgrading Python).

3. **I'm behind a corporate firewall and am receiving an SSL error when
trying to download the model?**

> Try this:
>
> ``` python
> from onprem import LLM
> LLM.download_model(url, ssl_verify=False)
> ```
>
> You can download the embedding model (used by `LLM.ingest` and
> `LLM.ask`) as follows:
>
> ``` sh
> wget --no-check-certificate https://public.ukp.informatik.tu-darmstadt.de/reimers/sentence-transformers/v0.2/all-MiniLM-L6-v2.zip
> ```
>
> Supply the unzipped folder name as the `embedding_model_name`
> argument to `LLM`.
>
> If you're getting SSL errors when even running `pip install`, try
> this:
>
> ``` sh
> pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org pip_system_certs
> ```

4. **How do I use this on a machine with no internet access?**
> Use the `LLM.download_model` method to download the model files to
> `<your_home_directory>/onprem_data` and transfer them to the same
> location on the air-gapped machine.
>
> For the `ingest` and `ask` methods, you will need to also download
> and transfer the embedding model files:
>
> ``` python
> from sentence_transformers import SentenceTransformer
> model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
> model.save('/some/folder')
> ```
>
> Copy the `some/folder` folder to the air-gapped machine and supply
> the path to `LLM` via the `embedding_model_name` parameter.

5. **My model is not loading when I call `llm = LLM(...)`?**
> This can happen if the model file is corrupt (in which case you
> should delete it from `<your_home_directory>/onprem_data` and
> re-download it). It can also happen if the version of
> `llama-cpp-python` needs to be upgraded to the latest.

6. **I'm getting an `Illegal instruction (core dumped)` error when
instantiating a `langchain.llms.Llamacpp` or `onprem.LLM` object?**

> Your CPU may not support instructions that `cmake` is using for
> one reason or another (e.g., [due to Hyper-V in VirtualBox
> settings](https://stackoverflow.com/questions/65780506/how-to-enable-avx-avx2-in-virtualbox-6-1-16-with-ubuntu-20-04-64bit)).
> You can try turning them off when building and installing
> `llama-cpp-python`:
>
> ``` sh
> # example
> CMAKE_ARGS="-DGGML_CUDA=ON -DGGML_AVX2=OFF -DGGML_AVX=OFF -DGGML_F16C=OFF -DGGML_FMA=OFF" FORCE_CMAKE=1 pip install --force-reinstall llama-cpp-python --no-cache-dir
> ```

7. **How can I speed up
[`LLM.ingest`](https://amaiya.github.io/onprem/core.html#llm.ingest)
using my GPU?**

> Try using the `embedding_model_kwargs` argument:
>
> ``` python
> from onprem import LLM
> llm = LLM(embedding_model_kwargs={'device':'cuda'})
> ```