Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Ollama JavaScript library
- Host: GitHub
- URL: https://github.com/ollama/ollama-js
- Owner: ollama
- License: MIT
- Created: 2023-09-13T22:58:51.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-12-02T19:53:16.000Z (10 days ago)
- Last Synced: 2024-12-03T00:04:58.985Z (10 days ago)
- Topics: javascript, js, ollama
- Language: TypeScript
- Homepage: https://ollama.com
- Size: 464 KB
- Stars: 2,332
- Watchers: 24
- Forks: 185
- Open Issues: 48
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
# Ollama JavaScript Library
The Ollama JavaScript library provides the easiest way to integrate your JavaScript project with [Ollama](https://github.com/jmorganca/ollama).
## Getting Started
```
npm i ollama
```

## Usage
```javascript
import ollama from 'ollama'

const response = await ollama.chat({
model: 'llama3.1',
messages: [{ role: 'user', content: 'Why is the sky blue?' }],
})
console.log(response.message.content)
```

### Browser Usage
To use the library without Node.js, import the browser module.
```javascript
import ollama from 'ollama/browser'
```

## Streaming responses
Response streaming can be enabled by setting `stream: true`, which modifies the function call to return an `AsyncGenerator` where each part is an object in the stream.
```javascript
import ollama from 'ollama'

const message = { role: 'user', content: 'Why is the sky blue?' }
const response = await ollama.chat({ model: 'llama3.1', messages: [message], stream: true })
for await (const part of response) {
process.stdout.write(part.message.content)
}
```

## Create
```javascript
import ollama from 'ollama'

const modelfile = `
FROM llama3.1
SYSTEM "You are mario from super mario bros."
`
await ollama.create({ model: 'example', modelfile: modelfile })
```

## API
The Ollama JavaScript library's API is designed around the [Ollama REST API](https://github.com/jmorganca/ollama/blob/main/docs/api.md).
### chat
```javascript
ollama.chat(request)
```

- `request` `<Object>`: The request object containing chat parameters.
  - `model` `<string>` The name of the model to use for the chat.
  - `messages` `<Message[]>`: Array of message objects representing the chat history.
    - `role` `<string>`: The role of the message sender ('user', 'system', or 'assistant').
    - `content` `<string>`: The content of the message.
    - `images` `<Uint8Array[] | string[]>`: (Optional) Images to be included in the message, either as Uint8Array or base64 encoded strings.
  - `format` `<string>`: (Optional) Set the expected format of the response (`json`).
  - `stream` `<boolean>`: (Optional) When true an `AsyncGenerator` is returned.
  - `keep_alive` `<string | number>`: (Optional) How long to keep the model loaded.
  - `tools` `<Tool[]>`: (Optional) A list of tool calls the model may make.
  - `options` `<Options>`: (Optional) Options to configure the runtime.
- Returns: `<ChatResponse>`
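
For illustration, here is a minimal sketch of a tool-enabled chat call; the tool definition, function name, and model name are assumptions rather than fixtures of the library:

```javascript
import ollama from 'ollama'

const response = await ollama.chat({
  model: 'llama3.1', // assumed model name
  messages: [{ role: 'user', content: 'What is the weather in Toronto?' }],
  // Hypothetical tool definition; the schema follows the Ollama REST API.
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_weather',
        description: 'Get the current weather for a city',
        parameters: {
          type: 'object',
          properties: {
            city: { type: 'string', description: 'The name of the city' },
          },
          required: ['city'],
        },
      },
    },
  ],
})

// Any tool calls the model decided to make appear on the response message.
console.log(response.message.tool_calls)
```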
### generate
```javascript
ollama.generate(request)
```

- `request` `<Object>`: The request object containing generate parameters.
  - `model` `<string>` The name of the model to use for the generation.
  - `prompt` `<string>`: The prompt to send to the model.
  - `suffix` `<string>`: (Optional) Suffix is the text that comes after the inserted text.
  - `system` `<string>`: (Optional) Override the model system prompt.
  - `template` `<string>`: (Optional) Override the model template.
  - `raw` `<boolean>`: (Optional) Bypass the prompt template and pass the prompt directly to the model.
  - `images` `<Uint8Array[] | string[]>`: (Optional) Images to be included, either as Uint8Array or base64 encoded strings.
  - `format` `<string>`: (Optional) Set the expected format of the response (`json`).
  - `stream` `<boolean>`: (Optional) When true an `AsyncGenerator` is returned.
  - `keep_alive` `<string | number>`: (Optional) How long to keep the model loaded.
  - `options` `<Options>`: (Optional) Options to configure the runtime.
- Returns: `<GenerateResponse>`
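
A minimal usage sketch for `generate` (the model name is an assumption; the completion text is returned on the `response` field):

```javascript
import ollama from 'ollama'

const output = await ollama.generate({
  model: 'llama3.1', // assumed model name
  prompt: 'Why is the sky blue?',
})

// The generated completion is returned on the `response` field.
console.log(output.response)
```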
### pull

```javascript
ollama.pull(request)
```

- `request` `<Object>`: The request object containing pull parameters.
  - `model` `<string>` The name of the model to pull.
  - `insecure` `<boolean>`: (Optional) Pull from servers whose identity cannot be verified.
  - `stream` `<boolean>`: (Optional) When true an `AsyncGenerator` is returned.
- Returns: `<ProgressResponse>`
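
A sketch of streaming pull progress, assuming a model name; each streamed part is a progress object whose `status` field describes the current step:

```javascript
import ollama from 'ollama'

// With stream: true the awaited call yields an AsyncGenerator of progress parts.
const progress = await ollama.pull({ model: 'llama3.1', stream: true }) // assumed model name

for await (const part of progress) {
  console.log(part.status)
}
```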
### push

```javascript
ollama.push(request)
```

- `request` `<Object>`: The request object containing push parameters.
  - `model` `<string>` The name of the model to push.
  - `insecure` `<boolean>`: (Optional) Push to servers whose identity cannot be verified.
  - `stream` `<boolean>`: (Optional) When true an `AsyncGenerator` is returned.
- Returns: `<ProgressResponse>`

### create
```javascript
ollama.create(request)
```

- `request` `<Object>`: The request object containing create parameters.
  - `model` `<string>` The name of the model to create.
  - `path` `<string>`: (Optional) The path to the Modelfile of the model to create.
  - `modelfile` `<string>`: (Optional) The content of the Modelfile to create.
  - `stream` `<boolean>`: (Optional) When true an `AsyncGenerator` is returned.
- Returns: `<ProgressResponse>`

### delete
```javascript
ollama.delete(request)
```

- `request` `<Object>`: The request object containing delete parameters.
  - `model` `<string>` The name of the model to delete.
- Returns: `<StatusResponse>`

### copy
```javascript
ollama.copy(request)
```

- `request` `<Object>`: The request object containing copy parameters.
  - `source` `<string>` The name of the model to copy from.
  - `destination` `<string>` The name of the model to copy to.
- Returns: `<StatusResponse>`

### list
```javascript
ollama.list()
```

- Returns: `<ListResponse>`
### show
```javascript
ollama.show(request)
```

- `request` `<Object>`: The request object containing show parameters.
  - `model` `<string>` The name of the model to show.
  - `system` `<string>`: (Optional) Override the model system prompt returned.
  - `template` `<string>`: (Optional) Override the model template returned.
  - `options` `<Options>`: (Optional) Options to configure the runtime.
- Returns: `<ShowResponse>`
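
A brief sketch of `show` (the model name is an assumption; the returned object includes fields such as the model's template and parameters):

```javascript
import ollama from 'ollama'

const details = await ollama.show({ model: 'llama3.1' }) // assumed model name

// Inspect model metadata, e.g. the prompt template.
console.log(details.template)
```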
### embed

```javascript
ollama.embed(request)
```

- `request` `<Object>`: The request object containing embedding parameters.
  - `model` `<string>` The name of the model used to generate the embeddings.
  - `input` `<string> | <string[]>`: The input used to generate the embeddings.
  - `truncate` `<boolean>`: (Optional) Truncate the input to fit the maximum context length supported by the model.
  - `keep_alive` `<string | number>`: (Optional) How long to keep the model loaded.
  - `options` `<Options>`: (Optional) Options to configure the runtime.
- Returns: `<EmbedResponse>`
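
A minimal `embed` sketch; the embedding model name is an assumption, and passing an array of strings yields one vector per input:

```javascript
import ollama from 'ollama'

const response = await ollama.embed({
  model: 'nomic-embed-text', // assumed embedding model
  input: ['Why is the sky blue?', 'Why is the grass green?'],
})

// One embedding vector per input string.
console.log(response.embeddings.length)
```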
### ps

```javascript
ollama.ps()
```

- Returns: `<ListResponse>`
### abort
```javascript
ollama.abort()
```

This method will abort **all** streamed generations currently running with the client instance.
If there is a need to manage streams with timeouts, it is recommended to have one Ollama client per stream.

All asynchronous threads listening to streams (typically the `for await (const part of response)` loop) will throw an `AbortError` exception. See [examples/abort/abort-all-requests.ts](examples/abort/abort-all-requests.ts) for an example.
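
A hedged sketch of handling the `AbortError` raised in a streaming loop when `abort()` is called (the timeout and model name are illustrative):

```javascript
import ollama from 'ollama'

// Abort all in-flight streams on this client after one second (illustrative).
setTimeout(() => ollama.abort(), 1000)

try {
  const stream = await ollama.generate({
    model: 'llama3.1', // assumed model name
    prompt: 'Write a very long story.',
    stream: true,
  })
  for await (const part of stream) {
    process.stdout.write(part.response)
  }
} catch (error) {
  if (error.name === 'AbortError') {
    console.log('The stream was aborted.')
  } else {
    throw error
  }
}
```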
## Custom client
A custom client can be created with the following fields:
- `host` `<string>`: (Optional) The Ollama host address. Default: `"http://127.0.0.1:11434"`.
- `fetch` `<Object>`: (Optional) The fetch library used to make requests to the Ollama host.

```javascript
import { Ollama } from 'ollama'

const ollama = new Ollama({ host: 'http://127.0.0.1:11434' })
const response = await ollama.chat({
model: 'llama3.1',
messages: [{ role: 'user', content: 'Why is the sky blue?' }],
})
```
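
As a sketch of the `fetch` field, the example below wraps the global fetch to attach a header to every request; the header and token are purely hypothetical:

```javascript
import { Ollama } from 'ollama'

const ollama = new Ollama({
  host: 'http://127.0.0.1:11434',
  // Wrap the global fetch so every request carries an extra header.
  fetch: (input, init = {}) => {
    const headers = new Headers(init.headers)
    headers.set('Authorization', 'Bearer my-token') // hypothetical token
    return fetch(input, { ...init, headers })
  },
})
```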
## Building

To build the project files run:
```sh
npm run build
```