= Demo Spring AI + Ollama
:toc:
:imagesdir: assets/images

== Overview
Demo for testing a chatbot with Ollama and Spring AI.
The LLM used is https://ollama.com/library/llama3[llama3] or https://ollama.com/library/llama3.1[llama3.1].
Inspired by the article: https://www.baeldung.com/spring-ai-ollama-chatgpt-like-chatbot
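For orientation, here is a minimal sketch of the kind of call Spring AI makes against the local Ollama server.
It assumes the Spring AI Ollama starter has auto-configured a `ChatClient.Builder` bean (model and base URL coming from the application configuration); the class and method names are illustrative, not necessarily those used in this repository.

[source,java]
----
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.stereotype.Service;

@Service
class ChatbotService {

    private final ChatClient chatClient;

    ChatbotService(ChatClient.Builder builder) {
        // Builder auto-configured by the Spring AI Ollama starter
        this.chatClient = builder.build();
    }

    String answer(String userMessage) {
        // Send the user message to the model served by Ollama and return the generated text
        return chatClient.prompt()
                .user(userMessage)
                .call()
                .content();
    }
}
----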

== Starting the Ollama service
[source,bash]
----
# start the Ollama server (keep it running, e.g. in a separate terminal)
ollama serve
# download the model
ollama pull llama3.1
----

=== To start it with Docker
[source,bash]
----
# --name lets the "docker exec ollama ..." command below target the container; the named volume persists downloaded models
docker run -d --name ollama -v ollama:/root/.ollama -p 11434:11434 --gpus=all ollama/ollama
----

NOTE: Ollama can make use of GPUs.
The https://docs.docker.com/reference/cli/docker/container/run/#gpus[_--gpus_] Docker option is required for that.

Then a model must be downloaded (if no local volume contains an existing model).
[source,bash]
----
# run "ollama pull" inside the running container
docker exec ollama ollama pull llama3.1
----

== Testing the chatbot API
Send a POST request to: _http://localhost:8080/helpdesk/chat_
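As a hypothetical sketch of the server side (class and field names are illustrative and may differ from the actual code), the endpoint could map that JSON payload as follows, delegating to the chat client sketched above; the `history_id` field identifies the conversation and could be wired to Spring AI's chat memory (not shown here).

[source,java]
----
import com.fasterxml.jackson.annotation.JsonProperty;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/helpdesk")
class HelpdeskController {

    private final ChatbotService chatbotService;

    HelpdeskController(ChatbotService chatbotService) {
        this.chatbotService = chatbotService;
    }

    // JSON fields match the request bodies used in the examples below
    record ChatRequest(@JsonProperty("prompt_message") String promptMessage,
                       @JsonProperty("history_id") String historyId) {
    }

    record ChatResponse(String answer) {
    }

    @PostMapping("/chat")
    ChatResponse chat(@RequestBody ChatRequest request) {
        // history_id identifies the conversation; chat memory is not implemented in this sketch
        return new ChatResponse(chatbotService.answer(request.promptMessage()));
    }
}
----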
=== Example with HTTPie
[source,bash]
----
http POST http://localhost:8080/helpdesk/chat prompt_message="Hello how are you ?" history_id=1234
----

=== Example with cURL
[source,bash]
----
curl -X POST http://localhost:8080/helpdesk/chat -H "Content-Type: application/json" -d '{"prompt_message":"Hello how are you ?", "history_id":"1234"}'
----

== References
* https://ollama.com/[Ollama]
** https://ollama.com/library[Ollama models]
* https://hub.docker.com/r/ollama/ollama[Ollama Docker image]