https://github.com/chrnthnkmutt/slm-py-experiment
This repository demonstrates running Phi-3, Microsoft's Small Language Model, locally on-device using Ollama together with the Python library Streamlit, with an additional Phi-3 Vision use case.
- Host: GitHub
- URL: https://github.com/chrnthnkmutt/slm-py-experiment
- Owner: chrnthnkmutt
- License: mit
- Created: 2024-05-07T01:33:35.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-12-17T08:10:12.000Z (6 months ago)
- Last Synced: 2025-03-25T08:22:01.425Z (3 months ago)
- Topics: ollama, phi3, phi3-vision, python, streamlit
- Language: Jupyter Notebook
- Homepage:
- Size: 15.3 MB
- Stars: 3
- Watchers: 1
- Forks: 2
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# phi3-py-experiment
## Intelligent Chatbot with Phi-3 SLM and Streamlit Library
Let's build a chatbot with just Python using the Streamlit library, Ollama, and Microsoft Phi-3.
### Streamlit:
turns data scripts into shareable web apps in minutes. All in pure Python. No front-end experience required.
You can find more info in the official Streamlit docs.
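As a quick taste of how little code a Streamlit app needs, here is a minimal sketch (the file name `hello_app.py` is just an example, not part of this repo):
```py
# hello_app.py -- minimal Streamlit illustration (not from this repo)
import streamlit as st

st.title("Hello, Streamlit")
name = st.text_input("Your name")  # renders an input box in the browser
if name:
    st.write(f"Nice to meet you, {name}!")
```
Run it with `streamlit run hello_app.py` and it opens in your browser.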
### Ollama:
allows you to run open-source large language models locally.
You can find more info in the official Ollama docs.
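For example, once a model has been pulled (as done in the steps below), you can chat with it directly from the terminal:
```
ollama run phi3 "Explain what a small language model is in one sentence."
```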
### Phi-3 Mini:
is a 3.8B-parameter, lightweight, state-of-the-art open model by Microsoft.
You can find more info in the official Phi-3 Mini docs.
### Steps
*If you can't use `pip`, use `conda` instead to install the libraries.* Ensure that you have installed the Ollama platform on your computer by visiting this link: [Ollama.com](https://ollama.com/)
1 - Create a new conda environment
```
conda create --name envStreamPhi
```
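If you need a specific Python version, you can pin it when creating the environment, e.g. `conda create --name envStreamPhi python=3.11` (the version here is just an example).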
2 - Activate the environment
```
conda activate envStreamPhi
```
3 - Clone the Streamlit template
```
git clone https://github.com/streamlit/streamlit.git
conda install streamlit
```
4 - Install ollama & pull the phi-3 model
```
pip install ollama
ollama pull phi3
```
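As an optional sanity check (a minimal sketch, assuming the install and pull above succeeded), you can send a one-off message to the model from Python:
```py
# Optional smoke test: one-off chat with the local phi3 model
import ollama

reply = ollama.chat(
    model="phi3",
    messages=[{"role": "user", "content": "Say hello in five words."}],
)
print(reply["message"]["content"])  # the model's response text
```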
5 - Pull the Embeddings model:
```
ollama pull nomic-embed-text
```
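The embeddings model is not used by the chatbot below, but it is handy if you later add retrieval on top of it. A minimal check that it responds (the prompt text is arbitrary):
```py
# Minimal check that the embeddings model works
import ollama

emb = ollama.embeddings(model="nomic-embed-text", prompt="Hello, world")
print(len(emb["embedding"]))  # length of the embedding vector
```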
6 - Test installation
```
streamlit hello
```
### Build the AI assistant
To build the AI assistant, you have two choices: clone the repo and get all the code from the get-go, or code along.
#### I - First option:
1 - Clone the project from Github
```
git clone https://github.com/chrnthnkmutt/phi3_experiment.git
```
2 - Run the application
```
streamlit run app.py
```
#### II - Second option: code along
1 - Create your app.py file
```
touch app.py
```
2 - Add imports
```py
import streamlit as st
import ollama
```
3 - Add the default message
```py
if "messages" not in st.session_state:
st.session_state["messages"] = [{"role": "assistant", "content": "Hello tehre, how can I help you, today?"}]
```
4 - Add the message history
```py
for msg in st.session_state.messages:
    if msg["role"] == "user":
        st.chat_message(msg["role"], avatar="🧑‍💻").write(msg["content"])
    else:
        st.chat_message(msg["role"], avatar="🤖").write(msg["content"])
```
5 - Configure model
```py
def generate_response():
    # Stream the reply from the local phi3 model token by token
    response = ollama.chat(model='phi3', stream=True, messages=st.session_state.messages)
    for partial_resp in response:
        token = partial_resp["message"]["content"]
        st.session_state["full_message"] += token
        yield token
```
6 - Configure the prompt
```py
if prompt := st.chat_input():
    st.session_state.messages.append({"role": "user", "content": prompt})
    st.chat_message("user", avatar="🧑‍💻").write(prompt)
    st.session_state["full_message"] = ""
    st.chat_message("assistant", avatar="🤖").write_stream(generate_response)
    st.session_state.messages.append({"role": "assistant", "content": st.session_state["full_message"]})
```
7 - The full codebase of app.py
```py
import streamlit as st
import ollama

st.title("💬 Phi3 Chatbot")

if "messages" not in st.session_state:
    st.session_state["messages"] = [{"role": "assistant", "content": "Hello there, how can I help you today?"}]

# Write message history
for msg in st.session_state.messages:
    if msg["role"] == "user":
        st.chat_message(msg["role"], avatar="🧑‍💻").write(msg["content"])
    else:
        st.chat_message(msg["role"], avatar="🤖").write(msg["content"])

# Configure the model
def generate_response():
    response = ollama.chat(model='phi3', stream=True, messages=st.session_state.messages)
    for partial_resp in response:
        token = partial_resp["message"]["content"]
        st.session_state["full_message"] += token
        yield token

if prompt := st.chat_input():
    st.session_state.messages.append({"role": "user", "content": prompt})
    st.chat_message("user", avatar="🧑‍💻").write(prompt)
    st.session_state["full_message"] = ""
    st.chat_message("assistant", avatar="🤖").write_stream(generate_response)
    st.session_state.messages.append({"role": "assistant", "content": st.session_state["full_message"]})
```
Run the Streamlit app
```
streamlit run app.py
```
## Experimenting with Phi-3 Vision on Jupyter Notebook
Visit the files `phi3-vis-ocr.ipynb` and `phi3-vis-gen.ipynb` to test the multimodal performance of Phi-3 Vision. Running them on Google Colaboratory is recommended.
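The notebook code itself is not reproduced here, but for orientation, a typical way to run Phi-3 Vision on Colab is through Hugging Face Transformers. The sketch below follows the usage pattern from the `microsoft/Phi-3-vision-128k-instruct` model card; `example.jpg` is a placeholder path, and the actual notebook code may differ:
```py
# Sketch of running Phi-3 Vision via Hugging Face Transformers
# (illustrative only, not taken from the notebooks in this repo)
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Phi-3-vision-128k-instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# <|image_1|> marks where the image is injected into the prompt
messages = [{"role": "user", "content": "<|image_1|>\nDescribe this image."}]
prompt = processor.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

image = Image.open("example.jpg")  # placeholder image path
inputs = processor(prompt, [image], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)

# Drop the prompt tokens before decoding the generated answer
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```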