Running the Multimodal AI Chat App with LM Studio using a locally loaded model
- Host: GitHub
- URL: https://github.com/ashot72/lm-studio-local-chat-app
- Owner: Ashot72
- Created: 2025-05-21T13:10:50.000Z (5 months ago)
- Default Branch: main
- Last Pushed: 2025-05-24T14:20:37.000Z (5 months ago)
- Last Synced: 2025-09-14T19:52:33.489Z (26 days ago)
- Topics: ai, chart-js, chatbot, chatgpt, gemma, hugging-face, langchain-js, langgraph-js, llm, lm-studio, local-models, nextjs, quantization, react, vercel-ai-sdk
- Language: TypeScript
- Homepage:
- Size: 3.99 MB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Running the Multimodal AI Chat App with LM Studio using a locally loaded model
I explore how to run large language models (LLMs) locally on your own machine, such as a high-end notebook, using [LM Studio](https://lmstudio.ai/). You'll see how to select a quantized model that's compatible with your hardware and discover how easy it is to use LM Studio with local models such as Gemma, DeepSeek, Llama, and others. The process requires only beginner-level experience.
Running models locally is ideal when you want to keep your data private, avoid subscription costs, and keep working while traveling, even on airplanes without an internet connection.
LM Studio also supports programmatic use through its built-in server, enabling developers to build custom applications powered by locally running models—without incurring costs for accessing proprietary model APIs.
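As a rough illustration, here is a minimal sketch of calling that server directly over HTTP. It assumes LM Studio's server is running on its default port (1234) and already has a model loaded; the model identifier shown is a placeholder, not necessarily the one you will use.
```typescript
// Minimal sketch: calling LM Studio's local server over its OpenAI-compatible
// REST API. Assumes the server runs on the default port 1234 and that the
// model identifier below matches whatever is currently loaded in LM Studio.
const response = await fetch("http://localhost:1234/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "google/gemma-3-4b", // placeholder: use your locally loaded model's identifier
    messages: [{ role: "user", content: "Hello from a local model!" }],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```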
One of the great advantages of locally running tools like LM Studio is that, in addition to their own APIs, they support the OpenAI SDK, which has become the de facto standard for interacting with LLMs. The [Multi-Modal Chat](https://github.com/Ashot72/Multi-Modal-Chat) app I previously built was able to connect to the LM Studio server by changing just two settings: the baseURL, pointing to the local server, and the model name, referencing the locally loaded model. No API key is required, as everything runs entirely on the local machine.
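A sketch of that two-setting change with the OpenAI SDK follows; the port and model identifier are assumptions and depend on how LM Studio is configured on your machine.
```typescript
import OpenAI from "openai";

// The only two settings that differ from using OpenAI's hosted API:
// baseURL points at LM Studio's local server, and model names the loaded model.
const client = new OpenAI({
  baseURL: "http://localhost:1234/v1", // LM Studio's local server (default port)
  apiKey: "lm-studio",                 // placeholder; no real key is needed locally
});

const completion = await client.chat.completions.create({
  model: "google/gemma-3-4b", // placeholder for the locally loaded model's identifier
  messages: [{ role: "user", content: "What can you do offline?" }],
});

console.log(completion.choices[0].message.content);
```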
To show how it works, I used the small Google Gemma model. I shared my hardware profile with Hugging Face to make sure the model would run on my machine. It wasn't particularly fast, but it worked well enough to demonstrate the app.
To get started:
```
# Clone the repository
git clone https://github.com/Ashot72/LM-Studio-local-chat-app
cd LM-Studio-local-chat-app

# Create the .env file based on the env.example.txt file and include the respective keys.

# Install dependencies
yarn install

# Run locally
npm run dev

# Run in production mode
npm run build
npm start
```

Go to [LM Studio Video](https://youtu.be/DW75yo6W710) page
Go to [LM Studio Description](https://ashot72.github.io/LM-Studio-local-chat-app/doc.html) page