https://github.com/lacerbi/llm-js-lite
- Host: GitHub
- URL: https://github.com/lacerbi/llm-js-lite
- Owner: lacerbi
- Created: 2025-09-07T19:12:04.000Z (about 1 month ago)
- Default Branch: main
- Last Pushed: 2025-09-07T20:03:40.000Z (about 1 month ago)
- Last Synced: 2025-09-07T22:09:10.286Z (about 1 month ago)
- Language: JavaScript
- Size: 420 KB
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
---
title: Qwen3 WebGPU
emoji: 🚀
colorFrom: blue
colorTo: pink
sdk: static
pinned: false
thumbnail: >-
  https://huggingface.co/spaces/webml-community/qwen3-webgpu/resolve/main/banner.png
short_description: A hybrid reasoning model that runs locally in your browser.
models:
- onnx-community/Qwen3-0.6B-ONNX
---

# Qwen3 WebGPU
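Qwen3 WebGPU is a demo that runs the `onnx-community/Qwen3-0.6B-ONNX` model entirely in your browser using Transformers.js on WebGPU, so no server-side inference is needed. The snippet below is a minimal, illustrative sketch of that idea using the `pipeline` API from `@huggingface/transformers`; it is not the app's actual source code, and the prompt and generation settings are placeholders.

```js
// Illustrative sketch only: load the model in the browser and run it on WebGPU.
import { pipeline } from "@huggingface/transformers";

// Downloads and caches the ONNX weights in the browser on first use.
const generator = await pipeline(
  "text-generation",
  "onnx-community/Qwen3-0.6B-ONNX",
  { device: "webgpu" }, // requires a WebGPU-capable browser
);

// Chat-style input; the pipeline applies the model's chat template.
const messages = [
  { role: "user", content: "Explain WebGPU in one sentence." },
];

const output = await generator(messages, { max_new_tokens: 256 });
console.log(output[0].generated_text.at(-1).content);
```

A WebGPU-capable browser (for example, a recent Chromium-based browser) is required, and the model weights are downloaded on first load.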
## Getting Started
Follow the steps below to set up and run the application.
### 1. Clone the Repository
Clone the examples repository from GitHub:
```sh
git clone https://github.com/huggingface/transformers.js-examples.git
```

### 2. Navigate to the Project Directory
Change your working directory to the `qwen3-webgpu` folder:
```sh
cd transformers.js-examples/qwen3-webgpu
```

### 3. Install Dependencies
Install the necessary dependencies using npm:
```sh
npm i
```

### 4. Run the Development Server
Start the development server:
```sh
npm run dev
```

The application should now be running locally. Open your browser and go to `http://localhost:5173` to see it in action.
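A chat demo like this one typically streams tokens to the page as they are generated rather than waiting for the full reply. The sketch below shows how that could look with the `TextStreamer` utility from `@huggingface/transformers`; it is an assumption-laden illustration, not the demo's actual code, and the prompt and UI hook are placeholders.

```js
// Illustrative only: stream decoded text chunks as the model generates them.
import { pipeline, TextStreamer } from "@huggingface/transformers";

const generator = await pipeline(
  "text-generation",
  "onnx-community/Qwen3-0.6B-ONNX",
  { device: "webgpu" },
);

// TextStreamer calls the callback with each decoded chunk of output text.
const streamer = new TextStreamer(generator.tokenizer, {
  skip_prompt: true,
  callback_function: (text) => {
    // In a real UI you would append this chunk to the chat view; here we log it.
    console.log(text);
  },
});

await generator(
  [{ role: "user", content: "Write a haiku about WebGPU." }],
  { max_new_tokens: 128, streamer },
);
```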