LLM API Cost Calculator
https://github.com/prajwalsrinvas/llm_api_cost_calc
- Host: GitHub
- URL: https://github.com/prajwalsrinvas/llm_api_cost_calc
- Owner: Prajwalsrinvas
- Created: 2024-05-22T11:28:07.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2025-02-15T13:52:43.000Z (4 months ago)
- Last Synced: 2025-02-15T14:31:07.907Z (4 months ago)
- Topics: llm, llm-cost, streamlit
- Language: Python
- Homepage: https://llm-api-cost.streamlit.app/
- Size: 46.9 KB
- Stars: 1
- Watchers: 2
- Forks: 0
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
README
# LLM API Cost Calculator 🤖🧮
A Streamlit app to calculate, compare, and visualize the costs of various LLM APIs. The app includes real-time pricing data, interactive visualization, and advanced features like token estimation and query parameter support.
[LLM API Cost Calculator demo.webm](https://github.com/user-attachments/assets/b7bd21b6-ade2-4d56-b008-203e0724a464)



---
## Features
- **Real-time Pricing Data:**
  Fetches up-to-date pricing information from [docsbot.ai](https://docsbot.ai/tools/gpt-openai-api-pricing-calculator) using BeautifulSoup, regex extraction, and caching to minimize redundant requests. The raw data is optionally saved to a local file (`cost.json`) for debugging.
- **Cost Calculation:**
  Computes the total cost based on input tokens, output tokens, and API calls. Costs are calculated on a per-million-token basis and compared against a default model to provide a relative cost metric (see the sketch after this list).
- **Relative Cost Comparison:**
  Compares the costs of various models using a user-selected default model (e.g., **GPT-4o mini**) as the baseline.
- **Provider Filtering:**
  Allows filtering the results by LLM provider, with automatic normalization of provider names (e.g., "OpenAI / Azure" is treated as "OpenAI").
- **Interactive Visualization:**
  Displays cost data in both a detailed table and an interactive horizontal bar chart built with Plotly.
- **Currency Conversion:**
  Supports both USD and INR. A live USD-to-INR exchange rate is fetched and applied, with a fallback rate used if the fetch fails.
- **Token Estimation Dialog:**
  Provides a dialog (powered by the `tiktoken` module) for estimating token counts from sample input and output texts. Estimated values are stored in session state and automatically populate the token input fields.
- **Detailed Token Cost Breakdown:**
  Offers an option to display individual input and output token costs in the results table.
- **Query Parameters Support:**
  Reads URL query parameters (`input_tokens`, `output_tokens`, and `api_calls`) to pre-populate the respective input fields.
- **Caching & Session State:**
  Uses Streamlit's caching for data fetching and exchange rate lookups, and session state to retain token estimation results between interactions.
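The cost math itself is simple. The sketch below is a minimal, hypothetical illustration of the per-million-token calculation, the relative comparison against a baseline model, and the INR conversion; the prices, model choices, and fallback exchange rate are assumptions, not values taken from the app.

```python
# Minimal sketch of the cost math, assuming per-million-token pricing.
# Prices and the fallback exchange rate below are illustrative assumptions.
def total_cost(input_tokens: int, output_tokens: int, api_calls: int,
               input_price_per_m: float, output_price_per_m: float) -> float:
    """Total USD cost for `api_calls` requests at the given per-million-token prices."""
    per_call = (input_tokens / 1_000_000) * input_price_per_m \
             + (output_tokens / 1_000_000) * output_price_per_m
    return per_call * api_calls

baseline = total_cost(2000, 300, 10, input_price_per_m=0.15, output_price_per_m=0.60)
candidate = total_cost(2000, 300, 10, input_price_per_m=2.50, output_price_per_m=10.00)
print(f"Relative cost vs. baseline: {candidate / baseline:.1f}x")

# INR display: multiply by a live USD-to-INR rate, or a fallback if the fetch fails.
USD_TO_INR_FALLBACK = 83.0  # assumption, not the app's actual fallback value
print(f"Candidate cost: INR {candidate * USD_TO_INR_FALLBACK:.2f}")
```
---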
## Dependencies
- pandas
- plotly
- requests
- beautifulsoup4
- streamlit (supports st.cache_data, st.dialog, and session state)
- tiktoken (optional, for token estimation; see the example below)
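Token estimation with `tiktoken` boils down to encoding the sample text and counting the tokens. The snippet below is an illustrative sketch, not the app's exact code; the encoding name is an assumption.

```python
# Illustrative sketch of token estimation with tiktoken.
# The encoding name is an assumption, not necessarily what the app uses.
import tiktoken

def estimate_tokens(text: str, encoding_name: str = "cl100k_base") -> int:
    encoding = tiktoken.get_encoding(encoding_name)
    return len(encoding.encode(text))

print(estimate_tokens("Explain the difference between lists and tuples in Python."))
```
---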
## Key Functions
- **`fetch_llm_api_cost()`**
  Fetches and parses pricing data from the remote website, using caching to avoid repeated requests. The raw JSON data is optionally written to `cost.json` for debugging.
- **`load_data()`**
  Loads and preprocesses the fetched pricing data into a pandas DataFrame, including normalization of provider names.
- **`calculate_costs()`**
  Computes the total and relative costs based on user inputs (input tokens, output tokens, and API calls), applies currency conversion, and optionally includes a detailed breakdown of token costs.
- **`create_total_cost_chart()`**
  Generates an interactive horizontal bar chart visualizing the total cost per model (see the sketch after this list).
- **`estimate_dialog()`**
  Opens a dialog for estimating token counts using sample texts. Estimated token counts are stored in session state and automatically populate the token input fields.
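As one example, a `create_total_cost_chart()`-style function can be built with Plotly Express in a few lines. This is a hypothetical sketch, not the repo's implementation; the DataFrame column names and layout options are assumptions.

```python
# Hypothetical sketch of a create_total_cost_chart-style function using Plotly Express.
# The DataFrame column names ("Model", "Total Cost") are assumptions.
import pandas as pd
import plotly.express as px

def create_total_cost_chart(df: pd.DataFrame, currency: str = "USD"):
    fig = px.bar(
        df.sort_values("Total Cost"),
        x="Total Cost",
        y="Model",
        orientation="h",  # horizontal bars, one per model
        title=f"Total cost per model ({currency})",
    )
    fig.update_layout(xaxis_title=f"Total Cost ({currency})", yaxis_title="Model")
    return fig

# In a Streamlit app the figure would be rendered with st.plotly_chart(fig).
```
---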
## Main Application Flow
1. **Data Loading & Preprocessing:**
   - Fetch pricing data and the live USD-to-INR exchange rate.
   - Normalize provider names and prepare the data for cost calculations.
2. **User Input Sidebar:**
   - **Provider and Model Selection:**
     Choose which LLM providers to include and select a default model for relative cost comparison.
   - **Token and API Call Inputs:**
     - Manually enter the number of input tokens, output tokens, and API calls.
     - Alternatively, use URL query parameters (`input_tokens`, `output_tokens`, and `api_calls`) to pre-populate these values, e.g. http://localhost:8501/?input_tokens=2000&output_tokens=300&api_calls=10 (a sketch of this follows the list).
   - **Token Estimation:**
     Click the "Calculate Token Count" button to open the token estimation dialog. The resulting token counts are automatically populated into the input fields.
   - **Display Options:**
     - Toggle the display of individual token cost breakdowns.
     - Select the display currency (USD or INR).
3. **Cost Calculation & Visualization:**
   - Calculate total and relative costs based on the provided inputs.
   - Present the results in a detailed table and an interactive bar chart.
4. **Performance & Debugging:**
   - Leverages caching to optimize data fetching and exchange rate lookups.
   - Optionally writes fetched pricing data to `cost.json` for debugging purposes.
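The query-parameter pre-population can be done directly with Streamlit's `st.query_params`. The sketch below assumes Streamlit 1.30 or later; the widget labels, keys, and defaults are assumptions, not the app's exact code.

```python
# Minimal sketch of pre-populating sidebar inputs from URL query parameters,
# assuming Streamlit >= 1.30 (st.query_params). Labels and defaults are assumptions.
import streamlit as st

def query_param_int(name: str, default: int) -> int:
    value = st.query_params.get(name)
    try:
        return int(value) if value is not None else default
    except ValueError:
        return default

input_tokens = st.sidebar.number_input(
    "Input tokens", min_value=0, value=query_param_int("input_tokens", 1000)
)
output_tokens = st.sidebar.number_input(
    "Output tokens", min_value=0, value=query_param_int("output_tokens", 500)
)
api_calls = st.sidebar.number_input(
    "API calls", min_value=1, value=query_param_int("api_calls", 1)
)
```
---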
## Usage
To run the application locally:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
uv venv --python 3.11
source .venv/bin/activate
uv pip install -r requirements.txt
streamlit run app.py
```

This installs uv, uses it to install Python 3.11 if it is not already present, creates and activates a virtual environment, installs the requirements, and launches the app. Plain pip works for this too; uv is just faster.

The `streamlit run` command launches the app in your browser. You can modify token values, filter providers, use URL query parameters to pre-populate inputs, and interactively view cost comparisons and visualizations.
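A pre-populated link can also be constructed programmatically. The helper below is hypothetical and not part of the repo; it assumes the app is running at the default local address.

```python
# Hypothetical helper (not part of the repo) for building a shareable URL
# with the supported query parameters. Assumes the default local address.
from urllib.parse import urlencode

params = {"input_tokens": 2000, "output_tokens": 300, "api_calls": 10}
print(f"http://localhost:8501/?{urlencode(params)}")
```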