# [LiveIdeaBench](http://liveideabench.com): Evaluating LLMs' Scientific Creativity and Idea Generation with Minimal Context
[arXiv:2412.17596](https://arxiv.org/abs/2412.17596)


[co2.ipynb](https://github.com/x66ccff/liveideabench/blob/main/co2.ipynb)
_"It's not like finding a needle in a haystack, it is like creating new needles."_
Leaderboard: http://liveideabench.com
### Dataset
[liveideabench](https://huggingface.co/datasets/6cf/liveideabench) [liveideabench-DLC-250127](https://huggingface.co/datasets/6cf/liveideabench-DLC-250127) [liveideabench-v2](https://huggingface.co/datasets/6cf/liveideabench-v2)
### News (2025/3/29): Latest Dataset Update (v2) on Hugging Face!
We are pleased to announce that, based on invaluable feedback from reviewers, we have upgraded the benchmark to **version 2**. This update introduces a new dimension, **Clarity**, and improves the prompts and the evaluation process (including the rejection-handling mechanism), making the benchmark more comprehensive and objective.
The v2 benchmark covers a total of **41** state-of-the-art models, including `claude-3.7-sonnet:thinking`, `o3-mini-high`, `gpt-4.5-preview`, `qwq-32b`, `deepseek-r1`, and `gemini-2.0-flash-thinking`.
### News (2025/1/27): Latest Dataset Update on Hugging Face!
We are excited to announce that the latest dataset, including supplementary tests for models like **deepseek-R1**, **deepseek-V3**, **minimax-01**, **phi-4**, and **Opus**, has been uploaded to Hugging Face!
---
## LiveIdeaBench Evaluation Framework

## Evaluation Instructions
### 1️⃣ Install Environment
```bash
pip install -r requirements.txt
```

### 2️⃣ Database Initialization
Run the Python script to initialize the database:
```bash
python -c "from utils.database import init_database; init_database()"
```

### 3️⃣ Configuring API Keys
Before running the program, you need to configure at least one API key:
1. Create an `apikey` file and write your OpenRouter API key:
```bash
echo "your-openrouter-api-key" > apikey
```

Alternatively, set environment variables:
```bash
export OPENROUTER_API_KEY="your-openrouter-api-key"
export STEP_API_KEY="your-step-api-key"
export GEMINI_API_KEYS="key1,key2,key3"
```

### 4️⃣ Running Examples
Generate and evaluate ideas using a specified model:
```bash
# Generate ideas using a specified model
python run.py --idea_model "openai/gpt-4o-mini"

# Use a specific provider
python run.py --idea_model "openai/gpt-4o-mini" --provider openrouter
```

```bash
# Use a single keyword
python run.py --idea_model "openai/gpt-4o-mini" --keyword "relativity"

# Use multiple keywords
python run.py --idea_model "openai/gpt-4o-mini" --keyword "relativity" "periodic table"

# Do not specify a keyword (use all keywords)
python run.py --idea_model "openai/gpt-4o-mini"
```

### 5️⃣ Database Export
This step extracts the generated ideas, scores, and metadata from the internal database.
Run the script:
```bash
python view_database.py
```

to extract the generated data from the SQL database.
Then run `stats.ipynb` to generate `data/data.parquet`, which serves as input for the subsequent analysis notebooks.
### 6️⃣ Evaluate Fluency
Fluency measures the diversity and uniqueness of the generated ideas. This script calculates the Fluency score based on the processed data.
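As a rough illustration of the idea (a hypothetical sketch, not the actual logic of `hash.py`), a uniqueness-based fluency score can be computed by hashing normalized idea texts and counting distinct digests:

```python
import hashlib

def fluency_score(ideas):
    """Fraction of ideas that remain unique after light normalization.

    Hypothetical sketch only; the repo's hash.py may normalize and
    score differently.
    """
    if not ideas:
        return 0.0
    digests = {
        hashlib.sha256(idea.strip().lower().encode("utf-8")).hexdigest()
        for idea in ideas
    }
    return len(digests) / len(ideas)

ideas = [
    "Use muon tomography to map aquifers.",
    "use muon tomography to map aquifers.",  # duplicate after normalization
    "Catalyze CO2 reduction with metal-organic frameworks.",
]
print(fluency_score(ideas))  # 2 unique ideas out of 3
```

Hashing keeps memory bounded even for large idea sets, since only fixed-size digests are stored.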
Run the script:
```bash
python hash.py
```

### 7️⃣ Compute Flexibility & Plotting
Flexibility evaluates whether the ideas a model generates for the input keyword(s) span diverse scientific disciplines.
This notebook calculates the Flexibility score and creates visualizations of the benchmark results.
Run the Jupyter Notebook: `stats_flexibility.ipynb`
Generated figures can be found in the `./figs` directory.
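For intuition, one way such a score could be computed (a hypothetical sketch; the notebook's actual discipline classification and scoring are not reproduced here) is the normalized entropy of the discipline labels assigned to the ideas:

```python
import math
from collections import Counter

def flexibility_score(disciplines):
    """Normalized entropy of discipline labels, in [0, 1].

    Illustrative only: 0 means all ideas fall in one discipline,
    1 means they are spread evenly across the observed disciplines.
    """
    counts = Counter(disciplines)
    if len(counts) <= 1:
        return 0.0
    n = sum(counts.values())
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return entropy / math.log(len(counts))  # divide by max possible entropy

labels = ["physics", "chemistry", "physics", "biology", "materials"]
print(round(flexibility_score(labels), 3))
```

Normalizing by the maximum possible entropy makes scores comparable between models that touch different numbers of disciplines.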
## CO2 Emission Estimation
This repo provides an estimate of the CO2 footprint of running the idea generation and evaluation pipeline.
Run the Jupyter Notebook: `co2.ipynb`
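The underlying arithmetic of such an estimate is a simple tokens-to-energy-to-emissions chain; the coefficients below are placeholders for illustration, not the values used in `co2.ipynb`:

```python
def co2_kg(tokens, kwh_per_1k_tokens=0.0003, grid_kg_co2_per_kwh=0.4):
    """Rough CO2 footprint: tokens -> energy (kWh) -> emissions (kg).

    Both coefficients are assumed placeholders; real values depend on
    model, hardware, and the grid's carbon intensity.
    """
    return tokens / 1000 * kwh_per_1k_tokens * grid_kg_co2_per_kwh

# 5M tokens under these assumed coefficients
print(f"{co2_kg(5_000_000):.4f} kg CO2")
```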
## BibTeX
```bibtex
@article{ruan2024liveideabench,
  title={LiveIdeaBench: Evaluating LLMs' Scientific Creativity and Idea Generation with Minimal Context},
  author={Kai Ruan and Xuan Wang and Jixiang Hong and Peng Wang and Yang Liu and Hao Sun},
  journal={arXiv preprint arXiv:2412.17596},
  year={2024}
}
```