https://github.com/maxonary/thesis-topic-evaluator
ReAct Multi-Agent Flow that improves Bachelor Thesis Topic definition
- Host: GitHub
- URL: https://github.com/maxonary/thesis-topic-evaluator
- Owner: maxonary
- Created: 2025-06-29T14:52:23.000Z (8 months ago)
- Default Branch: main
- Last Pushed: 2025-06-29T15:37:49.000Z (8 months ago)
- Last Synced: 2025-06-29T16:37:23.018Z (8 months ago)
- Topics: agent, agentic-workflow, react-agent, streamlit
- Language: Python
- Homepage: https://thesis-topic-ai.streamlit.app
- Size: 9.77 KB
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Thesis Topic Evaluator (Streamlit)
A Streamlit web application that applies the ReAct (Reasoning + Acting) pattern across multiple agents to evaluate proposed Bachelor's thesis topics in Software Engineering.
## Features
1. **Scope Agent** – Verifies alignment with Bachelor-level expectations.
2. **Critic Agent** – Highlights vagueness, over-complexity, or unclear goals.
3. **Literature Agent** – Estimates availability of supporting academic literature.
4. **Feasibility Agent** – Assesses practical implementability with common tools and skills.
5. **Judge Agent** – Synthesises previous feedback, issuing an overall decision.
Each agent calls an LLM through the OpenAI API and follows the ReAct methodology, reasoning step by step before acting.
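The repository's agent code is not reproduced here; as a rough illustration, a single agent call might look like the sketch below, assuming the official `openai` Python SDK (v1+). The prompt wording and the `evaluate_scope` helper are illustrative, not taken from the project.
```python
# Hypothetical sketch of one ReAct-style agent call (not the project's actual code).
from openai import OpenAI  # assumes the official openai>=1.0 SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCOPE_PROMPT = (
    "You are the Scope Agent. Reason step by step (Thought), then act (Answer). "
    "Judge whether the proposed thesis topic fits Bachelor-level Software Engineering. "
    'Respond as JSON: {"reasoning": "...", "verdict": "pass|revise", "comments": "..."}'
)

def evaluate_scope(topic: str, model: str = "gpt-4o") -> str:
    """Send one topic to the (hypothetical) Scope Agent and return its raw reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SCOPE_PROMPT},
            {"role": "user", "content": topic},
        ],
    )
    return response.choices[0].message.content
```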
## Architecture
```mermaid
graph TD
U["User (Streamlit UI)"] --> SCOPE["Scope Agent"]
U --> CRITIC["Critic Agent"]
U --> LIT["Literature Agent"]
U --> FEAS["Feasibility Agent"]
SCOPE --> JUDGE["Judge Agent"]
CRITIC --> JUDGE
LIT --> JUDGE
FEAS --> JUDGE
JUDGE --> U
```
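The fan-out/fan-in shape of the diagram could be driven by an orchestrator along the following lines; `ask`, `AGENT_PROMPTS`, and `evaluate_topic` are hypothetical names used for illustration, not the project's actual API.
```python
# Hypothetical orchestration matching the diagram: four specialist agents review
# the topic independently, then the Judge Agent synthesises their reports.
from openai import OpenAI

client = OpenAI()

AGENT_PROMPTS = {
    "scope": "Check alignment with Bachelor-level expectations.",
    "critic": "Highlight vagueness, over-complexity, or unclear goals.",
    "literature": "Estimate availability of supporting academic literature.",
    "feasibility": "Assess practical implementability with common tools and skills.",
}

def ask(system_prompt: str, user_content: str, model: str = "gpt-4o") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_content},
        ],
    )
    return response.choices[0].message.content

def evaluate_topic(topic: str) -> str:
    # Fan out: each specialist agent reviews the topic on its own.
    reports = {name: ask(prompt, topic) for name, prompt in AGENT_PROMPTS.items()}
    # Fan in: the Judge Agent sees every report and issues the overall decision.
    judge_input = f"Topic: {topic}\n\n" + "\n\n".join(
        f"[{name}]\n{report}" for name, report in reports.items()
    )
    return ask("You are the Judge Agent. Synthesise the reports into a final decision.", judge_input)
```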
## Setup
1. **Clone & install dependencies**
```bash
git clone https://github.com/maxonary/thesis-topic-evaluator.git
cd thesis-topic-evaluator
python -m venv .venv
source .venv/bin/activate  # on Windows: .venv\Scripts\activate
pip install -r requirements.txt
```
2. **Configure the OpenAI API key**
Create a `.env` file in the project root or export the variable directly (a sketch of how such a key is typically loaded follows the setup steps):
```bash
echo "OPENAI_API_KEY=YOUR_KEY_HERE" > .env
# or export it just for the current shell session:
export OPENAI_API_KEY=YOUR_KEY_HERE
```
3. **Run the application**
```bash
streamlit run streamlit_app.py
```
Then open the provided local URL (e.g. http://localhost:8501) in your browser.
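A key configured this way is typically read at startup; below is a minimal sketch, assuming the app uses `python-dotenv` (an assumption, since the dependency list is not shown here).
```python
# Hypothetical key-loading step; assumes python-dotenv is listed in requirements.txt.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the project root, if present
api_key = os.environ["OPENAI_API_KEY"]  # raises KeyError if the key is missing
```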
## Notes
* The default model is `gpt-4o`; adjust in `streamlit_app.py` if needed.
* All agent responses are requested in JSON; if the LLM deviates, the raw output is still shown (see the sketch below).
* This project is a proof-of-concept; production deployments should incorporate stronger error handling and rate-limit resilience.
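The parse-then-fallback behaviour mentioned above might look roughly like the sketch below; `show_agent_reply` is an illustrative name, not the project's actual function.
```python
# Hypothetical JSON-with-fallback rendering, as implied by the note above.
import json
import streamlit as st

def show_agent_reply(raw_reply: str) -> None:
    """Render structured JSON when the LLM complied, otherwise show the raw text."""
    try:
        st.json(json.loads(raw_reply))  # pretty-print the structured response
    except json.JSONDecodeError:
        st.text(raw_reply)  # the model deviated from JSON; show the raw output
```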