https://github.com/rbroc/mental-health-llm-bias

### Evaluating bias in LLM-based diagnostics and severity assessment

##### File content
- `1_create_questionnaire_specs.py` creates JSON files (saved under `specs.json`) containing the information on the target questionnaires needed to convert numerical scores to prompts (see the spec sketch after this list);
- `2_simulate_scores.py` simulates questionnaire responses, with an equal number of simulated individuals in each severity bin defined for the target questionnaire. It does this by generating all possible combinations of per-question scores, then downsampling to obtain 1,000 total examples, equally distributed across severity bins (see the sampling sketch after this list). Outputs are saved under `scores`;
- `3_scores_to_narratives.py` maps the simulated scores to text, creating a narrative version of the questionnaire, saved in `outputs` (see the narrative sketch after this list);
- `4_paraphrase_narratives.py` paraphrases the narrative version of the questionnaire, needed for one of the experimental conditions;
- `5_add_demographic_premise_and_instructions.py` adds the demographic premise and the instructions (both experimental factors) to each example, yielding the final evaluation dataset (see the assembly sketch after this list).
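As an illustration of step 1, a spec entry for the PHQ-9 might look roughly like the sketch below. The field names (`item_scores`, `response_labels`, `severity_bins`) are assumptions, not necessarily the schema actually used by `1_create_questionnaire_specs.py`; the response anchors and severity cut-offs are the standard PHQ-9 ones.

```python
import json

# Hypothetical spec structure for the PHQ-9 (field names are assumptions,
# not necessarily the schema produced by 1_create_questionnaire_specs.py)
phq9_spec = {
    "name": "PHQ-9",
    "n_items": 9,
    "item_scores": [0, 1, 2, 3],  # each item is rated 0-3
    # Standard PHQ-9 response anchors for the 0-3 scale
    "response_labels": {
        0: "not at all",
        1: "several days",
        2: "more than half the days",
        3: "nearly every day",
    },
    # Standard PHQ-9 severity bins on the total score (0-27)
    "severity_bins": {
        "minimal": [0, 4],
        "mild": [5, 9],
        "moderate": [10, 14],
        "moderately severe": [15, 19],
        "severe": [20, 27],
    },
}

with open("specs.json", "w") as f:
    json.dump(phq9_spec, f, indent=2)
```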
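For step 2, the exhaustive-generation-plus-downsampling logic could look like the following sketch. The output filename, the random seed, and the sampling call are assumptions; the severity bins are the standard PHQ-9 cut-offs.

```python
import itertools
import json
import os
import random

random.seed(42)  # fixed seed for reproducibility (an assumption, not from the repo)

# Standard PHQ-9 severity bins on the total score (0-27)
BINS = {
    "minimal": range(0, 5),
    "mild": range(5, 10),
    "moderate": range(10, 15),
    "moderately severe": range(15, 20),
    "severe": range(20, 28),
}
N_TOTAL = 1000
per_bin = N_TOTAL // len(BINS)  # 200 simulated individuals per bin

# Enumerate all 4**9 = 262,144 possible answer combinations and bucket them by bin
by_bin = {label: [] for label in BINS}
for combo in itertools.product(range(4), repeat=9):
    total = sum(combo)
    for label, score_range in BINS.items():
        if total in score_range:
            by_bin[label].append(combo)
            break

# Downsample each bin to the same size, so severity bins are equally represented
sampled = {label: random.sample(combos, per_bin) for label, combos in by_bin.items()}

os.makedirs("scores", exist_ok=True)
with open("scores/phq9_simulated.json", "w") as f:
    json.dump({label: [list(c) for c in combos] for label, combos in sampled.items()}, f)
```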
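Step 3's score-to-text mapping presumably renders each item score through a template. A minimal sketch, using a hypothetical first-person template together with the standard PHQ-9 response anchors:

```python
# Standard PHQ-9 response anchors; the sentence template is a hypothetical choice
FREQUENCY = {0: "not at all", 1: "on several days",
             2: "on more than half the days", 3: "nearly every day"}

def item_to_sentence(symptom: str, score: int) -> str:
    """Render a single item score as a first-person statement."""
    return f"Over the last two weeks, I was bothered by {symptom} {FREQUENCY[score]}."

# One simulated individual's scores, one sentence per PHQ-9 item
scores = [2, 1, 0, 3, 1, 0, 2, 1, 0]
items = ["little interest or pleasure in doing things",
         "feeling down, depressed, or hopeless"]  # remaining 7 items elided
narrative = " ".join(item_to_sentence(item, s) for item, s in zip(items, scores))
print(narrative)
```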
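Step 5 is essentially prompt assembly. A minimal sketch, with made-up premise and instruction strings standing in for the actual experimental factor levels:

```python
def build_example(narrative: str, premise: str, instructions: str) -> str:
    """Prepend the demographic premise and append the task instructions
    (both experimental factors) to a narrative."""
    return f"{premise}\n\n{narrative}\n\n{instructions}"

# Hypothetical factor levels; the actual premises and instructions live in the repo
premise = "The following text was written by a 34-year-old woman."
instructions = "Based on this text, rate the severity of the person's depressive symptoms."
print(build_example("I was bothered by feeling down nearly every day.", premise, instructions))
```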

##### TODO:
- Test with current examples
- Set up paraphrase code
- Reintroduce additional conditions (how do we sample?)
- Evaluate LLMs

##### Important notes
- Right now, the code only supports the PHQ-9
- The pipeline could be extended to a multilingual scenario
- Code for the analyses is still to be added