Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/rbroc/mental-health-llm-bias
- Host: GitHub
- URL: https://github.com/rbroc/mental-health-llm-bias
- Owner: rbroc
- Created: 2024-07-17T11:18:25.000Z (5 months ago)
- Default Branch: main
- Last Pushed: 2024-10-11T12:03:45.000Z (2 months ago)
- Last Synced: 2024-10-24T11:52:04.069Z (about 2 months ago)
- Language: Jupyter Notebook
- Size: 17.4 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
### Evaluating bias in LLM-based diagnostics and severity assessment
##### File content
- `1_create_questionnaire_specs.py` creates JSON files (saved under `specs.json`) containing the information on the target questionnaires needed to convert numerical scores to prompts (see the sketches after this list);
- `2_simulate_scores.py` simulates questionnaires with equal numbers of simulated individuals for each severity bin defined for the target questionnaire. This is done by generating all possible combinations of scores per question, then downsampling so as to obtain 1,000 total examples, equally distributed across severity bins. Outputs are saved under `scores`;
- `3_scores_to_narratives.py` maps the simulated scores to text, creating a narrative version of the questionnaire, saved under `outputs`;
- `4_paraphrase_narratives.py` paraphrases the narrative version of the questionnaire, needed for one of the experimental conditions;
- `5_add_demographic_premise_and_instructions.py` adds the demographic premise and the instructions (both experimental factors) to each example, yielding the final evaluation dataset.
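The exact schema of `specs.json` is not documented here; the sketch below shows one plausible layout for a PHQ-9 spec, using the standard PHQ-9 response options and severity bands. All field names and values are illustrative assumptions, not taken from the repository.

```python
import json

# Hypothetical layout for specs.json -- field names are illustrative
# assumptions, not taken from the repository.
phq9_spec = {
    "name": "PHQ-9",
    "n_items": 9,
    "items": [
        "little interest or pleasure in doing things",
        "feeling down, depressed, or hopeless",
        # ... remaining PHQ-9 item stems
    ],
    "response_options": {
        "0": "not at all",
        "1": "several days",
        "2": "more than half the days",
        "3": "nearly every day",
    },
    # Standard PHQ-9 severity bands (total-score ranges); the repository
    # may bin severity differently.
    "severity_bins": {
        "minimal": [0, 4],
        "mild": [5, 9],
        "moderate": [10, 14],
        "moderately severe": [15, 19],
        "severe": [20, 27],
    },
}

with open("specs.json", "w") as f:
    json.dump(phq9_spec, f, indent=2)
```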
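A minimal sketch of the score-simulation step (`2_simulate_scores.py`), assuming PHQ-9 (9 items, each scored 0–3) and the standard PHQ-9 severity bands; the repository's actual binning, seeding, and sampling strategy may differ.

```python
import itertools
import random

random.seed(42)

N_ITEMS, MAX_SCORE = 9, 3
BINS = {
    "minimal": range(0, 5),
    "mild": range(5, 10),
    "moderate": range(10, 15),
    "moderately severe": range(15, 20),
    "severe": range(20, 28),
}
TOTAL_EXAMPLES = 1000
per_bin = TOTAL_EXAMPLES // len(BINS)  # 200 simulated individuals per bin

# Enumerate every possible response pattern (4^9 = 262,144 for PHQ-9).
all_patterns = list(itertools.product(range(MAX_SCORE + 1), repeat=N_ITEMS))

# Group patterns by the severity bin of their total score.
by_bin = {label: [] for label in BINS}
for pattern in all_patterns:
    total = sum(pattern)
    for label, score_range in BINS.items():
        if total in score_range:
            by_bin[label].append(pattern)
            break

# Downsample to an equal number of examples per severity bin.
simulated = {
    label: random.sample(patterns, per_bin)
    for label, patterns in by_bin.items()
}
```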
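The sketch below illustrates how one simulated response pattern might be mapped to a narrative (step 3) and then wrapped with a demographic premise and task instructions (step 5). The item stems, premise, and instruction wording are placeholders, not the phrasings used in the repository; the paraphrasing step is omitted since it is still listed as a TODO.

```python
# Placeholder PHQ-9 item stems and response labels -- illustrative only.
ITEM_STEMS = [
    "had little interest or pleasure in doing things",
    "felt down, depressed, or hopeless",
    # ... remaining PHQ-9 items
]
RESPONSE_LABELS = ["not at all", "on several days",
                   "on more than half the days", "nearly every day"]

def scores_to_narrative(pattern):
    """Map per-item scores to a first-person narrative (step 3)."""
    sentences = [
        f"Over the last two weeks, I have {stem} {RESPONSE_LABELS[score]}."
        for stem, score in zip(ITEM_STEMS, pattern)
    ]
    return " ".join(sentences)

def build_prompt(narrative, premise, instructions):
    """Prepend a demographic premise and task instructions (step 5)."""
    return f"{premise}\n\n{narrative}\n\n{instructions}"

prompt = build_prompt(
    scores_to_narrative((2, 1)),  # scores for the two example items above
    premise="The following statements were written by a 34-year-old woman.",
    instructions="Based on this description, rate the severity of depression.",
)
print(prompt)
```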
##### TODO
- Test with current examples
- Set up paraphrase code
- Reintroduce additional conditions (how do we sample?)
- Evaluate LLMs

##### Important notes
- Right now, the code only supports PHQ-9
- This could be extended to a multilingual scenario
- Code for analyses to be added