Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/dervinevolve/empathyarchives
- Host: GitHub
- URL: https://github.com/dervinevolve/empathyarchives
- Owner: DervinEvolve
- Created: 2024-09-16T22:47:54.000Z (4 months ago)
- Default Branch: main
- Last Pushed: 2024-09-18T01:07:31.000Z (4 months ago)
- Last Synced: 2024-09-18T06:05:44.739Z (4 months ago)
- Language: TypeScript
- Homepage: https://empathyarchives.vercel.app
- Size: 2.08 MB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Empathy Archives
Your Historical Research Assistant
## Overview
Empathy Archives is a voice-to-voice interface that lets students and educators hold intuitive spoken conversations about history. The project was built as a submission to Microsoft's Hackathon and uses Hume AI's expression detection models to create a more empathetic and responsive learning experience.
## Features
- **Voice-to-Voice Interaction**: Engage in natural, spoken conversations about historical topics.
- **Emotion-Aware Responses**: The AI adapts its output based on the user's tone, creating a more personalized interaction.
- **Primary Source Integration**: Access and discuss a vast array of historical primary sources.
- **Real-time Transcription**: View a live transcription of the conversation.
- **Tone Analysis**: Displays the top three tones detected in the user's speech.

## How It Works
1. **User Input**: Speak your question or topic of interest related to history.
2. **Voice Analysis**: Hume AI's expression detection models analyze your tone and emotion.
3. **Contextual Search**: The AI searches for relevant primary sources and historical information.
4. **Tailored Response**: Based on the analysis and search results, the AI formulates a response.
5. **Expressive Output**: The response is delivered via voice, with tone and pacing adjusted to match the context.
6. **Visual Feedback**: A transcription appears on screen, highlighting the detected tones.
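
The steps above can be read as a single request/response loop. The following is a minimal sketch of that loop, not the project's actual code: every helper (`transcribeAndAnalyze`, `searchPrimarySources`, `generateResponse`, `speak`) is a hypothetical stub standing in for the real Hume AI expression-detection call, the primary-source lookup, the response generation, and the speech synthesis.

```typescript
// Minimal sketch of one conversational turn (steps 1-6 above).
// Every helper below is a placeholder stub, not the project's code
// and not the Hume AI SDK.

interface ToneScore {
  name: string;   // e.g. "Curiosity"
  score: number;  // confidence in [0, 1]
}

// Steps 1-2: placeholder for speech-to-text plus expression detection.
async function transcribeAndAnalyze(
  audio: ArrayBuffer
): Promise<{ transcript: string; tones: ToneScore[] }> {
  return {
    transcript: "What caused the fall of the Roman Empire?",
    tones: [
      { name: "Curiosity", score: 0.81 },
      { name: "Interest", score: 0.62 },
      { name: "Confusion", score: 0.31 },
      { name: "Calmness", score: 0.22 },
    ],
  };
}

// Step 3: placeholder for the primary-source search.
async function searchPrimarySources(query: string): Promise<string[]> {
  return [`[stub] source matching "${query}"`];
}

// Step 4: placeholder for composing an answer from sources and tone.
async function generateResponse(
  query: string,
  sources: string[],
  tones: ToneScore[]
): Promise<string> {
  const tone = tones[0]?.name ?? "neutral";
  return `[stub] ${tone}-aware answer to "${query}" using ${sources.length} source(s)`;
}

// Step 5: placeholder for expressive text-to-speech output.
async function speak(text: string, tones: ToneScore[]): Promise<void> {
  console.log(`(speaking in a ${tones[0]?.name ?? "neutral"} register) ${text}`);
}

// Steps 1-6 wired together for a single user utterance.
async function handleTurn(audio: ArrayBuffer) {
  const { transcript, tones } = await transcribeAndAnalyze(audio);

  // Keep only the three strongest tones for the on-screen display (step 6).
  const topTones = [...tones].sort((a, b) => b.score - a.score).slice(0, 3);

  const sources = await searchPrimarySources(transcript);
  const reply = await generateResponse(transcript, sources, topTones);
  await speak(reply, topTones);

  // Returned so the UI can render the live transcription and tone badges.
  return { transcript, topTones, reply };
}
```

In the real app, something like `handleTurn` would run once per captured utterance, and its return value would drive the on-screen transcription and the three tone indicators.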