Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Projects in Awesome Lists by declare-lab
A curated list of projects in awesome lists by declare-lab.
https://github.com/declare-lab/conv-emotion
This repo contains implementations of different architectures for emotion recognition in conversations.
conversational-agents conversational-ai dialogue-systems emotion-analysis emotion-recognition emotion-recognition-in-conversation lstm memory-network natural-language-processing natural-language-understanding pretrained-models pytorch sentiment-analysis
Last synced: 01 Aug 2024
https://github.com/declare-lab/tango
Hosts a family of diffusion models for text-to-audio generation.
audio-generation diffusion diffusion-models language-models large-language-models text-to-audio
Last synced: 01 Aug 2024
https://github.com/declare-lab/MELD
MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
chatbot conversational-ai dialogue dialogue-systems emotion emotion-detection emotion-recognition emotion-recognition-in-conversation multimodal-emotion-recognition multimodal-interactions multimodal-sentiment-analysis personality-profiling personality-traits sentiment-analysis
Last synced: 08 Aug 2024
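MELD ships its annotations as CSV files of utterances labeled with speaker, emotion, and sentiment, grouped into multi-party dialogues. A minimal sketch of loading such annotations with the standard `csv` module; the column names and sample rows below are assumptions modeled on the dataset's described layout, not copied from the repo:

```python
import csv
import io
from collections import defaultdict

# Hypothetical MELD-style annotation rows; the real repository ships files
# such as train/dev/test CSVs whose exact column names should be verified.
SAMPLE = """Speaker,Utterance,Emotion,Sentiment,Dialogue_ID
Chandler,I was the point person on my company's transition,neutral,neutral,0
Interviewer,You must have had your hands full.,neutral,neutral,0
Chandler,That I did. That I did.,neutral,neutral,0
"""

def group_by_dialogue(csv_text):
    """Group utterances into dialogues, keeping speaker and emotion labels."""
    dialogues = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        dialogues[row["Dialogue_ID"]].append(
            (row["Speaker"], row["Utterance"], row["Emotion"])
        )
    return dict(dialogues)

dialogues = group_by_dialogue(SAMPLE)
```

Grouping by dialogue ID is the step that makes the data "conversational": models for emotion recognition in conversation consume whole dialogues, not isolated utterances.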
https://github.com/declare-lab/multimodal-deep-learning
This repository contains various models targeting multimodal representation learning and multimodal fusion for downstream tasks such as multimodal sentiment analysis.
multimodal-deep-learning multimodal-interactions multimodal-learning multimodal-sentiment-analysis
Last synced: 02 Aug 2024
https://github.com/declare-lab/instruct-eval
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
Last synced: 02 Aug 2024
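Quantitative evaluation on held-out tasks typically reduces to comparing model generations against gold references. A minimal exact-match scorer as an illustration; the normalization rules here are an assumption for the sketch, not instruct-eval's actual code:

```python
import string

def normalize(text):
    """Lowercase, drop punctuation, collapse whitespace (illustrative rules)."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())

def exact_match_score(predictions, references):
    """Fraction of predictions that match their reference after normalization."""
    matches = sum(
        normalize(p) == normalize(r) for p, r in zip(predictions, references)
    )
    return matches / len(references)

score = exact_match_score(["The answer is 42.", "Paris"], ["the answer is 42", "London"])
```

Here `score` is 0.5: the first prediction matches after normalization, the second does not.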
https://github.com/declare-lab/flan-alpaca
This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as Flan-T5.
alpaca flan-t5 language-model llm transformers
Last synced: 01 Aug 2024
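Alpaca-style instruction tuning serializes each synthetic example into a fixed prompt template before fine-tuning. A sketch of the widely used Stanford Alpaca template (the wording follows the original Alpaca release; applying it to Flan-T5-style seq2seq inputs is an assumption here):

```python
# Stanford Alpaca prompt templates: one variant for instructions that carry
# additional input context, one for instructions that stand alone.
PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
)
PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

def build_prompt(instruction, context=""):
    """Render one training example into the Alpaca prompt format."""
    if context:
        return PROMPT_WITH_INPUT.format(instruction=instruction, input=context)
    return PROMPT_NO_INPUT.format(instruction=instruction)

prompt = build_prompt("Summarize the text.", "Flan-T5 is an instruction-tuned model.")
```

During training, the model's target is the gold response appended after the `### Response:` marker; at inference, generation continues from that marker.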
https://github.com/declare-lab/flacuna
Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection we curated that encompasses a variety of tasks. Vicuna is already an excellent writing assistant; the intention behind Flacuna was to enhance Vicuna's problem-solving capabilities.
large-language-models llama transformer
Last synced: 02 Aug 2024