# awesome-SOTA-FER
A curated list of state-of-the-art facial expression recognition research, covering both 7-emotion classification and valence-arousal affect estimation.
https://github.com/kdhht2334/awesome-SOTA-FER
## 7-Emotion Classification <a id="seven-emotion"></a>
- Open-Set Facial Expression Recognition
- Facial Expression Recognition with Adaptive Frame Rate based on Multiple Testing Correction
- Leave No Stone Unturned: Mine Extra Knowledge for Imbalanced Facial Expression Recognition
- ASM: Adaptive Sample Mining for In-The-Wild Facial Expression Recognition
- POSTER: A Pyramid Cross-Fusion Transformer Network for Facial Expression Recognition
- LA-Net: Landmark-Aware Learning for Reliable Facial Expression Recognition under Label Noise
- Latent-OFER: Detect, Mask, and Reconstruct with Latent Vectors for Occluded Facial Expression Recognition
- Prompting Visual-Language Models for Dynamic Facial Expression Recognition
- MAE-DFER: Efficient Masked Autoencoder for Self-supervised Dynamic Facial Expression Recognition
- Rethinking the Learning Paradigm for Facial Expression Recognition
- Multi-Domain Norm-Referenced Encoding enables Data Efficient Transformer Learning of Facial Expression Recognition
- Revisiting Self-Supervised Contrastive Learning for Facial Expression Recognition
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition
- GaFET: Learning Geometry-aware Facial Expression Translation from In-The-Wild Images
- Rethinking Affect Analysis: A Protocol for Ensuring Fairness and Consistency
- LLDif: Diffusion Models for Low-light Emotion Recognition
- SynFER: Towards Boosting Facial Expression Recognition with Synthetic Data
- From Macro to Micro: Boosting Micro-Expression Recognition via Pre-Training on Macro-Expression Videos
- FacePhi: Light-Weight Multi-Modal Large Language Model for Facial Landmark Emotion Recognition
- Enhancing Zero-Shot Facial Expression Recognition by LLM Knowledge Transfer
- MSSTNET: A Multi-Scale Spatio-Temporal CNN-Transformer Network for Dynamic Facial Expression Recognition
- MIDAS: Mixing Ambiguous Data with Soft Labels for Dynamic Facial Expression Recognition
- Hard Sample-aware Consistency for Low-resolution Facial Expression Recognition
- Addressing Racial Bias in Facial Emotion Recognition
- Learning Deep Hierarchical Features with Spatial Regularization for One-Class Facial Expression Recognition
- Feature Representation Learning with Adaptive Displacement Generation and Transformer Fusion for Micro-Expression Recognition
- Uncertainty-aware Label Distribution Learning for Facial Expression Recognition
- POSTER V2: A simpler and stronger facial expression recognition network
- Learn From All: Erasing Attention Consistency for Noisy Label Facial Expression Recognition
- Learn-to-Decompose: Cascaded Decomposition Network for Cross-Domain Few-Shot Facial Expression Recognition
- Teaching with Soft Label Smoothing for Mitigating Noisy Labels in Facial Expressions
- Towards Semi-Supervised Deep Facial Expression Recognition with An Adaptive Confidence Margin
- Face2Exp: Combating Data Biases for Facial Expression Recognition
- Facial Expression Recognition By Using a Disentangled Identity Invariant Expression Representation (ICPR)
- Vision Transformer Equipped with Neural Resizer on Facial Expression Recognition Task
- A Prototype-Oriented Contrastive Adaption Network For Cross domain Facial Expression Recognition
- Soft Label Mining and Average Expression Anchoring for Facial Expression Recognition
- Analysis of Semi-Supervised Methods for Facial Expression Recognition
- TransFER: Learning Relation-aware Facial Expression Representations with Transformers
- Understanding and Mitigating Annotation Bias in Facial Expression Recognition
- Dive into Ambiguity: Latent Distribution Mining and Pairwise Uncertainty Estimation for Facial Expression Recognition
- Affective Processes: stochastic modelling of temporal context for emotion and facial expression recognition
- Identity-Aware Facial Expression Recognition Via Deep Metric Learning Based on Synthesized Images
- Relative Uncertainty Learning for Facial Expression Recognition
- Identity-Free Facial Expression Recognition using conditional Generative Adversarial Network
- Feature Decomposition and Reconstruction Learning for Effective Facial Expression Recognition
- Learning a Facial Expression Embedding Disentangled from Identity
- Affect2MM: Affective Analysis of Multimedia Content Using Emotion Causality
- A Circular-Structured Representation for Visual Emotion Distribution Learning
- Temporal Stochastic Softmax for 3D CNNs: An Application in Facial Expression Recognition
- Facial Expression Recognition in the Wild via Deep Attentive Center Loss
- Domain Generalisation for Apparent Emotional Facial Expression Recognition across Age-Groups
- Suppressing Uncertainties for Large-Scale Facial Expression Recognition
- Label Distribution Learning on Auxiliary Label Space Graphs for Facial Expression Recognition
- Graph Neural Networks for Image Understanding Based on Multiple Cues: Group Emotion Recognition and Event Recognition as Use Cases
- Detecting Face2Face Facial Reenactment in Videos
- ExpLLM: Towards Chain of Thought for Facial Expression Recognition
- Generalizable Facial Expression Recognition
- Norface: Improving Facial Expression Analysis by Identity Normalization
- Bridging the Gaps: Utilizing Unlabeled Face Recognition Datasets to Boost Semi-Supervised Facial Expression Recognition
## Facial Expression Manipulation and Synthesis [back-to-top](#seven-emotion) <a id="fem"></a>
- Efficient Emotional Adaptation for Audio-Driven Talking-Head Generation
- DisCoHead: Audio-and-Video-Driven Talking Head Generation by Disentangled Control of Head Pose and Facial Expressions
- Seeing What You Said: Talking Face Generation Guided by a Lip Reading Expert
- TransEditor: Transformer-Based Dual-Space GAN for Highly Controllable Facial Editing
- EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model
- Learning an Animatable Detailed 3D Face Model from In-The-Wild Images
- Information Bottlenecked Variational Autoencoder for Disentangled 3D Facial Expression Modelling
- Detection and Localization of Facial Expression Manipulations
- Talk-to-Edit: Fine-Grained Facial Editing via Dialog
- Audio-Driven Emotional Video Portraits
- GANmut: Learning Interpretable Conditional Space for Gamut of Emotions
- 3D Dense Geometry-Guided Facial Expression Synthesis by Adversarial Learning
- FACIAL: Synthesizing Dynamic Talking Face with Implicit Attribute Learning
- Cascade EF-GAN: Progressive Facial Expression Editing with Local Focuses
- Interpreting the Latent Space of GANs for Semantic Face Editing
- Towards Localized Fine-Grained Control for Facial Expression Generation
- 3D Facial Expressions through Analysis-by-Neural-Synthesis
- FlowVQTalker: High-Quality Emotional Talking Face Generation through Normalizing Flow and Quantization
- EMOPortraits: Emotion-enhanced Multimodal One-shot Head Avatars
- FG-EmoTalk: Talking Head Video Generation with Fine-Grained Controllable Facial Expressions
- EmoStyle: One-Shot Facial Expression Editing Using Continuous Emotion Parameters
- EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation
- EMMN: Emotional Motion Memory Network for Audio-driven Emotional Talking Face Generation
- Learning Adaptive Spatial Coherent Correlations for Speech-Preserving Facial Expression Manipulation
- FSRT: Facial Scene Representation Transformer for Face Reenactment from Factorized Appearance, Head-pose, and Facial Expression Features
- LipFormer: High-fidelity and Generalizable Talking Face Generation with A Pre-learned Facial Codebook
- SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
- Identity-Preserving Talking Face Generation with Landmark and Appearance Priors
- OTAvatar: One-shot Talking Face Avatar with Controllable Tri-plane Rendering
- High-fidelity Generalized Emotional Talking Face Generation with Multi-modal Emotion Space Learning
- EMOCA: Emotion Driven Monocular Face Capture and Animation
- 4D Facial Expression Diffusion Model
- Sparse to Dense Dynamic 3D Facial Expression Generation
- Neural Emotion Director: Speech-preserving semantic control of facial expressions in “in-the-wild” videos
- All You Need is Your Voice: Emotional Face Representation with Audio Perspective for Emotional Talking Face Generation
- AnimateMe: 4D Facial Expressions via Diffusion Models
## Valence-arousal Affect Estimation and Analysis [back-to-top](#seven-emotion) <a id="affect"></a>
- Learning from Label Relationships in Human Affect
- Are 3D Face Shapes Expressive Enough for Recognising Continuous Emotions and Action Unit Intensities?
- Optimal Transport-based Identity Matching for Identity-invariant Facial Expression Recognition
- Detail-Enhanced Intra- and Inter-modal Interaction for Audio-Visual Emotion Recognition
- Bridging the Gap: Protocol Towards Fair and Consistent Affect Analysis
- Emotion-aware Multi-view Contrastive Learning for Facial Emotion Recognition
- Inconsistency-Aware Cross-Attention for Audio-Visual Fusion in Dimensional Emotion Recognition
- CAGE: Circumplex Affect Guided Expression Inference
- Cross-Attention is Not Always Needed: Dynamic Cross-Attention for Audio-Visual Dimensional Emotion Recognition
- Contrastive Adversarial Learning for Person Independent Facial Emotion Recognition
- Estimating continuous affect with label uncertainty
- Factorized Higher-Order CNNs with an Application to Spatio-Temporal Emotion Estimation
- BReG-NeXt: Facial affect computing using adaptive residual networks with bounded gradient
- Ig3D: Integrating 3D Face Representations in Facial Expression Inference
- 3DEmo: for Portrait Emotion Recognition with New Dataset
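The work in this section builds on the circumplex model of affect (Russell's "A circumplex model of affect", cited later in this list), which represents emotion as a point on a two-dimensional valence-arousal plane rather than as one of seven discrete classes. As a minimal illustrative sketch, with the quadrant labels and emotion examples chosen here by convention rather than taken from any listed paper, continuous predictions can be coarsely summarized like this:

```python
import math

# Illustrative only: coarse quadrant labels for the valence-arousal plane.
# Affect estimation models regress continuous values; the emotion names
# below are conventional examples per quadrant, not a standard taxonomy.
QUADRANTS = {
    (True, True): "high-arousal positive (e.g. excited)",
    (False, True): "high-arousal negative (e.g. angry)",
    (False, False): "low-arousal negative (e.g. sad)",
    (True, False): "low-arousal positive (e.g. calm)",
}

def quadrant(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) pair in [-1, 1]^2 to a quadrant label."""
    if not (-1.0 <= valence <= 1.0 and -1.0 <= arousal <= 1.0):
        raise ValueError("valence and arousal are expected in [-1, 1]")
    return QUADRANTS[(valence >= 0.0, arousal >= 0.0)]

def intensity(valence: float, arousal: float) -> float:
    """Distance from the neutral origin; a crude strength-of-emotion proxy."""
    return math.hypot(valence, arousal)

print(quadrant(0.7, 0.6))             # high-arousal positive (e.g. excited)
print(round(intensity(0.6, 0.8), 2))  # 1.0
```

Most models in this section predict the continuous values directly; a mapping like this is only useful for quick qualitative inspection of predictions.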
## About Facial Privacy [back-to-top](#seven-emotion) <a id="privacy"></a>
- GANonymization: A GAN-based Face Anonymization Framework for Preserving Emotional Expressions
- Privacy-Preserving Face Recognition with Learnable Privacy Budgets in Frequency Domain
- Anonymization Prompt Learning for Facial Privacy-Preserving Text-to-Image Generation
- ϵ-Mesh Attack: A Surface-based Adversarial Point Cloud Attack for Facial Expression Recognition
- Walk as you feel: Privacy preserving emotion recognition from gait patterns
- Simulated adversarial testing of face recognition models
- Exploring frequency adversarial attacks for face forgery detection
- Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer
- AdverFacial: Privacy-Preserving Universal Adversarial Perturbation Against Facial Micro-Expression Leakages
- Lie to me: shield your emotions from prying software
- Point adversarial self-mining: A simple method for facial expression recognition
- Improving transferability of adversarial patches on face recognition with generative models
- Towards face encryption by generating adversarial identity masks
- Disentangled Representation with Dual-stage Feature Learning for Face Anti-spoofing
- DuetFace: Collaborative Privacy-Preserving Face Recognition via Channel Splitting in the Frequency Domain
## Emotion Recognition, Facial Representations, and Others [back-to-top](#seven-emotion) <a id="er-fr-o"></a>
- The Strong Pull of Prior Knowledge in Large Language Models and Its Impact on Emotion Recognition
- Distilling Privileged Multimodal Information for Expression Recognition using Optimal Transport
- EmoCLIP: A Vision-Language Method for Zero-Shot Video Facial Expression Recognition
- Deep Imbalanced Learning for Multimodal Emotion Recognition in Conversations
- An Empirical Study of Super-resolution on Low-resolution Micro-expression Recognition
- Decoupled Multimodal Distilling for Emotion Recognition
- Context De-confounded Emotion Recognition
- Norms of valence, arousal, and dominance for 13,915 English lemmas
- DrFER: Learning Disentangled Representations for 3D Facial Expression Recognition
- More is Better: A Database for Spontaneous Micro-Expression with High Frame Rates
- Evaluating and Inducing Personality in Pre-trained Language Models
- Generative Technology for Human Emotion Recognition: A Scope Review
- Multimodal Prompt Learning with Missing Modalities for Sentiment Analysis and Emotion Recognition
- AffectGPT: Dataset and Framework for Explainable Multimodal Emotion Recognition
- Towards Context-Aware Emotion Recognition Debiasing from a Causal Demystification Perspective via De-confounded Training
- Learning Emotion Representations from Verbal and Nonverbal Communication
- Multivariate, Multi-frequency and Multimodal: Rethinking Graph Neural Networks for Emotion Recognition in Conversation
- How you feelin’? Learning Emotions and Mental States in Movie Scenes
- Towards affective computing that works for everyone
- Emotional Listener Portrait: Realistic Listener Motion Simulation in Conversation
- Affective Image Filter: Reflecting Emotions from Text to Images
- EmoGen: Emotional Image Content Generation with Text-to-Image Diffusion Models
- A Unified and Interpretable Emotion Representation and Expression Generation
- Weakly-Supervised Emotion Transition Learning for Diverse 3D Co-speech Gesture Generation
- EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning
- Robust Emotion Recognition in Context Debiasing
- Region-Based Emotion Recognition via Superpixel Feature Pooling
- Emotion Recognition from the perspective of Activity Recognition
- GPT as Psychologist? Preliminary Evaluations for GPT-4V on Visual Affective Computing
- Beyond Accuracy: Fairness, Scalability, and Uncertainty Considerations in Facial Emotion Recognition
- Pre-training strategies and datasets for facial representation learning
- Multi-Dimensional, Nuanced and Subjective – Measuring the Perception of Facial Expressions
- General Facial Representation Learning in a Visual-Linguistic Manner
- Robust Egocentric Photo-realistic Facial Expression Transfer for Virtual Reality
- Fair Contrastive Learning for Facial Attribute Classification
- Quantified Facial Expressiveness for Affective Behavior Analytics
- Deep facial expression recognition: A survey
- iMiGUE: An Identity-free Video Dataset for Micro-Gesture Understanding and Emotion Analysis
- Emotions as overlapping causal networks of emotion components: Implications and methodological approaches
- Weakly Supervised Video Emotion Detection and Prediction via Cross-Modal Temporal Erasing Network
- Latent to Latent: A Learned Mapper for Identity Preserving Editing of Multiple Face Attributes in StyleGAN-generated Images
- FaceForensics++: Learning to Detect Manipulated Facial Images
- Graph-Structured Referring Expression Reasoning in The Wild
- EmotiCon: Context-Aware Multimodal Emotion Recognition using Frege’s Principle
- Learning Visual Emotion Representations from Web Data
- Computational Models of Emotion Inference in Theory of Mind: A Review and Roadmap
- Putting feelings into words: Affect labeling as implicit emotion regulation
- Affective cognition: Exploring lay theories of emotion
- Facial Expression Recognition: A Survey
- Facial expression and emotion
- Understanding face recognition
- A circumplex model of affect
- FSFM: A Generalizable Face Security Foundation Model via Self-Supervised Facial Representation Learning
- Affective Visual Dialog: A Large-Scale Benchmark for Emotional Reasoning Based on Visually Grounded Conversations
- Training A Small Emotional Vision Language Model for Visual Art Comprehension
- A survey on Graph Deep Representation Learning for Facial Expression Recognition
- Exploring Vision Language Models for Facial Attribute Recognition: Emotion, Race, Gender, and Age
- A Survey on Facial Expression Recognition of Static and Dynamic Emotions
- Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning [Demo](https://huggingface.co/spaces/ZebangCheng/Emotion-LLaMA)
## Challenges [back-to-top](#seven-emotion) <a id="challenges"></a>
## Tools [back-to-top](#seven-emotion) <a id="tools"></a>
- PyTorch
- Web
- Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks [TensorFlow](https://github.com/davidsandberg/facenet)
## Datasets [back-to-top](#seven-emotion) <a id="datasets"></a>
- FindingEmo: An Image Dataset for Emotion Recognition in the Wild
- EmoSet: A Large-scale Visual Emotion Dataset with Rich Attributes
- VEATIC: Video-based Emotion and Affect Tracking in Context Dataset
- MimicME: A Large Scale Diverse 4D Database for Facial Expression Analysis
- CelebV-HQ: A Large-Scale Video Facial Attributes Dataset [GitHub](https://github.com/CelebV-HQ/CelebV-HQ) [Demo](https://www.youtube.com/watch?v=Y0uxlUW4sW0)
- FERV39k: A Large-Scale Multi-Scene Dataset for Facial Expression Recognition in Videos
- MAFW: A Large-scale, Multi-modal, Compound Affective Database for Dynamic Facial Expression Recognition in the Wild
- __Aff-wild2__: Extending the aff-wild database for affect recognition
- __Aff-wild__: valence and arousal 'In-the-Wild' challenge
## Remarkable Papers (2019~) [back-to-top](#seven-emotion) <a id="previous"></a>
- Facial expression recognition from near-infrared videos
- A Compact Embedding for Facial Expression Similarity
- A Personalized Affective Memory Model for Improving Emotion Recognition
- Facial Expression Recognition by De-expression Residue Learning
- Joint pose and expression modeling for facial expression recognition
- Identity-Aware Convolutional Neural Network for Facial Expression Recognition
- Facenet2expnet: Regularizing a deep face recognition net for expression recognition
- Facial expression decomposition
## Facial Action Unit (AU) Detection (or Recognition) [back-to-top](#seven-emotion) <a id="au"></a>
- Causal intervention for subject-deconfounded facial action unit recognition
- Trend-Aware Supervision: On Learning Invariance for Semi-supervised Facial Action Unit Intensity Estimation
- FAN-Trans: Online Knowledge Distillation for Facial Action Unit Detection
- Knowledge-Driven Self-Supervised Representation Learning for Facial Action Unit Recognition
- Towards Accurate Facial Landmark Detection via Cascaded Transformers
- PIAP-DF: Pixel-Interested and Anti Person-Specific Facial Action Unit Detection Net with Discrete Feedback Learning
## Multi-modal, EEG-based Emotion Recognition [back-to-top](#seven-emotion) <a id="mm-er"></a>
- A Comprehensive Survey on EEG-Based Emotion Recognition: A Graph-Based Perspective
- Beyond Mimicking Under-Represented Emotions: Deep Data Augmentation with Emotional Subspace Constraints for EEG-Based Emotion Recognition
- A Brain-Inspired Way of Reducing the Network Complexity via Concept-Regularized Coding for Emotion Recognition