https://github.com/zabir-nabil/keras-human-pose
A simple wrapper to localize human joints from images/video frames for multiple subjects.
- Host: GitHub
- URL: https://github.com/zabir-nabil/keras-human-pose
- Owner: zabir-nabil
- License: MIT
- Created: 2019-11-21T06:28:49.000Z (almost 6 years ago)
- Default Branch: master
- Last Pushed: 2022-11-22T04:18:33.000Z (almost 3 years ago)
- Last Synced: 2023-02-27T21:13:54.506Z (over 2 years ago)
- Topics: human-pose, human-pose-estimation, keras-tensorflow, multi-subjects
- Language: Jupyter Notebook
- Size: 3.46 MB
- Stars: 9
- Watchers: 1
- Forks: 2
- Open Issues: 10
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# keras-human-pose
A simple wrapper to localize human joints from images/video frames for **multiple subjects**.
### Download the pre-trained weights
https://drive.google.com/file/d/1n-H_cvTHNldZuz08EE62WiVtqqXzemKq/view?usp=sharing
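The weights are shared as a Google Drive file. One way to fetch them programmatically (not part of this repo; a sketch assuming the `gdown` package is installed via `pip install gdown`) is:
```
# hypothetical download helper, not part of the repo
import gdown

url = 'https://drive.google.com/file/d/1n-H_cvTHNldZuz08EE62WiVtqqXzemKq/view?usp=sharing'
gdown.download(url, 'model.h5', fuzzy=True)  # fuzzy=True lets gdown resolve the share link
```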
### Install the dependencies
```
pip install -r requirements.txt
```
### Sample Use
```
# import all the necessary modules
import keras
from keras.models import Sequential
from keras.models import Model
from keras.layers import Input, Dense, Activation, Lambda
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras.layers.merge import Concatenate
import scipy
import math

import cv2
import matplotlib
import pylab as plt
import numpy as np

# import the pose estimation model and processing module
from PoseEstimationModel import *
from PoseEstimationProcessing import *

# load the pre-trained weights and create the model
pe = PoseEstimationModel('model.h5')
pemodel = pe.create_model()

# load a test image (OpenCV reads in B,G,R order)
test_image = 'test.jpg'
oriImg = cv2.imread(test_image)
print(oriImg.shape)
plt.imshow(oriImg[:, :, [2, 1, 0]])

# run inference and extract subject-wise joint locations
processor = PoseEstimationProcessing()                 # load the processor
shared_pts = processor.shared_points(pemodel, oriImg)  # shared points across multiple subjects
subject_wise_loc = processor.subject_points(shared_pts)
subject_wise_loc = np.array(subject_wise_loc)
```
* `processor.subject_points()` returns an array with shape (body part, subject, X coordinate, Y coordinate); see the indexing sketch after this list.
* Body parts mapping => [nose, neck, Rsho, Relb, Rwri, Lsho, Lelb, Lwri, Rhip, Rkne, Rank, Lhip, Lkne, Lank, Leye, Reye, Lear, Rear, pt19]
* If a joint is missing for a subject, its coordinates are (-1, -1).
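A minimal sketch (not part of the repo) of how the returned array could be consumed, assuming `subject_wise_loc[part][subject]` holds an (X, Y) pair as described above; `PART_NAMES` is a hypothetical helper list built from the mapping:
```
# hypothetical usage: print every detected joint per subject
PART_NAMES = ['nose', 'neck', 'Rsho', 'Relb', 'Rwri', 'Lsho', 'Lelb', 'Lwri',
              'Rhip', 'Rkne', 'Rank', 'Lhip', 'Lkne', 'Lank', 'Leye', 'Reye',
              'Lear', 'Rear', 'pt19']

for part_idx, part in enumerate(subject_wise_loc):
    for subject_idx, (x, y) in enumerate(part):
        if x == -1 and y == -1:   # joint not detected for this subject
            continue
        print(f'subject {subject_idx}: {PART_NAMES[part_idx]} at ({x}, {y})')
```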
### References
Most of the code is taken from https://github.com/michalfaber/keras_Realtime_Multi-Person_Pose_Estimation and rewritten to simplify the pipeline and extract the subject-wise data points.